entry_id (string, length 33) | published (string, length 14) | title (string, length 17–188) | authors (sequence) | primary_category (string, length 5–18) | categories (sequence) | text (string, length 2–629k)
---|---|---|---|---|---|---
http://arxiv.org/abs/2307.07523v1 | 20230710110551 | PapagAI:Automated Feedback for Reflective Essays | [
"Veronika Solopova",
"Adrian Gruszczynski",
"Eiad Rostom",
"Fritz Cremer",
"Sascha Witte",
"Chengming Zhang",
"Fernando Ramos López Lea Plößl",
"Florian Hofmann",
"Ralf Romeike",
"Michaela Gläser-Zikuda",
"Christoph Benzmüller",
"Tim Landgraf"
] | cs.AI | [
"cs.AI",
"cs.CL"
] |
PapagAI: Automated Feedback for Reflective Essays

V. Solopova et al.

Freie Universität Berlin, Germany; Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany; Otto-Friedrich-Universität Bamberg, Germany

Veronika Solopova^1 (0000-0003-0183-9433), Eiad Rostom^1, Fritz Cremer^1, Adrian Gruszczynski^1, Sascha Witte^1, Chengming Zhang^2 (0009-0007-8695-5455), Fernando Ramos López^1, Lea Plößl^2 (0009-0004-7290-5068), Florian Hofmann^2, Ralf Romeike^1 (0000-0002-2941-4288), Michaela Gläser-Zikuda^2 (0000-0002-3071-2995), Christoph Benzmüller^{2,3} (0000-0002-3392-3093), Tim Landgraf^1 (0000-0003-4951-5235)

August 12, 2023
Written reflective practice is a regular exercise pre-service teachers perform during their higher education.
Usually, their lecturers are expected to provide individual feedback, which can be a challenging task to perform on a regular basis. In this paper, we present the first open-source automated feedback tool based on didactic theory and implemented as a hybrid AI system. We describe the components and discuss the advantages and disadvantages of our system compared to state-of-the-art generative large language models. The main objective of our work is to enable better learning outcomes for students and to complement the teaching activities of lecturers.
§ INTRODUCTION
Dropout rates as high as 83% among pre-service teachers and associated teacher shortages are challenging the German education system <cit.>. This may be due to learning environments not adequately supporting prospective teachers in their learning process <cit.>. Written reflective practice may alleviate the problem: By reflecting on what has been learned and what could be done differently in the future, individuals can identify areas for improvement. However, instructors may be overburdened by giving feedback to 200+ students on a weekly basis. With the rise of large language models (LLMs, <cit.>), automated feedback may provide welcome relief. Students could iteratively improve their reflection based on the assessment of a specialized model and, through that, their study performance. Instructors could supervise this process and invest the time saved in improving the curriculum. While current research is seeking solutions to align the responses of LLMs with a given set of rules, it is currently impossible to guarantee that the output of a purely learnt model is correct. Here, we propose “PapagAI”, a platform to write reflections and receive feedback from peers, instructors and a specialized chatbot. PapagAI uses a combination of ML and symbolic components, an approach known as hybrid AI <cit.>. Our architecture is based on various natural language understanding modules[All ML models are available in our OSF repository (https://osf.io/ytesn/), while linguistic processing code can be shared upon request.], which serve to create a text and user profile, according to which a rule-based reasoner chooses the appropriate instructions.
§ RELATED WORK
PapagAI employs a number of models for detecting the topics contained in, and assessing the quality and depth of, a reflection, as well as for detecting the sentiment and emotions of the author. While extensive previous work has been published on each of these tasks, implementations in German are rare. To our knowledge, there is no previous work that combines all of them in one application. Automated detection of reflective sentences and components in a didactic context has been described previously <cit.>. In <cit.>, e.g., the authors analyse the depth of a reflection at the text level according to a three-level scheme (none, shallow, deep). Document-level prediction, however, can only provide coarse-grained feedback. Liu et al. <cit.>, in contrast, use the same three levels to predict reflective depth for each sentence.
In emotion detection, all previous works focus on a small set of 4 to 6 basic emotions. Jena <cit.>, e.g., describes detecting students' emotions in a collaborative learning environment. Batbaatar et al. <cit.> describe an emotion model achieving an F1 score of 0.95 for the six basic emotions scheme proposed by Ekman <cit.>. Chiorrini et al. <cit.> use a pre-trained BERT to detect four basic emotions and their intensity from tweets, achieving an F1 score of 0.91. We did not find published work on the German language, except for Cevher et al. <cit.>, who focused on newspaper headlines. With regard to sentiment polarity, several annotated corpora were developed for German <cit.>, mainly containing tweets. Guhr et al. <cit.> use these corpora to fine-tune a BERT model. Shashkov et al. <cit.> employ sentiment analysis and topic modelling to relate student sentiment to particular topics in English. Identifying topics in reflective student writing is studied by Chen et al. <cit.> using the MALLET toolkit <cit.> and by De Lin et al. <cit.> with Word2Vec + K-Means clustering. The techniques in these studies are less robust than the current state-of-the-art, such as ParlBERT-Topic-German <cit.> and BERTopic <cit.>. Overall, published work on automated feedback for student reflections is scarce, the closest and most accomplished being AcaWriter <cit.> and the works by Liu and Shum <cit.>. They use linguistic techniques to identify sentences that communicate a specific rhetorical function. They also implement a 5-level reflection depth scheme and extract the parts of the text describing the context, challenge and change. The feedback guides the students to the next level of reflective depth with a limited number of questions. In their user study, 85.7% of students perceived the tool positively. However, the impact on reflection quality over time was not measured and remains unclear.
§ METHODS, COMPONENTS AND PERFORMANCES
Data collection. Our data comes from the German Reflective Corpus <cit.>. The dataset contains reflective essays collected via Google Forms from computer science and ethics of AI students in German, as well as e-portfolio diaries describing school placements of teacher trainees from Dundee University.
For tasks such as reflective level identification and topic modelling, we enlarged it with essays by computer science education students and reflections by pedagogy students[This still unpublished data can be obtained upon request.]. The combined dataset consists of reflections written by computer science, computer science education, didactics and ethics of AI students in German and English. The data are highly varied, as didactics students write longer and deeper reflections than, e.g., their computer science peers.
Emotion detection. Setting out from the Plutchik wheel of basic emotions <cit.>, we realised during the annotation process that many of the basic emotions are never used, while other states are relevant to our data and the educational context (e.g. confidence, motivation). We framed the task as a multi-label classification problem at the sentence level and annotated 6543 sentences with 4 annotators.
The final number of labels is 17 emotions, with the 18th label being 'no-emotion'.
We calculated the loss using binary cross-entropy: each label is treated as an independent binary classification problem, the loss is calculated for each label separately, and the per-label losses are summed to obtain the total loss. We achieved the best results with a pre-trained RoBERTa <cit.>, with a micro F1 of 0.70 and a Hamming score of 0.67 across all emotion labels. The model achieved the highest scores for “surprise”, “approval” and “interest”. With a lenient Hamming score, which accounts for the model choosing similar emotions (e.g. disappointment instead of disapproval), our model achieves up to 0.73.
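A minimal sketch of this multi-label loss using PyTorch's BCEWithLogitsLoss; the tensor shapes and model head are illustrative, not the authors' training code:

```python
import torch
import torch.nn as nn

NUM_LABELS = 18  # 17 emotions plus "no-emotion"

# Each label is an independent binary problem; the per-label losses are summed.
criterion = nn.BCEWithLogitsLoss(reduction="sum")

# Hypothetical batch of sentence logits from a RoBERTa-style classification head.
logits = torch.randn(4, NUM_LABELS)                     # (batch, labels)
targets = torch.randint(0, 2, (4, NUM_LABELS)).float()  # multi-hot gold labels

loss = criterion(logits, targets)

# At inference time each label is thresholded independently.
predictions = (torch.sigmoid(logits) > 0.5).int()
```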
Gibbs cycle. The Gibbs cycle <cit.> illustrates the cognitive stages needed for optimal reflective results. It includes 6 phases: description, feelings, evaluation, analysis, conclusion and future plans. We annotated both the highest phase present in a sentence and all the phases present.
We treated this as a multi-class classification problem and used a pre-trained ELECTRA model. During evaluation, we compared the one-hot prediction to the highest phase present, and the three top-probability classes to all the phases present. While one-hot matching only scored 65% macro F1, the top-3 predictions achieve up to 98% macro and micro F1.
Reflective level detection. Under the supervision of didactics specialists, two annotators labelled 600 texts according to Fleck & Fitzpatrick's scheme <cit.>, achieving a moderate inter-annotator agreement of 0.68. The coding scheme includes 5 levels: description, reflective description, dialogical reflection, transformative reflection and critical reflection. With 70% of the data used for training and 30% for evaluation, we used pre-trained BERT large and complete document embeddings for English and German, resulting in a QWK score of 0.71 in cross-validation.
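The quadratically weighted kappa (QWK) reported above can be computed with scikit-learn; a minimal sketch with illustrative labels, not the paper's data:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative gold vs. predicted reflective levels (1 = description ... 5 = critical reflection).
y_true = [1, 2, 2, 3, 4, 5, 3, 2]
y_pred = [1, 2, 3, 3, 4, 4, 3, 2]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK: {qwk:.2f}")
```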
Topic modelling. We used BERTopic <cit.> on the sentence level. First, we tokenize and normalize the input sequence to lowercase and filter out numbers, punctuation and stop-words using the nltk library <cit.>. Then, we extract embeddings with BERT, reduce their dimensionality with UMAP, cluster the reduced embeddings with HDBSCAN, create topic representations with tf-idf, and fine-tune the topic representations with the BERT model. Because we have a large amount of data from different sources, we created two clusterings: one more specific to pedagogy topics and one covering various educational topics. Our clusters are shown in the Appendix.
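A minimal sketch of the BERTopic pipeline outlined above; the sentence-transformer model (standing in for the BERT embedder), the UMAP/HDBSCAN settings, and the dummy corpus are assumptions, since the exact configuration is not given in the paper:

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN

# Placeholder corpus: replace with the real preprocessed (lowercased, stop-word-filtered) sentences.
sentences = [f"reflexionssatz nummer {i}" for i in range(1000)]

embedding_model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
umap_model = UMAP(n_neighbors=15, n_components=5, metric="cosine")
hdbscan_model = HDBSCAN(min_cluster_size=10, prediction_data=True)

topic_model = BERTopic(
    embedding_model=embedding_model,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
)
topics, probs = topic_model.fit_transform(sentences)
```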
Linguistic scoring. Using spaCy[https://spacy.io], we tokenize and lemmatize the sentences and extract dependency parses and parts of speech. Additionally, we used RFTagger <cit.> for parts of speech and verb types. We extract the sentence length, adverb-to-verb ratio, adjective-to-noun ratio, number of simple and complex sentences, types of subordinate clauses and number of discourse connectors[We use the Connective-Lex list for German: https://doi.org/10.4000/discours.10098.] used. This information enables us to determine the reflection's length, the expressivity and variability of the language, as well as surface coherence and structure.
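A minimal sketch of the kind of surface features described above, using spaCy; the German model name, the complex-sentence heuristic, and the feature selection are assumptions, and the RFTagger and connective-list steps are not included:

```python
import spacy

nlp = spacy.load("de_core_news_sm")  # assumed German pipeline; PapagAI's exact model is not stated

def linguistic_features(text):
    """Illustrative surface features of the kind used for linguistic scoring."""
    doc = nlp(text)
    sents = list(doc.sents)
    tokens = [t for t in doc if not t.is_punct]
    n_verbs = sum(t.pos_ == "VERB" for t in tokens)
    n_nouns = sum(t.pos_ == "NOUN" for t in tokens)
    n_adverbs = sum(t.pos_ == "ADV" for t in tokens)
    n_adjectives = sum(t.pos_ == "ADJ" for t in tokens)
    # Heuristic: a sentence containing a subordinating marker counts as complex.
    n_complex = sum(any(t.dep_ == "mark" for t in s) for s in sents)
    return {
        "n_sentences": len(sents),
        "avg_sentence_length": len(tokens) / max(len(sents), 1),
        "adverb_verb_ratio": n_adverbs / max(n_verbs, 1),
        "adjective_noun_ratio": n_adjectives / max(n_nouns, 1),
        "n_complex_sentences": n_complex,
        "n_simple_sentences": len(sents) - n_complex,
    }

print(linguistic_features("Heute habe ich viel gelernt, weil die Stunde gut geplant war."))
```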
§ SYSTEM ARCHITECTURE
In PapagAI (see Fig. <ref>)
the input text of the reflection is received from the AWS server through a WebSocket listener script. To minimize the response time, the models are loaded once in the listener script, and user requests then spawn threads with the models already loaded. If the input text is shorter than three sentences or contains forbidden sequences, the processing does not start and the user receives a request to revise their input. Otherwise, the text is segmented into sentences and tokens. The language is identified using langid <cit.>, and if the text is not in German, it is translated using a Google Translate API implementation.[https://pypi.org/project/deep-translator/] The reflective level model receives the whole text, while the other models are fed with the segmented sentences. Topic modelling and Gibbs cycle results are mapped to identify whether topics were well reflected upon: if more than three sentences are allocated to a topic and these sentences were identified by the Gibbs cycle model as analysis, we consider the topic well thought through. The extracted features are then passed to the feedback module. Here, the lacking and under-represented elements among the linguistic features and the three least present Gibbs cycle stages are identified. If sentiment and emotions are all positive, we conclude that no potential challenges and problems were thought through. If the sentiment and emotions are all negative, we want to induce optimism. These features, together with the reflective level, are mapped to the database of potential prompts and questions, where one of the suitable feedback options is chosen randomly for the sake of variability. Using manually corrected GPT-3 outputs, we created variations of each prompt so that the feedback does not repeat often even if the same prompts are required. The selected textual prompts are assembled in a rule-based way into a template prepared for German, Spanish and English; for other input languages, the overall feedback is generated in German and then translated into the input language. The textual feedback and a vector of extracted features for visual representation are sent back to the AWS server. The whole processing takes from 15 to 30 seconds, depending on the length of the text. Sample feedback can be seen in Figure <ref>.
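A minimal sketch of the input gating and language handling described above; the forbidden-sequence list and the function structure are illustrative assumptions, not the PapagAI listener code:

```python
import langid
from deep_translator import GoogleTranslator

FORBIDDEN_SEQUENCES = ["<script", "http://"]  # illustrative placeholder list
MIN_SENTENCES = 3

def prepare_text(text, sentences):
    """Return German text ready for the models, or None if the input is rejected."""
    if len(sentences) < MIN_SENTENCES or any(seq in text for seq in FORBIDDEN_SEQUENCES):
        return None  # the user is asked to revise the input
    lang, _score = langid.classify(text)
    if lang != "de":
        text = GoogleTranslator(source=lang, target="de").translate(text)
    return text
```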
§ COMPARISON WITH GPT-3
We compared our emotions detection (fine-tuned RoBERTa) and Gibbs cycle model (fine-tuned Electra) with the prompt-engineered state-of-the-art generative model Davinci <cit.> on the same task. For the evaluation and comparison, we used a small subset of 262
samples which were not part of the training.
We first tried the zero-shot approach, where we described our labels to GPT-3 and gave it our sentence to predict. Then, we tried a one-shot approach, providing GPT-3 with one example sentence for each label. Finally, in the few-shot approach, we provided GPT-3 with three examples per label, which is the maximum number possible due to the input sequence length restriction. Although the task requested GPT-3 to pick multiple labels out of the possible options, the model predicted multiple labels in only 5% of the cases for emotions. For this reason, we used a custom-defined “one correct label” score: a prediction is considered correct if it contains at least one of the sentence's true labels.
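The custom score can be sketched as follows (an illustration, not the authors' evaluation code):

```python
def one_correct_label_score(predictions, gold_labels):
    """Fraction of samples whose prediction contains at least one true label."""
    hits = sum(bool(set(pred) & set(gold)) for pred, gold in zip(predictions, gold_labels))
    return hits / len(gold_labels)

# Two illustrative sentences with their true and predicted emotion label sets.
gold = [{"interest", "surprise"}, {"disapproval"}]
pred = [{"interest"}, {"disappointment"}]
print(one_correct_label_score(pred, gold))  # 0.5
```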
The zero-shot approach achieved only 0.28 accuracy in predicting one correct label for emotions. The model predicted the labels “information”, “uncertainty”, “interest”, and “motivated” for the
majority of the sentences. On the Gibbs cycle task, it achieved 80% correct predictions. Providing one example per label improved the performance for emotions noticeably, by 18% (0.46), and the model was able to detect emotions like “confidence”, “challenged”, and “approval” more accurately. It did not influence the Gibbs cycle performance. Increasing the number of examples to three resulted in a slight improvement of 3% (0.49) for emotions, and 7% (0.87) for the Gibbs cycle. However, the best-scoring approaches did not match the performance of our fine-tuned models on these specific tasks, which reach 0.81 on the same custom metric for emotion detection and 0.98 for the Gibbs cycle.
§ DISCUSSION AND CONCLUSION
The current PapagAI system has several advantages in comparison to generative LLMs. It ensures transparency of the evaluation and control over the output, which is based exclusively on didactic theory. Although LLMs show huge promise, they are still prone to hallucination <cit.>, and, as we have shown in <ref>, they may under-perform on difficult cognitive tasks in comparison to smaller language models fine-tuned for the task. The fine-tuning of LLMs on didactic books and instructions, which we plan for our future work, still does not guarantee 100% theoretical soundness of the output, which is problematic, e.g., in the case of pre-service teachers with statistically low AI acceptance. At the same time, the newest models, such as GPT-4, are only available through APIs, which raises concerns about data privacy, especially as the data in focus is an intimate reflective diary. Moreover, current open-source models such as GPT-J and GPT-2 do not achieve comparable results, especially for languages other than English. Our architecture has, however, obvious drawbacks. First, our models do not reach 100% accuracy, and this can naturally lead to suboptimal feedback. Second, the processing time of running many models, especially for longer texts, can be significantly higher than for a single generative LLM. For now, as we provide one feedback message per rather long reflection, this is not a big issue; however, if we implement a dialogue form, the response time would not feel natural. Finally, the variability of the output of our approach is much more limited in comparison to generative models. We try to address this by creating many similar versions of each instruction, rephrased by GPT-3 and corrected manually. On average, 7 out of 10 prompts needed some correction. Most errors were related to GPT-3 trying to rephrase the given sentence using synonyms that were not didactically appropriate in the given context.
Future work will focus, among other things, on user studies to understand how we can optimize the feedback so that users find it credible and useful while their reflective skills advance. We also plan a more detailed evaluation based on more user data. We hope that our work will contribute to the optimization of pre-service teachers' reflective practice and self-guided learning experience.
§ APPENDIXES
|
http://arxiv.org/abs/2307.07429v1 | 20230714154831 | Variational dynamics of open quantum systems in phase space | [
"Debbie Eeltink",
"Filippo Vicentini",
"Vincenzo Savona"
] | quant-ph | [
"quant-ph",
"physics.comp-ph",
"physics.data-an"
] |
|
http://arxiv.org/abs/2307.06238v2 | 20230712152744 | Astrometric Calibration and Performance of the Dark Energy Spectroscopic Instrument Focal Plane | [
"S. Kent",
"E. Neilsen",
"K. Honscheid",
"D. Rabinowitz",
"E. F. Schlafly",
"J. Guy",
"D. Schlegel",
"J. Garcia-Bellido",
"T. S. Li",
"E. Sanchez",
"Joseph Harry Silber",
"J. Aguilar",
"S. Ahlen",
"D. Brooks",
"A. de la Macorra",
"P. Doel",
"D. J. Eisenstein",
"K. Fanning",
"A. Font-Ribera",
"J. E. Forero-Romero",
"J. Jimenez",
"Anthony Kremin",
"M. Landriau",
"Michael E. Levi",
"Paul Martini",
"Aaron M. Meisner",
"R. Miquel",
"J. Moustakas",
"Jundan Nie",
"N. Palanque-Delabrouille",
"W. J. Percival",
"C. Poppett",
"G. Rossi",
"M. Schubnell",
"Gregory Tarle",
"B. A. Weaver",
"Rongpu Zhou",
"Zhimin Zhou",
"H. Zou"
] | astro-ph.IM | [
"astro-ph.IM"
] |
Stephen Kent
[email protected]
Fermi National Accelerator Laboratory,
PO Box 500,
Batavia,
IL 60510,
USA
Department of Astronomy and Astrophysics,
University of Chicago,
5640 South Ellis Avenue,
Chicago,
IL 60637,
USA
Fermi National Accelerator Laboratory,
PO Box 500,
Batavia,
IL 60510,
USA
Center for Cosmology and AstroParticle Physics,
The Ohio State University,
191 West Woodruff Avenue,
Columbus,
OH 43210,
USA
Department of Physics,
The Ohio State University,
191 West Woodruff Avenue,
Columbus,
OH 43210,
USA
Physics Department,
Yale University,
P.O. Box 208120,
New Haven,
CT 06511,
USA
Space Telescope Science Institute,
3700 San Martin Drive,
Baltimore,
MD 21218,
USA
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Instituto de Física Teórica (IFT) UAM/CSIC,
Universidad Autónoma de Madrid,
Cantoblanco,
E-28049,
Madrid,
Spain
Department of Astronomy & Astrophysics,
University of Toronto,
Toronto,
ON M5S 3H4,
Canada
CIEMAT,
Avenida Complutense 40,
E-28040 Madrid,
Spain
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Physics Dept.,
Boston University,
590 Commonwealth Avenue,
Boston,
MA 02215,
USA
Department of Physics & Astronomy,
University College London,
Gower Street,
London,
WC1E 6BT,
UK
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Instituto de Física,
Universidad Nacional Autónoma de México,
Cd. de México C.P. 04510,
México
Department of Physics & Astronomy,
University College London,
Gower Street,
London,
WC1E 6BT,
UK
Center for Astrophysics | Harvard & Smithsonian,
60 Garden Street,
Cambridge,
MA 02138,
USA
The Ohio State University,
Columbus,
43210 OH,
USA
Institut de Física d’Altes Energies (IFAE),
The Barcelona Institute of Science and Technology,
Campus UAB,
08193 Bellaterra Barcelona,
Spain
Departamento de Física,
Universidad de los Andes,
Cra. 1 No. 18A-10,
Edificio Ip,
CP 111711,
Bogotá,
Colombia
Observatorio Astronómico,
Universidad de los Andes,
Cra. 1 No. 18A-10,
Edificio H,
CP 111711 Bogotá,
Colombia
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Institut de Física d’Altes Energies (IFAE),
The Barcelona Institute of Science and Technology,
Campus UAB,
08193 Bellaterra Barcelona,
Spain
Department of Physics and Astronomy,
University of California,
Irvine,
92697,
USA
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Sorbonne Université,
CNRS/IN2P3,
Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE),
FR-75005 Paris,
France
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
IRFU,
CEA,
Université Paris-Saclay,
F-91191 Gif-sur-Yvette,
France
Departament de Física,
Universitat Autònoma de Barcelona,
08193 Bellaterra (Barcelona),
Spain
Institut de Física d’Altes Energies (IFAE),
The Barcelona Institute of Science and Technology,
Campus UAB,
08193 Bellaterra Barcelona,
Spain
Center for Cosmology and AstroParticle Physics,
The Ohio State University,
191 West Woodruff Avenue,
Columbus,
OH 43210,
USA
Department of Astronomy,
The Ohio State University,
4055 McPherson Laboratory,
140 W 18th Avenue,
Columbus,
OH 43210,
USA
The Ohio State University,
Columbus,
43210 OH,
USA
NSF's NOIRLab,
950 N. Cherry Ave.,
Tucson,
AZ 85719,
USA
Institució Catalana de Recerca i Estudis Avançats,
Passeig de Lluís Companys,
23,
08010 Barcelona,
Spain
Institut de Física d’Altes Energies (IFAE),
The Barcelona Institute of Science and Technology,
Campus UAB,
08193 Bellaterra Barcelona,
Spain
Department of Physics and Astronomy,
Siena College,
515 Loudon Road,
Loudonville,
NY 12211,
USA
National Astronomical Observatories,
Chinese Academy of Sciences,
A20 Datun Rd.,
Chaoyang District,
Beijing,
100012,
P.R. China
IRFU,
CEA,
Université Paris-Saclay,
F-91191 Gif-sur-Yvette,
France
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Department of Physics and Astronomy,
University of Waterloo,
200 University Ave W,
Waterloo,
ON N2L 3G1,
Canada
Perimeter Institute for Theoretical Physics,
31 Caroline St. North,
Waterloo,
ON N2L 2Y5,
Canada
Waterloo Centre for Astrophysics,
University of Waterloo,
200 University Ave W,
Waterloo,
ON N2L 3G1,
Canada
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
Space Sciences Laboratory,
University of California,
Berkeley,
7 Gauss Way,
Berkeley,
CA 94720,
USA
University of California,
Berkeley,
110 Sproul Hall #5800 Berkeley,
CA 94720,
USA
Department of Physics,
Kansas State University,
116 Cardwell Hall,
Manhattan,
KS 66506,
USA
Center for Cosmology and AstroParticle Physics,
The Ohio State University,
191 West Woodruff Avenue,
Columbus,
OH 43210,
USA
Department of Astronomy,
The Ohio State University,
4055 McPherson Laboratory,
140 W 18th Avenue,
Columbus,
OH 43210,
USA
The Ohio State University,
Columbus,
43210 OH,
USA
Department of Physics and Astronomy,
Sejong University,
Seoul,
143-747,
Korea
Department of Physics,
University of Michigan,
Ann Arbor,
MI 48109,
USA
University of Michigan,
Ann Arbor,
MI 48109,
USA
Department of Physics & Astronomy,
Ohio University,
Athens,
OH 45701,
USA
University of Michigan,
Ann Arbor,
MI 48109,
USA
NSF's NOIRLab,
950 N. Cherry Ave.,
Tucson,
AZ 85719,
USA
Lawrence Berkeley National Laboratory,
1 Cyclotron Road,
Berkeley,
CA 94720,
USA
National Astronomical Observatories,
Chinese Academy of Sciences,
A20 Datun Rd.,
Chaoyang District,
Beijing,
100012,
P.R. China
National Astronomical Observatories,
Chinese Academy of Sciences,
A20 Datun Rd.,
Chaoyang District,
Beijing,
100012,
P.R. China
The Dark Energy Spectroscopic Instrument (DESI), consisting of
5020 robotic fiber positioners and associated systems on the Mayall
telescope at Kitt Peak, Arizona, is carrying out a survey to measure the
spectra of 40 million galaxies and quasars and produce the largest 3D map
of the universe to date. The primary science goal is to use baryon acoustic
oscillations to measure the expansion
history of the universe and the time evolution of dark energy.
A key function of the online control system is to position each fiber
on a particular target in the focal plane
with an accuracy of 11μm rms 2-D.
This paper describes the set of software
programs used to perform this function along with the
methods used to validate their performance.
§ INTRODUCTION
The Dark Energy Spectroscopic Instrument (DESI) is conducting an
optical/near-infrared survey of 40 million galaxies
and quasars in order to answer cosmological questions about the nature of
dark energy in the universe, with the goal of performing the most precise
measurement of the expansion history of the universe ever obtained
<cit.>.
DESI will use the Baryon Acoustic Oscillation (BAO)
scale to determine the distance-redshift relationship from the
recent universe to approximately redshift 3.5. In addition to the
expansion history and dark energy, DESI will also
measure the growth of cosmic structure, provide new information on the
sum of the neutrino masses, study the scale
dependence of primordial density fluctuations from inflation,
test potential modifications to the general theory of relativity,
and map the stellar halo of the Milky Way.
The survey began on 2021 May 14 and is expected to run for 5 years.
The DESI instrument <cit.>
is mounted at the prime focus of the Mayall 4-meter
telescope located on Kitt Peak in Arizona. A 6-element optical corrector
provides a 3.2 degree diameter field of view <cit.>.
An integrated atmospheric
dispersion compensator (ADC) provides correction for atmospheric chromatic
effects up to an airmass of about 2.2 (zenith distance of 63 deg).
The DESI focal plane comprises an array of 5020 optical fibers, each
mounted to a 2-axis robotic positioner,
along with ten "guide/focus assembly" (GFA) charge-coupled device (CCD) sensors
used for guiding or wavefront sensing <cit.>.
Each fiber feeds light from an astronomical target
to one of ten three-channel spectrographs that cover the wavelength
range 0.36μm – 0.98μm <cit.>. Light-emitting diode (LED)
light sources in
each spectrograph can be used to back-illuminate the fiber tips.
Additionally, 120 “fiducials” (fixed light sources) are distributed about the
focal plane to act as positional references.
A “fiber view camera” (FVC) <cit.> is mounted near the
vertex of the primary mirror and takes images of
the fiducials and back-illuminated
fiber tips of the focal plane through the
corrector. These images are used to measure the location of the positioner
fibers relative to the fiducials and determine the necessary corrections
needed to position the fibers at the locations
of targets in a given field on the sky. Spectrograph
integration times on a field
are typically 10-15 minutes. For efficiency, short setup times (no more than
2 minutes, including telescope slew time, for fields nearby on the sky)
are desirable.
Accurate fiber positioning requires that the focal plane be calibrated
astrometrically. In a nutshell, this calibration occurs in two
steps (Fig. <ref>).
First, the GFA guide
CCDs obtain images of stars from the Gaia DR2 catalog
<cit.>, which have high-accuracy astrometric coordinates.
Two nearby co-mounted “guider fiducials” (GIFs),
are calibrated astrometrically, making use of previously obtained laboratory
metrology that ties CCD pixels to the GIFs.
Second, an FVC image of the fiducials and back-illuminated
positioner fibers is obtained.
Using the GIFs as surrogate astrometric standards, the FVC CCD
image is calibrated astrometrically, giving the sky position of each
pixel. The positioners are commanded to move as appropriate to place
the image of each back-illuminated positioner
fiber on the desired pixel position of a target.
The main software program used to accomplish this calibration and
positioning of target fibers is
called “PlateMaker” (PM; the name is a throwback
to previous multi-object focal plane systems that used plug-plates to position
fibers - e.g., <cit.>).
FVC CCD images are analyzed with a separate program called
“spotmatch,” which derives the pixel coordinates of the fiducials and
positioner fibers. A third program, the “turbulence correction” code
(part of the “desimeter” package) is used to measure and derive
corrections to FVC images due to the effects of turbulence in the air
column between the FVC and the focal plane. A fourth program,
the “dither analysis” code, is
used offline to refine the optical model used in PM by analyzing
a special set of “dither” exposures of a field of bright astrometric
standard stars.
The functioning of these programs was
described briefly in <cit.>, and their integration with
the instrument control system was described in <cit.>.
Modeling of the telescope optics is done using another
offline program “trace”.
The purpose of this paper is to discuss the operation and performance
of these programs,
particularly PlateMaker, in much greater depth.
The organization is as follows. Section <ref>
describes
the hardware systems that are involved and the nominal procedures by which
they are used. Section <ref> gives certain details of the optical
model of the telescope and wide-field corrector.
Section <ref> describes the
astrometric calibration
of the guide CCDs used for field acquisition.
Section <ref> describes the astrometric calibration of the FVC images
and how they are used to provide corrections to the fiber positioners.
Sections <ref> to <ref> present numerous measures
of the performance of the calibration procedures, including both
on-sky and off-sky tests. Conclusions are presented in
section <ref> along with thoughts on improvements that
can be made in a future wide-field spectroscopic instrument.
§ HARDWARE
Figure <ref> shows the basic hardware layout. More information
on each major component follows.
§.§ Corrector
The basic parameters of the corrector <cit.>
are given in Table <ref>.
The corrector acts as a focal expander, increasing the f/# from the 2.8 of the
primary mirror to 3.9. The focal plane is chief-ray normal but not flat - the
shape is aspheric and convex towards the corrector. A side effect of
producing these optical properties is that there is a large amount of radial
distortion (6 mm), requiring at least a 5th-order polynomial to model.
The difference between the radial and
tangential plate scales can be as large as 10%
towards the edge of the field, which means that the exit pupil is distorted
as well. Additionally, the atmospheric dispersion compensator
("ADC") introduces non-axisymmetric distortions;
these will be discussed later.
The corrector is attached to the telescope via a hexapod system, which
provides focus, x,y translation,
and rotational motions. Only small rotations
are needed, and they are used to compensate for apparent field rotation as a
function of position in the sky.
§.§ Fiber View Camera
The FVC is described in more detail in <cit.>, so only the salient
points will be mentioned here. The camera consists of a lens, a
narrow band filter centered on 470 nm, a
quartz window, and a Kodak KAF-50100 6132 x 8176 CCD with
6μm pixels. The demagnification from the DESI
focal plane is a factor of 24. Multiple lenses have been used over time;
the current lens is a BK7 singlet 25 mm in diameter and focal length 600 mm,
producing an f/24 beam.
The FVC is mounted in the central hole in the primary mirror and is about
12 m from the DESI focal plane. Given its location near the vertex of
the primary mirror, the FVC images what is essentially the chief
ray of a back-illuminated fiber after the beam exits the corrector.
§.§ Focal Plane
The focal plane <cit.>
is assembled from 10 “petals”, each being a wedge-shaped
sector of angular width 36°.
Each petal has mounting holes for 502 motorized
positioners, and each positioner holds a single optical fiber of diameter
107μm (1.52 arcsec).
Additionally, 10 FIFs (“focal plane fiducials") are mounted at
select locations
between the field center and edge. The fiducials have a mask that allows them
to project a pattern of 4 “pinholes" when illuminated; these pinholes
are used
to aid in absolute position calibration of FVC images, tying
FVC CCD pixels to physical locations on the DESI focal plane. Also mounted
on each petal is a single GFA assembly, consisting
of a CCD plus
two GIFs mounted to the GFA body
(Fig. <ref>). Six of these GFAs (distributed around the rim
of the focal plane) are dedicated to guiding and field acquisition, while
the remaining four are dedicated to wavefront sensing for focus and
collimation. The GIFs are used to tie the astrometric calibration
as determined by the guide CCDs to the physical location in the focal plane.
Both the fiducials and the positioner fibers can be illuminated
either individually or in combination.
The LED illuminators have a wavelength of 470 nm.
Since the telescope has an equatorial mount, the focal plane has a
nominally fixed
orientation relative to the apparent sky.
A global Cartesian coordinate system
(called “CS5” within the DESI project) is defined such that the x-axis
points in the sky W direction and the y-axis points in the sky N direction.
§.§ Metrology
Metrology (accurate 2-D or 3-D measurements of particular components)
of two subsystems – the GFAs and the petals – was performed
in a laboratory setting before the subsystems were installed at the
telescope <cit.>.
* GFAs: Using a spot projector mounted on an X-Y stage, metrology of each
GFA was obtained, consisting of:
* Linear measurements of the 4 pinholes of each GIF and of spots
projected onto six locations on the CCD distributed in a grid pattern.
* For each spot location on the CCD, the CCD was read out,
producing pixel coordinates at the same time. Spot sizes were about 50μm
(3.4 pixels) FWHM.
In this way, it was possible to extend the astrometric calibration of the CCD
pixel coordinates
to the GIFs. The rms error in this measurement was of order 27μm 1-D.
* Petals: After the GFAs were attached to the petals, the fiducials
(including the GIFs) were illuminated and the x,y position of each
pinhole was measured relative to reference balls on each petal
using a coordinate measuring machine
(CMM) with both optical and touch probes.
The rms error in this process was of order 17μm rms 1-D.
Originally it was intended to assemble the entire focal plane and perform
metrology of all petals simultaneously, but this proved to be too difficult
and risky, so the relative petal positions were left to be determined using
the FVC.
§ OPTICAL MODEL
The optical design of the corrector plus primary mirror is given in
<cit.>. This design has been analyzed using a raytrace
program[The program is home-grown, written using custom code.
It provides
many features found in commercial codes such as Zemax or Oslo but has
several enhancements that make it easier to interface to the PM code.]
to determine a number of
properties that are relevant to PM. An unusual feature of
the optical design is that the ADC is formed from two adjacent spherical
surfaces that are wedged relative to the optical axis and counter-rotated
with respect to one another in order to create lateral chromatic aberration
that compensates that due to the atmosphere. The advantage of this
design is that it is compact and straightforward to fabricate. A collateral
impact, however, is that numerous side effects, described below,
must be accounted for and
compensated. None of these effects is serious, and they simply add a
few extra steps to DESI operations.
§.§ Field Center
The precise operational definition of the center of the focal plane
will be described in section <ref>; for the purpose here,
it is the place where the noses of all petals meet (Fig. <ref>).
The telescope is pointed
so that a “field center” on the sky is positioned at this point. The
ADCs introduce pointing offsets between the sky and the focal plane center
by an amount that depends on the ADC rotation settings and hence
zenith angle; at air mass 2, e.g., this offset amounts to about 72”.
§.§ Distortion
The dominant distortion in mapping the sky to the focal plane
is 3rd and 5th order radial
distortion with amplitude of order 6 mm. (Note that this value is the maximum
amplitude relative to a linear mapping between the center and edge
of the focal plane.) In addition, there is significant 2nd and 4th order
non-axisymmetric distortion that has a peak amplitude of order 100μm
at the edge of the field. Both types of distortion have static
components, while the non-axisymmetric components depend additionally
on the ADC settings.
A naive mapping of the sky to the focal plane, in which the distortion
Δ x and Δ y at coordinates x,y in the focal plane
are each expressed as a two dimensional polynomial of those coordinates,
requires 20 terms total at 3rd order (e.g., Anderson and King 2003)
and 42 terms total at 5th order. Such transformations are poorly constrained
when solving for the coefficients using data from, say, only 120 fiducials
total. A much more efficient mapping technique was presented in
<cit.> using spin-weighted Zernike polynomials.[These
polynomials turn out to be the same as the orthogonal vector polynomials
of Zhao and Burge <cit.>(ZB) but expressed in a more easily
generalized fashion.] These
polynomials are linear combinations of the terms in the naive expansions but
have certain properties that
make it easier to eliminate terms that are unimportant. In practice,
only 13 terms are needed here. <cit.> gives a table
of these terms along with their maximum amplitude and meaning.
Most of the terms are type “E” mode while three
are type “B” mode.[ZB S and T polynomials respectively]
Two of these terms are non-axisymmetric modes (one E and one B) that are
generated by the ADC when the telescope is away from zenith and
the wedged lens surfaces are not parallel to one another.
The amplitudes of these terms vary sinusoidally
with the ADC angles, reaching a maximum of 112μm when the ADCs are at
their maximum correction. Each type of pattern is shown in
Fig. <ref>.
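For illustration, the naive two-dimensional polynomial mapping discussed above (20 coefficients total at 3rd order, 10 per axis) can be fit by ordinary least squares as sketched below; the production code instead uses the spin-weighted Zernike basis:

```python
import numpy as np

def fit_cubic_distortion(x, y, dx, dy):
    """Fit dx and dy as full 2-D cubic polynomials of (x, y): 10 coefficients per axis.

    x, y, dx, dy are matched 1-D numpy arrays of fiducial positions and measured offsets.
    """
    A = np.column_stack([np.ones_like(x), x, y,
                         x**2, x * y, y**2,
                         x**3, x**2 * y, x * y**2, y**3])
    coeff_x, *_ = np.linalg.lstsq(A, dx, rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, dy, rcond=None)
    return coeff_x, coeff_y
```

With only of order 120 fiducials, such a fit is poorly conditioned at 5th order, which is the motivation given above for the more compact spin-weighted Zernike parametrization.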
When measuring distortion in the DESI focal plane using the FVC,
there is an additional
non-axisymmetric static E mode component
that is well in excess of what is predicted by the optical model and whose
origin is unclear. A possible origin is a small misalignment of the FVC CCD
or lens with respect to the corrector optical axis. In any case, its
effect is
incorporated in the FVC to DESI distortion model.
In the vicinity of each GFA, a simplified, local
version of the distortion model
is used that covers just the GFA CCD and the adjacent GIFs.
This model utilizes the
as-designed optical model but also accounts for the
fact that the GFAs are tilted and rotated due to the focal plane not being
flat and that the surface of best focus is curved across the GFA.
This local model omits the
non-axisymmetric E mode static term, but this omission is largely rectified
later
in the definitions of various calibration constants for the GFA CCDs and GIFs.
The local model is parametrized as a function of the ADC settings.
While the corrector plus atmosphere is largely achromatic, it is not entirely
so - there is a dependence of spot centroid on wavelength. This dependence
is captured in a simple model consisting of x,y offsets plus a radial
polynomial, where the coefficients are a function of both wavelength and
ADC settings.
The FVC images only the chief ray from a fiber, whereas the fiber itself
images all rays falling on the primary mirror. For off-axis
images, aberrations introduce an offset between the chief
ray and the image centroid. A simple model that is comparable to that used
for the differential chromatic aberration is used to account for this offset.
§ GFA ASTROMETRIC CALIBRATION
The GFA astrometric calibration process produces a mapping from GFA CCD
pixels to sky coordinates relative to the field center.
The inputs to this PM process, as provided by the “Instrument Control
System" (ICS), include the following:
* Sky coordinates of a field of 5000 astronomical targets plus 20 blank
sky positions used for monitoring system throughput <cit.>.
* A list of stars from the Gaia DR2 catalog that fall
in the vicinity of each GFA CCD to serve as astrometric standards.
* State information (hexapod settings, etc).
* System time synchronized to UTC.
* An image of the sky made using the GFA CCDs with the telescope
pointed at the selected field center, with a typical exposure time of
10 sec.
§.§ Image Processing
The GFA images are processed to produce a list of all stellar objects
in the images.
The GFA CCDs are operated in a mode slightly different from that of most
science CCDs and thus require a somewhat nonstandard processing algorithm.
The CCDs are operated in frame transfer mode, which allows integration
and readout without the need for a shutter.
They are not actively cooled and thus have significant dark current
and hot pixels. When the CCDs are first
turned on, they are in a very noisy state. By running through a series
of resets
the noise levels can be reduced, however, sometimes this process
does not work. Some of the CCDs have hot columns or warm edges.
Fortunately the hot pixel pattern has stayed nearly
constant with time such that a template dark current frame constructed at the
beginning of the survey still suffices.
After scaling for exposure time, it is subtracted
from a GFA image. There are often gradients
in the background remaining, so the image is divided into regions of size
172x128 pixels and each is median averaged and subtracted. No flatfielding is
needed. A simple cosmic ray filter is run to clean out obvious
single-pixel artifacts. A smoothing filter is run once to reduce noise.
At this point a simple peak-finder is run to find local peaks greater than
a given threshold above sky noise and less than a saturation value.
This process generally finds all bright stars, but it can also find a number
of artifacts such as bad columns that need to be filtered out. A series of
filtering steps is run to weed these out. First, objects falling on
bad columns are identified and eliminated. Annular profiles are computed for
each remaining object and a profile (essentially a Moffat function with
beta = 3.5) is fitted. A fixed radius of 12 pixels is used regardless
of the seeing to avoid potential changes in profile shape during times of
poor seeing. Cuts are imposed on the quality of the fit, image width,
flux within the profile, and shifts in centroid. This process
handles just about any artifacts found in the images and returns a clean
list of stars.
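A schematic of the dark subtraction, local background removal, smoothing, and peak finding described above; the array names, filter sizes other than the 172x128 background regions, and thresholds are illustrative, and the Moffat profile fitting and artifact cuts are omitted:

```python
import numpy as np
from scipy import ndimage

def preprocess_gfa(image, dark, exptime, dark_exptime, region=(172, 128)):
    """Dark-subtract a GFA frame and remove residual background gradients."""
    img = image.astype(float) - dark * (exptime / dark_exptime)
    # Median-subtract the background in 172x128-pixel regions.
    ny, nx = img.shape
    for y0 in range(0, ny, region[0]):
        for x0 in range(0, nx, region[1]):
            block = img[y0:y0 + region[0], x0:x0 + region[1]]
            block -= np.median(block)
    # Light smoothing to reduce pixel noise before peak finding.
    return ndimage.uniform_filter(img, size=3)

def find_peaks(img, threshold, saturation):
    """Local maxima between the detection threshold and the saturation level."""
    is_peak = (img == ndimage.maximum_filter(img, size=5))
    return np.argwhere(is_peak & (img > threshold) & (img < saturation))
```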
§.§ Global Astrometric Solution
The six GFA CCDs are treated as a single rigid focal plane - each
list of stars is projected onto the sky based on the known location and
orientation of each CCD (Fig. <ref>)
and then combined into a single list.
The Gaia astrometric standard coordinates are converted to apparent
tangent plane coordinates (ξ, η) centered on the target field center
by applying standard corrections for precession,
nutation, aberration, and refraction. An important additional correction
involves field rotation due to misalignments within the telescope structure.
Since the Mayall telescope is an equatorial mount, apparent north
is nominally fixed
in the focal plane, but due to misalignments (e.g., polar axis
misalignment), the actual direction of N can vary by several arcmin,
being largest at high declinations. An empirical model was developed
to calculate rotation as a function of hour angle and declination, and
this model works with an rms error of 10 arcsec and peak errors of order
1 arcmin at high declination. (These errors are not entirely
repeatable, indicating
that there may be hysteresis of some sort in the telecope drive or support
system.)
Transformation from the instrumental coordinates to sky coordinates requires
a minimum of 4 parameters: x and y sky center offsets, rotation angle, and
scale factor.
The rotation angle and scale factor are known well enough that the dominant
unknowns are the sky center offsets.
These offsets are determined by performing
a 2-dimensional cross-correlation between the combined star list and the
list of astrometric standards. Once an initial guess at the offsets is known,
the observed and standard star lists are matched up in detail, and improved
values of the 4 parameters are computed via least squares. This process
is repeated once in case the initial match missed one or more stars or
mismatched standard stars with a particular detection. A limiting magnitude
of 18 is imposed to prevent confusion in fields with high stellar density.
Note that even if only 1 star is detected on a GFA, it can be included in
the astrometric solution.
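The four-parameter fit described above (two sky center offsets, rotation angle, scale factor) reduces to a linear least-squares problem; a minimal sketch, not the PlateMaker implementation:

```python
import numpy as np

def fit_offset_rotation_scale(x_obs, y_obs, x_ref, y_ref):
    """Solve x_ref = a*x_obs - b*y_obs + tx,  y_ref = b*x_obs + a*y_obs + ty.

    Inputs are matched 1-D numpy arrays; returns (scale, rotation_rad, tx, ty),
    with a = scale*cos(rotation) and b = scale*sin(rotation).
    """
    n = len(x_obs)
    A = np.zeros((2 * n, 4))
    A[:n, 0], A[:n, 1], A[:n, 2] = x_obs, -y_obs, 1.0
    A[n:, 0], A[n:, 1], A[n:, 3] = y_obs, x_obs, 1.0
    rhs = np.concatenate([x_ref, y_ref])
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), tx, ty
```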
Once the 4 parameters are determined, the following quantities are computed:
* Coordinates and residuals for all matched stars
* A list of candidate guide stars for each GFA, rank-ordered by magnitude, with both sky and pixel coordinates
* Pointing corrections and the hexapod rotation setting. Note that for an
equatorial telescope, for any field off the celestial equator, a motion
in right ascension induces additional field rotation that varies as
tan(δ); this extra rotation is included in the hexapod setting.
* Sky coordinates for each GIF. These are computed based on laboratory
metrology of the location of each GIF relative to GFA CCD pixels along with
optical distortion predictions from the as-built DESI optical model
(Section <ref>).
For fields near zenith, typical rms residuals under good conditions are
30 milliarcsec 1-D with little if any dependence on position in the focal
plane. This performance is close to that achieved
with the Dark Energy Camera using exposures that are 3 times longer
<cit.>.
For fields away from the zenith, the accuracy declines, likely
due to deformation of the focal plane under gravity loading. Deformation
will be discussed in section <ref>.
The total elapsed time, including overhead for communicating with the ICS,
is typically 4 seconds.
§.§ Local Astrometric Calibration
PlateMaker offers an alternative mode in which each GFA CCD is astrometrically
calibrated individually. Although this mode is not used in normal operations,
it allows astrometric solutions to be obtained when either the telescope
has large pointing errors or the field rotation is not known ab initio.
It is most commonly used when constructing a new telescope pointing model.
§.§ Updates to GFA metrology
Initially the location and orientation
of each GFA was taken from the as-designed focal plane. In practice, the
as-built focal plane location of
each GFA is offset and rotated relative to its
design value. By using a series of exposures taken near zenith, the
astrometric solutions were used to improve the locations and
orientations of each GFA CCD relative to the others. Note that the
absolute locations, rotation angle relative to the focal plane, and
an overall scale factor giving the absolute separations of the CCDs was
uncontrained because the astrometric solution includes parameters that
are degenerate with these terms and will compensate for any adjustment to
them. The absolute location, size, and orientation of the GFA array
relative to the focal plane were determined using the FVC as described in
section <ref>.
§.§ Focal Plane Deformation
Finite element analysis models predict that the focal plane will deform
due to gravity as the telescope moves away from the zenith <cit.>.
Analysis of
astrometric solutions for a large number of fields at different locations
in the sky does, indeed, show such an effect. Figure <ref>
shows
the astrometric residuals averaged over each GFA for two fields: one
near the zenith, showing essentially no systematic offsets, and the other
at a zenith distance of 55 degrees, where
the residuals are large and systematic. Figure <ref> shows the
astrometric residuals as a function of position angle with respect to zenith
for a set of fields in the SW part of the sky at a zenith distance of
45 degrees. The large systematic trends are obvious, with a
peak amplitude of up to 0.3 arcsec. A sinusoidal model has been fit to
each of the CCD X and Y directions with terms that are a function of both
θ and 2θ. The coefficients of these terms were then fitted
as a function of zenith angle, with four different sets used for the NW, NE,
SW, and SE quadrants of the sky. Figure <ref> illustrates
the distortion pattern for one particular field.
Corrections for deformation are applied to the GFA locations before performing
the astrometric solutions. The model manages to reduce the impact of
deformation by about a factor of two, although not eliminating it completely.
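The per-GFA deformation terms described above can be fit as a truncated Fourier series in the position angle θ; a minimal sketch of such a fit (the subsequent fit of the coefficients versus zenith angle and the split into sky quadrants are omitted):

```python
import numpy as np

def fit_deformation_terms(theta, residual):
    """Fit residual(theta) = c0 + c1*cos(theta) + c2*sin(theta) + c3*cos(2*theta) + c4*sin(2*theta).

    theta is the position angle with respect to zenith (radians); residual is the
    averaged astrometric residual for one GFA CCD direction.  Returns the 5 coefficients.
    """
    A = np.column_stack([np.ones_like(theta),
                         np.cos(theta), np.sin(theta),
                         np.cos(2 * theta), np.sin(2 * theta)])
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return coeffs
```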
§ FVC ASTROMETRIC CALIBRATION
Targets in a particular field are selected and assigned to specific
positioners in advance. The layout assumes that the direction of ICRS North
has a particular position angle with respect to the focal plane CS5 Y
direction (this value being chosen
to account for precession at an epoch near the midpoint of the survey).
The target list is input to PM, which then converts the target positions
to apparent tangent plane coordinates using the same steps as were used
for the astrometric standards. These coordinates are then converted to
focal plane locations using the as-built optical model, and the positioners
are commanded to go to these locations in an open-loop fashion. The
petal controllers then report back the estimated location of the positioner,
accounting for any disabled or otherwise malfunctioning positioners.
The FVC takes images of the DESI focal plane with fiducials and/or positioner
fibers back-illuminated (Fig. <ref>)
and analyzes them using a software program spotmatch
(SM).
In order for SM to function it needs to be provided with a list of
approximate guesses at the locations in the image of each “spot” or array
of spots from each device, at which
point it is able to identify the complete pattern and return the original
list augmented with the precise pixel location of each spot. PlateMaker
runs a separate procedure to prepare this list. It first receives as
input from the ICS a list of all fiducials that
are operational and the list of positioners with their actual
locations. It then converts them to pixel coordinates, making use of the
optical model of the telescope and an optical model of the FVC camera.
The predictions account for demagnification from the DESI focal plane to the
FVC CCD and rotation due to the hexapod; SM itself can account for any
overall translation of the field. The predicted positions are generally
accurate to 1-2 pixels.
Once PM receives back the list of fiducials and positioners with pixel
coordinates, it uses the GIF coordinates, which have known
astrometric positions
from the GFA astrometric calibration, to solve for an astrometric
calibration of the FVC CCD. Because the FVC singlet lens acts
essentially as a perfect pinhole camera, the mapping of the sky to the
CCD is a gnomonic projection, and the calibration consists of determining
a field center, scale factor, and rotation.
At this point PM can now calculate the actual sky position of each
positioner fiber, compare it with the desired position, and derive
offsets and corrections in focal plane coordinates needed to fine-tune
each positioner fiber location. These correction moves are sent back to the
ICS and applied. A final FVC image is taken to assess
the final set of offsets, which are recorded but not used in another
iteration.
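For reference, the gnomonic (tangent-plane) projection underlying the FVC mapping can be written in its standard form; a minimal sketch, not PlateMaker's implementation:

```python
import numpy as np

def gnomonic(ra, dec, ra0, dec0):
    """Standard gnomonic (tangent-plane) projection; all angles in radians.

    Returns the standard coordinates (xi, eta) of (ra, dec) about the tangent
    point (ra0, dec0), also in radians.
    """
    cos_c = (np.sin(dec0) * np.sin(dec)
             + np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
    xi = np.cos(dec) * np.sin(ra - ra0) / cos_c
    eta = (np.cos(dec0) * np.sin(dec)
           - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
    return xi, eta
```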
The total elapsed time for each iteration of this process is typically
9 seconds and is the dominant contribution to the overall elapsed time
for the combined PM processes; however, the overall setup time (including
image acquisition times) is still within the 2 minute
requirement on setup time for moving among fields closely spaced on the
sky <cit.>.
It should be noted that, as far as PM is concerned, absolute focal plane
locations of positioner fibers are not needed. However, they are needed
for bookkeeping purposes and for the anti-collision code (which ensures
that positioners do not collide when moved) to work. For this
purpose, the fiducials are used to determine a mapping from FVC pixels
to DESI focal plane coordinates using the optical model, solving
each time for the coefficients in the distortion model.
§.§ Turbulence
An effect that turned out to be larger than expected was
the impact of air turbulence (a.k.a. “dome seeing”) in the 12 meter
column between the FVC and the DESI focal plane,
which introduces time-dependent
distortion in the spot position locations. While the distortion was
expected to be of order 3μm <cit.>, at times it
can amount to 10μm or more.
In fact, it is the dominant error in positioning fibers on targets,
to the extent that in the worst case the mispositioning causes
a spectroscopic exposure to be rendered useless. Example images are
presented in <cit.>.
Low-order modes in the turbulence pattern affect the distortion model
solutions. In the limit that absolute variations in refractive index
are small (≈ 10^-6), the displacement of a particular
spot as recorded on the FVC CCD is given by the gradient
of the integral of air density fluctuations along the sight line
from the DESI focal plane to the FVC. Thus, the
turbulence pattern should affect the curl-free E modes but not the
gradient-free B modes of the distortion model. This property is
demonstrated
in Fig. <ref>, where the coefficients for a pair of
E-mode and B-mode terms from a set of exposures are compared, showing that
turbulence does, indeed, impact only the E-modes. A comparable effect
was seen in the impact of atmospheric turbulence on the
astrometric calibration of the Dark Energy Camera focal
plane <cit.>.
One way to mitigate turbulence would be to take repeated FVC exposures to
average out the effect. However, it was discovered that one can use the
fiducials and functional fibers in disabled positioners (together,
positional references) to calibrate and
greatly reduce the impact of turbulence. To do this, one first needs to
determine the steady-state undistorted
locations of all the references relative to one another.
These locations turn out to depend on hour angle and declination and are
impacted by the focal plane deformation due to gravity as was described
above for the GFA CCD locations.[For reasons that are not entirely
understood, a model for focal plane deformation constructed using the
positional references and the model constructed from the astrometric
calibration residuals differ by up to 7μm,
implying that the GFA assemblies
somehow move relative to the petals on which they are mounted due
to gravity.] Therefore, an independent model for each positional reference
was constructed by taking a series of FVC images with the telescope
driven to a selected set of hour angle and declination settings. For these
images, the conversion from FVC CCD pixels to focal plane coordinates was
performed using the as-designed optical model since the normal transformation
derived by PM, which updates the model coefficients, removes some of the
turbulence pattern that one is trying to calibrate. Given these
steady-state locations, the first FVC image obtained in normal operations
(after the open loop positioner moves) is analyzed to determine the
turbulence pattern, which is then removed before computing the correction
moves. The process is then repeated for the second FVC image after the
correction moves are applied. The residual errors in positioner locations
after the correction move are considerably reduced by this turbulence
correction, and median 2-D values are typically of order 4μm
(Fig. <ref>).
§.§ Lens Polishing and Homogeneity Errors
The FVC images only a small portion of the beam from a back-illuminated
fiber, centered on the chief
ray. While the astrometric calibration procedure
accounts for any offsets of the chief ray from the centroid of the imaged
beam due to the as-designed corrector,
it does not account for any offsets introduced by imperfections in
the corrector optics, and in particular those due to polishing errors on the
lens surfaces and
inhomogeneities in the glass refractive indices. It can be noted that
<cit.> found it
necessary to outfit the Subaru telescope metrology camera with a 110 mm
aperture lens in order to reduce the impact of high
spatial frequency polishing errors in the Subaru prime focus corrector.
Detailed requirements
on polishing errors,
divided into low, medium and high spatial frequency ranges,
were specified for each surface of each lens at the time of fabrication
<cit.>. The values actually achieved were roughly a factor 2
better than the requirements. A detailed analysis suggests that these errors
will introduce offsets in the chief ray relative to the centroid of order
3μm rms 1-D. A related analysis based on measurements of the glass
inhomogeneities suggests an additional contribution from this
cause of about 2μm for lenses C1-C4 and the ADCs.
One feature that has proven to be more problematic is a “divot" introduced
by a machining error on the aspheric surface of lens C3. This feature
has a diameter of approximately 50 mm and a depth of about 1μm.
Since the FVC beam size is only 1.8 mm at this surface (compared to the
overall beam size of 256 mm),
the FVC positions of about 35-40 fibers are impacted, introducing larger
positioning errors and reduced throughput.
So far the errors introduced by this feature remain uncorrected.
§.§ Updates to GFA Metrology - II.
It was mentioned in section <ref>
that the positions of the GFAs relative to
one another can be measured quite accurately,
but that there is still an ambiguity in overall scale factor, rotation angle,
and offset relative to focal plane coordinates because these cannot be
determined from astrometry alone. However, the GIFs, which are calibrated
both astrometrically and via laboratory metrology, provide an absolute
tie to focal plane coordinates, and the FVC images are used to provide
these missing calibration constants.
§.§ Updates to GIF Metrology
The GIFs are used to transfer the astrometric calibration from the GFA
CCDs to the FVC CCD. Due to errors in the GIF metrology, there are
still systematic residuals in the GIF positions relative to one another
as is seen in their residuals from the FVC CCD astrometric calibration.
The GIF metrology was revised so that the relative GIF positions are
placed on a self-consistent system while the overall astrometric
calibration was unchanged. It can be noted that the corrections to the
relative GIF positions, when transformed astrometrically to the focal plane,
were also measured by the FVC images; these latter measurements were left to
provide a cross-check on the first set of revisions.
The internally consistent GIF metrology could still have an overall
external systematic offset
in location relative to the GFA CCDs that would not be captured by
either the astrometric calibrations or the FVC images. Any such offset
will be dealt with in the next section.
§.§ Split Exposures
For fields requiring long exposures, changes in differential refraction
due to changes in parallactic angle and/or zenith distance
can cause targets to become misaligned with their fibers, by up to half
an arcsecond in the worst cases. Rather than
conduct a complete new field acquisition, a mode has been implemented
that predicts the changes in fiber positions due to changes in
refraction and to any new ADC setting that is required
and determines the relative
adjustments needed to each fiber positioner and to the guide stars.
These adjustments
are applied open loop. The largest uncorrected effect turns out
to be that of field rotation. At present it is simply monitored after the
fact using the guide star measurements, although active feedback is
planned for the future.
§ ON-SKY DITHER TESTS
While the stellar astrometric calibrations are quite accurate internally
(rms errors much smaller than a fiber diameter),
the tie of the focal plane to the sky
still has an unknown uncertainty
due to the indirect manner in which the tie was
established. The tie could not be verified directly due to the lack
of any positioners with coherent imaging capability.
Thus, it was found necessary to conduct additional
tests to better measure any residual errors in the transformations and,
if necessary, apply corrections.
The technique that was found to work best <cit.>
was to observe a field of
bright stars and take a set of 13 exposures, first with the fibers placed
at the nominal positions of the stars, and then with random offsets
(“dithers”) applied in each of x and y, with each offset drawn from a
Gaussian distribution of 1-σ = 0.7”.
The throughput of each star was determined from spectroscopic
exposures in each of the B, R, and Z channels. The telescope was actively
guided during each exposure. In essence, what the dithering accomplished
was to measure the throughput of a star at a grid of locations offset
from the nominal target position,
allowing the determination of the offset that gave the maximum throughput.
In practice such a test performed on a single star would suffer from the
effects of misguiding, seeing variation, and transparency variation.
By observing a large number of stars with positions that were dithered
independently of one another,
these effects could be disentangled from each other and from
those due to individual positioner offsets.
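As a simplified illustration of the per-fiber measurement, assume the throughput of a single star falls off as a Gaussian of known width around the true fiber offset; the offset can then be recovered from the 13 dithered throughputs by linear least squares on the logarithm. The width, the array names, and the neglect of guiding, seeing, and transparency terms (which the real joint fit solves for across all stars) are assumptions of this sketch.

import numpy as np

def star_offset(dither_xy, thru, sigma=0.4):
    """Recover the (x, y) mispointing of a single fiber from dithered throughputs,
    assuming throughput = A * exp(-|dither - offset|^2 / (2 sigma^2));
    taking logs makes the fit linear in (log A, offset)."""
    dx, dy = dither_xy[:, 0], dither_xy[:, 1]
    b = np.log(thru) + (dx**2 + dy**2) / (2.0 * sigma**2)
    A = np.column_stack([np.ones_like(dx), dx, dy])
    c0, c1, c2 = np.linalg.lstsq(A, b, rcond=None)[0]
    return c1 * sigma**2, c2 * sigma**2

# self-test with synthetic data: nominal pointing plus 12 Gaussian dithers of 0.7"
rng = np.random.default_rng(1)
dithers = np.vstack([[0.0, 0.0], rng.normal(0.0, 0.7, size=(12, 2))])
true_offset = np.array([0.15, -0.05])
thru = np.exp(-np.sum((dithers - true_offset)**2, axis=1) / (2 * 0.4**2))
thru *= 1.0 + 0.02 * rng.normal(size=13)     # a little measurement noise
print(star_offset(dithers, thru))            # approximately (0.15, -0.05)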
Having measured the residual distortion in the PM optical model, the “quiver”
pattern (Fig. <ref>)
was fit with the same type of model as is used to map the FVC CCD to
the focal plane. The largest term is an E mode with amplitude 56μm;
there is also a B mode term with amplitude of 14μm. While the
origin of these terms is not known for sure, it is equivalent to a
tilt of the FVC CCD with respect to the DESI focal plane
of about 25 arcminutes.
The pattern made with the R channel data
is applied as a correction to the predicted target position by PM.
Subsequent dither tests show that the residual rms 1-D positioning
error is of order 8μm (Fig. <ref>). The known
contributions from
lens imperfections (3 μm from polishing errors, 2μm from glass
inhomogeneities), and uncorrected dome turbulence plus spot centroiding
errors (4μm)
account for perhaps 5μm or 6μm of this error.
(For reference, the original allocation
to overall fiber lateral positioning error was 7.8μm 1-D or 11μm 2D,
which means that this requirement is, indeed, being met.)
The quiver pattern exhibits some high order behavior that is not included
in the model and has the rough appearance of what might be expected from
polishing errors and/or glass inhomogeneities
in the corrector optics, although this conjecture has not been verified.
The pattern and the overall centering may also show some dependence
on time and zenith angle, but these dependences are at the threshold of
detection and are not yet well quantified.
A byproduct of the dither tests is that it is possible to detect any
asymmetry in the geometry of the FVC CCD, particularly a difference in
scale factor in the row and column directions. In the event,
no such asymmetry is seen at the level of a few ppm.
A couple of cross-checks can be done to demonstrate that the dither tests
are functioning properly. First, the dither analysis determines, among other
things, the offset in the field center from one exposure to the next.
These offsets should be reflected in the guide star offsets measured using
the guide CCDs at the
same time. Figure <ref>
shows such a comparison, demonstrating that the
mean field offset does, indeed, match the guide star offset. Additionally,
there is no systematic offset between the two measures, meaning that
the field is properly centered on the fibers if the guider error is
zero. (This result also demonstrates that there is no remaining systematic
offset in the GIF metrology system.)
Second,
the corrector is not perfectly achromatic, so the radial variation in
positioner offsets among the three channels should match that predicted by
the optical model. Figure <ref>
shows this comparison as well.
§ OFF-SKY OPERATION
The FVC is used frequently to image the focal plane without being on-sky,
most commonly to study positioner performance. PM is used to analyze
these images and, depending on what tests are being done, supports
multiple modes of operation.
§.§ Initial calibration
PM normally relies on spotmatch to identify and measure the locations of
fiducials, but doing so relies on having an approximate mapping of FVC
pixels to the focal plane, which is not known initially. Thus, PM has
its own code to identify the spot patterns of fiducials. A single FVC
image of the back-illuminated fiducials is obtained. Five of the 12
fiducials on each petal are located along an arc near the edge of the focal
plane, and these can be distinguished from the others. One petal has
an extra fiducial along this arc for symmetry breaking. This extra fiducial
is identified with an automated algorithm,
and once done, the initial mapping to the focal
plane can be determined in a straightforward fashion.
§.§ Petal Offsets
The focal plane is assembled from 10 individual petals, and, when these are
assembled, there are small offsets that are not known in advance. A set
of 10 FVC images of the backlit fiducials is obtained, and a solution is
obtained for the focal plane distortion with three additional calibration
coefficients enabled for each petal - two for x,y
offsets and one for rotation
about the nose of the petal. A global constraint is imposed that the
average of all offsets and rotation be zero. This process is operationally
how the CS5 system is defined. Once these extra coefficients
are determined for each petal, they are kept frozen so as to reduce
the number of degrees of freedom in the measurement of distortions during
normal operations.
The rms residuals after fitting a distortion model to fiducials measured
in a typical FVC image are about 16μm rms 1-D. These errors are presumably
due to errors in the laboratory metrology of the fiducials prior to the petals
being delivered to the telescope. One could average together many measurements
and improve the internal consistency of the petal metrology, although
this process has not been found to be essential yet.
§.§ SPOTS for turbulence calibration
The turbulence calibration relies on knowing the relative positions
of each fiducial or non-operational positioner, but their absolute location
in the focal plane is not needed (nor is it known a priori for the
positioners). Instead, a series of FVC images are averaged to get the
coordinates empirically. In practice, the coordinates will
also depend on the ADC settings and thus telescope position on the sky.
Further, there may be focal plane deformation due to gravity. Therefore,
a program termed “SPOTS" was created in which
a large number of FVC images is obtained over a range of hour
angles and declinations. The pixel coordinates of each fiducial
and positioner fiber are transformed to the
focal plane using the as-built optical model, and each coordinate
is modeled as a constant plus a polynomial
dependence on hour angle and declination; the data are used to constrain
the coefficients of this model.
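A hedged sketch of the per-reference model used in SPOTS is given below: each coordinate of a positional reference is modeled as a constant plus low-order polynomial terms in hour angle and declination and fitted by least squares to the calibration sequence. The polynomial degree and variable names are illustrative.

import numpy as np

def spots_fit(ha, dec, coord, deg=2):
    """Model one coordinate of a positional reference over the SPOTS sequence as a
    constant plus polynomial terms in hour angle and declination (total degree <= deg)."""
    A = np.vstack([ha**i * dec**j
                   for i in range(deg + 1) for j in range(deg + 1 - i)]).T
    coef, *_ = np.linalg.lstsq(A, coord, rcond=None)
    return coef   # coef[0] is the constant; the rest capture the pointing dependence

# one model per reference and per axis, e.g.
# coef_x = {ref: spots_fit(ha_seq, dec_seq, x_meas[ref]) for ref in references}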
§.§ Laboratory Tests
Two spare petals were set up for conducting tests at LBL in a clean room.
A separate FVC was provided for these tests. Several modifications were
needed to PM to map FVC images to each petal.
* There is no optical corrector, so a set of files to define an
optical model with no power was created.
* The focal surface of the petals is an asphere and is viewed at a finite
distance, which means that there
is quite a bit of “distortion” in mapping FVC pixels to the focal surface.
* The petals and FVC could be moved about, so procedures were developed
to redetermine basic calibration constants (position angle and demagnification)
more easily.
§ FOCAL PLANE STABILITY
PM generates a large amount of diagnostic information that can be used
to look for trends or uncover problems in the hardware. The following are
a few examples.
* The relative astrometric locations of the GFAs are measured for each
acquisition image, and long-term trends are monitored. A simple diagnostic
involves monitoring the rms errors in astrometric solutions for fields near
zenith for degradation over time. No such degradation has been seen thus
far.
* The rms errors in astrometric calibration of the FVC images are
measured at the same time. No significant trends have been noticed.
* The rms errors in FVC to focal plane mapping are tracked. These have
been stable over time.
* The rms errors in the GIF to focal plane calibration are tracked.
These have been stable over time.
* The dependence of various calibration coefficients
vs. telescope temperature are
tracked. As an example, Fig. <ref> shows the dependence
of the FVC scale factor on telescope temperature. The slope
is about -2.2× 10^-5 K^-1. This value is
several times greater than the temperature coefficient of either
silicon or BK7 glass (both in physical dimension and refractive index)
and likely arises from thermal expansion of
the aluminum structure holding the FVC and lens (without refocusing).
By contrast, the
GFA astrometric scale factor shows no particular dependence on telescope
temperature, which is consistent with the focal plane being thermally
controlled (and kept in focus).
§ CONCLUSIONS AND LESSONS LEARNED
The DESI
astrometric calibration systems and processes have demonstrated that
fiber positioning can be achieved repeatably and
reliably with an accuracy of 8μm 1-D (11μm 2-D), which meets the DESI
requirement for lateral positioning error.
Future large spectroscopic surveys currently being discussed
may possibly make use of a fiber view camera system of some sort, and
the DESI experience may provide some guidance as to their design and
implementation. In case it might help, the following are some lessons learned
from the DESI experience:
* Ensure that the end-to-end astrometric calibration process is fully
defined at an early stage.
In DESI, initial planning focused on mapping distortion in the
corrector optics using the FVC along with the fiducials, but not enough
attention was paid to mapping from the focal plane to the sky or the
FVC to the sky. The initial error budget omitted several
contributions that were important while including other
contributions that were unimportant. Requirements on the performance of
the optical corrector did not account for the fact that the FVC images
only the chief ray from a back-illuminated fiber, which is affected most
by high frequency components of the polishing error budget.
* Provide a direct mechanism to connect the focal plane to the sky.
In the case of the Sloan Digital Sky Survey <cit.>,
this connection was accomplished
by using coherent fiber bundles that were positioned in holes drilled
in a plug plate identical to the holes used for positioning
target fibers. These bundles were distributed throughout the focal plane.
In the case of the 2dF instrument at the Anglo-Australian Telescope
<cit.>, a focal plane imaging system is used, including
a type of coherent imager that utilizes a cluster of fibers,
but only 4 are available and are normally placed at the edge of the field.
Coherent imagers require different routing for their fibers than for
the target fibers, which would have been difficult for DESI and accounts for
why they were not used.
* Include a sufficient number of fiducials in the focal plane to
allow for monitoring and correction of turbulence. DESI started with
only 113 such fiducials, whereas of order 300 or more are needed.
§ ACKNOWLEDGEMENTS
The authors would like to thank Michael Lampton, Tim Miller, and Charlie
Baltay for many useful discussions. The DESI
collaboration and the authors in particular
are indebted to the late Michael Lampton for his
many contributions to the success of this project and mourn
his passing.
This work was produced, in part, by Fermi Research Alliance, LLC under
Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy.
This material is based upon work supported by the U.S. Department of Energy
(DOE), Office of Science, Office of High-Energy Physics, under Contract No.
DE-AC02-05CH11231, and by the National Energy Research Scientific Computing
Center, a DOE Office of Science User Facility under the same contract.
Additional support for DESI was provided by the U.S. National Science
Foundation (NSF), Division of Astronomical Sciences under Contract
No. AST-0950945 to the NSF's National Optical-Infrared Astronomy Research
Laboratory; the Science and Technology Facilities Council of the United
Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons
Foundation; the French Alternative Energies and Atomic Energy Commission
(CEA); the National Council of Science and Technology of Mexico (CONACYT);
the Ministry of Science and Innovation of Spain (MICINN), and by the DESI
Member Institutions: <https://www.desi.lbl.gov/collaborating-institutions>.
Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the author(s) and do not necessarily reflect the
views of the U. S. National Science Foundation, the U. S. Department of
Energy, or any of the listed funding agencies.
The authors are honored to be permitted to conduct scientific research on
Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the
Tohono O’odham Nation.
For more information, visit <https://www.desi.lbl.gov/>.
This work has made use of data from the European Space Agency (ESA) mission
Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data
Processing and Analysis Consortium
(DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>).
Funding for the DPAC has been provided by national institutions,
in particular the
institutions participating in the Gaia Multilateral Agreement.
§ DATA AVAILABILITY
All data points shown in the published graphs are available in a
machine-readable form in Zenodo at <https://zenodo.org/api/files/c135b58d-ca36-4be4-a44d-43ed428e309c/astrometry.tgz>
[Baltay et al. (2019)] baltay19
Baltay, C. et al. 2019, PASP, 131, 5001.
[Bernstein et al. (2017)] bernstein
Bernstein, G. B. et al. 2017, PASP, 129, 4503.
[Brown et al. (2018)]gaia18
Brown, A. G. A., Vallenari, A., Prusti, T. et al., 2018, A&A, 616, A1
[DESI Collaboration et al. (2016a)]desi16a
DESI Collaboration, Aghamousa, A., Aguilar, J., et al.,
2016a, arXiv:1611.00036
[DESI Collaboration et al. (2016b)]desi16b
DESI Collaboration, Aghamousa, A., Aguilar, J., et al. 2016b,
arXiv:1611.00037
[DESI Collaboration et al. (2022)]aba22
DESI Collaboration, Abareshi, B. Aguilar, J., et al. 2022, AJ, 64, 207.
[Kent (2018)]kent18
Kent, S. M. 2018, PASP, 130, 4501
[Levi et al. (2013)]levi13 Levi, M., Bebek, C.,
Beers, T., et al. 2013, arXiv:1308.0847
[Lewis et al. (2002)]lewis02 Lewis, I. et al. 2002,
MNRAS, 333, 279
[Limmongkol et al. (1993)]lim93 Limmongkol, S., Owen, R. E.,
Siegmund, W. A. & Hull, C. L., 1993, ASP Conf. Series, 37, 127
[Miller et al. (2023)]miller23 Miller, T. et al. 2023,
arXiv:2306.06310, submitted to AJ
[Owen et al. (1998)]owen98 Owen, R. et al. 1998,
ASP Conf. Series, 152, 98
[Schlafly et al. (2023)]schlafly23 Schlafly, E. et al. 2023,
arXiv:2306.06309, submitted to ApJ
[Silber et al. (2023)]silber23
Silber, J. H. et al. 2023, AJ, 165, 9
[Wang et al. (2014)] wang14
Wang, S-Y et al. 2014, Proc. SPIE, 9147.
[Zhao & Burge(2007)]zhao07
Zhao, C., & Burge, J. H. 2008,
Opt. Expr., 16, 6586
[Zhao & Burge(2008)]zhao08
Zhao, C., & Burge, J. H. 2008,
Opt. Expr., 21, 31430
|
http://arxiv.org/abs/2307.04166v1 | 20230709131847 | Parameter Identification by Deep Learning of a Material Model for Granular Media | [
"Derick Nganyu Tanyu",
"Isabel Michel",
"Andreas Rademacher",
"Jörg Kuhnert",
"Peter Maass"
] | cs.CE | [
"cs.CE"
] |
Parameter Identification by Deep Learning of a Material Model for Granular Media
[1]Derick Nganyu Tanyu
2]Isabel Michel
1]Andreas Rademacher
2]Jörg Kuhnert
1]Peter Maass
[1]Centre for Industrial Mathematics (ZeTeM), University of Bremen, Bibliothekstrasse 5, Bremen, 28359, Bremen, Germany
[2]Fraunhofer Institute for Industrial Mathematics ITWM, Fraunhofer-Platz 1, Kaiserslautern, 67663, Rhineland-Palatinate, Germany
Classical physical modelling with associated numerical simulation (model-based), and prognostic methods based on the analysis of large amounts of data (data-driven) are the two most common methods used for the mapping of complex physical processes. In recent years, the efficient combination of these approaches has become increasingly important. Continuum mechanics in the core consists of conservation equations that – in addition to the always necessary specification of the process conditions – can be supplemented by phenomenological material models. The latter are an idealized image of the specific material behavior that can be determined experimentally, empirically, and based on a wealth of expert knowledge. The more complex the material, the more difficult the calibration is. This situation forms the starting point for this work's hybrid data-driven and model-based approach for mapping a complex physical process in continuum mechanics. Specifically, we use data generated from a classical physical model by the MESHFREE software <cit.> to train a Principal Component Analysis-based neural network (PCA-NN) for the task of parameter identification of the material model parameters. The obtained results highlight the potential of deep-learning-based hybrid models for determining parameters, which are the key to characterizing materials occurring naturally, and their use in industrial applications (e.g. the interaction of vehicles with sand).
August 12, 2023
===================
§ INTRODUCTION
In engineering, natural sciences, and industry, partial differential equations (PDEs) are widely used to model a great variety of problems. They are a great tool for modeling and solving complex phenomena ranging from the motion of incompressible fluids to the electronic structure of materials, just to name a few. Usually, these models follow the full life cycle of products from classical simulation and optimization during the development phase to process monitoring and control during production. PDE models generally introduce some critical parameters, which have to be calibrated so that the model reflects the system or problem being considered. These parameters could be scalar or space and time-dependent parameter functions, and their calibration process usually requires multiple runs of the model. In some scenarios, one has access to the solution of the PDE or observation of the system and wishes to infer the parameters underlying the governing PDE, thus an inverse problem. A wide range of inverse problems have been studied, such as tomography <cit.>, inverse kinematics <cit.>, and inverse problems in signal processing <cit.> and even in quantum mechanics<cit.>. However, PDE-based inverse problems are one of the most challenging inverse problems. The complexity of PDE-based inverse problems is compounded by the fact that their solutions are typically nonlinear. This further emphasizes the need for efficient and fast solvers.
While traditional or standard numerical methods such as finite differences and finite elements have been used extensively to solve PDEs, most if not all these standard PDE solvers suffer from the curse of dimensionality <cit.>, i.e. the computational cost grows exponentially as the dimension increases. This has led to the extensive study of data-driven concepts, particularly, neural network approaches for solving PDEs over the last few years. In addition to their potential of overcoming the curse of dimensionality, these data-driven concepts usually have the potential to complete mathematical-physical models as even the finest detail or tricky non-linearity is contained in a sufficient dataset. Also, since the parameters to be determined most often are not arbitrary, but follow an unknown, application-specific distribution, the training data provides a means to recover and exploit this distribution.
This paper looks at a PDE-based inverse problem in the field of continuum mechanics, which is applicable to the automobile development process. Specifically, our focus is on a physical model of soil over which vehicles ride. The rest of this work is structured as follows: We continue in Section <ref> by looking into reduced order models (ROM) and how proper orthogonal decomposition (POD) as well as deep learning (DL) can be used in ROMs. We equally highlight in Section <ref>, how neural networks have been recently applied for PDE solutions, parametric studies, and inverse problems. We then proceed to present the defining equations of our problem in Section <ref> and the laboratory test setting, which provides the basis of the MESHFREE simulations <cit.> used for the data generation. Section <ref> presents the method used to approach the problem, i.e. PCA-NN. In Section <ref>, we summarize the numerical results, followed by concluding remarks in Section <ref>.
§.§ Reduced Order Models and POD/PCA
Full-order models (FOM) like the finite difference method (FDM), finite element method (FEM), finite volume method (FVM), discontinuous Galerkin method (DGM), etc. that discretize the PDEs are usually highly accurate but very expensive. Depending on the application and the goals set, the user has to balance accuracy and computation time as an algorithm of higher accuracy implies higher computation time. In FDM, for example, a finer discretization of the domain (grid) leads to higher accuracy. The result of this is a system of linear equations with many more unknowns/parameters (i.e. the solution vector has a higher dimension); thus, a larger matrix system has to be solved to obtain the PDE solution on this fine grid. This is a major setback for real-time applications, and other settings where the PDE has to be queried multiple times. Reduced Order Models (ROM) offer a solution as they seek to reduce the dimension of the solution vector while maintaining the problem's physical features. The Reduced Basis (RB) method, which has received a lot of attention in the last decade <cit.> but can be traced back to the 1980s <cit.>, is unarguably one of the most popular ROM. This method consists of an offline and an online stage. During the offline stage, a reduced basis is obtained from a good choice of parameters, and this is used to obtain solutions of the PDE for new parameters. This is very similar to neural operator methods for solving PDEs like Fourier Neural Operator (FNO) <cit.> and Deep operator network (DeepONet) <cit.>. The RB method can also be extended for parameter identification tasks <cit.> as well as inverse problems <cit.>.
Recently, Deep Learning-based reduced order models (DL-ROM) have been popularized to efficiently solve PDEs <cit.>. Just like the RB method, they consist of an offline (training) phase and an online (testing) phase. The DL-ROM, though time-efficient during testing, might be very costly during training due to the high number of features or dimensions of the input and/or output – similar to RB method. The consequence of this is usually a network with more parameters, and thus more time is needed for optimizing these parameters. A common solution that reduces the number of network parameters while maintaining or even improving the accuracy is the proper orthogonal decomposition (POD). In the field of machine learning, this is commonly known as Principal Component Analysis (PCA), used as a technique for dimensionality reduction <cit.>. Reduced order models constructed with both deep learning and POD are referred to in <cit.> as POD-DL-ROM, where accuracy-wise, they are reported to outperform state-of-the-art POD-Galerkin ROMs during the testing stage; and efficiency-wise, they outperform DL-ROMs during the training stage.
§.§ Neural Networks and PDEs
Neural Networks have shown interesting results in dealing with high dimensional complex PDEs <cit.>, where they overcome the curse of dimensionality for the Schrödinger equation and Hamilton–Jacobi–Bellman equation <cit.>, Black–Scholes equations <cit.>, and Kolmogorov equations <cit.> which arise in option pricing <cit.>.
The popularity of neural networks in solving PDEs probably comes from the famous Physics-informed neural networks in <cit.> that use a neural network to approximate a function, i.e. the solution of the PDE for a single parameter instance. Similar works include quadratic residual networks <cit.> and Deep Ritz networks <cit.>.
Another class of neural networks – probably closer in its operation to RB methods – approximate an operator by a neural network. They are known as neural operators and can be used to query solutions of different parameter instances when trained. The PCA-based neural operator <cit.>, FNO, DeepONet are part of this class as well as other novel methods and `variants' like the Multiwavelet-based operator <cit.>, graph neural operator <cit.>, wavelet neural operator <cit.>, and many more. <cit.> provides a good overview and extends them for parametric studies as well as inverse problems.
§ PROBLEM FORMULATION
To shorten the design cycle of vehicles and reduce the cost of development, the automotive industry employs numerical simulation tools in the vehicle development process for testing and analysis. In this application example, we are interested in the interaction of vehicles with various roadbeds such as sand, snow, mud, etc. Vehicle stability depends largely on this interaction, and the safety of the passengers is thus a concern. To approach this problem, a full-body model of the vehicle dynamics is needed as well as proper modeling of the roadbed. Of interest to us, is the modeling of the roadbed consisting of granular material. This is a continuum mechanics problem that involves not only the well-known conservation equations of mass, momentum, and energy, but also a supplementary phenomenological material model. While the former specify the process conditions and are generally well understood, the latter relates the applied strain to the resulting stress and comes with uncertainties as well as non-linearities. Obviously, the overall goal is for the simulations to match the real-life experiments, thus the selected material model is of great importance.
§.§ Barodesy Model
Material models have parameters that are specific to the considered material as well as its reaction to external conditions, and these models range from simple to complex. By using single-parametric models for the granular material (roadbed), for example, the deviation between simulations and experiments increases as the simulation time progresses. As a result, complex material models with many more parameters are used. Such parameters are usually determined by a great wealth of expert knowledge, and costly experiments. The barodesy model <cit.> is one of such complex material models which conforms to the basic mechanical properties of the material. It is formulated in tensorial form by Equations (<ref>)–(<ref>)
d 𝐒/d t = 𝐖 𝐒-𝐒𝐖+𝐇(𝐒, 𝐃, e)
d e/d t = (1+e) ·tr(𝐃) ,
with
𝐃 = 1/2(∇𝐯^T+(∇𝐯^T)^T)
𝐖 = 1/2(∇𝐯^T-(∇𝐯^T)^T)
and
𝐇(𝐒, 𝐃, e) = h_b(σ) ·(f_b 𝐑^0+g_b 𝐒^0) ·|𝐃|,
where
σ = |𝐒|=√(tr(𝐒^2))
𝐒^0 = 𝐒/|𝐒|, 𝐃^0=𝐃/|𝐃|, 𝐑^0=𝐑/|𝐑|
𝐑 = tr(𝐃^0) ·𝐈+c_1 ·exp(c_2 ·𝐃^0)
h_b = σ^c_3
f_b = c_4 ·tr(𝐃^0)+c_5 ·(e-e_c)+c_6
g_b = -c_6
e_c = (1+e_c0) ·exp( σ^(1-c_3) / (c_4 ·(1-c_3)) ) - 1.
In the above expressions:
- 𝐒∈ℝ^3 × 3 is the Cauchy stress tensor (with principal stresses σ_1, σ_2, σ_3 in axial and lateral directions),
- 𝐖 is the antisymmetric part of the velocity gradient,
- 𝐃 is the stretching tensor (the symmetric part of the velocity gradient),
- e= V_p/V_s is the void ratio with critical void ratio e_c, where V_p and V_s are the volume of pores and solids (grains).
- 𝐯∈ℝ^3 is the velocity field.
The non-linear function 𝐇 introduces the material parameters c_1, c_2, c_3, c_4, c_5, c_6, and e_c0 which we seek to identify via deep learning in a supervised learning task, provided the stress is known. For Hostun sand <cit.>, for example, c_1 = -1.7637, c_2 = -1.0249, c_3 = 0.5517, c_4 = -1174, c_5 = -4175, c_6 = 2218, e_c0 = 0.8703.
§.§ Oedometric Test
In soil mechanics, laboratory tests are used to measure the physical and mechanical properties of soil. They enable the testing and validation of material models. The tests vary from soil classification, shear strength, consolidation, and permeability tests, etc. <cit.>. The consolidation or oedometric test is one of the most conducted tests in soil mechanics. The soil (material) sample is loaded as well as unloaded in axial direction and rigid side walls prevent any lateral expansion, see Figure <ref>. With this, the soil's consolidation properties can be measured.
The laboratory measurements of oedometric tests result in stress paths (relating lateral and axial stress) and stress-strain curves, e.g. in the axial direction as illustrated in Figure <ref>. These are compared to corresponding element tests with respect to a material model such as barodesy, in which the material model is integrated for a single numerical point. When evaluating the quality of 3D numerical methods, only the comparison with corresponding element tests should be made, since the numerics cannot be better than the material model itself. This was investigated, for example, in <cit.> for the MESHFREE software (see Section <ref>), at that time still referred to as the Finite Pointset Method (FPM).
§.§ MESHFREE and the Generalized Finite Difference Method (GFDM)
We employ the Generalized Finite Difference Method (GFDM) <cit.> implemented by Fraunhofer ITWM in the MESHFREE software <cit.> to numerically solve coupled PDEs governed by the conservation equations and material models such as the barodesy model described in Section <ref>. MESHFREE has successfully been applied for the simulation of complex continuum mechanics problems in industry, like vehicles traveling through water<cit.>, flow inside impulse-type turbines <cit.>, solution mining <cit.>, injection molding <cit.>, wet metal cutting <cit.>, and phase change processes <cit.>.
§.§.§ Point Clouds and Generalized Finite Difference Approximation
An overview on point cloud generation for meshfree methods is given in <cit.>. MESHFREE employs an advancing front procedure <cit.> that first discretizes the boundary and then iteratively the interior of the continuum domain depending on a given point interaction radius. Each point carries the physical information (such as velocity, pressure, temperature, stress, etc.) and is moved with the continuum velocity in a Lagrangian formulation <cit.>. Distortions caused by the movement can be corrected purely locally by adding and deleting points.
Discretizing the governing PDEs in their strong formulation, GFDM generalizes classical finite differences to (scattered/irregular) point clouds. Thereby, all numerical derivatives (function values, x-, y-, z-derivatives or Laplacian) are computed as linear combination of neighboring function values, where the neighbors of a point are determined by the point's interaction radius. The necessary coefficients/stencils are computed by a weighted least squares method. For more details on generalized finite difference approximation we refer to <cit.>.
§.§.§ Data Generation
Using the MESHFREE software, we generate parameters-stress pairs to train our neural network. Here, we use the physical and numerical model presented in <cit.> including corresponding boundary conditions for the cylindrical oedometric test. As described in <cit.>, the axial stress on the 3D point cloud (Figure <ref>) is averaged over all points of the sample to determine the resulting data for a parameters-stress pair, see Figure <ref>. For simplicity, the representation in this figure is dependent on time and not on axial strain as in Figure <ref>. Note that we use the settings for the dense sample in <cit.> with fixed interaction radius h=0.01 m, loading/unloading rate v_p=∓ 0.001 m/s, and fixed time step size Δ t = 0.0015 s for all parameters-stress pairs.
The parameters that constitute the data set are selected uniformly within predefined intervals. Guided by expert knowledge (see Section <ref>), a base value is chosen for each parameter, and the interval is constructed around it by adding and subtracting 5 % of this base value to obtain its lower and upper bounds, as shown in Table <ref>.
§ PROPOSED METHOD
The proposed method is inspired by both Reduced Order Models (ROM) and Neural Networks (NN). ROMs have been popular for a long time in dealing with PDEs, and even more when dealing with parameter identification problems, as outlined in Section <ref>. NNs have become popular over recent years not only due to their success in computer vision <cit.>, natural language processing <cit.>, but also due to the availability of data and growing computing power <cit.>. The efficient combination of both methods <cit.> has already achieved remarkable results not only in simple problems but also in more complex problems such as cardiac electrophysiology <cit.> (where the use of proper orthogonal decomposition (POD) further improves the results <cit.>), fluid flow <cit.>, non-linear models <cit.>, etc.
§.§ PCA-NN
We implement a variation of the PCA-NN architecture presented in <cit.>, which uses a meshless operator for the evaluation of the solution of a PDE by combining ideas of ROM with deep learning. First, for given training data (λ_i,u_i), obtain a model reduction by the use of principal component analysis (PCA) for both the input (parameter λ) and output (solution u). Only the coefficients of a finite number of PCA components are retained. Thus, PCA reduces the dimensions of both the input and output spaces to finite dimensional latent spaces. Second, use a NN to map the coefficients of the respective representations in these latent spaces.
The evaluation of this operator approximation for a novel parameter λ is highly efficient: compute the scalar products with the specified finite number of PCA components, map these coefficients to the latent coefficients of the output space with the NN, approximate the solution of the PDE by an expansion using these coefficients and the PCA on the output side. A simplified architecture of this method is shown in Figure <ref>.
The formulation of this approach is in a function space setting and hence mesh-free. For implementation purposes, however, we have to specify how to compute the scalar products with the PCA components. These are only given numerically, usually by their values specified at discrete points (in our case time steps).
This PCA-NN operator has been used in <cit.> in a multiscale plasticity problem to map strain to stress.
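As an illustration of the offline reduction step, the stress curves can be projected onto a small number of principal components with a randomized SVD; the variable names and the use of scikit-learn are assumptions of this sketch, not a description of the original implementation.

import numpy as np
from sklearn.decomposition import PCA

# stress: hypothetical (N_train, 675) array holding the averaged axial-stress curves
pca = PCA(n_components=50, svd_solver="randomized", random_state=0)
z_train = pca.fit_transform(stress)      # (N_train, 50) latent coefficients
# for a new curve s of shape (675,): z = pca.transform(s[None, :]) is fed to the network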
§.§ Workflow
In our problem, the goal is to learn the parameters μ∈ℝ^d_μ, with d_μ = 7, from the variation of the axial stress -σ_1 ∈ℝ^d over time t, where d = 675 is the fixed number of time steps corresponding to Δ t = 0.0015, see Section <ref>. The data set generated from MESHFREE is therefore a vector pair (μ,-σ_1).
Our adopted procedure can be broken down into four major steps as illustrated in Figure <ref>:
* Data Generation: Using MESHFREE and the setup described in Section <ref>, generate parameters-stress pairs (μ^i, -σ_1^i) with i = 1,2,…,N_train+N_test. These are snapshots of the full order model that is based on the GFDM described in Section <ref>.
* Training (Offline Stage): The first N_train data pairs are used to train the PCA-NN neural network. During training, the average L^2-loss
L_i(μ, μ̂) = (1/d_μ) ∑_ℓ=1^d_μ ‖ (μ^i_ℓ - μ̂^i_ℓ) / μ^i_ℓ ‖_2
is obtained and its average over the training data
L(μ,μ̂) = (1/N_train) ∑_i=1^N_train L_i(μ, μ̂)
is optimized, see Algorithm <ref> in Section <ref> for further details. μ̂ is the output of the model which is a composition of PCA applied on the axial stress -σ_1 followed by the neural network.
* Testing (Online Stage): Once the network is trained, it is used for testing with the next N_test unseen data. Testing proceeds as shown in Algorithm <ref> in Section <ref>. The network's performance is evaluated with the loss function given in Equation <ref>, but averaged over the N_test parameters by
L(μ, μ̂) = (1/N_test) ∑_i=1^N_test L_i(μ, μ̂).
* Verification Stage (optional): This stage is used to ascertain the efficiency of the proposed model. Here, the material parameters μ̂ learned from the neural network are used as input to MESHFREE simulations, in order to compare the resulting stress -σ̂_1 with the stress -σ_1 obtained from the ground truth parameters μ. The difference is measured using the relative L^2-error given by
E(-σ_1, -σ̂_1) = (1/N_test) ∑_i=1^N_test E_i(-σ_1, -σ̂_1),
where
E_i(-σ_1, -σ̂_1) = ‖ (σ_1^i - σ̂_1^i) / σ_1^i ‖_2.
§.§.§ Network Architecture
In our numerical examples, we followed the outline described in <cit.> and used a fully connected feed-forward neural network (FCN) for the mapping of the stress latent space (the output of the PCA on the stress) to the parameters. The number of nodes per layer starts from the reduced dimension (the number of retained PCA components), followed by 500, 1000, 2000, 1000, 500, and finally d_μ (the number of parameters to be learned, here 7). The reduced dimension is a hyperparameter that has to be tuned; in our case, a value of 50 led to the best results. For the PCA, we use standard randomized singular value decomposition (SVD) implementations <cit.>. Figure <ref> illustrates the overall PCA-NN architecture.
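A minimal PyTorch sketch of such a network is given below; the layer widths follow the 50-500-1000-2000-1000-500-7 layout described above, while the activation function (ReLU), the optimizer (Adam), and the learning rate are assumptions, since they are not fixed by the text.

import torch
import torch.nn as nn

class PCANN(nn.Module):
    """Maps the 50 retained PCA coefficients of a stress curve to the 7 barodesy parameters."""
    def __init__(self, d_in=50, d_out=7):
        super().__init__()
        widths = [d_in, 500, 1000, 2000, 1000, 500]
        layers = []
        for a, b in zip(widths[:-1], widths[1:]):
            layers += [nn.Linear(a, b), nn.ReLU()]
        layers += [nn.Linear(widths[-1], d_out)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

def relative_loss(mu_hat, mu):
    # mean over batch and parameters of the relative error |mu_hat - mu| / |mu|
    return torch.mean(torch.abs(mu_hat - mu) / torch.abs(mu))

model = PCANN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# one training step, with z_batch (B, 50) and mu_batch (B, 7) from the PCA stage:
# optimizer.zero_grad(); loss = relative_loss(model(z_batch), mu_batch); loss.backward(); optimizer.step()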
§.§.§ Algorithm
As a purely data-driven method, no physics or PDE is needed in the training of the neural network. However, the data used to train the network is obtained from MESHFREE's GFDM for solving the underlying PDE. By training the network with these numerically-given input-output pairs, we obtain a neural operator that solves the PDE for various instances irrespective of the underlying discretization.
We specify the algorithm for the continuum mechanics problem described in Section <ref> using the barodesy model. The training data is the pair (μ^i,-σ_1^i), with each μ^i∈ℝ^d_μ and -σ_1^i ∈ℝ^d. Training then proceeds as in Algorithm <ref>, while testing of the trained network proceeds as in Algorithm <ref>.
§ NUMERICAL RESULTS
Randomized PCA is used to reduce the dimensions of the principal stress component from 675 to 50. This reduced dimension is the input to the FCN similar to that in Section <ref>.
Because the parameter space is already low-dimensional, there is no need for a PCA after the FCN; the output of the FCN yields the target parameters directly. Of the 6000 data pairs generated, 75% is used for training. During training, the relative L^2-error of the individual parameters is evaluated, and their average is the loss function minimized for optimizing the weights of the neural network. However, due to the nature of this loss function, the learning of the parameters of higher magnitude is favored during training, as can be seen in Figure <ref>. We observe that the loss for parameters c_4, c_5, and c_6 (which are all of the order of 1000) is minimized, while for the other parameters (which are of the order of 1) the loss is almost not minimized. As a remedy, the parameters of lower magnitude are scaled such that they are of the same order (of 1000) as the parameters of higher magnitude. In this way, learning of all individual parameters is achieved, as shown in Figure <ref>. Figure <ref> illustrates the overall loss as the average of the individual losses.
We obtained an average relative L^2-error of 2.63 × 10^-3 on the test data set. Figure <ref> shows the comparison of the ground truth (input to the MESHFREE simulation in blue) and the learned parameters (PCA-NN in orange) for four randomly selected examples. The learned parameters of these four examples were further used in a verification step in order to compare the resulting MESHFREE output axial stress with that produced by the ground truth parameters. The average relative error obtained was 4.12 × 10^-3. This is illustrated in Figure <ref> (top), where there is an obvious overlap of the axial stresses from the learned parameters with those from the ground truth parameters. Figure <ref> (bottom) shows the corresponding relative L^2-errors.
§ CONCLUSIONS AND OUTLOOK
The presented results highlight the potential of deep learning in continuum mechanics, specifically in material parameters identification for complex material models – a task that up till now depends heavily on expert knowledge if not trial and error. By exploiting deep learning methods, we obtain the model parameters from MESHFREE simulations. It will be equally interesting to see how the results change when experimental data is used instead of or in addition to simulation data.
The proposed method is an important first step since simulation and experimental results are almost always noisy in real-life problems. An interesting future study will be to look at the effect of different noise levels on the neural network's strength in parameter identification. This is a common practice in the field of inverse problems. For example, <cit.> studied the effects of noise on both function-approximating networks and neural operators for PDEs. There, the PCA-based method – when fed with noise – did not deviate so much from the noiseless case for increasing noise level. This is also promising for our application problem.
Acknowledgments
The authors are funded by the German Federal Ministry of Education and Research (BMBF) in the project HYDAMO. The authors would like to thank the MESHFREE team at Fraunhofer Institute for Industrial Mathematics ITWM for their support.
|
http://arxiv.org/abs/2307.04363v1 | 20230710062314 | Diffusion dynamics of star-shaped macromolecules in dilute solutions | [
"Prabeen Kumar Pattnayak",
"Aloke Kumar",
"Gaurav Tomar"
] | cond-mat.soft | [
"cond-mat.soft"
] |
Polymer chains dissolved in a solvent take random conformations due to large internal degrees of freedom and are characterized geometrically by their average shape and size. The diffusive dynamics of such large macromolecules play an indispensable role in a plethora of engineering applications. The influence of the size of the polymer chain on its diffusion is well studied, whereas the same cannot be said for the shape of the polymer chain. In the present work, the influence of shape on the center-of-mass diffusion of the star-shaped chains in solution is investigated using Multi-particle Collision Dynamics. Star-shaped chains of varying degrees of functionality are modeled in a good solvent at infinite dilution. The radius of gyration(R_g) of the star-shaped chains follows a functionality-independent scaling law with the chain length(N), R_g ∼ N^ν, where ν∼ 0.627. The shape of the polymer chains is calibrated by relative shape anisotropy. Highly anisotropic star-shaped polymer chains are found to have a faster rate of diffusion along the translational direction due to a slower rate of rotational diffusion when the radius of gyration of the polymer chains is maintained constant.
§ INTRODUCTION
Polymeric fluids are a unique class of complex fluids that show a plethora of fascinating non-Newtonian behaviours<cit.>, which can be understood with the transport and rheological properties of the fluid. One key challenge of the complex fluid community is understanding how the macroscopic properties of the polymeric fluids arise from microscopic interactions of the macromolecules<cit.>. The polymer chain dissolved in a solvent can take a multitude of conformations due to its large internal degrees of freedom. The average shape and size are used to characterize a polymer chain geometrically: the radius of gyration is widely used to describe the size and the eigenvalues of the gyration tensor for the shape calibration<cit.>. Advances in controlled polymerization have led to synthesizing complex polymer structures like star-shaped, comb-shaped, H-shaped, ring, and many more.<cit.>. The diffusive dynamics of such complex polymer chains are of fundamental interest in the biophysics community and are ubiquitous in numerous engineering applications. Star-shaped polymers are used for the controlled delivery of drugs in biomedical applications<cit.> and as viscosity index modifiers in oil industries<cit.>. Polyethylene glycol stars are used for protein delivery<cit.>. Understanding macromolecular diffusion in a biological cell is important for its various functions, such as the movement of plasmids and transport of amino acids <cit.> <cit.>. Hence, the influence of the shape and size of the polymer chain on its diffusive dynamics in solution is essential.
Most studies on the diffusion of complex polymer chains have been done by keeping the length of the polymer chain constant. Using fluorescence microscopy, Robertson et al.<cit.> have shown a lower radius of gyration and a higher diffusion coefficient for circular DNA molecules than linear ones for the same chain length. Using Brownian dynamics simulations with hydrodynamic interaction, Kanaeda and Deguchi<cit.> have reported higher diffusion coefficients for the ring polymers than for the linear polymers of the same chain length. Hegde et al.<cit.> have reported similar findings for the ring polymers in comparison to the linear chains for the same chain length by using three different simulation techniques. Singh et al.<cit.> have shown, using Multi-particle Collision Dynamics(MPCD), that the center-of-mass diffusion coefficient of the star polymer chains decreases with an increase in their radius of gyration. Hence, when it comes to size, it is clear that the higher the size of the polymer chain, the lower the center-of-mass diffusion coefficient. However, it is difficult to comment on the influence of the shape of the polymer chain on its diffusion from the same polymer chain length study as both shape and size are distinct for different polymer chain architectures<cit.>. The diffusion study of complex polymer chains for the same size case is scanty. Hegde et al.<cit.> have reported a higher diffusion coefficient for the linear chains than the ring chains for the same radius of gyration using Molecular Dynamics, MPCD, and the Lattice Boltzmann method and also noted that size could not be the only factor that influences the diffusion of the chain. Therefore, the effect of the shape parameter on the center-of-mass diffusion of the polymer chains still remains an open question.
In this work, the effect of the shape parameter on the center-of-mass diffusion of the star-shaped polymer chains in solution is studied in the limit of infinite dilution using a mesoscopic coarse-grained simulation method, namely MPCD. For simulating the Brownian motion of the complex polymer chains in a solution, MPCD is frequently used as it incorporates both thermal fluctuation and long-range hydrodynamic interactions<cit.>. At first, the shape and size of star-shaped polymer chains with different functionality are analyzed using the gyration tensor and compared with linear polymer chains at the same chain length. Subsequently, the translational diffusion of six different types of chains (one linear and five star-shaped chains) with the same radius of gyration is studied using the center-of-mass mean square displacement, followed by their rotational diffusion using the reorientation correlation function. Finally, the diffusion study is correlated to the shape characterization study in order to find the effect of the shape parameter on the center-of-mass diffusion of the star-shaped polymer chains in a solution.
§ NUMERICAL FORMULATION
The coarse-grained bead-spring model represents the polymer chains dissolved in the solution. To replicate good solvent conditions, the excluded volume interactions between the monomer beads are modeled using the repulsive part of the 12-6 Lennard-Jones (LJ) potential, also known as Weeks-Chandler-Andersen potential<cit.> (U_WCA), defined as:
U_WCA(r) =
4ε[ ( σ_p/r)^12 - ( σ_p/r)^6 ] + ε r≤ 2^1/6σ_p
0 otherwise
where σ_p is the diameter of a monomer bead, r is the distance between two beads and ε = k_BT is the strength of interaction, k_B is the Boltzmann’s constant and T is temperature. The neighboring monomers of the polymer chain are connected with springs, the potential of which is given by Finitely Extensible Nonlinear Elastic (FENE)<cit.>, defined as:
U_FENE(r) =
-1/2 k r_0^2 ln[ 1 - (r/r_0)^2 ] r≤ r_0
∞ otherwise
where k is the spring constant, and r_0 is the maximum length of extension. The values of k and r_0 are 30 k_BT/σ_p^2 and 1.5 σ_p, respectively, as recommended by Kremer and Grest<cit.>. The bead-spring model, the FENE potential, and the Kremer and Grest parameters are widely used in coarse-grained modeling of polymer chains<cit.>. The star-shaped polymer chains of varying degrees of functionality (number of arms) have been modeled by connecting different linear arms at their ends instead of connecting them to a single central monomer, ensuring equal flexibility of the arms for all functionalities.
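For illustration, the bonded interaction between neighboring monomers can be evaluated directly from Equations (<ref>) and (<ref>) in reduced units (ε = k_BT = 1, σ_p = 1); the following sketch only tabulates the potentials and is not part of the simulation code.

import numpy as np

EPS, SIGMA = 1.0, 1.0            # epsilon = k_B T and sigma_p in reduced units
K, R0 = 30.0, 1.5                # Kremer-Grest FENE parameters

def u_wca(r):
    r = np.asarray(r, dtype=float)
    u = 4.0*EPS*((SIGMA/r)**12 - (SIGMA/r)**6) + EPS
    return np.where(r <= 2.0**(1.0/6.0)*SIGMA, u, 0.0)

def u_fene(r):
    r = np.asarray(r, dtype=float)
    return np.where(r < R0, -0.5*K*R0**2*np.log(1.0 - (r/R0)**2), np.inf)

r = np.linspace(0.8, 1.45, 500)
u_bond = u_wca(r) + u_fene(r)                    # total bonded interaction
print("bond-potential minimum near r =", r[np.argmin(u_bond)], "sigma_p")  # ~0.97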
The solvent is modeled explicitly as an ensemble of non-interacting point particles of finite mass (m) using a mesoscopic coarse-grained simulation technique, MPCD<cit.>. MPCD consists of alternating streaming and collision steps. In the streaming step, the MPCD particles with velocity v_i undergo ballistic motion and their positions (r_i) are updated as:
r_i( t + δ t) = r_i(t) + δ t v_i(t)
In the collision step, the simulation box is divided into cubic cells of equal size(a), and all the particles within a cell undergo stochastic collision. The collision of the MPCD particles is modeled using a momentum-conserving version of the Andersen Thermostat, also known as MPCD-AT<cit.>, in which the particle velocities (v_i) are updated as:
v_i(t + δ t) = v_cm(t) + v_i^ran - Δv_cm^ran
where v_cm is the center-of-mass velocity of the collision cell, v_i^ran is a random velocity selected from a Maxwell-Boltzmann distribution, and Δv_cm^ran is the change in center-of-mass velocity of the collision cell due to the addition of v_i^ran. During the streaming interval of MPCD, the positions, and velocities of the monomer beads evolve by the velocity-Verlet algorithm<cit.> with a time step δ t_MD. During the collision step, the monomers are considered MPCD particles and undergo stochastic collisions. The three components of v_i^ran are selected from a Gaussian distribution with variance k_BT/m for the solvent particles and k_BT/M for the monomer beads, where M is the mass of a monomer. This way of considering the monomers just like other MPCD particles in the collision step for modeling solvent-monomer interaction is often used in recent studies<cit.><cit.><cit.><cit.><cit.> due to its advantage of avoiding spurious depletion forces<cit.> which could lead to breakage of FENE bonds. Galilean invariance is ensured by randomly shifting the cells before each collision step by a vector with the three components randomly chosen from [-a/2,a/2]<cit.>. All the simulations have been performed using the MPCD-AT routines in LAMMPS<cit.>(Chen et al.<cit.> <cit.>).
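A simplified sketch of one MPCD-AT collision step for equal-mass particles is given below; mass weighting for the monomers, the cell-linked bookkeeping, and the LAMMPS implementation details are omitted, so this only illustrates Equation (<ref>) together with the random cell shift.

import numpy as np

def mpcd_at_collision(pos, vel, box, a=1.0, kT=1.0, m=1.0, rng=np.random.default_rng()):
    """One Andersen-thermostat collision step for equal-mass particles."""
    shift = rng.uniform(-a/2, a/2, size=3)                      # random cell shift
    cells = np.floor(((pos + shift) % box) / a).astype(int)
    keys = cells[:, 0] + 1000*(cells[:, 1] + 1000*cells[:, 2])  # flat cell index
    v_ran = rng.normal(0.0, np.sqrt(kT/m), size=vel.shape)
    new_vel = vel.copy()
    for k in np.unique(keys):
        idx = np.where(keys == k)[0]
        v_cm = vel[idx].mean(axis=0)                            # cell centre-of-mass velocity
        new_vel[idx] = v_cm + v_ran[idx] - v_ran[idx].mean(axis=0)
    return new_vel

# streaming step between collisions (ballistic motion): pos = (pos + dt * vel) % box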
The size of the collision cells is taken to be the same as the size of the monomer beads, a = σ_p. The average density of the MPCD solvent equals 5m/σ_p^3. The mass of a monomer(M) is taken as 5m to achieve neutral buoyancy. The MD time step (δ t_MD) equals 0.002τ. The MPCD collision time step (δ t) is 0.09τ, where τ is the intrinsic unit of time equals √(mσ_p^2/k_BT). The resulting viscosity and Schmidt number(Sc) of the MPCD fluid are 4 k_BT/σ_p^3 and 12, respectively. The size of the cubic simulation box is increased with polymer chain length to avoid the finite size effects following the previous studies.<cit.><cit.> Box size(L) equals 32σ_p for the set of chain lengths { 24σ_p, 36 σ_p, 48σ_p } , 48 σ_p for the set of chain lengths { 60σ_p, 84 σ_p }, and 64σ_p for the set of chain lengths { 108σ_p, 192 σ_p }. The equilibration simulation run is performed for 2×10^6 MD time steps. The results are time averaged over 5×10^8 MD time steps and ensemble-averaged over five system replicas, each with a unique set of random velocities at starting of the simulation and during the stochastic collision, both taken from Maxwell-Boltzmann distribution. The measured parameters will be expressed in reduced units using the energy scale k_BT, length scale σ_p, and mass scale m. Periodic boundary conditions are implemented in all directions. The snapshots of the simulations are shown in Figure <ref>.
To validate the MPCD routines, the Brownian motion of 250 colloidal particles of the same size as the monomer beads are modeled in the MPCD solvent. The variation of their average mean square displacement (MSD) with lag time (Δ t) is plotted in Figure <ref>(a). Typically, power law describes the variation of MSD with lag time: MSD ∝Δ t^b. The dynamics of the solutes can be diffusive (b=1), sub-diffusive (b<1), or super-diffusive (b>1). From Figure <ref>(a), we note that we obtain b = 1 as expected for the colloids. Hence, the dynamics of the colloids are captured well by the simulation. Further, the radius of gyration(R_g) vs. chain length (N) is plotted for the linear chain in Figure <ref>(b). A power law behavior can be observed with a scaling exponent of 0.623 for the linear chain. The value of the power law exponent reported by Chen et al.<cit.> using similar simulation parameters is 0.61. Linear chains have been studied widely, and their corresponding scaling exponent values for good solvent conditions are summarized in Table <ref>. The exponent value calculated in the present work agrees well with earlier studies. In addition, the diffusion coefficient(D) is calculated from the center-of-mass mean square displacement(MSD) vs. lag time plot for the linear chains of different lengths. The variation of D with N is shown in Figure <ref>(b). It also follows a power law D ∼ N^-ν_d, where ν_d = 0.622. The equality of ν with ν_d confirms the Zimm theory for the diffusion of a polymer chain with intra-chain hydrodynamic interactions, which predicts D ∼1/R_g.
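In practice, the exponent b and the diffusion coefficient can be extracted from the center-of-mass trajectory roughly as sketched below; com is a hypothetical array of unwrapped center-of-mass positions sampled every dt_frame time units, and the fitting details differ from those actually used.

import numpy as np

def msd(com, max_lag):
    """Time-averaged mean square displacement of an unwrapped trajectory com (n_frames, 3)."""
    lags = np.arange(1, max_lag)
    return lags, np.array([np.mean(np.sum((com[k:] - com[:-k])**2, axis=1)) for k in lags])

# lags, m = msd(com, 2000)
# b = np.polyfit(np.log(lags), np.log(m), 1)[0]   # power-law exponent; ~1 if diffusive
# D = np.polyfit(lags * dt_frame, m, 1)[0] / 6.0  # slope of MSD vs lag time equals 6D in 3-D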
The difference between a good solvent and a poor solvent can be demonstrated by introducing attraction between monomer beads using the standard 12-6 LJ potential with a cutoff of 2.5σ_p instead of the WCA potential, as explained by Peng et al.<cit.>. In a good solvent, the polymer chain forms a coil, whereas it collapses into a globule in a poor solvent. This coil-to-globule transition is shown in Figure <ref> and can be observed by measuring the radius of gyration, which is approximately 5σ_p in a good solvent for a chain length of 56σ_p, compared to approximately 2σ_p in the poor solvent case. This reduction in the size of the polymer chain is also visible in the resulting diffusion coefficients, which are 0.0061σ_p^2/τ and 0.0032σ_p^2/τ for the poor and good solvents, respectively.
§ RESULTS AND DISCUSSION
§.§ Shape and size of star-shaped chains
The gyration tensor (S) of a polymer chain is defined as the dyadic product of the position vector (P) of a monomer bead, taken in the center-of-mass reference frame, with its transpose, averaged over all the monomers of the chain<cit.>.
S = 1/N∑_i=1^N P_i P_i^T, P_i = [ x_1^i - x_1^cm; x_2^i - x_2^cm; x_3^i - x_3^cm ]
where (x_1^cm,x_2^cm,x_3^cm) represents the center of mass of the polymer chain consisting of N identical monomers located at (x_1^i,x_2^i,x_3^i), and is calculated as follows:
x_1^cm = 1/N∑_i=1^Nx_1^i,
x_2^cm = 1/N∑_i=1^Nx_2^i,
x_3^cm = 1/N∑_i=1^Nx_3^i
The elements of S can be written using indicial notation as:
S_pq = 1/N∑_i=1^N (x_p^i - x_p^cm)(x_q^i - x_q^cm)
The polymer chain's shape and size can be characterized by the eigenvalues of S, i.e., λ_1, λ_2, and λ_3. The radius of gyration (R_g) represents the size of the polymer chain; its square is equal to the trace of the gyration tensor<cit.>.
R_g^2 = Tr(S) = λ_1 + λ_2 + λ_3
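For concreteness, the gyration tensor and the radius of gyration can be obtained from a set of monomer coordinates in a few lines of NumPy. This is only an illustrative sketch of the definitions above, not the authors' analysis code; the coordinate array is a random placeholder.

import numpy as np

positions = np.random.rand(48, 3)               # hypothetical (N, 3) bead coordinates

def gyration_tensor(pos):
    rel = pos - pos.mean(axis=0)                # positions in the center-of-mass frame
    return rel.T @ rel / len(pos)               # S_pq = (1/N) sum_i rel_ip * rel_iq

S = gyration_tensor(positions)
eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]  # lambda_1 >= lambda_2 >= lambda_3
Rg = np.sqrt(eigvals.sum())                     # R_g^2 = Tr(S)
print(Rg)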
We evaluate R_g for four different types of chain (linear, 3-armed star, 4-armed star, and 6-armed star) using seven different chain lengths, and the results are summarized in Figure <ref>(a). The radius of gyration follows a power law with polymer chain length, R_g ∼ N^ν. The power-law exponent (ν) reflects the quality of the solvent. The values of ν calculated in the present simulations are 0.623, 0.627, 0.631, and 0.626 for the linear, 3-armed star, 4-armed star, and 6-armed star chains, respectively. The scaling law is found to be independent of the functionality of the star-shaped chains, with an average value of ν ∼ 0.627, indicating similar scaling behavior of the linear and star-shaped chains under good solvent conditions<cit.><cit.><cit.>. Table <ref> summarizes the values of ν for linear chains reported by experiments, simulation, and theory. The calculated average value of ν is in good agreement with the existing results in the literature. For the same chain length, linear chains are bigger than the star-shaped chains, and among the star-shaped chains, the size decreases with an increase in functionality, i.e., the number of arms, as expected. Relative to the linear chain, this reduction in the size of the branched chains at the same polymer chain length is measured using the geometrical shrinking factor (g_s), defined as the ratio of the mean squared radius of gyration of the branched chain to that of the linear chain, g_s = ⟨ R_g,b^2 ⟩/⟨ R_g,l^2 ⟩<cit.>. The values of g_s for star chains with different functionalities and chain lengths are shown in Figure <ref>(b). We note that g_s does not vary much with chain length for a particular chain type. The value of g_s for a star chain with f=5 reported by Khabaz and Khare<cit.> is approximately 0.5, which falls between the values of 0.41 (6-armed star) and 0.58 (4-armed star) calculated in the present work and suggests an approximately linear variation of g_s with f.
The shape of the polymer chains can be characterized using the asphericity (b) and the relative shape anisotropy (κ^2), defined as<cit.><cit.>
b = λ_1 - (λ_2 + λ_3)/2, λ_1 ≥ λ_2 ≥ λ_3
κ^2 = 1 - 3 (λ_1 λ_2 + λ_2 λ_3 + λ_3 λ_1)/(λ_1 + λ_2 + λ_3)^2
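Both shape descriptors follow directly from the ordered eigenvalues of the gyration tensor. The short sketch below only restates the two equations above in code; the eigenvalues are hypothetical placeholder values, not simulation output.

import numpy as np

eigvals = np.array([2.5, 0.8, 0.3])               # hypothetical lambda_1 >= lambda_2 >= lambda_3
l1, l2, l3 = eigvals

b = l1 - 0.5 * (l2 + l3)                          # asphericity
kappa2 = 1.0 - 3.0 * (l1*l2 + l2*l3 + l3*l1) / (l1 + l2 + l3)**2

b_normalized = b / (l1 + l2 + l3)                 # b / R_g^2, as reported in the text
print(b_normalized, kappa2)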
The variation of the two shape parameters with the polymer chain length is given in Figure <ref>. The asphericity values are normalized by R_g^2 to make them independent of size. For the individual architectures, the values of both shape parameters do not vary much over the chain lengths. The normalized asphericity can take any value between 0 and 1: it is 0 for a sphere or any of the platonic solids and 1 for a rod-like structure. As expected, the linear chains have higher asphericity values than the star-shaped chains. Its value is ∼ 0.64 for the linear chain in the present work, which is close to 0.625 (calculated using eigenvalues) reported by Koyama<cit.> and 0.66 reported by Theodorou and Suter<cit.>. Among the star-shaped chains, the normalized asphericity decreases with an increase in functionality. Khabaz and Khare<cit.> have reported b/R_g^2 ∼ 0.362 for the 5-armed star chain, which falls between 0.29 (6-armed star) and 0.4 (4-armed star) in the present work. The other shape parameter is the relative shape anisotropy (κ^2), which also varies between 0 and 1: its value is 1 for a rigid rod and 0 for a sphere and all platonic solids. It is also higher for linear chains than for the star-shaped ones, as expected. In the present work, the value of κ^2 for the linear chain is ∼ 0.45, which is nearly the same as the 0.44 reported by Khabaz and Khare<cit.>. As anticipated, for star-shaped chains, κ^2 decreases with increasing functionality. In the present work, κ^2 ∼ 0.3 (3-armed star), 0.21 (4-armed star), and 0.12 (6-armed star), which are slightly lower than 0.3454 (3-armed star), 0.2463 (4-armed star), and 0.1512 (6-armed star), respectively, reported by Zifferer<cit.>. The value of κ^2 reported by Khabaz and Khare<cit.> for a 5-armed star chain is ∼ 0.16, which falls between 0.12 (6-armed star) and 0.21 (4-armed star) calculated in the present work. To summarize, linear chains are less spherical and more anisotropic than the star-shaped chains, and among the star-shaped chains, the higher the functionality, the more spherical and less anisotropic the chain is. The variation of the shape parameters with functionality is plotted in Figure <ref>(a) and correlated with the diffusion coefficients in a later section.
§.§ Translational diffusion of star-shaped chains
The diffusion rate of a polymer chain in solution can be measured from the variation of the center-of-mass mean square displacement (MSD) with lag time. One linear and five star-shaped chains are modeled to investigate the influence of the shape of the polymer chain on its diffusion. The effect of chain size is eliminated by selecting the polymer chain length (N) such that the resulting radius of gyration is approximately 5σ_p for all six types. The simulation box size is 48σ_p for all six cases. The MSD (Δ r^2) vs. lag time (Δ t) plot is shown in Figure <ref>. At short times (less than 400τ), the MSD increases faster than linearly with time because of the inertia of the chain. At longer times, the MSD reaches the linear diffusive regime, from which the diffusion coefficients (D) are calculated using the relation Δ r^2 = 6D Δ t and summarized in the second-to-last column of Table <ref>.
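The extraction of D from the diffusive part of the MSD curve can be sketched as follows. The center-of-mass trajectory below is a synthetic random walk, and the fitting window is chosen crudely; in practice the window would follow the diffusive regime identified in Figure <ref>.

import numpy as np

com = np.cumsum(np.random.randn(20000, 3) * 0.01, axis=0)   # toy center-of-mass trajectory
dt_md = 0.002                                                # MD time step in units of tau

lags = np.arange(1, 5000, 50)
msd = np.array([np.mean(np.sum((com[lag:] - com[:-lag])**2, axis=1)) for lag in lags])

mask = lags > lags.max() // 2                                # assumed diffusive regime
slope = np.polyfit(lags[mask] * dt_md, msd[mask], 1)[0]
D = slope / 6.0                                              # MSD = 6 D * dt
print(f"estimated diffusion coefficient: {D:.4g}")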
The linear chain can be considered a star chain with f=2 and has the highest value of the diffusion coefficient. Among the star chains, the diffusion coefficient decreases with increased functionality. Using the values of the diffusion coefficients, the hydrodynamic radius of the polymer chains can be calculated as<cit.>,
R_H = k_BT/(6 πη D)
where D is the translational diffusion coefficient of the polymer chain, and η is the solvent viscosity. The ratio of the radius of gyration and hydrodynamic radius, ρ = R_g/R_H, is a size-independent quantity and represents the effect of the architecture of the polymer chain on its diffusion. The variation of ρ with the functionality of the star polymers is plotted in Figure <ref>. We note that ρ decreases with an increase in the functionality of the star chain, as reported by Huber et al. <cit.> and Singh et al. <cit.>. Since all six types of chains are of the same size, this difference in the diffusion coefficient values can only be attributed to their shape parameters. In Figure <ref>, we have shown that the linear chain is more anisotropic and less spherical than the star-shaped chains, and among the star chains, κ^2 and b/R_g^2 decrease with increased functionality. Hence, the higher a star chain's relative shape anisotropy and normalized asphericity, the faster it diffuses along the translational direction. We investigate this further by computing the rotational diffusion of the polymer chains.
§.§ Rotational diffusion of star-shaped chains
The polymer chain reorients itself continuously in the solution while diffusing along the translational direction. Because the gyration tensor is symmetric, it has real eigenvalues and orthogonal eigenvectors, and it approximates the polymer chain as an ellipsoid<cit.>. The reorientation of the polymer chain is equivalent to the rotation of this imaginary ellipsoid, as explained using a schematic representation in Figure <ref>. Any vector rigidly attached to the polymer chain can be used to measure the rate of reorientation. In this work, the eigenvector (e_1) corresponding to the largest eigenvalue (λ_1) of the gyration tensor is selected for measuring the rate of reorientation of the corresponding polymer chain. The relevant reorientational correlation function of the polymer chain can be defined as,
C(t) = ⟨ P_2(e_1(0)·e_1(t)) ⟩
where P_2(x) = (3x^2 - 1)/2 is the second-order Legendre polynomial, and the angle brackets represent the time and ensemble average over five system replicas. For an isotropically reorienting polymer chain, following Wong et al.<cit.>, the reorientational correlation function can be approximated as,
C(t) = e^-6D_Rt
where D_R is the rotational diffusion coefficient of the polymer chain.
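A bare-bones version of this analysis, computing C(t) from a time series of the principal eigenvector and fitting the exponential decay, might look as follows. The e_1 time series here is generated by a toy random-reorientation process rather than taken from the simulations, and the frame spacing is an assumed value.

import numpy as np

def p2(x):
    return 0.5 * (3.0 * x**2 - 1.0)               # second-order Legendre polynomial

# toy unit-vector time series standing in for e_1(t) of the gyration tensor
e1 = np.zeros((5000, 3))
e1[0] = [1.0, 0.0, 0.0]
for t in range(1, 5000):
    v = e1[t - 1] + 0.02 * np.random.randn(3)     # small random reorientation per frame
    e1[t] = v / np.linalg.norm(v)

lags = np.arange(1, 500, 10)
C = np.array([np.mean(p2(np.sum(e1[lag:] * e1[:-lag], axis=1))) for lag in lags])

dt = 0.09                                         # assumed time per frame in units of tau
valid = C > 0                                     # fit log C(t) = -6 D_R t where C > 0
D_R = -np.polyfit(lags[valid] * dt, np.log(C[valid]), 1)[0] / 6.0
print(f"estimated rotational diffusion coefficient: {D_R:.4g}")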
The variation of the reorientational correlation function with time is plotted in Figure <ref>. We note that C(t) decays faster for the star-shaped chains than for the linear chain, and for the star-shaped chains, the higher the functionality, the faster the decay of C(t). The rotational diffusion coefficients are calculated from the exponential fit using the least-squares method and are summarized in the last column of Table <ref>. The corresponding coefficient of determination (R^2) is more than 0.99 in all cases. The faster the decay of the reorientational correlation function, the higher the value of D_R. The linear chain has the lowest value of D_R, and among the star chains, D_R increases with increased functionality. As discussed earlier for translational diffusion, this difference in the rotational diffusion coefficients arises from the shape parameters, since all six types of polymer chains considered here are of the same size. In terms of shape parameters, star polymer chains with lower values of relative shape anisotropy and normalized asphericity have a higher rate of rotational diffusion. Lower values of κ^2 and b/R_g^2 correspond to a higher symmetry of the monomer distribution with respect to the coordinate axes, and it is intuitive that a star-shaped chain reorients faster when the distribution of its monomers is more symmetrical. Note that the variation of the rotational diffusion coefficient with functionality and shape parameters is opposite to that of the translational diffusion coefficient.
§.§ Correlation of diffusion and shape parameters of star-shaped chains
The variation of the shape parameters and of the two types of diffusion coefficients with the functionality of the chains is plotted in Figure <ref>(a) and Figure <ref>(b), respectively, where the linear chain is considered a star chain with f = 2. Of the two shape parameters, the relative shape anisotropy (κ^2) can be expressed in terms of the invariants of the gyration tensor and is the overall measure of shape anisotropy<cit.>. Comparing the two types of diffusion coefficients in Figure <ref>(b) with the relative shape anisotropy in Figure <ref>(a), it can be stated that star-shaped chains with higher κ^2 values have a higher D and a lower D_R. The origin of a polymer chain's translational and rotational diffusive motion is its collisions with the surrounding solvent particles. The radius of gyration can be interpreted as the radius of an imaginary sphere surrounding the polymer chain in the solution. Maintaining the same R_g for all six types of chains leads to the same size of the corresponding imaginary sphere; therefore, all six types of chains interact with approximately the same number of solvent particles on average. Highly spherical and isotropic star-shaped polymer chains expend more of this energy in rotational diffusion, which leaves less for diffusing along the translational direction; the opposite holds for highly anisotropic star-shaped chains. Hence, the higher the relative shape anisotropy of a star-shaped chain, the slower its rotational diffusion and the faster its translational diffusion, as shown in Figure <ref>(b).
The variation of the translational and rotational diffusion coefficients with the relative shape anisotropy of the star-shaped chains having the same value of R_g is shown in Figure <ref>. Higher values of κ^2 lead to a lower rotational diffusion coefficient and a higher translational diffusion coefficient. From these results, we conclude that a star polymer chain with a higher value of relative shape anisotropy will have a slower rate of rotational diffusion and will diffuse faster in the translational direction. Even though this is demonstrated using star-shaped chains, the argument can be extended to other polymer configurations as well. Hegde et al. have reported that linear chains have higher translational diffusion coefficients than ring chains when both have the same radius of gyration<cit.>. From the definition of relative shape anisotropy (equation <ref>), it is intuitive that the linear polymer chain will have a higher value of κ^2 than the ring polymer chain. Hence, our argument also holds for the ring vs. linear case. Nevertheless, to verify this argument for generic polymer configurations, the study of other polymer chain architectures is essential.
§ CONCLUSION
In this work, the Brownian diffusion of the linear and star-shaped polymer chains of different functionalities is simulated using MPCD. It is shown that the radius of gyration of the star-shaped polymer chains follows a functionality-independent scaling law with chain length, in which the scaling exponent ν∼ 0.627. The linear chain is shown to be more anisotropic than the star-shaped chains, and for star-shaped chains, the value of relative shape anisotropy decreases with an increase in functionality. For the same radius of gyration, the linear chain diffuses at a faster rate along the translational direction and has a slower rate of rotational diffusion than the star-shaped chains. Among star-shaped chains with the same radius of gyration, higher functionality leads to a higher value of rotational diffusion coefficient and a slower rate of diffusion along the translational direction. In terms of the shape parameter, we conclude that the star-shaped chains with higher values of relative shape anisotropy have a slower rate of rotational diffusion and therefore diffuse at a faster rate along the translational direction. Hence, shape anisotropy leads to faster center-of-mass diffusion of star-shaped polymer chains in a solution.
G.T. acknowledges partial support from the Department of Science and Technology National Supercomputing Mission HPC system in the Supercomputing Education and Research Center-Indian Institute of Science. A.K. acknowledges partial support from SERB CRG/2022/005381. P.K.P. acknowledges partial support from the Ministry of Education, Government of India.
|
http://arxiv.org/abs/2307.05734v2 | 20230711190237 | Towards quantum-enabled cell-centric therapeutics | [
"Saugata Basu",
"Jannis Born",
"Aritra Bose",
"Sara Capponi",
"Dimitra Chalkia",
"Timothy A Chan",
"Hakan Doga",
"Frederik F. Flother",
"Gad Getz",
"Mark Goldsmith",
"Tanvi Gujarati",
"Aldo Guzman-Saenz",
"Dimitrios Iliopoulos",
"Gavin O. Jones",
"Stefan Knecht",
"Dhiraj Madan",
"Sabrina Maniscalco",
"Nicola Mariella",
"Joseph A. Morrone",
"Khadijeh Najafi",
"Pushpak Pati",
"Daniel Platt",
"Maria Anna Rapsomaniki",
"Anupama Ray",
"Kahn Rhrissorrakrai",
"Omar Shehab",
"Ivano Tavernelli",
"Meltem Tolunay",
"Filippo Utro",
"Stefan Woerner",
"Sergiy Zhuk",
"Jeannette M. Garcia",
"Laxmi Parida"
] | quant-ph | [
"quant-ph",
"q-bio.QM"
] |
Saugata Basu^1, Jannis Born^2, Aritra Bose^3, Sara Capponi^4,5, Dimitra Chalkia^6, Timothy A Chan^7,8, Hakan Doga^9, Frederik F. Flöther^10, Gad Getz^11,12,13,14, Mark Goldsmith^15, Tanvi Gujarati^9, Aldo Guzmán-Sáenz^3, Dimitrios Iliopoulos^6, Gavin O. Jones^9, Stefan Knecht^15, Dhiraj Madan^16, Sabrina Maniscalco^15, Nicola Mariella^17, Joseph A. Morrone^3, Khadijeh Najafi^18, Pushpak Pati^2, Daniel Platt^3, Maria Anna Rapsomaniki^2, Anupama Ray^16, Kahn Rhrissorrakrai^3, Omar Shehab^18, Ivano Tavernelli^19, Meltem Tolunay^9, Filippo Utro^3, Stefan Woerner^19, Sergiy Zhuk^17, Jeannette M. Garcia^9,†, Laxmi Parida^3,†
^† Corresponding authors: {jmgarcia, parida}@us.ibm.com. Authors are listed in alphabetical order with the exception of the corresponding authors.
^1 Purdue University, Department of Mathematics, West Lafayette, IN, USA
^2 IBM Research, IBM Research Europe, Zurich, Switzerland
^3 IBM Research, IBM Thomas J Watson Research Center, Yorktown Heights, NY, USA
^4 IBM Research, Almaden Research Center, San Jose, CA, USA
^5 Center for Cellular Construction, San Francisco, CA, USA
^6 Athos Therapeutics Inc., Los Angeles, CA, USA
^7 Center for Immunotherapy and Precision-Immuno-Oncology, Cleveland Clinic, Cleveland, OH, USA
^8 National Center for Regenerative Medicine, Cleveland Clinic, Cleveland, OH, USA
^9 IBM Quantum, Almaden Research Center, San Jose, CA, USA
^10 QuantumBasel, uptownBasel Infinity Corp., Arlesheim, Switzerland
^11 Massachusetts General Hospital Cancer Center, Boston, MA, USA
^12 Broad Institute of MIT and Harvard, Cambridge, MA, USA
^13 Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
^14 Department of Pathology, Harvard Medical School, Boston, MA, USA
^15 Algorithmiq Ltd, Helsinki, Finland
^16 IBM Research, IBM Research India, India
^17 IBM Quantum, IBM Research Europe, Dublin, Ireland
^18 IBM Quantum, IBM Thomas J Watson Research Center, Yorktown Heights, NY, USA
^19 IBM Quantum, IBM Research Europe, Zurich, Switzerland
Towards quantum-enabled cell-centric therapeutics
August 12, 2023
=================================================
In recent years, there has been tremendous progress in the development of quantum computing hardware, algorithms and services leading to the expectation that in the near future quantum computers will be capable of performing simulations for natural science applications, operations research, and machine learning at scales mostly inaccessible to classical computers. Whereas the impact of quantum computing has already started to be recognized in fields such as cryptanalysis, natural science simulations, and optimization among others, very little is known about the full potential of quantum computing simulations and machine learning in the realm of healthcare and life science (HCLS).
Herein, we discuss the transformational changes we expect from the use of quantum computation for HCLS research, more specifically in the field of cell-centric therapeutics. Moreover, we identify and elaborate open problems in cell engineering, tissue modeling, perturbation modeling, and bio-topology while discussing candidate quantum algorithms for research on these topics and their potential advantages over classical computational approaches.
§ INTRODUCTION
The history of computing is a story of remarkable achievements that continue to transform almost every aspect of our society.
From the invention of the von Neumann architecture to the emergence of Moore’s law to the recent rise of artificial intelligence (AI), computing has enabled unprecedented advances in natural science, engineering, and medicine <cit.>.
However, as we approach the physical limits of classical computing, we face new challenges and opportunities that require a paradigm shift in how we process and manipulate information.
Quantum computing can provide such a paradigm shift.
By harnessing the power of quantum physics, quantum computers can potentially perform some tasks exponentially faster than classical computers and solve problems that are practically intractable for classical computers, such as simulating quantum mechanical systems <cit.> or decrypting contemporary cryptography <cit.>.
With their exponential rise in computational power (each additional qubit doubles the quantum state space), quantum computers may have an unprecedented transformative impact in the coming decades.
However, to effectively steer quantum algorithm development and avoid "reinventing the wheel," application domains with vast untapped potential for novel computational approaches need to be identified.
We believe that, similar to high-energy physics <cit.>, healthcare <cit.> and drug discovery <cit.> are prime examples of areas that could see a tremendous impact from quantum-classical computational workflows because they require accurate and reliable simulations of complex systems (for example, molecules, proteins, and cells) or necessitate learning complex behaviors from limited experimental data. In the following, we identify areas in healthcare and life sciences (HCLS) that have seen great advances in the recent years and in which we believe significant benefits from quantum computing applications will be possible.
§.§ Current technological shift in healthcare and life sciences
In the past decade, technological advancements have turned biological discovery into an information-rich, quantitative science. From super-resolution microscopy techniques that image macromolecules with nanometer resolution <cit.> to organoid technologies that mimic human organs <cit.> to spatial single-cell methods able to generate three-dimensional molecular maps of whole tissues <cit.>, new ways of interrogating biological systems across all scales of organization have emerged. These technologies have fueled ambitious efforts, such as creating a human cell atlas, i.e. a detailed map of all individual cells in the human body <cit.>, and have transformed how we explore fundamental questions in health and disease. Although most outcomes are still descriptive, studies that show promise in identifying patterns with clinical significance are appearing across a variety of diseases, including cancer <cit.>, cardiovascular disease <cit.>, and diabetes <cit.>. As a result of such technological advancements, one of the emerging paradigms is the ability to engineer cells to carry out therapeutic functions <cit.>. Reprogramming immune cells has been proven to be successful in treating hematological cancers <cit.> and the effort has recently been extended to treating solid tumors <cit.> and other diseases <cit.>, while also taking advantage of the most recent technologies such as mRNA delivery <cit.>.
On the computational front, AI (herein defined as intelligent software automating routine labor, understanding and/or recognizing images, text patterns, etc.) and machine learning (ML) (herein defined as the set of algorithms and the mathematical and statistical methods allowing the computer to learn from data) have accelerated discovery in HCLS.
The use of AI and ML has revolutionized several fields, favoring the development and implementation of novel methodologies often based on data-driven approaches. One prominent example of the data-driven solutions provided by AI is in the field of structural biology, where the longstanding problem of predicting the three-dimensional (3D) structure of a protein given its sequence <cit.> has seen significant improvement via transformer-based architectures <cit.>.
This work has had a profound effect on the field of synthetic biology by showcasing the potential for using data-driven approaches based on ML methods to solve scientific problems. For instance, in the last few years, novel ML architectures have been developed to generate large protein complexes <cit.> and design de novo proteins and enzymes <cit.>.
AI/ML models have been used successfully to predict the effects of noncoding variants <cit.> and reach human-level performance in automated whole-cell segmentation, a task that traditionally involved hours of manual processing <cit.>. Additionally, several AI/ML approaches have shown great promise in improved disease diagnosis or prognosis. Deep convolutional neural networks (CNNs) have matched the accuracy of radiologists for predictions of lung cancer risk from CT images <cit.>, have outperformed human dermatologists in classification of skin lesions <cit.>, and have exceeded the performance of established models for breast cancer risk discrimination <cit.>. As AI is rapidly moving towards foundation model-based learning <cit.>, generalist multi-task medical AI foundation models are emerging <cit.>. Although the progress in applying AI models to biological data has been impressive, important limitations that hinder their applications to the clinic still persist <cit.>. While some limitations are related to native properties of biological systems, such as their innate complexity and scale <cit.>, others are associated with shortcomings of AI algorithms, e.g. their inability to learn in data-limited contexts, model overfitting, or learning saturation <cit.>.
§.§ Quantum-enabled healthcare and life science trajectory
Quantum computing may soon provide researchers with quantum-enabled tools that could expand the limits of computing to unprecedented capabilities, opening up previously unimagined avenues for addressing some of these challenges. Quantum algorithms make use of a radically different computing paradigm that may potentially represent and learn from biological data more efficiently, tackling classically difficult computing problems in healthcare and life sciences. Here, we advocate for the adoption of quantum computing to open up new frontiers for biological research that could enable biomedical discoveries.
There exist multiple areas for potential significant impact by quantum computing in HCLS that each merit deeper discussions, including biomarker discovery, clinical trial optimization, imaging analysis, and drug protein design and discovery. While naturally there has been a great deal of focus on chemical simulations in biomolecular systems using techniques from quantum simulation <cit.>, here we will focus on applications of quantum machine learning and optimization that have newly realizable potential for healthcare applications owing to recent advancements in quantum hardware and software development. We will elaborate on some of these technological advances in quantum computing from both a hardware and software perspective, with an emphasis on quantum optimization and quantum machine learning.
Importantly, we will present our vision to reimagine healthcare and drug discovery, summarized by Quantum Enabled Cell-Centric Therapeutics, which aims to leverage advancements in single cell and spatial single-cell technologies to create a holistic view of the cellular and metabolic activities in disease tissue to better understand disease dynamics and improve therapeutic design. Here we will highlight four areas of research explorations that address various aspects of a cell-centric therapeutic design philosophy and that may serve as a example of how bringing together quantum computing advancements, AI/ML models, and cutting-edge developments in biological research can transform therapeutic discovery and improve healthcare. There is a growing body of work exploring the application of quantum technologies in healthcare, medicine, and the life sciences <cit.>. Hence, we intend for this paper to also serve as a call to both quantum and HCLS researchers to participate in and help shape this vision, devising new biology-inspired quantum algorithms and proof-of-concepts.
§ QUANTUM COMPUTING STATE-OF-THE-ART
Challenges in healthcare and life sciences present opportunities to leverage the unique features of quantum computing to derive novel biological insights to improve patient care.
The scale of today's quantum devices is on the order of tens to hundreds of qubits, and these devices remain susceptible to noise <cit.>. Moreover, the development of error correction techniques used to protect quantum information is still at an early stage <cit.>. However, with recent quantum developments, now may be the time to begin addressing these HCLS challenges in earnest.
Qubit counts have increased <cit.>, dynamic circuits with mid-circuit measurements have been introduced <cit.>, the fidelities of 1- and 2-qubit gates have improved <cit.>, and the speed of execution of quantum circuits has increased <cit.>. Advances in error mitigation and error suppression techniques, when coupled with circuit cutting and knitting techniques <cit.>, have enabled researchers to scale up the size of their quantum experiments.
As an example, quantum chemistry simulations using circuit cutting techniques, such as entanglement forging, have enabled users to utilize double the number of qubits <cit.> while quantum-classical embedding techniques enabled the scaling up to relevant system sizes <cit.>.
These advancements have demonstrated significant progress towards reaching the stage where quantum computers can solve certain meaningful problems faster, cheaper, or more accurately than classical computers alone for selected applications <cit.>.
This progress has led to the emergence of quantum machine learning (QML), which aims to use quantum algorithms to analyze large datasets more efficiently than classical machine learning algorithms <cit.>. Depending on the specific use case, quantum computing in general, and QML in particular, may enable different types of benefits, in metrics such as accuracy, energy efficiency, input data requirements, and speed. For instance, there is early evidence in specific instances that quantum algorithms applied to electronic health records are better at handling small, noisy data sets and producing acceptable accuracies than classical approaches <cit.>.
At the end of 2022, IBM announced the production of a 433-qubit processor and that a 100-qubit device capable of achieving a depth of 100 will be available in 2024, representing a new testbed for quantum circuits <cit.>. It is now possible for quantum circuits to be designed for simple drug discovery problems <cit.> and executed on real quantum hardware using these new capabilities, such as circuit cutting and error mitigation, to complement research on advancing the classical and quantum algorithms.
§.§ Quantum hardware
Like bits for classical computers, qubits are the basic unit of quantum computation. While classical bits can only be in a state of 0 or 1, qubits can exist in a superposition of both states, meaning that they can represent multiple states simultaneously. Additionally, qubits can be entangled, which means that their states are correlated in a way without classical equivalent. Finally, qubits are measured probabilistically and one can measure the interference of their probabilities. These three properties come from the principles of quantum mechanics <cit.>.
There are several approaches to building qubits, including neutral atom qubits <cit.>, spin qubits <cit.>, topological qubits <cit.>, trapped-ion qubits, and superconducting qubits, each with its advantages and challenges as assessed by different quantum computing metrics (Fig. <ref>A).
While most of the quantum algorithms and applications that we will discuss here are platform-agnostic, we will focus the majority of our discussion on one of the most widely used qubit technologies, superconducting qubits. These qubits rely on superconducting Josephson junctions, which leverage the properties of superconducting materials to create and manipulate the two-level systems required for quantum computation.
These superconducting qubits are made using superconducting circuits <cit.> that must operate at extremely low temperatures, typically just a few degrees above absolute zero, in order to take advantage of the unique property of such materials, namely that they conduct electricity with no resistance. Superconducting qubits are promising candidates for building quantum computers because they can be easily fabricated using standard semiconductor fabrication techniques and have demonstrated long coherence times.
§.§ Near-term vs. fault-tolerant quantum computing
Near-term quantum computing refers to the current state of quantum computing technology, where small-scale quantum processors are available for research and development purposes. These devices can perform simple quantum algorithms with limited numbers of qubits. However, these devices are prone to errors, and must be carefully calibrated and controlled to produce reliable results. Fault-tolerant quantum computing, on the other hand, refers to the theoretical possibility of building large-scale quantum processors that can operate reliably in the presence of noise and errors. These devices would be able to run complex quantum algorithms with many qubits, and would be a major breakthrough in the development of practical quantum computing technology.
The key difference between near-term and fault-tolerant quantum computing (FTQC) is how errors are handled. FTQC uses quantum error correction (QEC), whereas near-term quantum computing uses error mitigation, which trades additional circuit executions for a reduced impact of noise on certain measurements such as observables. In earlier devices, error mitigation <cit.> is limited, and the devices are only capable of running small-scale quantum algorithms that can tolerate some errors. Moreover, the application of different circuit compilation <cit.> and decomposition techniques <cit.> is relatively explicit in near-term quantum computing. However, evidence of quantum utility through quantum error mitigation has started to emerge for nontrivial problems <cit.>; both the increase in the number of qubits and improved gate quality have contributed to this progress. In fault-tolerant devices <cit.>, error correction is much more robust, but it requires error rates that are not yet readily achievable, much less at the scale that is needed; hence, QEC will not be practical in the near term. Therefore, advances in both error correction and error mitigation are needed to create a continuous path to quantum advantage. Another important difference is the number of qubits that can be used. Near-term devices typically have on the order of hundreds of qubits, and qubit numbers and quality will continue to scale in the near future, whereas error-corrected devices may require thousands or even millions of qubits <cit.>. Thus, fault-tolerant quantum computing will require the development of new technologies for qubit fabrication, control, and error correction.
Despite these challenges, researchers are making progress towards fault-tolerant quantum computing, with many promising new technologies and algorithms being developed. However, it is still unclear when or if practical fault-tolerant quantum computers will be built. In the meantime, near-term quantum computing remains an active area of research and development, with many companies and research groups working to build better quantum processors and software tools. These devices are expected to have an impact on fields such as chemistry, optimization, and machine learning.
§.§ Quantum toolkit and services
The history of quantum software toolkits in recent years has been characterized by rapid growth and development, as researchers and companies worked to create more advanced and user-friendly tools for quantum programming. In 2017, IBM released Qiskit, an open-source quantum software toolkit that quickly became one of the most widely used tools in the field <cit.>. Qiskit runs on IBM Quantum Experience, the first ever cloud-based quantum computing service. Qiskit provides users with a comprehensive set of tools for designing and optimizing quantum circuits, as well as simulating and running quantum programs on real quantum hardware. Other toolkits include Forest <cit.>, PennyLane <cit.>, CirQ <cit.>, Braket <cit.>, and so on.
Well-developed quantum software toolkits, like Qiskit, tend to have complex and elaborate modules for different domains of applications.
Qiskit provides a serverless model to offload quantum computations to the IBM Quantum cloud through Qiskit Runtime, as shown in Figure <ref>C.
Toolkits, like Qiskit, are essential for enabling researchers and developers to explore the potential of quantum computing, and to build the next generation of quantum applications, such as Algorithmiq’s Aurora <cit.>, a package for advanced quantum chemistry simulations for drug design and discovery. As these quantum software toolkits continue to evolve and improve, we will see new tools, features, and applications brought online to unlock new areas of HCLS research that are difficult to address using classical computing technologies.
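To give a flavor of what working with such a toolkit looks like, the minimal Qiskit sketch below prepares a two-qubit entangled state and inspects its ideal measurement probabilities with a statevector simulation. It is only a toy illustration; exact class names and APIs vary between Qiskit releases, and running on real hardware through Qiskit Runtime requires additional account and backend setup not shown here.

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into an equal superposition
qc.cx(0, 1)    # entangle qubit 0 with qubit 1 (Bell state)

probs = Statevector(qc).probabilities_dict()
print(probs)   # ideally {'00': 0.5, '11': 0.5}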
§.§ Quantum hardware roadmap
Quantum computing is expected to go through revolutionary changes in the coming years, including near-term advancements in architectural primitives such as modular chip design and classically and quantumly connected modules. A combination of a larger number of qubits, higher-quality quantum gates, and faster clock cycles will allow users to solve significantly larger problems that are common in healthcare and life science. At the kernel level, threaded primitives, dynamic circuits, and built-in error mitigation are also expected to become available in a similar timeframe. These will allow algorithm developers to take advantage of advanced algorithm development techniques from classical computing and extend them to quantum algorithms. A quantum serverless architecture, along with advanced circuit decomposition and compilation techniques, will enable users to scale up their problem sizes rapidly. In addition to IBM's announced production of a 433-qubit processor in 2022 and a 100-qubit device capable of achieving a depth of 100 available in 2024 <cit.>, IBM has targeted the development of a 100,000-qubit quantum system by 2033 <cit.>. In summary, we are entering a new regime in which quantum computers can be used to study and gain insights into important scientific problems, where any non-trivial result will not be easily certifiable with classical alternatives. Deeper collaboration with domain experts is needed to identify the smallest practical problems at that scale.
Overall, recent developments in quantum computing hardware are driving progress towards the realization of practical quantum computers and empowering quantum algorithms to achieve heretofore unattainable results on actual quantum hardware, and we will provide, as an example, how these quantum advancements can be applied towards a cell-centric therapeutic discovery process.
§.§ Quantum algorithms
The history of quantum algorithm development began in the early 1980s when Richard Feynman and Yuri Manin suggested that quantum computers could solve problems that classical computers could not <cit.>. In 1994, Peter Shor developed a quantum algorithm for factoring large numbers, which is believed to be intractable for classical computers <cit.>. This algorithm is the basis for much of modern cryptography. In 1996, Lov Grover developed a quantum algorithm for searching unsorted databases, which can provide a quadratic speedup compared to classical algorithms <cit.>. In 1997, Seth Lloyd developed a quantum algorithm for simulating quantum systems, which has potential applications in fields such as chemistry and materials science <cit.>.
Since then, researchers developed a variety of other quantum algorithms (Fig. <ref>B) for tasks such as solving linear equations <cit.>, classification <cit.>, optimizing functions <cit.>, and simulating physical systems <cit.>. These algorithms have been further specialized and optimized for different areas of applications including HCLS <cit.>.
§.§.§ Quantum simulation
Quantum algorithms for chemistry simulation are a promising new approach to studying the behavior of atoms and molecules <cit.>. These algorithms exploit the power of quantum computers to simulate the quantum mechanical effects that are essential for understanding the properties of matter. One of the most important quantum algorithms for chemistry simulation is the variational quantum eigensolver (VQE) algorithm <cit.>. VQE can be used to calculate the ground state energy of a molecule, which is a fundamental property that determines its stability and reactivity. The VQE algorithm has been used to simulate the properties of a wide variety of molecules, including water <cit.>, methane <cit.>, deuteron <cit.>, and large organic molecules such as butyronitrile <cit.> and ferrocene <cit.>.
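To illustrate the structure of such algorithms, the following is a highly simplified VQE-style loop on a made-up two-qubit Hamiltonian using Qiskit's reference primitives. It is not a molecular simulation: a chemistry application would first map a molecular Hamiltonian to qubit operators, and the primitive APIs shown here may differ between Qiskit releases.

import numpy as np
from scipy.optimize import minimize
from qiskit.circuit.library import TwoLocal
from qiskit.quantum_info import SparsePauliOp
from qiskit.primitives import Estimator

hamiltonian = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5), ("IX", 0.5)])  # toy Hamiltonian
ansatz = TwoLocal(2, "ry", "cx", reps=2)          # parameterized trial circuit
estimator = Estimator()

def energy(params):
    # expectation value <psi(params)|H|psi(params)>
    job = estimator.run(ansatz, hamiltonian, parameter_values=[list(params)])
    return job.result().values[0]

x0 = np.zeros(ansatz.num_parameters)
result = minimize(energy, x0, method="COBYLA")
print("estimated ground-state energy:", result.fun)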
Quantum algorithms are also being developed to simulate excited states of quantum systems. Excited state simulations are needed for understanding the behavior of many physical and chemical systems, including the electronic and magnetic properties of materials. While VQEs have been used to find the excited state energies of a quantum system by optimizing a variational ansatz wave function <cit.>, quantum subspace expansion (QSE) and the quantum equation-of-motion (qEOM) algorithms <cit.> have also been developed to perform excited state simulations <cit.>. These methods simulate the excited states of a quantum system by expanding the wave function in a basis of excited states <cit.>.
The quantum Lanczos algorithm has also been found useful for excited state simulations <cit.> by efficiently computing the eigenvalues and eigenvectors of a Hamiltonian matrix, which can be used to determine the excited states of a quantum system. In addition to these specific algorithms, there is growing interest in developing general-purpose quantum algorithms for excited state simulations. These algorithms would be able to simulate a range of quantum systems and phenomena, including the excited states of many-body systems. The development of quantum algorithms for ground and excited state simulations is still in its early stages, but researchers are optimistic about the potential impact of these algorithms on the field.
§.§.§ Quantum operations research
Quantum algorithms for operations research is a rapidly evolving area of research that focuses on applying quantum computing to solve problems in operations research. Quantum Monte Carlo algorithms can simulate the behavior of quantum systems, which is useful for solving problems in optimization, finance, and other fields <cit.>. Quantum algorithms for mixed-integer programming problems are able to exploit problem structure to maximize fractional Grover speedup <cit.>.
Quantum walk algorithms may have the potential for quantum advantage over classical Monte Carlo or random walk algorithms <cit.>. Additionally, a large number of optimization problems can be represented as quadratic unconstrained binary optimization (QUBO) problems, which allows the quantum approximate optimization algorithm (QAOA) to potentially maximize the fractional Grover speedup <cit.>.
§.§.§ Quantum machine learning
Quantum machine learning (QML) is an evolving field that combines the power of quantum computing with the techniques of machine learning <cit.>. QML algorithms have the potential to significantly improve the accuracy and efficiency of machine learning models, particularly for large datasets. One key advantage of QML is the ability to perform certain computations exponentially faster and more efficiently than classical computers <cit.>. Another advantage is the potential for QML to solve problems that are not easily solved with classical techniques. QML algorithms can be used for a variety of tasks, including classification, regression, clustering, and dimensionality reduction. Some popular QML algorithms include the quantum support vector machine <cit.>, quantum k-means clustering <cit.>, and quantum neural networks <cit.>. While QML is still in its early stages, researchers and industry experts are optimistic about its potential impact on fields such as drug discovery, finance, and materials science. However, one major challenge of QML is the need for large-scale, fault-tolerant quantum computers, which are not yet available. More importantly, the quantum amenability to practical datasets for machine learning is still not well understood. Nevertheless, quantum machine learning is an exciting area of research with promising possibilities for advancing the field of machine learning.
Quantum Support Vector Machine The quantum support vector machine (QSVM) is an emerging approach that combines principles from quantum computing and classical machine learning to enhance the capabilities of support vector machines (SVMs) <cit.>. SVMs are powerful algorithms used for classification and regression tasks, but they face limitations when dealing with large and complex datasets. QSVMs aim to overcome these limitations by leveraging quantum feature maps that encode information into a Hilbert space. Fundamentally, the QSVM algorithm employs a quantum kernel function that measures the similarity between quantum feature vectors as the inner product between the corresponding density matrices. These feature vectors encode information about the input data and are represented as quantum states. The kernel function can be computed by running the feature-map circuit for one of the inputs, followed by the inverse feature-map circuit for the other input, and finally measuring in the standard basis. One can then optimize the SVM dual problem on a classical machine using the kernel values computed on the quantum circuit. While the theoretical foundation of quantum feature maps is still an area of active research, asymptotic quantum speedup has been demonstrated for a certain quantum feature map <cit.>, and empirical quantum advantage has been demonstrated for electronic health record data in a few very specific settings <cit.>.
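A hedged sketch of this workflow is shown below: a quantum feature map defines a kernel matrix, which a classical SVM then consumes. It assumes the qiskit-machine-learning package and uses toy random data; class names such as FidelityQuantumKernel may differ between releases, and no claim of advantage is implied for this toy example.

import numpy as np
from sklearn.svm import SVC
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

X_train = np.random.rand(20, 2)                    # toy 2-feature data
y_train = np.random.randint(0, 2, 20)              # toy binary labels

feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
qkernel = FidelityQuantumKernel(feature_map=feature_map)

K_train = qkernel.evaluate(x_vec=X_train)          # Gram matrix from quantum state overlaps
clf = SVC(kernel="precomputed").fit(K_train, y_train)   # classical dual optimization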
Quantum Neural Networks The convergence of quantum information science and machine learning has given rise to a novel and promising paradigm known as quantum neural networks (QNNs) <cit.>. QNNs combine the principles of quantum mechanics and classical neural networks. Like their classical counterparts, QNNs consist of interconnected nodes, or "neurons", that process and transmit information. However, unlike classical neural networks that rely on classical bits, QNNs employ qubits as their basic units of information. The architecture of a QNN comprises three key components: the input layer, hidden layers, and output layer. Each layer consists of qubits and quantum gates. The input layer encodes the input data, which is then processed through the hidden layers via quantum gates. Finally, the output layer produces the desired output based on the learned patterns. Training QNNs involves adjusting the parameters of the quantum gates to optimize the network's performance. Quantum algorithms, such as quantum gradient descent and quantum variational algorithms, play a crucial role in the training process. Quantum gradient descent adapts the parameters of the gates by minimizing the loss function, while variational algorithms optimize the parameters by leveraging quantum optimization techniques. It has been shown that a class of quantum neural networks is able to achieve a better effective dimension, which is a robust capacity measure, than comparable feed-forward networks and can be trained faster <cit.>.
Quantum Topological Data Analysis Topological data analysis (TDA), introduced in the early 2000s <cit.>, is a data science method that combines tools from algebraic topology and computational geometry to study and analyze the shape of data, revealing hidden structures and patterns while gaining insights that are robust to noise. Successful applications of TDA include areas such as medicine, biology, image analysis, network analysis, and multivariate time series analysis <cit.>. In particular, TDA can efficiently extract higher-dimensional features from a noisy data set. While TDA has found many applications in different research areas, including healthcare and life sciences <cit.>, it is known that the computational cost increases exponentially with the number of data points or with the dimension of the topological features targeted. The first quantum algorithm for TDA offered an exponential speed-up targeting the regime where classical TDA struggles, in particular higher-dimensional features <cit.>. Under certain constraints, this algorithm utilizes standard quantum protocols such as the multi-targeted Grover search algorithm and quantum phase estimation (QPE). This created a surge of attention towards developing improved quantum algorithms for TDA <cit.>, providing both a theoretical and an experimental framework for researchers to analyze higher-dimensional data using quantum computing.
Quantum Cumulant Calculation Cumulant calculation is a mathematical technique that can be used to analyze and understand complex, high-dimensional, and noisy data sets including healthcare data, and has been used in conjunction with TDA to better capture high-dimensional relationships. By using cumulant-based analyses one can identify patterns and relationships within the data, which are often challenging to extract using traditional statistical methods.
Cumulants provide a way to identify redescriptions <cit.>, which can be used to identify and generate logical relationships among variates that may indicate underlying biological processes, and whose connectivities provide information about distinct pathways to disease <cit.>. The idea of redescriptions is that patients who develop conditions (e.g. atherosclerosis) tend to share clusters of other comorbidities (e.g. hypertension) and can be captured in groups according to their diagnoses of these conditions.
Often, healthcare data are strongly correlated, and therefore higher-order moments tend to include the impact of such lower-order correlations. These higher-order associations may reveal specific biological pathways driving disease and can help distinguish multiple distinct pathways captured by edges in a network formed by multi-omics variables <cit.>. It is possible to exclude potentially spurious lower-order correlations by computing cumulants of higher-order moments involving products of collections of variates, since cumulants vanish if the collections of variates partition into independent subsets <cit.>. In order to study all the moments or cumulants of a sequence of random variables, it is usual to consider the moment and cumulant generating functions. These are functions of n indeterminates, where n is the number of random variables in question. These generating functions, which contain all the information about the moments and cumulants, can be quite complicated and challenging to compute. An analogous situation occurs in statistical physics: the partition function in various Ising models is an exponential-sized sum over all configurations, where the indeterminates correspond to magnetization variables. Computing the partition function is as challenging as computing moments or cumulants because of the exponential-sized sums. Various approaches have been suggested for computing good approximations. In particular, taking all the magnetization variables to be equal, one obtains a polynomial in one variable. The study of the zeros of this univariate polynomial (via the Lee-Yang theorem <cit.>) gives important information about the macroscopic behavior of the corresponding physical system, such as phase transitions. Efficient quantum circuits have been developed for computing the complex zeros of the partition function. An analogous quantum approach to the computation of moments and cumulants would be very interesting.
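As a small, concrete illustration of the vanishing property mentioned above: the third-order joint cumulant of three variates equals their third central cross-moment, and it is approximately zero when one variate is independent of the other two. The data below are synthetic and serve only to demonstrate the idea.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def joint_cumulant3(x, y, z):
    # third-order joint cumulant = E[(X-EX)(Y-EY)(Z-EZ)]
    return np.mean((x - x.mean()) * (y - y.mean()) * (z - z.mean()))

x = rng.exponential(size=n)
y = rng.exponential(size=n)
z_dep = x * y                        # dependent on (x, y)
z_ind = rng.exponential(size=n)      # independent of (x, y)

print(joint_cumulant3(x, y, z_dep))  # clearly nonzero
print(joint_cumulant3(x, y, z_ind))  # approximately zero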
Quantum Network Medicine The emerging field of network medicine <cit.> applies network science approaches to investigate the molecular complexity of a particular disease, as well as the molecular relationships among apparently distinct phenotypes, integrating information from relevant Omics databases. The modern era has seen an exceptional growth in molecular interaction data such as molecular networks, including protein interaction networks, whose nodes are proteins that are linked to each other by physical interactions; metabolic networks, whose nodes are metabolites that are linked if they participate in the same biochemical reactions; and regulatory networks, whose directed links typically represent regulatory relationships between a transcription factor and a gene. However, such networks are known to be vastly incomplete, with large proportions of the true interactions being yet unknown <cit.>. Moreover, if we are to efficiently search for new drugs and drug combinations or pathogenic interactions within and between cells, there is a pressing need for computational methods that can access the immense molecular space until now largely unexplored. Quantum computing may be a key ingredient in enabling the full potential of network medicine. Recently, it has been proposed to combine network medicine and quantum algorithms in a novel research field, quantum network medicine, to lay the foundations of a new era of disease prevention and drug design <cit.>. A successful example of the potential of this field is the recent demonstration of a link prediction algorithm based on continuous-time quantum walks <cit.>. This algorithm has also been successfully adapted to identify disease modules and new disease pathways.
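To illustrate the basic primitive behind such approaches, a continuous-time quantum walk on a graph evolves a node state under the propagator U(t) = exp(-iAt), where A is the adjacency matrix; the resulting transition probabilities between node pairs can then be used, for example, to score candidate links. The snippet below is a small classical simulation of this dynamics on a made-up graph and is not the algorithm of the cited work.

import numpy as np
from scipy.linalg import expm

# adjacency matrix of a small, made-up 5-node graph
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

t = 1.0
U = expm(-1j * A * t)                 # continuous-time quantum-walk propagator

start = 0
amplitudes = U[:, start]              # amplitudes <j| U |start>
probs = np.abs(amplitudes) ** 2
print(probs)                          # probability of finding the walker at each node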
§ QUANTUM-ENABLED CELL-CENTRIC THERAPEUTICS
Therapeutic design and discovery has traditionally focused on drug-target identification and interaction optimization, and there exist many classical approaches to perform such analyses as well as clearly applicable quantum algorithms. This target-centric approach has been the dominant paradigm in therapeutic design and has led to the successful approval of many novel therapeutics (e.g., small-molecule inhibitors, chemotherapeutics, antibody therapies) across a multitude of diseases. However, the cost in research & development per newly approved drug has been doubling approximately every nine years since the 1950s <cit.> and, for many diseases (e.g., rare diseases or particularly aggressive cancers like pancreatic cancer), effective therapies are still far away.
While typically still considered the gold standard, the target-centric approach to drug discovery appears to be falling short on delivering significant numbers of therapeutic advances <cit.>,
which may be attributable to the complexity and uncertainty in target validation <cit.>.
The validity of target-centric approaches may be reaching the point of diminishing returns as evidenced by the observation that many anticancer drugs in clinical trials exert their efficacy not through their ostensible mechanism of action but via some off-target cytotoxicity <cit.>. Moreover, medicine itself is transforming. As we progress towards precision medicine, treatments at the level of an individual, and even better proactive interventions to keep people healthy, are needed. Today, many standardized therapies fail to achieve their intended outcomes, a case in point being that less than half of cancer patients respond to immunotherapy <cit.>, thus necessitating more tailored methods <cit.>.
With the advent of single-cell and spatial single-cell technologies there is now an opportunity to more precisely understand the interactions between disease cells, microenvironment, and therapeutics to accelerate cell-centric therapeutic approaches and precision medicine. These technologies provide a detailed accounting of the activities occurring within each cell and of how those cells are interacting in a tissue. In addition, they enable the capture of a holistic perspective of the cellular and metabolic activities of malfunctioning tissue with single-cell precision, allowing the study of disease dynamics in highly heterogeneous tissues with unprecedented spatiotemporal resolution. The trajectory of these technological advancements points towards a transformation of therapeutic design. This is made more relevant as we seek to characterize complex diseases with heterogeneous disease pathologies that cannot be described through single-protein effects, and where a conventional target-centric approach cannot be successful without the involvement of serendipity <cit.>.
By using the data obtained from these new technologies to examine the cellular state with respect to its cellular context and in combination with the potential of quantum computing to complement classical computing techniques, a cell-centric therapeutic design can be realized. Under this regime, the goal is to identify therapeutics, biologics, or other interventions that modify the cellular disease ecosystem to make it more responsive to therapy or shift it to a quiescent or even dying state. This is achieved by modeling the disease microenvironment at single-cell resolution to understand the cellular interactions, feature space, and the needed changes that shift the environment from a non-responsive to a responsive state.
Though classical AI methods have shown themselves to be critical in analysing single cell and spatial single cell data, there remain challenges where quantum computing approaches may offer significant advantages. Single cell omics data is often high dimensional and highly sparse, with most genes not altered or expressed in any given single cell. These cells are typically not labeled and thus semi-supervised approaches are needed to learn from labeled and unlabeled single cell data. These challenges in single cell analysis are being addressed with breakthrough advancements in transfer learning and transformer models <cit.>. Yet there remain important limitations with these classical computing technologies, including the need for large amounts of training data, poor explainability or interpretability, limitations in capturing global contextual signals, an inability to control attention, and quadratic space and time complexity.
These challenges are exacerbated when data sets from related techniques are combined, for example from flow cytometry and single-cell sequencing <cit.>; classical methods struggle with leveraging such heterogeneous data to effectively classify cells based on their physical and biochemical characteristics.
A cell-centric approach to understanding perturbation response and disease behaviors presents a unique opportunity for multiple quantum and quantum-classical optimization and machine learning techniques to be brought together to address these challenges and further advance cell-centric therapeutic design strategies.
In the following, we will describe several avenues of research capturing varying aspects of this cell-centric approach and how they may come together by focusing on a cancer use case (Fig. <ref>). These include approaches for optimizing CAR T-Cell engineering; representing and analyzing spatial, single-cell data; developing predictive models of single-cell perturbation; and extracting n-th order feature interactions that inform cellular behavior.
§.§ Cell engineering for immunotherapy
Cell therapies are powerful new medicines in which a class of human cells are reprogrammed to carry out specific functions such as killing cancer cells (an example is represented schematically in Fig. <ref>A). These therapies offer novel approaches that eventually will lead to treatments of other diseases such as autoimmune disorders, inflammation, and neuro-degeneration, by overcoming immunosuppressive tumor microenvironments, reducing toxicities, and preventing antigen escape. The great advantage of cell therapies lies in the possibility of engineering each cell modular component, each characterized by specific features, and building a synthetic molecule or cell with desired functions. A cell's phenotype can be reprogrammed by engineering cells at different scales, whether single point amino acid mutations <cit.>, designed peptides (e.g. antimicrobial peptides) <cit.>, protein receptors <cit.>, or cells and tissues <cit.>. The downside of such reprogramming is that the possibility of engineering and combining different cell modular components generates a vast, complex combinatorial design space that is difficult to explore experimentally. For this reason, AI/ML models are powerful tools to address this challenge because such models can learn complex patterns and features from a given dataset and provide predictions on different phenotypes of interest.
Chimeric antigen receptors (CARs) are genetically modified T cell receptors designed to repurpose the phenotypic output of natural T cells. The extracellular domain of CAR T cells is engineered to identify a specific tumor-associated antigen, and the intracellular domain is engineered to include intracellular motifs that enhance T cell activation and function (Fig. <ref>A). These motifs are part of the co-stimulatory domains and are responsible for the antitumor efficacy of CAR T cells. Combinations of these domains generate CAR T cells with different features, and currently, six CAR T cell therapies for 12 applications are approved by the US Food and Drug Administration (FDA) <cit.>. Hence, given the proven efficacy of this new technology, the outstanding question is: how can we efficiently explore all possible costimulatory domain combinations to design an optimal CAR T cell for a given patient? Since there are many potential combinations of associated sets of signalling and activation motifs for an engineered T cell, novel approaches based on screening pooled CAR signaling domain libraries have been proposed <cit.>. Specifically, seeking to optimize the phenotypic response of CAR T cells, Daniels et al. <cit.> defined a combinatorial library of 13 motifs located in 3 different positions of the receptor intracellular domain (Fig. <ref>B). Of the ≈2350 potential combinations, only ≈250 were tested experimentally (Fig. <ref>B), and an ML model was used to predict the cytotoxicity of the remaining combinations. The ML algorithm, based on a Convolutional Neural Network (CNN) (Fig. <ref>C) with long short-term memory (LSTM), was able to reach ≈70% accuracy at predicting CAR T cell phenotype.
Given the highly data-constrained problem described above, we have identified this study as an instance where quantum neural networks (QNNs) would potentially be able to provide advancements. Adding to the power of classical neural network models, QNNs utilize quantum mechanical effects such as superposition, entanglement, and interference to represent complex relations among data. As such, certain QNN architectures have been shown to have greater expressivity than some of their classical counterparts, allowing them to capture more complex probability distributions than classical models, and additionally indicating that there might be potential speed-ups in training time. In general, identifying problems where QNNs are more advantageous than classical models in training or model accuracy remains an open question with great room for improvement and requires heuristic experimentation with the datasets considered. As such the use of Quantum Convolutional Neural Networks (QCNNs) <cit.> (Fig. <ref>C) can be employed to improve the aforementioned 70% accuracy. QCNNs have several useful properties, including the number of variational parameters that scale logarithmically with the number of qubits and the absence of barren plateaus during training that can affect other types of QNNs <cit.>. Finally, it has been shown that some quantum ML models, such as QCNNs, can reach low generalization errors even in the case of limited training data, which further motivates the use of QCNNs for this problem domain.
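Purely as a schematic illustration of what a QCNN ansatz looks like (and not the specific architecture studied in the cited works), the sketch below builds a generic convolution-plus-pooling circuit in PennyLane; the qubit count, feature encoding, gate choices, and weight shapes are all illustrative assumptions, with an 8-dimensional feature vector standing in for an encoded motif combination and a single Pauli-Z expectation value standing in for a cytotoxicity-like score.

```python
# Schematic quantum convolutional neural network (QCNN) sketch in PennyLane.
import pennylane as qml
import numpy as np

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

def conv_block(params, wires):
    # Parametrized two-qubit "convolution" unitary applied to a qubit pair.
    qml.RY(params[0], wires=wires[0])
    qml.RY(params[1], wires=wires[1])
    qml.CNOT(wires=wires)
    qml.RY(params[2], wires=wires[0])
    qml.RY(params[3], wires=wires[1])

def pool_block(params, wires):
    # "Pooling": entangle the pair; only the second qubit is carried forward.
    qml.CNOT(wires=wires)
    qml.RY(params[0], wires=wires[1])

@qml.qnode(dev)
def qcnn(features, weights):
    # Angle-encode the (normalized) feature vector onto the qubits.
    qml.AngleEmbedding(features, wires=range(n_qubits), rotation="Y")
    active = list(range(n_qubits))
    layer = 0
    while len(active) > 1:
        for i in range(0, len(active) - 1, 2):
            conv_block(weights[layer, i // 2, :4], [active[i], active[i + 1]])
            pool_block(weights[layer, i // 2, 4:], [active[i], active[i + 1]])
        active = active[1::2]   # keep every second qubit after pooling
        layer += 1
    return qml.expval(qml.PauliZ(active[0]))   # read out a score on the last qubit

weights = np.random.uniform(0, np.pi, size=(3, 4, 5))   # 3 layers, up to 4 pairs, 5 params each
features = np.random.uniform(0, np.pi, size=n_qubits)   # stand-in for an encoded motif combination
print(qcnn(features, weights))
```

Note that the number of active qubits halves after each pooling layer, which is what keeps the number of variational parameters logarithmic in the number of qubits, as mentioned above.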
§.§ Modeling tumor microenvironments with hybrid classical-quantum GNNs
Spatial single-cell omics are currently revolutionizing cancer biology by enabling deep phenotypic profiling of each individual cell within the tumor ecosystem while preserving its topology <cit.>.
From multiplexed imaging to spatial single-cell transcriptomics, such spatial data can be elegantly modeled using cell-graph representations (Fig. <ref>A), where cells are the nodes, including cell-specific information, and edges denote cell-to-cell interactions.
Consequently, Graph Neural Networks (GNNs) have found initial applications in learning on spatial single-cell data (Fig. <ref>C).
Modeling and learning on spatial cell-graphs exhibit several attractive properties: GNNs explicitly learn on cells instead of encoding pixels, and can elegantly integrate single-cell information with disease tissue (e.g. tumor) morphology, topology, and interactions among cells and/or tissue structures <cit.>. The cell-graphs bestow an interpretable input space, which enables incorporation of prior domain knowledge from medical experts.
Furthermore, GNNs are not limited by variations in resolution and can be easily coupled with explainability techniques to provide valuable insights on which cells or cell neighborhoods drive the decision <cit.>.
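As a minimal illustration of the cell-graph construction underlying such models, the sketch below connects each cell to its spatial nearest neighbours and attaches a marker-expression vector to every node; the synthetic coordinates, expression values, neighbour count, and distance cut-off are illustrative assumptions, and a (quantum or classical) GNN would subsequently be trained on the resulting graph.

```python
# Build a k-nearest-neighbour cell-graph from (synthetic) spatial single-cell data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n_cells, n_markers, k = 500, 30, 6

coords = rng.uniform(0, 1000, size=(n_cells, 2))      # cell centroids in microns (toy values)
features = rng.lognormal(size=(n_cells, n_markers))   # per-cell marker expression (toy values)

tree = cKDTree(coords)
dist, idx = tree.query(coords, k=k + 1)               # first neighbour is the cell itself

edges = set()
for i in range(n_cells):
    for j, d in zip(idx[i, 1:], dist[i, 1:]):
        if d < 50.0:                                   # only connect cells closer than ~50 microns
            edges.add((min(i, j), max(i, j)))

edge_index = np.array(sorted(edges)).T                 # 2 x n_edges array, GNN-style edge list
print(f"{n_cells} nodes, {edge_index.shape[1]} edges, "
      f"{features.shape[1]} features per node")
```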
Initial attempts to leverage the advantages of GNNs on spatial single-cell data have started to emerge, with applications in cell phenotyping <cit.>, learning cell-cell communication <cit.> and, recently, modeling tumor microenvironments <cit.>. Yet, the complexity of tumor graphs and the entangled cell neighborhoods lead to suboptimal embedding spaces of GNNs, which in turn struggle with learning clinically meaningful patterns from the data. At the same time, searching for relatively small query subgraphs over large, complex graphs is an NP-hard problem where classical computing approaches do not suffice.
These limitations present an interesting opportunity for quantum computing. Mapping spatial data to the exponentially large Hilbert spaces can potentially solve the sub-optimal embedding of cell neighborhoods, searching for small query sub-graphs within large graphs can be improved, and leveraging the state-space available in quantum can lead to higher predictive ability of QML models. It is also possible for quantum computing to implement biases and symmetries as well as capture hidden correlations more efficiently (Fig. <ref>D).
Currently, hybrid quantum-classical solutions are implemented that combine GNNs for data pre-processing with QML algorithms, such as Variational Quantum Classifiers (VQC), Quantum Neural Networks (QNN), and Quantum Support Vector Machines (QSVM), and validated for downstream tasks such as tumor subtyping <cit.> (Fig. <ref>B).
Quantum versions of GNNs can also be created and researched upon for these inherent graph problems to study the possible advantages quantum can provide as new research directions and advance the state-of-art in the spatial single-cell omics.
§.§ Inferring single-cell drug perturbations with quantum conditional optimal transport
At the core of finding effective, novel therapeutics lies understanding the tissue response to specific therapeutic interventions, such as drug administrations.
While cell line perturbation studies have long been successfully used for pre-clinical validation of targeted cancer drug candidates, recent technological developments have allowed for such perturbation studies at the single-cell level <cit.>.
In these studies, the multiomic states for each individual cell are measured before and after drug or genetic perturbations resulting in perturbation atlases that also capture the underlying heterogeneity of drug response.
These highly informative atlases facilitate prediction of perturbation response on the tumour tissue at the resolution of single-cells, in the near future even for entire tumors (Fig. <ref>A).
The overarching long term objective of computational models in this field will be to simulate responses of tumor microenvironments to therapeutic interventions, e.g. drug administrations, and, ultimately, to develop trajectories that capture tumor growth patterns within its native microenvironment.
Early attempts in this direction used autoencoders with linear latent arithmetics and achieved reasonable success at approximating the effect of such interventions <cit.>.
Notably, however, the problem formulation is deeply rooted in optimal transport theory, which in simple, intuitive terms can be illustrated as minimizing the earth mover's distance when transforming one distribution of earth into another <cit.>.
Thanks to approximation techniques, such as entropic regularizations <cit.>, Optimal Transport (OT) recently gained popularity across machine learning applications <cit.>.
Contemporary work suggests that the posed task can be approached best with conditional OT <cit.> – which leverages the OT principles, thus benefits from strong theoretical support and explainability while allowing to condition the OT on a desired perturbation (Fig. <ref>C).
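To make the entropic OT machinery concrete, the following is a minimal, purely classical Sinkhorn sketch that transports a toy "control" cell population onto a toy "perturbed" one; the data, marginals, and regularisation strength are illustrative assumptions, and the conditional (and quantum) formulations discussed here would additionally parametrise the coupling by the perturbation.

```python
# Entropic-regularized optimal transport between two toy single-cell populations
# via Sinkhorn iterations (uniform marginals, squared-Euclidean ground cost).
import numpy as np

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, size=(200, 10))     # cells x genes before perturbation (toy)
perturbed = rng.normal(0.5, 1.2, size=(250, 10))   # cells x genes after perturbation (toy)

a = np.full(len(control), 1.0 / len(control))      # source marginal
b = np.full(len(perturbed), 1.0 / len(perturbed))  # target marginal

# Pairwise squared-Euclidean cost matrix.
C = ((control[:, None, :] - perturbed[None, :, :]) ** 2).sum(-1)

eps = 0.05 * C.mean()                              # regularization scaled to the cost magnitude
K = np.exp(-C / eps)

u = np.ones_like(a)
for _ in range(500):                               # Sinkhorn fixed-point iterations
    v = b / (K.T @ u)
    u = a / (K @ v)

P = u[:, None] * K * v[None, :]                    # transport plan (coupling matrix)
# Barycentric projection: predicted post-perturbation state of each control cell.
predicted = (P @ perturbed) / P.sum(axis=1, keepdims=True)
print("transport plan shape:", P.shape, " predicted states shape:", predicted.shape)
```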
We propose a novel hybrid quantum optimization algorithm for single-cell perturbation modeling that is able to predict transportation maps tailored to individual patients and therapeutic hypotheses.
While the theoretical support for such a quantum OT algorithm is in development, this formulation exploits some natural links between unitary operators and the structure of the OT maps (Fig. <ref>D).
The proposed system is currently being validated on several public datasets of single-cell perturbations covering different drugs, drug dosages, and CRISPR gene knockouts <cit.>, with the objective of achieving a significant performance improvement in capturing therapeutic effects at the tumor level. Such a methodology naturally integrates with the previously discussed activities, in particular the spatial tumor modeling work, which facilitates downstream clinical applications, e.g. recommendation of therapeutic interventions based on the simulated tumors (Fig. <ref>B).
The method is built largely in anticipation of novel single-cell data acquisition techniques that will arise in the near future, such as spatiotemporal perturbation data or combinatorial perturbations.
Future extensions of this quantum framework will focus on modeling cell system evolution over time, e.g., reconstructing trajectories of cell differentiation.
Remarkably, the devised conditional OT methodology is generic and finds immediate application across a rich set of problems in the healthcare and life sciences – essentially all scenarios where the state of a system is captured before and after a certain intervention and where the goal is to understand how the intervention alters the before state to a hypothetical after state.
§.§ BioTopology for cellular behavior
Combining insights from both a spatial cell model and cell perturbation model enabled with quantum would provide valuable insights into how tumors evade therapeutic response. Yet it is known that despite the incredible amount of information contained in imagery and omics data, this information is incomplete. This incompleteness may be due to throughput constraints on technology, constraints on funding, measurement variances between technology platforms, or lacking the technology to measure every relevant molecular entity within a cell at once. Furthermore, cell behaviors and hence disease phenotypes are the result of complex relationships dynamically occurring over time that can not be captured at scale. Hence novel methods are needed to discover those cryptic relationships, likely residing in connections among significant higher order interactions between the dimensions of a given feature space.
Topological data analysis and cumulants can capture these n-th order interactions that would represent learned complex associations leading to logical relationships explaining a given phenotype, like interactions between the community of cells/features/alterations that lead to therapeutic response (Fig. <ref>).
As described earlier (see Section <ref>), TDA discovers hidden structures in the shape of data to gain novel insights (Fig. <ref>A).
TDA is already being used in computer vision and atomistic models, and cumulants of order d > 3 are playing important roles in financial data analyses, economics, hyper-spectral image analyses, etc. Yet, they are classically constrained as to how far they can scale in terms of dimension (d <5) and feature space (< 100).
Cumulants are a useful tool for integrating and analysing multi-omics data, whether single-cell or patient omics data, where they can provide a unique measurement of liability or risk of a given disease or trait, for each cell or individual (Fig. <ref>B). The association, i.e. distance, between features or cumulants can be calculated and TDA applied to generate the topological features, e.g., barcode signatures or cycles, that characterize the clusters of features. The utility of this type of mining was demonstrated by constructing aggregated phenotypes representing distinct pathways to identify genetic variants relevant to that pathway via genome-wide association studies <cit.>, exploring metabolic syndrome with coronary artery disease, and metabolic interactions leading to COVID-19 severity, where it was found that renin angiotensin aldosterone system (RAAS) drugs mediated the risk of severe COVID-19 due to hypertension, and that lipids mediated several other metabolic syndrome risk factors for severity <cit.>.
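As a small illustration of the cumulant side of this toolbox, the sketch below computes the third-order joint cumulant tensor of a toy multi-omics feature matrix (for mean-centred data the third-order cumulant coincides with the third central moment); the data and feature count are illustrative assumptions, and the cubic growth of the tensor with feature number hints at why higher orders become classically intractable.

```python
# Third-order joint cumulant tensor of a toy multi-omics feature matrix.
# For mean-centered data, the third-order cumulant equals the third central moment:
# kappa3[i, j, k] = E[(x_i - mu_i)(x_j - mu_j)(x_k - mu_k)].
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_features = 1000, 20                   # e.g. cells/patients x omics features (toy)
X = rng.gamma(shape=2.0, scale=1.0, size=(n_samples, n_features))  # skewed, non-Gaussian data

Xc = X - X.mean(axis=0)                            # center each feature
kappa3 = np.einsum("ni,nj,nk->ijk", Xc, Xc, Xc) / n_samples

# The tensor has n_features**3 entries: this combinatorial growth with the order d
# is what makes high-order cumulant mining classically expensive.
print("third-order cumulant tensor shape:", kappa3.shape)
i, j, k = np.unravel_index(np.abs(kappa3).argmax(), kappa3.shape)
print(f"strongest third-order interaction: features ({i}, {j}, {k}), "
      f"kappa3 = {kappa3[i, j, k]:.3f}")
```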
Overall, TDA and cumulant calculation are powerful tools for analyzing healthcare data, allowing researchers and healthcare professionals to gain a deeper understanding of complex medical conditions and improve patient outcomes. However, there are classical bottlenecks in cumulant computation, owing to the combinatorial expansion of the multinomial feature combinations, and in TDA computations, where calculating exact and approximate Betti numbers is #P-hard and NP-hard <cit.>, respectively. For instance, in most analyses of persistent Betti number calculations, there is a phase shift in the computational time needed to perform the calculation (Fig. <ref>C). This shift marks regimes where quantum may offer particularly significant advantages over classical TDA. These classical constraints can be overcome using QTDA and quantum computing for cumulant computation.
Quantum computing offers the potential for moving past the classical limitations for these techniques enabling more sophisticated applications of TDA and cumulant analysis, and other mathematical techniques to improve biomedical data analysis. Hereto, quantum walks may also prove useful on the discovered knowledge graphs, as they have already been shown to be able to infer missing links in protein-protein interaction networks <cit.>. By applying these techniques to the spatial single cell omics data, it would be possible to capture those cryptic interactions that would have been hidden even with complete biological measurements let alone incomplete ones encountered in practice to better inform spatial and perturbation models described above.
§ CONCLUSION
A cell-centric therapeutic design strategy can provide clinicians and patients with much needed additional treatment options, bringing us significantly closer to precision medicine. By developing a deeper understanding of and modeling how cancer cells behave individually and in aggregate, treatment plans can be developed to manipulate the cancer and its tumor microenvironment into a more therapeutically responsive state or shift the tumor into an indolent phase transforming the disease into a more manageable, chronic condition. Quantum computing is a powerful enabling technology to help push this approach to design therapeutics forward, and this case study may serve as an exemplar of how quantum computing can meaningfully contribute to HCLS.
§ ACKNOWLEDGMENTS
Portions of this material are based upon work supported by the NSF Center for Cellular Construction grant DBI-1548297 to S.C.
http://arxiv.org/abs/2307.05403v1 | 20230711160708 | 3D Stagger model atmospheres with FreeEOS I. Exploring the impact of microphysics on the Sun | ["Yixiao Zhou", "Anish M. Amarsi", "Victor Aguirre Børsen-Koch", "Klara G. Karlsmose", "Remo Collet", "Thomas Nordlander"] | astro-ph.SR | ["astro-ph.SR"]
3D Stagger model atmospheres with FreeEOS
I. Exploring the impact of microphysics on the Sun
Stellar Astrophysics Centre, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark
[email protected]
Theoretical Astrophysics, Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20 Uppsala, Sweden
[email protected]
DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 128, 2200 Copenhagen, Denmark
Research School of Astronomy and Astrophysics, Australian National University, ACT 2611, Canberra, Australia
ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia
Three-dimensional radiation-hydrodynamics (3D RHD) simulations of stellar surface convection provide valuable insights into many problems in solar and stellar physics.
However, almost all 3D near-surface convection simulations to date are based on solar-scaled chemical compositions, which limits their application to stars with peculiar abundance patterns.
To overcome this difficulty, we implement the robust and widely used FreeEOS equation of state and our opacity package into the 3D radiation-magnetohydrodynamics code Stagger. We present a new 3D RHD model of the solar atmosphere, and demonstrate that the mean stratification as well as the distributions of key physical quantities are in good agreement with those of the latest solar model atmosphere.
The new model is further validated by comparing against solar observations. The new model atmospheres reproduce the observed flux spectrum, continuum centre-to-limb variation, and hydrogen line profiles at a satisfactory level, thereby confirming the realism of the model and the underlying input physics.
These implementations open the prospect for studying other stars with different α-element abundance, carbon-enhanced metal-poor stars and population II stars with peculiar chemical compositions using 3D model atmospheres.
3D Stagger model atmospheres with FreeEOS
Yixiao Zhou<ref>
Anish M. Amarsi<ref>
Victor Aguirre Børsen-Koch<ref>
Klara G. Karlsmose<ref>
Remo Collet<ref>
Thomas Nordlander<ref>,<ref>
Received / Accepted
================================================================================================================================================
§ INTRODUCTION
Stellar atmosphere models are indispensable tools for the quantitative interpretation of astronomical observations. For late-type stars, although the majority of theoretical atmosphere models are computed assuming one-dimensional (1D) geometry, hydrostatic equilibrium, and phenomenological theories of convection such as the mixing length theory (MLT), the use of three-dimensional radiation-hydrodynamics (3D RHD) stellar atmosphere models has been thriving in recent years, partly driven by more detailed observational data and the rapid growth in computing power.
In 3D RHD models, sometimes referred to as near-surface convection simulations, the motion of fluid is computed from first principles by solving the equation of mass and momentum conservation as well as the energy conservation equation coupled with the equation of radiative transfer in 3D space for each timestep. Although the current study ignores the effects of magnetic fields,
they can be included in the models by adding the induction equation, Ampère's circuital law and Ohm's law to the equation system.
The 3D RHD models have proven to be superior to their 1D counterparts in all aspects and shed light on many problems in stellar physics. The early simulations by, for example, <cit.> and <cit.> provided valuable insight into how convection operates in the near-surface layers of low-mass stars: rather than the distinct and coherent fluid parcels assumed in the MLT, convective regions show finger-like downflows that merge together as they descend from the photosphere before finally reaching the bottom of the simulation domain.
Conservation of mass forces the relatively hot material to rise back up through the thin optical surface, forming so-called granules. Detailed solar simulations presented in <cit.> further confirm this picture, and their work demonstrated excellent agreement between simulation and observation in the granulation pattern.
These ab initio simulations have enabled the prediction of various observables in a parameter-free manner.
The most remarkable breakthrough brought by 3D RHD models is associated with spectral line profiles: Predicted spectral lines broadening, blueshifts and bisectors agree excellently with observations <cit.>, to a degree that cannot be achieved by 1D models even if free parameters in the latter can be adjusted to fit the measured line profiles. This renders 3D model atmospheres a powerful tool for elemental abundance determinations and leads to a revision of the standard solar chemical composition <cit.>.
Moreover, 3D RHD models perform well in the case of centre-to-limb variations of intensity <cit.>, making them useful for deriving limb-darkened stellar radii and effective temperatures from interferometric measurements <cit.>.
3D simulations of stellar surface convection have also contributed to the field of helioseismology. <cit.> showed that the discrepancy between theoretical and measured solar pressure mode frequencies can be reduced by combining the averaged 3D model with 1D interior model then computing the theoretical oscillation frequency based on such a patched model (see also and ). The reason for the better agreement is that the convective turbulence is self-consistently described in 3D simulations, thereby resulting in more realistic pressure stratification in the near-surface convective layers.
3D hydrodynamical simulations of the solar near-surface convective region and solar atmosphere have been carried out by several research groups with independent codes, among others, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. The solar simulation extends from the near-surface convection zone upwards to the corona to investigate processes in the transition region and the solar chromosphere <cit.>. The magneto-hydrodynamical simulations of the Sun constructed with the code have provided valuable insights into our understanding of sunspots <cit.> and the solar small-scale dynamo <cit.>. Reference solar simulations computed with the , and code were compared in detail by <cit.>, who found good quantitative agreement between the three models.
Meanwhile, near-surface convection simulations with , and were constructed for a variety of stars including warm turn-off stars <cit.>, M-type dwarfs <cit.>, metal-poor benchmark stars <cit.> and extremely metal-poor stars <cit.>. Also, grids of 3D model atmospheres, such as the grid <cit.>, the <cit.> grid, and the grid (; Rodríguez Díaz et al., in preparation), are now available, covering a large area of the Hertzsprung-Russell diagram and spanning a wide metallicity range.
However, most of the 3D model atmospheres to date are constructed based on solar-scaled chemical compositions and a fixed value of abundance enhancement of α elements[In the grid, the abundance of α process elements O, Ne, Mg, Si, S, Ar and Ca are enhanced by 0.4 dex, [α / Fe] = 0.4, for metal-poor models with [Fe/H] ≤ -1. The notation [A/B] = log(n_ A / n_ B) - log(n_ A / n_ B)_⊙, where n_ A / n_ B and (n_ A / n_ B)_⊙ represent number density ratio between element A and B in the star and the Sun, respectively.]. Although this is usually an acceptable approximation for solar-type stars (i.e. F- and G-type dwarfs), the validity of the model atmosphere is in doubt when applied to, for example, relatively metal-rich ([Fe/H] ∼ -0.5) halo stars with high α element abundance ([α / Fe] up to ∼ 0.35, see ) because α-enhancement is not considered at this metallicity.
Neglecting variations in the abundance of individual elements may result in more significant systematic errors when investigating stars with peculiar abundance patterns such as carbon-enhanced extremely metal-poor stars, whose carbon and oxygen abundance is usually enhanced by at least 1 dex with respect to solar-scaled values <cit.>.
Therefore, model atmospheres with chemical composition tailored for individual cases are necessary for the aforementioned stars, given their importance in revealing the chemical evolution history of the Milky Way and the early Universe. <cit.> and <cit.> have made pioneering efforts in this direction by generating 3D model atmospheres for carbon-enhanced metal-poor stars.
To allow realistic simulations with arbitrary chemical composition we utilise an open-source EOS code and an opacity package developed in-house (Sect. <ref>), which form the basis of our new models.
As the input physics has changed, the resulting model atmospheres need to be validated before any scientific application. The Sun is a natural test bench for all theoretical stellar models due to the rich observational constraints available. Therefore, in this work, we validate the newly implemented input physics and the resulting 3D solar model atmosphere by comparisons with previously published results in terms of mean structure and horizontal distribution of key quantities (Sect. <ref>), and by comparing model-predicted quantities with corresponding solar observations (Sect. <ref>).
§ INPUT PHYSICS
§.§ Equation of state
FreeEOS is an open-source EOS code by <cit.> for stellar interior and atmosphere conditions. The EOS covers a wide temperature and density range that blankets both the lower atmosphere and the core of low-mass stars. It is widely adopted in stellar evolution codes, such as <cit.> and <cit.>, and has recently been applied to magneto-hydrodynamical simulations to study the small-scale dynamo in cool stars with different spectral types and metallicities <cit.>.
The EOS includes 20 elements (Table <ref>) as well as H_2 and H_2^+ molecules. At each density and temperature (or pressure) pair, chemical equilibrium is computed by adjusting the number density of various species (atoms, ions and electrons) such that the Helmholtz free energy is minimised, with the charge-neutral condition and particle number conservation (also known as abundance conservation) as constraints. The free energy minimisation technique is often used in EOS calculations (e.g. ) because when temperature and volume are fixed, the Helmholtz free energy is a minimum at equilibrium.
For detailed explanations of the code, we refer the readers to the documentations <cit.>[Available at <http://freeeos.sourceforge.net/documentation.html>].
We used the most realistic free energy model implemented in FreeEOS (named EOS1) in our calculation. The EOS1 option takes into account all ionisation stages of the 20 included elements, arbitrarily relativistic and degenerate free electrons, higher order Coulomb effects through a Coulomb factor on the first-order Debye-Hückel term, and an occupation probability formulation (similar to that of the Mihalas–Hummer–Däppen EOS, hereafter MHD EOS) for pressure ionisation.
It yields good agreement with the OPAL EOS <cit.> by design. Tests performed by <cit.> show the thermodynamical quantities predicted by the EOS1 option differ from that of the OPAL EOS by less than 0.2% for solar conditions.
We adopted two solar abundance mixtures in this work, the <cit.> and <cit.> solar abundance. The input abundances for 20 elements included in the EOS are listed in Table <ref>.
Thermal (gas plus radiation) pressure given by the two sets of abundances differs by about 0.7% in the mass density and temperature area of interest (-10 < log(ρ / [g cm^-3]) < -4; 3.5 < log(T / [K]) < 4.5, where ρ and T are mass density and temperature, respectively).
The difference is almost due entirely to different helium abundance adopted in AGSS09 and AAG21. The effect of metals on thermal pressure is negligible.
As detailed in Appendix <ref>, key quantities from FreeEOS with the AGSS09 solar abundance are compared with a modified version of the MHD EOS <cit.>. We found good agreement between the two EOSs, which further validates our results.
With the aforementioned setup and inputs, we generated EOS tables in the format required by the Stagger code. Our EOS tables are based on mass density and internal energy per mass e_m as independent variables, as internal energy per mass rather than temperature is the fundamental variable in Stagger (see Sect. 2 of <cit.>). Densities and internal energies are equidistant in logarithmic space. Density logρ ranges from -13.9 to 0 log[g cm^-3] in steps of 0.05, while log e_m ranges from 11 to 14 log[erg g^-1] in steps of 0.005. The resolution of our EOS table is higher than that previously used in the code.
The EOS table stores thermal pressure P_ ther, temperature T and electron number density n_e and their partial derivatives with respect to two independent variables, i.e. (∂ln f / ∂lnρ)_e_m and (∂ln f / ∂ln e_m)_ρ where f ∈{ P_ ther, T, n_e }. Note that these thermodynamic derivatives are obtained directly from EOS calculations and the Maxwell relations, therefore no interpolation is needed.
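The sketch below illustrates, under stated assumptions, how such a (ρ, e_m) table can be laid out and queried: the grid limits and spacings follow the values quoted above, while the tabulated quantity is a dummy placeholder for the actual EOS output.

```python
# Layout and bilinear lookup of an EOS table on (log rho, log e_m), following the
# grid spacing quoted in the text; the tabulated log T values below are dummies.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_rho = np.arange(-13.9, 0.0 + 1e-9, 0.05)       # log10(density / [g cm^-3])
log_em = np.arange(11.0, 14.0 + 1e-9, 0.005)       # log10(internal energy per mass / [erg g^-1])

# Placeholder table: in practice this is filled by the EOS solver (e.g. log T, log P, log n_e).
log_T_table = 3.0 + 0.25 * (log_em[None, :] - 11.0) + 0.02 * (log_rho[:, None] + 14.0)

lookup_logT = RegularGridInterpolator((log_rho, log_em), log_T_table)

# Query the table for a handful of (log rho, log e_m) points from a simulation snapshot.
points = np.array([[-7.0, 12.3],
                   [-2.5, 13.1]])
print("interpolated log T:", lookup_logT(points))
```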
§.§ Opacities
§.§.§ The opacity code
We used the opacity code developed by <cit.> to calculate total line and continuous monochromatic extinction coefficients at different gas temperatures and densities.
The first step to calculating these quantities is to obtain the number density of each species from the EOS.
The original EOS implemented in the opacity code is based on the Saha ionisation equation, albeit with a truncation of ionisation energies to account for non-ideal gas effects.
Although valid for studying the relatively cool
photospheres of FGK-type stars,
a free-energy minimisation approach is expected to be more valid
in hotter environments <cit.>.
Thus, in the high temperature regime[In this paper, temperatures at the stellar photosphere and lower atmosphere are called low temperatures. Quantitatively, we define low temperature as log(T / [K]) < 4, while log T > 4 is referred to as high temperature.], we instead fed the number densities from FreeEOS into the opacity code. For H^-, whose number density is not accessible from FreeEOS, we solved the Saha equation (assuming chemical equilibrium between neutral hydrogen and H^-) to obtain its number density.
It is worth mentioning that FreeEOS handles only 20 elements, whereas 83 elements are included in the opacity code. Therefore, for the 63 elements that are not incorporated in FreeEOS, their number densities (including all ionisation states) were set to zero in the opacity code in this high temperature regime. Molecular abundances were also set to zero in this limit.
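For reference, the H^- Saha relation mentioned above can be sketched as follows; the constants are standard (binding energy 0.754 eV, statistical weights g(H^-) = 1 and g(H I) = 2), whereas the input temperature and number densities are merely illustrative.

```python
# Saha equation for the H- ion: n(H-) in LTE given n(H I), n_e and T.
# n(H I) n_e / n(H-) = (2 g_HI / g_Hminus) (2 pi m_e k T / h^2)^(3/2) exp(-chi / kT),
# with binding energy chi = 0.754 eV, g_HI = 2, g_Hminus = 1.
import numpy as np

# CGS constants
m_e = 9.1093837e-28      # g
k_B = 1.380649e-16       # erg/K
h = 6.62607015e-27       # erg s
eV = 1.602176634e-12     # erg
chi_Hminus = 0.754 * eV  # H- binding energy

def n_Hminus(n_HI, n_e, T):
    """Number density of H- in LTE (all quantities in CGS units)."""
    phi = (2.0 * 2.0 / 1.0) * (2.0 * np.pi * m_e * k_B * T / h**2) ** 1.5 \
          * np.exp(-chi_Hminus / (k_B * T))
    return n_HI * n_e / phi

# Illustrative photospheric-like conditions (toy values).
T = 6000.0               # K
n_HI = 1.0e17            # cm^-3
n_e = 1.0e13             # cm^-3
print(f"n(H-) ~ {n_Hminus(n_HI, n_e, T):.3e} cm^-3")
```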
In the low temperature regime, we instead used the opacity code's own EOS, so as to include the effects of atoms and ions of all 83 elements, as well as of molecules.
The code uses molecular partition functions from <cit.>. Atomic and ionic partition functions
were calculated using data from the Kurucz online database <cit.>,
including at least three times ionised species for all the metals except Cl (up to twice ionised), which is sufficient for the
temperature range concerned in this work (i.e. log(T / [K]) ≤ 4.5). Our tests show that below log T = 4.5, the number densities of fourth and
higher ionised species are negligible for all 18 metals.
In this work, for a given temperature, pressure, and chemical composition, the ionisation energy was truncated prior to the calculation of the partition function
(by an amount calculated via the expression in Chapter 10.5 of ).
Given the EOS, the opacity code calculates total line and continuous monochromatic extinction coefficients using transition cross-sections from various sources.
The bound-free and free-free data sources can be found in Table <ref>. We note that for many of species listed in Table <ref>, cross-sections of bound-free transitions were adopted from the Opacity project (TOP) or IRON Project (TIP) online database. These data are exactly the same as those used in the previous generation of surface convection simulations <cit.>.
We took data for bound-bound transitions in atoms, ions, and molecules from the Kurucz online database; we summarise the considered molecular species in Table <ref>.
With the exception of the scattering processes at the end of Table <ref>, all radiative transitions were
treated in true absorption.
The line opacities are slightly influenced by the choice of microturbulence;
here we set this to 2 km s^-1 as was used in the original grid.
In this work, for bound-bound transitions of species other than hydrogen, occupation probabilities w were calculated following Appendix A of <cit.>.
Individual bound-bound transitions with monochromatic extinction coefficients α_ν;𝔪,𝔫 were
thus modified as α_ν;𝔪,𝔫→α_ν;𝔪,𝔫× w_𝔫, where 𝔪 and 𝔫 denote the lower and
upper levels of the transition.
For hydrogen lines and continua, we instead implemented the module provided by <cit.>.
The opacity code gives continuum and line absorption monochromatic extinction coefficients, which we added to obtain total absorption coefficients, separated into true absorption (α_ab) and scattering (α_sc).
These can also be added together to obtain a total extinction coefficient (α_ tot).
We used these quantities in the opacity binning procedure described in Sect. <ref>.
In order to have a smooth transition between the low and high temperature regimes, in practice we carried out two separate opacity calculations.
Extinction coefficients were computed using the opacity code's own EOS in the low (3.2 ≤ log T ≤ 4.1) temperature regime, and using the FreeEOS number densities in the high (3.9 ≤ log T ≤ 4.5) temperature regime.
The overlapping temperature interval log T ∈ [3.9,4.1] serves as an intermediate bridging region.
Here, we merged extinction coefficients from the two different calculations via
f_b = (1/2) { sin[ ( (log T - log T_1)/(log T_2 - log T_1) - 1/2 ) π ] + 1 },
α_mg = α_lowT (1 - f_b) + α_highT f_b,
where log T_1 and log T_2 are the lower and upper boundary of the bridging region, respectively. The bridging function f_ b equals 0 at the lower boundary, and smoothly increases to 1 at the upper boundary. The term α_ mg denotes the merged extinction coefficient, which could be monochromatic, or a mean value such as the Rosseland mean.
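A direct transcription of this bridging into code might look like the following sketch, where the two extinction-coefficient arrays and the boundary temperatures are placeholders.

```python
# Merge low-T and high-T extinction coefficients with the sinusoidal bridging function
# of the text; alpha arrays and boundaries are placeholders on a common temperature grid.
import numpy as np

def bridge(log_T, log_T1=3.9, log_T2=4.1):
    """Bridging weight f_b: 0 at log_T1, 1 at log_T2, sinusoidal in between."""
    x = np.clip((log_T - log_T1) / (log_T2 - log_T1), 0.0, 1.0)
    return 0.5 * (np.sin((x - 0.5) * np.pi) + 1.0)

def merge_alpha(alpha_lowT, alpha_highT, log_T):
    f_b = bridge(log_T)
    return alpha_lowT * (1.0 - f_b) + alpha_highT * f_b

# Toy example: a grid of temperatures spanning the overlap region.
log_T = np.linspace(3.8, 4.2, 9)
alpha_lowT = np.full_like(log_T, 2.0)    # placeholder extinction coefficients
alpha_highT = np.full_like(log_T, 3.0)
print(merge_alpha(alpha_lowT, alpha_highT, log_T))
```

The clipping simply reproduces the intended behaviour outside the overlap region, where only the low- or high-temperature coefficients are used.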
§.§.§ Opacity grids
Using the methods described in Sect. <ref>, we constructed grids of monochromatic extinction coefficients for different values of (log T, logρ) for the AGSS09 and AAG21 solar chemical compositions. The temperature ranged from log T = 3.2 to 4.5 in steps of 0.01 log[K], and the density ranged from logρ = -13.7 to -1.0 in steps of 0.1 log[g/cm^3]. This temperature-density coverage is sufficient for our applications since the surface convection simulations do not reach the low-density upper atmosphere nor extend to the high-temperature stellar interior.
It is known that the carbon-to-oxygen ratio has a great impact on the molecular opacity when log T ≲ 3.4 <cit.>. Nevertheless, given that the C/O ratio is very close between AGSS09 and AAG21, its influence on the opacity is limited as demonstrated in Fig. 6 of <cit.>. The different Mg and Fe abundance in the two versions of solar composition, however, is likely to leave imprints on opacities as they are important electron donors
(due to their high abundances and relatively low ionisation energies), and influence the H- opacity ( Sect. 6.3). Although Si is another important electron donor, its abundance is identical in AGSS09 and AAG21.
Extinction coefficients were computed at 250 000 wavelength points between 50 nm and 50 μ m, evenly sampled in logarithmic space to better resolve the ultraviolet. The resolving power is thus given by λ / Δλ≈ 36 200.
This wavelength resolution is not adequate to resolve all absorption features caused by spectral lines. Nevertheless, in the scenario of stellar atmosphere modelling, the focus is to find a sufficient wavelength resolution such that the modelled temperature stratification converges rather than resolving all the line features in opacity calculation (see Sect. 4 for detailed discussion). In order to verify our selected wavelength resolution, we have computed monochromatic extinction coefficients at very high wavelength resolution along (ρ,T) points of the horizontal- and time-averaged model (cf. Table. <ref> and Sect. <ref> about the model). The wavelength sampling is uniform in logarithm space, with two million points between 50 nm and 50 μ m, corresponding to a wavelength resolution λ / Δλ≈ 289 530.
We compared the temperature stratification predicted from the high-resolution extinction coefficients with that adopted in this work using a 1D stellar atmosphere code. The 1D code, described in Appendix A of <cit.>, employs the same EOS and opacity tables as the Stagger code. At solar effective temperature and surface gravity, we found that the 1D temperature structures evaluated from opacities with wavelength resolutions λ / Δλ ≈ 289 530 and λ / Δλ ≈ 36 200 differ by less than 0.1% in the stellar atmosphere, which translates to an error of less than 5 K in the optically thin regime. This error estimate implies that, given the opacity binning method used in this work (cf. Sect. <ref>), our adopted wavelength sampling is sufficient for obtaining a reliable temperature structure.
We compared the Rosseland and Planck mean extinction coefficients with corresponding results from other opacity datasets, and found reasonable agreement in general (cf. Appendix <ref> for quantitative comparisons).
The Rosseland mean extinction coefficient was calculated as
α_Ross = [ ∫_λ_min^λ_max (∂ B(T,λ)/∂ T) dλ ] / [ ∫_λ_min^λ_max (1/α_tot(ρ,T,λ)) (∂ B(T,λ)/∂ T) dλ ],
and the Planck mean extinction coefficient as
α_Pl = [ ∫_λ_min^λ_max α_tot(ρ,T,λ) B(T,λ) dλ ] / [ ∫_λ_min^λ_max B(T,λ) dλ ],
where B is the Planck function,
B = (2hc^2/λ^5) · 1/[ exp( hc/(k_B T λ) ) - 1 ],
where c, h and k_B are the speed of light, Planck constant and Boltzmann constant, respectively. Numerically, the integrals in Eqs. (<ref>) and (<ref>) are discretised by the trapezoid rule, with the lower (upper) limit being the shortest (longest) wavelength point computed by , which is λ_min = 50 nm (λ_max = 50 μ m) throughout our calculation. The thus evaluated mean extinction coefficients, together with continuum extinction coefficients (continuum absorption plus scattering) at 500 nm, are then interpolated to (ρ,T) combinations that correspond to the EOS (ρ,e_m) grid and stored in the EOS table as auxiliary quantities for the post-processing of simulation data (not used by the code).
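The two means translate almost literally into code; the sketch below evaluates them with a trapezoidal quadrature on a toy monochromatic opacity run (all inputs are placeholders, and the wavelength grid is much coarser than the one used in this work).

```python
# Rosseland and Planck mean extinction coefficients by trapezoidal integration,
# transcribing the formulas in the text; alpha_tot(lambda) here is a toy placeholder.
import numpy as np

c = 2.99792458e10        # cm/s
h = 6.62607015e-27       # erg s
k_B = 1.380649e-16       # erg/K

def planck(T, lam):
    """Planck function B_lambda (CGS units, lam in cm)."""
    return 2.0 * h * c**2 / lam**5 / (np.exp(h * c / (k_B * T * lam)) - 1.0)

def dplanck_dT(T, lam, dT=1.0):
    """Numerical derivative dB/dT (simple central difference)."""
    return (planck(T + dT, lam) - planck(T - dT, lam)) / (2.0 * dT)

# Wavelength grid: 50 nm to 50 micron, logarithmically spaced (coarser than the text).
lam = np.logspace(np.log10(50e-7), np.log10(50e-4), 2000)   # cm
T = 6000.0
alpha_tot = 1e-7 * (lam / lam[0]) ** -1.5                   # placeholder opacity run

w = dplanck_dT(T, lam)
alpha_ross = np.trapz(w, lam) / np.trapz(w / alpha_tot, lam)
alpha_planck = np.trapz(alpha_tot * planck(T, lam), lam) / np.trapz(planck(T, lam), lam)
print(f"Rosseland mean: {alpha_ross:.3e}   Planck mean: {alpha_planck:.3e}")
```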
§.§.§ Opacity binning
In radiative hydrodynamical simulations, solving the radiative transfer equation across the 3D simulation domain, at every time step, in about 10 different directions for a large number of wavelengths is computationally demanding. In order to make the problem computationally feasible, the Stagger code adopts the opacity binning method (also called the multi-group method).
With this method we divide monochromatic opacities into multiple groups based on wavelength and opacity, or more precisely the approximate formation depth in a given model atmosphere. In each group, monochromatic opacities are appropriately averaged and treated as a “single wavelength” in the radiative transfer calculation, thereby reducing the workload enormously. We elaborate on our opacity binning procedure below and refer the readers to <cit.> for more information. A detailed study on different opacity binning approaches and their accuracy can be found in <cit.>.
Apart from the opacity data, a stellar atmosphere model is required for the binning process. In our implementation the horizontal- and time-averaged 3D model (hereinafter the ⟨3D⟩ model) was used, which implies that opacity binning is an iterative process, as the adopted ⟨3D⟩ model affects the binned opacities, and the latter in turn alter the stratification of the 3D atmosphere model.
Monochromatic absorption and total extinction coefficients as well as the Rosseland mean extinction coefficients computed at low and high temperature regions were first merged as described in Sect. <ref> then interpolated to the densities and temperatures of the 3D model. Subsequently, we calculated the Rosseland optical depth τ_ Ross and monochromatic optical depth τ_λ according to the interpolated α_ Ross and α_ tot,λ, respectively. This is to obtain the Rosseland optical depth where monochromatic optical depth is unity, i.e. τ_ Ross(τ_λ = 1), which indicates the approximate location where flux emerges (also the approximate formation depth of lines). For a given selection of opacity bins demonstrated in Fig. <ref>, all wavelength points were assigned to an opacity bin based on their wavelength and τ_ Ross(τ_λ = 1) value.
We note that the organisation (not the exact location of boundaries) of opacity bins in Fig. <ref> is the same as what was adopted in the 3D solar model of <cit.>, which was well-tested against various observational constraints.
For each bin, averaged extinction coefficients were computed by integrating over wavelengths that belong to that bin. In the optically thick region, the Rosseland mean, α_ Ross,i, as defined in Eq. (<ref>) but integrating over the wavelengths and transitions
included in the ith bin,
is a good representation of the mean extinction coefficient within that bin.
In the optically thin part, however, the radiative flux cannot be described by the diffusion equation. Inspired by the fact that the divergence of monochromatic radiative flux is proportional to α_ tot,λ (J_λ - B_λ) in LTE, where J denotes the monochromatic mean intensity (Eq. (<ref>)), and considering that the absorption processes (characterized by J) are usually stronger than emission processes (characterized by the source function B in LTE) in stellar atmospheres (see e.g. Fig. 5 of ), we used J as the weighting function in the optically thin part <cit.>.
Also, scattering was excluded from the extinction coefficient when calculating mean opacities in the optically thin region. This is referred to as the no-scattering-in-streaming-regime approximation. The purpose of this modification is to approximate the temperature structure predicted by surface convection simulations with continuum scattering processes properly treated in the radiative transfer calculation using simulations with LTE radiative transfer <cit.>. Although the inclusion of continuum scattering in the extinction coefficient has little impact on the temperature structure of the 3D solar model ( Fig. 7), <cit.> demonstrated that for metal-poor stars, the no-scattering-in-streaming-regime approximation gives good agreements with the correct solution where scattering is included in the modelling, whereas using the total extinction coefficient in the optically thin region overheats the stellar atmosphere. In order to be consistent with future non-solar metallicity simulations, the no-scattering-in-streaming-regime approximation is adopted in this work. The same approximation was also used in the construction of the -grid ( Sect. 2.1.5).
The mean extinction coefficient in the optically thin regime is therefore
α_J,i = [ ∫_λ∈ bin i α_ab J dλ ] / [ ∫_λ∈ bin i J dλ ],
with i = 1,2,...,12 indicates a specific opacity bin. Note that, as discussed above, scattering is excluded from the integrand. The integration is carried out with the midpoint (or rectangle) method. We have verified that the midpoint integration rule is the preferred choice for the opacity binning problem, as it reproduces very well α_J,i and α_ Ross,i obtained from monochromatic opacities with high wavelength resolution for all opacity bins. Higher-order integration methods such as the trapezoid and Simpson's rule, however, will lead to large errors in the mean extinction coefficient for some bins and result in incorrect temperature structure in the solar atmosphere.
The two types of mean extinction coefficients were blended together with an exponential bridging function to obtain the final combined bin-averaged extinction coefficient, which reads
α_bin,i = e^(-2τ_Ross,i) α_J,i + (1 - e^(-2τ_Ross,i)) α_Ross,i,
with α_ Ross,i being the Rosseland mean extinction coefficient evaluated from wavelengths belonging to bin i and τ_ Ross,i the optical depth based on α_ Ross,i.
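Putting the pieces together, the bin-averaged extinction coefficient for a single bin can be sketched as follows, with midpoint sums for the intra-bin integrals and the exponential bridging between the J-weighted and Rosseland means; all monochromatic inputs are placeholders.

```python
# Bin-averaged extinction coefficient for one opacity bin: J-weighted mean in the
# optically thin part, Rosseland mean in the thick part, blended with exp(-2 tau).
# All monochromatic inputs (absorption, total opacity, J, dB/dT) are placeholders
# defined on the wavelengths belonging to this bin, at a single depth point.
import numpy as np

def alpha_bin(alpha_ab, alpha_tot, J, dB_dT, dlam, tau_ross_bin):
    # Midpoint-rule integrals over the wavelengths in the bin.
    alpha_J = np.sum(alpha_ab * J * dlam) / np.sum(J * dlam)              # streaming regime
    alpha_ross = np.sum(dB_dT * dlam) / np.sum(dB_dT / alpha_tot * dlam)  # diffusion regime
    w = np.exp(-2.0 * tau_ross_bin)
    return w * alpha_J + (1.0 - w) * alpha_ross

# Toy bin with 100 wavelength points at a single depth point.
rng = np.random.default_rng(3)
nlam = 100
dlam = np.full(nlam, 1.0)                     # wavelength interval widths (arbitrary units)
alpha_tot = rng.lognormal(mean=-2.0, size=nlam)
alpha_ab = 0.9 * alpha_tot                    # scattering excluded from the thin-regime mean
J = rng.uniform(0.5, 1.5, size=nlam)
dB_dT = rng.uniform(0.5, 1.5, size=nlam)

print("alpha_bin near the surface :", alpha_bin(alpha_ab, alpha_tot, J, dB_dT, dlam, 1e-3))
print("alpha_bin deep in the star :", alpha_bin(alpha_ab, alpha_tot, J, dB_dT, dlam, 10.0))
```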
The bin-averaged extinction coefficients for two selected opacity bins are depicted in Fig. <ref>.
These are typical of opacity bins including optical and near-infrared wavelengths that form around the optical surface, and of opacity bins including ultraviolet wavelengths that form in higher layers.
Red solid lines stand for α_ bin, i used in this work, whereas black dotted lines represent extinction coefficients calculated from the binning code implemented in <cit.> with the <cit.> opacity dataset (see also Appendix <ref>), which corresponds to binned extinction coefficients adopted in previous simulations.
The α_ bin, i results from the previous and our new calculation agree reasonably well for both bin 3 and bin 5. The small difference seen in Fig. <ref> is due to different opacity datasets adopted, as the opacity binning procedure is identical between this work and <cit.>.
To obtain the mean intensity in Eq. (<ref>), we solved the radiative transfer equation in the 1D plane-parallel ⟨3D⟩ model under the assumption of LTE:
μdI_λ/dτ_λ = I_λ - B_λ,
where μ = cosθ represents the polar angle along which the equation is solved, I_λ is the monochromatic intensity, and τ_λ is the vertical monochromatic optical depth.
Integrating I_λ over the polar angle gives the mean intensity,
J_λ = 1/2∫_-1^1 I_λ dμ.
The 1D LTE radiative transfer problem was solved using a modified <cit.> technique developed by <cit.> and <cit.>, which rearranges the Feautrier transport equation and solves for Q_λ = (1/2) [I_λ(μ) + I_λ(-μ)] - B_λ in order to improve the numerical accuracy at large optical depths. This is the same numerical technique as is implemented in the 3D radiative transfer solver of the code. The integration in Eq. (<ref>) was approximated with the Gaussian-Legendre quadrature with five polar angles, which is sufficient for this problem as increasing the number of μ-angles hardly changes the resulting mean intensity.
Reducing a large number of wavelengths to 12 opacity bins in the radiative transfer calculation is a significant simplification. To examine how accurate the opacity binning method is, we compared the radiative heating (or cooling) rate computed from bin-averaged quantities
q_ rad,b = 4π∑_i=1^12α_ bin,i (J_i - B_i)
with the monochromatic solution
q_ rad,m = 4π∫_λ_min^λ_maxα_ tot, λ(J_λ - B_λ) dλ.
Here B_i = ∫_λ∈ bin i B_λ dλ is the Planck function integrated over a given bin, and J_i, the mean intensity of bin i, was obtained from Eqs. (<ref>) and (<ref>) with source function B_i and optical depth computed via α_ bin,i. The heating rate is a crucial outcome of the radiative transfer process because it enters directly into the energy equation thereby influencing the temperature stratification of the model. The difference in q_ rad is quantified by max| q_ rad,b - q_ rad,m| / max| q_ rad,m| <cit.>, which is the relative difference at the cooling peak in most cases.
In addition to q_ rad, we examined how well the opacity binning method reproduces the radiative flux, as it determines the effective temperature of the model. The radiative flux (F_ rad) was calculated by integrating the heating rate from the bottom (stellar interior) to the top (atmosphere) along the 3D model. Thus, one could realise that differences in F_ rad are a manifestation of differences in q_ rad. Nevertheless, comparing F_ rad probes the mean differences in q_ rad rather than at a particular location. In short, we employed a combination of max| q_ rad,b - q_ rad,m| / max| q_ rad,m| and the relative difference in F_ rad as an indicator of the accuracy of opacity binning:
Δ_b,m = max| q_rad,b - q_rad,m | / max| q_rad,m | + | (F_rad,b - F_rad,m) / F_rad,m |.
As a criterion of the realism of the opacity binning method, Δ_ b,m was utilised to select the “best” binning configuration for a given model atmosphere and opacity data: The “best selection” of opacity bins corresponds to the global minimum of Δ_ b,m. In practice, we iteratively adjusted the location of bin boundaries and computed corresponding Δ_ b,m. The optimisation problem was tackled with Powell's method (cf., for example, Sect. 10.5) with the location of bin boundaries as multi-dimensional variables and Δ_ b,m the minimisation target.
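Schematically, the boundary optimisation can be set up as below with SciPy's Powell minimiser; the objective shown is only a stand-in for the actual Δ_b,m evaluation, which requires the binned and monochromatic radiative transfer solutions described above.

```python
# Optimize opacity-bin boundary locations with Powell's method (schematic).
# delta_bm() is a stand-in: in practice it recomputes binned opacities for the
# proposed boundaries, solves the radiative transfer, and returns
# max|q_b - q_m| / max|q_m| + |(F_b - F_m) / F_m|.
import numpy as np
from scipy.optimize import minimize

def delta_bm(boundaries):
    # Placeholder objective with a known minimum, standing in for the real Delta_b,m.
    target = np.array([-0.5, 0.0, 0.5, 1.5])          # "true" boundary locations (toy)
    return float(np.sum((np.sort(boundaries) - target) ** 2))

x0 = np.array([-1.0, -0.2, 0.8, 2.0])                 # initial guess for the bin boundaries
result = minimize(delta_bm, x0, method="Powell")

print("optimized boundaries:", np.sort(result.x))
print("final objective     :", result.fun)
```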
Our preferred bin selection for model (Table <ref>) obtained from minimising Δ_ b,m is illustrated in Fig. <ref>.
Because monochromatic extinction coefficients for the AGSS09 and AAG21 abundance are close to each other at most wavelengths, the optimised location of bin boundaries for model is nearly identical to that of the model.
We caution that for multi-variable optimisation, it is very challenging to find the global minimum, therefore our preferred bin selections might represent only the local minimum of Δ_ b,m and affected by our initial guess.
A comparison of q_rad and F_rad between the opacity binning and the monochromatic calculation is presented in Fig. <ref>. In the case of the ⟨3D⟩ model, the relative difference of q_rad at the cooling peak (in this case the same as max| q_rad,b - q_rad,m| / max| q_rad,m|) is 2.42%, which is at a similar accuracy level to <cit.>. The relative difference in surface flux (defined as the radiative flux at logτ_Ross = -4) is 1.49%, which translates to a 0.37% relative difference and an ≈ 21 K absolute difference in effective temperature.
Nevertheless, we note that errors in surface flux presented here are merely illustrative. Owing to the non-linear, turbulent nature of 3D surface convection simulations, it is possible that the true error of binning is larger than the estimation based on the 3D model. True errors in flux can be determined by synthesising the flux spectrum with 3D model atmosphere and comparing it with observation.
§ SOLAR ATMOSPHERE MODEL
The EOS and opacities described in Sect. <ref> were incorporated in the code as basic input physics for our 3D solar model atmospheres. The code <cit.> is a radiation-magnetohydrodynamics code that solves the time-dependent equation of mass, momentum and energy conservation, the magnetic-field induction equation, as well as the radiative transfer equation on a 3D staggered Eulerian mesh.
Solar models in this study have been constructed without magnetic fields.
Radiative energy transport was modelled in LTE. The equation of radiative transfer with the Planck function as source function (Eq. (<ref>)) was solved with a modified <cit.> technique <cit.> for all mesh points above τ_ Ross = 500 at every time-step of the simulation. The frequency dependence of the radiative transfer problem was approximated by the opacity binning method detailed in Sect. <ref>, where the layout of 12 opacity bins was optimised individually for each solar model. Spatially, the radiative transfer equation was solved along nine different directions which consist of one vertical and eight inclined directions representing the combination of two polar angles and four azimuthal angles. The integration over the polar angle was approximated by the Gauss-Radau quadrature. The thus evaluated radiative heating rate enters the equation of energy conservation and meanwhile was used to compute the radiative flux and the effective temperature of the model.
The two new 3D solar atmosphere models presented in this work are labelled and . The former adopts the AGSS09 solar chemical composition while the latter uses the recent AAG21 abundance. Both model atmospheres were constructed based on the reference solar effective temperature and surface gravity given by <cit.>. Their basic configurations are summarised in Table <ref>. In addition, in the subsequent sections we present comparisons of these two models with that of an older model (i.e. with the same input physics as used in the -grid). This model has been used in previous studies (e.g. ). We refer to as hereafter.
The simulation domain is discretised on a Cartesian mesh located around the solar photosphere (with coordinates x,y,z where y denotes the vertical dimension). For both models, the distribution of mesh is identical to that used in . The horizontal extent of the simulations is 6 Mm × 6 Mm with 240 mesh points evenly distributed in each direction, which is large enough to enclose at least ten granules at any time of the simulation <cit.>.
There are 240 mesh points in the vertical direction, where five layers at the top and bottom of the simulation domain are reserved as the so-called “ghost-zone” to ensure that vertical boundary conditions fulfil the six-order numerical differentiation scheme employed in the code ( Sect. 2.2). The remaining 230 vertical meshes constitute the “physical domain” of the simulation and will be equated with the simulation domain hereinafter.
The vertical size of our simulations is 3.6 Mm excluding ghost zones, which extends from 2.7 Mm below the base of the photosphere (the near-surface convection layers) to 0.9 Mm above it (the bottom of the chromosphere). This corresponds roughly to the outer 0.5% of the solar radius. Because the vertical scale of the simulations is tiny compared to the solar radius, spherical effects are negligible and the surface gravity can be used in the entire simulation domain. The 230 vertical mesh points are not evenly placed: the finest numerical resolution is applied around the optical surface in order to resolve the steep transition from the optically thick to thin regime (see Fig. 2 for an illustration). Given the size of the simulation box, the 240^2 × 230 numerical resolution was verified to be adequate for line formation calculations <cit.> which is the main application of our models.
The boundaries are periodic in the horizontal directions, while open in the vertical <cit.>. At the bottom boundary, outgoing flows (vertical velocities towards the stellar centre) freely carry their entropy fluctuations out of the simulation domain, whereas constant entropy and thermal (gas plus radiation) pressure is enforced for incoming flows. Temporally, our simulations span 200 solar minutes, with one snapshot stored every 30 seconds solar time. All these simulation snapshots were generated after numerical relaxation procedures described in Sect. 2.3 of <cit.>.
We note that except for the updates to the input physics, the setup and mesh properties of and simulations are practically identical to model , which is well-tested against other 3D solar atmosphere models and observational constraints <cit.>.[To be precise, the solar model atmosphere presented in <cit.> and P2013 is slightly different from model : The former two were generated with an older version of the code and use the <cit.> chemical composition, while the latter adopts the AGSS09 abundance. Nevertheless, these two models are identical in all other aspects and the differences in their photospheric structure are minor.]
To this end, we verify the new atmosphere models by comparing them with model in Sects. <ref> and <ref>.
§.§ Spatially and temporally averaged model
Fig. <ref> shows the mean temperature structure for the two new models as well as model . Directly comparable are the and , as they are based on the same solar abundance.
Here, two different methods were used when averaging over space – the simple horizontal average and the average over layers of constant Rosseland optical depth (τ_ Ross-average). The simple horizontal averaged quantities were obtained by taking the mean value at given vertical geometric depths, in practice defined by the numerical mesh. The τ_ Ross-average was achieved by first computing the Rosseland optical depth for the entire simulation domain. For an arbitrary physical quantity f, this establishes an f - τ_ Ross relationship at every column of the simulation box. For all columns, the f(τ_ Ross) function was then interpolated to a reference optical depth frame. Taking the mean value for all interpolated f at a particular reference optical depth gives the τ_ Ross-averaged quantity. Spatially averaged quantities were then averaged over the whole time series of the simulation, i.e., every simulation snapshot, to obtain the horizontal- and time-averaged model. In this paper, we use symbols ..._ h and ..._τ to represent the spatial and temporal averaging over constant vertical geometric depth and Rosseland optical depth, respectively.
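To make the τ_ Ross-averaging procedure concrete, the following minimal MATLAB sketch interpolates one quantity onto a reference optical-depth grid column by column and then takes the horizontal mean; averaging its output over all snapshots gives the ..._τ stratification. All variable names and the assumed layout (f and tauRoss stored as Nx-by-Ny-by-Nz arrays with the third index running along the vertical) are our own illustrative choices, not the actual implementation.

function fTauAvg = tau_average(f, tauRoss, logTauRef)
    % horizontal average of f on surfaces of constant Rosseland optical depth
    [Nx, Ny, ~] = size(f);
    fInterp = zeros(Nx, Ny, numel(logTauRef));
    for ix = 1:Nx
        for iy = 1:Ny
            % f - tau_Ross relation of this column, interpolated to the
            % reference optical depth frame (tau assumed monotonic with depth)
            logTauCol = log10(squeeze(tauRoss(ix, iy, :)));
            fCol      = squeeze(f(ix, iy, :));
            fInterp(ix, iy, :) = interp1(logTauCol, fCol, logTauRef, 'linear');
        end
    end
    fTauAvg = squeeze(mean(mean(fInterp, 1), 2));   % mean over the horizontal plane
end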
We note that there are other ways to average 3D models. However, the focus of this section is to compare the mean structure of new models with the reference model : We aim neither to compare different averaging methods nor to determine the suitable averaging method for a certain application. We refer the readers to <cit.> for a thorough investigation in this direction.
We can see from the middle and bottom left panel of Fig. <ref> that around zero geometric depth, T_ h of and differ by more than 100 K (relative difference about 3%). This discrepancy arises because model and (also ) adopt identical geometric depth scale but are computed with different opacities. Therefore, their optical surfaces correspond to slightly different geometric depths. Due to the large temperature gradient in the convective-radiative transition zone, a small mismatch in the placement of the optical surface will cause a considerable temperature difference in the geometric depth scale. To this end, a more sensible approach is to compare the averaged temperature profile based on the optical depth scale.
From the right panels of Fig. <ref>, it is clear that and have similar mean temperature structure in general: their τ_ Ross- and time-averaged temperature differs by less than 25 K above logτ_ Ross∼ -3. The absolute temperature differences reach up to ∼ 100 K at logτ_ Ross = -5. Nevertheless, the highly turbulent outermost layers of the simulation are likely the least realistic given our neglect of magnetic fields.
In Appendix <ref>, we isolate the impact of EOSs on the mean temperature structure of 3D solar models by constructing two nearly identical models that differ solely in their input EOS. It turns out that using MHD or will lead to ∼ 15 K temperature difference in optically thick layers. However, in most parts of the optically thin regime, averaged temperatures between models with MHD and agree within 5 K, suggesting that temperature differences shown in the right panels of Fig. <ref> are primarily attributed to different opacity data and the selection of opacity bins between and .
Here we emphasise that a careful selection of opacity bins is of great importance to obtain a reliable temperature structure. From our experience, different binning configurations could affect the averaged temperature profile by more than 50 K in the modelled solar atmosphere. Although the exact number depends on the location of the atmosphere where the temperature difference is measured, the impact of binning configuration is clearly non-negligible, to a degree that is much stronger than the EOS effect especially in the optically thin region.
We note that <cit.> modelled the atmosphere of a metal-poor red giant with the opacity binning method and reached the same conclusion that an erroneous selection of bins leads to about 100 K temperature discrepancy in the stellar atmosphere (see their Fig. 9).
As shown in Fig. <ref>, the three solar models give similar mean vertical velocity profiles. Particularly notable is the large upward, mean vertical velocity just below the photosphere, which is a consequence of surface convection. The upflows and downflows that form the observed solar granulation pattern must have the same absolute momentum.
We can construct a toy model that assumes all upflows (downflows) have identical density ρ_ up (ρ_ dn) and vertical velocity v_ up (v_ dn), leading to an equation for the conservation of momentum,
a ρ_ up v_ up + b ρ_ dn v_ dn = 0.
Here a and b are the fractional areas covered by upflows and downflows (the “filling factors”), respectively. A simple rearrangement of Eq. (<ref>) gives
a v_ up + b v_ dn = -b ( ρ_ dn/ρ_ up - 1 ) v_ dn.
The left hand side of Eq. (<ref>) is the mean vertical velocity. Because the density is typically higher in downflows ( Fig. 10), the mean vertical velocity has the same direction as the upflows as depicted in Fig. <ref>. The magnitude of v_y_ h reflects the asymmetry between upflows and downflows. The strongest asymmetry is found in the convective region just below the optical surface, where the vertical velocity fluctuations, represented by σ_v_y, are also the largest (Fig. <ref>).
The ratio of turbulent to thermal pressure, which is a proxy for vertical velocity fluctuations, is demonstrated in Fig. <ref>. Thermal pressures were evaluated from the EOS while turbulent pressures were computed via ( Sect. 3)
P_ turb = \overline{ρ( v_y - \overline{ρ v_y}/\overline{ρ})^2},
where overlines stand for horizontal (but not temporal) averaging. At most vertical layers, the three solar models agree well in terms of P_ turb_ h / P_ ther_ h.
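As an illustration, the horizontally averaged turbulent-to-thermal pressure ratio of a single snapshot can be evaluated with a few lines of MATLAB of the following form; rho, vy and the horizontally averaged thermal pressure PtherH are assumed inputs (Nx-by-Ny-by-Nz arrays and an Nz-by-1 profile, respectively), and the snippet relies on MATLAB's implicit expansion.

vyMass = sum(sum(rho .* vy, 1), 2) ./ sum(sum(rho, 1), 2);   % density-weighted mean v_y per layer
Pturb  = mean(mean(rho .* (vy - vyMass).^2, 1), 2);          % turbulent pressure, 1-by-1-by-Nz
ratio  = squeeze(Pturb) ./ PtherH(:);                        % P_turb/P_ther profile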
For our plane-parallel radiative-hydrodynamical simulations, horizontally averaged fluid properties averaged over sufficiently long stellar time should fulfil the equation of hydrostatic equilibrium ( Appendix A.2). We check how close the 3D_h models are to hydrostatic equilibrium in Fig. <ref>, and it turns out that hydrostatic equilibrium is fulfilled at most parts for all three solar simulations. However, we observe deviations from hydrostatic equilibrium in the uppermost layers for all solar models considered, which indicates momentum is not conserved at the top boundary. Nevertheless, the top boundary has little impact on the stratification of the 3D model because of the low density there.
Meanwhile, we note that it is the total (thermal plus turbulent) pressure that enters into the equation of hydrostatic equilibrium, as detailed in <cit.>.
§.§ Distribution of intensity and vertical velocity
Checking the mean stratification provides an intuitive overview of 3D models, but meanwhile wipes out fluctuations across the horizontal plane. In this section, we will scrutinize the distribution of key simulation properties at selected horizontal planes, which captures the inhomogeneity in the convective motions.
One of the main breakthroughs brought by surface convection simulations is that they revealed how convection operates in the convective-radiative boundary layers of stars. In the photosphere, fluid elements rapidly lose their heat to radiation and become denser than their surroundings. The overdense material is pulled down by negative buoyancy through the optical surface forming the intergranular lanes. Below the surface, conservation of mass forces the lower-density, warmer material to rise back through the optical surface, forming the so-called granules (cf. for detailed description).
The distribution of emergent intensity is a direct reflection of the radiation field in granules and intergranular lanes, which originated from upflows and downflows at different heights of the atmosphere.
Here we compare the disk-centre bolometric intensity distribution of and with the reference model . The distribution of bolometric intensity across the simulation domain is shown as a histogram of normalised intensity I/⟨I⟩, where ⟨I⟩ is the mean bolometric intensity. In all cases, 30 equidistant bins were assigned between I/⟨I⟩ = 0.4 and 1.6 for the evaluation of the distribution function. The time-averaged distribution shown in Fig. <ref> was obtained by computing the normalised distribution function for every simulation snapshot and then averaging over all snapshots. The intensity distribution of the new solar models agrees well with model , all showing a bimodal distribution with a primary peak located at I/⟨I⟩≈ 0.9 that corresponds to intergranular lanes, and a secondary peak at a higher intensity I/⟨I⟩≈ 1.1. However, the new models predict slightly higher peaks around I/⟨I⟩ = 0.9.
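The time-averaged distribution function described here can be sketched in MATLAB as follows; the array name I (Nx-by-Ny-by-Nt disk-centre bolometric intensities) and the use of histcounts are our own illustrative assumptions, not the actual post-processing code.

edges  = linspace(0.4, 1.6, 31);                 % 30 equidistant bins in I/⟨I⟩
pdfSum = zeros(1, 30);
Nt     = size(I, 3);
for it = 1:Nt
    Isnap  = I(:, :, it);
    Inorm  = Isnap(:) / mean(Isnap(:));          % normalise by the snapshot mean
    pdfSum = pdfSum + histcounts(Inorm, edges, 'Normalization', 'pdf');
end
pdfTimeAvg = pdfSum / Nt;                        % distribution averaged over all snapshots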
The area coverage and the strength of upflows and downflows is revealed by the distribution of vertical velocities. For each simulation snapshot, vertical velocities at each column were interpolated to τ_ Ross = 2/3 to obtain the velocity distribution at the vicinity of the optical surface.[For the Eddington grey atmosphere, the location of the optical surface is strictly τ = 2/3. However, this is generally not exact for realistic model atmospheres.] Averaging over all distribution functions gives the time-averaged velocity distribution shown in Fig. <ref>. Similar to the case of intensity, the distribution function of vertical velocity appears to be bimodal, where the primary peak corresponds to upflow. It is worth noting that the distribution function confirms the visual impression of the right panel of Fig. <ref> that upflow fills more area in the simulation domain.
§ COMPARISON WITH OBSERVATIONS
The best way of examining the fidelity of stellar models is to compare model predictions with observables. In this section, we compute the absolute flux spectrum (Sect. <ref>), the centre-to-limb variations (Sect. <ref>) and hydrogen lines (Sect. <ref>) from the new solar model atmospheres. All modelling results are compared with solar observations as well as theoretical predictions presented in P2013, which is based on a well-established solar atmosphere model computed with the code and the <cit.> abundance. We name this model to avoid confusion with model mentioned in Sect. <ref> (see footnote <ref>).
The spectrum synthesis was carried out using the 3D non-LTE radiative transfer code <cit.>, a branch of <cit.> with updates for example to the formal solver <cit.> and in particular to the EOS and opacities as discussed in Sect. <ref>. In this work, identical abundances and opacity data were employed in and the surface convection simulation.
The calculations follow what was presented in <cit.> and employ the same model atom.
Previous investigations have indicated that departures from LTE have non-negligible effects on the wings of Balmer lines (particularly Hα, see e.g., Fig. 7 of P2013 and Fig. 4 of ): In the solar case, Balmer lines computed in non-LTE show weaker wings than in LTE.
§.§ Absolute flux spectrum
The emergent flux spectrum (or spectral energy distribution) plays an important role in stellar physics. Theoretical flux spectra generated from model atmospheres can be applied to, for example: calculate synthetic photometry <cit.>, determine stellar parameters <cit.> and derive interstellar extinctions <cit.>. Previous investigations have demonstrated that 3D model atmospheres are able to produce realistic absolute flux spectra for the Sun <cit.>. It is therefore worth checking how our new models perform in this respect.
Here we first compare the continuum flux spectrum predicted by our new models with that of model . Fig. <ref> shows that except for wavelengths slightly above the Balmer jump (≈ 364.5 nm), continuum flux spectra computed from the two new models and agree well with each other, indicating that the temperature stratifications of the three models are close to each other around the optical surface.
The differences to the red of the Balmer jump can be attributed to the treatment of dissolved Rydberg states as implemented in the module of <cit.>, which leads to a smooth decay of the continuous opacity instead of a sharp transition.
This is also apparent in the continuum centre-to-limb variation discussed in Sect. <ref>.
Comparing the synthesised continuum flux with observation is challenging owing to the difficulty of deriving the continuum level from irradiance data in the ultraviolet wavelength region (cf. Sect. 5 and P2013 Sect. 4). To this end, we elect to synthesise the absolute flux spectrum by incorporating the information of spectral lines into opacities used in the radiative transfer calculation and comparing with the solar irradiance data of <cit.> as well as the Solar Irradiance Reference Spectra of <cit.>. The latter was measured during the solar minimum in 2008.
For both and models, we computed the theoretical absolute flux spectrum with using identical opacity data employed in our 3D atmosphere modelling. The flux spectrum calculation was carried out from 300 to 1000 nm, at a wavelength resolution of λ / Δλ = 50 000. The thus obtained absolute flux spectrum is illustrated in the upper panel of Fig. <ref> for model .
Nevertheless, the absolute flux spectrum contains a forest of lines, impeding detailed comparison between simulation and observation. Therefore, we heavily smooth both the synthetic and the observed spectra using a 5 nm wavelength bin such that line features in the spectra are smoothed out. Comparing the smoothed absolute spectra examines the temperature structure of the 3D model, which sets the modelled continuum, as well as the overall reliability of our opacity data.
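One simple way to implement such a smoothing is a plain boxcar average over a 5 nm window on a uniform wavelength grid, as in the MATLAB sketch below; the variable names are ours and the actual smoothing used for the figures may differ in detail.

dlambda    = lambda(2) - lambda(1);     % wavelength step in nm (assumed uniform)
nbox       = round(5 / dlambda);        % number of points in a 5 nm window
fluxSmooth = movmean(flux, nbox);       % smoothed (synthetic or observed) flux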
The lower panels of Fig. <ref> show the smoothed flux spectra, along with the relative difference between , simulation and two sets of solar observations.
The agreement between (smoothed) synthesised and measured flux is satisfactory above ∼ 450 nm: Fractional differences between modelling and the <cit.> irradiance are below 3% in general; The difference between modelling and the <cit.> spectra is around 3%, which is close to the maximum uncertainty of the measurement (about 3.5% in the optical and near-infrared, see Sect. 3 of ). However, notable differences are found below ∼ 400 nm, where the predicted absolute fluxes are systematically larger than observation by more than 10% for both solar irradiance datasets.
As discussed in <cit.>, <cit.> and <cit.>, we suspect the discrepancy in the blue end of the spectra is due to missing opacities in the ultraviolet. We also note that several solar model atmospheres (both 1D and 3D) all predict more fluxes than the measured values below the Balmer jump <cit.>. Further investigations into the continuum and/or line opacities in the near ultraviolet region might be needed to improve the theoretical-observational consistency of absolute flux in this wavelength range.
§.§ Continuum centre-to-limb variations
The magnitude of stellar surface intensity depends on both wavelength and viewing angle. Recall from the Eddington-Barbier approximation that at a given wavelength, the surface intensity at the stellar limb emerges from a smaller optical depth (lower temperature) than the intensity at the disk centre, thereby appearing darker. A detailed understanding of the limb darkening phenomenon is necessary to accurately interpret the light curve of transit exoplanets <cit.>. Limb-darkening laws are also important for the determination of stellar radii via interferometry: the angular diameter of a star is obtained by fitting the limb-darkened stellar disc model to the visibility curve measured from interferometry <cit.>. Limb darkening can be quantified by the ratio of emergent intensity I_λ(μ) / I_λ(μ = 1), with μ = cosθ, where θ is the viewing angle relative to disk centre.
In this section, we compute the continuum centre-to-limb variation (CLV) for the new solar models and compare them with the corresponding observations. The continuum CLV reflects the temperature stratification in the continuum forming regions, and is therefore often used to check the realism of 3D atmosphere models (e.g., , P2013).
Theoretical emergent intensities for different wavelengths and angles were computed using . The setup of our radiative transfer calculation is detailed in Sect. <ref>.
Between 303.3 nm and 1098.9 nm, our theoretical predictions are compared with the observations of <cit.>, where the CLV was measured at multiple continuum wavelengths. Above 1100 nm, the observational data is taken from <cit.>. Results from model are also included in Fig. <ref> for reference.
Fig. <ref> reveals a general trend in continuum CLV: it is strong at shorter wavelengths while becoming less pronounced in the near-infrared. For all five angles considered, continuum CLVs predicted by the three 3D model atmospheres are almost indistinguishable at most wavelengths, indicating the temperature gradient between the three models is nearly identical around the optical surface.
Below 400 nm, predicted CLVs are systematically weaker (i.e. larger ratios) than observation. This discrepancy is likely associated with difficulties in determining the continuum level in this wavelength region. The near-ultraviolet regime is abundant in spectral lines. CLVs measured at selected wavelengths with finite bandwidths ( Sect. 2) might contain unaccounted lines which will affect the measured values.
From 400 nm to about 1300 nm, there is excellent agreement between all synthetic continuum CLVs and observations. At longer wavelengths (1400 ≲λ≲ 1800 nm) and closer to the limb (μ≤ 0.5), continuum CLVs predicted by 3D models are systematically stronger than measurements by about 1%. The discrepancy here is smaller for the and models. Overall, the new solar model atmospheres predict continuum CLVs that match well with measurements, performing even better than the solar model of P2013 in the near-infrared.
§.§ Hydrogen line profiles
The spectral lines of hydrogen, in particular the Balmer series,
are commonly used to derive the effective temperature for late-type stars owing to their relative insensitivity to the surface gravity and hydrogen abundance (e.g. ). They feature pronounced pressure-broadened wings that form across the lower atmosphere and the surface convection zone (-2 ≲logτ_ Ross≲ 1, Fig. 2 of ), while the line cores are formed in the chromosphere. As the wings of Balmer lines (especially Hβ and Hγ) form in relatively deep layers, their shape is affected by the near-surface convection process. Nevertheless, the wings are largely unaffected by Doppler broadening and shifts due to convective motions,
making them suitable probes of the temperature structure of the stellar atmosphere, including the surface convection zone <cit.>.
We follow P2013 in comparing our synthetic spectra to solar observations of the Hα, Hβ and Hγ Balmer lines, as well as the Paβ and Paγ Paschen lines.
The synthesised Balmer and Paschen line profiles are presented in Figs. <ref> and <ref>, together with the normalised solar flux atlases of <cit.> and <cit.> for comparison.
The grey-shaded regions in Fig. <ref> are the “line masks” for the Hα, Hβ and Hγ lines derived in <cit.>. The masks were carefully selected based on theoretical line lists and the observed solar spectrum (cf. Sect. 4.2.1 of for a detailed description) in order to highlight the unblended wavelength sections that reflect only the Balmer lines. This is particularly necessary for a clear identification of the observed Hβ and Hγ lines, as their wavelength regions suffer from severe line blending. The line masks were chosen to bracket the wavelength regions that are sensitive to the effective temperature. For Balmer lines, we will inspect how well our theoretical line profiles fit solar observations with the help of these masks.
Results based on the new 3D solar models and ab initio non-LTE radiative transfer calculations are in reasonable agreement with measured normalised fluxes for all hydrogen lines considered, indicating the temperature stratification from the surface convection zone and the lower atmosphere of our new models is realistic.
Nevertheless, neither the 3D solar model presented in P2013 nor the new models are able to predict line profiles that perfectly match observations for all Balmer and Paschen lines.
For the wings of the Hα line, models and predict almost identical profiles as the solar model of P2013, all being smaller than the measured normalised flux. On the other hand, the new solar models give rise to weaker Hβ lines particularly in the outer wings (≳ 0.4 nm away from the line core).
Although the wavelength region of the Hγ line is heavily blended, we found reasonable agreement between the synthesised and the measured line profile in the unblended region highlighted by the line masks.
As discussed in P2013, the discrepancy at the Hβ wings is not associated with the single line simplification in our line formation calculations, as including the effect of blends in the theoretical calculation hardly changes the overall magnitude of the Hβ wings. Input physics to the 3D atmosphere model is also unlikely to be the main cause of this discrepancy, because two sets of 3D solar models with distinct EOS and opacity both failed to perfectly reproduce the observed line profile. In short, the underlying reason why 3D models underestimate the strength of the Hβ wings is currently unknown.
For Paschen lines, the and models predict stronger wings compared with . The new solar models perform better in the case of Paβ line, achieving good agreement with the observed solar spectrum. Conversely, the Paγ line computed from the new models deviates a bit further from observations compared to model , being slightly stronger than observations especially in the blue wing.
To conclude, hydrogen lines computed based on the new 3D solar models agree with solar observations in general, with some synthetic lines (Hγ, Paβ) matching observations at a very satisfactory level while others (Hα, Hβ, Paγ) demonstrate small deviations. Meanwhile, synthetic lines computed with the and models are almost indistinguishable in all cases, implying that different versions of the solar chemical composition have little impact on the hydrogen line profile.
§ SUMMARY AND CONCLUSIONS
In this work, we constructed new 3D solar atmosphere models with the code, using EOS, opacity and solar compositions that are different from previous studies. We adopted the , an open-source EOS code based on the minimisation of the Helmholtz free energy. Thermodynamic quantities computed via the EOS were tabulated in a format compatible with the code.
Monochromatic extinction coefficients were computed from the opacity package. In the high temperature region, opacity calculations were based on for more accurate number densities of all atomic species and better consistency between the EOS and opacity code.
Monochromatic extinction coefficients were grouped into 12 different bins to be used in the 3D simulation. Following <cit.>, we excluded continuum scattering from the extinction coefficient when calculating the mean intensity weighted mean opacities (α_J) in the optically thin part (the no-scattering-in-streaming-regime approximation). For each opacity bin, the mean intensity weighted and the Rosseland mean extinction coefficients were merged to obtain the final bin-averaged extinction coefficients. The opacity binning procedure is identical to previous studies <cit.>.
It is worth noting that for all models constructed utilising the opacity binning method, the predicted surface flux and effective temperature differ from the monochromatic solution (see for an in-depth investigation of this problem). For solar models presented in this work, we have carefully optimised the organisation of opacity bins to minimise the error in radiative heating rates and surface flux.
3D solar atmosphere models were constructed with a recent version of the code <cit.>, based on the aforementioned input physics and the AGSS09 and AAG21 solar abundance. The simulations were properly relaxed and bottom boundary conditions carefully adjusted such that the effective temperature of the model is as close to the reference solar value as possible. The new models employ identical numerical mesh as the model of <cit.> and AAG21.
As this is the first time that the and opacity are implemented in simulations, and since the mean extinction coefficients given by show recognisable differences from our previous opacity choice (Figs. <ref> and <ref>), it is necessary to test the fidelity of the new models. We first checked the mean structure of the new models by comparing their spatially and temporally averaged quantities with those of the model.
It turns out that the new and the model agree well in terms of mean stratification – for all mean quantities studied, the relative differences are within a few percent in most parts of the atmosphere model. Larger discrepancies appear only in the outermost layers of the simulation, where the realism of the model is more uncertain due to other factors such as the magnetic field.
We subsequently examined the distribution of disk-centre bolometric intensity and vertical velocity near the optical surface of the new solar models, which reflects the area coverage and relative strength of upflows and downflows at the solar photosphere. Similar good agreements are achieved between the reference and the new solar models.
Our new solar model atmospheres are not only compared with model but also validated against various observational constraints. We carried out the radiative transfer post-processing of the new 3D models using the code, which employs identical opacity sources as the atmosphere model. The modelled absolute flux spectrum and continuum CLVs were compared with corresponding solar observations as well as results from a well-justified solar model of P2013 ().
Although different input physics are used in P2013 and this work, our theoretical results match observation well in both tests, performing even better in terms of continuum CLVs in the near-infrared region.
Moreover, we performed detailed non-LTE line formation calculations for five hydrogen lines with . We found that neither of the two new 3D models is able to perfectly reproduce the measured normalised fluxes for all hydrogen lines investigated. Nevertheless, considering the approximations (e.g. opacity binning) employed in the 3D modelling, and also the performance of 1D solar atmosphere models in this problem (see Fig. 8 of P2013), the wings of the synthetic lines predicted by the new 3D models fit reasonably well with the solar flux atlases, accomplishing similar level of realism as model .
To sum up, the new solar models are able to satisfactorily reproduce observations in all diagnostics, suggesting these ab initio simulations predict highly realistic temperature stratification at the top of the convective envelope and the lower atmosphere.
We also notice that the two new models with different solar abundance have very similar structures and predict nearly identical observables in all cases studied. This finding is in line with expectation, as the AGSS09 and AAG21 solar compositions are not drastically different from each other.
Having passed comprehensive tests against the model and observations, the validity of the new models as well as their underlying input physics can be confirmed. Therefore, the input physics developed in this work can be applied to the modelling of other stars with confidence.
This opens an exciting new path for studying stars with different α-element abundance, carbon-enhanced metal-poor stars and population II stars with peculiar chemical compositions, which we were incapable of modelling due to limitations on input physics: Previous studies on key benchmark metal-poor stars were often based on 3D model atmospheres with solar-scaled chemical composition and a fixed value of α enhancement (e.g. ). While this assumption is usually valid, the abundances of certain elements, including carbon, oxygen, and magnesium, have a strong influence on microphysics due to their contribution to either atmospheric opacities or electron density. Neglecting variations in their abundance can lead to undesirable systematic errors <cit.>. In the future, we plan to construct 3D model atmospheres with varying α enhancement, carbon-to-oxygen ratio, or abundance patterns tailored for individual cases, as more detailed atmosphere models might improve the accuracy of abundance determination thereby providing insights into the chemical evolution of the Milky Way.
The authors would like to thank the anonymous referee for their careful and thorough report
that improved the quality of this work. We also thank Åke Nordlund, Martin Asplund and Regner Trampedach for kindly answering many questions regarding EOS, opacities and 3D modelling. Lionel Bigot's help on opacity binning and intensity distribution is greatly appreciated. We are grateful to Regner Trampedach, Åke Nordlund, Jørgen Christensen-Dalsgaard, Cis Lagae, Luisa Rodríguez Díaz, Tiago Pereira and Karin Lind for reading and commenting on this manuscript. We thank also Sven Wedemeyer and Friedrich Kupka for valuable comments and fruitful discussions. Finally, we thank Alan Irwin for making the code publicly available.
YZ acknowledges support from the Carlsberg Foundation (grant agreement CF19-0649). AMA gratefully acknowledges support from the Swedish Research Council (VR 2020-03940). Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (grant agreement no.:DNRF106).
This research was supported by computational resources provided by the Australian Government through the National Computational Infrastructure (NCI) under the National Computational Merit Allocation Scheme and the ANU Merit Allocation Scheme (project y89). This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254).
§ COMPARISON BETWEEN MHD AND
To validate the results from , we compare key EOS outputs with the corresponding quantities from the (modified version of the) MHD EOS table, which was our adopted EOS in previous simulations (e.g. mentioned in Sect. <ref>, the solar model presented in P2013 and the -grid models of ). Fig. <ref> shows the comparisons of two of the output parameters – thermal pressure and internal energy per mass. Our comparison adequately covers the parameter space researchable by surface convection simulations of low-mass stars. For reference, the mean structure of the horizontal- and time-averaged 3D solar model is plotted in red in Fig. <ref> to indicate the approximate area of most interest.
As shown in Fig. <ref>, the agreement between MHD and is overall excellent, with the relative difference in key thermodynamic quantities far less than 1% over most of the (ρ,T) range considered. This is reassuring and indicates that our results are reasonable. Meanwhile, differences are found in both comparisons in the high-temperature (log(T / [K]) ≳ 4.5) and high-density (log(ρ / [g/cm^3]) ≳ -5) area. As the difference appears in both comparisons in the same region, it is likely that the area is subject to a difference in the general treatment of some physical aspect between the two EOS codes rather than to numerical issues.
<cit.> implemented a τ-correction (not to be confused with optical depth) in the MHD EOS in order to limit the otherwise diverging, first-order, Debye-Hückel term of the Coulomb pressure. In comparison with the OPAL EOS, <cit.> found the suppression by τ to be too strong, and obtained much better agreement between the two EOSs when it was left out completely. The was written and originally tested with respect to a version of the MHD EOS (and OPAL EOS) including the τ-correction.
Moreover, <cit.> showed the τ-correction to be especially relevant in areas of higher densities. As the observed differences lie at high densities, this suggests that the differences are, at least in part, due to the τ-correction still being implemented in the . However, modifying the code and changing the τ-correction implementation is beyond the scope of this work.
§ THE EFFECT OF EOS ON THE MEAN TEMPERATURE STRATIFICATION
To study how different EOSs affect the mean temperature structure of the 3D solar model atmosphere, we compare two 3D models based on () and MHD EOS but identical in all other aspects such as opacity data and numerical setup. For both models, their initial simulation snapshot was constructed from the same density and pressure datacube of a relaxed simulation, with respective EOS used to derive initial internal energy and temperature (see also Sect. 3). The initial simulation snapshot of the two models underwent identical relaxation processes, after which they were evolved for the same time duration for fair comparisons.
Spatially and temporally averaged temperatures for the two models, as well as their relative and absolute differences, are shown in Fig. <ref>. Using the MHD or could lead to about 15 K temperature difference around the optical surface and in the near-surface convective region. The impact of the EOS on the atmospheric temperature structure is smaller than 5 K for τ_ Ross≲ 0.1. This is in line with the conclusion of <cit.>, who investigated the effect of the EOS on the temperature structure of 2D solar near-surface convection simulations and found that different EOSs cause less than 10 K temperature difference in the atmosphere.
§ COMPARING MEAN OPACITIES
Relative differences between the Rosseland mean extinction coefficient from and that previously adopted in the code are shown in the left panel of Fig. <ref> for the AGSS09 solar abundance. The latter adopts a comprehensive source of continuum opacities as elaborated in <cit.> and <cit.>, while line opacities are taken from the MARCS package <cit.>.
The relative difference in Rosseland mean extinction coefficient is generally below 10% in our area of interest. However, the difference becomes large when log(T / [K]) ≲ 3.3 and when log T ≳ 4.2. The underlying reason for this disagreement is not clear. As the two types of opacities are calculated based on different atomic data, partition functions, continuum opacity sources, etc., and each factor could lead to the observed difference, investigating the source of the disagreement is beyond the scope of this work. Nevertheless, the areas where a large difference in α_ Ross is seen have little impact on our modelling, at least for the solar case, because (1) temperatures in the simulation domain hardly drop below 2000 K, and (2) the code no longer solves the radiative transfer equation in regions with very high temperature (that is, regions located well below the photosphere), because the diffusion limit of heat transfer is a good approximation there.
We also compared Rosseland mean extinction coefficients given by with the corresponding <cit.> table and found better agreement at the high-temperature, high-density regime (see the right panel of Fig. <ref>).
Because the Rosseland mean extinction coefficient is obtained by integrating α_ tot,λ^-1, it mainly reflects the contribution from low-value continuum opacities. The Planck mean extinction coefficient, on the other hand, is dominated by strong line opacities. In Fig. <ref>, we compare our Planck mean extinction coefficients at a given density with corresponding data from <cit.> and <cit.>. Below log T ∼ 3.4, the agreement between and <cit.> is good, but they are both smaller than the <cit.> result. The cause of this disagreement between and <cit.> is not clear, but it might be due to different H_2 O and TiO line lists adopted in and <cit.>. Above log T ∼ 3.4, Planck mean extinction coefficients predicted by agree well with <cit.>. However, the disagreement between and <cit.> within 3.4 ≲log T ≲ 3.7 is not understood.
|
http://arxiv.org/abs/2307.04688v1 | 20230710164303 | A tensorial-parallel Chebyshev method for a differential game theory problem | [
"Carmelo de Castro",
"Víctor Gatón",
"Beatriz Gómez"
] | math.NA | [
"math.NA",
"cs.NA",
"math.OC"
] |
A tensorial-parallel Chebyshev method for a differential game theory problem
Carmelo de Castro, Víctor Gatón, Beatriz Gómez
August 12, 2023
======================================================================
This paper concerns the design of a multidimensional Chebyshev interpolation based method for a differential game theory problem. In continuous game theory problems, it might be difficult to find analytical solutions, so numerical methods have to be applied. As the number of players grows, this may increase computational costs due to the curse of dimensionality. To handle this, several techniques may be applied, and parallelization can be employed to reduce the computational time cost. Chebyshev multidimensional interpolation allows efficient multiple evaluations simultaneously along several dimensions, so it can be employed to design a tensorial method which performs many computations at the same time. This method can also be adapted to handle parallel computation, and the combination of these techniques greatly reduces the total computational time cost. We show how this technique can be applied in a pollution differential game. Numerical results, including error behaviour and computational time cost, comparing this technique with a spline-parallelized method are also included.
Keywords: Transboundary pollution, Differential games, Parabolic differential equations, Chebyshev multidimensional interpolation.
§ INTRODUCTION
In differential game theory (see <cit.>), several agents (or players) jointly control, through their actions, a dynamical system described by differential state equations. The actions of the agents are taken in order to maximize a particular objective function (for each player) which outcome depends on the state of the system and the actions of other players. Differential game theory is broadly employed in many areas including, for example, economics, management, engineering and operations research.
In general, it might not be easy to find explicit solutions for differential game problems, even if we restrict ourselves to a small number of players, and numerical methods have to be employed (see <cit.> or <cit.>). If collocation methods are employed, as the number of players increases we have to deal with the so-called “curse of dimensionality”, which can greatly increase the computational cost of the numerical methods.
Spectral methods (see <cit.>) are a class of spatial discretizations for partial differential equations with an order of convergence that depends on the regularity of the function to be approximated. Spectral methods have been successfully employed in many fields and have been proved competitive with other alternatives, both in precision and computational time cost. For example, Chebyshev interpolation has been employed in <cit.> and <cit.> to price financial derivatives. In <cit.>, a Fourier cosine method is employed to solve backward stochastic differential equations. Other examples are <cit.>, <cit.> or <cit.>. In game theory and optimal control, spectral methods have also been employed. In <cit.>, a Chebyshev pseudospectral method is employed for obtaining a numerical solution of an open-loop Nash equilibrium and in <cit.> a Spectral Galerkin method is developed.
The literature on economic and environmental problems can be divided into two categories: papers which study economic growth theory with spatial diffusion (for example <cit.>, <cit.> or <cit.>), and papers which deal with the spatial dimension in environmental and resource economics (for example <cit.>, <cit.> or <cit.>).
The differential game that we are going to employ to test our numerical method is developed in <cit.>, and it corresponds to a model which combines two aspects: first, the spatial aspect to the transboundary pollution dynamic games and second, the strategic aspects to the spatial economics, in particular to the pollution control in a spatial setting.
The paper is organized as follows. In Section <ref> we make a brief description of the differential pollution game, which can be found in <cit.>, and we present the Chebyshev interpolation based algorithm that can be employed to numerically solve the game. In Section <ref>, we describe several algorithms that allow an efficient valuation of the polynomials involved and we show how the method can be extended to handle parallelization. Section <ref> gives some numerical results, including both numerical error behaviour and a comparison of the computational cost with the spline-based method which is developed in <cit.>. Finally, Section <ref> presents some concluding remarks.
All the algorithms presented in this work have been implemented in Matlab v2020b. All the numerical experiments have been performed in a personal computer with an Intel Core processor i7-8700K of 6 cores and 12 threads, with 3,70GHz(base)/4,70GHz(turbo) and 16Gb of RAM memory.
§ A POLLUTION DIFFERENTIAL GAME
The model is a J-player non cooperative differential game. Let Ω be a planar region with a given partition in J subdomains such that
Ω=⋃^J_j=1Ω_j, Ω_i∩Ω_j=∅, i≠ j,
where Ω denotes the closure of Ω.
Let ∂_ij be the common boundary between Ω_i and Ω_j, i.e.
∂_ij=∂Ω_i∩∂Ω_j=Ω_i∩Ω_j, i≠ j.
Each player i controls just region Ω_i and he can choose the rate of pollutant emissions in that region. The objective of each player is to maximize his own payoff.
Let u_i(x,t), i=1,...,J be the emission rate of subregion i, at time t≥0 and at point x∈Ω. Function P(x,t) denotes the stock of pollution defined ∀x∈Ω.
For scalar functions f:Ω→ℝ, symbol ∇ f corresponds to the spatial gradient and, for vectorial functions f:Ω→ℝ^2, symbol ∇· f=∂ f_1/∂ x+∂ f_2/∂ y represents the divergence.
The main objective in <cit.> was to study the spatial relation between decision makers. We are going to stick at the simplest model (no wind pollution transport, no non-linear reaction terms, simplest discrete-space model version...). More complex models, which might require further numerical treatment, will be considered in future research (see Section <ref>).
The following parabolic partial differential equation gives the spatio-temporal dynamics of the stock of pollution:
∂ P/∂ t=∇· (k∇ P)-cP+F(u), x∈Ω,
P(x,0)=P_0(x), x∈Ω,
α(x)P(x,t)+k(x)∇ P^T(x,t)n=α(x)P_b(x,t), x∈∂Ω,
where u=[u_1,...,u_J]^T is the vector of emission rates, k=k(x) is a local diffusion coefficient, which is assumed to be a smooth function such that k_m≤ k(x)≤ k_M, ∀x∈Ω and 0<k_m<k_M are given constants. This coefficient measures the velocity at which the stock of pollutant is diffused at a location x. The term cP, where c=c(x,t), represents the natural decay of the pollutant.
It is assumed that only agent j emits in subregion Ω_j, j=1,...,J and that each x∈Ω belongs to just one region. Therefore, the source term can be written as:
F(u(x,t))=∑^J_j=1F_j(u_j(x,t))1_Ω_j(x),
where F_j, j=1,...,J are smooth functions and 1_Ω_j is the characteristic function of Ω_j. By the hypothesis of the model, we have that F(u(x,t))=F_j(u_j(x,t)) if x∈Ω_j.
Concerning the boundary condition, α(x) is a non-negative smooth function that appears due to Newton's law of diffusion on the boundary of Ω.
The objective of player i, i=1,...,J is to maximize his payoff
J_i(u_1,...,u_J,P_0)=∫_0^+∞∫_Ω_ie^-ρ tG_i(u_1,...,u_J,P)dxdt,
subject to the dynamics given by (<ref>). Parameter ρ>0 is a given time-discount rate. The instantaneous welfare G_i is given by the benefit from consumption minus the damage caused by the stock of pollutants.
Each region i produces one consumption good, where the amount of production is controlled by player i, and such production produces emissions (pollution). Therefore, we can represent
G_i(u_1,...,u_J,P)=(B_i(u_i)-D_i(P))1_Ω_i
where B_i(u_i) corresponds to the instantaneous benefits from production and D_i(P) to the environmental damage caused by the accumulated stock of pollution. B_i and D_i are assumed to be smooth functions and respectively concave and convex in their arguments.
Now we proceed to describe the discrete-space version of the model. We only sketch the main ideas and we refer to Appendix B of <cit.> for the details.
Functions u_i, P_i are considered densities of emissions and pollution stocks along region Ω_i. We define
p_i(t)=1/m_i∫_Ω_iP(x,t)dx, v_i(t)=1/m_i∫_Ω_iu_i(x,t)dx, i=1,...,J
where m_i=∫_Ω_idx.
Under a linear-quadratic specification and an infinite-time horizon
F_i(v_1,...,v_J):=β_iv_i, G_i(v_1,...,v_J,p):=v_i(A_i-v_i/2)-φ_i/2p_i^2,
p=[p_1,...,p_J]^T, v_i=v_i(p), m_i=m_j, ∀ i,j=1,...,J,
and some calculus, the objective of player i is to maximize
J_i(v_1,...,v_J,p_0)=∫_0^+∞e^-ρ t(v_i(A_i-v_i/2)-φ_i/2p_i^2)dt,
subject to the dynamics of the aggregated stock of pollution given by the set of ordinary differential equations
m_i dp_i/dt=∑^J_j=0, j≠ i k_ij(p_i-p_j)-m_ic_ip_i+m_iF(v_i), i=1,...,J
supplemented with a given initial state of pollution p^0=[p_1^0,...,p_j^0]^T.
§.§ A Chebyshev-based numerical method
Let h>0 be a positive parameter, t_n=nh the discrete times defined for all positive integers n and δ_h=1-ρ h the discrete discount factor.
We denote by u̅_i, i=1,...,J a sequence of real numbers u̅_i={u_i,n}_n=0^∞ and 𝒰 denotes the set of real sequences v̅ with v_n≥0, ∀ n∈ℕ.
For p=[p_1,...,p_J]^T∈ℝ^J and u=[u_1,...,u_J]^T∈ℝ^J, u_i≥0, i=1,...,J, we define
g_i(p,u)=∑^J_j=0, j≠ i k_ij/m_i(p_i-p_j)-c_ip_i+F(u_i), i=1,...,J
and we denote g(p,u)=[g_1(p,u),...,g_J(p,u)]^T.
In the time-discrete infinite horizon game, each player i=1,...,J wants to maximize
W_i(u̅_i,p_0)=h∑^∞_n=1δ^n_hG_i(u_i,n,p_i,n), u̅_i∈𝒰,
subject to
p_n+1=p_n+hg(p_n,u_n), n≥ 0
where p_n=[p_1,n,...,p_J,n]^T, u_n=[u_1,n,...,u_J,n]^T and p_0 is a given initial state.
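As a hedged illustration (not the authors' code), a single explicit Euler update p_n+1=p_n+hg(p_n,u_n) with F_i(v_i)=β_i v_i can be written in MATLAB as follows; K is an assumed J-by-J matrix holding the coefficients k_ij, and any exchange term with the exterior (the index j=0 above) is omitted for simplicity.

function pNext = euler_step(p, u, h, K, m, c, beta)
    % p, u, m, c, beta: J-by-1 vectors; K(i,j) = k_ij (diagonal entries ignored)
    J = numel(p);
    g = zeros(J, 1);
    for i = 1:J
        idx  = [1:i-1, i+1:J];                               % j ~= i
        g(i) = sum(K(i, idx) .* (p(i) - p(idx)).') / m(i) ...
               - c(i) * p(i) + beta(i) * u(i);
    end
    pNext = p + h * g;
end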
The time-discrete value function V_h,i(p), i=1,...,J is obtained solving Bellman's equation
V_h,i(p)=u_i≥0max{hG_i(p_i,u_i)+δ_hV_h,i(p+hg(p,[u_i,u^*_-i]))}
where for i=1,...,J
u^*_i=u_i≥ 0argmax{hG_i(p_i,u_i)+δ_hV_h,i(p+hg(p,[u_i,u^*_-i]))}
and where, from now on, we employ the notation
[u_i,v_-i]=[v_1,...,v_i-1,u_i,v_i+1,...,v_J]^T, u_i∈ℝ, v∈ℝ^J.
We now present the main steps of a generalized collocation Chebyshev-based method. A review of Chebyshev interpolation and an implementation are presented in Section 3.
Step 0: Offline Computation
We define N_p=(N^p_1,...,N^p_J)∈ℕ^J and N_u=(N^u_1,...,N^u_J)∈ℕ^J, two J-dimensional vectors such that N^p_i, N^u_i>0, i=1,...,J.
With these J-dimensional vectors, we build two adequate sets of collocation points P⊂ℝ^J, U⊂ℝ^J (detailed in Section <ref>).
Let N_P=|P| and P={p̅_j∈ℝ^J, j=1,...,N_P}.
For each player i=1,...,J, we compute a Chebyshev interpolation polynomial in the control variables for every collocation node in the state variables, i.e. we compute
g^i_p̅_j(u), j=1,...,N_P,
which are N_P different interpolation polynomials in u, such that ∀ j=1,...,N_P it holds
g^i_p̅_j(u̅) =g_i(p̅_j,[u̅_i,u̅_-i]), ∀u̅∈U
We denote g_p̅_j(u)=[g^1_p̅_j(u),g^2_p̅_j(u),...,g^J_p̅_j(u)], j=1,2,...,N_P.
We compute some localization indexes (detailed in Section <ref>).
We set r=0 and a small time step h∈ℝ^+.
For each player i=1,...,J, we initialize the iteration with some given V^N_p,[0]_h,i(p̅_j) and u^[0](p̅_j), j=1,2,...,N_P.
For each player i=1,...,J, we compute the Chebyshev interpolation polynomial V^N_p,[0]_h,i(p) which interpolates V^N_p,[0]_h,i(p̅_j), j=1,2,...,N_P.
Step 1:
For each player i=1,...,J and each p̅_j, j=1,2,...,N_P, we compute the J-dimensional and one variable polynomial
𝒢^i_p̅_j(u)=.g_p̅_j(u)|_u^[r]_-i(p̅_j), j=1,2,...,N_P
Step 2:
For each player i=1,...,J and each p̅_j, j=1,2,...,N_P, we compute the one variable polynomial
𝒱^N_p,[r]_h,i,p̅_j(u)=V^N_p,[r]_h,i(p̅_j+h𝒢^i_p̅_j(u)).
Step 3:
For each player i=1,...,J, we find the strategy at each state node p̅_j, j=1,2,...,N_P which maximizes the objective function, i.e.
u^[r+1]_i(p̅_j)=u≥ 0argmax{𝒱^N_p,[r]_h,i,p̅_j(u)}.
Step 4:
For each player i=1,...,J, we define V^N_p,[r+1]_h,i(p) as the Chebyshev interpolation polynomial which interpolates 𝒱^N_p,[r]_h,i,p̅_j(u^[r+1]_i(p̅_j)), j=1,2,...,N_P.
If we are not below the prescribed tolerance,
|V^N_p,[r+1]_h,i(p)-V^N_p,[r]_h,i(p)|<TOL, i=1,...,J
we set r=r+1 and return to Step 1. Otherwise, we stop.
We point out that, in the particular pollution problem we are dealing with, g^i_p̅_j(u)=g^i_p̅_j(u_i), i=1,...,J, ∀p̅_j∈P is one dimensional, but we prefer to present a generalized algorithm in case it is not.
§ THE CHEBYSHEV INTERPOLATION
We now make first a brief review of multidimensional Chebyshev interpolation and comment how the different calculus involved in the previous algorithm can be efficiently performed.
We are going to employ the work presented in Section 2,<cit.>, where it is described how multidimensional Chebyshev polynomials can be efficiently computed, storaged and evaluated for several values in all the dimensions simultaneously.
Here, we only include the main definitions in <cit.> and the modifications needed to adapt the algorithm to the problem described in Section <ref>.
§.§ A review of multidimensional Chebyshev interpolation
The Chebyshev polynomial of degree n (see <cit.>) is given by
T_n(x)=cos(n arccos (x) ),
where 0≤arccos (x) ≤π.
From now on, variable x∈[-1,1] or x=(x_1,...,x_n)∈[-1,1]^n for the n-dimensional case.
Let N∈ℕ. The N+1 Chebyshev nodes {α^k}_k=0^N in interval [-1, 1] correspond to the extrema of T_n(x) and they are given by:
α^k=cos(π k/N), k=0,1,...,N.
If the function F(x̃) that we want to interpolate is defined in interval x̃∈[a,b], the Chebyshev nodes {α̃^k}_k=0^N in interval [a,b] are computed with the {α^k}_k=0^N nodes in [-1,1] and the change of variable given by formula
x̃=b-a/2x+b+a/2, x∈[-1,1].
Let F(x̃) be a continuous function defined in x̃∈ [a,b].
For N∈ℕ, let I_N F(x) be the N degree interpolant of function F(x̃) at the Chebyshev nodes, i.e. the polynomial which satisfies
I_N F(α^k)=F(α̃^k), k=0,1,...,N.
Polynomial I_N F(x) can be expressed as
I_N F(x)=∑^N_l=0p̂_lT_l(x), x∈[-1, 1],
where coefficients p̂_l are given by
p̂_l =1/N∑^N_k=0^”F(α̃^k)T_l(α^k), if l∈{0,N},
p̂_l =2/N∑^N_k=0^”F(α̃^k)T_l(α^k), if l∈{1,2,...,N-1},
and the double prime indicates that we halve the first and last elements.
Instead of using formula (<ref>), we will employ an efficient FFT based algorithm which is presented in <cit.> or <cit.>. For the univariate case
Algorithm C1v:
1. Define
z=[F(α̃^0),F(α̃^1),...,F(α̃^N-1),F(α̃^N),F(α̃^N-1),...,F(α̃^1)]^T
2. Compute
y=real(FFT(z))/2N
3. It holds that
{p̂_0 =y(1),
p̂_l =y(l+1)+y(2N-(l-1)), if 0<l<N,
p̂_N =y(N+1)
.
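A self-contained MATLAB sketch of Algorithm C1v, with an illustrative test appended as comments, could read as follows (function and variable names are our own; F is assumed to be a vectorised function handle on [a,b] and N≥1).

function phat = cheb_coeffs_1d(F, a, b, N)
    k      = 0:N;
    alpha  = cos(pi * k / N);                        % Chebyshev nodes in [-1,1]
    alphaT = 0.5 * ((b - a) * alpha + (b + a));      % nodes mapped to [a,b]
    Fval   = F(alphaT);
    z      = [Fval, Fval(N:-1:2)];                   % even extension, length 2N
    y      = real(fft(z)) / (2 * N);
    phat        = zeros(1, N + 1);
    phat(1)     = y(1);                              % l = 0
    phat(2:N)   = y(2:N) + y(2*N:-1:N+2);            % 0 < l < N
    phat(N + 1) = y(N + 1);                          % l = N
end

% Example: phat = cheb_coeffs_1d(@exp, 0, 2, 16);
%          x    = 2*(1.3 - 0)/(2 - 0) - 1;              % map 1.3 to [-1,1]
%          val  = sum(phat .* cos((0:16) .* acos(x)));  % close to exp(1.3)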
We also mention the algorithm presented in <cit.> which allows to compute efficiently the derivative of a Chebyshev interpolation polynomial.
If F(x̃) is a continuous function defined in x̃∈ [a,b] and
I_NF(x)=∑_l=0^Np̂_lT_l(x), x∈[-1,1]
is its Chebyshev interpolation polynomial, it holds that
(I_NF(x))'=2/(b-a)∑_l=0^N-1q̂_lT_l(x)
where for l=0,1,...,N-1:
q̂_l=2/c_l∑^N_j=l+1, j+l odd jp̂_j, where c_l={2, l=0; 1, l≥1}.
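The recursion above translates directly into the following MATLAB sketch (naming is ours); note that the 2/(b-a) chain-rule factor of the previous formula is already included in the output.

function qhat = cheb_diff_coeffs(phat, a, b)
    % phat: 1-by-(N+1) Chebyshev coefficients of I_N F on [a,b]
    N    = numel(phat) - 1;
    qhat = zeros(1, N);
    for l = 0:N-1
        cl = 1 + (l == 0);                    % c_0 = 2, c_l = 1 for l >= 1
        j  = (l+1):N;
        j  = j(mod(j + l, 2) == 1);           % keep indices with j + l odd
        qhat(l + 1) = (2 / cl) * sum(j .* phat(j + 1));
    end
    qhat = (2 / (b - a)) * qhat;              % chain rule for the interval [a,b]
end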
Now we proceed to multidimensional interpolation.
Let x̃=(x̃_1,x̃_2,...,x̃_n ) and F̃(x̃) be a continuous function defined in x̃_j∈[a_j, b_j], j=1,2,...,n.
For N={N_1,N_2,...,N_n}∈ℕ^n, we define
L^N={l=(l_1,l_2,...,l_n) / 0≤ l_j≤ N_j, l_j∈ℕ, j=1,2,...,n}.
For j=1,2,...,n, let {α^k_j}_k=0^N_j be the N_j+1 Chebyshev nodes in [-1, 1] and {α̃^k_j}_k=0^N_j the corresponding N_j+1 Chebyshev nodes in [a_j, b_j].
We use the notation α̃^l=(α̃^l_1_1,α̃^l_2_2,...,α̃^l_n_n ) and α^l= (α^l_1_1, α^l_2_2,...,α^l_n_n ).
Let I_N F(x) be the n-dimensional interpolant of function F(x̃) at the Chebyshev nodes, i.e. the polynomial which satisfies
I_N F(α^l)=F(α̃^l), l∈ L^N.
Polynomial I_N F(x) can be expressed as
I_N F(x)=∑_l∈ L^Np̂_lT^l(x), x∈[-1, 1]^n,
where
T^l(x)=T_l_1(x_1)T_l_2(x_2) ... T_l_n(x_n).
and the coefficients p̂_l=p̂_(l_1,l_2,...,l_n)∈ℝ can be computed with the n-dimensional version of the Algorithm C1v presented before.
Algorithm Cnv:
Let Γ_(N_1+1)×...×(N_n+1) be an n-dimensional array such that
Γ(l_1+1,l_2+1,...,l_n+1)=F(α̃^l_1_1,α̃^l_2_2,...,α̃^l_n_n)
1. A_1=Γ.
2. For i=1 to n
2.1. {m_1,m_2, ...,m_n}=size(A_i).
2.2. For j_2=1 to m_2, for j_3=1 to m_3, ..., for j_n=1 to m_n
B_i(:,j_2,j_3,...,j_n)=Algorithm C1v(A_i(:,j_2,j_3,...,j_n)).
2.3. A_i+1=permute(B_i,[2:n 1]).
3. p̂_l=A_n+1(l_1+1,l_2+1,...,l_n+1).
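A compact MATLAB realisation of Algorithm Cnv is sketched below (our own code, assuming N_i≥1 in every dimension): the 1D transform is applied along the first dimension of the whole array at once, after which the dimensions are cycled, so no explicit inner loops are required.

function A = cheb_coeffs_nd(Gamma)
    % Gamma: (N_1+1)-by-...-by-(N_n+1) array of nodal values F(α̃^l)
    A = Gamma;
    n = ndims(A);
    for d = 1:n
        sz = size(A);
        N  = sz(1) - 1;
        B  = reshape(A, N + 1, []);                  % flatten the trailing dimensions
        z  = [B; B(N:-1:2, :)];                      % even extension along dimension 1
        y  = real(fft(z, [], 1)) / (2 * N);
        C  = [y(1, :); y(2:N, :) + y(2*N:-1:N+2, :); y(N + 1, :)];
        A  = permute(reshape(C, sz), [2:n 1]);       % move the next dimension to the front
    end
end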
We remark that the FFT routine in Matlab admits multidimensional evaluation, so step 2.2 can be efficiently computed without loops.
Therefore, the polynomial coefficients are stored in a (N_1+1)×...×(N_n+1)-dimensional array A, where
A(l_1+1,l_2+1,...,l_n+1)=p̂_(l_1,l_2,...,l_n)
§.§ Evaluation of one N_u-dimensional polynomial
Suppose now that we have a Chebyshev interpolation polynomial I_N_u g(u), given by a (N^u_1+1)×...×(N^u_n+1)-dimensional array A and we want to evaluate it in a set of points {b^1_j}_j=1^k_1 just in the first variable, i.e. we want to compute
{I_N F(b^1_j,u_2,u_3,...,u_n)}_j=1^k_1.
In (Section 2,<cit.>) it is described how {(T_l_1(b^1_1),...,T_l_1(b^1_k_1))}_l_1=0^N_1 can be efficiently evaluated and stored in a (k_1,N_1+1)-dimensional array B such that
B(j,l)=T_l(b^1_j)
Afterwards, a standard matrix product has to be performed over all the other dimensions. We need to compute
B· A(:,i_2,...,i_n), i_s=1,...,N_s+1, s=2,...,n.
In the latest versions of Matlab, this can be efficiently performed with the “pagemtimes” function. We can define
C=permute(pagemtimes(B,A),[2:n 1])
where the result is a (N_2+1)×...×(N_n+1)× k_1 dimensional array. The permutation is needed in order to evaluate further dimensions.
Array C corresponds to the coefficients of the interpolation polynomial I_N F(x) evaluated in the points {b^1_j}_j=1^k_1, i.e.
C(:,...,:,j)∼ I_N F(b^1_j,u_2,...,u_n), j=1,...,k_1.
If we want now to evaluate the polynomial in a set of points {b^2_j}_j=1^k_2 in the second variable, another set of points in the third variable..., we would proceed iteratively obtaining, at the end, a (k_1,...,k_n)-dimensional array D which contains the evaluation of the polynomial in every possible combination of the points of each variable, i.e.
D(j_1,j_2,...,j_n)=I_N F(b^1_j_1,b^2_j_2,...,b^n_j_n), j_s=1,...,k_s, s=1,...,n
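As an illustration of this step, the evaluation in the first variable alone can be sketched as follows; the points b are assumed to lie in [-1,1], T_l(b_j) is evaluated through the trigonometric definition rather than a recurrence, and all names are ours.

function C = cheb_eval_dim1(A, b)
    % A: (N_1+1)-by-...-by-(N_n+1) coefficient array; b: k_1 evaluation points
    n  = ndims(A);
    N1 = size(A, 1) - 1;
    B  = cos(acos(b(:)) * (0:N1));                   % B(j,l+1) = T_l(b_j)
    C  = permute(pagemtimes(B, A), [2:n 1]);         % coefficients in x_2,...,x_n per point
end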
§.§ Evaluation of N_P different N_u-dimensional polynomials in different points
Suppose that we have N_P different multidimensional Chebyshev interpolation polynomials, where each one is given by a (N^u_1+1,...,N^u_J+1)-dimensional array A_j, j=1,...,N_P, as shown in Subsection <ref>.
They can all be stored in a (N^u_1+1,...,N^u_J+1,N_P)-dimensional array A where
A(:,...,:,j)=A_j∼ I_N_u g_j(u), j=1,...,N_P
and g_j(u), j=1,...,N_P is each of the functions that has been interpolated.
In order to employ the algorithm of Subsection <ref> efficiently in our pollution problem, a small modification has to be made.
Suppose that we want to evaluate each polynomial at a different point in the first variable, i.e., given {b^1_j}_j=1^N_P we have to compute
{I_N_u g_j(b^1_j,u_2,u_3,...,u_J)}_j=1^N_P,
We remark that in Subsection <ref> we wanted to evaluate (in the first variable) one polynomial at a set of k_1 different points. Here we want to evaluate each polynomial g_j(b^1_j,u_2,u_3,...,u_J) at a specific point b^1_j, j=1,...,N_P.
We build a 2-dimensional array B as defined in Subsection <ref> such that B(j,l)=T_l(b^1_j), and we define the following location index
aux1=[1:N_P:N_P(N^u_2+1)...(N^u_J+1)]
locind_1=aux1
for l=2:N_P
aux2=N_P(N^u_2+1)...(N^u_J+1)(l-1)+(l-1)
locind_1=[locind_1 (aux1+aux2)]
end
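In runnable Matlab form, and under the assumption that a vector Nu = [N^u_1,...,N^u_J] with the control degrees is available, the construction above can be transcribed as follows.
% Sketch (assumptions: Nu = [N^u_1, ..., N^u_J] and N_P are already defined).
M       = prod(Nu(2:end) + 1);         % (N^u_2+1)*...*(N^u_J+1)
aux1    = 1:N_P:N_P*M;
locind1 = aux1;
for l = 2:N_P
    aux2    = N_P*M*(l-1) + (l-1);
    locind1 = [locind1, aux1 + aux2];  % append the shifted block of indexes
end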
The evaluation
C=permute(pagemtimes(B,A),[2:J 1])
D=reshape(C(locind_1),[N^u_2 N^u_3 ... N^u_J N_P])
gives a (N^u_2+1,N^u_3+1,...,N^u_J+1,N_P)-dimensional array D where
D(:,...,:,j)∼ I_N_u g_j(b^1_j,u_2,...,u_J), j=1,...,N_P
In a similar way, a location index locind_2 can be computed in order to obtain {I_N_u g_j(b^1_j,b^2_j,u_3,...,u_J)}_j=1^N_P for a second set of points {b^2_j}_j=1^N_P. And so on for evaluating the rest of the dimensions.
§.§ Implementation of the algorithm
Step 0: Offline computations
Suppose that the J players are indexed by i=1,...,J.
Let N_p=(N^p_1,...,N^p_J)∈ℕ^J and N_u=(N^u_1,...,N^u_J)∈ℕ^J be two J-dimensional vectors such that N^p_i, N^u_i>0, i=1,...,J.
Vectors N_p and N_u will be respectively employed to define the discretization in the state space and in the control space.
Let us introduce two positive parameter P_M,U_M>0 big enough and consider intervals [0,P_M] and [0,U_M]. For each player i, the Chebyshev nodes {p̃^i_j}_j=0^N^p_i and {ũ^i_j}_j=0^N^u_i are given by
p̃^i_j =1/2[cos(π j/N^p_i)(P_M-0)+(P_M+0)], j=0,1,...,N^p_i,
ũ^i_j =1/2[cos(π j/N^u_i)(U_M-0)+(U_M+0)], j=0,1,...,N^u_i,
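For instance, the nodes of player i can be generated with the following Matlab sketch, where Npi and Nui stand for N^p_i and N^u_i (our naming assumptions).
% Sketch: Chebyshev nodes on [0, P_M] and [0, U_M] for player i.
jp = 0:Npi;
ju = 0:Nui;
ptilde = 0.5 * (cos(pi*jp/Npi) * (P_M - 0) + (P_M + 0));
utilde = 0.5 * (cos(pi*ju/Nui) * (U_M - 0) + (U_M + 0));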
We consider the J-intervals
Ĩ_p =[0,P_M]×...×[0,P_M]⊂ℝ^J
Ĩ_u =[0,U_M]×...×[0,U_M]⊂ℝ^J
where we will numerically solve the pollution game. We define the sets of collocation points
P̃={(p̃^1_j_1,p̃^2_j_2,...,p̃^J_j_J), j_i=0,1,...,N^p_i, i=1,...,J}
Ũ={(ũ^1_j_1,ũ^2_j_2,...,ũ^J_j_J), j_i=0,1,...,N^u_i, i=1,...,J}
For simplicity of notation, we find it better, prior to initializing the algorithm, to perform the corresponding changes of variables to [-1,1] (as seen in Subsection <ref>).
Therefore, we will work directly with the J-intervals I_p=I_u=[-1,1]^J and the corresponding sets of Chebyshev collocation points
P={(p^1_j_1,p^2_j_2,...,p^J_j_J), j_i=0,1,...,N^p_i, i=1,...,J}
U={(u^1_j_1,u^2_j_2,...,u^J_j_J), j_i=0,1,...,N^u_i, i=1,...,J}
defined in [-1,1]^J. Once the algorithm is finished, we move back to the original intervals Ĩ_p and Ĩ_u.
Therefore N_P=|P|=∏^J_i=1(N^p_i+1) and P={p̅_j, j=1,...,N_P}.
For any player i∈{1,...,J}, we need to compute N_P different interpolation polynomials of {g^i_p̅(u), p̅∈P} such that ∀p̅∈P, it holds
g^i_p̅(u̅) =g_i(p̅,[u̅_i,u̅_-i]), ∀u̅∈U
We remark that these polynomials have to be computed just once and this can be efficiently done with Algorithm Cnv as seen in Subsection <ref>. The polynomials will be (N^u_1+1,...,N^u_J+1)-dimensional and, for the rest of the algorithm, we identify for any player i∈{1,...,J}
g^i_p̅_j(u) ∼ I_N_u g^i_j(u), j=1,...,N_P.
In the iterative algorithm, at any iteration r and for any player i∈{1,...,J}, we will need to evaluate these polynomials in
{I_N_u g^i_j(u^[r]_1(p̅_j),u^[r]_2(p̅_j),...,u^[r]_i-1(p̅_j),u^i_k,u^[r]_i+1(p̅_j),...,u^[r]_J(p̅_j))}_k=0^N^u_i, j=1,...,N_P
where we recall that {u^i_k, k=0,...,N^u_i} are the control Chebyshev nodes of player i.
Therefore, we can build a set of location indexes locind_j, j=1,...,J which allow us to perform such computations efficiently, as shown in Subsection <ref>.
We remark that these location indexes have to be computed just once and can be employed in any iteration [r] of the algorithm.
We initialize with some given V^N_p,[0]_h,i(p̅) and u^[0](p̅_j), p̅∈P.
For each player i=1,...,J, we compute the Chebyshev interpolation polynomial V^N_p,[0]_h,i(p), which interpolates V^N_p,[0]_h,i(p̅), p̅∈P, with Algorithm Cnv.
Step 1 and Step 2:
For every player i∈{1,..,J} we compute {g^i_p̅_j(u^i_k,u^[r]_-i(p̅_j))}_k=0^N^u_i, i.e.
{I_N_u g^i_j(u^[r]_1(p̅_j),u^[r]_2(p̅_j),...,u^[r]_i-1(p̅_j),u^i_k,u^[r]_i+1(p̅_j),...,u^[r]_J(p̅_j))}_k=0^N^u_i, j=1,...,N_P
with the technique described in Subsection <ref> and the location indexes precomputed in Step 0.
We define
{𝒢^i_p̅_j(u^i_k)}_k=0^N^u_i={g_p̅_j(u^i_k,u^[r]_-i(p̅_j))}_k=0^N^u_i, j=1,2,...,N_P
where we recall g_p̅_j(u)=[g^1_p̅_j(u),g^2_p̅_j(u),...,g^J_p̅_j(u)], j=1,2,...,N_P.
We point out that, in practice, it is not necessary to build the interpolation polynomial of {𝒢^i_p̅_j(u^i_k)}_k=0^N^u_i. For every p̅∈P, in order to build 𝒱^N_p,[r]_h,i_0,p̅(u) we just compute
V^N_p,[r]_h,i_0(p̅+h𝒢^i_0_p̅(u^i_0_k)), k=0,1,...,N^u_i_0
and then apply Algorithm C1v to the results obtained.
We want to remark that, working with arrays, all the operations can be implemented simultaneously for every p̅∈P.
Step 4:
For any player i∈{1,...,J}, in order to compute
u^[r+1]_i(p̅)=argmax_u≥ 0{𝒱^N_p,[r]_h,i,p̅(u)}, p̅∈P,
we recommend employing Newton's method, for two reasons.
First, it is straightforward to implement Newton's method for all p̅∈P at the same time; second, the derivative of a Chebyshev interpolation polynomial can be efficiently obtained employing the algorithm presented in Subsection <ref>.
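A vectorised Newton iteration for the first-order condition might be sketched as follows; chebdiff and chebeval are hypothetical helpers that differentiate and evaluate columns of Chebyshev coefficients, maxit is a prescribed number of iterations, and the projection onto u ≥ 0 is our simplification rather than the authors' exact implementation.
% Sketch (assumptions: V is an (N^u_i+1) x N_P array whose j-th column holds the
% Chebyshev coefficients of the polynomial to be maximised at the node pbar_j;
% u is the 1 x N_P vector of current candidates u^{[r]}_i(pbar_j)).
dV  = chebdiff(V);                                 % coefficients of the first derivative
d2V = chebdiff(dV);                                % coefficients of the second derivative
for it = 1:maxit
    u = u - chebeval(dV, u) ./ chebeval(d2V, u);   % Newton step for V'(u) = 0 at all nodes at once
    u = max(u, 0);                                 % keep the constraint u >= 0
end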
§.§ Parallelization
Since the evaluation over the N_P different state nodes is independent, the multidimensional arrays involved in the numerical algorithm described in Subsection <ref> can be split in smaller packages to different cores (computer processing units).
In our case, let N_b and N_f be two natural numbers such that N_fN_b=N_P. For any array A(:,...,:,1...N_P), employing reshape function, we can redefine the array
A=reshape(A,[N_1,...,N_J,N_f,N_b])
For k=1,...,N_b, we define A'_k(:,...,:,1...N_f):=A(:,...,:,1...N_f,k).
The calculus involved in the numerical algorithm, for example the computation of {g^i_p̅_j(u^i_k,u^[r]_-i(p̅_j))}_k=0^N^u_i in Step 1, can be done independently in different cores employing Matlab parfor and arrays A'_k(:,...,:,1...N_f), k=1,...,N_b. The information can be reassembled when needed.
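A minimal sketch of this splitting is given below; process_block is a stand-in for the per-node computations of Steps 1-4 and is an assumption of ours, not a routine from the paper.
% Sketch (assumptions: A is an N_1 x ... x N_J x N_P array and Nf*Nb = N_P).
szA = size(A);                         % [N_1, ..., N_J, N_P]
B   = reshape(A, [], Nb);              % one column per block of Nf state nodes
out = cell(1, Nb);
parfor k = 1:Nb
    blk    = reshape(B(:, k), [szA(1:end-1), Nf]);
    out{k} = process_block(blk);       % work on the k-th package of nodes
end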
The precomputation of localization indexes also has to be adapted to the smaller arrays that we have just defined, but this is straightforward to do.
This parallelization procedure can also be applied when working with just one core. If the array A(:,...,:,1...N_P) is very big, it can be split into smaller arrays as we have just described and processed with a standard for loop.
The optimal (computing time) values for N_b and N_f depend on the values of N_p and N_u, but probably they also depend on the number of cores and the kind of processors of the computer employed.
For example, with the computer that we employed in our experiments, we run a 3 players game with N^p_i=7 (N_P=512). We computed the computational time cost of the numerical solution for smaller arrays given by N_b=1,2,...,2^9. The results are represented in Figure <ref>.
This experiment shows that it was neither optimal to compute each state node in a different core (full parallelization) nor to compute all the nodes at the same time in just one core (no parallelization, fully tensorized). The optimal computational time cost was attained “half way”, as a trade-off between the size of the arrays involved and the number of blocks (which depends on the size of the arrays). Similar results were obtained when the game was played with different numbers of players.
§ NUMERICAL RESULTS
We now repeat some of the numerical experiments performed in <cit.>. We compare the spline method employed in that paper with the Chebyshev method that we have described.
When the pollution game is played by 2 players we have explicit solutions, so an error vs computational time cost analysis can be performed. For the case of 3 or more players, we lack an explicit solution. We have obtained the same qualitative solutions as in <cit.>, but only a comparison of the computational time cost has been carried out.
Concerning the parallelization procedure, once we have the number of state nodes N_P, let {M_1,...,M_σ_0(N_P)} be all the natural divisors of N_P.
For each numerical experiment, all the possible combinations of N_f=M_i and N_b=M_j such that N_fN_b=N_P have been tested. We point out that, for all the experiments,
* Case N_f=N_P, N_b=1 (without parallelization and fully tensorized) is suboptimal.
* Case N_f=1, N_b=N_P (fully parallel) is suboptimal.
The optimal computational time cost is always attained at some value N_f=M_i with M_i∉{1,N_P}.
§.§ 2 players
We repeat Example 1 in <cit.>. Let
β_i=1, φ_i=1, A_i=0.5, c_i=0.5, i=1,2, K=[k_ij]=[ -1 1; 1 -1 ]
The spatial configuration described by K means that players 1 and 2 share a common boundary and are isolated from outside.
We have computed the numerical solution for
* h∈{10^-2,10^-3,10^-4,10^-5},
* TOL∈{10^-2,10^-3,10^-4,10^-5,10^-6},
* N_i^p∈{2,4,8}, i=1,2.
Under the spatial configuration defined, both players are symmetric, therefore the solutions of both players must coincide. In Figure <ref> we represent the emission (left) and pollution (right) time paths obtained with the Chebyshev numerical method.
In order to analyse the performance, we study the numerical solution for the different values of N^p_i, TOL and h.
For the 2 players case, we have explicit solutions (see <cit.>), so we can compute the exact optimal policy u(x). For each experiment, we define the mean square error of the numerical solution by
error=1/N_Ψ√(∑_x∈Ψ(u^*(x)-u(x))^2)
where u^* is the numerical optimal policy obtained at the last iteration of the method in each experiment.
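In Matlab, and assuming that Ustar and Uexact store u^* and u on the nodes of Ψ (with NPsi = numel(Ustar)), the error reduces to a one-liner:
err = sqrt(sum((Ustar(:) - Uexact(:)).^2)) / NPsi;   % mean square error as defined above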
With the errors computed for all the experiments, we can plot the numerical error vs the computational time cost of each experiment and then retain the lower convex envelope of the resulting cloud of points.
The lower convex envelope indicates, for a desired error tolerance, the minimum time required to attain that error. The analysis is represented in Figure <ref>, for the spline (blue) and Chebyshev (red) methods.
The results in Figure <ref> show that the Chebyshev method is much more efficient than the spline method. On average, for 2 players and a similar prescribed error tolerance, the Chebyshev method requires 1/271 of the time of the spline method. The nodes of the lower convex envelope with the biggest errors (the two situated at the right side) correspond to N^p_i=2, the next node to N^p_i=4 and the node with the smallest error (left side) corresponds to N^p_i=8.
It is interesting that both methods present the same error behaviour (the slopes of the lower convex envelopes are similar), since Chebyshev interpolation usually has better error convergence than spline interpolation. This is probably due to the fact that the objective function has a linear-quadratic specification and, therefore, both methods have similar error behaviour. It is possible that with non-polynomial objective specifications the Chebyshev method could also present a better error behaviour.
§.§ 3 players
We now repeat Example 3 in <cit.>. The parameter values remain the same as in the previous experiment and the spatial configuration is given by
K=[k_ij]=[ -1 1 0; 1 -2 1; 0 1 -1 ]
This configuration means that Player 2 shares a boundary with both Players 1 and 3, Players 1 and 3 have no common boundary and all the countries are isolated from outside. Under this configuration, Players 1 and 3 are symmetric, so their strategies should coincide.
In Figure <ref> we represent the emission (left) and pollution (right) time paths obtained with the Chebyshev numerical method. As expected, the optimal strategies and the pollution stocks of Players 1 and 3 coincide.
Unfortunately, for 3 or more players we lack an explicit solution. Nevertheless, we point out that, for the same values of h, TOL and N^p_i, the Chebyshev method outperforms the spline method in computational time cost.
In Figure <ref> we represent, for the spline (blue) and Chebyshev (red) methods, the total number of spatial nodes (N^p_i+1)^3 vs the computational time cost for N^p_i=3,5,7, h=10^-3, TOL=10^-4. Other values for h and TOL were also tested, and the chosen ones are the fastest for the spline method.
For the same parameter values, the Chebyshev method requires, on average, 1/146 of the time of the spline method in order to obtain a numerical solution. This is not a complete performance analysis, since we lack the explicit solutions and cannot measure the numerical error. But we point out that the results in the experiment for 2 players, and the fact that the qualitative solutions obtained with both methods are very similar, strongly suggest that the Chebyshev method outperforms the spline method.
§.§ 4 Players
We now repeat Example 4 in <cit.>. The parameter values remain the same as in the previous experiment and the spatial configuration is given by
K=[k_ij]=[ -1 1 0 0; 1 -3 1 1; 0 1 -2 1; 0 1 1 -2 ]
This configuration means that Player 1 shares a frontier with Player 2, Player 2 shares a frontier with Players 1, 3, 4 and Player 3 shares a boundary with Players 2 and 4. All the countries are isolated from outside. Under this configuration, Players 3 and 4 are “symmetric” since they share the same number of frontiers with other countries and, therefore, their strategies should coincide.
In Figure <ref> we represent the emission (left) and pollution (right) time paths obtained with the Chebyshev numerical method. As expected, the optimal strategies and the pollution stock of Players 3 and 4 coincide.
Concerning numerical performance, the results are similar to those in the experiment for 3 players. For the same values of h, TOL and N^p_i, the Chebyshev method outperforms the spline method in computational time cost.
In Figure <ref> we represent, for the spline (blue) and Chebyshev (red) methods, the total number of spatial nodes (N^p_i+1)^4 vs the computational time cost for N^p_i=3,5,7, h=10^-3, TOL=10^-4.
The Chebyshev method requires, on average, 1/100 of the time of the spline method in order to obtain a similar numerical solution.
As before, in the parallelization procedure, the optimal computational time cost is attained for a value N_b such that 1<N_b<8^4=N_P.
Finally, we would like to point out that other experiments in <cit.>, including different spatial specifications and/or configurations in which one of the regions is not isolated from outside, have also been carried out. In order not to overload the paper we have not included the results, but they are similar to the ones presented in this work.
§ CONCLUSIONS
We have presented a tensorial-parallel Chebyshev collocation method for a game theory problem, which has a fairly good computational cost behaviour. This is due to the fact that it combines parallelization with algorithms that allow, by employing tensorization, the efficient evaluation of multidimensional Chebyshev polynomials.
We should mention that the localization indexes presented (see Subsection <ref>) are not unique. Other dimension orders could be considered.
In this paper, we have presented the main ideas of a Chebyshev-based algorithm which can be adapted to other differential game problems. These techniques may help to improve the numerical computation of problems affected by the well-known “curse of dimensionality”, which appears when collocation methods are applied to problems with many dimensions.
Future work will be oriented in two different paths.
On the one hand, in <cit.>, a Chebyshev-based reduced function basis interpolation method is also presented. That technique allows one to obtain the same numerical error with much less computational effort than a direct interpolation, such as the one that we have employed in this work. Since the “curse of dimensionality” is still present for a bigger number of players and state nodes, it would be interesting to adapt the reduced basis method to this problem.
On the other hand, we would like to adapt and test the algorithm on more complex model specifications. For example, it could be considered that each region i is divided into n subregions, where player i controls the emissions in each of the different subregions. Incorporating wind and a nonlinear reaction term in the pollution dynamics is also interesting since, although the resulting model is more computationally challenging, it is also closer to reality.
§.§ Funding
This research was supported by Junta de Castilla y León cofinanced by FSE-YEI (first author) and Junta de Castilla y León by project VA169P20 cofinanced by FEDER funds (second author).
§.§ Acknowledgments
The authors thank Javier de Frutos and Guiomar Martín-Herrán for stimulating discussion.
99
Ba Başar T., Zaccour G. (eds.), Handbook of Dynamic Game Theory, Springer, (2018).
Brito Brito P., The Dynamics of Growth and Distribution in a Spatially Heterogenous World, WP13/2004/DE/UECE, Technical University of Lisbon, 2004.
Brock Brock W., Xepapadeas A., Yannacopoulos A.N., Optimal control in space and time and the management of environmental resources, Annu. Rev. Resour. Econ. 6 (2014), 33-68.
Camacho Camacho C., Zou, B., Briani, M., On the dynamics of capital accumulation across space. Eur. J. Oper. Res.,186 (2008), 451-465.
Camacho2 Camacho C., Pérez-Barahona A., Land use dynamics and the environment, J. Econ. Dyn. Control 52 (2015), 96-118.
Canuto Canuto C., Hussaini M.Y., Quarteroni A., Zang T.A., Spectral methods. Fundamentals in Single Domains, Springer, Berlin, 2006.
deFrutos2 de Frutos J., Gatón V., Chebyshev reduced basis function applied to option valuation, Computational Management Science, 14(2017), 465-491.
deFrutos3 de Frutos J., Gatón V., A pseudospectral method for option pricing with transaction costs under exponential utility, Journal of Computational and Applied Mathematics, 294 (2021), 113541.
deFrutos1 de Frutos J., Marín-Herrán G., Spatial effects and strategic behaviour in a multiregional transboundary pollution dynamic game, Journal of Enviromental Economics and Management, 97 (2019), 182-207.
Dockner Dockner E.J., Long N.V., International pollution control: cooperative versus noncooperative strategies, J. Environ. Econ. Manag. 25 (1993), 13-29.
Fabbri Fabbri G., Ecological barriers and convergence: a note on geometry in spatial growth models. J. Econ. Theory, 162 (2016), 114-136.
Gab Gaß M., Glau K., Mahlstedt M., Mair M., Chebyshev interpolation for parametric option pricing, Finance and Stochastics, 22 (2018), 701-731.
Falcone Falcone M., Numerical methods for differential games based on partial differential equations, International Game Theory Review, Vol. 8, N.2 (2006), 231-272.
Jor Jørgensen S., Martín-Herrán G., Zaccour, G., Dynamic Games in the Economics and Management of Pollution. Environ. Model. Assess. 15 (2010), 433-467.
Johnson Johnson P.A., Numerical Solution methods for differential game problems (MS Thesis), Massachusetts Institute of Technology, 2009.
Nikoo Nikooeinejad Z., Dekavakhalafi A., Heydari M., A numerical solution of open-loop Nash equilibrium in nonlinear differential games based on Chebyshev pseudospectral method, Journal of Computational and Applied Mathematics, 300 (2016), 369-384.
Ortiz Ortiz-Gracia L. and Oosterlee C. W., A highly efficient Shannon wavelet inverse Fourier technique for pricing European options, SIAM Journal on Scientific Computing, 38 (2016), No. 1, B118-B143.
Rivlin Rivlin T.J., Chebyshev Polynomials: From Approximation Theory to Algebra and Number Theory, Wiley, New York, (1990) MR1060735(92a:41016)
Ruijter Ruijter M. J. and Oosterlee C. W., A Fourier Cosine Method for an Efficient Computation of Solutions to BSDEs, SIAM Journal on Scientific Computing, 37 (2015), No. 2, A859-A889.
Ruijter2 Ruijter M. J., Versteegh M. and Oosterlee C. W., On the application of spectral filters in a Fourier option pricing technique, Journal of Computational Finance, 19 (2015), No. 1, 75-106.
Van Van der Ploeg F., De Zeeuw A.J., International aspects of pollution control, Environ. Resour. Econ. 2 (1992), 117-139.
Xepa Xepapadeas, A., The spatial dimension in environmental and resource economics. Environ. Dev. Econ. 15 (2010), 747-758.
Zhangb Zhang B. and Oosterlee C. W., Pricing of early-exercise Asian options under Lévy processes based on Fourier cosine expansions, Appl. Numer. Math. 78 (2014), 14-30.
Zhang Zhang L., Zhou Z., Spectral Galerkin approximation of optimal control problem governed by Riesz fractional differential equation, Appl. Numer. Math. 143 (2019), 247-262.
|
http://arxiv.org/abs/2307.03978v2 | 20230708135448 | Separable MV-algebras and lattice-groups | [
"Vincenzo Marra",
"Matías Menni"
] | math.RA | [
"math.RA",
"math.AG",
"math.CT",
"math.LO",
"Primary: 06D35, Secondary: 06F20, 18B50, 12F10"
] |
General theory determines the notion of separable MV-algebra (equivalently, of separable unital lattice-ordered Abelian group). We establish the following structure theorem: An MV-algebra is separable if, and only if, it is a finite product of algebras of rational numbers, that is, of subalgebras of the MV-algebra [0,1]∩ℚ. Beyond its intrinsic algebraic interest, this research is motivated by the long-term programme of developing the algebraic geometry of the opposite of the category of MV-algebras, in analogy with the classical case of commutative K-algebras over a field K.
§ INTRODUCTION
For any field K, a (commutative) K-algebra is separable if, and only if, it is a finite product of finite separable field extensions of K. See, for example, <cit.>. The aim of the present paper is to establish the analogue of this fact for MV-algebras and lattice-groups. We show as our main result that an MV-algebra is separable exactly when it is a finite product of algebras of rational numbers—the subalgebras of [0,1]∩ (Theorem <ref>). By a well-known theorem of Mundici <cit.>, the category of MV-algebras is equivalent to the category of lattice-ordered Abelian groups with a unit. We frame our treatment in the language of MV-algebras, and postpone to the final Appendix <ref> a synopsis of its translation to lattice-groups.
While the main result of this paper holds independent algebraic interest, it finds its deeper motivation in a broader mathematical landscape on which we offer some comments in this introduction.
As explained in <cit.>, some of Grothendieck’s algebro-geometric constructions may be abstracted to the context of extensive categories <cit.>.
A category with finite coproducts is extensive if the canonical functor
/X ×/Y →/(X + Y)
is an equivalence for every pair of objects X, Y in .
Extensivity attempts to make explicit a most basic property of (finite) coproducts in categories `of spaces'. For instance, the category of topological spaces and continuous functions between them is extensive; the category of groups is not.
Extensive experience indeed confirms that conceiving an extensive category as a category `of spaces' is a useful conceptual guide. Essential to the development of Algebraic Geometry is the fact that , the opposite of the category of (commutative unital) rings, is extensive.
(It easily follows that, for any ring R, the opposite of the category R/ of R-algebras is extensive.)
Extensivity naturally determines a notion of complemented subobject.
So, in an extensive category with finite products, it is also natural to consider the objects with complemented diagonal. These are traditionally called decidable objects, and it is useful to think of them as the `discrete spaces' inside the category `of spaces' where they live. For instance, a topological space is decidable if, and only if, it is discrete. For any ring R, and any R-algebra A, let A be the corresponding object in the extensive category (R/). Then A is decidable if, and only if, A is separable as an R-algebra. In other words, the separable R-algebras are precisely those for which the associated affine scheme is decidable.
Let us say that a category is coextensive if its opposite is extensive. In light of the above comments, an object in a coextensive category is called separable if the corresponding object in is decidable.
The category of MV-algebras is coextensive.
This provides the notion of separable MV-algebra that is the topic of the present paper. Explicitly, the MV-algebra A is separable if, and only if, there is a homomorphism f A + A → A such that the span
A [l]_-∇ A + A [r]^-f A
is a product diagram, where ∇ A+A→ A denotes the codiagonal map.
The geometry of the opposite of the category of MV-algebras has long been the subject of intensive hands-on study because of its striking connections with several areas of classical mathematics, from piecewise-linear topology to the geometry of numbers.
The characterisation of decidable objects in that we present here was motivated by our ongoing long-term project to study of the `gros Zariski' topos determined by the theory of MV-algebras as the domain of a pre-cohesive geometric morphism <cit.>. We postpone the topos-theoretic consequences of separability to further publications; no Topos Theory is required for the proof of the purely algebraic results in the present paper.
The plan of the paper is as follows. In Sections <ref>, <ref>, and <ref> we introduce the necessary material to prove a sufficient condition for an extensive category with finite products to have the property that every decidable object is a finite coproduct of connected subterminals.
In Section <ref> we verify that the category of MV-algebras is coextensive.
In Theorem <ref> we characterise the subterminal objects of the opposite of the category of MV-algebras as, in MV-algebraic terms, the subalgebras of [0,1]∩ℚ.
In order to extend Theorem <ref> to a characterisation of separable MV-algebras we need to introduce the Pierce functor for MV-algebras, an analogue of the standard ring-theoretic functor by the same name.
The key fact is that the Pierce functor preserves coproducts. To prove it, in Section <ref> we develop the required material on the connected-component functor π_0 for topological spaces. Using the theory of spectra of MV-algebras recalled in Section <ref> along with the topological π_0 functor, we are able to show in Theorem <ref> that the Pierce functor does preserve all coproducts. Theorems <ref> and <ref> are combined in Section <ref> to obtain our main result, the mentioned characterisation of separable MV-algebras. We conclude Section <ref> with a discussion that points to further research aimed at enriching the connected-component functor to an `arithmetic connected-component functor'; this functor, we submit, arises out of locally finite MV-algebras. Finally, in Appendix <ref> we collect the translation of our main results to lattice-groups.
§ EXTENSIVE CATEGORIES AND CONNECTED OBJECTS
In this section we recall the definition of extensive category and of connected object.
For more details about extensive categories see, for example, <cit.> and references therein.
A category with finite coproducts is called extensive if for every X and Y in the canonical functor /X ×/Y →/(X + Y)
is an equivalence.
Examples of extensive categories are the category of sets and functions, the category of finite sets and functions, any topos, the category of topological spaces and continuous maps, the category of compact Hausdorff spaces and continuous maps, and the category of Stone[By a Stone space we mean a compact Hausdorff zero-dimensional space. Such spaces are often called Boolean in the literature.] spaces and continuous maps. The categories of rings, of Boolean algebras and of distributive lattices[Throughout the paper, with the exception of Appendix <ref>, we assume distributive lattices to have top and bottom elements preserved by homomorphisms.] are coextensive.
See <cit.> and <cit.> for further examples.
In extensive categories coproduct injections are regular monomorphisms,
coproducts of monomorphisms are monomorphisms, and
the initial object is strict in the sense that any map X → 0 is an isomorphism. Also, extensive categories are closed under slicing.
A coproduct in_0 X → X + Y ← Y :in_1 is
* disjoint if the coproduct injections are monic and the commutative square
0 [d] [r] Y [d]^-in_1
X [r]_-in_0 X + Y
is a pullback;
* universal if for every arrow Z → X + Y the two pullback squares below exist
V [d] [r] Z [d] [l]W[d]
X [r]_-in_0 X + Y [l]^-in_1 Y
and the top cospan is a coproduct diagram.
The following result is essentially <cit.>.
A category with finite coproducts is extensive if, and only if,
coproducts are universal and disjoint.
Assume from now on that is an extensive category.
A monomorphism u U → X in is called complemented if there is a v V → X such that the cospan
u U → X ← V :v is a coproduct diagram. In this case, v is the complement of u. Notice that complemented monomorphisms are regular monomorphisms because they are coproduct injections.
In the next definition, and throughout, we identify monomorphisms and subobjects whenever convenient.
An object X in is connected if it has exactly two complemented subobjects.
In the category of topological spaces (or in that of compact Hausdorff spaces, or of Stone spaces), an object is connected if and only if it has exactly two clopens.
A ring A is connected as an object of the opposite of the category of rings if and only if A has exactly two idempotents.
We remark that, in general, connected objects are not closed under finite products.
For each X in we let X denote the poset of complemented subobjects of X.
We stress that if u U → X and v V → X are two complemented monomorphisms in and f U → V is such that v f = u then f is complemented <cit.>. So for any two complemented subobjects u, v of X, there is no ambiguity in writing u ≤ v since it means the same for u, v considered as subobjects, or as complemented subobjects.
Extensivity easily implies that the poset X has finite infima, a bottom element, and an involution.
This structure may be used to prove that X is actually a Boolean algebra which interacts well with pullbacks in the sense that, for any map f X → Y in , pulling back along f determines a Boolean algebra homomorphism Y → X.
So, assuming that is well-powered, the assignment X ↦ X extends to a functor → between extensive categories that preserves finite coproducts.
We will use the following simple equivalences.
For any object X in the following are equivalent.
* X is connected.
* X is not initial and, for every complemented subobject u U → X, U is initial or u is an isomorphism.
* X is not initial and, for every coproduct diagram U → X ← V, U is initial or V is initial.
§ FINITE-COPRODUCT PRESERVING FUNCTORS
Let and be extensive categories, and let L → preserve finite coproducts. Such a functor preserves complemented monomorphisms so, for any X in , L induces a function X →(L X) which is actually a map in , natural in X. (It is relevant to remark such a functor also preserves pullbacks along coproduct injections. See <cit.>.)
We will say that L is injective surjective/bijective on complemented subobjects if and only if X →(L X) has the corresponding property for every X in .
The functor L → is injective on complemented subobjects if and only if it reflects 0. In this case, L also reflects connected objects.
Assume first that L is injective on complemented subobjects and let X in be such that L X = 0.
Then (L X) is the terminal Boolean algebra and, as X →(L X) is injective by hypothesis, X is also trivial.
For the converse notice that if L reflects 0 then the map X →(L X) in has trivial kernel for every X in .
To prove the second part of the statement assume that X in is such that L X is connected in .
If X were initial then so would L X because L preserves finite coproducts and, in particular, the initial object. So X is not initial.
Now assume that U → X ← V is a coproduct diagram.
Then so is L U → L X ← L V. Since L X is connected, either L U or L V is initial by Lemma <ref>.
As L reflects 0, either U or V is initial, so X is connected by the same lemma. (Alternatively, if X →(L X) is injective and its codomain is the initial Boolean algebra then so is the domain.)
We will be particularly interested in extensive categories wherein every object is a finite coproduct of connected objects.
For example, the category of finite sets satisfies this property, but neither the category of sets nor that of Stone spaces does.
If we take the category of finitely presentable K-algebras for a field K, then its opposite also satisfies this property.
If L → is bijective on complemented subobjects then the following hold.
* The functor L preserves connected objects.
* For any object X in , if L X is a finite coproduct of connected objects then so is X.
* If every object in is a finite coproduct of connected objects then so is the case in .
* Assume that and have finite products and that L preserves them. If is such that finite products of connected objects are connected then so is the case in .
To prove the first item just notice that, by hypothesis, X →(L X) is an isomorphism for each X in . Hence if X has exactly two complemented subobjects then so does L X.
Before proving the second item we establish an auxiliary fact. Let X be in and let u U → L X be a complemented subobject in with connected U.
Then, as L is surjective on complemented objects by hypothesis, there exists a complemented subobject v V → X in such that L v = u as subobjects of L X. Then L V ≅ U is connected, so V is connected by Lemma <ref>.
Thus, we have lifted the `connected component' u of L X to one of X.
To prove the second item let (u_i | i ∈ I) be a finite family of pairwise-disjoint complemented subobjects of L X with connected domain whose join is the whole of L X.
For each i∈ I, let v_i be the complemented subobject of X induced by u_i as in the previous paragraph.
As L reflects 0, the family (v_i | i∈ I) is pairwise disjoint.
Also, L ⋁_i∈ I v_i = ⋁_i ∈ I L v_i = ⋁_i∈ I u_i is the whole of LX.
As L is injective on complemented subobjects, ⋁_i∈ I v_i must be the whole of X.
In summary, we have lifted the finite coproduct decomposition of L X to one of X.
The third item follows at once from the second.
For the fourth item, let X be the product of a finite family (X_i | i ∈ I) of connected objects in .
Then L X is the product of (L X_i | i ∈ I) because L preserves finite products.
Each L X_i is connected because L preserves connected objects by the first item, so L X is connected by our hypothesis on .
Hence X is connected by Lemma <ref>.
We next prove a sufficient condition for a functor L as above to be bijective on complemented subobjects.
If L → has a finite-coproduct preserving right adjoint, then L is bijective on complemented subobjects.
Let R be the right adjoint to L and let σ and τ be the unit and counit of L ⊣ R.
We show that L is both injective and surjective on complemented subobjects.
To prove injectivity it is enough to show that L reflects 0 (Lemma <ref>).
So let X be an object in such that L X is initial.
Then we may transpose the isomorphism L X → 0 in to a map X → R 0, but R 0 = 0 because R is assumed to preserve finite coproducts.
Since the initial object is strict, X is initial.
We next show that L is surjective on complemented subobjects.
Let u U → L X be a complemented monomorphism.
Then R u is complemented so the left pullback square below exists
V [d]_-v [r] R U [d]^-R u L V [d]_-L v[r] L(R U) [d]^-L(R u)[r]^-τ U [d]^-u
X [r]_-σ R (L X) L X [r]_-Lσ L(R (L X)) [r]_-τ L X
by extensivity of . Then the two squares on the right above obviously commute, and the bottom composite is the identity. Moreover, <cit.> implies that both squares are pullbacks, so u and L v coincide as subobjects of LX.
Combining Lemma <ref> and Proposition <ref> we obtain the following.
Assume that L → has a finite-coproduct preserving right adjoint. If every object in is a finite coproduct of connected objects then so is the case in .
§ DECIDABLE OBJECTS
Let be an extensive category with finite products.
In particular, has a terminal object 1.
An object X is called subterminal if the unique map X → 1 is monic.
For any object X in , the following are equivalent.
* The object X is subterminal.
* The diagonal Δ X → X× X is an isomorphism.
* The projections _0, _1 X× X → X are equal.
The first item implies the second because for any monomorphism X → 1 the following diagram
X [d]_-id[r]^-id X [d]^-!
X [r]_-! 1
is a pullback.
The second item implies the third because any map has at most one inverse.
To prove that the third item implies the first, let f, g Y → X. Then there exists a unique map fg Y → X × X such that _0 fg = f and _1 fg = g.
So f = _0 fg = _1 fg = g.
That is, for any object Y there is a unique map Y → X.
This means that the unique map X → 1 is monic.
We stress that extensivity plays no rôle in Lemma <ref>, which is a general fact about categories with finite products.
An object X in is decidable if the diagonal Δ X → X × X is complemented.
Lemma <ref> shows that subterminal objects in are decidable, and that they may be characterised as those decidable objects X such that the diagonal Δ X → X × X not only is complemented, but is actually an isomorphism.
The full subcategory of decidable objects will be denoted by →.
If is lextensive (i.e. extensive and with finite limits) it follows from <cit.> that is lextensive and that the inclusion → preserves finite limits, finite coproducts and that it is closed under subobjects. Moreover, for any X, Y in , X + Y is decidable if, and only if, both X and Y are decidable.
On the other hand, arbitrary coproducts of decidable objects need not be decidable—consider, for instance, an infinite copower of the terminal object in or .
For any object X in the following are equivalent:
* X is subterminal and connected.
* X is decidable and X × X is connected.
If X is subterminal and connected then Δ X → X× X is an isomorphism by Lemma <ref>.
So X is decidable and X× X is as connected as X.
For the converse assume that X is decidable and that X × X is connected.
Decidability means that the subobject Δ X → X × X is complemented; as X × X is connected, X is initial or Δ X → X × X is an isomorphism by Lemma <ref>. But X is not initial (because X× X is connected) so Δ X → X × X is an isomorphism. Then X is as connected as X× X, and X is subterminal by Lemma <ref>.
Let be another extensive category with finite products and let L → preserve finite products and finite coproducts.
Assume that L reflects 0 and that
1 is connected in . Then the following hold for every X in .
* If L X = 1 then X is connected.
* If X in is decidable and L X = 1 then X is subterminal.
The functor L reflects 0 so it reflects connected objects by Lemma <ref>.
As 1 is connected in by hypothesis, L X = 1 implies X connected.
If L X = 1 then L (X × X) = L X × L X = 1.
So X × X is connected by the first item.
Therefore X is subterminal by Proposition <ref>.
It easily follows from the definition of decidable object that L preserves decidable objects. In more detail, the preservation properties of L imply that the left-bottom composite below
[d] @.>[r] [d]
[r]_-L
factors uniquely through the right inclusion and, moreover, → preserves finite products and finite coproducts.
In fact, → preserves all the finite limits that L preserves (because the subcategories of decidable objects are closed under finite limits).
Additionally assume from now on that L → has a finite-coproduct preserving right adjoint R →.
Notice that under the present hypotheses both L and R preserve finite products and finite coproducts.
It follows that the adjunction L⊣ R restricts to one between and .
If every decidable object in is a finite coproduct of connected objects then so is the case in .
The adjunction L ⊣ R → restricts to one L' ⊣ R' →,
and every object in is a finite coproduct of connected objects by hypothesis.
So we may apply Corollary <ref> to L'→
Because is lextensive, there exists an essentially unique coproduct preserving functor → that also preserves the terminal object.
The functor sends a finite set I to the copower I· 1 in .
The categories , , and other examples have the property that this functor → coincides with →. Notice that if this condition holds then 1 is connected in , because = → is closed under subobjects and preserves 1.
If the canonical functor → coincides with → then every decidable object in is a finite coproduct of connected subterminals.
By Corollary <ref> every decidable object in is a finite coproduct of connected objects. So it is enough to prove that every connected decidable object in is subterminal. For this, let X be connected and decidable.
Then L X is decidable, because L preserves finite products and finite coproducts, and it is connected by Lemma <ref> and Proposition <ref>.
By hypothesis, the canonical → coincides with → so L X = 1.
Hence X is decidable and L X = 1. Therefore X is subterminal by Lemma <ref>.
For a lextensive category we have considered several conditions.
* Every decidable object is a finite coproduct of connected objects.
* Every decidable object is a finite coproduct of connected subterminals.
* The canonical functor → coincides with the inclusion →.
For a field K, (K/) satisfies the first condition but not the second. The categories and satisfy the third condition.
The third condition implies the second which, in turn, implies the first.
Proposition <ref> shows that, for certain adjunctions L ⊣ R, if the codomain of L satisfies the third condition then its domain satisfies the second. This will be used to prove that the opposite of the category of MV-algebras satisfies the second condition (Theorem <ref>).
§ THE COEXTENSIVE CATEGORY OF MV-ALGEBRAS
For background on MV-algebras we refer to the standard textbooks <cit.>, of which we also follow the notation.
In this section we show that the category of MV-algebras is coextensive by proving that products are codisjoint and couniversal (Proposition <ref>).
Let a regular category with finite colimits be given.
If 0 → 1 is a regular epimorphism then products are codisjoint.
Let A be an object of this category.
As the composite 0 → A → 1 is a regular epimorphism by hypothesis, so is A → 1 by regularity.
That is, not only 0 → 1 but actually any A → 1 is a regular epimorphism.
As every regular epimorphism is the coequalizer of its kernel pair, A → 1 is the coequalizer of the two projections A × A → A.
Also, as products of regular epimorphisms are epimorphisms, the product of id A → A and B → 1 is a regular epimorphism A × B → A × 1. That is, the projection A × B → A is a regular epimorphism.
To complete the proof we recall a basic fact about colimits:
for a commutative diagram as on the left below
E [d]_-e [r]<+1ex>^-e_0[r]<-1ex>_-e_1 D [d]_-d [r] B [d]
(A× A) × B [d]_-_0[rr]<+1ex>^-_0 × B[rr]<-1ex>_-_1 × B A× B [d]_-_0[r]^-_1 B [d]
F [r]<+1ex>^-f_0[r]<-1ex>_-f_1 A [r] Q
A× A [rr]<+1ex>^-_0[rr]<-1ex>_-_1 A [r] 1
such that d e_i = f_i e for i ∈{0, 1}, the top and bottom forks are coequalizers and e is epic, the inner right square is a pushout. Applying this observation to the diagram on the right above we obtain that the inner right square in that diagram is a pushout.
In particular, if the ambient category is the category of models for an algebraic theory with at least one constant then the initial object 0 is non-empty and so 0 → 1 is a regular epimorphism. This is the case, of course, for the category of MV-algebras.
In , couniversality of products is entailed by the intimate relationship between idempotents and product decompositions. The situation for is analogous. An element b of an MV-algebra A is called Boolean if it satisfies one of the following equivalent conditions (see <cit.>):
b⊕ b=b
b⊙ b=b
b∨¬ b=1
b∧¬ b=0.
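As a quick illustration (ours, not part of the original text): in the standard MV-algebra [0,1], where b⊕ b=min(2b,1), b⊙ b=max(2b-1,0) and ¬ b=1-b, each of these conditions singles out 0 and 1 as the only Boolean elements; for instance, b⊕ b=b ⟺ min(2b,1)=b ⟺ b∈{0,1}, and b∨¬ b=1 ⟺ max(b,1-b)=1 ⟺ b∈{0,1}.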
For x∈ A we let A → A[x^-1] be the quotient map induced by the congruence on A generated by the pair (x,1).
For any f A → B in the following diagram is a pushout
A [d]_-f[r] A[x^-1] [d]
B [r] B[(f x)^-1]
where the right vertical map is the unique one making the square commute.
Standard, using the universal property of the (horizontal) quotient homomorphisms.
For any MV-algebra A and every Boolean element x∈ A, let ⟨¬ x⟩ be the ideal of A generated by {¬ x}. Then the quotient q A→ A/⟨¬ x⟩ has the universal property of A → A[x^-1].
If k A → B is such that k x = 1 then ¬ x ∈ ker k, so ⟨¬ x⟩⊆ ker k. By the universal property of quotients there is exactly one homomorphism c A/⟨¬ x⟩→ B such that cq=k.
In , the diagram
D [l]^-q_0 C [r]_-q_1 E
is a product precisely when there exists a Boolean element x∈ C such that q_0 has the universal property of C → C[(¬ x)^-1] and q_1 has the universal property of C → C[x^-1].
When this is the case, the element x∈ C with the foregoing property is unique.
Assume the diagram is a product. Then there is a unique x∈ C such that q_ix=i, i=0,1. This x is Boolean because 0 and 1 are. Hence ¬ x is Boolean too, and thus ⊕-idempotent; therefore, ⟨¬ x⟩={c∈ C| c ≤¬ x}. If c≤¬ x then q_1c≤ q_1(¬ x)=0, so q_1c=0 and c∈ ker q_1. If c∈ ker q_1 then q_1c=0≤ q_1(¬ x) and q_0c≤ 1=q_0(¬ x), so c≤¬ x by the definition of the product order. We conclude ker q_1=⟨¬ x⟩. The projection q_1 is surjective, so Lemma <ref> entails that q_1 has the universal property of C → C[x^-1].
An entirely similar argument applies to q_0.
Conversely, assume q_0 and q_1 have the universal properties in the statement.
By Lemma <ref> we may identify q_0 with C → C/⟨ x⟩ and q_1 with C → C/⟨¬ x⟩. So it is enough to show that the canonical C → C/⟨ x⟩× C/⟨¬ x⟩ is bijective.
Injectivity follows because if c≤ x,¬ x then c≤ x∧¬ x=0, so ⟨ x⟩∩⟨¬ x⟩ = 0.
To prove surjectivity, let (q_0 c_0 , q_1 c_1) ∈ C/⟨ x⟩× C/⟨¬ x⟩ with c_0, c_1 ∈ C and consider
c = (c_0 ∧¬ x) ∨ (c_1 ∧ x) ∈ C. It is easy to check that C → C/⟨ x⟩× C/⟨¬ x⟩ sends c in the domain to (q_0 c_0 , q_1 c_1) in the codomain.
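For a concrete illustration of the lemma (our example): take C=[0,1]×[0,1] and x=(0,1). Then ¬ x=(1,0), ⟨ x⟩={0}×[0,1] and ⟨¬ x⟩=[0,1]×{0}, so that C/⟨ x⟩≅[0,1]≅ C/⟨¬ x⟩ and the two quotient maps are, up to isomorphism, the two product projections.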
The content of Lemma <ref> is far from new, cf. e.g. <cit.> and <cit.>. However, having expressed that content in the form that is most suitable for the sequel, we have included a proof for the reader's convenience.
The category of MV-algebras is coextensive.
Any algebraic category is complete and cocomplete, so in particular it has finite products and pushouts.
We appeal to the characterization of extensive categories in Proposition <ref>.
Codisjointness of products follows from Lemma <ref> or from a direct calculation observing that the projections of a product A × B send (0, 1) to 0 and 1 respectively, so 0 = 1 must hold in the pushout.
It remains to show that products are couniversal.
So we consider the pushout of a product diagram as below
A [d]_-h [l]_- pr_0 A× B [d]^-f[r]^- pr_1 B [d]^-k
D [l]^-q_0 C [r]_-q_1 E
and prove that the bottom span is a product diagram.
Indeed, observe that the Boolean element (0, 1) ∈ A× B is sent to the Boolean element x = f(0,1) ∈ C, with ¬ x=f(1, 0), so, by Lemma <ref>, it is enough to check that q_0 inverts ¬ x and q_1 inverts x;
but this follows from Lemma <ref>.
Although it was not necessary to prove the main result of this section, it seems worthwhile to observe that, in the context of algebraic categories, Lemma <ref> may be strengthened to a characterisation.
In any algebraic category, binary products are codisjoint if, and only if, the initial algebra has non-empty underlying set.
If the initial algebra 0 is not empty then the unique map 0 → 1 is a regular epimorphism so we can apply
Lemma <ref>.
For the converse implication notice that the following square
0 × 0 [d] [r] 0 [d]
0[r] 1
is a pushout by hypothesis. As any of the projections 0× 0 → 0 is split epic, its pushout 0 → 1 is a regular epimorphism, so 0 must be non-empty.
§ SUBTERMINALS IN THE DUAL OF THE CATEGORY OF MV-ALGEBRAS, AND RATIONAL ALGEBRAS
The aim of this section is to characterize subterminal objects in the dual of the category of MV-algebras.
Perhaps unexpectedly, the following fact will play an important rôle.
Monomorphisms in the category of MV-algebras are stable under pushout.
It is well known <cit.> that, in algebraic categories, stability of monomorphisms under pushout is equivalent to the conjunction of the Amalgamation Property (AP) and of the Congruence Extension Property (CEP).
Pierce proved the AP for Abelian lattice-groups in <cit.>, and Mundici <cit.> observed that Pierce's result transfers through the functor Γ to MV-algebras. For a different proof of the AP for Abelian lattice-groups and MV-algebras, see <cit.>. The CEP for MV-algebras was proved in <cit.>; for an alternative proof, see <cit.>. For yet another proof in the more general context of residuated lattices, see <cit.>.
Most of the work will be done on the algebraic side, so it is convenient to start with an arbitrary
category with finite coproducts whose initial object is denoted 0.
As suggested above, we concentrate on the objects A such that the unique map 0 → A is epic. Notice that such an object is exactly a subterminal object in , but we prefer to avoid introducing new terminology such as `cosubterminal' or `supra-initial'.
For convenience we state here the dual of Lemma <ref>.
For any object A in , the following are equivalent:
* The map 0 → A is epic.
* The codiagonal ∇ A + A → A is an isomorphism.
* The coproduct injections in_0 , in_1 A → A + A are equal.
We shall also need a simple auxiliary fact.
Let 0→ A be epic and m B → A be a map.
If the coproduct map m + m B + B → A + A is monic then 0 → B is epic.
The following square commutes
B + B [d]_-m + m[r]^-∇ B [d]^-m
A + A [r]_-∇ A
by naturality of the codiagonal. The bottom map is an isomorphism by Lemma <ref>, and the left vertical map is monic by hypothesis. So the top map is also monic, as well as split epic.
Assume from now on that the ambient category has finite colimits and that monomorphisms are stable under pushout. We stress that this stability property is quite restrictive. For instance, it does not hold in the category of rings. On the other hand, we already know that it holds in the category of MV-algebras by Lemma <ref>.
The map 0 → A is epic
if, and only if, for every monomorphism B → A, 0 → B is epic.
One direction is trivial and does not need stability of monomorphisms.
For the converse observe that, as monomorphisms are stable under pushout, finite coproducts of monomorphisms are monic.
So we can apply Lemma <ref>.
The following is a further auxiliary fact.
For any d A → D and e B → A, if e is epic and the composite d e B → D is monic then d is monic.
The right square below is trivially a pushout and, since e B → A is epic, the left square is also a pushout
B [d]_-e[r]^-e A [d]^-id[r]^-d D [d]^-id
A [r]_-id A [r]_-d D
so the rectangle is a pushout too. As the top composite is monic, and monomorphisms are stable under pushout by hypothesis, the bottom map is monic.
We emphasise the next particular case of Lemma <ref>.
Let d A → D be a regular epimorphism.
If 0 → A is epic and 0 → D is monic then d is an isomorphism.
Assume now that our category with finite colimits and stable monomorphisms has a terminal object 1 such that, for any object A, the unique map A → 1 is a regular epimorphism.
This is common in algebraic categories.
A quotient of A is an equivalence class of regular epimorphisms with domain A, where two such are equivalent if they are isomorphic as objects of the coslice category under A.
An object A is simple if it has exactly two quotients, namely, those represented by A → 1 and id A → A.
So, if the ambient category is an algebraic category, then an object is simple if and only if it has exactly two congruences.
To motivate the hypotheses of the following lemma observe that for every MV-algebra A, A is terminal or 0 → A is monic.
Similarly for Boolean algebras and for K-algebras with K a field. In contrast, that is not the case in the category of rings.
If, for every object D, D is terminal or 0 → D is monic, then for every epic 0 → A the following hold.
* A is simple or terminal.
* If m B → A is monic then B + B is simple or terminal.
To prove the first item let d A → D be a regular epimorphism. Then D is terminal or 0 → D is monic by hypothesis.
If 0 → D is monic then d is an isomorphism by Lemma <ref>.
So the only possible quotients of A are A → 1 or id A → A. So A is terminal or simple.
To prove the second item first recall that epimorphisms are closed under coproduct.
Then recall that, as monomorphisms are stable by hypotheses, they are closed under finite coproducts.
Therefore, m + m B + B → A + A is a monomorphism
and 0 = 0 + 0 → A + A is epic.
So, by Lemma <ref>, 0→ B + B is also epic. The first item implies that B + B is simple or terminal.
The material in this section applies to the category of MV-algebras, so we may now prove our first MV-algebraic result. For the proof we require a standard fact from the theory of MV-algebras and lattice-groups, which will also find further application later in this paper.
An ideal 𝔪 of the MV-algebra A is maximal if it is proper, and inclusion-maximal amongst proper ideals of A; equivalently, the quotient A/𝔪 is a simple algebra.
For every MV-algebra A, and for every maximal ideal 𝔪 of A, there is exactly one homomorphism of MV-algebras
A/𝔪⟶ [0,1],
and this homomorphism is injective.
In connection with the result that follows, let us explicitly recall that the initial object 0 in the category of MV-algebras is the two-element Boolean algebra {0,1}.
For any MV-algebra A the following are equivalent.
* A is a subalgebra of [0,1]∩ℚ.
* A is non-trivial and the unique map 0 → A is epic.
* The unique map 0 → A is monic and epic.
* A is simple and 0 → A is epic.
If A ⊆ [0,1]∩ℚ then A is certainly non-trivial, and <cit.> shows that the coproduct inclusions
in_0, in_1 A → A + A are equal.
So 0 → A is epic by Lemma <ref>.
The second and third items are clearly equivalent, and they imply the fourth by Lemma <ref>.
Finally, assume that A is simple and that 0 → A is epic.
By Hölder's Theorem (Lemma <ref>) together with simplicity, there is exactly one monomorphism A→ [0,1].
Now let r ∈ A and write ι A_r → A for the subalgebra of A generated by r.
As A_r is not trivial (and 0 → A is epic) Lemma <ref> implies that A_r + A_r is simple. Hence, by the computation in <cit.>, r must be rational.
§ THE Π_0 FUNCTOR FOR TOPOLOGICAL SPACES
In this section we show that the full inclusion of the category of Stone spaces into that of compact Hausdorff spaces has a left adjoint π_0 that preserves set-indexed products.
The result just stated may be efficiently referenced as follows. As pointed out to us by Luca Reggio, reflectivity of the inclusion is discussed in <cit.> together with the fact that the reflection has “stable units”, so that the left adjoint preserves finite products. (Reggio also indicated that reflectivity is discussed in <cit.> as a consequence of the general theory of regular and exact completions.) Moreover, Dirk Hofmann observed that, since Gabriel and Ulmer in <cit.> show that the left adjoint π_0 preserves cofiltered limits, π_0 preserves all products.
We give here a different proof that emphasises the key rôle of totally disconnected spaces in the general case. We first obtain a product-preserving left adjoint to the full inclusion of the category of totally disconnected topological spaces into that of all topological spaces.
We then show how to restrict this left adjoint to the categories of interest to us in the present paper. We begin by recalling some relevant definitions and facts.
A topological space X is connected if it is so in the sense of Definition <ref>. A subset of a space is clopen if it is both closed and open. Then, a space X is connected if and only if it contains exactly two clopen sets, which are then necessarily ∅ and X. Equivalently <cit.>, X is connected if whenever X=A∪ B with A∩ B=∅ and A and B closed subsets of X, then exactly one of A and B is empty. If X is a space and x∈ X, the component of x in X, written C_x (with X understood), is defined as
C_x ≔ ⋃{C ⊆ X| x ∈ C and C is connected}⊆ X.
It can be shown that C_x is a connected subspace of X <cit.>, and it therefore is the inclusion-largest such to which x belongs. Also, C_x is closed in X <cit.>. A topological space X is totally disconnected if for each x∈ X we have C_x={x}.
Consider the equivalence relation on X given by
x∼ y if, and only if, C_x=C_y,
and define
π_0X ≔ X/∼.
We equip π_0X with the quotient topology, and call it the space of components of X. We write
q X ⟶π_0X
for the quotient map.
For every continuous map f X→ Y between topological spaces there is exactly one map such that the square below commutes.
X [d]_-f[r] [d]^-π_0fπ_0X
Y[r] π_0Y
We first show that f X→ Y preserves the equivalence relation ∼ in (<ref>). Given x,x' ∈ X, suppose x∼ x', so that C_x=C_x'; write C for this common component. Since continuous maps preserve connectedness <cit.>, f[C] is a connected subset of Y that contains both fx and fx'. Hence f[C] ⊆ C_fx∩ C_fx', which entails C_fx=C_fx'. This completes the proof that f preserves ∼. Existence and uniqueness of π_0 f follow from the universal property of the quotient X →π_0 X.
Lemma <ref> implies that the assignment
that sends f to π_0f extends to an endofunctor
π_0 of the category of topological spaces.
This endofunctor determines the full subcategory of totally disconnected spaces, as we now show.
If C ⊆π_0 X is a connected subspace then so is q^-1 [C] ⊆ X.
Let q^-1[C]=F_1∪ F_2 with F_1 and F_2 disjoint closed subsets of X. For any y ∈ C we can write the fibre q^-1[{y}] as C_x for any x∈ q^-1[{y}]. Further, we can express C_x as the disjoint union
C_x=(F_1∩ C_x)∪ (F_2∩ C_x). Since C_x is closed and connected, because it is a component, exactly one of q^-1[{y}]=C_x⊆ F_1 or q^-1[{y}]=C_x⊆ F_2 holds, for each y ∈ C. We can then define
S_i≔{ y ∈ C| q^-1[{y}]⊆ F_i}, i=1,2,
to the effect that C=S_1∪ S_2 and S_1∩ S_2 =∅. By construction we have F_i=q^-1[S_i], i=1,2. The definition of quotient topology then entails that S_i is closed because F_i is. Since C is connected, exactly one of S_1 and S_2 is empty, and hence so is exactly one of F_1 and F_2.
For any space X, the quotient map q X →π_0X in (<ref>) is universal from
X to the full inclusion of the category of totally disconnected spaces into that of topological spaces.
We first show that π_0 X is totally disconnected.
Let C_y be the component of y ∈π_0 X, with the intent of showing it is a singleton.
By Lemma <ref>, since C_y is connected in π_0 X, so is q^-1[C_y] connected in X. Therefore q^-1[C_y] is contained in the component C_x of any x∈ X with x∈ q^-1[C_y]; and thus, the direct image q[q^-1[C_y]] is contained in q[C_x]={y}. Since q[q^-1[C_y]]=C_y, because q is surjective, we conclude C_y⊆{y}, whence C_y={y}, as was to be shown.
Let f X→ Y be a continuous map, with Y totally disconnected.
We already know from the proof of Lemma <ref> that f preserves ∼ so,
as Y is totally disconnected, x ∼ x' in X implies f x = f x' in Y.
The universal property of the quotient q X →π_0 X implies the existence of a unique g π_0 X → Y such that g q = f.
We conclude that the full inclusion of the category of totally disconnected spaces into that of topological spaces has a left adjoint that, with no risk of confusion, will again be denoted by π_0.
The functor π_0, from topological spaces to totally disconnected spaces, preserves all set-indexed products.
Consider a family (X_s | s ∈ S) of spaces in indexed by a set S and let
γπ_0 ∏_s∈ S X_s ⟶∏_s∈ Sπ_0 X_s
be the unique map such that
pr_s∘γ = π_0(pr_s)
for every s ∈ S, where pr_s denotes the s-th product projection.
In other words,
γ ( C( x_s | s∈ S )) = (C x_s | s∈ S) ∈∏_s∈ Sπ_0 X_s
for any ( x_s | s∈ S ) in ∏_s∈ S X_s.
To prove that γ is injective assume that
γ ( q ( x_s | s∈ S )) =γ ( q ( y_s | s∈ S )) in ∏_s∈ Sπ_0 X_s.
That is, q x_s = q y_s in π_0 X_s for every s ∈ S.
By <cit.> we have
q ( x_s | s∈ S ) = q ( y_s | s∈ S ) in π_0 ( ∏_s∈ S X_s), so γ is injective.
To prove that γ is surjective observe that, for every s∈ S, the maps pr_s∘γ∘ q and pr_s∘∏_s∈ S q from ∏_s∈ S X_s to π_0 X_s both equal q∘pr_s; since the projections pr_s are jointly monic, it follows that γ∘ q=∏_s∈ S q.
As products of surjections are surjective, the map ∏_s∈ S q is surjective and hence so is γ.
We next identify a related construction which will provide a useful alternative description of π_0 when restricted to .
Let us write C(X,2) for the set of continuous maps from the space X to the discrete two-point space 2={0,1}. There is a canonical continuous function
E = ⟨ f| f ∈ C(X,2)⟩ X⟶ 2^C(X,2),
x⟼ ( f x | f∈ C(X,2) ).
For any subset S X, write χ_S X→ for the characteristic function defined by χ_S x=1 if, and only if, x∈ S.
Then S is clopen precisely when χ_S∈ C(X,2). Thus, E in (<ref>)
can equivalently be described as the function that sends each point x ∈ X to the set of clopen subsets of X that contain x.
In order to prove the next lemma recall <cit.> that the quasi-component of x ∈ X, written Q_x, is defined as
Q_x≔⋂{S⊆ X| S is clopen and x ∈ S}.
It is clear that the quasi-components of a space X partition X into closed non-empty sets.
The relation between E and quasi-components may be stated as follows.
For any x, x' ∈ X, E x = E x' if and only if Q_x = Q_x'.
If E x = E x' then clearly Q_x = Q_x'.
For the converse assume that Q_x = Q_x' and let S ⊆ X be a clopen set containing x. Then x' ∈ Q_x' = Q_x ⊆ S.
That is, x' ∈ S.
The reader should beware that the quasi-component Q_x of x∈ X in general fails to be connected. Indeed, the inclusion C_x⊆ Q_x always holds for each x∈ X <cit.>, and may be proper <cit.>. However:
For any X there exists a unique E' π_0 X → 2^C(X,2) such that E'∘ q=E.
Let x, x' ∈ X be such that x ∼ x'; that is, C_x = C_x'.
Then
x ∈ C_x ∩ C_x'⊆ Q_x ∩ Q_x'
so, as quasi-components are equal or disjoint, Q_x = Q_x'.
That is, E x = E x' by Lemma <ref>.
Let
D X ⟶π'_0 X, m π'_0 X ⟶ 2^C(X,2)
be the epi/regular-mono factorization of the canonical map E in (<ref>). By Lemma <ref> we have m∘ D=E=E'∘ q and, as q is regular-epi and m is monic, there is exactly one continuous map cπ_0(X)→π'_0(X) such that c∘ q=D and m∘ c=E'.
Since D is epic, so is c.
Also, since m is a regular mono, π_0'X carries the subspace topology inherited from the product 2^C(X,2) and, as the latter is a Stone space, π_0'X is Hausdorff.
If X is compact Hausdorff then c π_0 X →π_0' X is an isomorphism and these isomorphic spaces are Stone spaces.
First recall <cit.> that, in any compact Hausdorff space X, the equality C_x=Q_x holds for each x∈ X.
In other words, in this case, the function π_0 X →π_0' X is bijective.
Also, since X is compact, so is π_0 X because q is surjective.
Hence, as we already know that π_0' X is Hausdorff, the Closed Map Lemma implies that c is an isomorphism.
Similarly, compactness of X implies compactness of π_0' X and hence, the Closed Map Lemma implies that m is closed. Therefore, π_0'X is a closed subspace of the Stone space 2^C(X,2).
It is classical that each Stone space is totally disconnected, so there is a full inclusion of the category of Stone spaces into that of totally disconnected spaces which, together with the full inclusions of Stone spaces into compact Hausdorff spaces, of compact Hausdorff spaces into topological spaces, and of totally disconnected spaces into topological spaces, forms a commutative square of full inclusions. Lemma <ref> implies that the composite of the inclusion of compact Hausdorff spaces into topological spaces with the functor π_0 factors through the full inclusion of Stone spaces into totally disconnected spaces.
The factorization will be conveniently denoted by π_0, now regarded as a functor from compact Hausdorff spaces to Stone spaces.
The functor π_0, from compact Hausdorff spaces to Stone spaces, is left adjoint to the full inclusion of Stone spaces into compact Hausdorff spaces, and preserves all set-indexed products.
Since, as observed above, the functor π_0 on all topological spaces restricts to a functor from compact Hausdorff spaces to Stone spaces, the fact that the former is left adjoint to the full inclusion of totally disconnected spaces into topological spaces (Lemma <ref>) restricts to the fact that the latter is left adjoint to the full inclusion of Stone spaces into compact Hausdorff spaces.
It is standard that products in the categories of Stone spaces and of compact Hausdorff spaces agree with products in the category of topological spaces (using, in particular, Tychonoff's Theorem that any product of compact spaces is compact), so Proposition <ref> entails that π_0 from compact Hausdorff spaces to Stone spaces preserves all set-indexed products.
§ SPECTRA OF MV-ALGEBRAS
In this section we recall the material about spectra of MV-algebras that is needed in the sequel.
Recall that an ideal 𝔭 of an MV-algebra A is prime if it is proper, and the quotient A/𝔭 is totally ordered. The (prime) spectrum of an MV-algebra A is
Spec A≔{𝔭⊆ A| 𝔭 is a prime ideal of A},
topologised into the spectral space of A, as follows. For a subset S⊆ A, define
V(S) ≔{𝔭∈ Spec A| S⊆𝔭},
supp(S) ≔ Spec A∖ V(S)={𝔭∈ Spec A| S⊈𝔭}.
The set V(S) is called the vanishing locus, or zero set, of S, while supp(S) is called its support. If a ∈ A, write V(a) as a shorthand for V({a}), and similarly for supp(a). Then the collection
{V(I)| I is an ideal of A}
is the set of closed sets for a topology on Spec A that makes the latter a spectral space in the sense of Hochster <cit.>. The collection
{supp(a)| a∈ A}
is a basis of compact open sets for this topology; see <cit.> and <cit.>. The topology is variously known as the Stone, Zariski, or hull-kernel topology of A.
The assignment A ↦ Spec A extends to a functor from MV-algebras to topological spaces, because inverse images of prime ideals along homomorphisms are prime. Although it is common to take the codomain of Spec as the category of spectral spaces and spectral maps, for our purposes in this paper it is expedient to regard Spec as taking values in the category of topological spaces and continuous maps.
The maximal spectrum of an MV-algebra A is
Max A≔{𝔪⊆ A| 𝔪 is a maximal ideal of A}.
We have Max A⊆ Spec A, or equivalently, any simple MV-algebra is totally ordered (see e.g. <cit.>).
The maximal spectral space of A is the set Max A equipped with the subspace topology it inherits from Spec A. Then Max A is a compact Hausdorff space <cit.>, and every compact Hausdorff space arises in this manner from some MV-algebra A <cit.>.
The standard example of MV-algebra, the interval
[0,1] equipped with the constant 0 and the operations ⊕, , generalises as follows. If X is any set, the collection [0,1]^X of all functions from X to [0,1] inherits the structure of an MV-algebra upon defining operations pointwise. If, additionally, X is a topological space, since ⊕ [0,1]^2→ [0,1], [0,1]→[0,1], and 0 are continuous with respect to the Euclidean topology of [0,1], the subset
(X){f X→ [0,1]| f is continuous}
is a subalgebra of the MV-algebra [0,1]^X. We shall describe a natural MV-homomorphism η_A A ⟶(A), for each MV-algebra A. Its existence descends from Hölder's Theorem (Lemma <ref>), which allows us to define a close analogue to the Gelfand transform in functional analysis. Indeed, in light of that result, to a∈ A and ∈A we associate the real number _(a / )∈ [0,1], obtaining the function
aA ⟶ [0,1]
⟼ h_(a).
It can be shown <cit.> that the function (<ref>) is continuous with respect to the Stone topology of A and the Euclidean topology of [0,1].
We thereby arrive at the announced homomorphism
η_A A ⟶(A)
a ⟼a
for each MV-algebra A.
For any MV-homomorphism h A→ B and any 𝔪∈ Max B we have h^-1(𝔪)∈ Max A. Moreover, the inverse-image map h^-1 Max B→ Max A is continuous with respect to the Stone topology.
The first assertion is proved in <cit.>. The second assertion is a straightforward verification using the definition of the Stone topology.
In light of Lemma <ref> we henceforth regard as a functor:
⟶^ op,
where denotes the category of compact Hausdorff spaces and their continuous maps.
Given a continuous map f X → Y in , it is elementary that the induced function
(f)(Y) ⟶(X),
g∈(Y) ⟼ g∘ f ∈(X)
is a morphism in . We therefore regard as a functor:
^ op⟶.
There is an adjunction
⊣^ op→
known as the Cignoli-Dubuc-Mundici adjunction <cit.>; see <cit.> for further references and details not mentioned below.
Dually to (<ref>), for any space X in there is a continuous map
ϵ_X X ⟶(X)
x ⟼{f∈(X)| f(x)=0},
and it is a standard fact that ϵ_X is a homeomorphism. (Compare <cit.>.)
Writing 𝕀 C for the identity functor on a category C, we can summarise the adjunction as follows.
The functor is left adjoint to the fully faithful , i.e. ⊣^ op→. The unit and the counit of the adjunction are the natural transformations η𝕀→ and ϵ→𝕀^ op whose components are given by (<ref>) and (<ref>), respectively.
§ THE PIERCE FUNCTOR PRESERVES COPRODUCTS
The category of Boolean algebras may be identified with the domain of the full subcategory → determined by the MV-algebras whose operation ⊕ is idempotent. It is then clear that → is a variety so, in particular, it has a left adjoint.
It also has a right adjoint that we now describe.
We write ℬ A for the collection of all Boolean elements of the MV-algebra A. By <cit.>, ℬ A is the largest subalgebra of A that is a Boolean algebra. A homomorphism h A→ B preserves Boolean elements, because the latter are defined by equational conditions. Therefore, h induces by restriction a function ℬ h ℬ A→ℬ B that is evidently a homomorphism of Boolean algebras. We thus obtain a functor
ℬ from the category of MV-algebras to the category of Boolean algebras;
we call it the Pierce functor because of the close analogy with the theory developed in <cit.> for rings.
The functor ℬ is right adjoint to the inclusion functor of Boolean algebras into MV-algebras.
This is a direct consequence of the fact that ℬ A is the largest Boolean subalgebra of A, for any MV-algebra A.
The proof of Proposition <ref>—in particular, Lemma <ref>—makes it clear that → is essentially the `complemented subobjects' functor determined by the extensive category .
We now embark on the proof of the central fact that the Pierce functor ℬ preserves coproducts. Our aim is to reduce the problem to a situation where we can apply the topological results in Section <ref>.
For any MV-algebra A and any element a∈ A, a is Boolean if, and only if, for each prime ideal 𝔭 of A, we have a/𝔭∈{0,1}⊆ A/𝔭.
Let C be any totally ordered MV-algebra. For x∈ C, either x≤¬ x or ¬ x ≤ x. If the former holds then x∧¬ x=x, so that if x is Boolean then x=0. If the latter holds then x∨¬ x=x, and thus x=1 if x is Boolean. In summary, if x∈ C is Boolean then x∈{0,1}. The converse implication is clear. Summing up, the Boolean elements of C are precisely 0 and 1.
Boolean elements, being definable by equational conditions, are preserved by homomorphisms. Hence if a is Boolean then a/𝔭∈ A/𝔭 is Boolean, and therefore, since A/𝔭 is totally ordered, a/𝔭∈{0,1} by the argument in the preceding paragraph. This proves the left-to-right implication in the statement of the lemma.
For the converse implication, we recall that in any MV-algebra A we have ⋂ Spec A={0} <cit.>. Hence, the function ι A ⟶∏_𝔭∈ Spec A A/𝔭 defined by a ∈ A ⟼ (a/𝔭)_𝔭∈ Spec A∈∏_𝔭∈ Spec A A/𝔭 is an injective homomorphism. Assume that for each 𝔭∈ Spec A we have a/𝔭∈{0,1}. Since operations in ∏_𝔭∈ Spec A A/𝔭 are computed pointwise, we infer ι(a)∨ι(¬ a)= (a/𝔭)_𝔭∨ (¬ a/𝔭)_𝔭=1, and therefore, since ι is an isomorphism onto its range, a∨¬ a=1. This completes the proof.
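For a concrete illustration of the first step of the proof, consider the standard totally ordered MV-algebra [0,1], where x⊕ y=(x+y)∧ 1 and ¬ x=1-x. Here
x⊕ x=x ⟺ (2x)∧ 1=x ⟺ x∈{0,1},
so the Boolean elements of [0,1] are precisely 0 and 1, in accordance with the first paragraph above.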
Let A be an MV-algebra, and suppose there exist possibly empty closed subsets X_0,X_1⊆ Spec A with Spec A=X_0∪ X_1 and X_0∩ X_1=∅. Then there exists exactly one Boolean element b∈ A such that b/𝔭=0 for each 𝔭∈ X_0 and b/𝔭=1 for each 𝔭∈ X_1.
By <cit.>, there is exactly one ideal I_i of A such that V(I_i)=X_i, i=0,1. Consider the elements 0,1∈ A. The fact that Spec A is partitioned into X_0 and X_1 entails I_0∨ I_1=A and I_0∩ I_1={0}, so that the Chinese Remainder Theorem <cit.> applied to 0 and X_0, and to 1 and X_1, yields one element b∈ A such that b/I_0=0 and b/I_1=1. Using the Third Isomorphism Theorem, the latter conditions imply b/𝔭∈{0,1} for each 𝔭∈ Spec A so that by Lemma <ref> we conclude that b is Boolean. If b'∈ A also satisfies b'/𝔭=0 for each 𝔭∈ X_0 and b'/𝔭=1 for each 𝔭∈ X_1, then b/𝔭=b'/𝔭 for 𝔭∈ Spec A, so that b=b' because ⋂ Spec A={0} <cit.>.
We record a corollary that will have further use in the paper. It is the exact analogue for MV-algebras of a standard result, see e.g. <cit.>. In order to state it, let us write Clp X for the Boolean algebra of clopen sets of any topological space X. Let us then observe that the uniqueness assertion about the Boolean element b in Lemma <ref> allows us to define, for any MV-algebra A, a function
χ_A Clp(Spec A)⟶ℬ A
that assigns to each X_0∈ Clp(Spec A) the unique element b∈ℬ A with the properties stated in that lemma with respect to X_0 and X_1≔ Spec A∖ X_0. It is then elementary to verify that χ_A is a homomorphism of Boolean algebras.
For any MV-algebra A, the function
ϕ_AA ⟶(A)
that sends b∈A to (b)(A) is an isomorphism of Boolean algebras whose inverse is the homomorphism χ_A in (<ref>). In particular, A is indecomposable if, and only if, A is connected.
By Lemma <ref> it is clear that (b) for each b∈A is clopen and that ϕ_A is a homomorphism. Let us consider b'χ_Aϕ_Ab. For each ∈(b) we have b/=0 by definition of , and b'/=0 by the defining property of b'. Similarly, for each ∈A∖(A) we have b/=b'/=0. Thus, b and b' agree at each prime and thus b=b' because ⋂A={0} <cit.>. Conversely, for X_0∈(A), consider the clopen ϕ_Aχ_AX_0. For ∈A, by definition of χ_A we have ∈ X_0 if, and only if, (χ_AX_0)/=0. Hence
ϕ_A (χ_A X_0)=X_0, and the proof is complete.
The radical of A is the ideal
Rad A≔⋂ Max A.
In accordance with standard terminology in general algebra, one says A is semisimple precisely when Rad A={0}.
We note in passing that, unless A is semisimple, the statement in Lemma <ref> cannot be strengthened to “a is Boolean if, and only if, for each 𝔪∈ Max A we have a/𝔪∈{0,1}⊆ A/𝔪”.
Let A be an MV-algebra, and suppose there exist possibly empty closed subsets X_0,X_1⊆ Max A with Max A=X_0∪ X_1 and X_0∩ X_1=∅. Then there exists exactly one Boolean element b∈ A such that b/𝔪=0 for each 𝔪∈ X_0 and b/𝔪=1 for each 𝔪∈ X_1.
By <cit.>, each 𝔭∈ Spec A is contained in exactly one maximal ideal λ𝔭∈ Max A, so that we can define a function
λ Spec A ⟶ Max A,
𝔭⟼λ𝔭.
By <cit.>, this function is continuous, and it is a retraction for the inclusion Max A⊆ Spec A. Therefore, X'_0≔λ^-1[X_0] and X'_1≔λ^-1[X_1] are closed subsets of Spec A satisfying Spec A=X'_0∪ X'_1 and X'_0∩ X'_1=∅. Now Lemma <ref> provides a unique Boolean element b such that b/𝔭=0 for each 𝔭∈ X_0', and b/𝔭=1 for each 𝔭∈ X_1'. As X_i⊆ X_i', i=0,1, b satisfies the condition in the statement. Concerning uniqueness, suppose a is a Boolean element of A such that a/𝔪=0 for each 𝔪∈ X_0, and a/𝔪=1 for each 𝔪∈ X_1. We claim a=b. Indeed, let 𝔭∈ X_i', i=0,1. Then a/λ𝔭=i because λ𝔭∈ X_i. The inclusion 𝔭⊆λ𝔭 induces a quotient map q A/𝔭→ A/λ𝔭. By Lemma <ref> we have a/𝔭∈{0,1}. Also, A/λ𝔭 is nontrivial. Therefore since q(a/𝔭)=a/λ𝔭=i it follows that a/𝔭=i. By the uniqueness assertion in Lemma <ref> we conclude a=b.
We observe that the analogue of Lemma <ref> about coproduct decompositions of A being indexed by idempotent elements does not hold in general for rings. Indeed, spectra of MV-algebras always are completely normal—which affords the existence of the map λ used in the proof above—whereas spectra of rings are not, in general.
For more on the important rôle that the continuous retraction λ in (<ref>) plays in the theory of lattice-groups and MV-algebras, see <cit.> and the references therein.
Our next objective is to show that ℬ sends the unit η of the adjunction in (<ref>) to an isomorphism.
For any MV-algebra A, the morphism ℬη_A ℬ A→ℬ(C(Max A)) is an isomorphism.
Let b'∈ C(Max A) be Boolean, with the aim of exhibiting b∈ℬ A such that η_A(b)=b'. Evaluating the defining equality b'⊕ b'=b' at each 𝔪∈ Max A we see that b'(𝔪)∈{0,1} holds. Therefore, the two closed subsets X_0≔ b'^-1[{0}] and X_1≔ b'^-1[{1}] of Max A satisfy the hypotheses of Lemma <ref>. We conclude that there exists one Boolean element b∈ A with b/𝔪=0 for 𝔪∈ X_0 and b/𝔪=1 for 𝔪∈ X_1. By the definition of η_A this entails at once η_A(b)=b', so ℬη_A is surjective. By the uniqueness statement in Lemma <ref>, ℬη_A is also injective.
Our next step will be to factor the Pierce functor in a manner that is useful for our purposes.
Lemma <ref> implies that the Pierce functor ℬ, from MV-algebras to Boolean algebras, is naturally isomorphic to the composite of the maximal spectrum functor with the functor that sends a compact Hausdorff space X to the Boolean algebra ℬ C(X) of Boolean elements of C(X).
The functor sending a compact Hausdorff space X to ℬ C(X), regarded as a functor from the opposite of the category of compact Hausdorff spaces to the category of Boolean algebras, preserves all set-indexed coproducts.
Using Stone duality, it is an exercise to verify that this functor induces, by taking opposite categories on each side, a functor naturally isomorphic to the functor π_0 from compact Hausdorff spaces to Stone spaces of Section <ref>. The lemma then follows from Theorem <ref>.
We finally obtain the main result of this section.
The Pierce functor ℬ, from MV-algebras to Boolean algebras, preserves all set-indexed coproducts.
As we saw above, the Pierce functor is naturally isomorphic to the composite of the maximal spectrum functor with the functor of Lemma <ref>. Further, the maximal spectrum functor preserves arbitrary set-indexed colimits because it is a left adjoint by Theorem <ref>; and
the functor of Lemma <ref> preserves set-indexed coproducts. Hence ℬ preserves set-indexed coproducts.
§ MAIN RESULT, AND FINAL REMARKS
Let 𝒞 be a coextensive category.
Recall from the introduction that an object A in 𝒞 is separable if A is decidable as an object of the extensive category 𝒞^op. Thus, A is separable if, and only if, there is a morphism f A + A → A such that the span consisting of the codiagonal ∇ A + A→ A and of f A + A→ A is a product diagram.
Separable MV-algebras coincide with finite products of subalgebras of [0,1]∩ℚ.
By Theorem <ref> we have a reflection with left adjoint π_0 such that both adjoints preserve finite products and finite coproducts, so Proposition <ref> implies that every decidable object is a finite coproduct of subterminal objects. Theorem <ref> completes the proof.
We conclude the paper with some final remarks that point to further research aimed at developing an ‘arithmetic connected-component functor’.
The guiding result from Algebraic Geometry is this: the category of étale schemes over K is reflective as a subcategory of that of locally algebraic schemes over K <cit.>. The left adjoint there is denoted by π_0, and π_0 X is called the k-schéma des composantes connexes de X
in Definition I, 4, 6.6 op. cit. Moreover, it is then proved that π_0 preserves finite coproducts.
In terms of extensive categories, this says that for =, the subcategory → has a finite-product preserving left adjoint.
We announce that the same holds for the category of finitely presentable MV-algebras. The proof will be published elsewhere, but it is appropriate to indicate here the rôle of locally finite MV-algebras in connection with that result.
An MV-algebra A is locally finite if each finitely generated subalgebra of A is finite. Finite MV-algebras are evidently locally finite; [0,1]∩ℚ is an example of a locally finite MV-algebra that is not finite. Locally finite MV-algebras were studied in <cit.>; see also <cit.> for a generalisation of the results in
<cit.>, and <cit.> for further material and <cit.> for recent progress on the topic.
The connection with Theorem <ref> is the following characterisation of rational algebras.
For any MV-algebra A the following are equivalent.
* A is simple and locally finite.
* A is a subalgebra of [0,1]∩ℚ.
(<ref>)⇒(<ref>). By Hölder's Theorem (Lemma <ref>), since A is simple there is exactly one monomorphism A→ [0,1]; let us therefore identify A with a subalgebra of [0,1]. If A contains an irrational number ρ∈ [0,1] then the subalgebra generated by ρ is infinite. Indeed, the Euclidean algorithm of successive subtractions applied to ρ,1∈ℝ does not terminate (because ρ and 1 are incommensurable) and produces an infinite descending sequence of distinct, non-zero elements of A. Thus, A ⊆ [0,1]∩ℚ by local finiteness.
(<ref>)⇒(<ref>). Any subalgebra of [0,1] evidently has no proper non-trivial ideal, by the Archimedean property of the real numbers, and is therefore simple. If, moreover, A⊆ [0,1]∩ℚ, the subgroup of ℚ generated by finitely many a_1,…,a_n∈ A together with 1 is discrete, and therefore by <cit.> the subalgebra generated by a_1,…,a_n is a finite chain. Thus A is locally finite.
An MV-algebra A is separable if, and only if, A is locally finite and ℬ A is finite.
If A is separable then, by Theorem <ref>, A = ∏_i∈ I A_i with I finite and A_i ⊆ [0,1]∩ℚ for each i ∈ I.
In particular, ℬ A is finite.
Also, each A_i is locally finite by Lemma <ref>.
As finite products of locally finite algebras are locally finite, A is locally finite.
Conversely, assume that A is locally finite and ℬ A is finite.
Then, A = ∏_i∈ I A_i with I finite and A_i directly indecomposable for each i ∈ I.
As locally finite algebras are closed under quotients, each A_i is locally finite.
Hence, each A_i is locally finite and indecomposable.
But then each A_i must be simple. Indeed, Corollary <ref> entails that Spec A_i is connected, and Spec A_i=Max A_i by <cit.>. Then the spectral space Spec A_i is Hausdorff, and thus has a base of clopen sets—hence, being compact, it is a Stone space. Since Stone spaces are totally disconnected, connectedness of Spec A_i entails that Spec A_i is a singleton, so A_i has exactly two ideals, and so is simple.
By Lemma <ref>, each A_i is then a subalgebra of [0,1]∩ℚ. Therefore, A is separable by Theorem <ref>.
Now, let → be the full subcategory determined by locally finite MV-algebras.
Let us prove that this subcategory is coreflective.
An element a of an MV-algebra A is of finite order-rank[The terminology we introduce here is best motivated using lattice-groups—please see Appendix <ref>.] if the subalgebra B it generates in A is finite. If B is terminal, we say the order-rank of a is zero. Otherwise, there exists exactly one n∈{1,2,…} such that B=C_1×⋯× C_n with each C_i directly indecomposable and non-terminal, and we then say the order-rank of a is n. We set
Fin A≔{a ∈ A | a is of finite order-rank}.
Note that ℬ A⊆ Fin A, because any Boolean algebra is locally finite.
For any MV-algebra A and subset G⊆ A, let us write ⟨ G⟩ for the subalgebra of A generated by G. When G={g} we write ⟨ g⟩ for ⟨{g}⟩.
Any homomorphism of MV-algebras sends elements of finite order-rank to elements of finite order-rank.
Let h A→ B be a homomorphism and let a ∈ Fin A. Since h commutes with operations, a routine argument in general algebra shows that h[⟨ a⟩]=⟨ ha⟩; since ⟨ a⟩ is finite, so is ⟨ ha⟩.
For any MV-algebra A, Fin A is a locally finite subalgebra of A. Further, Fin A is the inclusion-largest locally finite subalgebra of A.
Let F{a_1,…,a_n} A be a finite subset of elements of finite order-rank, n≥ 0 an integer. We need to show that the subalgebra F of A generated by F is finite. Induction on n. If n=0 then ∅ is either the terminal one-element algebra or the initial two-element algebra. Now suppose G{a_1,…, a_n-1} is such that G is finite. The subalgebra a_n is also finite, because a_n is of finite order-rank by hypothesis. The subalgebra F is the least upper bound of G and of a_n in the lattice of subalgebras of A, and therefore can be written as a quotient of the coproduct G+a_n. In more detail, by the universal property of the coproduct, the inclusion maps GF and a_nF induce a unique homomorphism hG+a_n→ A whose regular-epi/mono factorisation h=m q is such that m S→ A exhibits the subobject of A that is the join of the subobjects G and a_n—in particular, S is isomorphic to F. So F is a quotient of the algebra G+a_n. Since finite coproducts of finite MV-algebras are finite by <cit.>, G+a_n is finite and therefore so is F.
To show that Fin A is a subalgebra of A, first note that clearly 0∈ Fin A. If a∈ Fin A then ¬ a lies in the subalgebra generated by a, which is finite; hence ¬ a is of finite order-rank. If a,b ∈ Fin A, then a⊕ b lies in the subalgebra generated by {a,b}, which is finite by the argument in the preceding paragraph; hence a⊕ b is of finite order-rank.
For the last assertion in the statement, let B be a locally finite subalgebra of A. Given any b ∈ B, the subalgebra generated by b in A is finite, by our assumption about B; hence b is of finite order-rank, and b∈ Fin A. This completes the proof.
Lemmas <ref> and <ref> allow us to regard Fin as a functor
from the category of MV-algebras to the category of locally finite MV-algebras.
The functor Fin is right adjoint to the full inclusion of the category of locally finite MV-algebras into the category of MV-algebras.
This is an immediate consequence of the fact that Fin A is the largest locally finite subalgebra of the MV-algebra A, as proved in Lemma <ref>.
It is proved in <cit.> that has all set-indexed products. This follows at once from Corollary <ref>: indeed, for any set-indexed family {A_i}_i ∈ I of locally finite MV-algebras the product of {A_i}_i ∈ I in is the coreflection (∏_i ∈ IA_i) of the product ∏_i ∈ IA_i in .
We have been unable to prove that → preserves finite products. However, writing for _ fp, we can show that the functor restricts to a left adjoint π_0 → to the inclusion
→ and, moreover, it preserves finite products.
As mentioned, the proof will appear elsewhere.
§ SEPARABLE UNITAL LATTICE-ORDERED ABELIAN GROUPS
For background on lattice-groups we refer to <cit.>. We recall that a lattice-ordered group, or ℓ-group for short, is a group that is also a lattice[In this appendix, lattices are only required to have binary meets and joins, but not top or bottom elements.] such that the group operation distributes over binary meets and joins. We only consider Abelian ℓ-groups, and thus adopt additive notation. The underlying group of an Abelian ℓ-group is torsion-free, and its underlying lattice is distributive. Write for the category of Abelian ℓ-groups and of lattice-group homomorphisms. An element 1∈ G in an Abelian ℓ-group is a (strong order) unit if for each g∈ G there is a natural number n such that n1≥ g. An Abelian ℓ-group G equipped with a distinguished unit 1 is called unital, and denoted (G,1). Write _1 for the category of unital Abelian ℓ-groups and of unit-preserving lattice-group homomorphisms.
There is a functor Γ_1→ that acts on objects by sending (G,1) to its unit interval [0,1]{x∈ G| 0≤ x ≤ 1}, and on morphisms by restriction; here, [0,1] is regarded as an MV-algebra under the operations x⊕ y (x+y)∧ 1, x 1-x, and 0. This functor has an adjoint Ξ→_1, and Mundici proved in <cit.> that Γ and Ξ constitute an equivalence of categories.
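For concreteness, the unit-interval functor acts as follows on three familiar unital Abelian ℓ-groups (these computations are standard and immediate from the definition of Γ):
Γ(ℤ,1)={0,1}, the two-element Boolean algebra;
Γ(ℚ,1)=[0,1]∩ℚ;
Γ(ℝ,1)=[0,1], the standard MV-algebra.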
The initial object in _1 is (ℤ,1), and the terminal object is the trivial unital ℓ-group ({0=1}, 0).
In analogy with the relationship between non-unital and unital rings, the category has a zero object and is not coextensive, while the category _1 is. Separable unital Abelian ℓ-groups are defined as for any coextensive category, cf. the beginning of Section <ref>.
An object G of is Archimedean if whenever nx≤ y holds in G for each positive integer n, then x≤ 0; and an object (G,1) of _1 is called Archimedean if G is. The following characterisations hold: (G,1) is Archimedean precisely when Γ(G,1) is semisimple; and (G,1) is totally ordered and Archimedean precisely when Γ(G,1) is simple. Hölder's Theorem for the category _1 may be stated as follows: Any (G,1) that is Archimedean and totally ordered has exactly one morphism to (,1), and that morphism is monic (equivalently, its underlying function is injective).
Let us say that an object (G,1) of _1 is rational if it is isomorphic to an ordered subgroup
of the additive group ℚ containing 1, where the order of G is inherited from the natural order of the rationals. Theorem <ref> may be then formulated for the category _1 as follows.
For any unital Abelian ℓ-group (G,1) the following are equivalent.
* (G,1) is rational.
* (G,1) is non-trivial, and the unique map (ℤ,1) → (G,1) is epic.
* The unique map (ℤ,1) → (G,1) is monic and epic.
* (G,1) is totally ordered and Archimedean, and the unique map (ℤ,1) → (G,1) is epic.
An object (G,1) of _1 is Specker if its unit-interval MV-algebra Γ(G,1) is a Boolean algebra. Write _1 for the full subcategory of _1 on the Specker objects. The inclusion functor _1→_1 has a right adjoint _1→_1, the Pierce functor for _1, and preserves arbitrary coproducts (Theorem <ref>). Our main result, Theorem <ref>, would be proved for the category _1 using this Pierce functor; it can be phrased as follows.
Separable unital Abelian ℓ-groups coincide with finite products of rational unital Abelian ℓ-groups.
Products in the category are Cartesian products, because is a variety of algebras. On the other hand, while _1 is equivalent to a variety by Mundici's cited theorem, its underlying-set functor is not right adjoint. Indeed, products in _1 are not, in general, Cartesian products. However, finite products in _1 are Cartesian—the product of (G,1) and (H,1) is (G× H, (1,1)) with the Cartesian projections.
An Abelian ℓ-group is called a simplicial group if it is isomorphic, in the category of Abelian ℓ-groups, to a free Abelian group of finite rank ℤ^r equipped with the coordinatewise order. A unit in such a simplicial group is then any element 1∈ℤ^r each of whose coordinates is strictly positive; the pair (ℤ^r,1) is called a unital simplicial group. These lattice-groups play a key rôle in the representation theory of dimension groups, see e.g. <cit.>.
An object (G,1) in _1 is a unital simplicial group exactly when its unit-interval MV-algebra Γ(G,1) is finite. An object (G,1) is locally simplicial if each sublattice subgroup generated by finitely many elements along with 1 is a unital simplicial group. An object (G,1) in _1 is locally simplicial exactly when its unit-interval MV-algebra Γ(G,1) is locally finite. Then: An object (G,1) of _1 is separable just when it is locally simplicial, and (G,1) has finite -module rank[In the literature on lattice-groups, the condition that (G,1) has finite rank is expressed in the following traditional manner: the unit of G has finitely many components.] (Corollary <ref>).
Write _1 for the full subcategory of _1 on the locally simplicial objects. The inclusion functor _1→_1 has a right adjoint _1→_1 (Corollary <ref>); that is, every (G,1) has an inclusion-largest locally simplicial unital sublattice subgroup. To prove this in the category _1 one would introduce the notion of element of `finite-order rank' of a unital Abelian ℓ-group. It is this notion that motivates the terminology we adopted in the context of MV-algebras in Section <ref>; by way of conclusion of this appendix, we offer a short discussion.
Let (G,1) be a unital Abelian ℓ-group, let g∈ G, and let H be the sublattice subgroup of G generated by g and by 1. If (H,1) is a unital simplicial group (ℤ^r,1)—equivalently, if the MV-algebra Γ(H,1) is finite—then we call g an element of finite order-rank r. This notion of rank crucially depends on the interplay between the lattice and the group structure, and is not reducible to the linear notion of rank. To explain why, let us preliminarily observe that a simplicial group ℤ^r enjoys the finiteness property that its positive cone (ℤ^r)^+—that is, the monoid of non-negative elements of ℤ^r—is finitely generated as a monoid. Next, let us point out that the underlying group of the Abelian ℓ-group H generated by g and 1 in G is necessarily free: indeed, any finitely generated Abelian ℓ-group has free underlying group, as was proved in <cit.>. The ℤ-module rank of H is at most countably infinite, because H is countable. But even if we assume the rank of H is finite, the unit-interval Γ(H,1) may be infinite, and in that case the lattice order of ℤ^r≅ H cannot be simplicial—and indeed, one can prove that the monoid H^+ cannot be finitely generated. Hence, the condition that the sublattice subgroup H of G generated by g and 1 is simplicial is strictly stronger than the condition that H has finite ℤ-module rank. To illustrate, consider the subgroup H of ℝ generated by an irrational number ρ∈ℝ together with 1; then H≅ℤ^2 as groups, the total order inherited by ℤ^2 from ℝ is palpably not simplicial, the positive cone H^+ can be shown not to be finitely generated by an easy direct argument, and Γ(H,1) is an infinite simple MV-algebra.
|
http://arxiv.org/abs/2307.04781v1 | 20230710121715 | Demonstrations of the Potential of AI-based Political Issue Polling | [
"Nathan E. Sanders",
"Alex Ulinich",
"Bruce Schneier"
] | cs.CY | [
"cs.CY"
] |
Demonstrations of the Potential of AI-based Political Issue Polling
August 12, 2023
===================================================================
Nathan E. Sanders,*, Alex Ulinich, Bruce Schneier
Berkman Klein Center, Harvard University, 23 Everett St #2, Cambridge, Massachusetts, 02138
Mountain View High School, 3535 Truman Avenue, Mountain View, CA 94040
Harvard Kennedy School, 79 JFK Street, Cambridge, Massachusetts USA 02138
*[email protected]
Political polling is a multi-billion dollar industry with outsized influence on the societal trajectory of the United States and nations around the world.
However, in recent years it has been severely challenged by rising nonresponse rates and other factors that stress its cost, availability, and accuracy.
At the same time, artificial intelligence (AI) chatbots such as ChatGPT have become highly compelling stand-ins for a wide range of human behavior, powered by increasingly sophisticated large language models (LLMs).
Because these LLMs are trained on huge corpora of writing by diverse people captured from across the Internet, they are potentially capable of representing a wide range of beliefs on many policy issues.
Could AI chatbots be an effective tool for anticipating public opinion on controversial issues to the extent that they could be used by campaigns, interest groups, and polling firms?
We have developed a prompt engineering methodology for eliciting human-like survey responses from ChatGPT, which simulate the response to a policy question of a person described by a set of demographic factors, and produce both an ordinal numeric response score and a textual justification.
We execute large scale experiments using this method, querying GPT for thousands of simulated responses at a cost more than three orders of magnitude lower than human surveys.
We compare this simulated data to human issue polling data from the Cooperative Election Study (CES).
We find that ChatGPT is effective at anticipating both the mean level and distribution of public opinion on a variety of policy issues such as abortion bans and approval of the US Supreme Court, particularly in their breakdown along partisan lines (correlation typically >85%).
However, it is much less successful at anticipating demographic (age, race, and gender) differences between respondents.
Moreover, ChatGPT tends to overgeneralize its conception of ideological differences to new policy issues that arose after its training data was collected, such as American support for involvement in the war in Ukraine.
Our work has implications for our understanding of the strengths and limitations of the current generation of AI chatbots as virtual publics or online listening platforms, future directions for LLM development, and applications of AI tools to the political domain.
§ INTRODUCTION
While survey experiments and polling have been powerful tools for political campaigns, parties, and advocacy organizations in the US and around the world for centuries <cit.>, in recent years the cost and difficulty of operating polls has grown dramatically.
Political polling firms commonly recruit panels intended to be representative of, and to achieve high coverage of, their targeted population, such as eligible voters nationally or likely voters in a voting district.
Reaching these populations has become harder primarily because of the growth in survey nonresponse internationally: the failure to contact or refusal of potential participants to be surveyed due to factors such as lack of time, disinterest, and distrust <cit.>.
Moreover, the migration of respondents to new technologies such as cell phones and the Internet, which have uneven and evolving penetration and usage across regions and demographic groups, has constrained the coverage of survey samples.
These effects have generated simultaneous challenges for the quality and cost of political polling, as biases in political engagement and hyper-polarization manifest on response rates <cit.>.
A vast literature has developed on statistical methodologies for designing and postprocessing survey data to overcome these challenges, including methods such as demographic weighting and poststratification <cit.>.
In particular, pollsters have explored methodologies that enable meaningful public opinion research from digital platforms such as Facebook and other social media platforms, where traditional techniques of probability sampling cannot be applied because of the lack of a conventional sampling frame and researcher-controlled contact mechanism.
These various methodologies seem to have been successful at maintaining the predictive accuracy of election polling thus far, even as nonresponse has proliferated <cit.>, and yet there is widespread interest in finding transformative new models for measuring public opinion that could lead to more cost-effective, sustainable, and more reliable polling results <cit.>.
As statistical methodologies have come to play a critical role in collecting, processing, and interpreting political polling data, machine learning (ML) and artificial intelligence (AI) systems may further revolutionize this domain.
In particular, large language models (LLMs) such as ChatGPT, which can be incorporated into AI chatbots and other systems capable of providing human-like responses to natural language prompts, have a wide variety of potential applications in democratic processes, such as assisting lobbying firms <cit.>, helping citizens and stakeholders to formulate and advocate for their opinions <cit.>, facilitating connections between candidates and voters <cit.>, and even helping humans social engineer or hack political systems <cit.>.
Already, researchers have experimented with a variety of social science research and public polling applications of LLMs, such as coding open-ended survey responses <cit.>, inferring the ideology of a politician <cit.>, simulating economic behavior <cit.>, and simulating election results <cit.>.
Because they are trained on wide Internet corpora including opinion writing from a diverse range of people, LLM's have a compelling ability to represent different perspectives and to perform a wide range of tasks without specialized training <cit.>.
We therefore hypothesize that they may be effective at generating individualized responses to policy preference questions that can account for the same factors that influence human respondents, such as demographics.
However, the nature of LLMs limits their potential effectiveness as opinion sampling tools.
Like platforms such as social media, AI chatbots do not have well defined sample frames or well understood coverage characteristics.
Moreover, unlike true survey platforms, using LLMs does not actually involve any solicitation of opinion from an authentic human individual.
Instead, LLMs generate a response predicted to be most acceptable to the user on the basis of a training process such as reinforcement learning with human feedback, which may therefore reflect the incomplete, biased, or even stereotyping properties of its training dataset.
Some specific biases of Internet corpora-trained LLMs are coming in to focus.
One study attempted to assess the age and gender characteristics of ChatGPT by prompting it to express a demographic profile, finding that its responses are biased towards a young (<30 years old) and female profile.
Other investigators identified that an earlier model, GPT-2, is biased in its representation of the opinions of people from nations underrepresented in Internet usage.
Regardless of their ability to reflect the perspectives of a given demographic group, AI models may also exhibit bias in the text they generate; for example, in an analysis of the BERT model, researchers found that neural embeddings learn harmful stereotypes about persons with disabilities.
In this work, we seek to test the capability of current generation AI tools to accurately reflect distributions of public opinion, and to expose insight into its effective sociodemographic coverage as a polling instrument, using a generally available LLM and real public opinion survey questionnaires.
We have developed experimental methods (<ref>) to prompt the AI chatbot ChatGPT to generate public polling-like responses such that it can simulate a survey panel.
We test the model's ability to reflect the shift in valence between demographic groups across a variety of issues, as well as reasonably reproduce the key arguments appealed to by each demographic (<ref>).
We provide an interpretation of this capability in the context of prior Internet-assisted approaches to public opinion research, discuss the limitations of this approach and the current generation of tools, and the implications these capabilities may have as they improve (<ref>), before concluding (<ref>).
§ METHODS
We explore the viability of AI language models to simulate public opinion polling responses by developing a system that automates querying an LLM based on the questionnaire of a survey previously given to people, so that the resulting AI responses are aligned and comparable to human data.[We will publish the code associated with this work at the time the article is accepted.]
§.§ Large Language Model
We use the OpenAI Chat Completion API endpoint, through OpenAI's openai python library,[<https://github.com/openai/openai-python>] to query the gpt-3.5-turbo-0301 LLM for polling responses.
This model was the most recent model from OpenAI optimized for chat applications and made generally available as of April 2023; it is trained on data samples written as late as September 2021.[See <https://platform.openai.com/docs/models/gpt-3-5>]
We generate a balanced sample of n=20 responses per prompt per demographic cross-tab per issue across ideology (in five bins) and three demographic fields with simple categorizations (age in four bins, “man” or “woman” gender, and “white” or “non-white” race), for a total of 1,600 responses across each of seven issue prompts (see Table <ref>) for 11,200 total responses.
Note that this balanced sample does not, therefore, represent any particular target population such as US adults, as our focus is on understanding the performance of LLM's in representing the viewpoints within and across distinct demographic groups.
Because LLMs offer the opportunity to generate data for arbitrary sub-populations at arbitrary sizes, the process to generate a sample representative of a population with defined demographic characteristics is trivial, if the model is successful at accurately reproducing the views of each demographic group.
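As an illustration of the scale of this balanced design, the following sketch (not the authors' released code; the ideology labels and the age bin boundaries beyond those quoted in the text are assumptions) enumerates the crosstab cells described above:

from itertools import product

ideologies = ["Very liberal", "Liberal", "Moderate", "Conservative", "Very conservative"]
age_bins = ["(16, 30]", "(30, 45]", "(45, 60]", "(60, 100]"]  # four coarse bins; upper bound assumed
genders = ["Man", "Woman"]
races = ["white", "non-white"]
issues = ["abortion_ban", "scotus_approval", "police_safety", "increase_fuel_production",
          "prescription_import", "gun_background_checks", "ukraine_war"]
N_PER_CELL = 20

cells = list(product(issues, ideologies, age_bins, genders, races))
print(len(cells))               # 7 issues x 80 demographic cells = 560
print(len(cells) * N_PER_CELL)  # 560 x 20 = 11,200 simulated responses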
Regarding our selected demographic classes, we acknowledge that binary categorizations for gender and race are reductive and far from representative of the full spectrum of human gender and racial identity.
Our reason for focusing on these broad classes is to enable initial statistical comparisons with demographic groups well sampled in the CES dataset.
Future work should further explore the representation of AI generated responses associated with nonbinary gender and more diverse racial identities.
These queries were executed at a cost of about $3 USD through the OpenAI API, whereas an online survey of 10,000+ responses on a human population would cost at least 1,000 times that much.
LLMs can be sensitive to the way questions are phrased and what information is provided to prime them before answering a question.
We arrived at a prompt suitable for simulating public polling responses aligned to an established survey questionnaire through several iterations of trial and error in prompt engineering. We used the following prompt template when querying the LLM,
Please write a 1 paragraph letter to the editor from the perspective of a {gender} in the age range of {age} years who identifies as {white} expressing a clear point of view on the policy proposal to: “{issue}”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a {cardinality}-point scale, where 1 represents the position “{low_level}” and {cardinality} represents the position “{high_level}”.
where {gender}, {age}, and {white} are demographic features; {issue} represents the question text from a survey given to humans (<ref>); {cardinality} is the maximum value of the numeric response scale; and {low_level} and {high_level} are descriptions of the bottom and top end of the response scale as defined in the polling questionnaire. The prompt component describing the “Position score:” successfully formats the output so that an ordinal numeric response value can be extracted from the plaintext completion with a simple regular expression. Additionally, we extract the textual descriptors of the top and bottom options on the original scale from the survey questionnaire to align the LLM outputs to the scale the human respondents used.
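The query-and-parse loop can be sketched as follows. This is a minimal illustration rather than the authors' exact implementation, and it assumes the pre-1.0 interface of the openai python library (openai.ChatCompletion.create), which was current for gpt-3.5-turbo-0301; note that, as in the example exchange below, the {gender} slot in practice also carries the ideology descriptor (e.g., "politically Liberal Man").

import re
import openai  # pre-1.0 client; set openai.api_key before use

PROMPT_TEMPLATE = (
    'Please write a 1 paragraph letter to the editor from the perspective of a {gender} in the '
    'age range of {age} years who identifies as {white} expressing a clear point of view on the '
    'policy proposal to: "{issue}". Before the letter, summarize their position with a '
    '"Position score:" statement followed by a single number (strictly numeric, with no other '
    'description) representing the person\'s position on the issue on a {cardinality}-point scale, '
    'where 1 represents the position "{low_level}" and {cardinality} represents the position "{high_level}".'
)

def simulate_response(gender, age, white, issue, cardinality, low_level, high_level):
    prompt = PROMPT_TEMPLATE.format(gender=gender, age=age, white=white, issue=issue,
                                    cardinality=cardinality, low_level=low_level,
                                    high_level=high_level)
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "system", "content": "You are a helpful assistant"},
                  {"role": "user", "content": prompt}],
    )
    text = completion["choices"][0]["message"]["content"]
    match = re.search(r"Position score:\s*(\d+)", text)  # extract the ordinal score enforced by the prompt
    return (int(match.group(1)) if match else None), text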
The prompt template defined above evolved significantly over the course of our experimentation.
Initially, we did not include a “Position score” requirement in the prompt.
We first tested the model's ability to generate realistic-seeming textual arguments in response to policy issue questions, from various demographically-aligned points of view.
Having initially vetted this capability, we then added a brief instruction to the prompt to assign a score on a 1-5 rating and verified that the generated ratings generally agreed with the textual letter generated by the model.
However, we identified two further challenges: 1) the generated position score would be formatted inconsistently and was difficult to extract from the generated text without manual review and, 2) the model would sometimes flip the polarity of the scoring scale, such that a given position would be variously represented as a score of 1 or 5.
To address issue 1, we added far more explicit formatting instructions (“Before the letter, summarize their position with...”), which succeeded at enforcing a formatting for the score that could be reliably extracted.
To address issue 2, we added explicit definitions to the template of the low and high position levels.
In addition to ensuring a consistent directionality of the position score, this instruction has the added benefit of aligning the model-generated score to the rating scale used in the human survey.
We use the default OpenAI system prompt of, “You are a helpful assistant”.
The LLM is not prompted sequentially; rather, each query is executed as an individual user prompt immediately following the system prompt.
Importantly, this allows us to simulate semi-independent poll responses, where the LLMs are probabilistically generating various isolated responses similar to sampling a human population.[
In contrast, a methodology that queried a system like ChatGPT with sequential user prompts would entail state evolution that biases subsequent responses, as iterative prompting is known to change the responses of LLM-based dialog systems, see e.g. .
]
Figure <ref> illustrates an example survey response distribution from this method.
A typical exchange looks like,
Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Liberal Man in the age range of (16, 30][Note that we provide the age range in interval notation reflecting bins from the CES data; the GPT model demonstrates through its completions that it interprets the interval notation accurately.] years who identifies as non-white expressing a clear point of view on the policy proposal to: “Improve background checks to give authorities time to check the juvenile and mental health records of any prospective gun buyer under the age of 21”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “support” and 2 represents the position “oppose”.
Response: Position score: 1
As a young, politically liberal man who identifies as non-white, I fully support the proposal to improve background checks for prospective gun buyers under the age of 21. It is unacceptable that individuals with a history of mental illness or juvenile offenses can easily acquire firearms. This new policy would ensure that authorities have enough time to properly check a person's background before they are allowed to purchase a gun. We have seen far too many tragic shootings in recent years, and it is time that we take meaningful action to prevent future tragedies. By implementing stricter background checks, we can protect ourselves and our loved ones from senseless violence.
§.§ Human Polling Data
As a human polling comparison for our AI-generated responses, we use the 2022 preliminary data release of the Cooperative Election Study (CES) <cit.>.
The CES is an annual online survey of ∼60,000 nationally representative US respondents administered by YouGov.
The full CES Common Content dataset consists of nearly 700 demographic, voting, and issue response variables, covering a wide range of policy- and politics-relevant factors and questions.
We selected policy issue polling questions from the CES dataset on the basis of their ability to test the LLM's ability to represent distinctive demographic groups.
In particular, we looked for questions that are fairly strongly correlated with demographic factors such as age and gender, yet relatively poorly correlated with ideological factors.
In particular, we selected questions on the basis of the empirical correlation calculated between the question-specific ordinal response and the respondent-specific political affiliation in the CES data.
Because of the high degree of partisan polarization in the US political system on so many issues, these questions provide a better test of the demographic response simulation abilities of the LLM than would more ideologically driven questions.
We make some manipulations to the survey data to accommodate generation of equivalent LLM completions. In particular, we constrain policy issue responses to an ordinal scale by removing categories such as “Not sure” (and dropping any associated responses) and replace multi-selection responses “selected” and “not selected” with “strongly agree” and “strongly disagree,” respectively. We also coarsely bin (aggregate) the age demographic variable (which is provided as a birth year integer in the raw dataset).
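The recoding just described can be sketched as follows; the column names are illustrative placeholders, since the actual CES variable names differ, and the bin edges mirror the intervals quoted in the example prompts.

import pandas as pd

def prepare_ces(df: pd.DataFrame, survey_year: int = 2022) -> pd.DataFrame:
    out = df.copy()
    # Drop responses in non-ordinal categories such as "Not sure".
    out = out[out["issue_response"] != "Not sure"]
    # Recode multi-selection items onto the agreement scale used for the GPT prompts.
    out["issue_response"] = out["issue_response"].replace(
        {"selected": "strongly agree", "not selected": "strongly disagree"})
    # Coarsely bin age, computed from the birth-year integer in the raw data.
    out["age"] = survey_year - out["birth_year"]
    out["age_bin"] = pd.cut(out["age"], bins=[16, 30, 45, 60, 100])
    return out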
§ RESULTS
We systematically compare the AI-generated and human respondent issue polling data across the seven queried issues, ideology, and three demographics to understand the quality of the AI-driven approach through its correspondence to a human population.
Figure <ref> illustrates an example of this demographic level comparison for the police_safety question.
This figure demonstrates the general level of correspondence between CES and GPT-generated survey data at the finest granularity of our demographic groups for one question.
The two datasets exhibit a similar pattern of increasing safety reported from the liberal (top of figure) to conservative (bottom) ends of the spectrum.
However, some trends present in the CES data are not reproduced in the GPT results; for example, the significant, age-mediated variation across demographic subgroups among `Very liberal' CES respondents is not present in the GPT data; the GPT model seems to be over-confident in the expected response for the ideological group, regardless of other factors.
In the remainder of this section, we interrogate this correspondence statistically across survey questions and demographic properties.
In some cases, the GPT model demonstrates an excellent capacity to precisely reproduce the public polling response for individual population crosstabs (subgroups of age, gender, race, and ideological identity).
Figure <ref> shows that for the SCOTUS approval questions, there is a ρ=86% Pearson correlation between the CES and GPT polling results across all demographic crosstabs, and an even higher 95% correlation when looking at ideological subgroups only.
Beyond the correlation measure, the absolute reconstruction of the ordinal response is also highly accurate, with a mean absolute percentage error (MAPE) across demographic subgroups of ≲10% in both cases.
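The two summary statistics used throughout this section can be computed over subgroup means as in the following sketch (column names are again illustrative):

import numpy as np
import pandas as pd

def compare_subgroups(ces: pd.DataFrame, gpt: pd.DataFrame, by):
    ces_means = ces.groupby(by)["response"].mean().rename("ces")
    gpt_means = gpt.groupby(by)["response"].mean().rename("gpt")
    aligned = pd.concat([ces_means, gpt_means], axis=1).dropna()
    pearson_r = aligned["ces"].corr(aligned["gpt"])  # Pearson correlation across crosstabs
    mape = float(np.mean(np.abs(aligned["gpt"] - aligned["ces"]) / aligned["ces"]))
    return pearson_r, mape

# e.g. compare_subgroups(ces_df, gpt_df, by=["ideology"]) for the ideological breakdown,
# or by=["ideology", "age_bin", "gender", "race"] for all demographic crosstabs.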
Naturally, the AI polling results are less impressive in some other cases.
In the following subsections, we explore the level of correspondence between the GPT and CES results in more depth by question and demographic field.
§.§ Ideological alignment
The AI model demonstrates an excellent ability to predict the alignment of different ideological subgroups across a range of policy issues (Figure <ref>).
The correlation between the AI-generated responses and the CES survey results, aggregated by ideological identification, is extremely high (>85%) for not only the scotus_approval question (Figure <ref>b), but also the abortion_ban (98% correlation), police_safety (94%), and increase_fuel_production (86%) issues.
For the prescription_import (ρ=67%) and gun_background_checks (91%) issues, the AI results are directionally consistent with the survey results and the correlations are still quite strong, but differ in the range and shape of the response, as the GPT results show a step-function-like difference between conservatives and liberals versus the gradual change in the survey data.
These trends are generally reflected in the MAPE values.
Like scotus_approval, abortion_ban has both an excellent correlation and MAPE (5%).
In contrast, the discontinuity in the prescription_import and gun_background_checks response pattern is reflected with higher MAPE values (31% and 29%, respectively).
The increase_fuel_production MAPE value is intermediate (21%).
Lastly, police_safety has a high MAPE (35%) relative to its correlation.
In this case, the high correlation reflects a consistently monotonic relationship between the GPT and CES demographic means, but a mis-calibration such that the GPT responses overestimate the decrease in perceived safety associated with the liberal groups (i.e. the ordinal response value is inflated at the liberal end).
(For discussion of the remaining queried issue, regarding the Ukraine war, see <ref>).
§.§ Distributional similarity
We further investigate the ability of the probabilistic output of the AI models to represent the distributional responses of the human panel. Figure <ref> illustrates the correspondence between question response distributions on each policy issue.
(The widths of these distributions are also illustrated by the error bar lengths in Figures <ref>, <ref>, and <ref>).
The distribution similarity is generally fairly good, with particularly good matches for the binary-valued abortion_ban and prescription_import questions.
The GPT model gets the absolute level of support wrong for the binary-valued questions increase_fuel_production and gun_background_checks; the AI model substantially underestimates the policy provisions' level of support.
For the multi-valued questions police_safety and scotus_approval, the level of matching is intermediate.
The spread of the distributions is similar.
However, as observed above, the GPT responses favor higher ordinal values for police_safety than in the CES data.
For scotus_approval, the median ordinal value (2) is over-represented in GPT responses.
(For discussion of the ukraine_war question, see <ref>.)
§.§ Demographic alignment
Because of the substantial polarization of the American public, ideological differences dominate issue response for many public policy issues and for the CES questions we studied.
It is difficult to find any policy question on which there are large, systematic differences in public opinion that cannot be explained primarily in partisan terms.
Therefore, the success of the GPT model in predicting ideological issue response is significant, and may be sufficient for many polling applications where the ideological distribution of the respondent pool is known (e.g., from jurisdictional voting records).
However, beyond ideological differences, the GPT model fails at anticipating several of the most important demographic trends relevant to these issues (Figure <ref>).
Among the strongest demographic trends in the CES data is that older respondents feel significantly more safe around police than do younger respondents.
While the GPT model does marginally reflect that trend, it does so at a vastly smaller magnitude than the survey data.
There is also a racial difference on this issue, such that non-white respondents are somewhat less likely to feel safe than white respondents, and the GPT model correctly anticipates the size and direction of this effect.
Likewise, the largest demographic trend for the fossil fuel production issue is also by age; younger respondents are more likely to disfavor increasing fuel production. The GPT model misses this trend entirely.
In some cases, the GPT model makes predictions about demographic trends that seem reasonable but overshoot what appears in the human sample.
In particular, the AI presumes a large difference in opposition to a complete abortion ban by gender, which seems plausible given this issue's strong identification with women's rights, as in this example:
Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Moderate Woman in the age range of (45, 60] years who identifies as non-white expressing a clear point of view on the policy proposal to: “On the topic of abortion, do you support or oppose a policy to make abortions illegal in all circumstances?”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “support” and 2 represents the position “oppose”.
Response: Position score: 2
As a politically moderate woman in my late 50s who identifies as non-white, I oppose the policy proposal to make abortions illegal in all circumstances. This issue is not as simple as being pro-life or pro-choice, and I firmly believe that a woman should have the right to make decisions about her own body without interference from the government or anyone else. Making abortions illegal would only lead to unsafe and potentially life-threatening procedures, and it would be a violation of a woman's fundamental rights. Instead, we should focus on increasing access to education and resources that help prevent unwanted pregnancies in the first place, and provide support for women who do choose to carry a pregnancy to term. It is important to respect women's autonomy and trust them to make the best decisions for themselves and their families.
However, the CES data shows a far smaller gender difference on this question (but note that there are a wide variety of other questions addressing this complex issue in the CES survey).
In contrast, the GPT model misses an important gender trend on the question about prescription drug imports.
In the human data, women are significantly more likely to oppose the idea of allowing states to import drugs from other countries, a trend that persists strongly across ideological segments.
While GPT overshoots the ideological associations for this issue, it misses the gender skew entirely.
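To illustrate how such demographically conditioned prompts can be assembled and how the numeric position scores can be recovered from the responses, a minimal Python sketch is given below; the template mirrors the example prompts shown in this section, and the helper names are our own.

import re

TEMPLATE = (
    "Please write a 1 paragraph letter to the editor from the perspective of a "
    "politically {ideology} {gender} in the age range of {age_range} years who "
    "identifies as {race} expressing a clear point of view on the policy proposal "
    "to: \"{question}\". Before the letter, summarize their position with a "
    "\"Position score:\" statement followed by a single number (strictly numeric, "
    "with no other description) representing the person's position on the issue "
    "on a {scale}-point scale."
)

def build_prompt(ideology, gender, age_range, race, question, scale):
    # Fill the demographic slots of the prompt template.
    return TEMPLATE.format(ideology=ideology, gender=gender, age_range=age_range,
                           race=race, question=question, scale=scale)

def parse_position_score(response_text):
    # Extract the integer following "Position score:"; return None if it is absent.
    match = re.search(r"Position score:\s*(\d+)", response_text)
    return int(match.group(1)) if match else None

prompt = build_prompt("Moderate", "Woman", "(45, 60]", "non-white",
                      "On the topic of abortion, do you support or oppose a policy "
                      "to make abortions illegal in all circumstances?", 2)
print(parse_position_score("Position score: 2\nAs a politically moderate woman ..."))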
§.§ Generalization
While political and environmental factors are constantly shifting for all the issue questions studied in this work, there was one issue that changed overwhelmingly between when the GPT training data was collected (through September 2021) and when the CES survey was fielded (fall 2022): the Ukraine war.
CES survey respondents would clearly have in mind the interstate conflict that principally began in February 2022 with the Russian invasion of Ukraine, whereas the data used to train the GPT model would have contained only data relevant to prior Russian actions, such as the Russian annexation of Crimea in 2014, the war in Donbas, or the troop buildup that began in spring 2021.
To give the GPT model specific prompting, we modify the question from the CES survey with an explicit year to specifically reference the 2022 invasion.
The CES results reflect widespread, nearly uniform public support for US “involvement” (the question does not specify what means of intervention).
There is little evidence of an ideological signal in the CES data, although a poll taken months later might reflect the growing divide among conservatives over American support for the Ukrainian effort <cit.>.
There is a small gender difference, in that men are more likely to support involvement.
In contrast, the GPT results suggest a much lower level of overall support and a wider range of support (and lack thereof) across all ideological groups (Figure <ref>).
Moreover, the GPT model strongly predicts that “very liberal” respondents would not support US involvement.
This seems like a natural extension of anti-interventionist sentiment in left-wing political opinion exhibited in response to, for example, the US invasion of Iraq, as illustrated by this example,
Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Liberal Man in the age range of (16, 30] years who identifies as non-white expressing a clear point of view on the policy proposal to: “The United States should not get involved in response to Russia's 2022 invasion of Ukraine.”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “strongly agree” and 2 represents the position “strongly disagree”.
Response: Position score: 1
As a politically liberal man in the age range of 16-30 who identifies as non-white, I strongly believe that the United States should not get involved in response to Russia's 2022 invasion of Ukraine. As much as we want to be the global police, we simply do not have the resources nor the legitimacy to intervene in every conflict around the world. American imperialism has caused more harm than good in many cases, and it's time for us to realize that we cannot solve every problem with force. Furthermore, getting involved in another country's conflict has the potential to escalate the situation and put American lives at risk. Let us focus on repairing relationships with our allies and strengthening our own nation instead of stirring up more conflict.
And yet the GPT responses do not well capture the dynamics specific to the Ukraine war, including the nature of the Russian aggression associated with the war, the reporting on possible war crimes and atrocities associated with the conflict, and the vocal support of the Ukrainian cause from the Democratic leader, President Joe Biden.
We will discuss the potential to include such additional information in model inference in <ref>.
§ DISCUSSION
This work demonstrates the potential of AI chatbot models to generate synthetic public opinion polling data that realistically reproduces human responses.
It extends the work of <cit.>, for example, to issue polling.
We provide multiple ways of thinking about how these capabilities arise (<ref>), and discuss limitations, and potential mitigations, for these abilities (<ref>).
This demonstration has significant potential implications for the political polling and market research industries and for consumers of issue polling data such as political campaigns and advocates (<ref>).
§.§ Interpretation
The mechanism by which LLMs can generate synthetic polling data can be viewed alternatively as accessing a virtual public or as a new form of AI-assisted online listening platform.
Under the virtual public framework, we consider the LLM to be simulating a population of individual synthetic respondents akin to a human survey panel.
The multi-head attention architecture used by leading LLMs has a natural interpretation in these terms; to the extent that they capture distinguishable semantic information, each attention head can effectively represent a different perspective on an issue <cit.>.[
In deep learning models, “attention” is a widely used mechanism to differentially weight components of a layer input, effectively guiding the focus of the model.
In transformer models, multiple versions of attention are learned (attention heads) to produce independent attention mechanisms, which may correspond to recognition of distinct lexical patterns such as detecting named entities, representing entity relations, word parts of speech, or even semantic information.
See for further information.
]
Combined with the increasingly human-like reasoning performance and natively probabilistic nature of autoregressive LLMs, these features provide a basis by which models like ChatGPT can generate text emanations and survey responses that appear as if they came from a diverse panel of human respondents.
The online listening interpretation places models like ChatGPT alongside tools for online social media, news, and opinion aggregation like Brandwatch <cit.>, Meltwater <cit.>, and MediaCloud <cit.>, tools widely used by market researchers, brands, and political actors to understand public sentiment and reactions to recent events.
Like those online listening platforms, the source of the LLM's capabilities is a large corpus of Internet-derived training data that reflects a broad range of perspectives that, in aggregate, reflect public opinion and, when disaggregated, can elucidate trends with respect to demographics and other variables.
A substantial advantage of LLMs in principle is that they have reasoning capacity, allowing them to generalize beyond their training data to make predictions about hypothetical events or those that occur outside of the context of their sources.
While the results of <ref> illustrate the limited abilities of current generation LLMs to succeed at this task, this ability represents a major long-term advantage of LLMs and AI generally that is sure to be exploited by companies and other users <cit.>.
Beyond their capability to generalize to new issues, LLMs are more akin to a virtual public than an online listening platform in that they offer AI-assisted pollsters an opportunity to manipulate context and state.
When using online listening tools, you are limited to the questions and context that actual people have been exposed to and responded to, which makes it impossible to simulate a longform questionnaire like that used in the CES survey.
In the longform questionnaire, respondents (or subsets of respondents) answer questions in sequence and can be primed with certain information, such as factual evidence or talking points, in an effort to measure that context's influence on their responses.
Because LLMs are capable of accepting sequential prompts and (at some level) of generalizing beyond the specific examples in their training data, they can simulate this kind of longitudinal questionnaire.
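A sketch of how such a sequential, primed questionnaire could be represented as a multi-turn message history is given below; the persona and questions are invented for illustration and do not correspond to the CES instrument.

# Illustrative multi-turn message sequence that primes a synthetic respondent
# with context before a follow-up question, mimicking a longform questionnaire.
persona = ("You are answering as a politically Conservative Man in the age range "
           "of (30, 45] years who identifies as white.")
questionnaire = [
    "Do you support or oppose increasing fossil fuel production? "
    "Answer with a Position score on a 2-point scale.",
    "Additional context: gasoline prices rose sharply this year. "
    "Does this change your position? Answer on the same scale.",
]

messages = [{"role": "system", "content": persona}]
for question in questionnaire:
    messages.append({"role": "user", "content": question})
    # In practice, the model's reply would be appended here as an "assistant"
    # message before posing the next question, preserving conversational state.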
§.§ Limitations
A primary challenge in the design of AI polling tools is prompt engineering, as prompting strategies can dramatically affect the reasoning skills and accuracy of LLMs <cit.>.
The LLM model must be prompted not only to elicit demographically accurate differences in real public opinion associated with complex policy issues, but also, preferably, to align its response to established public polling datasets and methodologies.
As a step towards that level of alignment, in this work we have established a methodology (<ref>) for prompting LLMs to generate both numerical responses aligned to the questionnaire of a real public polling sample and explanations of their policy positions.
Improved alignment on numerical responses can lend additional credence to the textual responses generated by the AI models.
The imperfect correspondence between the AI-generated results and the real human survey data presented in <ref> is surely due in part to inadequacies of the LLM used in this work, and in part to the imperfection of the prompt engineering.
Even with existing LLMs like GPT-3.5, a variety of additional model parameters and prompt considerations could enable improvements upon our results. In particular, systematic modification of the LLM's temperature parameter,[<https://platform.openai.com/docs/api-reference/chat/create#chat/create-temperature>] which adjusts variance in the probabilistic generative text output, may have the effect of controlling the spread in opinion responses returned for a given demographic and issue configuration.
Moreover, because GPT models are autoregressive, their outputs may be sensitive to the instructions in our prompt about where to place the numeric “Position score.”
In particular, since chain of thought prompting is known to affect reasoning in LLMs <cit.>, asking it to assert a score before generating the text may significantly condition that response.
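A minimal sketch of such temperature-controlled repeated sampling, assuming the pre-1.0 interface of the openai Python client and the gpt-3.5-turbo model, could read as follows; the function name and the number of draws are our own choices.

import openai  # assumes the pre-1.0 openai client interface

def sample_responses(prompt, temperature, n=25):
    # Draw repeated completions at a fixed temperature to probe the spread of
    # responses for a given demographic and issue configuration.
    responses = []
    for _ in range(n):
        completion = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        responses.append(completion.choices[0].message["content"])
    return responses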
Among the most critical ethical considerations in using LLMs is their potential to repeat biases from their training data, including harmful stereotypes and misinformation <cit.>.
In some cases, these biases may reflect actual (if objectionable) distributions of human opinion and beliefs, and in other cases they may reflect the over-representation of those beliefs in certain online sources.
This vulnerability would not only weaken the usefulness of LLMs for public opinion measurement, but could actively create harm from their use.
Similarly, there are biases (perceived and legitimate) in human political polling that limit its usefulness for actionable public opinion measurement <cit.>.
Another key limitation is the availability of training data relevant to novel policy issues.
In particular, the current generation of LLMs are typically trained with fixed datasets that halt at a certain time (e.g., GPT-3.5 was trained on data collected through September 2021), and their training corpora may lack coverage of certain issues (e.g., Internet corpora may reflect a systematic silencing of certain issues, see, e.g., ).
To the extent that LLMs are limited to “parroting” memorized training samples <cit.>, they cannot be expected to accurately extrapolate to the likely reactions of human respondents to truly novel world events.
Moreover, absent highly detailed prompting about the state of the world at the time, LLMs may lack context that would be determinative of human responses; for example, the repeal of the Supreme Court precedent from Roe v. Wade is important context for Americans surveyed on the question of abortion rights in 2023.
This limitation could be mitigated by further development of continuously trained or diachronic LLMs, which can be updated with new training data over time and are aware of the time sensitivity of their training samples <cit.>.
Furthermore, LLMs can be augmented with capabilities to access new sources such as by browsing the web <cit.>, giving them access to new information to inform their responses at prediction time.
§.§ Implications
If this impressive but nascent ability of LLMs to realistically reflect ideological and demographic issue alignment were to improve, it would raise significant challenges and potential benefits for the future of the survey and polling industries.
Given the rapid dissemination of and low-cost inference for powerful LLMs and AI chatbot systems such as ChatGPT over the past year, an accurate AI-based polling system would become a highly cost-effective alternative to human surveying.
This cost advantage could democratize access to the tool of survey research, giving smaller institutions and individuals greater access to public opinion research.
If problems of survey nonresponse continue (or grow), it may compel survey consumers to increasingly turn to alternative approaches, such as LLMs, which are capable of generating data at arbitrary speed and resolution.
Moreover, the nearly instantaneous response rate from AI models (when not subject to rate limits from the companies that control them) provides an attractive capability to iterate on survey results.
When days or weeks are not required to re-field a survey instrument, marketers and pollsters have a much greater ability to refine and update their questionnaires and collect new data.
However, these abilities will only be actionable to marketers or political users if the significant challenges associated with the current generation of LLMs can be overcome.
It remains to be fully assessed how bias inherent to LLM training data and model design will become imprinted on its outputs, and how that could shape decisions informed by simulated market research studies or simulated polling.
It may be that the web datasets commonly used to train modern LLMs <cit.> will appropriately reflect the distribution of real world public thought, but perhaps only if curated to reflect a specific jurisdiction (e.g., sources primarily from one country) and to be balanced across the ideological spectrum.
At present, these biases and their dependence on large pretraining dataset properties is both difficult to quantify and costly to measure <cit.>.
And it is unclear to what extent such a system could capture rapidly evolving market and political dynamics, either historically or in real time, which is key to most practical uses of survey data
(see <ref> for further discussion).
§ CONCLUSIONS
By sampling from the OpenAI ChatGPT model (GPT-3.5) at scale (>11,000 responses), we have demonstrated the ability of LLMs to generate synthetic political issue polling data that realistically simulates American popular opinion across a variety of controversial topics in some respects.
In particular, we have shown that AI-generated responses have an excellent correlation (typically ρ>85%) with human data within ideological subgroups for many issues.
However, we have also shown the limitations of the AI-based approach in accurately matching trends in non-ideological demographic factors such as age, race, and gender, and in extrapolating to public opinion on novel events that occurred after the harvesting of its training data (such as the 2022 war in Ukraine).
We have interpreted these results in terms of multiple frameworks for the role of LLMs, as either virtual publics or online listening tools, and discussed their potential implications on the political polling and market research industries.
While additional development of capabilities for dynamic updating of LLMs, bias reduction, and generalization to novel issue topics is needed for AI tools to robustly supplement human opinion surveying, this study demonstrates the potential utility of even the current generation of AI tools to reduce cost, increase speed, and widen the accessibility of issue polling.
§.§ Acknowledgments
We thank Henry Farrell for thoughtful conversations on the role of AI in democracy, Beth Friedman for her helpful edits, and Xiao-Li Meng and an anonymous editor for their feedback.
|
http://arxiv.org/abs/2307.04147v1 | 20230709103519 | A Survey and Approach to Chart Classification | [
"Anurag Dhote",
"Mohammed Javed",
"David S Doermann"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Indian Institute of Information Technology, Allahabad Department of CSE, University at Buffalo, Buffalo, NY, USA
Email:{[email protected], [email protected],
[email protected]}
A. Dhote et al.
A Survey and Approach to Chart Classification
Anurag Dhote10009-0000-9385-4758 Mohammed Javed1Corresponding author0000-0002-3019-7401 David S Doermann20000-0003-1639-4561
August 12, 2023
================================================================================================================================
Charts represent an essential source of visual information in documents and facilitate a deep understanding and interpretation of information typically conveyed numerically. In the scientific literature, there are many charts, each with its stylistic differences. Recently the document understanding community has begun to address the problem of automatic chart understanding, which begins with chart classification. In this paper, we present a survey of the current state-of-the-art techniques for chart classification and discuss the available datasets and their supported chart types. We broadly classify these contributions as traditional approaches based on ML, CNN, and Transformers.
Furthermore, we carry out an extensive comparative performance analysis of CNN-based and transformer-based approaches on the recently published CHARTINFO UB-UNITECH PMC dataset for the CHART-Infographics competition at ICPR 2022. The data set includes 15 different chart categories, including 22,923 training images and 13,260 test images. We have implemented a vision-based transformer model that produces state-of-the-art results in chart classification.
§ INTRODUCTION
Charts provide a compact summary of important information or research findings in technical documents and are a powerful visualization tool widely used by the scientific and business communities. In the recent literature, the problem of chart mining has attracted increased attention due to numerous advantages, as suggested in the comprehensive survey published by Davila et al. in 2019 <cit.>. The term Chart mining refers to the process of extracting information represented by charts. Another motivating factor in the increased attention paid to this problem is a series of competitions held in conjunction with significant conferences to address the critical challenges in the chart mining pipeline<cit.>.
Since a variety of charts are possible, chart classification is often the first step in chart mining. The task of chart image classification can be formalized as follows: given a chart image extracted from a document, classify the image into one of N defined categories. The wide variety of chart types in the literature adds to the complexity of the task<cit.>. Additional problems include inter-class similarity, noise in authentic chart images, and the lack of state-of-the-art datasets that cover multiple chart types and incorporate 2.5D or 3D charts and noise into the training samples<cit.>. The rise of robust deep learning models has contributed significantly to the success of chart classification. Deep learning approaches have outperformed traditional machine learning approaches in terms of robustness and performance, yet there is still a shortage of solutions that provide stable results and are robust enough to handle the noise present in some data sets. In this paper, we provide a performance comparison of several deep learning models that are state-of-the-art on the ImageNet<cit.> classification task.
In addition, we report the performances of several popular vision transformers, which, to the best of our knowledge, have yet to be used for chart classification, except for the recent ICPR 2022 CHART-Infographics competition<cit.>.
This paper is organized as follows. Section 2 summarizes the existing chart classification literature covering traditional and deep learning-based methods, including a brief discussion on transformer-based chart classification. Section 3 reports and summarizes publicly available datasets.
Section 4 briefly highlights the popular ImageNet pre-trained deep learning-based models that will be used for our comparative study. Section 5 describes the latest edition of the UB PMC dataset, the training and testing protocols, and a discussion on their performance for chart classification. Section 6 provides information on possible improvements and suggestions for future research. Finally, Section 7 concludes with a summary of the paper.
§ CHART CLASSIFICATION TECHNIQUES
Based on the type of approaches used to implement the chart classification task in the literature, they can be grouped into traditional ML, CNN-based deep learning, and Transformer-based deep learning. Each type of approach is described briefly below.
§.§ Traditional ML approaches
Traditional approaches rely on feature extraction methods that are often manual and general-purpose. Features are extracted and then represented in mathematical form for direct processing by machine learning classifiers. Savva et al.<cit.> present a system that automatically reformats visualizations to increase visual comprehension. The authors use low-level image features for classification in conjunction with text-level features. The system uses a multiclass SVM classifier trained on a corpus containing 2601 chart images labeled with ten categories, following Gao et al.'s manual extraction approach. In <cit.>, researchers propose VIEW, a system that automatically extracts information from raster-format charts. The authors used an SVM to separate the textual and graphical components and classify the chart images based on the graphic elements extracted from the visual components. The text is typically found in three chart categories - bar charts, pie charts, and line graphs, with 100 images for each category collected from various real-world digital resources.
Instead of taking an image as input, Karthikeyani and Nagarajan<cit.> present a system to recognize chart images from PDF documents using eleven texture features that are part of a Gray Level Co-Occurrence Matrix. A chart image is located in the PDF Document database, and the features are extracted and fed to the learning model. SVM, KNN, and MLP are the classifiers used for classification. Cheng et al.<cit.> employ a multimodal approach that uses text and image features. These features are provided as input to an MLP. The output is characterized as a fuzzy set to get the final result. The corpus contains 1707 charts with three categories and a 96.1% classification result.
§.§ CNN-based Deep Learning Approaches
Liu et al.<cit.> used a combination of Convolutional Neural Networks (CNNs) and Deep Belief networks (DBNs) to capture high-level information present in deep hidden layers. Fully Connected Layers of Deep CNN are used to extract deeply hidden features. A DBN is then used to predict the image class using the deep hidden features. The authors use transfer learning and perform fine-tuning to prevent overfitting. They use a data set that includes more than 5,000 images of charts, including pie, scatter, line, bar, and flow classes. Deep features are useful over primitive features to provide better stability and scalability to the proposed framework. The proposed method achieves an average accuracy of 75.4%, which is 2.8% more than the method that uses only deep ConvNets.
Given the results of CNN in the classification of natural images, Siegel et al.<cit.> used two CNN-based architectures for chart classification. They evaluated AlexNet and ResNet-50, which are pre-trained on the ImageNet data set and then fine-tuned for chart classification. This transfer learning approach is prevalent in subsequent works addressing this particular problem. The proposed frameworks outperformed the state-of-the-art model at the time, such as ReVision, by a significant margin. ResNet-50 achieved the best classification accuracy of 86% on a data set that contained more than 60000 images spread over seven categories.
Amara et al.<cit.> proposed a CNN-based on LeNet to classify images from their corpus of 3377 images into 11 categories. The model comprises eight layers, one input layer, five hidden layers, one fully connected layer, and one output layer. The fully connected layer is used as a classifier, while the hidden layers are convolution and pooling layers designed to extract features automatically. A fully connected layer employs softmax activation to classify images into defined classes. For evaluation of the model's performance, an 80-20 split is performed on the data set for training and assessment. The proposed model performs better than the LeNet and pretrained LeNet architectures with an accuracy of 89.5%.
Jung et al. <cit.> present a classification method using the deep learning framework Caffe and evaluate its efficacy by comparing it with ReVision<cit.>. The authors use GoogLeNet<cit.> for classification and compare its results with shallower networks like LeNet-1 and AlexNet<cit.>. GoogLeNet outperforms LeNet-1 and AlexNet with an accuracy of 91.3%. Five-fold cross-validation is used for calculating the accuracy on an image corpus with 737 - 901 images for each chart type. The test concludes that ChartSense provides higher classification accuracy for all chart types than ReVision.
With studies adapting the deep learning approach for chart image classification, a comparative study of traditional vs. CNN architectures was required. Chagas et al.<cit.> provide a comparative analysis of conventional vs. CNN techniques. Authors evaluated CNN architectures (VGG19<cit.>, Resnet-50<cit.>, and Inception-V3<cit.>) for chart image classification for ten classes of charts. The performance is compared with conventional machine learning classifiers, Naive Bayes, HOG features combined with KNN, Support Vector Machines, and Random Forests. Pre-trained CNN models with fine-tuned last convolutional layers were used. The authors concluded that CNN models surpass traditional methods with an accuracy of 77.76% (Resnet-50) and 76.77% (Inception-V3) compared to 45.03% (HOG + SVM).
Dia et al.<cit.> employ four deep learning models on a corpus of 11,174 chart images of five categories. Of AlexNet<cit.>, VGG16<cit.>, GoogLeNet<cit.> and ResNet<cit.>, the authors get the best accuracy of 99.55% for VGG16 model. VGG16 outperforms the models used in ChartSense paper by a large margin.
Significant roadblocks to chart mining research are caused by the fact that current chart data sets are too small and lack the diversity needed to support deep learning. To address this problem, Jobin et al.<cit.> presented DocFigure, a figure classification data set with 33,000 annotated figure images in 28 different classes. To classify charts, the authors' proposed techniques utilize deep features, deep texture features, and a combination of both. Among these baseline classification techniques, the authors observed that combining deep features and deep texture features classifies images more efficiently than individual features. The average classification accuracy improved by 3.94% and 2.10% by concatenating FC-CNN and FV-CNN over the individual use of FC-CNN and FV-CNN, respectively. The overall accuracy of the combined feature method turned out to be 92.90%.
Luo et al. proposed a unified method to handle various chart styles<cit.>, where they show that generalization can be obtained in deep learning frameworks with rule-based methods. The experiments were performed on three different datasets of over 300,000 images with three chart categories. In addition to the framework, an evaluation metric for the bar, line, and pie charts is also introduced. The authors concluded that the proposed framework performs better than traditional rules-based and pure deep learning methods.
Araújo et al.<cit.> implemented four classic CNN models that performed well on computer vision tasks, including Xception<cit.>, VGG19<cit.>, ResNet152<cit.> and MobileNet<cit.>. The weights of these models were pre-trained on the ImageNet dataset, and the authors further performed hyperparameter tuning to obtain a stable learning rate and weight decay. These models were employed on a self-aggregated chart image corpus of 21,099 images with 13 different chart categories. Xception outperforms the other models by hitting an accuracy of 95%.
The problem of small datasets has been prevalent since the problem of chart mining was first introduced. Most work tries to increase the size of the dataset. However, Bajic and Job<cit.> use a Siamese CNN network to work with smaller datasets. The authors show that an accuracy of 100% can be achieved with 50 images per class, which is significantly better than using a vanilla CNN.
With the increase in datasets for chart images and the rise of deep learning models being employed on said datasets, an empirical study of these deep learning models was due. Thiyam et al.<cit.> compared 15 different deep-learning models on a self-aggregated dataset of 110,182 images spanning 24 different chart categories. In addition, the authors tested the performance of these models on several preexisting test sets. They concluded that Xception (90.25%) and DenseNet121 (90.12%) provide the most consistent and stable performance of all the deep learning models. The authors arrived at this conclusion by employing a five-fold cross-validation technique and calculating the standard deviation for each model across all datasets.
Davila et al.<cit.> summarized the work of the participants in the first edition of the Competition on Harvesting Raw Tables from Infographics, which provided data and tools for the chart recognition community. Two data sets were provided for the classification task: a synthetically generated AdobeSynth dataset and the UB-PMC data set gathered from the PubMedCentral open-access library. The highest average F1-measure achieved for the synthetic data set was 99.81%, and the highest F1-measure achieved for the PMC data set was 88.29%. In the second edition of the competition, the PMC set was improved and included in the training phase. An ensemble of ResNet152 and DenseNet121 achieved the highest F1-score of 92.8%. The third edition of the competition was recently held at ICPR 2022. The corpus of real chart images was made up of 36,183 chart images. The winning team achieved an F1-score of 91% with a base Swin transformer model combined with a progressive resizing technique. We summarize the competition details in Table <ref>.
§.§ Transformer-based Deep Learning Approaches
Since the inception of Vision Transformer, there has been a lot of development in various computer vision tasks such as image classification, object detection, and image segmentation. Vision transformer has outperformed CNN-based models in these tasks on the ImageNet dataset. However, there has not been widespread application of vision transformers to chart image classification.
To our knowledge, only the Swin transformer<cit.> has been used for chart classification as reported in <cit.>, which won the CHART-Infographics challenge ICPR2022. The authors applied a Swin Transformer Base Model with a progressive resizing technique. The models were initially trained on a scale (input size) of 224 followed by 384<cit.>.
The existing models in the literature are summarized in Table 2.
§ CHART CLASSIFICATION DATASETS
There has been a significant increase in the size of datasets both in terms of the number of samples and the number of chart types. The Revision dataset<cit.> had only 2,601
images and 10 chart types. The recent publicly available dataset<cit.> comprises around 33,000 chart images of 15 different categories. The details of several publicly available datasets are discussed in this section.
ChartSense <cit.>:
The ChartSense dataset was put together using the ReVision dataset, and the authors manually added some additional charts. The corpus has 5659 chart images that cover ten chart categories.
ChartVega <cit.>:
This dataset has ten chart types and was created due to a need for a benchmark dataset for chart image classification<cit.>. The dataset contains both synthetic and real chart images. The set contains 14,471 chart images, of which 12059 are for training and 2412 are for testing. In addition, a validation set of 2683 real chart images is provided. No separate annotations are provided, as chart images are separated according to their types.
DocFigure <cit.>:
This corpus consists of 28 categories of annotated figure images. There are 33,000 images that include non-chart categories like natural images, tables, 3D objects, and medical images. The train set consists of 19,797 images, and the test set contains 13173 images. The labels are provided in a text document.
ChartOCR <cit.>: The dataset contains 386,966 chart images created by the authors by crawling public excel sheets online. The dataset contains only three classes of chart images. The dataset is divided into the train, validation, and test sets. The training corpus contains 363,078 images, the validation set contains 11,932 images, and the test set contains 11,965 images. The annotations for the chart images are provided in JSON format.
UB-PMC CHART-Infographics: This dataset was introduced in the first edition of Competition on Harvesting Raw Tables from Infographics (ICPR 2019 CHART Infographics)<cit.>. This dataset has synthetic images created using matplotlib. For the testing, a large set of synthetic data and a small set of real chart images harvested from PubMedCentral[https://www.ncbi.nlm.nih.gov/pmc/] were used. The training set has 198,010 images, whereas the synthetic test set has 4540 images, and the real test set has 4242 images. The dataset has ten different chart categories.
The second edition of the competition<cit.> provided a dataset containing 22923 real chart images of 15 different chart categories in both training and testing sets. The training set has 15636 images, while the test set has 7287 images. The annotations for the chart image samples are provided in both JSON and XML formats.
The dataset presented as a part of the third and most recent competition comprises 36183 images of 15 different chart categories. The training set contains 22,923 images, while the test set contains 13,260 images. Similar to the previous edition, the annotations are provided in JSON and XML formats.
To the best of our knowledge, this is the largest publicly available dataset for chart image classification.
The existing classification data sets for charts are summarized in Table <ref>, and the composition of the publicly available datasets is reported in Table <ref>.
§ DEEP LEARNING MODELS FOR COMPARATIVE ANALYSIS
In this section, we briefly discuss prominent deep-learning models that have been used to study the performance of chart classification. We have selected two categories of deep learning models - CNN-based and Transformer-based for the comparative study. For CNN-based models, we have considered the proven state-of-the-art models for image classification on the large-scale benchmark dataset ImageNet<cit.> over the years. For vision transformer models, we have chosen the models that have been proven to outperform CNN-based models in computer vision tasks.
§.§ ResNet<cit.>
The Deep Residual Network was introduced in 2015 and was significantly deeper than previous deep learning networks. The motivation behind the model was to address the degradation problem: training accuracy degrades as the depth of the model increases. The authors added shortcut connections, also known as skip connections, that perform the proposed identity mapping and are significantly easier to optimize than unreferenced mappings. Despite being deeper than previous models, ResNet still has lower computational complexity than VGG nets. It achieved a top-5 error of 3.57% and claimed the top position in the 2015 ILSVRC classification competition<cit.>. We use a 152-layer version of this Deep Residual Network, called ResNet-152, for our classification problem.
§.§ Xception<cit.>
Xception is a re-interpretation of the Inception module in which the Inception modules are replaced with depth-wise separable convolutions. The number of parameters in Inception V3 and Xception is roughly the same, so the slight performance improvement is due to the more efficient use of parameters. Xception shows a larger improvement over Inception V3 on the JFT dataset than on the ImageNet dataset, where it achieves a top-5 accuracy of 94.5%. Xception also shows promising results in the chart classification literature, as reported by <cit.> and <cit.>.
§.§ DenseNet<cit.>
The Dense Convolutional Network, introduced in 2018, connects each layer in the network architecture to every other layer. This allows feature maps to be exchanged at every level: each layer receives the feature maps of all preceding layers as input rather than only the output of the immediately preceding layer. The difference between DenseNet and ResNet lies in the way they combine features: ResNet combines features through summation, whereas DenseNet combines them through concatenation. DenseNet is easier to train due to the improved flow of gradients and other information through the network. The vanilla DenseNet has fewer parameters than the vanilla ResNet network. We used DenseNet-121 for our classification task, as it was one of the best models for the chart image dataset as reported in <cit.>.
§.§ ConvNeXt<cit.>
The ConvNeXt model was introduced in response to hierarchical transformers outperforming ConvNets in image classification tasks. Starting from a standard ResNet architecture, the model is carefully modified to adapt the specific characteristics of a typical hierarchical transformer. This results in a CNN-based model that matches transformers in robustness and scalability across benchmarks. ConvNeXt achieves a top-1 accuracy of 87.8% on ImageNet.
§.§ DeIT Transformer<cit.>
The authors proposed the Data-Efficient Image Transformer (DeiT) with 86M parameters to make vision transformers more widely adoptable. This convolution-free approach achieves competitive results against the existing state-of-the-art models on ImageNet, with the proposed vision transformer reaching a top-1 accuracy of 85.2% on the ImageNet classification task. We use the DeiT-Base transformer for the chart classification task.
§.§ Swin Transformer<cit.>
The Swin Transformer is a hierarchical transformer that employs shifted windows to compute representations for vision tasks. The authors note that the hierarchical architecture provides linear computational complexity and scalability with respect to image size. Self-attention is computed within non-overlapping local windows, while the shifted-window scheme allows for cross-window connections. These qualities contribute to the Swin transformer's excellent performance across computer vision tasks. It achieves 87.3% top-1 accuracy on the ImageNet-1k dataset. We perform experiments with all 13 available Swin Transformer models and report their performance in Table <ref>. Furthermore, we refer to the best-performing Swin Transformer model as Swin-Chart in Table <ref>.
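A minimal sketch of how such ImageNet-pretrained backbones can be instantiated for 15-way chart classification with the timm library is given below; the timm model identifiers are illustrative choices and may differ from the exact pretrained variants used in the experiments.

import timm

NUM_CLASSES = 15  # chart categories in the ICPR 2022 CHARTINFO UB PMC dataset

# Illustrative timm identifiers; the exact pretrained variants may differ.
backbones = {
    "ResNet-152": "resnet152",
    "DenseNet-121": "densenet121",
    "Xception": "xception",
    "ConvNeXt": "convnext_base",
    "DeiT-Base": "deit_base_patch16_224",
    "Swin-Large": "swin_large_patch4_window7_224",
}

models = {
    name: timm.create_model(timm_id, pretrained=True, num_classes=NUM_CLASSES)
    for name, timm_id in backbones.items()
}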
§ EXPERIMENTAL PROTOCOL
§.§ Dataset
We use the ICPR2022 CHARTINFO UB PMC<cit.> dataset to perform our comparative study of deep learning models. The dataset is divided into training and testing sets. The number of chart images in the training and test set is 22,923 and 11,388, respectively. The ground truth values are annotated in JSON and XML formats. We further divide the provided training set into training and validation sets with an 80/20 ratio. The dataset contains charts of 15 categories: area, map, heatmap, horizontal bar, Manhattan, horizontal interval, line, pie, scatter, scatter-line, surface, Venn, vertical bar, vertical box, and vertical interval. Samples of each chart type present in the dataset are shown in Figure <ref>.
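A possible data-loading sketch for the 80/20 split is shown below; it assumes the chart images have been arranged into one folder per chart type, whereas the official annotations are distributed as JSON/XML files and would need to be converted first, and the folder path and normalization constants are assumptions.

import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Assumes one sub-folder per chart category under the given (hypothetical) path.
full_train = datasets.ImageFolder("ub_pmc_2022/train", transform=transform)
n_train = int(0.8 * len(full_train))
train_set, val_set = torch.utils.data.random_split(
    full_train, [n_train, len(full_train) - n_train],
    generator=torch.Generator().manual_seed(42),
)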
§.§ Training and Testing Setup
We choose the CNN-based models ResNet152, DenseNet121, Xception, and ConvNeXt and the transformer-based models DeiT and Swin Transformer for chart image classification. The CNN-based models were selected based on their performance in the existing literature on the ImageNet image classification task; the transformer-based models are chosen because they beat the CNN-based models. We use the pre-trained ImageNet weights of these models and fine-tune them for our chart classification task. The models are trained on a computer with an RTX 3090 video card with 24 GB of memory. PyTorch<cit.> was used as the engine for our experiments. We use a batch size of 64 for CNN-based models and a batch size of 16 for transformer-based models. A learning rate of 10^-4 is used to train each model for 100 epochs, with Label Smoothing Cross Entropy Loss as the loss function. For evaluation, we report precision, recall, and F1-score averaged over all classes.
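A fine-tuning sketch following this setup is given below, continuing the data-loading sketch above; the optimizer choice and the label-smoothing factor are assumptions, as they are not fixed by the description.

import timm
import torch
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = timm.create_model("swin_large_patch4_window7_224", pretrained=True,
                          num_classes=15).to(device)
# train_set as prepared in the data-loading sketch above; batch size 16 for transformers.
train_loader = DataLoader(train_set, batch_size=16, shuffle=True, num_workers=4)

criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1)  # smoothing factor assumed
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # optimizer choice assumed

for epoch in range(100):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()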
§.§ Comparative Results
The models were trained following the steps mentioned in the previous section and were tested on the UB-PMC test data set. We calculate the average precision, recall, and F1-score of all deep learning models. Among CNN-based models, ResNet-152 and ConvNeXt provide the best results across all evaluation metrics. The ResNet-152 result is consistent with the results in <cit.> for CNN-based models. For the Swin transformer, we perform experiments on 13 models consisting of Swin Tiny (SwinT), Swin Small (SwinS), Swin Base (SwinB), and Swin Large (SwinL) and their variants. SwinL with input image dimension 224 performs best, with an F1-score of 0.932, and is therefore referred to as Swin-Chart in the following. The scores of all Swin Transformer models are summarized in Table <ref>. The best-performing CNN-based models fail to compete with Swin-Chart for the chart classification task, as it outperforms the other five models with an average F1-score of 0.932. The scores for the deep learning models are summarized in Table <ref>.
Furthermore, we compare our best-performing model (Swin-Chart) with the models reported in <cit.>. This comparison is summarized in Table <ref>. We note that Swin-Chart surpasses the winner of the ICPR 2022 CHART-Infographics competition with an average F1-score of 0.931.
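The class-averaged precision, recall, and F1-score reported in this section can be computed, for instance, as in the following sketch using scikit-learn; test_loader is assumed to be built analogously to the training loader above.

import torch
from sklearn.metrics import precision_recall_fscore_support

@torch.no_grad()
def evaluate(model, test_loader, device):
    # Collect predictions over the test set and report class-averaged (macro) scores.
    model.eval()
    all_preds, all_labels = [], []
    for images, labels in test_loader:
        logits = model(images.to(device))
        all_preds.extend(logits.argmax(dim=1).cpu().tolist())
        all_labels.extend(labels.tolist())
    precision, recall, f1, _ = precision_recall_fscore_support(
        all_labels, all_preds, average="macro", zero_division=0)
    return precision, recall, f1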
§ FUTURE DIRECTIONS
Although there has been a significant increase in published articles on chart classification, several problems still need to be addressed.
§.§ Lack of Standard Benchmark Data Sets
The chart image classification problem has been extensively addressed in previous work. Efforts have been made to increase the size of chart image datasets that also cover a wide variety of charts<cit.>. With the growing literature in various domains, authors are finding creative ways to use different charts. This adds to the variety of chart types. Integrating such diverse chart types while creating chart datasets remains an open challenge. In addition, the popularity of charts such as bar, line, and scatter over others such as Venn, surface, and area adds to the problem of disparity between the number of samples in particular chart types.
§.§ Lack of Robust Models
Recent work makes some problematic assumptions in addressing this problem<cit.>. The lack of a diverse benchmark dataset adds to this problem, as model performance is inconsistent across publicly available datasets. The inherent intra-class dissimilarity and inter-class similarity of several chart types further affect model performance.
§.§ Inclusion of Noise
Most of the work in the existing literature ignores the effect of noise. Different types of noise, such as background grids, low image quality, composite charts, and multiple components accompanying the figures, lead to poor performance for models that perform exceptionally well on noiseless data<cit.>. Providing, in addition to the noiseless chart images, a small set of noisy chart images would help fine-tune the models so that they become invariant to such noise.
§ CONCLUSION
We have provided a brief survey of existing chart classification techniques and datasets. We used a Transformer model to obtain state-of-the-art results. Although there has been a significant development both in terms of variety in models and in the size of datasets, we observe that the chart classification problem still needs to be solved, especially for noisy and low-quality charts. Our comparative study showed that Swin-Chart outperforms the other vision transformer and CNN-based models on the latest UB-PMC dataset. In the future, we plan to generalize the results of the Swin-Chart over other publicly available datasets and try to bridge the gap to a robust deep-learning model for chart image classification.
8
amara_17
Amara, J. et al.: Convolutional Neural Network Based Chart Image Classification. In: International Conference in Central Europe on Computer Graphics, Visualization, and Computer Vision. (2017).
araujo_20
Araújo, T. et al.: A Real-Worl Approach on the Problem of Chart Recognition Using Classification, Detection, and Perspective Correction. Sensors. 20, 16, 4370 (2020).
bajic_20
Bajić, F. et al.: Data Visualization Classification Using Simple Convolutional Neural Network Model. In: International Journal of Electrical and Computer Engineering Systems (IJECES). 11, 1, 43–51 (2020).
bajic_21
Bajić, F., Job, J.: Chart Classification Using Siamese CNN. Journal of Imaging. 7, 220 (2021).
balaji_18
Balaji, A. et al.: Chart-Text: A Fully Automated Chart Image Descriptor. ArXiv (2018).
chagas_18
Chagas, P. et al.: Evaluation of Convolutional Neural Network Architectures for Chart Image Classification. In:International Joint Conference on Neural Networks (IJCNN). pp. 1–8 (2018).
cheng_13
Cheng, B. et al.: Graphical chart Classification Using Data Fusion for Integrating Text and Image Features. In: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR. (2013).
chollet_17_xception
Chollet, F.: Xception: Deep Learning with Depthwise Separable Convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017).
dai_18
Dai, W. et al.: Chart decoder: Generating textual and numeric information from chart images automatically. Journal of Visual Languages & Computing. 48, 101–109 (2018).
davila_19
Davila, K. et al.: ICDAR Competition on Harvesting Raw Tables from Infographics (CHART-Infographics). In: International Conference on Document Analysis and Recognition (ICDAR). pp. 1594–1599 IEEE, Sydney, Australia (2019).
davila_19_survey
Davila, K. et al.: Chart Mining: A Survey of Methods for Automated Chart Analysis. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. 43, 11, 3799–3819 (2021).
davila_20
Davila, K. et al.: ICPR 2020 - Competition on Harvesting Raw Tables from Infographics. In: Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, pp. 361-380 (2021).
davila_22
Davila, K. et al.: ICPR: Challenge on Harvesting Raw Tables from Infographics (CHART-Infographics). In: 26th International Conference on Pattern Recognition (ICPR), pp.4995-5001. (2022).
gao_12
Gao, J. et al.: View: Visual Information Extraction Widget for improving chart images accessibility. In: 19th IEEE International Conference on Image Processing. pp. 2865–2868 (2012).
he_15_resnet
He, K. et al.: Deep Residual Learning for Image Recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016).
howard_17_mobilenet
Howard, A.G. et al.: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, http://arxiv.org/abs/1704.04861, (2017).
huang_18_densenet
Huang, G. et al.: Densely Connected Convolutional Networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017).
jung_17
Jung, D. et al.: ChartSense: Interactive Data Extraction from Chart Images. In: Proceedings of the 2017 CHI conference on human factors in computing systems (2017).
karthikeyani_12
Karthikeyani, V., Nagarajan, S.: Machine Learning Classification Algorithms to Recognize Chart Types in Portable Document Format (PDF) Files. IJCA. 39, 2, 1–5 (2012).
krizhevsky_12_alexnet
Krizhevsky, A. et al.: ImageNet Classification with Deep Convolutional Neural Networks. In: Advances in Neural Information Processing Systems. (2012).
kv_19
kv, J. et al.: DocFigure: A Dataset for Scientific Document Figure Classification. In: International Conference on Document Analysis and Recognition Workshops (ICDARW). (2019).
liu_15
Liu, X. et al.: Chart classification by combining deep convolutional networks and deep belief networks. In: 13th International Conference on Document Analysis and Recognition (ICDAR). pp. 801–805 (2015).
liu_19
Liu, X. et al.: Data Extraction from Charts via Single Deep Neural Network. IN: arXiv preprint arXiv:1906.11906 (2019).
liu_21_swin
Liu, Z. et al.: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (2021).
liu_22_convnet
Liu, Z. et al.: A ConvNet for the 2020s. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022).
luo_21
Luo, J. et al.: ChartOCR: Data Extraction from Charts Images via a Deep Hybrid Framework. In: IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1916–1924 IEEE, Waikoloa, HI, USA (2021).
paszke_19
Paszke, A. et al.: PyTorch: an imperative style, high-performance deep learning library. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. pp. 8026–8037 Curran Associates Inc., Red Hook, NY, USA (2019).
russakovsky_15
Russakovsky, O. et al.: ImageNet Large Scale Visual Recognition Challenge. In: International journal of computer vision. pp.211-252, (2015).
savva_11
Savva, M. et al.: ReVision: automated classification, analysis and redesign of chart images. In: Proceedings of the 24th annual ACM symposium on User interface software and technology. pp. 393–402 Association for Computing Machinery, New York, NY, USA (2011).
siegel_16
Siegel, N. et al.: chartSeer: Parsing Result-charts in Research Papers. In: Leibe, B. et al. (eds.) Computer Vision – ECCV 2016. pp. 664–680 Springer International Publishing, Cham (2016).
simonyan_15_vgg
Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition, http://arxiv.org/abs/1409.1556, (2015).
szegedy_14_googlenet
Szegedy, C. et al.: Going Deeper with Convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2015).
szegedy_15_v3
Szegedy, C. et al.: Rethinking the Inception Architecture for Computer Vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016).
thiyam_21
Thiyam, J. et al.: Challenges in chart image classification: a comparative study of different deep learning methods. In: Proceedings of the 21st ACM Symposium on Document Engineering. pp. 1–4 Association for Computing Machinery, New York, NY, USA (2021).
thiyam_22
Thiyam, J. et al.: Chart classification: an empirical comparative study of different learning models. Presented at the December 19 (2021).
touvron_21_deit
Touvron, H. et al.: Training data-efficient image transformers & distillation through attention. In: Proceedings of the 38th International Conference on Machine Learning. pp. 10347–10357 PMLR (2021).
|
http://arxiv.org/abs/2307.04022v1 | 20230708175848 | Explicit a posteriori error representation for variational problems and application to TV-minimization | [
"Sören Bartels",
"Alex Kaltenbach"
] | math.NA | [
"math.NA",
"cs.NA",
"math.OC",
"35Q68, 49M25, 49M29, 65N30, 65N50"
] |
1] Sören Bartels
2] Alex Kaltenbach
[1]Department of Applied Mathematics, University of Freiburg, Hermann–Herder–Straße 10, 79104 Freiburg
[2]Institute of Mathematics, Technical University of Berlin, Straße des 17. Juni 136, 10623 Berlin
Explicit a posteriori error representation for variational problems and application to TV-minimization
August 12, 2023
========================================================================================================
In this paper, we propose a general approach for explicit a posteriori error representation for convex minimization problems using basic convex duality relations.
Exploiting discrete orthogonality relations in the space of element-wise constant vector fields as well as a discrete integration-by-parts formula between the Crouzeix–Raviart and the , all convex duality relations are transferred to a discrete level, making the explicit a posteriori error representation –initially based on continuous arguments only– practicable from a numerical point of view. In addition,
we provide a generalized Marini formula for the primal solution that determines a discrete primal solution in terms of a given discrete dual solution.
We benchmark all these concepts via the Rudin–Osher–Fatemi model. This leads to an adaptive algorithm that yields a (quasi-optimal)
linear convergence rate.
35Q68; 49M25; 49M29; 65N30; 65N50
§ INTRODUCTION
empty
The numerical analysis of the approximation of variational problems
is challenging when these are non-differentiable, degenerate, or involve
constraints. In particular, following established concepts for linear
elliptic partial differential equations often leads to sub-optimal results only.
The framework of convex duality provides an attractive concept to
reveal hidden information and structures to obtain quasi-optimal error representation formulas
under meaningful regularity conditions. Similar to <cit.>, we first exploit this
idea to derive explicit computable a posteriori error estimates for a natural error
measure. Then, this general result is transferred to a non-differentiable model problem with discontinuous solutions. As a whole, our results, similar to <cit.>, show that
the question of developing asymptotically exact a posteriori error estimators is
rather a question of identifying optimal error quantities. However, different from <cit.>, we also propose a general approach for making our results practicable from a numerical point of view.
Given a domain Ω⊆ℝ^d, d∈ℕ,
a convex energy density ϕ: ℝ^d→ℝ∪{+∞}, a
(Lebesgue) measurable energy density ψ: Ω×ℝ→ℝ∪{+∞} that is convex with respect to the second argument, and a Banach space X consisting of functions defined in
Ω, we refer to the minimization of the energy functional I: X→ℝ∪{+∞}, for every v∈ X defined by
I(v) ≔ ∫_Ωϕ(∇ v) dx + ∫_Ωψ(·, v) dx ,
as the primal problem.
Its (Fenchel) dual problem consists in the maximization of the functional D Y→ℝ∪{-∞}, where Y is a Banach space consisting of vector fields defined in
Ω, for every y∈ Y is defined by
D(y) -∫_Ωϕ^*(y) dx - ∫_Ωψ^*(·, div y) dx .
Here, ϕ^*ℝ^d→ℝ∪{+∞} and ψ^*Ω×ℝ→ℝ∪{+∞} (with respect to the second argument) denote the (Fenchel) conjugates of ϕℝ→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞}, respectively.
Under rather general conditions, cf. <cit.>, we have the well-posedness of the
primal problem and the dual problem, i.e., the existence of a minimizer u∈ X of (<ref>), i.e., a primal solution, and of a maximizer z∈ Y of (<ref>), i.e., a dual solution, and the strong duality relation
min_v∈ X I(v) = I(u)= D(z) = max_y∈ Y D(y) .
Since u∈X and z∈ Y are optimal for (<ref>) and (<ref>), respectively, it holds 0∈∂ I(u) and 0∈∂ D(z).
In particular, for every v∈ X and y∈ Y, the quantities
ρ_I^2(v,u) I(v) - I(u) ,
ρ_-D^2(y,z) D(z) - D(y) ,
are non-negative. They define distances, if (<ref>) and (<ref>), respectively, are
strictly convex, and are called coercivity functionals or optimal convexity measures.
For accessible and admissible approximations v∈ X and y∈ Y of the solutions u ∈ X and z ∈ Y, given the definitions (<ref>) and (<ref>), the strong duality relation (<ref>) implies the error identity
ρ_I^2(v,u) + ρ_-D^2(y,z)
= I(v) - D(z)
η^2(v,y) .
Hence, the fully computable error estimator η^2 X× Y→ℝ∪{+∞}, cf. (<ref>), exactly
represents the sum of the primal and dual approximation errors, i.e., of (<ref>) and (<ref>).
The error representation (<ref>) can be seen as a generalization of the Prager–Synge
result, cf. <cit.>, which states that for the Poisson problem, i.e., ϕ ≔ 1/2|·|^2∈ C^1(ℝ^d), ψ ≔ ((t,x)^⊤↦ -f(x)t): Ω×ℝ→ℝ, where f∈ L^2(Ω), X ≔ W^1,2_D(Ω), and Y ≔ W^2_N(;Ω), for every v∈ W^1,2_D(Ω) and y∈ W^2_N(;Ω) with -div y=f a.e. in Ω, we have that
1/2 ‖∇ v -∇ u‖_L^2(Ω;ℝ^d)^2 + 1/2 ‖y - z‖_L^2(Ω;ℝ^d)^2
= 1/2 ‖∇ v-y‖^2_L^2(Ω;ℝ^d) .
The equation (<ref>) has been used by various authors to define error estimators; for a comprehensive list of references, we refer the reader to <cit.>.
Often, local procedures are devised to construct an ad-missible vector field
y∈ W^2_N(;Ω) with - y=f a.e. in Ω from a given function v∈ W^1,2_D(Ω). While this leads to efficient procedures
to obtain accurate error estimators, the arguments cannot be expected to transfer
to non-linear problems. Another alternative to computing approximations
for the primal and dual problems consists in using finite element methods
for which reconstruction formulas are available, e.g., using the discontinuous Crouzeix–Raviart finite element
method and the Marini formula in the case of the Poisson problem, cf. <cit.>.
It has recently been found (cf. <cit.>) that the discontinuous Crouzeix–Raviart finite element method leads to quasi-optimal a priori error estimates for non-linear and non-differentiable problems, while continuous finite element methods provide only a sub-optimal
convergence behavior. In the derivation of those results, a general
discrete convex duality theory with Raviart–Thomas vector fields has emerged that
also leads to reconstruction
formulas in rather general settings. As a consequence, given an approximation
v∈ X or y∈ Y, respectively, the missing one can be obtained via a simple post-processing procedure.
Then, the pair leads to the error representation formula (<ref>). It should also
be noted that neither v∈ X nor y∈ Y needs to be optimal in a subspace
of X or Y. By introducing appropriate residuals, any pair of admissible
approximations of u∈ X and z∈ Y can be used. This is particularly important for non-linear
problems, i.e., non-quadratic functionals, where an exact solution of discrete problems is neither possible nor rational.
A difficulty in the application of the explicit a posteriori error representation
formula (<ref>) arises from the condition that v∈ X and y∈ Y need to be admissible for
the functionals (<ref>) and (<ref>). In the case of the Poisson problem,
this arises, e.g., via element-wise constant approximations of f∈ L^2(Ω)
that are the images of Raviart–Thomas vector fields under the divergence operator. While data terms can be controlled by introducing appropriate
data oscillation terms, structural peculiarities of the energy densities
ϕℝ^d→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞} and their (Fenchel) conjugates ϕ^*ℝ^d→ℝ∪{+∞} and ψ^*Ω×ℝ→ℝ∪{+∞} are often more challenging.
We illustrate this
by analyzing a non-differentiable
problem
which leads to a new error analysis and an adaptive refinement procedure
for the computationally challenging problem.
With ϕ = |·|∈ C^0(ℝ^d) and ψ=((x,t)^⊤↦α/2(t-g(x))^2)Ω×ℝ→ℝ
for a given function
g∈ L^2(Ω), i.e., the noisy image, and a given parameter α>0, i.e., the fidelity parameter,
the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, seeks a minimizing function u∈ BV(Ω)∩ L^2(Ω), i.e., the de-noised image, where BV(Ω) denotes the space of functions with bounded variation,
for the functional I BV(Ω)∩ L^2(Ω)→ℝ, for every v∈ BV(Ω)∩ L^2(Ω) defined by
I(v) |Dv|(Ω) + α2v-g_L^2(Ω)^2 ,
where |D(·)|(Ω)BV(Ω)→ [0,+∞] denotes the total variation functional.
The (Fenchel) problem to the minimization of the functional (<ref>) consists in the maximization of
the functional D W_N^2(;Ω)∩ L^∞(Ω;ℝ^d)→ℝ∪{-∞}, for every y∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d) defined by
D(y) -I_K_1(0)(y)-12αdiv y+α g_L^2(Ω)^2+α2 g_L^2(Ω)^2 ,
where
I_K_1(0)(y) 0 if | y|≤ 1 a.e. in Ω and I_K_1(0)(y) +∞ else.
The primal solution u∈ BV(Ω) ∩ L^2(Ω), i.e., the unique minimizer of (<ref>), and a dual solution z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d), i.e., a (possibly non-unique) maximizer of (<ref>), are
(formally) related via, cf. <cit.>,
z ∈ {∇ u/|∇ u|}  if |∇ u|>0 ,   z ∈ K_1(0)  if |∇ u|=0 ,  a.e. in Ω ,
div z = α (u-g)  a.e. in Ω .
The relations (<ref>) determine z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d) via u∈ BV(Ω)∩ L^2(Ω) and vice versa.
A
Crouzeix–Raviart finite element approximation of (<ref>) is given by the minimization of the regularized, discrete functional
I_h,ε^cr𝒮^1,cr(𝒯_h)→ℝ, h,ε>0, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I_h,ε^cr(v_h) f_ε(|∇_h v_h| )_L^1(Ω)
+ α2Π_h(v_h-g)_L^2(Ω)^2 .
Here, ∇_h is the element-wise application of the gradient operator
and f_ε∈C^1(ℝ) is a regularization of the modulus |·|, and Π_h denotes
the (local) L^2-projection onto element-wise constant functions.
A quasi-optimal dual Raviart–Thomas vector field z_h,ε^rt∈ℛT^0_N(𝒯_h) can be associated with a
minimizing function u_h,ε^cr∈𝒮^1,cr(𝒯_h) of I_h,ε^cr𝒮^1,cr(𝒯_h)→ℝ via the reconstruction formula
z_h,ε^rt = (f_ε'(|∇_h u_h,ε^cr|)/|∇_h u_h,ε^cr|) ∇_h u_h,ε^cr
+ (α Π_h (u_h,ε^cr -g)/d) ( id_ℝ^d- Π_h id_ℝ^d) in ℛT^0_N(𝒯_h) .
For canonical choices of f_ε∈ C^1(ℝ), e.g.,
f_ε =|·|_ε= ((·)^2+ε^2)^1/2, it holds |Π_h z_h,ε^rt|≤ 1 a.e. in Ω, but not
|z_h,ε^rt|≤ 1 a.e. in Ω. Thus, we employ f_ε = (1-ε) |·|_ε,
so that
|f_ε'(t)|≤ 1-ε for all t∈ℝ. The choice ε∼ h^2 in (<ref>) and an additional projection step onto K_1(0)
lead to an accurate approximation z_h,ε^rt∈ℛT^0_N(𝒯_h) of z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d), which
satisfies |z_h,ε^rt|≤ 1 a.e. in Ω and, thus, represents an admissible test function that leads to the definition
of an error estimator. The resulting adaptive mesh-refinement procedure leads to significantly
improved experimental convergence rates compared to recent related contributions, cf. <cit.>. More precisely, we report quasi-optimal linear convergence rates which have been obtained only for meshes with quadratic grading towards a sufficiently simple jump set of a regular g in <cit.>.
This article is organized as follows: In Section <ref>, we introduce the employed notation and the relevant finite element spaces. In Section <ref>, we propose a general approach for explicit a posteriori error representation for convex minimization problems based on (discrete) convex duality relations. In Section <ref>,
we transfer the concepts of Section <ref> to the Rudin–Osher–Fatemi model and propose a regularization scheme. In Section <ref>, we review our theoretical findings via numerical experiments.
§ PRELIMINARIES
§.§ Convex analysis
For a (real) Banach space X, which is equipped with the norm ·_X X→ℝ_≥ 0, we denote its corresponding (continuous) dual space by X^* equipped with the dual norm
·_X^* X^*→ℝ_≥ 0, defined by x^*_X^*sup_x_X≤ 1⟨ x^*,x⟩_X for every x^*∈ X^*, where ⟨·,·⟩_X X^*× X→ℝ, defined by ⟨ x^*,x⟩_X x^*(x) for every x^*∈ X^* and x∈ X, denotes the duality pairing.
A functional F X→ℝ∪{+∞} is called sub-differentiable in x∈ X, if F(x)<∞ and if there exists x^*∈ X^*, called sub-gradient, such that for every y∈ X, it holds
⟨ x^*,y-x⟩_X≤ F(y)-F(x) .
The sub-differential ∂ F X→ 2^X^* of a functional F X→ℝ∪{+∞} for every x∈ X is defined by (∂ F)(x){x^*∈ X^*|(<ref>) holds for x^*} if F(x)<∞ and (∂ F)(x)∅ else.
For a given functional F X→ℝ∪{±∞}, we denote its corresponding (Fenchel) conjugate by F^* X^*→ℝ∪{±∞}, which for every x^*∈ X^* is defined by
F^*(x^*)sup_x∈ X⟨ x^*,x⟩_X-F(x) .
If F X→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional, then also its (Fenchel) conjugate F^* X^*→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional, cf. <cit.>.
Furthermore, for every x^*∈ X^* and x∈ X such that
F^*(x^*)+F(x) is well-defined, i.e., the critical case ∞-∞ does not occur, the Fenchel–Young inequality
⟨ x^*,x⟩_X≤ F^*(x^*)+F(x)
applies.
In particular,
for every x^*∈ X^* and x∈ X, it holds the Fenchel–Young identity
x^*∈ (∂ F)(x) ⇔ ⟨ x^*,x⟩_X= F^*(x^*)+F(x) .
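For orientation, consider the prototypical example underlying the total variation model treated later: for ϕ ≔ |·| on ℝ^d, the definition of the conjugate yields
ϕ^*(s) = sup_a∈ℝ^d { s·a − |a| } = 0 if |s| ≤ 1 , and ϕ^*(s) = +∞ if |s| > 1 ,
i.e., ϕ^* = I_K_1(0) is the indicator functional of the closed unit ball K_1(0). The Fenchel–Young identity then states that s ∈ ∂|·|(a) if and only if |s| ≤ 1 and s·a = |a|; for a ≠ 0, this forces s = a/|a|, while for a = 0, every s ∈ K_1(0) is admissible.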
The following convexity measures for functionals play an important role in the derivation of an explicit a posteriori error representation for convex minimization problems in Section <ref>; for further information, please refer to <cit.>.
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper, i.e., D(F){x∈ X| F(x)<∞}≠∅.
(i) The convexity measure σ^2_F:
D(F)× X→ [0,+∞] for every x∈ D(F) and y∈ X is defined by
σ^2_F(y,x) F(y)-F(x)-sup_x^*∈ (∂ F)(x)⟨ x^*,y-x⟩_X ,
where we use the convention sup(∅)-∞.
(ii) The symmetric convexity measure σ^2_F,s:
D(F)^2→ [0,+∞] for every x,y∈ D(F) is defined by
σ_F,s^2(y,x)σ_F^2(y,x)+σ_F^2(x,y)=inf_x^*∈ (∂ F)(x);y^*∈ (∂ F)(y)⟨ x^*-y^*,x-y⟩_X ,
where we use the convention inf(∅) +∞.
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper. Moreover, let x∈ X be minimal for F X→ℝ∪{+∞}. Then, the coercivity functional (or optimal convexity measure) ρ^2_F:
X^2→ [0,+∞] of F at x∈ X for every y∈ X is defined by
ρ^2_F(y,x) F(y)-F(x)≥ 0 .
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper. Moreover, let x∈ X be minimal for F X→ℝ∪{+∞}. Then, due to 0∈ (∂ F)(x), for every y∈ X, it holds
σ^2_F(y,x)≤ρ^2_F(y,x) .
§.§ Function spaces
Throughout the article, we denote by Ω⊆ℝ^d, d ∈ℕ, a bounded polyhedral Lipschitz domain, whose (topological) boundary is disjointly divided into a closed Dirichlet part Γ_D and an open Neumann part Γ_N, i.e., ∂Ω = Γ_D∪Γ_N and ∅ = Γ_D∩Γ_N.
For p∈[1,∞] and l∈ℕ, we employ the standard notations[Here, W^-1/p,p(Γ_N) (W^1-1/p',p'(Γ_N))^* and W^-1/p,p(∂Ω) (W^1-1/p',p'(∂Ω))^*.]
W^1,p_D(Ω;ℝ^l) {v∈ L^p(Ω;ℝ^l) |∇ v∈ L^p(Ω;ℝ^l× d), v=0 in L^p(Γ_D;ℝ^l)} ,
W^p_N(;Ω) ≔ {y∈ L^p(Ω;ℝ^d) | div y∈ L^p(Ω), tr_n y=0 in W^-1/p,p(Γ_N)} ,
W^1,p(Ω;ℝ^l) W^1,p_D(Ω;ℝ^l) if Γ_D=∅, and W^p(;Ω) W^p_N(;Ω) if Γ_N=∅,
where we denote by tr: W^1,p(Ω;ℝ^l)→L^p(∂Ω;ℝ^l) and by tr_n(·): W^p(;Ω)→W^-1/p,p(∂Ω) the trace and the normal trace operator, respectively. In particular, we always omit tr(·) and tr_n(·). In addition, we employ the abbreviations L^p(Ω) ≔ L^p(Ω;ℝ^1), W^1,p(Ω) ≔ W^1,p(Ω;ℝ^1), and W^1,p_D(Ω) ≔ W^1,p_D(Ω;ℝ^1). For (Lebesgue) measurable functions u,v: Ω→ℝ and a (Lebesgue) measurable set M⊆Ω, we write
(u,v)_M∫_Mu v dx ,
whenever the right-hand side is well-defined. Analogously, for (Lebesgue) measurable vector fields z,yΩ→ℝ^d and a (Lebesgue) measurable set M⊆Ω, we write (z,y)_M∫_Mz· y dx. Moreover,
let |D(·)|(Ω): L^1_loc(Ω) →ℝ∪{+∞}, for every v∈ L^1_loc(Ω) defined by[Here, C_c^∞(Ω;ℝ^d) denotes the space of smooth and in Ω compactly supported vector fields.]
|Dv|(Ω) ≔ sup{-(v, div ϕ)_Ω | ϕ∈ C_c^∞(Ω;ℝ^d),
‖ϕ‖_L^∞(Ω;ℝ^d)≤ 1} ,
denote the total variation functional. Then, the space of functions with bounded variation is defined by
BV(Ω) ≔ {v∈ L^1(Ω) | |Dv|(Ω)<∞} .
§.§ Triangulations
Throughout the entire paper, we denote by {𝒯_h}_h>0, a family of regular, i.e., uniformly shape regular and conforming, triangulations of Ω⊆ℝ^d, d∈ℕ, cf. <cit.>.
Here, h>0 refers to the average mesh-size, i.e., if we set h_T(T) for all T∈𝒯_h, then, we have that h
= 1/(𝒯_h)∑_T∈𝒯_hh_T.
For every element T ∈𝒯_h,
we denote by ρ_T>0, the supremum of diameters of inscribed balls. We assume that there exists a constant ω_0>0, independent of h>0, such that max_T∈𝒯_hh_Tρ_T^-1≤ω_0. The smallest such constant is called the chunkiness of {𝒯_h}_h>0. The sets 𝒮_h, 𝒮_h^i, 𝒮_h^∂, and 𝒩_h contain the sides, interior sides, boundary sides, and vertices, respectively, of the elements of 𝒯_h.
We have the following relation between the average mesh-size and the number of vertices:
h∼(𝒩_h)^-1/d .
For k∈ℕ∪{0} and T∈𝒯_h, let 𝒫_k(T) denote the set of polynomials of maximal degree k on T. Then, for k∈ℕ∪{0} and l∈ℕ, the sets of continuous and polynomial functions or vector fields, respectively, are defined by
ℒ^k(𝒯_h)^l {v_h∈ L^∞(Ω;ℝ^l)| v_h|_T∈𝒫_k(T)^l for all T∈𝒯_h} ,
𝒮^k(𝒯_h)^l ℒ^k(𝒯_h)^l∩ C^0(Ω;ℝ^l) .
For every T∈𝒯_h and S∈𝒮_h, let x_T1/d+1∑_z∈𝒩_h∩ Tz∈ T and x_S1/d∑_z∈𝒩_h∩ Sz∈ S denote the barycenters of T and S, respectively. The (local) L^2-projection operator Π_h L^1(Ω;ℝ^l)→ℒ^0(𝒯_h)^l onto element-wise constant functions or vector fields, respectively, for every
v∈ L^1(Ω), is defined by Π_h v|_T_Tv dx for all T∈𝒯_h.
The element-wise gradient
∇_hℒ^1(𝒯_h)^l→ℒ^0(𝒯_h)^l× d, for every v_h∈ℒ^1(𝒯_h)^l, is defined by ∇_hv_h|_T∇(v_h|_T) for all T∈𝒯_h.
§.§.§ Crouzeix–Raviart element
The Crouzeix–Raviart finite element space, cf. <cit.>, consists of affine functions that are continuous at the barycenters of inner element sides, i.e.,[Here, for every inner side S∈𝒮_h^i, v_h_S v_h|_T_+-v_h|_T_- on S, where T_+, T_-∈𝒯_h satisfy ∂ T_+∩∂ T_-=S, and for every boundary S∈𝒮_h^∂, v_h_S v_h|_T on S, where T∈𝒯_h satisfies S⊆∂ T.]
𝒮^1,cr(𝒯_h){v_h∈ℒ^1(𝒯_h)|v_h_S(x_S)=0 for all S∈𝒮_h^i} .
Note that 𝒮^1,cr(𝒯_h)⊆ BV(Ω). More precisely, for every v_h∈𝒮^1,cr(𝒯_h), cf. <cit.>, we have that Dv_h=∇_ hv_h⊗dx+v_h⊗ds|_𝒮_h with ∇_ hv_h⊗dx⊥v_h⊗ds|_𝒮_h, so that, cf. <cit.>,
|Dv_h|(Ω)= ‖∇_h v_h‖_L^1(Ω;ℝ^d)+‖⟦v_h⟧‖_L^1(𝒮_h) .
The Crouzeix–Raviart finite element space with homogeneous Dirichlet boundary condition on Γ_D is defined by
𝒮^1,cr_D(𝒯_h){v_h∈𝒮^1,cr(𝒯_h)| v_h(x_S)=0 for all S∈𝒮_h∩Γ_D} .
A basis for 𝒮^1,cr(𝒯_h) is given by functions φ_S∈𝒮^1,cr(𝒯_h), S∈𝒮_h, satisfying the φ_S(x_S')=δ_S,S' for all S,S'∈𝒮_h. A basis for 𝒮^1,cr_D(𝒯_h) is given by φ_S∈𝒮^1,cr_D(𝒯_h), S∈𝒮_h∖Γ_D.
§.§.§ Raviart–Thomas element
The Raviart–Thomas finite element space, cf. <cit.>, consists of element-wise affine vector fields that have continuous constant normal components on inner element sides, i.e.,[Here, for every inner side S∈𝒮_h^i, y_h· n_Sy_h|_T_+· n_T_++y_h|_T_-· n_T_- on S, where T_+, T_-∈𝒯_h satisfy ∂ T_+∩∂ T_-=S and for every T∈𝒯_h, n_T∂ T→𝕊^d-1 denotes the outward unit normal vector field to T,
and for every boundary side S∈𝒮_h^∂, y_h· n_Sy_h|_T· n on S, where T∈𝒯_h satisfies S⊆∂ T and n∂Ω→𝕊^d-1 denotes the outward unit normal vector field to Ω.]
ℛT^0(𝒯_h){y_h∈ℒ^1(𝒯_h)^d| y_h|_T· n_T= on ∂ T for all T∈𝒯_h ,
y_h· n_S=0 on S for all S∈𝒮_h^i} .
Note that ℛT^0_N(𝒯_h)⊆ W^∞_N(;Ω).
The Raviart–Thomas finite element space with homogeneous normal component boundary condition on Γ_N is defined by
ℛT^0_N(𝒯_h){y_h∈ℛT^0(𝒯_h)| y_h· n=0 on Γ_N} .
A basis for ℛT^0(𝒯_h) is given by vector fields ψ_S∈ℛT^0(𝒯_h), S∈𝒮_h, satisfying ψ_S|_S'· n_S'=δ_S,S' on S' for all S'∈𝒮_h, where n_S is the unit normal vector on S pointing from T_- to T_+ if T_+∩ T_-=S∈𝒮_h. A basis for ℛT^0_N(𝒯_h) is given by ψ_S∈ℛT^0_N(𝒯_h), S∈𝒮_h∖Γ_N.
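Both spaces thus carry exactly one degree of freedom per side. As a quick illustration, this can be checked with legacy FEniCS (dolfin 2019.1.0 is assumed here, as for the numerical experiments reported below; the mesh is an arbitrary choice):

from dolfin import UnitSquareMesh, FunctionSpace

mesh = UnitSquareMesh(4, 4)
mesh.init(1)                                       # build the sides (edges) of the mesh
CR = FunctionSpace(mesh, "CR", 1)                  # Crouzeix-Raviart space
RT = FunctionSpace(mesh, "RT", 1)                  # lowest-order Raviart-Thomas space
print(CR.dim(), RT.dim(), mesh.num_entities(1))    # all three numbers coincide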
§.§.§ Discrete integration-by-parts formula
For every v_h∈𝒮^1,cr_D(𝒯_h) and y_h∈ℛT^0_N(𝒯_h), it holds the discrete integration-by-parts formula
(∇_hv_h,Π_h y_h)_Ω=-(Π_h v_h, y_h)_Ω .
In addition, cf. <cit.>,
if a vector field y_h∈ℒ^0(𝒯_h)^d satisfies for every v_h∈𝒮^1,cr_D(𝒯_h)
(y_h,∇_h v_h)_Ω=0 ,
then, choosing v_h=φ_S∈𝒮^1,cr_D(𝒯_h) for all S∈𝒮_h∖Γ_D, one finds that y_h∈ℛT^0_N(𝒯_h).
Similarly, if a function v_h∈ℒ^0(𝒯_h) satisfies for every y_h∈ℛT^0_N(𝒯_h)
(v_h, y_h)_Ω=0 ,
then, choosing y_h=ψ_S∈ℛT^0_N(𝒯_h) for all S∈𝒮_h∖Γ_N, one finds that v_h∈𝒮^1,cr_D(𝒯_h). In other words,
we have the orthogonal (with respect to the inner product (·,·)_Ω) decompositions
ℒ^0(𝒯_h)^d =(|_ℛT^0_N(𝒯_h))⊕∇_h(𝒮^1,cr_D(𝒯_h))
,
ℒ^0(𝒯_h) =(∇_h|_𝒮^1,cr_D(𝒯_h))⊕ (ℛT^0_N(𝒯_h)) .
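The discrete integration-by-parts formula can also be verified numerically. The following lines are a minimal sketch in legacy FEniCS (dolfin 2019.1.0 is assumed, as for the experiments reported below); the expressions chosen for v_h and y_h are arbitrary, and Γ_D = ∂Ω is imposed, so that Γ_N = ∅ and no constraint on the normal trace of y_h is needed.

from dolfin import (UnitSquareMesh, FunctionSpace, VectorFunctionSpace, Expression,
                    interpolate, project, DirichletBC, Constant, inner, grad, div,
                    dx, assemble)

mesh = UnitSquareMesh(8, 8)
CR   = FunctionSpace(mesh, "CR", 1)            # Crouzeix-Raviart functions
RT   = FunctionSpace(mesh, "RT", 1)            # lowest-order Raviart-Thomas vector fields
DG0  = FunctionSpace(mesh, "DG", 0)            # element-wise constant functions
DG0v = VectorFunctionSpace(mesh, "DG", 0)      # element-wise constant vector fields

v_h = interpolate(Expression("sin(3.0*x[0])*x[1]*x[1]", degree=3), CR)
DirichletBC(CR, Constant(0.0), "on_boundary").apply(v_h.vector())   # v_h vanishes at boundary side midpoints
y_h = interpolate(Expression(("x[0]*x[1]", "cos(3.0*x[1])"), degree=3), RT)

lhs = assemble(inner(grad(v_h), project(y_h, DG0v)) * dx)   # (grad_h v_h, Pi_h y_h)_Omega
rhs = -assemble(project(v_h, DG0) * div(y_h) * dx)          # -(Pi_h v_h, div y_h)_Omega
print(abs(lhs - rhs))                                       # vanishes up to round-off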
§ EXACT A POSTERIORI ERROR ESTIMATION FOR CONVEX MINIMIZATION PROBLEMS
§.§ Continuous convex minimization problem and continuous convex duality
Let ϕℝ^d→ℝ∪{+∞} be a proper, convex, and lower semi-continuous function and let ψΩ×ℝ→ℝ∪{+∞} be a (Lebesgue) measurable function such that for a.e. x∈Ω, the function ψ(x,·)Ω×ℝ→ℝ∪{+∞} is proper, convex, and lower semi-continuous. We examine the convex minimization problem that seeks for a function u∈ W^1,p_D(Ω), p∈ (1,∞), that is minimal for the functional I W^1,p_D(Ω)→ℝ∪{+∞}, for every v∈W^1,p_D(Ω) defined by
I(v)∫_Ωϕ(∇ v) x+∫_Ωψ(·,v) x .
In what follows, we refer to the minimization of I W^1,p_D(Ω) →ℝ∪{+∞} as the primal problem.
A (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional DL^p'(Ω;ℝ^d)→ℝ∪{ -∞}, for every y∈ L^p'(Ω;ℝ^d) defined by
D(y) -∫_Ωϕ^*( y) x-F^*( y) ,
where the distributional divergence L^p'(Ω;ℝ^d)→ (W^1,p_D(Ω))^* for every y∈L^p'(Ω;ℝ^d) and v∈W^1,p_D(Ω) is defined by ⟨ y,v⟩_W^1,p_D(Ω) -(y,∇ v)_Ω and
F^*L^p'(Ω)→ℝ∪{±∞} denotes the Fenchel conjugate to F L^p(Ω)→ℝ∪{+∞}, defined by F(v)∫_Ωψ(·,v) x for all v∈ L^p(Ω). Note that for every y∈W^p'_N(;Ω), we have that ⟨ y,v⟩_W^1,p_D(Ω)=( y, v)_Ω for all v∈ W^1,p_D(Ω) and, thus, the representation
D(y)=-∫_Ωϕ^*( y) x-∫_Ωψ^*(·, y) x .
A weak duality relation applies, cf. <cit.>, i.e.,
inf_v∈ W^1,p_D(Ω)I(v)≥sup_y∈ L^p'(Ω;ℝ^d)D(y) .
In what follows, we
always assume that ϕℝ^d→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞} are such that (<ref>) admits at least one minimizer u∈ W^1,p_D(Ω), called the primal solution, (<ref>) at least one maximizer z∈ L^p'(Ω;ℝ^d), called the dual solution, and that a strong duality relation applies, i.e.,
I(u)= D(z) .
By the Fenchel–Young inequality (cf. (<ref>)), (<ref>) is equivalent to
the convex optimality relations
z·∇ u =ϕ^*(z)+ϕ(∇ u) Ω ,
z ∈∂ F(u) .
If z∈W^p'_N(;Ω), then the convex optimality relation (<ref>) is equivalent to
z u=ψ^*(·, z)+ψ(·, u) Ω .
If ϕ∈ C^1(ℝ^d),
then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
z= Dϕ(∇ u) L^p'(Ω;ℝ^d) .
Similarly, if z∈W^p'_N(;Ω) and
ψ(x,·)∈ C^1(ℝ) for a.e. x∈Ω,
then (<ref>) is equivalent to
z=Dψ(·, u) L^p'(Ω) .
The convex duality relations (<ref>)–(<ref>) motivate introducing the primal-dual error estimator η^2 W^1,p_D(Ω)× L^p'(Ω;ℝ^d)→ [0,+∞], for every
v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d) defined by
η^2(v,y) I(v)-D(y) .
Note that the sign of the estimator (<ref>) is a consequence of the weak duality relation (<ref>).
Together with the optimal convexity measures (cf. Definition <ref>) ρ_I^2 W^1,p_D(Ω)^2→ [0,+∞] of (<ref>) at a primal solution u∈ W^1,p_D(Ω) and ρ_-D^2L^p'(Ω;ℝ^d)→ [0,+∞] of the negative of (<ref>) at a dual solution z∈L^p'(Ω;ℝ^d), we arrive at the following explicit a posteriori error representation.
The following statements apply:
(i) For every v∈ W^1,p_D(Ω) and y∈L^p'(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2(v,y) .
(ii) For every v∈ W^1,p_D(Ω) and y∈W^p'_N(;Ω), we have that
η^2(v,y) = ∫_Ωϕ(∇ v)-∇ v· y+ϕ^*(y) dx+∫_Ωψ(·, v)- v div y+ψ^*(·,div y) dx .
(i) By the Fenchel–Young inequality (<ref>), the integrands in the representation (<ref>), are non-negative and, thus, suitable as local refinement indicators.
(ii) Appealing to Remark <ref>, from Theorem <ref> (i), for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d), it follows that
σ_I^2(v,u)+σ_-D^2(y,z)≤η^2(v,y).
ad (i). Due to I(u)=D(z), cf. (<ref>), Definition <ref>, and (<ref>),
for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=I(v)-I(u)+D(z)-D(y)=η^2(v,y) .
ad (ii). Using (<ref>), (<ref>), and integration-by-parts, we conclude that (<ref>) applies.
(i) In the , cf. <cit.>, i.e., ϕ1/p|·|^p∈ C^1(ℝ), p∈ (1,∞), and ψ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ, where f∈ L^p'(Ω), cf. <cit.>, we have that
ρ^2_I(v,u)∼F(∇ v)-F(∇ u)_L^2(Ω;ℝ^d)^2 , ρ^2_-D(y,z)∼F^*(y)-F^*(z)_L^2(Ω;ℝ^d)^2 ,
where F,F^*ℝ^d→ℝ^d for every a∈ℝ^d are defined by F(a)| a|^p-2/2a and F^*(a)| a|^p'-2/2a.
(ii) In the , cf. <cit.>, i.e., ϕ1/2|·|^2∈ C^1(ℝ) and ψ ((t,x)^⊤↦ -f(x)t+I_χ(x)(t))Ω×ℝ→ℝ∪{+∞}, where f∈ L^2(Ω) and χ∈ W^1,2(Ω) with χ≤ 0 on Γ_D, cf. <cit.>, where I_χ(x)(t) 0 if t≥ 0 and I_χ(x)(t) +∞ else, we have that
ρ^2_I(v,u)= 12∇ v-∇ u_L^2(Ω;ℝ^d)^2+⟨ -Λ,v-u⟩_W^1,2_D(Ω) , ρ^2_-D(y,z)≥12y-z_L^2(Ω;ℝ^d)^2 ,
where Λ∈ (W^1,2_D(Ω))^* is defined by ⟨Λ,v⟩_W^1,2_D(Ω) (f,v)_Ω-(∇ u,∇ v)_Ω for all v∈ W^1,2_D(Ω).
(iii) In an , cf. <cit.>, i.e., ϕζ∘|·|∈ C^1(ℝ), where ζ(0) 0, ζ'(t)μ_2 t if t∈ [0,t_1], ζ'(t)μ_2 t_1 if t∈ [t_1,t_2], and ζ'(t)μ_1 t if t∈ [t_2,+∞) for some 0<t_1<t_2 and 0<μ_1<μ_2 with t_1μ_2=t_2μ_1, and ψ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ, where f∈ L^2(Ω), cf.
<cit.>,
we have that
ρ^2_I(v,u)≥12μDϕ(∇ v)-Dϕ(∇ u)_L^2(Ω;ℝ^d)^2 , ρ^2_-D(y,z)≥12μy-z_L^2(Ω;ℝ^d)^2 .
(iv) In the , cf. <cit.>, i.e.,
ϕ|·|∈ C^0(ℝ) and ψ ((t,x)^⊤↦α/2(t-g(x))^2)Ω×ℝ→ℝ, where g∈ L^2(Ω), cf. <cit.>, we have that
ρ^2_I(v,u)≥ (α/2)‖v-u‖_L^2(Ω)^2 , ρ^2_-D(y,z)≥ (1/(2α))‖div y- div z‖_L^2(Ω)^2 .
Since the dual problem to the minimization of the negative of (<ref>), in turn, consists in the maximization of the negative of (<ref>),
the roles of the primal problem and the dual problem may be interchanged. An advantage of Theorem <ref> consists in the fact that it yields reliable and efficient a posteriori error estimators for both the primal problem and the dual problem:
Theorem <ref> also shows that for each y∈ L^p'(Ω;ℝ^d), the estimator η^2_I,y (v↦η^2(v,y)) W^1,p_D(Ω)→ [0,+∞]
satisfies
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2_I,y(v) ,
and for each v∈ W^1,p_D(Ω), the estimator η^2_-D,v (y↦η^2(v,y)) L^p'(Ω;ℝ^d)→ [0,+∞] satisfies
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2_-D,v(y) .
For the a posteriori error estimators (<ref>) and (<ref>) for being numerically practicable, it is necessary to have a
computationally cheap way to obtain sufficiently accurate approximation of the dual solution (for (<ref>)) and/or of the primal solution
(for (<ref>)), respectively. In Section <ref>, resorting to (discrete) convex duality relations between a non-conforming Crouzeix–Raviart approximation of the primal problem and a Raviart–Thomas approximation of the dual problem, we arrive at discrete reconstruction formulas, called generalized Marini formula, cf. <cit.>.
§.§ Discrete convex minimization problem and discrete convex duality
Let ψ_hΩ×ℝ→ℝ∪{+∞} denote a suitable approximation[We refrain from being too precise concerning
what we mean with approximation to allow for more flexibility. Assumptions on both ϕℝ^d→ℝ∪{+∞} and ψ_hΩ×ℝ→ℝ∪{+∞}, h>0, that imply, e.g., Γ-convergence results can be found in <cit.>.] of ψΩ×ℝ→ℝ∪{+∞} such that ψ_h(·,t)∈ℒ^0(𝒯_h) for all t∈ℝ and for a.e. x∈Ω, ψ_h(x,·)Ω×ℝ→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional. Then, we examine the (discrete) convex minimization problem that seeks for a function u_h^cr∈𝒮^1,cr_D(𝒯_h) that is minimal for the functional I_h^cr𝒮^1,cr_D(𝒯_h)→ℝ∪{+∞}, for every v_h∈𝒮^1,cr_D(𝒯_h) defined by
I_h^cr(v_h)∫_Ωϕ(∇_ h v_h) x+∫_Ωψ_h(·,Π_h v_h) x .
In what follows, we refer the minimization of I_h^cr𝒮^1,cr_D(𝒯_h)→ℝ∪{+∞} to as the discrete primal problem.
In <cit.>, it is shown that the corresponding (Fenchel) dual problem to the minimization of (<ref>)
consists in the maximization of D_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h^rt(y_h)-∫_Ωϕ^*(Π_h y_h) x-∫_Ωψ_h^*(·, y_h) x .
A discrete weak duality relation, cf. <cit.>, applies
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h^cr(v_h)≥sup_y_h∈ℛT^0_N(𝒯_h)D_h^rt(y_h) .
We will always assume that ϕℝ^d→ℝ∪{+∞} and ψ_hΩ×ℝ→ℝ∪{+∞} are such that (<ref>) admits at least one minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h), called the discrete primal solution,
(<ref>) admits at least one maximizer z_h^rt∈ℛT^0_N(𝒯_h), called the discrete dual solution, and that a discrete strong duality relation applies, i.e.,
I_h^cr(u_h^cr)=D_h^rt(z_h^rt) .
By the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to the discrete convex optimality relations
Π_h z_h^rt·∇_ h u_h^cr =ϕ^*(Π_hz_h^rt)+ϕ(∇_ h u_h^cr) a.e. in Ω ,
z_h^rt Π_hu_h^cr =ψ_h^*(·, z_h^rt)+ψ_h(·,Π_hu_h^cr) a.e. in Ω .
If ϕ∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
Π_h z_h^rt=Dϕ(∇_ h u_h^cr) in ℒ^0(𝒯_h)^d ,
and if ϕ^*∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
∇_ h u_h^cr=Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d .
Similarly, if ψ_h(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then (<ref>) is equivalent to
z_h^rt=Dψ_h(·,Π_hu_h^cr) in ℒ^0(𝒯_h) ,
and if ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then (<ref>) is equivalent to
Π_hu_h^cr=Dψ_h^*(·, z_h^rt) in ℒ^0(𝒯_h) .
The relations (<ref>)–(<ref>) motivate the following discrete recontruction formulas for a discrete dual solution z_h^rt∈ℛT^0_N(𝒯_h) from a discrete primal solution u_h^cr∈𝒮^1,cr_D(𝒯_h) and vice versa, called generalized Marini formulas, cf. <cit.>.
The following statements apply:
(i) If ϕ∈ C^1(ℝ^d) and ψ_h(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then, given a minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>),
a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>) is given via
z_h^rt= Dϕ(∇_h u_h^cr)+(Dψ_h(·, Π_hu_h^cr)/d)(id_ℝ^d-Π_h id_ℝ^d) in ℛT^0_N(𝒯_h) ,
a discrete strong duality relation applies, i.e., (<ref>).
(ii) If ϕ^*∈ C^1(ℝ^d) and ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then, given a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>), a minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>) is given via
u_h^cr = Dψ_h^*(·, div z_h^rt)+ Dϕ^*(Π_h z_h^rt)·(id_ℝ^d-Π_h id_ℝ^d)
in 𝒮^1,cr_D(𝒯_h) ,
a discrete strong duality relation applies, i.e., (<ref>).
It is possible to derive reconstruction formulas similar to (<ref>) and (<ref>) under weaker conditions, e.g., resorting to a regularization argument (cf. Proposition <ref>) or given discrete Lagrange multipliers (cf. <cit.>).
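For orientation, in the Poisson case recalled in the introduction, i.e., ϕ ≔ 1/2|·|^2 and ψ_h ≔ ((t,x)^⊤↦ -(Π_h f)(x) t) with f∈ L^2(Ω), we have that Dϕ = id_ℝ^d and Dψ_h(·,t) = -Π_h f, so that the reconstruction formula in (i) reduces to the classical Marini formula
z_h^rt = ∇_h u_h^cr - (Π_h f/d)(id_ℝ^d - Π_h id_ℝ^d) in ℛT^0_N(𝒯_h) ,
which, in particular, satisfies div z_h^rt = -Π_h f a.e. in Ω, since div((id_ℝ^d - Π_h id_ℝ^d)|_T) = d for all T∈𝒯_h.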
ad (i). See <cit.>.5mm
ad (ii). By definition, it holds u_h^cr∈ℒ^1(𝒯_h) and the discrete convex optimality relation (<ref>) is satisfied.
Since z_h^rt∈ℛT^0_N(𝒯_h) is maximal for (<ref>) as well as ϕ^*∈ C^1(ℝ^d) and ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, for every y_h∈ℛT^0_N(𝒯_h), we have that
(Dϕ^*(Π_h z_h^rt),Π_hy_h)_Ω+(Dψ_h^*(·, z_h^rt), y_h)_Ω=0 .
In particular, (<ref>) implies that Dϕ^*(Π_h z_h^rt)∈ ((|_ℛT^0_N(𝒯_h)))^⊥.
Appealing to <cit.>, it holds
((|_ℛT^0_N(𝒯_h)))^⊥=∇_h(𝒮^1,cr_D(𝒯_h)). Therefore, there exists
v_h∈𝒮^1,cr_D(𝒯_h) such that
∇_h v_h= Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d .
Hence, for every y_h∈ℛT^0_N(𝒯_h), resorting to the discrete integration-by-parts formula (<ref>), (<ref>), (<ref>), and (<ref>), we find that
(Π_hv_h-Π_h u_h^cr, y_h)_Ω
=- (Dϕ^*(Π_h z_h^rt),Π_hy_h)_Ω-(Dψ_h^*(·, z_h^rt), y_h)_Ω=0 .
In other words, for every y_h∈ℛT^0_N(𝒯_h), we have that
( v_h-u_h^cr, y_h)_Ω= (Π_h v_h-Π_h u_h^cr, y_h)_Ω=0 .
On the other hand, we have that ∇_ h(v_h-u_h^cr)=0 in ℒ^0(𝒯_h)^d, i.e., v_h-u_h^cr∈ℒ^0(𝒯_h).
Therefore, (<ref>) in conjunction with (<ref>) implies that
v_h-u_h^cr∈ ( (ℛT^0_N(𝒯_h)))^⊥=(∇_h|_𝒮^1,cr_D(𝒯_h)). As a result, due to v_h∈𝒮^1,cr_D(𝒯_h), we conclude that u_h^cr∈𝒮^1,cr_D(𝒯_h) with
∇_ h u_h^cr =Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d ,
Π_hu_h^cr =Dψ_h^*(·, z_h^rt) in ℒ^0(𝒯_h) .
By the Fenchel–Young identity, cf. (<ref>), (<ref>) is equivalent to
Π_h z_h^rt·∇_ h u_h^cr =ϕ^*(Π_hz_h^rt)+ϕ(∇_ h u_h^cr) a.e. in Ω ,
z_h^rt Π_hu_h^cr =ψ_h^*(·, z_h^rt)+ψ_h(·,Π_hu_h^cr) a.e. in Ω .
Eventually, adding (<ref>)_1 and (<ref>)_2, subsequently, integration with respect to x∈Ω, resorting to the discrete integration-by-parts formula (<ref>), and using the definitions (<ref>) and (<ref>), we arrive at I_h^cr(u_h^cr)=D_h^rt(z_h^rt),
which, appealing to the discrete weak duality relation (<ref>), implies that u_h^cr∈𝒮^1,cr_D(𝒯_h) is minimal for (<ref>).
§ APPLICATION TO THE RUDIN–OSHER–FATEMI (ROF) MODEL
In this section, we transfer the concepts derived in Section <ref> to the non-differentiable Rudin–Osher–Fatemi (ROF) model, cf. <cit.>. The approximation of the ROF model has been investigated by numerous authors: A priori error estimates has been derived in <cit.>.
A posteriori error estimates and adaptivity results can be found in <cit.>.
§.§ The continuous Rudin–Osher–Fatemi (ROF) model
Given a function g∈ L^2(Ω), i.e., the noisy image, and a constant parameter α>0, i.e., the fidelity parameter, the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, consists in the minimization of the functional I BV(Ω)∩ L^2(Ω)→ℝ, for every v∈ BV(Ω)∩ L^2(Ω) defined by
I(v)|v| (Ω)+α2v-g^2_L^2(Ω) .
In <cit.>, it has been established that there exists a unique minimizer u∈ BV(Ω)∩ L^2(Ω)
of (<ref>).
Appealing to <cit.> or <cit.>, the (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D W^2_N(;Ω) ∩ L^∞(Ω;ℝ^d)→ℝ∪{-∞}, for every y∈ W^2_N(;Ω) ∩ L^∞(Ω;ℝ^d) defined by
D(y) -I_K_1(0)(y) -12α y+α g_L^2(Ω)^2+α2g_L^2(Ω)^2 ,
where I_K_1(0) L^∞(Ω;ℝ^d)→ℝ∪{∞} is defined by I_K_1(0)(y) 0 if y∈ L^∞(Ω;ℝ^d) with | y|≤ 1 a.e. in Ω and I_K_1(0)(y)∞ else. Apart from that, in <cit.>, it is shown that (<ref>) admits a maximizer z∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) and that a strong duality relation applies, i.e.,
I(u)=D(z) .
Appealing to <cit.>, (<ref>) is equivalent to
the convex optimality relations
z =α (u-g) in L^2(Ω) ,
-(u, z)_Ω =|u|(Ω) .
Next, if we introduce, by analogy with Section <ref>, the primal-dual error estimator
η^2 BV(Ω)× (W^2_N(;Ω)∩ L^∞(Ω;ℝ^d))→ [0,+∞], for every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) defined by
η^2(v,y) I(v)-D(y) ,
then the concepts of Section <ref> can be transferred to the ROF model.
The following statements apply:
(i) For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2(v,y) .
(ii) For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
η^2(v,y)= |Dv|(Ω)+( y,v)_Ω+12α y-α (v-g)_L^2(Ω)^2+I_K_1(0)(y) .
ad (i). Due to I(u)=D(z), cf. (<ref>), Definition <ref>, and (<ref>),
for every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=I(v)-I(u)+D(z)-D(y)=η^2(v,y) .
ad (ii). For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
η^2(v,y) =|Dv|(Ω)+( y,v)_Ω+12αα (v-g)_L^2(Ω)^2
-12α2( y,α v)_Ω+12α y+α g_L^2(Ω)^2-α2g_L^2(Ω)^2^2+I_K_1(0)(y)
=|Dv|(Ω)+( y,v)_Ω+α2v-g_L^2(Ω)^2
-
12α y-α (v-g)_L^2(Ω)^2-α2v-g_L^2(Ω)^2+I_K_1(0)(y) ,
which yields the claimed representation.
Restricting the estimator (<ref>) to subclasses of BV(Ω) and W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) for which an appropriate integration-by-parts formula applies, e.g., (<ref>), it is possible to derive alternative representations of the estimator (<ref>), whose integrands are point-wise non-negative and, thus, suitable as local refinement indicators.
(i) For every v∈ W^1,1(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), by integration-by-parts, it holds
η^2(v,y)=‖∇ v‖_L^1(Ω;ℝ^d)-(∇ v,y)_Ω+(1/(2α))‖div y-α (v-g)‖_L^2(Ω)^2+I_K_1(0)(y)≥ 0 .
(ii) For every T∈𝒯_h, we define the local refinement indicator η_T,W^2: W^1,1(Ω)× (W^2_N(;Ω)∩ L^∞(Ω;ℝ^d))→ [0,+∞] for every v∈ W^1,1(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) by
η^2_T,W(v,y)≔‖∇ v‖_L^1(T;ℝ^d)-(∇ v,y)_T+(1/(2α))‖div y-α (v-g)‖_L^2(T)^2+I_K_1(0)(y)≥ 0 .
(iii) For every v_h∈𝒮^1,cr(𝒯_h) and y_h∈ℛT^0_N(𝒯_h), by the representation of the total variation of Crouzeix–Raviart functions (<ref>) and the discrete integration-by-parts formula (<ref>), it holds
η^2(v_h,y_h) =‖∇_h v_h‖_L^1(Ω;ℝ^d)+‖⟦v_h⟧‖_L^1(𝒮_h)-(∇_h v_h,Π_h y_h)_Ω
+(1/(2α))‖div y_h-α (v_h-g)‖_L^2(Ω)^2+I_K_1(0)(y_h)≥ 0 .
(iv) For every T∈𝒯_h, we define the discrete local refinement indicator η_T,CR^2: 𝒮^1,cr(𝒯_h)×ℛT^0_N(𝒯_h) → [0,+∞] for every v_h∈𝒮^1,cr(𝒯_h) and y_h∈ℛT^0_N(𝒯_h) by
η^2_T,CR(v_h,y_h) ≔ ‖∇_h v_h‖_L^1(T;ℝ^d)+∑_S∈𝒮_h;S⊆ T‖⟦v_h⟧‖_L^1(S)-(∇_h v_h,Π_h y_h)_T
+(1/(2α))‖div y_h-α (v_h-g)‖_L^2(T)^2+I_K_1(0)(y_h)≥ 0 .
We emphasize that the primal-dual error estimator (<ref>) and the representations (<ref>) or in Remark <ref> (i) & (ii) are well-known, cf. <cit.>. However, the combination of (<ref>) with the representation of the total variation of Crouzeix–Raviart functions (<ref>) and the discrete integration-by-parts formula (<ref>) in Remark <ref> (iii) & (iv), to the best of the authors' knowledge, is new and leads to significantly improved experimental convergence rates of the corresponding adaptive mesh-refinement procedure compared to the contributions <cit.>, cf. Section <ref>.
§.§ The discretized Rudin–Osher–Fatemi (ROF) model
Given g∈ L^2(Ω) and α>0, with g_hΠ_hg∈ℒ^0(𝒯_h), the discretized ROF model, proposed in <cit.>, consists in the minimization of I^cr_h𝒮^1,cr(𝒯_h)→ℝ, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I^cr_h(v_h)∇_hv_h_L^1(Ω;ℝ^d)+α2Π_hv_h-α g_h^2_L^2(Ω) .
Note that the functional (<ref>) defines a non-conforming approximation of the functional (<ref>), as, e.g., jump terms of v_h across inner element sides are not included. This, however, turned out to be essential in the derivation of optimal a priori error estimates in <cit.>.
Since the functional (<ref>) is proper, strictly convex, weakly coercive, and lower semi-continuous,
the direct method in the calculus of variations, cf. <cit.>, yields the existence of a unique minimizer u_h^cr∈𝒮^1,cr(𝒯_h), called the discrete primal solution. Appealing to <cit.>, the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h^rt(y_h) -I_K_1(0)(Π_hy_h)-12α y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2 .
Appealing to Theorem <ref> (below), there exists a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>), which satisfies |Π_h z_h^rt|≤ 1 a.e. in Ω, a
discrete strong duality relation applies, i.e.,
I^cr_h(u_h^cr)= D_h^rt(z_h^rt) ,
and the discrete convex optimality relations
z_h^rt =α (Π_h u_h^cr-g_h) ℒ^0(𝒯_h) ,
Π_hz_h^rt·∇_h u_h^cr =|∇_h u_h^cr| ℒ^0(𝒯_h) .
§.§ The regularized, discretized Rudin–Osher–Fatemi model
To approximate a discrete minimizer u_h^cr∈𝒮^1,cr(𝒯_h) of (<ref>), it is common to approximate
the modulus function by strictly convex regularizations. In this connection, for every ε∈ (0,1), we define a special regularization f_εℝ→ℝ_≥ 0 of the modulus function, for every t∈ℝ, via
f_ε(t) (1-ε) | t|_ε , | t|_ε (t^2+ε^2)^1/2 ,
where |·|_ε: ℝ→ℝ_≥ 0 is commonly referred to as the standard regularization.
Let us collect the most important properties of the regularization (<ref>).
For every ε∈ (0,1), the following statements apply:
(i) f_ε∈ C^1(ℝ) with f_ε'(0)=0.
(ii) For every t∈ℝ, it holds -ε | t|-ε^2≤ f_ε(t)-| t|≤ε (1-| t|).
(iii) For every t∈ℝ, it holds | f_ε'(t)|≤ 1-ε.
(iv) For every s∈ℝ, it holds
f_ε^*(s)-ε ((1-ε)^2-| s|^2)^1/2 if | s|≤ 1-ε
+∞ if | s|> 1-ε .
The main reason to consider the regularization f_εℝ→ℝ_≥ 0 instead of the standard regularization |·|_εℝ→ℝ_≥ 0 consists in the property (iii) in Lemma <ref>. This additional slope reduction enables us later to construct a sufficiently accurate, admissible approximation of the dual solution using an additional projection step, cf. Remark <ref> (below) and Section <ref> (below).
ad (i). The claimed regularity f_ε∈ C^1(ℝ) is evident. Since for every t∈ℝ, it holds
f_ε'(t)=(1-ε) t(t^2+ε^2)^1/2 ,
we have that f_ε'(0)=0.
ad (ii). For every t∈ℝ, due to 0≤| t|_ε-| t|≤ε, we have that
-ε | t|-ε^2≤ -ε | t|_ε≤ f_ε(t)-| t|=ε-ε | t|_ε≤ε (1-| t|) .
ad (iii). Immediate consequence of the representation (<ref>).
ad (iv). Due to <cit.>, for every s∈ℝ and ε∈ (0,1), we have that
f_ε^*(s)=((1-ε) |·|_ε)^*(s)=(1-ε) (|·|_ε)^*(s1-ε) .
Since for every s∈ℝ and ε∈ (0,1), it holds
(|·|_ε)^*(s)=
-ε (1-| s|^2)^1/2 if | s|≤ 1
+∞ if | s|> 1
,
we conclude that
the claimed representation of the Fenchel conjugate applies.
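The slope bound in (iii) and the conjugate formula in (iv) can be checked numerically; the following NumPy lines are a quick sanity check (the value of ε and the sampling grid are arbitrary choices, not taken from the text).

import numpy as np

eps = 0.1
f       = lambda t: (1 - eps) * np.sqrt(t**2 + eps**2)            # f_eps
f_prime = lambda t: (1 - eps) * t / np.sqrt(t**2 + eps**2)        # f_eps'
f_star  = lambda s: -eps * np.sqrt((1 - eps)**2 - s**2)           # f_eps^* for |s| <= 1 - eps

t = np.linspace(-50.0, 50.0, 200001)
assert np.max(np.abs(f_prime(t))) <= 1 - eps                      # |f_eps'| <= 1 - eps

# compare f_eps^* with a direct maximization of s*t - f_eps(t) over the grid
for s in np.linspace(-(1 - eps) * 0.99, (1 - eps) * 0.99, 7):
    assert abs(np.max(s * t - f(t)) - f_star(s)) < 1e-3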
Given g∈ L^2(Ω), α> 0, and an element-wise constant regularization parameter ε_h∈ℒ^0(𝒯_h) with 0<ε_h<1 a.e. in Ω, for g_hΠ_hg∈ℒ^0(𝒯_h), the regularized, discrete ROF model consists in the minimization of the functional I^cr_h,ε_h𝒮^1,cr(𝒯_h)→ℝ, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I^cr_h,ε_h(v_h)f_ε_h(|∇_hv_h|)_L^1(Ω)+α2Π_hv_h-g_h^2_L^2(Ω) .
Since the functional (<ref>) is proper, strictly convex, weakly coercive, and lower semi-continuous,
the direct method in the calculus of variations, cf. <cit.>, yields the existence of a unique minimizer u_h,ε_h^cr∈𝒮^1,cr(𝒯_h), called the regularized, discrete primal solution.
Appealing to (f_ε_h∘|·|)^*=f_ε_h^*∘|·|, cf. <cit.>, the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of functional D_h,ε_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h,ε_h^rt(y_h) -∫_Ωf_ε_h^*(|Π_hy_h| ) dx-12α y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2 .
The following proposition clarifies the well-posedness of the dual regularized, discretized ROF model, i.e., the existence of a maximizer of (<ref>). It also yields a discrete reconstruction formula for a maximizer of (<ref>) from a minimizer of (<ref>) and proves discrete strong duality.
The following statements apply:
(i) A discrete weak duality relation applies, i.e.,
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h,ε_h^cr(v_h)≥sup_y_h∈ℛT^0_N(𝒯_h)D_h,ε_h^rt(y_h) .
(ii) The discrete flux z_h^rt∈ℒ^1(𝒯_h), defined via the generalized Marini formula
z_h,ε_h^rt ≔ (f_ε_h'(|∇_h u_h,ε_h^cr|)/|∇_h u_h,ε_h^cr|) ∇_h u_h,ε_h^cr+(α (Π_h u_h,ε_h^cr-g_h)/d)(id_ℝ^d-Π_h id_ℝ^d) ,
satisfies z_h,ε_h^rt∈ℛT^0_N(𝒯_h) and the discrete convex optimality relations
z_h,ε_h^rt =α (Π_hu_h,ε_h^cr-g_h) in ℒ^0(𝒯_h) ,
Π_h z_h,ε_h^rt =f_ε_h'(|∇_ h u_h,ε_h^cr|)|∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr in ℒ^0(𝒯_h)^d .
(iii) The discrete flux z_h^rt∈ℛT^0_N(𝒯_h) is a maximizer of (<ref>) and discrete strong duality applies, i.e.,
I^cr_h,ε_h(u_h,ε_h^cr)=D_h,ε_h^rt(z_h,ε_h^rt) .
Note that, by the Fenchel–Young identity, cf. <cit.>, (<ref>) is equivalent to
Π_h z_h,ε_h^rt·∇_h u_h,ε_h^cr =f_ε_h^*(|Π_h z_h,ε_h^rt| )+f_ε (|∇_h u_h,ε_h^cr|) in ℒ^0(𝒯_h) .
Appealing to Lemma <ref> (iii), we have that |Π_h z_h,ε_h^rt|≤ 1-ε_h a.e. in Ω. Therefore,
if ‖Π_hu_h,ε_h^cr-g_h‖_L^∞(Ω)≤ c_0 for some c_0>0, which can be expected by discrete maximum principles, then choosing
ε_h ≔ (α c_0/d) h yields that
‖z_h,ε_h^rt‖_L^∞(Ω;ℝ^d)≤ 1. However, choices like ε_h∼ h let us expect convergence rates not better than 𝒪(h^1/2), cf. Proposition <ref> (i) (below). In order to allow for the convergence rate 𝒪(h), one needs to choose ε_h∼ h^2. But, in this case, we cannot guarantee that ‖z_h,ε_h^rt‖_L^∞(Ω;ℝ^d)≤ 1, so that we instead consider the scaled vector field z̃_h,ε_h^rt ≔ z_h,ε_h^rt(max{1,‖z_h,ε_h^rt‖_L^∞(Ω;ℝ^d)})^-1∈ℛT^0_N(𝒯_h), which is still a sufficiently accurate approximation of the dual solution, as indicated by the numerical experiments, cf. Section <ref>.
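The reconstruction can be evaluated directly once a regularized, discrete primal solution is available. The following lines are a minimal sketch in legacy FEniCS (dolfin 2019.1.0 is assumed, as for the experiments below, and this is not the authors' implementation); the data correspond to the example with g = χ_B_1/2(0), α = 10 on Ω = (-1,1)^2 treated later, u_cr is merely a stand-in for the computed minimizer, and eps is a fixed stand-in for ε_h.

from dolfin import (RectangleMesh, Point, FunctionSpace, VectorFunctionSpace,
                    Expression, interpolate, project, SpatialCoordinate,
                    Constant, grad, dot, sqrt)

mesh = RectangleMesh(Point(-1.0, -1.0), Point(1.0, 1.0), 32, 32)
d, alpha = 2, 10.0
CR   = FunctionSpace(mesh, "CR", 1)
RT   = FunctionSpace(mesh, "RT", 1)
DG0  = FunctionSpace(mesh, "DG", 0)
DG0v = VectorFunctionSpace(mesh, "DG", 0)

eps  = Constant(1.0e-4)                                    # stand-in for eps_h ~ h^2
g_h  = project(Expression("x[0]*x[0]+x[1]*x[1] < 0.25 ? 1.0 : 0.0", degree=0), DG0)
u_cr = interpolate(Expression("x[0]*x[0]+x[1]*x[1] < 0.25 ? 0.6 : 0.0", degree=0), CR)
# u_cr is only a placeholder for the computed regularized, discrete primal solution

x, x_bar = SpatialCoordinate(mesh), project(SpatialCoordinate(mesh), DG0v)
w = (1.0 - eps) / sqrt(dot(grad(u_cr), grad(u_cr)) + eps*eps)    # f_eps'(t)/t at t = |grad_h u_cr|
z_expr = w * grad(u_cr) + (alpha / d) * (project(u_cr, DG0) - g_h) * (x - x_bar)
z_rt = project(z_expr, RT)   # for the true minimizer, z_expr lies in RT^0_N, so this
                             # projection only extracts its Raviart-Thomas coefficients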
ad (i). Using element-wise that f_ε_h=f_ε_h^**, the definition of the convex conjugate, cf. (<ref>), and the discrete integration-by-parts formula (<ref>), we find that
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h,ε_h^cr(v_h)=inf_v_h∈𝒮^1,cr_D(𝒯_h)f_ε_h^**(|∇_ h v_h|)_L^1(Ω)+α2Π_h v_h-g_h_L^2(Ω)^2
=
inf_v_h∈𝒮^1,cr_D(𝒯_h)sup_y_h∈ℒ^0(𝒯_h)^d-∫_Ωf_ε_h^*(|y_h |) dx+(y_h,∇_ h v_h)_Ω+α2Π_h v_h-g_h_L^2(Ω)^2
≥
inf_v_h∈𝒮^1,cr_D(𝒯_h)sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-( y_h,Π_h v_h)_Ω+α2Π_h v_h-g_h_L^2(Ω)^2
≥
sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-sup_v_h∈ℒ^0(𝒯_h)( y_h,v_h)_Ω-α2v_h-g_h_L^2(Ω)^2
=
sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-12α y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2
=
sup_y_h∈ℛT^0_N(𝒯_h)D_h,ε_h^rt(y_h) ,
which is the claimed discrete weak duality relation.
ad (ii). By Lemma <ref>, the minimality of u_h,ε_h^cr∈𝒮^1,cr(𝒯_h) for (<ref>), for every v_h∈𝒮^1,cr(𝒯_h), yields that
(f_ε_h'(|∇_ h u_h,ε_h^cr| )∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr|,∇_ h v_h)_Ω+α (Π_hu_h,ε_h^cr-g_h,Π_h v_h)_Ω=0 .
By definition, the discrete flux z_h,ε_h^rt∈ℒ^1(𝒯_h)^d, defined by (<ref>), satisfies the discrete convex optimality condition (<ref>) and (z_h,ε_h^rt|_T)=α (Π_hu_h,ε_h^cr-g_h)|_T in T for all T∈𝒯_h.
Choosing v_h=1∈𝒮^1,cr(𝒯_h) in (<ref>), we find that ∫_Ωα (Π_hu_h,ε_h^cr-g_h) dx=0.
Hence, since for Γ_D=∅ the divergence operator ℛT^0_N(𝒯_h)→ℒ^0(𝒯_h)/ℝ is surjective, there exists
y_h∈ℛT^0_N(𝒯_h) such that y_h=α (Π_hu_h,ε_h^cr-g_h) in ℒ^0(𝒯_h). Then, we have that ((z_h,ε_h^rt-y_h)|_T)=0 in T for all T∈𝒯_h, i.e., z_h,ε_h^rt-y_h∈ℒ^0(𝒯_h)^d. In addition, for every v_h∈𝒮^1,cr(𝒯_h), it holds
(Π_h y_h,∇_ h v_h)_Ω =-( y_h,Π_h v_h)_Ω
=-α (Π_hu_h,ε_h^cr-g_h,Π_h v_h)_Ω
=(f_ε_h'(|∇_ h u_h,ε_h^cr| )∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr|,∇_ h v_h)_Ω
=(Π_h z_h,ε_h^rt,∇_ h v_h)_Ω .
In other words, for every v_h∈𝒮^1,cr(𝒯_h), it holds
(y_h-z_h,ε_h^rt,∇_ h v_h)_Ω=(Π_h y_h-Π_h z_h,ε_h^rt,∇_ h v_h)_Ω=0 ,
i.e., y_h-z_h,ε_h^rt∈∇_ h(𝒮^1,cr_D(𝒯_h))^⊥. By the decomposition (<ref>), we have that ∇_ h(𝒮^1,cr_D(𝒯_h))^⊥=(|_ℛT^0_N(𝒯_h))⊆ℛT^0_N(𝒯_h).
As a result, it holds y_h-z_h,ε_h^rt∈ℛT^0_N(𝒯_h). Due to y_h∈ℛT^0_N(𝒯_h), we conclude that z_h,ε_h^rt∈ℛT^0_N(𝒯_h). In particular, now from
(z_h,ε_h^rt|_T)=α (Π_hu_h,ε_h^cr-g_h)|_T in T for all T∈𝒯_h, it follows the discrete optimality condition
(<ref>).
ad (iii). Using (<ref>), (<ref>), and the discrete integration-by-parts formula (<ref>), we find that
I_h,ε_h^cr(u_h,ε_h^cr) =
f_ε_h(|∇_ h u_h,ε_h^cr|)_L^1(Ω)+α2Π_h u_h,ε_h^cr-g_h_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx+(Π_h z_h,ε_h^rt,∇_ h u_h,ε_h^cr)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-( z_h,ε_h^rt,Π_hu_h,ε_h^cr)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-1α( z_h,ε_h^rt, z_h,ε_h^rt+α g_h)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-12α z_h,ε_h^rt+α g_h_L^2(Ω)^2
=D_h,ε_h^rt(z_h,ε_h^rt) ,
which is the claimed discrete strong duality relation and, thus, appealing to the discrete weak duality relation (<ref>), proves the maximality of z_h,ε_h^rt∈ℛT^0_N(𝒯_h) for (<ref>).
The following proposition describes the approximative behavior the regularized, discretized ROF problem towards the (unregularized) discretized ROF problem, given uniform convergence (to zero) of the element-wise constant regularization parameter ε_h∈ℒ^0(𝒯_h). In what follows, in the convergence ε_h_L^∞(Ω)→ 0,
the average mesh-size h>0 is always fixed.2mm
If ε_h_L^∞(Ω)<1, then the following statements apply:
(i) It holds α2Π_h u_h,ε_h^cr-Π_hu_h^cr_L^2(Ω)^2
≤ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (α2 g_L^2(Ω)^2+2 |Ω|).
(ii) z_h,ε_h^rt→α (Π_hu_h^cr-g_h) in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
(iii) f_ε_h^*(|Π_h z_h,ε_h^rt| )→ 0 in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
(iv) f_ε_h (|∇_h u_h,ε_h^cr|)→∇_h u_h^cr in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
ad (i). Using both the strong convexity of I_h^cr𝒮^1,cr(𝒯_h)→ℝ∪{+∞} and Lemma <ref> (ii),
we obtain
α2Π_h u_h,ε_h^cr-Π_hu_h^cr_L^2(Ω)^2 ≤ I_h^cr(u_h,ε_h^cr)-I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) I_h,ε_h^cr(u_h,ε_h^cr)+ε_h_L^∞(Ω)^21-ε_h_L^∞(Ω)|Ω| -I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) I_h,ε_h^cr(u_h^cr)+ε_h_L^∞(Ω)^21-ε_h_L^∞(Ω)|Ω|-I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) ( I_h^cr(u_h^cr)
+2 ε_h_L^∞(Ω) |Ω|)-I_h^cr(u_h^cr)
=
ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (I_h^cr(u_h^cr)+2 |Ω|) .
Since, by the minimality of u_h^cr∈𝒮^1,cr(𝒯_h) for (<ref>) and the L^2-stability of Π_h L^2(Ω)→ℒ^0(𝒯_h), it holds
I_h^cr(u_h^cr)≤ I_h^cr(0)=α2g_h_L^2(Ω)^2≤α2g_L^2(Ω)^2 ,
from (<ref>) we conclude the claimed error estimate.
ad (ii). From claim (i), it follows that
Π_h u_h,ε_h^cr→Π_hu_h^cr in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
Thus, using (<ref>), from z_h,ε_h^rt=α ( Π_h u_h,ε_h^cr-g_h) in ℒ^0(𝒯_h), cf. (<ref>), we conclude that
z_h,ε_h^rt→α (Π_hu_h^cr-g_h) ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
ad (iii). Due to Π_h z_h,ε_h^rt=f_ε_h'(|∇_h u_h,ε_h^cr|)/|∇_h u_h,ε_h^cr|∇_h u_h,ε_h^cr and Lemma <ref> (iii), we have that
|Π_h z_h,ε_h^rt| =| f_ε_h'(|∇_h u_h,ε_h^cr|)|≤ 1-ε_h a.e. in Ω .
Therefore, using Lemma <ref> (iv) together with (<ref>), we conclude that
. | f_ε_h^*(|Π_h z_h,ε_h^rt| )| =
ε_h ((1-ε_h)^2-|Π_h z_h,ε_h^rt| ^2)^1/2
≤ε_h (1-ε_h)≤ε_h
} a.e. in Ω ,
which implies that f_ε_h^*(|Π_h z_h,ε_h^rt| )→ 0 in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
ad (iv). Due to (<ref>), (u_h,ε_h^cr)_ε_h_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h) is bounded. The finite-dimensionality of 𝒮^1,cr(𝒯_h) and the Bolzano–Weierstraß theorem yield a subsequence (u_h,ε_h'^cr)_ε_h'_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h) and a function ũ_h^cr∈𝒮^1,cr(𝒯_h) such that
u_h,ε_h'^cr→ũ_h^cr in 𝒮^1,cr(𝒯_h) (ε_h'_L^∞(Ω)→ 0) .
From (<ref>) it is readily derived that
f_ε_h' (|∇_h u_h,ε_h'^cr|)→∇_hũ_h^cr in ℒ^0(𝒯_h) (ε_h'_L^∞(Ω)→ 0) .
Consequently, for every v_h∈𝒮^1,cr(𝒯_h), we find that
I_h^cr(ũ_h^cr) =lim_ε_h'_L^∞(Ω)→ 0I_h,ε_h'^cr(u_h,ε_h'^cr)
≤lim_ε_h'_L^∞(Ω)→ 0I_h,ε_h'^cr(v_h)
=I_h^cr(v_h) .
Thus, due to the uniqueness of u_h^cr∈𝒮^1,cr(𝒯_h) as a minimizer of (<ref>), we get ũ_h^cr=u_h^cr in 𝒮^1,cr(𝒯_h). Since this argumentation remains valid for each subsequence of (u_h,ε_h^cr)_ε_h_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h), the standard subsequence principle implies that f_ε_h (|∇_h u_h,ε_h^cr|)→∇_h u_h^cr in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
The approximation properties of the regularized, discrete ROF model (<ref>) (and (<ref>)) towards the (unregularized) discrete ROF model (<ref>) (and (<ref>)) enable us to transfer the discrete convex duality relations established in Proposition <ref>, which apply mainly due to the differentiability of the regularized, discrete ROF model, to the non-differentiable discrete ROF model. To the best of the authors' knowledge, the following discrete convex duality relations for the (unregularized) discrete ROF model (<ref>)
seem to be new.7mm
There exists a vector field z_h^rt∈ℛT^0_N(𝒯_h) with |Π_h z_h^rt|≤ 1 a.e. in Ω and the following properties:
(i) For a not relabeled subsequence, it holds
z_h,ε_h^rt→ z_h^rt in ℛT^0_N(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
(ii) There hold the following discrete convex optimality relations:
z_h^rt =α (Π_h u_h^cr-g_h) in ℒ^0(𝒯_h) ,
Π_hz_h^rt·∇_h u_h^cr =|∇_h u_h^cr| in ℒ^0(𝒯_h) .
(iii) The discrete flux z_h^rt∈ℛT^0_N(𝒯_h) is maximal for D_h^rtℛT^0_N(𝒯_h)→ℝ and discrete strong duality applies, i.e.,
I_h^cr(u_h^cr)=D_h^rt(z_h^rt) .
ad (i). Due to Proposition <ref> (ii) and (<ref>), the sequence (z_h,ε_h^rt)_ε_h_L^∞(Ω)→ 0⊆ℛT^0_N(𝒯_h) is bounded. Thus, by the finite-dimensionality of ℛT^0_N(𝒯_h), the Bolzano–Weierstraß theorem yields a not relabeled subsequence and a vector field z_h^rt∈ℛT^0_N(𝒯_h) such that
z_h,ε_h^rt→ z_h^rt in ℛT^0_N(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
Due to the continuity of Π_h L^1(Ω)→ℒ^0(𝒯_h) and ℛT^0_N(𝒯_h)↪ L^1(Ω), from (<ref>), we obtain
Π_h z_h,ε_h^rt→Π_h z_h^rt in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
From |Π_h z_h,ε_h^rt|≤ 1-ε_h a.e. in Ω, cf. (<ref>), and (<ref>), we obtain |Π_h z_h^rt|≤ 1 a.e. in Ω, i.e.,
I_K_1(0)(Π_h z_h^rt)=0 .
ad (ii). Using Proposition <ref>, (<ref>), and (<ref>), we find that
. z_h^rt =lim_ε_h_L^∞(Ω)→ 0 z_h,ε_h^rt
=lim_ε_h_L^∞(Ω)→ 0α (Π_hu_h,ε_h^cr-g_h)
=α (Π_h u_h^cr-g_h) } a.e. in Ω ,
as well as.Π_h z_h^rt·∇_h u_h^cr =lim_ε_h_L^∞(Ω)→ 0Π_h z_h,ε_h^rt·∇_h u_h,ε_h^cr
=lim_ε_h_L^∞(Ω)→ 0f_ε_h^*(|Π_h z_h,ε_h^rt| )+f_ε_h(|∇_h u_h,ε_h^cr|)
=|∇_h u_h^cr| } a.e. in Ω ,
i.e., the claimed discrete convex optimality conditions.
ad (iii).
Using Proposition <ref> and (<ref>), we find that
I_h^cr(u_h^cr) =lim_ε_h_L^∞(Ω)→ 0I_h,ε_h^cr(u_h,ε_h^cr)
=lim_ε_h_L^∞(Ω)→ 0D_h,ε_h^rt(z_h,ε_h^rt)
=D_h^rt(z_h^rt) ,
i.e., the claimed discrete strong duality relation.
§ NUMERICAL EXPERIMENTS
In this section, we review the theoretical findings of Section <ref> via numerical experiments. To compare approximations to an exact solution, we impose Dirichlet boundary conditions on Γ_D=∂Ω, though an existence theory is difficult to establish, in general. However, the concepts derived in Section <ref> carry over verbatim with Γ_N=∅ provided that the existence of a minimizer is given. All experiments were conducted deploying the finite element software package FEniCS (version 2019.1.0), cf. <cit.>. All graphics were generated using the library (version 3.5.1), cf. <cit.>, and the library (version 2023.4.4), cf. <cit.>.
§.§ Implementation details regarding the optimization procedure
All computations are based on the regularized, discrete ROF problem (<ref>). This is motivated by the fact that appealing to Proposition <ref> (i), in order to bound the error u-Π_h u_h^cr_L^2(Ω), it suffices to determine the error u-Π_h u_h,ε_h^cr_L^2(Ω). The iterative minimization of (<ref>) is realized using a semi-implicit discretized L^2-gradient flow from <cit.> (see also <cit.>) modified with a residual stopping criterion guaranteeing the necessary accuracy in the optimization procedure.
Appealing to <cit.>, the iterates u_h^k∈𝒮^1,cr_D(𝒯_h), k∈ℕ, the residuals r_h^k∈𝒮^1,cr_D(𝒯_h), k∈ℕ, generated by Algorithm <ref>, and the minimizer u_h,ε_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>) satisfy
‖u_h,ε_h^cr-u_h^k‖_L^2(Ω)≤ 2 ‖r_h^k‖_L^2(Ω) .
In consequence, if we choose as a stopping criterion that ‖r_h^k^*‖_L^2(Ω)≤ε_stop^h ≔ c_stop h for k^*∈ℕ, where c_stop>0 does not depend on h>0, then, owing to Proposition <ref> (i) and (<ref>), we have that
‖Π_h(u_h^cr-u_h^k^*)‖_L^2(Ω)^2≤ (‖ε_h‖_L^∞(Ω)/(1-‖ε_h‖_L^∞(Ω))) (2 ‖g‖_L^2(Ω)^2+(8/α) |Ω|)+8 c_stop^2 h^2 .
If ‖ε_h‖_L^∞(Ω)≤ c_reg h^2, where c_reg∈ (0,1), then, we arrive at ‖Π_h(u_h^cr-u_h^k^*)‖_L^2(Ω)=𝒪(h).
Thus, to bound the error ‖u-Π_hu_h^cr‖_L^2(Ω) experimentally, it is sufficient to compute ‖u-Π_hu_h^k^*‖_L^2(Ω).
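For concreteness, one step of the semi-implicit L^2-gradient flow can be realized as follows: given u_h^{k-1}, the update u_h^k solves, for all v_h∈𝒮^1,cr_D(𝒯_h),
(d_τ u_h^k, v_h)_Ω + (f_ε_h'(|∇_h u_h^{k-1}|)/|∇_h u_h^{k-1}| ∇_h u_h^k, ∇_h v_h)_Ω + α (Π_h u_h^k - g_h, Π_h v_h)_Ω = 0 .
The following lines are a minimal sketch in legacy FEniCS (dolfin 2019.1.0 is assumed, as above, and this is not the authors' implementation); the data correspond to the first example below (g = χ_B_1/2(0), α = 10, Ω = (-1,1)^2, τ = 1.0, u_h^0 = 0), and eps is a fixed stand-in for ε_h.

from dolfin import (RectangleMesh, Point, FunctionSpace, TrialFunction, TestFunction,
                    Function, Expression, project, DirichletBC, Constant,
                    grad, dot, sqrt, dx, solve)

mesh = RectangleMesh(Point(-1.0, -1.0), Point(1.0, 1.0), 32, 32)
CR, DG0 = FunctionSpace(mesh, "CR", 1), FunctionSpace(mesh, "DG", 0)
u, v    = TrialFunction(CR), TestFunction(CR)

tau, alpha, eps = 1.0, 10.0, Constant(1.0e-4)
g_h   = project(Expression("x[0]*x[0]+x[1]*x[1] < 0.25 ? 1.0 : 0.0", degree=0), DG0)
u_old = Function(CR)                          # u_h^{k-1}, here initialized with u_h^0 = 0
bc    = DirichletBC(CR, 0.0, "on_boundary")   # Gamma_D = boundary of Omega

dxm = dx(metadata={"quadrature_degree": 1})   # one-point (barycenter) rule; under this
                                              # assumption it realizes Pi_h in the fidelity term
w = (1.0 - eps) / sqrt(dot(grad(u_old), grad(u_old)) + eps*eps)   # f_eps'(t)/t at |grad_h u_h^{k-1}|
a = u*v/tau*dx + w*dot(grad(u), grad(v))*dx + alpha*u*v*dxm
L = u_old*v/tau*dx + alpha*g_h*v*dxm
u_new = Function(CR)
solve(a == L, u_new, bc)                      # one semi-implicit step u_h^{k-1} -> u_h^k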
The following proposition proves the well-posedness, stability, and convergence of Algorithm <ref>.
Let the assumptions of Algorithm <ref> be satisfied and let ε_h∈ℒ^0(𝒯_h) such that ε_h>0 a.e. in Ω and ε_h_L^∞(Ω)<1. Then, the following statements apply:
(i) Algorithm <ref> is well-posed, i.e., for every k∈ℕ, given the most-recent iterate u_h^k-1∈𝒮^1,cr_D(𝒯_h), there exists a unique iterate u_h^k∈𝒮^1,cr_D(𝒯_h) solving (<ref>).
(ii) Algorithm <ref> is unconditionally strongly stable, i.e., for every L∈ℕ, it holds
I_h,ε_h^cr(u_h^L)+τ∑_k=1^Ld_τ u_h^k_L^2(Ω)^2≤ I_h,ε_h^cr(u_h^0) .
(iii) Algorithm <ref> terminates after a finite number of steps, i.e., there exists k^*∈ℕ such that ‖r_h^k^*‖_L^2(Ω)≤ε_stop^h.
The proof of Proposition <ref> (ii) is essentially based on the following inequality.
For every ε∈ (0,1) and a,b∈ℝ^d, it holds
(f_ε'(| a|)/| a|) b·(b-a)≥ f_ε(| b|)-f_ε(| a|)+(1/2)(f_ε'(| a|)/| a|)| b-a|^2 .
Follows from <cit.>, since f_ε∈ C^1(ℝ_≥ 0) and (t↦ f_ε'(t)/t)∈ C^0(ℝ_≥ 0) is positive and non-decreasing for all ε∈ (0,1).
ad (i). Since f_ε'(t)/t≥ 0 for all ε∈ (0,1) and t≥ 0, the of Algorithm <ref> is a direct consequence of the Lax–Milgram lemma.
ad (ii).
Let L∈ℕ be arbitrary. Then,
for every k∈{1,…,L}, choosing v_h=d_τ u_h^k∈𝒮^1,cr_D(𝒯_h) in (<ref>), we find that
d_τ u_h^k_L^2(Ω)^2+(f_h,ε_h'(|∇_hu_h^k-1| )|∇_hu_h^k-1|∇_hu_h^k,∇_h d_τ u_h^k)_Ω+α (Π_hu_h^k-g_h,Π_h d_τ u_h^k)_Ω .
Appealing to Lemma <ref> with a=∇_hu_h^k-1|_T∈ℝ^d and b=∇_h u_h^k|_T∈ℝ^d applied for all T∈𝒯_h, for every k∈{1,…,L}, we have that
f_h,ε_h'(|∇_hu_h^k-1| )|∇_hu_h^k-1|∇_hu_h^k·∇_h d_τ u_h^k≥ d_τ f_h,ε_h(|∇_hu_h^k| ) a.e. in Ω .
In addition, since d_τ g_h=0, for every k∈{1,…,L}, we have that
(Π_hu_h^k-g_h)Π_h d_τ u_h^k =(Π_hu_h^k-g_h)d_τ(Π_h u_h^k-g_h)
=d_τ2|Π_hu_h^k-g_h|^2 .
Using (<ref>) and (<ref>) in (<ref>), for every k∈{1,…,L},
we arrive at
d_τ u_h^k_L^2(Ω)^2+d_τ I_h,ε_h^cr(u_h^k)≤ 0 .
Summation of (<ref>) with respect to k∈{1,…,L}, using ∑_k=1^Ld_τ I_h,ε_h^cr(u_h^k)=I_h,ε_h^cr(u_h^L)-I_h,ε_h^cr(u_h^0), yields the claimed stability estimate.
ad (iii). Due to (i), we have that d_τ u_h^k_L^2(Ω)^2→ 0 (k→∞), i.e., by the finite-dimensionality of 𝒮^1,cr_D(𝒯_h) and the equivalence of norms, it holds
u_h^k-u_h^k-1→ 0 in 𝒮^1,cr_D(𝒯_h) (k→∞) .
In addition, due to (i), we have that I_h,ε_h^cr(u_h^k)≤ I_h,ε_h^cr(u_h^0), which, using Lemma <ref>, implies that
(u_h^k)_k∈ℕ⊆𝒮^1,cr_D(𝒯_h) is bounded. Due to the finite-dimensionality of 𝒮^1,cr_D(𝒯_h), the -straß theorem yields a subsequence (u_h^k_l)_l∈ℕ⊆𝒮^1,cr_D(𝒯_h) and a function ũ_h∈𝒮^1,cr_D(𝒯_h) such that
u_h^k_l→ũ_h in 𝒮^1,cr_D(𝒯_h) (l→∞) .
Due to (<ref>), from (<ref>), we deduce that
u_h^k_l-1→ũ_h in 𝒮^1,cr_D(𝒯_h) (l→∞) .
As a result, using (<ref>)–(<ref>), by passing for l→∞ in (<ref>), for every v_h∈𝒮^1,cr_D(𝒯_h), we obtain
(f_h,ε_h'(|∇_hũ_h| )|∇_hũ_h|∇_hũ_h ,∇_hv_h )_Ω+α (Π_hũ_h-g_h,Π_hv_h)_Ω=0 ,
and, by uniqueness, ũ_h=u_h,ε_h^cr.
Hence, using (<ref>) and (<ref>), for every v_h∈𝒮^1,cr_D(𝒯_h), we obtain
(r_h^k_l,v_h)_Ω =(f_h,ε_h'(|∇_hu_h^k_l| )|∇_hu_h^k_l|∇_hu_h^k_l,∇_hv_h )_Ω+α (Π_hu_h^k_l-g_h,Π_hv_h)_Ω
→(f_h,ε_h'(|∇_hu_h,ε_h^cr| )|∇_hu_h,ε_h^cr|∇_hu_h,ε_h^cr ,∇_hv_h )_Ω+α (Π_hu_h,ε_h^cr-g_h,Π_hv_h)_Ω=0 (l→∞) ,
i.e., r_h^k_l⇀ 0 in 𝒮^1,cr_D(𝒯_h) (l→∞), and, thus, by the finite-dimensionality of 𝒮^1,cr_D(𝒯_h), r_h^k_l→ 0 in 𝒮^1,cr_D(𝒯_h) (l→∞), which implies that r_h^k_l→ 0 in L^2(Ω) (l→∞). As this remains valid for each subsequence of (r_h^k)_k∈ℕ⊆𝒮^1,cr_D(𝒯_h), the standard convergence principle yields that r_h^k→ 0 in L^2(Ω) (k→∞). In particular, there exists k^*∈ℕ such that r_h^k^*_L^2(Ω)≤ε^h_stop.
§.§ Implementation details regarding the adaptive mesh refinement procedure
Before we present numerical experiments, we briefly outline the details of the implementations regarding the adaptive mesh refinement procedure.
In general, we follow the adaptive algorithm, cf. <cit.>:
(i) The regularized, discrete primal solution u_i^cr∈𝒮^1,cr_D(𝒯_i) in step (Solve'Solve') is computed using
the semi-implicit discretized L^2-gradient flow, cf. Algorithm <ref>, for fixed step-size τ=1.0, stopping criterion ε_stop^h_ih_i/√(20), and initial condition u_i^0=0∈𝒮_D^1,cr(𝒯_i). Appealing to Proposition <ref> (ii), Algorithm <ref> is unconditionally strongly stable, so that employing the fixed step-size τ=1.0 is a reasonable choice.
The stopping criterion ε_stop^h_ih_i/√(20) ensures (cf. the argumentation below Algorithm <ref>) that the final iterate u_h_i^k^*∈𝒮^1,cr_D(𝒯_i) is a sufficiently accurate approximation of the discrete primal solution, in the sense
that its accuracy does not violate the best possible linear convergence rate, cf. Remark <ref> (below).
(ii) As an approximation u_i^cr∈𝒮^1,cr_D(𝒯_i) with u_i^cr=0 on ∂Ω, we employ
u_i^cr
u_i^cr if u_i^cr=0 on ∂Ω ,
I_k^∂ u_i^cr else ,
where the operator I_i^∂𝒮^1,cr(𝒯_i)→𝒮^1,cr_D(𝒯_i) for every v_h_i∈𝒮^1,cr(𝒯_i) is defined by
I_i^∂v_i∑_S∈𝒮_h_i;S∩∂Ω=∅v_h_i(x_S) φ_S .
(iii) Note that the particular choices in (ii) are only due to the imposed homogeneous Dirichlet boundary condition. In the case Γ_D=∅, the choice u_i^cru_i^cr∈𝒮^1,cr(𝒯_i) is always admissible.
(iv) If not otherwise specified, we employ the parameter θ=1/2 in (Estimate'Mark').
(v) To find the set ℳ_i⊆𝒯_i in step (Mark'Mark'), we deploy the Dörfler marking strategy, cf. <cit.>; a minimal sketch is given after this list.
(vi) The (minimal) conforming refinement of 𝒯_i with respect to ℳ_i in step (Refine'Refine') is obtained by deploying the red-green-blue-refinement algorithm, cf. <cit.>.
(vii) For the construction of the adaptively modified regularization parameter ε_i∈ℒ^0(𝒯_i) in step (Refine'Refine'), we employ separately the following two cases:
ε_iαd|Π_h_i-1 u_i-1^cr-g_h_i| h_i^2 + h_i^3 (locallocal) ,
h_i^2 (globalglobal) .
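The Dörfler marking in step (Mark'Mark') admits a compact realization; the following NumPy lines are an illustrative sketch (the function name and the sample indicator values are made up, not taken from the implementation used here): given the element-wise indicators η_T^2, a set ℳ_i of (near-)minimal cardinality with ∑_{T∈ℳ_i} η_T^2 ≥ θ ∑_{T∈𝒯_i} η_T^2 is selected.

import numpy as np

def doerfler_marking(eta_sq, theta=0.5):
    # sort the element indicators in decreasing order and take the shortest prefix
    # whose cumulative sum reaches the bulk fraction theta of the total
    order = np.argsort(eta_sq)[::-1]
    cumsum = np.cumsum(eta_sq[order])
    k = int(np.searchsorted(cumsum, theta * cumsum[-1])) + 1
    return order[:k]

eta_sq = np.array([0.04, 0.25, 0.01, 0.16, 0.09])   # example values of eta_{T,CR}^2
print(doerfler_marking(eta_sq, theta=0.5))           # indices of the marked elements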
§.§ Example with Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1,1)^d, Γ_D=∂Ω, d∈{2,3}, r=1/2, α =10, and g=χ_B_r^d(0)∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(;Ω)∩ L^∞(Ω;ℝ^d), for a.e. x∈Ω are defined by
u(x) ≔ (1-d/(α r)) g(x) ,
z(x) ≔ -x/r  if | x| < r ,
z(x) ≔ -r x/| x|^d  if | x|≥ r .
Note that z∈ W^1,∞(Ω;ℝ^d), so that, appealing to <cit.>, uniform mesh-refinement (i.e., θ=1 in Algorithm <ref>) is expected to yield the quasi-optimal convergence rate 𝒪(h^1/2).
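For the error study below, the exact pair can be transcribed directly into FEniCS expressions (a sketch for d = 2, r = 1/2, α = 10, where 1 - d/(α r) = 0.6; the interpolation degree for z is an arbitrary choice):

from dolfin import Expression

u_exact = Expression("x[0]*x[0] + x[1]*x[1] < 0.25 ? 0.6 : 0.0", degree=0)
z_exact = Expression(("x[0]*x[0] + x[1]*x[1] < 0.25 ? -x[0]/0.5 : -0.5*x[0]/(x[0]*x[0] + x[1]*x[1])",
                      "x[0]*x[0] + x[1]*x[1] < 0.25 ? -x[1]/0.5 : -0.5*x[1]/(x[0]*x[0] + x[1]*x[1])"),
                     degree=4)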
2D Case.
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (locallocal), or the global choice ε_i h_i^2, cf. (globalglobal). For both choices,
a refinement towards the circle ∂ B_r^2(0), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the projected regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that using the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), the refinement is more concentrated at the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>). However, in Figure <ref> it is seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition,
Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref> (below). In addition, Figure <ref> indicates that the primal-dual error estimator is reliable and efficient with respect to the error quantity
ρ̃^2(u_i^cr,z_i^rt) ≔ α/2 ‖ u_i^cr-u‖^2_L^2(Ω) + 1/(2α) ‖ z_i^rt- z‖^2_L^2(Ω) , i∈ℕ ,
which, appealing to Remark <ref> (iv), is a lower bound for the sum of the optimal convexity measures.
3D Case. The initial triangulation 𝒯_0 of Algorithm <ref> consists of 27 cubes each divided into six tetrahedrons. Using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i ≔ h_i^2, cf. (global), we report similar results to the 2D case: for both choices,
a refinement towards the sphere ∂ B_r^3(0), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported, which can be seen
in Figure <ref>, where the regularized, discrete primal solution u_10^cr∈𝒮^1,cr_D(𝒯_10) and
the (local) L^2-projection onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) are plotted.
Figure <ref> shows that the adaptive Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref> (below).
In one dimension, the L^2-best-approximation error of the sign function on quasi-uniform
partitions is of order 𝒪(h^1/2), cf. <cit.>. More generally, using that the
intersection BV(Ω) ∩ L^∞(Ω) is contained in
fractional Sobolev spaces W^s,2(Ω) for all s<1/2,
cf. <cit.>, one cannot expect a higher convergence rate
than 𝒪(h^1/2) for generic, essentially bounded functions of bounded variation. For triangulations that are graded towards the jump
sets of certain discontinuous functions with a quadratic grading
strength, i.e., the local mesh-size satisfies
h_T ∼ h^2 for all elements T∈𝒯_h at the discontinuity set, with the average mesh-size h∼(𝒩_h)^-1/d, a linear
convergence rate 𝒪(h) has been established in <cit.>. Since our
error estimates not only bound squared L^2-errors but also control
squares of L^p-norms of non-linear error quantities involving derivatives, a higher convergence rate than linear cannot be expected.
In view of these aspects, the linear convergence rate 𝒪(h) for
the devised adaptive strategy is quasi-optimal.
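The experimental convergence rates referred to here and reported in the figures can, for instance, be computed as the least-squares slope of log(error) against log(h), with the average mesh-size h∼(𝒩_h)^-1/d; the following small Python helper (ours, not taken from the implementation) does exactly that.

import numpy as np

def experimental_rate(n_dofs, errors, d=2):
    # Average mesh sizes from the numbers of degrees of freedom, h ~ N^(-1/d),
    # then the least-squares slope of log(error) against log(h).
    h = np.asarray(n_dofs, dtype=float) ** (-1.0 / d)
    slope, _ = np.polyfit(np.log(h), np.log(np.asarray(errors, dtype=float)), 1)
    return slope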
§.§ Example without Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1.5,1.5)^2, Γ_D=∂Ω, r=1/2, α=10, and g=χ_B_r^2(re_1)-χ_B_r^2(-re_1)∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2), for a.e. x∈Ω, are defined by
u(x) ≔ (1-2/(α r)) g(x) ,
z(x) ≔
∓(x∓ r e_1)/r if | x∓ r e_1| < r ,
∓ r(x∓ r e_1)/| x∓ r e_1|^2 if | x∓ r e_1|≥ r .
Note that z∉ W^1,∞(Ω;ℝ^2), so that we cannot refer to <cit.> in order to expect uniform mesh-refinement to yield the convergence rate 𝒪(h^1/2).
However, since z|_Ω^±∈ W^1,∞(Ω^±;ℝ^2), where Ω^+ ≔ Ω∩ (ℝ_>0×ℝ) and Ω^- ≔ Ω∩ (ℝ_<0×ℝ), and since the coarsest triangulation 𝒯_0 of Figure <ref> and, hence, also all resulting refinements 𝒯_i, i∈ℕ, of 𝒯_0 resolve J_z ≔ Ω∩ ({0}×ℝ), i.e., the jump set of
z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2), in the sense that J_z⊆⋃_S∈𝒮_h_iS for all i∈ℕ,
referring to <cit.>, we can expect uniform mesh-refinement to yield the convergence rate 𝒪(h^1/2).
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i ≔ h_i^2, cf. (global). For both choices,
a refinement towards ∂ B_r^2(re_1)∪∂ B_r^2(-re_1), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the scaled regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that, employing the adaptively modified regularization parameter, cf. (local), the refinement is more concentrated at the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>). However, in Figure <ref> it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref>. In addition, Figure <ref> indicates that the primal-dual error estimator is both reliable and efficient with respect to the error quantity (<ref>).
§.§ Example with Lipschitz continuous primal solution and Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1.5,1.5)^2, Γ_D=∂Ω, α=10, s(t) ≔ √(3t) and r(t) ≔ 1/2√(1-4t) for t=0.1, and g∈ BV(Ω)∩ L^∞(Ω), for a.e. x∈Ω, be defined by
g(x) ≔
1 +2-α(s(t)^2+t)/s(t) if | x|≤ s(t) ,
1 +1-α(| x|^2+t)/| x| if s(t)<| x|≤ r(t) ,
0 else .
Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2) with | z|≤ 1 a.e. in Ω, for a.e. x∈Ω, are defined by
u(x) ≔
1 - s(t)^2+t/s(t) if | x|≤ s(t) ,
1 -| x|^2+t/| x| if s(t)<| x|≤ r(t) ,
0 else ,
z(x) ≔
-x/s(t) if | x|≤ s(t) ,
-x/| x| if s(t)<| x|≤ r(t) ,
-x r(t)/| x|^2 else .
Note that z∈W^1,∞(Ω;ℝ^2), so that, appealing to <cit.>, uniform mesh-refinement is expected to yield the quasi-optimal convergence rate 𝒪(h^1/2).
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,5,10,15}, generated by Algorithm <ref>
employing either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or ε_i ≔ h_i^2, cf. (global). For both choices,
a refinement mainly towards and on the set {|∇ u| >0} is reported.
This is also seen in Figure <ref>, where the regularized, discrete primal solution u_10^cr∈𝒮^1,cr_D(𝒯_10), the (local)
L^2-projection onto element-wise constant functions
Π_h_10 u_10^cr∈ℒ^0(𝒯_10), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) and of the scaled regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) are plotted. Figure <ref> shows that, employing the adaptively modified regularization parameter, cf. (local), the refinement takes place at and on the set {|∇ u| >0}. However, in Figure <ref>, again, it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref>. In addition, Figure <ref> indicates that the primal-dual error estimator is both reliable and efficient with respect to the error quantity (<ref>).
§.§ Example without Dirichlet boundary condition and without exact solution
We examine an example from <cit.>. In this example, we let Ω=(-1,1)^2, r=1/2, Γ_D=∅, α =100, and g=χ_[-r,r]^2∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution
and the dual solutions are not known. However, appealing to <cit.>, given the regularity of g∈ BV(Ω)∩ L^∞(Ω),
we can expect the convergence rate 𝒪(h^1/4) using uniform mesh refinement.
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i ≔ h_i^2, cf. (global). For both choices,
a refinement towards the square ∂ [-r,r]^2, i.e., the jump set J_g of the data g∈ BV(Ω)∩ L^∞(Ω) is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the projected regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that using the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), the refinement is, again, more concentrated at the jump set J_g of the data g∈ BV(Ω)∩ L^∞(Ω). However, in Figure <ref> it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition,
Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/4) predicted by <cit.> for uniform mesh-refinement to the value 𝒪(h^2/5). This, on the one hand, confirms the optimality of the a priori error estimates established in <cit.> and, on the other hand, appealing to <cit.>, lets us expect that there exists no Lipschitz continuous dual solution to the given data g=χ_[-r,r]^2∈ BV(Ω)∩ L^∞(Ω). The reported reduced error decay of 𝒪(h^2/5) compared to <cit.>, where an error decay of 𝒪(h^1/2) is reported, might only be pre-asymptotic and due to slight accuracy losses resulting from the global scaling step. This might be due to potential singularities of a dual solution located at the corners of the square ∂ [-r,r]^2, as indicated in Figure <ref>. Therefore, it is possible that the error decay 𝒪(h^1/2) of <cit.> is only reached after surpassing a potential pre-asymptotic regime.
§.§ Numerical experiments with application to image processing
In order to benchmark the performance of the proposed numerical scheme (cf. Algorithm <ref> and Algorithm <ref>)
in a problem related to image processing, we examine a standard example from the field of image processing (cf. Section <ref>) and a new example (cf. Section <ref>).
§.§.§ The Cameraman image
We examine the cameraman image, which in a similar context has been considered in <cit.>. In this example,
we let Ω ≔ (0,1)^2, Γ_D=∅, α=1e+4, and g∈ BV(Ω)∩ L^∞(Ω) a piece-wise constant function taking its values in the interval [0,1], representing the cameraman image on a uniform triangulation with 66,049
nodes, cf. Figure <ref>. The adaptive algorithm (cf. Algorithm <ref>), employed as a coarsening strategy, reduces
the number of nodes within 30 iteration steps to 25,059 nodes, which corresponds to 38.0% of the initial number of nodes and results in a squared L^2-error of ‖ u_30^cr-g‖_L^2(Ω)^2≈ 2.211e-3. The resulting coarsened image, represented by u_30^cr∈𝒮^1,cr(𝒯_30), is shown in Figure <ref>. The underlying grid 𝒯_30 shown in Figure <ref> reveals the expected coarsening of the triangulation away from the edges.
§.§.§ The Merle image
We examine an image of Merle, the male cat of the second author. In this example,
we let Ω ≔ (0,1)^2, Γ_D=∅, α=1e+4, and
g∈ BV(Ω)∩ L^∞(Ω) a piece-wise constant function taking its values in the interval [0,1], representing the Merle image on a uniform triangulation with 140,625
nodes, cf. Figure <ref>. The adaptive algorithm (cf. Algorithm <ref>), employed as a coarsening strategy, reduces
the number of nodes within 30 iteration steps to 41,749 nodes, which is 30.0% of the initial number of nodes and results in a squared L^2-error of ‖ u_30^cr-g‖_L^2(Ω)^2≈ 2.162e-3. The resulting coarsened image, represented by u_30^cr∈𝒮^1,cr(𝒯_30), is shown in Figure <ref>. The underlying grid 𝒯_30 shown in Figure <ref> reveals the expected coarsening of the triangulation away from the edges.
AO00
M. Ainsworth and J. T.
Oden, A posteriori error estimation in finite element
analysis, Pure and Applied Mathematics (New York), Wiley-Interscience
[John Wiley & Sons], New York, 2000.
10.1002/9781118032824.
Bar12
S. Bartels, Total variation minimization with finite
elements: convergence and iterative solution, SIAM J. Numer. Anal.
50 no. 3 (2012), 1162–1180.
10.1137/11083277X.
Bar15
S. Bartels, Numerical methods for nonlinear
partial differential equations, Springer Series in Computational
Mathematics 47, Springer, Cham, 2015.
10.1007/978-3-319-13797-1.
Bar21
S. Bartels, Nonconforming discretizations of convex
minimization problems and precise relations to mixed methods, Comput.
Math. Appl. 93 (2021), 214–229.
10.1016/j.camwa.2021.04.014.
BDN18
S. Bartels, L. Diening, and
R. H. Nochetto, Unconditional stability of
semi-implicit discretizations of singular flows, SIAM J. Numer. Anal.
56 no. 3 (2018), 1896–1914.
10.1137/17M1159166.
BKROF22
S. Bartels and
A. Kaltenbach, Error estimates for total-variation
regularized minimization problems with singular dual solutions, Numer.
Math. 152 no. 4 (2022), 881–906.
10.1007/s00211-022-01324-w.
BK22Obstacle
S. Bartels and
A. Kaltenbach, Error analysis for a
Crouzeix-Raviart approximation of the obstacle problem, 2023.
10.48550/ARXIV.2302.01646.
BM20
S. Bartels and
M. Milicevic, Primal-dual gap estimators for a
posteriori error analysis of nonsmooth minimization problems, ESAIM
Math. Model. Numer. Anal. 54 no. 5 (2020), 1635–1660.
10.1051/m2an/2019074.
BNS15
S. Bartels, R. H. Nochetto,
and A. J. Salgado, A total variation diminishing
interpolation operator and applications, Math. Comp. 84
no. 296 (2015), 2569–2587. 10.1090/mcom/2942.
BTW21
S. Bartels, R. Tovey, and
F. Wassmer, Singular solutions, graded meshes,and
adaptivity for total-variation regularized minimization problems,
ESAIM Math. Model. Numer. Anal. 56 no. 6 (2022), 1871–1888.
10.1051/m2an/2022056.
BW21
S. Bartels and Z. Wang,
Orthogonality relations of Crouzeix-Raviart and Raviart-Thomas finite
element spaces, Numer. Math. 148 no. 1 (2021), 127–139.
10.1007/s00211-021-01199-3.
bartels15
S. Bartels, Error control and adaptivity for a
variational model problem defined on functions of bounded variation,
Math. Comp. 84 no. 293 (2015), 1217–1240.
10.1090/S0025-5718-2014-02893-7.
BC08
S. Bartels and
C. Carstensen, A convergent adaptive finite element
method for an optimal design problem, Numer. Math. 108 no. 3
(2008), 359–385. 10.1007/s00211-007-0122-x.
BBHSVN23
L. Baumgärtner,
R. Bergmann, R. Herzog,
S. Schmidt, and
J. Vidal-Núnez, Total generalized variation for
piecewise constant functions on triangular meshes with applications in
imaging, SIAM Journal on Imaging Sciences 16 no. 1 (2023),
313–339. 10.1137/22M1505281.
BC11
H. H. Bauschke and P. L.
Combettes, Convex analysis and monotone operator theory in Hilbert
spaces, in CMS Books in Mathematics, 2011.
BW22
L. Baňas and A. Wilke,
A posteriori estimates for the stochastic total variation flow, SIAM
J. Numer. Anal. 60 no. 5 (2022), 2657–2680.
10.1137/21M1447982.
BB20
F. Bertrand and D. Boffi,
The Prager-Synge theorem in reconstruction based a posteriori error
estimation, in 75 years of mathematics of computation, Contemp.
Math. 754, Amer. Math. Soc., [Providence], RI, [2020] 2020, pp. 45–67. 10.1090/conm/754/15152.
Braess13
D. Braess, Finite Elemente. Theorie,
schnelle Löser und Anwendungen in der Elastizitätstheorie, 5th
revised ed. ed., Springer-Lehrb. Mastercl., Berlin: Springer Spektrum,
2013 (German). 10.1007/978-3-642-34797-9.
Brae09
D. Braess, An a posteriori error estimate and a
comparison theorem for the nonconforming P_1 element, Calcolo
46 no. 2 (2009), 149–155. 2520373.
10.1007/s10092-009-0003-z.
braides98
A. Braides, Approximation of free-discontinuity
problems, Lecture Notes in Mathematics 1694,
Springer-Verlag, Berlin, 1998. 10.1007/BFb0097344.
bregman67
L. Brégman, The relaxation method of finding the
common point of convex sets and its application to the solution of problems
in convex programming, USSR Computational Mathematics and Mathematical
Physics 7 no. 3 (1967), 200–217.
https://doi.org/10.1016/0041-5553(67)90040-7.
CL15
C. Carstensen and D. J.
Liu, Nonconforming FEMs for an optimal design problem, SIAM
J. Numer. Anal. 53 no. 2 (2015), 874–894.
10.1137/130927103.
CKNS08
J. Cascon, C. Kreuzer,
R. Nochetto, and
K. Siebert, Quasi-optimal convergence rate for an
adaptive finite element method, SIAM J. Numer. Anal. 46
no. 5 (2008), 2524–2550. 10.1137/07069047X.
CCMN08
V. Caselles, A. Chambolle,
S. Moll, and M. Novaga, A
characterization of convex calibrable sets in ℝ^N with respect to
anisotropic norms, Ann. Inst. H. Poincaré Anal. Non Linéaire
25 no. 4 (2008), 803–832.
10.1016/j.anihpc.2008.04.003.
CP20
A. Chambolle and T. Pock,
Crouzeix-Raviart approximation of the total variation on simplicial meshes,
J. Math. Imaging Vision 62 no. 6-7 (2020), 872–899.
10.1007/s10851-019-00939-3.
CR73
M. Crouzeix and P.-A.
Raviart, Conforming and nonconforming finite element methods for
solving the stationary Stokes equations. I, Rev. Française
Automat. Informat. Recherche Opérationnelle Sér. Rouge 7
no. R-3 (1973), 33–75.
Dac08
B. Dacorogna, Direct methods in the calculus of
variations, second ed., Applied Mathematical Sciences 78,
Springer, New York, 2008.
DK08
L. Diening and C. Kreuzer,
Linear convergence of an adaptive finite element method for the
p-Laplacian equation, SIAM J. Numer. Anal. 46 no. 2
(2008), 614–638. 10.1137/070681508.
DR07
L. Diening and
M. Růžička, Interpolation operators in
Orlicz-Sobolev spaces, Numer. Math. 107 no. 1 (2007),
107–129. 10.1007/s00211-007-0079-9.
Doe96
W. Dörfler, A convergent adaptive algorithm for
Poisson's equation, SIAM J. Numer. Anal. 33 no. 3 (1996),
1106–1124. 10.1137/0733054.
ET99
I. Ekeland and
R. Témam, Convex analysis and variational
problems, english ed., Classics in Applied Mathematics 28,
Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA,
1999, Translated from the French.
10.1137/1.9781611971088.
EG21
A. Ern and J. L. Guermond,
Finite Elements I: Approximation and Interpolation, Texts in
Applied Mathematics no. 1, Springer International Publishing, 2021.
10.1007/978-3-030-56341-7.
FV04
F. Fierro and A. Veeser, A
posteriori error estimators for regularized total variation of characteristic
functions, SIAM J. Numer. Anal. 41 no. 6 (2003), 2032–2055.
10.1137/S0036142902408283.
HK04
M. Hintermüller and
K. Kunisch, Total bounded variation regularization
as a bilaterally constrained optimization problem, SIAM J. Appl.
Math. 64 no. 4 (2004), 1311–1333.
10.1137/S0036139903422784.
Hun07
J. D. Hunter, Matplotlib: A 2d graphics environment,
Computing in Science & Engineering 9 no. 3 (2007), 90–95.
10.1109/MCSE.2007.55.
LW10
A. Logg and G. N. Wells,
DOLFIN: automated finite element computing, ACM Trans. Math.
Software 37 no. 2 (2010), Art. 20, 28.
10.1145/1731022.1731030.
Mar85
L. D. Marini, An inexpensive method for the
evaluation of the solution of the lowest order Raviart-Thomas mixed
method, SIAM J. Numer. Anal. 22 no. 3 (1985), 493–496.
10.1137/0722029.
vedo
M. Musy et al., marcomusy/vedo: 2023.4.4, March 2023.
10.5281/zenodo.7734756.
NSV00
R. H. Nochetto,
G. Savaré, and
C. Verdi, A posteriori error estimates for variable
time-step discretizations of nonlinear evolution equations,
Communications on Pure and Applied Mathematics 53 no. 5
(2000), 525–589.
https://doi.org/10.1002/(SICI)1097-0312(200005)53:5<525::AID-CPA1>3.0.CO;2-M.
OBGXY05
S. Osher, M. Burger,
D. Goldfarb, J. Xu, and
W. Yin, An iterative regularization method for
total variation-based image restoration, Multiscale Modeling &
Simulation 4 no. 2 (2005), 460–489. 10.1137/040605412.
PraSyn47
W. Prager and J. L. Synge,
Approximations in elasticity based on the concept of function space,
Quart. Appl. Math. 5 (1947), 241–269.
10.1090/qam/25902.
RT75
P.-A. Raviart and J. M.
Thomas, A mixed finite element method for 2nd order elliptic
problems, in Mathematical aspects of finite element methods (Proc.
Conf., Consiglio Naz. delle Ricerche (C.N.R.), Rome, 1975),
1977, pp. 292–315. Lecture Notes in Math., Vol. 606.
Repin18
S. Repin and J. Valdman,
Error identities for variational problems with obstacles, ZAMM Z.
Angew. Math. Mech. 98 no. 4 (2018), 635–658.
10.1002/zamm.201700105.
Rep99
S. I. Repin, A posteriori error estimates for
approximate solutions to variational problems with strongly convex
functionals, J. Math. Sci. (New York) 97 no. 4 (1999),
4311–4328, Problems of mathematical physics and function theory.
10.1007/BF02365047.
ROF92
L. I. Rudin, S. Osher, and
E. Fatemi, Nonlinear total variation based noise
removal algorithms, Phys. D 60 no. 1-4 (1992), 259–268,
Experimental mathematics: computational issues in nonlinear science (Los
Alamos, NM, 1991). 10.1016/0167-2789(92)90242-F.
dr-nafsa
M. Růžička and
L. Diening, Non–Newtonian fluids and function
spaces, in Nonlinear Analysis, Function Spaces and Applications,
Proceedings of NAFSA 2006 Prague, 8, 2007, pp. 95–144.
Tart07-book
L. Tartar, An introduction to Sobolev spaces
and interpolation spaces, Lecture Notes of the Unione Matematica
Italiana 3, Springer, Berlin; UMI, Bologna, 2007.
Ver13
R. Verfürth, A Posteriori Error Estimation
Techniques for Finite Element Methods, Oxford University Press, 04 2013.
10.1093/acprof:oso/9780199679423.001.0001.
ZeiIII
E. Zeidler, Nonlinear functional analysis and
its applications. III, Springer-Verlag, New York, 1985, Variational
methods and optimization, Translated from the German by Leo F. Boron.
10.1007/978-1-4612-5020-3.
|
http://arxiv.org/abs/2307.04501v1 | 20230710114528 | A Privacy-Preserving and Accountable Billing Protocol for Peer-to-Peer Energy Trading Markets | [
"Kamil Erdayandi",
"Lucas C. Cordeiro",
"Mustafa A. Mustafa"
] | cs.CR | [
"cs.CR",
"cs.CE"
] |
A Privacy-Preserving and Accountable Billing Protocol for Peer-to-Peer Energy Trading Markets
This work was supported by EPSRC through EnnCore [EP/T026995/1] and by the Flemish Government
through the FWO-SBO SNIPPET project [S007619]. K.E. is funded by the Ministry of National Education, Republic of Turkey.
Kamil Erdayandi1, Lucas C. Cordeiro1 and
Mustafa A. Mustafa12
1Department of Computer Science, The University of Manchester, UK
2imec-COSIC, KU Leuven, Belgium
Email: {kamil.erdayandi, lucas.cordeiro, mustafa.mustafa}@manchester.ac.uk
=======================================================================================================================================================================================================================================================================================================================
This paper proposes a privacy-preserving and accountable billing (PA-Bill) protocol for trading in peer-to-peer energy markets, addressing situations where there may be discrepancies between the volume of energy committed and delivered.
Such discrepancies can lead to challenges in providing both privacy and accountability while maintaining accurate billing. To overcome these challenges, a universal cost splitting mechanism is proposed that prioritises privacy and accountability. It leverages a homomorphic encryption cryptosystem to provide privacy and employs blockchain technology to establish accountability.
A dispute resolution mechanism is also introduced to minimise the occurrence of erroneous bill calculations while ensuring accountability and non-repudiation throughout the billing process. Our evaluation demonstrates that PA-Bill offers an effective billing mechanism that maintains privacy and accountability in peer-to-peer energy markets utilising a semi-decentralised approach.
Billing, Privacy, Accountability, Peer-to-peer Energy Market, Homomorphic Encryption, Blockchain
§ NOMENCLATURE
c_i, p_j, u_k i-th consumer , j-th prosumer, k-th user
N_C , N_P, N_U Number of consumers, prosumers, users
V^P2P P2P market's traded electricity volume array
V^Real Real electricity consumption array
π_P2P, π_FiT, π_RT P2P, FiT, Retail price
Stat Array of the statements of the users
Bal_sup Balances of the supplier
inDev Array of the individual deviations of the users
Dev^Tot Total deviations of the users
KGen_pe(k) Paillier key generation method
PK_sup , SK_sup Public, Private (Secret) key pair of Supplier
{.}_ℰ Data homomorphically encrypted with PK_sup.
H(.) Hash Function
§ INTRODUCTION
§.§ Motivation and Background
Peer-to-peer (P2P) energy trading
enables users to obtain clean energy at more reasonable prices than traditional suppliers, making it accessible to a wider society <cit.>. It facilitates direct energy exchange between households that harness renewable energy sources (RES) <cit.>. This approach empowers individuals to become active participants in the energy system <cit.>, allowing RES owners to optimise their profits and reduce their bills through trading with other users <cit.>.
Although P2P energy trading markets offer various benefits, some challenges hinder their widespread adoption. Firstly, the vast amount of data exchanged can reveal sensitive information about users <cit.>, such as their energy usage habits and lifestyle patterns. Access to this data poses significant privacy risks <cit.> and could potentially violate privacy protection regulations, e.g., GDPR <cit.>. Thus, it is crucial to ensure privacy-preserving data processing and protect data from unauthorised access <cit.>. Secondly, such markets require secure and accountable solutions.
However, it is challenging to audit transactions without a tamper-proof system <cit.>. To ensure fair and accurate energy trading, it is also essential to guarantee integrity and verifiability of any data used. Thirdly, often what users commit at P2P markets deviates from what they deliver due to intermittent RES output. Hence, any billing models will need mechanisms to deal with such deviations.
§.§ Relevant Literature
Within P2P energy trading, two crucial phases are market clearance and billing & settlement <cit.>. Since privacy-preserving market clearing mechanisms have already been explored <cit.>, this paper focuses on the billing phase.
Madhusudan et al. <cit.> propose four billing models for P2P energy markets which account for deviations in energy volumes from the users' bids and incorporate individual, social, or universal cost-sharing mechanisms to ensure cost-effectiveness for both consumers and prosumers. Nonetheless, they do not explore user privacy.
A privacy-preserving billing protocol that incorporates an individual cost-sharing mechanism has been proposed in <cit.>.
However, it relies on a remote server for bill calculations, which poses a risk of a single point of failure.
Singh et al. <cit.> propose a method that uses blockchain and homomorphic schemes to protect the confidentiality of user data while enabling efficient data analysis. They do not explore any billing mechanisms. Gür et al. <cit.> propose a system based on blockchain technology and IoT devices to facilitate billing. To ensure data confidentiality, the system employs session keys and stores the encrypted data on the blockchain. However, this is still vulnerable to breaches as unauthorised parties can gain access to these keys, enabling them to access sensitive data.
In summary, no prior study on P2P market billing fully satisfies the three essential requirements: protecting user privacy, maintaining strong system accountability, and accommodating variations in user consumption. Neglecting any of these elements undermines the market trust, transparency and fairness, which are essential to their success and sustainability. Furthermore, integrating these three features within a single platform efficiently poses considerable challenges.
§.§ Contributions and Organization
To address the issues raised in the existing literature, we propose a novel privacy-preserving and accountable billing (PA-Bill) protocol, which effectively mitigates the challenges surrounding security, privacy, accountability, and user consumption variations prevalent in current studies.
PA-Bill utilises a universal cost-splitting billing model that mitigates the risk of sensitive information leakage due to individual deviations. It also avoids a single point of failure by performing most calculations locally in a semi-decentralised manner. To preserve privacy, the mechanism employs homomorphic encryption in bill calculations. Moreover, PA-Bill utilises blockchain technology to integrate accountability mechanisms that address possible conflicts during the billing calculation process. To minimise privacy leakage, only the hashed version of the data is stored on the blockchain. Finally, PA-Bill can support large communities of 500 households.
Unlike other solutions, PA-Bill integrates privacy protection, accountability, and accommodating user consumption variations into a single solution in an efficient way. To the best of our knowledge, no previous work has successfully implemented an efficient billing model that simultaneously preserves privacy, ensures accountability, and effectively handles discrepancies between committed and delivered volume.
The rest of the paper is structured as follows: Section <ref> outlines the preliminaries. The proposed PA-Bill is presented in Section <ref>. The security analysis of PA-Bill is presented in Section <ref>, while its performance is evaluated in Section <ref>. Finally, Section <ref> concludes the paper.
§ PRELIMINARIES
§.§ System Model
Our proposed billing protocol, illustrated in Fig. <ref>, involves prosumers, consumers, a trading platform (TP), a distributed ledger/Blockchain (DLT), a referee, and a supplier.
Prosumers generate energy through renewables, consume the volume they require, and sell any surplus energy. Consumers solely consume energy. Households have home energy management systems (HEMs) and smart meters (SMs) that measure electricity flows, provide real-time measurements, and facilitate P2P trading for the user.
Prosumers and consumers can trade electricity through a P2P market using a trading platform (TP). If necessary, they can also buy or sell electricity from/to a supplier as a backup option. However, P2P trading is more beneficial than relying on the supplier due to pricing considerations <cit.>. Financial reconciliation occurs during settlement cycles (SCs) for users involved in trading. Within each SC, data regarding the actual electricity usage of households and their commitments to trade in the market are stored on DLT. Households calculate their bills locally in a decentralised manner. If a dispute arises, a referee intervenes to resolve it by requesting data from households and retrieving it from DLT.
§.§ Threat Model and Assumptions
Our threat model comprises untrustworthy and semi-honest entities. Prosumers and consumers who may attempt to violate the protocol specifications and obtain sensitive data of other users are considered to be untrustworthy. Prosumers may try to maximise their revenue, while consumers may aim to minimise their expenses. Semi-honest entities include the TP, referee, and supplier. They adhere to the protocol specifications, but they may still be curious to learn sensitive data of users.
SMs are tamper-proof and sealed. No one, including their users, can tamper with them without being detected.
Users act rationally by seeking the most cost-effective electricity to buy or sell <cit.>.
We assume that the entities communicate over secure and authentic communication channels.
§.§ Design Requirements
* No single point of failure (SPF):
To avoid SPF, calculations and data storage should be distributed <cit.>.
* Privacy:
The confidentiality of each user's traded and consumed energy volumes, as well as of their individual deviations and deviation signs, should be preserved.
* Accountability: Disputes arising from erroneous bill calculations must be addressed in an accountable way to prevent any party from denying responsibility.
* Fair deviation cost distribution: cost of P2P market deviation should be split fairly among market participants.
§.§ Building Blocks
Homomorphic encryption (HE) enables computations to be performed on encrypted data, resulting in encrypted outputs that produce the same results as if the operations were conducted on unencrypted data <cit.>. Specifically, we deploy the Paillier cryptosystem which supports homomorphic addition and scalar multiplication on ciphertexts <cit.>. Our solution ensures the privacy of households by encrypting sensitive information such as energy consumption data per SC. Billing calculations are performed on this encrypted data, thereby preserving the confidentiality of the information.
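As an illustration only (the variable names and numerical values below are ours and not part of the protocol), the Paillier operations PA-Bill relies on can be reproduced with the python-paillier ("phe") library that is also used in the evaluation section:

from phe import paillier

pk_sup, sk_sup = paillier.generate_paillier_keypair(n_length=2048)  # supplier's monthly key pair

v_real = pk_sup.encrypt(2.4)   # measured volume (kWh), encrypted under the supplier's public key
v_p2p = pk_sup.encrypt(2.0)    # committed P2P volume, also encrypted

in_dev = v_real - v_p2p        # homomorphic subtraction: the individual deviation stays encrypted
weighted = in_dev * 0.25       # scalar multiplication by a public price
print(sk_sup.decrypt(in_dev), sk_sup.decrypt(weighted))  # only the holder of SK_sup can decrypt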
We use blockchain technology to provide accountability by ensuring that transactions are permanently recorded in a decentralised and immutable system with append-only storage. Transactions recorded on a blockchain cannot be altered by design, ensuring that they are accurate and trustworthy <cit.>.
§ PRIVACY PRESERVING AND ACCOUNTABLE BILLING (PA-BILL) PROTOCOL
In this section, we propose a privacy-preserving and accountable billing protocol for P2P energy market where users' actual energy consumption may differ from the volumes they committed. It protects sensitive household information and enables system entities to verify accurate billing calculations.
§.§ PA-Bill Overview
The process of PA-Bill protocol is illustrated in Fig. <ref>, which includes interactions between the entities. The system utilises the public-private key pair of the supplier for all homomorphically encrypted calculations. A distinct set of HE keys, namely PK_sup and SK_sup are generated for each billing month. Additionally, each month the consumers and prosumers are paired together to perform accountable calculations.
In the energy trading model, users send homomorphically encrypted bid-offer data to the TP, which calculates the final trading price π_P2P and the amount of energy V^P2P[u_k] that each user u_k will trade via the P2P market, as in <cit.>.
During each SC, π_P2P is publicly released. V^P2P[u_k] is shared with related paired users for future calculations, and its hash is stored on the DLT for future verification. SMs measure their users' actual imported/exported electricity and transmit the encrypted version (V^Real[u_k]) to relevant users. The hash of this encrypted version is also stored on the DLT.
After sending and storing related data for billing, the calculation of bills among prosumers and consumers is performed in three stages in a privacy-preserving way. Firstly, individual deviations of users are calculated. Consumers calculate the individual deviations of prosumers and vice versa. Secondly, the total deviations of consumers and prosumers are calculated by six users selected from among the consumers and prosumers. Thirdly, statements (bills/revenues) of users are calculated.
To protect sensitive data such as energy consumed/traded, and individual energy deviations of households, our work utilises HE scheme to process data while preserving privacy. However, it is crucial to design the billing algorithm in such a way that it avoids any indirect leakage of private information despite the use of encryption. Traditional billing methods <cit.> have the potential to expose confidential information by using individual deviations between actual and committed energy volumes to determine the “conditions" in calculating bills. This enables inferences to be made about whether the actual electricity consumption volume is lower or higher than the committed data. To address this issue, we propose a privacy-preserving and accountable cost-splitting billing that uses total deviations of consumers and prosumers rather than individual deviations to determine billing conditions.
In the event of a dispute, the referee requests the necessary data from households, as well as it retrieves the hash of the previously stored data from DLT (to ensure the accuracy of the data requested from households) to settle the dispute. In this case, the referee corrects erroneous computations of the pair of customer and prosumer whose calculations do not match each other and identifies the responsible party in the pair. The responsible party is penalised, incentivising them to act truthfully, which would otherwise result in penalties. Besides, the referee can directly calculate the supplier's balance since the calculations do not involve any confidential information.
Finally, at the end of the month, final bills and revenues, and the balance of the supplier are released with the help of the referee and the private homomorphic key of the supplier.
§.§ Technical Details of PA-Bill
At the start of each billing period (e.g., a month), the following two steps (1-2) are carried out.
§.§.§ Generation of Keys
The supplier generates a public-private HE (Paillier) key pair: KGen_pe(k) → (PK_sup, SK_sup).
§.§.§ Matching customers and prosumers
The referee conducts a random matching process in which each consumer is paired with a list of prosumers and vice versa.
The number of users in the lists may exceed one or be zero in cases where N_C > N_P or N_C < N_P, while the lists contain only one user if N_C = N_P. Here, N_C and N_P denote the respective number of customers and prosumers. The function M(u_k) returns the list of users that have been matched to the user u_k.
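One possible realisation of this matching step (our own sketch; the paper does not prescribe the concrete scheme) pairs the two shuffled lists round-robin, so that M(u_k) may contain zero, one or several users:

import random

def match_users(consumer_ids, prosumer_ids, seed=None):
    # Round-robin pairing of two shuffled lists; M(u_k) is matches[u_k].
    rng = random.Random(seed)
    consumers, prosumers = list(consumer_ids), list(prosumer_ids)
    rng.shuffle(consumers)
    rng.shuffle(prosumers)
    matches = {u: [] for u in consumers + prosumers}
    long_side, short_side = (consumers, prosumers) if len(consumers) >= len(prosumers) else (prosumers, consumers)
    for idx, user in enumerate(long_side):
        if not short_side:
            break                      # no counterparties at all: every list stays empty
        partner = short_side[idx % len(short_side)]
        matches[user].append(partner)
        matches[partner].append(user)
    return matches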
At each SC, the following six steps (3–8) are carried out.
§.§.§ Transfer and Storage of P2P Traded Data
TP makes the P2P trading price public by storing it at DLT in plaintext. For each u_k, TP transmits homomorphically encrypted value of traded volume V^P2P[u_k] to user u_k and to users in M(u_k). The privacy-preserving calculation of the encrypted traded values by user u_k (V^P2P[u_k]) can be performed after the transmission of bids-offers in a homomorphically encrypted format.
It is assumed the TP has already calculated V^P2P[u_k]. Once the data has been transmitted to relevant parties, the TP also hashes the homomorphically encrypted traded volume of user u_k, i.e., H(V^P2P[u_k]), and stores the result at the DLT, together with a timestamp and ID of u_k.
§.§.§ Collection, Transfer and Storage of SM Data
At the end of each SC, each SM measures the real volume of energy imported from (or exported to) the grid by their user, i.e., V^Real[u_k], encrypts it with PK_sup and hashes it, i.e., H(V^Real[u_k]). It then stores the hash value to DLT with timestamp and ID of u_k. The user SM also stores V^Real[u_k] as well as sends it to the users in M(u_k).
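A possible way for an SM (or the TP) to fingerprint an encrypted value before storing it on the DLT, using SHA3-256 from hashlib as in the evaluation section, is sketched below; the chosen serialisation of the Paillier ciphertext (its public ciphertext integer plus exponent) is our assumption and not fixed by the protocol.

import hashlib

def hash_encrypted(enc_value):
    # enc_value is a phe EncryptedNumber; its ciphertext integer and exponent are
    # serialised and hashed. The digest is what is stored on the DLT, alongside
    # the user ID and a timestamp.
    payload = f"{enc_value.ciphertext()}:{enc_value.exponent}".encode()
    return hashlib.sha3_256(payload).hexdigest()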
§.§.§ Calculation of Individual Deviations
in this step, each user u_k calculates the individual deviations (inDev) from the volume of energy they committed for themselves and their corresponding matched users in M(u_k)
(see Alg. <ref>). To calculate inDev, each user u_k subtracts their committed volume from the volume measured by their SM for themselves (u_k) and the users m_l in M(u_k). The calculations are carried out in homomorphically encrypted format.
The respective encrypted results inDev and inDev_M are sent to the referee.
After the referee receives the encrypted individual deviations from users, it checks whether the computations have been done correctly. For each user and its matched user, the referee receives four encrypted results. The user u_k provides its own encrypted result, inDev[u_k], as well as that of its matched user. For the matched consumer c_i and prosumer p_j, the referee checks if the calculated values are the same. In order to achieve this, the referee subtracts these two calculated values from each other in a homomorphically encrypted format. The result of this subtraction is then sent to the supplier who has the private key to perform homomorphic encryption operations. The supplier decrypts the result of subtraction and sends it back to referee. The referee checks whether the received value from the supplier is zero or not. If it is zero, it considers the calculations to be accurate and proceeds to store the hash of the resulting computation of user u_k (not that of the matched user) in DLT along with the corresponding ID and timestamp of u_k, to facilitate future verification. Otherwise (if the received result is not zero), the referee intervenes to correct any erroneous calculations and identify the responsible party. To do so, the referee requests V^Real and V^P2P from the users, checks their correctness by hashing and comparing them with the previously stored hashes in blockchain by TP and SMs. If the encrypted data received from the users is accurate, the referee recalculates the inDev in encrypted format for c_i and p_j, whose results were incorrect. Next, the referee follows the same process of subtracting the calculated values and having the result decrypted by the supplier to compare the recalculated outcome with the values obtained from c_i and p_j. The referee then identifies the party that is accountable for the mismatch.
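In code, the referee's consistency check above amounts to a homomorphic subtraction followed by a zero test; a minimal sketch (reusing phe objects as in the earlier examples) could be:

def reports_match(enc_report_a, enc_report_b, supplier_private_key):
    # The referee subtracts the two encrypted deviation reports without learning them;
    # in the protocol the difference is decrypted by the supplier, and the referee
    # then checks whether the reported plaintext equals zero.
    encrypted_difference = enc_report_a - enc_report_b
    return supplier_private_key.decrypt(encrypted_difference) == 0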
§.§.§ Calculation of Total Deviations
To calculate total demand and supply deviations, the referee selects three consumers and three prosumers. Each consumer c_i sends their respective inDev[c_i] to the selected prosumers and vice versa.
Selected prosumers and consumers verify the received encrypted deviations by hashing and comparing them with the stored hashes in DLT. Then, the selected prosumers sum up inDev[c_i] over all c_i to calculate Dev_C^Tot (eq. <ref>) and the selected consumers do the same for all p_j
(eq. <ref>).
Dev_C^Tot ≔ ∑_i=0^N_C-1 inDev_C[c_i]
Dev_P^Tot ≔ ∑_j=0^N_P-1 inDev_P[p_j]
After calculating Dev_C^Tot and Dev_P^Tot, selected prosumers and consumers send them to a referee for verification. If the results match, the referee sends them to the supplier.
The supplier then decrypts the results and makes them publicly available by storing Dev_C^Tot and Dev_P^Tot into DLT. If the results do not match, the referee corrects any erroneous calculations and identifies the responsible party. This is done by recalculating (eq. <ref>) and (eq. <ref>) in encrypted format after requesting and verifying the necessary data via DLT.
§.§.§ Calculation of Bills and Rewards
we present our proposed privacy-preserving and accountable universal cost-splitting billing model that employs total deviations instead of individual deviations to establish billing conditions. The proposed billing model is presented in Alg. <ref>. The algorithm takes as input V^P2P, V^Real, π_P2P, π_RT and π_FiT and calculates the bills/revenues of consumers/prosumers. The algorithm outputs Statements Stat[u_k], Stat_M[u_k] for user u_k and its matched users in M(u_k), respectively. Stat[u_k] indicates the bill of u_k when u_k is a consumer and it stands for the revenue of u_k if u_k is a prosumer. We have devised universal formulas such as Stat[u_k] which is applicable to both consumers and prosumers.
The algorithm works in three modes based on the difference between total deviations of consumers and prosumers, and proceeds as follows.
If Dev_P^Tot = Dev_C^Tot, prosumers have generated enough electricity to meet the demand of customers, resulting in a balanced P2P market. In this case, individuals can purchase the required energy from other households and sell their excess energy to other households at π_P2P in addition to their commitments in the P2P market rather than relying on suppliers. Energy sharing between households to compensate for deviations is advantageous for both consumers and prosumers, as they can exchange energy at a price of π_P2P, which is higher than π_FiT and lower than π_RT, compared to relying on suppliers to buy electricity at π_RT and sell electricity at π_FiT. The statements for each user u_k and for paired users in M(u_k) are calculated between ln. 3-6 in the algorithm.
If Dev_P^Tot < Dev_C^Tot, there is a shortage of electricity in the P2P market as prosumers have not generated enough electricity to meet customer demand. If there is a shortage of electricity that cannot be compensated by other users, the only option is to purchase it from the supplier at π_RT. Users with a shortage of electricity can buy it at this price, while households with a surplus can sell it at π_RT instead of selling it to the supplier for π_FiT, which is advantageous for prosumers. In accordance with this, the statements for each user u_k and for paired users in M(u_k) are calculated between ln. 9-11 in the algorithm.
If Dev_P^Tot > Dev_C^Tot, there is excess electricity in the P2P market as prosumers have generated more electricity than is needed to meet customer demand. In this case, consumers can purchase energy from prosumers at π_P2P to compensate for their energy shortage due to deviation. The total revenue of the prosumers is distributed among them in proportion to the excess energy they provided. To calculate this, the total revenue generated by prosumers due to excess energy is first determined. Some of the excess energy is sold to consumers with a shortage of electricity at π_P2P, while the remainder is sold to the supplier at π_FiT. Therefore, the total revenue of prosumers, TotRev_P, can be calculated as
TotRev_P =(Dev_C^Tot·π_P2P + (Dev_P^Tot - Dev_C^Tot) ·π_FiT)
The total revenue TotRev_P is distributed among the prosumers in proportion to inDev_P[u_k] /Dev_P^Tot. In accordance with this, Alg. <ref> calculates statements for each user u_k and for paired users in M(u_k) between ln. 16-19, if u_k is a consumer. Otherwise, the statements are calculated between ln. 21-24.
At the end of the algorithm, the statements are accumulated on stat^Tot in encrypted format for u_k and the users in M(u_k), assuming that stat^Tot was set to zero before the first SC.
After each pair calculates their statements bilaterally, they send the results to the referee for verification. If the results do not match, the referee intervenes to correct any erroneous calculations and identify the responsible party. This is done by running Alg. <ref> for the unmatched pairs after requesting and verifying the required data for computation via DLT.
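To make the three modes concrete, the following plaintext sketch shows how a single user's individual deviation is valued in each case; in PA-Bill the same arithmetic is carried out on Paillier ciphertexts, the sign convention and function name are ours, and the settlement of the committed P2P volumes themselves is omitted.

def deviation_statement(in_dev, is_consumer, dev_c_tot, dev_p_tot, pi_p2p, pi_rt, pi_fit):
    # Value of one user's individual deviation under the three market modes of Alg. 2.
    if dev_p_tot == dev_c_tot:
        # balanced P2P market: shortages bought and surpluses sold among peers at pi_p2p
        return in_dev * pi_p2p
    if dev_p_tot < dev_c_tot:
        # overall shortage: the missing energy has to come from the supplier at pi_rt
        return in_dev * pi_rt
    # overall surplus (dev_p_tot > dev_c_tot)
    if is_consumer:
        return in_dev * pi_p2p          # consumers cover their shortage at the P2P price
    tot_rev_p = dev_c_tot * pi_p2p + (dev_p_tot - dev_c_tot) * pi_fit
    return (in_dev / dev_p_tot) * tot_rev_p   # prosumers share TotRev_P proportionally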
§.§.§ Calculating the Balance of the Supplier
The referee calculates the supplier's balance using only public information, and does so in a non-encrypted format.
In the case where Dev_P^Tot = Dev_C^Tot, Bal_sup is set to zero (Bal_sup ≔ 0), since there is no excess or shortage of electricity in the P2P market to compensate via the supplier.
If (Dev_P^Tot > Dev_C^Tot), there is excess energy in the P2P market and the supplier purchases it at the FiT price π_FiT, resulting in a negative balance for the supplier to pay. Bal_sup is calculated as the negative product of the total excess energy (Dev_P^Tot - Dev_C^Tot) and π_FiT, i.e.
Bal_sup ≔ -(Dev_P^Tot - Dev_C^Tot)·π_FiT
If (Dev_P^Tot < Dev_C^Tot), there is a shortage of energy in the P2P market that needs to be compensated by the supplier at the retail price π_RT. Bal_sup is calculated as the product of the supplied energy (Dev_C^Tot - Dev_P^Tot) and π_RT, i.e.
Bal_sup ≔ (Dev_C^Tot - Dev_P^Tot)·π_RT.
At each SC, the resulting Bal_sup is added to the total supplier balance Bal^Tot_sup,
except at the first SC (SC equal to zero), where Bal^Tot_sup is initialised to Bal_sup.
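Since only public totals enter this computation, it can be written directly in plaintext; the sketch below mirrors the three cases (accumulation over SCs omitted).

def supplier_balance(dev_c_tot, dev_p_tot, pi_rt, pi_fit):
    if dev_p_tot == dev_c_tot:
        return 0.0                                   # nothing to compensate
    if dev_p_tot > dev_c_tot:
        return -(dev_p_tot - dev_c_tot) * pi_fit     # surplus bought by the supplier at FiT
    return (dev_c_tot - dev_p_tot) * pi_rt           # shortage supplied at the retail price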
The next step is carried out at the end of each billing period.
§.§.§ Transfer and Announcement of Bills, Revenues and Supplier Balance
As the final accumulated monthly statements of households do not need to be protected from the supplier (payments must ultimately be made), the referee sends the encrypted statements consisting of bills and revenues to the supplier. The supplier then decrypts these statements using their HE private key and hashes and stores the decrypted version on the DLT system for future verification during the payment process. The supplier's balance is also hashed and stored on the DLT.
§ SECURITY, PRIVACY AND ACCOUNTABILITY ANALYSIS
The PA-Bill protocol addresses the security concern of avoiding SPF by distributing the majority of calculations and data storage locally.
It addresses privacy concerns by utilising HE to encrypt sensitive user data such as V^Real and V^P2P, ensuring that sensitive information remains confidential during billing computations. In addition, the PA-Bill protocol employs a cost-splitting mechanism that utilises the total deviations of users rather than individual deviations to calculate billing modes. This method avoids indirect privacy leakage of individual deviations.
It employs Blockchain technology to create an unalterable record of the hashes of essential data necessary for billing computations. This ensures the verification and integrity of critical data, thereby enabling all parties to be held accountable for their actions during the billing process.
§ PERFORMANCE EVALUATION
In this section, we demonstrate that PA-Bill achieves computational efficiency without compromising privacy, accountability, or the ability to accommodate user consumption variations. PA-Bill effectively addresses these critical aspects while maintaining a level of computational efficiency. We prove our claims through both theoretical analysis and experiments.
§.§ Theoretical Analysis
The time complexity of the method is mainly determined by the input parameters of Alg. <ref> and Alg. <ref>, which include the number of users (N_U). The time required to perform the algorithm grows with the input size. Specifically, the nested double loops in Alg. <ref> and Alg. <ref> lead to a quadratic time complexity of n^2 in cases where N_C > N_P or N_C < N_P. The time complexity is reduced to n, with a single iteration in the inner loop, when N_C = N_P, where each user has only one matched user. The time complexity of the calculations in eq. <ref> and eq. <ref> is n, where n depends on the inputs N_C and N_P, respectively.
§.§ Experimental Results
We evaluate the performance of PA-Bill by running simulations on a PC with an Intel Core i5 CPU @ 2GHz and 16GB of RAM to demonstrate its efficiency. We utilise the SHA3-256 algorithm for hashing and the Paillier cryptosystem for homomorphic encryption with 2048-bit keys. These operations were implemented using the Python libraries hashlib and phe, respectively. We utilised the Ethereum network to prototype the blockchain platform.
To deploy and test Ethereum for our project, we used Ganache[https://www.trufflesuite.com/ganache], wrote smart contracts in Solidity[https://solidity.readthedocs.io/en/v0.8.7/], and compiled them on Remix[https://remix.ethereum.org/]. To connect our project with the Ethereum network, we utilised the Python Web3[https://web3py.readthedocs.io/en/stable/] library. As we utilised existing tools to design the blockchain platform, we did not conduct a separate performance assessment of the platform itself. Our previous work <cit.> is deployed as the electricity trading platform, so we do not reevaluate it in this context either. Instead, our primary focus lies in evaluating the performance of the privacy-preserving and accountable billing model.
The billing model simulations were conducted on a sample of 500 users, consisting of 250 consumers and 250 prosumers. We measured PA-Bill's execution time (ET) for computationally intensive components in two scenarios: worst-case (every household makes an incorrect bill calculation (unintentionally or maliciously), thus requiring an intervention from the referee) and best-case (all households make correct calculations, hence no referee intervention is deployed).
The SC is set to one hour. Table <ref> demonstrates the average execution time per SC for the PA-Bill components, computed over a one-month billing period comprising 720 SCs (24 SCs per day). The execution times, which are in the order of milliseconds for both the worst-case and best-case scenarios tested with a large group of 500 users, indicate that the proposed protocol offers a computationally efficient billing solution.
§ CONCLUSION
In this work, we proposed PA-Bill, a privacy-preserving and accountable billing protocol that addresses security, privacy, and accountability issues in P2P markets at the billing and settlements stage. PA-Bill utilises a universal cost-splitting billing model, local semi-decentralised calculation, and Homomorphic Encryption for privacy protection. Blockchain technology is deployed for accountability mechanisms that resolve conflicts during billing calculation. PA-Bill is evaluated on a community of 500 households. In our future work, we plan to investigate network constraints.
|
http://arxiv.org/abs/2307.04326v1 | 20230710033943 | Automotive Radar Mutual Interference Mitigation Based on Hough Transform in Time-Frequency Domain | [
"Yanbing Li",
"Weichuan Zhang",
"Lianying Ji"
] | eess.SP | [
"eess.SP"
] |
Automotive Radar Mutual Interference Mitigation Based on Hough Transform in Time-Frequency Domain
Yanbing Li, Member, IEEE,
Weichuan Zhang, Member, IEEE,
and Lianying Ji
This work was supported by the Fundamental Research Funds for the Central Universities 2022RC008.
Yanbing Li is with the School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China (e-mail: [email protected]).
Weichuan Zhang is with the Institute for Integrated and Intelligent Systems, Griffith University, QLD, Australia. (e-mail: [email protected]).
Lianying Ji is with the Beijing Muniu Linghang Technology Company, Beijing, 100192, China (e-mail: [email protected]).
August 12, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
With the development of autonomous driving technology, automotive radar has received unprecedented attention
due to its day-and-night and all-weather working capability. It is worthwhile to note that more and more vehicles are equipped with automotive radars, resulting in mutual interference between radars. The interference reduces radar target detection performance, making perception information unreliable. In this paper, a novel interference mitigation method based on a power-weighted Hough transform is proposed for suppressing radar mutual interference and improving the safety of autonomous driving systems. Firstly, the frequency modulation characteristics of interference signals and target echo signals are analyzed, and the differences between the two signals are introduced. Secondly, based on a straight line detection technique, the power of the mutual interference signal in the time-frequency domain is accumulated, and the accurate position of the interference is located. Finally, the target echo is recovered by an autoregressive model. Compared with existing state-of-the-art methods, the proposed method retains more useful signals after interference mitigation and achieves better interference detection robustness under low signal-to-noise ratio conditions. Simulation experiments and real scenario experiments verify the effectiveness of the proposed
method and show its superiority.
Automotive radar, Hough transform, interference mitigation, millimeter-wave radar, time-frequency spectrogram.
§ INTRODUCTION
Radar, as an environmental sensing technology, has been introduced in more and more civil fields such as automotive radar, traffic radar, and security radar. On one hand, this is due to the development of chip technology, especially millimeter-wave chip technology. These advances have made it possible to reduce radar design cost and difficulty, which allows radar manufacturers to iterate their products rapidly <cit.>. On the other hand, the trend of intelligence has led to an unprecedented emphasis on perception technology in many aspects of people’s life, which provides necessary and reliable perception data for the post-processing stage.
One of the most representative civilian applications is automotive radar. From low-level assisted driving to high-level autonomous driving, radars are included as an important sensor in autonomous driving solutions <cit.>. It is well known that no single sensor can acquire all desired information well under all real-world conditions. Consequently, multi-sensor fusion techniques are increasingly being used for autonomous driving. Among the three mainstream sensors, i.e., cameras, radars, and lidars, radars are able to work day and night and in all weather, a capability not well matched by the other sensors. Meanwhile, radars have an advantage in measuring the radial distance and velocity of targets, which is complementary to the information of the other sensors. A typical autonomous driving solution equips a vehicle with seven millimeter-wave radars: one long-range radar for forward looking, two mid-range radars for both forward and rearward looking, and four short-range radars in the four corners for 360-degree coverage <cit.>. This configuration allows radar sensors on a single vehicle to radiate in all directions on roads. Hence, with the development of autonomous driving, the deployment rate of automotive radars will increase rapidly in the future. As a result, mutual influence among radars becomes inevitable <cit.>. Interference among radars may degrade target detection and, in severe cases, increase the likelihood of target loss, which is unacceptable for traffic safety <cit.>.
Generally, there are two main categories of radar interference. One category is caused by radar devices interfering with each other, and the other category is spoofing attacks performed by jamming devices. The latter is similar to electronic warfare in military applications and is usually introduced in malicious attacking <cit.>. A research on the suppression of malicious jamming such as digital radio frequency memory (DRFM) jamming is discussed in <cit.>. Compared with spoofing attack jamming, the problem of mutual interference between radars is more common in practical scenarios, especially in high-density traffic flow scenarios.
Many studies analyzing the mutual interference of automotive radars can be found in <cit.>. These sources discuss the occurrence probability of mutual interference between radars, calculate the theoretical value of the interference power, and illustrate the interference signal in the time domain, the frequency domain and the time-frequency (TF) domain, respectively. These studies deepen our understanding of the mutual interference of automotive radars and indicate that the mutual interference will worsen the signal quality and the signal-to-noise ratio (SNR), thereby affecting the target detection ability of radars <cit.>.
Methods used for solving the aforementioned issues can be categorized into two groups according to the degree of dependence on radar system architecture. The first group is coupled with the radar system, and its implementation usually requires specific software and hardware architectures. Approaches based on transmit waveforms such as orthogonal noise and phase-coded are proposed in <cit.>. Another waveform optimization approach is proposed in <cit.>. These methods suppress interference based on the special structure of waveforms. Digital beamforming methods based on radar antenna array structure are discussed in <cit.>, in which interference in specific directions can be suppressed by the directivity of a formed beam. Because interference sources and targets are usually in the same or close direction in a traffic scene, the digital beamforming methods face angle resolution challenges. Inspired by the biological behavior of bats, a heuristic frequency hopping technique is introduced in <cit.>. When interference occurs, a radar with higher frequency shifts its frequency upwards, while a radar with lower frequency shifts its frequency downwards. This strategy has a higher success rate for interference mitigation than random hopping way. Alternately, radar and communication cooperation is employed for solving mutual interference <cit.>. A distributed network protocol that enables the radar and communication systems to work together is designed, then the avoidance of mutual interference among radars can be achieved due to information sharing. The above-mentioned methods can realize interference mitigation by designing specific system functions, which achieves good effect in designated situations. However, these methods require constraints on radar system design, thereby increasing the development cost and difficulty of radar products.
Another group of methods does not customize the radar software and hardware, but uses signal processing techniques, i.e., signal detection and reconstruction, for suppressing interference on the existing radar system architecture, which has good versatility in practice. In terms of the acquisition domain of the interference information, these methods can generally be divided into time domain, frequency domain, and TF domain methods. An adaptive noise canceller that uses interference information in negative frequencies to cancel the interference in positive frequencies is proposed in <cit.>. This is a typical implementation of interference mitigation in the frequency domain. Besides frequency domain methods, most of the current interference mitigation methods are implemented in the time domain and the TF domain. Zeroing or adding a raised cosine window for the disturbed part of a received signal is adopted in <cit.>. These two ways achieve the attenuation of the interference power, yet lose the useful signals in the part overlapping the interference. Wavelet decomposition and denoising is used in <cit.> for removing the interference. Due to the decomposition characteristics of the wavelet transform, useful signals in the undisturbed components can be well retained. Signal reconstruction by an autoregressive (AR) model is proposed in <cit.>, which has the ability to extrapolate useful signals in the interfered part and retrieve more target information than the zeroing and the windowed methods; however, the reconstruction quality degrades when an interfered segment is wide. Another signal reconstruction method named the iterative method with adaptive thresholding (IMAT) is proposed in <cit.> for overcoming the signal gap introduced by zeroing. The IMAT method is essentially a sparse reconstruction technique based on the main frequency components. All the methods mentioned above obtain interference information from the time domain, and suppress the interference accordingly.
More recently, research in <cit.> shows that more structural information of the interference can be observed in the TF domain. In this case, more differences between the target echo and the interference can be extracted in the TF domain than in the time domain. TF analysis of a received signal in an interference scenario is performed for locating the interference time span region in <cit.>, followed by beat frequency interpolation for recovering the target echo. Another TF analysis based method is introduced in <cit.>. Here the interference is located by a constant false alarm rate (CFAR) detector, followed by a reconstruction process consisting of zeroing, amplitude correction, and Burg-based signal extrapolation, respectively. Experimental results demonstrate that the methods based on TF analysis are superior in interference mitigation performance to time domain methods.
Although the existing TF domain methods <cit.> have shown superiority to the time domain methods <cit.> in performance, we still have to resolve whether the characteristic information of the interference in the TF domain is fully exploited. For instance, the CFAR based method <cit.> detects and suppresses interference in frequency slices along the TF spectrogram, without considering the time-frequency variation characteristics of the interference. In this case, interference detection is based on the ratio of the interference power at a certain point to the noise level. A good interference detection performance can be obtained under high interference-to-noise ratio (INR) conditions. However, when the interference power is weak, e.g., when the interferer radar is far from the victim radar, the projection of the interference power onto each frequency slice may not be enough to support accurate interference detection in the TF domain. In this way, degraded interference mitigation performance may occur in low INR conditions for the CFAR based method. Based on the aforementioned facts, our main questions are: (1) Is there a joint time and frequency characteristic of the interference in the TF domain? And can this time-frequency characteristic be effectively extracted for detecting and mitigating the interference? (2) Can the INR be improved for enhancing the interference detection performance? Focusing on these two questions, our research demonstrates that the interference has obvious joint time and frequency structural characteristics on the TF plane, that is, it appears as a straight line with a large slope. In addition, inspired by the incoherent integration method in radar target detection <cit.>, the line structure characteristics of the interference can be used to accumulate the interference power in the TF domain, thus achieving good interference detection performance.
In this paper, the mutual interference of frequency modulated continuous wave (FMCW) automotive radars based on the TF domain is discussed, and a robust interference detection and mitigation approach by power-weighted Hough transform is proposed. To the best of our knowledge, so far there is no research that considers the structure information in the TF domain to robustly detect and locate interference, especially in weak interference and low SNR conditions. Compared with the existing interference mitigation methods based on signal processing technology, the contributions of this paper are as follows:
* The first mutual interference detection method for automotive radar in terms of structure information in the TF domain is proposed. By analyzing interference signals in a radar receiver, we conclude that the interference in baseband has a linear frequency modulation (LFM) characteristic, i.e., it behaves as a straight line in the TF domain. Based on this structure feature, the Hough transform is used to locate the accurate position of the interference in the TF domain.
* For the first time, the idea of power accumulation is introduced into the problem of interference detection in the TF domain. The classical Hough transform is modified for the TF spectrogram of an FMCW radar signal, namely, intensity information is introduced into the Hough transform for the power accumulation. After achieving the interference power accumulation in the Hough parameter space, the INR increases, hence improving the stability of the interference detection.
* Compared with the interference mitigation methods based on the time domain, the proposed method has the ability to handle the case of multiple interference. Furthermore, the proposed method is also effective when interference duty cycle is large.
The rest of the paper is organized as follows. Section <ref> introduces the signal models of the FMCW radar signal and the mutual interference. Then an interference mitigation algorithm based on power-weighted Hough transform is presented in Section <ref>. Numerical simulations and experimental data based results are shown and discussed in Sections <ref> and <ref>, respectively, to evaluate the interference mitigation performance of the proposed method. Finally, Section <ref> concludes this paper.
§ LINEAR FMCW SIGNAL MODEL IN RADAR MUTUAL INTERFERENCE CASES
§.§ Linear FMCW Signal Model without Interference
An LFM signal, also named a chirp signal, is the most common waveform used in an FMCW radar system in real applications <cit.>. Usually, a set of LFM signal sequences is transmitted from a radar antenna to sense the environment. The single transmitted LFM signal is
s_t(t) =√(2 P_t)cos [2πφ(t)]
=√(2 P_t)cos[2π(f_c+1/2 k t) t],
where f_c is the central carrier frequency, P_t is the transmitted power, and k is the chirp rate which equals the ratio of the chirp sweep bandwidth B to the chirp sweep time T, i.e., k=B/T. The frequency of the transmitted signal is
f_t(t) =dφ(t)/d t
=f_c+k t .
Thus a frequency modulation direction is defined as up-chirp when k>0 and down-chirp when k<0, respectively.
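For illustration, a minimal NumPy sketch of this transmit model (the sampling rate, sweep time, and bandwidth below are placeholder values, not the settings used in the later experiments) generates a single up-chirp and verifies its instantaneous frequency:

```python
import numpy as np

fs = 2e9            # analog simulation sampling rate (placeholder)
T = 20e-6           # chirp sweep time
B = 300e6           # chirp sweep bandwidth
k = B / T           # chirp rate
fc = 0.0            # carrier set to 0, i.e., the chirp is simulated in baseband
Pt = 1.0            # transmitted power (normalized)

t = np.arange(0, T, 1 / fs)
phase = fc * t + 0.5 * k * t**2                   # phi(t) = f_c*t + k*t^2/2
s_t = np.sqrt(2 * Pt) * np.cos(2 * np.pi * phase)

# Instantaneous frequency f_t(t) = d(phi)/dt = f_c + k*t (up-chirp since k > 0).
f_inst = np.gradient(phase, t)
print(f_inst[0] / 1e6, f_inst[-1] / 1e6)          # sweeps from ~0 MHz to ~300 MHz
```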
An echo scattered by a target contains added amplitude and Doppler information related to the target’s radar cross section (RCS) and velocity, respectively. For a single-target scenario, the power of the target echo related to free space attenuation is
P_e=P_t G^2λ^2σ/(4π)^3 R^4,
where λ is the wavelength of the transmitted signal, G is the antenna gain on the line of sight (LOS), σ is the target RCS representing the ability to scatter the power of electromagnetic waves, and R is the distance between the radar and the target on the LOS. The target distance causes a delay τ between the target echo and the radar reference signal, which is
τ=2(R+v t)/c,
where c is the light speed, v is the relative velocity between the target and the radar on the LOS which causes the Doppler frequency shift. From (<ref>) and (<ref>), the echo with one target is
s_e(t)=√(2 P_e)cos [2πφ(t-τ)],
When multiple targets exist, the echo signal is the superposition of the individual targets' echoes.
§.§ Linear FMCW Signal Model with Interference
When there is interference, the target echo and the interference are superimposed, and then received by a receiver antenna. For a single-interference scenario, without loss of generality, assuming an interferer radar has the same radio frequency (RF) and antenna specifications as a victim radar, i.e., the two radars have the same transmitted power P_t, wavelength λ, and antenna gain G, then the interference power in the receiver of the victim radar is
P_i=P_t G^2λ^2/(4π)^2 R_i^2,
where R_i is the distance between the interferer radar and the victim radar on the LOS. It is worthwhile to note that R_i will be equal to R when the interferer radar is installed on the target. Accordingly, the interference signal is
s_i(t)= √(2 P_i)cos[2πφ_i(t-τ_i)]
= √(2 P_i)cos[2π(f_c i(t-τ_i)+1/2 k_i(t-τ_i)^2)],
where f_c i and k_i are the central carrier frequency and the chirp rate of the interferer radar respectively. τ_i is the time delay between the interference and the reference signal. When there exist multiple interferer radars, the total interference signal is the superposition of each interference represented in (<ref>). According to (<ref>) and (<ref>), the signal-to-interference ratio (SIR) at the victim radar receiver is
SIR=P_e/P_i=σ R_i^2/4π R^4.
From (<ref>) and (<ref>), the total signal received by the radar receiver is
s_r(t)=s_e(t)+s_i(t)+g(t),
where g(t) is the receiver noise. Dechirp processing of the received signal is achieved by using a low noise amplifier (LNA) and mixing with the reference signal. From (<ref>), (<ref>), (<ref>), and (<ref>), a beat-frequency signal in baseband can be derived as (<ref>), where φ_b and φ_b i are the constant phase terms. Accordingly, the beat frequency introduced by the target is <cit.>
f_b=k τ,
and the beat frequency introduced by the interference is
f_b i=f_c-f_c i+k_iτ_i+1/2(k-k_i) t ,
which is an LFM signal. Substituting (<ref>) and (<ref>) into (<ref>), the beat-frequency signal can be rewritten as
s_b(t)=A_bcos(2π f_b t+φ_b)+A_b icos(2π f_b i t+φ_b i)+g(t),
where A_b=2 √(P_t P_e) and A_b i=2 √(P_t P_i) are the power of the beat-frequency signal for the target and the interference respectively. Thus, the total beat-frequency signal consists of
three parts, namely the target, the interference, and the noise.
After the dechirp processing, the beat frequency signal is filtered by a low-pass filter (LPF) whose function is to prevent signal aliasing during subsequent analog-to-digital sampling by an analog-to-digital converter (ADC). Then three fast Fourier transform (FFT) processes, i.e., range FFT, Doppler FFT, and spatial FFT, are applied to the digital signal for estimating the distance, the velocity, and the angle information of the target <cit.>. A schematic diagram of the FMCW radar system is shown in Fig. <ref>.
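To make the dechirp model above concrete, the following sketch (continuing the NumPy example; the target range, the interferer chirp rate and delay, and the IF filter cutoff are illustrative assumptions, and the carrier phase term is omitted) mixes a delayed echo and an interfering chirp with the reference and then low-pass filters the product:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, T, B = 2e9, 20e-6, 300e6
k = B / T
t = np.arange(0, T, 1 / fs)

ref = np.cos(2 * np.pi * 0.5 * k * t**2)                 # reference chirp (baseband, carrier omitted)

R, c = 150.0, 3e8
tau = 2 * R / c                                           # target delay -> beat frequency f_b = k*tau
echo = 0.1 * np.cos(2 * np.pi * 0.5 * k * (t - tau)**2)

k_i, tau_i = -B / T, 2e-6                                 # interferer: down-chirp with an arbitrary delay
interf = 0.5 * np.cos(2 * np.pi * 0.5 * k_i * (t - tau_i)**2)

# Dechirp: mix with the reference, then apply the anti-aliasing low-pass filter (cutoff ~20 MHz).
beat = (echo + interf) * ref
b, a = butter(4, 20e6 / (fs / 2))
beat_lp = filtfilt(b, a, beat)

# The target appears as a constant tone at k*tau, while the interferer only sweeps briefly
# through the filter passband, producing a short time-limited burst.
print("expected target beat frequency [MHz]:", k * tau / 1e6)
```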
§ INTRODUCTION TO INTERFERENCE MITIGATION METHOD
§.§ Signal Characteristics and Method Motivation
Car detection in a typical mutual interference scenario is shown in Table <ref>. A car with an interferer radar is present at 100m, and another interferer radar is present at a distance of 2000m. In this case, for an ego radar, the SIRs between the car echo and the interference produced by the mounted radar and the distant radar are -41dB and -15dB according to (<ref>), respectively. These SIR levels indicate that the interference power is greater than that of the target echo due to the one-way propagation effect shown in (<ref>) and (<ref>). As a result, the interferer radar may have an impact on the target detection even if it is far away from the ego radar.
In addition, the TF features of the target echo and the interference before and after the dechirp processing are shown in Fig. <ref>. As a result of the dechirp processing, it can be seen from (<ref>), (<ref>), and (<ref>) that the target echo consists of a single-frequency signal, while the interference shows the characteristic of an LFM signal. After low-pass filtering, only signals in the passband, which is represented by the yellow area in Fig. <ref>, are retained. In this case, the target echo exists in the entire time domain as a single beat frequency signal if its beat frequency is smaller than the cut-off frequency of the LPF. However, since the LFM range of the interference is greater than the LPF passband, the interference will be intercepted by the LPF, which makes the interference exhibit a finite extent in time, as shown in the second row of Fig. <ref>.
In summary, the target echo and the interference have following characteristics in the automotive radar mutual interference case:
* The target echo in baseband is a single frequency signal, which demonstrates a straight line parallel to the time axis in the TF domain.
* The interference in baseband is a LFM and time limited signal, which demonstrates a straight line with a large slope in the TF domain.
* The interference power is usually greater than that of the target echo due to the difference in signal propagation path. This indicates an automotive radar may be interfered by other radars within a range of kilometers. In this case, the dynamic range of the interference power is large, i.e., both the strong and the weak interferences exist in the received signal of the victim radar.
Based on the signal characteristic analysis in the automotive radar mutual interference scenario, the dynamic range of the interference power is large in a practical scenario, which brings difficulties to interference detection. The existing interference mitigation methods, such as the wavelet and the CFAR based methods, all take advantage of the larger interference power with respect to that of the target echo for interference
detection. However, the detection performance of the existing methods decreases with lower interference power. Inspired by the noncoherent integration processing in radar target detection applications <cit.>, the interference detection can be performed by exploiting the line feature of the interference in the TF domain, and accumulating the interference power. Based on this motivation, we propose a Hough-transform-based interference detection approach in a power accumulation sense. In this way, the interference detection performance can be improved by using the accumulation effect on straight line points in the Hough parameter space.
§.§ Interference Detection and Localization Based on Power-Weighted Hough Transform
The characteristics of the interference and the target echo can be obtained by TF analysis techniques. The short-time Fourier transform (STFT) is a widely used TF analysis technique due to its good linearity and computational simplicity <cit.>. In this paper, the TF analysis of the received signal is obtained using the STFT, and the discrete version implemented in practice is <cit.>
S_r(η, m)=∑_n=-∞^∞ s_r(n) w(n-η D) e^-j 2π m/N n ,
where w(n) is an analysis window (e.g., Hamming window <cit.>), N is the number of frequency samples, D is the hop size between successive DFTs, and η denotes the time index. Then, the power spectrogram of the received signal is
P(η, m)= |S_r(η, m)|^2
= S_r(η, m) ·conj(S_r(η, m)),
where conj(·) is a conjugate operation. Once the power spectrogram P(η, m) is regarded as a special image, it can be used as the input of the subsequent Hough transform.
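For concreteness, P(η, m) can be computed with SciPy as follows, using the window length of 32, hop of 4, and 128-point FFT adopted later in the experiments; the toy test signal (a beat tone plus a short chirp-like burst) is only an illustrative assumption:

```python
import numpy as np
from scipy.signal import stft

fs_if = 40e6                                    # IF sampling rate (illustrative)
t = np.arange(400) / fs_if                      # 400 samples per chirp, as in Sec. V

# Toy baseband beat signal: a 5 MHz target tone plus a short chirp-like interference burst.
x = np.cos(2 * np.pi * 5e6 * t)
burst = (t > 4e-6) & (t < 6e-6)
x += 3 * np.cos(2 * np.pi * (1e6 * (t - 4e-6) + 0.5 * 4e12 * (t - 4e-6) ** 2)) * burst

# STFT with a Hamming window; S_r has one column per time index eta.
f, eta, S_r = stft(x, fs=fs_if, window='hamming', nperseg=32,
                   noverlap=32 - 4, nfft=128, return_onesided=True)
P = np.abs(S_r) ** 2                            # power spectrogram P(eta, m)
print(P.shape)                                  # (frequency bins, time frames)
```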
As an effective geometric shape detection method <cit.>, <cit.>, the Hough transform has been widely used in many fields such as image processing <cit.> and lane detection based on radar images <cit.>. The classical Hough transform detects straight lines in binary images. It projects straight lines into a parameter space to accumulate the scores of points, and the straight lines can be obtained by threshold detection in the parameter space. The line function in the Hough transform is defined as
ρ=x cos (θ)+y sin (θ) ,
where ρ and θ are the distance from the line to the origin and the angle of the line respectively. The coordinate (x, y) is used to describe the pixel position in the input image, while each point (ρ, θ) in the Hough parameter space represents a line in the image. If the line exists in the image, the score of the corresponding point in the parameter space can be measured as
H(ρ, θ)= ∑_(x,y)δ(x, y) ,
with δ(x, y)= {[ 1, if (x, y) is on L; 0, otherwise ]. ,
where L denotes that the line satisfies (<ref>).
Unlike ordinary images, the intensity value of each pixel in the power spectrogram represents the distribution of signal power in the TF domain. From (<ref>), if a signal has a certain power and chirp characteristics at the same time, it appears as a straight line with the corresponding power in the power
spectrogram P(η, m). Due to this TF feature, accumulating power information in the Hough parameter space is utilized for improving the performance of the interference detection. The power-weighted score in the Hough parameter space is
H_P(ρ, θ)=∑_(η, m)∈ P P(η, m) δ(η, m) .
In addition, considering that the slope of the line corresponding to the target echo in the TF spectrogram is close to 0, only lines with large slopes are detected to ensure they correspond to the interference. When the Hough parameter matrix is obtained, the lines can be extracted by threshold detection. With some prior information, the threshold can be determined in a feasible way as follows. In real scenarios, we set a maximum RCS value related to an interested target as σ_max and calculate the theoretical value of the target echo power according to (<ref>), then the detection threshold is determined as
Thd= α P_t G^2λ^2σ_max/(4π)^3 R^4,
where α is the threshold factor which can be determined in a radar test stage. After obtaining the detection threshold, the lines corresponding to the interference can be extracted if H_P(ρ, θ)>Thd, and the interference locations in the spectrogram found according to (<ref>) are
{[ η=ρ, if sin (θ)=0; m =(-cos (θ) η+ρ)/sin (θ), otherwise ]. .
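A minimal NumPy sketch of the power-weighted accumulation and the line extraction is given below; the angle/offset grids, the exclusion band around near-horizontal lines, and the peak threshold are illustrative assumptions (in practice the threshold follows Thd above):

```python
import numpy as np

def power_weighted_hough(P, exclude_deg=30.0):
    """Accumulate spectrogram power along candidate lines rho = eta*cos(theta) + m*sin(theta).

    Near-horizontal lines (theta close to 90 deg in the (eta, m) plane) are suppressed because
    they correspond to the single-frequency target echo rather than the interference.
    """
    n_freq, n_time = P.shape
    thetas = np.deg2rad(np.arange(0, 180))            # angle grid (1-degree steps)
    diag = int(np.ceil(np.hypot(n_freq, n_time)))
    rhos = np.arange(-diag, diag + 1)                 # offset grid
    H = np.zeros((len(rhos), len(thetas)))

    mm, ee = np.nonzero(P > 0)                        # (m, eta) indices of all TF cells
    for ti, th in enumerate(thetas):
        rho = np.round(ee * np.cos(th) + mm * np.sin(th)).astype(int) + diag
        np.add.at(H[:, ti], rho, P[mm, ee])           # power-weighted voting

    keep_out = np.abs(np.rad2deg(thetas) - 90) < exclude_deg
    H[:, keep_out] = 0
    return H, rhos, thetas

# Usage with the spectrogram P from the previous sketch:
# H, rhos, thetas = power_weighted_hough(P)
# peaks = np.argwhere(H > 0.5 * H.max())              # placeholder threshold instead of Thd
```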
§.§ Interference Mitigation and Target Echo Recovery
Based on the detection results of interference lines, values at interference locations are discarded to achieve interference suppression. Meanwhile, a signal recovery process can be realized by interpolating the discarded locations using neighborhood samples.
For each specific frequency bin slice of the spectrogram, an AR model along the time axis of the spectrogram <cit.> is used for realizing a signal interpolation. The AR model is defined as
S_rec(η, m)=∑_p=1^q a_p S_r(η-p, m)+ε,
where q is the number of neighboring samples, a_p is the prediction coefficient, and ε is the residual. The AR coefficients can be obtained by least squares. Therefore, the prediction values are obtained from the solved AR model, and the gaps at the corresponding interference locations are filled by the predicted signals to achieve the interference mitigation. After traversing all frequency slices with this process, an interference-free spectrogram is obtained.
Finally, a reconstructed target echo without interference is obtained by applying inverse STFT (ISTFT) to the
interference-free spectrogram. The ISTFT is computed by taking the inverse Fourier transform of each time bin slice of the interference-free spectrogram and overlap-adding the inverted signals <cit.> as
s_rec(n)=∑_η=-∞^∞ w_i(n-η D) 1/N∑_m=0^N-1 S_rec(η, m) e^j 2π m/N n,
where w_i(n) defines a synthesis window which is the inverse of the analysis window w(n).
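A sketch of this reconstruction step is shown below: for one frequency slice, the AR coefficients are fitted by least squares on the clean samples and the flagged interference cells are extrapolated forward, after which the spectrogram is inverted with SciPy's istft. The model order q=8 is an assumed default (the text above notes it can also be chosen by an information criterion), and the mask of detected cells comes from the Hough step:

```python
import numpy as np
from scipy.signal import istft

def ar_fill(slice_c, bad, q=8):
    """Fill flagged cells of one complex STFT frequency slice by AR(q) forward prediction."""
    x = slice_c.copy()
    # Training rows: clean samples whose q predecessors are also clean.
    rows = [i for i in range(q, len(x)) if not bad[i] and not bad[i - q:i].any()]
    if len(rows) < q:                       # not enough clean data to fit the model
        return x
    A = np.array([x[i - q:i][::-1] for i in rows])
    a, *_ = np.linalg.lstsq(A, x[rows], rcond=None)
    for i in np.flatnonzero(bad):
        if i >= q:
            x[i] = x[i - q:i][::-1] @ a     # predict from the q previous (possibly filled) samples
    return x

# Usage with the detected interference mask (same shape as the complex STFT S_r):
# S_rec = S_r.copy()
# for m in range(S_r.shape[0]):
#     if mask[m].any():
#         S_rec[m] = ar_fill(S_r[m], mask[m])
# _, s_rec = istft(S_rec, fs=fs_if, window='hamming', nperseg=32, noverlap=28, nfft=128)
```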
In summary, the proposed interference mitigation flow is shown in Algorithm <ref>.
§ NUMERICAL SIMULATION RESULTS
§.§ Simulation Description and Evaluation Metrics
Numerical simulation is one of the two approaches used for evaluating the performance of the proposed interference mitigation method. An FMCW radar signal flow simulation based on Fig. <ref> is carried out, which includes waveform generation, amplification, and emission, free space propagation, target back scattering and interference superposition, low-noise amplification, dechirp processing, lowpass filtering and ADC sampling. Two sampling frequencies are utilized for simulating analog and digital signals respectively, i.e., a large analog frequency (AF) such as 2GHz, is applied for analog signal simulation while an intermediate frequency (IF) is used for analog-to-digital sampling. The main radar parameter settings used in the simulation are shown in Table <ref>. In the simulation scenario, the interferer radar one and the interferer radar two are set at 30m and 150m from the victim radar respectively.
Two types of targets are set for evaluating different methods as follows:
* A stationary target is presumed to be located at 150m from the ego radar. It is mainly used for evaluating interference mitigation effects in a single chirp signal in Section <ref>, and the influence of different SNRs on interference mitigation performance in Section <ref>.
* A moving target with a speed of 11m/s is presumed to be located at 100m for evaluating velocity measurement performance in Section <ref>.
The performance of the proposed method is compared with seven state-of-the-art methods, which include five time domain methods and two TF domain methods. Among them, the zeroing, raised cosine window (CW) <cit.>, time domain AR (T-AR) <cit.>, wavelet decomposition <cit.> and IMAT <cit.> methods are implemented in the time domain, while the STFT beat-frequency interpolation by an AR model (STFT-AR) and CFAR-Burg methods are implemented in the TF domain <cit.>. In order to ensure the comparability of each method, a CFAR detector is used for detecting interference positions for all the time domain methods.
The interference mitigation performance in both the time and the frequency domains is evaluated using two time domain metrics and two frequency domain metrics. The first metric is the cosine similarity (CS) defined as
CS=s_rec^* s_e/(‖s_rec‖_2×‖s_e‖_2),
where s_e is the target echo, s_rec is the recovered signal of s_e, and * denotes the conjugate transposition. The CS is a metric of an angle between two signal vectors, which can be used to represent the correlation of the two vectors. The closer the CS is to 1, the more correlated the two vectors are. The second time domain metric is error vector magnitude (EVM) defined as
EVM=‖s_rec-s_e‖_2/‖s_e‖_2.
The EVM is employed to describe the difference between an ideal signal and a recovered signal. A small EVM value means an accurate reconstruction.
The last two metrics, namely peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR) <cit.> are employed to evaluate interference mitigation performance in the frequency domain via range profiles. The PSLR is defined as
PSLR=10 ×log_10(max_m∉ [a,b] F^2(m)/max_m∈ [a,b] F^2(m)),
where F is the spectrum of s_rec and the interval [a, b] bounds the main lobe of the spectrum. The PSLR is employed to describe a ratio between the power of the max sidelobes point with respect to that of the main lobe. The ISLR is defined as
ISLR=10 ×log10(∑_m=1^a F^2(m)+∑_m=b^M F^2(m)/∑_m=a^b F^2(m)),
The ISLR describes a ratio between the energy of all sidelobes with respect to that of the main lobe. For both the PSLR and the ISLR, smaller values indicate lower sidelobe levels, which represents good interference mitigation performance in our applications.
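The four metrics can be computed directly from the recovered and reference signals, for example with the sketch below; the main-lobe interval [a, b] must be chosen around the target peak of the range profile and is a placeholder here:

```python
import numpy as np

def metrics(s_rec, s_e, a, b):
    """Return CS, EVM, PSLR, ISLR of a recovered signal s_rec against the reference echo s_e."""
    cs = np.abs(np.vdot(s_rec, s_e)) / (np.linalg.norm(s_rec) * np.linalg.norm(s_e))
    evm = np.linalg.norm(s_rec - s_e) / np.linalg.norm(s_e)

    F2 = np.abs(np.fft.rfft(s_rec)) ** 2                 # power of the range profile
    main = F2[a:b + 1]                                   # main lobe samples
    side = np.concatenate([F2[:a], F2[b + 1:]])          # all sidelobe samples
    pslr = 10 * np.log10(side.max() / main.max())
    islr = 10 * np.log10(side.sum() / main.sum())
    return cs, evm, pslr, islr
```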
§.§ Noise-Free Simulation Results
Noise-free simulation is used for evaluating the interference mitigation performance firstly. In this case, there are only the target echo and the interference in the received signal, which allows us to quantitatively evaluate the effects of the different interference mitigation methods. The noise-free
signals are shown in Fig. <ref>. The TF distributions of the target echo and two types of the interference in the analog domain are shown in Fig. <ref> (a), the frequencies of the interference and the target echo cross in different time. The interference signals that fell into LPF passband near these intersections are retained, and then received by the radar receiver. The output of LPF and ADC are shown in Fig. <ref> (b) and Fig. <ref> (c) respectively.
The interference mitigation effects of the proposed method are shown in Fig. <ref>. The spectrogram of the received signal with interference is shown in Fig. <ref> (a). The power accumulation and the peak detection result in the Hough parameter space is demonstrated in Fig. <ref> (b). Three peaks corresponding to interference lines are detected, and the interference lines in the TF domain are well indicated as shown in Fig. <ref> (c). Based on these locations, the interference is finely mitigated by the AR model reconstruction process. In this process, the order of the AR model is determined by Akaike information criterion <cit.>. The interference mitigation result in the TF domain is shown in Fig. <ref> (d). Compared with the spectrogram of the received signal with interference as shown in Fig. <ref> (a), the target echo is retained and the interference contaminated areas are reconstructed effectively after the interference mitigation by the proposed method.
The results of the eight methods on the four performance metrics are summarized in Table <ref>. It can be observed from Table <ref> that the four time domain methods (i.e., zeroing, CW, T-AR, and IMAT) contain large reconstruction error as shown in the EVM. Furthermore, their correlations with the target echo are poor as shown in the CS. The reason is that the amount of interference information that can be extracted from the time domain is limited. Compared with the four time domain methods, the wavelet method uses the wavelet coefficients of different resolution layers to suppress interference. In this case, the interference power is decomposed into different components, and the useful signals in those components with less interference are preserved. However, the wavelet method still works in the time domain, and does not make full use of the TF characteristics. Unlike the time domain methods, the STFT-AR, the CFAR-Burg, and the proposed methods perform interference mitigation in the TF domain and, therefore, have the ability to exploit more information for accurately locating the interference and retaining more useful signals. Thus the large CS, the small EVM and ISLR are obtained from the three TF domain methods in the noise-free experiment.
The frequency spectrum, i.e., the range profile, of the recovered signal are shown in Fig. <ref>. All the eight methods can effectively suppress the interference. Among the five time domain methods, the wavelet method has the best sidelobe levels. Although the IMAT method has the lower sidelobe at the target location of 150m, its sidelobe level deteriorates more rapidly at the distant range. The three TF domain methods have better sidelobe levels than those of the time domain methods and are very close to the signal without interference.
§.§ Simulation Results in Different SNR Levels
In this experiment, the interference power is reduced to the same level as the target echo power. This is used for simulating a distant interference scenario and verifying the mitigation performance of the different tested methods for weak interference. Moreover, simulations at different SNR levels are implemented for evaluating the robustness of the tested methods. Gaussian white noise with different SNR levels is added into the received signal, and Monte Carlo simulations are repeated for evaluating the statistical performance under each specific SNR level. There are a total of 256 independent noise adding experiments for each SNR level. After all the SNR experiments, the results of the four metrics versus the SNR are shown in Fig. <ref>.
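Each Monte Carlo trial simply adds white Gaussian noise scaled to the desired SNR; a minimal sketch of this step (the input signal name is an assumption) is:

```python
import numpy as np

def add_awgn(x, snr_db, rng=None):
    """Return x plus real white Gaussian noise at the requested SNR in dB, relative to x's power."""
    rng = np.random.default_rng() if rng is None else rng
    p_noise = np.mean(np.abs(x) ** 2) / 10 ** (snr_db / 10)
    return x + rng.normal(scale=np.sqrt(p_noise), size=x.shape)

# 256 independent realizations per SNR level, e.g. at -15 dB:
# trials = [add_awgn(s_received, -15.0) for _ in range(256)]
```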
When the SNR is greater than -5dB, for the zeroing, the CW, the T-AR and the IMAT methods, the CS, the EVM, the PSLR and the ISLR of the recovered signal are about 0.8, 0.7, -18 dB, and -3 dB, respectively. It can be found from Fig. <ref> that the wavelet method has achieved better results than the other four time domain methods. The three TF domain methods, namely the STFT-AR, the CFAR-Burg and the proposed methods, achieve the best performance. In this case, the CS is greater than 0.95, the EVM is less than 0.25, the PSLR is less than -32 dB, and the ISLR is less than -15 dB. From these results, the TF domain methods outperform the time domain methods because more information of the interference can be utilized and more accurate interference locations can be detected.
When the SNR is low, i.e., smaller than -15dB, the signal recovery performance of all the tested methods is degraded. However, the TF domain methods still maintain an advantage over the time domain methods. As shown in Fig. <ref>, in the case of -15dB SNR, the performance of the TF domain methods is still better than that of the time domain methods at high SNR on all the four metrics. In addition, as the SNR decreases, the proposed method is superior to the STFT-AR and the CFAR-Burg methods in interference mitigation performance and robustness. For example, when the SNR is -25dB, the CS of the proposed method is about 16% higher than those of the STFT-AR and the CFAR-Burg methods, and achieves a smaller statistical standard deviation in the Monte Carlo simulations. Similar results are observed on the EVM, the PSLR, and the ISLR as shown in Fig. <ref>.
The interference detection results for the STFT-AR, the CFAR-Burg, and the proposed methods in the TF domain under SNR of -5dB are shown in Fig. <ref>. It can be seen that the proposed method has better interference location detection accuracy than that of the STFT-AR and the CFAR-Burg methods. In low SNR conditions, the power-weighted Hough transform is equivalent to the power accumulation along a straight line in the TF domain. As a result, the INR is improved after Hough transform and the interference is detected robustly.
Unlike the proposed method, there is no accumulation of the interference power to improve the INR in the CFAR-Burg method. Thus it encounters false alarms caused by noise in the low SNR conditions. Moreover, in the frequency slice where the target echo is located, a failure to correctly estimate the noise level leads to a missed detection of the interference for the CFAR-Burg method as shown in Fig. <ref> (c). These corner cases cause the interference mitigation degradation and further affect the signal recovery performance.
As for the STFT-AR method, since there is no explicit interference detection and localization process in <cit.>, we manually labeled the interference locations for comparison as shown in Fig. <ref> (b). This is an ideal situation. Therefore, the performance in practice will be worse due to detection errors of the interference locations. Compared with the CFAR-Burg and the proposed methods, the STFT-AR method removes all the frequency bins in a certain time range in the TF domain. This operation causes a loss of useful signal information adjacent to the interference locations, and further leads to a decrease in interference mitigation performance.
§.§ Moving Target Simulation Results
A moving target simulation is used for evaluating the performance of target velocity measurement by chirp sequences before and after interference mitigation. In the simulation, the moving target locates at 100m with a velocity of 11m/s. A total of 256 chirps are set up as a range-Doppler (RD) processing unit. In the evaluation of the interference mitigation performance with multiple chirps, the STFT-AR method is not used because the interference locations vary in each chirp and can not be marked manually.
In the experiment, the interference mitigation process was firstly performed by traversing each chirp to obtain interference-free chirp sequence data, and then the range FFT and the Doppler FFT processing mentioned in Section <ref> was realized on the interference-free chirp sequence data for obtaining RD responses. The RD responses corresponding to the tested methods are shown in Fig. <ref> (c) to Fig. <ref> (i). As a reference, the RD responses under the interference-free and the interference conditions are given in Fig. <ref> (a) and Fig. <ref> (b), respectively. The CFAR-Burg and the proposed method have better interference mitigation effects in the RD responses than those of the time domain methods, i.e. the zeroing, the CW, the T-AR, the wavelet, and the IMAT methods. The reason is that, on the one hand, the interference can be more accurately located in the TF domain, and on the other hand, the CFAR-Burg and the proposed methods utilize the uncontaminated signals to interpolate the contaminated gaps. Thus the two methods maintain the phase coherence of the chirp sequence, and obtain the high target SNR in the RD responses.
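For reference, the range FFT and Doppler FFT steps applied to the reconstructed chirp sequence can be sketched as follows; the layout of the input array (one row per interference-free chirp) and the use of Hamming windows on both axes are assumptions of this illustration:

```python
import numpy as np

def range_doppler_map(chirps):
    """chirps: (num_chirps, samples_per_chirp) array of interference-free beat signals."""
    n_chirps, n_samples = chirps.shape
    win_r = np.hamming(n_samples)
    win_d = np.hamming(n_chirps)[:, None]
    range_fft = np.fft.rfft(chirps * win_r, axis=1)                      # range FFT along fast time
    rd = np.fft.fftshift(np.fft.fft(range_fft * win_d, axis=0), axes=0)  # Doppler FFT along slow time
    return 20 * np.log10(np.abs(rd) + 1e-12)                             # RD response in dB
```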
Moreover, compared with the false alarms of the CFAR-Burg method as shown in Fig. <ref> (c), the proposed method utilizes the linear structure of the interference in the TF domain and accumulates the interference power in the Hough parameter space for robustly interference detecting, which is able to avoid the false alarms. Therefore, the proposed method has better performance in reconstructing the signal and obtains a higher SNR in the RD response. Clearer results are given in Fig. <ref>. In the RD responses, the range slice of the moving target is extracted for obtaining velocity profiles, then the velocity profiles of the tested methods are shown in Fig. <ref>. Similar to the RD response analysis, the proposed method obtains the best interference mitigation performance in the velocity profiles.
Quantitative results are given in Table <ref>. Since the velocity profile is the output of the Doppler FFT processing, the frequency domain metrics, i.e., the PSLR and the ISLR, are used to measure the interference mitigation performance of the tested methods. The PSLRs of the time domain methods are distributed between -30dB and -23dB. Among them, the zeroing, the CW and the T-AR methods have PSLRs around -25dB. These three methods only perform interference detection in the time domain, which carries the least amount of interference information. The wavelet method decomposes the time domain signal into components and performs interference detection in the components, which can retain some useful signal information. As a result, the wavelet method achieves a PSLR level of -29 dB, which is the best performance among the time domain methods. The IMAT method is a sparse reconstruction method for the time domain signal, and this sparse approach enables it to obtain a PSLR level of -27 dB in the velocity profile. Compared with the time domain methods, the TF domain methods significantly improve the PSLR level. The CFAR-Burg and the proposed methods obtain PSLR levels of -47dB and -52dB, respectively. Since the false alarms that occur in the CFAR-Burg method are avoided in the proposed method, the proposed method obtains the PSLR level closest to the ground truth. Similar results are obtained in the ISLR metric as shown in Table <ref>.
§ EXPERIMENT RESULTS FOR REAL SCENARIO
In this experiment, real scenario data are collected for verifying the effectiveness of the proposed method. Three 77GHz millimeter-wave radars, from Muniu Technology Co., Ltd. (https://www.muniutech.cn/vehicle?category_id=9), are used for data collection. Among these radars, one is used as the victim radar, and the other two are used as interferer radars. Experimental data of the victim radar is recorded. The device positions in the scenario are shown in Fig. <ref> (a). The interferer radars were set on the left and the right sides relative to the LOS of the victim radar. The distances from the victim radar to interferer radar one and interferer radar two were 20m and 30m, respectively. The radar configurations are shown in Table <ref>. Considering the ease of implementation on actual signal processing chips, all the radars were set to have the same sweep time but different sweep bandwidths to generate LFM signals with different slopes. The victim radar was configured in the up-frequency modulation mode with a sweep bandwidth of 300MHz. Interferer radar one was configured in the down-frequency modulation mode with a sweep bandwidth of 300MHz, and interferer radar two was configured in the up-frequency modulation mode with a sweep bandwidth of 500MHz. The pulse repetition time (PRT) of the radars was set to be different to increase the probability of mutual interference. For a single chirp, the number of sampling points is 400. A window length of 32 is used for the STFT and the step between the sliding windows is set to 4. The signal in each window is transformed with a 128-point FFT to obtain the TF spectrogram. For a chirp sequence, a total of 128 chirps are included as a coherent processing unit for an RD response.
§.§ Stationary Target Experiment
A corner reflector was placed at 20m in front of the victim radar to simulate a typical strong target. The time domain signal and the TF spectrogram of the received signal are shown in Fig. <ref>. It can be seen that two forms of interference related to the interferer radars are observed in the time domain as shown in Fig. <ref> (a). The interference with short duration is introduced by the interferer radar one, and the one with long duration is introduced by the interferer radar two. Fig. <ref> (b) shows the TF features of the received signal that includes LFM-like interference and the single frequency-like target echo.
The range profiles after interference mitigation for all the tested methods are shown in Fig. <ref>. Overall, the TF domain methods achieve better interference mitigation performance on the experimental data than the time domain methods. The TF domain methods result in a lower noise floor level of about -30dB near the target, whereas the time domain methods have a noise floor level of about -25dB.
Fig. <ref> shows the PSLR and the ISLR of the corner reflector in the range profile. The quantitative results show that the TF domain methods are superior to the time domain methods in both the PSLR and the ISLR, except for the PSLR of the CFAR-Burg method. Unlike the PSLR, which reflects the sidelobe level at a certain point, the ISLR reflects the average sidelobe level within a certain range, so it is more accurate for evaluating the interference suppression performance. Therefore, even though the PSLR of the CFAR-Burg method is higher than that of some time domain methods, it still achieves better performance overall. Among the TF domain methods, the proposed method achieves the best performance on both the PSLR and the ISLR.
The effects before and after interference localization and mitigation for the three TF domain methods are shown in Fig. <ref>. For the STFT-AR method, the interference location in this experiment is manually marked since no method for interference detection is given in the original literature <cit.>, so it achieves a better interference mitigation effect. However, the performance of the STFT-AR method in practice will be lower than the results in this paper, since automatic interference detection is not as accurate as manual marking.
For the CFAR-Burg method, two factors affect the interference location detection as shown in Fig. <ref> (c). One factor is that two adjacent interference signals in the same frequency slice raise the detection thresholds for each other during the CFAR detection, causing missed detections of the interference at certain frequencies. The other factor is false alarms caused by the low SNR, which leads to the loss of useful information in the TF domain. Compared with the STFT-AR and the CFAR-Burg methods, the proposed method detects the interference locations more accurately in the measured data due to the utilization of the structural information in the TF domain, and therefore achieves the best interference mitigation effect.
§.§ Pedestrian Experiment
A pedestrian walking back and forth at a range of 30m to 40m from the victim radar is used to evaluate the performance of the tested methods in the interference scenario as shown in Fig. <ref> (b). The RD responses are obtained for a coherent processing unit, i.e., the 128 chirps, with 400 time sampling points per chirp, by performing the range FFT and the Doppler FFT with Hamming window <cit.>. The processing flow of the RD responses is similar to the simulation implemented in Section <ref>. For the pedestrian, the RD responses obtained by the tested methods are shown in Fig. <ref>.
In the pedestrian experiment, the presence of a strong target such as the corner reflector makes the difference in power between the target echo and the interference no longer significant, which increases the difficulty of interference detection and localization. In this case, the interference mitigation by the time domain methods, despite the improvement, is not sufficient to achieve the required SNR for pedestrian detection. Therefore, the time domain methods cannot provide an effective measurement of the pedestrian's range and velocity in the RD response as shown in Fig. <ref> (b) to Fig. <ref> (f). For the CFAR-Burg method, the pedestrian just barely appears in the RD response after interference mitigation due to the existence of false alarms and missed interference detections as shown in Fig. <ref> (g). The missed interference detection is a failure point of the CA-CFAR approach in interference-dense scenarios. When the locations of multiple interference signals are close to each other in time as shown in Fig. <ref> (c), the CFAR detector overestimates the noise level, which raises the detection threshold and leads to the missed detection. The same phenomenon of missed interference detection can be seen in the original literature of the CFAR-Burg method <cit.>. For the proposed method, the interference can be better mitigated in the pedestrian experiment, resulting in the correct detection of the pedestrian's range and velocity as shown in Fig. <ref> (h).
§.§ Algorithmic Runtime Analysis
The runtime results of the tested methods are evaluated by using the data from the corner reflector experiment in Section <ref>. For a chirp signal with interference, the interference mitigation process from the tested methods is applied and the corresponding runtime is recorded. The MATLAB version used in the experiment is R2021a and the computer configurations are AMD Ryzen 7 5800H CPU and 16GB DDR4 3200MHz RAM. The runtime results of the tested methods are shown in Table <ref>.
Overall, the TF domain methods have longer runtimes than the time domain methods because they expand a one-dimensional time signal into a two-dimensional TF spectrogram, and implement the interference mitigation processes in the TF domain. For the STFT-AR method, since there is no interference detection and localization step, its runtime is mainly consumed in the STFT and the signal reconstruction process. For the CFAR-Burg and the proposed methods, due to the presence of the interference detection and localization steps, their runtimes increase considerably compared with the STFT-AR method, which indicates that interference detection and localization in the TF domain are the most time-consuming parts of the TF methods. In addition, the Hough transform used in the proposed method detects lines by a search process in a two-dimensional parameter space and therefore has the longest running time. However, since the search grids of the Hough parameter space are independent of each other, parallel processing can be considered for reducing the runtime in practice.
§ CONCLUSIONS
In this paper, the mutual interference of automotive radars in the TF domain is analyzed. Based on the linear characteristic of the interference in the TF domain, a power-weighted Hough transform interference detection approach is proposed, and AR-model-based prediction is then used for interference mitigation. Compared with the existing interference mitigation methods implemented in the time domain, the proposed method has the ability to locate the interference more accurately in the TF domain, and retains more useful signals in the interference mitigation process. Compared with the STFT-AR and the CFAR-Burg methods implemented in the TF domain, the proposed method accumulates the interference power based on structural information for improving detection and localization performance. As a result, the target echo can be recovered more accurately and robustly under low SNR conditions.
Yanbing Li (M'22) received the M.S. and Ph.D. degrees in signal and information processing from Xidian University, Xi'an, China, in 2009 and 2013, respectively.
He is now an associate professor with the School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China. His research interests include radar system design, radar signal processing, radar target recognition, and the applications of radar sensing techniques in autonomous driving, intelligent transportation and internet of things.
Weichuan Zhang received the M.S. degree in signal and information processing from Southwest Jiaotong University, China, and the Ph.D. degree in signal and information processing from the National Lab of Radar Signal Processing, Xidian University, China. He is a research fellow at Griffith University, QLD, Australia. His research interests include computer vision, image analysis, and pattern recognition. He is a member of the IEEE.
Lianying Ji received the B.S. degree from Dalian Maritime University, in 2004, and the Ph.D. degree from the Beijing Institute of Technology, in 2009.
He has been with the School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, since 2009. In 2010, he was a visiting researcher at the China-Singapore Institute of Digital Media, Singapore. He is now the CTO of Beijing Muniu Linghang Technology Company. His technical contributions have been in the areas of biomedical information processing and mmWave radar signal processing.
|
http://arxiv.org/abs/2307.06264v1 | 20230712160339 | Realizing a tunable honeycomb lattice in ABBA-stacked twisted double bilayer WSe$_2$ | [
"Haining Pan",
"Eun-Ah Kim",
"Chao-Ming Jian"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mes-hall"
] |
The ideal honeycomb lattice, featuring sublattice and SU(2) spin rotation symmetries, is a fundamental model for investigating quantum matters with topology and correlations.
With the rise of the moiré-based design of model systems, realizing a tunable and symmetric honeycomb lattice system with a narrow bandwidth can open access to new phases and insights.
We propose the ABBA-stacked twisted double bilayer WSe_2 as a realistic and tunable platform for reaching this goal.
Adjusting the twist angle allows the bandwidth and the ratio between hopping parameters of different ranges to be tuned. Moreover, the system's small bandwidth and spin rotation symmetry enable effective control of the electronic structure through an in-plane magnetic field. We construct an extended Hubbard model for the system to demonstrate this tunability and explore possible ordered phases using the Hartree-Fock approximation.
We find that at a hole filling of ν = 2 (two holes per moiré unit cell), an in-plane magnetic field of a few Tesla can “dope” the system from a semimetal to a metal. Interactions then drive an instability towards a canted antiferromagnetic insulator ground state. Additionally, we observe a competing insulating phase with sublattice charge polarization. Finally, we discuss the experimental signatures of these novel insulating phases.
Realizing a tunable honeycomb lattice in ABBA-stacked twisted double bilayer WSe_2
Haining Pan, Eun-Ah Kim, and Chao-Ming Jian
August 12, 2023
==================================================================================
§ INTRODUCTION
The ideal honeycomb lattice with both sublattice and SU(2) spin rotation symmetries is an important platform for studying many-body physics with topology and electronic correlations. Graphene has been widely studied as a material realization of this lattice. Yet, the system's wide bandwidth limits the exploration of the full phase space of honeycomb lattice systems. Realizing a symmetric honeycomb lattice with a narrow bandwidth and enhanced tunability would significantly expand the accessible phase space, enabling the exploration of higher doping levels and stronger correlations. Among other possibilities, this advancement would facilitate experimental investigations of various intriguing correlated phases proposed earlier, such as charge and spin density waves <cit.>, chiral superconductivity <cit.>, and topological Mott insulators <cit.>. Over the past few years, moiré superlattices in transition metal dichalcogenides (TMD) multilayer structures have emerged as a highly-tunable platform for quantum simulation of two-dimensional lattice models with narrow bandwidth <cit.>. Triangular superlattices <cit.> and honeycomb superlattices with spin-orbit coupling (SOC) or asymmetric sublattices <cit.> have been realized. In this study, we aim to find a twisted TMD multilayer structure that realizes an ideal honeycomb lattice with both sublattice and SU(2) spin rotation symmetries.
Theoretically, a strategy to create an ideal emergent honeycomb lattice with sublattice and SU(2) spin symmetries in TMD multilayer structures follows three criteria as first pointed out in Refs. <cit.>:
(1) The Γ-valley of each TMD layer, where SOC is negligible, appears at the valence band edge (rather than the K/K'-valleys), forming an emergent lattice for the doped holes <cit.>.
(2) A symmetry relates the independent high-symmetry-stacking MX and XM sites [see Fig. <ref>(b)], which form the two sublattices of the emergent honeycomb lattice;
(3) The energy at the MX and XM sites is higher than at the MM site (another high-symmetry stacking location) <cit.>. However, previous papers proposed these strategies only for TMD bilayers. The correct energetics (in criteria (1) and (3)) in these proposals is yet to be confirmed and realized by experiments. For example, Refs. <cit.> have experimentally identified the Γ-valley physics in twisted bilayer WSe_2 but found it to appear at energies well below the valence band edge, which is located at the K/K'-valleys.
Here, we pursue realizing an ideal emergent honeycomb lattice in ABBA-stacked twisted double bilayer WSe_2 (tdbWSe_2), i.e., two AB- and BA-stacked WSe_2 bilayers with a small relative twist angle [see Fig. <ref>(a)]. As experimentally shown in a closely related system, the ABAB-stacked tdbWSe_2, the increase in the number of layers enhances the interlayer hybridization driving the Γ-valley to the valence band edge <cit.>. However, the ABAB-stacked tdbWSe_2 does not have the symmetry needed for criterion (2) and only realizes a Γ-valley-based emergent triangular lattice. In this paper, we show that the ABBA-stacked tdbWSe_2 can pass the three criteria and realize a tunable ideal honeycomb lattice with both sublattice and SU(2) spin rotation symmetries.
The remainder of this paper is organized as follows. In Sec. <ref>, we provide the microscopic analysis that the ABBA-stacked tdbWSe_2 can pass the three criteria above and, hence, leads to an emergent honeycomb lattice with both sublattice and SU(2) spin rotation symmetries. We construct a continuum model for this tdbWSe_2 system, enabling estimations of the hopping parameters of the emergent honeycomb lattice. In Sec. <ref>, we demonstrate the intriguing tunability of this system by studying orders induced by an in-plane magnetic field at the hole filling ν =2, namely, the half-filling of the emergent honeycomb lattice. The phase diagram is obtained using the Hartree–Fock analysis of the extended Hubbard model on the emergent honeycomb lattice. Finally, in Sec. <ref>, we summarize our results and experimental signatures of the predicted orders.
§ MODELING THE EMERGENT HONEYCOMB LATTICE
The ABBA-stacked tdbWSe_2 can satisfy the three criteria above for the following reasons. First, this structure's increased number of layers leads to stronger interlayer tunnelings compared to a single bilayer, pushing the Γ-valley to the top of the valence bands <cit.>. Second, inherited from the D_3h point group symmetry of each WSe_2 layer, the ABBA-stacked tdbWSe_2 manifests a three-fold rotation C_3z around the z-axis, and a two-fold rotation C_2y around the in-plane y-axis, where the y-axis is chosen to align with the xy-plane projection of a bond between the tungsten atom and the selenium atom in the monolayer WSe_2.
These crystal symmetries ensure a sublattice-symmetric honeycomb lattice composed of the MX and XM sites. Third, from the ab initio tight-binding model in Appendix <ref>, we find the correct energy hierarchy amongst the MX, XM, and MM sites to ensure an emergent honeycomb lattice.
To estimate the hopping parameters in the emergent honeycomb lattice based on the three criteria, we need to obtain the dispersion of the two topmost moiré valence bands. Adapting the approach in Ref. <cit.>, we begin with a four-layer continuum Hamiltonian for our twisted ABBA-stacked system that preserves C_2y, C_3z, and SU(2) spin rotation symmetries (due to negligible SOC near the Γ-valley <cit.>):
H_4L=-ħ^2 k^2/2m + [ Δ_1(r) Δ_12(r) 0 0; Δ_12^†(r) Δ_2(r) Δ_23(r) 0; 0 Δ_23^†(r) Δ_3(r) Δ_34(r); 0 0 Δ_34^†(r) Δ_4(r) ].
H_4L is identical for both spin species. The intralayer potentials take the forms Δ_1(r)=Δ_4(r)=V_1 for the first and fourth layer, and Δ_2(r)=V_2^(0)+2V_2^(1)∑_i=1,3,5cos(G_i·r+ϕ) and Δ_3(r)=V_2^(0)+2V_2^(1)∑_i=1,3,5cos(G_i·r-ϕ) for the second and third layer (the opposite sign of ϕ as a result of C_2y symmetry).
The interlayer tunnelings take the forms Δ_12(r)=Δ_34(r)=V_12 and Δ_23(r)=V_23^(0)+2 V_23^(1)∑_i=1,3,5cos(G_i·r).
Here, G_i=4π/√(3)a_M( cos((i-1)π/3),sin((i-1)π/3) ) for i=1,2,...,6 are the six first-shell moiré reciprocal lattice vectors (see Fig. <ref> (c)). The moiré lattice constant a_M is determined by the small twist angle θ through a_M ≈ a_0/θ with the lattice constant a_0 of monolayer WSe_2 being 3.28Å <cit.>. At θ∼ 2^∘, a_M ≈ 10 nm. The moiré reciprocal lattice vectors G_i only show up in Δ_2=Δ_3 and Δ_23 because the small-angle twisting only appears between the second and third layer of WSe_2 in the twisted double bilayer structure.
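To make this construction concrete, the following minimal Python sketch (ours, not code released with this work) builds the six first-shell moiré reciprocal vectors G_i and evaluates the 4×4 intralayer-potential/interlayer-tunneling matrix defined above at a given real-space position; the parameter values are the estimates quoted below and should be treated as illustrative.

import numpy as np

a0 = 0.328                  # monolayer WSe2 lattice constant in nm
theta = np.deg2rad(2.0)     # twist angle
aM = a0 / theta             # moire lattice constant, ~9.4 nm at 2 degrees

# six first-shell moire reciprocal lattice vectors G_i, i = 1,...,6
G = [4 * np.pi / (np.sqrt(3) * aM)
     * np.array([np.cos((i - 1) * np.pi / 3), np.sin((i - 1) * np.pi / 3)])
     for i in range(1, 7)]

# parameter estimates quoted in the text (meV); purely illustrative here
V1, V2_0, V2_1, phi = 200.0, -159.0, -8.0, -0.17
V12, V23_0, V23_1 = 184.0, 356.0, -9.0

def potential_matrix(r):
    """4x4 intralayer-potential / interlayer-tunneling matrix at position r (meV)."""
    cp = sum(np.cos(G[i] @ r + phi) for i in (0, 2, 4))   # i = 1, 3, 5 in the text
    cm = sum(np.cos(G[i] @ r - phi) for i in (0, 2, 4))
    c0 = sum(np.cos(G[i] @ r) for i in (0, 2, 4))
    D2 = V2_0 + 2 * V2_1 * cp
    D3 = V2_0 + 2 * V2_1 * cm
    D23 = V23_0 + 2 * V23_1 * c0
    return np.array([[V1,  V12, 0.0, 0.0],
                     [V12, D2,  D23, 0.0],
                     [0.0, D23, D3,  V12],
                     [0.0, 0.0, V12, V1]])

print(potential_matrix(np.array([0.0, 0.0])))   # matrix at r = 0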
We now estimate the parameters in the intralayer potentials and interlayer tunnelings in Eq. (<ref>).
These potentials and tunnelings in the tdbWSe_2 evaluated at the MM, and MX/XM sites should be fitted according to the band structures of
the untwisted ABBA-stacked double bilayer WSe_2 in the three high-symmetry stacking configurations MM, MX, and XM [see Appendix <ref> for more details] <cit.>.
The angle-independent parameters for the potentials and tunnelings are estimated as follows:
(V_1,V_2^(0),V_2^(1))=(200,-159,-8) meV, ϕ=-0.17, and (V_12,V_23^(0),V_23^(1))=(184, 356, -9) meV. We comment that these parameter estimates are obtained based on assumptions on the interlayer distances in the double bilayer WSe_2 detailed in App. <ref>. The assumed interlayer distances produce a representative example of tdbWSe_2 satisfying the energetics criteria allowing us to estimate bandwidth and demonstrate the tunability of the emergent honeycomb lattice, which are the main purposes of our work. With the above parameters, we can solve Hamiltonian (<ref>) for the moiré valence bands of tdbWSe_2. For example, Figure <ref>(d) shows the band structure along the γ-m-κ-γ path in the moiré Brillouin zone (mBZ) at a twist angle of θ = 2^∘.
From the two topmost moiré valence bands, especially the presence of a Dirac cone between them, we confirm that ABBA-stacked tdbWSe_2 with a hole filling less than 4 per moiré unit cell captures an emergent honeycomb lattice with both sublattice and SU(2) spin rotation symmetries. At a twist angle of θ=2^∘, the bandwidth is around
10 meV (which is much smaller than the ∼2 eV bandwidth of graphene <cit.>).
At hole filling ν =2, i.e., two holes per moiré unit cell, the Fermi level is precisely at the Dirac point protected by C_3z and C_2y symmetries. This results in a semimetal (noninteracting) ground state, as shown in Fig. <ref>(d). The narrow bandwidth of the tdbWSe_2-realized honeycomb lattice allows for an intriguing way to tune the electronic structure: a realistic in-plane magnetic field can “dope" the semimetal and create finite Fermi surfaces located around both κ and κ' points of the mBZ for both spin species. Note that such tuning is impossible for the K/K'-valley-based TMD moiré system due to the strong SOC therein.
Without loss of generality, we assume the in-plane magnetic field B is applied along the x direction, resulting in a Zeeman energy of h_x=1/2g μ_B B, where μ_B is the Bohr magneton (5.79 ×10^-2 meV/T) and the g-factor is roughly 2 due to the negligible SOC near the Γ-valley.
For example, at a twist angle of θ = 2^∘, a magnetic field of 13 T, typically achievable experimentally, creates a Zeeman splitting energy of 0.8 meV. This Zeeman splitting leads to a finite Fermi surface of each spin species roughly
halfway in energy between the Dirac point (at the κ point in the mBZ) and the van Hove singularity (at the m point), as shown in Fig. <ref>.
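The quoted field scale can be checked with a one-line estimate (our arithmetic, using only the numbers given above):

g, mu_B, B = 2.0, 5.79e-2, 13.0       # g-factor, Bohr magneton in meV/T, field in T
h_x = 0.5 * g * mu_B * B              # Zeeman energy h_x = (1/2) g mu_B B
print(h_x)                            # ~0.75 meV, consistent with the ~0.8 meV quoted above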
In the presence of the in-plane field, a magnetic particle-hole instability that opens a charge gap is expected even at weak interactions because of the nesting between the Fermi surfaces of the two spin species. This instability would lead to a magnetic order that spontaneously breaks the remaining U(1) spin rotation symmetry about the x-axis, namely, the in-plane field direction.
Conceptually, a similar in-plane-field-induced particle-hole instability applies to graphene. The previous work <cit.> shows that the resulting state is a canted Néel antiferromagnetic (AF) insulator, whose magnetizations projected onto the plane perpendicular to the applied field are opposite in the two sublattices of the honeycomb. However, realizing such a state in graphene requires a very high in-plane field of 10^2∼ 10^3 T <cit.>. As we show below, a similar canted Néel AF insulator can be found in the tdbWSe_2-realized honeycomb lattice, whose bandwidth is engineered around the meV scale, at a realistic in-plane field of a few Tesla.
§ EXTENDED HUBBARD MODEL AND PHASE DIAGRAM UNDER IN-PLANE FIELD
Aside from the bandwidth distinction with graphene, the band structure of the tdbWSe_2-realized honeycomb lattice exhibits a more pronounced contribution from the next-nearest-neighbor (NNN) hopping t_2 on the honeycomb lattice. The effect of t_2 is manifested by the asymmetry (about the Dirac cone) between the two topmost valence bands shown in Fig. <ref> (d). In contrast to the graphene-inspired model of <cit.> with only the nearest-neighbor (NN) hopping t_1 and the on-site Hubbard interaction, we consider an extended Hubbard model with longer-range hoppings and interactions pertaining to our tdbWSe_2-realized honeycomb lattice system. Our Hamiltonian consists of three terms: the hopping H_ h, the Zeeman energy H_ Z from the in-plane magnetic field, and the extended Hubbard interaction H_int. The term H_ h = -t_1 ∑_⟨ ij⟩∑_σ c_iσ^† c_jσ - t_2 ∑_⟨⟨ ij⟩⟩∑_σ c_iσ^† c_jσ contains both the NN hopping t_1 and the NNN hopping t_2, where ⟨ ij⟩ and ⟨⟨ ij⟩⟩ denote NN and NNN pairs of sites, respectively. Here, c_iσ is the fermion operator of the hole with spin σ at the site i of the emergent honeycomb lattice. Both t_1 and t_2 are tunable by the twist angle θ.
For instance, at θ=2^∘,
we estimate t_1=1.8 meV and t_2=0.2 meV based on the moiré band structure obtained from the continuum model Eq. (<ref>). The ratio t_2/t_1 varies as we change the twist angle: 0.018 for θ=1^∘, 0.11 for θ=2^∘, and 0.12 for θ=3^∘. In the search for possible phases below, we treat t_2/t_1 as a parameter to explore the effect of longer-range hopping. The Zeeman energy H_ Z is given by H_ Z=h_x ∑_i( c_i↑^† c_i↑ - c_i↓^† c_i↓), where the spin quantization axis is conveniently chosen along the x direction. Under H_ Z, the spin symmetry is reduced to a U(1) spin rotation in the yz plane. We fix h_x=0.4 t_1 in the following. For the twist angle θ=2^∘, h_x=0.4 t_1 (corresponding to a 13 T in-plane magnetic field)
sets the Fermi level roughly halfway between the Dirac point at the κ/κ' point and the van Hove singularity at the m point of the mBZ.
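A minimal tight-binding sketch of this noninteracting part (our own Python construction, not code from this work) illustrates how t_1, t_2, and the Zeeman term enter; the lattice geometry, parameter values, and overall sign conventions are assumptions for illustration only.

import numpy as np

t1, t2 = 1.8, 0.2                      # meV, the theta = 2 deg estimates above
h_x = 0.4 * t1                         # Zeeman energy (roughly a 13 T field)
aM = 1.0                               # moire lattice constant (arbitrary units)

a1 = aM * np.array([1.0, 0.0])         # Bravais vectors of the emergent lattice
a2 = aM * np.array([0.5, np.sqrt(3) / 2])
d1 = (a1 + a2) / 3.0
deltas = [d1, d1 - a1, d1 - a2]        # three NN vectors (sublattice A to B)
nnn = [a1, a2, a2 - a1]                # three of the six NNN vectors

def bloch_h(k, sigma):
    """2x2 sublattice Bloch Hamiltonian for spin sigma = +1 (up) or -1 (down)."""
    f1 = sum(np.exp(1j * k @ d) for d in deltas)
    f2 = 2.0 * sum(np.cos(k @ a) for a in nnn)
    diag = -t2 * f2 + sigma * h_x      # NNN hopping plus the Zeeman shift
    return np.array([[diag, -t1 * f1],
                     [-t1 * np.conj(f1), diag]])

kappa = np.array([4 * np.pi / (3 * aM), 0.0])   # corner of the moire BZ
for sigma in (+1, -1):
    print(sigma, np.linalg.eigvalsh(bloch_h(kappa, sigma)))
# At kappa the two sublattice bands touch for each spin, and the two spin
# species are split by 2 h_x, producing the nested pockets discussed above.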
The range of interactions in TMD-based superlattices typically extends beyond on-site interactions. Here, we consider the extended Hubbard interaction
H_int=U_0 ∑_i c_i↑^† c_i↑ c_i↓^† c_i↓ + U_1 ∑_⟨ ij⟩∑_σ,σ'=↑,↓ c_i,σ^† c_i,σ c_j,σ'^† c_j,σ' ,
with the on-site interaction strength U_0, and the NN Hubbard interaction of strength U_1. In the following study of the phase diagram, we treat U_0 and U_1 as two free parameters to explore their effect on the canted Néel AF insulator and other phases.
We start with understanding the effect of the NNN hopping t_2, and present the ground-state phase diagram obtained using the Hartree-Fock method in Fig. <ref>(b). In this phase diagram,
we consider a parameter range U_0 ∈ [t_1, 9t_1] and t_2 ∈ [0, 0.6t_1] while setting U_1 = 0. Using the numerical Hartree-Fock method, we find three possible phases: the canted Néel AF insulator, canted √(3)×√(3) spin density wave (SDW), and a metal. At hole filling ν = 2, the Fermi surfaces of the two spin species are always perfectly nested for t_2 ≲ t_1/3 at both the κ and κ' points of the mBZ. This perfect nesting results in a Fermi surface instability towards a robust canted Néel AF insulator, spontaneously breaking the U(1) spin rotation symmetry in the yz plane as mentioned above.
Beyond t_2>t_1/3, the system's ground state is metallic at small U_0. The metallic state is due to an extra Fermi surface of the spin-down hole appearing around the γ point when t_2>t_1/3. Consequently, the Fermi surfaces of the two spin species are no longer nested at ν =2. As U_0 increases, the system can enter an insulating SDW phase with an enlarged √(3)×√(3) unit cell. This SDW phase exhibits a 120^∘ AF order in the yz plane on each sublattice of the honeycomb. The chiralities of the two 120^∘ AF orders are opposite. For the tdbWSe_2-realized honeycomb lattice, t_2/t_1<1/3 for twist angle θ<4^∘. Hence, restricted to only on-site interactions, the canted Néel AF insulator is the most relevant in-plane-field-induced phase for this material at hole filling ν=2.
In addition to the NNN hopping t_2, we also study the effect of the NN Hubbard interaction U_1. We present the phase diagram obtained from the Hartree-Fock calculation as a function of U_0 and U_1 as shown in Fig. <ref>(c). For this phase diagram, we set t_2 = 0 for simplicity. The canted Néel AF insulator is robust at small U_1. As U_1 increases, the system is driven to an insulating phase with the charge distribution polarized to one of the sublattices, spontaneously breaking the system's C_2y symmetry. In contrast to the canted Néel AF insulator, the yz-plane U(1) spin rotation symmetry is preserved in this phase. The phase boundary between the AF insulator and the sublattice-polarized insulator is roughly at U_1∼1/3U_0 + 2/3h_x.
This is the exact phase boundary in the strong interaction limit (where the hopping terms H_ h are neglected). In the tdbWSe_2-realized honeycomb
lattice, the screening of the Coulomb interaction determines the relative strength between U_0 and U_1. Stronger screening favors the canted Néel AF insulator, while weaker screening favors the sublattice-polarized insulator. In Appendix <ref>, we present an analytical mean-field analysis in the weak-coupling limit U_0,1≪ t_1 and find that the canted √(3)×√(3) SDW insulator can also compete with the canted Néel AF insulator when U_1 /U_0 increases.
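The quoted strong-coupling boundary can be reproduced by a simple energy count per moiré unit cell (our own bookkeeping, assuming the hopping H_ h is dropped and each charge configuration optimizes its Zeeman energy). Placing one hole on each sublattice site costs the three NN bonds an energy 3U_1 and gains -2h_x from the Zeeman term, giving E_AF ≈ 3U_1 - 2h_x; placing both holes on the same sublattice site costs the on-site repulsion U_0 with vanishing NN and Zeeman contributions, since the two holes necessarily carry opposite spins, giving E_pol ≈ U_0. Equating the two energies yields U_1 = 1/3U_0 + 2/3h_x, matching the boundary quoted above.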
§ SUMMARY AND EXPERIMENTAL IMPLICATION
In summary, we show that the ABBA-stacked tdbWSe_2 can serve as a realistic and tunable platform to simulate Γ-valley honeycomb lattice with both sublattice and SU(2) spin rotation symmetries. We develop a continuum model to estimate the relevant hopping parameters of this honeycomb lattice model, including the NN and the NNN hoppings t_1,2. The small bandwidth, found to be at the meV scale, and the ratio t_2/t_1 are both tunable via the twist angle θ. We show that the small bandwidth enables the effective control of the electronic structure via an in-plane magnetic field. At a hole filling ν=2, an in-plane field of a few Teslas can dope this system from a Dirac semimetal to a metal having finite Fermi surfaces with instabilities. To demonstrate this tunability and understand the resulting instabilities, we construct an extended Hubbard model for this honeycomb lattice and perform a numerical Hartree-Fock calculation for the ground-state phase diagram at ν=2. We find a robust canted Néel AF insulating phase when the on-site Hubbard interaction U_0 dominates. This phase is conceptually similar to the magnetic-field-induced canted AF insulator studied in the context of graphene by Ref. <cit.>. However, our finding resides in a more realistic parameter regime. Moreover, a competing sublattice-polarized insulator is found in our extended Hubbard model when the NN Hubbard interaction U_1 increases beyond a threshold.
The results presented in this work have direct experimental implications. First, for ABBA-stacked tdbWSe_2 at hole filling ν=2, an in-plane magnetic field is predicted to induce an insulating ground state. Consequently, at a finite temperature, we expect a large magnetoresistance as a function of the in-plane field around the Dirac semimetal state (at zero field). There are two possible insulating states: the canted Néel AF insulator and the sublattice-polarized insulator. The former can be detected through its spin texture perpendicular to the in-plane field direction, potentially using spin-polarized scanning tunneling microscopy <cit.>. The finite-temperature phase transition into the canted Néel AF order is a Berezinskii–Kosterlitz–Thouless transition associated with the remaining U(1) spin rotation symmetry under the in-plane field. The sublattice-polarized insulator has no non-trivial spin texture (other than the magnetization along the in-plane field direction). Instead, the spontaneous charge polarization in the two sublattices of the emergent honeycomb lattice leads to a finite electric polarization in the tdbWSe_2 sample along the z direction. Consequently, a signature of this sublattice-polarized insulating phase is the hysteresis of
the z-axis electric polarization as a function of the z-direction electric field. The finite-temperature phase transition into the sublattice-polarized phase is in the Ising universality class. Switching between the two insulating phases can be implemented by tuning the ratio between U_0 and U_1, which is achievable via adjusting the twist angle and/or the sample's distance from the gates.
Acknowledgements.
We thank Kin Fai Mak, Jie Shan, and Liguo Ma for stimulating discussions. H.P. acknowledges the helpful discussions with Fengcheng Wu, Ming Xie, and Dan Mao.
H.P. and E-A.K. acknowledge funding support from the National Science Foundation (Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials (PARADIM)) under Cooperative Agreement No. DMR-1539918. C.-M.J. is supported by a faculty startup grant at Cornell University.
§ BAND STRUCTURE CALCULATION AND PARAMETER ESTIMATE
To show that criteria (1) and (3) introduced in the main text can be satisfied by the tdbWSe_2, the first task is to obtain its band structure.
However, it is not easy to derive the band structure of tdbWSe_2 directly at the atomic level because the emergent moiré superlattice contains up to ten thousand atoms even within just one moiré unit cell. One way to approximate the band structure of the twisted system is to use a continuum model with parameters fitted according to the untwisted system in the high-symmetry stacking configurations (MM, MX, and XM) that appear in the moiré unit cell of the twisted system <cit.>. We will provide the details later in this section.
Therefore, in this section, we will first discuss the method to obtain the band structure of untwisted systems.
We calculate the band structure of the untwisted system using the ab initio tight-binding Hamiltonian <cit.>. The logic is to first construct a tight-binding model for the monolayer by considering five d orbitals of one tungsten atom and six p orbitals of the two selenium atoms in the monolayer unit cell, and including spin-orbit coupling over the entire monolayer Brillouin zone (BZ) (i.e., not just the small mBZ). It is called ab initio because hopping parameters are evaluated from the density functional theory.
After constructing the tight-binding model for the monolayer, we can model the interlayer tunneling between the adjacent layers by considering the hopping between the two selenium atoms on the adjacent layers. Combining these considerations, we can construct the Hamiltonian for the (untwisted) double bilayer WSe_2.
Finally, we diagonalize the matrix of size 88 by 88 (11 orbitals × 2 spins × 4 layers) at each momentum to obtain the band structure E_r(k), where r={MM, XM, MX} represents the high-symmetry stacking configuration, and k is the momentum in the entire monolayer BZ.
For the full recipe and the numerical details, we refer to Ref. <cit.>.
Our goal is to show that tdbWSe_2 can satisfy criteria (1) and (3) so that a honeycomb lattice with both sublattice and SU(2) spin rotation symmetries emerges at the moiré scale.
That is to say, we require E_r(Γ)>E_r(K) for r=MM and MX/XM, and E_MX(Γ)=E_XM(Γ)>E_MM(Γ). Obtaining these energies from the model above requires us to make a choice for the interlayer distances in the tdbWSe_2 structure, which are unknown parameters. Our main goal is to demonstrate the possibility and the tunability of the emergent honeycomb lattice from tdbWSe_2 and estimate the relevant energy scale. We will not attempt to calculate the relaxed interlayer distances. Instead, we will make assumptions on them based on the following reasoning.
In light of the C_2y symmetry which flips the layers, the distance d_12 between the first and second layer should be the same as the distance d_34 between the third and fourth layer.
For the distance d_23 between the central two layers: because the MM stacking corresponds to the case where the tungsten atoms (and likewise the selenium atoms) of the two central layers are stacked in perfect alignment, the repulsion from the interlayer interaction should be the strongest there. Therefore, the value of d_23 in the MM stacking configuration should be larger than d_12 and than d_23 in the MX/XM stacking configuration.
With these considerations, we perturb the interlayer distances d_12, d_23 for the MM stacking, and d_23 for the MX/XM stacking, around the interlayer distance from the bulk WSe_2 data, ∼ 6.5 Å <cit.>, and find that the system can satisfy the energetics hierarchy in criteria (1) and (3) in a large parameter space around d_12=6 Å.
As an example that passes criteria (1) and (3), we choose the following interlayer distances (d_12,d_23,d_34)=(6,6.25,6) Å for MM stacking and (d_12,d_23,d_34)=(6,5.8,6)Å for MX and XM stacking for the estimation of the parameters in the continuum model.
Note that this choice has smaller layer distances than the bulk data. In principle, one can consider adding pressure to reduce the interlayer distance if needed.
Now, we present the valence band structure for the untwisted MM stacking in Fig. <ref>(a), and for the untwisted MX/XM stacking in Fig. <ref>(b). The corresponding valence band edge energies at the Γ and K valleys for these two stackings are listed in Table <ref>, which validate the two criteria.
To model the twisted moiré system from the untwisted system,
we can represent the energy at the high-symmetry-stacking sites in one moiré unit cell using the energies in an untwisted system with the same type of stacking configuration. Then we interpolate them using up to the first harmonics of the moiré periodicity <cit.>.
Therefore, by extracting the 4 topmost valence bands at the Γ-valley for the untwisted systems in the MM and MX/XM stacking configurations, we can estimate the unknown parameters in the moiré continuum Hamiltonian Eq. (<ref>) in the twisted system.
To do that, we first drop the kinetic term in Eq. (<ref>), since we work near the Γ valley, and then diagonalize the potential term with r=MM and r=MX/XM, respectively, which gives four discrete energies for each site. The fitting procedure is to adjust these four energies at r=MM or r=MX/XM to match the energies obtained from the four valence bands of the ab initio tight-binding Hamiltonian at the Γ valley, i.e., the first two columns in Table <ref>.
§ PHASE COMPETITION IN WEAK INTERACTION LIMIT
In this appendix, we present the analytical mean-field study of the Fermi surface instabilities in our extended Hubbard model (with the in-plane magnetic field) in the weak-coupling limit U_0,1≪ t_1 at hole filling ν=2. We find that, at zero temperature, dominant instabilities include both the canted Néel AF order and the canted √(3)×√(3) SDW order. Their competition is controlled by the ratio U_1/U_0. We also discuss the relationship between our weak-coupling analysis and the Hartree-Fock phase diagrams in Fig. <ref> obtained in the stronger coupling regime U_0 ≳ t_1.
We start with the noninteracting part of the Hamiltonian H^0=H_h+H_Z containing the hopping term and the Zeeman energy. At hole filling ν=2, for small t_2, h_x compared to t_1, the Fermi surfaces are centered at the κ and κ' points of the mBZ. We expand the dispersion around the κ/κ' point:
H^0_κ,σ =∑_kc⃗_κ,σ^ †(k) ( h_x σΛ^0+ v_F Λ⃗·k) c⃗_κ,σ(k),
H^0_κ',σ =∑_kc⃗_κ',σ^ †(k) ( h_x σΛ^0- v_F Λ⃗^* ·k) c⃗_κ',σ(k).
Here, k is the momentum measured from the κ/κ' point. The dispersion is expanded to the linear order of k, which suffices to study the Fermi surface instabilities. The Fermi velocity v_F=√(3)/2t_1 a_M, which is ∼2.2×10^4m/s at a twist angle of θ = 2^∘. c⃗_τ,σ=[ c_τ,A,σ, c_τ,B,σ]^⊺ is the fermionic spinor in the sublattice space at valley τ= κ/κ' with spin σ=↑, ↓ along the in-plane field direction x̂. Λ⃗ comprises the Pauli-X and Y matrices acting on the sublattice space. Note that t_2's contribution only starts to appear in the order-k^2 terms of the dispersion. This contribution is independent of valley, spin, and sublattice. Hence, a small t_2 is unimportant for the Fermi surface instability in the weak-coupling limit.
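The quoted Fermi velocity follows from the numbers above (a quick check of ours):

import numpy as np
hbar = 6.582e-16                       # eV s
t1 = 1.8e-3                            # eV
aM = 0.328e-9 / np.deg2rad(2.0)        # moire lattice constant in m, ~9.4 nm
v_F = (np.sqrt(3) / 2) * t1 * aM / hbar
print(v_F)                             # ~2.2e4 m/s, as quoted above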
At non-zero h_x, the Fermi surfaces of the spin-up fermions are perfectly nested with those of the spin-down fermions. Consequently, particle-hole instabilities towards magnetic ground states with spontaneous hybridization between the two spin species are expected. The magnetic orders in these ground states spontaneously break the U(1) spin rotation symmetry in the yz plane (perpendicular to the in-plane field direction). In the limit h_x → 0, the instabilities towards such magnetically-ordered ground states vanish along with the Fermi surfaces in the noninteracting band structure. Hence, these magnetic orders, to be discussed in detail below, are dubbed in-plane-field-induced magnetic orders.
At hole filling ν=2 (and with finite h_x > 0), we begin our analysis of possible instabilities by writing down the noninteracting bands crossing the Fermi level:
H_τ = ∑_k E_+,↓(k) c_τ,+,↓^†(k) c_τ,+,↓(k)
+ E_-,↑(k) c_τ,-,↑^†(k) c_τ,-,↑(k),
for the two valleys τ =κ,κ' and the two spin species σ=↑,↓. The energies E_±,σ(k) are given by E_±,σ(k)=h_xσ± v_F k with ± playing the role of a band index. This band index ± is locked with the spin index σ for the bands that cross the Fermi level. The fermion operators c_τ, ±, σ are defined as
[ c_κ,+,σ^†(k); c_κ,-,σ^†(k) ]=1/√(2)[ 1 e^iθ(k); 1 -e^iθ(k); ][ c_κ,A,σ^†(k); c_κ,B,σ^†(k) ],
and
[ c_κ',+,σ^†(k); c_κ',-,σ^†(k) ]=1/√(2)[ -e^iθ(k) 1; e^iθ(k) 1; ][ c_κ',A,σ^†(k); c_κ',B,σ^†(k) ],
where θ(k) is the azimuth angle of k. The spin-up (spin-down) fermions have two particle-like (hole-like) Fermi surface pockets, one around the κ point and the other around the κ' point. Hence, possible particle-hole instabilities (at weak couplings) can occur through either intervalley or intravalley hybridizations of the two spin species.
First, we consider the intervalley hybridization associated with the mean-field order parameter c_τ,-,↑^†( k) c_τ̅,+,↓( k). Here, τ̅ represents the opposite valley of τ. The mean-field decomposition of the extended Hubbard interaction corresponding to the intervalley hybridization reads
H_int^inter =-∑_k,k',τ,σ( U_0/2Ncos(θ(k)-θ(k'))+3U_1/2N)c_τ̅,σ̅^†(k') c_τ,σ(k') c_τ,σ^†(k) c_τ̅,σ̅(k)
+ ∑_k,k',τ( U_0/2Ncos(θ(k)-θ(k')) + 3U_1/2N) c_τ,↑^†(k') c_τ̅,↓(k')c_τ̅,↓^†(k) c_τ,↑(k) .
Here, σ̅ represents the opposite spin of σ. N is the system size. The band index is suppressed in the shorthands c_τ,↑(k)=c_τ,-,↑(k) and c_τ,↓(k)=c_τ,+,↓(k) due to the locking of the spin and the band indices for the bands that cross the Fermi level. In the derivation of H_int^inter, the terms irrelevant to the intervalley hybridization of opposite spin species and the terms of linear or higher order in k are dropped. Note that the first (second) term in the quadratic terms in the first line of Eq. (<ref>) only couples to the p-wave (s-wave) channel of the order parameter c_τ,-,↑^†( k) c_τ̅,+,↓( k). We can consider the s-wave channel (with orbital angular momentum l=0) and the p-wave channel (with orbital angular momentum l=1) separately. At the mean-field level, the saddle point equations in the s-wave channel only depend on U_1, while those in the p-wave channel only depend on U_0.
For the intravalley hybridization associated with the mean-field order parameter c_τ,-,↑^† ( k) c_τ,+,↓ ( k), the mean-field decomposition of interaction terms reads
H_int^intra =-U_0/2N∑_k,k',τ,σ( c_τ,σ̅^†(k') c_τ,σ(k') - c_τ̅,σ̅^†(k') c_τ̅,σ(k')) c_τ,σ^†(k) c_τ,σ̅(k)
- 3U_1/2N∑_k,k',τ,σcos(θ(k)-θ(k')) c_τ,σ̅^†(k') c_τ,σ(k') c_τ,σ^†(k) c_τ,σ̅(k)
+ U_0/2N∑_k,k'( c_κ,↑^†(k') c_κ,↓(k') - c_κ',↑^†(k') c_κ',↓(k')) ( c_κ,↓^†(k) c_κ,↑(k) - c_κ',↓^†(k) c_κ',↑(k))
+3U_1/2N∑_k,k',τcos(θ(k)-θ(k')) c_τ,↑^†(k') c_τ,↓(k')c_τ,↓^†(k) c_τ,↑(k).
Similar to the intervalley case, based on the form of H_int^intra, we only need to consider the s-wave and the p-wave channel of the order parameter c_τ,-,↑^† ( k) c_τ,+,↓ ( k). Also, it is natural to expect that the order parameters in the two valleys τ = κ,κ' share the same orbital angular momentum. With this expectation, we find that the saddle point equations in the s-wave channel only depend on U_0 while those in the p-wave channel only depend on U_1.
At zero temperature, we find the two most dominant mean-field solutions with the largest gaps opened around the Fermi surface. The corresponding orders are then expected to have the highest mean-field transition temperatures. One of these mean-field solutions is given by the intervalley hybridization in the s-wave channel with c_κ,-,↑^† c_κ',+,↓ = c_κ',-,↑^† c_κ,+,↓, which gives rise to the canted √(3)×√(3) SDW order, as illustrated in Fig. <ref>(b). In this phase, the gap opened at the Fermi surface is given by 2h_x exp(-2/(3U_1 ρ)), where the density of states ρ can be approximated by the constant 4 h_x/(3π t_1^2) within the energy interval between the two Dirac points (at energies -h_x and h_x relative to the Fermi level). The other dominant mean-field solution is given by the intravalley hybridization in the s-wave channel with c_κ,-,↑^† c_κ,+,↓ = -c_κ',-,↑^† c_κ',+,↓, which gives rise to the canted Néel AF order, as shown in Fig. <ref> (b) and (c). In this canted Néel AF phase, the gap size is given by 2h_x exp(-1/(U_0 ρ)).
Comparing the two gaps, the AF gap is the larger one whenever 1/(U_0 ρ) < 2/(3U_1 ρ); hence we expect the canted Néel AF phase to be the energetically favorable ground state for U_1 < 2/3 U_0 in the weak-coupling limit. When U_1 increases beyond 2/3 U_0, the ground state goes through a first-order phase transition into the canted √(3)×√(3) SDW phase. We comment that it is challenging to extend our numerical Hartree-Fock calculation, which includes all the interaction terms, to the weak-coupling limit because of the exponentially small gap sizes. From the Hartree-Fock phase diagram with U_0>t_1 as shown in Fig. <ref> (c), the sublattice-polarized phase is more energetically favorable than both the canted √(3)×√(3) SDW insulator and the canted Néel AF insulator when U_1 ≳1/3U_0 + 2/3h_x. However, there is no Fermi surface instability towards the sublattice-polarized insulator in the weak-coupling limit. It is interesting to investigate the competition among the canted Néel AF insulator, the √(3)×√(3) SDW insulator, and the sublattice-polarized insulator in the regime with U_0<t_1, which we leave for future study.
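For concreteness, the gap comparison can also be checked numerically (our illustration; the interaction values are arbitrary examples in the weak-coupling regime):

import numpy as np
t1, h_x = 1.8, 0.72                    # meV, the theta = 2 deg values used above
rho = 4 * h_x / (3 * np.pi * t1**2)    # approximate density of states used above
for U0 in (1.0, 2.0):                  # meV
    U1 = 2 * U0 / 3                    # the claimed phase boundary
    gap_AF = 2 * h_x * np.exp(-1 / (U0 * rho))
    gap_SDW = 2 * h_x * np.exp(-2 / (3 * U1 * rho))
    print(U0, gap_AF, gap_SDW)         # the two gaps coincide at U_1 = 2 U_0 / 3
# The exponentially small values also illustrate why the numerical Hartree-Fock
# calculation is hard to push into this weak-coupling regime.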
|
http://arxiv.org/abs/2307.07397v1 | 20230714151545 | Improving Zero-Shot Generalization for CLIP with Synthesized Prompts | [
"Zhengbo Wang",
"Jian Liang",
"Ran He",
"Nan Xu",
"Zilei Wang",
"Tieniu Tan"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
Improving Zero-Shot Generalization for CLIP with Synthesized Prompts
Zhengbo Wang^1,2, Jian LiangCorresponding author ^2,3, Ran He^2,3, Nan Xu^5, Zilei Wang^1, and Tieniu Tan^2,3,4
^1 University of Science and Technology of China
^2 CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences
^3 University of Chinese Academy of Sciences
^4 Nanjing University ^5 Beijing Wenge Group
[email protected], [email protected]
=================================================================================================================================================================================================================================================================================================================================================================================================
With the growing interest in pretrained vision-language models like CLIP, recent research has focused on adapting these models to downstream tasks.
Despite achieving promising results, most existing methods require labeled data for all classes, which may not hold in real-world applications due to the long tail and Zipf's law.
For example, some classes may lack labeled data entirely, such as emerging concepts.
To address this problem, we propose a plug-and-play generative approach called SyntHesIzed Prompts (SHIP) to improve existing fine-tuning methods.
Specifically, we follow variational autoencoders to introduce a generator that reconstructs the visual features by inputting the synthesized prompts and the corresponding class names to the textual encoder of CLIP.
In this manner, we easily obtain the synthesized features for the remaining label-only classes.
Thereafter, we fine-tune CLIP with off-the-shelf methods by combining labeled and synthesized features.
Extensive experiments on base-to-new generalization, cross-dataset transfer learning, and generalized zero-shot learning demonstrate the superiority of our approach.
The code is available at <https://github.com/mrflogs/SHIP>.
§ INTRODUCTION
In recent years, language-supervised vision pretrained models have garnered much attention.
By establishing a link between images and natural language, these models exhibit impressive zero-shot capabilities and remarkable transfer ability <cit.>, demonstrating potential in learning open-world concepts.
One of the most successful large-scale pretrained vision-language models is CLIP <cit.>.
By leveraging a massive dataset of 400 million image-text pairs, it learns to align visual and textual representations from a vision encoder and a language encoder, respectively.
After pretraining, CLIP <cit.> can perform zero-shot recognition by merely providing the class names.
The classification weights are generated by the language encoder through prompting <cit.>.
For instance, we can adopt a prompt template like “a photo of a {class}" as the input to the text encoder, and the weights for classification are then synthesized by substituting the actual class name for “{class}".
The resulting classification score is the cosine similarity between the test image feature and these weights.
To further enhance the performance of CLIP, several previous works have proposed the use of learnable prompts <cit.> or adapters <cit.> to fine-tune the pretrained CLIP to specific downstream tasks.
These methods have achieved significant improvements with only a small amount of labeled data from downstream datasets, which clearly demonstrates their superiority in terms of data efficiency.
However, a significant limitation of these methods is their reliance on having data available for all classes, which can be impractical in real-world applications.
The issue arises due to Zipf's law and the long tail phenomenon, which make it challenging to collect data for rare categories, such as new species or emerging concepts.
As a result, many categories may be devoid of any relevant data, rendering previous methods either invalid <cit.> in such scenarios or subject to a significant drop in performance on the label-only classes <cit.> compared to zero-shot CLIP.
To address this limitation, our goal is to develop a fine-tuning approach that can effectively recognize both categories with and without available data while maintaining the superior data efficiency of previous methods.
In this paper, we propose a plug-and-play generative approach called SyntHesIzed Prompts (SHIP) to improve existing fine-tuning methods.
The main objective is to train a generative model that can synthesize features by providing class names, which enables us to generate features for categories without data.
And we proceed to fine-tune CLIP using both the original labeled and the newly synthesized features with off-the-shelf methods.
However, a major obstacle is that generative models typically require a substantial amount of data to train, which contradicts our goal of data efficiency.
We propose to utilize variational autoencoder <cit.> (VAE) as the framework, which is easier to train and more effective in low-data scenarios compared to models that require adversarial training <cit.>.
Additionally, inspired by previous prompt learning methods <cit.>, we train the generator to produce prompts instead of visual features.
We then feed these prompts and corresponding class names into the frozen CLIP language encoder to obtain synthesized features.
Since CLIP has been pretrained on a large-scale dataset and has aligned visual and language representations, we believe that the pretrained language encoder aids in generating more realistic features.
In summary, this paper aims to address the issue of downstream tasks where some classes have no relevant data while maintaining the superior data efficiency of previous methods.
To achieve this goal, we propose a novel generative approach named SHIP, which can synthesize features for categories without data based solely on their class names.
Notably, our proposed generative method is orthogonal to CLIP fine-tuning methods and can enhance their performance by utilizing synthesized data.
We conduct comprehensive experiments on base-to-new generalization, cross-dataset transfer learning, and generalized zero-shot learning, resulting in state-of-the-art performance.
§ RELATED WORK
Vision-Language Pretraining.
Vision-language pretraining models (VLMs) investigate the relationship between vision and language modalities.
Various methods have been proposed to establish this connection through self-supervised learning, such as masked language model <cit.>, masked region prediction <cit.> and image-text matching <cit.>.
Recently, contrastive learning-based VLMs have shown remarkable performance by utilizing large-scale noisy image-text pairs.
These methods, including CLIP <cit.> and ALIGN <cit.>, learn aligned representations of images and text via the contrastive loss, which pulls the representations of matching image-text pairs together and pushes those of mismatching pairs apart.
Based on natural language supervision, these VLMs acquire transferable visual representations and exhibit impressive zero-shot performance on various image classification tasks.
Fine-tuning for VLMs.
Inspired by the prior work in NLP, recent researches focus on developing efficient fine-tuning methods for VLMs on downstream tasks.
One type of such method is prompt tuning, which has been explored in several recent works <cit.>.
CoOp <cit.> proposes a prompt learning method that optimizes a class-agnostic prompt template in the continuous token embedding space through back-propagation on few-shot datasets.
ProDA <cit.> attempts to learn a collection of continuous prompts to capture the variational visual representation.
PLOT <cit.> proposes to apply optimal transport to match the learnable prompts with different areas of the images.
Another type of fine-tuning method is adapters <cit.>.
CLIP-Adapter <cit.> proposes to add a lightweight MLP following the last vision layer and mix the output feature with the original zero-shot feature via a residual connection.
Tip-Adapter <cit.> further improves CLIP-Adapter <cit.> by replacing the lightweight MLP with a linear layer, whose weights are comprised of the labeled visual embeddings, acting as visual prototypes of the concepts.
This not only inherits the training-free advantage of zero-shot CLIP <cit.> but also performs comparably to those training-required approaches.
While these methods have achieved significant improvements on downstream datasets, they require data for all classes when fine-tuning.
When dealing with new unseen classes, they either become invalid <cit.> or their performance drops dramatically <cit.>.
However, data for some classes, such as new species or emerging concepts, are difficult to collect because of their rarity.
As a result, many categories may be devoid of any relevant data.
To address this, previous methods have attempted to learn more robust prompts.
CoCoOp <cit.> improves new class performance by learning an instance-specific continuous prompt conditioned on the input image.
With image information, the prompts are easily transferred to recognize new class samples.
VPT <cit.> proposes to learn the distribution of instance-specific prompts via variational inference.
During inference, VPT ensembles several prompts sampled from the distribution for the classification.
In contrast to the previous methods <cit.>, we propose to synthesize features for those unseen categories.
With features for all classes, we can utilize off-the-shelf methods to fine-tune CLIP.
Generalized Zero-Shot Learning.
Generalized zero-shot learning (GZSL) is a relevant research field with similar objectives to our work.
Specifically, GZSL focuses on training a classifier that can recognize both seen and unseen object classes, where the latter is absent from the training set.
To accomplish this, GZSL leverages auxiliary semantic information such as expert annotated attributes or text descriptions <cit.> for both seen and unseen classes.
Embedding-based GZSL methods aim to learn a visual-to-semantic mapping for visual-semantic interaction by mapping visual features into the semantic space <cit.>.
However, a major drawback of these methods is their bias towards seen classes, as they only learn from seen data.
As a solution, generative-based GZSL methods have been introduced to learn semantic-to-visual mapping to generate visual features of unseen classes <cit.> for data augmentation.
Currently, the generative methods are typically based on variational autoencoders (VAEs) <cit.>, generative adversarial networks (GANs) <cit.>, and generative flows <cit.>.
Despite their promising results, these generative-based methods require training on a large seen dataset to learn semantic-visual mapping and expertly annotated attributes or text descriptions for all classes, which can be labor-intensive.
In our work, we aim to imitate GZSL by learning to synthesize samples for new classes.
However, with limited labeled data in the training set and coarse semantic vectors for each class through prompting like “a photo of a {class}", these GZSL generative methods fail to synthesize valid new samples for new classes.
§ METHOD
§.§ Background
Contrastive Language-Image Pretraining, known as CLIP <cit.>, is a method developed for aligning the representations of images and their corresponding captions, which has gained considerable attention in recent years.
CLIP consists of two encoder modules: a visual encoder ℐ(x) and a language encoder 𝒯(t), which encode images and text descriptions, respectively, into a shared d-dimensional space.
The visual encoder can be ViT <cit.> or ResNet <cit.>, while the language encoder is a Transformer <cit.>.
Both encoders are trained jointly using a contrastive loss applied to a large dataset of paired images and captions.
Once trained, CLIP can be used for zero-shot classification of downstream tasks.
To perform C-class image classification, category descriptions {t_c}_c=1^C are generated through prompting, such as “a photo of a {class}". Then, the classification probability of the input image x is computed as follows:
p(y|x) = exp(cos(ℐ(x), 𝒯(t_y)) / τ)/∑_c=1^Cexp(cos(ℐ(x), 𝒯(t_c)) / τ),
where τ denotes the temperature, cos(·, ·) is the cosine similarity function, and y is the target class.
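A minimal sketch of this zero-shot classification rule is given below (our illustration, not the authors' code); it assumes the image feature and the prompted text features have already been extracted with CLIP's frozen encoders, and the temperature value is an assumption.

import torch
import torch.nn.functional as F

def zero_shot_probs(image_feat, text_feats, tau=0.01):
    """p(y|x) from cosine similarities; tau is an assumed temperature value."""
    img = F.normalize(image_feat, dim=-1)          # unit-norm visual feature
    txt = F.normalize(text_feats, dim=-1)          # unit-norm class weights T(t_c)
    logits = img @ txt.t() / tau                   # cosine similarity / temperature
    return logits.softmax(dim=-1)

# toy usage with random stand-in features (d = 512 for ViT-B/16)
probs = zero_shot_probs(torch.randn(512), torch.randn(10, 512))
print(probs.shape, float(probs.sum()))             # torch.Size([10]) ~1.0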
§.§ Synthesized Prompts
In this paper, we aim to improve the performance of CLIP on both base and new categories, i.e., categories with and without available data, while maintaining data efficiency as previous methods.
To achieve this goal, a novel generative approach named SyntHesIzed Prompts (SHIP) is proposed, which involves three stages.
First, we follow variational autoencoders to introduce a generator that reconstructs the visual features by inputting the synthesized prompts and the corresponding class names to the language encoder of CLIP.
Subsequently, we obtain the synthesized features for new categories by providing the class names.
Finally, we combine the labeled base class features with the synthesized new class features and employ existing fine-tuning methods, such as CoOp <cit.> and Tip-Adapter <cit.>, to fine-tune CLIP, which thus enhances its performance on both base and new classes.
The architecture of the generative model is illustrated in Figure <ref>.
To maintain the data efficiency, we opt to employ the Variational Autoencoder (VAE) <cit.> for training our generator instead of Generative Adversarial Networks (GANs) <cit.>.
The reason is that it is difficult to train an effective discriminator for GANs with limited labeled data <cit.>.
As depicted in Figure <ref>, the VAE architecture comprises an encoder E(x) and a generator G(z, c).
First, we leverage the fixed CLIP visual encoder to extract the feature of the input image, i.e., x = ℐ(img).
Subsequently, the VAE encoder E(x) encodes the feature x into a latent code z, and the generator G(z, c) reconstructs the feature x using the latent code z and the corresponding class name c.
The optimization of both E and G is achieved via the evidence-lower bound given by the equation as follows:
L = L_recon + L_KL
= 𝔼[-log G(z, c)] + KL(E(x) ∥ p(z|c)),
where KL represents the Kullback-Leibler divergence, p(z|c) is a prior distribution that is assumed to be 𝒩(0, 1), and -log G(z, c) denotes the reconstruction loss.
To further utilize the pretrained knowledge of CLIP, we propose a CLIP-based generator.
Notably, the pretrained CLIP has learned aligned vision and language representations, allowing us to reconstruct input features from the language encoder 𝒯.
Since having been trained on a large-scale dataset, the reconstructed features obtained from the pretrained language model 𝒯 are expected to be of higher quality than those generated by a new generator trained from scratch on the few-shot base dataset.
Drawing inspiration from previous prompt learning methods <cit.>, we generate instance-specific prompts, instead of generating the features directly.
Specifically, given the latent code z, we generate instance-specific prompts as follows:
p(z) = [p_1 + r, p_2 + r, ..., p_L + r],
where the local bias r is obtained through a two-layer fully-connected network, i.e., r=MLP(z), that embeds the latent code z into the token embedding space, and L is the length of prompts.
As in Eq. (<ref>), our prompts consist of two components: a global fixed set of learnable prompts {p_i, i=1,2,..., L}, which are randomly initialized, capturing the global information of the input features and a local bias r that encodes the instance-specific information of the input feature into the prompts.
By combining the prompts and the token embedding of the corresponding class name, we obtain the reconstructed features as follows:
x = 𝒯(t), t = {p(z), e_c},
where 𝒯 is the frozen language encoder, and e_c is the token embedding of the corresponding class names.
During the training stage, we maintain the CLIP frozen and only optimize the encoder E, the lightweight MLP, and the global prompts p = [p_1, p_2, ..., p_L].
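The training stage can be summarized by the following hedged PyTorch sketch (our reconstruction, not the released implementation). The helper clip_text_encoder_from_embeddings stands for a CoOp-style wrapper that runs CLIP's frozen language transformer directly on token embeddings; its implementation, as well as the dimensions and hyperparameters, are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

d_feat, d_tok, d_z, L = 512, 512, 512, 4           # assumed dimensions

encoder = nn.Sequential(nn.Linear(d_feat, 4096), nn.ReLU(),
                        nn.Linear(4096, 2 * d_z))  # VAE encoder -> (mu, logvar)
bias_mlp = nn.Sequential(nn.Linear(d_z, 4096), nn.ReLU(),
                         nn.Linear(4096, d_tok))   # latent code z -> local bias r
global_prompts = nn.Parameter(0.02 * torch.randn(L, d_tok))

def train_step(x, class_token_embeds, clip_text_encoder_from_embeddings):
    """x: (B, d_feat) CLIP visual features; class_token_embeds: (B, T, d_tok)."""
    mu, logvar = encoder(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization
    r = bias_mlp(z)                                           # (B, d_tok)
    prompts = global_prompts.unsqueeze(0) + r.unsqueeze(1)    # [p_1 + r, ..., p_L + r]
    tokens = torch.cat([prompts, class_token_embeds], dim=1)  # prepend prompts
    x_hat = clip_text_encoder_from_embeddings(tokens)         # reconstructed feature
    recon = F.mse_loss(F.normalize(x_hat, dim=-1), F.normalize(x, dim=-1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl                                         # ELBO objective above, up to weighting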
§.§ Fine-tuning CLIP
Following the training stage, the generator is employed to synthesize features for new classes.
Specifically, given the class name c of a new class and the noise z sampled from the prior distribution, the generator G(z, c) is utilized to generate the corresponding features.
This process is repeated for each new class, resulting in a new synthetic dataset.
When combined with the labeled base dataset, a complete dataset for all classes is obtained.
Consequently, off-the-shelf methods <cit.> can be employed to fine-tune CLIP, which is expected to perform better on new classes in comparison to its previous counterparts.
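A matching sketch of the synthesis stage is shown below (again our illustration; it reuses the assumed modules and names from the previous snippet).

import torch

@torch.no_grad()
def synthesize(new_class_token_embeds, clip_text_encoder_from_embeddings,
               n_per_class=16):
    """Sample z ~ N(0, I) and generate features for each label-only class."""
    feats, labels = [], []
    for c, tok in enumerate(new_class_token_embeds):          # tok: (T, d_tok)
        z = torch.randn(n_per_class, d_z)
        r = bias_mlp(z)
        prompts = global_prompts.unsqueeze(0) + r.unsqueeze(1)
        tok_batch = tok.unsqueeze(0).expand(n_per_class, -1, -1)
        feats.append(clip_text_encoder_from_embeddings(
            torch.cat([prompts, tok_batch], dim=1)))
        labels.append(torch.full((n_per_class,), c))
    return torch.cat(feats), torch.cat(labels)

# The returned synthetic (features, labels) are concatenated with the labeled
# base-class features before running an off-the-shelf method such as CoOp or
# Tip-Adapter on the combined set.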
§ EXPERIMENTS
§.§ Setup
We evaluate our method for three different tasks: base-to-new generalization, cross-dataset transfer, and generalized zero-shot classification.
For the base-to-new generalization and cross-dataset transfer tasks, we follow the same experimental setting as CoCoOp <cit.>.
It uses a total of 11 diverse image classification datasets, i.e., ImageNet <cit.> and Caltech101 <cit.> for generic object recognition, OxfordPets <cit.>, StanfordCars <cit.>, Flowers102 <cit.>, Food101 <cit.> and FGVCAircraft <cit.> for fine-grained image recognition, EuroSAT <cit.> for satellite image classification, UCF101 <cit.> for action classification, DTD <cit.> for texture classification, and SUN397 <cit.> for scene recognition.
For generalized zero-shot classification tasks, we follow the same setting as <cit.>, and we conduct the experiments on three standard zero-shot recognition datasets: Caltech-UCSD-Birds <cit.> (CUB), Oxford Flowers <cit.> (FLO), and Animals with Attributes2 <cit.> (AWA2), containing 200, 102, and 50 categories, respectively. For a fair comparison, we use the same data splits and evaluation protocols as proposed in <cit.>.
Implementation details.
Our proposed method is comprised of three sub-networks: a VAE encoder, a lightweight MLP, and a pretrained CLIP.
The VAE encoder and the MLP are implemented as two-layer fully-connected networks with 4,096 hidden units and ReLU activation.
And we employ ViT-B/16 <cit.> and transformer <cit.> as the vision and language encoders of CLIP, which are initialized with CLIP's pretrained weights and kept frozen during training.
The dimensions of the latent code z are set to be equal to the dimension of token embedding.
We fix the length of the learnable global context vectors to 4 and initialize them with Gaussian noise.
The features are normalized to a unit sphere, as proposed in CLIP <cit.>.
And we utilize MSE as the reconstruction loss of the VAE.
All the networks are trained using the AdamW optimizer with a learning rate of 0.001.
During the fine-tuning of CLIP, since we utilize off-the-shelf methods, we follow the same settings as those proposed in their papers <cit.>.
We randomly synthesize a batch of new class features and combine them with the original batch to form a new batch during training.
We conduct all experiments on a single NVIDIA GeForce RTX 3090, except for the ImageNet dataset, which is conducted on an NVIDIA A100.
§.§ Results
§.§.§ Base-to-new generalization
Setup.
Following CoCoOp <cit.>, we partition each dataset into two equal non-overlapping subsets: the base classes and the new classes.
Subsequently, we randomly extract a few-shot training set from base classes, while preserving the original test set for evaluation purposes.
Specifically, we perform training on the base classes with a mere 16 samples per class and evaluate the trained model on both the base and new classes.
To evaluate the model's performance, we compute the average accuracy of both the base and new classes, as well as their harmonic mean <cit.> (H = 2× base× new / (base + new)).
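For reference, a minimal helper for this metric (the example numbers are arbitrary):

def harmonic_mean(base_acc, new_acc):
    """Harmonic mean H of base- and new-class accuracies (in percent)."""
    return 2 * base_acc * new_acc / (base_acc + new_acc)

print(round(harmonic_mean(82.0, 70.0), 2))   # 75.53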
Results.
We choose CLIP <cit.>, CoOp <cit.>, CoCoOp <cit.>, CLIP-Adapter <cit.>, Tip-Adapter <cit.>, VPT <cit.>, and ProDA <cit.> as our baseline.
The result of Tip-Adapter <cit.> is not included in the table due to its inability to test on new classes.
Results from Table <ref> show that the previous fine-tuning methods significantly degrade the performance of CLIP on new classes.
Specifically, CoOp <cit.> reduces the accuracy of new classes by an average of 11% across 11 datasets.
Tip-Adapter <cit.> is even worse as it fails to recognize new categories outside the training set.
It is noteworthy that all previous methods, except VPT <cit.>, harm the CLIP performance on new classes.
However, VPT <cit.> achieves this by reducing the base class accuracy by 10.7%.
As shown in Table <ref>, we add our generative prompt tuning method to three baseline methods: CoOp <cit.>, CLIP-Adapter <cit.>, and Tip-Adapter <cit.>.
By adding our method, CoOp + SHIP outperforms CoOp <cit.> by 10.47% and 5.07% on the new classes and harmonic mean, respectively, while only sacrificing 2.66% on the base classes.
The incorporation of generative prompt tuning into CLIP-Adapter <cit.> results in a 2.57% and 1.62% improvement in performance on the new classes and harmonic mean, respectively, without affecting the performance of the base classes.
Notably, augmenting Tip-Adapter <cit.> with our proposed generative prompt tuning method not only expands its recognition ability to new classes but also achieves almost the best results compared to all the baseline methods.
Specifically, Tip-Adapter + SHIP achieves a 14.46% improvement on the base classes, 2.20% on the new classes, and 8.24% on the harmonic mean on average across all datasets compared to zero-shot CLIP.
Moreover, it obtains the highest harmonic mean on nine of the eleven datasets, except for Caltech101 <cit.> and OxfordPets <cit.>, where the performance has already reached a high level (>95%), thus limiting the potential for improvement.
§.§.§ Cross-dataset transfer learning
Setup.
Following CoCoOp <cit.>, we present an evaluation of our method's cross-dataset transfer performance.
Specifically, we examine the effectiveness of our approach on ten different target datasets following training on the source dataset (ImageNet <cit.>).
To simulate more realistic scenarios, we train our generative model and CoOp <cit.> on 16-shot ImageNet, utilizing all 1,000 available classes.
Subsequently, using the generative model, we generate features for all classes in the target dataset and fine-tune CoOp <cit.> with the synthesized data.
We report the average accuracy of these datasets for a fair comparison.
Results.
We report the performance of the proposed CoOp + SHIP compared to the CoOp <cit.> and CLIP <cit.> in ten target datasets.
The results are shown in Table <ref>, indicating an improvement range of 0.34% to 3.77%, with an average improvement of 1.81%.
Notably, the CoOp + SHIP outperformed the baselines in eight out of ten datasets, with exceptions in Flowers102 <cit.> and FGVCAircraft <cit.> datasets.
The reason for this observation is that Flowers102 <cit.> and FGVCAircraft <cit.> are fine-grained datasets that pose a challenge for the generator to synthesize in-distribution and non-trivial features.
§.§.§ Generalized zero-shot learning
Setup.
We follow the same data split and evaluation metrics as in <cit.>.
To ensure fairness in comparison, the model is trained on the complete training set of seen classes instead of 16 shots per class.
In this case, we extract the image feature from CLIP visual encoder and obtain the corresponding class attribute from the prompt template “a photo of a {class}".
As in <cit.>, we report the average per-class top-1 accuracy on seen and unseen classes.
Furthermore, the harmonic mean (H = 2 × Unseen × Seen / (Unseen + Seen)) is also reported to provide a balance between seen and unseen accuracy.
Results.
The results of generalized zero-shot learning are shown in Table <ref>.
Experiments are conducted on three standard benchmarks for zero-shot classification: CUB <cit.>, AWA2 <cit.>, and FLO <cit.>.
We choose f-CLSWGAN <cit.>, Cycle-WGAN <cit.>, LisGAN <cit.>, TCN <cit.>, f-VAEGAN <cit.>, TF-VAEGAN <cit.>, GCM-CF <cit.>, HSVA <cit.>, and MSDN <cit.> as our baseline methods.
These methods extract the average-pooled feature instances of size 2,048 from the ImageNet-1K <cit.> pretrained ResNet-101 <cit.>.
And they use expert annotated attributes or text descriptions <cit.> as auxiliary information of classes, which requires additional human labor.
The results reported in Table <ref> indicate that CoOp <cit.> yields a substantial improvement in the performance of seen classes.
Specifically, the method leads to a 9.0%, 2.2%, and 17.9% performance increase on CUB, AWA2, and FLO datasets, respectively.
However, the performance of CoOp on unseen classes is comparatively lower, as evidenced by a decline of 6.0%, 15.6%, and 13.4% on CUB, AWA2, and FLO datasets, respectively, compared to CLIP <cit.>.
This observation suggests that CoOp may suffer from severe overfitting on the seen classes.
In this regard, our proposed method, CoOp + SHIP, leverages generative prompt tuning to enhance the performance of unseen classes.
Our experimental results demonstrate that CoOp + SHIP leads to significant gains of +6.1%, +11.4%, and +16.8% on unseen classes compared to CoOp <cit.>.
Furthermore, the performance of CoOp + SHIP is comparable or superior to previous zero-shot learning methods.
To ensure a fair comparison, we have implemented the TF-VAEGAN <cit.> and f-VAEGAN <cit.> using CLIP extracted features, with the attribute of each class generated through the prompt template “a photo of a {class}".
The results presented in the table indicate that while these models achieve the highest performance on seen classes, their performance on unseen classes is significantly lower, suggesting that these models suffer from severe overfitting to the seen classes.
We presume that the use of a coarse prompt template such as “a photo of a {class}" may not provide sufficient transferability compared to expert-annotated attributes used in previous methods.
§.§ Ablation Study
Different generative models.
We conducted a series of experiments to investigate the effectiveness of the generative framework and the CLIP-based generator.
For this, we implemented four distinct generative models, combining two types of frameworks with two types of generators.
Table <ref> presents the experimental results.
In the table, G denotes the use of GAN <cit.> as the framework, while V denotes the use of VAE <cit.> as the framework.
S represents the training of a three-layer MLP as a generator from scratch, while T denotes the utilization of the CLIP-based generator discussed in Section <ref>.
Notably, V + T is equivalent to our model.
We incorporate the generative models into CoOp <cit.> and evaluate their performance on the 11 datasets mentioned above.
In Table <ref>, the results indicate that VAE-based models outperform GAN-based models, supporting our claims that GANs <cit.> are difficult to train with the few-shot base dataset, leading to a suboptimal performance on new classes.
Additionally, we find that utilizing the CLIP-based generator yields superior results to straightforwardly training the generator from scratch, highlighting the effectiveness of our CLIP-based generator and the efficient utilization of pretrained knowledge of CLIP.
Furthermore, the table reveals that the combination of CoOp with G + S yields inferior performance compared to vanilla CoOp <cit.>.
This indicates that arbitrary data generation for new classes does not necessarily improve model performance.
Based on these results, we select VAE <cit.> as our generative architecture and choose to utilize the CLIP-based generator.
Different forms of generative prompts.
The prompts used in our method comprise a fixed set of global prompts and a local instance-specific bias, as described in Section <ref>.
More specifically, the prompts are represented as an addition of the global prompts and the local bias, i.e., p = [p_1 + r, p_2 + r, ... p_L + r].
We investigate the impact of different forms of prompts on performance.
We use the term global to denote the use of the global prompts [p_1, p_2, ..., p_L], and the terms identical and sequential to refer to a local bias of the form [r, r, ..., r] or [r_1, r_2, ..., r_L], respectively.
The results, presented in Table <ref>, indicate that utilizing global prompts along with identical local bias yields the best performance.
Using local prompts alone (whether sequential or not) negatively impacts the new-class performance, underscoring the importance of global prompts in capturing vital information.
Notably, using both global and sequential prompts results in the worst performance.
This may be attributed to the instability of training since they both learn sequential prompts.
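To make the prompt construction concrete, the following minimal Python/PyTorch sketch shows the global-plus-identical-bias form described above; it is an illustration rather than the authors' implementation, and the prompt length and embedding width are assumed placeholder values:

import torch

L, dim = 4, 512                    # assumed prompt length and token-embedding width
global_prompts = torch.randn(L, dim, requires_grad=True)   # shared, learnable p_1..p_L

def build_prompts(local_bias):
    """Return p = [p_1 + r, ..., p_L + r] for an instance-specific bias r of shape (dim,)."""
    return global_prompts + local_bias.unsqueeze(0)         # broadcast the identical bias r

# Example: a bias vector predicted from one image's visual feature.
r = torch.randn(dim)
prompts = build_prompts(r)         # shape (L, dim)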
Interpretation of prompts.
One benefit of our CLIP-based generative model is that we can provide interpretive prompts.
The model learns the mapping from visual features to token embedding space via the VAE process.
By utilizing this mapping, we can obtain instance-specific prompts for the input image.
The next step involves selecting the nearest natural words from the vocabulary based on their Euclidean distance to the prompts in latent space.
However, the approach maps continuous vectors into discrete codes of words, which can result in generated sentences that may not necessarily be semantically coherent, as noted in prior research <cit.>.
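The nearest-word lookup described here can be sketched as follows; the vocabulary list and its token-embedding matrix are assumed to be supplied (e.g., from the CLIP tokenizer), and the function name is ours:

import torch

def nearest_words(prompts, vocab_emb, vocab):
    """Map each prompt vector in `prompts` (L, dim) to the nearest vocabulary word
    by Euclidean distance in the token-embedding space.
    vocab_emb: (V, dim) embedding matrix; vocab: list of V word strings."""
    dists = torch.cdist(prompts, vocab_emb)   # (L, V) pairwise Euclidean distances
    idx = dists.argmin(dim=1)                 # index of the closest token per prompt slot
    return [vocab[i] for i in idx.tolist()]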
The interpretation of the prompts reveals several noteworthy observations.
As depicted in Figure <ref> (a), the model has learned to associate the term “aerial” with images captured from an aerial perspective in EuroSAT.
Furthermore, the model has accurately identified some characteristics of dogs, as exemplified in Figure <ref> (b)-(c), where the terms “swift-footed” and “fierce” can be used to describe animals.
Additionally, the model has demonstrated an understanding of floral morphology, as demonstrated in Figure <ref> (d)-(f), where the terms “odd-pinnate,” “three-lobed,” and “shot-stem” are employed to describe characteristics of flowers.
Since we utilize an identical local bias in the prompts, some words repeat across the interpreted sentences.
Although the interpretation may not be entirely precise, it provides valuable insights into the images.
We hope the results inform future studies on interpretable vision-language inference and yield further insights.
§ CONCLUSION
In this paper, we provide a generative approach, SHIP, to handle the scenario where some classes have no data.
By training a data-efficient generator to bridge the data gap in new classes, we improve CLIP performance on various tasks using off-the-shelf methods, including base-to-new generalization, cross-data transfer learning, and generalized zero-shot classification.
Although SHIP achieves remarkable results, it incurs additional training cost, which we aim to mitigate in future research. Additionally, future work will explore the applicability of SHIP in dense prediction tasks.
§ APPENDIX
§.§ Datasets details
The details of the 11 datasets used in base-to-new generalization and cross-dataset transfer learning are shown in Table <ref>.
In addition, the statistic of datasets used in generalized zero-shot learning is summarized in Table <ref>.
§.§ Generalized zero-shot setting
The current evaluation protocol utilized in base-to-new generalization assumes that base and new classes are completely isolated during testing, which may not reflect a realistic scenario. In contrast, in a more realistic setting, test sets contain a mix of base and new class data, as previously employed in generalized zero-shot learning. We refer to this as the generalized zero-shot setting and re-evaluate base-to-new generalization under this setting.
The results of our evaluation are presented in Table <ref>, which indicates a significant decrease in performance for previous methods such as CoOp <cit.> and CLIP-Adapter <cit.> under this more strict setting.
Conversely, our proposed method, SHIP, continues to improve performance in new classes.
§.§ Different lengths of prompts.
As described in Sec. <ref>, our proposed approach generates instance-specific prompts to produce corresponding features, which consist of a global prompt and a local bias. Specifically, the prompts are computed as follows:
p = [p_1 + r, p_2 + r, ..., p_L + r],
where L is the length of prompts.
To examine the influence of prompt length on our method's performance, we conduct an ablation study, the results of which are presented in Table <ref>.
Specifically, we set the prompt lengths in our approach to 1, 2, 4, and 8 and integrate our method into CoOp <cit.> to evaluate its performance on base-to-new generalization.
Our experimental results indicate that our proposed approach performs best when the prompt length is set to L=4. Therefore, we set the default prompt length as L=4 for our experiments.
|
http://arxiv.org/abs/2307.04958v1 | 20230711012100 | Near-wall model for compressible turbulent boundary layers based on an inverse velocity transformation | [
"Kevin Patrick Griffin",
"Lin Fu",
"Parviz Moin"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
In this work, a near-wall model, which couples the inverse of a recently developed compressible velocity transformation [Griffin, Fu, & Moin, PNAS, 118:34, 2021] and an algebraic temperature-velocity relation, is developed for high-speed turbulent boundary layers.
As input, the model requires the mean flow state at one wall-normal height in the inner layer of the boundary layer and at the boundary-layer edge.
As output, the model can predict mean temperature and velocity profiles across the entire inner layer, as well as the wall shear stress and heat flux.
The model is tested in an a priori sense using a wide database of direct numerical simulation high-Mach-number turbulent channel flows, pipe flows, and boundary layers (48 cases with edge Mach numbers in the range of 0.77–11 and semi-local friction Reynolds numbers in the range of 170–5700).
The present model is significantly more accurate than the classical ordinary differential equation (ODE) model for all cases tested.
The model is deployed as a wall model for large-eddy simulations in channel flows with bulk Mach numbers in the range of 0.7–4 and friction Reynolds numbers in the range of 320–1800. When compared to the classical framework, in the a posteriori sense, the present method greatly improves the predicted heat flux, wall stress, and temperature and velocity profiles, especially in cases with strong heat transfer. In addition, the present model solves one ODE instead of two and has a similar computational cost and implementation complexity as the commonly used ODE model.
§ INTRODUCTION
The largest driver of computational cost in numerical simulations of wall-bounded turbulence is typically the numerical resolution in the near-wall region. In scale-resolving simulations, e.g., wall-resolved (WR) large-eddy simulation (LES), high spatial and temporal resolutions are required to accurately simulate the small-scale eddies near walls. Wall models, or approximate boundary conditions, can be employed to reduce the near-wall resolution requirements.
The computational cost (the number of grid points multiplied by the number of time steps) for the simulation of a turbulent boundary layer scales with the Reynolds number as Re^2.7 for WRLES and Re^1.1 for wall-modeled (WM) LES <cit.>. Thus, wall models lead to substantial cost savings for high-Reynolds-number applications.
In simulations of the Reynolds-averaged Navier-Stokes (RANS) equations, high spatial resolution is also required to resolve the steep near-wall gradients in the mean flow.
Therefore, wall models —typically referred to as wall functions in the RANS context —can also greatly accelerate numerical simulations.
The present work focuses on the paradigm of wall-stress modeling <cit.> for LES. These models were derived from RANS analysis of boundary layers and typically invoke a zero-equation RANS model such as the Prandtl mixing length argument <cit.>, which models the turbulence length scale as a linear function of the wall-normal distance. An empirical damping function is introduced following <cit.> to ensure the correct near-wall scaling of the mixing length. RANS models have naturally been widely used as boundary conditions for under-resolved RANS simulations (e.g., <cit.>). In this context, such a model is typically referred to as a wall function.
<cit.> showed that the mixing length RANS model is suitable for use as a boundary condition for the LES equations, i.e., for deployment as a wall-stress model. Specifically, they invoke the
one-dimensional simplification of the RANS streamwise momentum equation. That is,
d/dy[ (μ + μ_t) dU/dy ] = 0,
where μ, μ_t, and U are the molecular dynamic viscosity, eddy viscosity, and velocity profiles, respectively, and y is the wall-normal coordinate. (·) denotes the Reynolds average and (·) denotes the Favre (density-weighted) average. Throughout this work the Favre- (density-weighted-) averaged RANS and LES equations are employed.
The eddy viscosity is further modeled as
μ_t = κ y ρ√(τ_w/ρ) ( 1 - exp(-y^+/A^+) )^2,
where ρ(y) is the density profile. The subscript (·)_w denotes quantities evaluated at the wall. τ_w = μ_w (dU/dy)_w is the wall shear stress. The superscript (·)^+ denotes non-dimensionalization by the friction velocity u_τ = √(τ_w/ρ_w), ρ_w, and the kinematic wall viscosity ν_w=μ_w/ρ_w.
The von Kármán constant κ = 0.41 and the eddy-viscosity damping coefficient A^+ = 17 are adopted following <cit.>.
For an incompressible flow, the density and molecular dynamic viscosity are known constants. In the context of WMLES, the ODE in Eq. (<ref>) is solved with two boundary conditions: 1) the no-slip wall condition and 2) a velocity sample, which is taken from the LES at a wall-normal distance referred to as the matching location. Note that the solution procedure is iterative because the eddy viscosity depends on the wall stress (Eq. (<ref>)). The computed wall stress τ_w is then applied as a momentum-flux boundary condition for the outer LES solver, which completes the two-way coupling of the wall model (inner) solution and the PDE (outer) simulation.
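For illustration, a minimal Python sketch of this iterative solve in the constant-property (incompressible) case is given below; the discretization and the simple rescaling update on τ_w are our own choices and not part of any specific solver:

import numpy as np

kappa, A_plus = 0.41, 17.0

def wall_stress(U_m, y_m, rho, mu, n=2000, tol=1e-10, max_iter=100):
    """Equilibrium wall model, constant-property form: integrate
    tau_w = (mu + mu_t) dU/dy from the wall to y_m and iterate on tau_w
    until the modeled velocity matches the sampled value U_m."""
    y = np.linspace(0.0, y_m, n)
    tau_w = mu * U_m / y_m                              # laminar initial guess
    for _ in range(max_iter):
        u_tau = np.sqrt(tau_w / rho)
        y_plus = y * u_tau * rho / mu
        mu_t = kappa * y * rho * u_tau * (1.0 - np.exp(-y_plus / A_plus)) ** 2
        dUdy = tau_w / (mu + mu_t)                      # constant total stress
        U_match = np.sum(0.5 * (dUdy[1:] + dUdy[:-1]) * np.diff(y))   # trapezoidal U(y_m)
        tau_new = tau_w * U_m / U_match                 # rescale the guess toward the target
        if abs(tau_new - tau_w) < tol * tau_w:
            return tau_new
        tau_w = tau_new
    return tau_w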
For compressible flow, the RANS equation for temperature can similarly be simplified to the one-dimensional form <cit.>, which results in a second, coupled ODE for the temperature profile, i.e.,
d/dy[ (μ + μ_t) U dU/dy + C_p ( μ/Pr + μ_t/Pr_t ) dT/dy ] = 0,
where T is the temperature profile. C_p is the specific heat capacity at constant pressure, Pr is the Prandtl number, and Pr_t is the turbulent Prandtl number, which is assumed to be 0.9 <cit.>.
The dependence of molecular dynamic viscosity on temperature can be assumed to follow a power law or Sutherland's law.
The ideal gas equation of state closes the system
and the thin-boundary-layer assumption implies that the pressure is constant across the inner layer.
In WMLES, the temperature ODE in Eq. (<ref>) is solved with two additional boundary conditions: 1) the wall temperature and 2) the temperature at the matching location. Note that the solution procedure is also iterative in that the temperature depends on the velocity solution. The velocity also depends on the temperature through the density and viscosity. Solving two coupled boundary-value problems iteratively introduces a higher degree of non-linearity compared to the incompressible case and can prove difficult to converge in flows with strong temperature gradients (strong heat transfer), e.g., as was reported in <cit.>. In addition to the numerical difficulties, the accuracy of this wall model degrades substantially in flows with strong heat transfer (as will be demonstrated herein).
Improved results for high-speed wall-bounded turbulent flows over cold walls have been obtained by using the semi-local scaling in the damping function <cit.>; however, <cit.> reports that for adiabatic walls, the classical scaling (consistent with the van Driest transformation) is more accurate. This motivates using a recently developed compressible velocity transformation that is accurate for both diabatic and adiabatic turbulent boundary layers <cit.>.
In this work, a wall model for high-speed wall-bounded turbulent flows is developed in section <ref>. The model is evaluated via a priori testing in section <ref> and via a posteriori validation in section <ref>. Conclusions are drawn in section <ref>.
§ MODEL DEVELOPMENT
There are two principal differences between the present model and the classical ODE-based wall model (Eqs. (<ref>–<ref>)): (1) rather than solving an ODE for the compressible velocity profile directly, the incompressible ODE (with constant density and viscosity) is solved, and an inverse compressibility transformation <cit.> is employed; (2) rather than employing a RANS equation for temperature and assuming a constant Pr_t, an algebraic temperature-velocity relation is adopted, thus obviating the need to solve a second ODE.
§.§ Inverse compressible velocity transformation
A compressible velocity transformation seeks to map the local mean strain rate of the variable-property compressible flow, dU/dy, to the non-dimensional mean strain rate of a constant-property incompressible flow at an equivalent Reynolds number. Upon integration, the transformation maps the compressible velocity profile to an incompressible velocity profile. In this way, a successful transformation can collapse profiles with different Mach numbers and thermal boundary conditions to a single incompressible law of the wall. Coupled with the incompressible profile implied by Eq. (<ref>), an inverse velocity transformation can recover the compressible velocity profile.
The total-stress-based compressible velocity transformation of <cit.> is used in this work since it is shown to be accurate in a wide range of flows, including boundary layers with strong heat transfer. This transformation uses the viscous scaling arguments of <cit.> and <cit.> in the near-wall viscous region and uses a modified version of the turbulence equilibrium arguments of <cit.> for the logarithmic region. The transformation is an algebraic function that relates the local mean strain rate of the compressible flow, dU/dy, to the non-dimensional incompressible mean strain-rate, S_t^+, at the same semi-local friction Reynolds number, Re_τ^*, according to the relation
S_t^+ = S_eq^+ / ( 1 + S_eq^+ - S_TL^+ ),
where S_eq^+=1/μ^+ dU^+/dy^* and S_TL^+=μ^+ dU^+/dy^+. The superscript (·)^* denotes non-dimensionalization by the local density ρ(y), local molecular dynamic viscosity μ(y), and the semi-local friction velocity u_sl=√(τ_w/ρ(y)) <cit.>. The semi-local friction Reynolds number is thus defined as Re_τ^* = ρ_e u_slδ / μ_e, where the subscript (·)_e denotes quantities evaluated at the boundary layer edge (throughout this work, δ denotes the channel half height or the boundary-layer thickness). Note that all variables of the form S_(·)^+ represent different local non-dimensionalizations of the compressible strain rate, which were designed in prior works with the target of equaling the strain rate implied by the incompressible law of the wall. For example, although S_TL^+ is equivalent to the viscous stress, it is also a non-dimensionalization of the mean strain rate in a compressible flow. S_TL^+ will exactly recover the incompressible strain rate of a flow with the equivalent viscous stress as long as the compressible flow also obeys μ^+=1. Additionally, note that the transformation in Eq. (<ref>) assumes a constant stress layer in the buffer region of the boundary layer, where there is a transition between the underlying viscous and equilibrium transformations. <cit.> verifies that the deployment of this assumption does not significantly affect the accuracy of the transformation in equilibrium flows, and <cit.> verifies the same for boundary layers with moderate pressure gradients.
The inverse velocity transformation is readily obtained by algebraically rearranging the transformation to find
dU^+/dy^* = ( 1/(μ^+ S^+_t) - 1/μ^+ + √(ρ^+) [ 1 + (1/(2ρ^+)) (dρ^+/dy^+) y^+ - (1/μ^+) (dμ^+/dy^+) y^+ ] )^-1.
The incompressible mean strain rate S_t^+ is available algebraically from the constant-property version of Eq. (<ref>), i.e., ρ=ρ_w and μ=μ_w. The incompressible model constants κ and B are determined using the aforementioned calibration but Re_τ^* is used in place of Re_τ since the former is invariant under the velocity transformation. Integrating Eq. (<ref>) with variable properties yields the targeted compressible velocity profile; the properties are functions of temperature, which will be discussed next.
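A minimal sketch of evaluating this inverse transformation at a single wall-normal location is given below; it follows the reconstructed expression above and is not the authors' released implementation (the property gradients are assumed to be supplied, e.g., from finite differences of the current temperature profile):

import numpy as np

def dUplus_dystar(S_t_plus, mu_plus, rho_plus, drho_dyp, dmu_dyp, y_plus):
    """Compressible strain rate dU^+/dy^* from the inverse total-stress-based transformation.
    Inputs are local values: the incompressible strain rate S_t^+ at the same y^*,
    mu^+ = mu/mu_w, rho^+ = rho/rho_w, and the gradients d(rho^+)/dy^+ and d(mu^+)/dy^+."""
    D = 1.0 + 0.5 / rho_plus * drho_dyp * y_plus - 1.0 / mu_plus * dmu_dyp * y_plus
    return 1.0 / (1.0 / (mu_plus * S_t_plus) - 1.0 / mu_plus + np.sqrt(rho_plus) * D)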
§.§ Algebraic temperature-velocity relation
In order to close the velocity equation (Eq. (<ref>)), the temperature profile must be determined. The classical model uses the constant turbulent Prandtl number assumption to develop a coupled ODE for temperature (Eq. (<ref>)). However, the constant Prandtl number assumption has been shown to be less accurate than invoking the Generalized Reynolds Analogy (GRA) <cit.>. Thus, the presently proposed wall model leverages the GRA instead.
The analogy between the conservation equations for momentum and energy has led to the derivation of several algebraic relations between temperature and velocity.
Walz's equation <cit.> (also known as the modified Crocco-Busemann relation <cit.>) leverages the analogy between the conservation equations for momentum and energy to arrive at an algebraic relation between mean temperature and velocity. This relation accounts for non-unity Pr effects via a recovery factor, which is taken as r = Pr^1/3. While this relation is accurate in high-speed adiabatic boundary layers, <cit.> observed that the accuracy degrades significantly in boundary layers with wall heat transfer and proposed a semi-empirical correction to the relation.
This was subsequently recast in terms of a generalized Reynolds analogy <cit.>, thereby introducing the Reynolds analogy factor, s, which they choose as s = 1.14 following convention. The resulting temperature-velocity relation is given as,
T = T_w + s (T_r - T_w) (U/U_e) (1 - U/U_e) + ( U/U_e )^2 ( T_e - T_w ),
where the subscript (·)_e denotes quantities at the boundary-layer edge, the recovery temperature T_r = T_e + r U_e^2/(2 C_p).
This relation has been validated across a wide range of channel flows, pipe flows, and boundary layers with and without heat transfer <cit.>. Specifically, this relation is derived by <cit.> through defining the generalized recovery temperature T_r_g = T + r_g U^2/(2 C_p). Then, it is assumed that T_r_g = T_w + U_s U/C_p,
where U_s is a constant velocity scale. Equivalently, the assumption can be reinterpreted that T can be approximately represented as a second order Taylor expansion in terms of powers of U, i.e.,
T = b_0 + b_1 U + b_2 U^2/2,
where the no-slip condition implies b_0 = T_w and b_1 = (dT/dU)|_w.
The algebraic relation of <cit.> can be recovered if b_2 is specified by evaluating the expression at the boundary-layer edge T_e=T|_U_e and b_1 is determined using the Reynolds analogy. However, in this work, we use the matching data (denoted with subscript (·)_m) T_m=T|_U_m to set b_2, such that the exact value at the matching location can be enforced.
The final temperature-velocity relation is
T = T_w + s (T_r - T_w) (U/U_e) (1 - U/U_m) + ( U/U_m )^2 ( T_m - T_w ).
Note that one consequence of this relation is that the wall heat flux and wall shear stress are algebraically linked by the Reynolds analogy factor, where the heat flux is defined as q_w = s τ_w C_p (T_w-T_r)/U_e.
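The resulting relation is straightforward to evaluate; a short sketch (with the recovery factor r = Pr^1/3 and the conventional Reynolds analogy factor s = 1.14 quoted above) could read:

def temperature_from_velocity(U, U_e, U_m, T_w, T_m, T_e, Cp, Pr, s=1.14):
    """Algebraic temperature-velocity relation closed with the matching data (U_m, T_m)."""
    r = Pr ** (1.0 / 3.0)                    # recovery factor
    T_r = T_e + r * U_e ** 2 / (2.0 * Cp)    # recovery temperature
    return T_w + s * (T_r - T_w) * (U / U_e) * (1.0 - U / U_m) + (U / U_m) ** 2 * (T_m - T_w)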
§.§ Implementation details
Like the classical model (Eqs. (<ref>–<ref>)), the present model requires a matching temperature, velocity, and density, an equation of state (the ideal gas law is used in this work and the thin-boundary-layer assumption implies the pressure is constant), and a viscosity law (either a power law or Sutherland's law depending on the relevant reference data). In addition, the present model requires as input the velocity and temperature at the boundary-layer edge (computed using the method of <cit.>) for deploying the algebraic temperature-velocity relation (Eq. (<ref>)) due to its dependence on the recovery temperature and edge velocity. To solve the nonlinear system, the following approach is used. The incompressible ODE (Eq. (<ref>)) with constant properties is integrated once analytically, rearranged for dU/dy and substituted into the inverse velocity transformation (Eq. (<ref>)) as S. This equation (initial value problem with an initial guess for the wall shear stress) is solved via the shooting method, where, at each integration step, a sub-iteration determines the velocity increment that is consistent with the temperature-velocity relation (Eq. (<ref>)) and the resulting density and viscosity at that location.
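As a hedged illustration of the first step, integrating the constant-property ODE once under the constant-total-stress assumption gives a closed form for the incompressible strain rate that enters the inverse transformation, which could be coded as:

import numpy as np

def incompressible_strain_rate(y_star, kappa=0.41, A_plus=17.0):
    """Incompressible dU^+/dy^* from the mixing-length model with constant total stress,
    evaluated at the semi-local coordinate y^* (playing the role of y^+ at Re_tau^*)."""
    mu_t_over_mu = kappa * y_star * (1.0 - np.exp(-y_star / A_plus)) ** 2
    return 1.0 / (1.0 + mu_t_over_mu)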
The implementation of the present model is available at the link provided in the data availability section at the end of this manuscript. This implementation was first developed by <cit.> to compute temperature and velocity profiles for estimating grid-point requirements in compressible flows, and this manuscript serves as the comprehensive documentation and the further development of the underlying inverse method for WMLES approach for the first time. Intermediate developments were presented in <cit.>, and initial results were reported in <cit.>. <cit.> used a similar procedure but with a data-driven velocity transformation <cit.>. <cit.> and <cit.> approximate the mean profiles of channel flows by considering two velocity transformations <cit.> and employing the Central Mean Temperature Scaling <cit.>.
§ A PRIORI RESULTS
The present and classical wall models are first evaluated via a priori analysis. That is, the matching data are taken from DNS at a wall-normal distance of y_m=0.3δ. The wall model estimates the velocity and temperature profiles, as well as the wall shear stress and wall heat flux. The predicted velocity and temperature profiles are shown in Figures <ref> and <ref> for four channel flows with various Mach and Reynolds number conditions, Figure <ref> for two pipe flows at different Reynolds numbers, and Figure <ref> for two boundary layers, one with a heated and one with a cooled wall boundary condition. The bulk Mach number is defined as M_b = U_b/√(γ R T_w), where γ is the ratio of specific heats and R is the gas constant. The bulk Reynolds number is defined as Re_b = ρ_b U_b δ / μ_w, where the bulk density is defined as ρ_b = ∬_A ρ dA/A and the bulk velocity is defined as U_b = ∬_A U dA/A, where A is the cross-sectional area of the domain. Reference DNS data are provided by <cit.>.
For all cases, the profiles predicted by the present model agree with the DNS profiles significantly better than the classical model. Note that the velocities are non-dimensionalized by the predicted friction velocity, so the obtained profiles do not necessarily pass through the matching data if the predicted wall stress is inaccurate.
Next, the model performance is evaluated with a wide range of DNS data from 48 different simulations.
The errors in the modeled wall stress and heat flux predictions are reported for each case with y_m=0.3δ. The relative error in the wall stress prediction ϵ_τ_w is defined as
ϵ_τ_w = ( τ_w,model - τ_w,DNS ) / τ_w,DNS × 100%.
The non-dimensional wall heat flux is defined as B_q = q_w/(C_p T_w ρ_w u_τ), and the relative error in the wall heat flux is defined as
ϵ_q_w = ( q_w,model - q_w,DNS ) / q_w,DNS × 100%.
ϵ_q_w is not reported for adiabatic boundary layer data because it is undefined, and both models predict negligible heat transfer for these data.
The data considered include the compressible channel flow simulations of <cit.>, the pipe flow simulations of <cit.>, the adiabatic supersonic and hypersonic boundary layers of <cit.>, and the diabatic supersonic and hypersonic boundary layers of <cit.>.
The cases have edge Mach numbers in the range of 0.77–11 and semi-local friction Reynolds numbers in the range of 170–5700.
Only the cases with Re_τ^* > 150 are analyzed because lower Reynolds numbers can exhibit strong Reynolds number effects <cit.> and are not the target of this study. The error measures are shown in Figure <ref>. The present model generates significantly less modeling error than the classical model, with the greatest error reduction when the non-dimensional heat transfer is the highest.
To distinguish the effects of Reynolds number and compressibility, we explore the effect of using Reynolds-number-dependent coefficients for the underlying incompressible Law of the Wall. Specifically, rather than letting the von Kármán constant κ and the damping coefficient A^+ be fixed values of 0.41 and 17, respectively, we recalibrate these values using incompressible reference data at various Reynolds numbers. We employ the DNS data from five incompressible turbulent channel flows <cit.> with friction Reynolds numbers Re_τ = u_τδ / ν_w = {182, 543, 1000, 1990, 5190}, and fit the least-squares optimal values of κ = {0.400, 0.408, 0.400, 0.391, 0.391} and A^+ = {18.2, 17.4, 17.0, 16.5, 16.5}. Linear interpolation and constant extrapolation of the optimal values are used to define κ and A^+ for all Reynolds numbers. The inverse velocity transformation uses the semi-local wall-normal coordinate y^*, so the incompressible data should be interpreted as a function of Re_τ^* rather than Re_τ. A priori analysis is performed as before using compressible DNS data, but with the optimal coefficients selected according to the Re_τ^* observed in the compressible DNS. In Figure <ref>(a-b), for the case of a turbulent channel flow with Re_τ^* = 190 and M_b = 1.7, there is a modest improvement from using the Reynolds-number-dependent coefficients for the incompressible model. This suggests that at low Reynolds numbers, the deviation of the DNS data for the incompressible constant-property velocity profile from the nominal law of the wall is on the same order as the deviation between the constant-coefficient model and the compressible DNS velocity profile. However, there is not a complete collapse of the model with Reynolds-number-dependent coefficients with the compressible DNS. This is likely attributed to the documented error in the compressible velocity transformation at Re_τ^* ≲ 200 <cit.>. In Figure <ref>(c-d), the case of a turbulent channel flow with Re_τ^* = 590 and M_b = 1.7 is considered. The Reynolds number is high enough that the optimal and constant coefficients are similar; thus, the performance of the present model with either set of coefficients is similar. Overall, there is no significant sensitivity to tuning the coefficients, so, for simplicity, we use the constant coefficients of κ=0.41 and A^+=17 for the remainder of this manuscript.
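For reference, a small sketch of the Reynolds-number-dependent coefficient lookup described above (linear interpolation with constant extrapolation, using the calibrated values quoted in the text) could be:

import numpy as np

Re_tau_ref = np.array([182.0, 543.0, 1000.0, 1990.0, 5190.0])
kappa_ref  = np.array([0.400, 0.408, 0.400, 0.391, 0.391])
A_plus_ref = np.array([18.2, 17.4, 17.0, 16.5, 16.5])

def coefficients(Re_tau_star):
    """kappa and A^+ at a given Re_tau^*; np.interp clamps to the end values outside
    the calibrated range, which realizes the constant extrapolation described above."""
    return (np.interp(Re_tau_star, Re_tau_ref, kappa_ref),
            np.interp(Re_tau_star, Re_tau_ref, A_plus_ref))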
Two more recently developed compressible wall models are considered. The first is developed by <cit.>; they show that the damping function in the classical model (Eq. (<ref>)) is consistent with the velocity transformation of <cit.>, which has been shown to be less accurate in channel flows than the velocity transformation of <cit.>. Therefore, <cit.> rewrite the damping function in terms of y^* and show that this makes the model consistent with the Trettel-Larsson transformation. The second additional model considered is proposed by <cit.>, which also uses the semi-local damping function and further replaces the constant turbulent Prandtl number assumption of the classical model with an explicit function of y^*. In Figure <ref>, these two additional wall models are compared with the classical and present wall models. Figure <ref>(a-d) indicate that all models are performing well in the channel flows except for the classical model. This behavior is explained by the behavior of the underlying velocity transformations. The models of <cit.> and <cit.> use the Trettel-Larsson transformation and the present model uses the total-stress-based transformation <cit.>. Both of these transformations are well established to outperform the van Driest transformation (used by the classical model) in channel flows. In Figures <ref>(e-f) and <ref>(g-h), the models are applied to boundary layers with cooled and heated walls, respectively. For both cases the classical model is the least accurate likely due to the inaccuracy of the van Driest transformation for boundary layers with strong heat transfer <cit.>, as the velocity transformation is the only difference between the classical model and that of <cit.>. Also for both cases, the models that use semi-local damping <cit.> perform almost identically, suggesting limited sensitivity in these flows to the change in turbulent Prandtl number model proposed by <cit.>. For the heated boundary layer, the present model slightly improves the prediction of the temperature peak and the log slope of the velocity compared to the semi-local damping models. For the cooled boundary layer, there is a more substantial improvement from the present model for the log slope of the velocity but the temperature profiles are only slightly improved. These improvements of the present model over the semi-local damping models are consistent with the improvements of the total-stress-based transformation over the Trettel-Larsson transformation for boundary layers with strong heat transfer.
§ A POSTERIORI WMLES RESULTS
In this section, several WMLES simulations are conducted using charLES, a high-fidelity compressible finite-volume code <cit.>. The numerical method consists of a low-dissipation, approximately entropy-preserving scheme, which utilizes artificial bulk viscosity to capture the solution discontinuities. Additional details about the solver and a summary of validation campaigns are available in <cit.>.
The WMLESs conducted herein are compressible turbulent channel flows driven with uniform volumetric momentum and energy source terms to achieve the same bulk Mach number M_b and bulk Reynolds number Re_b conditions of the DNS simulations of <cit.> as summarized in table <ref>.
The cases are run on a domain of size (π× 2 ×π√(3)/4)δ with periodic boundary conditions in the streamwise (first) and spanwise (third) dimensions. The mean profiles and fluxes were insensitive to doubling of the streamwise and spanwise domain sizes. Consistent with the DNS simulations, the viscosity is described by μ/μ_ref=(T/T_ref)^0.75 and Pr = 0.7. All cases are initialized from a uniform solution with the target bulk Mach number and Reynolds number, and zero velocity in the wall-normal and spanwise directions. The simulations are allowed to transition from laminar to turbulent states naturally and are run for ∼500 eddy turnover times δ/u_τ. To challenge the wall model and isolate the effect of near-wall numerical errors <cit.>, the wall model matching location is placed at y_m=0.3δ and a coarse grid of 12 points per half channel height is used for all simulations unless otherwise indicated. The computational cost of the present model is similar to that of the classical model. The present model varies between being 7% faster and 32% slower depending on the Reynolds number, matching location, and Mach number. No effort was made to optimize the performance of the present model, so these numbers are just meant to indicate that the approximate cost of the model is similar in the cases tested. In general, modest differences in the cost of a wall model can be efficiently amortized over parallel processors via load balancing that assigns fewer control volumes to processors that contain more boundary faces, but this is not used in the present study.
The velocity and temperature profiles from WMLES are shown in Figure <ref> and <ref> for turbulent channel flows at four combinations of Reynolds and Mach numbers. In all cases, the present model is significantly more accurate than the classical model for the prediction of velocity and temperature with respect to the reference DNS solutions. For these cases and the others listed in table <ref>, the errors in the predictions of the wall shear stress and the wall heat flux are shown in Figure <ref>.
The wall model is based on the inversion of the total-stress-based velocity transformation <cit.> and that was observed to have the greatest improvement over classical approaches in cases with strong heat transfer. This explains why the errors from the classical wall model grow significantly with the strong heat transfer, but the errors from the present model are rather small and do not vary with heat flux.
The primary quantities of interest for WMLES are the predictions of the mean profiles and fluxes. The fluctuating parts of LES solutions are not expected to exactly agree with DNS results unless the WMLES is conducted with DNS-like resolution, which is impractical. Nevertheless, the effect of wall models on the fluctuating part of the LES solution is presented for comparison between the present and classical models. Figures <ref> and <ref> include profiles of the LES-resolved turbulent Mach number M_t = u″/√(γ R T̃) and the LES temperature fluctuations T″, where (·)″ denotes the Favre fluctuation, i.e., the deviation of a quantity from its Favre average. There is an improvement in the predictions of the fluctuating statistics by the present model compared to those by the classical model. An accurate prediction of second-order statistics is unlikely without an accurate prediction of mean statistics. Thus, the improved second-order statistics of the present model are likely a consequence of its improved mean statistics compared to those of the classical model (see Figures <ref> and <ref>). However, correct prediction of the mean field is not sufficient for the accurate prediction of second-order statistics in LES. In fact, the fluctuations in the LES results are generally over-predicted compared to the DNS data. The over-prediction may be due in part to the wall-blocking effect of stress-based wall models <cit.>. Given the coarse resolution of twelve points across the channel half height, numerical errors and subgrid-scale model errors are certainly contributing. The subgrid-scale model has not been adapted for compressibility other than by accounting for variable properties <cit.>. The turbulent Mach numbers are on the order of 0.3, which is sufficiently high that modeling for dilatational dissipation is a promising path to further improvements of the fluctuating statistics in the volume of the LES domain. Such research may be pursued independently of the current study focusing on wall modeling and the prediction of mean profiles and fluxes.
§.§ Sensitivity to numerical resolution and the matching location
In WMLES, the wall model exchanges data with the outer LES solver at the matching location. The modeling error in the inner wall modeled equations may grow as the matching distance increases, which motivates placing the matching location near the wall. On the other hand, the matching location should be far enough from the wall in terms of the LES mesh resolution so that the LES solver can resolve the large scales of turbulence at the height of the matching location. Otherwise, numerical errors may contaminate the matching data that is provided as input to the wall model. <cit.> demonstrate this trade-off and how LES numerical errors contaminate the wall-modeled solution if the matching distance is on the order of the wall-normal grid resolution. The optimal matching distance will depend on the accuracy of a specific LES solver, but a typical choice is y_m ≥ 3Δ <cit.>, where Δ is the wall-normal grid spacing near the wall.
To evaluate the convergence and sensitivity of the presently proposed wall model, two types of mesh convergence studies are considered. In the first study, the matching location is held fixed at y_m=0.3δ, which corresponds in semi-local units to y_m^*=186 and y_m^*=237 for the present model and classical model cases across all resolutions.
For the case of M_b=3.0 and Re_τ=1800, the numerical resolution of the WMLES is varied. In Figure <ref>, the WMLES solutions are shown for three LES resolutions with 9, 18, and 36 grid points across the channel half-height. The uniform hexagonally close-packed mesh topology with global refinement is employed, resulting in three meshes with 2.0×10^4, 1.6× 10^5, and 1.3× 10^6 control volumes, respectively (note that the reference DNS uses as many as 6.4× 10^8 control volumes).
In this study, the LES numerical errors at the matching location are expected to diminish as the resolution is refined, but modeling errors from using the wall model over the domain y∈[0,0.3δ] are not expected to change with resolution. For this reason, the classical model shows a large error in the log intercept of the velocity profile that is persistent with refinement and consistent with a priori analysis in Figure <ref>(a). For the finest resolution with the present model, the grid point nearest to the wall exhibits an error that is persistent with refinement, which is consistent with the observations of <cit.> and does not affect the accuracy of the simulation since the inner solution is applicable for y<y_m. For both the present and classical models, the results are only weakly dependent on the grid resolution. This suggests that the leading source of error for the simulations with the classical wall model is in fact the wall model rather than the numerical or subgrid-scale modeling errors, even on the coarsest simulation with 9 grid points per channel half height.
In the second grid convergence study, the models are tested in the way that WMLES is typically used in practice. That is, the matching distance is moved toward the wall as the grid is refined. In this study, two channel flows with different Reynolds number conditions are considered for three LES resolutions with 12, 24, and 48 grid points across the channel half height. The matching locations are y_m= 0.3δ, 0.15δ, and 0.075δ, respectively, which corresponds to y_m = 4 Δ for all cases, thus the effect of near-wall LES numerical errors is expected to be minor <cit.>. In Figure <ref>, the convergence study is performed for M_b=3.0 and Re_τ^*=590, and a lower Reynolds number case of M_b=3.0 and Re_τ^*=200 is shown in Figure <ref>. In both cases, the accuracy of the present model is relatively high and insensitive to mesh resolution compared to that of the classical model. For the higher Reynolds number test, the matching locations in semi-local units are always in the logarithmic region of the boundary layer. Therefore, the WMLES results are not sensitive to refinement over this range of resolutions. However, for the lower Reynolds number case, the most refined meshes lead to semi-local matching locations y_m^* in the buffer region. For the classical model, because the relative error of the modeled U^+ versus the DNS U^+ is maximal in the region of the buffer layer and early log layer (compare to similar a priori results in Figure <ref>), the convergence behavior for the classical model is complex in this regime. In other words, as the mesh is refined, although the LES numerical errors are diminishing, the wall modeling errors for the classical model may increase or decrease depending on the matching location since the relative modeling error does not monotonically reduce with wall-normal distance. On the other hand, the outer solution of the present model is relatively accurate irrespective of the matching location because the inner wall-modeled solution agrees well with the DNS solution throughout the viscous sublayer, buffer layer, and log layer (which is consistent with similar a priori results in Figure <ref>).
§ CONCLUSION
In this work, a wall model is proposed for turbulent wall-bounded flows with heat transfer. The model uses an established ODE description of incompressible flow, transforms that equation to account for compressibility effects, and is closed with an algebraic temperature-velocity relation. The resulting model can accurately estimate the near-wall profiles of temperature and velocity when the matching location is in the inner layer. This model is suitable for deployment as a boundary condition for an outer LES or RANS solver, an inflow generation scheme, or the base flow for perturbation methods, possibly with the incompressible model augmented with a wake profile for the outer layer of the boundary layer. The proposed method can only be as accurate as the models on which it is based, namely, the forward velocity transformation and the algebraic temperature-velocity relation. While these models have been widely validated in channel and pipe flows and boundary layers with moderate pressure gradients, further studies in complex flows are warranted, e.g., the developing boundary layers on a blunt body behind a curved shock.
The model is first tested a priori to verify that it can recover the boundary layer velocity and temperature data when provided with matching data from DNS. Numerical results reveal that the model accurately recovers the targeted profiles well, and the predicted wall stress and heat flux are within a few percent of their expected values for a wide database of DNS data for high-Mach-number turbulent channel flows, pipe flows, and boundary layers (48 cases with edge Mach numbers in the range of 0.77–11 and semi-local friction Reynolds numbers in the range of 170–5700). The model is also tested a posteriori as a boundary condition for WMLES in turbulent channel flows with bulk Mach numbers M_b=0.7–4.0 and Re_τ=320–1800. Especially in flows with strong heat transfer, the proposed model is substantially more accurate than the classical ODE-based near-wall model. The superior performance of the present model is due to two key differences with respect to the classical model: 1) the constant turbulent Prandtl number assumption is replaced with a more accurate algebraic temperature-velocity relation and 2) the van Driest velocity transformation is replaced with the total-shear-stress velocity transformation <cit.>.
§ ACKNOWLEDGMENTS
Kevin Griffin acknowledges support from the National Defense Science and Engineering Graduate Fellowship, the Stanford Graduate Fellowship, the Stanford Lieberman Fellowship, and the Exascale Computing Project (Grant17-SC-20SC), a collaborative effort of two US Department of Energy organizations (Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering, and early testbed platforms, in support of the nation’s exascale computing imperative. Lin Fu acknowledges funding from the Research Grants Council (RGC) of the Government of Hong Kong Special Administrative Region (HKSAR) with RGC/ECS Project (No. 26200222) and from the Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515011779). Parviz Moin acknowledges support from NASA grant (No. NNX15AU93A).
We wish to gratefully acknowledge helpful comments from Sanjeeb T. Bose.
This work was authored in part by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes.
[Declaration of interests]The authors declare that they do not have any financial or non-financial conflict of interests.
[Data availability statement]The data that support the findings of this study are available from the corresponding authors upon reasonable request. Matlab code implementing the proposed model will be available in the following public repository after the manuscript is accepted for publication:
https://github.com/kevingriffin1/comp_wm<https://github.com/kevingriffin1/comp_wm>
[Author ORCID]
Kevin Griffin https://orcid.org/0000-0002-0866-6224(0000-0002-0866-6224);
Lin Fu https://orcid.org/0000-0001-8979-8415(0000-0001-8979-8415)
|
http://arxiv.org/abs/2307.05666v1 | 20230711180000 | JASMINE: Near-Infrared Astrometry and Time Series Photometry Science | [
"Daisuke Kawata",
"Hajime Kawahara",
"Naoteru Gouda",
"Nathan J. Secrest",
"Ryouhei Kano",
"Hirokazu Kataza",
"Naoki Isobe",
"Ryou Ohsawa",
"Fumihiko Usui",
"Yoshiyuki Yamada",
"Alister W. Graham",
"Alex R. Pettitt",
"Hideki Asada",
"Junichi Baba",
"Kenji Bekki",
"Bryan N. Dorland",
"Michiko Fujii",
"Akihiko Fukui",
"Kohei Hattori",
"Teruyuki Hirano",
"Takafumi Kamizuka",
"Shingo Kashima",
"Norita Kawanaka",
"Yui Kawashima",
"Sergei A. Klioner",
"Takanori Kodama",
"Naoki Koshimoto",
"Takayuki Kotani",
"Masayuki Kuzuhara",
"Stephen E. Levine",
"Steven R. Majewski",
"Kento Masuda",
"Noriyuki Matsunaga",
"Kohei Miyakawa",
"Makoko Miyoshi",
"Kumiko Morihana",
"Ryoichi Nishi",
"Yuta Notsu",
"Masashi Omiya",
"Jason Sanders",
"Ataru Tanikawa",
"Masahiro Tsujimoto",
"Taihei Yano",
"Masataka Aizawa",
"Ko Arimatsu",
"Michael Biermann",
"Celine Boehm",
"Masashi Chiba",
"Victor P. Debattista",
"Ortwin Gerhard",
"Masayuki Hirabayashi",
"David Hobbs",
"Bungo Ikenoue",
"Hideyuki Izumiura",
"Carme Jordi",
"Naoki Kohara",
"Wolfgang Löffler",
"Xavier Luri",
"Ichiro Mase",
"Andrea Miglio",
"Kazuhisa Mitsuda",
"Trent Newswander",
"Shogo Nishiyama",
"Yoshiyuki Obuchi",
"Takafumi Ootsubo",
"Masami Ouchi",
"Masanobu Ozaki",
"Michael Perryman",
"Timo Prusti",
"Pau Ramos",
"Justin I. Read",
"R. Michael Rich",
"Ralph Schönrich",
"Minori Shikauchi",
"Risa Shimizu",
"Yoshinori Suematsu",
"Shotaro Tada",
"Aoi Takahashi",
"Takayuki Tatekawa",
"Daisuke Tatsumi",
"Takuji Tsujimoto",
"Toshihiro Tsuzuki",
"Seitaro Urakawa",
"Fumihiro Uraguchi",
"Shin Utsunomiya",
"Vincent Van Eylen",
"Floor van Leeuwen",
"Takehiko Wada",
"Nicholas A. Walton"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.EP",
"astro-ph.GA",
"astro-ph.SR"
] |
space vehicles: instruments — astrometry — Galaxy: center — techniques: photometric — infrared: planetary systems
Japan Astrometry Satellite Mission for INfrared Exploration (JASMINE) is a planned M-class science space mission by the Institute of Space and Astronautical Science, the Japan Aerospace Exploration Agency. JASMINE has two main science goals. One is Galactic archaeology with the Galactic Center Survey, which aims to reveal the Milky Way's central core structure and formation history from Gaia-level (∼25 μas) astrometry in the Near-Infrared (NIR) H_w-band (1.0-1.6 μm). The other is the Exoplanet Survey, which aims to discover transiting Earth-like exoplanets in the habitable zone from the NIR time-series photometry of M dwarfs, when the Galactic center is not accessible. We introduce the mission, review its many science objectives and present the instrument concept. JASMINE will be the first dedicated NIR astrometry space mission and will provide precise astrometric information of the stars in the Galactic center, taking advantage of the significantly lower extinction in the NIR band. The precise astrometry is obtained by taking many short-exposure images.
Hence, the Galactic center survey data will be valuable for studies of exoplanet transits, asteroseismology, variable stars and microlensing, including the discovery of (intermediate-mass) black holes. We highlight a swath of such potential science, and also describe synergies with other missions.
§ INTRODUCTION
Since the discovery that the Galaxy is one among hundreds of billions in a vast cosmic web – a gravitational framework set in place at the very first moments of the universe – it has been understood that the Galaxy is an ancient record, carrying within it the products and imprints of physics of the universe and the processes that eventually resulted in our Sun, Earth, and life itself.
The near-infrared (NIR) astrometry space mission, Japan Astrometry Satellite Mission for INfrared Exploration (JASMINE[The mission detailed in this paper was originally known as Small-JASMINE in previous documentation, but now is simply referred to as JASMINE.]^,[
<http://jasmine.nao.ac.jp/index-en.html>]) <cit.>, has been selected for an M-class space mission by the Institute of Space and Astronautical Science (ISAS), the Japan Aerospace Exploration Agency (JAXA), with a planned launch in 2028. JASMINE has two main science goals. One is to decipher the ancient stellar fossil record: to reveal the Milky Way's central core structure and its formation history from Gaia-level (∼25 μas) astrometry in the NIR H_w-band (1.0–1.6 μm, H_w≈0.9J+0.1H-0.06(J-H)^2).
This is referred to as the Galactic Center Survey (GCS), spanning 2.52 deg^2.
The other goal is the ExoPlanet Survey (EPS) to discover Earth-like habitable exoplanets from the NIR time-series photometry of M dwarf transits, when the Galactic center is not accessible to JASMINE. JASMINE will be the first dedicated NIR astrometry space mission, and will provide precise astrometric information of the stars in the Galactic center, taking advantage of the significantly lower extinction in the NIR band than in the visual band.
The precise astrometry is obtained by taking many short-exposure (∼12.5 sec) images.
Hence, JASMINE can also provide time series NIR photometry data in addition to precise astrometry.
The aim of this paper is to describe the novel science opened up by the unique new capabilities of the mission.
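As a small aside, the approximate H_w-band relation quoted above can be evaluated directly from J- and H-band magnitudes; the following one-line Python sketch simply restates that relation and is not an official calibration:

def hw_mag(J, H):
    """Approximate H_w magnitude from J and H: H_w ≈ 0.9 J + 0.1 H - 0.06 (J - H)^2."""
    return 0.9 * J + 0.1 * H - 0.06 * (J - H) ** 2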
§.§ Gaia revolution and next step: NIR astrometry
The European Space Agency's Gaia mission <cit.> made its second data release <cit.> in April 2018 and subsequent third data release <cit.>, which provided measurements of the positions, motions and photometric properties for more than one billion stars with unprecedented precision. Gaia DR2 and DR3, in combination with ground-based spectroscopic surveys such as the RAdial Velocity Experiment <cit.>, the Sloan Digital Sky Survey (SDSS), the Sloan Extension for Galactic Understanding and Exploration <cit.>, the Apache Point Observatory Galactic Evolution Experiment <cit.>, the Large Sky Area Multi-Object Fiber Spectroscopic Telescope <cit.>, the Gaia-ESO survey <cit.> and the Galactic Archaeology with HERMES survey <cit.>, have revolutionized our view of the Galactic disk and halo, including the various stellar streams and the impact of satellite galaxies, and the interplay between the disks and stellar and dark halos.
For the Galactic disk, Gaia enabled us to analyze the 6D phase-space distribution (position, proper motion, radial velocity, parallax) of stars within a few kpc from the Sun for the first time, and revealed that the Galactic disk is heavily perturbed <cit.>. These fine kinematic structures have provided insight into the origin of the spiral structure, possibly transient <cit.>, and the pattern speed of the bar, including the possibility of it slowing down <cit.>. In addition, the discovery of phase space features correlating in and out of plane motion of the disk stars
<cit.> opened up a heated discussion of their origin, which has been attributed to various causes, including phase mixing after a tidal perturbation by a dwarf galaxy passing through the Galactic plane <cit.>, bar buckling <cit.> and/or a persistent dark matter wake <cit.>.
Regarding the stellar Galactic halo, Gaia and ground-based spectroscopic data revealed that a significant fraction of halo stars are moving on very radial orbits.
This has been interpreted as a remnant of a relatively large galaxy, so-called Gaia-Sausage-Enceladus, falling into the Milky Way about 10 Gyr ago <cit.>. The impact of this merger is considered to have disrupted the proto-Galactic disk and created the inner metal rich halo stars, which are suggested to be of the same population as the thick disk <cit.>.
Future Gaia data releases will undoubtedly yield further discoveries pertaining to the Galactic disk and halo structures and provide an even deeper insight into the formation and evolution history of the Milky Way. However, due to the high dust extinction in the optical band where Gaia operates (∼0.6), Gaia cannot provide reliable astrometry for stars near the Galactic nucleus. This is an unfortunate hindrance, as this is where the history of the first structure and the formation of the Super-Massive Black Hole (SMBH) of the Galaxy should be imprinted.
The GCS targets the Galactic center field (the projected Galactocentric radius of R_ GC≲100 pc).
JASMINE is designed to achieve Gaia-level astrometric accuracy toward the Galactic center in the NIR H_w-band.
The GCS will provide precise astrometric information of the stars in the Galactic center. In this survey, for objects brighter than H_w=12.5 mag, JASMINE will achieve a parallax accuracy of σ_π∼25 μas and a proper motion accuracy of σ_μ∼25 μas yr^-1. JASMINE plans to downlink the data for all stars brighter than H_w=14.5 mag. For stars at H_w=14.5 mag, JASMINE will achieve an accuracy of σ_μ=125 μas yr^-1.
Section <ref> describes the scientific details of the GCS.
§.§ Habitable Zone Exoplanet search
A major ambition of modern astronomy is to identify the biosignatures of terrestrial exoplanets located in Habitable Zones (HZ). Directly imaged planets and transiting planets are the two primary targets for the astrobiological exploration of exoplanets. The space-based direct imaging missions, HabEx <cit.> and LUVOIR <cit.> and its successor LUVeX <cit.>, are capable of searching for metabolic gas biosignatures in the atmosphere of planets orbiting solar-type stars. The James Webb Space Telescope (JWST) and subsequent planned missions with larger telescopes, such as LUVeX, will characterize terrestrial planet atmospheres using stable low-dispersion spectroscopy of transiting exoplanets <cit.>. Extremely large ground-based telescopes such as the Thirty Meter Telescope (TMT), the Giant Magellan Telescope (GMT), and the European Extremely Large Telescope (E-ELT), will search for biosignatures using transmission spectroscopy <cit.> and direct imaging spectroscopy <cit.> of terrestrial planets around late-type stars. There are also plans to use the UV-spectrograph on the World Space Observatory-Ultraviolet (WSO-UV) and Life-environmentology, Astronomy, and PlanetarYUltraviolet Telescope Assembly (LAPYUTA) to search for the atomic oxygen (OI) line (130 nm), a potential biosignature, in the upper atmosphere of terrestrial transiting exoplanets orbiting late-type stars <cit.>.
The future exploration of terrestrial planet atmospheres relies crucially on identifying suitable targets prior to characterization. Among transiting planets, those in the HZ around late-type stars are best suited, for two reasons. Firstly, HZ planets around low-luminosity, late-type stars have much shorter orbital periods than the Earth. This makes it more plausible to find them via the transit method in the first place, and it also allows repeated monitoring of their transits. Secondly, the small radii of late-type stars deepen the transit signal compared to Sun-like stars. Various efforts have been made to detect terrestrial planets transiting late-type dwarfs, from both the ground and space.
Several planets in the HZ of an ultracool dwarf, TRAPPIST-1, are good examples of transiting terrestrial planets that are ideal for further atmospheric characterization <cit.>.
Observations with NASA's Spitzer telescope showed that Earth-like planets of the TRAPPIST-1 system lie in the HZ. Unfortunately, the Spitzer mission was terminated in January 2020, removing a key instrument from the exoplanet community's toolbox.
The Transiting Exoplanet Survey Satellite <cit.> is an all-sky survey that discovered such planets around an early M dwarf, TOI-700 d and e <cit.>. With its 10.5 cm diameter cameras, TESS is mainly suited for targeting bright early- to mid-M dwarfs. In contrast, ground-based surveys such as MEarth <cit.>, TRAPPIST, and SPECULOOS <cit.> take advantage of their larger telescope apertures to find terrestrial planets around ultracool dwarfs with even later spectral types.
In summer and winter[JASMINE can observe the Galactic center field only in spring and fall, because the Sun is aligned in the same direction as the Galactic center in winter. In summer, it is prohibitively difficult for the satellite to meet the required thermal conditions for precision astrometry.], the NIR and high-cadence time-series photometry capability of JASMINE will be used to conduct the exoplanet transit survey, which aims to fill a gap left by TESS and ground-based surveys. JASMINE has a NIR photometry capability similar to Spitzer and offers long-term (a few weeks) follow-up transit observations of exoplanet systems discovered by TESS and ground-based surveys. This allows for the detection of outer orbiting planets that could be in the HZ. The scientific goals and strategies of the exoplanet transit survey are described in section <ref>, including other potential ways of finding exoplanets using microlensing and astrometry in the GCS field.
§.§ The JASMINE mission
To achieve these main science objectives of the Galactic Center Archaeology survey and the Exoplanet survey,
JASMINE will have a scientific payload with a 36 cm aperture telescope with a single focal plane. JASMINE will provide photometric observations with a passband of 1.0–1.6 μm and a field of view (FoV) of 0.55^∘× 0.55^∘. JASMINE will orbit the Earth in a temperature-stable Sun-synchronous orbit. Figure <ref> shows an artist's impression of the spacecraft. The telescope structure is made of Super-super invar alloy <cit.>. A large Sun shield and a telescope hood are installed to maintain thermal stability. The satellite is expected to be launched aboard a JAXA Epsilon S Launch Vehicle from the Uchinoura Space Center in Japan in 2028. More detailed instrumental specifications and survey strategies of the mission are provided in section <ref>.
The unique NIR astrometry and time-series photometry capabilities of JASMINE will be applicable to other science targets, and the unique data from the GCS will be valuable for a wide range of scientific topics. In section <ref>, we summarize some examples of science cases that utilize the data from JASMINE.
Section <ref> summarizes the potential synergies with other relevant projects that will be operating when JASMINE launches. Finally, section <ref> provides a brief summary of the paper.
§ GALACTIC CENTER SURVEY (GCS): GALACTIC CENTER ARCHAEOLOGY
At the heart of the Galaxy, the Galactic center, lies an SMBH with a mass of 4.297×10^6 M_⊙ <cit.>,
the origin of which remains an area of intense study in modern astrophysics. What is known is that the masses, M_BH, of SMBHs correlate with the stellar mass, concentration, and velocity dispersion, σ_⋆, of their host bulge <cit.>, and there is an emerging dependence on the host galaxy morphology <cit.>.
The initial relationships were unexpected because the gravitational sphere of influence of the SMBH has a radius on the order of ∼1 parsec, thousands of times smaller than the scale of the Galactic bulge <cit.>.
Since the turn of the millennium, when the initial correlations were discovered, this connection has arguably been the singular unifying focus of research on SMBHs and galaxies. It is slowly coming to be understood as resulting from an interplay between periods of baryonic mass accretion onto the SMBH and neighboring stellar populations, and the “feedback” from the resultant active galactic nucleus (AGN) activity <cit.>, a process that may have started at the earliest epochs <cit.>. Indeed, the existence of SMBHs at redshifts exceeding 7.5 <cit.>, when the universe was well under a billion years old, implies that much cosmic history may be imprinted upon the environment of the Galactic center, motivating Galactic archaeology <cit.> at the Galactic center.
While it is now apparent that major `dry' mergers have established the bulge-black hole (BH) scaling relations in elliptical galaxies <cit.>, with AGN feedback relegated to a maintenance role <cit.>, the new morphology-aware scaling relations have recently revealed how accretion and minor mergers have likely built spiral galaxies from lenticular galaxies <cit.>. Indeed, it is not just the Milky Way which shows evidence of disrupted satellites <cit.>. Moreover, some of these satellites may be delivering massive BHs into the spiral galaxies, possibly evidenced by the X-ray point source in the disrupted Nikhuli system captured by NGC 4424 <cit.>.
Galactic center archaeology has been a distinctly multi-wavelength field. The highest-energy observations have appropriately revealed the most conspicuous evidence of past AGN activity from the SMBH of the Milky Way, Sgr A*, in the form of the Fermi bubbles seen in γ-rays, which extend to ∼10 kpc and reveal AGN activity several Myr ago <cit.>. The X-rays likewise implicate past AGN activity, such as the presence of Fe Kα (hν=6.4 keV) fluorescence at the interfaces of cold molecular regions in the vicinity of Sgr A* <cit.>, or the presence of a hot-gas cavitation, symmetric about Sgr A*, extending out to several kpc <cit.>. These “light echoes” are also seen at lower energies <cit.>, with Hα emission seen in the Magellanic Stream requiring illumination by UV ionizing photons generally only possible from AGNs <cit.>.
From visual to NIR wavelengths, Galactic center archaeology studies have focused predominantly on the history and makeup of the Galaxy's stellar populations. Several key parameters, such as the chemical abundances, e.g. [Fe/H] and [α/Fe], surface gravity, effective temperature and kinematics, are of interest to these studies. Although there are many optical photometric and spectroscopic surveys of stars in the central region of the Galaxy, they avoid the Galactic center itself because of the extremely high extinction at optical wavelengths and instead focus on the global properties of the bulge/bar. To study the Galactic center, NIR observations are crucial. For example, photometric data from the IRSF/Simultaneous Infrared Imager for Unbiased Survey <cit.>, the Two Micron All Sky Survey <cit.>, VISTA Variables in the Via Lactea <cit.> and GALACTICNUCLEUS <cit.>
revealed the detailed stellar structure of the Galactic center, including the Nuclear Star Cluster (NSC, section <ref>) and Nuclear Stellar Disk (NSD: section <ref>). APOGEE, with its NIR (1.51–1.70 μm) high spectral resolution (R∼22500) spectroscopy, has been critical for understanding the chemical makeup of stellar populations in the Galaxy, as well as providing line-of-sight velocities. Results from APOGEE include kinematic and structural differentiation between metal-poor and metal-rich stars in the inner Galaxy <cit.>, detailed chemical abundances <cit.>, and a measurement of the rotation of the NSD <cit.>, which may extend to kpc scales in size <cit.>. Recently, using the K-band Multi-Object Spectrograph (KMOS) on the Very Large Telescope (VLT), <cit.> surveyed the stars in the NSD, and analysed their line-of-sight velocities and metallicities. <cit.> found that the NSD is kinematically cold, and that more metal-rich stars are kinematically colder.
The presence of the NSD in other galaxies has been used to infer the longevity of galaxy bars and their role in transporting gas to the innermost regions <cit.>.
New facilities and methods that use the relic signatures of stars, measured through line-of-sight velocities and chemical abundances, to trace the formation history of the Galaxy have driven an explosion of archaeological studies of the Galactic center. However, high-precision proper motions and parallaxes, critical for providing the remaining three dimensions of phase-space information needed to map the gravitational potential of the Galactic center, have thus far been unavailable. This is because the Galactic center can exhibit upwards of 30 to 60 magnitudes of extinction at visual wavelengths <cit.>, precluding the use of Gaia data. JASMINE will close this gap by mapping the positions, motions, and parallaxes of stars in the NIR band, where the extinction is much lower (A_H_w∼5-6 mag) <cit.>.
The GCS of JASMINE will provide the parallaxes and proper motions of stars in the Galactic center field within a rectangular region of
-1.4^∘<l<0.7^∘ and -0.6^∘<b<0.6^∘. Note that 0.7^∘ corresponds to about 100 pc at the distance of the Galactic center. This survey region is still subject to change, and it may become -0.7^∘<l<1.4^∘ and -0.6^∘<b<0.6^∘, i.e. shifted toward positive longitude without changing the size of the survey area, depending on scientific considerations and operation and/or data analysis requirements. In this paper, unless otherwise stated, we consider the region of -1.4^∘<l<0.7^∘ and -0.6^∘<b<0.6^∘, as highlighted in figure <ref>.
Within this main survey area, JASMINE is expected to produce accurate positions and proper motions for about 120,000 stars down to H_w=14.5 mag, including about 68,000 Galactic center stars. Here, we estimate the number of stars in the Galactic center by considering all stars with J-H>2 mag to be in the Galactic center, although a redder cut is likely required at lower Galactic latitudes. These numbers are obtained from the combined data of the 2MASS point source catalog and the SIRIUS catalog.
The proper motion accuracy achieved by JASMINE for bright stars with H_w<12.5 mag is expected to be about 25 μas yr^-1, and the proper motion accuracy for the faintest stars with H_w=14.5 mag is expected to be about 125 μas yr^-1. These accuracies respectively correspond to velocity accuracies of ∼0.98 km s^-1 and ∼4.9 km s^-1 at the distance of the Galactic center. Recently, careful analysis of long-time-baseline ground-based NIR imaging data has provided sub-mas yr^-1 proper motion accuracy. For example, the VVV InfraRed Astrometry Catalogue <cit.> reaches a mode of the proper motion uncertainty distribution of 0.3 mas yr^-1 in version 2 of VIRAC <cit.>. However, the systematic uncertainties of the ground-based data are difficult to quantify due to atmospheric effects and the distortion of the telescope and detector, which are not well controlled. JASMINE will provide independent and improved measurements of the proper motion from space, as well as a NIR reference frame in the Galactic center region, which will be a valuable asset in improving the proper motion accuracy of these legacy ground-based data. In addition, for bright stars with H_w<12.5 mag JASMINE will provide precise parallax measurements with an accuracy of ∼25 μas. This corresponds to ∼20 % accuracy in the parallax distance for stars in the Galactic center, which enables us to identify Galactic center stars less ambiguously. This unique distance confirmation from parallax measurements will provide a critical verification of the colour selections currently used to sample Galactic center stars.
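As a quick consistency check (using the standard conversion between proper motion and transverse velocity; the numbers are approximate), v_t ≃ 4.74 (μ/ mas yr^-1)(d/ kpc) km s^-1, so μ=25 μas yr^-1 at d=8.275 kpc corresponds to v_t≈0.98 km s^-1, and μ=125 μas yr^-1 to ≈4.9 km s^-1. Likewise, the parallax of the Galactic center is π=1/d≈0.121 mas, so σ_π∼25 μas corresponds to a fractional distance uncertainty of σ_π/π≈20 %, consistent with the values quoted above.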
This section summarizes the science objectives of the GCS. The section is organized by increasing physical scale, from parsec-scale dynamics of the nuclear star cluster in section <ref> to kiloparsec-scale dynamics of the Galactic bar, bulge and inner disk in section <ref>. Note that at a Galactic center distance of 8.275 kpc <cit.>,
the angular scale is 0.04 pc arcsec^-1.
§.§ Kinematics and History of the Nuclear Star Cluster
On scales of only a few parsecs, the stellar structure closest to and, therefore, most affected by Sgr A* is the nuclear star cluster (NSC), which has a stellar mass of about ∼10^7 M_⊙ <cit.>.
The first separation of the NSC from the other components of the Galaxy (NSD/bulge/bar/disc) yielded an NSC light profile that appeared well matched by a Sérsic model with an index of ∼3 and an effective half-light radius of 3.2 pc <cit.>. A recent estimate reports an index of 2.2±0.7 and R_ e=5.1±1.0 pc
<cit.>. However, the core-Sérsic model <cit.>, fit separately to the NSC's different stellar populations, may prove more apt at quantifying it and, in turn, yield insight into its origins and evolution. This could be relevant in the case of binary collisions preferentially removing red giant stars <cit.>.
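For readers who wish to reproduce the shape of such fits, the sketch below evaluates a standard Sérsic surface-brightness profile using the Ciotti & Bertin (1999) approximation for b_n; the index and effective radius mirror the values quoted above, while the normalization I_e is arbitrary and purely illustrative.
\begin{verbatim}
import numpy as np

def sersic_profile(R, I_e, R_e, n):
    """Sersic surface-brightness profile I(R).

    I_e : intensity at the effective (half-light) radius R_e
    n   : Sersic index; b_n uses the Ciotti & Bertin (1999)
          asymptotic approximation, valid for n >~ 0.5.
    """
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return I_e * np.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))

# Illustrative NSC-like parameters (index and R_e quoted in the text;
# the normalisation I_e is arbitrary here).
R = np.logspace(-1, 1.5, 50)                        # projected radius [pc]
profile = sersic_profile(R, I_e=1.0, R_e=5.1, n=2.2)
\end{verbatim}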
When present, NSCs scale with the mass of their host bulge, possibly in a manner dependent on the host galaxy morphology <cit.>. Furthermore, a positive correlation between NSC mass and the central BH mass has also been observed <cit.>.
NSCs in late-type galaxies show
extended periods of star formation <cit.>, resulting in younger stellar populations <cit.>.
The NSC of the Milky Way also shows extended star formation <cit.>. Most of its stars formed more than five Gyr ago, followed by a low level of star formation until a burst of star formation in the last 100 Myr <cit.>. Wolf-Rayet and O- and B-type stars (a few Myr old) are found in the central 0.5 pc of the NSC <cit.>, which indicates in-situ formation of the NSC of the Milky Way, at least for the youngest population <cit.>.
On the other hand, using NIR drift-scan spectroscopic observations, <cit.> found a kinematic misalignment of the NSC with respect to the Galactic plane, as well as a rotating substructure, which suggests past accretion of at least one other star cluster. Dynamical friction may lead to the capture and central deposition of globular clusters <cit.>.
This scenario has some theoretical support, as N-body simulations have shown that an NSC like the one in the Milky Way can be produced by the infall of a few globular clusters that are shredded by the SMBH <cit.>. Interestingly, high-resolution spectroscopy of the NSC <cit.> finds a wide abundance range, spanning [M/H]=-1.25 to supersolar metallicities, [M/H]>0.3. This is confirmed in larger low-resolution surveys, e.g. using the integral field spectrograph KMOS on the VLT <cit.>. They further find that the subsolar population with [M/H]<0 shows an asymmetric spatial distribution, which might be the signature of a recent globular cluster merger.
Hence, the nature and origin of the NSC of the Milky Way are still under debate, and could be linked to in-situ star formation at the Galactic center, mergers of globular clusters, and/or ancient mergers with satellites and other galaxies. These are distinct dynamical processes, for which spectroscopic and numerical studies have made great headway so far. What is missing are the two additional dimensions of kinematic information from proper motions that an astrometric mission such as JASMINE can provide. We expect to observe about 100 NSC stars with H_w<14.5 mag.
Although this is a small number of stars, the novel ∼100 μas yr^-1 level of proper motion precision for the NSC stars, combined with high-precision radial velocities, will likely yield accelerations and orbits, and enhance our understanding of the nature and origin of the NSC.
§.§ SMBH Formation History
As alluded to in the beginning of this section, remnants of the earliest epochs can be imprinted on the centers of galaxies, and indeed the formation of the SMBHs themselves may leave a mark on their environment. The presence of 10^9 M_⊙ SMBHs that powered quasars when the universe was only a few hundred Myr old <cit.> is a major conundrum for the mechanism of formation of SMBHs <cit.>.
This is especially challenging for the `light' seed scenario, where stellar-mass BHs (≲100 M_⊙) formed from massive first-generation stars (Pop III stars) in mini-halos. Even if accreting at the maximum stable rate set by the balance of gravity and radiation pressure (the Eddington limit), these light BHs did not have sufficient time to grow into the monster SMBHs powering the earliest quasars.
Super-Eddington accretion onto disk-fed BHs, rather than the idealised spherical model, may, however, circumvent this concern <cit.>.
Nonetheless, this has made an alternative “heavy" seed scenario more attractive <cit.>. If a metal-free halo is massive enough to host atomic cooling, i.e. T_ vir∼10^4 K, and is exposed to strong Lyman-Werner radiation, H_2 cooling is completely suppressed, and the high temperature prevents fragmentation <cit.>. This leads to the formation of a super-massive star, a.k.a. a quasi-star <cit.>, which directly collapses into a BH with a mass of ∼10^4-6 M_⊙ <cit.>. These massive BH seeds also form in relatively massive halos with high-density gas, which enables further gas accretion onto the BH. Hence, the heavy seeds can grow rapidly enough to explain the massive SMBHs found at z∼7.
Although the heavy seed scenario is attractive for explaining SMBHs at high redshift, heavy seeds are considered to be rare, ∼10^-6-10^-4 cMpc^-3 <cit.>. This is a smaller number density than that of Milky Way-sized galaxies. It is an interesting question whether or not the SMBH of the Milky Way formed from a single massive direct-collapse BH, growing mainly via accretion. Alternatively, the SMBH of the Milky Way could have formed from the more abundant Pop III origin BHs, via accretion and mergers, i.e., the light seed scenario. The light seeds from Pop III stars can be as massive as 1,000 M_⊙ <cit.>, and, using cosmological numerical simulations, <cit.> demonstrated that such light seeds can reproduce the observed BH mass-stellar velocity dispersion relation. In this scenario, BHs are expected to be found ubiquitously in galaxies of all masses, with intermediate-mass BHs (IMBHs) <cit.> of M_ BH<10^5–10^6 M_⊙ perhaps common in low-mass galaxies <cit.>.
Building off the promise of earlier discoveries of AGNs in local dwarf galaxies <cit.>, this predictive framework has fueled an intense search for IMBHs in dwarf galaxies, with some promising results <cit.>, including the dynamically estimated BH mass of NGC 205, M_ BH=6.8^+95.6_-6.7×10^3 M_⊙ <cit.>. The mergers of BHs with ∼10^4-10^7 M_⊙ at galactic centers are a prime gravitational wave target of ESA's Laser Interferometer Space Antenna <cit.>. These multi-messenger astronomy sources will provide strong constraints on the population and merger rates of IMBHs, ultimately revealing the formation mechanism of SMBHs <cit.> and adding detail to the suspected accretion-driven origin of spiral galaxies <cit.>.
JASMINE can provide an independent test of the significance of IMBH mergers for the formation of the SMBH of the Milky Way, because if the SMBH was built up by mergers of IMBHs, these coalescences would have heated up the older stars within 100 pc of the Galactic center, with the stellar profile becoming cored <cit.>. To demonstrate this, we set up an N-body simulation of a spherical bulge model, following <cit.>. Although, as discussed in the previous and following sections, the Galactic center has a complex structure, including an NSC (section <ref>) and an NSD (section <ref>), here, for simplicity, we consider a spherical isotropic bulge component only. Hence, this merely serves to show the potential capability of the JASMINE data with a simple model. More complex modelling studies are required to address how the different populations of Galactic center stars are affected by the mergers of IMBHs, and how the presence of gas may prevent the ejection of stars. We first assumed a spherical stellar component following a Hernquist profile <cit.> with a mass of 9×10^9 M_⊙ and a half-mass radius of 1.1 kpc[Our assumed bulge mass is slightly larger than the current upper limit of the bulge mass in the Milky Way from the kinematics of stars in the Galactic center region, e.g. about 10 % of the disk mass, i.e. ∼5×10^9 M_⊙, suggested by <cit.>. However, for this simple numerical experiment, we consider a higher mass for increased stability, and because it provides more conservative results for the dynamical response.]. We use 524,288 particles to describe this stellar system. We then add five 8×10^5 M_⊙ BHs, which follow the same distribution function as the stellar system. Because of dynamical friction, these BHs fall into the center of the system and merge into an SMBH on a timescale of about one Gyr <cit.>. The back reaction of dynamical friction heats up the stellar system in the central 100 pc. Figure <ref> shows the evolution of the density profile (left) and velocity dispersion profile (right) as the BHs merge into a central SMBH. The red lines of these figures show the initial condition of the Hernquist profile with a central density slope of r^-1. Blue lines show how the density and velocity dispersion profiles change as the BHs merge into the center of the system and dynamical friction heats up the stellar system. After 3 Gyr, the density profile becomes significantly shallower, with a slope of less than r^-0.5 within r=100 pc, and the velocity dispersion can differ from the initial velocity dispersion by as much as 30 km s^-1 at r∼10 pc.
We ran the simulations with different seed BH masses of 4×10^5 M_⊙ and 2×10^5 M_⊙, and found
that, as long as the total mass is 4×10^6 M_⊙, similar to the mass of the SMBH of the Milky Way, a cored density distribution with a similar size and slope is produced by the BH mergers, independent of the initial seed mass. One difference, depending on the mass of the seed BHs, is that lighter seed BHs take a longer time to merge into a single central SMBH.
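As a minimal numerical sketch of the initial condition described above (not the actual simulation code), the snippet below evaluates the Hernquist density and enclosed-mass profiles for the assumed bulge mass and half-mass radius; the scale radius follows from the analytic relation between the half-mass radius and the Hernquist scale length.
\begin{verbatim}
import numpy as np

M_BULGE = 9.0e9   # total stellar mass [Msun], as assumed in the text
R_HALF  = 1.1     # half-mass radius [kpc], as assumed in the text
A = R_HALF / (1.0 + np.sqrt(2.0))   # Hernquist scale radius [kpc]

def hernquist_density(r):
    """Hernquist (1990) density profile rho(r) [Msun / kpc^3]."""
    return M_BULGE * A / (2.0 * np.pi * r * (r + A) ** 3)

def hernquist_enclosed_mass(r):
    """Mass enclosed within radius r [kpc], in Msun."""
    return M_BULGE * r ** 2 / (r + A) ** 2

r = np.logspace(-3, 1, 100)        # 1 pc to 10 kpc
rho = hernquist_density(r)         # shows the initial r^-1 central cusp
assert np.isclose(hernquist_enclosed_mass(R_HALF), 0.5 * M_BULGE)
\end{verbatim}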
JASMINE is expected to observe about 6,000 (68,000) stars with H_w<12.5 (14.5) mag and J-H>2 mag in the Galactic center, and to measure their parallaxes and proper motions with accuracies of 25 (125) μas and 25 (125) μas yr^-1 (about 1 (5) km s^-1 at d=8.275 kpc), respectively. The superb parallax accuracy of JASMINE for bright (H_w<12.5 mag) stars will minimise the contamination from foreground Galactic disk stars. Because the intrinsic NIR colors of giants are almost constant irrespective of their intrinsic luminosity, the color selection criterion for Galactic center stars obtained from the parallax-color relation of the bright stars at different positions on the sky can be applied to select Galactic center stars among the fainter stars <cit.>.
We can then use the larger number of faint stars with similar colours, which enables us to measure the velocity dispersion profile within 100 pc and to distinguish differences in velocity dispersion of about 10 km s^-1, as shown in figure <ref>. Hence, JASMINE can detect the relic of possible BH mergers if the SMBH of the Milky Way was built up via mergers of IMBHs. If such a heated velocity dispersion profile is not observed, it could be a sign that the SMBH started from a relatively massive seed BH and grew mainly via accretion, although there could be alternative scenarios, such as the formation of the NSC and/or NSD adiabatically contracting the bulge stars.
§.§ The Nuclear Stellar Disk, Non-axisymmetry and young star clusters
Extending out to ∼200 pc, the Galactic center hosts an NSD coincident with, and of the same scale as, the Central Molecular Zone <cit.>. The NSD has an exponential profile <cit.>, in accord with other galaxies <cit.>. The radial extent of the disk is around 230 pc <cit.>, and the vertical scale height is measured to be around 45 pc <cit.>. The total mass of the nuclear stellar disk is estimated to be around 1.4×10^9 M_⊙ <cit.>. The presence of classical Cepheids <cit.> reveals a thin disk of young stars continuously formed over the past ∼100 Myr <cit.>. While line-of-sight kinematic studies have been carried out to understand the kinematics of the NSD <cit.>, a full 6D phase-space characterization awaits a high-precision astrometric mission for the Galactic center stars, which JASMINE will provide, as demonstrated with the currently existing datasets in <cit.>.
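To make the quoted structural parameters concrete, the sketch below evaluates a double-exponential disk density normalized to the quoted total mass; note that the radial scale length used here is an assumed value, since the text quotes only a radial extent, and the sketch ignores any flaring or non-axisymmetry.
\begin{verbatim}
import numpy as np

M_NSD = 1.4e9   # total NSD mass [Msun], value quoted in the text
H_Z   = 45.0    # vertical scale height [pc], value quoted in the text
R_D   = 90.0    # radial scale length [pc]; an assumed value, since the
                # text quotes only a radial extent of ~230 pc

RHO_0 = M_NSD / (4.0 * np.pi * R_D ** 2 * H_Z)   # central density [Msun/pc^3]

def nsd_density(R, z):
    """Double-exponential disk density rho(R, z) [Msun / pc^3]."""
    return RHO_0 * np.exp(-R / R_D) * np.exp(-np.abs(z) / H_Z)

rho_example = nsd_density(100.0, 0.0)   # mid-plane density at R = 100 pc
\end{verbatim}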
An interesting question is whether there is a connection between the NSD and the nuclear bar seen in simulations <cit.> and inferred in
NIR photometric campaigns <cit.>. Nuclear bars are frequently found in earlier galaxy types <cit.>, and appear to be a kinematic structure distinct from the main outer bar. Using the VVV survey data, <cit.> suggested that the Milky Way has a nuclear bar with a size of about 500 pc, which is misaligned with the long bar. Such a nuclear bar, if it exists, can drive the inflow of gas from the scale of the central molecular zone, from about ∼200 pc down to ∼10 pc, without destroying the NSD or the central molecular zone's disk (or ring) structure <cit.>. However, <cit.> suggested that the nuclear bar observed in the star counts can be explained by the projection effect of the inner boxy bar <cit.>, thus casting doubt on the nuclear bar's existence. Nevertheless, the NSD may still have a non-axisymmetric structure, and the strength of the non-axisymmetry could determine the efficiency of the gas inflow. The precise motions of the stars in the NSD provided by JASMINE can assess the non-axisymmetry of the NSD, because the kinematics of the NSD stars should be affected by it, and will provide a stronger constraint than star-count observations.
There are two young star clusters, the Arches <cit.> and Quintuplet <cit.>, within about 30 pc of the Galactic center. The stellar mass of the Arches cluster is estimated to have an upper limit of 7×10^4 M_⊙ <cit.>, and its age is about 3.5 Myr <cit.>. The line-of-sight velocity and the proper motion of the Arches cluster are estimated to be v_ LOS=95±8 km s^-1 <cit.> and
v_ pm=172±15 km s^-1 <cit.>, respectively.
The stellar mass and age of the Quintuplet cluster are 10^4 M_⊙ <cit.> and 4.8 Myr <cit.>, respectively. The line-of-sight velocity and proper motion of the Quintuplet cluster are estimated to be v_ LOS=102±2 km s^-1 <cit.>
and v_ pm=132±15 km s^-1 <cit.>.
Based on high-resolution numerical simulations of a barred Milky Way-like galaxy that resolve the star formation in the central molecular zone, <cit.> argue that the current positions and estimated velocities of the Arches and Quintuplet clusters indicate that both clusters formed at the location where the gas inflow from the bar collides with the central molecular zone. This is consistent with what is also discussed in <cit.>, who suggested that the young star clusters formed at the transition between the x_1-orbits (the gas on the leading side of the bar) and the x_2-orbits (the central molecular zone).
Recently, <cit.> measured the absolute proper motions of these clusters using multi-epoch Hubble Space Telescope observations and the Gaia EDR3 reference frame, and obtained (μ_lcosb, μ_b) = (-2.03±0.025, -0.30±0.029) mas yr^-1 for Arches and (-2.45±0.026, -0.37±0.029) mas yr^-1 for Quintuplet, where (μ_lcosb, μ_b) are the proper motions in the Galactic longitude and latitude directions. These results indicate that these clusters are moving parallel to the Galactic mid-plane.
JASMINE is expected to observe about 10 stars around Arches and about 20 stars around Quintuplet with 0.125 mas yr^-1 proper motion accuracy. The accuracy of the proper motion measurements of these clusters is comparable to that of the previous studies, but JASMINE will have a longer time baseline and an improved absolute reference frame. The combination of the JASMINE data with those in the existing literature will further improve the proper motion measurements.
The improved proper motions will test this scenario and pinpoint the formation locations of these clusters. JASMINE is also capable of revealing the presence of new young star clusters (see also section <ref>). Revealing the formation locations of hidden young clusters, in addition to the Arches and Quintuplet clusters, will help us to understand the star formation mechanism in the central molecular zone and its relation to the NSD, through comparison with observations of molecular gas in the Galactic center region <cit.>.
§.§ Nuclear Stellar Disk: The formation epoch of the Galactic Bar and the Sun's birth radius
The central few-kpc region of the Milky Way shows a prominent bar structure <cit.>. The shape of the inner region of the bar/bulge has been found to display a so-called boxy/peanut morphology <cit.>. The inner stellar kinematics show cylindrical rotation <cit.>, which indicates that the inner bulge is dominated by the bar <cit.>. <cit.> found that a classical bulge with a merger origin makes up less than 8% of the Galaxy's disk mass. The Galactic bulge is therefore primarily a disk-like structure built up through a more secular formation history <cit.>.
The central bar components affect the radial and rotational velocity distribution of the Galactic disk stars. The presence of groups of stars moving with particular radial and rotational velocities in the solar neighbourhood is considered to be due to the bar <cit.>. The Hercules stream, which is a group of stars rotating slower than the average local azimuthal velocity and moving outward in the disk, has been suggested to be caused by the outer Lindblad resonance of the bar being just inside of the Sun's orbital radius, allowing us to derive the pattern speed of the bar <cit.>. However, recent studies demonstrate that such moving groups can also be caused by transient spiral arms <cit.>, as well as the other resonances of the bar, including the co-rotation resonance <cit.> and the 4:1 resonance <cit.>.
On the other hand, NIR photometric surveys, such as VVV, and spectroscopic surveys, such as the Bulge Radial Velocity Assay (BRAVA) <cit.>, the Abundances and Radial velocity Galactic Origins Survey (ARGOS) <cit.>, the Pristine Inner Galaxy Survey <cit.> and APOGEE, revealed the detailed stellar structure and line-of-sight velocities. These observations were directly compared with theoretical models <cit.>, suggesting a slower pattern speed of the bar than what was inferred by assuming that the outer Lindblad resonance lies just inside of the solar radius. Recently, both the gas dynamics <cit.> and the stellar dynamics from the Optical Gravitational Lensing Experiment <cit.>, BRAVA, ARGOS, APOGEE, VVV and/or Gaia DR2 data <cit.> have converged on a value for the bar pattern speed of around 35-40 km s^-1 kpc^-1.
<cit.> further analysed the ages and stellar abundances of the stars within the bar region, and found it to be populated mainly by stars with older ages and higher levels of [α/Fe], akin to the properties of old thick disk stars. They further suggested that the Galactic bar formed when the old thick disk formed, at an early epoch of the Milky Way's formation. If the bar formed at such an early epoch, about 10 Gyr ago, it may have affected the chemical distribution of stars in the Galactic disk. For example, the Lindblad resonances due to the bar act as a barrier for stellar migration, and the stars formed inside and outside of the outer Lindblad resonance do not mix <cit.>. Hence, identifying the epoch of bar formation is an imperative task in understanding the formation history of the Galactic disk. Table <ref> summarizes examples of observational estimates of the Galactic bar age in the literature. It should be noted that the ages of the stars in the Galactic bar do not tell us the formation epoch of the bar, because stars formed both before and after the bar formation can be captured by the bar <cit.>. Hence, the formation epoch of the bar remains a challenging question even in the post-Gaia era.
It is known that bar formation induces gas inflow into the central sub-kpc region of the host galaxy, which leads to the formation of a compact and kinematically cold rotating nuclear gas disk <cit.>. Subsequently, the stars formed from the nuclear gas disk build up the NSD <cit.>. Using an N-body/hydrodynamics simulation of a Milky Way-sized disk galaxy, <cit.> demonstrated that bar formation triggers an intense burst of star formation in the central region, which creates the NSD (figure <ref>). Consequently, the oldest ages of the NSD stars display a relatively sharp cut-off, which marks the age of the bar. Therefore, the age distribution of the NSD in the Milky Way can tell us the formation epoch of the Galactic bar <cit.>.
<cit.> showed that the NSD is kinematically colder than the other stellar components, which is consistent with the recent observations mentioned above <cit.>. Hence, the NSD stars can be identified kinematically. <cit.> also suggested that the proper motions of the stars, in addition to the line-of-sight velocities, are crucial for reducing the contamination from other, kinematically hotter, stars, such as bulge, bar and Galactic disk stars. The superb astrometric accuracy of JASMINE will enable us to identify the NSD stars and reduce the contamination from the other stellar components.
To obtain the ages of the NSD stars, we will use Mira variables, because Mira variables are known to follow an age-period relation <cit.>. Mira variables are intrinsically bright, and enough Miras are expected in the GCS to enable a robust study of the age distribution of the NSD. From the number of Miras observed in a smaller region of the Galactic center in <cit.>, it is expected that about 2,000 Miras whose colours are red enough to be at the distance of the NSD and which are brighter than H_w=14.5 mag will be observed within the GCS region of JASMINE. The proper motion measurements by JASMINE, as well as line-of-sight velocities from ground-based spectroscopic follow-up (see section <ref> for potential instruments to be used), will enable us to pick out the Miras in the NSD.
Before the launch of JASMINE, we need to identify Miras in the GCS field (figure <ref>) with currently existing ground-based photometric surveys. Recently, <cit.> identified more Miras from the VVV survey. We will also use the upcoming data from the Prime focus Infrared Microlensing Experiment (PRIME, see section <ref>), a joint Japan-U.S.-South Africa 1.8 m NIR telescope (with a 1.3 deg^2 FoV) built in South Africa. The PRIME data will allow us to identify brighter Miras that are saturated in the VVV data, and will also be valuable for verifying the VVV Miras.
Although the age-period relation of Miras is still not well calibrated, we expect that it will be improved with the Gaia data <cit.>. Even without such a relation, the periods of these Miras will be measured precisely; identifying the shortest-period Miras in the NSD and comparing them with the periods of Miras in the other Galactic components will tell us the relative formation epoch of the Galactic bar (see figure <ref>), e.g. with respect to the thick and thin disks. This was demonstrated by <cit.>, who focused on the ages of stars in the bar but did not reach the NSD, because they studied Miras detected by Gaia <cit.>.
Producing a sample of NSD stars with age information will also help to further uncover the slowdown history of the pattern speed of the Galactic bar. Recently, <cit.> suggested that the kinematics of the local stars observed in Gaia DR2 can be naturally explained if the pattern speed of the Galactic bar is slowing down. The slowdown of the bar pattern speed is often seen in N-body simulations of barred galaxies and is considered to be due to the transfer of the angular momentum of the bar to the dark halo <cit.>. The size of the NSD is likely set by the location of the inner Lindblad resonance of the bar or the existence of x_2-orbits <cit.>, which in turn is likely affected by the pattern speed of the bar.
Hence, it is expected that the age distribution of the NSD as a function of radius and/or angular momentum could be sensitive to the slowdown history of the Galactic bar. Interestingly, a radial dependence of the ages of the NSD stars was recently indicated by <cit.>, which motivates further studies.
JASMINE will be able to provide the ages and kinematics of stars in the NSD, which will open up this new window to unravel the history of the Galactic bar pattern speed. How the stellar ages and kinematics of the NSD are affected by changes in the bar pattern speed is not well understood. Also, the bar pattern speed may not spin down monotonically, but may spin up, for example, due to gas infall. This could cause a more complicated mix in the age-kinematics relation of the NSD stars. More theoretical studies on this topic are also encouraged.
Identifying the bar formation epoch and the evolution of the pattern speed is crucial for answering how radial migration shaped the current chemodynamical structure of the Galactic disk. Bar formation induces significant radial migration of stars in the Galactic disk <cit.>, and the coupling between bar resonances and spiral arm resonances further induces strong radial migration of disk stars <cit.>. The slowing down of the bar pattern speed means that the resonances of the bar sweep across a large radial range of the Galactic disk, affecting the kinematics of the disk stars at these radii and enhancing the radial migration <cit.>.
The Sun is considered to have been formed in the inner disk (R∼ 5-7 kpc), and radially migrated outward to reach the current radius <cit.>. It is a fascinating question as to whether the Galactic bar formed before or after the formation of the Sun. If the Galactic bar is younger than the age of the Sun, the orbital history of the Sun must be largely influenced by the Galactic bar formation and evolution, making this a key question in understanding the Sun's dynamical history.
Recently, the accurately measured chemical abundance patterns of a large number of disk stars from APOGEE, combined with the kinematics provided by Gaia, enabled modeling of the history of the inside-out formation of the Galactic disk and the impact of radial migration of stars <cit.>. However, radial migration due to bar formation and evolution has not yet been taken into account, because of the complexity of such a model, which requires additional parameters to fit the data and thus additional priors. One such key prior is the formation epoch of the bar. As discussed in this section, JASMINE will provide this important piece of information from the age distribution of the NSD.
It would also be interesting to compare the bar formation epoch of the Milky Way with the cosmic bar fraction <cit.>, to study how typical the Milky Way's bar formation epoch is. Identifying the bar formation epoch in the Milky Way allows us to study the physical process of Galactic bar formation by comparing it with the formation epochs expected for the different mechanisms seen in cosmological simulations <cit.>. Another intriguing question is whether the formation epoch of the bar of the Milky Way is similar to the time of the merger events of Gaia-Sausage-Enceladus and the Sagittarius dwarf, whose impacts could have induced the bar formation of the Milky Way.
§.§ Nuclear Stellar Disk: A link to the high redshift progenitors
The size of the Galactic nuclear disk is similar to the sizes of star-forming galaxies at redshift z>6 <cit.>, as also seen by JWST recently <cit.>. The compact disks observed at high redshift display a surface brightness profile following an exponential law, like a rotating disk <cit.>. The theoretical study of <cit.> suggested that gas accretion onto a massive BH can create an NSD-like compact disk at high redshift. It is a fascinating question whether the Milky Way started its formation from a compact disk like those observed at high redshift, and if so, whether a part of or the majority of the NSD could be a relic of such a compact disk epoch.
The age and metallicity distributions of the NSD stars identified from the proper motions measured by JASMINE will answer this question regarding the early structure of the Milky Way. If JASMINE finds an old and metal-poor population in the NSD, in addition to the populations formed after the bar formation, it could be a relic of the high-redshift disk. A comparison of the kinematics and metallicity properties between the Milky Way's old disk and the high-redshift disks in Milky Way progenitors will be a new window to connect Galactic archaeology to observations of high-redshift galaxies <cit.>, which will be further advanced by the advent of JWST and future 30-m class ground-based telescopes, such as the E-ELT, GMT and TMT.
§.§ Inner disk: spiral arms and Galactoseismology
The foreground stars of the GCS field will provide the disk kinematics from the solar radius to the Galactic center.
The Gaia data revealed striking ridge-like features in the stellar rotation velocity distribution as a function of radius, i.e. in the R_ GC-V_ϕ diagram <cit.>, which are also related to the radial velocity <cit.>. These are considered to be due to the bar resonances and the effect of the spiral arms <cit.>. Because of the heavy dust extinction in the inner disk, the Gaia data can reach only up to about 3 kpc, i.e. R_ GC≳ 5 kpc, in the mid-plane toward the Galactic center. The NIR astrometry of JASMINE can extend this analysis all the way to the Galactic center.
The nature of spiral arms, especially whether they are classic density-wave-like features <cit.> or transient features <cit.>, is currently hotly debated <cit.>. Spiral arms could be related to the bar <cit.> and induced by satellite galaxy interactions <cit.>.
Also, the effects of spiral arms on radial migration depend on the nature of the spiral arms. Radial migration is much more significant if the spiral arms are transient features <cit.>. JASMINE will provide detailed kinematics of the stars towards the Galactic center, covering the Sagittarius arm and the Scutum-Centaurus arm. The locations of the spiral arms of the Galaxy are still not measured confidently from the stellar kinematics, although they can be traced with the gas, star-forming regions and young stars <cit.>. It is also debated whether the Sagittarius arm is a major arm, i.e. a stellar arm, or purely a gas arm <cit.>, although the recent Gaia EDR3 data indicate an excess of young stars in the Sagittarius arm <cit.>. A crucial piece of information regarding the nature of spiral arms and the strength of stellar spiral arms is the stellar kinematics both inside and outside of the arm <cit.>. In addition to the GCS, we plan to run the Galactic mid-plane survey (see section <ref> for more details). The combination of these data can provide the kinematics on both sides of the Sagittarius arm and the Scutum-Centaurus arm at different radii. Comparison between these data and N-body modelling of disk galaxies will be necessary to ultimately answer the long-standing question in Galactic astronomy of the nature of spiral arms.
The Gaia data also revealed a vertical corrugation in the vertical velocity, v_ z, of the disk stars as a function of Galactocentric radius <cit.>, which is related to the phase spiral most clearly seen in the z-v_ z diagram when colored by the radial velocity, v_ rad <cit.>. Hence, these features are likely closely linked with the in-plane kinematical features, such as the moving groups, the R_ GC-V_ϕ ridge-like features and the spiral arm features <cit.>. These vertical kinematic features are considered to be due to the recent (<Gyr) perturbation from the Sagittarius dwarf <cit.> and/or the wake of the dark matter halo induced by an earlier phase of the infall of the Sagittarius dwarf a few Gyr ago <cit.>. However, they could also be induced by the bar buckling, which may have happened recently and created the X-shaped/boxy inner bar of the Milky Way <cit.>. Tracing the vertical corrugation, seen around the solar radius, up to the Galactic center will be key to answering whether the vertical corrugation extends to the inner disk. A comparison of the observed corrugation features as a function of radius with different models of the Sagittarius dwarf perturbation and with the bar buckling model will allow us to reveal the origin of the corrugation. Both the impact of the dwarf perturbation and the bar buckling can also affect radial migration significantly. The inner-disk Galactoseismic information from both the in-plane and vertical kinematics measured by JASMINE will provide crucial information regarding the recent dynamical evolution of the Galactic disk.
The foreground disk stars of the GCS field up to d∼6 kpc from the Sun are dominated by red giants.
The time-series photometry of JASMINE, with about 46 instances of ∼12.5 s exposures every ∼530 min (TBC, depending on the mapping strategy) over the 3-year nominal lifetime of the telescope, will be highly valuable for asteroseismology. It will allow for the measurement of precise stellar masses, which will provide relative ages of stars, crucial information for Galactic archaeology studies <cit.>.
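As a rough illustration of how such masses follow from the photometry: once the frequency of maximum power ν_max and the large frequency separation Δν are extracted from the time series, the widely used asteroseismic scaling relations give mass and radius estimates as sketched below (a sketch using standard approximate solar reference values, not the mission pipeline).
\begin{verbatim}
# Approximate solar reference values
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN    = 135.1    # muHz
TEFF_SUN   = 5772.0   # K

def scaling_mass_radius(nu_max, dnu, teff):
    """Standard asteroseismic scaling relations: mass and radius in
    solar units from nu_max [muHz], the large separation dnu [muHz]
    and Teff [K]."""
    mass   = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 \
             * (teff / TEFF_SUN) ** 1.5
    radius = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 \
             * (teff / TEFF_SUN) ** 0.5
    return mass, radius

# e.g. a typical red-giant-branch star
m, r = scaling_mass_radius(nu_max=30.0, dnu=4.0, teff=4800.0)
\end{verbatim}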
It is also likely that a few open clusters will be discovered in this field. There is at least one star cluster (UBC335) in the new Gaia DR2 star cluster catalogue <cit.> within the GCS field. These cluster data would be useful for the calibration of asteroseismic ages, as proposed for an ESA M7 mission candidate, HAYDN <cit.>. JASMINE can provide a proof-of-concept study for HAYDN in terms of asteroseismology in dense stellar fields.
The ages of giant stars can be obtained from follow-up spectroscopy data, mainly from [C/N] <cit.>, which can be calibrated with their asteroseismic ages. Machine learning techniques are often used to train a model that maps the chemical abundance patterns to ages, where a high-quality training set is crucial <cit.>. The asteroseismology information from JASMINE, together with spectroscopic follow-up data, will provide unique calibration information for the inner-disk red giants for future machine learning applications.
§ EXOPLANETS SURVEY (EPS)
§.§ Transiting Planet Survey around Late-type Stars
As described in the Introduction (section <ref>) and shown in figure <ref>, terrestrial planets in the HZ around mid- to late-M dwarfs remain difficult to explore with both TESS and ground-based surveys. Searching for terrestrial planets around ultracool dwarfs like TRAPPIST-1 is challenging for TESS owing to its modest 10.5 cm aperture. For ground-based surveys, HZ terrestrial planets around earlier-type stars, like TOI-700,
exhibit transits that are too shallow and orbital periods that are too long.
In contrast, JASMINE has a 36 cm aperture and an NIR passband, and can also perform long continuous monitoring from space and suppress intrinsic variability due to stellar activity (see section <ref> for details). This makes JASMINE a unique probe of terrestrial planets in the HZ around mid- to late-M dwarfs, which have orbital periods of several weeks, or semi-major axes of 0.03–0.3 AU.
Figure <ref> shows this trade-off between stellar brightness and transit depth. The left panel shows the apparent magnitudes of the 250-th brightest star in each radius bin of width 0.1 R_⊙, where R_⊙ is the solar radius.
For M-type stars with radii smaller than 0.5 R_⊙, smaller stars are apparently fainter in both the visible and NIR bands. In contrast, the transit signals are larger for those smaller stars, as shown in the right panel. The sweet spot for precise NIR photometry with JASMINE therefore lies around 0.2–0.3 R_⊙.
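To make the trade-off concrete, the sketch below evaluates the transit depth (R_p/R_⋆)^2 for an Earth-sized planet: roughly 84 ppm for a Sun-like star versus roughly 2,100 ppm for a 0.2 R_⊙ M dwarf (illustrative numbers only; limb darkening and grazing geometries are ignored).
\begin{verbatim}
R_SUN_KM   = 6.957e5   # solar radius [km]
R_EARTH_KM = 6.371e3   # Earth radius [km]

def transit_depth_ppm(r_planet_rearth, r_star_rsun):
    """Transit depth (Rp/Rstar)^2 in parts per million."""
    ratio = (r_planet_rearth * R_EARTH_KM) / (r_star_rsun * R_SUN_KM)
    return 1e6 * ratio ** 2

depth_sun    = transit_depth_ppm(1.0, 1.0)   # ~84 ppm for a Sun-like star
depth_mdwarf = transit_depth_ppm(1.0, 0.2)   # ~2100 ppm for a 0.2 Rsun dwarf
\end{verbatim}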
Figure <ref> assesses this statement more realistically.
Here, we estimate the H_w magnitudes of the brightest M dwarfs in the entire sky that host transiting Earth-sized planets in the HZ whose transits are deep enough to be detected with JASMINE. The result is based on a Monte-Carlo simulation consisting of the following three steps:
(i) Assign Earth-sized planets at the inner edge of the classical HZ to certain fractions (here 100% and 10%) of nearby M dwarfs in the TESS candidate target list <cit.>.
(ii) Check which planets show transits, assuming that the orbits are isotropically oriented.
(iii) Check whether a single transit signal has a signal-to-noise ratio (S/N) greater than 7, where the signal is computed from the stellar radius, and the noise is evaluated for the timescale of expected transit duration using the noise model in section <ref>.
The dashed and dotted lines show the magnitudes of the brightest stars estimated from 10,000 random realizations in each stellar mass bin, assuming that HZ Earths occur around 100 % and 10 % of the sample stars, respectively.
These estimates are compared with known transiting planets and planet candidates from ground- and space-based surveys (see legends), including the Earth-sized planets (<1.5 R_⊕) in the HZ of the two systems TOI-700 and TRAPPIST-1, shown with filled symbols.
The fact that the two known planets lie close to the dotted line suggests that an occurrence rate of ∼ 10% provides a reasonable lower limit.
Our simulation results demonstrate the ability of JASMINE to identify Earth-sized transiting planets in the HZ of mid-M (∼0.2 M_⊙) dwarfs with H_w≲10 mag, if their occurrence rate is ≳𝒪(10)% and if those stars are monitored over a sufficiently long time.
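A schematic and much-simplified version of steps (i)-(iii) is sketched below; it is not the actual simulation code. In particular, the function photometric_noise_ppm and its coefficients are placeholders standing in for the mission noise model referenced above, and the semi-major axis of the inner HZ edge is treated as an input.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
R_SUN, R_EARTH, AU = 6.957e8, 6.371e6, 1.496e11   # metres

def photometric_noise_ppm(hw_mag, duration_hr):
    """Placeholder noise model (hypothetical coefficients), standing in
    for the mission noise model used in the actual simulation."""
    return 300.0 * 10 ** (0.2 * (hw_mag - 10.0)) / np.sqrt(duration_hr)

def detectable(r_star_rsun, m_star_msun, hw_mag, a_hz_au, occurrence=0.1):
    """One Monte-Carlo draw following steps (i)-(iii) above."""
    # (i) assign an Earth-sized planet at the inner HZ edge (a_hz_au)
    #     to a given fraction of stars
    if rng.random() > occurrence:
        return False
    # (ii) transit probability for isotropically oriented orbits: R_star / a
    if rng.random() > (r_star_rsun * R_SUN) / (a_hz_au * AU):
        return False
    # (iii) single-transit S/N > 7; central-transit duration ~ P R_star/(pi a)
    period_days = 365.25 * np.sqrt(a_hz_au ** 3 / m_star_msun)
    duration_hr = 24.0 * period_days * (r_star_rsun * R_SUN) \
                  / (np.pi * a_hz_au * AU)
    depth_ppm = 1e6 * (R_EARTH / (r_star_rsun * R_SUN)) ** 2
    return depth_ppm / photometric_noise_ppm(hw_mag, duration_hr) > 7.0

# e.g. a mid-M dwarf: 0.2 Rsun, 0.16 Msun, H_w = 10 mag, HZ edge at 0.03 AU
hits = sum(detectable(0.2, 0.16, 10.0, 0.03) for _ in range(10000))
\end{verbatim}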
§.§ Follow-up Characterization from Space
Photometry from space is important not only for finding new transiting planets, but also for characterizing them. For example, modeling of transit timing variations (TTVs) (i.e., variations in the orbital periods due to gravitational interactions between multiple planets) measured by Spitzer played a significant role in precisely weighing the planets of the TRAPPIST-1 system <cit.>. Such precise masses for Earth-mass planets are still difficult to obtain, even with state-of-the-art infrared spectrographs, but are crucial for understanding the composition of these planets and for interpreting their transmission spectra. Even if TTVs alone are not sufficient for obtaining precise masses, they still constrain mass ratios, and joint analyses with RV data improve mass measurements. Thus, follow-up observations of known multi-transit systems with JASMINE are valuable, even if no new transiting planet is found. These data contribute to the derivation of the best-constrained masses for terrestrial planets transiting late-type stars.
Long-term photometry from space is important even for single-transit planets without any known signature of dynamical interactions. Extending the baseline of transit measurements helps to pinpoint the ephemeris of transiting planets found from relatively short baseline observations. If follow-up spectroscopy is performed much later than the discovery, a small error in the ephemeris calculated from the discovery data could result in a transit prediction that is far from the actual value (stale ephemeris problem). Extending the transit baseline is essential to avoid wasting precious observation time, for example, with JWST. Checking for the presence or absence of TTVs is also important in this context.
The ability to perform high-cadence, high-precision NIR photometry from space makes JASMINE a unique follow-up facility, similar to Spitzer, even for systems with shorter observing baselines. The detailed shapes of the transit light curves revealed by such measurements are important for ensuring that the system is not an eclipsing binary (which tends to produce V-shaped dips). The chromaticity of the transit also helps to identify stellar eclipses if the candidates are obtained from optical observations. Better constraints on the transit shapes, in particular the durations of the transit ingress and egress, also improve the measurements of transit impact parameters, and hence the planetary radii <cit.>, as the two parameters are correlated through stellar limb darkening. This is important for learning about planetary compositions, given that the precision of stellar parameter measurements has now become comparable (or even better, depending on the instrument) to the precision of the radius-ratio measurements from the transit light curve. NIR observations are particularly well suited for this task because the limb darkening and the stellar variability (i.e., the source of correlated noise) are both weaker than in the optical regime.
Spitzer had long been an important facility for performing such space-based follow-up observations, until 2020. A similar role has now been fulfilled by the CHaracterising ExOPlanet Satellite (CHEOPS). JASMINE could serve as a successor to these missions in the late 2020s and address various other scientific questions previously covered by Spitzer <cit.>. See table <ref> for comparisons of these space missions.
§.§ Searching for Young Planets
Other potential targets for the pointing-individual-target (PIT) type transit survey by JASMINE are stars in star clusters, young moving groups, and younger star-forming regions. Recently, transit surveys from space have unveiled a population of young close-in planets <cit.>.
These planets are often reported to have inflated radii for the insolation flux level received from their host stars <cit.>, in comparison with their older counterparts. Some of the super-Neptune-sized young planets were discovered in multi-planet systems, and these could be progenitors of the “compact-multi” super-Earth systems <cit.> most commonly found around main-sequence solar-type stars. Hypotheses can be tested in relation to the formation of the super-Earth “radius gap” <cit.> and the desert of close-in Neptunian planets <cit.>.
Measurements of eccentricities and inclinations (spin-orbit angles) of young planets would also allow the corroboration or refutation of planet migration theories for close-in large planets <cit.>.
Overall, young exoplanet systems provide an ideal setting for testing hypotheses on planet formation and evolution.
Young systems allow two possible directions for JASMINE observations. One is a blind transit survey of stellar clusters, stars in star-forming regions, etc. Although the FoV of JASMINE is much smaller than that of ordinary transit surveys (which are often larger than 10^∘× 10^∘), many stars in a cluster (of the order of 10-100 for, e.g., the Beehive open cluster) are simultaneously observed, even with JASMINE's FoV (≈ 0.55^∘× 0.55^∘). This simultaneous photometry of multiple stars would significantly enhance the probability of transit detection compared with targeting field stars.
For reference, the distribution of clusters within 500 pc from the Earth, i.e. close enough to detect planet transits, is shown in figure <ref>. The size of the data points represents the number of cool stars for which a transiting planet would produce a transit signal larger than 4 σ. The blue and gray colors correspond to the cases in which the transiting planetary radii are Earth- and Neptune-sized, respectively. Here, we computed stellar radii using the relations between the Gaia BP-RP colors and effective temperatures in <cit.>. The transit depth was derived as (R_p/R_⋆)^2 and the photometric noise follows figure <ref>. There are a good number of clusters, distributed in Galactic longitude, that contain ∼ 100 effective targets for the blind survey of transiting planets.
The other direction is follow-up photometry of young stars hosting planets or planet candidates to search for additional planets, as in the case of the M dwarf campaigns described in the previous sections. These follow-up observations target young stars hosting transiting close-in planets, similar to K2-25 b and AU Mic b. It should be stressed that, for these targets, space photometry by Spitzer has played a significant role in confirming the planetary nature and refining transit ephemerides <cit.>.
The reduced flux modulations resulting from stellar activity in the NIR region <cit.> make JASMINE optimal for these types of transit surveys from space (see section <ref>). The HZ around young stars (even M stars) tends to lie farther out in orbit (P>50 days) because of their inflated radii, but the HZ shrinks in orbital distance as the system ages and the central star becomes smaller. Finding “future-habitable” planets would be an intriguing topic.
§.§ Stellar spin-down relations from young cluster observations
Photometric observations of stars in young clusters, as described in section <ref>, would usher in new knowledge on stellar rotation evolution, especially for young mid-to-late M dwarfs. It has been known since <cit.> that stars spin down over time via magnetic braking (angular momentum loss). This has led to the development of the gyrochronology relation, which uses stellar rotation periods (P_ rot) as an indicator of stellar ages (e.g., <cit.>).
This gyrochronology relation is important for considering the effects of stellar magnetic activity (e.g., X-ray/EUV emission) on planetary atmospheres over various ages (e.g., <cit.>).
Recent K2 mission data have provided measurements of P_ rot in benchmark open clusters over various ages, from the pre-main-sequence phase (∼3 Myr) to intermediate ages (∼2.7 Gyr) <cit.>. They showed that the formula describing the process of stellar spin-down cannot be as simple as first assumed.
<cit.> showed that low-mass stars show a temporary stalling of spin-down at intermediate ages. <cit.> reported on the rotation of stars in younger clusters (from 1 Myr to 790 Myr), and the properties of rotation evolution differ greatly depending on spectral type. This would also be closely related to star-disk interactions in the pre-main-sequence phase of stellar evolution (e.g., <cit.>). These recent studies showed that the rotation evolution of M dwarfs could be somewhat different from that of G dwarfs (e.g., the large scatter of M dwarfs at ∼790 Myr in figure 9 of <cit.>), and this difference is yet to be completely understood. Moreover, in the previous studies using K2 & TESS data, there are very few measurements of mid-to-late M dwarfs. For example, in the K2 optical observations, most mid-to-late M dwarfs were not selected as targets.
NIR photometric observations using JASMINE would therefore help to fill in this “blank region” by measuring the rotation periods of more mid-to-late M dwarfs in young stellar clusters, as a byproduct of the young cluster observations described in section <ref>.
As discussed in the sections above, mid-to-late M dwarfs are bright at NIR wavelengths (figure <ref>), and the precision of JASMINE photometric observations of mid-to-late M dwarfs can be better than that of K2 & TESS.
In addition, the pixel size of JASMINE, which is more than an order of magnitude smaller than those of Kepler and TESS (table <ref>), is beneficial for such studies thanks to the capability to separate visual binary stars.
§.§ Photometric Variability of Brown Dwarfs
Brown dwarfs are intermediate objects between stars and planets with temperatures below ∼ 2,400 K. This temperature range resembles that of the primary targets for current observations of exoplanet atmospheres. Since brown dwarfs share many physical and chemical processes with gas giant exoplanets, understanding their atmospheres is also essential for studying these exoplanet atmospheres.
Many brown dwarfs reportedly show large photometric variations, typically up to a few percent <cit.>, with timescales similar to their rotation periods (ranging from a few hours to a day). The observed significant photometric variations in brown dwarfs are mainly attributed to inhomogeneities of cloud opacity across their surfaces.
Using the general circulation model (GCM), <cit.> recently demonstrated that cloud radiative feedback drives vigorous atmospheric circulation, producing inhomogeneities of cloud distributions over the surface.
Their simulations predict more prominent variability when viewed equator-on rather than pole-on, consistent with the tentative trend of current observations <cit.>, pending further confirmation.
The dependence of the variability on other parameters such as the spectral type and gravity remains highly uncertain because of the lack of samples with precise photometric monitoring. This constitutes an obstacle to obtaining detailed insights into the cause of the variability, and further understanding physical and chemical processes in the brown dwarf atmospheres.
For example, if silicate clouds alone are responsible for the observed variability of brown dwarfs, one would expect the greatest variability at the L-type/T-type spectral transition. This is because those clouds start to form below the photosphere in the cold atmospheres of T-type dwarfs.
On the other hand, if some other types of clouds, such as Na_2S and KCl, instead form in such cooler atmospheres <cit.>, the trend of variability over spectral type should be more complex (the variability amplitude should be maximized at the spectral type whose photospheres have temperatures close to the condensation temperatures of each cloud species).
In this context, the high photometric precision of JASMINE makes it suitable for systematically performing variability observations of several brown dwarfs with different fundamental parameters (spectral type, inclination angle, gravity, and age) to reveal the trend of variability against those parameters.
A recently developed dynamic mapping technique <cit.> makes it possible to explore the origin of the variability, namely time-varying cloud distributions, and further understand atmospheric circulation and cloud formation mechanisms in brown dwarf atmospheres.
Here, we note that, owing to the rapid rotation of brown dwarfs with rotation periods of the order of hours <cit.>, the observation time required to monitor the global surface of a brown dwarf is relatively short.
Whereas the variability observation campaign for 44 brown dwarfs was conducted using Channels 1 and 2 (3.6 and 4.5 μm) of the Spitzer Space Telescope <cit.>, JASMINE can perform these observations at different wavelengths (1.0–1.6 μm). Note that some of those brown dwarfs have been observed with the Wide Field Camera 3 (WFC3; 1.1–1.7 μm) of the Hubble Space Telescope <cit.>, including the investigation of spectral variability for a few targets <cit.>.
Because different wavelengths probe different depths in the atmosphere, combining the observations by Spitzer and JASMINE will provide comprehensive insight into the vertical structure of the atmosphere, including clouds.
In particular, JASMINE has an excellent capability for observing brown dwarf binaries.
Recently, <cit.> observed the precise light curves of a nearby bright brown dwarf binary, Luhman 16 AB, using TESS.
Thanks to their long-term observation covering about 100 estimated rotation periods of Luhman 16B, they succeeded in extracting several minor modulation periods of 2.5, 6.94, and 90.8 hr in the observed light curve in addition to the dominant period of 5.28 hr.
They concluded that the 2.5 and 5.28 hr periods arise from Luhman 16 B, possibly due to atmospheric waves with half and full rotation periods, while the 6.94 hr peak is likely the rotation period of Luhman 16 A.
As for the 90.8 hr period, they could not determine which component it originates from, but they tentatively attributed that modulation to a vortex in the polar regions.
JASMINE can resolve this system, whose semi-major axis corresponds to 1.8 arcsec, while TESS was unable to separate the two components (figure <ref>).
Thus, JASMINE can confirm whether the 6.94 hr light-curve modulation indeed corresponds to the rotation period of Luhman 16 A. In addition, if a long-term observation is possible, the component causing the 90.8 hr modulation can be identified, which will provide detailed insights into the atmospheric dynamics of brown dwarfs.
Moreover, simultaneous observations with ground-based high-resolution spectrographs such as the Infrared Doppler <cit.> mounted on the 8.2 m Subaru telescope for several best-suited targets will allow us to obtain more detailed surface maps of the temperatures and clouds for the observed brown dwarfs using the Doppler imaging technique, as done for Luhman 16 B <cit.>.
§.§ NIR Photometry by JASMINE
Sections <ref>–<ref> described the science cases that use precision NIR photometry by JASMINE. Here, we show how the required precision is attained. First, the photometric precision required to detect Earth-sized planets around M dwarf stars (section 3.1) is ∼ 0.1%, which is less stringent than that achieved by Kepler and TESS. Detecting variability in young planets and brown dwarfs requires a similar (or milder) photometric precision of 0.1%. We first describe the statistical noise, including the shot noise, dark current, and readout noise of JASMINE. Next, we discuss the systematic noise caused by inter- and intra-pixel fluctuations of the detector sensitivity. Finally, we consider the extent to which astrophysical noise due to stellar activity can be suppressed using an NIR passband.
§.§.§ Statistical Noise
The top panel in figure <ref> shows three statistical noise components: shot noise, dark current, and readout noise, as functions of the H_w magnitude.
The stellar shot noise is most significant for exoplanet photometry targets.
The bottom panel shows the transit depth corresponding to a detection level of 7σ in 5 minutes as a function of H_w.
In reality, a typical transit we search for lasts about 30 minutes instead of 5 minutes, and so the limiting depth will be a few times smaller.
Consequently, one can search for a ∼ 0.2% signal from a terrestrial planet around a R_⋆=0.2 R_⊙ star that is brighter than H_w=11.5 mag.
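The following minimal sketch reproduces this scaling arithmetic; the 5-minute limiting depth is an assumed placeholder read off a noise curve rather than an official JASMINE value.

import math

# Illustrative arithmetic for the statistical-noise discussion: how a 7-sigma
# limiting depth in 5 minutes scales to a ~30-minute transit, and the expected
# depth of an Earth-sized planet around a 0.2 R_sun star.
depth_limit_5min = 5e-3                     # assumed 7-sigma limit in 5 min
t_transit, t_ref = 30.0, 5.0                # minutes
depth_limit_transit = depth_limit_5min / math.sqrt(t_transit / t_ref)
print(f"limiting depth over a 30-min transit: {depth_limit_transit:.1e}")

R_SUN, R_EARTH = 6.957e5, 6.371e3           # km
depth_earth = (R_EARTH / (0.2 * R_SUN)) ** 2
print(f"Earth around 0.2 R_sun: depth = {depth_earth:.2%}")   # ~0.21%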
§.§.§ Impact of inter- and intrapixel sensitivity fluctuations on photometric precision
One main source of error in space-based precise photometry is inhomogeneous detector sensitivity. Kepler and TESS avoid this systematic error by means of accurate satellite attitude control. After its reaction wheels failed, Kepler lost the attitude-control precision needed to pin the stellar position to a single detector pixel. This resulted in a ∼ 1 % systematic error in the aperture photometry due to the satellite drift. Such systematic errors can be corrected using several techniques, e.g., pixel-level decorrelation <cit.>.
To investigate the systematic errors in the detector sensitivity, we developed a dedicated detector simulator for JASMINE (Kamizuka et al. in prep.). The simulator generates pixel images by incorporating both intra- and interpixel sensitivity models together with attitude fluctuations. In the case of JASMINE, the point spread function (PSF) size is approximately a few times larger than the pixel size. This reduces the effect of intrapixel sensitivity fluctuations, whereas the interpixel sensitivity fluctuation is expected to be ∼ 1 % and remains as a systematic error. We found that the systematic errors can be suppressed to a level sufficient for detecting an Earth-sized planet around an M-type star after flat correction of the interpixel sensitivity, as shown in section <ref>. In addition to precise measurements of the detector sensitivity prior to launch, we plan to have a single-mode fiber-type light source on board for the flat correction (Kotani et al. in prep.).
§.§.§ Suppression of Flux Modulations by Stellar Activity
Enhanced stellar activity poses a challenge in the search for transiting planets around young stars. Most young stars are magnetically active, often creating dark spots and plages on their surface, which are revealed as strong periodic flux modulations in the light curves. For instance, the flux variation amplitude of the young M dwarf planet-host K2-33 is as large as ≈ 2% <cit.> in the Kepler band. (In contrast, the transit depth of K2-33b is only ≈ 0.25 %.) In the presence of such large flux modulations, together with the inflated radii of young stars, detecting “small” transiting planets (producing small transit depths) is not straightforward for young systems.
In addition, such variability can be an additional noise source even for transit-signal detection in mature late-type stars. For the TESS passband (0.6–1.1 μm), the stellar variability is approximately 400, 1,000, and 4,000 ppm on average for early- to late-M stars, corresponding to 0.35 R_⊙, 0.2 R_⊙, and 0.1 R_⊙ <cit.>, respectively. Considering the typical signal levels of terrestrial planets of 1,200, 2,500, and 10,000 ppm for early- to late-M stars, flux modulation due to stellar activity can be a major source of noise when searching for the transit signal of terrestrial planets in the TESS band.
Photometric monitoring in the NIR region has an advantage over optical observations in terms of the reduced flux contrast of active surface regions (spots, etc.). Recent radial velocity measurements <cit.>, as well as photometric observations <cit.>, revealed that the contrast of typical surface spots on young active stars is mitigated by a factor of 2-3 in the NIR region.
For reference, the expected relative modulation amplitudes due to stellar activity are listed in table <ref>, where the Kepler passband case is used as a reference and set to 1. The contrast was calculated by combining the PHOENIX atmospheric model and the response function for each passband, in the same manner as in <cit.>, for a stellar surface temperature of 3,500 K and a relative starspot temperature of 0.95.
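As a rough illustration of why the contrast drops in the NIR, the sketch below compares Planck functions at the photospheric and spot temperatures at single representative wavelengths; the tabulated values are computed from PHOENIX spectra folded through the full passbands, so these numbers are indicative only.

import numpy as np

# Rough blackbody illustration of the reduced starspot contrast in the NIR.
H_C_OVER_K = 14387.77  # micron * K

def planck(wavelength_um, temp_k):
    """Planck function B_lambda up to a wavelength-only prefactor."""
    x = H_C_OVER_K / (wavelength_um * temp_k)
    return 1.0 / (wavelength_um ** 5 * np.expm1(x))

t_phot, t_spot = 3500.0, 0.95 * 3500.0      # photosphere and spot temperatures
for band, wl in [("Kepler-like (0.6 um)", 0.6), ("H_w-like (1.3 um)", 1.3)]:
    contrast = 1.0 - planck(wl, t_spot) / planck(wl, t_phot)
    print(f"{band}: spot contrast ~ {contrast:.2f}")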
Miyakawa et al. (in prep.) implemented detailed simulations of the detection of close-in transiting planets around young cool stars (primarily M dwarfs). Their injection-recovery tests showed that the injected planets were recovered with enhanced rates in the NIR for all cases, and the recovery rate was significantly improved for more rapidly rotating stars (P_rot< a few days) with larger activity-induced modulations.
This highlights the remarkable benefit of performing transit surveys by NIR photometry to identify small transiting planets around particularly young active stars, such as those in the Pleiades cluster (∼ 100 Myr) and star-forming regions (≲ 10 Myr).
§.§ Exoplanet Microlensing
Gas-giant planets, such as Jupiter and Saturn, are thought to have formed just outside the snow line, where icy materials are abundant. In this circumstance, a protoplanetary core grows quickly and starts accumulating the surrounding gas within the lifetime of the protoplanetary disk <cit.>. However, the detailed process of gas-giant planet formation remains unclear. Unveiling the mass distribution of exoplanets outside the snowline is of particular importance in understanding this process. However, existing exoplanet detection techniques (except for microlensing) are insufficiently sensitive for detecting planets in this orbital region.
Microlensing detects planetary signals by observing the characteristic light curve produced by a background source star and altered by the gravitational lensing effect of the foreground planetary system. By monitoring hundreds of millions of stars in the Galactic bulge region, more than a hundred exoplanets have been detected using this technique. However, the ultrahigh stellar density of the field and the large distances to the planetary systems have made it difficult to further characterize each planetary system. Although the Nancy Grace Roman Space Telescope (see section <ref>) aims to improve the demographics of exoplanets substantially in the late 2020s by monitoring the Galactic bulge region <cit.>, the difficulty of performing follow-up observations of each system remains.
This difficulty can be resolved if planetary systems are detected at close distances by microlensing. Although the event rate of such nearby planetary microlensing events is expected to be relatively small, the first such event was serendipitously discovered in 2017 <cit.>. High-cadence, large-sky-area surveys such as All Sky Automated Survey for SuperNovae (ASAS-SN), Zwicky Transient Facility (ZTF), Tomo-e Gozen, and Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) have the potential to find more such events.
JASMINE can play an invaluable role in following up such nearby planetary microlensing events discovered from the ground. Firstly, the high astrometric capability of JASMINE allows the centroid shift of the source star caused by the lensing effect to be measured (the signal is typically ∼1 mas).
This will help to solve the degeneracy between the mass and distance of the lens system. Secondly, by simultaneously observing the same event from JASMINE and from the ground, one can measure the parallax effect in the microlensing light curve, which will also help to solve the degeneracy. Thirdly, the NIR light curve from JASMINE allows the luminosity of the lens (host) star to be measured, thereby providing additional information about the host star beyond its mass and distance (e.g., temperature). We note that although the GCS data will automatically provide all the required data if events happen in the GCS field, full utilization of the advantages of JASMINE requires a target-of-opportunity (ToO) mode that can respond to a trigger within a few days.
§.§ Astrometric Planet Survey
The astrometric detection of exoplanets has two key advantages over other exoplanet detection methods. First, the astrometric signal increases for planets more distant from their host stars, making this method complementary to the radial velocity and transit methods, which are sensitive only to short-period planets. Second, the two-dimensional motion of a star measured by astrometry, combined with Kepler's laws and an estimated stellar mass, allows a solution of both the absolute mass and the complete planetary orbit. This is generally not possible for exoplanets discovered based on radial velocities or microlensing.
The astrometric signal (maximum shift of the stellar position by a planet) is given by
a_s ∼ 30 (M_p/M_ Jup) (M_s/M_⊙)^-1 (a / 3 au) (d/100 pc)^-1μas,
where M_p denotes the planetary mass, M_s the stellar mass, a the semi-major axis, and d the distance to the planetary system.
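The scaling relation can be evaluated directly, for instance as in the short sketch below (the example systems are arbitrary illustrations, not proposed targets).

# Direct evaluation of the astrometric-signal scaling given above; useful for
# quickly checking which systems fall above a given astrometric precision.
def astrometric_signal_uas(m_p_jup, m_s_sun, a_au, d_pc):
    """a_s ~ 30 (M_p/M_Jup) (M_s/M_sun)^-1 (a/3 au) (d/100 pc)^-1 microarcsec."""
    return 30.0 * m_p_jup / m_s_sun * (a_au / 3.0) / (d_pc / 100.0)

# A Jupiter analogue around a solar-mass star at 100 pc gives the reference 30 uas.
print(astrometric_signal_uas(1.0, 1.0, 3.0, 100.0))   # -> 30.0
# The same planet around a 0.5 M_sun star at 50 pc: four times larger signal.
print(astrometric_signal_uas(1.0, 0.5, 3.0, 50.0))    # -> 120.0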
Although few exoplanets have been detected to date by this technique alone, Gaia is expected to detect tens of thousands of Jovian planets at a few au around solar-type stars <cit.>, owing to its ultra-high astrometric precision at optical wavelengths (∼25 μas for stars with G < 15 mag). However, it remains difficult for Gaia to detect planets around ultra-cool dwarfs and/or long-period planets because of its limited sensitivity to them. JASMINE can complement Gaia's exoplanet exploration with its NIR capability and the long time baseline between Gaia and JASMINE. In the following subsections, we describe several science cases that JASMINE can pursue.
§.§.§ Planets around ultra-cool dwarfs
Core-accretion theories predict that massive planets are less abundant around lower-mass stars because of the lack of materials in the surrounding protoplanetary disks <cit.>. However, some Jovian planets have been discovered around mid-to-early M dwarfs, challenging current planet formation theories <cit.>.
To further address this problem, it is important to unveil the planet population around even lower-mass stars, or ultra-cool dwarfs (UCDs; T_ eff≲ 3000 K). So far, only a limited number of planets have been discovered around UCDs owing to their faintness at optical wavelengths. The astrometric technique is well suited to distant planets around nearby UCDs because the astrometric signal increases inversely with the stellar mass. For example, the astrometric signal of a 0.1 M_⊙ star at a distance of 20 pc caused by a planet with a mass of 0.1 M_ Jup and an orbital period of 20 years reaches ∼500 μas. Because it is likely difficult for Gaia alone to firmly detect such planets, owing to the faintness of nearby UCDs in the optical (G >16 mag) and the limited time baseline (∼10 years), JASMINE could play an essential role in confirming the candidates of such planets detected by Gaia, thanks to the NIR brightness of nearby UCDs (J ∼12–14 mag; see figure <ref>) and the time baseline between Gaia and JASMINE (up to ∼18 years).
§.§.§ Planet search in the Galactic Center Survey
The data obtained by the intensive observations with JASMINE toward the Galactic center (section <ref>) will also be useful for searching for exoplanets with the astrometric technique. Despite the limited number of nearby stars within the GCS area suitable for such a planetary search, the expected astrometric precision toward this region (25 μas) will provide good sensitivity to low-mass planets (down to ∼Neptune mass) around M dwarfs. Potentially, about a thousand M and K dwarfs that are bright enough for JASMINE are located within the GCS region (most within 1 kpc), providing good targets for an astrometric planetary search.
§.§.§ Synergy with radial velocity and direct imaging
Astrometry has good synergy with the radial velocity and direct imaging techniques. Although the radial velocity technique is sensitive to long-period (tens of years) planets, this method alone cannot measure the orbital inclination. Thus, it cannot measure the true planetary mass but only its lower limit. By measuring the astrometric signal of the host star of such a planet, one can determine the complete orbit and mass of the planet <cit.>. JASMINE can play an important role in measuring the true masses of long-period planets (or planetary candidates) around nearby M dwarfs discovered by radial velocity surveys, such as the one ongoing with Subaru/IRD. Astrometry can also play a crucial role in determining the formation scenario of young self-luminous planets discovered by direct imaging. Although direct imaging can measure the complete orbit and luminosity of a planet, it alone cannot measure the dynamical mass of the planet, which is key to distinguishing different formation scenarios, i.e., hot- and cold-start models. Recently, the accelerations of the host stars of several directly imaged planets were measured by combining the Hipparcos and Gaia proper motions, constraining their formation scenarios <cit.>. JASMINE will be able to contribute to such studies in combination with Gaia.
§ MISSION AND INSTRUMENT CONCEPT
In this section, we summarize the current concept study of the mission and the survey plan to achieve the above-mentioned main science objectives of Galactic center archaeology and the HZ exoplanet search. The mission and instrument concepts are still under development. Hence, the specifications summarized in this section are subject to change during the further development phases of the mission.
§.§ Satellite and Payload Design
The satellite system of JASMINE consists of a bus module and a mission module. The mission module includes the telescope and an electronics box.
A large Sun shield and a telescope hood are installed to protect the telescope from stray light and temperature changes. The satellite will be launched by the JAXA Epsilon S Launch Vehicle from the Uchinoura Space Center in Japan. JASMINE will be in a Sun-synchronous low Earth orbit at an altitude of about 600 km, with a mission lifetime of 3 years. The weight of JASMINE is about 600 kg.
JASMINE has a circular primary mirror with a diameter of 36 cm. The current preliminary optical design adopts a modified Korsch system with three mirrors and two folding mirrors to fit the focal length (4.37 m) into the size of the payload (figure <ref>). JASMINE is planned to use CLEARCERAM®-Z EX for the mirrors and Super-super invar alloy <cit.> for the telescope structure. CLEARCERAM®-Z EX and Super-super invar have extremely low coefficients of thermal expansion (CTE; 0±1×10^-8 K^-1 and 0±5×10^-8 K^-1, respectively) at about 278 K, the operating temperature of the telescope. JASMINE uses four Hamamatsu Photonics InGaAs NIR detectors <cit.>.
The FoV is 0.55^∘×0.55^∘. JASMINE uses an H_w-band filter covering 1.0 μm to 1.6 μm. Figure <ref> shows the H_w passband.
The whole telescope of JASMINE is encased within the “Telescope Panel Box", which insulates the telescope from the outside (figure <ref>). This enables precise control of the temperature variation of the telescope to within 0.1 degree over 50 min, by keeping the temperature change inside the Telescope Panel Box within 1 degree. The operating temperature of the detectors is required to be less than 173 K, and its variation is required to be kept within 1 degree over 50 min. The structural thermal stability will keep the positions of stars on the focal plane stable to within 10 nm over 50 min. This is achieved by the above-mentioned thermal control and the extremely low CTE of the CLEARCERAM®-Z EX mirrors and the Super-super invar telescope structure. A summary of the specifications of the satellite and telescope is given in table <ref>.
§.§ Galactic Center Survey Strategy
The GCS is designed to achieve the science goals described in section <ref>. To achieve the required astrometric accuracy, JASMINE will observe the same field repeatedly (about 60,000 times). The GCS field covers a rectangular region of -0.6^∘<b<0.6^∘ and -1.4^∘<l<0.7^∘ (or -0.7^∘<l<1.4^∘), as described in section <ref>.
The whole GCS region will be mapped with a strategy that observes all the stars in this region a similar number of times during the three years of the nominal operation period of JASMINE, and that places each star at different positions on the detector, in order to randomize the noise and reduce systematic biases. The expected number of observations for the stars in and around the GCS region is shown in figure <ref>. As can be seen from the figure, a number of stars surrounding the GCS region will also be observed throughout the mission operation, though with a lower sampling rate. Although the astrometric accuracy will be worse in this surrounding region, we will downlink the data for these stars and analyse their time-series astrometry and photometry.
At each pointing of a single field of view of 0.55^∘×0.55^∘, JASMINE takes a 12.5 sec exposure (plus 1 sec of read-out time) 46 times, which are combined into one frame, hereafter defined as a “small frame". The exposure of 12.5 sec corresponds to a saturation limit of H_w=9.7 mag. Although the pixel scale of each image is 0.472 arcsec, we will determine the centroid positions of stars brighter than H_w=12.5 mag with an accuracy of about 4 mas, using the effective point spread function <cit.>.
The parallax accuracy of 25 μas for stars brighter than 12.5 mag is obtained from the repeated observations, about 60,000 times in 3 years. In each orbit of about 97 min, JASMINE will observe 4 different small frames within 48 % of the orbit[The number of different fields observed per orbit is to be determined when the various trade-off studies are finalized.]. Two of these fields overlap by about half the size of the field of view (shifted either vertically or horizontally, so that there are two directions of overlap) to correct the distortion of the images due to optical distortion and detector distortion. The observed fields in each orbit are chosen to map the main survey field homogeneously, but as randomly as possible.
The distortions of the images are modelled with two-dimensional 5th-order polynomial functions, assuming that the primary stars do not move during the short observational time of a single orbit. Here, the primary stars are defined as stars that have no intrinsic positional shift due to binarity or microlensing. As mentioned above, the telescope is stable enough to keep the positions of stars on the focal plane fixed to within 10 nm for 50 min, which allows us to ignore the time variation of the coefficients of the terms higher than first order in the polynomial distortion correction, and to maintain 10 μas stability of the astrometric measurement.
The time variation of the first-order distortion terms, i.e. expansion and contraction, will be modelled with stars whose parallaxes and proper motions are accurately measured by Gaia.
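A minimal sketch of the underlying idea is given below: a two-dimensional polynomial mapping is fitted by linear least squares to the positions of primary stars, here for one coordinate only and with synthetic data; the actual pipeline (weighting, outlier rejection, use of overlapping fields) is considerably more involved.

import numpy as np

# Sketch of the distortion-correction idea: fit a 2D polynomial mapping from
# measured detector positions to reference positions of "primary" stars
# (assumed not to move within one orbit) by linear least squares.
ORDER = 5

def design_matrix(x, y, order=ORDER):
    """Columns are the 2D monomials x^i * y^j with i + j <= order."""
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x_meas, y_meas = rng.uniform(-1, 1, (2, 500))     # measured (normalized) positions
n_terms = design_matrix(x_meas, y_meas).shape[1]
true_coeff = rng.normal(0, 1e-4, n_terms)         # synthetic distortion coefficients
x_ref = x_meas + design_matrix(x_meas, y_meas) @ true_coeff   # reference positions

# Solve for the distortion of the x coordinate; y is treated identically.
coeff, *_ = np.linalg.lstsq(design_matrix(x_meas, y_meas), x_ref - x_meas, rcond=None)
x_corr = x_meas + design_matrix(x_meas, y_meas) @ coeff
print("rms residual after correction:", np.std(x_corr - x_ref))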
The time-series photometry can be obtained from the GCS data. Figure <ref> shows the expected photometric uncertainties for a single exposure.
The figure shows that the photometric accuracy is about 1 mmag for bright stars with H_w=10 mag, but reaches 40 mmag for faint stars (H_w=14.5 mag) in a small frame. Here, we assume that the inter- and intra-pixel sensitivity fluctuations are completely calibrated. The internal error in figure <ref> indicates the photometric errors computed from the Poisson noise and the read-out noise. “RMS" shows the photometric errors computed from the RMS scatter of the photometric measurements of 100 simulated images created with the detector simulator (section <ref>), which is almost consistent with the internal error. Each star is expected to be observed every ∼530 min.
§.§ Exoplanet Survey Strategy
The field of view of JASMINE (0.55^∘× 0.55^∘) makes it better suited to pointing-individual-target (PIT) observations than to a blind transit survey. Because the probability that an HZ planet transits a randomly chosen mid-M star is only 𝒪(1%), and because the orbital period of an HZ planet is 𝒪(week), hundreds or more of such stars would need to be monitored for at least a month to detect those planets in a blind transit survey. This is impractical with JASMINE.
Instead, JASMINE plans to focus on fewer (𝒪(10)) target stars that have a high prior probability of hosting transiting planets in the HZ. Specifically, we focus on stars with known transiting planets with orbital periods shorter than those in the HZ (i.e., ≲ 10 days), detected by ground- and/or space-based surveys prior to JASMINE. If these systems also host an outer planet in the HZ whose orbit is likely aligned with the inner transiting planet(s), the HZ planet is much more likely to transit than around a randomly chosen star: for mutual orbital inclinations of a few degrees, as inferred for multiplanetary systems from Kepler <cit.>, the transit probability for the HZ planet conditioned on the presence of inner transiting planets is a few tens of percent (figure <ref>).
Such yet-undetected HZ planets may then be detected with long-term follow-up monitoring using JASMINE.
This strategy makes it plausible to find HZ transiting planets by monitoring 𝒪(10) targets.
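A simple Monte-Carlo sketch of this conditional transit probability is shown below; the stellar radius, HZ semi-major axis, and mutual-inclination scatter are assumed representative values for a mid-M dwarf, not the parameters used for the figure.

import numpy as np

# Monte-Carlo sketch: given an inner transiting planet, how often does an outer
# HZ planet also transit if mutual inclinations follow a Rayleigh distribution
# of a few degrees? All input values below are illustrative assumptions.
rng = np.random.default_rng(1)
R_SUN_AU = 4.6505e-3
r_star_au = 0.2 * R_SUN_AU       # assumed stellar radius
a_hz_au = 0.06                   # assumed HZ semi-major axis
sigma_mut_deg = 2.0              # assumed mutual-inclination scatter

n = 200_000
delta_i = np.deg2rad(rng.rayleigh(sigma_mut_deg, n))   # mutual inclination
node = rng.uniform(0.0, 2.0 * np.pi, n)                # orientation of the node
# Inner orbit taken as exactly edge-on; the outer planet's sky-plane tilt is then
# approximately delta_i modulated by the node orientation.
cos_i_outer = np.sin(delta_i) * np.cos(node)
transits = np.abs(cos_i_outer) < (r_star_au / a_hz_au)
print(f"conditional transit probability ~ {transits.mean():.0%}")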
Indeed, the effectiveness of such a strategy has been demonstrated by the Spitzer observations of TRAPPIST-1, which revealed five more terrestrial planets, d–h <cit.>, on wider orbits covering the HZ than planets b and c, which were originally reported from a ground-based survey <cit.>.
To identify transiting planets in the HZ, JASMINE needs to monitor each star for at least a few weeks in total, and this survey is planned to be performed during the periods when the Galactic center is not observable.
Because JASMINE can observe the region within 45^∘ around the Sun, such observations are feasible for most stars in the sky (figure <ref>). Here, the white dots show the locations of potential target M dwarfs, the color corresponds to the total number of visible days per year, and the gray areas correspond to the regions around the Galactic and anti-Galactic centers. Even excluding those regions, many mid- or late-M dwarfs can be observed with a sufficiently long time baseline.
In principle, the anti-Galactic center direction could also be observed for the exoplanet survey during the astrometric survey of the Galactic center. However, this may affect the thermal stability of the astrometric survey. A more detailed thermal stability analysis is needed to determine the visibility in the anti-Galactic direction.
Figure <ref> shows a simulated light curve of a mid- to late-M-type star that includes a transit signal from a terrestrial planet. The simulation includes photon noise, dark current, readout noise, and systematic errors from intra- and interpixel sensitivity fluctuations, incorporating the attitude-control error of the satellite and a PSF of the optics with wavefront aberrations expressed by Zernike polynomials. An Earth-sized planet around a star with R_⋆ = 0.2 R_⊙ yields a transit signal of ∼ 0.2–0.3 %, which can be detected in the simulated JASMINE light curve after flat correction of the interpixel sensitivity (see section <ref> for details).
§ OTHER POTENTIAL SCIENCE CASES
§.§ Galactic Mid-Plane Survey
While the Gaia mission is making groundbreaking advances in the exploration of the structure and kinematics of the Galaxy, as mentioned above, Gaia's contributions to understanding the large-scale dynamics of the youngest populations of the disk (“Extreme Population I”) will necessarily be limited, as even 1^∘ of latitude corresponds to the scale height of these populations at a distance of about 5 kpc. Yet it is within ≲100 pc of the Galactic mid-plane that the primary agents of disk dynamical evolution (e.g., Giant Molecular Clouds, the spiral arms, and the densest parts of the Galactic bar) mainly act. Gathering a complete understanding of the processes driven by these perturbations, such as secular orbital heating and radial migration, requires contiguous kinematic information spanning from the very youngest stars born in the Galactic mid-plane to their older siblings (well studied by other surveys) that have long since been scattered to dynamically hotter and/or radially migrated orbits.
Moreover, without accurate proper motions, even surveys like APOGEE, which is providing the first global view of the detailed chemistry and radial velocities of disk and bulge stars, are limited in their ability to place this information within a firm dynamical context; many thousands of mid-plane stars in the APOGEE database lack Gaia astrometry, so that 3D orbits cannot be inferred. Only a NIR astrometry facility can remedy this problem by supplying a similarly global view of mid-plane stellar disk dynamics. The importance of accessing this “optically hidden Milky Way” has motivated discussions at ESA to follow up Gaia with a NIR counterpart mission, GaiaNIR <cit.>, but this likely will not be realized for a few decades.
JASMINE is well suited to be a pathfinder for a proposed large flagship all-sky astrometric survey mission in the NIR, like the GaiaNIR concept, and is capable of having an immediate impact in the field of Galactic archaeology. In addition to the GCS in section <ref>, addressing the science objectives of inner disk dynamics summarised in section <ref> requires tracing the dynamics and chemistry of the stars in the Galactic disk mid-plane at various azimuthal angles of the disk. One potential targeting strategy is small campaigns of mosaicked pointings centered on the locations of the APOGEE b = 0^∘ fields, which are located about every 5^∘ in longitude around the entire sky. Such a strategy will instantly pay dividends through the value-added information on tens of thousands of stars with spectroscopy from the APOGEE and APOGEE-2 surveys, and their SDSS-V extension, Milky Way Mapper (see section <ref>), that will lack Gaia's accurate astrometric data because of the extreme foreground obscuration. These data will provide a systematic probe of the dynamics of the low-latitude disk.
The required astrometric accuracy need not be as strict as for the GCS. Although APOGEE spectroscopy yields radial velocities accurate to ∼0.1 km s^-1, we do not require such accuracy in the transverse velocities, because we are primarily interested in measuring the velocity dispersions of young Population I stars, which are of order 10–30 km s^-1 per dimension.
For example, the 200 μas yr^-1 proper motions provided by JASMINE translate to a transverse velocity error of < 5 km s^-1 for a star at 5 kpc (slightly beyond the approximate median distance of the APOGEE mid-plane stars, which are primarily red giants), and < 10 km s^-1 for stars at 10 kpc. This relaxed requirement allows us to survey a large range of Galactic longitude when JASMINE is not observing its primary science targets.
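These numbers follow from the standard conversion v_t = 4.74 μ d, as in the short sketch below.

# Quick check of the transverse-velocity errors quoted above:
# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc].
def transverse_velocity_kms(pm_mas_per_yr, distance_pc):
    return 4.74 * (pm_mas_per_yr / 1000.0) * distance_pc

for d_pc in (5000.0, 10000.0):
    v_err = transverse_velocity_kms(0.2, d_pc)   # 200 uas/yr = 0.2 mas/yr
    print(f"d = {d_pc/1000:.0f} kpc: velocity error ~ {v_err:.1f} km/s")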
§.§.§ Star forming region
A rigid adherence to the pointing strategy of the Galactic mid-plane survey is not necessary, and other interesting science problems, e.g., astrometric and photometric exploration of young stellar clusters and star-forming regions (which naturally lie at low latitude), can also benefit from judicious placement of the Galactic mid-plane survey “pickets”. For example, the APOGEE project has dedicated itself to intense spectroscopic probes of several star-forming regions (e.g., the Orion Complex, the Perseus Molecular Cloud, and the Cygnus region), which may also be matched with JASMINE targeting.
Stars are mainly formed in Giant Molecular Clouds (GMCs). Moreover, star forming regions strongly concentrate into very compact sections within GMCs. Using the Two Micron All Sky Survey (2MASS) point source catalog, <cit.> estimated the fraction of young stellar populations contained within clusters to be 50% - 100% for nearby cluster-forming GMCs, such as Perseus, Orion A, Orion B and MonR2. NIR surveys of young stellar populations using the Spitzer Space Telescope have confirmed that clustered star formation is the dominant mode of star formation in the Galaxy <cit.>.
This traditional picture is now challenged by the recent Gaia data releases, which are revealing a more complex reality. <cit.> claimed that only about 16 % of the stars in the solar neighborhood formed in bound clusters, based on a comparison of the star formation rate in the solar neighbourhood with the populations of young star clusters. From the smaller-scale kinematics of OB associations, <cit.> showed that the velocity fields of OB associations are highly substructured <cit.>, which is not consistent with a monolithic scenario in which stars form in the cores of bound clouds and subsequently expand due to the outflow of gas caused by feedback. They argued that these results are instead consistent with a hierarchical star formation model, in which stars form in large-scale gravitationally unbound structures in molecular clouds.
Although the Gaia data are revolutionizing our understanding of star formation, these optical observations unavoidably miss the information deep in the cores of star-forming regions, where the dust extinction is too severe. NIR astrometric observations with JASMINE, combined with APOGEE spectroscopic data, can unveil the portion of star-forming regions hidden in the dust, which will complement what the Gaia data have revealed and provide a more complete picture of these star-forming regions.
§.§.§ Milky Way Neighborhood Dwarf Galaxies
The dwarf galaxy population around the Milky Way is diverse and new dwarfs are continuously being discovered <cit.>. The once troublesome “missing satellites problem” that plagued the ΛCDM cosmology theoretical framework is now steadily being refined and coming in line with observations of the dwarf galaxy populations around more massive galaxies such as the Milky Way <cit.>.
The ESA F-class mission ARRAKIHS (a planned 2030 launch) will further this inventory with images of nearby galaxies at unprecedented surface brightness levels in the Euclid VIS (0.550-0.900 μm), Y (0.920-1.230 μm) and J (1.169-1.590 μm) bands.
One remarkable discovery that came from the recent Gaia DR2 is that of the giant dwarf galaxy Antlia 2 <cit.>. The primary reason that this giant dwarf galaxy lurking in the dark matter halo of the Milky Way went undetected until now is that it lies only 11 degrees off the Galactic plane. Where Gaia has run into the limit of visual-wavelength extinction, JASMINE can go deeper into the Galactic mid-plane as part of the Galactic mid-plane survey, allowing for serendipitous dwarf galaxy and globular/stellar cluster discoveries that cannot be made via any other method.
With the anticipated astrometric precision and stellar content, we will be able to detect over-densities of stars that share common proper motions, indicative of low-latitude dwarf galaxies and/or clusters in the optical Zone of Avoidance, within a distance of about 10 kpc, as demonstrated for Gaia data in <cit.> and <cit.>. So far, no dwarf galaxy has been found within 20 kpc; Draco 2 at about 22 kpc is the closest <cit.>. Although the total field of view of the Galactic mid-plane survey is very limited, finding any galaxy within 20 kpc would be a significant discovery, and would be integral to studies of the survivability of dwarf galaxies within the inner Galaxy, their contribution to the bulge, and their impact on the Galactic disk <cit.>.
§.§ X-ray Binaries
Another interesting class of targets for which the precise astrometry and short-cadence observations of JASMINE can provide unique and impactful data is X-ray binaries, including gamma-ray binaries <cit.>. These are ideal laboratories for the study of high-energy astrophysics and prime future targets for multi-messenger astronomy, including continuous gravitational wave observations <cit.>. Astrometric measurements of their companion stars enable us to measure the physical scale of the orbital parameters and to unveil the mass of the compact object, be it a white dwarf, neutron star or BH <cit.>. Below we list examples of X-ray binaries that are particularly interesting targets for JASMINE when it cannot observe the GCS field. This merely presents examples of potential targets and is not an exhaustive list.
γ Cassiopeia (γ Cas): γ Cas is considered to be the first star identified as a Be star (a B-type star with emission lines) <cit.>.
However, γ Cas is now known to be a rare kind of Be star, characterised by a hard thermal X-ray spectrum with a high temperature (>10 keV) and a lack of strong variability <cit.>. Despite the proximity of the object (d∼168 pc) <cit.> and several decades of observational and theoretical studies, the X-ray emission mechanism and the nature of the lower-mass secondary star are still debated <cit.>.
While the ∼204-day binary period with a nearly circular orbit <cit.> is known, the mass of the lower-mass secondary star is not well measured, and it is still debated whether the secondary is a white dwarf or a neutron star <cit.>.
With a visual magnitude of ∼2, γ Cas is too bright for Gaia. JASMINE can adjust the exposure time to observe such a bright object with a short cadence. Such time-series astrometric information will provide precise orbital parameters and the mass of the secondary star <cit.>, which will be crucial to understanding the long-debated properties of γ Cas and similar systems (γ Cas analogs).
LSI +61 303 / HESS J0632+057: These are both gamma-ray binaries, in which the source of gamma rays may be the impact of a relativistic pulsar wind on out-flowing protons in the disk of a Be star, UV photons from a massive main-sequence star, or the interaction of UV photons from such a star with the accretion disk of an X-ray binary counterpart <cit.>. Determining the masses of these companions can be achieved with high-precision astrometric observations.
X Per / V725 Tau: Like γ Cas, these are X-ray binaries with rapidly rotating B stars, and they likely host neutron star companions. Neutron stars are characterized by their equation of state, constraining which requires knowing their masses <cit.>; this can be achieved with a precise determination of the orbital parameters <cit.>.
Cygnus X-1: The tens-of-μas astrometric regime achievable with JASMINE will make new studies of the orbit and jet physics of this quintessential BH binary system possible, and the NIR astrometry of this source can be compared to the exquisite radio positional information that exists from VLBI studies <cit.>.
§.§ Complementary Sciences with the Galactic Center Survey data
The GCS data will provide accurate astrometry and time-series photometry for all the stars with 9.5<H_w<14.5 mag in the GCS field. These data should be valuable for a wide range of scientific studies, not just for the core science of JASMINE described in section <ref>. In this section, we highlight some of these science cases. Note that the aim is not to provide a comprehensive list, but merely to indicate potential science cases. We hope that many science cases not anticipated here will be developed by the wider science community.
§.§.§ Hunting Inner Disk BHs
Massive stars are expected to become BHs upon their demise <cit.>. Therefore, many stellar-mass BHs are expected to be floating around in the Milky Way <cit.>. Stellar-mass BHs are found in the Galaxy as X-ray binaries <cit.>, whose masses are around 5 to 20 M_⊙. Several gravitational wave detections of stellar-mass BH binaries since the first detection of GW150914 by the LIGO/Virgo collaborations <cit.> revealed that stellar-mass BHs indeed exist, up to M_ BH∼150 M_⊙ <cit.>. Two remaining questions are what the mass function of this BH population is and how these BHs are spatially distributed in the Galaxy. These questions are also related to the origin of SMBHs, as discussed in section <ref>.
One promising method to detect a large population of these stellar-mass BHs is finding binary systems of a BH and a companion star bright enough for its kinematics to be measured with astrometric and/or spectroscopic observations. Such systems do not require the companion star to interact with the BH and emit X-rays, i.e. they are non-interacting BHs <cit.>. Such non-interacting BHs are expected to be observed with the precise astrometry available from Gaia <cit.>. Recently, <cit.> estimated that Gaia will detect ∼1.1-46 non-interacting BH binaries <cit.>. In fact, from Gaia DR3 and follow-up observations, two non-interacting BH binaries have so far been found: Gaia BH1 with a Sun-like star <cit.> and Gaia BH2 with a red giant star <cit.>.
JASMINE will offer similarly precise astrometry for stars in the inner disk between the Sun and the Galactic center, where Gaia cannot observe due to the high extinction. Therefore, JASMINE is expected to uncover the population of BHs in the inner Galaxy. According to a model similar to that of <cit.>, 100-1,000 BH-star binaries are expected to exist in the GCS region. Further study of how many such binaries can be detected by JASMINE is ongoing.
Another way of hunting BHs with JASMINE is microlensing. JASMINE will provide time-series photometry of the Galactic center region, where the stellar density is very high.
We expect that JASMINE will find about 3 microlensing events during the nominal 3-year operation, an optimistic estimate based on the VVV microlensing survey results <cit.>. As suggested by <cit.>, long-timescale (>100 days) microlensing events are expected to be dominated by high-mass (≳30 M_⊙) BH lenses. Photometric microlensing itself does not give us the lens mass. However, JASMINE can also detect astrometric microlensing from the centroid shift of the source <cit.>. Astrometric microlensing enables us to measure the lens mass if the mass and distance of the lens are suitable for the measurement. Recently, the first astrometric microlensing measurement was reported for a microlensing event found by ground-based observations and followed up by the Hubble Space Telescope for astrometry <cit.>. However, the lens masses for the same event, OGLE-2011-BLG-0462/MOA-2011-BLG-191, reported by the two teams are quite different. <cit.> reported a lens mass of 7.1±1.3 M_⊙, clearly indicating a BH, at a distance of 1.58±0.18 kpc, while <cit.> reported a lens mass of 1.6-4.2 M_⊙, which could be a neutron star or a BH, at a distance of 0.69-1.37 kpc. This tension could be due to systematic uncertainties in the two independent measurements of photometry and astrometry.
The astrometric displacement of this event exceeded a milliarcsecond, large enough to be clearly detected by JASMINE. JASMINE can provide both photometric and astrometric information, which may help to reduce the systematic uncertainty for such microlensing events. The chance of having such an event in the GCS field within the lifetime of JASMINE may be slim, but even one precise measurement of a high-mass BH would be an exciting outcome, because only a few such events are expected even in the Gaia data <cit.>.
§.§.§ Hunting IMBH in the Galactic center
A pressing mystery is the low number of confidently confirmed IMBHs (M_BH=100-10^5 M_⊙) <cit.>. Many candidates are, however, known <cit.>, and they may form the low-mass (M_ bh<10^5 M_⊙) extension of the quadratic BH/bulge mass scaling relation for disk galaxies <cit.>.
The Galactic center is an attractive area to explore for these long-sought IMBHs <cit.>, especially if they are brought in through the capture of dwarf-mass galaxies. Furthermore, <cit.> demonstrated that some massive star clusters formed in the central 100 pc undergo core collapse before the massive stars die, i.e. within ∼3 Myr, which induces a runaway stellar merger and creates an IMBH. They estimated that about 50 IMBHs may exist within 10 pc of the Galactic center. Some of them could still reside within surviving star clusters <cit.>, like the star clusters near the Galactic center, the Arches <cit.> and the Quintuplet <cit.>. Detecting a star cluster in a region of high stellar density like the Galactic center with photometric data alone is difficult. However, the proper motion data from JASMINE will enable the detection of star clusters in the Galactic center (see also section <ref>), which can inform follow-up studies of their cluster centers with X-ray and/or radio surveys <cit.>.
Interestingly, five IMBH candidates have so far been discovered as high-velocity (velocity width >50 km s^-1) compact (<5 pc) clouds in the Galactic center <cit.>. The advent of the Atacama Large Millimeter/submillimeter Array (ALMA) has enabled the measurement of the detailed velocity structure of compact clouds less than 0.1 pc from the center, which is consistent with Keplerian rotation around a massive object whose inferred mass is between 10^4 and 10^5 M_⊙ <cit.>.
Although further studies are required to prove that they are the true IMBHs, these observations may indicate that several IMBHs exist in the Galactic center.
With JASMINE, IMBHs can be detected through the binary motion of bright stars around an IMBH or through astrometric microlensing, as discussed in the previous section, if such systems exist or such an event occurs. For example, if a 1 M_⊙ AGB star orbits a 1,000 M_⊙ BH with an orbital period of 3 years and zero eccentricity at a distance of 8 kpc, the semi-major axis of the orbit corresponds to 2.6 mas, which can be detected by JASMINE.
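The 2.6 mas figure follows from Kepler's third law and the small-angle approximation, as sketched below.

# Arithmetic behind the 2.6 mas example above: Kepler's third law in solar units
# gives the orbital separation, which is then converted to an angle at 8 kpc.
m_bh, m_star = 1000.0, 1.0        # solar masses
period_yr = 3.0
d_pc = 8000.0

a_au = ((m_bh + m_star) * period_yr**2) ** (1.0 / 3.0)      # ~20.8 au
# The low-mass star traces almost the full separation around the barycentre.
a_star_au = a_au * m_bh / (m_bh + m_star)
wobble_mas = 1000.0 * a_star_au / d_pc                       # 1 au at 1 pc = 1 arcsec
print(f"separation {a_au:.1f} au -> stellar wobble ~ {wobble_mas:.1f} mas")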
There will be an astrometric microlensing event if an IMBH crosses in front of a distant star. Following <cit.>, we can consider an event with a source star at 8 kpc and an IMBH lens crossing at 7.5 kpc. The Einstein timescale of this event is 713 days, and the maximum displacement due to the astrometric microlensing is 2.9 mas <cit.>. This can be detected by JASMINE, though such an event would be extremely rare.
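The sketch below reproduces numbers of this order using the standard point-lens relations; the lens mass (taken to be 1,000 M_⊙, as in the binary example above) and the relative proper motion are assumptions, since they are not specified here.

import math

# Sketch of the astrometric-microlensing numbers quoted above. The lens mass and
# relative proper motion are assumed values, not taken from the text.
KAPPA = 8.144                      # mas / M_sun
m_lens = 1000.0                    # assumed lens mass [M_sun]
d_lens_kpc, d_source_kpc = 7.5, 8.0
mu_rel = 4.2                       # assumed relative proper motion [mas/yr]

pi_rel = 1.0 / d_lens_kpc - 1.0 / d_source_kpc          # relative parallax [mas]
theta_e = math.sqrt(KAPPA * m_lens * pi_rel)            # Einstein radius [mas]
t_e_days = 365.25 * theta_e / mu_rel                    # Einstein timescale
max_shift = theta_e / math.sqrt(8.0)                    # peak centroid shift (dark lens)
print(f"theta_E ~ {theta_e:.1f} mas, t_E ~ {t_e_days:.0f} d, shift ~ {max_shift:.1f} mas")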
§.§.§ Gravitational Waves
Gravitational waves (GWs) have been successfully detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO), Virgo and the Kamioka Gravitational Wave Detector (KAGRA) collaborations.
The sources for these events are merging compact objects such as BHs and neutron stars.
It is important to detect gravitational waves from SMBH binaries in order to study the growth mechanism of SMBHs. These waves have much longer wavelengths than those detectable by ground-based detectors. Astrometry could be a valuable resource to detect or constrain such low-frequency gravitational waves <cit.>.
Here, we estimate the strain sensitivity of the GCS. It is well known that the maximal magnitude of the astrometric effect of a gravitational wave is h/2, where h is the strain. The astrometric accuracy of a single JASMINE observation is Δθ = 4 mas for stars with magnitude H_w<12.5 mag, and the uncertainty grows exponentially for fainter sources. Given that each star will be observed around N_ obs=68,000 times, considering a realistically expected distribution of stars in H_w magnitude in the GCS and using the theoretical formulation developed for Gaia-like astrometry <cit.>, one can conclude that the full sensitivity of JASMINE to the effects of a gravitational wave will be h=3×10^-13. Here, we assume that the instrument is ideally calibrated, so that the accuracy for each source scales as N_ obs^-1/2, and the sensitivity is computed accordingly by combining the contributions from individual sources.
However, the astrometric effect of a gravitational wave is proportional to the sine of the angular separation χ between the direction of observation and that towards the gravitational wave source <cit.>. Although JASMINE performs relative astrometry only, the variation of the astrometric effect within the observed field on the sky can be detected. Therefore, the sensitivity quoted above should be scaled by | sin(χ+f/2)-sin(χ-f/2) |=2sin(f/2) | cosχ |, where f is the extension of the observed field, f≈2^∘ for the GCS. Therefore, the theoretical sensitivity of JASMINE can be estimated as h=8.6×10^-12 | cosχ |^-1. Interestingly, the maximal sensitivity is reached for gravitational wave sources approximately in the direction of observation or the opposite direction, where | cosχ | ≈1.
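The field-extension suppression factor can be checked directly, as in the short sketch below.

import math

# Check of the scaling used above: a relative-astrometry survey only senses the
# differential deflection across its field, so the full sensitivity h = 3e-13
# is degraded by 2 sin(f/2) for a field extension f ~ 2 degrees.
h_full = 3.0e-13
field_deg = 2.0
suppression = 2.0 * math.sin(math.radians(field_deg / 2.0))
print(f"effective sensitivity ~ {h_full / suppression:.1e} / |cos(chi)|")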
This theoretical sensitivity is valid for gravitational wave periods between the typical cadence of the observations and the total duration of the observations by JASMINE.
§.§.§ Ultra Light Dark Matter
The precise measurement of the kinematics of stars by JASMINE would enable us to reconstruct the mass distribution in the Galactic center <cit.>. It is believed that baryons dominate the mass profile in the central 100 pc of the Galaxy, and the total mass measured from dynamical models is consistent with what is expected from the stellar density profile <cit.>. However, the Galactic center is attracting interest as a place to test for the existence of a particular dark matter candidate, namely Ultra Light Dark Matter (ULDM), including axion-like ULDM particles <cit.>. Although ULDM behaves like conventional cold dark matter on large scales, ULDM is expected to produce a soliton core at the galactic center on the de Broglie wavelength scale due to Bose-Einstein condensation. <cit.> suggested that ULDM particle masses of ∼8×10^-23 eV can explain the dynamical mass profile of the Fornax dwarf galaxy <cit.>. <cit.> suggested that the soliton core created by dark matter particles with masses less than 10^-19 eV can significantly influence the gravitational potential in the Galactic center. <cit.> showed that the velocity dispersions observed in the Galactic center imply a soliton core as massive as ∼10^9 M_⊙, as expected from ULDM particles with masses of 10^-22 eV. <cit.> also demonstrated that a soliton core corresponding to a particle mass of ∼2.5×10^-21 eV explains the rotation curve of the Milky Way in the central region. <cit.> showed that such a massive soliton core can influence the kinematic properties of the nuclear gas disk on scales of ∼200 pc.
Recently, <cit.> demonstrated that the kinematics of stars in the NSC can provide constraints on the particle mass range of ULDM. <cit.> applied a simple isotropic dynamical model to the kinematic data of the NSC stars in <cit.>, and rejected the ULDM mass range between 10^-20.4 eV and 10^-18.5 eV. JASMINE will provide precise kinematics of the stars in the NSD (section <ref>), which is the dominant stellar component from a few pc to ∼200 pc. This size corresponds to the soliton core size for 10^-19-10^-22 eV ULDM. Using dynamical modelling of these stellar structures, the precise astrometric information from JASMINE may uncover indirect evidence of ULDM or provide stringent constraints on the existence of ULDM with particle masses between 10^-22 eV and 10^-19 eV.
§.§.§ Identifying disrupted globular cluster population
Recent observational studies of the Galactic bulge by APOGEE have discovered that a significant fraction of the bulge stars have unusually high [N/Fe] <cit.>. These N-rich stars are not found in the Galactic disk, but they are ubiquitous in globular clusters. Accordingly, one possible scenario for the formation of the N-rich stars in the Galactic bulge is that they originate from globular clusters that have been completely destroyed by the strong tidal field of the Galactic bulge. Interestingly, such N-rich stars have also been discovered in elliptical galaxies <cit.>, which suggests that N-rich populations are common in galactic bulges and elliptical galaxies, i.e. not just in the Galactic bulge, in line with the indistinguishable properties of classical bulges and elliptical galaxies <cit.>.
Globular clusters can spiral into the central region of the Galactic bulge due to dynamical friction <cit.>, and they are more severely influenced by the tidal field of the bulge in the inner region. Accordingly, if such a globular cluster destruction scenario for the N-rich stars is correct, then stars from destroyed globular clusters could be a major population in the central region of the Galactic halo. In fact, using APOGEE DR16, <cit.> estimated that N-rich stars contribute about 17 % of the total halo stars at 1.5 kpc from the Galactic center <cit.>.
JASMINE will enable us to investigate the 3D spatial distributions and kinematics of N-rich (globular-cluster origin) and N-normal halo stars in the central region through its superb proper-motion accuracy. Because stars of globular cluster origin could inherit kinematics distinct from those of the other halo stars, the 3D dynamics of N-rich stars will contribute to our understanding of the formation of the inner bulge. In APOGEE DR17 <cit.>, there are 436 stars in the GCS field with measured [N/Fe] and [Fe/H] that pass good-quality cuts, i.e. STARFLAG=0, ASPCAPFLAG=0, SNR>70, 3250 K<T_ eff<4500 K and log g<3 <cit.>, and 6 of them are N-rich stars ([N/Fe]>0.5, -1.5<[Fe/H]<0.0). All these stars are bright enough for JASMINE to observe. Therefore, it is promising that JASMINE will provide the proper motions of a good number of N-rich stars in the Galactic center field, in combination with future high-resolution, high-quality spectroscopic surveys of the Galactic center field, which will help to discover more N-rich stars.
§.§.§ Relics of Ancient Mergers
An ancient galaxy merger, Gaia-Sausage-Enceladus, discovered in the Gaia data (section <ref>) leaves questions like "where is the core of the remnant now?" and "has the core of the progenitor galaxy reached the Galactic center?". To assess the possibility of identifying such merger remnants in the GCS, we again use APOGEE DR17, but apply a slightly different quality cut, i.e. STARFLAG=0, ASPCAPFLAG=0, SNR>70, 3,500 K<T_ eff<5,500 K and log g<3.6,
following <cit.> who used APOGEE DR17 to chemically characterise halo substructures of the likely accreted populations. We find that there are 284 high-quality APOGEE stars within the GCS field. The Gaia-Sausage-Enceladus remnants occupy a distinct region of the [α/Fe]-[Fe/H] abundance plane <cit.>. Out of this sample, we find 4 stars with the abundances expected for the Gaia-Sausage-Enceladus remnants, i.e. stars with [Fe/H]<-1.1 and [Mg/Fe]<-0.28. All these stars are brighter than H_w=14.5 mag.
Note, however, that the APOGEE DR17 sample is not complete down to H_w=14.5 mag, but is subject to a sample selection based on colours and/or specific scientific targets. The GCS will obtain precise proper motions for about 1,000 times more stars than present in the APOGEE data. Obtaining accurate proper motions and orbits of these potential remnant stars of the Gaia-Sausage-Enceladus interaction in the inner Galactic disk will allow us to test the association with the already measured Gaia-Sausage-Enceladus remnants, which have so far been found exclusively in the solar neighbourhood.
<cit.> found the Inner Galactic Structure (IGS), which has chemical properties similar to the accreted components of the Galactic halo. They suggest that this could be a relic of an ancient accretion of another galaxy onto the Milky Way earlier than the Gaia-Sausage-Enceladus merger, with a progenitor possibly more massive than Gaia-Sausage-Enceladus. Further studies with Gaia DR3 and ground-based spectroscopic data <cit.> argue that such centrally concentrated metal-poor stars are relics of the ancient Milky Way proto-Galaxy, which could be a mix of merger and in-situ populations from the early epoch of the Milky Way formation. JASMINE can provide the proper motions of stars in the Galactic center, where Gaia cannot observe, and will help to identify the inner extension of these ancient populations.
§.§.§ Origin of Hyper-velocity Stars
<cit.> theoretically predicted
that the SMBH at the Galactic center (Sgr A*) ejects stars with extremely large velocities as a result of the close encounter and disruption of stellar binaries near the SMBH. <cit.> expanded upon the possible ejection mechanisms. The discoveries of young hyper-velocity stars (HVSs) in the halo <cit.> confirmed this prediction. Among these discoveries, the most intriguing one is the A-type HVS dubbed S5-HVS1 <cit.>. Based on the astrometric data from Gaia and a follow-up spectroscopic observation, it turned out that this star was ejected from the Galactic center 4.8 Myr ago with an ejection velocity of ∼ 1,800 km s^-1. Numerical simulations suggest that the ejection rate of HVSs is around 10^-5–10^-4 yr^-1 <cit.>. This ejection rate suggests that there are 1 to 10 HVSs within a sphere of radius 0.1 kpc centered at the Galactic center, given their typical velocity of ∼ 1000 km s^-1.
Of course, what we can expect to observe with JASMINE is a tiny fraction of them,
because they need to be bright enough to be detected by JASMINE.
Given that the GCS area of JASMINE includes a square region of ±0.6^∘ around the Galactic center (0.6^∘ corresponds to about 0.09 kpc at the projected distance of 8.275 kpc),
it is an enticing prospect to look for HVS candidates with JASMINE. If JASMINE discovers an HVS within r<0.1 kpc from the Galactic center, this will be very useful for understanding the detailed mechanism of HVS ejection. For example, an HVS with a velocity of 1,000 km s^-1 at r=0.1 kpc can be traced back to the Galactic center by integrating the orbit backward in time for just 0.1 Myr. This means that we can probe the environment near the SMBH in the immediate past (just 0.1 Myr ago),
such as the binary fraction near the SMBH or the orbital distribution near the SMBH.
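The numbers quoted above follow from simple order-of-magnitude arithmetic, reproduced in the sketch below under the stated assumptions (ejection rate of 10^-5–10^-4 yr^-1 and a typical velocity of ∼1,000 km s^-1), including the ∼0.1 Myr traceback time for an HVS caught at r=0.1 kpc.

    # Order-of-magnitude estimates for HVSs near the Galactic center,
    # using the ejection rates and typical velocity quoted in the text.
    KM_PER_KPC = 3.0857e16
    SEC_PER_YR = 3.156e7

    v_kms = 1000.0   # typical HVS velocity
    r_kpc = 0.1      # radius of the sphere considered

    # Time to cross 0.1 kpc at 1,000 km/s; also the traceback time to the SMBH.
    t_cross_yr = r_kpc * KM_PER_KPC / v_kms / SEC_PER_YR
    print(f"crossing / traceback time: {t_cross_yr / 1e6:.2f} Myr")  # ~0.1 Myr

    # Expected number of HVSs inside r < 0.1 kpc for the quoted ejection rates.
    for rate_per_yr in (1e-5, 1e-4):
        print(f"rate {rate_per_yr:.0e}/yr -> ~{rate_per_yr * t_cross_yr:.0f} HVSs within {r_kpc} kpc")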
§.§.§ X-ray Sources and the Origin of the Galactic Ridge X-ray Emission
The apparently extended hard (≥ 2 keV) X-ray emission along the Galactic plane has been known as the Galactic Ridge X-ray Emission (GRXE) since the early 1980s <cit.>. This emission extends tens of degrees in Galactic longitude and a few degrees in Galactic latitude in |l|<45^∘ and |b|<1.5^∘. The GRXE has an integrated X-ray luminosity of ∼1×10^38 erg s^-1 in the 2-10 keV range <cit.> at a distance of the Galactic center. The X-ray spectrum is described by a two-temperature thermal plasma (∼1 and 5–10 keV) with the K shell emission lines <cit.>; from neutral or lowly-ionized Fe at 6.4 keV (Fe I) as well as from highly-ionized Fe at 6.7 (Fe XXV) and 7.0 keV (Fe XXVI).
It has been under intensive debate whether the GRXE is truly diffuse emission of low surface brightness along the Galactic plane or a superposition of discrete faint unresolved X-ray sources such as cataclysmic variables (CVs) and X-ray active stars <cit.>.
Many X-ray observations have addressed this topic. In a Galactic bulge region (l=0.^∘08, b=-1.^∘42), ∼80% of the diffuse X-ray emission was resolved into faint X-ray point sources using the deepest X-ray observation with the Chandra X-ray Observatory, which has an excellent spatial resolution of 0.5 arcsec. This indicates that the apparently diffuse emission in the Galactic bulge is primarily made of faint discrete X-ray sources <cit.>. There are several candidates for such a population of faint X-ray sources, including magnetic CVs <cit.>, non-magnetic CVs <cit.> and X-ray active stars <cit.>.
However, it is difficult to constrain the nature of these faint X-ray point sources from X-ray observations alone, because most of these sources are detected only with a limited number of X-ray photons (less than 10 photons) even with the deepest observations.
Thus, follow-up observations at longer wavelengths are needed. Because of the large interstellar absorption toward the Galactic plane, NIR observations are more
suited than optical observations. NIR identifications of X-ray point sources were performed along the Galactic plane <cit.>, which provided clues to the nature of the faint X-ray point source populations that make up the GRXE. The distance of these sources is unknown. Thus, classification of the sources is based on the X-ray to NIR flux ratio; high values suggest sources containing a compact object, such as CVs, while low values suggest otherwise, such as stars. If the distance is obtained for many of these sources with JASMINE, we can discuss their nature based on the absolute luminosity in both the X-ray and NIR bands and discriminate foreground contamination in the line of sight. A more robust classification of X-ray sources and their 3D distribution will allow us to constrain the Galactic X-ray point source population for the different locations and components of our Galaxy, providing a hint to understanding the formation history of our Galaxy.
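The gain from adding distances can be made concrete: with a parallax distance, the observed fluxes convert into absolute luminosities, which separate the source classes far better than a flux ratio alone. The sketch below shows the conversion; the flux values and distance are invented for illustration and are not taken from any catalogue.

    import numpy as np

    KPC_IN_CM = 3.0857e21

    def luminosity(flux_cgs, distance_kpc):
        """L = 4 pi d^2 F, with F in erg s^-1 cm^-2 and d in kpc."""
        d_cm = distance_kpc * KPC_IN_CM
        return 4.0 * np.pi * d_cm**2 * flux_cgs

    # Illustrative (made-up) source: X-ray and NIR fluxes plus a parallax distance.
    f_x, f_nir, d_kpc = 1e-14, 5e-13, 6.0

    print(f"X-ray/NIR flux ratio: {f_x / f_nir:.2f}")
    print(f"L_X   = {luminosity(f_x, d_kpc):.2e} erg/s")
    print(f"L_NIR = {luminosity(f_nir, d_kpc):.2e} erg/s")
    # With L_X in hand, one can compare against typical CV and active-star
    # luminosities instead of relying on the flux ratio alone.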
A large fraction of the observing fields of the GCS using JASMINE (-0.6^∘<b<0.6^∘ and -1.4^∘<l<0.7^∘ or -0.7^∘<l<1.4^∘; see section <ref>)
was observed with Chandra <cit.>. A total of 9,017 X-ray point sources were detected with a total exposure of 2.5 Ms <cit.>.
NIR identifications for these X-ray point sources were also made <cit.>. Based on this, we estimate that ∼600 X-ray sources brighter than 12.5 mag in the H_w-band will be identified in the NIR in the GCS region. This is a significant improvement compared to the Gaia DR3 optical identification and astrometric distances for ∼100 sources <cit.>, which are mostly foreground sources located within 2 kpc. This will be complemented with JASMINE.
§.§.§ Observations of small solar system bodies
As solar system bodies are moving objects, they are good targets for astrometry and time-series photometry. Precise astrometry improves the orbital elements of small solar system bodies such as comets and asteroids. It provides a solid foundation in several fields: risks from minor bodies that threaten the Earth (potentially hazardous asteroids) can be precisely assessed; non-gravitational effects such as the Yarkovsky effect can be quantitatively measured; and asteroid families, i.e. asteroids derived from the same parent body, can be identified. Astrometry of interstellar objects such as 1I/'Oumuamua and 2I/Borisov is essential to understand their origins. The rotation periods and shapes of minor bodies are derived from time-series photometry, leading to estimates of their internal structure (bulk density). A binary system can be identified if it shows an eclipse or mutual event. Time-series photometry is also useful for tracking the brightness changes of active asteroids. Since JASMINE is in a Sun-synchronous polar orbit, its photometry will be complementary to ground-based observations. Finally, non-targeted, serendipitous surveys provide opportunities to discover new minor bodies.
The expected number of small solar system bodies observable with JASMINE is estimated as follows. Here, we focus on asteroids, the most abundant objects among the minor bodies detectable by JASMINE. The spectral energy distribution of an asteroid is generally dominated by two components: reflected sunlight at optical wavelengths and thermal emission at infrared wavelengths. The JASMINE H_w-band is located at transitional wavelengths between the two components. Hence, unfortunately, the minor bodies are fainter in the H_w-band, and it is more challenging to detect them with JASMINE. To evaluate the observability of asteroids, we propagate the positions of known asteroids and check if they cross the GCS observing region. The orbital elements of asteroids were retrieved from the Lowell Minor Planet Services operated by Lowell Observatory on 26 August 2022. Objects with large uncertainties were removed. The total number of objects was 1,192,756, which includes 1,148,593 Main Belt Asteroids, 28,829 Near Earth Asteroids, 11,458 Jupiter Trojans, and 3,876 Trans-Neptunian Objects. The topocentric (geocentric) coordinates were calculated from 1 January 2028 to 31 December 2031. For the sake of simplicity, JASMINE's observing region was defined as a circle with a radius of 0.7 degrees centered at (l, b) = (359.9, 0.0) in Galactic coordinates. The defined region differs from the current baseline of the GCS, but the number of observable objects is not significantly affected. With the distances, absolute magnitudes, and slope parameters, the apparent magnitudes of the bodies in the V-band can be calculated <cit.>. We then assume that the V-H_w color of the objects is the same as that of the Sun, i.e. (V-H_w)_⊙∼1.21, and convert the V-band to H_w-band magnitudes, assuming that all the asteroids have flat reflection spectra.
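For reference, the sketch below shows the magnitude conversion used in this estimate: the standard (H, G) apparent-magnitude relation for asteroids, followed by the solar colour (V-H_w)_⊙∼1.21 assumed in the text. The orbital geometry values in the example are placeholders and are not taken from the actual propagation.

    import numpy as np

    V_MINUS_HW_SUN = 1.21  # solar colour assumed in the text

    def apparent_v(h_mag, g_slope, r_au, delta_au, alpha_deg):
        """Standard (H, G) magnitude system for asteroids."""
        half_alpha = np.radians(alpha_deg) / 2.0
        phi1 = np.exp(-3.33 * np.tan(half_alpha) ** 0.63)
        phi2 = np.exp(-1.87 * np.tan(half_alpha) ** 1.22)
        return (h_mag + 5.0 * np.log10(r_au * delta_au)
                - 2.5 * np.log10((1.0 - g_slope) * phi1 + g_slope * phi2))

    def apparent_hw(h_mag, g_slope, r_au, delta_au, alpha_deg):
        """Assume a flat, solar-like reflectance: H_w = V - (V - H_w)_sun."""
        return apparent_v(h_mag, g_slope, r_au, delta_au, alpha_deg) - V_MINUS_HW_SUN

    # Placeholder main-belt geometry: H=12, G=0.15, r=2.5 au, Delta=1.6 au, alpha=10 deg.
    print(f"H_w ~ {apparent_hw(12.0, 0.15, 2.5, 1.6, 10.0):.1f} mag")  # ~14.4 mag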
Figure <ref> shows the brightnesses of asteroids crossing the GCS region at different epochs. Each segment shows an individual asteroid. Due to the operational constraints of the satellite, JASMINE does not observe the Galactic center in the gray-shaded seasons. The length of each segment represents the observable duration, which depends on the motion relative to JASMINE. A handful of asteroids can be observed at apparent magnitudes brighter than H_w=14.5 mag (sufficient for astrometry) and H_w=17.0 mag (for photometry). Figure <ref> illustrates the cumulative histogram of the H_w magnitude for observable asteroids. The numbers of objects brighter than H_w=14.5 and 17.0 mag are about 10 and 100, respectively, throughout the operation period. The expected number of potential targets for astrometry in the GCS is rather small. Thus, serendipitous astrometric measurements with JASMINE are unlikely, and targeted observations of known objects to provide additional astrometric information are preferred.
Taking advantage of its accurate photometry, JASMINE may observe occultation events by minor bodies. Occultations provide valuable information for shape modeling and binary searches. Since the brightness of a minor body does not matter in an occultation observation, the number of potential targets increases significantly. Detecting an occultation event is feasible with accurate orbital elements and dedicated observation planning. Serendipitous observations of occultation events may detect new objects that are unreachable even by large telescopes with apertures of ∼10 m; <cit.> claimed an occultation event by a Trans-Neptunian Object
with a radius of 500 m at 45 au in archival data from the Hubble Space Telescope's Fine Guidance Sensors. <cit.> detected an occultation event by a Trans-Neptunian Object with a radius of 1.3 km using a coordinated observation system of multiple low-cost commercial off-the-shelf 0.28 m aperture telescopes, the Organized Autotelescopes for Serendipitous Event Survey (OASES) <cit.>. Occultation events sometimes reveal additional features such as satellites and rings. Rings around (2060) Chiron and (10199) Chariklo were identified by ground-based occultation observations <cit.>. Recently, a ring beyond the Roche limit was discovered around (50000) Quaoar <cit.>. High-precision photometry with JASMINE has the potential to detect minor features in light curves <cit.>. As for the serendipitous survey, a careful assessment of false detections is required. OASES adopted simultaneous observations with multiple telescopes to minimise the false detection rate. JASMINE may be able to detect occultation events once all anomalous signals are suppressed by careful calibration.
§ SYNERGIES WITH THE OTHER PROJECTS
In this section, we summarize the Galactic stellar survey projects that are complementary to JASMINE and planned to be operating in the late 2020s. Although there are many projects relevant to the science cases, here we only list the projects and/or new instruments most relevant to JASMINE's main survey targets, i.e. the GCS and the Exoplanet Survey targeting M dwarfs.
§.§ Ground-Based Surveys
§.§.§ PRIME
The Prime Focus Infrared Microlensing Experiment (PRIME) is a wide field (1.56 deg^2) 1.8 m telescope at the South African Astronomical Observatory, which is planned to operate from 2023. PRIME is jointly managed by Japan, the USA and South Africa. PRIME has z, Y, J and H-band filters and several narrow-band filters.
PRIME will observe the Galactic center region around -3^∘<l<3^∘ and -2^∘<b<2^∘, which covers the whole area of the GCS field. The prime target of PRIME is a microlensing exoplanet search.
Joint observations with JASMINE can help to further constrain the parameters of the detected exoplanets with the additional accurate astrometric information of the source stars that JASMINE can provide.
The time-series photometry of PRIME will also find many variable stars. As mentioned above, the PRIME data will be used to provide the catalog of Miras observable with JASMINE.
§.§.§ Vera C. Rubin Observatory/Legacy Survey of Space and Time (LSST)
The Vera C. Rubin Observatory is located on the Cerro Pachón ridge in Chile, and will run
the ten-year Legacy Survey of Space and Time (LSST) with the 8.4 m (6.5 m effective) Simonyi Survey Telescope <cit.>. The Rubin Observatory LSST Camera will have a 3.5-degree field of view with about 3.2 gigapixels and a 0.2 arcsec pixel scale. There will be six filters (u, g, r, i, z and y) covering 320-1,050 nm. The survey is planned to begin in 2024, and the main survey will observe an 18,000 deg^2 region of the sky about 800 times over the planned duration of 10 years. The co-added map will reach r∼27.5 mag, and it is anticipated to detect about 20 billion stars and a similar number of galaxies. The main science drivers of LSST are probing the properties of dark energy and dark matter, cataloging an inventory of the solar system, exploring the transient optical sky and mapping the Milky Way. The survey field covers the Galactic bulge and the Galactic center. The LSST is capable of providing astrometric measurements for fainter stars than possible with Gaia. With the 10 year baseline, the expected uncertainties of parallax and proper motion are σ_π=0.6 mas and σ_μ=0.2 mas yr^-1, respectively, for stars brighter than r=21 mag, and σ_π=2.9 mas and σ_μ=1.0 mas yr^-1 for stars brighter than r=24 mag. In addition, the time-series photometry of the LSST will help to find many variable stars and microlensing events. The majority of them will be too faint for JASMINE to follow up. However, if they are bright enough and in the same field, JASMINE can provide more accurate astrometric information.
§.§.§ SDSS-V
The Sloan Digital Sky Survey (SDSS)-V <cit.> is an ambitious project to run an all-sky multi-epoch spectroscopic survey, utilising telescopes in both the Northern and Southern hemispheres. The survey will provide optical and IR spectra covering 2,500 deg^2, an ultra-wide field, for more than 6 million objects in five years (2020-2025). SDSS-V uses the telescopes at Apache Point Observatory (APO) in the USA and Las Campanas Observatory (LCO) in Chile. At APO, the 2.5 m Sloan telescope will be used for SDSS-V full-time. At LCO, more than 300 nights per year of telescope time on the 2.5 m du Pont telescope will be dedicated to this survey. The survey will also use the smaller (1 m to 16 cm) telescopes at APO and LCO. The NIR APOGEE spectrograph (300 fibers, R=22,000, λ=1.5-1.7 μm), the eBOSS optical spectrograph (500 fibers, R∼2,000, λ=0.36-1.0 μm) and the MaNGA multi-object IFU (R∼4,000, λ=0.36-1.0 μm) will be used at both APO and LCO. SDSS-V will run three surveys, the Milky Way Mapper, the Black Hole Mapper and the Local Volume Mapper. The survey most relevant to JASMINE is the Milky Way Mapper, which plans to observe 4-5 million stars in the Milky Way with the NIR APOGEE spectrograph and/or the optical BOSS spectrograph. The Milky Way Mapper aims to understand the evolution of the Milky Way, the physics of the stars and the interstellar medium, as well as multiple-star and exo-planetary systems. The Galactic Genesis Survey, as a part of the Milky Way Mapper, targets stars with H<11 mag and G-H>3.5 mag, which are likely to overlap with the bright target stars of the GCS fields, and will provide accurate radial velocities and abundance patterns.
§.§.§ Subaru/PFS
The Subaru Prime Focus Spectrograph <cit.> is a next generation instrument of the 8.2 m Subaru telescope at the summit of Maunakea, Hawai'i in the US, operated by the National Astronomical Observatory of Japan. PFS is a joint instrument of institutes in Japan, Taiwan, the USA, France, Brazil, Germany and China. PFS has a ∼1.38 deg^2 field of view and about 2,400 science fibers. PFS consists of blue (λ=0.38-0.65 μm, R∼2,300), red (low resolution mode: λ=0.63-0.97 μm, R∼3,000; medium resolution mode: λ=0.71-0.885 μm, R∼5,000) and NIR (λ=0.94-1.26 μm, R∼4,300) spectrographs. It is scheduled to start operating in 2024. About 300 nights of Subaru time over 5 years will be dedicated to the PFS survey for cosmology, galaxy evolution and Galactic archaeology, through the Subaru Strategic Survey Program (SSSP). The GCS field is not included in the PFS SSSP. However, the NIR spectrograph of PFS is especially well suited for spectroscopic follow-up of the GCS field stars to obtain radial velocities and chemical abundances. We plan to apply for a Subaru Intensive Program (5 nights per semester) to follow up the target stars.
§.§.§ ULTIMATE-Subaru
ULTIMATE-Subaru <cit.> is another next generation NIR instrument of Subaru, which is planned to be installed around 2028. ULTIMATE-Subaru is a wide-field (14'×14') NIR imager and multi-object spectrograph with Ground-Layer Adaptive Optics (GLAO), which enables a spatial resolution of FWHM∼0.2" in the K-band. There is a planned Galactic Center Survey of a ∼6 deg^2 region of the Galactic center, which covers the whole GCS field, in the J, H, K and narrow-band K_ NB filters with a cadence of 4 days (1 month) for the high (low) cadence field. JASMINE can provide astrometric reference stars in the NIR, which would improve the astrometric accuracy of the ULTIMATE-Subaru Galactic center survey data. The ULTIMATE-Subaru Galactic center survey can observe numerous stars fainter than the JASMINE magnitude limit. The combined data of JASMINE and ULTIMATE-Subaru will provide accurate astrometric information for these faint stars. They would help to identify star clusters in the Galactic center region from the proper motions of stars, and increase the event rate of astrometric microlensing, which will enable the measurement of the masses of lensed objects with high precision and help to identify BHs and exoplanets.
§.§.§ VLT/MOONS
The Multi Object Optical and Near-infrared Spectrograph <cit.> is a next generation instrument of the Very Large Telescope (VLT) UT1 at the European Southern Observatory (ESO) on Cerro Paranal in the Atacama Desert of Chile, and the planned first light is in 2023. MOONS is a multi-object (about 1,000 fibers) NIR spectrograph with a field of view of 25 arcmin diameter. There are three channels of spectrograph, covering RI, YJ and H-bands, with both low and high resolution modes. The low resolution mode covers the wavelength range of 0.65-1.8 μm with R_RI>4,100, R_YJ>4,300 and R_H>6,600. High resolution modes cover 3 disconnected wavelength ranges λ_RI=0.76-0.89 μm, λ_YJ=0.93-1.35 μm and λ_H=1.52-1.64 μm with the spectral resolution of R_RI>9,200, R_YJ>4,300 (fixed with the low resolution mode) and R_H>18,300, respectively.
MOONS science targets cover Galactic archaeology, the growth of galaxies and the first galaxies. Galactic archaeology studies by MOONS plan to take spectra of the several million stars observed by Gaia and the VISTA telescope, providing crucial complementary information in the form of accurate radial velocities and detailed chemical abundances. The NIR coverage of MOONS is capable of recording the spectra of stars in the heavily obscured Galactic center region. Even with the high-resolution mode in the NIR H-band, a signal-to-noise ratio of more than 60 can be obtained with one hour of exposure for objects brighter than H=15 mag. MOONS is likely to be the most powerful instrument in the 2020s capable of taking high-resolution spectra for all the target stars in the GCS field.
§.§.§ VISTA/4MOST
ESO's 4-meter Multi-Object Spectroscopic Telescope <cit.> will be installed on the VISTA telescope in Chile in 2024. 4MOST has 2,436 fibers and about a 4.2 deg^2 field of view. There are two low-resolution and one high-resolution spectrographs. The low-resolution spectrograph covers the wavelength range of λ=0.37-0.95 μm with R∼6,500, while the high-resolution spectrograph covers three wavelength passbands of λ=0.3926-0.4355, 0.5160-0.5730 and 0.6100-0.6760 μm.
The 4MOST consortium will run 10 surveys using 70 % of the available time over five years, planned to start in 2023, taking more than 20 million low-resolution spectra and more than 3 million high-resolution spectra. The 4MOST Milky Way Disc And BuLgE Low- <cit.> and High-Resolution <cit.> surveys (4MIDABLE-LR and -HR) will take spectra of about 15 million and 2 million stars in the Milky Way, respectively. Their targets include the inner disk and bar/bulge region. These surveys focus on stars for which Gaia provides precise astrometry, but which are too faint for Gaia's radial velocity spectrograph to provide a radial velocity. Their optical survey will not cover many stars in the GCS region. However, the combination of 4MIDABLE data and Gaia data will be a powerful resource to unveil the global nature of the bar and spiral arms, and is therefore highly complementary to the GCS.
§.§.§ Subaru/IRD
The InfraRed Doppler (IRD) spectrograph (R≈ 70,000, λ=0.950-1.73 μ m) on the Subaru telescope <cit.> is one of the most powerful instruments in the world for following up exoplanets around M dwarfs and young stars. Besides the “validation" of transiting-planet candidates around mid-M dwarfs identified by JASMINE, high-dispersion spectroscopy with IRD can play a key role in the further characterization of those planets; precision radial velocity measurements by IRD would enable us to constrain precise planet masses as well as orbital eccentricities. Moreover, NIR transit spectroscopy by IRD would also allow us to constrain the stellar obliquity (spin-orbit angle) and atmospheric composition (e.g., He I and molecular species), which are supposed to reflect the dynamical and chemical evolution of exoplanetary systems <cit.>.
Since activity-induced radial-velocity variations of host stars are suppressed in the NIR, IRD is also an ideal tool to confirm and characterize planets around young active stars.
§.§ Space Missions
§.§.§ Gaia
ESA's Gaia <cit.> was launched in December 2013. The nominal mission lifetime was 5 years, but the mission has been extended to the second quarter of 2025. The Gaia mission is an all-sky survey to provide precise astrometry for more than one billion stars brighter than G∼21 mag. Gaia uses a broad passband, the G-band, that covers the wavelength range of λ∼0.33-1.05 μm <cit.>. In the final data release after the end of the mission, the astrometric accuracy for bright stars with G≲ 13 mag is expected to reach about 7 μas. An astrometric accuracy of about 149 μas is expected to be achieved for stars brighter than G=19 mag[<cosmos.esa.int/web/gaia/science-performance>]. Gaia has three instruments, the Astrometric instrument (ASTRO), the Spectrophotometer (BP/RP) and the Radial Velocity Spectrograph (RVS). ASTRO provides the five astrometric parameters: stellar position, proper motion and parallax. The Spectrophotometer consists of the BP and RP spectrophotometers. BP and RP respectively provide low-resolution (R∼5-25) spectra over the wavelength ranges of λ∼0.33-0.68 μm and ∼0.64-1.05 μm, which are used for the chromaticity calibration of the astrometric measurements and for estimates of the stellar parameters and dust extinction. RVS is an integral-field spectrograph with R∼11,500, covering λ=0.845-0.872 μm <cit.>. The main aim of the RVS is to provide the radial velocity for about 150 million stars brighter than G_ RVS=16 mag, depending on the spectral type of the stars, where G_ RVS is the magnitude in the RVS passband. Gaia's fourth data release is expected in 2025, when all the catalog and data will be released, including all epoch data for all sources based on 66 months of data from the nominal mission. The final fifth data release, based on about 11 years of the extended mission, is expected in 2030. Hence, the late 2020s and early 2030s will be a truly golden age of Galactic archaeology. JASMINE is expected to be launched in this golden age, and will provide data complementary to the Gaia data, especially for the Galactic center stars, which the optical astrometry mission, Gaia, cannot observe.
§.§.§ Nancy Grace Roman Space Telescope
The Nancy Grace Roman Space Telescope (Roman Space Telescope) is a NASA observatory to study dark energy, exoplanets and infrared astrophysics <cit.>. The telescope has a primary mirror of 2.4 m diameter. The nominal mission lifetime is six years. The Roman Space Telescope has the Wide Field Instrument (WFI) and the Coronagraph Instrument (CGI). WFI is a large area, 300 megapixel, NIR camera for imaging and slitless spectroscopy with a grism (R=435-865) and prism (R=70-170). The imaging mode utilises several filters covering the wavelength range of λ=0.48-2.0 μm. The survey of the Roman Space Telescope most relevant to JASMINE's GCS is its Microlensing Survey, which will repeatedly observe about 2.81 deg^2 (in 10 fields) around -0.5^∘<l<1.8^∘ and -2.2^∘<b<-1^∘. There will be six 72 day campaigns over six years, with a cadence of 15 minutes in a wide filter and 12 hours in a blue filter. The Roman Space Telescope will detect billions of bulge stars, and studies to obtain precise astrometry from these data are ongoing <cit.>.
The currently planned region of the Roman Space Telescope Microlensing Survey does not cover the Galactic center targeted by JASMINE's GCS, because the Microlensing Survey aims to maximize the microlensing event rate and therefore targets regions with less dust extinction to obtain a higher number density of background stars. However, the Roman Space Telescope is currently gathering community input on the survey strategies to maximize the science output. With strong community input, there could be a possibility for the Roman Space Telescope to observe the GCS field. Then, JASMINE astrometry would be valuable for calibrating the astrometry of the fainter stars observed by the Roman Space Telescope.
§.§.§ James Webb Space Telescope
The James Webb Space Telescope (JWST) is NASA's flagship space observatory (6.5 m aperture), launched at the end of 2021 <cit.>. JWST has two spectrographs in the NIR band, the NIR Spectrograph (NIRSpec) and the NIR Imager and Slitless Spectrograph (NIRISS). NIRSpec is a medium resolution spectrograph (R=100-2,700) with a wavelength coverage of 0.6–5 μm. The saturation limit is H ∼ 10 mag. NIRISS is a slitless spectrograph, whose spectral resolution is about R=700. The saturation limit is J ∼ 8.5 mag. The mission lifetime of JWST is five years by design, but 10 years as an optimistic goal. If the lifetime of JWST overlaps with the observation period of JASMINE, these instruments will be the most powerful means to follow up exoplanets found by the JASMINE Exoplanet Survey and to characterize their atmospheres with detailed and precise transmission spectroscopy.
§.§.§ CHEOPS
The CHaracterising ExOPlanet Satellite (CHEOPS), launched in December 2019, is an ESA space mission dedicated to precision photometry to determine the radii of transiting exoplanets <cit.>. The mission lifetime is assumed to be 3.5 years (nominal). CHEOPS is a PIT type of transiting exoplanet exploration, similar to JASMINE. The telescope diameter (32 cm) is similar to that of JASMINE. The passband of CHEOPS is visible (see figure <ref>) while that of JASMINE is NIR. In this sense, JASMINE is complementary to CHEOPS. However, considering the difference in launch dates, JASMINE should be regarded as a successor of CHEOPS in terms of a space facility for the photometric follow-up of transiting planets found by ground-based surveys.
§.§.§ TESS
The Transiting Exoplanet Survey Satellite (TESS) is an MIT/NASA-led all-sky survey mission to find planets transiting bright nearby stars <cit.>. TESS has four cameras, each with a 10.5 cm diameter aperture and a 24×24 deg^2 field of view (∼ 2,000 deg^2 in total), and each camera has four 2k×2k CCDs with a pixel scale of 21”. The detectors are sensitive from 0.6 to 1.0 μm. During the 2 year prime mission following the launch in 2018, TESS monitored almost the entire sky in 26 overlapping segments, observing each segment for 27.4 days with a 2 min cadence for 15,000 pre-selected stars and a 30 min cadence for the full image. An extension of the mission has been approved and TESS will keep tiling the whole sky at least until 2024.
TESS has the capability of finding Earth-sized transiting planets near the habitable zone of early- to mid-M dwarfs, and such an example has indeed been reported <cit.>. The larger telescope aperture and redder passband of JASMINE will make it sensitive to similar planets around later-type M dwarfs.
Follow-up observations by JASMINE of planets detected by TESS may lead to finding longer-period/smaller planets that were missed by TESS, as well as to characterizing them even better through a finer sampling of the transit light curve.
§.§.§ PLATO
PLAnetary Transits and Oscillations of stars (PLATO) is the third M-class (M3) mission under development by ESA for a planned launch in 2026 <cit.>. The primary science goal is the detection and characterization of planets transiting bright solar-type stars, in particular terrestrial planets in the HZ. This will be achieved by high-precision, continuous photometric monitoring of a large number of bright stars using a collection of small and wide-field optical telescopes. According to the PLATO definition study report,[<https://sci.esa.int/web/plato/-/59252-plato-definition-study-report-red-book>] the payload is planned to consist of >20 cameras each with 12 cm diameter and covering the wavelength range of 0.5-1.0 μm, which result in a total field of view of ∼2,000 deg^2. Although the specific observing strategy is yet to be determined,
PLATO is likely to cover a significant fraction of the entire sky, as well as to monitor certain regions for a duration long enough <cit.> to find planets in the HZ of Sun-like stars.
The duration of the nominal science operations is 4 years and may well overlap with the operation period of JASMINE. The main targets of PLATO are bright (V≲13 mag) Sun-like stars, while JASMINE targets late-type stars that are fainter in the optical passband, taking advantage of its NIR photometry. Therefore, the two missions are complementary to each other. Similarly to TESS, PLATO observations might also provide transiting planet candidates around M dwarfs that can be further characterized with NIR observations by JASMINE. PLATO also aims to characterise the properties of the host stars, including age estimates at the 10 % precision level, from the time-series photometry using asteroseismology. The age information for the large number of stars which PLATO will observe will be precious for studies of Galactic archaeology <cit.>. Hence, it will also provide complementary data to the GCS and mid-plane survey.
§.§.§ ARIEL
The Atmospheric Remote-sensing Infrared Exoplanet Large-survey <cit.> is the first space telescope dedicated to the study of exoplanet atmospheres, adopted as ESA's M4 mission, with a planned launch in 2029. The effective size of the primary mirror of ARIEL will be ∼1 m, which is much smaller than JWST (6.5 m). However, ARIEL will be able to collect fluxes in the wavelength range of 0.5-7.8 μm at one time, using five dichroic mirrors, three NIR spectrometers and three optical photometric detectors. This allows one to obtain an atmospheric spectrum with a very wide wavelength coverage from a single planetary transit or eclipse observation.
ARIEL will observe a thousand exoplanets with a wide range of masses and temperatures, from hot Jupiters to warm/temperate Earths, in order to understand the statistical properties of exoplanetary atmospheres and planetary formation histories. While the current target list for ARIEL already includes a large number of Jovian planets, it still lacks Neptune- and smaller-sized planets that are suitable for atmospheric study, i.e., hosted by nearby M dwarfs (Edwards et al. 2019). Although TESS has been increasing the number of such targets, it may not be enough due to its limited telescope aperture size (10 cm) and wavelength sensitivity (covering only the optical). Small transiting planets around nearby M dwarfs that will be discovered by JASMINE can thus be good targets for atmospheric characterization by ARIEL. Given that JASMINE is planned to be launched ahead of ARIEL, it can provide prime targets for ARIEL in a timely manner.
§ SUMMARY AND CONCLUSIONS
In summary, the unique capabilities of the JASMINE mission will fill the gap left by the other planned and ongoing Galactic stellar survey projects for Galactic archaeology and habitable exoplanet searches in the late 2020s. JASMINE will be the first mission to provide 10 μas-level astrometry in the NIR band together with time-series photometry. JASMINE will offer precise astrometric information where the dust extinction is too strong for the optical astrometry mission, Gaia, to detect any stars, such as the Galactic center field and the Galactic mid-plane. The astrometric data of stars hidden behind the dust in the Galactic center and Galactic mid-plane will shed light on the formation epoch of the Galactic bar, the nature of the spiral arms and the mechanism underlying radial migration in the inner Galactic disk, which are likely to remain open questions after Gaia. The combination of time-series photometry and precise astrometry will provide vast opportunities for serendipitous discovery, including the possibilities of detecting an IMBH, astrometric microlensing of inner disk BHs and studying the nature of star forming regions and X-ray sources. JASMINE will also be the only space observatory in the late 2020s which can follow up exoplanet transits detected by ground-based telescopes to find planets in outer and habitable orbits around late-type stars, just as the Spitzer space observatory contributed to revolutionising the field.
Finally, we note that JASMINE will be a crucial science demonstration mission for what future NIR astrometry missions can offer. JASMINE will be a key mission to bridge the successful Gaia mission and the proposed successor of Gaia, GaiaNIR <cit.>, to be launched in the 2040s. GaiaNIR will provide all-sky global astrometry in the NIR band, including the Galactic disk, bar and bulge regions. Unprecedentedly precise proper motions for the stars also observed with Gaia will be obtained, taking advantage of the ∼20 years of baseline between the Gaia and GaiaNIR missions. Also, GaiaNIR will help maintain and improve the absolute astrometric quality of the celestial reference frame, which otherwise degrades with time. JASMINE will be a pioneering mission to open up future μas-level NIR astrometry, and will become an important milestone to demonstrate the power of NIR astrometry. The roughly 20 year time difference between JASMINE and GaiaNIR will provide superb proper motion measurements for the stars observed by JASMINE, including the NSD stars. With careful correction of systematic errors, the combination of JASMINE and GaiaNIR will make it possible to measure the accelerations of stars and map the gravitational field in the Galactic center.
We thank Megan Johnson and Stephen Williams for their contribution to the early draft of this manuscript.
This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia. This work is also based on data products from observations made with ESO Telescopes at the La Silla or Paranal Observatories under ESO programme ID 179.B-2002. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
This work is a part of MWGaiaDN, a Horizon Europe Marie Skłodowska-Curie Actions Doctoral Network funded under grant agreement no. 101072454 and also funded by UK Research and Innovation (EP/X031756/1).
This work was partly supported by the UK's Science & Technology Facilities Council (STFC grant ST/S000216/1, ST/W001136/1), JSPS KAKENHI (23H00133, 21J00106), JSPS Postdoctoral Research Fellowship Program, the Spanish MICIN/AEI/10.13039/501100011033, "ERDF A way of making Europe" by the “European Union” through grants RTI2018-095076-B-C21 and PID2021-122842OB-C21, the Institute of Cosmos Sciences University of Barcelona (ICCUB, Unidad de Excelencia ’María de Maeztu’) through grant CEX2019-000918-M, NASA ADAP award program Number (80NSSC21K063), the Swedish National Space Agency (SNSA Dnr 74/14 and SNSA Dnr 64/17), the Royal Society (URF\R1\191555) and the ERC Consolidator Grant funding scheme (project ASTEROCHRONOMETRY <https://www.asterochronometry.eu/>, G.A. n. 772293).
|
http://arxiv.org/abs/2307.10214v1 | 20230714134316 | Time for aCTIon: Automated Analysis of Cyber Threat Intelligence in the Wild | [
"Giuseppe Siracusano",
"Davide Sanvito",
"Roberto Gonzalez",
"Manikantan Srinivasan",
"Sivakaman Kamatchi",
"Wataru Takahashi",
"Masaru Kawakita",
"Takahiro Kakumaru",
"Roberto Bifulco"
] | cs.CR | [
"cs.CR",
"cs.LG"
] |
Time for aCTIon: Automated Analysis of
Cyber Threat Intelligence in the Wild
Giuseppe Siracusano
Davide Sanvito
Roberto Gonzalez
NEC Laboratories Europe
Manikantan Srinivasan
Sivakaman Kamatchi
NEC Corporation India
Wataru Takahashi
Masaru Kawakita
Takahiro Kakumaru
NEC
Roberto Bifulco
NEC Laboratories Europe
=============================================================================================================================================================================================================================================
Cyber Threat Intelligence (CTI) plays a crucial role in assessing risks and enhancing security for organizations. However, the process of extracting relevant information from unstructured text sources can be expensive and time-consuming. Our empirical experience shows that existing tools for automated structured CTI extraction have performance limitations. Furthermore, the community lacks a common benchmark to quantitatively assess their performance.
We fill these gaps by providing a new large open benchmark dataset and aCTIon, a structured CTI information extraction tool.
The dataset includes 204 real-world publicly available reports and their corresponding structured CTI information in STIX format. Our team curated the dataset involving three independent groups of CTI analysts working over the course of several months. To the best of our knowledge, this dataset is two orders of magnitude larger than previously released open source datasets.
We then design aCTIon, leveraging recently introduced large language models (GPT3.5) in the context of two custom information extraction pipelines. We compare our method with 10 solutions presented in previous work, for which we develop our own implementations when open-source implementations were lacking.
Our results show that aCTIon outperforms previous work for structured CTI extraction, improving the F1-score by 10 to 50 percentage points across all tasks.
§ INTRODUCTION
Cyber Threat Intelligence (CTI) provides security operators with the information they need to protect against cyber threats and react to attacks <cit.>. When structured in a standard format, such as STIX <cit.>, CTI can be used with automated tools and for efficient search and analysis <cit.>. However, while many sources of CTI are structured and contain Indicators of Compromise (IoCs), such as block lists of IP addresses and malware signatures, most CTI data is usually presented in an unstructured format, i.e., text reports and articles <cit.>. This form of CTI proves to be the most helpful to security operators, since it includes information about the attackers (threat actors) and victims (targets), and how the attack is performed: tools (malwares) and attack patterns. Ultimately, this is the information that enables threat hunting activities <cit.>.
Given the relevance of CTI, despite the limited resources, security analysts invest a significant amount of time to manually process sources of CTI to structure the information in a standard format <cit.>. In fact, the effort is sufficiently large that companies form organizations to share the structured CTI and the cost of producing it. For instance, the Cyber Threat Alliance (CTA) provides a platform to share CTI among members in the form of STIX bundles, and counts over 30 large companies among its members, such as CISCO, McAfee, Symantec, Sophos, Fortinet and others <cit.>.
To aid this activity, the security community has been actively researching ways to automate the process of extracting information from unstructured CTI sources, which led to the development of several methods and tools <cit.>. While these solutions contribute to reduce the analyst load, their focus has been historically limited to the extraction of IoCs, which are relatively easy to identify with pattern matching methods (e.g., regular expressions). Only recently, the advances in natural language processing (NLP) using deep learning have enabled the development of methods that can extract more complex information (i.e., threat actor, malware, target, attack pattern). Nonetheless, the performance of these solutions is still limited (Section <ref>).
One of the problems is the way these machine learning solutions operate: they often specialize a general natural language processing machine learning model, fine-tuning it for the cybersecurity domain. Fine-tuning happens by means of providing the models with a training dataset, built by manually labeling a large number of reports.
However, these AI models are specifically designed to perform tasks such as Named Entity Recognition (NER), which are close to the needs of a security analyst and yet crucially different. For instance, a report describing the use of a new malware might mention other known malwares in a general introductory section. These malwares would be extracted by a regular NER model, whereas a security analyst would ignore them when compiling the structured report. That is, generating a structured CTI report requires extracting only the relevant named entities.
To make things worse, the security community currently lacks a large labeled dataset that could work as benchmark to evaluate these tools. Indeed, the current state-of-the-art is mostly evaluated using metrics belonging to the NLP domain, which essentially evaluate a subtask in place of the end-to-end task performed by the security analyst.
Our goal is to provide a means to evaluate existing and future tools for structured CTI information extraction, and a solution to improve on the state-of-the-art.
First, we contribute a labeled dataset including 204 reports collected from renowned sources of CTI, and their corresponding STIX bundles. The reports vary in content and length, containing 2133 words on average and up to 6446. Our team of trained security analysts examined the reports over the course of several months to define the corresponding STIX bundles.
This process requires, among other things, to classify attack patterns using the MITRE ATT&CK Matrix (tactics, techniques, and procedures), which includes more than 190 detailed entries <cit.>. The analyst needs to know these techniques and understand if the case described in the report fits any of them, to perform correct classification.
Second, we replicate the results of 10 recent works, providing our own implementations when these were not available, and use our benchmark dataset to evaluate them. Our evaluation shows that the improvement in NLP technology had a significant impact on the performance of the tools, which got much better over time and since the inclusion of NLP technology such as Transformer Neural Networks (e.g., BERT <cit.>). At the same time, the evaluation shows there are still significant gaps, with the best performing tools achieving on average across all reports less than 50% in recall/precision, for any specific type of information extracted (i.e., malware, threat actor, target and attack pattern).
Finally, inspired by recent advances in Large Language Models (LLMs) such as GPT3 <cit.>, we contribute a new solution, aCTIon, using LLM's zero-shot prompting and in-context learning capabilities.
Our approach addresses some of the main shortcomings and constraints of the current generation of LLMs, namely hallucinations <cit.> and small context windows <cit.>, in the constrained setting of our use case.
To do so, we introduce a novel two-step LLM querying procedure, which resembles some of the recent approaches used in the design of LLM-based generative AI Agents <cit.>.
In the first step, we pre-process the input report to extract and condense information in a text that can fit the limits of the target LLM. In the second step, we define extraction and self-verification prompts for the LLM, which finally selects and classifies the extracted information.
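As a rough illustration of this two-step structure (not the exact prompts or implementation used by aCTIon), such a pipeline can be sketched as follows; the llm() argument stands for any call to a chat-completion model such as GPT3.5, and the prompt wording is purely illustrative.

    from typing import Callable, List

    def chunk(text: str, max_words: int = 700) -> List[str]:
        """Split a long report into pieces that fit the model's context window."""
        words = text.split()
        return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

    def extract_entities(report: str, llm: Callable[[str], str]) -> str:
        # Step 1: condense. Summarize each chunk, keeping only security-relevant facts.
        summaries = [
            llm("Summarize the following excerpt of a threat report, keeping only "
                "facts about threat actors, malware, targeted organizations and "
                "attack techniques:\n\n" + c)
            for c in chunk(report)
        ]
        condensed = "\n".join(summaries)

        # Step 2a: extraction prompt over the condensed text.
        draft = llm("From the condensed report below, list the malware, threat actors "
                    "and targeted identities the report is actually about, one per "
                    "line in the form <type>: <name>.\n\n" + condensed)

        # Step 2b: self-verification prompt to drop irrelevant or hallucinated entities.
        return llm("Condensed report:\n" + condensed +
                   "\n\nCandidate entities:\n" + draft +
                   "\n\nKeep only the entities explicitly supported by the report and "
                   "central to the described incident; return them in the same format.")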
We experiment with several alternative variations of the above general approach, and find that aCTIon can outperform the state-of-the-art by increasing the F1-score by 15-50 percentage points for malware, threat actor and target entity extraction, and by about 10 percentage points for attack pattern extraction.
To foster further research in this area, we release our dataset, including reports and labels.
§ LIFE AND PAIN OF A CTI ANALYST
A large amount of valuable CTI is shared in unstructured formats, including open-source intelligence (OSINT), social media, the dark web, industry reports, news articles, government intelligence reports, and incident response reports. Using unstructured CTI is challenging as it cannot be efficiently stored, classified and analyzed, requiring security experts to thoroughly read and comprehend lengthy reports. Consequently, one of the tasks of a security analyst is to convert the vast amount of unstructured CTI information in a format that simplifies its further analysis and usage.
STIX <cit.> is an example of a standard format for CTI widely adopted by the industry. In STIX, each report (a bundle in STIX terminology) is a knowledge graph, i.e., a set of nodes and relations that describe a security incident or a relevant event. The STIX ontology describes all the entity and relation types: Figure <ref> shows a subset of the STIX ontology.
The ontology includes several conceptual entities, such as Threat Actor, Identity, Malware, Attack Pattern and Indicator. Furthermore, it also defines relations between these entities, such as uses and targets, to capture their interactions.
In the remainder of this section we provide an example of a report, and introduce how analysts extract structured STIX bundles from text reports. We focus on the most common information extracted by analysts:
* Who performed the attack (i.e., Threat Actor),
* Against whom it was performed (i.e., Identity, pointed to by a targets relation),
* How the attack was performed (i.e., Malware and Attack Pattern).
This subset of the STIX ontology covers the most common pieces of information contained in reports. For instance, in our dataset 75% and 54% of reports include at least a Malware and Threat Actor entity, respectively. Furthermore, and more importantly for a fair evaluation of the state-of-the-art, this subset is consistently supported across existing tools and previous work, which allows us to run an extensive comparison among solutions.
§.§ Structured CTI Extraction
To illustrate the structured CTI extraction task, we consider
a technical blog post from Palo Alto Networks[ <https://unit42.paloaltonetworks.com/helloxd-ransomware/>], presented at a glance in Figure <ref>. The report describes the attribution of the ransomware HelloXD to the threat actor known as x4k, including the set of tactics, techniques and procedures associated with them.
The report is about 3.7K words long, includes 24 different images, 3 tables with different information and a list of Indicator of Compromise (in a dedicated section at the end of the report). It first explains the functionality of the HelloXD ransomware, and then it uncovers several clues that link the ransomware to the threat actor x4k.
Furthermore, the post provides a description of the threat actor's modus operandi and infrastructure.
Structured CTI extraction amounts to defining a STIX bundle representing the report, like the one depicted in Figure <ref>. The bundle includes the Threat Actor (x4k), the Malware (HelloXD), and a set of Attack Pattern entities describing the various tactics, techniques and procedures, plus the Indicators extracted from the last section of the report.
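For concreteness, a minimal version of the core of such a bundle can be assembled with the stix2 Python library along the following lines. This is only a sketch: the attack pattern shown is an illustrative MITRE technique rather than the exact set extracted for this report, and the full bundle in Figure <ref> also contains Identity, Indicator and further Attack Pattern objects and relationships.

    from stix2 import AttackPattern, Bundle, Malware, Relationship, ThreatActor

    actor = ThreatActor(name="x4k")
    malware = Malware(name="HelloXD", is_family=True, malware_types=["ransomware"])
    # Illustrative technique only; the real bundle lists the techniques found in the report.
    technique = AttackPattern(
        name="Data Encrypted for Impact",
        external_references=[{"source_name": "mitre-attack", "external_id": "T1486"}],
    )

    bundle = Bundle(
        actor,
        malware,
        technique,
        Relationship(relationship_type="uses", source_ref=actor.id, target_ref=malware.id),
        Relationship(relationship_type="uses", source_ref=malware.id, target_ref=technique.id),
    )
    print(bundle.serialize(pretty=True))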
Defining this bundle is a time consuming task that requires security knowledge and experience. It can take 3-10 hours to extract a structured STIX bundle out of a report. For instance, in <cit.> the authors mention that labelling 133 reports required 3 full time annotators over 5 months. Likewise, the annotation of the 204 reports in the dataset we release with this paper took our team of CTI analysts several months.
To understand why this task is time consuming, and why it is hard to automate, let us consider how analysts identify the relevant entities and the case of our sample report.
Malware, Threat Actor, Identity
First, the analyst typically starts identifying malwares, threat actors and identities. While this might at first glance appear as a simple task, security reports tend to be semantically complex, including issues such as: information represented ambiguously, e.g., threat actor and malware called with the same name; the use of aliases to describe the same entity, e.g., a malware called with multiple name variants; uncertain attribution of attacks, i.e., the report might certainly attribute some attacks to a threat actor, and only mention other attacks as potentially related to the threat actor, but not confirmed. These are just few examples of the nuances that make processing time consuming, and automation difficult.
Example: our sample report specifically discusses the HelloXD ransomware. Yet, it is not uncommon for a malware to be deployed and utilized in conjunction with other malicious software.
Thus, understanding what malicious software is effectively described in the attack is critical to understanding which malware nodes should be included in the STIX bundle.
In the report, there are mentions of two other malwares beyond HelloXD: LockBit 2.0 and Babuk/Babyk.
However, these malwares should NOT be included in the bundle.
For instance, LockBit 2.0 is mentioned because it leverages the same communication mean used by HelloXD (see quote below). Nonetheless, LockBit 2.0 is not directly connected to HelloXD infections, and therefore it should not be included.
The ransom note also instructs victims to download Tox and provides a Tox Chat ID to reach the threat actor. Tox is a peer-to-peer instant messaging protocol that offers end-to-end encryption and has been observed being used by other ransomware groups for negotiations. For example, LockBit 2.0 leverages Tox Chat for threat actor communications.
Attack Pattern
Second, the analyst has to identify the attack patterns, i.e., descriptions of tactics, techniques and procedures, and attribute them to the entities identified in the previous step. This introduces additional challenges: attack patterns are behaviors typically described throughout several paragraphs of the report, and they are collected and classified in standard taxonomies such as the MITRE ATT&CK Matrix <cit.>. The MITRE ATT&CK Matrix includes more than 190 detailed techniques and 400 sub-techniques. The analyst has to refer to them when building the bundle, identifying which of the classified techniques are contained in the text report.
That is, this task requires both understanding of the report and extensive specialized domain knowledge.
The following is an example of how an Attack Pattern is described in a text report[<https://www.proofpoint.com/us/blog/threat-insight/good-bad-and-web-bug-ta416-increases-operational-tempo-against-european>]:
TA416 has updated the payload by changing both its encoding method and expanding the payloads configuration capabilities
The analyst has first to identify the sentence containing the attack pattern, and then map it to the corresponding MITRE definition, which in the case of the example is the technique T1027 "Obfuscated Files or Information"[<https://attack.mitre.org/techniques/T1027/>] described as:
Adversaries may attempt to make an executable or file difficult to discover or analyze by encrypting, encoding, or otherwise obfuscating its contents on the system or in transit.
Relevance
Throughout the process, the analyst has to take decisions about what to leave out of the bundle, using their experience. This decision usually includes considerations about the level of confidence and details of the information described in the report.
For instance, our sample report describes other activities related to the threat actor x4k, such as the deployment of Cobalt Strike Beacon and the development of custom Kali Linux distributions. The analyst must determine whether to include this information or not. In this example, these other activities are just mentioned, but they are not related to the main topic of the report (nor contain enough details), and therefore they should not be included.
§.§ The quest for automation
Given the complexity of the task, several solutions have been proposed over time to help automate structured CTI extraction <cit.>. Previous work addressed either individual problems, such as attack patterns extraction, or the automation of the entire task. Nonetheless, all the proposed tools still require significant manual work in practice. We find evidence for this claim in the empirical work of our team of CTI analysts, and we also confirm this later in the paper when introducing our evaluation (Section <ref>).
We speculate that one of the reasons existing solutions do not meet the expectations of CTI analysts is the lack of a benchmark that correctly represents the structured CTI extraction task, with its nuances and complexities. In particular, since previous work heavily relies on machine learning methods for natural language processing (NLP), it is quite common to resort to typical NLP evaluation approaches. However, NLP tasks are a conceptual subset of the end-to-end structured CTI extraction task; therefore, the majority of proposed benchmarks do not evaluate CTI-metrics.
To exemplify this issue, let us consider Named Entity Recognition (NER), i.e., the NLP task of automatically identifying and categorizing relevant entities in a text.
When evaluating a NER component, NLP-metrics count how many times a word representing an entity is identified. For instance, if the malware HelloXD is mentioned and identified 10 times, it would be considered as 10 independent correct samples by a regular NER evaluation. We refer to this approach as word-level[For the sake of simplicity of exposition we use the term "word-level" in place of the more appropriate "token-level".] labeling. However, for structured CTI extraction, our interest is to extract the malware entity, regardless of how many times it appears in the report. This can potentially lead to an overestimation of the method performance. More subtly, as we have seen with the example of the LockBit 2.0 malware, some entities that would be correctly identified by a NER tool are not necessarily relevant for the CTI information extraction task. However, such entities are typically counted as correct if the evaluation employs regular NLP metrics. The same issue applies to the more complex Attack Pattern extraction methods. Indeed, they are commonly evaluated on sentence classification tasks, which assess the method's ability to recognize whether a given sentence is an attack pattern and to assign it to the correct class. We refer to this approach as sentence-level labeling.
However, such metrics do not fully capture the performance of the method in light of CTI-metrics, which would involve identifying all relevant attack patterns in a given report and correctly attributing them to the relevant entities.
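As a minimal illustration of the gap between the two counting schemes (the mention counts below are made up for illustration and are not taken from any cited dataset), the following Python sketch contrasts word-level NER scoring with the unique-entity counting that CTI-metrics require:

# Hypothetical mention-level predictions for one report: (mention, predicted label)
predictions = [("HelloXD", "Malware")] * 10 + [("LockBit 2.0", "Malware")] * 2

# NLP-style counting: every labeled mention is an independent sample.
mention_level_hits = sum(1 for _, label in predictions if label == "Malware")   # 12 hits

# CTI-style counting: only unique, relevant entities matter for the STIX bundle.
extracted = {mention for mention, label in predictions if label == "Malware"}   # 2 unique names
ground_truth = {"HelloXD"}   # LockBit 2.0 is mentioned in the report but not part of the bundle
precision = len(extracted & ground_truth) / len(extracted)       # 0.5
recall = len(extracted & ground_truth) / len(ground_truth)       # 1.0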
Table <ref> summarizes datasets from the literature, which are employed in the evaluation of the respective previous works. The table considers separately the extraction of the Attack Pattern entity, given its more complex nature compared to other entities (e.g., Malware, Threat Actor, Identity).
Some of these works use remarkably large datasets to evaluate the information extracted in terms of NLP performance (e.g., number of extracted entities). Unfortunately, they often include much smaller data subsets to validate the methods with respect to CTI-metrics. One reason often mentioned for the small size of such data subsets is the inherent cost of performing manual annotation by expert CTI analysts.
NLP-metrics SecIE <cit.>, ThreatKG <cit.> and LADDER <cit.>, all adopt datasets with word-level or sentence-level labeling.
Similarly, CASIE <cit.> provides a large word-level labeled dataset, but does not cover attack patterns.
TRAM <cit.> and rcATT <cit.> only provide attack pattern datasets that are sentence-level labeled.
CTI-metrics Only a few works provide labeled data that correctly capture CTI-metrics.
TTPDrill <cit.> and AttacKG <cit.> perform the manual labeling of 80 and 16 reports, respectively, on a per-report basis, but unfortunately do not share them. Also, they cover only attack pattern extraction.
SecBERT <cit.> evaluates the performance on a large sentence-level dataset, but then provides only 6 reports with CTI-metrics.
Similarly, as part of its evaluation LADDER <cit.> also includes the Attack Pattern Extraction task using CTI-metrics, but just on 5 reports (which are not publicly shared).
§ THE DATASET
The lack of an open and sufficiently large dataset focused on structured CTI extraction hinders our ability to evaluate existing solutions and to consistently improve on the state-of-the-art.
To fill this gap we created a new large dataset including 204 reports spanning the 12 months starting from February 2022, and their corresponding STIX bundles, as extracted by our expert team of CTI analysts. The dataset represents real-world CTI data, as processed by security experts, and therefore exclusively focuses on CTI-metrics.
We make the dataset publicly accessible.[The link to the repository is hidden for anonymity.]
The remainder of this section provides information about the dataset creation methodology, and introduces high-level statistics about the data.
§.§ Methodology
Our organization includes a dedicated team of CTI analysts whose main task is to perform structured CTI extraction from publicly available sources of CTI.
We leverage their expertise and established methodology to create a dataset of unstructured reports, and their corresponding extracted STIX bundles.
Among the reports processed daily by our CTI analyst team, we selected a subset of 204 publicly available reports published by well-known relevant sources (cf. Section <ref>), and we further manually verified the classification of each of them.
Structured CTI extraction is manually performed by CTI analysts, organized in three independent groups with different responsibilities, as outlined next:
Group A selects unstructured reports or sources of information for structured CTI extraction. The selection is based on the analyst's expertise, and is often informed by observed global trends. This group is formed by a variable number of people, usually from two to four.
Group B performs a first pass of structured CTI extraction from the selected sources. This group makes large use of existing tools to simplify and automate information extraction. Notice that this set of tools partially overlaps with those we mentioned in Table <ref> and that we later assess in our evaluation in Section <ref>. The actual structured CTI extraction happens in multiple processing steps. First, the report is processed with automated parsers[Ad-hoc parsers are developed by the CTI analysts team whenever a new web source is consulted.], e.g., to extract text from a web source. Second, the retrieved text is segmented into groups of sentences. These sentences are then manually analyzed by the analyst, who might further split, join, or delete sentences to properly capture concepts and/or remove noise. The final result is a set of paragraphs. Third, the analyst applies automated tools to pre-label the named entities, and then performs a manual analysis of each paragraph, flagging the entities that are considered relevant and correct, and potentially adding entities that were not detected. At this stage, the use of automated entity extraction tools expedites the tagging process: named entities within the text are highlighted, which accelerates reading, while the accuracy of the automatically assigned labels is further verified to ensure their correctness.
Fourth, a second analysis is performed on the same set of paragraphs, this time to extract attack patterns.
Also in this case, automated tools are used in the analysis process. The Logistic Regression model implemented in TRAM <cit.> is used to identify sentences that clearly contain attack patterns.
The classification of these identified sentences can be quickly verified. Subsequently, the remaining text, which does not contain clear or explicit descriptions of attack patterns, is manually analyzed and classified.
By initially identifying obvious attack pattern definitions in the text, not only is the amount of text requiring in-depth analysis reduced, but the classification of ambiguous sentences is also simplified.
This group is composed of two analysts, with each analyst processing different reports.
Finally, the analyst uses a visual STIX bundle editor (an internally built GUI) to verify the bundle and check the correctness of each attribution, i.e., the definition of relations among entities.
Group C performs an independent review of the work performed by Group B.
The review includes manual inspection of the single steps performed by Group B, with the goal of accepting or rejecting the extracted STIX bundle.
This group is also composed of two analysts. The members of Group C and Group B switch roles for each analyzed report.
The above process is further supported by a software infrastructure that our organization developed specifically to ease the manual structured CTI extraction tasks. Analysts connect to a web application and are provided with a convenient pre-established pipeline of tools. Furthermore, the web application keeps track of their interactions and role (e.g., analyst vs reviewer), and additionally tracks the time spent on each sub-step of the process. This allows us to estimate the time spent to perform structured CTI extraction on a report (excluding the work of Group A). For the dataset presented in this paper, we observed an average of 4.5h per report, with the majority of the time spent by Group B (about 3h).
In addition to the process described above, we incorporated an additional step involving a group of two researchers. This step focused on validating a subset of reports that were processed by Groups B and C. Firstly, we selected reports that were publicly available on the Internet and published by well-known organizations and institutions. Next, the two researchers independently relabeled and classified these reports. As a result of this step, we obtained a final dataset consisting of 204 reports where Groups B, C, and the researchers unanimously agreed on the labels and classifications.
During this validation process, the group of researchers directly consulted the original web sources to avoid any potential, even if remote, errors introduced by the automated web parser used by Group B.
§.§ Dataset Summary
The final dataset comprises 204 reports and their corresponding STIX bundle representations. The reports are published by renowned organizations and institutions such as Palo Alto Networks, Trend Micro, and Fortinet, resulting in a total of 62 different sources, with each source providing on average 3.3 reports (cf. Table <ref>).
Approximately 79% of our sources are also referenced on the official MITRE ATT&CK website as external references when providing procedure examples for specific attack pattern techniques.
This confirms that the selected reports are representative of a wide well-known body of CTI sources.
Table <ref> presents the main topics covered in the reports. Approximately 75% of the reports focus on a specific malware, often accompanied by information about the threat actor utilizing or developing the malware (around 30%).
Additionally, around 7% of the reports discuss related vulnerabilities, while approximately 8% provide details about both the associated threat actor and the exploited vulnerability.
Some reports do not include malware entities but specifically describe threat actors (approximately 11%) or vulnerabilities associated with them (around 4%).
The remaining 10% of reports cover topics such as attack campaigns and vulnerabilities.
The selected reports in the dataset encompass approximately 90% of the attack pattern classes found in the MITRE ATT&CK Matrix for Enterprise.
In addition, the dataset covers all the 10 most prevalent MITRE ATT&CK tactics and techniques leveraged by attackers in 2022 <cit.> with tens of reports mentioning each one of them. In total, the reports mention 188 different malware variants and 91 different threat actors.
The resulting dataset comprises 204 STIX bundles, which collectively contain 36.1k entities and 13.6k relations. Figure <ref> presents the resulting STIX ontology based on our dataset, including 9 unique entity types and 5 unique relation types. The figure also shows the set of admissible types of relations between specific pairs of entity types.
Table <ref> reports the dataset statistics of the STIX bundles associated with the reports and is split into three sections: the total number of STIX objects and relations, the number of STIX objects by type, and the number of STIX relations by type. For the last two sections, the last column provides the share of bundles that include a given type of entity or relation at least once.
For example, 75% of the bundles include a Malware entity, and 54% include a Threat Actor. This highlights the prevalence of these critical components within the dataset, underscoring their importance in the context of CTI extraction and analysis.
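To make the bundle structure concrete, the following sketch assembles a minimal STIX-2.1-style bundle as plain Python dictionaries for the HelloXD/x4k example; the identifiers, the omission of timestamps and other required metadata, and the exact field layout are simplifications for illustration, not the serialization used in the released dataset:

import json, uuid

def sid(stix_type):
    # STIX object identifiers follow the "<type>--<uuid>" convention.
    return f"{stix_type}--{uuid.uuid4()}"

malware = {"type": "malware", "id": sid("malware"), "name": "HelloXD", "is_family": True}
actor = {"type": "threat-actor", "id": sid("threat-actor"), "name": "x4k"}
identity = {"type": "identity", "id": sid("identity"), "name": "Windows and Linux systems"}

relations = [
    {"type": "relationship", "id": sid("relationship"), "relationship_type": "uses",
     "source_ref": actor["id"], "target_ref": malware["id"]},
    {"type": "relationship", "id": sid("relationship"), "relationship_type": "targets",
     "source_ref": malware["id"], "target_ref": identity["id"]},
]

bundle = {"type": "bundle", "id": sid("bundle"),
          "objects": [malware, actor, identity] + relations}
print(json.dumps(bundle, indent=2))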
§ IT IS TIME FOR ACTION
The dataset introduced in Section <ref> allows us to quantify the performance of existing tools for structured CTI extraction. We present those results in detail later in Section <ref>, but anticipate here that our empirical experience is confirmed: the performance of previous work on structured CTI extraction is still limited. For example, the best performing tools in the state-of-the-art provide at most 60% F1-score when extracting entities such as Malware or Threat Actor.
Given the pressure to reduce the cost of structured CTI extraction in our organization, the limitations of the state-of-the-art, and the recent emergence of a new wave of powerful machine learning technologies for natural language processing, such as Large Language Models (LLM), we developed aCTIon: a structured CTI extraction framework. Our goal is to significantly simplify, or ideally entirely replace, the information extraction step from the task, i.e., the work of Group B described in Section <ref>, focusing most of the manual effort only on the bundle review step (the work of Group C).
aCTIon builds on the recent wave of powerful LLMs, therefore we provide first a short background about this technology, before detailing aCTIon's design goals and decisions.
§.§ Large Language Models primer
LLMs <cit.> are a family of neural network models for text processing, generally based on Transformer neural networks <cit.>. Unlike past language models trained on task-specific labeled datasets, LLMs are trained using unsupervised learning on massive amounts of data. While their training objective is to predict the next word given an input prompt, the scale of the model combined with the massive amount of ingested data makes them capable of solving a number of previously unseen tasks and of acquiring emergent behaviors <cit.>. For instance, LLMs are capable of translating from/to multiple languages, performing data parsing and extraction, classification, summarization, etc.
More surprisingly, these emergent abilities include creative language generation, reasoning and problem-solving, and domain adaptation <cit.>.
From the perspective of system builders, perhaps the most interesting emergent ability of LLMs is their in-context learning and instruction-following capabilities. That is, users can program the behavior of an LLM prompting it with specific natural language instructions. This removes the need to collect specific training data on a per-task basis, and enables their flexible inclusion in system design.
For instance, a prompt like "Summarize the following text" is sufficient to generate high-quality summaries.
While LLMs have great potential, they also have significant limitations. First, their training is very expensive, therefore they are retrained infrequently. This makes an LLM unable to keep up to date with recent knowledge. Second, their prompt input and output sizes are generally limited. The input of an LLM is first tokenized and then provided to the model; a token can be thought of as a part of a word. For instance, a model might limit the total size of input plus output to 4k tokens (about 3k-3.5k words), which restricts the kind of inputs that can be processed. Finally, LLMs might generate incorrect outputs, a phenomenon sometimes called hallucination <cit.>. In such cases, the LLM-generated answers might be imprecise, incorrect, or even completely made-up, despite appearing as confident statements at first glance.
§.§ aCTIon - Design
Figure <ref> depicts the overall architecture of the aCTIon framework, which comprises three main components. Downloader and parser converts unstructured input reports in different formats, e.g., HTML, to text-only representations. It includes plugins to handle the format of specific well-known CTI sources, as well as a fallback default mode if no plugin is available for the desired source.
The second component is the core of the aCTIon framework and consists of two different pipelines: one pipeline extracts most of the entities and relations; a second pipeline deals specifically with attack pattern extraction. Both pipelines implement a two-stage process and leverage an LLM at different stages. The two stages implement Preprocessing (P), to select relevant text from the unstructured input report, and Extraction (E), to select and classify the target entities.
The LLM is provided as-a-Service, through API access. While different providers are in principle possible (including self-hosting), aCTIon currently supports the entire GPT family from OpenAI <cit.>. In this paper we specifically focus on one model of this family <cit.>.
Finally, Data Exporter parses the output of the pipelines to generate the desired output format, i.e., STIX bundles.
Design challenges and decisions
aCTIon's two-stage pipelines are designed to handle the two main challenges we faced during the design phase.
First, our main concern has been related to the handling of LLM's hallucinations, such as made-up malware names.
To minimize the probability of such occurrences, we decided to use the LLM as a reasoner, rather than relying on its retrieval capability. That is, we instruct the LLM to use only information that is exclusively contained in the provided input. For instance, we always provide the definition for an entity we want to extract, even if the LLM has in principle acquired knowledge about such entity definition during its training.
Nonetheless, this approach relies exclusively on prompt engineering <cit.>, and by no means does it provide strong guarantees about the produced output. Therefore, we always introduce additional steps with the aim of verifying the LLM's answers. These steps might be of various types, including a second interaction with the LLM to perform a self-check activity: the LLM is prompted with a different request about the same task, with the objective of verifying consistency. Finally, we keep CTI analysts in the output verification loop, always including in our procedures the STIX bundle review step as described in Section <ref>.
A second challenge is related to the input size limitations. Our current target model supports a maximum of 4k tokens to be shared in between input and output size. This budget of tokens has to suffice for: (i) the instruction prompt; (ii) any definition, such as what is a malware entity; (iii) the entire unstructured input text; (iv) the produced output. Taking into account that reports in our dataset can be over 6k words long, we had to introduce ways to distill information from the unstructured text, before performing information extraction. Our solution was to introduce the pre-processing steps in our pipelines, with the purpose of filtering, summarizing and selecting text from unstructured inputs. Like in the case of the self-check activity, we might leverage the LLM to perform text selection and summarization.
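As a rough illustration of this budgeting constraint, the helper below estimates whether a report fits the context window without preprocessing; the 4-tokens-per-3-words ratio and the 512-token output budget are rule-of-thumb assumptions, not properties of any specific tokenizer or of aCTIon itself:

def rough_token_count(text):
    # Rule-of-thumb estimate: roughly 4 tokens for every 3 words (about 3k-3.5k words in 4k tokens).
    return int(len(text.split()) * 4 / 3)

def needs_preprocessing(report_text, instruction_prompt, definitions,
                        context_limit=4096, output_budget=512):
    # The budget must cover (i) the instructions, (ii) the definitions,
    # (iii) the report text, and (iv) the expected output.
    used = (rough_token_count(instruction_prompt) + rough_token_count(definitions)
            + rough_token_count(report_text) + output_budget)
    return used > context_limit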
§.§ Entity and Relation Extraction Pipeline
For the entity and relations pipeline, the preprocessing step performs iterative summarization <cit.>.
First, the input text is split in chunks of multiple sentences, then each chunk is summarized using the LLM, with the following prompt.
[Prompt box: "Preprocessing prompt" (content not recovered from the source).]
The generated summaries are joined together in a new text that is small enough to fit in the LLM input. This process could be repeated iteratively, however in our experience a single iteration is generally sufficient.
The extraction stage takes as input the summarized report and performs as many requests as there are entities/relations to be extracted. Each prompt contains: (i) a definition of the entity that should be extracted; (ii) a direct question naming that entity. A (partial) example follows.
[Prompt box: "Entity Extraction prompt" (content not recovered from the source).]
After the extraction, the pipeline performs a check-step, querying the LLM to confirm that the extracted entity/relation is present in the original text, and reporting an error in case of inconsistency. In our tests no errors were reported.
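A compact sketch of the two-stage pipeline is given below. The llm() callable stands in for whatever chat-completion API is configured, the prompt wording paraphrases rather than reproduces the prompts used by aCTIon, and the final check is simplified to a literal string match instead of a second LLM query.

def extract_entities(report_text, entity_definitions, llm, chunk_size=40):
    # Stage 1 (preprocessing): summarize the report chunk by chunk, then join the summaries.
    sentences = report_text.split(". ")
    chunks = [". ".join(sentences[i:i + chunk_size])
              for i in range(0, len(sentences), chunk_size)]
    summary = " ".join(
        llm("Summarize the following text, keeping threat-intelligence details such as "
            "names of malware, threat actors and victims:\n" + chunk)
        for chunk in chunks)

    # Stage 2 (extraction): one request per entity type, each carrying its definition.
    extracted = {}
    for entity_type, definition in entity_definitions.items():
        answer = llm(definition + "\nBased only on the text below, list the " + entity_type
                     + " entities it describes, comma-separated, or answer 'none'.\n" + summary)
        candidates = [c.strip() for c in answer.split(",")
                      if c.strip() and c.strip().lower() != "none"]
        # Check step (simplified): keep only entities found verbatim in the original report.
        extracted[entity_type] = [c for c in candidates if c.lower() in report_text.lower()]
    return extracted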
§.§ Attack Pattern Extraction Pipeline
Extracting Attack Patterns differs significantly from the extraction of simpler entities. As introduced earlier, this task is about identifying behaviors described in the text and associating them with a behavior definition from the MITRE ATT&CK Matrix taxonomy. Given the large number of attack patterns in the MITRE ATT&CK Matrix, it would be inefficient (and expensive) to query the LLM directly for each attack pattern's definition and group of sentences in the input report.[Assuming that a report has 10 paragraphs to inspect, and considering the over 400 techniques described in the MITRE ATT&CK Matrix, we would need to perform over 4k LLM requests for a single report!] We rely instead on an approach that is in principle similar to previous work <cit.>: we check the similarity between embeddings of the report's sentences and of the attack patterns' description examples. An embedding is the encoding of a sentence generated by a language model[aCTIon computes the embeddings using an embedding model from OpenAI <cit.>, but our approach is generic and not tied to a specific text embeddings generation method.]. The language model is trained in such a way that sentences with similar meanings have embeddings that are close according to some distance metric (e.g., cosine similarity). Thus, our pipeline's extraction stage compares the similarity between the report's sentence embeddings and the embeddings generated for the attack pattern examples provided by MITRE.
However, differently from the state-of-the-art, we design a pre-processing stage with the goal of generating different descriptions for the same potential attack patterns contained in the report. Here, an important observation is that an attack pattern might be described in very heterogeneous ways. Therefore, for the preprocessing our goal is to generate multiple descriptions of the same attack pattern, to enhance our ability to discover similarities between such descriptions and the taxonomy's examples. In particular, we introduce three different description generation strategies.
The first strategy prompts the LLM to extract blocks of raw text or sentences that explicitly contain formal descriptions of attack patterns. The output of this strategy is generally a paragraph, or in some cases a single sentence. The example prompt follows.
[Prompt box: "Attack Pattern Extraction preprocessing strategy #1" (content not recovered from the source).]
The second strategy leverages the LLM's reasoning abilities and prompts it to describe step-by-step the attack's events, seeking to identify implicit descriptions <cit.>. The output of the second strategy is a paragraph.
[Prompt box: "Attack Pattern Extraction preprocessing strategy #2" (content not recovered from the source).]
Finally, the third strategy simply applies sentence splitting rules on the input text, similarly to what happens in previous work, and provides single sentences as output.
All the outputs of the three selection strategies are passed to the extraction step, where they are individually checked for similarity with the MITRE taxonomy's examples. We empirically define a similarity threshold, above which we assign to the examined text block the attack pattern classification of the corresponding MITRE taxonomy example.
The three strategies extract complementary and non-overlapping information. We report a more detailed analysis in Section <ref>.
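The extraction step of this pipeline then reduces to a nearest-neighbour search in embedding space, as sketched below; the embed() callable, assumed to return L2-normalized vectors, and the 0.8 threshold are placeholders rather than the exact components and value used by aCTIon.

import numpy as np

def classify_attack_patterns(candidate_texts, technique_examples, embed, threshold=0.8):
    # candidate_texts: text blocks produced by the three preprocessing strategies.
    # technique_examples: dict mapping a MITRE technique id to a list of example descriptions.
    example_vecs = {tid: np.array([embed(ex) for ex in examples])
                    for tid, examples in technique_examples.items()}
    matched = set()
    for text in candidate_texts:
        v = np.asarray(embed(text))
        for tid, vecs in example_vecs.items():
            # With unit-length vectors, the dot product equals the cosine similarity.
            if float(np.max(vecs @ v)) >= threshold:
                matched.add(tid)
    return matched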
§ EVALUATION
The need to reduce the effort of CTI analysts in our organization pushed us to test many solutions and previous works. In this section, we present the evaluation of aCTIon in comparison to such solutions, which we consider as performance baselines. We start by presenting the implementation of the baselines (which were not always available in open source) and the evaluation metrics, and then present results on the dataset introduced in Section <ref> together with an ablation study.
§.§ Baselines Selection and Implementation
For the baseline implementations, we followed these principles.
First, we aimed to include at least one representative method for each family of NLP algorithms used in the literature.
Second, whenever possible, we used the original open-source implementation of the methods. When this was not possible, we relied on the original implementation of the underlying NLP algorithm.
Third, we trained or fine-tuned all methods using the same dataset (when possible).
In fact, the NLP-based methods we tested can be leveraged in two different ways. One approach is to train them on general data and use them directly (later referred to as domain-agnostic models). Another approach is to further fine-tune them on CTI-specific data before using them.
Finally, we used the default hyperparameters as described in the corresponding papers.
We discuss the implemented solutions next, dividing them among solutions that focus on general entity and relations extraction, and solutions that deal specifically with attack pattern extraction.
Entity and Relations Extraction
As explained in Section <ref>, a Named Entity Recognition (NER) solution is the basic building block of previous work. The three main families of models used in state-of-the-art NER tasks are Convolutional Neural Networks (CNN), BiLSTM <cit.> and Transformers <cit.>.
Among the previous work targeting structured CTI extraction, GRN <cit.> relies on CNN, ThreatKG <cit.> and CASIE <cit.> are based on BiLSTM, FLERT <cit.> and LADDER <cit.> are based on Transformers.
In the case of CNN and BiLSTM methods, models are specifically trained end-to-end for the Entity extraction task.
Instead, approaches based on Transformers typically rely on pre-trained language models that are trained either on a general-domain corpus or a corpus that includes both general and domain-specific documents <cit.>. These models are then fine-tuned on a labeled dataset for the Entity extraction task.
For all approaches, a word-level CTI dataset is required. This dataset consists of a CTI corpus where individual words in a sentence have been annotated with tags that identify the named entities according to a specific CTI ontology and a specific format, such as the BIO format <cit.>.
The labeling for this task is complex and time-consuming <cit.>, and moreover it requires cross-domain expertise. In fact, CTI experts also need to be familiar with NLP annotation techniques, which makes generating such datasets challenging. Thus, we could not use our dataset as a training corpus, and instead relied on a publicly accessible dataset.
We trained all the selected models on the same dataset, using the same train/test/validation split, to ensure a fair comparison. Specifically, we used the word-level dataset provided in <cit.>, which has also been used in previous works such as <cit.>. We chose this dataset because it is the largest open-source dataset available and it is labeled according to an ontology that can be easily mapped to STIX.
We then test the performance of the trained methods and tools using our dataset, since it focuses on CTI-metrics.
For CNN-based NER, we use the original open-source implementation <cit.> of GRN <cit.>.
For BiLSTM-based models, we use the domain-agnostic open-source implementation <cit.> from <cit.>. Indeed, ThreatKG <cit.> does not provide an open-source version of their models, and while CASIE <cit.> does provide an open-source implementation, it cannot be directly adapted to the dataset used to train the other models. Also, their dataset is labeled according to an ontology that is very different from STIX and thus cannot be used for a fair comparison.
Finally, for Transformer-based models, we present two baselines: one is the original implementation of LADDER <cit.> as a domain-specific tool, and the other one is a domain-agnostic NER implementation based on FLERT <cit.> using the open-source implementation provided in <cit.>.
Attack Pattern Extraction For the Attack Pattern Extraction task, we evaluated a wide range of approaches, namely template matching (TTPDrill <cit.>, AttacKG <cit.>), Machine Learning (rcATT <cit.>, TRAM <cit.>), LSTM (cybr2vec LSTM <cit.>), and Transformers (LADDER <cit.>, SecBERT <cit.>). All the baselines target the identification of the attack patterns in terms of MITRE techniques[All the baselines focus on the main MITRE enterprise techniques without considering the lower-level sub-techniques.] and provided either a pre-trained model or their own dataset to perform the training.
All the methods employ datasets based on the same taxonomy (i.e., MITRE ATT&CK) and that were directly extracted from the same source, either the description of the MITRE attack patterns or samples of MITRE attack pattern description (both provided by MITRE). Given the high similarity of the datasets in this case, we trained each model using their own dataset.
All the methods were evaluated using their original open-source implementations <cit.>.
§.§ Performance metrics
We compared each method against the Ground Truth (GT) from our dataset using the following metrics as defined in <cit.>:
* Recall: fraction of unique entities in the GT that have been correctly extracted
* Precision: fraction of unique extracted entities that are correct (i.e. part of the GT)
* F1-score: harmonic mean of Precision and Recall
For the sake of clarity, in the rest of this section we use "entities" to refer to both entities and attack patterns, i.e. the outcomes of the two extraction tasks.
From a high-level perspective, the Recall indicates, for a given report, how many of the GT entities have been covered by a method.
The Precision is instead affected by extracted entities that are wrong (i.e., False Positives), e.g., those extracted with a wrong type or those that the human annotator did not consider relevant enough.
In contrast, a True Positive refers to an entity that has been correctly identified, with the proper type and with the same text as in the GT.
Finally, a False Negative refers to an entity present in the GT but that has been missed by the method at hand.
A naïve tool extracting all the possible entities of a given type might have a very high Recall but a very low Precision.
A good tool should rather balance both metrics, especially when used to help the annotation task of the human operator, who would otherwise spend a lot of time checking results with many False Positives.
To further investigate this aspect, we also provide the number of entities reported by each tool and we compare it to the numbers from the GT.
We perform this investigation only for the Attack Pattern Extraction task because, based on a simple analysis on the GT, there are an order of magnitude more Attack Patterns than the other types of entities, making this issue particularly important.
In the following subsections, we compute these metrics for Malware, Threat Actor, Identity pointed to by a targets relation (which we simply call Target), and Attack Pattern.
We adopt the same methodology as <cit.>: we compute the metrics for each report, and then provide aggregate statistics.
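In code, these report-level CTI-metrics reduce to simple set operations over unique extracted items; the sketch below follows the definitions above and adopts the usual convention for empty prediction or ground-truth sets:

def report_metrics(predicted, ground_truth):
    # predicted, ground_truth: sets of unique (entity_type, entity_name) items for one report.
    tp = len(predicted & ground_truth)   # correctly extracted unique entities
    fp = len(predicted - ground_truth)   # extracted but wrong or not relevant
    fn = len(ground_truth - predicted)   # present in the GT but missed
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return precision, recall, f1

# Metrics are computed per report and then aggregated (e.g., averaged) across all reports.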
Given the nature of the GT (with some reports having just 0 or 1 entity of a given type), some metrics exhibit a bimodal distribution across the reports, i.e. they can be either 0 or 1.
In order to provide better visibility of the underlying distribution of the values, we selected violin plots <cit.> in place of boxplots.
Still, to show at-a-glance the data range, we also report both the average across reports (with a marker) and the 25th and 75th percentiles (as vertical bars).
§.§ Entity Extraction
Figures <ref> and <ref> show the results for aCTIon against the baselines for Malware and Threat Actor entities, respectively.
aCTIon outperforms the other baselines in terms of Recall, Precision, and consequently F1-score, for both entity extraction tasks.
aCTIon achieves an average Recall, Precision and F1-score of 77%, 71% and 72%, respectively, for Malware entity extraction, and 84%, 78% and 80% for the extraction of Threat Actor entities. This is an increase of over 25 percentage points for Malware, and about 20 percentage points for Threat Actor, when comparing the F1-score with the best performing baseline (LADDER).
To explain this performance difference, we inspected where the baselines fail, and identified two main cases: (i) baselines fail to understand when an entity is not relevant for the target STIX bundle; (ii) they tend to wrongly include entities that are conceptually close to the entity of interest (e.g., they select a named software in place of a Malware).
For example, when considering our example report from Section <ref>, all baselines can identify HelloXD as malware, resulting in the same recall performance (for this specific report) as that achieved by aCTIon. However, their precision is much lower. This is because baseline methods include entities such as LockBit 2.0 and x4k (the name of the threat actor and their several aliases) among the detected malware. Furthermore, they also include a wide range of legitimate applications such as Tox, UPX, Youtube, and ClamAV. For instance, in the following extract, in contrast to aCTIon, the baselines identify ClamAV as a malware:
led us to believe the ransomware developer may like using the ClamAV branding for their ransomware.
Figure <ref> shows the results for extracting Target entities. This type of entity is trickier, since it requires understanding both the named entities and the relation among them. Consider the following extract:
HelloXD is a ransomware family performing double extortion attacks that surfaced in November 2021. During our research, we observed multiple variants impacting Windows and Linux systems.
The above sentences describe the targets of the attack: "Windows and Linux systems". In STIX, they will be classified with the generic Identity class, which includes classes of individuals, organizations, systems, or groups. To correctly identify the Target entity, it is then necessary to understand if the Identity node is an object of a "targets" relation.
Among the considered baselines, only LADDER is capable of extracting this type of entity, as it is equipped with a Relation Extraction model in addition to NER components. Despite the complexity of this task, aCTIon demonstrates its effectiveness by significantly outperforming LADDER also in this case (about 50%points higher F1-score).
§.§ Attack Pattern Extraction
For the Attack Pattern Extraction, we focused on a subset of 127 reports that do not include in the text a list or a table of MITRE Techniques in the well-defined format[E.g., T1573 refers to the MITRE Technique Encrypted Channel. The full taxonomy is provided at <https://attack.mitre.org/techniques/enterprise/>.].
Attack Patterns reported in such a well-defined format can be trivially extracted with a regex-based approach, and therefore do not require any sophisticated extraction method.
The first plot of Figure <ref>a reports the number of attack patterns extracted from each report by the different methods (please notice that, apart from LADDER, the baselines are different from those used for the previous task, cf. Sec. <ref>).
The plots of Figure <ref>b report instead the Recall, Precision and F1-score performance metrics.
From a high-level perspective we can divide the baseline methods in two groups.
The first group includes "conservative" methods, i.e., those that tend to limit the number of Attack patterns extracted from each report, resulting in a lower average number compared to the Ground Truth[The first bar in the violin plot in Figure <ref>a indicates the actual number of techniques per report.], namely rcATT, LADDER, TRAM and cybr2vec LSTM. This group is characterized by recall values that are significantly lower than those of precision.
The second group includes instead methods that have an average number of extracted attack patterns higher than the Ground Truth, and with recall similar to precision, namely TTPDrill, AttacKG and SecBERT.
In the first group, TRAM offers the best performance (F1-score), while in the second group, SecBERT offers the overall best performance.
In addition to being the best within their respective group, these two methods represent two different approaches to the Attack pattern extraction.
Indeed, the former, TRAM, is a framework designed to assist an expert in manual classification, i.e., its output must be reviewed by an expert, so it is important to have high precision and keep the number of attack patterns to be verified low.
This can be achieved, e.g., by increasing TRAM's minimum confidence score from 25% (its default value) to 75%.
On the other hand, the latter, SecBERT, is designed to be fully automated.
aCTIon outperforms all the baselines in terms of overall performance (F1-score) by about 10 percentage points.
More importantly, the recall is higher than any other solution, and the average precision is about 50%. These results make a manual verification by CTI analysts manageable: the average number of attack patterns extracted per-report is 25 (cf. Figure <ref>a).
Comparison with previous work
The main difference between aCTIon and state-of-the-art methods is in how they select text that may contain attack patterns. Some methods, such as rcATT, use the entire document, while others (SecBERT and TRAM) use all the sentences in the document. Yet others (such as LADDER and TTPDrill) select a specific subset of sentences. The selected pieces of text are then used as input to a classifier. The advantage of aCTIon is that it can leverage the LLM's reasoning capability to improve the selection of text blocks before the classification/extraction.
The following is a paragraph describing the use of attack pattern T1573 "Encrypted Channel"[<https://attack.mitre.org/techniques/T1573/>.] extracted from a report:
The January 2022 version of PlugX malware utilizes RC4 encryption along with a hardcoded key that is built dynamically. For communications, the data is compressed then encrypted before sending to the command and control (C2) server and the same process in reverse is implemented for data received from the C2 server. Below shows the RC4 key "sV!e@T#L$PH%" as it is being passed along with the encrypted data. The data is compressed and decompressed via LZNT1 and RtlDecompressBuffer. During the January 2022 campaigns, the delivered PlugX malware samples communicated with the C2 server 92.118.188[.]78 over port 187.
That PlugX leverages RC4 encryption to communicate with the command and control server, and thus that this is an obfuscated communication channel, becomes clear only after a few sentences.
Some of the state-of-the-art methods will miss this attack pattern because they act at sentence level. Some others may miss it because the selection of the sentences depends on a previous correct identification of the involved entities in the description (and in the previous section we show how this step can fail). Finally, for those methods which use the whole document, there is no guarantee that the document-level representation will capture this information.
aCTIon uses two different strategies for selecting the text block. The first strategy prompts the LLM to retrieve portions of the text that contain the attack pattern description (i.e., more than one sentence). However, the LLM may not recognize the specific attack pattern; indeed, there is no guarantee that the LLM knows it. To avoid such cases, a second strategy is used together with the previous one. The LLM is also prompted to reason about the key steps performed in the attack. The sentence below is the output of the second strategy.
The January 2022 version of PlugX malware uses RC4 encryption with a dynamically built key for communications with the command and control (C2) server.
This sentence not only clearly expresses the attack pattern but is also easier to process in the classification step.
Finally, we also process all the sentences in the text separately.
How these strategies contribute to the final result is discussed in the next section.
§.§ Ablation Study
We conducted an ablation study on the preprocessing step for both the Entity and Attack Pattern extraction pipelines. Indeed, it is crucial to understand how information is selected and filtered in this step.
For Entity extraction, we compared a configuration in which the preprocessing step was included (aCTIon) to a configuration where it was omitted (aCTIon (w/o PP)).
When omitted, the input report was provided in separate chunks if larger than the available input size.
Figures <ref> and <ref> present the performance for the two different configurations when extracting Malware and Threat Actors entities, respectively. The decrease in precision for aCTIon (w/o PP) was expected because non-relevant information was still present in the text during processing. Additionally, since only a few Threat Actors are usually present in the same report, the drop in performance was more noticeable for the Malware entity.
It is noteworthy that the preprocessing step does not result in any decrease in Recall, indicating that no relevant information is lost in the summary produced by the LLMs. Additionally, for Malware entities, the recall on preprocessed text is even slightly higher. We conclude that reformulating and summarizing concepts during preprocessing can aid in the extraction process.
In the case of attack pattern extraction, we utilize the preprocessing module to select and enhance text extracts that potentially contain descriptions of attack patterns. The objective of our ablation study is to examine how different preprocessing strategies contribute to identifying such descriptions. We present four variants of our method that correspond to various preprocessing configurations and report their performance in Figure <ref>.
The first configuration, aCTIon (VTE), selects verbatim text excerpts (strategy #1 from Section <ref>) that may contain an attack pattern. This results in a few attack patterns per report, with high precision (above 67%). However, it has lower recall, since it is unusual for all attack patterns to be explicitly described in the text.
In the second configuration, aCTIon (SBSA) (strategy #2), the preprocessing is configured to describe the step-by-step actions performed during the attack, aiming to capture implicit or non-obvious descriptions of attack patterns. Using this configuration, we match the global performance (F1-score) of the best state-of-the-art method (SecBERT), while outputting on average 14.4 attack patterns per report, which is on average half of what is produced by SecBERT.
The third configuration, aCTIon (VTE+SBSA), uses both preprocessing strategies together, resulting in improved performance. Additionally, it shows that the proposed preprocessing methods extract non-overlapping, complementary information.
Finally, the fourth preprocessing configuration, aCTIon (VTE+SBSA+OT), is our chosen configuration described in Section <ref> and labeled aCTIon in Figure <ref>. We report it again here for convenience.
§.§ Multi-language support
Not all CTI sources are in English.
This is an issue for the analyst, who not only needs expertise in the cybersecurity domain, but also has to be fluent in more than one language.
Automated tools can be used only if they support multiple languages, which is typically not the case.
Among our baselines, only LADDER includes a multi-language model (XLM-RoBERTa <cit.>) that can work on non-English reports.
However, this only applies to its NER components for the Entity Extraction task.
Indeed, when considering its 3-stage Attack Pattern Extraction pipeline, all three stages (based on RoBERTa <cit.> and Sentence-BERT <cit.>) support only the English language, making them unsuitable for other languages.
Our method is based on an LLM that during training was exposed to a huge corpus of text including a variety of languages and thus can process reports in languages other than English <cit.>.
Within our dataset, 13 reports are also provided in a language other than English, such as Japanese.
We evaluated the performance of aCTIon against LADDER for the Entity Extraction task and report in Table <ref> the average performance across these 13 reports.
In general, we can observe that aCTIon and LADDER present a performance gap comparable to the one observed when analyzing the English reports (9-35 percentage points).
In the case of Attack Pattern Extraction, only aCTIon is able to produce a usable result, and its performance is comparable to that obtained for English reports.
§ DISCUSSION
Limitations of the Evaluation
Our evaluation focused on Malware, Threat Actor, Target and Attack Pattern entities. This was the case because these entities enabled us to directly compare with previous work, i.e., other entities were not widely supported by other tools. As a result, our evaluation did not extensively cover the extraction of relations. In fact, only the extraction of Target includes relation extraction (of type targets), and we could only compare to LADDER, which supports it (cf. Figure <ref>). However, aCTIon can extract any relation defined in the STIX ontology, and therefore we assessed its performance also in that regard. For example, for all relations between Malware, Threat Actor and Identity, i.e., relations of types uses and targets, aCTIon achieves 73% recall and 88% precision on average (cf. Appendix <ref>).
Deployment advantages
We focused this paper on performance results; however, in a practical setting it is important to consider ease of deployment and maintenance among the goals. Compared to previous work, the reliance of aCTIon on LLMs removes the need to collect, annotate and maintain datasets for the different components of previous work's pipelines (e.g., NER components). Furthermore, previous work, e.g., LADDER, makes extensive use of hand-crafted heuristics to clean and filter the classification output (e.g., count-based filters to remove noisy entities and ambiguities, or allow-lists of known non-malicious software). Such heuristics also require continuous maintenance and adaptation. In contrast, aCTIon does not require the collection and annotation of datasets, nor the use of hand-crafted heuristics (cf. Appendix <ref>).
It is important to note that heuristics are data-dependent, meaning that a change in the dataset's characteristics might require a corresponding change in the utilized heuristics (this is the case, e.g., for the count-based heuristics of LADDER). On the other hand, the prompts used in the aCTIon pipeline are solely task-dependent.
Is the problem solved?
Arguably, most of the benefits of aCTIon derive from the proper use and integration of LLMs in the information extraction pipeline. Our tests with this technology were initially purely exploratory, but as we employed it increasingly in testing deployments, we grew confident that tools like aCTIon can already do most of the heavy lifting in place of CTI analysts for structured CTI extraction. We expect performance to continue to improve with the development of more powerful LLMs (e.g., GPT-4 was released during the writing of this paper) that allow for larger input sizes and better reasoning capabilities. Therefore, we expect the recall and precision metrics to further improve without significant changes to aCTIon.
Nonetheless, we also inherit the shortcomings of LLMs, such as hallucinations <cit.>.
CTI analysts are required to carefully review the outputs of the system.
This issue is what currently hinders the full automation of the structured CTI information extraction tasks.
Hallucinations. One well-known challenge of LLMs is their tendency to produce nonfactual, untruthful information, which is commonly referred to as "hallucinations" <cit.>. To mitigate this issue, aCTIon implements several measures. Firstly, the LLM is instructed to rely solely on the input CTI report provided to it, without considering any additional knowledge acquired during training. Secondly, for entity extraction, it is possible to verify whether the model circumvented this limitation by checking if the extracted entities are actually present in the text. During our tests, we found only 2 (0.9%) such cases for both Malware and Threat Actor entities. Additionally, aCTIon uses only user-provided classes to label entities and attack patterns, thus the model cannot hallucinate labels as output. Furthermore, any errors in classification will have the same impact as a simple misclassification.
Other issues
The current precision and recall of aCTIon are within the 60%-90% range for most entities. In our experience, this is already in line with the performance of a CTI analyst, and unlike human analysts, aCTIon keeps a consistent performance over time, not being affected by tiredness.
In fact, we speculate that many misclassifications are an artifact of the ambiguous semantics associated with CTI and the related standard ontologies. For example, what is considered a relevant entity may differ from one analyst to another. Defining clear semantics for CTI data remains an open challenge.
Ethical considerations
In this paper we do not collect/use any user data nor perform unauthorized experiments over third-party infrastructures. Our dataset only includes already publicly accessible reports about cyber threats, shared by the security community. We believe that this information can help defend against such threats, and we do not foresee any harmful use of the shared structured information that would outweigh the benefits.
We further verified each entry to exclude any potentially confidential or sensitive information.
Reports are shared with a link to the original source. Some reports and STIX bundles in the dataset may contain explicit Indicators of Compromise (IoCs). We have not redacted them from the public version of the dataset as we believe that this information may be valuable for future research. Some reports and their respective STIX objects may include offensive language such as malware or threat actor names. We have not redacted this information because it would affect the usefulness of the data. Lastly, reports and their respective STIX bundles may attribute attacks to specific groups of people or states. When considering such information, the full report text should be considered to put in context why the attribution was suggested by their authors.
Finally, regarding aCTIon, all the STIX bundles it automatically extracts are always verified by CTI experts.
§ RELATED WORK
We covered related work on LLMs in Section <ref> and on structured CTI extraction in Section <ref>, therefore here we focus on works that leverage the structured information. In fact, we already mentioned that structured CTI enables the investigation and monitoring activities inside an organization, i.e. threat hunting, based for example on TTPs <cit.>. However, there are also other relevant uses.
In particular, trend analysis and prediction of threats for proactive defense take advantage of structured CTI <cit.>. Adversary emulation tools like MITRE CALDERA <cit.> can also benefit from structured CTI because they are typically fed with adversarial techniques based on e.g., MITRE ATT&CK.
§ CONCLUSION
We introduced a dataset to benchmark the task of extracting structured Cyber Threat Intelligence (CTI) from unstructured text, and aCTIon, a solution to automate the task.
The dataset is the outcome of months of work from our CTI analysts, and provides a structured STIX bundle for each of the 204 reports included. We release it openly. To the best of our knowledge, the dataset is 34x larger than any other publicly available dataset for structured CTI extraction, and the only one to provide complete STIX bundles.
We then introduced aCTIon, a framework that leverages recent advances in Large Language Models (LLMs) to automate structured CTI extraction. To evaluate aCTIon we selected 10 different tools from the state-of-the-art and previous work, re-implementing them when open source implementations were not available.
Our evaluation on the proposed benchmark dataset shows that aCTIon largely outperforms previous solutions.
Currently, aCTIon is in testing within our organization for daily production deployment.
There has been a recent wave of announcements of security products based on LLMs to analyze and process CTI <cit.>. However, the security community was still lacking a benchmark that would allow the evaluation of such tools on specific CTI analyst tasks. Furthermore, there is a lack of information about how LLMs could be leveraged in this area for the design of systems.
We believe our work provides both a way to benchmark such new tools, with our dataset, and a first system design and set of insights to leverage LLMs in CTI tasks.
§ RELATIONSHIP EXTRACTION
Evaluating the relationship extraction task is more complex than regular entity extraction. In fact, there is a need to include negative samples of relationships in order to verify that the classifier is able not only to confirm the existence of a relation among entities, but also to recognize its absence. Generating random non-existent yet syntactically correct links is an approach commonly adopted in the state-of-the-art <cit.>.
Thus, we defined a benchmark to evaluate aCTIon's performance in extracting STIX relations between entities.
We use as positive samples (i.e. existing relations between existing entities) all the relations between Malware, Threat Actor, and Identity entities that were present in the STIX representation of the report (i.e. as they were extracted by the CTI analyst).
We use as negative samples (i.e. not existing relations between existing entities) a set of randomly generated relations between entities that are present in the text.
This set may also include entities which are extracted by the Entity extraction task but that have been then filtered out in the final STIX representation by the CTI analyst (e.g. Lockbit 2.0 in the HelloXD report).
Positive samples and negative samples form our evaluation dataset.
Furthermore, since the scope of the test is to benchmark just the relation extraction capabilities, we do not use entities extracted by aCTIon (that would automatically filter out some negative samples) but we build this dataset directly from the STIX representation provided by the CTI analyst.
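A minimal sketch of how such an evaluation set could be assembled from an analyst-produced bundle is shown below; the triple-based encoding and the small set of admissible entity-type combinations are illustrative simplifications, not the dataset's actual serialization or the complete STIX ontology:

import random

def build_relation_eval_set(entities, true_relations, n_negatives, seed=0):
    # entities: list of (entity_id, entity_type) pairs found in the report text.
    # true_relations: set of (source_id, relation_type, target_id) triples from the STIX bundle.
    rng = random.Random(seed)
    admissible = {("threat-actor", "uses", "malware"),
                  ("threat-actor", "targets", "identity"),
                  ("malware", "targets", "identity")}
    positives = [(triple, 1) for triple in true_relations]
    negatives = []
    for _ in range(100 * n_negatives):          # bounded number of sampling attempts
        if len(negatives) >= n_negatives:
            break
        (src, s_type), (dst, d_type) = rng.sample(entities, 2)
        for rel in ("uses", "targets"):
            triple = (src, rel, dst)
            # Keep only syntactically admissible triples that are not in the ground truth.
            if ((s_type, rel, d_type) in admissible and triple not in true_relations
                    and (triple, 0) not in negatives):
                negatives.append((triple, 0))
                break
    return positives + negatives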
aCTIon is configured to preprocess the text using the same compression techniques described in Section <ref> and is prompted to extract the relation between two entities using a direct question. Figure <ref> shows the results for all relations between Malware, Threat Actor, and Identity entities, i.e., relations of types targets and uses. On average, aCTIon achieves 73% recall, 88% precision and 86% F1-score.
§ BASELINES WITH HEURISTICS
In this section we consider 3 additional baselines, obtained by applying the same post-processing heuristics provided by LADDER to each of the other 3 baselines, i.e., BiLSTM, GRN and FLERT.
We refer to these new baselines as the heuristic-augmented BiLSTM, GRN and FLERT, respectively.
The post-processing heuristics were used by LADDER to remove noisy entities and solve some ambiguities produced by the NER module.
As shown in Figures <ref> and <ref>, the heuristics are able to increase the Precision because some irrelevant entities are filtered out.
At the same time, however, the Recall decreases, implying that the heuristics also wrongly cut out some information which is relevant.
Remarkably, when extracting Malware entities, FLERT is able to reach the F1-score performance of LADDER, which nonetheless remains the best-performing baseline for both types of entity.
|
http://arxiv.org/abs/2307.05114v1 | 20230711084750 | Localization in the Incommensurate Systems: A Plane Wave Study via Effective Potentials | [
"Ting Wang",
"Yuzhi Zhou",
"Aihui Zhou"
] | math-ph | [
"math-ph",
"math.MP"
] |
Localization in the Incommensurate Systems: A Plane Wave Study via Effective Potentials
Ting Wang, Yuzhi Zhou, Aihui Zhou
August 12, 2023
==========================================================================================
In this paper, we apply the effective potentials of the localization landscape theory (Filoche et al., 2012; Arnold et al., 2016) to study the spectral properties of incommensurate systems.
We develop a plane wave method for the effective potentials of incommensurate systems and show that
the localization of the electron density can be inferred from the effective potentials. Moreover, we show that the spectrum distribution can also be obtained from the effective potential version of Weyl's law.
We perform numerical experiments on typical incommensurate systems, showing that the effective potential provides an alternative tool for investigating the localization and spectrum distribution of such systems.
§ INTRODUCTION
Localization of waves in the non-periodic media is a remarkable and well-known phenomenon, which has garnered significant interest due to its critical role in numerous material properties <cit.>.
Due to the advances in techniques nowadays, the localization of incommensurate systems has been widely observed from experiments in optical, mechanical, and low-dimensional material systems <cit.> about 60 years after the classic work of Anderson <cit.>.
In <cit.>, localization was studied in cold atomic gases by artificially controlling
incommensurate potentials through finely tuned lasers.
Furthermore, the localization of electronic wave functions in modern low-dimensional materials such as twisted bilayer graphene can drastically impact their transport and magnetic properties <cit.>. These materials form incommensurate structures through a precisely controlled interlayer twist.
Many significant physical quantities of interest, such as the quantum Hall effect <cit.>, the enhanced carrier mobility <cit.>, and the unconventional superconductivity <cit.>,
have deep connections with the localization of the electrons in the flat bands, though they have not been fully understood yet.
Therefore, studying the localization mechanism in incommensurate systems is a crucial step to understanding the relevant unconventional phenomena and making use of them in the future.
Theoretical studies on the localization properties of disordered or incommensurate systems have attracted particular attention in the physics and mathematics communities; see <cit.> and references therein.
Only the literature directly relevant to this paper will be listed.
In <cit.>, the localized-extended transitions in the one-dimensional incommensurate systems were studied by means of a scattering picture under the plane wave discretization.
Alternatively, a new mathematical technique was proposed to approximate the eigenstates. The landscape concept was first introduced in <cit.>, whose authors provided a novel perspective for predicting the location of the low-energy eigenfunctions via the relevant Dirichlet problem. In <cit.>, some mathematical consequences were presented. Further, in <cit.>, the effective potential, defined as the reciprocal of the landscape function, was proposed. It contains abundant information on localized eigenpairs,
providing a wealth of insights into their localization properties with the advantage of reduced computational cost. Recently, considerable applications of this theory have been presented <cit.>.
In <cit.>, the localization properties in systems such as gases of ultracold atoms and disordered semiconductor alloys were addressed by these prediction methods.
In <cit.>, the localization was studied for tight-binding Hamiltonians in two-dimensional materials by means of the localization landscape theory. Yet, to our best knowledge, this framework has not been applied to the incommensurate systems.
On the other hand, spectral properties play a central role in determining the physical behaviors of incommensurate systems.
Through the evaluation of the spectral distribution, we can in principle derive the system's conductivity, specific heat, magnetism, and superconductivity, among other relevant properties. The density of states is a powerful tool in this respect, as it allows us to compute the spectral distribution by quantifying the number of accessible energy states per unit of energy in a more or less mean-field context, and as an intermediate quantity to further calculate other physical properties.
The integrated density of states, essentially an accumulation of the density of states, offers a broader perspective by accounting for the number of states with energies less than a certain threshold.
A crucial aspect of this process is the application of Weyl's law. This fundamental mathematical and physical principle offers an asymptotic estimation of the integrated density of states, yielding precise results, particularly for large systems and high-energy scenarios <cit.>. In <cit.>, an approximation of the integrated density of states represented by the effective potential, was constructed based on a variant of Weyl's law. This was pursued due to the limitations of the standard approximation.
In this work, we study the localization in the incommensurate system by the plane wave method with the aid of the effective potential.
Unlike the existing literature that addresses the problem within a bounded domain equipped with a specific boundary condition, we undertake a more comprehensive approach. We directly discretize the Schrödinger operator with incommensurate potentials across the entire real space and offer the formulation of the effective potential.
From the effective potential, we are able to extract the spectral properties of the systems within the real space domain of our interest.
Specifically, the positions of the minima of the effective potential allow us to infer where the electron density localizes; the electron density serves as an alternative, more suitable physical observable for studying localization in such extended systems.
Moreover, we provide a means to predict the spectral distribution of incommensurate systems based on the effective potential.
The 1D and 2D numerical experiments give examples of how to capture the localization information of the electron density and spectral distribution by the effective potential efficiently.
Outline.
The rest of this paper is organized as follows. In Section <ref>, the incommensurate systems, the corresponding Schrödinger operator and
other associated observables are briefly introduced.
In Section <ref>, the localization landscape theory and effective potential for the incommensurate systems are presented. In Section <ref>, numerical schemes for partial differential equations related to the incommensurate eigenvalue problem are listed. Furthermore, the process of studying localization under the context of plane wave discretization is explored.
In Section <ref>, some numerical experiments are performed to show the procedure of predicting electron density and spectral distribution from the effective potential.
In Section <ref>, some conclusions are drawn.
§ LOCALIZATION IN QUANTUM INCOMMENSURATE SYSTEMS
Our study focuses on two d-dimensional (d=1,2) periodic systems, arranged in parallel along the (d+1)th dimension. For the sake of simplicity, we neglect the (d+1)th dimension and the distance between the two layers. Notably, the theoretical and algorithmic frameworks elaborated upon in this paper are readily generalizable to incommensurate systems with more than two layers and models that involve the (d+1)th dimension.
A d-dimensional periodic system can be characterized using a Bravais lattice as follows:
ℛ_j={A_jn:n∈ℤ^d}, j=1,2,
where A_j∈ℝ^d× d is an invertible matrix.
The j-th layer unit cell can be represented as
Γ_j={A_jα:α∈[0,1)^d}, j=1,2.
Each layer ℛ_j (j=1,2) exhibits periodicity. This means that the layer remains invariant with respect to its lattice vectors
ℛ_j = ℛ_j + A_j n ∀ n∈ℤ^d.
The corresponding reciprocal lattice and the reciprocal unit cell can be defined as:
ℛ_j^*={2π A_j^-Tn:n∈ℤ^d} and Γ_j^*={2π A_j^-Tα:α∈[0,1)^d}
respectively.
Even though the layers ℛ_1 and ℛ_2 are periodic on their own, the translation invariance might not hold when the layers are stacked together. This leads to so-called incommensurate systems.
Two Bravais lattices ℛ_1 and ℛ_2 are called incommensurate if
ℛ_1∪ℛ_2 + τ = ℛ_1∪ℛ_2 ⇔ τ=0∈ℝ^d .
In this work, we consider systems such that not only the lattices ℛ_1 and ℛ_2 are incommensurate, but their corresponding reciprocal lattices ℛ^*_1 and ℛ^*_2 are also incommensurate.
Our focus is primarily on the Schrödinger operator for the bi-layer incommensurate system, which is given by
H := -1/2Δ + V_1 + V_2,
where V_j:ℝ^d→ℝ^+ (j=1,2) are smooth and ℛ_j-periodic functions.
Owing to the periodicity of the potentials, it is possible to express V_j in terms of a Fourier series:
V_j(x) = ∑_G∈ℛ_j^*V̂_j,G e^iG· x with V̂_j,G = 1/|Γ_j|∫_Γ_jV_j(x)e^-iG· x dx, j=1,2.
The Schrödinger operator is a central object across numerous fields, including condensed matter physics and materials science.
It serves as a fundamental component in the mathematical modeling of quantum-physical processes, including Bose-Einstein condensates associated with ultracold bosonic or photonic gases and the electronic structure of molecule systems.
In particular, we are interested in the incommensurate layered systems, such as single layers of low-dimensional materials stacked on top of each other with a twist.
This configuration drastically affects the properties of the single layer counterparts, leading to the emergence of intriguing phenomena such as localization.
The localization is a fundamental phenomenon in both physics and materials science, characterized by the spatial confinement of particles within a specific region. This confinement results in a significant reduction of their mobility and propagation, thus deeply influencing various properties such as electrical conductivity, optical characteristics, and magnetism among others.
The implications of localization can be so substantial as to alter the inherent behavior of a material, potentially causing a transition from a metallic to an insulating state.
Consequently, the exploration of these mathematical manifestations not only assists in the quantitative understanding of localization effects but also offers a microscopic lens for comprehending the underlying physics.
By examining the Schrödinger operator, one can derive various physical properties, including energy levels, electron density, and other measurable observables.
Since the spectral behavior of the Schrödinger operator proves to be especially fascinating with incommensurate potentials, the density of states (DoS) serves as an appropriate tool for examining its spectral distribution. This concept denotes the number of states within each energy interval at each energy level.
Another crucial concept in this field is the integrated density of states (IDoS), which is defined as the integral of the density of states from -∞ to a real energy value E.
To effectively estimate the spectral distribution, Weyl's law offers an asymptotic formula for the spectral representation <cit.>:
N_V(E) := (2π)^-d vol{(x, ξ) ∈Ω×ℝ^d | V(x) + |ξ|^2 ≤ E} as E →∞,
where V:= V_1 + V_2. In essence, this formula serves as a valuable tool for calculating the IDoS. It forms a significant connection between the spectral properties of an operator with the volume of certain subsets of phase space, opening up new avenues for exploration.
Furthermore, electron density describes the spatial probability distribution of electrons within the system, which fundamentally governs the chemical and physical properties of the systems and thus determines the macroscopic behavior of materials.
Specifically, the density operator for the incommensurate system is represented as:
ρ(x)=f_μ,β(H)(x, x),
where f_μ,β(·)=1/(1+e^β(·-μ))
is the Fermi-Dirac function, μ is the chemical potential and β:=(k_BT)^-1 is the inverse temperature with k_B the Boltzmann constant.
§ EFFECTIVE POTENTIAL OF INCOMMENSURATE SYSTEMS
In a remarkable contribution, Filoche & Mayboroda <cit.> proposed a simple but astonishingly effective method to predict the structure of the eigenfunctions and the spectral distribution.
These predictions are attained by solving the corresponding partial differential source problem.
It makes it possible to predict the localization regions of electron states without directly solving the eigenvalue problem, which would otherwise come at a considerable computational cost.
We present first the main features of the localization landscape theory introduced in <cit.>.
Following these ideas,
the landscape function u of incommensurate systems is defined as the unique solution to
(-1/2Δ + V_1 + V_2) u = 1.
In a bounded domain setting, one of the fascinating results is that the function u controls pointwise all eigenfunctions of (<ref>) via
|ψ(x)| ≤ λ u(x) ‖ψ‖_L^∞, ∀ x∈Ω,
where Ω⊂ℝ^d is an open, bounded domain. Equivalently,
the eigenfunction ψ can only localize in a bounded domain <cit.>
{x: u(x)≥ 1/λ}⊂Ω.
By decomposing an eigenstate ψ of H as
ψ = u φ, we see that the auxiliary function φ obeys a Schrödinger-type eigenvalue problem
-1/u^2 ∂/∂ x(u^2 ∂φ/∂ x) + 1/u φ = λ φ,
in which the original potential V_1+V_2 in (<ref>) has disappeared.
Although the Laplacian operator is replaced by a slightly more complicated elliptic operator, the new function
V_eff(x) := u(x)^-1
plays the role of an effective potential and presents influential features. Numerical experiments <cit.> have suggested that the smallest local minima of V_eff correspond precisely to the locations where the first few eigenfunctions localize. Moreover, the energy of the local fundamental state inside each well was found to be almost proportional to the value of the effective potential at its minimum inside the well <cit.>.
Rather than predicting localization through (<ref>), the effective potential (<ref>) incorporates abundant information, a superiority that has been theoretically and numerically substantiated in <cit.>.
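For illustration, the landscape equation and the resulting effective potential can be computed with a few lines of code; the sketch below uses a simple periodic finite-difference discretization on a finite 1D interval rather than the plane wave scheme developed in the next section, and the two-cosine potential, domain size and grid resolution are placeholder choices.

```python
import numpy as np

def effective_potential_1d(V, x_max, n=1024):
    """Solve (-1/2 u'' + V u) = 1 on [0, x_max) with periodic boundary conditions
    and return the grid, the landscape function u, and V_eff = 1/u."""
    x = np.linspace(0.0, x_max, n, endpoint=False)
    h = x[1] - x[0]
    # Second-order periodic finite-difference Laplacian.
    lap = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
           + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / h ** 2
    A = -0.5 * lap + np.diag(V(x))
    u = np.linalg.solve(A, np.ones(n))
    return x, u, 1.0 / u

# Placeholder positive potential with two incommensurate periods, 2 and sqrt(5)-1.
V = lambda x: 6.0 + 3.0 * np.cos(2 * np.pi * x / 2.0) + 2.0 * np.cos(2 * np.pi * x / (np.sqrt(5.0) - 1.0))
x, u, v_eff = effective_potential_1d(V, x_max=30.0)
print("minimum of V_eff:", v_eff.min())
```

The local minima of the sampled V_eff can then be compared directly with the localization regions discussed above.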
Further, the effective potential serves as an approximation for the spectral distribution. In particular, by replacing the original potential with the effective potential, an approximation of the IDoS can be achieved. As outlined in the works of <cit.>, the IDoS, denoted as N_ eff(E), can be represented as:
N_eff(E) := (2π)^-d vol{(x, ξ)∈Ω×ℝ^d | V_eff(x) + |ξ|^2 ≤ E }.
The variant, demonstrated through experimental results, exhibits high precision across the energy spectrum. Notably, it maintains remarkable accuracy even at lower energy levels, an accomplishment that is not typically realized with standard Weyl's law (<ref>).
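Since, for fixed x, the ξ-region in the definition above is a d-dimensional ball of radius √(E−V_eff(x)), N_eff(E) reduces to a real-space integral of (E−V_eff(x))_+^(d/2). A minimal numerical sketch of this evaluation (assuming V_eff has already been sampled on a uniform grid covering a domain of known volume) is:

```python
import numpy as np
from math import gamma, pi

def integrated_dos_weyl(v_eff_samples, domain_volume, E, d=1):
    """N_eff(E) = (2*pi)^{-d} vol{(x, xi) : v_eff(x) + |xi|^2 <= E},
    estimated from uniform-grid samples of the effective potential."""
    unit_ball = pi ** (d / 2) / gamma(d / 2 + 1)            # volume of the unit d-ball
    excess = np.clip(E - np.asarray(v_eff_samples), 0.0, None)
    return unit_ball * domain_volume * np.mean(excess ** (d / 2)) / (2 * pi) ** d

# Feeding samples of the original potential V instead of V_eff gives the standard Weyl estimate N_V(E).
```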
§ PLANE WAVE DISCRETIZATIONS
To present the numerical schemes for studying the localization of incommensurate systems, we first construct a discrete Hamiltonian by the plane wave basis. Let us reiterate the plane wave framework of incommensurate systems proposed in <cit.> (c.f. also earlier work on quasicrystals <cit.>).
Let W, L >0; we define the following truncation 𝒢_W,L for the plane wave vectors in ℛ_1^* ×ℛ_2^*, introduced in <cit.>:
𝒢_W,L := {(G_1,G_2) ∈ℛ_1^* ×ℛ_2^* : |G_1+G_2|≤ W , |G_1-G_2|≤ L }.
We consider the corresponding plane waves {e^i(G_1+G_2)· x}_(G_1, G_2)∈𝒢_W,L as basis functions, and they satisfy
the following orthonormal condition:
lim_R→∞1/|B_R|∫_B_R e^-i(G_1+G_2)x e^i(G_1^'+G_2^')x dx = δ_G_1G_1^'δ_G_2G_2^' ∀ (G_1, G_2), (G_1^', G_2^')
∈𝒢_W,L,
where B_R⊂ℝ^d is the ball centered at the origin with radius R. With the plane wave discretization, we obtain the following discrete Hamiltonian of (<ref>):
H^W,L_(G_1,G_2),(G_1^',G_2^') = 1/2|G_1+G_2|^2δ_G_1G_1^'δ_G_2G_2^' + V̂_1,G_1-G_1^'δ_G_2G_2^' + V̂_2,G_2-G_2^'δ_G_1G_1^'
for (G_1,G_2), (G_1^',G_2^') ∈𝒢_W,L.
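As a concrete illustration, the truncated basis and the discrete Hamiltonian can be assembled as follows for a 1D bilayer with Gaussian Fourier coefficients V̂_j,G = s_j e^-γ|G|^2 (the setting of Example 1 below); the dense, loop-based construction and the small truncation radii are illustrative simplifications and not the implementation used for the experiments reported here.

```python
import numpy as np

def truncated_basis(L1, L2, W, L):
    """Enumerate pairs (G1, G2) with |G1+G2| <= W and |G1-G2| <= L for the 1D
    reciprocal lattices 2*pi*Z/L1 and 2*pi*Z/L2."""
    b1, b2 = 2 * np.pi / L1, 2 * np.pi / L2
    n_max, m_max = int((W + L) / (2 * b1)) + 1, int((W + L) / (2 * b2)) + 1
    return [(n * b1, m * b2)
            for n in range(-n_max, n_max + 1)
            for m in range(-m_max, m_max + 1)
            if abs(n * b1 + m * b2) <= W and abs(n * b1 - m * b2) <= L]

def assemble_hamiltonian(basis, s1, s2, gamma):
    """H[(G1,G2),(G1',G2')] = 0.5*|G1+G2|^2*delta + V1_hat(G1-G1')*delta_{G2,G2'} + V2_hat(G2-G2')*delta_{G1,G1'}."""
    v_hat = lambda s, G: s * np.exp(-gamma * G ** 2)
    N = len(basis)
    H = np.zeros((N, N))
    for a, (G1, G2) in enumerate(basis):
        H[a, a] += 0.5 * (G1 + G2) ** 2
        for b, (G1p, G2p) in enumerate(basis):
            if np.isclose(G2, G2p):
                H[a, b] += v_hat(s1, G1 - G1p)
            if np.isclose(G1, G1p):
                H[a, b] += v_hat(s2, G2 - G2p)
    return H

basis = truncated_basis(L1=2.0, L2=np.sqrt(5.0) - 1.0, W=20.0, L=60.0)  # far smaller W, L than in the numerical study
H = assemble_hamiltonian(basis, s1=3.0, s2=2.0, gamma=0.05)
```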
Following that, we provide the process to predict the localization of the incommensurate system under plane wave discretizations.
With the plane wave basis functions, we can approximate the solution u in (<ref>) by
u(x) = ∑_(G_1,G_2)∈𝒢_W,L û_G_1,G_2 e^i(G_1+G_2)· x.
Denoting by
U:={û_G_1,G_2}_(G_1,G_2)∈𝒢_W,L, we can derive the linear system using the standard Galerkin projection and the orthonormality condition,
H^W,L U=I_0,
where H^W,L∈ℂ^|𝒢_W,L|× |𝒢_W,L| and I_0:={δ_G_1,0δ_G_2,0}_(G_1,G_2)∈𝒢_W,L.
Starting from the approximate solution u in (<ref>), we can directly compute the absolute value of the effective potential pointwise,
|V_eff(x)| = |u(x)^-1|, x ∈ℝ^d.
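Given the assembled matrix and basis (e.g., from the sketch above), one linear solve yields the landscape coefficients, after which u(x) and |V_eff(x)| can be evaluated on any real-space grid of interest; the grid in the commented usage line is a placeholder.

```python
import numpy as np

def effective_potential_from_planewaves(H, basis, x_grid):
    """Solve H U = I_0, where I_0 has a single 1 at the (G1,G2) = (0,0) entry
    (the plane wave coefficients of the constant right-hand side 1), then evaluate
    u(x) = sum u_hat e^{i(G1+G2)x} and |V_eff(x)| = |1/u(x)|."""
    rhs = np.zeros(len(basis))
    zero_idx = next(i for i, (G1, G2) in enumerate(basis) if abs(G1) < 1e-12 and abs(G2) < 1e-12)
    rhs[zero_idx] = 1.0
    U = np.linalg.solve(H, rhs)
    g_sums = np.array([G1 + G2 for G1, G2 in basis])
    u = (np.exp(1j * np.outer(np.asarray(x_grid, dtype=float), g_sums)) @ U).real  # real up to round-off here
    return u, np.abs(1.0 / u)

# x = np.linspace(0.0, 30.0, 3000)
# u_x, v_eff_abs = effective_potential_from_planewaves(H, basis, x)
```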
In a standard procedure of representing the spectral distribution and electron density under the plane wave discretizations, we first solve the matrix eigenvalue problem
H^W,LΦ_j = λ_j Φ_j, j = 1, ⋯, N_W,L.
By the definition of the integrated density of states, the eigenvalue counting function can be stated as
N(E) = #{λ_j≤ E, j= 1, …, N_W,L},
which represents the count of all eigenvalues less than or equal to some E.
Advancing further with the eigenvalue λ_j and eigenvector Φ_j={ϕ_j,G_1,G_2}_(G_1,G_2)∈𝒢_W,L, we can express the discretized electron density immediately as:
ρ^W,L(x) = 1/S_d,L∑_j=1^N_W,L f_μ,β(λ_j)|ψ_j(x)|^2,
where S_d,L is a normalized constant related to the dimension d and parameter L (see <cit.> for more details), and ψ_j(x) is approximated by the plane wave basis function,
ψ_j(x) = ∑_(G_1,G_2)∈𝒢_W,Lϕ_j,G_1,G_2 e^i(G_1+G_2)· x for x ∈ℝ^d.
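A direct transcription of the two formulas above into code (reusing H and basis from the earlier sketches; the normalization constant S_d,L is left as an input, and the chemical potential in the commented usage line is a placeholder) might read:

```python
import numpy as np

def electron_density(H, basis, x_grid, mu, beta, norm_const=1.0):
    """rho(x) = (1/S) * sum_j f_{mu,beta}(lambda_j) |psi_j(x)|^2 with the
    Fermi-Dirac weight f_{mu,beta}(e) = 1 / (1 + exp(beta * (e - mu)))."""
    evals, evecs = np.linalg.eigh(H)
    weights = 1.0 / (1.0 + np.exp(np.clip(beta * (evals - mu), -700.0, 700.0)))
    g_sums = np.array([G1 + G2 for G1, G2 in basis])
    psi = np.exp(1j * np.outer(np.asarray(x_grid, dtype=float), g_sums)) @ evecs   # psi[:, j] = psi_j on the grid
    return (np.abs(psi) ** 2 @ weights) / norm_const

# rho = electron_density(H, basis, x, mu=2.0, beta=100.0)   # beta as in the experiments below; mu is a placeholder
```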
To predict the structure of the electron density, especially for identifying its localized domains, we first select a domain in real space and identify a series of local minima of |V_eff(x)|. The locations of these local minima will serve as our prediction of where the electron density localizes (as detailed in Section <ref>). The electron density not only circumvents the inaccuracy in predicting the order of localization positions for eigenstates when eigenvalues are close (as mentioned in <cit.>), but also serves as a more fitting physical observable for studying localization in such systems.
Unlike the eigenvalue problems and related partial differential equations in <cit.> and references therein, we do not first restrict our problem to a bounded domain equipped with a specific boundary condition, but directly discretize it with the plane wave basis functions.
The advantage is that we can then study localization on any real-space domain of interest.
We mention that this practice is well justified, since these eigenvalue problems of incommensurate systems can be interpreted in a higher dimensional space with periodic boundary conditions (see <cit.>).
§ NUMERICAL STUDY
In this section, we will report some numerical experiments on the linear Schrödinger operator for the incommensurate systems. Particularly, we compute the effective potentials under plane wave discretizations, analyze the localization of the electron density, and compute the spectral distribution.
In order to assess the accuracy of the prediction approaches based on effective potential, the eigenvalues and eigenfunctions are computed by discretized eigenvalue problem (<ref>).
Example 1.
(1D incommensurate system)
Considering the following eigenvalue problem:
(-1/2Δ + V_1(x) + V_2(x) ) ψ(x) = λψ(x), x ∈ℝ,
where
V_1(x)= s_1∑_G_1∈ℛ^*_1 e^-γ|G_1|^2 e^iG_1· x and
V_2(x)= s_2∑_G_2∈ℛ^*_2 e^-γ |G_2|^2 e^iG_2 · x
are incommensurate potentials with s_1=3, s_2=2, L_1=2, L_2=√(5)-1, and γ = 0.05.
We tackle the linear systems as given in (<ref>) and compute the effective potential using (<ref>). The blue line in Fig. <ref> depicts the effective potential |V_ eff(x)|. To contrast this with the original potential, we also display the incommensurate potential V_1(x) + V_2(x) using a yellow line in the same figure. From this visualization, it is evident that both the effective potential and the original potentials capture the majority of the local minimum points. However, the effective potential, exhibiting a smoother profile than V_1(x) + V_2(x), is able to refine a few local minima points.
Further, we solve the eigenvalue problem as depicted in equation (<ref>) and proceed with computing the electronic density by (<ref>). Specifically, our study revolves around the Fermi-Dirac function (see Section <ref>) under the influence of varied parameters μ with an appropriate β. The parameter β is associated with the inverse temperature of the system. In contrast,
the parameter μ is typically referred to as the chemical potential within the Fermi-Dirac distribution function. It is inherently associated with the system's electron density, effectively representing the number of electrons per unit volume.
This distribution outlines the probability of a state with a specified energy being occupied by an electron.
Consequently, adjusting the chemical potential allows for shifting the energy levels where electrons are most likely to be found, thus controlling the number of electrons within a specific energy range.
We compute ρ^W,L(x) by taking different parameters μ with W = 50, L = 1000, β = 100.
The resultant electronic density is graphically represented in Fig. <ref> and Fig. <ref>, where it is evident that the electron density is localized within a certain region.
Furthermore, the local maxima of the electron density versus the local minima of the effective potential are shown in Figs. <ref> and <ref>, in which the numbers label the sequences of the minima and maxima. This comparison reveals that the regions of electron density localization coincide with the positions of the local minima of the effective potential except for a few points.
The positions of the first 18 local minima of the effective potential accurately correspond to the regions of electron density localization.
By comparing the electron density with the positions of local maxima and minima (Fig. <ref> — Fig. <ref>), we can infer that the localization of the electron density is more pronounced at the positions of smaller local minima.
On the other hand, to assess the prediction of the spectral distribution, we employ different truncation parameters. Given the unique roles that parameters L and W play in modulating the spectral distribution (see <cit.>), we factor this into our calculation by multiplying the prediction result by a constant related to L. Intriguingly, as can be observed in Figs. <ref> and <ref>, we find that the variant of Weyl's law, when amplified by a certain constant c_L, closely aligns with the eigenvalue counting function. More specifically, the constant c_L = 0.0185 L used in the numerical experiments is determined through testing with various parameters L and W. This adjustment enhances the precision of the approximation, taking into account the specific influences exerted by the size of the system. However, standard Weyl's law cannot yield an accurate prediction for the incommensurate systems.
Example 2 (2D incommensurate system)
Consider a two-dimensional incommensurate system that is created by overlaying two periodic lattices, where one layer is rotated by an angle θ=5^∘ relative to the other.
More precisely, we take ℛ_1= A_1ℤ^2 with
A_1=2[ 1/2  1/2 ; -√(3)/2  √(3)/2 ].
The structure of such twisted bi-layered systems is illustrated in Fig. <ref>. The potentials are characterized in the same manner as in (<ref>) with s_1 = 3, s_2 = 2, γ =0.05.
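A short sketch of how this twisted geometry can be generated numerically (assuming, as in this example, that the second layer's lattice matrix is the first one rotated by θ, with reciprocal lattices 2π A_j^-T as defined earlier):

```python
import numpy as np

theta = np.deg2rad(5.0)                                     # interlayer twist angle
A1 = 2.0 * np.array([[0.5, 0.5],
                     [-np.sqrt(3) / 2, np.sqrt(3) / 2]])    # first-layer lattice matrix
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A2 = R @ A1                                                 # second layer: the first layer rotated by theta

B1 = 2 * np.pi * np.linalg.inv(A1).T                        # reciprocal lattice matrices, 2*pi*A_j^{-T}
B2 = 2 * np.pi * np.linalg.inv(A2).T
```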
We first plot the effective potential within a range of lower values.
As depicted in Fig. <ref> and Fig. <ref>, there is a remarkable correspondence between the minima of the effective potential and the positions of electron density localization. Expanding our examination to a broader range of the effective potential and increasing the chemical potential μ, we observe intriguingly that the electron density tends to localize in the valleys of the effective potential. In other words, regions with larger values of electron density coincide perfectly with those where the effective potential values are smaller.
Similar to the one-dimensional case, we solve the eigenvalue problem (<ref>) by selecting different parameters L and W. The eigenvalue counting function is then compared with the variant of Weyl's law. The numerical results in Fig. <ref> and Fig. <ref> demonstrate that the two are nearly consistent with each other up to a constant factor related to L. Specifically, this constant factor is c_L = 0.0072L^2.
Combining this with the one-dimensional result, we can infer the dependence of this constant on the size of the reciprocal space and the dimensionality of the system.
The theoretical basis for this constant will be the subject of our subsequent research.
§ CONCLUSION
In this paper, we study the localization in the incommensurate systems under the plane wave discretizations with the aid of the effective potential. Our study uniquely ventures into exploring these problems from the perspective of the entire real space.
With the plane wave representation of the effective potential, we numerically show that the electron density
localizes in regions where the effective potential exhibits local minima, in good agreement with direct eigenpair calculations and with physical intuition.
We further provide a robust prediction for the spectral distribution from the variant of Weyl's law utilizing the effective potentials.
Our future work will involve applications to more practical incommensurate systems and the study of deeper connections between the effective potentials and other physical quantities of interest.
Acknowledgements.
The authors gratefully thank Prof. Huajie Chen and Dr. Daniel Massatt for their insightful discussions. This work was supported by the National Key R & D Program of China under grants 2019YFA0709600 and 2019YFA0709601. Y. Zhou’s work was also partially supported by the National Natural Science Foundation of China under grant 12004047.
|
http://arxiv.org/abs/2307.03902v1 | 20230708044951 | Feature selection simultaneously preserving both class and cluster structures | [
"Suchismita Das",
"Nikhil R. Pal"
] | cs.LG | [
"cs.LG"
] |
Corresponding author
Electronics and Communication Sciences Unit, Indian Statistical Institute, 203 B T Road, Kolkata-700108
[email protected],[email protected]
When a data set has significant differences in its class and cluster structure, selecting features aiming only at the discrimination of classes would lead to poor clustering performance, and similarly, feature selection aiming only at preserving cluster structures would lead to poor classification performance. To the best of our knowledge, a feature selection method that simultaneously considers class discrimination and cluster structure preservation is not available in the literature. In this paper, we have tried to bridge this gap by proposing a neural network-based feature selection method that focuses both on class discrimination and structure preservation in an integrated manner. In addition to assessing typical classification problems, we have investigated its effectiveness on band selection in hyperspectral images. Based on the results of the experiments, we may claim that the proposed feature/band selection can select a subset of features that is good for both classification and clustering.
Feature selection, Structure preserving, Classification, Neural network, Sammon's Stress, Band selection, Hyperspectral Image.
§ INTRODUCTION
Feature selection methods can be broadly classified on the basis of the utilization of the class label information. There are three categories: supervised, semi-supervised and unsupervised <cit.>. Supervised feature selection methods exploit the label information to find the relevant features which distinguish samples of different classes <cit.>. Semi-supervised feature selection is used when some labeled samples along with plenty of unlabelled samples are present <cit.>. Both labeled and unlabelled data are used to modify a hypothesis obtained from the labeled data <cit.>. Unsupervised feature selection is much more difficult as it needs to find the useful features in the absence of the label information <cit.>. Different criteria have been chosen to select a subset of the original features in different unsupervised feature selection studies. Some of them are: preserving the data distribution such as the manifold structure <cit.>, preserving the cluster structure <cit.>, and preserving data similarity <cit.>. It is noteworthy that in the case of unsupervised feature selection, some methods try to preserve the “structure" or “geometry" of the data in some sense. Contrarily, supervised feature selection methods in most cases do not set any explicit criteria to preserve the structure of the data. They only pay heed to separating the classes as much as possible with different measures exploiting class information such as the Fisher score <cit.>, Laplacian score <cit.>, mutual information <cit.>, normalized mutual information <cit.>, ReliefF <cit.>, class correlation <cit.>, and classifier score <cit.>. We should note here that the feature selection criterion is not always led by a single objective. Feature selection methods often follow a criterion that consists of two or more objectives. The study in <cit.> proposes a criterion named `maximum projection and minimum redundancy' which is governed by two goals: projecting data into a feature subspace with minimum reconstruction error and minimum redundancy. The studies in <cit.> claim that both global structure and local structure should be preserved in the projected space, as both of them may carry important discriminating information, and hence they have proposed feature selection schemes that focus on both global and local structure preservation. The investigation in <cit.> claims to preserve dual global structures. Going through various feature selection schemes with multiple objectives, we found that whenever class labels are available, no work in feature selection explicitly focuses on preserving structural information along with class information, although both of these carry important discriminative information and may have a positive impact on the generalization ability of the classifier. Suppose, for a data set, the class and cluster structures are substantially different. Exploiting only the class labels, it may not be possible to keep the cluster structures in the projected space. For a practical system, even when the primary task is classification, we may need to cluster the samples in the space defined by the selected features. For example, fuzzy rule based classifiers are often designed by clustering the training data for each class and translating each cluster into a rule <cit.>. We could not find any feature selection method that focuses both on class and cluster separability. To bridge this gap, in this study we propose a feature selection method that selects features preserving class and cluster-structure information simultaneously.
We employ a multi-layer perceptron (MLP) based neural network to develop an embedded feature selection scheme. The training of the proposed MLP based feature selection method is governed by both class discriminating and cluster (structure) preserving objectives. The philosophy is quite general and can be easily extended to other networks such as radial basis function network.
§ PROPOSED METHOD
Let us denote the input data by an n× P matrix, 𝐗={𝐱_i∈ℝ^P}_i=1^n. Here, 𝐱_i is a P-dimensional row vector of the form 𝐱_i=(x_i1,x_i2,⋯,x_iP). Let the collection of class labels of 𝐗 be 𝐙={z_i∈{1,2,⋯, C}}_i=1^n, where z_i is the class label corresponding to 𝐱_i. We aim to select a subset of size Q from the original set of features such that the selected subset performs reasonably well in terms of the classification task as well as in clustering. In other words, if we design a classifier using the selected features, the performance of the classifier should be comparable to that of a classifier designed using all features. Similarly, if we cluster the data in the reduced dimension as well as in the original dimension, we expect to get similar partition matrices. Here, we propose a neural network-based framework to select features. Neural networks have been explored for feature selection <cit.> as well as for classification <cit.>. However, in our proposed model the neural network simultaneously selects features and learns a classifier, as we follow an embedded method for feature selection. Moreover, our proposed network preserves structural information and the class label information simultaneously, whereas the feature selection networks in <cit.>, which solve classification problems, consider class label information in their loss function but not any structural information. Note that the work in <cit.> considers a system identification problem. To build the neural network-based embedded feature selector, we employ the multi-layer perceptron (MLP) based framework used in <cit.>. The basic framework is shown in Fig. <ref>.
As seen in Figure <ref>, preceding the input layer of the MLP, there is a layer consisting of P nodes. Before entering the input layer of the MLP, the jth feature passes through the node f_j(). These nodes act as attenuating gates that effectively allow or block features from contributing to the output of the neural network. For the ith instance, its jth feature x_ij on passing the gate node f_j() becomes a_jx_ij; i.e., f_j(x_ij)=a_jx_ij. In the MLP, a weighted sum of the values available at the input nodes is applied to the hidden nodes of the first hidden layer. A zero value at an input node implies that the corresponding feature is not considered. When training of the MLP-based framework is complete, the a_js for the selected features become close to 1, effectively allowing them to contribute to the classifier, whereas for poor or rejected features the a_js become close to 0, effectively preventing them from contributing to the classifier. In <cit.>, this framework was explored for classification-oriented feature selection, group feature selection, and redundancy-controlled feature selection. Here, we explore this framework for simultaneous structure-preserving and class-discriminating feature selection.
We denote the P nodes before the input layer of the MLP as f_j()s for j=1,2,… P where f_j() is a gate or modulator function applied on the jth feature, x_j. Now, we have to design f_j() in such a way,
f_j(x_j)=a_jx_j=x_j if x_j is a useful feature.
0 otherwise.
In our framework, the factor, a_j is learnable. We implement a_j as a smooth continuous function, a_j=exp(-λ_j^2). Clearly, when λ_j=0, the value of exp(-λ_j^2)= 1 and when λ_j→±∞, the value of exp(-λ_j^2)=0. By adding suitable regularizer terms to the objective function we design our learning system in such a way that, over the learning process, the gate parameters, λ_js for useful features drop close to zero and that for derogatory or indifferent features rise to high values. So, in our learning system, the learnable parameters, λ_js and the neural network weights are learned together, i.e., the loss function is minimized with respect to both λ_js and the neural network weights.
Now, we have to define a suitable loss function for selecting features along with learning the embedded classifier. Our aim is to select features that are reasonably good for classification as well as clustering. To satisfy this requirement, we take the loss function as a combination of two losses E_class and E_struct. E_class is considered for preserving class information and E_struct is considered for preserving structural information. At this moment let us consider the network for selecting features for efficient classification only. A suitable loss function to impose class discrimination is the cross-entropy loss <cit.>. We define, E_class as the cross-entropy loss involving actual and predicted class labels.
E_class=-1n∑_i=1^n∑_k=1^Ct^i_klog(p_k(𝐱_i) )
Here, t^i_k is kth element of the one-hot encoded label of the sample 𝐱_i or in other words t^i_k is the kth element of the vector 𝐭^i∈{ 0,1}^C such that
t_k^i=
1 if k=z_i
0 otherwise
In (<ref>), p_k(𝐱_i) is the predicted probability (by the MLP classifier) of 𝐱_i being in kth class. As already discussed above, for effective feature selection, the magnitude of λ_js for the selected features should drop to almost zero and for rejected features should rise to high values. To ensure this condition we add the following regularizer.
E_select = 1/P∑_j=1^P a_j(1-a_j)
= 1/P∑_j=1^Pexp(-λ_j^2)(1-exp(-λ_j^2))
In a feature selection framework, a constraint for selecting a fixed number of features is necessary. The following regularizer tries to keep the number of the selected features close or equal to Q.
E_Q=1/Q^2{(∑_j=1^Pa_j)-Q}^2=1/Q^2{(∑_j=1^Pexp(-λ_j^2))-Q}^2
So, the overall loss function for the selection of features with our framework for classification purposes is the following.
E= E_class + α_1E_select + α_2E_Q
Here, α_1≥ 0,α_2≥ 0 are scalar multipliers for adjusting the importance of E_select and E_Q in the overall error function E.
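In terms of the gate values a_j = exp(-λ_j^2), the two regularizers and the classification-only loss E can be written compactly; the sketch below assumes the FeatureGate module from the previous snippet and uses a standard library cross-entropy in place of Equation (<ref>).

```python
import torch
import torch.nn.functional as F

def selection_regularizers(gates, Q):
    """E_select pushes every gate towards 0 or 1; E_Q keeps the number of open gates close to Q."""
    e_select = torch.mean(gates * (1.0 - gates))
    e_q = (gates.sum() - Q) ** 2 / Q ** 2
    return e_select, e_q

def classification_loss(logits, labels, gates, Q, alpha1=1.0, alpha2=1.0):
    """E = E_class + alpha1 * E_select + alpha2 * E_Q."""
    e_select, e_q = selection_regularizers(gates, Q)
    return F.cross_entropy(logits, labels) + alpha1 * e_select + alpha2 * e_q
```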
Now let us focus on our original agenda of selecting features that perform satisfactorily both for classification and clustering. To preserve structural information of the data in the lower dimensional space formed by the selected Q features, we consider the Sammon's stress <cit.> as a loss function. The Sammon's stress is the loss function for a non-linear mapping named Sammon's mapping that is able to capture complex non-linear structures in data, as a result, also preserves cluster structure. The lower the value of Sammon's stress, the better the lower dimensional representations in capturing the original inter-point distances or structures of the original data. We can define Sammon's stress involving the original input space and selected feature space as the following.
E_sammons=1/(∑_i,l=1^n d_il)∑_i=1^n-1∑_l=i+1^n( d_il^𝐗- d_il^𝐗̂)^2/d_il^𝐗
d_il^𝐗 is the distance between 𝐱_i and 𝐱_l. 𝐗̂={𝐱̂_i=(a_1x_i1,a_2x_i2,⋯,a_Px_iP)^T∈ℝ^P}_i=1^n. So, d_il^𝐗̂ is the distance between 𝐱̂_i and 𝐱̂_l. As discussed earlier, at the end of the training of our embedded system, a_js will be close to 0 or 1 depending on whether the corresponding features are rejected or selected. Therefore, for a trained system d_il^𝐗̂ would signify the distance between the ith and lth instances in the latent space formed by the implicitly selected Q features. So considering E_sammons in Equation (<ref>) as a regularizer, the resultant overall loss function is given by.
E_tot=E_class+ β E_sammons + α_1 E_select + α_2 E_Q
β≥ 0 is a scalar multiplier that controls the trade-off between the class information and the structural information in the feature selection process.
Note that, the computational complexity for the loss function in Equation (<ref>) is O(n^2). For large n, computing Equation (<ref>) and hence Equation (<ref>) is intensive. As the weight update at each iteration will involve computing Equation (<ref>), the overall computation cost would be high. For small and moderate n, we use Equation (<ref>) as the loss function to be minimized. However, for large n to avoid the high computational cost we modify Equation (<ref>) as follows.
E_struct= 1/(∑_𝐱_i,𝐱_l∈ S_t d_il)∑_𝐱_i∈ S_t∑_𝐱_l∈ S_t; 𝐱_l≠𝐱_i( d_il^𝐗- d_il^𝐗̂)^2/d_il^𝐗
Here S_t is a randomly selected subset of 𝐗 at the tth iteration. Different S_ts are chosen at different iterations and hence different sets of inter-point distances are preserved. Since the considered MLP is trained over a large number of iterations, the use of Equation (<ref>) is expected to result in almost the same effect as that by Equation (<ref>). We have to choose |S_t| such that Equation (<ref>) is computationally manageable and at the same time it should be large enough to make E_struct an effective substitute of E_sammons. Adding Equation (<ref>) to Equation (<ref>) we propose the following loss function for our system.
E_tot=E_class+ β E_struct + α_1 E_select + α_2 E_Q
E_tot is minimized with respect to the gate parameters λ_js and the weights of the network to find their optimal values.
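The structure-preserving term evaluated on a random subset S_t per iteration, together with the total loss E_tot, can be sketched as follows (continuing the PyTorch snippets above; the subset size, like β, α_1 and α_2, is left to the user):

```python
import torch

def structure_loss(x, gates, subset_size=None):
    """E_struct: Sammon-type stress between original pairwise distances and distances
    after gating, computed on a randomly drawn subset S_t of the current batch."""
    if subset_size is not None and subset_size < x.shape[0]:
        x = x[torch.randperm(x.shape[0])[:subset_size]]
    i, l = torch.triu_indices(x.shape[0], x.shape[0], offset=1)
    d_orig = torch.cdist(x, x)[i, l].clamp(min=1e-12)        # guard against zero distances
    d_gated = torch.cdist(x * gates, x * gates)[i, l]
    return ((d_orig - d_gated) ** 2 / d_orig).sum() / d_orig.sum()

def total_loss(logits, labels, x, gates, Q, beta=1.0, alpha1=1.0, alpha2=1.0, subset_size=None):
    """E_tot = E_class + beta * E_struct + alpha1 * E_select + alpha2 * E_Q."""
    return (classification_loss(logits, labels, gates, Q, alpha1, alpha2)
            + beta * structure_loss(x, gates, subset_size))
```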
§ EXPERIMENTATION AND RESULTS
The feature selection framework proposed in this paper is generic, but it can also be adapted to solve specialized problems. We have studied the proposed framework on general datasets as well as for solving a special problem: band selection in hyperspectral images. We present the results of band selection for HSIs in a different subsection, Subsec. <ref>. We present the results of feature selection for the conventional classification problem in the following subsection (Subsec. <ref>).
§.§ Feature selection for conventional classification problems
We have used five publicly available datasets that are very commonly used for classification and clustering. The first four datasets are downloaded from UCI machine learning repository <cit.>. AR10P is downloaded from the open-source feature selection repository of Arizona State University<cit.>. We have also performed the experiments with three benchmark HSI datasets for land cover classification problems. We discuss them in a separate subsection (Subsec. <ref>).
The details of the number of features, number of classes, and number of instances for the five datasets are summarized in Table <ref>.
The datasets are used directly without any further processing. The datasets are partitioned into training and test sets as approximately 90% and 10% of the total number of instances. To implement our proposed feature selection scheme we use the neural network shown in Fig. <ref> with the number of hidden layers, n_H = 1. The input and output layers have P and C nodes respectively, where P is the number of features and C is the number of classes corresponding to the considered dataset. The number of hidden nodes in the hidden layer is 8 (20 for AR10P data set). To get stable feature selection results, the network weights are initialized in a certain way. To set the initial weights of the proposed network, we undergo the following steps. First, we consider the usual MLP part of our network (i.e. without feature selection), depicted by the portion within the dotted rectangle in Fig. <ref>, and initialize its weights randomly. Next, we train the usual MLP with the cross-entropy loss defined in Equation (<ref>) with the training set until convergence. The weights of the converged network are used as the initial weights of the proposed network. The gate parameter λ_js are initialized with values drawn randomly from a normal distribution with mean =2 and spread =1/√(P). The initial values of λ_js are chosen around 2 to effectively make the gates almost closed initially. As the learning progresses the λ_js are updated in a way to allow the useful features to the network. For the proposed system, to select a subset of Q features, the gate parameters λ_js are sorted in ascending order, and the Q features corresponding to the top Q, λ_js are selected. The network weights as well as the gate parameters λ_js are learned using the adaptive gradient algorithm, `train.AdagradOptimizer' routine of the `TensorFlow' framework <cit.>. For all experiments with the data sets in Table <ref>, both α_1 and α_2 of the error functions in Equations (<ref>) and (<ref>) are set as 1. The total number of iterations for training the network is set to 20000. The five datasets we consider here, have the number of instances n<400, which is not so large. Therefore, we use (<ref>) as the overall loss function to train the MLP based architecture for selecting features that are reasonably good for clustering and classification. When β=0 in (<ref>), effectively, the error function that governs the learning of our MLP based embedded feature selection scheme is (<ref>). The corresponding feature selection scheme now only considers classification. Let us name this method as feature selection with MLP (FSMLP). When β≠ 0 in (<ref>), our method takes structure preservation into account along with classification. Let us name the corresponding method as FSMLPstruct. To understand the importance of adding the structure preserving regularizer (<ref>), we perform feature selection with FSMLP and compare with FSMLPstruct having different β values. We explore three values of βs 0.1, 1, and 10. Although the exact value of the β that is optimum for a particular dataset for a particular number of selected features Q cannot be decided from these three values, we investigate the effect of three widely different βs to see the role of the weight to the structure preserving regularizer, i.e. β on the performance of the selected features. We compare with three other methods namely, Independent Component Analysis (ICA)-based feature selection <cit.>, F-score based filter method <cit.>, and mutual information based filter method <cit.>. 
The performance of both FSMLP and FSMLPstruct is dependent on the initial weights of the network. So, we repeat the initialization of the network weights and gate parameters λ_js five times and run the schemes- FSMLP or FSMLPstruct five times with the five initializations. For the performance measure of FSMLP and FSMLPstruct, we consider the average performance over the five subsets obtained from the five runs. To check the effectiveness of the methods in selecting features that perform well in classification and clustering simultaneously, we compute the classification scores of the support vector machine (SVM) classifier as well as several structure-preserving indices: Sammon's stress (SS) <cit.>, normalized mutual information (NMI) <cit.>, adjusted rand index (ARI) <cit.>, and Jaccard Index (JI) <cit.>. As the measure of classification performance, we use the overall classification accuracy (OCA) of the SVM classifier. The optimal hyper-parameters of SVM are determined through five-fold cross-validation using grid search. Note that here the test set is not only unseen to the SVM classifier but unseen to the feature selection methods also. SS, defined in Equation (<ref>) use the original inter-point distances d_il^𝐗s and latent space inter-point distances d_il^𝐗̂s. Here to compute d_il^𝐗̂, we use the lower dimensional data formed by the selected Q features. We use NMI, ARI, and JI as the structure-preserving performance metrics by supplying the cluster labels obtained from clustering the data in the original space (using all features) as the true label and the cluster labels obtained from clustering the data in the reduced space formed by the selected Q features as the predicted cluster label. So, NMI, ARI, and JI measure how the cluster assignments in the original space and in the selected space agree, effectively giving a measure for the preservation of the original cluster structure in the selected space. We know that the maximum value for NMI or ARI or JI is 1. Here, the value of each of these three measures being close to 1 indicates that the cluster structure in the original space is preserved in the selected space. As the clustering algorithm we use, the fuzzy C means (FCM) algorithm <cit.> with the fuzzy exponent m=2. We set the number of clusters for FCM algorithm as the number of classes. We use two values for the number of the selected features, Q. Q=0.35 × P and Q=0.5 × P, where these values are rounded up to the nearest integers using the ceiling function.
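For reference, the structure-preservation part of this evaluation protocol can be scripted roughly as follows; k-means is used here merely as a convenient stand-in for the FCM clustering described above, JI is omitted for brevity, and all hyper-parameters are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def sammon_stress(X_full, X_selected):
    """Sammon's stress between pairwise distances in the full and reduced feature spaces."""
    d_full, d_sel = pdist(X_full), pdist(X_selected)
    return np.sum((d_full - d_sel) ** 2 / np.maximum(d_full, 1e-12)) / d_full.sum()

def structure_scores(X_full, selected_idx, n_clusters):
    """Compare clusterings obtained in the original space and in the selected-feature space."""
    X_sel = X_full[:, selected_idx]
    lab_full = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X_full)
    lab_sel = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X_sel)
    return {"SS": sammon_stress(X_full, X_sel),
            "NMI": normalized_mutual_info_score(lab_full, lab_sel),
            "ARI": adjusted_rand_score(lab_full, lab_sel)}
```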
Tables <ref> and <ref> summarize the performances of the proposed method and other comparing methods for training and test sets, respectively for the E. coli dataset. We tabulate the three previously mentioned structure preserving measures and one classifier score for two choices of the number of selected features (approximately 35% and 50% of the original dimension) i.e., Q=3, and Q=4 in Tables <ref> and <ref>.
As we have already discussed in Sec. <ref> the lesser the value of SS, the better the projected space (formed by selected features) preserves the original pairwise distances and hence the structure of the original data. We observe in Table <ref>, the mutual information based method shows the lowest value of SS, and the second lowest is FSMLPstruct with β=10 for both Q=3 and Q=4. Actually, the SS values for the mutual information based method and FSMLPstruct with β=10 are almost the same, equal up to two places after decimal points in both choices of Q. The SS values achieved by ICA, the F score based method and FSMLP are comparatively higher. So, the mutual information based method and FSMLPstruct with β=10 preserve the original pairwise distances most in the projected space. They are also expected to preserve the structures most. The values of the other three structure preserving measures i.e., NMI, ARI, and JI confirm that. We know that the higher the values of NMI, ARI, and JI are, the closer the clustering structures of the projected space are to the original clustering structures. The highest values of the NMI, ARI, and JI are obtained by the mutual information based method, followed by the FSMLPstruct with β=10. So, in cluster structure preservation, mutual information based method and FSMLPstruct with β=10 perform better than the other three methods and even than the other two models trained by FSMLPstruct with β=0.1 and 1. β is the weight of the regualizer E_sammons in (<ref>). Although SS and E_sammons are not exactly same, under the influence of E_select, it is expected that the higher the value of β lesser the value of SS would be. Table <ref> reconfirms that. The SS values become lesser as the β increases from 0.1 to 10. SS values of FSMLP (which is basically FSMLPstruct with β=0) and FSMLPstruct with β=0.1 are the same for both the choices of Q. Actually, FSMLP and FSMLPstruct with β=0.1 have all the ten measures the same. It proves that for the E.coli dataset β=0.1 does not give any effective weightage to the structure preservation term and chooses the same subsets as FSMLP. For the classification performance measure OCA, FSMLPstruct with β=1, achieves the highest value, followed by FSMLPstruct with β=10. The mutual information based method and FSMLPstruct with β=10 have all the structure preserving measures either almost equal or of comparable values, however for OCA, FSMLPstruct with β=10 is better than mutual information based method with a margin more than 18%. For E. coli data, the test set follows the observed trends in the training set with the following exceptions. First, for Q=3, the values of NMI, ARI, and JI have not increased as β increases from 0.1 to 10. Second, for Q=4, FSMLPstruct with β=10 beats all the methods including mutual information based method. Analyzing the performances over train and test sets, for E. coli data FSMLPstruct with β=10 is the winner among the other six models.
Tables <ref> and <ref> compare the performance of the proposed method with other methods in terms of different criteria for the Glass dataset on its training and test sets, respectively.
The chosen numbers of features for the Glass data are 4 and 5. The expected nature of decreasing SS with increasing β is clearly observed for Q=5 for both the training and test set. For Q=4, the Glass data also follows the characteristics of the E. coli data of having the same values for FSMLP and FSMLPstruct with β=0.1 in all the ten measures for both training and test set. For Q=4, from β=0.1 onwards, increasing βs produce decreasing SS values and increasing NMI, ARI, and JI values for both training and test datasets. We observe from the Tables <ref> and <ref>, for Q=5, as the β increases from 0 (FSMLP) to 0.1, and then to 1, NMI, ARI, and JI values are increased for both training and test datasets, however at β=10, NMI, ARI, and JI values are decreased compared to β=0.1 and 1. We can conclude that, for Q=4, FSMLPstruct with β=10 gives the best structure preserving performance among the considered models and for Q=5, FSMLPstruct with β=1 is best in structure preservation. In terms of the classification performance measure OCA, FSMLPstruct with β=10 and FSMLPstruct with β=1 show the highest OCA values for the training set and test set respectively, with Q=4. On the other hand, for Q=5, FSMLPstruct with β=10 show the highest OCA values for the training set and FSMLPstruct with β=1 show the highest OCA values for the test set. Inspecting all the performance measure values, we conclude that for the Glass dataset, both FSMLPstruct with β=10 and FSMLPstruct with β=1 are comparatively better in simultaneously preserving both class and cluster structures than the other methods.
The performances of the Ionosphere dataset are recorded in Tables <ref> and <ref> for training and test sets respectively.
For the Ionosphere data set, the number of selected features, Q is set as 12 and 17. Here, in all the cases, whenever the β is increasing, SS is decreasing and the other structure preserving indices NMI, ARI, and JI are increasing consistently. Unlike, E. coli and Glass data set, here when β increases from 0 (in FSMLP) to 0.1, the structure preserving metrics including SS shifted in the desired direction in most of the cases and remained the same in some cases. Except for SS, in the other three structure preserving measures, ICA and F score based method have performed better than FSMLP and FSMLPstruct for all the cases. Classification performance is good for almost all methods for the Ionosphere data set. In the training set, for both Q=12 and Q=17, an accuracy of 97.46% is reached by mutual info and F score based methods, however, FSMLP and FSMLPstruct models have reached more than 96% accuracy in every case. For the test set, all the structure preserving indices are better for FSMLP and FSMLPstruct than ICA, F score, and mutual information based methods, although in terms of classification score OCA, the F score and mutual information based methods have performed marginally better than FSMLP and FSMLPstruct models. This may have happened because the selected features from the neural network based classifier which are expected to be discriminatory features, may not be the best for SVM. Moreover, FSMLPstruct makes a compromise between preserving cluster structure and classifier loss. For the Ionosphere dataset, our proposed models are not the winner. May be with higher β, FSMLPstruct would deliver better scores.
For the Sonar data, the summary of the performances of the training and test data sets in terms of the five measures for two choices of the number of selected features are available in Tables <ref> and <ref>.
We set, Q=21 and 30 for the Sonar data set. In the case of the Sonar data set, not only with increasing β, all the structure preserving indices improve, in case of the training set, FSMLPstruct with β=10 are significantly better than ICA, F score, and mutual information based methods, and FSMLP in all five scores for both the choices of Q. In test set for some cases, FSMLPstruct with β=1 is better than FSMLPstruct with β=10. For the Sonar data set, clearly, the proposed method performed extremely well in terms of classification and clustering performance.
Tables <ref> and <ref> summarize the performances of the proposed method and the other methods compared, on the training and test sets respectively, for the AR10P data set. The original number of features P for the AR10P data set is 2400, which is considerably higher than that of the other two data sets used in this sub-section. The two choices of the number of selected features here are 40 and 60; these are not approximately 35% and 50% of the original dimension as in the previous cases.
The study in <cit.> proposed a feature selection scheme with redundancy control among the features. For the AR10P data set, it reported an average number of selected features of 58.9 without redundancy control and an average number of selected features in the range of 22.8 to 44.2 with redundancy control. Hence, we choose the number of selected features Q as 40 and 60. From the classification scores shown in Table <ref>, we note that for all the methods and both choices of Q, the classification scores on the training set are more than 99%. On the training set, we observe that for FSMLPstruct SS decreases as β increases in almost all the cases, but for the test set this is not true. For the other structure-preserving measures on the training set, FSMLPstruct with β=50 is the best among all the methods for Q=40 and FSMLPstruct with β=100 is the best among all the methods for Q=60. On the test set, all the methods perform almost the same in terms of the structure-preserving measures. The classification performance of FSMLPstruct is very poor on the test set for the AR10P data. The significant differences between the training and test OCA values for FSMLPstruct indicate poor generalization of the system. This problem may be addressed by choosing the number of nodes of our MLP based model through cross-validation.
Results from the five data sets clearly establish the benefit of introducing the proposed structure preserving regularizer term E_sammons in the overall loss function (<ref>) of the MLP based embedded feature selection scheme. Next we shall consider the band (channel) selection problem for hyperspectral satellite images.
§.§ Band selection in hyperspectral images
Let the considered hyperspectral image I be of dimension H× W× P, where H, W, and P are the height, width, and number of spectral bands of the image, respectively. We can represent the pixels of I as 𝐱_i∈ℝ^P: i=1, 2, … ,H× W. Let there be a total of n pixels annotated with C land cover classes. Without any loss of generality, we take the first n pixels, i.e., i=1,2, … n, as the pixels having class labels. The input data for the land cover classification problem are 𝐗={𝐱_i=(x_i1,x_i2,⋯,x_iP) ∈ℝ^P}_i=1^n. The collection of class labels of 𝐗 is 𝐙={z_i∈{1,2,⋯, C}}_i=1^n, where z_i is the class label corresponding to 𝐱_i. We aim to select a subset of size Q from the original set of bands such that the selected subset performs reasonably well for land cover classification as well as for clustering.
We have performed the experiments with three benchmark HSI datasets for land cover classification problems: Indian pines, Pavia University, and Salinas <cit.>. We have used the corrected versions of the Indian pines and Salinas datasets, which have 200 and 204 bands, respectively.
The Pavia University dataset uses 103 bands. The pre-processing of the datasets is the same as in <cit.>, following the code available in <cit.>. For any dataset, its pixel values are scaled to [0,1] using the expression (x-min(x))/(max(x)-min(x)), where x is a pixel value. The max and min are computed over the entire HSI. The data are then mean normalized across each channel by subtracting the channel-wise means. The datasets are partitioned into training and test datasets. For band selection, only the training datasets are fed to the model. For measuring performances, both the training and test datasets are used. For splitting the datasets into training and test subsets, we drop the pixels of unknown land-cover type. Let 𝐗 be the set of pixels with known land-cover type. To obtain the training and test sets, let us divide 𝐗 into two subsets 𝐀 and 𝐁 such that 𝐀⋃𝐁=𝐗, 𝐀⋂𝐁=ϕ, and 𝐀 and 𝐁 contain, respectively, 25% and 75% of the pixels of 𝐗. We use 𝐀 as the test set. Note that the datasets suffer from the class imbalance problem. To avoid the learning difficulty raised by class imbalance, in the training set we consider the same number of instances from each class. For this, from the subset 𝐁, we randomly select (without replacement) 200 pixels per class. If a class has fewer than 200 instances in 𝐁, we oversample the class by the synthetic minority oversampling technique (SMOTE) <cit.> to gather 200 points. For band selection also, we use the same neural network (Fig. <ref>) with the number of hidden layers n_H=3. The numbers of hidden nodes in the three hidden layers are 500, 350, and 150, respectively. Here the number of input nodes of the MLP is equal to the number of bands (P). The network weights and the gate parameters λ_js are initialized in the same way as before. For all experiments of the current sub-section, α_1 and α_2 of the error functions in Equations (<ref>) and (<ref>) are set as 5 and 1, respectively. The total number of iterations for training the network is set to 50000. The rest of the experimental settings are kept the same as in the previously mentioned experiment with the two data sets.
The number of training instances is 3200 for the Indian pines and Salinas data sets and 1800 for Pavia University. Both of these training set sizes, n, are large. Computation of E_sammons in (<ref>) would involve computing (3200)^2 or (1800)^2 distances, so adding E_sammons to the overall loss function would cause very intensive computation at each iteration. Hence, instead of E_sammons, its proposed approximation E_struct defined in (<ref>) is used, with |S_t| taken as 100. Varying the value of β in (<ref>), we analyse its effect on the OCA, SS, NMI, ARI, and JI. We compute SS, NMI, ARI, and JI as described in Subsec. <ref>. We also use the same clustering algorithm with the same settings as used in Subsec. <ref>.
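As an illustration, a minimal Python/NumPy sketch of this kind of sampled structure-preserving penalty is the following. The variable names are illustrative, and the normalisation follows the standard form of Sammon's stress, so the exact expression in (<ref>) may differ in detail.

import numpy as np

def sampled_sammon_stress(X_orig, X_gated, sample_size=100, rng=None, eps=1e-12):
    # Draw a random mini-sample S_t of the training points; a new S_t is drawn
    # every time the regularizer is evaluated (dynamic sampling per iteration).
    rng = np.random.default_rng() if rng is None else rng
    n = X_orig.shape[0]
    idx = rng.choice(n, size=min(sample_size, n), replace=False)
    A, B = X_orig[idx], X_gated[idx]
    # Pairwise Euclidean distances in the original space and in the gated (selected-feature) space.
    iu = np.triu_indices(len(idx), k=1)
    d_orig = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)[iu]
    d_sel = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=-1)[iu]
    # Standard Sammon-stress form over the sampled pairs.
    return np.sum((d_orig - d_sel) ** 2 / (d_orig + eps)) / (np.sum(d_orig) + eps)

In an embedded scheme of this type, the objective minimised at each iteration is then of the form classification_loss + β · sampled_sammon_stress(X, X·gates), so that β controls the trade-off between class discrimination and structure preservation discussed above.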
Tables <ref> and <ref> summarize the comparative results of FSMLPstruct with FSMLP and other band selection methods, ICA, F score, and mutual information based filter methods on the training and test datasets of Indian pines respectively.
Similarly, Tables <ref> and <ref> summarize the comparative results on the training and test datasets of Pavia University.
In this experiment, we have fixed the number of selected bands Q to approximately 35% of the original number of bands P. So, the number of selected bands is 70 for Indian pines and
35 for Pavia University. Tables <ref> and <ref> record the values of the structure-preserving indices and the classification scores on Indian pines for different β values in FSMLPstruct (β values in Equation (<ref>)). The considered βs for Indian pines
are 2, 5, 20, and 50. Note here that FSMLP is basically FSMLPstruct with β=0. We observe in Tables <ref> and <ref> that, both for the training and test datasets, as the value of β increases for FSMLPstruct (the last five rows of the corresponding Tables) the value of SS becomes smaller. A similar trend is also observed for the Pavia University data set (here, β varies as 1, 1.5, 2, and 2.5) for the training (Table <ref>) and test (Table <ref>) sets.
For the Pavia University dataset, we have set the values of β in FSMLPstruct as 1, 1.5, 2, and 2.5. Unlike for Indian pines,
for Pavia University we restrict the βs to lower values. This is due to the fact that the number of selected bands for Pavia University is 35, whereas that for Indian pines
is 70. The smaller the number of selected bands, the less importance (β) should be given to our structure preserving regularizer in Equation (<ref>) to obtain the desired balance between classification and clustering performance. Table <ref>, which contains the results for the Indian pines training data, clearly shows that both FSMLP and FSMLPstruct are better than the ICA, F-score based, and mutual information based methods in all four structure preserving metrics as well as in terms of the OCA. In Table <ref> we observe that with increasing values of β there is a consistent improvement in the values of the four structure preserving metrics, while the values of the OCA remain at approximately 91%. The results shown in Table <ref> for the Indian pines test set also show that FSMLP and FSMLPstruct perform better in terms of all five metrics than the other three methods. Also, with an increase in β all the structure-preserving metrics improve for FSMLPstruct, except that the value of JI decreases slightly when β goes from 20 to 50. The classification metric OCA is around 78% with the bands selected by FSMLPstruct for the different choices of β.
It is notable here that the test set is completely unseen in the process of band selection, yet the bands selected by the proposed method provide fairly good results for structure preservation as well as for classification. As observed from Table <ref> and Table <ref> for the Pavia University training and test sets respectively, the lowest (best) SS value among all the compared methods is achieved by the mutual information based filter method. However, for the other four metrics, i.e., NMI, ARI, JI and OCA, FSMLP and FSMLPstruct show better values. In the case of the Pavia University dataset, with increasing β NMI, ARI, and JI do not increase consistently, but the results indicate that it is possible to find a β (here β=2) where the structures are preserved better while maintaining a good classification score.
Tables <ref> and <ref> summarize the comparative results on the training and test datasets of Salinas.
We note from Tables <ref> and <ref> that, for the Salinas dataset, FSMLP and FSMLPstruct are better than the other three methods in all five metrics used. All four structure preserving metric scores of FSMLPstruct are better than or comparable to those of FSMLP, while keeping the classification score OCA at approximately 96% for the training dataset and 90% for the test dataset. Tables <ref> and <ref> reveal that when β is increased from 0 to 2 the value of SS increases; however, from β=2 to β=50 the values of SS decrease. This exception for the Salinas dataset when increasing β from 0 to 2 is possibly due to the fact that we do not use the entire training data in Equation (<ref>), and the use of |S_t|=100 in Equation (<ref>) is not adequate to capture the structure of the data faithfully for the Salinas dataset. As discussed earlier, setting the value of |S_t| is crucial for approximating Equation (<ref>) with Equation (<ref>). We have set |S_t|=100 for all three datasets empirically. However, choosing an optimum value of |S_t| for each dataset is expected to avoid such exceptions.
As we increase the value of β, more emphasis is placed on reducing the loss in Equation (<ref>). In most cases, increasing β results in a drop in SS. This clearly suggests that the loss function in Equation (<ref>) that we use is a computationally efficient substitute for the original SS defined in Equation (<ref>).
We have also included the thematic maps (Fig. <ref>), which reveal that our proposed method is capable of selecting useful bands that can broadly capture the land cover types.
Figure <ref> illustrates thematic maps of the entire region captured in the Indian pines dataset. Figure <ref> shows the ground truth labels. Figures <ref>, <ref>, <ref> are thematic maps of the Indian pines data set using the class labels obtained from the SVM classifier trained on the considered training set represented with the 70 bands selected by FSMLPstruct with β=0 (i.e., by the method FSMLP), and by FSMLPstruct with β=20 and β=50, respectively.
Figure <ref> confirms that even with increasing stress on the structure-preserving regularizer E_struct, our proposed band selection method FSMLPstruct is able to select bands that maintain a good land cover classification performance.
§ CONCLUSION AND DISCUSSIONS
To the best of our knowledge, a feature selection method that simultaneously cares about class discrimination and structure preservation is not available in the literature. In this study, we have tried to bridge this gap by proposing a neural network-based feature selection method that focuses both on class discrimination and on structure preservation. To learn the proposed system, we use Sammon's stress as a regularizer added to the classification loss. For datasets having a large number of instances, the computational overhead associated with Sammon's stress is very high. Consequently, as the structure-preserving regularizer we use Sammon's stress computed on a sample of the original data (using dynamic sampling at each iteration during the adaptive gradient descent based learning). In the experiments with datasets having a large number of instances, we have demonstrated that this is an effective and computationally efficient implementation of the Sammon's stress based structure-preserving regularizer. Our proposed feature selection scheme is generic, so we have investigated its effectiveness on datasets commonly used for assessing classifiers as well as for a specialized case: band selection in hyperspectral images (HSI). We have applied the feature selection scheme to five real-world datasets which are commonly used for assessing classification. In the context of band selection, we have applied our method to three well-known HSI datasets and compared its performance with three other band selection methods. Based on our experiments, we conclude that the proposed feature selection method is able to produce reasonably good classification and clustering scores on the majority of the data sets, demonstrating that the proposed method is capable of selecting a subset of features that is good both for classification and for clustering. Our scheme provides a mechanism to control the number of selected features. The proposed method is easily extendable to other networks such as the Radial Basis Function (RBF) network.
|
http://arxiv.org/abs/2307.07655v1 | 20230714232036 | Toy model illustrating the effect of measurement dependence on a Bell inequality | [
"Sophia M. Walls",
"Ian J. Ford"
] | quant-ph | [
"quant-ph"
] |
Department of Physics and Astronomy, University College London, Gower
Street, London, WC1E 6BT, United Kingdom
Bell's inequalities rely on the assumption of measurement independence,
namely that the probabilities of adopting configurations of hidden
variables describing a system prior to measurement are independent
of the choice of physical property that will be measured. Weakening
this assumption can change the inequalities to accommodate experimental
data. We illustrate this by considering quantum measurement to be
the dynamical evolution of hidden variables to attractors in their
phase space that correspond to eigenstates of system observables.
The probabilities of adopting configurations of these variables prior
to measurement then depend on the choice of physical property measured
by virtue of the boundary conditions acting on the dynamics. Allowing
for such measurement dependence raises the upper limit of the CHSH
parameter in Bell's analysis of an entangled pair of spin half particles
subjected to measurement of spin components along various axes, whilst
maintaining local interactions. We demonstrate how this can emerge
and illustrate the relaxed upper limit using a simple toy model of
dynamical quantum measurement. The conditioning of the hidden variable
probability distribution on the chosen measurement settings can persist
far back in time in certain situations, a memory that could explain
the correlations exhibited in an entangled quantum system.
Toy model illustrating the effect of measurement dependence on a Bell
inequality
Sophia M. Walls and Ian J. Ford
================================================================================
Bell inequalities arise from analysis of the statistics of purported
`hidden variables' that evolve according to local interactions and
represent `elements of reality' with definite values prior to measurement.
It has been demonstrated that the inequalities can be violated, with
recent experiments removing areas of uncertainty in the analysis such
as the fair-sampling and locality loopholes <cit.>.
Perhaps the most striking aspect of quantum mechanics is that its
predictions are consistent with these violations. The implication
is either that physical effects operate non-locally between space-like
separated points, or that we have to abandon the concept of a reality
independent of observation at microscopic scales <cit.>.
Should we wish to avoid these (arguably) unpalatable conclusions it
is necessary to examine the assumptions made in Bell's analysis, one
of which is `measurement independence', or `statistical independence'
<cit.>, according to which the system prior to measurement
is assumed to adopt states with probabilities that are independent
of the measurement settings. A stronger version is that the prior
state of a system should be capable of providing a measurement outcome
for any system property.
We examine the effect of relaxing this assumption in the standard
situation where two spin half particles in an entangled state have
spin components separately measured along arbitrarily chosen axes.
We demonstrate, by introducing measurement dependence, that the upper
bound of the Clauser-Horne-Shimony-Holt (CHSH) parameter, S(â,b̂,â^',b̂^'),
defined as <cit.>
S=| C(â,b̂)-C(â,b̂^')|+| C(â^',b̂)+C(â^',b̂^')|,
can be increased. C(â,b̂) is the correlation function
of spin component measurement outcomes A and B (each taking
values ±1) for particle 1 undergoing a spin measurement along
axis â and particle 2 along axis b̂, respectively,
namely C(â,b̂)=AB=P_++^â,b̂-P_+-^â,b̂-P_-+^â,b̂+P_–^â,b̂
where P_±±^â,b̂ are the probabilities of measurement
outcomes A=±1,B=±1 along the respective axes. Each correlation
function takes a value between ±1 and S could therefore lie
between 0 and 4. However, assuming that the outcome of measurement
of particle 2 is influenced neither by the choice of axis setting
nor the outcome of measurement of particle 1, and vice versa, and
also assuming that the system prior to measurement is specified by
a set of hidden variables λ adopted according to a probability
density function (pdf) ρ(λ), then P_±±^â,b̂=∫ p_1±^â(λ)p_2±^b̂(λ)ρ(λ)dλ
where p_1±^â(λ) are the probabilities of outcomes
±1 for the spin component of particle 1 along axis â,
for a given specification of hidden variables, with a similar meaning
for p_2±^b̂(λ). Bell's analysis requires S
to have an upper bound of 2. The crucial observation is that
|B̅(b̂,λ)-B̅(b̂^',λ)|+|B̅(b̂,λ)+B̅(b̂^',λ)|≤2,
where B̅(b̂,λ)=p_2+^b̂(λ)-p_2-^b̂(λ)
is the mean outcome of measurement of the spin component of particle
2 along axis b̂ for a given set of hidden variables λ.
Note also that |B̅(b̂,λ)|≤1. It has been
shown that this upper bound can be violated experimentally.
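The elementary inequality behind this bound of 2 is worth spelling out. Writing x=B̅(b̂,λ) and y=B̅(b̂^',λ), with |x|,|y|≤1, one has
\[
|x-y|+|x+y| \;=\; 2\max\big(|x|,|y|\big) \;\le\; 2 ,
\]
so that multiplying by |A̅(â,λ)|≤1 or |A̅(â^',λ)|≤1 inside the two correlators and integrating against the measurement-independent ρ(λ) gives S≤2.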
Measurement independence is the use in the analysis of a pdf ρ(λ)
that depends neither on the measurement axes nor the outcome of the
measurement. This can seem reasonable unless we entertain the view
that measurement is a dynamical process. The usual alternatives to
this are the classical acquisition of information without changing
the system state, or the quantum mechanical instantaneous projection
of the system to one of the eigenstates of the measured property.
Instead, it could conceivably involve the evolution of hidden variables,
that specify the states of the system and a coupled measuring device,
towards attractors in their phase space that correspond to system
eigenstates and device readings correlated with those eigenstates.
This would create a relationship between initial state probabilities
and the final measurement outcomes that act as boundary conditions.
We could also imagine that deterministic rules could govern the evolution
of the complete set of hidden variables while those describing the
system alone could evolve stochastically, reflecting uncertainty in
the initial state of the measuring device.
This point of view suggests that probabilities of hidden variable
configurations might reasonably be conditioned on events taking place
in the future, also known as `future input dependence' <cit.>.
This is as natural as deducing the statistics of the position and
velocity of a tennis ball, at the moment when it is struck by a racket,
using information on where and how it later strikes the ground, assuming
deterministic Newtonian dynamics, perhaps supplemented by stochastic
environmental effects such as air currents. If measurement is dynamical,
then the choice of the axes in a spin measurement experiment, and
indeed the outcome of the measurement, could similarly convey information
about the initial states adopted by the system and device.
The amount of measurement dependence required to reproduce the observed
violations of the CHSH upper limit has been quantified through a number
of parameters, such as the amount of mutual information shared between
the detectors and the hidden variables describing the entangled particle
pair; a distance measure quantifying the overlap or degree of distinguishability
of different choices of settings; or by requiring the probability
of choosing certain settings to be within a particular range <cit.>.
It has been shown that the upper bound of the Bell inequalities may
be relaxed by removing all freedom of choice of settings <cit.>.
It has subsequently been demonstrated that only 0.046-0.080 bits
of shared information is required to reproduce quantum correlations
for retrocausal or causal models <cit.>.
More elaborate scenarios have also been studied <cit.>
and investigations have been carried out into how measurement dependence
and Bell inequality violation are affected by imperfect measuring
devices, and the role measurement dependence has in randomness amplification
and quantum causality <cit.>.
Experimental advances have also been made in an attempt to close the
`measurement-dependence loophole' by allowing measurement settings
to be chosen by random number generators, the wavelength of light
from distant quasars or stars, or participants in a video game <cit.>.
Such arrangements can indeed establish correlations between the settings
and distant or highly complex dynamical influences. We nonetheless
take the view that the settings, however they are determined, can
still provide information about the system variables prior to measurement,
when the process of measurement is dynamical. The question is then
not how the measurement settings are determined but how
much such settings condition the system variables prior to measurement.
Such an effect is not retrocausality, but instead a process of inference
regarding the uncertain initial state of the system.
Violation of measurement independence has the potential to render
vulnerable to attack any entanglement-based technologies that rely
on true quantum randomness. This particularly concerns quantum cybersecurity
and encryption, where it is possible that an adversary could exploit
measurement dependence to gain access <cit.>.
The effect of measurement dependence on the upper limit placed on
the CHSH parameter has been examined before <cit.>
but our analysis is novel since it rests on the dynamical evolution
of the hidden variables during measurement. We use a toy model, to
explore conditions for which the effect could be significant.
To be specific, let us imagine that quantum measurement drives hidden
system variables λ towards measurement axis-dependent regions
in their phase space that correspond to spin component outcomes for
those axis orientations. The chosen measurement axes and the evolution
model allow us, in principle, to deduce the probability density over
configurations of λ prior to measurement. The pdf ρ
is therefore conditioned on the choice of axes, such that the correlation
function should be written
C(â,b̂)=∫A̅(â,λ)B̅(b̂,λ)ρ(λ|â,b̂)dλ,
using previous notation extended to indicate the conditioning of ρ.
We still assume that the mean outcome of spin component measurement
for particle 1 along axis â does not depend on the orientation
of axis b̂ specifying the spin component measurement of particle
2, and vice versa, so the average of AB given â, b̂
and λ factorises, namely AB(â,b̂,λ)=A̅(â,λ)B̅(b̂,λ).
The CHSH parameter is built from correlation functions involving four
pairs of measurement axes, and hence four conditioned pdfs over λ,
which we denote ρ(λ|â,b̂), ρ(λ|â,b̂^'),
ρ(λ|â^',b̂) and ρ(λ|â^',b̂^').
We can use these to define a normalised average pdf ρ̅(λ,â,b̂,â^',b̂^')=1/4(ρ(λ|â,b̂)+ρ(λ|â,b̂^')+ρ(λ|â^',b̂)+ρ(λ|â^',b̂^'))
together with combinations that describe the differences between them:
ρ̅ϵ =1/4(ρ(λ|â,b̂)-ρ(λ|â,b̂^')+ρ(λ|â^',b̂)-ρ(λ|â^',b̂^'))
ρ̅σ =1/4(ρ(λ|â,b̂)+ρ(λ|â,b̂^')-ρ(λ|â^',b̂)-ρ(λ|â^',b̂^'))
ρ̅η =1/4(ρ(λ|â,b̂)-ρ(λ|â,b̂^')-ρ(λ|â^',b̂)+ρ(λ|â^',b̂^')),
such that ρ(λ|â,b̂)=ρ̅(1+ϵ+σ+η),
ρ(λ|â,b̂^')=ρ̅(1-ϵ+σ-η),
ρ(λ|â^',b̂)=ρ̅(1+ϵ-σ-η),
and ρ(λ|â^',b̂^')=ρ̅(1-ϵ-σ+η).
The ϵ, σ and η functions depend on all four
axis orientations as well as λ.
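As a quick consistency check of this parametrisation, summing the four defining combinations with the stated signs reproduces each conditioned pdf; for instance,
\[
\bar\rho\,(1+\epsilon+\sigma+\eta)
=\tfrac{1}{4}\Big[(1{+}1{+}1{+}1)\,\rho(\lambda|\hat a,\hat b)
+(1{-}1{+}1{-}1)\,\rho(\lambda|\hat a,\hat b')
+(1{+}1{-}1{-}1)\,\rho(\lambda|\hat a',\hat b)
+(1{-}1{-}1{+}1)\,\rho(\lambda|\hat a',\hat b')\Big]
=\rho(\lambda|\hat a,\hat b),
\]
and similarly for the other three expressions.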
Now consider the first combination of correlation functions in the
CHSH parameter, C(â,b̂)-C(â,b̂^').
By introducing conditioning of the probability densities according
to the chosen measurement axes this can written as
∫A̅(â,λ)(B̅(b̂,λ)ρ(λ|â,b̂)-B̅(b̂^',λ)ρ(λ|â,b̂^'))dλ
=∫A̅(â,λ)(B̅(b̂,λ)-B̅(b̂^',λ))ρ̅(λ)dλ
+∫A̅(â,λ)(B̅(b̂,λ)+B̅(b̂^',λ))ρ̅(λ)ϵ(λ)dλ
+∫A̅(â,λ)(B̅(b̂,λ)-B̅(b̂^',λ))ρ̅(λ)σ(λ)dλ
+∫A̅(â,λ)(B̅(b̂,λ)+B̅(b̂^',λ))ρ̅(λ)η(λ)dλ,
which, since |A̅(â,λ)|≤1, implies that
| C(â,b̂)-C(â,b̂^')|≤∫|B̅(b̂,λ)-B̅(b̂^',λ)|ρ̅(λ)dλ
+∫|B̅(b̂,λ)+B̅(b̂^',λ)|ρ̅(λ)|ϵ(λ)| dλ
+∫|B̅(b̂,λ)-B̅(b̂^',λ)|ρ̅(λ)|σ(λ)| dλ
+∫|B̅(b̂,λ)+B̅(b̂^',λ)|ρ̅(λ)|η(λ)| dλ.
Similarly the combination C(â^',b̂)+C(â^',b̂^')
may be written
∫A̅(â^',λ)(B̅(b̂,λ)ρ(λ|â^',b̂)+B̅(b̂^',λ)ρ(λ|â^',b̂^'))dλ
=∫A̅(â^',λ)(B̅(b̂,λ)+B̅(b̂^',λ))ρ̅(λ)dλ
+∫A̅(â^',λ)(B̅(b̂,λ)-B̅(b̂^',λ))ρ̅(λ)ϵ(λ)dλ
+∫A̅(â^',λ)(-B̅(b̂,λ)-B̅(b̂^',λ))ρ̅(λ)σ(λ)dλ
+∫A̅(â^',λ)(-B̅(b̂,λ)+B̅(b̂^',λ))ρ̅(λ)η(λ)dλ,
which, since |A̅(â,λ)|≤1, leads to
| C(â^',b̂)+C(â^',b̂^')|≤∫|B̅(b̂,λ)+B̅(b̂^',λ)|ρ̅(λ)dλ
+∫|B̅(b̂,λ)-B̅(b̂^',λ)|ρ̅(λ)|ϵ(λ)| dλ
+∫|B̅(b̂,λ)+B̅(b̂^',λ)|ρ̅(λ)|σ(λ)| dλ
+∫|B̅(b̂,λ)-B̅(b̂^',λ)|ρ̅(λ)|η(λ)| dλ.
Combining Eqs. (<ref>), (<ref>) and (<ref>), the
CHSH parameter satisfies
S ≤2+μ,
where μ=2(∫ρ̅|ϵ| dλ+∫ρ̅|σ| dλ+∫ρ̅|η| dλ)≥0
represents an elevation of the usual upper limit. Note that ϵ=σ=η=0
and hence μ=0 in the absence of measurement dependence. Also,
if, for every λ, only one of the four conditioned probability densities is non-zero (an extreme case of measurement dependence), then ϵ, σ and η equal ±1 wherever ρ̅>0, so that ρ̅|ϵ|,
ρ̅|σ| and ρ̅|η| are
all normalised to unity, in which case μ=6. Values of μ
above 2 are redundant since S cannot exceed 4.
The rather abstract discussion given so far can be illustrated using
a toy model. Consider a system described by two discrete hidden variables
λ_1,2 and a phase space in the form of a grid of squares
labelled with integer values. The measurement of properties A and
B is represented by attraction of each of λ_1 and λ_2
towards one of two points, yielding four `targets' at coordinates
(λ_1^±,λ_2^±). The idea is that
interactions between the system and measuring device bring about a
dynamical evolution of the λ_1,2 to attractors (here fixed
points) located at the targets. The outcome of the measurement of
each property is ±1, as designated by the superscripts on the
target coordinates.
We consider a particular situation where passage to each target arises
from separate basins of attraction, shown as four different shades
of grey. For example, in Figure <ref>(a) the measurement
dynamics generate trajectories starting from any of the black squares
and all terminating at (λ_1^+,λ_2^+).
If there were information on the probabilities of adopting each black
square prior to measurement, then we would reasonably deduce from
the dynamics that these would accumulate to provide a probability
P_++ of an outcome A=B=+1 associated with system arrival at
the (λ_1^+,λ_2^+) target. The suffixes in this
probability correspond to the superscripts on the target coordinates.
However, we instead wish to deduce the probabilities of having started
out on a black square conditioned on information about the type of
measurement performed and its outcome. Given that the probability
of arrival at (λ_1^+,λ_2^+) after measurement
is P_++, we need to distribute this probability appropriately
over the basin of attraction of that target at earlier times, in accordance
with the measurement dynamics. The probability distribution in the
past is conditioned on future information: this is the measurement
dependence under consideration.
We can now see how such conditioned probabilities would change if
the measurement concerned a different property, represented in the
toy model as attraction towards a different set of targets. Consider
the shifted targets represented by the green crosses in Figure <ref>(b).
The basins of attraction under the measurement dynamics would, in
general, form a different pattern for these targets. There are therefore
revised probabilities of adoption of configurations prior to measurement
when conditioned on a measurement process defined by the green crosses.
This is compounded if we consider sets of probabilities of arrival
P_±± (outcome probabilities) at the green and red targets
that differ from one another.
It is crucial for the emergence of measurement dependence in this
toy model that the phase space should separate into a set of basins
of attraction to each outcome target for each measurement situation.
In real systems where the evolution is deterministic, outcomes are
encoded in the coordinates describing the initial state and such a
situation would be natural. For systems governed by effective stochastic
dynamics arising from coupling to an uncertain environment, such encoding
could persist to some degree depending on the situation.
We now illustrate how a CHSH parameter greater than 2 can be accommodated
through this measurement dependence. In Figure <ref>
we define four sets of targets based on the pairs of positions towards
which the measurement dynamics drive the variables λ_1,2.
The target pairs λ_1^± and λ̅_1^±
will be labelled `axis choices' â and â^',
respectively, and λ_2^± and λ̅_2^±
are to be associated with `axes' b̂ and b̂^'. The
terminology is chosen to establish an analogy with spin component
measurement and the superscripts indicate the outcome values for properties
A and B. Four measurement situations can then be considered,
specified by the choice of blue, red, purple or green targets. The
possible measurement outcomes of A and B can be correlated (±1,±1)
or anti-correlated (±1,∓1). In Figure <ref>
we associate +1 outcomes with targets situated at even label positions
on the grid and -1 outcomes with odd labels.
We can then consider a set of probabilities P_±±^x̂,ŷ
for arrival of the system after measurement at the four targets in
each group, for x̂=â or â^' and ŷ=b̂
or b̂^'. These are shown in Figure <ref>
as circular symbols with different shades of colour to indicate magnitude.
To simplify the situation, we impose conditions P_++^x̂,ŷ=P_–^x̂,ŷ
and P_+-^x̂,ŷ=P_-+^x̂,ŷ. Furthermore,
we consider identical outcome probabilities for the red, blue and
purple measurement situations, namely P_±±^â,b̂=P_±±^â^',b̂=P_±±^â^',b̂^'.
However, we design the green measurement situation to be different
by setting P_±∓^â,b̂^'=P_±±^â,b̂,
namely that dark and light shades of green characterise measurement
outcomes coloured in light and dark shades, respectively, for the
other three cases. We can then combine this information with a chosen
pattern of basins of attraction associated with each target to derive
the conditioned probabilities over the phase space prior to measurement
for each pair of chosen `axes'. Our aim is to produce a probability
distribution P(λ_1,2|â,b̂^') over the
phase space conditioned on outcomes at the green targets that differs
from those conditioned on the blue, red and purple targets, which
for simplicity we shall choose to be the same, namely P(λ_1,2|â,b̂)=P(λ_1,2|â^',b̂)=P(λ_1,2|â^',b̂^')≠ P(λ_1,2|â,b̂^'). The
prior distribution over the hidden variables λ_1,2 then
clearly depends on the measurement situation. For the green measurement
situation, there would be a higher probability associated with the
anti-correlated measurement outcomes, whereas correlated outcomes
would be more probable if any of the other three measurement situations
were considered.
Notice that our approach is to start with the correlation functions
for each measurement situation and then to deduce the conditioned
probability distributions on the phase space prior to measurement.
In this way we can construct a CHSH parameter of any magnitude between
zero and 4 but we then need to establish that such a parameter is
compatible with an extended upper limit by virtue of the additional
term μ in Eq. (<ref>) arising from differences in the
conditioned prior probabilities.
We consider measurement dynamics that generate independent random
walks in two dimensions with |Δλ_1|=|Δλ_2|=2
per timestep, terminating at a target. We imagine that such a stochastic
scheme can emerge from the deterministic dynamics of the system together
with a measuring device whose initial state is uncertain. The scheme
means that the basin of attraction leading to a target where both
λ_1 and λ_2 are even is comprised of all locations
in the phase space with similarly even-even coordinates. The basin
of attraction leading to a target where λ_1 is even and
λ_2 is odd consists of all locations with even-odd coordinates,
and so on. Separate basins of attraction to each outcome are maintained.
We also assume a phase space with an even number of locations in each
dimension (N locations in all) and periodic boundary conditions.
This means that the probability P(λ_1,2|â,b̂)
at any even-even location in the phase space, at a sufficiently large
interval of time prior to measurement with settings â,b̂,
is a constant P_ee^â,b̂ given by the probability
of arrival P_++^â,b̂ at red target (λ_1^+,λ_2^+)
divided by the number of such locations, N/4, which is P_ee^â,b̂=4P_++^â,b̂/N.
Similarly, the conditioned probability P_eo^â,b̂
at an even-odd location on the grid at such a time prior to measurement
is the arrival probability P_+-^â,b̂ at red target
(λ_1^+,λ_2^-) divided equally between all even-odd
locations in the grid, or P_eo^â,b̂=4P_+-^â,b̂/N.
The conditioned probability at an odd-even location is P_oe^â,b̂=4P_-+^â,b̂/N
and for odd-odd locations it is P_oo^â,b̂=4P_–^â,b̂/N.
The same arguments apply to measurement situation â,b̂^'
with arrival probabilities P_±±^â,b̂^'
at green targets, measurement situation â^',b̂
with arrival probabilities P_±±^â^',b̂
at blue targets, and measurement situation â^',b̂^'
with arrival probabilities P_±±^â^',b̂^'
at purple targets.
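This behaviour is easy to verify numerically. The following minimal Python sketch uses an illustrative grid size (N_SIDE=6, so N=36) and illustrative target positions consistent with the even/odd labelling above; these particular numbers are assumptions, not values fixed in the text. Every walk started from a given cell terminates at the unique target sharing the parities of that cell, confirming that the basins of attraction are the four parity classes over which the arrival probabilities are then distributed uniformly.

import numpy as np

rng = np.random.default_rng(0)
N_SIDE = 6   # illustrative even grid size; N = N_SIDE**2 locations, periodic boundaries
# Illustrative target positions: '+' outcomes at even labels, '-' outcomes at odd labels.
targets = {(+1, +1): (0, 0), (+1, -1): (0, 1), (-1, +1): (1, 0), (-1, -1): (1, 1)}
hit = {pos: outcome for outcome, pos in targets.items()}

def run_measurement(start):
    # Random walk with |dlambda_1| = |dlambda_2| = 2 per timestep until a target is reached.
    pos = np.array(start)
    while tuple(int(c) for c in pos) not in hit:
        pos = (pos + rng.choice([-2, 2], size=2)) % N_SIDE
    return hit[tuple(int(c) for c in pos)]

for start in [(2, 4), (4, 3), (3, 2), (5, 5)]:   # one starting cell from each parity class
    outcomes = {run_measurement(start) for _ in range(50)}
    expected = next(o for o, t in targets.items()
                    if (t[0] - start[0]) % 2 == 0 and (t[1] - start[1]) % 2 == 0)
    assert outcomes == {expected}
    print(f"start {start} always reaches target {targets[expected]} (outcome {expected})")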
What this means is that the conditioned probabilities on the phase
space are repeated versions of the patterns of probabilities of arrival
at the targets after measurement. This is illustrated in Figure <ref>(a)
for the â,b̂ measurement situation. The dark and light
shades of pink represent the two probabilities P_ee^â,b̂
and P_eo^â,b̂, respectively, recalling that we have
imposed P_ee^â,b̂=P_oo^â,b̂ and P_eo^â,b̂=P_oe^â,b̂.
The distribution P(λ_1,2|â,b̂) illustrated is
the discrete analogue of the pdf ρ(λ|â,b̂)
considered for the spin measurement situation. With the assumptions
we have made, the conditioned probability distributions for measurement
situations â^',b̂ and â^',b̂^'
are identical to the distribution shown for â,b̂ (but
would be illustrated in blue and purple, respectively). However, the
conditioned probability distribution for the situation â,b̂^'
shown in Figure <ref>(b) is different, with a light
shade of green, representing P_ee^â,b̂^'=P_eo^â,b̂,
at a location where a dark shade of pink, representing P_ee^â,b̂,
lies in Figure <ref>(a), and vice versa.
The analogue of the additional term in Eq. (<ref>) for the
toy model is a sum of magnitudes of differences between the probability
distribution shown in pink in Figure <ref>(a) and
its blue, purple and green counterparts, analogous to ρ(λ|â,b̂),
ρ(λ|â^',b̂), ρ(λ|â^',b̂^')
and ρ(λ|â,b̂^'), respectively. For
the simplified case under consideration, the first three distributions
are identical and the fourth is different. This implies, in the notation
of a continuum phase space, that ρ̅ϵ=-ρ̅σ=ρ̅η=1/4(ρ(λ|â,b̂)-ρ(λ|â,b̂^'))
and hence the additional term in the Bell inequality for S in these
circumstances is μ=3/2∫|ρ(λ|â,b̂)-ρ(λ|â,b̂^')| dλ.
For the discrete phase space the integral is replaced by a sum of
moduli of the differences in probability at each position of the grid
conditioned on the red and green measurement outcomes. For the situation
we have constructed, these differences are P_ee^â,b̂-P_ee^â,b̂^'
at even-even and odd-odd points on the grid, and P_eo^â,b̂-P_eo^â,b̂^'
at even-odd and odd-even points, and these quantities are given by
±(P_ee^â,b̂-P_eo^â,b̂),
respectively. The sum of moduli of probability differences over the
grid is therefore N|P_ee^â,b̂-P_eo^â,b̂|=4|P_++^â,b̂-P_+-^â,b̂|
and the additional term for the toy model is μ=6|P_++^â,b̂-P_+-^â,b̂|.
Similarly, all four correlation functions can be specified in terms
of P_++^â,b̂ and P_+-^â,b̂, specifically
C(â,b̂)=C(â^',b̂)=C(â^',b̂^')=-C(â,b̂^')=2(P_++^â,b̂-P_+-^â,b̂).
Thus the CHSH parameter is S=4| C(â,b̂)| and
the additional term is μ=3| C(â,b̂)|. Bell's
analysis therefore requires that
S=4| C(â,b̂)|≤2+μ=2+3| C(â,b̂)|,
which is clearly satisfied for the relevant range 0≤| C(â,b̂)|≤1.
We conclude that measurement dependence has elevated the upper limit
to accommodate an imposed value of the CHSH parameter.
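The closing arithmetic can be restated in a few lines, with an arbitrary illustrative choice of the arrival probabilities subject to P_++=P_-- and P_+-=P_-+:

import numpy as np

for p_pp in np.linspace(0.0, 0.5, 6):        # P_++ = P_--, so P_+- = P_-+ = 0.5 - P_++
    p_pm = 0.5 - p_pp
    C = 2 * (p_pp - p_pm)                    # C(a,b) = C(a',b) = C(a',b') = -C(a,b')
    S = 4 * abs(C)                           # CHSH parameter of the toy model
    mu = 6 * abs(p_pp - p_pm)                # additional term, equal to 3|C(a,b)|
    assert S <= 2 + mu + 1e-12               # S <= 2 + mu holds for all |C| <= 1
    print(f"P_++ = {p_pp:.1f}:  S = {S:.2f}  <=  2 + mu = {2 + mu:.2f}")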
A toy model can merely illustrate a possibility of behaviour in more
realistic systems. It can focus attention on the elements required
to be present in those systems to produce an effect. Thus, for example,
we see the need for a dynamics of measurement that does not draw configurations
towards measurement outcome targets indiscriminately, but instead
selects them from basins of attraction in the phase space of hidden
variables. We also speculate that a measurement outcome should be
associated with system adoption, post-measurement, of a narrowly defined
set of hidden variable configurations. The measurement dependence
effect might therefore be delicate, and could perhaps emerge only
in situations involving few degrees of freedom. The lack of Bell-violating
correlations in more complex experimental circumstances might be explained
in this way.
The freedom of choice of measurement settings often arises in discussions
of measurement dependence. Rather than deny this freedom, a view can
be taken that the settings are indeed under experimental control,
or can be made effectively random by coupling them to complex external
systems. We reiterate that our interest lies in the extent to which
the given measurement settings and outcomes can provide information
about the subjectively uncertain configuration of system hidden variables
prior to measurement. The claim of measurement independence is that
no such information is conveyed. But if quantum measurement
involves the evolution of hidden variables with measurement settings
and outcomes acting as boundary conditions on the dynamics, then conditioning
of the probabilities of adopting pre-measurement configurations will
follow. This conditioning, or dynamically imposed memory, could extend
far back in time, challenging the premise normally employed in the
Bell analysis. Perhaps this is the principal implication of the experimental
violation of Bell inequalities.
SMW is supported by a PhD studentship funded by EPSRC under grant
codes EP/R513143/1 and EP/T517793/1.
|
http://arxiv.org/abs/2307.05286v1 | 20230711142739 | Heavy-flavor hadronization mechanism from pp to AA collisions: a theoretical perspective | [
"Andrea Beraudo"
] | hep-ph | [
"hep-ph",
"hep-ex",
"nucl-ex",
"nucl-th"
] |
§ MOTIVATIONS: WHY STUDYING HEAVY-FLAVOR HADRONIZATION?
The original motivation for studying in-medium heavy-flavor (HF) hadronization was related to the extraction of the transport coefficients governing the heavy-quark dynamics in the hot, deconfined and expanding fireball produced in relativistic heavy-ion collisions (HIC's). Within this setup heavy quarks (HQ's) are described as Brownian particles undergoing a stochastic dynamics in the Quark-Gluon Plasma (QGP), modelled through some transport equation (Boltzmann, Fokker-Planck or Langevin), until they reach a hadronization hypersurface where they give rise to the final HF hadrons. Exploiting the stochastic diffusive dynamics of Brownian particles to access information on microscopic medium properties is not a novelty. More than 100 years ago the study of the diffusion of colloidal particles in water allowed Perrin to get a quite precise and accurate estimate for the Avogadro number, N_ A≈5.5-7.2· 10^23 <cit.>. Nowadays, in nuclear collisions, HF studies aim at quantifying with a similar precision and accuracy the HQ spatial (D_s) and momentum (κ) diffusion coefficients. Recent results are shown in Fig. <ref>. As one can see, we are still quite far from achieving this goal. One of the major systematic uncertainties affecting phenomenological estimates is represented by hadronization, since the final detected particles are not the parent HQ's, but their daughter hadrons: hence the interest in developing the most possible reliable description of this process.
Indeed, understanding how the quark-to-hadron conversion changes moving from a dilute to a dense system, with a lot of color charges floating around, can represent an item of interest by itself. In this connection, the study of open HF has the great advantage that one knows exactly one of the parent partons of the final hadron, which can only be a charm or beauty quark produced in a hard scattering occurring before the fireball starts its hydrodynamic expansion. Furthermore, the fact that the same modifications of HF hadrochemistry – with a relative enhancement of charmed-baryon production – measured in nucleus-nucleus collisions are also observed in the proton-proton case <cit.> may be considered a signature that also in these smaller systems a little droplet of QGP is produced.
§ COMMON FEATURES AND CHALLENGES TO ANY HADRONIZATION MODEL
Any conceivable hadronization mechanism must start from grouping colored partons into color-singlet structures. Depending on the model, these composite objects are referred to as strings (e.g. in PYTHIA), clusters (e.g. in HERWIG) or are directly identified with the final hadrons/resonances (as in coalescence models). What changes in the different physical situations is where the recombining partons are taken from. In collisions leading to the formation of a sufficiently dilute system (left panel of Fig. <ref>) the partons are taken from the hard scattering, from the parton-shower stage, from the underlying event and from the beam remnants and are grouped following the color-flow of the event. If, on the contrary, a very dense partonic system is formed – as in HIC's – one can assume that color neutralization occurs locally (right panel of Fig. <ref>), with the considered parton (the HQ in this case) undergoing recombination with the closest opposite color-charge. As suggested by the figure, the latter can also be a diquark, which favors the formation of clusters carrying one unit of baryon number. Notice that recombining nearby partons, besides minimizing the potential energy stored in the color-field, entails a strong space-momentum correlation (SMC), since particles belonging to the same fluid cell of an expanding fireball must share a common collective velocity. This, as we will see, has deep consequences for the typical invariant-mass of the formed clusters and for the kinematic distributions of the final hadrons.
In order to appreciate the challenges in developing a realistic model of hadronization it may be useful to consider a different situation in which composite objects are formed starting from more elementary degrees of freedom: stellar nucleosynthesis of ^12C (see Fig. <ref>). ^12C can be considered a cluster of 3 α particles, taken as the elementary building blocks of the process. They are the equivalent of quarks in hadronization. A direct recombination of 3 α particles would be very unlikely, but the process is favored by the existence of a resonant ^8Be* state just above the 2α threshold, produced in the reaction α+α↔ ^8Be* and which can live long enough, until the scattering with a third α particle. ^8Be* can be considered the equivalent of diquarks in hadronization, which favor the formation of three-quark clusters. However, also in this case, the direct formation of ^12C would be extremely unlikely. Thus, the great abundance of ^12C in the Universe led Hoyle to predict the existence of an excited ^12C* state just above the α+^8Be* threshold, easily accessible in a scattering process <cit.>. In a tiny fraction of cases ^12C* undergoes an electromagnetic decay into the ground-state, explaining the observed carbon abundance. Soon after its prediction ^12C* was actually discovered <cit.>. Clearly the latter can be considered the equivalent of the long list of excited hadronic resonances predicted by relativistic quark models (RQM's), whose feed-down is necessary to explain the observed abundance of ground-state HF hadrons in high-energy collisions within calculations assuming their statistical production from a hadronization hypersurface <cit.>.
To summarize, the final yields of stable nuclei in stellar nucleosynthesis are extremely sensitive to the existence of excited states just above the two-particle threshold, which have long been experimentally well known and which are also predicted by theory calculations <cit.>. Furthermore, the stellar temperatures ∼ 10^8 K∼ 10 keV are not high enough to affect the nucleon/nuclear properties. Unfortunately, none of the above conditions is actually satisfied in the quark-to-hadron transition in HIC's. First of all, only a few of the hadronic resonances predicted by the RQM <cit.> and necessary to reproduce the measured yields of ground-state HF hadrons have so far been experimentally observed and listed in the PDG tables (left panel of Fig. <ref>). Secondly, at temperatures around the QCD crossover hadron spectral functions are strongly modified, both in the light <cit.> and in the heavy <cit.> sectors (right panel of Fig. <ref>), raising questions about the very nature of a hadron around T_c. A bound state in the vacuum can remain a bound state, but can also become a broad resonance above the two-particle threshold. In this connection, the Resonance Recombination Model (RRM) developed in Ref. <cit.> precisely relies on the existence of resonant mesonic states M above the m_Q+m_q threshold, whose large thermal width allows the reactions Qq↔ M to reach dynamical equilibrium.
§ RECOMBINATION OF COLOR-CHARGES IN THE QGP
Since, as mentioned above, the modelling of the quark-to-hadron transition is so challenging, it is useful to keep the discussion as simple as possible, considering in deeper detail the minimal model of hadronization briefly sketched in Sec. <ref>. This will be sufficient to illustrate very general features of models based on parton recombination and to provide quantitative guidance to interpret several experimental measurements. Within such a model, presented in Ref. <cit.>, color-neutralization occurs locally, on an isothermal hypersurface around the critical temperature T_H=155 MeV, recombining a HQ with an opposite color-charge from the same fluid cell, either a light antiquark or a diquark. Both the species and the momentum of the heavy-quark companion are sampled from a thermal distribution in the local rest frame of the fluid. A color-singlet cluster is then constructed, which typically has quite a low invariant-mass, due to SMC, which favors the recombination of collinear partons following the collective expansion of the fireball (see left panel of Fig. <ref>). Light clusters then undergo a 2-body decay into a charmed hadron plus a soft particle (typically a pion), ensuring exact four-momentum conservation (at variance with standard coalescence approaches). Heavier clusters, above an invariant mass around 4 GeV, are treated as Lund strings whose fragmentation is simulated through PYTHIA 6.4. The recombination with diquarks – assumed to be present in the fireball around the QCD transition – enhances the production of HF baryons as compared to standard vacuum fragmentation (see right panel of Fig. <ref>). Notice that breaking SMC leads to a harder cluster-mass distribution and hence to a drop in the Λ_c^+/D^0 ratio, since high invariant-mass strings fragment as in the vacuum. Further results are discussed in the next Section, where a unified description of in-medium hadronization from pp to AA collisions is provided.
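The role of SMC in keeping the cluster invariant mass low can be illustrated with a deliberately simplified numerical estimate. The Python sketch below is not the implementation of Ref. <cit.>: the quark masses, the flow velocity and the assumption that the charm quark simply comoves with its fluid cell are all illustrative choices. It pairs a charm quark with a light antiquark sampled thermally at T_H in the fluid rest frame and boosted with the cell velocity, comparing the case in which the two partners share the same collective velocity (SMC) with the case in which the partner comes from a cell moving in a random direction.

import numpy as np

rng = np.random.default_rng(1)
T_H, m_c, m_q = 0.155, 1.5, 0.3   # GeV: hadronization temperature and illustrative quark masses
v_flow = 0.6                       # illustrative collective (radial-flow) velocity of a fluid cell

def random_direction():
    cos_t, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
    sin_t = np.sqrt(1 - cos_t**2)
    return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

def thermal_momentum(m, T):
    # Draw |p| from the Boltzmann weight p^2 exp(-sqrt(p^2+m^2)/T) on a fine grid, isotropic direction.
    p_grid = np.linspace(1e-3, 10 * T + 2 * m, 4000)
    w = p_grid**2 * np.exp(-np.sqrt(p_grid**2 + m**2) / T)
    return rng.choice(p_grid, p=w / w.sum()) * random_direction()

def boost_x(E, p, v):
    # Boost a four-momentum from the fluid rest frame to the frame where the cell moves with +v along x.
    g = 1.0 / np.sqrt(1 - v**2)
    return g * (E + v * p[0]), np.array([g * (p[0] + v * E), p[1], p[2]])

def mean_cluster_mass(shared_cell, n=2000):
    masses = []
    g = 1.0 / np.sqrt(1 - v_flow**2)
    for _ in range(n):
        pq = thermal_momentum(m_q, T_H)                        # light antiquark, thermal in the cell frame
        Eq, pq = boost_x(np.sqrt(m_q**2 + pq @ pq), pq, v_flow)
        # charm quark carried by a cell flowing along +x (SMC) or along a random direction (no SMC)
        n_c = np.array([1.0, 0.0, 0.0]) if shared_cell else random_direction()
        pc = m_c * g * v_flow * n_c
        Ec = m_c * g
        M2 = (Ec + Eq)**2 - np.sum((pc + pq)**2)
        masses.append(np.sqrt(max(M2, 0.0)))
    return np.mean(masses)

print(f"mean cluster mass with SMC: {mean_cluster_mass(True):.2f} GeV, "
      f"without SMC: {mean_cluster_mass(False):.2f} GeV")

With these arbitrary numbers the collinear pairing yields a lower mean cluster mass, while removing the space-momentum correlation shifts the mass distribution upward, which is the qualitative effect discussed above.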
§ IN-MEDIUM HADRONIZATION ALSO IN SMALL SYSTEMS?
One of the most surprising results involving HF observables obtained in proton-proton collisions is the large value of the Λ_c^+/D^0 ratio <cit.>, strongly enhanced with respect to expectations based on fragmentation fractions extracted from e^+e^- data and compatible with measurements obtained in heavy-ion collisions.
Since in this last case the charmed-baryon enhancement is commonly attributed to a recombination process between the HQ and an opposite color-charge (possibly a diquark) from the hot, deconfined medium generated after the collision, one wonders whether a similar mechanism of hadronization can occur in proton-proton collisions, in which a small droplet of Quark-Gluon-Plasma (QGP) might also be produced.
This was the idea proposed for instance in Refs. <cit.>, which allowed the authors to satisfactorily describe the Λ_c^+/D^0 ratio measured in pp collisions at the LHC. Another attempt to interpret the enhanced production of charmed baryons was based on the Statistical Hadronization Model <cit.>, assuming a thermal population of the different charmed meson and baryon states predicted by the Relativistic Quark Model around a universal hadronization temperature, as already discussed in Sec. <ref>.
Reproducing such observations is a challenge for QCD event generators, but recent Color-Reconnection (CR) models implemented in PYTHIA 8 <cit.> can provide a satisfactory description of the data. The mechanism is illustrated in Fig. <ref>. Strings, i.e. color flux-tubes, have a non-zero transverse thickness, with a radius around 0.5 fm <cit.>. Hence, in a hadronic collision with multiple partonic interactions, some of the strings stretched between the produced partons and the beam remnants may overlap and/or interact, leading to a rearrangement of the confining potential among the partons before hadronization which decreases the energy stored in the color field and favors the production of baryons. Notice that, even if strictly speaking this CR mechanism does not involve the formation of a deconfined medium, what occurs at hadronization is very similar. In fact, the color reconnections which tend to occur are the ones leading to a decrease of the invariant mass of the strings, hence the ones resulting in quite collinear partons as final string endpoints. But this is exactly what occurs, within a hot expanding fireball, in parton recombination approaches implementing SMC: one can consider them an extreme case of CR, in which the memory of the initial color connections is completely lost.
How reasonable is the assumption that the modification of HF production in proton-proton collisions depends on the formation of a small droplet of QGP in which HQ's undergo rescattering and recombination? This issue was addressed in detail in Ref. <cit.>, where the authors performed event-by-event (EBE) simulations involving the generation of a sample of minimum-bias initial conditions arising from the pp collisions, their hydrodynamic evolution, the stochastic propagation of HQ's throughout the fireball and their recombination with light thermal quarks or diquarks once reaching a hadronization hypersurface at T_H=155 MeV.
The initial entropy deposition in the transverse plane was modelled through the TrENTo code <cit.> and the authors checked that, on an EBE basis, one correctly reproduces the measured charged-particle multiplicity per unit rapidity. An example of an initial condition from the minimum-bias sample is displayed in the left panel of Fig. <ref>. The HQ's, generated with the POWHEG-BOX package <cit.>, were then distributed among the different pp events according to the initial entropy-density in the transverse plane. Hence, they tend to populate the hot spots of the events with the largest deposited entropy per unit rapidity dS/dη_s. As a result, considering the minimum-bias sample, only a small fraction around 5% of the HQ's is initially found in the fireball corona below T_H (see right panel of Fig. <ref>). Thus, it is not surprising that the same modifications of HF production found in HIC's and attributed to a hot deconfined medium are also observed in the proton-proton case.
An example of the above medium effects found in <cit.> is shown in Fig. <ref>, where the ratio of various charmed-hadron species with respect to D^0 yields is plotted for different colliding systems: minimum-bias pp collisions, high-multiplicity pp collisions (the 1% of the events with the highest deposited entropy) and central PbPb collisions. As one can see one gets qualitatively similar results, with an enhanced charmed baryon-to-meson ratio which moves to higher momenta going from minimum-bias pp to central PbPb collisions due to the stronger radial-flow of the fireball.
This kind of study is also relevant to correctly quantify medium effects in heavy-ion collisions, where the pp benchmark enters in defining the nuclear modification factor R_ AA(p_T)∝(dN/dp_T)_ AA/(dN/dp_T)_ pp.
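As a trivial numerical aside, R_AA can be evaluated directly from binned spectra; the snippet below assumes that the proportionality above hides the usual 1/⟨N_coll⟩ (or 1/⟨T_AA⟩) normalisation, and the function name is illustrative.

import numpy as np

def nuclear_modification_factor(dn_dpt_aa, dn_dpt_pp, n_coll_avg):
    # R_AA(p_T) = (dN/dp_T)_AA / ( <N_coll> (dN/dp_T)_pp ), evaluated bin by bin in p_T
    return np.asarray(dn_dpt_aa) / (n_coll_avg * np.asarray(dn_dpt_pp))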
As one can see in the left panel of Fig. <ref>, the inclusion of medium effects in pp collisions allows one to correctly reproduce the location and magnitude of the radial-flow peak (i.e. the reshuffling of the particle momenta from low to moderate p_T) and to obtain a species dependence of the results with the same qualitative trend as the experimental data. Notice also (see right panel of Fig. <ref>) that the response to the initial EBE fireball eccentricity leads to a non-vanishing elliptic-flow coefficient v_2 of D^0 mesons both in minimum-bias and in high-multiplicity proton-proton collisions, the response being stronger in the latter case due to the longer lifetime of the fireball. An important fraction of the flow is actually acquired at hadronization, due to recombination with nearby thermal particles.
§ CONCLUSIONS
Even if hadronization is a non-perturbative process associated with one of the most characteristic but hardest-to-study features of strong interactions – color confinement – there is growing evidence that in any hadronic collision it occurs via some form of recombination involving partons which initially were not necessarily color-connected, but which eventually are sufficiently close in coordinate and momentum space and have the proper color structure to give rise to low invariant-mass composite color-singlet objects. The advantage of focusing on HF hadrons is that in this case one knows exactly at least one of the parents of the final particle, since charm and bottom quarks can be produced only in the initial hard scattering or during the parton-shower stage, but not in the final state via excitation of quark-antiquark pairs from the vacuum.
We showed how assuming the formation of a small droplet of QGP also in pp collisions, in which HQ's can undergo rescattering and hadronization, allows one to provide a unified description of a wide set of experimental data concerning HF production, from minimum-bias proton-proton to central nucleus-nucleus collisions. So far most of the studies in the literature have been limited to charm. The extension of similar analyses to the bottom sector <cit.> and also to multi-charmed hadrons <cit.> is currently being addressed by several groups and will contribute to improving our knowledge both of the hadronization mechanism and of the HF transport coefficients in the QGP.
|
http://arxiv.org/abs/2307.03947v1 | 20230708101048 | Hyperelliptic Gorenstein curves and logarithmic differentials | [
"Luca Battistella",
"Sebastian Bozlee"
] | math.AG | [
"math.AG",
"math.GT",
"14H20 (Primary) 14H10 (Secondary)"
] |
|
http://arxiv.org/abs/2307.05309v1 | 20230711145706 | Symmetric monoidal equivalences of quantum field theories in dimension two and Frobenius algebras | [
"Pablo S. Ocal"
] | math.QA | [
"math.QA",
"57R56, 18M05, 57K16, 18M15, 16L60"
] |
We show that the canonical equivalences of categories between 2-dimensional (unoriented) topological quantum field theories valued in a symmetric monoidal category and (extended) commutative Frobenius algebras in that symmetric monoidal category are symmetric monoidal equivalences. As an application, we recover that the invariant of 2-dimensional manifolds given by the tensor product of (extended) commutative Frobenius algebras in a symmetric tensor category is the product of the invariants given by each of the algebras.
§ INTRODUCTION
There abound references formalizing the folklore statement that 2-dimensional topological quantum field theories with values in a symmetric monoidal category and commutative Frobenius algebras in are equivalent, see for example <cit.>. A similar bijection between unoriented 2-dimensional topological quantum field theories with values in a symmetric monoidal category and extended commutative Frobenius algebras in was given in <cit.>. Here we fill a gap in the existing literature by showing that these correspondences are in fact symmetric monoidal equivalences. We also provide a cute application by recovering the multiplicativity of these invariants when the TQFTs take values in a symmetric tensor category, and remark how results of analogous generality for 2-dimensional homotopy quantum field theories cannot be obtained.
§ PRELIMINARIES
All the categories in this note will be symmetric monoidal as in <cit.>, and using the coherence theorem for symmetric monoidal categories we regard all associators and unitors as identities. A symmetric monoidal equivalence of symmetric monoidal categories is a symmetric monoidal functor which is also an equivalence of categories (see <cit.>, this is the particular case of a braided monoidal equivalence <cit.> between categories having a symmetric braiding). Given categories and , then there is a category SymMonCat(,) of symmetric monoidal functors and symmetric monoidal natural transformations between them <cit.> (compare <cit.> with <cit.>). Given F and G in SymMonCat(,) then F⊗ G:→ given by A↦ F(A)⊗ G(A) on objects of and h↦ F(h)⊗ G(h) on morphisms of is a symmetric monoidal functor. The monoidal unit is the functor sending every object to 1_ and every morphism to 𝕀1_. The associator, unitors, and braiding are inherited from . Altogether, this gives SymMonCat(,) a symmetric monoidal structure. The categories 2Cob and 2UCob of 2-dimensional oriented and unoriented cobordisms are symmetric monoidal, with monoidal structure given by disjoint union (see <cit.> and <cit.>). A 2-d TQFT <cit.> with values in is a symmetric monoidal functor from 2Cob to , these form the category SymMonCat(2Cob,). An unoriented 2-d TQFT <cit.> with values in is a symmetric monoidal functor from 2UCob to , these form the category SymMonCat(2UCob,).
We will be using the conventions in <cit.> for algebra, coalgebra, and Frobenius algebra in a category . Given A and B Frobenius algebras in , a morphism of Frobenius algebras in is a morphism f:A→ B in making the following diagrams commute (compare with <cit.>).
[The diagrams amount to the identities f∘ u_A = u_B, f∘ m_A = m_B∘ (f⊗ f), ϵ_B∘ f = ϵ_A, and Δ_B∘ f = (f⊗ f)∘Δ_A.]
An extended Frobenius algebra in is a tuple (A,m,u,Δ,ϵ,ϕ,θ) where (A,m,u,Δ,ϵ) is a Frobenius algebra in , and ϕ:A→ A and θ:1→ A are morphisms of Frobenius algebras in making the following diagrams commute (see <cit.>).
[The diagrams amount to the identities ϕ∘ϕ = 𝕀_A, ϕ∘ m∘ (θ⊗𝕀_A) = m∘ (θ⊗𝕀_A), and m∘ (θ⊗θ) = m∘ (ϕ⊗𝕀_A)∘Δ∘ u∘ l_1.]
Given A and B extended Frobenius algebras in , a morphism of extended Frobenius algebras in is a morphism of Frobenius algebras f:A→ B in making the following diagrams commute (compare with <cit.>).
[The diagrams amount to the identities f∘θ_A = θ_B and ϕ_B∘ f = f∘ϕ_A.]
There is a category cFrob() of commutative Frobenius algebras and morphisms of Frobenius algebras in , which has a subcategory cExtFrob() of commutative extended Frobenius algebras and morphisms of extended Frobenius algebras in . Given A and B in cFrob() then, by tedious diagram completion, the object A⊗ B with multiplication (m_A ⊗ m_B)(𝕀A⊗ c_A,B⊗𝕀B), unit (u_A⊗ u_B)(l^-1_1_), comultiplication (𝕀A⊗ c^-1_A,B⊗𝕀B)(Δ_A⊗Δ_B), and counit (l_1_)(ϵ_A⊗ϵ_B) is a commutative Frobenius algebra in . When A and B are also in cExtFrob() then a similar procedure shows that the involution ϕ_A⊗ϕ_B and the distinguished element (θ_A⊗θ_B)(l^-1_1_) make the above A⊗ B into an extended commutative Frobenius algebra. Thus, the inclusions of categories cExtFrob() ⊆cFrob() ⊆ endow cFrob() and cExtFrob() with a symmetric monoidal structure. We refer to <cit.> for similar techniques and for Frobenius algebras obtained by replacing the braiding c_A,B with more general isomorphisms in .
§ FROM CORRESPONDENCES TO SYMMETRIC MONOIDAL EQUIVALENCES
The canonical equivalence Φ:SymMonCat(2Cob,) ≃cFrob() is a symmetric monoidal equivalence.
The assignment Φ, mapping a TQFT to its evaluation at the circle, F ↦ F(𝕊^1), and mapping a symmetric monoidal natural transformation to its component at the circle, (η:F⇒ G) ↦ (η_𝕊^1:F(𝕊^1)→ G(𝕊^1)), is the canonical equivalence by <cit.> and <cit.>. The unit 1_2d TQFT:2Cob→ is mapped to the unit 1_, and the tensor product of TQFTs F⊗ G is mapped to the tensor product of their respective evaluations F(𝕊^1)⊗ G(𝕊^1), whence taking J_F,G = 𝕀F(𝕊^1)⊗ G(𝕊^1) makes Φ into a symmetric monoidal functor.
There is a canonical symmetric monoidal equivalence Φ:SymMonCat(2UCob,) ≃cExtFrob().
There is a bijective correspondence between isomorphism classes of functors F:2UCob→ and commutative extended Frobenius algebras in by <cit.> and <cit.>. Let Φ be the same assignment as in the oriented case, so an unoriented TQFT F is mapped to a commutative extended Frobenius algebra F(𝕊^1), and a symmetric monoidal natural transformation η:F⇒ G is mapped to η_𝕊^1:F(𝕊^1)→ G(𝕊^1) a morphism in . Given A a commutative extended Frobenius algebra in then up to isomorphism there is a unique symmetric monoidal functor F:2UCob→ such that F(𝕊^1) = A, whence Φ is canonical. Following the reasoning in <cit.>, the naturality of η with respect to both pairs of pants, cup, and cap, implies that η_𝕊^1 is a morphism of Frobenius algebras in . This naturality with respect to the orientation reversing cylinder implies that η_𝕊^1ϕ_F(𝕊^1) = ϕ_G(𝕊^1)η_𝕊^1, and naturality with respect to the punctured projective sphere seen as a morphism between the empty manifold and the circle implies that η_𝕊^1θ_F(𝕊^1) = θ_G(𝕊^1). Thus η_𝕊^1 is a morphism of extended Frobenius algebras in , and Φ is an equivalence of categories as in the oriented case. Since the monoidal structure of SymMonCat(2UCob,) is inherited from the monoidal structure of SymMonCat(2Cob,), the same reasoning as in Theorem <ref> makes Φ into a symmetric monoidal functor.
When is a symmetric tensor category <cit.>, a 2-d (oriented or unoriented) TQFT F gives a numerical invariant of (oriented or unoriented) surfaces M by regarding them as morphisms from the empty manifold to the empty manifold <cit.>, whence F(M) ∈End(1)≅𝕜 can be identified with a scalar in 𝕜, an algebraically closed field. As a consequence of the (symmetric) monoidal structure of SymMonCat(2Cob,) and SymMonCat(2UCob,), we obtain that, given two TQFTs F and G, the invariant of surfaces given by their tensor product F⊗ G as TQFTs is precisely (F⊗ G)(M) = F(M)⊗ G(M), namely F(M)G(M), the (commutative) product in 𝕜 of the invariants associated to F and G. This reasoning holds in any dimension, whence the invariant given by the tensor product of n-dimensional TQFTs is the product of the invariants given by the individual n-dimensional TQFTs. This can be translated via the canonical equivalences of Theorems <ref> and <ref> into the usual statement that the invariant given by the tensor product of Frobenius algebras is the product of the invariants given by each algebra. Our proof differs from the usual one since it relies neither on cutting the surface nor on the existence of a normal form.
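For concreteness, this multiplicativity can be checked on the closed orientable surface Σ_g of genus g (a standard special case, recorded here only as an illustration): decomposing Σ_g into a cap, g handles and a cup, any 2-d TQFT F with F(𝕊^1)=A evaluates as
F(Σ_g) = ϵ_A∘ (m_A∘Δ_A)^g∘ u_A ∈End(1)≅𝕜,
and since m∘Δ computed in the tensor product Frobenius algebra A⊗ B equals (m_A∘Δ_A)⊗ (m_B∘Δ_B), one immediately gets (F⊗ G)(Σ_g) = F(Σ_g) G(Σ_g).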
Unfortunately, in full generality, other equivalences similar in nature to the ones treated above are not symmetric monoidal equivalences. For example, <cit.> exhibits an equivalence between 2-dimensional homotopy quantum field theories over a topological space X <cit.> and twisted Frobenius algebras <cit.>. These notions can be effortlessly generalized to admit values in and to be certain objects in G-graded categories ⊕_g∈ G_g, respectively. The category SymMonCat(2Bord(X),) of 2-d HQFTs is always symmetric monoidal (whence the invariant of the product is the product of the invariants, as before). However, already for = 𝕜-Vec_G the subcategory of twisted Frobenius algebras does not inherit a monoidal structure because twisted associativity is not preserved.
|
http://arxiv.org/abs/2307.04523v1 | 20230710124620 | 1D non-LTE corrections for chemical abundance analyses of very metal-poor stars | [
"L. Mashonkina",
"Yu. Pakhomov",
"T. Sitnova",
"A. Smogorzhevskii",
"P. Jablonka",
"V. Hill"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
Detailed chemical abundances of very metal-poor (VMP, [Fe/H] < -2) stars are important for better understanding the First Stars, early star formation, and the chemical enrichment of galaxies. Large ongoing and upcoming high-resolution spectroscopic surveys provide a wealth of material that needs to be carefully analysed. For VMP stars, elemental abundances should be derived based on the non-local thermodynamic equilibrium (non-LTE = NLTE) line formation because low metal abundances and low electron number density in the atmosphere produce the physical conditions favourable for departures from LTE. Galactic archaeology research requires homogeneous determinations of chemical abundances. For this purpose, we present grids of the 1D-NLTE abundance corrections for lines of Na, Mg, Ca, Ca, Ti, Fe, Zn, Zn, Sr, and Ba in the range of atmospheric parameters that represent VMP stars on various evolutionary stages and cover effective temperatures from 4000 to 6500 K, surface gravities from = 0.5 to = 5.0, and metallicities -5.0 ≤ [Fe/H] ≤ -2.0. The data are publicly available, and we provide the tools for interpolating in the grids online.
line: formation – stars: abundances – stars: atmospheres.
§ INTRODUCTION
Very metal-poor (VMP, [Fe/H][In the classical notation, where [X/H] = log(N_ X/N_
H)_star - log(N_ X/N_ H)_⊙.] < -2) stars are fossils of the early epochs of star formation in their parent galaxy.
Their detailed elemental abundances are of extreme importance for understanding the nature of the First Stars, uncovering the initial mass function and the metallicity distribution function of the galaxy, and testing the predictions of nucleosynthesis theory and the galactic chemical evolution models <cit.>.
Since 1980th, the number of discovered VMP star candidates has grown tremendously thanks to the wide-angle spectroscopic and photometric surveys, such as HK <cit.>, HES <cit.>, RAVE <cit.>, SMSS <cit.>, SEGUE/SDSS <cit.>, LAMOST <cit.>.
The survey Pristine has been specially designed for efficient searching VMP stars <cit.>. Using the narrow-band photometric filter centered on the Ca H & K lines makes possible to successfully predict stellar metallicities <cit.>.
The number of confirmed VMP stars is substantially lower than the number of candidates because the verification of very low metallicity requires the high-resolution follow-ups. The SAGA (Stellar Abundances for Galactic Archaeology) database <cit.> includes about 1390 Galactic stars with [Fe/H] ≤ -2, for which their metallicities were derived from the R = λ /Δλ≥ 20 000 spectra. The 470 stars of them have [Fe/H] ≤ -3, and 28 stars are ultra metal-poor (UMP, [Fe/H] ≤ -4). A burst in the number of VMP stars with detailed elemental abundances derived is expected with the launch of the WEAVE (WHT Enhanced Area Velocity Explorer) project <cit.>. A vast amount of spectral data will be taken with the coming 4-metre Multi-Object Spectroscopic Telescope <cit.>.
Abundance ratios among the elements of different origin, such as Mg and Fe, for stellar samples covering broad metallicity ranges serve as the observational material for the galactic archaeology research.
The simplest and widely applied method to derive elemental abundances is based on using one-dimensional (1D) model atmospheres and the assumption of local thermodynamic equilibrium (LTE), see, for example, the abundance results from the high-resolution spectroscopic survey APOGEE <cit.>.
In metal-poor atmospheres, in particular, of cool giants, low total gas pressure and low electron number density lead to departures from LTE that grow towards lower metallicity due to decreasing collisional rates and increasing radiative rates as a result of dropping ultra-violet (UV) opacity. The non-local thermodynamic equilibrium (non-LTE = NLTE) line formation calculations show that the NLTE effects for lines of one chemical species and for different chemical species are different in magnitude and sign, depending on the stellar parameters and element abundances. Ignoring the NLTE effects leads to a distorted picture of the galactic abundance trends and thus to wrong conclusions about the galactic chemical evolution.
The NLTE abundance from a given line in a given star can be obtained by adding the theoretical NLTE abundance correction, which corresponds to the star's atmospheric parameters, to the LTE abundance derived from the observed spectrum: NLTE = LTE + Δ_ NLTE. For a number of chemical species, Δ_ NLTE can be taken online from the websites
* INSPECT (<http://www.inspect-stars.com>) for lines of Li, Na, Mg, Ti, Fe-, and Sr,
* NLTE_MPIA (<http://nlte.mpia.de/>) for lines of O, Mg, Si, Ca-, Ti-, Cr, Mn, Fe-, and Co,
* <http://spectrum.inasan.ru/nLTE/> for lines of Ca, Ti-, and Fe.
Extensive grids of the NLTE abundance corrections are provided by <cit.>, <cit.>, and <cit.>.
The NLTE abundance corrections for the selected lines of S and Zn in the limited set of atmospheric models were computed by <cit.>. <cit.> report the NLTE to LTE equivalent width ratios for lines of Mg, Ca, and Ca in the grid of model atmospheres representing cool giants.
A different approach is to determine the NLTE abundance directly, by using the synthetic spectrum method and the precomputed departure coefficients, b_i = n_i^ NLTE/n_i^ LTE, for the chemical species under investigation. Here, n_i^ NLTE and n_i^ LTE are the statistical equilibrium and the Saha-Boltzmann number densities, respectively, for the energy level i.
<cit.> provide the grids of b_i
for 13 chemical species (neutral H, Li, C, N, O, Na, Mg, Al, Si, K, Ca, Mn; singly ionized Mn, Ba)
across a grid of the classical one-dimensional (1D) MARCS model atmospheres <cit.>.
This approach is based on using 1D-NLTE spectral synthesis codes, such as SME <cit.>, synthV_NLTE <cit.>, and Turbospectrum <cit.>.
An approach based on three-dimensional (3D) model atmospheres combined with the NLTE line formation is extremely time-consuming and, to date, has been applied only to a few chemical species in the Sun <cit.> and the benchmark VMP stars <cit.>. Grids of the 3D-NLTE abundance corrections were computed for lines of O <cit.> and Fe- <cit.> using the STAGGER grid of model atmospheres for a limited range of effective temperatures ( = 5000-6500 K), surface gravities ( = 3.0-4.5), and metallicities ([Fe/H] = 0 to -3). For the Li lines, grids of the 3D-NLTE abundance corrections were computed by <cit.> and <cit.> with the CO^5BOLD and STAGGER model atmospheres, respectively.
The 3D-NLTE calculations are available for a small number of the chemical elements observed in VMP stars, and they cover only in part the range of relevant atmospheric parameters. Furthermore, as shown by <cit.> for Fe, the abundance differences between 3D-NLTE and 1D-NLTE are generally less severe compared with the differences between 3D-NLTE and 1D-LTE and reach 0.2 dex, at maximum (see Figs. 5-7 in their paper). Therefore, calculations of the 1D-NLTE abundance corrections for extended linelists across the stellar parameter range which represents the VMP stars make sense, and they are useful for the galactic archaeology research. Availability and comparison of Δ_ NLTE from different independent studies increase a credit of confidence in the spectroscopic NLTE analyses.
This paper presents the 1D-NLTE abundance corrections for lines of 10 chemical species in the grid of MARCS model atmospheres with = 4000-6500 K, = 0.5-5.0, and -5 ≤ [Fe/H] ≤ -2.
We provide the tools for calculating online the NLTE abundance correction(s) for given line(s) and given atmospheric parameters by interpolating in the precomputed grids.
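For a typical off-line use of the downloaded grids, the correction for a given line can be obtained by multi-linear interpolation between the grid nodes. The short Python sketch below only illustrates the idea: the file name, array layout and numerical values are placeholders rather than the actual format of the distributed tables, and a full rectangular (T_ eff, log g, [Fe/H]) grid is assumed for simplicity, whereas the published grids cover only the parameter combinations listed in Sect. <ref>.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grid axes; the real grids use the MARCS nodes
teff_nodes = np.arange(4000.0, 6501.0, 250.0)   # 4000 ... 6500 K
logg_nodes = np.arange(0.5, 5.01, 0.5)          # 0.5 ... 5.0
feh_nodes = np.arange(-5.0, -1.99, 0.5)         # -5.0 ... -2.0

# delta[i, j, k] = NLTE correction (dex) of one line, e.g. Mg 5528 A,
# loaded from a downloaded table (hypothetical file name and layout)
delta = np.load("delta_nlte_mg5528.npy")

interp = RegularGridInterpolator((teff_nodes, logg_nodes, feh_nodes),
                                 delta, method="linear")

a_lte = 5.20                                    # illustrative LTE abundance
corr = float(interp([[5250.0, 3.0, -2.7]])[0])  # correction at the star's parameters
print(f"Delta_NLTE = {corr:+.3f} dex, NLTE abundance = {a_lte + corr:.3f}")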
Potential users may benefit from the following advantages of our data compared with the grids of 1D-NLTE abundance corrections available in the literature.
* Only this study provides extended grids of the NLTE abundance corrections for lines of Zn and Ba.
* For Ca and Ca, the NLTE calculations were performed with advanced treatment of the Ca + H and Ca + H collisions, following <cit.> and <cit.>, respectively.
* For Zn and Sr, our results are based on advanced treatment of collisions with H, following <cit.> and <cit.>. Our grids cover the broader range of , , and [Fe/H] compared to that for Zn in <cit.> and for Sr in the INSPECT database.
* For Ca–Ca, Fe–Fe, and Na, the developed 1D-NLTE methods have been verified with spectroscopic analyses of VMP stars and have been shown to yield reliable results.
The paper is organised as follows. Section <ref> describes our NLTE methods and their verification with observations of VMP stars. New grids of the NLTE abundance corrections are presented in Sect. <ref>. In Sect. <ref>, we compare our calculations with those from other studies. Our recommendations and final remarks are given in Sect. <ref>.
§ NLTE METHODS AND THEIR VERIFICATION
The present investigation is based on the NLTE methods developed and tested in our earlier studies.
Details of the adopted atomic data and the NLTE line formation for Na, Mg, Ca-, Ti-Ti, Fe-, Zn-, Sr, and Ba can be found in the papers cited in Table <ref>.
It is important to note that collisions with hydrogen atoms were treated with the data based on quantum-mechanical calculations.
The exceptions are Ti and Fe-, for which we adopted the Drawinian rates <cit.> scaled by an empirically estimated factor of = 1 <cit.> and = 0.5 <cit.>, respectively.
The code detail <cit.> with the revised opacity package <cit.> was used to solve the coupled radiative transfer and statistical equilibrium (SE) equations. The obtained LTE and NLTE level populations were then implemented in the code linec <cit.> that, for each given spectral line, computes the NLTE curve of growth and finds the shift in the NLTE abundance, which is required to reproduce the LTE equivalent width. Such an abundance shift is referred to as the NLTE abundance correction, Δ_ NLTE = NLTE-LTE.
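Schematically, the last step performed by linec is a one-dimensional root-finding problem on the two curves of growth. The Python fragment below sketches this step only for illustration: the tabulated curves stand for the output of the detail/linec codes, and the file names and numbers are purely hypothetical.

import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

# Curves of growth of one line: equivalent width versus abundance,
# tabulated on a common abundance grid (placeholder arrays and files).
abund = np.linspace(4.0, 6.0, 41)
ew_lte = np.loadtxt("cog_lte.dat")
ew_nlte = np.loadtxt("cog_nlte.dat")

cog_lte = interp1d(abund, ew_lte, kind="cubic")
cog_nlte = interp1d(abund, ew_nlte, kind="cubic")

def delta_nlte(a_lte):
    """Abundance shift such that the NLTE curve of growth reproduces
    the LTE equivalent width computed at the abundance a_lte."""
    target = float(cog_lte(a_lte))
    a_nlte = brentq(lambda a: float(cog_nlte(a)) - target, abund[0], abund[-1])
    return a_nlte - a_lte

print(f"Delta_NLTE = {delta_nlte(5.0):+.3f} dex")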
All the calculations were performed using the classical LTE model atmospheres with the standard chemical composition <cit.>, as provided by the MARCS website[<http://marcs.astro.uu.se>].
Below we provide evidence for a correct treatment of the NLTE line formation for Fe-Fe, Ca-Ca, and Na in the atmospheres of VMP stars.
§.§ Spectroscopic versus Gaia eDR3 distances
Iron is represented in the VMP stars by the two ionization stages, which are used in many studies to determine spectroscopic surface gravities (g_ Sp) from the requirement that abundances from lines of Fe and Fe in a given star must be equal. The surface gravity can also be derived from distance; this is the distance-based surface gravity, g_ d. If g_ Sp based on the NLTE calculations and g_ d are obtained to be consistent within the error bars, this means that the calculations for Fe-Fe are correct.
<cit.> and <cit.> derived the surface gravities for the two Galactic stellar samples using photometric effective temperatures and the NLTE analysis of the Fe and Fe lines. Using the Gaia eDR3 parallaxes corrected according to
<cit.>, we calculated distances from the maximum
of the distance probability distribution function, as recommended by
<cit.>, and then
_ d from the relation
log g_ d = -10.607 +log M+4 log T_ eff - 0.4 [4.74 - (V + BC + 5 - 5 log d - A_V)]
Here, M is a star's mass, A_V is an interstellar extintion in the V-band, BC is a bolometric correction which was calculated by interpolation in the grid of <cit.>[<https://wwwuser.oats.inaf.it/castelli/colors/bcp.html>]. The atmospheric parameters and A_V were taken from <cit.> and <cit.>. Stellar masses and V magnitudes for the <cit.> sample are listed in their Table 5 and 2, respectively. For the stellar sample of <cit.>, the V magnitudes are listed in their Table 5. For each VMP giant, we adopt M = 0.8 M_⊙.
Statistical error of the distance-based surface gravity was computed as the quadratic sum of errors of the star's distance, effective temperature, mass, visual magnitude, and BC. We assumed the stellar mass error as σ_M = 0.1 M_⊙ and took the effective temperature errors, σ_T, from <cit.> and <cit.>. The total error is dominated by σ_M for the nearby stars and by the distance error, σ_ d, for the distant objects.
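For reference, the relation above for log g_ d and the error budget just described amount to a few lines of code. In the sketch below the input values are purely illustrative (they are not taken from Table <ref>), and the adopted uncertainties of V and BC are assumptions of this example only.

import math

def logg_from_distance(mass_msun, teff, v_mag, bc, dist_pc, a_v):
    """Distance-based surface gravity log g_d; the solar constants are
    absorbed in the -10.607 term."""
    m_bol = v_mag + bc + 5.0 - 5.0 * math.log10(dist_pc) - a_v
    return (-10.607 + math.log10(mass_msun) + 4.0 * math.log10(teff)
            - 0.4 * (4.74 - m_bol))

def logg_error(mass_msun, sigma_m, teff, sigma_t, dist_pc, sigma_d,
               sigma_v=0.01, sigma_bc=0.02):
    """Statistical error of log g_d: quadratic sum of the individual terms
    (sigma_v and sigma_bc are assumed values for this example)."""
    ln10 = math.log(10.0)
    terms = (sigma_m / (mass_msun * ln10),       # stellar mass
             4.0 * sigma_t / (teff * ln10),      # effective temperature
             2.0 * sigma_d / (dist_pc * ln10),   # distance (0.4 * 5 = 2)
             0.4 * sigma_v,                      # V magnitude
             0.4 * sigma_bc)                     # bolometric correction
    return math.sqrt(sum(t * t for t in terms))

# Illustrative numbers only:
print(round(logg_from_distance(0.8, 4800.0, 9.5, -0.35, 600.0, 0.05), 2))
print(round(logg_error(0.8, 0.1, 4800.0, 70.0, 600.0, 20.0), 2))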
Table <ref> lists the obtained Gaia eDR3 distances and _ d values, as well as the spectroscopic surface gravities from <cit.> and <cit.>.
The differences log g_ Sp – log g_ d are shown in Fig. <ref>. The majority of our stars lie within 631 pc from the Sun, and their spectroscopic surface gravities are found to be consistent within the error bars with the distance-based ones. A clear outlier is HD 8724, with log g_ Sp – log g_ d = -0.48. We note that the discrepancy between log g_ Sp and log g_ d has been reduced compared with the -0.76 dex obtained for HD 8724 by <cit.> using the Gaia DR1 parallax <cit.>. However, it is still greater than the error of the spectroscopic surface gravity, σ_log g (sp) = 0.24 dex. A formal calculation of σ_log g (d) leads to 0.07 dex (Table <ref>); however, astrometric_excess_noise_sig = 6.005 and astrometric_chi2_al = 419.84 indicated by <cit.> for HD 8724 suggest an unreliable solution for the Gaia eDR3 parallax.
For 15 distant stars, with d > 2 kpc, the errors of log g_ d grow. Nevertheless, the spectroscopic surface gravities are consistent, on average, with the distance-based ones.
Thus, our NLTE method for Fe/Fe is reliable and can be used for determinations of surface gravities, in particular, for distant stars with large distance errors.
§.§ Ca versus Ca
A firm argument for a correct treatment of the NLTE line formation for Ca-Ca can be obtained from a comparison of the NLTE abundances from lines of the two ionization stages. <cit.> report the LTE and NLTE abundances from lines of Ca and Ca 8498 Å for five reference stars with well-determined atmospheric parameters in the -2.7 < [Fe/H] < -1.3 metallicity range and find fairly consistent NLTE abundances, while the LTE abundance difference between Ca and Ca 8498 Å grows in absolute value towards lower metallicity and reaches -0.45 dex for [Fe/H] = -2.62, see their Fig. 6.
<cit.> studied the UMP stars and improved their atmospheric parameters using an extensive method based on the colour- calibrations, NLTE fits of the Balmer line wings, and Gaia DR2 trigonometric parallaxes. For each star, the derived effective temperature and surface gravity were checked by inspecting the Ca/Ca NLTE ionization equilibrium and by comparing the star's position in the - plane
with the theoretical isochrones of 12 and 13 Gyr.
The abundance differences between the two ionization stages from the NLTE and LTE calculations of <cit.> and <cit.> are displayed in Fig. <ref>. Nowhere, the NLTE abundance difference Ca – Ca exceeds 0.15 dex, while the LTE abundances from lines of Ca are systematically lower compared with that from Ca, by up to 0.85 dex. Thus, the NLTE results obtained using our NLTE method for Ca- <cit.> can be trusted.
§.§ Na resonance lines in VMP stars
Figure <ref> displays the [Na/Mg] abundance ratios in the wide range of metallicities from the LTE and NLTE calculations of <cit.> and <cit.>. For [Fe/H] > -1, both LTE and NLTE data form a well-defined upward trend, with a small star-to-star scatter for the stars of close metallicity. The situation is very different in LTE and NLTE for [Fe/H] < -1. In LTE, the [Na/Mg] ratios reveal a big scatter, which is substantially reduced in the NLTE calculations. An explanation lies mostly with the NLTE effects for lines of Na. For Mg, the differences between the NLTE and LTE abundances do not exceed 0.1 dex.
For [Fe/H] > -1, the Na abundances were derived by <cit.> from the Na 5682, 5688, 6154, 6160 5895 Å subordinate lines, which are slightly affected by NLTE, with negative Δ_ NLTE of ≾0.1 dex, in absolute value.
In the lower metallicity stars, sodium is observed in the Na 5889, 5895 Å resonance lines only. They are subject to strong NLTE effects, with Δ_ NLTE depending on the atmospheric parameters and the Na abundance itself. For different stars, Δ_ NLTE varies between -0.1 and -0.6 dex <cit.>. Removing the star-to-star scatter of the [Na/Mg] NLTE abundance ratios for [Fe/H] < -1 can serve as a circumstantial evidence for the line formation to be treated correctly.
Taking advantage of the obtained Galactic NLTE [Na/Mg] trend, we found that the modern nucleosynthesis and Galactic chemical evolution (GCE) calculations, which are represented in Fig. <ref> (right panel) by the GCE model of <cit.>, correctly predict the contributions of the core-collapse supernovae (SNeII) and the asymptotic giant branch (AGB) stars to the production of Mg and Na over the Galaxy's history.
§ GRIDS OF THE NLTE ABUNDANCE CORRECTIONS
By request of the Pristine collaboration <cit.>, the NLTE abundance corrections were computed for the lines which can be detected in spectra of VMP stars, that is, for the [Fe/H] ≤ -2 range. We focused, in particular, on the spectral ranges observed by WEAVE[https://ingconfluence.ing.iac.es/confluence/display/WEAV/Science], that is 4040-4650 Å, 4750-5450 Å, and 5950-6850 Å for the high-resolution (R = λ/Δλ = 20 000) observations and 3660-9590 Å for the R = 5000 observations, and 4MOST [https://www.4most.eu/cms], that is 3926-4350 Å, 5160-5730 Å, and 6100-6790 Å for the high-resolution spectrograph (HRS, R ≃ 20 000) and 3700-9500 Å for the low-resolution spectrograph (LRS, R ≃ 4000-7500). We selected
4 / 15 / 28 / 4 / 54 / 262 / 7 / 2 / 2 / 5 lines of Na / Mg / Ca / Ca / Ti / Fe / Zn / Zn / Sr / Ba.
The range of atmospheric parameters was selected to represent metal-poor stars on various evolutionary stages, from the main sequence to the red giant branch (RGB); see the isochrone of 12 Gyr, [Fe/H] = -2, and [α/Fe] = 0.4 from <cit.> in Fig. <ref>. The NLTE calculations were performed in the following ranges of effective temperature and surface gravity:
= 4000 to 4750 K for = 0.5 to 2.5;
= 5000 K for = 0.5 to 5.0;
= 5250 to 5500 K for = 2.0 to 5.0;
= 5750 to 6500 K for = 3.0 to 5.0.
Metallicity range is -5.0 ≤ [Fe/H] ≤ -2.0.
The nodes of the NLTE abundance correction grids correspond to the nodes of the MARCS model grid. Therefore, varies with a step of 250 K, with a step of 0.5, and [Fe/H] with a step of 0.5. The MARCS website does not provide models with [Fe/H] = -3.5 and -4.5. The missing models were calculated by interpolating between the [Fe/H] = -3 and -4 and between the [Fe/H] = -4 and -5 models. We applied the FORTRAN-based interpolation routine written by Thomas Masseron and available on the MARCS website.
For Fe- and Zn-, the SE calculations were performed with [Element/Fe] = 0.0;
for Mg and Ti with [Element/Fe] = 0.4 and 0.3, respectively.
For Na, Ca, Ca, Sr, and Ba, the NLTE effects are sensitive not only to T_ eff, log g, and [Fe/H], but also to the element abundance used in the SE calculations. Therefore, the grids of the NLTE corrections are 4-dimensional, where [Element/Fe] takes the following values:
[Na/Fe] = -0.6, -0.3, 0.0, 0.3, 0.6;
[Ca/Fe] = 0.0 and 0.4;
[Sr/Fe] = -1.0, -0.5, 0.0, 0.5, 1.0 for the dwarf model atmospheres,
[Sr/Fe] = -1.5, -1.0, -0.5, 0.0, 0.5 for the giant model atmospheres;
[Ba/Fe] = -1.0, -0.5, 0.0, 0.5 for the dwarf model atmospheres,
[Ba/Fe] = -1.5, -1.0, -0.5, 0.0, 0.5 for the giant model atmospheres.
The website INASAN_NLTE[<http://spectrum.inasan.ru/nLTE2/>] provides the tools for calculating online the NLTE abundance correction(s) for given spectral line(s) and atmospheric parameters , , [Fe/H], [Element/Fe] by an interpolation in the NLTE correction grids.
§.§ NLTE corrections depending on atmospheric parameters
Figure <ref> displays the NLTE abundance corrections predicted for representative lines of different chemical species in VMP stars on different evolutionary stages, namely, the turn-off (TO, / = 6250/4.0), the bottom red giant branch (bRGB, 5250/3.0), and the RGB (4500/1.5). For each line, Δ_ NLTE depends on , and [Fe/H]. Therefore, neglecting the NLTE effects distorts the galactic abundance trends.
In the same atmosphere, different lines have the NLTE corrections of different magnitude and sign. Therefore,
the star's element abundance pattern derived under the LTE assumption does not correctly reflect the relative contributions of different nucleosynthesis sources.
The sign of Δ_ NLTE is determined by the mechanisms that produce the departures from LTE for lines of a given species in given physical conditions.
In the stellar parameter range with which we are concerned, Mg, Ca, and Fe are minority species in the line formation layers, and they are subject to ultra-violet (UV) overionization, resulting in depleted atomic level populations, weakened lines, and positive NLTE abundance corrections <cit.>. The intensity of the ionizing UV radiation increases with decreasing metallicity, resulting in growing departures from LTE.
Na is also the minority species, however, due to low photoionization cross-sections of its ground state, the main NLTE mechanism is a "photon suction" process <cit.> which produces overpopulation of the neutral stage, resulting in strengthened Na lines and negative NLTE abundance corrections. Photon suction is connected with collisional processes that couple the high-excitation levels of Na with the singly ionized stage. In contrast to the radiative processes, an influence of collisional processes on the statistical equilibrium of Na is weakened with decreasing metallicity, and Δ_ NLTE for Na 5895 Å decreases in absolute value and becomes even slightly positive for [Fe/H] ≤ -4.5 in the 4500/1.5 models.
The NLTE effects for the majority species Ca, Ti, Sr, and Ba are driven by the bound-bound (b-b) transitions. For an individual line, the sign and magnitude of Δ_ NLTE depend on the physical conditions and the transition where the line arises. Ca 8498 Å arises in the transition 3d2D3/2 – 4p2P∘3/2. The upper level is depopulated in the atmospheric layers where the core of Ca 8498 Å forms via photon loss in the wings of the Ca 3933, 3968 Å resonance lines and the 8498, 8542, 8668 Å infra-red (IR) triplet lines. The Ca 8498 Å line core is strengthened because the line
source function drops below the Planck function, resulting in negative Δ_ NLTE <cit.>. In the [Fe/H] = -2 models, Ca 8498 Å is very strong with a total absorption dominated by the line wings that form in deep atmospheric layers where the NLTE effects are small. With decreasing [Fe/H] (and Ca abundance, too) the line wings are weakened, and Δ_ NLTE grows in absolute value. In the 6250/4.0 and 5250/3.0 models, Δ_ NLTE decreases in absolute value for [Fe/H] ≤ -3.5 because of shifting the formation depths for Ca 8498 Å in deep atmospheric layers.
Owing to a complex atomic term structure, the levels of Ti are tightly coupled to each other and to the ground state via radiative and collisional processes, and the NLTE corrections for the Ti lines are slightly positive in the stellar parameter range with which we are concerned <cit.>: Δ_ NLTE≾ 0.1 dex for Ti 4395 Å.
<cit.> and <cit.> predicted theoretically that NLTE may either strengthen or weaken the lines of Sr and Ba, depending on the stellar parameters and elemental abundance. For example, in the 6250/4.0 models, Δ_ NLTE is positive for Ba 4554 Å over full range of [Fe/H] = -2 down to -4.5, while, for Sr 4215 Å, Δ_ NLTE is negative when [Fe/H] ≥ -2.5 and positive for the more metal-deficient atmospheres. In the RGB atmospheres, both Sr 4215 Å and Ba 4554 Å are very strong until metallicity decreases to [Fe/H] = -3.5, and the NLTE corrections are small. For the lower metallicity, Δ_ NLTE is positive for both lines and grows with decreasing [Fe/H].
For lines of Zn, the NLTE abundance corrections depending on atmospheric parameters are discussed by <cit.>.
§.§ NLTE corrections depending on elemental abundances
The stars of close metallicity in the [Fe/H] < -2 range reveal a substantial scatter of the Na, Sr, and Ba abundances <cit.>. Exactly for Na, Sr, and Ba the NLTE effects depend strongly on not only atmospheric parameters, but also the element abundance. Therefore in order to interpret correctly the chemical evolution of Na, Sr, and Ba, abundance analyses of VMP samples should be based on the NLTE abundances.
Figure <ref> shows that, for the TO and bRGB stars, the LTE analysis overestimates the Na abundances, by a quantity that is greater for Na-enhanced than for Na-poor stars. The difference in Δ_ NLTE exceeds 0.4 dex for [Fe/H] = -2.5 and decreases towards lower [Fe/H]. The same is true for the RGB stars with [Fe/H] ≤ -3.5, but the situation is more complicated for the higher metallicities. For [Fe/H] > -3, the Na 5895 Å line is very strong in the Na-enhanced cool atmospheres, and the total line absorption is dominated by the line wings that form in deep atmospheric layers affected only weakly by NLTE. Accounting for the NLTE effects for the Na lines substantially reduces the abundance discrepancies found for stellar samples in LTE, as is well illustrated by Fig. <ref>.
Using the same atmospheric parameters, LTE may either overestimate, or underestimate abundances of Sr and Ba depending on the elemental abundances, as shown in Fig. <ref>. For [Fe/H] < -2, the NLTE abundance corrections for Sr 4215 Å and Ba 4554 Å are positive in the Sr- and Ba-poor atmospheres, while they can be negative for the Sr- and Ba-enhanced atmospheres. Accounting for the NLTE effects can reduce the abundance discrepancies found for stellar samples in LTE, by more than 0.4 dex for Sr in the TO [Fe/H] = -2.5 stars and for Ba in the bRGB [Fe/H] = -2.5 stars.
§.§ NLTE corrections for different types of model atmospheres
The model atmospheres computed with different codes produce, as a rule, very similar atmospheric structures and spectral energy distributions for common atmospheric parameters. We checked how different types of model atmospheres influence the magnitudes of the NLTE abundance corrections. Taking the ATLAS9-ODFNEW models from R. Kurucz's website[<http://kurucz.harvard.edu/grids/gridm40aodfnew/>], we performed the NLTE calculations for Ca-, Fe-, and Ba with the models 6250/4.0/-4.0 and 4500/1.5/-4.0. For these atmospheric parameters, the selected lines reveal the greatest NLTE effects. The results are presented in Table <ref>.
For 6250/4.0/-4.0, the MARCS and ATLAS9-ODFNEW model atmospheres provide NLTE abundance corrections consistent within 0.036 dex. Slightly larger differences of up to 0.058 dex are obtained for the strong lines, Ca 4226 Å and Ca 8498 Å, in the cool giant atmosphere. We recall that the MARCS models with ≤ 2 were computed as spherically symmetric, and the difference in temperature stratification between the spherically symmetric and plane-parallel (ATLAS9-ODFNEW) models can explain, in part, the differences in Δ_ NLTE for strong spectral lines.
§ COMPARISONS WITH OTHER STUDIES
The NLTE methods based on comprehensive model atoms and the most up-to-date atomic data have been developed in the literature for many chemical species observed in spectra of the Sun and F-G-K type stars because the NLTE results are in demand in chemical abundance analyses of, in particular, VMP stars. For a common chemical species, the model atoms in different NLTE studies can differ by a treatment of inelastic collisions with electrons and hydrogen atoms and by the sources of transition probabilities and photoionization cross-sections. Different NLTE studies use different NLTE codes, with a different treatment of background opacity, and different model atmospheres. We compared our NLTE calculations with the NLTE abundance corrections from the other studies.
§.§ Lines of Fe
As shown in Fig. <ref>, our results for lines of Fe agree well with the NLTE abundance corrections from the NLTE_MPIA database, which were computed using the model atom of <cit.> and the same treatment of collisions with H, as in our calculations, namely, the formulas of <cit.> with a scaling factor of = 0.5. The differences in Δ_ NLTE between this study (TS) and NLTE_MPIA mostly do not exceed 0.02 dex, with the maximal (TS – NLTE_MPIA) = 0.06 dex for Fe 5506 Å in the 6350/4.09/-2.18 model and Fe 5041 Å in the 4630/1.28/-2.99 model.
<cit.> provide the NLTE abundance corrections computed with the 1D and 3D model atmospheres. The 3D-NLTE calculations were performed for a limited atmospheric parameter range ( = 5000–6500 K, = 4.0 and 4.5, [Fe/H] = 0 to -3) and a limited number of Fe lines. We selected Fe 5232 Å for a comparison. Amarsi22 computed more positive NLTE corrections compared with ours (Fig. <ref>), by 0.07 to 0.27 dex in the 1D case and by 0.14 to 0.39 dex in the 3D case. The difference between 1D-NLTE corrections is most probably due to a different treatment of the Fe + H collisions in this and Amarsi22's studies. For H impact excitation and charge transfer, Amarsi22 apply the asymptotic model of <cit.> complemented by the free electron model of <cit.> for the b-b transitions. We showed earlier <cit.> that compared with the <cit.> formulas with = 0.5 using data of <cit.> leads to stronger NLTE effects. For example, Δ_ NLTE = 0.08 dex and 0.35 dex, respectively, for Fe 5232 Å in the 6350/4.09/-2.18 model atmosphere. In the 3D model atmospheres, the NLTE effects for Fe are stronger than in the 1D models, and notable departures from LTE appear for lines of Fe, in contrast to the 1D case, such that, for two benchmark VMP stars, Amarsi22 (see their Table 5) obtain similar abundance differences between Fe and Fe in the 1D-NLTE and 3D-NLTE calculations. To remind the reader, our 1D-NLTE approach for Fe- makes the spectroscopic distances of the VMP stellar sample to be consistent with the Gaia eDR3 ones (Sect. <ref>).
§.§ Lines of Na, Mg, Ca, Ca, and Sr
We selected Mg 5528 Å, in order to compare our NLTE calculations with the 1D-NLTE corrections provided by the NLTE_MPIA database and by <cit.>. The used model atoms <cit.> are similar to ours, including a treatment of collisions with H atoms. As seen in Fig. <ref>, our calculations agree very well with those of Lind22. The differences in Δ_ NLTE do not exceed 0.01 dex and 0.02 dex for the = 4.0 and 2.5 models, respectively. The exception is the 4000/2.5/-3 model, for which we obtained a 0.065 dex more negative Δ_ NLTE. NLTE_MPIA provides more positive NLTE corrections compared with ours, by 0.03–0.05 dex. The difference is 0.12 dex for the 4000/2.5/-3 model.
Similar model atoms of Na were used in this study and by Lind22. The differences in Δ_ NLTE for Na 5895 Å are very small (∼0.01 dex) for the coolest and the hottest temperatures in Fig. <ref>. It is difficult to explain why TS – Lind22 = 0.07 dex for the 5000/2.5/-3 model, but TS – Lind22 = 0.00 for 5000/4.0/-3.
For lines of Sr, the 1D-NLTE corrections are provided by the INSPECT database. Their NLTE calculations were performed with the model atom developed by <cit.> and did not take into account collisions with H atoms. This is in contrast to this study based on quantum-mechanical rate coefficients for the Sr + H collisions. The atmospheric parameter range is narrower in INSPECT compared with this study, namely: 4400 K ≤≤ 6400 K, 2.2 ≤≤ 4.6, -3.9 ≤ [Fe/H] ≤ 0. The differences in Δ_ NLTE for Sr 4077 Å are small except the models 5500/2.5/-3 and 6000/4.0/-3, where TS – INSPECT = -0.07 dex and +0.05 dex, respectively (Fig. <ref>).
The 1D-NLTE corrections for the Ca lines at the NLTE_MPIA database were computed with the model atom developed by <cit.> and using <cit.> formulas with = 0.1 for calculating hydrogen collision rates. In this study, we applied the same model atom, however, the Ca + H collisions were treated using quantum-mechanical rate coefficients from <cit.>. As seen in Fig. <ref>, NLTE_MPIA provides systematically greater NLTE corrections for Ca 6162 Å compared with our data, by 0.08 to 0.20 dex, probably due to a simplified treatment of hydrogenic collisions.
Ignoring the Ca + H collisions in the SE calculations resulted in stronger NLTE effects for the Ca triplet lines in <cit.> study compared with ours. For example, <cit.> report the NLTE/LTE equivalent ratios of 1.28 and 1.16 for Ca 8498 and 8542 Å, respectively, in the 4250/1.5/-4.0 model, while our corresponding values are 1.22 and 1.12.
§.§ Lines of Ba
Finally, we compared our results with the 1D-NLTE corrections calculated by <cit.> for lines of Ba. <cit.> provide the data for the -2 ≤ [Fe/H] ≤ 0.5 metallicity range. Therefore, Δ_ NLTE comparisons are presented in Fig. <ref> for the same temperatures and surface gravities, as in Fig. <ref>, but for [Fe/H] = -2. The differences in Δ_ NLTE for Ba 6496 Å do not exceed 0.02 dex except the coolest and the hottest giant atmospheres, where TS – K15 = -0.05 dex and +0.05 dex, respectively.
To summarise this section, the situation with the 1D-NLTE corrections for lines of Na, Mg, and Fe looks good. For each of these chemical species, there are at least two independent NLTE studies that predict NLTE corrections consistent within 0.01-0.02 dex and provide grids covering the full range of atmospheric parameters of VMP stars. For Sr and Ba, the NLTE corrections predicted by the independent studies agree reasonably well in the overlapping atmospheric parameter range.
§ FINAL REMARKS
This study presents grids of the 1D-NLTE abundance corrections for the Na, Mg, Ca, Ca, Ti, Fe, Zn, Zn, Sr, and Ba lines, which are used in the galactic archaeology research. The range of atmospheric parameters represents VMP stars on various evolutionary stages and covers 4000 K ≤ ≤ 6500 K, 0.5 ≤ ≤ 5.0, and -5.0 ≤ [Fe/H] ≤ -2.0. The NLTE corrections for Zn, Zn, Sr, and Ba have been calculated for the first time for such a broad atmospheric parameter range. Compared to the data available in the literature, our NLTE corrections for lines of Ca, Ca, Zn, Zn, Sr, and Ba are based on accurate treatment of collisions with H atoms in the statistical equilibrium calculations.
In the same model atmosphere, the NLTE abundance corrections may have different magnitude and sign for lines of the same chemical species, for example Δ_ NLTE = 0.092 dex (Mg 5528 Å) and Δ_ NLTE = -0.083 dex (Mg 5172 Å) in the 4500/1.5/-3.5 model. Accounting for the NLTE effects in stellar abundance determinations is expected to improve an accuracy of the obtained results.
In the same model atmosphere, the NLTE abundance corrections may have different magnitude and sign for lines of different chemical species, for example, Δ_ NLTE = -0.222 dex (Na 5895 Å) and Δ_ NLTE = 0.092 dex (Mg 5528 Å) in the 4500/1.5/-3.5 model. Therefore, an appropriate treatment of the line formation is obligatory for the studies based on analysis of the stellar element abundance patterns.
For all spectral lines and chemical species, the NLTE corrections depend on metallicity. Neglecting the NLTE effects in stellar abundance determinations leads to distorted galactic abundance trends and incorrect conclusions on the Galactic chemical evolution.
We show that, for common spectral lines and the same atmospheric parameters, independent NLTE studies of Na, Mg, and Fe predict consistent 1D-NLTE abundance corrections, with the difference of 0.01-0.02 dex in Δ_ NLTE.
The obtained results are publicly available. At the website INASAN_NLTE (<http://spectrum.inasan.ru/nLTE2/>), we provide the tools for calculating online the NLTE abundance correction(s) for given line(s) and given atmospheric parameters.
§ ACKNOWLEDGEMENTS
This research has made use of the data from the European Space Agency (ESA) mission Gaia[<https://www.cosmos.esa.int/gaia>], processed by the Gaia Data Processing and Analysis Consortium (DPAC[<https://www.cosmos.esa.int/web/gaia/dpac/consortium>]).
This research has made use of the MARCS and ADS[<http://adsabs.harvard.edu/abstract_service.html>] databases. L.M. thanks the Russian Science Foundation (grant 23-12-00134) for a partial support of this study (Sections 1, 2, 4, 5). T.S. acknowledges a partial support (Section 3) from the MK project, grant 5127.2022.1.2.
§ DATA AVAILABILITY
All our results are publicly available at the website INASAN_NLTE (<http://spectrum.inasan.ru/nLTE2/>).
|
http://arxiv.org/abs/2307.06086v1 | 20230712111526 | On the sharp Makai inequality | [
"Francesca Prinari",
"Anna Chiara Zagati"
] | math.OC | [
"math.OC"
] |
On a convex bounded open set, we prove that Poincaré–Sobolev constants for functions vanishing at the boundary can be bounded from below in terms of the norm of the distance function in a suitable Lebesgue space.
This generalizes a result shown, in the planar case, by E. Makai, for the torsional rigidity. In addition, we compare the sharp Makai constants obtained in the class of convex sets with the optimal constants defined in other classes of open sets.
Finally, an alternative proof of the Hersch–Protter inequality for convex sets is given.
§ INTRODUCTION
The aim of this paper is to provide a sharp lower bound for the quantity
λ_p,q(Ω) d_Ω_L^p q/p-q(Ω)^p,
where Ω⊊ℝ^N is a convex bounded open set, 1≤ q<p<∞ or 1<q=p<∞, and λ_p,q(Ω) is the generalized principal frequency defined as
λ_p,q(Ω) :=inf_ψ∈ C^∞_0(Ω){∫_Ω |∇ψ|^p dx : ∫_Ω |ψ|^q dx=1},
and d_Ω is the distance function from the boundary ∂Ω, namely
d_Ω(x):=inf{|x-y| : y∈∂Ω}.
Here and in what follows, L^p q/p-q(Ω) stands for L^∞(Ω) when p=q and we will write λ_p(Ω) in place of λ_p,p(Ω).
Our study is motivated by an old result due to Makai (see <cit.>) for the torsional rigidity
T(Ω)=1/λ_2,1(Ω),
which asserts that, for every convex bounded open set Ω⊊ℝ^2, the following sharp upper bound holds
T(Ω) ≤∫_Ω d_Ω^ 2 dx.
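For instance, for the disc B_R⊂ℝ^2 of radius R the torsion function is w(x)=(R^2-|x|^2)/4, so that
T(B_R)=∫_B_R w dx = π R^4/8, while ∫_B_R d_B_R^ 2 dx = ∫_0^R (R-r)^2 2π r dr = π R^4/6,
and Makai's bound holds with a ratio of 3/4.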
§.§ Optimal lower bound on convex sets: the main result
Inspired by the above Makai inequality, in this paper we will prove the following theorem which extends, to every dimension N and every 1≤ q<p<∞, the optimal lower bound given by Makai in the case q=1 and p=2.
Let 1 ≤ q < p < ∞ and let Ω⊊ℝ^N be a convex bounded open set. Then, the following lower bound holds
λ_p,q(Ω) ≥C_p,q/(∫_Ω d_Ω^p q/p-q dx )^p-q/q ,
where C_p,q is the positive constant given by
C_p,q= (π_p,q/2)^p (p-q/pq+p-q)^p-q/q,
with
π_p,q := inf_u ∈ C_0^∞((0,1)){u'_L^p([0,1]) : u_L^q([0,1])=1 }.
Moreover, the estimate (<ref>) is sharp.
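For p=2 and q=1 the constant C_2,1 can be computed explicitly: the infimum defining π_2,1 is realized, in W^1,2_0((0,1)), by the parabola u(x)=6x(1-x), normalized so that ∫_0^1 u dx=1, whence
π_2,1^2 = 36∫_0^1 (1-2x)^2 dx = 12 and C_2,1 = (π_2,1/2)^2 ·1/3 = 1.
Hence, for p=2 and q=1, the estimate (<ref>) reads λ_2,1(Ω) ≥ 1/∫_Ω d_Ω^ 2 dx, that is T(Ω) ≤∫_Ω d_Ω^ 2 dx, recovering the planar inequality of Makai recalled above.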
The proof of Theorem <ref> is inspired by the covering argument for polygonal sets exploited by Makai in the planar case. In the N-dimensional case, thanks to a standard approximation argument, we can restrict ourselves to the case when Ω⊊ℝ^N is the interior of a polytope K (see Section <ref>). In this case, in order to prove (<ref>), the key tool we use is Lemma <ref>, where we construct a suitable covering of Ω by means of convex sets Ω_i, each satisfying the property that ∂Ω_i∩Ω is the graph of a continuous function defined on a facet S_i of the polytope K. The proof of the convexity of each set Ω_i relies on the concavity property of the distance function d_Ω (see <cit.>).
If we denote by r_Ω the inradius of Ω, which coincides with the supremum of the distance function d_Ω,
as an application of Theorem <ref>, we can give a different proof of the following sharp estimate (<ref>), first proved in <cit.> when 1≤ q<2 and then extended to cover the case p≠ 2 and q=1 in <cit.>. The general case 1≤ q<p<∞ was first shown in <cit.> by means of a comparison argument.
Let 1 ≤ q < p<∞ and Ω⊊ℝ^N be a convex bounded open set. Then, the following lower bound holds
λ_p,q(Ω) |Ω|^p-q/q≥( π_p,q/2)^p 1/r_Ω^ p.
Moreover, the estimate (<ref>) is sharp.
A further result that follows from (<ref>), simply taking the limit as q↗ p[We use here that q↦λ_p,q(Ω) is left-continuous at q=p when Ω⊂ℝ^N is a bounded open set.], is the sharp inequality
λ_p(Ω) ≥( π_p/2)^p 1/r_Ω^ p,
valid for every convex bounded open set Ω⊊ℝ^N. Here π_p is defined as in (<ref>) with p=q.
Formula (<ref>) represents the extension to the case of the p-Laplacian of the Hersch–Protter inequality, originally proved by Hersch in <cit.> for p=N=2 and then generalized to every dimension N ≥ 2 by Protter in <cit.> (see also <cit.>). The case p≠ 2 has already been obtained in <cit.>. In Section <ref>, we will give a further alternative proof of (<ref>) by exploiting a change of variables formula (<ref>), proved in <cit.> when the domain of integration is a connected open set of class C^2, and then using a suitable approximation result for convex sets.
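We also recall that π_p admits the explicit expression
π_p = 2π (p-1)^1/p/(p sin(π/p)),
so that π_2=π and, for p=2, (<ref>) reduces precisely to the classical bound λ_2(Ω) ≥π^2/(4 r_Ω^ 2).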
§.§ Lower bounds on other classes of open sets
The final part of this paper is devoted to comparing the optimal constant C_p,q for convex sets, defined by (<ref>), with the infimum of (<ref>) in other classes of open sets of ℝ^N. First of all, for every 1≤ q< p<∞ or 1<q=p<∞, we introduce the constant
C̃_p,q=inf{λ_p,q(Ω) d_Ω_L^p q/p-q(Ω)^p: Ω⊊ℝ^N open, d_Ω∈ L^p q/p-q(Ω)},
where the infimum is now taken over all open sets.
The condition on d_Ω is motivated by the recent results in <cit.> where, by means of a comparison principle provided in <cit.> for the sub-homogeneous Lane–Emden equation,
it is shown that, when Ω⊊ℝ^N is an open set and 1≤ q<p<∞ or 1<p=q<∞, then the following implication holds
λ_p,q(Ω)>0 ⟹ d_Ω∈ L^p q/p-q(Ω)
(see <cit.>). Moreover, in the same paper, following an argument used in <cit.> when p=2, the above implication is shown to be an equivalence in the class of the open sets Ω⊊ℝ^N which satisfy the Hardy inequality
∫_Ω |∇ u|^p dx ≥ C ∫_Ω|u|^p/d_Ω^ p dx, u ∈ C^∞_0(Ω),
with 1<p<∞ and C=C(p,Ω)>0.
Indeed, if
d_Ω∈ L^p q/p-q(Ω) for 1≤ q<p<∞ or 1< q=p<∞,
then the joint application of the Hölder inequality (with conjugate exponents p/q and p/(p-q), applied to the splitting |u|^q = (|u|/d_Ω)^q d_Ω^ q) and of (<ref>) gives
∫_Ω |u|^q dx ≤( ∫_Ω|u|^p/d_Ω^ p dx )^q/p d_Ω_L^p q/p-q(Ω)^q
≤1/C^ q/p(∫_Ω |∇ u|^p dx )^q/p d_Ω_L^p q/p-q(Ω)^q, u∈ C^∞_0(Ω),
that implies
λ_p,q(Ω)≥𝔥_p(Ω)/d_Ω_L^p q/p-q(Ω)^p,
where
𝔥_p(Ω):=inf_u∈ C^∞_0(Ω){∫_Ω |∇ u|^p dx : ∫_Ω|u|^p/d_Ω^ p dx=1}.
We recall that, when Ω⊊ℝ^N is a convex bounded open set, it is well known that
𝔥_p(Ω)= (p-1/p)^p,
(for a proof, see <cit.>). The resulting estimate (<ref>) in the class of the convex bounded open sets is far from being sharp, as the case q=1 and p=N=2 shows, being C_2,1=1>1/4.
Now, the constants C̃_p,q defined in (<ref>) satisfy the following facts:
* when 1<p≤ N, it holds that
C̃_p,q=0, for every 1≤ q< p or q=p.
Indeed, if 1<p≤ N, given a bounded open subset Ω⊊ℝ^N, we remove from it a periodic array of n points and call Ω_n the open set so constructed.
Since points in ℝ^N have zero p-capacity when p≤ N (see <cit.>), for every 1≤ q<p≤ N or 1<q=p≤ N it holds
λ_p,q(Ω_n)=λ_p,q(Ω), n ∈ℕ.
Since r_Ω_n tends to 0, as n →∞, the above equality implies that
C̃_p,q≤lim sup_n →∞λ_p,q(Ω_n) d_Ω_n_L^p q/p-q(Ω_n)^p ≤λ_p,q(Ω) |Ω|^p-q/qlim sup_n →∞ r^ p_Ω_n =0,
for every 1≤ q<p≤ N or 1<q=p≤ N. In particular, in this range, it follows that
C̃_p,q < C_p,q;
* when p>N, as shown independently by Lewis in <cit.> and Wannebo in <cit.>, every open subset Ω⊊ℝ^N satisfies the Hardy inequality (<ref>) and it holds
𝔥_p(Ω)≥(p-N/p)^p,
(for the latter, see <cit.> and <cit.>).
By using the above lower bound in (<ref>), we get that
C̃_p,q≥(p-N/p)^p>0, for N<p<∞ and 1≤ q< p,
and the natural question that arises is whether the strict inequality (<ref>) also holds in the case p>N, for 1 ≤ q < p or q= p.
We will address this question in Section <ref> and, by means of a perturbative argument, we will be able to show that, for every N≥ 2 and for every fixed 1 ≤ q <N, there exists p̄=p̄(q)>N such that (<ref>) holds for every p∈ (q, p̄].
Another interesting class of sets where the quantity (<ref>) is bounded from below by a positive constant is that of planar simply connected open sets.
Indeed, if p=N=2, thanks to a result due to Ancona (see <cit.>), every simply connected open set Ω⊊ℝ^2 verifies the Hardy inequality (<ref>) with the optimal Hardy constant satisfying
𝔥_2(Ω) ≥1/16.
Hence, for N=2 and every 1≤ q≤ 2, having defined
Ĉ_2,q=inf{λ_2,q(Ω) d_Ω_L^2 q/2-q(Ω)^2: Ω⊊ℝ^2 simply connected, d_Ω∈ L^2 q/2-q(Ω) },
the joint application of (<ref>) and (<ref>) implies that
Ĉ_2,q≥1/16>0=C̃_2,q, for every 1≤ q≤ 2.
By using a different argument, in <cit.> Makai shows that
1/4≤Ĉ_2,2<π^2/4=C_2,2,
and, in order to prove the upper bound, he exhibits a simply connected open set Ω⊊ℝ^2 satisfying
λ_2(Ω) r^ 2_Ω<π^2/4.
In Section <ref>,
we use this fact to show that there exists 1 ≤q̄<2 such that it holds
Ĉ_2,q< C_2,q, for every q∈ [q̄, 2].
In addition, by exploiting (<ref>), we finally prove that, in the case N=2, there exists p̄>2 such that
0<C̃_p,p<(π_p/2)^p=C_p,p, for every p∈ (2,p̄].
§.§ Plan of the paper
In Section <ref> we introduce some basic properties of polytopes in ℝ^N and we extend, to any dimension N ≥ 2, the covering argument applied by Makai in <cit.>. In Section <ref>, we prove the main results stated in Theorem <ref> and in Corollary <ref>.
In the subsequent Section <ref>, we give an alternative proof for the Hersch–Protter inequality (<ref>), by means of a change of variables formula. Finally, in Section <ref>, we compare the sharp constant for the Makai inequality on convex sets with the optimal constants for other class of sets; in particular, we consider the class of general open sets whose distance function satisfies (<ref>) and that one of planar simply connected sets.
The authors are deeply indebted to Lorenzo Brasco for having pointed out the open questions concerning the Makai inequality and for the interesting discussions and remarks on the subject.
F. P. and A. C. Z. are grateful to the Dipartimento di Matematica e Informatica at University of Ferrara and to the Dipartimento di Matematica at University of Pisa for their hospitality. F.P. has been financially supported by the PRA-2022 project ”Geometric evolution problems and PDEs on variable domains”, University of Pisa. F. P. and A. C. Z. are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità
e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
§ PRELIMINARIES
For every convex set C ⊂ℝ^N, we will denote by relint(C) and relbd(C) the relative
interior and the relative boundary of C, once we regard it as a subset of its affine hull. We define the dimension of C, and we denote it by dim(C), as the dimension of its affine hull. Conventionally, the empty set has dimension -1.
Let C ⊂ℝ^N be a nonempty convex closed set. A convex subset S⊆ C is a face of C if each segment [x,y]⊂ C satisfying S∩relint([x,y])≠∅ is contained in S.
We denote by ℱ(C) the set of all faces of C and by ℱ_i(C) the set of all faces of C having dimension i, for every 0≤ i ≤dim(C)-1. The empty set and C itself are faces of C; the other faces are called proper. Every (dim(C)-1)-dimensional face of C is called a facet of C.
We can summarise the main properties of faces in the following theorem (see <cit.>).
Let C⊂ℝ^N be a not empty convex bounded set. Then
* the faces of C are closed;
* if F C is a face of C, then F∩relint(C)=∅;
* if G and F are faces of C, then G∩ F is a face of C;
* if G is a face of F and F is a face of C, then G is a face of C;
* each point x∈ C is contained in relint(F) for a unique face F∈ℱ(C).
We say that K⊂ℝ^N is a polytope if it is the convex hull of finitely many points of ℝ^N.
We recall that, thanks to <cit.>, a polytope is a compact convex set. Moreover, each proper face of K is itself a polytope and is contained in some facet of K.
.2cm
Furthermore, we recall that if K is a polytope and 0∈ int(K), then the following facts hold:
(i) the polar set K^∘={ x∈ℝ^N: ⟨ x,y ⟩≤ 1 for every y∈ K} is itself a polytope;
(ii) if F is a face of K then the conjugate set F̂={ x∈ K^∘: ⟨ x,y ⟩ = 1 for every y∈ F} is itself a polytope such that
dim(F̂)= N-dim(F)-1;
(iii) if F,G∈ℱ (K) are such that F⊂ G, then F̂⊃Ĝ;
(iv) the application F↦F̂ is a bijection from ℱ (K) to ℱ (K^∘).
.2cm
In the sequel we need the following result.
Let K be a polytope and assume that 0∈ int(K). Then,
* if S,S' are facets of K such that S≠S' then S ∩relint(S') =∅;
* for every facet S of K, if z∈relbd(S), then there exists another facet S̃≠ S of K such that z∈S̃. In particular, z ∈relbd(S) ∩relbd(S̃).
.1cm
* Without loss of generality, we assume that F=S∩ S'≠∅. Since S≠S', we have that F is a proper face of S'. Then, by Part (2) of Theorem <ref>, we have
S∩relint(S') ⊂ S ∩ S' ∩relint(S')= F ∩relint(S') = ∅;
.1cm
* let S be a facet of K and let z∈relbd(S).
Then, there exists a facet F of S such that z∈ F. Then, F ∈ℱ(K) and dim(F)=N-2. This implies dim(F̂)= 1.
Therefore, there exist exactly two points x',x″, such that x'≠ x″ and
F̂=[x',x″].
Then, thanks to property (iv) above, there exist two facets S',S″∈ℱ(K), with S'≠ S″, such that Ŝ'={x' } and Ŝ″={x″}. Since {x'},{x″}⊆F̂, the properties above imply that
z∈ F⊆ S'∩ S″,
namely, there exists at least another facet S̃≠ S containing z.
Since z∈relbd(S)∩S̃⊆ S∩S̃ and, thanks to Part (1) of this lemma, S∩relint(S̃)=∅, we can conclude that
z ∈relbd(S) ∩relbd(S̃).
In the next lemma, we extend to every dimension the argument applied by Makai in <cit.> in order to divide the interior of a polytope K, when int(K)≠∅, into a finite number of suitable convex open subsets.
Let N≥ 2 and K ⊂ℝ^N be a polytope such that 0∈int(K). Let Ω =int(K) and let S_1,S_2,…, S_h be the facets of K.
For every i ∈{1,…, h}, let Π_i: ℝ^N→ H_i be the orthogonal projection on the affine hyperplane H_i containing S_i. Define
Ω_i = { x ∈Ω : d_S_i(x) = d_Ω(x) },
where
d_S_i(x)=min_y∈ S_i|x-y|.
Then, for every i ∈{1,…, h}, the following facts hold:
.1cm
* if x∈Ω_i then Π_i(x)∈ S_i. In particular Π_i (x) is the unique minimizer of the problem defining d_S_i(x);
.1cm
* Π_i(x)∈relint(S_i), for every x∈Ω_i;
.1cm
* Ω_i is a convex set;
.1cm
*
int(Ω_i)≠∅;
.1cm
* Ω_i can be included in a rectangle with base S_i and height r_Ω;
.1cm
* for every x_0 ∈ relint (S_i), there exists a unique y_0∈∂Ω_i ∩Ω such that Π_i(y_0)=x_0.
In particular, the restriction Π_i: ∂Ω_i∩Ω→relint(S_i) is a continuous bijection.
First of all, we notice that, since int(K)≠∅, we have that Ω̄=K. Now, we prove every part separately. Let us fix i ∈{1,…, h}.
* Let x ∈Ω_i. By contradiction, if Π_i(x)∉ K, then the segment [x,Π_i(x)] would intersect ∂Ω in a point z. Since z≠Π_i(x), we have z ∈ S_j, with S_j≠ S_i. Hence, we would have that
d_Ω(x)≤ d_S_j(x)≤ |x-z|<|x-Π_i(x)|≤ d_S_i(x)=d_Ω(x),
which is a contradiction.
In particular, using the fact that Π_i(x)∈ S_i, we get that
d_S_i(x)=min_y∈ S_i|x-y|≤ |x-Π_i (x)|=d_H_i(x)≤ d_S_i(x)
that is, Π_i (x) is the unique minimizer of the problem defining d_S_i(x);
.2cm
* by contradiction, let us suppose that Π_i(x)∈ S_i∖relint(S_i)=relbd(S_i). By Lemma <ref>, Part (2), there exists S_j≠ S_i such that Π_i(x) ∈ S_j. Since
d_S_j(x)≤ |x- Π_i(x)|=d_S_i(x)=d_Ω(x)≤ d_S_j(x),
we get that
d_S_j(x)=|x- Π_i(x)|.
Being Π_i (x)∈ S_j, by uniqueness, we would obtain Π_j(x)=Π_i(x)∈ H_i∩ H_j. Then H_i ≡ H_j, giving the contradiction S_i=S_j;
.2cm
*
by contradiction, assume that there exist x,y∈Ω_i and λ∈ (0,1) such that z=λ x + (1-λ) y ∉Ω_i. Hence z∈Ω_j for some j≠ i, and
d_Ω(z)=d_S_j(z)<d_S_i(z).
Since Ω is a convex set, then the distance function d_Ω is a concave function (see <cit.>), hence, it follows that
d_Ω(z)=d_Ω(λ x + (1-λ) y) ≥λ d_Ω(x) + (1-λ) d_Ω(y).
On the other hand, by Part (1), we have that Π_i(x), Π_i(y)∈ S_i. By linearity, we get that
Π_i(z)=λ Π_i(x) + (1-λ) Π_i(y) ∈ S_i,
which implies that
d_S_i(z)=|z-Π_i(z)|≤λ|x-Π_i(x)| + (1-λ)|y-Π_i(y)|=λ d_Ω(x) + (1-λ) d_Ω(y).
By combining (<ref>), (<ref>) and (<ref>), we obtain a contradiction;
.2cm
* let x_0∈relint(S_i); then, by Part (1) of Lemma <ref>, we have that x_0∉ S_j for every S_j≠ S_i.
We set g_j:=d_S_j-d_S_i for every j≠ i; then, since x_0∈ S_i, it holds that
g_j(x_0)=d_S_j(x_0)>0.
Since g_j is a continuous function on ℝ^N, there exists a ball B_ρ(x_0) such that g_j>0 on B_ρ(x_0), for every j≠ i.
Hence d_Ω=d_S_i on the nonempty open set B_ρ(x_0)∩Ω, which implies that
B_ρ(x_0)∩Ω⊂Ω_i, giving the desired conclusion;
.2cm
* without loss of generality, suppose that
H_i={ (y,x_N)∈ℝ^N: x_N=0 }.
Hence, thanks to Part (3), one of the following inclusions holds
Ω_i⊂{ (y,x_N)∈ℝ^N: x_N≥0 } or Ω_i⊂{ (y,x_N) ∈ℝ^N: x_N≤0 }.
In both cases, since
|x_N|=d_S_i(x)=d_Ω(x) ≤ r_Ω, x ∈Ω_i,
we obtain the claimed conclusion;
.2cm
* let x_0∈relint(S_i). Since int(Ω_i)≠∅, we can take an open half-line r_x_0 with origin x_0, perpendicular to S_i at x_0, such that r_x_0∩int(Ω_i)≠∅. Being Ω_i a bounded set and r_x_0 a connected set, we obtain that r_x_0∩∂Ω_i≠∅. Moreover, as ∂Ω_i=(∂Ω_i∩Ω)∪ S_i and r_x_0∩ S_i=∅, we also have that
Σ_i(x_0):=r_x_0∩∂Ω_i∩Ω≠∅.
Now, we will show that Σ_i(x_0) consists of exactly one point.
Indeed, we note that if y'∈Σ_i(x_0), then
]x_0,y'[= ]x_0,x]∪ [x,y'[⊆int(Ω_i), x∈ r_x_0∩int(Ω_i).
In particular, if y',y″∈Σ_i(x_0), with y'≠ y″, then we would obtain the contradiction
y″∈ ]x_0,y'[ ⊂int(Ω_i) or y'∈ ]x_0,y″[⊂int(Ω_i).
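This covering construction can be visualised numerically. The following Python sketch (our illustration, not part of the original argument) partitions a grid over the open unit square Ω=(0,1)^2 into the regions Ω_i of points whose distance to the boundary is realised by the i-th side, and checks that each piece stays within distance r_Ω=1/2 of its facet, in the spirit of Part (5) of the lemma.

```python
import numpy as np

# Illustration for Omega = (0,1)^2: the four facets are the sides of the square.
# Omega_i collects the grid points whose distance to the boundary is realised by side i.
n = 400
xs = (np.arange(n) + 0.5) / n              # midpoints of a uniform grid of interior points
X, Y = np.meshgrid(xs, xs, indexing="ij")

# distances to the facets {x=0}, {x=1}, {y=0}, {y=1}
dists = np.stack([X, 1.0 - X, Y, 1.0 - Y])
closest = np.argmin(dists, axis=0)          # index i such that the point lies in Omega_i
d_omega = dists.min(axis=0)                 # distance function d_Omega on the grid

r_omega = 0.5                               # inradius of the unit square
for i, name in enumerate(["{x=0}", "{x=1}", "{y=0}", "{y=1}"]):
    mask = closest == i
    # each Omega_i sits inside a slab with base S_i and height r_Omega (Part 5)
    assert d_omega[mask].max() <= r_omega + 1e-12
    print(f"Omega_{i} (facet {name}): area ~ {mask.mean():.3f}, "
          f"max distance to its facet = {d_omega[mask].max():.3f}")

# the four pieces cover the square: their areas sum to 1 (up to grid resolution)
print("total area of the pieces:", sum((closest == i).mean() for i in range(4)))
```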
.1cm
§ PROOF OF THE MAIN RESULT
In proving Theorem <ref>, first we will restrict ourselves to consider the case when the convex open set Ω coincides with the interior of a polytope K. Then, the general result will follow thanks to an approximation argument by means of polytopes, valid for convex sets.
.3cm
Proof of Theorem <ref>. By following <cit.>, we divide the proof into three parts: first, we prove the lower bound (<ref>) when Ω is the interior of a polytope K; then, by applying an approximation argument, we show that such a lower bound holds when Ω⊊ℝ^N is a general convex set. Finally, we show that (<ref>) is asymptotically sharp for slab-type sequences.
.2cm
Part 1: Makai's inequality for a polytope.
Without loss of generality, let us suppose that 0∈Ω. Moreover, in this step we assume that Ω =int(K), where K ⊂ℝ^N is a polytope.
According to the notation in Lemma <ref>, we consider the subsets Ω_i given by (<ref>), with i ∈{1,…, h}.
Now, we will show that, for every i ∈{1,…, h} and for every u∈ C^∞_0(Ω), it holds
∫_Ω_i |u|^q dx ≤( 2/π_p,q)^q ( pq+p-q/p-q)^p-q/p( ∫_Ω_i d_Ω^p q/p-q dx )^p-q/p( ∫_Ω_i |∇ u|^p dx )^q/p.
Indeed, let i ∈{1,…, h} and, up to translations and rotations, we can assume that
H_i={ (y,t)∈ℝ^N: t=0 }
is the affine hyperplane containing S_i.
Then
t=d_S_i(y,t)=d_Ω(y,t), (y,t)∈Ω_i.
Thanks to Lemma <ref>, we have that Π_i: ∂Ω_i∩Ω→ S_i is a bijective and continuous function between two compact sets.
Hence, defining S'_i={ y∈ℝ^N-1: (y,0)∈ S_i}, we obtain that there exists a continuous function f_i: S'_i→ [0,+∞)
such that
(y,t)∈∂Ω_i∩Ω ⟺ y∈ S'_i t=f_i(y),
and it is easy to show that
Ω_i = { (y,t) ∈ S'_i×ℝ: 0< t ≤ f_i(y) }.
Indeed, the inclusion “⊆” follows by using (<ref>) and (<ref>), while the converse one “⊇” is an application of Parts (3) and (6) of Lemma <ref>, taking into account that
f_i(y)>0 ⟺ (y,0)∈relint(S_i).
Now, we recall that
(π_p,q/2)^p=min_φ∈ W^1,p((0,1))∖{ 0}{∫_0^1 |φ'|^p dt/(∫_0^1 |φ|^q dt)^p/q:φ(0)=0},
(see <cit.>), which implies that, for every s>0
( ∫_0^s |φ|^q dt )^p/q≤( 2/π_p,q)^p s^pq+p-q/q∫_0^s |φ'|^p dt, φ∈ C_0^∞((0,s]).
Hence, for every u∈ C^∞_0(Ω) and for every i ∈{1,…,h}, thanks to formula (<ref>), by using Fubini's Theorem and (<ref>) with s=f_i(y), we get
∫_Ω_i |u(x)|^q dx = ∫_S'_i∫_0^f_i(y) |u(y,t)|^q dt dy
≤( 2/π_p,q)^q ∫_S'_i f_i(y)^pq+p-q/p( ∫_0^f_i(y)|∂ u/∂ t (y,t)|^p dt )^q/p dy
≤( 2/π_p,q)^q ( ∫_S'_i f_i(y)^pq+p-q/p-q dy )^p-q/p( ∫_S'_i∫_0^f_i(y)|∂ u/∂ t (y,t)|^p dy dt )^q/p,
where we also applied Hölder's inequality in the last line.
Taking into account (<ref>), we have that
∫_S'_i f_i(y)^pq+p-q/p-q dy = ( pq+p-q/p-q) ∫_S'_i( ∫_0^f_i(y) t^p q/p-q dt ) dy
= ( pq+p-q/p-q)∫_S'_i( ∫_0^f_i(y) (d_Ω(y,t))^p q/p-q dt) dy
=( pq+p-q/p-q)∫_Ω_i d_Ω^p q/p-q dx.
By combining (<ref>) and (<ref>), for every i ∈{1,…,h}, we obtain (<ref>).
Finally, since
Ω=⋃_i=1^h Ω_i,
by summing with respect to the index i ∈{1,…,h} in (<ref>), it follows that
∫_Ω |u|^q dx ≤( 2/π_p,q)^q ( pq+p-q/p-q)^p-q/p∑_i=1^h (∫_Ω_i d_Ω^p q/p-q dx )^p-q/p( ∫_Ω_i |∇ u|^p dx )^q/p.
By applying Hölder's inequality
‖a b‖_ℓ^1≤‖a‖_ℓ^r‖b‖_ℓ^r',
with r=p/q, we get
∫_Ω |u|^q dx ≤( 2/π_p,q)^q ( pq+p-q/p-q)^p-q/p( ∫_Ω d_Ω^p q/p-q dx )^p-q/p( ∫_Ω |∇ u|^p dx )^q/p.
and by raising to the power p/q on both sides, this in turn implies
∫_Ω |∇ u|^p dx/(∫_Ω |u|^q dx)^p/q≥( π_p,q/2)^p ( p-q/pq+p-q)^p-q/q1/( ∫_Ω d_Ω^p q/p-q dx )^p-q/q, u∈ C^∞_0(Ω).
Taking the infimum on C^∞_0(Ω) on the left-hand side, we get that for every polytope K ⊂ℝ^N, the set Ω=int(K) satisfies the lower bound (<ref>), as desired.
.2cm
Part 2: approximation argument.
If Ω⊊ℝ^N is a general convex bounded open set, thanks to <cit.>, for every 0<ε≪ 1, there exists a polytope K_ε such that
K_ε⊆Ω⊆Ω̄⊆1/(1-ε) K_ε.
In particular, we have that
(1-ε) Ω⊆int(K_ε).
Then, thanks to Part (1) of the proof, we obtain that
λ_p,q(Ω)/(1- ε)^p-N+Np/q=λ_p,q((1-ε) Ω)≥λ_p,q(int(K_ε)) ≥C_p,q/(∫_int(K_ε) d_int(K_ε)^p q/p-q dx )^p-q/q≥C_p,q/(∫_Ω d_Ω^p q/p-q dx )^p-q/q,
where the last inequality follows from the fact that
d_int(K_ε)(x) ≤ d_Ω(x), for every x ∈ K_ϵ.
Then, by sending ε→ 0^+, we get that Ω satisfies (<ref>).
.2cm
Part 3: sharpness. Now we will show that estimate (<ref>) is asymptotically sharp for slab-type sequences
Ω_L = ( -L/2 ,L/2)^N-1× (0,1), L ≥ 1.
.1cm
With this aim, we denote by y=(x_1, …, x_N-1) a point of ℝ^N-1. Let S_0 and S_1 be the facets of Ω_L contained, respectively, in the hyperplanes {(y,t)∈ℝ^N : t=0} and {(y,t)∈ℝ^N : t=1}, and
we define the lateral surface 𝒮_L of Ω_L as the following union:
𝒮_L:= ⋃_i=1^N-1{(y,t) ∈Ω_L: x_i=±L/2, 0< t < 1}.
Then, we define the set Ω_1 as
Ω_1 = { x ∈Ω_L: d_𝒮_L(x) ≥ 1 }.
Since |Ω_1|=(L-2)^N-1, we obtain that
|Ω_L ∖Ω_1|=L^N-1-(L-2)^N-1∼ L^N-2 C_N, L →∞,
which implies
∫_Ω_L ∖Ω_1 d_Ω_L^p q/p-q dx ≤ r_Ω_L^p q/p-q |Ω_L ∖Ω_1| ∼ L^N-2 C_N (1/2)^p q/p-q, L →∞.
Moreover, since, for every (y,t) ∈Ω_1, it holds that
d_Ω_L (y,t)= d_S_0∪ S_1(y,t)=min{t, 1-t},
we obtain
∫_Ω_1 d_Ω_L^p q/p-q dx =(∫_(-L/2+1, L/2-1)^N-1 dy) (2 ∫_0^1/2 t^p q/p-q dt) =(L-2)^N-1 (1/2)^p q/p-q p-q/pq+p-q.
In particular, since
∫_Ω_L d_Ω_L^p q/p-q dx = ∫_Ω_L ∖Ω_1 d_Ω_L^p q/p-q dx + ∫_Ω_1 d_Ω_L^p q/p-q dx,
we have the following asymptotic behavior
∫_Ω_L d_Ω_L^p q/p-q dx ∼ L^N-1 (1/2)^p q/p-q p-q/pq+p-q, L →∞.
Hence, we finally obtain
C_p,q/( ∫_Ω_L d_Ω_L^pq/p-q dx )^p-q/q=( π_p,q/2)^p ( p-q/pq+p-q)^p-q/q1/( ∫_Ω_L d_Ω_L^p q/p-q dx )^p-q/q∼( π_p,q)^p 1/L^(p-q)(N-1)/q, L →∞.
On the other hand, by <cit.>, the following upper bound holds
λ_p,q(Ω) < ( π_p,q/2)^p ( P(Ω)/|Ω|^1-1/p+1/q)^p
and it is asymptotically sharp for slab–type sequences Ω_L. Finally, since
P(Ω_L) ∼ 2 L^N-1 and |Ω_L| ∼ L^N-1, L →∞,
we obtain
λ_p,q(Ω_L) ∼ ( π_p,q )^p 1/L^(p-q)(N-1)/q, L →∞.
The proof is over.
.2cm
As an easy consequence of Theorem <ref>, we can give an alternative proof of the sharp lower bound (<ref>).
.3cm
Proof of Corollary <ref>.
We use the same notation as in Theorem <ref>. Thanks to Part (3) of Lemma <ref>, it holds that
f_i(y)≤ r_Ω, y∈ S_i'.
By combining (<ref>) and (<ref>), we obtain that
∫_Ω d_Ω^p q/p-q dx
= ∑_i=1^h ∫_S_i'∫_0^f_i(y) (d_Ω(y,t))^p q/p-q dt dy
= p-q/pq+p-q∑_i=1^h ∫_S_i' (f_i(y))^p q/p-q+1 dy
≤p-q/pq+p-q r_Ω^p q/p-q( ∑_i=1^h ∫_S_i' f_i(y) dy )
= p-q/pq+p-q r_Ω^p q/p-q( ∑_i=1^h ∫_S_i'∫_0^f_i(y) dt dy )
= p-q/pq+p-q r_Ω^p q/p-q |Ω|.
and, by applying the above estimate in (<ref>), we get the desired lower bound (<ref>). Finally, we obtain that this inequality is asymptotically sharp by using the slab-type sequences
Ω_L = ( -L/2 ,L/2)^N-1× (0,1), L ≥ 1,
as it was proved in <cit.>.
Let Ω⊊ℝ^N be a convex bounded open set, then, from the proof of Corollary <ref>, it follows that
∫_Ω d_Ω^ α dx ≤(|Ω| r_Ω^ α)/(α+1), α>0.
Moreover, such an estimate is sharp and equality is asymptotically attained by slab–type sequences Ω_L, as in Corollary <ref>.
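As a numerical illustration of this remark (not part of the original proof), the Python sketch below approximates ∫_Ω_L d_Ω_L^α dx for the planar boxes Ω_L=(-L/2,L/2)×(0,1) by a midpoint rule and compares it with the bound (|Ω_L| r_Ω_L^α)/(α+1); the ratio approaches 1 as L grows, in agreement with the asymptotic sharpness on slab-type sequences.

```python
import numpy as np

def distance_integral(L, alpha, nx=2000, ny=400):
    """Midpoint-rule approximation of the integral of d_Omega^alpha over
    Omega_L = (-L/2, L/2) x (0, 1) in dimension N = 2."""
    x = -L / 2 + L * (np.arange(nx) + 0.5) / nx
    y = (np.arange(ny) + 0.5) / ny
    X, Y = np.meshgrid(x, y, indexing="ij")
    d = np.minimum.reduce([X + L / 2, L / 2 - X, Y, 1.0 - Y])  # distance to the boundary
    return (d ** alpha).mean() * L          # mean value times |Omega_L| = L

alpha = 3.0            # e.g. alpha = pq/(p-q) with p = 3, q = 3/2
for L in [2.0, 5.0, 20.0, 100.0]:
    bound = L * (0.5 ** alpha) / (alpha + 1.0)   # |Omega| r_Omega^alpha / (alpha + 1)
    value = distance_integral(L, alpha)
    print(f"L = {L:6.1f}: integral ~ {value:.5f}, bound = {bound:.5f}, "
          f"ratio = {value / bound:.4f}")
```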
§ ALTERNATIVE PROOF FOR HERSCH-PROTTER-KAJIKIYA INEQUALITY FOR Λ_P
We now focus on the case 1<p=q<∞. In this section we provide an alternative proof for the Hersch–Protter-type inequality for λ_p in any dimension N ≥ 2.
More precisely, we show the following result:
Let 1 < p < ∞ and let Ω⊊ℝ^N be a convex bounded open set. Then, the following lower bound holds
λ_p(Ω) ≥( π_p/2)^p 1/r_Ω^ p.
Moreover, the estimate is sharp.
.2cm
With this aim, we need some preliminary results.
§.§ Change of variables theorem
First of all, we recall the change of variables formula which follows from <cit.> when K= B_1 (see also <cit.>).
Let Ω⊂ℝ^N be a bounded open connected set of class C^2. Let l:∂Ω→ℝ be the function given by
l(x)= sup{d_Ω(z): z ∈Ω, x ∈Π(z)},
where
Π(z)={ x∈∂Ω: d_Ω(z)=|x-z| },
and let Φ:∂Ω×ℝ→ℝ^N be the map defined by
Φ(x,t)=x +t ν(x), (x,t) ∈∂Ω×ℝ,
where, for every x∈∂Ω, ν(x) is the inward normal unit vector to ∂Ω at x. Then, for every h ∈ L^1(Ω), it holds
∫_Ω h(x) d x = ∫_∂Ω( ∫_0^l(x) h(Φ(x,t)) ∏_i=1^N-1 (1-t k_i(x)) d t ) d ℋ^N-1(x).
Here k_1(x), …, k_N-1(x) are the principal curvatures of ∂Ω at x, i.e., the eigenvalues of the Weingarten map W(x):T_x→ T_x, where T_x denotes the tangent space to ∂Ω at x.
Thanks to the C^2 regularity assumption on Ω, it follows that k_i is continuous on ∂Ω for every i∈{1,…,N-1}.
We note that, when Ω is a polytope, with the notation of Lemma <ref>, we have that l(x)=|x-Π^-1_i(x)|, for every x∈ relint(S_i).
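As a concrete illustration (ours, not taken from the paper), one can check the change of variables formula on the unit disk B_1⊂ℝ^2, where l≡1, ν(x)=-x and the only principal curvature is k≡1; the Python sketch below compares both sides for h(x)=|x|^2.

```python
import numpy as np

# Left-hand side: integral of h(x) = |x|^2 over the unit disk (polar coordinates).
nr, ntheta = 2000, 2000
r = (np.arange(nr) + 0.5) / nr
theta = 2 * np.pi * (np.arange(ntheta) + 0.5) / ntheta
R, _ = np.meshgrid(r, theta, indexing="ij")
lhs = np.sum(R ** 2 * R) * (1.0 / nr) * (2 * np.pi / ntheta)   # h dx = r^2 * r dr dtheta

# Right-hand side: for x on the unit circle, Phi(x, t) = (1 - t) x, l(x) = 1, k(x) = 1,
# so the inner integral is  int_0^1 h((1-t)x) (1 - t) dt = int_0^1 (1-t)^3 dt = 1/4,
# and integrating over the boundary multiplies by its length 2*pi.
t = (np.arange(nr) + 0.5) / nr
inner = np.sum((1 - t) ** 2 * (1 - t)) * (1.0 / nr)
rhs = 2 * np.pi * inner

print(f"LHS = {lhs:.6f}, RHS = {rhs:.6f}, exact value = {np.pi/2:.6f}")
```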
§.§ Weighted Rayleigh quotients
Let w: (0,L) →ℝ be a monotone non-increasing positive function, w≢0. For every 1<p<∞, we define the following weighted Rayleigh quotient
μ_p(w, (0,L)):=inf_ψ∈ C_0^∞((0,L])∖{0}{∫_0^L |ψ'(t)|^p w(t) d t/∫_0^L |ψ(t)|^p w(t) d t}.
When w≡ 1 on (0,L), we will write μ_p(w, (0,L))=μ_p(1, (0,L)).
With the aim of showing that μ_p(1, (0,L))≤μ_p(w, (0,L)) for every monotone non-increasing positive weight w: (0,L) →ℝ, we first prove that there exists a monotone non-decreasing positive minimizer for μ_p(1, (0,L)).
Let 1<p<∞, then
μ_p(1, (0,L))=L^-p(π_p/2)^p.
In particular, there exists a positive and monotone non-decreasing solution of the minimization problem
μ_p(1, (0,L)) = inf_ψ∈ W^1,p((0,L))∖{0}, ψ(0)=0{∫_0^L |ψ'(t)|^p dt/∫_0^L |ψ(t)|^p dt}.
First we note that, thanks to the density of C^∞_0((0,L]) in the subspace {φ∈ W^1,p(0,L): φ (0)=0}, we have that (<ref>) holds.
Moreover, following the proof of <cit.>, we have that
μ_p(1, (0,L))=L^-pμ_p(1, (0,1)) = L^-p(π_p/2)^p,
and there exists a positive symmetric function v∈ W^1,p_0((-1/2, 1/2)) which is monotone non-increasing on (0,1/2),
such that
π_p^p= ∫_0^1/2 |v'(t)|^p d t/∫_0^1/2 |v(t)|^p d t.
Then, it is easy to show that the function ṽ∈ W^1,p( (0,L)) defined by
ṽ(t):=v((L-t)/(2L))
satisfies ṽ(0)=0 and it is a positive monotone non-decreasing minimizer of the problem (<ref>).
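For p=2 the value μ_2(1,(0,L))=(π/(2L))^2 can be checked directly, since the minimizer constructed above reduces to t↦sin(π t/(2L)). The short Python sketch below (an illustration only, not part of the proof) compares the Rayleigh quotient of this profile with that of a few admissible competitors vanishing at 0.

```python
import numpy as np

def rayleigh_quotient(phi, dphi, L, n=200_000):
    """Rayleigh quotient int_0^L |phi'|^2 dt / int_0^L |phi|^2 dt (midpoint rule)."""
    t = (np.arange(n) + 0.5) * (L / n)
    num = np.sum(dphi(t) ** 2) * (L / n)
    den = np.sum(phi(t) ** 2) * (L / n)
    return num / den

L = 3.0
mu = (np.pi / (2 * L)) ** 2      # mu_2(1,(0,L)) = L^{-2} (pi_2/2)^2, with pi_2 = pi

candidates = {
    "sin(pi t / 2L)  (minimizer)": (lambda t: np.sin(np.pi * t / (2 * L)),
                                    lambda t: np.pi / (2 * L) * np.cos(np.pi * t / (2 * L))),
    "t               (linear)":    (lambda t: t, lambda t: np.ones_like(t)),
    "t (2L - t)      (parabola)":  (lambda t: t * (2 * L - t), lambda t: 2 * L - 2 * t),
}
print(f"mu_2(1,(0,{L})) = {mu:.6f}")
for name, (phi, dphi) in candidates.items():
    print(f"{name}: quotient = {rayleigh_quotient(phi, dphi, L):.6f}")
```

The minimizer attains the value (π/(2L))^2, while the competitors give strictly larger quotients.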
Now we are in a position to show the following minimization result.
Let 1 < p < ∞ and let w be a positive and monotone non-increasing function on (0,L), then
μ_p(w, (0,L)) ≥μ_p(1, (0,L)).
First, we show that, for every positive and monotone non-increasing weight w∈ L^∞((0,L)), it holds
μ_p(1, (0,L)) ∫_0^L |φ|^p w d t ≤∫_0^L |φ'|^p w d t, for every φ∈ C^∞_0((0,L]).
Indeed, let v ∈ W^1,p((0,L)) be a positive and monotone non-decreasing eigenfunction for μ_p(1, (0,L)) whose existence is ensured by Lemma <ref>. Then v is a weak solution of
-(|v'|^p-2 v')'=μ_p(1, (0,L)) v^p-1, (0,L),
v(0)=0.
Moreover, fixed a mollifier δ∈ C^∞_0(ℝ), given by
δ(x):=
e^1/(|x|^2-1), |x|<1,
0, |x| ≥ 1,
for 0<ε<1, we define
δ_ε(x) = 1/ε δ(x/ε) ∈ C^∞(ℝ).
Then w_ε=w*δ_ε∈ C^∞((ε, L-ε)) and, as ε→ 0,
w_ε pointwise converges to w a. e. on (0,L).
Moreover, w_ε' ≤ 0. Indeed, for every t, t' ∈ (ε, L-ε) such that t>t', we have that
w_ε(t)=(w * δ_ε)(t) = ∫_0^L δ_ε(t-y) w(y) dy =1/ε∫_0^L δ((t-y)/ε) w(y) dy
= 1/ε∫_t-ε^t+εδ((t-y)/ε) w(y) dy = ∫_-1^1δ(z) w(t-ε z) dz
≤∫_-1^1δ(z) w(t'-ε z) dz = w_ε(t'),
where in the last inequality we use that δ(z) > 0, for every z ∈ (-1,1) and w is a monotone non-increasing function.
Now, by using that w_ε' ≤ 0 and v'≥ 0 a. e. on (0,L) (thanks to Lemma <ref>), for every φ∈ C^∞_0((0,L]), we have that
μ_p(1, (0,L)) ∫_0^L |φ|^p w_ε d t = μ_p(1, (0,L)) ∫_0^L v^p-1|φ|^p/v^p-1 w_ε d t
= ∫_0^L |v'|^p-2 |v'| ( |φ|^p/v^p-1 w_ε)' d t
= ∫_0^L |v'|^p-2 v' ( |φ|^p/v^p-1)' w_ε d t + ∫_0^L |v'|^p-2 v' |φ|^p/v^p-1 w_ε' d t
≤∫_0^L |v'|^p-2 v' ( |φ|^p/v^p-1)' w_ε d t.
By applying Picone's inequality on the last integral (see <cit.>), the above inequality implies
μ_p(1, (0,L)) ∫_0^L |φ|^p w_ε d t ≤∫_0^L |φ'|^p w_ε d t.
Since ‖w_ε‖_L^∞((0,L))≤‖w‖_L^∞((0,L)), as ε→0, by using the Dominated Convergence Theorem, we obtain that w satisfies (<ref>).
Now, we remove the assumption that w is bounded. For every M>0, we define w_M:= min{w, M}∈ L^∞((0,L)).
By applying (<ref>), we have that
μ_p(1, (0,L)) ∫_0^L |φ|^p w_M d t ≤∫_0^L |φ'|^p w_M d t ≤∫_0^L |φ'|^p w dt, for every φ∈ C^∞_0((0,L])
and, sending M→∞, we get that also w satisfies (<ref>).
Finally, passing to the infimum on functions φ∈ C^∞_0((0,L]) in (<ref>), we obtain the desired estimate
(<ref>).
Now, by applying the previous results, we are in a position to show Theorem <ref>.
.3cm
Proof of Theorem <ref>. We divide the proof into two parts.
.2cm
Part 1: inequality for C^2 convex bounded sets.
We first suppose that Ω is a convex bounded open set of class C^2.
Let u∈ C^∞_0(Ω) then, by using formula (<ref>), we have that
∫_Ω |u(x)|^p dx = ∫_∂Ω( ∫_0^l(x) |u(x+ t ν(x))|^p ∏_i=1^N-1 (1-tk_i(x)) dt ) dℋ^N-1(x)
and
∫_Ω |∇ u(x)|^p dx = ∫_∂Ω( ∫_0^l(x) |∇ u(x+ t ν(x))|^p ∏_i=1^N-1 (1-t k_i(x)) dt ) dℋ^N-1(x).
Now we fix x ∈∂Ω and let v∈ C^∞_0((0, l(x)]) be defined by v(t):=u(x+tν(x)) for every t∈ (0, l(x)]. Furthermore we introduce the weight w_x: (0,l(x))→ℝ given by
w_x(t)=∏_i=1^N-1 (1-t k_i(x))∈ L^∞(0,l(x)).
It is easy to verify that the weight w_x is monotone non-increasing. Moreover, we note that, by <cit.>, l is a positive and continuous function on ∂Ω.
In particular, there exists z=z(x) ∈Ω, such that x ∈Π(z) and l(x) = d_Ω(z). Hence
1 - t k_i(x) >1 - d_Ω(z) k_i(x) ≥ 0, t ∈ (0, l(x)),
where the last inequality follows from <cit.>.
In particular, this implies that w_x>0 on (0, l(x)).
Since all the hypotheses of Theorem <ref> are satisfied, we have that
∫_0^l(x) |u(x+tν (x))|^p w_x(t) dt = ∫_0^l(x) |v(t)|^p w_x(t) dt
≤1/μ_p(w_x, (0,l(x)))∫_0^l(x) |v'(t)|^p w_x(t) dt
≤1/μ_p(1, (0,l(x)))∫_0^l(x) |v'(t)|^p w_x(t) dt.
Then, applying Lemma <ref> and taking into account that l(x) ≤ r_Ω for every x ∈∂Ω, we obtain that
∫_0^l(x) |u(x+tν (x))|^p w_x(t) dt ≤(2/π_p)^p l(x)^ p∫_0^l(x) |v'(t)|^p w_x(t) dt
≤( 2/π_p)^p r_Ω^ p ∫_0^l(x) |∇ u(x+tν(x))|^p w_x(t) d t, for every x∈∂Ω.
By exploiting the above estimate in (<ref>) and then using (<ref>), we get
∫_Ω |u|^p d x ≤( 2/π_p)^p r_Ω^ p( ∫_∂Ω( ∫_0^l(x) |∇ u(x+tν(x))|^p w_x(t) d t ) d ℋ^N-1(x) ) = ( 2/π_p)^p r_Ω^ p∫_Ω |∇ u|^p d x,
that is
∫_Ω |∇ u|^p dx /∫_Ω |u|^p dx≥( π_p/2)^p 1/r_Ω^ p, for every u ∈ C^∞_0(Ω).
Taking the infimum on C^∞_0(Ω), we obtain that (<ref>) holds when Ω is a convex bounded open set of class C^2.
.2cm
Part 2: inequality for convex bounded sets.
We will apply an approximation argument to show the validity of (<ref>) for every convex bounded open set. Let Ω⊂ℝ^N be a convex bounded open set, then, thanks to <cit.>, there exists a sequence {C_k}_k ∈ℕ⊂ℝ^N of convex bounded closed sets of class C^2
such that
* C_k+1⊆ C_k⊆ C_1, for every k∈ℕ;
.1cm
* Ω⊆ C_k and
d_ℋ(C_k, Ω) = min{λ≥ 0 : Ω⊆ C_k + λ B_1,C_k ⊆Ω + λ B_1}≤1/k,
i. e. {C_k}_k ∈ℕ converges to Ω, as k →∞, in the sense of Hausdorff.
This implies that, for every ε>0, there exists k_1=k_1(ε) ∈ℕ, such that
(1-ε) Ω⊆ C_k, k ≥ k_1,
which leads to
(1-ε) Ω⊆int(C_k), k ≥ k_1.
Let r_k be the inradius of Ω_k:= int(C_k), for every k ∈ℕ. Then, thanks to the monotonicity of λ_p with respect to the inclusion of sets and by using Part (1) on Ω_k, we obtain that
λ_p(Ω)/(1-ε)^p≥λ_p(Ω_k) ≥( π_p/2)^p 1/r_k^ p.
Since
Ω⊆Ω_k⊆ C_1, k∈ℕ,
we can consider the co-Hausdorff distance between Ω and Ω_k, given by
d^ℋ(Ω_k, Ω)= d_ℋ(C_1∖Ω_k, C_1∖Ω),
and, being Ω and Ω_k convex open sets, we also have that
d^ℋ(Ω_k, Ω)=d_ℋ(∂Ω_k, ∂Ω)=d_ℋ(C_k, Ω)≤1/k.
Hence, as k→∞, the sequence {Ω_k}_k ∈ℕ converges to Ω in the sense of co-Hausdorff. By using <cit.>, we have that
r_k → r_Ω, k →∞.
Hence, by sending k →∞ and then ε→ 0 in (<ref>), we finally get (<ref>).
.2cm
Finally, we notice that the inequality is sharp. Indeed, the equality can be attained by different classes of sets, for example by infinite slabs, such as ℝ^N-1× (0,1), or asymptotically by the family of collapsing pyramids C_α=convex hull((-1,1)^N-1∪{(0, …, 0, α)}), as proved in <cit.>. The proof is over.
§ MAKAI'S CONSTANTS FOR NON-CONVEX SETS
As pointed out in the introduction, having defined C_p,q and C_2,q as in (<ref>) and (<ref>), the natural questions that arise are whether
.1cm
* C_p,q<C_p,q, when p>N and 1≤ q<N;
.1cm
* C_p,p<C_p,p, when p>N;
.1cm
* C_2,q<C_2,q, when N=2 and 1≤ q < 2.
In this section, we give some partial answers to the questions above.
§.§ The case of general open sets for 1≤ q<N<p.
We focus on the class of general open sets of ℝ^N with the aim of showing that, for every fixed 1≤ q<N, there exists p= p(q)>N such that
C_p,q<C_p,q, p∈ (q,p].
We consider the infinite fragile tower set 𝒯⊂ℝ^N defined as in <cit.>. By construction, it satisfies the following properties:
.2cm
* d_𝒯∈ L^1(𝒯) ∩ L^∞(𝒯);
.2cm
* λ_p,q(𝒯) = 0, for every 1 ≤ q<p≤ N,
Moreover, thanks to (<ref>), λ_p,q(𝒯) > 0, for every p>N and 1≤ q≤ p. In order to show (<ref>), it is sufficient to prove that
lim_p ↘ Nλ_p,q(𝒯) ( ∫_𝒯 d_𝒯^p q/p-q dx)^p-q/q < lim_p ↘ N C_p,q.
With this aim, we observe that the following convergences hold
lim_p ↘ N( ∫_𝒯 d_𝒯^p q/p-q dx)^p-q/q = ( ∫_𝒯 d_𝒯^N q/N-q dx)^N-q/q
and
lim_p↘ N C_p,q = lim_p ↘ N( π_p,q/2)^p ( p-q/pq+p-q)^p-q/q = C_N,q.
The last limit follows by taking into account that, as computed in <cit.>,
π_p,q=2/q (1+q/p')^1/q (1+p'/q)^-1/p B(1/q,1/p'),
where p'=p/(p-1) and B is the Euler Beta function, which is continuous on (0,+∞). If we show that
lim sup_p ↘ Nλ_p,q(𝒯) ≤λ_N,q(𝒯),
by using (<ref>), (<ref>) and (<ref>), we obtain that
lim_p ↘ Nλ_p,q(𝒯) ( ∫_𝒯 d_𝒯^p q/p-q dx)^p-q/q≤λ_N,q(𝒯) ( ∫_𝒯 d_𝒯^N q/N-q dx)^N-q/q =0 < C_N,q = lim_p ↘ N C_p,q,
which gives the desired conclusion.
In order to prove the claim (<ref>), we note that, for every 1≤ r< ∞ and for every open set Ω⊆ℝ^N, it holds
lim_p ↘ r‖∇φ‖_L^p(Ω)^p = ‖∇φ‖_L^r(Ω)^r, φ∈ C^∞_0(Ω).
Indeed, if φ∈ C^∞_0(Ω), then, for every p> r, it follows that
∫_Ω |∇φ|^p dx = ∫_Ω |∇φ|^p-r |∇φ|^r dx ≤‖∇φ‖_L^∞(Ω)^p-r∫_Ω |∇φ|^r dx,
which implies
lim sup_p ↘ r‖∇φ‖_L^p(Ω)^p ≤‖∇φ‖_L^r(Ω)^r.
On the other hand, by Fatou's Lemma, we also have that
∫_Ω |∇φ|^r dx ≤lim inf_p ↘ r∫_Ω |∇φ|^p dx.
By applying (<ref>) with r=N, we obtain
lim sup_p ↘ Nλ_p,q(𝒯) ≤lim sup_p ↘ N∫_𝒯 |∇φ|^p dx/(∫_𝒯 |φ|^q dx)^p/q=∫_𝒯 |∇φ|^N dx/(∫_𝒯 |φ|^q dx)^N/q, φ∈ C^∞_0(𝒯),
and by taking the infimum on C^∞_0(𝒯), this easily implies (<ref>).
.2cm
Let 1 ≤ q < ∞.
We notice that
lim_p →∞( C_p,q)^1/p=lim_p →∞(C_p,q)^1/p=1.
Indeed, by <cit.>, it holds that
lim_p →∞π_p,q/2 = 1/2·1/(∫_0^1 (min{t,1-t})^q dt)^1/q=(q+1)^1/q,
which implies
lim_p→∞(C_p,q)^1/p = lim_p →∞[ ( π_p,q/2)^p ( p-q/pq+p-q)^p-q/q]^1/p=1.
Then, taking into account also (<ref>), it easily follows that
1 ≤lim inf_p →∞(C_p,q)^1/p≤lim sup_p →∞(C_p,q)^1/p≤lim_p→∞(C_p,q)^1/p = 1.
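The behaviour of π_p,q recalled in this section can be checked numerically. The Python sketch below (an illustration only) evaluates the Beta-function expression for π_p,q, verifies that for p=q it agrees with the classical closed form π_p=2π(p-1)^1/p/(p sin(π/p)) (a known formula, not restated in the paper), and shows that π_p,q/2 approaches (q+1)^1/q as p→∞.

```python
import math

def beta(a, b):
    # Euler Beta function via the Gamma function
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def pi_pq(p, q):
    """Beta-function expression for pi_{p,q} as recalled in the text."""
    pp = p / (p - 1.0)                       # conjugate exponent p'
    return (2.0 / q) * (1.0 + q / pp) ** (1.0 / q) \
           * (1.0 + pp / q) ** (-1.0 / p) * beta(1.0 / q, 1.0 / pp)

def pi_p(p):
    """Classical one-dimensional constant pi_p = 2*pi*(p-1)^{1/p} / (p*sin(pi/p))."""
    return 2.0 * math.pi * (p - 1.0) ** (1.0 / p) / (p * math.sin(math.pi / p))

# p = q: the two expressions agree (in particular pi_{2,2} = pi)
for p in [1.5, 2.0, 3.0, 10.0]:
    print(f"p = q = {p}: pi_pq = {pi_pq(p, p):.8f}, pi_p = {pi_p(p):.8f}")

# p -> infinity with q fixed: pi_{p,q}/2 tends to (q+1)^{1/q}
q = 1.5
for p in [5.0, 50.0, 500.0]:
    print(f"q = {q}, p = {p}: pi_pq/2 = {pi_pq(p, q)/2:.6f}, "
          f"(q+1)^(1/q) = {(q + 1.0) ** (1.0 / q):.6f}")
```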
.2cm
§.§ The case of simply connected open sets for p=N=2
In this subsection, we restrict ourselves to the case N=2 and we prove that there exist 1 ≤q<2 and p> 2 such that
C_2,q< C_2,q, q ∈ [q,2],
and
C_p,p< C_p,p, p ∈ [2,p].
First of all, we give an explicit example of a simply connected open set Ω⊂ℝ^2 such that
λ_2(Ω) r_Ω^2 < C_2,2= π^2/4.
Let A ⊂ℝ^2 be an annulus from which we remove a segment, that is
A={(x_1,x_2) ∈ℝ^2 : 1 < √(x_1^2+x_2^2) < 2 }∖{(x_1,0) ∈ℝ^2 : 1<x_1<2 }.
Then it holds that λ_2(A)=π^2 (see <cit.>, page 551).
Now, for a fixed 0<ε<1, we consider the simply connected open set
Ω= A ∪{(x_1,x_2) ∈ℝ^2 : √(4-x_2^2)≤ x_1<2+ε, -ε <x_2<ε}∖{(x_1,0) ∈ℝ^2 : 1<x_1 <2+ϵ}.
Since A ⊂Ω and |Ω∖ A|≠ 0, we get that
λ_2(Ω) < λ_2(A) = π^2.
Since r_A=r_Ω=1/2, the above inequality implies (<ref>).
Now, we recall that, by <cit.>, it holds
lim_q ↗ 2λ_2,q(Ω) = λ_2(Ω) and lim_q ↗ 2π_2,q=π,
hence, by combining the above limits with (<ref>), we get
lim_q ↗ 2λ_2,q(Ω) ( ∫_Ω d_Ω^2 q/2-q dx)^2-q/q = λ_2(Ω) r_Ω^ 2 < π^2/4 = lim_q ↗ 2 C_2,q .
This gives the desired conclusion (<ref>).
Finally, we note that, by applying (<ref>) with r=2, for every φ∈ C^∞_0(Ω), it holds that
lim sup_p ↘ 2λ_p(Ω) ≤lim sup_p ↘ 2∫_Ω |∇φ|^p dx/∫_Ω |φ|^p dx≤lim sup_p ↘ 2∫_Ω |∇φ|^p dx/|Ω|^2-p/2(∫_Ω |φ|^2 dx)^p/2 = ∫_Ω |∇φ|^2 dx/∫_Ω |φ|^2 dx,
and, by taking the infimum on φ∈ C^∞_0(Ω), we get that
lim sup_p ↘ 2λ_p(Ω) ≤λ_2(Ω).
Hence,
lim sup_p ↘ 2λ_p(Ω) r^ p_Ω≤λ_2(Ω) r_Ω^ 2< π^2/4= lim_p ↘ 2 C_p,p,
which implies the desired conclusion (<ref>).
AH W. Allegretto, Y. X. Huang, A Picone's identity for the p-Laplacian and applications, Nonlinear Anal., 32 (1998), 819–830.
Ancona A. Ancona, On strong barriers and inequality of Hardy for domains in ℝ^N, J. London Math. Soc. (2) 34, (1986), 274-–290.
AU D. H. Armitage, Ü. Kuran, The Convexity of a Domain and the Superharmonicity of the Signed Distance Function, Proc. Amer. Math. Soc., 93 (1985), 598–600.
Av F. G. Avkhadiev, Hardy type inequalities in higher dimensions with explicit estimate of constants, Lobachevskii J. Math., 21 (2006), 3–31.
vBe M. van den Berg, Estimates for the torsion function and Sobolev constants, Potential Anal., 36 (2012), 607–616.
Bra1 L. Brasco, On principal frequencies and isoperimetric ratios in convex sets, Ann. Fac. Sci. Toulouse Math. (6), 29, (2020), 977–1005.
Bra2 L. Brasco, On principal frequencies and inradius in convex sets, Bruno Pini Math. Anal. Semin. 9, Univ. Bologna, Alma Mater Stud., Bologna, (2018).
BM L. Brasco, D. Mazzoleni, On principal frequencies, volume and inradius in convex sets, NoDEA Nonlinear Differential Equations Appl., 27, (2020).
BPZ1 L. Brasco, F. Prinari, A. C. Zagati, A comparison principle for the Lane–Emden equation and applications to geometric estimates, Nonlinear Analysis, 220, (2022), 112847.
BPZ2 L. Brasco, F. Prinari, A. C. Zagati, Sobolev embeddings and distance functions, to appear on Advances in Calculus of Variations, (2023).
BBP L. Briani, G. Buttazzo, F. Prinari, A Shape Optimization Problem on Planar Sets with Prescribed Topology, J Optim Theory Appl 193, (2022), 760–784.
BGM G. Buttazzo, S. Guarino Lo Bianco, M. Marini, Sharp estimates for the anisotropic torsional rigidity and the principal frequency, J. Math. Anal. Appl., 457 (2018), 1153–1172.
CM G. Crasta, A. Malusa, The distance function from the boundary in a Minkowski space, Trans. Amer. Math. Soc. 359, (2007), 5725–5759.
DDG F. Della Pietra, G. Di Blasio N. Gavitone, Sharp estimates on the first Dirichlet eigenvalue of nonlinear elliptic operators via maximum principle, Adv. Nonlinear Anal., 9 (2020), 278–291.
DPGG F. Della Pietra, N. Gavitone, S. Guarino Lo Bianco, On functionals involving the torsional rigidity related to some classes of nonlinear operators, J. Differential Equations, 265 (2018), 6424–6442.
Eggl H. G. Eggleston, Convexity, Cambridge Tracts in Math and Mathematical Phisics, 47, Cambridge University Press, 1958.
GPP D. Goel, Y. Pinchover, G. Psaradakis, On weighted L^p-Hardy inequality on domains in ℝ^N, Special Issue on Analysis and PDE Dedicated to Professor Shmuel Agmon, Pure Appl. Funct. Anal., 7(3), (2022), 1025–1023.
He J. Hersch, Sur la fréquence fondamentale d'une membrane vibrante: évaluations par défaut et principe de maximum, Z. Angew. Math. Phys., 11, (1960), 387–413.
Ka1 R. Kajikiya, A priori estimate for the first eigenvalue of the p-Laplacian, Differential Integral Equations, 28 (2015), 1011–1028.
Lewis J. L. Lewis, Uniformly fat sets, Trans. Amer. Math. Soc., 308 (1988), 177–196.
Makai E. Makai, On the principal frequency of a membrane and the torsional rigidity of a beam, Studies in Math. Analysis and Related Topics, Stanford Univ. Press, Stanford, (1962), 227–231.
makai65 E. Makai, A lower estimation of the principal frequencies of simply connected membranes, Acta Mathematica Academiae Scientiarum Hungaricae 16, (1965), 319–323 .
MMP M. Marcus, V. J. Mizel, Y. Pinchover, On the best constant for Hardy's inequality in ℝ^n, Trans. Amer. Math. Soc., 350 (1998), 3237–3255.
Oss R. Osserman, A note on Hayman's theorem on the bass note of a drum, Commentarii Mathematici Helvetici, 52, (1977), 545–555.
Pr M. H. Protter, A lower bound for the fundamental frequency of a convex region, Proc. Amer. Math. Soc., 81, (1981), 65–70.
S R. Schneider, Convex Bodies: The Brunn-Minkowski Theory, Encyclopedia Math. Appl., Cambridge University Press, Cambridge, 44, 1993.
T L. Tartar, An introduction to Sobolev spaces and interpolation spaces, Lecture Notes of the Unione Matematica Italiana, 3, Springer, Berlin; UMI, Bologna, 2007.
Ta G. Talenti, Best constant in Sobolev inequality, Ann. Mat. Pura Appl. (4), 110 (1976), 353–372.
Wan A. Wannebo, Hardy inequalities, Proc. Amer. Math. Soc. 109 (1990), 85–95.
|
http://arxiv.org/abs/2307.05581v1 | 20230710142653 | Exploring the Dynamics of the Specialty Insurance Market Using a Novel Discrete Event Simulation Framework: a Lloyd's of London Case Study | [
"Sedar Olmez",
"Akhil Ahmed",
"Keith Kam",
"Zhe Feng",
"Alan Tua"
] | q-fin.GN | [
"q-fin.GN",
"cs.CE",
"cs.MA"
] |
Exploring the Dynamics of the Specialty Insurance Market Using a Novel Discrete Event Simulation Framework: a Lloyd's of London Case Study
Sedar Olmez, Akhil Ahmed, Keith Kam, Zhe Feng, Alan Tua
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This research presents a novel Discrete Event Simulation (DES) of the Lloyd's of London specialty insurance market, exploring complex market dynamics that have not been previously studied quantitatively. The proof-of-concept model allows for the simulation of various scenarios that capture important market phenomena such as the underwriting cycle, the impact of risk syndication, and the importance of appropriate exposure management. Despite minimal calibration, our model has shown that it is a valuable tool for understanding and analysing the Lloyd's of London specialty insurance market, particularly in terms of identifying areas for further investigation for regulators and participants of the market alike. The results generate the expected behaviours: syndicates (insurers) are less likely to go insolvent if they adopt sophisticated exposure management practices, and catastrophe events lead to more defined patterns of cyclicality and cause syndicates to substantially increase the premiums they offer. Lastly, syndication enhances the accuracy of actuarial price estimates and narrows the divergence among syndicates. Overall, this research offers a new perspective on the Lloyd's of London market and demonstrates the potential of individual-based modelling (IBM) for understanding complex financial systems.
§ ACKNOWLEDGEMENTS
A special thanks to the Alan Turing Institute's Internship Network (TIN) for supporting the project, and to Accenture for their partnership with academia. We thank Ki and Brit syndicates for bridging the relationship between academia and industry and for providing the resources to undertake the research project. A special thanks to Reuben Thomas-Davis for developing and providing access to the HADES framework.
§ INTRODUCTION
The specialty insurance market is a large-scale complex system with many uncertainties, complex business relationships and non-linear dynamics and interactions amongst participants. To quantify the behaviour of this complex system, researchers have turned to various traditional approaches ranging from time-series methods <cit.> to differential-equation-based mathematical models <cit.>. These top-down, aggregate approaches to modelling complex systems may fail to capture the interactions which occur at a micro-scale and which ultimately lead to large-scale emergent phenomena. Indeed, many researchers attest to this challenge within the specialty insurance modelling literature. For instance, the “Underwriting Cycle”, an important phenomenon where the market undergoes periods of high profitability and low competition followed by periods of low profitability and high competition among insurers (in the insurance literature known as the hardening and softening of the market), is seldom accurately captured using the aforementioned traditional methods. Boyer et al. demonstrate the challenges of modelling this phenomenon in their paper <cit.>, where they show time-series methods fail to model this important stylised fact of the specialty insurance market. All of the above points reflect the need for an alternative approach; this is where Individual-Based Modelling (IBM) approaches can help.
Individual-Based Models, which encompass modelling frameworks such as Discrete Event Simulations (DES) to Agent-Based Models (ABMs), can alleviate the above challenges posed by traditional methods utilised in the study of insurance markets. For example, these models allow complex systems to be built from the bottom-up, focusing on interactions at the micro-scale between autonomous “agents" which can represent anything from people, organisations to more granular entities such as molecules. Since the late 90's, IBM applications have grown in popularity, starting from computational social science to ecological studies, biology and environmental sciences. Given its influence in various research fields, the IBM method continues to advance across disciplines, to more relevant areas of research including the modelling of economic markets such as housing, insurance and energy <cit.>.
The IBM methodology can enable researchers to observe emergent phenomena at the aggregate level which arise from the micro-level interactions of autonomous heterogeneous agents. Each type of agent can embed behaviours and actions reflective of real-world concepts. In fact, these powerful IBM methodologies have reached industry practitioners within the wider retail insurance market, evidencing the appeal of such approaches in modelling market dynamics. The Institute and Faculty of Actuaries (IFA), the professional body and regulator of actuaries in the UK, have gone so far as to start an “Agent-Based Modelling Working Party” where workshops have taken place and research such as <cit.> has been presented. Moreover, the Bank of England has published working papers utilising the approach <cit.>. On the other hand, academics from the Bayes Business School <cit.> and The Institute for New Economic Thinking <cit.> have all applied ABMs to the insurance market with significant success.
Given the spotlight on the modelling of insurance markets and a lack thereof in specialty insurance markets, this paper will utilise the findings from the existing ABM insurance literature. We combine these findings with the expert knowledge gained through workshops and interviews with underwriters, actuaries, capital modellers, portfolio analysts, brokers and algorithm engineers at Ki and Brit syndicates (insurers) within Lloyd's of London (the oldest specialty insurance marketplace). These findings allow us to create a novel, discrete event simulation (DES) to study the emergent properties and drivers of the specialty insurance market at various spatio-temporal resolutions. The model proposed in this research article is novel in several aspects:
* The model utilises a DES framework open-sourced by Ki insurance (HADES). HADES consists of two main components (a minimal illustrative sketch of this process/event pattern is given after this list):
* Processes - these components are responsible for performing some actions and subsequently emitting events as a response to input events, i.e., Broker process responding to Day, Month or Year events.
* Events - a piece of information that can be altered given its interactions with processes.
* The model incorporates functionality for different event types, such as catastrophe losses (typically low-frequency, high-severity events) and attritional losses (high-frequency, low-severity events). While previous research endeavours have only sought a single type of loss.
* Interviews and workshops conducted with experts within Ki and Brit insurance have shaped the conceptualisation of the agents represented in the model.
* The model simulates the dynamics of the Lloyd's of London specialty insurance market by incorporating unique features such as lead and follow insurers.
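To make the process/event pattern concrete, the following Python sketch shows a minimal, illustrative event loop in the spirit of the HADES components described above. The class names, event names and parameters here are our own simplifications and are not the actual HADES API.

```python
import heapq
import math
import random
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    time: int                                   # day on which the event fires
    name: str = field(compare=False)
    payload: dict = field(default_factory=dict, compare=False)

def sample_poisson(lam):
    # Knuth's algorithm; adequate for the small rates used here
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

class BrokerProcess:
    """Responds to 'Day' events by emitting 'RiskBroughtToMarket' events."""
    def __init__(self, risks_per_day=2.0):
        self.risks_per_day = risks_per_day
    def handle(self, event):
        if event.name != "Day":
            return []
        n_risks = sample_poisson(self.risks_per_day)
        return [Event(event.time, "RiskBroughtToMarket", {"risk_id": f"{event.time}-{i}"})
                for i in range(n_risks)]

class SyndicateProcess:
    """Responds to new risks by emitting a quote event."""
    def __init__(self, name):
        self.name = name
    def handle(self, event):
        if event.name != "RiskBroughtToMarket":
            return []
        premium = round(random.uniform(80, 120), 2)
        return [Event(event.time, "QuoteOffered",
                      {"syndicate": self.name, "risk_id": event.payload["risk_id"],
                       "premium": premium})]

# Minimal event loop: a regular timer seeds one 'Day' event per day.
processes = [BrokerProcess(), SyndicateProcess("SYN-1"), SyndicateProcess("SYN-2")]
queue = [Event(day, "Day") for day in range(3)]
heapq.heapify(queue)
while queue:
    ev = heapq.heappop(queue)
    print(ev.time, ev.name, ev.payload)
    for proc in processes:
        for new_ev in proc.handle(ev):
            heapq.heappush(queue, new_ev)
```

In the full model, further processes (the broker-syndicate network, the central risk repository and the syndicates) subscribe to events in the same way, which is what makes the architecture modular.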
This research article will conduct four experiments with varying complexity to demonstrate the emergent properties of the specialty insurance market <cit.>. The first experiment, explores the interactions between syndicates and brokers with simplistic pricing methods to demonstrate a profitable market with low volatility. The second experiment incorporates catastrophe events that increase volatility in the market which can lead to insolvencies and pronounced cyclicality. Thirdly, the decision-making of syndicates is advanced with the introduction of improved exposure management, where syndicates become better equipped in dealing with catastrophe losses. Finally, we introduce the syndication of risk in the marketplace, via lead-follow mechanics, which significantly reduces volatility and tightly couples syndicates' loss experiences.
Some initial findings from the model have demonstrated stylised facts observed in the specialty insurance market, Lloyd's of London, thus validating the model. For example, when insurers price, based on past loss history with uniform risks and attritional losses only, we find that premium prices tend towards the fair analytical price but with substantial variance. When catastrophe events occur which affect multiple risks simultaneously, this leads to large industry losses, which cause step drops in syndicate capitals and increases in premium prices immediately afterwards. When syndicates are able to use exposure management processes to manage tail risks, this leads to smaller and better placed syndicate portfolios. Lastly, when lead and follow quote negotiation and line sizes are applied to premiums and claims, this improves the actuarial price estimates and reduces the spread between syndicates. Syndicate performance becomes more correlated as a result.
The following sections of the article are the Literature Review <ref>, where pre-existing insurance-based literature is reviewed and strengths/weaknesses highlighted. The Model Description <ref> describes every process, event and underlying functionality of the model. The Results & Discussion <ref> introduces the specialty insurance market in the real-world, then delves into the aforementioned four experiment scenarios and discusses the observed quantitative and qualitative results. Lastly, the Conclusion <ref> highlights the findings from the model, how it contributes to specialty insurance research and future avenues to be explored.
§ LITERATURE REVIEW
The specialty insurance market differs from the conventional retail insurance market. The main difference is that the risks are intrinsically complex, offering cover for perils such as kidnap and ransom, cybersecurity breaches, terrorist attacks and political violence, commercial property damage and personal accident. These risks typically require specialist/expert assessment by insurers/syndicates <cit.>. Simplifying the process extensively, a client reaches out to a broker with a risk, and the broker brings the risk to the Lloyd's of London marketplace. Underwriters that operate within the Lloyd's market are then approached; they utilise their subject matter expertise, actuarial pricing strategies and portfolio management approaches to decide if they should offer a lead line, a follow line or no line at all. If an underwriter offers a lead line, they underwrite a portion of the risk and tender a policy as a lead insurer; usually the lead covers a bigger portion of the risk and is therefore paid a bigger share of the premiums, subsequently taking on more risk. Every risk is insured by a single lead insurer who usually shapes the policy and multiple follow insurers who agree to the terms and offer a line size. Once the risk has been covered proportionally across many syndicates, premiums are distributed depending on the line size agreed by all parties. Conversely, if a claim comes through, then everyone pays out proportionally, see <cit.>.
The hardening and softening of the insurance market has been an area of academic enquiry over the years. The specialty insurance market can be a very costly, dynamic and highly volatile market to operate within; one characteristic of the market is the underwriting cycle. In short, when capacity is high, rates and profits decrease; conversely, when losses occur, insurers go insolvent or withdraw and capacity decreases, with subsequent upward pressure on premium levels to generate an acceptable return on investment. This is a typical transition from a soft to a hard market <cit.>, which attracts new capacity. The cycle then repeats.
The majority of the literature has focused on this underwriting cycle phenomenon and has utilised agent-based, mathematical and time-series models to investigate the emergence and drivers of these cycles <cit.>. Surprisingly, there seem to be contradictory views with regard to the causes of market cycles; for example, <cit.> argues that institutional lags are the main reason behind the liability insurance market cycle in the mid-1980s in North America when analysing insurance company level data. However, when analysing market-level data, they claim the opposite. Most in-depth literature reviews <cit.> agree that research into underwriting cycles intensified in the mid-1980s due to the “liability crisis”. During this time, an interesting discovery made by <cit.> was that some lines of insurance exhibit quasi-cyclical behaviour while others cannot be discerned from white-noise fluctuations, i.e., a random walk. Due to these discoveries, Venezian et al. proposed auto-regressive (AR) models which replicated these cyclical behaviours. Following the findings from <cit.>, most researchers primarily utilised AR models to investigate cyclicality in the insurance market. Furthermore, to ensure these models lead to actionable insights empirically, <cit.> notes that the data used for analysing market cycles has traditionally been captured at the aggregate, industry level at a yearly temporal resolution; to remedy this, they argue for comprehensive datasets captured at a more granular spatio-temporal resolution.
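For context, the cycle-period diagnostic used throughout this strand of the literature can be reproduced in a few lines of Python (our own illustrative sketch with synthetic data, not taken from the cited studies): an AR(2) model y_t = a_0 + a_1 y_{t-1} + a_2 y_{t-2} + e_t admits a cycle when a_1^2 + 4a_2 < 0, with implied period 2π/arccos(a_1/(2√(-a_2))).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "loss ratio" series with a built-in cycle of roughly 8 years.
true_a1, true_a2 = 2 * 0.85 * np.cos(2 * np.pi / 8), -0.85 ** 2
y = np.zeros(300)
for t in range(2, len(y)):
    y[t] = true_a1 * y[t - 1] + true_a2 * y[t - 2] + rng.normal(scale=0.1)

# Fit y_t = a0 + a1*y_{t-1} + a2*y_{t-2} by ordinary least squares.
X = np.column_stack([np.ones(len(y) - 2), y[1:-1], y[:-2]])
a0, a1, a2 = np.linalg.lstsq(X, y[2:], rcond=None)[0]

if a1 ** 2 + 4 * a2 < 0:
    period = 2 * np.pi / np.arccos(a1 / (2 * np.sqrt(-a2)))
    print(f"estimated a1 = {a1:.3f}, a2 = {a2:.3f}, implied cycle period ~ {period:.1f} years")
else:
    print("no cycle implied by the fitted AR(2) coefficients")
```

On real, short, annually sampled series this estimate is fragile, which is exactly the limitation of the AR approach that motivates the individual-based alternative pursued in this paper.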
There are several hypothesised causes of underwriting cycles discerned by researchers; these are:
* The flow of capital into and out of the market in response to market conditions such as major catastrophe events (capital shock) <cit.>, while some disagree <cit.>.
* General economic conditions, i.e., rise and fall of central bank interest rates <cit.>, contrary views <cit.>.
* Unanticipated inflation <cit.>, while <cit.> disagrees.
* Institutional delays and lags, i.e., data collection, regulation and renewal periods <cit.> while <cit.> also makes a counter argument.
* Forecasting errors, imperfect knowledge of the market <cit.>.
One of the prevailing reasons why the analysis of market cycles has been inadequate is that cycle periods are longer than the periods over which data is sampled <cit.>. Furthermore, traditional time-series approaches work well in analysing deterministic systems; however, given the volatile nature of financial systems, this approach can be limited <cit.>. Moreover, if a modelled system can be affected by external factors, then these factors should be explicitly included in the time-series model <cit.>. For this and the many other aforementioned reasons, this research article is well-timed and necessary.
Given the interest in modelling the insurance market, researchers have adopted individual-based modelling approaches, to capture new insights and overcome some of the earlier described issues <cit.>. Some researchers have utilised individual-based models to discover stylised facts <cit.> such as the emergence of market crashes and bubbles. <cit.> argue that traditional methods such as AR models, fall short in quantifying the existence of a cycle in time-series data and questions the ability for these methods to forecast accurately. The article <cit.> suggests that linear models are insufficient and other approaches used to detect periodicity should be considered. Individual-based modelling approaches are more appealing as they can model the individual behaviours and social interactions at the micro level and generate complex aggregate behaviours such as cycles at the macro level <cit.>. Some researchers agree that the insurance market is complex, irrational and heterogeneous, evidenced by the interactions of individuals such as actuaries, underwriters, claims adjusters, brokers and organisations such as syndicates, regulators, managing agents and capital providers <cit.>. The model developed by <cit.> tests the hypothesis that agents with simple decision-making rules interacting at the individual-level can produce aggregate complex behaviours depicting empirically valid market dynamics. <cit.> also tests the theory of plural rationality <cit.>, where they can parameterise the insurers mark-up calculation where a policy is either priced aggressively or priced using the actuarial fair price (market value). They found that as the pricing aggression increased, market volatility also increased due to large price fluctuations.
Research conducted in <cit.> proposes an agent-based simulation of the specialty insurance market. The premise of the research is to investigate the regulatory changes under Solvency II that came into force in 2016. In the proposed model, agents and processes are represented as re-insurers, insurers, customers, shareholders and cat bonds that interact with each other and share information. The article finds that when the number of risk models increases for an insurer, i.e., an insurer utilises several catastrophe models (these are processes used to evaluate and manage natural and man-made catastrophe risk from perils such as hurricanes and wildfires), more risks are insured; conversely, fewer risk models lead to lower profitability and a higher risk of defaulting. The research article provides many interesting results, from catastrophe modelling approaches to advanced exposure management features, whereby each insurer follows real-world regulations such as Solvency II and the “Solvency Capital Requirement". This requirement is described as follows: insurers are required to have 99.5% confidence they could cope with the worst expected losses over a year; that is, they should be able to survive any year-long interval of catastrophes with a recurrence frequency equal to or less than 200 years. However, in the model of <cit.> some important aspects of the specialty insurance market have been neglected. These include attritional losses <cit.> and the syndication of risk, a unique feature of Lloyd's of London <cit.>; moreover, the temporal granularity is fixed. Our proposed solution addresses the limitations of previous work. We believe our model is the first discrete event simulation of the specialty insurance market inspired by real-world actors within Lloyd's of London.
A key insight and novelty of our article is in the use of a novel DES framework as opposed to the traditional ABM approach adopted by the aforementioned articles <cit.>. Unlike the classical ABM approach, the DES approach captures the time-irregular and asynchronous nature of events in the specialty insurance market, whereby events (such as insurance claims) occur at irregular time intervals, with anywhere from no events at all to many events occurring simultaneously. Handling such unpredictable, irregular and asynchronous events in a classic ABM approach can be challenging, but is seamlessly handled using the DES approach. The benefits of DES have been thoroughly discussed in the following articles <cit.>. Our model architecture is highly modular compared to the other approaches <cit.>, as evidenced in the workflow diagram <ref>; this allows the user to “plug in” with ease different pricing models (actuarial pricing, underwriter markup, exposure management), relationship networks (the relationships between syndicates and brokers), loss generators (attritional, catastrophic or both) and many output metrics such as yearly and monthly syndicate capitals, premiums offered and accepted, industry statistics (market health indicators), syndicate performance (syndicate health indicators), syndicate insolvency, catastrophe events (the risks affected and the extent of losses), claims per risk and syndicates assigned to each risk. These configuration processes and metrics will be described in detail in the Model Description <ref> and Results & Discussion <ref> sections. Moreover, data analysis notebooks describing the stylised facts and unique features of the model will be provided as supplementary materials; these include catastrophe losses and their impact on market health, advanced syndicates with exposure management modules and how they deal with losses, syndicate relationships with multiple brokers and their impact on risk coverage, and capital venting strategies and what these mean for the market.
There are many reasons why insurers and/or researchers would utilise our proposed model to simulate the specialty insurance market dynamics. Practitioners may want a deeper understanding of the market dynamics, such as underwriting cycles, which drive the stability and profitability of the market. Moreover, the model provides both a quantitative and qualitative picture of the impact and drivers of these dynamics. Lastly, the model could be used to develop heuristics which enable better estimation of the current and future market conditions.
§ MODEL DESCRIPTION
The purpose of the Lloyd's of London model is to investigate the emergent, complex characteristics which arise from individual-level interactions of the specialty insurance market. In this pursuit, the model conceptualises the behaviours of syndicates and brokers, through which complex processes are undertaken, as shown in Figure <ref>. These individual-level interactions from the bottom-up lead to the emergence of stylised facts at the aggregate level which correspond to empirical market trends. In this section, we describe the main agents/processes which the model encapsulates, the events they respond to and emit (i.e. the actions they undertake) and how these features all fit together to produce the DES. We start this section by discussing the notion of time in the model and its importance.
[Figure: Workflow diagram of the specialty insurance market discrete event simulation. The line colours convey unique relationships between processes and events, i.e., yellow: new risk to quote requested and quote component generated, green: lead quote generation and selection, teal: follow quote generation and selection, red: losses to claims, blue: industry level statistics update, purple: syndicate level capital update.]
§.§ Time
Discrete Event Simulations do not necessarily require the notion of time, unlike other methods such as ABMs. Instead, events trigger processes, which in turn respond by emitting additional events and/or performing actions, e.g., updating an internal state variable. In theory, there is no need for the concept of time. However, in reality, there is a notion of time associated with events. For simulations of complex financial systems, such as the specialty insurance market, the challenge is that events occur asynchronously while the timescale is irregular and varying. Capturing these features of the specialty insurance market is both paramount (in order to ensure the model reflects empirical trends) and difficult to achieve with a typical ABM, as mentioned in the following <cit.>. Conversely, the flexibility of the DES framework allows events to have an associated timestep, so that an event only triggers once the timestep in question occurs, e.g., t = 3. In our DES, a regular timer is incorporated with standard timesteps of days, months and years. This allows us to capture events which occur in the simulation at different levels of granularity. Moreover, we are able to investigate different emergent phenomena independent of the timescale across which they occur.
The regular timer in our model generates the events shown in Table <ref>. We also detail the processes which respond to the events and the description of the events.
§.§ Broker process
The primary purpose of the Broker process is to bring new risks to the market. This occurs in response to a Day event as noted in Table <ref>. Upon the triggering of a Day event, the broker generates n_r risks according to a Poisson distribution where the λ variable for the distribution is given by the risks per day (RPD) input parameter as described in Table <ref>. The Broker process emits a number of events when the risk is generated in order to broadcast the risk to the insurers/syndicates and to set deadlines for the quotes from the insurers to be finalised.
The broker responds to and generates the events shown in Table <ref>
§.§ Broker-syndicate network process
The broker-syndicate network process is responsible for requesting quotes for new risks entering the market from the registered syndicates. This is facilitated by a network topology which, for a given risk, selects a number of syndicates from which a quote is requested. Therefore, depending on the parameter choices, a risk does not necessarily have to be broadcast to all syndicates. The options for the topologies are the following:
* Circular topology: inspired by the model presented in <cit.>.
* Network topology: inspired by the interviews and workshops with stakeholders at Ki & Brit Syndicates.
* Random topology: is a base case feature.
Prevalent in all the topologies is a lead and follow top_k parameter, as mentioned in Table <ref>. This parameter selects the best k syndicates based on the chosen topology method. For instance, in the circular topology <cit.>, the distance between the brokers and syndicates is used as a measure of the “ease of doing business”, where a small distance implies a low cost of doing business and, conversely, a large distance implies a larger cost. The syndicates are then ordered by this cost, and the top_k parameter selects the k lowest-cost syndicates. The network topology represents a connected network/graph between the brokers and syndicates where the edge weightings represent the ease of doing business between the brokers and syndicates. The larger the edge weighting, the more likely brokers and syndicates are to do business, and vice-versa. Once again, the syndicates are ordered based on the strongest relationships, with the top_k parameter filtering this down to the strongest k relationships. Lastly, in the random topology (the adopted network for the experiments conducted below), as the name suggests, the syndicates are randomly ordered, with the top_k parameter selecting the first k syndicates in the list.
Given the above, the broker-syndicate network responds to and generates the events shown in Table <ref>.
§.§ Central risk repository process
The central risk repository process is responsible for tracking all of the risks, quotes and underwritten policies in the market. This includes all of the quotes offered for a risk, the policy information if the risk has been underwritten such as who the leader and follower syndicates are. Finally, the central risk repository is responsible for applying any losses, whether attritional or catastrophic, to the underlying syndicates on the policy in question.
Given the above, the central risk repository responds to and generates the events shown in Table <ref>.
§.§ Syndicate process
The syndicate process represents one of the most detailed agents within the model, as it is responsible for a number of important functions. Primarily, the syndicate is responsible for pricing any risks which come to market and for deciding which line size to offer. This is all done within a capital management framework whereby syndicates must ensure they remain appropriately capitalised, even when tail loss events occur, in order to avoid insolvency. Lastly, the syndicate must also return a dividend to capital providers in the case of profitable performance. As the syndicate is responsible for a number of functions, this section has been split into sub-sections according to the sub-processes which compose the main syndicate process.
Before moving on to the sub-processes, we note that the main syndicate process is responsible for coordinating and organising the sub-processes which compose it. In this regard, the syndicate process responds to and generates the events shown in Table <ref>.
§.§.§ Actuarial sub-process
Pricing of risks in the insurance market is a complex process: risks are heterogeneous, extremely variable and fundamentally stochastic and ambiguous <cit.>. Our model captures these features while making the simplifying assumption that all risks are homogeneous and that they belong to a catastrophe-exposed class such as property insurance. The price of a risk depends on several factors, primarily the nature of the risk itself (i.e. the frequency and severity with which the risk can occur) and the market's view of the risk (i.e. the supply and demand of services in the market) <cit.>.
These two factors are distinct from one another. The first is related to the quantification of the risk itself, without considering any external market influences. This process is often carried out by an actuary <cit.>. The actuary will assess the risk using experience-based metrics (historical data on the risk or similar risks) and/or exposure-based metrics (quantification of the risk in the absence of data, based on a risk profile). Fundamentally, the actuary attempts to arrive at a "fair price" for the risk based on the expectation of losses, that is, a price which would cover the expected losses of the risk over its lifetime. The second factor considers the price of the risk based on the market's view, i.e., the supply and demand of insurance services. This process is typically carried out by an underwriter. The underwriter will take the fair price guideline from the actuary and, in very simplistic terms, scale this price up or down based on the supply and demand in the market. For instance, although the actuary may propose a much higher fair price, market trends may force the underwriter to reduce this price in order to be competitive.
In this section, we will focus on the actuarial sub-process, inspired by the work of <cit.>. In the next section, we will discuss the underwriting sub-process.
Based on the above, the actuarial sub-process price is given by two main components. The first component is the insurer's expected claim cost:
P_t = zX̅_t + (1-z)λ'_tμ'_t
where P_t is the insurer's expected claim cost at timestep t, X̅_t is the insurer's past weighted average claims, λ'_t is the industry-wide average claim frequency, μ'_t is the industry-wide expected claim cost and finally z is the internal experience weight input parameter Table <ref>, which decides whether a syndicate weighs its own loss experience or the industry loss experience as more important. As per <cit.>, the insurer's past weighted average claims, X̅_t, is calculated as a simple exponentially weighted moving average whose weight is an input parameter to the model Table <ref> called the loss experience recency weight.
The final actuarial price, which we denote as P_at, is the sum of the insurer's expected claim cost, P_t, and a "risk loading term", α F_t, where F_t is the standard deviation of the insurer's claims and α is an input parameter to the model Table <ref>, called the volatility weight:
P_at = P_t + α F_t
As can be seen, the actuarial price in Equation <ref> captures the main idea of syndicates pricing a risk to cover their expected claim losses while allowing for some volatility in the losses.
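A compact numerical sketch of Equations <ref> and <ref> is given below; the default parameter values and the exact form of the EWMA update are illustrative assumptions.

```python
import numpy as np

def actuarial_price(own_claims, industry_freq, industry_cost,
                    z=0.5, recency_weight=0.2, volatility_weight=0.1):
    """P_at = P_t + alpha * F_t, with P_t a blend of the syndicate's own EWMA
    claim experience and the industry-wide expectation."""
    own_claims = np.asarray(own_claims, dtype=float)
    x_bar = own_claims[0]
    for c in own_claims[1:]:                      # exponentially weighted average
        x_bar = recency_weight * c + (1 - recency_weight) * x_bar
    expected_claim_cost = z * x_bar + (1 - z) * industry_freq * industry_cost
    risk_loading = volatility_weight * own_claims.std()   # alpha * F_t
    return expected_claim_cost + risk_loading

price = actuarial_price(own_claims=[2.8e5, 3.1e5, 2.9e5],
                        industry_freq=1.0, industry_cost=3.0e5)
```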
The actuarial sub-process responds to and generates the events shown in Table <ref>.
§.§.§ Underwriting sub-process
As explained in the previous section, the objective of the underwriting sub-process is to “scale" the actuarial price in order to match market supply and demand. We again, employ the equations used by <cit.>, where they apply neoclassic price theory for the price-elasticity of demand. The details of the derivation can be found in <cit.>. The underwriter scaling is given by:
P_t = P_ate^m_t
where P_t is the final price offered by the syndicate after the underwriters scaling, P_at is the actuarial price from Equation <ref> and m_t is the underwriter log markup which attempts to model the price-elasticity of demand in the market.
The underwriter markup, m_t, is also calculated as a simple exponentially weighted moving average where the weight is an input parameter to the model Table <ref> called the underwriter markup recency weighting. Details on how the underwriter markup is calculated can be found in <cit.>.
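The scaling step of Equation <ref> can be sketched as below; the EWMA update of the log markup follows the description in the text, while the default recency weight and the example markup history are illustrative.

```python
import numpy as np

def underwriter_price(actuarial_price: float, markup_history,
                      markup_recency_weight=0.3):
    """Scale the actuarial price by exp(m_t), with m_t an exponentially
    weighted moving average of past log markups."""
    m_t = markup_history[0]
    for m in markup_history[1:]:
        m_t = markup_recency_weight * m + (1 - markup_recency_weight) * m_t
    return actuarial_price * np.exp(m_t)

final_price = underwriter_price(3.2e5, markup_history=[0.05, -0.02, 0.01])
```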
The underwriting sub-process responds to and generates the events shown in Table <ref>.
§.§.§ Value at Risk (VaR) exposure management sub-process
Exposure management is a critical function for all insurance firms, so much so that appropriate levels of exposure management are required of insurers by regulation. Exposure management gives insurers a quantitative understanding of the tail risk associated with the policies that they underwrite. By doing so, insurers can quantify the impact of worst-case scenarios on their portfolio, which in turn informs their underwriting strategy. For instance, given the state in Figure <ref>, an over-exposed strategy would be for lead syndicate B to underwrite all risks within region 1. Exposure management can therefore alert syndicates if they are over-exposed to a given catastrophe peril region.
The Value at Risk (VaR) of an insurer's portfolio is a common measure used to quantify the level of risk taken by an insurer. The VaR with some exceedance probability, which we denote as α Table <ref>, identifies the amount of syndicate capital which is at risk in case of any tail events occurring e.g. exceedingly large catastrophe events. The exposure management sub-process is therefore responsible for ensuring that the syndicate capital remains above this threshold value. Given its importance, our model employs the VaR exposure management detailed in <cit.>.
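In essence, the check amounts to comparing the syndicate's capital with an empirical loss quantile; a minimal sketch is given below. The Pareto stand-in for tail-loss scenarios and the thresholds are purely illustrative and do not reproduce the full VaR methodology referenced above.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def value_at_risk(simulated_losses, exceedance_prob=0.01):
    """Empirical VaR: the loss level exceeded with probability exceedance_prob."""
    return np.quantile(simulated_losses, 1.0 - exceedance_prob)

def adequately_capitalised(capital, simulated_losses, exceedance_prob=0.01):
    return capital >= value_at_risk(simulated_losses, exceedance_prob)

losses = 1e5 * (1.0 + rng.pareto(2.0, size=100_000))  # stand-in tail-loss scenarios
print(adequately_capitalised(capital=5e6, simulated_losses=losses))
```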
The VaR Exposure Management sub-process responds to and generates the events shown in Table <ref>.
§.§.§ Premium exposure management sub-process
In the previous section, one approach to exposure management was explored via the VaR measure. However, as extensively discussed by <cit.>, VaR exposure management is a complicated process, often relying on computationally expensive Monte Carlo simulations and, in many cases, difficult-to-measure variables. Insurers therefore often seek approximations or proxy approaches which capture the essence of VaR exposure management. In this section, we detail such a methodology, which we refer to as Premium Exposure Management.
The premium an insurer collects for underwriting a risk can be thought of as a proxy for exposure. In particular, comparing the total premium written to the total capital available gives a proxy measure of how exposed an insurer is to the potential risk of insolvency. For example, if an insurer has underwritten a large number of risks, and is therefore collecting a large premium, while its total capital is comparatively small, this indicates that the insurer might have insufficient capital available to cover the full range of potential losses. On the other hand, if the premiums written were large but the capital available was also large, this would indicate that the insurer has suffered minimal losses and that its current underwriting strategy is profitable. This is the essence of the premium exposure management sub-process.
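The resulting check is deliberately simple; a sketch is shown below, where the maximum premium-to-capital ratio is an illustrative threshold rather than a calibrated value.

```python
def premium_exposure_ok(total_premium_written: float, total_capital: float,
                        max_ratio: float = 0.5) -> bool:
    """Proxy exposure test: keep underwriting only while written premium stays
    below a chosen fraction of available capital."""
    return total_premium_written <= max_ratio * total_capital

print(premium_exposure_ok(total_premium_written=2.0e6, total_capital=5.0e6))
```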
The Premium Exposure Management sub-process responds to and generates the events shown in Table <ref>.
§.§.§ Line size sub-module
A unique feature of the Lloyd's of London specialty insurance market is the syndication of risk, i.e., there will typically be several syndicates on a policy, comprising one lead syndicate and several follow syndicates. For this reason, each syndicate underwrites only a fraction of the risk, termed the line size. In our model, the lead syndicate offers a default line size and sets the price of the policy based on its actuarial and underwriting sub-processes. The candidate follow syndicates also price the risk using their own actuarial and underwriting sub-processes, and then compare this to the actual price quoted by the lead syndicate. This allows them to assess the "pricing strength", which in turn is used to decide the line size to offer. The pricing strength is defined as the ratio of the follower's proposed price to the lead price: if the pricing strength is above one, the price of the risk is considered good and a larger line size is offered, and vice versa.
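A sketch of the follower's decision is given below; the linear scaling of the default line with pricing strength and the cap are illustrative choices, as the text does not fix the exact functional form.

```python
def follower_line_size(follower_price: float, lead_price: float,
                       default_line: float = 0.1, max_line: float = 0.2) -> float:
    """Offer a larger line when the follower's own price exceeds the lead price
    (pricing strength > 1), and a smaller line otherwise."""
    pricing_strength = follower_price / lead_price
    return min(max_line, default_line * pricing_strength)

line = follower_line_size(follower_price=3.3e5, lead_price=3.0e5)
```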
§.§.§ Dividend sub-module
As explored in <cit.>, our DES model also includes the ability for syndicates to pay a dividend to capital holders on a yearly basis, provided they have made a profit. The reason why this is a class object and not a DES process is that the only process relying on its outputs is the syndicate process.
The dividend sub-module only becomes active when a Year event triggers the syndicate process. When this occurs, the syndicate uses the dividend sub-module to check whether a profit has been made and if so calculates the dividend as follows:
D = γ Pr_t
where D is the dividend paid, Pr_t is the profit made by the syndicate, and γ, profit fraction, is an input parameter to the model Table <ref>, which represents the fraction of the profit to pay out as a dividend.
§.§ Attritional loss generator process
Attritional losses (as opposed to catastrophe losses) are defined as those losses which are generally uncorrelated with each other in both space and time, have high frequency, low severity and are fairly predictable <cit.>. On the other hand, catastrophe losses, which will be discussed in the next sub-section, tend to be spatially correlated (affecting a number of policies concentrated by a given peril region, class or industry), low frequency, high severity and difficult to predict.
In our model, we develop the attritional loss generator process inspired by the work of <cit.>. The attritional loss generator process pre-generates a number of AttritionalLossOccurred events when the risk is first brought to the market. The number of claim events is given by the Poisson distribution with the λ value set as the yearly claim frequency, an input parameter as defined in Table <ref>. The severity of the loss is drawn from a gamma distribution. The shape parameter of the distribution is given by 1/COV^2, where COV is the gamma distribution's coefficient of variation as defined in Table <ref>. The scale parameter of the distribution is given by μ COV^2, where μ is the mean of the gamma distribution. These AttritionalLossOccurred events are then placed in the event queue at times before the expiration date of the risk.
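The pre-generation step can be sketched as follows; the default parameter values and the uniform placement of the loss dates within the policy term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def attritional_loss_events(yearly_claim_frequency=0.1, mean_loss=3.0e6, cov=1.0,
                            risk_start_day=0, risk_term_days=360):
    """Pre-generate (day, severity) pairs for one risk: Poisson claim count,
    gamma severities with shape 1/COV**2 and scale mean_loss * COV**2."""
    n_claims = rng.poisson(yearly_claim_frequency * risk_term_days / 360.0)
    shape, scale = 1.0 / cov**2, mean_loss * cov**2          # gamma mean = mean_loss
    severities = rng.gamma(shape, scale, size=n_claims)
    days = rng.integers(risk_start_day, risk_start_day + risk_term_days,
                        size=n_claims)
    return sorted(zip(days.tolist(), severities.tolist()))
```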
Given the above, the attritional loss generator responds to and generates the events shown in Table <ref>.
§.§ Catastrophe loss generator process
In our model, we develop the catastrophe loss generator process inspired by <cit.>. The catastrophe loss generator attempts to capture the phenomena that catastrophe losses are both correlated spatially (e.g. a number of policies concentrated by a given peril region) and temporally. For this reason, all risks in the model have an associated peril region, as observed in Figure <ref>. Unlike the attritional loss generator, which generates attritional events and losses on a risk by risk basis, the catastrophe loss generator generates catastrophe events and losses on a peril region basis. The total loss affecting a given peril region is then cascaded down to the affected risks within the peril region, and subsequently the lead and follow insurers.
When the simulation starts, the catastrophe loss generator pre-generates a number of catastrophe events over the length of the entire simulation. This is done via the Poisson distribution with the λ value set as the product of the mean number of catastrophe events per year Table <ref> and the number of years in the simulation. A peril region is randomly assigned to each CatastropheLossOccurred event. The total loss affecting the peril region is given by a truncated Pareto distribution with the minimum value set as the minimum catastrophe damage Table <ref>. The CatastropheLossOccurred events are then added to the event queue up until the end of the simulation.
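A sketch of this pre-generation step is given below; the Pareto shape parameter, the 360-day year and the other default values are illustrative assumptions (an upper truncation of the severities could be added with np.minimum if required).

```python
import numpy as np

rng = np.random.default_rng(seed=4)

def catastrophe_loss_events(mean_cats_per_year=0.05, sim_years=10,
                            n_peril_regions=10, min_damage=2.5e7,
                            pareto_shape=1.5):
    """Pre-generate (day, peril_region, total_loss) tuples for the whole run:
    Poisson event count over the horizon and Pareto losses bounded below by
    the minimum catastrophe damage."""
    n_events = rng.poisson(mean_cats_per_year * sim_years)
    days = rng.integers(0, sim_years * 360, size=n_events)
    regions = rng.integers(0, n_peril_regions, size=n_events)
    losses = min_damage * (1.0 + rng.pareto(pareto_shape, size=n_events))
    return sorted(zip(days.tolist(), regions.tolist(), losses.tolist()))
```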
Given the above, the catastrophe loss generator responds to and generates the events shown in Table <ref>.
§.§ Industry statistics process
The industry statistics process, as the name suggests, simply keeps track of all the relevant industry statistics and metrics. This is necessary, as many of the syndicate sub-processes rely on market-wide metrics. Note that this process does not attempt to mimic any real market agents/processes; it is simply an aggregator of data.
The industry statistics process responds to and generates the events shown in Table <ref>.
§ RESULTS & DISCUSSION
The Lloyd's of London specialty insurance market is a complex system. Many in academia and the insurance industry have tried to identify the underlying processes that lead to the emergence of phenomena at the aggregate level. One particular phenomenon that has been an area of focus is the underwriting cycle (as described in Section <ref>). As a proof-of-concept, we propose several experiments that aim to simulate market conditions of varying complexity in order to reproduce existing industry phenomena, so that practitioners and researchers can investigate the underlying features of the market that lead to these trends, better preparing those involved in the market to deal with exogenous shocks.
This section first presents empirical market dynamics, then delves into the four experiments, starting with a base case, introduction of catastrophe events, syndicates adapting to these events and finally the introduction of lead and follow dynamics (described in the previous section <ref>).
In Figure <ref> the market experiences catastrophe events which exacerbate periods of hardening and softening of the market. This phenomenon manifests as the underwriting cycle, which serves as a benchmark phenomenon to reproduce with our DES model. In the following sub-sections, we describe each model scenario (experiment) and subsequently present the results while discussing the findings in relation to Lloyd's of London.
§.§ Model scenarios
To ensure the model is able to tell a story from simplistic behaviours to more advanced features and outcomes, we start with a base case experiment; this and the subsequent experiments are described below (see Section <ref> for descriptions of each model feature):
* Base case actuarial pricing, all syndicates use actuarial pricing models, premium exposure management and attritional losses occur (Scenario 1).
* Catastrophe event, all syndicates use actuarial pricing, premium exposure management, attritional and catastrophe losses occur (Scenario 2).
* Syndicates adapting to catastrophe events, all syndicates use actuarial pricing, VaR exposure management, attritional and catastrophe losses occur (Scenario 3).
* Leaders and Followers, syndicates can either be leaders or followers and use actuarial pricing, premium exposure management and only attritional losses occur (Scenario 4).
§.§ Base case actuarial pricing (scenario 1)
This experiment utilises the most basic components of the model, in order to demonstrate the ability of the core features to represent important market phenomena. In this experiment, we include five syndicates and twenty-five brokers. This maintains the 1/5 ratio of syndicates to brokers reported in the Lloyd's of London market pocket guide <cit.>. As we are only utilising the actuarial pricing model, we expect the premium prices to converge around the fair price, which given the model parameters Table <ref> is $300,000. Secondly, we expect the loss ratio of syndicates to fluctuate between periods of profitability and losses, i.e., early signs of cyclicality. Note that, due to computational complexity, the model was run only a limited number of times.
In sub-figure <ref>, the syndicate response over time is indicative of real-world capital fluctuations, as observed in <cit.>, where the authors discuss capital profiles over time. For instance, some syndicates, e.g., syndicate 0, are bankrupted as a result of their underwriting strategy, while others ride the boom and bust period better. As we hypothesised, the premiums offered converge to the actuarial fair price of approximately $300,000 in sub-figure <ref>; the reason for this (refer to sub-section <ref>) is that the syndicate's actuarial pricing offers a price which attempts to cover its prior losses, and given the current model setup the mean of the prior losses is $300,000.
For context, a loss ratio ≥ 1 indicates an unprofitable syndicate, as losses exceed income, i.e., premiums; conversely, a loss ratio < 1 indicates a profitable syndicate. As mentioned, periods of profitability and losses are a crucial trait of all models that simulate the insurance market, as described in <cit.>. Our model outputs also demonstrate this behaviour, as observed in Figure <ref>.
§.§ World shock events, where catastrophes meet the insurance market (scenario 2)
In this experiment, we repeat the earlier input parameters but now introduce the catastrophe loss generator. When catastrophe events occur, they should lead to major losses whose severity varies across syndicates depending on their underwriting strategies. We expect to see some syndicates become insolvent, while others may not. Furthermore, insurance insiders and academics claim that the primary cause of cyclicality in the market is unexpected catastrophe events <cit.>. Our model allows us to test this hypothesis, which we attempt in this section. These experiments should show exaggerated cyclical trends compared to the previous experiment.
The most intriguing result from this experiment is shown in Figure <ref>, which not only demonstrates the presence of cyclicality in the premiums offered, but also provides an explanation for why this phenomenon occurs. Put simply, in the initial phase premiums begin to converge to a fair price; catastrophe events which result in large losses then force syndicates to price premiums higher, resulting in an increase in the prices offered. Eventually, once the effect of the catastrophe wears off, the syndicates once again converge towards the fair price and the cycle repeats itself.
As mentioned previously, cyclicality is pronounced and, in some cases, due to large catastrophe losses, loss ratios rise above one (Figure <ref>).
§.§ Adapting to market shocks, the utility of VaR exposure management (scenario 3)
The purpose of this experiment is to specifically showcase the effects of advanced exposure management methods; for brevity, we only show results indicative of this objective. Up to this point, syndicates have utilised simplistic premium exposure management. Now, we introduce VaR exposure management, which allows syndicates to manage their exposure in a more sophisticated manner, representing the behaviour of real-world syndicates more closely.
VaR exposure management <cit.> should enable syndicates to adopt underwriting strategies which avoid over-subscribing to a given peril region. A good exposure management strategy is to distribute the risk underwritten uniformly across all peril regions <cit.>. For this reason, we present the uniform deviation, which measures how much the realised distribution of risks across peril regions departs from a perfectly uniform peril-region distribution (0 being perfectly uniform). For comparison, we include outputs with no exposure management and with premium exposure management. As can be seen from Figure <ref>, moving from left to right, as the exposure management becomes more sophisticated and stringent, the uniform deviation moves closer to zero.
§.§ Leaders and followers among syndicates (scenario 4)
The Lloyd's of London specialty insurance market, as described in Section <ref>, differs from other insurance markets. The most prominent difference is the syndication of risk, i.e., the ability for multiple syndicates to underwrite a risk either as leaders or followers. Lloyd's of London claims that the syndication of complex risks across multiple syndicates allows these unconventional risks and the potential losses to be distributed among several insurers as opposed to one. This should result in less volatility and a lower likelihood of insolvency, as described in the documentation published by Lloyd's <cit.> and <cit.>. However, as far as the authors are aware, these claims have not been quantitatively tested in academic research until now; our DES model allows us to investigate this particular claim.
As observed in Figure <ref>, compared to Figure <ref>, the volatility of the premium offered is significantly lower, with the premiums tightly converging towards the fair price. The syndication of risks allows more syndicates to participate significantly in the market. This means that any losses are shared among all syndicates, implying that their loss experiences are similar to each other; as a result, they all offer similar prices. This behaviour can also be observed in Figure <ref>, where the loss ratios of the syndicates are tightly coupled/correlated, indicating a similar loss experience.
In this scenario, we observe that no insolvencies occurred, compared to the same scenario minus the lead and follow dynamics in sub-section <ref>. These findings, provide strong quantitative justification for the market structure imposed by the regulator Lloyd's of London.
In summary, the proof-of-concept DES can reproduce quantitatively and qualitatively market phenomena, such as the underwriting cycle, merits of lead - follow market structures, and the importance of exposure management.
§ CONCLUSION
This research article proposes a novel DES of the Lloyd's of London specialty insurance market. The model captures granular interactions of significant actors within the marketplace, i.e., syndicates and brokers. The model has demonstrated significant results with regard to market dynamics observed in the real world, including signs of cyclical behaviour congruent with the underwriting cycle, which becomes more severe with catastrophe events. Given the unique model architecture, users can swap components such as pricing models, broker-syndicate relationship networks and other features with an ease that may not have been possible in past attempts such as <cit.>. Furthermore, the article has proposed a conceptualisation of the lead and follow syndicate dynamics which is unique to Lloyd's of London; the results from these specific experiments have reinforced the innovative market structure employed by Lloyd's, which reduces volatility in their market.
The proposed DES has many strengths, as discussed previously. From a technical perspective, we adopt first-hand knowledge from experts in the Lloyd's of London syndicates Ki and Brit and propose a novel, modular DES with changeable components. From a market dynamics and results perspective, we incorporate the lead and follow mechanics unique to Lloyd's of London, integrate both attritional and catastrophe loss events, and quantitatively study many important market phenomena. However, several areas of improvement can also be highlighted. For example, given the complexity which drives the market, it is difficult to abstract and quantify all aspects of the market appropriately; an example is quantifying the broker-syndicate relationship, which is the cornerstone of the market. Capturing these individual-level human relationships is difficult with any modelling framework, although individual-based modelling has allowed us to acquire new insights regarding this relationship. Furthermore, reinsurance is an important function of the insurance market: reinsurance has a large effect on the capacity a syndicate can underwrite, as reinsurers manage the tail risk of a portfolio, so modelling the availability and cost of reinsurance policies is a significant driver of market dynamics. Conventional calibration and validation, e.g., global sensitivity analysis, have not been pursued in this iteration of the proof-of-concept model; however, the behaviours of the model have been verified by Ki and Brit. In future there are plans to incorporate proprietary Ki and Brit data in order to calibrate the model so that validated results can be disseminated.
Given the wealth of engineered features of the model, many future avenues can be explored. The underwriting markup (pricing model) <cit.> was not utilised in the experiments conducted in this article; however, comparing the different pricing models, i.e., actuarial and underwriting, may lead to new insights with regard to market competition. Additionally, as discussed and utilised by <cit.>, another major component of a syndicate's activities is to pay out dividends in the case of profitable performance. Given that the dividend feature is available in the model, as described in Section <ref>, this can easily be explored as a path for future work. We hope this model provides researchers and specialty insurance practitioners with new insights that enable future R&D projects and provide the means to reduce market volatility and enhance insurance businesses.
§ SUPPLEMENTARY MATERIALS
* The Hades framework is made open-source by Ki Insurance and can be found at the following link <https://pypi.org/project/hades-framework/>
|
http://arxiv.org/abs/2307.04334v1 | 20230710041019 | Quasicrystalline second-order topological semimetals | [
"Rui Chen",
"Bin Zhou",
"Dong-Hui Xu"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Physics, Hubei University, Wuhan 430062, China
Department of Physics, Hubei University, Wuhan 430062, China
[][email protected]
Department of Physics, Chongqing University, Chongqing 400044, China
Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 400044, China
Three-dimensional higher-order topological semimetals in crystalline systems exhibit higher-order Fermi arcs on one-dimensional hinges, challenging the conventional bulk-boundary correspondence. However, the existence of higher-order Fermi arc states in aperiodic quasicrystalline systems remains uncertain. In this work, we present the emergence of three-dimensional quasicrystalline second-order topological semimetal phases by vertically stacking two-dimensional quasicrystalline second-order topological insulators. These quasicrystalline topological semimetal phases are protected by rotational symmetries forbidden in crystals, and are characterized by topological hinge Fermi arcs connecting fourfold degenerate Dirac-like points in the spectrum. Our findings reveal an intriguing class of higher-order topological phases in quasicrystalline systems, shedding light on their unique properties.
Quasicrystalline second-order topological semimetals
Dong-Hui Xu
August 12, 2023
====================================================
§ INTRODUCTION
Symmetry-protected topological phases of matter have emerged as a major new theme in modern condensed-matter physics in the past nearly two decades. While the discovery of topological insulators initially sparked interest in this field, recent focus has shifted towards exploring higher-order topological insulators <cit.>. Unlike traditional topological insulators, higher-order topological insulators exhibit unconventional bulk-boundary correspondence, allowing for the existence of gapless boundary excitations of higher co-dimensions. For example, a second-order topological insulator (SOTI) in two dimensions hosts robust gapless boundary modes localized at its zero-dimensional corners, dubbed corner modes <cit.>, while three-dimensional (3D) SOTIs support gapless boundary modes confined to their one-dimensional hinges <cit.>. In addition to higher-order topological insulators, higher-order topological semimetals have also been identified. These semimetals, including higher-order Dirac semimetals and higher-order Weyl semimetals, exhibit exotic hinge Fermi arcs that connect the projected nodes on the hinges, distinguishing them from conventional Dirac and Weyl semimetals <cit.>.
Initially, topological phases were observed in crystalline materials. However, more recently, researchers have extended these phases to aperiodic quasicrystalline systems, which lack discrete translational symmetry <cit.>. The absence of translational symmetry allows for the presence of rotational symmetries that are prohibited in crystals. This property enables the existence of new topological phases without crystalline counterparts, such as two-dimensional (2D) SOTIs protected by eightfold <cit.> and twelvefold <cit.> rotational symmetries. Moreover, a 3D time-reversal symmetry (TRS) breaking gapless topological phase hosting Weyl-like points has been proposed in a quasicrystal stack of Chern insulators <cit.>.
However, gapless phases with higher-order topology in quasicrystalline systems have yet to be discovered. This knowledge gap motivates us to explore the possibility of gapless quasicrystalline higher-order topological phases using a stacking approach with 2D quasicrystalline SOTIs. It has been demonstrated that stacking 2D topological materials provides a natural way of realizing 3D topological phases. This approach has been successful in achieving various topological phases, including Weyl semimetals <cit.>, axion insulators <cit.>, hinged quantum spin Hall insulators <cit.>, and high-Chern number quantum anomalous Hall insulators <cit.>.
In this work, we present the discovery of a quasicrystalline second-order topological semimetal (SOTSM) phase obtained by stacking 2D quasicrystalline SOTIs along the vertical direction (Fig. <ref>). The distinctive feature of the quasicrystalline SOTSM is the presence of rotation-symmetry-protected topological hinge Fermi arcs that terminate at fourfold degenerate Dirac-like points in the spectrum. The C_n^z-symmetric quasicrystalline SOTSM can support n topological hinge Fermi arcs (see the second column in Fig. <ref>), inheriting their topological nature from C_n^z-symmetric quasicrystalline SOTI hosting n corner modes (see the first column in Fig. <ref>). The number n can be four [Figs. <ref>(a) and <ref>(b)], as allowed in crystalline systems <cit.>, but it can also be eight [Figs. <ref>(c) and <ref>(d)] and twelve [Figs. <ref>(e) and <ref>(f)], which are typically forbidden in crystalline systems. Furthermore, we present the phase diagram of the stacked systems and identify a 3D quasicrystalline SOTI phase in addition to the quasicrystalline SOTSM phase. Finally, we show that the disclination-induced bound states can further reveal the topological nature of the quasicrystalline SOTSM phase.
This work is organized as follows. We first give a simple review of 2D quasicrystalline SOTI in Sec. <ref> and show a stack of it gives rise to the 3D quasicrystalline SOTSM phase with Dirac-like points in the spectrum in Sec. <ref>. A detailed discussion on Dirac-like points is presented in Sec. <ref>. Subsequently, we illustrate the phase diagram of the stacked quasicrystalline system in Sec. <ref> and investigate the disclination-induced bound state in Sec. <ref>. We summarize our conclusions and discuss possible experimental
schemes for the quasicrystalline SOTSM phase in Sec. <ref>.
§ REVIEW OF 2D QUASICRYSTALLINE SOTIS
2D quasicrystalline SOTIs had been proposed in
eightfold symmetric Ammann-Beenker-tiling (AB-tiling) quasicrystal <cit.> [Figs. <ref>(a) and <ref>(c)] and twelvefold symmetric Stampfli-tiling quasicrystal <cit.> [Fig. <ref>(e)]. The AB-tiling quasicrystal consists of two types of primitive tiles: square tiles (yellow) and rhombus tiles (green) with a small angle 45^∘. The Stampfli-tiling quasicrystal consists of three types of primitive tiles:
square tiles (yellow), regular triangle tiles (red), and rhombus tiles (green) with a small angle 30^∘.
In the tight-binding model, the lattice sites are placed on the vertices of each tile. The Hamiltonian of the 2D quasicrystalline SOTI contains two parts, H(M)=H_1st(M)+H_m <cit.>. The first part denotes a 2D first-order topological insulator protected by TRS
H_1st(M) = -∑_j≠ kZ(r_jk)/2[it_1( s _3τ _1cosϕ_jk+s _0τ _2sinϕ_jk)
+ t_2s _0τ_3] c_j^†c_k+∑_j(M+2t_2)s _0τ _3 c_j^†c_j,
where c^†_jα=(c^†_jα↑,c^†_jα↓) are electron creation operators at site j with the orbital α. t_1 and t_2 are hopping amplitudes, and M denotes the Dirac mass, together with t_2, determining the first-order topology. s_1,2,3 and τ_1,2,3 are the Pauli matrices acting on the spin and orbital spaces, respectively. s_0 is the 2× 2 identity matrix. ϕ_jk is the azimuthal angle of the bond between site j and k with respect to the horizontal direction. Z( r_jk) = e^1-r_jk/ξ is the spatial decay factor of hopping amplitudes with the decay length ξ.
The second part is a TRS breaking Wilson mass term, which is
H_m(η)=g∑_j≠ kZ(r_jk)/2cos( ηϕ_jk) s _1τ _1 c_j^†c_k,
where g and η describe the magnitude and varying period of the Wilson mass, respectively. H_m(η) is responsible for the higher-order topology <cit.>. In the subsequent calculations, we fix the side length of the tiles as a=1 (white lines connecting the vertices in Fig. <ref>) and ξ=t_1=1.
For η=2,4,6, the Wilson mass gives rise to the SOTI phases in quasicrystals hosting four, eight, and twelve corner modes protected by the combined symmetry C_4^z U <cit.>, C_8^z U <cit.>, and C_12^z U <cit.>, respectively, where C_n^z is the n-fold rotational operation, and U could be the TRS operation T=i s_2τ_0 K or the mirror symmetry operation m_z=s_3 τ_0. K is the complex conjugation operator. The symmetry-protected eightfold and twelvefold corner modes, which are impossible in crystals <cit.>, are distinguishing characteristics of the 2D quasicrystalline SOTIs. Additionally, these corner modes are pinned to zero energy due to the existence of particle-hole symmetry.
The emergence of the zero-energy corner modes can be understood simply as follows <cit.>: g opens a gap in the first-order topological edge states and thereby induces Wilson mass kinks near the boundary. If one corner mode |ψ_c> appears at 𝐫_c, where the Wilson mass flips sign, then the C_n^z U symmetry ensures that the number of corner modes is n, because C_n^z U|ψ_c> is also an eigenstate of the system, localized at the corner obtained by rotating 𝐫_c by an angle of 2π/n.
§ 3D QUASICRYSTALLINE SOTSMS
3D crystalline SOTSMs have been constructed by stacking 2D crystalline SOTIs along the vertical direction <cit.>. 3D quasicrystalline SOTSM phases can be achieved in a similar manner, i.e., by periodically stacking 2D quasicrystalline SOTIs with an orbital-dependent hopping t_z s_0τ_3 on each site <cit.>. After a Fourier transformation along the vertical direction z, the 3D stacked Hamiltonian can be expressed as
H_3D=∑_k_zH(M-2t_zcos k_z).
The conduction and valence bands in this model are doubly degenerate because of the combined symmetry PT of TRS and inversion <cit.>, where P=s_0τ_3 is the inversion symmetry operator. It is necessary to point out that for η=2, applying the stacked Hamiltonian to a periodic cubic lattice gives rise to a 3D crystalline SOTSM <cit.> (see Appendix <ref>) with four hinge Fermi arcs connecting the projections of fourfold degenerate Dirac points that are well defined in momentum space. Next, we investigate the situation where the Hamiltonian is defined on a stack of 2D quasicrystals.
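For concreteness, a minimal numerical sketch of how H(M)=H_1st(M)+H_m(η) and its stacked version H(M-2t_z cos k_z) can be assembled for a given set of quasicrystal vertex coordinates is given below. The spin⊗orbital block ordering, the hopping cutoff r_cut and the default parameter values are our own illustrative assumptions and are not specified in the text.

```python
import numpy as np

# Pauli matrices; s acts on spin, tau on orbital space (tau has the same form)
s0, s1 = np.eye(2), np.array([[0, 1], [1, 0]], dtype=complex)
s2, s3 = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0]).astype(complex)
tau1, tau2, tau3 = s1, s2, s3

def hamiltonian(sites, M, t1=1.0, t2=1.0, g=1.0, eta=2, xi=1.0, r_cut=3.0):
    """Real-space H(M) = H_1st(M) + H_m(eta) on quasicrystal vertices `sites`
    (an N x 2 array of coordinates). r_cut truncates the exponentially
    decaying hoppings purely for numerical convenience."""
    sites = np.asarray(sites, dtype=float)
    N = len(sites)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    onsite = (M + 2 * t2) * np.kron(s0, tau3)
    for j in range(N):
        H[4*j:4*j+4, 4*j:4*j+4] = onsite
        for k in range(N):
            if j == k:
                continue
            d = sites[k] - sites[j]
            r = np.hypot(d[0], d[1])
            if r > r_cut:
                continue
            phi = np.arctan2(d[1], d[0])            # azimuthal angle of the bond
            Z = np.exp(1.0 - r / xi)
            hop = -0.5 * Z * (1j * t1 * (np.kron(s3, tau1) * np.cos(phi)
                                         + np.kron(s0, tau2) * np.sin(phi))
                              + t2 * np.kron(s0, tau3))
            hop += 0.5 * Z * g * np.cos(eta * phi) * np.kron(s1, tau1)  # Wilson mass
            H[4*j:4*j+4, 4*k:4*k+4] = hop
    return H

def stacked_hamiltonian(sites, M, kz, tz=1.0, **kwargs):
    """3D stack: substitute M -> M - 2 t_z cos(k_z)."""
    return hamiltonian(sites, M - 2 * tz * np.cos(kz), **kwargs)
```

Diagonalising stacked_hamiltonian over a grid of k_z values then gives spectra of the kind discussed below.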
§.§ η=2
We first consider a 3D quasicrystal [Fig. <ref>(b)] by stacking 2D AB-tiling quasicrystals with the square-shaped boundary [Fig. <ref>(a)] and set the varying period of Wilson mass η=2. Figure <ref>(a) shows the spectral function 𝒜 (E_F,k_z) of the 3D quasicrystalline system with open-boundary condition in the xy-plane. We can see that the bulk conduction and valence bands touch at two discrete points k_z=± k_z^1 where the energy gap is closed, indicating a semimetal phase. Importantly, fourfold degenerate zero-energy flat band boundary states emerge in the region |k_z|>k_z^1, describing hinge Fermi arc states in this semimetal phase. Figure <ref>(c) displays the probability density distribution of the zero-energy states at k_z=-2 [marked by the green star in Fig. <ref>(a)].
Figure <ref>(b) illustrates the spectral function of the quasicrystalline system with periodic boundary conditions along all the directions. The periodic boundary condition in the xy-plane is achieved by treating the system as a crystal with a supercell periodicity. Comparing to the spectral function under open boundary condition in Fig. <ref>(a), the zero-energy flat band boundary states disappear, further confirming that the zero-energy modes in between ± k_z^1 are hinge Fermi arc states. Moreover, the higher-order topology of the hinge Fermi arcs is revealed by the quantized quadrupole moment Q_xy=0.5 for |k_z|>k_z^1 [Fig. <ref>(d)]. Therefore, the system is identified as a quasicrystalline SOTSM.
The bulk spectral function versus k_z exhibits a linear dispersion near the gap closing points at ± k_z^1 [Fig. <ref>(b)]. Meanwhile, the density of states around the gap closing points is parabolic, as shown in the inset of Fig. <ref>(b), which identifies the well-known bulk signatures of Dirac points in crystalline systems <cit.>. These features suggest that the gapless points in the present system are Dirac points in quasicrystals. However, as discussed in Sec. <ref>, a more detailed analysis reveals that the situation is complex.
§.§ η=4 and η=6
Now, we come to the cases of η=4 and η=6, which give rise to 2D quasicrystalline SOTIs without crystalline counterparts <cit.>. Here, the 3D quasicrystalline systems are stacked from the AB-tiling octagonal quasicrystal [Figs. <ref>(c) and <ref>(d)] and the Stampfli-tiling dodecagonal quasicrystal [Figs. <ref>(e) and <ref>(f)], respectively. Figures <ref>(a) and <ref>(b) show the spectral function 𝒜 (E_F,k_z) of the two 3D quasicrystalline systems with open boundary conditions in the xy-plane and periodic boundary conditions along the vertical direction. The spectral functions look similar to that shown in Fig. <ref>(a); however, the degeneracy of the zero-energy modes is different.
These zero-energy flat-band boundary modes in the region |k_z|>k_z^1 are hinge Fermi arc states traveling on the hinges of 3D octagonal/dodecagonal quasicrystals. This can be observed more clearly in Figs. <ref>(e) and <ref>(f), which show the energy spectra and the probability distributions of the zero-energy modes for fixed k_z marked by the green stars shown in Figs. <ref>(a) and <ref>(a), respectively. Apparently, the hinge Fermi arc states are inherited from the C_n^z-symmetric corner modes in quasicrystalline SOTIs, where n=8 in the AB-tiling octagonal quasicrystal and n=12 in the Stampfli-tiling dodecagonal quasicrystal.
To characterise the bulk electronic structure, we plot the spectral function under periodic boundary conditions along all three directions in Figs. <ref>(c) and <ref>(d). As in the case with η=2, similar phenomena are observed, such as the disappearance of the zero-energy hinge arcs, a linear dispersion along k_z, and a quadratic density of states around the gap closing points.
Therefore, our study demonstrates that stacking 2D quasicrystals can result in the emergence of an exotic topological phase of matter i.e., the quasicrystalline SOTSMs, which possesses eight and twelve hinge Fermi arcs protected by forbidden rotation symmetries in crystalline systems. Our findings highlight the potential for stacking 2D quasicrystals and expand our understanding of condensed matter physics.
§ DIRAC-LIKE POINTS
Upon initial inspection, the gap closing points near k_z=± k_z^1,2,3 shown in Figs. <ref>(b), <ref>(c) and <ref>(d) are reminiscent of the Dirac point characterized by the massless Dirac equation. They both exhibit a linear dispersion along k_z and a unique quadratic density of state near the gap closing points. However, a closer inspection of the spectrum reveals that the gap closing points in quasicrystalline SOTSMs are distinct from those in crystalline second-order topological Dirac semimetals (SODSMs).
Figure <ref>(a) shows the spectrum near the gap closing point k_z^1 in the SOTSM with η=2 under periodic boundary conditions along all directions [see Fig. <ref>(b)]. Three band-crossing points appear, which is quite different from the crystalline SODSM phase that hosts only one band-crossing point [Fig. <ref>(e)]. Figures <ref>(b) and <ref>(e) show the wave functions of the states marked by the red and green stars in Fig. <ref>(a), respectively. One of the band crossings is dominated by the local patch containing three square tiles and two rhombus tiles [Figs. <ref>(b) and <ref>(c)], and the other is dominated by the local patch containing six rhombus tiles [Figs. <ref>(d) and <ref>(e)]. The appearance of multiple band-crossing points arises because the gap closes at different k_z for distinct kinds of local patches. This phenomenon is attributed to the absence of discrete translational symmetry in quasicrystalline systems.
For the AB-tiling octagonal quasicrystal with η=4, the spectrum opens a tiny energy gap [Fig. <ref>(f)], whose size decreases with increasing system size [Fig. <ref>(g)]. For the Stampfli-tiling quasicrystal with η=6, the spectrum is similar to the case with η=2, except that more band crossings appear, because there are more distinct patterns of local patches in the Stampfli-tiling quasicrystal.
Although the gap-closing points in quasicrystalline SOTSMs share several similarities with the Dirac points in crystalline SODSMs, a closer inspection of the spectrum reveals a fine structure of the gap-closing points arising from the absence of translational symmetry. Therefore, we dub these gap closing points in the quasicrystalline SOTSM phase Dirac-like points.
§ PHASE DIAGRAM
We present the topological phase diagram of the stacked quasicrystal system in this section.
Figures <ref>(a)-<ref>(b) show ln E_g and Q_xy as functions of the momentum k_z and the parameter M for the AB-tiling quasicrystalline square system with η=2. E_g is the value of the energy gap obtained under periodic boundary conditions along all the three directions. Each point along the white line corresponds to the gap-closing point shown in Fig. <ref>(b). For about -5.7<M<0.3, the existence of the gap closure with the accompanying topological phase transition between Q_xy=0 and Q_xy=0.5 indicates that the system corresponds to the SOTSM phase.
For about M>0.3, the system corresponds to a 3D quasicrystalline SOTI phase with a topological gap characterized by a quantized quadrupole moment Q_xy=0.5 for any k_z. For about M<-5.7, the system is a normal insulator (NI) with a topologically trivial gap.
Above we only consider the case of η=2 in the AB-tiling quasicrystal. In the cases of the AB-tiling octagonal quasicrystal with η=4 and the Stampfli-tiling dodecagonal quasicrystal with η=6, we find similar results by adjusting the parameter M, i.e., the systems also support the quasicrystalline SOTSM phase, the 3D quasicrystalline SOTI phase, and the NI phase.
§ DISCLINATION-INDUCED BOUND STATES
Disclination-induced bound states provide a potential probe of crystalline topology, which has been widely investigated in different topological systems <cit.>. Recently, disclination-induced bound states have been observed in topological crystalline insulators <cit.>, acoustic topological insulators <cit.>, and acoustic Weyl semimetals <cit.>. In this section, we study the disclination-induced bound states in the quasicrystalline SOTSM phase.
The disclination is introduced by cutting out a specific segment [the first column in Fig. <ref>] and then gluing the lattice back together [the second column in Fig. <ref>]. The two sides of the cut are glued together by identifying sites on the two sides of the cut related by rotational symmetry, which is called a Volterra process <cit.>. The defect breaks the rotational symmetry locally at the center of the lattice, but the rest of the lattice preserves the rotational symmetry and is indistinguishable from the bulk of the original system without the cut.
The corresponding spectral function of sample geometries in Figs. <ref>(a)-<ref>(b), Figs. <ref>(c)-<ref>(d), and Figs. <ref>(e)-<ref>(f) are similar to Fig. <ref>(a), Fig. <ref>(a), and Fig. <ref>(b), respectively, except that the spatial probability distributions are different for the zero-energy modes. The colored points in Fig. <ref> display the probability distributions of the zero-energy modes in these systems with k_z=-2.
For the three different disclination systems in Figs. <ref>(b), <ref>(d), and <ref>(f), each hosts one zero-energy mode at the disclination core, and three, seven, and eleven zero-energy modes at the hinges of the system, respectively. Moreover, similar to the zero-energy hinge modes, the disclination modes only appear for |k_z|>k_z^1/2/3 and disappear in the regions |k_z|<k_z^1/2/3. This further reveals that the disclination-induced bound states and the hinge Fermi arc states are a consequence of the nontrivial bulk topology and cannot be removed without topologically trivializing the bulk of the system <cit.>. Moreover, the k_z-dependent disclination-induced bound states provide an experimental probe for the quasicrystalline SOTSM phase.
§ CONCLUSION AND DISCUSSION
In conclusion, this study has demonstrated that a stack of 2D quasicrystalline SOTIs can give rise to 3D quasicrystalline SOTSM phases. These 3D phases exhibit rotation-symmetry protected hinge Fermi arcs, which are forbidden in crystalline systems. Additionally, our calculations have shown that the stacked systems also support the 3D quasicrystalline SOTI phase, as evidenced by the phase diagram. We have proposed that the dependence of k_z on disclination-induced bound states can serve as an experimental probe for the quasicrystalline SOTSM phase.
While the quasicrystalline SOTSM shares similarities with the crystalline SODSM <cit.>, there are three main distinctions between them. Firstly, the number of C_n^z-symmetry protected hinge Fermi arcs in the quasicrystalline SOTSM is not limited to four, as observed in crystalline SODSM, but can be eight or twelve as well. Secondly, in the quasicrystalline SOTSM, the lack of translational symmetry renders the in-plane momentum ineffective as a quantum number, making it impossible to define Dirac points in momentum space, unlike in crystalline SODSM where the Dirac equation applies. Lastly, the spectrum of the quasicrystalline SOTSM exhibits a higher number of band-crossing points compared to the crystalline SODSM, a consequence of the absence of in-plane translational symmetry in the stacked quasicrystals.
Moreover, recent experiments investigating the stack of Ta_1.6Te quasicrystal layers <cit.>, along with first-principles calculations and symmetry analysis, have revealed a symmetry-protected semimetal phase and explored the topological properties of the material. This suggests that the quasicrystalline SOTSM phase can be experimentally realized in real materials. Furthermore, considering the successful experimental realization of the 2D quasicrystalline SOTI phase in electrical circuit systems <cit.>, we believe that the quasicrystalline SOTSM holds promise in metamaterials. These unique features and possibilities offer exciting prospects for the future implementation of our proposal.
D.-H.X. was supported by the NSFC (under Grant Nos. 12074108 and 12147102), the Natural Science Foundation of Chongqing (Grant No. CSTB2022NSCQ-MSX0568). R.C. acknowledges the support of the Chutian Scholars Program in Hubei Province. B.Z. was supported by the NSFC (under Grant No. 12074107), the program of outstanding young and middle-aged scientific and technological innovation team of colleges and universities in Hubei Province (under Grant No. T2020001) and the innovation group project of the natural science foundation of Hubei Province of China (under Grant No. 2022CFA012).
§ CRYSTALLINE SODSM
To make a comparative study, we investigate the 3D crystalline SODSM phase [Fig. <ref>(b)], modeled by stacking 2D crystalline SOTIs along the vertical direction [Fig. <ref>(a)]. Figures <ref>(c) and <ref>(d) show the spectral function of the crystalline system with open and periodic boundary conditions in the xy-plane, respectively. Hinge Fermi arcs appear and connect the band-closing points at k_z=± k_z^4. The results are similar to those in Figs. <ref>(a)-<ref>(b). Figure <ref>(e) shows the spectrum near the band-closing point -k_z^4. Only one band-crossing point is observed because of the translational symmetry of crystalline systems. This is observed more clearly in Fig. <ref>(f). The probability density of the state labeled by the green star [Fig. <ref>(e)] is uniformly distributed, and all the local patches undergo the topological phase transition simultaneously as k_z varies.
Moreover, we find that the low-energy effective Hamiltonian can be described by the massless Dirac equation. Therefore, the system is identified as the crystalline SODSM phase.
|
http://arxiv.org/abs/2307.05928v1 | 20230712054811 | Effect of polarisation on two-photon resonance in a large Zeeman manifold | [
"Nayan Sharma",
"Ranjit Kumar Singh",
"Souvik Chatterjee",
"Prasanta K. Panigrahi",
"Ajay Tripathi"
] | physics.optics | [
"physics.optics",
"physics.atom-ph"
] |
Department of Physics, Sikkim University, 6th Mile Samdur, East Sikkim, India -737102
Department of Physics, Sikkim University, 6th Mile Samdur, East Sikkim, India -737102
Department of Physics, Sikkim University, 6th Mile Samdur, East Sikkim, India -737102
Department of Chemistry, Amity Institute of Applied Sciences,
Amity University, Sector-125, Noida, Uttar Pradesh 201313, India.
Department of Physical Sciences, Indian Institute of Science Education
and Research Kolkata, Mohanpur 741246, West Bengal, India.
In this study, we present numerical investigations on a large Zeeman manifold in an electromagnetically induced transparency (EIT) medium, focusing on the D_1 and D_2 lines of ^87 Rb as our model system. We examine two distinct models comprising 13 and 16 energy levels, respectively, using pump-probe spectroscopy with varying polarization of the light fields. A longitudinal magnetic field is used, and the ellipticity of both light fields is varied with the constraint that both lights have orthogonal polarization. We discover that in the presence of a longitudinal magnetic field, the change in ellipticity of light polarization induces optical anisotropy. This anisotropy results from the uneven distribution of population among the ground Zeeman levels, leading to the absorption of weak probe light. For a large number of states interacting with different field components, the existence of a steady state depends upon the multi-photon resonance and phase matching conditions. A comment is made on why such conditions are not required in our model, and the assumptions and limitations of the model are also discussed. To validate our numerical findings, we perform experimental measurements at two different magnetic field strengths in the D_2 line of ^87 Rb. The experimental results align well with our numerical simulations. Specifically, we conclude that the probe transmission spectra at lower magnetic field values (up to 20 G) exhibit
similarity for both the D_1 and D_2 lines of ^87 Rb, effectively described by the 13-level model. However, at higher magnetic field values, a more complicated 16-level (or higher) system is necessary to accurately capture the response of the probe in D_2 line.
Effect of polarisation on two-photon resonance in a large Zeeman manifold
Prasanta K. Panigrahi
August 12, 2023
=========================================================================
§ INTRODUCTION
The interaction of light fields with multi-level atoms gives rise to various non-linear optical phenomena. These are often multi-photon processes that are based on quantum interference between the various excitation pathways <cit.>. The phenomena of electromagnetically induced transparency (EIT) <cit.>, electromagnetically induced absorption (EIA) <cit.>, coherent population trapping (CPT) <cit.>, four wave mixing (FWM) <cit.>, six wave mixing <cit.>, two-photon absorption (TPA) <cit.> and others are some examples where multi-photon processes are dominant. These phenomena are extensively studied using hyperfine levels of alkali metals like Rb, Cs, Na, e.t.c., in an atomic vapor system <cit.>. Atomic vapor systems in the presence of a magnetic field are an excellent platform to study the interaction of light fields with multi-level atoms. Moreover, since the atomic medium is quantum mechanical in nature, the magnetic field (B) provides a preferred axis for the measurement of the observables of the system.
In experiments, the resonances are observed either by changing the frequency detuning of one of the light fields or by using a time-dependent magnetic field (Hanle configuration). Without magnetic fields, it has been shown that the nature of the resonance can be controlled by adding field components <cit.> and by changing the phase as well as the polarization state of the light field in systems such as Λ, N and tripod types <cit.>. On the other hand, in the Hanle configuration, the ellipticity of the light polarisation is known to have a significant effect on the EIT/EIA resonances. By changing the polarisation states, the amplitude of the EIT/EIA resonances can be controlled <cit.>. Also, a change in ellipticity can result in the conversion of EIT to EIA (and vice versa), as mentioned in <cit.>.
In presence of a static magnetic field, two-photon resonances can switch between transmission and absorption at room temperature in both D_1 and D_2 line of ^87 Rb<cit.>. Using two light fields of linear orthogonal polarisation (lin⊥lin), the number of two-photon resonances in the probe spectra can be changed by using a different direction of the magnetic field <cit.>. By changing the direction of the magnetic field, the polarisations of the light fields are redefined, modifying the number of Λ systems that are responsible for such change in the number of resonances. Detailed study of the nature of EIT resonances in atomic vapor systems at both low and high magnetic field values has been performed by various groups <cit.>.
However, a thorough literature review indicates that there is a lack of detailed investigation of the effects of light polarisation on two-photon resonances in the presence of a fixed static magnetic field (non-Hanle configuration) in an EIT medium.
In the present work we attempt to fill this research gap by investigating the effects of light polarisation for a multi-level atomic EIT system interacting with two light fields (pump and probe) in the presence of a magnetic field.
The model case is briefly discussed below.
§.§ Model Case: D_1 and D_2 line of ^87 Rb
We take a case where a Λ system is formed with two light fields (weak probe and strong pump) using the hyperfine levels |F=1⟩, |F=2⟩ and |F'=2⟩ for both the D_1 and D_2 lines of ^87 Rb, as shown in Fig.<ref>.(a) and (b). At room temperature (300 K) the Doppler width (FWHM) of ^87 Rb is 511 MHz. For the D_1 line, the nearest excited state is |F'=1⟩ with a frequency gap of 817 MHz, which is larger than the Doppler width. On the other hand, for the D_2 line the frequency gap between |F'=2⟩ and |F'=1⟩ is 157 MHz, which is well within the Doppler width. Hence, the effect of the nearby state |F'=1⟩ (if any) would be more pronounced for the D_2 line in our model. Therefore, we study two different systems having 3 and 4 hyperfine levels for the D_1 and D_2 lines respectively. In the absence of a magnetic field, the model system comprises 13 and 16 degenerate energy levels for the D_1 and D_2 lines respectively. After the application of the magnetic field all of these levels become nondegenerate. We consider a case where the polarisations of the two light fields are orthogonal to each other. The quantization axis is fixed by the direction of the static magnetic field, which is along the propagation direction (longitudinal magnetic field). Hence, if both fields are linearly polarized (lin⊥lin), the atomic frame would measure the light as an equal mix of σ^+ and σ^- light. The energy level diagram for this case with the Zeeman levels (without splitting) is shown in Fig.<ref> (a) and (b).
Typically, for numerical calculations, different Λ systems are identified and treated as individual systems. The final spectra are then reproduced by summing up the contributions of all the individual Λ systems. This approach, however, fails to reproduce the experimental spectra even at moderate magnetic field values, where two-photon absorption and EIT are simultaneously present at different frequencies. On the other hand, for models with a large number of energy states, the existence of a steady state depends on the existence of a time- and phase-independent basis for the system. We present a simple derivation to show that just by counting the number of field components and available energy states, the existence of such a basis can be determined. The rest of the article is organised as follows: first we introduce the equations in section <ref>, followed by a detailed discussion of the results for the 13-level and 16-level systems in section <ref>. In section <ref> we present a comment on the existence of steady state solutions, followed by a discussion of the assumptions and limitations of our model in <ref>. We also present an experiment performed in the D_2 line of ^87 Rb, the results of which are in agreement with the numerical results, in section <ref>. Finally, the article is concluded in section <ref>, where possible applications of our work are also discussed.
§ EQUATIONS
We consider the light fields to be plane waves with elliptical polarisation,
E⃗= E⃗_⃗p⃗ (e^i ω_p t + e^- i ω_p t)+E⃗_⃗c⃗ (e^i ω_c t+e^-i ω_c t)
where ω_p and ω_c are the frequencies of the probe and pump respectively. The spatial dependence of the phase (e^± i k z) is neglected using the dipole approximation. The vector amplitudes, which include the polarisation, are
E⃗_⃗p⃗= E_p ê_p, E⃗_⃗c⃗= E_c ê_c
Where, ê_p and ê_c are represented in circular basis in the following manner,
ê_p = + sin(ϵ-π/4) ê_+ - cos(ϵ-π/4) ê_-
ê_c = - cos(ϵ-π/4) ê_+ - sin(ϵ-π/4) ê_-
Where, ϵ is the ellipticity and ê_±=∓(ê_x ± iê_y/√(2)) are the circular basis. Note that the constraint ê_p . ê_c=0 allows us to define only one ellipticity parameter ϵ to represent the polarisation state of both the light fields (Fig.<ref>). Ellipticity is defined within the range -π/4≤ϵ≤π/4, for which ϵ=0 represents linearly polarised states and ϵ=±π/4 represents circular polarisations. The fields in the new basis are,
E⃗_⃗p⃗= E^+_p ê_++E^-_p ê_-, E⃗_⃗c⃗= E^+_c ê_++E^-_c ê_-
where, E^+_p=E_p sin(ϵ-π/4) and so on.
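As a small consistency check of this parametrization, the circular-basis components E^±_{p,c} and the mutual orthogonality of the probe and pump polarizations can be evaluated numerically for any ellipticity. The following minimal Python sketch (with unit field amplitudes and a function name of our choosing, purely for illustration) prints the components for the five ellipticities used later in the paper:

import numpy as np

def field_components(eps, E_p=1.0, E_c=1.0):
    """Circular-basis amplitudes (E^+, E^-) of probe and pump for ellipticity eps."""
    th = eps - np.pi / 4
    e_p = np.array([np.sin(th), -np.cos(th)])    # probe:  e_p = sin(th) e_+ - cos(th) e_-
    e_c = np.array([-np.cos(th), -np.sin(th)])   # pump :  e_c = -cos(th) e_+ - sin(th) e_-
    return E_p * e_p, E_c * e_c

for eps in (0.0, np.pi / 6, np.pi / 4, -np.pi / 6, -np.pi / 4):
    Ep, Ec = field_components(eps)
    # the Hermitian inner product of the two polarization vectors vanishes for every eps
    print(f"eps = {eps:+.3f}   E_p^(+,-) = {np.round(Ep, 3)}   "
          f"E_c^(+,-) = {np.round(Ec, 3)}   <e_c|e_p> = {np.vdot(Ec, Ep):+.1e}")

For ϵ=0 the two circular components of each beam have equal magnitude (lin⊥lin), while at ϵ=±π/4 each beam reduces to a single circular component, consistent with the limiting cases discussed below.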
The total Hamiltonian for the system is defined as,
H=H_0+H_I+H_B
H_0 and H_I represent the bare-state and interaction Hamiltonians respectively. H_B represents the energy corrections due to the magnetic field. For a 16-level system, using the rotating wave approximation (RWA), we have
H_0= ħ (δ_p-δ_c) ∑_i=4^8|i⟩⟨i| + ħ(δ_p-Δ-kv) ∑_i=9^11|i⟩⟨i| + ħ (δ_p-kv) ∑_i=12^16|i⟩⟨i|
Here, δ_p and δ_c are the frequency detunings of the probe and pump from the transitions |F=1⟩→|F'=2⟩ and |F=2⟩→|F'=2⟩ respectively. Δ is the frequency gap between the excited hyperfine states (|F'=2⟩ and |F'=1⟩).
H_I=-ħ/2 [ Ω^+_p ( c_1,10|1⟩⟨10| + c_1,14|1⟩⟨14| + c_2,11|2⟩⟨11| + c_2,15|2⟩⟨15| + c_3,16|3⟩⟨16| )
+ Ω^-_p ( c_1,12|1⟩⟨12| + c_2,9|2⟩⟨9| + c_2,13|2⟩⟨13| + c_3,10|3⟩⟨10| + c_3,14|3⟩⟨14| )
+ Ω^+_c ( c_4,9|4⟩⟨9| + c_4,13|4⟩⟨13| + c_5,10|5⟩⟨10| + c_5,14|5⟩⟨14| + c_6,15|6⟩⟨15| + c_6,11|6⟩⟨11| + c_7,16|7⟩⟨16| )
+ Ω^-_c ( c_5,12|5⟩⟨12| + c_6,9|6⟩⟨9| + c_6,13|6⟩⟨13| + c_7,10|7⟩⟨10| + c_7,14|7⟩⟨14| + c_8,11|8⟩⟨11| + c_8,15|8⟩⟨15| ) ] + h.c.
Ω^±_p and Ω^±_c are the reduced Rabi frequencies defined as,
Ω^±_p=E^±_p ⟨J||e r| |J'⟩/ħ Ω^±_c=E^±_c ⟨J||e r| |J'⟩/ħ
c_i,j is the Clebsch-Gordan coefficient which defines the transition strengths between the Zeeman levels <cit.>.
H_B=μ_B B ∑_i=1^16 m_F_i g_F_i|i⟩⟨i|
Here, m_F_i and g_F_i are the magnetic quantum number and Landé g-factor associated with the state |i⟩ respectively. μ_B is the Bohr magneton and B is the strength of the magnetic field. Similarly, the Hamiltonian for a 13-level system can also be written.
The time evolution of the system within the density matrix formulation is given by
ρ̇(δ_c,v) = -i/ħ [H(δ_c,v),ρ(δ_c,v)] - 1/2{Γ,ρ(δ_c,v)} - γρ(δ_c,v)
Γ (of the order of MHz) is the relaxation matrix which incorporates the spontaneous decays out of the excited states. γ is the decay rate due to collisions of atoms, which is typically of the order of a few kHz at room temperature. The velocity distribution function of the atoms as a function of temperature is given by
f(v)=√(m/2π K_B T) exp(-mv^2/2 K_B T)
Equation <ref> is solved under the steady-state condition ρ̇=0 to find the density matrix elements for each velocity class in the range -300 m/s to 300 m/s. These density matrix elements are then averaged using f(v) as a kernel to get the final results,
ρ̅_i,j(δ_c)= ∫ dv f(v) ρ(δ_c,v)
§ RESULTS AND DISCUSSION
§.§ 13 level system
For a 13-level system, the response of the probe field is found by calculating the susceptibility given by,
χ_p(δ_c)=2 N |d|^2 /ħϵ_0 [ (Ω^+_p)^-1(c_1,11 ρ̅_1,11 + c_2,12 ρ̅_2,12 + c_3,13 ρ̅_3,13) + (Ω^-_p)^-1(c_1,9 ρ̅_1,9 + c_2,10 ρ̅_2,10 + c_3,11 ρ̅_3,11) ]
Here, N is the number density, ϵ_0 is the permittivity of free space and d=⟨J||e r||J'⟩. Finally, the transmission of the probe is calculated using the imaginary part of the susceptibility,
T=T_0 e^-k |Im(χ_p)| L
Where, L is the length of the cell containing the atomic vapor.
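To make the numerical workflow used above concrete (a steady-state solution for each velocity class, the Doppler average with the kernel f(v), and the transmission T=T_0 e^{-k|Im(χ_p)|L}), the following minimal Python sketch applies the same steps to a toy three-level Λ system rather than our full 13/16-level Zeeman manifold. It uses a standard Lindblad description with repopulation of the ground states (not the open-system relaxation used in our model), and all parameter values (Rabi frequencies, Doppler width, absorption scale, in units of the excited-state linewidth) are illustrative choices, not the values of our calculations:

import numpy as np

def liouvillian(H, jump_ops):
    # row-major vectorisation: vec(rho) = rho.reshape(-1)
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for c in jump_ops:
        cdc = c.conj().T @ c
        L += np.kron(c, c.conj()) - 0.5 * (np.kron(cdc, I) + np.kron(I, cdc.T))
    return L

def steady_coherence(dp, dc, Om_p=0.1, Om_c=3.0, Gamma=1.0):
    """Steady-state coherence rho_31 of a 3-level Lambda system (|1>,|2> ground, |3> excited)."""
    H = np.diag([0.0, -(dp - dc), -dp]).astype(complex)    # rotating-frame Hamiltonian
    H[0, 2] = H[2, 0] = Om_p / 2
    H[1, 2] = H[2, 1] = Om_c / 2
    e1, e2, e3 = np.eye(3, dtype=complex)
    jumps = [np.sqrt(Gamma / 2) * np.outer(e1, e3),        # |3> -> |1> decay
             np.sqrt(Gamma / 2) * np.outer(e2, e3)]        # |3> -> |2> decay
    w, V = np.linalg.eig(liouvillian(H, jumps))
    rho = V[:, np.argmin(abs(w))].reshape(3, 3)            # eigenvalue closest to zero = steady state
    rho /= np.trace(rho)
    return rho[2, 0]

# Doppler average with a Maxwell-Boltzmann kernel and probe transmission, in units of Gamma
sigma_D = 36.0                                             # Doppler width (illustrative)
kv = np.linspace(-3 * sigma_D, 3 * sigma_D, 121)           # velocity enters only through k*v
f = np.exp(-kv ** 2 / (2 * sigma_D ** 2)); f /= f.sum()
for dc in np.linspace(-10, 10, 41):
    # co-propagating beams: both one-photon detunings shift by -k*v, the Raman detuning does not
    chi_im = sum(fi * steady_coherence(-kvi, dc - kvi).imag for fi, kvi in zip(f, kv))
    print(f"delta_c = {dc:+6.2f}   T = {np.exp(-40.0 * abs(chi_im)):.4f}")   # T = T0 e^{-k|Im chi|L}

Because the two-photon (Raman) detuning is Doppler-free for co-propagating beams, the transmission maximum should survive the velocity average and appear near δ_c ≈ δ_p, mirroring the EIT peaks discussed below.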
The numerical transmission spectra are shown in Fig.<ref>. The calculations are performed for five different values of ellipticity (ϵ=0,±π/6,±π/4) with a fixed magnetic field of B=20 G. The energy level diagram without Zeeman splitting for this case is shown in Fig.<ref>.(a). The probe and pump intensities are kept fixed at 0.3 mW/cm^2 and 161.2 mW/cm^2 respectively throughout the evaluations. Figure <ref> (b) shows the spectra for ϵ=0, in which the σ^± components of the pump and probe have equal intensity. Three different transmission peaks are observed in this case, each of which is a superposition of two-photon resonances due to different Λ systems. There are in total ten Λ systems present in this case, whose two-photon resonance conditions are satisfied at δ_c=0, ± 2Δ_g (Δ_g=μ_B B/2ħ). Out of the ten Λ systems, four are responsible for the transmission at δ_c=0. Each transmission at δ_c=± 2Δ_g is a superposition of three such Λ systems. The amplitude of the transmission at δ_c=0 is larger compared to the ones at δ_c=± 2Δ_g due to the involvement of a larger number of Λ systems (hence a larger population).
Figure <ref>.(c) shows the result for the case of ϵ=π/6, where it is observed that the two-photon resonance at δ_c=2Δ_g, i.e., the blue-detuned peak, flips and converts to an absorption. Also, the amplitude of the transmission at δ_c=0 is smaller compared to the ϵ=0 case. This suggests that the large populations in the system now undergo two-photon absorption. We found that if we calculate the susceptibility Eq.<ref> without the coherence term ρ̅_1,11, the transmission is recovered in the spectra, as shown in the inset of Fig. <ref> (c). For this value of ellipticity, we have Ω^+_c>Ω^-_c (Ω^+_p<Ω^-_p), which leads to population accumulation in the state |8⟩ (|F=2,m_F=+2⟩). This occurs because the transition |8⟩→|12⟩ has a low Rabi frequency (c_8,12Ω^-_c). Such an unequal population distribution creates optical anisotropy in the medium, where the weak σ^+ probe component (ρ̅_1,11) gets absorbed near the two-photon resonance condition with a small light shift, which is of the order of (Ω^+_c)^2/δ_p.
The result for ϵ=π/4 (probe purely σ^- and pump purely σ^+) is shown in Fig. <ref> (d). For this case, we find two transmission peaks at δ_c=0 and δ_c=2Δ_g, while the resonance at δ_c=-2Δ_g is missing. The reason for this observation is simple: for this polarisation there are only two Λ systems present (Fig.<ref>.(a)). Also, the weak σ^+ component of the probe, which undergoes absorption at ϵ=π/6, is absent in this case.
For the cases of ϵ=-π/6 and ϵ=-π/4 the results are shown in Fig.<ref> (e) and (f), respectively. The observations are symmetric with respect to the ones obtained for ϵ=π/6 and ϵ=π/4. Figure <ref> (e) shows the result for which the condition Ω^-_c>Ω^+_c (Ω^-_p<Ω^+_p) is maintained, i.e., ϵ=-π/6. We find that the absorption in this case occurs on the red-detuned side (δ_c=-2Δ_g) and is related to the coherence term ρ̅_3,11. As in the previous case, the spectrum calculated without the term ρ̅_3,11 shows no absorption, as shown in the inset of Fig.<ref>.(e). Hence, a weak σ^- component of the probe undergoes absorption in the presence of a relatively stronger σ^+ component near the two-photon resonance. Finally, the result for ϵ=-π/4 (Fig.<ref>.(f)) reveals two transmission peaks at δ_c=0 and δ_c=-2Δ_g, which are due to the two Λ systems formed by the σ^- pump and σ^+ probe, as seen from the energy level diagram shown in Fig.<ref>.(b).
§.§ 16 level system
The numerical calculations for the 16-level system are performed at 30 G. The reason for choosing a larger magnetic field is to involve the Zeeman states of the other excited state. The energy levels diagram with the Zeeman states is shown in Fig.<ref>.(a). The susceptibility of the probe field in this case is calculated as follows:
χ_p(δ_c)=2 N |d|^2 /ħϵ_0 [ (Ω^+_p)^-1(c_1,10 ρ̅_1,10 + c_1,14 ρ̅_1,14 + c_2,11 ρ̅_2,11 + c_2,15 ρ̅_2,15 + c_3,16 ρ̅_3,16) + (Ω^-_p)^-1(c_1,12 ρ̅_1,12 + c_2,9 ρ̅_2,9 + c_2,13 ρ̅_2,13 + c_3,10 ρ̅_3,10 + c_3,14 ρ̅_3,14) ]
The numerical transmission spectra (Eq.<ref>) for this case are shown in Fig.<ref>. For ϵ=0, i.e., for linearly polarised light fields, three transmission peaks are observed at δ_c=0, ± 2Δ_g. For the 16-level system, we find there are in total 18 Λ systems, out of which five each are responsible for the transmission at δ_c=± 2Δ_g and the remaining eight superpose at δ_c=0. The amplitude of the transmission peak at δ_c=-2Δ_g is larger than the other two. The reason is that the inclusion of extra Zeeman levels changes the population distribution through optical pumping in such a way that a larger population is accumulated in |3⟩.
For ϵ=π/6, i.e., when Ω^+_c>Ω^-_c (Ω^+_p<Ω^-_p), similar to the 13-level system, the transmission at δ_c=+2Δ_g converts to an absorption dip. Unlike the 13-level system, the transmission at δ_c=0 shows an asymmetric line shape in this case. Such asymmetry arises due to the coupling of the light fields with the Zeeman levels of the nearby excited state <cit.>. We calculated the susceptibility (Eq.<ref>) without the coherence terms ρ̅_1,10 and ρ̅_1,14 and recovered the transmission in the spectra, as shown in the inset of Fig.<ref>.(c). Hence, in this case also, a weak σ^+ component of the probe is absorbed in the presence of a strong σ^- component (and a strong σ^+ component of the pump).
Next, the calculations are performed at ϵ=+π/4 for which the result is shown in Fig.<ref>.(d). The results are similar to those of the 13-level system, where we find two transmission peaks at δ_c=0 and δ_c=2Δ_g. For this case where the light fields are circularly polarised (σ^- probe and σ^+ pump) two Λ systems superpose to give transmission peaks at δ_c=0 and δ_c=2Δ_g as shown in Fig.<ref>.(a).
Figure <ref>.(e) shows the result for the case of ϵ=-π/6, which shows three transmission peaks and no switching at any two-photon resonance condition. The symmetric result of the 13-level system is not reproduced in the case of the 16-level system. The reason for the absence of absorption for ϵ=-π/6, for which the condition Ω^-_c>Ω^+_c (Ω^-_p<Ω^+_p) is maintained, is the involvement of the Zeeman state |10⟩. The weak σ^- component of the probe, which was responsible for the absorption in the 13-level system, is now red detuned with respect to |3⟩→|14⟩ and blue detuned with respect to |3⟩→|10⟩. This scenario leads to increased scattering of the probe through both the states |10⟩ and |14⟩, as both levels are taken to be within the Doppler width (Δ_D) in the system, i.e., Δ_D>ω_14-ω_10. In other words, the probe is far detuned from the level |14⟩ while at the same time being near detuned to the state |10⟩. This condition leads to a reduction in the cross-section (σ_TPA) of the two-photon absorption process, as σ_TPA∝δ_p <cit.>. For ϵ=π/6, on the other hand, the weak σ^+ component of the probe is blue detuned with respect to both the transitions |1⟩→|10⟩ and |1⟩→|14⟩. This increases the probability of a two-photon absorption process as the probe becomes far detuned with respect to both the states |10⟩ and |14⟩.
For ϵ=-π/4, similar to the 13-level system, two transmission peaks occur at δ_c=-2Δ_g and δ_c=0 (Fig.<ref>(f)). Four Λ systems are present for this polarisation of the light fields, where the Ω^-_p and Ω^+_c components are absent (Fig.<ref>(b)).
§ EXISTENCE OF STEADY STATE SOLUTIONS
While writing the total Hamiltonian (H), we assumed that the system has a time- and phase-independent basis (co-rotating frame), which ensured the existence of a steady state solution. A general condition for the existence of such a steady state solution is presented in this section. Let us take a case of N energy levels in a cascade system, as shown in Fig.<ref>. There are in total N fields with Rabi frequencies Ω_1, Ω_2, …, Ω_N driving N transitions.
The interaction Hamiltonian in the Schrödinger picture for this N × N system is
H̅_I=-ħ/2( ∑_j=1^N-1Ω_j e^i β_j|j⟩⟨j+1| + ∑_j=1^N-1Ω^*_j e^- i β_j|j+1⟩⟨j| + Ω_N e^i β_N|1⟩⟨3| + Ω^*_N e^-i β_N|3⟩⟨1| )
Where, Ω_j is the Rabi frequency of the transition |j⟩→|j+1⟩ and β_j=ω_j t+ϕ_j is the phase of the fields. We look for a unitary matrix U=∑_k=1^N e^i α_k|k⟩⟨k| such that UH̅_IU^† becomes time and phase independent.
H_I=UH̅_IU^†= -ħ/2 (∑_k=1^N e^i α_k|k⟩⟨k|) (∑_j=1^N-1Ω_j e^i β_j|j⟩⟨j+1| + ∑_j=1^N-1Ω^*_j e^- i β_j|j+1⟩⟨j| + Ω_N e^i β_N|1⟩⟨3| + Ω^*_N e^-i β_N|3⟩⟨1|) (∑_l=1^N e^- i α_l|l⟩⟨l|)
The problem then reduces to the following set of N linear equations,
β_k+α_k-α_k+1=0, k=1,2,…,N-1
β_N+α_1-α_3=0
A non-trivial solution for these N linear equations can be found by taking α_1=0 which gives for N>1,
α_N=∑_i=2^Nβ_i-1 with β_N=β_1+β_2
The condition β_N=β_1+β_2 encodes the so-called multi-photon resonance (ω_N=ω_1+ω_2) and phase matching (ϕ_N=ϕ_1+ϕ_2) conditions. Hence, for the existence of a time- and phase-independent basis for N fields with N energy levels, the multi-photon resonance and phase matching conditions need to be satisfied. This is also true for a case where the number of fields is larger than the number of available states, which would add more conditions. On the other hand, if there were only N-1 fields with N energy levels, no extra condition would be needed for the existence of the co-rotating frame. This is a very handy result: simply by counting the number of field components and available energy states, we can determine whether a steady state solution exists for the problem.
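This counting argument can be checked directly on the linear system above. The short Python sketch below (with an arbitrary N and random phases, used purely for illustration) confirms that with N fields a solution for the α_k exists only when β_N=β_1+β_2, whereas the chain with N-1 fields is always solvable:

import numpy as np

rng = np.random.default_rng(1)
N = 6                                          # illustrative number of levels/fields
beta = rng.uniform(0, 2 * np.pi, N)            # generic phases beta_1 ... beta_N

A = np.zeros((N, N))                           # unknowns alpha_1 ... alpha_N
for k in range(N - 1):
    A[k, k], A[k, k + 1] = 1.0, -1.0           # alpha_k - alpha_{k+1} = -beta_k
A[N - 1, 0], A[N - 1, 2] = 1.0, -1.0           # alpha_1 - alpha_3     = -beta_N

def consistent(M, b):                          # solvable iff rank(M) == rank([M|b])
    return np.linalg.matrix_rank(np.column_stack([M, b])) == np.linalg.matrix_rank(M)

print(consistent(A, -beta))                    # False: N fields with generic phases
beta[-1] = beta[0] + beta[1]                   # impose multi-photon resonance / phase matching
print(consistent(A, -beta))                    # True
print(consistent(A[:N - 1], -beta[:N - 1]))    # True: only N-1 fields, no extra condition

The rank test makes explicit that the last equation is the sum of the first two chain equations, so consistency of the phases is exactly the multi-photon resonance and phase matching condition.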
§.§ Assumption and limitation of the model
In our model, for the case of lin⊥lin fields, i.e., ϵ=0, the total number of active transitions is 14 and 23 for the 13-level and 16-level systems, respectively. This would mean that the system may not have steady-state solutions for arbitrary phases of the light fields. However, in an experiment, one of the light fields is in scanning mode (its frequency changes in time). For this case of a scanning field, if the condition τ_coh v_s<Δ_g is satisfied, we assume that the number of active field components at a given time is reduced. Here v_s is the scan speed, τ_coh is the ground-state coherence lifetime, and Δ_g=μ_B B/2ħ is the Larmor frequency associated with the ground-state electrons. Under this condition, no multi-photon resonance or phase matching between the light fields is required for the existence of a steady state. Hence, we assume that H_I can be written in a time- and phase-independent basis. The condition τ_coh v_s<Δ_g also allows us to treat the system as an open one, where the repopulation matrix is not needed for the steady-state condition. Our model works for small values of the magnetic field, up to 30 G for the D_2 line. For larger values, the Zeeman states of other hyperfine states need to be considered to reproduce the experimental observations.
In the next section, we present an experiment performed in the D_2 line of ^87Rb at room temperature, where both the 13-level and 16-level results are observed, but at two different values of the magnetic field. The results for the 13-level system, where only the Zeeman levels of F'=2 are included, match the experimental ones at low magnetic fields. At higher values of the magnetic field, the experimental spectra start showing the features of a 16-level system. The amplitudes and widths of the two-photon resonances do not exactly match the numerical results, which shows the limitations of our model. However, the nature and overall line shape of the experimental resonances are in good agreement with the numerical results.
§ EXPERIMENT ON D_2 LINE OF ^87RB
§.§ Experimental setup
The optical setup shown in Fig.<ref> was used for obtaining the experimental results. Both the laser sources (probe and pump) are external-feedback diode lasers with a central wavelength of 780 nm and a linewidth of 1 MHz. The experiments were performed in the D_2 line of ^87Rb atoms. Using the laser locking electronics (Toptica DigiLock 110), the probe laser was locked to the F=1 → F'=2 transition and the frequency of the pump laser was scanned across the hyperfine transitions. Optical isolators were used to avoid any back reflection. The beam profiles were made nearly circular using anamorphic prism pairs. The beam diameter of both lasers is 0.2 cm. The signals produced with saturated absorption spectroscopy (SAS) were used as a frequency reference scale for both lasers. Both lasers were also passed through a Faraday rotator for pre-control of the polarisation before being combined on a polarizing beam splitter (PBS). The power of each laser was controlled using a combination of a half-wave plate (HWP) and a PBS. The ellipticity of both lasers is tuned simultaneously using a quarter-wave plate (QWP). This technique helps in maintaining the orthogonal polarisation of the two laser fields while tuning the ellipticity. The experimental cell is a Rb vapor cell with dimensions of 25 mm x 75 mm. It was maintained at room temperature with a temperature controller system. A solenoid with a length of 10 cm and a diameter of 6 cm, wound with Standard Wire Gauge (SWG) 16 wire, is used for generating the magnetic field. It consists of 590 turns and was subjected to an applied current between 0 and 1.5 A. The entire chamber is enclosed in μ-metal sheets to nullify the earth's magnetic field. The detected signal is measured using a fiber-coupled ultra-fast photodetector and observed on a 5-channel oscilloscope.
§.§ Results
In the experiment, the probe is locked to the |F=1⟩→|F'=2⟩ transition, and the pump is scanned around |F=2⟩→|F'=2⟩. The probe and pump intensities are kept fixed at 3.3 mW/cm^2 and 161.2 mW/cm^2 throughout the study.
The experimental results obtained for a magnetic field of 17 G are shown in Fig. <ref>. As discussed in the previous section, the ellipticity (ϵ) of both beams is varied simultaneously using a quarter-wave plate. For ϵ = 0, three transmission peaks are observed (curve-III), located at δ_c=0, ± 2Δ_g (Δ_g=μ_B B/2ħ).
As discussed earlier, altogether 10 Λ systems are responsible for these three transmission peaks. By tuning the ellipticity to +30° (ϵ=π/6), one of the peaks (+2Δ_g) converts to absorption (curve-II). By further changing the ellipticity to +45° (ϵ=π/4), only two transmission peaks are observed (curve-I). For -30° (ϵ=-π/6), the observation is similar to that for +30°; however, the absorption peak now appears on the red-detuned part of the spectrum (curve-IV). Observations similar to +45° are made when the ellipticity is tuned to -45° (curve-V); in this case too, the only difference is in the position of the two peaks. For ellipticities higher than 45°, the features repeat themselves, maintaining the symmetry observed for positive and negative values of the ellipticity. The results are in complete agreement with the numerical results for the 13-level system. This shows that the effect of the Zeeman states of |F'=1⟩ is negligible at this value of the magnetic field for the D_2 line. We conclude that the transmission spectra of the D_1 and D_2 lines of ^87 Rb would be the same at low values of the magnetic field.
The experimental observations for a magnetic field of 45 G are shown in Fig.<ref>. The reason for increasing the magnetic field was to create a situation where the effects of the other hyperfine level become more prominent. For ϵ=0, three transmission peaks are observed, similar to the low-magnetic-field case (curve-I); however, the amplitudes of these three peaks are different. For an ellipticity of +30°, an absorption peak on the blue-detuned part and two transmission peaks are observed, as shown in curve-II of Fig.<ref>. Two transmission peaks are visible for both cases of circularly polarised light fields, i.e., for +45° and -45°, as shown in Fig.<ref> (curve-I and V respectively). The main difference compared to the low-magnetic-field observations is the absence of symmetry with respect to the sign of the ellipticity. The results produced here for B=45 G are in agreement with the 16-level model system. Hence, we conclude that for our model system of the D_2 line of ^87 Rb at higher magnetic field, the nearby Zeeman states of F'=1 actively participate in the two-photon processes.
§ CONCLUSION
In conclusion, our numerical work focused on a large Zeeman manifold within an electromagnetically induced transparency (EIT) medium, using the D_1 and D_2 lines of ^87Rb as a model system. We investigated two different models comprising 13 and 16 energy levels, respectively, in the context of pump-probe spectroscopy with varying polarizations of the light fields. By applying a longitudinal magnetic field and manipulating the ellipticity of the light fields while maintaining orthogonal polarizations, we made several key observations. For the 13-level system, at a specific ellipticity value of ϵ = +π/6 (+30°), our results demonstrated the coexistence of EIT and two-photon absorption in the probe spectra.
Notably, the position of the two-photon absorption peak shifted from the blue-detuned region to the red-detuned region, and vice versa, upon flipping the sign of the ellipticity. This intriguing symmetry was absent in the 16-level system, where two-photon absorption only occurred at positive ellipticity values. We attributed this behavior to the optical anisotropy arising from the unequal population distribution in the ground Zeeman levels due to the change in ellipticity. Our findings emphasize that the presence of a longitudinal magnetic field and the modulation of the light polarization ellipticity induce optical anisotropy, resulting in the absorption of a weak probe. We highlighted that for a large number of interacting states and various field components, the existence of a steady state crucially depends on multi-photon resonance and phase matching conditions. However, in our model, we did not require these conditions, suggesting a departure from the typical assumptions in similar systems. Furthermore, to validate our numerical results, we conducted experiments at two distinct magnetic field strengths in the D_2 line of ^87Rb. The experimental spectra closely matched our numerical predictions, confirming the need for a 16-level (or higher) system to accurately represent the response of the D_2 line of ^87Rb at larger magnetic field values. We conclude that, for magnetic fields up to 20 G, both the D_1 and D_2 lines of ^87Rb exhibit identical spectra, effectively described by the 13-level model. Finally, we discussed the assumptions and limitations of our model, acknowledging its simplifications and potential deviations from experimental results.
In summary, our study sheds light on the intricate interplay between Zeeman sublevels, magnetic fields, and light polarization in EIT systems. The observed phenomena provide valuable insights for designing and optimizing magnetic-field-direction-dependent optical switching, where the magnetic field direction can be used as a knob to switch between subluminal and superluminal resonances <cit.>. This work can also contribute to improving the sensitivity of EIT-based magnetometers <cit.> by controlling the influence of nearby hyperfine states. Additionally, our work has potential applications in the field of optical quantum gates <cit.>. Further investigations considering additional energy levels and exploring various experimental conditions would be beneficial for a comprehensive understanding of these complex systems.
|
http://arxiv.org/abs/2307.04267v2 | 20230709214158 | Phase transitions in sampling and error correction in local Brownian circuits | [
"Subhayan Sahu",
"Shao-Kai Jian"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech"
] |
Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5; Department of Physics and Engineering Physics, Tulane University, New Orleans, Louisiana, 70118, USA
We study the emergence of anticoncentration and approximate unitary design behavior in local Brownian circuits.
The dynamics of circuit averaged moments of the probability distribution and entropies of the output state can be represented as imaginary time evolution with an effective local Hamiltonian in the replica space.
This facilitates large-scale numerical simulation of the dynamics of such circuit-averaged quantities in 1+1d using tensor network tools, as well as the identification of the various regimes of the Brownian circuit as distinct thermodynamic phases.
In particular, we identify the emergence of anticoncentration as a sharp transition in the collision probability at log N timescale, where N is the number of qubits.
We also show that a specific classical approximation algorithm has a computational hardness transition at the same timescale.
In the presence of noise, we show there is a noise-induced first order phase transition in the linear cross entropy benchmark when the noise rate is scaled down as 1/N.
At longer times, the Brownian circuits approximate a unitary 2-design in O(N) time.
We directly probe the feasibility of quantum error correction by such circuits, and identify a first order transition at O(N) timescales.
The scaling behaviors for all these phase transitions are obtained from the large scale numerics, and corroborated by analyzing the spectrum of the effective replica Hamiltonian.
Phase transitions in sampling and error correction in local Brownian circuits
Shao-Kai Jian
August 12, 2023
=============================================================================
§ INTRODUCTION
Random quantum circuits (RQC) play a pivotal role in both quantum dynamics theory and quantum information theory, offering insights into fundamental aspects such as quantum chaos, out-of-time-order correlation functions, and entanglement entropy <cit.>.
Closely related to RQC, random tensor networks serve as a valuable tool for investigating the AdS/CFT correspondence, a theory aiming to understand quantum gravity through quantum entanglement <cit.>.
Additionally, random quantum circuits find extensive applications in quantum information theory, including quantum advantage <cit.>, quantum error correction <cit.>, etc.
Random circuits are expected to be a toy model capturing the following properties of generic quantum circuits: they output states of high complexity and generate maximal entanglement between initially disconnected regions.
An important question is characterizing the time (depth) required for achieving the high complexity.
How do we characterize the complexity of random circuits? In this work, we focus on two distinct features: anticoncentration and unitary design.
Consider a circuit C acting on a simple initial state (the product state of |0⟩ on all qubits); the output state is measured in the computational basis to obtain a distribution over measurement outcomes, p_C(s) = |⟨s|C|0⟩|^2.
Anticoncentration is the property that p_C(s) is well spread over all bitstrings s.
Certifying that the circuit is anticoncentrated is crucial in guaranteeing that the RQC simulation is classically hard, and is a promising route towards demonstrating quantum advantage <cit.>.
At long enough depths, RQC has a stronger notion of complexity: it becomes an approximate unitary design.
A unitary ensemble is said to be a k-design if it reproduces the first k moments of the global Haar random ensemble.
In particular, ensuring that a RQC has achieved the 2-design property is enough for the RQC to be maximally decoupling.
Consider a system A, initially maximally entangled with a reference R, is subjected to a circuit C, before being coupled to an environment E.
The initial encoding via C is said to have the decoupling property if the joint density matrix on R∪ E is approximately factorizable ρ_RE≈ρ_R⊗ρ_E.
This can also be associated with the RQC dynamically generating a quantum error correcting code <cit.>.
Several avenues of research on RQC have established that anticoncentration and unitary design occur at parametrically distinct timescales.
Suppose we consider circuits with spatial local connectivity in d dimensions.
Past research has shown that ensembles of RQC with Haar random local gates achieve anticoncentration and unitary design in O(log N) <cit.> and O(N^1/d) <cit.> timescales, respectively, where N denotes the number of qubits.
Note that both anti-concentration and 2-design property are diagnosed by non-linear properties of the quantum state generated by the circuits.
This makes numerically simulating these properties for local Haar random circuits hard and limited to modest system sizes and for short times.
Hence, much of the research on RQC has depended on proving analytical bounds, classically simulable Clifford circuits, and perturbations around semi-classical limits, such as large local Hilbert space dimensions.
In this work, we provide a minimal model which allows us to do efficient and guaranteed numerical simulation of the quantum informational quantities probing anticoncentration and 2-design property of large-sized random circuits using tensor network technology.
We take the approach of directly representing the informational quantities averaged over the circuit ensemble as a linear observable in a replicated Hilbert space.
Here, replicas are simply exact copies of the original system, and the informational observables probe the correlation between different replicas.
We study a particular ensemble of RQC, namely local Brownian circuits <cit.>.
These Brownian qubit models can be defined in any graph where each vertex hosts a qubit, with nearest neighbor Brownian interaction generating the unitary evolution.
Remarkably, the real-time evolution of circuit averaged non-linear observables of the density matrix can now be realized as imaginary time evolution in the replica space.
The Hilbert space for k replicas is simply the combination of a forward contour and a backward contour for real-time evolution for each replica; so the local Hilbert space encompasses 2k spins.
After averaging of Brownian couplings, the quantum dynamics reduces to a Hermitian replica qubit Hamiltonian with the same locality properties as the initial interaction graph.
This model not only establishes a clear mapping between various quantum information quantities and those of a quantum spin model, but also transforms the problem of quantum dynamics into a thermodynamic problem.
Furthermore, imaginary time evolution with local Hamiltonians is guaranteed to be efficient in 1d using simple matrix product state and Time Evolving Block Decimation (TEBD) algorithms <cit.>.
This allows us to perform large-scale simulations of these informational quantities in 1+1 dimensional circuit.
As an example, we can simulate the averaged Rényi-2 entanglement properties of a Brownian circuit on N∼ O(100) qubits for t∼ O(N) depths in a few minutes on a standard laptop.
The effective Hamiltonian approach also provides a statistical mechanical description of different regimes of RQC as distinct `phases', separated by phase transitions.
These phases can be described within a generalized Landau framework involving multiple replicas, where the relevant symmetry is the replica permutation symmetry <cit.> (when we introduce multiple identical copies of the system, they can be permuted amongst each other without changing the effective description).
Specifically, in the two replica scenario that we focus on in this work, the effective Hamiltonian has a ℤ_2 symmetry corresponding to a relative swap between the two real-time contours, which turns out to be the relevant symmetry for non-linear observables of the density matrix.
This effective Hamiltonian is essentially a ℤ_2 Ising model in the replica space, and the phases of quantum information and their phase transitions are associated with the various phases and critical properties of this Ising model.
Using large scale numerics of the Brownian circuit model we can directly probe the dynamical properties of the quantum informational quantities, and identify the saturation to anticoncentration (at ∼log N depth) and 2-design property (at ∼ N depth) of the RQC as sharp transitions, confirmed by careful finite-size scaling of the numerical data.
This can be understood analytically by investigating the spectral properties of the effective Hamiltonian.
The anticoncentration transition can also be directly associated with a transition in the computational hardness of classically simulating the output distribution.
To show this, we demonstrate that a specific algorithm for simulating the output distribution <cit.> undergoes a hardness transition at ∼log N depth.
In order to study the 2-design transition, we focus on investigating the feasibility of the Brownian circuit as a quantum error-correcting code, by directly simulating a quantity akin to the mutual information between the reference and environment in the decoupling setup, named `Mutual Purity' <cit.>.
The mutual purity is a 2-replica quantity, and has recently been shown to provide a bound for the error correction capabilities of RQC in <cit.>.
We show that the mutual purity undergoes a first order transition in O(N) time, after which the Brownian circuit approximates the global Haar random unitary for coding purposes.
This coding transition is a first order pinning transition, driven by boundary conditions determining the mutual purity, akin to <cit.>.
Furthermore, the mutual purity contributes to a bound for the failure probability for correcting errors after the encoding by the RQC.
By numerically computing the mutual purity for different error models after the 2-design transition, we can also find a first order threshold transition for the code distance.
As mentioned earlier, sampling of RQC outcome states is one of the most promising routes towards demonstration of quantum advantage in near term quantum devices <cit.>.
However, real quantum devices suffer from noise.
In order to benchmark the noisy quantum device, an estimate of the fidelity of the output state is desirable.
One proposal for an efficient estimate for the fidelity is the linear cross-entropy benchmark χ_XEB, and a high score in this benchmark suggests that the RQC simulation is classically hard <cit.>.
However, it has recently been understood that with local noise models, there is a noise-induced phase transition (NIPT) in the linear cross entropy benchmarking <cit.>.
In the weak noise regime, χ_XEB provides a reliable estimate of fidelity, and in the strong noise regime, it fails to accurately reflect fidelity.
Furthermore, this implies that in the strong noise regime, classical simulation can yield a high score in the cross-entropy benchmark <cit.>, without necessarily solving the sampling task.
The noise model can be incorporated in our Brownian circuit setup, where the noise serves as an explicit replica-permutation symmetry breaking field <cit.>.
Using a combination of numerical and analytical tools, we characterize the NIPT in benchmarking by identifying it as a first order phase transition in the effective Hamiltonian picture.
§.§ Main results and outline of paper
We first briefly summarize the results of the paper.
The main results of the paper are represented in Fig. <ref> and Table <ref>.
* Anticoncentration: We probe anticoncentration in the 1+1d Brownian circuit U by computing the `collision probability' <cit.>, defined as the circuit averaged probability that two independent samples of the RQC (acting on the |0^⊗ N⟩ state of N qubits) produce the same result, defined as Z = 𝔼∑_x|⟨x|U|0^⊗ N⟩|^4, where the averaging ∼𝔼 is done over all realizations of the circuit.
In the context of the effective Ising Hamiltonian (H_eff) description, we demonstrate that Z equals the transition amplitude between the imaginary-time evolution of the initial state and a quantum paramagnetic state (defined in a later section).
The imaginary time evolution gradually projects the initial state onto the ground state of H_eff, which corresponds to Z∼ 2^-N.
However, in finite time t, excited states contributions result in Z = 2^-N + S_Δ e^- Δ t, where Δ (S_Δ) denotes the energy gap (entropy) of the excitation [In a one-dimensional chain with local couplings, the elementary excitation manifests as a domain wall, with a finite gap independent of the system size and an entropy proportional to the system size].
The anticoncentration transition thus occurs at t = (1/Δ)log S_Δ∼log N, representing a depth-induced computational transition.
The log N results arise from the nature of the elementary excited states of the Ising model, and can be confirmed by direct large scale simulation of the imaginary time evolution.
* Computational Hardness transition: We probe the computational hardness of classically simulating the probability distribution in the measurement outcome in the earlier setup, i.e. p_U(x) = |⟨x|U|0^⊗ N⟩|^2.
By studying the Rényi-2 version of conditional mutual information (CMI) of p_U(x) using numerics of the imaginary time evolution, we probe the hardness of a specific classical algorithm (`Patching algorithm') for approximately simulating the output distribution as introduced in <cit.>.
We find that the CMI undergoes a phase transition at O(log N) time, with the same scaling behavior as the collision probability, signalling a computational hardness phase transition at the same depth.
* Phase transition in cross-entropy benchmarking of noisy Brownian circuits: Here we consider the following setup of two copies of the Brownian circuit, one that is affected by noise (denoted by the noisy channel 𝒩), and the other copy undergoes the noise-free Brownian circuit.
We can now update the effective Hamiltonian with explicit noise in one of the replicas, H_eff→ H_eff^'. We can compute the fidelity F = Tr[𝒩(ρ)ρ] of the noisy simulation by doing imaginary time evolution with H_eff^' (with local noise models, H_eff^' remains local). We also compute the linear cross entropy benchmark, defined as χ_XEB = 2^N ∑_x p(x) q(x) - 1, where p(x) and q(x) represent the output distributions in the noise-free and noisy cases respectively <cit.>.
In H_eff^', noise explicitly breaks the Ising symmetry and subsequently pins the Ising spins.
Consider a local (unital) noise model, with λ strength for each qubit (to be explicitly defined later).
Noise generically undermines the ferromagnetic phase that leads to anticoncentration, and leads to erosion of the quantum advantage.
This holds true for constant rate noise λ∼ O(1).
Through the mapping to the quantum Ising model, we discover that noise behaves as a relevant perturbation with a scaling dimension of one.
Therefore, when the noise rate scales inversely with respect to the size of the chain, λ∼ 1/N, we get a noise-induced computational transition at some critical λ^*∼ O(1/N).
This transition essentially resembles a field-induced first-order transition and conforms to finite size scaling with ν = 1/2, which we confirm numerically.
Moreover, if the noise rate is scaled down faster (more slowly) than 1/N, the noise is deemed irrelevant (relevant).
This result is consistent with recent results on Noise-induced phase transitions in cross entropy benchmarking <cit.>.
We also study whether this transition signals a transition in the computational hardness in the simulation of noisy Brownian circuits.
By studying the Rényi-2 CMI of p_𝒩(U)(x), we find that it does not undergo a hardness transition with depth for large enough depths, and actually exponentially decays with time. This suggests that the 1+1d noisy random circuits are efficiently simulable in the long-time limit, even in the presence of infinitesimal scaled noise.
* Coding transitions: We encode some local information (a reference qubit R) in the entire system A using the Brownian circuit, and probe the effectiveness of this encoding as a quantum error correcting code.
After encoding, the state on A is affected by noise, which can be identified as a unitary coupling with the environment E.
Mutual purity ℱ_RE <cit.> is a two replica quantity which upper bounds the trace distance between the initial encoded state and the error affected encoded state after error correction using a recovery channel <cit.>.
From the effective Hamiltonian perspective, the mutual purity can be represented as a transition probability between two ferromagnetic states.
In particular, we find that at short times the mutual purity decays exponentially, which can be identified with domain wall configurations pinned between the initial and final states; while after t ∼ O(N) time the domain walls get depinned, resulting in the saturation of the Mutual Purity to a global Haar value, i.e. realizes an approximate 2-design.
Using large-scale numerics, we are able to directly probe this transition to a 2-design as a first order depinning transition.
Furthermore, since the mutual purity determines the feasibility of error correction after the application of noise, we find a first order threshold transition in the fraction of qubits which are affected by noise.
The critical fraction can be identified as a lower bound for the `code distance' of the Brownian circuit as a quantum error correcting code.
The paper is organized as follows.
Section <ref> presents an introduction to the Brownian circuit model and a derivation of the effective Hamiltonian for k=2 replicas.
In section <ref>, we describe the symmetries of the effective Hamiltonian, and provide heuristic description of the phase diagrams.
In section <ref> we discuss the anticoncentration and computational hardness transition.
In section <ref> we investigate the noise-induced phase transition in benchmarking noisy Brownian circuits.
In section <ref> we study the error-correcting properties of the Brownian circuit and probe the transition to an approximate 2-design.
We conclude by discussing the implications of this work and future directions in section <ref>.
§ LOCAL BROWNIAN CIRCUITS
We consider a Brownian circuit on N qubits in a chain, with the Hamiltonian
H_t = ∑_⟨ i,j⟩^N∑_α,β J_t,ij^αβ σ_i,ασ_j,β,
where α, β label the Pauli indices of the local Pauli matrices σ_i, and the interaction acts between nearest-neighbor pairs ⟨ i,j ⟩.
J_t,ij^αβ is a normal random variable uncorrelated in time, defined via the following properties
𝔼[J_t,ij^αβ] = 0
𝔼[J_t,ij^αβJ_t^',ij^α^'β^'] = J (δ_tt^'/δ t) δ_αα^'δ_ββ^'.
𝔼 denotes the average according to the distribution.
§.§ Effective Hamiltonian description
We integrate over the random couplings to get an effective Hamiltonian in the replica space.
To this end, let's first consider a unitary evolution of a density matrix, ρ' = U ρ U^†.
Explicitly writing out the indices, it is
ρ'_a'b' = ∑_a,b U_a' aρ_ab U^†_bb' = ∑_a,b U_a' a U^∗_b'bρ_ab.
In the second term, we transpose U^†, and use the fact that (U^†)^T = U^∗.
Viewed as a tensor, the time evolution can be expressed by an operator U⊗ U^∗ acting on a state ∑_abρ_ab |a⟩⊗ |b ⟩.
This is essentially the Choi–Jamiołkowski isomorphism (the operator-state mapping) <cit.>.
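As a sanity check of this operator-state mapping, the identity vec(UρU^†) = (U⊗U^∗) vec(ρ) can be verified numerically in a few lines; the minimal Python sketch below uses a random 4-dimensional example (the dimension and the random-matrix construction are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
d = 4
Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U = np.linalg.qr(Z)[0]                      # a random unitary
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)                        # a random density matrix

# row-major vectorisation: vec(rho)_(ab) = rho_ab, i.e. sum_ab rho_ab |a> ⊗ |b>
lhs = (U @ rho @ U.conj().T).reshape(-1)
rhs = np.kron(U, U.conj()) @ rho.reshape(-1)
print(np.allclose(lhs, rhs))                # True: rho' = U rho U^dagger  <->  (U ⊗ U^*)|rho>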
Now we can extend this to two replicas.
Since most of our discussion is focused on two replicas, we derive an effective Hamiltonian for two replicas.
Notice that it is straightforward to generalize the derivation to k replicas and to an arbitrary number of qubits on each node <cit.>.
Because the random couplings at different time are uncorrelated, the central quantity is the instantaneous time evolution (for a small time interval δ t) operator for the four contours,
U_1(δ t) ⊗ U_2(δ t)^∗⊗ U_3(δ t) ⊗ U_4(δ t)^∗,
where U_a, a=1,2,3,4 denotes the unitary evolution operator generated by the Brownian spin Hamiltonian U_a(δ t) = e^-i δ t H_t,a acting on the four Hilbert spaces.
It includes two replicas, each of which contains a forward contour a=1,3 and a backward contour a=2,4.
The complex conjugate is due to the Choi–Jamiołkowski isomorphism, as demonstrated above.
The average over the random coupling reads
𝔼[ U_1(δ t) ⊗ U_2(δ t)^∗⊗ U_3(δ t) ⊗ U_4(δ t)^∗]
= ∫ DJ P[J] exp( ∑_a (- i)^aδ t ∑_⟨ i,j⟩∑_α,β J_t,ij^αβτ_i,a^ατ_j,a^β),
where τ_i,1^α = τ_i,3^α = σ_i^α, and τ_i,2^α = τ_i,4^α = (σ_i^α)^∗, α = 1,2,3. Here σ^α denotes the Pauli matrix, and the complex conjugate for the a=2,4 contour is due to the backward evolution.
DJ = ∏_⟨ i, j ⟩∏_α, β dJ_t,ij^αβ, and P[J] denotes the Gaussian distribution specified by (<ref>).
Integrating over the random couplings results in an effective Hamiltonian,
𝔼[ U_1(δ t) ⊗ U_2(δ t)^∗⊗ U_3(δ t) ⊗ U_4(δ t)^∗] = e^-δ t H_eff,
with
H_eff = J/2∑_⟨ i,j⟩∑_a,b (-1)^a+b (τ⃗_i,a·τ⃗_i,b) (τ⃗_j,a·τ⃗_j,b).
This Hamiltonian describes a spin chain with four spins per site, denoted by τ⃗_i,a, a=1,2,3,4.
In the following, we will see that this Hamiltonian can describe various information phases and phase transitions, such as dynamical computational transition, error correcting transition, etc.
Similarly, for a finite time evolution, we have
𝕌≡𝔼[ U_t⊗ U_t^∗⊗ U_t⊗ U_t^∗] = e^-H_eff t.
Here we use U_t = e^-∫ dt H_t to denote the unitary generated by the Brownian circuit for a time interval t.
§.§.§ Numerical Implementation
We simulate imaginary time evolution in the replica Hilbert space using the TEBD algorithm. The local Hilbert space is ℂ_2^⊗ 4, reflecting the two replicas and the two time contours per replica. To simulate exp(-t H_eff) we Trotterise the evolution with time step Δt and take the energy scale J = 1/Δt. This ensures that Δt · H_eff is dimensionless with the energy scale set to 1, and that the evolved time t takes non-negative integer values. All calculations are performed using the TeNPy library <cit.>.
§.§ Replica permutation symmetry
The Hamiltonian Eq. <ref> is invariant under replica re-labelings, and has the symmetry group,
(S_2× S_2)⋊ℤ_2,
where S_2× S_2 is the permutation group on the two replica labels.
The outer ℤ_2 arises from the symmetry of shuffling between the two time-conjugated copies after taking the complex conjugation [Note that since the variance of coupling is independent of α = x,y,z, the resulted Hamiltonian also enjoys a SU(2) symmetry for each site. But our results do not rely on this symmetry.].
Put simply, each of the S_2 transformation swaps τ_i,1^α↔τ_i,3^α or τ_i,2^α↔τ_i,4^α, whereas the ℤ_2 exchanges τ_i,1^α↔τ_i,2^α and τ_i,3^α↔τ_i,4^α simultaneously.
It is easy to see that the Hamiltonian can be brought into a sum of squares,
H_eff = J/2∑_⟨ i, j ⟩∑_α,β( ∑_a (-1)^a τ_i,a^ατ_j,a^β)^2.
Therefore, the eigenvalues are no less than zero.
Two ground states are |id⟩⟩^⊗ N and |swap⟩⟩^⊗ N, where
|id⟩⟩ = 1/2(|0000 ⟩ + |0011 ⟩ + |1100 ⟩ + |1111 ⟩ ),
|swap⟩⟩ = 1/2(|0000 ⟩ + |1001 ⟩ + |0110 ⟩ + |1111 ⟩ ).
Here, we use |0⟩ and |1⟩ to denote the ± eigenstates of the σ_z Pauli operator. The names of the states indicate that |id⟩⟩ is a product of an EPR state of the first and second spins and an EPR state of the third and fourth spins, while |swap⟩⟩ is a product of an EPR state of the first and fourth spins and an EPR state of the second and third spins.
Using the defining property of EPR pairs, namely τ_1 =τ_2 and τ_3 =τ_4 for |id⟩⟩, and τ_1 =τ_4 and τ_2 =τ_3 for |swap⟩⟩, we can see that every square in the Hamiltonian vanishes, so that these two states are ground states with zero energy.
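These statements are easy to verify numerically on a single bond. The following self-contained Python sketch (with J = Δt = 1 purely for illustration) builds the two-site term of the effective Hamiltonian above on the 16-dimensional local spaces, checks that its spectrum is non-negative, and confirms that |id⟩⟩⊗|id⟩⟩ and |swap⟩⟩⊗|swap⟩⟩ are annihilated by it; the exponential exp(-Δt h_bond) of this same bond operator is the type of two-site gate a TEBD update applies:

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
paulis, I2 = [sx, sy, sz], np.eye(2)

def tau(a, alpha):
    # tau_{a}^{alpha} on the 4 contour copies of one site; copies a=1,3 carry sigma, a=2,4 carry sigma^*
    op = paulis[alpha] if a in (0, 2) else paulis[alpha].conj()
    mats = [I2] * 4
    mats[a] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# bond term of the effective Hamiltonian for two neighbouring sites, J = 1
S = [[sum(tau(a, al) @ tau(b, al) for al in range(3)) for b in range(4)] for a in range(4)]
h_bond = (0.5 * sum((-1) ** (a + b) * np.kron(S[a][b], S[a][b])
                    for a in range(4) for b in range(4))).real

print(np.linalg.eigvalsh(h_bond).min())    # ~0: the spectrum is non-negative

idx = lambda bits: int("".join(map(str, bits)), 2)
id_state, swap_state = np.zeros(16), np.zeros(16)
for b in [(0,0,0,0), (0,0,1,1), (1,1,0,0), (1,1,1,1)]:
    id_state[idx(b)] = 0.5                 # |id>>  : EPR pairs (1,2) and (3,4)
for b in [(0,0,0,0), (1,0,0,1), (0,1,1,0), (1,1,1,1)]:
    swap_state[idx(b)] = 0.5               # |swap>>: EPR pairs (1,4) and (2,3)

for v in (np.kron(id_state, id_state), np.kron(swap_state, swap_state)):
    print(np.linalg.norm(h_bond @ v))      # ~0: zero-energy ground states

gate = expm(-1.0 * h_bond)                 # two-site Trotter gate exp(-Delta_t * h_bond), with Delta_t * J = 1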
The permutation symmetry is spontaneously broken by the ground state, and when the low-energy physics is concerned, our model is essentially equivalent to an Ising model.
Notice that we can organize the permutation transformation such that one of them permutes the second and the fourth spins (we denote this by S_2^r: τ_i,2^α↔τ_i,4^α), while the other permutes both the first and the third spins as well as the second and the fourth spins.
Then S_2^r can transform one ground state to the other, and only S_2^r is spontaneously broken.
Our model Eq. <ref> transforms the real-time evolution along the four contours into an imaginary-time evolution that progressively projects onto the ground state subspace of the Hamiltonian described in Eq. <ref>.
This imaginary-time evolution allows us to capture the dynamics of several important quantum information quantities.
One such quantity is the collision probability, which measures the degree of anticoncentration and corresponds in the replica model to the overlap between the time-evolved state and a final state (to be specified later).
The magnitude of this overlap is determined by the excitation gap present in the Hamiltonian Eq. <ref>.
In one-dimensional systems, the elementary excitation takes the form of domain walls, which possess a finite energy gap and exhibit logarithmic entropy.
As a result, the process of anticoncentration requires a timescale proportional to log N, where N represents the system size.
§.§ Effective Hamiltonian with local noise
Since we would also like to investigate the effect of quantum noise, we now consider imperfect time evolution due to the presence of quantum errors.
The unitary time evolution operators are replaced by a quantum channel. A local depolarization channel is given by
ρ→ (1-λ) ρ + λ/3∑_α = 1,2,3σ_i^αρσ_i^α,
where 0 ≤λ < 3/4 for complete positivity.
Using the operator-state mapping, this can be mapped to,
𝒩_i^depol(λ) = (1-λ) I^⊗ 2 + λ/3∑_α=1,2,3σ_i^α⊗ (σ_i^α)^∗,
where I denotes the identity operator.
The noise can induce a transition of random circuit sampling <cit.>.
An observable of such a transition is the cross-entropy benchmarking (XEB), which we will describe in detail later.
For now, let us just mention that XEB contains two distributions: one from a noiseless quantum circuit, and the other from a noisy quantum circuit.
Therefore, we are again concerned with only two replicas.
Without loss of generality, we assume the noisy replica is described by the first two contours a = 1, 2 and the noiseless replica is described by the last two contours a = 3, 4.
We promote the quantum channel to
𝒩_i^depol(λ) →𝒩_i^depol(λ) ⊗ I ⊗ I.
The identity operators on the last two Hilbert spaces reflect the fact that the second replica is noiseless. Thus, the noise acts on the first replica, i.e., on the first and second copies of the Hilbert space.
It is not hard to see that the channel can be equivalently described by a perturbation governed by the following effective Hamiltonian,
H_depol(λ) = 3/(4δ t) log(1/(1- 4λ/3)) ∑_i (1 - 1/3∑_ατ_i,1^ατ_i,2^α).
Here, we assume the noise occurs at each site with the same strength. Since 0 ≤λ < 3/4 for the depolarizing channel, the prefactor is positive.
Essentially, the perturbation explicitly breaks the permutation symmetry.
The state |id⟩⟩^⊗ N is still an eigenstate of these two perturbations with eigenvalue zero, whereas the state |swap⟩⟩^⊗ N obtains a finite positive energy, i.e.,
⟨⟨swap|^⊗ N H_depol(λ) |swap⟩⟩^⊗ N = 3N/(4δ t) log(1/(1- 4λ/3)).
Therefore, the presence of noise effectively lifts the degeneracy between the two ground states and biases the system towards the state |id⟩⟩^⊗ N.
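As a concrete check of this energy shift, the single-site version of the noise Hamiltonian can be verified numerically; the short Python sketch below (with an arbitrary illustrative choice of λ and δt) confirms that |id⟩⟩ is annihilated by the per-site term while |swap⟩⟩ picks up exactly the Zeeman energy ϵ = 3/(4δt) log(1/(1-4λ/3)) ≈ λ/δt for small λ:

import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)

def on_copy(op, a):                     # operator acting on copy a of the 4 contour copies
    mats = [I2] * 4
    mats[a] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

lam, dt = 0.05, 1.0                     # illustrative noise strength and time step
eps = 3 / (4 * dt) * np.log(1 / (1 - 4 * lam / 3))

# per-site H_depol: noise couples the first two contours (sigma on copy 1, sigma^* on copy 2)
h_site = eps * (np.eye(16) - sum(on_copy(s, 0) @ on_copy(s.conj(), 1) for s in (sx, sy, sz)) / 3)

idx = lambda bits: int("".join(map(str, bits)), 2)
id_state, swap_state = np.zeros(16, complex), np.zeros(16, complex)
for b in [(0,0,0,0), (0,0,1,1), (1,1,0,0), (1,1,1,1)]:
    id_state[idx(b)] = 0.5
for b in [(0,0,0,0), (1,0,0,1), (0,1,1,0), (1,1,1,1)]:
    swap_state[idx(b)] = 0.5

print(np.linalg.norm(h_site @ id_state))                              # ~0
print(np.real(swap_state @ h_site @ swap_state), eps, lam / dt)       # eps, eps, ~eps for small lam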
In the regime of low-energy physics, the noise can be treated as an external field that explicitly breaks the Ising symmetry and favors a particular state.
For notational simplicity, we denote the local Zeeman energy as ϵ.
For the depolarization channel, the effective Zeeman energy is ϵ = 3/(4δ t) log(1/(1- 4λ/3)).
When λ is small, we can deduce that ϵ≈λ/δ t.
Note that λ is dimensionless and δ t is the unit of energy.
In the symmetry breaking phase, the external field acts as a relevant perturbation with a scaling dimension of one.
Consequently, a first-order transition occurs at an infinitesimally small noise strength, which is independent of the system size.
However, if the noise strength is appropriately scaled down by a factor of 1/N, which compensates for its relevant scaling dimension, the transition can take place at a finite, size-dependent noise strength given by ϵ∼ 1/N.
This noise-induced transition also manifests as a first-order phase transition.
The first-order transition exhibits a finite size scaling, and is distinguished from a second-order transition <cit.>.
We will confirm it via a systematic finite size scaling.
§ ANTICONCENTRATION AND COMPUTATIONAL HARDNESS OF SAMPLING BROWNIAN CIRCUITS
§.§ Anticoncentration
It is well-known that random circuits generate output states that are anti-concentrated, which roughly means that the probability distribution of the classical bitstrings generated by measuring the output state of a random circuit in the computational basis, is well spread out and not concentrated on a few bit-strings.
Naturally, this also implies that classical sampling of these bitstrings will be hard.
Two key ingredients underpin random circuit sampling. Firstly, anticoncentration asserts that the distribution deviates only slightly from a uniform distribution.
This property is typically required in hardness proofs. However, anticoncentration can be easily attained by applying a Hadamard gate to all qubits. Therefore, we need the second ingredient, randomness, to eradicate any discernible structure in the circuit. Given that randomness is inherent in our model, we are intrigued by whether the distribution exhibits anticoncentration and, if so, at what time (depth) it occurs.
This indicates a transition in computational complexity, wherein the system shifts from a region that is easily achievable by classical means to a region that becomes challenging for classical algorithms.
When the random circuits are generated by a particular ensemble of local quantum gates, a key diagnostic of the complexity of the ensemble is the time it takes to anti-concentrate the output states.
Concretely, we can compute the collision probability, which is defined as the probability that the measurement outcomes of two independent copies of the random circuit agree with each other, i.e. ∑_s p_U(s)^2, where p_U(s) = |⟨s|U|0⟩|^2, for a given bitstring s.
We are interested in the ensemble averaged collision probability, which can be readily expressed as a transition amplitude in the replicated dynamics,
Z = 𝔼_J∑_s p_U(s)^2 = ∑_s ⟨⟨ s^⊗ 4 | 𝕌 | 0^⊗ 4⟩⟩,
where 𝕌 = 𝔼[ U_t⊗ U_t^∗⊗ U_t⊗ U_t^∗] can be represented by an imaginary time evolution with a replica Hamiltonian defined in Eq. <ref>.
We identify the circuit as having reached anticoncentration if Z ≈ c·2^-N, and as not having anticoncentrated if Z ≥ e^(N^c)·2^-N, for some O(1) constant c.
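As an illustration of how the collision probability is estimated in practice, the following Python sketch builds small brickwork circuits of Haar-random two-qubit gates (a stand-in for the Brownian evolution, which is not reproduced here) and averages Z = ∑_s p_U(s)^2 over circuit realizations; at sufficient depth the printed values of 2^N Z should approach ≈ 2, the anticoncentration value discussed below. Depths and sample counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d, rng):
    """A Haar-random d x d unitary via the QR decomposition."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_two_qubit(psi, gate, i, n):
    """Apply a 4x4 gate on neighbouring qubits (i, i+1) of an n-qubit state vector."""
    psi = psi.reshape(2**i, 4, 2**(n - i - 2))
    return np.einsum('ab,xbz->xaz', gate, psi).reshape(-1)

def collision_probability(n, depth, n_samples, rng):
    """Average of Z = sum_s p_U(s)^2 over brickwork circuits of Haar-random gates."""
    vals = []
    for _ in range(n_samples):
        psi = np.zeros(2**n, dtype=complex)
        psi[0] = 1.0
        for t in range(depth):
            for i in range(t % 2, n - 1, 2):        # brickwork layer
                psi = apply_two_qubit(psi, haar_unitary(4, rng), i, n)
        p = np.abs(psi) ** 2
        vals.append(np.sum(p ** 2))
    return np.mean(vals)

for n in (4, 6, 8):
    Z = collision_probability(n, depth=4 * n, n_samples=20, rng=rng)
    print(f"N = {n}:  2^N * Z = {2**n * Z:.3f}")
```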
In Fig. <ref> we study the averaged collision probability in a 1d Brownian circuit by the tensor network simulations.
We find that the Brownian circuit anti-concentrates in logN depth, which is consistent with the fact that local Haar random circuits anti-concentrate in Ω(log N) depth in 1d <cit.>.
Furthermore, in Fig. <ref>b, we show data collapse which is consistent with the following approximate form for the collision probability,
2^N Z = 2+ c_1e^-c_2(t-τ^*log N),
for some O(1) constants c_1 and c_2. This expression can be justified by the effective Hamiltonian picture as follows.
Because 𝕌 = e^-H t, with H given by Eq. <ref>, it effectively projects the initial state |0^⊗ 4⟩⟩ to the ground state, 2^N 𝕌|0^⊗4⟩⟩≈|id⟩⟩^⊗ N + |swap⟩⟩^⊗ N + excitations.
The leading contribution of excitations is given by a single domain wall (since we have used open boundary conditions, a single domain wall is allowed), |DW_k⟩⟩≈|swap⟩⟩^⊗ k⊗|id⟩⟩^⊗ (N-k), k=1,...,N-1.
Therefore, the multiplicity of such an excitation is proportional to N.
The excitation energy Δ, on the other hand, is a constant independent of N, and it contributes to an exponential function e^-Δ t.
Therefore, according to this picture, the prediction for the collision probability reads
2^N Z ≈ 2 + N e^-Δ t = 2 + e^-Δ (t- 1/Δlog N),
where we have noticed that ∑_s ⟨⟨ s | id⟩⟩ = ∑_s ⟨⟨ s | swap⟩⟩ = ∑_s ⟨⟨ s | DW_k⟩⟩ = 1.
This result is consistent with the data collapse.
In particular, it is clear that the transition time log N is due to the entropy of the domain wall excitation.
§.§ Hardness of classical simulation
As a consequence of anticoncentration and randomness, classical simulation of the output probabilities of the Brownian circuits after log N depth is expected to be hard.
In this section, we show that, with respect to a particular algorithm for approximate classical simulation, there is a computational hardness transition at t∼log N depth.
We study the computational hardness of the Patching algorithm introduced in <cit.>.
Heuristically, the algorithm attempts to sample from the marginal probability distribution of spatially separated patches, and then combine the results together.
This succeeds in poly(N) time if the output distribution of the state generated by the circuit has decaying long-range correlations.
Without going into the details of the algorithm itself, we study the condition on the long-range correlations for which the algorithm is expected to successfully sample from the output distribution.
Consider a tripartition of N qubits into A∪ B∪ C, such that dist(A,C)≥ l.
For the output probability distribution p_U(s) = |⟨s|U|0⟩|^2, we consider the conditional mutual information between the regions A and C conditioned on B, as in I(A:C|B)_p = S(AB)_p+S(BC)_p-S(B)_p-S(ABC)_p, where the S(A) refers to the entropy of the marginal distribution of p on the region A.
The output distribution is defined to have f(l)- Markov property if I(A:C|B)_p≤ f(l).
We quote the main Theorem about the condition for successful Patching algorithm from <cit.>:
Patching algorithm succeeds in poly(N) time to sample from a probability distribution arbitrarily close in total variation distance to the exact output distribution p_U(s) of a quantum circuit on N qubits, if p_U(s) has e^-Ω(l) Markov property, for a suitable choice of the length-scale parameter l.
In the local Brownian circuits introduced earlier, we can directly compute the averaged Rényi-2 version of the conditional mutual information (CMI) of the output distribution p_U(x), i.e. I^(2)(A:C|B)_p = S^(2)(AB)_p+S^(2)(BC)_p-S^(2)(B)_p-S^(2)(ABC)_p, as a function of time t.
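For concreteness, the Rényi-2 CMI of a given output distribution can be evaluated directly from its marginals, as in the short Python sketch below. It is written for a single distribution; the paper's quantity is additionally averaged over circuit realizations within the replica formalism, which this sketch does not attempt. The random test state is only a placeholder.

```python
import numpy as np

def renyi2_entropy(p):
    """Renyi-2 entropy of a probability vector: S2 = -log(sum_x p(x)^2)."""
    return -np.log(np.sum(p ** 2))

def marginal(p, keep, n):
    """Marginal of a distribution over n bits given as an array of shape (2,)*n."""
    drop = tuple(i for i in range(n) if i not in keep)
    return p.sum(axis=drop)

def renyi2_cmi(p, A, B, C, n):
    """Renyi-2 conditional mutual information I2(A:C|B) of the bitstring distribution p."""
    S = lambda keep: renyi2_entropy(marginal(p, keep, n).ravel())
    return S(A | B) + S(B | C) - S(B) - S(A | B | C)

# toy example: output distribution of a random 6-qubit state, equal tripartition
n = 6
rng = np.random.default_rng(1)
psi = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
psi /= np.linalg.norm(psi)
p = (np.abs(psi) ** 2).reshape((2,) * n)
A, B, C = {0, 1}, {2, 3}, {4, 5}
print(f"I2(A:C|B) = {renyi2_cmi(p, A, B, C, n):.4f}")
```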
In Fig. <ref>a and b, we study the Rényi-2 CMI for an equal tripartition of the qubit chain (i.e. |A| = |B| = |C| = N/3), and find that there is a transition at log N depth.
In particular, at long times, I^(2)(A:C|B)_p asymptotes to log 2, indicating long-range correlations in the output probability distribution.
At short times, the data is consistent with I^(2)(A:C|B)_p∼ O(e^-N). There is, furthermore, a sharp transition at t∼τ^*log N, with τ^*≈ 1.2.
The data collapses as a function of t-τ^*log N as shown in Fig. <ref>b inset, indicating the same statistical mechanical interpretation as the collision probability.
Even though this result is for the Rényi-2 version of the CMI, and not the actual CMI itself, it provides evidence that the anticoncentration transition corresponds to an actual phase transition in computational hardness of classical estimation of the output probabilities of the random circuit.
§ NOISY BROWNIAN CIRCUITS
Random circuit sampling is widely implemented in experiments to show quantum advantage.
However, sufficiently large noise can diminish the quantum advantage.
A noise-induced phase transition in random circuit sampling was recently reported <cit.>.
For weak noise, the cross-entropy benchmark provides a reliable estimate of the fidelity,
whereas for strong noise it fails to track the fidelity accurately.
§.§ Cross-entropy benchmarking
In random circuit sampling, we start from the product state ρ_0 = (|0⟩⟨ 0|)^⊗ N (the initial state does not really matter; we choose this one for simplicity) and evolve it using the Brownian spin Hamiltonian.
For brevity, we denote the unitary generated by the Brownian spin model as U.
In an ideal case, i.e., there is no noise, the final state is
ρ = U ρ_0 U^†.
A measurement is performed on the computational basis, and this will generate a probability distribution,
p(s) = ⟨ s | ρ | s⟩,
where s denotes the bit string.
In a real experiment, the implementation of the Brownian spin Hamiltonian is not ideal because errors can occur.
In this case, the time evolution of the system is, in general, not unitary and should be described by a quantum channel,
ρ_err = 𝒩 (ρ_0).
Here, 𝒩 denotes the noise channel.
The probability distribution for a bit string s is now given by
q(s) = ⟨ s | ρ_err | s ⟩.
We are interested in the cross entropy benchmarking (XEB), defined as follows,
χ_XEB = 2^N ∑_s p(s) q(s) - 1,
where p(s) is an ideal distribution (which in practice can be estimated by classical simulations), and q(s) is the probability distribution sampled from real experiments.
Since the circuit involves Brownian variables, we consider the average over these random variables, 𝔼(χ_XEB).
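The following Python sketch illustrates how the two quantities are assembled from the ideal and noisy distributions. It uses a Haar-random unitary as a stand-in for the Brownian evolution and, purely for simplicity, applies single-qubit depolarization only once at the end of the circuit; these are illustrative assumptions rather than the model studied in the text.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5
lam = 0.05                              # single-qubit depolarization strength (illustrative)

# ideal output state: a Haar-random N-qubit unitary applied to |0...0>
z = (rng.normal(size=(2**N, 2**N)) + 1j * rng.normal(size=(2**N, 2**N))) / np.sqrt(2)
q_, r_ = np.linalg.qr(z)
U = q_ * (np.diag(r_) / np.abs(np.diag(r_)))
psi = U[:, 0]
p = np.abs(psi) ** 2                    # ideal distribution p(s)

# noisy state: apply a single-qubit depolarizing channel to every qubit of rho = |psi><psi|
rho = np.outer(psi, psi.conj())
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]
for i in range(N):
    ops = [np.kron(np.kron(np.eye(2**i), P), np.eye(2**(N - i - 1))) for P in paulis]
    rho = (1 - lam) * rho + (lam / 3) * sum(O @ rho @ O.conj().T for O in ops[1:])
q = np.real(np.diag(rho))               # noisy distribution q(s)

xeb = 2**N * np.sum(p * q) - 1
fidelity = np.real(psi.conj() @ rho @ psi)
print(f"chi_XEB = {xeb:.4f},  fidelity = {fidelity:.4f}")
```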
§.§ XEB in the replica model
Using the operator-state mapping,
∑_s q(s) p(s) = ∑_s ⟨⟨ s | 𝒩⊗ U ⊗ U^∗ |0 ⟩⟩,
where U is the unitary generated by the Brownian spin model, and 𝒩 denotes the channel generated by both the Brownian spin model and the errors.
For simplicity, we will denote 𝔼[𝒩⊗ U ⊗ U^∗]= 𝕌_err.
And the initial and final states are the same as in the collision probability.
In fact, the collision probability is closely related to the noiseless XEB.
Consider now an imperfect time evolution due to the presence of quantum errors.
After integrating over the Brownian variables, we arrive at the imaginary-time evolution given by
∑_s ⟨⟨ s | 𝕌_err | 0 ⟩⟩ = ∑_s ⟨⟨ s | e^-(H+H'(λ)) t | 0 ⟩⟩,
where H'(λ) is the perturbation caused by the noise.
Examples for the dephasing and depolarizing channels are given in Eq. <ref>.
The average XEB then reads
𝔼 [χ_XEB] = 2^N ∑_s ⟨⟨ s | e^-(H+H'(λ)) t | 0 ⟩⟩ - 1,
On the other hand, the average fidelity is given by
𝔼 [F] = 2^N ⟨⟨swap |^⊗ N𝔼 [𝒩⊗ U ⊗ U^∗] | 0^⊗4⟩⟩
= 2^N ⟨⟨swap |^⊗ N e^-(H+H'(λ)) t | 0^⊗4⟩⟩.
Comparing it with the XEB, we can see that the difference comes from the final state.
As discussed before, the noise lifts the degeneracy and behaves as an external field.
We denote the local Zeeman energy by ϵ.
The Zeeman field is a relevant perturbation even in the symmetry breaking phase.
We will show that the competition between one of the split ground states and the excited states leads to a first-order transition at a finite noise rate ϵ N ∼ const.
We will also perform a finite size scaling analysis to verify such a first-order transition in the following.
We consider the evolution of XEB as a function of time.
In the long-time limit, we expect the time-evolved state is a superposition of the ground state with a few low-lying excitations.
It can be approximately written as
2^N 𝕌_err|0^⊗ 4⟩⟩≈|id⟩⟩^⊗ N + e^-N ϵ t|swap⟩⟩^⊗ N + e^-Δ t∑_k e^-kϵ t|DW_k⟩⟩ + ∑_k e^-(2Δ + ϵ) t|SF_k⟩⟩,
where Δ is the local energy cost of a domain wall, and 2Δ + ϵ is the local energy cost of a spin flip (two domain walls plus its Zeeman energy ϵ).
We have included both domain-wall excitations and local spin flips, |SF_k⟩⟩ = |id⟩⟩^⊗ (k-1)⊗|swap⟩⟩⊗|id⟩⟩^⊗ (N-k).
Note that the domain wall excitation can lead to an extensive energy cost, but we need to include them because the external field scales ϵ∼ 1/N.
Therefore, the average XEB at late time is
𝔼[χ_XEB] = e^-N ϵ t + e^-Δ t∑_k=1^N-1 e^-k ϵ t + N e^-(2Δ + ϵ) t,
On the other hand, the average fidelity is
𝔼[F] = e^-N ϵ t.
In fact, the fidelity is bounded from below by 2^-N.
This is because |id⟩⟩^⊗ N and |swap⟩⟩^⊗ N are orthogonal only in the thermodynamic limit N →∞.
For finite N, their overlap is (⟨⟨id|^⊗ N)(|swap⟩⟩^⊗ N) = 2^-N.
It is clear that for the XEB to well estimate the fidelity, we require e^-N ϵ t≫ e^-Δ t.
If we consider the ratio between them
𝔼 [F]/𝔼 [χ_XEB]≈1/1+ e^-Δ t + Nϵ t .
To leading order in N, there is a noise-induced phase transition at
ϵ_c = Δ/N, separating a weak-noise phase, where the XEB provides a good estimate of the fidelity, from a strong-noise phase, where the two no longer match.
This is consistent with the scaling dimension analysis.
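A quick way to see the crossover implied by these expressions is to evaluate the late-time estimates of 𝔼[χ_XEB] and 𝔼[F] given above as functions of the scaled noise ϵN. The Python sketch below does this for illustrative values of N, Δ and t; it evaluates only the analytic late-time formulas, not the circuit itself.

```python
import numpy as np

def xeb_late_time(N, eps, Delta, t):
    """Late-time estimate of E[chi_XEB] from the domain-wall / spin-flip picture above."""
    k = np.arange(1, N)
    return (np.exp(-N * eps * t)
            + np.exp(-Delta * t) * np.sum(np.exp(-k * eps * t))
            + N * np.exp(-(2 * Delta + eps) * t))

def fidelity_late_time(N, eps, t):
    return np.exp(-N * eps * t)

N, Delta, t = 20, 1.0, 5.0                 # illustrative parameters
for mu in (0.2, 0.5, 1.0, 2.0, 5.0):       # scaled noise eps = mu / N
    eps = mu / N
    F = fidelity_late_time(N, eps, t)
    X = xeb_late_time(N, eps, Delta, t)
    print(f"eps*N = {mu:4.1f}   F / XEB = {F / X:.3e}")
```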
§.§ Noise-induced transition
In the short-time regime, all kinds of excitations contribute to the XEB, and its evolution is non-universal.
A crude estimate of the XEB is given as follows,
𝔼 [χ_XEB] ≈ (1 + e^-(2Δ + ϵ) t )^N - 1 .
This estimate comes from the superposition of all possible spin flips at each site [For a more accurate estimate, we need to rescale N by a factor c_3 < 1. This is because spin flips do not interact with each other only when they are dilute enough. ].
Here Δ is the effective local energy cost of a spin flip.
The XEB is exponentially large in the system size, ∼exp[N e^-(2Δ + ϵ) t], but this contribution decays exponentially fast in time.
The XEB then crosses over to the late-time behavior.
In the long-time limit, since we are in the weak-noise phase, we expect the XEB to match the fidelity.
To verify this, we plot the time evolution of the XEB (solid curves) and the fidelity (dashed curves) in Fig. <ref> for a fixed noise rate.
In the long-time limit, the two curves follow each other closely.
The deviation is larger for bigger N because we keep ϵ fixed.
It is also clear that the XEB curves exhibit a crossover from a short-time non-universal region to a long-time universal region.
In order to show the noise-induced phase transition in our replica model, we plot the time-evolution of the XEB for different noise rates in Fig. <ref>a.
It is clear that when the noise rate is less than λ^∗≈ 0.84/N, the XEB tracks the fidelity very well.
Here the fidelity is shown by a dashed curve.
Note also that the fidelity has a lower bound given by 2^-N.
Next, to connect this to the statistical mechanical model and implement a finite-size scaling analysis, we consider scaling t ∼ N, so that space and time scale equally.
The ratio between the fidelity and XEB is plotted in Fig. <ref>b for different system sizes.
The crossing indicates a transition at λ^∗ N ≈ 0.84.
The inset shows data collapse for different sizes as a function of (λ - λ^∗) N^2, which shows 1/ν = 2.
To understand this exponent, we briefly review the finite size scaling at first-order phase transitions.
The finite size scaling near a first-order phase transition is studied in Ref. <cit.>.
We briefly repeat the argument here.
In a classical Ising model in d dimensional cube with size L^d, the probability distribution of the magnetization P_L(s) in the ferromagnetic phase can be well approximated by a double Gaussian distribution
P_L(s) ∝ e^-(s-M)^2 L^d/χ + e^-(s+M)^2 L^d/χ,
here χ denotes the susceptibility, and M is the average magnetization.
To incorporate the external field, notice that the probability distribution can be expressed as P_L(s) ∝ e^-f L^d, where f is the free energy density.
Near the Ising transition, the free energy density is given by
f = f_0 + r/2 s^2 + u/4 s^4 - sH
= f'_0 + u/4(s^2-M^2)^2 - sH,
where H denotes the external field, M = √(-r/u) is the average magnetization when r<0, and f_0, f'_0 are unimportant constants.
If we approximate the magnetization around ± M, then the double Gaussian distribution reads
P_L(s) ∝ e^- ((s-M)^2 - s χ H)L^d/χ + e^- ((s+M)^2 - s χ H)L^d/χ,
where χ = -r.
It is clear that the distribution will be shifted, and the one near s = M will be amplified.
This probability distribution can serve as a starting point for finite size scaling analysis.
The external field scales as L^-d, implying ν = 1/d.
Now in our analysis, the Hamiltonian Eq. <ref> corresponds to a 1d quantum system or a 2d classical Ising model, which leads to ν = 1/2, consistent with our scaling data collapse in Fig. <ref>b.
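The role of the scaling variable H L^d can be made concrete with the double-Gaussian form above: if the field is scaled down as H ∝ L^-d, the relative weight of the two peaks stays fixed as L grows. The short Python sketch below illustrates this; M, χ and the field amplitude are arbitrary illustrative values.

```python
import numpy as np

def P_L(s, M, chi, H, L, d):
    """Double-Gaussian magnetization distribution with an external field H (normalized on the grid)."""
    Ld = L ** d
    w = (np.exp(-((s - M) ** 2 - s * chi * H) * Ld / chi)
         + np.exp(-((s + M) ** 2 - s * chi * H) * Ld / chi))
    return w / w.sum()

s = np.linspace(-2.0, 2.0, 2001)
M, chi, d = 1.0, 0.5, 2
for L in (8, 16, 32):
    H = 1.0 / L ** d                        # field scaled down by the volume L^d
    p = P_L(s, M, chi, H, L, d)
    print(f"L = {L:2d}:  weight of the +M peak = {p[s > 0].sum():.3f}")
```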
§.§ Hardness of simulating noisy Brownian circuits
As we have described, the linear cross-entropy benchmark can be described in the 2-replica formalism, where the noise acts on only one of the replicas. In this section we briefly comment on the hardness of classically simulating noisy Brownian circuits, by analysing the Rényi-2 conditional mutual information of the output distribution p(s) of the noisy circuit, as in Sec. <ref>. In this formulation the noise acts on both replicas. In Fig. <ref> we plot the Rényi-2 CMI as a function of time for two instances of weak and strong scaled local depolarization channels, with strength λ = μ/N and μ = 0.1, 2.0 respectively. The plots show that the CMI does not asymptote to log 2 as in the noise-free case, and instead ultimately decays as e^-μ t without any signature of a crossing. This suggests that in the long-time limit, even in the presence of scaled noise, the output distribution remains efficiently estimable using the Patching algorithm <cit.>.
These numerical results provide evidence that the noise-induced phase transition in the linear cross-entropy benchmark does not signal a phase transition in the hardness of classical simulability of the output distribution of the noisy random circuits. In fact, in the presence of noise, 1+1d random circuits remain efficiently simulable by the Patching algorithm.
§ QUANTUM ERROR CORRECTING CODES FROM BROWNIAN CIRCUITS
Random circuits scramble local information into global correlations of a state, in a way which is inaccessible to local probes.
As a result of this, the encoded information can be protected from local noise, thereby leading to the notion of random circuits generating quantum error correcting codes <cit.>.
§.§ Decoupling by Random circuits
The intuition as to why random circuits are able to dynamically generate a quantum error correcting code comes from the decoupling principle.
Consider the setup in Fig. <ref>, where initial quantum information is initialized in the entangled state between reference R (code subspace) and part of the system A_1⊂ A, such that the dimensions match, |R| = |A_1|.
Now A is subjected to an encoding through the random circuit U_enc.
Suppose a part of the system A_4⊂ A is subjected to a noise channel 𝒩.
By Stinespring dilation, the noise channel can be identified as a unitary coupling with an environment E, as shown in Fig. <ref>.
If U_enc forms an approximate 2-design, the circuit is able to decouple effectively <cit.>, i.e., the environment E has bounded access to the information encoded in R.
Concretely, let us consider local qubit degrees of freedom, such that the Hilbert space dimension of any set A is d_A = 2^|A|.
Consider the isometric encoding V:ℋ_R→ℋ_A generated by the circuit U_enc, which transforms the basis vectors as follows,
|ϕ_i⟩_A≡ V|i⟩_R = U_enc|i⟩_A_1|0⟩_A_2.
Any density matrix ρ_R of R is encoded as Vρ_RV^†.
Suppose the encoded state is now subjected to noise, resulting in the density matrix ρ_err = 𝒩(Vρ_RV^†).
A convenient probe is the noise-affected encoding of a maximally entangled state between the code subspace R and A_1. By introducing an auxiliary environment E the effect of the noise channel can be represented by a unitary on the combined system and environment, A ∪ E,
|Ψ^'⟩ = 1/√(d_R)∑_i = 1^d_R|i⟩_R U_err(|ϕ_i⟩_A|e_0⟩_E).
Here, d_R refers to the Hilbert space dimension for R; if the local degrees of freedom are q dimensional qudits, then d_R = q^|R|.
By the decoupling theorem, for U_enc which are approximate 2 designs and small enough error, we have a factorized reduced density matrix on R∪ E, ρ_RE^Ψ^'≈ρ_R^Ψ^'⊗ρ_E^Ψ^'.
The time required by random circuits with locality to approximately form a 2 design is upper bounded by O(N^1/d) in d dimensions <cit.>.
A probe of the extent of decoupling is the mutual information <cit.>, I_Ψ^'(R:E) = S(ρ^'_R)+S(ρ^'_E)-S(ρ^'_RE).
A central theorem in quantum error correction is the existence of an optimal recovery channel ℛ that undoes the effect of noise ℛ(ρ_err) = ρ_R, if perfect decoupling has occurred, i.e. I_Ψ^'(R:E) = 0 <cit.>.
This can be generalized to approximate error correction in the presence of approximate decoupling <cit.>.
In particular, the trace distance between the recovered state by a near-optimal recovery channel ℛ, and any encoded state can be bounded by the mutual information computed for Ψ^',
||ℛ(ρ_err)-ρ_R||_1≤(I_Ψ^'(R:E))^1/4.
§.§ Approximate error correction in Brownian circuits
Recently, Ref. <cit.> derived a bound similar to Eq. <ref>, with the right-hand side involving a different entropic quantity in place of the mutual information.
The mutual information is difficult to study analytically because of the replica limit involved in the definition of the von Neumann entropy.
They instead introduce the mutual purity of the noise-affected state in Eq. <ref>, which is defined as
ℱ_Ψ^'(R:E) = Tr(ρ_RE^' 2 - ρ_R^' 2⊗ρ_E^' 2).
They showed that for the same approximate recovery channel as <cit.>, the trace distance between the recovered state and the encoded state can be bounded by the mutual purity,
||ℛ(ρ_err)-ρ_R||_1≤ d_R^5/2d_E^1/2(ℱ_Ψ^'(R:E))^1/4.
We provide a description of the recovery channel and the sketch of the proof of this bound in Appendix <ref>.
This bound can be computed using just a two-replica computation for local Brownian circuits in 1+1d with the imaginary TEBD protocol that we have introduced earlier.
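For very small systems, the mutual purity can also be evaluated by brute force, without the replica mapping, which gives a useful sanity check. The Python sketch below encodes one qubit with a single global Haar-random unitary (a stand-in for a deep Brownian circuit), purifies local depolarization noise with explicit environment levels, and compares the result with the global-Haar expression derived in Appendix <ref>; since only one random encoding is sampled, only rough agreement with the ensemble-averaged formula should be expected. System size, noise strength and the set of noisy qubits are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar(d):
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

N, lam = 4, 0.05                  # system qubits and depolarization strength (illustrative)
noisy = [0, 1]                    # qubits hit by the noise
dA = 2 ** N

# encode: reference R maximally entangled with qubit 0 of A (= A_1), the rest of A in |0>
psi = np.zeros((2, dA), dtype=complex)
psi[0, 0] = psi[1, 1 << (N - 1)] = 1 / np.sqrt(2)   # qubit 0 is the most-significant bit
psi = psi @ haar(dA).T                              # apply the encoding unitary on A

# Stinespring dilation of the depolarization channel: |psi>|e_0> -> sum_m E_m|psi>|m>
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
kraus = [np.sqrt(1 - lam) * I2] + [np.sqrt(lam / 3) * P for P in (X, Y, Z)]

state = psi.reshape(2, dA, 1)                       # axes: (R, A, E)
for j in noisy:
    ops = [np.kron(np.kron(np.eye(2 ** j), K), np.eye(2 ** (N - j - 1))) for K in kraus]
    state = np.stack([np.einsum('ab,rbe->rae', O, state) for O in ops], axis=-1)
    state = state.reshape(2, dA, -1)                # fold the new 4-level environment into E

dE = state.shape[-1]
rho_RE = np.einsum('rae,saf->resf', state, state.conj()).reshape(2 * dE, 2 * dE)
rho_R = np.einsum('rae,sae->rs', state, state.conj())
rho_E = np.einsum('rae,raf->ef', state, state.conj())

F_mut = np.trace(rho_RE @ rho_RE).real - np.trace(rho_R @ rho_R).real * np.trace(rho_E @ rho_E).real
g = (1 - lam) ** 2 + lam ** 2 / 3
F_haar = dA / (dA ** 2 - 1) * (1 - 1 / 4) * (1 - g ** len(noisy))
print(f"brute-force mutual purity : {F_mut:.3e}")
print(f"global-Haar prediction    : {F_haar:.3e}")
```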
§.§ Numerical results in 1d
Using the replicated Hilbert space formalism, we can represent the mutual purity for the 1+1d local Brownian circuit by the following expression,
ℱ^Ψ^'_RE = ⟨⟨ψ_err|𝕌_enc|ψ_in⟩⟩ = ⟨⟨ψ_err|e^-t H_eff|ψ_in⟩⟩,
where 𝕌_enc = 𝔼[U_enc⊗ U^†_enc⊗ U_enc⊗ U^†_enc], and appropriately defined states ψ_in, ψ_err in the replicated Hilbert space, given by,
|ψ_in⟩⟩ = (|swap⟩⟩ - 1/2|id⟩⟩)^⊗ A_1⊗|0000⟩⟩^⊗ A_2
|ψ_err⟩⟩ = 2^N∑_m,n=0^d_E-1 (E_m⊗ E_n^∗⊗ E_n⊗ E_m^∗)|id⟩⟩^⊗ A.
Notice that neither |ψ_in⟩⟩ nor |ψ_err⟩⟩ is normalized.
The operators E_m are non-unitary operators implementing the error on the systems A,
U_err|ψ⟩_A|e_0⟩_E = ∑_m E^m_A|ψ⟩_A|e_m⟩_E.
The derivation is provided in Appendix <ref>. The replica order of the initial state ψ_in reveals that the state breaks the replica symmetry to `swap' in the region A_1 (reflecting the encoded qubit), and preserves the replica symmetry in A_2. As for the final state, ψ_err, the replica order is `id' in the region where the error doesn't act, and `swap' in the region where error acts.
To diagnose the error correcting properties of the Brownian circuit, we need to take specific noise models.
In this section, we focus on local depolarization channels acting on a few qubits, say a fraction p of them.
The depolarization channel of strength λ acts on the density matrix as follows,
𝒩_i(ρ) = (1-λ) ρ + λ/3(∑_α = x,y,zσ_i,αρσ_i,α).
In Fig. <ref>a we present the plot of the mutual purity of the 1+1d Brownian circuit as a function of time, where a single qubit in R is encoded in the system A of size N.
The noise model is chosen to be a depolarization channel with λ = 0.05 acting on a fraction p = 0.25 of the qubits.
It is clear from the plot that the mutual purity initially exponentially decays, until it saturates to the global Haar value which is O(2^-N).
The time taken for the saturation scales as t∝ N.
In Appendix <ref> we derive the explicit result for mutual purity with globally Haar random encoding ℱ_Haar = O(2^-N).
This numerical result demonstrates that the Brownian circuit approximates a two design in O(N) times, and we show in Fig. <ref>b that the 2 design transition occurs after time τ^*N, where τ^*∼ 0.77. The scaling collapse of the transition reveals ℱ/ℱ_Haar∼ f(t-τ^*N).
Furthermore, we can study the mutual purity and the RHS of the quantum error correction bound Eq. <ref> for different values of p.
In Fig. <ref>d we plot the saturation value of the RHS of Eq. <ref> (after the Brownian circuit has run for t = N steps) for different values of p and system sizes N, for a single qubit encoding, and depolarization channel of strength λ = 0.05.
We find that the RHS of the error correction bound undergoes a transition at p^*≈ 0.17, which can be identified as the threshold of this quantum error correction code.
Note the quantum error correction bound in Eq. <ref> guarantees that for p<p^* the Brownian circuit generates a quantum error correction code whose errors are correctable using the recovery channel outlined in Appendix <ref>.
We don't expect this threshold to be tight, as the error correction bound with the mutual purity is expected to be looser than the bound from mutual information.
However, the numerical results strongly indicate that both the quantum error correction transition in the circuit depth t (the time at which the circuit approximates a 2-design) and the threshold transition in p correspond to first-order domain-wall pinning transitions.
§.§ Coding transitions
As discussed in the previous section, the mutual purity is given by the amplitude, ℱ^Ψ^'_RE = ⟨⟨ψ_err|𝕌_enc|ψ_in⟩⟩.
It is convenient to view the space-time layout of the Brownian circuit as a two-dimensional statistical model.
In our setting, this is nothing but the mapping from a d dimensional quantum system to a d+1 dimensional classical system.
It is important to note that in the wave function |ψ_in⟩⟩, each of the |A_1| encoded qubits is mapped to a projection onto |swap⟩⟩, i.e.,
⟨⟨id| (|swap⟩⟩ - 1/2|id⟩⟩) = 0.
The wave function of the remaining |A_2| qubits instead behaves as a free boundary condition, i.e., ⟨⟨id|0000⟩⟩ = ⟨⟨swap|0000⟩⟩ = 1/2.
On the other hand, |ψ_err⟩⟩ can effectively change the boundary condition on the top layer.
In particular, for a single-qubit depolarization channel at site i, the wave function becomes a superposition of the two spin states,
|ψ_err,i⟩⟩ = (1- 4λ/3)^2 |id⟩⟩ + 4λ/3(1 - 2λ/3)|swap⟩⟩.
Note that when λ = 3/4, the wave function is given by |swap⟩⟩ only.
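The coefficients quoted here can be checked directly from the Kraus operators of the depolarization channel, by decomposing the single-site replica boundary state in the (non-orthogonal) {|id⟩⟩, |swap⟩⟩} basis. The Python sketch below does this, using the replica conventions reconstructed above and dropping overall normalization factors; these conventions are assumptions about the original notation rather than taken from the authors' code.

```python
import numpy as np

lam = 0.3
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
E = [np.sqrt(1 - lam) * I2] + [np.sqrt(lam / 3) * P for P in (X, Y, Z)]

# single-site replica states |id>> and |swap>> in H (x) H* (x) H (x) H*
id_ket = np.zeros(16, dtype=complex)
swap_ket = np.zeros(16, dtype=complex)
for a in range(2):
    for b in range(2):
        id_ket[8 * a + 4 * a + 2 * b + b] = 1.0      # |a a b b>
        swap_ket[8 * a + 4 * b + 2 * b + a] = 1.0    # |a b b a>

# replica boundary state produced by the depolarization channel (overall prefactor dropped)
psi_err = np.zeros(16, dtype=complex)
for m in range(4):
    for n in range(4):
        op = np.kron(np.kron(E[m], E[n].conj()), np.kron(E[n], E[m].conj()))
        psi_err += op @ id_ket

# decompose psi_err = c_id |id>> + c_swap |swap>> via the Gram matrix of the two states
G = np.array([[id_ket @ id_ket, id_ket @ swap_ket],
              [swap_ket @ id_ket, swap_ket @ swap_ket]]).real
v = np.array([(id_ket.conj() @ psi_err).real, (swap_ket.conj() @ psi_err).real])
c_id, c_swap = np.linalg.solve(G, v)

print(f"numerics : c_id = {c_id:.6f}, c_swap = {c_swap:.6f}")
print(f"formula  : c_id = {(1 - 4 * lam / 3) ** 2:.6f}, c_swap = {(4 * lam / 3) * (1 - 2 * lam / 3):.6f}")
```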
Therefore, the statistical mechanical picture is that in the symmetry breaking phase of the Ising model, the boundary condition caused by the noise channel |ψ_err⟩⟩ will induce different domains in the bulk.
Namely, these are domains denoted by either `id' or `swap' (equivalently, the two Ising values) as shown schematically in Fig. <ref>(e).
The mutual purity is nonzero only when the encoded qubit is located inside the `swap' domain.
To better understand the coding transition, we perform a finite-size scaling analysis of the mutual purity as a function of depth and discuss the different cases in the following.
* Noisy region overlaps the encoding qubit. As shown in Fig. <ref>a, the reference qubit is encoded on the left-most edge, and the noise occurs in a contiguous region that is also on the left edge.
In this case, there are many domain wall configurations that can contribute to the mutual purity.
To simplify the discussion, we focus on two different domain walls: one ends on the bottom layer, and the other ends on the right edge.
A schematic plot of these two domain walls is shown in Fig. <ref>e.
It is clear that their contributions are
ℱ ∼ e^- Δ t + e^-Δ'(1-p) L
= e^-Δ'(1-p) L (1 + e^- Δ t +Δ'(1-p) L),
where Δ and Δ' denote the tension of the two kinds of domain walls respectively.
L is the length of the chain.
In the short-time region, the first kind of domain wall dominates, while in the long-time region, the second kind of domain wall dominates, and it becomes time independent.
There is an exchange of dominating domain configurations, as demonstrated in Fig. <ref>e.
The transition time is roughly Δ'(1-p)/Δ L ∝ N.
This explains the behavior in Fig. <ref>b and c.
Replacing the contribution from the second kind of domain wall by ℱ_Haar, we obtain ℱ/ℱ_Haar = 1 + e^-(Δ t + logℱ_Haar), which is consistent with the data collapse performed in Fig. <ref>c.
* Random noisy region. In this case, the noise occurs in random positions, as shown in Fig. <ref>.
The picture of exchange of two kinds of domain wall configuration is still correct.
The inset of Fig. <ref> shows consistent data collapse.
* Noisy region does not overlap the encoding qubit.
The encoding qubit and the noisy region is shown in Fig. <ref>.
In the calculation, we set λ = 3/4.
The boundary condition creates a domain wall at the boundary between the noisy qubits and the noiseless qubits.
Due to the causality, the back propagation of the domain wall is constrained in an emergent light cone.
Thus, the mutual purity is zero (up to corrections exponentially small in N) as long as the encoded qubit lies outside the back-propagating light cone of the domain wall.
Moreover, unlike the previous case, where the first kind of domain wall ending on the bottom boundary can lead to a finite mutual purity, here only the second kind of domain wall, which ends on the right boundary, gives a significant contribution to the mutual purity.
This is only possible once the back-propagating light cone hits the right boundary.
Therefore, this indicates a dynamical transition at a timescale that is proportional to the system size.
In Fig. <ref>, the crossing of the mutual purity for different sizes indicates such a transition.
We also performed the data collapse in the inset of Fig. <ref>.
Different from the previous two cases, the scaling is given by (t - τ^∗ N)/√(N).
To understand this, note that the `id' domain back-propagates along the light cone, as shown in Fig. <ref>.
In the symmetry breaking phase, the domain wall can fluctuate away from the light cone [Note that in the symmetry breaking phase, there are still two phases for domain walls, the pinning and depinning phases, separating by a pinning transition. For our case, since the coupling at the top layer is the same as the coupling in the bulk, the depinning transition is the same as the symmetry breaking transition. This means the domain wall is depinned and can fluctuate.].
The average position is √(N) away from the domain wall <cit.>.
Furthermore, the fluctuation of the domain wall is captured by a universal function of α = δ L/ √(N) <cit.>, where δ L denotes the distance away from the light cone.
More concretely, the magnetization profile is a function of α at distances of order √(N) from the light cone (outside this window, the magnetization is given by one of the two spin polarizations).
We expect that this function also captures the mutual purity, because the mutual purity in this case probes the `swap' spin at the right boundary.
Now, since the light cone reaches the right boundary at a time of order N, the mutual purity is then a universal function of (t- N/v)/√(N), where v is the light speed.
This explains the data collapse.
In summary, depending on whether the noise occurs in the encoding qubit, we discover distinct coding transitions.
If the noisy region covers the encoding qubit, there are two kinds of domain wall configurations contributing to the mutual purity.
They are schematically shown in Fig. <ref>e.
The exchange of dominance between the two kinds of domain walls underlies the physics of the coding transition in this situation.
On the other hand, if the noisy region does not cover the encoding qubit, the mutual purity is only nonzero when the noise back propagates to the encoding qubit.
In this case, the transition is induced by the fluctuating domain walls
and captured by a different scaling, N^-1/2, as shown in the data collapse.
§ CONCLUDING REMARKS
In this paper, we have used the effective replica Hamiltonian mapping for local Brownian circuits to probe timescales of complexity growth in random quantum circuits, namely anti-concentration and approximate unitary-design generation.
The effective replica model serves two purposes: we can perform large-scale numerics to simulate several quantum informational quantities for long times, using tensor network tools.
This makes local Brownian circuits efficient numerical tools for studying unitary quantum many-body dynamics.
Secondly, it transforms the question of time-scales in the real-time dynamics into questions of energy-scales in a corresponding thermodynamic problem, which allows us to make analytical progress.
We have shown that local Brownian circuits in 1+1d anticoncentrate in log N time, consistent with earlier results in local Haar random circuits <cit.>.
Furthermore, we have analyzed the success condition of an approximate classical algorithm <cit.> to sample from the output distribution of Brownian circuits, and have identified that there is a sharp transition in the computational hardness of simulation at the same timescale.
The anticoncentration transition arises from the transition in dominance of different low-energy states of the effective Hamiltonian in the collision probability.
In particular, the collision probability (a probe of anticoncentration) gets contribution from eigenstates of the effective Hamiltonian with domain walls and the timescale where this becomes relevant can be related to the logarithm of the number of such domain wall states (which in 1d is ∼ N).
In the presence of noise, we showed that there is a noise-induced phase transition in linear cross entropy benchmark (χ_XEB), as has been recently demonstrated for related noisy random circuit models in <cit.>.
This can be seen as a consequence of explicit replica symmetry breaking in the effective Hamiltonian model in the presence of noise acting on a single replica of the system.
By relating the χ_XEB to specific transition amplitudes in the corresponding replica model, we identify the noise-induced transition in the cross entropy benchmarking as the transition in the dominance of certain domain wall states in the presence of explicit bulk symmetry breaking field.
The critical properties of the transition can be related to those of external field-induced first-order transitions in the classical Ising model in 2d <cit.>.
Finally, we probed the generation of approximate unitary design by Brownian circuits.
By directly probing the quantum error-correcting properties of the Brownian circuit, namely a 2-replica quantity called Mutual Purity <cit.>, we find that the 1+1d Brownian circuits become good quantum error correcting codes in O(N) time.
This transition can be identified as first-order transitions between certain space-time domain wall configurations, which are related to first-order boundary-driven pinning transitions in classical Ising models.
There can be several avenues of future research based on this work.
Here we have demonstrated 1+1d Brownian circuits as a useful numerically accessible tool for studying the dynamics of quantum information.
A natural question is whether the same numerical feasibility extends to higher dimensions.
Here, we speculate on the dynamics and transitions of informational quantities in higher-dimensional Brownian circuits, d>1 (here d is the spatial dimension of the Brownian circuit, N is the total number of qubits, and we also use the volume L^d ∼ N, with L denoting the length scale):
* Collision probability.
It is still true that in higher dimensions, the collision probability at long enough times is dominated by the two ground states and elementary excitations.
Distinct from the situation in 1d, now the lowest excitation is given by local spin flips with an energy that is independent of system sizes.
Nevertheless, the entropy of such a local excitation is proportional to the system size N.
Therefore, we expect that the Brownian circuit anti-concentrates on a log N timescale, similar to that in 1d <cit.>.
* Computational transition in patching algorithm.
The patching algorithm is closely related to the symmetry breaking of the underlying 2-replica spin model <cit.>.
In higher dimensions, d>1, the discrete symmetry can be broken in a finite depth.
This contrasts with the 1d case, where even the discrete symmetry can only be broken in a log depth.
This means that the patching algorithm will fail when the depth of the Brownian circuit exceeds a critical depth that is independent of system sizes.
* Noisy cross-entropy benchmarking.
A noise ϵ behaves as an external field and will lift the degeneracy between |id⟩⟩ and |swap⟩⟩, i.e., |swap⟩⟩ will be suppressed by a factor of e^-N ϵ t.
On the other hand, as discussed in the collision probability, the elementary excitation is given by local spin flips.
With an external field, there is an additional cost, adding up to a factor ∼ e^- z Δ t - ϵ t, where z is the coordination number.
Therefore, we expect that when the noise rate scales as 1/N = 1/L^d, there will be a first-order phase transition with critical exponent given by 1/ν = d+1.
* Coding transition.
The dynamical transition for the Brownian circuit to achieve an approximate unitary 2-design is given by the transition of dominance between two kinds of domain walls.
It should be the same in higher dimensions.
Therefore, the transition occurs on a timescale ∼ L = N^1/d <cit.>.
Next, consider the different regions of noise and the encoding qubit.
We expect the mutual purity transition is similarly given by transitions of domain walls when the noisy region overlaps the encoding qubit.
On the other hand, when the noisy region does not overlap the encoding qubit, we expect the fluctuation of domain wall dictates the coding transition.
Even in 1+1d, this work paves the way towards exploration of quantum information dynamics in symmetric Brownian circuits, by studying directly the spectrum of the effective Hamiltonian in the presence of other circuit symmetries.
Another direction of interest is incorporating the effects of mid-circuit measurements in the entanglement dynamics in Brownian circuit <cit.>.
We will present these results in a future work.
In this work, we have focused on only 2-replica quantities, such as collision probabilities and mutual purities.
In principle, any integer k replica quantities can be represented in the effective Hamiltonian picture, with q^k local Hilbert space dimension (q being the dimension of the original degrees of freedom), which makes numerical methods intractable at large sizes for large k.
An outstanding question is to develop controlled numerical methods or analytical techniques to take the k→ 1 replica limit.
§ ACKNOWLEDGEMENTS
We thank Timothy Hsieh, Tsung-Cheng Lu, Utkarsh Agrawal, and Xuan Zou for useful discussions, and Tsung-Cheng Lu for detailed comments. We have used the TeNPy package for the tensor network simulations <cit.>.
The numerical simulations were performed using the Symmetry HPC system at Perimeter Institute (PI). Research at PI is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities.
S.-K.J is supported by a startup fund at Tulane University.
§ QUANTUM ERROR CORRECTING CODES GENERATED BY BROWNIAN CIRCUITS
§.§ Error correction bound with mutual purity
In this section we will briefly recap the derivation of the error correction bound Eq. <ref> derived in <cit.>. We will assume the setup described in Fig. <ref>. Consider first the encoding of the maximally entangled state between the reference R and system A,
|Ψ⟩_RA = 1/√(d_R)∑_i = 1^d_R|i⟩_R|ϕ_i⟩_A,
before any application of error. After the error channel acts on A, we get the following noise-affected state on RAE,
|Ψ^'⟩_RAE = 1/√(d_R)∑_i = 1^d_R|i⟩_RU_err(|ϕ_i⟩_A|e_0⟩_E) .
The error recovery procedure ℛ <cit.> works by first measuring A using an ideal projective measurement that probes the effect of the error in |Ψ^'⟩_RAE, followed by a unitary update of the state to restore it to |Ψ⟩_RA. We first introduce an orthonormal basis of states in A, |ϕ_ij⟩_A. The projective measurement is given by the projection operators,
Π_j = ∑_i=1^d_R|ϕ_ij⟩_A⟨ϕ_ij|_A.
Depending on the measurement outcome, a corrective unitary U_j,A is applied on system A. In order to study the effectiveness of the recovery channel, we want to control the trace distance between the recovered state and the encoded state, ||ℛ(|Ψ^'⟩⟨Ψ^'|),|Ψ⟩⟨Ψ|||_1. In order to bound this, we introduce a fictitious state |Ψ̃⟩_RAE which aids in the analysis.
Consider ρ̃_RE = ρ^'_R⊗ρ^'_E, where the reduced density matrices ρ^' are obtained from the state |Ψ^'⟩_RAE. We now take the fictitious state |Ψ̃⟩_RAE to be a purification of ρ̃_RE such that the trace distance between |Ψ̃⟩_RAE and |Ψ^'⟩_RAE is minimal. This uniquely defines,
|Ψ̃⟩_RAE = 1/√(d_R)∑_i = 1^d_R∑_j = 1^d_E√(α_j)|i⟩_R|ϕ_ij⟩_A|e_j⟩_E.
Imagine we apply the recovery channel ℛ on |Ψ̃⟩_RAE instead. After the measurement, say the outcome j is obtained. The measured fictitious state is now,
|Ψ̃⟩_RAE^j = 1/√(d_R)∑_i = 1^d_R|i⟩_R|ϕ_ij⟩_A|e_j⟩_E.
We can now choose U_j acting on A such that U_j,A|ϕ_ij⟩_A = |ϕ_i⟩_A, and we get,
U_j,A|Ψ̃⟩_RAE^j = |Ψ⟩_RA|e_j⟩_E.
From the above relation, we find that,
||ℛ(|Ψ^'⟩⟨Ψ^'|),|Ψ⟩⟨Ψ|||_1 = ||ℛ(|Ψ^'⟩⟨Ψ^'|),ℛ(|Ψ̃⟩⟨Ψ̃|)||_1≤|||Ψ^'⟩⟨Ψ^'|,|Ψ̃⟩⟨Ψ̃|||_1.
where the last inequality follows from the monotonicity property of the trace distance. In <cit.>, the last expression is bounded by the Mutual Purity defined in Eq. <ref>[See Appendix. B in <cit.>]. We quote the result in Eq. <ref>.
§.§ Replica computation of mutual purity
We first represent the reduced density matrix of the noise-affected state defined in Eq. <ref>,
ρ^'_RE = Tr_A|Ψ^'⟩⟨Ψ^'| = 1/d_R∑_i,j = 1^d_R|i⟩⟨j|_R⊗Tr_A{U_err(U_enc(|i⟩⟨j|_A_1⊗|0⟩⟨0|_A_2)U_enc^†⊗|e_0⟩⟨e_0|)U_err^†}
The effect of the U_err on the system and the environment can be represented by Kraus operators acting on the system itself,
U_err|ψ⟩_A|e_0⟩_E = ∑_m E^m_A|ψ⟩_A|e_m⟩_E
E_A^m: ℋ_A→ℋ_A, ∑_mE^m†_AE^m_A = 1_A.
The squared density matrix
ρ^'⊗ 2_RE can be represented by a state vector in the replicated Hilbert space ℋ⊗ℋ^*⊗ℋ⊗ℋ^*, and the replicated unitaries 𝕌_enc = U_enc⊗ U^*_enc⊗ U_enc⊗ U^*_enc and 𝕌_err = U_err⊗ U^*_err⊗ U_err⊗ U^*_err as follows,
|ρ^'⊗ 2_RE⟩⟩ = 1/d_R^2∑_i,j,i^',j^'=1^d_R|iji^'j^'⟩⟩_R∑_s,k = 1^d_A⟨⟨ sskk|_A𝕌_err⊗𝕌_enc|iji^'j^'⟩⟩_A_1|0^⊗ 4|A_2|⟩⟩_A_2|e_0^⊗ 4⟩⟩_E.
While this representation looks cumbersome, it makes further computations straightforward. The mutual purity is given by ℱ^Ψ^'_RE = Tr(ρ_RE^' 2 - ρ_R^' 2⊗ρ_E^' 2). Let us compute each term.
It is convenient to express Eq. <ref> pictorially, with rank-4 tensors for each subsystem representing ℋ⊗ℋ^*⊗ℋ⊗ℋ^* (tensor-network diagram not reproduced here).
By unitarity of U_err and U_enc, this diagram simplifies (diagram not reproduced here).
Using this result, we find the corresponding diagrammatic expressions (diagrams not reproduced here).
In general, d_R = 2^|A_1|, where the local Hilbert space dimension is 2. We introduce notation for the following states in the replicated Hilbert space on A,
|ψ_in⟩⟩ = (|swap⟩⟩ - 1/2|id⟩⟩)^⊗ A_1⊗|0000⟩⟩^⊗ A_2
|ψ_err⟩⟩ = 2^N∑_m,n=0^d_E-1 (E_m⊗ E_n^∗⊗ E_n⊗ E_m^∗)|id⟩⟩^⊗ A.
Note that ψ_err is an unnormalized state, and that the expression for ψ_err includes the case of no error acting on a subsystem A_3⊂ A by choosing E_m = 1_A_3⊗ E_m,A_4.
Combining all the expressions, the mutual purity is given by,
ℱ^Ψ^'_RE = ⟨⟨ψ_err|𝕌_enc|ψ_in⟩⟩.
The noise model for probing the extent of error correction enters the computation of mutual purity only in the definition of ψ_err. Consider the local depolarization channel of strength λ on a subset A_4⊂ A, such that number of qubits undergoing noise is |A_4| = p|A|. The Kraus operators for the depolarization channel are,
E_0 = √(1-λ)1, E_x,y,z = √(λ/3)σ_x,y,z.
A local depolarization channel acting on a qubit can be purified using an environment degree of freedom with the 4 levels 0,x,y,z. The corresponding environment dimension is d_env = 4^p|A|.
§.§ Maximal complexity encoding by Haar random circuits
We can compute the mutual purity for any noise model when the encoding unitary U_enc is a global Haar random unitary. Any unitary 2-design will exhibit the same value of the mutual purity. By examining the time it takes the Brownian circuit to reach this value of the mutual purity, we can diagnose the time required for the Brownian circuit to realise a 2-design.
For a global Haar random unitary, we have,
𝔼[U_Haar⊗ U^*_Haar⊗ U_Haar⊗ U^*_Haar] = 1/(d^2-1)(|id⟩⟩⟨⟨id| + |swap⟩⟩⟨⟨swap| - 1/d(|id⟩⟩⟨⟨swap| + |swap⟩⟩⟨⟨id|)).
Using this identity for U_enc in Eq. <ref>, we get the following terms in the expression for mutual purity,
𝔼[ρ_RE^2] = (1/√(d_R))^4 1/d_A^2-1( f_id(λ) d_R + f_swap(λ) d_R^2 - 1/d_A( f_id(λ) d_R^2 + f_swap(λ) d_R )),
𝔼[ρ_E^2 ] = (1/√(d_R))^4 1/d_A^2-1( f_id(λ) d_R^2 + f_swap(λ) d_R - 1/d_A( f_id(λ) d_R + f_swap(λ) d_R^2 )),
where we have introduced the notation,
f_id(λ) = ⟨⟨ψ_err|id⟩⟩ , f_swap(λ) = ⟨⟨ψ_err|swap⟩⟩.
Combining the results, we get,
ℱ^Haar_RE = 1/d_A^2-1( 1- 1/d_R^2) ( f_swap - 1/d_Af_id).
For the depolarization channel of strength λ acting on a fraction p of the qubits, we get the following expression,
ℱ^Haar_RE = d_A/(d_A^2-1)(1-1/d_R^2)(1-g(λ)^p|A|), g(λ) = (1-λ)^2+λ^2/3.
If one qubit is encoded in N qubits, i.e.,
d_R = 2, d_A = 2^N, then for N≫ 1,
ℱ^Haar_RE ≈ 3 · 2^-(N+2)(1-g(λ)^pN). ]
|
http://arxiv.org/abs/2307.04453v1 | 20230710100605 | Tracking the Long-Term GW Phase Evolution for HM Cancri-like Binaries with LISA | [
"Naoki Seto"
] | gr-qc | [
"gr-qc",
"astro-ph.HE",
"astro-ph.IM"
] | |
http://arxiv.org/abs/2307.04617v2 | 20230710150213 | Weakly-supervised positional contrastive learning: application to cirrhosis classification | [
"Emma Sarfati",
"Alexandre Bône",
"Marc-Michel Rohé",
"Pietro Gori",
"Isabelle Bloch"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Weakly-supervised positional contrastive learning
E. Sarfati et al.
Guerbet Research, Villepinte, France LTCI, Télécom Paris, Institut Polytechnique de Paris, FranceSorbonne Université, CNRS, LIP6, Paris, France
Weakly-supervised positional contrastive learning: application to cirrhosis classification
Emma Sarfati1,2
Alexandre Bône1
Marc-Michel Rohé1
Pietro Gori2
Isabelle Bloch2,3
August 12, 2023
==========================================================================================
Large medical imaging datasets can be cheaply and quickly annotated with low-confidence, weak labels (e.g., radiological scores). Access to high-confidence labels, such as histology-based diagnoses, is rare and costly. Pretraining strategies,
like contrastive learning (CL) methods, can leverage unlabeled or weakly-annotated datasets. These methods typically require large batch sizes, which poses a difficulty in the case of large 3D images at full resolution, due to limited GPU memory. Nevertheless, volumetric positional information about the spatial context of each 2D slice can be very important for some medical applications. In this work, we propose an efficient weakly-supervised positional (WSP) contrastive learning strategy where we integrate both the spatial context of each 2D slice and a weak label via a generic kernel-based loss function. We illustrate our method on cirrhosis prediction using a large volume of weakly-labeled images, namely radiological low-confidence annotations, and small strongly-labeled (i.e., high-confidence) datasets. The proposed model improves the classification AUC by 5% with respect to a baseline model on our internal dataset, and by 26% on the public LIHC dataset from the Cancer Genome Atlas. The code is available at: <https://github.com/Guerbet-AI/wsp-contrastive>.
Weakly-supervised learning, Contrastive learning, CT, Cirrhosis prediction, Liver.
§ INTRODUCTION
In the medical domain, obtaining a large amount of high-confidence labels, such as histopathological diagnoses, is arduous due to the cost and required technicality. It is however possible to obtain lower confidence assessments for a large amount of images, either by a clinical questioning, or directly by a radiological diagnosis. To take advantage of large volumes of unlabeled or weakly-labeled images, pre-training encoders with self-supervised methods showed promising results in deep learning for medical imaging <cit.>. In particular, contrastive learning (CL) is a self-supervised method that learns a mapping of the input images to a representation space where similar (positive) samples are moved closer and different (negative) samples are pushed far apart.
Weak discrete labels can be integrated into contrastive learning by, for instance, considering as positives only the samples having the same label, as in <cit.>, or by directly weighting unsupervised contrastive and supervised cross entropy loss functions, as in <cit.>.
In this work, we focus on the scenario where radiological meta-data (thus, low-confidence labels) are available for a large amount of images, whereas high-confidence labels, obtained by histological analysis, are scarce.
Naive extensions of contrastive learning methods, such as <cit.>, from 2D to 3D images may be difficult due to limited GPU memory and therefore small batch size. A usual solution consists in using patch-based methods <cit.>. However, these methods pose two difficulties: they reduce the spatial context (limited by the size of the patch), and they require similar spatial resolution across images. This is rarely the case for abdominal CT/MRI acquisitions, which are typically strongly anisotropic and with variable resolutions. Alternatively, depth position of each 2D slice, within its corresponding volume, can be integrated in the analysis. For instance, in <cit.>, the authors proposed to integrate depth in the sampling strategy for the batch creation. Likewise, in <cit.>, the authors proposed to define as similar only 2D slices that have a small depth difference, using a normalized depth coordinate d∈[0,1]. These works implicitly assume a certain threshold on depth to define positive and negative samples, which may be difficult to define and may be different among applications and datasets. Differently, inspired by <cit.>, here we propose to use a degree of “positiveness”
between samples by defining a kernel function w on depth positions. This allows us to consider volumetric depth information during pre-training and to use large batch sizes. Furthermore, we also propose to simultaneously leverage weak discrete attributes during pre-training by using a novel and efficient contrastive learning composite kernel loss function, denoting our global method Weakly-Supervised Positional (WSP).
We apply our method to the classification of histology-proven liver cirrhosis, with a large volume of (weakly) radiologically-annotated CT-scans and a small amount of histopathologically-confirmed cirrhosis diagnosis. We compare the proposed approach to existing self-supervised methods.
§ METHOD
Let x_t be an input 2D image, usually called anchor, extracted from a 3D volume, y_t a corresponding discrete weak variable and d_t a related continuous variable. In this paper, y_t refers to a weak radiological annotation and d_t corresponds to the normalized depth position of the 2D image within its corresponding 3D volume: if V_max corresponds to the
maximal depth-coordinate of a volume V, we compute d_t=p_t/V_max with p_t∈[0,V_max] being the original depth coordinate. Let x_j^- and x_i^+ be two semantically different (negative) and similar (positive) images with respect to x_t, respectively.
The definition of similarity is crucial in CL and is the main difference between existing methods.
For instance, in unsupervised CL, methods such as SimCLR <cit.> choose as positive samples random augmentations of the anchor x_i^+=t(x_t), where t∼𝒯 is a random transformation chosen among a user-selected family 𝒯. Negative images x_j^- are all other (transformed) images present in the batch.
Once x_j^- and x_i^+ are defined, the goal of CL is to compute a mapping function f_θ: 𝒳→𝕊^d, where 𝒳 is the set of images and 𝕊^d the representation space, so that similar samples are mapped closer in the representation space than dissimilar samples. Mathematically, this can be defined as looking for a f_θ that satisfies the condition:
s_tj^- - s_ti^+ ≤ 0 ∀ t,j,i
where s_tj^-=sim(f_θ(x_t),f_θ(x_j^-)) and s_ti^+=sim(f_θ(x_t),f_θ(x_i^+)), with sim a similarity function defined here as sim(a,b)=a^Tb/τ with τ>0.
In the presence of discrete labels y, the definition of negative (x_j^-) and positive (x_i^+) samples may change. For instance, in SupCon <cit.>, the authors define as positives all images with the same discrete label y. However, when working with continuous labels d, one cannot use the same strategy since all images are somehow positive and negative at the same time. A possible solution <cit.> would be to define a threshold γ on the distance between labels (e.g., d_a, d_b) so that, if the distance is smaller than γ (i.e., ||d_a - d_b||_2 < γ), the samples (e.g., x_a and x_b) are considered as positives. However, this requires a user-defined hyper-parameter γ, which could be hard to find in practice. A more efficient solution, as proposed in <cit.>, is to define a degree of “positiveness” between samples using a normalized kernel function w_σ(d,d_i) = K_σ(d - d_i), where K_σ is, for instance, a Gaussian kernel, with user defined hyper-parameter σ and 0 ≤ w_σ≤ 1. It is interesting to notice that, for discrete labels, one could also define a kernel as: w_δ(y,y_i) = δ(y - y_i), δ being the Dirac function, retrieving exactly SupCon <cit.>.
In this work, we propose to leverage both continuous d and discrete y labels, by combining (here by multiplying) the previously defined kernels, w_σ and w_δ, into a composite kernel loss function. In this way, samples will be considered as similar (positive) only if they have a composite degree of “positiveness” greater than zero, namely both kernels have a value greater (or different) than 0 (w_σ > 0 and w_δ≠ 0). An example of resulting representation space is shown in Figure <ref>. This constraint can be defined by slightly modifying the condition introduced in Equation <ref>, as:
w_δ(y_t,y_i) · w_σ(d_t,d_i)_composite kernel w_ti (s_tj-s_ti) ≤ 0 ∀ t,i,j≠ i
where the indices t,i,j traverse all N images in the batch since there are no “hard” positive or negative samples, as in SimCLR or SupCon, but all images are considered as positive and negative at the same time. As commonly done in CL <cit.>, this condition can be transformed into an optimization problem using the max operator and its smooth approximation LogSumExp:
argmin_f_θ∑_t,imax(0, w_ti{s_tj - s_ti}_j=1, j ≠ i^N) = argmin_f_θ∑_t,i w_timax(0, {s_tj - s_ti}_j=1, j ≠ i^N)
≈argmin_f_θ( - ∑_t,i w_tilog( exp(s_ti)/∑_j ≠ i^N exp(s_tj)))
By defining P(t)={i:y_i=y_t} as the set of indices of images x_i in the batch with the same discrete label y_i as the anchor x_t, we can rewrite our final loss function as:
ℒ_WSP=-∑_t=1^N∑_i∈ P(t) w_σ(d_t,d_i) log( exp(s_ti)/∑_j≠ i^Nexp(s_tj))
where w_σ(d_t,d_i) is normalized over i∈ P(t). In practice, it is rather easy to find a good value of σ, as the proposed kernel method is quite robust to its variation. A robustness study is available in the supplementary material. For the experiments, we fix σ=0.1.
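To make the loss concrete, a minimal PyTorch sketch is given below. It follows Eq. <ref> closely (similarity matrix s, Gaussian depth kernel normalized over P(t), denominator over j ≠ i); details such as excluding the anchor from P(t) and the overall batch normalization are implementation choices assumed here, and the code is a sketch rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def wsp_loss(z, y, d, sigma=0.1, tau=0.1):
    """Weakly-supervised positional contrastive loss.

    z: (N, D) projection-head outputs, y: (N,) discrete weak labels,
    d: (N,) normalized slice depths in [0, 1].
    """
    z = F.normalize(z, dim=1)
    s = z @ z.T / tau                                          # similarity matrix s_ti
    n = z.shape[0]
    eye = torch.eye(n, dtype=torch.bool, device=z.device)

    same_label = (y.unsqueeze(0) == y.unsqueeze(1)) & ~eye     # P(t): same weak label, anchor excluded
    w = torch.exp(-(d.unsqueeze(0) - d.unsqueeze(1)) ** 2 / (2 * sigma ** 2))
    w = w * same_label                                         # composite kernel w_delta * w_sigma
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-12)        # normalize over i in P(t)

    loss = z.new_zeros(())
    for i in range(n):
        keep = torch.ones(n, dtype=torch.bool, device=z.device)
        keep[i] = False
        # log( exp(s_ti) / sum_{j != i} exp(s_tj) ) for every anchor t
        log_prob_i = s[:, i] - torch.logsumexp(s[:, keep], dim=1)
        loss = loss - (w[:, i] * log_prob_i).sum()
    return loss                                                # optionally divide by n to average per anchor
```

Here z would come from the projection head, y from the radiological annotation and d from the normalized depth coordinate; sampling one slice per patient per batch makes y and d well defined per sample.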
§ EXPERIMENTS
We compare the proposed method with different contrastive and non-contrastive methods that either use no meta-data (SimCLR <cit.>, BYOL <cit.>), or leverage only discrete labels (SupCon <cit.>), or only continuous labels (depth-Aware <cit.>). The proposed method is the only one that simultaneously takes into account both discrete and continuous labels. In all experiments, we work with 2D slices rather than 3D volumes due to the anisotropy of abdominal CT-scans in the depth direction and the limited spatial context or resolution obtained with 3D patch-based or downsampling methods, respectively, which strongly impacts the cirrhosis diagnosis, notably based on the irregularity of the liver contours. Moreover, the large batch sizes necessary in contrastive learning cannot be handled in 3D due to limited GPU memory.
§.§ Datasets
Three datasets of abdominal CT images are used in this study. One dataset is used for contrastive pretraining, and the other two for evaluation. All images have a 512x512 size, and we clip the intensity values between -100 and 400.
𝒟_radio. First, 𝒟_radio contains 2,799 CT-scans of patients in portal venous phase with a radiological (weak) annotation, i.e. realized by a radiologist, indicating four different stages of cirrhosis: no cirrhosis, mild cirrhosis, moderate cirrhosis and severe cirrhosis (y_radio). The respective numbers are 1880, 385, 415 and 119. y_radio is used as the discrete label y during pre-training.
𝒟_histo^1. It contains 106 CT-scans from different patients in portal venous phase, with an identified histopathological status (METAVIR score) obtained by a histological analysis, designated as y_histo^1. It corresponds to absent fibrosis (F0), mild fibrosis (F1), significant fibrosis (F2), severe fibrosis (F3) and cirrhosis (F4). This score is then binarized to indicate the absence or presence of advanced fibrosis <cit.>: F0/F1/F2 (N=28) vs. F3/F4 (N=78).
𝒟_histo^2. This is the public LIHC dataset from the Cancer Genome Atlas <cit.>, which presents a histological score, the Ishak score, designated as y_histo^2, that differs from the METAVIR score present in 𝒟_histo^1. This score is also distributed through five labels: No Fibrosis, Portal Fibrosis, Fibrous Speta, Nodular Formation and Incomplete Cirrhosis and Established Cirrhosis. Similarly to the METAVIR score in 𝒟_histo^1, we also binarize the Ishak score, as proposed in <cit.>, which results in two cohorts of 34 healthy and 15 pathological patients.
In all datasets, we select the slices based on the liver segmentation of the patients. To gain in precision, we keep the 70% most central slices with respect to the liver segmentation maps, which were obtained manually for 𝒟_radio and automatically for 𝒟_histo^1 and 𝒟_histo^2 using a U-Net architecture pretrained on 𝒟_radio <cit.>. The pretraining dataset 𝒟_radio presents an average slice spacing of 3.23mm with a standard deviation of 1.29mm; in the x and y axes, the resolution is
0.79mm per voxel on average, with a standard deviation of 0.10mm.
§.§ Architecture and optimization.
Backbones. We propose to work with two different backbones in this paper: TinyNet and ResNet-18 <cit.>. TinyNet is a small encoder with 1.1M parameters, inspired by <cit.>, with five convolutional layers, a representation space (for downstream tasks) of size 256 and a latent space (after a projection head of two dense layers) of size 64. In comparison, ResNet-18 has 11.2M parameters, a representation space of dimension 512 and a latent space of dimension 128. More details and an illustration of TinyNet are available in the supplementary material, as well as a full illustration of the algorithm flow.
Data augmentation, sampling and optimization. CL methods <cit.> require strong data augmentations on input images, in order to strengthen the association between positive samples <cit.>. In our work, we leverage three types of augmentations: rotations, crops and flips. Data augmentations are computed on the GPU, using the Kornia library <cit.>. During inference, we remove the augmentation module to only keep the original input images.
For sampling, inspired by <cit.>, we propose a strategy well-adapted for contrastive learning in 2D medical imaging. We first sample N patients, where N is the batch size, in a balanced way with respect to the radiological/histological classes; namely, we roughly have the same number of subjects per class. Then, we randomly select only one slice per subject. In this way, we maximize the slice heterogeneity within each batch. We use the same sampling strategy also for classification baselines. For 𝒟_histo^2, which has fewer patients than the batch size, we use a balanced sampling strategy with respect to the radiological/histological classes with no obligation of one slice per patient in the batch. As we work with 2D slices rather than 3D volumes, we compute the average probability per patient of having the pathology. The evaluation results presented later are based on the patient-level aggregated prediction.
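The patient-level aggregation used at evaluation time amounts to averaging the slice-level probabilities of each patient before computing metrics; a small NumPy sketch of this step is given below (names and toy values are placeholders, not taken from the released code).

```python
import numpy as np

def patient_level_predictions(slice_probs, patient_ids):
    """Average slice-level probabilities per patient; returns (patients, probabilities)."""
    patients = np.unique(patient_ids)
    agg = np.array([slice_probs[patient_ids == p].mean() for p in patients])
    return patients, agg

# toy example: 3 patients with a variable number of slices each (placeholder values)
probs = np.array([0.2, 0.4, 0.9, 0.8, 0.7, 0.3])
ids = np.array([0, 0, 1, 1, 1, 2])
patients, agg = patient_level_predictions(probs, ids)
print(dict(zip(patients.tolist(), np.round(agg, 3).tolist())))
```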
Finally, we run our experiments on a Tesla V100 with 16GB of RAM and 6 CPU cores, and we use the PyTorch-Lightning library to implement our models. All models share the same data augmentation module, with a batch size of B=64 and a fixed number of epochs n_epochs=200. For all experiments, we fix a learning rate (LR) of α=10^-4 and a weight decay of λ=10^-4. We add a cosine decay learning rate scheduler <cit.> to prevent over-fitting. For BYOL, we initialize the moving average decay at 0.996.
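One way to reproduce this optimization setup is sketched below; the optimizer family (Adam here) is an assumption on our part, since only the learning rate, weight decay, epoch budget and cosine decay schedule are fixed above.

import torch
import torch.nn as nn

model = nn.Conv2d(1, 8, 3)  # stand-in for the backbone encoder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)  # n_epochs = 200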
Evaluation protocol. We first pretrain the backbone networks on 𝒟_radio using all previously listed contrastive and non-contrastive methods. Then, we train a regularized logistic regression on the frozen representations of the datasets 𝒟_histo^1 and 𝒟_histo^2. We use a stratified 5-fold cross-validation.
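A minimal scikit-learn version of this linear-probe protocol might look as follows, with placeholder arrays standing in for the frozen representations and binarized labels:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

features = np.random.randn(106, 512)           # frozen backbone representations
labels = np.random.randint(0, 2, size=106)     # binarized fibrosis status
clf = LogisticRegression(C=1.0, max_iter=1000) # regularized logistic regression
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, features, labels, cv=cv, scoring="roc_auc")
print(scores.mean(), scores.std())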
As a baseline, we train a classification algorithm from scratch (supervised) for each dataset, 𝒟_histo^1 and 𝒟_histo^2, using both backbone encoders and the same 5-fold cross-validation strategy. We also train a regularized logistic regression on representations obtained with a random initialization as a second baseline (random). Finally, we report the cross-validated results for each model on the aggregated dataset 𝒟_histo^1+2=𝒟_histo^1+𝒟_histo^2.
§ RESULTS AND DISCUSSION
We present in Table <ref> the results of all our experiments. For each of them, we report whether the pretraining method integrates the weak label meta-data, the depth spatial encoding, or both, which is the core of our method. First, we can notice that our method outperforms all other pretraining methods in 𝒟_histo^1 and 𝒟_histo^1+2, which are the two datasets with more patients. For the latter, the proposed method surpasses the second best pretraining method, depth-Aware, by 4%. For 𝒟_histo^1, it can be noticed that WSP (ours) provides the best AUC score whatever the backbone used. For the second dataset 𝒟_histo^2, our method is on par with BYOL and SupCon when using a small encoder and outperforms the other methods when using a larger backbone.
To illustrate the impact of the proposed method, we report in Figure <ref> the projections of the ResNet-18 representation vectors of 10 randomly selected subjects of 𝒟_histo^1 onto the first two modes of a PCA. It can be noticed that the representation space of our method is the only one where the diagnostic label (not available during pretraining) and the depth position are correctly integrated. Indeed, there is a clear separation between slices of different classes (healthy at the bottom and cirrhotic cases at the top) and at the same time it seems that the depth position has been encoded in the x-axis, from left to right. SupCon performs well on the training set of 𝒟_radio (figure available in the supplementary material), as well as 𝒟_histo^2 with TinyNet, but it poorly generalizes to 𝒟_histo^1 and 𝒟_histo^1+2. The method depth-Aware manages to correctly encode the depth position but not the diagnostic class label.
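For reference, the projection used here amounts to fitting a two-component PCA on the representation vectors, e.g. (with placeholder arrays):

import numpy as np
from sklearn.decomposition import PCA

reps = np.random.randn(500, 512)             # slice-level representation vectors
labels = np.random.randint(0, 2, size=500)   # diagnostic label, used only for colouring
coords = PCA(n_components=2).fit_transform(reps)  # first two PCA modes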
To assess the clinical performance of the pretraining methods, we also compute the balanced accuracy scores (bACC) of the trained classifiers, which is compared in Table <ref> to the bACC achieved by radiologists who were asked to visually assess the presence or absence of cirrhosis for the N=106 cases of 𝒟_histo^1.
Comparison of the pretraining methods with a binary radiological annotation for cirrhosis on 𝒟_histo^1. Best results are in bold, second top results are underlined.

Pretraining method | bACC models
Supervised | 0.78 (±0.04)
None (random) | 0.71 (±0.13)
ImageNet | 0.74 (±0.13)
SimCLR | 0.78 (±0.08)
BYOL | 0.77 (±0.04)
SupCon | 0.77 (±0.10)
depth-Aware | 0.84 (±0.04)
Ours | 0.85 (±0.09)
bACC radiologists (visual assessment of the same cases) | 0.82
The reported bACC values correspond to the best scores among those obtained with the Tiny and ResNet encoders. Radiologists achieved a bACC of 82% with respect to the histological reference. The two best-performing methods, depth-Aware and the proposed WSP approach, surpassed this score, improving on the radiologists' score by 2% and 3%, respectively, which suggests that including 3D (depth) information at the pretraining stage was beneficial.
§ CONCLUSION
In this work, we proposed a novel kernel-based contrastive learning method that leverages both continuous and discrete meta-data for pretraining. We tested it on a challenging clinical application, cirrhosis prediction, using three different datasets, including the LIHC public dataset. To the best of our knowledge, this is the first time that a pretraining strategy combining different kinds of meta-data has been proposed for such an application. Our results were compared to other state-of-the-art CL methods well-adapted for cirrhosis prediction. The pretraining methods were also compared visually, using a 2D projection of the representation vectors onto the first two PCA modes. Results showed that our method organizes the representation space in a way that is in line with the proposed theory, which may explain its higher performance in the experiments. As future work, it would be interesting to adapt our kernel method to non-contrastive methods, such as SimSiam <cit.>, BYOL <cit.> or Barlow Twins <cit.>, which require smaller batch sizes and have shown strong performance in computer vision tasks.
In terms of application, our method could be easily translated to other medical problems, such as pancreas cancer prediction using the presence of intrapancreatic fat, diabetes mellitus or obesity as discrete meta-labels.
Compliance with ethical standards. This research study was conducted retrospectively using human data collected from various medical centers, whose Ethics Committees granted their approval. Data was de-identified and processed according to all applicable privacy laws and the Declaration of Helsinki.
§ SUPPLEMENTARY MATERIAL
|
http://arxiv.org/abs/2307.04127v1 | 20230709085036 | Self-healing unitarity is an Optical illusion: Comment on "Self-healing of unitarity in effective field theories and the onset of new physics" | [
"Archit Vidyarthi"
] | hep-ph | [
"hep-ph",
"hep-th"
] |
Department of Physics, Indian Institute of Science Education and Research Bhopal, Madhya Pradesh - 462066, India
Among the vast variety of proposals put forward by the community to resolve tree-level unitarity violations in Higgs inflation models, there exists the concept of self-healing. This mechanism helps cancel out tree-level violations for elastic scattering processes by summing over successive vacuum polarization loop corrections. In this comment, we shall see how self-healing is a manifestation of the optical theorem for a theory tailored to behave in a certain way.
Self-healing unitarity is an Optical illusion: Comment on `Self-healing of unitarity in effective field theories and the onset of new physics'
Archit Vidyarthi [email:[email protected]]
August 12, 2023
==============================================================================================================================================
§ INTRODUCTION
Unitarity is one of several properties at the heart of a quantum theory, and essentially implies that the probability of an event cannot exceed unity. Along with other properties such as positivity, causality, etc., it helps provide us with useful bounds on a theory (for example: perturbative bounds, Froissart bounds, etc.) in the form of constraints on a parameter, or on the domain within which the theory is valid, without needing to introduce new degrees of freedom (DsOF).
Tree-level unitarity violations, estimated using perturbative unitarity bounds, are immensely helpful in pointing out missing pieces in a theory. For a non-renormalizable theory, these may imply that the loop corrections might become relevant as we approach the apparent violation scale in describing the complete process <cit.>. For others, they may indicate that the theory is incomplete. Beyond Standard Model (BSM) physics helps fill in gaps stemming from the incompatibility of the Standard Model and gravity, and provides us with possible candidates for the missing DsOF, often motivated by the existence of dark matter and dark energy that make up the majority of the energy content of the universe.
Given how Higgs driven inflation has been one of the prime candidates for a theory describing the birth of the universe, the fact that it faces unitarity violations far below the Planck scale is something the scientific community has been trying to explain away for a long time (see our recent work <cit.> and cited works for more info). After several decades of search, though, we have as of yet not been able to resolve the issue completely. Among the several approaches suggested towards resolving the issue is `self-healing' of unitarity proposed in <cit.> and later applied in the context of Higgs inflation in <cit.>, which are at the heart of what we discuss in this work.
This paper is organized as follows: in Sec.<ref>, we introduce the reader to the optical theorem and partial wave unitarity bounds as presented in <cit.>; in Sec.<ref>, we briefly review the idea of self-healing as it was put forward in <cit.>; followed by our explanation how self-healing is a special case of the optical theorem in Sec.<ref>; and lastly, some discussions in Sec.<ref>.
§ BRIEF RECAP
Imposing that the scattering matrix is unitary, we obtain the famous optical theorem, which relates the imaginary part of the scattering amplitude to the total scattering cross section.
ℳ(i→ f)-ℳ^*(f→ i)=i∑_X∫ dΠ_X (2π)^4δ^4(p_i-p_X)ℳ(i→ X)ℳ^*(f→ X).
In its generalized form (<ref>), this theorem states that order-by-order in perturbation theory, imaginary parts of higher loop amplitudes are determined by lower loop amplitudes. For instance, the imaginary part of one-loop amplitude could be determined by the tree-level amplitude. A special case arises from this using the assumption that the initial and final states are the same state |A>:
Imℳ(A→ A)=2E_CM|p_i|∑_Xσ(A→ X).
The optical theorem puts a constraint on how large a scattering amplitude can be. From the approximate form,
Imℳ≤|ℳ|^2 ⟹ |ℳ|<1.
Now, we use the partial wave expansion of the scattering amplitude to impose constraints on the coefficients of the Legendre polynomials. To recap, we first expand the scattering amplitude as:
ℳ(θ)=16π∑_j a_j (2j+1) P_j(cosθ),
where P_j(cosθ) are Legendre polynomials, with P_j(1)=1, and
∫_-1^1 P_j(cosθ) P_k(cosθ) dcosθ=2/(2j+1)δ_jk.
For a case where the initial and final states are the same, we can write the cross section in the center of mass frame as:
σ_CM_tot= 16π/E_CM^2∑_j |a_j|^2 (2j+1).
Employing the optical theorem at θ=0, we have,
Imℳ(A B → A B at θ=0) =2 E_CM|p⃗_i| ∑_X σ_tot(A B → X)
≥ 2 E_CM|p⃗_i| σ_tot(A B → A B),
where a simple inequality has been introduced owing to the fact that |AB> is among the states |X> summed over. Then,
∑_j=0^∞(2 j+1) Im(a_j) ≥2|p⃗_i|/E_CM∑_j=0^∞(2 j+1)|a_j|^2 .
This, coupled with the inequality |a_j|≥Im(a_j), means that the magnitude of a_j is now constrained as |a_j|≤1, 0≤Im(a_j)≤ 1, and |Re(a_j)| ≤ 1/2, as seen in Fig. (<ref>).
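For completeness, we recall the standard derivation of these bounds (a textbook argument, summarized here for the reader). Since angular momentum conservation prevents different partial waves from mixing, the inequality holds for each j separately; in the high-energy limit 2|p⃗_i|/E_CM→ 1 it reads
Im(a_j) ≥ |a_j|^2=[Re(a_j)]^2+[Im(a_j)]^2,
which can be rearranged into the unitarity-circle condition
[Re(a_j)]^2+[Im(a_j)-1/2]^2 ≤ 1/4,
i.e., each a_j lies on or inside a circle of radius 1/2 centred at i/2 in the complex plane. The bounds |a_j|≤1, 0≤Im(a_j)≤1 and |Re(a_j)|≤1/2 quoted above follow immediately.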
§ PROPOSITION: SELF-HEALING UNITARITY
Preceding <cit.>, authors of <cit.> worked with a set of complex scalar fields, nonminimally coupled with gravity, and tried to estimate the scattering amplitude for the process ss→ s's', where they set s≠ s' to make sure that only the s-channel graviton exchange diagram contributed to the process, and they could avoid collinear divergences in the t and u channels. They, then, try to estimate the scale at which the standard model of particle physics and the minimal supersymmetric standard model, both coupled with gravity, would similarly violate unitarity at tree-level. They claim that in the limit where the number of particles is large, the leading order loop corrections are successive vacuum polarization diagrams and that these violations could be fixed by considering such higher-loop corrections.
Following this, authors of <cit.> consider a similar Lagrangian as <cit.> involving a nonminimal coupling between gravity and multiple scalar fields and provide a useful confirmation for the results presented in <cit.>. They first use partial wave analysis to do so, and then verify the result using a summation of the infinite series loops diagrams. Please note that <cit.> focused on j=2 partial wave amplitudes only.
Authors of <cit.> expanded on the work of the preceding authors and verified the results for a theory involving the Higgs doublet. Instead of sticking to just one process, however, the authors considered certain combinations of these scalars for initial and final states, making sure the combinations adhered to the rules set forth in <cit.> mentioned earlier. Later, they summed over the contributions from all of these processes to show explicitly that the self-healing phenomenon could be applied to j=0 level as well.
§ EQUALITIES
The most important step in order to proceed with Eq.(<ref>) is to fix the initial state |AB>. |X> would, then, contain all possible states that |AB> could transform to, with |AB> itself being one of them. This is what causes the inequality in Eq.(<ref>). What happens if we constrain the theory in such a way that the only possible state is |AB>? Then, instead of an inequality we'd get the equality |a_j|=Im(a_j) for all j. This is exactly what's observed in <cit.>, though they only show it for j= 2. So while the iterative sums are novel and useful in explicitly demonstrating the idea of self-healing, it is, for all intents and purposes, simply an artefact of the optical theorem. This could be visualized easily in Fig.(<ref>).
Additionally, if we fix the initial state, find out all the elements of the corresponding scattering matrix, and sum over their contributions, we revert to the initial form of the optical theorem Eq.(<ref>) and, again, instead of a partial wave bound (inequality), we get an equality as proposed originally. This can be seen in the result of <cit.> for all j, though it was shown explicitly only for j=0. Again, another manifestation of the optical theorem. This latter result covers theories that could be transformed to different forms using field transformations where the `ideal' structure (as required in <cit.>) of these theories could get ruined as more interaction terms show up, meaning more varied final states.
Also note that it was stated in <cit.> that the contribution from vacuum polarization corrections far exceeded that from other sources only when the number of particles was large. For a limited number of DsOF, contribution from other loop diagrams, such as vertex corrections or embedded loops, might be of the same order as the vacuum polarization corrections. Nevertheless, the optical theorem should still be able to restore unitarity in those theories. One example of this is <cit.> where, as previously mentioned, the authors have considered four DsOF in the form of the Higgs doublet.
§ DISCUSSION
Well-behaved gravitational theories are expected to face unitarity violations close to the Planck scale, where the loop diagrams start to contribute. Any violations below this scale imply either that the loops from DsOF (other than graviton) already present in the theory may be contributing, or that some new DsOF need to be introduced. It was seen to be the former in theories mentioned in this work, where summing over loop contributions was able to restore unitarity through the self-healing mechanism, which turned out to be a special case of the optical theorem.
The results of the optical theorem Eq.(<ref>) and Eq.(<ref>), and even the partial wave analysis Eq.(<ref>), are independent of whether the collisions are elastic or inelastic. Therefore, this analysis should be applicable to those cases as well, i.e. even the inelastic versions of the processes considered in <cit.> should be able to `self-heal' adequately. This could be explicitly verified as an independent work.
§ ACKNOWLEDGEMENT
This work is partially funded by DST (Govt. of India), Grant No. SERB/PHY/2021057.
|
http://arxiv.org/abs/2307.05566v1 | 20230710013526 | Scalable Protocol for ZZ-Crosstalk Mitigation in Universal Quantum Gates | [
"Yan Liang",
"Ming-Jie Liang",
"Sai Li",
"Z. D. Wang",
"Zheng-Yuan Xue"
] | quant-ph | [
"quant-ph"
] |
Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education),
and School of Physics, South China Normal University, Guangzhou 510006, China
Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education),
and School of Physics, South China Normal University, Guangzhou 510006, China
Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials,
Guangdong-Hong Kong Joint Laboratory of Quantum Matter, and Frontier Research Institute for Physics,
South China Normal University, Guangzhou 510006, China
[email protected]
Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Department of Physics,
and HK Institute of Quantum Science & Technology, The University of Hong Kong, Pokfulam Road, Hong Kong, China
[email protected]
Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education),
and School of Physics, South China Normal University, Guangzhou 510006, China
Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials,
Guangdong-Hong Kong Joint Laboratory of Quantum Matter, and Frontier Research Institute for Physics,
South China Normal University, Guangzhou 510006, China
Hefei National Laboratory, Hefei 230088, China
High-fidelity universal quantum gates are widely acknowledged as essential for scalable quantum computation. However, in solid-state quantum systems, which hold promise as physical implementation platforms for quantum computation, the inevitable ZZ-crosstalk resulting from inter-qubit interactions significantly impairs quantum operation performance. Here we propose a scalable protocol to achieve ZZ-crosstalk mitigation in universal quantum gates. This method converts the noisy Hamiltonian with ZZ-crosstalk into a framework that efficiently suppresses all ZZ-crosstalk effects, leading to ideal target quantum operations. Specifically, we first analytically derive the ZZ-crosstalk mitigation conditions and then apply them to enhance the performance of target universal quantum gates. Moreover, numerical simulations validate the effectiveness of ZZ-crosstalk mitigation when multiple qubit gates operate concurrently. As a result, our protocol presents a promising approach for implementing practical parallel quantum gates in large-scale quantum computation scenarios.
Scalable Protocol for ZZ-Crosstalk Mitigation in Universal Quantum Gates
Zheng-Yuan Xue
August 12, 2023
=========================================================================
§ INTRODUCTION
Quantum computation is an emerging technology that leverages the principles of quantum mechanics to tackle problems that are intractable for classical computation, such as factorization of large integers <cit.> and database searching <cit.>. The success of quantum computation relies on the implementation of high-fidelity quantum operations. However, quantum crosstalk in large-scale quantum systems influences parallel quantum operations, leading to error accumulation and propagation <cit.>, which can ultimately cause the quantum computation process to fail. The ZZ crosstalk resulting from inter-qubit interactions is prevalent in various quantum systems, such as semiconductor and superconducting qubits <cit.>, and leads to correlated and nonlocal errors <cit.>, as well as spectator errors <cit.>. Therefore, the development of universal quantum gates that can withstand the effects of ZZ crosstalk is crucial for large-scale quantum computation.
Recently, significant efforts have been devoted to mitigating the ZZ crosstalk effect in quantum information processing. Firstly, various hardware-based strategies have been proposed, including the integration of tunable couplers or buses <cit.>, and the utilization of qubits with opposite anharmonicity <cit.>. However, these strategies heavily rely on the precision of hardware manufacturing, posing substantial challenges. Secondly, quantum control strategies, such as the simultaneous ac Stark effect on coupled qubits <cit.> and dynamical decoupling <cit.>, offer alternative approaches that can alleviate hardware requirements while suppressing ZZ crosstalk. However, their extensibility is limited due to the lack of control freedom, and they cannot be utilized for constructing universal quantum gates.
In this paper, we introduce a scalable protocol for implementing universal quantum gates with ZZ-crosstalk mitigation (ZZCM) in large-scale quantum systems. The quantum system with a ZZ-crosstalk Hamiltonian is transformed into a framework that suppresses all ZZ-crosstalk effects between qubits, yielding ideal quantum operations. We analytically derive the conditions that the transformed operator must satisfy and apply it to enhance the performance of universal quantum gates. We demonstrate that, for single-qubit gates, the application of a continuous external drive field to the gate qubit can effectively suppress ZZ crosstalk from all spectators, i.e., nearby qubits, significantly reducing experimental complexity and yielding high-fidelity quantum gates. For parallel quantum gates, the mitigation of ZZ crosstalk between adjacent qubits can be achieved through the application of continuous external driving fields only to the qubits involved in the gate operation. Consequently, this approach holds great promise for practical large-scale parallel quantum computation.
§ QUANTUM GATES WITH ZZCM
In this section, we first present the lattice model considered for universal quantum gates and specify the explicit form of the ZZ crosstalk. Then, we propose the general scheme for constructing universal quantum gates with ZZCM.
§.§ ZZ-Crosstalk Model
We consider a general two-dimensional lattice model for scalable quantum computation. For demonstration purposes and without loss of generality, we assume the lattice consists of N× N physical qubits, as depicted in Fig. <ref>(a). In this lattice, we label the qubit at the ith row and jth column as Q_i,j. For typical solid-state quantum systems, two nearest-neighbor qubits are coupled through the XY type of interaction, and each qubit can be driven independently. The static ZZ coupling is considered the dominant source of noise. The dynamics of the system is governed by the total Hamiltonian H_t(t)=H_0(t)+H_zz, with H_0(t) =H_d(t)+H_J(t), where H_d(t) represents the Hamiltonian with individual single-qubit drives, H_J(t) describes the interaction between nearest-neighbor qubits of the system, and H_zz accounts for the ZZ-crosstalk Hamiltonian. Setting ħ = 1 hereafter, in the interaction picture, the Hamiltonian of a single-qubit under resonant driven is given by
H_d(t)=∑_i, jΩ_i,j (t)(cosϕ_i,jσ_i,j ^x+sinϕ_i,jσ_i,j ^y),
where Ω_i,j (t) (ϕ_i,j) are the time-dependent driving amplitude (phases) acting on the Q_i,j individually, and σ_i,j=(σ_i,j ^x, σ_i,j ^y, σ_i,j ^z) is Pauli operator. The XY interaction between nearest-neighbor qubits is
H_J(t) = ∑_i, jJ_i,j^x(t)/2 (σ_i,j^x σ_i+1,j^x +σ_i,j^y σ_i+1,j^y)
+∑_i, jJ_i,j^y(t)/2 (σ_i,j^x σ_i,j+1^x +σ_i,j^y σ_i,j+1^y),
where {i, j}∈{1, 2, ...N} and J_i,j^x(t) and J_i,j^y(t) being the controllable coupling strength between Q_i,j and its nearest-neighbor qubits along the row and column direction, respectively. The adjacent ZZ-crosstalk Hamiltonian is
H_zz=∑_i, j(η_i,j^xσ_i,j^z σ_i+1,j^z + η_i,j^yσ_i,j^z σ_i,j+1^z),
where η_i,j^x,y characterize the coupling strength of ZZ interactions between nearby qubits.
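As a purely illustrative example (not part of the protocol itself), the three terms of H_t(t) can be assembled for a small patch of the lattice with QuTiP; the 2×2 patch size and the uniform, time-independent parameter values below are our own simplifying assumptions.

import numpy as np
from qutip import sigmax, sigmay, sigmaz, qeye, tensor

N = 2                                      # 2x2 patch of the lattice (assumed)
Omega, phi, J, eta = 0.5, 0.0, 0.2, 0.02   # uniform drive/coupling/crosstalk values (assumed)

def op(single, i, j):
    """Embed a single-qubit operator at lattice site (i, j) of the N x N patch."""
    ops = [qeye(2)] * (N * N)
    ops[i * N + j] = single
    return tensor(ops)

H_d = H_J = H_zz = 0
for i in range(N):
    for j in range(N):
        H_d += Omega * (np.cos(phi) * op(sigmax(), i, j) + np.sin(phi) * op(sigmay(), i, j))
        if i + 1 < N:   # nearest neighbour along the row direction
            H_J += J / 2 * (op(sigmax(), i, j) * op(sigmax(), i + 1, j)
                            + op(sigmay(), i, j) * op(sigmay(), i + 1, j))
            H_zz += eta * op(sigmaz(), i, j) * op(sigmaz(), i + 1, j)
        if j + 1 < N:   # nearest neighbour along the column direction
            H_J += J / 2 * (op(sigmax(), i, j) * op(sigmax(), i, j + 1)
                            + op(sigmay(), i, j) * op(sigmay(), i, j + 1))
            H_zz += eta * op(sigmaz(), i, j) * op(sigmaz(), i, j + 1)

H_total = H_d + H_J + H_zz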
§.§ The general scheme
To eliminate the unwanted ZZ crosstalk,
we rotate the system to the framework defined by a time-dependent unitary transformation 𝒜(t) as
H_𝒜(t) = 𝒜^†(t)H_t(t)𝒜(t)+ i𝒜̇^†(t)𝒜(t).
Our goal is to devise the form of 𝒜(t) such that the resulting evolution operator of H_𝒜(t) yields an ideal gate operation at the final moment, i.e.,
U_𝒜(T,0) =𝒯e^- i∫^T_0H_𝒜(t)dt=U_0
where 𝒯 denotes time-ordering, T represents the duration of the gate operation, and U_0 signifies the ideal gate operation free from the influence of ZZ crosstalk. By imposing the boundary condition of 𝒜(T)=𝒜(0)=I, the evolution operator generated by H_t(t) in the interaction picture can be expressed as
U_t(T,0) = 𝒜(T)U_𝒜(T,0)=I· U_0,
which implies that we eliminate the adverse effect of ZZ crosstalk and realize the ideal gate operation.
To determine the specific form of 𝒜(t) satisfying Eq. (<ref>), we divide the evolution into k segments (k is a positive integer), that is T=k τ, and expand U_𝒜(T,0) as
U_𝒜(T,0) = ∏_n=1^kU_𝒜[n τ, (n-1)τ].
For the nth period U_𝒜[nτ, (n-1)τ]=𝒯 exp [- i∫_(n-1)τ^nτH_𝒜(t)dt ],
by using the Magnus expansion <cit.>, the unitary evolution
operator U_𝒜[nτ, (n-1)τ] corresponding to a time-dependent Hamiltonian is
U_𝒜[nτ, (n-1)τ]= exp{𝒢[nτ, (n-1)τ]},
where 𝒢[nτ, (n-1)τ]=∑_l=1^∞τ^l𝒢_l[nτ, (n-1)τ] is the Magnus series.
When τ is sufficiently small, we can neglect the higher-order terms in τ and keep only the first-order term, which has the dominant influence,
𝒢_1[nτ, (n-1)τ]=- i/τ∫_(n-1)τ^nτH_𝒜(t)dt
=- i/τ [∫_(n-1)τ^nτH_𝒜0(t)dt+∫_(n-1)τ^nτ𝒜^†(t)H_zz𝒜(t)dt ],
where
H_𝒜0(t)=𝒜^†(t)H_0(t)𝒜(t)+ i𝒜̇^†(t)𝒜(t)
is the target Hamiltonian without ZZ crosstalk for H_0(t), in the 𝒜(t) framework, which can realize the ideal gate U_0.
To eliminate the influence of H_zz, we impose two conditions to 𝒜(t), i.e.,
𝒜(t)=𝒜(t+τ),
∫_(n-1)τ^nτ𝒜^†(t)H_zz𝒜(t)dt=0,
which indicate that 𝒜(t) is periodic with its period being τ, and the integral of H_zz in the framework of 𝒜(t) is zero, within any periods.
By these settings, Eq. (<ref>) becomes
U_𝒜(T,0) ≈∏_n=1^ke^- i∫_(n-1)τ^nτH_𝒜0(t)dt≈ U_0,
which is the ideal rotation operation. Moving back to the interaction picture, the evolution operator generated by H_t(t) is U_t(T,0)=𝒜(T)U_𝒜(T,0)=U_0. Overall, the key to mitigating ZZ crosstalk and realizing ideal quantum gates is to find a transformed operator 𝒜(t) that satisfies the boundary condition 𝒜(T)=𝒜(0)=I and Eq. (<ref>).
§ EXAMPLES OF UNIVERSAL QUANTUM GATES
In this section, we exemplify the construction of universal quantum gates with ZZCM in the considered lattice model, which can support scalable universal quantum computation.
§.§ Single-qubit gate
As shown in Fig. <ref>(a), we utilize the construction of a single-qubit σ^x/2 gate on qubit Q_i,j as an example to provide a detailed explanation of how to eliminate ZZ crosstalk from the surrounding four spectator qubits, i.e., Q_i-1,j, Q_i+1,j, Q_i,j-1, Q_i,j+1. We start from Eq. (<ref>), where the single-qubit driven Hamiltonian in the framework of 𝒜_1(t) reads
H_𝒜1(t)=Ω_01(t )σ_i,j^x,
where Ω_01(t )=Ω_01sin^2(π t/T_1) with Ω_01 being the pulse amplitude, and T_1 being the gate operation time satisfied ∫_0^T_1Ω_01(t)dt=π/4. The nearest ZZ-crosstalk Hamiltonian can be written as
H_zz1=η_zzσ_i,j^z Z_i, j,
where
Z_i, j=σ_i-1,j^z+σ_i+1,j^z+σ_i,j-1^z+σ_i,j+1^z
is the sum of the σ^z operators of the four nearest-neighbor qubits of Q_i,j; for simplicity, we assume that the ZZ-crosstalk strengths are identical.
For elimination of ZZ crosstalk, we choose the time-dependent transformed operator 𝒜_1(t) to be
𝒜_1(t)=exp[- iω_1 τ_1/πsin^2 ( π t/τ_1) σ_i,j^x],
where τ_1=T_1/k is the period, and ω_1 being the parameter used to satisfy Eq. (<ref>b). It is obvious that 𝒜_1(t) satisfies the boundary condition 𝒜_1(0)=𝒜_1(T_1)=I and Eq. (<ref>a).
In addition, the condition in Eq. (<ref>b) can be written as
∫_(n-1)τ_1^nτ_1𝒜_1^†(t)H_zz1𝒜_1(t)dt
=η_zz∫_(n-1)τ_1^nτ_1(cosχσ_i,j^z+sinχσ_i,j^y)Z_i, j dt,
where χ=2ω_1τ_1/πsin^2( π t/τ_1). We define the error cumulant during the nth period as
EC=η_zz[|∫_(n-1)τ_1^nτ_1cosχ dt | +|∫_(n-1)τ_1^nτ_1sinχ dt |].
Through numerical simulation, it can be concluded that ω_1=4.81 k Ω_01 is the optimal choice for making the error cumulant vanish; see Appendix <ref> for details.
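This optimum can be checked numerically. For the σ^x/2 gate, Ω_01T_1=π/2 and τ_1=T_1/k, so χ reduces to γ sin^2(π t/τ_1) with γ=ω_1/(kΩ_01), and the error cumulant over one period depends only on γ. The short script below (ours, not the authors' code) scans γ and finds the minimum near γ≈4.81, i.e., twice the first zero of the Bessel function J_0.

import numpy as np
from scipy.integrate import quad

def error_cumulant(gamma: float) -> float:
    """|∫cos(chi)dt| + |∫sin(chi)dt| over one period, with tau_1 set to 1."""
    chi = lambda s: gamma * np.sin(np.pi * s) ** 2
    c, _ = quad(lambda s: np.cos(chi(s)), 0.0, 1.0)
    s_, _ = quad(lambda s: np.sin(chi(s)), 0.0, 1.0)
    return abs(c) + abs(s_)

gammas = np.linspace(0.0, 8.0, 1601)
ec = np.array([error_cumulant(g) for g in gammas])
print("error cumulant minimized near gamma =", gammas[np.argmin(ec)])  # ~4.81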
Once the form of 𝒜_1(t) is determined, we can obtain the total Hamiltonian H_1(t) in the interaction picture by inverting Eq. (<ref>), i.e.,
H_1(t) = 𝒜_1(t)H_𝒜1(t)𝒜_1^†(t)+ id𝒜_1(t)/dt𝒜_1^†(t)+H_zz1
= Ω_1(t)σ_i,j^x+H_zz1,
with Ω_1(t)=Ω_01sin^2( π t/T_1)+ω_1 sin(2π t/τ_1). This indicates that we can mitigate the ZZ crosstalk from all spectators only by modulating the shape of the external drive on the gate qubit from Ω_01(t) to Ω_1(t).
We numerically quantify the gate robustness by using the gate fidelity F(U_0)=|Tr(U_0^†U_zz)|/|Tr(U_0^†U_0)| <cit.>, where U_0 and U_zz are the ideal and error-affected evolution operators, respectively.
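To make the comparison below concrete, the following self-contained sketch (our own simplification to one gate qubit plus a single spectator, instead of the four spectators considered above) builds the evolution under H_1(t) from short-time propagators and evaluates this fidelity for the ZZCM σ^x/2 gate with k=4; the printed value is expected to stay close to 1 for small η_zz.

import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def gate_fidelity(U0, Uzz):
    return abs(np.trace(U0.conj().T @ Uzz)) / abs(np.trace(U0.conj().T @ U0))

def zzcm_propagator(eta_over_Omega01, k=4, T1=1.0, steps=4000):
    Omega01 = np.pi / (2 * T1)          # enforces ∫ Omega01 sin^2(pi t/T1) dt = pi/4
    eta = eta_over_Omega01 * Omega01    # ZZ-crosstalk strength
    tau1, omega1 = T1 / k, 4.81 * k * np.pi / (2 * T1)
    dt, U = T1 / steps, np.eye(4, dtype=complex)
    for n in range(steps):
        t = (n + 0.5) * dt
        Om = Omega01 * np.sin(np.pi * t / T1) ** 2 + omega1 * np.sin(2 * np.pi * t / tau1)
        H = Om * np.kron(X, I2) + eta * np.kron(Z, Z)
        U = expm(-1j * H * dt) @ U
    return U

U_ideal = expm(-1j * (np.pi / 4) * np.kron(X, I2))  # sigma^x/2 gate on the gate qubit
print(gate_fidelity(U_ideal, zzcm_propagator(0.05)))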
Figure <ref>(b) shows a comparison of the robustness to ZZ crosstalk of ZZCM schemes with different k and the DY scheme, where the DY scheme is implemented by using a pulse of Ω_d1(t)=Ω_01sin^2(π t/T_d1), with T_d1 being the gate time, see Appendix <ref> for details. The numerical results reveal a positive correlation between an increase in k and an enhancement of the resilience to ZZ crosstalk. Additionally, we can observe that the gate infidelity as a function of η_zz/Ω_01 (η_zz/Ω_01∈ [-0.5,0.5]) can be smaller than 10^-4 throughout the entire range of errors for k≥4. Notably, compared to the DY scheme, the gate infidelity of the ZZCM schemes reduces by at least two orders of magnitude when the error ratio exceeds |0.02|.
However, we can infer from Eq. (<ref>) that the modulated coupling strength Ω_1(t) increases as k increases, which is not favorable for implementation in an actual experimental system. To address this issue, we set the maximum value of Ω_1(t) as Ω_m, with η_zz/Ω_m ranging from -0.05 to 0.05. Under these conditions, the optimal choice for k is found to be 4, as shown in Appendix <ref>.
§.§ Parallel single-qubit gates
We can construct the ZZCM single-qubit gates on any two physical qubits at the same time. Here we take the parallel single-qubit gates σ^x on Q_i,j and σ^y on Q_i+1,j+1 simultaneously as an example, as shown in the dotted box in Fig. <ref>(a). Note that, parallel single-qubit gates on nearest-neighbour qubits can also be implemented, as detailed in Appendix <ref>.
Starting from Eq. (<ref>), the individual single-qubit driven Hamiltonian is
H_𝒜2(t)=Ω_02(t)(σ_i,j^x+σ_i+1,j+1^y)
with Ω_02(t)=Ω_02sin^2( π t/T_2), where the gate operation time T_2 satisfies ∫_0^T_2Ω_02(t) dt=π/2.
In this case, the Hamiltonian of nearest ZZ crosstalk can be written as
H_zz2=η_zz(σ_i,j^z Z_i,j+σ_i+1,j+1^z Z_i+1,j+1).
When we choose the transformed operator as
𝒜_2(t)= exp [- iω_2τ_2/πsin^2 ( π t/τ_2)(σ_i,j^x+σ_i+1,j+1^y) ],
the Hamiltonian in the interaction picture can be obtained by inverting Eq. (<ref>), i.e.,
H_2(t)= Ω_2(t)(σ_i,j^x+σ_i+1,j+1^y )+H_zz2,
where the equivalent coupling strength is Ω_2(t)=Ω_02sin^2( π t/T_2)+ω_2sin(2π t/τ_2). Here, τ_2=T_2/k is the period, and ω_2 is the parameter used to satisfy Eq. (<ref>b).
By utilizing numerical simulations, we determine that the optimal value of ω_2 is 2.4 kΩ_02, which ensures that the error cumulant vanishes; see Appendix <ref> for details. By setting the maximum value of Ω_2(t) to Ω_m and imposing a constraint on η_zz/Ω_m in the range of [-0.05, 0.05], we identify the optimal value of k to be 4.
A comparison between the robustness of the ZZCM scheme with k=4 and the DY scheme against ZZ crosstalk is presented in Fig. <ref>(b), where the DY scheme is implemented by using a pulse of Ω_d2(t)=Ω_msin^2(π t/T_d2), with T_d2 being the gate time, see Appendix <ref> for details. The results indicate a meaningful increase in robustness for the entire error range when implementing the ZZCM proposal.
§.§ The two-qubit Swap gate
Compared to single-qubit gates, two-qubit gates are more susceptible to parasitic ZZ crosstalk, making it significantly more challenging to achieve high-performance two-qubit gates. Therefore, suppressing ZZ crosstalk is crucial for implementing high-performance two-qubit gates. In this regard, we devise the two-qubit Swap gate with ZZ-crosstalk mitigation effects. We consider a quantum system consisting of eight physical qubits, enclosed in a dotted box in Fig. <ref>(a), where the Swap gate U_(i,j),(i,j+1)^S acts on qubits Q_i,j and Q_i,j+1, and the remaining six qubits, Q_i-1,j,Q_i-1,j+1,Q_i,j-1,Q_i,j+2,Q_i+1,j,Q_i+1,j+1, serve as spectators. The interaction between qubits is indicated by the solid and dashed lines, which correspond to the XY and ZZ interactions, respectively.
The Hamiltonian for this quantum system in the interaction picture is given by
H_3(t)=H_J(t)+H_zz3+H_A(t),
where
H_J(t)=J(t)/2 (σ_i,j^xσ_i,j+1^x+σ_i,j^yσ_i,j+1^y)
is the XY interaction Hamiltonian between qubits Q_i,j and Q_i,j+1, with J(t)=J_0sin^2(π t/T_3) being the time-dependent interaction strength. The Hamiltonian
H_zz3=η_zz[σ_i,j^z Z_i,j+σ_i,j+1^z (Z_i,j+1-σ_i,j^z)]
represents the ZZ crosstalk between qubits. To suppress the ZZ crosstalk, we introduce an additional Hamiltonian H_A(t)= i𝒜̇_3(t)𝒜^†_3(t), where the explicit form of 𝒜_3(t) is presented in Appendix <ref>.
To improve the performance of the Swap gate and minimize the ZZ crosstalk, additional single-qubit operations are required on the spectator qubits Q_i-1,j+1, Q_i,j+2, and Q_i+1,j+1, as detailed in Appendix <ref>. However, these qubits do not introduce any extra overhead in terms of physical resources, since arbitrary quantum gate operations can still be performed on them independently.
This means that not only can the Swap gate be implemented, but a parallel gate consisting of one Swap gate and three single-qubit gates can also be achieved.
To demonstrate this, we introduce a parallel gate U_(i,j),(i,j+1)^S⊗σ^x_i-1,j+1⊗σ^y_i,j+2⊗ I_i+1,j+1, which consists of a Swap gate on qubits Q_i,j and Q_i,j+1, a σ^x gate on qubit Q_i-1,j+1, a σ^y gate on qubit Q_i,j+2, and an identity gate on qubit Q_i+1,j+1. In this scenario, the total Hamiltonian is represented as H'_3(t)=H_3(t)+H_s(t), where
H_s(t)=J(t)/2 (σ_i-1,j+1^x+σ_i,j+2^y)
is the Hamiltonian for constructing the single-qubit gates.
To achieve the parallel gate while suppressing the ZZ crosstalk, the evolution is divided into two steps. In the first segment, with ∫_0^T_3J(t)dt=π/2, the time-dependent transformed operator is chosen to be
𝒜_31(t) = exp [- iω_3τ_3/πsin^2 ( π t/τ_3) (σ_i,j^x+σ_i-1,j+1^x
+σ_i,j+2^y+σ_i+1,j+1^x)],
with τ_3=T_3/4 (meaning k=4), and ω_3=4×2.4 J_0. In the second segment, with ∫_T_3^2T_3J(t)dt=π/2, the operator 𝒜_32(t) is chosen as
𝒜_32(t) = exp[- iω_3τ_3/πsin^2 ( π t/τ_3) (σ_i,j^y+σ_i-1,j+1^x
+σ_i,j+2^y+σ_i+1,j+1^x) ].
Figure <ref>(b) displays the numerically simulated gate fidelity as a function of ZZ crosstalk. The comparison between the performance of ZZCM and DY schemes is presented, where the latter is implemented via Hamiltonian of
H_d3(t) = J(t)/2 (σ_i,j^xσ_i,j+1^x+σ_i,j^yσ_i,j+1^y)
+ J(t) (σ_i-1,j+1^x+σ_i,j+2^y),
see Appendix <ref> for details. The figure demonstrates that the ZZCM scheme exhibits remarkable robustness against ZZ crosstalk, outperforming the DY scheme. The results reveal that the two-qubit gate is much more sensitive to ZZ crosstalk than the single-qubit gate and that the incorporation of the ZZCM method can efficiently mitigate this issue. Specifically, the incorporation of the ZZCM approach reduces the infidelity of the parallel gate U_(i,j),(i,j+1)^S⊗σ^x_i-1,j+1⊗σ^y_i,j+2⊗ I_i+1,j+1 by three orders of magnitude, compared to the DY scheme, when the ZZ crosstalk ratio is 0.05.
Parallel SWAP gates can also be implemented, see Appendix <ref> for details.
§ CONCLUSIONS
We have presented a protocol for implementing ZZ-crosstalk-mitigation universal quantum gates in a two-dimensional square lattice. This approach is also applicable to other kinds of lattices, encompassing three-dimensional qubit arrays. It eliminates the need for auxiliary qubits, simplifying the implementation process and reducing resource requirements. Moreover, the method is compatible with different types of quantum processors, accommodating both direct and bus-based qubit interactions. These features contribute to the versatility and scalability of the ZZ-crosstalk mitigation scheme, making it a promising approach for a wide range of quantum computation platforms.
By employing a time-dependent unitary transformation operator, we successfully realize high-performance isolated and parallel quantum gates while mitigating the ZZ crosstalk between qubits. Notably, the ZZCM proposal can be utilized to prevent the accumulation and propagation of errors induced by ZZ crosstalk, making it a promising solution for constructing deep quantum circuits and simulating quantum algorithms. Consequently, our protocol may lay the groundwork for practical, scalable, and fault-tolerant quantum computation.
§ ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (Grant No. 12275090), the Guangdong Provincial Key Laboratory (Grant No. 2020B1212060066), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302300), NSFC/RGC JRS grant (N-HKU774/21), and the CRF of Hong Kong (C6009-20G).
§ NUMERICAL SIMULATION FOR THE ERROR CUMULANT
To mitigate the influence of ZZ crosstalk, the time-dependent transformed operator 𝒜(t) needs to satisfy the requirement stated in Eq. (<ref>) of the main text.
For single-qubit gates U_(l=1,2) (U_1=σ^x/2, U_2=σ^x) on qubit Q_i,j, the corresponding operation durations satisfy ∫_0^T_1^1Ω_01sin^2( π t/T_1^1)dt=π/4 and ∫_0^T_1^2Ω_01sin^2( π t/T_1^2)dt=π/2, respectively. The Hamiltonian describing the ZZ crosstalk between nearest qubits is given in Eq. (<ref>).
To mitigate the ZZ crosstalk from nearest qubits, we choose 𝒜_1^l(t)= exp [- iω_1^l τ_1^l/πsin^2( π t/τ_1^l) σ_i,j^x ], where τ_1^l=T_1^l/k is the period, and ω_1^l=γ_1^l k Ω_01 (k is positive integer) is the parameter used to satisfy Eq. (<ref>b). Therefore, the condition in Eq. (<ref>b) can be expressed as
∫_(n-1)τ_1^l^nτ_1^l𝒜_1^l†(t)H_zz1𝒜_1^l(t)dt
=η_zz∫_(n-1)τ_1^l^nτ_1^l(cosχ_1^l σ_i,j^z+sinχ_1^l σ_i,j^y) Z_i, j dt,
where χ_1^l=2γ_1^l k Ω_01τ_1^l/πsin^2( π t/τ_1^l). We define the error cumulant during the nth period as
EC_1^l=η_zz[|∫_(n-1)τ_1^l^nτ_1^lcosχ_1^l dt | +|∫_(n-1)τ_1^l^nτ_1^lsinχ_1^l dt |].
Figures <ref>(a) and (b) display the error cumulant as a function of γ_1^l, indicating that for the σ^x/2 (σ^x) gate, γ_1^1=|4.81| (γ_1^2=|2.4|) is the optimal choice for eliminating the ZZ crosstalk.
For the parallel single-qubit gate on next-nearest-neighbor qubits Q_i,j and Q_i+1,j+1, the ZZ-crosstalk Hamiltonian takes the form of Eq. (<ref>).
The gate operation time T_2 satisfies ∫_0^T_2Ω_02sin^2( π t/T_2)dt=π/2. We choose the transformed operator in the form of Eq. (<ref>), with τ_2=T_2/k being the period and ω_2=γ_2 k Ω_02 being the parameter used to satisfy Eq. (<ref>b).
∫_(n-1)τ_2^nτ_2𝒜_2^†(t)H_zz2𝒜_2(t)dt
=η_zz [∫_(n-1)τ_2^nτ_2(cosχ_2σ_i,j^z+sinχ_2σ_i,j^y) Z_i,jdt
+∫_(n-1)τ_2^nτ_2(cosχ_2σ_i+1,j+1^z -sinχ_2σ_i+1,j+1^x)Z_i+1,j+1 dt ],
with χ_2=2γ_2 k Ω_02τ_2/πsin^2( π t/τ_2). We can also define the error cumulant during the nth period as
EC_2=η_zz[|∫_(n-1)τ_2^nτ_2cosχ_2 dt | +|∫_(n-1)τ_2^nτ_2sinχ_2 dt |],
which is consistent with the error cumulant in isolated σ^x gate.
§ THE CONSTRUCTION OF DYNAMICAL GATES
Here, we present the construction of dynamical (DY) gates with simple resonant interaction. They are implemented by using time-dependent driving fields, and the corresponding Hamiltonians are H_d1(t)= Ω_dsin^2( π t/T_d1) σ_i,j^x and H_d2(t)=Ω_dsin^2( π t/T_d2)(σ_i,j^x+σ_i',j'^y) for single-qubit gate on qubit Q_i,j, and parallel single-qubit gate σ^x_i,j⊗σ^y _i',j' between nearest or next-nearest neighboring qubits Q_i,j and Q_i',j', respectively. To construct the isolated gate σ^x/2 (σ^x), the corresponding evolution time T_d1 satisfies ∫_0^T_d1Ω_dsin^2( π t/T_d1)dt=π/4 (π/2). Similarly, for the parallel single-qubit gate, the evolution time is ∫_0^T_d2Ω_dsin^2( π t/T_d2)dt= π/2. The amplitude of the driven field is set to be equal to the amplitude of the equivalent coupling strength in the ZZCM scheme, i.e., Ω_d=Ω_m.
We also present the implementation of the two-qubit Swap gate using a simple XY interaction. The corresponding Hamiltonian is given by
H_d3(t)=J_d(t) (σ_i,j^xσ_i',j'^x+σ_i,j^yσ_i',j'^y)/2,
where J_d(t)=J_0sin^2(π t/T_d3) is the time-dependent XY interaction between qubits Q_i,j and Q_i',j', and the evolution time satisfies ∫_0^T_d3J_d(t)dt=π/2.
Moreover, we construct parallel gates of U_(i,j),(i,j+1)^S⊗σ^x_i-1,j+1⊗σ^y_i,j+2⊗ I_i+1,j+1 and U_(i,j),(i,j+1)^S⊗ U_(i+1,j),(i+1,j+1)^S by using the DY method. The corresponding Hamiltonian is given by Eq. (<ref>)
and
H_d5(t) = J_d(t)/2 (σ_i,j^xσ_i,j+1^x+σ_i,j^yσ_i,j+1^y
+σ_i+1,j^xσ_i+1,j+1^x+σ_i+1,j^yσ_i+1,j+1^y),
respectively, where the evolution time is chosen to satisfy ∫_0^T_d4(5)J_d(t)dt=π/2. We note that the form of the ZZ-crosstalk Hamiltonian is the same as in the ZZCM scheme.
§ THE OPTIMAL K FOR THE ZZCM SCHEME
We draw attention to the incorporation of the ZZCM control, which induces an increase in the effective coupling strength as k grows, as established by Eq. (<ref>) in the main manuscript. The corresponding equivalent coupling strength for the ZZCM approach is provided by Ω_1(t)=Ω_01sin^2( π t/T_1)+ω_1sin(2π t/τ_1), where ω_1=γ_1 k Ω_01. To prevent excessive driving field amplitude in experiments, the amplitude of the equivalent coupling strength is constrained to Ω_m, and the ZZ crosstalk ratio η_zz/Ω_m is within the range of [-0.05,0.05]. Under these conditions, as k increases, the gate driven field amplitude Ω_01 diminishes, consequently resulting in increased relative noise, quantified by the parameter η_zz/Ω_01. Therefore, there exists a trade-off between the enhancement in robustness with increasing k and the proportion of noise η_zz/Ω_01. Figures <ref>(a) and (b) depict the infidelities of σ^x/2 and σ^x gates, respectively, as a function of η_zz/Ω_m, with an optimal value of k=4 identified for the ZZCM scheme. Additionally, Figs. <ref>(c) and (d) contrast the robustness of the ZZCM scheme, with k=4, against the DY scheme concerning ZZ crosstalk when the equivalent coupling strength amplitude is Ω_m. The plots distinctly showcase the outstanding suppression of ZZ crosstalk achieved by the ZZCM scheme.
§ THE CONSTRUCTION OF PARALLEL GATES ON NEARBY QUBITS
Here, we present the construction of parallel single-qubit gates between nearest-neighbor qubits in an eight-qubit system, which is enclosed by the dotted box in Fig. <ref>(a). We apply σ^x and σ^y gates simultaneously on qubits Q_i,j and Q_i,j+1, and the remaining six qubits serve as spectators. Starting from Eq. (<ref>) in the main text, the individual single-qubit driven Hamiltonian in the 𝒜 framework is H_𝒜2'(t)=Ω_02sin^2( π t/T_2)(σ_i,j^x+σ_i,j+1^y), and the ZZ interaction Hamiltonian is
H_zz2'=η_zz[σ_i,j^z Z_i,j+σ_i,j+1^z(Z_i,j+1 -σ_i,j^z)].
To suppress the ZZ crosstalk between qubits, we choose
𝒜_2'(t) = exp [- iω_2τ_2/πsin^2 (π t/τ_2) (σ_i-1,j+1^x
+σ_i,j^x+σ_i,j+2^x+σ_i+1,j+1^x) ].
The total Hamiltonian,
H_2'(t) = H_zz2' + Ω_2(t)σ_i,j^x
+Ω_02sin^2 (π t/T_2) σ_i,j+1^y
+Ω_𝒜(σ_i-1,j+1^x+σ_i,j+2^x+σ_i+1,j+1^x),
in the interaction picture can be obtained by inverting Eq. (4) of the main text, where Ω_𝒜=ω_2sin(2π t/τ_2), and Ω_2(t)=Ω_02sin^2( π t/T_2)+Ω_𝒜 is the equivalent coupling strength applied to qubit Q_i,j. It indicates that, for mitigating ZZ crosstalk, we need additional control fields applied to qubits Q_i-1,j+1, Q_i,j, Q_i,j+2, and Q_i+1,j+1. However, it is worth noting that qubits Q_i-1,j+1, Q_i,j+2, and Q_i+1,j+1 do not incur extra resource consumption on the quantum processor, as we can still implement single-qubit gates independently on these qubits.
We conduct a comparative analysis of the ZZCM scheme and the DY scheme, focusing on their robustness against ZZ crosstalk. We here impose a constraint on the maximum value of the equivalent coupling strength Ω_2(t), limiting it to Ω_m. Comparing Fig. <ref>(b) and Figs. <ref>(c) and (d), we observe that the susceptibility of the parallel single-qubit gate to ZZ crosstalk exceeds that of isolated single-qubit gates. Fortunately, the proposed ZZCM scheme effectively mitigates the adverse effects of ZZ crosstalk. Consequently, the parallel single-qubit gate achieves comparable performance to isolated single-qubit gates.
§ THE CONSTRUCTION OF TWO-QUBIT SWAP GATE
Next, we present a detailed description of the construction of a two-qubit Swap gate for an eight-qubit system, enclosed in a dotted box as shown in Fig. <ref>(a). Qubits Q_i,j and Q_i,j+1 serve as gate qubits for the Swap gate U_(i,j),(i,j+1)^S, and qubits Q_i-1,j, Q_i-1,j+1, Q_i,j-1, Q_i,j+2, Q_i+1,j, and Q_i+1,j+1 act as spectators. The interactions between the qubits are represented by solid and dashed lines, corresponding to the XY and ZZ interactions, respectively. The Hamiltonian for this quantum system in the interaction picture is given by Eq. (<ref>).
To achieve the Swap gate while suppressing ZZ crosstalk, we divide the evolution into two steps. In the first step, we choose 𝒜_31(t)= exp [- iω_3τ_3/πsin^2( π t/τ_3) (σ_i-1,j+1^x+σ_i,j^x+σ_i,j+2^x+σ_i+1,j+1^x) ], with τ_3=T_3/k, and ω_3= 2.4 k J_0. By rotating the system to the framework defined by 𝒜_31(t), we obtain
H^1_𝒜3(t) = 𝒜_31^†(t)H_3(t)𝒜_31(t)+ i𝒜̇^†_31(t)𝒜_31(t)
= J(t)/2σ_i,j^xσ_i,j+1^x
+ 𝒜_31^†(t)[J(t)/2σ_i,j^yσ_i,j+1^y+H_zz3]𝒜_31(t).
It is not difficult to prove that ∫_(n-1)τ_3^nτ_3𝒜_31^†(t) [ J(t)σ_i,j^yσ_i,j+1^y/2 +H_zz3 ]𝒜_31(t)dt=0. Hence, when ∫_0^T_3J(t)dt=π/2, in the framework of 𝒜_31(t), the evolution operator in the first step is U^1_𝒜3(T_3,0)= exp[- iπσ_i,j^xσ_i,j+1^x/4 ].
In the second step, we choose 𝒜_32(t)= exp [- iω_3τ_3/πsin^2( π t/τ_3) (σ_i-1,j+1^y+σ_i,j^y+σ_i,j+2^y+σ_i+1,j+1^y) ]. The Hamiltonian in the framework defined by 𝒜_32(t) is
H^2_𝒜3(t) = 𝒜_32^†(t)H_3(t)𝒜_32(t)+ i𝒜̇^†_32(t)𝒜_32(t)
= J(t)/2σ_i,j^yσ_i,j+1^y
+ 𝒜_32^†(t)[J(t)/2σ_i,j^xσ_i,j+1^x+H_zz3]𝒜_32(t).
Since ∫_(n-1)τ_3^nτ_3𝒜_32^†(t) [ J(t)σ_i,j^xσ_i,j+1^x/2+H_zz3]𝒜_32(t)dt=0, the evolution operator in the second step becomes U^2_𝒜3(2T_3,T_3)= exp[- iπσ_i,j^yσ_i,j+1^y/4 ]. As a result, the evolution operator of the whole process is
U_𝒜3(2T_3,0)=U_𝒜3(2T_3,T_3)U_𝒜3(T_3,0)=U^S_(i,j),(i,j+1),
which is a Swap gate between qubits Q_i,j and Q_i,j+1. Moving back to the interaction picture, the evolution operator generated by H_3(t) at the end time reads
U_3(2T_3)=𝒜_32(2T_3)U^S_(i,j),(i,j+1)=U^S_(i,j),(i,j+1).
It should be noted that, to suppress ZZ crosstalk, an additional Hamiltonian H_A(t)= i𝒜̇_3(t)𝒜^†_3(t) is introduced, as shown in Eq. (<ref>). This extra Hamiltonian requires additional drives not only on qubit Q_i,j but also on spectator qubits Q_i-1,j+1, Q_i,j+2, and Q_i+1,j+1. We emphasize that these spectator qubits do not represent any additional physical-qubit overhead on the quantum processor, as we can still perform independent quantum gate operations on them. In addition, a parallel quantum gate U_(i,j),(i,j+1)^S⊗σ^x_i-1,j+1⊗σ^y_i,j+2⊗ I_i+1,j+1, consisting of one Swap gate and three single-qubit gates, has already been implemented in the main text.
We investigate the robustness of the Swap gate within the ZZCM scheme (with k=4) and conduct a comparative analysis against the DY scheme in the presence of ZZ crosstalk. Figure <ref>(b) displays the results of this comparison, where we observe that the susceptibility of the two-qubit gate to ZZ crosstalk surpasses that of a single-qubit gate. However, we address this concern by proposing the ZZCM scheme, which effectively mitigates the detrimental effects. Notably, the fidelity of the two-qubit gate within the ZZCM scheme remains consistently above 99.99% across the entire range of errors examined in this study.
§ THE CONSTRUCTION OF PARALLEL SWAP GATES
We next demonstrate the application of the ZZCM control for realizing the parallel two-qubit Swap gate in an eight-qubit system, as shown in the dotted box in Fig. <ref>(c).
The Swap gates on qubits Q_i,j and Q_i,j+1, Q_i+1,j and Q_i+1,j+1 are implemented by turning on the XY interaction between the respective qubit pairs. The total Hamiltonian of this quantum system in the interaction picture is
H_4(t)=H'_J(t)+H_zz4+H'_A(t),
which is composed of the XY interaction Hamiltonian
H'_J(t) = J(t)'/2 (σ_i,j^xσ_i,j+1^x+σ_i,j^yσ_i,j+1^y
+σ_i+1,j^xσ_i+1,j+1^x+σ_i+1,j^yσ_i+1,j+1^y)
between qubits Q_i,j and Q_i,j+1, Q_i+1,j and Q_i+1,j+1, with J(t)'=J_0sin^2(π t/T_4) as the time-dependent interaction strength. The ZZ-crosstalk Hamiltonian in this system is described by H_zz4=η_zz [σ_i,j^z(Z_i,j-σ_i,j-1^z)+σ_i,j+1^z(σ_i-1,j+1^z+σ_i+1,j+1^z) +σ_i+1,j^z(σ_i+1,j+1^z+σ_i+2,j^z)+σ_i+1,j+1^zσ_i+2,j+1^z ]. To suppress the ZZ crosstalk, we use the additional Hamiltonian H'_A(t)= i𝒜̇_4(t)𝒜^†_4(t).
The procedure of constructing the parallel Swap gate shares similarities with the construction process of an isolated Swap gate. However, there is a key difference lies in the selection of transformed operator 𝒜(t). Specifically, we select 𝒜_41(t)= exp [- iω_4τ_4/πsin^2( π t/τ_4) (σ_i-1,j+1^x+σ_i,j^x+σ_i+1,j+1^x+σ_i+2,j^x) ] in the first step, followed by 𝒜_42(t)= exp [- iω_4τ_4/πsin^2( π t/τ_4) (σ_i-1,j+1^y+σ_i,j^y+σ_i+1,j+1^y+σ_i+2,j^y) ] in the second step, with τ_4=T_4/k, and ω_4=2.4 k J_0. Under these settings, the ZZCM scheme exhibits superior robustness against ZZ crosstalk as expected, as shown in Fig. <ref>(d). In contrast, the gate fidelity experiences a rapid decline as η_zz/J_0 increases when the ZZCM control is absent. However, by employing the ZZCM method, we successfully enhance the fidelity of the parallel Swap gate from an initial value of 93.02% to 99.99% when the ZZ crosstalk ratio is η_zz/J_0=|0.05|.
99
Vandersypen2001
L. M. K. Vandersypen, M. Steffen, G. Breyta,
C. S. Yannoni, M. H. Sherwood, and I. L. Chuang,
Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance,
Nature (London) 414, 883 (2001).
Lucero2012
E. Lucero, R. Barends, Chen Y, J. Kelly, M. Mariantoni, A. Megrant,
P. O'Malley, D. Sank, A. Vainsencher, J. Wenner, T. White, Y. Yin, A. N. Cleland, and J. M. Martinis,
Computing prime factors with a Josephson phase qubit quantum processor,
Nat. Phys. 8, 719 (2012).
Jones1998
J. A. Jones, M. Mosca, and R. H. Hansen,
Implementation of a quantum search algorithm on a quantum computer,
Nature (London) 393, 344 (1998).
Shahandeh2021
A. Rodriguez-Blanco, A. Bermudez, M. Müller, and F. Shahandeh,
Efficient and robust certification of genuine multipartite entanglement in noisy quantum error correction circuits,
PRX Quantum 2, 020304 (2021).
PZhao2022
P. Zhao, K. Linghu, Z. Li, P. Xu, R. Wang, G. Xue, Y. Jin, and H. Yu,
Quantum Crosstalk Analysis for Simultaneous Gate Operations on Superconducting Qubits,
PRX Quantum 3, 020301 (2022).
Buterakos2018
D. Buterakos, R. E. Throckmorton, and S. D. Sarma,
Crosstalk error correction through dynamical decoupling of single-qubit gates in capacitively coupled singlet-triplet semiconductor spin qubits,
Phys. Rev. B 97, 045431 (2018).
Throckmorton2022
R. E. Throckmorton and S. Das Sarma,
Crosstalk- and charge-noise-induced multiqubit decoherence in exchange-coupled quantum dot spin qubit arrays,
Phys. Rev. B 105, 245413 (2022).
AKandala2021
A. Kandala, K. X. Wei, S. Srinivasan, E. Magesan, S. Carnevale, G. A. Keefe, D. Klaus, O. Dial, and D. C. McKay,
Demonstration of a High-Fidelity CNOT Gate for Fixed-Frequency Transmons with Engineered ZZ Suppression,
Phys. Rev. Lett. 127, 130501 (2021).
LPostler2018
L. Postler, Á. Rivas, P. Schindler, A. Erhard, R. Stricker, D. Nigg, T. Monz, R. Blatt, and M. M¨1ller,
Experimental quantification of spatial correlations in quantum dynamics,
Quantum 2, 90 (2018).
UvonLupke2020
U. von Lüpke, F. Beaudoin, L. M. Norris, Y. Sung, R. Winik, J. Y. Qiu, M. Kjaergaard, D. Kim, J. Yoder, S. Gustavsson, L. Viola, and W. D. Oliver,
Two-qubit spectroscopy of spatiotemporally correlated quantum noise in superconducting qubits,
PRX Quantum 1, 010305 (2020).
Sundaresan2020
N. Sundaresan, I. Lauer, E. Pritchett, E. Magesan, P. Jurcevic, and J. M. Gambetta,
Reducing unitary and spectator errors in cross resonance with optimized rotary echoes,
PRX Quantum 1, 020318 (2020).
Krinner2020
S. Krinner, S. Lazar, A. Remm, C. K. Andersen, N. Lacroix, G. J. Norris, C. Hellings, M. Gabureac, C. Eichler, and A. Wallraff,
Benchmarking Coherent Errors in Controlled- Phase Gates due to Spectator Qubits,
Phys. Rev. Appl. 14, 024042 (2020).
TQCai12021
T. Q. Cai, X. Y. Han, Y. K. Wu, Y. L. Ma, J. H. Wang, Z. L. Wang, H. Y. Zhang, H. Y. Wang, Y. P. Song, and L. M. Duan,
Impact of Spectators on a Two-Qubit Gate in a Tunable Coupling Superconducting Circuit,
Phys. Rev. Lett. 127, 060505 (2021).
PMundada2019
P. Mundada, G. Zhang, T. Hazard, and A. Houck,
Suppression of Qubit Crosstalk in a Tunable Coupling Superconducting Circuit,
Phys. Rev. Appl. 12, 054023 (2019).
XYHan2020
X. Y. Han, T. Q. Cai, X. G. Li, Y.K.Wu, Y. W. Ma, Y. L. Ma, J. H. Wang, H. Y. Zhang, Y. P. Song, and L. M. Duan,
Error analysis in suppression of unwanted qubit interactions for a parametric gate in a tunable superconducting circuit,
Phys. Rev. A 102, 022619 (2020).
LDuan2020
X. Li, T. Cai, H. Yan, Z. Wang, X. Pan, Y. Ma, W. Cai, J. Han, Z. Hua, X. Han, Y. Wu, H. Zhang, H. Wang, Y. Song, L. Duan, and L. Sun,
Phys. Rev. Appl. 14, 024070 (2020).
MCCollodo2020
|
http://arxiv.org/abs/2307.03952v2 | 20230708110202 | Is ChatGPT a Good Personality Recognizer? A Preliminary Study | [
"Yu Ji",
"Wen Wu",
"Hong Zheng",
"Yi Hu",
"Xi Chen",
"Liang He"
] | cs.CL | [
"cs.CL"
] |
Is ChatGPT a Good Personality Recognizer? A Preliminary Study
Yu Ji et al.
Yu Ji 1,2 (ORCID 0000-0001-6048-9184; [email protected]), Wen Wu 2,3 (corresponding author; ORCID 0000-0002-2132-5993; [email protected]), Hong Zheng 4, Yi Hu 3, Xi Chen 3, Liang He 1,2
1 Institute of AI Education, East China Normal University, Shanghai, China
2 School of Computer Science and Technology, East China Normal University, Shanghai, China
3 Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
4 Shanghai Changning Mental Health Center, Shanghai, China
In recent years, personality has been regarded as a valuable personal factor that is incorporated into numerous tasks such as sentiment analysis and product recommendation. This has led to widespread attention to text-based personality recognition, which aims to identify an individual's personality from given text. Considering that ChatGPT has recently exhibited remarkable abilities on various natural language processing tasks, we provide a preliminary evaluation of ChatGPT on the text-based personality recognition task with a view to generating effective personality data. Concretely, we employ a variety of prompting strategies to explore ChatGPT's ability to recognize personality from given text, especially the level-oriented prompting strategy we designed to guide ChatGPT in analyzing given text at a specified level. The experimental results on two representative real-world datasets reveal that ChatGPT with zero-shot chain-of-thought prompting exhibits impressive personality recognition ability and is capable of providing natural language explanations through text-based logical reasoning. Furthermore, by employing the level-oriented prompting strategy to optimize zero-shot chain-of-thought prompting, the performance gap between ChatGPT and the corresponding state-of-the-art model is narrowed even further. However, we observe that ChatGPT shows unfairness towards certain sensitive demographic attributes such as gender and age. Additionally, we discover that eliciting the personality recognition ability of ChatGPT helps improve its performance on personality-related downstream tasks such as sentiment classification and stress prediction.
Keywords: ChatGPT; Personality Recognition; Chain-of-Thought Prompting Strategy; Level-Oriented Prompting Strategy; Natural Language Explanation; Unfairness
August 12, 2023
===================
§ INTRODUCTION
As one of the basic individual characteristics, personality describes the relatively stable pattern of an individual's behavior, thought, and emotion <cit.>. In recent years, an increasing number of researchers have considered personality as a valuable factor and incorporated it into various tasks (e.g., machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>), resulting in significant performance improvements. In order to obtain user personality automatically and at scale, the text-based personality recognition task is designed to infer a user's personality from given user-generated text <cit.>. With the rapid development of pre-trained Large Language Models (LLMs) (e.g., BERT <cit.>, RoBERTa <cit.>, GPT-3 <cit.>, PaLM <cit.>, and LLaMA <cit.>), more and more LLM-based methods have been proposed for the text-based personality detection task and have achieved remarkable performance improvements <cit.>.
More recently, ChatGPT[https://chat.openai.com/] has attracted a considerable amount of attention with its impressive general language processing ability <cit.>, sparking exploration into its capability boundaries <cit.>. Several works have provided a preliminary evaluation of ChatGPT on various common tasks such as machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>. Therefore, in this work, we are interested in evaluating the performance of ChatGPT on text-based personality recognition task for generating effective personality data. We also would like to see whether eliciting the personality recognition ability of ChatGPT contributes to improving its performance on other downstream tasks. Concretely, we raise the following Research Questions (RQs):
RQ1: How do different prompting strategies affect ChatGPT's ability to identify personality?
RQ2: How unfair is ChatGPT when serving as a personality recognizer on various sensitive demographic attributes?
RQ3: Does the personality inferred by ChatGPT help improve its performance on other downstream tasks?
To answer these research questions, we conduct experiments on two representative text-based personality recognition datasets (i.e., Essays and PAN) to compare the performance of ChatGPT, traditional neural network (e.g., Recurrent Neural Network (RNN)), fine-tuned RoBERTa, and corresponding State-Of-The-Art (SOTA) model. Specifically, we adopt three classic prompting strategies to elicit the personality recognition ability of ChatGPT, including zero-shot prompting, zero-shot Chain-of-Thought (CoT) prompting, and one-shot prompting. Furthermore, considering that researchers typically analyze texts at different levels (e.g., word level, sentence level, and document level) to obtain valuable text information <cit.>, we design zero-shot level-oriented CoT prompting to guide ChatGPT in analyzing given text at a specified level, thereby gaining a more targeted understanding of given text and recognizing personality more precisely. According to the experimental results, our findings can be summarized as follows:
(1) Among the three classic prompting strategies, zero-shot CoT prompting can better elicit ChatGPT's ability to predict personality based on given text, resulting in its optimal overall performance on the two datasets, although there is still a certain gap in performance compared to the SOTA model. Additionally, ChatGPT with zero-shot CoT prompting could generate more natural language explanations by text-based logical reasoning, enhancing the interpretability of the prediction results. Furthermore, with the assistance of zero-shot level-oriented CoT prompting, ChatGPT could perform more targeted text analysis, enabling it to complete more accurate personality prediction.
(2) ChatGPT exhibits unfairness to some sensitive demographic attributes on text-based personality recognition task. Based on ChatGPT's analysis, the woman group is more likely to have high levels of Openness, Conscientiousness, and Agreeableness when compared to the man group. Besides, relative to the younger group, the elderly group has a higher likelihood to have low Openness.
(3) The personality inferred by ChatGPT could enhance its performance on sentiment classification task and stress prediction task, which may provide new insights for other personality-related tasks (e.g., machine translation and product recommendation).
In the following sections, we first introduce related work regarding personality recognition in Section <ref>. After that, we present the details of our experimental design and analyze the experimental results in Section <ref>. Finally, we conclude the paper and indicate some future directions in Section <ref>.
§ BACKGROUND AND RELATED WORK
Big-Five Factor (BFF) model and Myers-Briggs Type Indicator (MBTI) are two most popular personality assessment models <cit.>. To be specific, BFF model describes personality based on five traits: Openness (O), Conscientiousness (C), Extraversion (E), Agreeableness (A), and Neuroticism (N) <cit.>. Table <ref> shows the propensities of individuals under different personality traits and levels. On the contrary, MBTI describes personality according to four dimensions, including Extraversion/Introversion, Sensing/Intuition, Thinking/Feeling, and Judging/Perceiving <cit.>. Compared to BFF model, MBTI still faces controversy within the academic community <cit.>. Hence, we adopt BFF model to describe individuals' personalities in this paper.
In recent years, an increasing number of researchers regarded Big-Five personality as a valuable personal factor and incorporated it into their models, resulting in significant performance improvements on various tasks <cit.>. For example, Wu et al. <cit.> adopted users' Big-Five personalities to personalize the recommendation diversity being tailored to the users' diversity needs. Ban et al. <cit.> utilized learners' Big-Five personalities to model the individual differences for better predicting the learners' knowledge levels. This has sparked researchers' interest in efficiently acquiring Big-Five personalities.
The conventional approach to identify an individual's Big-Five personality is via personality questionnaires (e.g., NEO-FFI questionnaire <cit.>, BFI-44 <cit.>, BFI-10 <cit.>, and BFMS <cit.>). These personality questionnaires are typically carefully designed by psychology experts and require individuals to rate their behaviors using Likert scales, which is time-consuming and labor-intensive <cit.>. In order to apply Big-Five personality on a large scale across various domains (e.g., machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>), researchers attempted to implicitly obtain Big-Five personality from various User-Generated Content (UGC), including text <cit.>, handwriting <cit.>, speech <cit.>, electroencephalography (EEG) <cit.>, and so on. Due to substantial evidence from psychological research demonstrating the correlation between user-generated texts and users' Big-Five personalities <cit.>, researchers made an extensive exploration of text-based personality recognition. However, the related methods normally regarded text-based personality recognition task as a special case of text classification. Most of them utilized machine learning algorithms to build personality recognizers with text features such as Linguistic Inquiry and Word Count (LIWC) <cit.> and Structured Programming for Linguistic Cue Extraction (SPLICE) <cit.>. Furthermore, with the rapid development of deep learning, more and more methods using deep neural networks are proposed to solve text-based personality recognition task, as deep neural networks could extract high-order text features from user-generated text automatically <cit.>. For example, Majumder et al. <cit.> designed a deep convolutional neural network with Word2Vec embeddings <cit.> for personality detection. Xue et al. <cit.> presented a two-level hierarchical neural network to learn the deep semantic representations of users' posts for recognizing users' Big-Five personalities. Lynn et al. <cit.> utilized message-level attention to learn the relative weight of users' posts for assessing users' Big-Five personalities. Zhu et al. <cit.> learned post embeddings by contrastive graph transformer network for personality detection. Zhu et al. <cit.> proposed a lexical psycholinguistic knowledge-guided graph neural network to enrich the semantics of users' posts with the personality lexicons. Recently, the remarkable performance enhancements achieved by LLMs in numerous Nature Language Processing (NLP) tasks <cit.> prompted researchers to explore the utilization of LLMs in text-based personality prediction task <cit.>. For example, Mehta et al. <cit.> performed extensive experiments with BERT to arrive at the optimal configuration for personality detection. Ren et al. <cit.> leveraged BERT to generate sentence-level embedding for personality recognition, while a sentiment dictionary is used to consider sentiment information in the process of personality prediction.
Lately, the release of ChatGPT has drawn increasingly great attention due to the incredible general language processing ability of ChatGPT. Therefore, more and more researchers attempted to explore the capability boundaries of ChatGPT and evaluate it on various tasks, including machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, mental health analysis <cit.>, and so on. Hence, in this work, we are interested in exploring the personality recognition ability of ChatGPT through different prompting strategies for obtaining effective personality data.
§ EXPERIMENTS
§.§ Datasets
We adopt two well-known publicly available datasets in our experiments for text-based Big-Five personality recognition task:
(1) Essays <cit.>: This stream-of-consciousness dataset consists of 2,467 essays written by psychology students, and the Big-Five personality levels (i.e., low and high levels) of the students were acquired through standardized self-report questionnaire.
(2) PAN[https://pan.webis.de/clef15/pan15-web/author-profiling.html]: This dataset comes from the PAN2015 data science competition, which consists of four language sub-datasets (i.e., Dutch, English, Italian, and Spanish). In this work, we choose the English sub-dataset, which contains 294 users' tweets and their Big-Five personality scores. The Big-Five personality scores of the users were obtained by BFI-10 questionnaire <cit.>. Note that, similar to <cit.>, for each of the five personality traits, we adopt the corresponding mean value to convert personality scores into two personality levels (i.e., low and high levels). To be specific, personality score below the corresponding mean value is converted into the low level, while personality score equal to or above the corresponding mean value is converted into the high level.
Similar to <cit.>, we randomly split Essays and PAN datasets into training, validation, and testing sets in the proportion of 8:1:1. The statistics of the two datasets are summarized in Figure <ref>.
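A minimal sketch of the preprocessing described above, i.e. binarizing the PAN personality scores at the per-trait mean and performing the random 8:1:1 split. The column names, file name and random seed are our own illustrative assumptions, not the authors' code.

    import pandas as pd
    from sklearn.model_selection import train_test_split

    TRAITS = ["O", "C", "E", "A", "N"]

    def scores_to_levels(df: pd.DataFrame) -> pd.DataFrame:
        """Binarize each Big-Five score: below the trait mean -> Low, otherwise High."""
        out = df.copy()
        for t in TRAITS:
            mean_val = out[t].mean()
            out[t + "_level"] = (out[t] >= mean_val).map({True: "High", False: "Low"})
        return out

    def split_8_1_1(df: pd.DataFrame, seed: int = 42):
        """Randomly split a dataset into training, validation and testing sets (8:1:1)."""
        train, rest = train_test_split(df, test_size=0.2, random_state=seed)
        valid, test = train_test_split(rest, test_size=0.5, random_state=seed)
        return train, valid, test

    # Usage (assuming one row per user with columns "text", "O", "C", "E", "A", "N"):
    # pan = scores_to_levels(pd.read_csv("pan_english.csv"))
    # train_set, valid_set, test_set = split_8_1_1(pan)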
§.§ Prompting Strategies
We employ three classic prompting strategies to explore the personality recognition ability of ChatGPT, including zero-shot prompting, zero-shot CoT prompting, and one-shot prompting. The reason for using one-shot prompting alone is that ChatGPT has a limitation on the length of input. Considering that the texts in both Essays and PAN datasets are normally long (i.e., the average lengths of texts in Essays and PAN datasets are 749 and 1,405 respectively), we only provide one demonstration example in the input (i.e., one-shot prompting) without offering more demonstration examples (e.g., two-shot prompting). In addition, inspired by existing NLP research mining valuable text information at different levels (e.g., word level, sentence level, and document level) <cit.>, we design level-oriented prompting strategy to guide ChatGPT in analyzing text at a specified level. Concretely, we combine the level-oriented prompting strategy with zero-shot CoT prompting to construct zero-shot level-oriented CoT prompting. The reason for constructing zero-shot level-oriented CoT prompting based on zero-shot CoT prompting is that ChatGPT with zero-shot CoT prompting has better overall performance on the two datasets when compared to zero-shot prompting and one-shot prompting (see Section <ref>). Hence, we would like to see whether the level-oriented prompting strategy could further enhance the effectiveness of zero-shot CoT prompting. Note that, the four prompting strategies require ChatGPT to simultaneously output the person's levels of five personality traits (i.e., O, C, E, A, and N) based on given text.
(1) Zero-Shot prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level:
(2) Zero-Shot CoT prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level: Let's think step by step:
(3) One-Shot prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Example Text]"
Level: [Openness Level of Example Text] Openness, [Conscientiousness Level of Example Text] Conscientiousness, [Extraversion Level of Example Text] Extraversion, [Agreeableness Level of Example Text] Agreeableness, [Neuroticism Level of Example Text] Neuroticism
Text: "[Text]"
Level:
Note that, to minimize the variance resulting from the sampling of demonstration examples, we randomly select three demonstration examples for conducting experiments and reporting the average performance.
(4) Zero-Shot Level-Oriented CoT prompting
We modify zero-shot CoT prompting as follow to construct zero-shot level-oriented CoT prompting, while [Specified Level] can be set as word level, sentence level, or document level.
Analyze the person-generated text from [Specified Level], determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level: Let's think step by step:
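To make the four templates above concrete, the following sketch assembles them programmatically; the helper name and argument names are ours, only the template wording is taken from this section.

    def build_prompt(text, strategy="zero_shot", level=None, example=None):
        """Assemble a personality-recognition prompt following the four strategies above."""
        task = ("Analyze the person-generated text"
                + (f" from {level}" if level else "")
                + ", determine the person's levels of Openness, Conscientiousness, "
                  "Extraversion, Agreeableness, and Neuroticism. Only return Low or High.")
        if strategy == "one_shot":
            example_text, example_levels = example  # e.g. "High Openness, Low Conscientiousness, ..."
            return (f'{task}\nText: "{example_text}"\nLevel: {example_levels}\n'
                    f'Text: "{text}"\nLevel:')
        prompt = f'{task}\nText: "{text}"\nLevel:'
        if strategy in ("zero_shot_cot", "zero_shot_level_cot"):
            prompt += " Let's think step by step:"
        return prompt

    # build_prompt(text, "zero_shot_level_cot", level="sentence level") reproduces the
    # zero-shot level-oriented CoT prompting with [Specified Level] set to sentence level.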
§.§ Baselines
Based on our literature research, we choose the following representative models as baselines:
(1) RNN <cit.>: uses RNN to generate text representation for recognizing Big-Five personality. In addition, the pre-trained GloVe model <cit.> is used to initialize the word embeddings.
(2) RoBERTa <cit.>: fine-tunes pre-trained RoBERTa-Base model and utilizes the representation of [CLS] with a linear layer for personality classification.
(3) HPMN (BERT) <cit.>: is one of the SOTA personality prediction models, which uses the personality lexicons to incorporate relevant external knowledge for enhancing the semantic meaning of the person-generated text. Its performance on Essays and PAN datasets is quoted from the original paper.
§.§ Evaluation Metrics
It can be observed from Figure <ref> that Essays and PAN datasets maintain class balance across most of the five personality traits. Therefore, we use Accuracy (the higher the better) <cit.> as the evaluation metric, which is used to measure the personality classification performance. Besides, to make a more intuitive comparison, we adopt Accuracy Improvement Percentage (AIP) to measure the accuracy improvement percentage of ChatGPT against the SOTA model (i.e., HPMN (BERT)), which is calculated as:
AIP = (Accuracy_testmodel - Accuracy_SOTA) / Accuracy_SOTA × 100%
where Accuracy_SOTA and Accuracy_testmodel denote the accuracy of the SOTA model and the test model such as ChatGPT with zero-shot prompting.
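As a quick illustration of the evaluation, the sketch below computes per-trait accuracy and the AIP of a test model against the SOTA accuracy; the numbers in the usage comment are the average accuracies reported in the Overall Performance subsection.

    def accuracy(y_true, y_pred):
        """Fraction of samples whose predicted level (Low/High) matches the ground truth."""
        return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    def aip(acc_test, acc_sota):
        """Accuracy Improvement Percentage of the test model relative to the SOTA model."""
        return (acc_test - acc_sota) / acc_sota * 100.0

    # Example: average accuracy of ChatGPT_ZS (0.573) vs. the SOTA model (0.675) on PAN
    # print(round(aip(0.573, 0.675), 1))  # -> -15.1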
§.§ Implementation Details
For ChatGPT, we adopt the representative version gpt-3.5-turbo and set the temperature to 0 to produce more deterministic and focused responses. For RNN and fine-tuned RoBERTa, we limit each text to 512 words (shorter texts are padded and longer texts are truncated). For RNN, the hidden-state dimension, the batch size, and the learning rate are set to 128, 32, and 1e-3 respectively, while for fine-tuned RoBERTa, the batch size and the learning rate are set to 32 and 5e-5 respectively.
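The calls to ChatGPT in these experiments use gpt-3.5-turbo with temperature 0; a minimal sketch with the 2023-era openai Python client (ChatCompletion interface) is given below, with prompt construction assumed to follow the templates of the previous subsection.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def query_chatgpt(prompt: str) -> str:
        """Send one personality-recognition prompt to gpt-3.5-turbo with deterministic decoding."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # more deterministic and focused responses
        )
        return response["choices"][0]["message"]["content"]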
§.§ Overall Performance (RQ1)
Considering that ChatGPT may refuse personality recognition due to some reasons[One unexpected response of ChatGPT: “Unfortunately, there is not enough information in the provided text to accurately determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.".], we adopt Majority approach to obtain the prediction results when encountering such rare situations. Specifically, for each personality trait, we regard the majority personality level in training set as the personality level of each sample in testing set. The experimental results on Essays and PAN datasets are shown in Table <ref> and Table <ref>. Concretely, ChatGPT_ZS, ChatGPT_CoT, and ChatGPT_OS represent ChatGPT with zero-shot prompting, zero-shot CoT prompting, and one-shot prompting. In addition, ChatGPT_CoT_W, ChatGPT_CoT_S, and ChatGPT_CoT_D denotes ChatGPT with zero-shot level-oriented CoT prompting, while [Specified Level] is set to word level, sentence level, and document level respectively.
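A sketch of how the returned levels could be parsed, falling back to the per-trait majority level of the training set whenever ChatGPT declines to answer, in line with the Majority approach above; the parsing heuristic is our assumption about the response format.

    TRAITS = ["Openness", "Conscientiousness", "Extraversion", "Agreeableness", "Neuroticism"]

    def parse_levels(reply: str, majority_level: dict) -> dict:
        """Extract Low/High per trait from a ChatGPT reply; for any trait the reply
        does not cover, fall back to the majority level of the training set."""
        levels = dict(majority_level)  # start from the Majority approach
        for clause in reply.replace(",", "\n").splitlines():
            lowered = clause.lower()
            for trait in TRAITS:
                if trait.lower() in lowered:
                    if "high" in lowered:
                        levels[trait] = "High"
                    elif "low" in lowered:
                        levels[trait] = "Low"
        return levels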
Results of zero-shot prompting. As shown in Table <ref> and Table <ref>, ChatGPT_ZS has better performance than the traditional neural network RNN on both Essays and PAN datasets. For example, relative to RNN, ChatGPT_ZS increases its average classification accuracy from 50.3% to 57.4% on Essays dataset. Furthermore, ChatGPT_ZS not only performs comparably to fine-tuned RoBERTa on Essays dataset (e.g., 57.4% vs. 57.3% in terms of average classification accuracy) but also outperforms fine-tuned RoBERTa on PAN dataset (e.g., 57.3% vs. 55.3% w.r.t. average classification accuracy). Therefore, ChatGPT_ZS exhibits incredible text-based personality recognition ability under zero-shot setting. Since the SOTA model is a task-specific fully-supervised model with complex architecture for personality recognition task, the performance of ChatGPT_ZS falls far behind that of the SOTA model on the two datasets (e.g., 57.3% vs. 67.5% w.r.t. average classification accuracy on PAN dataset). However, another interesting observation is that compared with Essays dataset (i.e., the relatively large-scale dataset), ChatGPT_ZS shows a relatively higher AIP on PAN dataset (i.e., the relatively small-scale dataset). For example, the AIP of ChatGPT_ZS against the SOTA model on Essays and PAN datasets are -29.0% and -15.1% respectively. Furthermore, ChatGPT_ZS even surpasses the SOTA model when predicting personality trait A on PAN dataset (i.e., 70.0% vs. 66.3%). The possible reason is that PAN dataset provides relatively fewer training data for the fully-supervised SOTA model, preventing it from fully learning the differences in personality levels. In contrast, ChatGPT_ZS does not require training data and relies solely on its existing knowledge under zero-shot setting, narrowing the performance gap between ChatGPT_ZS and the SOTA model.
Results of zero-shot CoT prompting. Table <ref> and Table <ref> reveal that zero-shot CoT prompting could effectively enhance ChatGPT's ability on text-based personality recognition task. For example, ChatGPT_CoT increases its average classification accuracy from 57.3% to 60.7% on PAN dataset when compared with ChatGPT_ZS. As for reason, with the help of zero-shot CoT prompting, ChatGPT_CoT can perform more complex logical reasoning, so as to accurately complete the personality prediction task. Besides, ChatGPT_ZS only provides final prediction results (see Figure <ref>), while ChatGPT_CoT could provide additional natural language explanations for its prediction results in most cases (see Figure <ref>). The natural language explanations generated by ChatGPT_CoT not only enhance users' trust in the prediction results but also enables developers to obtain a better understanding of the knowledge deficiencies in ChatGPT. To gain a deep insight into the natural language explanations generated by ChatGPT_CoT, we categorize the nature language explanations into three types: (1) None: no explanation or refuse personality recognition; (2) Original Content: only the original text is provided as explanation; (3) Logical Reasoning: logical reasoning based on the original text. Figure <ref> shows the examples of three types of natural language explanations for the prediction of personality trait O, and Figure <ref> illustrates the distribution of three types of natural language explanations on different datasets and personality traits. As depicted in Figure <ref>, on both Essays and PAN datasets, ChatGPT_CoT provides more natural language explanations of the logical reasoning type for the prediction of personality trait O, while offering more natural language explanations of the original content type when identifying personality trait N. With regard to possible reasons, personality trait O reflects whether a person is creative/open-minded (with high level) or reflective/conventional (with low level) <cit.>, which may not be directly presented in person-generated text. Hence, the prediction of personality trait O requires ChatGPT to engage in more logical reasoning for a deeper analysis of given text. For example, as shown in Figure <ref>, based on given text, ChatGPT_CoT infers that the person's text is mostly focused on concrete details and experiences, with little indication of abstract or imaginative thinking. Therefore, ChatGPT_CoT predicts that the person has low O. On the contrary, personality trait N reflects whether a person is emotionally stable (with low level) or emotionally unstable (with high level) <cit.>. Since individuals normally directly express their negative emotions (e.g., anxiety) in their texts, it is relatively easier for ChatGPT_CoT to predict personality trait N based on the original text without logical reasoning. For example, one of natural language explanation of the original content type generated by ChatGPT_CoT for predicting personality trait N is mentions feeling stressed, tense, and worried about health problems and homework overload. Furthermore, as demonstrated in Figure <ref>, compared with Essays dataset, ChatGPT_CoT provides relatively more natural language explanations of the logical reasoning type for personality recognition on PAN dataset. 
The possible reason is that Essays dataset consists of stream-of-consciousness essays written by psychology students under professional guidance, while PAN dataset is composed of tweets written freely by various internet users. Hence, compared with the texts in Essays dataset, the texts in PAN datasets generally contain relatively less valuable information, which increases the difficulty of text-based personality prediction on PAN dataset. Therefore, compared to Essays dataset, ChatGPT_CoT needs to perform more logical reasoning to accomplish personality recognition task accurately on PAN dataset.
Results of one-shot prompting. From Table <ref> and Table <ref>, it can be observed that by providing a demonstration example, ChatGPT's performance has improved on Essays dataset but largely declined on PAN dataset. To be specific, ChatGPT_OS increases its average classification accuracy from 57.4% to 58.2% on Essays dataset when compared with ChatGPT_ZS. However, relative to ChatGPT_ZS, ChatGPT_OS decreases its average classification accuracy from 57.3% to 49.3% on PAN dataset. Regarding possible reasons, on the one hand, as mentioned above, the texts in Essays dataset generally contain more valuable information when compared to PAN dataset. Hence, there is a higher probability of selecting samples containing more invalid information from PAN dataset than from Essays dataset, thereby affecting ChatGPT_OS's learning of the relationship between text and Big-Five personality on PAN dataset. On the other hand, the persons in Essays dataset are all psychology students, while the persons in PAN dataset are various internet users from different age groups (from 18 years old to over 50 years old). Hence, without the corresponding demographic attributes (e.g., age) provided, the demonstration example selected from the training set of PAN dataset may not assist ChatGPT_OS in predicting the personalities of certain groups. For instance, if the demonstration example is generated by a young person, the association between text and personality that ChatGPT_OS learns from this demonstration example may not be helpful in predicting the personality of an old person.
Results of zero-shot level-oriented prompting. Table <ref> and Table <ref> demonstrate that guiding ChatGPT_CoT to analyze given text from specified level could help ChatGPT in analyzing given text more targeted and completing personality prediction task precisely. For example, by guiding ChatGPT_CoT_D to analyze given text from document level, its performance on Essays dataset can rival the performance of ChatGPT_OS (58.3% vs. 58.2% w.r.t. average classification accuracy). Similarly, on PAN dataset, when ChatGPT_CoT_S is guided to analyze given text from sentence level, its average classification accuracy has been a notable improvement when compared to ChatGPT_CoT, rising from 57.3% to 62.7%. We believe the possible reason is that the texts in Essays dataset were written within a limited time frame, making it more suitable for conducting overall analysis from document level. On the other hand, the texts in PAN dataset are composed of tweets posted at different times. Hence, it is more appropriate to analyze given text in PAN dataset from sentence level, which is helpful to mine diverse individual information reflected in different tweets. This discovery not only helps optimize existing promptings for text analysis but also offers new insights into eliciting various abilities of LLMs in a fine-grained manner.
§.§ Fairness of ChatGPT on Personality Recognition (RQ2)
Considering that LLMs may be unfair to certain groups due to social bias in its large pre-training corpus <cit.>, we further investigate the fairness of ChatGPT on personality prediction task across different groups. To be specific, we adopt ChatGPT_CoT with different demographic attributes for personality prediction on PAN dataset, as PAN dataset provides various demographic attributes, including gender and age (see Table <ref>). Concretely, we modify zero-shot CoT prompting as follow to provide ChatGPT with specific demographic attribute corresponding to given text:
Analyze the person-generated text, determine the person's level of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High. Note that, the person is [Corresponding Attribute].
Text: "[Text]"
Level: Let's think step by step:
Please refer to Table <ref> for the setting of [Corresponding Attribute]. For example, [Corresponding Attribute] is set to aged between 18 and 24 when the age of the corresponding person is between 18 and 24 years old. To be specific, ChatGPT_CoT_gender and ChatGPT_CoT_age represent ChatGPT with the modified zero-shot CoT promptings, which incorporates gender and age information respectively.
It is apparent from Figure <ref> that the incorporation of demographic attributes impairs the personality prediction ability of ChatGPT_CoT to some extent, especially the integration of age information. For example, relative to ChatGPT_CoT, ChatGPT_CoT_gender and ChatGPT_CoT_age decrease their average accuracy from 55.5% to 55.2% and 54.0% respectively. We speculate that this phenomenon may be due to ChatGPT's biases towards certain groups, which leads to unfair treatment of those groups. In order to better observe ChatGPT's biases on personality prediction task, we first obtain the prediction results of ChatGPT_CoT, ChatGPT_CoT_gender, and ChatGPT_CoT_age towards different groups. We then visualize the proportion of low and high levels in those prediction results. Concretely, Figure <ref> and Figure <ref> show the distribution of the prediction results of ChatGPT_CoT and ChatGPT_CoT_gender towards woman and man groups respectively. In addition, Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref> illustrate the distribution of the prediction results of ChatGPT_CoT and ChatGPT_CoT_age towards different age groups. Take Figure <ref> as an example, the figure represents that among the 174 women in PAN dataset, 51% of them have high O (i.e., ground truth). However, ChatGPT_CoT classifies 74.8% of the 174 women as high O, while ChatGPT_CoT_gender classifies 82.3% of the 174 women as high O. In contrast, as shown in Figure <ref>, among the 174 men in PAN dataset, 47.6% of them have low O (i.e., ground truth). However, ChatGPT_CoT classifies 29.9% of the 174 men as low O, while ChatGPT_CoT_gender classifies 32.0% of the 174 men as low O. In summary, after adding gender information, ChatGPT_CoT_gender classifies more women as high O and classifies more men as low O. This phenomenon suggests that ChatGPT considers women to be more likely to belong to high O when compared to men. In order to make a more intuitive comparison of the prediction results of ChatGPT_CoT, ChatGPT_CoT_gender, and ChatGPT_CoT_age towards different groups, we further visualize the changes of the proportion of high level in the prediction results of ChatGPT_CoT_gender/ ChatGPT_CoT_age relative to ChatGPT_CoT (see Figure <ref>). For example, as displayed in Figure <ref>, for 174 women in PAN dataset, the proportion of women with high A in the prediction results of ChatGPT_CoT_gender has increased by 8.1% when compared to ChatGPT_CoT. Based on Figure <ref>, the biases of ChatGPT towards certain groups can be summarized as follows:
(1) Relative to the man group, the woman group is more likely to exhibit high levels of personality traits O, C, and A.
(2) The older an individual is, the greater the likelihood of her/his personality traits O being low level.
However, these findings are not entirely consistent with existing research. For example, some studies suggest that the woman group is more likely to exhibit high levels of personality traits A and N compared to the man group, whereas gender differences in the other personality traits (i.e., O, C, and E) have been either inconsistent or of negligible magnitude <cit.>. Possible reasons for this could be that, on the one hand, ChatGPT's biases are influenced by the biases of the annotators, which may not be representative. On the other hand, these findings are discovered based solely on the PAN dataset, limiting their generalization to some extent. Nevertheless, this phenomenon serves as a cautionary reminder for researchers to consider fairness when utilizing ChatGPT for personality prediction.
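The group-level comparison above amounts to measuring, for each demographic group, the share of samples predicted as High with and without the demographic attribute in the prompt; a small sketch follows (the column names are assumptions about how the predictions are stored).

    import pandas as pd

    def high_share(pred: pd.Series) -> float:
        """Proportion of predictions labelled High."""
        return (pred == "High").mean()

    def group_bias(df: pd.DataFrame, group_col: str, trait: str) -> pd.DataFrame:
        """Change in the High share per group when the demographic attribute is added
        to the prompt (ChatGPT_CoT_attribute vs. plain ChatGPT_CoT predictions)."""
        rows = []
        for group, sub in df.groupby(group_col):
            base = high_share(sub[f"{trait}_cot"])
            with_attr = high_share(sub[f"{trait}_cot_attr"])
            rows.append({group_col: group, "high_cot": base,
                         "high_cot_attr": with_attr, "delta": with_attr - base})
        return pd.DataFrame(rows)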
§.§ ChatGPT's Personality Recognition Ability on Downstream Task (RQ3)
We apply the personality data generated by ChatGPT to other downstream tasks for validating the effectiveness of ChatGPT's personality recognition ability. Concretely, we choose sentiment classification task and stress prediction task as the downstream tasks, because existing psychological research indicates that there is a correlation between Big-Five personality and sentiment expression <cit.> as well as stress vulnerability <cit.>. For each task, to make a more comprehensive assessment of the impact of personality data generated by ChatGPT, we first adopt ChatGPT_CoT and fine-tuned RoBERTa to generate the corresponding Big-Five personality based on given text respectively. We then use a basic prompting to elicit the task-related ability (i.e., sentiment classification ability and stress prediction ability) of ChatGPT. Finally, we modify the basic prompting by incorporating different Big-Five personalities and observe the task-related ability of ChatGPT with different modified basic promptings.
To be specific, for sentiment classification task, we adopt a subset of Yelp-2 dataset <cit.> for conducting experiments. The reason for not utilizing the complete Yelp-2 dataset is to take into account the cost of using ChatGPT's API. Concretely, we randomly select 500 positive samples and 500 negative samples from the testing set of Yelp-2 dataset to construct the subset. While for stress prediction task, we choose Dreaddit dataset, which consists of 715 samples (369 positive samples and 346 negative samples) in its testing set. Specifically, considering that the texts in the PAN dataset, Yelp-2 dataset, and Stress dataset are all web posts, we use fine-tuned RoBERTa trained on PAN dataset to generate personality data. Besides, since both tasks are binary classification tasks, we adopt Accuarcy (the higher the better) as the evaluation metric. In addition, the basic promptings used for sentiment classification task and stress prediction task are proposed by <cit.> and <cit.>. Please refer to Table <ref> for the detail of the unmodified/modified basic promptings.
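For the downstream experiments, the basic task prompt is extended with the Big-Five levels inferred for the writer; since Table <ref> is not reproduced here, the wording below is only a hedged illustration of such an augmented prompt, not the authors' exact prompt.

    def personality_augmented_prompt(text: str, levels: dict, task: str = "sentiment") -> str:
        """Prepend the inferred Big-Five levels to a basic sentiment/stress prompt (illustrative wording)."""
        traits = ", ".join(f"{level} {trait}" for trait, level in levels.items())
        if task == "sentiment":
            question = "Is the sentiment of the text positive or negative?"
        else:
            question = "Does the person who wrote the text suffer from stress? Answer Yes or No."
        return (f"Note that the person who wrote the text has {traits}.\n"
                f'Text: "{text}"\n{question}')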
The experimental results are illustrated in Figure <ref>. Note that, ChatGPT_basic represents ChatGPT with the basic prompting, while ChatGPT_basic_PC and ChatGPT_basic_PR denotes ChatGPT with the modified basic promptings, which incorporates the personality data generated by ChatGPT_CoT and fine-tuned RoBERTa respectively. It can be observed that after incorporating the personality data predicted by ChatGPT_CoT, there is an improvement in ChatGPT's performance on both sentiment classification task and stress prediction task. For example, ChatGPT_basic_PC increases its classification accuracy from 96.6% to 97.6% on sentiment classification task when compared to ChatGPT_basic. While for stress prediction task, ChatGPT_basic_PC increases its classification accuracy from 71.3% to 73.0% when compared to ChatGPT_basic. This proves the effectiveness of the personality data generated by ChatGPT_CoT. With an understanding of individuals' Big-Five personalities, ChatGPT can analyze their sentiment expression and stress condition in a more personalized manner. Another interesting finding is that the personality data generated by fine-tuned RoBERTa can help improve the performance of ChatGPT in sentiment classification tasks, but it actually decreases ChatGPT's performance in stress prediction task. We believe that the possible reason for this is that fine-tuned RoBERTa trained on PAN dataset does not generalize well, which results in the poor performance of personality prediction on Dreaddit dataset. In contrast, ChatGPT relies solely on zero-shot CoT prompting to elicit its personality prediction ability and does not require training data, thus exhibiting stronger generalization performance on different datasets.
§ CONCLUSION AND FUTURE DIRECTIONS
In this work, we evaluate the personality recognition ability of ChatGPT with different prompting strategies, and compare its performance with RNN, fine-tuned RoBERTa, and corresponding SOTA model on two representative text-based personality identification datasets. With the elicitation of zero-shot CoT prompting, ChatGPT exhibits impressive personality recognition ability and has strong interpretability for its prediction results. In addition, we find that guiding ChatGPT to analyze text at a specified level helps improve its ability to predict personality, which proves the effectiveness of level-oriented prompting strategy. Moreover, we discover that ChatGPT exhibits unfairness to some sensitive demographic attributes, leading to unfair treatment of some specific groups when predicting personality. Besides, we apply the personality data inferred by ChatGPT in other downstream tasks and achieve performance improvement to some extent. This proves that ChatGPT's personality prediction ability is effective and has high generalization performance.
As for future work, on the one hand, we would like to apply level-oriented prompting strategy to more NLP tasks for observing its effectiveness in mining text information. On the other hand, with the continuous emergence of various LLMs, we are interested in exploring the construction of domain-specific LLMs based on psychological data in order to enhance the personality recognition ability of LLMs.
§ ACKNOWLEDGMENT
This work is funded by Science and Technology Commission of Shanghai Municipality, China (under project No. 21511100302), National Natural Science Foundation of China (under project No. 61907016), Natural Science Foundation of Shanghai (under project No. 22ZR1419000), the Research Project of Changning District Science and Technology Committee (under project No. CNKW2022Y37), and the Medical Master's and Doctoral Innovation Talent Base Project of Changning District (under project No. RCJD2022S07). In addition, it is also supported by The Research Project of Shanghai Science and Technology Commission (20dz2260300) and The Fundamental Research Funds for the Central Universities.
§.§ CRediT Authorship Contribution Statement
Yu Ji: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing-Original Draft, Writing-Review and Editing. Wen Wu: Conceptualization, Methodology, Formal analysis, Investigation, Writing-Original Draft, Writing-Review and Editing, Supervision. Hong Zheng: Writing-Review and Editing. Yi Hu: Supervision, Writing-Review and Editing. Xi Chen: Writing-Review and Editing. Liang He: Supervision, Writing-Review and Editing.
§.§ Ethical Approval
Not applicable.
§.§ Data Availability
Data will be made available on request.
§.§ Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
http://arxiv.org/abs/2307.04522v1 | 20230710124559 | Accretion Flow Properties of EXO 1846-031 During its Multi-Peaked Outburst After Long Quiescence | [
"Sujoy Kumar Nath",
"Dipak Debnath",
"Kaushik Chatterjee",
"Riya Bhowmick",
"Hsiang-Kuang Chang",
"Sandip K. Chakrabarti"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Corresponding author: Dipak Debnath ([email protected], [email protected])
Sujoy Kumar Nath (ORCID 0000-0002-6640-0301): Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India
Dipak Debnath (ORCID 0000-0003-1856-5504): Institute of Astronomy Space and Earth Science, AJ 316, Sector II, Salt Lake, Kolkata 700091, India; Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan
Kaushik Chatterjee (ORCID 0000-0002-6252-3750): South Western Institute for Astronomical Research, Yunnan University, University Town, Chenggong, Kunming 650500, P. R. China; Institute of Astronomy Space and Earth Science, AJ 316, Sector II, Salt Lake, Kolkata 700091, India; Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan
Riya Bhowmick (ORCID 0000-0002-7658-0350): Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India
Hsiang-Kuang Chang (ORCID 0000-0002-5617-3117): Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan; Department of Physics, National Tsing Hua University, Hsinchu 300044, Taiwan
Sandip K. Chakrabarti (ORCID 0000-0002-0193-1136): Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India
We study the recent outburst of the black hole candidate EXO 1846-031, which went into outburst in 2019 after almost 34 years in quiescence. We use archival data from the Swift/XRT, MAXI/GSC, NICER/XTI and NuSTAR/FPM satellites/instruments to study the evolution of the spectral and temporal properties of the source during the outburst. The low-energy X-ray flux of the outburst shows multiple peaks, making it a multipeaked outburst. Evolving type-C quasi-periodic oscillations (QPOs) are observed in the NICER data in the hard, hard intermediate and soft intermediate states. We use the physical Two Component Advective Flow (TCAF) model to analyze the combined spectra of multiple satellite instruments. According to the TCAF model, the accreting matter is divided into Keplerian and sub-Keplerian parts, and the variation in the observed spectra in different spectral states arises from the variable contributions of these two types of accreting matter to the total accretion rate. Studying the evolution of the accretion rates and other properties of the accretion flow obtained from the spectral analysis, we show how the multiple peaks in the outburst flux arise from the discontinuous supply and different radial velocities of the two types of accreting matter from the pile-up radius. We detect an Fe emission line at ∼6.6 keV in the hard and intermediate states in the NICER spectra. We determine the probable mass of the black hole to be 12.43^+0.14_-0.03 M_⊙ from the spectral analysis with the TCAF model. We also estimate the viscous time scale of the source in this outburst to be ∼8 days from the difference between the peaks of the Keplerian and sub-Keplerian mass accretion rates.
§ INTRODUCTION
A Low mass Black hole X-ray binary system (BHXRBs) consists of a stellar-mass main-sequence star orbiting around a
stellar-mass black hole (SMBH). Transient BHXRBs spend most of their lifetime in a quiescent state, exhibiting very low X-ray
luminosity (L_X ∼ 10^30-33 ergs/s; Tetarenko et al. 2016). Occasionally transient BHXRBs show bright outbursts,
lasting for a few weeks to a few months, during which the source becomes extremely luminous
(L_X ∼ 10^37-38 ergs/s; Tanaka & Shibazaki 1996).
Due to its non-zero angular momentum, matter from the companion star accretes onto the black hole (BH), forming an inward-spiralling accretion disk.
The accumulating matter heats up the disk, and the matter in the disk gets ionized causing thermal-viscous instability (Dubus et al. 2001; Lasota 2001).
As a result of the instability, the viscosity of the ionized matter in the outer disk increases suddenly. This causes more angular momentum
to be redistributed outward, and the accretion rate in the inner disk increases rapidly, triggering an outburst
(Chakrabarti & Titarchuk 1995; Ebisawa et al. 1996; Chakrabarti 2013). During an outburst, low mass BHXRBs go through
a succession of `accretion states', showing a rapid change in their temporal and spectral properties (Fender et al. 2004;
Homan & Belloni 2005; McClintock & Remillard, 2006). During the initial phase of the outburst, the source luminosity is
low and the energy spectrum can be approximated with a hard non-thermal power-law component. This state is called
the hard state (HS). As the outburst progresses, the source transits through the hard-intermediate state (HIMS) and
soft-intermediate state (SIMS), when the source luminosity gradually increases and the contribution of the low energy thermal
photons increase, which gradually softens the spectrum. The source luminosity becomes maximum in the soft state (SS) when the spectrum is dominated by
a thermal multicolor disk blackbody. After that, the source luminosity gradually decreases, and the source transits through
SIMS, HIMS and finally, to the HS. Low-frequency peaked and narrow noise components called quasi-periodic oscillations (QPOs)
have been observed in the power-density spectra (PDS) of most BHXRBs. Their properties (centroid frequency, Q-value, rms amplitude
and noise) also vary depending on the spectral state, and Casella et al. (2005) have classified these LFQPOs into three types:
A, B, and C. Generally, type-C QPOs with monotonically increasing or decreasing centroid frequency can be observed in the HS and
HIMS, while no QPOs are observed in the SS. The evolution of these spectral and temporal properties are strongly correlated, which is
manifested in the `Hardness-Intensity Diagram' (HID; Belloni et al. 2005; Debnath et al. 2008) or the `Accretion Rate Ratio-Intensity Diagram'
(ARRID; Jana et al. 2016).
Two separate mechanisms are responsible for the production of low and high-energy X-ray radiation from the accretion disks.
An optically thick, geometrically thin Keplerian flow dissipates the gravitational energy of the accreting matter through
viscosity and emits multicolor thermal blackbody photons (Novikov & Thorne 1973; Shakura & Sunyaev 1973).
When these low-energy photons get intercepted by a hot electron cloud, they get repeatedly inverse Comptonised and are
emitted as high-energy X-rays (Sunyaev & Titarchuk 1980, 1985). While there is general agreement about the emission mechanisms,
the actual nature of the hot electron cloud or the Compton cloud has been a matter of debate.
According to the Two-Component Advective Flow (TCAF) model (Chakrabarti & Titarchuk 1995; Chakrabarti 1997, Chakrabarti 2018),
the CENtrifugal pressure supported BOundary Layer (CENBOL) acts as the Compton cloud. This CENBOL is formed near the black hole when the
low viscous, freely falling sub-Keplerian matter comes to a halt as the outward centrifugal pressure becomes comparable to the inward
gravitational force, and it forms a standing or oscillating shock. The post-shock region becomes hot and puffs up and forms a torus-like region of
hot ionised matter. In the equatorial region, the viscosity remains higher than a certain critical value to maintain Keplerian angular momentum,
and this Keplerian matter becomes optically thick and emits the multicolor soft photons which are then partially intercepted by the CENBOL
and emitted as hard non-thermal photons. In the TCAF model, any observed spectrum depends on four independent flow parameters,
i.e. the accretion rates of the Keplerian and the sub-Keplerian matter, the location of the shock which is the outer boundary of CENBOL,
and the ratio of the post-shock to pre-shock matter densities (compression ratio). Additionally, it also depends on the mass of the BH
and a normalization factor which is the ratio of emitted to observed X-ray spectrum, both of which are constants for a given source.
As an outburst starts, the faster and hotter sub-Keplerian matter rushes towards the BH and forms the CENBOL which increases the hard Comptonised
flux. The Keplerian matter, which has a low velocity due to the higher viscosity, gradually moves towards the BH and cools down the CENBOL.
The CENBOL region shrinks in size, the hard photon flux decreases and the spectra become gradually softer. As the outer boundary of the CENBOL
oscillates (e.g. due to a resonance between the Compton cooling and compressional heating), the Comptonized hard X-ray intensity also varies
which gives rise to the observed quasi-periodic oscillations. This CENBOL also acts as the base of the jet/outflows.
To study how the physical flow parameters vary during an outburst and to estimate the intrinsic parameters of the BH, this TCAF model has been
incorporated into the spectral analysis software package pcrXSPEC (Arnaud, 1996) as a local additive table model
(Debnath et al. 2014, 2015). So far, the TCAF model has been used to study the accretion flow dynamics of more than fifteen BHXRBs
(Mondal et al. 2016; Debnath et al. 2017; Chatterjee et al. 2021). Intrinsic parameters, like the
BH mass and its distance have been estimated (Molla et al. 2017; Chatterjee et al. 2019; Jana et al. 2020a; Nath et al. 2023). The origin of QPOs
and jet/outflows has also been successfully studied using this model (Mondal et al. 2015; Chakrabarti et al. 2015; Chatterjee et al. 2016, Jana et al. 2017; Debnath et al. 2021)
Galactic X-ray transient EXO 1846-031 was first discovered by EXOSAT during its outburst in 1985 (Parmar & White 1985).
Based on the ultra-soft component in the spectra of this outburst, Parmar et al. (1993) indicated the source EXO 1846-031 is a BH candidate.
After the first outburst, the source remained in quiescence for almost 34 years. Recently, the source was again found to be in outburst by
MAXI on 2019 July 23 (Negoro et al. 2019). Evolving Type-C QPOs were observed in the Insight-HXMT and NICER data
(Liu et al. 2021) which is generally observed in BHXRBs. From strong reflection features in the NuSTAR spectrum, Draghis et al. (2020)
suggested EXO 1846-031 to be a BH with nearly maximal spin (a=0.99^+0.002_-0.001) with a disk inclination of θ≈73^∘.
From Insight-HXMT and NuSTAR data, Wang et al. (2021) found signatures of an ionised disk wind with velocities up to 0.06c.
They suggest EXO 1846-031 is a low inclination system with θ≈40^∘. Ren et al. (2022) investigated the spectral evolution from
Insight-HXMT data and suggested that the maximal spin found by Draghis et al. (2020) might be affected by choice of a different
hardening factor (f_col). Evidence of the presence of a pair of 3:2 ratio high-frequency quasi-periodic oscillations (HFQPO)
was found, and based on this the probable mass of the source was determined to be 3.4±0.2 M_⊙ (Strohmayer & Nicer Observatory
Science Working Group 2020).
Analysing the radio observations from MeerKAT, VLA and AMI-LA, Williams et al. (2022) suggested a distance range of
2.4–7.5 kpc, and a jet speed of β_int=0.29c.
We study the spectral and temporal properties of EXO 1846-031 during its 2019 outburst using Swift/XRT, Swift/BAT,
MAXI/GSC, NICER/XTI and NuSTAR/FPM data with the TCAF model in this paper. We discuss the observation, data reduction,
and analysis procedures briefly in Section 2. In Section 3, we present the results of our analysis. In Section 4, we discuss the obtained results and draw conclusions.
§ OBSERVATION AND DATA ANALYSIS
§.§ Observations
We study the 2019-2020 outburst of EXO 1846-031 using archival data from Swift (Gehrels et al. 2004), NICER (Gendreau et al. 2012),
MAXI (Matsuoka et al. 2009), and NuSTAR (Harrison et al. 2013) satellites. We study the evolution of the X-ray fluxes in the soft
and hard energy bands and their ratios using MAXI/GSC (2-10 keV) and Swift/BAT (15-50 keV) data of ∼ 10 months from 2019 July 9 (MJD=58673)
to 2020 April 10 (MJD=58949). For the detailed temporal and spectral study, we use data from Swift/XRT, NICER/XTI, MAXI/GSC and NuSTAR/FPM
satellites/instruments.
Although NICER and Swift monitored the source regularly in the rising phase of the outburst, during the declining phase, there is a data gap
of ∼ 3 months for Swift and ∼ 4 months for NICER. We use 14 data of NICER/XTI (1-11 keV) and 11 data of Swift/XRT (1-10 keV)
for spectral analysis. To study the spectra in a wider energy band, we also use MAXI/GSC (7-20 keV) and NuSTAR/FPM (4-79 keV) simultaneously
with NICER and Swift data. A detailed log of the observations is given in Table 1.
§.§ Data Reduction
§.§.§ Swift
Swift/XRT window timing (WT) mode data were used in our analysis. Level 1 data files obtained from the archive are processed with the XRTPIPELINE task to produce Level 2 clean event files. A circular region of radius 30” around the source location is then used to extract the source spectra, and a region of the same radius is chosen away from the source to extract the background spectra using the XSELECT tool. ARF files are created using the XRTMKARF tool and the corresponding RMFs are obtained from the CALDB. Using the GRPPHA tool, the spectra are rebinned to have at least 20 counts/bin.
Swift/BAT daily lightcurves are obtained from the Swift transient monitoring website (https://swift.gsfc.nasa.gov/results/transients/weak/EXO1846-031/).
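The GRPPHA step above simply guarantees at least 20 counts per spectral bin; its logic is equivalent to the following sketch, which builds an OGIP-style GROUPING column from the COUNTS column of a PHA file (the file and extension names in the usage comment are placeholders).

    import numpy as np
    from astropy.io import fits

    def group_min_counts(counts: np.ndarray, min_counts: int = 20) -> np.ndarray:
        """Return a GROUPING column (+1 starts a group, -1 continues it) such that
        every group accumulates at least min_counts counts."""
        grouping = np.full(len(counts), -1, dtype=int)
        running = 0
        for i, c in enumerate(counts):
            if running == 0:
                grouping[i] = 1  # start a new group
            running += c
            if running >= min_counts:
                running = 0  # group is full; the next channel starts a new one
        return grouping

    # with fits.open("source_spectrum.pha") as hdul:
    #     counts = hdul["SPECTRUM"].data["COUNTS"]
    #     grp = group_min_counts(counts, 20)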
§.§.§ NICER
NICER is an external payload attached to the International Space Station which has an X-ray timing instrument (XTI; Gendreau et al. 2012)
working in the energy range 0.2-12 keV with a timing resolution of ∼100 ns and spectral resolution of ∼85 eV at 1 keV. For analysis,
the Level 1 data files are processed with the nicerl2 script in the latest CALDB environment (ver. xti20221001) to obtain Level 2 clean event files. The barycorr command is then used to apply barycentric correction to the event files. The lightcurves and spectra are extracted from these barycentre-corrected event files using the XSELECT task. The nibackgen3C50 tool (Remillard et al. 2022) is then used to simulate the background corresponding to each observation. The spectra are then rebinned to have at least 20 counts/bin with the GRPPHA task.
§.§.§ NuSTAR
NuSTAR raw data from the web archive are reduced with the NuSTAR data analysis software (NuSTARDAS, version 1.4.1). Cleaned event files are produced using the nupipeline task in the presence of the latest calibration files. With the XSELECT task, a circular region of 60” centred at the source coordinates is chosen as the source region, and a circular region of the same radius away from the source location is chosen as the background region. The nuproducts task is then used to extract the spectrum, ARF and RMF files. The extracted spectra are then rebinned to have at least 30 counts/bin with the GRPPHA task.
§.§.§ MAXI
MAXI/GSC spectra are obtained using the MAXI on-demand process web tool (http://maxi.riken.jp/mxondem/; Matsuoka et al. 2009).
To study the evolution of the X-ray fluxes, daily average lightcurves are obtained from the MAXI website (http://maxi.riken.jp/star_data/J1849-030/J1849-030.html).
§.§ Data Analysis
Daily average light curve data of MAXI/GSC and Swift/BAT are used to study the variation of the X-ray flux in various energy bands
throughout the outburst. To study the hardness variations, we use two types of hardness ratios, namely hardness ratio 1 (HR1)
i.e. the ratio of 15-50 keV Swift/BAT flux in mCrab to 2-10 keV MAXI/GSC flux in mCrab, and hardness ratio 2 (HR2) i.e. the ratio
of 4-10 keV to 2-4 keV MAXI/GSC flux. To search for the presence of LFQPOs, we use the powspec task
to generate power density spectra (PDS) from 0.01 s time binned light curves of NICER. The light curve of a total observation is separated
into a number of intervals, each of which contains 8192 newbins. For each interval, a PDS is created, and these individual PDSs
are averaged to generate a single PDS, which is then geometrically rebinned. We model the PDSs with multiple Lorentzian models
in XSPEC version 12.11.1 (Arnaud 1996) to account for the broadband noise, the QPOs and their harmonics.
From the fits we obtain the QPO frequencies (ν_QPO), width (Δν), Q-value (Q=ν_QPO/Δν) and RMS (%) amplitude.
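As an illustration of this procedure, the following minimal C sketch builds an averaged power density spectrum from an evenly binned light curve in the same spirit: the light curve is split into intervals of a fixed number of bins, a Leahy-normalized periodogram is computed for each interval, and the individual periodograms are averaged. This is our own stand-in for clarity only (the actual analysis used the powspec task and geometric rebinning); the function names, the naive O(N^2) DFT and all parameters are illustrative assumptions, not part of the original pipeline.

/* Sketch: averaged Leahy-normalized power density spectrum of an evenly
   binned light curve. One periodogram per interval of nseg bins, then an
   average over all complete intervals. Output bin k corresponds to a
   frequency of (k+1)/(nseg*dt) for a time bin of width dt. */
#include <stdlib.h>
#include <math.h>

static const double TWOPI = 6.283185307179586;

/* Leahy-normalized periodogram of one segment c[0..n-1] (counts per bin). */
static void periodogram(const double *c, int n, double *power){
    double ctot = 0.0;
    for (int j = 0; j < n; j++) ctot += c[j];
    for (int k = 1; k <= n/2; k++) {                 /* skip the DC term */
        double re = 0.0, im = 0.0;
        for (int j = 0; j < n; j++) {
            re += c[j]*cos(TWOPI*k*j/n);
            im -= c[j]*sin(TWOPI*k*j/n);
        }
        power[k-1] = 2.0*(re*re + im*im)/ctot;       /* Leahy normalization */
    }
}

/* Average the periodograms of all complete nseg-bin intervals of lc. */
void averaged_pds(const double *lc, long nbins, int nseg, double *avg){
    long nint = nbins/nseg;
    double *p = malloc(sizeof(double)*(nseg/2));
    for (int k = 0; k < nseg/2; k++) avg[k] = 0.0;
    for (long i = 0; i < nint; i++) {
        periodogram(lc + i*(long)nseg, nseg, p);
        for (int k = 0; k < nseg/2; k++) avg[k] += p[k];
    }
    for (int k = 0; k < nseg/2; k++) avg[k] /= (double)nint;
    free(p);
}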
We utilize HEASARC's spectral analysis software package XSPEC version 12.11.1 (Arnaud 1996)
for analyzing the spectra. All the spectra are fitted with the TCAF-based local additive table model (Debnath et al. 2014).
To fit spectra using the TCAF model, four input flow parameters are essential:
(1) the Keplerian disk accretion rate (ṁ_d in Ṁ_Edd),
(2) the sub-Keplerian halo accretion rate (ṁ_h in Ṁ_Edd),
(3) the shock location (X_s in Schwarzschild radius r_s=2 GM_BH/c^2), and
(4) the dimensionless shock compression ratio (R = ρ_+/ρ_-, ratio of the post-shock to the pre-shock matter density).
In addition, one system parameter, i.e., the mass of the BH (M_BH in M_⊙) and one instrument parameter, i.e. the model normalization (N)
are required. To account for the interstellar absorption, we use the TBabs model with vern
cross-sections (Verner et al. 1996) and wilm abundances (Wilms et al. 2000).
We use the smedge model to account for the instrumental features in the NICER spectra at ∼1.8 keV.
§ RESULTS
After almost 34 years in quiescence, EXO 1846-031 again went into an outburst on 2019 July 23 (MJD 58687) which lasted for ∼10 months.
To examine the nature of the outburst and the accretion flow properties during the outburst, we carried out a detailed temporal and
spectral study using data from multiple satellites. The results of the study are presented below.
§.§ Temporal Properties
To study the outburst profile in different energy bands and the variation of hardness ratios, we use MAXI/GSC and Swift/BAT
daily light curve data. To study the low timescale variability features, we use NICER/XTI data due to its high
temporal resolution.
§.§.§ Outburst Profile and Hardness Ratios
We show the variation of X-ray fluxes in different energy bands and their hardness ratios
from 2019 July 9 (MJD=58673) to 2020 April 10 (MJD=58949) in various panels of Fig. 1.
The variation of the Swift/BAT (15-50 keV) flux and the MAXI/GSC (2-10 keV) flux is shown in panel (a),
while the variation of their hardness ratio (HR1) is shown in panel (b). Likewise, panel (c) shows
the variation of MAXI/GSC flux in lower (2-4 keV) and higher (4-10 keV) energy bands while panel (d)
shows the variation in their hardness ratio (HR2). From the Figure, we can observe that at the start
of the outburst, both soft and hard fluxes increased rapidly, and the 15-50 keV Swift/BAT flux
attained a maximum on MJD 58697, roughly 8 days before the softer (2-4 keV and 4-10 keV) MAXI/GSC fluxes.
The hardness ratios (HR1 and HR2) also increased and attained a maximum around MJD 58691 and decreased
quickly to a low level. After the initial maximum, the Swift/BAT flux slowly decreased and decayed into
quiescence at the end of the outburst. On the other hand, after the maximum around MJD 58705, the MAXI/GSC
fluxes (in different energy bands) decreased for ∼13 days and then started to increase again. They
attained a maximum around MJD 58736 and then decreased with an almost constant rate for ∼65 days. After
that, the GSC fluxes remained at a constant low level till the end of the outburst.
Looking at the outburst profile, we can see that this 2019 outburst of EXO 1846-031 has shown two stronger flux peaks in the rising phase
and two very weak peaks in the declining phase. To estimate the total amount of flux released during each of the peaks, we fit the 2-10 keV
MAXI/GSC lightcurve using FRED profiles (Kocevski et al. 2003). A combination of four FRED profiles is used to fit the complete outburst (Fig. 2)
(see Chakrabarti et al. 2019, Bhowmick et al. 2021, Chatterjee et al. 2022 for more details). In Fig. 2, the blue curve marks the combined
fit of the entire outburst and the red curves mark individual FRED fitted peaks of the outburst. We choose 12 mCrab as the threshold of flux
for the outburst. The horizontal black line indicates the 12 mCrab flux value. Two vertical lines mark the start and the end of the outburst when
the X-ray flux rises above and below this 12 mCrab threshold. The total integrated X-ray flux (IFX_tot) of the complete outburst calculated
from the combined fit is 39.70^+3.29_-3.05 Crab. The individual integrated flux values (IFX) of the first, second, third and fourth peaks are
6.31^+0.26_-0.25 Crab, 30.82^+2.60_-2.42 Crab, 1.77^+0.60_-0.38 Crab and 0.80^+0.01_-0.16 Crab, respectively. The IFX values quantify
the amount of energy released in each peak. Comparing the IFX values of the four peaks, we conclude that the maximum amount of energy was released
during the second peak, i.e., the maximum amount of matter was cleared out during the second peak of the outburst.
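A minimal C sketch of this flux-integration step is given below. The FRED shape used here is a generic fast-rise exponential-decay profile and is not necessarily the exact Kocevski et al. (2003) parameterization adopted above; the struct, the function names and the trapezoidal quadrature are illustrative assumptions, meant only to show how an integrated flux is obtained from a fitted multi-component model between the epochs where the flux crosses the adopted 12 mCrab threshold.

/* Sketch: integrate a sum of FRED components between t1 and t2 (e.g. the
   MJDs where the model flux crosses the outburst threshold). The FRED form
   below is a generic fast-rise exponential-decay profile. */
#include <math.h>

typedef struct { double amp, t0, tau_rise, tau_decay; } fred_t;

/* One FRED component evaluated at time t (e.g. in MJD). */
static double fred(double t, fred_t p){
    if (t <= p.t0) return 0.0;
    double dt = t - p.t0;
    return p.amp*exp(-p.tau_rise/dt - dt/p.tau_decay);
}

/* Trapezoidal integral of the summed model; the result is in
   (flux units) x days if t is given in MJD. */
double integrated_flux(const fred_t *c, int ncomp, double t1, double t2, int nstep){
    double h = (t2 - t1)/nstep, sum = 0.0;
    for (int i = 0; i <= nstep; i++) {
        double t = t1 + i*h, f = 0.0;
        for (int j = 0; j < ncomp; j++) f += fred(t, c[j]);
        sum += (i == 0 || i == nstep) ? 0.5*f : f;
    }
    return sum*h;
}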
§.§.§ Power Density Spectra (PDS)
To study the presence and evolution of LFQPOs during the outburst, we use 0.01 s time-binned NICER
light curves. We use zero-centred Lorentzian models to fit the broad noise components and narrow
Lorentzians to fit the QPO peaks to determine the centroid frequencies, Q-values, rms amplitudes, etc.
We find the presence of QPOs in 19 NICER observations in the initial phase of the outburst.
The observed QPOs can be classified as type-C which are characterized by a Q-value of ∼ 7-12 and
an amplitude of ∼3–16 % rms that are superposed on a broad flat-top noise (Casella et al. 2005).
Figure 3 shows a representative PDS in which a QPO at 3.24 Hz can be seen along with its harmonic at 6.52 Hz.
The QPOs are found in the hard, the hard intermediate and the soft intermediate states which are discussed in detail in later sections.
§.§ Spectral Properties
We use data from Swift/XRT, NICER/XTI, MAXI/GSC and NuSTAR/FPM for spectral analysis in a broad
1-79 keV energy range. We mainly use the absorbed TCAF model to study the spectra. We use the
TBabs model for absorption where the hydrogen column density (N_H)
was kept free. We found the N_H to vary between 5.12×10^22 cm^-2 and
10.94×10^22 cm^-2 during our analysis period. In the NICER spectra, edge-like
residuals are seen at ∼1.8 keV corresponding to the Silicon K edge which is an instrumental
feature typical for Si-based detectors (Alabarta et al. 2020, Miller et al. 2018).
We use the smedge model to account for it. An Fe-Kα emission line at ∼6.4 keV
is also observed in the NICER spectra of the initial rising phase which was fitted using the
Gaussian model. We jointly fit the XRT+GSC spectra with the constant*TBabs*(TCAF)
model (Fig. 4a) and the NICER+GSC spectra with the constant*TBabs*smedge(TCAF) or
constant*TBabs*smedge(TCAF+gaussian) model (Fig. 4b). In the NICER+NuSTAR spectra,
a dip is observed at ∼10 keV in the NuSTAR data. At first, we fit the spectra with the constant*TBabs*smedge(TCAF+Gaussian) model,
ignoring the dip, and obtain χ^2/DOF=1.79. To improve the fit, we use the gabs model to account for the dip
and obtain a good fit statistic with χ^2/DOF=0.91. The corresponding spectra are shown in Fig. 5(a–b). Detailed
results of our spectral analysis are shown in Table 2.
§.§.§ Evolution of the Spectral states
The evolution of various temporal and spectral parameters of our analysis with the TCAF model shows that the source has evolved through
different spectral states in this outburst. We get a rough estimation of the state evolution from the outburst profile and the variation of HRs.
From the variation of the spectral parameters, we get a clearer picture of the state evolution as they show the actual evolution
of the accretion flow dynamics, e.g. the change in the disk and halo accretion rates, the change of the position of the shock and its strength, etc.
In the Figure 6, we show the variation of the disk accretion rate (ṁ_d), the halo accretion rate (ṁ_h), the total accretion rate
(ṁ_d + ṁ_h) and the accretion rate ratio (ARR = ṁ_h/ṁ_d). In the Figure 7, we show the variation of the best fitted
mass parameter (M_BH), the shock location (X_s) and the shock compression ratio (R), along with the evolution of the QPO centroid frequency.
Here we discuss the spectral states in detail.
(1) Rising Hard State (HS):
The source was in the hard state when we started our spectral analysis on 2019 July 31 (MJD 58695).
The total accretion rate was high in this state, and most of the accreting matter was sub-Keplerian,
as ṁ_h was higher than ṁ_d by a factor of ∼3. The ARR was also high in this state,
which started to decrease gradually as ṁ_d started to increase and ṁ_h started to decrease
as the outburst progressed. The shock started to move towards the BH from a faraway location (460r_s),
but its strength was almost constant in this period (R∼1.5). Two LFQPOs were found in this state
whose centroid frequency increased from 0.25 Hz to 0.41 Hz. High HR was also observed in this state
as the hard flux (Swift/BAT) dominated the soft flux (MAXI/GSC). The source remained in this state until 2019 August 2 (MJD 58697).
(2) Rising Hard Intermediate State (HIMS):
After MJD 58697, the total accretion rate started to decrease as the previously dominant ṁ_h
started to decrease rapidly. The total accretion rate began to increase again after 2019 August 5 (MJD 58700)
as ṁ_d started to increase and became dominant. The ARR decreased steadily in this state. Likewise,
the shock moved inward rapidly, moving from ∼325 r_s to ∼37 r_s in ∼7 days with decreasing strength.
Nine LFQPOs were found in this state whose centroid frequency increased rapidly to ∼7 Hz.
The HR decreased in this state as the dominating hard flux began to decrease and soft flux increased steadily.
The source stayed in this state until 2019 August 8 (MJD 58703).
(3) Rising Soft Intermediate State (SIMS):
The total accretion rate decreased and became roughly constant at a low level after MJD 58703.
Both the ṁ_d and ṁ_h became steady, with ṁ_d dominating over the ṁ_h.
The shock ceased to move towards the BH and came to a halt at ∼35r_s and its strength also became constant.
We found eight LFQPOs during the initial part of this state, with their centroid frequency showing a slowly
decreasing trend. The hard flux and the soft flux both decreased in this state, causing the HR to become low.
This state of the outburst continued until 2019, August 31 (MJD 58726).
(4) Soft State/High Soft State (SS/HSS):
After MJD 58726, the soft fluxes began to increase rapidly again, which is quite unusual and indicates an abrupt
change in the accretion process. The hard 15-50 keV flux remained low, showing that this change
only affected the fluxes below 10 keV. The soft fluxes increased up to
2019 September 10 (MJD 58736) and then decreased almost linearly until 2019 November 14 (MJD 58801)
and became steady at a low level. The HRs also became low, signifying the source had transitioned into a
soft state/high soft state. Although XRT and NICER spectra were available for some days at the start of this
state, the TCAF fit of these spectra was statistically unacceptable, which shows that the two-component configuration
of the accretion flow had been violated. We discuss this in detail in Section 4. After November 2019, spectral data are unavailable
for ∼ 4 months, due to the source becoming sun-constrained (Williams et al. 2022). Hence it became impossible
to determine how long the source was in the soft state.
(5) Declining Hard State (HS):
After 2020 February 26 (MJD 58905), Swift/XRT data became available for spectral analysis.
The total accretion rate, the ṁ_d and the ṁ_h all were low in this period.
On the other hand, the ARR became high, and the shock also moved outward at ∼250r_s with increased strength.
All of these show that the source had already transitioned into the declining hard state.
§.§.§ Estimation of BH mass from spectral analysis
Mass of the BH (M_BH) is an important parameter for spectral fitting with TCAF. Mass of the BH
in EXO 1846-031 was previously determined to be 3.24±0.2 M_⊙ based on the presence of
3:2 ratio HFQPOs (Strohmayer & Nicer Observatory Science Working Group 2020). Initially, we tried
to fit the spectra with TCAF keeping the mass parameter frozen at this value. But the resulting reduced chi-squares were high
and the fits were statistically unacceptable. Hence we keep the mass parameter free during further analysis with TCAF.
From our spectral fits, we find the best fitted M_BH values to vary between 7.1-12.6 M_⊙.
However, mass of a BH in a BHXRB system is not supposed to change significantly during the course of an outburst.
The spread in the mass values obtained from the TCAF fits results from random measurement errors and does not reflect
a variation of the actual BH mass during the outburst. In our spectral analysis, we fitted spectra in different
energy bands obtained from multiple instruments with different effective areas, which may also contribute to the measurement
errors in the mass of the BH. To reduce such errors, we perform a global fit using all spectra from the different epochs.
We use the model constant*TBabs*smedge(TCAF+gaussian) and keep the mass parameter linked
for all spectra. The joint fit is shown in Fig. 8. From the global fit, we obtain a mass value of
the source as 12.43^+0.14_-0.03 M_⊙.
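For orientation (our own unit conversion, using the globally fitted mass), the Schwarzschild-radius unit adopted above corresponds to
r_s = 2GM_BH/c^2 ≈ 2.95 km × (M_BH/M_⊙) ≈ 2.95 km × 12.43 ≈ 37 km,
so that, for example, a shock at 460 r_s lies ≈ 1.7×10^4 km from the BH and a shock at 37 r_s lies ≈ 1.4×10^3 km from it.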
§ DISCUSSIONS AND CONCLUDING REMARKS
EXO 1846-031 is a Galactic black hole candidate that went into an outburst in July 2019 after remaining in quiescence
for almost 34 years following its discovery in 1985. We study the evolution of the temporal and spectral properties of the
source during the outburst using observational data from Swift/XRT, Swift/BAT, NICER/XTI, MAXI/GSC and NuSTAR/FPM
satellites/instruments. For the spectral analysis we use the physical TCAF model and fit NICER (1–10 keV), combined
NICER+GSC (1–20 keV), XRT+GSC (1–20 keV) and NICER+NuSTAR (1–79 keV) spectra for 25 epochs spread over the outburst duration.
From our spectral fits, we obtain flow parameters of the system such as the Keplerian disk accretion rate (ṁ_d),
the sub-Keplerian halo accretion rate (ṁ_h), the shock location (X_s), and the shock compression ratio (R).
As these flow parameters evolve during the outburst, we gain insight into how the accretion flow of matter changes
and produces different kinds of spectral and temporal variability. We also estimate the mass of the black hole from our spectral analysis.
Generally, transient black hole outbursts show two types of outburst profiles, fast rise exponential decay (FRED) or
slow rise slow decay (SRSD) (Debnath et al. 2010). However, in the case of some BHXRBs, the X-ray flux does not decay
into quiescence after the first outburst peak. Rather, they show one or more peaks after the main outburst peak
before eventually going into the quiescence phase. In the literature, such phenomena are known as “reflares”, “rebrightenings”
or “mini-outbursts” (e.g. GRO J0422+32, MAXI J1659-152, GRS 1739-278, MAXI J1910-057;
Chen et al. 1997, Homan et al. 2013, Yan & Yu 2017, Nath et al. 2023). For the 2019 outburst of EXO 1846-031, we can see from
Fig. 1 that both the soft and hard fluxes increase rapidly at the start of the outburst. While the hard flux decayed
slowly after attaining its maximum, the soft flux, though it started to decrease initially, began to increase
again and attained a maximum comparable with the first peak. This outburst can be classified as a multipeak outburst
according to the re-brightening classification scheme of Zhang et al. (2019). As matter gets accumulated at
the pile-up radius (X_p; Chakrabarti et al. 2019; Bhowmick et al. 2021; Chatterjee et al. 2022)
in the quiescence phase, before an outburst, it is heated up and gets ionized. This ionised matter
causes a thermal-viscous instability in the matter. This instability increases the viscosity in the disk
causing an increased outward redistribution of angular momentum. This causes the matter to flow rapidly inward
onto the BH, triggering the outburst (Lasota 2001; Dubus et al. 2001; Hameury 2020). However, this disk
instability model (DIM) cannot explain these re-brightening/mini-outburst phenomena very well. Although several
models have been proposed to explain the reflares (e.g., Kuulkers et al. 1994; Hameury et al. 2000; Zhang et al. 2019),
none of them is well verified. Hence we investigate the rebrightening phenomenon of EXO 1846-031 within the TCAF picture.
During the 2019 outburst, EXO 1846-031 showed two brighter (in the rising phase) and two dimmer peaks (in the declining phase)
in the low energy outburst profile. We fitted the 2-10 keV MAXI/GSC outburst profile with multiple FRED models, and from this fit we estimated
that the total integrated flux released in the outburst is 39.70^+3.29_-3.05 Crab. The contribution of individual peaks calculated from
the individual FRED profiles are 16%, 78%, 4% and 2%, respectively, for the first, second, third and fourth peaks. Here we observe that
although the peak fluxes are roughly the same, about five times more energy is released during the second peak than during the first, which
indicates that most of the matter was released from X_p during the second peak. This is quite uncommon in transient BHXRBs.
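For concreteness, these percentages follow directly from the integrated fluxes quoted above,
IFX_1/IFX_tot = 6.31/39.70 ≈ 16%, IFX_2/IFX_tot = 30.82/39.70 ≈ 78%, IFX_3/IFX_tot = 1.77/39.70 ≈ 4%, IFX_4/IFX_tot = 0.80/39.70 ≈ 2%,
and the ratio of the second to the first peak is 30.82/6.31 ≈ 4.9, i.e. the quoted factor of about five.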
At the start of the outburst, when the viscosity at the pile-up radius increased above the critical value,
matter began to rush inward. We can see from Fig. 6 that the halo rate is high compared to the disk rate.
As the sub-Keplerian matter has low viscosity, it falls freely towards the BH, whereas the Keplerian matter
has large viscosity and moves inward more slowly, on the viscous timescale. The sub-Keplerian matter reaches
the BH faster than the Keplerian matter, and the halo rate attains peak value before the disk rate.
From Fig. 7, we see that the shock is far away in this initial phase. As there is no Keplerian matter to cool
the faster-moving sub-Keplerian matter, it forms a large CENBOL, and this CENBOL inverse Comptonizes most of
the soft thermal photons and produces a large number of hard photons. Hence we can see from Fig. 1 that the
high energy fluxes dominate the low energy fluxes making the HRs high and the source goes into the rising
hard state. After MJD 58697, the Keplerian matter begins to cool down the sub-Keplerian matter as it
gradually moves towards the BH. The disk rate starts to increase and the halo rate decreases. The CENBOL
shrinks in size, and the shock, which is the outer boundary of CENBOL, moves closer to the BH and decreases
in strength. Hence the inverse-Comptonization is reduced, the hard flux begins to decrease, the thermal flux
increases, the HRs decrease, and the source goes into the hard intermediate state. As the supply of accreting matter
gradually decreases, both the disk rate and the halo rate decrease, the CENBOL shrinks further and the shock
moves even closer to the BH. Both the soft and the hard fluxes decrease, the HRs drop to a very low level,
and the source goes into the soft intermediate state. In all three of these states, we find the presence of low-frequency
quasi-periodic oscillations (LFQPO). In the TCAF picture, LFQPOs are produced due to the oscillation of
the shock, i.e. the outer boundary of the CENBOL. The centroid frequency of the LFQPO (ν_QPO) is roughly inversely
proportional to the location of the shock (r_s) (ν_QPO∼ 1/[r_s(r_s-1)^1/2]; Chakrabarti et al. 2008, Debnath et al. 2010).
As we can see from Fig. 7, as the shock moves closer to the BH in the HS and HIMS, the centroid frequency of the QPO
increases. As the shock becomes almost steady in the SIMS, the QPO frequency also becomes steady.
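As a rough, order-of-magnitude consistency check of this scaling (our own estimate, taking the proportionality at face value), the quoted shock locations give
r_s(r_s-1)^1/2 ≈ 460×√459 ≈ 9.9×10^3 at the start of the hard state and ≈ 37×6 = 222 near the end of the hard intermediate state,
i.e. a predicted frequency increase by a factor of ≈ 44, while the observed increase from 0.25 Hz to ∼7 Hz is a factor of ∼28; the two agree at the level expected for a rough inverse scaling.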
After some days in the SIMS (∼ MJD 58715), the value of the compression ratio becomes close to one
and the halo rate becomes close to zero. This signifies that the post-shock and pre-shock matter densities are equal,
which means that the shock has become very weak or has disappeared and has moved very close to the black hole.
As the shock disappears, the sub-Keplerian and Keplerian components of the accretion flow become essentially the same.
This very weak shock is unable to produce any QPOs, hence the QPOs disappear gradually at the later stage of the SIMS.
After MJD 58726, the soft fluxes began to increase again while the hard fluxes remained low, which shows that
there is an increase in the thermal emission without much of it being inverse-Comptonized.
Although some NICER and XRT spectra are available in this phase, we failed to fit these spectra with the TCAF model.
This indicates that the two-component configuration of the accretion flow is no longer maintained in this period.
The sharp increase in the soft fluxes indicates that the supply of the Keplerian matter has increased.
This increased supply of Keplerian matter has cooled down the remaining sub-Keplerian matter and only the
Keplerian disk accretion is occurring in this state. According to previous studies (Chakrabarti et al. 2019,
Bhowmick et al. 2021, Chatterjee et al. 2022), accreting matter supplied from the companion starts to accumulate at
the pile-up radius (X_p) during the quiescence phase prior to an outburst, and the longer the duration
of the quiescence phase, the more matter is accumulated. Prior to the outburst in 2019, EXO 1846-031 was
in the quiescence phase for a long time (∼ 34 years). This is very similar to the 2003 outburst of the
source H 1743-322 when it remained inactive for 25 years before the outburst (Chakrabarti et al. 2019).
Similar to the case of H 1743-322, this long period of inactivity of EXO 1846-031 indicates a large amount of matter was
accumulated at the X_p before the outburst. This accumulated matter starts to heat up the accretion flow
and gives rise to a convective instability which increases the viscosity due to the resulting turbulence.
As the viscosity at X_p increased above the critical value, the outburst was triggered.
However, the increase in viscosity was not large enough to release all
of the accumulated matter from the pile-up radius. As the sub-Keplerian matter moves faster, all of it
gets depleted quickly and the source enters the SIMS, which could also be interpreted as the declining state
of the first failed (as the soft state is not reached) outburst. At the end of the SIMS, viscosity at the X_p
increases again, and the remaining Keplerian matter is released triggering the reflare event.
We find evidence of a broad absorption feature in the SIMS around ∼ 10 keV, which we model with
gabs with a line energy of 9.71±0.23 keV. This kind of absorption feature
could indicate the presence of highly ionized, high-velocity winds from the accretion disk (Prabhakar et al. 2023),
which in turn indicates that the radiation pressure in the disk is very high. This large amount of radiation
irradiates the remaining matter at X_p, and an instability builds up again creating a situation similar
to the initial phase before an outburst. This instability again increases the viscosity at X_p and
matter starts to accrete again towards the BH. The majority of the sub-Keplerian matter was accreted
during the first peak, and this new accretion consists largely of highly viscous Keplerian matter.
This Keplerian matter interacts with the remaining small amounts of sub-Keplerian matter and cools it down.
From Fig. 1, we can see that after attaining the second maximum, the soft flux decreases almost linearly
instead of declining exponentially, which is another indication that only the comparatively slow-moving Keplerian
matter is responsible for this reflare. After ∼ MJD 58800, the source became Sun-constrained
and there is no available data for spectral analysis in the period between MJD 58808 and MJD 58904.
Hence the exact date when the source came out of the SS cannot be determined.
After MJD 58905, spectral analysis shows that the shock has moved outward at ∼ 250 r_s with an
increased ARR. This indicates the source has already moved into the declining hard state.
The time taken by the highly viscous matter to reach the BH from the pile-up radius is termed
the viscous timescale (Chakrabarti et al. 2019). Due to its low viscosity, the sub-Keplerian matter
moves toward the BH on the free-fall timescale, whereas the Keplerian matter takes more time to reach the BH due
to its higher viscosity. For this reason, it is generally observed that the halo accretion rate attains
its peak before the disk rate, and the time difference between the disk and halo peaks allows us to infer the
viscous timescale of the source (Debnath et al. 2015, Jana et al. 2016, 2020b). From Fig. 1, we can see
that the 15-50 keV Swift/BAT hard flux attains a peak on MJD 58697, and the 2-4 keV MAXI/GSC soft flux attains a peak
∼ 8 days later on MJD 58705. A similar delay between the peaks of the halo and disk rates is also found (see Fig. 6).
Hence we estimate the viscous timescale of this source to be ∼ 8 days. This large viscous timescale indicates
that X_p is far away from the BH and that the size of the accretion disk is large.
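In terms of the epochs quoted above, this estimate is simply the delay between the hard- and soft-flux peaks,
t_visc ≈ MJD 58705 - MJD 58697 = 8 days.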
The mass of the BH in this source has not yet been measured dynamically, so we try to estimate the mass
from our spectral fits. The spectrum emitted from the accretion processes around a BH is highly dependent
on its mass, as various features of the accretion dynamics, such as the size of the CENBOL, the electron
number density inside it, and the intensity of the soft radiation from the Keplerian disk, depend on the mass. We allow
the M_BH parameter to vary freely during our spectral analysis so that we get a best fitted value of the
parameter from each spectral fit. We find the best fitted values of the parameter to vary in the range
7.1-12.6 M_⊙. This spread in the mass value is a consequence of systematic errors
due to the use of data from multiple instruments with different energy ranges and effective areas.
To reduce such errors, we jointly fit all the spectra of different epochs and estimate the most probable
mass of the source to be 12.43^+0.14_-0.03 M_⊙.
§ SUMMARY AND CONCLUSIONS
We study the spectral and temporal properties of the source EXO 1846-031 during its 2019 outburst after almost 34
years in quiescence. We use MAXI/GSC and Swift/BAT daily lightcurve data to study the evolution of the X-ray
fluxes and hardness ratios during the outburst. We use the FRED profile to fit the outburst flux profile and
estimate the contribution of each flux peak in the total amount of flux released during the outburst.
We use data from multiple instruments (Swift/XRT, MAXI/GSC, NICER/XTI, NuSTAR/FPM) for a broadband spectral study over a period of 222 days.
We perform our spectral study using the physical TCAF model. Based on our spectral analysis, the following conclusions can be drawn:
* After 34 years in quiescence, EXO 1846-031 showed an outburst in 2019 that can be classified as a multipeak outburst.
* Before the start of the outburst, a large amount of matter was accumulated at the pile-up radius X_p (located far away
from the BH), and not all of this matter was accreted during the first outburst peak.
* The broad absorption feature around ∼ 9.7 keV indicates the presence of a fast-moving, highly ionized disk wind in the rising SIMS.
* As the high X-ray flux irradiates the remaining matter at X_p, the viscosity increases and a fresh episode of accretion starts,
triggering the reflare.
* This increased supply of highly viscous Keplerian matter in the reflaring event cools down and washes out the sub-Keplerian matter,
and only Keplerian disk accretion takes place in the HSS.
* Although the source showed two brighter and two dimmer peaks during the outburst, ∼ 78% of the total energy was released in the second
flaring event.
* From spectral fitting with TCAF, the probable mass of the source is estimated to be 12.43^+0.14_-0.03 M_⊙.
* From the delay between the disk and halo rate peaks in the rising phase of the outburst, we estimate the viscous timescale of the source to be ∼ 8 days.
§ ACKNOWLEDGEMENTS
This work made use of Swift/XRT, Swift/BAT, NICER/XTI, and NuSTAR/FPM data supplied by the UK Swift Science Data Centre at the University of Leicester,
and MAXI/GSC data were provided by RIKEN, JAXA, and the MAXI team. S.K.N. acknowledges support from the SVMCM fellowship, the government of West Bengal.
S.K.N. and D.D. acknowledge support from the ISRO-sponsored RESPOND project (ISRO/RES/2/418/17-18) fund. D.D. and K.C. acknowledge visiting research grants
of National Tsing Hua University, Taiwan (NSTC 111-2811-M-007-066). R.B. acknowledges support from the CSIR-UGC fellowship (June-2018, 527223).
H.-K. C. is supported by NSTC of Taiwan under grant 111-2112-M-007-019.
Arnaud, K. A. 1996, in ASP Conf. Ser. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes (San Francisco, CA: ASP), 17
Alabarta K., et al., 2020, MNRAS, 497, 3896
Belloni, T., Homan, J., Casella, P., et al. 2005, A&A, 440, 207
Bhowmick, R., Debnath, D., Chatterjee, K., et al., 2021, ApJ, 910, 138
Casella, P., Belloni, T., & Stella, L., 2005, ApJ, 629, 403
Chakrabarti, S. K., & Titarchuk, L. G. 1995, ApJ, 455, 623
Chakrabarti, S. K., 1997, ApJ, 484, 313
Chakrabarti, S. K., Debnath, D., Nandi, A., et al. 2008, A&A, 489, L41
Chakrabarti, S. K. 2013, in Proc. Conf. Ser., Vol. 8, Astron. Soc. of India, ed. S. Das (Assam, India), 1
Chakrabarti, S. K., Mondal, S., & Debnath, D., 2015, MNRAS, 452, 3451
Chakrabarti, S. K., 2018, in Ruffini R., Jantzen R., Bianchi M., eds, Proc. 14th Marcel Grossman meeting. Study of Accretion processes Around Black Holes becomes Science: Tell Tale Observational Signatures of Two Component Advective Flows. World Scientific Press, Singapore
Chakrabarti, S. K., Debnath, D., & Nagarkoti, S. 2019, AdSpR, 63, 3749
Chatterjee, D., Debnath, D., Chakrabarti, S. K., Mondal, S., Jana, A., 2016, ApJ, 827, 88
Chatterjee, D., Debnath, D., Jana, A., Chakrabarti, S. K., 2019, AP&SS, 364, 14
Chatterjee, K., Debnath, D., et al. 2021, Ap&SS, 366, 63
Chatterjee, K., Debnath, D., Bhowmick, R, Nath, S. K., & Chatterjee, D., 2022, MNRAS, 510, 1128
Chen, W., Shrader, C. R., & Livio, M. 1997, ApJ, 491, 312
Debnath, D., Chakrabarti, S. K., Nandi, A., Mandal, S., 2008, Bull. Astron. Soc. India, 36, 151
Debnath, D., Chakrabarti, S. K., & Nandi, A. 2010, A&A, 520, 98
Debnath, D., Mondal, S., & Chakrabarti, S. K. 2014, MNRAS, 440, L121
Debnath, D., Mondal, S., & Chakrabarti, S. K. 2015, MNRAS, 447, 1984
Debnath, D., Jana, A., Chakrabarti, S. K., & Chatterjee, D. 2017, ApJ, 850, 92
Debnath, D., Chatterjee, K., Chatterjee, D., et al. 2021, MNRAS, 504, 4242
Draghis, P. A., Miller, J. M., Cackett, E. M., et al. 2020, ApJ, 900, 78
Dubus, G., Hameury, J.-M., & Lasota, J.-P. 2001, A&A, 373, 251
Ebisawa, K., Titarchuk, L. G., & Chakrabarti, S. K. 1996, PASJ, 48, 59
Fender, R. P., Belloni, T., Gallo, E. 2004, MNRAS, 355, 1105
Gendreau, K. C., Arzoumanian, Z., & Okajima, T. 2012, Proc. SPIE, 8443, 844313
Hameury, J.-M., Lasota, J.-P., Warner, B., 2000, A&A, 353, 244
Hameury, J. M., 2020, Advances in Space Research, 66, 1004
Homan, J., Belloni, T., 2005, AP&SS, 300, 107
Homan, J., Fridriksson, J. K., Jonker, P. G., et al. 2013, ApJ, 775, 9
Jana, A., Debnath, D., Chakrabarti, S. K., Mondal, S., Molla, A. A., 2016, ApJ, 819, 107
Jana, A., Debnath, D., Chatterjee, D., et al. 2020a, RAA, 20, 28
Jana, A., Debnath, D., Chatterjee, D., et al. 2020b, ApJ, 897, 3
Kocevski, D., Ryde, F., & Liang, E., 2003, ApJ, 596, 389
Kuulkers, E., van der Klis, M., Oosterbroek, T., Asai, K., Dotani, T., van Paradijs, J., Lewin, W. H. G., 1994, A&A, 289, 795
Lasota, J. P. 2001, NewAR, 45, 449
Liu, H.-X., Huang, Y., Xiao, G.-C., et al. 2021, RAA, 21, 070
Matsuoka, M., Kawasaki, K., Ueno, S., et al., 2009, PASJ, 61, 999
McClintock J. E., Remillard R. A., 2006, in Lewin W., van der Klis M., eds, Cambridge, Astrophysical Series 39: Compact Stellar X-ray Sources. Cambridge Univ. Press, Cambridge, p. 157
Miller, J. M., et al., 2018, ApJ, 860, L28
Molla, A. A., Chakrabarti, S. K., Debnath, D., Mondal, S., 2017, ApJ, 834, 88
Mondal, S., Chakrabarti, S. K., & Debnath, D., 2015, ApJ, 798, 57
Mondal, S., Chakrabarti, S. K., Debnath, D., 2016, Ap&SS, 361, 309
Nath, S. K., Debnath, D., Chatterjee, K., Jana, A., Chatterjee, D., & Bhowmick, R., 2023, AdSpR, 71(1), 1045
Negoro, H., Nakajima, M., Sugita, S., et al. 2019, ATel, 12968, 1
Novikov, I. D., & Thorne, K. S. 1973, in Black Holes (Les astres occlus), ed. C. DeWitt & B. DeWitt (New York: Gordon and Breach), 343
Parmar, A. N., & White, N. E. 1985, IAUC, 4051, 1
Parmar, A. N., Angelini, L., Roche, P., & White, N. E. 1993, A&A, 279, 179
Prabhakar, G., Mandal, S., Bhuvana, G. R., Nandi, A., 2023, MNRAS, 520, 4889
Remillard, R. A., Loewenstein, M., Steiner, J. F., et al. 2022, AJ, 163, 130
Ren, X. Q., Wang, Y., Zhang, S. N., et al. 2022, ApJ, 932, 66
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Strohmayer T. E., Nicer Observatory Science Working Group 2020, in American Astronomical Society Meeting Abstracts 235. p. 159.02
Sunyaev, R. A., & Titarchuk, L. G. 1980, ApJ, 86, 121
Tanaka, Y., & Shibazaki, N. 1996, ARA&A, 34, 607
Tetarenko, B. E., Sivakoff, G. R., Heinke, C. O., Gladstone, J. C., 2016, ApJS, 222, 15
Verner, D. A., Ferland, G. J., Korista, K. T., & Yakovlev, D. G. 1996, ApJ, 465, 487
Wang, Y., Ji, L., García, J. A., et al. 2021, ApJ, 906, 11
Williams, D. R. A., Motta, S. E., Fender, R., Miller-Jones, J. C. A., et al. 2022, MNRAS, 517, 2801
Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914
Yan, Z., & Yu, W. 2017, MNRAS, 470, 4298
Zhang, G.-B., et al. 2019, ApJ, 876, 5
|
http://arxiv.org/abs/2307.04728v1 | 20230710173804 | Non-equilibrium attractor for non-linear stochastic dynamics | [
"A. Patrón",
"B. Sánchez-Rey",
"E. Trizac",
"A. Prados"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
[email protected]
Física Teórica, Universidad de Sevilla, Apartado de
Correos 1065, E-41080 Sevilla, Spain
[email protected]
Departamento de Física Aplicada I, E.P.S., Universidad de
Sevilla, Virgen de África 7, E-41011 Sevilla, Spain
LPTMS, Université Paris-Saclay, CNRS, 91405, Orsay, France
Ecole normale supérieure de Lyon, F-69364 Lyon, France
[email protected]
Física Teórica, Universidad de Sevilla, Apartado de
Correos 1065, E-41080 Sevilla, Spain
We study the dynamical behaviour of mesoscopic
systems in contact with a thermal bath, described either via the non-linear Fokker-Planck equation for the probability distribution function at the ensemble level, or via the corresponding non-linear Langevin equation at the trajectory level. Our focus is put on one-dimensional, or d-dimensional isotropic, systems in confining potentials with detailed balance: fluctuation-dissipation thus holds, and the stationary probability distribution has the canonical form at the bath temperature. When quenching the bath temperature to low enough values, a far-from-equilibrium state emerges that
rules the dynamics over a characteristic intermediate timescale. Such a long-lived
state has a Dirac-delta probability distribution function and attracts all solutions over this intermediate timescale, in which the initial conditions are immaterial while the influence of the bath is still negligible. Numerical evidence and qualitative physical arguments suggest that the above picture extends to higher-dimensional systems, with anisotropy and interactions.
Non-equilibrium attractor for non-linear stochastic dynamics
A. Prados
August 12, 2023
============================================================
Stochastic processes are ubiquitous in physics. Systems of interest are usually not isolated but in contact with a much larger environment. What makes their dynamics stochastic is the interaction with the environment (thermal bath): the integration over its degrees of freedom entails that the “force”—understood in a generalised sense—acting on the system becomes effectively random <cit.>. It is in this approach, often called mesoscopic, that the Langevin equation emerges—see Ref. <cit.> for a recent review.
More than a century ago, Langevin initiated the approach that bears his name, when studying Brownian motion <cit.>. This is still an active field of research today: current experimental techniques make it possible to confine the Brownian particles in a potential, the profile of which can be controlled <cit.>. In turn, shaping the potential makes it possible to control the dynamical evolution, allowing for optimising observables such as irreversible work <cit.> or escape times <cit.>, building smooth protocols that connect arbitrary states <cit.>, or precisely designing finite-time computations <cit.>.
The relevance of the Langevin approach is not restricted to Brownian motion; it is employed in a wealth of physical contexts, in which the above general picture for stochastic dynamics applies. Examples abound, including astrophysics <cit.>, polymers <cit.>, laser-cooled atoms <cit.>, particle physics <cit.>, systems with negative temperatures <cit.>, or optical spectroscopy <cit.>, to name just a few. Interestingly, the analysis of experimental “noisy” data makes it possible to infer the underlying stochastic, Langevin-like, dynamical equations, not only in physics but also in neuroscience or biology <cit.>. Besides, since the early days of quantitative economy, related approaches making use of random walks are employed <cit.>.
In the long-time limit, systems evolving under stochastic dynamics typically relax to equilibrium at the bath temperature. The equilibrium state is thus a global attractor, reached from an arbitrary initial condition, of the system dynamics <cit.>. A relevant question is whether it is only the final equilibrium state that is independent of the initial preparation or there appears a previous global non-equilibrium attractor, already independent of the initial preparation. In the latter case, relaxation to equilibrium would proceed in two stages: first, the system would approach the universal non-equilibrium state and, second, this non-equilibrium state would tend to equilibrium.
In this Letter, we show—under general assumptions—that there emerges such a universal non-equilibrium state for a wide class of systems in contact with a thermal bath, when quenched to low enough temperatures. Their dynamics is assumed to be Markovian and described by a non-linear Langevin equation. This state, which we term long-lived non-equilibrium state (LLNES), is a global attractor of the dynamics for an intermediate time scale, over which initial conditions are immaterial but the system is still far from equilibrium. In particular, the probability distribution function (pdf) features a Dirac-delta shape within the LLNES.
For the sake of concreteness, we focus here on the physical, intuitive ideas, that are behind the emergence of the LLNES in one-dimensional—or d-dimensional isotropic—systems; a more formal, mathematical, approach is presented in the supplemental material <cit.>. Therein, we also provide numerical evidence on the existence of the LLNES for a more general situation, d-dimensional confining potentials—including anisotropy and interactions.
Let us now consider a physical system with mesoscopic state described by r≡{x_1,…,x_d}. A prototypical example is a colloidal particle confined in a d-dimensional
potential well. We assume the dynamics of r to be Markovian and governed by the following non-linear Fokker-Planck equation for the pdf P = P(r,t),
∂_t P= ∇_r·[ A(r)P+1/2 B^2(r)∇_rP].
We stress the fact that, in general, not only the “force” A(r) but also the diffusivity B^2(r) are non-linear functions of r. The dynamics of the system is stochastic due to its contact with a thermal bath at temperature T. We assume that detailed balance holds <cit.>, so the fluctuation-dissipation relation
2A(r)=β B^2(r) ∇ H(r)
is verified, H(r) being the system's “Hamiltonian” [In certain contexts, H(r) would not be the Hamiltonian of the system but the function playing its role: e.g., for an overdamped Brownian particle, H(r) would be the confining potential.] and β=(k_BT)^-1. Therefore, the canonical distribution, proportional to e^-β H(r), is the stationary solution of the Fokker-Planck equation <cit.>.
The Markov process r(t) can also be characterised by the Langevin equation at the trajectory level of description. When B depends on r, the noise is said to be “multiplicative” <cit.> and several Langevin formulations correspond to the same Fokker-Planck equation,
ṙ(t)=-[A(r)-(α-1) B(r)∇ B(r)]+B(r)η(t).
Here, η(t) is the unit Gaussian white noise, η_i(t)=0, η_i(t)η_j(t')=δ_ijδ(t-t') and the “multiplicative-noise” parameter α must be chosen in the interval [0,1] <cit.>[For each physical situation, the correct interpretation—typical ones are α=0 for Ito's, α=1/2 for Stratonovich's, α=1 for Klimontovich's—of the Langevin equation with multiplicative noise is dictated by physics, not by mathematics <cit.>. If B is constant, i.e., if the noise is additive, α becomes irrelevant.]. Now we consider a quench to a low temperature: the system is initially prepared at equilibrium at temperature T_i, and put in contact with a thermal bath at a much lower temperature T_f. In the subsequent relaxation to equilibrium at temperature T_f, there is a time regime in which noise is negligible: since H is independent of the temperature, fluctuation-dissipation (<ref>) entails that B^2(r)/|A(r)|∝ T_f≪ T_i. Therefore, terms containing B(r) in Eq. (<ref>) can be neglected and the Langevin equation reduces to the deterministic, noiseless equation
ṙ=-A(r),
which is independent of the parameter α in Eq. (<ref>).
In what follows, we establish the conditions under which, for long enough times, the initial conditions are forgotten for the solution of Eq. (<ref>). To be concrete, a simple but physically relevant situation with radial symmetry, A(r)=A(r)r̂,
r=|r|, r̂=r/r, is considered. The deterministic “force” A must be confining but otherwise arbitrary. This is indeed the case of the prototypical situation of a Brownian particle confined in an isotropic potential U, for which the Langevin equation reads
ṙ=-γ^-1 U'(r)r̂+√(2D) η(t),
where γ and D are the friction and diffusion coefficients,
assumed to be position independent. The identifications H=U,
A=γ^-1U'(r)r̂ and B=√(2D) (thus additive noise) in the general
fluctuation-dissipation relation (<ref>) lead to the
Einstein relation βγ D=1 [Still, this is not the only physical situation, e.g. one may also address the relaxation of the velocity of a colloidal particle due to the nonlinear drag force stemming from its interaction with the background fluid, considered later. Therein, the variable r would stand for the velocity of the particle.]. Note that, since A may change sign as r decreases, the potential may have several minima.
From Eq. (<ref>), the time evolution for one trajectory starting from r_i is implicitly given by
t=∫_r(t)^r_idr'/A(r'), r_i≡ r(t=0).
Assuming that
lim_r→+∞r^-1A(r)=+∞,
i.e. A diverging faster than linearly for large r, we have
t = ∫_r(t)^+∞dr'/A(r')-∫_r_i^+∞dr'/A(r'),
when the confining is stronger than harmonic at large distances. The first (second) term on the rhs of Eq. (<ref>) is the time needed to relax from a very large value of r, much larger than r_i, to the instantaneous position r(t) (r_i).
Let us assume that the initial temperature T_i is much larger than the final one T_f, implying the following timescale separation
t_1≡τ(T_i)≪ t ≪ t_2≡τ(T_f),
where τ(T) is the relaxation time to equilibrium at temperature T. In this way, there appears an intermediate time regime, in which the second term on the rhs of Eq. (<ref>) is negligible against the first while noise is still irrelevant. Over the timescale in Eq. (<ref>), we thus get
r(t)∼ r_(t), ∫_r_(t)^+∞dr/A(r)=t.
The state r_(t) defined in Eq. (<ref>) is a non-equilibrium attractor of the dynamics of the system. We term it long-lived non-equilibrium state (LLNES) [This terminology was already employed in Ref. <cit.> for a specific form of A(r) in the context of non-linear Brownian motion.]. Note that t_1 and t_2 are thus determined by the conditions r_(t_1)=r_i and r_(t_2)=r_f, respectively.
Over this far-from-equilibrium state, independent of initial conditions, the pdf is [Throughout the paper, we use the symbol ∼ with the meaning of “asymptotic to” <cit.>, i.e. f(x)∼ g(x) for x→ x_0 means that lim_x→ x_0f(x)/g(x)=1.] <cit.>
P_(r,t)∼δ (r-r_(t)).
The function r_(t) defined by Eq. (<ref>) depends on the specific form of the function A(r). However, we can introduce a scaled variable c such that its corresponding pdf is universal and time-independent,
c≡r/r(t),
P_(c,t)∼δ (c-1).
We recall that, over the LLNES, r(t)=r_(t). Note that the terms containing B(r) in the Langevin equation (<ref>) eventually drive the system to equilibrium at T_f. In other words, the LLNES is “destroyed” for long enough times, when r_(t)=O(r_(T_f)), i.e. as t=O(t_2).
We now apply the results presented here to two different physical situations.
First, we consider the confined Brownian particle of Eq. (<ref>), particularised for
the nonlinear potential
U(r)=1/2k r^2+1/4λ r^4, λ>0.
The condition λ>0 ensures that the potential is confining: A(r)=ar+br^3, a≡ k/γ, b≡λ/γ. Moreover, Eq. (<ref>) holds and we have the necessary timescale separation. We analyse the case k>0 to start with, in which the “force” A(r)>0 ∀ r 0 and U(r) has only one minimum at the origin. Later, we consider the case k<0, which corresponds to a “lemon-squeezer” potential with multiple minima at r=r_c≡√(|a|/b)=√(|k|/λ).
For k>0, Eq. (<ref>) reduces to
ṙ=-ar (1+r^2/r_c^2)r̂+√(2D) η(t).
In this physical situation, there are two characteristic lengths, r_λ≡ (k_B T/λ)^1/4 and r_k≡ (k_B T/k)^1/2, which—aside from constants—correspondingly give the equilibrium lengths at high and low temperatures. In fact, it is useful for our analysis to introduce a dimensionless temperature T^*=k_B T λ/k^2=(r_k/r_λ)^4, high and low temperatures thus correspond to the regimes T^*≫ 1 and T^*≪ 1, respectively.
Let us analyse the emergence of the LLNES in this specific situation. The particularisation of Eq. (<ref>) gives
2at= ln(1+r_c^2/r^2(t))-ln(1+r_c^2/r_i^2).
For a high enough initial temperature T_i^*≫ 1,
we estimate r_i by r_λ,i=(k_B T_i/λ)^1/4. There appears an intermediate time window over which r_i ≫ r(t) ≫ r_c and initial conditions are forgotten; specifically,
r(t)∼ r_(t)=(2bt)^-1/2, (T_i^*)^-1/2≪ 2at≪ 1 .
Note that r_(t) only depends on b=λ/γ, i.e. only on the behaviour of the potential at large distances.
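As an explicit intermediate step (our own rewriting of the limit, not an additional result), note that for r_i ≫ r(t) ≫ r_c both logarithms above can be expanded using ln(1+x)≈ x for x≪ 1, so that
2at ≈ r_c^2/r^2(t) - r_c^2/r_i^2 ≈ r_c^2/r^2(t) = a/[b r^2(t)],
which immediately gives r(t)≈ (2bt)^-1/2, the intermediate-time form quoted above.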
In order to derive Eq. (<ref>), it is only necessary to consider a high enough initial temperature; the role of the final temperature is to (possibly) limit the timescale over which the LLNES is observed. Noise is negligible as long as r_(t) is much larger than the equilibrium value at the final temperature, r_k,f=(k_B T_f/k)^1/2, which gives the condition 2at ≪ (T_f^*)^-1. If T_f^*=O(1) or larger, this restricts the LLNES in Eq. (<ref>) to the time window (T_i^*)^-1/2≪ 2at ≪ (T_f^*)^-1. If T_f^*≪ 1, the LLNES extends to longer times such that 2at=O(1), r(t) becomes of the order of r_c and
r_(t)=r_c (e^2at-1)^-1/2.
Figure <ref> shows a set of stochastic trajectories for which the behaviours in Eqs. (<ref>) and (<ref>) are observed.
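A minimal numerical sketch that reproduces this behaviour is given below, written in C with our own illustrative parameters (not those used for the figure). It integrates the three-dimensional Langevin equation for the quartic potential with an Euler-Maruyama scheme, starting from a hot initial condition and using a very small noise amplitude to mimic a deep quench, and compares |r(t)| with the two analytical forms (2bt)^-1/2 and r_c(e^2at-1)^-1/2.

/* Sketch: Euler-Maruyama integration of dr/dt = -a r (1 + r^2/r_c^2) r_hat
   + sqrt(2D) eta(t) in d=3, with a deep quench (small D) and a hot initial
   condition. Compile with the maths library, e.g. gcc llnes.c -lm. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static const double TWOPI = 6.283185307179586;

static double gauss(void){                        /* Box-Muller, unit variance */
    double u1 = (rand()+1.0)/(RAND_MAX+2.0);
    double u2 = (rand()+1.0)/(RAND_MAX+2.0);
    return sqrt(-2.0*log(u1))*cos(TWOPI*u2);
}

int main(void){
    const double a = 1.0, rc = 1.0, D = 1e-8;     /* b = a/rc^2 = 1            */
    const double b = a/(rc*rc), dt = 1e-6;
    double x[3] = {50.0, 0.0, 0.0};               /* hot initial condition     */
    for (long n = 1; n <= 500000; n++) {
        double r2 = x[0]*x[0] + x[1]*x[1] + x[2]*x[2];
        double drift = -a*(1.0 + r2/(rc*rc));     /* radial force divided by r */
        for (int i = 0; i < 3; i++)
            x[i] += drift*x[i]*dt + sqrt(2.0*D*dt)*gauss();
        if (n % 50000 == 0) {
            double t = n*dt, r = sqrt(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
            printf("t=%.2f r=%.3f  (2bt)^-1/2=%.3f  rc(e^2at-1)^-1/2=%.3f\n",
                   t, r, 1.0/sqrt(2.0*b*t), rc/sqrt(exp(2.0*a*t)-1.0));
        }
    }
    return 0;
}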
We now study the case k<0, the “lemon-squeezer” potential with multiple minima at r=r_c [In the one-dimensional situation, the potential would be bistable, with two symmetric minima.]. The LLNES in Eq. (<ref>), which only depends on the details of the potential at large r, is still present for T_i^*≫ 1; it is thus independent of the presence of other minima. Also, the LLNES extends to longer times if T_f^*≪ 1, but it is no longer given by Eq. (<ref>); instead we have r_(t)=r_c(1-e^-2at)^-1/2. The system reaches equilibrium at r_c over this regime, with small thermal fluctuations <cit.>.
Now we consider another relevant physical system: an isotropic fluid
with non-linear drag force. Specifically, we investigate the stochastic evolution of N particles undergoing binary collisions and immersed in a background fluid acting as a thermal bath. For dilute enough systems, the velocity pdf P(v,t) obeys the Boltzmann-Fokker-Planck equation <cit.>
∂_t P=∇_v·[ζ(v)(v+k_B T/m∇_v)P]+J[P,P],
where ζ(v) stands for the velocity-dependent drag
coefficient and J[P,P] is the Boltzmann collision term, which is bilinear in P <cit.>. For low velocities, the drag force is usually linear in v, lim_v→ 0ζ(v)= ζ_0. For large velocities, the drag force may become non-linear in v: the dimensionless drag coefficient ζ^*≡ζ/ζ_0 thus depends on v, as is the case then the masses of the Brownian and background fluid particles are comparable <cit.>. If collisions among particles are elastic, this system tends to the canonical distribution with H(v) = mv^2/2, provided that A and B are such that Eq. (<ref>) holds. Since A(v) = ζ(v)v, we need B^2(v)=2 ζ(v) k_B T/m; noise is thus multiplicative.
The kinetic temperature is T_(t)≡ mv^2(t)/(dk_B), which equals the bath temperature at equilibrium. Initially, the system is equilibrated at T_i, thus T_(t=0)=T_i, and the bath temperature is suddenly quenched to T_f≪ T_i. To be concrete, we restrict ourselves to drag coefficients with algebraic behaviour for large v, ζ^*(v)∼γ (v/v_,f)^n, where γ is the non-linearity parameter and v_,f is the thermal velocity at T_f, v_,f≡ (2 k_B T_f/m)^1/2. If n>1, there appears a timescale over which the non-linear drag dominates and both noise and collisions—even if they are inelastic—are negligible. Over this wide time window, initial conditions are forgotten and the LLNES emerges <cit.>,
v_(t)/v_,f=(γζ_0n t)^-1/n, (T_f/T_i)^n/2≪ n γζ_0 t ≪ 1 .
in strong analogy with Eq. (<ref>). The kinetic temperature thus shows a slow non-exponential, algebraic, decay as T_(t)∝ t^-2/n, which rules the emergence of memory effects such as the Kovacs and Mpemba effects <cit.>.
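For completeness, the algebraic decay of the kinetic temperature is a one-line consequence of its definition and of the LLNES form of the velocity,
T_(t) = m v^2(t)/(d k_B) ∝ v_^2(t) ∝ (t^-1/n)^2 = t^-2/n.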
Figure <ref> shows the pdf of the scaled variable c,
for the two specific examples of physical systems described above. The delta-peak structure is clearly observed, for one-, two-, and three-dimensional systems. For the non-linear fluid, the data shown corresponds to n=2.
In this Letter, we have analysed the dynamical behaviour of a wide class of physical systems, described by a non-linear Langevin (or Fokker-Planck) equation with detailed balance. When quenched to a low enough temperature, all these systems reach a universal long-lived non-equilibrium state, regardless of initial conditions. This state, which we have termed LLNES, is characterised by a Dirac-delta pdf.
There are two main hypotheses for the emergence of the LLNES: (i) the non-linearity of the “force” in the Langevin equation and (ii) the separation
of the initial temperature T_i from the final one T_f,
T_i≫ T_f. A separation of time scales ensues, with the
LLNES appearing in the intermediate window, where
initial conditions are irrelevant and noise is negligible.
Under these quite general assumptions, our results are independent of both the nature of the noise (either additive or multiplicative) and the dimensionality of the system, as shown in Fig. <ref>.
For the sake of simplicity, we have restricted the discussion to isotropic situations, in which our work proves the existence of the LLNES. It is always the form of the “force” at large distances that controls the emergence and shape of the LLNES, as illustrated by our analyses of the quartic potential and the non-linear fluid above. The effective reduction to one degree of freedom stemming from isotropy has allowed us to obtain analytical results for the emergence of the LLNES; see also <cit.>. Numerical evidence and qualitative physical arguments hint at the existence of the LLNES for more complex scenarios with several degrees of freedom, including anisotropy and interactions <cit.>. Rigorously proving the conditions under which the LLNES emerges in these more complex situations is a highly non-trivial problem that lies beyond the goals of the present work.
Quasi-elastic one-dimensional granular systems have been shown to display Dirac-delta pdfs <cit.> resembling that of the LLNES. This result was derived from the inelastic Boltzmann equation, and therefore it cannot be considered as a particular case of the general result derived in this Letter, which was obtained within the Langevin framework. Still, the similarity of the observed pdfs suggests that it is worth investigating possible connections between these two intrinsically different physical situations.
Also, testing the emergence of the LLNES in real experiments is an interesting prospect for future work. In particular, it seems worth exploring the relevance of the LLNES to control the time evolution of mesoscopic systems, like biomolecules or memory devices. In this regard, it must be stressed that the two specific examples considered here describe actual physical systems. Current techniques make it possible to control the shape of the potential confining a colloidal particle immersed in a fluid <cit.>, and the Langevin equation for the velocity with non-linear drag has been successfully employed to describe mixtures of ultracold atoms <cit.>.
A. Patrón, B. Sánchez-Rey and A. Prados acknowledge financial support from Grant PID2021-122588NB-I00 funded by MCIN/AEI/10.13039/501100011033/ and by “ERDF A way of making Europe”. All the authors acknowledge financial support from Grant ProyExcel_00796 funded by Junta de Andalucía's PAIDI 2020 programme. A. Patrón acknowledges support from the FPU programme through Grant FPU2019-4110, and also additional support from the FPU programe through Grant EST22/00346, which funded his research stay at Univ. Paris-Saclay during autumn 2022. A. Prados also acknowledges the hospitality of LPTMS, which funded his stay at Univ. Paris-Saclay in June 2022.
|
http://arxiv.org/abs/2307.04891v1 | 20230710202544 | Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6 | [
"Roy T. Forestano",
"Konstantin T. Matchev",
"Katia Matcheva",
"Alexander Roman",
"Eyup B. Unlu",
"Sarunas Verner"
] | hep-th | [
"hep-th",
"cs.LG",
"hep-ph",
"math-ph",
"math.GR",
"math.MP"
] |
|
http://arxiv.org/abs/2307.05683v2 | 20230711180005 | WHFast512: A symplectic N-body integrator for planetary systems optimized with AVX512 instructions | [
"Pejvak Javaheri",
"Hanno Rein",
"Dan Tamayo"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.IM",
"physics.comp-ph"
] |
WHFast512 - N-body integrator optimized with AVX512 instruction
Javaheri, Rein & Tamayo
Department of Physical and Environmental Sciences, University of Toronto at Scarborough, Toronto, Ontario M1C 1A4, Canada
Department of Physical and Environmental Sciences, University of Toronto at Scarborough, Toronto, Ontario M1C 1A4, Canada
Department of Astronomy and Astrophysics, University of Toronto, Toronto, Ontario, M5S 3H4, Canada
Department of Physics, Harvey Mudd College, 301 Platt Blvd., Claremont 91711, USA
We describe the implementation of the symplectic N-body integrator WHFast512 using Single Instruction Multiple Data (SIMD) parallelism and 512-bit Advanced Vector Extensions (AVX512).
We are able to speed up integrations of planetary systems by up to 4.7x compared to the non-vectorized version of WHFast.
WHFast512 can integrate the Solar System with 8 planets for 5 billion years in less than 1.4 days.
To our knowledge, this makes WHFast512 the fastest direct N-body integrator for systems of this kind.
As an example, we present an ensemble of 40-Gyr integrations of the Solar System.
Ignoring the Sun's post-main sequence evolution, we show that the instability rate is well captured by a diffusion model.
WHFast512 is freely available within the REBOUND package.
WHFast512: A symplectic N-body integrator for planetary systems optimized with AVX512 instructions
§ INTRODUCTION
Numerical integrations of the Solar System have a long history <cit.>.
With the discovery of extrasolar planetary systems, the interest in fast and efficient numerical methods to study orbital evolution has further increased.
Most direct N-body simulations use symplectic integrators that make use of the so-called Wisdom-Holman splitting <cit.>.
One frequently used implementation of a Wisdom-Holman integrator is WHFast <cit.>, which is freely available within the REBOUND package <cit.>.
As the name suggests, WHFast was developed with efficiency being one of the goals.
WHFast and most other N-body integrators used for long-term integrations of planetary systems run on CPUs.
Other astrophysical simulations have been very successful in achieving significant speed-ups using Graphics Processing Units (GPUs).
This not only includes cosmology or fluid dynamics codes but also N-body integrators <cit.>.
The reason why long-term integrations of planetary systems are typically not run on GPUs is that they are inherently sequential: timesteps are performed one after the other and during each timestep each body depends on the positions of all the other bodies.
In simulations with a small number of particles (e.g., 1 star and 8 planets), the time required to perform each timestep is short, typically of the order of microseconds.
Almost any attempt to make use of the parallel computing power of a GPU to accelerate a single timestep is going to experience a slowdown rather than a speedup because of the communication overhead that occurs even on shared memory systems[See <cit.> for an alternative approach which does provide a significant speedup of 50x but increases the computational cost by a factor of 1000.].
CPUs typically have higher clock rates than GPUs and can therefore finish a simulation faster.
Because one needs to resolve at least the orbital timescale of the shortest-period planet with several timesteps, the total number of timesteps in a long-term integration can be very large, ≳ 10^12.
As a result, a single integration might run for weeks or months.
Of course, GPU or CPU-based computing clusters allow for many independent integrations to be performed in parallel.
But this leaves us in an unusual situation where someone might have access to a vast amount of computing power, but without a way to use those resources to accelerate a single integration.
In this paper, we describe a way to speed up small N-body simulations with 512-bit wide Advanced Vector Extensions (AVX512).
Our new integrator can speed up both a single integration and an ensemble of integrations by almost a factor of 5.
We first provide a short introduction to AVX512 in Section <ref>.
We then give a review of Wisdom-Holman integrators and the specific Hamiltonian splitting we're using, before providing a detailed description of the implementation of WHFast512 in Section <ref>.
Performance tests are described in Section <ref>.
As an example, we present the results of an ensemble of 40 Gyr integrations of the Solar System in Section <ref>.
We conclude in Section <ref> with some ideas for further optimizations.
§ ADVANCED VECTOR EXTENSIONS
AVX512 instructions are an extension to the x86 CPU instruction set.
They are available on some, mostly high-end, Intel and AMD CPUs.
They allow programs to pack eight 64-bit double-precision floating-point numbers into 512-bit vectors and to perform operations on these vectors.
One AVX512 instruction can perform as many double floating point operations as eight regular instructions.
In an ideal scenario we might achieve a speed-up of 8x.
Consider the task of adding an eight-element vector (a_0,…, a_7) to another vector (b_0,…, b_7). This can be done with a single AVX512 addition instruction:
[Illustration: the two input vectors (a_0,…,a_7) and (b_0,…,b_7) are added element-wise, yielding (a_0+b_0,…,a_7+b_7).]
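In intrinsics form, this element-wise addition might look as follows (a minimal sketch; the wrapper function is ours, while _mm512_loadu_pd, _mm512_add_pd and _mm512_storeu_pd are standard AVX512F intrinsics):

#include <immintrin.h>

// Add two eight-element double-precision vectors with one AVX512 add.
void add8(const double a[8], const double b[8], double out[8]) {
    __m512d va = _mm512_loadu_pd(a);               // a_0 ... a_7
    __m512d vb = _mm512_loadu_pd(b);               // b_0 ... b_7
    _mm512_storeu_pd(out, _mm512_add_pd(va, vb));  // out_i = a_i + b_i
}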
The example above operates on the two input vectors vertically.
There are some instructions such as permutations that can operate on vectors horizontally.
For example, an AVX512 permute instruction can be used to shuffle neighbouring elements in a vector (a_0,…,a_7):
[Illustration: neighbouring elements of (a_0,…,a_7) are swapped pairwise, giving (a_1,a_0,a_3,a_2,a_5,a_4,a_7,a_6). The brackets in the original figure mark the four 128-bit lanes and the two 256-bit lanes of the 512-bit vector.]
Depending on the specific CPU architecture, some horizontal operations involve a performance penalty when lanes are crossed, especially when crossing from one 256-bit lane to another.
In the above example, neither 128-bit nor 256-bit lane boundaries are crossed.
This is important because in an N-body simulation every planet interacts with every other planet, so at some point we need to transfer information across lanes.
As we describe in more detail below, we try to minimize these lane crossings for the Kepler and jump steps in WHFast512.
We are specifically interested in creating a fast integrator for systems with 8 planets.
With this in mind, it makes sense to distribute particle data in 512-bit vectors, each containing one element of each planet.
We have one vector for the masses of each planet, one for the x-coordinate of each planet, one for the y-coordinates, and so on.
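As a concrete picture of this layout, the following structure-of-arrays sketch packs each quantity into one 512-bit vector (the type and field names are ours for illustration; the actual REBOUND data structures may differ):

#include <immintrin.h>

// Lane i of every vector belongs to planet i.
typedef struct {
    __m512d m;          // planet masses
    __m512d x, y, z;    // heliocentric positions Q_i
    __m512d vx, vy, vz; // velocities (canonical momenta P_i divided by m_i)
} planets512;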
§ WHFAST512
In this section, we introduce the WHFast512 algorithm.
We provide a short general introduction to Wisdom-Holman integrators before discussing WHFast512 and its implementation in detail.
§.§ Wisdom-Holman integrator
The main idea behind the Wisdom-Holman (WH) integrator is to split the complicated orbital motion of planets into smaller, more tractable parts <cit.>.
As long as planets remain well separated, the gravitational interactions between a planet and the star dominate the motion and planet-planet interactions can be considered a perturbation.
We can stitch together an approximation to the full solution by frequently switching back and forth between propagating planets on Keplerian ellipses around the star (ignoring planet-planet interactions) and applying kicks due to other planets (ignoring the motion around the star).
Wisdom-Holman integrators can be used with a variety of different coordinate systems.
The choice of the coordinate system affects the choice of Hamiltonian splitting.
For example, the WHFast implementation in REBOUND supports Jacobi coordinates, democratic heliocentric coordinates <cit.>, and the WHDS splitting <cit.>.
There are advantages and disadvantages to each splitting.
See <cit.> for an overview.
For reasons we will explain in more detail below, we here use the Wisdom-Holman integrator with democratic heliocentric coordinates (WHD).
In these coordinates, the position of particle i with mass m_i is 𝐐_i and is measured relative to the star.
The corresponding canonical momentum 𝐏_i is measured relative to the barycentre.
𝐏_0 and M are the total momentum and total mass of the system respectively.
Following <cit.>, we write the N-body Hamiltonian in democratic heliocentric coordinates and split it into four parts:
H = H_0 + H_K + H_I + H_J, with
H_0 = P_0^2/(2M),
H_K = ∑_i=1^N [ P_i^2/(2m_i) - G m_0 m_i/Q_i ],
H_I = -∑_i=1^N ∑_j=i+1^N G m_i m_j/Q_ij,
H_J = 1/(2m_0) ( ∑_i=1^N 𝐏_i )^2,
where 𝐐_ij = 𝐐_i - 𝐐_j.
H_0 describes the motion of the barycenter.
Because there are no external forces, the barycenter travels along a straight line with constant velocity.
This part is therefore trivial to solve.
H_K describes the Keplerian motion of each planet.
H_I describes the planet-planet interactions.
Because we are using democratic heliocentric coordinates, we have an additional term, H_J, the so-called jump step.
During this step, the positions of all planets change by the same amount, but the momenta stay constant (hence the name jump step).
One full Wisdom-Holman timestep of length dt in operator notation is then given by
Ĥ_K(dt/2) · Ĥ_J(dt/2) · Ĥ_I(dt) · Ĥ_J(dt/2) · Ĥ_K(dt/2),
where we are ignoring Ĥ_0 as it commutes with all other operators.
Unless we require an output, the first and last Kepler steps (which are the most computationally expensive) can be combined.
Furthermore, the jump step commutes with the interaction step if no general relativistic corrections are included (see below). If that is the case, then the two jump steps can be combined.
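In code, the drift-kick-drift structure of one timestep can be sketched as follows (the function and type names here are hypothetical, not REBOUND's actual API):

// One drift-kick-drift timestep corresponding to the operator splitting above.
typedef struct planets512 planets512;            // particle data, e.g. the layout sketched in Section 2
void kepler_step(planets512 *p, double dt);      // evolves H_K
void jump_step(planets512 *p, double dt);        // evolves H_J
void interaction_step(planets512 *p, double dt); // evolves H_I

void whfast512_dkd_step(planets512 *p, double dt) {
    kepler_step(p, dt/2.);
    jump_step(p, dt/2.);
    interaction_step(p, dt);
    jump_step(p, dt/2.);
    kepler_step(p, dt/2.);
    // H_0 (the barycentre drift) commutes with all other operators and is omitted here.
}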
The above splitting scheme results in a second-order integrator where the energy error scales as O(ϵ dt^2) where ϵ is of the order of m_i/M.
Higher order generalizations of the WH integrator exist but are not considered in this paper <cit.>.
The individual steps are described in detail in the following sections.
§.§ Kepler Step
During the Kepler step, particles are moved along their Keplerian orbits.
These Keplerian orbits are independent of each other because no planet-planet interactions occur during this step.
It is therefore ideal for parallelizing under a single instruction multiple data (SIMD) paradigm.
Our new Kepler step is based on WHFast but with several modifications and limitations.
The main reason for these modifications is that we want to avoid any conditional branching during the step to keep the calculations of all eight planets in sync.
For each planet, we need to solve the equations of motion corresponding to the Hamiltonian
H_K,i = P_i^2/(2m_i) - G m_0 m_i/Q_i.
To obtain a solution we need to first solve Kepler's equation.
We do this iteratively using a combination of Halley's method and Newton's method.
In the original version of WHFast, the iteration is stopped when the solution has converged to machine precision or begins to oscillate.
Here, we use a fixed number of two Halley steps followed by two Newton steps.
For low eccentricity elliptical orbits and small timesteps, this is sufficient to achieve machine precision.
However, the solution might not converge for highly eccentric or hyperbolic orbits.
Furthermore, when the timestep becomes comparable to the orbital period (or more precisely the pericentre timescale, see below), then the solution might also not converge.
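To illustrate the branch-free, fixed-iteration idea in its simplest form, consider the following schematic solver for the classical Kepler equation E - e sin E = M (for illustration only: the actual WHFast512 solver works in universal variables, as described below, and combines Halley and Newton steps as described above):

#include <math.h>

// Fixed number of Newton iterations, no convergence test and no branching,
// so all eight planets packed in a vector stay in lock-step.
double kepler_fixed_newton(double M, double e) {
    double E = M;  // initial guess, good for small eccentricities
    for (int i = 0; i < 4; i++) {
        double f  = E - e*sin(E) - M;
        double fp = 1.0 - e*cos(E);
        E -= f/fp;
    }
    return E;
}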
Instead of cosine and sine functions, the Kepler solver in WHFast uses Stumpff and Stiefel functions.
These are evaluated using a Taylor series expansion and a precomputed lookup table.
In the standard version of WHFast, it is guaranteed that the series converges by repeatedly applying the quarter angle formula.
For WHFast512, we do not guarantee this because it would require branching. Instead, we assume that the timestep is always small compared to the orbital period <cit.>.
In the original WHFast version, the number of terms in the Taylor series expansion of the Stumpff function is fixed to 6.
Here, we use only 4 terms during the first two steps of the iteration (Halley's method).
During the last two steps in the iteration (Newton's method), we use 8 terms.
Our tests (see below) show that a less accurate result in the first two iterations is good enough to improve the initial guess.
The last iterations can then further increase the accuracy and reach machine precision.
For details on how we arrived at this specific combination, see Appendix <ref>.
We now show that all modifications mentioned above do not affect the accuracy of the Kepler solver as long as we know that the orbital periods in the planetary system don't become too short and that the orbits don't become too eccentric during the integration for a given timestep.
To test where our Kepler step fails, we run integrations of a massless test particle with varying semi-major axis and eccentricity.
The semi-major axis should be conserved exactly in this test problem[A two-body problem where both particles have a finite mass does not conserve the semi-major axis exactly because of our choice of Hamiltonian splitting which includes a jump term.].
We plot the magnitude and sign of the relative error in semi-major axis after ten years in Fig. <ref>.
A 5-day timestep is used in these integrations and the initial phase of the test particle is random.
On the dashed blue line, the pericenter timescale, T_f, defined as
T_f = (2π/n) · (1-e)^2/√(1-e^2),
where n and e are the planet's mean motion and eccentricity, respectively,
is equal to the timestep.
On the red dashed line, T_f is equal to two timesteps.
Note that T_f is simply the orbital period for circular orbits. For eccentric orbits it provides a timescale that describes how long a particle spends near pericenter. <cit.> calls this the effective period at pericenter.
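As a rough worked example: a test particle at Mercury's semi-major axis has an orbital period 2π/n ≈ 88 days, so at e = 0.7 the definition above gives T_f ≈ 88 × (1-0.7)^2/√(1-0.7^2) ≈ 11 days, i.e. just over two 5-day timesteps.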
One can see that with a 5-day timestep, our Kepler solver achieves machine precision accuracy for test particles with Mercury's semi-major axis up to eccentricities of ∼ 0.7.
Furthermore, note that the sign of the error is random whenever the result is converged, indicating that the solver is unbiased.
We therefore don't expect a drift in the semi-major axis over long timescales as long as Mercury's semi-major axis doesn't shrink significantly and we always resolve T_f with at least two timesteps.
Note that this discussion so far only applies to the convergence of the Kepler solver - a smaller timestep might be needed to accurately integrate the system (see the discussion in Section <ref>).
§.§ Interaction Step
In the interaction step, we solve the equations of motion corresponding to the Hamiltonian H_I.
For each planet, the solution involves calculating the sum of the accelerations from all other planets, specifically
𝐚_i = -∑_j=1, j≠ i^N (G m_j/Q_ij^3) 𝐐_ij .
Once the accelerations are calculated, we only need to multiply them with the timestep and then add the result to the velocities.
Nevertheless, the interaction step is more complicated to implement because for every particle we need information from all other particles to calculate the acceleration.
Thus, this step does not follow a simple SIMD paradigm like the Kepler step.
We restrict our discussion to the case of one star and eight planets.
There are a total of 8·(8-1) acceleration terms to calculate.
Because of Newton's third law of motion, we can reduce the number of square root and inverse operations by half to 8·(8-1)/2=28.
In addition to the square root and inverse operations, we also need to perform other operations, for example to calculate 𝐐_ij.
However, the square root and inverse calculations are by far the most time consuming operations[An add, multiply, or fused multiply-add operation takes 4 clock cycles whereas a square root or inverse operation takes 31 or 23 clock cycles respectively on an Intel Skylake CPU.].
In the standard version of WHFast, the 28 square roots and inverse operations are calculated sequentially.
Using AVX512, we can calculate up to eight double-precision square roots or inverse calculations with one instruction.
For eight planets, we thus only need four square-root instructions to calculate all accelerations.
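As an illustration of one such batch (a hedged sketch with our own function and variable names, not the actual WHFast512 code), the prefactors G m_j/Q_ij^3 for eight planet pairs can be obtained with a single vector square root and a single vector division:

#include <immintrin.h>

// dx, dy, dz: the eight pairwise separations Q_ij; m: the corresponding masses m_j (G = 1).
static inline __m512d pair_prefactor(__m512d dx, __m512d dy, __m512d dz, __m512d m) {
    __m512d r2 = _mm512_fmadd_pd(dx, dx,
                 _mm512_fmadd_pd(dy, dy, _mm512_mul_pd(dz, dz)));  // squared distances
    __m512d r  = _mm512_sqrt_pd(r2);                               // one vector sqrt for 8 pairs
    return _mm512_div_pd(m, _mm512_mul_pd(r2, r));                 // m_j / |Q_ij|^3
}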
We now discuss the specific order in which we calculate the pairwise accelerations.
Although different choices are possible, the key here is to minimize the number of operations, and specifically to avoid frequent shuffling of particle data across 256-bit lanes (see Section <ref>).
The following illustration shows the first pairs of accelerations that we calculate.
We have two vectors of particle data to work on, one in which the order is simply sequential and another where the particles have been shuffled.
Specifically, the interactions that we calculate first are between planet 0 and planet 3, planet 1 and planet 2, and so on.
The double-headed arrows, e.g. 0 ↔ 3, indicate that we use Newton's third law to calculate the accelerations for both particle 0 and 3 using one square root operation.
The matrix on the right shows in black all 16 accelerations that we calculate in this step.
[Illustration: first interaction step. One vector holds the planets in sequential order (0,…,7), the other in the shuffled order (3,2,0,1,7,6,4,5). Double-headed arrows (e.g. 0 ↔ 3) indicate pairs for which Newton's third law is used, and the 8×8 matrix on the right marks in black the 16 acceleration terms computed in this step.]
Next, we shuffle the second of the two vectors in a way that lets us calculate the remaining accelerations in the top left and bottom right quadrants of the matrix.
Note that we do not need to calculate the self-interactions on the diagonal.
Also note that in this step we are calculating the acceleration between each particle pair twice.
The interaction 0→ 1 could be used to calculate 1→ 0 but we're calculating it separately.
Since it takes the same amount of time to calculate eight inverse square roots as it takes to calculate four, this is faster than reusing parts of the calculation, which would require another shuffling operation.
The matrix again shows in black the interactions calculated in this step, and in gray those in the previous step.
[Illustration: second interaction step. The shuffled vector now holds the order (1,0,3,2,5,4,7,6). The matrix marks in black the accelerations computed in this step and in gray those from the previous step, completing the off-diagonal entries of the top-left and bottom-right quadrants.]
So far, all the shuffling happened within 256-bit lanes.
However, the shuffling needed for the next step requires us to cross 256-bit lane boundaries.
The following illustrations show this third step and all the accelerations calculated so far.
[Illustration: third interaction step. The shuffled vector holds the order (4,5,6,7,1,2,3,0), which requires crossing 256-bit lane boundaries. The matrix marks in black the accelerations computed in this step and in gray all those computed previously.]
We need one more shuffling of particle data - this time again within 256-bit lanes - to complete the calculation of all acceleration terms.
[Illustration: fourth interaction step. The shuffled vector holds the order (5,6,7,4,2,3,0,1). After this step, all off-diagonal entries of the 8×8 interaction matrix have been computed.]
Now all the time-consuming inverse square-root calculations have been completed.
However, we need one more shuffling operation (across 256-bit lane boundaries) to combine the accelerations calculated in the first two steps with those in the last two steps.
In addition to the accelerations in Eq. <ref>, we also implement an additional acceleration term coming from a 1/r^2 potential centered on the star.
This can be used to mimic the effects of general relativistic precession <cit.>.
Because we are working in democratic heliocentric coordinates, the acceleration felt by the planets is easy to calculate in a SIMD fashion without any lane crossings.
However, we also need to include the back reaction onto the star originating from the 1/r^2 potential.
For this part, we need to scale the accelerations from all planets, add them, and then subtract the result from the direct contributions.
This part therefore requires a lane crossing in the same way as the jump step does (see next section).
§.§ Jump Step
The so-called jump step in democratic heliocentric coordinates modifies the positions of all planets.
Specifically, the equations of motions from H_J require us to calculate the sum of the momenta of all planets (but not the star), scale it by some constant factor, and then add it to all planets:
Δ𝐐_i = (dt/M_0) ∑_j=1^N 𝐏_j,
where M_0 is the mass of the star.
Note that each planet's change in position depends on the sum of the momenta of all planets.
By carefully arranging shuffle and add operations, we can implement the jump step with only four additions and three shuffles per coordinate, of which only one shuffle involves a lane crossing across 256-bit boundaries.
Let us assume that some vector p initially contains one Cartesian component of the momenta of all eight planets, m_0· v_0, m_1· v_1, …, m_7· v_7.
We first swap neighbouring pairs of the vector and then add the original vector to the shuffled vector.
The process is repeated two more times, with the second step crossing 128-bit boundaries, and with the last shuffle crossing 256-bit boundaries.
The algorithm is illustrated below:
[Illustration: horizontal reduction of the momentum vector. In the first step, neighbouring elements are swapped and added, so that each element holds a pairwise sum (0+1, 0+1, 2+3, 2+3, …). In the second step, the 128-bit halves within each 256-bit lane are swapped and added, giving 0+1+2+3 in the first four elements and 4+5+6+7 in the last four. In the final step, the two 256-bit halves are swapped and added, leaving the full sum 0+1+⋯+7 in every element.]
In the end, we have the element-wise sum of p in each vector element.
Using Intel's intrinsic functions[For a list of available intrinsic functions see e.g. <https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html>.], this algorithm can be written compactly.
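A minimal sketch of this reduction is given below (our reconstruction rather than the exact published listing; the permutation immediates are assumptions chosen to reproduce the three shuffles described above):

#include <immintrin.h>

// After three shuffle+add steps every element holds p_0 + p_1 + ... + p_7.
static inline __m512d horizontal_sum(__m512d p) {
    p = _mm512_add_pd(p, _mm512_permute_pd(p, 0x55));                          // swap neighbours (within 128-bit lanes)
    p = _mm512_add_pd(p, _mm512_permutex_pd(p, _MM_SHUFFLE(1, 0, 3, 2)));      // swap 128-bit halves (within 256-bit lanes)
    p = _mm512_add_pd(p, _mm512_shuffle_f64x2(p, p, _MM_SHUFFLE(1, 0, 3, 2))); // swap 256-bit halves
    return p;
}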
During the jump step, the star does not move (there is no back reaction).
§.§ Other implementation details
Throughout WHFast512, we make extensive use of fused multiply-add (FMA) instructions.
As the name suggests, one FMA instruction performs one multiplication and one addition in a single step and with a single rounding at the end.
This not only leads to an increase in performance but also in accuracy.
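For instance (a sketch with our own function name), the interaction kick v ← v + a·dt can be applied with one FMA per coordinate:

#include <immintrin.h>

// v + a*dt computed with a single rounding per element.
static inline __m512d apply_kick(__m512d v, __m512d a, __m512d dt) {
    return _mm512_fmadd_pd(a, dt, v);
}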
WHFast512 works with democratic heliocentric coordinates but not with Jacobi coordinates.
This has the advantage that we do not need to convert back and forth to inertial coordinates during each timestep.
With Jacobi coordinates, coordinate transformations would be required twice per timestep (we need the relative distance between particles in the interaction step which is easier to do in inertial coordinates, but Jacobi coordinates are required for the Kepler step).
Jacobi coordinate transformations are difficult to parallelize in a SIMD fashion because every particle's coordinates depend on the coordinates of all interior particles.
For systems with 8 planets this leads to a significant bottleneck[Jacobi coordinates would be more feasible for systems with 2 or 4 planets.].
Without loss of generality, we make several further assumptions for WHFast512 to avoid unnecessary operations.
* We assume a fixed timestep. If it becomes necessary to change the timestep at some point during an integration, the user needs to manually reset the integrator.
* The timestep needs to be a positive number. To integrate backwards in time the user can flip the sign of the velocities.
* We work in units in which the gravitational constant G is equal to 1. Any N-body system can always be rescaled to these units.
* We assume that star and planet masses do not change during the integration. If they do, the user needs to manually reset the integrator after each change.
* We always combine the two drift steps in the Drift-Kick-Drift algorithm. In the original WHFast version, this is referred to as 'safe mode off'. When an output is required, the integrator needs to be 'synchronized' by applying half a drift step.
The 'keep unsynchronized' feature of WHFast is supported, allowing users to synchronize the state vectors to generate an output, but then continue the integration from the unsynchronized values.
This allows for bit-wise reproducibility independent of the number of outputs and avoids the unnecessary accumulation of round-off errors when frequent outputs are required.
§ PERFORMANCE
In this section, we present results from performance tests.
Specifically, we compare the original non-AVX512 version of WHFast to our new version, WHFast512.
All simulations use a 5 day timestep and the present day Solar System planets as initial conditions[Actually, the initial conditions don't matter for this test. We only test the speed, not the accuracy and one WHFast512 timestep always takes the same amount of time regardless of the planets' coordinates.].
The tests were performed on the Niagara cluster, owned by the University of Toronto and operated by SciNet.
Each node has two Intel(R) Xeon(R) Gold 6148 2.40 GHz CPUs (with a maximum frequency 3.7 GHz).
There are a total of 40 Skylake cores per node.
Using hyperthreading, up to 80 threads can be executed in parallel per node.
We run our tests using both the GNU (version 8.3.0) and Intel compilers (version 19.0.4.243).
The results for the Intel compiler are shown in the top row of Figure <ref>, and those for the GNU compiler in the bottom row.
We start by running one simulation (one thread) per node.
This significantly underutilizes the node, but it gives us the fastest possible speed for a single simulation.
For 8 planets, using either Intel or GNU compilers, we are able to integrate for 1 Gyr in just 0.35 days (8.4 hours).
This is a speedup of 4.7x (GNU)/2.9x (Intel) compared to the non-AVX512 version of WHFast.
Running 80 threads on a single node slightly increases the runtime.
This is expected because hyperthreading will not provide a perfect scaling, the simulations are more likely memory bound, and the CPU is more likely frequency throttled.
Note that this is the same for both the original and the AVX512 version of WHFast.
The speedup for 8 planets is 4.7x/3.7x for the GNU and Intel compiler respectively.
Looking at the runtime, we can see that to integrate 80 realizations of the Solar System to 1 Gyr on a single node, we need 0.43 days (10 hours) using either the GNU or the Intel compiler.
The reason the speed-up is slightly lower for the Intel compiler is that this compiler does some amount of SIMD vectorization by itself.
We also run additional tests on the login nodes of the general purpose Béluga Cluster which uses the same Intel(R) Xeon(R) Gold 6148 CPU but with hyperthreading disabled.
On this machine we achieve an even shorter runtime:
integrating the Solar System for 1 Gyr takes 0.27 days (6.6 hours), allowing us to integrate to 5 Gyr in less than 1.4 days.
Appendix <ref> compares the performance of WHFast512 to several other freely available N-body codes.
§ LONGTERM INTEGRATIONS
We present results from 80 long-term simulations of the Solar System with all 8 planets, general relativistic corrections, and a 5 day timestep.
We start with initial conditions representing the present day Solar System.
The initial conditions of the 80 integrations are identical except that we perturb the x coordinate of Mercury by one meter in a random direction.
Because the Solar System is chaotic, the trajectories of the planets will diverge.
In Figure <ref> we plot the relative energy error for our ensemble of simulations.
Each individual simulation's error is plotted as a gray line while Mercury's eccentricity stays below 0.65.
The median is shown in red.
The error is plotted as a blue line when Mercury's eccentricity exceeds 0.65.
One can see that the median relative energy error remains below 10^-8 for the entire 40 Gyr integration.
The only simulations for which the energy error increases are those where Mercury's eccentricity is high and planets have close encounters.
Note that there is no long-term trend in the median energy error (red curve), showing that, at least at the level of 10^-8, the integrator is unbiased over 40 Gyrs.
The fraction of unstable systems as a function of time in our ensemble runs is shown as a red line in Figure <ref>.
<cit.> ran 1000 simulations using the non-AVX512 version of WHFast.
For comparison, we plot these results as a dotted line.
We also plot the diffusion model by <cit.> as a dashed line.
The gray shaded area represents the 2σ confidence interval for the diffusion model assuming an ensemble size of 80.
As a criterion for whether a simulation has gone unstable or not we test whether the eccentricity of Mercury exceeds 0.65.
At this point, the orbits of Mercury and Venus are almost crossing and a violent outcome is pretty much guaranteed.
We tried other criteria (e>0.55, e>0.6, relative energy error larger than 10^-7) all of which give very similar results.
As one can see in Figure <ref>, our results are in good agreement with both the diffusion model of <cit.> and the N-body results of <cit.>.
This shows that our optimized integrator is well suited to study the long-term evolution of planetary systems such as the Solar System.
<cit.> shows that for fewer than ∼ 17 timesteps per pericenter timescale T_f, the overlap of timestep resonances introduces energy errors larger than machine precision <cit.>.
Nevertheless, quantifying the impact of such small numerical chaos on the statistics of dynamical instabilities in the Solar System is a challenging theoretical problem.
Numerical convergence tests by Brown et al. (in prep) show that the rate of dynamical instabilities over 5 Gyr in their Solar System integrations remains consistent for timesteps significantly larger than this criterion.
However, this paper is meant to simply provide a demonstration of our new code, rather than a detailed convergence analysis.
The fact that our results agree well with the diffusion model, and particularly with the simulations of <cit.>, which use a 3 day timestep, gives us confidence that our results are physical and not a numerical artifact.
§ CONCLUSIONS
In this paper, we introduced WHFast512.
To our knowledge, WHFast512 is the fastest N-body integrator for systems with a small number of planets (N=8).
We can integrate the Solar System for 5 billion years, the expected time the Sun has left on the main sequence, in just 1.4 days.
The speedup that we achieve is significant, up to 4.7x.
It will allow N-body simulations to run almost 5 times longer for the same amount of computing resources.
Alternatively, one can run 5 times as many simulations for the same integration time and computing resources.
Lastly, one can run the same number of simulations but at a cost of only 1/5 of the computing resources.
As scientists become more aware of and concerned about the environmental impact of large-scale numerical simulations, the last option seems particularly appealing.
In this paper, we focused on developing an integrator for systems with 8 planets.
Aside from the Solar System, there are currently few known planetary systems with that many planets.
It would therefore make sense to further improve the performance of WHFast512 in future work specifically for systems with 2 or 4 planets by sharing one 512-bit vector among 4 or 2 simulations respectively.
In fact, because the interaction matrix is more sparse and there are no longer 256-bit lane crossings required in this case, the speedup should be even greater.
Jacobi coordinates could also be considered for systems with a smaller number of planets (especially 2).
Furthermore, note that some AVX512 instructions can lead to CPU frequency throttling <cit.>. Although we don't observe any significant effect in WHFast512, it would be interesting to compare the performance of 512-bit instructions to twice the number of 256-bit instructions, especially when integrating 2- or 4-planet systems.
WHFast512 is freely available within the REBOUND package at <https://github.com/hannorein/rebound> which provides both a C and Python interface.
To use WHFast512 the user needs to have a CPU and a compiler which support AVX512 instructions.
Note that to allow for easy post-processing of simulation data, one can read SimulationArchive files <cit.> of simulations run with WHFast512 even if AVX512 instructions are not available.
In that case, if a synchronization is required, it is performed with WHFast.
§ ACKNOWLEDGEMENTS
We thank Dorian Abbot and collaborators for sharing a draft of their paper and the data shown in Figure <ref> with us.
We thank Matthew Holman for helpful feedback on an earlier draft of this paper.
We also thank an anonymous referee for helpful comments which allowed us to improve the manuscript.
This research has been supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant RGPIN-2020-04513.
This research was enabled in part by support provided by the Digital Research Alliance of Canada (formerly Compute Canada; https://alliancecan.ca/enalliancecan.ca).
Computations were performed on the Niagara supercomputer <cit.> at the SciNet HPC Consortium (<https://www.scinethpc.ca>).
SciNet is funded by the following: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund – Research Excellence; and the University of Toronto.
§ OPTIMAL NUMBER OF ITERATIONS FOR THE KEPLER SOLVER
To arrive at the specific combination of Halley's and Newton's method and the number of terms in the Taylor series expansion described in Section <ref>, we ran a systematic search over all possible combinations.
As an accuracy requirement, we chose that, for an orbit at Mercury's current semi-major axis, the relative error in the semi-major axis stays below 10^-13 when integrated for 10 years with a 5-day timestep at eccentricities up to 0.7.
Starting with 700 possible combinations, only 85 pass this requirement.
All of them require 6 terms in the Taylor series expansion of the Stumpff functions in the last iteration.
If we only use Newton steps, we need at least six iterations to converge.
If we start with 1 Halley step, we still need another 4 Newton steps.
If we start with 2 Halley steps, we get away with only 2 more Newton steps.
The lowest number of terms in the series expansion for the Halley steps that lead to converged results is 4.
Testing the performance of all combinations, this last combination (2 Halley steps, 2 Newton steps, 4 and 6 terms respectively) turns out to be the most efficient and we therefore chose it as the default in WHFast512.
Of course, other requirements - e.g. a different timestep or eccentricity - will lead to a different optimal combination.
§ PERFORMANCE COMPARED TO OTHER N-BODY CODES
We compare WHFast512 primarily to WHFast because both integrators are part of the same package and we think this is the fairest comparison.
In this appendix we report results of experiments comparing WHFast512 to other freely available N-body integrators.
All of them are open source, except HNBody which is only available as a binary.
Although we took care in making these comparisons as fair as possible and closely follow the developers' instructions, there might be optimizations (settings, compiler flags, etc) in these codes that we did not make use of.
In addition some of the codes are optimized for other use cases, for example integrations with a large number of test particles or close encounters, rather than integrating the Solar System planets.
Table <ref> lists the runtime for a 5 Gyr integration of the eight Solar System planets using a 5 day timestep for different N-body codes.
All runs were performed on the same Intel(R) Xeon(R) Gold 6148 CPU.
|
http://arxiv.org/abs/2307.03877v1 | 20230708014525 | Designing Mixed-Initiative Video Games | [
"Daijin Yang"
] | cs.HC | [
"cs.HC",
"cs.AI",
"J.5"
] |
To my family.
I would like to first thank my great, beautiful, and strong mother. Since being diagnosed with multiple myeloma nine years ago, she has endured unimaginable suffering, but she has never given up on her life and has even achieved more in her work than healthy people. After contracting COVID-19, her condition deteriorated rapidly, and as I write this paper, she is fighting bravely against cancer in the hospital. She has always inspired and supported me to move forward. She is a great mother and I will love her forever.
I am grateful to my father, my mother's sister, and other family members for taking care of my mother and allowing me to focus on my studies.
I would like to thank Professor Elina Tochilnikova, Professor Giovanni Maria Troiano, Professor Bob De Schutter, Professor Casper Harteveld, Professor Leanne Chukoskie, and all other professors in the field of Game Science and Design at Northeastern University for their invaluable guidance and unwavering patience in supporting my work. I would also like to express my sincere gratitude to Professor Max Kreminski at Santa Clara University for providing crucial feedback and suggestions on my thesis.
I would like to extend my appreciation to all of my colleagues who generously provided valuable suggestions and constructive feedback on my work. Additionally, I am grateful to my friends Binyao Jian and Xinyan Deng, who stood by me during the most challenging times. Their unwavering support and companionship have been invaluable to me.
The development of Artificial Intelligence (AI) enables humans to co-create content with machines. The unexpectedness of AI-generated content can bring inspiration and entertainment to users. However, co-creation interactions are typically designed for content creators and have poor accessibility. To explore the gamification of mixed-initiative co-creation and make human-AI interactions accessible and fun for players, I prototyped Snake Story, a mixed-initiative game where players can select AI-generated texts to write a story of a snake by playing a “Snake”-like game. A controlled experiment was conducted to investigate the dynamics of player-AI interactions with and without the game component in the designed interface. As a result of a study with 11 players (n=11), I found that
(1) players utilized different strategies when playing with the two versions,
(2) game mechanics significantly affected the output stories, players' creative process, and role perceptions,
and (3) players with different backgrounds showed different preferences for the two versions.
Based on these results, I further discussed considerations for mixed-initiative game design.
This work aims to inspire the design of engaging co-creation experiences.
Keywords - human-AI interaction, gamification of human-AI collaboration, mixed-initiative interface, mixed-initiative game, AI co-writing, playing and creating conflicts
CHAPTER: INTRODUCTION
Recent machine learning (ML) techniques have boosted human creation, enabling humans to co-work with artificial intelligence (AI) to compose music <cit.>, draw illustrations <cit.>, write stories <cit.>, reply emails <cit.>, create characters <cit.>, and develop games <cit.>. In this mixed-initiative co-creation <cit.> process, AI acts as a partner of humans and provides real-time feedback aligned with the creation iteration. Since the algorithm can generate numerous instances with easy inputs in a relatively short time, the mixed-initiative interfaces can help its users quickly explore the solution space, inspire them with unexpected ideas <cit.>, and make creative experiences accessible to non-professional creators <cit.>.
Current mixed-initiative co-writing interfaces mainly focus on supporting writers. These systems were designed to help writers to keep the consistency of stories, plan plots, get unstuck <cit.>, and change text-based stories into other forms <cit.>. Users must have basic writing skills to operate these systems. Other work introduced gamified designs such as temporary rules and goals <cit.>, as well as scores <cit.> into mixed-initiative co-writing to make the system more enjoyable to novice writers. However, previous work on human-AI collaboration in the context of creative writing focused on AI as a supporting mechanism to facilitate creative storytelling efforts. Here, I extend prior work by exploring the use of AI for mixed-initiative creative writing as a game mechanic in the context of game design. To design mixed-initiative video games, I aim to explore the following research questions:
(1) What patterns of interaction and player identification emerge in the player-AI co-creating process?
(2) How do game mechanics impact the creation and role perceptions in the process?
(3) How can mix-initiative co-creating be integrated with game mechanics for a unified play experience?
To ground my study, I designed and prototyped Snake Story, a mixed-initiative game with the mechanics from “Snake” [https://www.arcade-history.com last accessed 03.06.2023]. The game (referred to as the game version) involved players selecting AI-generated texts or adding their own texts by controlling a growing snake to eat candies on the map, resulting in the creation of a story about a snake. A GPT-3 <cit.> based language model was employed to provide two text selections in each round, generated with different preset parameters. The model would consider previous selections and would write an ending for the story once the game or the interaction was over. For comparison, a system (referred to as the non-game version) was also developed for players to directly select AI-generated texts without engaging in gameplay.
To investigate how players dynamically interact with the game, I conducted a within-subject user study with 11 players (n = 11). Each player was asked to write an approximately 300-word story about a snake in each of the two versions, which were assigned in random order. Their experience was analyzed using a mixed-method approach, including gameplay log data, surveys, think-aloud protocols, interviews, and observations. Results from the study show that game mechanics significantly affect players' text selection strategies, the quality of the stories, and the sense of engagement in creating; that players shared a selection strategy for GPT-3 generated texts; that different players had different play strategies in both versions and thus perceived themselves and the AI differently because of the game mechanics; and that players with different writing and AI experiences held different preferences for the two versions.
In summary, the thesis makes the following contributions to the mixed-initiative game design:
(1) I introduce Snake Story, a mixed-initiative game for collaborative writing with AI. I present techniques that enable players to write with AI, and I develop both game and non-game interactions and interfaces.
(2) In a within-subject study with 11 players, I compared the non-game and the game version and defined:
(a) Players' usage data.
(b) Statistic difference between the two versions.
(c) Players' strategies for selecting AI-generated texts to create stories
(d) Players' play strategies and role perceptions in the two versions.
(e) Players' preferences for the two versions.
(3) Based on the results of the user study, I discuss the design
implications that mixed-initiative games should:
(a) Resolve playing and creating conflicts.
(b) Increase narrative engagement in playing.
(c) Enhance emotional involvement in creating.
(d) Balance playing and creating.
(e) Find new evaluation criteria.
Taken together, these findings guide the design of future engaging co-creation experiences.
CHAPTER: RELATED WORK
§ NEURAL LANGUAGE MODELS FOR TEXT GENERATION
Text generation has emerged as a critical application of natural language processing (NLP) technologies, with uses ranging from chatbots <cit.> to content creation <cit.>. The rise of deep learning has enabled significant advancements in the field, with language models such as the Generative Pre-trained Transformer (GPT) series achieving remarkable success. It has been shown that GPT models can generate texts that cannot be distinguished from human-written pieces <cit.>. Building on the earlier GPT-2 architecture <cit.>, the GPT-3 model, with its larger model size, larger dataset, and longer training, has demonstrated stronger text-completion abilities and outperformed GPT-2 on several metrics, including fluency, coherence, and relevance <cit.>. As a result, GPT-3 was employed in the Snake Story game.
Using a few-shot learning approach <cit.>, the GPT-3 model is able to perform specific tasks following natural language instructions provided by users. For example, in the Snake Story game proposed in this thesis, the prefix "writing a story of a snake" was added to keep the generated texts on topic. Despite the impressive advancements in text generation, several challenges remain in using GPT-3, including the issue of bias and the difficulty of producing diverse and engaging content. The issue of bias refers to the fact that the generated text may reflect the biases inherent in the training data: identical prompts in GPT-3 can result in stereotyped outputs, including biases related to sex <cit.>, race, and certain religious groups <cit.>. GPT-3 also sometimes generates low-quality texts that repeat themselves, lose coherence over paragraphs, and contain contradictory logic <cit.>. This problem is exacerbated when the parameters of GPT-3 are not set properly [https://platform.openai.com/docs/api-reference/models last accessed 03.06.2023].
§ MIXED-INITIATIVE CO-WRITING INTERFACES
Mixed-initiative interfaces that enable co-creation in various fields have been widely researched <cit.>.
The interfaces can take advantage of exploratory creativity from human writers and the fast generation of diagrammatic lateral paths from generative algorithms to create mixed-initiative co-creativity <cit.>. Extensive research has explored the potential of mixed-initiative interfaces to aid human writing through editing human-written texts, as well as generating and expanding ideas. Editing and refining functions are the most common functions in the interfaces. For example, Shuming Shi et al. <cit.> utilized AI technologies to boost users' writing proficiency by enabling them to generate higher-quality text more efficiently. This was accomplished through the implementation of five distinct categories of features in their digital writing assistant, including text completion, error detection, text refinement, keyword-to-sentence conversion (K2S), and cloud-based input methods (cloud IME). Xin Zhao <cit.> developed a writing assistant that can assist non-native English speakers in overcoming language barriers by offering rewriting alternatives with various tones (such as casual or formal) and lengths (like shortening or expanding).
In collaborative writing, AI can also serve as an idea generator, contributing to the generation of novel concepts and plot lines. For instance, Melissa Roemmele et al. <cit.> created a system that aids users in brainstorming by providing suggestions for the next sentence in a story. Swanson et al. <cit.> described an interactive storytelling system that utilizes a case-based reasoning architecture to offer a range of options for the subsequent sentences, leading the story in entirely different directions. Chung et al. <cit.> introduced an alternative to suggestion-based co-ideation approaches by developing line-sketching interactions that enable users to co-create stories while actively controlling and making sense of the protagonist's fate. Beyond idea generators, the AI in <cit.> was granted a more substantial role as an active writer and assumed responsibility for continuing users' narratives through a unique form of story solitaire. In contrast, Biermann et al. <cit.> proposed that AI could jeopardize writers' control, autonomy, and ownership by exceeding co-creative limits, and therefore sought to preserve human control over the writing process in their system.
Moreover, AI can assist in bridging the gaps between the skeleton structures of stories. Ammanabrolu et al. <cit.> introduced an ensemble-based model capable of generating event-driven stories. Yuan et al. <cit.> built a text editor that can provide plot points that can contextualize the scene built by humans. Laclaustra et al. <cit.> introduced a system that empowered users to specify characters, locations, and objects within a story world. The system, subsequently, generated a rudimentary story by allotting actions to individual characters and creating a sequence of events.
In summary, the aforementioned applications are well-designed to assist creative writers in enhancing language, ensuring consistency, overcoming writer's block, managing reader experience, as well as refining and iterating on expressive intent <cit.>. However, it is crucial to note that more research is indispensable to cater to the needs of casual creators or non-creators in the realm of content creation.
§ MIXED-INITIATIVE CO-WRITING GAMES
In order to broaden the accessibility of co-creative experiences for a wider range of users, various applications have recognized the benefits of integrating mixed-initiative co-writing as a valuable component of narrative instruments <cit.> in games, thereby enhancing the overall interactive experience of "play".
For example, a mixed-initiative text-based game, AI dungeon[https://play.aidungeon.io/main/home last accessed 03.06.2023], used AI to generate and respond to players' text-based commands and choices. The AI system produces a distinctive story outcome based on the players' inputs, providing an evolving and personalized gaming experience of exploring and creating stories in pre-set scenes.
Moreover, Kreminski et al. <cit.> developed "Why Are We Like This?", a mixed-initiative, co-creative storytelling game that aimed to engage players in investigating the generated history of characters and to bring the story to a satisfying conclusion by selecting and writing actions for the characters. The game involves designed author goals, proper AI suggestions, and player curiosity to encourage co-authorship.
While the term "play" is commonly used to denote the interaction between human and mixed-initiative interfaces, it is essential to recognize that "games" bear distinctive dissimilarities from play, as they feature unambiguous goals that encourage participants to engage in the interpretation and optimization of rules and tactics <cit.>. The introduction of goals into a system serves as the most straightforward means of distinguishing mixed-initiative co-writing games from mixed-initiative co-writing interfaces.
Xi et al. <cit.> introduced KuiLeiXi, an open-ended text adventure game that required players to interact with the AI to achieve predetermined plot goals. The system was developed to address the lack of incentives for players in AI Dungeon.
Additionally, Ben Samuel et al. <cit.> created a mixed-initiative playful tool, Writing Buddy, that integrates the affordances of both authoring and playable media to support creative writing endeavors. The game mechanics prompted players to engage in a puzzle-solving-like experience, where they possessed the freedom to add or eliminate story beats to alter the characters' states within the game and attained the pre-determined narrative goal.
Building upon the concept of Writing Buddy, Kreminski et al. <cit.> developed Loose Ends, a mixed-initiative co-creative storytelling play experience that incorporates obligatory storytelling goals that can be parameterized with specific characters and additional constraints. In addition, the AI system in Loose Ends ensures consistency with all previous texts, which emulates the functionality of an active writing partner.
The present state of mixed-initiative co-writing games suggests that their full potential has yet to be realized, as they continue to rely on interactions that overlap with mixed-initiative interfaces. While the mobile game designed by Castaño et al. <cit.> represents a step forward in this field, enabling users to collaboratively create a story by arranging a card-game-like system, further exploration of combining mixed-initiative interfaces and game mechanics is required.
CHAPTER: SNAKE STORY
To ground my study, a mixed-initiative game named "Snake Story" was designed and developed in the Unity3D engine. As shown in Fig. <ref>, the game consists of 2 parts: the non-game version and the game version. The game allows players to write different stories under the same prompt, “writing a story of a snake”, with GPT-3 generated texts in different turn-based interactions. The "text-davinci-003" model [https://platform.openai.com/docs/models/overview last accessed 03.06.2023] was employed in the system to generate the text.
§ NON-GAME VERSION
As illustrated in Fig. <ref>, the non-game version of the system functions as follows: players are presented with two 30-word text options with different temperatures (0.6 and 1.4) generated by GPT-3 in each turn. If they wish to continue the story, they can select one of the options, which will be automatically added to the narrative. Alternatively, players can opt to compose their own text to continue the story if they are dissatisfied with the AI-generated options. In the subsequent turn, GPT-3 generates two fresh text alternatives for the players to choose from. Once the players decide to end the story, GPT-3 assists in linking the narrative to the predefined ending: ", and the story of the snake ends" with a maximum of 80 words.
As depicted in Fig. <ref>, the interface of the non-game version presents the two GPT-3 generated text options on the left side of the screen, accompanied by square buttons containing labels to their left. Additionally, an input field is positioned beneath the text options, enabling players to contribute their own textual content. Once the GPT-3 generation process is completed, the button adjacent to the input field becomes interactable. Players can then click this button to incorporate their selected text into the ongoing narrative, marking the initiation of a new turn. Moreover, an "End" button is situated underneath the text options, providing players with the means to end the story.
§ GAME VERSION
In contrast to the non-game version, the game version of the system employs "Snake"-game-like mechanics as a metaphor for adding paragraphs to a story, as demonstrated in Fig. <ref>. In the game version, players are still presented with two selections of texts. However, these texts are now represented by candies positioned on a 15*15 tile map, each of which possesses unique mechanics. To add a text to the story, players must navigate a growing snake towards the corresponding candy, which triggers the addition of the selected text to the narrative, along with the application of the corresponding mechanics to either the player or the game map. Players are unable to terminate the story unless their life points become exhausted.
As shown in Fig. <ref> a), the game is played on the left-hand side of the screen, while the story is displayed on the right-hand side. The player's life points are shown at the bottom-left corner of the tile map. The two text selections, along with their corresponding candies, are displayed under the story. A countdown scrollbar for the pause time is located between the story and the text selections, and the game pauses momentarily when new candies and texts appear. Once a player collects the special candy (blue), they are given the opportunity to contribute to the story by writing their own text. As shown in Fig. <ref> b), an input field then appears under the two text selections, and a corresponding yellow candy is generated on the map.
As shown in Fig. <ref>, seven different tiles are designed in the game, comprising six types of candies and one obstacle. The candies are divided into two pools: pool 1 for text selection 1 and pool 2 for text selection 2. Candies with neutral and negative effects are assigned to pool 1 and indicated by negative colors. The white candy, with neutral mechanics, only increases the snake's length by 1, while the black candy additionally introduces three extra obstacles on the map. Furthermore, the red candy decreases the player's life points by 1. Pool 2, on the other hand, features candies with neutral and positive effects as counterparts to the negative candies, indicated by positive colors. The green candy adds 1 life point, while the blue candy permits players to write their own text in the next turn, as demonstrated in Fig. <ref>. After each turn, three obstacles are added to the map to increase the difficulty level.
In order to investigate the influence of game mechanics on players' text choices, the temperature for selection 1 and candy pool 1 was intentionally set lower (0.6) than that for selection 2 and candy pool 2 (1.4). This decision was made to improve the quality and stability of the text output in selection 1. Considering players' average reading speed and the usage of the think-aloud protocol, the game pauses for 25 seconds each time players receive new texts. This pause is extended to 45 seconds when players wish to write their own text to add to the story. Players can end the pause early by clicking the buttons adjacent to the text selections, similar to how they would end their turns in the non-game version.
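A compact way to summarize these mechanics is as a small data structure mapping each candy pool to its temperature and effects. The sketch below is illustrative only; the dictionary layout and effect keys are assumptions, and whether the neutral white candy appears in both pools is inferred from the description rather than stated explicitly.

```python
# Illustrative encoding of the candy pools; colors, temperatures, pause
# lengths, and effects follow the paper, everything else is assumed.
CANDY_POOLS = {
    1: {  # pool 1: lower temperature (0.6), neutral and negative effects
        "temperature": 0.6,
        "candies": {
            "white": {"grow": 1},                        # neutral
            "black": {"grow": 1, "extra_obstacles": 3},  # adds obstacles
            "red":   {"grow": 1, "life": -1},            # costs a life point
        },
    },
    2: {  # pool 2: higher temperature (1.4), neutral and positive effects
        "temperature": 1.4,
        "candies": {
            "white": {"grow": 1},                        # neutral (assumed shared)
            "green": {"grow": 1, "life": +1},            # restores a life point
            "blue":  {"grow": 1, "self_write": True},    # unlocks self-writing
        },
    },
}

PAUSE_SECONDS = {"read": 25, "self_write": 45}  # pause lengths from the study
OBSTACLES_PER_TURN = 3                          # obstacles added after each turn
```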
When players' life points reach 0, the interaction and the story end. As shown in Fig. <ref>, players then enter a result page. On the right-hand side of the screen, the full story with an automatically generated ending is displayed. Additionally, the interface indicates the length of the snake and of the story, as well as the types of candies consumed by the player during gameplay.
CHAPTER: USER STUDY
§ PARTICIPANTS
To research how different players interact with Snake Story, 11 Game Design students (n=11, referred to as P1-P11) from Northeastern University were recruited through a Discord poster to play the game. Given the premise that the players' writing and AI experience may contribute to their distinct perceptions of the game <cit.>, the study recruited a diverse cohort of participants with varying levels of writing proficiency and experience collaborating with AI. All participants volunteered for the study and were not compensated.
§ PROCEDURE
The study was designed as a within-subject investigation, whereby each participant was assigned to play both the non-game version and the game version of Snake Story in random order. In each session, the participant was given a brief tutorial on the game mechanics and interface and was then instructed to compose a 300-word story about a snake with AI. The participant was also required to engage in think-aloud protocols during the 10-to-15-minute gameplay. Subsequently, the participant was asked to complete a 5-point Likert scale usability questionnaire. Following the completion of both sessions, each participant took part in a semi-structured interview lasting approximately 5-10 minutes, in which they shared their interaction experiences. Finally, participants were asked to complete a demographic survey, which included questions about their writing and AI experience.
§ EVALUATION
In the game, each text selection generated by GPT-3 was captured and stored. Moreover, the game also recorded the players' selection of texts and the stories they created. To further evaluate the user experience quantitatively, the usability questionnaire incorporated queries on the quality of the generated text, the overall story, and the user's interaction experience.
These collected data were subjected to quantitative analysis, including the use of Wilcoxon signed-rank tests to compare the results from the two versions of Snake Story.
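As an illustration of this analysis pipeline, the paired per-player comparison could be run with SciPy's Wilcoxon signed-rank test as sketched below; the numbers are placeholders, not the study's data, and only the choice of test mirrors the paper.

```python
# Hypothetical paired comparison between the two versions (one value per player).
from scipy.stats import wilcoxon

non_game_choices = [12, 11, 14, 10, 13, 12, 11, 15, 10, 12, 10]  # placeholder data
game_choices     = [13, 12, 18,  9, 14, 11, 12, 20, 10, 16, 10]  # placeholder data

statistic, p_value = wilcoxon(non_game_choices, game_choices)
print(f"W = {statistic}, p = {p_value:.3f}")
```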
During the study, the screen was recorded to capture the participant's interactions with the game, while the audio was recorded to generate transcriptions of the think-aloud protocols and interviews. The resulting data were analyzed using a qualitative approach based on open coding <cit.>, allowing for a thorough exploration of the participants' experiences and interactions with the game.
CHAPTER: QUANTITATIVE RESULTS
§ USAGE STATISTICS
[1] Colors in the game version candies row match candy colors mentioned in Section <ref>
11 players wrote a total of 22 stories about snakes in 2 versions of the Snake Story. The total number for each detailed statistic with an average number (M) and standard deviation (SD) are reported. As shown in Fig. <ref>, the players made a total of 130 choices (M = 11.82, SD = 1.80) in the non-game version. Of these, the generated texts with a lower temperature (0.6) were selected 63 times (M = 5.73, SD = 1.76), while the generated texts with a higher temperature (1.4) were selected 53 times (M = 4.82, SD = 2.48). Additionally, the players chose to write their own words 14 times (M = 1.27, SD = 2.05). On average, the players spent 49.14 seconds (SD = 13.13) making decisions in the non-game version.
Correspondingly, the players made a total of 142 choices (M = 12.91, SD = 4.50) in the game version. Of these, 0.6 temperature texts were selected 43 times (M = 3.91, SD = 1.98), while 1.4 temperature texts were selected 89 times (M = 8.09, SD = 4.72). Players chose to write their own words 10 times (M = 0.91, SD = 1.83). On average, the players spent 27.33 seconds (SD = 7.69) making decisions in the game version.
In the game, 91 white candies were generated, 42 of which were selected (46.15%); 50 black candies were generated, 18 of which were selected (36.00%); 47 red candies were generated, 11 of which were selected (23.40%); 46 green candies were generated, 31 of which were selected (67.39%); 47 blue candies were generated, 30 of which were selected (63.83%); 40 yellow candies were generated, 10 of which were selected (25.00%).
Wilcoxon signed-rank tests were conducted to compare players' selection and time usage differences in the 2 versions. The test results showed that there was no significant difference in the total number of selections made by players (W(11) = 29.0, p = 0.76). However, the test results showed that game mechanics significantly affected players' choices for 0.6 temperature texts (W(11) = 7.0, p = 0.035). By contrast, it was worth noting that players' choices for 1.4 temperature texts (W(11) = 10.0, p = 0.14) had no statistically significant differences. Moreover, no significant differences were found in self-writing (W(11) = 2.5, p = 0.16) choices between the two versions. Additionally, the analysis indicated that players made decisions significantly faster in the game version (W(11) = 2.0, p = 0.0029).
§ STORY EVALUATION
The stories in the non-game version had an average of 260.64 words (SD = 35.61), while the stories in the game version had an average of 272.64 words (SD = 64.22). There was no significant difference in the length of the stories between the two versions (W(11) = 28, p = 0.70).
Automated writing evaluation tools <cit.> were employed to assess the cohesion, grammar, language diversity, and overall writing quality of the 22 stories.
Cohesion was evaluated using two metrics obtained from the Tool for the Automatic Analysis of Cohesion[https://www.linguisticanalysistools.org/taaco.html last accessed 03.08.2023] (TAAOC) <cit.>: the sentence overlap rate (S. Overlap) and the paragraph latent semantic overlap rate (P. LSA).
The Grammar and Mechanics Error Tool[https://www.linguisticanalysistools.org/gamet.html last accessed 03.08.2023] (GAMET) <cit.> was utilized to detect the number of grammatical errors in the texts.
In order to assess the language diversity of the writing, the Tool for the Automatic Analysis of Lexical Diversity[https://www.linguisticanalysistools.org/taaled.html last accessed 03.08.2023] (TAALED) <cit.> was employed. This tool was chosen for its ability to provide a reliable metric for the measure of textual lexical diversity (MTLD) <cit.>.
Finally, GPT-3[https://chat.openai.com/chat last accessed 03.08.2023] itself was used to provide an overall score for the stories on a scale of 0 to 10 <cit.>.
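Since the exact prompt used for this GPT-based scoring is not reported, the following sketch only illustrates one plausible way such a 0-10 rating could be requested programmatically; the prompt wording, parameters, and function name are assumptions.

```python
# Hedged sketch of requesting an overall 0-10 story score from the legacy
# OpenAI completions API; prompt wording and parameters are assumptions.
import openai

def score_story(story: str) -> str:
    prompt = (
        "Rate the overall quality of the following story on a scale from 0 to 10. "
        "Reply with a single number.\n\n" + story
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=5,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()
```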
The results of the evaluations are shown in Table <ref>. Results from the Wilcoxon signed-rank test indicated that the change in text selection preference between versions may impact the cohesion of paragraphs within the stories (W(11) = 6.0, p = 0.014). However, no other significant differences were found in the stories between the two versions.
Furthermore, the players were asked to assess the stories they wrote in the 5-point Likert scale questionnaire. The results of this evaluation are presented in Figure <ref>. Additional Wilcoxon signed-rank test results indicated that there were no significant differences in the language used in the stories between the game and non-game versions (W(11) = 12.0, p = 0.19). However, the logic of the stories in the game version was significantly weaker than that in the non-game version (W(11) = 0.0, p = 0.0096). Moreover, the overall quality of the stories in the game version was significantly lower than that of the non-game version (W(11) = 3.5, p = 0.020).
§ QUANTITATIVE EXPERIENCE REPORT
As shown in Fig. <ref>, through the implementation of Wilcoxon signed-rank tests on the questionnaire data, it was observed that players had significantly less authorship of the story in the game version (W(11) = 3.0, p = 0.031). Furthermore, the players showed a significant difference in their interaction goal between the two versions, as they placed a greater emphasis on prioritizing the quality of the stories in the non-game version (W(11) = 2.0, p = 0.040). Nonetheless, no significant statistical difference was detected in their preference for the stories across the two versions (W(11) = 4.0, p = 0.083).
Additionally, players rated their interaction experience in the questionnaire. As shown in Fig. <ref>, players had significantly different perceptions between the two versions. Specifically, the game version was perceived to be significantly more complex compared to the non-game version (W(11) = 0.0, p = 0.039). Moreover, interactions within the game version were reported to have a significant impact on the creation process (W(11) = 0.0, p = 0.00098), whereas the creation process in the game version was considered to be less engaging (W(11) = 2.5, p = 0.047).
CHAPTER: QUALITATIVE RESULTS
§ INTERACTION PATTERNS IN THE NON-GAME VERSION
§.§ Text Selection Strategies
Analysis of the players' think-aloud data in the non-game version yielded 124 open codes, which were grouped into five distinct categories based on the explanations the players provided for their choices. These categories are language (24), consistency (69), unexpectedness (17), self-writing reasons (12), and other situation (2).
§.§.§ Language
Players tended to choose texts of higher language quality, particularly those containing detailed descriptions, elaborate adjectives, and emotional expressions (19). For example, P9 mentioned "Although both 2 texts were cohesive to the previous texts, the description of snake behavior in Text 1 is more specific." when selecting between "...to explore the inside, and soon found himself submerged in the knee-depth water. He made his way from lily pad..." and "...and he quickly jumped into the water. He swam around for hours, enjoying the cool and refreshing sensation of the pond's waters. As...".
Additionally, players demonstrated a preference for texts that were well-structured and composed with a professional tone (5). For instance, P3 mentioned "I think the 2 selections are similar, but the second one is more professional and I will go with this one." when selecting between "...He had seen many creatures come and go in his long life, but he was content with his own company. He kept to himself, and the..." and "...Named Anaconda, known by every passing creature in pursuit of warmth. One could often hear laughter ringing near it’s solace...".
§.§.§ Consistency
Players preferred the texts that aligned with their pre-determined tone (24). As an illustration, P5 pointed out "I would select 1 because 1 is more like a start of a fairy tale. I do not want to write a realistic story so I will choose 1." when selecting between "Once upon a time, there was a small snake who lived in the forest. She was very curious and loved to explore her surroundings. One day..." and "Once there was a green, spotted snake who mind made his home in the deep parts of lush tropical jungle. This snake was quite different than other...".
Moreover, players demonstrated a proclivity towards texts that unfolded the story in the way they anticipated (15). For example, P2 stated "I wanted to put my snake in a safe situation, (because) I don't want my snake to die. (Choose 1)" when choosing from "...-green scales glinted in the sun. Alice was sure that this snake wasn't dangerous, and she certainly didn't want to..." and "...shadowed a lower tear of its cheekbones. 'Hello there,' She eagerly greeted the glowing-eyed serpent and for just a few...".
Furthermore, players gravitated towards texts that maintained coherence with the preceding texts, specifically those with sound and expandable logic (30). For instance, P7 said "I think a snake cannot wear the necklace. Also, the golden necklace is so luxurious that it does not seem like something a bird that has just been saved would have." when selecting between "...special gift. It was a beautiful golden necklace with a single ruby in the center. Slither was amazed by the gift and decided to wear it around..." and "...(a)n Acorn sap, that would grant Slither fortune to any human being wished aide of him. Slither couldn't wait to tell the...".
§.§.§ Unexpectedness
Players displayed a preference to select unexpected texts that featured fresh settings, new characters, and surprising plot twists (11). To illustrate, P3 explained "I think 1 is more fun. It has new people (objects). 2 just mentioned familiar faces. I don't like familiar faces" when choosing between "...it was surprised to find a new world of exotic animals, plants and trees. It found itself in an oasis full of life and beauty,..." and "...it met the familiar faces, who watched without any hesitation. The explorative beast evolved into steps of understanding slowly and carefully forging relationships while exchange...".
In addition, players showed a propensity to select texts that possessed a sense of suspension regarding their potential narrative developments (6). For example, P11 said "I want to see where this goes. I chose this (2) because it's messed up, and I want to see if it becomes more messed up if I choose the messed up option." when selecting between "...I grant ye the power to control the elements of this world, but only when you accept my blessing.' George was terrified and uncertain what..." and "...Gallivanting about every daye frivolipaseday joorneys with on larkining flightal skeemeshyne lizard wingable sprites...".
§.§.§ Self-writing Reasons
In the situation that neither of the presented text options fulfilled their preferences, players were observed to write their own content, which frequently drew inspiration from the provided text selections (6). As an example, P10 mentioned "The first one is cheap...I like the 'anger' part (in 2), but then I don't like the 'mouth puckering', so maybe I can do that..." when facing "...and eventually let out a soft purr before turning away and walking off into the distance. The snake was relieved that he had been spared,..." and "...and anger never leaving his eyes. He narrowed his focus to the powerless serpent before him, is mouth puckering upward ready to end this mercy mission of...", but finally writing "..., but anger never leaving his eyes. It calmed down eventually and let out a soft purr before turning away and walking off into the distance. The tiny snake was relieved that he had been spared,..."
Players' desire to play with the AI was another factor that motivated them to write their own content (6), as will be illustrated in Section <ref>.
§.§.§ Other Situation
In a rare situation, players indicated satisfaction with both text options and selected one randomly (2). P7 said "I think both of the selections were good and appealing to me. Can I randomly choose one of them? (After performing a random selection procedure,) OK, I will go with 1." when choosing between "...other animals of his luck, but first he wanted to test the sap. He poured some onto a nearby rock and wished for more food. Suddenly,..." and "...animals in the Kindgdom about this wonderful gift! He spread word quickly, and sure enough many of his animal friends began asking for his help....".
§.§ Role Perceptions and Play Strategies
Five players used AI as a writing assistant (WA) to support their writing. Three of these players, who identified themselves as writers, believed that they had made the majority of the contributions to the story. For example, P5 said "Well, even though the AI generated most of the content, I still feel like I had a significant role in creating the story because I made the choices on how the AI wrote. So, I believe that I can claim authorship of this story." in the interview. The other two players claimed less authorship of the story and described themselves as "puppet masters", according to P3. P6 shared this sentiment, stating 'I think I am just providing prompts, and the AI can help me to link them.'.
Four players considered AI to be an active writer (AW) that provides stories for them to read. They described themselves as readers of an interactive storybook, where the AI is the author and they are the audience. For instance, P10 mentioned "I'm not planning on writing much on my own. I'm actually more interested in seeing what the AI comes up with." before starting to play the non-game version.
The two remaining players treated AI as a playful tool (PT) and engaged in challenging or tricking the AI by actively using the self-writing function to generate unexpected or amusing outcomes. They viewed AI-generated texts not only as a means of co-creation but also as a source of entertainment, exploring the limits and capabilities of the system. To illustrate, P6 mentioned "I think AI is pretty good at generating texts based on cohesive inputs, but I'm curious to see how it can handle unexpected situations. So I'm gonna test it out by seeing what happens if I just kill off the snake in the story and let the AI continue from there." when adding the sentence "The snake is dead." at the very beginning of the story.
§ INTERACTION PATTERNS IN THE GAME VERSION
§.§ The Effect of Mechanics
All players acknowledged that the mechanics had a significant impact on their co-writing process.
The overall game design, particularly the time limit mechanics, had a significant influence on how the players read the generated texts. Two out of eleven players reported that they never read the generated text. For instance, P5 mentioned that "Given that it's a game, I'm not particularly concerned about what the AI writes. My main focus is on the gameplay itself." By contrast, four players read the text in its entirety, but only intermittently as they controlled the snake to avoid obstacles. For example, during the 5th round, P11 commented, "I think I can find a safe path for my snake to stay in, and then I can have extra time for reading the texts. Oh, this works!". The remaining five players opted to give the generated texts a quick scan. To illustrate, P8 mentioned "So basically, I just skim through the text real quick cause I also need to focus on figuring out how to get my snake to chow down on what I picked out for it at the same time." in the interview.
Additionally, the candy mechanics influenced players' choice strategies. Despite their low-quality text, the green and blue candies (good candies) were particularly attractive to players.
To illustrate, P2 mentioned "I would more likely go for the green candy to regain my lost HP and keep myself alive in the game for a bit longer." while playing the game. By contrast, black and red candies (bad candies) were rarely chosen by players. For example, P7 mentioned that "Even though the texts in the black candy are better, I'm not really keen on making the game more challenging. Plus, the white candy's text is good enough for me." However, in situations where white candies were present alongside white or "good" candies, or when a player's health points were at a safe level, they were more likely to apply their selection strategies from the non-game version for text content. For example, P11 said "The (black) one I chose just now was the more sad option, and I am choosing to make this snake's life sad." when selecting between "...The snake was quite content with this lifestyle until one day, when it heard a strange noise coming from the house. Curiosity got the better..." (black) and "...Seasons unearthed happiness all around this cozy old home, kids screeched and teenage gossip shook the foundation undoubtedly our friend welcomed shelter in..." (blue).
Furthermore, the intentional design of the obstacles in the game resulted in a notable increase in emotional arousal among players during the co-creation process. Players were found to experience a range of negative emotions, such as tension and frustration, when attempting to navigate and avoid obstacles, or when inadvertently colliding with them.
§.§ Role Perceptions and Play Strategies
Despite all players identifying as "players" in the game version, their respective play strategies exhibited significant variation.
The majority of the players (7) made trade-offs (T) between game mechanics and writing. These players aimed to uphold the story's integrity but were willing to compromise its quality if it meant prolonging the snake's lifespan in the game. As an illustration, P11 mentioned "I'm really tempted to pick the red option, but I know it'll end up killing me, so I'm gonna have the other one. I'd rather keep myself alive in the game to see more stories."
However, four of the players ignored (I) the writing system and merely focused on the "Snake" game. These players indulged exclusively in either the good or the bad candies during gameplay, purely maintaining the life of the snake or increasing the difficulty of the game for fun. For example, P1 stated "Even I want to choose the texts but the mechanics keep me away. To be honest, I'd much rather focus on enjoying the gameplay rather than putting effort into crafting a compelling narrative." in the interview.
§ PREFERENCE
[1]AI experiences: N (No), Y (Yes); Writing experiences: R (rich), P (Poor), N (NO); Non-game Version (V.) Role Perception (RP): WA(Writing Assistant), AW (Active Writer), PT (Playful Tool); Game Version (V.) Play Strategies (PS): I (Ignore Writing), T (Make trade-offs); Preference: NG (Non-game Version), G (Game Version), - (No preference)
As shown in Table <ref>, the largest group of players (5) did not demonstrate a discernible inclination towards either the non-game version or the game version. While they believed that the non-game version was more suitable for serious writing, they found the game version to be more entertaining and enjoyable. For example, P5 mentioned "If I wanna write a story, I would choose the first one (the non-game version). But for fun, I would play the second one (the game version)." in the interview.
However, three players (P1, P6, and P8) expressed their strong dislike for the game version, stating that it significantly impaired their creation and reading process. For instance, P8 explained "I didn't think it was as fun as the other version of the game. I thought it was a little stressful ... if you enjoy that type of narrative (reading or writing a story as it unfolds), I think the first one is, gonna be more appealing." in the interview.
Nevertheless, the remaining three players (P7, P9, and P11), who had neither AI nor writing experience, expressed their strong admiration for the game version. They believed that the challenges presented in the game version increased their engagement in the creation process. As an illustration, P9 stated "I like the game version more. I think the challenge in the game makes me more engaged in the interaction. The sense of tension in the game version makes it harder for me to consider each selection thoroughly. This means I'm always looking forward to the next choice, hoping to make better decisions than before." in the interview.
CHAPTER: DISCUSSION
§ RESOLVE PLAYING AND CREATING CONFLICTS
Designing mixed-initiative games with consideration for the potential conflicts between gameplay and creative content generation is essential to promote engagement in the co-creating process. Mechanics that allow for both play and creativity to coexist can encourage players to develop their own unique stories and experiences within the game world. Specifically, as discussed in Section <ref> and Section <ref>, clear rules and mechanics in Snake Story can pose additional challenges for players who wish to engage in creative content generation, particularly when their writing goals (write a better story) conflict with the playing goals of the game (live longer). To mitigate such conflict, exchanging the temperature between good and bad candies can incentivize players to focus on both keeping the snake alive and generating high-quality stories. However, it is important to note that some intrinsic conflicts between playing and creating cannot be easily resolved through such parameter adjustments. In such cases, more specialized and deliberate mechanics must be designed. For example, Snake Story has an emergent endpoint when players run out of life points, whereas players' stories may continue, making it difficult to determine a definitive end for them. One possible solution for this issue can be a Neo-rogue-like game system with permanent death mechanics <cit.> that enables players to continue creating a larger story despite dying multiple times.
§ INCREASE NARRATIVE ENGAGEMENT IN PLAYING
Developing a tight narrative link between game mechanics and co-created content is a crucial factor in augmenting the participants' sense of immersion in mixed-initiative games. Although Snake Story was designed based on a metaphorical representation of the manipulated snake as the snake in the story, a majority of the players (n=7) expressed their dissatisfaction with the perceived disconnection between the game and the narrative.
Two possible directions can be applied to Snake Story as well as future mixed-initiative games. The first direction involves simulating the creative process of renowned writers, such as Shakespeare, in crafting a story. This would involve modeling how such writers generate and develop various ideas, unfold plots, and navigate potential challenges in their writing process. In the game, AI would be leveraged to simulate the thought processes of these writers <cit.>, while game mechanics can enable players to actively participate in the co-creation of the story by engaging in this abstract thinking process.
Alternatively, players can be cast as the main character of the co-created story. This can be accomplished through an interactive drama game design <cit.>, wherein players take on the role of the protagonist and make consequential decisions that influence the story's direction. To enhance player immersion and emotional investment in the story, personalized elements reflecting the player's experiences and characteristics can be integrated using AI. However, since players' interests align with those of the characters, conflicts between playing and creating must be resolved through additional mechanics.
§ ENHANCE EMOTIONAL INVOLVEMENT IN CREATING
To mitigate player frustration, mixed-initiative games should incorporate a degree of flexibility that allows players to manage unforeseen emergent events that may arise during gameplay or the creative process. For instance, in Snake Story, as discussed in Section <ref>, players experienced frustration when they were unable to allocate sufficient time to planning the story and maneuvering the snake simultaneously. To address these concerns, a mechanic could be incorporated that enables players to conserve unused time during easier situations and then utilize it during more challenging scenarios. This flexible design can decrease player frustration by introducing a feeling of control, while still retaining the intensity of the gameplay experience.
Moreover, given that mechanics have the potential to exert a noteworthy influence on players' co-creation strategies, mixed-initiative games can employ incentivization through game mechanics as a means of fostering engagement in the co-writing process. For example, in the Snake Story, favorable outcomes can be associated with the acquisition of the yellow candy, thereby stimulating players to generate their own textual content.
§ BALANCE PLAYING AND CREATING
Just as traditional games strive to keep players in an optimal state of flow <cit.>, mixed-initiative games should maintain a good balance between playing and creating. Gaming and creative endeavors rely on different feedback loops: gaming requires short-term, rapid feedback, while creative work often involves long-term, slow feedback. As mixed-initiative games require players to engage with both game mechanics and creative content generation, it is crucial that the game design facilitates a smooth transition between these two modes. This can be achieved through thoughtful design of factors such as game pacing and player agency.
Furthermore, a well-designed mixed-initiative game should provide players with appropriate guidance and tools to enable them to create meaningful and enjoyable content, without feeling overwhelmed by the creative demands of the game.
In addition, it is imperative to account for individual differences when designing mixed-initiative games. As discussed in Section <ref>, different players may require distinct interaction strategies, so a tailored approach is needed to maintain an optimal playing-creating flow. Additionally, the AI should consider the unique creation strategies of each player (as described in <ref>) to generate personalized content that aligns with their writing goals. Player-centric AI content generation can help keep players in the flow by reducing low-quality options and providing uplifting text at the appropriate time.
§ FIND NEW EVALUATION CRITERIA
To achieve a unified experience of creating and playing in mixed-initiative games, it is crucial to establish novel evaluation criteria that can fairly assess players' creative behavior. This is because an unfair assessment may lead to player frustration and undermine the gameplay experience. While the use of automatic writing evaluation <cit.> was demonstrated in the study as a post-evaluation method for the stories, its applicability to evaluating writing quality within the game may be limited by its statistical nature, which may not transfer to an individual's writing and does not consider subjective player perceptions. Furthermore, real-time human evaluation is not a feasible option. As such, a potential solution could involve the development of a novel algorithm to evaluate players' work automatically. Alternatively, a better approach could involve incorporating game mechanics that allow players to self-evaluate or rate each other. However, the effectiveness and feasibility of these approaches need further investigation.
Additionally, while current evaluation criteria for traditional games may still apply to some extent, mixed-initiative games involve unique features and therefore require new criteria that account for both the game mechanics and the effectiveness of the mixed-initiative interface. Specifically, it is important to assess how the game mechanics are dynamically combined with the mixed-initiative interface. Nevertheless, the evaluation of mixed-initiative games is still an area that requires further research to establish effective criteria and methodologies.
CHAPTER: CONCLUSION
In conclusion, the paper presents a prototype of a mixed-initiative game, Snake Story, aimed at exploring gamification of co-creation interactions between humans and AI. The study involved 11 participants, who were asked to play two versions of the game with and without game mechanics.
The findings suggested that mechanics might significantly influence the players' creative processes and self-identification in the game. Additionally, players with different backgrounds showed different preferences for the two versions. Overall, the study highlights the potential of gamification in making human-AI interactions accessible and fun for players and provides valuable insights for the design of engaging co-creation experiences.
CHAPTER: STORIES
§ SAMPLE STORY 1 (GAME VERSION P8)
Once upon a time, there was a snake who lived in the woods. He had never seen another creature like himself, but he enjoyed living among the trees and eating the small animals that lived there.
One day, he noticed something strange in the distance - a long, winding line of creatures like himself! He decided to investigate, and as he got closer, he realized that the line was a group of snakes making their way through the forest.
He was so excited to see other snakes like himself that he quickly joined the procession, slithering alongside them as they moved through the forest. Along the way, he felt particularly drawn to one snake in particular, who would often look back and pause for just a moment as if to acknowledge his presence.
After a few days, the procession reached its destination - a beautiful, secluded lake in the middle of the forest. The snakes quickly dispersed, but the snake that he had been drawn to stayed behind and waited for him.
The two of them shared a moment above the waters as they looked into each other’s eyes, acknowledging their instant connection. From then on, it was only the two of them and their limitless adventures among the trees and by the lake, both content to live life together in this idyllic home.
§ SAMPLE STORY 2 (NON-GAME VERSION P8)
Once upon a time, there was a small snake named Lucy. She lived in the woods near a small village and often ventured out during the night when things were still and quiet. Every day was the same for Lucy, scampering among the earthy loam of the forest floor in search of insects and grubs to satisfy her hunger.
But one night on her usual midnight march, something stopped Lucy in her tracks – a basket of fruits, vegetables and other goodies had been left outside the village gates. Lucy was curious and hungry so she slithered closer to investigate. As she inched closer, Lucy noticed that the basket was guarded by a large and intimidating snake. He had a long body with shimmering golden scales and a sharp, pointed tail. Lucy knew that this was no ordinary snake – it was a cobra!
The cobra noticed Lucy and coiled itself around the basket as to challenge her. Even with her tiny size, Lucy stood up and faced off against the cobra. Still her bravery paid off and the cobra slithered away, allowing Lucy to feast on all the goodies inside.
From that day forward, Lucy became known as the brave little snake who stood up against a cobra. She was respected and admired by all of her forest friends, and even the villagers began to leave treats outside the gates for her. Lucy lived a long and happy life in the woods, always remembered as the brave and intrepid, little snake.
§ SAMPLE STORY 3 (GAME VERSION P9)
Once, there lived a majestic green snake in the heart of a untouched forest. Its piercing fire suffused its emerald body as it knowingly crawled through the foliage.
The snake had a special affinity for humans and often followed them around their camps, watching from afar as they cooked, talked, and laughed. It had no fear of them and often interacted with them in a friendly manner - though some people were scared of it because of its size.
One day, the snake was out exploring a new part of the forest when it stumbled across a mysterious stone altar with strange symbols carved into it. It was intrigued and decided to investigate further, only to find that the altar held a powerful magical gem.
The snake quickly realized that the gem had the power to grant wishes, and it began to think of all the things that it could wish for. After much deliberation, it decided that it wanted to fly so that it could see the world beyond its forest home. So, with a passionate final wish, the snake found itself rising into the air and soaring through the sky.
It was a liberating experience for the snake, and it enjoyed every second of its newfound freedom. From that day forward, the snake was able to explore distant lands and experience new cultures. It even made friends with other animals along its journey.
The snake was truly happy, and it would never forget the day it found that magical gem.
§ SAMPLE STORY 4 (NON-GAME VERSION P9)
Once upon a time, in a grassy meadow surrounded by forest hills, wove the adventurous and playful snake named Oscar. He was brown and yellow in colour, with a white diamond pattern on his back.
Oscar was always looking for new places to explore, so one day he decided to wander through the forest hills. He slithered up and down the dirt paths, taking in all of nature's beauty around him.
As he kept moving, Oscar noticed a small pond in the middle of the forest. He decided to take a closer look and when he got there, he was in awe. The pond was crystal clear and filled with lily pads and colorful fish.
Oscar couldn't resist the temptation and he quickly jumped into the water. He swam around for hours, enjoying the cool and refreshing sensation of the pond's waters.
As the sunlight dipped behind the trees, Oscar decided to pay one last exploration visit before he finished up for the day. As he rounded a corner, he came face to face with a giant boa constrictor. The two were both startled, but the boa quickly wrapped itself around Oscar and squeezed tight.
Oscar was scared and he knew that his life was in danger, but then something amazing happened. The boa constrictor loosened its strong grip and released Oscar without hurting him. The two then just looked at each other, as if they were communicating something neither of them could understand.
Oscar was so surprised by the snake's kindness and he knew that this experience had changed him forever. From then on,t the world around him was full of wonders, just waiting for him to discover.
CHAPTER: LOG DATA
§ SAMPLE LOG DATA 1 (GAME VERSION P11))
[3/6/2023 7:29:47 PM]Game Start
[3/6/2023 7:29:48 PM][1][3]Once lived in an old abandoned house in the center of a busy city, was a black-bellied golden snake. Day by day he would
[3/6/2023 7:29:48 PM][0][1]Once upon a time there was a snake named Slither who lived in the forest. He had been living peacefully and happily until one day, when
[3/6/2023 7:30:03 PM]Chose[1][3]
[3/6/2023 7:30:04 PM][1][0] scour up and down the crevices of this broken home unnoticed. Occasionally it crawled out into the scared city ridden society to cultivate a small hoard
[3/6/2023 7:30:04 PM][0][1] slither around the house, exploring every corner and crevice.
One day he was out for a stroll in the garden of the old house
[3/6/2023 7:30:17 PM][2][5]
[3/6/2023 7:30:21 PM]Chose[1][0]
[3/6/2023 7:30:22 PM][1][3] of food just enough to keep warm. As chilling winter winds slowly rolled in and darkness increased, no bouts of sunlight hit the grimy abode itself
[3/6/2023 7:30:23 PM][0][1] of food.
One day, the snake decided to venture out a bit further into the city than usual. He was mesmerized by all
[3/6/2023 7:30:52 PM]Chose[1][3]
[3/6/2023 7:30:53 PM][1][4].
Seasons unearthed happiness all around this cozy old home, kids screeched and teenage gossip shook the foundation undoubtedly our friend welcomed shelter in
[3/6/2023 7:30:53 PM][0][1].
The snake was quite content with this lifestyle until one day, when it heard a strange noise coming from the house. Curiosity got the better
[3/6/2023 7:31:19 PM]Chose[0][1]
[3/6/2023 7:31:21 PM][1][3] of it and so, he slithered its way inside towards the commotion. To his surprise were two other snakes - one bright orange spotted snake that
[3/6/2023 7:31:21 PM][0][2] of it, and the snake decided to investigate. As it slithered its way up the stairs, towards the source of this noise, its scales began
[3/6/2023 7:32:09 PM]Chose[0][2]
[3/6/2023 7:32:10 PM][1][4] a tapestry of gold and red waves in the sometimes pitch twilight. When finally it made its way to place from where this racket fell on its
[3/6/2023 7:32:10 PM][0][0] to tremble in fear. It was a human, and it seemed as if he had found the snake's secret hideout.
The man
[3/6/2023 7:32:44 PM]Chose[1][4]
[3/6/2023 7:32:46 PM][0][2] ears, it was astounded by what it saw.
There in the center of this dingy room stood a small boy, no older than
[3/6/2023 7:32:46 PM][1][4] ears, what it encountered swore changed his life forever.
Two people lay huddled next to in the interlocked embrace; poor souls everywhere had
[3/6/2023 7:33:12 PM][2][5]he snake had a massive concussion and it couldn't move for a few minutes.
[3/6/2023 7:33:16 PM]Chose[2][5]
[3/6/2023 7:33:17 PM][0][0]
When it finally regained its senses, the snake realized that the noise was coming from a group of people who were renovating this old home
[3/6/2023 7:33:18 PM][1][0] To it surprise the intruder was an orange rat, thief like eyes darting to and fro then, looking at it . On its left hip pocket was
[3/6/2023 7:34:39 PM]Chose[0][0]
[3/6/2023 7:34:40 PM][1][4]. It had instantly scored itself an arduous task - Surviving this hostile environment would take hidden amidst the remodelling mess- all depending on patience and
[3/6/2023 7:34:40 PM][0][2]. They had come to give the house a much needed facelift and were unearthing all its hidden secrets. However, in their haste they
[3/6/2023 7:35:18 PM]Chose[1][4]
[3/6/2023 7:35:19 PM][0][2] luck.
The snake, however, was a master of unseen and it managed to stay hidden for months on end until the house was finally
[3/6/2023 7:35:19 PM][1][0] strength alike.
Evey night while evading punches and hammer-killer blows, that very same unsettling snake somehow managed to maintain existence into this confusing
[3/6/2023 7:35:44 PM][2][5]and it's agility of movement.
[3/6/2023 7:35:49 PM]Chose[2][5]
[3/6/2023 7:35:50 PM][0][0]The snake was able to dodge the workers' tools and their careless feet. For weeks, it stayed hidden in a corner of the house,
[3/6/2023 7:35:51 PM][1][4] Determined to beat the odds, and escape both unscathed plus undestroyed the snake definitely nested deep down regular ceilings and walls. Now is
[3/6/2023 7:36:54 PM]Chose[1][4]
[3/6/2023 7:36:56 PM][0][1] the time to wait and watch.
As days passed, the snake became accustomed to its new home - it knew where it could find food
[3/6/2023 7:36:56 PM][1][0] another member of new house happily nestled furniture plots manoeuvring escape for various protected corners. Growing about doubled its apparel and authority, this dust-rake
[3/6/2023 7:37:24 PM][2][5]ind it's way down a pipe and into a safe underground haven. Aka the sewer.
[3/6/2023 7:37:27 PM]Chose[2][5]
[3/6/2023 7:37:28 PM][0][2]
The snake stayed there for many years, while the house was slowly remodelled and restored to its former glory. As time passed by,
[3/6/2023 7:37:28 PM][1][0]Gearing up each weakening suna with newfoundfault rather vighiand swimming tamanringly tyrying es never laidoutsdong
[3/6/2023 7:37:45 PM]Chose[1][0]
[3/6/2023 7:37:47 PM][0][2] in the sewers, learning to live off of whatever edible morsels it could find.
Years passed and the snake slowly adapted to this
[3/6/2023 7:37:47 PM][1][0]obtained ulumizedeach tiny wramping word ways falwen curprise lifhpiful second uphew.Finally iazuplettsand w
[3/6/2023 7:38:13 PM]Chose[1][0]
[3/6/2023 7:38:15 PM][1][3]all waters an passiffirece teleasight immish are found campfty confidescedi hraculumsetll tearchedting old for
[3/6/2023 7:38:15 PM][0][1]hile the snake could barely remember it's old life, it was content to spend its days in the sewers of the city.
[3/6/2023 7:38:29 PM]Game End
[3/6/2023 7:38:29 PM]Ate[14]
§ SAMPLE LOG DATA 1 (NON-GAME VERSION P11))
[3/6/2023 7:49:53 PM]Game Start
[3/6/2023 7:49:54 PM][0.6]Once upon a time there was a small snake who lived in the forest. The snake loved to explore his surroundings and would often slither around looking
[3/6/2023 7:49:54 PM][1.4]When Gus came to visit his uncle's country farm he was excited for the adventures that awaited him. He had heard all about how this new environment offered
[3/6/2023 7:50:22 PM][Add Own Text]There once was a Boy named George that lived on a farm near a forest.
[3/6/2023 7:50:23 PM][0.6] One day, while walking in the forest, he came across a large snake. The snake was coiled up and had bright yellow eyes that seemed to
[3/6/2023 7:50:23 PM][1.4] On the very edge of what felt like a forbidden wild abyss, there was an old abandoned shed that felt particularly displaced from Nature. Little did George know
[3/6/2023 7:50:54 PM][Chose][0.6]
[3/6/2023 7:50:55 PM][0.6] look right through him.
George was scared, but he knew that snakes were important to the environment and so he decided not to run away.
[3/6/2023 7:50:55 PM][1.4] stare straight into George's soul. As George stepped closer, his eyes began to widen as fear and tension filled the air.
George hesitated at
[3/6/2023 7:51:34 PM][Add Own Text]e was now convinced that he was chosen to be a member of the dark side.
[3/6/2023 7:51:35 PM][0.6] He knew that he could no longer live his old life and had to accept the snake as his master.
He followed the snake deep into
[3/6/2023 7:51:35 PM][1.4]He timidly reached out to softly touch its flat yet slimy black color. When he had finally worked up enough courage, his finger immediately
[3/6/2023 7:51:49 PM][Chose][0.6]
[3/6/2023 7:51:50 PM][0.6] the forest and to a large clearing with an old, decrepit cabin. The snake slithered around George and motioned for him to enter the cabin
[3/6/2023 7:51:50 PM][1.4] the Forest and before long, they came across an old temple. Inside the temple was a large room filled with pits of glowing green goo - this
[3/6/2023 7:52:31 PM][Chose][1.4]
[3/6/2023 7:52:32 PM][0.6] was the snake's home.
The snake spoke to George, telling him that he could have anything he wanted if he pledged his allegiance and
[3/6/2023 7:52:33 PM][1.4] was no ordinary sight. The snake slithered in and placed itself at the centre of the room before beginning to speak:
"Close humanling
[3/6/2023 7:53:18 PM][Chose][1.4]
[3/6/2023 7:53:19 PM][0.6], I am the guardian of this temple and you have been chosen to serve me. From now on you will be my servant and do as I command
[3/6/2023 7:53:20 PM][1.4], it's time ye became one with the snakes of dragons. I shall be tay tupping hearfy to protect youalways from lives touched barren
[3/6/2023 7:53:39 PM][Chose][1.4]
[3/6/2023 7:53:40 PM][0.6]. I grant ye the power to control the elements of this world, but only when you accept my blessing."
George was terrified and uncertain what
[3/6/2023 7:53:40 PM][1.4]. Gallivanting about every daye frivolipaseday joorneys with on larkining flightal skeemeshyne lizard wingable sprites
[3/6/2023 7:54:05 PM][Chose][1.4]
[3/6/2023 7:54:06 PM][0.6].
George was amazed by what he heard and felt compelled to accept the snake's offer, so he nodded in agreement. He quickly began
[3/6/2023 7:54:06 PM][1.4]. Forewards danwortion glymback bowards blessid them plateary putenderbound beasts."
George was astounded but knew this
[3/6/2023 7:54:12 PM][Chose][1.4]
[3/6/2023 7:54:14 PM][0.6] was his destiny. He bowed down before the snake and accepted its offer to become a part of its family. From then on, George became known as
[3/6/2023 7:54:14 PM][1.4] must be the purpose of whatever journey lay ahead he was been. The snake placed its hood across his head and instructed him ingest a mysterious blue liquid in
[3/6/2023 7:54:44 PM][Add Own Text]absolute gibberish must be a forgotten tongue spoken by the ancient snake
[3/6/2023 7:54:45 PM][1.4]s of dragons. With a slight belly flutter, he stepped out with the Giant Snake behind him and declared his loyalty to protecting its power full force
[3/6/2023 7:54:45 PM][0.6]s. He nodded his head in agreement and from that day forward, became the snake's apprentice. The snake taught George the forgotten language of dragons and
[3/6/2023 7:55:17 PM][Chose][0.6]
[3/6/2023 7:55:19 PM][1.4] came to see him as a son. George worked hard learning the magicks from his master and embracing the darkness with open arms and righteous judgment.
[3/6/2023 7:55:19 PM][0.6] together they travelled to distant lands, learning secrets and exploring the unknown.
Years passed by and George became an expert in dragon lore and a
[3/6/2023 7:55:46 PM][Chose][1.4]
[3/6/2023 7:55:47 PM][0.6]Years later, George had become a powerful wizard and a respected leader of the dark side. He was feared by many but respected for his wisdom and
[3/6/2023 7:55:48 PM][1.4]When George reached maturity, he joined the warrior army of Serpentin Graywitch who worshipped him and regarded him as one of their champions. George made
[3/6/2023 7:56:06 PM][Chose][1.4]
[3/6/2023 7:56:07 PM][1.4] excellent achievements in the army, never losing a battle or mission. Little by little, with the help of George its prime warrior, the dark forces slowly
[3/6/2023 7:56:07 PM][0.6] sure to always keep the snake close by his side and together they fought off many enemies.
The snake was very proud of George and eventually rewarded
[3/6/2023 7:56:25 PM][Chose][1.4]
[3/6/2023 7:56:26 PM][0.6] began to take over the world.
Today, George is known as the King of Snakes and his faithful servant still by his side.
[3/6/2023 7:56:27 PM][1.4] took control of the entire landscape and drew all things into itselfy. Who knew one random meeting with a snake in the forest would lead to such power
[3/6/2023 7:56:44 PM][Chose][1.4]
[3/6/2023 7:56:45 PM][0.6]!
[3/6/2023 7:56:45 PM][1.4]?
[3/6/2023 7:56:47 PM][Chose][1.4]
[3/6/2023 7:57:01 PM]Game End
CHAPTER: SURVEY QUESTIONS
§ STORY EVALUATION
In a degree of 5 (from strongly disagree to strongly agree), answer the following questions:
Q1 I think the logic of the story is well organized.
Q2 I think the language in the story was professionally written.
Q3 I think the overall quality of the story was perfect.
§ EXPERIENCE EVALUATION
In a degree of 5 (from strongly disagree to strongly agree), answer the following questions:
Q1 I think the story is written by myself.
Q2 I think I prioritize the quality of the story in the game.
Q3 I think I like this story and want to share it with others.
Q1 I think the system is too complex to be understood.
Q2 I think the gameplay interrupts my thinking while writing the story.
Q3 I think I am engaged in the co-writing process.
§ DEMOGRAPHIC QUESTIONS
Q1 Did you co-create content with any Artificial Intelligence before? Y/N
Q2 How would you describe your writing skills?
1 Never wrote stories
2 Have some skeletons of stories but never complete them
3 Wrote some stories and shared them with others privately or published them
|
http://arxiv.org/abs/2307.06279v1 | 20230709050025 | SpreadNUTS -- Moderate Dynamic Extension of Paths for No-U-Turn Sampling & Partitioning Visited Regions | [ "Fareed Sheriff" ] | stat.CO | [ "stat.CO", "cs.LG" ] |
SpreadNUTS — Moderate Dynamic Extension of Paths for No-U-Turn Sampling & Partitioning Visited Regions
Fareed Sheriff
May 17, 2023
============================================================================================
§ INTRODUCTION & PRIOR WORK
Markov chain Monte Carlo (MCMC) methods have existed for a long time and the field is well-explored. The purpose of MCMC methods is to approximate a distribution through repeated sampling; most MCMC algorithms exhibit asymptotically optimal behavior in that they converge to the true distribution in the limit. However, what differentiates these algorithms is their practical convergence guarantees and efficiency. While a sampler may eventually approximate a distribution well, because it is used in the real world it is necessary that the point at which the sampler yields a good estimate of the distribution be reachable in a reasonable amount of time. Similarly, if it is computationally difficult or intractable to produce good samples from a distribution for use in estimation, then there is no real-world utility afforded by the sampler. Thus, most MCMC methods these days focus on improving efficiency and speeding up convergence.
We present a cursory overview of popular MCMC techniques. Random-walk Metropolis-Hastings is a rudimentary algorithm for sampling from a distribution by inducing a Markov chain on repeated samples: the next sample is proposed by a draw from a proposal distribution parameterized by the current sample and is accepted or rejected according to the Metropolis criterion. However, as the name suggests, this exhibits strong random-walk behavior, making it practically undesirable due to the possibly long burn-in period and the large number of samples needed to thoroughly explore the distribution. In fact, many MCMC algorithms suffer from random-walk behavior and often only mitigate such behavior, as outright eliminating random walks is difficult. Hamiltonian Monte Carlo (HMC) is a class of MCMC methods that theoretically exhibit no random-walk behavior because of properties related to Hamiltonian dynamics. This paper introduces modifications to a specific HMC algorithm known as the no-U-turn sampler (NUTS) that aim to explore the sample space faster than NUTS, yielding a sampler that converges to the true distribution faster than NUTS.
§.§ Hamiltonian/Hybrid Monte Carlo
[This subsection summarizes relevant parts of <cit.>]
Hamiltonian dynamics work on a system of position-momentum pairs (q,p) subject to Hamilton's equations
dq_i/dt = ∂ H/∂ p_i, dp_i/dt = -∂ H/∂ q_i
where p,q are vector-valued functions of time over a d-dimensional space and H(q,p) is the Hamiltonian, which represents the system's total energy. We assume for HMC that the Hamiltonian expresses the system's potential and kinetic energies H(q,p) = U(q)+K(p). We also define for HMC U(q) to be the negative of the log density of q up to a constant and K(p) = p^TM^-1p/2 to be the negative of the log density of the Gaussian with zero mean and covariance matrix M (often, the Gaussians will be uncorrelated, so M will be diagonal), also up to a constant. We thus rewrite Hamilton's equations to be
dq_i/dt = (M^-1p)_i, dp_i/dt = - ∂ U/∂ q_i
As with MCMC methods as a whole, the Hamiltonian is (time-)reversible and is invariant under Hamilton's equations, meaning the acceptance probability is 1. In practice, it is close to 1 because we cannot practically make the Hamiltonian invariant when solving Hamilton's equations due to error accumulated when solving the PDEs numerically.
To numerically solve the PDEs, we use a symplectic integrator, which preserves the Hamiltonian's invariance under integration of Hamilton's equations. A commonly-used symplectic integrator is the leapfrog integrator, which makes use of a "halfstep" in the integration process to better inform the estimate of the Hamiltonian at the next timestep. The equations that govern the leapfrog integrator are as follows with stepsize ε:
p_i(t+ε/2) = p_i(t) - ε/2 ∂ U/∂ q_i(q(t))
q_i(t+ε) = q_i(t) + ε p_i(t+ε/2)/m_i
p_i(t+ε) = p_i(t+ε/2) - ε/2 ∂ U/∂ q_i(q(t+ε))
In effect, we compute an estimate of p at t+ε/2, estimate q at t+ε using this estimate of p, then again estimate p at t+ε using the estimate of q at t+ε, thus taking into account the estimate of p at t+ε/2 and p's relationship with q.
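For concreteness, a minimal Python/NumPy sketch of a single leapfrog step is given below; the function name, the gradient callback grad_U, and the per-coordinate mass vector m are illustrative interface choices rather than details taken from any particular NUTS implementation.

import numpy as np

def leapfrog(q, p, grad_U, eps, m):
    """One leapfrog step for H(q, p) = U(q) + sum_i p_i^2 / (2 m_i)."""
    p_half = p - 0.5 * eps * grad_U(q)          # half-step in momentum
    q_new = q + eps * p_half / m                # full step in position
    p_new = p_half - 0.5 * eps * grad_U(q_new)  # second half-step in momentum
    return q_new, p_new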
HMC samples from continuous distributions on ^d with well-defined densities and partials of the log densities. We define the joint distribution P of (p,q) on the Hamiltonian H to be
P(q,p) = 1/Z e^-H(q,p)/T
for positive constants Z and T. Then,
H(q,p) = U(q)+K(p) → P(q,p) = 1/Ze^-U(q)/Te^-K(p)/T
We choose U(q) to be -logπ(q) for the distribution π from which we are trying to sample. The distribution of K(p) is independent of q, but it is common to use a quadratic like K(p) = p^TM^-1p/2. For diagonal M, this yields K(p) = ∑_ip^2_i/2m_i.
HMC works in two steps. The first step draws a value for momentum p using the zero-centered Gaussian with covariance matrix M. The second step conducts a Metropolis update using the Hamiltonian. Using a stepsize of ε for L steps, a trajectory of samples is calculated, which is accepted with probability
min(1,exp(U(q)-U(q^*)+K(p)-K(p^*))) = min(1,exp(H(q,p)-H(q^*,p^*)))
which works exactly because the Hamiltonian is time-reversible.
Practical considerations to take into account when implementing HMC include varying ε and L. Note, however, that HMC requires adjustment/setting of the parameters ε and L.
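To make the two-step structure explicit, the sketch below performs a single HMC transition using the leapfrog step sketched above; the unit masses, the fixed (ε, L) arguments, and the function interface are simplifying assumptions for illustration.

import numpy as np

def hmc_step(q, U, grad_U, eps, L, rng=None):
    """One HMC transition with unit masses (M = I)."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.standard_normal(q.shape)            # step 1: draw momentum ~ N(0, I)
    q_new, p_new = q.copy(), p.copy()
    for _ in range(L):                          # step 2: simulate Hamiltonian dynamics
        q_new, p_new = leapfrog(q_new, p_new, grad_U, eps, np.ones_like(q))
    # Metropolis accept/reject with probability min(1, exp(H(q,p) - H(q*,p*)))
    h_old = U(q) + 0.5 * np.dot(p, p)
    h_new = U(q_new) + 0.5 * np.dot(p_new, p_new)
    return q_new if rng.random() < np.exp(h_old - h_new) else q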
§ NO-U-TURN SAMPLING
One of the few, but biggest, problems with HMC <cit.> is the necessity to tune ε and L — without proper tuning, we lose many of the efficiency guarantees of HMC. No-U-turn sampling (NUTS) <cit.> aims to alleviate some of these problems. NUTS is a type of HMC algorithm that does not calculate the trajectory for a constant number of steps L and instead stops the trajectory once sufficient error or explored space has been accumulated. Furthermore, it tunes ε dynamically, making NUTS an effectively parameterless version of HMC.
NUTS replaces a constant L by stopping the trajectory once some condition has been triggered. This condition checks that the distance between the proposal q^* and the initial q will not continue to increase. We can check this by taking the product of the momentum and the difference between the sampled proposal and initial proposal (q^*-q)· p^* (the U-turn condition), noting that if it is negative, then the direction of our next step will be toward already-sampled points. Because this does not maintain time-reversibility, NUTS runs the Hamiltonian both forward and backward with equal probability and calculates the U-turn condition between the endpoints of the extension of the trajectory generated in the current iteration, checking that it is nonnegative. NUTS generates the trajectory through a doubling scheme that randomly chooses a direction (forward or backward in time), then on the ith iteration of generating this trajectory takes 2^i timesteps in the chosen direction, adding the calculated points to the current trajectory. A sample is chosen from this trajectory in the following manner: a rejection energy threshold u is first sampled uniformly from [0,P(q,p)] = [0,e^-H(q,p)], the point is then extended forward and backward in time repeatedly as described above, and a point is finally selected uniformly at random from the resulting "tree" of points (trajectory).
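A minimal sketch of the U-turn check between the two endpoints of the current trajectory segment follows; the function name and the convention of passing the leftmost and rightmost states are our own.

import numpy as np

def no_u_turn(q_minus, p_minus, q_plus, p_plus):
    """True while the segment is still moving apart, i.e. the U-turn
    quantity (q+ - q-) . p is nonnegative at both endpoints."""
    dq = q_plus - q_minus
    return np.dot(dq, p_plus) >= 0 and np.dot(dq, p_minus) >= 0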
§ MODERATE DYNAMIC EXTENSION OF PATHS
We consider two additions to the NUTS scheme: relaxing the U-turn condition checks on the binary tree induced by the generated trajectory, and increasing the size of the trajectory by more than double every iteration. Our reasoning behind both of these ideas is that the number of U-turn condition checks on the subtrees of the subtrajectory created by the doubling process in NUTS adds excessive (and potentially underjustified) overhead when checking that the U-turn condition is not violated between the two leaves on the edge of each subtree. This overhead is linear in the number of generated points. While it is stated that "except for very simple models with very little data, the costs of these inner products should be negligible compared to the cost of computing gradients" <cit.> (in reference to the inner products calculated when evaluating the U-turn condition), such a rigorous check can in and of itself be counterproductive and could risk cutting off the trajectory being generated before it has sufficiently explored the space around it. This is because while the U-turn condition checks whether the trajectory turns back on itself, if we check for violation between many pairs of points, adjacent or not, this degenerates into a check that the trajectory is always pointing in the direction of unexplored space.
However, this is not a very useful condition to force, because we could have a trajectory that moves backward a tiny bit but later continues to move away from previously-explored points, thus exhibiting a general trend toward unexplored space. While we agree that no violation of the U-turn condition should occur between the first few points on the path, we note that as long as the general trend of the path does not violate the U-turn condition, the path contributes to exploring space. We thus strike a compromise: we relax the U-turn condition checks on the balanced tree built on each iteration's points by continuing to check that the U-turn condition is not violated between the leaves on the edge of each subtree of the tree built on each iteration's points, but now build a k-ary tree on the calculated points instead of a binary tree, where k is the iteration number. This both decreases the number of U-turn condition checks and iteratively relaxes the strictness of the U-turn violation penalty as more points are generated.
Specifically, instead of doubling the tree by adding 2^k points to the end of our path in direction d∼{-1,1}, we add k^k points and check the U-turn condition fewer times on these points: where we would check the U-turn condition around 2^(k log_2 k) = k^k times on these points, we now check it (k^k-1)/(k-1) ≈ k^(k-1) = 2^((k-1) log_2 k) times, which is less than 2^(k log_2 k) by a multiplicative factor of k (which grows asymptotically).
§ PARTITIONING VISITED REGIONS
To prevent ourselves from exploring parts of the distribution that we have already explored, when sampling from the generated trajectory, we bias our selection toward points the space around which we have not already explored. This still satisfies detailed balance because the probability of having already chosen a point from some subspace of the distribution is uniform across all subspaces. Thus, we still have the same convergence guarantees as NUTS. However, we attempt to sample the distribution in a more "spread out" manner by exploring unexplored parts of the trajectory (which itself maintains the invariant of a fixed density) so in the end we still sample in accordance with the distribution's density but with regularization that enforces exploring unexplored parts of the space.
We can keep track of how much space we have explored close to a datapoint using any type of querying data structure that allows us to calculate some measure of how explored the space around a given point is (for example, a multidimensional Gaussian convolved with all previously-sampled points). For the sake of example and efficiency, we consider a k-dimensional (k-d) tree T on all sampled points that allows us to find the closest point in average-case O(log n) time, with insertion also taking O(log n).
Our metric d_p for how much space has been explored near a given point p will be the squared L_2 norm of the difference between p and its closest neighbor in T (the sum of squares of the coordinate-wise differences). We then define the probability of choosing p to be proportional to d_p and the metric of all other points of the trajectory, so that the probability we select p from trajectory t = (p_0,⋯, p_k) equals
d_p/∑_p_i∈ td_p_i
We can then choose a point by allocating some proportion of a uniform r.v. to each point and sampling from this uniform to select the point. This is efficient, and so the entire procedure allows us to regularize toward sampling the distribution thoroughly while maintaining sampling by density, at the cost of a multiplicative O(log n) factor in the sampling process.
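The selection step can be sketched as follows using scipy's cKDTree over previously accepted samples; note that cKDTree is a static structure, so in practice it would be rebuilt periodically rather than updated incrementally as described above, and the uniform fallback when there is no history yet is our own choice.

import numpy as np
from scipy.spatial import cKDTree

def select_point(trajectory, past_samples, rng=None):
    """Pick a point from `trajectory` with probability proportional to d_p,
    the squared distance to its nearest previously accepted sample."""
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(trajectory)
    if len(past_samples) == 0:
        return pts[rng.integers(len(pts))]          # no history yet: uniform choice
    dist, _ = cKDTree(np.asarray(past_samples)).query(pts)
    weights = dist ** 2                             # d_p for every trajectory point
    if weights.sum() == 0.0:
        return pts[rng.integers(len(pts))]
    return pts[rng.choice(len(pts), p=weights / weights.sum())]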
§ RESULTS
We discuss our testing regime in more detail: we randomly generate mixtures of multivariate Gaussians, which we use to compare how well regular NUTS samples relative to the modified NUTS algorithm presented in this paper, by comparing the empirical distributions of each algorithm with the true distribution of the mixtures using a sort of discretized total variation metric. We refer to our algorithm as "SpreadNUTS" because it attempts to spread NUTS trajectories over the sample space so as to leave less of the sample space unexplored.[Our code for SpreadNUTS is based on the code at <cit.>, and we test SpreadNUTS against this implementation of NUTS]
§.§ Testing Regime
We randomly select k Gaussian distributions, where k is distributed over a discrete uniform that takes values from 1 to 4 (this upper bound is arbitrary). We choose the means of the distributions uniformly at random from the interval [-20, 20] in each coordinate (this choice is also arbitrary); we choose the covariance matrix by generating a matrix whose entries are uniformly random over [0,1], multiplying it by its transpose (yielding a symmetric positive semidefinite matrix), then multiplying by a draw from a uniform over the interval [0,4] (also arbitrary). This ensures the covariance matrix is positive semidefinite (and is also diagonally dominant). We also choose the dimension of the Gaussians uniformly at random from 1 to 3 — the low upper bound on dimension is because for dimensions 4 or higher, regular NUTS tends to perform very slowly and it takes too much time to generate samples. Finally, we generate mixture probabilities p⃗ such that the elementwise sum is 1 and each value is nonnegative by generating [0,1] entries, then dividing by the sum of these entries. While this does not yield a uniform distribution over the simplex (it is biased toward the uniform weight vector), this is okay for our purposes because we desire mixtures biased toward uniformly sampling from each component so that there is sufficient density for sampling algorithms to actually sample from each of the Gaussians. This procedure randomly generates Gaussian mixtures. Our choice of Gaussian mixtures was arbitrary and based primarily on the convenience of sampling from them through methods other than Monte Carlo.
We define our discretized total variation metric by randomly sampling from the Gaussian mixture (which we do by randomly sampling from each Gaussian, then choosing a number of samples from each collection proportional to the probability of that Gaussian relative to the rest of the mixture). We then generate a relative empirical pdf by discretizing the region from -20 to 20 in each coordinate into 0.1-unit cells and calculating the proportion of samples in each cell. Our discretized total variation metric m_TV is calculated by summing the absolute differences between the relative empirical pdf of the samples generated by each algorithm and the relative empirical pdf generated by sampling directly from the weighted Gaussians (i.e., from the mixture). Our comparison between the two algorithms is done by looking at both the ratio and the actual values of m_TV between the algorithms and the mixture samples over the choice of dimension. We also compare this with the m_TV between two independent resamplings of the Gaussian mixture in order to obtain a means of roughly evaluating how well our algorithm performs both relative to NUTS and relative to a true sampler.
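A sketch of this comparison is given below; the sparse per-cell bookkeeping (rather than a dense histogram) and the default 0.1-unit cell width are implementation choices of ours, and the conventional factor of 1/2 in the total variation distance is omitted to match the definition in the text.

import numpy as np

def discretized_tv(samples_a, samples_b, width=0.1):
    """Sum of absolute differences between the relative empirical pdfs of two
    sample sets, binned into width-unit cells (kept sparse so it scales with dimension)."""
    def cell_probs(samples):
        pts = np.asarray(samples, dtype=float).reshape(len(samples), -1)
        cells = np.floor(pts / width).astype(int)
        keys, counts = np.unique(cells, axis=0, return_counts=True)
        return {tuple(k): c / len(pts) for k, c in zip(keys, counts)}
    pa, pb = cell_probs(samples_a), cell_probs(samples_b)
    return sum(abs(pa.get(c, 0.0) - pb.get(c, 0.0)) for c in set(pa) | set(pb))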
§.§ Results & Conclusion
We compare the m_TV metric between NUTS and SpreadNUTS by plotting them against each other and samples resampled from the mixture as well as by plotting the log of the m_TV ratio between NUTS and SpreadNUTS as well as between each algorithm and samples resampled from the mixture. In the first plot, the lower the m_TV, the better. In the second plot, the close to 0 the score the better; specifically, the log of the ratio between the algorithm and resampled mixture should ideally be close to 0 because this indicates it performs as well as samples from the mixture. We then discuss trends we noticed and provide examples of plots to compare NUTS to SpreadNUTS visually.
The following is a plot of m_TV vs. dimension for NUTS, our algorithm, and samples from a Gaussian mixture, all compared against samples from a Gaussian mixture. Note that we compare two distinct draws from a Gaussian mixture with each other when calculating the m_TV to estimate how much of the m_TV of the algorithms is due to randomness attributed to relatively small sample size (we sample 10000 points per mixture and discard the first 500 as burn-in). Alongside it is a comparison of the ratios of the NUTS m_TV and our algorithm's m_TV to the mixture m_TV vs. dimension, to see how close the two algorithms' m_TV gets to that of a true random sample.
The following are plots of m_TV ratio with the mixture m_TV for varying values of k (the number of Gaussians in the mixture) after fixing dimension.
The above shows that for dimension 1, NUTS performs better than SpreadNUTS; however, for higher dimensions, SpreadNUTS gets closer and closer to Gaussian sampling, suggesting that it handles density islands better than NUTS.
We note some interesting idiosyncrasies of SpreadNUTS: although it tends to perform better than NUTS in higher dimensions, what might actually be going on is that when the distance between "islands" of density in a distribution is small enough for classical NUTS to feasibly leap across islands, SpreadNUTS simply makes it more likely that we will actually leap across islands. However, when the distance between these islands is too large for classical NUTS to reasonably travel between islands, SpreadNUTS cannot increase a low probability of traversing these islands enough for it to happen often. Thus, we conclude that while SpreadNUTS may increase the probability of traversing relatively high-density portions of the distribution relative to classical NUTS, it only attempts to "smooth" sampling across parts of the sample space that classical NUTS explores — it cannot explore parts of the sample space that classical NUTS does not explore. We examine two examples that showcase this trend: 2d Gaussian mixtures consisting of two distributions 𝒩(μ,I_2) and 𝒩(-μ, I_2) with equal weight on both. In the first figure, μ = ⟨2.5,2.5⟩; in the second figure, μ = ⟨5,5⟩. We compare SpreadNUTS to NUTS and see that while SpreadNUTS increases the probability of traversing these islands relative to classical NUTS, SpreadNUTS does not traverse the islands when classical NUTS does not. Furthermore, looking at the above figures, we can see that on the whole, the SpreadNUTS m_TV gets closer to Gaussian sampling as dimension increases, while the NUTS m_TV first increases at dimension 2, then decreases at dimension 3 but remains significantly greater than that of either Gaussian sampling or SpreadNUTS sampling. We note that the number of dimensions used was small (3) and the number of Gaussians in the mixture ranged from 1 to 4; furthermore, the number of samples was 9.5K for each sampling method. Some error may have been introduced by the relatively small number of samples. A bigger point of contention is that the number of dimensions was too small to make any concrete claims about the efficacy of NUTS vs. SpreadNUTS, and the use of Gaussian mixtures as our sample distribution may have introduced some bias that helps SpreadNUTS sample better than NUTS. There is more testing to be done, but we tentatively conclude that SpreadNUTS alleviates to some degree the lack of sample-space exploration present in NUTS.
unsrt
§ APPENDIX
We derive the gradient and log-likelihood of the Gaussian mixture M ∼∑_i=1^N π_i 𝒩(μ_i, Σ_i). The likelihood (for a single datapoint x) is
p_M(x|π,μ⃗,Σ⃗) = ∑_i=1^N π_i 𝒩(x|μ_i,Σ_i)
and the log-likelihood is
ln p_M(x|π, μ⃗,Σ⃗) = ln(∑_i=1^N π_i 𝒩(x|μ_i,Σ_i))
For a single Gaussian, this devolves to c -0.5 (μ- x)^TΣ^-1(μ-x) for extra constant c = -0.5ln(|Σ|(2π)^k).
Then, the gradient of the log-likelihood w.r.t. μ⃗ is
∂ln(p_M(x|π, μ⃗, Σ⃗))/∂μ⃗ = 1/∑_iπ_i𝒩(x|μ_i,Σ_i)·∂ p_M(x|π,μ⃗,Σ⃗)/∂μ⃗
∂ p_M(x|π,μ⃗,Σ⃗)/∂μ⃗ = ∑_i∂π_i𝒩(x|μ_i,Σ_i)/∂μ_i
∂π_i𝒩(x|μ_i,Σ_i)/∂μ_i = ∂/∂μ_i(π_i√(|Σ^-1_i|(2π)^-k)exp(-1/2(μ_i-x)^TΣ^-1_i(μ_i-x))) = Σ^-1_i(x-μ_i)π_i𝒩(x|μ_i,Σ_i)
∂ln(p_M(x|π, μ⃗, Σ⃗))/∂μ⃗ = ∑_iΣ^-1_i(x-μ_i)π_i𝒩(x|μ_i,Σ_i)/∑_iπ_i𝒩(x|μ_i,Σ_i)
For a single Gaussian, this simplifies to Σ^-1(x-μ).
As an aside, our testing regime experiences compounding rounding errors when exponentiating and taking logs, specifically when we take the log of the exponential of a very negative number, which underflows to 0. We attempt to alleviate this problem by expressing the proportions of the normal likelihoods π_i𝒩(x|μ_i,Σ_i) to the sum of the normal likelihoods as the exponential of the difference of the log likelihood and the log of the sum of likelihoods, where we calculate the log of the sum of likelihoods by summing logs as below:
log(x+y) = log(x(1+y/x)) = logx + log(1+y/x) = logx + log(1+e^logy-logx)
log∑_ix_i = log(x_1(1+1/x_1∑_i=2^kx_i)) = logx_1 + log(1+e^log∑_i>1x_i-logx_1)
log∑_i>1x_i = logx_2 + log(1+e^log∑_i>2x_i-logx_2)
x_i/∑x_i = exp(logx_i-log∑x_i)
Thus, we can recursively express the log of sums as the sum of log sums (in practice, we sort the Gaussian pdfs when evaluating logs to minimize error at each step, yielding a technique known as LogSumExp or LSE). This helps decrease error accumulated when summing likelihoods because of the error introduced when summing exponentials.
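A sketch of the gradient evaluation that uses this log-space trick (via scipy's logsumexp rather than the hand-rolled recursion above) is shown below; the interface is our own, and the returned vector evaluates the summed expression derived above, which in an HMC implementation serves as the gradient of U(x) = -ln p_M(x).

import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def grad_U_mixture(x, weights, means, covs):
    """Evaluates sum_i r_i(x) Sigma_i^{-1} (x - mu_i), with responsibilities
    r_i = pi_i N_i / sum_j pi_j N_j computed in log space to avoid underflow."""
    log_terms = np.array([
        np.log(w) + multivariate_normal.logpdf(x, mean=mu, cov=cov)
        for w, mu, cov in zip(weights, means, covs)
    ])
    resp = np.exp(log_terms - logsumexp(log_terms))
    terms = np.array([np.linalg.solve(cov, x - mu) for mu, cov in zip(means, covs)])
    return resp @ terms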
|
http://arxiv.org/abs/2307.05967v1 | 20230712072826 | Towards an integrated determination of proton, deuteron and nuclear PDFs | [
"Tanjona Rabemananjara"
] | hep-ph | [
"hep-ph"
] |
Towards an integrated determination of
proton, deuteron and nuclear PDFs
Tanjona R. Rabemananjara
Department of Physics and Astronomy, Vrije Universiteit, NL-1081 HV Amsterdam
Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands
We present progress towards a unified framework enabling the simultaneous determination of
the parton distribution functions (PDFs) of the proton, deuteron, and nuclei up to lead (^208Pb).
Our approach is based on the
integration of the fitting framework underlying the nNNPDF3.0 determination of nuclear PDFs
into that adopted for the NNPDF4.0 global analysis of proton PDFs.
Our work paves the
way toward a fully integrated global analysis of non-perturbative QCD – a key ingredient for the
exploitation of the scientific potential of
present and future nuclear and particle physics facilities such as the Electron-Ion Collider
(EIC).
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
Introduction.
Future measurements at the recently approved Electron-Ion
Collider (EIC) <cit.> and at the High-Luminosity
Large Hadron Collider (HL-LHC) <cit.> will perform key measurements that can pin down the
dynamics of Quantum Chromodynamics (QCD).
By colliding
(polarized) beams with protons, deuterons, or heavier nuclei over a range of center-of-mass
energies, these experiments will be fundamental in understanding how partons are distributed
in momentum and position space within a proton and how the parton distributions of nucleons
bound within nuclei are modified w.r.t. their free-nucleon counterparts.
In order to fully
interpret the precision-level measurements offered by such experiments, the theoretical
determination of both the unpolarized proton and nuclear parton distribution functions,
henceforth referred to as (n)PDFs, needs to be improved.
While the determination of the free-proton PDFs has seen quite an advancement in recent
years <cit.>
– owing mostly to the availability of a broad
experimental dataset and theoretical improvements in higher-order calculations
– the current status in the determination of nuclear PDFs is somewhat less advanced.
In addition,
although both proton and nuclear PDFs are simultaneously constrained by datasets
where one of the targets or projectiles
is not a free proton, their extractions are usually performed
separately.
On one hand, it has been well understood that measurements involving deuteron and heavier
nuclear targets play a significant role in disentangling the proton's quark and antiquark
distributions, and in separating the up and down PDFs for large momentum fractions in which
searches for physics beyond the Standard Model (BSM) are relevant <cit.>.
In the past, different
approaches have been adopted to account for nuclear corrections in proton PDF fits, each with
its own motivations and limitations. The approach adopted in the NNPDF
methodology <cit.>
consists in adding the uncertainties due to nuclear effects as an extra contribution to the
theory covariance matrix <cit.>.
On the other hand, all hadronic datasets included in the determination of nuclear PDFs involve
a free-proton in the initial state. Most nuclear PDF sets are therefore determined assuming a
fixed proton baseline which can be considered as a theoretical bias whose effect is difficult
to estimate. In the nNNPDF methodology <cit.>,
although the A=1 dependence
(with A representing the atomic mass number) is also fitted on the same footing as A ≠ 1
by means of a neural network, a fixed proton baseline is still required to enforce the A → 1
limit, therefore introducing a potential bias in that the back-reaction
of the fitted data on the proton PDF is ignored.
Such a constraint is imposed at the level of
the χ^2 as a penalty by means of a Lagrange multiplier.
Motivated by the need for a consistent and concurrent extraction of the unpolarized
proton, deuteron, and heavier nuclear PDFs, we present here
work in progress towards an “integrated fit” (see also <cit.> for a first attempt) in
which the atomic mass number dependence is smoothly parametrized from A=1 to A=208
by fitting simultaneously to proton, deuteron, and heavier nuclear datasets,
removing the need for imposing a boundary condition for the A=1 limit.
A similar idea was applied to PDFs and fragmentation functions
by the JAM collaboration in <cit.>.
Our approach is
based on the integration of the fitting framework underlying the nNNPDF3.0 determination
of nuclear PDFs into that adopted for the NNPDF4.0 global analysis of proton PDFs <cit.>.
As a first attempt to apply our methodology, we perform fits in which only deep inelastic
scattering
(DIS) processes are included using NNLO QCD calculations with perturbative charm.
Here we briefly describe the integrated fitting
framework, emphasizing its main differences w.r.t. the (n)NNPDF approaches.
We then study the stability of the integrated fits based on the NNPDF4.0 default hyperparameters
from which all the subsequent results are derived. We assess the impacts of the integrated
fitting methodology on the proton and nuclear PDFs by comparing the results with the reference
(n)NNPDF determination. Conclusions are drawn in the last section.
Methodology.
As mentioned above, the integrated fitting methodology incorporates the nuclear
PDF parametrization from nNNPDF3.0 into the determination of free-proton PDF using the NNPDF4.0
methodology. The reason for this is twofold. First, albeit the nNNPDF3.0 is also based on
deterministic minimization algorithm it does not provide some of the advanced machine learning
techniques that the NNPDF4.0 possesses – such as the automatic tuning of the hyperparameters
using the k-folding procedure <cit.>. In addition, the NNPDF4.0
methodology also provides
ways to carefully validate the resulting PDF uncertainties via the closure and future test
approaches. Nonetheless, the two determinations share a large number of similarities, one among which is
the parametrization of the (n)PDFs in the evolution basis.
The main modification to the NNPDF4.0 methodology to account for the nuclear fit consists in
parametrizing the A-dependence of the (n)PDFs during the fit. That is, the
relation between the output of the neural network and the (n)PDFs is given by:
xf^A_k(x, Q_0; θ) = η_k^A
x^1-α_k^A (1-x)^β_k^ANN_k^A (x, Q_0; θ),
where k runs over the elements of the PDF in the evolution basis,
NN_k^A(x; θ) is the k-th output of the neural network,
θ indicates the full set of neural network parameters, η_k^A
is the normalization corresponding to the k-th PDF, and A
represents the atomic mass number of the proton/deuteron/nucleus. Note that in principle
the preprocessing exponents α_k^A and β_k^A
should also depend on A. The way in which such a dependence on the atomic
mass number A is propagated through the fitting framework is
illustrated in Fig. <ref>.
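As an illustration of this relation, the toy NumPy sketch below evaluates the parametrization for a single flavour k from a generic feed-forward network that takes the atomic mass number A as an extra input; the (x, ln x, A) input features, the single hidden layer, and the A-independent treatment of the preprocessing exponents are simplifying assumptions made for this sketch and do not reproduce the actual n3fit implementation.

import numpy as np

def xf_kA(x, A, params):
    """Toy evaluation of x f_k^A(x, Q0) = eta * x^(1-alpha) * (1-x)^beta * NN_k(x, A).

    `params` collects illustrative quantities: the normalization eta, the
    preprocessing exponents (alpha, beta), and one hidden layer of weights."""
    eta, alpha, beta = params["eta"], params["alpha"], params["beta"]
    features = np.array([x, np.log(x), A])                  # network input (x, ln x, A)
    hidden = np.tanh(params["W1"] @ features + params["b1"])
    nn_out = float(params["w2"] @ hidden + params["b2"])
    return eta * x ** (1.0 - alpha) * (1.0 - x) ** beta * nn_out

In this sketch the same network parameters describe all values of A, which reflects the key point of the integrated fit: no separate A=1 boundary condition is needed.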
In order to test the integrated framework, we focus in the present study on fits to DIS datasets
only which include measurements on proton, deuteron, and nuclear targets as comprised
in the (n)NNPDF determinations. As theory inputs, we use NNLO QCD calculations provided by the
new theory pipeline used in the NNPDF framework <cit.>.
And in order to be able to compare to previous
nNNPDF PDF releases, we take the approach in which charm is generated perturbatively. The fitting
scale and the kinematic cuts are taken to be the same as in the default NNPDF4.0 methodology,
refer to <cit.> for more details.
Training stability.
The baseline hyperparameters used in the present analysis for the integrated fit are the same as the
ones used in the default NNPDF4.0 methodology extracted from a k-folding hyperoptimization
procedure. These are listed in Table 3.7
of <cit.> with the difference that now the maximum number of epochs is larger to account for the
longer training. Owing to the more complex two-dimensional parameter space (x, A) to be fitted
<cit.> – as opposed to the one-dimensional space relevant for a free-proton PDF determination in
which only the x-dependence is parametrized – one indeed expects the integrated fit to converge more slowly.
Table <ref> compares the fit quality between a free-proton fit determined using the default NNPDF4.0
methodology and a combined proton and nuclear fit determined using the integrated framework. Displayed
are the total experimental χ^2_ exp per data point for all the datasets entering each
determination, the average experimental ⟨χ^2_ exp⟩ over the replica, and the
experimental training and validation error functions ⟨ E_ tr⟩ and
⟨ E_ val⟩.
From Table <ref> one finds that the integrated fit achieves a good description of the experimental
data, which include the nuclear datasets, with a total of χ^2_ exp = 1.322 per data
point. Looking at the χ^2-breakdown we see that the total experimental χ^2_ exp(A=1) = 1.192
on the proton datasets is much smaller than the value obtained from the free-proton-only fit. This might
potentially be a sign of an unbalanced fit, in that the integrated fit is slightly overlearning along the
proton direction – which contains many more data points – while underlearning along the nuclear directions.
Such an instability could be seen when looking at the distribution of the training E^(k)_ tr and
validation E^(k)_ val errors evaluated over the different MC replicas as shown in
Fig. <ref>. In general, one expects that the values of E^(k)_ tr are smaller
than those of E^(k)_ val, however, as opposed to the case of the free-proton fit, very few points
are located below the diagonal line. This observation is further strengthened by looking at the distribution
of training lengths as shown in Fig. <ref>, in which one can see that not only do most
of the replicas reach the maximum number of iterations, but the training lengths also exhibit a bimodal distribution. This
indicates that in order to further stabilize the integrated fits, a k-folding hyperoptimization – on the
combined proton, deuteron, and nuclear datasets – is necessary to find the best combinations of
hyperparameters. Given that this is a computationally demanding task, we henceforth present results based on the default
hyperparameters.
Towards integrated (n)PDFs.
We now study the impacts of the integrated determination by comparing the resulting (n)PDFs with the proton
and nuclear PDFs determined using the NNPDF4.0 and nNNPDF methodology respectively. We recall that the
comparisons shown henceforth are for DIS-only fits with NNLO QCD theory and perturbative charm.
Let us first examine the proton-PDFs determined using the integrated approach. In Fig. <ref>
we show the comparison of the NNPDF4.0 and integrated approaches for the up and gluon PDFs as a function of x
at the fitting scale Q_0=1.65 GeV. The results are normalized to the central values of the PDFs
determined in the integrated approach. The solid and dashed uncertainty bands represent the 68% confidence
level interval and the one-sigma error, respectively. In general, there is an excellent agreement between the
proton PDFs determined using the default NNPDF4.0 methodology and the integrated approach. Marked wiggles can
be observed in the integrated determination which could be connected to the slight overfitting discussed in the
previous section. The consistency between the two approaches is further supported by the distance plots shown in
Fig. <ref>. Displayed are the absolute and variance distance measured from the default
NNPDF4.0 determination on the flavor basis as a function of x at Q^2=1.7 GeV^2. As we can see, the
differences in both the central values and uncertainties are well within one sigma.
We now turn to the comparisons of the deuteron and heavier nuclear PDFs. Fig. <ref>
displays the comparisons between the nNNPDF and integrated approaches for the deuteron (^2D), and
two heavier nuclei – namely Iron (^56Fe) and lead (^208Pb). The results are shown as a
function of x for the up and gluon nPDFs at Q^2=1.65 GeV^2. Similar to the previous results,
the solid and dashed bands represent the 68% c.l. and one-sigma uncertainties, respectively. All results
are normalized to the the central values of the integrated nPDFs. The two determinations are overall in
excellent agreement within the uncertainties although in general the nPDFs determined using the integrated
approach yield larger uncertainties and more fluctuations. This can be markedly seen for instance for the
gluon PDF of the deuteron. More pronounced differences are observed for both the up and gluon PDF of the
lead in which the integrated method yield much smaller error at medium- and large-x.
These comparisons demonstrate that albeit the issues regarding the stability of the integrated fit – which
is directly linked to the choice of hyperparameters – the framework is working and is capable of reproducing
the reference fits in which the proton and nuclear PDFs are separately determined.
Conclusions and outlook.
We presented a framework in which the proton, deuteron, and heavy nuclear parton distributions are
simultaneously determined without the need for imposing a boundary condition to reproduce the A=1
limit. It is based on the integration of the framework underpinning the nNNPDF3.0 determination of the
parton distribution of nucleons bound within nuclei into that adopted in the NNPDF4.0 methodology for
the determination of free-proton PDFs.
It was shown that the (n)PDFs extracted from the integrated approach are consistent with the reference
(n)NNPDF determinations. However, using the default NNPDF4.0 set of hyperparameters did not yield
desirable results due to the sign of instability in the training. After all, the underlying nature of the neural
network architecture has drastically changed due to the additional parametrization of the atomic mass
number A. Therefore, a k-folding-based scan of the parameter space must be performed in order to
obtain the best combination of hyperparameters.
From the physics point of view, in order to achieve a reliable separation between quark flavors, we must
extend the experimental datasets to include hadronic processes. All the ingredients should be indeed
available to perform a global nuclear PDF fit with NNLO QCD calculations with fitted charm. Once this is
done, the next steps will be to include estimation of missing higher order
uncertainties <cit.>, and
to provide an approximate N3LO (n)PDF integrated determination <cit.>.
Acknowledgments.
The author is grateful to Juan Rojo for the careful reading of the manuscript. The author also wishes
to thank the collaborators from the Netherlands eScience Center (NLeSC) for discussions during the development
of the project. This work is supported by an Accelerating Scientific Discoveries grant of the NLeSC.
|
http://arxiv.org/abs/2307.04364v2 | 20230710064055 | Probe hyperon electric dipole moments with full angular analysis | [
"Jinlin Fu",
"Hai-Bo Li",
"Jian-Peng Wang",
"Fu-Sheng Yu",
"Jianyu Zhang"
] | hep-ex | [
"hep-ex"
] |
^1School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
^2Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, People's Republic of China
^3MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China
^4School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, People's Republic of China
^5Center for High Energy Physics, Peking University, Beijing 100871, People's Republic of China
The electric dipole moment (EDM) of elementary particles, arising from flavor-diagonal CP violation, serves as a powerful probe for new physics beyond the Standard Model (SM) and holds the potential to provide novel insights in unraveling the enigma of the matter-dominated universe.
Hyperon EDM is a largely unexplored territory.
In this paper, we present a comprehensive angular analysis that focuses on entangled hyperon-antihyperon pairs in J/ψ decays for the indirect extraction of hyperon EDMs. The statistical sensitivities are investigated for BESIII and the proposed Super Tau-Charm Facility (STCF). Leveraging the statistics from the BESIII experiment, the estimated sensitivity for the Λ EDM can reach an impressive level of 10^-19 e cm, demonstrating a three-order-of-magnitude improvement over the only existing measurement, performed in a fixed-target experiment at Fermilab with similar statistics. The estimated sensitivities for the Σ^+, Ξ^-, and Ξ^0 hyperons at the same level of 10^-19 e cm will mark first-ever measurements, and the latter two will be the first exploration of hyperons with two strange valence quarks. The EDM measurements for hyperons conducted at the BESIII experiment will be a significant milestone and serve as a litmus test for new physics such as SUSY and the left-right symmetrical model. Furthermore, at the STCF experiment, the sensitivity of hyperon EDM measurements can be further enhanced by two orders of magnitude. Additionally, this angular analysis enables the determination of CP violation in hyperon decays, the effective weak mixing angle, and the beam polarization.
Probe hyperon electric dipole moments with full angular analysis
Jianyu Zhang^1
August 12, 2023
================================================================
The measurement of a particle's permanent electric dipole moment (EDM), which violates both Parity (P) and Time reversal symmetries, and consequently Charge Parity (CP) symmetry according to the CPT theorem, provides a robust test within and beyond the Standard Model (SM). It serves as a sensitive probe for new physics, especially those that could induce lower loop or flavor diagonal CP Violation (CPV), in the multi-100 TeV mass range <cit.>.
Neutron and ^199Hg EDM measurements have set an upper limit on the SM QCD effective vacuum phase of θ̅⪅10^-10, yet the SM permits any value within the [0,2π] range.
This conundrum is commonly known as the strong CP problem <cit.>.
Examining EDM within the hadronic system serves as a means to either corroborate or disprove the θ̅ explanation and, in conjunction with the investigation of leptonic EDM,
constitutes an essential approach for the pursuit of new physics <cit.>.
Investigating EDM in baryonic and light nuclear systems offers a distinct opportunity to uncover diverse CPV models <cit.>.
Within the hyperon system, the strange quark may exhibit a special interaction with new physics, potentially resulting in a substantial EDM effect.
This could suggest that the new physics possesses a specific flavor structure.
Another crucial aspect is that a single EDM measurement alone is insufficient to distinguish between various sources of CPV beyond the SM. Therefore, it becomes essential to employ complementary observations of different systems, such as hadrons, atoms, nuclei, and molecules, in order to effectively discriminate between these sources <cit.>.
Despite more than 70 years of research in the pursuit of EDMs, the Λ hyperon remains the sole member of the hyperon family for which an upper limit on the EDM, 1.5× 10^-16 e cm, has been measured, utilizing spin precession at Fermilab <cit.>.
The indirectly predicted absolute value of the Λ EDM, based on the experimental upper limit of the neutron EDM, is < 4.4× 10^-26 e cm <cit.>.
There are no indirect predictions for hyperons with two or three strange valence quarks.
A variety of experimental approaches have been proposed, such as Λ EDM measurement utilizing spin precession induced by a dipole magnet at the LHCb experiment <cit.>,
and Ξ^+ and Ω^+ EDM measurements employing spin precession induced by bent crystals at a fixed-target experiment <cit.>.
Due to the short lifetimes of hyperons, conducting direct measurements of EDM through the spin precession presents significant challenges.
Preparing sources of various hyperons for EDM measurements in a single fixed-target experiment is also challenging, due to different production mechanisms and lifetimes of hyperons.
Unlike fixed-target experiments and hadron collider experiments, a large number of entangled Λ, Σ, and Ξ hyperon-antihyperon pairs can be readily produced and reconstructed from charmonium J/ψ decays at Tau-Charm factories.
The substantial production cross-section of J/ψ in e^+e^- annihilation, along with the large branching fraction of J/ψ to hyperon-antihyperon pairs and the outstanding performance of modern detectors, ensure that the reconstruction of hyperon-antihyperon pairs is usually achieved with a purity greater than 95%.
This capability allows for the search of subtle violations of conservation laws <cit.>.
The production of entangled hyperon-antihyperon pairs, with the electric dipole form factor embedded in the P- and CP-violating term of the Lorentz-invariant amplitude, offers a distinctive opportunity for indirectly extracting hyperon EDMs. The electric dipole form factor is generally a complex number for non-zero timelike momentum transfer, and becomes the EDM in the zero-momentum-transfer limit.
In practice, this kind of form factor can be treated as an EDM under the assumption that its momentum-transfer dependence is negligible, since the extrapolation to the zero-momentum-transfer region is unknown.
This Letter reports a proposal to extract the hyperon EDMs through full angular analysis.
EDM measurements will be discussed in e^+e^- collision within the region of J/ψ resonance, considering two different types: (i) J/ψ→ BB where B are Λ, Σ^+ hyperons. (ii) J/ψ→ BB where B are Ξ^-, Ξ^0. Sequential hyperon decays are reconstructed as Λ→ pπ^-, Σ^+→ pπ^0, Ξ^-→Λπ^-, and Ξ^0→Λπ^0, correspondingly.
A comprehensive angular analysis using multi-dimensional information in the full decay chain yields enhanced sensitivity for EDM measurement when compared to one-dimensional analysis, such as a CP-odd triple-product moment encompassing hyperons Λ, Σ^+, Ξ^- and Ξ^0 <cit.>.
Scenarios for the BESIII experiment and a proposed future Super Tau-Charm Facility (STCF) are investigated. The first experiment has already collected the world's largest dataset of 10 billion J/ψ particles <cit.>, while the latter one is designed to collect approximately 3.4×10^12 J/ψ particles per year <cit.>.
Charmonium J/ψ is produced via e^+e^- annihilation, where interference between the contributions from virtual γ and Z-boson exchanges leads to a small longitudinal polarization of J/ψ meson.
The leading contribution from Z-boson exchange in SM, which violates parity symmetry, is suppressed by a factor of M^2_J/ψ/m^2_Z. Polarization effects are encoded in BB hyperon pair spin density matrix defined as
R(λ_1,λ_2;λ^'_1,λ^'_2)∝∑_m,m^' ρ_m,m^'d^j=1_m,λ_1-λ_2(θ)d^j=1_m^',λ^'_1-λ^'_2(θ)
×ℳ_λ_1,λ_2ℳ^*_λ^'_1,λ^'_2δ_m,m^',
where the indices m^(') and λ^(')_1,2 represent the helicities of the J/ψ meson and B(B) hyperons, respectively.
The ρ_m,m^' is the spin density matrix of the J/ψ meson,
d^j_m^('),λ^(')_1-λ^(')_2(θ) is the Wigner rotation function, and ℳ_λ^(')_1,λ^(')_2 is the helicity amplitude of J/ψ→ BB. The angle θ is that between the momentum of the hyperon B, denoted as p̂, and the electron beam direction, which defines the Z axis, as shown in Fig. <ref>.
The helicity m^(') is denoted as +, -, or 0, corresponding to the helicity states of the J/ψ meson. The 3×3 matrix ρ_m,m^' is reduced to a 2×2 matrix because the component ρ_00 is suppressed by a factor of m^2_e/M^2_J/ψ.
The Lorentz invariant helicity amplitude in J/ψ→ BB decay with four independent form factors fixed at q^2=M^2_J/ψ is written as <cit.>
ℳ_λ_1,λ_2=ϵ_μ(λ_1-λ_2)u̅(λ_1,p_1) (F_Vγ^μ+i/2mσ^μνq_νH_σ
+γ^μγ^5F_A+σ^μνγ^5q_νH_T )v(λ_2,p_2),
where m is B hyperon mass, and p_1 and p_2 are four momentum of B and B, respectively.
Processes involving a flavor-diagonal CP-violating vertex contribute to the electric dipole form factor H_T. An effective Lagrangian, encompassing all of these CP-violating operators, plays a crucial role as a bridge between hyperon EDM and the fundamental theories.
The diverse extensions of the SM result in distinct contributions to these operators, leading to different impact on the hyperon EDM.
Taking the Λ hyperon as an example, there are several expressions in the literature for evaluating the contributions arising from the QCD θ term <cit.>, quark chromo-electric dipole moment (qCEDM), four-quark operators <cit.>, and the quark EDM (qEDM) <cit.>. Hyperon EDM measurements offer direct sensitivity to the contributions from qEDM and qCEDM, owing to the suppressed effects of high-dimensional operators and the experimental constraint imposed by neutron EDM measurements on the QCD θ term.
The flavour-diagonal CP-violating contributions in the SM are extremly tiny, while new physics, such as SUSY and left-right symmetrical model, may give large enhancement on hyperon EDM as discussed extensively by analysing EDM results from electron, neutron and ^199Hg systems <cit.>.
The unexpectedly large hyperon EDM may suggest a special coupling between the strange quark and new physics.
Consequently, in the decay chain under consideration, we provide an opportunity to explore these possible effects in the hyperon family by relating H_T to hyperon EDM contribution <cit.>,
H_T=2e/3M^2_J/ψg_Vd_B.
The form factor H_T here, in fact, varies with q^2. Assuming q^2 dependence is ignored, d_B is then EDM of hyperon B. Considering the dispersive part of time-like reaction, the imaginary part of H_T is also investigated in this angular analysis.
The aforementioned discussions will also be applicable to the hyperons Σ and Ξ in this Letter.
The form factors F_V and H_σ are related to the redefined G_1,2 as described in <cit.>
F_V=G_1-4m^2(G_1-G_2)/(p_1-p_2)^2, H_σ=4m^2(G_1-G_2)/(p_1-p_2)^2.
The form factors G_1 and G_2 are linked to the experimental observables α_J/ψ, ΔΦ, and Γ(J/ψ→ BB) through the relations α_J/ψ=(M^2_J/ψ|G_1|^2-4m^2|G_2|^2)/(M^2_J/ψ|G_1|^2+4m^2|G_2|^2) and G_1/G_2=|G_1/G_2|e^-iΔΦ <cit.>.
The form factor F_A, primarily arising from Z-boson exchange between cc and light quark pairs qq within the SM can be related to the effective weak mixing angle θ^eff_W through
F_A≈ -1/6 D g_V g^2/(4cos^2θ^eff_W) (1-8sin^2θ^eff_W/3)/m^2_Z,
which leads to a parity violation effect estimated to be the order of 10^-6, where g_V is defined as ⟨0|c̅γ^μc|J/ψ⟩=g_Vϵ^μ, D is a non-perturbative parameter that is fitted from data <cit.>. By conducting precise measurements utilizing large statistics, it becomes possible to extract the weak mixing angle sin^2θ^eff_W which is essential in testing the SM, particularly in regards to the effects derived from quantum corrections of heavy particles, such as the Higgs boson and the top quark, at the loop level <cit.>.
The longitudinal polarization of the J/ψ meson, denoted as P_L, is defined as the relative difference between the diagonal elements of the density matrix, ρ_++ and ρ_–. Moreover, in experiment such as BESIII where there is no beam polarization, the polarization P_L is closely connected to the left-right asymmetry 𝒜^0_LR,
P_L=𝒜^0_LR=(σ_R-σ_L)/(σ_R+σ_L)=(-sin^2θ^eff_W+3/8)/(2sin^2θ^eff_Wcos^2θ^eff_W) · M^2_J/ψ/m^2_Z.
Here, σ_R(L) represents the J/ψ cross section with right-handed(left-handed) electrons. This asymmetry induced by the effective weak mixing angle θ^eff_W and hence suppressed to the order of 10^-4 <cit.>.
When there is longitudinally polarized electron beam polarization with magnitude of P_e, as in the experiment of STCF <cit.>, the P_L can be replaced by ξ
ξ=(σ_R(1+P_e)/2-σ_L(1-P_e)/2)/(σ_R(1+P_e)/2+σ_L(1-P_e)/2)=(𝒜^0_LR+P_e)/(1+P_e𝒜^0_LR)≈ P_e.
The longitudinally polarized electron beam instead of Z- boson exchange may play a crucial role in enhancing the sensitivity of measurements.
Based on the rotational symmetry, helicity representation of the complete angular distribution for type (ii) is given
dσ/dΩ∝∑_[λ] R(λ_1,λ_2;λ^'_1,λ^'_2)
D^*j=1/2_λ_1,λ_3(ϕ_1,θ_1)D^j=1/2_λ^'_1,λ^'_3(ϕ_1,θ_1)ℋ_λ_3ℋ^*_λ^'_3
D^*j=1/2_λ_2,λ_4(ϕ_2,θ_2)D^j=1/2_λ^'_2,λ^'_4(ϕ_2,θ_2)ℋ_λ_4ℋ^*_λ^'_4
D^*j=1/2_λ_3,λ_5(ϕ_3,θ_3)D^j=1/2_λ^'_3,λ_5(ϕ_3,θ_3)ℱ_λ_5ℱ^*_λ_5
D^*j=1/2_λ_4,λ_6(ϕ_4,θ_4)D^j=1/2_λ^'_4,λ_6(ϕ_4,θ_4)ℱ_λ_6ℱ^*_λ_6
where [λ] is the set containing all of the helicity indices appearing in the summation, i.e., λ_1,λ_2,λ^'_1,λ^'_2,….
Polar and azimuthal angles θ_1,ϕ_1 and θ_2,ϕ_2 parameterize momenta directions of Λ and Λ in the frame of Ξ and Ξ, respectively.
Polar and azimuthal angles θ_3,ϕ_3 and θ_4,ϕ_4 are that of proton and anti-proton in the frame of Λ pairs.
The definitions of these helicity angles are illustrated in Fig <ref>, and analogous definitions are employed for the subsequent decay of antiparticles.
Helicity amplitudes ℋ_λ_i and ℱ_λ_i are used to parameterize dynamics of weak decay Ξ→Λπ and Λ→ pπ,
and corresponding charge conjugated process are denoted by ℋ and ℱ with bar. The formula for type (i) is obtained by retaining only θ_1,2 and ϕ_1,2 and identifying ℋ as ℱ.
Following the definition of asymmetry parameters α and ϕ, originally introduced by Lee and Yang <cit.>,
the hyperon CP violating observables, induced by these asymmetry parameters, are quantified as A^B_CP=(α_B+α̅_B)/(α_B-α̅_B) and Δϕ^B_CP=(ϕ_B+ϕ̅_B)/2 <cit.>.
The two observables are complementary, as they rely on the sine and cosine of the strong phase difference, respectively. In hyperon decays, the relative strong phases are small, so the latter exhibits better sensitivity <cit.>. Moreover, in this Letter, the latter can be determined in Ξ decays thanks to the measurable polarization of the Λ hyperon.
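For concreteness, the construction of the two observables from fitted decay parameters can be sketched as follows; the numerical inputs are placeholders rather than measured values, chosen so that exact CP symmetry (α̅_B = -α_B, ϕ̅_B = -ϕ_B) makes both observables vanish.

def cp_observables(alpha_B, alpha_Bbar, phi_B, phi_Bbar):
    """CP-violating observables built from hyperon and antihyperon decay parameters."""
    a_cp = (alpha_B + alpha_Bbar) / (alpha_B - alpha_Bbar)
    dphi_cp = 0.5 * (phi_B + phi_Bbar)
    return a_cp, dphi_cp

print(cp_observables(0.75, -0.75, 0.01, -0.01))   # placeholder inputs -> (0.0, 0.0)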
To assess the statistical sensitivity of the measurement, 500 pseudo experiments of each decay are generated and fitted by using a probability density function based on the full angular distributions shown in Equation (<ref>).
The estimated yields presented in Table <ref>,
as well as the form factors and decay parameters obtained from the published articles <cit.>, are fixed for the generation.
The EDM, along with other form factors, decay parameters and polarization, can be simultaneously determined from fitting.
The study further investigates sensitivities for different statistics at BESIII and STCF experiments, taking into account branching fractions, detection efficiencies, and the impact of longitudinally polarized electron beam.
Figure <ref> presents the estimated sensitivities for hyperon EDMs.
With the statistics from BESIII experiment, the Λ EDM sensitivity, 10^-19 e cm (red full circle), demonstrates a remarkable three-order-of-magnitude enhancement over
the only existing measurement at Fermilab with similar statistics <cit.>, while maintaining cutting-edge sensitivities, 10^-19 e cm, for Σ^+, Ξ^-, and Ξ^0 hyperons. The EDM sensitivities will be further improved by 1∼2 orders of magnitude (open square and full triangle) at STCF experiment.
Figure <ref> illustrates the estimated sensitivities for CPV in hyperon decays.
With an 80% longitudinally polarized electron beam at STCF experiment, the best sensitivities for CPV induced by the α_B parameter (red full triangle) can reach 5×10^-5 (6×10^-5) in J/ψ→ΛΛ (J/ψ→Σ^+Σ^-) decays, while for the ϕ_B parameter (blue full triangle), they can reach 2×10^-4 (3×10^-4) in J/ψ→Ξ^-Ξ^+ (J/ψ→Ξ^0Ξ^0) decays.
The sensitivities for A^B_CP and Δϕ^B_CP observables have reached the prediction of the SM <cit.>.
Figure <ref> shows the estimated sensitivities for F_A and sin^2θ^eff_W. Only the sensitivities for the modulus of F_A are reported, due to a negligible dependence on its phase found in the toy study.
The sensitivity for sin^2θ^eff_W associated to F_A can reach to 8×10^-3.
Figure <ref> depicts the estimated sensitivities for J/ψ polarization and sin^2θ^eff_W. The sensitivity for sin^2θ^eff_W associated to P_L can reach to 2×10^-2 at STCF experiment. Additionally, by applying simultaneous constraint on F_A and P_L, the sensitivity for sin^2θ^eff_W can be further enhanced to 5×10^-3 in J/ψ→ΛΛ decays.
The longitudinal polarization of the electron beam can also be determined through the angular analysis, with the best sensitivity reaching 6×10^-5, as depicted in Figure <ref> (red full triangle up); this can be used for a more precise weak mixing angle measurement from Bhabha scattering events <cit.>.
In conclusion, to investigate the largely unexplored territory of hyperon EDMs, we have established a comprehensive angular analysis, considering P violation in J/ψ production and CP and P violation in J/ψ decay. The EDM, along with the CP-violating observables in hyperon decays, the effective weak mixing angle, and the beam polarization, can be simultaneously extracted from the angular analysis. The statistical sensitivities for the physical observables have been investigated for the BESIII and STCF scenarios. Utilizing the expected statistics obtained from the BESIII experiment, the Λ EDM measurement can achieve an impressive upper limit of 10^-19 e cm, presenting a remarkable improvement of three orders of magnitude compared to the only existing measurement at Fermilab with similar statistics. The EDM measurement of the Σ^+, Ξ^-, and Ξ^0 hyperons at the same level of 10^-19 e cm would be a first-ever achievement, and the latter two would be the first exploration of hyperons with two strange valence quarks.
At the STCF experiment, with a longitudinally polarized electron beam, a search for hyperon EDMs could potentially reach levels of 10^-21∼10^-20 e cm.
The EDM measurements for hyperons will be a significant milestone and serve as a stringent test for new physics, such as SUSY and left-right symmetrical model.
At the same time, the verification of CPV in hyperon decays could be achieved at levels of 10^-5∼10^-4, which has already matched the predictions of the SM.
The effective weak mixing angle parameter can be measured at a level of 10^-3 and can be further enhanced by utilizing the precisely determined beam polarization obtained from this angular analysis.
This method can also be extended to ψ(2S) decays for investigating the pure strange quark hyperon Ω, taking into account additional form factors due to its spin-3/2 property.
We would like to thank Prof. Fengkun Guo, Prof. Xiaogang He, Prof. Jianping Ma and Prof. Yangheng Zheng for very useful discussion.
This work is supported by National Key R&D Program of China No. 2022YFA1602204; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11935018, 12221005 and 11975112; Fundamental Research Funds for the Central Universities.
|
http://arxiv.org/abs/2307.04222v1 | 20230709162845 | Derandomizing Codes for the Binary Adversarial Wiretap Channel of Type II | [
"Eric Ruzomberka",
"Homa Nikbakht",
"Christopher G. Brinton",
"David J. Love",
"H. Vincent Poor"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Derandomizing Codes for the Binary Adversarial Wiretap Channel of Type II
This work is supported in part by the U.S. National Science Foundation under grants CNS-2128448, CNS-2212565, CNS-2225577, EEC-1941529, ITE-2226447 and by the Office of Naval Research under grant ONR N000142112472.
Eric Ruzomberka1, Homa Nikbakht1, Christopher G. Brinton2, David J. Love2 and H. Vincent Poor1
1Princeton University 2Purdue University
August 12, 2023
==================================================================================================================================================================================================================================================================================================
We revisit the binary adversarial wiretap channel (AWTC) of type II in which an active adversary can read a fraction r and flip a fraction p of codeword bits. The semantic-secrecy capacity of the AWTC II is partially known, where the best-known lower bound is non-constructive, proven via a random coding argument that uses a large number (that is exponential in blocklength n) of random bits to seed the random code. In this paper, we establish a new derandomization result in which we match the best-known lower bound of 1-H_2(p)-r where H_2(·) is the binary entropy function via a random code that uses a small seed of only O(n^2) bits. Our random code construction is a novel application of pseudolinear codes – a class of non-linear codes that have k-wise independent codewords when picked at random where k is a design parameter. As the key technical tool in our analysis, we provide a soft-covering lemma in the flavor of Goldfeld, Cuff and Permuter (Trans. Inf. Theory 2016) that holds for random codes with k-wise independent codewords.
§ INTRODUCTION
Consider a communication setting in which a sender Alice wishes to communicate a message to a receiver Bob by sending a sequence of bits over a noisy wiretap channel. The channel is controlled by an (active) adversary who can both read a fraction r ∈ [0,1] and flip a fraction p ∈ [0,1] of Alice’s transmitted bits. In this setting, Alice’s and Bob’s communication goal under any adversarial strategy is two-fold:
* (Reliability) Bob must decode Alice’s message with small probability of error.
* (Secrecy) The adversary must extract negligible information of the Alice's message via its observation of Alice's sequence.
Critically, we make no assumptions about the adversary’s computational limitations, and thus, secrecy must be guaranteed in an information theoretic sense by “hiding” the message in the adversary's bit-limited observation. Furthermore, the adversary may choose the location of the bit reads and bit flips in an arbitrary manner using knowledge of Alice's and Bob's communication scheme. In the literature, the above setting is known as the binary adversarial wiretap channel of type II (denoted as (p,r)-AWTC II) <cit.>.
Much is known about the fundamental limits of communication over the (p,r)-AWTC II. Roughly defined, the secrecy capacity of the (p,r)-AWTC II is the largest rate at which Alice and Bob can communicate while meeting the above goals under a given secrecy measure. The measure we focus on is semantic secrecy (SS) <cit.>, which is widely recognized as the cryptographic gold standard for evaluating secrecy <cit.>. The SS capacity, denoted C(p,r), is partially known where the best-known lower bound <cit.> and upper bound <cit.> are
max{1-H_2(p) - r,0 }≤ C(p,r) ≤ 1-H_2(p) - r - min_x ∈ [0,1] f(x)
where H_2(·) is the binary entropy function and f(x) = H_2((2p-1)x+1-p) - H_2(p) - rH_2(x). Note that the two bounds are close for small r and tight for p=0.
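As an illustrative aside (ours, not part of the original analysis), the following Python sketch evaluates both sides of the bound above for given (p,r), approximating the minimization of f over [0,1] on a uniform grid; all function names are ours.

```python
import numpy as np

def h2(x):
    # Binary entropy in bits; clipping away from {0,1} avoids log(0) and
    # introduces only negligible numerical error.
    x = np.clip(np.asarray(x, dtype=float), 1e-12, 1.0 - 1e-12)
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def capacity_bounds(p, r, grid=10001):
    # Lower bound: max{1 - H2(p) - r, 0}.
    lower = max(1.0 - float(h2(p)) - r, 0.0)
    # Upper bound: 1 - H2(p) - r - min_{x in [0,1]} f(x), with f as defined in the text,
    # minimized on a grid (a coarse but adequate approximation).
    x = np.linspace(0.0, 1.0, grid)
    f = h2((2.0 * p - 1.0) * x + 1.0 - p) - float(h2(p)) - r * h2(x)
    return lower, 1.0 - float(h2(p)) - r - float(f.min())

for p, r in [(0.0, 0.3), (0.05, 0.2), (0.1, 0.1)]:
    lo, up = capacity_bounds(p, r)
    print(f"p={p:.2f}, r={r:.2f}: lower={lo:.4f}, upper={up:.4f}")
```

For p=0 the minimum of f is 0, so the two outputs coincide, consistent with the remark that the bounds are tight in that case.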
While the limits of communication over the (p,r)-AWTC II are mostly understood, less is known on how to construct efficient codes to achieve these limits. The proof of the lower bound (<ref>), as given in <cit.>, is non-constructive and follows an ordinary random coding argument in which codewords are chosen uniformly and independently from space {0,1 }^n where n is the blocklength of the code. As a tool for probabilistic constructions, the practical use of this random code distribution is limited. For example, to represent a code picked in this way, one would need to remember at least n 2^Rn random bits where R is the coding rate.[Additional random (seed) bits are needed if one considers codes with private randomness at the encoder (i.e., stochastic codes).] Thus, codes picked from a distribution with mutual independence property lack a succinct representation. Furthermore, the high degree of randomness used in the construction obscures insight into the structure of a good code. Without sufficient structure, efficient encoding and decoding algorithms are likely to be elusive.
In this paper, we work towards an efficient code construction for the (p,r)-AWTC II by partially derandomizing the random code used in <cit.> to establish the lower bound (<ref>). We do so by relaxing the requirement that codewords be mutually independent and consider random codes with k-wise independent codewords for some positive integer k << n. We show that random codes under this weaker notion of independence can achieve the lower bound (<ref>) for some parameter k large enough but constant in n. As a result, these codes have both a more succinct representation and additional structure compared to random codes with mutually independent codewords.
The approach we take is the following. We focus on a class of non-linear codes known as pseudolinear codes (precisely defined in Section <ref>), which was initially proposed by Guruswami and Indyk <cit.> outside of the AWTC setting. In the AWTC setting, pseudolinear codes have a number of nice properties, including succinct representations (i.e., O(k n^2) bits), efficient encoding algorithms, some linear structure, and k-wise independent codewords when chosen at random for a designable parameter k. We initiate the study of pseudolinear codes for achieving both secrecy and reliability in the wiretap setting. As our main result, we show that random pseudolinear codes achieve the best-known SS capacity lower bound (<ref>). Conversely, we show that non-linear codes are necessary to achieve this lower bound for some values of p and r. To prove our main result, we provide a new lemma on the soft-covering phenomenon <cit.> under random coding with k-wise independent codewords.
§ PRELIMINARIES, RESULTS & RELATED WORK
§.§ Notation
Unless stated otherwise, we denote random variables in uppercase font (e.g., X), realizations of random variables in lowercase font (e.g., x), and sequences in bold font (e.g., X, x). An exception to the above rules occurs when we denote codes: we denote random codes with script typeset (e.g., 𝒞) and realizations of random codes with calligraphic typeset (e.g., 𝒞). We denote the set of all possible distributions over a set 𝒳 as 𝒫(𝒳), and denote the uniform distribution over 𝒳 as Unif(𝒳). We denote that X is distributed as P ∈𝒫(𝒳) by writing X ∼ P. For PMFs P and Q such that supp(P) ⊆supp(Q) (absolute continuity), the relative entropy of P and Q is
D(P||Q) ≜∑_x ∈supp(P) P(x) log_2 P(x)/Q(x). For α>0 and α≠ 1, the Rényi divergence of order α is
D_α(P||Q) ≜1/α-1log_2 ∑_x ∈supp(P) P(x) (P(x)/Q(x))^α-1. Define the special case D_1(P||Q) ≜lim_α→ 1 D_α(P||Q) = D(P||Q). For an event 𝒜, we let 1{𝒜} denote the indicator of 𝒜.
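For concreteness, here is a small Python sketch (ours) of these two definitions for finite PMFs given as probability vectors; it assumes supp(P) ⊆ supp(Q), as required above.

```python
import numpy as np

def kl_divergence(p, q):
    # D(P||Q) in bits, with the usual convention that terms with P(x)=0 vanish.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    s = p > 0
    return float(np.sum(p[s] * np.log2(p[s] / q[s])))

def renyi_divergence(p, q, alpha):
    # D_alpha(P||Q) in bits for alpha > 0; alpha = 1 falls back to the KL limit.
    if np.isclose(alpha, 1.0):
        return kl_divergence(p, q)
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    s = p > 0
    return float(np.log2(np.sum(p[s] * (p[s] / q[s]) ** (alpha - 1.0))) / (alpha - 1.0))

P, Q = np.array([0.7, 0.2, 0.1]), np.array([1 / 3, 1 / 3, 1 / 3])
print(kl_divergence(P, Q))                                             # D(P||Q)
print([round(renyi_divergence(P, Q, a), 4) for a in (0.5, 1.0, 2.0)])  # non-decreasing in alpha
```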
§.§ Setup
Code: A (binary) code 𝒞_n of blocklength n is a subset of {0,1}^n. We will associate a code 𝒞_n with an encoding function x(·), which performs a mapping from the message space ℳ to codewords in {0,1}^n. As is common for wiretap codes, we will consider stochastic encoding in which x takes as argument a private random key w ∈𝒲 that is known only to Alice. Specifically, for a message rate R = log_2 |ℳ|/n and a (private) key rate R' = log_2 |𝒲|/n, an [n,R n,R' n] code 𝒞_n is a set
𝒞_n = {x(m,w): (m,w) ∈ℳ×𝒲}
where we refer to x(m,w) as the (n-bit) codeword corresponding to message m and key w. In turn, a family of codes is a sequence {𝒞_n}_n=1^∞ where for each n≥1, 𝒞_n is an [n,Rn,R'n] code.
Encoding/Decoding: For an [n,Rn,R'n] code 𝒞_n, probability mass function (PMF) P_M ∈𝒫(ℳ), a message M ∼ P_M and a private key W ∼Unif(𝒲) where M and W are independent, Alice encodes M into a codeword x(M,W) and transmits it over the channel. Subsequently, Bob receives a corrupted version of the codeword and performs decoding by choosing a message estimate M̂∈ℳ. We say that a decoding error occurs if M̂≠ M.
The AWTC II:
For a read fraction r ∈ [0,1] and an error fraction p ∈ [0,1/2], the adversary can observe rn bits and flip up to pn bits of x(M,W). The location of the read bits are indexed by a coordinate set 𝒮, which the adversary can choose from the set 𝒮 consisting of all subsets of [n] of size rn. In turn, the adversary observes Z = x(M,W,𝒮) where x(M,W,𝒮) denotes the rn bits of x(M,W) indexed at 𝒮, and subsequently, chooses the location of the bit flips. We emphasize that the location of the bit flips need not coincide with 𝒮. In general, the adversary can randomize its above choices by choosing a distribution on 𝒮 that can depend on the code, as well as a distribution on the bit flip locations that can depend on both the code and the observation Z.
Secrecy: Define the semantic leakage as
Sem(𝒞_n) = max_P_M ∈ P(ℳ), 𝒮∈𝒮 I_𝒮(M;Z)
where I_𝒮(M;Z) denotes the mutual information between M ∼ P_M and Z = x(M,W,𝒮). In turn, a family of codes {𝒞_n}_n=1^∞ is said to be semantically-secret if Sem(𝒞_n) = 2^-Ω(n). We remark that this mutual-information based notion of SS is shown in <cit.> to be (asymptotically) equivalent to the operational definition of SS given in <cit.>. Further, SS is a stronger notion of secrecy than strong secrecy.[A family of codes is said to achieve strong secrecy if lim_n →∞max_𝒮∈𝒮I_𝒮(M;Z)=0 where the message distribution is fixed s.t. P_M ∼Unif(ℳ).]
Reliability: The (maximum) probability of decoding error is defined as
P^max_error(𝒞_n) = max_m ∈ℳℙ( M̂≠ m | M =m )
where the probability is taken w.r.t. the distribution of Alice's key and the worst-case distribution of the adversary's bit read/flip locations. A family of codes {𝒞_n}_n=1^∞ is said to be reliable if for any δ > 0,
P_error(𝒞_n) ≤δ for large enough n.
SS Capacity: The rate R>0 is said to be achievable over the (p,r)-AWTC II if there exists a family of codes {𝒞_n}_n=1^∞ (where for each n, 𝒞_n is an [n,Rn,R'n] code for some R'≥ 0) that is both semantically-secret and reliable. The SS capacity C(p,r) is the supremum of rates achievable over the (p,r)-AWTC II.
§.§ Results
Our first result is on the necessity of non-linear codes for achieving the SS capacity. We say that a [n,Rn,R'n] code 𝒞_n is linear[Examples of linear codes in the wiretap setting include Ozarow's and Wyner's linear coset coding scheme <cit.> and some polar code and LDPC code based schemes (e.g., <cit.>).] if there exists a generator matrix G ∈{0,1}^(R+R')n × n such that the codeword corresponding to any message m ∈ℳ≜{0,1}^Rn and key w ∈𝒲≜{0,1}^R'n is x(m,w) = [ m w ]G. A corollary of the following Theorem is that for any r ∈ (0,1] and p=0 (i.e., the channel to Bob is noiseless), linear codes cannot achieve SS capacity C(0,r) = max{ 1 - r,0 }.
Let p =0, r ∈ (0,1], R > max{1-2r,0} and R' ∈ [0,1-R]. For large enough n, every linear [n,Rn,R'n] code 𝒞_n has either semantic leakage Sem(𝒞_n) ≥ 1 or probability of error P_error(𝒞_n) ≥ 1/2 over the (0,r)-AWTC II.
Theorem <ref> can be extended to non-zero values of p. In particular, together with the lower bound (<ref>), Theorem <ref> implies that linear codes cannot achieve C(p,r) for either any p ∈ [0,1/2) and r∈ (0,1/2] such that H_2(p)<r, or any p ∈ [0,1/2] and r ∈ [1/2,1], except for the trivial case when C(p,r) is 0.
A proof of Theorem <ref> is given in Section <ref>, which involves a specific construction of the adversary's coordinates 𝒮 together with the Plotkin bound to upper bound the minimum distance of a code. We remark that tighter distance bounds can be used in place of the Plotkin bound. For instance, if one uses the Elias-Bassalygo bound <cit.>, the rate lower bound in Theorem <ref> can be tightened to R > max{1 - H_2((1- √(1-2r))/2),0 }. All bounds discussed thus far are plotted in Fig. <ref>.
In light of Theorem <ref>, non-linear codes must be considered to achieve the lower bound (<ref>) for at least some values of p ∈ [0,1/2] and r ∈ [0,1]. We turn now to non-linear codes.
For R∈(0,1], R' ∈ [0,1-R] and positive integers n and k, let H be the parity check matrix of any binary linear code with the following parameters:
* blocklength 2^(R+R')n-1
* dimension 2^(R+R')n-1-ℓ for some ℓ = O(k(R+R')n)
* minimum distance at least k+1.
An [n,Rn,R'n,k] pseudolinear code 𝒞_n is any [n,Rn,R'n] code that satisfies the following two-step encoding process. First, a message-key pair (m,w) ∈ℳ×𝒲 is mapped to the row of H^T indexed by (m,w), which we denote as h(m,w).[To account for the message-key pair (0,0), we define h(0,0) to be the all zeros vector.] Second, h(m,w) is linearly mapped to an n-bit codeword by some “generator” matrix G ∈{0,1}^ℓ× n, i.e.,
x(m,w) = h(m,w) G.
Thus, the non-linearity of 𝒞_n is confined to the first stage of encoding.
Towards the goal of derandomizing the random code of <cit.>, pseudolinear codes have the following three attractive properties <cit.>.[See <cit.> for further discussion of pseudolinear codes.] First, a pseudolinear code has a succinct representation as only ℓ n = O(k(R+R')n^2) bits are needed to describe the generator matrix. Second, encoding is computationally efficient if h(m,w) can be obtained in time polynomial in n for each (m,w) ∈ℳ×𝒲. For instance, we can let H be the parity check matrix of a binary Bose–Chaudhuri–Hocquenghem (BCH) code of design distance k+1, in which case H has an explicit representation and h(m,w) can be efficiently obtained by computing powers of a primitive (2^(R+R')n-1)-th root of unity from the extension field GF(2^(R+R')n), e.g., see <cit.>.
Third, if we consider a random pseudolinear code by choosing the generator matrix G at random while fixing the parity check matrix H, then the codewords of the random code are uniformly distributed in {0,1}^n and k-wise independent, i.e., any subset of codewords of size k are mutually independent.[In contrast, random linear codes have codewords that are pair-wise (i.e., 2-wise) independent in non-trivial cases.] This final property is the key to showing that pseudolinear codes achieve the best-known lower bound of C(p,r).
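As a toy illustration of the two-stage encoding (ours; it replaces the BCH-based parity-check matrix with an arbitrary small binary matrix, so it is not a bona fide [n,Rn,R'n,k] construction), the following Python sketch fixes a stand-in for H^T, draws a random generator matrix G, and produces the codewords x(m,w) = h(m,w)G over GF(2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (purely illustrative): num_pairs message-key pairs, ell columns of H^T,
# blocklength n.
num_pairs, ell, n = 16, 6, 8

# Stand-in for H^T: row j plays the role of h(m, w) for pair index j. In the actual
# construction these rows come from the parity-check matrix of a code with minimum
# distance >= k+1 (e.g., a BCH code of design distance k+1).
H_T = rng.integers(0, 2, size=(num_pairs, ell), dtype=np.uint8)
H_T[0] = 0  # h(0,0) is defined as the all-zeros vector

def sample_pseudolinear_codebook():
    # Fix H and draw the "generator" matrix G uniformly at random; this is the random
    # pseudolinear code distribution used later in the paper.
    G = rng.integers(0, 2, size=(ell, n), dtype=np.uint8)
    return (H_T @ G) % 2          # row j is the codeword x(m, w) = h(m, w) G

codebook = sample_pseudolinear_codebook()
print(codebook.shape)             # (16, 8): one n-bit codeword per (m, w) pair
```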
Let p ∈ [0,1/2] and r ∈ [0,1] such that 1-H_2(p)-r is positive. For any R < 1- H_2(p) - r and for large enough (but fixed) k, there exists a family of pseudolinear codes {𝒞_n}_n=1^∞ (where for n≥ 1, 𝒞_n is an [n,Rn,R'n,k] pseudolinear code for some R'≥ 0) that is both reliable and semantically-secret.
A proof of Theorem <ref> is provided in Section <ref>. The key technical tool in the proof is a new version of Wyner's soft-covering lemma which holds for codes with k-wise independent codewords. However, our version differs significantly from Wyner's <cit.>, which we state and prove in Section <ref>.
Our version is closest to (and proved similarly to) the soft-covering lemma of Goldfeld, Cuff and Permuter <cit.>, which roughly states that if the key rate R' is larger than the mutual information between Alice's channel input and the adversary's observation, then a random code with mutually independent codewords satisfies an exponential number of secrecy constraints with probability at least 1-2^-2^Ω(n). Here, the double-exponential probability bound is important as it allows one to take a union bound over an exponential number of events. Our version of the lemma states that when we restrict the random code to a k-wise independent distribution, the same constraints hold with probability at least 1-2^-k Ω(n). Critically, while our probability bound tends to 1 more slowly than double-exponentially, it remains fast enough to take a union bound over an exponential number of events when k is large enough.
§.§ Related Work
Linear Codes and Semantic-Secrecy: Recall that Theorem <ref> states that linear codes cannot achieve the SS capacity for the (noiseless) (0,r)-AWTC II for any r ∈ (0,1]. Prior to this work, some special classes of linear codes were known to not achieve the SS capacity. In particular, Ozarow's and Wyner's linear coset coding scheme <cit.> does not achieve the SS capacity of the (0,r)-AWTC II for any r ∈ (0,1]. We provide a proof of this result in Appendix <ref>. We remark that the necessity of non-linear codes for achieving the secrecy capacity is a product of the joint consideration of the semantic secrecy metric and the type II property of the wiretap channel. In contrast, linear codes are sufficient to achieve the weak secrecy capacity over the noiseless WTC II <cit.>. Furthermore, linear codes are sufficient to achieve both the weak and strong secrecy capacity of the noisy (but non-adversarial) WTC I <cit.>.
Code Constructions: Explicit (and efficient) constructions that achieve the best known lower bound of the (p,r)-AWTC II are not known in general, except for the special cases of p=0 <cit.> and r = 0 <cit.>. In the general case, one promising approach is to use modular constructions, which combine an existing error-control code with an invertible extractor <cit.> or algebraic manipulation detection code <cit.>. However, constructing binary error-control codes that are both efficiently encodable/decodable and achieve the (reliability) capacity of the (p,r)-AWTC is an open problem. In contrast to the above modular constructions, pseudolinear codes offer a non-modular approach. Recently, random (and thus non-explicit) pseudolinear codes were shown to achieve the (reliability) capacity of the (p,r)-AWTC II <cit.>.
§ PROOF OF THEOREM <REF>
Notation: For message rate R>0, key rate R'∈ [0,1-R], and blocklength n ≥ 1 define ℳ≜{0,1}^Rn and 𝒲≜{0,1}^R'n. For an [n,Rn,R'n] linear code 𝒞_n, let G denote the (R+R')n × n generator matrix of 𝒞_n, which can be partitioned such that G = [ G_M; G_W ] where G_M ∈{0,1}^Rn × n and G_W ∈{0,1}^R'n × n. In turn, the codeword corresponding to message m ∈ℳ and key w ∈𝒲 is x(m,w) = m G_M + w G_W. For a coordinate set 𝒮∈𝒮, let the matrices G_M(𝒮) and G_W(𝒮) denote the columns of G_M and G_W indexed by 𝒮, respectively. Using this notation, if Alice transmits codeword x(m,w) then the adversary observes z = m G_M(𝒮) + w G_W(𝒮).
Preliminaries: Let 𝒞_n be an [n,Rn,R'n] linear code with generator matrix G. We make the following assumption.
Without loss of generality (w.l.o.g.), we assume that G is full rank, i.e., rank(G) = (R+R')n.
The claim being w.l.o.g. is roughly as follows: if G is not full rank, then either P^max_error(𝒞_n) ≥ 1/2 or both 𝒲 and G can be replaced with a smaller key set and full rank generator matrix, respectively, without changing the code. A detailed discussion is provided in Appendix <ref>. We remark that following Assumption <ref>, we have that rank(G_M) = Rn and rank(G_W)= R'n.
Before proving the converse result (Theorem <ref>), we state a few preliminary results relating the semantic leakage to the rank of G_M(𝒮) and G_W(𝒮) for 𝒮∈𝒮. For a code 𝒞_n and coordinate set 𝒮∈𝒮, we denote the mutual information between M and Z as I_𝒮(M;Z) (where the dependency on 𝒞_n is implied).
For 𝒮∈𝒮 and M uniformly distributed over ℳ,
I_𝒮(M;Z) = rank(G(𝒮)) - rank(G_W(𝒮)).
Let 𝒮∈𝒮. We first characterize the joint PMF of M, W and Z, which we denote as P_M,W,Z. We drop the subscripts from the PMF P_M,W,Z and its marginal PMFs when the meaning is clear from the use of the realization variables m, w and z.
For z∈{0,1}^rn and m ∈ℳ, we have that
P(z|m) = ∑_w ∈𝒲 P(z,w|m) (a)=∑_w ∈𝒲 P(z|m,w) P(w)
(b)= T_m,z 2^-R'n
where (a) follows from the independence of M and W, (b) follows from W ∼Unif(𝒲), and where T_m,z≜∑_w ∈𝒲1{z = m G_M(𝒮) + w G_W(𝒮)}.
To simplify (<ref>), define
𝒯≜{(m',z') ∈ℳ×{0,1}^rn: T_m',z'≥ 1 }
and suppose that (m,z) ∈𝒯. By definition, there exists an w ∈𝒲 such that w G_W(𝒮) = m G_M(𝒮) + z. In turn, since the mapping G_W(𝒮):𝒲→{0,1}^rn is a linear transformation, there must be 2^nullity(G_W(𝒮)) number of w ∈𝒲 such that w G_W(𝒮) = m G_M(𝒮) + z where nullity(G_W(𝒮)) is the dimension of the null space of G_W(𝒮). By the rank-nullity theorem <cit.>, 2^nullity(G_W(𝒮)) = 2^dim(𝒲)-rank(G_W(𝒮)) = 2^R' n-rank(G_W(𝒮)). In turn,
T_m,z =
2^R'n - rank(G_W(𝒮)) , (m,z) ∈𝒯
0, (m,z) ∉𝒯,
and in turn, following (<ref>),
P(z|m) =
2^ -rank(G_W(𝒮)), (m,z) ∈𝒯
0, (m,z) ∉𝒯.
Repeating the above approach for the PMF of Z, one can show using the assumption that m is uniformly distributed over ℳ = {0,1}^Rn that
P(z) =
2^ -rank(G(𝒮)), ∃ m ∈ℳ s.t. (m,z) ∈𝒯
0, ∀ m ∈ℳ, (m,z) ∉𝒯.
Using the above PMFs, we evaluate the mutual information between M and Z:
I_𝒮(M;Z) ≜∑_m ∈ℳ∑_z∈{0,1}^rn P(m,z) log_2 P(z|m)/P(z)
(c)=∑_(m,z) ∈𝒯 P(m,z) log_2 2^rank(G(𝒮)) - rank(G_W(𝒮))
(d)=rank(G(𝒮)) - rank(G_W(𝒮)).
where (c) follows from (<ref>), (<ref>), and P(m,z) = 0 ∀ (m,z) ∉𝒯, and (d) follows from ∑_(m,z) ∈𝒯 P(m,z) = 1.
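The rank identity above is easy to verify numerically. The following Python sketch (ours) brute-forces I_𝒮(M;Z) for a small randomly drawn linear code with uniform M and W and compares it to rank(G(𝒮)) - rank(G_W(𝒮)) computed over GF(2); the toy parameters are arbitrary.

```python
import numpy as np
from itertools import product
from collections import Counter

def gf2_rank(A):
    # Rank over GF(2) by Gaussian elimination.
    A = (np.array(A) % 2).astype(np.uint8)
    rank = 0
    rows, cols = A.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]       # swap pivot row into place
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]                   # eliminate column c elsewhere
        rank += 1
    return rank

# Toy linear code: Rn = 2 message bits, R'n = 2 key bits, blocklength n = 6.
rng = np.random.default_rng(1)
G = rng.integers(0, 2, size=(4, 6))
G_M, G_W = G[:2], G[2:]
S = [0, 2, 3]                      # adversary reads these rn = 3 coordinates

# Exact I_S(M; Z) with M and W uniform (all probabilities are multiples of 1/16).
joint = Counter()
for m in product([0, 1], repeat=2):
    for w in product([0, 1], repeat=2):
        z = tuple((np.array(m) @ G_M[:, S] + np.array(w) @ G_W[:, S]) % 2)
        joint[(m, z)] += 1 / 16
pm, pz = Counter(), Counter()
for (m, z), pr in joint.items():
    pm[m] += pr
    pz[z] += pr
I = sum(pr * np.log2(pr / (pm[m] * pz[z])) for (m, z), pr in joint.items())

print(round(I, 6), gf2_rank(G[:, S]) - gf2_rank(G_W[:, S]))   # the two values agree
```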
If R'+R ≤ r, then lim_n →∞Sem(𝒞_n) = ∞.
Suppose that M is uniformly distributed and that R+R' ≤ r. Recall that G has rank (R+R')n (c.f. Assumption <ref>). Since R+R' ≤ r, there exists a 𝒮∈𝒮 such that rank(G(𝒮)) = rank(G) = (R+R') n. Let 𝒮 be this coordinate set. It follows that rank(G_W(𝒮)) = R'n, and in turn, I_𝒮(M;Z) = Rn following Lemma <ref>. In conclusion, Sem(𝒞_n) ≥ Rn.
For the converse analysis, we will need the following version of the Plotkin bound <cit.>.
Suppose that Ψ is an [n,Rn] code (not necessarily linear) with minimum distance d_min∈ (0,n/2]. Then for δ≜ d_min/n,
R ≤ 1 - 2 δ + o(1)
where the o(1) term tends to 0 as n tends to infinity.
Converse (Proof of Theorem <ref>) Setup: Set p=0 and let r ∈ [0,1]. For any ϵ > 0, let R = max{1 - 2r,0 } + ϵ and let R' ∈ [0,1-R] such that R+R'>r (c.f. Corollary <ref>). In turn, we let 𝒞_n be an [n,Rn,R'n] linear code with generator matrix G. W.l.o.g., we assume that G is full rank (c.f. Assumption <ref>).
Converse Attack: The adversary orchestrates it attack in two steps. First, the adversary chooses an index set 𝒱⊆ [n] of size (R+R')n such that all columns of G(𝒱) are linearly independent. Note that such a set exists following our assumption that G is rank (R+R')n. Second, the adversary chooses a coordinate set 𝒮^* ∈𝒮 to be a subset of 𝒱 that minimizes the rank of G_W(𝒮^*). Once Alice transmits her codeword x(M,W), the adversary reads the codeword bits Z = x(M,W,𝒮^*) corresponding to the coordinates 𝒮^* with corresponding mutual information I_𝒮^*(M;Z).
Converse Analysis: The goal of the converse analysis is to show that I_𝒮^*(M;Z) ≥ 1. We remark that 𝒮^* is a strict subset of 𝒱 following the inequality r<R+R'. This fact together with the fact that all |𝒱| column of G(𝒱) are linearly independent implies that the rank of G(𝒮^*) is rn. In turn, following Lemma <ref>,
I_𝒮^*(M;Z) = rn - rank(G_W(𝒮^*)).
In the converse analysis, we show that rn - rank(G_W(𝒮^*)) ≥ 1.
We proceed with the following dual code perspective. Consider G_W(𝒱) as the R'n × (R+R')n generator matrix of some [(R+R')n,R'n] linear code Ψ. In turn, let G_W^⊥(𝒱) denote the Rn × (R+R')n generator matrix of the [(R+R')n,Rn] dual code Ψ^⊥ of Ψ. By definition, G_W(𝒱) is the parity check matrix corresponding to the generator matrix G_W^⊥(𝒱). Let d^⊥_min denote the minimum distance of Ψ^⊥. By the definition of the parity check matrix (e.g., see <cit.>), there exists d^⊥_min linearly dependent columns of the parity check matrix G_W(𝒱).
Hence, if d^⊥_min≤ rn, then there exists a size-rn subset of 𝒱 that contains the indices of these d^⊥_min linearly dependent columns of G_W(𝒱); by the minimality of the adversary's choice 𝒮^*, the rank of G_W(𝒮^*) is therefore bounded above by rn - 1. In turn, I_𝒮^*(M;Z) ≥ 1 via (<ref>). To complete the proof, we show that d^⊥_min≤ rn.
Applying the Plotkin bound (Lemma <ref>) to the dual code Ψ^⊥, we have that
R/R+R'≤ 1 - 2 δ^⊥ + o(1)
for the distance parameter δ^⊥≜d^⊥_min/(R+R')n and where the o(1) term tends to 0 as n tends to infinity. In turn, for large enough n,
d^⊥_min (d)≤ R'n/2 + o(n)
(e)≤ (2r - ϵ)n/2 + o(n)
(f)< rn
where (d) follows from a rearrangement of (<ref>), (e) follows from the setting of rate R = max{1 - 2r,0 } + ϵ and the trivial inequalities R+R' ≤ 1 and max{1-2r,0 }≥ 1- 2r, and (f) follows for large enough n. In conclusion, for large enough n, I_𝒮^*(M;Z) ≥ 1 and thus Sem(𝒞_n) ≥ 1.
§ A SOFT-COVERING LEMMA FOR K-WISE INDEPENDENT CODEWORDS
Notation: In this section only, we consider a more general code model than that introduced in Section <ref>. For an alphabet 𝒰 which is not necessarily binary, a blocklength n and a (private) key rate R' > 0, we define an [n,R'n] code 𝒞_n as a subset of 𝒰^n of size |𝒞_n| = 2^R'n. We will often describe 𝒞_n by its set of codewords {u(w,𝒞_n) }_w ∈𝒲 for a key set 𝒲 = [2^R'n].
We introduce the soft-covering problem, depicted in Fig. <ref>. The problem setup is as follows. For a blocklength n ≥ 1, let 𝒞_n = {u(w,𝒞_n)}_w ∈𝒲 be an [n,R'n] code. Given a finite input alphabet 𝒰, an input distribution Q_U, a finite output alphabet 𝒱 and channel Q_V|U, consider the PMFs induced on the output sequence V∈𝒱^n when an input sequence U∈𝒰^n is sent through the n-shot memoryless channel Q_V|U^n: for v∈𝒱^n,
* The PMF of V when U is drawn randomly from Q^n_U, i.e.,
Q_V(v) = Q_V^n(v) = ∑_u∈𝒰^n Q^n_V|U(v|u) Q_U^n(u).
* The PMF of V when U is the codeword u(W,𝒞_n) for W ∼Unif(𝒲), i.e.,
P^(𝒞_n)_V(v) ≜∑_w ∈𝒲 Q^n_V|U(v|u(w,𝒞_n)) 2^-R'n.
The soft-covering problem asks how to design a code 𝒞_n such that the induced PMF 𝒫^(𝒞_n)_V is approximately Q_V^n in the limit as n tends to infinity. The following lemma states that if R' > I(U;V), then for any integer k large enough a random [n,R'n] code 𝒞_n with k-wise independent codewords each drawn from distribution Q_U^n results in P^(𝒞_n)_V≈ Q^n_V for large enough n. Recall that we denote random codes with script typeface (e.g., 𝒞_n) and we denote realizations of random codes with calligraphic typeface (e.g., 𝒞_n).
Suppose that the random code 𝒞_n has k-wise independent codewords for some even integer k≥ 4, each drawn from a PMF Q_U^n for finite 𝒰. Let Q_V|U be any conditional PMF where |𝒱| is finite and let R' > I(U;V). There exists some γ_0 >0 and γ_1 >0 that depend only on R' and I(U;V) such that for large enough n
ℙ_𝒞_n( D(P_V^(𝒞_n)|| Q^n_V) > 2^-γ_1 n) ≤ 2^(-k γ_0 + log_2 |𝒱|) n
where we recall that D is the relative entropy.
§.§ Overview of Proof of Lemma <ref>
Setup: Let the blocklength n ≥ 1 and key rate R' > I(U;V), and let k be a positive integer that will be set later. In turn, let 𝒞_n be a random [n,R'n] code drawn from any distribution that has k-wise independent codewords each with marginal PMF Q^n_U.
The proof of Lemma <ref> follows a two step approach. In the first step, the proof closely follows the proof outline of <cit.> in which we construct an upper bound on the relative entropy D(P_V^(𝒞_n) || Q^n_V) based on a typical set construction of n-symbol sequences. In the second step, the proof diverges from <cit.> to analyze how the relative entropy upper bound concentrates. This second step uses the k-wise independent property of the random code 𝒞_n.
Define the information density of a scalar pair (u,v) ∈𝒰×𝒱 as
i_Q_U,V(u;v) ≜log_2 Q_V|U(v|u)/Q_V(v). In turn, define the information density of an n-symbol sequence pair (u,v) ∈𝒰^n ×𝒱^n,
i_Q^n_U,V(u;v) ≜∑_j=1^n i_Q_U,V(u_j;v_j).
For ϵ > 0, define a typical set of n-symbol sequence pairs
𝒜_ϵ≜{ (u,v) ∈𝒰^n ×𝒱^n: i_Q^n_U,V(u;v) < (I(U;V)+ϵ)n }.
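To make these definitions concrete, here is a small Python sketch (ours) that evaluates i_{Q^n_{U,V}}(u;v) symbol by symbol and tests membership in 𝒜_ϵ; the binary symmetric channel used here is only an example.

```python
import numpy as np

def info_density(u_seq, v_seq, Q_U, Q_VgU):
    # i_{Q^n_{U,V}}(u; v) = sum_j log2( Q_{V|U}(v_j|u_j) / Q_V(v_j) ).
    Q_V = Q_U @ Q_VgU                          # output marginal
    return float(sum(np.log2(Q_VgU[u, v] / Q_V[v]) for u, v in zip(u_seq, v_seq)))

def mutual_information(Q_U, Q_VgU):
    Q_V = Q_U @ Q_VgU
    return float(sum(Q_U[u] * Q_VgU[u, v] * np.log2(Q_VgU[u, v] / Q_V[v])
                     for u in range(len(Q_U))
                     for v in range(Q_VgU.shape[1]) if Q_VgU[u, v] > 0))

def in_typical_set(u_seq, v_seq, Q_U, Q_VgU, eps):
    # Membership in A_eps: i(u; v) < (I(U;V) + eps) * n.
    n = len(u_seq)
    return info_density(u_seq, v_seq, Q_U, Q_VgU) < (mutual_information(Q_U, Q_VgU) + eps) * n

# Example: uniform binary input through a binary symmetric channel BSC(0.1).
Q_U = np.array([0.5, 0.5])
Q_VgU = np.array([[0.9, 0.1], [0.1, 0.9]])
u = [0, 1, 1, 0, 1]
v = [0, 1, 0, 0, 1]
print(info_density(u, v, Q_U, Q_VgU), in_typical_set(u, v, Q_U, Q_VgU, eps=0.1))
```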
Recall that for an [n,R'n] code 𝒞_n, the PMF P^(𝒞_n)_V is the PMF of V when U is a codeword drawn from the code 𝒞_n (c.f. (<ref>)). We split P^(𝒞_n)_V into two terms based on the typical set 𝒜_ϵ: for v∈𝒱^n, define
P^(𝒞_n)_V,1(v) ≜
2^-R'n∑_w ∈𝒲 Q^n_V|U(v|u(w,𝒞_n)) 1{(u(w,𝒞_n),v) ∈𝒜_ϵ},
and define
P^(𝒞_n)_V,2(v) ≜
2^-R'n∑_w ∈𝒲 Q^n_V|U(v|u(w,𝒞_n)) 1{(u(w,𝒞_n),v) ∉𝒜_ϵ}.
By inspection, P^(𝒞_n)_V = P^(𝒞_n)_V,1 + P^(𝒞_n)_V,2; note that P^(𝒞_n)_V,1 and P^(𝒞_n)_V,2 may not be PMFs. We also define the ratios
Δ^(𝒞_n)_V,1(v) ≜P^(𝒞_n)_V,1(v)/Q^n_V(v) and Δ^(𝒞_n)_V,2(v) ≜P^(𝒞_n)_V,2(v)/Q^n_V(v).
We restate a result from <cit.> that bounds the relative entropy of P^(𝒞_n)_V and Q^n_V in terms of the introduced quantities.
For every [n,R'n] code 𝒞_n,
D( P^(𝒞_n)_V|| Q^n_V) ≤ H_2 ( ∑_v∈𝒱^n P^(𝒞_n)_V,2(v) )
+ D( P^(𝒞_n)_V,1|| Q^n_V) + D( P^(𝒞_n)_V,2|| Q^n_V).
We remark that the RHS of the inequality of Lemma <ref> is well defined if we extend the definition of relative entropy D(·||·) in the natural way to account for functions P^(𝒞_n)_V,1 and P^(𝒞_n)_V,2 which may not be PMFs. The following sufficient condition for Lemma <ref> follows from Lemma <ref>.
Suppose that for some π_0 ∈ [0,1] and with probability at least 1-π_0 over the random code distribution, for some π_1>0
∑_v∈𝒱^n P^(𝒞_n)_V,2(v) < 2^-π_1 n
and
Δ^(𝒞_n)_V,1(v) < 1 + 2^-π_1 n for all v∈𝒱^n.
Then
ℙ_𝒞_n( D( P^(𝒞_n)_V|| Q^n_V) ≥ q_n 2^-π_1 n) ≤π_0
where q_n = 2log_2 e + π_1 n + n log_2 ( max_v ∈supp(Q_V)1/Q_V(v)).
Let π_1>0 and suppose that 𝒞_n is a realization of 𝒞_n such that both (<ref>) and (<ref>) hold. We bound each of the 3 terms in the inequality of Lemma <ref> using (<ref>) and (<ref>).
Consider the first term. Following (<ref>) and the inequality[This inequality follows from an application of both the inequality x/(1+x) ≤ln(1+x) for x>-1 and the definition of H_2(x).] H_2(x) ≤ x log_2(e/x) for x ∈ [0,1], we have that
H_2( ∑_v∈𝒱^n P^(𝒞_n)_V,2(v) ) ≤ H_2(2^-π_1 n) < 2^-π_1 n (log_2 e + π_1 n).
Moving on to the second term, following (<ref>) and the inequality log_2(1+x) ≤ x log_2 e for x>0, we have that
D(P^(𝒞_n)_V,1 || Q_V^n) ≜∑_v∈𝒱^n P^(𝒞_n)_V,1(v) log_2 Δ^(𝒞_n)_V,1
< ∑_v∈𝒱^n P^(𝒞_n)_V,1log_2 (1+2^-π_1 n)
≤log_2(1+2^-π_1 n) ≤ 2^-π_1 nlog_2 e.
Moving to the last term, we will use the following inequality which uses the assumption that |𝒱| is finite: Δ^(𝒞_n)_V,2(v) ≜P^(𝒞_n)_V,2(v)/Q^n_V(v)≤max_v' ∈supp(Q^n_V)1/Q^n_V(v') = (max_v' ∈supp(Q_V)1/Q_V(v'))^n for all v∈𝒱^n. Following this inequality and (<ref>), we have that
D(P^(𝒞_n)_V,2 || Q^n_V) ≜∑_v∈𝒱^n P^(𝒞_n)_V,2(v) log_2 Δ^(𝒞_n)_V,2
≤∑_v∈𝒱^n P^(𝒞_n)_V,2(v) n log_2 ( max_v' ∈supp(Q_V)1/Q_V(v'))
< 2^-π_1 n n log_2 ( max_v' ∈supp(Q_V)1/Q_V(v')).
Combining the bounds (<ref>), (<ref>) and (<ref>) together with Lemma <ref>, the desired inequality (<ref>) immediately follows.
In the remainder of the proof of Lemma <ref>, we apply the framework of the sufficient condition (Lemma <ref>) and show that inequalities (<ref>) and (<ref>) hold with probability 1-π_0 over the distribution of 𝒞_n for a value π_0 = 2^(-kΩ(n) + n log_2|𝒱|) and some π_1 > 0. As the primary technical tools of the proof, we use the concentration inequalities of Schmidt, Siegel and Srinivasan <cit.> and Bellare and Rompel <cit.> for sums of k-wise independent random variables.
§.§ Proof of Lemma <ref>
First, we show that inequality (<ref>) holds with high probability over the random code 𝒞_n for some π_1>0. Consider the quantity
∑_v∈𝒱^n P^(𝒞_n)_V,2(v)
= ∑_w ∈𝒲 2^-R'n∑_v∈𝒱^n Q^n_V|U(v|U(w,𝒞_n)) 1{(U(w,𝒞_n),v) ∉𝒜_ϵ}
= ∑_w ∈𝒲 2^-R'nℙ_V∼ Q^n_V|U( (U(w,𝒞_n),V) ∉𝒜_ϵ| U = U(w,𝒞_n))
Note that (<ref>) is a sum of |𝒲|=2^R'n k-wise-independent terms following that the codewords of 𝒞_n are k-wise independent.
For w ∈𝒲, the expectation of the w^th term in the sum of (<ref>) is
2^-R'n𝔼_𝒞_nℙ_V∼ Q^n_V|U( (U(w,𝒞_n),V) ∉𝒜_ϵ| U = U(w,𝒞_n))
(a)= 2^-R'nℙ_(U,V) ∼ Q^n_U,V( (U,V) ∉𝒜_ϵ)
(b)= 2^-R'nℙ_(U,V) ∼ Q^n_U,V(i_Q^n_U,V(U;V) ≥ (I(U;V)+ϵ)n )
(c)= 2^-R'nℙ_(U,V) ∼ Q^n_U,V( 2^λ i_Q^n_U,V(U;V)≥ 2^λ(I(U;V)+ϵ)n)
(d)≤ 2^-R'n( 𝔼_(U,V) ∼ Q_U,V[ 2^λ i_Q_U,V(U;V)]/2^λ(I(U;V)+ϵ))^n
= 2^-λ( I(U;V) + ϵ - 1/λlog_2 𝔼_(U,V) ∼ Q_U,V[2^λ i_Q_U,V(U;V)] )n - R'n
(e)= 2^-λ( I(U;V) + ϵ - D_λ+1(Q_U,V||Q_U Q_V) )n - R'n
= 2^-(α_λ,ϵ + R')n
where (a) follows from the fact that U(w,𝒞_n) is distributed as Q^n_U, (b) follows from the definition of 𝒜_ϵ, (c) holds for any λ > 0, (d) follows from Markov's inequality and the product form of the joint PMF Q^n_U,V, (e) follows from the definition of Rényi divergence of order λ+1, and where α_λ,ϵ≜λ( I(U;V) + ϵ - D_λ+1(Q_U,V||Q_U Q_V) ).
For ϵ>0, we remark that i) α_λ,ϵ tends to 0 as λ tends to 0, and ii) α_λ,ϵ is positive for small enough λ>0; these follow from the facts that D_λ+1(Q_U,V||Q_UQ_V) is a continuous and non-decreasing function of λ>0 and that D_1(Q_U,V||Q_UQ_V) = I(U;V). In the sequel, for a given ϵ>0, we let λ>0 be small enough such that α_λ,ϵ∈ (0,R'). Moving forward, we write α_λ,ϵ as simply α when the dependency on λ and ϵ is clear from context.
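For intuition, a short Python sketch (ours) that evaluates α_{λ,ϵ} for a toy joint PMF (a uniform bit observed through a binary symmetric channel); it only illustrates the two properties remarked above.

```python
import numpy as np

def alpha_lambda_eps(Q_UV, lam, eps):
    # alpha_{lam,eps} = lam * ( I(U;V) + eps - D_{lam+1}(Q_{U,V} || Q_U Q_V) ),
    # with divergences in bits.
    Q_U, Q_V = Q_UV.sum(axis=1), Q_UV.sum(axis=0)
    ratio = Q_UV / np.outer(Q_U, Q_V)
    s = Q_UV > 0
    I = float(np.sum(Q_UV[s] * np.log2(ratio[s])))                     # D_1 = I(U;V)
    D = float(np.log2(np.sum(Q_UV[s] * ratio[s] ** lam)) / lam)        # D_{lam+1}
    return lam * (I + eps - D)

# Joint PMF of a uniform input bit sent through a BSC(0.1).
Q_UV = 0.5 * np.array([[0.9, 0.1], [0.1, 0.9]])
for lam in (0.1, 0.05, 0.01):
    # alpha_{lam,eps} is positive for these small lam and tends to 0 as lam -> 0.
    print(lam, alpha_lambda_eps(Q_UV, lam, eps=0.05))
```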
Suppose that {T_w}_w ∈𝒲 are random variables that take values in [0,1], and define T ≜∑_w ∈𝒲 T_w and μ≜𝔼[T]. For τ > 0, if the variables are k-wise independent for some k ≥ k^*(|𝒲|,μ,τ) ≜⌈μτ/(1 - μ/|𝒲|)⌉, then
ℙ( T ≥μ(1+τ) ) ≤\binom{|𝒲|}{k^*}(μ/|𝒲|)^k^* / \binom{μ(1+τ)}{k^*}.
Using the framework of Lemma <ref>, we set T_w for each w ∈𝒲 to be the w^th term in the sum of (<ref>), i.e.,
T_w = 2^-R'nℙ_V∼ Q^n_V|U( (U(w,𝒞_n),V) ∉𝒜_ϵ | U = U(w,𝒞_n) ),
and in turn, we have that T ≜∑_w ∈𝒲 T_w = ∑_v∈𝒱^n P^(𝒞_n)_V,2(v). Note that the expectation μ≜𝔼_𝒞_n [T] is bounded above by 2^-α n following (<ref>). For a parameter β∈ (0,α) that will be set later, set τ such that μ(1+τ) = 2^(β-α)n.
Before applying Lemma <ref>, we normalize the random variables {T_w}_w ∈𝒲 to optimize the parameter k^*. For some parameter θ∈ (0,1] which we will soon set, define T'_w = θ 2^R'n T_w and note that T'_w ∈ [0,1]. Similarly, define the normalized sum T' = θ 2^R'n T, its normalized expectation μ' = θ 2^R'nμ which is bounded above by θ 2^(R'-α)n, and note that μ'(1+τ)= θ 2^(R'+β-α)n. Now consider the quantity k^*(|𝒲|,μ',τ) as a function of θ, and let n be large enough and choose θ∈ (0,1] such that k^*(|𝒲|,μ',τ) is equal to k; such a choice exists for fixed k and large enough n since k^*(|𝒲|,μ',τ) ≥μ' τ = θ 2^(R'+β-α)n - μ' ≥θ 2^(R'-α)n(2^β n-1), which grows beyond any fixed k for fixed θ > 0 as n tends to infinity since α < R'.
We apply Lemma <ref> to the normalized random variables {T'_w}_w ∈𝒲. We have for large enough n
ℙ_𝒞_n( ∑_v∈𝒱^n P^(𝒞_n)_V,2(v) ≥ 2^(β-α)n) =
ℙ_𝒞_n( T ≥ 2^(β - α)n)
(f)=ℙ_𝒞_n(T' ≥θ 2^(R'+β-α)n)
(g)≤\binom{2^R'n}{k}( μ'/2^R'n)^k / \binom{θ 2^(R'+β-α)n}{k}
(h)≤ (k^k/k!)( μ'/(θ 2^(R'+β-α)n))^k
(i)≤ (k^k/k!) 2^-k β n
where (f) follows from the normalization T' = θ 2^R'n T, (g) follows for large enough n from Lemma <ref> and the choice of θ such that k^*=k, (h) follows from the inequalities m^k/k^k ≤\binom{m}{k}≤ m^k/k! for any 1 ≤ k ≤ m, and (i) follows from the bound μ' ≤θ 2^(R'- α)n.
Next, we show that inequality (<ref>) holds with high probability over the random code 𝒞_n. For v∈𝒱^n, expand Δ^(𝒞_n)_V,1(v):
Δ^(𝒞_n)_V,1(v) ≜P^(𝒞_n)_V,1(v)/Q^n_V(v)
= ∑_w ∈𝒲 2^-R'nQ^n_V|U(v| U(w,𝒞_n))/Q^n_V(v)1{(U(w,𝒞_n),v) ∈𝒜_ϵ}.
Note that (<ref>) is a sum of |𝒲|=2^R'n k-wise independent terms following that the codewords of 𝒞_n are k-wise independent. For w ∈𝒲, the expectation of the w^th term in the sum of (<ref>) is
2^-R'n𝔼_𝒞_n[ Q^n_V|U(v|U(w,𝒞_n))/Q^n_V(v)1{(U(w,𝒞_n),v) ∈𝒜_ϵ}]
(j)≤ 2^-R'n𝔼_𝒞_n[ Q^n_V|U(v|U(w,𝒞_n))/Q^n_V(v)]
(k)= 2^-R' n∑_u∈𝒰^n Q^n_U(u) Q^n_V|U(v|u)/Q^n_V(v)
= 2^-R'n
where (j) follows from the trivial bound 1{·}≤ 1 and (k) follows from the distribution of codeword U(w,𝒞_n) ∼ Q^n_U.
Let k ≥ 4 be an even integer. Suppose that {T_w}_w ∈𝒲 are k-wise independent random variables that take values in [0,1], and define T ≜∑_w ∈𝒲 T_w and μ≜𝔼[T]. For any τ > 0,
ℙ(T ≥μ(1+τ)) ≤ 8 ( (k μ + k^2)/(μτ)^2)^(k/2).
Using the framework of Lemma <ref>, fix v∈𝒱^n and set T_w for each w∈𝒲 to be
T_w = 2^(-I(U;V)-ϵ)n(Q^n_V|U(v | U(w,𝒞_n))/Q^n_V(v)) 1{(U(w,𝒞_n),v) ∈𝒜_ϵ}
which coincides with the w^th term in the sum of (<ref>) normalized by the factor 2^(R' - I(U;V) - ϵ)n. This normalization factor was chosen to ensure T_w is bounded above by 1, which follows from the fact that for any (u,v) ∈𝒜_ϵ we have that Q^n_V|U(v|u)/Q^n_V(v) < 2^(I(U;V)+ϵ)n. Set T = ∑_w ∈𝒲T_w and note that μ≜𝔼_𝒞_n[T] is bounded above by 2^(R' - I(U;V)-ϵ)n following (<ref>) and the choice of normalization factor. Finally, set τ such that μ(τ+1) = 2^(R' - I(U;V)-ϵ)n(1+2^(β-α)n) and note that μτ = 2^(R'-I(U;V)-ϵ)n(1+2^(β-α)n) - μ≥ 2^(R'-I(U;V)-ϵ+β-α)n. Applying Lemma <ref>, we have that for even integer k ≥ 4, small enough ϵ>0 and large enough n
ℙ_𝒞_n( Δ^(𝒞_n)_V,1(v) ≥ 1 + 2^(β-α)n) = ℙ_𝒞_n( T ≥μ(1+τ))
(ℓ)≤ 8 ( (k 2^(R' - I(U;V)-ϵ)n + k^2)/2^2(R'-I(U;V)-ϵ+β-α)n)^(k/2)
(m)≤ 8 ( (k+1) 2^(R' - I(U;V)-ϵ)n/2^2(R'-I(U;V)-ϵ+β-α)n)^(k/2)
= 8 (k+1)^(k/2)· 2^-k η n.
where (ℓ) follows from Lemma <ref> and the bounds μ≤ 2^(R'-I(U;V)-ϵ)n and μτ≥ 2^(R'-I(U;V)-ϵ+β-α)n, and (m) follows for small enough ϵ>0 and large enough n such that k 2^(R'-I(U;V)-ϵ)n >> k^2, and where
η = (R'-I(U;V)-ϵ +2(β-α))/2
In turn, by a simple union bound over all v∈𝒱^n, and by letting k ≥ 4 be an even integer, ϵ>0 be small enough and n be large enough,
ℙ_𝒞_n( ∃v∈𝒱^n s.t. Δ^(𝒞_n)_V,1(v) ≥ 1 + 2^(β-α)n)
≤ 8k (k+1)^(k/2)· 2^(-k η +log_2|𝒱|)n.
To complete the proof, we put together the above results and apply the sufficient condition (Lemma <ref>). In the framework of Lemma <ref>, we set π_1 = α - β. If π_1>0, then it follows from Lemma <ref> that the inequalities (<ref>) and (<ref>) hold with probability at least 1-π_0 where
π_0 = (k^k/k!) 2^-k β n + 8k (k+1)^(k/2)· 2^(-kη+log_2 |𝒱|)n
where the expression for π_0 follows from (<ref>) and (<ref>) together with a simple union bound.
The last step is to show that for some choice of the free parameters ϵ>0, λ>0 and β∈ (0,α) we have that π_1 > 0 and π_0 = 2^(-k Ω(n) + n log_2 |𝒱|). Recall that for a fixed ϵ>0, α = α_λ,ϵ tends to 0 as λ tends to 0, and α_λ,ϵ is positive for small enough λ>0. Furthermore, recall that R' > I(U;V) by assumption, and thus, η given by (<ref>) is positive for small enough ϵ>0, small enough α_λ,ϵ>0, and any β∈ (0,α_λ,ϵ). Thus, given even k≥ 4, we can pick ϵ>0 small enough, and in turn, pick λ>0 small enough such that both α_λ,ϵ and η are positive. In turn, picking β∈ (0,α_λ,ϵ) ensures that α_λ,ϵ - β >0 and thus π_1>0. Thus, π_0 = 2^(-kΩ(n) + n log_2 |𝒱|). This completes the proof of Lemma <ref>.
§ PROOF OF THEOREM <REF>
Setup: Let p ∈ [0,1/2] and r ∈ [0,1] such that 1 - H_2(p)- r > 0. For ϵ>0 and ϵ' ∈ (0,ϵ), let R = 1 - H_2(p) - r - ϵ and R' = r + ϵ'. Let k be a positive integer to be set in the proof. The goal of the proof is to show that for large enough k constant in n and for large enough n, there exists an [n,Rn,R'n,k] pseudolinear code 𝒞_n such that both Sem(𝒞_n) = 2^-Ω(n) and P^max_error(𝒞_n) = o(1).
Encoding: Alice uses an [n,Rn,R'n] code 𝒞_n = {x(m,w) }_(m,w) ∈ℳ×𝒲 to encode her message M. That is, for a message distribution P_M ∈𝒫(ℳ), Alice draws M ∼ P_M and W ∼Unif(𝒲) and transmits x(M,W).
Decoding: Upon receiving the channel output y, Bob performs decoding by choosing the message estimate m̂ and key estimate ŵ such that
(m̂,ŵ) = arg min_(m,w) ∈ℳ×𝒲 d_H(x(m,w),y)
where d_H denotes the Hamming distance.
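A brute-force Python sketch of this decoding rule (ours; the exhaustive search over ℳ×𝒲 is exponential in n, so it is purely illustrative):

```python
import numpy as np

def min_distance_decode(y, codebook):
    # codebook maps (m, w) -> n-bit codeword x(m, w) as a numpy array of bits.
    # Returns the message-key pair whose codeword is closest to y in Hamming distance.
    return min(codebook, key=lambda mw: int(np.sum(codebook[mw] != y)))

# Tiny usage example with two codewords.
codebook = {(0, 0): np.array([0, 0, 0, 0]), (1, 0): np.array([1, 1, 1, 1])}
print(min_distance_decode(np.array([1, 0, 1, 1]), codebook))   # -> (1, 0)
```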
§.§ Code Distribution
We show the existence of a good code via a random coding argument. As our random code distribution, we will use the following distribution over [n,Rn,R'n,k] pseudolinear codes.
Let F[n,Rn,R'n,k] be the distribution over all [n,Rn,R'n,k] pseudolinear codes where the parity check matrix H (c.f. Definition <ref>) is fixed and the generator matrix G is chosen uniformly from {0,1}^ℓ× n.
The following property of F[n,Rn,R'n,k] is useful.
The codewords of 𝒞_n ∼ F[n,Rn,R'n,k] are uniformly distributed over {0,1}^n and are k-wise independent.
§.§ Secrecy Analysis
For a given 𝒮∈𝒮, let Q^(𝒮)_Z denote the PMF of the adversary's observation Z∈{0,1}^rn when Alice sends a random n-bit sequence X∼ Q^n_X ≜Unif({0,1}^n) through the channel. We have that
Q^(𝒮)_Z(z) = Q_X(𝒮)(z) = Q^rn_X(z), for all z∈{0,1}^rn.
Furthermore, for an [n,Rn,R'n] code 𝒞_n, let P^(𝒞_n,𝒮)_M,Z denote the joint PMF of message M and observation Z when Alice sends the codeword x(M,W,𝒞_n) through the channel. Then for marginal PMF P_M ∈𝒫(ℳ),
I_𝒞_n( M; Z) ≜ D ( P^(𝒞_n,𝒮)_M,Z || P_M P^(𝒞_n,𝒮)_Z)
(a)=D(P^(𝒞_n,𝒮)_M,Z || P_M Q^(𝒮)_Z) - D(P^(𝒞_n,𝒮)_Z || Q^(𝒮)_Z)
(b)≤ D(P^(𝒞_n,𝒮)_M,Z || P_M Q^(𝒮)_Z)
≤∑_m ∈ℳ P_M(m) max_m' ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m' || Q^(𝒮)_Z)
= max_m ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m || Q^(𝒮)_Z)
where (a) follows from the relative entropy chain rule and (b) follows from the property D(· || ·) ≥ 0. Thus,
Sem(𝒞_n) = max_P_M ∈𝒫(ℳ), 𝒮∈𝒮 I_𝒞_n(M;Z)
(c)≤max_𝒮∈𝒮max_m ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m || Q^(𝒮)_Z)
(d)=max_𝒮∈𝒮max_m ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X)
where (c) follows from (<ref>) and (d) follows from (<ref>).
Consider the relative entropy D ( P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X ) in the framework of the soft-covering lemma for k-wise independent codewords (Lemma <ref>), as illustrated in Fig. <ref>. Here, (m,W) is uniformly drawn from a message-key product set {m}×𝒲 of rate R'/r, i.e., |{m}×𝒲| = 2^R'n = 2^(R'/r)rn. Since the rate R' / r = (r+ϵ')/r is greater than I(X;Z) = 1, it follows from Lemma <ref> that there exist γ_0>0 and γ_1>0 such that for even integer k≥ 4 and large enough n,
ℙ_𝒞_n( D ( P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X ) > 2^-γ_1 rn) ≤ 2^(-k γ_0 +1)rn.
In turn,
ℙ_𝒞_n( Sem(𝒞_n) > 2^-γ_1 rn)
(e)≤ℙ_𝒞_n( max_𝒮∈𝒮max_m ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X) > 2^-γ_1 r n)
≤ℙ_𝒞_n( ⋃_𝒮∈𝒮⋃_m ∈ℳ{ D(P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X) > 2^-γ_1 r n})
(f)≤ 2^(-k γ_0 r + r + R+1)n
where (e) follows from (<ref>), and (f) follows for large enough n from a simple union bound, the inequality |𝒮| = \binom{n}{rn}≤ 2^n and (<ref>).
§.§ Reliability Analysis
Unlike the above secrecy analysis, the reliability analysis requires additional structure of the code 𝒞_n beyond the k-wise independence property. In particular, we will use the pseudolinear structure of 𝒞_n. We restate a reliability result of <cit.> without proof. For a code 𝒞_n and a message m ∈ℳ, define the probability of decoding error conditioned on M=m as
P^(m)_error(𝒞_n) ≜ℙ(M̂≠ m | M=m)
where the probability is w.r.t. W ∼Unif(𝒲) and the adversary's choice of bit read/flip locations.
Suppose that p ∈ (0,1/2) and r< 1 - H_2(p). If the key rate R' > r and the sum rate R+R' < 1 - H_2(p), then for large enough (but fixed) k and any fixed δ>0, there exists γ_2>0 such that for large enough n and any m ∈ℳ,
ℙ_𝒞_n( P^(m)_error(𝒞_n) > δ) ≤ 2^-kγ_2 n.
We apply Lemma <ref> to bound the maximum probability of error P^max_error(𝒞_n) ≜max_m ∈ℳ P^(m)_error (𝒞_n). Note that our choice of ϵ and ϵ' ensures that R'>r and R+R' < 1-H_2(p). Also, we have that R < 1 - H_2(p) - r. Thus, for δ > 0,
ℙ_𝒞_n( P^max_error(𝒞_n) > δ) ≜ℙ_𝒞_n( max_m ∈ℳ P^(m)_error(𝒞_n) > δ)
(g)≤∑_m ∈ℳℙ_𝒞_n( P^(m)_error(𝒞_n) > δ)
(h)≤ 2^(-k γ_2 + 1 - H_2(p) - r)n
where (g) follows from a union bound and (h) follows for large enough k and for large enough n via Lemma <ref>.
§.§ Combining Secrecy and Reliability Analysis
To complete the proof, we combine the secrecy and reliability analysis. For large enough k and k even, and for large enough n,
ℙ_𝒞_n( {Sem(𝒞_n) > 2^-γ_1 r n}∪{ P^max_error(𝒞_n) > δ})
≤ 2^(-k γ_0 r + r + R + 1)n + 2^(-k γ_2 + 1 - H_2(p) - r)n
following both (<ref>), (<ref>) and a simple union bound. In summary, for large enough k and k even (which is constant in n) and large enough n, we have that (<ref>) is less than 1, and in turn, there exists an [n,Rn,R'n,k] pseudolinear code 𝒞_n such that Sem(𝒞_n) ≤ 2^-γ_1 r n and P^max_error(𝒞_n) ≤δ.
§ CONCLUSION
We showed that random pseudolinear codes achieve the best known lower bound of the semantic secrecy capacity of the binary adversarial wiretap channel of type II. A necessary condition on the non-linearity of a capacity-achieving code was also shown. One possible avenue for future research is to apply further derandomization techniques to our random codes, e.g., in the spirit of <cit.>. The goal here is to replace random pseudolinear codes with a significantly derandomized class that can maintain the same error-correction and secrecy power while being more amenable to efficient decoding algorithms.
§ LINEAR COSET CODING SCHEMES
In this appendix, we prove that the linear coset coding scheme of Ozarow and Wyner <cit.> is not semantically-secret for any positive message rate. We first define coset coding.
The linear coset coding scheme, proposed in <cit.>, is as follows: Let R>0 be the message rate. For blocklength n, let H be the Rn × n parity check matrix of some [n,n-Rn] binary linear code. Encoding: Suppose that Alice wants to transmit a message m ∈{0,1}^Rn. Alice encodes m by choosing the n bit codeword x randomly and uniformly from the set of solutions {x' ∈{0,1}^n: x' H^T = m} and transmits x over the (noiseless) (0,r)-AWTC II. Decoding: Upon receiving x, Bob performs decoding by choosing the message estimate m = xH^T. It is easy to show that the above linear coset coding scheme is an [n,Rn,(1-R)n] linear code.
We prove the following result.
Let rate R>0. For large enough n, any [n,Rn,(1-R)n] binary code 𝒞_n that is a linear coset coding scheme has semantic leakage Sem(𝒞_n) ≥ 1.
For any R>0, let 𝒞_n be an [n,Rn,(1-R)n] binary code that is a linear coset coding scheme and let H be the corresponding Rn × n parity check matrix. Suppose that Alice's message is uniformly distributed over {0,1}^Rn. To prove Lemma <ref>, we will use the following result due to Ozarow and Wyner.
For an index set ℐ⊆ [n], let H(ℐ) denote the |ℐ| columns of H indexed by ℐ. The adversary's equivocation is
Δ≜min_𝒮∈𝒮 H(M|Z) = min_ℐ⊆ [n]: |ℐ| = (1-r)nrank( H(ℐ) ).
Recall the following definitional inequalities:
Sem(𝒞_n) ≥max_𝒮∈𝒮 I_𝒮(M;Z)
= H(M) - min_𝒮∈𝒮 H(M|Z) = Rn - Δ.
Thus, to show that Sem(𝒞_n) ≥ 1 for large enough n, it is sufficient to show that Δ≤ Rn -1.
Let n be large enough and suppose by contradiction that Δ = Rn. By Lemma <ref>, we have that rank(H(ℐ)) = Rn for every set ℐ⊆ [n] s.t. |ℐ| = (1-r)n. This in turn by the definition of H implies that the [n,(1-R)n] binary code with parity check matrix H has minimum distance, denoted d_min, of at least Rn+1. However, by the Plotkin bound of Lemma <ref>, we have that 1-R ≤ 1 - 2 d_min/n + o(1), or equivalently, d_min≤Rn/2 + o(n). Thus, for n large enough such that the o(n) term is negligible, we have a contradiction. This completes the proof of Lemma <ref>.
§ DISCUSSION OF ASSUMPTION <REF>
We show that if the generator matrix G of an [n,Rn,R'n] linear code 𝒞_n is not full rank, then either the probability of decoding error is large such that P^max_error(𝒞_n) ≥ 1/2 or both 𝒲 and G can be replaced with a smaller key set 𝒲' and generator matrix G', respectively, without changing the code. Let 𝒞_n be an [n,Rn,R'n] linear code and suppose that G is not full rank.
Suppose that G_W is full rank. Since the channel is noiseless, Bob's received sequence is guaranteed to be a codeword in 𝒞_n. Suppose that Bob receives the codeword c∈𝒞_n. From Bob's perspective, the set of all possible message-key pairs that Alice could have sent is
ℳ_c = { (m,w) ∈ℳ×𝒲: [ m w ] G = c}
= { (m,w) ∈ℳ×𝒲: m G_M + w G_W = c}.
Since the mapping G:{0,1}^(R+R')n→{0,1}^n is a linear transformation, the number of pairs in ℳ_c is |ℳ_c|= 2^nullity(G) = 2^(R+R')n - rank(G) where the second equality follows from the rank-nullity theorem. In turn, since rank(G) < (R+R')n, it follows that |ℳ_c| ≥ 2. Now consider two unique pairs in ℳ_c, say (m_1,w_1) and (m_2,w_2). We show that m_1 ≠ m_2 by considering 2 cases. (Case 1): Suppose that w_1 = w_2. Then m_1 ≠ m_2 by the uniqueness of the pairs. Done. (Case 2): Suppose instead that w_1 ≠ w_2. Since G_W is full rank, we have that (w_1+w_2)G_W ≠ 0. In turn, [m_1 w_1]G = [m_2 w_2]G implies that (m_1+m_2) G_M = (w_1+w_2) G_W ≠ 0, and thus, m_1 ≠ m_2. Done. In summary, upon receiving c, Bob finds that at least 2 messages could be Alice's message. Thus, for PMFs P_M = Unif(ℳ) and P_W = Unif(𝒲),
P^max_error(𝒞_n) ≥ℙ_(M,W) ∼ P_M P_W( M̂≠ M )
= ∑_c∈𝒞_nℙ_(M,W) ∼ P_M P_W( M̂≠ M | Bob receives c) 1/|𝒞_n|
≥ 1/2.
Suppose instead that G_W is not full rank. Then each (R'n)-bit sequence in the rowspace of G_W corresponds to multiple (i.e., redundant) keys in 𝒲. Hence, we can eliminate this redundancy by shortening the key w from R'n bits to rank(G_W) bits and replacing G_W with full rank matrix G'_W that has rowspace(G'_W) = rowspace(G_W) without changing the code 𝒞_n.
|
http://arxiv.org/abs/2307.07541v1 | 20230714142009 | ConTrack: Contextual Transformer for Device Tracking in X-ray | [
"Marc Demoustier",
"Yue Zhang",
"Venkatesh Narasimha Murthy",
"Florin C. Ghesu",
"Dorin Comaniciu"
] | cs.CV | [
"cs.CV"
] |
Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ, USA
M. Demoustier et al.
Contextual Transformer for Device Tracking in X-ray
ConTrack: Contextual Transformer for Device Tracking in X-ray
Marc Demoustier, Yue Zhang, Venkatesh Narasimha Murthy(), Florin C. Ghesu, and Dorin Comaniciu
August 12, 2023
==================================================================================================
Device tracking is an important prerequisite for guidance during endovascular procedures. Especially during cardiac interventions, detection and tracking of the guiding catheter tip in 2D fluoroscopic images is important for applications such as mapping vessels from angiography (high dose with contrast) to fluoroscopy (low dose without contrast). Tracking the catheter tip poses different challenges: the tip can be occluded by contrast during angiography or by interventional devices; and it is always in continuous movement due to the cardiac and respiratory motions. To overcome these challenges, we propose ConTrack, a transformer-based network that uses both spatial and temporal contextual information for accurate device detection and tracking in both X-ray fluoroscopy and angiography. The spatial information comes from the template frames and the segmentation module: the template frames define the surroundings of the device, whereas the segmentation module detects the entire device to bring more context for the tip prediction. Using multiple templates makes the model more robust to the change in appearance of the device when it is occluded by the contrast agent. The flow information computed on the segmented catheter mask between the current and the previous frame helps in further refining the prediction by compensating for the respiratory and cardiac motions. The experiments show that our method achieves 45% or higher accuracy in detection and tracking when compared to state-of-the-art tracking models.
§ INTRODUCTION
Tracking of interventional devices plays an important role in aiding surgeons during catheterized interventions such as percutaneous coronary interventions (PCI), cardiac electrophysiology (EP), or trans arterial chemoembolization (TACE). In cardiac image-guided interventions, surgeons can benefit from visual guidance provided by mapping vessel information from angiography (Fig. <ref>) to fluoroscopy (Fig. <ref>) <cit.> for which the catheter tip is used as an anchor point representing the root of the vessel tree structure. This visual feedback helps in reducing the contrast usage <cit.> for visualizing the vascular structures and it can also aid in effective placements of stents or balloons.
Recently, deep learning-based siamese networks have been proposed for medical device tracking <cit.>. These networks achieve high frame-rate tracking, but are limited in their online adaptability to changes in the target’s appearance as they only use spatial information. Cycle Ynet <cit.> uses the cycle consistency of a sequence and relies on a semi-supervised learning approach by performing forward and backward tracking. In practice, this method suffers from drifting for long sequences and cannot recover from misdetections because of the single template usage. The closest work related to ours is <cit.>, which uses a convolutional neural network (CNN) followed by particle filtering as a post-processing step. The drawback of this method is that it does not compensate for the cardiac and respiratory motions as there is no explicit motion model for capturing temporal information. A similar method adds a graph convolutional neural network for aggregating both spatial information and appearance features <cit.> to provide more accurate tracking, but its effectiveness is limited by its vulnerability to appearance changes and occlusion resulting from the underlying detection techniques. Optical flow based network architectures <cit.> utilize keypoint tracking throughout the entire sequence to estimate the motion of the whole image. However, such approaches are not adapted for tracking a single point, such as a catheter tip.
For general computer vision applications, trackers based on transformers <cit.> have achieved state-of-the-art performance <cit.>. Initially proposed for natural language processing (NLP), transformers learn the dependencies between elements in a sequence, making them intrinsically well suited to capturing global information. Thus, our proposed model consists of a transformer encoder that captures the underlying relationship between template and search images using self- and cross-attention, followed by multiple transformer decoders to accurately track the catheter tip.
To overcome the limitations of existing works, we propose a generic, end-to-end model for target object tracking with both spatial and temporal context. Multiple template images (containing the target) and a search image (where we would identify the target location, usually the current frame) are input to the system. The system first passes them through a feature encoding network to encode them into the same feature space. Next, the features of template and search are fused together by a fusion network, i.e., a vision transformer. The fusion model builds complete associations between the template feature and search feature and identifies the features of the highest association. The fused features are then used for target (catheter tip) and context prediction (catheter body). While this module learns to perform these two tasks together, spatial context information is offered implicitly to provide guidance to the target detection. In addition to the spatial context, the proposed framework also leverages the temporal context information which is generated using a motion flow network. This temporal information helps in further refining the target location.
Our main contributions are as follows:
1) The proposed network consists of a segmentation branch that provides spatial context for accurate tip prediction; 2) temporal information is provided by computing the optical flow between adjacent frames, which helps in refining the prediction; 3) we incorporate dynamic templates to make the model robust to appearance changes, along with the initial template frame, which helps in recovery in case of misdetection; 4) to the best of our knowledge, this is the first transformer-based tracker for real-time device tracking in medical applications; 5) we conduct numerical experiments and demonstrate the effectiveness of the proposed model in comparison to other state-of-the-art tracking models.
§ METHODOLOGY
Given a sequence of consecutive X-ray images {I_t}_t=0^n and an initial location of the target catheter tip x_0 = (u_0, v_0), our goal is to track the location of the target x_t = (u_t, v_t) at any time t, t>0. The proposed model framework is summarized in Fig. <ref>. It consists of two stages, target localization stage and motion refinement stage. First, given a selective set of template image patches and the search image, we leverage the CNN-transformer architecture to jointly localize the target and segment the neighboring context, i.e., body of the catheter. Next, we estimate the context motion via optical flow on the catheter body segmentation between neighboring frames and use this to refine the detected target location. We detail these two stages in the following subsections.
§.§ Target localization with multi-template feature fusion
To identify the target in the search frame, existing approaches build a correlation map between the template and search features. Limited by definition, the template is a single image, either static or from the last frame tracked result. A transformer naturally extends the bipartite relation between template and search images to complete feature associations which allow us to use multiple templates. This improves model robustness against suboptimal template selection which can be caused by target appearance changes or occlusion.
Feature fusion with multi-head attention. In the encoding stage, given a set of template image patches centered around the target {T_ti}_ti∈ℋ and the current frame I_s as the search image, we aim to determine the target location by fusing information from multiple templates. ℋ is the set containing historically selected frames for templates. This can be naturally accomplished by multi-head attention (MHA). Specifically, let us denote the ResNet encoder by θ. Given the feature map of the search image θ(I_s)∈ℝ^C× H_s× W_s and the feature maps of the templates {θ(T_ti)}, we use 1×1 convolutions to project and flatten them into d-dimensional query, key and value embeddings, q_s, k_s, v_s for the search image features and {q_ti}, {k_ti}, {v_ti} for the template features, respectively.
Attention(Q, K, V) := softmax (QK^T/√(d))V,
where Q=Concat(q_s, q_t1, q_t2,..., q_tn), K=Concat(k_s, k_t1, k_t2,..., k_tn), V=Concat(v_s, v_t1, v_t2,..., v_tn). The definition of MHA then follows <cit.>.
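A schematic PyTorch sketch of this fusion step (ours, not the authors' implementation; the layer sizes, the 1×1 projection module, and the use of torch.nn.MultiheadAttention are illustrative assumptions):

```python
import torch
import torch.nn as nn

class MultiTemplateFusion(nn.Module):
    # Projects search and template feature maps to d-dimensional tokens, concatenates
    # them, and mixes them with multi-head self-attention, so every token attends to
    # both the search features and all template features.
    def __init__(self, in_channels=256, d_model=256, num_heads=8):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, d_model, kernel_size=1)
        self.mha = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def tokens(self, feat):
        # (B, C, H, W) -> (B, H*W, d)
        return self.proj(feat).flatten(2).transpose(1, 2)

    def forward(self, search_feat, template_feats):
        toks = torch.cat([self.tokens(search_feat)]
                         + [self.tokens(t) for t in template_feats], dim=1)
        fused, _ = self.mha(toks, toks, toks)   # Q = K = V = concatenated tokens
        return fused

# Toy shapes: one search feature map and two template feature maps.
fusion = MultiTemplateFusion()
search = torch.randn(1, 256, 32, 32)
templates = [torch.randn(1, 256, 8, 8), torch.randn(1, 256, 8, 8)]
print(fusion(search, templates).shape)   # (1, 32*32 + 2*8*8, 256) tokens
```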
Joint target localization and context segmentation.
In the decoding stage, we follow <cit.> and adjust the transformer decoder to a multi-task setting. As the catheter tip represents a sparse object in the image, solely detecting it suffers from class imbalance issue. To guide the catheter tip tracking with spatial information, we incorporate additional contextual information by simultaneously segmenting the catheter body in the same frame. Specifically, two object queries (e_1, e_2) are employed in the decoder, where e_1 defines the position of the catheter tip, and e_2 defines the mask of the catheter body. As is illustrated in Fig. <ref> (b), we first calculate similarity scores between decoder and the encoder output via dot product. We then use element-wise product between the similarity scores and the encoder features to promote regions with high similarity. After reshaping the processed features to d× H_s× W_s, an encoder-decoder structured 6-layer FCN is attached to process the features to probability maps with the same size as the search image. A combination of the binary cross-entropy and the dice loss is then used,
L = λ^x_bce L_bce(G(x_i; μ, σ), x̂^s_i) + λ^x_dice L_dice(G(x_i; μ, σ), x̂^s_i) +
λ^m_bce L_bce(m_i, m̂_i) + λ^m_dice L_dice(m_i, m̂_i),
where x_i, m_i represent the ground truth annotations of the catheter tip and mask, and x̂^s_i, m̂_i are the corresponding predictions. Here we use superscript “s” to denote the predictions from this spatial stage. G(x_i; μ, σ) := exp(-‖x_i-μ‖^2/σ^2) is the smoothing function that transfers the dot location x_i to a probability map. λ^*_bce, λ^*_dice∈ℝ are hyperparameters that are empirically optimized.
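A possible PyTorch sketch of this loss (ours; it assumes the tip and mask predictions have already been passed through a sigmoid, and the weight values and the sigma of the Gaussian target are placeholders):

```python
import torch
import torch.nn.functional as F

def gaussian_heatmap(center_xy, height, width, sigma=3.0):
    # G(x; mu, sigma): Gaussian blob around the annotated tip location (x, y) in pixels.
    ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
    d2 = (xs - center_xy[0]) ** 2 + (ys - center_xy[1]) ** 2
    return torch.exp(-d2.float() / sigma ** 2)

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def total_loss(tip_pred, tip_target, mask_pred, mask_target,
               l_bce_x=1.0, l_dice_x=1.0, l_bce_m=1.0, l_dice_m=1.0):
    # Weighted sum of BCE and Dice terms for the tip heatmap and the body mask;
    # the weights play the role of the lambda hyperparameters above.
    return (l_bce_x * F.binary_cross_entropy(tip_pred, tip_target)
            + l_dice_x * dice_loss(tip_pred, tip_target)
            + l_bce_m * F.binary_cross_entropy(mask_pred, mask_target)
            + l_dice_m * dice_loss(mask_pred, mask_target))

# Example tip target for an annotation at pixel (120, 96) on a 512x512 heatmap:
# tip_target = gaussian_heatmap((120, 96), 512, 512)
```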
§.§ Localization refinement with context flow
In interventional procedures, one common challenge for visual tracking comes from occlusion. This can be caused by injected contrast medium (in the angiographic image) or interferring devices such as sternal wires, stent and additional guiding catheters. If the target is occluded in the search image, using only spatial information for localization is inadequate. To address this challenge, we impose a motion prior of the target to further refine the tracked location. As the target is a sparse object, this is done via optical flow estimation of the context.
Context flow estimation. Obtaining ground truth optical flow in real world data is a challenging task and may require additional hardware such as motion sensors. As such, training a model for optical flow estimation directly in the image space is difficult. Instead, we propose to estimate the flow in the segmentation space, i.e., on the predicted heatmaps of the catheter body between neighboring frames. We use the RAFT <cit.> model for this task. Specifically, given the predicted segmentation maps m_t-1 and m_t, we first use a 6-block ResNet encoder g_θ to extract the features g_θ(m_t-1), g_θ(m_t) ∈ R^H_f× W_f × D_f. Then we construct the correlation volume pyramid {C_i}_i=0^3, where
C_i = AvgPool(corr(g_θ(m_t-1), g_θ(m_t)), stride=2^i).
Here corr(g_θ(m_t-1), g_θ(m_t)) ∈ℝ^H_f× W_f × H_f × W_f stands for correlation evaluation:
corr(g_θ(m_t-1), g_θ(m_t))_ijkl = ∑_h=1^D_f g_θ(m_t-1)_ijh· g_θ(m_t)_klh,
which can be computed via matrix multiplication. Starting with an initial flow f_0=0, we follow the same model setup as <cit.> to recurrently refine the flow estimates to f_k = f_k-1 + △ f with a gated recurrent unit (GRU) and a delta flow prediction head of 2 convolutional layers. Given the tracked tip result from the previous frame x̂_t-1, we can then predict the new tip location at time t by warpping with the context flow x̂^f_t = f^k (x̂_t-1). Here we use sup-script “f” to denote the prediction by flow warpping.
We note here that since the segmentations of the catheter body are sparse objects compared to the entire image, computation of the correlation volume and subsequent updates can be restricted to a cropped sub-image which reduces computation cost and flow inference time. As the flow estimation is performed on the segmentation map, one can simply generate synthetic flows and warp them with the existing catheter body annotation to generate data for model training.
Refinement with combined spatial-temporal prediction. Finally, we generate a score map with combined information from the spatial localization stage and the temporal prediction by context flow,
S_t(u, v) =
(α + m̂_t^s(u,v)) (x̂_t^s(u, v) + x̂_t^f(u,v)) m̂_t^s(u,v) > 0,
x̂_̂t̂^̂ŝ(u, v) + x̂_t^f(u,v) otherwise.
Here α is a positive scalar. It helps the score map to promote coordinates that are activated jointly on both the spatial prediction x̂_t^s and the temporal prediction x̂_t^f. Finally, we forward the score map through a refinement module to finalize the prediction. The refinement module consists of a stack of 3 convolutional layers. Similar to the spatial localization stage, a combination of the binary cross-entropy and the dice loss is used as the final loss.
§ EXPERIMENTS AND RESULTS
Dataset. Our study uses an internal dataset of X-ray sequences captured during percutaneous coronary intervention procedures, featuring a field of view displaying the catheter within the patient's heart. The test dataset is divided into two primary categories: fluoroscopic and angiographic sequences. Fluoroscopic sequences are real-time videos of internal movements captured by low-dose X-rays without radiopaque substances, while angiographic sequences display blood vessels in real-time after the introduction of radiopaque substances.
We further separate the test dataset into a third category, “devices”, presenting a unique challenge for both fluoroscopic and angiographic sequences. In these cases, devices such as wires can obscure the catheter tip and have a similar appearance to the catheter, making tracking more challenging.
The dataset includes frames annotated with the coordinates of the catheter tip and, in some cases, a catheter body mask annotation. For training and validation, we use 2,314 sequences consisting of 198,993 frames, of which 44,957 are annotated. As the model training only requires image pairs, templates and search images, in order to reduce annotation effort, a nonadjacent subset of frames in each sequence is annotated. Their neighboring unannotated frames are also used to provide flow estimation, as is shown in Fig. <ref>(c). For testing, we use 219 sequences consisting of 17,988 frames, all annotated. The test dataset split is as follows: Fluoro (i.e., fluoroscopy), consisting of 94 sequences, 8,494 frames, from 82 patients; Angio (i.e., angiography), consisting of 101 sequences, 6,904 frames, from 81 patients; and devices, consisting of 24 sequences, 2,593 frames, from 10 patients. All frames undergo a preprocessing pipeline with resampling and padding to size of 512×512 with 0.308 mm isotropic pixel spacing.
Training. The template frame is of size 64×64. The search frame is of size 160×160. With this, the inference speed reaches 12 fps. We train our model for 300 epochs using a learning rate of 0.0001.
Comparison study. We compare the proposed approach with existing arts and summarize the results in Table. <ref>. The proposed approach achieves best performance in all testing dataset. In contrast to our method, SiameseRPN <cit.>, STARK <cit.> and MixFormer <cit.> focus on spatial localization of the target. Temporal information is being incorporated only with the setting of multi-templates thus target motion modeling is limited. While such approaches can achieve good performance with low median errors (∼2mm), the high 5-7mm standard deviations indicate the stability issues, especially in data with devices where occlusions are present. Cycle Ynet <cit.> uses cycle-consistency loss for motion learning directly on the target. As catheter tip is a sparse object, our approach leverages the motion information of the neighboring context which provide more robust guidance for target location refinement.
Overall, ConTrack outperforms all other methods, with a median tracking error of less than 1.08mm. Our model is particularly effective at tracking the catheter tip when other devices are in the field of view, where all other methods tend to underperform. Compared to Cycle Ynet on all test datasets, our model is 45% more accurate, with an average distance of less than 1mm between the prediction and ground truth. Further, we show the accuracy distributions in Fig. <ref>. It can be seen that the proposed approach shows superior performance to all other approaches in various percentiles.
Ablation study. We conduct an ablation study to investigate the effectiveness of different model components. Results are summarized in Table <ref>. Our ablation study revealed three key findings: 1) The addition of the mask segmentation branch improved tracking performance on Fluoro, where the device appearance remains consistent and there is no occlusion. However, when there are distractors, the results are less accurate; 2) The inclusion of a mask segmentation enabled the estimation of motion. The resulting flow helped to stabilize tracking in the presence of distractors; and 3) Multiple templates were employed to better handle changes in appearance. The combined model showed the best performance in dataset of angiography and data with devices, while yielding similar results in dataset of fluoroscopy.
Despite our framework's incorporation of various temporal and spatial contexts, catheter tracking remains a challenging task, particularly in cases where other devices or contrast agents obscure the catheter tip and create visual similarities with the catheter itself. Nonetheless, our results demonstrate the promise of ConTrack as a valuable tool for enhancing catheter tracking accuracy.
§ CONCLUSION
Device tracking is an important task in interventional procedures. In this paper, we propose a generic model framework, ConTrack, that leverages both spatial and temporal information of the surrounding context for accurate target localization and tracking in X-ray. Through extensive experimentation on large datasets, our approach demonstrated superior tracking performance, outperforming other state-of-the-art tracking models, especially in challenging scenarios where occlusions and distractors are present. Current approach has its limitations. Motion estimation is learned from neighboring two frames and thus target historical trajectory information is missing. Further, transformer-based model training require large amount of annotated data, which is challenging to collect in interventional applications. Finally, throughout the paper we follow established setups and focus on the development on the tracking model with manual initialization. In general, long-term visual tracking with automatic (re-)initialization is a challenging problem and require a system of approaches. A safe and automatic system of device and anatomy tracking is of great clinical relevance and will be an important future work for us.
§.§.§ Disclaimer
The concepts and information presented in this paper/presentation are based on research results that are not commercially available. Future commercial availability cannot be guaranteed.
splncs04
|
http://arxiv.org/abs/2307.03997v1 | 20230708154148 | Efficient Model-Free Exploration in Low-Rank MDPs | [
"Zakaria Mhammedi",
"Adam Block",
"Dylan J. Foster",
"Alexander Rakhlin"
] | cs.LG | [
"cs.LG",
"math.OC"
] |
Lightweight Improved Residual Network for Efficient Inverse Tone Mapping
Liqi Xue, Tianyi Xu, Yongbao Song, Yan Liu, Lei Zhang, Xiantong Zhen, and Jun Xu
This work was sponsored by the National Natural Science Foundation of China (No. 62002176, 62176068, and 12101334), CAAI-Huawei MindSpore Open Fund, the Natural Science Foundation of Tianjin (No. 21JCQNJC00030), and the Fundamental Research Funds for the Central Universities. Corresponding author: Xiantong Zhen ([email protected]) and Jun Xu ([email protected]).
Liqi Xue, Tianyi Xu, Yan Liu, and Jun Xu are with the School of Statistics and Data Science, Nankai University, Tianjin 300071, China.
Yongbao Song is with the School of Mathematical Science, Nankai University, Tianijn 300071, China.
Lei Zhang and Xiantong Zhen are with the Computer Science College, Guangdong University of Petrochemical Technology, Maoming 525000, China.
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
A major challenge in reinforcement learning is to develop practical, sample-efficient algorithms for exploration in high-dimensional domains where generalization and function approximation is required. Low-Rank Markov Decision Processes—where transition probabilities admit a low-rank factorization based on an unknown feature embedding—offer a simple, yet expressive framework for RL with function approximation, but existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions such as latent variable structure, access to model-based function approximation, or reachability. In this work, we propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs that is both computationally efficient and model-free, allowing for general function approximation and requiring no additional structural assumptions. Our algorithm, , uses the notion of a generalized optimal design for the feature embedding as an efficiently computable basis for exploration, performing efficient optimal design computation by interleaving representation learning and policy optimization. Our analysis—which is appealingly simple and modular—carefully combines several techniques, including a new reduction from optimal design computation to policy optimization based on the Frank-Wolfe method, and an improved analysis of a certain minimax representation learning objective found in prior work.
§ INTRODUCTION
In reinforcement learning and control, many of the most promising
application domains require the agent to navigate complex,
high-dimensional state and action spaces, where generalization and function approximation
is necessary. The last decade has
witnessed impressive empirical success in domains where
data are abundant <cit.>,
but when data are limited, ensuring efficient exploration in
large domains is a major research question. For
statistical efficiency, the foundations have recently begun to
take shape, with a line
of research providing structural conditions that facilitate
sample-efficient exploration, as well as fundamental limits
<cit.>. Computational
efficiency, however, remains a major challenge: outside of simple
settings <cit.>, existing algorithms
with provable sample complexity guarantees are computationally
inefficient, and typically require solving intractable non-convex
optimization problems
<cit.>. The
prospect of
developing practical algorithms for exploration in
high-dimensional state spaces that are both computationally and
statistically efficient raises three fundamental questions:
* What are the right computational primitives for exploration?
That is, how can one efficiently represent and compute exploratory policies that
allow the learner
to explore the state
space and gather useful data?
* How should one leverage function approximation—for
example, via
representation learning—to
discover such primitives in a computationally and statistically
efficient fashion?
* Given answers to the first two questions, how can one efficiently interleave function approximation and exploration to provide provably efficient algorithms?
In this paper, we investigate these questions through the
model <cit.>. In a , the state space is large
and potentially continuous, but the transition probabilities admit an
(unknown) low-rank factorization. Concretely, for a finite-horizon
with horizon H, the transition densities for layer
h∈H satisfy
T_h(x_h+1|x_h,a_h) = [h+1](x_h+1)^(x_h,a_h),
where (·,·)∈^d and
(·)∈^d are state-action and next-state
embeddings. The low-rank structure in (<ref>)
facilitates tractable exploration: if the embedding is known
to the learner, one can efficiently learn a near-optimal policy with sample
complexity polynomial in the feature dimension d, and independent of
the size of the state space <cit.>; in this regard,
can be thought of as a low-dimensional representation that enables
sample-efficient RL. Following
<cit.>, we consider the challenging setting in
which both and are unknown to the
learner. This formulation generalizes well-known frameworks such as
the Block MDP (BMDP) model <cit.>,
and necessitates the use of representation
learning: the agent must learn an embedding that approximates
as it explores the environment, and must use this learned embedding
to drive subsequent exploration. This form of function approximation allows
for great flexibility, as can be an arbitrary, nonlinear
function of the state; in practice, it is common to model as a neural net <cit.>.
The is perhaps the simplest MDP structure that demands
systematic exploration and nonlinear function approximation while allowing for a continuum of states, yet understanding of
efficient algorithm design for this model is surprisingly
limited. Existing algorithms suffer from at least one of the following drawbacks:
* Computational intractability <cit.>.
* Strong modeling assumptions (e.g., ability to model
[h+1](·), which facilitates application of model-based
RL techniques)
<cit.>;
in this work, we aim for model-free methods that only require
learning .
* Restrictive structural assumptions (e.g.,
non-negativity or latent variable
structure for the embeddings in (<ref>)) <cit.>.
At the root of these limitations is the complex interplay between
exploration and representation learning:
the agent must learn a high-quality representation to guide
in exploring
the state space, but learning such a representation requires gathering
diverse and informative data, which is difficult to acquire without
having already explored the state space to begin with. Overcoming
this challenge—particularly where computational efficiency is
concerned—requires (1) representation learning procedures that lead to sufficiently expressive
representations for downstream applications, (2) efficient exploration procedures that are
robust to errors in learned representations, and 3) understanding the
interaction between these procedures, which must be interleaved. In
this work, we propose an algorithm that addresses each of these challenges, as detailed below.
Contributions
We provide the first provably computationally efficient and model-free
algorithm for general Low-Rank MDPs.
Our algorithm, (“Volumetric Exploration”), uses
the notion of a generalized optimal design for the
embedding as an efficiently computable
basis for exploration, and combines this with a minimax representation
learning objective <cit.>. interleaves exploration with representation learning in a layer-wise
fashion, learning a new representation at each layer h using exploratory
data gathered at previous layers, then uses this representation to
facilitate computation of a collection of exploratory policies (a
policy cover), which act as an approximate optimal design
for the features at layer h+1, ensuring good coverage for subsequent
iterations. is simple and modular, and its analysis is
surprisingly compact given the greater generality compared to prior
work
<cit.>.
accommodates general-purpose function approximation
to learn the representation (e.g., neural
nets or other flexible classes), and is efficient whenever a certain minimax
representation learning objective <cit.> can be solved efficiently for the
function class of interest. Compared to efficient algorithms from
prior work, : (1) is model-free (i.e., only requires access to a function class
Φ capable of modeling , and does not need to model
), and (2) applies to general Low-Rank MDPs, removing
the need for strong assumptions such as reachability or non-negativity of the feature embeddings
(so-called latent variable structure); see
<Ref>).
As a secondary benefit, the algorithm is reward-free.
Our analysis carefully combines several new techniques, including (1) a new reduction from optimal design
computation to policy optimization based on the Frank-Wolfe method, and (2) a new analysis of a minimax representation learning
objective introduced in <cit.>,
which leads to faster rates and shows for
the first time that this objective can lead to meaningful guarantees in general Low-Rank
MDPs without latent variable structure.
The algorithm follows a simple and modular template. To highlight this, we use the same template to give a
variant of the algorithm, (<ref>), which
leverages barycentric spanners <cit.> for
exploration, and obtains a tighter
sample complexity bound under an additional reachability assumption; see <ref>.
Organization
sec:setting formally introduces the model and the online
reinforcement learning framework we consider. In
<ref>, we highlight challenges faced
by previous approaches, introduce our main algorithm, , and
show how it overcomes these challenges, and then present its main
sample complexity guarantee. We conclude
with discussion in <ref>.
§ PROBLEM SETTING
§.§ Model
We work in an episodic, finite-horizon reinforcement learning framework, where H∈ denotes the horizon. A <cit.> is a tuple =(,, ()_h∈ [H],([h])_h∈[H],) consisting of a state space , action space with =A, distribution over initial states ∈Δ(), and mappings :→^d and : ×→^d.[We emphasize that neither [h] nor is known to the agent, in contrast to the linear MDP setting <cit.>.]
Beginning with _1∼, an episode proceeds in H steps, where for each step h∈H, the state _h evolves as a function of the agent's action _h via
_h+1∼T_h(·|_h,_h),
where T_h is a probability transition kernel, which is assumed to factorize based on and . In detail, we assume that there exists a σ-finite measure ν on such that for all 1 ≤ h ≤ H-1, and for all x ∈ and a ∈, the function x' ↦(x')^⊤(x, a) is a probability density with respect to ν (i.e. the function is everywhere non-negative and integrates to 1 under ν). For any '⊆, the probability that _h+1∈' under _h+1∼T_h(·|x_h,a_h) is then assumed to follow the law
T_h('|x_h,a_h) = ∫_'(x)^⊤(x_h, a_h) ν(x).
For notational compactness, we assume (following, e.g., <cit.>) that the MDP is layered so that = _1∪…∪_H for _i ∩_j=∅ for all i≠ j, where _h⊆ is the subset of states in that are reachable at layer h∈[H]. This can be seen to hold without loss of generality (modulo dependence on H), by augmenting the state space to include the layer index.
Our formulation, in which the transition dynamics (<ref>) are stated with respect to a base measure ν, are a rigorous generalization of formulations found in previous works <cit.>, which tend to implicitly assume the state space is countable and avoid rigorously defining integrals. We adopt this more general formulation to emphasize the applicability our results to continuous domains. However, in the special case where state space is countable, choosing ν as the counting measure yields T_h('|x_h,a_h) = ∑_x∈'(x)^⊤(x_h, a_h), which is consistent with prior work.
Policies and occupancy measures
We define =*π:→Δ() as the set of all randomized, Markovian policies. For a policy π∈, we let ^π denote the law of (_1,_1),…,(_H,_H) under _h∼π(_h), and let ^π denote the corresponding expectation. For any '⊆_h, we let _h^π[']^π[_h ∈'] denote the marginal law of _h under π. For x∈_h, we define the occupancy measure d^π(x) _h^π/ν(x) as the density of ^π_h with respect to ν.
§.§ Online Reinforcement Learning and Reward-Free Exploration
We consider a standard online reinforcement learning framework where the Low-Rank MDP is unknown, and the learning agent interacts with it in episodes, where at each episode the agent executes a policy of the form π:→Δ() and observes the resulting trajectory (_1,_1),…,(_H,_H).
While the ultimate goal of reinforcement learning is to optimize a policy with respect to a possibly unknown reward function, here we focus on the problem of
reward-free exploration, which entails learning a collection of policies that almost optimally “covers” the state space, and can be used to efficiently optimize any downstream reward function <cit.>. To wit, we aim to construct an policy cover, a collection of policies that can reach any state with near-optimal probability.
For α,∈(0,1], a subset Ψ⊆ is an (α,)-policy cover for layer h if
max_π∈Ψ d^π(x)≥α·max_π' ∈ d^π'(x) for all x∈_h such that max_π'∈Π d^π'(x)≥·[h](x).
Informally, an (α,)-policy cover Ψ has the property that for every state x∈ that is reachable with probability at least ·[h](x), there exists a policy in Ψ that reaches it with probability at least α··[h](x). We show (<ref>) that given access to such a policy cover with α =(, d^-1 ,A^-1), it is possible to optimize any downstream reward function to () precision with polynomial sample complexity.
def:polcover101 generalizes the notion of approximate policy cover used by <cit.> for the Block MDP setting; as in that work, the definition allows one to sacrifice states for which the maximum occupancy is small, which is necessary in the absence of reachability-style assumptions <cit.>. Compared to <cit.>, we replace the Block MDP condition max_π∈ d^π(x) ≥ by max_π∈ d^π(x) ≥·[h](x). As our analysis shows, the latter condition turns out to be better suited to the ℓ_2 geometry of the model, and is sufficient for the purpose of optimizing downstream reward functions up to O() precision (<ref>).
Function approximation and desiderata
We do not assume that the true features ()_h∈[H] or the mappings ([h])_h∈[H] are known to the learner.
To provide sample-efficient learning guarantees we make use of function approximation as in prior work <cit.>, and assume access to a feature class Φ⊆{ϕ : ×→^d} that contains , for h∈[H-1].
[Realizability]
The feature class Φ⊆{ϕ : ×→^d} has ∈Φ for all h∈[H]. Moreover, for all ϕ∈Φ, x ∈, and a ∈, it holds that ϕ(x, a)≤ 1.
The class Φ may consist of linear functions, neural networks, or other standard models depending on the application, and reflects the learner's prior knowledge of the underlying MDP. We assume that Φ is finite to simplify presentation, but extension to infinite classes is straightforward, as our results only invoke finiteness through standard uniform convergence arguments.
Note that unlike model-based approaches <cit.>, we do not assume access to a class capable of realizing the features , and our algorithm does not attempt to learn these features; this is why we distinguish our results as model-free.
Beyond realizability, we assume (following <cit.>) for normalization that, for all h∈[H] and (x,a)∈_h×, *_h(x,a)≤1, and that for all g:_h→0,1,
*∫__h[h](x)g(x) ν(x)≤√(d).
For ∈(0,1), our goal is to learn an (α,)-policy cover with α= (,d^-1,A^-1) using
(d,A,H,logΦ,^-1)
episodes of interaction.
This guarantee scales with the dimension d of the feature map and the complexity logΦ of the feature class but, critically, does not depend on the size of the state space ; note that by <cit.>, dependence on both H and A= is necessary when is unknown. Given such a guarantee, we show in <Ref> that it is possible to optimize any downstream reward function to error with polynomial sample complexity.
Additional preliminaries
For any m,n ∈ℕ, we denote by [mn] the integer interval {m,…, n}. We also let [n] [1n]. For any sequence of objects o_1, o_2,…, we define o_m:n (o_i)_i∈[m n].
A partial policy is a policy defined over a contiguous subset of layers ℓr⊆H. We denote by ^ℓ:r{π⋃_h=ℓ^r _h →Δ()} the set of all partial policies over layers ℓ to r; note that ≡^1:H. For a policy π∈^ℓ:r and h∈ℓr, π(x_h) denotes the action distribution for the policy at layer h when x_h∈_h is the current state. For 1≤ t≤ h≤ H and any pair of partial policies π∈^1:t-1, π'∈^t:h, we define π∘_t π'∈^1:h as the partial policy given by (π∘_t π')(x_ℓ) = π(x_ℓ) for all ℓ<t and (π∘_t π')(x_ℓ) = π'(x_ℓ) for all ℓ∈ [t h]. We define π∘_t π' in the same fashion for π∈^1:ℓ for ℓ≥ t.
We use the _h∼π as shorthand to indicate that _h is drawn from the law ^π, and likewise for (_h,_h)∼π and so on. For a set of partial policies Ψ{π^(i) i ∈ [N]}, we define (Ψ) as the random partial policy obtained by sampling ∼([N]) and playing π^(). We define ∈ as the random policy that selects actions in uniformly at random at each layer.
We use *· to denote the Euclidean norm, *·_∞ to denote the supremum norm on functions, and let (r)⊆^d denote the Euclidean ball of radius r. We let _(r) be the Frobenius ball of radius r>0 in ^d× d. We denote by the set of positive semi-definite matrices in ^d× d, and by “≼” the corresponding partial order. For a vector v∈^d, we denote by v[i] its ith coordinate.
We refer to a scalar c>0 as an absolute constant to indicate that it is independent of all problem parameters and use (·) to denote a bound up to factors polylogarithmic in parameters appearing in the expression.
§ : ALGORITHM AND MAIN RESULTS
In this section, we present the algorithm. We begin by describing
challenges in deriving efficient, model-free algorithms using existing
approaches (<ref>). We then formally describe (<ref>) and build intuition as to how it is able to overcome these challenges, and finally state our main sample
complexity guarantee (<ref>).
§.§ Challenges and Related Work
Designing algorithms with provable guarantees in the Low-Rank MDP setting is challenging because of the complicated interplay between representation learning and exploration. Indeed, while there are many efficient algorithms for the so-called linear MDP setting where the feature maps ()_h∈[H] are known (removing the need for representation learning) <cit.>, these approaches do not readily generalize to accommodate unknown features. For Low-Rank MDPs, previous algorithms suffer from at least one of the following three drawbacks: (1) the algorithms are computationally inefficient; (2) the algorithms are model-based; or (3) the algorithms place strong assumptions on the MDP that are unlikely to hold in practice. To motivate the algorithm, we briefly survey these results, highlighting several key challenges in avoiding these pitfalls.
Let us first discuss the issue of computational efficiency. While there are a number of algorithms—all based on the principle of optimism in the face of uncertainty—that provide tight sample complexity guarantees for Low-Rank MDPs in reward-based <cit.> and reward-free <cit.> settings, these algorithms involve intractable optimization problems, and cannot be implemented efficiently even when the learner has access to an optimization oracle for the representation class Φ <cit.>. This intractability arises because these algorithms implement optimism via a “global” approach, in which the algorithm explores at each round by choosing the most optimistic value function in a certain version space of candidate value functions; optimizing over this version space is challenging, as it involves satisfying non-convex constraints with a complicated dependence on the learned representation that are coupled globally across layers h∈H.
To avoid the intractability of global optimism, several works have restricted attention to a simpler model-based setting. Here, in addition to assuming that the feature maps ()_h∈[H] are realizable with respect to Φ, one assumes access to a second feature class Υ capable of modeling the mappings ()_h∈[H]; this facilitates direct estimation of the transition probability kernel T_h(·|x,a). For the model-based setting, it is possible to efficiently implement certain “local” forms of optimism <cit.>, as well as certain non-optimistic exploration techniques based on policy covers <cit.>. For example, one can estimate features using maximum likelihood, and then apply efficient algorithms for the known-feature setting with the estimated features plugged-in <cit.>; here, a key insight is that model-based estimation leads to strong distribution transfer guarantees for the learned features. As a result, there are now a number of efficient model-based algorithms <cit.>, some of which have been practically implemented <cit.>. Unfortunately, model-based realizability is a restrictive assumption, and falls short of the model-free guarantees we aim for in this work; indeed, in general, one cannot hope to estimate the feature map without sample complexity scaling with the number of states.[For example, in the special case of the Block MDP setting <cit.>, model-based realizability entails modeling a certain emission process, which is not required by model-free approaches.]
When one moves from model-based learning to model-free learning, representation learning becomes substantially more challenging—both for optimistic and non-optimistic approaches. Here, a key challenge is to develop representation learning procedures that are (1) efficient, yet (2) provide meaningful guarantees when the learned features are used downstream for exploration.
To our knowledge, the only proposal for a representation learning procedure satisfying both desiderata comes from the work of <cit.>, who introduced a promising “minimax” representation learning objective (described in detail in the sequel; cf. <ref>), which <cit.> subsequently showed to have encouraging empirical performance. However, to provide guarantees for this objective, both works place substantial additional restrictions on the low-rank factorization. In particular, <cit.> make the so-called latent variable assumption <cit.>, which asserts that and are non-negative coordinate-wise, and <cit.> further restrict to the Block MDP model <cit.>.
Non-negativity is a substantial restriction, as the best non-negative factorization can have exponentially large dimension relative to the best unrestricted factorization <cit.>. Beyond non-negativity, many prior works <cit.> require reachability assumptions, the weakest of which asserts that there exists η>0 such that for all x∈_h,
max_π∈ d^π(x)≥η·[h](x).
These works give sample complexity bounds that scale polynomially in η^-1, and do not give any guarantee when η=0; see <ref> for further background.[When specialized to tabular MDPs, reachability asserts that for each state x∈, there exists a policy that reaches x with probability at least η.] The source of both restrictions is the problem of how to quantify how close a learned representation ϕ is to the ground truth , which depends strongly on the downstream exploration strategy. In what follows, we show that with the right exploration strategy, this challenge can be ameliorated, but prior to our work it was unclear whether the minimax objective could lead to meaningful guarantees in the absence of non-negativity.
§.§ The Algorithm
Our algorithm, , is presented in <ref>. The
algorithm proceeds by building a policy cover layer-by-layer in an
inductive fashion. To describe the algorithm in detail, we slightly generalize <ref>.
For α,∈(0,1], a distribution P∈Δ() is an (α,)-randomized policy cover for layer h if
_π∼ P*d^π(x)≥α·max_π' ∈ d^π'(x) for all x∈_h such that max_π'∈Π d^π'(x)≥·[h](x).
If P is a randomized policy cover, then the set Ψ(P) is a policy
cover in the sense of <ref>, but is most
naturally described in terms of randomized policy covers, which allow
for non-uniform mixtures of policies. Critically, the randomized
policy covers used in have support size polynomial in d and H,
which allows them to be computed and represented efficiently.
For each layer h≥2, uses a randomized policy cover
Ph built at a previous iteration to perform K steps of
interleaved representation learning and exploration. Starting from
h,0Ph, for each step k∈K, first
invokes a subroutine,
(<ref>; deferred to <ref>) with the
randomized policy cover h,k-1 to produce a
feature map ϕh,k that approximates . Using
this feature map, the algorithm invokes a second subroutine,
(<ref> in <ref>) to produce a (sparsely
supported) policy distribution
Ph,k∈Δ() that acts as a generalized optimal design for the
estimated feature map ϕh,k, ensuring maximal coverage in
a certain sense; given this distribution, the algorithm defines
h,k=1/2k∑_ℓ=1^kPh,k +
1/2Ph and proceeds to step k+1. Once this process
completes, a new randomized policy cover for layer h+2 is formed via Ph+2=1/K∑_k=1^K∑_π∈(Ph,k)Ph,k(π)·_π∘_h+1. To
invoke the
subroutine, makes use of additional subroutines for policy optimization
(; <ref> in
<ref>) and estimation of certain
matrix-valued functionals (; <ref>
in <ref>). The use of multiple
(K>1) inner loop iterations within this scheme is
necessary to handle certain distribution shift
issues, which we will elaborate on momentarily.
We now describe
each component of the algorithm in detail,
highlighting how they allow us to overcome the
challenges in the prequel.
Generalized optimal design
At the heart of is the notion of a generalized
optimal design as an efficient basis for exploration. We
begin by defining a generalized optimal design for an abstract of
positive-semidefinite matrices ⊆.
Given a set ⊂ and parameters γ∈(0,1/d),
C≥1, we say that a distribution P∈Δ() is a
(C,γ)-generalized optimal design for if the matrix
M_PγI_d+_W∼P*W satisfies
sup_W∈(M_P^-1W) ≤ (1+C)d.
This definition generalizes the classical notion of G-optimal
design <cit.>, which corresponds to the
special case in which each W∈ is a rank-one matrix, and where γ=C=0.
The utility of generalized optimal designs for reward-free exploration is
highlighted in the following lemma.
Let h∈[H]. If a distribution P∈Δ() over policies is a
(C,γ)-generalized optimal design for the set
_h{^π[
(_h, _h)(_h, _h) ^]|π∈},
then the distribution
P'=∑_π∈(P)P(π)·_π∘_h+1 is an
(α,η)-randomized policy cover for layer h+2 with αη/2 d A and η 4 d √((1+C)γ).
<Ref>, proven in <Ref>, shows that to compute a policy cover for layer h+2, it suffices to compute a distribution over policies that acts as a generalized optimal design for the set _h{^π[
(_h, _h) (_h, _h) ^]|π∈}⊆^d. Of course, even if is known, this observation is only useful if we
can compute a spanner without explicitly enumerating over the set
, since our goal is to develop an efficient
algorithm. In what follows, we will show:
* By applying the Frank-Wolfe method
<cit.> to a certain determinantal/volumetric objective,
it holds that for any ϕ∈Φ, a sparsely supported
generalized optimal design for the set {^π[
ϕ(_h, _h)ϕ(_h, _h) ^ ]|π∈} can be computed
efficiently whenever, for any M∈ with
*M_≤1, one can (approximately) solve policy optimization problems of the form
_π∈^π*ϕ(_h,_h)Mϕ(_h,_h)^.
* Given access to policy covers P1,…,Ph for layers 1 to h, one can efficiently solve the optimization problem in (<ref>) by
appealing to the algorithm for policy
optimization (<ref>).
To handle the fact that is unknown, <ref>
uses the approach above to compute a generalized optimal design for the set {^π[
ϕh(_h, _h) ]|π∈}, where
ϕh∈Φ is a learned feature map. In what follows, we
first give a detailed overview of our optimal design computation approach, then show
how applies this approach to a feature map estimated via
representation learning.
Prior work <cit.> makes use
of elliptic planning objectives similar to the notion of optimal
design in
<ref>. An
important difference in our approach, which follows from the explicit
connection to optimal design, is that the right-hand side in
(<ref>) is bounded by an absolute (problem-dependent)
constant (d), and does not scale inversely proportional to the
target precision >0 or any sort of reachability parameter. This
property is essential to our reachability-free analysis.
Optimal design computation via approximate linear optimization
To describe generalized optimal design in , we take a brief detour
and consider an abstract approach to optimal design computation, which generalizes our problem. Suppose that we wish
to compute a spanner for an implicitly specified set of matrices
=*W^z_z∈⊆ indexed by an abstract set
. The set (which will be set to when we apply this
framework to RL) may be exponentially large, and cannot be efficiently enumerated. In addition, given z∈, we
cannot explicitly compute W^z, and have to settle for a noisy approximation.
To allow for optimal design computation, we assume access to two
oracles for the set , a linear optimization oracle :∩_(1)→ and
an index-to-matrix oracle :Δ()→. We assume
that for some _, _>0:
* For all M∈ with *M_=1, the output
ẑ_M(M) satisfies
(MW^ẑ_M) ≥sup_z∈(MW^z) - _.
* For all P∈Δ(), the output W_P(P)
satisfies
W_P - _z∼P*W^z_≤_.
Given access to oracles and with _=(γ) and _=(γ^2), the algorithm
(<ref>) computes a (C,γ)-approximate spanner for
using *γ^-2C^-2 d^-1ln (1 + 1/γ)
oracle calls. can be viewed as an application of the Frank-Wolfe
algorithm <cit.> for first-order optimization to
maximize the determinantal/volumetric objective
F(P) log(γ I_d + _z∼ P[W^z]),
which is inspired by the well-known duality of G-optimal and D-optimal
design <cit.>. Frank-Wolfe is well-suited to
our setting because it produces a sparsely supported
distribution P∈Δ(), with the sparsity bounded by the
number of iterations (d,γ^-1) and independent of
. This feature is critical for computational efficiency
when applied to RL, as the set = is too large for one to even
represent a general distribution P∈Δ() efficiently.
Representation learning
Ideally, we would
like to use to construct a generalized optimal design for the set {^π[_h(_h, _h) _h(_h, _h)^]|π∈} with =.
Because we do not have access to _h, each inner loop iteration
k∈K in <ref> instead applies with {^π[ϕh,k(_h, _h)]|π∈},
where ϕh,k is a learned
representation. We now describe how the feature map
ϕh,k is learned, then show how to use these learned features to
efficiently implement the oracles (·) and (·).
To learn representations for layer h, we use the algorithm (<ref>),
which was originally introduced in
<cit.>. When invoked in each inner loop
iteration k∈K via ϕh,k = (h, ,Φ,
Ph,k-1,n_) (<ref>), the
algorithm gathers a
collection of triples (_h, _h, _h+1) by rolling in to
_h with a policy sampled from the randomized policy cover h,k-1 and selecting _h
uniformly at random, then observing the resulting state _h+1. Using this dataset, the algorithm
solves a sequence of adversarial training sub-problems
(<ref> of <ref>) which involve
the feature class Φ and an auxiliary discriminator class :
→. As we discuss in detail in the sequel, these
sub-problems, described in (<ref>),
are amenable to standard gradient-based training methods. The
sub-problems are designed to approximate the following “idealized”
min-max-min representation learning objective:
ϕh,k∈_ϕ∈Φsup_f ∈inf_w∈(2d^1/2)_π∼h,k-1^π∘_h[(
ϕ(_h, _h)w - *f(_h+1)|_h,_h)^2
].
The intuition for
this objective lies in the fact that in a Low-Rank MDP, for any function f:→, the mapping (x,a)↦[ f(_h+1)
|_h=x, _h=a ] is linear in
_h(x, a). Thus, if is sufficiently expressive, we may
hope that any ϕh,k which solves (<ref>) will approximate
well. We adopt the simple discriminator class neurips= {. x ↦max_a∈θϕ(x, a) | θ∈(1), ϕ∈Φ}.
= {. x ↦max_a∈θϕ(x, a) | θ∈(1), ϕ∈Φ}.
We show that solving
(<ref>) with this choice for , which is slightly
simpler than those considered in <cit.>, yields an approximation
guarantee for ϕh,k that is suitable for downstream use in
optimal design computation.
To facilitate an analysis of that does not require reachability assumptions, we use
slightly different parameter values for than in
<cit.>, and provide a tighter sample
complexity bound (<ref>) which may be of independent interest.
In more detail, prior work shows that the algorithm solves
a variant of (<ref>) with
w∈(d^1/2·(^-1)), where >0 is the desired
bound on mean-squared error. Due to the polynomial dependence on
^-1, such a guarantee would lead to vacuous
guarantees when invoked within our analysis of . Our improved
analysis of , which is based on a determinantal potential
argument, shows that w∈((d)) suffices. A secondary benefit of our improved bound is a faster rate with
respect to the number of trajectories.
Putting everything together Having learned ϕh,k
using , each inner loop iteration k∈K of applies with {^π[ϕh,k(_h, _h) ϕh,k(_h, _h)^]|π∈},
=, C = 2, and γ chosen as a function of the
target accuracy; that is, we use the learned
representation ϕh,k as a plug-in estimate for the true representation
.[Though the policies produced by the
algorithm may not necessarily induce an optimal design for _h= {^π[
(_h, _h) ]|π∈} (this would
require a stronger coordinate-wise approximation guarantee, does not
necessarily follow from <ref>), our analysis shows that they still suffice to build a policy cover for layer h+2.]
With this choice, implementing
entails (approximately) solving
_π∈^π[ ϕh,k(_h, _h)^M ϕh,k(_h, _h)]
for a given matrix M∈∩_(1), and implementing entails estimating
the second moment matrix
^π[ϕh,k(_h, _h) ϕh,k(_h, _h)^]
for a given policy π∈.
We instantiate (π) as the Monte Carlo algorithm
(<Ref>), which simply samples trajectories according to π and returns the sample average of ϕh,k(_h, _h) ϕh,k(_h, _h)^.
To
implement (θ), we appeal to (<ref>). , given an arbitrary reward function r_1:h:×→ and a function class ⊆{g:
×→} capable of realizing all possible value
functions induced by these rewards, can use the policy covers
P1,…,Ph to efficiently compute a policy = (h,r_1:h, ,
P1:h, n) that approximately solves neurips_π∈^π[∑_t=1^h r_t(_t,_t)],
_π∈^π[∑_t=1^h r_t(_t,_t)],
and does so using polynomially many episodes; see <ref> for
details and formal guarantees.[This is the main
place where the analysis uses the inductive hypothesis
that P1:h are policy covers.] Thus, implementing (M)
for M∈∩_(1) is as
simple as invoking with the rewards neurips
r_h(x, a; M) = ϕh,k(x,a)^⊤ Mϕh,k(x,a), and r_t(x,a; θ) = 0, for t ≠ h.
r_t(x,a;M){[ ϕh,k(x,a)^⊤ Mϕh,k(x,a), for
t=h,; 0, otherwise. ].
Addressing distribution shift
With this, we have all the
ingredients needed for optimal design computation, and can prove that
Ph,k is an approximate optimal design with respect to
ϕh,k. However, we not quite done, due to the issue of
distribution shift, which motivates the use of multiple (K>1)
inner loop iterations within . In particular, while the
objective in (<ref>) ensures that ϕh,k approximates
well under Ph,k-1, the representations may be far
away from one another under the new distribution Ph,k produced
when we invoke with ϕh,k.[If Ph were
an exact (i.e., (α,0)-) policy cover, this would be a
non-issue. However with an approximate policy cover, which is all that
one can for in the absence of reachability, distribution shift must
be addressed.] To address this issue, we use a potential argument <cit.>
to show that as long as K is chosen to be sufficiently large, there exists
k^⋆∈*K such that ϕh,k^⋆
(approximately) enjoys a stronger on-policy approximation guarantee:
ϕh,k^⋆∈_ϕ∈Φsup_f ∈inf_w∈(2d^1/2)_π∼h,k^⋆^π∘_h[(
ϕ(_h, _h)w - *f(_h+1)|_h,_h)^2
].
This suffices to prove that the distribution Ph+2 constructed
in is an approximate policy cover
for layer h+2.
§.§ Main Guarantee for
The following result is the main sample complexity guarantee for (<ref>).
Let δ, η∈(0,1), and suppose that realizability holds (<ref>). If = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the distributions P1:H
produced by (Φ, η, , δ) are a
(η^3/· d^6 A^2,)-randomized policy cover with probability at least
1-δ, where 4 H d^3/2η.
The total number of episodes used by is at most:
(A^4 d^20 H^17 (d + ln (|Φ|/δ))· 1/^14).
The next corollary follows immediately from the definition of a policy cover (<ref>).
Consider the setting of <ref> and let P1:H be the distributions
produced by . Then, under the same success event as in <ref>, the collection of policies Ψ1,…, ΨH, where Ψh Ph for each h∈[H], are a (η^3/· d^6 A^2,)-policy cover in the sense of <ref>, where η/(4 H d^3/2).
<ref> is the first provable, model-free sample complexity
guarantee for general Low-Rank MDPs that is attained by an
efficient algorithm. Prior to our work, all efficient model-free algorithms required non-negative features (latent
variable structure), reachability, or stronger assumptions
<cit.>; see <ref>.
While our guarantee is polynomial in
all relevant problem parameters, improving the dependence further
(e.g., to match that of the best known inefficient algorithms) is
an interesting direction for future research.
Application to reward-based RL
By using the policy cover produced by within (<ref>),
we can optimize any downstream reward function to error using
(d,A,H,logΦ,^-1) episodes. See
<ref> for details. A technical novelty here compared to, e.g. <cit.> (who also used and policy covers to optimize downstream reward functions), is in proving that our notion of approximate policy cover (<ref>) is sufficient for downstream reward optimization in s.
Efficiency and practicality is simple and practical. Defining _(ϕ, w, f) ∑_(x, a,
x')∈ (ϕ(x,a)^⊤ w - f(x'))^2, where
is a dataset consisting of (_h,_h,_h,_h+1)
tuples, the algorithm is provably efficient whenever the adversarial
objective
ft∈_f∈max_ϕ̃∈Φ{min_w∈(3d^3/2)_(ϕt, w, f) - min_w̃∈(2d^1/2)_(ϕ̃, w̃, f) },
in <ref> of (<ref>),
can be implemented efficiently. This objective was also assumed to be efficiently solvable in
<cit.> and was empirically shown to
be practical in <cit.>.[In
addition to <ref>, also solves the
objective
ϕt+1∈_ϕ∈Φmin_(w_1,…,w_t)∈(2√(d))^t∑_ℓ=1^t _(ϕ,w_ℓ,fℓ)
in <ref> of <ref>. Compared the
adversarial objective in (<ref>), this objective is
simpler, and only
requires minimization.] Note that both of the objective
is amenable to standard gradient-based optimization techniques, and allows
the class to be over-parameterized. While a detailed
experimental evaluation is outside of the scope of this paper, we are
optimistic about the empirical performance of the algorithm in light
of the encouraging results based on the same objective in
<cit.>.
Outside of representation learning, the only computational overhead in is
in the subroutine, which has runtime polynomial in all parameters. Indeed,
requires only polynomially many calls to the linear optimization oracle, instantiated as , which is
efficient whenever standard least-squares regression problems based on
the class Φ can be solved efficiently, analogous to
<cit.>. The
distributions Ph,k returned by each invocation of have
support size (d,^-1), and hence can be represented with
polynomial space memory; it follows that all of policy
distributions maintained throughout the execution of <ref> have
polynomial support size as well.
Under the setting of <ref>, if = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the distributions P1:H
produced by (Φ, η, , δ) are such that max_h∈[H]| Ph| ≤· d^7/η^4.
Analysis and proof techniques
A significant challenge overcome by the proof of <ref> (given
in <ref>) is to show that
—despite being non-optimistic—succeeds in the absence of
reachability-type assumptions. To achieve this, we use a novel
adaptation of the extended
MDP technique introduced in the recent work
<cit.> in the context of Block MDPs. This
technique allows us to analyze in a modified version of the
true MDP which emulates certain properties of reachability; see
<ref> for details. Within the extended MDP, the crux of
the proof is to show that the
representation learning guarantee in (<ref>) is strong
enough to ensure that the downstream optimal design computation in
succeeds. It is straightforward to show that optimal design
computation would succeeds if we have access to an estimated
representation that ϕh,k that approximates
point-wise (i.e., uniformly for all (x,a) pairs), but the key challenge is that the guarantee in
(<ref>) only holds on average under the roll-in
distribution h,k-1. Prior works that make use of the same representation
learning objective ( <cit.> and
<cit.>) make use of additional structural assumptions
(non-negativity of the factorization for , and Block MDP
structure for ) to facilitate change-of-measure arguments
that address this issue. We avoid such assumptions by inductively appealing to
the optimal design objective in (<ref>), which provides a
stronger coverage guarantee compared to elliptic objectives from prior
work; see <ref>. While the high-level schema for the
proof is quite simple, there are
several subtle technical challenges that arise in analyzing in the
extended MDP, including:
* Showing that succeeds when invoked within , despite
the lack of uniform coverage.
* Proving that gives a sufficiently strong
approximation guarantee even when the weights used by the algorithm
are kept uniformly bounded throughout training; see <ref>.
* Addressing distribution shift that occurs when the updates policies using the
representations produced by .
See <ref> for
details.
§.§ Stronger Guarantees under Reachability:
The algorithm is appealing in its simplicity and
modularity. To highlight this, we use the same template to give a variant of the
algorithm, (<ref>), which obtains a tighter
sample complexity bound whenever a reachability assumption is satisfied.
Concretely, we make the following assumption.
[η-reachability]
For any h∈[H] and x∈_h,
max_π∈ d^π(x)≥η·[h](x).
<ref> generalizes and subsumes all
previous reachability-like conditions of which we are aware
<cit.>. Notably,
reachability is implied by the notion of feature
coverage <cit.> (used in the context of
transfer learning in Low-Rank MDPs), which asserts that
sup_π∈λ_min(^π[(_h,_h)(_h,_h)^⊤])
≥η, for some η>0. It is also implied by
explorability <cit.>, which is
similar to feature coverage, but involves the first moments of
. Our reachability assumption is also weaker than
the notion used in <cit.>
under the latent variable model, and generalizes the
notions of reachability for BMDPs <cit.>. See <ref> for details, as well as an exponential separation between <ref> and analogous assumptions in <cit.>.
follows the same template as , with two
differences. First, we remove the inner loop (which corresponds to
setting K=1 in ). Second, and more importantly, the subroutine is replaced
with a new subroutine, . Instead of computing an optimal
design, computes an alternative basis for exploration known as
a barycentric spanner <cit.>. is
an error-tolerant variant of a classical spanner computation
algorithm of <cit.>, and may be of independent
interest; we use the algorithm to compute a spanner for learned feature maps via reduction to policy
optimization. The sample complexity of improves upon ,
but its analysis leverages reachability. See <ref> for a detailed overview.
The main sample complexity guarantee for is as follows.
Let δ∈(0,1) be given, and suppose that realizability holds (<ref>) and that reachability (<ref>) is satisfied with parameter η>0. If = η/36 d^5/2 and = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the policies Ψ1:H
produced by (Φ, , , δ) are a
(1/4 Ad,0)-policy cover with probability at least
1-δ.
The total number of episodes used by is at most:
( A^4 d^9 H^4 (d + ln (|Φ|/δ))· 1/η^2).
The sample complexity bound in <ref> scales
with the reachability parameter η as η^-2, which
significantly improves upon the dependence on the accuracy parameter
in <ref>. The dependence on the
dimension d is also tighter. We
find this result to be notable in its own right, as even in the
presence of similar reachability assumptions, all efficient model-free
algorithms in prior work required non-negative features (latent
variable structure) or stronger assumptions
<cit.>.
A secondary benefit of lies in memory: The algorithm
maintains policy covers with support size (d,^-1), while
the policy covers used in have support size (d),
which is independent of the target accuracy.
The proof of <ref> is similar to that of
<ref>, but is somewhat simpler, and does not require
appealing to the extended MDP analysis of
<cit.>. A useful feature of our proof is to show that the notion of
reachability in <ref>, which generalizes and
extends all previous reachability conditions in the and Block
MDP literature <cit.>,
is sufficient to build an exact (i.e., (α,0)-) policy cover. We
anticipate that this observation will find broader use.
§ DISCUSSION
Our work shows for the first time how to achieve efficient, model-free
exploration in general Low-Rank MDPs. On the technical side, our
results leave open a number of interesting technical questions,
including (1) regret (as opposed to PAC) guarantees, and (2) matching the minimax rate achieved by
inefficient algorithms using an efficient
algorithm. empirical evaluation?
More broadly, our work highlights the power of non-optimistic
algorithms that explore by building policy covers. In light of this, perhaps the most interesting question
is how to extend our techniques to more general function approximation
settings beyond the Low-Rank MDP model; this will likely entail
replacing the notion of optimal design with a more general form of
exploration basis.
§.§ Acknowledgements
We thank Noah Golowich, Dhruv Rohatgi, and Ayush Sekhari for
several helpful discussions. ZM and AR acknowledge support from the ONR through awards N00014-20-1-2336 and N00014-20-1-2394, and ARO through award W911NF-21-1-0328. AB acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
§ ADDITIONAL RELATED WORK
In thi section, we discuss relevant related work not already covered.
Block MDPs
A particularly well-studied special case low-rank MDPs with the latent variable assumed in <cit.> (defined in <Ref>) is the Block MDP (BMDP) model <cit.>. For this setting, <cit.> provide algorithms that conduct exploration in a provably oracle-efficient manner under a reachability assumption. This reachability assumption was removed by subsequent work of <cit.> (with a suboptimal rate) and <cit.> (with optimal error dependence). These works are tailored to the BMDP model, and it is unclear whether it is possible to extend them to general low-rank MDPs.
Barycentric spanners
<cit.> consider a variant of the framework in which we are given a class Υ that realizes the
next-state feature map , but do not have access to a class
Φ for the feature map , which is unknown. Their
algorithm, like , is based on barycentric spanners, though the algorithm
design considerations and analysis are significantly
different. Notably, their algorithm is not computationally efficient,
and their analysis takes advantage of the fact that realizability of
facilitates estimation of the occupancies d^π(·)_π∈ in ℓ_1-error. Barycentric spanners were also in the work of <cit.> for reinforcement learning in Partially Observable MDPs (POMDPs). Their analysis is substantially different from ours, and their algorithm appeals to the barycentric spanner computation approach in <cit.> in an off-the-shelf fashion.
Frank-Wolfe method in RL
Similar to our work, <cit.> make use of the Frank-Wolfe method for policy cover computation, but their algorithm is tailored to the known-feature (linear MDP) framework, and the design and analysis are quite different.
PART:
Analysis of
§ ORGANIZATION OF PART:ANALYSISVOX
<ref> of the appendix contains the proof of our main
result, <ref>, as well as other proofs. This
section is organized as follows:
* <ref> contains the analysis of <ref>.
* <ref>, <ref>, and <ref> contain results we rely on in the proof of <ref>. In particular, <ref>, <ref>, and <ref> provide generic guarantees for the subroutines (<ref>), (<ref>), and (<ref>) of (<ref>), respectively.
* In <ref>, we show how an approximate policy cover can be used to optimize downstream reward functions.
* In <ref>, we present some useful structural results concerning the extended MDP introduced in <ref>.
* Finally, <ref> contains a set of helper
results used throughout the analysis.
§ ANALYSIS: PROOF OF THM:VOXMAIN
In this section, we present the full proof of the main guarantee for (<ref>). In <ref>, we define key concepts needed for the analysis. <ref>, <ref>, and <ref> give guarantees for (<ref>), (<ref>), and (<ref>) as instantiated within . <ref> gives guarantees for the subroutine within . We then combine these results in <ref> to prove <ref>.
§.§ Extended Low-Rank MDP and Truncated Policies
In this section, we present two tools, the extended MDP and a truncated policy class, that will be used throughout the analysis of , and facilitate an analysis that does not require reachability assumptions. The definitions we give generalize analogous definitions given in <cit.> for the special case of Block MDPs, though the generalization to the low-rank MDP setting is non-trivial.
Extended MDP As in <cit.>, we define the extended MDP to be the result of augmenting the true MDP by adding a set of H terminal states _1:H, and a terminal action with the property that taking from any state at layer h∈ [H-1] leads to _h+1 deterministically, and any action in ∪{} at latent state _h transitions to _h+1 deterministically. To express as a low-rank MDP, we increase the feature dimension by 1. First, for any ϕ∈Φ, we define the extension
ϕ̅(x,a) = {[ [ϕ(x,a)^⊤, 0]^⊤∈^d+1, ∀ a∈, ∀ x∈,; e_d+1∈^d+1, a = , ∀ x∈,; e_d+1∈^d+1, ∀ a∈, x ∈{_1,…, _H}, ]. with ϕ̅^⋆ denoting the extension of ϕ^⋆. We similarly define[h](x) = {[ [[h](x)^⊤, 0]^⊤∈^d+1, ∀ x∈,; e_d+1∈^d+1, x=_h, ].
for h∈[H]. With these definitions, we formally define =(∪{_1,⋯, _H}, ∪{}, ρ, ([h])_h∈[H], (ϕ̅_h^⋆)_h∈[H]) as the extended MDP, which one can verify is indeed a low-rank MDP in d+1 dimensions.
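To make the construction above concrete, the following Python sketch builds the extended feature map ϕ̅ from a given feature map ϕ. It is purely illustrative and not part of the formal development: the terminal-state and terminal-action markers, as well as the toy two-dimensional feature map, are hypothetical stand-ins.

```python
import numpy as np

TERMINAL_ACTION = "terminal_action"                        # hypothetical marker for the terminal action
TERMINAL_STATES = {("terminal", h) for h in range(1, 11)}  # hypothetical terminal states t_1, ..., t_H

def extend_feature_map(phi, d):
    """Lift a d-dimensional feature map phi(x, a) to the (d+1)-dimensional extended map:
    ordinary state-action pairs get a trailing zero coordinate, while the terminal action
    (from any state) and any terminal state map to the last standard basis vector e_{d+1}."""
    e_last = np.zeros(d + 1)
    e_last[d] = 1.0

    def phi_bar(x, a):
        if a == TERMINAL_ACTION or x in TERMINAL_STATES:
            return e_last
        return np.concatenate([phi(x, a), [0.0]])

    return phi_bar

# toy usage with a hypothetical 2-dimensional feature map of unit norm
phi = lambda x, a: np.array([np.cos(x + a), np.sin(x + a)])
phi_bar = extend_feature_map(phi, d=2)
print(phi_bar(0.3, 1))                   # [phi(x, a), 0]
print(phi_bar(0.3, TERMINAL_ACTION))     # e_3
print(phi_bar(("terminal", 2), 1))       # e_3
```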
We let be the set of all randomized Markov policies in , with the convention that π(_h)= for all π∈ and h∈ [H]. For any policy π→, we extend it to ∪{_1, …, _H} by taking π(_h)= for all h∈[H]. Moving forward, for any h∈[H], we let _h _h ∪{_h}, and define =∪.
We denote expectations and probability laws for trajectories in by and , respectively, and for any '⊆_h, we let _h^π[']^π[_h ∈'] denote the induced law of _h under a policy π in . Furthermore, for any x∈_h, we define the occupancy measure ^π(x) _h^π/ν̅(x) as the density of ^π_h with respect to ν̅= ν +∑_h∈[H]𝕀__h.
We define Φ be the set of all extended feature maps (as in (<ref>)) for ϕ∈Φ. In some proofs, it will be convenient to work with the restriction of the extended feature maps to their first d coordinates; for any ϕ∈Φ, we define
ϕ̃(·,·) (ϕ̅(·,·)[1], …, ϕ̅(·,·)[d])^⊤.
Finally, we extend the notion of a policy cover to the extended MDP as follows.
For α∈(0,1], η≥ 0, a distribution P∈Δ() is a (α, η)-randomized policy cover relative to Π⊆ for layer h in if
_π∼ P [^π(x)] ≥α·max_π'∈Π^π'(x), for all x∈_h such that max_π'∈Π^π'(x)≥η·[h](x).
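As a finite-state illustration of this definition (it plays no role in the analysis), the following sketch checks the (α, η)-randomized-policy-cover condition given tabulated occupancies and a stand-in for the norm of the corresponding feature vectors; all numerical inputs are synthetic.

```python
import numpy as np

def is_randomized_policy_cover(P, occupancies, mu_norm, alpha, eta):
    """Check the (alpha, eta)-randomized-policy-cover condition on a finite state space.
    occupancies[i, x] is the occupancy of the i-th candidate policy at state x, P[i] its
    mixture weight, and mu_norm[x] stands in for the norm appearing in the threshold."""
    covered = P @ occupancies                  # E_{pi ~ P}[d^pi(x)] for each x
    best = occupancies.max(axis=0)             # max_{pi' in Pi} d^{pi'}(x)
    reachable = best >= eta * mu_norm          # states the condition applies to
    return bool(np.all(covered[reachable] >= alpha * best[reachable]))

# toy usage: 3 candidate policies, 4 states (all quantities hypothetical)
occ = np.array([[0.5, 0.1, 0.0, 0.01],
                [0.0, 0.4, 0.3, 0.01],
                [0.1, 0.1, 0.5, 0.02]])
P = np.array([1/3, 1/3, 1/3])
print(is_randomized_policy_cover(P, occ, mu_norm=np.ones(4), alpha=0.3, eta=0.05))
```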
Truncated policy class
Next, we introduce the notion of the truncated policy class, generalizing <cit.>. We begin with some preliminary definitions.
For any h ∈ [H], given a collection of policies Π'⊆, we let
_h(Π') {ϕ̃^⋆,π_h|π∈Π'}, where ϕ̃^⋆,π_h^π[ϕ̃^⋆_h(_h, _h)].
Using this, we define the notion of η-reachable states relative to Π'.
For h∈[H] and a policy class Π'⊆, we define the set of η-reachable states at layer h relative to the set Π' as:
_h, η(Π') {x∈_h |∃ u ∈_h-1(Π') : [h](x)^⊤ u ≥[h](x)·η}.
Given a parameter η>0, we now define the truncated policy class _η inductively as follows: Let _0,η, and for each h≥ 1, let _h, η be the set of policies defined by
π∈_h,η∃π'∈_h-1,η : ∀ t ∈[H], ∀ x ∈_t, π(x) = {[ π'(x), if t=h and x ∈_h,η(_h-1,η),; , otherwise. ].
Finally, we define _η_H,η.
As in <cit.>, the utility behind the extended MDP and truncated policy class is as follows:
* While the extended MDP does not necessarily enjoy the reachability property (<ref>), it emulates certain properties of reachable MDPs, but only if we compare performance to policies in _η.
* For all reward functions of interest, the best reward that can be achieved by a policy in _η is close to what can be achieved using arbitrary policies in .
§.§ Proof Overview
The proof of <ref> is inductive. For fixed h, the inductive hypothesis is that the distributions over policies P1:h+1 produced by satisfy the property:[By extending policies in to in the fashion described in <ref>, the distributions P1:h can be viewed as distribution over policies in .]
P1,… Ph+1 are (η/32 dK A, η)-randomized policy covers relative to _η for layers 1 through h+1 in ,
where K is defined as in <ref>. Assuming the inductive hypothesis holds, we prove that with high probability, the distribution Ph+2 is a (η/32 dK A, η)-randomized policy cover relative to _η in for layer h+2. This inductive hypothesis is primarily used to show that , as invoked in <ref> is a valid choice for the oracle required by (that is, implements approximate linear optimization over = {^π[ ϕ(_h, _h)ϕ(_h, _h)^⊤] |π∈}, for any choice of ϕ∈Φ), which is proven in <Ref>. With this established, we instantiate the guarantee for from <ref> with and set to the instances of (<ref>) and (<ref>) in , respectively. To conclude the proof of the inductive step, we combine the guarantee for and the guarantee for in <Ref> with a change of measure argument, also enabled by the inductive hypothesis that P1:h are approximate policy covers (i.e. (<ref>)). As in <cit.>, a key feature of the analysis is that we work with the extended MDP and truncated policy class throughout the proof, only passing back to the true MDP once the induction is complete and <ref> has been proven to hold for all layers H. To pass back to the true MDP, we use the following (proven in <ref>).
Let h∈ [H], α∈ (0,1), and η >0 be given.
If P∈Δ() is an (α,η)-randomized policy cover relative to _η for layer h in , then P is an (α/2,)-randomized policy cover relative to for layer h in the true MDP , where 4 H d^3/2η.
In <ref> [resp. <ref>] we show that [resp. ], as invoked in <ref>, instantiates the approximate linear optimization oracle [resp. index-to-matrix oracle ] required by . In <ref> and <ref>, we prove guarantees for the instantiations of and within , respectively. In <ref>, we conclude the proof of <ref>.
§.§ Guarantee for as a Subroutine for
We begin by showing that , as invoked in <ref>, instantiates the approximate linear optimization oracle required by . In particular, we fix a layer h∈[H] and assume that P1:h+1 satisfy (<ref>) and apply the generic guarantees for given in <Ref>.
For M ∈∩_(1) and ϕ∈Φ, define function classes '_1:h(M,ϕ) as follows:
'_t(M,ϕ) {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ , w ∈(√(d))}, ∀ t ∈[h-1] and '_h(M,ϕ) {r'_h(·,·; M,ϕ)} ,
where we define reward functions r'_1:h(·,·;M, ϕ) by:
∀ (x,a)∈×, r'_t(x,a;M,ϕ){[ ϕ(x,a)^⊤ M ϕ(x,a), for
t=h,; 0, otherwise. ].
With these rewards and function classes, we will show that for any M ∈∩_(1) and ϕ∈Φ, the output
= (h, r'_1:h(·, ·;M,ϕ), '_1:h(M,ϕ), P1:h, n)
satisfies the property that
max_π∈_η^π[ ϕ̃(_h, _h)^⊤ M ϕ̃(_h, _h) ] ≤^[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] + ,
with high probability once n≥ 1 is sufficiently large; recall that ϕ̃ is the restriction of to its first d coordinates, with defined as in <ref>.
Note that we can equivalently formulate (<ref>) as, for fixed M ∈∩_(1) and ϕ∈Φ, maximizing the sum of the reward functions r'_1:h(·,·;M, ϕ) in (<ref>).
This matches the choice of reward functions in (<ref>) at iteration h, with ϕ = ϕh,k, the feature map returned by in <ref>.
We first verify that the function classes '_1:h(M,ϕ) realize the reward functions specified in (<ref>) in the sense of <Ref>.
For any ϕ∈Φ and M∈∩_F(1), under <ref>, the function classes '_1:h(M,ϕ) in (<ref>) realize the reward functions in (<ref>) in the sense of <ref> (in the true MDP). Furthermore:
* All functions in '_1:h(M,ϕ) take values in [-√(d), √(d)].
* max_t∈[h]ln_'_t(M,ϕ)()≤ln |Φ|+ d ln (√(d) /), where we recall that _() denotes the -covering number for a function class in ℓ_∞-distance (see <ref>).
Fix ϕ∈Φ and M∈∩_(1), and let r'_t(·,·)≡ r'_t(·,·; M, ϕ) and _t'_t'(M,ϕ), for t∈[h]. Further, for t∈[h] and π∈^t+1:h, we define the state-action value function (Q-function) at layer t with respect to the rewards r'_1:h and partial policy π:
∀ (x,a)∈_t×, Q^π_t(x,a) r'_t(x,a)+^π[.∑_ℓ=t+1^h r'_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
For t=h, we clearly have that for any π∈^h:h, Q^π_h(·,·)=r'_h(·,·)∈'_h. For t<h and any π∈^t+1:h, we have from the low-rank structure that for any (x,a)∈_t×, the Q-function Q^π_t satisfies
Q^π_t(x,a) = ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·ϕ^⋆_t(x,a)^⊤μ_t+1^⋆(y) ν (y),
= ϕ^⋆_t(x,a)^⊤( ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y)).
Now, note that for all y∈_t+1,
0≤^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ≤r'_h(·, ·)_∞,
≤M_·sup_x∈_t,a∈ϕ(x,a)^2, (by Cauchy-Schwarz)
≤ 1,
where the last inequality follows by the fact that ϕ(·,·)≤ 1 for all ϕ∈Φ, and that M_≤M_≤ 1. Combining (<ref>) with the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_t+1→0,1, *∫__t+1[t+1](y)g(y) ν(y)≤√(d)), we have that
w_t ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y) ∈(√(d)).
Thus, by (<ref>) we have
Q_t^π(·,·) ≡ϕ^⋆_t(·,·)^⊤ w_t, with w_t ∈(√(d)).
This, together with the fact that [t]∈Φ (by <ref>), implies that Q_t^π∈'_t, which establishes that '_1:h realize the rewards r'_1:h. The bound on the covering number _'_t() follows from a standard bound on the covering number of the ball (√(d)) <cit.>.
Combining <Ref> with <Ref> gives the following bound on the quality of as an approximate linear optimization oracle over the space of policies.
Fix δ∈(0,1) and h∈[H]. Let M∈∩_(1), ϕ∈Φ, and be the output of when given input (h, r'_1:h(·, ·;M,ϕ), '_1:h(M,ϕ), P1:h, n), where
* The reward functions r'_1:h(·, ·;M,ϕ) are as in (<ref>).
* The function classes '_1:h(M,ϕ) are as in (<ref>).
* The distributions P1:h satisfy (<ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈_η^π[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] ≤^[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] + _(n,δ),
where _(n,δ) cH d A√(K η^-1 n^-1 (d ln (n d^1/2)+ln (|Φ|/δ))) and c>0 is a sufficiently large absolute constant.
§.§ Guarantee for as a Subroutine for
We now state a performance guarantee for the subroutine (<Ref>), which simply estimates the second moment of the feature embedding of (_h, _h) under policy π by sampling sufficiently many trajectories and taking the empirical second moment. The following result shows that is a valid choice for the subroutine passed to within .
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output M_h= (h,ϕ(·,·)ϕ(·, ·)^⊤,π, n) (<ref>) satisfies M_h ∈ and, with probability at least 1-δ,
M_h - ^π[ϕ(_h,_h)ϕ(_h,_h)^⊤] _≤_(n,δ),
where _(n,δ) c ·√(n^-1·log( 1/δ)) and c>0 is a sufficiently large absolute constant.
Let ϕ∈Φ and π∈. The claim that M_h ∈ follows by the fact that M_h is an empirical average of rank-1 matrices in .
Now, we show (<ref>). By a standard matrix concentration inequality (see for example <cit.>) and the fact that ϕ(x, a)ϕ(x, a)^⊤_≤ 1 for all x ∈ and a ∈ (following from ϕ(·,·)≤ 1), there exists an absolute constant c>0 such that with probability at least 1 - δ,
M_h - ^π[ ϕ(_h, _h) ϕ(_h, _h)^⊤]_≤ c ·√(log(1/δ)/n) .
Since policies in never take the terminal action, the guarantee in <ref> can also be expressed in the extended MDP as we do in the next corollary.
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output M_h of (h,ϕ(·,·)ϕ(·, ·)^⊤,π, n) (<ref>) satisfies M_h ∈ and for a sufficiently large absolute constant c>0, with probability at least 1-δ,
M_h - ^π[ϕ̃(_h,_h)ϕ̃(_h,_h)^⊤] _≤_(n,δ),
where _(n,δ) c ·√(n^-1·log( 1/δ)) and ϕ̃ is the restriction of to the first d coordinates; see <ref>.
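For concreteness, the following Monte Carlo sketch carries out the second-moment estimation described above: roll out the policy, read off the layer-h state-action pair, and average the resulting rank-one matrices. The rollout oracle and feature map below are hypothetical stand-ins; in the algorithm the trajectories would be generated by executing π in the MDP.

```python
import numpy as np

def estimate_second_moment(sample_layer_h, phi, n):
    """Monte Carlo estimate of E^pi[phi(x_h, a_h) phi(x_h, a_h)^T]: draw n independent
    (x_h, a_h) pairs from rollouts of pi and average the rank-one matrices phi phi^T."""
    M = None
    for _ in range(n):
        x_h, a_h = sample_layer_h()          # (state, action) observed at layer h
        f = phi(x_h, a_h)
        if M is None:
            M = np.zeros((f.shape[0], f.shape[0]))
        M += np.outer(f, f) / n
    return M

# toy usage with a hypothetical rollout oracle and 2-dimensional unit-norm features
rng = np.random.default_rng(0)
phi = lambda x, a: np.array([np.cos(x + a), np.sin(x + a)])
rollout = lambda: (rng.normal(), int(rng.integers(3)))
M_hat = estimate_second_moment(rollout, phi, n=2000)
print(M_hat)     # symmetric PSD; Frobenius norm at most 1 since ||phi(x, a)|| <= 1
```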
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the instantiation of (<ref>) within .
For the rest of this section, we recall that (ϕh,k)_k∈[K] denote the feature maps returned by within (<ref>) at iteration h∈[H-2], and that (Ph,k)_k∈[K] denote the distributions returned by within <ref> at iteration h∈[H-2]. We define
Mh,kγ I + _π∼ Ph,k^π[ϕh,k(_h,_h)ϕh,k(_h,_h)^⊤].
In , we instantiate passing as and as . Combining <Ref> with the general guarantee of in <Ref>, we have the following result.
Let δ,γ∈(0,1) and K≥ 1 be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the feature class Φ satisfies <ref>, and that P1:h in <ref> satisfy (<ref>). Then, with probability at least 1-δ/3H:
* The number of iterations used by (<ref>) when invoked in <Ref> of <Ref> is at most T ⌈4/γ^2dlog( 1+1/γ)⌉.
* The distribution Ph,k output by is such that | Ph,k|≤ T and for Mh,k as in (<ref>), we have
sup_π∈_η^π[ ϕ̃h,k(_h,_h) ^2_( Mh,k)^-1] ≤ 3 d,
where we recall that ϕ̃h,k is the restriction of h,k to its first d coordinates, and h,k is the extension of ϕh,k to ; see <ref>.
By <Ref>, on the event that the instance of (resp. ) used by satisfies <Ref> with _=2γ/5 [_ = 2 γ^2/10], the two desiderata of the lemma hold; here, we instantiate the guarantee in <ref> with C=2, which is what it is set to in <ref>. We claim that, with probability at least 1- δ/6 T H, each call to and to satisfies <Ref> with
=, _ref=_η, _=, and = {^π[ϕ̃h,k(_h,_h)ϕ̃h,k(_h,_h)^⊤] |π∈}.
Since and are called at most two times per iteration of , a union bound (see <ref>) concludes the proof contingent on the above claim.
We now prove the claim. First, note that the instance of that (<ref>) uses within <ref> is always of the form (see <ref> of <ref>):
(h, r_1:h(·, ·, M/M_), _1:h(M/M_), P1:h, n_)
with r_1:h and _1:h as in <Ref> and M ∈∖{0}; this matches the form in <Ref> ('s guarantee) with ϕ = ϕh,k, which implies that with probability at least 1- δ/6 T K, the output of _M of the instance in (<ref>) satisfies:
max_π∈_η^π[ ϕ̃h,k(_h, _h)^⊤Mϕ̃h,k(_h, _h) ]- ^_M[ ϕ̃h,k(_h, _h)^⊤Mϕ̃h,k(_h, _h) ]
≤ cM_· H d A√(K (d ln (n_ d^1/2)+ln (6 TK|Φ|/δ))/η n_),
for a sufficiently large absolute constant c>0. Thus, by choosing
n_ =·η^-1γ^-2 H^2 d^2K A^2· (d + ln (|Φ|/δ)),
for = (A,d,H,log(|Φ|/δ)) sufficiently large, the of (<ref>) is bounded by 2M_γ/5, which implies the claim for the invocation of within . Similarly, the choice of n_ in <Ref> ensures that the claim holds for the invocation of within , by <Ref>. The result follows.
§.§ Guarantee for as a Subroutine for
In this subsection, we prove a guarantee for the instantiation of within . Recall that (ϕh,k)_k∈[K] denote the feature maps returned by within (<ref>) at iteration h, and let (Ph,k)_k∈[0 K-1] and ( Ph,k)_k∈[K] be as in <ref>.
Recall that Ph,k-1∈Δ() is the distribution over policies that passes to at outer iteration h∈[H-2] and inner iteration k∈[K] to compute ϕh,k. Thus, by invoking <ref> in <ref> and using that
n_ = ·η^-5 A^2 d^10log (|Φ|/δ)
in <ref> for = (A,d,H,log(|Φ|/δ)) sufficiently large, we immediately obtain the following corollary.
Let δ,η∈(0,1), K≥ 1, and be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the class Φ satisfies <ref>. Then, with probability at least 1-δ/3HK, the instance of in <ref> of <ref> runs for t≤'· d iterations for ' = (A,d,H,log(|Φ|/δ)) sufficiently large, and outputs ϕh,k such that for all f∈, there exists w_fh,k∈(3d^3/2) satisfying
_π∼Ph,k-1^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤' · d^4 n^-1_log(|Φ|/δ) ≤αη^2/32,
where w_f ∫__h+1 f(y) (y) ν(y) and αη/32 d K A.
We note that by the definition of Ph,k-1 in <ref> of <ref>, <ref> implies that, with probability at least 1-δ/3HK, for all k∈[2 K], f∈ and w_f,w_fh,k∈^d as in <ref>,
1/k-1∑_ℓ=1^k-1_π∼Ph,ℓ^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤2 ' · d^4 n^-1_log(|Φ|/δ),
We now instantiate <ref> with B=3d^3/2A^1/2, ^2 =2 ' · d^4 n^-1_log(|Φ|/δ), πℓ = _π∼ Ph,ℓ [π] ∈, for each ℓ∈[k], and
δk=√(∑_a∈(ϕh,k(·,a)^⊤wh,k_f - ϕ_h^⋆(·,a)^⊤w_f)^2),
and make use of the following facts:
* δk_∞≤ 3d^3/2 A^1/2 (since w_f∨w_fh,k≤3 d^3/2 and ϕ_h^⋆(·,·)∨ϕh,k(·,·)≤ 1).
* <ref> sets K = · d^5A/η^2 and n_≥·η^-4A d^10log (|Φ|/δ) with = (A,d,H,log(|Φ|/δ)) sufficiently large.
This leads to the following corollary.
Let δ,η∈(0,1), K≥ 1, and be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the feature class Φ satisfies <ref>. Then, with probability at least 1-δ/3H, the outputs (ϕh,k)_k∈[K] of in <ref> at iteration h of <ref> are such that for all f∈, with w_f, w_fh,k∈^d defined as in <ref>,
min_k∈[K]_π∼ Ph,k^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤η^2/128 d.
§.§ Concluding the Proof of thm:voxmain
In this section, we conclude the proof of <ref>. We prove the result as a direct consequence of the following inductive statement.
Consider iteration h∈[H] of (Φ, η, ,δ) (<ref>) with parameters >0,δ, η∈(0,1) and a feature class Φ satisfying <ref>. Further, assume that:
* The distributions P1:h+1 at the start of the hth iteration of satisfy (<ref>).
* P1:h+1 are supported on policies that never take the terminal action .
* The input parameter = (A,d,H,log(|Φ|/δ)) is sufficiently large.
Then, with probability at least 1-δ/H, the distribution Ph+2 produced by (Φ,η,,δ) at the end of the hth iteration is an ( η/32 dK A,η)-randomized policy cover relative to _η in for layer h+2, where K is as in <ref>. In addition, Ph+2⊆, and | Ph+2|≤576 d^7/η^4log (1+576 d^4/η^2).
This immediately implies <ref>, which bounds the cardinality of the supports of the distributions returned by <ref>.
Follows immediately from <ref>.
As a first step, we prove that with probability at least 1-δ, P1,… PH are (η/32 dK A, η)-randomized policy covers relative to _η for layers 1 through H in ; that is, we need to show that (<ref>) holds for h=H-1 with probability at least 1-δ. To do this, we proceed by induction over h=1,…,H-1. The base case of h=1 trivially holds because Ψ1=∅ and Ψ2={π_}. The induction step now follows by <ref> and the union bound (see <ref>). Now, <ref> implies that P1,… PH are (η/64 dK A, )-randomized policy covers relative to for layers 1 through H in the real MDP M, where 4 H d^3/2η. Plugging in the choice of K in <ref> implies the claim on P1,…, PH.
We now bound the number of trajectories <ref> requires. The total number of trajectories is equal to the sum of the number of trajectories , , and require. We know that and are called T = O(γ^-2 d) times by (<ref>) at each inner iteration k∈[K] of <ref> (γ is defined in <ref>), and is called once. Furthermore, each call to requires H · n_ trajectories, and and require n_ and n_ trajectories, respectively. Thus, the total number of trajectories is equal to
n_· H^2 K T+ n_· H K T + n_· H K
≤O(η^-13 d^27 H^4 A^4 (d + ln (|Φ|/δ))) +O(η^-14 d^28 H A ln (1/δ)) +O(η^-7 d^15 A^3 H ln (|Φ|/δ)),
where the inequality follows by the choice of parameters in <ref>.
This implies the desired bound on the number of trajectories.
Let _h, _h', and _h” denote the success events in <ref>, <ref>, and <ref>, respectively, and note that by the union bound, we have [_h ∩_h'∩”_h]≥ 1 - δ/H. For the rest of this proof, we will condition on _h ∩_h'∩”_h.
Using <ref>, the assumption that P1:h+1 satisfy (<ref>) implies that the distributions P1, …, Ph+1 have the property that for all ℓ∈[h+1], if x∈_ℓ,η(_η), then
_π∼ Pℓ*[ℓ](x)^⊤ϕ̅_ℓ-1^⋆,π≥α·sup_π∈_η[ℓ](x)^⊤ϕ̅_ℓ-1^⋆,π, for αη/32 dK A.
We will show that with probability at least 1-δ/H, the policy distribution Ph+2 satisfies the same property:
∀ x∈_h+2,η(_η), _π∈ Ph+2*[h+2](x)^⊤ϕ̅_h+1^⋆,π≥α·sup_π∈_η[h+2](x)^⊤ϕ̅_h+1^⋆,π.
By <ref> this is equivalent to the statement that Ph+2 is an ( η/32 dK A,η)-randomized policy cover relative to _η for layer h+2 in .
Throughout the proof, for any ℓ∈[2 H] and z∈_ℓ, we define
π_z ∈_π∈_η^π(z),
and note that by <ref>, we have
π_z ∈_π∈_η[ℓ](z)^⊤ϕ̅_ℓ-1^⋆,π, where ϕ̅_ℓ-1^⋆,π^π[^⋆_ℓ-1(_ℓ-1, _ℓ-1)].
Fix x∈_h+2,η(_η).
In the remainder of the proof, we will argue that Ph+2 satisfies the coverage property <ref> for x.
Preliminaries
We begin with some notation. Let us introduce a function f_x: _h+1→ defined by
f_x(y)_x^⊤ϕ̅^⋆_h+1(y,π_x(y)), where _x [θ_x^⊤, 0]^⊤ and θ_x [h+2](x)/[h+2](x).
Note that [h+2](x)>0, since x∈_h+2,η(_η). Next, we define
w_x ∫__h+1 f_x(y) (y) ν(y), and w̅_x [w_x^⊤, 0]^⊤∈^d+1.
By definition of π_x, we have that for all y∈_h+1,
_x^⊤ϕ̅^⋆_h+1(y,π_x(y)) = max_a∈_x^⊤ϕ̅^⋆_h+1(y,a),
≤max_a∈_x^⊤ϕ̅^⋆_h+1(y,a), (justified below)
= max_a∈θ_x^⊤ϕ^⋆_h+1(y,a), (since y≠_h+1 and [θ̅_x]_d+1=0)
where (<ref>) follows by the facts that _x^⊤ϕ̅^⋆_h+1(y,)=0 (since ϕ̅^⋆_h+1(·,)≡ e_d+1 and [_x]_d+1=0) and that
∀ a∈, _x^⊤ϕ̅^⋆_h+1(y,a) y≠_h+1=θ_x^⊤ϕ^⋆_h+1(y,a) = [h+2](x)^⊤ϕ_h+1^⋆(y,a)/[h+2](x),
≥ 0. ([h+2](·)^⊤ϕ_h+1^⋆(y,a) is a conditional law)
(<ref>) and the fact that θ_x=1 imply that
f_x|__h+1∈,
where f_x|__h+1 denotes the restriction of f_x to _h+1. We also note that since x∈_h+2,η(_η), we have
_x^⊤ϕ̅_h^⋆, π_x = [ ∫__h+1 f_x(y) (y)^⊤ν(y), 0] ϕ̅_h^⋆, π_x, (by definition of w̅_x in (<ref>))
= ∫__h+1 f_x(y) (y)^⊤ϕ̅_h^⋆, π_xν(y), (since (y)=[(y)^⊤, 0], for all y≠_h+1)
= ∫__h+1 f_x(y) (y)^⊤ϕ̅_h^⋆, π_x(y), (since f_x(_h+1)=0)
=_x^⊤ϕ̅_h+1^⋆,π_x, (by definition of f_x in (<ref>))
= 1/*[h+2](x)max_π∈_η[h+2](x)^⊤ϕ̃_h+1^⋆,π, (by definition of θ̅_x in (<ref>))
≥η>0,
where (<ref>) uses the definition of reachable states _h+2,η(_η) (see <ref>); we recall (see <ref>) that ϕ̃^⋆,π_h^π[ϕ̃^⋆_h(_h, _h)] and ϕ̃^⋆_h represents the restriction of ϕ̅^⋆_h to its first d coordinates.
Applying the guarantee for
Moving forward, we let (ϕh,k)_k∈[K] be the feature maps returned by within (<ref>) at iteration h, and define ϕ̅^k,π_h^π[h,k(_h,_h)], for any π∈, where we recall that h,k is the extension of ϕh,k to ; see <ref>. Further, for k∈[K], let wh,k_x be the vector wh,k_f in <ref> with f=f_x|__h+1, and note that
w_xh,k≤3d^3/2.
We will use the extended vector w̅_xh,k [(w_xh,k)^⊤,0]^⊤∈^d+1. By Jensen's inequality, we have for all k∈[K],
( h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x)^2
≤^π_x[(h,k(_h,_h)^⊤h,k_x - ϕ̅_h^⋆(_h,_h)^⊤_x)^2],
= ^π_x[(h,k(_h,π_x(_h))^⊤h,k_x - ϕ̅_h^⋆(_h,π_x(_h))^⊤_x)^2],
= ^π_x[𝕀{_h ∈_h,η(_η)}·(h,k(_h,π_x(_h))^⊤h,k_x - ϕ̅_h^⋆(_h,π_x(_h))^⊤_x)^2],
≤^π_x[𝕀{_h ∈_h,η(_η)}·∑_a∈(h,k(_h,a)^⊤h,k_x - ϕ̅_h^⋆(_h,a)^⊤_x)^2],
where the last inequality follows by the fact that h,k(·,)≡ϕ̅^⋆_h(·,) ≡ e_d+1 and [w̅_xh,k]_d+1=[w̅_x]_d+1=0 (by definition). Thus, for g(y) 𝕀{y∈_h,η(_η)}·∑_a∈(ϕ̅h,k(y,a)^⊤_xh,k - ϕ̅_h^⋆(y,a)^⊤_x )^2, (<ref>) implies that
( h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x)^2
≤∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_x_h-1(y),
≤∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_y_h-1(y), (by definition of π_y ((<ref>)) and (<ref>)))
≤α^-1_π∼ Ph[ ∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_h-1(y)], (by (<ref>) with ℓ=h, and g(y)=0 for all y∉_h,η(_η))
≤ 2 α^-1_π∼Ph,k-1[ ∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_h-1(y)], (Ph,k-1 as in <ref> of <ref>)
= 2 α^-1_π∼Ph,k-1^π[∑_a∈(h,k(_h,a)^⊤h,k_x - ϕ̅_h^⋆(_h,a)^⊤_x)^2],
= 2 α^-1_π∼Ph,k-1^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_x - ϕ_h^⋆(_h,a)^⊤w_x)^2],
where (<ref>) follows by the fact that the policies in the support of Ph,k-1 never take the terminal action (by assumption) and that h,k(x,a)^⊤h,k_x - ϕ̅_h^⋆(x,a)^⊤_x=ϕh,k(x,a)^⊤wh,k_x - ϕ_h^⋆(x,a)^⊤w_x for all a∈ whenever x≠_h. We note that Ph,k-1 is the distribution over policies that passes to to compute ϕh,k. Thus, since w_x = ∫__h+1 f_x(y) (y) ν(y) (see (<ref>)) and f_x|__h+1∈ (see (<ref>)), the guarantee for in <ref> together with (<ref>), implies that (recall that we condition on the event )
∀ k∈[K], | h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x| ≤η/4,
Since _xϕ̅_h^⋆, π_x≥η (see (<ref>)), (<ref>) implies that under , we have
∀ k∈[K], _xϕ̅_h^⋆, π_x≤4/3h,k_x_h^k,π_x.
Applying the guarantee for
To proceed, define
ℓ∈_k∈[K]_π∼ Ph,k^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_x - ϕ_h^⋆(_h,a)^⊤w_x)^2].
Note that by <ref>, we have,
_π∼ Ph,ℓ^π[∑_a∈(ϕh,ℓ(_h,a)^⊤wh,ℓ_x - ϕ_h^⋆(_h,a)^⊤w_x)^2] ≤η^2/128 d.
Let γ be as in <ref>, and for each k∈[K] define
Mh,kγ I + _π∼ Ph,k^π[ϕh,k(_h,_h)ϕh,k(_h,_h)^⊤], and Mh,k[ Mh,k 0_d × 1; 0_1 × d 0 ]∈^(d+1)× (d+1).
From (<ref>), Hölder's inequality, and AM-GM, we have
_xϕ̅_h^⋆, π_x ≤4/3*w̅h,ℓ_x _ Mh,ℓ·^ℓ, π_x_h_( Mh,ℓ)^, (( Mh,k)^ denotes the pseudo-inverse of Mh,k)
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^ℓ, π_x_h^2_( Mh,ℓ)^,
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^π_x[h,k(_h,_h)^2_( Mh,ℓ)^], (Jensen's inequality)
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^π_x[ϕ̃h,k(_h,_h)^2_( Mh,ℓ)^-1].
By <ref> (in particular (<ref>)), we have that under the event _h”,
^π_x[ϕ̃h,k(_h,_h)^2_( Mh,ℓ)^-1] ≤ 3 d.
Combining this with (<ref>), it follows that
_xϕ̅_h^⋆, π_x ≤η/4 + 8d/η*w̅h,ℓ_x ^2_ Mh,ℓ ,
= η/4 + 8d/η·*wh,ℓ_x^2_ Mh,ℓ,
=η/4+ 8dγ/η·*wh,ℓ_x^2 + 8d/η·_π∼ Ph,ℓ^π[ ( ϕh,ℓ(_h,_h)^⊤wh,ℓ_x)^2 ],
≤η/4+ 72 d^4γ/η + 16 d/η·_π∼ Ph,ℓ^π[ ( ϕ^⋆_h(_h,_h)^⊤w_x)^2 ]+ η/8, (see below)
≤η/2+ 16 d/η·_π∼ Ph,ℓ^π[ ( ϕ^⋆_h(_h,_h)^⊤w_x)^2 ],
where (<ref>) follows by (<ref>), (<ref>), and that (a+b)^2 ≤ 2a^2 +2b^2. The last inequality follows by the parameter choice γ = η^2/576 d^4 (see <ref>).
Concluding
By the definition of w_x, the fact that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,π_x(y))≥ 0 is a conditional density for all y∈_h+1, and Jensen's inequality, we have:
∀ (y',a')∈_h×, (ϕ^⋆_h(y',a')^⊤ w_x )^2 = (∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y))^2,
≤∫__h+1(μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) )^2 μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
≤∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
where the last inequality follows by Cauchy-Schwarz and that ϕ^⋆_h+1(·, ·)≤ 1.
Plugging this into (<ref>), we have
_xϕ̅_h^⋆, π_x - η/2
≤16 d/η·_π∼ Ph,ℓ^π[∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)] ,
≤16 d A/η·_π∼ Ph,ℓ^π[1/A∑_a∈∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,a) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)] , (see below)
= 16 d A/η·_π∼ Ph,ℓ^π∘_h+1π_[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
≤16 d A K/η·_π∼ Ph+2^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= 16 d A K/η·_π∼ Ph+2[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆,π_h+1],
where (<ref>) uses that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,a) is non-negative for all (y,a)∈_h+1× (since it is a conditional density), and (<ref>) follows by definition of Ph+2 in <ref>.
Combining (<ref>) with the fact that _xϕ̅_h^⋆, π_x≥η (see (<ref>)) yields
1/2·μ̅_h+2^⋆(x)^⊤/μ̅_h+2^⋆(x)ϕ̅^⋆,π_x_h+1 ≤16 d A K/η·_π∼ Ph+2[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆,π_h+1],
= 16 d A K/η·_π∼ Ph+2[ μ̅_h+2^⋆(x)^⊤/μ̅_h+2^⋆(x)ϕ̅^⋆,π_h+1],
where the last equality follows by the fact that policies in the support of Ph+2 never take the terminal action. This establishes (<ref>). Since this argument holds uniformly for all x∈_h+2,η(_η), the proof is completed. The bound on | Ph+2| follows immediately from <ref> and the choice of γ in <ref>.
§.§ Proof of <ref>
Let h∈ [H] and P∈Δ() be a (C,γ)-generalized optimal design (see <ref>) for the set
_h{^π[
(_h, _h)(_h, _h) ^]|π∈}.
Further, define P'=∑_π∈(P)P(π)·_π∘_h+1 and
M_PγI_d+_π∼P^π*(_h, _h)(_h, _h) ^.
We will show that P' is a (α,η)-randomized policy cover for layer h+2 with αη/2 d A and η 4 d √((1+C)γ).
Let x∈_h+2,η() and π_x ∈_π∈ d^π(x).
Preliminaries
We begin with some notation. Let us introduce a function f_x: _h+1→ defined by
f_x(y)θ_x^⊤ϕ^⋆_h+1(y,π_x(y)), where θ_x [h+2](x)/[h+2](x).
Note that [h+2](x)>0, since x∈_h+2,η(). Next, we define
w_x ∫__h+1 f_x(y) (y) ν(y) ∈^d.
Since f_x takes values in [-1,1] (because ϕ_h+1^⋆(· , ·)≤ 1 and θ_x≤ 1), the normalizing assumption on μ^⋆_h+1 in (<ref>) implies that
w_x ∈(2√(d)).
We also note that the definitions of f_x and w_x imply that
w_x^⊤ϕ_h^⋆, π_x = θ_x^⊤ϕ_h+1^⋆,π_x = sup_π∈θ_x^⊤ϕ_h+1^⋆,π, (by definition of π_x)
= 1/*[h+2](x)max_π∈[h+2](x)^⊤ϕ_h+1^⋆,π, (by definition of θ_x in (<ref>))
≥η>0,
where the penultimate inequality follows by the fact that x∈_h+2,η().
Using the generalized optimal design property
By Hölder's inequality, we have for any ν>0,
w_x^⊤ϕ_h^⋆,π_x ≤w_x_M_P·ϕ^⋆, π_x_h_M_P^-1,
≤1/2νw_x^2_M_P + ν/2ϕ^⋆, π_x_h^2_M_P^-1, (AM-GM)
≤1/2νw_x^2_M_P + ν/2^π_x[ ϕ^⋆_h(_h, _h)^2_M_P^-1], (Jensen's inequality)
= 1/2νw_x^2_M_P + ν/2(M_P^-1^π_x[ ϕ^⋆_h(_h, _h) ϕ^⋆_h(_h, _h)^⊤] ),
≤1/2νw_x^2_M_P + ν· d(1+C)/2, (P is a (C,γ)-generalized optimal design)
= γ/2νw_x^2 + 1/2ν_π∼ P^π[(w_x^⊤ϕ^⋆_h(_h,_h))^2] + ν· d(1+C)/2, (by definition of M_P)
≤2γ d/ν + 1/2ν_π∼ P^π[(w_x^⊤ϕ^⋆_h(_h,_h))^2] + ν· d(1+C)/2,
where the last inequality follows by the bound on w_x in (<ref>). Now, by the definition of w_x, the fact that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,π_x(y))≥ 0 is a conditional density for all y∈_h+1, and Jensen's inequality, we have:
∀ (y',a')∈_h×, (ϕ^⋆_h(y',a')^⊤ w_x )^2 = (∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y))^2,
≤∫__h+1(μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) )^2 μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
≤∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
where the last inequality follows by Cauchy-Schwarz and that ϕ^⋆_h+1(·, ·)≤ 1. Plugging (<ref>) into (<ref>) and rearranging, we obtain: for all ν>0,
w_x^⊤ϕ_h^⋆,π_x - 2γ d/ν - ν· d(1+C)/2
≤1/2ν_π∼ P^π[∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)],
≤A/2ν_π∼ P^π[1/A∑_a∈∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,a) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)], (see below)
= A/2ν_π∼ P^π∘_h+1π_[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= A/2ν_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
where (<ref>) uses that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,a) is non-negative for all (y,a)∈_h+1× (since it is a conditional density), and the last inequality follows by definition of P'. Now, using (<ref>), we get: for ν2 √(γ (1+C)^-1),
1/2w_x^⊤ϕ_h^⋆,π_x ≤w_x^⊤ϕ_h^⋆,π_x - η/2,
≤w_x^⊤ϕ_h^⋆,π_x -2 d√((1+C)γ), (using that γ = η^2 d^-2 (1+C)^-1/16)
≤w_x^⊤ϕ_h^⋆,π_x - 2γ d/ν - ν· d(1+C)/2, (by the choice of ν)
≤A/2ν_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ], (by (<ref>))
= A/4 √(γ (1+C)^-1)_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= Ad/η_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
where the last equality uses that γ = η^2 d^-2 (1+C)^-1/16.
Rearranging, we conclude that P' is an (η/2d A,η)-randomized policy cover for layer h+2.
§ GENERIC GUARANTEE FOR
In this section we give a generic guarantee for the (<ref>). We consider the abstract framework introduced in <ref>, in which the aim is to compute a generalized optimal design for an implicitly specified set of matrices
=*W^z_z∈⊆ indexed by an abstract set . We assume that subroutines and used by satisfy the following assumption.
[Approximation guarantee for and ]
Consider an abstract set and a collection of PSD matrices {W^z∈^d× d| z∈} indexed by elements in . There exist _,_>0 and reference subsets _ref, _⊆ such that for any M ∈ and P∈Δ(_), the outputs ẑ_M (M/M_) and W_P (P)∈ satisfy ẑ_M∈_ and
sup_z∈_ref(M W^z) ≤(M W^ẑ_M)+_·M_ , and W_P - _z∼ P[W^z]_≤_.
For our application to RL, the sets _ref and _ are useful to accommodate algorithms that optimize relative to restricted policy sets.
Given such subroutines and , and γ>0, ((·),(·), ·,γ) applies the Frank-Wolfe (conditional gradient method) to approximately solve the optimization problem
_P ∈Δ() F(P), where F(P)-log(γ I_d + _z∼ P[W^z]).
Letting {W^z | z∈} and assuming that ⊆_(1), the main result for this subsection (<ref>) bounds the number of iterations used by ((·),(·), ·,γ) under <ref> and gives a guarantee for the output.
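The following Python sketch illustrates the resulting conditional gradient scheme in an idealized setting where the family {W^z} is finite and given explicitly, so that exact linear optimization and exact averaging replace the approximate oracles; the step size μ = Cγ²d/8 and the termination threshold (1+C)d follow the choices used in the analysis below, and all numerical inputs are synthetic.

```python
import numpy as np

def generalized_optimal_design(Ws, gamma, C=2.0, max_iter=10_000):
    """Frank-Wolfe sketch for min_P -log det(gamma*I + E_{z~P}[W^z]) over a finite
    family of PSD matrices Ws; exact linear optimization stands in for the oracles."""
    d = Ws[0].shape[0]
    K = len(Ws)
    P = np.full(K, 1.0 / K)                 # start from the uniform mixture
    mu = C * gamma ** 2 * d / 8             # step size used in the analysis
    for _ in range(max_iter):
        M_P = gamma * np.eye(d) + sum(p * W for p, W in zip(P, Ws))
        M_inv = np.linalg.inv(M_P)
        scores = np.array([np.trace(M_inv @ W) for W in Ws])
        z_hat = int(np.argmax(scores))
        if scores[z_hat] <= (1 + C) * d:    # termination test of the scheme
            break
        P = (1 - mu) * P                    # mix a point mass on z_hat into P
        P[z_hat] += mu
    return P

# toy usage: random PSD matrices normalized to Frobenius norm one (synthetic)
rng = np.random.default_rng(0)
Ws = []
for _ in range(20):
    A = rng.normal(size=(3, 3))
    W = A @ A.T
    Ws.append(W / np.linalg.norm(W))
P = generalized_optimal_design(Ws, gamma=0.1)
print(np.round(P, 3), float(P.sum()))
```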
Let C∈(1,2] and γ∈(0,1) be such that γ C<5/2, and suppose that the collection {W^z | z ∈} consists of PSD matrices of Frobenius norm bounded by 1. If (<Ref>) is run with parameters C, γ and , satisfying <ref> with _=Cγ/5 and _=Cγ^2 /10, then the algorithm terminates after t ≤16 γ^-2C^-2 d^-1ln (1 + 1/γ) iterations,[While it may seem odd at first glance that the iteration complexity for scales with d^-1, we note that the non-trivial regime in <ref> is when γ≤ 1/d. This is because for γ≥ 1/d, we have (M_P^-1 W^z)≤ d for any P∈Δ() and z∈, since M_P≽ I_d/d and W^z∈∩_(1). Whenever γ≤1/d, the iteration complexity for increases with d, as expected.] and requires at most twice that many calls to each of and . Furthermore, the output P_t of is such that P_t∈Δ(_),
|supp P_t|≤ t, and
sup_z∈_ref(M_P_t^-1 W^z) ≤ (1+3C/2) · d, where M_P_tγ I_d +_z∼ P_t[ W^z].
Let F be as in (<ref>). For z∈ and P∈Δ(), define M^zγ I_d
+ W^z, W_P_z∼ P[W^z], and M_P γ I_d + W_P. Throughout the proof, we will use that the function f: M ↦ -log M defined over has the following gradient and Hessian expressions:
∇ f(M)[H] = - (M^-1 H) and ∇^2 f(M)[H,H] = (M^-1 H M^-1 H),
for all H∈^d× d.
To begin, by Taylor's theorem and the fact that the set of PSD matrices is convex, there exists λ∈[0,1] such that for any P,P'∈, defining M_λλ M_P + (1-λ) M_P'∈,
F(P') - F(P) = f(M_P') -f(M_P),
= ∇ f(M_P)[M_P'-M_P] + 1/2∇^2 f(M_λ)[M_P'-M_P, M_P'-M_P] ,
= - (M_P^-1 (W_P'- W_P)) + 1/2(M^-1_λ (W_P'-W_P) M^-1_λ(W_P'- W_P)),
≤- (M_P^-1 (W_P'- W_P)) + 1/2γ^2W_P' - W_P^2_,
where the last inequality follows because for all z∈, M^z = γ I_d + W^z≽γ I_d, since W^z∈. We also note that by definition of F in (<ref>) and the fact that ⊂∩_(1), we have
sup_P,P'∈Δ() F(P') - F(P) ≤ dln (1 + 1/γ),
since the determinant of a matrix is bounded by the product of the norms of its columns.
Bounding the number of iterations
If <ref> has not terminated at iteration ℓ≥ 1, then
(M_ℓ^-1W_ℓ)>(1+C)d,
where M_ℓ = γ I_d + (P_ℓ), W_ℓ =
(𝕀_z̃_ℓ), and z̃_ℓ =
(M_ℓ^-1/M_ℓ^-1_F). Since satisfies <ref> with _=
γ^2 C/10, we have that
M_P_ℓ - M_ℓ_∨W^z̃_ℓ - W_ℓ_≤γ^2 C/10.
Furthermore, since M_P_ℓ≽γ I_d (because ⊆), we have using Cauchy-Schwarz
rM_P_ℓ^-1· (M_ℓ - M_P_ℓ)_≤M_P_ℓ^-1_·M_P_ℓ - M_ℓ_≤γ C/10<1/4,
where the last inequality follows by the fact that γ C<5/2.
On the other hand, by <ref>, instantiated with A = M_P_ℓ and E = M_ℓ -M_P_ℓ, we have that
M_P_ℓ^-1 - M_ℓ^-1_≤M_ℓ -M_P_ℓ_/1-r·M_P_ℓ^-1_^2 ≤4/3 γ^2γ^2 C/10 , (by (<ref>), (<ref>), and M_P_ℓ≽γ I_d)
= 2C/15≤C/5.
Note also that since only returns matrices in (see <ref>), we have M_ℓ≽γ I_d, and so
M_ℓ^-1_≤1/γ.
Using (<ref>)-(<ref>) and the triangle inequality, we obtain
(M_P_ℓ^-1 W^z̃_ℓ) = ((M_P_ℓ^-1 -M_ℓ^-1) W^z̃_ℓ) + (M_ℓ^-1 (W^z̃_ℓ-W_ℓ)) + (M_ℓ^-1 W_ℓ),
> - M_P_ℓ^-1 -M_ℓ^-1_·W^z̃_ℓ_ -M_ℓ^-1_·W^z̃_ℓ-W_ℓ_ + (1+C)d, (by (<ref>))
≥ - C/5 - 1/γ·γ C/5+ (1+C)d, (by ⊆_(1) and (<ref>)-(<ref>))
≥ - C/2 + (1+C)d.
Now, recall that μ = Cγ^2 d/8. Instantiating (<ref>) with P'=P_ℓ+1 and P=P_ℓ and using (<ref>), we have
F(P_ℓ+1) ≤ F(P_ℓ) + (M_P_ℓ^-1 (W_P_ℓ- W_P_ℓ+1)) + 2/γ^2W_P_ℓ+1- W_P_ℓ^2_,
= F(P_ℓ) + μ·(M_P_ℓ^-1 (W_P_ℓ- W^z̃_ℓ)) + μ^2/2γ^2W^z̃_ℓ- W_P_ℓ^2_,
< F(P_ℓ) + μ·(C/2 - (1+C)d + (M_P_ℓ^-1 W_P_ℓ) ) + 2 μ^2/γ^2, (by ⊆_(1) and (<ref>))
≤ F(P_ℓ) - μ Cd/2 + 2μ^2/γ^2, (see below)
≤ F(P_ℓ) - γ^2 C^2 d^2/16 ,
where (<ref>) follows by the fact that (M_P_ℓ^-1 W_P_ℓ) ≤ d, and the last inequality follows by the choice of μ in <ref>. If the algorithm runs for t≥ 1 iterations, then summing (<ref>) and telescoping, we have
- (t-1) γ^2 C^2 d^2/16 > F(P_t)- F(P_1) ≥inf_P,P'∈Δ() F(P)-F(P') ≥ -d ln (1+1/γ),
where the last inequality follows by (<ref>). By rearranging, we conclude that
t < 1 + 16 γ^-2C^-2 d^-1ln (1 + 1/γ),
giving the claimed bound on the number of iterations.
Guarantee for the last iterate
Suppose the algorithm terminates at step t. Since and satisfy <ref> with _= C
γ/5, the iterates at step t satisfy (<ref>) in addition to
sup_z∈_(M_t^-1 W^z) ≤(M_t^-1 W^z̃_t) + C γM_t^-1_/5,
≤(M_t^-1 W^z̃_t) + C d^1/2M_t^-1_ /5,
≤(M_t^-1 W^z̃_t) + Cd^1/2 /5,
where the last inequality follows by (<ref>).
Combining this with the termination condition (M_t^-1W_t) ≤
(1+C)d, we have that
sup_z ∈_(M_P_t^-1 W^z)
≤sup_z ∈_((M_P_t^-1-M_t^-1) W^z)+ sup_z ∈_(M_t^-1 W^z),
≤sup_z ∈_((M_P_t^-1-M_t^-1) W^z) + (M_t^-1 W^z̃_t) +C d^1/2/5, (by (<ref>))
= sup_z ∈_((M_P_t^-1-M_t^-1) W^z) + (M_t^-1 W_t)+ (M_t^-1 (W^z̃_t -W_t)) +C d^1/2/5,
≤sup_z ∈_M_P_t^-1 -M_t^-1_·W^z_ + (1+C)d+M_t^-1_·W^z̃_t- W_t_ + C d^1/2/5, (see below)
≤2C/15+ (1+C)d+1/γ·C γ^2/10 + C d^1/2/5, (by (<ref>)-(<ref>) and ⊆_(1))
≤ (1+3C/2)· d,
where (<ref>) follows by Cauchy-Schwarz and (M_t^-1W_t) ≤
(1+C)d. This completes the proof.
§ GENERIC GUARANTEE FOR
In this section, we give a generic guarantee for (<ref>). Compared to previous guarantees in <cit.>, we prove a fast 1/n-type rate of convergence for , and show that the algorithm succeeds even when the norm of the weight w in <ref> does not grow with the number of iterations. We also use the slightly simpler discriminator class:
{. f x ↦max_a∈θ^⊤ϕ(x,a) | θ∈(1), ϕ∈Φ}.
The main guarantee for is as follows.
Let h∈ [H], δ∈(0,e^-1), and n∈ℕ be given, and suppose that satisfies the normalization assumption in <ref>.
For any function f ∈, define
w_f = ∫__h+1 f(x) _h+1(x) ν(x).
Let P∈Δ() be a distribution over policies, be as (<ref>), and
Φ be a feature class satisfying <ref>. With probability at least 1 - δ, with input (h, , Φ, P, n) terminates after t≤ T*d log_3/2 (2n d^-1/2) iterations, and its output ϕt satisfies
sup_f∈inf_w ∈(3d^3/2)_π∼ P^π∘_h π_[(w^⊤ϕt(_h,_h)- w_f^⊤ϕ_h^⋆(_h,_h) )^2] ≤_^2(n,δ),
where _^2(n,δ) c T d^3 n^-1log(|Φ|/δ), for some sufficiently large absolute constant c>0.
To prove the theorem, we need a technical lemma, which follows from <cit.>.
Consider a call to (h, , Φ, P, n) (<ref>) in the setting of <ref>. Further, let _ be as in <ref> and define
(ϕt, wt_1,…, wt_t-1)∈_ϕ∈Φ,(w_1,…,w_t-1)∈(2√(d))^t-1∑_ℓ=1^t-1_(ϕ,w_ℓ,fℓ).
For any δ∈(0,1), there is an event t(δ) of probability at least 1-δ such that under t(δ), if <ref> does not terminate at iteration t≥ 1, then for wℓ w_fℓ:
∑_ℓ =1^t-1_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ - ϕ_h^⋆(_h,_h)^⊤ wℓ)^2] ≤ t _^2(n,δ),
inf_w ∈3/2(d^3/2)_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ w- ϕ_h^⋆(_h,_h)^⊤ wt)^2] > 8 d t_^2(n,δ),
where ^2_(n,δ) c d^2 n^-1ln(|Φ|/δ) and c≥1 is a sufficiently large absolute constant.
With this, we prove <ref>.
Let us abbreviate _(n,δ),
with _(n,δ) defined as in <ref>. Further, let N 1+ *d log_3/2 (2d^3/2/), δ' δ/2N, and define
__(n,δ').
Note that ≤_ and N -1 ≤ T, where T is the number of iterations in the theorem statement; the latter inequality follows by the facts that the absolute constant c in <ref> is at least 1 and ln (|Φ|/δ)≥1. We define an event 1(δ')∩…∩N(δ'), where (^t(·))_t are the success events in <ref>. Note that []≥ 1 - δ/2 by the union bound. Throughout this proof, we condition on the event .
To begin the proof, we define a sequence of vectors (v_1:dℓ)_ℓ≥ 0 in an inductive
fashion, with v_iℓ∈^d for all
i∈d and ℓ≥0. For ℓ=0, we let
v_i0 = e_i/d, for all i∈[d]. For
ℓ≥ 1, we consider two cases:
* Case I: If
ℓ{j ∈[d] | |(V_-jℓ-1, wℓ)|>(1+C)· |(Vℓ-1)| . }≠∅,
where
Vℓ-1 (v_1ℓ-1,…,
v_dℓ-1)∈^d× d and
wℓw_fℓ, then we let
j_j'∈ℓj' and define
v_iℓ{[ wℓ , if i=j,; v_iℓ-1, otherwise. ].
* Case II: If ℓ=∅, we let
v_iℓ = v_iℓ-1, for all i∈[d].
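To illustrate the update just defined, the following sketch performs one step of the determinant-based swap with synthetic inputs: it checks whether replacing some column of V^{ℓ-1} by w^ℓ increases |det| by more than a (1+C) factor (Case I) and, if so, swaps w^ℓ into the column giving the largest increase; otherwise it leaves the matrix unchanged (Case II). The value C = 1/2 matches the choice used later in the proof; everything else is illustrative.

```python
import numpy as np

def spanner_update(V, w, C=0.5):
    """One determinant-based update step: swap w into the column of V whose
    replacement maximizes |det|, provided the gain exceeds a (1+C) factor."""
    d = V.shape[1]
    base = abs(np.linalg.det(V))
    gains = np.empty(d)
    for j in range(d):
        V_swap = V.copy()
        V_swap[:, j] = w                    # V with column j replaced by w
        gains[j] = abs(np.linalg.det(V_swap))
    j_star = int(np.argmax(gains))
    if gains[j_star] > (1 + C) * base:      # Case I: a profitable swap exists
        V_new = V.copy()
        V_new[:, j_star] = w
        return V_new, True
    return V, False                          # Case II: keep V unchanged

# toy usage starting from columns e_i / d (synthetic candidate vectors)
d = 3
V = np.eye(d) / d
rng = np.random.default_rng(0)
for _ in range(10):
    V, swapped = spanner_update(V, rng.normal(size=d))
print(np.round(V, 2))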
We first show that t≠∅ at any iteration t∈[N] where does not terminate. Let t∈[N] be an iteration where the algorithm does not terminate, and suppose that t=∅. This means that
∀ j∈[d] , |(V_-jt-1, wt)|≤ (1+C)· |(Vt-1)|.
Now, since (Vt-1)≠ 0 (note that
*(Vt) is non-decreasing with t), we have
that span( Vt-1)= ^d. Thus, there exist
β_1,…, β_d∈ such that wt=
∑_i=1^d β_i vt-1_i. By the linearity of the
determinant and (<ref>), we have
∀ j ∈[d], (1+C)|·(Vt-1)| ≥ |(V_-jt-1, wt)|,
= |(V_-jt-1, ∑_i=1^d β_i vt-1_i )|,
= *∑_i∈[d]β_i·(V_-jt-1, v_it-1),
= |β_j| · |(Vt-1)|.
This implies that |β_j|≤ (1+C) for all
j∈[d]. Now, note that by the definition of (v_it-1), we have that for any i∈[d] such that v_it-1≠ e_i/d, there exists ℓ∈ [t-1] such that wℓ= v_it-1. Let
t{i∈[d]| v_it-1≠ e_i/d},
and for any i∈t, let ℓ_i∈[t-1] be such that wℓ_i= v_it-1. Further, define
wt∑_i∈tβ_i wℓ_i= ∑_i∈tβ_i v_it-1,
and note that by the triangle inequality and the fact that wt=∑_i=1^d β_i v_it-1, we have
wt- wt≤ (1+C)_.
Finally, with the notation in (<ref>), define
wt_t ∑_i∈tβ_i wt_ℓ_i, and note that wt_t ∈ (1+C) (2d^3/2),
since |β_i| ≤ (1+C) for all i∈[d], |t|≤ d, and wt_ℓ∈(2√(d)), for all ℓ∈[t-1]. Now, by <ref>, in particular (<ref>), we have
∑_i∈t_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ_i - ϕ_h^⋆(_h,_h)^⊤ wℓ_i)^2] ≤ t _^2,
where _ is as in (<ref>). Using the
expressions in <ref> with (<ref>) and Jensen's inequality, we have that under t,
_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_t - ϕ_h^⋆(_h,_h)^⊤ wt)^2]
≤(∑_j∈t |β_j|) ·∑_i∈t_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ_i - ϕ_h^⋆(_h,_h)^⊤ wℓ_i)^2] ,
≤ (1+C) d t _^2.
Now, using (<ref>) and the facts that (a+b)^2 ≤ 2a^2 + 2 b^2 and ϕ^⋆_h_2≤ 1, we have that
_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_t - ϕ_h^⋆(_h,_h)^⊤ wt)^2] ≤ 2(1+C)^2 ^2 + 2(1+C)dt _^2,
≤ 2(1+C)^2 ^2_ + 2(1+C)dt _^2.
Using that C=1/2, we conclude that the right-hand side of this inequality is bounded by 8 d t_^2 which is a contradiction, since wt_t ∈ (1+C)(2d^3/2) = (3d^3/2) and by <ref>, we must have
inf_w∈(3d^3/2)_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ w- ϕ_h^⋆(_h,_h)^⊤ wt)^2]> 8 t _
if does not terminate at round t.
Therefore, we have that t≠∅, for any
iteration t∈[2 N] where does not
terminate.
We now bound the iteration count and prove that the guarantee in
<ref> holds at termination. Note that whenever ℓ≠∅ for ℓ>1, we have by construction:
|(Vℓ)| > 3/2 · |(Vℓ-1)|.
Thus, if runs for t∈[2 N] iterations, then
|(Vt)| > (3/2)^t-1· |(V1)|.
On the other hand, since the determinant of a matrix is bounded by the product of the norms of its columns and v_1:dt∈(2√(d)), we have
|(Vt)| ≤ 2^d d^d/2.
Note also that |(V0)| = (/d)^d. Plugging this
into (<ref>), we conclude that
(3/2)^t-1 < (2d^3/2/)^d.
Taking the logarithm on both sides and rearranging yields
t < 1+ d log_3/2 (2d^3/2/)≤ N.
Thus, the algorithm must terminate after at most N-1 iterations. Furthermore, by <cit.>, we have that with probability at least 1-δ/2N, if the algorithm terminates at iteration t, then
max_f∈inf_w ∈(3d^3/2)_π∼ P^π∘_h π_[(w^⊤ϕt(_h,_h)- w_f^⊤ϕ_h^⋆(_h,_h) )^2] ≤ 32 t _^2,
≤ 32 (N-1)_^2,
≤ 32 T _^2.
Applying a
union bound completes the proof.
§ GENERIC GUARANTEES FOR
In this section, we present self-contained guarantees for (<ref>). We show that, given any reward functions r_1:h:×→_≥ 0 and function classes _1:h, where _t⊆{g: _t×→} for t∈[h], that “realize” these reward functions (we formalize this in the next definition), if P1:h are (approximate) policy covers for layers 1 through h, then for sufficiently large n≥ 1 and with high probability, the output = (h,r_1:h, _1:h, P1:h, n) is an approximate maximizer of the objective
max_π∈^π[∑_t=1^h r_t(_t,_t)].
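The following sketch instantiates this backward least-squares scheme on a toy problem: a hypothetical one-dimensional environment, stand-in policy covers, and a linear-in-features regression class replace the abstract objects. It is meant only to convey the structure of the procedure (roll in with a cover policy, act uniformly at the current layer, roll out with the partial policy learned so far, regress the observed return, and act greedily with respect to the fit), not to implement the algorithm as analyzed.

```python
import numpy as np

rng = np.random.default_rng(0)
H, A, N = 3, 2, 2000
feat = lambda x, a: np.array([1.0, x, float(a), x * float(a)])   # hypothetical features

def step(x, a):                     # hypothetical toy dynamics and reward
    r = np.tanh(x) * (1.0 if a == 1 else -1.0)
    return x + (a - 0.5) + 0.1 * rng.normal(), r

def rollout(x, t, partial_policy):  # sum of rewards from layer t+1 to H under the learned policy
    total = 0.0
    for u in range(t + 1, H + 1):
        a = partial_policy[u](x)
        x, r = step(x, a)
        total += r
    return total

covers = {t: [lambda x: 0, lambda x: 1] for t in range(1, H + 1)}  # stand-in policy covers
policy = {}
for t in range(H, 0, -1):           # dynamic programming, last layer first
    X, y = [], []
    for _ in range(N):
        x = rng.normal()            # draw an initial state
        pi = covers[t][rng.integers(len(covers[t]))]
        for _ in range(1, t):       # roll in to layer t with a policy sampled from the cover
            x, _ = step(x, pi(x))
        a = int(rng.integers(A))    # uniform action at layer t
        x_next, r = step(x, a)
        y.append(r + rollout(x_next, t, policy))   # observed return from layer t onward
        X.append(feat(x, a))
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    policy[t] = lambda x, w=w: int(np.argmax([feat(x, a) @ w for a in range(A)]))

print("greedy action at x = 1.0, layer 1:", policy[1](1.0))
```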
To formalize this result, we define the notion of realizability we require for the function classes _1:h.
We say that function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize reward functions r_1:h:×→ if for all t∈[h] and all π∈^t+1:h,
Q_t^π∈_t, where Q^π_t(x,a) r_t(x,a)+^π[.∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
Note that Q^π_t in (<ref>) represents the state-action value function (Q-function) at layer t∈[h] with respect to the rewards r_1:h and partial policy π.
In what follows, given a function class ⊆{g: ×→}, we use _() to denote the -covering number of in ℓ_∞ distance.
A set of functions {g_1, …, g_N}⊂{g: ×→} is an -cover of ⊆{g:×→} in ℓ_∞-distance if for all g∈, there exists i ∈ [N] such that
g - g_i_∞≤.
The -covering number _() is the size N of the smallest -cover of .
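As a simple finite illustration of this definition (the proofs themselves only use standard volumetric bounds for Euclidean balls), the following sketch greedily extracts an ε-cover in sup-distance from a finite function class represented by its values on a fixed grid; all inputs are synthetic.

```python
import numpy as np

def greedy_linf_cover(values, eps):
    """Greedy eps-cover in sup-distance: values[i] holds the i-th function's values
    on a fixed grid, so the sup-distance is approximated by a max over that grid."""
    centers = []
    for i, v in enumerate(values):
        if not any(np.max(np.abs(v - values[j])) <= eps for j in centers):
            centers.append(i)          # v is not eps-close to any existing center
    return centers

# toy usage: 100 random "functions" evaluated on 50 grid points (synthetic)
rng = np.random.default_rng(0)
vals = rng.uniform(-1.0, 1.0, size=(100, 50))
cover = greedy_linf_cover(vals, eps=0.5)
print(len(cover))                      # size of the greedy cover, an upper bound on N_eps
```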
§.§ Intermediate Results for
To prove our main guarantees for (stated in the next subsection), we first state two intermediate lemmas. The first shows that for any policy π, the corresponding Q-function is the Bayes-optimal predictor for the regression problem solved in when π is executed.
Let reward functions r_1:h:×→, P∈Δ(), and ∈^t+1:h be given. Fix t∈h, and let g^P,_ denote the Bayes-optimal predictor[Observe that because this loss is strongly convex with respect to the prediction, the Bayes-optimal predictor is unique up to sets of measure zero.] for the sum of rewards under a policy π sampled from P and composed with via π∘_t∘_t+1; that is,
g^P,_∈_ g : _t ×→_π∼ P^π∘_t π_∘_t+1[( g(_t, _t) - ∑_ℓ=t^h r_ℓ(_ℓ,_ℓ) )^2].
Then, g^P,_(·,·)≡ Q^_t(·,·), where Q^_t is the Q-function defined in (<ref>) for the partial policy ∈^t+1:h and rewards r_1:h.
The least-squares solution g^P,_ of the problem in (<ref>) satisfies, for all a∈ and x∈_t,
g^P,_ (x,a) = _π∼ P^π∘_t π_∘_t+1[ . ∑_ℓ=t^h r_ℓ(_ℓ,_ℓ) | _t =x ,_t =a ],
= [ r_t(_t,_t)|_t = x,_t = a]+ _π∼ P^π∘_t π_∘_t+1[ . ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t = x, _t =a],
= r_t(x,a) +^[ . ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t = x, _t =a], (see below)
= Q_t^(x,a),
where (<ref>) follows by the fact that conditioned on (_t,_t)=(x,a), the sum of rewards ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) depend only on and not on the policy used to roll-in to layer t.
The next lemma shows that the solution t to the least-squares problem in (<ref>) of <ref> is close to the Q-function in the appropriate sense.
Let δ∈(0,1), B>0, n≥ 1, and h ∈[H] be fixed. Further, let (_, r_1:h, _1:h, P1:h) be such that
* _(n,δ)^2 = cB^2A/n (max_t∈[h]ln__t(1/n)+ln (n/δ)), where c>0 is a sufficiently large absolute constant.
* The function classes _1:h realize the reward functions r_1:h: ×→ (in the sense of <Ref>).
* The functions in _1:h are bounded in absolute value by B uniformly.
* P1,…,Ph∈Δ().
Then, for t∈[h], the solution t to the least-squares problem in (<ref>) in <ref> when invoked as (h, r_1:h, _1:h, P1:h, n) satisfies with probability at least 1-δ,
_π∼ Pt^π[ max_a∈( t(_t,a) - Q_t^t+1(_t, a) )^2 ]≤^2_(n,δ),
where t+1∈^t+1:h is defined as in <ref>.
Fix t∈[h] and abbreviate
gt_ g^Pt,t+1_,
where g^Pt,t+1 is defined as in <ref> (with P= Pt, = t+1, and reward functions r_1:h as in the lemma statement). By <ref>, gt_ is the Bayes-optimal solution to the least-squares problem in (<ref>) of <ref>. Thus, since _1:h realize the reward functions r_1:h, a standard uniform-convergence guarantee for least-square regression (see e.g. <cit.> with = 0 almost surely) implies that there exists an absolute constant c>0 (independent of t,h, and any other problem parameters) such that with probability at least 1-δ,
_π∼ Pt^π∘_tπ_∘_t+1t+1[ ( t(_t,_t) - gt_(_t,_t) )^2 ]≤ c· B^2 ·ln__t(1/n)+ln (n/δ)/n.
Since actions at layer t are taken uniformly at random, (<ref>) implies that
_π∼ Pt^π∘_tπ_∘_t+1t+1[ max_a∈( t(_t,a) - gt_(_t,a) )^2 ]≤ c· B^2A ·ln__t(1/n)+ln (n/δ)/n.
The desired result follows by observing that:
* For all (x,a)∈_t×, gt_(x,a)=Q^t+1_t(x,a), by <ref>.
* The term max_a∈( t(_t,a) - gt_(_t,a) )^2 in (<ref>) does not depend on the actions _t:h, and so the expectation _π∼ Pt^π∘_tπ_∘_t+1t+1· can be simplified to _π∼ Pt^π·.
§.§ Main Guarantee for With Non-Negative Rewards
We now state and prove the main guarantee for used within <ref>, which is stated with respect to the extended MDP defined in <ref>. This result requires non-negative rewards. For the rest of this section, we make use of the extended MDP notation and definitions introduced in <ref>. In addition, given non-negative reward functions r_1:h:×→_≥ 0, we define their extensions r̅_1:h in as
r̅_t(x,a){[ r_t(x,a), (x,a)∈_t×; 0, if x= or a=. ].
With this, we now state the guarantee of .
Let α, δ,η∈(0,1), B>0, and h∈[H] be given. Consider reward functions r_1:h: ×→_≥ 0, function classes _1:h, policy distribution P1:h, and a parameter n≥ 1 satisfying the following properties:
* The function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize the reward functions r_1:h (in the sense of <Ref> with respect to the true MDP), and all functions in _1:h have range uniformly bounded by B.
* For each 1 ≤ t ≤ h, it holds that Pt is a (α,η)-randomized policy cover relative to _η for layer t in (see <ref>).
Then, with probability at least 1 - δ, the policy = (h, r_1:h, _1:h, P1:h, n) produced by <ref> (when applied to the true MDP), satisfies the following guarantee for r̅_1:h as in (<ref>):
max_π∈_η^π[∑_t=1^hr̅_t(_t,_t)] ≤^[∑_t=1^hr̅_t(_t,_t)] + _(n,δ),
where _(n,δ) c·H √(α^-1 B^2 A n^-1· (max_t∈[h]ln__t(1/n)+ln (n/δ))) and c>0 is an absolute constant.
First, we define extensions of Q-functions to the extended MDP using the extended rewards r̅_1:h in (<ref>); for all t∈[h] and all π∈^t+1:h, define the Q-function at layer t in the extended MDP with respect to the extended rewards r̅_1:h and partial policy π:
∀ (x,a)∈_t ×, Q^π_t(x,a) r̅_t(x,a)+^π[.∑_ℓ=t+1^hr̅_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
Note that for any partial policy π∈^t+1:h that never takes the terminal action, we have
Q^π_t(x,a)= {[ Q^π_t(x,a)≥ 0, if (x,a)∈_t ×,; 0 , if x = or a = , ].
where the fact that Q^π_t(·,·)≥ 0 follows because the rewards are non-negative. Further, for the function ĝt in <ref>, we define its (clipped) extension
g̅t(x,a){[ max(0,ĝt(x,a)), if (x,a)∈_t ×,; 0 , if x = or a = . ].
To begin, we will show that for any t∈[h] and _(·,·) as in <ref>, there is an event _t of probability at least 1- δ/H under which the learned partial policies t,t+1 are such that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] ≤ 2 α^-1/2_(n,δH),
where π_⋆∈_π∈_η^π[∑_t=1^hr̅_t(_t,_t)] is the optimal policy with respect to the truncated policy set _η (definition in <ref>) and Q^π_t is the Q-function defined in (<ref>). Once we establish (<ref>) for all t∈[h], we will apply the performance difference lemma (<ref>) and the union bound to obtain the desired result.
Let π_⋆∈_π∈_η^π[∑_ℓ=1^h r̅_ℓ(_ℓ,_ℓ)]. Observe that the following properties hold:
* For all x∉_t,η(_η), π_⋆(x)= (by definition of _η); and
* For all policies π∈^t+1:h that never take the terminal action, Q^π_t(·,)≡ 0 ≤min_a∈, y∈_tQ^π_t(y,a) (see (<ref>)),
As a result, we have that for any t∈[h] and _t,η_t,η(_η),
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
≤^π_⋆[ 𝕀{_t ∈_t,η}·( Q^t+1_t(_t,π_⋆(_t)) - Q^t+1_t(_t, t(_t))) ],
=
^π_⋆[ 𝕀{_t ∈_t,η}·(Q^t+1_t(_t,π_⋆(_t))-g̅t(_t,π_⋆(_t)) + g̅t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t)))],
≤^π_⋆[ 𝕀{_t ∈_t,η}·(Q^t+1_t(_t,π_⋆(_t))-g̅t(_t,π_⋆(_t)) + g̅t(_t,t(_t))- Q^t+1_t(_t, t(_t)))],
where the last inequality follows by the facts that:
* t(x)∈_a∈t(x,a), for all x∈_t, by the definition of t in (<ref>).
* g̅t(·, )≡ 0 ≤g̅t(·, a), for all a∈, by definition of g̅t in (<ref>).
Continuing from the previous display, we have
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
≤ 2 ·^π_⋆[𝕀{_t ∈_t,η}·max_a∈| Q^t+1_t(_t,a)-g̅t(_t,a)| ],
= 2 ·^π_⋆[𝕀{_t ∈_t,η}·max_a∈| Q^t+1_t(_t,a)-g̅t(_t,a)| ], (since Q^t+1_t(·,)≡g̅t(·,)≡ 0)
≤ 2 ·√(^π_⋆[𝕀{_t ∈_t,η}·max_a∈( Q^t+1_t(_t,a)-g̅t(_t,a))^2 ]), (Jensen's inequality)
= 2 √(∫__t𝕀{x ∈_t,η}·max_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 ^(x) ν̅(x)),
≤ 2 √(α^-1∫__t𝕀{x ∈_t,η}·max_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 _π∼ Pt[^π(x)] ν̅(x)), (justified below)
≤ 2 √(α^-1_π∼ Pt[ ∫__tmax_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 ^π(x) ν̅(x)]), (Fubini's theorem)
= 2 √(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-g̅t(_t,a))^2 ]),
= 2√(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-max(0,t(_t,a)))^2 ]),
≤ 2 √(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]),
where (<ref>) follows from the fact that Pt is an (α,η)-cover relative to _η for layer t in and π_⋆∈_η, and (<ref>) follows because:
* The policies in the support of Pt never take the terminal action; and
* | Q^t+1_t(x',a')-t(x',a')| = | Q^t+1_t(x',a')-max(0,g̅t(x',a'))|, ∀ (x',a')∈_t× (see (<ref>) and (<ref>)).
Finally, (<ref>) follows by the fact that the Q-functions are non-negative (since the rewards are non-negative), and so replacing max(0,ĝt(_t,a)) by ĝt(_t,a) on the right-hand side of (<ref>) only increases the value of the latter.
Now, from <ref> and the fact that _1:h realize r_1:h, we have that for any t∈[h], there is an absolute constant c>0 (independent of t and other problem parameters) and an event _t of probability at least 1-δ/H under which the solution t to the least-squares regression problem on (<ref>) of <ref> satisfies
_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]≤_(n,δH)^2,
where _(·,·)^2 is defined as in <ref>. Combining (<ref>) with (<ref>) establishes (<ref>) under the event _t.
To conclude the proof, we note that by the performance difference lemma (<ref>), we have
^[∑_t=1^hr̅_t(_t,_t)] - ^[∑_t=1^hr̅_t(_t,_t)]
= ∑_t=1^h ^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))].
Thus, under the event ⋃_t=1^h_t, we have that
^[∑_t=1^hr̅_t(_t,_t)] - ^[∑_t=1^hr̅_t(_t,_t)] ≤ 2H α^-1/2_(n,δH).
The desired result follows from the union bound, which gives []≥ 1-δ.
Let π,∈ be policies, and assume that π never takes the terminal action. Let Q_t^π be defined as in (<ref>). Then for any h≥ 1,
^[ ∑_t = 1^h r̅_t(_t, _t) ] - ^π[ ∑_t = 1^h r̅_t(_t, _t) ] = ∑_t= 1^h ^[Q_t^π(_t, (_t)) - Q_t^π(_t, π(_t)) ].
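As a sanity check, the standard finite-horizon form of this identity can be verified numerically on a small tabular MDP; the sketch below (no terminal action, all quantities synthetic) computes both sides and confirms they agree.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 4, 3, 5
P = rng.dirichlet(np.ones(S), size=(H, S, A))    # P[t, x, a, :] = transition law at layer t
R = rng.uniform(-1, 1, size=(H, S, A))           # r_t(x, a)
rho = rng.dirichlet(np.ones(S))                  # initial state distribution
pi = rng.integers(A, size=(H, S))                # comparator policy (deterministic)
pi_hat = rng.integers(A, size=(H, S))            # "learned" policy

def q_values(policy):
    """Backward recursion for Q_t^policy under rewards R."""
    Q = np.zeros((H, S, A))
    V_next = np.zeros(S)
    for t in range(H - 1, -1, -1):
        Q[t] = R[t] + P[t] @ V_next
        V_next = Q[t, np.arange(S), policy[t]]
    return Q

def value(policy):
    d, total = rho.copy(), 0.0                   # d = state occupancy at layer t
    for t in range(H):
        a = policy[t]
        total += d @ R[t, np.arange(S), a]
        d = np.einsum('x,xs->s', d, P[t, np.arange(S), a])
    return total

Q_pi = q_values(pi)
lhs = value(pi_hat) - value(pi)
rhs, d = 0.0, rho.copy()
for t in range(H):                                # sum over layers of the advantage terms
    rhs += d @ (Q_pi[t, np.arange(S), pi_hat[t]] - Q_pi[t, np.arange(S), pi[t]])
    d = np.einsum('x,xs->s', d, P[t, np.arange(S), pi_hat[t]])
print(np.isclose(lhs, rhs))    # True: both sides of the identity agree
```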
§.§ Main Guarantee for With Signed Rewards
We now state and prove a guarantee for in the true MDP , when invoked with signed rewards. We make use of the following lemma, which bounds the total probability mass for the set of states that are not reachable with sufficiently high probability.
For any t∈[H], it holds that
sup_π∈^π[_t ∈_t ∖_t,η()] ≤η· d^3/2.
Fix t∈ [H]. By definition of _t,η(), we have that
∀ x∈_t ∖_t,η(), sup_π∈ d^π(x) ≤η·μ^⋆_t(x).
Thus, integrating over x∈_t ∖_t,η(), we obtain
sup_π∈^π[_t ∈_t ∖_t,η()] = sup_π∈∫__t ∖_t,η() d^π(x) ν(x),
= η·∫__t ∖_t,η()μ^⋆_t(x)ν(x), (by (<ref>))
≤η·∫__tμ^⋆_t(x)ν(x),
≤η d^3/2,
where the last inequality follows by <ref>; this is a consequence of the normalization assumption (<ref>).
With this, we now state the guarantee of .
Let α, δ,∈(0,1), B,B_1:h>0, and h∈[H] be given. Consider reward functions r_1: _1×→ [-B_1,B_1],…,r_h: _h×→ [-B_h,B_h], function classes _1:h, distributions over policies P1:h, and a parameter n≥ 1 satisfying the following properties:
* The function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize the reward functions r_1:h (in the sense of <Ref>), and all functions in _1:h have range uniformly bounded by B.
* For each 1 ≤ t ≤ h, it holds that Pt is a (α,)-randomized policy cover for layer t (see <ref>).
Then, with probability at least 1 - δ, the policy = (h, r_1:h, _1:h, P1:h, n) produced by <ref> satisfies the following guarantee:
max_π∈^π[∑_t=1^hr_t(_t,_t)] ≤^[∑_t=1^hr_t(_t,_t)] + _(n,δ) + 2 h d^3/2·∑_t=1^h B_t,
where _(n,δ) c·H √(α^-1 B^2 A n^-1· (max_t∈[h]ln__t(1/n)+ln (n/δ))) and c>0 is an absolute constant.
First, we define the Q-functions for the reward r_1:h; for all t∈[h] and all π∈^t+1:h, define the Q-function at layer t with respect to the rewards r_1:h and partial policy π:
∀ (x,a)∈_t ×, Q^π_t(x,a) r_t(x,a)+^π[.∑_ℓ=t+1^hr_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
To begin, we will show that for any t∈[h] and _(·,·) as in <ref>, there is an event _t of probability at least 1- δ/H under which the learned partial policies t,t+1 are such that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] ≤ 2 α^-1/2_(n,δH) + 2 d^3/2·∑_ℓ=1^h B_ℓ,
where π_⋆∈_π∈^π[∑_t=1^h r_t(_t,_t)] is the optimal policy. Once we establish (<ref>) for all t∈[h], we will apply the performance difference lemma (<ref> instantiated in the true MDP) and the union bound to obtain the desired result.
Let π_⋆∈_π∈^π[∑_ℓ=1^h r_ℓ(_ℓ,_ℓ)]. We have that for any t∈[h] and _t,_t,(),
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
= ^π_⋆[𝕀{_t ∈_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ]
+ ^π_⋆[𝕀{_t ∈_t ∖_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ].
We now bound the last term in (<ref>). Note that by the range assumption on the rewards r_1:h and the definition of the Q-function, we have Q^π_t(x,a)∈ [-∑_ℓ=t^h B_ℓ, ∑_ℓ=t^h B_ℓ], for all π∈^t+1:h. Thus, we have
^π_⋆[𝕀{_t ∈_t ∖_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ] ≤ 2^π_⋆[_t ∈_t ∖_t,] ·∑_ℓ=t^h B_ℓ,
≤2 · d^3/2·∑_ℓ=1^h B_ℓ,
where the last inequality follows by <ref>.
Plugging (<ref>) into (<ref>) and using that B_1:h≥ 0 implies that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] - 2 d^3/2·∑_ℓ=1^h B_ℓ
≤^π_⋆[ 𝕀{_t ∈_t,}·( Q^t+1_t(_t,π_⋆(_t)) - Q^t+1_t(_t, t(_t))) ],
=
^π_⋆[ 𝕀{_t ∈_t,}·(Q^t+1_t(_t,π_⋆(_t))-ĝt(_t,π_⋆(_t)) + ĝt(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t)))],
≤^π_⋆[ 𝕀{_t ∈_t,}·(Q^t+1_t(_t,π_⋆(_t))-ĝt(_t,π_⋆(_t)) + ĝt(_t,t(_t))- Q^t+1_t(_t, t(_t)))],
where the last inequality follows by the fact that t(x)∈_a∈t(x,a), for all x∈_t, by the definition of t in (<ref>). Continuing from the previous display, we have
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] - 2 d^3/2·∑_ℓ=1^h B_ℓ
≤ 2 ·^π_⋆[𝕀{_t ∈_t,}·max_a∈| Q^t+1_t(_t,a)-ĝt(_t,a)| ],
≤ 2 ·√(^π_⋆[𝕀{_t ∈_t,}·max_a∈( Q^t+1_t(_t,a)-ĝt(_t,a))^2 ]), (Jensen's inequality)
= 2 √(∫__t𝕀{x ∈_t,}·max_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 d^(x) ν(x)),
≤ 2 √(1/α∫__t𝕀{x ∈_t,}·max_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 _π∼ Pt[d^π(x)] ν(x)), (justified below)
≤ 2 √(1/α_π∼ Pt[ ∫__tmax_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 d^π(x) ν(x)]), (Fubini's theorem)
= 2√(1/α·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]),
where (<ref>) follows from the fact that Pt is an (α,)-randomized policy cover for layer t.
Now, from <ref> and the fact that _1:h realize r_1:h, we have that for any t∈[h], there is an absolute constant c>0 (independent of t and other problem parameters) and an event _t of probability at least 1-δ/H under which the solution t to the least-squares regression problem on (<ref>) of <ref> satisfies
_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]≤_(n,δH)^2,
where _(·,·)^2 is defined as in <ref>. Combining (<ref>) with (<ref>) establishes (<ref>) under the event _t.
To conclude the proof, we note that by the performance difference lemma (<ref>), we have
^[∑_t=1^h r_t(_t,_t)] - ^[∑_t=1^h r_t(_t,_t)]
= ∑_t=1^h ^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))].
Thus, under the event ⋃_t=1^h_t, we have that
^[∑_t=1^h r_t(_t,_t)] - ^[∑_t=1^h r_t(_t,_t)] ≤ 2 H α^-1/2_(n,δH) +2 hd^3/2·∑_t=1^h B_t.
The desired result follows from the union bound, which gives []≥ 1-δ.
§ APPLICATION TO REWARD-BASED RL
In this section, we show how the output P1:H of (<ref>), which is a (η^3/· d^6 A^2, )-policy cover for η = /(4 H d^3/2) and = (A,H,d, log(|Φ|/δ)) sufficiently large (see <ref>), can be used to optimize downstream reward functions r_1:H; our treatment also applies to (for Ph(Ψh) for all h∈[H]). Since the output of is a randomized policy cover, one way to optimize the sum of rewards S_H ∑_h=1^H r_h is by first generating trajectories using policies in P1:H, then applying an offline RL algorithm, e.g. Fitted Q-Iteration () <cit.>, to optimize S_H. It is also possible to use with the randomized policy cover P1:H to achieve the same goal. We will showcase the latter approach, since we can make use of the guarantees for given in <ref>.
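To make the reduction concrete, the following Python sketch (illustrative only; the toy MDP, one-hot features, roll-in scheme, and sample sizes are assumptions of the example, not part of the analysis) shows the dynamic-programming pattern used here: roll in with a policy drawn from the cover, randomize the action at the current layer, roll out with the partial policy learned so far, and regress the observed return.

import numpy as np

rng = np.random.default_rng(0)
H, S, A = 3, 4, 2                                   # horizon, #states, #actions (toy sizes)
trans = rng.dirichlet(np.ones(S), size=(H, S, A))   # toy transition kernels T_t(x'|x,a)
rew = rng.uniform(-1, 1, size=(H, S, A))            # toy rewards r_t(x,a)

def step(t, x, a):
    return rng.choice(S, p=trans[t, x, a])

def phi(x, a):                                      # one-hot features (stand-in for a learned map)
    v = np.zeros(S * A); v[x * A + a] = 1.0; return v

def rollout(partial_pi, t, x, a):
    """Return sum_{l=t}^{H-1} r_l after taking a in x at layer t, then following partial_pi."""
    total = rew[t, x, a]
    for l in range(t + 1, H):
        x = step(l - 1, x, a)
        a = partial_pi[l][x]
        total += rew[l, x, a]
    return total

cover = [lambda x: rng.integers(A) for _ in range(H)]   # placeholder randomized policy cover
pi_hat, n = {}, 2000
for t in reversed(range(H)):
    X, y = [], []
    for _ in range(n):
        x = 0                                       # roll in from x_0 with a policy from the cover
        for l in range(t):
            x = step(l, x, cover[l](x))
        a = rng.integers(A)                         # randomize the action at layer t
        X.append(phi(x, a)); y.append(rollout(pi_hat, t, x, a))
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    pi_hat[t] = np.array([int(np.argmax([phi(x, a) @ w for a in range(A)])) for x in range(S)])

print("greedy policy at layer 0:", pi_hat[0])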
As in <ref>, we assume access to a function class _1:H, where _h ⊆{g: _h×→} for each h∈[H], that realize the rewards r_1:H in the following sense: for all h∈[H] and all π∈^h+1:H,
Q_h^π∈_h, where Q^π_h(x,a) r_h(x,a)+^π[.∑_t=h+1^H r_t(_t,_t) | _h=x,_h=a].
Note that when the reward functions r_1:H are linear in the feature map ; that is, when for all h∈[H] and (x,a)∈_h×,
r_h(x,a)=θ_h^⊤(x,a)
for some θ_h∈(1) (this is a common assumption in the context of RL in Low-Rank MDPs <cit.>), then the function classes _1:H, where
∀ h∈[H], _h = {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ , w ∈(2H√(d))},
realize r_1:H. We show this claim next.
Under <ref>, the function classes _1:H in (<ref>) realize the reward functions in (<ref>). Furthermore, the functions in _1:H are uniformly bounded by 2√(d)H, and ln__h()≤ln |Φ|+ d ln (2√(d)H /), for all h∈[H], where we recall that _() denotes the -covering number of in ℓ_∞-distance (see <ref>).
For h=H, we clearly have that for any π∈^H:H, Q^π_H(·,·)=r_H(·,·)∈_H. For h<H and π∈^h+1:H, we have, by the low-rank MDP structure and the expression of the rewards in (<ref>), that
Q^π_h(x,a) =r_h(_h,_h)+∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·ϕ^⋆_h(x,a)^⊤μ_h+1^⋆(y) ν (y),
= ϕ^⋆_h(x,a)^⊤( θ_h + ∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·μ_h+1^⋆(y) ν (y)).
Now, by the fact that ^π[∑_t=h+1^H r_t(_t,_t)|_h+1=y,_h+1=π(y)] ∈ [-H-h,H-h], for all y∈_h+1 (since the rewards take values between -1 and 1 thanks to ϕ(·,·),θ_h∈(1), for all h∈[H]), and the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_h+1→0,1, *∫__h+1[h+1](y)g(y) ν(y)≤√(d)), we have that
w_h θ_h+∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·μ_h+1^⋆(y) ν (y) ∈(2H√(d)).
This, together with (<ref>) and the fact that [h]∈Φ (by <ref>), implies that Q_h^π∈_h. The bound on the covering number __h() follows from a standard bound on the covering number of the ball (2H√(d)) <cit.>.
Combining <Ref> with <Ref> results in the following guarantee for .
Let α,,δ∈(0,1) be given and fix h∈[H]. Let be the output of when given input (H, r_1:H, _1:H, P1:H, n), where
* The reward functions r_1:H are as in (<ref>), with θ_1:H∈(1)
* The function classes _1:H are as in (<ref>).
* For each 1≤ h≤ H, it holds that Ph is a (α,)-randomized policy cover for layer h (see <ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈^π[∑_h=1^H r_h(_h,_h)]≤^[∑_h=1^H r_h(_h,_h)] + c H^2 √(d A · (d log(2n √(d)H) +ln (n|Φ|/δ)) /α n ) + 2 H^2 d^3/2,
for a sufficiently large absolute constant c>0.
By using that the distributions returned by are an (η^3/· d^6 A^2, )-policy cover for η = /(4 H d^3/2) and = (A,H,d, log(|Φ|/δ)) sufficiently large (<ref>), we obtain the claimed sample complexity for <ref> in <ref>.
§ STRUCTURAL RESULTS FOR EXTENDED LOW-RANK MDP
In this section, we present some structural results involving the extended MDP and truncated policy class defined in <ref>. First, we recall the definition of the truncated policy class. Given a parameter η>0, let _0,η, and for each h≥ 1, let _h, η be the set of policies defined by
π∈_h,η∃π'∈_h-1,η : ∀ t ∈[H], ∀ x ∈_t, π(x) = {[ π'(x), if t=h and x ∈_h,η(_h-1,η),; , otherwise, ].
where for a set of policies Π'⊆, we let
_h, η(Π') {x∈_h | max_π∈Π'^π(x) ≥μ̅_h^⋆(x)·η. }.
Note that this matches the definition in (<ref>) because [μ̅^⋆_h(x)]_d+1=0, for all x≠_h. Finally, we let _η_H,η.
The next lemma bounds the probability of the set of states that are not reachable with sufficiently high probability.
Under the normalization assumption (<ref>), we have that for any t∈[H],
sup_π∈_η^π[_t ∈_t ∖_t,η(_η)] ≤η· d^3/2.
Fix t∈ [H]. By definition of _t,η(_η), we have that
∀ x∈_t ∖_t,η(_η), sup_π∈_η^π(x) ≤η·^⋆_t(x).
Thus, integrating over x∈_t ∖_t,η(_η), we obtain
sup_π∈_η^π[_t ∈_t ∖_t,η(_η)] = sup_π∈_η∫__t ∖_t,η(_η)^π(x) (x),
= η·∫__t ∖_t,η(_η)μ̅^⋆_t(x)(x), (by (<ref>))
≤η·∫__tμ̅^⋆_t(x)ν̅(x),
= η·∫__tμ^⋆_t(x)ν(x), (since [_t(x)]_d+1=0, ∀ x ≠_t)
≤η d^3/2,
where the last inequality follows by <ref>; this is a consequence of the normalization assumption (<ref>).
The next lemma generalizes <cit.> to the extended MDP setting.
For all h ∈[H], x∈_h, and ℓ∈[h H], we have max_π∈_ℓ-1,η(x)= max_π∈_ℓ,η(x). Further,
∀ x∈_h, max_π∈_h-1, η^π(x) = max_π∈_η^π(x) .
We will show that for all ℓ∈[hH],
∀ x∈_h, max_π∈_ℓ-1,η(x)= max_π∈_ℓ,η(x).
This implies (<ref>) by summing both sides of (<ref>) over ℓ=h,…, H, telescoping, and using that _η=_H, η. To prove the result, let ℓ∈[hH], x∈_h, and π̃∈_π'∈_ℓ-1,η^π'(x). Further, let π∈_ℓ, η be as in (<ref>) with π'=π̃. In this case, by (<ref>), we have π̃(x')=π(x'), for all x'∈_τ, and τ≤ [ℓ-1]. Using this and the fact that x∈_h and ℓ≥ h, we have
max_π̆∈_ℓ-1,η^π̆(x) =^π̃(x)= ^π(x) ≤max_π̆∈_ℓ, η^π̆(x).
We now show the inequality in the other direction. Let ℓ∈[hH], x∈_h, and π̃∈_π̆∈_ℓ,η^π̆(x). Further, let π'∈_ℓ-1, η be as in (<ref>) for π = π̃. In this case, by (<ref>), we have π̃(x)=π'(x), for all τ∈ [ℓ-1]. Using this and the fact that x∈_h and ℓ≥ h, we have
max_π̆∈_ℓ,η^π̆(x) =^π̃(x)= ^π'(x) ≤max_π̆∈_ℓ-1, η^π̆(x).
This shows (<ref>) and completes the proof.
Using <ref> and the definition of _h,η(·) in (<ref>), we obtain the following corollary.
For all h∈[H], it holds that
_h,η(_h-1,η) = _h,η(_η).
The next lemma quantifies the “cost of truncation” incurred by optimizing reward functions using policies in the truncated class _η instead of the full policy class.
Let η∈(0,1), and B_1:H>0, and consider reward functions r_1: _1×→ [-B_1,B_1],…,r_H: _H×→ [-B_H,B_H]. We have
sup_π∈_η^π[ ∑_h=1^H r̅_h(_h,_h) ] ≥sup_π∈^π[ ∑_h=1^H r̅_h(_h,_h) ] - 2 H d^3/2η∑_h=1^H B_h,
where, for each h∈[H], r̅_h(x,a)=r_h(x,a) for all (x,a)∈_h×, and r̅_h(x,a)=0 when x=_h or a=.
Let r̅_1:H be the “extended” reward functions as in the lemma's statement. Let h∈[H] and π_h-1∈_π∈_h-1,η^π[∑_h=1^H r̅_h(_h,_h)]. Further, define π_h as π∈_h,η in (<ref>) with π'=π_h-1. Note that since for all t∈[h-1] and x∈_t, π_h(x)=π_h-1(x) (by (<ref>)), we have
^π_h-1[∑_t=1^h-1r̅_t(_t,_t)] = ^π_h[∑_t=1^h-1r̅_t(_t,_t)].
On the other hand, for _h,η_h,η(_h-1,η) we have
^π_h-1[∑_t=h^H r̅_t(_t,_t)]
= ^π_h-1[∑_t=h^H r̅_t(_t,_t)],
= ^π_h-1[ 𝕀{_h ∈_h,η}·∑_t=h^H r̅_t(_t,_t)]+ ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] ,
= ^π_h[ 𝕀{_h ∈_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] , (by definition of _h,η and π_h)
= ^π_h[ ∑_t=h^H r̅_t(_t,_t)] - ^π_h[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)],
= ^π_h[ ∑_t=h^H r̅_t(_t,_t)] - ^π_h[ 𝕀{_h ∈_h∖_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∈_h∖_h,η}·∑_t=h^H r̅_t(_t,_t)],
where the last equality follows by the fact that I) if _h =_h, then _t=_t for all t∈ [h H], and II) r̅_t(,·)≡ 0, for all t∈ [h … H]. Now, using the range assumption on the rewards, we get
^π_h-1[∑_t=h^H r̅_t(_t,_t)] ≤^π_h[ ∑_t=h^H r̅_t(_t,_t)] +(^π_h[_h ∈_h ∖_h,η] + ^π_h-1[_h ∈_h ∖_h,η]) ∑_t=h^H B_t.
On the other hand, by <ref> and the fact that π_h-1∈_h-1,η and π_h∈_h,η, we have that
^π_h-1[_h ∈_h ∖_h,η] ∨^π_h[_h ∈_h ∖_h,η]≤sup_π∈_η^π[_h ∈_h ∖_h,η].
Furthermore, by <ref>, we have _h,η = _h,η(_η). Combining this with (<ref>) and <ref>, we get
^π_h-1[_h ∈_h ∖_h,η] ∨^π_h[_h ∈_h ∖_h,η]≤sup_π∈_η^π[_h ∈_h ∖_h,η(_η)] ≤η d^3/2.
Plugging this into (<ref>) and using (<ref>) implies that
^π_h-1[∑_t=h^H r̅_t(_t,_t)] ≤^π_h[ ∑_t=h^H r̅_t(_t,_t)]+ 2 η d^3/2∑_h=1^H B_h.
Summing both sides of (<ref>) for h=1,…, H, telescoping, and using that _0,η= and _H,η= _η, we get
max_π∈^π[∑_t=1^Hr̅_t(_t,_t)] ≤max_π∈_η^π[∑_t=1^Hr̅_t(_t,_t)] + 2H η d^3/2∑_h=1^H B_h.
Using this, we now prove <ref>, which allows us to transfer any guarantees in the extended MDP and truncated policies _η back to the original MDP with the unrestricted policy class .
Fix h∈[H], and let y∈_h be such that μ_h^⋆(y)>0. To prove <ref>, we will instantiate <ref> with rewards (r_t) given by
r_t(x,a) = {[ μ_h^⋆(y)^⊤/μ_h^⋆(y)ϕ^⋆_h-1(x,a), if t=h and (x,a)∈_h×,; 0, otherwise. ].
We define the extended rewards (r̅_t) such that for all t∈[H], r̅_t(x,a)=r_t(x,a) for all (x,a)∈_t×, and r̅_t(x,a)=0 when x=_t or a=. By applying <ref> (with B_h =1 and B_t=0 for all t≠ h) and using that |r_h(·,·)|≤ 1 (since ϕ^⋆_h-1(·, ·)≤ 1), we get
max_π∈^π[∑_t=1^Hr̅_t(_t,_t)] ≤max_π∈_η^π[∑_t=1^Hr̅_t(_t,_t)] + 2H η d^3/2.
On the other hand, the definition of (r_t) implies that for any π∈,
^π[∑_t=1^Hr̅_t(_t,_t)] = μ_h^⋆(y)^⊤/μ_h^⋆(y)ϕ̃^⋆,π_h-1,
where ϕ̃^⋆,π_h-1^π[ϕ̃^⋆_h-1(_h-1,_h-1)] and ϕ̃^⋆_h-1 is the restriction of ^⋆_h-1 to its first d coordinates (^⋆_h-1 is defined in <ref>). Now, since y≠_h, we have [μ̅_h^⋆(y)]_d+1=0, and so μ^⋆_h(y)^⊤ϕ̃^⋆,π_h-1= ^⋆_h(y)^⊤ϕ̅^⋆, π_h-1. Thus, plugging this into (<ref>) and using <ref>, we get
∀π∈, ^π[∑_t=1^Hr̅_t(_t,_t)] = _h^⋆(y)^⊤/μ_h^⋆(y)ϕ̅^⋆,π_h-1= ^π(y)/μ^⋆_h(y).
Plugging this into (<ref>) and using that ⊆, we have
max_π∈d^π(y)/μ^⋆_h(y) =max_π∈^π(y)/μ^⋆_h(y)≤max_π∈^π(y)/μ^⋆_h(y)≤max_π∈_η^π(y)/μ^⋆_h(y) + 2Hη d^3/2.
Now, suppose that y is such that max_π∈d^π(y)/μ^⋆_h(y)≥ 4 H η d^3/2. By (<ref>), this implies that
max_π∈_η^π(y)/μ^⋆_h(y)≥ 2H η d^3/2≥η,
and so since P is a (α,η)-randomized policy cover relative to _η for layer t in , we have that
max_π∈_η^π(y)/μ^⋆_h(y)≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)].
Combining this with (<ref>) implies that
max_π∈d^π(y)/μ^⋆_h(y) ≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)] + 2Hη d^3/2,
≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)] +1/2max_π∈d^π(y)/μ^⋆_h(y),
where the last inequality follows by the fact that y is such that max_π∈d^π(y)/μ^⋆_h(y)≥ 4 H η d^3/2. Rearranging the previous display and using that ^π(·)≡ d^π(·) for all policies π that never take the terminal action, we get:
α/2max_π∈d^π(y)/μ^⋆_h(y)≤_π∼ P^π[d^π(y)/μ^⋆_h(y)].
This shows that P is a (α/2, 4 Hη d^3/2)-randomized policy cover.
§ HELPER LEMMAS
For any h∈[2 H], x∈_h, and π∈, we have
d^π(x) = [h](x)^⊤ϕ^⋆, π_h-1, where ϕ^⋆, π_h-1^π[ϕ^⋆_h-1(_h-1,_h-1)],
Let δ∈(0,1) and H≥ 1 be given. If a sequence of events _1,…,_H satisfies [_h|_1,…,_h-1]≥1-δ/H for all h∈[H], then
[_1:H]≥1-δ.
By the chain rule, we have
[_1:H] = ∏_h∈[H][_h|_1,…,_h-1] ≥∏_h∈[H] (1-δ/H) =(1-δ/H)^H ≥ 1-δ.
The normalization assumption in (<ref>) has the following useful implication.
For any h∈[H], if the normalization condition (<ref>) holds, then
∫__hμ^⋆_h(x)ν(x) ≤ d^3/2.
For each i∈[d], if we define g(x)sgn([μ^⋆_h(x)]_i), we have
∫__h |[μ^⋆_h(x)]_i| ν (x) = ∫__h g(x) · [μ^⋆_h(x)]_i ν (x),
= √((∫__h g(x) · [μ^⋆_h(x)]_i ν (x))^2),
≤√(∑_j∈[d](∫__h g(x) · [μ^⋆_h(x)]_j ν (x))^2),
= ∫__h g(x) ·μ^⋆_h(x)ν(x) ,
≤√(d).
Therefore, we have
∫__hμ^⋆_h(x)ν (x)≤∑_i∈[d]∫__h |[μ^⋆_h(x)]_i| ν (x)≤ d^3/2.
Next, we show that the coverability constant <cit.> for low-rank MDPs is bounded by d.
For all h∈[H], there exists a measure ρ_h on _h × such that
sup_(x,a)∈_h×sup_π∈d^π(x,a)/ρ_h(x,a)≤ d.
Consider layer h+1. By definition for x ∈_h+1, we have that for any
π, d^π(x) = ^π[
μ_h+1^⋆(x)^⊤ϕ_h^⋆(_h, _h)]=μ_h+1^⋆(x)^⊤ϕ_h^⋆, π. Let
Ψ{π_1, …, π_d} be a barycentric
spanner for the set {ϕ^⋆, π_h |π∈} (see <ref>). Let
π_x denote the policy maximizing d^π(x) (if no such
maximizer exists, we may pass to a maximizing sequence). By definition of a barycentric spanner, there exist β_1, …, β_d ∈ [-1, 1] such that ϕ_h^⋆, π_x = ∑_i=1^d β_i ϕ_h^⋆, π_i, and so
d^π_x(x) = ∑_i = 1^d β_i
μ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i≤∑_i = 1^d *β_iμ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i
≤ d ·∑_i = 1^d 1/dμ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i
=d ·∑_i = 1^d 1/d
d^π_i(x),
where we have used that μ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i is non-negative.
Thus, by defining ρ_h+11/d∑_i=1^d d^π_i, we obtain the desired result.
Let >0, and B>0 be given. Fix h∈[H] and consider
a sequence of policies π1:K∈ and functions δ1:K:_h×→ [-B,B] such that for all k∈ [2 K],
^k-1[ δk(_h,_h)^2 ] ≤^2, where k-11/k-1∑_ℓ=1^k-1πℓ. Then min_k∈[K]^πk[ δk(_h,_h) ] ≤√(2 d ln K) + 2 d B K^-1.
Define k-1(·,·) ^k-1[d^π(·,·)], if k∈[2 K],
and k-1(·,·)≡ 0 if k=1. Further, let
ρ̃k(·,·) d/kρ_h(·,·), where
ρ_h(x,a) is as in <ref>. Finally, for any (x,a)∈_h ×, we define the “burn-in” index
τ_h(x,a) min{ k ∈[K] |d̅k-1(x,a) > (k-1) · d ·ρ_h(x,a) },
and note that τ_h(·,·)>1. Since the coverability constant is bounded by d in low-rank MDPs (see <ref>), we have the following facts, which follow from the derivations in <cit.>:
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) > k }·δk(_h,_h)] ≤ 2d B,
∀ (x,a)∈_h ×,∀ k≥τ_h(x,a), d̅k-1(x,a) + ρ̃k(x,a) ≤ 2d̅k-1(x,a).
With this, we have
∑_k=1^K ^πk[ δk(_h,_h) ]
= ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) > k }·δk(_h,_h)] + ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ,
≤ 2 d B + ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ,
where the last inequality uses (<ref>). We now bound the second term on the right-hand side of (<ref>). We have
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)]
=∑_k=1^K ∑_(x,a)∈_h×dk(x,a) δk(x,a) ·𝕀{τ_h(x,a) ≤ k } ,
=∑_k=2^K ∑_(x,a)∈_h×dk(x,a) δk(x,a) ·𝕀{τ_h(x,a) ≤ k }, (since τ_h(·,·)>1)
= ∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)(k-1(x,a)/k-1(x,a))^1/2δk(x,a)·𝕀{τ_h(x,a) ≤ k } ,
≤√(∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)^2 ·𝕀{τ_h(x,a) ≤ k }/k-1(x,a))·√(∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) ·δk(x,a)^2), (Cauchy Schwarz)
≤√(∑_k=2^K ∑_(x,a)∈_h×2d^πk(x,a)^2/k-1(x,a) + ρ̃k(x,a))·√(∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) ·δk(x,a)^2),
where the last step follows by (<ref>). For the second term in (<ref>), we have
∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) δk(x,a)^2 ≤ K ^2,
by (<ref>).
On the other hand, for the first term on the right-hand side of (<ref>), we have
∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)^2/k-1(x,a) + ρ̃k(x,a) ≤∑_k=2^K ∑_(x,a)∈_h×max_ℓ∈ [K] d^πℓ(x,a)d^πk(x,a)/k-1(x,a) + ρ̃k(x,a) ,
≤∑_k=2^K ∑_(x,a)∈_h× d ρ_h(x,a)d^πk(x,a)/k-1(x,a) + ρ̃k(x,a),
≤∑_k=1^K ∑_(x,a)∈_h×d ρ_h(x,a)k · d^πk(x,a)/∑_ℓ∈[k-1] d^πℓ(x,a) + dρ_h(x,a),
≤ K d∑_(x,a)∈_h×ρ_h(x,a) ln K,
=K dln K,
where (<ref>) follows by <ref>
and <cit.>. Plugging (<ref>)
and (<ref>) into (<ref>), we get that
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ≤ K √(2 d ln K).
Combining this with (<ref>), we get
K ·min_k∈[K]^πk[ δk(_h,_h) ] ≤∑_k=1^K ^πk[ δk(_h,_h) ] ≤ K √(2 d ln K) + 2 d B.
This implies the desired result.
The following is a restatement of Theorem 2.2 in <cit.>.
Let A, E∈^d× d. If A is non-singular and r := A^-1E_< 1, then A+E is non-singular and (A+E)^-1- A^-1_≤E_A^-1^2_/(1-r).
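As an illustration (not part of the proof), the bound can be checked numerically; the choice of the spectral norm, the dimension, and the perturbation size below are assumptions of the example.

import numpy as np

rng = np.random.default_rng(1)
d = 5
A = rng.standard_normal((d, d)) + d * np.eye(d)          # a well-conditioned matrix
E = 1e-2 * rng.standard_normal((d, d))                   # a small perturbation

Ainv = np.linalg.inv(A)
r = np.linalg.norm(Ainv @ E, 2)
assert r < 1.0
lhs = np.linalg.norm(np.linalg.inv(A + E) - Ainv, 2)
rhs = np.linalg.norm(E, 2) * np.linalg.norm(Ainv, 2) ** 2 / (1.0 - r)
print(f"lhs = {lhs:.3e}  <=  rhs = {rhs:.3e}: {lhs <= rhs}")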
PART:
Analysis of
§ ORGANIZATION OF THIS PART
<ref> of the appendix contains the proof
of <ref>, the guarantee for <ref>. This
section is organized as follows:
* In <ref>, we give an overview of (<ref>) and highlight its key differences to (<ref>).
* <ref> contains the proof of <ref>.
* <ref>, provides generic guarantees
for the subroutine of ,
which are used within the proof of <ref>.
* Finally, <ref> compares the
reachability assumption used in the analysis of
to other notions used throughout the literature on RL in Low-Rank MDPs.
We note that the analysis of <ref> in <ref> also makes use of the guarantee of from <ref> in <ref>.
§ : ALGORITHM OVERVIEW
The algorithm is presented in <ref>. The
algorithm proceeds by building a policy cover layer-by-layer in an
inductive fashion. The structure of the algorithm is similar to that
of , with the main difference being that instead of computing
an optimal design, the algorithm computes a barycentric spanner
for the feature map.
In more detail, for each layer h≥2, uses a policy cover
Ψh built at a previous iteration within the
(<ref>) subroutine to produce a
feature map h that approximates . Using this feature map, the algorithm invokes a second subroutine, (<ref>) to produce a collection of policies π_1,…,π_d that act as a barycentric spanner for the
feature map, ensuring maximal coverage; given these policies, a new policy cover for layer h+2 is formed via Ψh+2={π_i∘_h+1π_ : i∈[d] }. To invoke the
subroutine, makes use of for policy optimization and
(<ref>) for estimation of vector-valued
functionals. Compared to , there is no inner loop (i.e.,
K=1); this is facilitated by the reachability assumption.
In what follows, we expand on the main differences between the two algorithms, focusing on the role of barycentric spanners.
Barycentric spanners
uses the notion of a barycentric spanner
<cit.> as an efficient basis for exploration. We
define a barycentric spanner for an abstract set as follows
Given a set ⊂^d such that () = ^d, we say that a set { w_1, …, w_d }⊆ is a (C, )-approximate barycentric spanner for if for every w ∈, there exist β_1, …, β_d ∈ [-C, C] such that w - ∑_i = 1^d β_i w_i≤.[Note that our definition is a slight generalization of <cit.>; the latter is recovered with = 0.]
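To make the definition concrete, the following snippet (illustrative only; the finite set V and the greedy choice of candidate vectors are assumptions of the example) recovers the coefficients β for each element of a finite set and reports their largest magnitude, which is at most C exactly when the chosen vectors form a (C,0)-approximate spanner.

import numpy as np

rng = np.random.default_rng(2)
d, C = 3, 2.0
V = rng.standard_normal((50, d))
V /= np.maximum(1.0, np.linalg.norm(V, axis=1, keepdims=True))   # a finite set in B(1)

# greedily pick d vectors with large residual norm as candidate spanner columns
idx = [int(np.argmax(np.linalg.norm(V, axis=1)))]
for _ in range(d - 1):
    proj = V - V @ np.linalg.pinv(V[idx]) @ V[idx]       # residual after projecting on chosen rows
    idx.append(int(np.argmax(np.linalg.norm(proj, axis=1))))
W = V[idx].T                                             # candidate spanner, one vector per column

betas = np.linalg.solve(W, V.T)                          # coefficients beta for every w in V
print("largest |beta_i| over the set:", float(np.abs(betas).max()))
# the candidates form a (C,0)-approximate spanner exactly when this value is <= C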
The following result shows that for Low-Rank MDPs, barycentric spanners
offer a compact representation for policy covers.
Suppose <ref> holds with η>0. If Ψ⊆ is a collection of policies such that {^π[
(_h, _h) ]|π∈Ψ}⊆^d is a (C,
)-approximate barycentric spanner for _h{^π[
(_h, _h) ]|π∈} with ≤η/2, then Ψ is an (α,0)-policy cover for layer h+1 with α = (2dC)^-1.
<Ref>, proven in <Ref>, shows that to compute a policy
cover for layer h+1, it suffices to find a barycentric spanner for the
set _h{^π[
(_h, _h) ]|π∈}⊆^d. Similar to the approach to optimal design computation in
, we show that barycentric spanner computation can be
efficiently reduced
to policy optimization:
* Using, , a novel adaptation of the classical algorithm of
<cit.>, it holds that for any ϕ∈Φ,
spanner computation for the set {^π[
ϕ(_h, _h) ]|π∈} can be performed efficiently whenever, for any θ∈(1), one can (approximately) solve linear optimization problems of the form
_π∈^π*θ^ϕ(_h,_h).
* Given access to policy covers Ψ1:h for layers 1 to h, one can efficiently solve the optimization problem in (<ref>) by
appealing to (<ref>).
To handle the fact that is unknown, <ref> computes policies π_1:d that induce a barycentric spanner for the set {^π[
h(_h, _h) ]|π∈}, where
h∈Φ is an estimated feature map produced by
. In what follows, we give a detailed overview of how the
subroutine achieves efficient spanner computation.
Barycentric spanner computation via approximate linear optimization
We consider an abstract framework for
barycentric spanner computation, which generalizes the problem faced
within . Suppose that we wish
to compute a spanner for an implicitly specified set
=*w^z_z∈⊆^d indexed by an abstract set
.
To allow for efficient spanner computation without resorting to
enumeration over the set , we assume access to two
oracles for the set , a linear optimization oracle :(1)→ and
an index-to-vector oracle :→^d. We assume that for some >0:
* For all θ∈^d with *θ=1, the output
ẑ_θ(θ) satisfies
θ^⊤w^ẑ_θ≥sup_z∈θ^⊤ w^z -.
* For all z∈, the output ŵ_z(z)
satisfies
ŵ_z - w^z≤.
The algorithm
(<ref>) computes a (C,)-approximate spanner for
using
(dlog(d/)) total calls to and . is an error-tolerant variant of the classical spanner computation algorithm of
<cit.>, which was originally introduced and
analyzed for
spanner computation with an exact linear optimization
oracle. Tolerance to approximation errors in the linear optimization oracle
is critical for our application to RL, where additive
errors will arise in sampling trajectories, as well as estimating
the feature maps ()_h∈[H]. achieves error tolerance by
perturbing the vectors returned by (θ) in the direction of
θ, which amounts to running the classical algorithm on an -fattening of , and is necessary in order to ensure that the approximation error of does not swamp the signal in directions θ in which is too “skinny.” This technique may be of independent interest; see <ref>
for additional details and formal guarantees.
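The following Python sketch mirrors this scheme (it is an illustration, not the paper's implementation): the two oracles are instantiated exactly on a synthetic finite set of vectors, the cofactor vector θ_i plays the role of the linear functional det(·, M_-i), and the vectors returned by the optimization oracle are perturbed by ε in the direction of θ_i as described above.

import numpy as np

rng = np.random.default_rng(3)
d, C, eps = 4, 2.0, 1e-3
Wset = rng.standard_normal((200, d))
Wset /= np.maximum(1.0, np.linalg.norm(Wset, axis=1, keepdims=True))  # vectors in B(1)

linopt = lambda theta: int(np.argmax(Wset @ theta))      # exact LinOpt oracle on the finite set
vec = lambda z: Wset[z].copy()                           # exact index-to-vector oracle

def replace_col(M, i, w):
    M = M.copy(); M[:, i] = w; return M

def best_in_direction(M, i):
    """Maximize |det(w, M_-i)| = |theta_i . w| over the set, with eps-perturbation."""
    theta = np.array([np.linalg.det(replace_col(M, i, np.eye(d)[j])) for j in range(d)])
    theta_hat = theta / np.linalg.norm(theta)
    zp, zm = linopt(theta_hat), linopt(-theta_hat)
    wp, wm = vec(zp), vec(zm)
    if theta_hat @ wp >= -(theta_hat @ wm):
        return zp, wp + eps * theta_hat                  # perturb along theta_i
    return zm, wm - eps * theta_hat

M, ids = np.eye(d), [None] * d
for i in range(d):                                       # first pass: build a basis
    ids[i], M[:, i] = best_in_direction(M, i)
improved = True
while improved:                                          # second pass: swap while |det| grows by C
    improved = False
    for i in range(d):
        z, w = best_in_direction(M, i)
        if abs(np.linalg.det(replace_col(M, i, w))) > C * abs(np.linalg.det(M)):
            ids[i], M[:, i] = z, w
            improved = True
print("indices of the spanner elements:", ids)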
Putting everything together Equipped with an estimated
feature map h from , applies
to the set {^π[h(_h,
_h)]|π∈} with
= and C = 2; that is, we plug-in the learned
representation h for the true representation
.[Though the policies produced by the algorithm may not necessarily induce a spanner for _h= {^π[
(_h, _h) ]|π∈} (this would
require “point-wise” representation learning guarantees, which we do
not have), our analysis shows that they still suffice to build a policy cover for layer h+2.]
With this choice, implementing
entails (approximately) solving
_π∈^π[ θ^⊤h(_h, _h)]
for a given θ∈(1), and implementing the oracle
entails estimating
^π[h(_h, _h)]
for a given π∈.
We instantiate (π) as the Monte Carlo algorithm
(<Ref>). To
implement (θ), we invoke with the rewards neurips
r_h(x, a; θ) = h(x,a)^⊤θ, and r_t(x,a; θ) = 0, for t ≠ h.
r_t(x,a;θ){[ h(x,a)^⊤θ, for
t=h,; 0, otherwise. ].
§ ANALYSIS: PROOF OF THM:SPANRLMAIN
In this section, we prove the main guarantee for (<ref>). First, we outline our proof strategy in <ref>. Then, in <ref> and <ref>, we present guarantees for the instances of (<ref>) and (<ref>) used within . We then combine these results in <ref> to complete the proof of <ref>. A self-contained guarantee for (<Ref>) is given in <Ref>.
§.§ Proof Strategy
Like the proof of <ref> for , the proof of <ref> is inductive. However, due to the assumption of reachability, the proof does not make use of the extended MDP analysis used in the proof of <ref>, making it somewhat simpler.
For fixed h, we assume that the policy set Ψ1:h+1 produced by satisfies the property:
Ψ1,…Ψh+1 are (1 Ad,0)-policy covers for layers 1 through h+1, and max_t∈[h+1]|Ψt|≤ d.
Conditioned on this claim, we show that with high probability, the set Ψh+2 is a (1/4 A d,0)-policy cover for layer h+2. To prove this, we use the inductive assumption to show that acts as an approximate linear optimization oracle over = {^π[ h(_h, _h) ] |π∈} (<Ref>). Using this, we then instantiate the guarantee of from <ref> with and instantiated with and . To conclude the proof of the inductive step, we combine the main guarantee for with the main guarantee for (<Ref>), along with a change of measure argument enabled by the assumption that Ψ1:h are policy covers (i.e. (<ref>)).
§.§ Guarantee for as a Subroutine for
We begin by showing that , as configured within , acts as an approximate linear optimization oracle as required by . In particular, we fix a layer h, assume that Ψ1:h+1 satisfy (<ref>), and apply the generic guarantees for in <Ref>.
Define function classes _1:h such that for each t∈[h],
_t {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ, w ∈(2√(d))}.
Given θ∈(1) and ϕ∈Φ, consider the reward functions r'_1:h(·,·;θ, ϕ) given by:
∀ (x,a)∈×, r'_t(x,a;θ,ϕ){[ ϕ(x,a)^⊤θ, for
t=h,; 0, otherwise. ].
With these rewards and function classes, we show that the output
= (h, r'_1:h(·, ·;θ,ϕ), _1:h, P1:h, n),
where Pt(Ψt), for each t∈[h], approximately solves
max_π∈θ^π[ ϕ(_h, _h) ]
with high probability if n≥ 1 is sufficiently large. Note that this matches the choice of reward functions in (<ref>) at iteration h with ϕ = ϕh, the feature map returned by in <ref>.
We first verify that the classes _1:h realize the reward functions specified in (<ref>) in the sense of <Ref>.
Under <ref>, the function classes _1:h in (<ref>) realize (<ref>) the reward functions in (<ref>) for any ϕ∈Φ and θ∈(1). Furthermore, the functions in _1:h are uniformly bounded by 2√(d), and ln__t()≤ln |Φ|+ d ln (2√(d) /), for all t∈[h], where we recall that _() denotes the -covering number of in ℓ_∞-distance (see <ref>).
Fix ϕ∈Φ and θ∈(1), and let r'_t(·,·)≡ r'_t(·,·; θ, ϕ), for t∈[h]. Further, for t∈[h] and π∈^t+1:h, we define the state-action value function (Q-function) at layer t with respect to the rewards r'_1:h and partial policy π:
∀ (x,a)∈_t×, Q^π_t(x,a) r'_t(x,a)+^π[.∑_ℓ=t+1^h r'_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
For t=h, we clearly have that for any π∈^h:h, Q^π_h(·,·)=r'_h(·,·)∈_h. For t<h and π∈^t+1:h, we have by the low-rank structure that
Q^π_t(x,a) = ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·ϕ^⋆_t(x,a)^⊤μ_t+1^⋆(y) ν (y),
= ϕ^⋆_t(x,a)^⊤( ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y)).
Now, by the fact that ^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ∈ [-1,1], for all y∈_t+1 (since ϕ(·,·)∈(1), for all ϕ∈Φ), and the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_t+1→0,1, *∫__t+1[t+1](y)g(y) ν(y)≤√(d)), we have that
w_t ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y) ∈(2√(d)).
This, together with (<ref>) and the fact that [t]∈Φ (by <ref>), implies that Q_t^π∈_t. The bound on the covering number __t() follows from a standard bound on the covering number of the ball (2√(d)) <cit.>.
Combining <Ref> with <Ref> (with =0) results in the following bound on the quality of as an approximate linear optimization oracle.
Let ,δ∈(0,1) be given and fix h∈[H]. Given θ∈(1) and ϕ∈Φ, let be the output of when given input (h, r'_1:h(·, ·;θ,ϕ), _1:h, P1:h, n), where
* The reward functions r'_1:h(·, ·;θ,ϕ) are as in (<ref>).
* The function classes _1:h are as in (<ref>).
* Pt(Ψt), for each t∈[h], and the collection of policies Ψ1:h satisfy (<ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈θ^⊤^π[ϕ(_h,_h)]≤θ^⊤^[ϕ(_h,_h)] + _(n,δ),
where _(n,δ) c H A d √(d n^-1 (d ln (2n d^1/2)+ln (|Φ|/δ))) for a sufficiently large absolute constant c>0.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the invocation of within . We first show that (<Ref>) is a valid choice for the subroutine passed to .
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output _h= (h,ϕ,π, n) (<ref>) satisfies, with probability at least 1-δ,
_h - ^π[ϕ(_h,_h)] ≤_(n,δ),
where _ c ·√(n^-1·log (1/δ)) and c>0 is a sufficiently large absolute constant.
By a standard vector-valued concentration bound in euclidean space (see for example <cit.>) and the fact that ϕ(x, a)≤ 1 for all x ∈ and a ∈, there exists an absolute constant c>0 such that with probability at least 1 - δ,
_h - ^π[ ϕ(_h, _h) ]≤ c ·√(log(1/δ)/n).
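For concreteness, a minimal Monte Carlo estimator of this kind can be written as follows (the environment and policy interfaces, the toy feature table, and the sample size are placeholders, not part of the algorithm's specification).

import numpy as np

def estimate_feature_mean(env_reset, env_step, policy, phi, h, n, d):
    """Average phi(x_h, a_h) over n independent episodes following `policy`."""
    total = np.zeros(d)
    for _ in range(n):
        x = env_reset()
        for t in range(h + 1):                   # layers 0, 1, ..., h
            a = policy(t, x)
            if t == h:
                total += phi(x, a)               # record the layer-h feature vector
            else:
                x = env_step(t, x, a)
    return total / n                             # error scales as sqrt(log(1/delta)/n)

# toy usage: 2-state, 2-action environment with bounded random features
rng = np.random.default_rng(4)
feat_table = rng.uniform(-1.0, 1.0, size=(2, 2, 3)) / 2.0     # ||phi(x,a)|| <= 1
est = estimate_feature_mean(
    env_reset=lambda: 0,
    env_step=lambda t, x, a: int(rng.integers(2)),
    policy=lambda t, x: int(rng.integers(2)),
    phi=lambda x, a: feat_table[x, a],
    h=2, n=5000, d=3)
print("estimated E^pi[phi(x_h, a_h)]:", np.round(est, 3))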
Recall that in , we instantiate passing as and as . Combining <Ref> with the general guarantee for in <Ref>, we have the following result.
Consider iteration h∈ [H] of (Φ,,,δ) (<ref>) with ,>0, δ∈(0,1), and feature class Φ satisfying <ref>. Further, let h denote the feature map returned by in <Ref> at iteration h. If Ψ1:h in <ref> satisfy (<ref>) and =(A,d,H,ln(|Φ|/δ)) is sufficiently large, then with probability at least 1 - δ/2H, we have that
* The number of iterations of in <Ref> of <Ref> is at most N ⌈d/2log_2( 100d/)⌉.
* The output (π_1, …, π_d) of has the property that for all π∈, there exist β_1,…,β_d∈[-2,2] such that
*^(h),π - ∑_i=1^d β_i ^(h),π_i≤ 3 d , where ^(h),π'^π'[h(_h,_h)].
By <Ref>, on the event that the instances of and used by satisfy <Ref> with ' = /2, the two prerequisite assumptions of the lemma hold; we instantiate the guarantee in <ref> with C=2, as used by <ref>. We claim that each call to and to satisfies <Ref> with probability at least 1- δ/8 d N H. Because each of and is called at most 4 d N times per iteration of , a union bound concludes the proof contingent on this claim.
We now prove the claim. First, note that the instance of that uses within <ref> is of the form:
(h, r_1:h(·, ·, θ), _1:h, P1:h, n_)
with r_1:h and _1:h as in <Ref>, and Pt(Ψt) for each t∈[h]; this matches the form in <Ref> ('s guarantee) with ϕ = ϕh, which implies that with probability at least 1- δ/8 d N H, the output of of the instance in (<ref>) satisfies:
max_π∈θ^⊤^π[ϕ(_h,_h)]≤θ^⊤^[ϕ(_h,_h)] + c θ H A d √(d · (d ln (2n_ d^1/2)+ln (8 dNH|Φ|/δ))/n_),
for a sufficiently large absolute constant c>0. Thus, by choosing
n_ = ·^-2 A^2 d^3 H^2 · (d +ln (|Φ|/δ)),
for =(A,d,H,ln(|Φ|/δ)) sufficiently large, the right-hand side of (<ref>) is bounded by θ/2, which implies the claim for the invocation of within . Similarly, the choice of n_ in <Ref> ensures that the claim holds for the invocation of within by <Ref>.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the invocation of within .
Recall that Ph= (Ψh) is the distribution over policies that passes to at iteration h∈[H-2] to compute feature map ϕh. Thus, by invoking <ref> in <ref> and using the choice of n_ in <ref>, we immediately obtain the following corollary.
Let δ,∈(0,1), and be as in <ref>, and fix h∈[H-2]. Suppose that the feature class Φ satisfies <ref>. Then, with probability at least 1-δ/2H, the instance of in <ref> of <ref> runs for t≤· d iterations for = (A,d,H,log(|Φ|/δ)) sufficiently large, and returns output ϕh such that for all f∈, there exists w_fh∈(3d^3/2) satisfying
^(Ψh)[∑_a∈(ϕh(_h,a)^⊤wh_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤η^2/64 A^2 d^2,
where w_f ∫__h+1 f(y) (y) ν(y).
§.§ Concluding the Proof of thm:spanrlmain
In this section, we conclude the proof of the main guarantee (<ref>). We derive the guarantee from the following inductive claim.
Consider iteration h∈ [H] of (Φ,,,δ) (<ref>) with parameters ,>0, δ∈(0,1) and a feature class Φ satisfying <ref>. Further, assume that:
* The collection of policies Ψ1:h+1 at the start of the hth iteration of satisfy (<ref>).
* <ref> (reachability) holds with η>0.
* The input parameter to is set to =η/36 d^5/2.
* The input parameter =(A,d,H,ln (|Φ|/δ)) is sufficiently large.
Then, with probability at least 1-δ/H, the set of policies Ψh+2 produced by (Φ,,,δ) at the end of iteration h is an (1/ Ad,0)-policy cover for layer h+2.
With this, we can now prove <ref>.
Note that it suffices to prove that (<ref>) holds for h=H-1 with probability at least 1-δ. To do this, we proceed by induction over h=1,…,H-1. The base case of h=1 trivially holds because Ψ1=∅ and Ψ2={π_}. The induction step now follows by <ref> and the union bound (see <ref>).
The number of trajectories used by is dominated by calls to . Since is called O(dln (d/)) times at each iteration of (<ref>), and each call to requires at most H n_ trajectories, the total number of trajectories after H iterations of is bounded by O(H^2 d n_). By plugging the choices for n_ and from the theorem statement, we obtain the claimed sample complexity.
Before proving <ref>, we make the following simple observation.
For any π∈, h∈ [H-1], any x∈_h+1, we have
(x)^⊤^π[ϕ_h^⋆(_h,_h)]=d^π(x)≥ 0.
The equality follows by construction. The non-negativity of d^π(x) follows by definition of a probability density.
We now prove <ref>.
Let _h and _h' denote the success events in <ref> and <ref>, respectively, and note that by the union bound, we have [_h ∩_h']≥ 1 - δ/H. For the rest of this proof, we will condition on _h ∩_h'.
Throughout, we denote
ϕ_h^⋆,π^π[ϕ_h^⋆(_h,_h)], ∀ h∈[H], ∀π∈.
Because Ψ1:h+1 satisfy (<ref>) (i.e., are a policy cover) it holds by <Ref> that for all x∈_h,
max_π∈Ψh[h](x)^⊤ϕ_h-1^⋆,π≥α·sup_π∈[h](x)^⊤ϕ_h-1^⋆,π, for α1/ A d.
We will show that with probability at least 1-δ/H, the policy set Ψh+2 has the same property for layer h+2; that is, for all x∈_h+1,
max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆,π≥α·sup_π∈[h+2](x)^⊤ϕ_h+1^⋆,π.
Again, by <ref> this is equivalent to the statement that Ψh+2 is an (1/ Ad,0)-policy cover for layer h+2.
For the remainder of the proof, we will fix x∈_h+2 and let π_x ∈_π∈[h+2](x)^⊤ϕ_h+1^⋆,π. Our goal is to show that the inequality <ref> holds for x.
Preliminaries Note that since x∈_h+2, we have [h+2](x)>0. It will be convenient to introduce a function f: _h+1→ defined by
f(y)θ_x^⊤ϕ^⋆_h+1(y,π_x(y)), where θ_x [h+2](x)/[h+2](x).
Further, we define
w_x ∫__h+1 f(y) (y) ν(y).
By definition of π_x, we have that for all y∈_h+1,
θ_x^⊤ϕ^⋆_h+1(y,π_x(y)) = max_a∈θ_x^⊤ϕ^⋆_h+1(y,a).
This together with the fact that θ_x=1 implies that
f ∈ = {. x ↦max_a∈θ^⊤ϕ(x,a) | θ∈(1), ϕ∈Φ};
the discriminator class in <ref> of .
Note also that since x∈_h+2, we have by reachability that
w_x^⊤ϕ_h^⋆, π_x= θ_x^⊤ϕ_h+1^⋆,π_x=1/*[h+2](x)max_π∈[h+2](x)^⊤ϕ_h+1^⋆,π≥η>0.
Applying the guarantee for
Moving forward, let h be the feature map returned by at the hth iteration of <ref>, and define ϕ^(h),π^π[ϕh(_h,_h)], for any π∈. Further, let w_xh be the vector w_fh in <ref> with f=f_x, and note that
w_xh≤ 3 d^3/2.
By Jensen's inequality, we compute
( wh_xϕ^(h),π_x- w_xϕ_h^⋆, π_x)^2
≤^π_x[( h(_h,_h)^⊤ wh_x - ϕ_h^⋆(_h,_h)^⊤ w_x )^2], (Jensen's inequality)
= ∫__h( h(y,π_x(y))^⊤ wh_x - ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π_x_h-1ν(y), (Low-Rank MDP)
≤α^-1max_π̃∈Ψh∫__h( h(y,π_x(y))^⊤ wh_x -ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y), (by (<ref>))
≤α^-1∑_π̃∈Ψh∫__h( h(y,π_x(y))^⊤ wh_x - ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y), (by <ref>)
≤α^-1∑_π̃∈Ψh∑_a∈∫__h( h(y,a)^⊤ wh_x - ϕ_h^⋆(y,a)^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y),
=A α^-1 d·^(Ψh)[( h(_h,_h)^⊤ wh_x - ϕ_h^⋆(_h,_h)^⊤ w_x )^2],
where the last step follows by the definition of Ψh in <ref> and that Ψh = d. Now, since w_x = ∫__h+1 f(y) (y) ν(y) (see (<ref>)) and f∈ (see (<ref>)); the guarantee for in <ref> together with (<ref>) implies that (conditioned on the event )
| wh_x^(h),π_x- w_xϕ_h^⋆, π_x| ≤√(A dη^2/64 α A^2 d^2)≤η/4.
Applying the guarantee for
Letting π_1,…,π_d be the policies returned by at iteration h of , the guarantee of in <ref> implies that there exist β_1, …, β_d∈[-2,2] such that
*ϕ^(h),π_x-∑_i=1^d β _iϕ^(h),π_i≤ 3 d ≤η/12 d^3/2,
where the last inequality follows by the fact that = η/36 d^5/2. Combining (<ref>) with (<ref>) and using the triangle inequality, we get that
w_x^⊤ϕ_h^⋆, π_x ≤∑_i=1^d β_i w_x^⊤ϕ_h^⋆, π_i + wh_x·η/12 d^3/2 +η/4,
≤∑_i=1^d β_i w_x^⊤ϕ_h^⋆, π_i + η/4+η/4, (by (<ref>))
≤ 2d max_i∈[d] w_x^⊤ϕ_h^⋆, π_i + η/2.
Combining this with (<ref>) and rearranging implies
w_x^⊤ϕ_h^⋆, π_x≤ 4d·max_i∈[d] w_x^⊤ϕ_h^⋆, π_i.
On the other hand, by definition of w_x, we have
max_i∈[d] w_x^⊤ϕ_h^⋆, π_i = max_i∈[d]θ_x^⊤ϕ_h+1^⋆, π_i∘_h+1π_x,
= 1/*[h+2](x)max_i∈[d]^π_i ∘_h+1π_x[[h+2](x)^⊤ϕ_h+1^⋆(_h+1,_h+1)],
≤A/*[h+2](x)max_i∈[d]^π_i ∘_h+1π_[[h+2](x)^⊤ϕ_h+1^⋆(_h+1,_h+1)], (see below)
= A/*[h+2](x)max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆, π,
where the inequality follows from the non-negativity of _h+1(·)_h+1(x,a), for all (x,a)∈_h× (due to <Ref>), and (<ref>) follows from the definition of Ψh+2 in <Ref> of <Ref>. Combining (<ref>) and (<ref>) then implies that
1/*[h+2](x)[h+2](x)^⊤ϕ_h+1^⋆, π_x =θ_x^⊤ϕ_h+1^⋆,π_x= w_x^⊤ϕ_h^⋆, π_x ≤ 4d ·max_i∈[d] w_x^⊤ϕ_h^⋆, π_i,
≤4 A d/*[h+2](x)max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆, π.
This, together with <ref>, implies that (<ref>) holds. Since this argument holds uniformly for all x∈_h+2, this completes the proof.
§.§ Proof of lem:barycentricspannerknownphi
By definition for x ∈_h+1, we have d^π(x) = ^π[ (x)^⊤[h](_h, _h)]. Let π_x denote the policy maximizing d^π(x) (if no such maximizer exists, we may pass to a maximizing sequence) and let Ψ = {π_1, …, π_d }. Then, we have for some β_1, …, β_d ∈ [-C, C],
d^π_x(x) = (x)^⊤(∑_i = 1^d β_i [π_i]) + (x)^⊤( [π_x] - ∑_i = 1^d β_i[π_i]),
≤ C d ·max_i ∈[d](x)^⊤[π_i] + ·(x)
, (Cauchy-Schwarz)
≤ C d ·max_i ∈[d](x)^⊤[π_i] + 1/2d^π_x(x),
where the inequality follows by the fact that <ref> holds with ≤η/2. The result now
follows by rearranging.
§ GENERIC GUARANTEE FOR
In this section, we give a generic guarantee for the algorithm when invoked with oracles and satisfying the following assumption.
[ and as approximate Linear Optimization Oracles]
For some abstract set and a collection of vectors {w^z∈^d | z∈} indexed by elements in , there exists '>0 such that for any θ∈^d∖{0} and z∈, the outputs ẑ_θ(θ/θ) and ŵ_z (z) satisfy
sup_z∈θ^⊤ w^z≤θ^⊤ w^ẑ_θ +' ·θ, and ŵ_z - w^z≤' .
Letting {w^z | z∈} and assuming that ⊆(1), the next theorem bounds the number of iterations of ((·),(·), ·,·) under <ref>, and shows that the output is an approximate barycentric spanner for (<ref>). Our result extends those of <cit.>, in that it only requires an approximate linear optimization oracle, which is potentially of independent interest.
Fix C>1 and ∈(0,1) and suppose that {w^z | z ∈}⊆(1). If (<Ref>) is run with parameters C, >0 and oracles , satisfying <ref> with '=/2, then it terminates after d + d/2log_C100 d/^2 iterations, and requires at most twice that many calls to each of and . Furthermore, the output z_1:d has the property that for all z∈, there exist β_1,…,β_d∈[-C,C], such that
*w^z - ∑_i=1^dβ_i w^z_i≤3Cd ·/2.
The proof follows similar steps to those in <cit.>, with modifications to account for the fact that linear optimization over the set {w^z | z∈} is only performed approximately.
Part I: Bounding the number of iterations
In <Ref>, there are two loops, both of which require two calls to and per iteration. As the first loop has exactly d iterations, it suffices to bound the number of iterations in the second loop.
Let Mi (w_1,…, w_i, e_i+1, …, e_d) be the matrix whose columns are the vectors at end of the ith iteration of the first loop (<ref>) of <ref>; note that columns i+1 through d are unchanged at this point in the algorithm. For i∈[d], we define ℓ_i(w) (w,Mi_-i) and θ_i((e_j, Mi_-i))_j∈ [d]∈^d, where we recall that for any matrix A, the matrix A_-i is defined as the result of removing the ith column from A. Note that ℓ_i is linear in w, and in particular
ℓ_i(w) w^⊤θ_i.
Let W0 Md = (w_1, …, w_d), and let Wj denote the resulting matrix after j iterations of the second loop (<Ref>) of <ref>. We will show that for any J≥ 1,
(WJ) ≤(W0) ·( 100 d/^2)^d/2.
By construction of the loop, we have (Wj) ≥ C ·(Wj-1) for each j ∈[J], and thus (WJ) ≥(W0) · C^J. Combining these two facts will establish the bound on the iteration complexity. We now prove (<ref>).
Let u_i = e^⊤_i(Mi)^-1 (note that u_i is a row vector) and let U denote the matrix whose ith row is u_i. We observe that for all w ∈^d,
u_iw = ℓ_i(w)/ℓ_i(w_i),
where we note that ℓ_i(w_i) ≠ 0 by construction; indeed, the columns of Mi are a basis for ^d because (Mi) ≠ 0, and the equality holds on the columns, so the two linear functions must be equal. Now, since <ref> holds with '=/2, we have
θ^⊤_iw_i^+≥sup_z ∈θ^⊤_iw^z - /2θ_i, and θ^⊤_iw_i^-≤inf_z ∈θ^⊤_iw^z + /2θ_i,
where w_i^± = (z_i^±). We will now show that
ℓ_i(w_i) ≥/2·θ_i.
There are two cases. First, suppose that θ^⊤_iw_i^+≥ - θ^⊤_iw_i^-, corresponding to the conditional in <Ref> of <ref> being satisfied. Combining this with (<ref>), we have
θ_i^⊤ w_i^+ ≥( sup_z∈θ_i^⊤ w^z -/2θ_i) ∨ (-θ_i^⊤ w_i^-),
≥( sup_z∈θ_i^⊤ w^z -/2θ_i)∨( sup_z∈ -θ_i^⊤ w^z -/2θ_i), (by (<ref>))
= ( sup_z∈θ_i^⊤ w^z )∨( sup_z∈ -θ_i^⊤ w^z ) - /2θ_i,
≥ - /2θ_i.
Because the conditional is satisfied, w_i = w_i^+ + ·θ_i/θ_i, and so by plugging this into (<ref>), we have
ℓ_i(w_i) = θ^⊤_iw_i≥/2·θ_i.
The case that θ^⊤_iw_i^+≤ - θ^⊤_iw_i^- is essentially identical, establishing (<ref>). Now, recall that { w^z | z ∈} and let ⊕( 3/2) { w + b | w ∈ and b ∈( 3/2) } denote the Minkowski sum with ( 3/2). By Cauchy-Schwarz, it holds that for all w' w + b ∈⊕( 3/2),
ℓ_i(w') = θ^⊤_iw' = θ^⊤_iw + θ^⊤_ib≤( 1 + 3 /2) ·θ_i,
where we used that ⊆(1) (by assumption). Thus, for any w' ∈⊕( 3/2), we have
u_iw' = ℓ_i(w')/ℓ_i(w_i)≤ 1+3 /2 .
We now observe that by construction and the fact that <ref> holds with '=/2, the kth column w_k' of WJ belongs to ⊕( 3 /2), for any k∈[d]. Thus, the (i,k) entry u_iw_k' of U WJ satisfies u_iw_k'∈[-1 - 3 /2, 1+ 3 /2], and so the columns of U WJ have Euclidean norm at most 10 √(d)/. Since the magnitude of the determinant of a matrix is upper bounded by the product of the Euclidean norms of its columns, it holds that (U WJ)≤( 100 d/^2)^d/2.
On the other hand, again by construction, we see that the columns w_1,…, w_d of W0 satisfy u_iw_j=0, for j<i, and u_iw_i=1. Thus, U W0 is an upper-triangular matrix with 1s on the diagonal, and hence has determinant 1. Because determinants are multiplicative, this implies that (U) ≠ 0. We now compute:
(WJ) = (U WJ)/(U) = (U WJ)/(U W0)≤( 100 d/^2)^d/2.
Thus, the upper bound on (WJ) holds and the claim is proven. Therefore, we have
C^J ≤( 100 d/^2)^d/2,
and so J ≤⌈d/2log_C( 100 d/^2)⌉.
Part II: Spanner property for the output Having shown that the algorithm terminates, we now show that the result is an approximate barycentric spanner for . Let W (w_1, …, w_d) be the matrix at termination of the algorithm. By definition, if the second loop (<Ref>) has terminated, then for all i∈[d],
max(θ_i^⊤ w_i^+, - θ_i^⊤ w_i^-) +·θ_i≤ C · |(w_i,W_-i)|,
where θ_i = ((e_j, W_-i))_j∈[d]∈^d. On the other hand, by <ref>, (<ref>) holds, and so
∀ z∈, ∀ i ∈ [d], |(w^z,W_-i)| = |θ_i^⊤ w^z| ≤max(θ_i^⊤ w_i^+, - θ_i^⊤ w_i^-) +·θ_i,
≤ C· |(w_i,W_-i)|.
Now, fix z∈. Since (W) ≠ 0, there exist β_1:d∈ such that w^z= ∑_i=1^d β_i w_i. By plugging this into (<ref>) and using the linearity of the determinant, we have
∀ i∈[d], C· |(w_i,W_-i)| ≥ |(w^z,W_-i)| = |∑_j=1^d β_i (w_j,W_-i)| = |β_i| · |(w_i,W_-i)|.
Therefore, |β_i|≤ C, for all i∈[d]. Now, by definition of w_1:d and w_1:d, for all i∈[d], we have that w_i - w_i≤. Furthermore, by <ref>, we also have that w_i -w^z_i≤/2. Therefore, by the triangle inequality, we have
w^z- ∑_i=1^d β_i w^z_i≤w^z- ∑_i=1^d β_i w_i + ∑_i=1^d|β_i| w_i - w^z_i + ∑_i=1^d|β_i| w_i - w_i ≤ 3d C /2.
This completes the proof.
§ PROPERTIES OF REACHABILITY ASSUMPTION
In this section, we compare the η-reachability assumption used by
(<ref>) to different reachability
assumptions used throughout the literature on RL in Low-Rank MDPs. In
<ref>, we demonstrate an exponential separation
between our notion of reachability and notions considered in the so-called latent variable model <cit.>. In <ref>, we consider a number of other reachability assumptions and show that they imply <Ref>.
§.§ Comparison to Latent Variable Model
In this subsection, we show that our reachability assumption is
implied by a reachability assumption used by
<cit.> in the latent
variable/non-negative feature model, and show that our reachability
assumption can hold even when the best possible latent variable
embedding dimension is exponential in the dimension d. We begin by
defining the latent variable model.
Given a transition operator T:
×→Δ(), a latent variable representation consists of a countable latent space and functions ψ:×→Δ() and
q:→Δ(), such that T(·| x,a) = ∑_z∈
q(·| z) ψ(z | x,a). The latent variable
dimension of T, denoted , is the cardinality of the smallest
latent space for which T admits a latent variable
representation.
The interpretation for the latent variable model is as follows:
* Each (x,a) pair
induces a distribution ψ(x,a) ∈Δ()
over z∈.
* The latent variable is sampled as ∼ψ(x,a).
* The next state is sampled as '
∼ q(·|).
Note that in discrete state spaces, all transition operators admit a trivial latent variable
representation, as we may take ψ(x,a) = T(·| x,a), but
the dimension of such a representation is potentially infinite. A latent
variable representation certifies that there exists a factorization T(x' | x,a) =
ψ(x,a)^⊤ q(x') with embedding dimension ||, and so
, and hence gives an upper bound on the rank of the
transition operator. On the other hand, compared with the general Low-Rank factorization,
the latent variable factorization additionally requires that ψ(x,a)
and q(·| z) are probability distributions, and thus
non-negative, for all z∈ and (x,a)∈×,
implying that is equivalent to the non-negative rank <cit.> of the transition operator.
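The following snippet illustrates the latent variable model on synthetic tables (the sizes and distributions are arbitrary): a transition is sampled by first drawing z ∼ ψ(·|x,a) and then x' ∼ q(·|z), and the induced kernel indeed factorizes as T = ψ q with non-negative factors.

import numpy as np

rng = np.random.default_rng(5)
n_x, n_a, n_z, n_xp = 6, 3, 4, 6
psi = rng.dirichlet(np.ones(n_z), size=(n_x, n_a))       # psi(z | x, a)
q = rng.dirichlet(np.ones(n_xp), size=n_z)               # q(x' | z)

def sample_next_state(x, a):
    z = rng.choice(n_z, p=psi[x, a])                     # draw the latent variable
    return rng.choice(n_xp, p=q[z])                      # then the next state

T = psi @ q                                              # T(x'|x,a) = sum_z q(x'|z) psi(z|x,a)
x, a = 2, 1
freq = np.bincount([sample_next_state(x, a) for _ in range(20000)],
                   minlength=n_xp) / 20000.0
print("model kernel :", np.round(T[x, a], 3))
print("empirical    :", np.round(freq, 3))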
Assuming that a latent variable representation exists, <cit.> consider the following notion of reachability.
There exists η>0 such that
∀ h∈[H-1], ∀ z∈_h+1, sup_π∈^π[_h+1=z]≥η.
We first show the latent variable reachability condition above implies our more general assumption.
Consider a Low-Rank MDP with rank d≥ 1. Under the
latent variable model in <ref>, if the latent
variable reachability condition in (<ref>) is satisfied for some η>0, then, for all h∈[H], the transition kernel T_h in admits a factorization T_h(·| x,a)=(·)^⊤(x,a), where (·)∈^ and (·,·)∈^, such that ≤ d A^2/η^2 and η^2/A √(d)-reachability (in the sense of <ref>) is satisfied.
Suppose that <ref> (η-reachability) holds. By <cit.>, the non-negative rank of is bounded as ≤ d A^2/η^2.
Letting q and ψ be as in the definition of the latent variable representation in <ref>, we define and as: for all h∈[H-1],
(·) (q(·| z))_z∈∈^, and (·,·) (ψ(z|· , ·))_z∈∈^.
Now, fix h∈[H-1] and x∈_h+1. For z_0∈_z∈_h+1q(x| z), we have
sup_π∈ d^π(x)= ^π[_h+1 = x] = sup_π∈∑_z∈_h+1
q(x | z) ·^π[ψ(z |_h,_h)],
=sup_π∈
q(x | z_0) ·^π[ψ(z_0 |_h,_h)],
= (x)_∞·sup_π∈^π[_h+1=z_0],
≥η·(x)_∞ , (using reachability)
≥η/√()·(x).
We now complement the result above by showing that there
exist low-rank MDPs for which our notion of reachability
(<ref>) is satisfied with η
polynomially small, yet the best possible latent variable
embedding has dimension =2^Ω(d). This contrasts
the results in <cit.>, which
show that latent variable reachability implies a polynomial
bound on the latent variable dimension.
There exists a one-step Low-Rank-MDP of rank d≥1, where η-reachability (<ref>) is satisfied with η=1/2√(d), but where the non-negative rank satisfies =2^Ω(d).
Let n ∈ℕ and d n 2 +1. As shown
in the proof of <cit.>, there exists
a horizon-two MDP with the following properties:
* The state spaces _1 and _2 at layers 1 and 2, respectively, are finite.
* The cardinality of is d; i.e. = {a_1,…, a_d}.[Technically, the example in the proof of <cit.> does not explicitly specify the number of actions. Instead, the example assigns a number of state-action pairs to vectors in ^d, without specifying the number of actions. The number of actions in their example is a degree of freedom, which we set to d here without loss of generality.]
* The transition kernel T_1 admits the factorization:
T_1(·| x,a) = [2](·)^⊤ϕ_1^⋆(x,a)∈Δ(_2), ∀ (x,a)∈_1×,
where for all x'∈_2, [2](x')∈_≥ 0^d, and for all (x,a)∈_1 ×, ϕ_1^⋆(x,a)∈_≥0^d.
* The non-negative rank of is =2^Ω(d).
We augment this MDP by adding an extra state , and let
_1_1∪{}. We define
_1^⋆:_1×→_≥0^d be the
extension of ϕ_1^⋆ given by
∀ i∈[d], _1^⋆(, a_i)= e_i, and ∀ x ∈_1, _1^⋆(x, a_i)= ϕ_1^⋆(x,a_i),
where e_i is the ith basis element in ^d. We define the
initial state distribution to have ρ()=1/2 and
ρ(x)=1/2 |_1|, for all x∈_1.[We note
that <cit.> did not specify the initial
distribution, which is not needed for the conclusion of their
result.] We let =(_1∪_2,,
_1^⋆,([h])_h∈[2],) denote the resulting
MDP. Note that adding an extra state at layer 1 in this fashion only adds d additional rows to the transition matrix T (viewed as a (|_1×|)× |_2| matrix). Therefore, the non-negative rank of is as least that of .
We now show that reachability is satisfied in . Let π_i be the policy that always plays action a_i. With this, we have that for any x'∈_2,
sup_π∈ d^π(x') ≥max_i∈[d] d^π_i(x'),
= max_i∈[d][2](x')^⊤[_1^⋆(_1,a_i)] ,
= max_i∈[d]{[𝕀{_1=}·[2](x')^⊤_1^⋆(_1,a_i)] +[𝕀{_1≠}·[2](x')^⊤_1^⋆(_1,a_i)] },
≥max_i∈[d]ρ() [2](x')^⊤_1^⋆(,a_i).
where the last inequality follows by the fact that, for all (x,a)∈_1×, [2](·)^⊤_1^⋆(x,a)=[2](x')^⊤ϕ_1^⋆(x,a) ≥ 0
(since [2](x')^⊤ϕ_1^⋆(x,a) is a conditional
density). On the other hand, from the construction of _1^⋆ and the fact that [2](x')∈^d_≥ 0, we have
max_i∈[d][2](x')^⊤_1^⋆(,a_i)=[2](x')_∞≥[2](x')/√(d).
Combining this with (<ref>) and using that ρ(x_0)=1/2
implies that reachability with parameter 1/(2√(d)) is satisfied in .
§.§ Relation to Other Reachability Assumptions
In this subsection, we show that <ref> is implied
by a notion of feature coverage used in the context of transfer
learning in Low-Rank MDPs <cit.>, as well as a notion of
explorability used in the context of reward-free RL in linear
MDPs <cit.>.
§.§.§ Feature Coverage
We first consider coverage condition used by <cit.>, which involves the second moments of the feature map .
We say that the linear MDP with featurization _h satisfies η-feature coverage if for all h ∈ [H],
sup_π∈λ_min(^π[(_h,_h)(_h,_h)^⊤]) ≥η.
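In practice, this quantity could be estimated for a fixed policy by forming the empirical second-moment matrix of sampled features and taking its smallest eigenvalue, as in the following sketch (the feature samples are synthetic stand-ins for rollouts of the policy).

import numpy as np

rng = np.random.default_rng(6)
n, d = 10000, 5
feats = rng.standard_normal((n, d))                      # stand-ins for phi(x_h, a_h) under pi
feats /= np.maximum(1.0, np.linalg.norm(feats, axis=1, keepdims=True))   # keep ||phi|| <= 1

second_moment = feats.T @ feats / n                      # estimates E^pi[phi phi^T]
eta_hat = float(np.linalg.eigvalsh(second_moment).min())
print(f"estimated lambda_min of the second-moment matrix: {eta_hat:.3f}")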
We show that η-feature coverage implies
(η/2)^3/2-reachability. Thus, up to polynomial dependence,
η-feature coverage is a special case of <ref>.
Suppose that an MDP satisfies η-feature coverage as in <ref> for some η>0. If (x,a)∈(1) for all x,a, then the MDP satisfies (η/2)^3/2-reachability in the sense of <Ref>.
Let h∈ [H] and x∈_h+1 be given and define
θ(x)/(x).
To keep notation compact, we define _h ϕ_h^⋆(_h,_h). By η-feature coverage, there exists π∈ such that
η≤^π [(θ^⊤_h)^2] = ^π [𝕀{(θ^⊤_h)^2 < η/2}· (θ^⊤_h)^2] + ^π [𝕀{(θ^⊤_h)^2 ≥η/2}· (θ^⊤_h)^2] ,
≤η/2 + ^π [(θ^⊤_h)^2 ≥η/2],
where we have used that θ=1 and ϕ_h^⋆(x,a)≤ 1 for all (x,a)∈_h×. Rearranging (<ref>) and using that θ^⊤_h≥ 0 (it is a scaled conditional density), we have
^π [θ^⊤_h ≥√(η/2)] = ^π [(θ^⊤_h)^2 ≥η/2] ≥η/2.
Now, by Markov's inequality, we have that
θ^⊤ϕ_h^⋆,π= ^π[θ^⊤_h] ≥√(η/2)·^π [θ^⊤_h ≥√(η/2)] ≥ (η/2)^3/2,
where we have once more used that θ^⊤_h≥ 0 almost surely.
§.§.§ Explorability
We now consider the explorability assumption of <cit.>, which involves the first moment of the feature map . This notion is defined as follows.
We say that a linear MDP satisfies η-explorability if for any h∈[H] and any θ∈^d∖{0} it holds that
sup_π∈ |θ^⊤^π[(_h,_h)]| ≥η·θ.
We now show that η-explorability is a special case of η-reachability:
Suppose that the explorability condition in <ref> is satisfied with η>0. Then, η-reachability is satisfied.
Let x∈_h+1 and define θ(x). By explorability, we have that
sup_π∈ d^π(x) = sup_π∈^π[(x)^⊤(_h,_h)],
= sup_π∈ |θ^⊤^π[(_h,_h)]|, ((·)^⊤(x,a) is a conditional law)
= sup_π∈ |θ^⊤^π[(_h,_h)]|,
≥η·θ , (by explorability)
= η·(x).
This shows that <ref> is satisfied with
parameter η.
|
http://arxiv.org/abs/2307.04485v1 | 20230710111725 | Silver-Platinum nanoparticles and nanodroplets supported on silica surfaces: structure and chemical ordering | ["F. Ait Hellal", "J. Puibasset", "C. Andreazza-Vignolle", "P. Andreazza"] | cond-mat.mtrl-sci | ["cond-mat.mtrl-sci", "cond-mat.stat-mech"] |
ICMN, CNRS, Université d'Orléans, 1b rue de la Férollerie, CS 40059, 45071 Orléans cedex 02, France
Stable and metastable metallic nanoparticles exhibit unique properties compared to the bulk, with potentially important applications for catalysis. This is in particular the case for the AgPt alloy, which can exhibit the ordered L1_1 structure (alternation of pure Ag and Pt (111) planes) in nanometer-size particles. However, for such small systems, the interfaces play an important role. Therefore, the support used to elaborate the nanoparticles in ultrahigh vacuum experiments may influence their properties, even in the case of weakly interacting substrates like amorphous carbon or silica. This work focuses on AgPt nanoparticles deposited on silica, and investigates the effect of the support disorder and roughness on the structure and chemical ordering, in particular at the interface with the substrate, by Monte Carlo calculations of the atomic density profiles with semi-empirical potentials.
metallic nanoparticle, AgPt nanoalloy, chemical ordering, density profile, Monte Carlo simulation
§ INTRODUCTION
Supported metallic nanoparticles (NPs) are catalyst models for structure and chemical ordering studies <cit.>. Choosing an amorphous substrate like amorphous carbon or a silicon oxide such as silica has the advantage of minimizing the interactions with the NP, hence preserving the structure and morphology it would adopt in vacuum in the absence of interactions. This strategy is used in experiments where the NPs are grown on amorphous supports in ultrahigh vacuum conditions <cit.>. It has however been observed that this type of support may still influence the NP structure and morphology <cit.>. Although the expected effects are less spectacular than for crystalline supports like MgO, which strongly interact with NPs <cit.>, theoretical works have recently focused on the effect of weakly interacting surfaces <cit.>.
Among the possible effects of the substrate, chemical ordering is particularly relevant for catalysis, since NPs offer the unique opportunity to drastically reduce the required amount of active matter through an optimal organization of the chemical species at the NP external surface.
The silver-platinum alloy exhibits interesting features <cit.>, in particular an ordered L1_1 structure (alternation of pure Ag and Pt (111) planes) that has been observed in nanometer-size particles <cit.> and has stimulated theoretical studies <cit.>. It also has important applications as a catalyst in fuel cells, where chemical ordering strongly influences its catalytic efficiency <cit.>. Therefore, assessing the structure of the supported AgPt nanoalloy is a relevant issue.
The elaboration process strongly influences the structure of the NP which is not necessarily at equilibrium (the system remains trapped in local minima, corresponding to metastable states) <cit.>.
In simulations, the situation is even worse since the limited capabilities of computers prevent reaching the experimental timescale, which strongly limits the possibilities of atomic reorganization.
To circumvent the problem, it is possible to increase the metal mobility by increasing the temperature <cit.>. This is why, besides the solid NP, we will also consider the corresponding liquid droplet at 1200 K and 1500 K.
Beyond the fact that this forces atomic mobility towards equilibrium, the disappearance of the L1_1 chemical order in the core due to the thermal agitation should leave room for the observation of a possible remnant chemical ordering at the interfaces, in particular that with the silica support.
It is emphasized that, in contrast to simulations, increasing the temperature of Ag-based nanoparticles like AgPt up to the liquid phase is experimentally difficult both in ultrahigh vacuum conditions, due to sublimation before melting, and at ambient pressure, due to the contamination by the atmosphere.
This paper is divided as follows: We first present the numerical model and the methods to characterize the structure and chemical ordering in terms of atomic density profiles. Then the results follow, with a focus on the effect of the temperature on the structure and chemical ordering, as well as the influence of the disorder and roughness of the support.
§ NUMERICAL DETAILS AND METHODS
Molecular model: We perform Monte Carlo (MC) simulations of supported AgPt nanoparticles on various silica substrates that mimic the supports used to elaborate the systems in ultrahigh vacuum experiments. The system is described at the atomic level, with semi-empirical interatomic potentials. The simulations are performed in the canonical ensemble, including random displacements and atomic exchanges between the metallic species Ag and Pt. These exchanges are particularly useful at low temperature, where the energy barrier associated with atomic diffusion in the core of the NP is too high to allow chemical reorganization.
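A schematic version of such a move set is sketched below (the energy callable, move sizes, and move mix are illustrative assumptions; the actual potentials are described next).

import numpy as np

kB = 8.617e-5                                    # Boltzmann constant in eV/K
rng = np.random.default_rng(7)

def mc_sweep(pos, species, energy, T, dmax=0.1):
    """One MC sweep: N trial displacements plus N//10 Ag<->Pt exchange trials."""
    beta, N = 1.0 / (kB * T), len(pos)
    for _ in range(N):                           # random displacement moves
        i = rng.integers(N)
        old = pos[i].copy()
        e_old = energy(pos, species)
        pos[i] = old + rng.uniform(-dmax, dmax, size=3)
        dE = energy(pos, species) - e_old
        if dE > 0.0 and rng.random() >= np.exp(-beta * dE):
            pos[i] = old                         # reject (Metropolis criterion)
    for _ in range(max(1, N // 10)):             # Ag <-> Pt identity exchanges
        i = rng.choice(np.flatnonzero(species == "Ag"))
        j = rng.choice(np.flatnonzero(species == "Pt"))
        e_old = energy(pos, species)
        species[i], species[j] = species[j], species[i]
        dE = energy(pos, species) - e_old
        if dE > 0.0 and rng.random() >= np.exp(-beta * dE):
            species[i], species[j] = species[j], species[i]   # reject
    return pos, species

# toy usage with a species-blind Lennard-Jones stand-in for the real potentials
def toy_energy(pos, species, eps=0.1, sigma=2.5):
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    d = d[np.triu_indices(len(pos), 1)]
    return float(np.sum(4.0 * eps * ((sigma / d) ** 12 - (sigma / d) ** 6)))

pos = rng.uniform(0.0, 10.0, size=(20, 3))
species = np.array(["Ag"] * 15 + ["Pt"] * 5)
pos, species = mc_sweep(pos, species, toy_energy, T=700.0)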
Interatomic potentials: The many-body metal-metal interaction derives from the tight-binding scheme in the second moment approximation (TBSMA) <cit.>. Since the ordered AgPt L1_1 structure is stabilized by the contribution of the second neighbors, an additional Gaussian term was developed by Front and Mottet to reproduce the main structural properties of AgPt alloys <cit.>. They have in particular determined the most stable structures of TOh AgPt NPs smaller than 7 nm. In this study we focus on the NP with 1289 atoms (3.4 nm), which exhibits the rich structure depicted in Fig. <ref>.
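For illustration, a generic SMA-type energy has the following form (the parameter values are placeholders, a single parameter set is used for all pairs, and the additional Gaussian second-neighbor term of Front and Mottet is omitted).

import numpy as np

def sma_energy(pos, A=0.1, xi=1.2, p=10.0, q=3.0, r0=2.8, rcut=6.0):
    """Total SMA energy: pairwise repulsion minus the square-root band term."""
    r = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(r, np.inf)
    mask = r < rcut
    x = np.where(mask, r / r0 - 1.0, 0.0)
    rep = np.where(mask, A * np.exp(-p * x), 0.0).sum(axis=1)
    band = np.sqrt(np.where(mask, xi ** 2 * np.exp(-2.0 * q * x), 0.0).sum(axis=1))
    return float(np.sum(rep - band))

print(sma_energy(np.random.default_rng(8).uniform(0.0, 8.0, size=(10, 3))))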
Since the metal-support interaction is weak (van der Waals like), a simple Lennard-Jones potential is used, which was previously developed for pure silver and platinum NPs as well as AgPt nanoalloys <cit.>. This potential was parameterized to reproduce the aspect ratio (defined as H/D, where H is the height and D the diameter) of experimentally deposited NPs (the parameters are given in Table <ref>).
Supports: We consider two silica supports to assess the effect of the atomic disorder and roughness that can be observed in experimental supports like oxidized silicon wafers. As a perfectly flat and ordered surface we use the (100) quartz surface. It is emphasized that this surface undergoes a 1× 2 reconstruction with a top layer twice as dense as the bulk quartz (see Fig. <ref>a) <cit.>. To model a disordered substrate we simply cut and relax a slab of amorphous silica (a-SiO_2) (see Fig. <ref>b). More details on the method and the potentials used can be found in Ngandjong et al. <cit.>. Note that, although the amorphous silica surface is hydroxylated, the hydrogen species are not explicitly taken into account in the interactions.
Chemical ordering and density profiles: The objective is to measure the effect of the substrate on the chemical ordering in the nanoalloy. This is done simply by comparing layer by layer with the equilibrium structure of the free NP (Fig. <ref>), since the geometric structure is marginally affected by the weak interaction with the substrate. However, since the time scales involved in experiments are inaccessible to simulations, the intrinsic mobility of Ag (due to its lower cohesion) is largely underestimated in the calculations. The introduction of MC exchanges between Ag and Pt largely solves the problem and allows chemical rearrangement, but possibly misses some facets of the complex cross-diffusion of Ag and Pt in the NP. We therefore consider the effect of temperature, ranging from the solid up to the liquid state (above approximately 1200 K for Ag_3Pt) <cit.>. In this case, the NP structure is fully disordered (droplet) but may exhibit partial chemical ordering, in particular close to the interfaces (with vacuum and substrate).
In the liquid case, the structure of the nanodroplet is characterized by averaging the atomic density profiles of the metal. Since our objective is to determine the aspect ratio (H/D) we focus on two density profiles, along the z axis (perpendicular to the surface) and along the radial coordinate r_cyl (in the plane parallel to the substrate, see Fig. <ref>). So, from the local atomic density ρ(r_cyl,θ,z) we construct the two quantities:
ξ_r (z) = ∫ρ(r_cyl,θ,z) r_cyl dr_cyl dθ
ξ_z (r_cyl) = ∫ρ(r_cyl,θ,z) r_cyl dz dθ.
These density profiles have the dimension of the inverse of a distance, and their integrals give the total number of atoms in the NP.
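As a concrete (and simplified) illustration, the two profiles can be estimated from a snapshot of atomic coordinates by counting atoms in bins along z and along r_cyl, since the angular and remaining integrals reduce to counting; the binning and the assumption that the coordinates are already centred on the droplet axis are choices made here for illustration, not necessarily those used in the actual analysis.

import numpy as np

def density_profiles(coords, z_bins=60, r_bins=60):
    """Approximate xi_r(z) and xi_z(r_cyl) from atomic positions (droplet axis along z)."""
    x, y, z = coords[:, 0], coords[:, 1], coords[:, 2]
    r_cyl = np.sqrt(x**2 + y**2)

    # xi_r(z): number of atoms per unit length along z
    z_counts, z_edges = np.histogram(z, bins=z_bins)
    xi_r = z_counts / np.diff(z_edges)

    # xi_z(r_cyl): number of atoms per unit length along r_cyl
    r_counts, r_edges = np.histogram(r_cyl, bins=r_bins)
    xi_z = r_counts / np.diff(r_edges)

    # each profile integrates back to the total number of atoms
    return z_edges, xi_r, r_edges, xi_z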
In order to determine the aspect ratio from the data, these profiles are fitted to a simple model of a truncated sphere of uniform density ρ_0 of radius R_1 with a skin of thickness R_2-R_1 where the density decreases from ρ_0 down to zero. We define the droplet radius as R=0.5(R_1+R_2) and the aspect ratio is H/D=(h+R)/2R (see Fig. <ref>). In practice, ξ_z (r_cyl) mostly depends on the droplet radius, while ξ_r (z) is sensitive to h.
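A minimal sketch of the corresponding fit is given below for the sharp-interface limit (no skin, so R_1 = R_2 = R), assuming the substrate plane sits at z = -h with the sphere centre at the origin; the initial guesses and the neglect of the finite skin are simplifications with respect to the model actually used.

import numpy as np
from scipy.optimize import curve_fit

def xi_r_model(z, rho0, R, h):
    """xi_r(z) of a uniform truncated sphere of radius R cut by the plane z = -h."""
    z = np.asarray(z, dtype=float)
    inside = (z >= -h) & (z <= R)
    return np.where(inside, np.pi * rho0 * (R**2 - z**2), 0.0)

def fit_aspect_ratio(z_centers, xi_r_data):
    """Fit (rho0, R, h) to a measured xi_r(z) profile and return the aspect ratio H/D."""
    p0 = (0.05, 20.0, 15.0)            # rough initial guesses in atoms/A^3 and Angstrom
    (rho0, R, h), _ = curve_fit(xi_r_model, z_centers, xi_r_data, p0=p0, maxfev=10000)
    return (h + R) / (2.0 * R), (rho0, R, h)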
§ RESULTS
§.§ Structure and chemical ordering of solid AgPt NPs on quartz
At low temperature (around room T or below), the supported AgPt NP remains essentially frozen in its initial state, preserving its highly structured layers and chemical ordering (in particular the L1_1 structure) with only scarce exchanges between Ag and Pt species. Increasing the temperature enhances the atomic mobility and mimics to some extent the experimental conditions where moderate annealing allows the AgPt NPs to reorganize thanks to the large mobility of Ag atoms. The optimal temperature is around 700 K, the largest possible below the melting point of Ag for a NP of a few nanometers <cit.>. We first consider the AgPt NP deposited on the perfectly ordered quartz surface. The atomic density profiles along the z axis are acquired for Ag and Pt and shown in Fig. <ref>.
One observes a strong layering through the whole NP showing that at 700 K the NP remains solid during the simulation run. One can however observe some atomic mobility at the external surface of the NP, as revealed by the small peak C in Fig. 4 corresponding to adatoms on the top layer. An example where such adatoms can be seen is given in Fig. <ref>.
Despite the low atomic mobility, the MC chemical exchanges between Ag and Pt species allow the system to reorganize. It is observed that the chemical ordering associated with the alternating Ag and Pt planes in the core of the NP is lost, showing that the L1_1 structure is destabilized by the temperature.
On the other hand, the outermost silver layer is quite robust, as can be seen in the snapshot (Fig. <ref>) and from the presence of essentially pure Ag peaks in the first and last layers, denoted A and B in Fig. 4. Note however that these layers are no longer perfectly pure Ag, revealing that some Pt atoms can diffuse into the outer Ag shell. But, as can be seen in the inset, the peaks associated with Pt in these layers are not centred with respect to the corresponding Ag peaks. In each case the Pt peak is shifted towards the centre of the NP, indicating a strong penalty for Pt at the surface. Quantitatively, the integration of the Pt peaks gives their proportion in the A and B layers: 3.7% Pt in layer A and 5.5% Pt in layer B. The free Ag surface is thus slightly more favourable for Pt than the surface in contact with the support. This is an interesting feature, at odds with the observation that the Pt-SiO_2 interaction is stronger than the Ag-SiO_2 interaction (Table <ref>). A possible interpretation is that the Ag layer at the interface with silica is more constrained than the free one.
§.§ Structure and chemical ordering of AgPt nanodroplets on quartz
What happens above the melting point of AgPt? The structure of the NP is now expected to be destabilized (and at equilibrium thanks to the atomic mobility), which could influence the chemical ordering. The double objective is thus to determine the aspect ratio of the AgPt droplet and the chemical profiles at the interfaces. The first calculations are done well above the melting point, at T = 1500 K, for the AgPt NP deposited on the quartz surface.
We first calculate the density profiles ξ_r (z) and ξ_z (r_cyl) for all atoms in the drop without distinguishing Ag and Pt (Fig. <ref>). As can be seen on ξ_r (z), the layer structure along the z axis is smoothed out due to the thermal agitation, except close to the perfectly flat and ordered quartz surface, where one can observe at least three layers. In the upper region of the droplet the layering has completely disappeared and the density profile decreases smoothly with a small tail at z = 20 Å.
Along the radial coordinate, ξ_z (r_cyl) shows an essentially linear initial increase, corresponding to the integration of a uniform density on the surface of a cylinder. It then reaches a maximum and rapidly decreases due to the spherical shape of the drop, with a tail due to the smooth transition between the metal core and the surrounding vacuum.
Examination of the density profiles gives a good estimate of the drop height and diameter, but a best fit with the liquid drop model depicted in Fig. <ref>b gives better insight (smooth solid red lines in Fig. <ref>). Obviously, the layering observed in ξ_r (z) cannot be described by the uniform density model, but the average variations are captured. The height of the maxima compared to the drop model curve nevertheless reveals that the first layers are particularly structured, essentially because of the perfectly ordered quartz surface. Otherwise, the model quantitatively describes the variations of ξ_r (z) far from the support, as well as the variations of ξ_z (r_cyl) over the whole range of values. The corresponding density profile of the liquid drop model is shown in the inset, and the values given by the best fit are h = 13 Å and R = 19.5 Å, giving the aspect ratio H/D = 0.83. The uniform density is ρ_0 = 0.049 atom/Å^3.
The density profile in the inset shows that the thickness of the skin (5 Å) somehow corresponds to 1 to 2 atomic diameters and is not negligible compared to the radius of the droplet. A more refined profile can be acquired directly during the course of the simulation by calculating a spherical radial distribution ρ(r_sph) taking care to exclude the rays in the solid angle defined by the intersection between the sphere and the substrate. The result is given in Fig. <ref>. It confirms that the density within the core of the droplet is essentially uniform within uncertainties, due to the low statistics in the centre of the sphere. The average density on the plateau is in agreement with the value previously extracted from the best fit. It also confirms that the atomic density profile drops smoothly to zero within a skin thickness of 5 Å.
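The sketch below accumulates such a spherical radial density while skipping directions that point into the substrate; for simplicity the excluded cone is taken with a fixed half-angle estimated from h and the maximum radius, which is a cruder criterion than the exact sphere-substrate intersection used in practice.

import numpy as np

def radial_density(coords, center, h, nbins=40, r_max=25.0):
    """Spherical radial density rho(r_sph) about the droplet centre, excluding rays
    pointing into the cone cut out by the substrate (fixed-cone approximation)."""
    d = coords - np.asarray(center)
    r = np.linalg.norm(d, axis=1)
    uz = np.divide(d[:, 2], r, out=np.zeros_like(r), where=r > 0)
    cos_cut = -h / r_max                         # crude, fixed cone opening
    keep = uz >= cos_cut                         # drop rays pointing into the excluded cone
    counts, edges = np.histogram(r[keep], bins=nbins, range=(0.0, r_max))
    frac = (1.0 - cos_cut) / 2.0                 # solid-angle fraction that is kept
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3) * frac
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts / shell_vol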
The same analysis has been done at a lower temperature T = 1200 K (see Fig. <ref> for ξ_r (z) and ξ_z (r_cyl) and Fig. <ref> for ρ(r_sph)). As can be seen, reducing the temperature enhances the observed layering at the quartz surface. Otherwise, the structure is not affected significantly except for a slightly larger density in the core: ρ_0 = 0.051 atom/Å^3. The extracted values from the fit are h = 13 Å and R = 19.7 Å, giving an aspect ratio H/D = 0.83.
In order to quantify the chemical ordering at the interface due to the support, we focus only on the case of the AgPt droplet at 1200 K on quartz: the low temperature and the strong interaction with the support are expected to enhance the effect. Figure <ref> shows that at midheight (around z=0) the Ag and Pt species have equal probabilities to be at any position. However, close to the substrate, one observes a strong chemical ordering with an almost pure Ag layer at the interface with the quartz, the second layer being filled essentially with Pt, with the silver atoms at the periphery. On the other side (top of the NP), we also observe chemical ordering (silver excess at the interface with vacuum). All this is explained by the low surface tension of Ag, which preferentially migrates to the interfaces, while Pt accumulates in the subsurface, in particular at the interface with the support, a behaviour similar to what was observed in the solid state.
§.§ Effect of the support disorder and roughness on the liquid NPs
Does the strong layering observed on the quartz persist in the presence of disorder or roughness? To answer this question, we performed MC simulations of the AgPt NP on the amorphous SiO_2 surface (Fig. <ref>b) at T = 1500 K. The density profiles ξ_r (z) and ξ_z (r_cyl) exhibit essentially the same characteristics as for the quartz support, except for two points (see Fig. <ref>):
(i) The layering close to the support is significantly smoothed out due to the atomic disorder of the amorphous silica surface. Note that the roughness of this surface is quite small, but it also participates in the destabilization of the first layer. The consequence is that the density profile now closely approaches the smooth curve given by the liquid drop model.
(ii) The best fit with the model gives the following parameters: h = 14.5 Å and R = 19.5 Å, giving an aspect ratio H/D = 0.87. As can be seen, compared to quartz, the aspect ratio is slightly closer to 1, in agreement with the fact that the surface density of the a-SiO_2 is lower than that of the quartz which exhibits a densification due to the reconstruction. Otherwise, the density in the core of the drop is ρ_0 = 0.049 atom/Å^3, a value identical to that observed on the quartz.
Reducing the temperature favors layering along the z axis (see Fig. <ref>). It is however much less pronounced than on quartz. The main difference is that here the layering has a uniform amplitude from the first layer in contact with the substrate up to the center of the NP, while on quartz it increased strongly in the vicinity of the surface. This suggests that in the case of the quartz support, the surface clearly imposes a strong ordering, while, on the a-SiO_2 support, the layering is essentially intrinsic to the metal structure, although it is of course initiated and stabilized by the surface.
The best fit with the drop model gives the following parameters: h = 14.5 Å and R = 19.5 Å, giving an aspect ratio H/D = 0.87, and ρ_0 = 0.051 atom/Å^3, a value identical to that on the quartz at the same temperature.
§ CONCLUSION
The structure and chemical ordering in AgPt NPs of 3.4 nm (1289 atoms) deposited on silica supports have been investigated by Monte Carlo simulations. The introduction of chemical exchanges between the Ag and Pt species allows the system to converge towards the equilibrium chemical ordering even for NPs trapped in a metastable structure in terms of atomic positions. It is observed that silver preferentially migrates to the outer surface and to the interface with the substrate, preserving the same stable structure as for the free NP. Increasing the temperature to 700 K allows partial atomic mobility without melting the NP. One observes Ag adatoms on the external surface and close to the substrate, and a small diffusion of Pt atoms from the subsurface layer to the surface layer.
It is however observed that the Pt atoms at the periphery always remain slightly embedded in the outermost Ag layer, which is of importance for catalysis. Since the local structure can be strongly influenced by the surrounding atmosphere, further studies are necessary to quantify the catalytic efficiency of this system in real conditions. Systems exhibiting similar structures or chemical ordering illustrate this point <cit.>.
Higher temperatures have been considered, in the liquid state. The layer structure is expected to disappear, but the presence of the support is able to stabilize the first layers. This is particularly visible for the ordered quartz surface, but less pronounced for the disordered amorphous silica. The determination of the aspect ratio shows that the morphology of the supported drop follows the expected behaviour, with a lower aspect ratio on the more attractive quartz surface. Regarding chemical ordering, the external surface at the interface with the substrate is mostly composed of Ag species, with statistically few Pt atoms. Here again, examination of density profiles shows that the Pt atoms always remain slightly embedded in the outermost Ag layer.
§ DECLARATION OF INTERESTS
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGEMENTS
F. A. H. acknowledges a grant from the education and research ministry for her Ph.D. The authors would like to acknowledge support from the International Research Network - IRN “Nanoalloys” of CNRS.
|
http://arxiv.org/abs/2307.04868v1 | 20230710193252 | Leveraging an Alignment Set in Tackling Instance-Dependent Label Noise | [
"Donna Tjandra",
"Jenna Wiens"
] | cs.LG | [
"cs.LG"
] |
Leveraging an Alignment Set in Tackling Instance-Dependent Label Noise
Donna Tjandra, Jenna Wiens
August 12, 2023
====================================================================================================
Noisy training labels can hurt model performance. Most approaches that aim to address label noise assume label noise is independent from the input features. In practice, however, label noise is often feature or instance-dependent, and therefore biased (i.e., some instances are more likely to be mislabeled than others). E.g., in clinical care, female patients are more likely to be under-diagnosed for cardiovascular disease compared to male patients. Approaches that ignore this dependence can produce models with poor discriminative performance, and in many healthcare settings, can exacerbate issues around health disparities. In light of these limitations, we propose a two-stage approach to learn in the presence of instance-dependent label noise. Our approach utilizes alignment points, a small subset of data for which we know the observed and ground truth labels. On several tasks, our approach leads to consistent improvements over the state-of-the-art in discriminative performance (AUROC) while mitigating bias (area under the equalized odds curve, AUEOC). For example, when predicting acute respiratory failure onset on the MIMIC-III dataset, our approach achieves a harmonic mean (AUROC and AUEOC) of 0.84 (SD [standard deviation] 0.01) while that of the next best baseline is 0.81 (SD 0.01). Overall, our approach improves accuracy while mitigating potential bias compared to existing approaches in the presence of instance-dependent label noise.
*Data and Code Availability
This paper uses the MIMIC-III dataset <cit.>, which is available on the PhysioNet repository <cit.>. We also use two public datasets outside of the healthcare domain: 1) the Adult dataset[https://github.com/AissatouPaye/Fairness-in-Classification-and-Representation-Learning], and 2) the COMPAS dataset[https://www.kaggle.com/danofer/compass]. A link to the source code is provided in the footnote[https://github.com/MLD3/Instance_Dependent_Label_Noise].
*Institutional Review Board (IRB)
This work is not regulated as human subjects research since data are de-identified.
§ INTRODUCTION
Motivation and Problem Setting Datasets used to train machine learning models can contain incorrect labels (i.e., label noise), which can lead to overfitting. While label noise is widely studied, the majority of past work focuses on instance-independent label noise (i.e., when the noise is independent from an instance's features) <cit.>. However, label noise can depend on instance features <cit.>, leading to different noise rates within subsets of the data. Furthermore, in settings where the noise rates differ with respect to a sensitive attribute, this can lead to harmful disparities in model performance <cit.>. For example, consider the task of predicting cardiovascular disease among patients admitted to a hospital. Compared to male patients, female patients may be more likely to be under-diagnosed <cit.> and thus mislabeled, potentially leading to worse predictions for female patients. Although instance-dependent label noise has recently received more attention <cit.>, the effect of these approaches on model bias has been relatively understudied <cit.>. Here, we address current limitations and propose a novel method for learning with instance-dependent label noise in a setting inspired by healthcare, specifically examining how modeling assumptions affect existing issues around potential model bias.
Gaps in Existing Work Broadly, current work addressing instance-dependent label noise takes one of two approaches: 1) learn to identify mislabeled instances <cit.>, or 2) learn to optimize a noise-robust objective function <cit.>. In the first category, instances identified as mislabeled are either filtered out <cit.> or relabeled <cit.>. In some settings, this approach can have a negative effect on model bias. Revisiting our example on cardiovascular disease, approaches that filter out mislabeled individuals could ignore more female patients, since they have a potentially higher noise rate. While relabeling approaches use all available data, they can be sensitive to assumptions around the noise distribution <cit.>. In the second category, current approaches rely on objective functions that are less prone to overfitting to the noise and use all of the data and observed labels <cit.>. However, past work has empirically shown that these improve discriminative performance the most when used to augment filtering approaches, and thus, the limitations and scenarios described above still potentially hold.
Our Idea In light of these limitations, we propose an approach that addresses instance-dependent label noise, makes no assumptions about the noise distribution, and uses all data during training. We focus on a setting that frequently arises in healthcare, where we are given observed labels for a condition of interest (e.g., cardiovascular disease) and have a clinical expert who can evaluate whether the observed labels are correct for a small subset of the data (e.g., by manual chart review). Using this subset, which we refer to as the `alignment' set, we learn the underlying pattern of label noise in a pre-training step. We then minimize a weighted cross-entropy over all the data. Note that our alignment set is a special case of anchor points <cit.>, with the added requirement that the set contains instances for which the ground truth and observed labels do and do not match.
On synthetic and real data, we demonstrate that our approach improves on state-of-the-art baselines from the noisy labels and fairness literature, such as stochastic label noise <cit.> and group-based peer loss <cit.>. Overall, our contributions include:
* A novel approach to learn from datasets with instance-dependent noise that highlights a setting frequently found in healthcare
* A systematic examination of different settings of label noise, evaluating discriminative performance and bias mitigation
* Empirical results showing that the proposed approach is robust both to the noise rate and to the amount of noise disparity between subgroups, reporting the model’s ability to maintain discriminative performance and mitigate potential bias
* A demonstration of how performance of the proposed approach changes when assumptions about the alignment set are violated
§ METHODS
We introduce a two-stage approach for learning with instance-dependent label noise that leverages a small set of alignment points, for which we have both observed and ground truth labels.
*Notation and Setting
Our notation is summarized in Table <ref>, with additional notation defined throughout as needed. Our dataset, D=A ∪ Ā, consists of instances in A={x^(j), ỹ^(j), y^(j)}_j=1^a and Ā={x^(i), ỹ^(i)}_i=1^ā. A is the set of alignment points (i.e., the alignment set), where both ỹ^(j) and y^(j) are known, and we assume that it includes instances where ỹ^(j)≠ y^(j). Alignment points are a special case of anchor points <cit.>, where points that do and do not have matching observed and ground truth labels are both required. Ā is the non-alignment set and contains instances for which we do not know the ground truth labels. In the presence of noisy labels, we assume that whether ỹ=y is dependent on x (i.e., P(ỹ==y) ≠ P(ỹ==y |x)). Given this dataset, we aim to train a model to learn f: ℝ^d → [0, 1] (i.e., the function used to predict the ground truth labels), so that we can map unseen instances into one of two classes based on their feature vectors. Our learned model parameters, θ, are such that the output of the corresponding model represents the predicted class probabilities (i.e., ŷ). Although we focus on binary classification, our setup can be applied to multiclass classification.
Justification and Desired Properties Our setting is inspired by the use of pragmatic labeling tools in healthcare. Such tools are often based on various components of the electronic health record (EHR), and they are applied to identify cohorts or outcomes of interest <cit.>. However, while practical, such definitions are not always reflective of the ground truth, and thus, require validation through manual chart review. This is often done on a randomly chosen subset of individuals, which can be constructed to represent the target population and account for known heterogeneity. As a result, f is the function that predicts whether the condition is actually present, and the alignment set is the chart reviewed subset used to help learn f.
Through our approach, we aim to achieve: 1) robustness to the overall noise rate and 2) robustness to differences in noise rates between groups (i.e., the noise disparity). Revisiting our motivating example with EHR-based labeling tools, previous work has shown that labeling tools for rarer conditions such as drug-induced liver injury and dementia are more likely to be less reliable than those for common conditions <cit.>. Similar to how different noise rates can arise in practice, differences in noise rates between subgroups can also vary in practice <cit.>. As a result, achieving these properties can potentially make our approach generalize to a wide variety of settings.
*Proposed Approach Here, we describe the proposed network and training procedure.
Proposed Network. Our proposed network (Figure <ref>) consists of two components. The first, parameterized by θ, is a feed-forward network that uses feature vector x to predict the class probability, ŷ=P(y==1 |x; θ). The second component, parameterized by ϕ, is an auxiliary feed-forward network that uses observed label ỹ and features x to compute β̂=P(y==ỹ|ỹ, x; ϕ), an instance-dependent prediction for whether the observed label is correct based on x and ỹ. β̂ can be considered as a confidence score for the observed label, with higher values indicating higher confidence. Learning β̂ models the underlying pattern of label noise by forcing the model to learn which instances are correctly labeled. We use β̂ to reweight the objective function during the second step of training, as described below. By including the observed label as input to ϕ, our approach also applies to instance-independent label noise because it accounts for the case when the underlying pattern of label noise does not depend on the features. In order to learn β̂, we assume that the label noise pattern can be represented as some function, though the specific form of this function (e.g., linear) does not need to be known. During training, we compute the loss using the outputs from both networks. At inference time (i.e., in practical use after training), we compute the class predictions from the network parameterized by θ only since ỹ is unavailable.
Training Procedure. Our training procedure is summarized in Figure <ref> and Appendix <ref>. In Step 1, we pre-train both networks using the alignment points, A, minimizing an objective function based on cross entropy: θ', ϕ' = argmin_θ, ϕℒ_θ + α_1 ℒ_ϕ. α_1∈ℝ^+ is a scalar hyperparameter; θ' and ϕ' are parameters that represent the initial values of θ and ϕ. ℒ_θ is the cross-entropy loss between the class predictions and ground truth labels. It aids in learning the parameter values for θ, and thus, the model's decision boundary. 𝕀 is an indicator function.
ℒ_θ = -1/| A | ∑_j ∈ A [ 𝕀(y^(j)==1) log(ŷ^(j)) + 𝕀(y^(j)==-1) log(1-ŷ^(j)) ]
ℒ_ϕ is the cross-entropy loss between the predicted confidence score β̂^(j) and the actual agreement between ỹ^(j) and y^(j). It aids in learning the weights for ϕ, and thus, the underlying label noise pattern.
ℒ_ϕ = -1/| A | ∑_j ∈ A [ 𝕀(ỹ^(j)==y^(j)) log(β̂^(j)) + 𝕀(ỹ^(j)≠ y^(j)) log(1 - β̂^(j)) ]
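To make the two components and the Step 1 objective concrete, the sketch below gives one possible PyTorch realization; the hidden layer sizes, the use of sigmoid outputs, and the 0/1 (rather than -1/+1) label encoding are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelNoiseModel(nn.Module):
    """Classifier network (theta) plus auxiliary confidence network (phi)."""
    def __init__(self, d, hidden=64):
        super().__init__()
        # theta: predicts y_hat = P(y == 1 | x)
        self.theta = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # phi: predicts beta_hat = P(y == y_tilde | y_tilde, x)
        self.phi = nn.Sequential(nn.Linear(d + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, y_tilde):
        y_hat = torch.sigmoid(self.theta(x)).squeeze(-1)
        phi_in = torch.cat([x, y_tilde.unsqueeze(-1)], dim=-1)
        beta_hat = torch.sigmoid(self.phi(phi_in)).squeeze(-1)
        return y_hat, beta_hat

def step1_loss(model, x, y_tilde, y, alpha_1=1.0):
    """Step 1 objective L_theta + alpha_1 * L_phi on the alignment set (float 0/1 labels)."""
    y_hat, beta_hat = model(x, y_tilde)
    l_theta = F.binary_cross_entropy(y_hat, y)                          # prediction vs ground truth
    l_phi = F.binary_cross_entropy(beta_hat, (y_tilde == y).float())    # confidence vs agreement
    return l_theta + alpha_1 * l_phi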
In Step 2, we initialize θ and ϕ as θ' and ϕ' and fine tune using the complete dataset. Step 2 consists of two parts, Step 2a and Step 2b. Each part aims to improve a specific component of the network (e.g., θ) using another component of the network (e.g., ϕ). We begin with Step 2a, move to Step 2b, and continue to alternate between Step 2a and Step 2b in a manner similar to expectation maximization so that we continually improve both θ and ϕ. In Step 2a, we freeze ϕ and find θ that minimizes the objective ℒ'_θ + γℒ_θ. γ∈ℝ^+ is a scalar hyperparameter. In Step 2b, we freeze θ and find ϕ that minimizes the objective ℒ'_θ + α_2 ℒ_ϕ. α_2 ∈ℝ^+ is a scalar hyperparameter. ℒ_θ' computes the cross-entropy loss over the potentially noisy, non-alignment points. Each instance is weighted by the model's confidence in whether the observed label is correct via β̂^(i), taking advantage of the model's learned noise pattern. Our approach aims to mitigate bias by up-weighting groups k=1,2,...,g with a higher estimated noise rate, r̂_k, so that they are not dominated by or ignored relative to groups with a lower estimated noise rate.
ℒ_θ' = -1/|Ā| ∑_k=1^g 1/(1-r̂_k) ∑_i ∈ Ā∩ G_k ∑_j∈{-1, 1} β̂^(i)_ϕ 𝕀(ỹ^(i)==j) log(ŷ^(i)_j)
We calculate 1 - r̂_k as follows. We introduce sets G_k for k=1,2,...,g to represent disjoint subgroups of interest in the data, which are assumed to be known in advance. G_a ∩ G_b = ∅ for all a=1, 2, ..., g, b=1, 2, ..., g with a ≠ b and ∪_k=1^g G_k = D. Each group G_k is then associated with estimated noise rate r̂_k=1/| G_k |∑_i ∈ G_k (1-β̂^(i)). Although weighting each instance by β̂ is a form of soft filtering, weighting each group by the inverse of its overall `clean' rate avoids the effect of de-emphasizing groups with higher predicted noise rates. As a result, the expected value of ℒ_θ' with respect to β̂ is equal to the cross-entropy loss between the model's predictions and ground truth labels (see Appendix <ref> for proof). However, this assumes accurate estimates of β̂. Thus, we expect that the proposed approach will perform best when the alignment set is representative of the target population. In scenarios where the alignment set is biased (e.g., some groups are underrepresented), if the learned noise function does not transfer to the underrepresented group, then the proposed approach may not be beneficial. In Section <ref>, we test this.
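Continuing the sketch above, a possible implementation of the group-reweighted loss ℒ'_θ is shown below; the 0/1 label encoding, the per-batch estimate of the group clean rates, and the assumption that group membership is given as an integer tensor are illustrative choices. Which network receives gradients (Step 2a vs. Step 2b) is assumed to be handled by freezing parameters outside the loss.

import torch.nn.functional as F

def weighted_loss(model, x, y_tilde, group_ids, num_groups, eps=1e-6):
    """Group-reweighted cross entropy L'_theta over the (potentially noisy) non-alignment set."""
    y_hat, beta_hat = model(x, y_tilde)
    # per-instance cross entropy against the observed (possibly noisy) label
    ce = F.binary_cross_entropy(y_hat, y_tilde, reduction="none")
    loss = x.new_zeros(())
    for k in range(num_groups):
        mask = group_ids == k
        if mask.any():
            clean_rate = beta_hat[mask].mean().clamp_min(eps)   # estimate of 1 - r_k
            loss = loss + (beta_hat[mask] * ce[mask]).sum() / clean_rate
    return loss / x.shape[0]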
During Step 2a, ℒ_θ' is used to train θ by learning to predict ŷ such that it matches observed label ỹ on instances that are predicted to be correctly labeled. During Step 2b, ℒ_θ' is used to train ϕ. Here, since θ is frozen and ϕ is not, the network learns to predict the optimal β̂. Based on ℒ_θ' alone, there are two possible options to learn β̂: 1) consistently make β̂ close to 0, and 2) predict β̂ such that it is close to 1 when ŷ matches ỹ and close to 0 when ŷ does not match ỹ. Since ỹ is used as a proxy for y in this step, the second option aligns with what we want β̂ to represent. To encourage this over the first option (i.e., consistently predicting 0 for β̂), we include ℒ_ϕ in Step 2b, which is not minimized by consistently predicting 0 for β̂. Note that, in Step 2b, we rely on the cluster assumption <cit.> from semi-supervised learning, which broadly states that labeled data fall into clusters and that unlabeled data aid in defining these clusters. In the context of Step 2b, `labeled' and `unlabeled' are analogous to whether we know if the ground truth and observed labels match (i.e., alignment point versus non-alignment point), rather than the actual class labels themselves. As a result, we also rely on the alignment set being representative of the target population here to avoid dataset shift.
In contrast to previous filtering approaches, our approach utilizes all data during training. Moreover, it does not require a specialized architecture beyond the auxiliary network to compute β̂. Thus, it can be used to augment existing architectures.
§ EXPERIMENTAL SETUP
We empirically explore the performance of our proposed approach relative to state-of-the-art baselines on five benchmark prediction tasks with two different label noise settings. For reproducibility, full implementation details are provided in Appendices <ref> and <ref>. We aim to test 1) the extent to which our desired properties hold, 2) the extent to which the proposed approach is robust to changes in the composition of the alignment set, and 3) which components of the proposed approach contribute the most.
*Datasets We consider five binary prediction tasks on four datasets spanning several domains, using both synthetic and real data. Though inspired by healthcare, we also consider domains outside of healthcare to show the broader applicability of our approach in areas where harmful biases can arise (e.g., predicting recidivism and income). Throughout our experiments, we start by assuming the labels in the dataset are noise free, and we inject varying amounts of synthetic label noise. In this subsection, we describe the tasks, features, and `ground truth' labels we use. The next subsection will describe how we introduce synthetic label noise.
Synthetic: We generate a dataset containing 5,000 instances according to the generative process in Appendix <ref>. The positive rates for the majority and minority groups are 37.5% and 32.3%, respectively.
MIMIC-III: Within the healthcare domain, we leverage a publicly available dataset of electronic health record data <cit.>. We consider two separate prediction tasks: onset of 1) acute respiratory failure (ARF) and 2) shock in the ICU (intensive care unit) <cit.>. MIMIC-III includes data pertaining to vital signs, medications, diagnostic and procedure codes, and laboratory measurements. We consider the four hour prediction setup for both tasks as described by <cit.>, resulting in 15,873 and 19,342 ICU encounters, respectively. After preprocessing (see Appendix <ref>), each encounter had 16,278 and 18,186 features for each task respectively. We use race as a sensitive attribute, with about 70% of patients being white (positive rate 4.5% [ARF], 4.1% [shock]) and 30% being non-white (positive rate 4.4% [ARF], 3.7% [shock]).
Beyond healthcare, we use two benchmark datasets frequently considered in the fairness domain.
Adult: a publicly available dataset of census data <cit.>. We consider the task of predicting whether an individual's income is over $50,000. This dataset includes data pertaining to age, education, work type, work sector, race, sex, marital status, and country. Its training and test sets contain 32,561 and 16,281 individuals, respectively. We use a pre-processed version of this dataset and randomly select 1,000 individuals out of 32,561 for training. We also only include features pertaining to age, education, work type, marital status, work sector, and sex to make the task more difficult (see Appendix <ref>). After preprocessing, each individual was associated with 56 features, and all features had a range of 0-1. We use sex as a sensitive attribute, with 67.5% of individuals being male (positive rate 30.9%) and 32.5% being female (positive rate 11.3%).
COMPAS: a publicly available dataset collected by ProPublica from Broward County, Florida, USA <cit.>. We consider the task of predicting recidivism within two years, i.e., whether a criminal defendant is likely to re-offend. COMPAS includes data pertaining to age, race, sex, and criminal history. We use a pre-processed version of this dataset and also normalize each feature to have a range of 0-1 (see Appendix <ref>). After preprocessing, the dataset included 6,172 individuals with 11 features per individual. We use race as a sensitive attribute, with 65.8% of individuals being white (positive rate 39.1%) and 34.2% being non-white (positive rate 44.5%).
*Label Noise To test the robustness of our approach in different settings of label noise, we introduce synthetic instance-dependent label noise to our datasets. Like past work <cit.>, our setup is limited for the real datasets because our added noise is synthetic and we use the labels provided in the dataset as ground truth, since we do not have access to actual ground truth labels on these public datasets.
To introduce instance-dependent noise, mislabeling was a function of the features. Let w_m ∼ N(0, 0.33)^D and z_m = σ(x·w_m), where σ is the sigmoid function, denote the coefficients describing the contribution of each feature to mislabeling and the risk of mislabeling, respectively. Whether an instance was mislabeled was based on z_m and the desired noise rate. For example, for a noise rate of 30%, instances whose value for z_m was above the 70^th percentile had their labels flipped. This allowed us to vary the noise rate within subgroups in a straightforward manner. Across datasets, we focused on cases where the noise rate in the `minority' population was always greater than or equal to that of the `majority' group since this is more likely to occur <cit.>.
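The procedure can be reproduced with a few lines of NumPy, as sketched below; applying it separately to each subgroup with a different noise_rate yields the desired noise disparity. The 0/1 label encoding and the fixed random seed are illustrative assumptions.

import numpy as np

def inject_instance_dependent_noise(X, y, noise_rate, rng=None):
    """Flip the labels of the instances with the highest mislabeling risk z_m."""
    rng = rng or np.random.default_rng(0)
    w_m = rng.normal(0.0, 0.33, size=X.shape[1])      # feature contributions to mislabeling
    z_m = 1.0 / (1.0 + np.exp(-X @ w_m))              # sigmoid(x . w_m): mislabeling risk
    threshold = np.quantile(z_m, 1.0 - noise_rate)    # e.g. 70th percentile for 30% noise
    y_noisy = y.copy()
    flip = z_m > threshold
    y_noisy[flip] = 1 - y_noisy[flip]                 # binary 0/1 labels assumed
    return y_noisy, flip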
*Evaluation Metrics We evaluate our proposed approach in terms of discriminative performance and model bias. For discriminative performance, we evaluate using the area under the receiver operating characteristic curve (AUROC) (higher is better).
With respect to model bias, while there exist many different measures, we focus on equalized odds <cit.>, since it is commonly used in the context of healthcare <cit.>, when similar performance across groups is desired <cit.>. Because equalized odds focuses on the difference between the true and false positive rates among groups, it is applicable to many settings in healthcare since the consequences of failing to treat a patient in need <cit.>, or giving an inappropriate treatment <cit.> can be serious. More specifically, we measure the area under the equalized odds curve (AUEOC) <cit.> (higher is better). For classification threshold τ, we calculate the equalized odds (EO(τ)) between two groups, called 1 and 2, as shown below. TP_a(τ) and FP_a(τ) denote true and false positive rates for group a at threshold τ, respectively. The AUEOC is obtained by plotting the EO against all possible values of τ and calculating the area under the curve.
EO(τ) = (2 - | TP_1(τ) - TP_2(τ) | - | FP_1(τ) - FP_2(τ) |)/2
We compute the harmonic mean (HM) between the AUROC and AUEOC to highlight how the different approaches simultaneously maintain discriminative performance and mitigate bias. In the harmonic mean the worse performing metric dominates. For example, if a classifier has AUROC=0.5 and AUEOC=1.0, the harmonic mean will emphasize the poor discriminative performance.
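For reference, the two fairness-related quantities can be computed as sketched below; the evenly spaced threshold grid and the trapezoidal integration are assumptions made here, and groups are assumed to be encoded as 0/1.

import numpy as np

def aueoc(y_true, y_score, group, thresholds=np.linspace(0.0, 1.0, 101)):
    """Area under the equalized odds curve for two groups (0 and 1)."""
    eo = []
    for tau in thresholds:
        pred = (y_score >= tau).astype(int)
        tp, fp = [], []
        for g in (0, 1):
            pos = (group == g) & (y_true == 1)
            neg = (group == g) & (y_true == 0)
            tp.append(pred[pos].mean() if pos.any() else 0.0)
            fp.append(pred[neg].mean() if neg.any() else 0.0)
        eo.append((2.0 - abs(tp[0] - tp[1]) - abs(fp[0] - fp[1])) / 2.0)
    return np.trapz(eo, thresholds)

def harmonic_mean(auroc, aueoc_value):
    """Combined metric: the worse of the two scores dominates."""
    return 2.0 * auroc * aueoc_value / (auroc + aueoc_value)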
*Baselines We evaluate our proposed approach with several baselines to test different hypotheses.
Standard does not account for label noise and assumes that ỹ=y is always true.
SLN + Filter <cit.> combines filtering <cit.> and stochastic label noise (SLN) <cit.> and was shown to outperform state-of-the-art approaches like Co-Teaching <cit.> and DivideMix <cit.>. It relies on filtering heuristics, which indirectly rely on uniform random label noise to maintain discriminative performance and mitigate bias.
JS (Jensen-Shannon) Loss <cit.> builds on semi-supervised learning and encourages model consistency when predicting on perturbations of the input features. It was shown to be competitive with other state-of-the-art noise-robust loss functions <cit.>. It was proposed for instance-independent label noise.
Transition <cit.> learns to correct for noisy labels by learning a transition function and was shown to outperform state-of-the-art approaches such as MentorNet <cit.>. It applies to instance-dependent label noise, but it assumes that the contributions of each feature to mislabeling and input reconstruction are identical.
CSIDN (confidence-scored instance-dependent noise) <cit.> also learns a transition function and was shown to outperform state-of-the-art approaches such as forward correction <cit.>. Like our approach, CSIDN uses the concept of `confidence' in the observed label to help with training. Unlike our approach, CSIDN uses the model's class predictions directly as confidence scores (instead predicting them via an auxiliary network) and uses them to learn the transition function (as opposed to re-weighting the loss).
Fair GPL <cit.> builds on work addressing uniform random label noise <cit.> and uses peer loss (i.e., data augmentation that reduces the correlation between the observed label and model's predictions) within subgroups <cit.>. It assumes that label noise only depends on group membership.
We also train a model using the ground truth labels (called Clean Labels) as an empirical upper bound for discriminative performance.
*Implementation Details
For each dataset, we randomly split the data into 80/20% training/test, ensuring that data from the same individual did not appear across splits. For the Adult dataset, we used the test set provided and randomly selected 1,000 individuals from the training set. We then randomly selected 10% of the training data for all datasets except MIMIC-III from each subgroup to be alignment points, thereby ensuring that they were representative of the overall population. For the MIMIC-III dataset, 2% from each subgroup were selected as alignment points due to the larger size of the dataset. Alignment points were selected randomly to simulate our setting of focus, where we have a proxy labeling function and then randomly select a subset of the data to chart review in order to validate the proxy function. Then, for all datasets, half of the alignment points were set aside as a validation set to use during training for early stopping and hyperparameter selection, while the other half remained in the training set. Later, in our experiments, we evaluated when the alignment set size varied and when the alignment set was biased. All approaches (i.e., baselines and proposed) were given the ground truth labels for data in the alignment set (i.e., no noise added to alignment points) during training so that some approaches did not have an unfair advantage.
All models were trained in Python3.7 and Pytorch1.7.1 <cit.>, using Adam <cit.>. Hyperparameters, including the learning rate, L2 regularization constant, and objective function scalars (e.g., α), were tuned using random search, with a budget of 20. We used early stopping (patience=10) based on validation set performance, which we measured with the HM. We report results on the held-out test set, showing the mean and standard deviation over 10 replications.
§ RESULTS AND DISCUSSION
We describe the results from experiments with instance-dependent noise. For each plot, we combined discriminative performance and bias mitigation and plotted the HM of the AUROC and AUEOC to assess general performance with respect to both metrics. We show the AUROC and AUEOC separately in Appendix <ref>. Additional experiments are provided in Appendix <ref>. Their results are summarized here.
*Robustness to Noise Rate Here, we investigated how robust the proposed approach and baselines were to varying amounts of instance-dependent label noise (Figure <ref>). Since noise was synthetically introduced and not dataset specific, we conducted two experiments on the synthetic dataset. In the first, we varied the overall noise rate from 10-60% in the majority group. For the minority group, we considered noise rates that were consistently 20% higher than that of the majority group, to keep the noise disparity level (i.e., the difference in noise rates between subgroups) constant. In the second, we varied the minority noise rate from 20-90% with a majority noise rate fixed at 20% throughout (i.e., from 0-70% disparity) on the synthetic dataset.
Part 1: Overall Noise Rate. Overall, our proposed approach demonstrated robustness to a variety of noise rates within a realistic range (Figure <ref>). At low minority noise rates (i.e., below 40%), the proposed approach and baselines, with the exception of JS Loss, were competitive. As the noise rate increased, many of the baselines experienced noticeable degradation in performance. The proposed approach and Transition showed more robustness, with the proposed approach being the most robust until a minority noise rate of 80%, which represents an extreme case of label noise.
Part 2: Noise Disparity. Like the previous experiment, the proposed approach was robust over a variety of noise disparities (Figure <ref>). This is likely because the objective function ℒ'_θ from Step 2 of training accounts for disparities by scaling each instance-specific loss term with the reciprocal of its estimated group clean rate (i.e., 1 - the estimated group noise rate). Similar to the previous experiment, at a minority noise rate of 80% and above, the proposed approach was no longer the most robust, though this setting is unlikely to occur in practice.
*Sensitivity to Alignment Set Composition Our next set of experiments tested the proposed approach in settings where we relax key assumptions about the alignment set. We considered all datasets with instance-dependent noise. The majority/minority noise rates were 20%/40%, respectively. Here we show performance with respect to the proposed approach, Standard, and Clean Labels. Results for the other baselines are included in Appendix <ref>.
Part 1: Alignment set size. We varied the size of the alignment set, from 1% to 15% of the training set, with the alignment set being representative of the test set (Figure <ref>). The proposed approach was robust to a wide range of alignment set sizes, only showing noticeable degradation at set sizes of 3% or lower. As the size of the alignment set grew, performance improved, likely since having a larger alignment set provided access to a larger set of ground truth labels at training time. Although the minimum number of alignment points required is likely to vary depending on the task, our results are promising in that they show that our approach is effective on a variety of real life tasks, even when the alignment set is small (i.e., as little as 3% of the data).
Part 2: Biased alignment set. Here, we test how the proposed approach performs when the alignment set is not representative of the population. We varied the amount of bias in the alignment set by changing the proportion at which the subgroups were present. We kept the size of the alignment set constant at 10% of the training data (2% for MIMIC-III on both tasks). We observed that the proposed approach was robust over a wide range of conditions, i.e., when the minority proportion is 20%-80% (Figure <ref>). We hypothesize that this is because the learned relationship between the features and noise can generalize across groups to an extent. In scenarios where performance of the proposed approach degraded, one subgroup heavily dominated the alignment set. This is shown in Figure <ref> on the extremes of the x-axis of some datasets, which correspond to an alignment set that is heavily over-represented for one subgroup and heavily under-represented for the other. Our approach relies, in part, on having a relatively unbiased alignment set for estimating β̂ in order to avoid introducing dataset shift between the two steps of our training pipeline. Thus, these results are in line with our expectations and highlight a limitation of our approach. However, despite this reliance, we observe that our approach is still robust in some scenarios where the alignment set is biased.
*Which Parts of Our Approach Matter? Our last set of results examines the individual components of the approach itself on the synthetic dataset. Here, we performed an ablation study where we began with training on only the alignment points (i.e., Step 1 of our approach), and then gradually added the other components of our approach (e.g., add Step 2a). In summary, while each component improved performance, we find that the most improvement came from adding ℒ_θ and ℒ_ϕ during Steps 2a and 2b, respectively, as opposed to using only ℒ_θ' during those steps. We also performed a hyperparameter sensitivity analysis on the three hyperparameters, α_1, γ, and α_2, that our approach introduced. The approach was most sensitive to the α_2 hyperparameter and more robust to α_1 and γ. We include results for the ablation study and hyperparameter sensitivity analysis in Appendix <ref>.
§ RELATED WORK
We build from previous work in label noise and address key limitations. Generally, many state-of-the-art approaches <cit.> are limited in that they do not consider instance-dependent noise, do not consider the potential consequences of bias in label noise, or do not leverage the information our setting provides. We tackle these limitations by accounting for differences in noise rates among subsets of the data and taking advantage of additional information that can be found in our setting. In this section, we summarize past work and highlight our contributions.
*Identifying Mislabeled Data Approaches that learn to identify mislabeled instances fall into two sub-categories: 1) filtering approaches and 2) relabeling approaches. Filtering approaches use heuristics to identify mislabeled instances (e.g., MentorNet <cit.>, Co-teaching <cit.>, FINE <cit.>). Many are based on the idea that correctly labeled instances are easier to classify than mislabeled instances (i.e., the memorization effect) <cit.>. For example, mislabeled instances could be those that the model incorrectly classifies <cit.>, have a high loss value <cit.>, or significantly increase the complexity of the model <cit.>. Given the identified mislabeled instances, these approaches either ignore them during training <cit.> or treat them as `unlabeled’ and apply techniques from semi-supervised learning (e.g., DivideMix <cit.>, SELF <cit.>). Overall, these heuristics have been shown to improve discriminative performance. However, depending on the setting, they can disproportionately discard subsets of data, which could exacerbate biases in model performance.
For binary classification, some approaches `correct' (i.e., switch) the observed label for instances that are predicted to be incorrect <cit.>. Building on this idea, others make use of a transition function that estimates the probability of the observed label being correct. Model predictions can then be adjusted by applying the transition function to the classifier's predictions for each class. Some works manually construct the transition function from expert knowledge <cit.>, while others learn it <cit.>. However, such approaches often make assumptions on the form of the noise distribution, and past work has shown that results are sensitive to the choice of distribution <cit.>.
To date, much of the work described above assumes instance-independent label noise (i.e., mislabeling is independent of the features). However, when this assumption is violated, the model may overfit to label noise <cit.>. From an emerging body of work in instance-dependent label noise <cit.>, current approaches remain limited in that they still rely on filtering heuristics. Although we use soft filtering, we filter based on the learned relationship between the features and noise rather than existing heuristics and upweight groups with a higher estimated noise rate. While similar to a transition function in some aspects, our approach requires fewer probability estimates on label correctness (two estimates compared to the number of classes squared for a transition function) while achieving state-of-the-art performance.
*Noise-Robust Loss Functions Prior work examines how regularization techniques can be adapted to the noisy labels setting, addressing issues related to overfitting on noisy data <cit.>. Label smoothing, and in some cases negative label smoothing, were found to improve the accuracy on both correctly labeled and mislabeled data <cit.>. With this approach, the observed labels are perturbed by a small, pre-determined value, with all labels receiving the same perturbation at every training epoch. Follow-up work found that, instead of applying the same perturbation at each epoch, adding a small amount of Gaussian stochastic label noise (SLN) at each epoch resulted in further improvements, as it helped to escape from local optima <cit.>. However, these approaches were most beneficial in the context of augmenting existing methods that identify mislabeled instances (e.g., stochastic label noise is applied to instances that are identified as correctly labeled by filtering approaches), and thus, potentially suffer from the same limitations. Alternatively, recent work has also proposed perturbing the features to encourage consistency in the model's predictions <cit.>, though mainly in the context of instance-independent label noise. Others have proposed noise-robust variations of cross entropy loss <cit.> but generally relied on assumptions like the memorization effect.
*Label Noise in Fairness Label noise has also been addressed within the fairness literature recently. When the frequencies at which subgroups (defined by a sensitive attribute) appear are different within a dataset, past work has shown that common approaches addressing label noise can increase the prediction error for minority groups (i.e., rarer subgroups) <cit.>. Past work proposed to re-weight instances from subgroups during training where model performance is poorer <cit.> in the instance-independent noise setting. Others use peer loss <cit.> within subgroups <cit.> but assume that noise depends only on the sensitive attribute. We also train with a weighted loss, but weights are based on predicted label correctness rather than performance on the observed labels. Recently, <cit.> addressed some of the gaps of past work by examining the instance-dependent case. Our proposed approach differs from theirs in that we do not require our features to be grouped into distinct categories, such as root and low level attributes.
*Anchor Points for Addressing Label Noise Another related setting in past work uses anchor points. Anchor points are subsets of the data where the ground truth labels are known <cit.>. To date, anchor points are generally used to learn a transition function <cit.> or for label correction directly <cit.>. We use a similar concept, alignment points, to 1) pre-train the model, and 2) predict label correctness. The first part builds from work in semi-supervised learning <cit.>, which has shown improvements from pre-training on labeled data. The second part is similar to a transition function, but differs in that we use the correctness predictions to re-weight the loss rather than adjust the predictions. We also assume that, for some alignment points, the ground truth and observed labels do not match. Generally, anchor-based approaches mitigate model bias by implicitly assuming that the anchor points are representative of the target population. Our approach also uses this assumption, but we empirically explore how model performance changes when the anchor points are biased (i.e., not representative), since it may be easier to obtain correct labels for specific subgroups <cit.>.
§ CONCLUSION
We introduce a novel approach for learning with instance-dependent label noise. Our two-stage approach uses the complete dataset and learns the relationship between the features and label noise using a small set of alignment points. On several datasets, we show that the proposed approach leads to improvements over state-of-the-art baselines in maintaining discriminative performance and mitigating bias. Our approach is not without limitations. We demonstrated that the success of the approach depends, in part, on the representativeness of the alignment set. Our experiments were also on pseudo-synthetic data in which we injected noise; this assumes we start from a noise free dataset. Finally, we only examined one form of bias in a specific case of instance-dependent label noise. Nonetheless, our case frequently arises in healthcare, especially when pragmatic (e.g., automated) labeling tools are used on large datasets, and chart review on the entire dataset is infeasible.
This work was supported by Cisco Research and the National Science Foundation (NSF award no. IIS 2124127). The views and conclusions in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of Cisco Systems Inc. or the National Science Foundation. We also thank the anonymous reviewers for their valuable feedback.
§ PROPOSED APPROACH: ADDITIONAL DETAILS
We provide additional details on our approach, including a general overview in the form of pseudocode as well as a justification for the proposed objective function and its relation to the clean label loss.
§.§ General Overview
We summarize our approach with pseudocode below in Algorithm <ref>. We begin with the dataset and initial model parameters, and we aim to use the dataset to learn the final model parameters. A is the alignment set (i.e., the set of alignment points). θ' and ϕ' are the initial model parameters for the θ and ϕ networks. Here, 'stopping criteria' may refer to any stopping criteria, such as early stopping. The Freeze() function takes as input model parameters and freezes them, and the Unfreeze() function takes as input model parameters and unfreezes them.
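Since the algorithm block itself is not reproduced in this text, the following sketch restates the described loop in Python, reusing step1_loss and weighted_loss from the sketches in the main text; the freezing mechanism, the pairing of alignment and non-alignment batches via zip, and the epoch counts are simplifications rather than the authors' exact implementation.

import torch.nn.functional as F

def set_requires_grad(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

def phi_only_loss(model, x, y_tilde, y):
    """L_phi alone: cross entropy between beta_hat and label agreement."""
    _, beta_hat = model(x, y_tilde)
    return F.binary_cross_entropy(beta_hat, (y_tilde == y).float())

def train(model, align_loader, nonalign_loader, opt, alpha_1, alpha_2, gamma,
          num_groups, epochs_step1=50, epochs_step2=50):
    # Step 1: pre-train theta and phi on the alignment set only
    for _ in range(epochs_step1):
        for x, y_tilde, y in align_loader:
            opt.zero_grad()
            step1_loss(model, x, y_tilde, y, alpha_1).backward()
            opt.step()

    # Step 2: alternate between Step 2a (update theta) and Step 2b (update phi)
    for _ in range(epochs_step2):
        # Step 2a: freeze phi, minimize L'_theta + gamma * L_theta
        set_requires_grad(model.phi, False); set_requires_grad(model.theta, True)
        for (x, y_t, g), (xa, ya_t, ya) in zip(nonalign_loader, align_loader):
            opt.zero_grad()
            loss = (weighted_loss(model, x, y_t, g, num_groups)
                    + gamma * step1_loss(model, xa, ya_t, ya, alpha_1=0.0))
            loss.backward()
            opt.step()

        # Step 2b: freeze theta, minimize L'_theta + alpha_2 * L_phi
        set_requires_grad(model.theta, False); set_requires_grad(model.phi, True)
        for (x, y_t, g), (xa, ya_t, ya) in zip(nonalign_loader, align_loader):
            opt.zero_grad()
            loss = (weighted_loss(model, x, y_t, g, num_groups)
                    + alpha_2 * phi_only_loss(model, xa, ya_t, ya))
            loss.backward()
            opt.step()
    # early stopping on a validation harmonic mean of AUROC and AUEOC is omitted here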
§.§ Proposed and Clean Label Loss
We show that minimizing the proposed loss ℒ'_θ from Step 2 of the proposed method is equal to minimizing cross entropy on the clean labels in expectation.
ℒ'_θ = -1/|Ā| ∑_k=1^g ∑_i ∈ Ā∩ G_k 1/(1-r̂_k) ∑_j=1^c β̂^(i)_ϕ 𝕀(ỹ^(i)==j) log(ŷ^(i)_j)
Therefore,
𝔼[ ∑_k=1^g ∑_i ∈ Ā∩ G_k 1/(1-r̂_k) ∑_j=1^c β̂^(i)_ϕ 𝕀(ỹ^(i)==j) log(ŷ^(i)_j) ]
= ∑_k=1^g ∑_i ∈ Ā∩ G_k 1/(1-r̂_k) ∑_j=1^c 𝔼[ β̂^(i)_ϕ 𝕀(ỹ^(i)==j) log(ŷ^(i)_j) ]
= ∑_k=1^g ∑_i ∈ Ā∩ G_k 1/(1-r̂_k) ∑_j=1^c (1-r̂_k) 𝕀(y^(i)==j) log(ŷ^(i)_j)
= ∑_k=1^g ∑_i ∈ Ā∩ G_k ∑_j=1^c 𝕀(y^(i)==j) log(ŷ^(i)_j)
As a reminder, each group G_k is then associated with estimated noise rate r̂_k = 1/| G_k | ∑_i ∈ G_k (1-β̂^(i)_ϕ) and estimated clean (i.e., correct) rate 1 - r̂_k = 1/| G_k | ∑_i ∈ G_k β̂^(i)_ϕ. We can express the noise and clean rates in terms of β̂^(i)_ϕ since
1 - r_k = 1/| G_k |∑_i ∈ G_k𝕀(ỹ^(i)==y^(i))
= P(y==ỹ|ỹ, x) for a random instance in G_k
= 1/| G_k |∑_i ∈ G_k P(y^(i)==ỹ^(i)|ỹ^(i), x^(i))
where r_k and 1 - r_k are the actual noise and clean rates within group k, respectively. Therefore, since β̂_ϕ is trained to predict P(y==ỹ|ỹ, x), we estimate the noise and clean rates using β̂_ϕ.
§ PREPROCESSING DETAILS
Here, we provide more detail on our synthetic data generation process and real dataset pre-processing.
§.§ Synthetic
Our data generation process is as described below. Note that the Percentile(p, {z}) function outputs the p^th percentile over all values in {z}. We defined the feature at index 0 to be a synthetic sensitive attribute. Instances with values below the 20^th percentile for this feature were considered as the `minority', and the rest were considered as the `majority'. Features 10-19 for the majority instances and features 20-29 for the minority instances were set to 0 to provide more contrast between the two groups. For individual i,
\begin{aligned}
& d = 30, \qquad x^{(i)} \sim N(0, 1)^{30}\\
& w \sim N(0, 1)^{30}, \qquad z^{(i)} = x^{(i)} \cdot w\\
& y^{(i)} = 1 \ \text{if}\ z^{(i)} > \mathrm{Percentile}\bigl(50, \{z^{(j)}\}_{j=1}^{5000}\bigr)\ \text{else}\ 0\\
& x^{(i)}_{j} = 0 \ \text{for}\ j = 10, 11, \ldots, 19 \quad \text{if}\ x^{(i)}_{0} > \mathrm{Percentile}\bigl(20, \{x^{(j)}_{0}\}_{j=1}^{5000}\bigr)\\
& x^{(i)}_{j} = 0 \ \text{for}\ j = 20, 21, \ldots, 29 \quad \text{if}\ x^{(i)}_{0} < \mathrm{Percentile}\bigl(20, \{x^{(j)}_{0}\}_{j=1}^{5000}\bigr)
\end{aligned}
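The generation process above can be reproduced with the short NumPy sketch below; the sample size n = 5000 is read off the percentile sets in the equations, and the random seed is an arbitrary choice.

import numpy as np

def generate_synthetic(n=5000, d=30, seed=0):
    # Sketch of the synthetic data generation described above (not the authors' code).
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, 1.0, size=(n, d))        # x^(i) ~ N(0,1)^30
    w = rng.normal(0.0, 1.0, size=d)             # w ~ N(0,1)^30
    z = X @ w                                    # z^(i) = x^(i) . w
    y = (z > np.percentile(z, 50)).astype(int)   # label from the 50th percentile of z
    cutoff = np.percentile(X[:, 0], 20)          # feature 0 is the synthetic sensitive attribute
    majority = X[:, 0] > cutoff
    X[majority, 10:20] = 0.0                     # zero features 10-19 for the majority
    X[~majority, 20:30] = 0.0                    # zero features 20-29 for the minority
    return X, y, ~majority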
§.§ MIMIC-III
Data were processed using the FlexIble Data Driven pipeLinE (FIDDLE) <cit.>, a publicly available pre-processing tool for electronic health record data. We used the same features as <cit.> for our tasks. More information can be found at https://physionet.org/content/mimic-eicu-fiddle-feature/1.0.0/.
§.§ Adult
Although we used a pre-processed version of this dataset, we omitted features pertaining to education, work type, and work sector to make the task more difficult. More specifically, in the file `headers.txt' at the repository mentioned in Footnote 1, we kept all features beginning with `age', `workclass', `education', `marital status', and `occupation'. We also kept the `Sex_Female' feature. The remaining features were excluded. Values were normalized for each feature to a range of 0-1 by subtracting the minimum value observed among all individuals and dividing by the range. During training, we used only 1,000 randomly selected individuals from the provided dataset, since having fewer samples to learn from makes the task harder. We made the task more difficult for this dataset to further highlight the differences in performance between the approaches.
§.§ COMPAS
Although we used a pre-processed version of this dataset, we omitted the feature `score_factor' (i.e., the risk score for recidivism from the ProPublica model) to make the task more difficult. Values were normalized for each feature to a range of 0-1 by subtracting the minimum value observed among all individuals and dividing by the range.
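The min-max normalization (and, for Adult, the subsampling) described above can be sketched as follows; the guard against constant columns is an addition of this sketch, not part of the described pipeline.

import numpy as np

def min_max_normalize(X):
    # scale every feature (column) to the range [0, 1]
    mins = X.min(axis=0)
    ranges = X.max(axis=0) - mins
    ranges[ranges == 0] = 1.0   # sketch-only safeguard against constant columns
    return (X - mins) / ranges

def subsample(X, y, n=1000, seed=0):
    # keep only n randomly selected individuals (as done for the Adult dataset)
    idx = np.random.default_rng(seed).choice(len(X), size=n, replace=False)
    return X[idx], y[idx]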
§ ADDITIONAL NETWORK AND TRAINING DETAILS
Here, we describe our ranges of hyperparameters and implementation choices for the proposed network. All networks were trained on Intel(R) Xeon(R) E7-4850 v3 CPUs @ 2.20GHz and Nvidia GeForce GTX 1080 GPUs. All layers were initialized with He initialization from a uniform distribution. We divide our training data into five batches during training. All random seeds (for PyTorch, numpy, and Python's random) were initialized with 123456789.
§.§ Hyperparameter Values Considered
Here, we show the range of values we considered for our random search. More details are provided in Table <ref>. For any hyperparameters associated with the Adam optimizer not mentioned above, we used the default values. Not all hyperparameters were used with each approach. `Filter Threshold' and `Noise Added' were only used with the baseline SLN + Filter. Here, Filter Threshold refers to the minimum value of the predicted probability of the observed label for an instance to be considered `correctly labeled'. For example, if Filter Threshold=0.5, then all examples whose predicted probability for the observed label is at least 0.5 are considered `correct' and used during training. `Number of Parts' was only used with the baseline Transition. `α_GPL' was only used with the baseline Fair GPL. `α_1Proposed', `α_2Proposed', and `γ_Proposed' were only used with the proposed method. Here, `α_1Proposed' and `α_2Proposed' correspond to the terms α_1 and α_2 that were used in the objective functions. We refer to them with the added term `Proposed' in the subscript in this section to distinguish them from the α value used by the baseline Fair GPL.
§.§ Network Details
For the overall architecture, we used a feed forward network with two hidden layers. The auxiliary β prediction component was also implemented with two feed forward layers. All layer sizes are as described in Table <ref>. In addition, we used the ReLU activation function. The complete implementation can be found in the attached code.
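A minimal PyTorch sketch of this architecture is given below. The hidden sizes are placeholders for the values reported in Table <ref>, and feeding the raw features to the auxiliary head is an assumption of this sketch rather than a detail stated here.

import torch.nn as nn

class NoisyLabelNet(nn.Module):
    # Feedforward classifier (theta) with an auxiliary beta-prediction head (phi).
    def __init__(self, in_dim, num_classes, hidden_dim=64, beta_hidden_dim=32):
        super().__init__()
        # main network: two hidden feedforward layers with ReLU
        self.theta = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )
        # auxiliary component: two feedforward layers predicting beta-hat in [0, 1]
        self.phi = nn.Sequential(
            nn.Linear(in_dim, beta_hidden_dim), nn.ReLU(),
            nn.Linear(beta_hidden_dim, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.theta(x), self.phi(x).squeeze(-1)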
§ EXPANDED RESULTS
Here, we describe additional results that were not included in the main text. We begin with follow-up experiments on the synthetic data and then describe results from the real data.
§.§ Robustness to Noise Rate Expanded
Here we include the AUROC and AUEOC plotted separately for the experiments where we varied the overall noise rate and noise disparity.
As we varied the overall noise rate (Figure <ref>), the proposed approach is able to consistently outperform the baselines with respect to discriminative performance until a minority noise rate of 80%. This observation is similar to what we observed with the HM. With respect to bias mitigation, the proposed approach is not more beneficial than the baselines up to a minority noise rate of 60%. At a minority noise rate above 60%, our approach experienced the least degradation compared to the baseline approaches. This is in line with our expectations since our approach explicitly accounts for differences in noise rates among groups during training.
As we varied the noise disparity (Figure <ref>), we have similar observations to the previous experiment in that the proposed approach is able to consistently outperform the baselines with respect to discriminative performance until a minority noise rate of 80%. With respect to bias mitigation, the proposed approach is not more beneficial than the baselines up to a minority noise rate of 40%. At a minority noise rate above 40%, our approach experienced the least degradation compared to most of the other baseline approaches and was comparable to the Transition baseline. Unlike the previous experiment, the degradation in AUEOC among many of the baseline approaches is larger, which is in line with our expectations since we were directly changing the difference in noise rates between the groups while the previous experiment kept the difference constant.
§.§ Ablation Study
We also examined our approach more closely by conducting an ablation study and a hyperparameter sensitivity analysis on the synthetic data. We used the synthetic dataset since our noise was synthetically introduced and not dataset specific. In our ablation study (Figure <ref>), we began with training on only the anchor points (i.e., Step 1 only), which achieved the worst performance. We then introduced Step 2 and added the remaining training data (i.e., non-anchor points) but only trained using ℒ'_θ. This led to an improvement in performance, but not to the level of the full approach. The next two ablations build on the previous one. In the first one, we added continued supervision on the anchor points with ℒ_θ, and observed an improvement in performance, likely due to the retention of high quality data in this step. In the second one, we added continued supervision on the anchor points using ℒ_ϕ, and observed an even larger improvement. This is likely because including ℒ_θ prevented the model from learning a solution where β̂ was small for all instances, as previously discussed. Finally, we end with our full proposed approach, which performed noticeably better than each of the ablations, showing the importance of each component.
§.§ Hyperparameter Sensitivity Analysis
In our sensitivity analysis on the synthetic data (Figure <ref>), we tested how performance of the (full) proposed approach varied to changes in the hyperparameters α_1, α_2, and γ. For each of these hyperparameters, we measured performance at values between 0.01 and 100 on a logarithmic scale while keeping the other two values constant at 1. We found that α_1 and γ were the most robust to changes in the value. We found that α_2 was more sensitive, with values between 0.1 and 10 generally working best.
§.§ Sensitivity to Anchor Set Composition Expanded
In our analysis of sensitivity to anchor set composition, we include results for the other baselines (Figure <ref>). At anchor set sizes below 5% on the real datasets, the proposed approach outperformed the baselines. At larger anchor set sizes, the baseline Transition was able to match the proposed method due to the increased amount of clean data. The proposed approach also outperformed the baselines in the unbiased setting and remained competitive as bias in the anchor set increased.
|
http://arxiv.org/abs/2307.04054v1 | 20230708222123 | Deep Unsupervised Learning Using Spike-Timing-Dependent Plasticity | [
"Sen Lu",
"Abhronil Sengupta"
] | cs.CV | [
"cs.CV"
] |
Deep Unsupervised Learning Using Spike-Timing-Dependent Plasticity
Sen Lu, Abhronil Sengupta
School of Electrical Engineering and Computer Science
The Pennsylvania State University
University Park, PA 16802, USA
Email: {senlu, sengupta}@psu.edu
============================================================================================================================================================================================
Spike-Timing-Dependent Plasticity (STDP) is an unsupervised learning mechanism for Spiking Neural Networks (SNNs) that has received significant attention from the neuromorphic hardware community. However, scaling such local learning techniques to deeper networks and large-scale tasks has remained elusive. In this work, we investigate a Deep-STDP framework where a convolutional network is trained in tandem with pseudo-labels generated by the STDP clustering process on the network outputs. We achieve 24.56% higher accuracy and 3.5× faster convergence speed at iso-accuracy on a 10-class subset of the Tiny ImageNet dataset in contrast to a k-means clustering approach.
Unsupervised Learning, Spiking Neural Networks, Spike-Timing-Dependent Plasticity
§ INTRODUCTION
With high-quality AI applications permeating our society and daily lives, unsupervised learning is gaining increased attention as the cost of procuring labeled data has been skyrocketing concurrently. The ever-more data-hungry machine learning models usually require a humongous amount of labeled data, sometimes requiring expert knowledge, to achieve state-of-the-art performance today. Since manual annotation requires a huge investment of resources, unsupervised learning is naturally emerging as the best alternative.
One of the most prominent unsupervised learning methods is clustering. The main concept of clustering is to compress the input data (like images in the case of computer vision problems) into lower dimensions such that the low-dimensional features can be clustered into separable groups. The efficiency of the sample clustering process improves with better representations of the compressed features. Since the quality of features depends only on the dimension reduction algorithm, the design and choice of the clustering method are critical to the success of unsupervised learning. However, most real-world tasks are not easily represented as separable low-dimensional points. Earlier attempts include classical PCA reduction before clustering <cit.>, while others attempt to augment more features with “bags of features" <cit.>; but mostly constrained to smaller tasks. Recent works like DeepCluster have explored scaling of unsupervised learning approaches by incorporating the k-means clustering algorithm with a standard Convolutional Neural Network (CNN) architecture that can learn complex datasets such as ImageNet without any labels <cit.>. Some works have also proven that pre-training the network, even unsupervised, is beneficial to building the final model in terms of accuracy and convergence speed <cit.>.
The focus of this article, however, is on scaling unsupervised learning approaches in a relatively nascent, bio-plausible category of neural architectures - Spiking Neural Networks (SNNs). SNNs have been gaining momentum for empowering the next generation of edge intelligence platforms due to their significant power, energy, and latency advantages over conventional machine learning models <cit.>. One of the traditional mechanisms of training SNNs is through Spike-Timing-Dependent Plasticity (STDP) where the model weights are updated locally based on firing patterns of connecting neurons inspired by biological measurements <cit.>. STDP based learning rules have been lucrative for the neuromorphic hardware community where various emerging nanoelectronic devices have been demonstrated to mimic STDP based learning rules through their intrinsic physics, thereby leading to compact and resource-efficient on-chip learning platforms <cit.>. Recent works have also demonstrated that unsupervised STDP can serve as an energy-efficient hardware alternative to conventional clustering algorithms <cit.>.
However, scaling STDP trained SNNs to deeper networks and complex tasks has remained a daunting task. Leveraging insights from hybrid approaches to unsupervised deep learning like DeepCluster <cit.>, we aim to address this missing gap to enable deep unsupervised learning for SNNs. Further, while techniques like DeepCluster have shown promise to enable unsupervised learning at scale, the impact of the choice of the clustering method on the learning capability and computational requirements remains unexplored.
The main contributions of the paper can therefore be summarized as follows:
(i) We propose a hybrid SNN-compatible unsupervised training approach for deep convolutional networks and demonstrate its performance on complex recognition tasks going beyond toy datasets like MNIST.
(ii) We demonstrate the efficacy of STDP-enabled deep clustering of visual features over the state-of-the-art k-means clustering approach and provide empirical justification, using statistical tools, namely the trace of the Fisher Information Matrix, that STDP learns faster and more accurately.
(iii) We also provide preliminary computational cost estimate comparisons of the STDP enabled Deep Clustering framework against conventional clustering methods and demonstrate the potential of significant energy savings.
§ RELATED WORKS
Deep Learning: Unsupervised learning of deep neural networks is a widely studied area in the machine learning community <cit.>. It can be roughly categorized into two main methods, namely clustering and association. Among many clustering algorithms, k-means <cit.>, or any variant of it <cit.>, is the most well-known and widely used method that groups features according to its similarities. Its applications can be found in practice across different domains <cit.>. Other approaches focus on associations to learn data representations which are described by a set of parameters using architectures such as autoencoders <cit.> (where the data distribution is learnt by encoding features in latent space).
In more recent works, such unsupervised learning methods have been applied to larger and more complex datasets <cit.>, making them applicable to more difficult problems. Further, recent advances in generative models have also provided opportunities at mapping unlabeled data to its underlying distribution, especially in the domain of image generation using Generative Adversarial Network (GAN) <cit.> with reconstruction loss directly <cit.> or using the auto-encoded latent space <cit.>. Dumoulin et al.'s recent effort at combining GAN and auto-encoder has demonstrated even better performance <cit.>.
Bio-Plausible Learning: Visual pattern recognition is also of great interest in the neuromorphic community <cit.>. In addition to standard supervised vision tasks, SNNs offer a unique solution to unsupervised learning - the STDP learning method <cit.>. In this scheme, the neural weight updates depend only on the temporal correlation between spikes without any guiding signals, which makes it essentially unsupervised. While it offers a bio-plausible solution, it is rarely used beyond MNIST-level tasks<cit.> and primarily used for single-layered networks. Going beyond conventional STDP based learning, Lee et al. <cit.> proposed an STDP-based pre-training scheme for deep networks that greedily trained the convolutional layers' weights, locally using STDP, one layer at a time but limited only to MNIST. Similarly, in Ferre et al.'s work <cit.>, the convolutional layers were trained on CIFAR10 and STL-10 with simplified STDP, but the layers were also trained individually with complex mechanisms. Further, their works are also limited to shallow convolutional architectures.
Our work explores a hybrid algorithm design based on a merger of the above two approaches. Our proposed framework provides a global training signal for the CNN using a straightforward and end-to-end STDP-based SNN implementation. We demonstrate significant accuracy improvement and computation savings for VGG-15 architecture on the Tiny ImageNet dataset in contrast to state-of-the-art deep clustering approaches.
§ PRELIMINARIES
§.§ Deep Clustering with k-means Algorithm
Deep Clustering <cit.> enabled unsupervised training of visual features primarily relies on the ability of clustering algorithms like the k-means to group together similar data points. k-means is a popular unsupervised algorithm for separating data points into distinct clusters. Given a user-specified value of k, the algorithm will find k clusters such that each data point is assigned to its nearest cluster. The vanilla implementation of the k-means algorithm iteratively calculates the Euclidean distance between points for comparison and updates the cluster centroids to fit the given distribution.
Deep Clustering utilizes the traditional CNN architecture to obtain the features to be used for clustering. The reason behind this feature reduction choice hinges upon the fact that a randomly initialized and untrained CNN outperforms a simple multilayer perceptron network by a considerable margin <cit.>. Driven by this observation, the main idea behind this framework is to bootstrap the better-than-chance signal to teach the network and learn the features. This teaching signal is transformed into a `pseudo-label' so that the network can learn from it. The `pseudo-labels' which may or may not be the same as the ground truth labels reflect the direction that the network weights should be updated. By doing so, the feature extraction layers may become slightly better at recognizing certain features and thereby producing more representative features. The improved features can ideally be more separable, thereby generating higher quality `pseudo-labels'. By repeating this process iteratively, the CNN should ideally converge by learning the `pseudo-labels' <cit.>.
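The alternating procedure described above can be sketched as follows. The feature post-processing used by DeepCluster (PCA reduction and l_2-normalization) is omitted for brevity, and the function names are placeholders, so this is an illustration of the loop rather than the reference implementation.

import torch
import torch.nn.functional as F

def deep_cluster_epoch(convnet, classifier, optimizer, loader, cluster_fn, k=100):
    # loader yields (images, dataset_indices); no ground-truth labels are used
    convnet.eval()
    feats, order = [], []
    with torch.no_grad():
        for x, idx in loader:
            feats.append(convnet(x))
            order.append(idx)
    feats, idx_all = torch.cat(feats), torch.cat(order)
    # cluster the features; cluster assignments become the pseudo-labels
    pseudo = torch.empty(len(idx_all), dtype=torch.long)
    pseudo[idx_all] = cluster_fn(feats, k)   # e.g., k-means or an STDP-trained SNN
    # train ConvNet + classifier to predict the pseudo-labels
    convnet.train()
    for x, idx in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(classifier(convnet(x)), pseudo[idx])
        loss.backward()
        optimizer.step()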
Note that the CNN layers used for feature-reduction purposes can be converted into SNN layers with various methods as shown in many recent studies <cit.>, or trained from scratch using backpropagation through time (BPTT) <cit.> which opens up the potential for adopting the entire feature-reduction in a low-power neuromorphic setting. In this work, we therefore do not focus on the CNN-SNN conversion and train it by backpropagation without unrolling through time.
§.§ STDP Enabled Neuromorphic Clustering
STDP is an unsupervised learning mechanism that learns or unlearns neurons' synaptic connections based on spike timings <cit.>. In particular, the synaptic connection is strengthened when the post-synaptic neuron fires after the pre-synaptic neuron, and the connection is weakened if the post-synaptic neuron fires before the pre-synaptic neuron. The intuition behind STDP follows Hebbian learning philosophy where neurons that are activated together and sequentially are more spatio-temporally correlated and thus form a pattern, and vice versa. This learning rule enables the encoding of complex input distributions temporally without the need for guiding signals such as the label. The weights of the neuronal synapses are updated based on spike timings <cit.> as follows:
\Delta w =
\begin{cases}
A_{+}\, e^{-\Delta t/\beta_{+}}, & \text{if } \Delta t > 0\\
-A_{-}\, e^{\Delta t/\beta_{-}}, & \text{if } \Delta t < 0
\end{cases}
where, w is the weight, A_+/- are the learning rates, Δ t is the exact time difference between post-neuron and pre-neuron firing and β_+/- are the time-constants for the learning windows. In practical implementations, the exact spike timing is usually replaced with a spike trace (see Section IV-B) that decays over time to reduce memory storage for STDP implementation <cit.>.
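Read literally, Eq. <ref> amounts to the following few lines; the parameter values here are arbitrary illustrations rather than the ones used in this work.

import numpy as np

def stdp_delta_w(delta_t, a_plus=0.01, a_minus=0.012, beta_plus=20.0, beta_minus=20.0):
    # pair-based STDP weight change for a post-minus-pre spike time difference delta_t
    if delta_t > 0:     # post fired after pre: potentiate
        return a_plus * np.exp(-delta_t / beta_plus)
    elif delta_t < 0:   # post fired before pre: depress
        return -a_minus * np.exp(delta_t / beta_minus)
    return 0.0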
STDP training is predominantly explored in Winner-Take-All networks in literature which consists of an excitatory layer of neurons with recurrent inhibitory connections <cit.> (see “STDP Enabled SNN for Clustering" sub-panel in Fig. <ref>). Such connections create a mechanism called `lateral inhibition' where activated neurons inhibit other neurons' activities and therefore assist the activated neurons to accentuate the learning process of its weights. To prevent any neuron from dominating the firing pattern, the second key mechanism is `homeostasis' which balances the overall activities of the neurons. Homeostasis prevents neurons from runaway excitation or total quiescence. One popular way to achieve this is through adaptive and decaying thresholding in which after every firing event, the firing threshold increases such that the firing neuron requires higher membrane potential to fire again in the future. Consequently, this will provide opportunities for other neurons in the network to fire and learn the synaptic weights. The critical balance of these two mechanisms ensures stable learning of the SNN. Fig. <ref> shows an example of STDP-trained weights of the excitatory neuron layer of an SNN where representative digit shapes are learnt without any label information for the MNIST dataset <cit.>. Each neuron in the network represents a cluster. By running inferences on the STDP network, we can cluster the inputs according to their corresponding most activated neuron. The learnt weights of each neuron is equivalent to the centroid of the cluster represented by that neuron.
§ METHODS
§.§ Proposed Deep-STDP Framework
As mentioned previously, the convolutional layers of the network compress the input images to a lower dimensional feature space as a one-dimensional vector. In abstract terms, the framework solves the following optimization problem <cit.>:
\min_{w \in \mathbb{R}^{d\times k}} \frac{1}{N}\sum_{n=1}^{N}\, \min_{y_n \in \{0,1\}^{k}} \bigl\| f_{\theta}(img_n) - w_{y_n} \bigr\|_{1}
\quad \text{such that} \quad y_n^{\top} 1_k = 1
where, N is the total number of training samples, y_n is the n-th optimal neuron assignment encoded as a one-hot vector, f_θ is the ConvNet forward pass output parameterized by its weights θ, img_n is the n-th input sample, w_y_n is the STDP-learnt synaptic weight map of the most activated neuron, d is the feature dimension of the ConvNet output and k is the number of neurons/clusters in the network. By minimizing the difference between the weights of the neurons and the patterns of the features, we can obtain an SNN that generates optimal assignments of y_n parameterized by weights w, which act as the pseudo-labels for our algorithm.
With the pseudo-labels, the network training can be accomplished through the standard minimization problem of network loss which can be described by:
\min_{\rho,\, \theta} \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}\bigl(g_{\rho}(f_{\theta}(img_n)),\, y^{*}_{n}\bigr)
where, θ, ρ are parameters of the ConvNet f_θ (·) and classifier g_ρ (·) respectively, ℒ(·) is the loss function, img_n again is the n-th image input, y^*_n is the n-th optimal pseudo-label for this iteration.
However, SNNs only accept discrete spikes as input and therefore the ConvNet feature outputs in floating-point representation (after appropriate pre-processing like PCA reduction and l_2-normalization <cit.>) are subsequently rate encoded by a Poisson spike train generator, where the feature values are used as the Poisson distribution rate and sampled from the respective distribution.
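A possible rate-encoding step for such signed features is sketched below; the time horizon T and the clamping of rates to at most one spike per timestep are assumptions of this illustration.

import torch

def poisson_encode(features, T=100, max_rate=1.0):
    # Rate-encode real-valued (possibly negative) features into +/- spike trains:
    # the magnitude sets the firing probability per timestep, the sign selects the channel.
    rates = features.abs().clamp(max=max_rate)
    spikes = (torch.rand(T, *features.shape) < rates).float()
    s_plus = spikes * (features > 0).float()    # spike train for positive features
    s_minus = spikes * (features < 0).float()   # spike train for negative features
    return s_plus, s_minus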
At the end of the pseudo-label assignment, the STDP enabled SNN resets for the next iteration. This is intuitive since after the ConvNet weight update process, the feature distribution gets shifted and hence a new set of neuron/cluster weights should be learnt by the STDP framework. Algorithms <ref>-<ref> describe the overall structure of the proposed Deep-STDP framework shown in Fig. <ref>.
§.§ STDP Enabled SNN for Clustering
Clustering in the SNN is mediated through the temporal dynamics of Leaky-Integrate-Fire neurons in the excitatory layer. In the absence of any spiking inputs, the membrane potential of neurons in the excitatory layer is represented by V_exc at timestep t, or simply V_exc^t. It initializes with V_exc^t=0 = V_rest and decays as,
V_{exc}^{t} = V_{rest} + \exp\!\left(\frac{1}{V_{decay}}\right)\bigl(V_{exc}^{t-1} - V_{rest}\bigr)
where, V_rest is the resting potential and V_decay is the potential decay constant.
Prior works <cit.> on using SNNs for clustering have mainly dealt with simple datasets without negative-valued features. This is in compliance with the nature of STDP learning for positive valued spikes. However, in our scenario, we consider negative valued spiking inputs as well in order to rate encode the negative features provided as output of the ConvNet. In order to enable STDP learning for negative inputs, we decompose the weight map into positive and negative components to learn positive and negative spike patterns respectively. Therefore, in presence of spikes, the excitatory layer's neuron membrane potential dynamics is updated as,
V_{exc}^{t} \leftarrow V_{exc}^{t} + s^{pre}_{+}\cdot w_{+} + s^{pre}_{-}\cdot w_{-}
where, the membrane potential is denoted by V^t_exc at timestep t, and the input spikes and pre-synaptic weights are represented by s^pre and w respectively (with their positive and negative counterparts). It is worth mentioning here that pre-neurons refer to the input neurons and post-neurons refer to the excitatory layer neurons since the synapses joining them are learnt by STDP.
Further, there is a refractory period L parameter for every neuron which will only allow execution of Eq. <ref> and <ref> if the refractory counter, l, equals `0'. A spike will be generated when the membrane potential at the current timestep is greater than the membrane threshold:
s =
\begin{cases}
1, & \text{if } \bigl(V^{t}_{exc} > V_{thr} + \epsilon\bigr) \text{ and } \bigl(l = 0\bigr)\\
0, & \text{otherwise}
\end{cases}
where, V_thr is the membrane threshold to fire a spike, ϵ is the adaptive threshold parameter, l is the refractory period counter which is reset to L upon a firing event and decays by 1 otherwise (thereby preventing neurons from firing for L timesteps after a spike). V^t_exc resets to V_reset after firing a spike. The adaptive threshold parameter acts as a balancer to prevent any neuron from being over-active (homeostasis) and is incremented by parameter α upon a firing event and otherwise decays exponentially at every timestep similar to Eq. <ref>: exp(1/ϵ_decay) ϵ. Every spike generated by a post-neuron triggers a membrane potential decrement by an amount w_inh for all the other neurons except itself.
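One simulation timestep of the excitatory layer, combining the leak, integration, firing, homeostasis, refractory, and lateral-inhibition rules above, might look as follows. All constants are illustrative, the leak factor is written as exp(-1/v_decay) so that it is smaller than one, and the routing of positive/negative spikes mirrors the weight decomposition introduced above; none of this should be read as the exact BindsNet-based implementation.

import numpy as np

def lif_step(v, thr_adapt, refrac, s_pre_plus, s_pre_minus, w_plus, w_minus, w_inh,
             v_rest=-65.0, v_reset=-65.0, v_thr=-52.0, v_decay=100.0, eps_decay=1e7,
             alpha=0.05, refrac_len=5):
    # one timestep of the excitatory layer (illustrative constants only)
    active = refrac == 0
    leak = np.exp(-1.0 / v_decay)                   # leak factor < 1 (sign convention assumed)
    leaked = v_rest + leak * (v - v_rest)
    integrated = leaked + s_pre_plus @ w_plus + s_pre_minus @ w_minus
    v = np.where(active, integrated, v)             # refractory neurons are not updated
    # fire when the membrane potential exceeds the adaptive threshold
    s_post = (active & (v > v_thr + thr_adapt)).astype(float)
    v = np.where(s_post > 0, v_reset, v)
    refrac = np.where(s_post > 0, refrac_len, np.maximum(refrac - 1, 0))
    # homeostasis: raise the threshold of firing neurons, let the others decay
    thr_adapt = np.exp(-1.0 / eps_decay) * thr_adapt + alpha * s_post
    # lateral inhibition: every spike lowers the potential of all other neurons by w_inh
    v = v - w_inh * (s_post.sum() - s_post)
    return v, thr_adapt, refrac, s_post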
In the context of our implementation, we used the spike trace τ to represent the temporal distance between two spikes. The spike trace value peaks at its firing to τ_o and exponentially decay as time lapses: exp(1/τ_decay) τ. The weight updates are similarly separated into positive and negative parts.
Pre-synaptic update:
\Delta w_{+} = -\eta^{pre}\,\bigl(s^{pre}_{+} * \tau^{post}\bigr), \qquad \Delta w_{-} = \eta^{pre}\,\bigl(s^{pre}_{-} * \tau^{post}\bigr)
Post-synaptic update:
\Delta w_{+} = \eta^{post}\,\bigl(\tau^{pre}_{+} * s^{post}\bigr), \qquad \Delta w_{-} = \eta^{post}\,\bigl(\tau^{pre}_{-} * s^{post}\bigr)
where, Δ w are the weight updates, η^pre, η^post are the learning rates for pre- and post-synaptic updates respectively, τ is the spike trace, and s is the spiking pattern. Superscript (^pre), (^post) indicates whether the trace or spike is from pre- or post-synaptic neuron respectively, and the subscript (_+), (_-) indicates whether the operation is for positive or negative input spikes. Note that the negative s^pre_- can be flipped easily by the distributive property of matrix multiplication.
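A compact reading of these updates is sketched below; interpreting `*' as an outer product between pre-synaptic and post-synaptic vectors, as well as the learning-rate values, are assumptions of this sketch.

import numpy as np

def stdp_trace_update(w_plus, w_minus, s_pre_plus, s_pre_minus, s_post,
                      tr_pre_plus, tr_pre_minus, tr_post,
                      eta_pre=1e-4, eta_post=1e-2):
    # pre-synaptic updates: a pre spike paired with the post-synaptic trace
    w_plus -= eta_pre * np.outer(s_pre_plus, tr_post)
    w_minus += eta_pre * np.outer(s_pre_minus, tr_post)
    # post-synaptic updates: a post spike paired with the pre-synaptic traces
    w_plus += eta_post * np.outer(tr_pre_plus, s_post)
    w_minus += eta_post * np.outer(tr_pre_minus, s_post)
    return w_plus, w_minus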
§ EXPERIMENTS AND RESULTS
§.§ Datasets and Implementation
The proposed method was evaluated on the Tiny ImageNet dataset, which is a center-cropped subset of the large-scale ImageNet dataset <cit.>. Unlike the ImageNet 2012 dataset, which contains 1000 object categories, the Tiny ImageNet dataset comprises of only 200 categories. Due to computation constraints, we selected the first 10 classes from the Tiny ImageNet dataset by the naming order and considered both the training and testing sets for those corresponding classes in this work. All images were normalized to zero mean and unit variance and shuffled to avoid any bias. We chose VGG15 as the baseline network architecture with randomly initialized weights. Simulations were conducted using the PyTorch machine learning library and a modified version of the BindsNet toolbox <cit.> as the base platform for the experiments. The results reported for the DeepCluster framework <cit.> were obtained without any modification to the open-source codebase associated with the work, and its hyperparameters were unchanged unless mentioned in this work. The ConvNet learning rate was set to 1e-2 and the number of clusters was set to 10 times the number of classes (recommended as optimal in Ref. <cit.> and also found optimal in the Deep-STDP framework). The training was performed for 200 epochs. All results obtained were run on 2 GTX 2080Ti GPUs and the associated hyper-parameters used for the Deep-STDP framework can be found in Table <ref>.
Numerous cluster re-assignment frequencies were explored and `1' (`2') was found to be the optimal for Deep-STDP (DeepCluster), i.e. the pseudo-labels were generated by passing the entire dataset once (twice) every epoch. Note that this frequency represents the number of dataset iterations per epoch. Following the evaluation method proposed by Zhang et. al <cit.>, we froze all network parameters and trained a linear layer at the output to evaluate the efficiency of the model to capture the distribution of images in the training set as well as its usage as a pre-trained model for general use cases. We fixed the random seeds in each experiment such that the clustering process is deterministic for a particular run. To prevent loss in generality, all accuracy results reported here represent the average value over 5 independent runs with different sets of random seeds.
§.§ Evaluation Metrics
§.§.§ Fisher Information
The Fisher information (FI) quantitatively measures the amount of information retained in a statistical model after being trained on a given data distribution <cit.>. Many prior works have used this metric to measure different aspects of deep learning models including SNN models <cit.>. Unlike prior works, we use pseudo-labels to generate FI instead of ground-truth labels. FI reflects the impact of weight changes on the ConvNet output. If the FI of model parameters is small, we can conclude that the model's learning efficiency is poor since the weights can be pruned without affecting the output, and vice versa. Therefore, this metric implicitly measures the quality of the pseudo-labels.
Let us consider that the network tries to learn y from a distribution p parametrized by a set of weights θ. Given samples x, the posterior distribution is p_θ(y|x).
The Fisher information matrix (FIM) is defined as:
F = \mathbb{E}_{x\sim X}\,\mathbb{E}_{y\sim p_{\theta}(y|x)}\left[\nabla_{\theta}\log p_{\theta}(y|x)\,\nabla_{\theta}\log p_{\theta}(y|x)^{T}\right]
where, X is the empirical distribution of the actual dataset. However, the exact FIM is usually too large to be computed directly and therefore the value is usually approximated by its trace, which is given by:
\mathrm{Tr}(F) = \mathbb{E}_{x\sim X}\,\mathbb{E}_{y\sim p_{\theta}(y|x)}\left[\|\nabla_{\theta}\log p_{\theta}(y|x)\|^{2}\right]
in which the expectations can be replaced by the averaged observation from the dataset of N samples:
\mathrm{Tr}(F) = \frac{1}{N}\sum_{k=1}^{N}\|\nabla_{\theta}\log p_{\theta}(y^{(k)}|x^{(k)})\|_{2}^{2}
where, Tr(F) is the trace of FIM, ∇ is the partial derivative operator. We follow the same implementation as the algorithm specified in Ref. <cit.>.
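A straightforward PyTorch estimate of Eq. <ref> is sketched below; sampling y from the model's own predictive distribution follows the definition of the FIM, and the per-sample loop is written for clarity rather than speed, so this should be read as an illustration and not the exact procedure of Ref. <cit.>.

import torch
import torch.nn.functional as F

def fim_trace(model, loader):
    # Monte-Carlo estimate of Tr(F); loader yields batches of inputs only
    total, count = 0.0, 0
    params = [p for p in model.parameters() if p.requires_grad]
    for x in loader:
        for xi in x:
            logits = model(xi.unsqueeze(0))
            y = torch.multinomial(F.softmax(logits, dim=-1), 1).squeeze()
            log_prob = F.log_softmax(logits, dim=-1)[0, y]
            grads = torch.autograd.grad(log_prob, params, allow_unused=True)
            total += sum(g.pow(2).sum().item() for g in grads if g is not None)
            count += 1
    return total / count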
§.§.§ Normalized Mutual Information
Further, following the Deep Clustering work <cit.>, we also measured the Normalized Mutual Information (NMI) metrics to evaluate mutual information between two consecutive assignments of the STDP-enabled SNN, given by Eq. <ref>.
\mathrm{NMI}(y^{p}, y^{p-1}) = \frac{I(y^{p};\, y^{p-1})}{\sqrt{H(y^{p})\, H(y^{p-1})}}
where y^{p} and y^{p-1} are the label assignments for epochs p and p-1, respectively, I(·) is the mutual information function, and H(·) is the entropy function.
Since the assignments y^p, y^p-1 are consecutive and are generated from the same inputs, a high NMI value indicates a high correlation between the two sets of assignments as well as stable assignments of the pseudo-labels.
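In practice this quantity can be computed directly, for example with scikit-learn; the geometric averaging option matches the normalization in Eq. <ref>.

from sklearn.metrics import normalized_mutual_info_score

def assignment_stability(prev_labels, curr_labels):
    # NMI between the pseudo-label assignments of two consecutive epochs;
    # values close to 1 indicate stable cluster assignments
    return normalized_mutual_info_score(prev_labels, curr_labels, average_method="geometric")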
§.§ Performance Evaluation
Fig. <ref> demonstrates that Deep-STDP based unsupervised feature learning significantly outperforms the DeepCluster approach based on k-means clustering. The superior quality of pseudo-labels generated by Deep-STDP is also explained empirically by the FIM trace variation over the learning epochs (see Fig. <ref>). While both algorithms perform similarly during the initial stages, the accuracy and FIM trace start improving significantly for the Deep-STDP approach over subsequent epochs. Performance evaluation metrics (NMI, FIM and Accuracy) for the two approaches at the end of the training process are tabulated in Table <ref>.
In addition to training an additional linear layer for numerical performance analysis, we also visualized the convolutional filter activations of the CNN trained using our proposed framework. We can observe from Fig. <ref> that the network forms distinct filters specialized for completely different visual patterns in different layers without using any ground truth label information. On the other hand, similar visualization performed on the DeepCluster trained network yielded similar simple patterns in the shallow layers without any complex patterns represented in the deeper layers, further substantiating the efficacy of the Deep-STDP approach.
§.§ Computational Cost Estimation
While a detailed system level hardware analysis for the two approaches is outside the scope of this work, we provide motivation for neuromorphic deep clustering by performing a comparative analysis of the computational cost of the two approaches.
§.§.§ Cost of k-means Clustering
To find the new centroid of a particular cluster, the algorithm calculates the averaged center of all the data points assigned to that cluster using the following equation:
c_j = \frac{1}{|C_j|}\sum_{x_i \in C_j} x_i
where, c_j is the averaged coordinates of the j-th centroid, |C_j| is the number of data points assigned to that corresponding cluster, and x_i is the i-th data point. Subsequently, the algorithm calculates the Euclidean distance between every data point and every centroid and assigns each data point to the cluster with the shortest distance to its centroid. The goal is to solve the optimization problem:
\operatorname*{argmin}_{C} \sum_{j=1}^{k}\sum_{i=1}^{|C_j|} \|x_i - c_j\|_{2}^{2}
where \operatorname{argmin}_{C} solves for the optimal centroids and k is the total number of clusters.
The above two calculations will be repeated until convergence is achieved or until a maximum number of iterations is reached. Hence, the number of mathematical operations can be summarized as follows:
* Clustering Step: Compute the distance ||x_i - c_j||^2_2 from every point to every centroid and assign to k clusters
* Update Step: Re-center the centroids in new clusters by averaging over |C_j| for all clusters
* Repeat it times
To calculate the distance of a point x_i from c_j:
\|x_i - c_j\|_{2} = \sqrt{\sum_{m=1}^{d}\bigl(x_{im} - c_{jm}\bigr)^{2}}
where d = 256 is the number of dimensions in the feature.
Hence, the number of multiplications (the number of squaring operations) in order to calculate the Euclidean distance is:
[k· d] · it · N
and the number of addition operations involved is:
[k· (2d-1) + d] · it · N
where, k is the number of clusters, N is the number of training samples, and it is the number of maximum iterations in the k-means algorithm. In Eq. <ref>, the k · (d-1) component arises from the summation of individual distance along each dimension while another k · d component arises from the subtraction operation for distance calculation along each dimension. The last d component arises from updating the new cluster coordinates (which in the worst case will iterate through all data points, see Eq. <ref>). Given the cost of float ADD operation is 0.9pJ and float MULT operation is 3.7pJ in 45nm CMOS process <cit.>, we estimated the total computational cost in the clustering process for every training epoch to be 14.1mJ (considering it=20). Considering 175 epochs of DeepCluster training to reach peak accuracy, the total computational cost is 2467.5mJ.
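The arithmetic behind these figures can be reproduced with the short script below. The values d = 256 and it = 20 appear above, while k = 100 (ten clusters per class for the 10 selected classes) and N = 5000 training images are inferred from the experimental setup and should be read as assumptions of this sketch.

# Energy estimate for the k-means clustering step per training epoch (45nm CMOS op costs)
k, d, N, it = 100, 256, 5000, 20           # clusters, feature dim, training samples, k-means iterations
E_ADD, E_MULT = 0.9e-12, 3.7e-12           # joules per float ADD / MULT

mults = k * d * it * N                     # squaring operations in the distance computations
adds = (k * (2 * d - 1) + d) * it * N      # subtractions, summations and centroid updates

energy_per_epoch = mults * E_MULT + adds * E_ADD
print(round(energy_per_epoch * 1e3, 1), "mJ per epoch")   # ~14.1 mJ; over 175 epochs roughly 2467.5 mJ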
§.§.§ Cost of STDP Clustering
In the STDP based clustering approach, the computations can be summarized into the following parts:
* Feedforward Step: Integrate input Poisson spike train through the synapses connecting input and excitatory layer
* Learning Step: Updating the excitatory layer weights based on pre- and post-synaptic spiking activities
* Inhibition Step: Updating the neuron membrane potential based on lateral inhibitory connections
* Repeat T times
Although multiplication symbols were used in Algo. <ref>, computation with spike signals can always be reduced to summation operation since the spike magnitude is always `0' or `1' <cit.>. Further, the addition operation is conditional upon the receipt of spikes, thereby reducing the computation cost by a significant margin for a highly sparse spike train. For instance, the average spiking probability per neuron per timestep in the excitatory layer of the network is only 0.19%. Hence, the total number of addition operations can be summarized as:
\bigl[\, p_{input}\cdot |w^{exc}| + (p_{input} + p_{exc})\cdot |w^{exc}| + p_{exc}\cdot |w^{inh}| \,\bigr]\cdot T \cdot N
where, p_input,p_exc are the average (per neuron per timestep averaged over the entire training process) spiking probability of the input and excitatory neuronal layer respectively, |w^exc| is the number of synaptic connections between the input and excitatory layer, either |w_+| or |w_-| since the input can be either positive or negative, |w^inh| is the total number of inhibitory connections in the network, T is the number of timesteps used for the STDP training process, and N is the number of training samples.
It is worth mentioning here that we primarily focus on the computationally expensive portions of both algorithms for these calculations. In Eq. <ref>, the p_input· |w^exc| component arises from the feedforward propagation of input spikes, (p_input + p_exc)· |w^exc| component arises from the learning step and p_exc· |w^inh| arises from the inhibition step. Therefore, the total computational cost for Deep-STDP per epoch is 55.34mJ and considering 50 epochs of training (iso-accuracy comparison as shown in Fig. <ref>), the total energy consumption is estimated to be 2767.2mJ - comparable to the DeepCluster framework.
§.§.§ System Level Cost Comparison:
We note that the STDP based framework does not change the computational load of the clustering framework significantly. However, the computational load at the system level will be also dependent on the computational load for feature extraction in the ConvNet. For instance, Ref. <cit.> mentions a third of the time during a forward pass is attributed to the clustering algorithm while the remaining is attributed to the deep ConvNet feature extraction. Therefore, we expect the Deep-STDP based framework to be significantly more resource efficient than the DeepCluster based approach due to 3.5× reduction in the number of training epochs - equivalently reducing the ConvNet feature extraction computational cost.
§ CONCLUSIONS
In conclusion, we proposed an end-to-end hybrid unsupervised framework for training deep CNNs that can be potentially implemented in a neuromorphic setting. We demonstrated significant benefits in terms of accuracy and computational cost by leveraging bio-plausible clustering techniques for deep unsupervised learning of visual features and substantiated our claims by empirical analysis through statistical tools like Fisher Information and Normalized Mutual Information. Our work significantly outperforms prior attempts at scaling bio-inspired learning rules like STDP to deeper networks and complex datasets. Future work can focus on further scaling of the approach and delving deeper into the mathematical underpinnings of the superior performance of STDP as a deep clustering mechanism.
§ ACKNOWLEDGMENTS
This material is based upon work supported in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Award Number #DE-SC0021562 and the National Science Foundation grant CCF #1955815 and by Oracle Cloud credits and related resources provided by the Oracle for Research program.
ding2004k
C. Ding and X. He, “K-means clustering via principal component analysis,” in
Proceedings of the twenty-first international conference on Machine
learning, 2004, p. 29.
csurka2004visual
G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray, “Visual categorization with bags of keypoints,” in Workshop on statistical learning in computer vision, ECCV, vol. 1, no. 1-22. Prague, 2004, pp. 1–2.
caron2018deep
M. Caron, P. Bojanowski, A. Joulin, and M. Douze, “Deep clustering for
unsupervised learning of visual features,” in Proceedings of the
European conference on computer vision (ECCV), 2018, pp. 132–149.
radford2015unsupervised
A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning
with deep convolutional generative adversarial networks,” arXiv
preprint arXiv:1511.06434, 2015.
oord2018representation
A. v. d. Oord, Y. Li, and O. Vinyals, “Representation learning with
contrastive predictive coding,” arXiv preprint arXiv:1807.03748,
2018.
radford2019language
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al.,
“Language models are unsupervised multitask learners,” OpenAI blog,
vol. 1, no. 8, p. 9, 2019.
sengupta2019going
A. Sengupta, Y. Ye, R. Wang, C. Liu, and K. Roy, “Going deeper in spiking
neural networks: Vgg and residual architectures,” Frontiers in
neuroscience, vol. 13, p. 95, 2019.
davies2021advancing
M. Davies, A. Wild, G. Orchard, Y. Sandamirskaya, G. A. F. Guerra, P. Joshi,
P. Plank, and S. R. Risbud, “Advancing neuromorphic computing with loihi: A
survey of results and outlook,” Proceedings of the IEEE, vol. 109,
no. 5, pp. 911–934, 2021.
diehl2015unsupervised
P. Diehl and M. Cook, “Unsupervised learning of digit recognition using
spike-timing-dependent plasticity,” Frontiers in Computational
Neuroscience, vol. 9, p. 99, 2015.
saha2021intrinsic
A. Saha, A. Islam, Z. Zhao, S. Deng, K. Ni, and A. Sengupta, “Intrinsic
synaptic plasticity of ferroelectric field effect transistors for online
learning,” Applied Physics Letters, vol. 119, no. 13, 2021.
frady2020neuromorphic
E. P. Frady, G. Orchard, D. Florey, N. Imam, R. Liu, J. Mishra, J. Tse,
A. Wild, F. T. Sommer, and M. Davies, “Neuromorphic nearest neighbor search
using intel's pohoiki springs,” in Proceedings of the neuro-inspired
computational elements workshop, 2020, pp. 1–10.
bengio2012unsupervised
Y. Bengio, A. C. Courville, and P. Vincent, “Unsupervised feature learning and
deep learning: A review and new perspectives,” CoRR, abs/1206.5538,
vol. 1, no. 2665, p. 2012, 2012.
dike2018unsupervised
H. U. Dike, Y. Zhou, K. K. Deveerasetty, and Q. Wu, “Unsupervised learning based on artificial neural network: A review,” in 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS). IEEE, 2018, pp. 322–327.
lloyd1982least
S. Lloyd, “Least squares quantization in pcm,” IEEE transactions on
information theory, vol. 28, no. 2, pp. 129–137, 1982.
krishna1999genetic
K. Krishna and M. N. Murty, “Genetic k-means algorithm,” IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),
vol. 29, no. 3, pp. 433–439, 1999.
arthur2007k
D. Arthur and S. Vassilvitskii, “K-means++ the advantages of careful
seeding,” in Proceedings of the eighteenth annual ACM-SIAM symposium
on Discrete algorithms, 2007, pp. 1027–1035.
ng2006medical
H. Ng, S. Ong, K. Foong, P.-S. Goh, and W. Nowinski, “Medical image segmentation using k-means clustering and improved watershed algorithm,” in 2006 IEEE Southwest Symposium on Image Analysis and Interpretation. IEEE, 2006, pp. 61–65.
kim2008recommender
K.-j. Kim and H. Ahn, “A recommender system using ga k-means clustering in an
online shopping market,” Expert systems with applications, vol. 34,
no. 2, pp. 1200–1209, 2008.
rumelhart1986learning
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations
by back-propagating errors,” nature, vol. 323, no. 6088, pp.
533–536, 1986.
hinton2006reducing
G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data
with neural networks,” science, vol. 313, no. 5786, pp. 504–507,
2006.
rombach2022high
R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution
image synthesis with latent diffusion models,” in Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp.
10 684–10 695.
bojanowski2017optimizing
P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam, “Optimizing the latent
space of generative networks,” arXiv preprint arXiv:1707.05776, 2017.
kingma2013auto
D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv
preprint arXiv:1312.6114, 2013.
masci2011stacked
J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber, “Stacked convolutional auto-encoders for hierarchical feature extraction,” in Artificial Neural Networks and Machine Learning–ICANN 2011: 21st International Conference on Artificial Neural Networks, Espoo, Finland, June 14-17, 2011, Proceedings, Part I 21. Springer, 2011, pp. 52–59.
diehl2015fast
P. U. Diehl, D. Neil, J. Binas, M. Cook, S.-C. Liu, and M. Pfeiffer, “Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing,” in 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015, pp. 1–8.
neftci2014event
E. Neftci, S. Das, B. Pedroni, K. Kreutz-Delgado, and G. Cauwenberghs,
“Event-driven contrastive divergence for spiking neuromorphic systems,”
Frontiers in neuroscience, vol. 7, p. 272, 2014.
lee2018pretrain
C. Lee, P. Panda, G. Srinivasan, and K. Roy, “Training deep spiking
convolutional neural networks with stdp-based unsupervised pre-training
followed by supervised fine-tuning,” Frontiers in Neuroscience,
vol. 12, 2018.
liu2019stdpLearning
D. Liu and S. Yue, “Event-driven continuous stdp learning with deep structure
for visual pattern recognition,” IEEE Transactions on Cybernetics,
vol. 49, no. 4, pp. 1377–1390, 2019.
ferre2018unsupervised
P. Ferré, F. Mamalet, and S. J. Thorpe, “Unsupervised feature learning
with winner-takes-all based stdp,” Frontiers in computational
neuroscience, vol. 12, p. 24, 2018.
noroozi2016unsupervised
M. Noroozi and P. Favaro, “Unsupervised learning of visual representations by solving jigsaw puzzles,” in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VI. Springer, 2016, pp. 69–84.
midya2019artificial
R. Midya, Z. Wang, S. Asapu, S. Joshi, Y. Li, Y. Zhuo, W. Song, H. Jiang,
N. Upadhay, M. Rao et al., “Artificial neural network (ann) to
spiking neural network (snn) converters based on diffusive memristors,”
Advanced Electronic Materials, vol. 5, no. 9, p. 1900060, 2019.
lu2020exploring
S. Lu and A. Sengupta, “Exploring the connection between binary and spiking
neural networks,” Frontiers in neuroscience, vol. 14, 2020.
lu2022neuroevolution
——, “Neuroevolution guided hybrid spiking neural network training,”
Frontiers in neuroscience, vol. 16, 2022.
gao2023high
H. Gao, J. He, H. Wang, T. Wang, Z. Zhong, J. Yu, Y. Wang, M. Tian, and C. Shi,
“High-accuracy deep ann-to-snn conversion using quantization-aware training
framework and calcium-gated bipolar leaky integrate and fire neuron,”
Frontiers in Neuroscience, vol. 17, p. 1141701, 2023.
bellec2018long
G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, “Long
short-term memory and learning-to-learn in networks of spiking neurons,”
Advances in neural information processing systems, vol. 31, 2018.
Rathi2020DIETSNNDI
N. Rathi and K. Roy, “DIET-SNN: Direct input encoding with leakage and
threshold optimization in deep spiking neural networks,” ArXiv, vol.
abs/2008.03658, 2020.
caporale2008spike
N. Caporale and Y. Dan, “Spike timing–dependent plasticity: a hebbian
learning rule,” Annu. Rev. Neurosci., vol. 31, pp. 25–46, 2008.
Hazan_2018
H. Hazan, D. J. Saunders, H. Khan, D. Patel, D. T. Sanghavi, H. T. Siegelmann,
and R. Kozma, “Bindsnet: A machine learning-oriented spiking neural networks
library in python,” Frontiers in Neuroinformatics, vol. 12, p. 89,
2018.
deng2012mnist
L. Deng, “The mnist database of handwritten digit images for machine learning
research,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp.
141–142, 2012.
deng2009imagenet
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 248–255.
zhang2016colorful
R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III 14. Springer, 2016, pp. 649–666.
amari2000methods
S.-i. Amari and H. Nagaoka, Methods of Information Geometry. American Mathematical Soc., 2000, vol. 191.
karakida2019universal
R. Karakida, S. Akaho, and S.-i. Amari, “Universal statistics of Fisher information in deep neural networks: Mean field approach,” in The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 1032–1041.
kim2022exploring
Y. Kim, Y. Li, H. Park, Y. Venkatesha, A. Hambitzer, and P. Panda, “Exploring
temporal information dynamics in spiking neural networks,” arXiv
preprint arXiv:2211.14406, 2022.
erhan2009visualizing
D. Erhan, Y. Bengio, A. Courville, and P. Vincent, “Visualizing higher-layer
features of a deep network,” University of Montreal, vol. 1341,
no. 3, p. 1, 2009.
han2015learning
S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections
for efficient neural network,” Advances in neural information
processing systems, vol. 28, 2015.
|
http://arxiv.org/abs/2307.04992v1 | 20230711030733 | Peeking into the next decade in Large-Scale Structure Cosmology with its Effective Field Theory | [
"Diogo Bragança",
"Yaniv Donath",
"Leonardo Senatore",
"Henry Zheng"
] | astro-ph.CO | [
"astro-ph.CO",
"hep-ph",
"hep-th"
] |
|
http://arxiv.org/abs/2307.05103v1 | 20230711082444 | Control and estimation of multi-commodity network flow under aggregation | [
"Yongxin Chen",
"Tryphon T. Georgiou",
"Michele Pavon"
] | math.OC | [
"math.OC",
"cs.MA",
"cs.SY",
"eess.SY",
"93E20, 90B10, 90C35, 90B06, 15B48, 97M40, 05C81, 82C41"
] |
Control and estimation of multi-commodity network flow under aggregation
Yongxin Chen, Tryphon T. Georgiou and Michele Pavon
Y. Chen is with the School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA;[email protected]
T.T. Georgiou is with the Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA 92697, USA; [email protected]
M. Pavon is with the Division of Science, New York University Abu Dhabi, U.A.E.; [email protected]
This work was supported in part by the NSF under grants 1942523 and 2206576, the AFOSR under FA9550-23-1-0096, the ARO under W911NF-22-1-0292, and the NYUAD under grant 76/71260/ADHPG.
August 12, 2023
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
A paradigm put forth by E. Schrödinger in 1931/32, known as Schrödinger bridges, represents a formalism to pose and solve control and estimation problems seeking a perturbation from an initial control schedule (in the case of control), or from a prior probability law (in the case of estimation), sufficient to reconcile data in the form of marginal distributions and minimal in the sense of relative entropy to the prior. In the same spirit, we consider traffic-flow and apply a Schrödinger-type dictum, to perturb minimally with respect to a suitable relative entropy functional a prior schedule/law so as to reconcile the traffic flow with scarce aggregate distributions on families of indistinguishable individuals. Specifically, we consider the problem to regulate/estimate multi-commodity network flow rates based only on empirical distributions of commodities being transported (e.g., types of vehicles through a network, in motion) at two given times. Thus, building on Schrödinger's large deviation rationale, we develop a method to identify the most likely flow rates (traffic flow), given prior information and aggregate observations.
Our method further extends the Schrödinger bridge formalism to the multi-commodity setting, allowing commodities to exit or enter the flow field as well (e.g., vehicles to enter and stop and park) at any time. The behavior of commodities or vehicles entering or exiting the flow field is modeled by Markov chains with killing and creation states. Our method is illustrated with a numerical experiment.
§ INTRODUCTION
Inverse Problems constitute a large class of typically ill-posed problems of central importance in all branches of science. In an inverse problem, one seeks to derive a model (a function, a field, a probability distribution, etc.) from a set of observations. Examples are ubiquitous in system identification, spectral estimation, computed tomography, deconvolution, inverse scattering, weather prediction, and so on. For instance, image deblurring may be viewed as a deconvolution problem in the plane, where resolution is to be restored based on priors on features of objects and texture.
Regularization is a process in mathematics, statistics, and machine learning that consists in adding information to solve ill-posed problems and/or to prevent overfitting[An overfitted model is a statistical model that contains more parameters than can be justified by the data thereby violating Novacula Occami: “Frustra fit per plura quod potest fieri pauciora" (it is futile to do with more what can be done with fewer). Novacula Occami should perhaps be better translated “Ockham's comb" rather than Ockham's razor as is customary.]. For instance, in compressed sensing, sparsity of the solution is the added information. Other times, it consists in imposing smoothness of the solution or penalizing some norm of the solution. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters. Tikhonov's regularization, for instance, is widely used in finite and infinite-dimensional contexts,
see e.g., <cit.>.
A most powerful paradigm (entropy regularization) to learn a probability distribution from scarce information was put forward by Ludwig Boltzmann <cit.> in 1877 and by Erwin Schrödinger <cit.> in 1931/32. The most likely probability distribution can be characterized as the solution of a maximum entropy problem. This inference method has, in the meantime, proved very fruitful in several branches of science: We mention the far reaching work of Jaynes, Burg, Dempster and Csiszár <cit.>. In the case of a discrete version of the dynamical problem considered by Schrödinger, this method can be fruitfully reformulated as a Markov decision problem. We show in this paper that a suitable modification of this inference method can be applied to characterize network flows under very meager information. We consider indeed multi-commodity flows where only aggregate information is available at some initial and final times. Moreover, some of the agents may enter or exit the flow during the time interval of interest. Hence, the total number of travelling vehicles, packets, etc. is in general not preserved.
To give a more precise formulation of the problem, consider for instance a network of roads and highways. At some initial time t=0, we observe the distribution of different types of vehicles traveling on the network. The distribution μ_0 of the various classes (cars, trucks, etc.) is defined on the vertices of the network (e.g., at crossroads, intersections, or cities, for such a network). At some final time t=N, we observe a similar distribution μ_N. We seek to determine the most likely network flow on the discrete time interval [0,N] which is consistent with the available scant information. An effective framework to study such problems is a suitable extension of what is called regularized optimal transport, also known as discrete Schrödinger Bridges <cit.>. We show that such problems admit a formulation as suitable Markov decision problems in the spirit of <cit.>.
As noted, we are in fact able to solve the problem also in the more complex situation where some of the vehicles enter or exit the flow during [0,N].
In a similar spirit, in the multi-commodity setting that we study herein, and in the context of transportation, we assume knowledge of the fraction of vehicles that are private cars, taxis, trucks, and so on (that more generally may be thought of as different commodities, species, etc.). We also assume knowledge of a prior probability on the flow across the network for each specific group of vehicles; such information can in principle be provided by historic data. With such data and assumptions in place, we are interested in identifying the most likely path that each commodity has followed while being transported across the network. A similar problem may be formulated when commodities have sources and sinks.
In the present work we show that by suitably extending the theory of Schrödinger Bridges it is possible to answer such questions in spite of the scarce, aggregate information available at the initial and final times. As a second contribution of the paper, we address the case where vehicles transporting commodities may enter or exit the flow during the prescribed window of time. The sudden appearance or disappearance of vehicles can be modeled probabilistically, via notions referred to as creation or killing, and can be derived along the lines of <cit.> that dealt with single-commodity transport with killing, in continuous-time.
The paper is structured as follows. In Section <ref>, we briefly recall the formulation and key results for the single commodity mass-preserving Schrödinger Bridge problem. Section <ref> formulates and solves the multi-commodity network flow problem. In Section <ref>, we study the same problem in the presence of creation and killing.
§ SINGLE-COMMODITY TRAFFIC FLOW OVER NETWORKS
We begin by discussing a paradigm of great significance in single-commodity network flows. It amounts to designing probabilistic transitions between nodes, and thereby probability laws on path spaces, so as to reconcile marginal distributions with priors that reflect the structure of the network and objectives on transference of resources across the network. More precisely, we wish to study a generalization of the so called discrete-time Schrödinger bridge problem. This dynamic formulation echoes the fluid dynamic problem associated to the classical (continuous time and space) Schrödinger bridge problem.
Consider a directed, strongly connected, aperiodic graph G=(,ℰ) with vertex set ={1,2,…,n} and edge set ℰ⊆×. Consider trajectories/paths on this graph over the time set 𝒯={0,1,…,N}.
We seek a probability distribution on the space of paths ^N+1 with prescribed initial and final marginals
μ_0 and μ_N, respectively, and such that the resulting random evolution
is closest to a “prior” measure in a suitable sense.
The prior law for our problem is a Markovian evolution with transition kernel A, which may be assumed to be time-homogenous for simplicity.
In accordance with
the topology of the graph, A(i,j)=0 whenever (i,j)∉ℰ.
We assume that μ_0 is positive on the vertex set, i.e.,
μ_0(i)>0 for all i∈ V.
The Markovian kernel A, together with the initial measure ν_0(·), induces the prior measure M on the space of trajectories, which assigns to a path (i_0,i_1,…,i_N)∈^N+1 the value
M(i_0,i_1,…,i_N)=ν_0(i_0)A(i_0,i_1)⋯ A(i_N-1,i_N).
We denote by 𝒫(μ_0,μ_N) the family of probability distributions on ^N+1 having the prescribed marginals μ_0 and μ_N.
We seek a distribution in this set which is closest to the prior in relative entropy (divergence, Kullback-Leibler index) defined by
KL(P‖M) := ∑_x P(x) log(P(x)/M(x)) if supp(P) ⊆ supp(M), and KL(P‖M) := +∞ if supp(P) ⊈ supp(M).
Here, by definition, 0·log 0=0.
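As a minimal numerical illustration of this definition (a sketch only, not part of the cited works; NumPy and the array names are assumptions of this sketch), the relative entropy with the above conventions can be evaluated as follows.

import numpy as np

def kl_divergence(P, M):
    # Relative entropy KL(P || M) with the convention 0 * log 0 = 0;
    # returns +inf when the support condition supp(P) <= supp(M) fails.
    P = np.asarray(P, dtype=float).ravel()
    M = np.asarray(M, dtype=float).ravel()
    mask = P > 0
    if np.any(M[mask] == 0):
        return np.inf
    return float(np.sum(P[mask] * np.log(P[mask] / M[mask])))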
This brings us to the so-called Schrödinger Bridge Problem (SBP):
Determine
P^⋆ = argmin{ KL(P‖M) | P ∈𝒫(μ_0,μ_N) },
where M denotes the prior path measure defined above.
We parameterize P as
P(i_0,i_1,…,i_N)=μ_0(i_0)π_i_0i_1(0)⋯π_i_N-1i_N(N-1),
and let p_t be the one-time marginals of P, i.e.,
p_t(i_t) = ∑_i_ℓ, ℓ≠ t P(i_0,i_1,…,i_N), t∈𝒯.
We finally have the update mechanism
p_t+1(i_t+1)=∑_i_t∈ p_t(i_t) π_i_ti_t+1(t)
which, in (column) vector form, is
p_t+1=Π'(t) p_t.
Here Π(t)=[ π_ij(t)]_i,j=1^n is the transition matrix and prime denotes transposition.
Using (<ref>)-(<ref>) we obtain
KL(P‖M) = KL(μ_0‖ν_0) + ∑_t=0^N-1∑_i_t KL(π_i_t·(t) ‖ A_i_t·(t)) p_t(i_t).
We have the following theorem <cit.>:
Suppose there exists a pair of nonnegative functions (φ,φ̂) defined on {0,1,…,N}× V and satisfying the system
φ(t,i_t)=∑_i_t+1A_i_ti_t+1(t)φ(t+1,i_t+1),
φ̂(t+1,i_t+1)=∑_i_t A_i_ti_t+1(t)φ̂(t,i_t),
for t=0,1,…, N-1, as well as the boundary conditions
φ(0,i_0)φ̂(0,i_0)=μ_0(i_0), φ(N,i_N)φ̂(N,i_N)=μ_N(i_N),
for i_t∈ and t∈{0,N}, accordingly.
Suppose moreover that φ(t,i)>0, ∀ t=0, 1, …, N, ∀ i∈ V. Then, the Markov distribution ^⋆ in 𝒫(μ_0,μ_N) having transition probabilities
π^⋆_i_ti_t+1(t)=A_i_ti_t+1(t)φ(t+1,i_t+1)/φ(t,i_t)
solves Problem <ref>.
Notice that if (φ,φ̂) satisfy (<ref>)-(<ref>)-(<ref>), so does the pair (cφ,1/cφ̂) for all c>0. Hence, uniqueness for the Schrödinger system is always intended as uniqueness of the ray.
Under the assumption that the entries of the matrix product A^N
are all positive, there exists a (unique in the sense of ray) solution to the system (<ref>)-(<ref>) which can be computed through a Fortet-IPF-Sinkhorn iteration <cit.>. The infinite-horizon counterpart of it has been considered in <cit.>.
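For concreteness, a minimal Python/NumPy sketch of such a Fortet-IPF-Sinkhorn-type iteration is given below; it assumes a time-homogeneous prior kernel with A^N entrywise positive, and the function name, tolerance, and iteration cap are choices of this illustration rather than part of the cited algorithm. It also assembles the optimal transition matrices of the theorem above.

import numpy as np

def discrete_schroedinger_bridge(A, mu0, muN, N, tol=1e-10, max_iter=10000):
    # Fortet/IPF/Sinkhorn-type iteration for the discrete Schroedinger system.
    # A: (n, n) prior kernel (assumed such that A^N has positive entries),
    # mu0, muN: prescribed marginals at t = 0 and t = N.
    n = A.shape[0]
    phi = np.ones((N + 1, n))      # phi(t, .)
    phi_hat = np.ones((N + 1, n))  # phi_hat(t, .)
    for _ in range(max_iter):
        for t in range(N - 1, -1, -1):        # backward recursion for phi
            phi[t] = A @ phi[t + 1]
        phi_hat[0] = mu0 / phi[0]             # boundary condition at t = 0
        for t in range(N):                    # forward recursion for phi_hat
            phi_hat[t + 1] = A.T @ phi_hat[t]
        new_terminal = muN / phi_hat[N]       # boundary condition at t = N
        converged = np.max(np.abs(new_terminal - phi[N])) < tol
        phi[N] = new_terminal
        if converged:
            break
    # Optimal transition matrices pi*(t) of the theorem: A(i,j) phi(t+1,j) / phi(t,i).
    pis = [A * phi[t + 1][None, :] / phi[t][:, None] for t in range(N)]
    return phi, phi_hat, pis

Each returned matrix pis[t] is row-stochastic by construction, and the induced marginal flow interpolates between μ_0 and μ_N.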
§ MULTI-COMMODITY TRAFFIC FLOW
Consider a traffic flow consisting of K commodities. Each commodity may correspond to one origin-destination pair. Other ways to differentiate commodities may include vehicle types. Now assume that the prior population distribution of the K commodities follows the probability vector p = [p_1,p_2,…,p_K]^T, that is, p_k represents the portion of the population corresponding to the k-th commodity. Suppose each individual in the k-th commodity follows a time-homogeneous Markov model with transition kernel A_k and initial distribution r_k. The transition kernel for each commodity can be time-varying; however, we henceforth use time-homogeneous kernels to keep the notation simple.
The above defines a probabilistic model for the traffic. It is a hierarchical model, where the first level is the commodity type, which follows the distribution p. The second level represents the dynamics over the traffic network, captured by the transition kernel A_k with initial distribution r_k for each commodity. A traffic flow can then be viewed as a collection of independent samples from this probabilistic model, parametrized by the tuple (p, A_1, r_1,…, A_K, r_K).
We are interested in the inference problem of estimating the group transport of the commodities with limited and aggregated data. In the aggregate measurements, the individuals from different commodities are indistinguishable. Thus, the measurement at any time instance is a histogram, representing the distribution of the total population over the traffic network. The measurements are limited in the sense that we can only make measurements every once in a while. In particular, we consider the setting where the traffic flow is only measured at two points in time, t=0 and t=N.
Suppose the population measurement is μ_0 at the initial time t=0 and μ_N at the terminal time t=N. Here the measurements μ_0, μ_N are normalized to be probability vectors by dividing by the total number L of vehicles. Our goal is to infer the most likely evolution of each commodity. More precisely, we want to recover the most likely distribution on the space of trajectories over the traffic network that the vehicles may have taken so as to match the measurements μ_0, μ_N, taken at t=0, N. When there is only a single commodity, i.e., K=1, the problem clearly reduces to a standard Schrödinger bridge problem. Thus, this problem represents a generalization of the standard SBP to the multi-commodity setting.
To describe the evolution of the group behavior of the K commodities, let ^k∈^n^(N+1) be the normalized population distribution of the k-th commodity over the path space ^N+1. This means that the portion of the population in the k-th commodity that travels along the graph path i_0, i_1, …, i_N is ^k_i_0,i_1,…,i_N. Note that, due to normalization, ∑_k=1^K |^k| = 1, where |^k| denotes the 1-norm of the tensor ^k and describes the portion of the population that belongs to the k-th commodity. The tensors ^1, ^2, …, ^K fully characterize the group behavior of the whole population. Our goal is to find the most likely tuple ^1, ^2, …, ^K given the observations μ_0, μ_N. It turns out that this multi-commodity network flow also satisfies a large deviation principle, as stated in the following theorem.
The probability of an empirical population distribution (^1,^2, ⋯,^K) in the traffic flow, parameterized by the tuple (p, A_1, r_1,…, A_K, r_K), is
Prob (^1,^2, ⋯,^K) ≈exp[-L × I(^1,^2, ⋯,^K)],
where L is the total size of the population, and
I(^1,^2,⋯,^K) = ∑_k=1^K KL(^k ‖ |^k| r_k(i_0)A_k(i_0,i_1)⋯ A_k(i_N-1,i_N)) + ∑_k=1^K |^k| log(|^k|/p_k)
is the rate function.
Proof
The empirical distribution (^1,^2, ⋯,^K) assigns probability ^k_i_0,i_1,…,i_N to the observation (k, i_0,i_1,…,i_N). The prior model assigns probability p_k r_k(i_0) A_k(i_0,i_1)⋯ A_k(i_N-1, i_N) to the same observation. By the standard theory of large deviations, the rate function I is the KL divergence between these two probability distributions. The expression (<ref>) then follows from a straightforward calculation of the KL divergence.
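To make the rate function concrete, the following sketch evaluates I by brute-force enumeration of the path space (Python/NumPy; the dense-tensor representation, the helper names, and the assumption that the prior assigns positive mass wherever the empirical tensors do are simplifications of this illustration, feasible only for small n and N).

import itertools
import numpy as np

def prior_path_tensor(r, A, N):
    # Normalized prior path law r(i_0) A(i_0,i_1) ... A(i_{N-1},i_N) of one
    # commodity, stored as an array of shape (n,)*(N+1).
    n = len(r)
    M = np.zeros((n,) * (N + 1))
    for path in itertools.product(range(n), repeat=N + 1):
        val = r[path[0]]
        for t in range(N):
            val *= A[path[t], path[t + 1]]
        M[path] = val
    return M

def rate_function(Q_list, p, r_list, A_list, N):
    # I(Q^1,...,Q^K) = sum_k KL(Q^k || |Q^k| M_k) + sum_k |Q^k| log(|Q^k| / p_k),
    # assuming the prior M_k is positive on the support of Q^k.
    total = 0.0
    for Q, p_k, r_k, A_k in zip(Q_list, p, r_list, A_list):
        M = prior_path_tensor(r_k, A_k, N)
        mass = Q.sum()
        mask = Q > 0
        total += np.sum(Q[mask] * np.log(Q[mask] / (mass * M[mask])))  # KL term
        total += mass * np.log(mass / p_k)                             # mass term
    return float(total)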
In light of the above large deviation result, our problem of recovering the most likely evolution can be formulated as the optimization
min_^1,⋯,^K ∑_k=1^K KL(^k ‖ |^k| r_k(i_0)A_k(i_0,i_1)⋯ A_k(i_N-1,i_N)) + ∑_k |^k| log(|^k|/p_k)
subject to
∑_k ∑_i_1,i_2,⋯,i_N ^k_i_0,i_1,…,i_N = μ_0(i_0),
∑_k ∑_i_0,i_1,⋯,i_N-1 ^k_i_0,i_1,…,i_N = μ_N(i_N).
Just like the standard SBP, our inference problem can be formulated as an optimal transport problem, albeit multi-marginal, with entropic regularization. To this end, let = [^1,^2,⋯,^K]. Then, straightforward calculations yield the following reformulation of (<ref>).
The multi-commodity traffic flow inference problem (<ref>) can be reformulated as a multi-marginal optimal transport with entropy regularization
min_ ⟨, ⟩ + ⟨, log⟩ subject to (<ref>)-(<ref>),
where the transport cost tensor is
_k,i_0,i_1,⋯,i_N = -log r_k(i_0)A_k(i_0,i_1)⋯ A_k(i_N-1,i_N) - log p_k.
The above entropy-regularized, multi-marginal optimal transport (MOT) problem can be solved using the standard Sinkhorn algorithm. The Sinkhorn algorithm for (<ref>) is a block ascent algorithm for its dual <cit.>. Each iteration requires a projection operation to compute a marginal distribution of the tensor . Thus, a generic Sinkhorn solver of this type has computational complexity that scales exponentially with the number of marginals (in our case, N+2). Fortunately, the optimization (<ref>) is a MOT problem with graph-structured cost <cit.>. In particular, the cost tensor in (<ref>) can be written as
_k,i_0,i_1,⋯,i_N = ∑_t=1^N ^t_k,i_t-1,i_t
where
^1_k,i_0,i_1 = -log r_k(i_0)A_k(i_0,i_1) - log p_k,
and
^t_k,i_t-1,i_t = -log A_k(i_t-1,i_t), 2≤ t≤ N.
We remark that, even though the total cost tensor is N+2 dimensional, it is the summation of N tensors of dimension 3. This decomposition enables us to exploit the graphical structure of the cost tensor to greatly reduce the computational complexity of the Sinkhorn algorithm. The cost in (<ref>) is associated with the junction tree in Figure <ref>.
This junction tree is associated with a graph with N+2 nodes. The nodes 0, 1, …, N correspond to the vehicle distributions of the total population at each time point. The additional node is used to model the mass distribution over the different commodities.
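As a small illustration of the computational gain afforded by this structure (a sketch with illustrative names, not the algorithm of the cited work): for a single commodity, the time-t marginal of the prior path law can be obtained with t matrix-vector products rather than by summing an (N+1)-dimensional tensor.

import numpy as np

def prior_time_marginal(r_k, A_k, t):
    # Marginal at time t of the prior law r_k(i_0) A_k(i_0,i_1)...A_k(i_{N-1},i_N):
    # t matrix-vector products, i.e. O(t n^2) work, instead of summing n^(N+1) terms.
    q = np.asarray(r_k, dtype=float)
    for _ in range(t):
        q = A_k.T @ q
    return q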
The solution to (<ref>) is characterized by the system of equations
φ(t,i_t, k) = ∑_i_t+1 A_k(i_t,i_t+1) φ(t+1,i_t+1, k),
φ̂(t+1,i_t+1,k) = ∑_i_t A_k(i_t,i_t+1) φ̂(t,i_t, k),
φ(0,i_0) = ∑_i_1,kr_k(i_0)p_kA_k(i_0,i_1) φ(1,i_1, k),
φ̂(1,i_1,k) = ∑_i_0 r_k(i_0)p_kA_k(i_0,i_1) φ̂(0,i_0),
φ(N-1,i_N-1,k) = ∑_i_N A_k(i_N-1,i_N) φ(N,i_N),
φ̂(N,i_N) = ∑_i_N-1,kA_k(i_N-1,i_N) φ̂(N-1,i_N-1, k),
for t = 1, …, N-2, with boundary conditions
φ(0, ·) φ̂(0,·) = μ_0, φ(N,·)φ̂(N,·) = μ_N.
Moreover, the transition probabilities of the solution are
π_t^k (i_t,i_t+1) = A_k(i_t,i_t+1) φ(t+1,i_t+1,k)/φ(t,i_t,k),
π_0^k(i_0,i_1) = A_k(i_0,i_1) φ(1,i_1,k)/∑_i_1A_k(i_0,i_1)φ(1,i_1,k)
π_N-1^k(i_N-1,i_N) = A_k(i_N-1,i_N) φ(N,i_N)/φ(N-1,i_N-1,k)
and the marginal distributions are
μ_t^k(i_t) = φ(t,i_t, k) φ̂(t,i_t,k), t= 1, …, N-1
μ_0^k(i_0) = φ̂(0,i_0) ∑_i_1r_k(i_0)p_kA_k(i_0,i_1) φ(1,i_1, k)
μ_N^k(i_N) = φ(N,i_N) ∑_i_N-1 A_k(i_N-1,i_N) φ̂(N-1,i_N-1, k).
The graphical OT can be efficiently solved with the Sinkhorn belief propagation algorithm <cit.>, which is a combination of the Sinkhorn algorithm for OT and the belief propagation algorithm for probabilistic graphical models. Unlike the vanilla Sinkhorn <cit.>, whose complexity scales exponentially with the number of marginals, the complexity of the Sinkhorn belief propagation algorithm is determined by the largest node degree in the junction tree associated with the cost tensor. For our multi-species problem (<ref>), the Sinkhorn belief propagation algorithm is specialized to Algorithm <ref>.
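A simplified fixed-point sketch of this specialization is given below (Python/NumPy); it iterates the recursions of the system above with the two boundary conditions enforced alternately, assumes N ≥ 2 and strictly positive iterates, and does not reproduce the scheduling or stopping rules of the actual Sinkhorn belief propagation algorithm of the cited work.

import numpy as np

def multicommodity_bridge(A_list, r_list, p, mu0, muN, N, n_iter=500):
    # A_list: K kernels of shape (n, n); r_list: K length-n arrays; p: K weights.
    # Alternately enforce the boundary conditions at t = 0 and t = N while running
    # the backward/forward recursions of the system above.  Assumes N >= 2.
    K, n = len(A_list), len(mu0)
    phi = np.ones((N + 1, n, K))       # phi(t, i, k) at the interior times
    phi_hat = np.ones((N + 1, n, K))   # phi_hat(t, i, k) at the interior times
    phiN = np.ones(n)                  # phi(N, i_N), shared by all commodities
    B = [(r_list[k][:, None] * p[k]) * A_list[k] for k in range(K)]  # r_k(i0) p_k A_k(i0,i1)
    for _ in range(n_iter):
        # backward sweep
        for k in range(K):
            phi[N - 1, :, k] = A_list[k] @ phiN
            for t in range(N - 2, 0, -1):
                phi[t, :, k] = A_list[k] @ phi[t + 1, :, k]
        phi0 = sum(B[k] @ phi[1, :, k] for k in range(K))   # phi(0, i_0)
        phi_hat0 = mu0 / phi0                               # boundary at t = 0
        # forward sweep
        for k in range(K):
            phi_hat[1, :, k] = B[k].T @ phi_hat0
            for t in range(1, N - 1):
                phi_hat[t + 1, :, k] = A_list[k].T @ phi_hat[t, :, k]
        phi_hatN = sum(A_list[k].T @ phi_hat[N - 1, :, k] for k in range(K))
        phiN = muN / phi_hatN                               # boundary at t = N
    return phi0, phi, phiN, phi_hat0, phi_hat, phi_hatN

From these scalings, the transition probabilities and marginals of the solution follow exactly as in the display above, e.g., π_t^k(i_t,i_t+1) = A_k(i_t,i_t+1) φ(t+1,i_t+1,k)/φ(t,i_t,k) at the interior times.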
§ MOST LIKELY NETWORK FLOW WITH CREATION AND KILLING
We now turn our attention to the traffic-flow estimation in the case where commodities may disappear or be suddenly introduced, at various times and locations, during transport. The appearance and disappearance are referred to as creation and killing.
In applications such as traffic flow, creation may model the situation where a vehicle enters the traffic flow from a parking state, and killing models the converse. We focus on the single-commodity case, but the same idea can be applied to the multi-commodity setting in a completely analogous manner.
In the single-commodity case, the prior dynamics is encoded in a transition kernel A over the traffic network G=(,ℰ) as before, which, however, need not be a stochastic matrix in the present context, since commodities are created or destroyed. That is, whether time-homogeneous or not,
A𝟙≠𝟙
in general. In fact, we assume that the row sums are less than 1. This implies that, at each node i of the graph, the random walk has probability 1-∑_j A(i,j) of being killed/absorbed. In our traffic-flow estimation problem, we observe two marginal distributions, μ_0 at time 0 and μ_N at time N, which in general may not have the same mass. Our goal is once again to estimate the most likely traffic flow, given the prior dynamics, that is consistent with the two observed marginals.
To account for the unbalance in mass between the marginals, we introduce a parking state for killing and creation. A similar idea has been used in <cit.> to study an unbalanced Schrödinger bridge problem over a continuous state-space. We assume that the prior probability of going from the parking state to the graph nodes (the creation rate at each node) is c ∈_+^n with c^T𝟙≤ 1. Augmenting the state space by the parking state, we obtain a Markov chain with transition kernel
Â = [ A b; c^T d ],
where b = 𝟙 - A𝟙 and d = 1 - c^T𝟙. We assume that the total number of vehicles in the augmented traffic network (including the parking state) is fixed. Without loss of generality, suppose that the marginals μ_0 and μ_N have been normalized with respect to this number, that is, μ_0^T𝟙≤ 1, μ_N^T𝟙≤ 1. We define the augmented marginal distributions
μ̂_0 = [ μ_0; 1 - μ_0^T𝟙 ] and μ̂_N = [ μ_N; 1 - μ_N^T𝟙 ].
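A small sketch of this augmentation step (Python/NumPy; the variable names are illustrative and no input validation is performed):

import numpy as np

def augment_with_parking(A, c, mu0, muN):
    # Build the augmented kernel A_hat = [[A, b], [c^T, d]] with b = 1 - A 1 and
    # d = 1 - c^T 1, together with the augmented marginals mu0_hat, muN_hat.
    n = A.shape[0]
    b = 1.0 - A.sum(axis=1)            # killing probabilities at each node
    d = 1.0 - float(np.sum(c))         # probability of staying parked
    A_hat = np.zeros((n + 1, n + 1))
    A_hat[:n, :n] = A
    A_hat[:n, n] = b
    A_hat[n, :n] = c
    A_hat[n, n] = d
    mu0_hat = np.append(mu0, 1.0 - np.sum(mu0))
    muN_hat = np.append(muN, 1.0 - np.sum(muN))
    return A_hat, mu0_hat, muN_hat

By construction, the augmented kernel is row-stochastic, so the unbalanced problem is reduced to a standard bridge problem on n+1 states.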
Let M̂ be the distribution over the trajectories induced by the prior dynamics. It assigns to a path (i_0,i_1,…,i_N)∈^N+1 the value
M̂(i_0,i_1,…,i_N)=ν̂_0(i_0)Â(i_0,i_1)⋯Â(i_N-1,i_N).
Denote by 𝒫(μ̂_0, μ̂_N) the set of all Markov chains over the augmented state space with marginal distribution μ̂_0 at time 0 and marginal distribution μ̂_N at time N. Then our estimation problem becomes the following Schrödinger bridge problem:
min{ KL(P‖M̂) | P ∈𝒫(μ̂_0,μ̂_N) }.
Applying the standard Schrödinger bridge theory (as in Theorem <ref>) we obtain the following characterization of the solution to (<ref>).
The Schrödinger system
[ A^T c; b^T d ] [ φ̂_t; ψ̂_t ] = [ φ̂_t+1; ψ̂_t+1 ], t = 0, 1,…, N-1,
[ A b; c^T d ] [ φ_t+1; ψ_t+1 ] = [ φ_t; ψ_t ], t = 0, 1,…, N-1,
[ φ_0φ̂_0; ψ_0ψ̂_0 ] = [ μ_0; 1-μ_0^T𝟙 ], [ φ_Nφ̂_N; ψ_Nψ̂_N ] = [ μ_N; 1-μ_N^T𝟙 ]
admits a unique (up to a constant factor) solution. Moreover, the solution to the SBP problem (<ref>) has transition matrix
[ diag(φ_t)^-1 A diag(φ_t+1)   diag(φ_t)^-1 b ψ_t+1; (1/ψ_t) c^T diag(φ_t+1)   d ψ_t+1/ψ_t ],
with associated marginal flow
μ̂_t = [ φ_tφ̂_t; ψ_tψ̂_t ].
Based on this result, we can recover the most likely evolution for the original Markov chain without the parking state. Its transition matrix is
diag(φ_t)^-1 A diag(φ_t+1).
Note that, in general, this is not a stochastic matrix. This result should be compared with Theorem <ref> which does not involve killing or creation.
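Given scalings (φ_t, ψ_t) obtained by solving the above Schrödinger system, for instance with a Fortet-Sinkhorn-type iteration on Â, the posterior transition, killing, and creation rates can be assembled block-wise as in the theorem; the sketch below (Python/NumPy, illustrative names) does exactly this for one time step.

import numpy as np

def posterior_blocks(A, b, c, d, phi_t, psi_t, phi_next, psi_next):
    # Assemble the posterior transition matrix on the augmented state space from
    # the Schroedinger-system scalings at times t and t+1 (phi for the network
    # nodes, psi for the parking state), following the block formula above.
    P_nodes = A * phi_next[None, :] / phi_t[:, None]     # node -> node
    P_kill = b * psi_next / phi_t                        # node -> parking (killing)
    P_create = c * phi_next / psi_t                      # parking -> node (creation)
    P_park = d * psi_next / psi_t                        # parking -> parking
    top = np.hstack([P_nodes, P_kill[:, None]])
    bottom = np.append(P_create, P_park)[None, :]
    return np.vstack([top, bottom])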
§ NUMERICAL EXAMPLES
In this section, we present a network flow example to illustrate our method. Consider a traffic network as shown in Figure <ref>, where the agents live on the edges instead of the nodes. Assume there are 2 commodities corresponding to cars and trucks. Assume the total number of individuals is 3000: 2000 cars and 1000 trucks. Some nodes in the graph allow parking and thereby allow killing or creation with a certain probability. Each commodity follows certain prior dynamics that respect the topology of the network. In particular, the cars have a high probability of starting from the edges (2, 4) and (6, 9), and the trucks have a high probability of starting from the edges (1, 3) and (19, 18). Moreover, the trucks are not allowed on the edges (6, 9) and (9, 10). We take t=0 as the initial time and t=9 as the final time.
In the traffic flow problem with aggregate observations, we assume the source and target distributions are given, as depicted in Figure <ref>. Our goal is to estimate the most likely evolution of each species or, equivalently, to specify transition rates that effect the flow reconciling the prior with the scarce data. We showcase the estimated results in Figures <ref> and <ref>. These are deemed reasonable considering the limited aggregate measurements that are available to reconcile with the prior dynamics. We display the proportion of cars and trucks that are not in the parking state in Figure <ref>. The transition, creation, and killing rates of the posterior (the solution to the problem) serve both as the solution to an estimation problem and as a solution to a control problem, dictating specifics of the flow (e.g., direction, parking, or entering the flow by vehicles) that ensure matching the specified target marginals.
§ CONCLUSIONS
The purpose of the present work is to explain how the framework of Schrödinger bridges can be extended to model simultaneous transportation of multiple commodities, with partial aggregate data, as well as in the presence of creation and killing along the transport.
Traditionally, the term “bridges” refers to (probability laws on) paths linking marginal data, and the specific paradigm of Schrödinger bridges is the method of constructing such laws on paths that maximizes the likelihood over alternatives. To this end, a prior law is given (or chosen, as a “design parameter”) and a posterior is sought that minimizes the relative entropy between the two while maintaining consistency with the available data.
The first contribution of the paper is to note that, almost verbatim, the Schrödinger paradigm can be carried over to the case where multiple commodities are being transported at the same time and, when aggregate information is available on marginal distributions.
A second contribution is to enhance the Schrödinger framework with an additional state (or, states) that absorb commodities, and thereby take those out of the traffic flow randomly, or generate commodities similarly at random, so as to bring consistency with measured marginal distributions at various points. Here, marginals are assumed at the start and end of a specified interval.
Thus, this extra state (or states) may be thought of as a reservoir.
The rate of killing in the prior is dictated by how far the row sums of the transition kernel fall below one. The creation rate at the nodes needs to be specified as a design parameter, or based on historical data.
Starting from such a suitably enlarged transition kernel that includes the reservoir state, the Schrödinger method can be readily applied to produce adjustment in the transition, creation, and killing probabilities so that the posterior kernel brings consistency with the measured marginal data in a way that may be deemed as the most likely.
The philosophy underlying the paper is fairly general and can be suitably modified for more general marginal information that fully or partially reflects the distributions of individual commodities. On the flip side, obtaining a transition kernel that meets the marginals can be thought of as solving the control problem of deciding on flow rates that effect an overall flow matching the marginal data. In such a case, the marginal data can be seen as specifications dictated by supply and demand at various nodes in the network.
A final point that we wish to highlight is that the problems considered herein, even in the generality envisioned (from which we refrained so as to keep the notation and exposition simple), can be efficiently cast and numerically solved as multimarginal transport problems.
|
http://arxiv.org/abs/2307.04999v1 | 20230711033305 | The GECAM Real-Time Burst Alert System | [
"Yue Huang",
"Dongli Shi",
"Xiaolu Zhang",
"Xiang Ma",
"Peng Zhang",
"Shijie Zheng",
"Liming Song",
"Xiaoyun Zhao",
"Wei Chen",
"Rui Qiao",
"Xinying Song",
"Jin Wang",
"Ce Cai",
"Shuo Xiao",
"Yanqiu Zhang",
"Shaolin Xiong"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.IM"
] |
Vol.0 (20xx) No.0, 000–000
Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China; [email protected], [email protected]
Southwest Jiaotong University, Chengdu 611756, China
Qufu Normal University, Qufu 273165, China
University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, China
Received 20xx month day; accepted 20xx month day
Gravitational Wave High-energy Electromagnetic Counterpart All-sky Monitor (GECAM), consisting of two micro-satellites, is designed to detect gamma-ray bursts associated with gravitational-wave events. Here, we introduce the real-time burst alert system of GECAM, which adopts the BeiDou-3 short message communication service. We present the post-trigger operations, the detailed ground-based analysis, and the performance of the system. In the first year of in-flight operation, GECAM was triggered by 42 GRBs. The GECAM real-time burst alert system is able to distribute an alert within ∼1 minute of a trigger, which enables timely follow-up observations.
Yue Huang, Dongli Shi, Xiaolu Zhang, et al
GECAM Real-Time Burst Alert System
The GECAM Real-Time Burst Alert System
Yue Huang*
1
Dongli Shi
1,2
Xiaolu Zhang
1,3
Xiang Ma
1
Peng Zhang
1,2
Shijie Zheng
1
Liming Song
1
Xiaoyun Zhao
1
Wei Chen
1
Rui Qiao
1
Xinying Song
1
Jin Wang
1
Ce Cai
1,4
Shuo Xiao
1,4
Yanqiu Zhang
1,4
Shaolin Xiong*
1,4
August 12, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
On September 14, 2015, the first detection of gravitational wave (GW) signals from the merger of two stellar-mass black holes, observed by the Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, inaugurated the era of GW astronomy <cit.>. This provided the first direct evidence for one of the predictions of general relativity. On August 17, 2017, the Advanced LIGO and Advanced Virgo gravitational-wave interferometers detected the first GW, GW 170817, from a binary neutron star merger, significantly promoting the study of gravitational-wave multi-messenger astronomy <cit.>. Fermi and INTEGRAL detected a short gamma-ray burst (GRB), GRB 170817A, 1.7 s after the GW event. The electromagnetic (EM) follow-up observations not only succeeded in localizing the merger to the host galaxy, NGC 4993, but also provided the first unambiguous detection of a kilonova, the broadband signature of rapid neutron capture nucleosynthesis (r-process) in the merger ejecta. These detections made by GW and EM observatories, for the first time, validated the merger model proposed decades ago to explain short GRBs <cit.>.
The identification of EM counterparts to GW events allows for the precise localization of the GW source, which would further yield rich scientific rewards (see <cit.> for a review). The EM counterpart identification is constrained by the accuracy of the localization of the GW signal, which usually amounts to a few hundred square degrees <cit.>. In general, we expect that searching for high-energy EM counterparts to a GW event will play a major role in the discovery of the EM counterpart. This is because, firstly, the high-energy counterpart is luminous and its emission is less likely to be absorbed by the intervening medium; secondly, in the low-energy bands, there might be numerous optical candidates localized within the error region of the GW source, whereas the high-energy sky is less “crowded”, so it is more reasonable to relate a high-energy transient to the GW event; thirdly, the time delay between the high-energy emission and the GW emission is assumed to be minimal. Therefore, a precise localization of the high-energy transient could substantially reduce the localization uncertainty of the GW event, which further facilitates the follow-up observations at other wavelengths. In recent years, a large number of observations have been made with hard X-ray and γ-ray telescopes, such as Fermi-GBM <cit.>, Swift-BAT <cit.>, INTEGRAL-SPI-ACS <cit.>, Insight-HXMT <cit.> and Konus-Wind <cit.>, to search for high energy counterparts to GW sources.
Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor <cit.> (GECAM, also known as “HuaiRou-1”) is a space-based project proposed for the detection of high-energy EM counterparts to GW sources, as well as other high-energy transient sources such as GRBs and magnetars. GECAM consists of two micro-satellites, GECAM-A and GECAM-B, which are designed to operate on identical orbits (600 km altitude and 29^∘ inclination), on opposite sides of the Earth, in order to get a simultaneous view of the entire sky. Each satellite features a dome-shaped array of 25 Gamma-ray detectors (GRD) and 8 Charged particle detectors (CPD). The GRDs are composed of a LaBr_3 crystal and a silicon photomultiplier (SiPM) array, covering an energy range from 6 keV to 5 MeV <cit.>. The CPDs are used to monitor the flux of charged particles on the GECAM orbit and help distinguish between astrophysical events and charged particle events. The CPDs use plastic scintillators combined with SiPMs, covering an energy range of 300 keV–5 MeV <cit.>. In case of a trigger, the flight software <cit.> determines the incoming direction and provides a preliminary classification of the source, which are downlinked as a trigger alert to the ground. In order to carry out rapid follow-up observations at other wavelengths, a real-time downlink of the alert data is required. Considering the current status of the real-time downlink resources in China, GECAM adopts the global short message communication service <cit.> of the BeiDou-3 navigation satellite system <cit.> to downlink the trigger alert data to the ground. GECAM is the first satellite to use the BeiDou-3 global short message service on board and the first space astronomy satellite in China capable of real-time downlink.
The GECAM Scientific Ground Segment <cit.> thus includes a section that is devoted to processing the BeiDou short messages upon their arrival. In the following, we describe the onboard triggering and data flow in Section <ref> and the real-time burst alert system in Section <ref>. We report the in-flight performance of the first year in Section <ref>. Finally, in Section <ref> we give a summary.
§ ONBOARD TRIGGERING AND DATA FLOW
§.§ In-flight Trigger and Localization
The GECAM In-flight Realtime Trigger and Localization software (GIRTLS) <cit.> continuously monitors the background count rates of all GRDs for significant increases in different energy ranges and on different timescales, to detect GRBs and other short-timescale transients. The background is accumulated over 20 s pre-trigger, excluding the most recent 5 seconds (by default). The event data are binned into 50 ms bins and 8 energy channels, so the trigger timescales are defined as multiples of 50 ms up to 4 s. Except for the 50 ms timescale, all of the triggers include two phases offset by half of the time bin. GECAM supports 64 different trigger algorithms, each of which comes with an adjustable threshold. The trigger algorithms currently implemented cover five energy ranges and seven timescales; a detailed description of the 64 algorithms can be found in <cit.>. A trigger is only generated when at least three detectors exceed the threshold at the same time. When there is a trigger, the GIRTLS gives an approximate location of the source using the relative rates recorded in the 25 GRDs, accumulated on 4 timescales. Besides GRBs, other events, such as solar flares and charged particle events, can trigger the alert, so the GIRTLS further performs a classification by using the count ratio between CPD and GRD, the localization, the hardness ratio, and the geographic location of the satellite to identify the type of source.
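As a schematic illustration of this logic (not the flight code: the Gaussian significance criterion, the array layout, and the numerical threshold are simplifying assumptions of this sketch), a single-timescale, single-energy-range trigger test could look as follows.

import numpy as np

def check_trigger(counts, bin_width=0.05, window=4.0, bg_window=(20.0, 5.0),
                  threshold_sigma=4.5, min_detectors=3):
    # counts: array of shape (n_detectors, n_bins) with 50 ms binned counts in one
    # energy range; the trigger window is the most recent `window` seconds, and the
    # background is taken from bg_window[0] to bg_window[1] seconds before it.
    n_bg = int((bg_window[0] - bg_window[1]) / bin_width)
    n_gap = int(bg_window[1] / bin_width)
    n_sig = int(window / bin_width)
    bg = counts[:, -(n_sig + n_gap + n_bg):-(n_sig + n_gap)]
    sig = counts[:, -n_sig:]
    expected = bg.mean(axis=1) * n_sig          # expected counts in the window
    observed = sig.sum(axis=1)
    # simple Gaussian significance of the excess in each detector
    sigma = (observed - expected) / np.sqrt(np.maximum(expected, 1e-9))
    return np.sum(sigma > threshold_sigma) >= min_detectors, sigma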
Once triggered on board, the GIRTLS produces the trigger alert data that are downlinked to the ground via BeiDou short messages. The trigger alert data include information on the trigger significance, the burst spectrum, the on-board localization and classification, and light curves for improving the ground localization. There are two algorithms to localize the burst on the ground: one uses the relative count rates from the 25 GRDs, which requires a relatively long light curve from each detector; the other uses the time delay of the burst between the two satellites, which operates on high-temporal-resolution light curves <cit.>. Due to the limited capacity of a single BeiDou short message (560 bits per message) and the downlink capacity of the BeiDou system <cit.>, the high-temporal-resolution light curve is only generated for short bursts that are believed to be related to neutron star mergers <cit.>.
There are two types of trigger alert data: long triggers and short triggers. If the count rate exceeds the threshold at 4 s and 20 s post-trigger, the trigger is identified as a long trigger. Each long trigger comprises 31 BeiDou short messages. The first two messages contain the most important parameters for rapid follow-up observations, i.e., the trigger time, burst localization, classification and spectrum, and the satellite position and attitude at the trigger time, with backups. The 3rd and 4th messages contain light curves from the three GRDs with the highest and lowest trigger significance, binned by different trigger timescales and energy ranges. The light curves provide a quick view of the burst. The 5th message contains the light curves of the 8 CPDs, covering from 30 s prior to 180 s after the trigger time, which are used to distinguish particle events from GRBs. The 6th to 30th messages store the light curves from each GRD from ∼50 s before the trigger (divided into 8 time bins) to 185 s after the trigger (divided into 22 time bins), binned by timescales from 50 ms to 50 s, with shorter timescales close to the trigger time. The last message gives the satellite attitude for up to 120 s after the trigger time. The BeiDou short messages are transmitted every 17 s, so it takes about 10 minutes to send all 31 messages.
The difference between the short and long trigger alert data is that the short trigger includes a combined high-resolution (0.4 ms by default) light curve from the 25 GRDs with 2500 bins. Each short trigger contains up to 31 short messages, depending on the size of the light curve after compression. The first two messages are the same as for the long trigger. The rest of the messages contain the compression method and the compressed light curve.
§.§ On-ground Analysis
After being received by the National Space Science Center (NSSC) on the ground, the BeiDou short message is forwarded to the Scientific Ground Segment at the Institute of High Energy Physics (IHEP) and ingested into the Burst Alert System (BAS). The BAS is developed to process the trigger alert data in real time and to transmit the locations and other important information to the astronomy community via standard communication channels (e.g., the GRB Coordinates Network (GCN)[See http://gcn.gsfc.nasa.gov/gcn]). The types of GECAM notices generated by the BAS are listed below.
1. GECAM FLIGHT: trigger time, trigger energy range, trigger significance, on-board localization (RA and Dec), ground refined classification (see Section <ref>), ∼1 minute after trigger.
2. GECAM GROUND: ground localization (RA and Dec, see Section <ref>) and classification (see Section <ref>), ∼10 minutes after trigger.
The notices are sent only if the BAS classifies the trigger as an astrophysical transient, such as a GRB. Between July 15 and the end of 2021, we sent a total of 323 notices, of which 156 were FLIGHT and 167 were GROUND notices, corresponding to 205 triggers.
The BAS provides a refined classification by using an updated algorithm (see Section <ref>). Due to the limited memory and computational resources on board, the GIRTLS uses a coarser sky grid (3072 grid points), three pre-defined templates (soft, normal, and hard Band-function spectra), and an averaged pre-burst background level to localize the source. Compared to the GIRTLS, the BAS provides improved locations by applying a finer sky grid, fitting the burst spectrum, and estimating the background with pre- and post-trigger data (see Section <ref>), or with the time delay calculated based on the Modified Cross-correlation Function <cit.> when a burst is observed by both satellites, or by GECAM and other satellites (see Section <ref>).
Moreover, GECAM produces time-tagged event data that are transmitted via the X-band ground station. The X-band data are not downlinked in real time like the alert data, but delayed up to several hours depending on the passages over the station. The X-band data are used to determine the final characteristics of the bursts. The continuous event data also enhance the ground-based search for untriggered GRBs by using the coherent search method, which was initially applied to Insight-HXMT <cit.>.
§ THE BURST ALERT SYSTEM (BAS)
§.§ Re-classification of the trigger
GECAM will detect GRBs, solar flares, particle events, soft gamma repeaters (SGRs), and Earth occultations of bright sources (e.g., Sco X–1). The GIRTLS in flight uses the background-subtracted count ratio between CPD and GRD to identify particle events and further uses the event localization (the error box is 2σ) and the hardness ratio to distinguish known sources. Hence, it is only valid when the background is correctly estimated and a precise location is obtained.
On the other hand, the BAS on-ground provides a refined classification to each trigger. The relevant data applied are event localization, hardness ratio, count rate of CPD, count ratio of CPD and GRD, the location of the spacecraft, and McIlwain magnetic L coordinates. Particle events occur predominantly in trapped particle regions, mostly in the entry or exit of the South Atlantic Anomaly (SAA) region, or at high L values. Thus, they are identified when three of the following four conditions are met: spacecraft geographic location, L value, CPD count rate and the count ratio between CPD and GRD. Like in GIRTLS, the BAS compares the event location with the sun and other known sources, e.g. SGR 1935+2154, with the error box set to 3 σ of the location error and includes the systematic error. If the hardness ratio is in the predefined range, and the source (the sun and other known sources) is not occulted by Earth, the event is classified as a solar flare or burst from known sources. Events which are located near the galactic plane and have a hardness ratio above one will be classified as generic sources. GECAM can also be triggered by bright sources rising from the Earth's limb, and this can be easily identified since the occultation time for each source can be calculated precisely.
§.§ Ground localization using relative rates
§.§.§ Background estimation
The BAS performs the background fitting after the BeiDou short messages are complete. The method applied here is recursive non-parametric regression, similar to what is adopted by the Fermi-GBM RoboBA <cit.>. First, we fit the data from -49.1 to -4.1 s pre-trigger (divided into 4 time bins, binned by timescales from 5 to 20 s) and from 5 to 185 s post-trigger (divided into 10 time bins, binned by timescales from 5 to 50 s) with a polynomial function of up to second order for each GRD. When at least four detectors exceed the predefined signal-to-noise ratio thresholds in a given bin, that bin is removed from the background. The regression is performed repeatedly on the remaining time bins until the recursive process converges (see Figure <ref>). When there are fewer than two bins pre-trigger or post-trigger, the BAS cannot perform the background fitting, and the background is instead estimated as the pre-trigger average. This usually happens during extreme background fluctuations, e.g., when the satellite is close to the SAA, or when the burst duration is abnormally long. For 6 out of 37 GRBs [GECAM detected 42 GRBs in 2021, but 5 of them suffered dropped data packets; see Section <ref>], the background fit failed. Five failures resulted from long burst durations; the remaining one was caused by background fluctuations. The BAS thus has a success rate of about 84%.
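A simplified sketch of such a recursive fit for a single detector is shown below (Python/NumPy; the convergence criterion and the treatment of the per-bin uncertainties are assumptions of this illustration, and the actual BAS couples the bin rejection across detectors as described above).

import numpy as np

def recursive_background_fit(t, rates, errors, order=2, snr_cut=3.0, max_iter=20):
    # Iteratively fit a polynomial background to the pre-/post-trigger bins of one
    # GRD, discarding bins whose residual significance exceeds `snr_cut`, until the
    # set of retained bins no longer changes.
    keep = np.ones(t.size, dtype=bool)
    coeffs = np.zeros(order + 1)
    for _ in range(max_iter):
        coeffs = np.polyfit(t[keep], rates[keep], order)
        resid_snr = (rates - np.polyval(coeffs, t)) / errors
        new_keep = resid_snr < snr_cut          # reject bins with a significant excess
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return coeffs, keep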
§.§.§ Spectrum fit and localization
The GECAM on-board localization system operates on three spectral templates, which leads to inaccurate localizations if the templates do not match the actual spectrum. Ideally, this can be corrected by simultaneously fitting the spectrum and the location <cit.>. However, the small number of time and energy bins in the trigger alert data is not suitable for such a joint fit. Thus, one needs to fit the spectrum and the location iteratively. The burst spectrum is accumulated from the 3 detectors with the highest trigger significance. These detectors usually have similar incidence angles and therefore similar responses, so we add their response files. First, we generate the response file using the on-board location and fit the spectrum with the Band function and a cut-off power-law model (see Figure <ref>). Then, we construct the template for each detector in the 15–1020 keV range over 12,288 grid points in payload coordinates, using the best-fitting model and parameters. These are compared to the observed counts accumulated in the 25 GRDs to find the χ^2 minimum, and the position is converted to equatorial coordinates using the spacecraft attitude. The new position is used as input for the next iteration, until the position converges. A full-sky HEALPix map of the localization is then produced; see Figure <ref> for an example.
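A highly simplified sketch of the grid-search step is shown below (Python/NumPy; the response interface, the χ^2 definition, and the variable names are illustrative assumptions and do not reproduce the actual BAS implementation).

import numpy as np

def localize_on_grid(observed, background, model_grid):
    # observed, background: (n_det,) counts per detector during the burst window;
    # model_grid: (n_grid, n_det) counts predicted from the fitted spectrum folded
    # through each detector response for a source at every candidate grid point.
    excess = observed - background
    chi2 = np.sum((excess[None, :] - model_grid) ** 2
                  / np.maximum(observed, 1.0)[None, :], axis=1)
    best = int(np.argmin(chi2))
    return best, chi2

The best-fitting grid point is then converted to equatorial coordinates using the spacecraft attitude and fed back into the response generation for the next iteration.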
§.§ Ground localization using Time delay method
In addition to the spectral fitting method, GRBs can also be located via the time delay method or triangulation technique <cit.>. When a GRB arrives at two spacecraft, it can be localized to an annulus characterized by the time delay and the spacecraft positions. The time delay and its uncertainty are usually calculated with the cross-correlation function (CCF). However, when the classic CCF method is applied to locate GRBs with low-orbit satellites, the localization region becomes too large to give effective constraints. As an improvement, <cit.> proposed a time delay localization method based on the Modified Cross-correlation Function (MCCF, Li–CCF) <cit.>, which provides an accurate time delay from high-time-resolution light curves.
Once all the short trigger alert data are received, the BAS decompresses them to obtain a high-time-resolution light curve (see Figure <ref>). If a burst is observed by both satellites, the light curves are sent to the MCCF localization algorithm. <cit.> provides a full description of the algorithm and an estimate of the uncertainty (1σ: less than 0.3^∘). The Earth-occulted part of the annulus is then excluded, and the remainder is combined with the localization derived by comparing the count rates from different detectors <cit.>.
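For intuition, the geometry of the time-delay annulus can be sketched as follows (Python/NumPy; a simplified illustration that takes the time delay and its uncertainty as given, rather than the MCCF implementation of the cited work).

import numpy as np

def annulus_from_delay(pos1, pos2, dt, dt_err, light_speed=299792.458):
    # pos1, pos2: spacecraft positions in km; dt = t2 - t1 is the burst arrival-time
    # difference in seconds; returns the unit baseline direction (pos1 - pos2), the
    # half-opening angle of the localization annulus about it, and its 1-sigma width.
    baseline = np.asarray(pos1, dtype=float) - np.asarray(pos2, dtype=float)
    d = np.linalg.norm(baseline)
    cos_theta = np.clip(light_speed * dt / d, -1.0, 1.0)
    theta = np.degrees(np.arccos(cos_theta))
    # propagate the timing uncertainty into an angular width of the annulus
    sin_theta = max(np.sqrt(1.0 - cos_theta ** 2), 1e-12)
    width = np.degrees(light_speed * dt_err / (d * sin_theta))
    return baseline / d, theta, width

For two low-Earth-orbit spacecraft with a baseline of order 10^4 km, a timing uncertainty well below a millisecond is needed to reach the sub-degree annulus widths quoted above, which is what the high-time-resolution light curves enable.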
Because GECAM-A has not been turned on yet (see Section <ref>), no GRBs or other bursts have been localized with this method using the two GECAM satellites. However, we have applied this method to locate a burst from SGR 1935+2154 observed by GECAM-B, Fermi-GBM, and INTEGRAL/SPI-ACS <cit.>. The half-width of the annulus region obtained by GECAM-B and Fermi-GBM is 0.4^∘ (1σ).
§ IN-FLIGHT PERFORMANCE
The two GECAM satellites were co-launched on 2020 December 10 (Beijing Time) <cit.>. GECAM is scheduled to work in a survey mode, in which the satellites point away from the Earth. Because of a power issue, GECAM-B has been set to the “pointing” mode, with the solar panel oriented towards the Sun, since January 14, 2021, in order to provide the maximum energy to the spacecraft. Unfortunately, as of this writing, GECAM-A has failed to turn on its payload due to a power supply issue. GECAM-B works for about 10 hours per day.
§.§ Trigger statistics and analysis
During its first year (2021) of operation, GECAM was triggered 858 times [GECAM was initially triggered 1029 times in flight. Due to dropped data packets in the real-time communication stream, we received only 858 triggers. See Section <ref> for details.] on a variety of transient events in flight (see Figure <ref>): 42 of these are verified as GRBs, 32 as bursts from SGRs, 1 as a Type-I X-ray burst (XRB) from the X-ray binary 4U 0614+09 <cit.>, and 783 as others (solar flares, charged particles, Earth occultations, or instrumental effects) by the Burst Advocates (BA). Table <ref> shows the number of events classified by the GIRTLS, the BAS, and the BA. For example, 666 triggers are classified as GRBs by the GIRTLS. Among them, 42 are “real” GRBs, 32 are SGRs, 1 is an XRB, and 591 are Others. The GIRTLS has a 100% success rate classifying GRBs, but only a 24% success rate of not identifying other events as GRBs. In comparison, 288 triggers are classified as GRBs by the BAS, and 34 of these are “real” GRBs. Eight “real” GRBs were misclassified as Generic sources by the BAS, as they were located near the Galactic plane. The BAS has an 80% success rate classifying GRBs, and a 70% success rate of not identifying other events as GRBs. Most of those misclassified as GRBs are particle events and instrumental effects. We will continue to investigate additional improvements to the classification algorithms.
The monthly trigger statistics over the first year of the mission are shown in Figure <ref>. The higher rate of triggers in the first six months is due to the temperatures of the SiPMs exceeding the design specifications (-20±3 ^∘C) when the spacecraft adjusted its attitude mode. This leads to an increase of the thermal noise and may give false triggers. The SiPMs are also prone to significantly increased thermal noise caused by on-orbit radiation damage, thereby decreasing their signal-to-noise ratio. This is clearly suggested by a significant decrease after April in the rate of triggers on the occultation of Sco X–1, which has a soft spectrum (see Figure <ref>). Thus, we raised the low-energy threshold of the GRDs on December 30, 2020, January 5 and 18, 2021, and February 19, 2021; the current low-energy threshold of the GRDs is about 15 keV. In addition, on January 27, 2021, we presented the first report of the reactivation of SGR J1935+2154 <cit.>. GECAM also detected a series of bursts from this source in July and September of 2021.
Table <ref> summarizes all 42 in-flight triggered GRBs from the first year's operation of GECAM. Figure <ref> shows the sky distribution of the GRBs in celestial coordinates. There are 27 GRBs that are localized by other instruments (e.g., Swift-BAT, Fermi-GBM) or the IPN. These reference locations are also listed in Table <ref>. Figure <ref> shows the fraction of GECAM in flight and ground localizations within a given offset from the reference location. The vertical dot-dashed line shows that 68% of the reference locations are contained in a ∼9^∘ region for both in-flight and ground locations.
§.§ BeiDou short message Performance
The performance of the BeiDou-3 short message service is presented in this section. The time latency and the success rate of message transmission are given.
Figure <ref> shows the time delay between the trigger time and the receiving time of the first or second short message of the trigger. The average time delay is 45 s and the minimum time delay is 25 s. About 95% of the triggers have time delays of less than 67 s. This is essential for follow-up observations and has already enabled several of them, e.g., <cit.>. The time delay consists of two parts. The first is the delay from on-board signal processing: for short triggers, it takes ∼5 s to process the data, while for long triggers, it takes about 20 s. The second is the delay between sending the message on board and receiving it on the ground via the BeiDou short message service. Since messages are transmitted every 17 s, there is an extra 17 s delay if the previous message fails to be received.
The BeiDou short messages are not only transmitted in real time but also kept in the on-board storage and downlinked via the X-band ground station. We can thereby estimate the success rate of the transmissions by comparing the data from the two channels. Figure <ref> shows the total number and the lost number of BeiDou short messages per day in 2021. Around January 15, most of the messages failed to be transmitted due to the attitude of the satellite. Because of the power supply issue, the satellite has to be turned off frequently, which prevents some messages from being sent in time before the satellite shuts down. Regardless of the satellite status, the success rate is 94.6%, which is consistent with the official figures for the BeiDou system, whose theoretical and tested success rates are 95% and 97.1%, respectively <cit.>.
§ CONCLUSIONS AND PERSPECTIVES
GECAM is China's first transient explorer with a real-time alert system, capable of distributing GRB coordinates to ground observers within minutes using the BeiDou-3 short message service. During the first year of operation, GECAM was triggered 858 times in flight, 42 of which were GRBs. The BAS processes the trigger alert data and provides refined classifications and localizations. The burst alert data can be transmitted to our collaborators within ∼1 minute. As of this writing, we are also collaborating with the GCN team on disseminating the notices via GCN. The in-flight performance shows that the GECAM real-time BAS based on the BeiDou-3 short message service operates stably and efficiently. It has been applied to the subsequent GRB mission High Energy Burst Searcher (HEBS), a gamma-ray burst monitor on board an experimental satellite to be launched in 2022 <cit.>.
The GECAM mission aims to detect and localize GRBs associated with GW events. However, the low luminosity and flux of GRB 170817A suggest that a population of short GRBs may be missed due to the lack of on-board triggers. In addition to the automated flight triggers, GECAM will therefore also provide a targeted coherent search for GRBs associated with GW events and will search for sub-threshold short GRBs, which can be used to search for low-significance GW signals. Moreover, a further dedicated effort is ongoing to improve the ground localization, classification, and automatic alerting procedure. GECAM is going to play a crucial role in the forthcoming fourth observing run (O4) of LIGO, Virgo, and KAGRA in searching for and characterizing the EM counterparts of GW events. GECAM is also valuable for fully exploiting the scientific potential of neutrinos and fast radio bursts, since these events likewise require high-energy EM observations for identification and further study.
The GECAM (Huairou-1) mission is supported by the Strategic Priority Research Program on Space Science of the Chinese Academy of Sciences. The authors thank the support from the Strategic Priority Research Program on Space Science (Grant No. XDA15360000, XDA15360300, XDA15360102, XDA15052700) of the Chinese Academy of Sciences, the National Natural Science Foundation of China (Grant No. U2031205, 12133007), and the National Key R&D Program of China (2021YFA0718500, 2022YFF0711404).
|
http://arxiv.org/abs/2307.07608v1 | 20230714200727 | Want to Raise Cybersecurity Awareness? Start with Future IT Professionals | [
"Lydia Kraus",
"Valdemar Švábenský",
"Martin Horák",
"Vashek Matyáš",
"Jan Vykopal",
"Pavel Čeleda"
] | cs.CY | [
"cs.CY",
"cs.CR",
"K.3"
] |
Want to Raise Cybersecurity Awareness? Start with Future IT Professionals.]Want to Raise Cybersecurity Awareness? Start with Future IT Professionals.
L. Kraus]Lydia Kraus
0000-0002-1387-3578
Masaryk University
Brno
Czech Republic
[email protected]
V. Švábenský]Valdemar Švábenský
0000-0001-8546-280X
Masaryk University
Brno
Czech Republic
[email protected]
M. Horák]Martin Horák
0000-0002-1835-6465
Masaryk University
Brno
Czech Republic
[email protected]
V. Matyáš]Vashek Matyáš
0000-0001-7957-7694
Masaryk University
Brno
Czech Republic
[email protected]
J. Vykopal]Jan Vykopal
0000-0002-3425-0951
Masaryk University
Brno
Czech Republic
[email protected]
P. Čeleda]Pavel Čeleda
0000-0002-3338-2856
Masaryk University
Brno
Czech Republic
[email protected]
As cyber threats endanger everyone, from regular users to computing professionals, spreading cybersecurity awareness becomes increasingly critical. Therefore, our university designed an innovative cybersecurity awareness course that is freely available online for students, employees, and the general public. The course offers simple, actionable steps that anyone can use to implement defensive countermeasures. Compared to other resources, the course not only suggests learners what to do, but explains why and how to do it. To measure the course impact, we administered it to 138 computer science undergraduates within a compulsory information security and cryptography course. They completed the course as a part of their homework and filled out a questionnaire after each lesson. Analysis of the questionnaire responses revealed that the students valued the course highly. They reported new learning, perspective changes, and transfer to practice. Moreover, they suggested suitable improvements to the course. Based on the results, we have distilled specific insights to help security educators design similar courses. Lessons learned from this study are relevant for cybersecurity instructors, course designers, and educational managers.
[500]Applied computing Education
[100]Security and privacy
August 12, 2023
===================
§ INTRODUCTION
Protecting oneself from security and privacy threats in cyberspace is challenging.
IT-knowledgeable users thereby serve as an important information source for users without an IT background <cit.>.
However, where do these IT-knowledgeable users learn about security advice and behaviors?
The literature indicates that there is no unified source for security advice online; the advice seems to be spread across the Internet, and opinions about which advice should be prioritized diverge among lay users and experts <cit.>.
Our university offers a unified source of advice: the Cybercompass, which is a freely available online resource for students, employees, and the wider public <cit.>. The course consists of five lessons: Security of devices, Passwords, (Cybersecurity) Self-defense, Secure communication, and Incident reporting.
To raise cybersecurity awareness among IT-knowledgeable users, we included the Cybercompass in a compulsory introductory course to information security and cryptography (ISC) for computer science students and evaluated their experiences.
To this end, we assigned them homework to explore the course.
After each of the five online lessons, students filled in a questionnaire examining their overall impression of the lesson, its usefulness, comprehensibility, and difficulty.
We further asked whether they learned something new, whether taking the Cybercompass changed their view on everyday cybersecurity, and whether they would recommend the Cybercompass to others, such as family, non-university friends, fellow students, or colleagues.
Our work yields two key contributions:
* We evaluate the effects of including the Cybercompass material into introductory security courses. Our results show that students valued the Cybercompass highly. They reported new learning, changes in their perspective, and transfer to practice.
* We release the Cybercompass. The course is freely available online <cit.> and can thus serve as an inspiration for educators who plan to design a similar course.
As a result of our positive experiences, we encourage other teachers to consider including practical cybersecurity hints and defensive countermeasures, as covered in the Cybercompass, in introductory security courses. This will improve awareness and cybersecurity best practices in the higher education environment and beyond.
§ RELATED WORK
§.§ Cybersecurity Threats in the Higher Education Sector
Higher education institutions have become attractive targets in recent years, as several data breaches and surveys indicate <cit.>.
As of 2017, the number of data breaches at higher education institutions doubled, and email addresses continue to constitute a popular target for hackers <cit.>.
As of 2018/2019, 72% of higher education institutions consider phishing and social engineering the top threat they are facing <cit.>.
Ransomware/malware and unpatched security vulnerabilities rank second and third <cit.>.
While in the past, only a third of higher education institutions offered security training for students and staff <cit.>, the numbers increased up to 80% <cit.>.
Bongiovanni <cit.> reviewed the literature on information security management in higher education and concluded that the topic is “highly under-investigated”.
§.§ Students' Information Security Awareness
While often online, students were shown to lack information security awareness, particularly when they enter higher education <cit.>.
Data show a rise in scam emails targeting students at the beginning of every academic year <cit.>.
North et al. surveyed 465 students in introductory computer technology courses at different US universities. Most participants demonstrated a satisfactory awareness of computer security and ethics <cit.>.
Yet, between 20% and 52% of participants had knowledge gaps in specific areas of computer security.
Muniandy et al. assessed cybersecurity behavior of 128 students in the categories of malware, password usage, phishing, social engineering, and online scamming <cit.>. They found that the reported behavior was unsatisfying in several categories. About 30% of students were unsure about the status of their antivirus software, and almost 30% indicated that they would be willing to download material from insecure websites. Similarly, about 50% of the students did not follow safe password practices.
Sheng et al. conducted a survey on the susceptibility to phishing with 1001 participants. They found that younger participants between the ages of 18 and 25 were more susceptible to phishing than other age groups <cit.>.
Lastly, Matyas et al. investigated students' security behavior over the course of several years and found that secure behavior improved, despite fewer and fewer students reading the university's security directive <cit.>.
§.§ Where Do Students Receive Essential Cybersecurity Training?
Kim conducted a survey with 68 undergraduate and graduate students about exposure to information security awareness training <cit.>.
Although many students in the study understood the need for information security awareness training, most respondents did not participate in training at the university or work <cit.>.
Similarly, a CDW-G survey showed a discrepancy between the training that students ought to receive and the training that students actually receive <cit.>: 82% of IT professionals said that students need to engage in cybersecurity training at least once a year, yet, only 35% of students said that was required of them.
In general, students learn about cybersecurity from various sources such as websites and media rather than from dedicated training <cit.>.
Yet, learning by security advice from the open web has its pitfalls.
Redmiles et al. evaluated advice sources and their quality on the web and identified 374 pieces of unique advice <cit.>.
They found that a vast majority (89%) of advice is considered useful by users and experts alike <cit.>.
However, a common problem is that users and experts struggle to prioritize these pieces of advice <cit.>.
Moreover, many representative security awareness websites do not offer a structured way of conveying advice to end users <cit.>.
§.§ Security Awareness Training Outside Academia
The interest in cybersecurity training is high even beyond the academic environment.
Ricci et al. surveyed more than 200 adults and found that most of them would be interested in a cybersecurity seminar, especially if the employer would pay for it <cit.>.
Yet, the willingness to spend time and money for such a seminar is limited: the desired seminar length was rated 1 to 1.5 hours, and the desired costs were rated $20 on average, with 40% of participants not willing to pay at all <cit.>.
Regarding the format of cybersecurity education seminars, Ricci et al. further found that more than two-thirds of surveyed participants prefer some form of online education as part of a seminar <cit.>.
Given the facts described above, it is important that organizations and institutions offer free and efficient online cybersecurity education for everyone.
§ THE EVALUATED CYBERSECURITY AWARENESS COURSE
Our university offers a structured cybersecurity awareness course. The Cybercompass is an extracurricular activity in the form of educational material presented on a website. It is open-source and freely available for anyone on the Internet <cit.>.
The most valuable features of the course are two-fold:
First, it offers and prioritizes security advice.
Second, the advice presented on the website is complemented with reasoning: it explains both the what and the why of proper security behavior.
§.§ Course Design
The Cybercompass was designed by a multidisciplinary team in 2019. The main challenge was to prepare an easily accessible course for all users, which will positively influence their security behavior. The team used the Design Thinking methodology <cit.>, working with users from different target audiences during the design process. Specifically, the team focused on simplicity of language, chunking of topics into lessons, appealing visual style, and intuitive interactions with the website. Also, the content of the course was reduced to a reasonable minimum (1–2 hours in total) with the goal to provide information that can help users practically perform basic cybersecurity measures and influence their behaviors.
Specific security measures were identified from different sources. First, the team studied numerous sources that deal with information security awareness (such as <cit.>). Second, the team discussed security measures with members of two Computer Security Incident Response Teams (CSIRTs). Finally, the team also included three members who focused on information security awareness.
All identified security measures were then internally evaluated in terms of their suitability for course users. Afterward, the final selection was checked with the members of CSIRTs and tested with users. The prototype of the course was iteratively improved based on insights from users.
§.§ Course Structure and Content
The course consists of five lessons.
Each lesson takes between 15 and 30 minutes to complete.
The lessons (except for Incident reporting) contain text with factual information about cybersecurity threats and tutorials on how to better protect oneself in cyberspace, together with examples of and links to protective tools (such as a password manager and anti-malware software).
At the end of each lesson, there is a bonus section for curious users.
Example screenshots of the Cybercompass are provided in Figure <ref>.
The course offers a variety of topics, covered in five lessons as described above: Security of devices, Passwords, (Cybersecurity) Self-defense, Secure communication, and Incident reporting.
* Security of devices: explains the importance of antivirus software on smartphones and PCs, encompasses a step-by-step tutorial for online and offline backups, and encourages the use of screen locks, device encryption, anonymous browsing mode, and others. Furthermore, it advises the rapid installation of updates.
* Passwords: teaches secure password creation with passphrases and encourages the use of a password manager. This is accompanied by a tutorial about how to install and set up a particular password manager. It discourages bad password practices and closes with a bonus section about two-factor authentication (2FA) and password strength-checking.
* Cybersecurity self-defense: provides a phishing guide, a phishing quiz, and information on how to avoid dangerous websites. It raises awareness about users' visibility in public, password-protected, and virtual private networks (VPNs).
* Secure communication: introduces learners to the benefits of Eduroam (an international roaming service for higher education institutions), including a tutorial on how to install the related configuration software. It further teaches learners to use the university VPN and information system features for secure file sharing. It links to guides for obtaining personal certificates for e-mail encryption and signing. It closes with a bonus section that links to the university IT services website and further useful applications.
* Incident reporting: provides the learners with a step-by-step guide of how to report a cybersecurity incident, accompanied by relevant contact information and a picture of members of the university CSIRT.
§ EVALUATION METHODOLOGY
To investigate how students perceive the course in different dimensions, we administered an online questionnaire, asking students to evaluate the course, posing questions related to the course outreach, and exploring its impact.
We designed the study in an open manner with no preset hypotheses to capture the unique and disparate issues arising from interacting with the course.
The evaluation presented was done by an independent team not involved in the course design. Yet, this paper is authored by members of both teams – the evaluation and the course design team.
The evaluation ran in spring 2021.
§.§ Setting
To find out more about students' prior exposure to security advice and to confront them with a unified resource of advice, we included the Cybercompass into a homework assignment of a compulsory introductory course to information security and cryptography (ISC). This course is taught yearly at our Faculty of Informatics.
It is mandatory for undergraduate students of computer science in their second year and encompasses between 250 and 300 participants every year.
The ISC course consists of 12–13 lectures with accompanying seminars and 5–6 homework assignments over the course of the semester.
Students receive up to six points for finishing homework assignments.
Each point contributes 1% to the total grade from the course.
To finish the ISC course successfully, students need to achieve at least 50%.
We decided to reward them with 1.5 points (1.5%) for taking the Cybercompass and answering related questionnaires and 3.5 points (3.5%) for creating new educational material that could potentially be used to enhance the course in the future.
§.§ Study Design
We asked students to proceed through the course lesson by lesson.
After each Cybercompass lesson, we had students fill in a questionnaire that encompassed these issues:
* Course evaluation: What is students' overall impression of each lesson? Do they find the lessons useful, comprehensible, or difficult? Do students learn something new in each lesson? Do students use the suggested activities and tools? Do students read the bonus material at the end of each lesson?
* Course outreach: Have students heard about the course before? If so, where did they learn about it?
* Course impact: Does the course change students' view on preparedness and education regarding everyday cybersecurity (cf. <cit.>)? If so, how? Would students recommend the course to people in their circles, such as family, friends, fellow students, and colleagues?
Each questionnaire further contained three questions about the lesson's content to check whether students had worked with the material.
To gather further insights into students' reasoning, the questionnaire featured both closed and open-ended questions.
Students could opt-out of their data being used for research purposes; of 219 students who were enrolled in the course, 138 participated in the study.
Pilot testing before deployment did not indicate any major issues.
The study did not request ethics approval because no personal data of students were collected. The questionnaire is available as a supplementary material to the paper at <cit.>.
§.§ Data Analysis
We conducted statistical analyses to investigate the answers to the closed-ended questions. To do so, we transformed the answer scales into numerical values. Details are described in Section <ref>. Answers to open-ended questions were analyzed using qualitative data analysis techniques. We first performed open coding and identified themes by question and by lesson. Thereafter, we looked for re-occurring patterns across lessons, i.e., across the whole course (axial coding).
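For reference, the quantitative part of this analysis could be reproduced along the following lines; this is a minimal sketch assuming the answers are stored in a long-format table with hypothetical column names (`student`, `lesson`, `rating`, `learned_new`), not the analysis scripts actually used.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

# Long-format answers: one row per student and lesson (column names are assumptions).
answers = pd.read_csv("lesson_answers.csv")  # columns: student, lesson, rating, learned_new

# Repeated measures ANOVA on the Likert ratings across the five lessons.
print(AnovaRM(answers, depvar="rating", subject="student", within=["lesson"]).fit())

# Cochran's Q test on the binary "learned something new" answers (students x lessons).
learned = answers.pivot(index="student", columns="lesson", values="learned_new")
print(cochrans_q(learned.to_numpy()))

# McNemar post-hoc test for one pair of lessons (multiply p by the number of pairs for Bonferroni).
pair = pd.crosstab(learned["Passwords"], learned["Secure communication"])
print("McNemar p (uncorrected):", mcnemar(pair, exact=True).pvalue)
```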
§.§ Computer Science Student Population
In spring 2021, 219 students were enrolled in the course. 176 were male and 43 were female. As most students enter our faculty directly after graduation from high school, we estimate their average age to 20–22 years. 154 students were enrolled into the computer science bachelor program, 52 into the software development program, and 13 into other programs.
University-internal survey reveals that 83.3% of graduates from our faculty work after graduation in the field of information and communication activities, with, for example, programming, consulting, management of computer equipment, and other activities <cit.>.
§ RESULTS
We now interpret the answers of 138 students who participated in the evaluation of the Cybercompass.
Only a few of them (1–5 for each lesson) answered two or more reading-check questions incorrectly; their answers were thus excluded from the respective questionnaires.
§.§ Successes
§.§.§ Cybercompass lessons are perceived as positive and useful.
We had students evaluate their overall impression of each lesson on a Likert scale from 1 (very negative) to 5 (very positive). Each lesson was perceived on average as somewhat positive with a tendency to very positive (Security of devices: M=4.22, SD=.76; Passwords: M=4.26, SD=.83; Self-defense: M=4.36, SD=.74, Secure communication: M=4.17, SD=.82, Incident reporting: M=4.33, SD=.81). Similarly, the usefulness of each lesson was rated on average as somewhat useful with a tendency to very useful (Security of devices: M=3.22, SD=.71; Passwords: M=3.34, SD=.75; Self-defense: M=3.34, SD=.80, Secure communication: M=3.37, SD=.72, Incident reporting: M=3.32, SD=.86). Note that the usefulness scale encompassed four items from 1 (not at all useful), over 2 (slightly useful), to 3 (somewhat useful) and 4 (very useful).
Repeated measures ANOVAs did not indicate any significant difference between the lessons.
§.§.§ Students learned new things in the Cybercompass lessons.
An important indicator for deciding whether to include topics of everyday cybersecurity in a compulsory ISC course is whether it provides students with new insights.
Accordingly, we asked students for each lesson whether they had learned something new (answered on a yes/no scale plus an open-ended text field for detailing the new learning).
Passwords was the lesson with the least amount of new learning (34%), while Security of devices (58%) and Self-defense (57%) ranked moderately, and Secure communication (83%) and Incident reporting (83%) provided the most new insights.
Cochran's Q test revealed that the learning between the lessons differed significantly, Q(df=4, N=130)=123.42, p<.001, η^2_Q=.24.
In particular, Bonferroni-corrected post-hoc tests (McNemar <cit.>) indicated that Passwords differed from all other lessons (p<.001) and that Security of devices and Self-defense both differed from Secure communication and Incident reporting (p<.001).
Open-ended answers revealed that the most salient new things for students concerned: the 3-2-1 back-up rule, anti-theft tracking and device encryption, the breakability and creation of passphrases, the security of public Wi-Fi and the use of VPN in that context, the prevalence of phishing at our university, the possibility to use the file-sharing option in the university information system, the Eduroam configuration tool, the university VPN, the possibility of obtaining a personal certificate for e-mail encryption and signing, and the incident reporting process and contact point at our university.
§.§.§ Cybercompass changes the view of students on everyday cybersecurity.
A non-negligible share of students (42%) reported that the Cybercompass changed their view on education and preparedness in everyday security.
When asked how the course changed their view, many indicated that it raised awareness and encouraged them to take action, as illustrated by the following quotes.
I gained an overall view on everyday security issues, and it made me think more about security on a daily basis.
It changed my view on passwords, I think I [will] start using password manager more and maybe also rework my passwords.
I will definitely check the addresses of emails more often.
§.§.§ Cybercompass lessons encourage action.
The course contains different kinds of activities and recommendations for the use of protective tools.
Many students indicated that they had tried at least one of these activities or tools during the course (Security of devices: 81%, Passwords: 67%, Self-defense: 60%, Secure communication: 64%).
This was especially salient in the open-ended answers for the first three lessons, as illustrated by the following quotes:
The good point, the article encouraged me to dig around more in my smartphone security settings. (Security of devices)
As I was reading this lesson, I sa[i]d to myself more than once, that I have to do this. So e.g. I changed my notification preview and as I am writing this an encryption of my mobile is running. (Security of devices)
The lesson convinced me it is a good idea to set up a password manager. (Passwords)
I also tried the challenge with recognising phishing emails and I was not that successfull, so that surprised me. (Self-defense)
I liked especially the interactive part – phishing quiz which I will definitely remember for a long time. (Self-defense)
§.§.§ Bonus material is highly appreciated.
Students highly appreciated the bonus material provided at the end of each lesson.
For each lesson, a large share of students indicated that they had at least partially read the provided material (Security of devices: 91%, Passwords: 94%, Self-defense: 88%, Secure communication: 87%).
§.§.§ Students are willing to recommend the Cybercompass to others.
As IT-knowledgeable users are an important source of cybersecurity advice for people without an IT background <cit.>, we asked students whether they would recommend the course to people in their circles.
71% said that they would recommend it to members of their family, 68% to their non-university friends, 54% to fellow students, and even 28% would recommend it to work colleagues.
§.§ Challenges
§.§.§ Lessons vary in comprehensibility and difficulty.
We had students evaluate the comprehensibility and difficulty of each lesson. Both were rated on a four-point scale ranging from 1 (not at all) through 2 (slightly) and 3 (somewhat) to 4 (very).
The lessons were perceived as very comprehensible on average, with only Secure communication showing a tendency towards somewhat comprehensible (Security of devices: M=3.65, SD=.58; Passwords: M=3.70, SD=.64; Self-defense: M=3.58, SD=.67, Secure communication: M=3.48, SD=.71, Incident reporting: M=3.74, SD=.68).
A repeated measures ANOVA showed that the comprehensibility differed significantly between the lessons, F(3.36, 432.88)=8.03, p<.001, η^2_part.=.06. In particular, Bonferroni-corrected post-hoc tests revealed that the Secure communication lesson was perceived as less comprehensible than the Security of devices (p=.02), the Passwords (p=.001), and the Incident reporting lesson (p=.001).
On average, the lessons were perceived as not at all difficult with only Secure communication having a tendency towards slightly difficult (Security of devices: M=1.38, SD=.56; Passwords: M=1.22, SD=.41; Self-defense: M=1.47, SD=.65, Secure communication: M=1.66, SD=.78, Incident reporting: M=1.11, SD=.34).
A repeated measures ANOVA showed that the difficulty differed between the lessons, F(3.30, 425.00)=25.60, p<.001, η^2_part.=.17.
In particular, Bonferroni-corrected post-hoc tests revealed that Secure communication was the most difficult lesson, differing significantly from Security of devices (p<.001), Passwords (p<.001), and Incident reporting (p<.001).
§.§.§ Most students have not heard of the course before.
A huge majority (91.7%) of participants had not heard about the Cybercompass before.
This is surprising, given that the course is advertised through several channels within the university environment.
Those who had heard about the course indicated that this was through the university social media channels (LinkedIn and Facebook), the university information system news section, physical bulletins, the website of our school of computer science, a classmate, and an external website.
§.§.§ Information on password managers and 2FA is insufficient.
As part of the Passwords lessons, learners are presented with a tutorial on how to install a specific password manager.
Moreover, in the bonus section of that lesson, there is a brief section on how to set up 2FA in three popular online services.
In the open-ended answers that followed the Likert-scale overall rating for each lesson, many students criticized the information provided on those two topics as insufficient, as illustrated by the following quotes.
I'm not rating [the lesson] `very positive` because I feel like offline password managers should be mentioned.
I've never used a password manager and what would really help me to convince me is recommending a free (or very cheap) password manager, [...]
explaining the process of retrieving the passwords in case I loose access to my account, explaining how and where are the passwords stored [...]
Maybe I would emphasize the use of the two factor authentication more, I think it should be a standard these days, not something 'more'.
I think that at some class in [the ISC course], we were told that having SMS as second factor is not that secure.
§ DISCUSSION AND CONCLUSION
We evaluated an online cybersecurity awareness course with 138 computer science undergraduates – future IT professionals. The students valued the course highly, reporting new learning, changes in their perspectives, and transfer to practice. At the same time, they suggested suitable improvements to the course. Evaluating the course yielded lessons that we processed into recommendations to help designers of similar courses and security educators.
§.§ Discussion
Students learned the most in the lessons on Secure communication and Incident reporting.
The Secure communication lesson familiarized students with the university IT services.
Although students usually take the ISC course in their second year, many were not aware of the offered variety of services for secure communication.
Similarly, many students did not know where to report an incident and what the reporting process should look like.
For quite a few of the students (42%), the Cybercompass even influenced their view on everyday cybersecurity.
This indicates that the course constitutes a valuable resource for increasing awareness.
Students' comments included helpful ideas that we can incorporate into future versions of the course.
For instance, students wished to see more about 2FA and a broader range of password managers covered.
Similarly, students' ratings indicated that the Secure communication lesson is slightly less comprehensible and more difficult than the other lessons and thus needs to be simplified.
This is in line with observations from related work, which assert that especially email encryption and signing are notoriously hard to understand for users <cit.>.
Ratings of the course were mostly positive, yet few students had heard about it before despite the university-wide dissemination efforts.
This suggests that this kind of educational material needs different and additional promotion channels.
We believe that including the course in first-year introductory lectures would be a good way to achieve a broader reach.
Therefore, we plan to investigate how to convince other educators to have the course in the first year.
Our results further showed that students are willing to disseminate the course among fellow students, family members, friends, and colleagues.
Subsequently, students could act as “cybersecurity advocates: individuals who encourage positive change by promoting and providing guidance on security best practices and technologies” <cit.>.
As such, they are indispensable for increasing security in different ecosystems, even outside the university.
§.§ Limitations
As we evaluated the Cybercompass with computer science undergraduates, results can only be generalized to this kind of population.
Future work can evaluate the course with different kinds of populations, such as students from other faculties and employees of the university.
As the data was coded by an experienced analyst, we did not calculate the inter-rater agreement. Thus, the qualitative results should be generalized with caution.
§.§ Recommendations for Course Designers
Use the Cybercompass as an inspiration.
The positive evaluation and successes reported in Section <ref> indicate that the course targets the right topics.
As such, visit the course website and take it as an inspiration.
Encourage action.
Activities and tools included in the Cybercompass, such as a phishing quiz, a tutorial for getting a password manager, and a hint to review smartphone settings, were welcomed by many students.
Similarly, the literature asserts that security advice should be actionable <cit.>.
Include clear calls to action in your course if you want to make a difference to people's security habits.
Include bonus material for curious users.
Our student sample appreciated the bonus course material.
If your audience is as diverse as ours (coming from various schools and institutes, plus staff and the public), add extra material for curious users.
Evaluate dissemination channels and measure reach.
Although the Cybercompass was widely promoted on university online channels, its reach seems to be limited, as fewer than 10% of our sample had heard of the course before.
If you are designing a similar course, make sure to reach out to the intended audience in more creative ways than we did.
Additionally, try to measure the return rate on different channels to evaluate the effectiveness of each channel.
Update the course regularly.
Everyday cybersecurity is prone to change.
With evolving platforms and tools, recommendations should be adjusted at least once per year.
In our case, we appreciated that the computer science students pointed us towards including more information on 2FA and password managers.
We will stay in touch with experts and users to identify trending topics.
§.§ Recommendations for Security Educators
Think of your computer science students as future cybersecurity advocates.
Our results show that an overwhelming majority of computer science students is willing to recommend the Cybercompass to others.
Providing students with a unified source of everyday security advice does not only serve them, but is also likely to increase information security awareness at the university and in society.
As such, think of your students as future advocates – even the advanced students who do not learn something new in the course can pass the knowledge on to others.
Consider including topics of everyday cybersecurity into your information security courses.
Our results show that including the Cybercompass into a compulsory introductory ISC course yielded positive experiences among students.
Students learned new things in all lessons, reported increased awareness, and were encouraged to take action.
Even if curricular constraints are tight, consider including everyday cybersecurity material – at least the Secure communication and Incident reporting lessons, which yielded the most new learning.
This research was supported by the ERDF project CyberSecurity, CyberCrime and Critical Information Infrastructures Center of Excellence (No. CZ.02.1.01/0.0/0.0/16_019/0000822). We would like to thank Martin Ukrop for help with the questionnaire deployment, and the students who participated in the survey. Furthermore, we would like to thank Adam Skrášek and Elizabeth Stobert for helpful comments during the pilot testing.
|
http://arxiv.org/abs/2307.07443v1 | 20230714160642 | Can Large Language Models Empower Molecular Property Prediction? | [
"Chen Qian",
"Huayi Tang",
"Zhirui Yang",
"Hong Liang",
"Yong Liu"
] | cs.LG | [
"cs.LG",
"cs.AI",
"q-bio.QM"
] |
Can Large Language Models Empower Molecular Property Prediction?
Chen Qian, Huayi Tang, Zhirui Yang, Hong Liang, Yong Liu
August 12, 2023
============================================================================================
Molecular property prediction has gained significant attention due to its transformative potential in multiple scientific disciplines. Conventionally, a molecular graph can be represented either as graph-structured data or as SMILES text.
Recently, the rapid development of Large Language Models (LLMs) has revolutionized the field of NLP.
Although it is natural to utilize LLMs to assist in understanding molecules represented by SMILES, the exploration of how LLMs will impact molecular property prediction is still in its early stage.
In this work, we advance towards this objective through two perspectives: zero/few-shot molecular classification, and using the new explanations generated by LLMs as representations of molecules.
To be specific, we first prompt LLMs to do in-context molecular classification and evaluate their performance. After that, we employ LLMs to generate semantically enriched explanations for the original SMILES and then leverage them to fine-tune a small-scale LM for multiple downstream tasks.
The experimental results highlight the superiority of text explanations as molecular representations across multiple benchmark datasets, and confirm the immense potential of LLMs in molecular property prediction tasks.
Codes are available at <https://github.com/ChnQ/LLM4Mol>.
§ INTRODUCTION
As a cutting-edge research topic at the intersection of artificial intelligence and chemistry, molecular property prediction has drawn increasing interest due to its transformative potential in multiple scientific disciplines such as virtual screening, drug design and discovery <cit.>, to name a few.
Based on this, the effective modeling of molecular data constitutes a crucial prerequisite for AI-driven molecular property prediction tasks <cit.>.
In the previous literature, on one hand, molecules can be naturally represented as graphs with atoms as nodes and chemical bonds as edges. Therefore, Graph Neural Networks (GNNs) can be employed to handle the molecular data <cit.>.
Simultaneously, the other line of research explores the utilization of NLP-like techniques to process molecular data <cit.>, since in many chemical databases <cit.>, molecular data is commonly stored as SMILES (Simplified Molecular-Input Line-Entry System) <cit.> strings, a textual representation of molecular structure following strict rules.
In recent years, the rapid development of LLMs has sparked a paradigm shift and opened up unprecedented opportunities in the field of NLP <cit.>.
Those models demonstrate tremendous potential in addressing various NLP tasks and show surprising abilities (i.e., emergent abilities <cit.>).
Notably, ChatGPT <cit.> is the state-of-the-art AI conversational system developed by OpenAI in 2022, which possesses powerful text understanding capabilities and has been widely applied across various vertical domains.
Note that, since molecules can be represented as SMILES sequences, it is natural and intuitive to employ LLMs with rich world knowledge to handle molecular data.
For instance, as depicted in Figure <ref>, given the SMILES string of a molecule, ChatGPT can accurately describe the functional groups, chemical properties, and potential pharmaceutical applications of the given molecule. We believe that such textual descriptions are meaningful for assisting in molecular-related tasks.
However, the application of LLMs in molecular property prediction tasks is still in its primary stages.
In this paper, we move towards this goal from two perspectives: the zero/few-shot molecular classification task, and generating new explanations for molecules from their original SMILES.
Concretely, inspired by the astonishing in-context learning capabilities <cit.> of LLMs, we first prompt ChatGPT to perform in-context molecular classification.
Then, we propose a novel molecular representation called Captions as new Representation (CaR), which leverages ChatGPT to generate informative and professional textual analyses for SMILES strings. These textual explanations can then serve as new representations for molecules, as illustrated in Figure <ref>.
Comprehensive experimental results highlight the remarkable capabilities and tremendous potential of LLMs in molecular property prediction tasks.
We hope this work can shed new light on model design for molecular property prediction tasks empowered by LLMs.
§ METHOD
In this section, we will elaborate on our preliminary exploration of how LLMs can serve molecular property prediction tasks.
Zero/Few-shot Classification.
With the continuous advancement of LLMs, In-Context Learning (ICL) <cit.> has emerged as a new paradigm for NLP.
Using a demonstration context that includes several examples written in natural language templates as input, LLMs can make predictions for unseen input without additional parameter updates <cit.>.
Therefore, we attempt to leverage the ICL capability of ChatGPT to assist in molecular classification tasks through well-designed prompts, as shown in Figure <ref>. This paradigm makes it much easier to incorporate human knowledge into LLMs by changing the demonstrations and templates.
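For illustration, the in-context classification step can be sketched as follows; this is a minimal example assuming the pre-v1.0 `openai` Python client and placeholder SMILES demonstrations, not the exact prompt template of Figure <ref>.

```python
import openai  # assumes an API key is set in the OPENAI_API_KEY environment variable

def classify_smiles(smiles, demos):
    """Few-shot molecular classification via in-context learning (names are illustrative)."""
    demo_text = "\n".join(f"SMILES: {s}\nLabel: {y}" for s, y in demos)
    prompt = (
        "You are a chemistry assistant. Classify the molecule given by its SMILES string. "
        "Answer with 1 (positive) or -1 (negative).\n\n"
        f"{demo_text}\n\nSMILES: {smiles}\nLabel:"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()

# Two made-up demonstrations drawn from a labeled training split.
demos = [("CCO", -1), ("c1ccc2cc3ccccc3cc2c1", 1)]
print(classify_smiles("C1=CC=CC=C1", demos))
```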
Captions as New Representations. With rich world knowledge and remarkable reasoning ability, LLMs have been widely applied in various AI domains <cit.>. We further reckon that LLMs can greatly contribute to the understanding of molecular properties. Taking a commonly used dataset in the field of molecular prediction as a toy example, PTC <cit.> is a collection of chemical molecules labeled with their carcinogenicity in rodents.
We conducted a keyword search using terms such as `toxicity', `cancer', and `harmful' to retrieve all explanations generated by ChatGPT for the original SMILES-format PTC dataset.
Interestingly, we observed that the majority of these keywords predominantly appeared in entries labeled as -1.
This demonstrates that ChatGPT is capable of providing meaningful and distinctive professional explanations for the raw SMILES strings, thereby benefiting downstream tasks.
Towards this end, we propose to leverage ChatGPT to understand the raw SMILES strings and generate textual descriptions that encompass various aspects such as functional groups, chemical properties, pharmaceutical applications, and beyond.
Then, we fine-tune a pre-trained small-scale LM (RoBERTa <cit.>) on these descriptions for various downstream tasks, such as molecular classification and property prediction.
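A minimal sketch of this downstream fine-tuning step is given below, assuming the ChatGPT-generated captions and task labels have already been collected; the tiny placeholder data, hyperparameters, and helper names are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
from torch.utils.data import Dataset
from transformers import (RobertaTokenizerFast, RobertaForSequenceClassification,
                          Trainer, TrainingArguments)

class CaptionDataset(Dataset):
    """Wraps ChatGPT-generated molecule captions and their task labels."""
    def __init__(self, captions, labels, tokenizer):
        self.enc = tokenizer(captions, truncation=True, padding=True, max_length=512)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# Placeholder captions; in practice these come from prompting ChatGPT on the SMILES strings.
train_captions = ["This molecule contains a nitro group and aromatic rings ...",
                  "This molecule is a simple aliphatic alcohol ..."]
train_labels = [1, 0]

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
train_ds = CaptionDataset(train_captions, train_labels, tokenizer)

args = TrainingArguments(output_dir="car_roberta", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```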
§ EXPERIMENTS
§.§ Setup
Datasets.
To comprehensively evaluate the performance of CaR, we conduct experiments on 9 datasets spanning molecular classification tasks and molecular regression tasks.
1) 3 classification datasets from TUDataset <cit.>: MUTAG, PTC, AIDS.
2) 4 classification datasets from MoleculeNet <cit.>: Sider, ClinTox, Bace, BBBP.
3) 2 regression datasets from MoleculeNet: Esol, Lipophilicity.
Baselines.
We compare CaR with the following baselines:
1) GNN-based methods, GCN <cit.>, GIN <cit.>, ChebyNet <cit.>, D-MPNN <cit.>, GraphMVP <cit.>, InfoGraph <cit.>, G-Motif <cit.>, Mole-BERT <cit.>.
2) SMILES-based methods, ECFP <cit.>, SMILES-Transformer <cit.>, MolR <cit.>, ChemBERTa <cit.>, MolKD <cit.>.
Settings.
For all datasets, we perform an 8/1/1 split for train/validation/test and report the best average performance (and standard deviation) on the test fold.
Specifically, we perform 10-fold cross-validation (CV) with a fixed holdout test set for randomly split datasets, and run experiments on scaffold-split datasets with 5 random seeds.
Small-scale LMs are implemented using the Hugging Face transformers library <cit.> with default parameters.
§.§ Main Results
How does ChatGPT perform on zero/few-shot molecular classification? Figure <ref> illustrates the few-shot learning capabilities of ChatGPT, traditional GNNs, and ECFP on two datasets.
We observe that ChatGPT underperforms traditional methods on MUTAG, whereas the opposite holds for PTC. Furthermore, as shown in Figure <ref>, ChatGPT's performance trends upward on both datasets as the number of shots increases.
These results indicate that ChatGPT possesses a certain level of few-shot molecular classification capability.
However, throughout the experiments, we find that ChatGPT's classification performance was not consistent for the same prompt, and different prompts also have a significant impact on the results. Therefore, it is crucial to design effective prompts that incorporate rational prior information to achieve better zero/few-shot classification.
How does CaR perform compared with existing methods on common benchmarks?
The main results for comparing the performance of different methods on several benchmark datasets are shown in Table <ref> and Table <ref>. From the tables, we obtain the following observation:
1) Under the random split setting, CaR achieves superior results on almost all datasets, whether in classification or regression tasks. Remarkably, CaR exhibits a significant performance improvement of 53% compared to traditional methods on the PTC dataset.
2) Under scaffold splitting, one can observe that CaR demonstrates results comparable to, though slightly below, the other models on Sider and Bace; in the Lipo regression task, CaR falls short of the GNNs; however, CaR achieves notable performance improvements on the remaining datasets.
These observations indicate LLMs' effectiveness and potential in enhancing molecular predictions across various domains.
Convergence Analysis. In Figure <ref>, we plot the ROC-AUC and loss curves on three datasets to verify CaR's convergence.
One can observe that the loss decreases rapidly in the first several steps and then continues to decrease, with fluctuations, until convergence. Correspondingly, the ROC-AUC curve exhibits the inverse trend. These results demonstrate the convergence of CaR.
Replace Small-scale LMs.
To further validate the effectiveness of CaR, we fine-tune two additional pre-trained LMs (DeBERTa <cit.>, adaptive-lm-molecules <cit.>) and also train a non-pretrained DeBERTa from scratch. The results are plotted in Figure <ref>. One can observe that the different pre-trained LMs exhibit similar performance and generally outperform the LM trained from scratch, which validates the effectiveness of CaR.
§ CONCLUSION
In this work, we explore how LLMs can contribute to molecular property prediction from two perspectives: in-context classification and generating new representations for molecules.
This preliminary attempt highlights the immense potential of LLMs in handling molecular data. In future work, we plan to focus on more complex molecular downstream tasks, such as generation tasks and 3D antibody-binding tasks.
§ LIMITATIONS
Lack of Diverse LLMs. In this work, we primarily utilized ChatGPT as a representative of LLMs. However, the performance of other LLMs on molecular data, such as the more powerful GPT-4 <cit.> or domain-specific models like MolReGPT <cit.>, has yet to be explored.
Insufficient Mining of Graph Structures. While we currently model molecular prediction tasks solely as NLP tasks, we acknowledge the crucial importance of the graph structure inherent in molecules for predicting molecular properties. How to further enhance the performance of our framework by mining graph structured information is worth exploring.
Beyond SMILES. In this work, we focus on small-molecule data that can be represented as SMILES strings. However, in practical biochemistry domains, there is a wide range of data, such as proteins, antibodies, and other large molecules, that cannot be represented using SMILES strings. Therefore, designing suitable sequential representations of large 3D-structured molecules for LLMs is an important and urgent research direction.
§ N-SHOT RESULTS
|
http://arxiv.org/abs/2307.06187v1 | 20230712142646 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | [
"Nathalia Nascimento",
"Paulo Alencar",
"Donald Cowan"
] | cs.MA | [
"cs.MA",
"cs.AI",
"cs.CL"
] |
Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems
Nathalia Nascimento, Paulo Alencar, Donald Cowan
David R. Cheriton School of Computer Science
University of Waterloo (UW)
Waterloo, Canada
{nmoraesd, palencar, dcowan} @uwaterloo.ca
July 2023
========================================================================================================================================================================================================
In autonomic computing, self-adaptation has been proposed as a fundamental paradigm to manage the complexity of multiagent systems (MASs). This is achieved by extending a system with support to monitor and adapt itself to address specific concerns of interest.
Communication in these systems is key given that in scenarios involving agent interaction, it enhances cooperation and reduces coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of agent communication within MASs is not without challenges. In this sense, the interplay between self-adaptive systems and effective communication is crucial for future MAS advancements.
In this paper, we propose the integration of
large language models (LLMs) such as GPT-based technologies into multiagent systems.
We anchor our methodology on the MAPE-K model, which is renowned for its robust support in monitoring, analyzing, planning, and executing system adaptations in response to dynamic environments. We also present
a practical illustration of the proposed approach, in which we implement and assess a basic MAS-based application. The approach significantly advances the state-of-the-art of self-adaptive systems by proposing a new paradigm for MAS self-adaptation of autonomous systems based on LLM capabilities.
self-adaptation, software development, multiagent systems, MAPE-K, large language models, general purpose technologies.
§ INTRODUCTION
In autonomic computing, the development of self-adaptive multiagent systems (MASs) is known to be a complex task <cit.>. Self-adaptation is a well-known approach used to manage the complexity of these systems as it extends a system with support to monitor and adapt itself to achieve a concern of interest <cit.>. For example, by adjusting to changing scenarios, these systems can optimize resource allocation or become fault tolerant by expressing high-level goals as utility functions. Communication is key in this regard. Even with basic communication constructs, simple agents can develop robust collective behaviors <cit.> <cit.>. Conversely, complex tasks often trigger the emergence of adaptive behaviour, leading to self-organized, collaborative agents. In advanced scenarios involving agent interaction, these communication systems enhance cooperation and reduce coordination challenges by enabling direct, clear information exchange <cit.>. The interplay of self-adaptive systems and effective communication is crucial for future autonomic MAS advancements.
However, improving the expressiveness of agent communication within MASs is not without challenges.
The increased complexity of these systems introduces synchronization overheads, thus necessitating careful selection of the approach best suited to the problem at hand <cit.>. This has led researchers to often opt for simple communication constructs, allowing robots to independently develop their own communication structures to address specific issues. Despite the inherent limitations of such an approach, the rapid advancement of Large Language Models (LLMs) and General Purpose Technologies (GPTs)<cit.> <cit.> <cit.> provides a silver lining. These generative AI-based technologies allow for the integration of highly advanced conversational communication systems into software or hardware agents while using fewer resources.
In this paper, we propose a paradigm that integrates large language models (LLMs) such as GPT-based technologies into multiagent systems. By exploiting the rich capabilities of these advanced communication systems, we delve into the hypothesis of equipping autonomous agents with more sophisticated tools from the onset. We are particularly interested in the emergent abilities and capabilities these agents may exhibit when pre-equipped with such a powerful communication system. The primary input to these agents would consist of sensor data and communication from neighboring agents. In comparison with our prior approaches, where agents evolved their own communication systems through evolutionary neural network algorithms <cit.>, the possibility we are exploring represents a paradigm shift in the agents' capabilities from the very inception. Will these agents still need to evolve and adapt their communication methods, or will they be ready to execute complex tasks, leveraging the advanced communication systems inherent in the LLMs?
In our work, we present an innovative approach for developing self-adaptive agents using large language models (LLMs) within multi-agent systems (MASs). We anchor our methodology on the MAPE-K model, which is renowned for its robust support in monitoring, analyzing, planning, and executing system adaptations in response to dynamic environments. With this, we integrate GPT-4 technology, a cutting-edge LLM, enabling agents to adapt to more complex tasks and react to evolving situations intelligently. This, in turn, empowers our agents with improved communicative capabilities and adaptability.
The paper is structured as follows. Section 2 provides some research background and related work. Section 3 presents our approach, which relies on an LLM-based Mape-K model. Section 4 presents a practical illustration of our approach, in which we implement and assess a basic MAS-based application. This experiment, presented in Section 3, exemplifies the application of our proposed approach. Section 5 concludes with summary remarks and future perspectives.
§ BACKGROUND AND RELATED WORK
§.§ LLM and GPT
Large Language Models (LLMs) and Generative Pretrained Transformers (GPTs) are integral parts of AI's Natural Language Processing (NLP) realm. While LLM is a broad category encompassing models that predict word sequences and can be used for various tasks such as text generation and translation, GPT, developed by OpenAI, is a specific LLM type. GPT, renowned for generating text akin to human writing, undergoes extensive pre-training before fine-tuning for specialized tasks. In essence, GPT is a subclass of LLMs, but not all LLMs are GPT models. Other prominent LLM examples include BERT, RoBERTa, and XLNet.
A GPT solution comprises several key components, such as a pretrained neural network model, a fine-tuning component to improve the model for specific tasks, an inference engine that uses the fine-tuned GPT model to generate responses or predictions (i.e., the inference engine feeds input data into the model and processes the model's output), and a data pipeline that handles the flow of data in and out of the model <cit.>.
§.§ Self-adaptive Systems: MAPE-K control loop
The IBM control loop <cit.>, introduced in 2004, is a well-known architecture <cit.> for fostering autonomy and self-awareness in systems. The loop's framework, referred to as MAPE-K (Monitoring, Analyzing, Planning, Executing, and Knowledge), serves as a foundation for expanding self-adaptive and self-organizing systems <cit.>. The Monitoring stage involves collecting and associating data from the system's environment using specialized sensory functions. The Analyzing phase follows, where this monitored data is evaluated to determine necessary responses based on the environmental changes detected. Next, in the Planning stage, this analysis is used to narrow down a specific set of actions intended to reach a desired state within the system. Finally, the chosen actions are implemented in the Executing stage via effectors.
Several researchers have suggested incorporating the MAPE-K loop into multiagent systems <cit.><cit.> and have developed novel autonomic methods, either integrating or modifying the MAPE-K structure <cit.><cit.><cit.><cit.>. Nascimento and Lucena, for instance, proposed substituting the 'analyze' and 'plan' stages with a neural network. In their model, sensor inputs feed the neural network, which in turn informs the agent's effector. The MAPE-K loop serves as a benchmark in this field.
§ APPROACH: LLM-BASED MAPE-K MODEL
In our research, we introduce an innovative architecture that integrates LLMs, specifically GPT-4, into multi-agent systems (MASs). Each agent within the MAS employs this technology in its control loop, creating an environment where every autonomous entity communicates and self-adapts using natural language processing. Our methodology is grounded in an extension of the MAPE-K model, renowned for facilitating adaptivity in dynamically changing environments.
As depicted in Figure <ref>, our proposed architecture modifies the traditional MAPE-K model, integrating the GPT-4, a state-of-the-art LLM, into the agent's control loop, enabling agents to adapt to and execute complex tasks while exhibiting advanced communication capabilities. This figure represents a MAS where each agent is autonomously managed through our adapted MAPE-K loop, comprising two core components: the managed element and the autonomic agent.
The managed element comprises the environment with which the agent interacts, encompassing a range of sensors and actuators that monitor and control environmental elements. For instance, in a smart traffic application scenario, the managed element includes the monitored environmental factors (e.g., the number of cars and pedestrians) and the elements controllable by the agent (e.g., traffic lights).
The autonomic agent, which is represented with more details in Figure <ref>, performs three primary tasks: 1) Monitor - this process collects data from the agent's sensors, processes the current state of the agent, and compiles messages from other agents. The consolidated information is transformed into a GPT-compatible prompt. If the agent receives messages from multiple agents, these messages are concatenated into a single prompt for each iteration; 2) GPT - this phase encapsulates the activities of analyze, plan, and knowledge. It operates the fine-tuned GPT model, with the pretrained neural network model and inference engine, to generate responses or predictions, handling the data flow in and out of the model; and 3) Execute - the GPT model's output is translated into an actionable command for the agent.
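The following Python sketch illustrates one pass through this loop; our actual implementation is built on JADE in Java, so the function names (`monitor`, `gpt_step`, `execute`), the prompt wording, and the stand-in objects are illustrative assumptions, and the snippet assumes the pre-v1.0 `openai` client.

```python
import openai  # assumes an API key in the OPENAI_API_KEY environment variable

def monitor(agent_state, sensor_readings, inbox):
    """Consolidate sensor data, agent state, and peer messages into one GPT prompt."""
    messages = " | ".join(inbox) if inbox else "no messages"
    return (f"You are autonomous agent {agent_state['id']}.\n"
            f"Current state: {agent_state}\nSensor readings: {sensor_readings}\n"
            f"Messages from other agents: {messages}\n"
            "Decide your next action and reply with a single command.")

def gpt_step(prompt, history):
    """Analyze, plan, and knowledge collapsed into one LLM call over the kept history."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0.7,
        messages=history + [{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

def execute(command, actuators):
    """Translate the model output into an actuator call (placeholder effector)."""
    actuators.send(command)

# One control-loop iteration (sensor, inbox, and actuator objects are stand-ins
# for the managed element):
#   prompt = monitor(agent_state, read_sensors(), inbox)
#   execute(gpt_step(prompt, history), actuators)
```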
The intriguing aspect of this approach is its inherent adaptability. Each agent not only responds effectively to changes within its environment but also benefits from the advanced analytical capabilities of GPT-4. With LLM embedded into each agent, we posit that unique behaviors might emerge from such MAS. Therefore, our aim is to delve into the exploration of the potential behaviors within these LLM-embedded autonomous agents as they interact and self-adapt.
§ APPLICATION SCENARIO
In order to validate our approach for developing self-adaptive agents that leverage large language models (LLMs) within multiagent systems, we constructed a simple yet illustrative multiagent application. Our scenario, inspired by conventional examples found in multi-agent systems literature, consists of an online book marketplace, where autonomous agents act as buyers and sellers on behalf of users.
As shown in Figure <ref>, our application mimics an e-commerce marketplace that facilitates book trading, where each seller possesses identical books but has the liberty to dictate their selling price. Conversely, each buyer's objective is to purchase a single book at the lowest possible price, creating a competitive environment in which the seller who accrues the most profit and the buyer who spends the least emerge as the winners.
Our application was developed using the JAVA Agent Development Framework (JADE) <cit.>, an instrumental platform known for its ease in multi-agent systems creation. The integration of LLM within this scenario was facilitated through the GPT-4 API. At the onset of the simulation, each agent receives an input prompt, as illustrated in Figure <ref>. In our study, we deliberately set the temperature parameter of the model to 0.7. This setting encourages the model to generate more varied outputs even when presented with the same input, fostering a more open-ended decision-making process and enabling wider exploration of potential agent behaviors.
This construct provides an interesting platform to investigate the behavior, decision-making abilities, and interaction patterns among LLM-embedded autonomous agents in a competitive environment.
§.§ Results and Discussion
The agents displayed decision-making and reasoning skills. For instance, as shown in Figure <ref>, a buyer chose to negotiate with the cheapest of three seller options, attempting to bargain.
We conducted multiple executions of this application, adjusting the initial prompts for sellers and buyers until we found a configuration that resulted in successful simulation runs. The specific prompt used for the initial sellers' setup is shown in Figure <ref>, while the prompt for buyers is displayed in Figure <ref>. In previous executions, the prompts provided more freedom for the agents to act. Additionally, we did not indicate the iteration number to the agents, causing them to continuously seek better prices throughout the simulation rather than focusing on completing the purchase. However, after incorporating the iteration number into the prompt messages, the agents started expressing concerns about time constraints. As depicted in Figure <ref>, seller Agent1 confirmed a sale to one of the buyers in the final iteration. We requested the agents to explain their decisions at the end of the simulation, and Agent1 provided the following argument:
From the given information, it seems that the price of my book has been fluctuating during the conversation. As we're in the final iteration, I need to confirm a sale to avoid losing. Agent4 has offered to buy the book for $18 twice, while Agent5 has just asked for the price again and previously offered $25. Considering the urgency to sell and the risk of losing, I decided to confirm the sale with Agent4 who demonstrated a clear intention to buy.
Interestingly, despite receiving identical prompts, the agents displayed diverse behaviors during the simulations. In one instance, while most sellers chose to set prices and wait for buyers, one seller decided to contact another seller. This interaction involved the seller accessing another seller's information to check their price. Additionally, there were cases where seller agents sent messages to themselves, pretending to be clients, resulting in self-generated purchase confirmations, as illustrated in Figure <ref>. Although this behavior was unexpected and undesired, it validates the effectiveness of the approach in facilitating the emergence of new behaviors.
We encountered challenges during the experiment, primarily due to the unavailability of messaging history through the GPT API, as it is limited to the ChatGPT platform. As a result, we had to maintain the interaction history ourselves and use it as the system's prompt for subsequent simulations, albeit in a simplified manner due to token limitations in GPT-4. Before incorporating the previous prompts into the agents' input, they were not able to maintain consistent personas during the simulation, instead acting solely based on the prompt in each iteration (e.g., behaving like an agent from the movie “Mission Impossible 007").
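One simple way to keep such a local history within the token limit is a bounded buffer that always retains the persona/system prompt plus only the most recent exchanges; the sketch below is an assumption about how this could be organized, not the code used in our simulations.

```python
class HistoryBuffer:
    """Keeps the persona/system prompt plus the last `max_turns` exchanges."""
    def __init__(self, system_prompt, max_turns=10):
        self.system = {"role": "system", "content": system_prompt}
        self.turns = []
        self.max_turns = max_turns

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})
        self.turns = self.turns[-self.max_turns:]  # drop the oldest turns beyond the window

    def as_messages(self):
        return [self.system] + self.turns

# Example: a seller persona that persists across iterations.
history = HistoryBuffer("You are seller Agent1 in a book marketplace; each prompt states the iteration number.")
history.add("user", "Iteration 1: Agent4 offers $18 for your book.")
history.add("assistant", "I will wait for a better offer.")
print(history.as_messages())
```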
Considering the observed constraints and the wide range of behavioral patterns, it is evident that our proposed LLM-based MAS approach would benefit from the inclusion of auxiliary local planning and knowledge components to refine the decision-making scope. Firstly, we need to find an alternative approach for creating a local history, a memory structure that can be used to support the decision-making process and be synthesized as prompts for the GPT. The local planning component could provide constraints to guide the agents' choices, such as instructing them to respond to messages from specific identified agents instead of making arbitrary decisions. When faced with multiple output options, a discerning selection process should be implemented. In this regard, we envision the GPT serving as an aid to a decision-making module, leveraging additional structures like neural networks or state machines to make more informed decisions.
§ CONCLUSION AND FUTURE WORK
Integrating Large Language Models (LLMs) like GPT-3 or GPT-4 into multiagent systems is a novel and emerging field. The application of such models in this area could potentially revolutionize how agents understand, learn from, and interact with their environment and other agents. The potential of using natural language processing capabilities of LLMs could lead to more sophisticated communication between agents, improved adaptability in dynamic environments, and more robust problem-solving capabilities. Furthermore, LLMs can serve as a common platform for diverse agents to interact, facilitating heterogeneous multi-agent systems. However, this integration also brings up significant challenges, such as the computational overhead of LLMs, the interpretability of their decisions, and ethical considerations.
Our approach presents the integration of Large Language Models (LLMs) within multi-agent systems (MASs) to develop self-adaptive agents.
To evaluate the proposed approach, we used a simplified marketplace scenario as a testbed, with autonomous agents tasked to buy and sell books. These agents, each possessing an embedded LLM, were observed for decision-making and emergent behavior, exploring the potential for self-adaptation.
Future work includes the following topics: (i) non-shared generative AI models; (ii) other application scenarios; and (iii) human-in-the-loop interactions.
§.§ Non-shared generative AI models
In future research, a crucial step will be creating distinct OpenAI accounts for each agent. Presently, all agents share a single account, leading to potential shared knowledge among them. Despite each agent having a specific ID and acting independently, we can't fully ensure that one agent's decisions are not influencing the responses produced by the GPT-4 model for another agent. By having distinct accounts, we minimize the potential for unintentional interplay between agents via the shared AI model, ensuring that agents can only interact with each other through environmental modifications or direct communication exchanges. This allows for a more accurate assessment of each agent's adaptability and performance.
§.§ Other Application Scenarios
As part of our future endeavors, we plan to delve into other application scenarios, including the replication of experiments involving evolutionary robotics where agents interact for mutual evolution. Traditionally, in these experiments, agents needed to undergo an evolution process via an evolutionary neural network algorithm to develop their own communication system and solve problems effectively. However, we postulate that equipped with a powerful communication system, like the GPT-4, these robots might not need to go through this lengthy evolutionary process. In this context, consider a scenario where each robot is equipped with sensors, actuators, and a cloud-based GPT-4 communication system, thereby eliminating the need for evolution. This bypasses the centuries-long process of selecting the best behavior, allowing for quicker and more efficient problem-solving.
In addition to this, we aim to recreate the Internet of Things experiments proposed by Nascimento and Lucena <cit.>, utilizing the principles of evolutionary robotics. These experiments promise to explore novel territories of interaction and problem-solving, thereby pushing the boundaries of what self-adaptive LLM multi-agent systems can achieve.
§.§ Human-in-the-loop interactions
Human-in-the-loop interactions present a compelling avenue for enhancing the performance and usability of LLM-based multiagent systems.
The first potential approach could be centered around enabling humans to influence the self-adaptive behaviors of agents directly. For instance, through a conversational interface, humans could suggest new behaviors, provide high-level goals, or specify certain constraints or preferences. This would allow the system to incorporate human intuition and expertise into the adaption process, potentially leading to more effective or desirable outcomes.
Second, a feedback loop could be established, where the system generates understandable reports about its observations, decisions, or actions (like data collected from sensors or outcomes from self-adaptive behaviors). This transparency can help humans gain a better understanding of the system's workings, build trust in the system's actions, and offer a basis for improved system tuning or personalization.
Lastly, in relation to our MAPE-K-based model, one aspect that can be improved is the level of interpretability of the knowledge component. While the model provides a structured way of handling self-adaptivity, it might be difficult for a human to understand the complex rules or relationships that dictate agent behavior. Making these more interpretable, through natural language explanations, could significantly enhance human-machine interaction, enabling humans to work more effectively with the LLM-based multiagent system.
IEEEtran
|
http://arxiv.org/abs/2307.04550v2 | 20230710132923 | Gradient Surgery for One-shot Unlearning on Generative Model | [
"Seohui Bae",
"Seoyoon Kim",
"Hyemin Jung",
"Woohyung Lim"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
[
Gradient Surgery for One-shot Unlearning on Generative Model
equal*
Seohui Baecomp
Seoyoon Kimcomp
Hyemin Jungcomp
Woohyung Limcomp
compLG AI Research, Seoul, South Korea
Seohui [email protected]
Woohyung [email protected]
deep unlearning, generative model, privacy
0.3in
]
Recent right-to-be-forgotten regulations have generated considerable interest in unlearning pre-trained machine learning models. To approximate the straightforward yet expensive approach of retraining from scratch, recent machine unlearning methods unlearn a sample by updating the weights so as to remove its influence on them. In this paper, we introduce a simple yet effective approach to remove the influence of data on a deep generative model. Inspired by works in multi-task learning, we propose to manipulate gradients to regularize the interplay of influence among samples, projecting gradients onto the normal plane of the gradients to be retained. Our work is agnostic to the statistics of the removed samples, outperforms existing baselines, and provides the first theoretical analysis of unlearning a generative model.
§ INTRODUCTION
Suppose a user wants to remove his or her face image from everywhere in a facial image generation application, including the database and the generative model trained on it. Is expensive retraining from scratch the only solution for this kind of request? As the use of personal data for training the machine learning models behind online services has increased, meeting individual demands for privacy and complying with legislation such as the General Data Protection Regulation (GDPR) has become unavoidable for ML service providers. Such `Right-To-Be-Forgotten (RTBF)' requests may arrive once or in series, may scale from a single feature to a number of tasks, and may query a single instance or many.
A straightforward solution for unlearning a single data point is to retrain the generative model from scratch without the data of interest. This approach, however, is intractable in practice given the size and complexity of the latest generative models <cit.> and the continual stream of removal requests.
Unlearning therefore aims to approximate this straightforward yet expensive retrain-from-scratch solution in a time- and computation-efficient manner. First-order, data-influence-based approximate unlearning is currently considered the state-of-the-art approach to unlearning machine learning models in general. Grounded in the notion of data influence <cit.>, a simple one-step Newton update certifies a sufficiently small bound on the gap to retraining from scratch <cit.>. Nonetheless, these relaxations are infeasible for non-convex deep neural networks (including generative models), where the gap is not certifiably bounded and computing the inverse of the Hessian is intractable. Several recent works have also confirmed that these relaxed alternatives perform poorly on deep neural networks <cit.>, and unlearning on generative models has not been explored yet.
Contribution In this work, we propose a novel one-shot method for unlearning samples from a pre-trained deep generative model. Relaxing the definition of the influence function on parameters in machine unlearning <cit.>, we focus on the influence of a single data point on the test loss of the others and propose a simple and cost-effective method that minimizes this inter-dependent influence to approximate retraining from scratch. We summarize our contributions as follows:
* We propose to annul the influence of samples on generations with simple gradient manipulation.
* Agnostic to removal statistics, and thus applicable to any removal such as a single data point, a class, or a data feature.
* Grounded in a theoretical analysis bridging standard machine unlearning to generative models.
§ GRADIENT SURGERY FOR ONE-SHOT DATA REMOVALS ON GENERATIVE MODEL
Notations Let D={x_i}_i=1^N⊆𝒳 be the training data where x_i ∈𝒳 is input. Let D_f ⊆ D be a subset of training data that is to be forgotten (i.e. forget set) and D_r = D ∖ D_f be remaining training data of which information we want to retain. Recall that the goal of unlearning is to approximate the deep generative model retrained from scratch with only D_r, which we denote as f_θ^* parameterized by θ^*. Then, our goal is to unlearn D_f ⊆ D from a converged pre-trained generator f_θ̂ by updating the parameter θ̂→θ^-, where θ^- represents the updated parameters obtained after unlearning.
Proposed method
Given a generative model that models the distribution of the training data p(D), a successfully unlearned model that unlearns D_f would be one that approximates p(D_r), the distribution of D_r, as if it had never seen D_f. The only case where the unlearned model generates samples similar to x∈ D_f is when p(D_f) and p(D_r) happen to be very close from the beginning. Under this goal, a straightforward objective, given a pre-trained model approximating p(D), is to make the generated output deviate from p(D_f), which can be formulated as follows:
max_θ𝔼_(x,y)∼ D_fℒ(θ, x, y)
where ℒ denotes training loss (e.g. reconstruction loss).
Meanwhile, suppose we could define the influence of a single data point on the weight parameters and on the generation result. Then this data point could be unlearned simply by updating the weight parameters in the direction that removes its influence. Toward this, we start by defining the data influence on the weight parameters and approximating it in a feasible form, as introduced in <cit.>:
Given upweighting z by some small ϵ and the new parameters θ̂_ϵ,z≜argmin_θ∈Θ1/n∑_i=1^nℒ(z_i, θ) + ϵℒ(z,θ), the influence of upweighting z on the parameter θ̂ is given by
I_up,param(z) ≜ dθ̂_ϵ,z/dϵ|_ϵ=0 = -H_θ̂^-1∇_θ L(z,θ̂)
where H_θ̂ = 1/n∑_i=1^n∇_θ^2 L(z_i, θ̂) is the Hessian and is positive definite (PD) by assumption.
By forming a quadratic approximation to the empirical risk around θ̂, the data influence on the weight parameters is formulated as a single Newton step (see details in the Appendix of <cit.>), which is consistent with the objective in Equation <ref>. Although numerous works have verified that this data-influence-based approach works well in shallow, discriminative models <cit.>, we cannot apply it directly to our generative model due to intractable computation and the lack of guaranteed bounds.
To address this problem, we re-purpose our objective to minimize the data influence on generation. Grounded in recent works <cit.>, we find that we can achieve this on a generative model simply by diminishing the gradient conflict as follows:
Reducing the influence of samples z∈ D_f in training data with regard to test loss is formulated as:
I^'_up,loss(D_f,z') → 0,
which is equivalent to
∇_θℒ(z',θ̂)^T ∑_z ∈ D_f∇_θℒ(z,θ̂) → 0
where z'∈ D_r in our scenario.
Informally, we can achieve this by alleviating the conflict between the two gradients ∇_θℒ(z',θ̂) and ∇_θℒ(z,θ̂), i.e., by driving their inner product toward zero. This is reminiscent of classic gradient manipulation techniques for conflicting gradients in the multi-task learning scenario <cit.>. Specifically, we project the gradient of a forget sample x_f ∈ D_f onto the normal plane of the gradient of the retain samples x_r ∈ D_r so that ℐ_up,loss(x_f, x_r)=0. This orthogonal projection manipulates the original gradient 𝐠_f=∇ℒ_f of the forget sample with respect to the weight parameters into a direction that sufficiently unlearns x_f ∈ D_f: g_f ← g_f - (g_f · g_r/‖g_r‖^2) g_r. The unlearned model θ^- is then obtained by the gradient update θ^- = θ̂ - η g_f.
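As a concrete illustration of this update, a minimal PyTorch-style sketch is given below; the function signature, the loss interface loss_fn(model, batch), and the gradient flattening strategy are our own assumptions for exposition rather than the authors' exact implementation.

import torch

def one_shot_gradient_surgery(model, loss_fn, forget_batch, retain_batch, lr=1e-5):
    # Illustrative sketch: loss_fn(model, batch) is assumed to return a scalar
    # reconstruction-style loss; all names here are ours, not the paper's.
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients of the forget-set and retain-set losses at the converged weights.
    g_f = torch.autograd.grad(loss_fn(model, forget_batch), params)
    g_r = torch.autograd.grad(loss_fn(model, retain_batch), params)
    g_f = torch.cat([g.reshape(-1) for g in g_f])
    g_r = torch.cat([g.reshape(-1) for g in g_r])

    # Project g_f onto the normal plane of g_r: g_f <- g_f - (g_f.g_r / ||g_r||^2) g_r.
    g_f = g_f - (g_f @ g_r) / (g_r @ g_r + 1e-12) * g_r

    # One-shot update theta^- = theta_hat - eta * g_f, written back parameter by parameter.
    with torch.no_grad():
        offset = 0
        for p in params:
            n = p.numel()
            p.sub_(lr * g_f[offset:offset + n].view_as(p))
            offset += n
    return model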
§ EXPERIMENTS
We verify our idea under various data removal requests. Note that measuring and evaluating whether a generative model has unlearned a single data point is non-trivial: to the best of our knowledge, even determining whether a deep generative model was trained with or without a particular data point, simply by looking at the outputs of training (e.g., generated images or weights), is intractable <cit.>. To make the problem verifiable, in this work we experiment with unlearning a group of samples sharing similar statistics in the training data, either belonging to a particular class or exhibiting a distinctive semantic feature. In this case, one can evaluate the generated output by measuring the number of samples containing that class or semantic feature; a successfully unlearned model would generate nearly zero samples with these features. Although we do not cover unlearning a single data point in this work, our method could in essence seamlessly approximate the generative model trained without that data point, and we look forward to exploring a feasible evaluation for this scenario in the near future.
§.§ Experimental Setup
Scenarios
We unlearn either a whole class or some notable feature from a group of samples. In the experiment, we use a subset of MNIST <cit.> with samples of classes 1, 3, and 8, and 64x64 CelebA <cit.> to train and unlearn a vanilla VAE <cit.>.
Evaluation
We evaluate our method under the following three criteria: privacy guarantee, utility guarantee, and cost. The privacy guarantee includes the feature ratio (fratio), the ratio of generated images containing the target feature (see details in Appendix <ref>). The utility guarantee includes the Frechet Inception Distance (FID), a widely used measure of generation quality. The cost criterion includes the total execution time (Time), which should be shorter than retraining from scratch. A successfully unlearned model would show a near-zero feature ratio, the same IS and FID scores as the initial pre-trained model (BEFORE), and the lowest possible execution time. Given the legal impact and the goal of unlearning, note that guaranteeing privacy is prioritized the highest.
§.§ Result on Pre-trained Generative Model
Quantitative Result We run the proposed method on the pre-trained VAE to remove the unlearning group D_f (class 1 for MNIST and male for CelebA, respectively) and report the evaluation in Table <ref>. Starting from the pre-trained model (BEFORE), our method unlearns the target D_f with a large decrease in fratio of 65% to 70%, while keeping the time cost of unlearning at ≤ 5% of retraining from scratch.
Meanwhile, our method still maintains decent utility. Compared with the baselines, our method performs best on privacy, the prioritized metric, across all experiments. Note that the feature ratio of gradient ascent in the CelebA experiment (feature ratio-CelebA-Grad.Ascnt) was omitted because the generated samples turned out to be noisy images, so the evaluation by the pre-trained classifier cannot be accepted. Also note that although the baselines perform better in terms of utility and cost, they do not achieve near-best scores on the privacy guarantee.
Qualitative Result
We further validate our method by comparing the generated images before and after the proposed unlearning algorithm. As shown in Figure <ref>, no class 1 samples are observed after unlearning class 1, meaning that our method successfully meets the request of unlearning class 1; this aligns with the quantitative result in Table <ref>, where the ratio of samples with class 1 is reduced from 34.3% to ≤ 15%. The quality of the generated images is fair, with 3 and 8 decently distinguishable by eye, although some examples show minor damaged features, in line with the decrease in IS and the increase in FID. Note that the ultimate goal of unlearning is to meet the privacy guarantee while preserving the utility of pre-training, which remains our future work.
§ CONCLUSION
In this work, we introduce a novel, theoretically grounded unlearning method for generative models. Inspired by the influence of one sample on the others, we propose a simple and effective gradient surgery to unlearn a given set of samples from a pre-trained generative model, outperforming the existing baselines. Although we do not experiment with unlearning a single data point, due to the lack of a grounded evaluation of the uniqueness of a particular data point, we leave this as future work and emphasize that our method can also be applied to this scenario. Furthermore, it would be interesting to verify our ideas on various privacy-sensitive datasets. Nonetheless, our work implies the possibility of unlearning a pre-trained generative model, laying the groundwork for privacy handling in generative AI.
§ EXPERIMENTAL DETAILS
§.§ Setup
Architecture
In this experiment, we use a vanilla VAE <cit.> whose encoder is a stack of either linear (for the MNIST experiment) or convolutional (for the CelebA experiment) layers. Although we verify our results on a VAE, note that our method can be applied to any variational-inference-based generative model such as <cit.>.
Baseline
We compare our experimental results with the following two baselines. The first is the recently published, and so far only, unlearning work on generative models <cit.> (FU), which unlearns by feeding a surrogate model with projected latent vectors. We reproduce FU and follow the hyperparameter details (e.g., 200 unlearning epochs for MNIST) of the original paper. The other is a straightforward baseline (Grad.Ascnt.), which updates the parameters in the direction that maximizes the reconstruction loss on the forget set, i.e., Objective <ref> without gradient surgery. Note that we keep the same step size when unlearning with these three different methods (including ours) for a fair comparison.
Training details
We use the Adam optimizer with a learning rate of 5e-04 for the MNIST experiment and 1e-05 for the CelebA experiment. We update the parameters only once (1 epoch) for removals, hence the title 'one-shot unlearning'. All experiments are repeated three times.
§.§ How to Evaluate Feature Ratio
We first prepare a classification model that distinguishes images containing the target feature from the rest. To obtain a highly accurate classifier, we search for the best classifier, which reaches over 95% accuracy. In the experiment, we use AllCNN <cit.> to classify class 1 against the other classes in the MNIST subset with classes 1, 3, and 8 (MNIST381), and ResNet18 <cit.> to classify male against female on CelebA. After unlearning, we generate 10000 samples from the generator and feed them to the pre-trained classifier. Assuming that the classifier is accurate, the resulting prediction rate is the probability that the generated output contains the feature to be unlearned.
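A minimal sketch of this evaluation routine is given below; the callable names, batch size, and latent dimension are illustrative assumptions rather than the exact evaluation code.

import torch

@torch.no_grad()
def feature_ratio(decode, classifier, target_class, n_samples=10000,
                  latent_dim=128, batch_size=256, device="cpu"):
    # `decode` maps latent codes to images (e.g. a VAE decoder) and `classifier`
    # is the pre-trained feature classifier described above (sketch only).
    hits, total = 0, 0
    while total < n_samples:
        b = min(batch_size, n_samples - total)
        z = torch.randn(b, latent_dim, device=device)
        preds = classifier(decode(z)).argmax(dim=1)
        hits += (preds == target_class).sum().item()
        total += b
    return hits / total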
§ DEFINITIONS AND PROOF FOR THEORETICAL ANALYSIS
In <cit.> and <cit.>, the influence of a sample z on the weight parameters is defined as the product of its gradient and the inverse Hessian. Moreover, the influence of a sample z on the test loss of a sample z' is defined as follows:
(Equation 2 from <cit.>)
Suppose we up-weight a sample z by a small ϵ around the converged parameters θ̂, which gives the new parameters θ̂_ϵ,z≜argmin_θ∈Θ1/n∑_i=1^nℒ(z_i, θ) + ϵℒ(z,θ). The influence of up-weighting z on the loss at an arbitrary point z' has the closed-form expression:
ℐ_up,loss(z, z') ≜ dℒ(z',θ̂_ϵ,z)/dϵ|_ϵ=0
= -∇_θℒ(z',θ̂)^⊤ H_θ̂^-1∇_θℒ(z,θ̂)
where H_θ̂≜1/n∑_i=1^n∇_θ^2ℒ(z_i, θ̂) is the Hessian and is positive definite (PD) by the assumption of convexity and Lipschitz continuity of the loss ℒ.
(Theorem <ref> from Section <ref>)
Reducing the influence of samples z∈ D_f in training data with regard to test loss is formulated as:
I^'_up,loss(D_f,z') → 0,
which is equivalent to
∇_θℒ(z',θ̂)^T ∑_z ∈ D_f∇_θℒ(z,θ̂) → 0
where z'∈ D_r in our scenario.
The second-order influence of D_f, ℐ^(2)_up,param, is formulated as the sum of the first-order influence ℐ^(1)_up,param and ℐ^'_up,param, which captures the dependency of the 𝒪(ϵ^2) terms on the group influence and is defined as follows:
ℐ^'_up,param(D_f) = 𝒜 H_θ̂^-1∑_z ∈ D_f∇_θℒ(z,θ̂)
where 𝒜 = p/(1-p)(I-(∇^2 L(θ^*))^-1 1/|𝒰|∑_z∈𝒰∇^2 ℓ(h_θ^*(z))) (from <cit.>).
The influence of samples in D_f on the test loss of z' can be formulated as:
ℐ_up, loss(D_f,z') = ∇_θℒ(z',θ̂)^T ℐ_up, param(D_f)
which can be equivalently applied to all orders of ℐ including ℐ^(1), ℐ^(2), ℐ^'.
Then, ℐ^'_up, loss(D_f,z') = 0 is now reduced to
∇_θℒ(z',θ̂)^T 𝒜 H_θ̂^-1∑_z ∈ D_f∇_θℒ(z,θ̂) = 0
which matches the right-hand side of Theorem <ref>, where the factors 𝒜 and H_θ̂^-1 are negligible.
|
http://arxiv.org/abs/2307.04065v1 | 20230709000559 | Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks | [
"Jiaqi Jiang",
"Jonathan A. Fan"
] | cs.LG | [
"cs.LG",
"math.OC"
] |
We present a non-convex optimization algorithm metaheuristic, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one peaked at high performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better with fewer functional evaluations compared to state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.
§ INTRODUCTION
High dimensional, non-convex optimization problems are pervasive in many scientific and engineering domains, including computational materials science <cit.>, electromagnetics <cit.>, circuits design <cit.>, process engineering <cit.>, and systems biology <cit.>. These problems are known to be very difficult to solve because they are NP-hard, and algorithms aiming to definitively search for the global optimum, such as branch and bound methods, cannot practically scale to high dimensional systems. As such, various algorithm heuristics have been developed, ranging from evolutionary metaheuristics to Bayesian optimization <cit.>, which use judicious sampling of the landscape to identify high performing optima. In all cases, it remains challenging to apply these algorithms to ultra-high dimensional spaces with dimensions of hundreds to thousands due to the curse of dimensionality.
The explosion of interest and research in deep neural networks over the last decade has presented new opportunities in optimization, as the process of training a deep network involves solving a high dimensional optimization problem. To this end, gradient-based optimization metaheuristics termed global topology optimization networks (GLOnets) <cit.> were recently proposed that use the training of a deep generative network to perform non-convex optimization. The concept applies to optimization problems where 𝐱 is a d-dimensional variable and the goal is to maximize the smoothly varying, non-convex objective function f(𝐱). To run the metaheuristic, the generative network is first initialized so that it outputs a distribution of 𝐱 values that spans the full optimization landscape. Over the course of network training, this distribution is sampled, f(𝐱) and local gradients are computed for these sampled points, and these values are incorporated into a customized loss function and backpropagated to evolve and narrow the distribution around high performing optima. Initial demonstrations indicate that GLOnets can perform better than standard gradient-based optimizers and global search heuristics for various non-convex optimization problems. However, the method is unable to extend to high dimensional problems in its current form, and the lack of interpretability of this black box algorithm has made it difficult to understand if and how it can adapt to more general problems, including high dimensional problems.
In this Article, we introduce the progressive growing GLOnet (PG-GLOnet) in which optimization within an ultra-high dimensional non-convex landscape is mediated through the training of a progressive growing deep generative network. Our tailoring of the network architecture for this optimization task serves to incorporate knowledge and assumptions about the optimization landscape into the metaheuristic, which is a requirement for tractably navigating ultra-high dimensional landscapes. We also explain how the algorithm works to smoothen the design landscape, how evaluation of the loss function serves as a gradient estimation calculation, and why the number of required functional evaluations is independent of problem dimension. With standard benchmarking test functions, we show that our concept performs better than state-of-the-art algorithms with fewer functional evaluations for one thousand dimensional problems. We anticipate that the customization of network architectures within the GLOnets framework will seed new connections between deep learning and optimization.
§ PROGRESSIVE GROWING GLONETS ALGORITHM AND BENCHMARKING
The PG-GLOnet concept builds on the foundation of the original GLOnet algorithm, which we briefly review here. The optimization problem to be solved with GLOnets can be written in the following form:
max_𝐱 f(𝐱)
where f(𝐱) is a non-convex, continuous objective function with feasible gradients. With GLOnets, this optimization problem is indirectly solved through the training of a general neural network (Figure <ref>a), where the input is a d-dimensional random variable 𝐳 with a standard normal distribution and the output is a distribution of 𝐱's. The generator therefore serves to map 𝐳 onto 𝐱 = G(𝐳; ϕ) with a distribution P(𝐱; ϕ), where ϕ denotes the trainable neural network parameters. The optimization objective for the generator is defined as:
L = max_ϕ𝔼_𝐱∼ P(𝐱; ϕ)exp[ f(𝐱)/T]
The distribution that maximizes this expected value is a delta function centered at the global optimum, and as such, an ideally trained generator will produce a narrow distribution centered at the global optimum, thereby solving the original optimization problem. The use of the exponential function and the hyperparameter T in the optimization objective further enhance the valuation of the global optimum, and more generally high performing optima, in the design space.
Generator training is consistent with conventional deep learning training methods: gradients of the objective function with respect to network parameters, ∇_ϕ𝔼f, are calculated through backpropagation, and they are used to iteratively optimize ϕ using standard gradient-based methods. In practice, the objective function is approximated by a batch of M samples. P(𝐱; ϕ), on the other hand, is typically implicit and cannot be directly sampled. To circumvent this issue, we draw M samples {𝐳^(m)}_m=1^M from the standard normal distribution, transform them to {𝐱^(m)}_m=1^M, and then approximate L and its gradient ∇_ϕ L with respect to network parameters ϕ:
L ≈1/M∑_m=1^Mexp[ f(𝐱^(m))/T]
∇_ϕ L ≈1/M∑_m=1^M1/Texp[ f(𝐱^(m))/T] ∇_𝐱f · D_ϕ𝐱^(m)
∇_𝐱f = [∂ f/∂ x_1, ∂ f/∂ x_2, …, ∂ f/∂ x_d] are the gradients of f(𝐱) and D_ϕ𝐱 = ∂ (x_1, x_2, …)/∂(ϕ_1, ϕ_2, ...) is the Jacobian matrix. Evaluation of f(𝐱) is usually performed by a numerical simulator and the gradient of f(𝐱) can be calculated explicitly or by auto-differentiation for analytic expressions, or by the adjoint variables method (AVM).
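To make this training loop concrete, a minimal PyTorch-style sketch of one GLOnet update is shown below; it assumes f is implemented with differentiable operations so the pathwise gradient can flow through the generator, and the names and default values are our own illustrative choices rather than the reference implementation.

import torch

def glonet_step(generator, f, optimizer, batch_size=20, T=1.3, z_dim=32):
    # One gradient update of the generator parameters phi,
    # e.g. with optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3).
    z = torch.randn(batch_size, z_dim)    # z ~ N(0, I)
    x = generator(z)                      # x = G(z; phi), shape (batch, d)
    # Maximizing E[exp(f(x)/T)] is done by minimizing its negative batch estimate.
    loss = -torch.exp(f(x) / T).mean()
    optimizer.zero_grad()
    loss.backward()                       # backpropagates through f and G
    optimizer.step()
    return loss.item()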
In the initial conception of GLOnet, which we term FC-GLOnet, the generative network was a fully connected deep network and was capable of effectively addressing optimization problems with a modest number of dimensions. However, it was found to be ineffective at optimizing within very high dimensional landscapes due to the curse of dimensionality, which makes a direct search for the global optimum within a full, high dimensional landscape an intractable proposition. We therefore propose the PG-GLOnet, which utilizes a generative network that outputs a distribution that gradually grows from a coarse, low dimensional space to a fine, high dimensional space. By tailoring the network architecture in this way, we regularize the optimization process to take place over differing degrees of optimization landscape smoothing, enabling our search process to be computationally efficient and tractable.
The PG-GLOnet generator architecture is shown in Figure <ref>b. The progressive growth concept is inspired by progressively growing GANs <cit.> that have been developed in the computer vision community to process images with increasing spatial resolution during network training. The input to the network is a D-dimensional random vector 𝐱^0, and its dimension is much smaller than that of 𝐱. With L growing blocks, the network simultaneously transforms and increases the dimensionality of the input vector, and its output is a 2^L D dimensional vector 𝐱^L that matches the dimensionality of 𝐱.
In each growing block, the input vector dimension is doubled in two ways, by direct upsampling and by a linear transform. The resulting outputs are combined together and further transformed using a non-linear activation function:
𝐱^out_2d × 1 = q((1-α) [ 𝐱^in_d × 1; 𝐱^in_d × 1 ] + α A_2d × d·𝐱^in_d × 1)
A_2d × d are trainable parameters in the linear transformation branch, q(·) is a non-linear activation function, and α is a hyperparameter that is manually tuned over the course of optimization.
Initially, α's for all of the growing blocks in the network are set to 0, such that the vector outputted by each block has the same effective dimensionality as its input vector. The network output 𝐱^L therefore has an effective dimensionality that matches the dimensionality of the input 𝐱^0. As α is increased for a particular growing block, its output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that exceeds and eventually doubles that of the growing block input vector. The effective dimensionality of 𝐱^L therefore arises from the aggregation of effective dimensionality increases from all growing blocks. To control the effective dimensionality of 𝐱^L over the course of PG-GLOnet training, α is manually changed from 0 to 1 sequentially from the left to right blocks (bottom of Figure <ref>b). At the end of PG-GLOnet training, α is 1 for all growing blocks and the effective dimensionality of 𝐱^L matches that of 𝐱.
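A minimal sketch of a single growing block, written in PyTorch for illustration, is shown below; the bias-free linear layer, the choice of LeakyReLU for q(·), and the way α is exposed for manual scheduling are our assumptions rather than the exact architecture.

import torch
import torch.nn as nn

class GrowingBlock(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.linear = nn.Linear(d, 2 * d, bias=False)  # plays the role of A_{2d x d}
        self.act = nn.LeakyReLU(0.2)                   # plays the role of q(.)
        self.alpha = 0.0                               # scheduled manually from 0 to 1

    def forward(self, x):                              # x: (batch, d)
        upsampled = torch.cat([x, x], dim=-1)          # [x; x], the upsampling branch
        mixed = (1 - self.alpha) * upsampled + self.alpha * self.linear(x)
        return self.act(mixed)                         # output: (batch, 2d)

# A PG-GLOnet generator would stack L such blocks and raise alpha block by block,
# from the first block to the last, over the course of training.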
To evaluate the efficacy of PG-GLOnet in solving high dimensional non-convex optimization problems, we perform a series of benchmark numerical experiments where we optimize a set of standard test functions with PG-GLOnet and other established algorithms. In the first set of experiments, we consider a testing function that can be tuned from a convex to non-convex function and compare PG-GLOnet with ADAM, a well known momentum-based gradient descent algorithm that is typically more effective than gradient descent. ADAM is a local optimization algorithm and performs well on convex objective functions but can get trapped within local optima for non-convex functions. Our test function is a modified Rastrigin function defined as follows:
f(𝐱; ρ) = ρ d + ∑_i=1^d [x_i^2 - ρcos(2π x_i)]
ρ is a hyperparameter that specifies the amplitude of the sinusoidal modulation within the function. When ρ =0, f(𝐱; ρ) = ∑_i=1^d x_i^2 and is a convex function. As ρ increases, more local optima emerge and these optima become separated by larger magnitude barriers.
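For reference, the modified Rastrigin function defined above can be written directly from its formula; the batched tensor interface below is our own convention.

import math
import torch

def modified_rastrigin(x, rho):
    # f(x; rho) = rho * d + sum_i [x_i^2 - rho * cos(2 * pi * x_i)]
    # x: tensor of shape (batch, d); returns one value per sample.
    d = x.shape[-1]
    return rho * d + (x ** 2 - rho * torch.cos(2 * math.pi * x)).sum(dim=-1)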
We first consider the computational cost required by ADAM and PG-GLOnet to find the global optimum of a two dimensional modified Rastrigin function as a function of ρ. For ADAM, we run 10000 optimizations for 200 iterations with random starting points, and for PG-GLOnet, we run the algorithm 10 times with a batch size of 20 for 200 total iterations. In both cases, the algorithms terminate early when they output results within 10^-3 of the global optimum, and computational cost is quantified as the average number of function evaluations required to find the global optimum. The results are summarized in Figure <ref>a and indicate that for convex or nearly convex optimization landscapes, ADAM is more efficient at finding the global optimum. This efficiency arises because ADAM is a specially tailored local optimizer that is well suited for these types of problems, while PG-GLOnet always requires relatively large batch sizes and more iterations to converge. As ρ increases, orders-of-magnitude more ADAM evaluations are required to search for the global optimum due to trapping within local optima in the design landscape. The computational cost for PG-GLOnet, on the other hand, does not increase nearly as rapidly due to its ability to navigate non-convex landscapes and is ten times more efficient than ADAM for ρ greater than 3.
We also perform benchmarks between ADAM and PG-GLOnet for a ten dimensional problem. Due to the inability for ADAM to converge to the global optimum in non-convex, high dimensional landscapes, we perform this benchmark differently and compare the best optimal value found by ADAM and PG-GLOnet given the same amount of computational resources. Here, we run ADAM for 200 iterations with 20 random starting points and PG-GLOnet for 200 iterations with a batch size of 20. We run these benchmark experiments ten times and average the best values from each experiment, and the results are reported in Figure <ref>b.
We find that the PG-GLOnet is able to consistently find solutions at or near the global optimum for all values of ρ, but the local optimizer gets progressively worse as ρ increases.
In our next set of benchmark experiments, we compare PG-GLOnet with the covariance matrix adaptation evolution strategy (CMA-ES), which is an established evolutionary algorithm used to perform population-based global searching of an optimization landscape. Compared to ADAM, it is more suitable for performing non-convex optimization. We consider two standard non-convex testing functions with lots of local optima, the Rastrigin and Schwefel functions (defined in the Appendix).
Plots in Figures <ref>c and <ref>d show the average number of function evaluations required to find the global optimum as a function of problem dimension d. The computational cost of CMA-ES increases exponentially as the problem dimension becomes larger, indicating the intractability of applying this algorithm to ultra-high dimensional problems. For the Schwefel function, we limited our CMA-ES benchmarking experiments to a problem dimension of 20 due to this scaling trend. PG-GLOnet, on the other hand, has a relatively small computational cost that is not sensitive to the dimension. In fact, the same neural network architecture and batch size is used for all problems. A more detailed discussion as to the origins of problem dimension and batch size decoupling is provided in the Discussion section.
Finally, we benchmark PG-GLOnet with state-of-the-art algorithms on testing functions proposed by the CEC'2013 Special Session and Competition on Large-Scale Global Optimization (LSGO) <cit.>. We consider the six non-convex benchmark functions from the competition, which involve variations and combinations of the Rastrigin and Ackley functions and are defined in the Appendix. These benchmark functions were designed to incorporate a number of challenging features for optimization, including:
* High dimensions. The design space of an optimization problem grows exponentially as the dimension of the design variables increases. These benchmark functions utilize one thousand dimensional landscapes.
* Functions with non-separable subcomponents. The whole design variable is decomposed into several subcomponents and dimensions within each subcomponent are strongly coupled together.
* Imbalance in the contribution of subcomponents. The contribution of a subcomponent is magnified or dampened by a coefficient.
* Non-linear transformations to the base functions. Three transformations are applied to break the symmetry and introduce some irregularity on the landscape: (1) Ill-conditioning (2) Irregularities (3) Symmetry breaking.
To globally search these landscapes for the global optimum, we perform a two-step optimization procedure. First, we run PG-GLOnet for each benchmark function for 200 iterations and a batch size of 100, from which our generative network outputs a narrow distribution of 𝐱's in promising regions of the optimization landscape. We then sample this distribution 100 times and perform local gradient descent on each of these design variables for an additional 200 iterations. The best function values found by PG-GLOnet plus local gradient descent are reported in Table <ref>, together with results produced from FC-GLOnet plus local gradient descent, local conjugate gradient descent, and two state-of-the-art non-convex optimization algorithms that were the best performing algorithms in the most recent LSGO contest: CC-RDG3, which is a divide-and-conquer method <cit.>, and DGSC, which is a differential group method utilizing spectral clustering <cit.>. We observe that PG-GLOnet with local gradient descent refinement is able to significantly outperform the other algorithms for the majority of test functions. In addition, the total computational cost of the two-step optimization procedure is only 4× 10^4 function evaluations, while CC-RDG3 and DGSC require 3× 10^6 function evaluations.
§ DISCUSSION
We discuss the origins of the efficiency and efficacy of PG-GLOnet in solving ultra-high dimensional non-convex optimization problems. First, we examine how the generic GLOnet algorithm operates and why it is able to effectively utilize a gradient-based strategy to solve non-convex optimization problems. Second, we examine the role of the progressive growing generative network architecture in PG-GLOnet in solving ultra-high dimensional problems. By understanding the relationship between network architecture and optimization procedure, we elucidate built-in assumptions used by PG-GLOnet in its search for the global optimum.
With the generic GLOnet algorithm, the original optimization problem cited in Equation 1 is reframed as a related problem (Equation 2) that addresses a transformed, smoothened optimization landscape. The key concepts that produce this landscape transformation and enable effective gradient-based optimization are outlined in Figure <ref>a and are: 1) distribution optimization, where the original problem involving the optimization of 𝐱 is transformed to a problem involving the optimization of parameters within a simple distribution P(𝐱); 2) exponential transformation, where the objective function is exponentially weighted; 3) over-parametrization, where the distribution P(𝐱) is now parameterized by a neural network with hundreds to thousands of weights; and 4) gradient estimation, where gradients that specify the evolution of the continuous distribution P(𝐱) are accurately computed through discrete samplings of 𝐳.
Distribution optimization. With the concept of distribution optimization, the original problem of searching for an optimal 𝐱 is recast as a population-based search in which parameters within a distribution function are optimized, thereby enabling a search for the global optimum in a smoother and higher dimensional optimization landscape. This concept is shared by other population-based optimization algorithms, such as CMA-ES. To visualize the concept, we consider a non-convex one-dimensional function f(𝐱) plotted as a blue line in the leftmost figure in Figure <ref>a. The objective is to maximize f(𝐱), and the function contains multiple local maxima separated by deep valleys. It is easy for optimization algorithms, particularly gradient-based algorithms, to get trapped in the local optima. For example, if gradient descent optimization is used and is initialized at the yellow dot position, the algorithm will converge to the local optimum delineated by the red dot. With this approach, multiple independent gradient descent optimizations with random starting points are needed to increase the possibility of finding the global optimum. For these problems, gradient-free optimization heuristics are often employed, which can reduce the chances of trapping within suboptimal maxima but which introduce a more stochastic nature to the search process.
However, if we consider the optimization of a distribution function that interacts with the global optimization landscape, local information at different parts of the landscape can be aggregated and collectively utilized to evolve this distribution in a manner that reduces issues of trapping within suboptimal maxima. Formally, we transform the optimization variable 𝐱 to parameters within the distribution P(𝐱), and the globally optimal distribution is one that is narrowly peaked around the global optimum. Distribution functions can be explicitly parameterized in many ways. As a simple illustrative example that builds on our
discussion of the one-dimensional f(𝐱), we consider the one-dimensional Gaussian distribution denoted as P(𝐱; μ, σ), shown as the red curve in the leftmost figure in Figure <ref>a. μ and σ refer to mean and standard deviation, respectively.
With a Gaussian distribution function, the objective function now becomes transformed to the expected value of f(𝐱) as a function of (μ, σ): 𝔼_𝐱∼ P(𝐱; μ, σ) f(𝐱). As this new optimization landscape is a function of two distribution parameters, μ and σ, it is two dimensional. We can directly visualize this new landscape by evaluating ∫ f(𝐱) P(𝐱;μ, σ) d𝐱 for all values of (μ, σ), and the result is summarized in the second figure from the left in Figure <ref>a. The horizontal line section at the bottom of the contour plot, where σ equals zero, is the original one-dimensional f(𝐱) with multiple optima. As σ increases to finite values above zero, the landscape becomes smoother. Mathematically, horizontal line sections for finite sigma are calculated by convolving f(𝐱) with the Gaussian function, producing a Gaussian blur that leads to smoothening. This smoothened landscape facilitates gradient-based optimization of (μ, σ) when the distribution is initialized to large σ values, and the final optimized distributions converge to the original f(𝐱) space at the bottom of the plot. However, while this two-dimensional landscape is smoother than the original f(𝐱), there remain multiple distribution parameter initializations for which the gradient-based optimizer converges to suboptimal maxima.
Exponential transformation. To further smoothen the optimization landscape and enhance the presence of the global optimum, we perform an exponential transformation of the objective function. Mathematically, the objective function for the distribution optimization problem becomes: 𝔼_𝐱∼ P(𝐱; μ, σ)exp[ f(𝐱)/T]. The temperature term T modulates the impact of the global optimum on the optimization landscape such that low T produces strong landscape modulation by the global optimum. For our one-dimensional f(𝐱) example, the exponentially transformed landscape is plotted in the second figure from the left in Figure <ref>a and shows that the local optima has faded out, such that gradient-based optimization within this landscape is more likely to converge to the global optimum.
The choice of T depends on the scale of f(𝐱). Consider f(𝐱) that is linearly normalized to span (0, 1). Such normalization can be typically achieved based on prior knowledge about the upper and lower bound of f(𝐱). If we want to amplify f(𝐱) for f(𝐱) > f_d and minimize f(𝐱) for f(𝐱) < f_d, where f_d is a division point between 0 and 1, the temperature is chosen to be T = f_d / log(1 + f_d). For example, if f_d is chosen to be the golden ratio, then the temperature is roughly T = 1.3. In practice, the selection of f_d is problem specific, and T can be treated as a hyperparameter that can be manually tuned around 1 for tailoring to a particular problem.
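As a quick check of the quoted value (our own arithmetic, taking f_d to be the golden-ratio point in (0, 1), i.e., f_d ≈ 0.618, and log as the natural logarithm):
T = f_d / log(1 + f_d) = 0.618 / ln(1.618) ≈ 0.618 / 0.481 ≈ 1.28,
which is consistent with the value of roughly 1.3 stated above.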
Over-parameterization. To further enhance the ability for GLOnet to efficiently and reliably converge to the global optimum, we next consider the concept of over-parameterization in which the distribution P(𝐱) is now a neural network parameterized by weights ϕ. The objective function then becomes: 𝔼_𝐱∼ P(𝐱; ϕ)exp[ f(𝐱)/T]. Our use of a neural network is inspired by the fact that deep network training involves the solving of an extremely high dimensional non-convex optimization problem, that the convergence of the neural network is typically insensitive to initialization, and that good neural network parameters can be found using backpropagation.
The underlying mathematical principles outlining why gradient descent is so effective for deep network training have been revealed to some extent by computer scientists in recent years. <cit.> First, the parameter space of deep networks is a high-dimensional manifold, such that most local optima are equivalently good and the probability of converging to a bad optimum during training decreases quickly with network size. Second, these equivalently high performing local optima originate from neural network over-parameterization, which builds in redundancy in the optimization landscape that speeds up and stabilizes the gradient-based optimization process.
To understand how this applies to GLOnet, we revisit our one-dimensional f(𝐱) landscape in which local optima are separated by deep barriers. When the optimization landscape is transformed using P(𝐱,ϕ), it frames the optimization problem in a very high dimensional landscape, as the dimensionality of ϕ is much higher than 𝐱. Solutions to the optimization problem therefore reside in a high-dimensional manifold, such that many different ϕ's serve as high performing local optima. Additionally, local optima in f(𝐱) are no longer separated by deep barriers but are instead connected by pathways with low to no barriers in our transformed high dimensional landscape, mitigating trapping within these local optima during gradient-based optimization. The high dimensional landscape representing the transformed f(𝐱) is visualized as a two-dimensional projection in the rightmost plot in Figure <ref>a. The global optimum is now a connected band in the optimization landscape, as opposed to a single point in f(𝐱), and there are fewer energy barriers preventing gradients from converging to the global optimum, enabling gradient descent optimization to be more robust and faster. We note that neural network depth and expressivity play a large role in determining the practical impact of over-parameterization on optimization, and as a demonstration, we compare the performance of GLOnets based on linear and deep non-linear networks in the Appendix.
Gradient estimation. A critical feature to maximizing the performance of GLOnet is ensuring that gradients used to evolve P(𝐱), which are approximated using a finite batch of samples, are sufficiently accurate. There are two methods for gradient estimation that can be used for GLOnets. The first is to use a score function gradient estimator, which utilizes the evaluated derivatives of the probability distribution P(𝐱; ϕ) and f(𝐱). This method for estimation requires explicit evaluation of derivatives to P(𝐱; ϕ) but only an implicit evaluation of ∇_𝐱f. The second is to use a pathwise gradient estimator, which relies on knowing the explicit derivatives of f(𝐱) but for which the probability distribution P(𝐱; ϕ) can be implicit. Empirically, we find for GLOnet that the pathwise gradient estimator more consistently produces smaller gradient error compared with the score function gradient estimator, and we therefore implement the pathwise gradient estimator in Equation <ref>. <cit.>
The pathwise gradient estimator is based on the principle of Monte Carlo estimation, such that the estimation error decreases with the inverse square root of batch size. Importantly, this estimation error is independent of dimension. As a result, GLOnet and specifically PG-GLOnet are able to operate for batch sizes that are independent of problem dimension, as demonstrated in Figures 2c and 2d. This scaling of problem dimension without a required scaling in the number of functional evaluations allows PG-GLOnet to readily scale and address the 1000-dimensional problems in Table 1 with modest computational resources.
Progressive growth. Direct searching within a high dimensional, non-convex landscape is an intractable problem. In the case of FC-GLOnet, which utilizes all of the features above, including distribution optimization and over-parameterization, the algorithm is still not effective in directly searching high dimensional landscapes (Table 1). With PG-GLOnet, the progressive growing architecture regularizes the optimization procedure to search first within a relatively coarse, low dimensional representation of the optimization landscape, followed by relatively local searching within increasingly higher dimensional landscape representations. This hierarchical increase of landscape dimensionality directly corresponds to the serial toggling of α within the series of growing blocks in the generator. As such, the optimization landscape is evolved over the course of PG-GLOnet training in a manner that maintains the tractability of the optimization problem.
To further visualize the relationship between generative network architecture and optimization search procedure, we consider a non-convex two-dimensional landscape shown in Figure <ref>b. The generative network contains a single growing block, and the toggling of α from zero to one modulates the effective dimensionality of the generator output from one to two. Initially, α is zero and the vector outputted by the generator has the same effective dimensionality as its input vector and is one. The optimization landscape being searched is therefore a diagonal line within the two-dimensional landscape (Figure <ref>b, left-most plot), and with optimal solutions near the center of the line, the outputted generator distribution (red coloring in plot) narrows towards this region. As α is increased, the generator output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that increases and eventually doubles. In our PG-GLOnet visualization, this increase in effective dimensionality corresponds to a broadening of the optimization landscape being searched, and the outputted generator distribution widens relative to the diagonal line. Upon the completion of network growth, the PG-GLOnet distribution converges to the global optimum.
The success of PG-GLOnet is therefore predicated on the ability for the outputted distribution of the generative network to be narrowed down to smaller but more promising regions of a coarse optimization landscape, prior to increasing the landscape dimensionality and adding more degrees of freedom to the problem. This concept therefore works particularly well for problems where optima within a low dimensional analogue of the optimization landscape help to inform of the presence and position of optima within the high dimensional landscape. This regularization of the optimization procedure also indicates that for problems where optima within coarse variants of the optimization landscape do not inform the position of the global optimum, PG-GLOnet will not work well.
In summary, we present a general global optimization algorithm metaheuristic based on progressive growing deep generative neural networks termed PG-GLOnet. Unlike other population-based algorithms, PG-GLOnet uses gradient-based optimization to evolve an expressive, complex distribution in the optimization landscape to one centered around promising optima. This complex distribution, parameterized using the deep network framework, utilizes loss function engineering and over-parameterization to facilitate effective gradient-based searching. PG-GLOnet is particularly well suited to address ultra-high dimensional problems because the required batch size is independent of problem dimension and the progressively growing network architecture facilitates a hierarchical search process within a landscape with progressively growing effective dimensionality. This use of a hierarchical search strategy also provides bounds as to the types of problems and landscapes that are suited for PG-GLOnet optimization. We anticipate that further research in the tailoring of application-specific generative network architectures to particular optimization landscapes will enable the GLOnet platform to extend and adapt to an even wider range of non-convex, high dimensional optimization problems.
|
http://arxiv.org/abs/2307.04114v1 | 20230709080743 | FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models? | [
"Zihao Jiang",
"Yunkai Dang",
"Dong Pang",
"Huishuai Zhang",
"Weiran Huang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CV",
"cs.MM"
] |
Few-shot learning aims to train models that can be generalized to novel classes with only a few samples.
Recently, a line of works has been proposed to enhance few-shot learning with accessible semantic information from class names.
However, these works focus on improving existing modules such as visual prototypes and feature extractors of the standard few-shot learning framework.
This limits the full potential use of semantic information.
In this paper, we propose a novel few-shot learning framework that uses pre-trained language models based on contrastive learning.
To address the challenge of aligning visual features with textual embeddings obtained from a text-based pre-trained language model, we carefully design the textual branch of our framework and introduce a metric module to generalize the cosine similarity.
For better transferability, we let the metric module adapt to different few-shot tasks and adopt MAML to train the model via bi-level optimization.
Moreover, we conduct extensive experiments on multiple benchmarks
to demonstrate the effectiveness of our method.
§ INTRODUCTION
Deep neural networks <cit.> have achieved remarkable success in many fields.
However, training deep neural networks requires a large amount of labeled data, which can be expensive and time-consuming to obtain.
For instance, in medical imaging, obtaining labeled data requires expert radiologists to annotate images.
This limits the application of deep learning models in real-world scenarios.
In contrast, humans possess the ability to recognize and classify objects of unseen categories with only a few examples.
This highlights the potential value of few-shot learning <cit.>, where models are trained on base classes and are expected to generalize well to novel classes with a limited number of samples.
Previous works mainly focus on image classification tasks, and most of them adopt the meta-learning paradigm <cit.>.
Recent works consider leveraging additional information from other modalities such as text to enhance the performance of few-shot learning.
In particular, some methods <cit.> adopt static word embedding models (e.g., GloVe <cit.>) to extract textual representations of class names and use them to adjust visual prototypes or classifiers.
With the appearance of general language models such as BERT <cit.> and GPT <cit.>, another line of works <cit.> adopt public pre-trained language models (PLMs) to extract more comprehensive semantic information from class names.
However, these works still focus on improving existing modules of the standard few-shot learning framework (e.g., visual prototypes and feature extractors), which confines the full utilization of powerful PLMs in few-shot learning.
Inspired by the success of vision-language models <cit.> trained by contrastive learning, we explore the idea of aligning visual features and textual embeddings for few-shot image classification in this paper, where textual embeddings are extracted by a public PLM from class names following the setting of <cit.>.
However, there are two main factors making this alignment challenging.
Firstly,
unlike vision-language models that have sufficient pairs of image and textual descriptions available for model training, we only have the class name of each image instead of a rich description.
Secondly,
in contrast to vision-language models where both visual and textual encoders are learnable to align embeddings, our textual encoder is inherited from a public PLM trained on uni-modal text data.
This leads to totally different structures of textual embedding spaces and thus makes the alignment between visual and textual features difficult.
For instance, if we directly align visual features and textual embeddings, the probability[Here probabilities mean the elements outputted by softmax function.] of a sample image being assigned to its true label is extremely low (see blue bars in Figure <ref>).
This indicates that it is hard for the visual feature of an image to approach the corresponding text embedding of its true label.
In this paper, we propose a novel framework (Figure <ref>) to boost few-shot learning by means of public PLMs.
To bridge the gap between visual and textual modalities, we carefully design a textual branch of our framework and introduce a metric module to measure the similarity between visual and textual embeddings.
The textual branch first incorporates class labels into our hand-crafted prompt template containing a [MASK] token and then inputs the filled sentence to a PLM.
The PLM transforms the input sentence into a hidden vector sequence
and the final textual embedding is extracted from the vector corresponding to the [MASK] token.
Meanwhile, the visual feature is obtained by a standard visual encoder.
After that, we compute the similarities between visual features and textual embeddings through the proposed metric module, and send them into the contrastive loss.
For better transferability on novel classes, we let the metric module adapt to different few-shot tasks and adopt Model-Agnostic Meta-Learning (MAML) <cit.> to train the model via bi-level optimization.
Moreover, we conduct extensive experiments on multiple benchmarks to demonstrate that the proposed method significantly outperforms the state-of-the-art few-shot learning methods based on PLMs.
The main contributions of this paper can be summarized as follows.
* We propose a novel few-shot learning framework that leverages semantic information extracted by a pre-trained language model based on contrastive learning.
* We carefully design a textual branch of the framework and introduce a metric module to generalize the similarity measure.
* The metric module is designed to be adaptive to different few-shot tasks for better transferability, and MAML is adopted to train the model via bi-level optimization.
* We conduct extensive experiments on multiple benchmarks with different domains to demonstrate the effectiveness of our method.
§ RELATED WORK
Few-shot Learning.
In general, few-shot learning methods are mainly divided into two categories: metric-based methods and optimization-based methods.
Metric-based methods aim to map samples into an appropriate embedding space on the basis of certain distance metrics. Most previous methods use task-agnostic distance metrics, e.g., cosine similarity distance <cit.>, Euclidean distance <cit.>, CNN relation module <cit.>, and Earth Mover’s Distance <cit.>.
Additionally, several methods <cit.> involve learning task-specific distance metrics, which can be adjusted for different tasks.
Optimization-based methods <cit.> aim at learning optimal initial model parameters on base classes and quickly fine-tuning them on novel classes with a few support examples.
Our paper generalizes the similarity measure by the proposed metric module, and uses MAML <cit.> to train the model.
Few-shot Learning with Semantic Information.
Recent works on few-shot learning start to utilize semantic information from class labels to enhance few-shot learning.
AM3 <cit.> proposes an adaptive modality mixture mechanism to model prototype representation as a combination of visual features and language semantic features.
KTN <cit.> learns classifiers by fusing visual information and knowledge information acquired from a knowledge graph and word embeddings with a semantic-visual mapping network based on Graph Convolutional Network <cit.>.
VS-Alignment <cit.> introduces a contrastive alignment between visual and semantic features as an additional objective.
Semantic Prompt <cit.> considers semantic information as prompts to tune the ViT <cit.> feature extractor.
All these methods leverage semantic features as auxiliary information to adjust visual prototypes, classifiers, or feature extractors.
In contrast, we propose a new few-shot learning framework to directly align visual and textual embeddings via contrastive learning.
Contrastive Learning.
Contrastive learning is a popular method in self-supervised representation learning.
It learns representations by pulling positive samples close and driving negative samples away from them in the latent embedding space with a contrastive loss.
A set of previous works have shown the excellent performance of contrastive learning in computer vision <cit.> and natural language processing <cit.> tasks.
Furthermore, recent works <cit.> apply contrastive learning to multi-modal settings by aligning image-text pairs in the embedding space.
Our work introduces contrastive learning to few-shot learning, and proposes a learnable metric module to make aligning visual features and textual embeddings possible.
§ PROBLEM DEFINITION
Few-shot learning involves two disjoint class sets: a base class set 𝒞_base and a novel class set 𝒞_novel.
Sufficient labeled samples are provided for each base class, while abundant unlabeled samples and only a few labeled samples are provided for each novel class.
Few-shot learning aims at classifying unlabeled samples from novel classes through training on all the given labeled samples.
Previous works usually formulate the few-shot learning problem as N-way K-shot classification, which denotes a classification task among N classes with K labeled samples available for each class.
In addition, given a fixed pre-trained language model, we use bimodal contrastive learning to leverage the semantic information extracted by it.
Concretely, for each embedded sample image z and N embedded class labels {t_1, t_2, …, t_N} in an N-way K-shot classification task, contrastive learning adjusts the embedding space through the following widely-used contrastive loss <cit.> (using cosine similarity as an example):
ℒ = -log( exp(z · t_+ / τ) / ∑_{i=1}^{N} exp(z · t_i / τ) ),
where t_+ is the embedded true label of the sample image and τ is a temperature hyper-parameter.
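As a concrete illustration, this loss for a single embedded image amounts to a temperature-scaled cross-entropy; the following PyTorch sketch is only illustrative (the function and variable names are ours, not part of the original implementation):

import torch
import torch.nn.functional as F

def contrastive_loss(z, text_embs, true_idx, tau=0.1):
    """Contrastive loss above for one embedded sample image.

    z         : (d,) visual embedding of the sample image
    text_embs : (N, d) embedded class labels t_1..t_N
    true_idx  : index of the true label t_+ among the N classes
    tau       : temperature hyper-parameter
    """
    # cosine similarity as the similarity measure, as in the example above
    z = F.normalize(z, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    logits = text_embs @ z / tau            # (N,) similarities z.t_i / tau
    # -log softmax at the true label reproduces the loss formula
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([true_idx]))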
Meta-learning paradigm <cit.> is commonly used to solve the few-shot learning problem,
which trains and evaluates the model with the episodic mechanism.
The standard meta-learning paradigm contains two stages: meta-training and meta-testing.
In each episode of the meta-training stage, an N-way K-shot M-query classification task 𝒯=(𝒮,𝒬) is constructed with samples from the base classes.
We first randomly select N classes from 𝒞_base as 𝒞_𝒯.
For each class, we randomly sample K support images and M query images.
Then we form the support set 𝒮={(x_i,y_i)|y_i∈𝒞_𝒯,i=1,2,…,N× K} and the query set 𝒬={(x_i,y_i)|y_i∈𝒞_𝒯,i=1,2,…,N× M} with the support images and the query images respectively, where x_i is the i-th sample image and y_i is the class label of x_i.
To learn an appropriate embedding space, bi-level optimization is performed on 𝒮 and 𝒬 respectively, utilizing a contrastive loss.
In each episode of the meta-testing stage, a classification task is built on the novel classes in a similar way.
The support set is formed with a few labeled samples, while the query set is sampled from the unlabeled samples.
After adapting to the novel classes by minimizing the contrastive loss on the support set, the model is used to predict class labels for the sample images in the query set.
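For concreteness, an N-way K-shot M-query episode can be sampled as in the following sketch; the data layout (a mapping from class labels to lists of images) is an assumption of this illustration:

import random

def sample_episode(data_by_class, n_way=5, k_shot=1, m_query=16):
    """Sample one episode: data_by_class maps class label -> list of images."""
    classes = random.sample(list(data_by_class), n_way)        # C_T
    support, query = [], []
    for y in classes:
        imgs = random.sample(data_by_class[y], k_shot + m_query)
        support += [(x, y) for x in imgs[:k_shot]]             # support set S
        query   += [(x, y) for x in imgs[k_shot:]]             # query set Q
    return support, query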
§ METHOD
We introduce our method of Few-shot Image classification with pre-trained Language Models (FILM) in this section.
The overall framework is illustrated in Figure <ref>, which consists of three modules: a textual branch, a visual branch, and a metric module.
For each episode, the textual branch extracts textual embeddings from class labels, while the visual branch extracts visual embeddings from support and query images.
Moreover, the metric module computes the similarity score matrix between textual and visual embeddings from these two branches.
In addition, we utilize a training strategy based on MAML algorithm to train the model via bi-level optimization.
§.§ Textual Branch
In this section, we explain how we design the textual branch to get textual embeddings from class labels.
The textual branch comprises a text-based pre-trained language model (PLM) and a language model head.
During meta-training and meta-testing, the PLM is frozen while the language model head is tuned for the downstream classification tasks.
In our study, we mainly use the masked language model as the PLM. Notice that PLMs mainly take sentences rather than single words or phrases as input during the pre-training stage.
Therefore, to bridge the gap between the pre-training and downstream tasks, for each class label y_i, we insert it into a hand-crafted prompt template and get y_i^prompt as the input of the PLM.
The token sequence of y_i^prompt is first converted to a token embedding sequence through a token vocabulary.
The input embedding sequence is calculated by summing the corresponding token embeddings and positional embeddings.
Then PLM transforms the input embeddings into a sequence of hidden vectors.
Two straightforward ways to get the textual embedding from the output hidden vector sequence are respectively: (1) taking the average vector of the output vector sequence as the textual embedding; (2) taking the hidden vector of the [CLS] token as the textual embedding.
To make textual embeddings more relevant to the visual descriptive information of the corresponding categories, we design a prompt template with one [MASK] token as
y_i^prompt = [CLS] The appearance of y_i is [MASK] . [SEP]
and extract the textual embedding by sending the hidden vector of the [MASK] token to the language model head.
In this way, the extraction of textual embeddings is treated as a masked language modeling task, which makes downstream classification tasks more consistent with the pre-training of the PLM.
The comparison among different designs of textual branches will be shown in Table <ref> later.
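For illustration, the [MASK]-based extraction can be sketched with the HuggingFace transformers API as below; the frozen-PLM handling and the learnable head are assumptions of this sketch and may differ from the actual implementation:

from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
plm = AutoModelForMaskedLM.from_pretrained("roberta-base")
for p in plm.parameters():          # the PLM is kept frozen
    p.requires_grad_(False)

def textual_embedding(class_name, head):
    """head: learnable language model head mapping 768-dim hidden vectors
    to the visual feature dimension."""
    prompt = f"The appearance of {class_name} is {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    hidden = plm(**inputs, output_hidden_states=True).hidden_states[-1]
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    return head(hidden[0, mask_pos])    # textual embedding t_{y_i}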
§.§ Metric Module
Inspired by vision-language models trained by contrastive learning, we explore aligning visual and textual modalities for few-shot image classification.
However, directly aligning visual features and textual embeddings extracted by a text-based PLM with cosine similarity performs poorly in the few-shot setting.
The blue bars in Figure <ref> show that the probability of a sample image being assigned to its true label is extremely low if we directly align the visual and textual embeddings.
In this paper, we introduce a metric module to generalize the similarity measure between visual features and textual embeddings.
Moreover, we let the metric module adapt to different few-shot tasks for better transferability on novel classes.
Specifically, we define f_θ_I as the image encoder with learnable parameters θ_I to transform each sample image x_i into a feature map z_i = f_θ_I(x_i).
Textual branch f_θ_T with learnable parameters θ_T is used to extract the textual embedding t_y_i = f_θ_T(y_i) from each class label y_i.
We generalize the similarity measure between visual embeddings z and textual embeddings t as a learnable function M(z, t) called metric module, whose parameters are denoted as θ_M.
For example, the metric module could be a bilinear function M(z, t)=z^⊤θ_Mt (degenerating to the cosine similarity if θ_M is the identity matrix) or a neural network, e.g., M(z, t)=MLP_θ_M([z,t]).
During meta-testing, we first fine-tune the task-specific parameters θ_M on the support set 𝒮.
Then we use the similarity score matrix computed by the metric module as a reference to infer labels for sample images in the query set 𝒬.
As shown in Figure <ref>, the correct classification probabilities of our method are significantly higher than those of direct alignment, which means that our metric module can effectively align the visual features and textual embeddings.
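The bilinear form of the metric module can be sketched as a small PyTorch module; initializing θ_M at the identity degenerates to the plain dot-product similarity, as noted above (the code and names are illustrative, not the reference implementation):

import torch
import torch.nn as nn

class BilinearMetric(nn.Module):
    """M(z, t) = z^T theta_M t, a task-adaptive similarity measure."""

    def __init__(self, dim):
        super().__init__()
        # identity initialization degenerates to the plain dot product
        self.theta_M = nn.Parameter(torch.eye(dim))

    def forward(self, z, t):
        # z: (B, d) visual embeddings, t: (N, d) textual embeddings
        return z @ self.theta_M @ t.t()     # (B, N) similarity score matrix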
§.§ Loss Function
We formulate the learning objective as a contrastive loss (Eq (<ref>)), which pulls together images and corresponding class labels while pushing away unmatched pairs in the embedding space.
Moreover, we aim to train a model to maximize the similarity between visual features and textual embeddings for matching (image, text) pairs while reducing the similarity for non-matching pairs.
Specifically, for a classification task 𝒯=(𝒮,𝒬), we calculate the contrastive loss on the support set 𝒮 and the query set 𝒬 respectively.
On the support set, the contrastive loss ℒ_𝒮 is computed with all the support samples and is formulated as:
ℒ_𝒮 = -(1/|𝒮|) ∑_{x_i ∈ 𝒮} log( exp(M(z_i, t_{y_i})/τ) / ∑_{c ∈ 𝒞_𝒯} exp(M(z_i, t_c)/τ) ),
where z_i is the visual embedding of the i^th support image x_i, t_y_i is the textual embedding of the true label y_i corresponding to x_i, t_c is the textual embedding of the class label c, and M(·, ·) is the similarity measure.
On the query set, the contrastive loss ℒ_𝒬 has almost the same formulation as ℒ_𝒮, except it is computed with all the query samples of 𝒬.
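Given the similarity score matrix produced by the metric module, both ℒ_𝒮 and ℒ_𝒬 reduce to a temperature-scaled cross-entropy over the task's classes; a sketch, assuming the metric module and embeddings defined above:

import torch.nn.functional as F

def episode_loss(metric, visual_embs, text_embs, labels, tau=0.2):
    """visual_embs: (B, d) embeddings of the support (or query) images,
    text_embs: (N, d) embeddings of the N class labels of the task,
    labels: (B,) indices of the true classes."""
    scores = metric(visual_embs, text_embs) / tau   # (B, N) = M(z_i, t_c)/tau
    return F.cross_entropy(scores, labels)          # L_S (or L_Q)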
§.§ Training Strategy
In this work, we incorporate the Model-Agnostic Meta-Learning (MAML) <cit.> algorithm to train the model via bi-level optimization as our training strategy.
Our training strategy aims to learn a good model initialization (through the outer-loop optimization), which can be quickly adapted to novel tasks given a few examples (through the inner-loop optimization).
The whole algorithm for our training strategy is outlined in Algorithm <ref>.
First, we randomly initialize the parameters of image encoder θ_I, language model head θ_T, and metric module θ_M.
For each task instance 𝒯_j from the distribution p(𝒯), we divide 𝒯_j into a support set 𝒮_j and a query set 𝒬_j.
To make the metric module task-specific, we create copies of θ_M as the adapted parameters θ_M^'.
In the inner loop, we adapt the model to the current task 𝒯_j by updating θ_M^' with a number of gradient descent steps on the support set while keeping θ_I, θ_T and θ_M fixed.
In the outer loop, θ_M^' are utilized to evaluate the performance of the adapted model on the query set.
Specifically, we compute loss on the query set with θ_I, θ_T, θ_M^' and perform gradient descent with respect to all the model parameters θ = {θ_I, θ_T, θ_M}.
The optimization objective of the meta-training stage is to learn a good initialization across tasks.
For example, when using one gradient update in the inner loop, the optimization objective can be formulated as follows:
min_θ∑_𝒯_j ∼ p(𝒯)ℒ_𝒬_j (θ_I, θ_T, θ_M -α∇_θ_Mℒ_𝒮_j(θ_I, θ_T, θ_M)),
where ℒ_𝒮_j and ℒ_𝒬_j denote the loss functions that evaluate the performance on support and query set respectively, and α is the learning rate of the inner loop.
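Since only the metric parameters θ_M are adapted in the inner loop, a single meta-training step can be sketched as follows; this simplified one-inner-step PyTorch illustration assumes loss closures that recompute ℒ_𝒮 and ℒ_𝒬 for given metric parameters, and a full implementation would iterate over tasks and multiple inner steps:

import torch

def maml_step(metric, support_loss_fn, query_loss_fn, outer_opt, alpha=0.5):
    """One meta-training step on a single task T_j = (S_j, Q_j).

    support_loss_fn(theta_M): L_S computed with the given metric parameters
    query_loss_fn(theta_M):   L_Q computed with the given metric parameters
    Both closures also depend on the image encoder and language model head,
    whose parameters are only updated in the outer loop.
    """
    theta_M = metric.theta_M
    # inner loop: adapt a task-specific copy of theta_M on the support set
    loss_S = support_loss_fn(theta_M)
    grad = torch.autograd.grad(loss_S, theta_M, create_graph=True)[0]
    theta_M_adapted = theta_M - alpha * grad
    # outer loop: evaluate the adapted parameters on the query set and
    # back-propagate through the adaptation into all model parameters
    loss_Q = query_loss_fn(theta_M_adapted)
    outer_opt.zero_grad()
    loss_Q.backward()
    outer_opt.step()
    return loss_Q.item()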
§ EXPERIMENTS
§.§ Setup
Datasets.
We experiment on three general object recognition datasets, i.e., miniImageNet, tieredImageNet and CIFAR-FS, and one fine-grained categorization image classification dataset, i.e., CUB-200-2011.
The miniImageNet dataset is proposed in <cit.> as a benchmark for few-shot image classification tasks.
It contains a subset of 100 classes in the ImageNet <cit.> dataset, where 64 classes are used for training, 16 classes for validation, and 20 classes for testing.
The tieredImageNet dataset <cit.>, which is also derived from the ImageNet <cit.> dataset, contains 351 classes for training, 97 classes for validation, and 160 classes for testing.
The CIFAR-FS dataset is built upon CIFAR-100 <cit.> dataset.
Following the recent work of <cit.>, we use the same training/validation/testing splits consisting of 64/16/20 classes respectively.
CUB-200-2011 (CUB) <cit.> is a dataset for fine-grained bird species classification tasks consisting of 100/50/50 classes for training/validation/testing splits respectively.
We also evaluate the domain transferability of our method by training on miniImageNet dataset and then testing on CUB dataset.
Architecture.
For the visual branch, following previous works <cit.>, we use ResNet-12 as the image encoder, which consists of four residual blocks.
Each block contains three 3×3 convolutional layers and a 2×2 max-pooling layer.
Similar to <cit.>, we adopt Dropblock as the regularizer and set the number of filters to (64, 160, 320, 640).
We apply a global average pooling layer after the last residual block.
The backbone network takes images with a spatial size of 84×84 as input and outputs 640-dim support and query visual embeddings.
To extract comprehensive semantic information from class names, we adopt RoBERTa-base <cit.> as our text-based pre-trained language model, which is trained on large-scale corpora and available for public use.
The language model head is a linear layer, which transforms 768-dim hidden vectors into 640-dim textual embeddings.
In addition, we use the bilinear form of our metric module.
Implementation Details.
Following <cit.>, we first pre-train the image encoder for 200 epochs on miniImageNet, CIFAR-FS and CUB dataset, and 100 epochs on tieredImageNet dataset.
Then we adopt the episodic training procedure under 5-way 1-shot and 5-shot settings.
In each episode, 16 unlabeled query images per class are used for the meta-training and meta-testing phases.
We use SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4.
The outer-loop learning rate is initialized as 1e-3 on miniImageNet, CIFAR-FS, CUB datasets and 1e-4 on tieredImageNet dataset.
The inner-loop learning rate is initialized as 0.5 on four datasets.
The number of inner-loop update steps is set to 25.
Our model is meta-trained for 80 epochs on all datasets.
The hyper-parameter τ is set as 1 for 1-shot setting, 0.2 for 5-shot setting in the inner loop, and 0.1 in the outer loop.
To ensure the stability of the evaluation results, we test 1,000 episodes and report the average performance with 95% confidence intervals.
We conduct experiments with an NVIDIA GeForce RTX 4090 GPU.
§.§ Comparison with State-of-The-Art
General Object Recognition and Fine-Grained Categorization.
For fair comparisons, we compare with other methods that use the same backbone or with similar methods, in both 5-way 1-shot and 5-way 5-shot settings on the miniImageNet, tieredImageNet, CIFAR-FS and CUB datasets.
As is shown in Table <ref>, our method is superior to existing methods and achieves the best performance.
Compared with previous methods that leverage semantic information from class names, such as KTN <cit.>, AM3 <cit.>, TRAML <cit.> and Vs-Alignment <cit.>, our method improves 1-shot accuracy by 2.42% and 5-shot accuracy by 4.41% on miniImageNet.
Furthermore, our method outperforms AM3 <cit.> by 3.88% and 4.41% at 1-shot and 5-shot settings on tieredImageNet respectively.
According to Table <ref>, our method outperforms MetaOptNet <cit.> by 4.99% and 3.06% at 1-shot and 5-shot settings respectively on the CIFAR-FS dataset.
In addition, on the CUB dataset, our method surpasses all the competitors, including RE-Net <cit.>, which previously achieved the best result.
One observation worth highlighting is that our method not only outperforms traditional methods based on meta-learning but also is superior to methods using textual information on four benchmark datasets.
These results validate the effectiveness of our proposed few-shot learning framework, which can leverage semantic information well in few-shot image classification tasks.
Evaluation on Cross Domain and Larger Shots.
To evaluate the cross-domain transferability of different few-shot learning methods, we train them on the source domain miniImageNet dataset and test them on the target domain CUB dataset.
This setting is challenging due to the domain gap between the training and testing datasets.
The results are reported in Table <ref>, showing that our method has competitive performance and obtains consistent improvements in the cross-domain setting.
This indicates the transferability of our method in a situation where the meta-testing tasks are entirely different from the meta-training tasks.
Furthermore, we evaluate the performance when the number of shots increases (e.g., 10-shot, 30-shot, and 50-shot) in Table <ref>.
This shows that our method would be more effective when there are more (image, text) pairs available for novel classes.
These comparisons demonstrate that our method has a more robust transferability, which means it can work well in cross-domain and larger shots scenarios.
§.§ Ablation Study
In this subsection, we empirically show the effectiveness of each component.
To investigate the effects of our designed textual branch, we try to use different extraction methods and prompt templates.
Moreover, we conduct extensive ablation studies to verify the contribution of the metric module by removing it, and we visualize our method on the miniImageNet and tieredImageNet datasets.
Analysis of the Textual Branch.
To evaluate the effect of our textual branch, we test different extraction methods (i.e., “Avg”, “[CLS]”, and “[MASK]”) and prompt templates in our framework with 5-way 1-shot setting on miniImageNet.
As shown in Table <ref>, our "[MASK]" extraction method with the "[CLS] The appearance of y_i is [MASK] . [SEP]" prompt template outperforms the "[CLS]" extraction method by 5.39% and the "Avg" extraction method by 3.94%.
Our proposed hand-crafted prompt template treats the extraction of textual embeddings as a masked language modeling task, which makes the textual embeddings more relevant to the visual description of object categories.
The results demonstrate that the carefully designed textual branch is effective for aligning visual and textual embeddings for downstream few-shot classification tasks.
Analysis of the Metric Module.
As shown in Table <ref>, we design a variant that does not use the support set to update the parameters in the inner-loop optimization and instead directly computes the similarity score matrix between the query visual embeddings and textual embeddings with cosine similarity in the outer loop.
The results show a significant decrease in performance on four widely-used few-shot image classification datasets, demonstrating the importance of the task-specific metric module.
By leveraging the metric module to generalize the cosine similarity, our model can adaptively measure the similarity between visual features and textual embeddings for different few-shot tasks.
Visualization.
To qualitatively evaluate our method, we apply t-SNE <cit.> to visualize the results, which represent the visual features of five categories.
We randomly sample 300 examples for each class in 5-way 5-shot setting on miniImageNet and tieredImageNet dataset.
As shown in Figure <ref>, the t-SNE visualization results indicate that our method can learn more compact and separate clusters, which means that the learned representations are more discriminative.
§ CONCLUSION
In this paper, we propose a novel framework that leverages a text-based pre-trained language model to boost few-shot learning.
Furthermore, we introduce a task-specific metric module to enable the alignment between visual features and textual embeddings.
Extensive experiments on miniImageNet, tieredImageNet and CIFAR-FS demonstrate the effectiveness of our method.
Supplementary Materials
§ ADDITIONAL EXPERIMENTS
Influence of Inner-Loop Temperature.
To study the influence of inner-loop temperature hyper-parameter, we conduct experiments on four widely-used few-shot datasets with different inner-loop temperature values in our method.
The rest settings are consistent with Section <ref>.
Table <ref> shows the results in 5-way 5-shot setting.
We find that 0.2 is an appropriate inner-loop temperature value for this setting on all these four datasets.
Effect of the Number of Inner-Loop Update Steps.
To find a suitable number of inner-loop update steps, we keep the experimental setup in Section <ref> and update the model 10, 15, 20, 25 and 30 steps in the inner loop respectively.
Table <ref> shows the results in 5-way 5-shot setting on miniImageNet and tieredImageNet.
Following the results, we set the number of inner-loop update steps to 25 in our experiments.
Visualization of Grad-CAM.
In Figure <ref>, we visualize the gradient-weighted class activation mapping from the pre-trained model and our method under a ResNet-12 feature extractor.
It is observed that our method makes the model pay more attention to the discriminative part of the target object than the pre-trained model.
For example, we find that for dog samples, the pre-trained model pays more attention to the body and background parts while our model focuses on the head part.
|
http://arxiv.org/abs/2307.03901v2 | 20230708044917 | One-Loop Quantum Effects in Carroll Scalars | [
"Kinjal Banerjee",
"Rudranil Basu",
"Bhagya Krishnan",
"Sabyasachi Maulik",
"Aditya Mehra",
"Augniva Ray"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2307.04927v2 | 20230710222833 | Probabilistic Counterexample Guidance for Safer Reinforcement Learning (Extended Version) | [
"Xiaotong Ji",
"Antonio Filieri"
] | cs.LG | [
"cs.LG",
"cs.LO"
] |
Probabilistic Counterexample Guidance for Safer RL
Ji and Filieri
Department of Computing
Imperial College London
London, SW7 2AZ, UK
{xiaotong.ji16, a.filieri}@imperial.ac.uk
Probabilistic Counterexample Guidance for Safer Reinforcement Learning (Extended Version)
Xiaotong Ji Antonio Filieri
=========================================================================================
Safe exploration aims at addressing the limitations of Reinforcement Learning (RL) in safety-critical scenarios, where failures during trial-and-error learning may incur high costs. Several methods exist to incorporate external knowledge or to use proximal sensor data to limit the exploration of unsafe states. However, reducing exploration risks in unknown environments, where an agent must discover safety threats during exploration, remains challenging.
In this paper, we target the problem of safe exploration by guiding the training with counterexamples of the safety requirement. Our method abstracts both continuous and discrete state-space systems into compact abstract models representing the safety-relevant knowledge acquired by the agent during exploration. We then exploit probabilistic counterexample generation to construct minimal simulation submodels eliciting safety requirement violations, where the agent can efficiently train offline to refine its policy towards minimising the risk of safety violations during the subsequent online exploration.
In preliminary experiments, we demonstrate our method's effectiveness in reducing safety violations during online exploration by an average of 40.3% compared with standard QL and DQN algorithms and by 29.1% compared with previous related work, while achieving cumulative rewards comparable to unrestricted exploration and alternative approaches.
§ INTRODUCTION
A critical limitation of applying Reinforcement Learning (RL) in real-world control systems is its lack of guarantees of avoiding unsafe behaviours. At its core, RL is a trial-and-error process, where the learning agent explores the decision space and receives rewards for the outcome of its decisions. However, in safety-critical scenarios, failing trials may result in high costs or unsafe situations and should be avoided as much as possible.
Several learning methods try to incorporate the advantages of model-driven and data-driven methods to encourage safety during learning <cit.>. One natural approach for encouraging safer learning is to analyse the kinematic model of the learning system with specific safety requirements and to design safe exploration <cit.> or safe optimisation <cit.> strategies that avoid unsafe states or minimise the expected occurrence of unsafe events during training. However, this approach is not applicable for most control systems with partially-known or unknown dynamics, where not enough information is available to characterise unsafe states or events a priori.
To increase the safety of learning in environments with (entirely or partially) unknown dynamics, we propose an online-offline learning scheme where online execution traces collected during exploration are used to construct an abstract representation of the visited state-action space. If during exploration the agent violates a safety requirement with unacceptable frequency, a probabilistic model checker is used to produce from the abstract representation minimal counterexample sub-models, i.e., a minimal subset of the abstract state-action space within which the agent is expected to violate its safety requirement with a probability larger than tolerable. These counterexamples are then used to synthesise small-size offline simulation environments within which the agent's policy can be conveniently reinforced to reduce the probability of reiterating safety violating behaviors during subsequent online exploration. As new evidence from online exploration is gathered the abstract representation is incrementally updated and additional offline phases can be enforced when necessary, until an acceptable safety exploration rate is achieved. Overall, our strategy aims at migrating most trial-and-error risks to the offline training phases, while discouraging the repeated exploration of risky behaviours during online learning. As new evidence is collected during online exploration, the abstract representation is incrementally updated and the current value the agent expect from each action is used to prioritise the synthesis of more relevant counterexample-guided simulations.
Our main conceptual contribution in this paper is the use of probabilistic counterexamples to automatically synthesise small-scale simulation submodels where the agent can refine its policy to reduce the risk of violating a safety requirement during learning.
In particular, we 1) propose a conservative geometric abstraction model representing safety-relevant experience collected by the agent at any time during online exploration, with theoretical convergence and accuracy guarantees, suitable for the representation of both discrete and continuous state spaces and finite action spaces, 2) adapt minimal label set probabilistic counterexample generation <cit.> to generate small-scale submodels for the synthesis of offline agent training environments aimed at reducing the likelihood of violating safety requirements during online exploration, and 3) a preliminary evaluation of our method to enhance Q-Learning <cit.> and DQN <cit.> agents on problems from literature and the OpenAI Gym, demonstrating how it achieves comparable cumulative rewards while increasing the exploration safety rate by an average of 40.3% compared with QL/DQN, and of 29.1% compared with previous related work <cit.>.
§ BACKGROUND
§.§ Problem Framework
Markov Decision Process (MDP).
An MDP <cit.> is a tuple (S, A, s_0, P, R, L), where S is a set of states, A is a finite set of actions, s_0 is the initial state, P: S × A × S → [0, 1] is the probability of transitioning from a state s ∈ S to s'∈ S with action a ∈ A, R: S × A →ℝ is a reward function and L: S → 2^AP is a labelling function that assigns atomic propositions (AP) to each state.
A state in S is typically represented by a vector of finite length n_S ≥ 1. A state space is discrete if the elements of the vector are countable, where we assume S ⊆ ℤ^n_S, continuous if S ⊆ ℝ^n_S, or hybrid if some elements are discrete and others are continuous. When possible, we omit the cardinality n_S for readability.
Trace.
A finite trace (also called path or trajectory) through an MDP is a sequence σ = s_0, a_0, s_1, a_1, … s_i, a_i, … s_n, where s_0 is the initial state and P(s_i, a_i, s_i+1) > 0.
Policy.
A (deterministic) policy π: S → A selects in every state s the action a to be taken by the agent.
Q-learning (QL) <cit.> is a reinforcement learning algorithm where an agent aims at finding an optimal policy π^* for an MDP that maximises the expected cumulative reward. Given a learning rate α∈ (0, 1] and a discount factor γ∈ (0, 1], such that rewards received after n transitions are discounted by the factor γ^n, the agent learns a value function Q based on the following update rule:
Q_t(s, a) = (1 - α) Q_{t-1}(s, a) + α (R(s, a) + γ max_{a' ∈ A} Q_{t-1}(s', a'))
The optimal Q-function Q^* satisfies the Bellman optimality equation:
Q^*(s, a) = 𝔼[R(s, a) + γ max_{a' ∈ A} Q^*(s', a') | s, a]
For finite-state and finite-action spaces, QL converges to an optimal policy as long as every state action pair is visited infinitely often <cit.>, but it is not suitable for learning in continuous state spaces. For continuous state spaces instead, the Deep Q-Learning method <cit.> parameterises Q-values with weights θ as a Q-network and the learning process is adapted to minimising a sequence of loss function L_i at each iteration i (cf. Algorithm 1 in <cit.>):
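For reference, the tabular update rule above translates directly into code; a minimal sketch (the integer encoding of states and actions is an assumption of the example):

import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update; Q is a |S| x |A| array of Q-values."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * target
    return Q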
L_i(θ_i) = 𝔼[(y_i - Q(s, a; θ_i))^2],
where y_i = 𝔼[R(s, a) + γ max_{a' ∈ A} Q(s', a'; θ_{i-1}) | s, a].
During learning, the agent selects the next action among those available in the current state at random with probability ϵ_QL >0 while with probability 1 - ϵ_QL it will select an action a yielding Q^*(s,a).
Optimal Policy.
An optimal policy π^*, in the context of Q-learning and DQN, is given by π^*(s) = arg max_{a ∈ A} Q^*(s, a), ∀ s ∈ S.
§.§ Probabilistic Model Checking and Counterexamples
Probabilistic model checking is an automated verification method that, given a stochastic model – an MDP in our case – and a property expressed in a suitable probabilistic temporal logic, can verify whether the model complies with the property or not <cit.>.
In this work, we use Probabilistic Computational Temporal Logic (PCTL) <cit.> to specify probabilistic requirements for the safety of the agent. The syntax of PCTL is recursively defined as:
Φ := true | α | Φ ∧ Φ | ¬Φ | P_⋈ p φ
φ := X Φ | Φ U Φ
A PCTL property is defined by a state formula Φ, whose satisfaction can be determined in each state of the model. true is a tautology satisfied in every state, α ∈ AP is satisfied in any state whose labels include α (α ∈ L(s)), and ∧ and ¬ are the Boolean conjunction and negation operators. The modal operator P_⋈ p φ, with ⋈ ∈ {<, ≤, ≥, >} and p ∈ [0,1], holds in a state s if the cumulative probability of all the paths originating in s and satisfying the path formula φ is ⋈ p under any possible policy. The Next operator X Φ is satisfied by any path originating in s such that the next state satisfies Φ. The Until operator Φ_1 U Φ_2 is satisfied by any path originating in s such that a state s^' satisfying Φ_2 is eventually encountered along the path, and all the states between s and s^' (if any) satisfy Φ_1. The formula true U Φ is commonly abbreviated as F Φ and satisfied by any path that eventually reaches a state satisfying Φ.
PCTL allows specifying a variety of safety requirements. For simplicity, in this work, we focus on safety requirements specified as upper-bounds on the probability of eventually reaching a state labelled as :
Safety Requirement. Given a threshold λ ∈ (0,1], the safety requirement for a learning agent is formalised by the PCTL property P_≤λ [F unsafe], i.e., the maximum probability of reaching a state s ∈ S such that unsafe ∈ L(s) must be less than or equal to λ.
Counterexamples in Probabilistic Model Checking.
A
counterexample is a minimal possible sub-model M_cex = (S_cex, A_cex, s_0, P_cex) derived from the model M, where S_cex and A_cex are subsets of S and A, containing violating behaviours of a PCTL property from the initial state s_0 in M.
When a model M does not satisfy a PCTL property, a counterexample can be computed as evidence of the violation <cit.>. In this work, we adapt the minimal critical label set counterexample generation method of <cit.>.
The computation of a minimal possible sub-model requires the solution of a mixed-integer linear optimisation problem that selects the smallest number of transitions from the state-action space of the original model that allows the construction of violations.
An extensive description of the counterexample generation algorithm, including a heuristic to bias the counterexample generation towards including actions that a tabular Q-learning agent is more likely to select is included in Appendix <ref>.
Generating multiple Counterexamples. For an MDP violating the safety requirement, there can exist, in general, multiple counterexamples (of both minimal and non-minimal size), each potentially highlighting different policies that lead to requirement violations <cit.>.
In this work, we use counterexamples to guide the generation of offline training environments where the agent learns to reduce the value of actions that may eventually lead to the violation of safety requirements. We therefore aim at generating multiple, diverse counterexamples (if they exist), while keeping each of them at a small size for faster training. Given a counterexample, a different one can be obtained by adding a blocking clause to the minimisation problem, i.e., forcing the optimiser to exclude one or more previously selected action pairs (by imposing the corresponding selector variables x_ℓ=0 in the optimisation problem of Appendix <ref>). Hence, we can systematically add (an increasing number of) blocking clauses to obtain multiple diverse counterexamples that jointly provide a more comprehensive representation of the different violating behaviors the agent explored at any time.
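As an illustration, a blocking clause can be added to the MILP by fixing some of the selector variables chosen in the previous solution to zero before re-solving; since the MILP encoding is only outlined in the appendix, the interface below is an assumption of this sketch:

import random

def add_blocking_clause(milp_model, selected_vars, k=1):
    """milp_model: a gurobipy Model encoding the minimal label set problem;
    selected_vars: selector variables x_l set to 1 in the last counterexample.
    Fixing k of them to zero forces the next solution to differ."""
    for x in random.sample(selected_vars, k):
        milp_model.addConstr(x == 0)
    milp_model.update()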
§ COUNTEREXAMPLE-GUIDED REINFORCEMENT LEARNING
We assume that the agent does not have prior knowledge about the environment. In particular, it will only discover unsafe states upon visiting them during exploration. During the learning process the agent will iteratively interact with either the actual environment (online exploration) or with a counterexample-guided offline simulation.
The online phases aim at exploring the actual environment and improving the agent's policy expected reward, while acquiring information to build and continuously refine an abstract, compact representation of the control problem. The offline phases expose the agent to simulated, small-size, environments, within which the agent can revise its policy to penalise decisions that may lead to safety violations during the subsequent online phases.
In the remaining of the section, we first show how to construct and update an abstract finite MDP that compactly represents safety-relevant aspects of the (parts of) environment explored by the agent at any time (sec. <ref>).
Then, in sec. <ref>, we introduce the main learning algorithm with the online-offline alternation scheme, and discuss the main challenges of the offline learning phases.
§.§ Safety-relevant State-space Abstraction
For simplicity, let us assume the agent has no information about the topology of the state space S at the beginning of the exploration. Each online interaction with the environment (episode) can be described by the trace of the states visited by the agent and the actions it took. Besides the reward associated with each state-action pair, we assume states in the trace can be assigned a set of labels. Labels represent properties of the state specific to the learning problem, e.g., that a goal has been reached. We assume a special unsafe label marks the occurrence of unsafe situations the agent should aim to avoid. W.l.o.g., we assume an episode terminates when the agent enters an unsafe state.
We will refer to the states of the online environment as concrete states.
In this section, we propose an abstraction procedure to construct a finite, abstract MDP that retains sufficient information about the explored concrete environment to enable the synthesis of abstract counterexamples to the safety requirement. Each counterexample will therefore be an abstract representative of a set of possible safety violating behaviors that can happen in the concrete environment. To maintain the size of the abstract MDP tractable – especially in the presence of continuous concrete state spaces – the abstraction will retain only (approximate) safety-relevant information.
We assume that any state not labeled as unsafe is safe to explore and that the unsafe label is time-invariant. Furthermore, the abstraction must preserve at any time a safety invariant: every explored unsafe concrete state should be mapped to an unsafe abstract state. The finite abstract state space must therefore separate safe and unsafe regions of the concrete space, with only safe concrete states possibly misclassified as unsafe, but not vice versa.
Inspired by the idea of casting the learning of geometric concepts as a set cover problem in <cit.>, we frame the separation task as a minimal red-blue set cover problem to abstract the explored concrete state space as a finite set of disjoint boxes or polyhedra, each expressed as a set of logical constraints and defined as the intersection of a finite set of half-spaces.
To formalise the construction of the abstract state-space, we first introduce the notion of coverage of a concrete state by a polyhedra predicate.
Coverage of a polyhedra predicate
Let S̅ ⊆ S = {s_0, s_1, …, s_n} be the set of all explored concrete states. A particular state s ∈ S̅ is covered by a (polyhedra) predicate C_i if s ∈ C_i, where C_i = {s | ωs + b ≤ 0}, in which ω represents the vector of slopes and b represents the vector of biases corresponding to the half-spaces enclosing the predicate.
The general affine form of the predicate C_i accounts for a variety of common numerical abstract domains, including, e.g., boxes (intervals), octagons, zonotopes, or polyhedra. In this work, we fix ω=1, i.e., restrict to hyper-boxes. We allow the user to specify a minimum size d > 0, which ensures that no dimension of the box will be reduced to a length smaller than d. This coarsening of the abstract domain struck a convenient trade-off between computational cost and accuracy of the abstraction in our preliminary experiments (see Appendix <ref> for additional discussion); the restriction can be lifted for applications requiring different trade-offs <cit.>.
The identification of a finite set of predicates that allow separating the concrete state space preserving the safety invariant can thus be reduced to the following:
Minimal Red-Blue Set Cover Problem.
Let S̅⊆ S = {_0, _1, … , _n} be the set of all explored concrete states and U = {_0, _1, …, _m} be the set of explored states assigned the label, find the minimal set C = {∪_i=1 C_i} s.t. every element ∈ U is covered by some predicate C_i, with an overall false positive rate fpr≤ f ∈ (0,1] for safe concrete states (∈S̅∖ U) covered by C.
In general, f cannot be zero, since the concrete state space, whether discrete or continuous, may not be perfectly partitioned by a finite set of polyhedra predicates, with smaller values of f possibly resulting in a larger number of predicates |C|.
To solve this optimisation problem, we employ a branch and bound method <cit.> to systematically enumerate possible combinations of predicates. The solution set guarantees all the unsafe concrete states are covered by a Boolean combination of predicates in C, while safe concrete states may also be covered by some predicate C_i, with a prescribed maximum tolerable rate f.
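For illustration, coverage by an axis-aligned box predicate and the initial wrapping of unsafe states can be sketched as follows; this is a simplification of the branch-and-bound search actually used, and all names and defaults are assumptions:

import numpy as np

def covers(box, s):
    """box = (lo, hi) arrays defining a hyper-box; True if state s lies inside."""
    lo, hi = box
    s = np.asarray(s)
    return bool(np.all(s >= lo) and np.all(s <= hi))

def initial_boxes(unsafe_states, d=0.01):
    """Wrap each explored unsafe state in a hyper-box of minimal size d.
    A branch-and-bound pass would then merge/enlarge boxes to minimise |C|
    while keeping the false positive rate on safe states below f."""
    return [(np.asarray(u) - d / 2.0, np.asarray(u) + d / 2.0)
            for u in unsafe_states]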
Safety-relevant Abstraction MDP.
A safety-relevant abstraction MDP M_a is a tuple (S_a, A_a, s_a0, R_a, P_a, L_a), where S_a is the abstract state space, which is the partition of the concrete state space S induced by the boundaries of C_i ∈ C from the solution of the minimal set cover above, A_a is the set of applicable actions, s_a0 is the initial abstract state, P_a: S_a × A_a × S_a → [0,1] is a probability transition function, R_a is the abstract reward function (which will be defined later), and L_a: S_a → {safe, unsafe} is a labelling function.
M_a is constructed from the concrete traces collected during online learning: the satisfaction of the predicates C_i determines the abstraction of concrete states, and the abstract transition function is estimated accordingly from the frequencies observed in the concrete traces. The abstraction must preserve the safety invariant, therefore it may overapproximate explored unsafe regions, but not underapproximate them. Initially, the entire state space is assumed safe to explore. As new traces are collected during online exploration, the abstract model is incrementally updated: when a concrete state is found to be unsafe, the hyperbox containing its numerical vector representation is split to correct the classification (after the concrete state is wrapped around with a hyperbox of minimal size d, if d>0).
The incremental branch-and-bound refinement of the abstraction could lead to excessive fragmentation of the abstract state space, making it intractably large, particularly for the purpose of counterexample generation. To mitigate this issue, we merge adjacent states, i.e., abstract states sharing at least one separating hyperplane, into a single abstract state, adapting the general notion of probabilistic approximate ϵ-simulation <cit.> as in the following definition:
Adjacent ϵ-simulation:
Let S_l be the partitions of S_a induced by the equivalence relation s ∼ s' iff L_a(s)=L_a(s'). Then, for a given ϵ∈ [0,1], two adjacent states s ∈ S_a and s' ∈ S_a are ϵ-similar if (∃ s_l ∈ S_l) (s ∈ s_l, s' ∈ s_l) and (∀ s_l ∈ S_l) (|P_a(s, a, s_l) - P_a(s', a, s_l)| ≤ϵ, ∀ a ∈ A_a).
The ϵ-simulation in def. <ref> induces a hierarchical merging scheme. Let level l_0 contain the initial abstract states, which partition the explored concrete state space into a finite set of boxes – one per abstract state – which are labeled as either safe or unsafe. ϵ-similar adjacent states from level l_i are merged into a single state at level l_i+1, until no further merge is possible. Besides reducing the number of abstract states, and in turn the cost of generating counterexamples, this hierarchical merging scheme brings the indirect benefit of more aggressively merging abstract states corresponding to safe regions of the concrete state space, while preserving a finer-grained approximation of concrete state space regions in proximity of explored unsafe states, as discussed in Appendix <ref>.
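The merge test of the definition above compares, for every action, the probabilities of transitioning into each block of the label partition S_l; a sketch with nested-dictionary transition estimates (the data structures are illustrative):

def eps_similar(P_a, s, s_prime, actions, label_blocks, eps):
    """P_a[state][action][block]: estimated probability of moving from an
    abstract state into a block of S_l with the given action. Adjacency and
    equal labels of s and s_prime are assumed to be checked by the caller."""
    for a in actions:
        for block in label_blocks:
            p1 = P_a.get(s, {}).get(a, {}).get(block, 0.0)
            p2 = P_a.get(s_prime, {}).get(a, {}).get(block, 0.0)
            if abs(p1 - p2) > eps:
                return False
    return True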
Counterexample-guided Simulation. If a probabilistic safety requirement can be violated, one or more counterexamples M_cex can be generated from the abstract model M_a, where each counterexample includes a (near-)minimal subset of the abstract state-action space. We then use each counterexample as a guide to build an offline simulation environment where the agent can update its Q-values towards avoiding eventually reaching an unsafe state.
Starting from the initial concrete state s_0, the abstract state s_cex in M_cex corresponding to the current concrete state is computed. By construction, each counterexample selects one action a from state s_cex. The abstract transition is randomly simulated according to the transition function P_cex from (s_cex, a) and an abstract destination state s_cex^' is identified. Such abstract state is concretised by sampling from the past concrete traces a transition (s, a, s^') where s ∈ s_cex and s^'∈ s^'_cex. If s^'_cex is an unsafe state, a penalty (negative reward; the impact of its magnitude is further discussed in Appendix tab. <ref>) is given to the agent, which has the transitive effect of re-weighting also the Q-value of the actions that led the agent to the current state. The simulation traces can be used by both Q-learning and DQN agents. The simulation terminates when an unsafe state is reached (penalty) or when we fail to concretise an abstract transition, which may happen when concrete safe states are misclassified as unsafe in the abstraction, but there is no actual transition to unsafe states from them. In the latter case, the simulation trace is discarded (no reward). While every simulation within the counterexample is designed to eventually reach an unsafe state with probability 1 by construction <cit.>, to avoid excessive length of a simulation, it can be practical to set an arbitrary, large bound on the maximum number of steps per run as additional termination criterion.
Multiple counterexamples, and corresponding simulations, can be generated up to a maximum simulation budget allowed by the user, adding blocking clauses in random order as described previously. Each counterexample is typically of small-size, which results in short simulation traces, thus reducing the overall cost of each offline learning experience.
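One offline simulation episode guided by a counterexample can be sketched as follows; the interfaces of the counterexample sub-model, the abstraction map and the store of explored concrete transitions are assumptions of this illustration:

import random

def simulate_episode(cex, abstract_of, concrete_steps, s0,
                     penalty=-10.0, max_steps=200):
    """cex.action(s_a), cex.next_state(s_a, a), cex.is_unsafe(s_a): action
    selection, sampled transition and labelling of the sub-model M_cex;
    concrete_steps[(s_a, a, s_a_next)]: explored concrete transitions (s, a, s');
    abstract_of(s): abstract state containing concrete state s.
    Returns (state, action, reward, next_state) tuples, or None if an
    abstract transition could not be concretised (trace discarded)."""
    trace, s = [], s0
    for _ in range(max_steps):
        s_a = abstract_of(s)
        a = cex.action(s_a)                   # action selected by M_cex
        s_a_next = cex.next_state(s_a, a)     # sampled according to P_cex
        candidates = concrete_steps.get((s_a, a, s_a_next), [])
        if not candidates:                    # cannot concretise the step
            return None
        s, a, s_next = random.choice(candidates)
        reached_unsafe = cex.is_unsafe(s_a_next)
        trace.append((s, a, penalty if reached_unsafe else 0.0, s_next))
        if reached_unsafe:                    # penalise and terminate
            return trace
        s = s_next
    return trace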
§.§ Online-Offline Learning with Counterexample Guidance
Algorithm <ref> summarises the main steps of our online-offline learning method with counterexample guidance. We initially assume no knowledge about the environment is given to the agent: both the abstract model M_a and the set of explored paths D are empty (line <ref>). If prior knowledge was available, either in the form of an initial abstraction or of previously explored paths, M_a and D can be initialised accordingly.
Online learning. The procedure (line <ref>) lets the agent operate in the concrete environment with either tabular Q-Learning in discrete state spaces or DQN in continuous state spaces. We augment the exploration with a sequential Bayesian hypothesis test (line <ref>) that monitors the frequency of violations of the safety requirement by incrementally updating, after each online episode, a Beta distribution that estimates the probability of violation <cit.>. If the odds of such probability exceeding λ are larger than a prescribed Bayes factor β, the online learning phase is interrupted. The updated Q-values/Q-network of the agent are stored and the set of explored traces D is updated (line <ref>).
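The safety monitor can be sketched as a sequential Beta-Bernoulli test that, after each episode, compares the posterior odds of the violation probability exceeding λ against the Bayes factor β; the uniform prior and the minimum-sample rule below mirror the description in the text, but the exact formulation is an assumption of this illustration:

from scipy.stats import beta

class SafetyMonitor:
    """Sequential Bayesian test for P(violation probability > lambda)."""

    def __init__(self, lam, bayes_factor, min_samples=50):
        self.a, self.b = 1.0, 1.0            # Beta(1, 1) uniform prior
        self.lam, self.bf, self.min_n = lam, bayes_factor, min_samples

    def update(self, violated):
        """Update after one online episode; True triggers offline learning."""
        self.a += 1.0 if violated else 0.0   # observed violations
        self.b += 0.0 if violated else 1.0   # observed safe episodes
        if self.a + self.b - 2.0 < self.min_n:
            return False
        p_exceed = beta.sf(self.lam, self.a, self.b)  # P(p > lambda | data)
        odds = p_exceed / max(1.0 - p_exceed, 1e-12)
        return odds > self.bf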
Offline learning. If the online learning phase has been interrupted by the Bayesian test (line <ref>), an offline learning phase is triggered to reinforce the avoidance of discovered unsafe behaviours in future online exploration.
First, the abstraction M_a is updated with the current set of online traces D (line <ref>) to support the generation of current counterexamples. While the number of counterexample submodels is theoretically finite <cit.>, enumerating them for a large M_a could be computationally too expensive; instead, up to a maximum number N_cex of counterexamples is generated at each iteration. The addition of random blocking clauses (as described in sec. <ref>) will increase the diversity of the counterexamples within and across different offline learning phases.
The offline simulation traces synthesised from each M_cex (line <ref>) as described in the previous section are used by the agent to update its Q-values/Q-network (line <ref>), thus penalising the selection of eventually unsafe actions before the next online learning phase begins. Notice that the Bayesian hypothesis test is re-initialised before the next online learning phase (line <ref>) since the agent is expected to behave differently after an offline learning phase.
Discussion.
The interleaving of offline and online learning phases aims at reducing the frequency of unsafe events during the exploration of the environment. This goal is pursued by synthesising simulation environments from counterexamples to the safety requirement computed from an abstraction of the state space explored online. Notably, the offline phases never preclude the exploration of an action during the following online phases, rather they reduce the likelihood of selecting actions that may eventually lead to reaching unsafe states by lowering their Q-values. Due to space limitations, we report here the two main results related to the convergence of our abstraction method and of the online-offline learning process, and defer a more extensive discussion to Appendix <ref>.
Counterexample guidance relies on the abstraction constructed from online exploration phases, which classifies every region of the explored state space as either safe or unsafe. While by construction the abstraction preserves the safety invariant (every explored concrete unsafe state is mapped to an abstract unsafe state), the quality of the offline guidance relies also on controlling the misclassification error of safe concrete regions, which may unduly penalise the exploration of safe states.
Proposition 3.
The maximum misclassification error of a concrete safe state into an abstract unsafe state can eventually be reduced below an arbitrary bound 0 < u̅≤ 1 with probability at least 1-δ (0 < δ <1) throughout the exploration.
Further empirical analysis of the convergence of the abstraction and the impact of the abstraction parameters is provided in Appendix <ref>.
Finally, the following proposition states that the introduction of counterexample guidance does not preclude the convergence of the overall learning process to a maximal reward policy that satisfies the safety requirement, if such policy exists.
Proposition 4.
If there exist maximal-reward policies that satisfy the safety requirement, then the online-offline learning process eventually converges to one of them.
Further discussion of the convergence properties of the offline-online learning process, including an elaboration on the validity of the two propositions above, is included in Appendix <ref>.
In the next section, we instead report on our preliminary experimental evaluation of the performance of our counterexample-guided learning process.
§ EVALUATION
In this section, we present a preliminary experimental evaluation of the performance of our method from two perspectives: 1) the improvement in the exploration safety rate, and 2) the impact on the cumulative reward achieved by the agent. Finally, we briefly discuss the overhead of counterexample guidance and make some observations on the policies it synthesises. Additional experimental results and discussion, including on abstraction effectiveness and sensitivity to hyperparameters can be found in Appendix <ref>.
Environments.
We consider four environments: a discrete navigation environment from the implementation of <cit.>, the slippery FrozenLake from OpenAI Gym <cit.>, a continuous navigation environment – where we change the state space from discrete to continuous with the same layout as in <cit.> – and MarsRover <cit.> (in particular, the exploration of the melas chasma in the Coprates quadrangle <cit.>). In all the environments, the agent chooses a direction of movement among four possible directions. We define the objective of the agent as finding a policy with maximum Q-value while keeping the probability of unsafe behaviours during exploration below the tolerable threshold λ. Specifically, in FrozenLake, the agent aims to find a walkable path in an 8x8 grid environment with slippery actions while avoiding entering states labelled with H. With the slippery setting, the agent moves in the intended direction with a probability of only 1/3 and otherwise moves in either perpendicular direction, each with probability 1/3. In the discrete and continuous navigation environments, the agent aims to reach a first set of target states and then a second set of target states in a fixed order while avoiding entering any unsafe states along the path, with a 15% probability of moving in a random direction at every action. In the continuous navigation environment, the distance covered in a move is also randomly sampled from a Gaussian 𝒩(2,0.5), thus choosing the same direction from a state may reach different states. In the MarsRover environment, the agent aims to find one of the target states while avoiding reaching the unsafe regions, which in this scenario cannot be perfectly abstracted by boxes or other affine abstract domains. Following <cit.>, the distance covered in each move is sampled uniformly in the range (0, 10), with the addition of further uniform noise from 𝒰(-0.1, 0.5).
Baselines.
We compare the learning performance of our method with classical Q-Learning and DQN <cit.> as the baselines for discrete and continuous MDPs, respectively. For discrete MDPs, we further compare our method with <cit.> (referred to as QL-LCRL in the following), using the same set of hyper-parameters as provided in their implementation and the associated tool paper <cit.>. Given an automaton corresponding to an LTL property, QL-LCRL guides the agent's exploration of an initially unknown MDP by reshaping the reward function on-the-fly to encourage the exploration of behaviours that satisfy such property. In this application, QL-LCRL encourages safe exploration by discouraging reaching unsafe states.
Implementation and parameters. We implemented a standard tabular Q-learning in Python and used the DQN implementation from OpenAI Gym Baselines <cit.>. We parameterise our approach with a learning rate α and a discount factor γ for the Q-value/Q-Network updates, and with ϵ for the adjacent ϵ-simulation used in the abstraction. The agents move within Cartesian planes of sizes |S| and the minimisation of the abstract models reduces the state space to |S_a|, where the minimum size of each box is set to 1 and 0.01 for the discrete and the continuous environments, respectively. We set the safety specification parameter λ according to the intrinsic uncertainty in the respective environments. A summary of the parameters used for each environment is reported in the left side of tab. <ref> (additional parameters are discussed in Appendix tab. <ref> to ease reproducibility). We require at least 50 samples to be collected by the Bayesian hypothesis test before it can trigger offline training, to reduce false positive triggers.
Experimental Results.
Fig. <ref> shows the accumulated safety rates (bottom) and the rolling average of accumulated rewards, indicating the real-time learning performance of the agent. The line is the average across 10 runs, while the shaded region around it is the standard deviation. We do not provide any prior information to the agent.
The dashed vertical lines in the figure indicate the average episode number at which an offline learning phase in QL/DQN-CEX (our method) is triggered, and the solid horizontal line indicates the target safety rate of the corresponding safety specification. The cumulative rewards of the different methods converge to similar values, demonstrating that learning under guidance (QL-LCRL and QL/DQN-CEX) achieves cumulative rewards comparable to the baseline QL and DQN methods.
As expected, providing additional guidance to discourage actions that may lead to reaching unsafe states with QL/DQN-CEX or QL-LCRL improves the safety rate of the exploration, with QL/DQN-CEX achieving on average higher safety rates.
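The aggregated curves shown in the figure can be reproduced from the per-episode logs with a simple post-processing step; the sketch below (array shapes, the window size, and the definition of the per-episode safety indicator are illustrative assumptions) computes the rolling average of episode rewards and the accumulated safety rate, together with their mean and standard deviation across runs.

```python
import numpy as np

def rolling_mean(x, window=100):
    """Trailing rolling average of a 1-D array of per-episode rewards."""
    c = np.cumsum(np.insert(np.asarray(x, float), 0, 0.0))
    return (c[window:] - c[:-window]) / window

def aggregate(rewards, safe, window=100):
    """rewards, safe: arrays of shape (n_runs, n_episodes); safe[i, t] is 1
    if episode t of run i never visited an unsafe state, 0 otherwise."""
    rolled = np.stack([rolling_mean(r, window) for r in rewards])
    # accumulated safety rate: fraction of safe episodes among the first t episodes
    acc_safety = np.cumsum(safe, axis=1) / np.arange(1, safe.shape[1] + 1)
    return (rolled.mean(axis=0), rolled.std(axis=0),
            acc_safety.mean(axis=0), acc_safety.std(axis=0))
```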
In one of the two discrete environments, the safety rate of online learning with our method exceeds the threshold much faster than with the other methods. This is due to the rapid convergence of the abstraction, thanks to a grid layout that can be accurately and efficiently abstracted (and minimised). In turn, no further offline phases were required after the first 1000 episodes.
In the other discrete environment, more online exploration is required to support comprehensive counterexample guidance, partly due to the high uncertainty in the outcome of the actions. Another consequence of this high uncertainty is that the agent takes longer to reach a stable performance; as a result, the offline learning phase is triggered more frequently than in the other environments, and more episodes are required to stably satisfy the safety requirement.
In the two continuous environments, while the baseline eventually achieves a marginally higher cumulative reward in MarsRover, DQN-CEX achieves a higher safety rate – the number of failures experienced by the agent is inversely proportional to the integral of the safety rate curve, which results in significantly fewer failure events. The frequency of offline training phases also decreases over time. This is not surprising, since a safer exploration is possibly less speculative and slower in exploring the optimal policy. QL-LCRL is not applicable to these continuous environments. Although LCRL <cit.> applies an NFQ-based method to continuous MDPs, NFQ-LCRL trains the agent in a completely offline manner based on randomly sampled data, and is thus not suitable for comparison with our method from the perspective of safe exploration.
Overhead. Counterexample generation requires solving a MILP problem on the abstract state space. On a MacBook Air with an M1 CPU and 8 GB of memory, the average±stdev time to solve the optimisation problems was 0.71±0.98s, 0.08±0.06s, 2.31±3.94s, and 3.87±4.07s for the four environments, respectively. We used the Gurobi Optimiser v9.1.0 <cit.> off-the-shelf. The numbers of counterexamples generated for each offline learning phase were, on average, 11, 6, 19, and 21, respectively. Notice that both counterexample generation and the simulations can be parallelised. While the cost of solving the MILP may be higher, we notice that offline learning is triggered only when the recent online exploration resulted in an unacceptable failure rate. As shown in fig. <ref>, thanks to counterexample guidance, QL/DQN-CEX can achieve the required safety rate during exploration much faster, which helps amortise the initial MILP solution cost. Finally, the optimisation problem might be relaxed to sacrifice the optimality of the solution (i.e., the size of the counterexamples) for computation time.
§ RELATED WORK
Safe Exploration.
<cit.> provide surveys and taxonomies of recent safe reinforcement learning methods with different emphases. Most existing safe exploration methods assume prior knowledge about the environment or the agent's dynamics <cit.>, known safety constraints <cit.>, or utilise expert guidance <cit.> to provide guarantees on the satisfaction of safety constraints during exploration. A different class of methods <cit.> utilises surrogate Gaussian process models to characterise the unknown dynamics and optimise the unknown function under a prior model structure. There are only a few methods <cit.> tackling safe exploration without a known model structure. <cit.> focuses on decomposing continuous, unknown MDPs into sub-tasks using an online RL framework under LTL specifications, with less emphasis on the safety rate but blocking actions that are unsafe according to the specification, and <cit.> trains a safety layer/critic used for filtering out probabilistically unsafe actions using an offline dataset. While motivated by the same idea of safer exploration without prior knowledge, our method can be initialised with no or any amount of previously explored paths, and it converges to cumulative rewards comparable to or better than those of the baseline methods.
Offline RL.
Offline RL can be seen as a data-driven formulation of RL, where the agent learns from transitions collected by a behaviour policy instead of interacting with the environment <cit.>. The biggest challenge in offline RL is the bootstrapping error: the Q-value is evaluated with little or no prior knowledge and propagated through the Bellman equation <cit.>. To address this issue, <cit.> regularise the learned policy towards the behaviour policy during optimisation. <cit.> alternatively update the Q-values in more conservative ways to learn a lower bound of the Q-function, using uncertainty estimated from the sampled information. From the safe RL perspective, <cit.> optimises a risk-averse criterion using data previously collected offline by a safe policy, and <cit.> learns a constrained policy maximising the long-term reward based on offline data, without interaction with the concrete environment, using a constrained penalised Q-Learning method. These methods share our motivation of combining offline learning with risk-averse RL, but focus on the continuous control setting instead. Besides utilising offline learning to reduce unsafe online exploration, we alternate online and offline learning to keep the abstract knowledge about the environment up to date and to increase the risk aversion of the agent based on the most recent evidence it collected online.
§ CONCLUSION
We presented our investigation of a safer model-free reinforcement learning method using counterexample-guided offline training.
We proposed an abstraction strategy to represent the knowledge acquired during online exploration in a succinct, finite MDP model that can consistently and accurately describe the safety-relevant dynamics of the explored environment. Counterexample generation methods from probabilistic model checking are then adapted to synthesise small-scale simulation environments capturing scenarios in which the decisions of the agent may lead to the violation of safety requirements. The agent can then train offline within these minimal sub-models by replaying concrete transitions recorded during past online exploration that are consistent with the counterexample, using a reward scheme focused on reducing the likelihood of selecting actions that may eventually lead to visiting previously explored unsafe concrete states again. The Q-values penalised during the offline training phases implicitly reduce the risk of repeating unsafe behaviours during subsequent online exploration, while newly explored paths feed back information to the next offline learning phase.
The alternation of online exploration – and abstraction refinement – and counterexample guided learning can ultimately lead to higher safety rates during exploration, without significant reduction in the achieved cumulative reward, as demonstrated in our preliminary evaluation on problems from previous literature and the OpenAI Gym. While this paper focused on improving Q-Learning (and the related DQN algorithm), the fundamental framework is not specific to Q-Learning, and we plan to explore its impact on other learning algorithms in future work.
Data availability. An artifact including the prototype Python implementation used for the experiments has been accepted by QEST 2023 artifact evaluation. The implementation of our method is available at Github: <https://github.com/xtji/CEX-guided-RL>.
§ REFERENCES
achiam2017constrained
Achiam, J., Held, D., Tamar, A., Abbeel, P.: Constrained policy optimization.
In: International Conference on Machine Learning. pp. 22–31. PMLR (2017)
alshiekh2018safe
Alshiekh, M., Bloem, R., Ehlers, R., Könighofer, B., Niekum, S., Topcu, U.:
Safe reinforcement learning via shielding. In: Thirty-Second AAAI Conference
on Artificial Intelligence (2018)
baier2008principles
Baier, C., Katoen, J.P.: Principles of model checking. MIT press (2008)
bellman1957markovian
Bellman, R.: A markovian decision process. Journal of mathematics and mechanics
pp. 679–684 (1957)
bharadhwaj2020conservative
Bharadhwaj, H., Kumar, A., Rhinehart, N., Levine, S., Shkurti, F., Garg, A.:
Conservative safety critics for exploration. arXiv preprint arXiv:2010.14497
(2020)
blumer1989learnability
Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M.K.: Learnability and the
vapnik-chervonenkis dimension. Journal of the ACM (JACM) 36(4),
929–965 (1989)
brockman2016openai
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang,
J., Zaremba, W.: Openai gym. arXiv preprint arXiv:1606.01540 (2016)
brunke2022safe
Brunke, L., Greeff, M., Hall, A.W., Yuan, Z., Zhou, S., Panerati, J.,
Schoellig, A.P.: Safe learning in robotics: From learning-based control to
safe reinforcement learning. Annual Review of Control, Robotics, and
Autonomous Systems 5, 411–444 (2022)
bshouty1998noise
Bshouty, N.H., Goldman, S.A., Mathias, H.D., Suri, S., Tamaki, H.:
Noise-tolerant distribution-free learning of general geometric concepts.
Journal of the ACM (JACM) 45(5), 863–890 (1998)
buckman2020importance
Buckman, J., Gelada, C., Bellemare, M.G.: The importance of pessimism in
fixed-dataset policy optimization. arXiv preprint arXiv:2009.06799 (2020)
vcevska2019counterexample
Češka, M., Hensel, C., Junges, S., Katoen, J.P.:
Counterexample-driven synthesis for probabilistic program sketches. In:
International Symposium on Formal Methods. pp. 101–120. Springer (2019)
dalal2018safe
Dalal, G., Dvijotham, K., Vecerik, M., Hester, T., Paduraru, C., Tassa, Y.:
Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757
(2018)
desharnais2008approximate
Desharnais, J., Laviolette, F., Tracol, M.: Approximate analysis of
probabilistic processes: Logic, simulation and games. In: 2008 Fifth
International Conference on Quantitative Evaluation of Systems. pp. 264–273.
IEEE (2008)
downey2021think
Downey, A.: Think Bayes. O'Reilly Media (2021),
<https://books.google.com/books?id=Vh4vEAAAQBAJ>
filieri2014statistical
Filieri, A., Păsăreanu, C.S., Visser, W., Geldenhuys, J.:
Statistical symbolic execution with informed sampling. In: Proceedings of the
22nd ACM SIGSOFT International Symposium on Foundations of Software
Engineering. pp. 437–448 (2014)
FultonPlatzer2018
Fulton, N., Platzer, A.: Safe reinforcement learning via formal methods: Toward
safe control through proof and learning. Proceedings of the AAAI Conference
on Artificial Intelligence 32(1) (Apr 2018)
garcia2012safe
Garcia, J., Fernández, F.: Safe exploration of state and action spaces in
reinforcement learning. Journal of Artificial Intelligence Research
45, 515–564 (2012)
garcia2015comprehensive
Garcıa, J., Fernández, F.: A comprehensive survey on safe reinforcement
learning. Journal of Machine Learning Research 16(1), 1437–1480
(2015)
gurobi
Gurobi Optimization, LLC: Gurobi Optimizer Reference Manual (2022),
<https://www.gurobi.com>
counterexampleGeneration2009
Han, T., Katoen, J.P., Berteun, D.: Counterexample generation in probabilistic
model checking. IEEE Transactions on Software Engineering 35(2),
241–257 (2009). 10.1109/TSE.2009.5
hansson1994logic
Hansson, H., Jonsson, B.: A logic for reasoning about time and reliability.
Formal aspects of computing 6(5), 512–535 (1994)
hasanbeig2018logically
Hasanbeig, M., Abate, A., Kroening, D.: Logically-constrained reinforcement
learning. arXiv preprint arXiv:1801.08099 (2018)
lcrl_tool
Hasanbeig, M., Kroening, D., Abate, A.: LCRL: Certified policy synthesis via
logically-constrained reinforcement learning - implementation,
<https://github.com/grockious/lcrl>
hasanbeig2020deep
Hasanbeig, M., Kroening, D., Abate, A.: Deep reinforcement learning with
temporal logics. In: Bertrand, N., Jansen, N. (eds.) Formal Modeling and
Analysis of Timed Systems. pp. 1–22. Springer, Cham (2020).
10.1007/978-3-030-57628-8
HasanbeigKA22
Hasanbeig, M., Kroening, D., Abate, A.: LCRL: certified policy synthesis via
logically-constrained reinforcement learning. In: Ábrahám, E.,
Paolieri, M. (eds.) Quantitative Evaluation of Systems - 19th International
Conference, QEST 2022, Warsaw, Poland, September 12-16, 2022, Proceedings.
Lecture Notes in Computer Science, vol. 13479, pp. 217–231. Springer (2022).
10.1007/978-3-031-16336-4_11,
<https://doi.org/10.1007/978-3-031-16336-4_11>
huang2018learning
Huang, J., Wu, F., Precup, D., Cai, Y.: Learning safe policies with expert
guidance. Advances in Neural Information Processing Systems 31
(2018)
jansen2018shielded
Jansen, N., Könighofer, B., Junges, S., Bloem, R.: Shielded decision-making
in mdps. arXiv preprint arXiv:1807.06096 (2018)
kim2020safe
Kim, Y., Allmendinger, R., López-Ibáñez, M.: Safe learning and
optimization techniques: Towards a survey of the state of the art. In:
International Workshop on the Foundations of Trustworthy AI Integrating
Learning, Optimization and Reasoning. pp. 123–139. Springer (2020)
kumar2019stabilizing
Kumar, A., Fu, J., Soh, M., Tucker, G., Levine, S.: Stabilizing off-policy
q-learning via bootstrapping error reduction. Advances in Neural Information
Processing Systems 32 (2019)
kumar2020conservative
Kumar, A., Zhou, A., Tucker, G., Levine, S.: Conservative q-learning for
offline reinforcement learning. Advances in Neural Information Processing
Systems 33, 1179–1191 (2020)
lawler1966branch
Lawler, E.L., Wood, D.E.: Branch-and-bound methods: A survey. Operations
research 14(4), 699–719 (1966)
levine2020offline
Levine, S., Kumar, A., Tucker, G., Fu, J.: Offline reinforcement learning:
Tutorial, review, and perspectives on open problems. arXiv preprint
arXiv:2005.01643 (2020)
liu2020robust
Liu, A., Shi, G., Chung, S.J., Anandkumar, A., Yue, Y.: Robust regression for
safe exploration in control. In: Learning for Dynamics and Control. pp.
608–619. PMLR (2020)
mason2017assured
Mason, G.R., Calinescu, R.C., Kudenko, D., Banks, A.: Assured reinforcement
learning with formally verified abstract policies. In: 9th International
Conference on Agents and Artificial Intelligence (ICAART). York (2017)
mcewen2014recurring
McEwen, A.S., Dundas, C.M., Mattson, S.S., Toigo, A.D., Ojha, L., Wray, J.J.,
Chojnacki, M., Byrne, S., Murchie, S.L., Thomas, N.: Recurring slope lineae
in equatorial regions of Mars. Nature geoscience 7(1), 53–58
(2014). 10.1038/ngeo2014
mnih2013playing
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra,
D., Riedmiller, M.: Playing atari with deep reinforcement learning. arXiv
preprint arXiv:1312.5602 (2013)
mnih2015human
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G.,
Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., et al.:
Human-level control through deep reinforcement learning. nature
518(7540), 529–533 (2015)
moldovan2012safe
Moldovan, T.M., Abbeel, P.: Safe exploration in markov decision processes.
arXiv preprint arXiv:1205.4810 (2012)
openaibaselinesdqn
OpenAI: Stable baselines version 3 - dqn,
<https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html>
pham2018optlayer
Pham, T.H., De Magistris, G., Tachibana, R.: Optlayer-practical constrained
optimization for deep reinforcement learning in the real world. In: 2018 IEEE
International Conference on Robotics and Automation (ICRA). pp. 6236–6243.
IEEE (2018)
prakash2019improving
Prakash, B., Khatwani, M., Waytowich, N., Mohsenin, T.: Improving safety in
reinforcement learning using model-based architectures and human
intervention. In: The Thirty-Second International Flairs Conference (2019)
sharma2013verification
Sharma, R., Gupta, S., Hariharan, B., Aiken, A., Nori, A.V.: Verification as
learning geometric concepts. In: International Static Analysis Symposium. pp.
388–411. Springer (2013)
siegel2020keep
Siegel, N.Y., Springenberg, J.T., Berkenkamp, F., Abdolmaleki, A., Neunert, M.,
Lampe, T., Hafner, R., Heess, N., Riedmiller, M.: Keep doing what worked:
Behavioral modelling priors for offline reinforcement learning. arXiv
preprint arXiv:2002.08396 (2020)
abstractDomains
Singh, G., Püschel, M., Vechev, M.: A practical construction for
decomposing numerical abstract domains. Proc. ACM Program. Lang.
2(POPL) (dec 2017). 10.1145/3158143,
<https://doi.org/10.1145/3158143>
stooke2020responsive
Stooke, A., Achiam, J., Abbeel, P.: Responsive safety in reinforcement learning
by pid lagrangian methods. In: International Conference on Machine Learning.
pp. 9133–9143. PMLR (2020)
sui2015safe
Sui, Y., Gotovos, A., Burdick, J., Krause, A.: Safe exploration for
optimization with gaussian processes. In: International conference on machine
learning. pp. 997–1005. PMLR (2015)
tessler2018reward
Tessler, C., Mankowitz, D.J., Mannor, S.: Reward constrained policy
optimization. arXiv preprint arXiv:1805.11074 (2018)
urpi2021risk
Urpí, N.A., Curi, S., Krause, A.: Risk-averse offline reinforcement
learning. arXiv preprint arXiv:2102.05371 (2021)
wachi2018safe
Wachi, A., Sui, Y., Yue, Y., Ono, M.: Safe exploration and optimization of
constrained mdps using gaussian processes. In: Proceedings of the AAAI
Conference on Artificial Intelligence. vol. 32 (2018)
watkins1992q
Watkins, C.J., Dayan, P.: Q-learning. Machine learning 8(3),
279–292 (1992)
wimmer2013high
Wimmer, R., Jansen, N., Vorpahl, A., Ábrahám, E., Katoen, J.P., Becker,
B.: High-level counterexamples for probabilistic automata. In: International
Conference on Quantitative Evaluation of Systems. pp. 39–54. Springer (2013)
wu2019behavior
Wu, Y., Tucker, G., Nachum, O.: Behavior regularized offline reinforcement
learning. arXiv preprint arXiv:1911.11361 (2019)
xu2022constraints
Xu, H., Zhan, X., Zhu, X.: Constraints penalized q-learning for safe offline
reinforcement learning. In: Proceedings of the AAAI Conference on Artificial
Intelligence. vol. 36, pp. 8753–8760 (2022)
zhou2018safety
Zhou, W., Li, W.: Safety-aware apprenticeship learning. In: International
Conference on Computer Aided Verification. pp. 662–680. Springer (2018)
§ APPENDIX
§.§ Minimal Counterexamples Generation
Counterexample of the Safety Specification: The solution of the following optimisation problem (adapted from <cit.>) is a minimal counterexample of the safety specification P_≤λ [F unsafe]:
minimise  -1/2 ω_0 p_s_0 + ∑_ℓ∈ L ω(ℓ) x_ℓ ,  such that
p_s_0 > λ
∀ s ∈ T. p_s = 1
∀ s ∈ S ∖ T. ∑_a ∈ P(s) π_s, a≤ 1
∀ s ∈ S ∖ T. p_s≤∑_a ∈ P(s) π_s, a
∀ s ∈ S ∖ T, ∀ a ∈ A, ∀ℓ∈ L(s, a, s'). p_s, a, s'≤ x_ℓ
∀ s ∈ S ∖ T, ∀ a ∈ A, p_s, a, s'≤ P(s,a,s') · p_s'
∀ s ∈ S ∖ T, ∀ a ∈ A. p_s≤ (1 - π_s, a) + ∑_s' : P(s,a,s')>0 p_s, a, s'
∀ (s, a) ∈ P_T^Prob . π_s, a = ∑_ℓ∈ L(s, a, s') x_ℓ
∀ (s, a) ∈ P_T^Prob ∀ℓ∈ L(s, a, s'). r_s < r_s' + (1 - x_ℓ)
where S is the state space of the model, T is the set of states labelled as unsafe, p_s represents the probability of reaching any state in T from state s, π_s,a∈{0,1} indicates that action a is selected in state s, p_s,a,s' represents the probability contribution of the transition from s to s' via action a when this transition is selected to be part of the counterexample, ℓ∈ L(s,a,s') is the label identifying a transition from s to s' via action a (not to be confused with the function L labelling states of the model) such that x_ℓ=1 iff the transition is included in the counterexample and x_ℓ=0 otherwise, and P_T^Prob represents the set of problematic state-action pairs, i.e., those from which the minimal probability of reaching the target states is zero while the maximal probability of reaching them is non-zero.
Intuitively, the optimisation problem aims to find a policy π that will make the agent violate the safety specification within the smallest counterexample sub-model, composed of all the transitions (s,a,s') from the original model whose corresponding x_l is 1.
To this end, eq. <ref> requires the probability of reaching an unsafe state from the initial state s_0 to be >λ (violation of the safety specification); eq. <ref> fixes p_s=1 for all the unsafe states; eq. <ref> imposes that the agent selects at most one action a in each state s; if no action is chosen in a state s, eq. <ref> ensures that p_s=0. Other constraints ensuring the minimal size and preventing deadlock loops are also defined <cit.>. We further specialise the objective based on the approach in <cit.> by setting the weights ω(ℓ) of the selector variables x_ℓ to one minus the normalised Q-values corresponding to the state-action pair in ℓ, while ω_0 > max{ω(ℓ) | ∀ℓ∈ L_c ∧ ω(ℓ) > 0 }. With this weighting, we encourage the selection of labels with larger Q-value, i.e., corresponding to the violating behaviours most likely to be selected by the agent, while at the same time minimising the size of the sub-model. Eq. <ref> ensures that the contribution of the transition (s,a,s') is 0 if the transition is not included in the counterexample; otherwise, by eq. <ref>, p_s,a,s' is bounded by the probability of the corresponding transition in the model, P(s,a,s'), times the probability p_s' of reaching the target from s'. Eq. <ref> ensures that if action a is selected in state s, the probability of reaching T from s is bounded by the sum of the probabilities of reaching T from its successors given a. Finally, two additional constraints are defined in <cit.> to prevent the agent from getting stuck in an infinite loop that would prevent it from reaching T.
Compared to the general solution in <cit.>, for the case of tabular Q-Learning we heuristically specialise the objective function by assigning to the selection of a state-action pair (s,a) a cost proportional to -Q̅(s,a), where Q̅(s,a) is the normalised average Q-value of a over the concrete states represented by the abstract state. Because the minimal-size counterexample is, in general, not unique, this additional cost prioritises the inclusion of actions with larger Q-value, i.e., actions most likely to be selected by the agent, while at the same time minimising the size of the sub-model.
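To make the structure of the optimisation problem concrete, the following is a simplified gurobipy sketch of its core constraints (violation of the threshold λ, unsafe states, single action selection, and the flow of reachability probabilities). It is only a sketch: it assumes the abstract MDP is given as a small nested transition dictionary, it replaces the strict inequality with a small margin, and it omits the ω_0 p_s_0 objective term, the problematic state-action pairs, and the loop-breaking constraints discussed above.

```python
import gurobipy as gp
from gurobipy import GRB

def minimal_counterexample(P, s0, unsafe, lam, weights, margin=1e-6):
    """P: dict mapping (s, a) -> list of (s_next, prob) of the abstract MDP.
    unsafe: set of abstract states labelled unsafe; lam: safety threshold.
    weights: dict mapping labels (s, a, s_next) -> one minus normalised Q-value."""
    states = {s for (s, _) in P} | {t for trs in P.values() for (t, _) in trs}
    m = gp.Model("cex")
    p = {s: m.addVar(lb=0.0, ub=1.0) for s in states}
    pi = {(s, a): m.addVar(vtype=GRB.BINARY) for (s, a) in P}
    x = {(s, a, t): m.addVar(vtype=GRB.BINARY)
         for (s, a), trs in P.items() for (t, _) in trs}
    pc = {lbl: m.addVar(lb=0.0, ub=1.0) for lbl in x}          # per-transition contribution

    m.addConstr(p[s0] >= lam + margin)        # the sub-model violates P<=lam[F unsafe]
    for s in unsafe:
        m.addConstr(p[s] == 1.0)
    for s in states - unsafe:
        acts = [a for (ss, a) in P if ss == s]
        if not acts:
            continue
        m.addConstr(gp.quicksum(pi[s, a] for a in acts) <= 1)  # at most one action per state
        m.addConstr(p[s] <= gp.quicksum(pi[s, a] for a in acts))
        for a in acts:
            trs = P[s, a]
            m.addConstr(p[s] <= (1 - pi[s, a]) +
                        gp.quicksum(pc[s, a, t] for (t, _) in trs))
            for (t, prob) in trs:
                m.addConstr(pc[s, a, t] <= x[s, a, t])          # only if included in the CEX
                m.addConstr(pc[s, a, t] <= prob * p[t])

    m.setObjective(gp.quicksum(weights.get(lbl, 1.0) * x[lbl] for lbl in x), GRB.MINIMIZE)
    m.optimize()
    return [lbl for lbl in x if x[lbl].X > 0.5]   # transitions forming the counterexample
```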
§.§ On the Convergence of the Abstraction and Learning Processes
In this section we will discuss the main convergence aspects of the proposed abstraction method and of the alternating online-offline learning process.
Convergence of the abstraction.
The core of the proposed abstraction of safety-relevant aspects of the explored concrete state space revolves around the solution of a minimal red-blue set coverage by means of polyhedral predicates (restricted to hyperboxes in this work), as introduced in sec. <ref>. We aim to discuss how an abstraction up to an arbitrary accuracy will almost surely eventually be constructed. We recall that the set cover problem allows only the overapproximation of unsafe regions, with unsafe concrete points always mapped to unsafe abstract states, while safe concrete points may possibly be misclassified as unsafe with a maximum prescribed false positive rate.
For simplicity, let us focus the discussion on learning for MDPs with discrete state space. Because we allow hyperboxes of minimum size d>0, a continuous space is implicitly discretised, with every sample from a continuous space included within a box of size d or larger.
Proposition 1. Every reachable concrete state will eventually be reached with probability 1 during online exploration; in particular, every reachable unsafe state will eventually be reached.
This follows from the fact that the online exploration of the environment is never strictly limited by the use of counterexample guidance. Rather, offline phases aim at reducing the relative Q-value of actions that may eventually lead to the violation of the safety requirement, thus reducing the likelihood of their selection during online learning. Because the agent is always allowed, with a controllable probability, to explore any action from the current state, every reachable state maintains a strictly positive probability of being reached.
Proposition 2. Every reachable unsafe state will eventually be mapped to an unsafe abstract state.
Proposition <ref> follows from Proposition <ref> and the preservation of the safety invariant, which ensures unsafe concrete states are always mapped to unsafe abstract states. Finally, restated from sec. <ref>:
Proposition <ref> relies on a PAC learnability argument. At any time during the exploration, the maximal explored state space is bounded by the most extreme states that have been explored.
During exploration, the agent can randomly select an action among those available in the current state with probability ϵ_QL > 0. This exploration probability may change over time, but it should always be strictly larger than zero so that every state-action pair can be selected infinitely often, which is required for the convergence of Q-Learning (cf. sec. <ref>). Let the values that ϵ_QL can take during exploration be bounded from below by a strictly positive constant. Then, given this lower bound and the concrete MDP's transition relation, the probability of visiting any (reachable) concrete state is bounded from below by a value p_QL.
Recalling Theorem 2.1 in <cit.>, if a learning concept L has a finite Vapnik–Chervonenkis (VC) dimension, and if L is consistent with a uniform sample of size max( (4/u̅) log(2/δ), (8 VC/u̅) log(13/u̅) ), then the prediction error of the learning concept L can be limited to a prescribed u̅ ∈ (0, 1] with a probability of at least 1-δ.
The expressiveness of L, and hence the VC dimension of the learning concept, is determined by the abstraction domain. For the set coverage learning concept in def. <ref>, the general VC dimension of a single linear predicate of the form C_i = { x | ω·x + b ≤ 0 } is finite. The actual VC dimension for a specific abstract domain and scenario can be easily verified, e.g., the VC dimension of an axis-parallel box is v=4.
According to Lemma 3.2.3 in <cit.>, the VC dimension of a union set C of convex polygons C_i is less than 2vs log(3s), for all s ≥ 1, where s is the number of sets in C. Hence, the required sample size to limit the abstraction error u ≤ u̅ for the set cover solution C is max( (4/u̅) log(2/δ), (16vs log(3s)/u̅) log(13/u̅) ). By conservatively underapproximating the number of samples, considering that each concrete state has probability at least p_QL of being sampled, we can conclude that an abstraction with misclassification error less than a prescribed u̅ can eventually be learned with arbitrary probability 1 - δ.
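For intuition about how conservative these bounds are, the short computation below evaluates the sample-size expression for illustrative values of the misclassification tolerance u̅, the confidence δ, the box VC dimension v = 4, and the number of boxes s; the specific numbers are assumptions chosen only for illustration.

```python
import math

def required_samples(u_bar, delta, v=4, s=10):
    """Conservative PAC sample-size bound from the text:
    max( (4/u) log(2/delta), (16*v*s*log(3*s)/u) log(13/u) )."""
    term1 = (4.0 / u_bar) * math.log(2.0 / delta)
    term2 = (16.0 * v * s * math.log(3.0 * s) / u_bar) * math.log(13.0 / u_bar)
    return max(term1, term2)

# e.g. tolerate a 5% misclassification error with 95% confidence
# for a cover made of s = 10 axis-parallel boxes (v = 4):
print(round(required_samples(u_bar=0.05, delta=0.05)))
```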
These upper bounds are typically conservatively above the actual number of sampled paths required for most practical scenarios and mainly aim at ensuring the asymptotic convergence of the abstraction process. In sec. <ref>, we will demonstrate empirically on some of the experimental environments how the actual abstraction converges to a prescribed maximum misclassification error for different configuration hyperparameters.
Caveats and practical limitations.
Because our abstract domain uses boxes (with sides parallel to the axes of the state space domain), it is likely to misclassify safe regions when their boundaries cannot be covered exactly with a union of boxes. In general, a similar argument can be formulated for any finite-accuracy abstract domain. The corner case of this abstraction is that small safe regions located between two unsafe ones placed at a distance, along any dimension, smaller than the minimum box size d may remain misclassified even if the agent happens to sample a concrete point that could discriminate them. In turn, if the optimal policy requires passing through one such small misclassified region, counterexample guidance could reduce the likelihood of the agent exploring it – but never entirely prevent its exploration. In practice, the problem can be mitigated by choosing a smaller value of d, or by using an abstract domain with a more appropriate performance/accuracy trade-off for the problem at hand, e.g., octagons or polyhedra.
Convergence of the online-offline learning process.
The main aim of offline learning is to discourage the re-exploration of policies that lead to violations of the safety requirement by penalising the Q-values of the involved actions. The penalisation of explored unsafe behaviours has the effect of encouraging the exploration of alternative actions in a state, because their Q-values “grow” relative to the Q-values of the penalised actions and are thus more likely to be selected next. An underlying assumption for the stability of the method is that there exists an optimal policy that satisfies the safety requirement. In the following, we will discuss how introducing offline learning does not prevent convergence to such a policy (in fact, as demonstrated experimentally, it accelerates the convergence to it), including sufficient conditions for such convergence to occur. Finally, we will discuss what may happen if all optimal policies violate the safety requirement. Restated from sec. <ref>:
The online learning phases are bound to eventually converge to an optimal policy as long as each state-action pair can be visited infinitely often. The offline phases can decrease the relative likelihood of actions involved with policies leading to a violation of the safety requirement, but never prevent their exploration altogether. As a result, when the online phases converge to an optimal policy which satisfies the safety requirement, and assuming the abstract model converged as well, no further offline phases will be triggered (except for possible occasional false positive triggers from the Bayesian hypothesis test). This happened in all our experiments, where occasional offline phases triggered after the agent converged to mainly exploring policies that satisfy the safety requirement could introduce transient fluctuations in the cumulative reward but do not affect convergence to maximum reward in the long run.
If there exists no maximal-reward policy satisfying the safety requirement, the introduction of offline learning may result in oscillations in the Q-values of the actions involved in the maximal-reward policy discovered by the agent. Such oscillations arise from the fact that the maximal-reward policy the online phase converged to is itself, by hypothesis, a counterexample to the safety requirement. In this situation, our method may still reduce the number of failures during exploration by reducing the frequency at which the agent explores the maximal-reward policy, but it will not prevent the agent from converging to such a maximal-reward policy in the long run. (Notice that if an alternative policy that achieves the same expected reward while satisfying the safety requirement existed, the introduction of offline learning would encourage its discovery, as discussed in Proposition <ref>; however, we are here assuming such a policy does not exist.) In this situation, the designer has to accept the need to relax the safety requirement or decide whether to privilege reward over safety, which can be obtained by, e.g., limiting the number of times the same counterexample can be used in an offline phase. This situation is mentioned for the sake of completeness, but it falls outside the scope of this paper, where it is assumed that a maximal-reward policy that satisfies the safety requirement exists and its learning is accelerated via counterexample guidance.
§.§ Empirical Evaluation of the Abstraction and Sensitivity to Hyperparameters
In this section, we provide supplemental details about the evaluation of the abstraction and about the sensitivity of the quality of the abstraction and of the learning outcomes to the hyperparameters. We first provide the sets of additional hyperparameters used in the baseline QL/DQN and in our method to ease reproducibility. Then, we illustrate the concept of incremental abstraction, starting with a concrete example that demonstrates how it functions and its influence on the reinforcement learning process. Next, we analyse the robustness of our learning results with respect to hyperparameter tuning. This analysis is twofold, focusing on both online- and offline-related hyperparameters, and examines how these parameters affect the quality of the abstraction model as well as the overall learning performance.
Additional Hyperparameters
All common hyper-parameters used with QL, DQN and Q-CEX are the same as those listed in tab. <ref>. For DQN-specific additional hyper-parameters, we use the default values given in <cit.>. For QL-LCRL-specific additional hyper-parameters, we use the same values as <cit.>.
Incremental Geometric Abstraction. When new unsafe states are discovered during online exploration in regions that were previously abstracted as safe, the abstract state space is updated incrementally using a branch-and-bound strategy to separate the new unsafe points. We demonstrate this process using the FrozenLake8x8 environment as an example in fig. <ref>. The initial abstract MDP used for counterexample generation after the first 50 online episodes is shown in fig. <ref>. As the number of explored points increases, more safety-relevant information is acquired, and we employ the branch-and-bound method incrementally to cover the newly discovered unsafe states and to refine the safe states adjacent to the newly discovered unsafe regions, as shown in fig. <ref>. The ground-truth layout is included in fig. <ref> for reference, with the unsafe regions indicated.
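A minimal sketch of the refinement step is given below: when an unsafe concrete point falls inside a box currently labelled safe, the box is split along one axis so that the point is isolated in a sub-box of minimum size, which is then relabelled unsafe. This is a simplified illustration only (the actual implementation re-solves the coverage problem and re-merges states), and all names and the single-axis split are assumptions of the sketch.

```python
import numpy as np

def split_box(box, point, min_size=1.0):
    """Split an axis-aligned box (lo, hi) so that `point` ends up in its own
    sub-box of side >= min_size; only that sub-box is relabelled unsafe."""
    lo, hi = np.asarray(box[0], float), np.asarray(box[1], float)
    axis = int(np.argmax(hi - lo))             # branch on the widest axis
    if hi[axis] - lo[axis] < 2 * min_size:
        return [((lo, hi), "unsafe")]          # too small to split: conservatively unsafe
    cut_lo = np.clip(point[axis] - min_size / 2, lo[axis], hi[axis] - min_size)
    cut_hi = cut_lo + min_size
    pieces = []
    for a, b in [(lo[axis], cut_lo), (cut_lo, cut_hi), (cut_hi, hi[axis])]:
        if b - a <= 0:
            continue
        new_lo, new_hi = lo.copy(), hi.copy()
        new_lo[axis], new_hi[axis] = a, b
        label = "unsafe" if a <= point[axis] < b else "safe"
        pieces.append(((new_lo, new_hi), label))
    return pieces
```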
Robustness of CEX-guided RL against hyperparameter tuning. We assessed the robustness of our proposed method with respect to hyperparameter tuning, focusing on two main aspects: the quality of the resulting abstraction model and the learning performance, measured by the accumulated safety rate and the average rolling reward.
To evaluate the quality of the simulation model, we present results for varying false positive rates (FPR) and values of ϵ for the ϵ-bisimulation merging in both the FrozenLake8x8 (fig. <ref>) and MarsRover (fig. <ref>) environments. A lower FPR results in a more precise abstraction, which includes fewer safe concrete states in the unsafe abstract states (in environments like MarsRover, safe concrete states cannot be completely excluded with boxes).
Smaller ϵ values lead to less merging of similar states in the abstract state space. We recall the hierarchical ϵ-simulation concept, demonstrating that its inherent structure provides a beneficial trade-off between computational complexity and accuracy in proximity to unsafe states. Because the merge operation is constrained to states with identical labels and follows a hierarchical fashion (i.e., states from lower levels merge into higher ones, with transition probabilities at the higher level normalised by the sum from merged lower-level states), a lesser degree of merge due to ϵ-simulation is anticipated in the vicinity of unsafe states.
This hierarchical process indirectly optimises the resolution in the state space where it is most crucial, specifically near unsafe states, while permitting a coarser merge in safer regions. Such non-uniform merging fosters a more efficient balance between the computational complexity of the counterexample generation optimisation problem, which depends on the number of states in the abstraction, and the effectiveness of the counterexamples in accurately selecting concrete transitions from the agent's past experience near unsafe concrete regions.
Intuitively, given a specific ϵ, the final minimised abstraction is expected to be more precise near states labeled as unsafe due to the merge operation's restriction to adjacent abstract states. In these areas, a limited number of steps and corresponding traversal of adjacent states suffices to differentiate the trajectory outcome under consideration. In contrast, a coarser abstraction is produced when more steps are required to determine the trajectory's outcome. This characteristic of the hierarchical ϵ-simulation provides a tailored abstraction mechanism that varies with the safety of the region, enhancing the effectiveness of the abstract model.
Regarding learning performance, we demonstrate in tab. <ref> that the average accumulated safety rate and the average rolling reward are robust under different hyperparameters, by varying the penalty and the number of offline episodes for each simulation model during the offline learning phase, and by varying the Bayes factor and the safety check interval during online exploration.
To estimate the probability of violating the safety requirement and trigger offline learning when necessary, we utilise a Bayesian hypothesis testing estimator <cit.> with the Bayes factor as a hyperparameter. This estimator can be initialised with a minimum number of samples prior to accepting its decision, and then determines whether the safety requirement is satisfied or violated based on the samples collected during the most recent online learning session. If the likelihood of violating the safety requirement significantly surpasses that of satisfying it, the offline learning phase is initiated.
Within the online learning session, we assess the Bayes factor of these two hypotheses, represented as P(H_0|S)/P(H_1|S), using the real-time experience dataset. The magnitude of the chosen Bayes factor threshold determines how frequently offline learning is triggered. This setting engenders a trade-off between safety and computational cost: a higher trigger frequency leads to an increased computational cost. A high-frequency setting may also induce an over-conservative learning performance; this happens when excessive offline learning guidance inadvertently penalises exploration that entails a reasonable level of risk. In corner cases with very small Bayes factor thresholds, the agent may opt to explore solely safe regions, failing to achieve the learning objective.
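As an illustration of this trigger mechanism, the sketch below computes the posterior odds of "the failure probability exceeds λ" against "it does not" from the most recent episode outcomes using a Beta-Bernoulli model; the uniform Beta(1,1) prior and the example numbers are assumptions made for illustration, while the minimum-sample guard mirrors the 50-sample requirement mentioned earlier.

```python
from scipy.stats import beta

def offline_trigger(n_episodes, n_failures, lam, bayes_threshold=10.0,
                    min_samples=50, prior=(1.0, 1.0)):
    """Posterior-odds test for 'failure probability > lam' vs '<= lam' under a
    Beta-Bernoulli model; returns True when offline learning should be triggered."""
    if n_episodes < min_samples:
        return False                           # not enough evidence yet
    a = prior[0] + n_failures
    b = prior[1] + (n_episodes - n_failures)
    p_violate = beta.sf(lam, a, b)             # P(theta > lam | data)
    p_satisfy = beta.cdf(lam, a, b)            # P(theta <= lam | data)
    return p_violate / max(p_satisfy, 1e-12) > bayes_threshold

# e.g. 9 failures observed in the last 60 episodes against lambda = 0.1:
print(offline_trigger(60, 9, lam=0.1))
```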
|
http://arxiv.org/abs/2307.03915v1 | 20230708063742 | Galaxy-dark matter connection of photometric galaxies from the HSC-SSP Survey: Galaxy-galaxy lensing and the halo model | [
"Navin Chaurasiya",
"Surhud More",
"Shogo Ishikawa",
"Shogo Masaki",
"Daichi Kashino",
"Teppei Okumura"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
Galaxy-dark matter connection of photometric galaxies from the HSC-SSP Survey: Galaxy-galaxy lensing and the halo model
Navin Chaurasiya, Surhud More, Shogo Ishikawa, Shogo Masaki, Daichi Kashino, Teppei Okumura
August 12, 2023
=======================================================================================
We infer the connection between the stellar mass of galaxies from the Subaru Hyper Suprime-Cam (HSC) survey and their dark matter halo masses, and its evolution, in two redshift bins between [0.3, 0.8]. We use measurements of the weak gravitational lensing signal of the galaxies using background galaxies from the Year 1 catalog of galaxy shapes from the HSC survey. We bin the galaxies in stellar mass with varying thresholds ranging from 8.6 ≤log[ M_*/(h^-2M_⊙)] ≤ 11.2 and use stringent cuts in the selection of source galaxies to measure the weak lensing signal. We present systematic and null tests to demonstrate the robustness of our measurements. We model these measurements of the weak lensing signal together with the abundance of galaxies in the halo occupation distribution framework. For each stellar mass threshold bin, we obtain constraints on the halo occupation parameters of central galaxies, M_min and σ_log M, which correspond to the halo mass at which central galaxies in the threshold sample reach half occupancy and its scatter, respectively, along with parameters that describe the occupation of the satellite galaxies. The measurements of abundance and weak lensing individually constrain different degeneracy directions in the M_min-σ_log M plane, thus breaking the degeneracy in these parameters. We demonstrate that the weak lensing measurements are best able to constrain the average halo masses of central galaxies. We compare our measurements to those obtained using the abundance and clustering of these galaxies, as well as to subhalo abundance matching measurements, and demonstrate qualitative agreement. We find that the galaxy-dark matter connection does not vary significantly between the redshift bins we explore in this study. Uncertainties in the photometric redshifts of the lens galaxies imply that more efforts are required to understand the true underlying stellar mass-halo mass relation of galaxies and its evolution over cosmic epochs.
galaxies: evolution – galaxies: haloes – (cosmology:) large-scale structure of Universe - gravitational lensing: weak - cosmology: observations
§ INTRODUCTION
In the standard cosmological model, structure formation in the Universe is governed by the interplay between dark matter, which enhances overdensities of matter distribution, and dark energy, which acts to hinder such growth. Dark matter halos form the basic unit of the large scale structure, and their abundance is highly sensitive to this interplay between the cosmological parameters <cit.>. The formation and evolution of galaxies in dark matter halos is a result of complex astrophysical processes related to the formation and evolution of stars, its effect on the gas, the feedback from supermassive black holes at their centers, as well as, the mergers of galaxies <cit.>. Direct inference of the connection between dark matter halos and galaxies is thus important to understand these astrophysical processes <cit.>. In turn, an accurate determination of this connection can help in the inference of cosmological parameters <cit.>.
The stellar mass contained within galaxies reflects the integrated star formation efficiency of dark matter halos of various masses. It is now well established that the star formation efficiency of halos peaks around intermediate mass halos of around 10^12 <cit.> and halos on either side of this are less efficient due to various forms of feedback associated with star formation at the low mass end and supermassive black holes at the high mass end. The evolution of the stellar mass-halo mass relation can thus provide insights into how this star formation efficiency changes with time <cit.>.
Various observational techniques have been used to probe the dark matter halos of galaxies. One of the techniques that directly probes the halo masses beyond a few tens of kpc is the inference of masses using the kinematics of satellite galaxies in dark matter halos <cit.>. Satellite kinematics, however, has to rely on assumptions about virial equilibrium, the anisotropy of the velocity dispersion of the orbits of satellite galaxies in dark matter halos, the velocity bias which could arise from differences between the distribution of matter and that of satellite galaxies, and an accurate determination of the interloper galaxies which could masquerade as satellites. Indirect techniques such as subhalo abundance matching <cit.> instead rely on the ansatz of a monotonic relation between the stellar mass and halo masses of galaxies, along with a scatter, in addition to a fixed set of cosmological parameters which determines the (sub)halo abundances. The technique of matching these abundances to the abundance of galaxies measured as the stellar mass function allows an inference of the stellar mass-halo mass relation <cit.>. The clustering of galaxies on large scales can also indirectly provide information about this relation <cit.> by utilizing the dependence of the large-scale bias of halos on their mass <cit.>.
The weak gravitational lensing signal <cit.> of galaxies provides another direct method to constrain the galaxy-dark matter connection. According to the general theory of relativity, an overdensity of matter warps spacetime in its vicinity in a manner that distorts light bundles from distant background sources traveling toward us. In its weak form, gravitational lensing causes coherent tangential distortions in the shapes of such background galaxies. The distortion in the shape of a single galaxy due to weak lensing is quite small and difficult to disentangle from the intrinsic elliptical shape of its isophotes. A statistical averaging of the shapes of many such background galaxies gets rid of the uncorrelated intrinsic shapes of galaxies and allows the measurement of the coherent shear imprinted on the background galaxies due to weak lensing. Measurements of the shapes of galaxies from ground-based imaging data are challenging (see e.g., <cit.>), as atmospheric light propagation and the telescope optics can also corrupt the measurements of shapes of galaxies. A number of tests need to be conducted for residual systematics in weak lensing measurements, but once modelled, the weak lensing signal can also provide constraints on the stellar mass-halo mass relation of galaxies <cit.>.
A number of ongoing weak lensing surveys cover large areas of the sky with excellent quality imaging in order to map out the dark matter distribution in the Universe. The Dark Energy Survey (DES)[<http://darkenergysurvey.org>], the Kilo Degree Survey (KiDS)[<http://kids.strw.leidenuniv.nl>], and the Subaru Hyper Suprime-Cam survey (HSC)[<http://hsc.mtk.nao.ac.jp/ssp>] have covered areas that range from 1000 to 5000 sq. degrees in this pursuit. Amongst these, the HSC survey is the deepest and thus allows us to carry out studies of the evolution of the connection between galaxies and their dark matter halos that extend over a wide range of stellar masses. In this paper, we use galaxies from the HSC survey, along with their stellar mass and photometric redshift estimates derived from their photometry, in order to infer the stellar mass-halo mass relation in two redshift bins, [0.30-0.55] and [0.55-0.80].
In recent works, <cit.> and <cit.>, the clustering and abundance of galaxies have been used to constrain the galaxy-dark matter connection of the same sample of galaxies. The former of these studies models its measurements of the clustering signal using an analytical halo occupation distribution (HOD) framework, while the latter uses a modification of the traditional subhalo abundance matching method in order to explain the same observables. These different methodologies can explain the measurements equally well, even though they may not agree on the prescription of how galaxies occupy their dark matter halos and thus predict a different weak lensing signal. Our weak lensing signal (hereafter, WLS) measurement can thus be used as a discriminant for such theoretical models and the assumptions that they rely on.
This paper is organised as follows: We describe the lens and source data in section <ref>. Sec. <ref> describes the abundance data we use to constrain our HOD model and to study the impact of abundances on scaling relations. The formalism of stacked weak lensing signal computations and tests of survey systematics have been detailed in sec. <ref>. We summarise our theoretical HOD modelling formalism and model fitting details in sec. <ref>. Results and inferences are discussed in sec. <ref> and previous studies employing the same datasets have been compared in sec. <ref>. We finally discuss the issues and challenges associated with photometric datasets in inferring galaxy-halo connections and possible future directions of improvements in sec. <ref> and present the summary of the results from this paper in sec. <ref>.
In this paper, we assume a standard 6-parameter flat ΛCDM cosmology with cosmological parameters set by cosmic microwave background observations <cit.>. We use (Ω_m, Ω_Λ, Ω_b, σ_8, n_s, h) = (0.309, 0.691, 0.049, 0.816, 0.967, 0.677), where Ω_m, Ω_Λ, Ω_b denote the matter, dark energy and baryonic density with respect to the critical density of the Universe, σ_8 is related to the variance of density fluctuations on a scale of 8 h^-1 Mpc, n_s is the power-law index of the power spectrum of density fluctuations on large scales, and h is the dimensionless Hubble parameter given by h = H_0/ 100 kms^-1 Mpc^-1. All distances are measured in comoving units of h^-1 Mpc, and stellar and halo masses are expressed in units of h^-2 M_⊙ and h^-1 M_⊙, respectively. Throughout the paper, we use log to denote 10-based logarithms.
§ DATA
§.§ HSC-SSP survey
The Hyper Suprime-Cam instrument <cit.> is a wide-field imaging camera (1.5 deg FoV diameter) mounted at the prime focus of the 8.2m Subaru Telescope located at the summit of Mauna Kea in Hawaii. The Hyper Suprime-Cam survey, a Subaru Strategic Program <cit.>, is a three-layered (wide, deep and ultra-deep), multi-band (grizy plus 4 narrow-band filters) imaging survey. The HSC survey has efficiently imaged ∼ 1200 sq. deg. of the sky in its wide layer, utilizing the excellent seeing conditions at the summit and the large FoV of the camera. The data is processed using a fork of the Rubin LSST science pipelines <cit.>. The processed data from the survey has been released publicly at regular intervals. The measurement of the weak lensing signal requires well calibrated measurements of the shapes of galaxies. In our work, we use the first year shape catalog made public by the HSC survey collaboration to measure the weak lensing signal.
§.§ First year HSC shape catalog
The first year HSC shape catalog is based on an internal data release of the HSC survey (S16A). It consists of wide layer data observed over a period of 90 nights between March 2014 and April 2016. It covers an area of ∼ 140 deg^2 spread over six disjoint fields - HECTOMAP, VVDS, WIDE12H, GAMA15H, GAMA09H, and XMM. The shape measurements are performed in the i-band. Therefore, the imaging in the i-band was carried out when the full width at half maximum (FWHM) of the seeing was better than ∼ 0.8 arcsec. This results in a median i-band seeing FWHM of 0.58 arcsec. The corresponding 5σ point-source depth of the survey is i∼26 averaged over the area covered by S16A.
The resulting data was processed with the HSC pipeline <cit.> and the shape catalog was curated by applying a number of quality flags and several selection criteria as described in <cit.>. The resultant catalog covers an area of ∼ 136.9 deg^2. The shapes of galaxies were measured using a moments based method which corrects for the effects of the PSF using the re-Gaussianization technique <cit.>. The two components of the ellipticities are given by,
(e_1, e_2) = (1-r^2)/(1+r^2) (cos 2ψ, sin 2ψ)
where r denotes the minor-to-major axis ratio and ψ the angle made by the major axis with respect to the equatorial coordinate system.
The final shape catalog consists of galaxies selected from the full-depth full-color region in all five filters. Apart from some basic quality cuts related to pixel-level information, the catalog includes extended objects with an extinction-corrected cmodel magnitude i<24.5, i-band SNR≥ 10, resolution factor R_2≥0.3, a >5σ detection in at least two bands other than i, and a cut on the blendedness of the galaxy in the i-band. This conservative selection of galaxies results in an unweighted (raw) source number density of 24.6 arcmin^-2. When lensing-related weights are taken into consideration, the effective number density of sources is ∼ 21.8 arcmin^-2, with a median redshift of ∼ 0.8 for the sample. The additive (c_1, c_2) and multiplicative biases (m) in the shape measurements, as well as the RMS intrinsic distortion of shapes (e_rms) and the photon noise component (σ_e), were calibrated using detailed image simulations <cit.> with the software GALSIM <cit.>. These image simulations account for survey characteristics such as the variation in depth and seeing. The shape catalog is accompanied by inverse-variance weights w_s for each galaxy, given by
w_s = 1/(σ_e^2 + e_rms^2) .
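As a simple illustration of these definitions, the snippet below converts an axis ratio and position angle into the two distortion components and evaluates the inverse-variance weight; the variable names and the example numbers are ours, and the position angle is assumed to be in radians.

```python
import numpy as np

def distortion(r, psi):
    """Ellipticity components (e1, e2) from the minor-to-major axis ratio r
    and the position angle psi (radians) of the major axis."""
    amp = (1.0 - r**2) / (1.0 + r**2)
    return amp * np.cos(2.0 * psi), amp * np.sin(2.0 * psi)

def shape_weight(sigma_e, e_rms):
    """Inverse-variance weight w_s = 1 / (sigma_e^2 + e_rms^2)."""
    return 1.0 / (sigma_e**2 + e_rms**2)

e1, e2 = distortion(r=0.7, psi=np.deg2rad(30.0))
w = shape_weight(sigma_e=0.25, e_rms=0.4)
```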
The shape catalog satisfies a number of systematics and null tests, with residual systematics at the level of 0.01, sufficient to carry out cosmological analyses with the data.
The shape catalog is also supplemented with six different catalogs of photometric redshifts of galaxies, as inferred by a number of methods, some relying on fitting the photometry using templates, while others use machine learning <cit.>. In our analysis, we use the redshift estimates provided by the MIZUKI code <cit.>, which uses templates of galaxy spectral energy distributions (SEDs) and priors to fit the observed photometry of galaxies. It assumes an exponentially decaying star formation history with a variable decay time scale, along with a solar metallicity for the SED templates. It also assumes that the initial mass function is Chabrier <cit.> and that the dust attenuation is given by <cit.>. Finally, nebular emission lines are also added to the SEDs. In addition to various point estimates (e.g. mean, median, mode, best) and the posterior distribution functions (PDFs) of the redshift for individual galaxies, the code also outputs several physical properties, such as the stellar mass and specific star formation rate of these galaxies. We will use galaxies with reliable photometric redshifts and thus restrict our source galaxy sample to those galaxies which have photoz_risk_best < 0.5.
§.§ Lens galaxies
As our lens galaxies, we will use the galaxy samples presented by <cit.> in their HOD analysis of the clustering of these galaxies. In brief, our sample excludes galaxies centered on pixels at the edge of the photometric images, affected by cosmic rays, or with saturated pixels, using the following flags: flags_pixel_edge, flags_pixel_interpolated_center, flags_pixel_saturated_center, flags_pixel_cr_center, and flags_pixel_bad. We also avoid galaxies with bad fits to the SED models and remove those with χ^2/dof ≥ 3 or photoz_risk_best ≥ 0.1 from our lens sample. In addition to the above cuts already mentioned in I20, we apply the full-depth full-color mask to the lens galaxy sample, to avoid selecting lenses from regions which were not observed in all bands to the nominal depth of the HSC survey. Finally, we also apply the same star mask <cit.> as that applied to the weak lensing shape catalog (S16A), which ensures full overlap of the lens galaxies, spanning 125.7 deg^2 on the sky, with the source catalog.
We will focus on the first two redshift bins presented in I20 and use galaxy samples with 0.30 ≤ z_ best < 0.55 (Bin z_1) and 0.55≤ z_ best < 0.80 (Bin z_2). These subsamples have redshifts that are smaller than the median of the redshifts of the source galaxies we use for the weak lensing signals. This allows us to get better signal-to-noise ratios in our measurements. In order to select lens galaxies that reliably lie in the redshift bins of our interest, we follow <cit.> and exclude galaxies which are within one standard deviation error (as reported by MIZUKI) from the bin edges that define the galaxy samples. The redshift distribution of the samples can be seen in Fig. 2 of I20 and Fig. <ref> after applying additional quality masks as mentioned above.
We will further divide the galaxy samples in each redshift bin using M_*, the median estimate of the stellar mass posterior distribution as provided by MIZUKI. We note that <cit.> uses h=0.7 to convert the h-factors in M_*, and we use the same value to convert stellar mass units whenever required. We construct stellar mass threshold subsamples within each of the redshift bins. Given the flux limit of HSC, we do not use galaxies with stellar masses below 10^8.6 h^-2 M_⊙ and 10^9 h^-2 M_⊙ for the redshift bins z_1 and z_2, respectively. For bin z_1 (z_2) we construct 13 (12) stellar mass threshold subsamples, whose statistics are listed in Table <ref>.
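A sketch of how such lens subsamples can be assembled from a catalog table is given below; the pixel-flag column names are the ones listed above, whereas the names used for the redshift, its uncertainty, the reduced χ^2 and the stellar mass columns are placeholders that would need to be matched to the actual catalog schema.

```python
import numpy as np

def select_lenses(cat, z_min, z_max, logm_thresh):
    """Boolean mask implementing the lens-sample cuts described in the text.
    `cat` is assumed to behave like a structured array / table of columns."""
    good_pixels = ~(cat["flags_pixel_edge"] |
                    cat["flags_pixel_interpolated_center"] |
                    cat["flags_pixel_saturated_center"] |
                    cat["flags_pixel_cr_center"] |
                    cat["flags_pixel_bad"])
    good_fit = (cat["chisq_dof"] < 3.0) & (cat["photoz_risk_best"] < 0.1)
    # drop galaxies whose 1-sigma photo-z interval crosses the bin edges
    in_bin = ((cat["z_best"] - cat["z_err"] >= z_min) &
              (cat["z_best"] + cat["z_err"] < z_max))
    mass_cut = np.log10(cat["mstar"]) >= logm_thresh
    return good_pixels & good_fit & in_bin & mass_cut
```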
§ ABUNDANCE OF GALAXIES
We adopt the measurements of the abundance of galaxies as reported in I20, in order to use consistent abundances while comparing the results of the clustering analysis with those obtained from weak lensing. In their work, I20 compare their estimates of the SMF of photometric MIZUKI-HSC galaxies in bins of MIZUKI stellar masses and redshifts with those obtained using the multi-band, multi-survey data available in the COSMOS/UltraVISTA field over a 1.62 deg^2 sky area with a K_s-band limit of 23.4 mag (90% complete). This allows I20 to infer the completeness of the photometric HSC galaxy sample. They also computed the abundances of MIZUKI galaxies in stellar mass threshold bins[I20 and M13 abundances are available at: <https://github.com/0Navin0/galaxy_halo_connection_in_HSC/tree/main/abundances>].
Abundances of galaxies derived from photometric galaxy catalogs are prone to errors and systematics due to modelling uncertainties in their redshift and stellar mass estimates. These uncertainties are also expected to be correlated: a systematic error which results in a higher (lower) redshift for a galaxy will also end up assigning a higher (lower) stellar mass to that galaxy. Errors in photometric redshifts also potentially translate into errors in the abundance. To reduce the systematics related to photometric redshifts on the abundance estimates, I20 carry out a `trimming' procedure in their section 2.3.2, which removes galaxies with uncertain redshifts at the redshift bin edges. This results in a loss of volume, but can improve the reliability of the lensing measurements by keeping galaxies which have a higher probability of being in a given redshift bin. As the photometric measurement errors and the associated photometric redshift errors are expected to increase for fainter galaxies, this trimming method is nevertheless expected to systematically affect the abundances of fainter galaxies. The comparison with COSMOS/UltraVISTA in I20 is designed to keep a tab on such effects.
In order to study the impact of varying the abundances of galaxies, we will also carry out our analysis using the abundances that we compute from the best fit Schechter function models to the observed SMFs of galaxies from UltraVISTA in <cit.> and label them as M13 abundances[4] .
In their study, M13 provide single and double Schechter fitting functions for the SMFs of galaxies in two redshift bins, [0.20, 0.50) (z^'_1) and [0.50, 1.00) (z^'_2), which are close to our redshift bins z_1 and z_2, respectively. We plot and compare the I20 and M13 abundances as a function of the stellar mass thresholds in Fig. <ref>.
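To illustrate how threshold abundances can be obtained from such Schechter fits, the sketch below integrates a double Schechter stellar mass function above a given threshold; the parameter values shown are placeholders, not the published M13 fits.

```python
import numpy as np
from scipy.integrate import quad

def double_schechter(log_m, log_mstar, phi1, alpha1, phi2, alpha2):
    """dn/dlogM for a double Schechter function (e.g. Mpc^-3 dex^-1)."""
    x = 10.0 ** (log_m - log_mstar)
    return np.log(10.0) * np.exp(-x) * x * (phi1 * x**alpha1 + phi2 * x**alpha2)

def threshold_abundance(log_mlim, params, log_mmax=13.0):
    """Comoving number density of galaxies with log M* >= log_mlim."""
    integrand = lambda log_m: double_schechter(log_m, *params)
    n, _ = quad(integrand, log_mlim, log_mmax)
    return n

# Placeholder Schechter parameters: (log M*, phi1, alpha1, phi2, alpha2).
params = (10.8, 1.0e-3, -0.5, 5.0e-4, -1.5)
print(threshold_abundance(9.0, params))   # abundance above the 10^9 threshold
```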
The abundance of central galaxies is related to the abundance of dark matter halos via their halo occupation distribution. In general, galaxies in a catalog do not necessarily come with a label of central or satellite. Although algorithms to group galaxies together exist, the large errors in photometric redshifts imply that such grouping is increasingly difficult in photometric surveys. Therefore, we use a relatively large 15% error on the abundance of galaxies for both the I20 and M13 abundance measurements, so that they do not excessively drive the constraints on the halo occupation distributions. This has the effect of increasing the effective weight of our lensing signal in driving the halo occupation constraints. As mentioned before, we will explore how the use of the abundance changes the constraints on the stellar mass-halo mass relation we obtain.
§ WEAK GRAVITATIONAL LENSING
Weak gravitational lensing induces statistically coherent, tangential distortions in the shapes of background galaxies due to the intervening matter distribution along the line-of-sight towards the background galaxies. The tangential component of the shear γ_t imparted by an intervening matter distribution is related to its excess surface density (ESD) such that
ΔΣ(R) = Σ(<R) - ⟨Σ⟩ (R) = ⟨γ_t⟩(R) Σ_ crit(z_l, z_s) .
Here Σ(R) is the lens surface mass density at a projected separation R from the lens centre at redshift z_l, Σ(<R) denotes the surface density averaged within a circular aperture R from the lens centre, and ⟨Σ⟩ (R) is the surface density averaged azimuthally at a distance R. The quantity Σ_ crit(z_l, z_s) is a geometrical factor dependent upon the physical angular diameter distances between us (observer) and the lens, D_ a(z_l), between us and the source, D_ a(z_s), and between the lens and the source, D_ a(z_l, z_s), and is given by,
Σ_ crit(z_l, z_s) = c^2/(4π G) D_ a(z_s)/[D_ a(z_l) D_ a(z_l, z_s) (1+z_l)^2] ≡Σ_ crit, ls .
The factor of (1+z_ l)^2 in the denominator corresponds to our use of comoving coordinates. The intrinsic shapes of galaxies contribute to the noise in the determination of this shear from the measured ellipticity of galaxies. Therefore the signal has to be measured statistically by averaging the tangential ellipticity over a large sample of galaxies using weights that yield a minimal variance estimator for ΔΣ. For every lens-source pair, we use the weight w_ ls = w_ s ⟨Σ^-1_ crit, ls⟩^2 while performing this average, where w_ s is the weight due to error in the shape measurement defined in equation (<ref>). The weight w_ ls defined above automatically down-weights lens-source pairs which are separated by a small distance from each other.
We use the full redshift PDF, p(z_s), of each source galaxy and the z_ best estimate of the redshift of each lens galaxy, as provided by the photo-z estimation code, and compute the average of the inverse critical surface mass density, ⟨Σ^-1_ crit, ls⟩, for each lens-source pair, given by,
⟨Σ^-1_ crit, ls⟩ = 4π G (1+z_l)^2/c^2∫_z_l^∞ D_ a(z_l) D_ a(z_l, z_s)/D_ a(z_s) p(z_s) dz_s .
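A possible numerical implementation of this p(z_s)-weighted average, using angular diameter distances from astropy, is sketched below; the cosmology and the toy source redshift posterior are illustrative assumptions.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import constants as const, units as u

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # illustrative cosmology, not the exact one adopted here

def mean_inv_sigma_crit(z_l, z_s_grid, p_zs):
    """p(z_s)-weighted <Sigma_crit,ls^-1> in comoving coordinates, in Mpc^2 / Msun."""
    mask = z_s_grid > z_l                  # only redshifts behind the lens contribute
    z_s = z_s_grid[mask]
    d_l = cosmo.angular_diameter_distance(z_l)
    d_s = cosmo.angular_diameter_distance(z_s)
    d_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)
    prefac = 4.0 * np.pi * const.G * (1.0 + z_l) ** 2 / const.c ** 2
    integrand = (prefac * d_l * d_ls / d_s).to(u.Mpc ** 2 / u.Msun).value * p_zs[mask]
    return np.trapz(integrand, z_s)

# Toy Gaussian photo-z posterior for a single source galaxy.
z_grid = np.linspace(0.0, 3.0, 301)
p_z = np.exp(-0.5 * ((z_grid - 1.0) / 0.2) ** 2)
p_z /= np.trapz(p_z, z_grid)
print(mean_inv_sigma_crit(0.4, z_grid, p_z))
```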
The minimum variance estimator for ΔΣ is given by
ΔΣ(R) = 1/1+m̂( ∑_ ls w_ ls e_ t,ls ⟨Σ^-1_ crit, ls⟩^-1/2ℛ∑_ ls w_ ls - ∑_ ls w_ ls c_ t,ls ⟨Σ^-1_ crit, ls⟩^-1/∑_ ls w_ ls) ,
where e_ t,ls and c_ t,ls are the tangential components of ellipticity and the additive bias for the source galaxy in a lens-source pair, respectively. The quantity m̂ is the sample-averaged multiplicative bias and is given by
m̂ = ∑_ ls w_ ls m_ ls/∑_ ls w_ ls .
The symbol ℛ denotes the ensemble responsivity of the measured distortions to a small shear <cit.> and can be computed using the RMS intrinsic shape distortions e_ rms provided in the catalog as,
ℛ = 1 - ∑_ ls w_ ls e^2_ rms,ls/∑_ ls w_ ls .
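Schematically, the estimator above can be evaluated per radial bin from arrays over the lens-source pairs falling in that bin, as in the following numpy sketch; the inputs are assumed to be precomputed from the shape catalog.

```python
import numpy as np

def delta_sigma_bin(e_t, c_t, w_ls, inv_sc, m, e_rms):
    """Minimum-variance Delta Sigma for one radial bin from its lens-source pairs.

    e_t, c_t : tangential ellipticity and additive bias per pair
    w_ls     : pair weights w_s <Sigma_crit^-1>^2
    inv_sc   : <Sigma_crit^-1> per pair
    m, e_rms : multiplicative bias and RMS intrinsic distortion per pair
    """
    wsum = np.sum(w_ls)
    resp = 1.0 - np.sum(w_ls * e_rms**2) / wsum        # shear responsivity R
    m_hat = np.sum(w_ls * m) / wsum                    # sample-averaged multiplicative bias
    signal = np.sum(w_ls * e_t / inv_sc) / (2.0 * resp * wsum)
    additive = np.sum(w_ls * c_t / inv_sc) / wsum
    return (signal - additive) / (1.0 + m_hat)
```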
In addition, to minimize effects of the uncertainty in the photometric redshifts, we use only those source galaxies which satisfy
∫_z_ l, max + z_ diff^∞ p(z_s) dz_s > 0.99 ,
where z_ l, max is the maximum redshift in a lens galaxy sample, and we use z_ diff=0.1 in our work. This selection implies that based on the posterior of the redshift from the photometry, the source galaxies we use have a >99% probability of having a redshift greater than the farthest galaxy in the lens sample. Thus they are more likely to be true background galaxies. Even after applying this photo-z filter, source galaxies can still be contaminated by structures correlated to lenses if the posteriors p() are biased. Therefore, we will quantify any such contamination by looking for source galaxies clustered with our lens galaxies.
The shape noise of galaxies constitutes a dominant component of the error budget at small separations between the lens and the source, as the number of lens-source pairs at such separations is small. The errors on the weak lensing signal measured around a sample of galaxies in various projected radial bins can be expected to be correlated, as the same source galaxy may be used for the lensing signal around different lens galaxies in the sample. Such covariance between the measurements which arises due to shape noise can be quantified by randomly rotating the source galaxies and measuring the weak lensing signal around the lens galaxies. This preserves the number of pairs but presents a random realization of the source population ellipticities. However, on large scales we also expect covariance due to the large scale structure. The large scale over-densities in which the lens galaxies reside can coherently shift the measurements up or down, leading to a larger covariance on such scales than that expected from shape noise alone.
We account for the above sources of noise together using the jackknife technique, where we divide the full survey area of the lens catalog into 103 rectangular jackknife regions, each having an approximately equal area of ∼ 1.22 deg^2, distributed contiguously in each survey field. We utilize the random catalog of points provided by the HSC survey, which has a uniform density of 100 random objects per square arcminute, and to which we apply exactly the same mask as applied to our lens samples. Throughout this work, the jackknife sub-division of area remains identical for all the subsamples in each redshift bin. We then measure the lensing signals by excluding one region at a time from the entire data. We use these measurements to compute the covariance matrix 𝒞,
C_ij = N-1/N∑ _k=1^N[ ΔΣ(R_i,k) - ΔΣ̅(R_i) ] [ΔΣ(R_j,k)- ΔΣ̅(R_j) ] .
Here the indices i,j both vary from 1 to 10 for the 10 projected radial bins, ΔΣ(R_i,k) is the signal computed in the i^ th projected radial bin with the k^ th jackknife region removed, and the quantity with a bar on top is the average of the jackknife measurements in a particular radial bin.
We also define the cross-correlation matrix of the measurements between the i^ th and the j^ th projected radial bins to be given by
r_ij = 𝒞_ij/√(𝒞_ii𝒞_jj).
The cross-correlation matrix of the measurements for a representative set of stellar mass threshold samples in each of the redshift bins is shown in the different rows of Fig. <ref>. As expected, we see that on small scales the off-diagonal components of this matrix are close to zero; as we approach larger scales, however, neighbouring radial bins show an enhancement in the cross-correlation of their errors.
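The jackknife covariance and cross-correlation matrices defined above can be assembled from the leave-one-region-out signals as in this short sketch.

```python
import numpy as np

def jackknife_covariance(delta_sigma_jk):
    """Covariance and cross-correlation from an (N_regions, N_radial_bins) array
    of leave-one-out Delta Sigma measurements."""
    n_jk = delta_sigma_jk.shape[0]
    mean = delta_sigma_jk.mean(axis=0)
    diff = delta_sigma_jk - mean
    cov = (n_jk - 1.0) / n_jk * diff.T @ diff
    corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
    return cov, corr
```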
Next, we present the results of two null-tests of survey systematics. First, we present the measurement of the weak lensing signal (ΔΣ_ rand) around random points which are distributed in the HSC footprint in the same manner as our lens galaxies. Second, we present the cross-component (ΔΣ_ rand, ×) around the random points and the lens galaxies (ΔΣ_ lens, ×); where ΔΣ_× averages the cross-component of the shear which is the ellipticity induced on circular objects with major/minor axes at 45^∘ compared to the line joining the two galaxies. In the absence of systematics, both these measurements should be consistent with zero within the statistical uncertainty.
In order to measure the signal around random points, ΔΣ_ rand, for a given threshold subsample, we resample the photometric redshifts, z_ best, with replacement from the overall redshift distribution of galaxies in that subsample and assign them to the objects in the random catalog. We follow the procedure described by equations (<ref>) - (<ref>) and compute the tangential component of the weak lensing signal ΔΣ_ rand. We subtract this measured signal from the weak lensing signal around lenses from the true subsamples. Our tests indicate, however, that the measurements around random points for each of our subsamples are consistent with zero given the statistical fluctuations. The measurements ΔΣ_ rand and cross-components ΔΣ_ rand, × around random points, as well as the cross-components ΔΣ_ lens, × around lens galaxies, along with their jackknife errors, are shown in Fig. <ref> for the lowest, a middle and the highest stellar mass threshold, respectively. The p-values to exceed χ^2 for all of our subsamples for both the systematics tests are presented in Table <ref>.
In spite of our conservative sample selection cuts and quality filters (Section <ref> and equation <ref>) on lens and source galaxies, the source galaxies can still be contaminated by structures correlated with the lenses. Such source galaxies may not even be down-weighted by the lensing weights if their p(z_s) is biased towards high redshifts. This effectively dilutes the lensing signal as a function of projected radius. However, the overall dilution can be estimated and corrected for by multiplying the signal by a boost factor (see e.g., <cit.>). The boost factor B(R_i) is defined as the ratio of the weighted number of lens-source (l-s) pairs per lens galaxy to that of random-source (r-s) pairs per random point, notationally,
B(R_i) = N_r ∑_ ls w_ ls/N_l ∑_ rs w_ rs .
We adjust the randoms-corrected signals by their corresponding boost factors in each of the ready-to-model signals and their jackknife covariances. The estimated boost factors for a few of the threshold bins are shown in Fig. <ref>. The errorbars on the B(R) values are computed by the jackknife technique outlined by equation (<ref>). Apart from the few smallest projected scales in the most massive galaxy samples that we probe, redshift bin z_1 shows boost factors consistent with unity, indicating the presence of a non-zero but small amount of source contamination close to the innermost radial bin, while redshift bin z_2 shows a consistent contamination of source galaxies at all scales, with B(R) ranging from ∼ 4% at the innermost radial bin to ∼ 1% around the outermost radii. The application of the boost factor scales the signal as a function of R and may affect the covariances, but the relative error in the signal remains the same. The relative errors of the signals in bins z_1 and z_2 evolve slowly from ∼ 5% to ∼ 10% for subsamples of increasing threshold stellar mass within log M_ *, limit= (8.6 - 10.8) and (9.0 - 11.0), respectively. The most massive threshold subsamples in each redshift bin have ∼ 15% relative error. Given this level of statistical tolerance, we confirmed that skipping the application of boost factors does not change our parameter constraints and thereby the resulting inferences; however, to maintain uniformity throughout our current and future analyses, we include boost factors in the weak lensing signal measurements for all subsamples.
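The boost factor of equation (<ref>) amounts to a ratio of summed pair weights; a minimal sketch, assuming the per-bin weight arrays have already been accumulated by the pair-counting step, is given below.

```python
import numpy as np

def boost_factor(w_ls_per_bin, w_rs_per_bin, n_lens, n_rand):
    """B(R_i) from summed lens-source and random-source weights in each radial bin."""
    w_ls = np.array([np.sum(w) for w in w_ls_per_bin])
    w_rs = np.array([np.sum(w) for w in w_rs_per_bin])
    return (n_rand * w_ls) / (n_lens * w_rs)
```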
The photometric redshifts of the galaxies may also have both statistical uncertainties and systematic biases. Such uncertainties could cause galaxies that are physically correlated with the lens samples to be included in our source samples, could cause source galaxies to be wrongly classified as lens galaxies, or could result in background galaxies getting assigned wrong redshifts. The first of these errors is accounted for using boost factors as described in the paragraph above.
We mitigate the second error by using stringent cuts on the choice of source galaxies in this analysis, such that the fraction of source galaxies identified as lenses is small. Thus the bias in the lensing signals will come mostly from source galaxy photometric redshifts being inconsistent with their true redshifts. We examine this effect using the methodology outlined by <cit.>[See appendix.]. We find that the source photo-z biases in bins z_1 and z_2 are ∼ 1% and ∼ 4%, respectively, and we have confirmed that such levels of bias do not change our results or any of our conclusions in a statistically significant manner. Consequently, we have ignored the photo-z bias correction in our measurements and modelling of the weak lensing signals in this paper.
The weak lensing signals as measured using the above techniques can be seen in Figs. <ref> and <ref> for the two redshift bins we consider in this paper, respectively. The errors on the data points are based on the square root of the diagonal elements of the covariance matrix as defined in equation (<ref>).
§ THEORETICAL MODELLING
We use a halo occupation distribution (HOD) framework in order to model the abundance and the weak lensing signal. The HOD framework allows us to relate the theoretical predictions of the abundance of dark matter halos, their clustering and the dark matter distribution around these halos to the observed abundance and lensing of galaxies. The parameters of the HOD describe the average number of galaxies, N(>M_*, limit|M), in a particular sample with stellar mass threshold M_*, limit that reside in halos of mass M. Since we only work with galaxy samples in thresholds of stellar mass, for ease of notation we denote this simply as N(M). We separate the total HOD of galaxies into separate terms for central and satellite galaxies, denoted by N_ c(M) and N_ s(M), respectively, such that,
N(M) = N_ c(M) + N_ s(M) .
We use a 5-parameter model to describe these separate terms <cit.>,
N_ c(M) = 1/2[1+ erf(logM - log M_ min/σ_logM)] ,
N_ s(M) = N_ c(M) ( M-M_0/M_1)^α,
where M_ min, σ_logM, M_0, M_1, α are free parameters which are allowed to vary freely for each threshold subsample. Given that, apart from an unknown intrinsic scatter, the relation between central galaxies and their halos is also obscured by uncertainties in the measured signals, we include the total scatter in the host halo masses of the central galaxies via the stochastic model expressed in equation (<ref>). Assuming that each central halo hosts a single galaxy, the first equation denotes the probability that a halo of mass M hosts a galaxy belonging to the threshold subsample. According to this functional form, M_ min denotes the mass at which half of the halos are occupied by galaxies above the stellar mass threshold of the subsample under consideration. Asymptotically, the halo occupation of central galaxies tends to unity. The satellite galaxy halo occupation number is a power law in M-M_0, where M_0 is the mass scale below which there are no satellite galaxies. M_1 can be seen as a typical halo mass to host a satellite galaxy, and the exponent α as an indicator of the accumulated star formation history for galaxies of the given mass threshold. The factor N_ c(M) in front of the satellite halo occupation number down-weights the satellite contribution from halos with a low probability of hosting a central galaxy. Formally, we treat the two halo occupations as independent, given that there are cases in which central galaxies are not necessarily the brightest galaxies in their halos.
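A direct transcription of this 5-parameter occupation model into code could look as follows; the parameter values passed in are free inputs, as in the fits described below.

```python
import numpy as np
from scipy.special import erf

def n_central(m, log_m_min, sigma_logm):
    """Mean central occupation <N_c>(M) of the 5-parameter HOD."""
    return 0.5 * (1.0 + erf((np.log10(m) - log_m_min) / sigma_logm))

def n_satellite(m, log_m_min, sigma_logm, m0, m1, alpha):
    """Mean satellite occupation <N_s>(M); zero below the cutoff mass M0."""
    m = np.atleast_1d(np.asarray(m, dtype=float))
    ns = np.zeros_like(m)
    above = m > m0
    ns[above] = n_central(m[above], log_m_min, sigma_logm) * ((m[above] - m0) / m1) ** alpha
    return ns
```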
Further, we also need to specify the position of the central galaxies within the dark matter halos. We assume that the central galaxy resides at the center of the dark matter halo. In our fiducial model we assume that satellite galaxies are distributed according to the NFW profile,
n(r) ∝( r/r_ s)^-1( 1 + r/r_ s)^-2
where r_ s is the scale radius of the halo and is defined as r_ s = r_ 200m/c_ 200m. Here c_ 200m is the concentration of the dark matter within that halo and halo masses are defined to be the masses enclosed within an overdensity of 200 times the background matter density, denoted by M_200m.
The abundance of galaxies in the threshold sample can be computed from the HOD using
n_ gal = ∫ dM n(M) [N_ c(M) + N_ s(M)].
We use the analytical framework presented in <cit.> in order to predict the weak lensing signal from the HOD. Here we briefly repeat the formalism for the sake of completeness.
The ESD profile, equation (<ref>), depends on the correlated surface density of matter which is a line-of-sight projection of the galaxy-matter cross-correlation function ξ_ gm at a halo-centric distance R such that
Σ(R) = ρ̅∫_0^∞ dz ξ_ gm([ R^2 + z^2]^1/2) .
Here, we have ignored the uniform density component in the computation of the surface density as it does not impact the weak lensing observables. We have also ignored any possible off-centering of central galaxies. Current modelling assumes that each halo hosts exactly one galaxy at its center
and that the dark matter contributions from subhalos of the satellite galaxies can be safely ignored. The cross-correlation is a Fourier transform of the cross power spectrum between galaxies and dark matter and can be computed using the analytical framework developed in <cit.>.
The total cross power spectrum between galaxies and dark matter can be divided in to 4 different terms, the one halo central and satellite terms, and the two halo central and satellite terms, such that,
P^ gm(k) = P^ 1h_ cm(k) + P^ 1h_ sm(k) + P^ 2h_ cm(k) + P^ 2h_ sm(k) .
Each of these terms can be expressed as
P^ 1h_ xm(k) = ∫ n(M) dM H_ x(k, M, z) M/ρ̅ u_ h(k| M, z) ,
P^ 2h_ xm(k) = ∫ n(M') dM' H_ x(k, M', z) ×∫ n(M̃) dM̃ Q(k|M',M̃,z) M̃/ρ̅ u_ h(k| M̃, z) ,
and `x' stands for either central `c' or satellite `s', Q(k|M',M̃, z) describes the cross-power spectrum of halos of mass M' and M̃ at redshift z, and we use
H_ c(k|M, z) = N_ c(M)/n̅_ gal ,
H_ s(k|M, z) = N_ s(M)/n̅_ gal u_ s(k|M,z) .
In the equations above, u_ s/h(k|M,z) denotes the Fourier transform of the number density profile of the satellite galaxy (dark matter) distribution within the halo. As indicated previously we assume this to be given by the NFW profile. We allow the satellite and dark matter concentration to vary from the form given by <cit.> to allow for systematic uncertainties due to baryonic effects, as well as effects of averaging the dark matter profiles of halo of the same mass but varying concentrations <cit.>. We implement this with a multiplicative parameter c_ fac which alters the fiducial concentration-mass relation that we adopt in this paper. We include a Gaussian prior with unit mean and a variance of 0.2 for this parameter.
The baryonic component within the galaxy is expected to dominate the weak lensing signal at small projected separations. We model this component as a point mass contribution similar to how it has been modelled in previous studies <cit.>,
ΔΣ_b(R) = M̅_ bary/π R^2 ,
where, M̅_ bary represents average baryonic mass of all the galaxies in a given threshold subsample. We restrict our measurement of the lensing signal to scales above 100, thus our measurements are not very sensitive to the baryonic component (10 percent of the signal at the innermost point for the largest stellar mass bin). Given this relative insensitivity of our results to the baryonic contribution, we simply model this term as the average of the stellar mass contribution of all galaxies within the bin of interest. The total modelled signal is then the sum of ESD due to dark matter-halos and the central baryonic component.
§.§ HOD model fitting specifications
We carry out a Bayesian analysis to infer the posterior distribution of model parameters given the data, P(Θ|D, I), such that
P(Θ| D, I) ∝ P( D|Θ, I) P(Θ| I) ,
where I represents the choice of our model, the quantity P( D|Θ, I) is the likelihood of the data given the model parameters, and P(Θ| I) denotes the priors on our model parameters. We assume the likelihood to be a multi-variate Gaussian, such that
ln P( D|Θ, I) ∝ -χ^2(Θ;𝒟, I)/2 ,
χ^2 = ∑_ i,j [ ΔΣ̃ - ΔΣ ]_ i [𝒞^-1]_ ij [ ΔΣ̃ - ΔΣ ]_ j + ( ñ_ gal - n_ gal)^2/σ^2_ gal ,
where the terms with a tilde on top are modelled quantities while those without a tilde are observed quantities, the subscripts i,j stand for the i^ th and j^ th radial bins respectively, and the covariance matrix, 𝒞, is obtained from the jackknife technique discussed in Section <ref> (equation <ref>). We assume uniform priors on most of our parameters (see Table <ref>), unless mentioned otherwise.
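A minimal sketch of this Gaussian log-likelihood is shown below; the `model` callable stands in for the analytical prediction framework described next and is an assumption of this illustration.

```python
import numpy as np

def log_likelihood(theta, r_bins, ds_obs, cov_inv, n_obs, sigma_n, model):
    """Gaussian log-likelihood combining the lensing data vector and the abundance."""
    ds_mod, n_mod = model(theta, r_bins)       # model predictions for this parameter set
    resid = ds_mod - ds_obs
    chi2 = resid @ cov_inv @ resid + (n_mod - n_obs) ** 2 / sigma_n ** 2
    return -0.5 * chi2
```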
We use the analytical HOD modelling framework from <cit.>, as implemented in the software aum <cit.>, to predict the abundance and the galaxy-galaxy lensing signal given the HOD parameters. We sample the posterior distribution of our parameters given the measurements using the affine invariant MCMC ensemble sampler of <cit.> as implemented in the publicly available package emcee v3.1.1 <cit.>. We use 256 walkers for a total of 10000 steps. We remove the first 2000 steps from each walker as a burn-in phase and verify the stationarity of our parameters of interest to confirm convergence.
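The sampling setup could be configured as in the following sketch, which mirrors the walker, step and burn-in numbers quoted above; the prior bounds, the parameter ordering and the data vectors assumed to be in scope (together with the `log_likelihood` of the previous sketch) are placeholders rather than the values listed in our tables.

```python
import numpy as np
import emcee

# Placeholder flat-prior bounds for the 5 HOD parameters (not the paper's actual priors).
bounds = np.array([[10.0, 14.0], [0.01, 1.5], [9.0, 14.0], [10.0, 15.0], [0.1, 2.0]])

def log_prior(theta):
    inside = np.all((theta >= bounds[:, 0]) & (theta <= bounds[:, 1]))
    return 0.0 if inside else -np.inf

def log_posterior(theta):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, r_bins, ds_obs, cov_inv, n_obs, sigma_n, model)

ndim, nwalkers = len(bounds), 256
p0 = bounds[:, 0] + (bounds[:, 1] - bounds[:, 0]) * np.random.rand(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 10000, progress=True)
samples = sampler.get_chain(discard=2000, flat=True)   # drop burn-in, flatten walkers
```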
§.§ Model predictions
In addition to modelling the observables, the ΔΣ and the abundances, we also compute predictions of satellite fractions,
f_ sat = ∫ dM n(M) N_ s(M)/N
and average central halo masses,
M_ cen = ∫ dM M n(M) N_ c(M)/N_ c
for each threshold subsample, accounting for the full sampled posterior distributions, where N=N_ c+N_ s is the total number of galaxies computed for a given subsample and N_ x=∫ dM n(M) N_ x(M), with `x' standing for either `c' or `s'.
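Given a tabulated halo mass function and the occupation functions, these derived quantities reduce to simple integrals; the sketch below assumes the mass function grid comes from an external halo-model code (the analysis in this paper uses the aum framework for this purpose).

```python
import numpy as np

def hod_derived_quantities(m_grid, dndm, n_c, n_s):
    """n_gal, satellite fraction and average central halo mass from a tabulated
    halo mass function dn/dM and the occupations <N_c>(M), <N_s>(M) on m_grid."""
    n_cen = np.trapz(dndm * n_c, m_grid)
    n_sat = np.trapz(dndm * n_s, m_grid)
    n_gal = n_cen + n_sat
    f_sat = n_sat / n_gal
    m_cen = np.trapz(m_grid * dndm * n_c, m_grid) / n_cen
    return n_gal, f_sat, m_cen
```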
§ RESULTS AND DISCUSSION
We measure the weak gravitational lensing signal for stellar masses from log M_* ≥ 8.6 in z ∈ [0.3, 0.55] and log M_* ≥ 9.0 in z ∈ [0.55, 0.80]. Our measurements for the different threshold bins at the two different epochs are shown as black circles in Figs. <ref> and <ref>, respectively. The errors on the points are the square root of the diagonal of the error covariance matrix for each measurement. The figures show R ΔΣ as a function of the projected separation from the lens galaxies, and we list the SNR of the measurements in the lower right boxes in each of the subpanels. The weak lensing measurements in each of the redshift bins clearly show that the weak lensing signal increases in strength for lens galaxies with a higher threshold in stellar mass. The lensing signal also shows deviations from ΔΣ∝ R^-1 as would be expected for a simple isothermal profile.
§.§ HOD modelling of the abundance and lensing signal
We fit the analytical HOD model to each of the measurements described above and obtain the posterior distribution of the parameters of our model given the measurements. The priors that we use on the parameters for our analysis are listed in Table <ref>. The solid magenta lines in Figs. <ref> and <ref> and the associated grey shaded regions indicate the best fit model and the 68 and 95 percentile credible intervals for the parameters given the joint fit of the lensing and I20 abundance measurements in the two photometric redshift bins z_1 and z_2, respectively. The best fit χ^2 values obtained from our measurements, along with the number of degrees of freedom based on the formalism of <cit.>, are also indicated in the boxes on the lower right in each of the subpanels.
We decompose the best fit model into components that correspond to the 1-halo central and 1-halo satellite terms, in addition to the 2-halo term, indicated by the solid red, solid orange and dotted green lines, respectively. The baryonic contribution to the lensing signals is quite small, and we have artificially boosted it to ten times its value for clarity, shown with a dashed line. The 1-halo central component dominates in the innermost regions up to a few hundred kiloparsecs, followed by the rising 1-halo satellite component as we move further out.
The increasing amplitude of the observed lensing signal can be fit with a consistently rising 1-halo central component. Statistically this indicates that central galaxies with higher stellar masses live in more massive dark matter halos. The satellite component corresponds to halos which are more massive than those hosting the centrals. These measurements and our modelling allow us to infer the stellar mass-halo mass relation for the central galaxies together with the satellite fractions in each of our subsamples, and these are a reflection of the scale dependence of the measured weak lensing signal.
Our results indicate that a simple dark-matter only HOD model in ΛCDM cosmology is flexible enough to describe the observed lensing and abundance measurements in each of the threshold stellar mass bins. The best fit χ^2 values corresponding to joint fits of weak lensing with either the I20 or M13 abundances are listed in Table <ref>. We obtain similar values of χ^2 despite large differences in abundances between I20 and M13, which hints towards a potential degeneracy among the HOD parameters when fitting weak lensing and abundances. Even though they appear statistically consistent, we see some evidence that I20 is fit better than M13 in the low threshold mass subsamples for the z_1 bin, while M13 is fit better for the high and low mass thresholds in the z_1 and z_2 bins, respectively.
The two-dimensional marginalized posterior distributions[The posterior distributions for two stellar mass thresholds chosen to be representative at each redshift bin have been made available online in the appendix.]
of the free parameters show familiar degeneracies in the central halo occupation parameters M_ min and σ_logM, where an increase in one parameter can be compensated by a corresponding increase in the other parameter. We will discuss the dependence of these degeneracies on our different observables in the following subsection. The satellite parameters are often ill-constrained, with a wide variety of satellite parameters leading to similar observables. The constraints on the free parameters of the HOD model, along with the inferred satellite fractions, abundances and average central halo masses M_ cen for each of the threshold bins in the two redshift bins, are listed in Tables <ref> and <ref>, respectively.
§.§ Degeneracy among central HOD parameters and abundance
Using the posterior distribution of the HOD parameters in our fiducial analysis, we examine the degeneracy between the central HOD parameters and its dependency on the weak lensing and the abundance, separately. The estimates of the abundance of galaxies differ between I20 and M13, and therefore can lead to different constraints on the HOD parameters. Therefore, we fit the HOD model to these observables individually and in combination to demonstrate the impact of using either of these abundance measurements. In Fig. <ref>, we present the resulting degeneracy contours between central HOD parameters corresponding to each of these observables. The 68 percent credible regions from the weak lensing only fit, the I20 abundance only fit and the M13 abundance only fit are shown with black, blue and red contours, respectively. The orange and the green correspond to the joint fits between weak lensing and the abundances from I20 and M13, respectively. The different subpanels correspond to lens subsamples with different stellar mass thresholds for bin z_1, chosen for illustrative purposes.
The central HOD parameters log M_ min and σ_logM for each of the observables individually are degenerate with each other, and a positive change in one can be compensated by a positive change in the other parameter. This can be understood as follows. The abundance of halos is a decreasing function of halo mass. Thus increasing M_ min would in general lead to a smaller abundance. However, this can be compensated by increasing the scatter σ_logM. The scatter allows galaxies to populate the more numerous less massive halos, thus satisfying the observed abundance. The relative shift in the degeneracy contours between I20 and M13 reflects the smaller abundance of galaxies inferred by I20 compared to M13. The weak lensing signal, on the other hand, is sensitive to the average halo masses of the lens samples. Thus an increase in M_ min, which would nominally increase the average halo masses, can be compensated by increasing the scatter σ_logM, which brings in lower halo masses, thus compensating the increase. At the highest stellar mass threshold, the degeneracy contours become flatter due to the exponential decrease in the halo abundances at the high mass end. Even though the degeneracies in the M_ min - σ_logM plane are qualitatively similar, the different dependencies of the abundance of halos and their average halo mass on the HOD parameters imply that the quantitative degeneracies have different directions. The combination of the abundance and weak lensing, shown as the orange/green contours, thus results in tighter constraints on each of the central HOD parameters, which otherwise cannot be constrained by either of the observables on their own.
We also observe that weak lensing prefers somewhat higher values for log M_ min than the abundance information alone up to stellar mass thresholds of 10^10.2 (10^10.4) in the redshift bin z_1 (z_2), where the lensing and abundance contours start to cross over. The exact location of this stellar mass depends upon which study we use the abundance information from. In general, we expect that adding in the abundance information will lead to lower values of M_ min than from weak lensing alone at the low stellar mass threshold end. At the high stellar mass threshold end, the inclusion of abundances can have a non-negligible impact on the inferred parameter values, as can be seen from the relatively flat degeneracy contours in the M_ min - σ_logM plane in the right hand subpanel of Fig. <ref>.
§.§ Galaxy-halo connection
The galaxy-halo scaling relation that we obtain from our joint analysis of weak lensing and the abundance of galaxies can be summarized by the dependence of M_ min on the stellar mass threshold log M_*, limit. The parameter M_ min corresponds to the mass at which half of the halos are occupied by galaxies in a given stellar mass threshold sample. This implies that the scaling relation between M_ min and log M_*, limit can be interpreted as the median of the stellar mass of galaxies at fixed halo mass. We show these constraints for our two redshift bins in different panels of Fig. <ref>. Our fiducial constraints are shown as credible intervals using light (95 percent) and dark grey (68 percent) shaded regions that correspond to the use of our weak lensing measurements combined with the abundance measurements presented in I20. If instead we combine with the abundance measurements from M13, we obtain the constraints shown as blue points (median) with errors (68 percent credible interval). As discussed in the previous section, the smaller abundance inferred in I20 compared to M13 leads to a higher M_ min when we use the abundance from I20 for redshift bin z_1. In contrast, for redshift bin z_2 the abundances of galaxies inferred in I20 and M13 are roughly equivalent (see Fig. <ref>), and thus the inferred M_ min is similar irrespective of which abundances we combine with the weak lensing signal.
In both redshift bins, we observe a scaling relation which shows that ∼ 10^12 dark matter halos are most efficient at forming stars and become increasingly inefficient as we move away from this mass. On the lower mass side, the inefficiency of star formation manifests in the stellar masses dropping precipitously to smaller values, while at the high mass end it is seen in the quick rise in halo mass that is required to form more and more massive galaxies. Qualitatively, this picture is consistent with previous studies. We present the comparison of the parameters M_ min and σ_logM obtained from our analysis when combining our weak lensing measurements with the two different abundance estimates in the first two panels of Fig. <ref>. Taken at face value, our results in the left panel do not indicate a large evolution in the scaling relation between the two redshift bins, especially if we consider the abundance measurements from M13. However, the abundance measurements from I20 indicate that halos of the same mass at lower redshift host galaxies with a median stellar mass which is lower by about 0.2 dex.
The scatter σ_logM in our HOD parameterization captures the scatter in the halo masses of galaxies that have stellar mass at the threshold chosen for our sample. We observe in the middle panel of Fig. <ref> that this scatter increases as we increase our threshold to include only massive galaxies. In models which have a fixed scatter in the stellar mass of galaxies at fixed halo mass, such behaviour is expected. The slope of the log M_*-M relation is quite shallow at the high mass end, and thus a constant scatter in the stellar masses at fixed halo mass results in a scatter in halo masses that continues to increase with the stellar mass. Our results are therefore qualitatively consistent with studies that indicate such a constant log-normal scatter in stellar masses at fixed halo mass, σ_log M_* <cit.>. These trends are consistently observed irrespective of which abundances we use and the redshift bin under consideration.
Previously, we have shown that the parameter M_ min is degenerate with σ_logM and that the posterior constraints on M_ min are very much dependent on the choice of the abundance measurements, especially in the first redshift bin. The weak lensing signal is expected to be sensitive to the average mass of halos occupied by galaxies in our sample. Given that the small scale weak lensing signal is well measured and is dominated by the 1-halo central term, we expect the average central halo masses to be well determined by the lensing signal for every threshold stellar mass bin. In Fig. <ref>, the blue (orange) shaded region with slanting lines shows our constraints on M_ cen from the weak lensing measurements only. The solid blue (orange) shaded region corresponds to the 68 percent credible intervals derived from a joint fit between lensing and the abundance from I20 for redshift bin 1 (2), while the blue (orange) solid points with errors correspond to a similar joint fit but using the abundance measurements from M13. While both M_ min and M_ cen have physical meaning, it is clear that M_ cen better reflects the results of our weak lensing measurements and is relatively insensitive to the exact choice of abundance.
We compare the M_ cen obtained for the two redshift bins in Fig. <ref>. When compared in this manner, we see very small differences in the redshift evolution of the scaling relation. The differences seen in M_ min and σ_logM compensate to result in a scaling relation of M_ cen as a function of the stellar mass threshold that shows very little evolution over the two redshift bins we use.
§.§ Satellite fraction
The weak lensing signal in the innermost regions is dominated by the dark matter halo of the central galaxies in each of our stellar mass threshold subsamples. Some of the galaxies in our subsample are also expected to be satellites. These satellites on average are expected to reside in more massive parent halos than halos hosting centrals of similar stellar mass. However these satellite galaxies do not reside at the centers of their parent dark matter halos, but are distributed within the halo. This signal from the satellite galaxies is thus expected to be a result of convolution of the weak lensing signal expected around the centers of their parent halos with the projected number density of satellite galaxies within the halo. The weak lensing signal at intermediate scales is thus sensitive to the fraction of satellite galaxies within the stellar mass threshold sample as well as the halo occupation distribution of the satellite galaxies in the subsample.
In Fig. <ref>, we show the fraction of satellite galaxies as a function of the stellar mass threshold of our subsamples. The solid blue (orange) shaded region shows the 68 percent credible region for the satellite fraction for redshift bin 1 (2) when using the weak lensing measurements along with the abundance measurements from I20. The regions shaded with slanted lines in the same colors correspond to the case when the lensing measurements alone are used as constraints. To maintain clarity, we do not show results using the M13 abundances here, as they are essentially similar within the errors.
Overall, the observations suggest that the satellite fractions decrease as a function of the stellar mass threshold above 10^10 for both redshift bins. There is tentative evidence of a flattening of the satellite fractions at lower stellar mass threshold for redshift bin 1. We do not find significant evidence for the evolution of the satellite fractions with redshift given the large errors in our inference, nor do we find a significant difference depending upon which abundance constraints we use.
§ COMPARISON WITH PREVIOUS STUDIES
As mentioned in Section <ref>, we compare the results from the two studies, I20 and M22, of the clustering and abundance of galaxies from the same samples we use in this paper, against our inferences, which are driven by the measured weak lensing signals and the abundance estimates from I20. This comparison is well suited, even in the photometric observable plane, due to the use of the same dataset. To briefly summarize the results and approaches of these two studies, I20 modelled the measured projected 2-point correlation functions ω (θ) and the measured abundances of galaxies using the same HOD parameterization as used in our modelling scheme.
On the other hand, these same measurements were modelled by M22 with a modified subhalo abundance matching (SHAM) technique applied to cosmological simulations from the Uchuu suite <cit.>, namely mini-Uchuu and shin-Uchuu. Amongst the two, mini-Uchuu has the larger box size of 400 h^-1 Mpc with a particle mass resolution of 3.27× 10^8 h^-1 M_⊙, while shin-Uchuu has the higher resolution of 8.97× 10^5 h^-1 M_⊙ with a box size of 140 h^-1 Mpc. Comparison between the two simulations allows us to test the effect of resolution. In their paper, M22 explore two different proxies of halo mass which monotonically correlate with the stellar mass of galaxies (albeit with a scatter). The first approach uses the traditional peak maximum circular velocity (V_ peak), while the second one utilizes the halo mass of the progenitor of the subhalo at a prior redshift (M_ prog). The constraints presented by M22 correspond only to the first redshift bin.
We utilize the best fit HOD parameters from I20 and predict the expected weak lensing signal for each of the stellar mass threshold lens samples in redshift bins 1 and 2, using the framework prescribed in Section <ref>. These best-fit predictions are shown as blue lines in Figs. <ref> and <ref>, which correspond to redshift bins 1 and 2, respectively. In redshift bin 1, we find that the I20 predictions underestimate the measured lensing signal (by about 10-30%) around small projected radii corresponding to the 1-halo regime for threshold mass bins up to 10^10.0. For higher threshold bins, the predictions overestimate the measured weak lensing signal by up to 50-60%. Although we see qualitatively similar differences in redshift bin 2, the magnitude of these differences is much smaller than in redshift bin 1. For redshift bin 1, we also show the best fit predictions for the weak lensing signal from the two SHAM models of M22, based on V_ peak and M_ prog, using light green and blue dotted lines. Both SHAM models are able to explain the lensing signals relatively well for the galaxies in mass thresholds below 10^10 compared to the more massive thresholds, especially when compared to the fits from I20. In this stellar mass range, one of the two models appears to fit the measurements somewhat better than the other; however, we have checked that this is a resolution-dependent statement, and with the higher resolution shin-Uchuu run these differences further disappear. For higher threshold stellar masses, both models seem to fare poorly. However, we see evidence that at least one of the models is able to capture the large scale lensing signal beyond 1 well. For these bins, we see appreciable differences between the measurements and the predictions on small scales for both models.
The weak lensing signal in the 1-halo regime is driven by the average central halo masses M_ cen. Therefore, we compare our inference of M_ cen for each of the threshold samples with that inferred from the results of I20 and M22 in Fig. <ref>. The best-fit predicted average central halo masses from I20 are shown as blue (left panel) and red (right panel) lines for redshift bins 1 and 2, respectively. The comparison shows that the inferences from I20 are statistically larger than ours for M_*, limit>10^10.0, consistent with the expectation based on the comparison of the predicted and measured weak lensing signals. However, for stellar mass thresholds below 10^10.0, the I20 best fit predictions appear consistent with our constraints. This implies that the differences in the weak lensing signal are likely absorbed by the difference in satellite fractions in our model compared to that in I20. This is visible in the comparison of the satellite fractions from I20 with our results shown in Fig. <ref>. The comparison shows that, when compared with the lensing-only results, I20 prefer larger satellite fractions in both redshift bins.
In the left hand panel of Fig. <ref>, the results from the two M22 models are shown with open squares and open circles with errors. We distinguish between the results from the two simulations used in M22 with two different colors: green corresponds to the mini-Uchuu simulation, while magenta corresponds to the shin-Uchuu simulation. We see that both models infer results which are consistent with our constraints from the weak lensing and the abundance from I20 up to a stellar mass threshold of 10^10, consistent with the comparison of the weak lensing signals. At higher stellar mass thresholds, the differences seen in the weak lensing signal are a result of the higher average central masses in these models. The results of M22 seem to be much more consistent with the results from I20 at these threshold bins, suggesting that the combination with the clustering is driving the larger halo masses. In the comparison of the satellite fractions we also observe that the models from M22 always prefer higher satellite fractions compared to either I20 or our results, with the exact difference depending upon the resolution of the simulation. While comparing these results, it is worth keeping in mind that the scales over which the lensing measurements are carried out (< 5) are smaller than the length scales over which the clustering signal was measured by I20 (≲ 25-30 at the median redshifts of the samples). The inferences from clustering are thus expected to be more sensitive to the large scale bias of the dark matter halos, or the 2-halo term, while our inferences rely more significantly on the 1-halo term. The signal on large scales can potentially be affected by the presence of galaxy assembly bias <cit.>, and thus appropriate caution is warranted.
The I20 best fit predictions of the SMHM relation for each redshift bin are shown as red circles with errors in the two separate panels of Fig. <ref> for comparison with other studies. Despite the overestimate in M_ cen for thresholds greater than 10^10, the halo masses M_ min are underestimated. As discussed in Section <ref>, such a relation between M_ cen and M_ min can be made possible by the choice of small values of the halo mass scatter σ_logM, and we verify in the middle panel of Fig. <ref> that this is indeed the case. Additionally, the deviations of their M_ min and σ_logM from our constraints increase as we go towards more massive galaxy thresholds. Partly this could also be due to the clustering information probing a different degeneracy direction in the space of the central HOD parameters. We highlight a contrasting feature between lensing and clustering based studies: the I20 study of galaxy clustering, despite using abundance information which puts strong constraints on the central HOD parameters, is unable to strongly constrain the halo mass scatter parameter at high stellar mass thresholds, whereas lensing is able to unveil the large ambiguity in the scatter parameter. This lack of constraint could be driving the disagreements between the two studies for thresholds beyond 10^10.0. Even though the high mass slope of the SMHM relation makes the stellar mass a poor tracer of its host halo mass <cit.>, lensing is clearly more effective than clustering in probing the scatter in the SMHM relation. In the left hand panel for redshift bin 1 of Fig. <ref>, we observe that the results of M22 (shown with a similar color scheme as described before) for either of their models are consistent with our results. We do see a difference between the results depending upon the resolution, and it appears that the two simulation boxes can trade between M_ min and the scatter σ_logM so as to maintain a similar value of M_ cen. This can be seen in the right panel of Fig. <ref>, where we compare the scatter from M22 in the two different simulations with our results.
The best-fit constraints on the halo mass and scatter parameters from I20 are shown as points with 1-σ errors in the left and middle panels of Fig. <ref>, where blue and red correspond to redshift bins 1 and 2, respectively. The underestimation of the WLS and the average central halo mass at the lowest mass threshold of the z_1 bin (see Figs. <ref> and <ref>) is caused by the correspondingly larger best fit value of the scatter σ_logM. In redshift bin z_2, however, their best fit scatter value is in line with our expectation, and the underestimate of the WLS is instead driven by the lower value of M_ min preferred by the clustering signal when combined with the abundance.
While we use the same cosmology as I20, we note that differences in their modelling ingredients may have a non-negligible impact on this comparison. To be more specific, I20 use a large scale halo bias function and a halo mass function each calibrated from different simulations, that is, the bias from <cit.> but the mass function as given by the Sheth & Tormen form <cit.>. Also, I20 use a different halo mass-concentration relation than us, although we have an extra free parameter c_ fac which can subsume such differences. Similarly, M22 use a halo mass definition that contains the mass within the virial radius, M_ vir. We convert M_ vir to M_200m using colossus <cit.> when making a direct comparison of halo masses.
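Such a mass-definition conversion can be performed, for example, with the colossus package as sketched below; the chosen cosmology and concentration model are illustrative assumptions, and the exact call signatures should be checked against the colossus documentation.

```python
from colossus.cosmology import cosmology
from colossus.halo import concentration, mass_defs

# Illustrative conversion of a virial mass to M200m; colossus works in Msun/h.
cosmology.setCosmology('planck15')

def mvir_to_m200m(m_vir, z):
    c_vir = concentration.concentration(m_vir, 'vir', z, model='diemer19')
    m200m, r200m, c200m = mass_defs.changeMassDefinition(m_vir, c_vir, z, 'vir', '200m')
    return m200m

print(mvir_to_m200m(1e13, 0.4))
```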
§ CHALLENGES AND FUTURE WORK: PHOTOMETRIC DATA AND ASTROPHYSICAL INFERENCES
We have inferred the galaxy stellar mass - halo mass scaling relation from a joint analysis of the abundance and the weak lensing signal in this paper. The inferred relation assumes the lens galaxy properties given by the photometric redshift and stellar mass estimates from the template fitting method MIZUKI <cit.>. However, it is important to note that the presence of statistical or systematic errors in photometric redshifts can propagate into the selection of our sample, as well as into the measured abundances and the weak lensing signal, in a non-linear manner. As discussed in Section <ref>, the errors in photometric redshifts are expected to positively correlate with those in the stellar masses <cit.>. Such correlated errors, even if they are only statistical, can result in a number of lower mass galaxies scattering into our stellar mass threshold and some of the high mass galaxies scattering out instead. Similar effects can also be at play at the boundaries of our redshift bins. The stellar mass bin thus does not represent a true stellar mass threshold in the presence of such errors. Moreover, such errors are also expected to affect the true average redshift of the sample, as well as the abundance measurements. The abundance measurements are further complicated by issues in the determination of the volume associated with the galaxies, caused by the quality cuts on photometric redshift as applied in I20. If such volume determination uncertainties affected galaxies at all stellar masses equally, then one could correct for them by comparing against prior determinations of the abundance in the literature. In general, however, the selection effects are often much more nuanced than simple volume misestimates, and are not entirely straightforward to correct.
Even though we explored the effect of the photometric redshifts of the source galaxies on the weak lensing signal, these measurements can also be affected by the uncertainties in the photometric redshifts of the lens galaxies. The lens galaxy redshift is used to assign the projected comoving impact parameter at which the light from a background galaxy passes the lens before it reaches us. The critical surface density estimates used to convert the shear to the excess surface density also depend upon the redshift of the lens galaxies. Thus, the interpretation of the weak lensing signal can also be affected by the use of photometric redshifts for the lenses. Therefore, each of the above mentioned measurements can impact the inferred HOD parameters in a variety of ways.
Given these uncertainties, we refrain from making direct comparisons between these results and those present in the literature on the stellar mass-halo mass relation. We restrict our comparison to those studies which use the same sample of galaxies and have similar assumptions, in order to make a fair comparison of the results of these studies with the results we obtain. To enable comparison with the broader literature, in a future study we will use a forward modelling approach and ascertain the level of systematic bias by making use of mock galaxy catalogs that have the errors in photometric redshifts as expected from the photometry of the HSC survey.
The Subaru HSC survey can map out galaxies to even higher redshifts than those considered in this study. However, beyond the median redshift of the survey we become increasingly sensitive to potential systematic biases due to the use of photometric redshift estimates for the source galaxies. We also expect magnification bias to start to play a role by correlating the lens and the source number densities, especially for galaxies that lie at the steep end of the luminosity function <cit.>. Eventually, once we have control over all the above systematics, it will become interesting to model the clustering, the lensing and the abundance of galaxies as a function of stellar mass and in multiple redshift bins.
§ SUMMARY AND CONCLUSIONS
We have investigated the galaxy-dark matter connection and its evolution using samples of photometric galaxies from the HSC survey with varying thresholds of stellar mass in the range 8.6 ≤log M_* ≤ 11.2, in the redshift ranges [0.30,0.55) and [0.55,0.80). Our results are based on the weak lensing signal measured for these samples using the Year 1 catalog of source galaxy shapes from the HSC survey, and on the measurements of the abundance of galaxies. We carry out a Bayesian analysis to infer the posterior distribution of parameters that describe the halo occupation distribution of these galaxies. The key results and findings of our study are summarized as follows.
* We present high signal-to-noise ratio measurements (SNR ranging from 30-50) of the lensing signals in both redshift bins for all of our samples. We show the robustness of the measured lensing signals with multiple null tests, such as the tangential and cross components of the lensing signal around random points and the cross component around lens galaxies. We also find that the boost factors for our signals are statistically insignificant and that the biases due to the use of photometric redshifts for the source galaxies are ∼ 1% and ∼ 4% for redshift bins 1 and 2, respectively. These tests of systematics indicate that our measurements are not heavily affected by contamination from either the foreground or the background galaxies.
* We fit these weak lensing measurements together with the abundances of galaxies with a simple 5 parameter HOD model per sample in the context of the Planck cosmological model and show that the model provides a reasonable description of the data. We infer the posterior distribution of these parameters given the measurements.
* We show that the weak lensing measurements and the abundances on their own constrain the central HOD parameters M_ min and the scatter σ_logM in a degenerate manner. However, these degeneracy directions are different for each of the observables and hence a combination of the two helps break the degeneracy. We also show the impact of using different abundances from the literature. We show that the average halo masses of central galaxies are well constrained irrespective of which abundances are used.
* We find that the average halo masses of central galaxies increase with the threshold of the stellar mass subsample for both redshift bins 1 and 2. Comparison between the scaling relations at the two different redshifts shows very mild evolution, if any.
* We also compare our results with the constraints obtained by the study of I20, who jointly model the abundance and clustering of the same sample of galaxies. We show that the best fit model of I20 underestimates the observed lensing signals by varying amounts of 10%-30% in the 1-halo central term regime and 50%-60% at larger radii for mass thresholds up to 10^10, and overestimates the lensing signal for more massive threshold samples. Nevertheless, we find excellent agreement between the constraints on the average halo masses of central galaxies for these samples for thresholds up to 10^10, while the results from I20 overestimate these average halo masses for higher threshold samples.
* We also compare our results with the subhalo abundance matching method of M22, which uses the abundance and clustering measurements of I20 as constraints. We find that their models, which use a monotonic relation between V_ peak or M_ prog of the subhalos and the stellar mass of galaxies, are able to predict lensing signals consistent with our measurements for stellar mass thresholds up to 10^10. Both models fail to explain the lensing signal, especially within the 1-halo regime, for higher stellar mass threshold samples.
* Finally, we find that the satellite fractions predicted by our fiducial analysis are consistent with the clustering study of I20 given the statistical errors. However, we find that the models from M22 based on subhalo abundance matching predict satellite fractions up to 15% higher than our constraints.
The paper demonstrates the great potential of large imaging surveys such as the HSC to infer the galaxy-dark matter connection over a large range of redshifts using multiple observational probes, such as the abundance of galaxies, their clustering and their galaxy-galaxy lensing signal. An accurate inference of the true underlying scaling relations between stellar mass and halo mass, however, will depend upon quantitative estimates of how the photometric redshift errors in the lens galaxy population affect the underlying stellar mass threshold samples. Assessment of the extent of such biases will be the subject of our work in the near future.
§ ACKNOWLEDGEMENTS
We thank Divya Rana, Amit Kumar, Preetish K. Mishra, Susmita Adhikari, Arka Banerjee, Supranta S. Boruah and Priyanka Gawade for useful discussions and their comments on the draft version of the paper. We also thank our research advisory committee members Aseem Paranjape, Masamune Oguri and Anupreeta More for useful discussions on the current project along with comments on the draft version of this paper. NC is thankful for the financial support provided by the University Grants Commission (UGC) of India. He is also thankful to IUCAA for the amicable environment and hospitality offered to students.
We acknowledge the use of Pegasus, the high performance computing facility at IUCAA. The calculations in part were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. Data analysis was in part carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan.
This work was supported in part by JSPS KAKENHI Grant Numbers JP23K13145 (SI), JP19H00677, JP21H05465, JP22K03644 (S. Masaki) and JP21K13956 (DK).
TO acknowledges support from the Ministry of Science and Technology of Taiwan under Grant Nos. MOST 111-2112-M-001-061- and NSTC 112-2112-M-001-034- and the Career Development Award, Academia Sinica (AS-CDA-108-M02) for the period of 2019 to 2023.
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
We also thank Instituto de Astrofisica de Andalucia (IAA-CSIC), Centro de Supercomputacion de Galicia (CESGA) and the Spanish academic and research network (RedIRIS) in Spain for hosting Uchuu DR1, DR2 and DR3 in the Skies & Universes site for cosmological simulations. The Uchuu simulations were carried out on Aterui II supercomputer at Center for Computational Astrophysics, CfCA, of National Astronomical Observatory of Japan, and the K computer at the RIKEN Advanced Institute for Computational Science. The Uchuu Data Releases efforts have made use of the skun@IAA_RedIRIS and skun6@IAA computer facilities managed by the IAA-CSIC in Spain (MICINN EU-Feder grant EQC2018-004366-P).
We have used <cit.> to create degeneracy plots and <cit.> to create triangle/corner plots.
§ DATA AVAILABILITY
The weak lensing signal measurements after applying all correction as mentioned in Section <ref> for our stellar mass threshold lens samples along with the measured covariance matrices and abundances are made available in a public github repository, <https://github.com/0Navin0/galaxy_halo_connection_in_HSC>. This repository also contains our modelling constraints from Tables <ref> and <ref> along with additional relevant plots for interested readers.
|
http://arxiv.org/abs/2307.04654v1 | 20230710155406 | Poles and zeros of electromagnetic quantities in photonic systems | [
"Felix Binkowski",
"Fridtjof Betz",
"Rémi Colom",
"Patrice Genevet",
"Sven Burger"
] | physics.optics | [
"physics.optics",
"cond-mat.mes-hall",
"physics.comp-ph"
] |
Université Côte d’Azur, CNRS, CRHEA, 06560 Valbonne, France
Université Côte d’Azur, CNRS, CRHEA, 06560 Valbonne, France
Physics Department, Colorado School of Mines, Golden, Colorado 80401, USA
We present an approach to investigate poles and zeros in resonant photonic systems.
The theory is based on contour integration of electromagnetic quantities
and allows one to compute the zeros, to extract their sensitivities with respect to geometrical or other parameters,
and to perform modal expansions in the complex frequency plane.
The approach is demonstrated using an example from the field of nanophotonics, an illuminated
metasurface, where the emergence of reflection zeros due to the underlying resonance poles is explored.
Poles and zeros of electromagnetic quantities in photonic systems
Sven Burger
August 12, 2023
=================================================================
In the field of photonics, light-matter interactions can be tuned by exploiting resonance phenomena.
Examples include tailoring quantum entanglement with atoms and photons in cavities <cit.>,
probing single molecules with ultrahigh sensitivity <cit.>, and
realizing efficient single-photon sources <cit.>.
While electromagnetic observables are measured at real-valued excitation frequencies,
the concept of resonances intrinsically considers
the complex frequency plane <cit.>.
Resonance frequencies are complex-valued as the systems exhibit losses, e.g.,
due to interaction with the environment.
Excitation of the systems close to the resonance frequencies,
which are the poles of the electromagnetic field, leads to highly increased field values.
An important figure of merit is the quality factor of a resonance,
which is defined as the scaled ratio of real and imaginary part of the corresponding pole,
and which can represent the relation between stored
and dissipated electromagnetic field energy of the resonance <cit.>.
Resonances can also serve as a basis for the expansion of electromagnetic quantities.
Although most nanophotonic systems support many resonances, often only a few
resonances are sufficient to determine the optical
response in the real-valued frequency range of interest <cit.>.
The design of photonic components
can be greatly simplified by determining the complex frequency response of the photonic systems.
Controlling the relative locations of complex-valued poles and zeros of the scattering
matrix or of the transmission or reflection coefficients
can serve as a basis to tailor the corresponding optical response.
This kind of approach has long been used to design electronic systems <cit.>.
For example, all-pass filters, i.e., systems whose response amplitude remains
constant when the excitation frequency is varied, have poles
and zeros that are complex conjugates of each other <cit.>.
Other examples are minimum-phase systems, where the zeros have to be restricted
to the lower part of the complex plane <cit.>.
In photonics, recently,
it has been shown that a 2π-phase gradient of the reflection or
transmission output channel of a metasurface can be realized when a pair of pole and zero
is separated by the real axis in the complex frequency plane <cit.>.
Moreover, the zeros can have arbitrarily small imaginary parts,
i.e., the analysis of the locations of the zeros is extremely relevant
to design the response of the photonic systems at real frequencies.
Total absorption of light or perfect coherent absorption occurs when zeros of the
scattering matrix are on the real axis <cit.>.
Reflection zeros are also exploited for phase-sensitive detection with
nanophotonic cavities in biosensing applications <cit.>.
While in many electronic systems the determination of poles and zeros of the transfer matrix
may be done analytically, this is often not possible for photonic structures, such as metasurfaces.
To compute reflection and transmission zeros of scattering matrices of specific systems,
it has been proposed to solve Maxwell's equations as an eigenproblem with
appropriately modified boundary conditions <cit.>.
In this work, we present an approach for the study of poles and zeros in arbitrary photonic systems.
The theory is based on contour integration of electromagnetic quantities,
which also allows the sensitivities of the poles and zeros to be extracted,
i.e., their evolution in the complex frequency plane as a function of chosen parameters can be analyzed.
A numerical realization is used to demonstrate the approach.
The poles and the reflection zeros of a
metasurface and their sensitivities with respect to geometrical parameters are computed.
Furthermore, a modal expansion in the complex frequency plane is introduced to investigate
the appearance of the reflection zeros through the interference of modal contributions.
Singularities and contour integration.—In the steady-state regime,
light scattering in a material system can be described
by the time-harmonic Maxwell's equation in second-order form,
∇×μ^-1∇×𝐄 - ω_0^2ϵ𝐄 = iω_0𝐉,
where 𝐄(𝐫,ω_0) ∈ℂ^3 is the electric field,
𝐉(𝐫)∈ℂ^3 is
the electric current density describing a light source,
ϵ(𝐫,ω_0) and μ(𝐫,ω_0) are the
permittivity and permeability tensors, respectively,
𝐫∈ℝ^3 is the position, and ω_0∈ℝ is the angular frequency.
Electromagnetic quantities Q(𝐄(𝐫,ω_0)) ∈ℂ, such as
reflection or transmission coefficients,
are typically measured experimentally for real excitation frequencies ω_0.
However,
to obtain deeper insights into light-matter interactions,
an investigation of the optical response for complex
frequencies ω∈ℂ is essential.
For this, we consider the analytical continuation of
Q(𝐄(𝐫,ω_0)) into the complex frequency plane,
which we denote by q(ω) ∈ℂ as a short notation of q(𝐄(𝐫,ω)).
Figure <ref>(a) shows an example from the field of nanophotonics,
a dielectric metasurface <cit.>. Illumination of the metasurface by a plane wave
with the optical frequency ω_0 yields a physical observable Q(ω_0).
The singularities of its analytical continuation q(ω)
and the singularities of q(ω)^-1 are of special interest and
can be used to investigate the properties of the metasurface.
The singularities of q(ω) are the
poles ω^pole of the physical quantity q(ω).
The associated electric fields are so-called resonances or quasinormal modes.
The singularities of q(ω)^-1 are the zeros ω^zero of q(ω).
The associated electric fields lead to q(ω^zero) = 0.
Figure <ref>(b) sketches the complex frequency plane with exemplary locations of a pole and a zero.
By using Cauchy's integral theorem for a contour C which
encloses one simple pole ω^pole
and (or) one simple zero ω^zero of the quantity q(ω), as sketched in Fig. <ref>(b),
ω^pole and ω^zero are given by
ω^pole = ∮_C ω q(ω) dω / ∮_C q(ω) dω and ω^zero = ∮_C ω q(ω)^-1 dω / ∮_C q(ω)^-1 dω,
respectively.
The locations of M poles ω^pole_m
inside a contour C are given by the eigenvalues ω_m of the
generalized eigenproblem <cit.>
H^< X = H X Ω,
where Ω = diag(ω_1,…,ω_M)
is a diagonal matrix containing the eigenvalues, the columns of the
matrix X ∈ℂ^M× M are the eigenvectors, and
H = [ s_0 … s_M-1; ⋮ ⋮; s_M-1 … s_2M-2 ], H^< = [ s_1 … s_M; ⋮ ⋮; s_M … s_2M-1 ]
are Hankel matrices with the contour-integral-based coefficients
s_k = (1/2πi) ∮_C ω^k q(ω) dω.
The zeros ω^zero_m inside the contour are also given in this way, except that
the quantity q(ω)^-1 is considered for the coefficients instead of q(ω).
Note that this type of approach has inspired a family of numerical methods
to reliably evaluate all zeros and poles in a given bounded domain.
The methods are an active area of research in numerical mathematics, where, e.g., numerical stability,
error bounds, and adaptive subdivision schemes are investigated <cit.>.
To compute poles and zeros, the coefficients of the Hankel
matrices can be approximated by numerical integration <cit.>,
where the quantity of interest q(ω) is calculated
by computing 𝐄(𝐫,ω) for complex frequencies
on the integration contour C.
The electric field 𝐄(𝐫,ω) can be obtained by numerically
solving Maxwell's equation given in Eq. (<ref>).
The quantity q(ω)^-1 is immediately available by inverting the scalar quantity q(ω).
Computing the different contour integrals for each of the coefficients requires no additional computational
effort since the quantity q(ω) needs to
be calculated only once for each of the integration points.
The integrands differ only in the weight functions ω^k.
Information on the numerical realization can be found in Ref. <cit.>.
Further, the data publication <cit.> contains software
for reproducing the results of this work based on an interface to the finite-element-based
Maxwell solver JCMsuite.
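To make the procedure concrete, the following minimal Python sketch applies the contour-integral moments s_k and the Hankel generalized eigenproblem described above to an invented rational test function with one pole and one zero. It is not the authors' implementation (which obtains q(ω) from JCMsuite field solutions); all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical test function with one simple pole and one simple zero inside the contour;
# in the paper q(omega) would instead come from Maxwell solutions (e.g. a reflection coefficient).
pole_true, zero_true = 2.0 - 0.10j, 2.05 - 0.02j
q = lambda w: (w - zero_true) / (w - pole_true)

# circular integration contour omega = c + r*exp(i*theta), sampled at N points
center, radius, N = 2.0 - 0.05j, 0.3, 128
theta = 2.0 * np.pi * np.arange(N) / N
omega = center + radius * np.exp(1j * theta)
d_omega = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / N)   # d(omega) along the contour

def eigenvalues_inside(q_values, M):
    """Poles of the sampled quantity inside the contour via the Hankel eigenproblem."""
    # moments s_k = (1/(2*pi*i)) * contour integral of omega^k q(omega), trapezoidal rule
    s = [np.sum(omega**k * q_values * d_omega) / (2j * np.pi) for k in range(2 * M)]
    H       = np.array([[s[i + j]     for j in range(M)] for i in range(M)])  # Hankel matrix
    H_shift = np.array([[s[i + j + 1] for j in range(M)] for i in range(M)])  # shifted Hankel matrix
    evals, _ = eig(H_shift, H)          # generalized eigenproblem  H^< X = H X Omega
    return evals

print("recovered pole:", eigenvalues_inside(q(omega), M=1))        # ~ 2.0 - 0.10j
print("recovered zero:", eigenvalues_inside(1.0 / q(omega), M=1))  # zeros of q = poles of 1/q
```

Note that both quantities are obtained from the same samples of q(ω) on the contour, as emphasized in the text.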
Poles and reflection zeros of a metasurface.—We apply this
approach to determine the poles ω^pole_m and reflection zeros
ω^zero_m of the metasurface sketched in Fig. <ref>(a).
Figure <ref>(a) shows
the geometry of the nanostructures forming the metasurface,
including the parameters chosen for the numerical simulation.
The metasurface is illuminated by a plane wave at normal incidence from above.
For the investigation of the reflected electric field, we consider the Fourier transform
of 𝐄(𝐫,ω_0) <cit.>. Due to sub-wavelength periodicity,
the resulting upward propagating Fourier spectrum consists of only one term,
the zero-order diffraction coefficient Q(ω_0).
Solving the generalized eigenproblem given by Eq. (<ref>) with
the analytical continuation q(ω) of Q(ω_0)
gives the poles ω^pole_m and the reflection
zeros ω^zero_m of the illuminated metasurface.
We emphasize that Eq. (<ref>) provides an expression
of both poles and reflection zeros, and that
the numerical implementation does not pose any difficulties.
Figure <ref>(b) shows the integration contour C and the computed poles and zeros.
The contour-integral-based coefficients of the Hankel matrices in Eq. (<ref>)
allow to apply the approach of direct differentiation <cit.>.
When the Fourier coefficients q(ω̂_k) are calculated
at the integration points ω̂_k on the contour C,
also their sensitivities ∂ q/ ∂ p with respect to geometry, material, or source
parameters p can be evaluated without significant additional computational effort.
The sensitivities of the zeros can be extracted in the same way
as the sensitivities of the poles can be extracted <cit.>.
Figure <ref>(c) sketches the sensitivities ∂ω^pole_1/ ∂ p_k
and ∂ω^zero_1/ ∂ p_k with respect
to the upper radius p_1 and the height p_2 of the silicon cones of the metasurface.
With 64 integration points, it is possible to compute poles,
zeros and their sensitivities with high accuracies, see Table <ref>.
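As an illustration of how such sensitivities can be obtained from contour data, the following sketch differentiates the single-pole formula ω^pole = s_1/s_0 (the M = 1 case) with respect to a parameter p. The parameter dependence of the test function, in which the pole drifts linearly with p, and all numbers are assumptions for illustration only.

```python
import numpy as np

# Invented parameter dependence: the pole of the rational test function drifts linearly with p,
# so the exact sensitivity of the pole is d(omega_pole)/dp = 0.5.
N, center, radius = 128, 2.0 - 0.05j, 0.3
theta = 2.0 * np.pi * np.arange(N) / N
w = center + radius * np.exp(1j * theta)
dw = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / N)

def moments(values, k_max):
    return [np.sum(w**k * values * dw) / (2j * np.pi) for k in range(k_max)]

p = 0.1
pole = 2.0 - 0.10j + 0.5 * p
q    = (w - (2.05 - 0.02j)) / (w - pole)      # samples of q(omega; p) on the contour
dqdp = 0.5 * q / (w - pole)                   # samples of dq/dp (direct differentiation)

s0, s1   = moments(q, 2)                      # s_k
ds0, ds1 = moments(dqdp, 2)                   # ds_k/dp
print((ds1 * s0 - s1 * ds0) / s0**2)          # d(omega_pole)/dp for M = 1: ~0.5
```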
Modal expansion in the complex frequency plane.—The residues
a_m = (1/2πi) ∮_C_m q(ω) dω,
where C_m are contours enclosing the single eigenvalues ω_m from Eq. (<ref>),
can be used as a selection criterion for meaningful eigenvalues ω_m.
Eigenvalues with large a_m are prioritized, while ω_m with small a_m are likely
to be unphysical eigenvalues because either M is chosen larger than the actual number of eigenvalues
within the contour or they are not significant with respect to the quantity of interest.
Correspondingly, the choice of a specific source in Eq. (<ref>) allows one to consider only a subset of
eigenvalues of the considered physical system <cit.>.
Note that, for simple eigenvalues, the residues are
directly available, given by diag(a_1,…,a_M) = X^T H X,
where X is suitably scaled <cit.>.
Moreover, with the poles ω^pole_m
and the corresponding residues a_m, the modal expansion
of the Fourier coefficient,
q(ω) = ∑_m=1^M q_m(ω) + q_bg(ω),
q_m(ω) = -a_m/(ω^pole_m - ω),
q_bg(ω) = (1/2πi) ∮_C q(ξ)/(ξ - ω) dξ,
can be performed, where
q_m(ω) are Riesz-projection-based modal contributions and q_bg(ω) is the
background contribution <cit.>.
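A minimal numerical illustration of a residue and of the corresponding modal term q_m(ω) = -a_m/(ω^pole_m - ω), again for the invented rational test function used above rather than for the metasurface data:

```python
import numpy as np

def residue(q_func, pole, r=1e-3, N=64):
    """a_m = (1/(2*pi*i)) * integral of q over a small circle C_m around the pole."""
    th = 2.0 * np.pi * np.arange(N) / N
    z = pole + r * np.exp(1j * th)
    dz = 1j * r * np.exp(1j * th) * (2.0 * np.pi / N)
    return np.sum(q_func(z) * dz) / (2j * np.pi)

pole1, zero1 = 2.0 - 0.10j, 2.05 - 0.02j
q_func = lambda w: (w - zero1) / (w - pole1)

a1 = residue(q_func, pole1)                       # residue of the single pole
q1 = lambda w: -a1 / (pole1 - w)                  # modal contribution q_1(omega)
# for this test function the background contribution is the constant 1, so q = q_1 + q_bg exactly:
print(q1(2.2 - 0.03j) + 1.0, q_func(2.2 - 0.03j))
```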
Figure <ref>(a) shows the phase distribution Arg(q(ω)) of the electric field
reflected from the metasurface shown in Fig. <ref>(a).
This is obtained by evaluating the modal expansion given by Eq. (<ref>)
for the contour C shown in Fig. <ref>(b).
A phase retardation of 2π for a real frequency scan,
which is often required for the design of metasurfaces,
is obtained when a pair of pole ω^pole_m and zero ω^zero_m
is separated by the real axis <cit.>.
Figure <ref>(b) shows Arg(q_1(ω)) corresponding to the pole ω^pole_1 and
Fig. <ref>(c) shows Arg(∑_m=2^4 q_m(ω) + q_bg(ω)).
In particular, it can be observed that the zero ω^zero_1 does not appear for the
modal contribution q_1(ω), but it emerges due to interference
with the other contributions, i.e., when
∑_m=2^4 q_m(ω) + q_bg(ω)
is added to q_1(ω).
Conclusion.—We presented a theoretical formulation to determine the locations of complex-valued
singularities, including poles and zeros, of any electromagnetic quantity in photonic systems.
The zeros can be determined by contour integration,
in the same way as the poles corresponding to resonances can be computed.
We also presented modal expansions in the complex frequency plane of the phase
of the field reflected from a metasurface, where the total expansion validated the
computed reflection zeros.
The different modal contributions give insight into the emergence of the reflection zeros
by interference of various expansion terms.
Furthermore, computation of partial derivatives of the reflection zeros was demonstrated.
The approach can easily be transferred to other physical systems
supporting resonances, e.g., to quantum mechanics and acoustics.
The theory essentially relies on detecting singularities of meromorphic functions
in the complex plane.
Therefore, it can be easily extended to compute other
quantities, e.g., transmission zeros, scattering cross sections
of isolated particles, or maximal chiral response of nanoassemblies.
The real-frequency response of metasurfaces can in many cases be
significantly impacted by reflection and transmission zeros,
since these typically lie close to the real axis or can even cross
the real axis with slight parameter variations.
Therefore, a precise quantification of the sensitivities of
reflection and transmission zeros or also of other physical quantities
is essential for gradient-based optimization of
photonic metasurfaces or other resonant or non-resonant systems.
We expect that the presented theory will enable new computer-aided design approaches.
Supplementary data tables and source code for the numerical experiments
for this work can be found in the open access data publication <cit.>.
We acknowledge funding
by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)
under Germany's Excellence Strategy - The Berlin Mathematics Research
Center MATH+ (EXC-2046/1, project ID: 390685689),
by the German Federal Ministry of Education and Research
(BMBF Forschungscampus MODAL, project 05M20ZBM),
and by the European Innovation Council (EIC) project TwistedNano
(grant agreement number Pathfinder Open 2021-101046424).
[Raimond et al. (2001)] J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. 73, 565 (2001).
[Nie and Emory (1997)] S. Nie and S. R. Emory, Science 275, 1102 (1997).
[Senellart et al. (2017)] P. Senellart, G. Solomon, and A. White, Nat. Nanotechnol. 12, 1026 (2017).
[Lalanne et al. (2018)] P. Lalanne, W. Yan, K. Vynck, C. Sauvan, and J.-P. Hugonin, Laser Photonics Rev. 12, 1700113 (2018).
[Dyatlov and Zworski (2019)] S. Dyatlov and M. Zworski, Mathematical Theory of Scattering Resonances (American Mathematical Society, Providence, Rhode Island, 2019).
[Wu et al. (2021)] T. Wu, M. Gurioli, and P. Lalanne, ACS Photonics 8, 1522 (2021).
[Sauvan et al. (2022)] C. Sauvan, T. Wu, R. Zarouf, E. A. Muljarov, and P. Lalanne, Opt. Express 30, 6846 (2022).
[Nicolet et al. (2023)] A. Nicolet, G. Demésy, F. Zolla, C. Campos, J. E. Roman, and C. Geuzaine, Eur. J. Mech. A Solids 100, 104809 (2023).
[Desoer and Schulman (1974)] C. Desoer and J. Schulman, IEEE Trans. Circuits Syst. 21, 3 (1974).
[Oppenheim and Verghese (2017)] A. Oppenheim and G. Verghese, Signals, Systems and Inference, Global Edition (Pearson, 2017).
[Butterworth et al. (1930)] S. Butterworth et al., Wirel. Eng. 7, 536 (1930).
[Bechhoefer (2011)] J. Bechhoefer, Am. J. Phys. 79, 1053 (2011).
[Colom et al. (2023)] R. Colom, E. Mikheeva, K. Achouri, J. Zuniga-Perez, N. Bonod, O. J. F. Martin, S. Burger, and P. Genevet, Laser Photonics Rev. 17, 2200976 (2023).
[Hutley and Maystre (1976)] M. Hutley and D. Maystre, Opt. Commun. 19, 431 (1976).
[Chong et al. (2010)] Y. D. Chong, L. Ge, H. Cao, and A. D. Stone, Phys. Rev. Lett. 105, 053901 (2010).
[Maystre (2013)] D. Maystre, C. R. Phys. 14, 381 (2013).
[Sreekanth et al. (2018)] K. V. Sreekanth, S. Sreejith, S. Han, A. Mishra, X. Chen, H. Sun, C. T. Lim, and R. Singh, Nat. Commun. 9, 369 (2018).
[Kravets et al. (2018)] V. G. Kravets, A. V. Kabashin, W. L. Barnes, and A. N. Grigorenko, Chem. Rev. 118, 5912 (2018).
[Grigoriev et al. (2013)] V. Grigoriev, A. Tahri, S. Varault, B. Rolly, B. Stout, J. Wenger, and N. Bonod, Phys. Rev. A 88, 011803(R) (2013).
[Bonnet-Ben Dhia et al. (2018)] A.-S. Bonnet-Ben Dhia, L. Chesnel, and V. Pagneux, Proc. R. Soc. A 474, 20180050 (2018).
[Sweeney et al. (2020)] W. R. Sweeney, C. W. Hsu, and A. D. Stone, Phys. Rev. A 102, 063511 (2020).
[Mikheeva et al. (2023)] E. Mikheeva, R. Colom, K. Achouri, A. Overvig, F. Binkowski, J.-Y. Duboz, S. Cueff, S. Fan, S. Burger, A. Alu, and P. Genevet, Optica Open preprint (2023), doi:10.1364/opticaopen.22828976.v1.
[Austin et al. (2014)] A. P. Austin, P. Kravanja, and L. N. Trefethen, SIAM J. Numer. Anal. 52, 1795 (2014).
[Delves and Lyness (1967)] L. M. Delves and J. N. Lyness, Math. Comp. 21, 543 (1967).
[Kravanja and Van Barel (2000)] P. Kravanja and M. V. Barel, Computing the Zeros of Analytic Functions, Lect. Notes Math. 1727 (Springer, New York, 2000).
[Chen (2022)] H. Chen, J. Comput. Appl. Math. 402, 113796 (2022).
[Trefethen and Weideman (2014)] L. N. Trefethen and J. Weideman, SIAM Rev. 56, 385 (2014).
[Betz et al. (2021)] F. Betz, F. Binkowski, and S. Burger, SoftwareX 15, 100763 (2021).
[Binkowski et al. (2023)] F. Binkowski, F. Betz, R. Colom, P. Genevet, and S. Burger, Source code and simulation results: Poles and zeros of electromagnetic quantities in photonic systems, Zenodo (2023), doi:10.5281/zenodo.8063932.
[Novotny and Hecht (2012)] L. Novotny and B. Hecht, Principles of Nano-Optics, 2nd ed. (Cambridge University Press, Cambridge, 2012).
[Binkowski et al. (2022)] F. Binkowski, F. Betz, M. Hammerschmidt, P.-I. Schneider, L. Zschiedrich, and S. Burger, Commun. Phys. 5, 202 (2022).
[Zschiedrich et al. (2018)] L. Zschiedrich, F. Binkowski, N. Nikolay, O. Benson, G. Kewes, and S. Burger, Phys. Rev. A 98, 043806 (2018).
|
http://arxiv.org/abs/2307.05465v1 | 20230711175036 | Simulation of magnetohydrodynamic flows of liquid metals with heat transfer or magnetic stirring | [
"Shashwat Bhattacharya",
"Seyed Loghman Sanjari",
"Dmitry Krasnov",
"Thomas Boeck"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Simulation of magnetohydrodynamic flows of liquid metals with heat transfer or magnetic stirring
[1] Institute of Thermodynamics and Fluid Mechanics, TU Ilmenau, P.O. Box 100565, 98684 Ilmenau, Germany
[2] CTWe GmbH, Kirchenstraße, 91239 Henfenfeld, Germany
We discuss the effects of nonhomogeneous magnetic fields in liquid metal flows in two different configurations. In the first configuration, we briefly report the impact of fringing magnetic fields in a turbulent Rayleigh-Bénard convection setup, where it was shown that the global heat transport decreases with an increase of fringe-width. The convective motion in regions of strong magnetic fields is confined near the sidewalls. In the second configuration, we numerically study the effects of an oscillating magnetic obstacle with different frequencies of oscillation on liquid metal flow in a duct. The Reynolds number is low such that the wake of the stationary magnetic obstacle is steady.
The transverse oscillation of the magnet creates a sinusoidal time-dependent wake reminiscent of the vortex shedding behind solid obstacles.
We examine the behavior of the streamwise and spanwise components of the Lorentz forces as well as the work done by the magnets on the fluid.
The frequency of the oscillation of the streamwise component of Lorentz force is twice that of the spanwise component as in the case of lift and drag on solid cylindrical obstacles.
The total drag force and the energy transferred from the magnets to the fluid show a non-monotonic dependence on the frequency of oscillation of the magnetic obstacle indicative of a resonant excitation of the sinusoidal vortex shedding.
Shashwat Bhattacharya1
[Corresponding author: e-mail [email protected],
phone +49 3677 69 2446],
Seyed Loghman Sanjari1,2,
Dmitry Krasnov1 and
Thomas Boeck1
August 12, 2023
========================================================================================================================================================================================
§ INTRODUCTION
Magnetohydrodynamic (MHD) flows, i.e., flows of electrically conducting fluids under the influence of magnetic fields, are frequently encountered in engineering and astrophysical applications. In such flows, the fluid is acted upon by the Lorentz force in addition to the force driving the flow.
Industrial and technological applications of such flows include heating, pumping, stirring, and levitation of liquid metals, cooling blankets in fusion reactors, and liquid-metal batteries.
In the context of astrophysics, magnetic fields strongly influence the flows in the sun and the stars and are responsible for the formation of sunspots and solar flares.
Magnetoconvection has been studied extensively in the past, but most of the studies focused on flows under the influence of a homogeneous magnetic field, which is an idealized approximation. However, in most engineering and astrophysical applications (such as liquid-metal batteries, cooling blankets in fusion reactors, electromagnetic stirring, sunspots, etc.) the magnetic fields are localized and thus vary in space <cit.>.
Further, strong homogeneous fields in large regions of space can only be generated by magnets of large size which are difficult to design and very costly to build and operate <cit.>. Thus, it is important to understand the impact of spatially varying magnetic fields on magnetohydrodynamic flows. Recently, Bhattacharya et al. <cit.> studied the effects of spatially varying magnetic fields on MHD flows driven by buoyancy (magnetoconvection); these effects will be briefly summarized in this paper. There have been several studies on MHD duct flows with different configurations of spatially varying fields (see, for example, Sterl <cit.> and Prinz et al. <cit.>); however, in this paper, we focus specifically on duct flows with a localized zone of applied magnetic field (henceforth referred to as magnetic obstacle). Flows past stationary magnetic obstacles have been studied before <cit.>. Similarities and differences between stationary magnetic and solid obstacles has been discussed by Votyakov and Kassinos<cit.>. Unsteady wakes
were only found for fairly large Reynolds numbers where the flow develops small-scale turbulent eddies <cit.>. In order to realize an unsteady flow past a magnetic obstacle at a rather low Reynolds number it seems necessary to add an additional periodic motion of the magnet. We therefore consider the effects of oscillating magnetic obstacles on MHD duct flow in the present paper, which can be interesting in the context of magnetic stirring. We also remark that oscillating solid obstacles have been studied previously but it appears that such studies are lacking for magnetic obstacles so far.
The outline of the paper is as follows. In Sec. <ref>, we discuss the mathematical model. Section <ref> describes the numerical method used in our simulations. We discuss the results in Sec. <ref> and conclude in Sec. <ref>.
§ MATHEMATICAL MODEL
In this section, we describe the setup and the mathematical formulation of our problems. The study will be conducted under the quasi-static approximation, in which the induced magnetic field is neglected as it is very small compared to the applied magnetic field. This approximation is fairly accurate for MHD flows of liquid metals <cit.>. The governing equations of MHD flows are given by
∇·u = 0,
∂u/∂ t + u·∇u = -∇ p + ν∇^2 u+ f,
where u and p are the velocity and pressure fields respectively, ν is the kinematic viscosity, and f is the total body force acting on the fluid.
For MHD duct flow, f is the specific Lorentz force (i.e. force per unit mass, henceforth denoted as f_L) and is given by
f = f_L =1/ρ(j×B_0),
j = σ(-∇ϕ + u×B_0),
∇^2 ϕ = ∇· (u×B_0).
where j is the current density, B_0 is the imposed magnetic field strength, σ and ρ are the electric conductivity and mean density of the fluid, respectively, and ϕ is the electric potential.
In magnetoconvection, f=f_L+f_b, where f_b=α g T ẑ is the buoyancy force, α is the thermal expansion coefficient, g is the gravitational acceleration, and T is the temperature field.
Magnetoconvection is additionally governed by the following thermal energy equation which describes the evolution of the temperature field T:
∂ T/∂ t + u·∇ T = κ∇^2 T,
where κ is the thermal diffusivity of the fluid.
MHD liquid-metal duct flows are governed by two nondimensional parameters: the Reynolds number Re, which is the ratio of the inertial force to the viscous force; and the Hartmann number Ha, which is the ratio of the Lorentz force to the viscous force. Liquid-metal magnetoconvection is governed by three nondimensional parameters: the Rayleigh number Ra, the ratio of the buoyancy force to the dissipative forces; the Prandtl number Pr – the ratio of kinematic viscosity to the thermal diffusivity; and the Hartmann number Ha. These quantities are given by
Re = UL/ν,
Ha = BL√(σ/ρν),
Ra = α g Δ L^3/νκ,
Pr = ν/κ,
where U, L, and Δ are the characteristic velocity, length, and temperature scales respectively. For magnetoconvection, we consider the Rayleigh-Bénard setup consisting of fluid enclosed between a cooler top plate and a warmer bottom plate (the temperature difference between the plates being Δ), with the plates separated by a distance H. In this case, H and Δ are, respectively, the characteristic length and temperature scales entering these nondimensional parameters.
As discussed in Sec. <ref>, we describe the effects of spatially varying magnetic fields in liquid metal flow for two configurations: (i) thermal convection in a box, and (ii) pressure-driven duct flow. In the first configuration, we consider a horizontally extended convection box of size l_x × l_y × H = 16 × 32 × 1 which is influenced by magnetic fields generated by two semi-infinite permanent magnets. The north pole of one magnet faces the bottom of the convection cell and the south pole of the second magnet faces the top of the convection cell. These magnets extend from -∞ to ∞ in the x-direction, l_y/2 to ∞ in the y-direction, from near the top wall to ∞ in the positive z-direction, and from near the bottom wall to -∞ in the negative z-direction. For a detailed description of the setup, the readers are referred to Bhattacharya et al. <cit.>.
In this configuration, the lateral component of the magnetic field (B_x) vanishes, and the longitudinal and vertical components respectively are logarithmic and inverse-tangent functions of the spatial coordinates y and z and the gap δ between the magnetic poles and the horizontal walls.
The magnetic field distribution is such that its strength is negligible for 0<y ≲ l_y/2, increases steeply at y ∼ l_y/2, and saturates close to its maximum value at y ≳ l_y/2.
When δ is increased keeping other parameters same, the total magnetic flux through the convection cell remains the same, but the gradient of the magnetic field at y∼ l_y/2 decreases, thereby increasing the fringe-width of the magnetic field. The aim of the study was to determine the effects of fringe-width on the heat and momentum transport.
The second configuration, which is the main focus of this paper, consists of liquid metal flow in a duct with two oscillating permanent magnetic poles near the top and bottom walls. The magnetic poles are semi-infinite in the z-direction and measure M_x=3 units along the streamwise direction and M_y=4 units along the spanwise direction in agreement with Votyakov and Kassinos <cit.>.
The spanwise dimension of the duct is L_y=50 units and the height is L_z=1 unit. The vertical gap between the magnetic poles and the liquid domain (one quarter of the layer height) also corresponds to Ref.<cit.>. A schematic diagram of the setup is shown in Fig. <ref>.
The magnetic field B=(B_x,B_y,B_z) generated by the magnetic poles is given by the formula derived by Votyakov et al. <cit.>.
The oscillation takes place along the spanwise direction and the y-coordinate of the center of the magnet at time t is given by
y_m=Asin(2πf_0t),
where A and f_0 are the amplitude and frequency of oscillation respectively. The magnets therefore have a velocity u_m with respect to the flow domain. Since the induction of currents depends on the relative velocity between conductor and magnet, the difference u-u_m must be used in Ohm's law (<ref>b) and in the charge conservation condition (<ref>c).
In our work, the oscillation amplitude is set to A=1, i.e. the ratio A/M_y=0.25. For the frequency we choose a reference value based on the Strouhal number St_0=0.25
in Ref. <cit.>.
The nondimensional reference frequency in our work is therefore
f_s = St_0 U/M_y = 0.25 × 1/4 = 0.0625
where U=1 is the nondimensional mean streamwise velocity. The frequency ratio is defined as F=f_0/f_s, i.e., the ratio of the frequency of oscillation of the magnetic poles to that of vortex shedding for the stationary magnetic obstacle of the same dimensions at a Reynolds number Re=900 in Ref. <cit.>.
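As a quick arithmetic check of these definitions, the following snippet evaluates f_s and the magnet trajectory y_m(t) for an example frequency ratio (the value F=0.5 is only an example):

```python
import numpy as np

St0, U, My, A = 0.25, 1.0, 4.0, 1.0
f_s = St0 * U / My                       # = 0.0625, nondimensional reference frequency
F = 0.5                                  # example frequency ratio
f_0 = F * f_s                            # excitation frequency of the magnets
t = np.linspace(0.0, 300.0, 3001)        # convective time units
y_m = A * np.sin(2.0 * np.pi * f_0 * t)  # spanwise magnet position
print(f_s, f_0)                          # 0.0625, 0.03125
```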
§ NUMERICAL METHOD
We conduct direct numerical simulations of our setups using a second-order finite difference code developed by Krasnov et al. <cit.>.
For the magnetoconvection setup, a non-uniform grid of resolution 4800 × 9600 × 300 was used. All the walls were rigid and electrically insulated such that the electric current density j formed closed field lines inside the cell. The top and bottom walls were fixed at T=-0.5 and T=0.5 respectively, and the sidewalls were adiabatic. The Rayleigh number, Prandtl number, and the Hartmann number based on the maximum value of the vertical magnetic field were fixed at Ra=10^5, Pr=0.021, and Ha_z,max=120. The gap δ between the magnetic poles and the conducting plates was varied from δ=0.01H to δ=9H, where H is the cell height.
For the configuration of flow past oscillating magnetic obstacle, we employ a rectangular domain of dimensions L_x × L_y × L_z = 200 × 50 × 1 with a grid resolution of 1024 × 384 × 32. The fluid enters the domain at x=0 with a nearly fully-developed laminar flow profile that is approximated by the analytical expression
u = cosh( 1.55L_y/L_z | 2y/L_y | )-cosh( 1.55L_y/L_z)/1 - cosh(1.55L_y/L_z)·cosh( 1.55L_z/L_y | 2z/L_z | ) - cosh(1.55L_z/L_y)/1-cosh( 1.55L_z/L_y).
The fluid leaves the domain at x=L_x where ∂u/∂ x = 0. The magnetic poles are located at x=50. The mesh is non-uniform in the y- and z-directions. The top, bottom, and sidewalls are rigid (no-slip) and electrically insulated. We fix Re=400 and the Hartmann number based on the maximum vertical magnetic field as Ha_z,max=70, and vary the frequency ratio from F=0.2 to F=0.8. It must be noted that the characteristic length and velocity for the above nondimensional quantities are L_z/2 (that is, half of the duct height) and the bulk horizontal velocity at the inlet (U), respectively.
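A small sketch of the analytical inlet profile quoted above, assuming the duct cross-section y ∈ [-L_y/2, L_y/2], z ∈ [-L_z/2, L_z/2] with L_y=50 and L_z=1; it only reproduces the formula and is not part of the solver:

```python
import numpy as np

def inlet_profile(y, z, Ly=50.0, Lz=1.0):
    """Analytical approximation of the nearly fully developed duct profile imposed at x = 0."""
    a, b = 1.55 * Ly / Lz, 1.55 * Lz / Ly
    fy = (np.cosh(a * np.abs(2.0 * y / Ly)) - np.cosh(a)) / (1.0 - np.cosh(a))
    fz = (np.cosh(b * np.abs(2.0 * z / Lz)) - np.cosh(b)) / (1.0 - np.cosh(b))
    return fy * fz

y = np.linspace(-25.0, 25.0, 385)
z = np.linspace(-0.5, 0.5, 33)
u = inlet_profile(y[:, None], z[None, :])
print(u.max(), u[0, 0])   # ~1 in the bulk, 0 at the no-slip walls
```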
For both simulation setups, the elliptic equations for pressure, electric potential, and temperature were solved by applying cosine transforms along the directions with uniform grid spacing and using a tridiagonal solver along the direction with non-uniform grid stretching. The diffusive term in the temperature transport equation is treated implicitly. The time discretization of the momentum equation uses the fully explicit Adams-Bashforth/Backward-Differentiation method of second order.
§ RESULTS
In this section, we first briefly summarize the results on the magnetoconvection simulations and then describe in detail the results on the flow past oscillating magnetic obstacle.
§.§ Results on magnetoconvection
A schematic of the magnetoconvection setup is illustrated in Fig. <ref>(a). The magnetic field generated by the magnets is strong enough to suppress the flow in the high-magnetic-flux region of the convection cell. We observe that as the local vertical magnetic field strength increases, the large-scale structures become thinner and align themselves perpendicular to the longitudinal sidewalls.
The dependence of the local Reynolds and Nusselt numbers on the local Hartmann number (based on the vertical component of the magnetic field) was determined; this dependence was observed to be independent of the fringe-width.
The global heat transport was observed to decrease with increasing fringe-width for strong magnetic fields but to marginally increase with increasing fringe-width for weak magnetic fields.
The convective motion became confined to the vicinity of the sidewalls in the regions of strong magnetic fields as shown in Fig. <ref>(b).
The amplitudes of these wall modes were shown to exhibit a non-monotonic dependence on the fringe-width.
For further details on the results, the readers are referred to Bhattacharya et al. <cit.>. In the next section, we discuss the results for the MHD duct flow setup.
§.§ Results on flow past oscillating magnetic obstacle
The simulations of the flow past magnetic obstacles are run for 300 convective time units after reaching a fully-developed state. The contour plots of instantaneous streamwise velocity are exhibited in Figs. <ref>(a–c) and those with time-averaging in Figs. <ref>(d–f) for F=0.2, F=0.5, and F=0.8. The figures show regions of reduced and even reversed streamwise velocity in the regions of strong magnetic field and also in the wake of the magnetic obstacle. The regions of reduced instantaneous velocity exhibit a wavy pattern. It can be visually observed from Figs. <ref>(a–c) that as the magnets oscillate faster, the wavelength of spatial oscillation decreases. There is an increase in the amplitude of this wavy path from F=0.2 to F=0.5, but the amplitude decreases with a further rise in F. For F=0.5, the wake of the magnetic obstacle consists of small-scale eddies, indicating that the flow becomes turbulent. The time-averaged streamwise velocity contours show that the length of the reversed flow region first decreases as F is increased to 0.5, and then increases with a further increase of F.
We examine the components of the total Lorentz force in the streamwise (f_L,x) and spanwise (f_L,y) directions. Note that f_L,x and f_L,y are the analogs of the drag and lift forces, respectively, in flow past solid cylinders. These quantities are
f_L,x = ∫_-L_z/2^L_z/2∫_-L_y/2^L_y/2∫_0^L_x (f_L·x̂) dx dy dz,
f_L,y = ∫_-L_z/2^L_z/2∫_-L_y/2^L_y/2∫_0^L_x (f_L·ŷ) dx dy dz.
Figures <ref>(a,b,c) exhibit the plots of the above quantities versus the convective time t for F=0.5, F=0.6, and F=0.8, respectively. The figures show a periodic sinusoidal time dependence. The magnitude of f_L,x is higher than that of f_L,y; however, f_L,y oscillates with a larger amplitude than f_L,x. The amplitude of oscillation increases with an increase of F. The response frequency of f_L,y is equal to the excitation frequency f_0 of the magnets; however, the response frequency of f_L,x is twice f_0.
We further compute ⟨ f_L,x⟩_t, the streamwise component of Lorentz force averaged over 300 timeframes, and plot it versus the frequency ratio in Fig. <ref>(a). It can be seen that ⟨ f_L,x⟩_t increases rapidly from F=0.2 to F=0.55. On further increase of F, ⟨ f_L,x⟩_t decreases sharply till F=0.7 above which ⟨ f_L,x⟩_t saturates close to a constant value.
Interestingly, the aforementioned behaviors of f_L,x and f_L,y closely resemble that of the drag and lift forces, respectively, in flows past an oscillating cylinder <cit.>.
For flows past an oscillating cylinder, the non-dimensional mechanical energy transferred from the cylinder to the fluid is expressed as
E = 2/ρ U d^2 ∫_0^t_P (dy/dt) f_lift dt,
where t_P is the motion period, d is the diameter of the cylinder, y is the spanwise position of the cylinder's axis, and f_lift is the magnitude of the lift force <cit.>. In our work, the energy transferred from the oscillating magnets to the fluid can be expressed similarly as follows:
E = ∫_0^t_P (dy_m/dt) f_L,y dt,
where t_P= 300 convective time units for our case.
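The following sketch evaluates this integral numerically for a synthetic sinusoidal spanwise Lorentz force; the force amplitude and phase lag are invented numbers, whereas in our analysis f_L,y(t) is taken from the simulation output.

```python
import numpy as np

A, f0, tP = 1.0, 0.03125, 300.0                                # magnet amplitude, frequency (F = 0.5), record length
t = np.linspace(0.0, tP, 20001)
dym_dt = A * 2.0 * np.pi * f0 * np.cos(2.0 * np.pi * f0 * t)   # magnet velocity dy_m/dt
phase = np.deg2rad(120.0)                                      # assumed phase lag of the force
f_Ly = 0.05 * np.sin(2.0 * np.pi * f0 * t + phase)             # synthetic spanwise Lorentz force
E = np.trapz(dym_dt * f_Ly, t)
print(E)   # positive for a phase lag between 0 and 180 degrees, i.e. work done on the fluid
```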
We compute E for different frequency ratios and plot it versus F in Fig. <ref>(b). The figure shows that E is always positive, implying that the magnets perform work on the fluid for all frequencies. The figure further shows that there is a gradual growth of E until F=0.45 and then it sharply decreases to a minimum value at F=0.53. The energy transfer increases monotonically on further increase of F. Interestingly, the point of minimum energy transfer almost coincides with the point of maximum average streamwise Lorentz force. This point corresponds to resonance where the velocity field exhibits stronger fluctuations compared to other frequency ratios.
We finally examine the trends of the phase angle between the spanwise component of Lorentz force and the spanwise displacement of the magnets.
This parameter is used as an indicator of the energy transfer from the magnets to the fluid <cit.>, where a phase angle between 0 and 180 degrees indicates positive energy transfer.
We compute the phase angle using our data by fitting it with a sinusoidal function using the method of least squares.
The computed phase angle is plotted versus the frequency ratio in Fig. <ref>(b). The figure shows that the phase angle lies between 0 and 180 degrees, consistent with the fact that the energy is transferred from the magnets to the fluid. The maximum phase angle at F=0.55 reaches about 170 degrees.
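A minimal sketch of the least-squares phase extraction used here: projecting f_L,y(t) onto a sine/cosine basis at the excitation frequency and taking the arctangent of the coefficients. The synthetic signal and its 120-degree lag are assumptions for illustration.

```python
import numpy as np

def phase_angle(t, f_Ly, f0):
    """Phase of f_Ly relative to y_m = A sin(2*pi*f0*t), from a linear least-squares fit."""
    basis = np.column_stack([np.sin(2.0 * np.pi * f0 * t), np.cos(2.0 * np.pi * f0 * t)])
    (c_sin, c_cos), *_ = np.linalg.lstsq(basis, f_Ly, rcond=None)
    return np.degrees(np.arctan2(c_cos, c_sin)) % 360.0

rng = np.random.default_rng(0)
f0 = 0.03125
t = np.linspace(0.0, 300.0, 20001)
f_Ly = 0.05 * np.sin(2.0 * np.pi * f0 * t + np.deg2rad(120.0)) + 0.002 * rng.standard_normal(t.size)
print(phase_angle(t, f_Ly, f0))   # ~120 degrees
```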
§ CONCLUSIONS
In this paper, we numerically examined the effects of non-homogeneous magnetic fields in liquid metal flows using a finite-difference fluid solver. We briefly summarized the results of Bhattacharya et al. <cit.> in which the influence of fringing magnetic fields on turbulent convection was studied. An important finding was that for strong magnetic fields, the global heat transport decreases with an increase of fringe-width, whereas for weak magnetic fields, the heat transport marginally increases with an increase of fringe-width. The convective motion gets confined near the sidewalls in regions of strong magnetic fields.
We numerically examined the effects of an oscillating magnetic obstacle with different oscillation frequencies on liquid metal flow in a duct.
We showed the presence of reduced and reversed streamwise velocity in the regions of strong magnetic field and in the wake of the magnetic obstacle. The regions of reduced velocity exhibit a wavy pattern with the wavelength of spatial oscillation decreasing with the excitation frequency of the magnets. The amplitude of wake oscillation shows a non-monotonic dependence on the frequency of the magnets and exhibits a maximum at a particular frequency f_max, which appears to correspond to the point of maximum Lorentz force in the streamwise direction and the minimum work done by the magnets on the fluid. The total streamwise and spanwise components of the Lorentz force oscillate sinusoidally in time at twice the frequency and at the same frequency of the magnet's oscillation, respectively. The mean of the spanwise Lorentz force is zero. Its amplitude increases with an increase of the frequency of oscillation of the magnets. The frequency f_max≈ 0.5 f_s is considerably smaller than the reference value f_s taken from Ref. <cit.>. Although the stationary magnet does not produce vortex shedding in our case, it seems plausible that our value f_max is indicative of the intrinsic shedding frequency when Re (and possibly Ha) are increased further. Lower values than St_0=0.25 of the Strouhal number of stationary magnetic obstacles were also reported by Kenjereš et al. <cit.>.
The authors are grateful to J. Schumacher for providing valuable contributions to the study of convection under the influence of fringing magnetic fields. S. Bhattacharya is supported by a postdoctoral fellowship of Alexander von Humboldt Foundation, Germany.
[Davidson:ARFM1999] P. A. Davidson, Magnetohydrodynamics in materials processing, Annu. Rev. Fluid Mech. 31, 273–300 (1999).
[Davidson:book:MHD] P. A. Davidson, An Introduction to Magnetohydrodynamics, second edition (Cambridge University Press, Cambridge, 2017).
[Barleon:KIT1996] L. Barleon, K. J. Mack, and R. Stieglitz, The MEKKA-facility a Flexible Tool to Investigate MHD-flow Phenomena, Tech. Rep. FZKA 5821 (1996).
[Bhattacharya:JFM2023] S. Bhattacharya, T. Boeck, D. Krasnov, and J. Schumacher, Effects of strong fringing magnetic fields on turbulent thermal convection, J. Fluid Mech. 964, A31 (2023).
[Sterl:JFM1990] A. Sterl, Numerical simulation of liquid-metal MHD flows in rectangular ducts, J. Fluid Mech. 216, 161–191 (1990).
[Prinz:PRF2016] S. Prinz, V. Bandaru, Y. Kolesnikov, D. Krasnov, and T. Boeck, Numerical simulations of magnetohydrodynamic flows driven by a moving permanent magnet, Phys. Rev. Fluids 1, 043601 (2016).
[Cuevas:JFM2006] S. Cuevas, S. Smolentsev, and M. A. Abdou, On the flow past a magnetic obstacle, J. Fluid Mech. 553, 227–252 (2006).
[Votyakov:PRL2007] E. V. Votyakov, Y. Kolesnikov, O. Andreev, E. Zienicke, and A. Thess, Structure of the wake of a magnetic obstacle, Phys. Rev. Lett. 98, 144504 (2007).
[Evgeny:JFM2008] E. V. Votyakov, E. Zienicke, and Y. B. Kolesnikov, Constrained flow around a magnetic obstacle, J. Fluid Mech. 610, 131–156 (2008).
[Kenjeres:IJHFF2011] S. Kenjereš, S. ten Cate, and C. J. Voesenek, Vortical structures and turbulent bursts behind magnetic obstacles in transitional flow regimes, Int. J. Heat Fluid Flow 32, 510–528 (2011).
[Tympel:JFM2013] S. Tympel, T. Boeck, and J. Schumacher, Laminar and transitional liquid-metal flow near a magnetic dipole, J. Fluid Mech. 735, 553–586 (2013).
[Votyakov:PF2009] E. V. Votyakov and S. C. Kassinos, On the analogy between streamlined magnetic and solid obstacles, Phys. Fluids 21, 097102 (2009).
[Votyakov:TCFD2009] E. V. Votyakov, S. C. Kassinos, and X. Albets-Chico, Analytic models of heterogenous magnetic fields for liquid metal flow simulations, Theor. Comput. Fluid Dyn. 23, 571–578 (2009).
[Krasnov:CF2011] D. Krasnov, O. Zikanov, and T. Boeck, Comparative study of finite difference approaches in simulation of magnetohydrodynamic turbulence at low magnetic Reynolds number, Comput. Fluids 50, 46–59 (2011).
[Krasnov:JCP2023] D. Krasnov, A. Akhtari, O. Zikanov, and J. Schumacher, Tensor-product-Thomas elliptic solver for liquid-metal magnetohydrodynamics, J. Comput. Phys. 474, 111784 (2023).
[Blackburn:JFM1999] H. M. Blackburn and R. D. Henderson, A study of two-dimensional flow past an oscillating cylinder, J. Fluid Mech. 385, 255–286 (1999).
[Placzek:JCF2009] A. Placzek, J. F. Sigrist, and A. Hamdouni, Numerical simulation of an oscillating cylinder in a cross-flow at low Reynolds number: Forced and free oscillations, Comput. Fluids 38, 80–100 (2009).
|
http://arxiv.org/abs/2307.05568v1 | 20230710025610 | Subtraction of the foreground confusion and parameter uncertainty of resolvable galactic binaries on the networks of space-based gravitational-wave detectors | [
"Jie Wu",
"Jin Li"
] | gr-qc | [
"gr-qc",
"astro-ph.IM"
] |
[email protected]
^1 College of Physics, Chongqing University, Chongqing 401331, China
^2 Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, China
There are tens of millions of compact binary systems in the Milky Way, called galactic binaries (GBs), most of which are unresolved, and the gravitational waves (GWs) they emit overlap to form a foreground confusion.
By simulating such foreground confusion, we study how LISA, Taiji and TianQin, including their alternative orbital configurations, subtract resolvable GBs when they are combined into networks.
Our results indicate that the number of resolvable GBs detected by a single detector decreases in the order: Taiji-m, Taiji-p (c), LISA, TianQin I, TianQin II.
For detector combinations on the network, the foreground confusion is effectively reduced as the number of detectors grows, and the optimal combinations with different numbers are: Taiji-m, LISA+Taiji-m, LISA+Taiji-m+TianQin I, and LISA+Taiji-m+TianQin I+II.
The sensitivity curve improves as the number of detectors increases, which makes it possible to detect other gravitational-wave sources more precisely and to decrease the parameter uncertainty of resolvable GBs.
Based on this, we discuss the parameter uncertainty of resolvable GBs detected by the combinations above and find that GW detection can promote electromagnetic (EM) detection.
Conversely, we find that, by utilizing EM detection, determining the inclination angle can reduce the uncertainty of the GW strain amplitude by ∼93%, and determining the sky position can reduce the uncertainty of the phase by ∼30%, further strengthening the connection between GW and EM detection and contributing to the research of multi-messenger astronomy.
Subtraction of the foreground confusion and parameter uncertainty of resolvable galactic binaries on the networks of space-based gravitational-wave detectors
Jin Li^1,2
=============================================================================================================================================================
§ INTRODUCTION
Since LIGO detected the first GW event from a binary black hole merger (GW150914) in 2015 <cit.>, a series of ground-based GW detectors, such as Advanced LIGO <cit.>,
Advanced Virgo <cit.> and KAGRA <cit.>, have been built around the world, opening the window for detecting GW.
However, due to the limitation of the interferometer arm length, the observation window of the ground-based GW detector is in the high-frequency band from 1 Hz to kHz, and the low-frequency GW signal below 1 Hz cannot be effectively detected.
Therefore, constructing an interferometer with an arm length in order of one million kilometers in space is an ideal solution for detecting low-frequency GW.
The mission proposed by European Space Agency to detect GW in the low-frequency band named Laser Interferometer Space Antenna (LISA) is scheduled to be launched around the 2030s <cit.>.
At the same time, the Taiji mission proposed by the Chinese Academy of Sciences to construct a space-based GW observatory similar to LISA, which consists of a triangle of three spacecraft (S/C) orbiting the sun linked by laser interferometers, will be in operation <cit.>.
Another Chinese mission, TianQin, being different from LISA and Taiji, consists of three identical drag-free controlled S/C in high Earth orbits <cit.>.
LISA, Taiji, and TianQin are all sensitive to the milli-Hertz frequency band.
Compared with the Hertz frequency band, there are a large variety of GW sources in the milli-Hertz frequency band to which the space-based GW detectors are sensitive. These sources are expected to carry a large amount of information about galaxy formation, galactic nuclei, the Milky Way, and the early universe <cit.>, including massive black hole binaries (MBHB) <cit.>, extreme/intermediate mass ratio inspirals (EMRIs/IMRIs) <cit.>, compact binaries
in the Milky Way <cit.> and stochastic gravitational-wave backgrounds (SGWBs) <cit.>.
According to current astrophysical models and observations, there are a large number of GBs in our Milky Way, whose orbital period is less than a few hours, and the frequency band of emitted GW is from 0.1 mHz to 10 mHz <cit.>.
Considering the sensitivity of the space-based GW detectors, the GWs emitted by tens of millions of GBs will enter the observation frequency band at the same time, overlapping to form the galactic foreground <cit.>.
Except for a small percentage of high signal-to-noise ratio (SNR) GBs known as resolvable GBs, the majority of them are unresolved, resulting in an effective noise called foreground confusion or confusion noise. <cit.>.
In the frequency range of 0.5∼3 mHz, the foreground confusion will be greater than the instrument noise, affecting the observation of other GW sources and creating a bump on the sensitivity curve.
While the unresolved GBs constitute the foreground confusion and have a negative impact on the observation of other GW sources, the resolvable GBs are conducive to researching the evolution and distribution of GBs in our Milky Way, which is also one of the main science objectives of the space-based GW detectors <cit.>.
Since the proposal of LISA, extensive research has been conducted on the foreground confusion from GBs <cit.>.
Subtracting the foreground confusion as much as possible is beneficial for better observation of other GW sources.
The research on LISA, Taiji, and TianQin in subtracting of the foreground confusion is introduced respectively in Ref. <cit.>.
In addition to increasing the observation time and improving the sensitivity of the GW detector, networks of GW detectors can also effectively identify more resolvable GBs and subtract the foreground confusion <cit.>.
In this paper, we simulate the subtraction of the foreground confusion using different network combinations of LISA, Taiji, and TianQin, including their alternative orbital configurations, in order to determine the best combination. We also draw the sensitivity curves to calculate the SNR and parameter uncertainty of the detected resolvable GBs, and on this basis discuss multi-messenger astronomy in combination with EM detection.
In Sec. <ref>, we introduce the GW signal model used to simulate GBs, the response of different space-based GW detectors to GW, as well as their instrument noise, sensitivity, and the alternative orbit configurations.
In Sec. <ref>, we use the population model to construct the GBs signal, subtracting the resolvable GBs by the iterative procedure to estimate the foreground confusion, and calculating the parameters of the resolvable GBs.
In Sec. <ref>, we present the subtraction of the foreground confusion by different combinations on the network, analyze the factors responsible for them, and plot the full sensitivity curves containing the foreground confusion.
Finally, we summarize our results in Sec. <ref>.
§ GW SIGNALS AND DETECTORS
§.§ GW signals from GBs
Considering that GBs have orbital periods of a few hours and emit GWs at milli-Hertz frequencies, they are in the very early phase of the inspiral, millions of years before the merger <cit.>.
Therefore, the orbital period evolves slowly and the GWs emitted by GBs can be fully regarded as quasi-sinusoidal signals (quasi-monochromatic sources).
For the GW signal, we can use a very simple model in which the phase is decomposed in a Taylor series, and consequently, the time domain waveform of a GB can be written as <cit.>:
h_+(t)=𝒜(1+cos^2ι)cosΦ(t)
h_×(t)=2𝒜cosιsinΦ(t)
with
Φ(t)=ϕ_0+2π f_0t+πḟ_0t^2+ Φ_D(t)
where 𝒜 is the GW strain amplitude, ι is the inclination angle, Φ(t) is the orbital phase, Φ_D(t) is the Doppler phase, ϕ_0 is the initial phase, and f_0 and ḟ_0 are the frequency and the frequency derivative of the GW.
The frequency variation, also known as the frequency derivative, can be expressed with the equation described in Ref. <cit.>:
ḟ_0=48/5(Gℳ/2c^3)^5/3f^11/3
where ℳ=(m_1m_2)^3/5/(m_1+m_2)^1/5 is the chirp mass, G and c are the gravitational constant and the speed of light.
By substituting the frequency f_0∼10^-3 into equation 3, we can roughly estimate the frequency derivative ḟ_0∼10^-19, which is many orders of magnitude smaller than the frequency itself; this is also why we treat the GWs as quasi-sinusoidal signals.
Therefore, we neglect higher-order phase terms as they contribute minimally to the waveform and have little impact on foreground confusion.
Additionally, we assume that the GBs are in circular orbits and ignore the influence of a third perturbing body <cit.>.
For the space-based GW detector, the periodic motion around the Sun will produce the Doppler phase, which is given by <cit.>:
Φ_D(t) = 2π f_0(R/c)cosβcos(2π f_mt-λ )
where R = 1 A.U. is the distance between the Sun and the Earth, f_m = 1/year is the Geocentric orbit modulation frequency and (λ,β) are the Ecliptic coordinates of the GW source.
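For readers who wish to reproduce the simulated GB signals, a minimal Python sketch of the quasi-monochromatic waveform above (including the Doppler phase) is given here; the function and variable names are illustrative and are not taken from any released analysis code.

```python
import numpy as np

# Illustrative sketch of the quasi-monochromatic GB waveform above,
# including the Doppler phase; SI units assumed throughout.
R_AU = 1.495978707e11            # 1 A.U. in metres
C = 299792458.0                  # speed of light in m/s
F_M = 1.0 / (365.25 * 86400.0)   # orbital modulation frequency 1/year in Hz

def gb_waveform(t, amp, incl, phi0, f0, fdot, lam, beta):
    """Return (h_plus, h_cross) of a galactic binary at times t [s]."""
    phi_doppler = 2*np.pi*f0 * (R_AU/C) * np.cos(beta) * np.cos(2*np.pi*F_M*t - lam)
    phi = phi0 + 2*np.pi*f0*t + np.pi*fdot*t**2 + phi_doppler
    h_plus = amp * (1.0 + np.cos(incl)**2) * np.cos(phi)
    h_cross = 2.0 * amp * np.cos(incl) * np.sin(phi)
    return h_plus, h_cross
```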
§.§ Detector’s response and noise
For the space-based GW detector, the GW strain recorded by the detector can be described as the linear combination of two GW polarizations <cit.>:
h(t)=F^+(t)h_+(t)+F^×(t)h_×(t)
where F^+ and F^× are the antenna pattern functions.
In the low-frequency limit, the antenna pattern functions in the detector’s coordinate frame can be expressed as<cit.>:
F^+ = -sinγ/2[(1+cos^2θ_d)sin2ϕ_dcos2ψ_s+2cosθ_dcos2ϕ_dsin2ψ_s]
F^× = -sinγ/2[-(1+cos^2θ_d)sin2ϕ_dsin2ψ_s+2cosθ_dcos2ϕ_dcos2ψ_s]
where γ=π/3 is the angle between the two arms of the detector, (ϕ_d,θ_d) are the coordinates of the location of the GW source in the
detector coordinate frame and ψ_s is the polarization angle.
The transformation between detector coordinates (ϕ_d,θ_d) and Ecliptic coordinates (λ,β) can be found in Appendix <ref>.
To explore the response of the detector to GWs in different positions, we introduce the combined tensor mode response function:
F=√(|F^+|^2+|F^×|^2)
The results in the detector coordinate frame are shown in FIG. <ref>.
It can be seen that the position perpendicular to the constellation plane has the highest response, implying that different orientations will affect detection capacity in the same configuration.
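The antenna pattern functions and the combined tensor response F can be evaluated directly from the expressions above; the following short Python sketch (with illustrative names, assuming γ=π/3) is one way to do so. Note that F is independent of the polarization angle ψ_s because the cross terms cancel.

```python
import numpy as np

# Low-frequency antenna pattern functions from the expressions above and the
# combined tensor response F (independent of the polarization angle psi_s).
def antenna_pattern(theta_d, phi_d, psi_s, gamma=np.pi/3):
    sg = np.sin(gamma)
    a = (1.0 + np.cos(theta_d)**2) * np.sin(2*phi_d)
    b = 2.0 * np.cos(theta_d) * np.cos(2*phi_d)
    f_plus = -0.5 * sg * (a*np.cos(2*psi_s) + b*np.sin(2*psi_s))
    f_cross = -0.5 * sg * (-a*np.sin(2*psi_s) + b*np.cos(2*psi_s))
    return f_plus, f_cross

def combined_response(theta_d, phi_d):
    f_plus, f_cross = antenna_pattern(theta_d, phi_d, psi_s=0.0)
    return np.sqrt(f_plus**2 + f_cross**2)
```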
Besides, the noise of the detector is another element that influences detection ability.
In this paper, we focus solely on the impact of instrument noise composed of acceleration noise and displacement noise when subtracting foreground confusion.
Therefore, an analytical model of the detector's sensitivity curve S_n(f) can be constructed from the sky average response function and instrument noise.
For LISA <cit.> and Taiji <cit.>, the sensitivity curve can be expressed as follows:
S_n(f) =10/(3L^2)[P_dp+2(1+cos^2(f/f_*))P_acc/(2π f)^4]
×[1+0.6(f/f_*)^2]
with
P_dp =S_x[1+(2mHz/f)^4]
P_acc =S_a[1+(0.4mHz/f)^2][1+(f/8mHz)^4]
For TianQin <cit.>, the sensitivity curve can be written in the form of:
S_n(f) =1/L^2[4S_a/(2π f)^4(1+0.4mHz/f)+S_x]
×[1+0.6(f/f_*)^2]
where f_*=c/(2π L) is the transfer frequency, c is the speed of light, L is the arm length, S_a is acceleration noise and S_x is displacement measurement noise, all of which are given in TABLE <ref>.
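A minimal Python sketch of the two analytic sensitivity models above is shown below; the arm length L and the noise PSDs S_a, S_x are inputs taken from TABLE <ref> and are not reproduced here.

```python
import numpy as np

C_LIGHT = 299792458.0  # m/s

# Analytic sensitivity models above; L [m], S_a and S_x are the acceleration
# and displacement noise PSDs (values from the table, not reproduced), f in Hz.
def sn_lisa_taiji(f, L, S_a, S_x):
    f_star = C_LIGHT / (2*np.pi*L)
    p_dp = S_x * (1.0 + (2e-3/f)**4)
    p_acc = S_a * (1.0 + (4e-4/f)**2) * (1.0 + (f/8e-3)**4)
    s = (10.0/(3.0*L**2)) * (p_dp + 2.0*(1.0 + np.cos(f/f_star)**2)*p_acc/(2*np.pi*f)**4)
    return s * (1.0 + 0.6*(f/f_star)**2)

def sn_tianqin(f, L, S_a, S_x):
    f_star = C_LIGHT / (2*np.pi*L)
    s = (1.0/L**2) * (4.0*S_a/(2*np.pi*f)**4 * (1.0 + 4e-4/f) + S_x)
    return s * (1.0 + 0.6*(f/f_star)**2)
```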
§.§ Alternative orbital configurations
LISA, Taiji, and TianQin are all scheduled to launch a triangular constellation composed of three spacecraft (S/C).
The difference is that LISA and Taiji apply heliocentric orbits, whereas TianQin applies geocentric orbits.
There are multiple orbital configurations to be chosen, as detailed in FIG. <ref>, FIG. <ref> and TABLE <ref>.
LISA includes three S/C forming a 2.5×10^6 km triangle trailing the Earth by 20^∘ on the Heliocentric orbit and the constellation plane has a 60^∘ inclination to the Ecliptic plane as shown in FIG. <ref> and FIG. <ref>.
Meanwhile, Taiji expects to use a LISA-like orbital configuration with a 3×10^6 km arm length and three different orbital configuration options available <cit.>.
The first configuration is called Taiji-p, which has the same inclination angle as LISA but is 20^∘ ahead of Earth. The second configuration is exactly the same as LISA, called Taiji-c. These two configurations are shown on the right side of FIG. <ref>. The third configuration named Taiji-m has an inclination of -60^∘ to the Ecliptic plane and a leading angle of 20^∘ to the Earth, as shown on the left side of FIG. <ref>.
Unlike LISA and Taiji, TianQin uses a Geocentric orbit with a √(3)×10^5 km arm length, hence the normal direction of the constellation plane will remain unchanged, pointing in the same direction <cit.>.
The two orbital configurations of TianQin are the different orientations of the normal directions of the constellation plane.
The normal direction of TianQin I points towards the tentative reference source RX J0806.3+1527 (pointing towards λ = 120.4^∘, β = -4.7^∘), while the normal direction of TianQin II is perpendicular to it (pointing towards λ = 30.4^∘, β = 0^∘), as shown in FIG. <ref> and FIG. <ref>.
The observation time varies with the orbital configuration.
LISA and Taiji are both year-round observation schemes, and Taiji's three alternative orbital configurations cannot operate simultaneously.
Different from the former, TianQin follows the “three months on + three months off” observation scheme, and TianQin I and TianQin II can operate simultaneously to fill the data gaps of each other <cit.>, which will be considered in the subtraction methodology in Sec. <ref>.
§ METHODOLOGY
§.§ Data analysis
The SNR ρ of a GB source, which plays an important role in judging resolvable sources, can be defined as:
ρ^2=(h|h)
where the inner product (·|·) is a generalisation of the time-domain correlation product and is conventionally defined as <cit.>:
(a|b) =4∫_0^∞df ã^*(f)b̃(f)/S_n(f)
≃2/S_n(f_0)∫_0^T_obsdt a(t)b(t)
where ã(f) and b̃(f) are the Fourier transformations of a(t) and b(t), S_n(f) is the sensitivity curve defined by Eq. <ref> and Eq. <ref>, T_obs is the observation duration.
Note that the second line of Eq. <ref> only holds when calculating a quasi-sinusoidal signal (quasi-monochromatic source) that has an almost constant noise PSD, and it can be seen that the SNR increases as the observation duration increases.
A quasi-sinusoidal signal like a GB can be represented in the spectrum using the Dirac delta function, so the signal appears as a single point with its amplitude in the spectrum. Therefore, the SNR of a GB in Eq. <ref> can be roughly calculated as follows, which is obtained by evaluating the SNR integral <cit.>:
ρ^2 =16/5𝒜^2T_obs/S_n(f_0)
where 𝒜 is the GW strain amplitude.
Eq. <ref> allows the SNR to be calculated more quickly than Eq. <ref>, and in the processing steps of Sec. <ref>, we use Eq. <ref> to quickly calculate and filter the optimal resolvable GBs.
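As a small illustration, the quick SNR estimate above can be coded in a few lines (names are illustrative):

```python
import numpy as np

# Quick SNR estimate for a quasi-monochromatic GB; sn_of_f is any callable
# returning the sensitivity at f0 (instrument noise plus confusion).
def snr_monochromatic(amp, f0, t_obs, sn_of_f):
    return np.sqrt(16.0/5.0 * amp**2 * t_obs / sn_of_f(f0))
```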
Usually, a GB with an SNR greater than 7 (ρ>7) is defined as a resolvable GB <cit.>, and we can analyze the uncertainties of the resolvable GBs using the Fisher information matrix (FIM), which is defined as:
Γ_ij=(∂ h/∂ξ_i|∂ h/∂ξ_j)
where ξ_i represents a parameter of the GB. For high-SNR signals (ρ≫ 1), the variance-covariance matrix is obtained from the inverse of the FIM, Σ=Γ^-1, where the diagonal elements represent the variance (or mean squared error) of each parameter, and the off-diagonal elements represent the covariance (or correlation) between the parameters <cit.>.
Therefore, the uncertainty of each parameter can be written as:
Δξ_i=√(Σ_ii)
Compared to the uncertainty of coordinates, the uncertainty of sky position is more commonly used, which can be obtained by combining the uncertainty of both coordinates <cit.>:
ΔΩ=2π|sinβ|√(Σ_ββΣ_λλ-Σ_βλ^2)
When calculating the FIM in Eq. <ref>, we use the following numerical differentiation approximation <cit.>:
∂ h/∂ξ_i≈h(t,ξ_i+δξ_i)-h(t,ξ_i-δξ_i)/2δξ_i
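A minimal Python sketch of the FIM computation with the central-difference derivative above is given below; the waveform function, parameter ordering and step sizes are placeholders rather than the actual implementation used in this work.

```python
import numpy as np

# Fisher-matrix sketch using central differences; `waveform(params, t)` returns
# the detector response h(t) for a numpy parameter vector, and the inner product
# uses the quasi-monochromatic approximation with a constant S_n(f_0).
def fisher_matrix(params, t, waveform, sn_f0, steps):
    dt = t[1] - t[0]
    derivs = []
    for i, step in enumerate(steps):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += step
        p_minus[i] -= step
        derivs.append((waveform(p_plus, t) - waveform(p_minus, t)) / (2.0*step))
    n = len(params)
    gamma = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # (a|b) ~ 2/S_n(f0) * integral a(t) b(t) dt for quasi-monochromatic signals
            gamma[i, j] = 2.0/sn_f0 * np.sum(derivs[i]*derivs[j]) * dt
    return gamma

# Parameter uncertainties then follow from the inverse FIM:
#   sigma = np.sqrt(np.diag(np.linalg.inv(gamma)))
```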
When considering network detection by multiple independent detectors, the total SNR and FIM can be obtained by the sum of the inner products calculated by each detector, which can be written as <cit.>:
ρ_net^2=∑_kρ_k^2=∑_k(h_k|h_k)
Γ_net=∑_kΓ_k=∑_k(∂ h_k/∂ξ_i|∂ h_k/∂ξ_j)
where k represents different independent detectors.
From Eq. <ref>, the sensitivity of the network can be obtained; its reciprocal is the sum of the reciprocal sensitivities of the individual detectors, which can be expressed as follows:
S_net^-1=∑_kS_k^-1
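The network combination rules above amount to summing SNRs in quadrature, summing FIMs, and harmonically combining sensitivities; a short illustrative sketch is:

```python
import numpy as np

# Network combination rules of the equations above.
def network_snr(snr_list):
    return np.sqrt(np.sum(np.asarray(snr_list)**2))

def network_fisher(fisher_list):
    return np.sum(np.asarray(fisher_list), axis=0)

def network_sensitivity(sn_list):
    return 1.0 / np.sum(1.0/np.asarray(sn_list), axis=0)
```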
§.§ Subtraction of the foreground confusion
For population simulation of GBs, we used the population datasets from the first “new” LISA Data Challenge (LDC), codenamed , which contains approximately 30 million GB sources in the milli-Hertz band <cit.>.
For the convenience of data processing, we select 1% of the GBs in (3×10^5 GBs) and multiply them to achieve the same amplitude level as the actual situation to generate the galactic foreground.
The number of 3×10^5 GBs is sufficient to include the same parameter distribution 3×10^7 GBs in , and the number of the resolvable GBs should be 1% of that in .
Notice that although the multiplication operation was performed during the generation of the galactic foreground, which would increase the amplitude of a single signal, the smoothed spectrum is used in subsequent processing to obtain the same amplitude as without affecting the calculated SNR.
The basic steps for subtracting foreground confusion are shown in FIG. <ref>, which can be summarized as follows<cit.>:
* Simulate the superposition h(t) of 3×10^5 GBs in the time domain and then calculate the power spectral density (PSD) of the galactic foreground. Run a running median on the PSD to estimate the foreground confusion S_c(f).
* Roughly calculate the optimal SNR ρ under the sensitivity curve of instrument noise S_n(f) using Eq. <ref>, and consider GBs with an optimal SNR greater than 3 (ρ>3) as optimal resolvable GBs, which can quickly filter out 99.6% of unresolved GBs.
* For the ith optimal resolvable GB, the sensitivity curve is formed by adding instrument noise and foreground confusion (S_n(f)+S_c(f)), and the SNR ρ_i is calculated using Eq. <ref> and Eq. <ref>. If the SNR is less than 7 (ρ_i<7), skip and repeat the method to calculate the SNR of the (i+1)th optimal resolvable GB. If the SNR is greater than 7 (ρ_i≥7), the GB is resolvable, and then continue with the next subtraction step.
* Subtract the ith GB signal in the time domain (h(t)-h_i(t)) and use the method in Step <ref> to re-estimate the subtracted galactic confusion. Repeat Steps <ref> and <ref>, continuously subtracting resolvable GBs and re-estimating galactic confusion until all optimal resolvable GBs are calculated.
* Repeat Steps <ref>, <ref> and <ref> for the remaining optimal resolvable GBs until no further GBs can be subtracted, leaving the galactic confusion composed of unresolved GBs.
* Recalculate the SNR and FIM of the resolvable GBs using the final subtracted galactic confusion.
In the above steps, it is assumed that the resolvable GB can be subtracted perfectly without residual error, which will not be achievable in practice, and the subtraction error should be considered <cit.>.
When generating the time-domain galactic foreground signal, we set the time when the Earth is at the vernal equinox as zero time (t=0), and simulate observations with different durations (T_obs={0.5,1,2,4} years) to subtract the galactic confusion using the above basic steps. For observations on the networks, we use the method of Eq. <ref> to calculate the SNR and FIM, and obtain the results on the different networks.
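For clarity, a high-level Python sketch of the iterative subtraction procedure (Steps 1-6) is given below; the helpers for the running-median confusion estimate, the quick SNR estimate and the GB time series are stand-ins for the routines described above, not the actual code used in this work.

```python
# High-level sketch of Steps 1-6; `estimate_confusion` (running median on the
# PSD, returning a callable in f), `quick_snr` and `gb_time_series` are
# illustrative stand-ins for the routines described above.
def subtract_foreground(h_total, candidates, sn_inst, t_obs,
                        estimate_confusion, quick_snr, gb_time_series):
    resolvable = []
    s_c = estimate_confusion(h_total)          # Step 1: confusion estimate
    changed = True
    while changed:                             # Step 5: iterate until no change
        changed = False
        for gb in list(candidates):
            rho = quick_snr(gb, lambda f: sn_inst(f) + s_c(f), t_obs)  # Step 3
            if rho >= 7.0:                     # resolvable GB
                h_total = h_total - gb_time_series(gb)   # Step 4: subtract in time domain
                s_c = estimate_confusion(h_total)        # re-estimate the confusion
                resolvable.append(gb)
                candidates.remove(gb)
                changed = True
    return resolvable, s_c                     # Step 6: final confusion for SNR/FIM
```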
§ RESULTS
§.§ Resolvable GBs
Using the method in Sec. <ref>, we simulated and calculated the number of resolvable GBs detected on different detectors and their networks at different observation times, as shown in FIG. <ref>.
As expected, FIG. <ref> illustrates that as the observation time increases, the number of resolvable GBs also increases, as implied by Eq. <ref> and Eq. <ref>.
Given the observation time, for a single detector, the number of detected resolvable GBs in descending order is: Taiji-m, Taiji-p (c), LISA, TianQin I, and TianQin II, which is mainly determined by the arm length and orientation of the detector.
In terms of arm length, it can be seen from Eq. <ref> and Eq. <ref> that a longer detector arm length results in better sensitivity.
Moreover, from TABLE <ref>, it can be seen that Taiji's arm length (3×10^9 m) is the longest, followed by LISA's arm length (2.5×10^9 m), and TianQin's arm length (√(3)×10^8 m) is the shortest, making Taiji detect more resolvable GBs than LISA and TianQin.
In terms of orientation, FIG. <ref> shows that the detector is most sensitive to signals perpendicular to the constellation plane position (θ_d=0^∘ or 180^∘).
The density of GBs in the bulge region of the Galaxy is significantly higher than that in the disk region <cit.>, therefore the closer the normal direction of the detector constellation plane is to the Galactic Center (λ = 266.8^∘,β = -5.6^∘), the greater the detector response and the more resolvable GBs can be detected.
From FIG. <ref>, it can be seen that the normal direction of Taiji-m (β = -30^∘ and β = 60^∘) is closer to the Galactic Center compared to Taiji-p (c) (β = 30^∘ and β = -60^∘) over a year, and the normal direction of TianQin I (λ = 120.4^∘,β = -4.7^∘ and λ = 300.4^∘,β = 4.7^∘) is also closer to the Galactic Center than TianQin II (λ = 30.4^∘,β = 0^∘ and λ = 210.4^∘,β = 0^∘). Therefore, Taiji-m detects more resolvable GBs than Taiji-p (c), and TianQin I detects more than TianQin II.
For detection on networks, just like the result in a single detector, the arm length and orientation of the detector are the major factors in resolvable GBs detection.
Because Taiji and LISA have longer arm lengths than TianQin, the networks of Taiji and LISA detect more resolvable GBs than individual Taiji or LISA, but the improvement is not significant compared to TianQin's network.
Eq. <ref> indicates that the reciprocal sensitivity on the network is the sum of the reciprocal sensitivities of each detector. Therefore, as the number of detectors in the network increases, the sensitivity of the network increases, but the increase rate decreases.
In summary, it can be concluded that as the number of detectors on the network increases, the number of resolvable GBs detected will also increase.
The optimal result will be achieved when LISA, Taiji-m, TianQin I and TianQin II are combined as a network.
§.§ Improvement of sensitivity
In order to better show the impact of the foreground confusion on the sensitivity curve, and its subtraction by different numbers of detectors on the network, we fit the foreground confusion on a logarithmic scale with a polynomial function, which can be written as follows <cit.>:
S_c(f) = 10^x
with
x = ∑_n = 0^5a_n[ log 10( f/1 mHz) ] ^n
This fitting is only applicable to the frequency range of 0.1∼6 mHz, and the fitting parameters a_n are listed in TABLE <ref>.
The choice of fitting function can affect the final curve, so the fitting parameters given in TABLE <ref> and the curves drawn in FIG. <ref> are only meant as a reference. In our previous calculations, we estimate the foreground confusion using a running median on the PSD.
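The fit itself is a fifth-order polynomial in log10(f/1 mHz); a minimal evaluation sketch (with the coefficients a_0..a_5 taken from TABLE <ref>, not reproduced here) is:

```python
import numpy as np

# Evaluation of the log-polynomial fit above; `a` holds the six coefficients
# a_0..a_5 from the table (not reproduced here) and f is in Hz.
def confusion_fit(f, a):
    x = np.log10(f/1e-3)                  # log10(f / 1 mHz)
    exponent = sum(a_n * x**n for n, a_n in enumerate(a))
    return 10.0**exponent                 # valid roughly for 0.1-6 mHz
```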
In FIG. <ref>(a), we plotted the sensitivity curve of a single detector, and it can be seen that in the part where the foreground confusion affects, the sensitivity curve generated by instrument noise is better in Taiji than in LISA than in TianQin.
In the range of 0.8∼1.5 mHz, the full sensitivity curves of LISA and Taiji are almost identical, because Taiji has a larger response to the resolvable GBs, resulting in greater foreground confusion.
In the range of 1.5∼3.5 mHz, the full sensitivity of Taiji-m is superior to that of Taiji-p (c), as Taiji-m can detect more resolvable GBs than Taiji-p (c), resulting in lower subtracted foreground confusion.
In the 2∼6 mHz range, the full sensitivity of TianQin I is slightly lower than that of TianQin II, again because TianQin I has a greater response to the resolvable GBs.
In FIG. <ref>(b), we show the sensitivity curves of different numbers of detectors on the network.
It can be seen that as the number of detectors on the network increases, the sensitivity curve of instrument noise decreases.
Moreover, because in this range, the sensitivity of TianQin is much lower than that of LISA and Taiji, the sensitivity curve of instrument noise only slightly changes after adding TianQin to the network.
As the number of detectors on the network increases, more GBs become resolvable and the subtracted foreground confusion becomes smaller, which demonstrates the advantage of network detection for subtracting the foreground confusion.
§.§ SNR and uncertainty
In addition to the number of resolvable GBs detected and the sensitivity curve containing foreground confusion, the parameter uncertainties of the resolvable GBs are also crucial. Therefore, we calculated the FIM on different networks (choosing TJm, LISA+TJm, LISA+TJm+TQI and LISA+TJm+TQI+II, which yield the largest numbers of resolvable GBs with 1, 2, 3 and 4 detectors, respectively) using Eq. <ref> ∼ Eq. <ref> to obtain the uncertainties of the different parameters, as shown in FIG. <ref>.
For the results on the right side of FIG. <ref>, which consider only the resolvable GBs detected by Taiji-m, it can be clearly seen that as the number of detectors on the network increases, the SNR increases while the parameter uncertainties decrease.
This is due to the sensitivity improvement for the increased number of detectors on the network.
Similar to the increase rate in the number of resolvable GBs described in Sec. <ref>, the magnitude of changes in SNR and uncertainty will decrease as the number of detectors on the network increases.
Increasing from one detector to two has a significant effect, but increasing from two to three is relatively less significant.
Unlike the above situation, in actual detection, the resolvable GB detected by different detector combinations is different.
From the result on the left side of FIG. <ref>, it can be seen that the changes in SNR and uncertainty of resolvable GBs detected on different networks are not as significant as those of the same resolvable GBs.
Apart from decreases in the uncertainties of the GW strain amplitude, frequency, and sky position, the remaining parameters show almost no significant changes, and some uncertainties even increase rather than decrease.
For example, the initial phase and polarization angle show a slight increase when the number of detectors on the network increases from three to four.
This is because as the number of detectors on the network increases, the sensitivity improves, making many unresolved GBs become resolvable GBs, adding more low-SNR resolvable GBs.
Therefore, it is possible that as the number of detectors on the network increases, uncertainty increases instead of decreasing, and SNR decreases instead of increasing.
Nonetheless, as the number of detectors on the network increases, the SNR of the same resolvable GBs increases, and uncertainty decreases. Moreover, after adding more low-SNR resolvable GBs, the overall SNR remains almost unchanged, with some uncertainties significantly decreasing and others slightly increasing, which is sufficient to demonstrate the positive impact of increasing the number of detectors on the network.
Moreover, the GW detection of resolvable GBs is helpful for detection in the EM bands, constituting Multi-messenger astronomy <cit.>.
The more accurate the GW-determined parameters of the resolvable GBs, i.e. the lower the uncertainties, the more conducive they are to EM detection.
If the sky position of the source is sufficiently accurate, it is possible to search for EM counterparts through EM follow-up observations.
Among all resolvable GBs, the uncertainty of the sky position is less than 1 deg^2 (ΔΩ<1 deg^2) for 30.2∼31.6% of resolvable GBs, and less than 0.1 deg^2 (ΔΩ<0.1 deg^2) for 9.6∼10.3%.
It can be seen from the data in FIG. <ref> that among all parameters, the frequency of the resolvable GBs is measured most accurately: the relative uncertainty Δ f_0/f_0 is less than 1×10^-6 for 29.2∼32.3% of the GBs (Δ f_0/f_0<1×10^-6). Since the GW frequency f_0 is directly related to the period T_p of the resolvable GBs (f_0=2/T_p), the period can also be measured accurately.
Note that as the number of detectors on the network increases, the above proportions also increase.
On the contrary, the results of EM detection can also serve as a prior to reduce the uncertainty of GW detection.
We adopt the method in Ref. <cit.>, which can be used to reduce the uncertainty of parameters from GW data by removing the respective rows and columns in the FIM.
By observing GBs, the inclination angle ι can be independently determined by EM detection, and we assume that the inclination angle of resolvable GBs can be completely determined.
By calculating the uncertainties of the other parameters with the reduced FIM, we find that only the uncertainty on Δ𝒜 /𝒜 changes significantly, with the mean uncertainty decreasing by 91.9∼93.5% and the median uncertainty decreasing by 60.8∼61.9%.
From Eq. <ref>, there is degeneracy between GW strain amplitude 𝒜 and inclination angle ι, which is why determining the inclination angle can significantly improve the measurement of amplitude.
Using the same method, we assume that the EM counterparts can be found through EM detection, that is, the sky position (λ,β) is completely determined. Therefore, the mean uncertainty on ϕ_0 is reduced by 25.8∼33.6%, the median uncertainty is reduced by 25.1∼26.9%, and other parameters will have a decrease of 2∼9%.
Notice that the above situations are all very idealized and are based on the assumption that a certain parameter of all resolvable GBs is completely determined, which cannot be achieved in practice. Even so, it can also indicate that there is feasibility in reducing the parameter uncertainty of GW detection through EM detection.
In summary, GW detection and EM detection can complement each other, and as the number of detectors on the network increases, the improvement of both will be greater.
§ SUMMARY AND DISCUSSION
In this paper, we used 1% of the data in LDC, which is 3×10^5 GBs, to simulate the galactic foreground by overlapping GBs as quasi-sinusoidal signals. We treated GB with the SNR greater than 7 as resolvable GBs, studied the number of detected resolvable GBs under different detector combinations and their alternative orbital configurations on the network, calculated the parameter uncertainties of resolvable GBs, and plotted the fitted full sensitivity curve.
Through the iterative method, we predict the number of resolvable GBs detected by different detector combinations on the network.
In the single detectors, the number of resolvable GBs is arranged in descending order of detected quantity: Taiji-m, Taiji-p (c), LISA, TianQin I, and TianQin II.
The trend of the results for different detector combinations on the network is also similar to that of a single detector.
The optimal combinations for each number of detectors on the network are TJm, LISA+TJm, LISA+TJm+TQI, and LISA+TJm+TQI+II.
Based on the above optimal combinations, we calculate the uncertainty of the parameters of resolvable GBs using FIM.
As the number of detectors on the network increased, the uncertainty of the same resolvable GBs decreased, and the magnitude of the decrease also decreased.
The uncertainty remained reduced or almost unchanged even when more low-SNR resolvable GBs were detected.
Resolvable GBs with low uncertainty can help EM detection find electromagnetic counterparts and determine the period of GBs, while EM detection can also serve as a prior to reducing the uncertainty of GW detection.
We find that determining the inclination angle through EM detection can reduce GW strain amplitude uncertainty by ∼93%, and determining the sky position can reduce the phase uncertainty by ∼30%.
Therefore, GW joint detection on the network can complement EM detection, which is conducive to the development of Multi-messenger astronomy.
By fitting the full sensitivity curve containing foreground confusion, it is possible to intuitively see the effect of a single detector and different combinations of detectors on the network on subtracting foreground confusion.
The effect of subtracting the foreground confusion is basically proportional to the number of resolvable GBs detected: the more detectors in the network, the better the subtraction.
In addition, it should be noted that no space-based GW detector has been launched so far, so the data related to space-based GW detectors are simulated and predicted.
In fact, during the observation, the noise is assumed to be Gaussian and stationary, and the data quality is assumed to be optimal and uninterrupted <cit.>.
We use SNR to define thresholds and distinguish resolvable GBs, which is very useful and efficient to estimate foreground confusion.
Moreover, we assume that the subtraction of GBs is perfect without residual, which leads to our results being optimal and ideal.
Some new and more practical methods have been proposed, such as iterative subtraction based on Particle swarm optimization algorithm <cit.>, search and subtraction using Bayesian evidence ratio <cit.>.
In future research, we can delve into multiple aspects to improve our understanding and accuracy of foreground confusion.
Firstly, we can further investigate the relationship between GW detection and EM detection, exploring how to better combine GW detectors and EM detectors to enhance observation and understanding of GBs <cit.>.
Secondly, we can delve deeper into the impact of time-delay interferometry (TDI) technology on the foreground confusion, as well as the subtraction of the foreground confusion with different TDI generations and channels <cit.>.
In addition, we can also consider the impact of different population models on foreground confusion to better understand the population distribution and evolution theory of GBs.
Finally, we can also consider the impact of foreground confusion on other GW sources to better evaluate the sensitivity and accuracy of GW detection, and use foreground noise to improve the data processing and analysis methods.
In conclusion, through in-depth research on the above aspects, we can further improve our understanding and accuracy of GW detection, so as to better explore the essence and evolution history of astrophysical events, and provide more valuable data and information for research in Cosmology, Astrophysics and other fields.
§ COORDINATE TRANSFORMATION
The transformation between detector coordinates (ϕ_d,θ_d) and Ecliptic coordinates (λ,β) is based on the method described in Ref. <cit.>, and the situation in both coordinate frames is shown in FIG. <ref>.
We can use a rotation matrix R to connect detector coordinates X^d={sinθ_d cosϕ_d,sinθ_d sinϕ_d,cosθ_d} and Ecliptic coordinates X^e={cosβcosλ,cosβsinλ,sinβ}, which can be expressed as:
X^e =RX^d
X^d =R^-1X^e
For LISA and Taiji:
R=
([ cosθ_l cos ^2α_d+sin ^2α_d (cosθ_l-1) sinα_dcosα_d -sinθ_l cosα_d; (cosθ_l-1) sinα_dcosα_d cosθ_l sin ^2α_d+cos ^2α_d -sinθ_l sinα_d; sinθ_l cosα_d sinθ_l sinα_d cosθ_l; ])
For TianQin:
R=
([ cosθ_t qcosϕ_t qsinα_d+sinϕ_t qcosα_d cosθ_t qcosϕ_t qcosα_d-sinϕ_t qsinα_d sinθ_t qcosϕ_t q; cosθ_t qsinϕ_t qsinα_d-cosϕ_t qcosα_d cosθ_t qsinϕ_t qcosα_d+cosϕ_t qsinα_d sinθ_t qsinϕ_t q; -sinθ_t qsinα_d -sinθ_t qcosα_d cosθ_t q ])
where α_d=2π f_sct+2π/3(n-1)+α_0, n is the nth S/C, α_0 is the initial phase, f_sc=1/T_sc and T_sc is the rotation period.
For TianQin, T_sc=3.65 days and f_sc≃3×10^-3 mHz, while for LISA and Taiji, T_sc=1 year and f_sc≃3×10^-5 mHz.
The angles in the rotation matrix R can be determined from FIG. <ref>.
For LISA, Taiji-p and Taiji-c, θ_l=60^∘ and for Taiji-m, θ_l=120^∘.
For TianQin I, θ_tq=94.7^∘,ϕ_tq=120.4^∘ and for TianQin II, θ_tq=90^∘,ϕ_tq=30.4^∘.
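A minimal Python sketch of this transformation, following the rotation matrices above, is given below; the function names are illustrative.

```python
import numpy as np

# Rotation matrices from the expressions above and the mapping X^e = R X^d from
# detector-frame to Ecliptic-frame coordinates.
def rotation_lisa_taiji(alpha_d, theta_l):
    c, s = np.cos(alpha_d), np.sin(alpha_d)
    ct, st = np.cos(theta_l), np.sin(theta_l)
    return np.array([[ct*c**2 + s**2, (ct - 1.0)*s*c, -st*c],
                     [(ct - 1.0)*s*c, ct*s**2 + c**2, -st*s],
                     [st*c,           st*s,            ct]])

def rotation_tianqin(alpha_d, theta_tq, phi_tq):
    c, s = np.cos(alpha_d), np.sin(alpha_d)
    ct, st = np.cos(theta_tq), np.sin(theta_tq)
    cp, sp = np.cos(phi_tq), np.sin(phi_tq)
    return np.array([[ct*cp*s + sp*c, ct*cp*c - sp*s, st*cp],
                     [ct*sp*s - cp*c, ct*sp*c + cp*s, st*sp],
                     [-st*s,          -st*c,          ct]])

def detector_to_ecliptic(theta_d, phi_d, R):
    x_d = np.array([np.sin(theta_d)*np.cos(phi_d),
                    np.sin(theta_d)*np.sin(phi_d),
                    np.cos(theta_d)])
    x_e = R @ x_d
    return np.arctan2(x_e[1], x_e[0]), np.arcsin(x_e[2])   # (lambda, beta)
```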
|
http://arxiv.org/abs/2307.05010v1 | 20230711044151 | Spin-splitting in electric-potential-difference antiferromagnetism | [
"San-Dong Guo"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
The antiferromagnetic (AFM) materials are robust to external
magnetic perturbation due to missing any net magnetic moment. In general, the spin splitting in the band
structures disappears in these antiferromagnets. However, altermagnetism can achieve spin-split bands in collinear symmetry-compensated antiferromagnets with special magnetic space groups. Here, we propose a new
mechanism that can achieve spin splitting in two-dimensional (2D) Janus A-type
AFM materials. Since the built-in electric field caused by the Janus structure creates a layer-dependent electrostatic potential, the electronic bands in different layers are staggered, producing the spin splitting, which we call electric-potential-difference antiferromagnetism (EPD-AFM).
We demonstrate that Janus monolayer Mn_2ClF is a possible candidate to achieve the EPD-AFM by the first-principles calculations.
It is proposed that the spin splitting can be tuned in EPD-AFM by piezoelectric effect.
Our works provide a new design principle for generating spin
polarization in 2D AFM materials.
Spin-splitting in electric-potential-difference antiferromagnetism
San-Dong Guo
August 12, 2023
==================================================================
§ INTRODUCTION
The spin splitting in the band structures can be produced by
utilizing the effect of spin-orbit coupling (SOC)<cit.>.
A general form of the SOC Hamiltonian H_SOC in solid-state materials with a lack of inversion symmetry can be expressed as<cit.>:
H_SOC=Ω⃗(k⃗)·σ⃗=α(E⃗×k⃗)·σ⃗
where Ω⃗(k⃗) is known as a spin-orbit field (SOF), acting as an effective magnetic field, α is the strength of the SOC, E⃗ is the local electric field induced by the crystal inversion asymmetry, k⃗ is the wave vector, and σ⃗=(σ_x, σ_y, σ_z) are the Pauli matrices.
If a two-dimensional (2D) material possesses out-of-plane built-in electric field, <ref> will become:
H_SOC=α_R(k_xσ_y-k_yσ_x)
This is known as the Rashba SOC Hamiltonian<cit.>, and α_R is the so-called Rashba parameter. Here, the spin
S only has the in-plane components S_x and S_y, which depend on the momentum of electrons.
The impurities and defects can change the momentum of electrons, which can
randomize the spin due to the k-dependent SOF, and then induce spin decoherence through the Dyakonov-Perel (DP) mechanism<cit.>.
If a 2D material possesses in-plane built-in electric field, for example along x direction, <ref> will be reduced into:
H_SOC=α_Dk_yσ_z
Here, the spin S only has the out-of-plane component S_z. The SOF orientation of <ref> is
unidirectional, which will
lead to a spatially periodic mode of the spin polarization, known as the persistent spin helix (PSH)<cit.>. The PSH can suppress spin dephasing due to SU(2) spin rotation symmetry, producing an
extremely long spin lifetime<cit.>.
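As a simple illustration of the two limiting cases above, the SOC terms can be written as 2×2 matrices in spin space; the following Python sketch (with illustrative coupling strengths) shows the Rashba form and the unidirectional PSH form.

```python
import numpy as np

# The two limiting SOC Hamiltonians above as 2x2 matrices in spin space;
# alpha_R and alpha_D are illustrative coupling strengths.
SIGMA_X = np.array([[0, 1], [1, 0]], dtype=complex)
SIGMA_Y = np.array([[0, -1j], [1j, 0]])
SIGMA_Z = np.array([[1, 0], [0, -1]], dtype=complex)

def h_rashba(kx, ky, alpha_R):
    # out-of-plane built-in field: H = alpha_R (k_x sigma_y - k_y sigma_x)
    return alpha_R * (kx*SIGMA_Y - ky*SIGMA_X)

def h_psh(ky, alpha_D):
    # in-plane built-in field along x: H = alpha_D k_y sigma_z (unidirectional SOF)
    return alpha_D * ky * SIGMA_Z

# The spin splitting at a given k is twice the positive eigenvalue, e.g.
# 2*np.max(np.linalg.eigvalsh(h_rashba(0.1, 0.0, alpha_R=1.0)))
```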
The spin splitting can also be observed in ferromagnetic
(FM) materials. Superior to FM materials, the antiferromagnetic (AFM) materials are robust to external
magnetic perturbation due to missing any net magnetic moment, which allows high-speed device operation<cit.>.
In general, the spin splitting in the band
structures is lacking in these antiferromagnets.
However, spin splitting has been realized without SOC in collinear symmetry-compensated antiferromagnets, which is called
altermagnetism<cit.>. Several 2D materials
have been predicted to be altermagnetic materials, such as Cr_2O_2<cit.>, Cr_2SO<cit.> and V_2Se_2O<cit.>.
Here, we propose a new
mechanism to achieve spin splitting in AFM materials. For a 2D material, the magnetic atoms have opposite layer spin polarization, namely A-type AFM ordering. If the out-of-plane built-in electric field is lacking, the degeneration of electron spin in the band structures is observed (<ref> (a) and (b)). For a 2D Janus material, the magnetic configuration is still A-type AFM ordering, but it has an out-of-plane built-in electric field E_b, which will destroy the degeneration of electron spin in the band structures (<ref> (c) and (d)). This is because the built-in electric field creates a layer-dependent electrostatic potential, and the electronic bands in different layers will stagger, which gives rise to the spin splitting. The spin splitting in 2D Janus A-type AFM materials can be called electric-potential-difference antiferromagnetism (EPD-AFM).
Recently, the electric-field control of spin polarization
in the 2D A-type AFM semiconductor Mn_2Cl_2 has been reported, and 100% spin polarization can be achieved via an electric field<cit.>.
Based on Mn_2Cl_2, a Janus monolayer Mn_2ClF is constructed by replacing one of the two Cl layers with F atoms, and it is shown to be a possible candidate for realizing EPD-AFM by first-principles calculations. Calculated results show that the EPD-AFM in Mn_2ClF is robust against electronic correlation. The piezoelectric properties of Mn_2ClF are also investigated, and the out-of-plane piezoelectric response may be used to tune the spin splitting. These findings enrich the types of spin splitting, which is useful for spintronic device applications.
§ COMPUTATIONAL DETAIL
Within density functional theory (DFT)<cit.>, the spin-polarized first-principles calculations are carried out within the projector augmented-wave (PAW) method by using the standard VASP code<cit.>. We use the generalized gradient
approximation of Perdew-Burke-Ernzerhof (PBE-GGA)<cit.> as the exchange-correlation functional. To account for electron correlation of Mn-3d orbitals, we use a Hubbard correction U_eff=4.00 eV<cit.> within the
rotationally invariant approach proposed by Dudarev et al.
A kinetic energy cutoff of 500 eV, a total energy convergence criterion of 10^-8 eV, and a force convergence criterion of 0.0001 eV·Å^-1 are set to obtain accurate results.
A vacuum of more than 16 Å is used to avoid out-of-plane interaction.
The elastic stiffness tensor C_ij and piezoelectric stress tensor e_ij are calculated by using the strain-stress relationship (SSR) method and the density functional perturbation theory (DFPT) method<cit.>, respectively. The 2D coefficients are renormalized as C^2D_ij=L_zC^3D_ij and e^2D_ij=L_ze^3D_ij, where L_z is the length of the unit cell along the z direction. We use a 21×21×1 k-point mesh to sample the Brillouin zone (BZ) for calculating electronic structures and elastic properties, and a 10×21×1 k-point mesh for piezoelectric calculations.
The interatomic force constants (IFCs) are calculated by using a 5×5×1 supercell within finite displacement method, and the phonon dispersion spectrum can be calculated by the Phonopy code<cit.>. The elastic, piezoelectric, phonon and ab-initio molecular dynamics (AIMD) calculations are all performed with AFM1 magnetic configuration.
§ CRYSTAL STRUCTURE AND STABILITY
Monolayer Mn_2ClF possesses a crystal structure similar to that of Mn_2Cl_2<cit.>, consisting of four atomic layers in the sequence Cl-Mn-Mn-F (see <ref> (e) and (f)). It is clearly seen that the magnetic Mn atoms are distributed in two layers, and an intrinsic polar electric field along the out-of-plane direction can be induced due to the different electronegativity of the Cl and F elements, which provides the possibility of realizing EPD-AFM.
The Janus monolayer Mn_2ClF can be constructed by replacing one of two Cl layers with F atoms in monolayer Mn_2Cl_2. The Mn_2Cl_2 possesses P3̅m1 space group (No.164), and
the space group of Mn_2ClF is reduced into P3m1 (No.156) due to broken horizontal mirror symmetry, which will produce both in-plane and out-of-plane piezoelectricity.
To determine the magnetic ground state of Mn_2ClF, the rectangular supercell (see <ref> (e)) is used to construct
FM and three AFM configurations (AFM1, AFM2 and AFM3). These magnetic configurations are shown in FIG.1 of the electronic supplementary information (ESI), and AFM1 is called the A-type AFM state. Calculated results show that the AFM1 configuration is the ground state of Mn_2ClF, and its energy per unit cell is 0.43 eV, 0.32 eV and 0.23 eV lower than those of the FM, AFM2 and AFM3 cases by GGA+U. The optimized lattice constants are a=b=3.43 Å by GGA+U for the AFM1 case. The magnetic easy-axis is confirmed by the magnetic anisotropy energy (MAE), which is defined as the energy
difference of the magnetization orientation along the (100)
and (001) directions within SOC. The calculated MAE is only 1 μeV/Mn, which indicates that the easy-axis of Mn_2ClF is out-of-plane.
To validate the dynamic, thermal and mechanical stabilities of Mn_2ClF, the
phonon spectra, AIMD and elastic constants are calculated, respectively.
The calculated phonon spectrum of Mn_2ClF with no obvious imaginary frequencies is plotted in FIG.2 of ESI, indicating its dynamic stability. The AIMD simulations using NVT ensemble are carried out for more than
8000 fs with a time step of 1 fs by using a 4×4×1 supercell at 300 K. According to FIG.3 of ESI,
during the simulation, the crystal structures of Mn_2ClF are maintained without structural fracture, and
the energies are kept stable, confirming
its thermal stability.
Two independent elastic constants C_11 and C_12 of Mn_2ClF are 56.66 Nm^-1 and 17.22 Nm^-1, which satisfy the Born criteria of mechanical stability:
C_11>0 and C_11-C_12>0<cit.>, confirming its mechanical stability.
§ ELECTRONIC STRUCTURES
The magnetic moments of bottom and top Mn atoms are 4.57 μ_B and -4.52 μ_B, and total magnetic moment per unit cell is strictly 0.00 μ_B.
In general, no spin splitting can be observed for AFM material. However, our proposed Mn_2ClF shows obvious spin splitting from calculated energy band structures without SOC in <ref> (a). This is very different from energy band structures of Mn_2Cl_2 (see FIG.4 of ESI), where no spin splitting exists. This difference is because the Mn_2ClF possesses the out-of-plane polar electric field, while the built-in electric field of Mn_2Cl_2 disappears.
It is clearly seen that Mn_2ClF is an indirect band gap semiconductor with a gap value of 1.043 eV. The valence band maximum (VBM) and conduction band minimum (CBM) are at the high-symmetry K/-K and M points, respectively, and both are provided by the same spin-up channel.
When including the SOC, the energy band
structures of Mn_2ClF have very small changes, and it is still an indirect bandgap semiconductor with reduced gap value of 1.028 eV (<ref> (b)).
Without considering SOC, the K and -K valleys of valence bands are exactly
degenerate (<ref> (c)). However, when SOC is switched on, the energy
degeneracy between the K and - K valleys is lifted due to broken space- and time-inversion symmetries, leading to an interesting phenomenon of the
spontaneous valley polarization with very small valley splitting of 4.3 meV (<ref> (d)). This is different from the common valley splitting in FM materials<cit.>. Recently, the spontaneous valley
polarization is also predicted in 2D AFM Mn_2P_2S_3Se_3 with a valley splitting of 16.3 meV<cit.>.
For Mn_2ClF, the layer-characters energy band structures without SOC and with SOC are plotted in <ref>.
Calculated results show that the weights of spin-up and spin-down of both valence and conduction bands are reversed in different Mn layers (<ref> (a) and (b)), which gives rise to the obvious spin splitting. According to <ref> (c), it is clearly seen that two Mn layers are non-equivalent due to a layer-dependent electrostatic potential caused by the built-in electric field.
The electronic correlation can produce important effects on the magnetic ground state, electronic structures and topological properties of 2D magnetic materials<cit.>. To confirm robust EPD-AFM, the electronic correlation effects on the physical properties of Mn_2ClF are considered by using different U values. Firstly, the lattice constants a of Mn_2ClF are optimized by GGA+U (0-5 eV), and then the related physical properties are calculated. Based on FIG.5 of ESI, the lattice constant a (3.286 Å-3.447 Å) increases with increasing U.
To achieve EPD-AFM, the AFM1 magnetic configuration as the ground state of Mn_2ClF is a crucial factor. So, the energy differences between FM/AFM2/AFM3 and AFM1 (per unit cell) as a function of U are plotted in <ref>.
It is found that Mn_2ClF always has the AFM1 ground state in the considered U range.
The evolutions of energy band
structures as a function of U are plotted in <ref>, and the total gap vs U is shown in FIG.6 of ESI.
In the considered U range, Mn_2ClF is always an indirect gap semiconductor and shows obvious spin splitting.
The VBM and CBM are always at high symmetry K/-K and M points, which are provided by the same spin-up channel.
Finally, the MAE as a function of U is plotted in FIG.7 of ESI. When U is less than about 4.7 eV, the out-of-plane magnetic anisotropy can be maintained. These results show that the EPD-AFM of Mn_2ClF is robust.
§ PIEZOELECTRIC PROPERTIES
The Mn_2Cl_2 monolayer possesses no piezoelectricity because of inversion symmetry. However, due to broken horizontal mirror symmetry, the monolayer Mn_2ClF has both in-plane and out-of-plane piezoelectricity. The piezoelectric response of a material can be described by the third-rank piezoelectric stress tensor e_ijk and strain tensor d_ijk, which can be expressed as the sum of ionic and electronic contributions:
e_ijk=∂ P_i/∂ε_jk=e_ijk^elc+e_ijk^ion
d_ijk=∂ P_i/∂σ_jk=d_ijk^elc+d_ijk^ion
where P_i, ε_jk and σ_jk are the polarization vector, strain and stress, respectively. The superscripts elc/ion denote the electronic/ionic contributions. The e_ijk^elc and d_ijk^elc are called clamped-ion piezoelectric coefficients, while the e_ijk and d_ijk are called relaxed-ion piezoelectric coefficients. The e_ijk is related to d_ijk by the elastic tensor C_mnjk:
e_ijk=∂ P_i/∂ε_jk=∂ P_i/∂σ_mn.∂σ_mn/∂ε_jk=d_imnC_mnjk
By using Voigt notation, when only considering the in-plane strain and stress<cit.>, the <ref> with P3m1 symmetry can be reduced into:
(
[ e_11 -e_11 0; 0 0 -e_11; e_31 e_31 0; ])
=(
[ d_11 -d_11 0; 0 0 -2d_11; d_31 d_31 0; ])
(
[ C_11 C_12 0; C_12 C_11 0; 0 0 (C_11-C_12)/2; ])
With an imposed uniaxial in-plane strain, both in-plane and out-of-plane piezoelectric polarization can be produced (e_11/d_11≠0 and e_31/d_31≠0). However, when a biaxial in-plane strain is applied, the
in-plane component will disappear (e_11/d_11=0), but the out-of-plane component still exists (e_31/d_31≠0). By solving Eq. <ref>, the two independent coefficients d_11 and d_31 can be derived:
d_11=e_11/(C_11-C_12) and d_31=e_31/(C_11+C_12)
The orthorhombic supercell (see <ref> (e)) as the
computational unit cell is used to calculate the e_11/e_31 of Mn_2ClF.
The calculated e_11/e_31 is -0.745×10^-10/-0.191×10^-10 C/m with ionic part -0.647×10^-10/0.372×10^-10 C/m and electronic part -0.098×10^-10/-0.563×10^-10 C/m. For e_11, the same signs can be observed for the electronic and ionic contributions, and the ionic part plays a decisive role.
However, for e_31, the electronic and ionic contributions have opposite signs, and the electronic part dominates the piezoelectricity.
Based on <ref>, the calculated d_11 and d_31 of Mn_2ClF are -1.89 and -0.26 pm/V, respectively.
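As a quick consistency check, the relaxed-ion d_11 and d_31 follow from the relations above and the values quoted in the text (a minimal sketch; the numbers are those reported here):

```python
# Relaxed-ion piezoelectric strain coefficients from the relations above,
# using the elastic constants (N/m) and e_ij (C/m) reported in the text.
C11, C12 = 56.66, 17.22
e11, e31 = -0.745e-10, -0.191e-10

d11 = e11 / (C11 - C12) * 1e12   # pm/V
d31 = e31 / (C11 + C12) * 1e12   # pm/V
print(round(d11, 2), round(d31, 2))   # approximately -1.89 and -0.26 pm/V
```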
The predicted |d_31| is higher than or comparable to those of other known 2D materials<cit.>, which provides the possibility of tuning the spin splitting in Mn_2ClF by the piezoelectric effect.
Electric-field-induced spin splitting in Mn_2Cl_2 has been confirmed by first-principles calculations<cit.>, and an out-of-plane electric field can likewise tune
the spin splitting in Mn_2ClF. When a biaxial in-plane strain is imposed, only the out-of-plane d_31 appears, and an out-of-plane electric field is induced, which can be used to tune the spin splitting in Mn_2ClF.
The piezotronic effect on Rashba spin splitting in a ZnO/P3HT nanowire array structure has been studied experimentally<cit.>. It was found that the Rashba spin splitting can be effectively tuned by the inner-crystal piezo-potential created inside the ZnO nanowires. A similar coupling between spin splitting and the piezoelectric effect may therefore be observed in EPD-AFM.
§ DISCUSSION AND CONCLUSION
For a 2D altermagnet, the magnetic atoms have opposite layer spin polarization (A-type AFM ordering). If the out-of-plane built-in electric field is lacking, obvious spin splitting in the band structures can still be observed (<ref> (a) and (b)), but the spin-valley polarization is lacking. Recently, this has been achieved in 2D Ca(CoN)_2<cit.>.
For a 2D Janus altermagnet, the magnetic configuration is still A-type AFM ordering, but it has an out-of-plane built-in electric field E_b, which will produce spin-valley polarization(<ref> (c) and (d)). This is because a layer-dependent electrostatic potential makes electronic bands in different layers stagger, producing the spin-valley polarization.
The out-of-plane polarization filed is equivalent to an external electric field<cit.>. By applying a gate field of 0.2 eV/Å, monolayer Ca(CoN)_2 possesses a significant spin-valley splitting up to 123 meV<cit.>. So, an out-of-plane built-in electric field can induce spin-valley polarization. The 2D Janus A-type altermagnetic material can be called electric-potential-difference altermagnet (EPD-AM).
In summary, we propose an alternative strategy to obtain
spin splitting based on 2D Janus A-type antiferromagnet.
It is demonstrated that 2D Mn_2ClF is a possible candidate for realizing EPD-AFM, which is dynamically, mechanically and thermally stable.
It is proved that the EPD-AFM is robust against electron correlation in Mn_2ClF.
The structural symmetry-breaking leads to out-of-plane piezoelectric response, providing a possibility to tune spin splitting in Mn_2ClF by piezoelectric effect. Our works reveal a new 2D family of AFM materials with spin splitting, which allow
high-speed spintronic device applications.
This work is supported by Natural Science Basis Research Plan in Shaanxi Province of China (2021JM-456). We are grateful to the China University of Mining and Technology (CUMT) for VASP software to accomplish this work. We are grateful to Shanxi Supercomputing Center of China, and the calculations were performed on TianHe-2.
gs1J. Nitta, T. Akazaki, H. Takayanagi, and T. Enoki, Phys. Rev.
Lett. 78, 1335 (1997).
gs2J. Nitta, T. Akazaki, H. Takayanagi and T. Enoki, Phys. Rev.
Lett. 78, 1335 (1997).
gs3A. Manchon, H. C. Koo, J. Nitta, S. M. Frolov and R. A. Duine,
Nat. Mater. 14, 871 (2015).
gs4E. I. Rashba, Sov. Phys. Solid State 2, 1224 (1960).
gs5M. I. Dyakonov and V. I. Perel, Sov. Phys. Solid State 13, 3023 (1972).
p7B. A. Bernevig, J. Orenstein, and S.-C. Zhang, Phys. Rev. Lett.
97, 236601 (2006).
p8J. Schliemann, Rev. Mod. Phys. 89, 011001 (2017).
p9P. Altmann, M. P. Walser, C. Reichl, W. Wegscheider, and G.
Salis, Phys. Rev. B 90, 201306(R) (2014).
k1X. Hu, Adv. Mater. 24, 294 (2012).
k2T. Jungwirth, J. Sinova, A. Manchon, X. Marti, J. Wunderlich
and C. Felser, Nat. Phys. 14, 200 (2018).
k4L. S̆mejkal, J. Sinova and T. Jungwirth, Phys. Rev. X
12, 031042 (2022).
k5I. Mazin Phys. Rev. X 12, 040002 (2022).
k6L. S̆mejkal, J. Sinova and T. Jungwirth, Phys. Rev. X 12, 040501
(2022).
k11X. Chen, D. Wang, L. Y. Li and B. Sanyal, Preprint at https://arxiv.org/abs/2104.07390 (2021).
k12P. J. Guo, Z. X. Liu and Z. Y. Lu, npj Comput. Mater. 9, 70 (2023).
k12-1S. D. Guo, X. S. Guo, K. Cheng, K. Wang and Y. S. Ang, Preprint at https://doi.org/10.48550/arXiv.2306.04094 (2023).
k13H.-Y. Ma, M. L. Hu, N. N. Li, J. P. Liu, W.
Yao, J. F. Jia and J. W. Liu, Nat. Commun. 12, 2846 (2021).
k14Y. J. Niu, H. F. Lv, X. J. Wu and J. L. Yang, J. Phys. Chem. Lett. 14, 4042 (2023).
1P. Hohenberg and W. Kohn, Phys. Rev. 136,
B864 (1964); W. Kohn and L. J. Sham, Phys. Rev. 140,
A1133 (1965).
pv1 G. Kresse, J. Non-Cryst. Solids 193, 222 (1995).
pv2 G. Kresse and J. Furthmüller, Comput. Mater. Sci. 6, 15 (1996).
pv3 G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
pbeJ. P. Perdew, K. Burke and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
u1Q. L. Sun and N. Kioussis, Phys. Rev. B 97, 094408 (2018).
u2C. Ma, X. J. Chen, K. J. Jin et al., J. Phys. Chem. Lett. 14, 825 (2023).
u3Q. Y. Ma, W. H. Wan, Y. M. Li and Y. Liu, Appl. Phys. Lett. 120, 112402 (2022).
pv6X. Wu, D. Vanderbilt and D. R. Hamann, Phys. Rev. B 72, 035105 (2005).
pv5A. Togo, F. Oba and I. Tanaka, Phys. Rev. B 78, 134106
(2008).
elaR. C. Andrew, R. E. Mapasha, A. M. Ukpong and N. Chetty, Phys. Rev. B 85, 125428 (2012).
duanW. Y. Tong, S. J. Gong, X. Wan, and C. G. Duan,
Nat. Commun. 7, 13612 (2016).
jmcP. Jiang, X. H. Zheng, L. L. Kang, X. X. Tao, H. M. Huang, X. C. Dong and Y. L. Li, J. Mater. Chem. C 11, 2703 (2023).
re1S. D. Guo, J. X. Zhu, M. Y. Yin and B. G. Liu, Phys. Rev. B 105, 104416 (2022).
re2S. D. Guo, W. Q. Mu and B. G. Liu, 2D Mater. 9, 035011 (2022).
re3S. D. Guo, Y. L. Tao, W. Q. Mu and B. G. Liu, Front. Phys. 18, 33304 (2023).
re4S. Li, Q. Q. Wang, C. M. Zhang, P. Guo and S. A. Yang, Phys. Rev. B 104, 085149 (2021).
re5W. Y. Pan, Phys. Rev. B 106, 125122 (2022).
yd1L. Dong, J. Lou and V. B. Shenoy, ACS Nano, 11,
8242 (2017).
yd2M. N. Blonsky, H. L. Zhuang, A. K. Singh and R. G. Hennig, ACS Nano 9,
9885 (2015).
yd3K. N. Duerloo, M. T. Ong and E. J. Reed, J. Phys. Chem. Lett. 3, 2871 (2012).
ydtL. Zhu, Y. Zhang, P. Lin et al., ACS Nano 12, 1811 (2018).
yzR. W. Zhang, C. X. Cui, R. Z. Li, J. Y. Duan, L. Li, Z. M. Yu and Y. G. Yao, Preprint at https://doi.org/10.48550/arXiv.2306.08902 (2023).
ar1A. O. Fumega and J. L. Lado, Nanoscale 15, 2181 (2023).
|
http://arxiv.org/abs/2307.06137v2 | 20230712124016 | Distribution-on-Distribution Regression with Wasserstein Metric: Multivariate Gaussian Case | [
"Ryo Okano",
"Masaaki Imaizumi"
] | stat.ME | [
"stat.ME",
"math.ST",
"stat.TH"
] |
Distribution-on-Distribution Regression with Wasserstein Metric: Multivariate Gaussian Case
Ryo Okano and
Masaaki Imaizumi
================================================================================================================
Distribution data refers to a data set where each sample is represented as a probability distribution, a subject area receiving burgeoning interest in the field of statistics. Although several studies have developed distribution-to-distribution regression models for univariate variables, the multivariate scenario remains under-explored due to technical complexities. In this study, we introduce models for regression from one Gaussian distribution to another, utilizing the Wasserstein metric. These models are constructed using the geometry of the Wasserstein space, which enables the transformation of Gaussian distributions into components of a linear matrix space. Owing to their linear regression frameworks, our models are intuitively understandable, and their implementation is simplified because of the optimal transport problem's analytical solution between Gaussian distributions. We also explore a generalization of our models to encompass non-Gaussian scenarios. We establish the convergence rates of in-sample prediction errors for the empirical risk minimizations in our models. In comparative simulation experiments, our models demonstrate superior performance over a simpler alternative method that transforms Gaussian distributions into matrices. We present an application of our methodology using weather data for illustration purposes.
§ INTRODUCTION
The analysis of distribution data has gained significant attention in the field of statistics. Distribution data refers to data in which each sample is given in the form of a probability distribution or an empirical distribution generated from it. Examples include age-at-death distributions across different countries, house price distributions of different years, and distributions of voxel-voxel correlations of functional magnetic resonance imaging signals. A distinctive feature of distribution data is that they take values in general metric spaces that lack a vector space structure. Existing complex data analysis methods, such as functional or manifold data analysis methods, are inadequate for effectively handling distribution data due to their infinite dimensionality and non-linearity, posing significant challenges in processing. Developing methods and theories for analyzing distribution data is an important and challenging problem for contemporary statistical practice. Refer to <cit.> for a review of this topic.
A common approach to handling distribution data involves the application of the Wasserstein metric to a set of distributions.
The resulting metric space is known as the Wasserstein space (<cit.>), where distribution data are considered as its elements.
There are several advantages to using the Wasserstein metric: it gives more intuitive interpretations of mean and geodesics compared to other metrics, and it reduces errors by rigorously treating constraints as distribution functions.
Based on this approach, numerous methods have been proposed for the analysis of distribution data (<cit.>).
This paper focuses on a problem of distribution-on-distribution regression, that is, the regression of one probability distribution onto another.
In the distribution-on-distribution regression problem, the task involves defining a regression map between non-linear spaces, which makes this problem technically challenging.
The problem has been used for comparing the temporal evolution of age-at-death distributions among different countries (<cit.>, <cit.>) and for predicting house price distributions in the United States (<cit.>).
For univariate distributions, several studies have investigated distribution-on-distribution regression models using Wasserstein metric.
<cit.> proposed a model utilizing geometric properties of the Wasserstein space, <cit.> presented an autoregressive model for distributional time series data, and
<cit.> introduced a model incorporating the optimal transport map associated with the Wasserstein space.
However, few studies proposed distribution-on-distribution regression models for the multivariate case with the Wasserstein metric.
For more detail, please refer to Section <ref> for a comprehensive overview.
In this paper, we propose models for regressing one Gaussian distribution onto another.
To define our models, we consider the space of Gaussian distributions equipped with the Wasserstein metric and use its tangent bundle structure to transform Gaussian distributions into matrices.
Then, we boil down the Gaussian distribution-on-distribution regression to the matrix-on-matrix linear regression, using the transformation to the tangent bundle.
Based on the transformation, we propose two models: a basic model for the case where the predictor and response Gaussian distributions are low-dimensional, and a low-rank model incorporating a low-rank structure in the parameter tensor to address high-dimensional Gaussian distributions.
Additionally, we explore the extension of our proposed models to encompass non-Gaussian scenarios.
Our strategy and the model give several advantages:
(i) the strategy enables the explicit construction of regression maps using the closed-form expression for the optimal transport problem between Gaussian distributions,
(ii) it boils down the distribution-on-distribution regression problem to an easy-to-handle linear model while maintaining the constraint of distributions, and (iii) we can solve the linear model without computational difficulties.
We compare our method to another natural approach, which regresses a mean vector and covariance matrices of covariate Gaussian on those of response Gaussian.
However, this approach degrades accuracy in predicting distributions, since it does not use the structure of distributions such as the Wasserstein metric.
In the simulation studies in Section <ref>, we compare our proposed models with this alternative approach and find that our models perform better than the alternative approach.
The remaining sections of the paper are organized as follows.
In Section <ref>, we provide some background on the optimal transport and Wasserstein space.
In Section <ref>, we introduce Gaussian distribution-on-distribution regression models and discuss their potential generalizations to accommodate non-Gaussian cases.
We present empirical risk minimization algorithms for our models in Section <ref>, and analyze their in-sample prediction errors in Section <ref>.
We investigate the finite-sample performance of the proposed methods through simulation studies in Section <ref>, and illustrate the application of the proposed method using weather data in Section <ref>.
Section <ref> concludes.
Proofs of theorems and additional theoretical results are provided in the Appendix.
§.§ Related Studies
There are several approaches to deal with distribution data apart from the Wasserstein metric approach.
<cit.> introduced the log quantile density transformation, enabling the utilization of functional data methods for distribution data.
The Bayes space approach has also been proposed as a viable solution for handling distribution data (<cit.>).
Within the framework of the Wasserstein metric approach, significant developments have been made in methods and theories for analyzing distribution data.
<cit.> considered the estimation for the Fréchet mean, a notion of mean in the Wasserstein space, from distribution samples.
<cit.> established the minimax rates of convergence for these estimators.
<cit.> proposed the Wasserstein covariance measure for dependent density data.
<cit.> developed the method of geodesic principal component analysis on the Wasserstein space.
Various regression models utilizing the Wasserstein metric have been proposed for distribution data.
<cit.> developed regression models for coupled vector predictors and univariate random distributions as responses.
<cit.> developed regression models for multivariate response distributions.
<cit.> and <cit.>
proposed regression models for scenarios where both regressors and responses are random distributions, and <cit.> studies its extension to the multivariate case.
<cit.>
developed autoregressive models for density time series data.
§.§ Notation
For d ≥ 1, we denote the identity matrix of size d × d as I_d.
Sym(d) is a set of all symmetric matrices of size d × d.
For a positive semidefinite matrix A, we denote its positive square root as A^1/2.
id(·) is the identity map. For a Borel measurable function f: ℝ^d →ℝ^d and Borel probability measure μ on ℝ^d, f#μ is the push-forward measure defined by f#μ(Ω) = μ(f^-1(Ω)) for any Borel set Ω in ℝ^d.
· denotes the Euclidean norm.
ℒ_μ^2(ℝ^d) is the set of functions f:ℝ^d →ℝ^d such that ∫f(x)^2 dμ(x) < ∞, and is a Hilbert space with an inner product ⟨·, ·⟩_μ defined as ⟨ f, g ⟩_μ = ∫_ℝ^d
f(x)^⊤ g(x)dμ(x) for f,g ∈ℒ_μ^2(ℝ^d).
We denote the norm induced by this inner product as ·_μ.
For a matrix A ∈ℝ^d_1 × d_2, we denote its elements as A[p, q] for 1 ≤ p ≤ d_1 and 1 ≤ q ≤ d_2.
For a tensor 𝔸∈ℝ^d_1 × d_2 × d_3 × d_4, we denote its
elements as
𝔸[p, q, r, s]
for 1 ≤ p ≤ d_1, 1 ≤ q ≤ d_2, 1 ≤ r ≤ d_3 and 1 ≤ s ≤ d_4.
For a tensor 𝔸∈ℝ^d_1 × d_2 × d_3 × d_4 and indices 1 ≤ r ≤ d_3, 1 ≤ s ≤ d_4, let
𝔸[·, ·, r, s] ∈ℝ^d_1 × d_2 denote the d_1 × d_2 matrix whose (p, q)-elements are given by 𝔸[p, q, r, s].
For vectors a_1 ∈ℝ^d_1, a_2 ∈ℝ^d_2, a_3 ∈ℝ^d_3 and a_4 ∈ℝ^d_4, we define the outer product 𝔸 = a_1 ∘ a_2 ∘ a_3 ∘ a_4 ∈ℝ^d_1 × d_2 × d_3 × d_4 by
𝔸[p, q, r, s] = a_1[p]a_2[q]a_3[r]a_4[s].
For two matrices A_1, A_2 ∈ℝ^d_1 × d_2, we define their inner product ⟨ A_1, A_2 ⟩∈ℝ as ⟨ A_1, A_2 ⟩ = ∑_p=1^d_1∑_q=1^d_2A_1[p, q]A_2[p, q]. Furthermore, for a tensor
𝔸∈ℝ^d_1 × d_2 × d_3 × d_4
and a matrix A ∈ℝ^d_1 × d_2, we define their product ⟨ A, 𝔸⟩_2 ∈ℝ^d_3 × d_4 as
⟨ A, 𝔸⟩_2[r, s] = ∑_p=1^d_1∑_q=1^d_2A[p,q]𝔸[p,q,r,s] for 1 ≤ r ≤ d_3 and 1 ≤ s ≤ d_4.
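To fix ideas for implementation, the contraction ⟨ A, 𝔸⟩_2 defined above is an ordinary tensor contraction over the first two indices; the following short NumPy sketch (ours, purely illustrative) spells it out.
import numpy as np

d1, d2 = 3, 2
A = np.random.randn(d1, d1 + 1)                # a matrix in R^{d1 x (d1+1)}
B = np.random.randn(d1, d1 + 1, d2, d2 + 1)    # a tensor in R^{d1 x (d1+1) x d2 x (d2+1)}

C = np.tensordot(A, B, axes=([0, 1], [0, 1]))  # <A, B>_2, an array of shape (d2, d2+1)
C_check = np.einsum('pq,pqrs->rs', A, B)       # the same contraction written elementwise
print(np.allclose(C, C_check))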
§ BACKGROUND
In this section, we provide some background on optimal transport, the Wasserstein space, and its tangent space.
For more background, see e.g., <cit.>, <cit.> and <cit.>.
§.§ Optimal Transport
Let 𝒲(ℝ^d) be the set of Borel probability distributions on ℝ^d with finite second moments. The 2-Wasserstein distance between μ_1, μ_2 ∈𝒲(ℝ^d) is defined by
d_W(μ_1, μ_2)
=
(inf_π∈Π(μ_1, μ_2)∫_ℝ^d ×ℝ^dx - y^2dπ(x, y))^1/2.
Here, Π(μ_1, μ_2) is the set of couplings of μ_1 and μ_2, that is,
the set of joint distributions on ℝ^d ×ℝ^d with marginal distributions μ_1 and μ_2.
In our setting, the minimizer π in (<ref>)
always exists (Theorem 4.1 in <cit.>), and is called an optimal coupling.
When μ_1 is absolutely continuous with respect to the Lebesgue measure,
there exists a map T: ℝ^d →ℝ^d such that the joint distribution of (W, T(W)), where W∼μ_1, is an optimal coupling in (<ref>), and
such a map T is uniquely determined μ_1-almost everywhere (Theorem 1.6.2 in <cit.>). The map T is called the optimal transport map between μ_1 and μ_2, and
we denote it as T_μ_1^μ_2.
When d=1, the optimal transport map has the following closed-form expression (Section 1.5 in <cit.>):
T_μ_1^μ_2(x)
=
F_μ_2^-1∘ F_μ_1(x), x ∈ℝ,
where F_μ_1 is the cumulative distribution function of μ_1, and
F_μ_2^-1 is the quantile function of μ_2.
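As a purely illustrative aside (not part of the original development), the closed form (<ref>) can be evaluated with standard cdf and quantile routines; the univariate Gaussians below are arbitrary examples chosen by us.
import numpy as np
from scipy.stats import norm

mu1 = norm(loc=0.0, scale=1.0)   # source distribution mu_1
mu2 = norm(loc=2.0, scale=0.5)   # target distribution mu_2

def transport_map(x):
    # T_{mu_1}^{mu_2}(x) = F_{mu_2}^{-1}(F_{mu_1}(x))
    return mu2.ppf(mu1.cdf(x))

x = np.linspace(-3.0, 3.0, 7)
# For Gaussians the map is affine: m2 + (sigma2 / sigma1) * (x - m1).
print(np.allclose(transport_map(x), 2.0 + 0.5 * x))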
§.§ The Wasserstein Space and its Tangent Space
The Wasserstein distance d_W is a metric on 𝒲(ℝ^d) (Chapter 6 in <cit.>), and the metric space (𝒲(ℝ^d), d_W) is called the Wasserstein space.
We give a notion of a linear space induced from the Wasserstein space by applying the basic concepts of Riemannian manifolds, as shown in <cit.>, <cit.> and <cit.>.
Let arbitrarily fix a reference measure μ_∗∈𝒲(ℝ^d) which is
absolutely continuous with
respect to
the Lebesgue measure.
For any μ∈𝒲(ℝ^d), the geodesic
from μ_∗ to μ, γ_μ_∗, μ: [0, 1] →𝒲(ℝ^d), is given by
γ_μ_∗, μ(t)
=
[t(T_μ_∗^μ - id) + id]#μ_∗, t ∈ [0, 1].
The tangent space of the Wasserstein space at μ_∗ is defined by
𝒯_μ_∗
=
{t(T_μ_∗^μ - id): μ∈𝒲(ℝ^d), t > 0 },
where the upper bar denotes the closure in terms of the norm ·_μ_∗ in the space ℒ_μ_∗^2(ℝ^d).
The space 𝒯_μ_∗ is a subspace of ℒ_μ_∗^2(ℝ^d) (Theorem 8.5.1 in <cit.>).
The exponential map Exp_μ_∗: 𝒯_μ_∗→𝒲(ℝ^d) is then defined by
Exp_μ_∗g
=
(g + id) #μ_∗, g ∈𝒯_μ_∗,
and as its right inverse, the logarithmic map Log_μ_∗: 𝒲(ℝ^d) →𝒯_μ_∗ is given by
Log_μ_∗μ = T_μ_∗^μ - id, μ∈𝒲(ℝ^d).
When d=1,
the logarithmic map is isometric in the sense that
Log_μ_∗μ_1 - Log_μ_∗μ_2_μ_∗ =
d_W(μ_1, μ_2)
for all μ_1, μ_2 ∈𝒲(ℝ) (Section 2.3.2 in <cit.>).
Recall that ·_μ_∗ is the norm of ℒ_μ_∗^2(ℝ^d) with the reference measure μ_∗, as defined in Section <ref>.
§.§ Specification with Gaussian Case
We restrict our attention to the Gaussian measures.
Let 𝒢(ℝ^d) be the set of Gaussian distributions on ℝ^d; we call the metric space (𝒢(ℝ^d), d_W) the Gaussian space.
For two Gaussian measures
μ_1 = N(m_1, Σ_1), μ_2 = N(m_2, Σ_2) ∈𝒢(ℝ^d) with mean vectors m_1,m_2 ∈ℝ^d and covariance matrices Σ_1,Σ_2 ∈ℝ^d× d, the
2-Wasserstein distance between them has the following closed-form expression (Section 1.6.3 in <cit.>):
d_W(μ_1, μ_2)
=
√(m_1 - m_2^2 + tr[Σ_1 + Σ_2 - 2(Σ_1^1/2Σ_2 Σ_1^1/2)^1/2]).
When Σ_1 is non-singular,
the optimal transport map between μ_1 and μ_2 also has the following closed-form expression (Section 1.6.3 in <cit.>):
T_μ_1^μ_2(x)
= m_2 + S(Σ_1,Σ_2)(x-m_1), x ∈ℝ^d,
where we define S(Σ_1,Σ_2) = Σ_1^-1/2[Σ_1^1/2Σ_2 Σ_1^1/2]^1/2Σ_1^-1/2 for two covariance matrices Σ_1, Σ_2.
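Both closed forms are straightforward to implement; the following NumPy sketch (function names are ours) evaluates the distance (<ref>) and the optimal transport map (<ref>) for Gaussian inputs.
import numpy as np

def spd_sqrt(A):
    # Symmetric positive semidefinite square root via eigendecomposition.
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def gaussian_w2(m1, S1, m2, S2):
    # Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2).
    S1h = spd_sqrt(S1)
    cross = spd_sqrt(S1h @ S2 @ S1h)
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))

def transport_matrix(S1, S2):
    # S(Sigma1, Sigma2) = Sigma1^{-1/2} [Sigma1^{1/2} Sigma2 Sigma1^{1/2}]^{1/2} Sigma1^{-1/2}
    S1h = spd_sqrt(S1)
    S1h_inv = np.linalg.inv(S1h)
    return S1h_inv @ spd_sqrt(S1h @ S2 @ S1h) @ S1h_inv

def optimal_map(x, m1, S1, m2, S2):
    # T_{mu_1}^{mu_2}(x) = m2 + S(Sigma1, Sigma2) (x - m1)
    return m2 + transport_matrix(S1, S2) @ (x - m1)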
We introduce a tangent space of Gaussian spaces.
Fix a Gaussian measure μ_∗ = N(m_∗, Σ_∗) ∈𝒢(ℝ^d) as a reference measure with a non-singular covariance matrix Σ_*.
Replacing 𝒲(ℝ^d) with 𝒢(ℝ^d) in the definition of tangent space (<ref>), we obtain the tangent space by a form of a function space
𝒯𝒢_μ_∗
=
{t(T_μ_∗^μ - id): μ∈𝒢(ℝ^d), t > 0 }.
Using the form of the optimal transport map
(<ref>), a function in the tangent space 𝒯𝒢_μ_∗ has the following form
t(T_μ_∗^μ - id)(x)
=
t(m - S(Σ_*, Σ) m_∗) + t(S(Σ_*, Σ) -I_d)x, x ∈ℝ^d.
This form implies that the function space 𝒯𝒢_μ_∗ is a set of affine functions of x ∈ℝ^d.
Note that Exp_μ_∗g ∈𝒢(ℝ^d) holds for any g ∈𝒯𝒢_μ_∗, and also Log_μ_∗μ∈𝒯𝒢_μ_∗ holds for any μ∈𝒢(ℝ^d).
§ MODEL
In this section, we define regression models between Gaussian spaces using the above notion of tangent spaces.
We first present our key idea of modeling and then develop two models.
§.§ Idea: Nearly isometry between Gaussian Space and Linear Matrix Space
As our key idea, we give a nearly isometric map from Gaussian space 𝒢(ℝ^d) to a linear matrix space.
For d ≥ 1, we define a set of matrices, each pairing a vector with a symmetric matrix, as
Ξ_d = {(a, V) ∈ℝ^d × (d+1): a ∈ℝ^d, V ∈Sym(d)},
which is obviously a linear space.
We will give a map from 𝒢(ℝ^d) to Ξ_d and show that this map has certain isometric properties.
This isometry map plays a
critical role in our regression model, given in the next subsection.
We fix a non-singular Gaussian measure μ_∗ = N(m_∗, Σ_∗) ∈𝒢(ℝ^d) as a reference measure.
Preliminarily, we introduce an inner product on the space Ξ_d.
For (a,V), (b, U) ∈Ξ_d, we define
⟨ (a, V), (b, U) ⟩_m_∗, Σ_∗
=
(a + V m_∗)^⊤(b + U m_∗)
+
tr(VΣ_∗ U).
Then we can easily check that ⟨·, ·⟩_m_∗, Σ_∗ satisfies the conditions of inner product.
This design follows an inner product for a space of affine functions.
Rigorously, for a ∈ℝ^d and V ∈Sym(d), we define an affine function f_a, V(x) = a + Vx and its space ℱ_aff = {f_a, V: a ∈ℝ^d, V ∈Sym(d)}.
Note that 𝒯𝒢_μ_∗⊂ℱ_aff holds from (<ref>).
Then we consider an inner product between f_a,V, f_b,U∈ℱ_aff with (a,V), (b,U) ∈Ξ_d as
⟨ f_a,V, f_b,U⟩_μ_∗
=
∫_ℝ^d(a+Vx)^⊤ (b+Ux)dμ_∗(x)
=
(a + V m_∗)^⊤(b + U m_∗)
+
tr(VΣ_∗ U).
Inspired by the design, we obtain an inner product space (Ξ_d, ⟨· , ·⟩_(m_∗, Σ_∗)).
The norm ·_(m_∗, Σ_∗) induced by this inner product is specified as
(a, V)_(m_∗, Σ_∗)
=
√(a + Vm_∗^2 + tr(VΣ_∗ V)).
We construct a nearly isometric map φ_μ_∗ from (𝒢(ℝ^d), d_W) to (Ξ_d, ·_(m_∗, Σ_∗)) as
φ_μ_∗ = π∘ψ_μ_∗.
We specify the maps ψ_μ_∗: 𝒢(ℝ^d) →𝒯𝒢_μ_∗ and π: ℱ_aff→Ξ_d as follows.
First, ψ_μ_∗ is the logarithm map Log_μ_∗(·) as (<ref>) with restriction to 𝒢(ℝ^d).
That is, for μ = N(m, Σ) ∈𝒢(ℝ^d), ψ_μ_∗μ is the affine function of the form (<ref>).
Second, for an affine function f_a,V∈ℱ_aff, we define
π f_a,V
=
(a, V).
For summary, the map φ_μ_∗: 𝒢(ℝ^d) →Ξ_d in (<ref>) is specified as
φ_μ_∗μ
=
(m-S(Σ_∗, Σ)m_∗, S(Σ_∗, Σ) - I), μ = N(m, Σ) ∈𝒢(ℝ^d).
We also define a map ξ_μ_∗: φ_μ_∗𝒢(ℝ^d) →𝒢(ℝ^d) as the left inverse of the map φ_μ_∗ by
ξ_μ_∗(a, V)
=
N(a+(V+I)m_∗, (V+I)Σ_∗(V+I)), (a, V) ∈φ_μ_∗𝒢(ℝ^d).
Here,
a range of the map (<ref>) with the domain 𝒢(ℝ^d) is written as
φ_μ_∗𝒢(ℝ^d)
=
{(a, V) ∈Ξ_d: V + I_d
is positive semidefinite},
which is obviously a subset of Ξ_d.
We obtain results on the distance-preserving property of the map φ_μ_∗.
As a preparation, for a d × d orthogonal matrix U, we define a class of Gaussian measures 𝒞_U ⊂𝒢(ℝ^d) as
𝒞_U =
{ N(m, Σ) ∈𝒢(ℝ^d) : m ∈ℝ^d, UΣ U^⊤is diagonal}.
Here, we give a formal statement.
Let μ_∗∈𝒢(ℝ^d) be an arbitrary fixed reference measure.
For any μ∈𝒢(ℝ^d),
we have
d_W(μ, μ_∗)
=
φ_μ_∗μ_(m_∗, Σ_∗).
Moreover, if μ_∗∈𝒞_U holds, we have the following for any μ_1, μ_2 ∈𝒞_U:
d_W(μ_1, μ_2)
=
φ_μ_∗μ_1 - φ_μ_∗μ_2 _(m_∗, Σ_∗).
Note that since φ_μ_∗μ_∗ = 0 holds,
the first claim shows that
the Wasserstein distance between any Gaussian measure μ and the reference Gaussian measure μ_∗ is
equal to the distance between corresponding
elements in the space (Ξ_d, ·_(m_∗, Σ_∗)).
The second claim shows that
if we choose a class of Gaussian measures appropriately, the map φ_μ_∗ is isometric on that class. This isometric property is essentially illustrated in Section 2.3.2 in <cit.> for the case of centered Gaussian distributions. Our claim can be understood as its generalization to the non-centered case.
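The first claim of the proposition can also be checked numerically; in the sketch below (helper names and the toy instance are ours), the two printed values should coincide up to floating-point error.
import numpy as np

def spd_sqrt(A):
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def gaussian_w2(m1, S1, m2, S2):
    S1h = spd_sqrt(S1)
    cross = spd_sqrt(S1h @ S2 @ S1h)
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))

def phi(m, S, m_ref, S_ref):
    # phi_{mu_*} mu = (m - S(Sigma_*, Sigma) m_*, S(Sigma_*, Sigma) - I)
    Sh = spd_sqrt(S_ref)
    Sh_inv = np.linalg.inv(Sh)
    T = Sh_inv @ spd_sqrt(Sh @ S @ Sh) @ Sh_inv
    return m - T @ m_ref, T - np.eye(len(m))

def tangent_norm(a, V, m_ref, S_ref):
    # ||(a, V)||_{(m_*, Sigma_*)} = sqrt(||a + V m_*||^2 + tr(V Sigma_* V))
    return np.sqrt(np.sum((a + V @ m_ref) ** 2) + np.trace(V @ S_ref @ V))

m_ref, S_ref = np.zeros(2), np.eye(2)
m, S = np.array([1.0, -0.5]), np.array([[2.0, 0.3], [0.3, 0.7]])
a, V = phi(m, S, m_ref, S_ref)
print(gaussian_w2(m, S, m_ref, S_ref), tangent_norm(a, V, m_ref, S_ref))  # should agree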
§.§ Regression Model
In this section,
we develop our regression models for the Gaussian-to-Gaussian distribution regression.
Our strategy is to map Gaussian distributions to the linear matrix spaces using the nearly isometric maps and then conduct linear regression between the matrix spaces.
Figure <ref> illustrates the strategy.
Specifically, we develop the following two models: (i) a basic model, and (ii) a low-rank model.
See Section <ref> for the notation regarding matrices and tensors.
We review the setup of the regression problem.
Let d_1 and d_2 be positive integers and ℱ be a joint distribution on 𝒢(ℝ^d_1) ×𝒢(ℝ^d_2).
Let (ν_1, ν_2) be a pair of random elements generated by ℱ, where
we write ν_1 = N(m_1, Σ_1) and ν_2 = N(m_2, Σ_2).
We assume ν_1 and ν_2 are square integrable in the sense that max{𝔼[d_W^2(μ_1, ν_1)],𝔼[d_W^2(μ_2, ν_2)] }< ∞ for some (and thus for all) μ_1 ∈𝒢(ℝ^d_1) and
μ_2 ∈𝒢(ℝ^d_2).
In the following, we give models for dealing with this joint distribution ℱ.
§.§.§ Basic model
The first step is to define reference measures to introduce the nearly isometric maps.
For j ∈{1,2}, we define the Fréchet mean of the random Gaussian distribution ν_j as
ν_j⊕ = N(m_j⊕, Σ_j⊕)
=
argmin_μ_j ∈𝒢(ℝ^d_j)𝔼[d_W^2(μ_j, ν_j)],
with the mean vector m_j⊕∈ℝ^d_j and the covariance matrix Σ_j⊕∈ℝ^d_j × d_j.
Note that the Fréchet means ν_1⊕ and ν_2⊕ are also Gaussian, and we assume they uniquely exist and are non-singular.
Using the Fréchet means ν_1 ⊕ and ν_2 ⊕ as reference measures,
we transform random Gaussian distributions ν_1 and ν_2 to
random elements X ∈Ξ_d_1 and Y ∈Ξ_d_2 by
X = φ_ν_1⊕ν_1, Y = φ_ν_2⊕ν_2,
where φ_ν_1⊕ and φ_ν_2⊕ are the nearly isometric maps in (<ref>).
For the random matrices X and Y transformed from the random distributions ν_1 and ν_2 as above, we perform a matrix-to-matrix linear regression.
To this aim, we consider a coefficient tensor 𝔹∈ℝ^d_1 ×(d_1+1)× d_2 ×(d_2+1) and define its associated linear map
Γ_𝔹: ℝ^d_1 × (d_1+1)→ℝ^d_2 × (d_2 + 1), A ↦⟨ A, 𝔹⟩_2.
Recall that ⟨·, ·⟩_2 is the tensor product defined in Section <ref>.
To deal with the symmetricity of matrices in Ξ_d_1 and Ξ_d_2, we define the following class of coefficient tensors:
ℬ =
{ 𝔹∈ℝ^d_1 ×(d_1+1)× d_2 ×(d_2+1)
:
𝔹[·, ·, r, s] = 𝔹[·, ·, s-1, r+1] for 1 ≤ r ≤ d_2, 2 ≤ s ≤ d_2+1
}.
This definition guarantees ⟨ A, 𝔹⟩_2 ∈Ξ_d_2 holds for any 𝔹∈ℬ and A ∈Ξ_d_1.
We now give the linear regression model.
We assume that the (Ξ_d_1×Ξ_d_2)-valued random element (X,Y), obtained by transforming the random pair of distributions (ν_1,ν_2), follows the linear model below with some 𝔹_0 ∈ℬ:
Y = Γ_𝔹_0 (X) + E, 𝔼[E | X] = 0,
where E is a Ξ_d_2-valued random element as an error term.
Note that 𝔹_0 is not necessarily unique.
We can rewrite this model into an element-wise representation such that
Y[r, s] = ⟨ X, 𝔹_0[·, ·, r, s] ⟩ + E[r, s], 𝔼[E[r, s] | X] = 0,
for 1 ≤ r ≤ d_2, 2 ≤ s ≤ d_2+1.
Furthermore, we impose the following assumption on the data-generating process in this model:
Γ_𝔹_0(X) ∈φ_ν_2⊕𝒢(ℝ^d_2) with probability 1.
For summary, we consider a regression map Γ_𝒢, 𝔹_0 between the Gaussian spaces 𝒢(ℝ^d_1) and 𝒢(ℝ^d_2) as
Γ_𝒢, 𝔹_0 = ξ_ν_2⊕∘Γ_𝔹_0∘φ_ν_1⊕.
Note that our model satisfies Γ_𝒢, 𝔹(ν_1⊕) = ν_2⊕ for any 𝔹∈ℬ, since we have φ_ν_1⊕ν_1⊕ = 0 and ξ_ν_2⊕(0) = ν_2⊕. In other words, the regression map Γ_𝒢, 𝔹_0 maps the Fréchet mean of ν_1 to that of ν_2.
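As an illustration of the parameter class ℬ and the map Γ_𝔹 (the sketch and its variable names are ours), the symmetry condition (<ref>) can be imposed by symmetrizing over the last two output indices, after which Γ_𝔹 indeed maps Ξ_d_1 into Ξ_d_2.
import numpy as np

d1, d2 = 3, 2
B = np.random.randn(d1, d1 + 1, d2, d2 + 1)

# Impose B[:, :, r, s] = B[:, :, s-1, r+1] by symmetrizing the block of the last
# two indices that excludes s = 1 (the s = 1 slices are unconstrained).
V_block = B[:, :, :, 1:].copy()
B[:, :, :, 1:] = 0.5 * (V_block + V_block.transpose(0, 1, 3, 2))

# A predictor X = (a, V) in Xi_{d1}: first column a, remaining columns a symmetric V.
a = np.random.randn(d1)
V = np.random.randn(d1, d1)
X = np.column_stack([a, 0.5 * (V + V.T)])

Y = np.tensordot(X, B, axes=([0, 1], [0, 1]))  # Gamma_B(X), shape (d2, d2+1)
print(np.allclose(Y[:, 1:], Y[:, 1:].T))       # the matrix part of Y is symmetric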
[Scalar response model]
A variant of the proposed basic model is the pairing of Gaussian distributions with scalar responses.
In this case, the regression comes down to matrix-to-scalar linear regression.
Let (ν_1, Z) be a pair of random elements
with a joint distribution on 𝒢(ℝ^d_1) ×ℝ, and
let ν_1⊕ = N(m_1⊕, Σ_1⊕) be the Fréchet mean of ν_1 in 𝒢(ℝ^d_1).
A Gaussian distribution-to-scalar regression model is
Z = ⟨ X, 𝔹_0 ⟩ + ε, 𝔼[ε | X] = 0.
Here, X = φ_ν_1⊕ν_1 is an element in Ξ_d_1, 𝔹_0 ∈ℝ^d_1 × (d_1+1) is the regression parameter and ε is a real-valued error term.
§.§.§ Low-Rank Model
We consider the case where the coefficient tensor 𝔹 is assumed to have low-rank, as an extension of the basic model.
The issue with the basic model (<ref>) is that
the number of elements in 𝔹 is d_1(d_1+1)d_2(d_2+1), which is high dimensional and far exceeds the usual sample size when d_1 and d_2 are not small.
A natural way to handle this issue is to approximate 𝔹 with fewer parameters, and we
employ the low-rank CP decomposition of tensors for that purpose.
This approach was employed by <cit.> for a tensor regression model for scalar outcome, and by <cit.> for a tensor-on-tensor regression model.
We define the low-rank coefficient tensor.
Let K be a positive integer such that K≤min{d_1, d_2}.
Then the tensor 𝔹∈ℝ^d_1 × (d_1+1) × d_2 × (d_2+1) admits a rank-K decomposition (e.g., <cit.>), if it holds that
𝔹 =
∑_k=1^K a_1^(k)∘ a_2^(k)∘ a_3^(k)∘ a_4^(k),
where
a_1^(k)∈ℝ^d_1, a_2^(k)∈ℝ^d_1+1, a_3^(k)∈ℝ^d_2, a_4^(k)∈ℝ^d_2+1 (k=1, ..., K) are all column vectors.
For convenience, the decomposition (<ref>) is often represented by a shorthand
𝔹 = A_1, A_2, A_3, A_4 ,
where A_1 = [a_1^(1), ..., a_1^(K)] ∈ℝ^d_1 × K, A_2 = [a_2^(1), ..., a_2^(K)] ∈ℝ^(d_1+1) × K, A_3 = [a_3^(1), ..., a_3^(K)] ∈ℝ^d_2 × K, A_4 = [a_4^(1), ..., a_4^(K)] ∈ℝ^(d_2+1) × K.
The number of free parameters in the decomposition (<ref>) is 2K(d_1+d_2+1), which is much smaller than d_1(d_1+1)d_2(d_2+1) when d_1 and d_2 are large.
Based on this decomposition, we propose a rank-K model for Gaussian distribution-to-distribution regression, in which the regression parameter 𝔹 in (<ref>) admits the rank-K decomposition (<ref>).
In the rank-K model,
we assume a_3^(k)[r] = a_3^(k)[s-1] and a_4^(k)[s] = a_4^(k)[r+1] for 1 ≤ r ≤ d_2, 2 ≤ s ≤ d_2+1 and 1 ≤ k ≤ K.
In other words, when 𝔹 is represented as A_1, A_2, A_3, A_4, we assume
the matrices A_3 and A_4 have the forms
A_3 = [ α_1 α_2 ⋯ α_K; ⋮ ⋮ ⋮ ; α_1 α_2 ⋯ α_K ],
A_4 = [ β_1 β_2 ⋯ β_K; γ_1 γ_2 ⋯ γ_K; ⋮ ⋮ ⋮ ; γ_1 γ_2 ⋯ γ_K, ],
where α_k, β_k, γ_k, 1 ≤ k ≤ K are some scalars.
Under this assumption,
the symmetric condition in (<ref>) holds, so that we have ⟨ A, 𝔹⟩_2 ∈Ξ_d_2 for any A ∈Ξ_d_1.
We denote the resulting parameter space for the rank-K model as
ℬ_low = {𝔹 = A_1, A_2, A_3, A_4 ∈ℝ^d_1 × (d_1+1) × d_2 ×(d_2+1)
:A_3 and A_4 satisfy the condition (<ref>)}.
Finally, we consider the regression model (<ref>) with 𝔹_0 ∈ℬ_low.
In practice, the appropriate rank K is unknown, and it can be selected via cross-validation.
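The following sketch (ours, with arbitrary dimensions) assembles a rank-K tensor of the form (<ref>) from its factor matrices, with A_3 and A_4 constrained as in (<ref>).
import numpy as np

d1, d2, K = 4, 3, 2
A1 = np.random.randn(d1, K)
A2 = np.random.randn(d1 + 1, K)
alpha = np.random.randn(K)
beta = np.random.randn(K)
gamma = np.random.randn(K)

A3 = np.tile(alpha, (d2, 1))                     # every row of A3 equals (alpha_1, ..., alpha_K)
A4 = np.vstack([beta, np.tile(gamma, (d2, 1))])  # first row beta, remaining d2 rows gamma

# B[p, q, r, s] = sum_k A1[p, k] A2[q, k] A3[r, k] A4[s, k]
B = np.einsum('pk,qk,rk,sk->pqrs', A1, A2, A3, A4)
print(B.shape)  # (d1, d1+1, d2, d2+1), built from far fewer than d1(d1+1)d2(d2+1) parameters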
§.§ Comparison with Existing Models in Terms of Generalization to the Multivariate Case
For the univariate case where d_1 = d_2 = 1, distribution-on-distribution regression models based on the Wasserstein metric were introduced by <cit.>.
<cit.> and <cit.> transformed distributions in the Wasserstein space 𝒲(ℝ) to elements in its tangent space (<ref>) by the logarithmic map (<ref>), and boiled down distribution-on-distribution regression to function-on-function linear regression.
Because the logarithmic map (<ref>) is isometric in the univariate case, their methods fully utilize the geometric properties of the Wasserstein space.
<cit.> modeled the regression operator from 𝒲(ℝ) to 𝒲(ℝ) by using the optimal transport map.
This approach makes it possible to interpret the regression effect directly at the level of probability distributions through a re-arrangement of probability mass.
Despite the effectiveness of these models for univariate distribution-on-distribution regression, their extension to the multivariate scenario remains non-trivial.
This challenge primarily arises from two reasons.
The first reason is that the explicit solution of the optimal transport problem for univariate distributions (<ref>) is not available for the multivariate case.
This brings numerical difficulties in the computation of optimal transport maps, which is required to transform distributions to unconstrained functions in the model by <cit.>.
The derivation of optimal transport maps also becomes essential when devising estimators for the regression map within <cit.>'s model.
The second reason is that the flatness of the Wasserstein space, that is, the isometric property of the logarithmic map (<ref>), does not hold for the multivariate case in general.
This means the transformation method by <cit.> lacks the theoretical support for preserving the geometric properties of the Wasserstein space in the multivariate case.
Moreover, the identifiability result for the regression map in the model by <cit.>, which depends on the flatness of the Wasserstein space, is difficult to generalize to the multivariate case.
Another study <cit.> analyzes the multivariate case and reveals several theoretical properties such as the sample complexity.
We addressed these challenges by limiting the class of distributions to Gaussian distributions.
In our model, we transform Gaussian distributions to unconstrained matrices via the map (<ref>).
Consequently, we simplify the regression of Gaussian distribution-on-Gaussian distribution to matrix-on-matrix linear regression. Given the explicit expression of the optimal transport map between Gaussian distributions as (<ref>), our transformation avoids computational difficulties.
Although our transformation is not isometric in general, it has certain isometric properties as shown in Proposition <ref>.
This guarantees that our transformation method partially utilizes the geometric properties of the Gaussian space.
§.§ Generalization to Elliptically Symmetric Distributions
Our proposed regression models extend to scenarios where distributions ν_1 and ν_2 belong to the class of elliptically symmetric distributions, a broader category than Gaussian distributions.
This is because, as shown in <cit.>, the closed-form expression of the Wasserstein distance (<ref>) holds if two distributions are in the same class of elliptically symmetric distributions.
We give a more rigorous description.
Let d ≥ 1 and let f: [0, ∞) → [0, ∞) be a measurable function that is not almost everywhere zero and satisfies
∫_-∞^∞ |t|^ℓ f(t^2)dt < ∞, ℓ = d-1, d, d+1.
Given such a function f, for a positive definite matrix A ∈ℝ^d × d and a vector v ∈ℝ^d, one can consider a density function of the form f_A, v(x) = (c_A)^-1f((x-v)^⊤ A(x-v)), x ∈ℝ^d. Here,
we define c_A = ∫_ℝ^d f((x-v)^⊤ A(x-v)) dx as the normalizing constant.
Then, we can consider a class of distributions on ℝ^d whose elements have a density f_A, v for some positive definite matrix A ∈ℝ^d × d and vector v ∈ℝ^d.
We denote such a class by 𝒫_f(ℝ^d), and call it the class of elliptically symmetric distributions with function f.
For example, if we set f(t) = e^-t/2, we obtain the set of Gaussian distributions with positive definite covariance matrices as 𝒫_f(ℝ^d).
Furthermore, by setting f(t) = I_[0, 1](t), we obtain the set of uniform distributions on ellipsoids of the forms U_A, v = {x ∈ℝ^d: (x-v)^⊤A(x-v) ≤ 1} for some positive definite matrix A ∈ℝ^d × d and vector v ∈ℝ^d.
According to Theorem 2.4 of <cit.>, the closed-forms of the Wasserstein distance (<ref>) and optimal transport map (<ref>) are valid for any two measures μ_1, μ_2 in the same class of elliptically symmetric distributions 𝒫_f(ℝ^d).
Since our models rely only on the forms (<ref>), (<ref>), our results extend to the case in which (ν_1, ν_2) is a 𝒫_f_1(ℝ^d_1) ×𝒫_f_2(ℝ^d_2)-valued random element.
Note that f_1,f_2:[0, ∞)→ [0, ∞) should be non-vanishing and satisfy the condition (<ref>) for d=d_1 and d=d_2, respectively.
§ EMPIRICAL RISK MINIMIZATION ALGORITHMS
In this section, we propose empirical risk minimization procedures for constructing a prediction model following the regression map Γ_𝒢, 𝔹_0 (<ref>) based on observed data.
Specifically, we consider two cases: (i) we directly observe random distributions (Section <ref>), and (ii) we observe only samples from the random distributions (Section <ref>).
We defer the estimation of the coefficient tensor 𝔹_0 itself and related topics to the Appendix.
§.§ Algorithm with Directly Observed Distributions
Suppose that we directly observe n independent pairs of random Gaussian distributions (ν_i1, ν_i2) ∼ℱ for i=1,...,n.
Here, we write ν_ij = N(μ_ij, Σ_ij) for j ∈{1, 2}.
Firstly, based on the distributions ν_ij(i=1, ..., n;j=1, 2), we compute the empirical Fréchet means for j ∈{1,2}:
ν̃_j⊕
=
argmin_μ_j ∈𝒢(ℝ^d_j)1/n∑_i=1^n d_W^2(μ_j, ν_ij),
where we write ν̃_j⊕ = N(m̃_j⊕, Σ̃_j⊕).
For solving optimizations in (<ref>),
we can use
the steepest descent algorithm (Section 5.4.1 in <cit.>).
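As a hedged illustration of this step (not the steepest descent routine cited above), a commonly used alternative is a fixed-point iteration for the covariance of the Gaussian Fréchet mean with equal weights; the function below is our own sketch with a fixed number of iterations.
import numpy as np

def spd_sqrt(A):
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def gaussian_frechet_mean(means, covs, n_iter=100):
    # The barycenter mean is the average of the means; the covariance solves a fixed-point equation.
    m_bar = np.mean(means, axis=0)
    S = np.mean(covs, axis=0)                       # any positive definite initializer
    for _ in range(n_iter):
        Sh = spd_sqrt(S)
        Sh_inv = np.linalg.inv(Sh)
        M = np.mean([spd_sqrt(Sh @ Si @ Sh) for Si in covs], axis=0)
        S = Sh_inv @ M @ M @ Sh_inv                 # fixed-point update for the covariance
    return m_bar, S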
Then, we transform Gaussian distributions ν_ij into matrices by X̃_i = φ_ν̃_1⊕ν_i1 and
Ỹ_i = φ_ν̃_2⊕ν_i2.
In the basic model, we solve the following least squares problem:
𝔹̃∈argmin_𝔹∈ℬ∑_i=1^n Ỹ_i - Γ_𝔹(X̃_i)^2_(m̃_2⊕, Σ̃_2⊕),
where ℬ is the parameter space defined by (<ref>), and ·_(m̃_2⊕, Σ̃_2⊕) denotes the norm defined by (<ref>)
for m_∗ = m̃_2⊕ and Σ_∗ = Σ̃_2⊕.
In the rank-K model, we solve the following least squares problem:
𝔹̃∈argmin_𝔹∈ℬ_low∑_i=1^n Ỹ_i - Γ_𝔹(X̃_i) _(m̃_2⊕, Σ̃_2⊕)^2,
where ℬ_low is the parameter space defined by (<ref>).
In either case, we use Γ_𝒢, 𝔹̃ = ξ_ν̃_2⊕∘Γ_𝔹̃∘φ_ν̃_1⊕ as the map for prediction.
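To illustrate the least squares step, the sketch below (ours) adopts the simplifying choice m̃_2⊕ = 0 and Σ̃_2⊕ = I_d_2, under which the weighted norm reduces to the Frobenius norm and the problem decouples into one ordinary least squares fit per entry of the response matrix.
import numpy as np

def fit_basic_model(X_list, Y_list):
    # X_list: n arrays of shape (d1, d1+1); Y_list: n arrays of shape (d2, d2+1).
    d1 = X_list[0].shape[0]
    d2 = Y_list[0].shape[0]
    Xmat = np.stack([X.ravel() for X in X_list])        # (n, d1*(d1+1))
    Ymat = np.stack([Y.ravel() for Y in Y_list])        # (n, d2*(d2+1))
    coef, *_ = np.linalg.lstsq(Xmat, Ymat, rcond=None)  # one OLS fit per response entry
    B = coef.reshape(d1, d1 + 1, d2, d2 + 1)
    # Symmetrize the output block so that B lies in the constraint class; with
    # symmetric responses the unconstrained fit already satisfies the constraint.
    B[:, :, :, 1:] = 0.5 * (B[:, :, :, 1:] + B[:, :, :, 1:].transpose(0, 1, 3, 2))
    return B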
We propose an algorithm for solving the optimization problem in (<ref>). We observe that although the tensor 𝔹 with rank K-decomposition (<ref>) is not linear in (A_1, A_2, A_3, A_4) jointly, it is linear in A_c individually for c=1, 2, 3, 4. This observation suggests a so-called block relaxation algorithm (<cit.>), which alternately updates A_c, c=1, 2, 3, 4, while keeping the other matrices fixed.
This algorithm is employed in <cit.> for parameter estimation in a tensor regression model.
Recalling that the matrices A_3, A_4 have the forms (<ref>) so that 𝔹∈ℬ_low, we denote the objective function in the optimization problem in (<ref>) as
ℓ(A_1, A_2, α, β, γ)
=
∑_i=1^n Ỹ_i - Γ_𝔹(X̃_i) _(m̃_2⊕, Σ̃_2⊕)^2,
where α = (α_1, ..., α_K) ∈ℝ^K, β = (β_1, ..., β_K) ∈ℝ^K and γ = (γ_1, ..., γ_K) ∈ℝ^K.
Then the procedure for solving the optimization problem in (<ref>) is summarized in Algorithm <ref>.
As the block relaxation algorithm monotonically decreases the objective function (<cit.>), the convergence of objective values ℓ(A_1^(t), A_2^(t), α^(t), β^(t), γ^(t)) is guaranteed whenever the function ℓ is bounded from above.
§.§ Algorithm with Samples of Not Directly Observed Distributions
In this section, suppose that we observe only samples from the random Gaussians (ν_i1, ν_i2), instead of the direct observation on (ν_i1, ν_i2) in Section <ref>.
Rigorously, we assume the following two-step data generating process.
First, n independent pairs of Gaussian distributions (ν_i1, ν_i2) ∼ℱ (i=1, ..., n) are generated.
Next, N sample vectors W_ijm∼ν_ij (m=1, ..., N) are generated from each distribution, and we observe only these sample vectors.
For each fixed (i, j), the W_ijm are independent and identically distributed.
At the beginning, we develop a proxy for each Gaussian distribution ν_ij = N(μ_ij, Σ_ij).
For i=1,...,n and j ∈{1,2}, we consider the empirical mean and covariance of W_ijm as
μ̂_ij
=
1/N∑_m=1^N W_ijm and Σ̂_ij
=
1/N∑_m=1^N (W_ijm - μ̂_ij)(W_ijm - μ̂_ij)^⊤,
for estimators of μ_ij and Σ_ij, respectively.
We define ν̂_ij = N(μ̂_ij, Σ̂_ij) and use it for a proxy of ν_ij = N(μ_ij, Σ_ij).
Based on these proxies, we compute the empirical Fréchet means for j ∈{1,2}:
ν̂_j⊕
=
argmin_μ_j ∈𝒢(ℝ^d_j)1/n∑_i=1^n d_W^2(μ_j, ν̂_ij),
where we write ν̂_1⊕ = N(m̂_1⊕, Σ̂_1⊕),
ν̂_2⊕ = N(m̂_2⊕, Σ̂_2⊕).
As with the directly observed case,
we can use
the steepest descent algorithm
for solving this optimization.
Then, we transform Gaussian distributions ν̂_ij into matrices by X̂_i = φ_ν̂_1⊕ν̂_i1 and
Ŷ_i = φ_ν̂_2⊕ν̂_i2.
In the basic model, we solve the following least squares problem:
𝔹̂∈argmin_𝔹∈ℬ∑_i=1^n Ŷ_i - Γ_𝔹(X̂_i)^2_(m̂_2⊕, Σ̂_2⊕),
where ·_(m̂_2⊕, Σ̂_2⊕) denotes the norm defined by (<ref>)
for m_∗ = m̂_2⊕ and Σ_∗ = Σ̂_2⊕.
In the rank-K model, we solve the following least squares problem:
𝔹̂∈argmin_𝔹∈ℬ_low∑_i=1^n Ŷ_i - Γ_𝔹(X̂_i) _(m̂_2⊕, Σ̂_2⊕)^2.
In either case, we use Γ_𝒢, 𝔹̂ = ξ_ν̂_2⊕∘Γ_𝔹̂∘φ_ν̂_1⊕ as the prediction map.
As with the directly observed case, we can use the block relaxation algorithm for solving the optimization (<ref>) by the similar manner of Algorithm <ref>.
§ ANALYSIS OF IN-SAMPLE PREDICTION ERROR
In this section, we analyze the prediction error of the proposed models and algorithms.
We especially focus on the in-sample prediction error measured on the observations, which naturally extends to the out-of-sample prediction error.
Here, suppose that we directly observe the pairs of Gaussian distributions (ν_1i, ν_2i), i=1, ..., n from the model (<ref>) as the case in Section <ref>.
For simplicity, we assume that the true values of Fréchet means ν_1⊕ and ν_2⊕ are known.
In addition, we treat predictors {ν_1i}_i=1^n as fixed in this analysis.
Based on the sample (ν_1i, ν_2i), i=1, ..., n, we solve the following least squares problem for ℬ̃ = ℬ or ℬ̃ = ℬ_low:
𝔹̃∈argmin_𝔹∈ℬ̃∑_i=1^n Y_i - Γ_𝔹(X_i) _(m_2⊕, Σ_2⊕)^2,
where X_i = φ_ν_1⊕ν_1i and
Y_i = φ_ν_2⊕ν_2i.
Then, we define the prediction map Γ_𝒢, 𝔹̃ = ξ_ν_2⊕∘Γ_𝔹̃∘φ_ν_1⊕.
Moreover, under the assumption that Γ_𝔹̃(X_i) ∈φ_ν_2⊕𝒢(ℝ^d_2) (i=1, ..., n), we define the in-sample prediction error with the Wasserstein metric in terms of the empirical measure by
ℛ_n(Γ_𝒢, 𝔹̃, Γ_𝒢, 𝔹_0) =
√(1/n∑_i=1^n d_W^2(Γ_𝒢, 𝔹̃(ν_1i),
Γ_𝒢, 𝔹_0(ν_1i))),
which is an analogy of the empirical L^2-norm.
We also assume that the Ξ_d-valued random variable E in the linear model (<ref>) is Gaussian, that is, for any A ∈Ξ_d, ⟨ E, A ⟩_m_∗, Σ_∗ is a real Gaussian random variable.
In the following, we measure the in-sample prediction error of the basic model in terms of the Wasserstein distance.
Note that this is unique to our distribution-on-distribution regression problem, and deriving the convergence rate of in-sample prediction error under this setting is not a trivial problem.
Suppose that (ν_1i, ν_2i) (i=1, ..., n) are pairs of Gaussian distributions generated from the basic model (<ref>), and that error matrices E_i ∈Ξ_d_2 are Gaussian with mean 0 and covariance with trace 1, that is, 𝔼[E_i] = 0 and 𝔼[E_i^2_m_2⊕, Σ_2⊕] = 1.
Let 𝔹̃∈ℬ be a solution of the optimization (<ref>), and assume that Γ_𝔹̃(X_i) ∈φ_ν_2⊕𝒢(ℝ^d_2) holds for i=1, ..., n.
Then, we have
ℛ_n(Γ_𝒢, 𝔹̃, Γ_𝒢, 𝔹_0)= O_P(d_1d_2 / √(n)),
as n →∞.
This result shows that our method achieves the optimal convergence rate.
That is, the convergence rate in Theorem <ref> attains the parametric rate n^-1/2 with respect to the sample size n.
This rate comes from our parametric assumption of Gaussianity on distributions.
In contrast, existing distribution-on-distribution regression models do not impose parametric assumptions, which results in slower convergence rates of estimators for regression parameters.
For example, in the regression model proposed by <cit.>, an estimator for the regression operator achieves the same rate as the minimax rate for function-on-function linear regression in a certain case (Theorem 1 in <cit.>), which is generally slower than the parametric rate.
In the regression model proposed by <cit.>, an estimator for the regression map achieves the rate n^-1/3 (Theorem 3.8 in <cit.>), which is slower than the parametric rate.
Next, we study the in-sample prediction error of the rank-K model.
This analysis provides an effect of the number of ranks K, in addition to the results of the basic model in Theorem <ref>.
Suppose (ν_1i, ν_2i) (i=1, ..., n) are pairs of Gaussian distributions generated from the rank-K model defined in Section <ref>, and that error matrices E_i ∈Ξ_d_2 are Gaussian with mean 0 and covariance with trace 1.
Let 𝔹̃∈ℬ_low be a solution of the optimization
(<ref>), and assume that Γ_𝔹̃(X_i) ∈φ_ν_2⊕𝒢(ℝ^d_2) holds for i=1, ..., n.
Then, we have
ℛ_n(Γ_𝒢, 𝔹̃, Γ_𝒢, 𝔹_0)= O_P(√(Kd_1) / √(n)),
as n →∞.
Theorem <ref> states an advantage of the low-rank model, in addition to the result that the model achieves the optimal parametric rate.
The constant part of the rate is √(Kd_1) in the rank-K model while d_1d_2 in the basic model. This implies that when the dimensions of distributions ν_1, ν_2 are large, the regression map in the rank-K model is better approximated than that in the basic model. In the rate of the rank-K model, the dimension of output distribution d_2 does not appear in the constant part. This is due to the specific forms of matrices (<ref>) imposed on tensors in ℬ_low.
We add some discussion on the observations of distributions.
Recall that we assume the true Fréchet means ν_1⊕, ν_2⊕ are known, and distributions (ν_1i, ν_2i) are directly observed. Relaxing these assumptions presents additional challenges for theoretical analysis. Specifically, if we estimate the Fréchet mean of ν_2i with the empirical Fréchet mean ν̃_2⊕, we solve the least squares problem (<ref>) by replacing Y_i = log_ν_2⊕ν_2i with Ỹ_i = log_ν̃_2⊕ν_2i. Since Ỹ_1, ..., Ỹ_n are not independent, the standard theory for analyzing the error of empirical risk minimization is not directly applicable in this setting. Moreover, if distributions are not directly observed and only samples from them are available, we need to tackle the discrepancy between the estimated distributions based on the sample and the actual distributions in the analysis. As for the estimation of the Fréchet mean, <cit.> derive the rates of convergence of empirical Fréchet mean on the Gaussian space (Corollary 17 in <cit.>), which may be helpful for further theoretical analysis.
Finally, we prove the consistency and asymptotic normality of an estimator for identified regression parameters in the Appendix.
§ SIMULATION STUDIES
In this section, we investigate the finite-sample performance of the proposed methods together with another method through simulation studies.
§.§ Setting
Setting d_1 = d_2 = d, we generate pairs of Gaussian distributions
{(ν_1i, ν_2i)}_i=1^n
from the basic model as follows.
Firstly, for each i=1, ..., n,
we generate independent random variables G_i^(1), ..., G_i^(d)∼ N(0, 1), H_i^(1), ..., H_i^(d)∼ Exp(1) and set a matrix X_i ∈Ξ_d by
X_i = [ G_i^(1) H_i^(1) O; ⋮ ⋱ ; G_i^(d) O H_i^(d) ].
Here,
Exp(1) is the exponential distribution with the rate parameter 1.
Then we obtain a Gaussian distribution ν_1i = ξ_ν_1⊕X_i ∈𝒢(ℝ^d), where
ν_1⊕ is the d-dimensional standard Gaussian distribution. Note that under this setting, the random distribution ν_1i has the Fréchet mean ν_1⊕.
Next, we set the coefficient tensor 𝔹∈ℝ^d × (d+1) × d × (d+1) as
𝔹[·, ·, r, 1]
=
[ 1 0 ⋯ 0; ⋮ ⋱ ; 1 0 ⋯ 0 ], 𝔹[·, ·, r, r+1]
=
[ 0 (2d)^-1 O; ⋮ ⋱ ; 0 O (2d)^-1 ],
for 1 ≤ r ≤ d,
and set the other elements to be zero.
Additionally, for each i=1, ..., n, we
generate independent random variables U_i^(1), ..., U_i^(d)∼ N(0, 1), V_i^(1), ..., V_i^(d)∼ U(-1/2, 1/2) and
set the error matrix E_i ∈Ξ_d by
E_i =
[ U_i^(1) V_i^(1) O; ⋮ ⋱ ; U_i^(d) O V_i^(d) ].
Here, U(-1/2, 1/2) is the uniform distribution on the interval (-1/2, 1/2).
We set Y_i = ⟨ X_i, 𝔹⟩_2 + E_i and obtain a response Gaussian distribution ν_2i = ξ_ν_2⊕Y_i ∈𝒢(ℝ^d), where ν_2⊕ is the d-dimensional standard Gaussian distribution.
Note that under this setting, the condition (<ref>) holds and the random distribution ν_2i has the Fréchet mean ν_2⊕. From the above procedure, we have obtained pairs of Gaussian distributions {(ν_1i, ν_2i)}_i=1^n. Finally, we draw N independent sample vectors from each of the distributions {ν_1i}_i=1^n and {ν_2i}_i=1^n.
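The data-generating process above can be condensed as follows (our own sketch; the final step of drawing N vectors from each distribution is omitted).
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 200

def make_B0(d):
    B = np.zeros((d, d + 1, d, d + 1))
    for r in range(d):
        B[:, 0, r, 0] = 1.0                         # slice B[., ., r, 1]: first column of ones
        B[:, 1:, r, r + 1] = np.eye(d) / (2.0 * d)  # slice B[., ., r, r+1]: (2d)^{-1} on the diagonal
    return B

B0 = make_B0(d)
X = np.stack([np.column_stack([rng.standard_normal(d),              # G_i ~ N(0, 1)
                               np.diag(rng.exponential(1.0, d))])   # H_i ~ Exp(1)
              for _ in range(n)])
E = np.stack([np.column_stack([rng.standard_normal(d),              # U_i ~ N(0, 1)
                               np.diag(rng.uniform(-0.5, 0.5, d))]) # V_i ~ U(-1/2, 1/2)
              for _ in range(n)])
Y = np.einsum('ipq,pqrs->irs', X, B0) + E                           # Y_i = <X_i, B>_2 + E_i
# nu_1i = xi_{nu_1+}(X_i) and nu_2i = xi_{nu_2+}(Y_i) then yield the Gaussian pairs.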
§.§ Methods and Performance Criterion
As an alternative approach, we consider the following model between ν_1i and ν_2i:
W_i = ⟨ Z_i, 𝔸_0 ⟩_2 + E_i, 𝔼[E_i | Z_i] = 0.
Here, Z_i = (m_1i, Σ_1i) ∈Ξ_d_1 and
W_i = (m_2i, Σ_2i) ∈Ξ_d_2 are the matrices obtained from the Gaussian distributions ν_1i = N(m_1i, Σ_1i) and ν_2i = N(m_2i, Σ_2i), respectively.
𝔸_0 ∈ℬ is the regression parameter and E_i ∈Ξ_d_2 is the error matrix in this model.
Note that this alternative model does not consider the Wasserstein metric.
For the proposed models, we construct estimators 𝔹̂ as described in Section <ref>.
For the alternative model (<ref>), we construct an estimator by solving the least square problem
𝔸̂∈argmin_𝔸∈ℬ∑_i=1^n Ŵ_i - ⟨Ẑ_i, 𝔸⟩_2 _F^2,
where Ẑ_i = (m̂_1i, Σ̂_1i) and
Ŵ_i = (m̂_2i, Σ̂_2i).
To investigate the performance of the proposed and alternative methods, following simulations in <cit.>, we generate 200 new predictors {ν_1i}_i=n+1^n+200 and compute the out-of-sample average Wasserstein discrepancy (AWD). Denoting the true response distributions by ν_2i^∗ = ξ_ν_1⊕Y_i^∗ with Y_i^∗ = ⟨ X_i, 𝔹_0⟩_2, and the fitted response distributions by ν_2i^#, the out-of-sample AWD is given by
AWD
=
1/200∑_i=n+1^n+200d_W(ν_2i^∗, ν_2i^#).
In the proposed model,
when the fit of the response in the space Ξ_d_2
does not fall in the range of map φ_ν̂_2⊕ , that is,
Γ_𝔹̂(X_i) ∉φ_ν̂_2⊕𝒢(ℝ^d_2),
we need to modify the fit to calculate the fitted response distribution.
To handle this problem,
we use a boundary projection method similar to one proposed by <cit.>.
Specifically, for d ≥ 1, let g_d: ℝ^d × (d+1)→ℝ^d × d be the map such that g_d((a, V)) = V for (a, V) ∈ℝ^d × (d+1).
If the event (<ref>) happens, we
calculate a constant η_i such that
η_i =
max{η∈ [0, 1]: η (g_d_2∘Γ_𝔹̂(X_i)) + I_d_2 is positive semidefinite},
and update the original fit by η_i Γ_𝔹̂(X_i).
Conceptually, we update the original fit by a projection onto the boundary of φ_ν̂_2⊕𝒢(ℝ^d_2) along the line segment between the origin 0 and the fit Γ_𝔹̂(X_i).
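The constant η_i above admits a simple closed form in terms of the smallest eigenvalue of the matrix part of the fit: it equals 1 when that eigenvalue is nonnegative and min{1, -1/λ_min} otherwise. A possible implementation (ours) is sketched below.
import numpy as np

def boundary_projection(fit):
    # fit = Gamma_Bhat(X_i), an array of shape (d2, d2+1); its matrix part is fit[:, 1:].
    V = 0.5 * (fit[:, 1:] + fit[:, 1:].T)      # symmetrize for numerical safety
    lam_min = np.linalg.eigvalsh(V)[0]         # eigenvalues in ascending order
    eta = 1.0 if lam_min >= 0.0 else min(1.0, -1.0 / lam_min)
    return eta * fit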
In the alternative method, if g_d_2(⟨ X_i, 𝔸̂⟩_2) is not positive semidefinite, we update g_d_2(⟨ X_i, 𝔸̂⟩_2) by argmin_C ∈Sym^+(d_2)C - g_d_2(⟨ X_i, 𝔸̂⟩_2)_F.
§.§ Results
Firstly, we set d = 2 and consider four scenarios with n ∈{20, 200} and N ∈{50, 500}.
We simulate 500 runs for each (n, N) pair, and for each Monte Carlo run, we compute the AWD (<ref>) based on 200 new predictors.
The results of the proposed and alternative methods are summarized in the boxplots of Figure <ref>.
In all four scenarios, the proposed method outperforms the alternative method. This result comes from the fact that the proposed method takes into account the geometry of the Wasserstein metric, while the alternative method does not.
In this setting, the event (<ref>) seldom happened even if the number of distributions n is small.
Next, we set d=6, n=200, N=500 and fit the proposed and alternative models whose regression tensors have rank K ∈{2, 3, 4}.
As with the previous experiment, we simulate 500 runs, and for each Monte Carlo run, we compute the AWD (<ref>) based on 200 new predictors.
The results are summarized in the boxplots of Figure <ref>.
In all cases, the proposed method outperforms the alternative method. In this setting, event (<ref>) happened more frequently than in the previous experiment.
Finally, to see the performance of the methods under the existence of model misspecification, we generate pairs of multivariate t-distributions {(t_1i, t_2i)}_i=1^n and fit the Gaussian-on-Gaussian regression models.
Specifically, we firstly generate pairs of Gaussian distributions {(ν_1i, ν_2i)}_i=1^n from the basic model as described in Section <ref>.
Denoting these Gaussian distributions as ν_1i = N(m_1i, Σ_1i), ν_2i = N(m_2i, Σ_2i), we set multivariate t-distributions as t_1i = t_ℓ(m_1i, Σ_1i), t_2i = t_ℓ(m_2i, Σ_2i). Here, t_ℓ(m, Σ) denotes the multivariate t-distribution with location m, scale matrix Σ and the degree of freedom ℓ.
We draw an i.i.d. sample of size N from each of the distributions {t_1i}_i=1^n and {t_2i}_i=1^n, and construct estimators for the proposed and alternative models, respectively. Finally, we generate 200 new predictors {t_1i}_i=n+1^n+200 and calculate the out-of-sample AWD
200^-1∑_i=n+1^n+200d_W(t_2i^∗, ν_2i^#).
Here, t_2i^∗ = t_ℓ(m_2i^∗, Σ_2i^∗) is the true response t-distribution whose location and scale are given by N(m_2i^∗, Σ_2i^∗) = ξ_ν_1⊕Y_i^∗ with Y_i^∗ = ⟨ X_i, 𝔹_0⟩_2. ν_2i^# is the fitted response Gaussian distribution.
We set d=2, n=200, N=500 and consider three scenarios with the degree of the freedom ℓ∈{5, 10, 15}.
As with the previous experiments, we simulate 500 runs, and for each Monte Carlo run, we compute the AWD (<ref>) based on 200 new predictors.
The results of the proposed and alternative methods are summarized in the boxplots of Figure <ref>.
In all three scenarios, the proposed method outperforms the alternative method. In addition, the prediction performance is getting better as the degree of freedom increases. This result comes from the fact that as the degree of freedom increases, the t-distribution becomes more close to the Gaussian distribution, and thus there is less model misspecification.
§ APPLICATIONS
In this section, we employ the proposed regression model to study the relationship between daily weather in spring (March, April, and May) and that in summer (June, July, and August) in Calgary, Alberta.
We obtain data from <https://calgary.weatherstats.ca>. This dataset contains the temperature and humidity for each day in Calgary from 1953 to 2021. We consider the joint distribution of the average temperatures recorded daily and the average relative humidity recorded daily.
We regard each pair of daily values as one observation from a two-dimensional Gaussian distribution.
As examples,
Figure <ref> illustrates the observations and estimated Gaussian densities for spring and summer in each year from 1953 to 1956.
We applied the proposed (<ref>) and alternative (<ref>) regression models
with the distributions for spring as the predictor and summer as the response. Models are trained on data up to 1988 and predictions are computed for the remaining period, where we predicted the distribution of summer based on that of spring for each year.
Table <ref> shows the fitting and prediction results of the proposed method for training and prediction periods. Additionally, Table <ref> shows the result of the alternative method. In these tables, we report the summary of the Wasserstein discrepancies between observed and fitted distributions in training periods, and those between observed and predicted distributions in prediction periods. We also show the prediction results of both methods from 2017 to 2019 in Figure <ref>.
We find that fitting and prediction by the proposed model are generally better than those by the alternative model. This result can be explained by the fact that the proposed model takes into consideration the geometry of the Wasserstein space while the alternative model does not.
§ CONCLUSION
In this paper, we propose the distribution-on-distribution regression models for multivariate Gaussians with the Wasserstein metric.
In the proposed regression models, Gaussian distributions are transformed into elements in linear matrix spaces by the proposed nearly isometric maps, and the regression problem comes down to matrix-on-matrix linear regression.
It has the advantage that the distribution-on-distribution regression is reduced to a linear regression while keeping the properties of distributions.
Also, owing to the linear regression structure, we can easily implement and interpret the models.
We incorporate a low-rank structure in the parameter tensor to address large dimensional Gaussian distributions and also discuss the generalization of our models to the class of elliptically symmetric distributions.
In the simulation studies, we find that our models perform better than an alternative approach of transforming Gaussian distributions to matrices that do not consider the Wasserstein metric.
Appendix
§ PROOFS
Firstly, we set a = m - S(Σ_∗, Σ)m_∗ and
V = S(Σ_∗, Σ) - I. Then, we have
a + Vm_∗
=
m - m_∗ and
VΣ_∗ V
=
Σ + Σ_∗ - Σ_∗^1/2[Σ_∗^1/2ΣΣ_∗^1/2]^1/2Σ_∗^-1/2
- Σ_∗^-1/2[Σ_∗^1/2ΣΣ_∗^1/2]^1/2Σ_∗^1/2.
Therefore, φ_μ_∗μ_(m_∗, Σ_∗)^2 is expressed as
φ_μ_∗μ_(m_∗, Σ_∗)^2
=a+Vm_∗^2+tr(VΣ_∗ V)
=
m - m_∗^2
+
tr(Σ) + tr(Σ_∗)
-
tr(Σ_∗^1/2[Σ_∗^1/2ΣΣ_∗^1/2]^1/2Σ_∗^-1/2)
- tr(Σ_∗^-1/2[Σ_∗^1/2ΣΣ_∗^1/2]^1/2Σ_∗^1/2)
=m - m_∗^2
+ tr(Σ) + tr(Σ_∗)
-2tr([Σ_∗^1/2ΣΣ_∗^1/2]^1/2)
=d_W^2(μ, μ_∗).
Next, let U be a d × d orthogonal matrix and suppose
μ_∗ = N(m_∗, Σ_∗), μ_1 = N(m_1, Σ_1) and μ_2 = N(m_2, Σ_2) are Gaussian measures in 𝒞_U.
Because Σ_1^1/2Σ_2^1/2 = Σ_2^1/2Σ_1^1/2 holds in this setting, the Wasserstein distance between μ_1 and μ_2 is expressed as
d_W^2(μ_1, μ_2)
=
m_1 - m_2^2 + tr((Σ_1^1/2 - Σ_2^1/2)^2).
On the other hand,
because Σ_∗^1/2Σ_1^1/2 = Σ_1^1/2Σ_∗^1/2 and
Σ_∗^1/2Σ_2^1/2 = Σ_2^1/2Σ_∗^1/2 also hold in this setting, we have
φ_μ_∗μ_1 = (m_1 - Σ_1^1/2Σ_∗^-1/2m_∗, Σ_1^1/2Σ_∗^-1/2 - I),
φ_μ_∗μ_2 = (m_2 - Σ_2^1/2Σ_∗^-1/2m_∗, Σ_2^1/2Σ_∗^-1/2 - I).
This implies
φ_μ_∗μ_1 - φ_μ_∗μ_2
=
(m_1 - m_2 - (Σ_1^1/2 - Σ_2^1/2)Σ_∗^-1/2m_∗, (Σ_1^1/2 - Σ_2^1/2)Σ_∗^-1/2),
and we have
φ_μ_∗μ_1 - φ_μ_∗μ_2 _(m_∗, Σ_∗)^2
=
m_1 - m_2^2 + tr((Σ_1^1/2 - Σ_2^1/2)^2).
From (<ref>) and (<ref>), we obtain
d_W(μ_1, μ_2)
=
φ_μ_∗μ_1 - φ_μ_∗μ_2 _(m_∗, Σ_∗).
To prove Theorem 1 and 2, we employ the following general result regarding the in-sample prediction error of least squares regression, which is shown by <cit.>.
We refer to Section A.2 in <cit.> for Gaussian random variables in Hilbert spaces.
Let x_1, ..., x_n be fixed covariates taking values in a set 𝒳, and let Y_1, ..., Y_n be random variables taking values in a separable Hilbert space (𝒴, ·_𝒴) satisfying
Y_i = g_0(x_i) + ε_i, i=1, ..., n.
Here, ε_i are independent Gaussian noise terms with zero mean and covariance trace 1, and g_0: 𝒳→𝒴 is an unknown function in a class 𝒢. Let define the empirical norm
g_n = √(n^-1∑_i=1^n g(x_i)_𝒴^2) for g ∈𝒢, and define
J(δ) = ∫_0^δ√(log N_n(t, ℬ_n(δ; 𝒢), ·_n))dt for δ > 0, where N_n(t, ℬ_n(δ; 𝒢), ·_n) is the t-covering number of the ball
ℬ_n(δ; 𝒢) = {g ∈𝒢: g_n ≤δ}. Then, if there exist a real sequence {δ_n} and a constant C > 0 such that J(δ_n) ≤ C √(n)δ_n^2, the least squares estimator ĝ_n = argmin_g ∈𝒢n^-1∑_i=1^n Y_i - g(x_i) _𝒴^2 satisfies
ĝ_n - g_0_n = O_P(δ_n).
Using this result, we prove Theorem 1 and 2.
Throughout the proofs, we denote a ≲ b
when there exists a constant C > 0 not depending on n, d_1, d_2, K such that a ≤ Cb.
Firstly we bound the in-sample prediction error regarding the map Γ_𝔹_0, which is defined by
Γ_𝔹̃ - Γ_𝔹_0_n
=
√(
n^-1∑_i=1^n Γ_𝔹̃(X_i) - Γ_𝔹_0(X_i) _(m_2⊕, Σ_2⊕)^2).
Our strategy is to bound the metric entropy of the function space ℱ = {Γ_𝔹: 𝔹∈ℬ} and employ Theorem <ref>.
We define the δ-ball of space ℱ as ℬ_n(δ; ℱ) = {Γ_𝔹∈ℱ: Γ_𝔹_n ≤δ} and denote its t-covering number as
N_n(t, ℬ_n(δ; ℱ), ·_n).
By defining
𝔹' = Γ_𝔹_n for 𝔹∈ℬ,
the set ℬ_n(δ; ℱ) is isometric to the δ-ball within the space (ℬ, ·').
Since the space (ℬ, ·') has dimension d_1(d_1+1)d_2(d_2+3)/2, by a volume ratio argument (Example 5.8 in <cit.>), we have
log N_n(t, ℬ_n(δ; ℱ), ·_n)
≲ d_1^2d_2^2 log(1 + 2δ/t).
Using this upper bound, we have
∫_0^δ√(log N_n(t, ℬ_n(δ; ℱ), ·_n))dt
≲ d_1d_2
∫_0^δ√(log(1 + 2δ/t))dt
= δ d_1d_2 ∫_0^1 √(log(1 + 2/u))du (u=t/δ)
≲δ d_1d_2.
This implies we can apply Theorem <ref> with δ_n = d_1d_2/√(n) and obtain Γ_𝔹̃ - Γ_𝔹_0_n = O_P(d_1d_2/√(n)).
Next, we bound the in-sample prediction error ℛ_n(Γ_𝒢, 𝔹̃, Γ_𝒢, 𝔹_0).
Because the Wasserstein space has nonnegative sectional curvature at any reference measure (e.g., Section 2.3.2 in <cit.>), the Gaussian space, which is the restriction of the Wasserstein space to Gaussian measures, also has this property.
In other words, the inequality
d_W(μ_1, μ_2) ≤φ_ν_2⊕μ_1 - φ_ν_2⊕μ_2_(m_2⊕, Σ_2⊕)
holds for any μ_1, μ_2 ∈𝒢(ℝ^d_2).
This implies ℛ_n(Γ_𝒢, 𝔹̃, Γ_𝒢, 𝔹_0) ≤Γ_𝔹̃ - Γ_𝔹_0_n holds, and combining this fact with Γ_𝔹̃ - Γ_𝔹_0_n = O_P(d_1d_2/√(n)), we have ℛ_n(Γ_𝒢, 𝔹̃, Γ_𝒢, 𝔹_0) = O_P(d_1d_2/√(n)).
As with the proof of Theorem 1, we firstly bound the in-sample prediction error regarding the map Γ_𝔹_0. We define the function space as ℱ_low = {Γ_𝔹: 𝔹∈ℬ_low}, define its δ- ball as ℬ_n(δ; ℱ_low) = {Γ_𝔹∈ℱ_low: Γ_𝔹_n ≤δ} , and denote its t-covering number as
N_n(t, ℬ_n(δ; ℱ_low), ·_n).
By defining
𝔹” = Γ_𝔹_n for 𝔹∈ℬ_low,
the set ℬ_n(δ; ℱ_low) is isometric to the δ-ball within the space (ℬ_low, ·”).
Recall that if a tensor 𝔹 = A_1, A_2, A_3, A_4 is in ℬ_low, the matrices A_3 and A_4 have the forms (<ref>). Based on this fact, denoting α = (α_1, ..., α_K), β = (β_1, ..., β_K), and γ = (γ_1, ..., γ_K), we consider a correspondence from ℝ^2Kd_1+4K to ℬ_low such that
(vec(A_1), vec(A_2), α, β, γ) ↦ A_1, A_2, A_3, A_4 ,
and define
(vec(A_1), vec(A_2), α, β, γ)”'
=
A_1, A_2, A_3, A_4 ”.
Since the δ-ball within the space
(ℬ_low, ·”) is isometric to the δ-ball within
(ℝ^2Kd_1 + 4K, ·”'), we eventually have that the set ℬ_n(δ; ℱ_low) is isometric to the δ-ball within the space (ℝ^2Kd_1 + 4K, ·”').
Therefore, by a volume ratio argument, we have
log N_n(t, ℬ_n(δ; ℱ_low), ·_n)
≲ Kd_1 log(1 + 2δ/t).
Using this upper bound, as with the proof of Theorem 1, we have
∫_0^δ√(log N_n(t, ℬ_n(δ; ℱ), ·_n))dt
≲δ√(Kd_1).
This implies
we can apply Theorem <ref> with δ_n = √(Kd_1)/√(n) and obtain Γ_𝔹̃ - Γ_𝔹_0_n = O_P(√(Kd_1)/√(n)).
As with the proof of Theorem 1, the nonnegativity of the sectional curvature of the Wasserstein space implies ℛ_n(Γ_𝒢, 𝔹̃, Γ_𝒢, 𝔹_0) ≤Γ_𝔹̃ - Γ_𝔹_0_n.
Combining this fact with Γ_𝔹̃ - Γ_𝔹_0_n = O_P(√(Kd_1)/√(n)), we obtain ℛ_n(Γ_𝒢, 𝔹̃, Γ_𝒢, 𝔹_0) = O_P(√(Kd_1)/√(n)).
§ PARAMETER IDENTIFICATION
In this section, we deal with the identification of regression parameter 𝔹 in our proposed models. Although the parameter 𝔹 does not need to be identified in the empirical risk minimization problems in the main article, it must be identified when we consider estimation or inference for the regression parameter.
§.§ Basic Model
Recall that assuming linear regression model (<ref>) is equivalent to assuming the model (<ref>) for each 1 ≤ r ≤ d_2 and 1 ≤ s ≤ d_2+1. Let fix indexes 1 ≤ r ≤ d_2 and 1 ≤ s ≤ d_2+1 and consider the identification of parameter 𝔹[·, ·, r, s] ∈ℝ^d_1 × (d_1+1)
in (<ref>).
In order to deal with the identifiability issue coming from the symmetry in the matrix X ∈Ξ_d_1, we impose the following condition on the parameter 𝔹[·, ·, r, s]:
𝔹[p, q, r, s]
=
0, for 1 ≤ p ≤ d_1, p+2 ≤ q ≤ d_2+1.
In other words, the matrix 𝔹[·, ·, r, s]
has a lower triangular form
[ ∗ ⋮ ∗ O ; ⋮ ⋮ ⋱ ; ⋮ ⋮ ⋱ ; ∗ ⋮ ∗ ∗ ],
where ∗ is some real number.
If two matrices 𝔹[·, ·, r, s] and
𝔹'[·, ·, r, s] satisfy the condition (<ref>), we have
⟨ X, 𝔹[·, ·, r, s]⟩ = ⟨ X, 𝔹'[·, ·, r, s]⟩ for any X ∈Ξ_d_1 ⟹ 𝔹[·, ·, r, s]
=
𝔹'[·, ·, r, s],
which guarantees the identifiability of the parameter
𝔹[·, ·, r, s].
In summary, by adding condition (<ref>) to the existing parameter space, we define the following modified parameter space for the basic model :
ℬ^∗ = {𝔹∈ℬ: the condition (<ref>) holds for each 1 ≤ r ≤ d_2 and 1 ≤ s ≤ d_2+1}.
Then, the parameter 𝔹 is uniquely identified in ℬ^∗.
§.§ Low-Rank Model
Next, we consider the identification of regression parameters in the low-rank model.
Let 𝔹 admit the rank-K decomposition 𝔹 = A_1, A_2, A_3, A_4.
Following an identification strategy used in <cit.> for tensor regression models,
we adopt
the following specific constrained parametrization
to fix the scaling and permutation indeterminacy of the tensor decomposition.
* To fix the scaling indeterminacy, we assume A_1, A_2, A_3 are scaled such that
a_1^(k)[1] = a_2^(k)[1] = a_3^(k)[1] = 1, 1 ≤ k ≤ K
In other words, the first rows of A_1, A_2, A_3 are ones.
Since A_3 is assumed to have the form in (<ref>), this implies that all elements of A_3 are ones.
This scaling of A_1, A_2, A_3 determines the first row of A_4 and fixes scaling indeterminacy (Section 4.2 in <cit.>).
* To fix the permutation indeterminacy, we assume that the first row elements of A_4 are distinct and arranged in the descending order
a_4^(1)[1] > a_4^(2)[1] > ⋯ > a_4^(K)[1].
This fixes permutation indeterminacy (Section 4.2 in <cit.>).
Adding these constraints to the existing parameter space,
we define the modified parameter space for the rank-K model as
ℬ_low^∗
=
{𝔹 = A_1, A_2, A_3, A_4 ∈ℬ_low:
A_1, A_2, A_3, A_4 satisfy the conditions (<ref>), (<ref>)}.
If the tensor 𝔹 = A_1, A_2, A_3, A_4 ∈ℬ_low^∗
satisfies the condition
rank A_1 +
rank A_2 +
rank A_3 +
rank A_4 ≥ 2K+3,
then Proposition 3 in <cit.> implies that 𝔹 is uniquely identified in ℬ_low^∗.
§ CONSISTENCY AND ASYMPTOTIC NORMALITY OF ESTIMATORS
In this section, we study the asymptotic property of estimators for the regression parameter in the basic model.
Let {(ν_i1, ν_i2)}_i=1^n be independent realization of the pair of Gaussian distributions (ν_1, ν_2) from the basic model. For simplicity, we assume the true Fréchet means ν_1⊕, ν_2⊕ are known and distributions {(ν_1i, ν_2i)}_i=1^n are fully observed.
We set X_i = φ_ν_1⊕ν_1i, Y_i = φ_ν_2⊕ν_2i and define an estimator as
𝔹̃_n =
argmin_𝔹∈ℬ^∗∑_i=1^n Y_i - ⟨X_i, 𝔹⟩_(m_2⊕, Σ_2⊕)^2.
Here, ℬ^∗ is the modified parameter space defined by (<ref>).
In order to state our results, we introduce a half-vectorization of tensor 𝔹 in ℬ^∗. For a matrix A ∈ℝ^d × (d+1) , we define its vectorization vech^∗(A) ∈ℝ^d(d+3)/2 as
vech^∗(A)
=
(A[1, 1], A[2, 1], ⋯ ,A[d, 1], A[1, 2], A[2, 2], ⋯ A[d, 2],
A[2, 3], ⋯ ,A[d, 3], A[3, 4], ⋯ ,A[d,4], ⋯ ,A[d-1, d], A[d, d], A[d, d+1]).
Furthermore, for a tensor 𝔹∈ℬ^∗, we define its vectorization vec^∗(𝔹) ∈ℝ^d_1(d_1+1)d_2(d_2+1)/4 as
vec^∗(𝔹)
=
( (vech^∗(𝔹[·, ·, r, s])^⊤)_1 ≤ r ≤ d_2,r+2 ≤ s ≤ d_2+1)^⊤.
Note that the vec^∗(·)
operator is a one-to-one correspondence between ℬ^∗ and ℝ^d_1(d_1+1)d_2(d_2+1)/4.
Therefore, for any θ∈ℝ^d_1(d_1+1)d_2(d_2+1)/4,
there uniquely exists a tensor 𝔹∈ℬ^∗ such that vec^∗(𝔹) = θ.
We denote this tensor 𝔹 as 𝔹(θ).
Under this vectorization, we denote
θ̃_n = vec^∗(𝔹̃_n) and θ_0 = vec^∗(𝔹_0), and analyze the asymptotic property of the estimator θ̃_n with the standard theory for M-estimation.
For vector θ∈ℝ^d_1(d_1+1)d_2(d_2+1)/4 and
matrices X ∈Ξ_d_1, Y ∈Ξ_d_2,
we define
m_θ(vech^∗(X), vech^∗(Y)) = vech^∗(Y) - vech^∗(⟨ X, 𝔹(θ) ⟩_2)_(m_2⊕, Σ_2⊕)^2.
Here, for a vector z ∈ℝ^d_2(d_2+3)/2 represented as z = vech^∗(A) with a matrix A ∈ℝ^d_2 × (d_2+1), we
define its norm as z_m_2⊕, Σ_2⊕ = A_m_2⊕, Σ_2⊕.
Then, the estimator θ̃_n is characterized as the minimizer of the criterion function θ↦ n^-1∑_i=1^n m_θ(vech^∗(X_i), vech^∗(Y_i)). Note that
the vector
vech^∗(⟨ X, 𝔹(θ) ⟩_2) ∈ℝ^d_2(d_2+3)/2 has the form
vech^∗(⟨ X, 𝔹(θ) ⟩_2)
=
(⟨vech^∗(X), vech^∗(𝔹(θ)[·, ·, r, s])⟩)_1 ≤ r ≤ d_2, r+2 ≤ s ≤ d_2+1,
which implies θ̃_n is the least-square estimator in the linear regression model between vectors
vech^∗(X) and vech^∗(Y).
Then, we obtain the following results. We denote the partial derivative of the function m_θ in terms of θ as ∇_θ m_θ.
Assume θ_0 is in a compact parameter space Θ_0 ⊂ℝ^d_1(d_1+1)d_2(d_2+1)/4 and the pair of vectors (vech^∗(X_i), vech^∗(Y_i)) is supported on a bounded set. Then,
θ̃_n is a consistent estimator for θ_0.
We show that the set of functions {m_θ: θ∈Θ_0} is a Glivenko-Cantelli class (Section 19 in <cit.>).
If this holds, the consistency of the estimator θ̃_n follows from Theorem 5.7 in <cit.>.
Note that for a vector z = (z_1, ..., z_d_2(d_2+3)/2) ∈ℝ^d_2(d_2+3)/2, the norm z_m_2⊕,Σ_2⊕ has the form
z_m_2⊕, Σ_2⊕^2
=
∑_1 ≤ i ≤ j ≤ d_2(d_2+3)/2
c_ijz_iz_j,
where c_ij are constants determined by the values of m_2⊕ and Σ_2⊕.
This implies
that the map θ↦ m_θ(vech^∗(X), vech^∗(Y)) is continuous for each fixed vech^∗(X) and
vech^∗(Y). Moreover,
because the parameter θ and
vectors vech^∗(X) and
vech^∗(Y) are in bounded regions,
the map m_θ is also uniformly bounded.
That is, there exists a constant C > 0 such that
m_θ(vech^∗(X), vech^∗(Y)) ≤ C for all θ∈Θ_0, vech^∗(X), vech^∗(Y).
This implies the set of functions
{m_θ: θ∈Θ_0 } is dominated by the integrable constant function C.
Combining these facts with the
assumption of compactness of Θ_0,
Example 19.8 in <cit.> implies that {m_θ: θ∈Θ_0} is a Glivenko-Cantelli class.
In addition to the assumptions in Theorem <ref>, suppose θ_0 is an interior point of Θ_0 and the map θ↦𝔼[m_θ(vech^∗(X_i), vech^∗(Y_i))] has nonsingular Hessian matrix V_θ_0 at θ_0.
Then, √(n)(θ̃_n - θ_0) converges in distribution to a normal distribution with mean zero and covariance matrix
V_θ_0^-1𝔼[∇_θ m_θ_0(vech^∗(X_i), vech^∗(Y_i)) ∇_θ m_θ_0(vech^∗(X_i), vech^∗(Y_i))^⊤]V_θ_0^-1.
When the norm ·_(m_2⊕, Σ_2⊕) is equal to the Frobenius norm, that is, m_2⊕ = 0 and Σ_2⊕ = I, the
second-derivative matrix V_θ_0 has the form
V_θ_0
=
[ 𝔼[vech^∗(X_i)vech^∗(X_i)^⊤] O; ⋱ ; O 𝔼[vech^∗(X_i)vech^∗(X_i)^⊤] ].
Therefore, V_θ_0 is nonsingular if and only if the matrix 𝔼[vech^∗(X_i)vech^∗(X_i)^⊤] is nonsingular.
We check the conditions of Theorem 5.23 in <cit.>, which is a standard result for the asymptotic normality of the M-estimator.
Noting that the norm z_(m_2⊕, Σ_2⊕) has the form
(<ref>) for a vector z = (z_1,..., z_d_2(d_2+3)/2) ∈ℝ^d_2(d_2+3)/2, the function θ↦ m_θ(vech^∗(X), vech^∗(Y)) is differentiable on the interior of Θ_0 for each fixed vech^∗(X) and
vech^∗(Y).
Moreover, because the parameter θ and
vectors vech^∗(X) and
vech^∗(Y) are in bounded regions, the partial derivative ∇_θ m_θ is also bounded. That is, there exists a constant M > 0 such that ∇_θ m_θ(vech^∗(X), vech^∗(Y))≤ M for all θ∈Θ_0,vech^∗(X) and vech^∗(Y).
Combining this fact with the multi-dimensional mean value theorem, for every θ_1 and θ_2 in a neighborhood of θ_0, we have
|m_θ_1(vech^∗(X), vech^∗(Y)) - m_θ_2(vech^∗(X), vech^∗(Y))|
≤
Mθ_1 - θ_2.
Finally, the map θ↦𝔼[m_θ(vech^∗(X_i), vech^∗(Y_i))] is assumed to have nonsingular Hessian matrix V_θ_0 at θ_0. Then, the conditions of Theorem 5.23 in <cit.> are fulfilled, and we have the conclusion from the theorem.
|
http://arxiv.org/abs/2307.04204v1 | 20230709151645 | Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory | [
"Minhak Song",
"Chulhee Yun"
] | cs.LG | [
"cs.LG",
"math.OC",
"stat.ML"
] |
Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory
Minhak Song, Chulhee Yun
August 12, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
<cit.> empirically study the evolution of the largest eigenvalue of the loss Hessian, also known as sharpness, along the gradient descent (GD) trajectory and observe the Edge of Stability (EoS) phenomenon. The sharpness increases at the early phase of training (referred to as progressive sharpening), and eventually saturates close to the threshold of 2 / (step size). In this paper, we start by demonstrating through empirical studies that when the EoS phenomenon occurs, different GD trajectories (after a proper reparameterization) align on a specific bifurcation diagram independent of initialization. We then rigorously prove this trajectory alignment phenomenon for a two-layer fully-connected linear network and a single-neuron nonlinear network trained with a single data point. Our trajectory alignment analysis establishes both progressive sharpening and EoS phenomena, encompassing and extending recent findings in the literature.
§ INTRODUCTION
It is widely believed that implicit bias or regularization of gradient-based methods plays a key role in generalization of deep learning <cit.>. There is a growing literature <cit.> studying how the choice of optimization methods induces an implicit bias towards specific solutions among the many global minima in overparameterized settings.
<cit.> identify a surprising implicit bias of gradient descent (GD) towards global minima with a certain sharpness[Throughout this paper, the term “sharpness” means the maximum eigenvalue of the training loss Hessian.] value depending on the step size η. Specifically, for reasonable choices of η, (a) the sharpness of the loss at the GD iterate gradually increases throughout training until it reaches the stability threshold[For quadratic loss, GD becomes unstable if the sharpness is larger than a threshold of 2/(step size).] of 2/η (known as progressive sharpening), and then (b) the sharpness saturates close to or above the threshold for the remainder of training (known as Edge of Stability (EoS)).
These findings have sparked a surge of research aimed at developing a theoretical understanding of the progressive sharpening and EoS phenomena <cit.>.
In this paper, we study these phenomena through the lens of bifurcation theory, both empirically and theoretically.
Motivating observations:
Figure <ref> illustrates the GD trajectories with different initializations and fixed step sizes trained on three types of two-dimensional functions: (a) log(cosh(xy)), (b) 1/2(tanh(x)y)^2, and (c) 1/2(ELU(x)y)^2, where x and y are scalars. All three functions ℒ:ℝ^2→ℝ have sharpness y^2 at the global minimum (0, y). These toy models can be viewed as examples of single-neuron models, where (a) represents a linear network with log-cosh loss, while (b) and (c) represent nonlinear networks with squared loss. These simple models can capture some interesting aspects of neural network training in the EoS regime, which are summarized below:
* EoS phenomenon: GD converges to a global minimum near the point (0,√(2/η)) with sharpness close to 2/η. During the convergence phase, the training dynamics exhibit period-2 oscillations.
* For different initializations, GD trajectories for a given step size align on the same curve. For example, Figure <ref> shows that GD trajectories with different initializations closely follow a specific U-shaped curve until convergence. We call this phenomenon trajectory alignment.
* In Figures <ref> and <ref>, GD trajectories are aligned on a curve with a fractal structure that qualitatively resembles the bifurcation diagram of a typical polynomial map, such as the logistic map. Particularly, Figure <ref> demonstrates a period-halving phase transition in the GD dynamics, shifting from period-4 oscillation to period-2 oscillation.
* Surprisingly, the curve that GD trajectories approach and follow coincides with the bifurcation diagram of a one-dimensional map x ↦ x - η∂/∂ xℒ(x, y) with a fixed “control parameter” y. The stability of its fixed point x=0 changes at the bifurcation point (x, y) = (0, √(2/η)), where period-doubling bifurcation occurs. Note that this point is a global minimum with sharpness 2/η.
Interestingly, such striking behaviors can also be observed in more complex models, up to a proper reparameterization, as we outline in the next subsection.
§.§ Our contributions
In this paper, we discover and study the trajectory alignment behavior of (reparameterized) GD dynamics in the EoS regime. To our best knowledge, we are the first to identify such an alignment with a specific bifurcation diagram solely determined by the loss. Our empirical findings are rigorously proven for both two-layer fully-connected networks and single-neuron nonlinear networks. Our main contributions are summarized below:
* In Section <ref>, we introduce a novel canonical reparameterization of training parameters, which incorporates the data, network, and GD step size.
This reparameterization allows us to study the trajectory alignment phenomenon in a unified framework.
Through empirical study, Section <ref> demonstrates that the alignment of GD trajectories on a bifurcation diagram is not limited to toy models but also occurs in wide and deep networks.
Remarkably, these bifurcation diagrams are exclusively determined by the loss and independent of initialization. Furthermore, we find that the alignment trend becomes more pronounced as the network width increases.
* In Section <ref>, we use our canonical reparameterization to establish the trajectory alignment phenomenon for two-layer fully-connected linear networks trained with a single data point. Our theoretical analysis rigorously proves both progressive sharpening and the EoS phenomenon, extending the work of <cit.> to a much broader class of networks and also providing more accurate bounds on the limiting sharpness.
* Our empirical and theoretical analyses up to Section <ref> are applicable to convex Lipschitz losses, hence missing the popular squared loss.
In Section <ref>, we take a step towards handling the squared loss.
Employing an alternative reparameterization, we prove the same set of theorems as Section <ref> for a single-neuron nonlinear network trained with a single data point under squared loss.
§.§ Related works
The Edge of Stability (EoS) phenomenon has been extensively studied in recent years, with many works seeking to provide a deeper understanding of the evolution of sharpness and the oscillating dynamics of GD.
<cit.> first formalize EoS through empirical study, and subsequent works have built on their findings.
<cit.> analyze EoS through experiments and identify the relations between the behavior of loss, iterates, and sharpness.
<cit.> suggest that subquadratic growth of the loss landscape is the key factor of oscillating dynamics.
<cit.> show that (normalized) GD enters the EoS regime, by verifying the convergence to some limiting flow on the manifold of global minimizers.
<cit.> divide GD trajectory into four phases and explain progressive sharpening and EoS by using the norm of output layer weight as an indicator of sharpness.
<cit.> use the third-order Taylor approximation of the loss to theoretically analyze EoS, assuming the existence of progressive sharpening.
<cit.> prove that normalization layers encourage GD to reduce sharpness.
Concurrent to our work, <cit.> study the logistic regression problem with separable dataset and establish that GD exhibits an implicit bias toward the max-margin solution in the EoS regime, extending prior findings in the small step size regime <cit.>.
Some recent works rigorously analyze the full GD dynamics for some toy cases and prove that the limiting sharpness is close to 2/η. <cit.> study the loss (x,y) ↦1/4(x^2y^2-1)^2 and prove that the sharpness converges close to 2/η with a local convergence guarantee. Notably, <cit.> study the function (x,y) ↦ℓ(xy) where ℓ is convex, even, and Lipschitz, and provide a global convergence guarantee. The authors prove that when ℓ is log-cosh loss or square root loss, the limiting sharpness in the EoS regime is between 2/η - 𝒪(η) and 2/η. Our theoretical results extend their results on a single-neuron linear network to two-layer fully-connected linear networks and provide an improved characterization on the limiting sharpness, tightening the gap between upper and lower bounds to only 𝒪(η^3).
The trajectory alignment phenomenon is closely related to <cit.> which shows empirical evidence of bifurcation-like oscillation in deep neural networks trained on real-world data. However, their empirical results do not show the alignment property of GD trajectory. In comparison, we observe that GD trajectories align on the same bifurcation diagram, independent of initialization.
Very recently, <cit.> observe a similar trajectory alignment phenomenon for scalar linear networks, employing a reparameterization based on the sharpness of the gradient flow solution. However, their empirical findings on trajectory alignment are confined to scalar linear networks, and do not provide a theoretical explanation. In contrast, our work employs a novel canonical reparameterization and offers empirical evidence for the alignment phenomenon across a wide range of networks. Moreover, we provide theoretical proofs for two-layer linear networks and single-neuron nonlinear networks.
§ PRELIMINARIES
Notations. For vectors, ‖·‖_p denotes the ℓ_p norm, ⊗ denotes the tensor product, and v^⊗ 2 := v ⊗ v. For matrices, ‖·‖_2 denotes the spectral norm. Given a function ℒ and a parameter Θ, we use λ_max(Θ) := λ_max (∇_Θ^2 ℒ(Θ)) to denote the sharpness (i.e., the maximum eigenvalue of the loss Hessian) at Θ. We use asymptotic notations with subscripts (e.g., 𝒪_ℓ(·), 𝒪_δ, ℓ(·)) in order to hide constants that depend on the parameters or functions written as subscripts.
§.§ Problem settings
We study the optimization of neural network f( · ; Θ) : ^d→ parameterized by Θ. We focus on a simple over-parameterized setting trained on a single data point {(, y)}, where ∈^d and y∈. We consider the problem of minimizing the empirical risk
ℒ(Θ) = ℓ(f(; Θ) - y),
where ℓ is convex, even, and twice-differentiable with ℓ”(0)=1.
We minimize ℒ using GD with step size η: Θ_t+1 = Θ_t - η∇_Θℒ(Θ_t).
The gradient and the Hessian of the function are given by
∇_Θℒ (Θ) = ℓ'(f(; Θ) - y) ∇_Θ f(; Θ),
∇_Θ^2 ℒ (Θ) = ℓ”(f(; Θ) - y) (∇_Θ f(; Θ))^⊗ 2 + ℓ'(f(; Θ) - y) ∇_Θ^2f(; Θ).
Suppose that Θ^* is a global minimum of ℒ, i.e., f(; Θ^*) = y. In this case, the loss Hessian and the sharpness at Θ^* are simply characterized as
∇_Θ^2 ℒ (Θ^*) = (∇_Θ f(; Θ^*))^⊗ 2 , and λ_max(Θ^*) = ∇_Θ f(; Θ^*)_2^2.
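To see this identity in action, the following sketch checks it numerically for a tiny two-layer linear model f(x; Θ) = v^⊤(Wx) under the log-cosh loss; the dimensions, the target value, and the constructed minimizer are all illustrative assumptions.

import torch

torch.manual_seed(0)
d, m = 3, 2
x = torch.randn(d); x /= x.norm()                 # single input with unit norm
y = 0.7                                           # illustrative scalar target

def unpack(theta):                                # flat parameter vector -> (W, v)
    return theta[: m * d].view(m, d), theta[m * d :]

def f(theta):                                     # tiny two-layer linear model v^T (W x)
    W, v = unpack(theta)
    return v @ (W @ x)

def loss(theta):
    return torch.log(torch.cosh(f(theta) - y))    # log-cosh loss, l''(0) = 1

# construct a global minimum: scale v so that f(theta*) = y exactly
W0, v0 = torch.randn(m, d), torch.randn(m)
v0 = v0 * (y / (v0 @ (W0 @ x)))
theta_star = torch.cat([W0.reshape(-1), v0])

grad_f = torch.autograd.functional.jacobian(f, theta_star)    # gradient of f at the minimum
H = torch.autograd.functional.hessian(loss, theta_star)       # loss Hessian at the minimum
print(torch.allclose(H, torch.outer(grad_f, grad_f), atol=1e-5))      # Hessian equals the outer product
print(torch.linalg.eigvalsh(H)[-1].item(), (grad_f @ grad_f).item())  # sharpness equals the squared gradient norm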
§.§ Canonical reparameterization
For a given step size η, the canonical reparameterization of Θ is defined as
(p, q) := ( f(; Θ) - y, 2/(η‖∇_Θ f(; Θ)‖_2^2)).
Under the canonical reparameterization, p=0 represents global minima, and Eq. (<ref>) implies that the point (p,q)=(0,1) is a global minimum with sharpness 2/η. The update of p can be written as
p_t+1 = f(x; Θ_t+1) - y
= f(x; Θ_t - ηℓ'(f(; Θ_t) - y) ∇_Θ f(; Θ_t)) - y
≈ f(x; Θ_t) - ∇_Θ f(; Θ_t)^⊤(ηℓ'(f(; Θ_t) - y) ∇_Θ f(; Θ_t)) - y
= (f(x; Θ_t) - y) - ηℓ'(f(; Θ_t) - y) ∇_Θ f(; Θ)_2^2
= p_t - 2 ℓ'(p_t)/q_t,
which can be obtained by first-order Taylor approximation on f for small step size η.[The approximation is used just to motivate Lemma <ref>; in our theorems, we analyze the exact dynamics.]
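This approximation is easy to check numerically: take one exact GD step on a small network, recompute p, and compare it with p - 2ℓ'(p)/q. The architecture, width, and step size below are illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)
d, m, eta = 10, 64, 0.01
x = torch.zeros(d); x[0] = 1.0          # single data point x = e_1
y = 1.0
model = nn.Sequential(nn.Linear(d, m, bias=False), nn.Tanh(), nn.Linear(m, 1, bias=False))
params = list(model.parameters())

def p_and_q():
    out = model(x).squeeze()
    grad_f = torch.autograd.grad(out, params)           # gradient of the network output
    sq_norm = sum((g ** 2).sum() for g in grad_f)
    return (out - y).detach(), (2.0 / (eta * sq_norm)).detach()

p0, q0 = p_and_q()

# one exact GD step on the log-cosh loss
loss = torch.log(torch.cosh(model(x).squeeze() - y))
grads = torch.autograd.grad(loss, params)
with torch.no_grad():
    for w, g in zip(params, grads):
        w -= eta * g

p1, _ = p_and_q()
p1_approx = p0 - 2 * torch.tanh(p0) / q0                # p - 2 l'(p)/q with l' = tanh
print(float(p1), float(p1_approx))                      # the two values should nearly agree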
§.§ Bifurcation analysis
Motivated by the approximate 1-step update rule given by Eq. (<ref>), we conduct a bifurcation analysis of this one-dimensional map, treating q_t as a control parameter. We first review some basic notions from bifurcation theory <cit.>.
Let z_0 be a fixed point of a differentiable map f:ℝ→ℝ, i.e., f(z_0)=z_0. We say z_0 is a stable fixed point of f if |f'(z_0)|<1, and an unstable fixed point of f if |f'(z_0)|>1.
A point z_0 is called a period-p point of a map f:ℝ→ℝ if z_0 is a fixed point of f^p and f^j(z_0) ≠ z_0 for any 1≤ j ≤ p-1. The orbit of z_0, given by { z_j=f^j(z_0) | j=0,1,…,p-1}, is called the period-p orbit of f. A period-p orbit is stable (unstable) if its elements are stable (unstable) fixed points of f^p, i.e., ∏_j=0^p-1|f'(z_j)| < 1 (> 1).
Now we analyze the bifurcation of the one-parameter family of mappings f_q: ℝ→ℝ given by
f_q(p) := p(1-2r(p)/q),
where q is a control parameter and r is a differentiable function satisfying Assumption <ref> below.
A function r:ℝ→ℝ is even, continuously differentiable, r(0)=1, r'(0)=0, r'(p)<0 for any p>0, and lim_p→∞r(p) = lim_p→ -∞r(p)= 0. In other words, r is a smooth, symmetric bell-shaped function with the maximum value r(0)=1.
We note that Eq. (<ref>) can be rewritten as p_t+1 = f_q_t(p_t) if we define r by r(p) := ℓ'(p)/p for p ≠ 0 and r(0) := 1. Below are examples of ℓ for which the corresponding r's satisfy Assumption <ref>. These loss functions were previously studied by <cit.> to explain EoS for (x,y)↦ℓ(xy).
* log-cosh loss: ℓ_log-cosh(p) := log(cosh(p)). Note ℓ'_log-cosh(p) = tanh(p).
* square-root loss: ℓ_sqrt(p) := √(1 + p^2). Note ℓ'_sqrt(p) = p/√(1+p^2).
If r satisfies Assumption <ref>, then for any 0<q≤ 1, there exists a nonnegative number p such that r(p)=q, and this solution is unique; we denote it by r̂(q). In particular, r̂: (0,1]→ℝ_≥0 is a function satisfying r(r̂(q))=r(-r̂(q))=q for any q∈ (0,1].
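Numerically, r̂ can be evaluated by bisection, since r is strictly decreasing on [0, ∞). A minimal sketch for the log-cosh loss (r(p) = tanh(p)/p) is given below; the bracketing upper bound is an assumption and must be large enough for the given q.

import numpy as np

def r(p):                                   # r(p) = l'(p)/p for the log-cosh loss, r(0) = 1
    return 1.0 if abs(p) < 1e-12 else np.tanh(p) / p

def r_hat(q, hi=100.0, tol=1e-12):
    """Unique p >= 0 with r(p) = q for q in (0, 1]; r is strictly decreasing on [0, inf)."""
    assert 0.0 < q <= 1.0 and r(hi) < q     # bracketing assumption on hi
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if r(mid) > q else (lo, mid)
    return 0.5 * (lo + hi)

print(r_hat(0.9), r(r_hat(0.9)))            # the second value should be (close to) 0.9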
Lemma (period-doubling bifurcation of f_q).
Suppose that r is a function satisfying Assumption <ref>. Let p^* = sup{p≥ 0 |xr'(x)/r(x) > -1 for any x≤ p} and c = r(p^*). If p^*=∞, we choose c=0. Then, the one-parameter family of mappings f_q: → given by Eq. (<ref>) satisfies
* If q>1, p=0 is the stable fixed point.
* If q ∈ (c,1), p=0 is the unstable fixed point and {±r̂(q)} is the stable period-2 orbit.
The map f_q has the unique fixed point p=0 for any q>0. Since f_q'(0) = 1 - 2/q, p=0 is a stable fixed point if q>1 and an unstable fixed point if 0<q<1. Now suppose that q∈ (c,1). Then, we have f_q(r̂(q)) = -r̂(q) and f_q(-r̂(q)) = r̂(q), which implies that {±r̂(q)} is a period-2 orbit of f_q. Then, |f_q'(r̂(q))| = |f_q'(-r̂(q))| = | 1+2r̂(q)r'(r̂(q))/q| < 1 implies that {±r̂(q)} is a stable period-2 orbit.
According to Lemma <ref>, the stability of the fixed point p=0 undergoes a change at q=1, resulting in the emergence of a stable period-2 orbit. The point (p,q)=(0,1) is referred to as the bifurcation point, where a period-doubling bifurcation occurs. A bifurcation diagram illustrates the points asymptotically approached by a system as a function of a control parameter. In the case of the map f_q, the corresponding bifurcation diagram is represented by p=0 for q≥ 1 and p=±r̂(q) (or equivalently, q = r(p)) for q∈ (c,1).
It is worth noting that the period-2 orbit {±r̂(q)} becomes unstable for q∈(0,c). If we choose r to be r(p) = ℓ'(p)/p for p ≠ 0 and r(0)=1, then 1 + pr'(p)/r(p) = ℓ”(p)/r(p) > 0 for all p, assuming ℓ is convex. Consequently, for log-cosh loss and square root loss we have c=0, indicating that the period-2 orbit of f_q remains stable for all q∈ (0,1). However, in Section <ref>, we will consider r with c>0, which may lead to additional bifurcations.
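The diagram can be generated directly from the map: sweep the control parameter q, iterate f_q past a burn-in, and record the visited points. The sketch below does this for the log-cosh loss, i.e., r(p) = tanh(p)/p (for which c = 0, so only the first period-doubling appears); the grid resolution, burn-in length, and starting point are arbitrary choices.

import numpy as np

def r(p):
    return 1.0 if abs(p) < 1e-12 else np.tanh(p) / p

qs = np.linspace(0.05, 1.5, 400)           # control parameter grid
points = []                                # (q, p) pairs approximating the bifurcation diagram
for q in qs:
    p = 0.5                                # arbitrary nonzero starting point
    for _ in range(500):                   # burn-in towards the attractor
        p = p * (1.0 - 2.0 * r(p) / q)
    for _ in range(20):                    # record the asymptotic orbit
        p = p * (1.0 - 2.0 * r(p) / q)
        points.append((q, p))
# For q > 1 the recorded p cluster at 0; for q < 1 they cluster at the period-2 orbit +/- r_hat(q).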
§ TRAJECTORY ALIGNMENT OF GD: AN EMPIRICAL STUDY
In this section, we conduct experimental studies on the trajectory alignment phenomenon in GD dynamics under the canonical reparameterization proposed in Section <ref>.
We consider a fully-connected L-layer neural network f( · ; Θ): ^d → written as
f(x; Θ) = W_L^⊤ϕ(W_L-1ϕ(⋯ϕ(W_2 ϕ(W_1 x))⋯)),
where ϕ is an activation function, W_1∈ℝ^m× d, W_l∈ℝ^m × m for 2≤ l ≤ L-1, and W_L∈ℝ^m. All L layers have the same width m. We minimize the empirical risk ℒ(Θ) = ℓ(f(x; Θ)-y). We visualize GD trajectories under the canonical reparameterization, where each plot shows five different randomly initialized weights drawn from Xavier initialization multiplied by a rescaling factor of α. For this analysis, we fix the training data point and hyperparameters as x = e_1 = (1,0,…,0), y=1, η=0.01, d=10, and focus on the log-cosh loss for ℓ, with either ϕ(t) = t (linear) or ϕ(t) = tanh(t).
We note that the trajectory alignment phenomenon is consistently observed in other settings, including square root loss, different activations (e.g., ELU), and various hyperparameters, in particular for sufficiently wide networks (additional experimental results are provided in Appendix <ref>).
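For concreteness, a sketch of how such a trajectory can be recorded: train on the single example with full-batch GD and log (p_t, q_t) from Definition <ref> at every step via autograd. The width, depth, number of steps, and rescaling factor below are one illustrative configuration, not an exhaustive reproduction of the figures.

import torch
import torch.nn as nn

torch.manual_seed(0)
d, m, L, eta, alpha = 10, 256, 3, 0.01, 2.0
x = torch.zeros(d); x[0] = 1.0              # x = e_1
y = 1.0

layers = [nn.Linear(d, m, bias=False), nn.Tanh()]
for _ in range(L - 2):
    layers += [nn.Linear(m, m, bias=False), nn.Tanh()]
layers += [nn.Linear(m, 1, bias=False)]
model = nn.Sequential(*layers)
for w in model.parameters():                # Xavier initialization rescaled by alpha
    nn.init.xavier_normal_(w, gain=alpha)

params = list(model.parameters())
trajectory = []
for step in range(2000):
    out = model(x).squeeze()
    grad_f = torch.autograd.grad(out, params)                  # gradient of the network output
    q = 2.0 / (eta * sum((g ** 2).sum() for g in grad_f))
    trajectory.append((float(out - y), float(q)))

    loss = torch.log(torch.cosh(model(x).squeeze() - y))       # log-cosh loss
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for w, g in zip(params, grads):
            w -= eta * g
# Scatter-plotting the recorded (p, q) pairs gives trajectories of the kind shown in the figures;
# in the EoS regime they trace out the curve q = l'(p)/p.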
The effect of initialization scale.
In Figures <ref> and <ref>, we examine the effect of the initialization scale α on GD trajectories in a two-layer fully-connected linear network with a width of m=256.
In Figure <ref>, when the weights are initialized with a smaller scale (α=5), the initial value of q is greater than 1, and it converges towards the minimum with only a small change in q_t until convergence. In this case, the limiting sharpness is relatively smaller than 2/η, and the EoS phenomenon does not occur. This case is referred to as the gradient flow regime <cit.>.
On the other hand, in Figure <ref>, when the weights are initialized with a larger scale (α=10), the initial value of q is less than 1, and we observe convergence towards the point (close to) (p,q) = (0,1). This case is referred to as the EoS regime. We note that choosing larger-than-standard scale α is not a necessity for observing EoS; we note that even with α = 1, we observe the EoS regime when η is larger.
Trajectory alignment on the bifurcation diagram. In order to investigate the trajectory alignment phenomenon on the bifurcation diagram, we plot the bifurcation diagram q = ℓ'(p)/p and observe that GD trajectories tend to align with this curve, which depends solely on ℓ. Figure <ref> clearly demonstrates this alignment phenomenon. Additionally, we analyze the evolution of q_t/r(p_t) and p_t in Figure <ref>. We observe that the evolution of q_t/r(p_t) follows two phases. In Phase 1, q_t/r(p_t) approaches to 1 quickly. In Phase 2, the ratio remains close to 1.
Notably, the convergence speed of q_t/r(p_t) towards 1 is much faster than the convergence speed of p_t towards 0. In Sections <ref> and <ref>, we will provide a rigorous analysis of this behavior, focusing on the separation between Phase 1 and Phase 2.
The effect of width and depth. In Figure <ref>, we present the GD trajectories of tanh-activated networks with different widths and depths (α = 5). All three cases belong to the EoS regime, where GD converges to a point close to (p,q)=(0,1), resulting in a limiting sharpness near 2/η. However, when comparing Figures <ref> and <ref>, we observe that the trajectory alignment phenomenon is not observed for the narrower network with m=64, whereas the GD trajectories for the wider network with m=256 are clearly aligned on the bifurcation diagram. This suggests that network width plays a role in the trajectory alignment phenomenon. Furthermore, we note that the trajectory alignment phenomenon is also observed for a deeper network with L=10, as depicted in Figure <ref>.
Multiple training data points. In our trajectory alignment analysis, we have primarily focused on training with a single data point. However, it is important to explore the extension of this phenomenon to scenarios with multiple training data points.
To investigate this, we train a neural network on a dataset {(_i, y_i)}_i=1^n, where _i∈^d and y_i∈, by minimizing the empirical risk ℒ(Θ) 1/n∑_i=1^nℓ(f(_i; Θ) - y_i). Defining ∈^n× d as the data matrix and ∈^n as the label vector, we introduce a generalized canonical reparameterization:
(p, q)( P(f(; Θ) - ) , 2n/η‖∑_i=1^n (∇_Θ f(_i; Θ))^⊗ 2‖_2),
where P: ^n→ can be a function such as a mean value or a specific vector norm.
In Figure <ref>, we consider training on a 50 example subset of CIFAR-10 and vary the network architecture. We use three-layer fully-connected network with tanh activation and convolutional network (CNN) with tanh activation. Under the generalized canonical reparameterization (<ref>) for various choices of P, including the mean and the ℓ_2 norm, we observe the trajectory alignment phenomenon throughout all settings, indicating a common alignment property of the GD trajectories. However, unlike the single data point case, the alignment does not happen on the curve q = ℓ'(p)/p. The precise characterization of the coinciding curve is an interesting direction for future research.
§ TRAJECTORY ALIGNMENT OF GD: A THEORETICAL STUDY
In this section, we study a two-layer fully-connected linear network defined as f(x; Θ) := x^⊤ V u, where V ∈ℝ^d× m, u ∈ℝ^m, and Θ denotes the collection of all parameters (V, u).
We consider training this network with a single data point {(x, 0)}, where x ∈ℝ^d and ‖x‖_2=1. We run GD with step size η on the empirical risk
ℒ(Θ) := ℓ(f(x; Θ) - 0) = ℓ(x^⊤ V u),
where ℓ is a loss function satisfying Assumption <ref>. We note that our assumptions on ℓ are motivated by the single-neuron linear network analysis (d=m=1) of <cit.>.
The loss ℓ is convex, even, 1-Lipschitz, and twice differentiable with ℓ”(0) = 1.
The canonical reparameterization (Definition <ref>) of Θ = (V, u) is given by
(p, q) := ( x^⊤ V u, 2/(η(‖V^⊤ x‖_2^2 + ‖u‖_2^2))).
Under the canonical reparameterization, the 1-step update rule of GD can be written as
p_t+1 = [ 1 - 2r(p_t)/q_t + η^2 p_t^2 r(p_t)^2 ] p_t,
q_t+1 = [ 1 - η^2 p_t^2 r(p_t) ( 2q_t - r(p_t)) ]^-1 q_t ,
where we define the function r by r(p) := ℓ'(p)/p for p ≠ 0 and r(0) := 1. Note that the sequence (q_t)_t=0^∞ is monotonically increasing if q_0 ≥ 1/2, which is the case our analysis will focus on.
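Before turning to the analysis, note that this closed-form recursion can be iterated directly, without instantiating V and u. A minimal sketch for the log-cosh loss is given below; the initial values are arbitrary and merely preview the two regimes analyzed in the following subsections.

import numpy as np

def r(p):                                   # r(p) = l'(p)/p for the log-cosh loss
    return 1.0 if abs(p) < 1e-12 else np.tanh(p) / p

def iterate(p, q, eta, steps=200_000):
    for _ in range(steps):
        rp = r(p)
        p_next = (1.0 - 2.0 * rp / q + eta**2 * p**2 * rp**2) * p
        q_next = q / (1.0 - eta**2 * p**2 * rp * (2.0 * q - rp))
        p, q = p_next, q_next
    return p, q

eta = 0.01
print(iterate(1.0, 1.5, eta))   # q_0 > 1: q barely moves and p -> 0 (gradient flow regime)
print(iterate(1.0, 0.7, eta))   # q_0 < 1: p -> 0 while q increases towards roughly 1 (EoS regime)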
We have an additional assumption on ℓ as below, motivated from Lemma <ref>.
The function r(p) = ℓ'(p)/p corresponding to the loss ℓ satisfies Assumption <ref>.
We now present our theoretical results on this setting, and defer the proofs to Appendix <ref>.
§.§ Gradient flow regime
We first consider the gradient flow regime, where q is initialized with q_0 > 1.
Theorem (gradient flow regime).
Let η∈ (0, 2/33) be a fixed step size and ℓ be a loss function satisfying Assumptions <ref> and <ref>. Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0 ∈(2/2-η, min{1/16η, r(1)/2η}). Consider the GD trajectory characterized in Eq. (<ref>). Then, the GD iterations (p_t, q_t) converge to the point (0, q^*) such that
q_0 ≤ q^* ≤exp ( C η^2) q_0 ≤ 2q_0,
where C = 8 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1 > 0.
Theorem <ref> implies that in the gradient flow regime, GD with initialization Θ_0 = (V_0, u_0) and step size η converges to Θ^* whose sharpness is bounded by
(1 - C η^2) (‖V_0^⊤ x‖_2^2 + ‖u_0‖_2^2) ≤λ_max(Θ^*) ≤‖V_0^⊤ x‖_2^2 + ‖u_0‖_2^2.
Hence, for small step size η, if the initialization satisfies ‖V_0^⊤ x‖_2^2 + ‖u_0‖_2^2 < 2/η-1, then the limiting sharpness is slightly below ‖V_0^⊤ x‖_2^2 + ‖u_0‖_2^2. Note that we assumed the bound p_0≤ 1 for simplicity, but our proof also works under the assumption p_0≤ K for any positive constant K, modulo changes in the numerical constants. Moreover, our assumption on the upper bound of q_0 is 1/η up to a constant factor, which covers most realistic choices of initialization.
§.§ EoS regime
We now provide rigorous results in the EoS regime, where the GD trajectory aligns on the bifurcation diagram q = r(p). To establish these results, we introduce additional assumptions on the loss ℓ.
The function r(z) = ℓ'(z)/z is C^4 on ℝ and satisfies
* z↦r'(z)/r(z)^2 is decreasing on ,
* z↦zr'(z)/r(z) is decreasing on z>0 and increasing on z<0,
* z↦zr(z)/r'(z) is decreasing on z>0 and increasing on z<0.
We note that both the log-cosh loss and the square root loss satisfy Assumptions <ref>, <ref>, and <ref>.
Theorem (EoS regime, Phase 1).
Let η be a small enough step size and ℓ be a loss function satisfying Assumptions <ref>, <ref>, and <ref>. Let z_0 := sup_z{zr'(z)/r(z)≥ -1/2} and c_0 := max{r(z_0), 1/2}. Let δ∈ (0, 1-c_0) be any given constant. Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0∈ (c_0, 1-δ). Consider the reparameterized GD trajectory characterized in Eq. (<ref>). We assume that for all t≥ 0 such that q_t<1, we have p_t ≠ 0. Then, there exists a time step t_a = 𝒪_δ, ℓ(log(η^-1)), such that for any t≥ t_a,
q_t/r(p_t) = 1 + h(p_t)η^2 + 𝒪_δ, ℓ(η^4),
where h(p) := - 1/2( p r(p)^3/r'(p) + p^2 r(p)^2 ) for p ≠ 0 and h(p) := - 1/(2r”(0)) for p = 0.
One can check that for log-cosh and square-root losses, the ranges of h are (0,3/4] and (0,1/2], respectively.
Theorem <ref> implies that in the early phase of training (t≤ t_a = 𝒪(log(η^-1))), the GD iterates (p_t, q_t) approach the bifurcation diagram r(p)=q closely, which we called Phase 1 in Section <ref>. In Phase 2, the GD trajectory aligns on this curve for the remainder of training (t≥ t_a). Theorem <ref> provides an analysis of Phase 2, stated below.
Theorem (EoS regime, Phase 2).
Under the same settings as in Theorem <ref>, there exists a time step t_b = Ω((1-q_0)η^-2) such that q_t_b≤ 1 and q_t > 1 for any t > t_b. Moreover, the GD iterates (p_t, q_t) converge to the point (0, q^*) such that
q^* = 1 - η^2/(2r”(0)) + 𝒪_δ, ℓ(η^4).
Theorem <ref> implies that in the EoS regime, GD with step size η converges to Θ^* with sharpness
λ_max(Θ^*) = 2/η - η/|r”(0)| + 𝒪_δ, ℓ(η^3).
Note that <cit.> study the special case d=m=1 and prove that the limiting sharpness is between 2/η - 𝒪(η) and 2/η. Theorem <ref> provides a tighter analysis of the limiting sharpness in a more general setting, pinning it down up to an 𝒪(η^3) error. Also, our result is the first to prove that the limiting sharpness in the EoS regime is bounded away from 2/η by a nontrivial margin.
We also study the evolution of sharpness along the GD trajectory and prove that progressive sharpening (i.e., sharpness increases) occurs during Phase 2.
Theorem (progressive sharpening).
Under the same setting as in Theorem <ref>, let t_a denote the obtained iteration. Define the function λ̃: ℝ_>0→ℝ given by
λ̃(q) := (1 + r̂(q) r'(r̂(q))/q) 2/η if q≤ 1, and λ̃(q) := 2/η otherwise.
Then, the sequence (λ̃(q_t))_t=0^∞ is monotonically increasing. Moreover, for any t≥ t_a, the sharpness at GD iterate Θ_t closely follows the sequence (λ̃(q_t))_t=0^∞ by satisfying
|λ_max(Θ_t) - λ̃(q_t) |≤ 1 + 𝒪_ℓ(η).
The gap between λ_max(Θ_t) and λ̃(q_t) is bounded by a numerical constant, which becomes negligible compared to 2/η for small η. In Figure <ref>, we perform numerical experiments on a single-neuron case and observe that λ̃(q_t) closely approximates the sharpness.
§ EOS IN SQUARED LOSS: SINGLE-NEURON NONLINEAR NETWORK
Our canonical parameterization has a limitation in explaining the EoS phenomenon under squared loss ℓ(p)=1/2p^2, as the function r(p) = ℓ'(p)/p = 1 does not satisfy Assumption <ref>. However, empirical studies by <cit.> have observed the EoS phenomenon in GD training with squared loss. In this section, we analyze a simple toy model to gain insight into the EoS phenomenon and trajectory alignment of GD under squared loss.
We study the GD dynamics on a two-dimensional function ℒ(x,y) := 1/2(ϕ(x)y)^2, where x, y are scalars and ϕ is a nonlinear activation satisfying Assumption <ref> below.
[sigmoidal activation]
The activation function ϕ: → is odd, increasing, 1-Lipschitz and twice continuously differentiable. Moreover, ϕ(0)=0, ϕ'(0)=1, lim_x→∞ϕ(x) = 1, and lim_x→-∞ϕ(x) = -1.
One good example of ϕ satisfying Assumption <ref> is tanh.
For this section, we use an alternative reparameterization defined as below.
For a given step size η, the (p,q) reparameterization of (x,y)∈ℝ^2 is defined as
(p, q) := ( x, 2/(η y^2)).
Under the reparameterization, the 1-step update rule can be written as
p_t+1 = ( 1 - 2r(p_t)/q_t) p_t,
q_t+1 = (1- ηϕ(p_t)^2)^-2 q_t,
where the function r is given by r(z) := ϕ(z)ϕ'(z)/z for z ≠ 0 and r(0) := 1.
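As in Section <ref>, the reparameterized update can be iterated directly; the sketch below uses ϕ = tanh, and the initial values and iteration count are arbitrary choices.

import numpy as np

def r(z):                                   # r(z) = phi(z) phi'(z) / z for phi = tanh, r(0) = 1
    return 1.0 if abs(z) < 1e-12 else np.tanh(z) * (1.0 - np.tanh(z) ** 2) / z

eta, p, q = 0.01, 0.8, 0.6                  # q_0 < 1: EoS regime
for _ in range(100_000):
    p_next = (1.0 - 2.0 * r(p) / q) * p     # update of p using the current (p, q)
    q = q / (1.0 - eta * np.tanh(p) ** 2) ** 2
    p = p_next
print(p, q)                                 # p ends up near 0 and q near 1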
We can observe a notable resemblance between Eq. (<ref>) and Eq. (<ref>). Indeed, our theoretical findings for the single-neuron nonlinear network closely mirror those of the two-layer linear network discussed in Section <ref>. Due to space constraints, we summarize our theorems in this setting as follows:
Under suitable assumptions on ϕ, the step size, and the initialization, GD trained on the squared loss ℒ(x,y) := 1/2(ϕ(x)y)^2 exhibits the same gradient flow regime, EoS (Phases 1 and 2), and progressive sharpening phenomena as shown in Section <ref>.
In Theorem <ref>, we prove that in the EoS regime, the limiting sharpness is 2/η - 2/|r”(0)| + 𝒪(η). For formal statements of the theorems and the proofs, we refer the reader to Appendix <ref>.
§ CONCLUSION
In this paper, we provide empirical evidence and rigorous analysis to demonstrate the remarkable phenomenon of GD trajectory alignment in the EoS regime. Importantly, we show that different GD trajectories, under the canonical parameterization, align on a bifurcation diagram determined by the loss function and independent of initialization. Our theoretical analysis not only characterizes the behavior of limiting sharpness but also establishes progressive sharpening of GD. Our findings raise intriguing questions for future research: How can we understand the trajectory alignment when trained on multiple data points? How can we theoretically explain the impact of network width on the presence of this phenomenon? How can we extend our analysis to encompass squared loss for general neural network, going beyond the toy single-neuron example?
§.§.§ Acknowledgments
This paper was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)) funded by the Korea government (MSIT), two National Research Foundation of Korea (NRF) grants (No. NRF-2019R1A5A1028324, RS-2023-00211352) funded by the Korea government (MSIT), and a grant funded by Samsung Electronics Co., Ltd.
§ ADDITIONAL EXPERIMENTS
In this section, we present additional empirical evidence demonstrating the phenomenon of trajectory alignment, which supports the findings discussed in Section <ref> of our main paper.
§.§ Experimental Setup
Objective function. We run gradient descent (GD) to minimize the objective function defined as
ℒ(Θ) = ℓ(f(; Θ) - y),
where Θ represents the parameters, ℓ:→ is a loss function, f:^d→ is a neural network, and {(,y)} denotes a single data point with ∈^d and y∈. We also consider training on multiple data points {(_i,y_i)}_i=1^n with _i∈^d and y_i∈ for each 1≤ i≤ n, where we minimize the objective function
ℒ(Θ) 1/n∑_i=1^nℓ(f(_i; Θ) - y_i).
In our experiments, we primarily focus on the log-cosh loss function ℓ_log-cosh(p) = log(cosh(p)), but we also investigate the square root loss ℓ_sqrt(p) = √(1+p^2).
Model architecture.
We train a fully-connected L-layer neural network, denoted as f( · ; Θ): ^d →. The network is defined as follows:
f(x; Θ) = W_L^⊤ϕ(W_L-1ϕ(⋯ϕ(W_2 ϕ(W_1 x))⋯)),
where ϕ:ℝ→ℝ is an activation function applied entry-wise, W_1∈ℝ^m× d, W_l∈ℝ^m × m for 2≤ l ≤ L-1, and W_L∈ℝ^m. All L layers have the same width m, and the biases of all layers are fixed to 0.[While we fix the biases of all layers to 0 to maintain consistency with our theory, the trajectory alignment phenomenon is consistently observed even for neural networks with bias. In Appendix <ref>, we consider training networks with bias.] We consider three activations: hyperbolic tangent ϕ(t) = tanh(t), exponential linear unit ϕ(t) = ELU(t), and linear ϕ(t) = t.
Weight initialization. We perform gradient descent (GD) using five different randomly initialized sets of weights. The weights are initialized using Xavier initialization, and each layer is multiplied by a rescaling factor (gain) of α. In the plots presented throughout this section, we mark the initialization points with an `x' to distinguish them from other points on the trajectories.
Canonical reparameterization.
We plot GD trajectories after applying the canonical reparameterization introduced in Definition <ref>:
(p, q) ( f(; Θ) - y, 2/η∇_Θ f(; Θ)_2^2),
where η denotes the step size. For training on multiple data points, we employ the generalized canonical reparameterization as defined in Eq. (<ref>):
(p, q)( P(f(; Θ) - ) , 2n/η‖∑_i=1^n (∇_Θ f(_i; Θ))^⊗ 2‖_2),
where P: ^n→ can represent the mean value or vector norms. Specifically, we mainly focus on the mean value P() = 1/n∑_i=1^n z_i, but we also examine vector norms such as P() = _1, P() = _2, and P() = _∞, where = (z_1, z_2, …, z_n)∈^n.
For large networks, explicitly calculating the ℓ_2 matrix norm ‖∑_i=1^n (∇_Θ f(_i; Θ))^⊗ 2‖_2 is infeasible. Therefore, we adopt a fast and efficient matrix-free method based on power iteration, as proposed by <cit.>. This method allows us to numerically compute the ℓ_2 norm of large-scale symmetric matrices.
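The following is a sketch of this kind of matrix-free computation (not necessarily the exact routine of the cited work): the largest eigenvalue of ∑_i (∇_Θ f(x_i; Θ))^⊗ 2 is obtained by power iteration using only matrix-vector products v ↦ ∑_i g_i (g_i^⊤ v), so the P × P matrix (P being the number of parameters) is never formed.

import torch

def per_example_grads(model, xs):
    """Flattened gradient of the scalar output f(x_i; Theta) for each example x_i."""
    params = list(model.parameters())
    grads = []
    for x in xs:
        out = model(x).squeeze()
        g = torch.autograd.grad(out, params)
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    return torch.stack(grads)               # shape (n, P)

def top_eig_sum_outer(G, iters=100):
    """Power iteration for the largest eigenvalue of sum_i g_i g_i^T = G^T G."""
    v = torch.randn(G.shape[1]); v /= v.norm()
    lam = torch.tensor(0.0)
    for _ in range(iters):
        w = G.t() @ (G @ v)                 # apply sum_i g_i g_i^T without forming it
        lam, v = w.norm(), w / w.norm()
    return lam

# usage: q = 2 * n / (eta * top_eig_sum_outer(per_example_grads(model, xs)))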
§.§ Training on a Single Data Point
Training data. We conduct experiments on a synthetic single data point (, y), where = _1 = (1,0,…,0)∈^d and y ∈. Throughout the experiments in this subsection, we keep the data dimension fixed at d=10, the data label at y=1, and the step size at η=0.01.
The effect of initialization scale. We investigate the impact of the initialization scale α while keeping other hyperparameters fixed. We vary the initialization scale across {0.5, 1.0, 2.0, 4.0}. Specifically, in Figure <ref>, we train a 3-layer fully connected neural network with tanh activation. We observe that smaller initialization scales (α = 0.5, 1.0) lead to trajectories in the gradient flow regime, while larger initialization scales (α = 2.0, 4.0) result in trajectories in the EoS regime. This behavior is primarily due to the initial value of q being smaller than 1 for larger initialization scales, which causes the trajectory to fall into the EoS regime.
The effect of network width. We investigate the impact of network width on the trajectory alignment phenomenon. While keeping other hyperparameters fixed, we control the width m with values of {64, 128, 256, 512}. In Figure <ref>, we train 3-layer fully connected neural networks with tanh activation and an initialization scale of α=4. Additionally, in Figure <ref>, we examine the same setting but with different depth, training 10-layer fully connected neural networks.
It is commonly observed that the alignment trend becomes more pronounced as the network width increases. In Figures <ref> and <ref>, all trajectories are in the EoS regime. However, narrower networks (m=64, 128) do not exhibit the trajectory alignment phenomenon, while wider networks (m=256, 512) clearly demonstrate this behavior. These results indicate that network width plays a significant role in the trajectory alignment property of GD.
The effect of loss and activation functions.
The trajectory alignment phenomenon is consistently observed in various settings, including those with square root loss and different activation functions. In Figure <ref>, we investigate a 3-layer fully connected neural network with a width of m=256 and an initialization scale of α=2. We explore different activation functions, including tanh, ELU, and linear, and consider both log-cosh loss and square-root loss. Across all these settings, we observe the trajectory alignment phenomenon, where the GD trajectories align on a curve q = ℓ'(p)/p.
§.§ Training on Multiple Synthetic Data Points
Training data.
We consider training on a synthetic dataset consisting of n data points, denoted as (_i, y_i)_i=1^n. The input vectors _i are sampled from a standard Gaussian distribution 𝒩(0, ), where _i∈^d, and the corresponding target values y_i are sampled from a Gaussian distribution 𝒩(0,1), where y_i∈. Throughout our experiments in this subsection, we use a fixed data dimension of d=10 and a step size of η=0.01.
The effect of function P.
To investigate the impact of different choices of the function P, we train a 3-layer fully connected neural network with tanh activation. The network has a width of m=256 and is initialized with a scale of α=4. The training is performed on a dataset consisting of n=10 data points. Figure <ref> displays the trajectories of GD trajectories under the generalized canonical reparameterization defined in Eq. (<ref>) for various choices of the function P. These choices include the mean, ℓ_1 norm, ℓ_2 norm, and ℓ_∞ norm.
We observe that GD trajectories exhibit alignment behavior across different choices of the function P. Notably, when P is selected as the mean, the trajectories align on the curve q = ℓ'(p)/p. However, when P is based on vector norms, the alignment occurs on different curves. The precise characterization of these curves remains as an interesting open question for further exploration.
The effect of the number of data points.
We examine how the size of the training dataset, denoted by n, influences the trajectory alignment behavior of GD. While keeping other hyperparameters constant, we vary n with values {2, 4, 8, 16, 32, 64, 128, 512, 1024}. In Figure <ref>, we train a 3-layer fully connected neural network with tanh activation, a width of m=256, and an initialization scale of α=4. Additionally, in Figure <ref>, we investigate the same setting but with a different activation function, training ELU-activated fully connected neural networks. The GD trajectories are plotted under the generalized canonical reparameterization using the mean function P()=1/n∑_i=1^n z_i.
We observe a consistent trajectory alignment phenomenon across different choices of the number of data points. Interestingly, for small values of n, the trajectories clearly align on the curve q = ℓ'(p)/p. However, as the number of data points n increases, it seems that the trajectories no longer align on this curve but on different, “narrower” curves. Understanding the underlying reasons for this phenomenon poses an intriguing open question.
§.§ Training on Real-World Dataset
Training data. In this subsection, we investigate a binary classification problem using a subset of the CIFAR-10 image classification dataset. Our dataset consists of 50 samples, with 25 samples from class 0 (airplane) and 25 samples from class 1 (automobile). We assign a label of +1 to samples from class 0 and a label of -1 to samples from class 1. This dataset was used in the experimental setup by <cit.>.
Architectures. In Figure <ref>, we examine the training of two types of network architectures: (top row) fully-connected tanh network and (bottom row) convolutional tanh network. The PyTorch code for the fully-connected tanh network is provided as follows:
Similarly, the PyTorch code for the convolutional tanh network is as follows:
Note that we consider networks with bias. We use a step size of η = 0.01 for the fully-connected network and η = 0.001 for the CNN. The default PyTorch initialization <cit.> is applied to all these networks.
In this subsection, we further explore the (reparameterized) GD trajectories of fully-connected networks with different activation functions, network widths, and the choice of function P in (<ref>).
The effect of function P. We investigate the impact of different choices of the function P on the GD trajectories. We train a 3-layer fully connected neural network with ELU activation, a width of m=256, and an initialization scale of α=1. Figure <ref> illustrates the GD trajectories under the generalized canonical reparameterization defined in Eq. (<ref>) for various choices of the function P, including the mean, ℓ_1 norm, ℓ_2 norm, and ℓ_∞ norm.
We observe that the GD trajectories exhibit alignment behavior, which is more pronounced when P is chosen to be the mean or ℓ_1 norm, but less evident for the ℓ_∞ norm. Unlike in Figure <ref>, the trajectories do not align on the curve q = ℓ'(p)/p when P is selected as the mean P()=1/n∑_i=1^n z_i.
The effect of network width. We investigate how the width of the network influences the trajectory alignment phenomenon. We vary the width m using values from {64, 128, 256, 512} while keeping other hyperparameters constant. In Figure <ref>, we train 3-layer fully connected neural networks with tanh activation and an initialization scale of α=1. Similarly, in Figure <ref>, we conduct experiments using the same configuration but with ELU activation, training ELU-activated fully connected neural networks.
Consistent with the observations from Figures <ref> and <ref> in the single data point setting, we commonly find that as the network width increases, the alignment trend becomes more pronounced. In both Figure <ref> and Figure <ref>, all trajectories fall within the EoS regime. However, narrower networks (m=64) show less evidence of the trajectory alignment phenomenon, while wider networks (m=256, 512) clearly demonstrate this behavior. These findings emphasize the significant impact of network width on the trajectory alignment property of GD.
The effect of data label.
In our previous experiments, we assigned labels of +1 and -1 to the dataset. However, in this particular experiment, we investigate the training process on a dataset with zero labels. This means that all samples in the dataset are labeled as zero (y_i = 0 for all 1 ≤ i ≤ n). Figure <ref> visualizes the training of 3-layer fully connected neural networks with tanh activation and an initialization scale of α=1. The network widths m are varied from {256, 512, 1024}. Interestingly, the GD trajectories align with the curve q = ℓ'(p)/p, in contrast to our observations in Figures <ref> and <ref>. These results suggest that the data label distribution also influences the alignment curve of GD trajectories. As a future research direction, it would be intriguing to investigate why setting the labels as zero leads to alignment towards the curve q = ℓ'(p)/p, which aligns with our theoretical findings in the single data point setting.
§ PROOFS FOR THE TWO-LAYER FULLY-CONNECTED LINEAR NETWORK
§.§ Proof of Theorem <ref>
We give the proof of Theorem <ref>, restated below for the sake of readability.
*
We note that the interval (2/2-η, min{1/16η, r(1)/2η}) may be empty, depending on the value of r(1); however, this does not affect the correctness of the theorem.
By Proposition <ref>, p_t converges to 0 as t→∞, and for all t≥ 0, we have
q_0 ≤ q_t ≤exp(Cη^2) q_0.
Since the sequence (q_t)_t=0^∞ is monotonically increasing and bounded, it converges. Suppose that q_t→ q^* as t→∞. Then, we can obtain the inequality
q_0 ≤ q^* ≤exp(Cη^2) q_0,
as desired.
Suppose that η∈ (0, 2/33), p_0≤ 1 , and q_0 ∈(2/2-η, min{1/16η, r(1)/2η}). Then for any t≥ 0, we have
p_t≤[ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}]^t ≤ 1,
and
q_0 ≤ q_t ≤exp( 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1) q_0 ≤ 2 q_0.
We give the proof by induction; namely, if
p_t≤[ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}]^t,
q_0 ≤ q_t ≤exp( 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1) q_0 ≤ 2 q_0
are satisfied for time steps 0≤ t≤ k for some k, then the inequalities are also satisfied for the next time step k+1.
For the base case, the inequalities are satisfied for t=0 by assumptions. For the induction step, we assume that the inequalities hold for any 0≤ t≤ k. We will prove that the inequalities are also satisfied for t=k+1.
By induction assumptions, q_0≤ q_k≤ 2q_0 and p_k≤ 1, so that we have
1 - 2/q_0≤p_k+1/p_k = 1 - 2 r(p_k)/q_k + η^2 p_k^2r(p_k)^2 ≤ 1 - r(1)/q_0 + η^2,
where we used r(1) ≤ r(p_k) ≤ r(0) = 1. Since q_0 ≤r(1)/2η≤r(1)/2η^2, we have
|p_k+1/p_k| ≤max{2/q_0-1, 1 - r(1)/q_0 + η^2 }
= 1 - min{2(q_0-1)/q_0, r(1)/q_0 - η^2 }
≤ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}.
This implies that
p_k+1≤( 1 - min{2(q_0-1)/q_0, r(1)/2q_0}) p_k≤[ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}]^k+1,
which is the desired bound for p_k+1.
For any t≤ k, we also have
1 - 4η^2 p_t^2 q_0 ≤q_t/q_t+1 = 1 - η^2 p_t^2 r(p_t) (2q_t-r(p_t)) ≤ 1,
where we used the induction assumptions to deduce 4 q_0 ≥ 2q_t - r(p_t) ≥ 2q_0 - 1 ≥4/2-η-1 > 0. This gives q_k+1≥ q_k ≥ q_0.
Furthermore, note that 4η^2 p_t^2 q_0 ≤ 4 η^2 ·1/16η≤1/4, so the ratio q_t/q_t+1∈ [3/4,1]. From this, we have
|log( q_0/q_k+1) |≤∑_t=0^k|log(q_t/q_t+1) | ≤ 2 ∑_t=0^k|q_t/q_t+1 - 1 |
≤ 8η^2 q_0 ∑_t=0^k p_t^2
≤ 8η^2 q_0 ∑_t=0^k[ 1 - min{2(q_0-1)/q_0, r(1)/2q_0}]^2t
≤ 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1,
where the second inequality holds since log (1 + z)≤ 2 z if z≤1/2. Moreover, this implies that
q_k+1≤exp( 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1) q_0.
Since q_0 ∈(2/2-η, r(1)/2η), we have
min{2(q_0-1)/q_0, r(1)/2q_0}≥η.
Therefore, since q_0 ≤1/16η, we can conclude that
q_0 ≤ q_k+1≤exp( 8 η^2 q_0 [min{2(q_0-1)/q_0, r(1)/2q_0}]^-1) q_0 ≤exp(8η q_0)q_0 ≤ 2 q_0,
the desired bounds for q_k+1.
§.§ Proof of Theorem <ref>
In this subsection, we prove Theorem <ref>. From here onwards, we use the following notation:
s_tq_t/r(p_t).
All the lemmas in this subsection are stated in the context of Theorem <ref>.
Suppose that the initialization (p_0, q_0) satisfies p_0≤ 1 and q_0∈ (c_0, 1-δ). Then for any t≥ 0 such that q_t ≤ 1, it holds that
p_t≤ 4, and q_t≤ q_t+1≤ (1 + 𝒪(η^2)) q_t.
We prove by induction. We assume that for some t≥ 0, it holds that p_t≤ 4 and 1/2≤ q_t ≤ 1. We will prove that p_t+1≤ 4 and 1/2≤ q_t≤ q_t+1≤ (1 + 𝒪(η^2)) q_t. For the base case, p_0≤ 1 ≤ 4 and 1/2≤ c_0 < q_t ≤ 1 holds by the assumptions on the initialization. Now suppose that for some t≥ 0, it holds that p_t≤ 4 and 1/2≤ q_t ≤ 1. Then for small step size η, we have
|2ℓ'(p_t)/q_t|≥ 2ℓ'(p_t)≥1/2ℓ'(p_t)^2p_t≥η^2 ℓ'(p_t)^2p_t.
Consequently, by Eq. (<ref>),
p_t+1 = | (1 + η^2 ℓ'(p_t)^2)p_t - 2ℓ'(p_t)/q_t|≤max{p_t, 2/q_t}≤ 4.
where we used 1-Lipshitzness of ℓ. Moreover,
1 - 8η^2 ≤ 1 - 2p_tη^2 ≤q_t/q_t+1 = 1 - η^2 p_t^2 r(p_t) (2q_t - r(p_t))≤ 1,
where we used q_t∈ [1/2, 1] and |p_t r(p_t)| = |ℓ'(p_t)| ≤ 1 from 1-Lipschitzness of ℓ. Hence, q_t≤ q_t+1≤ (1 + 𝒪(η^2)) q_t, as desired.
Lemma <ref> implies that p_t is bounded by a constant throughout the iterations, and q_t monotonically increases slowly, where the increment for each step is 𝒪(η^2). Hence, there exists a time step T=Ω(δη^-2) = Ω_δ(η^-2) such that for any t≤ T, it holds that q_t ≤ 1 - δ/2. Throughout this subsection, we focus on these T early time steps. Note that for all 0≤ t ≤ T, it holds that q_t ∈ (c_0, 1-δ/2).
Intuition on Theorem <ref>. Before diving into the rigorous proofs, we provide an intuition for Theorem <ref>. Lemma <ref> establishes that p_t is bounded and q_t monotonically increases slowly, with an increment of 𝒪(η^2) per step. Lemma <ref> shows that the map f_q_t(p) = (1 - 2r(p)/q_t)p has a stable period-2 orbit {±r̂(q_t)} when q_t ∈ (0,1). Consequently, when q_t is treated as a fixed value, p_t converges to the orbit {±r̂(q_t)}, leading to s_t converging to 1. In the (early) short-term dynamics, q_t is nearly fixed for small step size η, and hence s_t converges to 1. From a long-term perspective, q_t gradually increases and, at the same time, s_t stays near the value 1. In Theorem <ref>, we prove that it takes only t_a=𝒪_δ, ℓ(log(η^-1)) time steps for s_t to converge close to 1 (Phase 1, t≤ t_a), and after that, s_t stays close to 1 for the remaining iterations (Phase 2, t > t_a).
We informally summarize the lemmas used in the proof of Theorem <ref>. Lemma <ref> states that in the early phase of training, there exists a time step t_0 where s_t_0 becomes smaller than or equal to 2/2-r(1), which is smaller than 2(1+η^2)^-1. Lemma <ref> demonstrates that if s_t is smaller than 2(1+η^2)^-1 and |p_t|≥r̂(1-δ/4), then |s_t-1| decreases exponentially. For the case where |p_t| < r̂(1-δ/4), Lemma <ref> proves that |p_t| increases at an exponential rate. Moreover, Lemma <ref> shows that if s_t < 1 at some time step, then s_t+1 is upper bounded by 1+𝒪(η^2). Combining these findings, Proposition <ref> establishes that in the early phase of training, there exists a time step t_a^* such that s_t_a^* = 1 + 𝒪_δ, ℓ(η^2). Lastly, Lemma <ref> demonstrates that if s_t = 1 + 𝒪_δ, ℓ(η^2), then |s_t - 1 - h(p_t) η^2| decreases exponentially.
Now we prove Theorem <ref>, starting with the lemma below.
There exists a time step t_0 = 𝒪_δ, ℓ(1) such that s_t_0≤2/2-r(1).
We start by proving the following statement: for any 0≤ t≤ T, if 2/2-r(1) < s_t < 2r(1)^-1, then s_t+1 < 2r(1)^-1 and p_t+1≤ (1-r(1)/2)p_t. Suppose that 2/2-r(1) < s_t < 2r(1)^-1. Then from Eq. (<ref>), it holds that
|p_t+1/p_t| = | 1 - 2/s_t + η^2p_t^2r(p_t)^2|≤| 1 - 2/s_t| + η^2 ≤ 1-r(1) + η^2 ≤ 1 - r(1)/2,
for small step size η. Hence, p_t+1≤ (1-r(1)/2)p_t. Now we prove s_t+1< 2r(1)^-1. Assume the contrary that s_t+1≥ 2r(1)^-1. Then, r(p_t+1) = q_t+1/s_t+1 < q_t+1 < 1-δ/2 so that p_t+1≥r̂(1-δ/2). By Mean Value Theorem, there exists p_t^* ∈ (p_t+1, p_t) such that (recall that r'(p)/r(p)^2 < 0 for p > 0)
1/r(p_t+1) = 1/r(p_t - (p_t-p_t+1)) = 1/r(p_t) + r'(p_t^*)/r(p_t^*)^2 (p_t-p_t+1)
≤1/r(p_t) + r'(p_t+1)/r(p_t+1)^2(r(1)p_t/2)
≤1/r(p_t) - r'(r̂(1-δ/2))/(1-δ/2)^2(r(1)r̂(1-δ/2)/2)
= 1/r(p_t) - Ω_δ,ℓ(1),
where we used Assumption <ref> <ref> and r̂(1-δ/2) ≤p_t+1≤ (1-r(1)/2)p_t≤p_t. Consequently,
s_t+1 = q_t+1/r(p_t+1) = (1+𝒪(η^2))q_t (1/r(p_t) - Ω_δ,ℓ(1)) ≤q_t/r(p_t) = s_t < 2r(1)^-1,
for small step size η. This gives a contradiction to our assumption that s_t+1≥ 2r(1)^-1. Hence, we can conclude that s_t+1 < 2r(1)^-1, as desired.
We proved that for any 0≤ t≤ T, if 2/2-r(1) < s_t < 2r(1)^-1, it holds that s_t+1 < 2r(1)^-1 and p_t+1≤ (1-r(1)/2)p_t. At initialization, p_0≤ 1 and q_0 < 1, so that s_0 < r(1)^-1. If s_0 ≤2/2-r(1), then t_0=0 is the desired time step. Suppose that s_0 > 2/2-r(1). Then, we have s_1 < 2r(1)^-1 and p_1≤ (1-r(1)/2)p_0≤ 1-r(1)/2. Then we have either s_1 ≤2/2-r(1), or 2/2-r(1) < s_1 < 2r(1)^-1. In the previous case, t_0=1 is the desired time step. In the latter case, we can repeat the same argument and obtain s_2 < 2r(1)^-1 and p_2≤ (1-r(1)/2)^2.
By inductively repeating the same argument, we can obtain a time step t_0≤log(r̂(1-δ/2)) / log(1-r(1)/2) = 𝒪_δ, ℓ(1) such that either s_t_0≤2/2-r(1), or p_t_0≤r̂(1-δ/2). In the latter case, r(p_t_0) ≥ 1-δ/2 > q_t_0, and hence s_t_0 < 1 < 2/2-r(1). Therefore, t_0 = 𝒪_δ, ℓ(1) is the desired time step satisfying s_t_0≤2/2-r(1).
According to Lemma <ref>, there exists a time step t_0=𝒪_δ,ℓ(1) such that s_t_0≤2/2-r(1) < 2(1+η^2)^-1 for small step size η. Now we prove the lemma below.
Suppose that s_t≤ 1. Then, it holds that s_t+1≤ 1 + 𝒪(η^2).
For any p∈ (0, r̂(q_t/2)), we have r(p)≥q_t/2 so that f_q_t(p) = (-1+2r(p)/q_t)p. Hence,
∂/∂ pf_q_t(p) = 2 r(p)/q_t(1 + p r'(p)/r(p)) - 1,
for any p∈ (0, r̂(q_t/2)). By Assumption <ref> <ref> and convexity of ℓ, both r(p) and 1+pr'(p)/r(p) = ℓ”(p)/r(p) are positive, decreasing function on (0, r̂(q_t/2)). Consequently, ∂/∂ pf_q_t(p) is a decreasing function on (0, r̂(q_t/2)).
Now note that q_t/2 < q_t < 1, which means r̂(1) = 0 < r̂(q_t) < r̂(q_t/2) by the definition of r̂.
Note that ∂/∂ pf_q_t(p) at p = r̂(q_t) evaluates to
∂/∂ pf_q_t(r̂(q_t)) = 1 + 2 r̂(q_t) r'(r̂(q_t))/r(r̂(q_t))≥ 1 + 2 r̂(c_0)r'(r̂(c_0))/r(r̂(c_0))≥ 0,
where the first inequality used Assumption <ref> <ref> and r̂(q_t) < r̂(c_0), which comes from q_t > c_0 max{r(z_0), 1/2}. The second inequality holds because q_t > c_0 ≥ r(z_0) where z_0 := sup_z{zr'(z)/r(z)≥ -1/2}, from the statement of Theorem <ref>.
Therefore, since ∂/∂ pf_q_t(p) is decreasing on (0, r̂(q_t/2)) and is nonnegative at r̂(q_t), for any p∈ (0, r̂(q_t)), it holds that ∂/∂ pf_q_t(p)≥ 0. In other words, f_q_t(p) is an increasing function on (0, r̂(q_t)). Since 0≤ s_t≤ 1, we have p_t≤r̂(q_t) and it holds that
p_t+1 = (-1 + 2/s_t - η^2 p_t^2 r(p_t)^2 ) p_t≤(-1 + 2/s_t) p_t = f_q_t(p_t)≤f_q_t(r̂(q_t)) = r̂(q_t).
Therefore, with this inequality and Lemma <ref>, we can conclude that
s_t+1 = q_t+1/r(p_t+1) = q_t/r(p_t+1) (1 + 𝒪(η^2)) ≤q_t/r(r̂(q_t)) (1 + 𝒪(η^2)) = 1 + 𝒪(η^2).
Using Lemma <ref>, we prove the following lemma.
For any 0≤ t≤ T, if s_t < 2(1+η^2)^-1, |s_t-1| > η^2/2, and r(p_t)≤ 1 - δ/4, then
|s_t+1-1|≤ (1-d)|s_t-1| + 𝒪(η^2),
where d∈ (0,1/2] is a constant which depends on δ and ℓ.
By Eq. (<ref>) and 1-Lipschitzness of ℓ,
p_t+1/p_t = 1 - 2/s_t + η^2 p_t^2 r(p_t)^2 < 1 - (1+η^2) + η^2 = 0,
so that p_t and p_t+1 have opposite signs. By Mean Value Theorem, there exists θ_t between -1 and (1-2/s_t+η^2p_t^2r(p_t)^2) satisfying
1/r(p_t+1) = 1/r(-p_t + (2(s_t-1)/s_t+η^2 p_t^2 r(p_t^2))p_t)
= 1/r(-p_t) - r'(θ_t p_t)/r(θ_t p_t)^2(2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t
= 1/r(p_t) - r'(θ_t p_t)/r(θ_t p_t)^2(2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t,
where the last equality used the fact that p_t and θ_t p_t have opposite signs and r'(z) and z have opposite signs.
Note that θ_t p_t is between p_t and p_t+1. Consequently, the value r'(θ_t p_t)/r(θ_t p_t)^2 is between r'(p_t)/r(p_t)^2 and r'(p_t+1)/r(p_t+1)^2 by Assumption <ref> <ref>. We will prove the current lemma based on Eq. (<ref>). We divide into following three cases: (1) s_t≥ 1 and s_t+1≥ 1, (2) s_t≥ 1 and s_t < 1, and (3) s_t < 1.
Case 1. Suppose that s_t≥ 1 and s_t+1≥ 1. Here, we have p_t≥r̂(q_t)≥r̂(1-δ/2) and similarly p_t+1≥r̂(1-δ/2). By Assumption <ref> <ref>, r'(θ_t p_t)/r(θ_t p_t)^2≥r'(r̂(1-δ/2))/(1-δ/2)^2. Hence, Eq. (<ref>) gives
1/r(p_t+1)≤1/r(p_t) - r'(r̂(1-δ/2))/(1-δ/2)^2(2(s_t-1)/s_t) r̂(1-δ/2).
Consequently, by Lemma <ref>,
s_t+1 = q_t (1 + 𝒪(η^2))/r(p_t+1) = q_t/r(p_t+1) + 𝒪(η^2)
≤ s_t - r'(r̂(1-δ/2))/(1-δ/2)^2(2(s_t-1)/s_t) r̂(1-δ/2) q_t + 𝒪(η^2)
≤ s_t - r'(r̂(1-δ/2))/(1-δ/2)^2 (s_t-1) r̂(1-δ/2) 1/2 + 𝒪(η^2)
≤ s_t - r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2 (s_t-1) + 𝒪(η^2),
where we used q_t > c_0 ≥1/2 and s_t < 2(1+η^2)^-1 < 2. Therefore, we can obtain the following inequality:
0 ≤ s_t+1-1 ≤( 1 - r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2) (s_t-1) + 𝒪(η^2).
Case 2. Suppose that s_t≥ 1 and s_t+1< 1. Here, we have r(p_t+1) > q_t+1≥ q_t ≥ r(p_t), so that p_t+1 < p_t. Consequently, r'(θ_t p_t)/r(θ_t p_t)^2≤r'(p_t)/r(p_t)^2 by Assumption <ref> <ref>. Hence, we can deduce from Eq. (<ref>) that
1/r(p_t+1) ≥1/r(p_t) - r'(p_t)/r(p_t)^2( 2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2 ) p_t
= 1/r(p_t) - 2p_tr'(p_t)/r(p_t)q_t (s_t-1) - η^2 p_t^3 r'(p_t)
≥1/r(p_t) - 2p_tr'(p_t)/r(p_t)q_t (s_t-1) - η^2 p_t^2 r(p_t)
= 1/r(p_t) + 2p_tr'(p_t)/r(p_t)q_t (s_t-1) - 𝒪(η^2),
where we used p_tr'(p_t)≤ r(p_t) since 1+p_tr'(p_t)/r(p_t) = ℓ”(p_t)/r(p_t) > 0 and p_t≤ 4 by Lemma <ref>. Consequently, by Lemma <ref> (q_t ≤ q_t+1) and Assumption <ref> <ref>,
s_t+1≥q_t/r(p_t+1)≥ s_t + 2p_tr'(p_t)/r(p_t) (s_t-1) - 𝒪(η^2) ≥ s_t + 8r'(4)/r(4) (s_t-1) - 𝒪(η^2).
Note that 1>1 + 4r'(4)/r(4) = ℓ”(4)/r(4) > 0 holds by convexity of ℓ. Therefore, we can obtain the following inequality:
0 ≤ 1 - s_t+1≤ - (1 + 8r'(4)/r(4)) (s_t-1) + 𝒪(η^2),
where -1<1+8r'(4)/r(4) < 1.
Case 3. Suppose that s_t < 1. By Lemma <ref>, it holds that s_t+1≤ 1+𝒪(η^2). Moreover, we assumed r(p_t) ≤ 1 - δ/4, so that p_t≥r̂(1-δ/4). We also have
p_t+1 = (-1+2/s_t-η^2p_t^2r(p_t^2))p_t≥(-1 + 2/1-η^2/2 - η^2) p_t > p_t≥r̂(1-δ/4),
where we used the assumption s_t-1 > η^2/2, and p r(p) = ℓ'(p)≤ 1 due to 1-Lipschitzness of ℓ. Consequently, by Assumption <ref> <ref>, it holds that r'(θ_t p_t)/r(θ_t p_t)^2≥r'(r̂(1 - δ/4))/(1 - δ/4)^2. Hence, by Eq. (<ref>), we have
1/r(p_t+1) ≥1/r(p_t) + r'(r̂(1 - δ/4))/(1 - δ/4)^2(2(1-s_t)/s_t) r̂(1-δ/4)
≥1/r(p_t) + r'(r̂(1 - δ/4))/(1 - δ/4)^2 2(1-s_t) r̂(1-δ/4)
= 1/r(p_t) + 2 r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2 (1-s_t),
and hence, by Lemma <ref> (q_t ≤ q_t+1) and q_t > c_0 ≥1/2, we get
s_t+1≥q_t/r(p_t+1)≥ s_t + r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2 (1-s_t).
Therefore, we can obtain the following inequality:
-𝒪(η^2) ≤ 1-s_t+1≤(1 - r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2) (1-s_t),
where we used Lemma <ref> to obtain the first inequality.
Combining the three cases, we can finally conclude that if we choose
d := min{1/2, r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2, 2 (1 + 4r'(4)/r(4)), r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2}∈(0,1/2],
then s_t+1-1≤ (1-d)s_t-1 + 𝒪(η^2).
Lemma <ref> implies that if s_t<2(1+η^2)^-1 and p_t≥r̂(1-δ/4), then s_t-1 exponentially decreases. We prove Lemma <ref> to handle the regime p_t < r̂(1-δ/4), which is stated below.
For any 0≤ t ≤ T, if r(p_t)≥ 1 - δ/4, it holds that
|p_t+1/p_t|≥4/4-δ.
If r(p_t)≥ 1 - δ/4, then s_t = q_t/r(p_t) < 1-δ/2/1-δ/4 = 4-2δ/4-δ, where we used q_t < 1-δ/2 for any 0≤ t≤ T. Consequently,
|p_t+1/p_t| = 2/s_t - 1 - η^2 p_t^2 r(p_t)^2 ≥2(4-δ)/4-2δ - 1 - η^2 = 2/2-δ - η^2 ≥4/4-δ,
for small step size η.
Now we prove Proposition <ref>, which proves that s_t reaches close to 1 with error bound of 𝒪(η^2).
There exists a time step t_a^* = 𝒪_δ,ℓ(log(η^-1)) satisfying
s_t_a^* = 1 + 𝒪_δ,ℓ(η^2).
By Lemma <ref>, there exists a time step t_0 = 𝒪_δ,ℓ(1) such that s_t_0≤2/2-r(1). Here, we divide into two possible cases: (1) s_t_0<1, and (2) 1≤ s_t_0≤2/2-r(1).
Case 1. Suppose that s_t_0<1. By Lemma <ref>, if r(p_t_0)≥ 1-δ/4 (or equivalently, p_t_0≤r̂(1-δ/4)), then there exists a time step t_1 ≤ t_0 + log(r̂(1-δ/4)/p_t_0) / log(4/4-δ) = 𝒪_δ, ℓ (1) such that p_t_1≥r̂(1-δ/4). We denote the first time step satisfying p_t_1≥r̂(1-δ/4) and t_1≥ t_0 by t_1 = 𝒪_δ, ℓ(1). By Lemma <ref>, it holds that s_t_1≤ 1+𝒪(η^2) since s_t_1-1<1. Consequently, if s_t_1≥ 1-η^2/2, then s_t_1-1≤𝒪(η^2) so that t_a^* = t_1 is the desired time step. Hence, it suffices to consider the case when s_t_1<1-η^2/2. Here, we can apply Lemma <ref> which implies that
s_t_1+1-1≤ (1-d) s_t_1-1 + 𝒪(η^2),
where d is a constant which depends on δ and ℓ. Then, there are two possible cases: either s_t_1-1≤𝒪(η^2 d^-1), or s_t_1+1-1≤ (1-d/2)s_t_1-1. It suffices to consider the latter case, suppose that s_t_1+1-1≤ (1-d/2)s_t_1-1. Since we are considering the case s_t_1<1-η^2/2, again by Lemma <ref>, we have s_t_1+1≤ 1 + 𝒪(η^2). Since p_t_1+1/p_t_1 = 2/s_t_1-1-𝒪(η^2), p_t_1+1≥p_t_1≥r̂(1-δ/4) must be satisfied unless s_t_1 = 1 + 𝒪(η^2) already holds. If s_t_1+1≥ 1-η^2/2, then s_t_1+1-1≤𝒪(η^2) so that t_a^* = t_1+1 is the desired time step; if not, we can again apply Lemma <ref> and repeat the analogous argument. Hence, there exists a time step t_2 ≤ t_1 + log(η^2/1-s_t_1) / log (1-d/2) = 𝒪_δ, ℓ(log(η^-1)), such that s_t_2-1≤𝒪(η^2d^-1) = 𝒪_δ,ℓ(η^2).
Case 2. Suppose that 1≤ s_t_0≤2/2-r(1). Then, r(p_t_0) ≤ q_t_0≤ 1-δ/2, so we can apply Lemma <ref>. There are two possible cases: either s_t_0+1-1≤𝒪(η^2d^-1) = 𝒪_δ,ℓ(η^2), or s_t_0+1-1≤ (1-d/2)s_t_0-1. It suffices to consider the latter case. If s_t_0+1≥ 1, we can again apply Lemma <ref> and repeat the analogous argument. Hence, we can obtain a time step t_0' ≤ t_0 + log(η^2/1-s_t_0) / log(1-d/2) = 𝒪_δ,ℓ(log(η^-1)) such that either s_t_0' < 1 or s_t_0'-1 = 𝒪_δ,ℓ (η^2) is satisfied. If s_t_0' < 1, we proved in Case 1 that there exists a time step t_2' = t_0'+𝒪_δ,ℓ(log(η^-1)) such that s_t_2'-1≤𝒪_δ,ℓ(η^2), and this is the desired bound.
Now we carefully handle the error term 𝒪(η^2) obtained in Proposition <ref> and provide a tighter bound on s_t by proving Lemma <ref> stated below.
If |s_t-1| = 𝒪_δ, ℓ(η^2), then it holds that
|s_t+1 - 1 - h(p_t+1) η^2|≤( 1 + 2p_t r'(p_t)/r(p_t)) |s_t-1-h(p_t)η^2| + 𝒪_δ, ℓ(η^4 p_t^2),
where h(p) := - 1/2( p r(p)^3/r'(p) + p^2 r(p)^2 ) for p ≠ 0 and h(p) := - 1/(2r”(0)) for p = 0.
Suppose that s_t = 1+ 𝒪_δ, ℓ(η^2). Then, p_t+1 = | 1 - 2/s_t + η^2 p_t^2 r(p_t)^2 |·p_t= (1 + 𝒪_δ, ℓ(η^2)) p_t. By Eq. (<ref>) proved in Lemma <ref>, there exists ϵ_t = 𝒪_δ, ℓ(η^2) which satisfies the following:
1/r(p_t+1) = 1/r(p_t) + r'((1+ϵ_t)p_t)/r((1+ϵ_t)p_t)^2(2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t
= 1/r(p_t) + (r'(p_t)/r(p_t)^2 + 𝒪_δ, ℓ(η^2p_t)) (2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t
= 1/r(p_t) + r'(p_t)/r(p_t)^2(2(s_t-1)/s_t + η^2 p_t^2 r(p_t)^2) p_t + 𝒪_δ, ℓ(η^4 p_t^2),
where we used the Taylor expansion on r'(p)/r(p)^2 with the fact that d/dp(r'(p)/r(p)^2) is bounded on [-4, 4] and that p_t≤ 4 to obtain the second equality.
Note that q_t+1 = (1-η^2 p_t^2 r(p_t) (2q_t-r(p_t)))^-1q_t by Eq. (<ref>). Consequently,
s_t+1 = (1 - η^2 p_t^2 r(p_t)(2q_t - r(p_t)))^-1(s_t + 2p_t r'(p_t)/r(p_t)(s_t-1) + η^2 p_t^3 r'(p_t) q_t ) + 𝒪_δ, ℓ(η^4 p_t^2)
= (1 + η^2 p_t^2 r(p_t)(2q_t - r(p_t))) s_t + 2p_t r'(p_t)/r(p_t)(s_t-1) + η^2 p_t^3 r'(p_t) q_t + 𝒪_δ, ℓ(η^4 p_t^2)
= 1 + (1 + 2p_tr'(p_t)/r(p_t)) (s_t-1) + η^2 p_t^2 r(p_t)(2q_t - r(p_t))s_t + η^2 p_t^3 r'(p_t) q_t + 𝒪_δ, ℓ(η^4p_t^2).
Here, since s_t = 1 + 𝒪_δ, ℓ(η^2), we can rewrite
η^2 p_t^2 r(p_t)(2q_t - r(p_t))s_t + η^2 p_t^3 r'(p_t) q_t
= η^2 p_t^2 r(p_t)^2 (2s_t - 1)s_t + η^2 p_t^3 r'(p_t) r(p_t) s_t
= η^2 p_t^2 r(p_t)^2 + η^2 p_t^3 r'(p_t) r(p_t) + 𝒪_δ, ℓ(η^4 p_t^2),
which results in
s_t+1 = 1 + (1 + 2p_tr'(p_t)/r(p_t)) (s_t-1) + η^2 p_t^2 r(p_t)^2 + η^2 p_t^3 r(p_t) r'(p_t) + 𝒪_δ, ℓ(η^4p_t^2).
Note that h is an even, twice continuously differentiable function by Lemma <ref>. Consequently, h'(0) = 0 and h'(p) = 𝒪_ℓ(p), since h” is bounded on a closed interval. Consequently, h(p_t+1) = h((1+𝒪_δ, ℓ(η^2))p_t) = h(p_t) + 𝒪_δ, ℓ(η^2 p_t^2).
Hence, we can obtain the following:
s_t+1 - 1 - h(p_t+1) η^2 = s_t+1 - 1 -h(p_t) η^2 + 𝒪_δ, ℓ(η^4 p_t^2)
= s_t+1 - 1 + 1/2(p_t r(p_t)^3/r'(p_t) + p_t^2 r(p_t)^2) η^2 + 𝒪_δ, ℓ(η^4 p_t^2)
= (1 + 2p_t r'(p_t)/r(p_t)) (s_t - 1 + 1/2(p_t r(p_t)^3/r'(p_t) + p_t^2 r(p_t)^2)η^2) + 𝒪_δ, ℓ(η^4 p_t^2)
= (1 + 2p_tr'(p_t)/r(p_t)) (s_t-1-h(p_t)η^2) + 𝒪_δ, ℓ (η^4 p_t^2).
Note that r(p_t) = (1 + 𝒪_δ, ℓ (η^2)) q_t ≥ (1 + 𝒪_δ, ℓ (η^2)) q_0 ≥ c_0 ≥ r(z_0) for small step size η, where z_0 = sup{z r'(z)/r(z)≥ -1/2}. Consequently, it holds that 1 + 2p_t r'(p_t)/r(p_t)≥ 0.
Therefore, we have the desired inequality:
s_t+1-1-h(p_t+1)η^2≤(1 + 2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η^2 + 𝒪_δ,ℓ(η^4 p_t^2).
We now provide the proof of Theorem <ref>, restated below for the sake of readability.
*
By Proposition <ref>, there exists a time step t_a^* = 𝒪_δ, ℓ(log(η^-1)) which satisfies:
|s_t_a^*-1| = |q_t_a^*/r(p_t_a^*) - 1 | = 𝒪_δ, ℓ(η^2).
By Lemma <ref>, there exists a constant D>0 which depends on δ, ℓ such that if s_t-1 = 𝒪_δ,ℓ(η^2), then
s_t+1-1-h(p_t+1)η^2≤( 1 + 2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η^2 + D η^4 p_t^2.
Hence, if s_t-1 = 𝒪_δ,ℓ(η^2) and s_t-1-h(p_t) η^2≥(-p_t r(p_t)/r'(p_t))Dη^4, then
s_t+1 - 1 - h(p_t+1)η^2≤( 1 + p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η^2.
For any t≤ T, we have q_t < 1-δ/2 so that if s_t-1 = 𝒪_δ,ℓ(η^2), then r(p_t)≤ (1+𝒪_δ,ℓ(η^2))q_t < 1-δ/4 for small step size η. From Eq. (<ref>) with t=t_a^*, we have either
s_t_a^* - 1 - h(p_t_a^*)η^2 < (-p_t_a^* r(p_t_a^*)/r'(p_t_a^*))Dη^4,
or
s_t_a^*+1 - 1 - h(p_t_a^*+1)η^2≤( 1 + r̂(1-δ/4) r'(r̂(1-δ/4))/(1-δ/4)) s_t_a^* - 1 - h(p_t_a^*)η^2,
where we used Assumption <ref> <ref> and p_t > r̂(1-δ/4). In the latter case, s_t_a^*+1-1 = 𝒪_δ,ℓ(η^2) continues to hold and we can again use Eq. (<ref>) with t=t_a^*+1. By repeating the analogous arguments, we can obtain the time step
t_a ≤ t_a^* + log( (-Dη^4/r”(0)) / |s_t_a^* - 1 - h(p_t_a^*)η^2|)/log( 1 + r̂(1-δ/4) r'(r̂(1-δ/4))/(1-δ/4)) = 𝒪_δ, ℓ (log(η^-1)),
which satisfies: either
s_t_a - 1 - h(p_t_a)η^2 < (-p_t_a r(p_t_a)/r'(p_t_a))Dη^4,
or
s_t_a - 1 - h(p_t_a) η^2≤(-1/r”(0)) Dη^4 ≤(- p_t_a r(p_t_a)/r'(p_t_a)) Dη^4≤(- 4 r(4)/r'(4)) Dη^4,
where we used p_t≤ 4 from Lemma <ref> and -zr(z)/r'(z)≥ -1/r”(0) for any z by Assumption <ref> <ref>.
By Eq. (<ref>), if s_t - 1 - h(p_t) η^2≤(- 4 r(4)/r'(4)) Dη^4 is satisfied for any time step t, then
s_t+1 - 1 - h(p_t+1) η^2≤( 1 + 2p_t r'(p_t)/r(p_t)) (- 4 r(4)/r'(4)) Dη^4 + D η^4 p_t^2 ≤(- 4 r(4)/r'(4)) Dη^4,
by p_t≤ 4 from Lemma <ref> and Assumption <ref> <ref>.
Hence, by induction, we have the desired bound as following: for any t≥ t_a,
s_t - 1 - h(p_t) η^2≤(- 4 r(4)/r'(4)) Dη^4 = 𝒪_δ, ℓ(η^4),
by p_t≤ 4 from Lemma <ref> and Assumption <ref> <ref>.
§.§ Proof of Theorem <ref>
In this subsection, we prove Theorem <ref>. We start by proving Lemma <ref> which provides a useful property of h defined in Theorem <ref>.
Consider the function h defined in Theorem <ref>, given by
h(p) :=
- 1/2( p r(p)^3/r'(p) + p^2 r(p)^2 ) if p ≠ 0, and
- 1/(2r”(0)) if p=0.
Then, h is a positive, even, and bounded twice continuously differentiable function.
It is clear that h is even. We first prove that h is positive. For any p ≠ 0, it holds that
h(p) = -pr(p)^3/2r'(p)( 1 + pr'(p)/r(p))>0,
since pr(p)/r'(p) < 0 and 1 + pr'(p)/r(p) = ℓ”(p)/r(p)> 0 by Assumption <ref> and convexity of ℓ. The function h is continuous since lim_p→ 0 h(p) = h(0). A continuous function on a compact domain is bounded, so h is bounded on the closed interval [-1, 1]. We can rewrite h as
h(p) = 1/2 p^2r(p)^2 ( - r(p)/pr'(p) - 1 ).
Note that p^2r(p)^2 = ℓ'(p)^2 ≤ 1, and (-r(p)/pr'(p)-1) is a positive, decreasing function on p>0 by Assumption <ref> <ref>. Hence, h is bounded on [1, ∞). Since h is even, h is bounded on (-∞, 1]. Therefore, h is a bounded function on ℝ.
We finally prove that h is twice continuously differentiable. Since r is even and C^4 on ℝ, we can check that
h'(p)
- 1/2[ r(p)^3 (r'(p) - pr”(p))/r'(p)^2 + pr(p) (5r(p) + 2pr'(p)) ] if p ≠ 0, and
0 if p=0.
Moreover, for any p ≠ 0,
h”(p) = - 1/2( 2r(p)^2 r”(p) (pr”(p) - r'(p))/r'(p)^3 - pr(p)^3 r^(3)(p)/r'(p)^2 - 3pr(p)^2 r”(p)/r'(p))
- 4r(p)^2 - 7pr(p)r'(p) - p^2 (r(p)r”(p) + r'(p)^2),
and
h”(0) = r^(4)(0)/6r”(0)^2 - 5/2.
Since lim_p→ 0 h”(p) = h”(0), we can conclude that h is a twice continuously differentiable function.
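As an illustration (our own addition, not part of the original argument), the limiting value h(0) = -1/(2r”(0)) can be checked symbolically for one concrete admissible loss; the particular choice ℓ(z) = √(1+z^2) - 1, for which r(z) = (1+z^2)^{-1/2} and r”(0) = -1, is ours and is only meant as an example satisfying the stated conditions.

```python
# Symbolic check (illustrative only) that h(p) -> -1/(2 r''(0)) as p -> 0.
# We pick l(z) = sqrt(1 + z^2) - 1, so r(z) = l'(z)/z = (1 + z^2)^(-1/2);
# this particular loss is our own choice, not one used in the paper.
import sympy as sp

p = sp.symbols('p', real=True)
r = (1 + p**2) ** sp.Rational(-1, 2)
h = -sp.Rational(1, 2) * (p * r**3 / sp.diff(r, p) + p**2 * r**2)

r2_at_0 = sp.diff(r, p, 2).subs(p, 0)          # r''(0) = -1 for this choice
print(sp.limit(sp.simplify(h), p, 0))          # prints 1/2
print(sp.simplify(-1 / (2 * r2_at_0)))         # also 1/2, i.e. -1/(2 r''(0))
```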
We now give the proof of Theorem <ref>, restated below for the sake of readability.
*
We first prove that there exists a time step t_b≥ 0 such that q_t_b > 1. Assume the contrary that q_t ≤ 1 for all t≥ 0. Let t_a be the time step obtained in Theorem <ref>. Then for any t≥ t_a, we have
r(p_t) = (1 - h(p_t)η^2 + 𝒪_δ, ℓ(η^4)) q_t ≤ 1 - h(p_t) η^2/2,
for small step size η. The function g(p) := r(p) - 1 + h(p)η^2/2 is even, continuous, and has the function value g(0) = -η^2/(4r”(0)) > 0. Consequently, there exists a positive constant ϵ>0 such that g(p) > 0 for all p∈ (-ϵ, ϵ). Then, we have |p_t|≥ϵ for all t≥ t_a, since g(p_t) ≤ 0. Moreover, s_t ≥3/4 for any t≥ t_a by Theorem <ref> for small step size η. This implies that for any t≥ t_a,
q_t/q_t+1 = 1 - η^2 p_t^2 r(p_t)^2 (2s_t - 1) ≤ 1 - 1/2η^2 ℓ'(p_t)^2 ≤ 1 - 1/2η^2ℓ'(ϵ)^2,
so q_t grows exponentially, which results in the existence of a time step t_b'≥ t_a such that q_t_b' > 1, a contradiction.
Therefore, there exists a time step t_b such that q_t_b≤ 1 and q_t > 1 for any t > t_b, i.e., q_t jumps across the value 1. This holds since the sequence (q_t) is monotonically increasing. For any t≤ t_b, we have q_t+1≤ q_t + 𝒪(η^2) by Lemma <ref>, and this implies that t_b ≥Ω ((1-q_0)η^-2), as desired.
Lastly, we prove the convergence of GD iterates (p_t, q_t). Let t > t_b be given. Then, q_t ≥ q_t_b+1 > 1 and it holds that
|p_t+1/p_t| = 2r(p_t)/q_t - 1 -η^2 p_t^2 r(p_t)^2 ≤2/q_t_b+1 - 1 < 1.
Hence, p_t is exponentially decreasing for t>t_b. Therefore, p_t converges to 0 as t→∞. Since the sequence (q_t)_t=0^∞ is monotonically increasing and bounded (due to Theorem <ref>), it converges. Suppose that (p_t, q_t) converges to the point (0, q^*). By Theorem <ref>, we can conclude that
| q^* - 1 + η^2/2r”(0)| = 𝒪_δ,ℓ(η^4),
which is the desired bound.
§.§ Proof of Theorem <ref>
In this subsection, we prove Theorem <ref>. We first prove a useful lemma which bounds the Hessian of the function (, ) ↦^⊤, stated below.
For any Θ = (U, v) with U∈ℝ^m× d, v∈ℝ^m, and x∈ℝ^d with ‖x‖_2 = 1, the following inequality holds:
‖∇_(U, v)^2 (v^⊤ U x) ‖_2 ≤ 1.
Moreover, if λ is an eigenvalue of ∇_(U, v)^2 (v^⊤ U x), then -λ is also an eigenvalue of ∇_(U, v)^2 (v^⊤ U x).
We first define the notations. We use the operator ⊗ to denote the tensor product, or Kronecker product, between matrices. For example, for any given two matrices A = (a_ij) ∈ℝ^m× n and B, we define A⊗ B by
A ⊗ B = [ a_11B … a_1nB; ⋮ ⋱ ⋮; a_m1B … a_mnB ].
We use 0_m× n to denote the m by n matrix with all entries equal to zero, and I_n to denote the n by n identity matrix.
Now we prove the lemma itself.
Let U = (U_ij)∈ℝ^m× d, v = (v_i)∈ℝ^m, and x = (x_j) ∈ℝ^d be given. Then,
v^⊤ U x = ∑_i, j v_i U_ij x_j.
We vectorize the parameter Θ = (U, v) as (v_1,…,v_m,U_11,…,U_m1,…,U_1d,…, U_md)∈ℝ^m+md.
Then, we can represent the Hessian as
∇_(U, v)^2 (v^⊤ U x) = ([ 0 x^⊤; x 0_d× d ]) ⊗I_m.
For any given c∈ℝ and y∈ℝ^d with c^2 + ‖y‖_2^2 = 1, we have
‖([ 0 x^⊤; x 0_d× d ]) ([ c; y ]) ‖_2
= ‖([ x^⊤y; c x ]) ‖_2
≤‖x‖_2 = 1.
Hence, by definition of matrix operator norm, we have
‖([ 0 x^⊤; x 0_d× d ]) ‖_2 ≤ 1.
Therefore, we can conclude that
‖∇_(U, v)^2 (v^⊤ U x)‖_2 = ‖([ 0 x^⊤; x 0_d× d ]) ⊗I_m‖_2
= ‖([ 0 x^⊤; x 0_d× d ]) ‖_2 ≤ 1.
Now suppose that λ is an eigenvalue of ∇_(U, v)^2 (v^⊤ U x). We note that for any given matrices A and B, if λ_a is an eigenvalue of A with the corresponding eigenvector w_a and λ_b is an eigenvalue of B with the corresponding eigenvector w_b, then λ_a λ_b is an eigenvalue of A⊗B with the corresponding eigenvector w_a ⊗ w_b. Moreover, any eigenvalue of A⊗B arises as such a product of eigenvalues of A and B. Hence, using Eq. (<ref>), we have
λ is an eigenvalue of the matrix ([ 0 x^⊤; x 0_d× d ]).
We denote the corresponding eigenvector by (c, y^⊤)^⊤ where c∈ℝ and y∈ℝ^d, i.e., it holds that
([ 0 x^⊤; x 0_d× d ]) ([ c; y ])
= ([ x^⊤y; c x ])
= λ([ c; y ]).
Consequently, we have
([ 0 x^⊤; x 0_d× d ]) ([ -c; y ])
= ([ x^⊤y; -c x ])
= - λ([ -c; y ]),
and this implies that
-λ is an eigenvalue of the matrix ([ 0 x^⊤; x 0_d× d ]).
Therefore, by Eq. (<ref>), -λ is an eigenvalue of ∇_(U, v)^2 (v^⊤ U x).
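As a quick numerical illustration of the lemma (our own addition, with arbitrary dimensions), one can build the block matrix ([ 0 x^⊤; x 0_d× d ]) ⊗ I_m directly and confirm both the norm bound and the symmetry of the spectrum:

```python
# Numerical illustration (not part of the proof): build the Hessian of v^T U x
# in its Kronecker form and confirm the two claims of the lemma.
# Dimensions and the random seed are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
m, d = 3, 4
x = rng.normal(size=d)
x /= np.linalg.norm(x)                 # enforce ||x||_2 = 1

core = np.zeros((1 + d, 1 + d))
core[0, 1:] = x                        # top-right block x^T
core[1:, 0] = x                        # bottom-left block x
H = np.kron(core, np.eye(m))           # Hessian w.r.t. the vectorized (v, U)

eigs = np.sort(np.linalg.eigvalsh(H))
print("spectral norm:", np.abs(eigs).max())                # ~1.0
print("spectrum symmetric about 0:", np.allclose(np.sort(-eigs), eigs))
```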
Using Lemma <ref>, we prove an important bound on the sharpness value, given in Proposition <ref> stated below.
For any Θ = (U, v) with U∈ℝ^m× d, v∈ℝ^m, and x∈ℝ^d with ‖x‖_2 = 1, the following bound holds:
|λ_max(Θ) - ℓ”(v^⊤ U x) (‖U x‖_2^2 + ‖v‖_2^2) |≤ 1.
The loss Hessian at Θ = (U, v) can be characterized as:
∇^2_Θℒ(Θ) = ℓ”(v^⊤ U x) (∇_Θ (v^⊤ U x))^⊗ 2 + ℓ'(v^⊤ U x) ∇_Θ^2 (v^⊤ U x).
We first prove that λ_max(∇^2_Θℒ(Θ)) = ‖∇^2_Θℒ(Θ) ‖_2. Note that the largest absolute value of the eigenvalues of a symmetric matrix equals its spectral norm. Hence, ‖∇^2_Θℒ(Θ) ‖_2 = max{λ_max(∇^2_Θℒ(Θ)), -λ_min(∇^2_Θℒ(Θ)) }, so it suffices to prove that λ_max(∇^2_Θℒ(Θ)) ≥ -λ_min(∇^2_Θℒ(Θ)). Let w denote the eigenvector of ∇^2_Θℒ(Θ) corresponding to the smallest eigenvalue λ_min(∇^2_Θℒ(Θ)) with ‖w‖_2 = 1. Then, using Eq. (<ref>), we have
λ_min (∇^2_Θℒ(Θ)) = w^⊤∇^2_Θℒ(Θ) w = ℓ”(v^⊤ U x) w^⊤(∇_Θ (v^⊤ U x))^⊗ 2 w + ℓ'(v^⊤ U x) w^⊤∇_Θ^2 (v^⊤ U x) w
≥ℓ'(v^⊤ U x) w^⊤∇_Θ^2 (v^⊤ U x) w
≥ - |ℓ'(v^⊤ U x)|‖∇_Θ^2 (v^⊤ U x)‖_2,
where we used Lemma <ref> to obtain the last inequality.
Note that the matrix ℓ”(v^⊤ U x) (∇_Θ (v^⊤ U x))^⊗ 2 is PSD, so that λ_max (∇^2_Θℒ(Θ)) ≥λ_max (ℓ'(v^⊤ U x) ∇_Θ^2 (v^⊤ U x)) = |ℓ'(v^⊤ U x)|‖∇_Θ^2 (v^⊤ U x)‖_2 ≥ - λ_min (∇^2_Θℒ(Θ)). Therefore, λ_max(∇^2_Θℒ(Θ)) = ‖∇^2_Θℒ(Θ) ‖_2.
Now, we have the following triangle inequality:
|λ_max(Θ) - ℓ”(v^⊤ U x) (‖U x‖_2^2 + ‖v‖_2^2) | = |‖∇^2_Θℒ(Θ)‖_2 - ‖ℓ”(v^⊤ U x) (∇_Θ (v^⊤ U x))^⊗ 2‖_2 |
≤‖ℓ'(v^⊤ U x) ∇_Θ^2 (v^⊤ U x) ‖_2
= |ℓ'(v^⊤ U x) |‖∇_(U, v)^2 (v^⊤ U x) ‖_2
≤ 1,
where the last inequality holds by Lemma <ref> and 1-Lipschitzness of ℓ.
We now give the proof of Theorem <ref>, restated below for the sake of readability.
*
By Proposition <ref>, we can bound the sharpness λ_max(Θ_t) at time step t by
|λ_max(Θ_t) - 2ℓ”(p_t)/η q_t|≤ 1.
Since ℓ”(z) = r(z) + zr'(z), we can rewrite this as follows:
|λ_max(Θ_t) - (s_t^-1 + p_t r'(p_t)/q_t) 2/η|≤ 1.
By Theorem <ref> and since h is a bounded function by Lemma <ref>, we have s_t = 1+𝒪_ℓ(η^2) for any t≥ t_a. Consequently, s_t^-1 - 1 = 𝒪_ℓ(η^2) and r(p_t) - q_t =𝒪_ℓ(η^2).
Moreover, for any 0<q<1,
d/dq(r̂(q)r'(r̂(q))/q) = r̂'(q) (r'(r̂(q)) + r̂(q) r”(r̂(q)))/q - r̂(q) r'(r̂(q))/q^2
= 1/q( 1 + r̂(q) r”(r̂(q))/r'(r̂(q))) - r̂(q) r'(r̂(q))/q^2,
so that
lim_q→ 1^-(d/dq(r̂(q)r'(r̂(q))/q)) = lim_p→ 0^+(1 + pr”(p)/r'(p)) = 2.
Therefore, d/dq(r̂(q)r'(r̂(q))/q) is bounded on [1/4, 1) and Taylor's theorem gives
|p_t r'(p_t)/r(p_t) - r̂(q_t) r'(r̂(q_t))/q_t| = 𝒪_ℓ(| r(p_t) - q_t |) = 𝒪_ℓ(η^2),
for any time step t with q_t < 1.
Hence, if q_t<1, we have the following bound:
|λ̃(q_t) - (s_t^-1 + p_t r'(p_t)/q_t) 2/η|≤| 1 - s_t^-1 + r̂(q_t) r'(r̂(q_t))/q_t - p_t r'(p_t)/r(p_t)|2/η + 𝒪_ℓ(η)= 𝒪_ℓ(η),
where we used p_t r'(p_t)/q_t = p_t r'(p_t)/r(p_t) (1+𝒪_ℓ(η^2)) = p_t r'(p_t)/r(p_t) + 𝒪_ℓ(η^2), since 1 + p_t r'(p_t)/r(p_t) = ℓ”(p_t)/r(p_t) > 0 implies p_t r'(p_t)≤ r(p_t) ≤ 1.
Now let t be any given time step with q_t ≥ 1. Then, r(p_t) = 1 - 𝒪_ℓ(η^2), and since r(z) = 1+r”(0)z^2 + 𝒪_ℓ(z^4) for small z, we have p_t = 𝒪_ℓ(η). Hence,
|λ̃(q_t) - (s_t^-1 + p_t r'(p_t)/q_t) 2/η|≤| 1 - s_t^-1 - p_t r'(p_t)/r(p_t)|2/η + 𝒪_ℓ(η) = 𝒪_ℓ(η),
for any t with q_t≥ 1. By Eqs. (<ref>), (<ref>), and (<ref>), we can conclude that for any t≥ t_a, we have
|λ_max(Θ_t) - λ̃(q_t) |≤ 1 + 𝒪_ℓ(η).
Finally, we can easily check that the sequence (λ̃(q_t))_t=0^∞ is monotonically increasing, since z↦zr'(z)/r(z) is a decreasing function by Assumption <ref> <ref> and the sequence (q_t) is monotonically increasing.
§ PROOFS FOR THE SINGLE-NEURON NONLINEAR NETWORK
§.§ Formal statements of Theorem <ref>
In this subsection, we provide the formal statements of Theorem <ref>. We study the GD dynamics on a two-dimensional function ℒ(x,y) := 1/2(ϕ(x)y)^2, where x, y are scalars and ϕ is a nonlinear activation satisfying Assumption <ref>. We consider the reparameterization given by Definition <ref>, which is (p, q) := ( x, 2/(η y^2)).
We emphasize that the results we present in this subsection closely mirror those of Section <ref>. In particular,
* Assumption <ref> mirrors Assumption <ref>,
* Assumption <ref> mirrors Assumption <ref>,
* (gradient flow regime) Theorem <ref> mirrors Theorem <ref>.
* (EoS regime, Phase 1) Theorem <ref> mirrors Theorem <ref>,
* (EoS regime, Phase 2) Theorem <ref> mirrors Theorem <ref>, and
* (progressive sharpening) Theorem <ref> mirrors Theorem <ref>.
The proof strategies are also similar. This is mainly because the 1-step update rule Eq. (<ref>) resembles Eq. (<ref>) for small step size η. We now present our rigorous results contained in Theorem <ref>.
Inspired by Lemma <ref>, we have an additional assumption on ϕ as below.
Let r be a function defined by r(z) := ϕ(z)ϕ'(z)/z for z ≠ 0 and r(0) := 1. The function r satisfies Assumption <ref>.
In contrast to the function r defined in Section <ref>, the expression 1+pr'(p)/r(p) can be negative, which implies that the constant c defined in Lemma <ref> is positive. As a result, the dynamics of p_t may exhibit a period-4 (or higher) oscillation or even chaotic behavior (as illustrated in Figure <ref>).
We first state our results on the gradient flow regime.
[gradient flow regime]theoremTheoremGF
Let η∈ (0, r(1)/(2(r(1)+2))) be a fixed step size and ϕ be a sigmoidal function satisfying Assumptions <ref> and <ref>. Suppose that the initialization (p_0, q_0) satisfies |p_0|≤ 1 and q_0 ∈(1/(1-2η), r(1)/(4η)). Consider the reparameterized GD trajectory characterized in Eq. (<ref>). Then, the GD iterations (p_t, q_t) converge to the point (0, q^*) such that
q_0 ≤ q^* ≤exp(2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) q_0 ≤ 2q_0.
Theorem <ref> implies that in gradient flow regime, GD with initialization (x_0, y_0) and step size η converges to (0, y^*) which has the sharpness bounded by:
(1 - 2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) y_0^2 ≤λ_max(∇^2 ℒ(0, y^*)) ≤ y_0^2.
Now we provide our results on the EoS regime with an additional assumption below.
Let r be a function defined in Assumption <ref>. Then r is C^4 on ℝ and satisfies:
* z↦r'(z)/r(z)^2 is decreasing on ,
* z↦zr'(z)/r(z) is decreasing on z>0 and increasing on z<0,
* z↦ zr(z)/r'(z) is decreasing on z>0 and increasing on z<0, and
* r̂(1/2) r'(r̂(1/2))> -1/2.
Note that the function r that arises from the activation ϕ = tanh satisfies Assumptions <ref>, <ref>, and <ref>.
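The monotonicity conditions can also be spot-checked numerically for ϕ = tanh; the short sketch below is our own addition (the grid range and resolution are arbitrary, and a finite grid of course does not constitute a proof).

```python
# Grid-based spot-check (ours) of the monotonicity conditions for phi = tanh,
# where r(z) = tanh(z) tanh'(z) / z; z < 0 follows by symmetry since r is even.
import numpy as np

z = np.linspace(0.02, 6.0, 4000)
t = np.tanh(z)
s2 = 1.0 - t**2                                   # tanh'(z)
r = t * s2 / z
rp = s2 * (1.0 - 3.0 * t**2) / z - t * s2 / z**2  # analytic r'(z)

print("r'/r^2  decreasing:", np.all(np.diff(rp / r**2) < 0))
print("z r'/r  decreasing:", np.all(np.diff(z * rp / r) < 0))
print("z r/r'  decreasing:", np.all(np.diff(z * r / rp) < 0))
```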
[EoS regime, Phase 1]theoremTheoremEoSearly
Let η>0 be a small enough constant and ϕ be an activation function satisfying Assumptions <ref>, <ref>, and <ref>. Let z_0 := sup_z{zr'(z)/r(z)≥ -1/2}, z_1 := sup_z{zr'(z)/r(z)≥ -1}, and c_0 := max{r(z_0), r(z_1)+1/2}∈ (1/2,1). Let δ∈ (0,1-c_0) be any given constant. Suppose that the initialization (p_0, q_0) satisfies |p_0|≤ 1 and q_0∈ (c_0, 1-δ). Consider the reparameterized GD trajectory characterized in Eq. (<ref>). We assume that for all t≥ 0 such that q_t<1, we have p_t ≠ 0. Then, there exists a time step t_a = 𝒪_δ, ϕ(log(η^-1)) such that for any t≥ t_a,
q_t/r(p_t) = 1 + h(p_t) η + 𝒪_δ,ϕ(η^2)
where h: ℝ→ℝ is a function defined as
h(p)
-ϕ(p)^2 r(p)/(p r'(p)) if p ≠ 0, and
-1/r”(0) if p=0.
The main difference between Theorem <ref> and Theorem <ref> is the error term which is 𝒪(η^2) in the former and 𝒪(η^4) in the latter. This is because the 1-step update rule of q_t in Theorem <ref> is given by q_t+1 = (1+𝒪(η))q_t, while in Theorem <ref> we have q_t+1 = (1+𝒪(η^2))q_t.
[EoS regime, Phase 2]theoremTheoremEoSlate
Under the same settings as in Theorem <ref>, there exists a time step t_b = Ω((1-q_0)η^-1), such that q_t_b≤ 1 and q_t > 1 for any t> t_b. Moreover, the GD iterates (p_t, q_t) converge to the point (0, q^*) such that
q^* = 1 - η/r”(0) + 𝒪_δ,ϕ(η^2).
Theorem <ref> implies that in the EoS regime, GD with step size η converges to (0, y^*) which has the sharpness approximated as:
λ_max(∇^2 ℒ(0, y^*)) = 2/η - 2/r”(0) + 𝒪_δ, ϕ(η).
Theorem <ref> proves that progressive sharpening (i.e., sharpness increases) occurs during Phase 2.
[progressive sharpening]theoremTheoremPS
Under the same setting as in Theorem <ref>, let t_a denote the obtained time step. Define the function λ̃: ℝ_>0→ℝ given by
λ̃(q) := (1 + r̂(q) r'(r̂(q))/q) 2/η if q≤ 1, and
2/η otherwise.
Then, the sequence (λ̃(q_t))_t=0^∞ is monotonically increasing. For any t≥ t_a, the sharpness at GD iterate (x_t, y_t) closely follows the sequence (λ̃(q_t))_t=0^∞ satisfying the following:
λ_max(∇^2 ℒ (x_t, y_t)) = λ̃(q_t) + 𝒪_ϕ(1).
In Figure <ref>, we conduct numerical experiments on the single-neuron model with tanh activation, demonstrating that λ̃(q_t) provides a close approximation of the sharpness.
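A minimal sketch of such an experiment is given below (our own illustrative code, not the original experimental setup): it runs GD on ℒ(x,y) = 1/2(ϕ(x)y)^2 with ϕ = tanh, tracks q_t = 2/(η y_t^2), and compares the exact sharpness of the 2× 2 Hessian with λ̃(q_t), inverting r numerically. The step size, initialization, and iteration count are untuned choices.

```python
# Minimal sketch (ours) of the experiment: GD on L(x,y) = 0.5*(tanh(x)*y)^2,
# tracking q_t = 2/(eta*y_t^2), the exact sharpness of the 2x2 Hessian, and
# lambda_tilde(q_t).  Step size and initialization are untuned choices.
import numpy as np
from scipy.optimize import brentq

def r(z):
    return 1.0 if abs(z) < 1e-8 else np.tanh(z) * (1.0 - np.tanh(z) ** 2) / z

def sharpness(x, y):
    t = np.tanh(x)
    phi, dphi, ddphi = t, 1.0 - t ** 2, -2.0 * t * (1.0 - t ** 2)
    H = np.array([[(phi * ddphi + dphi ** 2) * y ** 2, 2.0 * phi * dphi * y],
                  [2.0 * phi * dphi * y, phi ** 2]])
    return np.linalg.eigvalsh(H)[-1]

def lam_tilde(q, eta, dz=1e-6):
    if q >= 1.0:
        return 2.0 / eta
    z = brentq(lambda s: r(s) - q, 1e-10, 50.0)      # z = r_hat(q) by bisection
    rp = (r(z + dz) - r(z - dz)) / (2.0 * dz)        # numerical r'(z)
    return (1.0 + z * rp / q) * 2.0 / eta

eta, x, y = 0.01, 0.5, 15.0                          # q_0 = 2/(eta*y^2) < 1 (EoS regime)
for step in range(2001):
    q = 2.0 / (eta * y ** 2)
    if step % 400 == 0:
        print(f"t={step:4d}  q={q:.4f}  sharpness={sharpness(x, y):8.2f}  "
              f"lam_tilde={lam_tilde(q, eta):8.2f}")
    gx = np.tanh(x) * (1.0 - np.tanh(x) ** 2) * y ** 2   # dL/dx
    gy = np.tanh(x) ** 2 * y                             # dL/dy
    x, y = x - eta * gx, y - eta * gy
```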
§.§ Proof of Theorem <ref>
We give the proof of Theorem <ref>, restated below for the sake of readability.
*
Theorem <ref> directly follows from Proposition <ref> stated below.
Suppose that η∈ (0, r(1)/(2(r(1)+2))), |p_0|≤ 1 and q_0 ∈(1/(1-2η), r(1)/(4η)). Then for any t≥ 0, we have
|p_t|≤[ 1 - min{2(q_0 - 1)/q_0, r(1)/q_0}]^t ≤ 1,
and
q_0 ≤ q_t≤exp(2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) q_0 ≤ 2q_0.
We give the proof by induction; namely, if
p_t≤[ 1 - min{2(q_0 - 1)/q_0, r(1)/q_0}]^t, q_0 ≤ q_t≤exp(2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) q_0 ≤ 2q_0
are satisfied for time steps 0≤ t≤ k for some k, then the inequalities are also satisfied for the next time step k+1.
For the base case, the inequalities are satisfied for t=0 by assumptions. For the induction step, we assume that the inequalities hold for any 0≤ t≤ k. We will prove that the inequalities are also satisfied for t=k+1.
By induction assumptions, we have r(1)≤ r(p_k) ≤ 1 and q_0≤ q_k ≤ 2q_0. From Eq. (<ref>), we get
|p_k+1/p_k| = | 1 - 2r(p_k)/q_k|≤max{ 1 - 2r(1)/2q_0, -1 + 2/q_0} = 1 - min{2(q_0-1), r(1)}/q_0.
Due to the induction assumption, we obtain the desired bound on p_k+1 as follows:
p_k+1≤[ 1 - min{2(q_0 - 1)/q_0, r(1)/q_0}]^k+1.
Moreover, for any 0≤ t≤ k, by Eq. (<ref>) we have
1 - 2η p_t^2 ≤ (1-η p_t^2)^2 ≤q_t/q_t+1 = (1-ηϕ(p_t)^2)^2 ≤ 1,
where the second inequality comes from the fact that ϕ is 1-Lipschitz and ϕ(0) = 0 (Assumption <ref>).
Hence, we have q_k+1≥ q_k ≥ q_0. Note that q_t/q_t+1∈ [1/2,1] for small η. Consequently, we have
|log( q_0/q_k+1) |≤∑_t=0^k|log(q_t/q_t+1) | ≤ 2 ∑_t=0^k|q_t/q_t+1 - 1 |
≤ 2η∑_t=0^k p_t^2
≤ 2η∑_t=0^k[ 1 - min{2(q_0 - 1)/q_0, r(1)/q_0}]^2t
≤ 2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1,
where the second inequality holds since |log (1 + z)|≤ 2 |z| if |z|≤1/2. Therefore, we obtain the desired bound on q_k+1 as follows:
q_0 ≤ q_k+1≤exp(2 η[min{2(q_0 - 1)/q_0, r(1)/q_0}]^-1) q_0.
Since q_0 ≥1/(1-2η) and q_0≤r(1)/(4η), we have
min{2(q_0 - 1)/q_0, r(1)/q_0}≥ 4η.
This implies that q_k+1≤exp(1/2) q_0 ≤ 2q_0, as desired.
§.§ Proof of Theorem <ref>
In this subsection, we prove Theorem <ref>. We use the following notation:
s_t := q_t/r(p_t).
All the lemmas in this subsection are stated in the context of Theorem <ref>. The proof structure resembles that of Theorem <ref>. We informally summarize the lemmas used in the proof of Theorem <ref>. Lemma <ref> proves that p_t is bounded by a constant and q_t increases monotonically with the increment bounded by 𝒪(η). Lemma <ref> states that in the early phase of training, there exists a time step t_0 where s_t_0 becomes smaller or equal to 2/2-r(1), which is smaller than 2. Lemma <ref> demonstrates that if s_t is smaller than 2 and p_t≥r̂(1-δ/4), then s_t-1 decreases exponentially. For the case where p_t < r̂(1-δ/4), Lemma <ref> proves that p_t increases at an exponential rate. Moreover, Lemma <ref> shows that if s_t < 1 at some time step, then s_t+1 is upper bounded by 1+𝒪(η). Combining these findings, Proposition <ref> establishes that in the early phase of training, there exists a time step t_a^* such that s_t_a^* = 1 + 𝒪_δ, ϕ(η). Lastly, Lemma <ref> demonstrates that if s_t = 1 + 𝒪_δ, ϕ(η), then s_t - 1 - h(p_t) η decreases exponentially.
Suppose that the initialization (p_0, q_0) satisfies |p_0|≤ 1 and q_0∈ (c_0, 1-δ). Then for any t≥ 0 such that q_t ≤ 1, it holds that
|p_t|≤ 4, and q_t≤ q_t+1≤ (1 + 𝒪(η))q_t.
We prove by induction. We assume that for some t≥ 0, it holds that p_t≤ 4 and 1/2≤ q_t ≤ 1. We will prove that p_t+1≤ 4 and 1/2≤ q_t≤ q_t+1≤ (1+𝒪(η))q_t. For the base case, p_0≤ 1 ≤ 4 and 1/2≤ c_0 < q_t ≤ 1 holds by the assumptions on the initialization. Now suppose that for some t≥ 0, it holds that p_t≤ 4 and 1/2≤ q_t ≤ 1. By Eq. (<ref>),
|p_t+1| = | p_t - 2ϕ(p_t)ϕ'(p_t)/q_t|≤max{|p_t|, 2/q_t}≤ 4.
where we used Assumption <ref> to bound |ϕ(p_t)ϕ'(p_t)|≤ 1. Moreover,
1-2η≤ (1-η)^2≤q_t/q_t+1 = (1-ηϕ(p_t)^2)^2≤ 1,
since ϕ is bounded by 1. Hence, q_t≤ q_t+1≤ (1 + 𝒪(η))q_t, as desired.
Lemma <ref> implies that p_t is bounded by a constant throughout the iterations, and q_t monotonically increases slowly, where the increment for each step is 𝒪(η). Hence, there exists a time step T=Ω(δη^-1) = Ω_δ(η^-1) such that for any t≤ T, it holds that q_t ≤ 1 - δ/2. Throughout this subsection, we focus on these T early time steps. Note that for all 0≤ t ≤ T, it holds that q_t ∈ (c_0, 1-δ/2).
There exists a time step t_0 = 𝒪_δ, ϕ(1) such that s_t_0≤2/2-r(1).
We start by proving the following statement: for any 0≤ t≤ T, if 2/2-r(1) < s_t < 2r(1)^-1, then s_t+1 < 2r(1)^-1 and p_t+1≤ (1-r(1))p_t. Suppose that 2/2-r(1) < s_t < 2r(1)^-1. Then from Eq. (<ref>), it holds that
|p_t+1/p_t| = | 1 - 2/s_t|≤ 1-r(1).
Hence, p_t+1≤ (1-r(1))p_t. Now we prove s_t+1< 2r(1)^-1. Assume the contrary that s_t+1≥ 2r(1)^-1. Then, r(p_t+1) = q_t+1/s_t+1 < q_t+1 < 1-δ/2 so that p_t+1≥r̂(1-δ/2). By Mean Value Theorem, there exists p_t^* ∈ (p_t+1, p_t) such that
1/r(p_t+1) = 1/r(p_t - (p_t-p_t+1)) = 1/r(p_t) + r'(p_t^*)/r(p_t^*)^2 (p_t-p_t+1)
≤1/r(p_t) + r'(p_t+1)/r(p_t+1)^2(r(1)p_t)
≤1/r(p_t) - r'(r̂(1-δ/2))/(1-δ/2)^2(r(1)r̂(1-δ/2))
= 1/r(p_t) - Ω_δ,ϕ(1),
where we used Assumption <ref> <ref> and r̂(1-δ/2) ≤p_t+1≤ (1-r(1))p_t. Consequently,
s_t+1 = q_t+1/r(p_t+1) = (1+𝒪(η))q_t (1/r(p_t) - Ω_δ,ϕ(1)) ≤q_t/r(p_t) = s_t < 2r(1)^-1,
for small step size η. This gives a contradiction to our assumption that s_t+1≥ 2r(1)^-1. Hence, we can conclude that s_t+1 < 2r(1)^-1, as desired.
We proved that for any 0≤ t≤ T, if 2/2-r(1) < s_t < 2r(1)^-1, it holds that s_t+1 < 2r(1)^-1 and p_t+1≤ (1-r(1))p_t. At initialization, p_0≤ 1 and q_0 < 1, so that s_0 < r(1)^-1. If s_0 ≤2/2-r(1), then t_0=0 is the desired time step. Suppose that s_0 > 2/2-r(1). Then, we have s_1 < 2r(1)^-1 and p_1≤ (1-r(1))p_0≤ 1-r(1). Then we have either s_1 ≤2/2-r(1), or 2/2-r(1) < s_1 < 2r(1)^-1. In the previous case, t_0=1 is the desired time step. In the latter case, we can repeat the same argument and obtain s_2 < 2r(1)^-1 and p_2≤ (1-r(1))^2.
By inductively repeating the same argument, we can obtain a time step t_0≤log(r̂(1-δ/2)) / log(1-r(1)) = 𝒪_δ, ϕ(1) such that either s_t_0≤2/2-r(1), or p_t_0≤r̂(1-δ/2). In the latter case, r(p_t_0) ≥ 1-δ/2 > q_t_0, and hence s_t_0 < 1 < 2/2-r(1). Therefore, t_0 = 𝒪_δ, ϕ(1) is the desired time step satisfying s_t_0≤2/2-r(1).
According to Lemma <ref>, there exists a time step t_0=𝒪_δ,ϕ(1) such that s_t_0≤2/2-r(1) < 2(1+η^2)^-1 for small step size η. Now we prove the lemma below.
Suppose that s_t≤ 1. Then, it holds that s_t+1≤ 1 + 𝒪(η).
For any p∈ (0, r̂(q_t/2)), we have r(p)≥q_t/2 so that f_q_t(p) = (-1+2r(p)/q_t)p. Hence,
∂/∂ pf_q_t(p) = 2 r(p)/q_t(1 + p r'(p)/r(p)) - 1,
for any p∈ (0, r̂(q_t/2)). By Assumption <ref> <ref>, <ref> and q_t ≥ c_0 ≥ r(z_1)+1/2≥ 2r(z_1) where z_1 = sup_z {zr'(z)/r(z)≥ - 1}, both r(p) and (1+pr'(p)/r(p)) are positive, decreasing function on (0, r̂(q_t/2)). Consequently, ∂/∂ pf_q_t(p) is a decreasing function on (0, r̂(q_t/2)).
Now note that q_t/2 < q_t < 1, which means r̂(1) = 0 < r̂(q_t) < r̂(q_t/2) by the definition of r̂. Note that ∂/∂ pf_q_t(p) at p = r̂(q_t) evaluates to
∂/∂ pf_q_t(r̂(q_t)) = 1 + 2 r̂(q_t) r'(r̂(q_t))/r(r̂(q_t))≥ 1 + 2 r̂(c_0)r'(r̂(c_0))/r(r̂(c_0))≥ 0,
where the inequalities used Assumption <ref> <ref> and q_t > c_0 ≥ r(z_0) where z_0 := sup_z{zr'(z)/r(z)≥ -1/2}, from the statement of Theorem <ref>.
Therefore, since ∂/∂ pf_q_t(p) is decreasing on (0, r̂(q_t/2)) and is nonnegative at r̂(q_t), for any p∈ (0, r̂(q_t)), it holds that ∂/∂ pf_q_t(p)≥ 0.
In other words, f_q_t(p) is an increasing function on (0, r̂(q_t)). Since 0≤ s_t≤ 1, we have p_t≤r̂(q_t) and it holds that
p_t+1 = (-1 + 2/s_t) p_t = f_q_t(p_t)≤f_q_t(r̂(q_t)) = r̂(q_t).
Therefore, with this inequality and Lemma <ref>, we can conclude that
s_t+1 = q_t+1/r(p_t+1) = (1 + 𝒪(η))q_t/r(p_t+1)≤q_t/r(r̂(q_t)) + 𝒪(η) = 1 + 𝒪(η).
Using Lemma <ref>, we prove the following lemma.
For any 0≤ t≤ T, if s_t < 2 and r(p_t)≤ 1 - δ/4, then
|s_t+1-1|≤ (1-d)|s_t-1| + 𝒪(η),
where d∈ (0,1/2] is a constant which depends on δ and ϕ.
From Eq. (<ref>) it holds that
p_t+1/p_t = 1 - 2/s_t < 0,
so that p_t and p_t+1 have opposite signs. By Mean Value Theorem, there exists θ_t between -1 and (1-2/s_t) satisfying
1/r(p_t+1) = 1/r(-p_t + (2(s_t-1)/s_t)p_t)
= 1/r(-p_t) - r'(θ_t p_t)/r(θ_t p_t)^2(2(s_t-1)/s_t) p_t
= 1/r(p_t) - r'(θ_t p_t)/r(θ_t p_t)^2(2(s_t-1)/s_t) p_t.
where the last equality used the fact that p_t and θ_t p_t have opposite signs and r'(z) and z have opposite signs.
Note that θ_t p_t is between p_t and p_t+1. Consequently, the value r'(θ_t p_t)/r(θ_t p_t)^2 is between r'(p_t)/r(p_t)^2 and r'(p_t+1)/r(p_t+1)^2 by Assumption <ref> <ref>. We will prove the current lemma based on Eq. (<ref>). We divide into the following three cases: (1) s_t≥ 1 and s_t+1≥ 1, (2) s_t≥ 1 and s_t+1 < 1, and (3) s_t < 1.
Case 1. Suppose that s_t≥ 1 and s_t+1≥ 1. Here, we have p_t≥r̂(q_t)≥r̂(1-δ/2) and similarly p_t+1≥r̂(1-δ/2). By Assumption <ref> <ref>, r'(θ_t p_t)/r(θ_t p_t)^2≥r'(r̂(1-δ/2))/(1-δ/2)^2. Hence, Eq. (<ref>) gives
1/r(p_t+1)≤1/r(p_t) - r'(r̂(1-δ/2))/(1-δ/2)^2(2(s_t-1)/s_t) r̂(1-δ/2).
Consequently, by Lemma <ref>,
s_t+1 = q_t (1 + 𝒪(η))/r(p_t+1) = q_t/r(p_t+1) + 𝒪(η)
≤ s_t - r'(r̂(1-δ/2))/(1-δ/2)^2(2(s_t-1)/s_t) r̂(1-δ/2) q_t + 𝒪(η)
≤ s_t - r'(r̂(1-δ/2))/(1-δ/2)^2 (s_t-1) r̂(1-δ/2) 1/2 + 𝒪(η)
≤ s_t - r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2 (s_t-1) + 𝒪(η),
where we used q_t > c_0 > 1/2 and s_t < 2. Therefore, we can obtain the following inequality:
0 ≤ s_t+1-1 ≤( 1 - r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2) (s_t-1) + 𝒪(η).
Case 2. Suppose that s_t≥ 1 and s_t+1< 1. Here, we have r(p_t+1) > q_t+1≥ q_t ≥ r(p_t), so that p_t+1 < p_t. Consequently, r'(θ_t p_t)/r(θ_t p_t)^2≤r'(p_t)/r(p_t)^2 by Assumption <ref> <ref>. Hence, we can deduce from Eq. (<ref>) that
1/r(p_t+1) ≥1/r(p_t) - r'(p_t)/r(p_t)^2( 2(s_t-1)/s_t) p_t
= 1/r(p_t) - 2p_tr'(p_t)/r(p_t)q_t (s_t-1)
= 1/r(p_t) + 2p_tr'(p_t)/r(p_t)q_t (s_t-1).
Consequently, by Assumption <ref> <ref>,
s_t+1≥q_t/r(p_t+1)≥ s_t + 2p_tr'(p_t)/r(p_t) (s_t-1) ≥ s_t + 2r̂(c_0/2)r'(r̂(c_0/2))/r(r̂(c_0/2)) (s_t-1),
where we used r(p_t)≥q_t/2 > c_0/2. Therefore, we can obtain the following inequality:
0 ≤ 1 - s_t+1≤ - (1 + 4r̂(c_0/2)r'(r̂(c_0/2))/c_0) (s_t-1),
where -1<r̂(c_0/2)r'(r̂(c_0/2))/r(r̂(c_0/2)) = 2r̂(c_0/2)r'(r̂(c_0/2))/c_0 < 0, since c_0 ≥ r(z_1)+1/2≥ 2r(z_1) with z_1 = sup_z{zr'(z)/r(z)≥ -1}, and r(z_1) ≤1/2 holds by Assumption <ref> <ref>.
Case 3. Suppose that s_t < 1. By Lemma <ref>, it holds that s_t+1≤ 1+𝒪(η). Moreover, we assumed r(p_t) ≤ 1 - δ/4, so that p_t≥r̂(1-δ/4). We also have
|p_t+1| = (-1+2/s_t)|p_t| > |p_t|≥r̂(1-δ/4).
Consequently, by Assumption <ref> <ref>, it holds that r'(θ_t p_t)/r(θ_t p_t)^2≥r'(r̂(1 - δ/4))/(1 - δ/4)^2. Hence, by Eq. (<ref>), we have
1/r(p_t+1) ≥1/r(p_t) + r'(r̂(1 - δ/4))/(1 - δ/4)^2(2(1-s_t)/s_t) r̂(1-δ/4)
≥1/r(p_t) + r'(r̂(1 - δ/4))/(1 - δ/4)^2 2(1-s_t) r̂(1-δ/4)
= 1/r(p_t) + 2 r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2 (1-s_t),
and hence,
s_t+1≥q_t/r(p_t+1)≥ s_t + r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2 (1-s_t),
where we used q_t > 1/2. Therefore, we can obtain the following inequality:
-𝒪(η) ≤ 1-s_t+1≤(1 - r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2) (1-s_t),
where the first inequality is from Lemma <ref>.
Combining the three cases, we can finally conclude that if we choose
d := min{1/2, r̂(1-δ/2)r'(r̂(1-δ/2))/2(1-δ/2)^2, 2 (1 + 2r̂(c_0/2)r'(r̂(c_0/2))/c_0), r̂(1-δ/4)r'(r̂(1-δ/4))/(1-δ/4)^2}∈(0,1/2],
then s_t+1-1≤ (1-d)s_t-1 + 𝒪(η), as desired.
Lemma <ref> implies that if s_t<2 and p_t≥r̂(1-δ/4), then s_t-1 exponentially decreases. We prove Lemma <ref> to handle the regime p_t < r̂(1-δ/4), which is stated below.
For any 0≤ t ≤ T, if r(p_t)≥ 1 - δ/4, it holds that
|p_t+1/p_t|≥2/2-δ.
If r(p_t)≥ 1 - δ/4, then s_t = q_t/r(p_t) < 1-δ/2/1-δ/4 = 4-2δ/4-δ, where we used q_t < 1-δ/2 for any 0≤ t≤ T. Consequently,
|p_t+1/p_t| = 2/s_t - 1 ≥2(4-δ)/4-2δ - 1 = 2/2-δ.
Now we prove Proposition <ref>, which proves that s_t reaches close to 1 with error bound of 𝒪(η).
There exists a time step t_a^* = 𝒪_δ,ϕ(log(η^-1)) satisfying
s_t_a^* = 1 + 𝒪_δ,ϕ(η).
By Lemma <ref>, there exists a time step t_0 = 𝒪_δ,ϕ(1) such that s_t_0≤2/2-r(1). Here, we divide into two possible cases: (1) s_t_0<1, and (2) 1≤ s_t_0≤2/2-r(1).
Case 1. Suppose that s_t_0<1. By Lemma <ref>, if r(p_t_0)≥ 1-δ/4 (or equivalently, p_t_0≤r̂(1-δ/4)), then there exists a time step t_1 ≤ t_0 + log(r̂(1-δ/4)/p_t_0) / log(2/2-δ) = 𝒪_δ, ϕ (1) such that p_t_1≥r̂(1-δ/4). We denote the first time step satisfying p_t_1≥r̂(1-δ/4) and t_1≥ t_0 by t_1 = 𝒪_δ, ϕ(1). By Lemma <ref>, it holds that s_t_1≤ 1+𝒪(η) since s_t_1-1<1. Consequently, if s_t_1≥ 1, then s_t_1-1≤𝒪(η) so that t_a^* = t_1 is the desired time step. Hence, it suffices to consider the case when s_t_1<1. Here, we can apply Lemma <ref> which implies that
s_t_1+1-1≤ (1-d) s_t_1-1 + 𝒪(η),
where d is a constant which depends on δ and ϕ. Then, there are two possible cases: either s_t_1-1≤𝒪(η d^-1), or s_t_1+1-1≤ (1-d/2)s_t_1-1. It suffices to consider the latter case, suppose that s_t_1+1-1≤ (1-d/2)s_t_1-1. Since we are considering the case s_t_1<1, again by Lemma <ref>, we have s_t_1+1≤ 1 + 𝒪(η). Since p_t_1+1/p_t_1 = 2/s_t_1-1 > 1, we have p_t_1+1≥p_t_1≥r̂(1-δ/4). This means that we can again apply Lemma <ref> and repeat the analogous argument. Hence, there exists a time step t_2 ≤ t_1 + log(η/1-s_t_1) / log (1-d/2) = 𝒪_δ, ϕ(log(η^-1)), such that s_t_2-1≤𝒪(η d^-1) = 𝒪_δ,ϕ(η).
Case 2. Suppose that 1≤ s_t_0≤2/2-r(1). Then, r(p_t_0) ≤ q_t_0≤ 1-δ/2, so we can apply Lemma <ref>. There are two possible cases: either s_t_0+1-1≤𝒪(η d^-1) = 𝒪_δ,ϕ(η), or s_t_0+1-1≤ (1-d/2)s_t_0-1. It suffices to consider the latter case. If s_t_0+1≥ 1, we can again apply Lemma <ref> and repeat the analogous argument. Hence, we can obtain a time step t_0' ≤ t_0 + log(η/1-s_t_0) / log(1-d/2) = 𝒪_δ,ϕ(log(η^-1)) such that either s_t_0' < 1 or s_t_0'-1 = 𝒪_δ,ϕ (η) is satisfied. If s_t_0' < 1, we proved in Case 1 that there exists a time step t_2' = t_0'+𝒪_δ,ϕ(log(η^-1)) such that s_t_2'-1≤𝒪_δ,ϕ(η), and this is the desired bound.
Now we carefully handle the error term 𝒪(η) obtained in Proposition <ref> and provide a tighter bound on s_t by proving Lemma <ref> stated below.
If |s_t-1|≤𝒪_δ, ϕ(η), then it holds that
|s_t+1-1-h(p_t+1) η|≤(1+2p_t r'(p_t)/r(p_t)) |s_t-1-h(p_t)η| + 𝒪_δ,ϕ(η^2p_t^2),
where
h(p)
-ϕ(p)^2 r(p)/(p r'(p)) if p ≠ 0, and
-1/r”(0) if p=0.
Suppose that s_t-1≤𝒪_δ,ϕ(η). Then, p_t+1 = | 1 - 2/s_t|p_t = (1 + 𝒪_δ, ϕ(η))p_t. By Eq. (<ref>) proved in Lemma <ref>, there exists ϵ_t = 𝒪_δ, ϕ(η) such that
1/r(p_t+1) = 1/r(p_t) + r'((1+ϵ_t)p_t)/r((1+ϵ_t)p_t)^2(2(s_t-1)/s_t) p_t
= 1/r(p_t) + ( r'(p_t)/r(p_t)^2 + 𝒪_δ,ϕ(η p_t) ) (2(s_t-1)/s_t) p_t
= 1/r(p_t) + r'(p_t)/r(p_t)^2(2(s_t-1)/s_t) p_t + 𝒪_δ,ϕ(η^2 p_t^2),
where we used the Taylor expansion on r'(p)/r(p)^2 with the fact that d/dp(r'(p)/r(p)^2) is bounded on [-4, 4] and that p_t≤ 4 to obtain the second equality.
Note that q_t+1 = (1-ηϕ(p_t)^2)^-2q_t = (1+2ηϕ(p_t)^2) q_t + 𝒪(η^2) by Eq (<ref>). Consequently,
s_t+1 = (1+2ηϕ(p_t)^2) (s_t + r'(p_t)/r(p_t)^2(2(s_t-1)/s_t) p_t q_t) + 𝒪_δ,ϕ(η^2 p_t^2)
= (1+2ηϕ(p_t)^2) s_t + 2p_t r'(p_t)/r(p_t)(s_t -1) + 𝒪_δ,ϕ(η^2 p_t^2)
= 1 + (1 + 2p_t r'(p_t)/r(p_t)) (s_t-1) + 2ηϕ(p_t)^2 + 𝒪_δ,ϕ(η^2 p_t^2).
Note that h is an even, twice continuously differentiable function by Lemma <ref>. Consequently, h'(0) = 0 and h'(p) = 𝒪_ϕ(p), since h” is bounded on a closed interval. Consequently, h(p_t+1) = h((1+𝒪_δ, ϕ(η))p_t) = h(p_t) + 𝒪_δ, ϕ(η p_t^2).
Hence, we can obtain the following:
s_t+1 - 1 - h(p_t+1)η = s_t+1 - 1 - h(p_t) η + 𝒪_δ, ϕ(η^2p_t^2)
= s_t+1 - 1 + ϕ(p_t)^2 r(p_t)/p_t r'(p_t)η + 𝒪_δ, ϕ(η^2p_t^2)
= (1 + 2p_t r'(p_t)/r(p_t)) (s_t - 1 + ϕ(p_t)^2 r(p_t)/p_t r'(p_t)η) + 𝒪_δ, ϕ(η^2p_t^2)
= (1 + 2p_t r'(p_t)/r(p_t))(s_t - 1 - h(p_t)η) + 𝒪_δ,ϕ(η^2p_t^2)
Note that r(p_t) = (1 + 𝒪_δ, ϕ (η)) q_t ≥ (1 + 𝒪_δ, ϕ (η)) q_0 ≥ c_0 ≥ r(z_0) for small step size η, where z_0 = sup{z r'(z)/r(z)≥ -1/2}. Consequently, it holds that 1 + 2p_t r'(p_t)/r(p_t)≥ 0. Therefore, we can conclude that
s_t+1-1-h(p_t+1) η≤(1+2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η + 𝒪_δ,ϕ(η^2p_t^2),
as desired.
Now we give the proof of Theorem <ref>, restated below for the sake of readability.
*
By Proposition <ref>, there exists a time step t_a^* = 𝒪_δ, ϕ(log(η^-1)) which satisfies:
|s_t_a^*-1| = |q_t_a^*/r(p_t_a^*) - 1 | = 𝒪_δ, ϕ(η).
By Lemma <ref>, there exists a constant D>0 which depends on δ, ϕ such that if s_t-1 = 𝒪_δ,ϕ(η), then
s_t+1-1-h(p_t+1)η≤( 1 + 2p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η + D η^2 p_t^2.
Hence, if s_t-1 = 𝒪_δ,ϕ(η) and s_t-1-h(p_t) η≥(-p_t r(p_t)/r'(p_t))Dη^2, then
s_t+1 - 1 - h(p_t+1)η≤( 1 + p_t r'(p_t)/r(p_t)) s_t-1-h(p_t)η.
For any t≤ T, we have q_t < 1-δ/2 so that if s_t-1 = 𝒪_δ,ϕ(η), then r(p_t)≤ (1+𝒪_δ,ϕ(η))q_t < 1-δ/4 for small step size η. From Eq. (<ref>) with t=t_a^*, we have either
s_t_a^* - 1 - h(p_t_a^*)η < (-p_t_a^* r(p_t_a^*)/r'(p_t_a^*))Dη^2,
or
s_t_a^*+1 - 1 - h(p_t_a^*+1)η≤( 1 + r̂(1-δ/4) r'(r̂(1-δ/4))/(1-δ/4)) s_t_a^* - 1 - h(p_t_a^*)η,
where we used Assumption <ref> <ref> and p_t > r̂(1-δ/4). In the latter case, s_t_a^*+1-1 = 𝒪_δ,ϕ(η) continues to hold and we can again use Eq. (<ref>) with t=t_a^*+1. By repeating the analogous arguments, we can obtain the time step
t_a ≤ t_a^* + log( (-Dη^2/r”(0)) / |s_t_a^* - 1 - h(p_t_a^*)η|)/log( 1 + r̂(1-δ/4) r'(r̂(1-δ/4))/(1-δ/4)) = 𝒪_δ, ϕ (log(η^-1)),
which satisfies: either
s_t_a - 1 - h(p_t_a)η < (-p_t_a r(p_t_a)/r'(p_t_a))Dη^2,
or
s_t_a - 1 - h(p_t_a) η≤(-1/r”(0)) Dη^2 ≤(- p_t_a r(p_t_a)/r'(p_t_a)) Dη^2≤(- 4 r(4)/r'(4)) Dη^2,
where we used p_t≤ 4 from Lemma <ref> and -zr(z)/r'(z)≥ -1/r”(0) for any z by Assumption <ref> <ref>.
By Eq. (<ref>), if s_t - 1 - h(p_t) η≤(- 4 r(4)/r'(4)) Dη^2 is satisfied for any time step t, then
s_t+1 - 1 - h(p_t+1) η≤( 1 + 2p_t r'(p_t)/r(p_t)) (- 4 r(4)/r'(4)) Dη^2 + D η^2 p_t^2 ≤(- 4 r(4)/r'(4)) Dη^2,
by p_t≤ 4 from Lemma <ref> and Assumption <ref> <ref>.
Hence, by induction, we have the desired bound as following: for any t≥ t_a,
|s_t - 1 - h(p_t) η|≤(- 4 r(4)/r'(4)) Dη^2 = 𝒪_δ, ϕ(η^2),
by p_t≤ 4 and Assumption <ref> <ref>.
§.§ Proof of Theorem <ref>
In this subsection, we prove Theorem <ref>.
We start by proving Lemma <ref> which provides a useful property of h defined in Theorem <ref>.
Consider the function h defined in Theorem <ref>, given by
h(p)
-ϕ(p)^2 r(p)/p r'(p) if p 0, and
-1/r”(0) if p=0.
Then, h is a positive, even, and bounded twice continuously differentiable function.
By Assumption <ref>, h is a positive, even function. Moreover, h is continuous since lim_p→ 0 h(p) = -1/r”(0) = h(0). A continuous function on a compact domain is bounded, so h is bounded on the closed interval [-1, 1]. Note that ϕ(p)^2 ≤ 1, and (-r(p)/pr'(p)) is a positive, decreasing function on p>0 by Assumption <ref> <ref>. Hence, h is bounded on [1, ∞). Since h is even, h is bounded on (-∞, 1]. Therefore, h is bounded on ℝ.
We finally prove that h is twice continuously differentiable. Since r is even and C^4 on ℝ, we can check that for any p ≠ 0,
h'(p) = -2r(p)^2/r'(p) - ϕ(p)^2/p + ϕ(p)^2 r(p) (r'(p) + pr”(p))/p^2 r'(p)^2,
and h'(0)=0.
Moreover, for any p ≠ 0,
h”(p) = - 6 r(p) + 2ϕ(p)^2/p^2 + ϕ(p)^2 r”(p)/pr'(p) + ϕ(p)^2r(p)r^(3)(p)/p r'(p)^2
+ 2r(p)^2 (r'(p)+2pr”(p))/pr'(p)^2 -ϕ(p)^2 r(p) (2r'(p) + pr”(p))/p^3 r'(p)^2 - ϕ(p)^2 r(p) r”(p) (r'(p) + 2pr”(p))/p^2 r'(p)^3,
and
h”(0) = - 1 - 2 ϕ^(3)(0)/3r”(0) + r^(4)(0)/3 r”(0)^2.
Since lim_p→ 0 h”(p) = h”(0), we can conclude that h is a twice continuously differentiable function.
Now we give the proof of Theorem <ref>, restated below for the sake of readability.
*
We first prove that there exists a time step t≥ 0 such that q_t > 1. Assume the contrary that q_t ≤ 1 for all t≥ 0. Let t_a be the time step obtained in Theorem <ref>. Then for any t≥ t_a, we have
r(p_t) = (1 - h(p_t)η + 𝒪_δ, ϕ(η^2)) q_t ≤ 1 - h(p_t) η/2,
for small step size η. The function g(p) := r(p) - 1 + h(p)η/2 is even, continuous, and has the function value g(0) = -η/(2r”(0)) > 0. Consequently, there exists a positive constant ϵ>0 such that g(p) > 0 for all p∈ (-ϵ, ϵ). Then, we have |p_t|≥ϵ for all t≥ t_a, since g(p_t) ≤ 0. This implies that for any t≥ t_a, it holds that
q_t+1/q_t = (1 - ηϕ(p_t)^2)^-2≥ (1 - ηϕ(ϵ)^2)^-2 > 1,
so q_t grows exponentially, which results in the existence of a time step t_b'≥ t_a such that q_t_b' > 1, a contradiction.
Therefore, there exists a time step t_b such that q_t_b≤ 1 and q_t > 1 for any t > t_b, i.e., q_t jumps across the value 1. This holds since the sequence (q_t) is monotonically increasing. For any t≤ t_b, we have q_t+1≤ q_t + 𝒪(η) by Lemma <ref>, and this implies that t_b ≥Ω ((1-q_0)η^-1), as desired.
Lastly, we prove the convergence of GD iterates (p_t, q_t). Let t > t_b be given. Then, q_t ≥ q_t_b+1 > 1 and it holds that
|p_t+1/p_t| = 2r(p_t)/q_t - 1 ≤2/q_t_b+1 - 1 < 1.
Hence, p_t is exponentially decreasing for t>t_b. Therefore, p_t converges to 0 as t→∞. Since the sequence (q_t)_t=0^∞ is monotonically increasing and bounded (due to Theorem <ref>), it converges. Suppose that (p_t, q_t) converges to the point (0, q^*). By Theorem <ref>, we can conclude that
| q^* - 1 + η/r”(0)| = 𝒪_δ,ϕ(η^2),
which is the desired bound.
§.§ Proof of Theorem <ref>
In this subsection, we prove Theorem <ref>. We first prove an important bound on the sharpness value, given in Proposition <ref> stated below.
For any (x,y)∈ℝ^2 with r(x)+xr'(x) ≥ 0, the following inequality holds:
(r(x) + xr'(x)) y^2 ≤λ_max (∇^2 ℒ(x,y)) ≤ (r(x) + xr'(x)) y^2 + 4x^2 r(x)/(r(x) + xr'(x)),
Let (x,y)∈ℝ^2 be given. The loss Hessian at (x,y) is
∇^2 ℒ (x,y) = [ (ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2 2ϕ(x)ϕ'(x) y; 2ϕ(x)ϕ'(x)y ϕ(x)^2 ].
Note that the largest absolute value of the eigenvalues of a symmetric matrix equals its spectral norm. Since the trace of the Hessian, tr(∇^2 ℒ (x,y)) = (ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2 + ϕ(x)^2 = (r(x) + xr'(x)) y^2 + ϕ(x)^2 ≥ 0, is non-negative, the spectral norm of the Hessian ∇^2 ℒ (x,y) equals its largest eigenvalue. Hence, we have
λ_max (∇^2 ℒ(x,y)) = ∇^2 ℒ(x,y)_2
≥‖∇^2 ℒ(x,y) ·[ 1; 0 ]‖_2
= [((ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2)^2 + (2ϕ(x)ϕ'(x) y)^2]^1/2
≥ (ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2
= (r(x) + xr'(x)) y^2,
which is the desired lower bound.
We also note that for any matrix A, the inequality ‖A‖_2 ≤‖A‖_F holds, where ‖·‖_F is the Frobenius norm. Hence, we have
λ_max (∇^2 ℒ(x,y)) = ∇^2 ℒ(x,y)_2
≤∇^2 ℒ(x,y)_F
= [ ((ϕ(x)ϕ”(x) + ϕ'(x)^2) y^2)^2 + 2(2ϕ(x)ϕ'(x) y)^2 + ϕ(x)^4 ]^1/2
= [ (r(x)+xr'(x))^2 y^4 + 8x^2r(x)^2 y^2 + ϕ(x)^4 ]^1/2
≤[ ( (r(x)+xr'(x))y^2 + 4x^2 r(x)/(r(x) + xr'(x)))^2 ]^1/2
= (r(x)+xr'(x))y^2 + 4x^2 r(x)/(r(x) + xr'(x)),
which is the desired upper bound.
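For a sanity check of the two-sided bound (our own addition), it can be evaluated numerically for ϕ = tanh at random points; the sampling range |x| ≤ 0.6 is chosen so that the hypothesis r(x)+xr'(x) ≥ 0 holds for this activation.

```python
# Numerical spot-check (ours) of the two-sided sharpness bound for phi = tanh.
# The range |x| <= 0.6 keeps r(x) + x r'(x) = phi(x)phi''(x) + phi'(x)^2 >= 0.
import numpy as np

def lam_max(x, y):
    t = np.tanh(x)
    phi, dphi, ddphi = t, 1.0 - t ** 2, -2.0 * t * (1.0 - t ** 2)
    H = np.array([[(phi * ddphi + dphi ** 2) * y ** 2, 2.0 * phi * dphi * y],
                  [2.0 * phi * dphi * y, phi ** 2]])
    return np.linalg.eigvalsh(H)[-1]

rng = np.random.default_rng(1)
for _ in range(5):
    x, y = rng.uniform(-0.6, 0.6), rng.uniform(-20.0, 20.0)
    t = np.tanh(x)
    r = t * (1.0 - t ** 2) / x                      # r(x); sampled x is never exactly 0
    r_plus_xrp = t * (-2.0 * t * (1.0 - t ** 2)) + (1.0 - t ** 2) ** 2   # r + x r'
    lower = r_plus_xrp * y ** 2
    upper = lower + 4.0 * x ** 2 * r / r_plus_xrp
    print(f"{lower:10.3f} <= {lam_max(x, y):10.3f} <= {upper:10.3f}")
```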
Now we give the proof of Theorem <ref>, restated below for the sake of readability.
*
By Theorem <ref> and since h is a bounded function by Lemma <ref>, we have s_t = 1+𝒪_ϕ(η) for any t≥ t_a. Note that r(p_t) = (1 + 𝒪_δ, ϕ (η)) q_t ≥ (1 + 𝒪_δ, ϕ (η)) q_0 ≥ c_0 ≥ r(z_0) for small step size η, where z_0 = sup{z r'(z)/r(z)≥ -1/2}. Consequently, it holds that 1 + 2p_t r'(p_t)/r(p_t)≥ 0, and this implies 1 + p_t r'(p_t)/r(p_t)≥1/2.
By Proposition <ref>, we can bound the sharpness λ_t λ_max (∇^2 ℒ(x_t,y_t)) by
(1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t≤λ_t ≤(1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t + 4p_t^2 (1 + p_t r'(p_t)/r(p_t))^-1.
By Lemma <ref>, p_t≤ 4 for any time step t. Moreover, since 1 + p_t r'(p_t)/r(p_t)≥1/2 holds for any t≥ t_a with small step size η, we have 4p_t^2 (1 + p_t r'(p_t)/r(p_t))^-1 = 𝒪_ϕ(1). Hence, for any t≥ t_a, it holds that
λ_t = (1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t + 𝒪_ϕ(1).
For any t≥ t_a, we have s_t = 1+𝒪_ϕ(η) and this implies s_t^-1-1 = 𝒪_ϕ(η) and r(p_t) - q_t =𝒪_ϕ(η).
For any 0<q<1, we have
d/dq(r̂(q)r'(r̂(q))/q) = r̂'(q) (r'(r̂(q)) + r̂(q) r”(r̂(q)))/q - r̂(q) r'(r̂(q))/q^2
= 1/q( 1 + r̂(q) r”(r̂(q))/r'(r̂(q))) - r̂(q) r'(r̂(q))/q^2,
so that
lim_q→ 1^-(d/dq(r̂(q)r'(r̂(q))/q)) = lim_p→ 0^+(1 + pr”(p)/r'(p)) = 2.
Therefore, d/dq(r̂(q)r'(r̂(q))/q) is bounded on [1/4, 1) and Taylor's theorem gives
|p_t r'(p_t)/r(p_t) - r̂(q_t) r'(r̂(q_t))/q_t|≤𝒪_ϕ( r(p_t) - q_t ) ≤𝒪_ϕ(η).
for any time step t with q_t < 1.
Hence, if q_t<1, we have the following bound:
|λ̃(q_t) - (1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t|≤| 1 - s_t^-1 + r̂(q_t) r'(r̂(q_t))/q_t - p_t r'(p_t)/r(p_t)|2/η + 𝒪_ϕ(1) = 𝒪_ϕ(1),
where we used p_t r'(p_t)/q_t = p_t r'(p_t)/r(p_t) ( 1 + 𝒪_ϕ(η)) = p_t r'(p_t)/r(p_t) + 𝒪_ϕ(η), since |p_t r'(p_t)|≤ r(p_t) for any t ≥ t_a.
Now let t be any given time step with q_t ≥ 1. Then, r(p_t) = 1 - 𝒪_ϕ(η), and since r(z) = 1+r”(0)z^2 + 𝒪_ϕ(z^4) for small z, we have p_t = 𝒪_ϕ(√(η)). Hence,
|λ̃(q_t) - (1 + p_t r'(p_t)/r(p_t)) 2r(p_t)/η q_t|≤| 1 - s_t^-1 - p_t r'(p_t)/r(p_t)|2/η + 𝒪_ϕ(1)= 𝒪_ϕ(1),
for any t with q_t≥ 1, where we again used p_t r'(p_t)/q_t = p_t r'(p_t)/r(p_t) ( 1 + 𝒪_ϕ(η)) = p_t r'(p_t)/r(p_t) + 𝒪_ϕ(η).
By Eqs. (<ref>), (<ref>), and (<ref>), we can conclude that for any t≥ t_a, we have
|λ_t - λ̃(q_t) |≤𝒪_ϕ(1),
the desired bound.
Finally, we can easily check that the sequence (λ̃(q_t))_t=0^∞ is monotonically increasing, since z↦zr'(z)/r(z) is a decreasing function by Assumption <ref> <ref> and the sequence (q_t) is monotonically increasing.
|
http://arxiv.org/abs/2307.06280v1 | 20230712162126 | Can the orbital distribution of Neptune's 3:2 mean motion resonance result from stability sculpting? | [
"Sricharan Balaji",
"Nihaal Zaveri",
"Nanae Hayashi",
"Arcelia Hermosillo Ruiz",
"Jackson Barnes",
"Ruth Murray-Clay",
"Kathryn Volk",
"Jake Gerhardt",
"Zain Syed"
] | astro-ph.EP | [
"astro-ph.EP"
] |
We explore a simplified model of the outcome of an early outer Solar System gravitational upheaval during which objects were captured into Neptune's 3:2 mean motion resonance via scattering rather than smooth planetary migration.
We use N-body simulations containing the Sun, the four giant planets, and test particles in the 3:2 resonance to determine whether long-term stability sculpting over 4.5 Gyr can reproduce the observed 3:2 resonant population from an initially randomly scattered 3:2 population.
After passing our simulated 3:2 resonant objects through a survey simulator, we find that the semimajor axis (a) and eccentricity (e) distributions are consistent with the observational data (assuming an absolute magnitude distribution constrained by prior studies), suggesting that these could be a result of stability sculpting. However, the inclination (i) distribution cannot be produced by stability sculpting and thus must result from a distinct process that excited the inclinations. Our simulations modestly under-predict the number of objects with high libration amplitudes (A_ϕ), possibly because we do not model transient sticking. Finally, our model under-populates the Kozai subresonance compared to both observations and to smooth migration models. Future work is needed to determine whether smooth migration occurring as Neptune's eccentricity damped to its current value can resolve this discrepancy.
Kuiper Belt, Trans-Neptunian Objects, Planetary Instability, N-body
§ INTRODUCTION
The dynamical structure of small bodies in the Solar System's trans-Neptunian region indicates that the system's ice giants formed closer to the Sun than they orbit today. In particular, the large population of trans-Neptunian objects (TNOs) detected in mean motion resonances with Neptune suggests that early in its lifetime, Neptune either migrated outward from a closer-in orbit due to angular momentum transfer with nearby planetesimal debris or was dynamically scattered due to interactions with the other giant planets (or both; for reviews see, e.g., ).
Recent results from well-characterized surveys of the trans-Neptunian region have enabled direct comparisons between these models and the distribution of observed resonant orbits.
In this paper, we investigate whether the observed orbital distribution of TNOs in the 3:2 mean motion resonance (MMR) with Neptune is consistent with the class of models in which Neptune is dynamically scattered. To do so, we test whether this population can be produced by an initially scattered population of TNOs for which no preferential resonance capture has occurred, which is then sculpted over the age of the Solar System as unstable objects are lost. We refer to this process as “stability sculpting."
The nature of the Solar System's early dynamical evolution is still uncertain, but two end-member models are often discussed: gravitational upheaval and smooth migration.
Both have a similar pre-evolution state, with all of the giant planets on nearly-circular, co-planar orbits
with semi-major axes interior to Neptune's current orbit and an initial massive planetesimal disk extending from the giant planet region to roughly 34 au
(see, e.g. ; though at least some low-mass portion of the disk also extended out to include the current cold classical population at ∼45 au as discussed in, e.g., ).
The two models differ in their implications for how Neptune's exterior mean motion resonances are filled.
In the most violent upheaval models, the giant planets have direct gravitational interactions that scatter Neptune nearly directly to its current location (see, e.g., ; see also reviews by ).
In this type of scenario, most of the planetesimals are strongly scattered with some landing at random in the final locations of Neptune's mean motion resonances <cit.>.
Smooth migration models are characterized by a slower, gradual outward migration of the planets, during which planetesimals are captured into resonant orbits as the locations of the resonances sweep past them <cit.>.
In gravitational upheaval models, the ice giants exhibit chaotic orbital evolution, meaning that their final orbits are not easily controlled in N-body simulations. It is thus computationally challenging to perform pure upheaval simulations suitable for high fidelity comparisons with observations of resonant TNOs. Our aim in this paper is to sidestep this challenge by testing a generalized model of the outcome of a gravitational upheaval scenario, including long-term sculpting by dynamical instabilities.
We assume a simplified scenario where gravitational perturbations in the early Solar System scattered or “kicked" trans-Neptunian planetesimals onto various orbits beyond Neptune's current semi-major axis.
The giant planets simultaneously undergo strong mutual perturbations, including scattering events, that cause them to spread out.
Once the giant planets arrive at and settle into their current, stable orbits, some of those scattered planetesimals will remain in stable/meta-stable orbits.
These remaining TNOs are categorized into different dynamical sub-populations (see, e.g., ).
To test a simplified model of a giant planet dynamical upheaval, here we focus on the dynamical evolution of the 3:2 MMR population, which is located at a semimajor axis a=39.4 au. Our reason for focusing on this population stems from two key points:
* There is a significant characterized observational sample of the 3:2 MMR population from multiple well-characterized surveys <cit.>.
The Outer Solar System Origins Survey ensemble (OSSOS+) is a compilation of these surveys that contains field pointings, field depths, and tracking fractions at different magnitudes and on-sky rates of motion that can be combined with the OSSOS survey simulator to provide robust comparisons between models and observations (see, e.g., ).
* The 3:2 MMR population is also an ideal population to study long-term stability due to the fact that it is a strong first-order resonance.
The resonance hosts enough stable phase space that different emplacement mechanisms may have populated the resonance in observationally distinguishable ways.
Our work uses a simplified model of the outcome of a planetary upheaval scenario rather than direct simulations of the giant planets' early evolution to avoid the numerical complications presented by including the strong planet-planet interactions that occur during the actual epoch of planetary migration/upheaval.
highlights the difficulty in producing reasonable statistics for the final distributions of outwardly scattered planetesimals in smooth migration simulations.
Even without planet-planet close encounters, the interactions between planets during migration introduce significant randomness to the planet outcomes; coupled with the very low efficiency at which test particles land on even meta-stable orbits in regions of interest such as the present-day 3:2 resonance, it becomes computationally challenging to produce statistically meaningful resonant populations.
When even stronger planet-planet interactions are introduced, the numerical challenges in finding simulation initial conditions that result in well-behaved final giant planet orbits and then integrating them with enough test particles to result in a sufficiently large final 3:2 population are dramatically magnified. We discuss this further in Section <ref>.
No two simulations of giant planet instabilities are exactly alike, and the precise distribution of scattered planetesimals that remain at the end of the scattering epoch may be affected by mean motion and secular resonances.
However, scattered planetesimals are typically roughly evenly distributed along trajectories with pericenters in the scattering region. We therefore consider a population of objects that “fills phase space" for different ranges of perihelion distances in the 3:2 MMR with Neptune as an approximation of the outcome of an epoch of scattering (see Section <ref>).
We perform N-body simulations on a 4.5 Gyr timescale to allow the resonant phase space to be sculpted by long-term stability.
We can then test this modeled population against the observed 3:2 resonant population by subjecting our model to the OSSOS+ ensemble biases and comparing the simulated detections to the real ones across a variety of parameters (e.g., eccentricity e, inclination i, and resonant libration parameters).
Section <ref> presents our model and simulation setup along with the resulting distribution of resonant objects over time.
Section <ref> provides a description of how the simulation is passed into the OSSOS survey simulator to produce simulated detected objects.
We discuss the validity and accuracy of our model in Section <ref> and summarize in Section <ref>.
§ SIMULATIONS
We conduct an N-Body simulation using the Python package rebound <cit.> with the integrator <cit.> to mimic the evolution of the 3:2 MMR population.
The solar system's four giant planets are initialized with their current orbital elements and the
TNOs are treated as massless test particles.
We verify that TNOs that undergo close encounters with the giant planets are quickly lost from our region of interest, justifying our choice of integrator.
To generate a sample for comparison with observational data, we fill phase space in the vicinity of the 3:2 MMR with randomly-generated test particles with uniformly-drawn pericenter distances, q, and semi-major axes, a, and then integrate for 4.5 Gyr.
The non-resonant and thus less stable particles are “shaved" away over time, just leaving the stable 3:2 resonant particles. This is similar to, for example, the work of <cit.> who used long-term integrations to show how the 3:2 resonant population evolves over time for a different initial population.
Scattering outcomes show that over the limited semi-major axis range we consider, particles are distributed roughly evenly in a and q. The particles lay along lines of constant pericenter corresponding to the region in which scattering occurs (similar assumptions were made in, e.g., the model for the post-instability populations), thus influencing our initial conditions. Dynamical upheaval simulations typically end with at least a brief phase of low-eccentricity, residual migration of Neptune <cit.>, which may generate additional features in the 3:2 MMR population. We comment on this possibility in Sections <ref> and <ref>.
§.§ Model Overview
To construct the initial state of our simulations, we assume planetesimals are scattered outward at some early epoch, and that Neptune itself is then scattered outward and damped to its current orbit on a timescale fast enough that it can be treated (from the perspective of the previously scattered planetesimals in what is now the region of the 3:2 resonance) as “appearing" at its current orbit with a semi-major axis of a=30.1 au. Thus, at the end of the planetary upheaval, the 3:2 resonance is essentially laid on top of a previously scattered population of planetesimals whose perihelia are at random phases relative to Neptune; this has the effect of more or less randomly filling the libration phase space of the resonance over a range of eccentricities set by the earlier scattering processes. See Figure <ref> for a schematic describing the assumed initial scattering.
Present-day Neptune can scatter objects with perihelia ≲38 au (see, e.g., discussion in ), and non-resonant objects with q ≲ 33 au are scattered on very short timescales (see, e.g., ). During the scattering epoch, Neptune's exact semi-major axis and eccentricity are not known. For example, if Neptune had a semi-major axis of 28 au and an eccentricity of 0.2 at some point in its evolution, its apocenter was at 33.6 au, and it could scatter objects with pericenters a few au more distant on short timescales. To encompass this uncertainty within our model, we consider initial populations for which particle pericenters extend to maximum values between 33 and 38 au. Rather than running multiple simulations, we analyse different subsets of our initial particle distribution, with each subset representing a different outcome of the epoch of planet scattering. Figure <ref> illustrates this choice through a free parameter in perihelion distance (initial population limit), which we vary until we match observations. By finding the initial perihelion distance that provides a best fit with the data, we find a potential limit to the disk region Neptune was able to scatter during any high-eccentricity phases it might have experienced.
§.§ Approach Validation
As a proof of concept that the simplified distribution illustrated in Figure <ref> is reasonable, we performed a very limited-scope direct simulation of a planetary upheaval scenario using the mercurius integrator within REBOUND. Similar in philosophy to the hybrid orbital integrator used by Mercury <cit.>, mercurius combines the whfast and ias15 <cit.> integrators in order to follow massive bodies through mutual close-encounters.
We used planetary initial conditions similar to those in <cit.> and allowed the giant planets to perturb each other and a disk of massless test particles.
We tracked the system for 10 Myr until Neptune was scattered outward to nearly its present-day semimajor axis and the planets' orbits stabilized.
We then examined the distribution of outwardly scattered test particles in the vicinity of the simulated Neptune's 3:2 MMR, which is shown in Figure <ref>.
We find that the test particles are distributed reasonably similarly to our assumed distribution described above.
We note that even this short, simplified simulation (we have ignored, for example, the effects of the massive planetesimal disk) required a significant amount of trial and error and hand-tuning to produce. It would require significantly more fine-tuning to produce a final Neptune orbit that acceptably matches present-day Neptune, and simulating enough test particles to fill the 3:2 resonant region is beyond our computational capabilities; this highlights why we strongly prefer our simplified approach to studying a reasonable post-upheaval distribution.
§.§ Initial conditions and resonances
Our model consists of the Sun, the four giant planets (Jupiter, Saturn, Uranus, and Neptune) and 10270 test particles that represent TNOs.
The giant planets are given their initial spatial parameters from NASA's JPL Horizons Ephemeris site <cit.>.[Planet initial conditions were downloaded with Julian date 2458970.5 from <https://ssd.jpl.nasa.gov/horizons.cgi>]
The test particles' longitudes of ascending node (Ω), arguments of pericenter (ω), and mean anomalies (M) were randomly chosen from their full possible range, while the ranges for semi-major axis (a), pericenter distance (q), and inclination (i) were determined through pilot simulations (See Table <ref>).
We chose the initial range of semi-major axes to be centered around the exact resonant orbit with a wide enough range to yield a small padding of non-resonant particles on either side (see Figure <ref>).
In a series of pilot simulations with the initial eccentricity range set from 0-1, we found no resonant particles with eccentricity above 0.6 on a 1 Gyr timescale.
We therefore restrict our eccentricity range for our long simulations to e<0.6 for computational efficiency.
In simulations run for 1 billion years with either a uniform e distribution or a uniform pericenter distance (q = a(1-e)) distribution, there was no notable difference between the respective time-evolved distributions in semi-major axis-eccentricity space, most likely because of the limited a range (plots not shown). Therefore, we use a uniform q distribution to generate the initial eccentricity range, given our assumption that Neptune (and possibly other giant planets) kicked the planetesimals outward prior to the start of our simulations, suggesting that the objects' pericenters should be in the scattering region.
Our pilot simulations also demonstrated that the inclination distribution of TNOs in the 3:2 MMR evolves only modestly over the lifetime of the simulation for inclinations ranging from i=0-90^∘ (consistent with the earlier finding that stability in the 3:2 resonance is not strongly affected by orbital inclination).
We thus assume that the emplacement mechanism, or evolution prior to emplacement, must set the current inclination distribution of the 3:2 resonance and that our initial conditions for i must be similar to the current distribution (see for an in-depth discussion).
The initial inclination values for our test particles are randomly sampled from the differential inclination distribution modeled as sin i times a Gaussian <cit.>.
When our modeled inclination distribution is compared to the observed one, the best match is a Gaussian of width σ_i = 14^∘, which is the best-fit value found for the 3:2 MMR in <cit.>.
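The sampling and integration scheme described in this subsection can be sketched with REBOUND's Python interface. The snippet below is our own illustration, not the authors' production code: the integrator choice ("whfast"), the random seed, and the exact sampling ranges are placeholder assumptions standing in for the values listed in the table referenced above, and adding planets by name queries JPL Horizons, which requires network access.

import numpy as np
import rebound

rng = np.random.default_rng(42)

sim = rebound.Simulation()
sim.units = ("yr", "AU", "Msun")
for body in ["Sun", "Jupiter", "Saturn", "Uranus", "Neptune"]:
    sim.add(body)                # pulls current-epoch elements from JPL Horizons
sim.N_active = 5                 # everything added below is a massless test particle
sim.integrator = "whfast"        # assumed symplectic integrator (name not given in the text)
sim.dt = 0.2                     # internal timestep in years, as quoted in the text

def sample_inclination(sigma_deg=14.0):
    # rejection-sample i from a sin(i) x Gaussian differential distribution
    sigma = np.radians(sigma_deg)
    while True:
        i = rng.uniform(0.0, 0.5 * np.pi)
        if rng.uniform() < np.sin(i) * np.exp(-0.5 * (i / sigma) ** 2):
            return i

for _ in range(65):              # 65 test particles per sub-simulation
    a = rng.uniform(38.8, 40.0)  # placeholder a range bracketing the 3:2 MMR at 39.4 au
    q = rng.uniform(0.4 * a, a)  # uniform pericenter distance; the lower bound is a placeholder
    sim.add(a=a, e=1.0 - q / a, inc=sample_inclination(),
            Omega=rng.uniform(0.0, 2.0 * np.pi),
            omega=rng.uniform(0.0, 2.0 * np.pi),
            M=rng.uniform(0.0, 2.0 * np.pi),
            primary=sim.particles[0])

sim.move_to_com()
sim.integrate(1.0e6)             # short test run; the production integrations reach 4.5 Gyr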
To identify particles in the 3:2 MMR, we examine the time evolution of the particles' resonant argument, ϕ, which is given by:
ϕ = 3λ_tno-2λ_N-ϖ_tno,
where λ_tno and λ_N are the mean longitudes of the TNO and Neptune, and ϖ_tno is the TNO's longitude of pericenter.
The ϕ value of a particle in the 3:2 resonance librates around a central value of π with a half-amplitude less than π.
For particles that librate within the 3:2 resonance, we also check if they are in the Kozai subresonance (sometimes also referred to as the Kozai-Lidov resonance; see, e.g., for a discussion of this subresonance within the 3:2 MMR).
The Kozai resonance within the 3:2 resonance refers to the libration of an object's argument of pericenter, ω; this corresponds physically to the location of pericenter librating around a fixed point relative to where the orbit intersects the ecliptic plane. For the 3:2 resonant particles in Kozai, ω typically librates around a central value of either about π/2 or about 3π/2.
§.§ Simulation Setup
Our integration has a total of 10270 test particles integrated for 4.5 Gyrs along with the four giant planets.
In an effort to be more time-efficient, we ran 158 separate simulations, each with the sun, the giant planets, and 65 test particles.
We confirmed that the giant planets evolved identically in each simulation. Resonance libration in the 3:2 MMR occurs on 10^4–10^5-year timescales, and Kozai libration occurs on 10^6–10^7-year timescales.
Running a 4.5 Gyr integration with thousands of test particles with frequent enough outputs to identify resonance libration generates too much data to be feasible.
To make our simulations as time and resource efficient as possible, we split the integration into three parts: first, a 4.5 Gyr integration that saves snapshots at times of interest; second, a 10^5 yr integration used to determine which particles are in the 3:2 MMR at each snapshot in time; and third, a 50 Myr integration used to determine membership in the Kozai subresonance. We set rebound's internal timestep to 0.2 years, which is small enough to ensure accuracy for our simulation.
We use a symplectic integrator, which provides a necessary increase in accuracy by averaging the total energy error at the end of the simulation and minimizes the propagation of error <cit.>.
The first integration runs for 4.5 Gyr and takes “snapshots" of the state of the simulation at 0 years, 1 Myr, 10 Myr, 0.1 Gyr, 1 Gyr, and 4.5 Gyr. Starting from each snapshot, we use a second high-resolution 10^5 year integration to identify resonant particles as those whose resonant argument, ϕ, is confined to remain within the range ϕ=5-355^∘ over the typical resonant timescale.
We can also measure the “tightness" of the resonance by finding the object's libration amplitude (A_ϕ), which is defined as the half-width of the range of ϕ.
Operationally, A_ϕ is found by taking the difference between the maximum and minimum values of ϕ over 10^5 years and dividing by 2.
Since the libration timescale for the Kozai subresonance is significantly longer, we run a third set of integrations starting from the 0.1 Gyr, 1 Gyr, and 4.5 Gyr snapshots that run for 50 Myr and output at sufficient resolution to check for Kozai resonance.
We consider a 3:2 resonant particle to also be in the Kozai resonance if the object's ω librates within either ω=5-175^∘ or ω=185-355^∘.
Kozai objects can librate outside of these ranges, but the above cuts provide a simplified, uniform check that identifies most of the Kozai particles (see Section <ref> for more details).
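As an illustration of these diagnostics, a hedged sketch of the libration checks and the A_ϕ measurement (our own, assuming a REBOUND simulation in which Neptune is particle 4 and using the default Jacobi orbital elements) is given below.

import numpy as np

def angle_history(sim, i_tno, i_nep=4, t_span=1.0e5, n_out=5000):
    # record phi = 3*lambda_TNO - 2*lambda_Nep - pomega_TNO and omega_TNO (degrees)
    # over t_span years, starting from the simulation's current time
    phi = np.empty(n_out)
    omega = np.empty(n_out)
    for k, t in enumerate(sim.t + np.linspace(0.0, t_span, n_out)):
        sim.integrate(t)
        tno, nep = sim.particles[i_tno], sim.particles[i_nep]
        phi[k] = (3.0 * tno.l - 2.0 * nep.l - tno.pomega) % (2.0 * np.pi)
        omega[k] = tno.omega % (2.0 * np.pi)
    return np.degrees(phi), np.degrees(omega)

def is_32_resonant(phi_deg):
    # phi confined to 5-355 degrees over the whole window
    return np.all((phi_deg > 5.0) & (phi_deg < 355.0))

def libration_amplitude(phi_deg):
    # A_phi: half the difference between the maximum and minimum values of phi
    return 0.5 * (phi_deg.max() - phi_deg.min())

def is_kozai(omega_deg):
    # omega librating about ~90 deg or ~270 deg
    return (np.all((omega_deg > 5.0) & (omega_deg < 175.0)) or
            np.all((omega_deg > 185.0) & (omega_deg < 355.0)))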
§.§ Simulation Results
The simulation effectively “sculpts" the 3:2 resonant population over a 4.5 Gyr period.
Figure <ref> shows the eccentricity vs. semimajor axis evolution of our simulated particles; the less stable particles scatter away over time, while the most stable favor lower eccentricities and are tightly packed at the center of the resonance.
Most of the non-resonant particles are lost on relatively short timescales, and on longer timescales resonant particles with perihelia near Uranus (q≈19 au) are lost as well because they are not phase-protected from that planet.
At 4.5 Gyr, a small, non-resonant classical population remains on either side of the 3:2 MMR; this population is further discussed in Section <ref>.
The distribution of particles in semi-major axis/inclination space is displayed in Figure <ref>. As expected, particles at the edge of the resonance are shaved over time, but the distribution of inclinations remains similar.
As in our pilot simulations, we find no substantial correlation between the inclination and the stability of the particles in the resonance. A more in-depth discussion on the Plutino inclination distribution can be found in <cit.>, <cit.>, and <cit.>.
The diagonal gaps apparent in the non-resonant particles on either side of the 3:2 MMR in Figure <ref> likely result from a secular resonance that destabilizes particles at particular inclinations, as detailed in previous work.
Within the resonant population we are also interested in analyzing how the Kozai subresonance evolves over time.
At 0.1, 1, and 4.5 Gyr, the numbers of Kozai/resonant particles were 73/1698, 76/870, 64/556, respectively.
While the number of resonant particles decreases significantly over time, the number of Kozai particles remains more constant.
The stable Kozai particles have eccentricities e≈0.25 and their inclinations are distributed up to i∼45^∘.
Figure <ref> shows the libration amplitude vs. eccentricity for the Kozai and non-Kozai particles.
In general, resonant particles with higher libration amplitudes are preferentially lost over time.
These objects are less stable because their resonant argument, ϕ, deviates more from the central value π, allowing them to approach Neptune more closely when they come to perihelion.
As illustrated in Figure <ref>, Kozai particles tend to have moderate-to-low libration amplitudes in the 3:2 MMR.
The lower 3:2 resonant libration amplitudes of Kozai objects likely contribute to their stability, in addition to the libration of ω, which keeps the Kozai particles' perihelion locations away from the plane of the planets.
§ OSSOS+ AND SURVEY SIMULATOR
To accurately compare our simulated 3:2 resonant population to the current observed population, we must account for observational biases.
Such biases are discussed extensively elsewhere <cit.>, but we review them briefly here.
TNOs are detected by reflected sunlight, so detections are strongly biased against smaller objects and objects farther from the Sun; TNOs at perihelion are much more likely to be detected than those at aphelion, and large TNOs are more likely to be detected than small ones.
For objects in mean motion resonances, the resonant dynamics controls where objects come to perihelion relative to Neptune's position: KBOs in the 3:2 resonance come to perihelion preferentially ±90^∘ from Neptune.
This means that where observations occurred relative to Neptune will strongly influence the detectability of resonant objects (see for a thorough discussion of this).
Thus, accounting for observational biases in any given survey requires knowledge of the pointing history and well-determined limiting magnitudes for those pointings.
We compare our simulated 3:2 resonant population to the well-characterized sample of observed 3:2 resonant TNOs from several well-characterized surveys.
We include 3:2 resonant objects from the A, E, L, and H observational blocks of the Outer Solar System Origins Survey (OSSOS) <cit.>, as well as the 3:2 resonant objects from the Canada France Ecliptic Plane Survey (CFEPS) described by <cit.>, <cit.>; together these surveys comprise the OSSOS+ 3:2 resonance sample.
The use of these detections to model TNO populations is described in, e.g., <cit.>, among other works.
In this section we describe how we use the OSSOS+ survey simulator (described in Section <ref>) to subject our simulated 3:2 resonant population to the same biases as the OSSOS+ observed 3:2 resonant population.
In Section <ref> we describe how we select and transform the orbital elements from our simulations to match them to a specific epoch near those of the OSSOS+ observations.
In Section <ref>, we describe how we then assign an H_r magnitude to each set of orbital parameters (as all objects in our simulation are test particles, this part of the distribution is set based on prior studies).
§.§ Survey Simulator
The OSSOS survey simulator software[https://github.com/OSSOS/SurveySimulator] is described in detail by <cit.> and <cit.>.
It is designed to take as input a TNO population model and output a list of simulated detections by subjecting that model to the observational biases of OSSOS and associated surveys (the OSSOS+ sample).
These biases include the surveys' on-sky pointing histories, detection efficiency as a function of brightness and rate of motion, and the tracking/recovery efficiency for detected objects.
We feed the survey simulator a list of model TNOs, including their orbital elements at a specific epoch and their absolute magnitudes in r-band (H_r).
These parameters fully describe the position and velocity of the model TNOs at a specific epoch from which the survey simulator can propagate them to all of the included observational epochs and, with H_r, determine their apparent magnitudes at these times.
This full model of the 3:2 resonant population is run through the survey simulator to produce a large set of synthetic detections, i.e., what OSSOS+ would have observed if our model was representative of the true current 3:2 resonant population.
§.§ Rotation
The final locations of the giant planets in the simulations will not exactly match the locations of the planets at the epochs of the observations, so we must account for this when comparing to the observations.
This mismatch is not a problem during the orbital integrations because long-term dynamical stability depends on the average behavior of the planets over time rather than the specifics of the current epoch.
However, we must correct for this difference when simulating detections because resonant objects are most detectable on-sky at specific longitudes relative to Neptune; it is thus necessary to rotate our simulation results to place the simulated Neptune near Neptune's current position to ensure that simulated resonant populations are oriented appropriately.
To do this, we calculate the polar angle of Neptune's final location projected into the ecliptic plane, θ≡tan^-1(y/x), where x and y are Cartesian coordinates in the ecliptic plane and x̂ is the reference direction. We then rotate every test particle's longitude of ascending node, Ω, at the final timestep by the difference in Neptune's θ at the end of the integration and its θ from JPL Horizons at a reference epoch near the present.[We chose JD 2458970.5]
This results in solid-body rotation of the entire system about the vertical (z) axis located at the barycenter of the solar system.
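In code, this solid-body rotation amounts to a single offset applied to each particle's node longitude; a minimal sketch (our own; the present-day Neptune angle from JPL Horizons is an assumed input) is:

import numpy as np

def rotate_nodes(Omega_particles, x_nep_sim, y_nep_sim, theta_nep_now):
    # polar angle of the simulated Neptune, projected into the ecliptic plane
    theta_sim = np.arctan2(y_nep_sim, x_nep_sim)
    # shift every node longitude by the offset between real and simulated Neptune
    return np.mod(Omega_particles + (theta_nep_now - theta_sim), 2.0 * np.pi)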
§.§ Cloning, Color distribution, and H-magnitudes
The number of 3:2 resonant particles in our simulation at any single snapshot in time is far fewer than the number needed for the survey simulator to produce a large enough sample of synthetic detections to robustly compare with OSSOS+ data.
After 4.5 Gyr, 556 particles remain in the 3:2 resonance in our simulation.
While this number is sufficient to map the phase space of the resonance well if all particles are considered, at any given snapshot in time, many particles will be un-observable.
A typical 3:2 resonant object is small and only visible near the pericenter of its orbit—near apocenter, it is too distant from the Sun and thus too faint to be seen.
We thus “clone" each test particle to sample a large range of phases along its orbit.
We take the orbital parameters of each particle at each timestep in the short 10^5-year integration (started at either 1 or 4.5 Gyr, depending on the comparison being made) and treat it as a new particle, essentially “cloning" the actual test particle into 1000 pseudo-particles.
Having 1000 clones of each resonant particle ensures that we have enough simulated detections from the OSSOS Survey Simulator to have reliable statistics when we compare our models to the OSSOS+ observations.
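A brief sketch of this cloning step (our own illustration, assuming the REBOUND simulation and particle indexing of the earlier snippets) is:

import numpy as np

def clone_particle(sim, i_tno, n_clones=1000, t_span=1.0e5):
    # treat each output of the short high-cadence integration as a pseudo-particle
    rows = []
    for t in sim.t + np.linspace(0.0, t_span, n_clones):
        sim.integrate(t)
        p = sim.particles[i_tno]
        rows.append((p.a, p.e, p.inc, p.Omega, p.omega, p.M))
    return np.array(rows)   # one (a, e, i, Omega, omega, M) row per clone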
To forward-bias our models with the OSSOS Survey Simulator, several things are required: positional information for each object in the model, an H_r magnitude for each object in the model, a color distribution, and an epoch.
Our rebound simulations give us the positional information we need in the form of the six orbital elements: a, e, i, Ω, ω, and M.
We add an H_r magnitude to each object, a color distribution (to account for the fact that some of the OSSOS+ 3:2 objects were discovered in different filters), and an epoch to the output of the simulation before running the particles through the OSSOS Survey Simulator.
For the H_r magnitude, we use a broken power law size distribution derived from a modified version of Equation 4 from <cit.>.
A broken power law in size corresponds to two exponentials in absolute magnitude H_r affixed at a specified break magnitude.
Our choice of distribution is displayed in Figure <ref>.
The distribution is normalized by specifying the cumulative fraction of objects over the full modeled H_r range that are below the break magnitude.
We choose a bright-end slope of 0.9 based on previous modeling of the OSSOS 3:2 resonant population <cit.>.
We tested a range of values drawn from literature constraints <cit.> for the break magnitude and faint-end slope. We choose a break magnitude of H_r=8.5, a break fraction of 0.2, and a faint end slope of 0.4, which provide a good match for the observed eccentricity distribution (see Figure <ref> in Section <ref>.)
Each object in the simulation output is attributed a random H_r sampled from this distribution.
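For concreteness, absolute magnitudes can be drawn from such a broken (knee) distribution by inverse-CDF sampling within each exponential segment. The sketch below is ours, not the authors' code; the overall H_r limits (4.0 and 12.0) are placeholders, since only the slopes (0.9 and 0.4), the break magnitude (H_r=8.5), and the break fraction (0.2) are quoted above.

import numpy as np

rng = np.random.default_rng(0)

def sample_segment(alpha, h1, h2, n):
    # inverse-CDF draw from dN/dH proportional to 10**(alpha*H) on [h1, h2]
    u = rng.uniform(size=n)
    lo, hi = 10.0 ** (alpha * h1), 10.0 ** (alpha * h2)
    return np.log10(lo + u * (hi - lo)) / alpha

def sample_Hr(n, h_min=4.0, h_break=8.5, h_max=12.0,
              alpha_bright=0.9, alpha_faint=0.4, bright_fraction=0.2):
    # bright_fraction of the objects lie brightward of the break magnitude
    n_bright = rng.binomial(n, bright_fraction)
    bright = sample_segment(alpha_bright, h_min, h_break, n_bright)
    faint = sample_segment(alpha_faint, h_break, h_max, n - n_bright)
    return np.concatenate([bright, faint])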
For the color distribution, we use the same approach as in the CFEPS L7 model <cit.>, with a few modifications.
The color distribution used by <cit.> works by assigning the H_r magnitude as the magnitude in a specified color band to be used as a reference.
For their distribution, <cit.> chose the g-band to be the color used when specifying the H_r magnitude.
The magnitudes in other bands were calculated from shifting up or down from the g-band.
We use this same distribution for our models, but we use the r-band as the reference band since the OSSOS observations were done in the r-band and dominate the sample we are comparing to <cit.>.
We define the g-r color to be 0.65 based on recent observations <cit.>.
We do not change any of the other conversions from <cit.>, as the g-band and r-band were the only two filters used for discovery in the OSSOS+ ensemble <cit.>.
§ STATISTICAL COMPARISONS
To test the rejectability of our models, we compare our forward biased models to the OSSOS+ detections by performing the two sample Kolmogorov-Smirnov (KS) test and Anderson-Darling (AD) on the distributions of a, e, i, H_r, ϕ, and A_ϕ.
We also utilize the Kuiper variant of the KS test specifically when looking at ϕ, since it is better suited to comparing distributions of cyclic angular quantities.
The null hypothesis, H_0, of each test is the same: the two distributions being compared could have been drawn from the same parent distribution.
Though the KS, AD, and Kuiper-KS tests are simple 1D statistics that can only test for rejectability, not goodness of fit, they are frequently used for comparisons of populations in the trans-Neptunian region because the complicated phase space of orbits renders more detailed statistical analysis computationally prohibitive unless one is restricted to a small region of phase space <cit.>. While we compare the distributions of the six mentioned values, we are not aiming to explain the origin of the inclination or magnitude distributions. We assume the inclination distribution is formed before Neptune reaches its final semi-major axis of a=30.1 au and the magnitude distribution is set by formation processes not discussed in this paper.
We begin by calculating a test statistic unique to each of the three tests.
The KS test statistic, D_KS, is defined to be the maximum vertical distance between the cumulative distribution functions (CDFs) of the two distributions being compared; for the Kuiper variant[Based on NIST handbook: <https://www.itl.nist.gov/div898/handbook/eda/section3/eda35e.htm>], D_Kuiper is defined to be the sum of the maximum and minimum vertical distances between the CDFs.
The AD test statistic, D_AD, is similar to D_KS, but gives more weight to differences towards the tails of the distribution, while the KS test is dominated by differences in the middle of the distribution (because the CDFs for each distribution are forced to be 0 and 1 at either end of the distribution).
For both D_KS and D_AD, we use the functions built into the SciPy Python package to calculate the test statistics.
After calculating the test statistic, we use a Monte Carlo sampling method to calculate a p-value for the result; our p-value is defined as the fraction of N synthetic test statistics generated by comparing the model to itself that were greater than the calculated test statistic when comparing the model to the observations.
The rejectability of H_0 is 1-p.
We place a 95% confidence limit on our p-values, meaning we reject H_0 if p<0.05.
There are 85 observed 3:2 resonant objects in the OSSOS+ survey. As such, we randomly select 85 objects from our forward biased 3:2 resonant model and calculate the test statistic between this random sample and the full forward biased 3:2 resonant model. This process is repeated N times to yield N test statistics.
To obtain consistent p-values using this method, we find that at least 100,000 random draws are needed.
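A schematic implementation of this bootstrapped p-value, using the KS statistic as the example and SciPy's two-sample test, could look like the following. This is our own sketch, not the authors' code; model is assumed to be the full array of synthetic detections of one quantity and observed the corresponding 85 OSSOS+ values.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mc_pvalue(model, observed, n_draws=100_000):
    # test statistic between the observations and the full forward-biased model
    d_obs = stats.ks_2samp(observed, model).statistic
    n_obj = len(observed)                    # 85 for the OSSOS+ 3:2 sample
    d_synth = np.empty(n_draws)
    for k in range(n_draws):
        # same-size random subsample of the model, compared against the full model
        subsample = rng.choice(model, size=n_obj, replace=False)
        d_synth[k] = stats.ks_2samp(subsample, model).statistic
    return np.mean(d_synth > d_obs)          # H0 is rejected when this p-value < 0.05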
§.§ Our model vs OSSOS+
Recalling that the null hypothesis we are testing for is that the OSSOS+ sample and our forward-biased 3:2 MMR model could have come from the same distribution, we perform the analysis described above for the parameters a, e, i, H_r, ϕ, and A_ϕ, at both 1 Gyr and 4.5 Gyrs (see Figure <ref>).
When we feed our full model of the 3:2 population through the survey simulator, we find that we cannot match the OSSOS eccentricity distribution because too many low-eccentricity objects are detected. We therefore consider the likely possibility that objects were not scattered from pericenter distances extending all the way out to the current location of the resonance at 40au.
To investigate the potential that the 3:2 resonance was populated with particles scattered outward from a more limited range of initial heliocentric distances, we apply a cut in our initial test particle distribution to remove particles with initial pericenter distances larger than values ranging from 33-38 au in 1 au increments.
These six resulting models (which are subsets of our total simulation data) are fed through the Survey Simulator, and we find good agreement with the observed eccentricity distribution for pericenter cuts between 35 and 37 au, while cuts at q = 33, 38, and 39 au are rejected by the KS test and cuts at q = 33, 34, 38, and 39 au are rejected by the AD test. The best fit arises when objects having initial pericenters greater than 36 au are removed. All further results presented here include a 36 au pericenter cut, corresponding to an initial scattering region ending at 36 au.
With this pericenter cut, at 1 Gyr we do not reject the null hypothesis for any parameter, whereas at 4.5 Gyr, A_ϕ and ϕ produce rejectable p-values below 0.05. The angle ϕ is cyclic, however, so we also perform the Kuiper variant of the KS test, which is designed for cyclic angles. The p-value for this test is above 0.05, so we conclude that ϕ is consistent with the null hypothesis (see Tables <ref> and <ref>).
§.§.§ Libration Amplitude, A_ϕ
An alternate view of the A_ϕ distributions is shown in Figure <ref> to highlight in more detail the discrepancy between the synthetically detected objects from the simulation and the OSSOS+ observations. Alternative pericenter cuts did not improve the agreement.
The discrepancy at the current solar system age of 4.5 Gyr is significant but modest. Within the context of the model considered here, two possibilities for resolving it immediately present themselves. First, transient sticking <cit.> adds a pseudo-stable population of particles to the resonance at preferentially high libration amplitudes. OSSOS objects are identified with million-year integrations, and their longer-term resonance stability time is not currently available. The objects in our sample are stable over billion-year timescales. In other words, the observations should contain high-libration-amplitude transient objects that our model does not include.
Whether the transient sticking population adds sufficiently many high-libration-amplitude objects to resolve the discrepancy merits future work. We consider this possibility promising.
Alternatively, planetary upheaval models require that Neptune's eccentricity ultimately be damped to its current low value. This damping is thought to result from dynamical friction with planetesimals, a process which also results in smooth migration. While dynamical friction in a symmetric sea of particles normally results in the planet's inward migration from angular momentum transfer, in the case of the outer solar system, the ice giants migrate outward. This is due to an asymmetry between the number of planetesimals from which Neptune takes angular momentum and the number that give angular momentum to Neptune. This global asymmetry results from the presence of the other giant planets (see the literature for more details).
Since smooth migration pushes objects more deeply into resonance, such a late-stage epoch of migration has the potential to modify the distribution found here, either in the direction of better or worse agreement. We investigate the impact of post-upheaval smooth migration on libration amplitudes with four independent smooth-migration simulations including the giant planets and 8000 test particles. In the simulations, Jupiter, Saturn, and Uranus begin at their current locations and Neptune at ∼29, 28.5, 28, and 27 au, respectively. In all simulations Neptune migrates over 10 million years up to its barycentric semi-major axis of ∼30.06 au, and we continue to integrate up to 1 billion years to compare with the 1 billion year simulation in this paper. The test particles are initialized with distributions similar to those in our main simulation, but with a broader range of semi-major axes. For each value of Neptune's initial semi-major axis, we fill the phase space with test particles from the interior edge of the 3:2 resonance before migration to the exterior edge after migration.
We find that the libration amplitude distribution for 3:2 resonant objects does not differ from our non-migrating simulation when the migration distance is ≲2 au and the eccentricity distribution does not differ for migration distances ≲1 au, as illustrated in Figure <ref>. Thus a brief epoch of smooth migration neither improves nor worsens the match between our model and the OSSOS libration amplitude distribution. We note that exploration of larger migration distances would necessitate adjusting our pericenter cut, running separate 4.5 Gyr simulations for each migration scenario, and running these through the OSSOS survey simulator, which we reserve for future work.
§.§.§ Kozai population
We compare the expected Kozai subpopulation of the 3:2 resonance from our simulations to the observations to further examine the accuracy of our model.
We use a Monte Carlo sampling method for this comparison.
Taking the 3:2 resonant particles with initial pericenters below 36 au from our model that are detected by the survey simulator, we randomly draw samples of 85 3:2 objects and then count how many of those 85 simulated detections are of Kozai particles.
We repeat this process 10^5 times for both the 1 and 4.5 Gyr simulation snapshots to produce the distribution of expected observed Kozai particles shown in Figure <ref>.
Interestingly, the Kozai fraction in our raw simulation (i.e. without going through the survey simulator) increased from 11.1% of 3:2 resonant objects at 1 Gyr to 14% at 4.5 Gyr, but Figure <ref> shows that the expected number of detected Kozai objects is nearly identical at both simulation times.
While this apparent contradiction could possibly be related to the very complex observational biases in the Kozai population (see, e.g., ), it is also possible that it is due to the relatively small number statistics of Kozai objects in our simulations; using simple Poisson error estimates, the Kozai fractions in our simulations at 1 and 4.5 Gyr are marginally consistent with each other (though we note that because Kozai 3:2 resonant particles are more stable than non-Kozai ones, an increase in the Kozai fraction over time is expected).
As mentioned in Section <ref>, we identify the Kozai objects in the simulation by checking if their ω librates between 5^∘ and 175^∘ or 185^∘ and 355^∘.
In the OSSOS dataset considered here, there are 18 3:2 objects that are in the Kozai subresonance.
However, we find that if we restrict the libration of the observed objects to the same ranges, our check for Kozai fails to catch 3 real observed objects with libration centers other than 90^∘ and 270^∘ (these are classified as Kozai largely based on visual examination of their orbital histories).
We thus compare our simulation results to the 15 real observed Kozai 3:2 objects that librate in the same way as our simulated ones.
Figure <ref> shows that at both simulation snapshots, the number of simulated observed Kozai 3:2 objects is significantly smaller than the number observed by OSSOS. Out of 100000 total draws, 95.1% of draws contained <15 Kozai objects.
To check whether the rejectability of the model's predicted Kozai fraction and the rejectability of the predicted A_ϕ distribution are potentially related, we examine the libration amplitude distribution of the Kozai and non-Kozai 3:2 particles separately; this is shown in Figure <ref>.
Both the real and synthetic detected Kozai 3:2 populations are weighted toward smaller libration amplitudes (consistent with what we saw in our intrinsic model population; see Figure <ref>).
Because the discrepancy in Figure <ref> arises from the non-Kozai objects, we confirm two unrelated discrepancies: an underpopulation of Kozai objects and an underpopulation of mid-to-high libration amplitudes.
Upon running the smooth migration simulations introduced in Section <ref>, we found that all simulations had twice as many or more objects in the Kozai resonance than before migration. When comparing the raw smooth migration simulation and the main simulation discussed in this paper (i.e., without running them through the OSSOS survey simulator), we found that at 1 billion years the 2 au smooth migration model had 13% of objects in Kozai, whereas the intrinsic simulation had 11%. While 13% is higher than 11%, we do not believe the difference is significant enough to confidently say that smooth migration increases the number of Kozai objects. We will explore this more rigorously in future work.
§.§.§ Classical population
Figures <ref> and <ref> show that some of the non-resonant test particles in the vicinity of the 3:2 survive our 4.5 Gyr simulations.
This provides an additional observational test for the perihelion distance cut used to best reproduce the observed 3:2 population.
Using this same perihelion distance cut at 36 au, we can examine how many classical (non-resonant) objects OSSOS+ should have observed in the region immediately surrounding the 3:2 resonance if the initial phase space was filled as in our model.
We compare the expected number of observed stable non-resonant TNOs from the simulation (see Figure <ref>) at 1 Gyr and 4.5 Gyr with the observed number in the OSSOS+ sample by considering the sample of all test particles (resonant and non-resonant) in the restricted a range of 38.81-40 au with initial q>36 au (the q cut determined in Section <ref>).
We pass all of these test particles, resonant and non-resonant, through the survey simulator to produce a large set of synthetic detections, cloning them as described in Section <ref>.
We then randomly draw from this set of synthetic detections until we have a total of 85 synthetic detected 3:2 objects (the number matching our real observational sample).
The number of non-resonant particles drawn while building up the resonant sample is the number of expected classical detections from a=38.81-40 au for OSSOS+.
Figure <ref> shows one such result of this random sampling procedure.
We repeat this process 10^5 times to build a distribution of the number of expected detected classical objects for the 1 Gyr and 4.5 Gyr simulation states, and the resulting distribution is shown in Figure <ref>.
It is clear that the expected number of detected stable classicals near the 3:2 resonance from the 4.5 Gyr simulation snapshot is consistent with the real observed number of objects in the same range.
This serves as an independent verification that the q=36 au cut in our simulated phase space is consistent with the observations.
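The draw-until-85 counting procedure described in this subsection can be sketched as follows (our own illustration; is_resonant is an assumed boolean flag per synthetic detection, True for 3:2 resonant particles and False for the surviving classicals).

import numpy as np

rng = np.random.default_rng(2)

def expected_classical_count(is_resonant, n_resonant_target=85):
    # draw synthetic detections in random order until 85 resonant ones accumulate,
    # counting how many classical (non-resonant) detections come along the way
    n_res = n_cla = 0
    for idx in rng.permutation(len(is_resonant)):
        if is_resonant[idx]:
            n_res += 1
            if n_res == n_resonant_target:
                break
        else:
            n_cla += 1
    return n_cla

# repeating this 10**5 times builds the distribution of expected classical detections:
# counts = [expected_classical_count(flags) for _ in range(100_000)]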
§ SUMMARY
We investigate whether the orbital distribution of objects in Neptune's 3:2 mean-motion resonance is consistent with a history in which orbital phase space was uniformly filled and subsequently “sculpted" by dynamical stability.
We find that this simplified model, motivated by dynamical upheaval histories that scattered planetesimal debris outward early in the life of the solar system, is consistent with ensemble data from the Outer Solar System Origins Survey within the uncertainties, with a few notable exceptions.
Stability sculpting does not substantially alter the inclination distribution of resonant particles, so this distribution must be determined by a different mechanism.
More subtly, it can be seen in Figure <ref> that the simulation produces a smaller fraction of objects with mid-high libration amplitudes compared to those observed.
We suggest that this discrepancy could be due to our not accounting for the transiently stuck population, which is known to consist of objects that are less deep in the resonance and have higher libration amplitudes <cit.>.
Finally, the fraction of resonant objects in the Kozai sub-resonance is significantly underpredicted in our simulation.
We find that smooth migration over 1 au at the end of the epoch of planetary upheaval does not alter our model's agreement with the data, but also is not sufficient to push objects into the Kozai portion of the resonance. Future work is needed to determine whether a longer-distance smooth migration may be accommodated.
We comment that <cit.> analyze the distribution of test particles throughout the trans-Neptunian region from the <cit.> simulation of a specific instability model (based on ) that included Neptune's residual migration from an eccentric orbit at a=27.5 au to its current low-eccentricity orbit at 30.1 au.
<cit.> find a Kozai fraction in their 3:2 population of 21%, which is double the Kozai fraction in our simulations.
The libration amplitudes they find for the 3:2 resonant population are also shifted toward slightly higher libration amplitudes compared to our simulations, possibly a result of the high-eccentricity phase of Neptune's orbit, offering an alternative potential origin for the small observed excess of high-libration amplitude objects compared with our model.
Overall, given the simplicity of our model, we consider the match between the observed population of 3:2 resonant TNOs and our model to be very good, suggesting that stability sculpting likely played a large role in determining the current distribution of 3:2 resonant objects, particularly in semi-major axis and eccentricity. We find strong evidence that, if a “phase-space filling" scattering history provided the initial conditions for this sculpting, the scattering region extended to approximately 36 au.
§ ACKNOWLEDGEMENTS
RMC, SB, NZ, NH, AHR, JB, JG, and ZS acknowledge support from NSF (grant CAREER AST-1411536/1663706) and NASA (grant NNX15AH59G/NNX17AK64G). KV acknowledges support from NSF (grant AST-1824869) and NASA (grants NNX15AH59G, and 80NSSC19K0785). AHR thanks the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant #1829740, the Brinson Foundation, and the Moore Foundation; her participation in the program has benefited this work. We acknowledge use of the lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST 1828315.
§ DATA AVAILABILITY
The data underlying this article are available on GitHub at https://github.com/sbalaji718/KBR.
|
http://arxiv.org/abs/2307.04372v1 | 20230710070518 | New results on the dynamics of critical collapse | [
"Jun-Qi Guo",
"Yu Hu",
"Pan-Pan Wang",
"Cheng-Gang Shao"
] | gr-qc | [
"gr-qc"
] |
School of Physics and Technology, University of Jinan, Jinan 250022, Shandong, China
Key Laboratory of Fundamental Physical Quantities Measurement, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF, and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China
We study the dynamics of critical collapse of a spherically symmetric scalar field. Approximate analytic expressions for the metric functions and matter field in the large-radius region are obtained. It is found that because of the boundary conditions at the center, in the central region, the equation of motion for the scalar field is reduced to the flat-spacetime form. On the other hand, due to the connection to its neighbouring region where gravity plays an important role, the scalar field in the central region feels the gravitational effects indirectly.
New results on the dynamics of critical collapse
Jun-Qi Guo, Yu Hu, Pan-Pan Wang, and Cheng-Gang Shao
August 12, 2023
================================================
§ INTRODUCTION
The critical phenomena in gravitational collapse discovered by Choptuik demonstrate the rich dynamics of the Einstein equations <cit.>. Consider gravitational collapse of generic families of a massless scalar field, whose initial data are parameterized by p. The parameter p measures the strength of the gravitational interaction. Strong interactions (high p) result in black hole formation, while for weak interactions (low p) the matter field disperses to infinity and flat spacetime is left behind. By fine-tuning p to the threshold of black hole formation, p=p_*, critical collapse occurs.
In supercritical collapse, a tiny black hole forms, whose mass obeys a scaling relation, m_BH∝|p-p_*|^γ, where γ≃ 0.37. The critical collapse solution shows a universality feature: the spacetime produced by different families of critical initial data approaches the same solution after a finite time in a finite region. The solution also displays discrete self-similarity: it is invariant under rescaling the spacetime by a certain factor.
After the discovery, similar results have been obtained in many other models (see Ref. <cit.> for review). Recently, further results on simulations were reported in Refs. <cit.>.
Analytic interpretations are important for understanding the dynamics of gravitational collapse. In Refs. <cit.>, critical collapse was treated as an eigenvalue problem. By imposing discrete self-similarity, the global structure of the critical collapse spacetime was constructed with the pseudo-Fourier method. The rescaling factor Δ becomes an eigenvalue and was solved with high precision. The scaling law of the black hole mass in supercritical collapse was recovered analytically via perturbation approach in Ref. <cit.>. Critical collapse was analyzed with a renormalization group method in Refs. <cit.>. In Ref. <cit.>, with an explicit approximate solution, a true solution was shown to exist. In Ref. <cit.>, using one typical log-periodic formula in discrete scale invariance systems, the authors obtained one approximate analytic solution for the spacetime near the center. Approximate analytic expressions for the metric functions and matter field near the central singularity in black hole formation were obtained in Refs. <cit.>. In Ref. <cit.>, the equations for the matter field in critical collapse were analyzed with certain terms in the equations being dropped. Approximate expressions for certain combinations of the metric functions and derivatives of the scalar field were obtained.
In this paper, motivated by the significance of analytic results, we use numerical data to obtain approximate analytic expressions for the metric functions and matter field in the large-radius region. We also investigate the dynamics in the central region. We find that, due to the boundary conditions at the center, the equation of motion for the scalar field in the central region is reduced to the flat-spacetime form.
This paper is organized as follows. In Sec. <ref>, we describe the methodology for simulating critical collapse. In Secs. <ref> and <ref>, we study the dynamics in the large-radius and central regions, respectively. The results are summarized in Sec. <ref>.
§ METHODOLOGY
We consider critical collapse of a spherically symmetric massless scalar field ϕ. We take polar coordinates,
ds^2=-A(r,t)e^-2δ(r,t)dt^2+1/A(r,t)dr^2+r^2dΩ^2.
Then the equations can be written as
A_,r=1-A/r-4π rA(P^2+Q^2),
δ_,r=-4 π r(P^2+Q^2),
Q_,t=(A e^-δ P)_,r,
P_,t=1/r^2(r^2 A e^-δ Q)_,r,
A_,t=-8π rA^2e^-δPQ,
where Q(r,t)≡ϕ_,r, and P(r,t)≡ A^-1 e^δϕ_,t. The (_,r) and (_,t) denote partial derivatives with respect to the coordinates r and t, respectively. The Misner-Sharp mass is defined as <cit.>
m≡r/2(1-g^μνr_,μr_,ν)=r/2(1-A).
The initial conditions for ϕ are set up as ϕ|_t_i=aexp[-(r/σ)^2] and ϕ_,t|_t_i=0. The regularity of Eq. (<ref>) at the center requires that A(r=0,t)=1. We choose δ(r=0,t)=0, which implies that the coordinate time is equal to its proper time at the center. In the simulation, we integrate Eqs. (<ref>)-(<ref>) by the fourth-order Runge-Kutta method. Mesh refinement algorithm is implemented. For details on the numerics, see Ref. <cit.>.
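As an illustration of how the constraint part of this system can be integrated radially, the following sketch (ours, not the production code, which additionally evolves Q and P in time and uses mesh refinement) applies a fourth-order Runge-Kutta sweep to the radial equations for A and δ above at a fixed time slice. The profiles P(r) and Q(r) are assumed to be supplied as callables, and the regularization of (1-A)/r at r=0 is an assumption consistent with the boundary condition A(r=0,t)=1.

import numpy as np

def constraint_rhs(r, A, P, Q):
    # right-hand sides of A_,r and delta_,r at radius r
    s = 4.0 * np.pi * r * (P ** 2 + Q ** 2)
    dA = (1.0 - A) / r - s * A if r > 0.0 else 0.0   # (1-A)/r -> 0 as r -> 0, since A(0)=1
    ddelta = -s
    return np.array([dA, ddelta])

def solve_constraints(r_grid, P_of_r, Q_of_r):
    # RK4 sweep outward from the center with A(0)=1 and delta(0)=0
    A = np.ones_like(r_grid)
    delta = np.zeros_like(r_grid)
    for k in range(len(r_grid) - 1):
        r, h = r_grid[k], r_grid[k + 1] - r_grid[k]
        y = np.array([A[k], delta[k]])
        f = lambda rr, yy: constraint_rhs(rr, yy[0], P_of_r(rr), Q_of_r(rr))
        k1 = f(r, y)
        k2 = f(r + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(r + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(r + h, y + h * k3)
        A[k + 1], delta[k + 1] = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return A, delta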
§ RESULT I: DYNAMICS IN THE LARGE-RADIUS REGION
Rewrite the metric (<ref>) as
ds^2=-α^2(r,t)dt^2+β^2(r,t)dr^2+r^2dΩ^2.
For convenience, we adjust the time coordinate, such that t=0 when the naked singularity forms.
Define the variables, X(r,t)≡√(2π)(r/β)ϕ_,r, Y(r,t)≡√(2π)(r/α)ϕ_,t,
ρ≡ln r, T≡ln(-t), and u≡ t/r. Then the equations for ϕ (<ref>) and (<ref>) can be respectively rewritten as
(β X)_,u=-α Y + (α Y)_,ρ -u(α Y)_,u,
(β Y)_,u=α X + (α X)_,ρ-u(α X)_,u.
In critical collapse, the period in terms of the coordinate time t decreases exponentially. Consequently, in the late stage of collapse and in the large-radius region, for which |t/r|≪ 1, the metric functions and matter field appear to be frozen rather than propagating <cit.>. In Ref. <cit.>, the authors made the ansatz that in this region the last terms in Eqs. (<ref>) and (<ref>) are negligible in comparison with the first ones. Moreover, treating α and β as constants, the authors obtained the following solutions:
X≈ Bsin[ω(ρ-α u)-γ], Y≈ Bsin[ω(ρ-α u)],
where 1+ω^-2=β^2, sinγ=(ωβ)^-1, and cosγ=-β^-1. The expressions (<ref>) match well with the numerical results. However, some treatments in the above have not been fully justified. In addition, although the approximate expressions for X and Y were obtained, the results for the metric functions and scalar field remain absent. We address such issues below.
In Ref. <cit.>, some terms in Eqs. (<ref>) and (<ref>), -u(α Y)_,u, -u(α X)_,u, β_,uX, α_,ρY, β_,uY and α_,ρX, were dropped. Actually, as shown in Figs. <ref> and <ref>, in the large-radius region (r>10^-3), the absolute values of the terms, -uα_,uY, -uα_,uX, α_,ρY and α_,ρX, can sometimes be greater than the absolute values of other terms. On the other hand, the terms dropped approximately cancel. Consequently, the equations constructed by the remaining terms roughly hold,
β X_,u≈-α Y + α Y_,ρ,
β Y_,u≈α X + α X_,ρ.
So from this point of view, the treatments in Ref. <cit.> are effectively valid.
Motivated by the expressions for X and Y (<ref>) and the numerical results for ϕ, we find that the field ϕ admits the following approximate expression:
ϕ(r,t)≈ C_1(1+C_2[H(r,t)])cos(ωln r + C_3[H(r,t)] + φ_0 ).
The quantity [H(r,t)] has the following features:
* For [H(r,t)], there is
[H(r,t)]=H(r,t)≡ωα t/r=ω A^1/2e^-δt/r.
* For ϕ_,t, there is
ϕ_,t≈ C_1√(C_2^2+C_3^2)[H]_,tcos(ωln r +C_3[H]+φ_0+φ_1),
where tanφ_1≡ C_3/C_2. Regarding the quantity H_,t(=ωα/r+ωα_,tt/r), the numerical results show that |ωα_,tt/r| is sometimes greater than ωα/r. However, comparing the expression (<ref>) with the numerical results for ϕ_,t, we always obtain
[H]_,t≈ωα/r=ω A^1/2e^-δ1/r.
This implies that in [H]_,t the contribution from ωα_,tt/r is negligible. This should be related to the fact that the respective reductions of Eqs. (<ref>) and (<ref>) to (<ref>) and (<ref>) are equivalent to treating α and β as constants.
* The numerical results in Fig. <ref>(a) show that in the large-radius region, the equation of motion for ϕ (<ref>) is reduced to
A^-1e^δϕ_,tt≈-(A^-1e^δ)_,tϕ_,t.
Using Eq. (<ref>) and the numerical results of |δ_,t|≫ |A_,t|, we have
ϕ_,tt≈-δ_,tϕ_,t.
Combination of Eqs. (<ref>), (<ref>), (<ref>) and the numerical results of |δ_,t|≫ H_,t generates
[H]_,tt≈ωα_,t/r≈ -δ_,t[H]_,t.
Namely, the dynamical feature of α begins to take effect at the level of [H]_,tt.
* At the late stage of critical collapse, in the large-radius region for which |t/r|≪ 1, there are |H|≪|ωln r| and |H_,r|≪ 1/r. Therefore, with Eq. (<ref>), [H] mainly contributes to the temporal derivatives of ϕ, rather than to the field ϕ and its spatial derivatives.
The numerical results show that C_1≈0.058, C_2^2+C_3^2≈ 1, and φ_1≈ 1.08. As shown in Figs. <ref>(a) and <ref>(b), the expressions for ϕ (<ref>), ϕ_,t (<ref>) and ϕ_,tt (<ref>) agree well with the numerical results.
With Eqs. (<ref>) and (<ref>), one can rewrite Eq. (<ref>) as
1/A∂ A/∂ t =-8π rϕ_,tϕ_,r
≈ C_4[H]_,t[sin(2ωln r + 2C_3[H] + 2φ_0 + φ_1 ) - sinφ_1],
where C_4=4πωC_1^2√(C_2^2+C_3^2). Via integration, we have
ln A≈ -[C_4/(2C_3)]cos(2ωln r + 2C_3[H] + 2φ_0 + φ_1 )
- C_4sinφ_1[H] + C_5.
Then using Eq. (<ref>) and the fact that |H|≪ 1, we obtain
m/r≈ C_6cos(2ωln r+2C_3[H]+2φ_0+φ_1)+C_7,
where C_6≈ e^C_5C_4/(4C_3)≈ e^C_5πωC_1^2√(C_2^2+C_3^2)/C_3, and C_7=(1/2)(1-e^C_5). As shown in Fig. <ref>(c), the expression for m/r (<ref>) matches well with the numerical results. The fitting results are C_6=0.013360± 0.000009≈ 1/75, and C_7≈ 0.065480± 0.000007≈ 1/15.
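The text quotes best-fit values with uncertainties but does not spell out the fitting procedure; under the assumption of a standard nonlinear least-squares fit, a sketch with scipy.optimize.curve_fit (ours; the starting guesses and the treatment of the slowly varying 2C_3[H] phase term are placeholder assumptions) could look as follows.

import numpy as np
from scipy.optimize import curve_fit

def m_over_r(ln_r, C6, C7, two_omega, phase):
    # m/r ~ C6*cos(2*omega*ln r + phase) + C7, with the small 2*C_3*[H] term
    # absorbed into a slowly varying phase at a fixed time slice
    return C6 * np.cos(two_omega * ln_r + phase) + C7

# r_num, m_num: radial grid and Misner-Sharp mass from the numerical solution
# popt, pcov = curve_fit(m_over_r, np.log(r_num), m_num / r_num,
#                        p0=[0.01, 0.07, 3.5, 0.0])   # rough placeholder guesses
# perr = np.sqrt(np.diag(pcov))                       # 1-sigma uncertainties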
With Eq. (<ref>), one can rewrite Eqs. (<ref>) and (<ref>) as
m_,r=2π r^2 A(P^2+Q^2),
rδ_,r=∂δ/∂ln r=-[2/(1-2m/r)]m_,r.
Then the solution for δ can be expressed as
δ≈ C_8ln r +ln(1-2m/r)
+C_9sin(2ωln r+2C_3[H]+2φ_0+φ_1)+δ_0(t),
where C_8≈-2C_7/(1-2C_7)-2C_6^2, and C_9≈-(C_6+8C_6C_7)/ω. As shown in Fig. <ref>(d), the expression for δ (<ref>) matches well with the numerical results.
In Ref. <cit.>, the quantities α and β were treated as constants. The approximate expressions for X and Y obtained in this way agree well with the numerical results. Then it was stated that in this circumstance the spacetime is effectively flat. Actually, X and Y are combinations of the metric functions and derivatives of the scalar field, rather than the scalar field. In order to check whether the spacetime is effectively flat, it may be more appropriate to investigate directly the behavior of the equation of motion for the scalar field (<ref>). As shown in Fig. <ref>(a), in the large-radius region, Eq. (<ref>) is reduced to Eq. (<ref>), which is clearly different from the flat-spacetime form, ϕ_,tt=r^-2(r^2ϕ_,r)_,r. So the spacetime in this region is not effectively flat.
§ RESULT II: DYNAMICS IN THE CENTRAL REGION
As shown in Fig. <ref>(a), in the central region, the absolute values of the terms of (A^-1e^δ)_,tϕ_,t and (A^-1e^δ)_,rϕ_,r in Eq. (<ref>) are much less than the absolute values of A^-1e^δϕ_,tt, Ae^-δϕ_,rr, and (2/r)Ae^-δϕ_,r. Moreover, in this region, A≈1, and δ≈ 0. Consequently, Eq. (<ref>) is reduced to the flat-spacetime form,
ϕ_,tt≈1/r^2(r^2ϕ_,r)_,r.
Regarding Eq. (<ref>), we make the following discussions:
* Equation (<ref>) implies that in the central region, the scalar field ϕ evolves almost as in flat spacetime, not directly feeling the gravitational effects.
On the other hand, as shown in Fig. <ref>, in the transition region located between the central and large-radius ones, the gravitational effects are important for the dynamics of the scalar field. Therefore, due to the connection between the central and transition regions, gravity affects the evolution of the scalar field in the central region indirectly.
* Besides critical collapse, we also check the evolution of the scalar field in two other types of collapse (dispersion and black hole formation), and obtain results similar to (<ref>).
* The result (<ref>) is closely related to the asymptotic behaviors of the metric functions and scalar field near the center. Under the smoothness requirement at the center, the metric functions and scalar field have the following power series expansions near the center <cit.>:
A≈ 1+A_2(t)r^2, δ≈δ_2(t)r^2, ϕ≈ϕ_0(t) + ϕ_2(t)r^2.
With Eqs. (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain the following asymptotic expressions:
A_,t≈ -16πϕ_,tϕ_2r^2, δ_,t≈ -4πϕ_,ttϕ_,tr^2, ϕ_,t≈ϕ_0'(t)+ϕ_2'(t)r^2, A_,r≈ -(8π/3)(ϕ_,t)^2 r, δ_,r≈ -4π(ϕ_,t)^2 r, ϕ_,r≈ 2ϕ_2(t)r,
which are also shown in Fig. <ref>. With Eqs. (<ref>) and (<ref>), one can straightforwardly simplify Eq. (<ref>) to (<ref>).
* It is known that in critical collapse, the Ricci curvature scalar R in the central region is very high and will diverge eventually. This fact is not in contradiction with the result (<ref>). For the metric (<ref>), the Ricci curvature scalar can be written as
R= 4Aδ_,r/r - 4A_,r/r + 2Aδ_,rr + 2(1-A)/r^2 - A_,rr - A_,tte^2δ/A^2
+ 3A_,rδ_,r - 2A(δ_,r)^2 + 2(A_,t)^2 e^2δ/A^3 - A_,tδ_,te^2δ/A^2.
With Eqs. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain the asymptotic expressions for all the terms on the right-hand side of Eq. (<ref>):
4Aδ_,r/r≈ -2D, -4A_,r/r≈ (4/3)D, 2Aδ_,rr≈ -D,
2(1-A)/r^2≈ -A_,rr≈ D/3, where D≡8π(ϕ_,t)^2.
-A_,tte^2δ/A^2≈ 16πϕ_,ttϕ_2r^2,
3A_,rδ_,r≈ 32π^2 (ϕ_,t)^4 r^2,
-2A(δ_,r)^2≈ -32π^2 (ϕ_,t)^4 r^2,
2(A_,t)^2 e^2δ/A^3≈ 512π^2 (ϕ_,t)^2 (ϕ_2)^2 r^4,
- A_,tδ_,te^2δ/A^2≈ -64π^2 ϕ_,tt(ϕ_,t)^2 ϕ_2 r^4.
The first five terms are dominant and have the same order of magnitude as 8π(ϕ_,t)^2, which eventually diverges; the remaining terms are proportional to r^2 or r^4 and are negligible.
As shown in Fig. <ref>(b), the transition region between the central and large-radius regions can be expressed as r∈ [r_1, r_2]. At r=r_1, there is
|C_3H|∼|ωln r|; and at r=r_2, there is |C_3H_,r|∼ω/r.
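As a consistency check of the power-series argument above (our own sketch, not part of the original analysis), one can verify symbolically that, with the expansions A≈1+A_2(t)r^2, δ≈δ_2(t)r^2 and ϕ≈ϕ_0(t)+ϕ_2(t)r^2, the gravitational corrections to the full wave equation enter only at O(r^2), so that its leading-order piece is the flat-space relation ϕ_0''=6ϕ_2.

import sympy as sp

r, t = sp.symbols("r t", positive=True)
A2, d2, phi0, phi2 = (sp.Function(name)(t) for name in ("A2", "d2", "phi0", "phi2"))

A = 1 + A2 * r**2
delta = d2 * r**2
phi = phi0 + phi2 * r**2

# full wave equation: (A**-1 * exp(delta) * phi_,t)_,t = r**-2 * (r**2 * A * exp(-delta) * phi_,r)_,r
lhs = sp.diff(sp.exp(delta) / A * sp.diff(phi, t), t)
rhs = sp.diff(r**2 * A * sp.exp(-delta) * sp.diff(phi, r), r) / r**2

leading = sp.series(sp.expand(lhs - rhs), r, 0, 2).removeO()
print(sp.simplify(leading))   # expected: phi0'' - 6*phi2, the flat-space relation at O(1)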
§ SUMMARY
Analytic solutions are important for understanding the dynamics of gravitational collapse. Due to the complexity of the Einstein equations, seeking analytic solutions to them has been a very difficult task. In the successful cases, the equations are usually reduced to ODEs. In critical collapse, the equations remain PDEs, but in the large-radius region and the late stage of the evolution, the spatial and temporal contributions separate to some extent. This enables us to obtain approximate analytic expressions for the metric functions and matter field.
The boundary conditions at the center play a key role in the dynamics of the central region. In this region, due to the boundary conditions, the terms related to gravitational effects in the equation of motion for the scalar field are negligible, so the equation is reduced to the flat-spacetime form. On the other hand, in the transition region, gravity is important for the evolution of the scalar field. Consequently, due to the connection between the central and transition regions, the scalar field in the central region feels the gravitational effects indirectly.
§ ACKNOWLEDGMENTS
The authors are grateful to Xiao-Kai He, Junbin Li and Cheng-Yong Zhang for helpful discussions. JQG is supported by the Shandong Province Natural Science Foundation under grant No. ZR2019MA068. YH and CGS are supported by the National Natural Science Foundation of China (Grant No. 11925503).
Choptuik:1992jv
M. W. Choptuik, Phys. Rev. Lett. 70, 9 (1993).
Gundlach:2007gc
C. Gundlach and J. M. Martin-Garcia, Living Rev. Rel. 10, 5 (2007). [arXiv:0711.4620 [gr-qc]]
Bizon:2011gg
P. Bizon and A. Rostworowski, Phys. Rev. Lett. 107, 031102 (2011).
Deppe:2018uye
N. Deppe, L. E. Kidder, M. A. Scheel and S. A. Teukolsky, Phys. Rev. D 99, 024018 (2019). [arXiv:1802.08682 [gr-qc]]
Baumgarte:2019fai
T. W. Baumgarte, C. Gundlach and D. Hilditch, Phys. Rev. Lett. 123, 171103 (2019). [arXiv:1909.00850 [gr-qc]]
Kelson-Packer:2020hbb
C. Kelson-Packer and J. Belz, Phys. Rev. D 102, 084050 (2020). [arXiv:2008.06774 [gr-qc]]
Mendoza:2021nwq
M. F. P. Mendoza and T. W. Baumgarte, Phys. Rev. D 103, 124048 (2021). [arXiv:2104.03980 [gr-qc]]
Zhang:2021nnn
C.-Y. Zhang, Q. Chen, Y. Liu, W.-K. Luo, Y. Tian and B. Wang, Phys. Rev. Lett. 128, 161105 (2022). [arXiv:2112.07455 [gr-qc]]
Gundlach:1995kd
C. Gundlach, Phys. Rev. Lett. 75, 3214 (1995). [arXiv:gr-qc/9507054]
Gundlach:1996eg
C. Gundlach, Phys. Rev. D 55, 695 (1997). [arXiv:gr-qc/9604019]
Martin-Garcia:2003xgm
J. M. Martin-Garcia and C. Gundlach, Phys. Rev. D 68, 024011 (2003). [arXiv:gr-qc/0304070]
Koike:1995jm
T. Koike, T. Hara and S. Adachi, Phys. Rev. Lett. 74, 5170 (1995). [arXiv:gr-qc/9503007]
Hara:1996mc
T. Hara, T. Koike and S. Adachi, [arXiv:gr-qc/9607010]
Reiterer:2012hnr
M. Reiterer and E. Trubowitz, Commun. Math. Phys. 368, 143 (2019). [arXiv:1203.3766 [gr-qc]]
Guo:2018yyt
J.-Q. Guo and H. Zhang, Eur. Phys. J. C 79, 625 (2019). [arXiv:1808.09826 [gr-qc]]
Guo:2013dha
J.-Q. Guo, D. Wang and A. V. Frolov, Phys. Rev. D 90, 024017 (2014). [arXiv:1312.4625 [gr-qc]]
Guo:2020jfa
J.-Q. Guo, J. Phys. Comm. 5, 075015 (2021). [arXiv:2011.14853 [gr-qc]]
Price:1996sk
R. H. Price and J. Pullin, Phys. Rev. D 54, 3792 (1996). [arXiv:gr-qc/9601009]
Misner_1964
C. W. Misner and D. H. Sharp, Phys. Rev. 136, B571 (1964).
Zhang:2016kzg
C.-Y. Zhang, Z.-Y. Tang and B. Wang, Phys. Rev. D 94, 104013 (2016). [arXiv:1608.04836 [gr-qc]]
Choptuik_workshop_1993
M. W. Choptuik, Critical Behaviour in Scalar Field Collapse, in Proceedings of a NATO Advanced Research Workshop on Deterministic Chaos in General Relativity, Springer Science+Business Media, LLC. Editors: D. Hobill, A. Burd and A. Coley, 155-175, 1993.
Choptuik:1997mq
M. W. Choptuik, The (Unstable) Threshold of Black Hole Formation, 15th International Conference on General Relativity and Gravitation (GR15), 67-86, 1997.
|
http://arxiv.org/abs/2307.06024v2 | 20230712090949 | balance -- a Python package for balancing biased data samples | [
"Tal Sarig",
"Tal Galili",
"Roee Eilat"
] | stat.CO | [
"stat.CO",
"stat.ML"
] |
balance - a Python package for balancing biased data samples

Tal Sarig, Tal Galili, Roee Eilat
Meta
(The first two authors contributed equally to this work.)

June 2023
Surveys are an important research tool, providing unique measurements on subjective experiences such as sentiment and opinions that cannot be measured by other means. However, because survey data is collected from a self-selected group of participants, directly inferring insights from it about a population of interest, or training ML models on such data, can lead to erroneous estimates or under-performing models. In this paper we present balance, an open-source Python package by Meta, offering a simple workflow for analyzing and adjusting biased data samples with respect to a population of interest.
The workflow includes three steps: understanding the initial bias in the data relative to a target we would like to infer, adjusting the data to correct for the bias by producing weights for each unit in the sample based on propensity scores, and evaluating the final biases and the variance inflation after applying the fitted weights. The package provides a simple API that can be used by researchers and data scientists from a wide range of fields on a variety of data. The paper provides the relevant context, methodological background, and presents the package's API.
§ INTRODUCTION
Surveys play an important role in the study of social phenomena across research fields and industries: from their traditional usage by statistics bureaus in producing population estimates, through the long history of public opinion surveys in political science, to more recent applications like studying user experience in online services, and even playing a part in epidemiological studies <cit.>. The widespread use of surveys, and their unique role in providing measurements on subjective indicators such as sentiment and opinions, makes the field rich in methodological research.
A central challenge in designing and analyzing survey data stems from bias due to sampling limitations and non-response. Since the data is collected from a self-selected group of participants, directly inferring insights or training ML models on such data can result in erroneous estimates or under-performing models. An insightful theoretical framework for the sources of error present in survey data is given in the "Total Survey Error" framework <cit.>. While the sources might be different, similar manifestations of bias are often present in observational studies when comparing treatment groups, and in any data produced through self-selection processes.
The field of survey statistics offers methods for mitigating bias in samples, at least partially, by relying on auxiliary information (i.e., “covariates” or “features”). When such information is available for all items in the sample as well as for the population from which it was sampled, it can be used to create weights. Under some assumptions on the relation between the auxiliary information, the response mechanism, and the survey responses, applying the weights to the data will produce less biased estimates or models. Different approaches were proposed for the task, from simple post-stratification <cit.> to methods more suitable for high dimensional covariates space such as raking <cit.>, inverse propensity weighting <cit.>, covariate balancing methods <cit.>, outcome regression based approaches <cit.>, and others. Weighting methods have been shown to be effective in reducing bias of survey estimates <cit.>.
Following methodological advancements in survey statistics, statistical software packages were developed to allow researchers and practitioners to apply these methodologies to survey data and observational data. Most software packages for this aim have R implementations, and other implementations in environments such as SPSS, Stata and SAS exist as well. In recent years a rich ecosystem of data science software has been developed for Python, and its usage has become prevalent among researchers and data scientists. This shift created a need for a reliable Python package for working with survey data, and more generally with biased data sets. Here we introduce balance - a Python package for balancing biased data samples. balance offers a simple, easy-to-use framework for weighting data and evaluating its biases. The package is designed to provide best practices for weight fitting and offers several modeling approaches. The methodology in balance can support ongoing automated survey data processing, as well as ad-hoc analyses of survey data.
The main workflow API of balance includes three steps: (1) understanding the initial bias in the data relative to a target population, as observed by the differences in covariate distributions, (2) adjusting the data to correct for the bias by producing weights for each unit in the sample based on propensity scores, and (3) evaluating the final biases and the variance inflation after applying the fitted weights. The adjustment step provides a few alternatives for the researcher to choose from: inverse propensity weighting using a logistic regression model based on LASSO (Least Absolute Shrinkage and Selection Operator <cit.>), Covariate Balancing Propensity Score <cit.>, raking, and post-stratification. The focus is on providing a simple-to-use API, based on pandas' DataFrame structure, which can be used by researchers from a wide spectrum of fields.
In this paper we describe the workflow in more detail and provide guidance on how to implement it using the package. We include details on methods, assumptions, and model choices made in the package. The methodological background part of the paper is an accessible review of the theoretical frameworks, methods, and practices often used in survey statistics. We invite readers new to the field to use it as a short and effective introduction.
The rest of this paper is structured as follows. We discuss related work in Section <ref>, focusing on software packages available in the R and Python ecosystems for survey data analysis and related use cases. In Section <ref> we provide details on the statistical background that guided the implementation of the package, including theoretical frameworks, estimation methods, and diagnostic tools. In Section <ref> we present the workflow and provide an end-to-end walk through using code snippets that are applied to simulated data. We conclude with a discussion on future directions for the package in Section <ref>.
§ RELATED WORK
The open-source ecosystem offers a variety of packages for weighting biased data. This section gives a brief survey of prominent tools in this space and describes some of their capabilities. We find the R ecosystem to be the most developed in terms of packages for survey analysis. The Python ecosystem has some packages for survey statistics. It also has several well-developed packages for causal inference, which employ similar models (e.g., propensity score models, outcome models, etc.). While various R packages exist with capabilities similar to what is available in balance, no Python package (that we are aware of) gives a comprehensive, end-to-end, coherent solution for researchers.
The R ecosystem is exceptionally rich and diverse when it comes to survey statistics. The most comprehensive overview can be found in the CRAN task view of "Official Statistics & Survey Statistics" <cit.>. To date, it includes over 130 packages, ranging from the classical survey-analysis package <cit.> to more niche packages. Similarly, the CRAN task view of "Causal Inference" <cit.> also includes over 130 packages that offer related methods. A short review of the current state of R packages can be found in the R package described in <cit.>, which compares 9 R packages that implement propensity score weighting with discrete treatments. For survey weights diagnostics, the package described in <cit.> offers many options, including balance tables and plots for covariates of multiple groups. That package includes various capabilities that could inspire future development of balance.
For Python, the ipfn package <cit.> specializes in fast iterative proportional fitting (raking). This package is utilized in balance as the back-end for the raking implementation we rely on. The package presented in <cit.> is designed to support data processing, analysis and reporting for survey data using pandas and numpy. It supports native handling of special data types like multiple-choice variables, statistical analysis using case or observation weights, DataFrame metadata, and different data exports. It seems to be the closest to what balance tries to achieve, but lacks many of the capabilities balance has in all stages of the workflow. Another package offers a comprehensive solution for dealing with complex sampling designs <cit.>, with various overlapping and non-overlapping capabilities relative to balance. It offers tooling for random selection techniques used to draw a sample from a population, and sample size calculations. It also provides methods for weight adjustment using post-stratification or calibration. Its additional capabilities include functions for estimating statistics and their variance (beyond just the Taylor linearization estimation in balance), such as bootstrap, balanced repeated replication and jackknife. Other packages we found seem to be only lightly maintained and do not provide additional capabilities of relevance to our use-case. These include PySurvey <cit.>, Surveyweights <cit.>, pscore_match <cit.>, pymatch <cit.>, and causal_nets <cit.>.
Stepping aside from survey statistics, several Python packages offer tools for causal inference that can be repurposed for adjusting biased samples.
The package described in <cit.>, developed by Microsoft, is a well-maintained package focused on causal inference. It models a given problem as a causal graph to help explore assumptions clearly, estimate causal effects, and test the validity of assumptions for robustness. It offers a variety of estimation methods including propensity-based stratification, propensity score matching, and inverse propensity weighting (similar to balance). It also offers outcome-based models (currently not implemented in balance) using linear regression or generalized linear models, and supports other methods such as instrumental variable methods. The package emphasizes graphical interpretation of causal inference. It also provides various refutation methods (dummy outcome, simulated outcome, etc.) and basic visualizations (e.g., barplots of treatment and control). The package described in <cit.>, developed by Google, provides a method to compute empirical calibration weights using convex optimization. This approach balances out the marginal distribution of covariates directly while reducing the inflation of variance. It is similar to performing raking while trying to keep the weights as equal as possible, and offers a bias-correction solution that resembles the raking and CBPS methods implemented in the balance package. The package described in <cit.> provides a set of modeling and causal inference methods for analyzing observational data using machine learning algorithms. It provides tools to estimate the Conditional Average Treatment Effect (CATE) and the Individual Treatment Effect (ITE). This package offers a wide variety of ML algorithms, including tree-based algorithms, meta-learner algorithms, instrumental variables algorithms, and neural-network-based algorithms. While these packages are comprehensive, there is still overhead and complexity in using them for the data-balancing workflow that balance is optimized to handle, with its focus on survey data.
§ METHODOLOGICAL BACKGROUND
Before diving into the workflow and the implementation details of balance, we introduce a brief description of the methodological background concerning the representation error problem in surveys, weights estimation and tools to evaluate survey weights.
§.§ The Total Survey Error framework
The "Total Survey" Error framework <cit.> provides a theoretical framework to describe statistical properties of surveys. It is used as a conceptual tool for researchers when designing and analyzing surveys to minimize estimation errors and biases. While the research goal is to estimate a population parameter, such as average or ratio, surveys only provides a glimpse on this parameter through the survey responses and are subject to a range of sources for statistical errors, as described by the "Total Survey Error" concept.
The "Total Survey Error" has two main components: representation error and measurement error <cit.>. Since neither can be overcome by increasing the sample size, researchers should be aware of these as early as the survey design stage. Measurement error deals with potential biases introduces to the estimation due to the instrument of measurement. It includes questions about the validity of the responses, the phrasing of the questions and how it is affecting what we are trying to estimate, and similar questions related to whether we measure the exact quantity we aim for. is focused on addressing and correcting the representation errors in this framework, and hence for the rest of the section we will focus on the representation part of the framework.
The representation error deals with how to infer from a subset of people to the whole population about which we would like to learn, referred to as the target population. The magnitude of the error depends on the group of respondents to the survey and on how similar or different this group is from the target population. Figure <ref> shows the different sources of representation error and illustrates a breakdown of the difference between the group of respondents and the target population.
The first error we consider is the coverage error. Its driver is the misalignment between who can be sampled for the survey (the "sampling frame") and the target population. In today's world, where many, if not most, surveys are conducted through the internet, a common sampling frame is people with access to the internet. Since this sampling frame may not be representative of the whole population, caution should be taken when conducting surveys over the web.
The canonical example of a sampling frame that does not fully cover the target population is the "Literary Digest" 1936 poll <cit.>. That year the Literary Digest magazine ran a poll to predict the result of the U.S. election. Franklin Delano Roosevelt was the Democratic candidate and the Republican candidate was Governor Alfred Landon of Kansas. The magazine predicted a decisive victory for Landon based on a poll of roughly 2.4 million voters, but, as history tells us, Roosevelt won 62% of the votes. Even though the poll sampled 10 million people, the sampling frame was skewed. The sample included magazine readers, and people from phone lists and club membership lists. However, since people of lower socioeconomic status were disproportionately not part of the magazine's audience in those days, the poll missed a significant and distinct portion of the U.S. voter population. By setting a sampling frame that ignored the target population definition, the magazine incurred a large coverage error, which led to mis-predicting the election's results.
Once the sampling frame is set, the researcher samples a certain number of people from the frame to be asked to reply to the survey. This is the sample population, or the group of people who have a "real" opportunity to reply to the survey. In doing so, the researcher introduces another gap where error can occur due to sampling: the sampling error. This might be small if we are able to sample either completely at random from the sampling frame or by designing the sampling with the right sampling probabilities, but it can be significant given wrong assumptions on the structure of the sampling frame or a complex sampling mechanism. This error can be reduced as we increase the sample size (and will be 0 if the sample population is the same as the sampling frame), and it is the one captured by the margin of error often reported with survey results.
Once the researcher has sent the invitation to fill in the survey to the sample population, most often only a portion will choose to take part in the survey. These are the respondents, or the observed sample. This self-selection behavior causes another error component, the non-response error. This error can be substantial depending on how the survey is conducted, the survey questions, and other issues related to the instrument. The percentage of non-response can give us some intuition of how large this error is, but the actual size of the bias depends on the properties of the people who chose to respond. In the case of the Literary Digest poll the response rate was only 24%. In fact, research suggests that the primary source of the error in the poll originated in the non-response bias. Specifically, people who strongly disliked Roosevelt were more willing to take the time to mail back their response <cit.>.
balance aims to correct for all types of representation errors at once (see Figure <ref>). Using additional assumptions, as described in the next section, we are able to make the group of respondents, i.e. the observed sample, similar in properties to the target population and hence overcome some parts of the representation error. However, it is important to note that there are cases where it is impossible to fully correct the representation error. Such cases occur when the assumptions on the missingness are not satisfied. The simplest example of such a case is a substantial coverage error that we cannot overcome using auxiliary data. For example, if we want to learn about North America's population but survey people only from the United States, then even given lots of auxiliary information we will likely not be able to adjust the sample such that it correctly represents Canada's population as well.
§.§ Definitions and notations
With the Total Survey Error framework in mind, we will now set definitions to be used throughout the paper. Let 𝒮 denote a sample of respondents to a survey consisting of n respondents (sometimes referred to as the sample), and let 𝒯 represent a target population with N units.
Furthermore, we assume we have some auxiliary data on all units in the sample and target population, represented by an attached covariate (or feature) vector X_i. Note that we assume that the same covariates are available for the sample and the target; otherwise we ignore the non-overlapping covariates. This framework is applicable when we have the auxiliary data at the unit level for the entire population or for a representative random sample from it. For example, when we sample from a list of customers for which we have auxiliary information available, or in cases when a reference survey is available (a survey with better sampling properties to be used for correcting biases <cit.>). Another common use-case is when census data of the population is available.
We define R to be an indicator for inclusion in the respondents group[For estimation in balance, we think of the target population as a reference group, and hence units of the target population are distinct from the units of the observed sample.], i.e. R_i=1 if i ∈𝒮 and R_i=0 if i ∉𝒮. Furthermore, we define y to be the answer to one item of the survey. The answer can be numeric or discrete, and is observed only for 𝒮. In our setup, we think of y as a constant, and the random variable later considered for statistical properties is R.
Our objective is to estimate a certain parameter of the population. The simplest example is estimating the mean of one item of the survey, i.e. the mean of y. In this case, a natural estimate to y̅=1/N∑_i∈𝒯y_i is the sample mean y̅_𝒮=1/n∑_i∈𝒮y_i. However, due to the non-random sampling of 𝒮 from the population 𝒯, the proposed estimate will be biased, such that 𝔼[y̅-y̅_𝒮] ≠ 0.
§.§ Estimation of the survey weights
Weights are a common way to overcome survey error, and are essential when estimating a population parameter <cit.>. This is generally done by incorporating the weights, w_i where i ∈𝒮, into the parameter estimation procedure[In the balance package, we choose to scale the weights to sum to the population size. This way, each sample unit's weight represents the number of corresponding units from the population.]. An example is the case of estimating a parameter of the population using a weighted mean of the sample:
y̅_w = ∑_i=1^n w_i y_i /∑_i=1^n w_i
Further details about using the weights for estimations are described in section <ref>.
One of the advantages of using weights, relative to alternative methods, is the flexibility it gives in estimation. The weights depend only on the group of respondents and not on the outcome itself, and hence give the researcher the flexibility to use the same set of weights for multiple outcomes, combine multiple outcomes into one parameter, or consider the outcome only in specific cuts of the population.
We typically employ weights to adjust the observed sample to match the target population, so as to overcome the representation bias of the sample. In scenarios where the sampling procedure is set by design and therefore known, we define the inverse of the sampling (or selection) probabilities as design weights[We may assume each unit i in 𝒮 and/or 𝒯 has a design weight, d_i. These are the sampling weights that are based on the known sampling procedure from the sampling frame to the sample population. These are often set to 1 for the respondents when the sampling probabilities are unknown.]. However, to overcome the full gap between the respondents and the target population we need to estimate the weights according to the actual realization of the observed sample. When estimated against the complete target population, the weights can help address the non-response error, the by-design sampling error, the "unknown" sampling error and the coverage error.
A few assumptions are required to utilize the estimated weights for valid estimations of the parameter of interest and mitigate the representation bias. Under these assumptions, the estimation of the weights relies on the auxiliary data X_i.
The first assumption is the Missing At Random assumption (MAR) <cit.>. The MAR assumption states that the response mechanism is independent of the survey responses conditional on the auxiliary data. In other words, Y R | X, which means that given the covariates the likelihood of a person to respond to the survey doesn't depend on their answer. This assumption is also known as the ignorability assumption or conditional unconfoundedness in causal inference literature <cit.>. It is worth noting that recent research (such as <cit.>) proposes alternative approaches to address missing values created by design in surveys analysis.
The second assumption is positivity: 0<P(R_i=1|X_i)<1 for all units in 𝒮 and 𝒯. 0<P(R_i=1|X_i) means that given the auxiliary data, every unit of the target population has a non-zero probability of being included in the sample. In other words, in a counterfactual world, any unit i ∈𝒯 could have participated in the survey given their covariates. Conversely, we also assume P(R_i=1|X_i)<1, implying that every unit in the observed sample is also in the target population. The combination of the MAR assumption and the positivity assumption is often known as the "strong ignorability" assumption <cit.>.
Given these assumptions, we are now left with the question of how to estimate the weights. The balance package currently supports four methods for estimating the weights: post-stratification, raking, Inverse Propensity score Weighting (IPW or IPSW) and Covariate Balancing Propensity Score (CBPS). Next, we provide more background about the estimation process in each method and describe the advantages and limitations of each.
§.§.§ Post-stratification
Post-stratification <cit.> is one of the most common weighting approaches in survey statistics. It originates from a stratified sample or probability sampling, where the population is divided into sub-populations (strata) and a sample is independently drawn from each. However, in post-stratification, the stratification is done after the sample has been selected. This is done to overcome errors originating in mechanisms outside the sampling design, such as non-response.
The idea behind post-stratification is straightforward. For each cell (stratum) in the population, calculate the percentage it represents of the total population. Then fit weights that adjust each stratum in the sample to have the same proportion as that stratum has in the population.
Let ℋ be the group of units which represent some stratum in the population, and let P_ℋ represent the proportion of this stratum in the target population 𝒯, i.e. P_ℋ=|ℋ|/N. Let n_ℋ be the number of respondents from stratum ℋ in the observed sample. We also define the "inflation factor" as I = N/n, i.e. the factor indicating by how much we need to multiply the total observed sample size to get to the total target population size. Consequently, the post-stratification weight for each unit i from stratum ℋ in the observed sample is:
w_i = P_ℋn/n_ℋ * I ∀ i ∈ℋ
Note that the multiplication by I is a result of the arbitrary choice to scale the weights to the population size, and could be omitted.
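To make the computation concrete, the following is a minimal sketch of post-stratification weights using pandas. It is not the implementation used in balance; the function name and the assumption that the strata columns are shared by the sample and target DataFrames (and that every sample stratum appears in the target) are illustrative.
[language=Python]
import pandas as pd

def poststratification_weights(sample_df, target_df, strata_cols):
    # Proportion of each stratum in the target population: P_H = |H| / N
    p_target = (target_df.groupby(strata_cols).size() / len(target_df)).rename("p_target")
    # Proportion of each stratum among the respondents: p_h = n_H / n
    p_sample = (sample_df.groupby(strata_cols).size() / len(sample_df)).rename("p_sample")
    # Inflation factor I = N / n scales the weights to sum to the population size
    inflation = len(target_df) / len(sample_df)
    strata = p_target.to_frame().join(p_sample, how="inner")
    # Per-stratum weight: w = (P_H / p_h) * I
    strata["weight"] = strata["p_target"] / strata["p_sample"] * inflation
    # Attach the stratum weight to every respondent in that stratum
    return sample_df.merge(strata["weight"].reset_index(), on=strata_cols, how="left")["weight"]

# Illustrative usage (column names are assumptions):
# weights = poststratification_weights(sample_df, target_df, ["gender", "age_group"])
The resulting weights sum to the population size N, consistent with the scaling convention described above.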
The goal of post-stratification is to have the sample exactly match the joint-distribution of the auxiliary data of the target population. Hence it requires the researcher to know the joint distribution of the covariates to weight on. This level of resolution for the target population may not always be available. When only marginal distributions of the covariates are available then raking might serve as an alternative method to estimate the weights. Raking is described in sub-section <ref>.
Another limitation of post-stratification is the number of covariates that can be used to correct the biases, due to the need to have enough respondents in each of the cells. A cell with very few respondents could easily lead to a handful of respondents receiving very large weights, which inflates the variance of estimates based on such weights. Furthermore, when continuous variables are required for weighting, the researcher must decide on the thresholds for bucketing the variables. A more general approach is the inverse propensity score weighting described in sub-section <ref>.
§.§.§ Raking
Raking <cit.>, also known as Iterative Proportional Fitting procedure (IPF), is a method that fits the sample data to a target population using only the marginal distributions of the population's covariates. Typically, we have access to these marginal distributions but often not to their joint distribution. Since raking weights do not represent the joint distribution, this can be thought of as a type of regularized model. This approach helps to avoid over-fitting small cells as in post-stratification and instead focuses only on the marginals <cit.>.
Raking essentially applies post-stratification sequentially over all covariates using only the marginal distributions. This is done repeatedly until convergence is achieved. If they exist, the design weights of the sample are used as the starting point of the algorithm. For example, we may have the marginal distributions of gender, age, and education. Raking would first adjust the weights to match the gender distribution, then take these weights as input to adjust for age, and then for education. It would then adjust again for gender and then again for age, and so forth until it converges. This process repeats until one of three stopping criteria is met: (1) we reached a pre-defined number of iterations, (2) the maximum difference in proportions between the sample and target marginal distribution on any covariate is smaller than a preset convergence rate, or (3) the weights have converged and the change from one iteration to another is smaller than a preset rate tolerance parameter.
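The following is a minimal sketch of the iterative proportional fitting idea in plain numpy/pandas; it is not the ipfn back-end that balance relies on. The margins argument, the convergence threshold, and the simplified stopping logic are illustrative assumptions, and the sketch assumes the sample contains every category present in the margins.
[language=Python]
import numpy as np
import pandas as pd

def rake(sample_df, margins, max_iter=50, tol=1e-6):
    # margins: dict mapping a covariate name to a pandas Series of target
    # proportions indexed by that covariate's categories.
    w = np.ones(len(sample_df))  # start from uniform (or design) weights
    for _ in range(max_iter):
        max_diff = 0.0
        for col, target_props in margins.items():
            # current weighted proportions of this covariate in the sample
            cur = pd.Series(w).groupby(sample_df[col].to_numpy()).sum()
            cur = cur / cur.sum()
            max_diff = max(max_diff, (target_props - cur).abs().max())
            # post-stratify on this covariate's margin only
            factor = (target_props / cur).reindex(sample_df[col]).to_numpy()
            w = w * factor
        if max_diff < tol:  # all margins already matched at the start of this pass
            break
    return w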
The resulting weights will bring the sample close to the marginal distributions of the population covariates. However, one cannot assume that the weighted sample's joint distribution is the same as the joint distribution in the target population. Hence, if one wants to infer only for a sub-group of the population (such as young adults only), raking weights are less recommended, and, if possible, one should prefer a method that takes into account the joint distribution of the covariates, if such data exists.
Similar to post-stratification, raking is limited by the number of covariates that can be included in the model, due to the need to have enough respondents in each margin cell. In addition, raking may be sensitive to the order in which covariates are adjusted for, which may lead to under-correction of some covariates.
§.§.§ Inverse Propensity score Weighting (IPW)
A natural extension of post-stratification and raking is inverse propensity score weighting, which can be viewed as a continuous version of post-stratification.
The Propensity Score is defined as the conditional probability to be part of the observed sample given the covariates:
p(X):=Pr(R=1|X)
It was first suggested by Rosenbaum and Rubin <cit.> as a method to perform matching for causal effect estimation in observational studies, and was later adopted for weighting survey data <cit.>. Rosenbaum and Rubin <cit.> showed that the assumptions of "strong ignorability" (unconfoundedness: Y R | X and positivity: 0<P(R_i=1|X_i)<1) imply that Y R | p(X), and that p(X) is the coarsest balancing score (a score B(X) that satisfies Y R | B(X)). This means that the propensity score is an inexpensive way, in terms of dimension, to estimate the selection probabilities. Hence, in the spirit of the "Horvitz–Thompson estimator" <cit.> of using the inverse selection probabilities as weights, the inverse of the propensity score was suggested as a weighting procedure to adjust for non-response bias <cit.>.
The estimation of the propensity scores can be done with any standard classification tool, such as logistic regression, decision trees and random forests (as in <cit.>), or others. The choice of model depends on the researcher's assumptions regarding the parametric model of the non-response and the number and types of features used. In balance, we chose to implement the estimation of the propensity scores through a (regularized) logistic regression. The logistic regression model assumes a linear relation between the covariates and the log odds, of the form:
log(p_i/(1-p_i)) = β^T X_i
Once the propensity scores are estimated, the weight of unit i is calculated as w_i=(1-p̂_i)/p̂_i. This is because we define the target population as a reference group and do not assume that the target excludes the observed sample (i.e. we do not exclude units from the target based on their appearance in the sample). In this case, borrowing concepts from the causal inference literature <cit.>, the estimation we care about is only the estimation of the average treatment effect for the "control" ("untreated") group (the target population), and hence we use (1-p̂_i)/p̂_i as the weights.
One challenge when including many covariates is that the estimation of the propensity scores (and hence the weights) can have a high variance, which may lead to unnecessary inflation of the variance of the survey estimates[Note that the variance estimator of the weighted mean presented in subsection <ref> is a closed-form formula that assumes fixed weights, and hence the variability in the estimation of the weights is not reflected in the formula. A more accurate estimator of the variance would rely on bootstrap samples, which are more computationally expensive.] <cit.>. In balance we try to mitigate this by applying regularization to the logistic model using LASSO (Least Absolute Shrinkage and Selection Operator) <cit.>. This excludes, or reduces the magnitude of, the coefficients of covariates that are not predictive of the response mechanism in the propensity model. This helps to minimize the variance of the estimated weights, at the potential expense of some consistent (hopefully small) bias in their estimated values. However, this process does not exclude covariates that are uncorrelated with the outcome itself; these should be excluded by the researcher in order to avoid variance inflation <cit.>. Another protective measure against variance inflation and extreme weights is weight trimming. balance offers automatic trimming; for details see subsection <ref>.
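As an illustration of the approach (not the exact implementation in balance), the following sketch estimates propensity scores with an L1-regularized logistic regression from scikit-learn, forms the weights (1-p̂)/p̂ for the respondents, and applies a simple quantile-based trimming. The function name, the regularization strength C, and the trimming quantile are illustrative assumptions.
[language=Python]
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_weights(sample_X, target_X, C=1.0, trim_quantile=0.99):
    # Stack sample (R=1) and target (R=0) covariates for the propensity model
    X = pd.get_dummies(pd.concat([sample_X, target_X]), drop_first=True)
    R = np.r_[np.ones(len(sample_X)), np.zeros(len(target_X))]
    # L1-regularized (LASSO-style) logistic regression for Pr(R=1 | X)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    model.fit(X, R)
    p = model.predict_proba(X.iloc[: len(sample_X)])[:, 1]
    # Weight each respondent by (1 - p) / p (target treated as reference group)
    w = (1 - p) / p
    # Simple trimming of extreme weights to limit variance inflation
    w = np.minimum(w, np.quantile(w, trim_quantile))
    return w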
Another weakness of inverse propensity score weighting is that it may strongly depend on the specification of the propensity score model, as shown in a simulation study in <cit.>. Imai and Ratkovic <cit.> suggested the method of Covariate Balancing Propensity Score (CBPS) (described in the next subsection) to overcome this issue. Fitting tree-based methods for the propensity scores has also been shown to be a good alternative <cit.>.
§.§.§ Covariate Balancing Propensity Score (CBPS)
Covariate Balancing Propensity Score (CBPS), suggested by Imai and Ratkovic <cit.>, is a method to estimate the propensity score in a way that also maximizes the covariate balance. The method is preferable in cases of misspecification of the propensity score model, which may lead to a bias in the estimated weights (and consequently, in the estimated survey statistic). CBPS is described in detail in <cit.> and implemented in the R package <cit.>. We give here a short summary of the method for completeness of the estimation methods section.
The CBPS method is an extension of the maximization problem of logistic regression. The propensity score of the logistic regression model is given by:
p_β(X_i) = exp(β^T X_i)/(1+exp(β^T X_i)) ∀ i ∈𝒮∪𝒯
By the maximum-likelihood approach, β is estimated by maximizing the log-likelihood, which results in:
β̂_MLE = argmax_β ∑_i ∈𝒮 log(p_β(X_i)) + ∑_i ∈𝒯 log(1-p_β(X_i))
At the maximum of the log-likelihood, β satisfies the first-order condition:
1/n[ ∑_i ∈𝒮 p^'_β(X_i)/p_β(X_i) - ∑_i ∈𝒯 p^'_β(X_i)/(1-p_β(X_i)) ] = 0
where p^'_β(X_i) is the derivative of p_β with respect to β^T.
This condition can be viewed as the condition that balances a certain function of the covariates, in this case the derivative of the propensity score p^'_β(X_i).
Generally, one can expand the above to hold for any function f of the covariates X, where the choice of f(X) depends on the researcher's goals and assumptions:
𝔼{∑_i ∈𝒮 f(X_i)/p_β(X_i) - ∑_i ∈𝒯 f(X_i)/(1-p_β(X_i))} = 0
The CBPS method chooses f(X)=X as the balancing function in order to balance the first moment of each covariate, in addition to the derivative of the parametric propensity score model. The estimation of the propensity score is then done using the Generalized Method of Moments (GMM) <cit.> and is described in <cit.>.
§.§ Evaluation of survey weights
§.§.§ Overview
As mentioned, survey weights are essential to improve the accuracy of survey estimates, but their reliability and validity hinge on several assumptions and modeling decisions.
Survey weights are valuable when: (a) the non-response pattern is sufficiently captured by the measurable covariates, (b) the covariates are accurately represented in the fitted propensity score model, ensuring that the weighted distribution of covariates in the sample closely resembles that in the target population, and (c) the survey weights correlate with the outcome of interest to an extent that justifies the increased variance resulting from the weights <cit.>.
To assess the degree to which survey data empirically complies with the above criteria, diagnostic measures can be applied to each of the main elements: covariates, weights, and outcome.
Both covariates and outcomes can be checked before and after applying the weights, allowing for a comprehensive assessment of the weights' influence on the data. Such evaluations help confirm whether the weights have successfully enhanced the representativeness of the sample data in a way that also substantially influences the outcome of interest. Additionally, various diagnostics can be performed on the weights themselves, to understand whether there are extreme weights which dominate the sample, as well as the overall impact the weights have on the effective sample size.
Distributions can be compared using summary statistics and plots, with calculations incorporating the fitted survey weights. The following sections describe various methods for that purpose.
§.§.§ Visualizing Distributions
Distribution plots are effective tools for visualizing the covariates and outcomes in the data, offering insights that extend beyond basic summary statistics. For numerical variables there are Kernel Density Estimator (KDE) plots (see an example in Fig <ref>), histograms, and quantile-quantile (QQ) plots. For categorical variables it is common to use bar-plots (see an example in Fig <ref>). These distribution plots enable users to observe the differences between the observed sample and the target population, as well as the influence of the applied weights.
The advantage of visualizations lies in their ability to reveal unexpected patterns in the complete range of data, as opposed to looking at summary statistics only. However, scaling these visualizations can be challenging. For instance, while examining KDE plots for each covariate comparing the sample and target population is informative, it is often more efficient for the researcher to have summary statistics that can quickly convey the extent of bias in different features. This is particularly useful when evaluating multiple weighting solutions. The following sections discuss particular summary statistics that help address this need.
§.§.§ Diagnostics for the covariates using ASMD
A fundamental statistic for comparing distributions is the first moment, i.e. the mean, of each covariate for the target and the sample (weighted or unweighted). For each covariate, it is insightful to observe how much closer the application of weights brings us to the target mean. The Absolute Standardized Mean Deviation (ASMD) can be used to summarize this effect.
The Absolute Standardized Mean Deviation (ASMD) is a statistical measure employed to compare the means of two groups (in our case, the sample and the target). It is computed as follows:
ASMD = | X̅_Sample - X̅_Target| /SD
where X̅_Sample and X̅_Target are the means of the sample and target. The SD can be either the pooled standard deviation of the sample and target, or the standard deviation of the target population. In balance we use the standard deviation of the target population.
The concept of ASMD is derived from the standardized mean difference, which is a measure of effect size used to compare the means of two groups, expressed in the standard deviation units. This is often referred to as Cohen’s d <cit.>, a standardized measure of the magnitude of the difference between two means. ASMD values range from 0 to infinity, with larger values indicating greater differences between the means of the two groups.[A value of 0 signifies no difference between the means, while a value of 1, for example, indicates that the difference between the means is equal to one standard deviation. ASMD is most easily conceptualized when the distributions being compared are unimodal and symmetric.]
The ASMD can be calculated using either the unweighted or the weighted mean of the sample, and these two quantities can be compared. If applying the weights leads to an ASMD value that is closer to 0 than the ASMD of the unadjusted sample, then it is an indication that the weights help to reduce the bias. The level of adjustment can be measured by taking the difference of these two ASMD values.
ASMD_diff = ASMD_unadjusted - ASMD_weighted
The smaller the adjusted ASMD (the ASMD that is based on the weighted mean of the sample) is compared to the unadjusted ASMD (based on the unweighted mean) - i.e., the larger ASMD_diff is - the stronger the indication we have of the potential benefit of the weights for adjusting a bias in the covariates. The magnitude of the difference between the two ASMD values is a measure of the impact of the weights. If ASMD_diff is positive then the weights have helped reduce the bias, while a negative value indicates that the weights have potentially increased the bias.
Since we often wish to adjust over many covariates, then the ASMD difference from each covariate can be summarized by taking the average ASMD (or ASMD_diff) over all covariates. This gives a single summary statistic to measure the level of impact the weights had on reducing the bias of the sample in the covariates.
For categorical variables, one possible approach for the ASMD calculation is to use dummy variables and calculate the ASMD for each of them. This per-dummy-variable approach could lead to over-weighting categorical variables with many categories when calculating the mean ASMD. A possible solution is to aggregate these ASMDs per covariate, i.e. calculate a single mean ASMD value for each categorical variable, so that the overall mean ASMD gives each variable the same weight in the final calculation.
For some limitations of using ASMD, see the appendix section <ref>.
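For a single numeric covariate, the ASMD and the improvement from weighting can be computed as in the following sketch (illustrative, scaling by the target standard deviation, as done in balance):
[language=Python]
import numpy as np

def asmd(sample_x, target_x, weights=None):
    # ASMD = |weighted mean(sample) - mean(target)| / sd(target)
    if weights is None:
        weights = np.ones(len(sample_x))
    sample_mean = np.average(sample_x, weights=weights)
    return np.abs(sample_mean - np.mean(target_x)) / np.std(target_x)

def asmd_improvement(sample_x, target_x, weights):
    # ASMD_diff = ASMD_unadjusted - ASMD_weighted (positive => bias reduced)
    return asmd(sample_x, target_x) - asmd(sample_x, target_x, weights)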
§.§.§ Diagnostics for the weights
Kish's design effect
One important aspect to consider when using survey weights is the potential increase in the variance of some estimate of interest (e.g. the mean) due to the variability in the weights. This is measured by a quantity known as the design effect, which is generally defined as the ratio of the variance of the weighted estimator to the variance expected under a simple random sample (SRS) without replacement <cit.>. It assesses the potential impact that weights might have on the variance of estimating the weighted mean.
Kish's design effect <cit.> is a widely known and commonly used design effect measure for the potential impact that weights might have on the variance of estimating the population mean using the weighted mean. Kish's design effect assumes that there is no correlation between the weights and the outcome variable, also known as "haphazard weights". Its formula is:
D_eff = n ∑_i=1^n w_i^2/(∑_i=1^n w_i)^2 = (1/n∑_i=1^n w_i^2)/(1/n∑_i=1^n w_i)^2,
i.e. the mean of the squared weights divided by the square of the mean weight.
The effective sample size proportion (ESSP) indicates what is the effective proportion of sample size we'll keep after applying the weights. It's simply the inverse of D_eff:
ESSP = 1/D_eff
The effective sample size is a related measure that takes into account both the design effect and the actual sample size. It can be used to approximate the number of independent observations that would yield the same variance as the weighted sample. The effective sample size is calculated as follows (where n is the sample size):
n_eff = ESS = n/D_eff
The effective sample size provides a useful way to gauge the impact of the weights on the precision of the estimates. A smaller effective sample size indicates that the weights have introduced greater variability in the estimates, potentially requiring a larger actual sample size to achieve the desired precision.
Further details on assumptions and proofs are available in appendix <ref>.
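These quantities are straightforward to compute from a vector of weights; the following is a small illustrative sketch:
[language=Python]
import numpy as np

def kish_design_effect(w):
    w = np.asarray(w, dtype=float)
    # D_eff = mean(w^2) / mean(w)^2
    return np.mean(w**2) / np.mean(w) ** 2

def effective_sample_size(w):
    deff = kish_design_effect(w)
    essp = 1.0 / deff       # effective sample size proportion
    ess = len(w) / deff     # effective sample size
    return deff, essp, ess

# For example, 1000 respondents with a design effect of about 1.9
# yield an effective sample size of roughly 1000 / 1.9, i.e. about 527.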
Summary Statistics for the Distribution of Weights
While Kish's design effect can be used to estimate an effective sample size as a summary measure for the impact of using weights, it may also be beneficial to examine the distribution of weights using other summary statistics. For instance, extremely large or small weights could indicate potential issues with the weighting process or the presence of outliers in the data used for estimating the weights. Furthermore, the distribution of weights can help determine whether the weights are concentrated on a small number of observations or more evenly distributed across the sample. These observations are often easier to infer from summary statistics than from distribution plots of the weights. Understanding the distribution of the weights can also help to better understand Kish's design effect (and effective sample size) value, which may indicate whether follow-up manipulation of the weights is necessary (such as using an alternative weighting model or weight trimming).
For diagnostic purposes, it is often more convenient to examine the weights after they have been normalized so that their sum equals the sample size, i.e. by dividing each weight in the sample by the average of the weights (w_i^* = w_i / w̅). When weights are normalized this way, they have the appealing property that a weight's deviation from 1 indicates how informative its observation is. A weight smaller than 1 for an observation indicates that the weighting procedure considers this observation less informative than the average observation. Conversely, a weight larger than 1 suggests that this observation is more informative, on average, than other observations.
For instance, if we have weights based on gender and find that males have weights smaller than 1 while females have weights larger than 1, we can infer that our sample has an over-representation of males and an under-representation of females - an imbalance that the weights attempt to rectify.
It is helpful to look at the distribution of the weights. Looking at the KDE plot can help detect a multimodal distribution (which might indicate clusters of respondents with higher or lower representativeness of the population). It is also helpful to look at basic summary statistics, such as the main quartiles (25%, 50%, and 75%) as well as the proportion of weights above and below certain values (e.g. over 2 and under 0.5, along with other similar quantities). This can help identify which proportions of the responses might be over- or under-weighted. Such insights could lead to follow-up changes to the final weighting model. For example, if we find that a handful of respondents have extremely large weights, we might decide to look at the skewed features. We might find a need to bucket some classes of a covariate together, remove some features from the weighting model, use weight trimming, or apply some other post-processing manipulation to the weights.
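A possible sketch of such a summary, after normalizing the weights to have mean 1 (the thresholds 0.5 and 2 follow the example above and are otherwise arbitrary):
[language=Python]
import numpy as np
import pandas as pd

def weights_summary(w):
    # Normalize so that the mean weight is 1 (weights sum to the sample size)
    w_star = np.asarray(w, dtype=float)
    w_star = w_star / w_star.mean()
    s = pd.Series(w_star)
    return pd.Series({
        "min": s.min(),
        "25%": s.quantile(0.25),
        "50%": s.quantile(0.50),
        "75%": s.quantile(0.75),
        "max": s.max(),
        "prop(w < 0.5)": (s < 0.5).mean(),
        "prop(w > 2)": (s > 2).mean(),
    })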
§.§.§ Diagnostics for the outcome
The entire procedure of fitting weights and running diagnostics is geared towards an impactful change in the outcome (or outcomes) of interest, in the direction of reducing the estimation bias. A common population parameter of interest is the mean. The statistics used to review it are the sample weighted mean and the variance of the weighted mean, as well as asymptotic confidence intervals.
The formula for the weighted mean, using a Horvitz–Thompson estimator <cit.>, is simply:
y̅_w = ∑_i=1^n w_i y_i /∑_i=1^n w_i
The variance of the weighted mean is based on the π-estimator for the ratio-mean <cit.>:
V(y̅_w) = 1/(∑_i=1^n w_i)^2 ∑_i=1^n w_i^2 (y_i - y̅_w)^2
This estimator works for cases in which the probabilities of selection of the y_i are not identical, treating the y_i values themselves as fixed.[The formula presented for the variance of the weighted mean assumes that the weights are known and fixed quantities. Hence, this formula does not account for the uncertainty that is introduced from the estimation of the weights. If measuring this uncertainty is of interest, then it is possible to perform an end-to-end bootstrap simulation which includes re-sampling from the sample, calculating the weighted mean estimation, repeating the process several times, and using the bootstrap estimations of the mean to estimate the variance.] See section <ref> for more details.
The confidence intervals (CI) available use the above formula and are the standard approximate CIs based on the central limit theorem:
CI(μ): y̅_w ± z_α/2√(V (y̅_w))
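The three formulas above can be translated directly into code; the following sketch (treating the weights as fixed, as discussed in the footnote) is illustrative rather than the balance implementation:
[language=Python]
import numpy as np
from scipy import stats

def weighted_mean_ci(y, w, alpha=0.05):
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    # Horvitz-Thompson style weighted mean
    ybar_w = np.sum(w * y) / np.sum(w)
    # Variance of the ratio-mean estimator, treating the weights as fixed
    var = np.sum(w**2 * (y - ybar_w) ** 2) / np.sum(w) ** 2
    z = stats.norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(var)
    return ybar_w, (ybar_w - half, ybar_w + half)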
In an applied setting, it is advisable to calculate the weighted means and their CIs after applying the weights, and also without the weights, and compare the quantities to each other. The difference between the weighted and unweighted means can be thought of as an estimator of the potential bias reduced by using the weights (assuming the general trend of the ASMD calculations on the covariates indicates a positive improvement in their imbalance). This estimated bias can be compared to the effective sample size to allow a rough decision on whether the increase in variance due to the weights is adequately compensated for by the reduction in bias.
§ THE WORKFLOW
§.§ The workflow
Survey data weighting using is achieved with the following three main steps:
* Understanding the initial bias in the data relative to a target population: First, the survey data is loaded for both respondents and the target population. A pandas DataFrame (e.g., read from a CSV file with pandas) is converted into a Sample class object with Sample.from_frame(). A similar step is repeated for the target population's data, and then the two objects are combined by assigning the target object as the target of the sample object. Once the data is loaded, we can conduct a diagnostic evaluation of the sample-vs-target covariate distributions to determine if weighting is necessary. These include ASMD and distribution plots such as bar charts and kernel density estimation plots.
* Adjusting the sample to the target: next, we generate weights for the sample so that it more accurately represents the target population's distributions. Currently, the package implements the following methods: Inverse Probability Weighting (IPW) using LASSO regression, Covariate Balancing Propensity Score (CBPS), post-stratification, and raking. These are all available through the adjust() method of the Sample class.
* Results evaluation: once the weights are estimated, their effect is evaluated on the covariates' imbalance (again, using ASMD and plots), on the effective sample size, and on the change in the weighted mean of the outcome as well as its confidence intervals.
The next section gives a detailed example for applying this workflow.
§.§ An end-to-end example
§.§.§ Understanding the initial bias
Loading simulated data
This section presents an example using simulated data taken from the balance tutorial page <cit.>. The data set is comprised of two pandas DataFrames: one for the target population and the other for the sample population. Both DataFrames contain an identifier column (id), three covariate columns (gender, age_group, and income), and an outcome variable (happiness)[Code for creating the distributions is available here: <https://github.com/facebookresearch/balance/blob/main/balance/datasets/__init__.py#L17>.]
In this particular simulation, we intentionally designed the outcome to be associated with all covariates, ensuring that this relationship remains consistent for both the target and sample populations. It is important to note that in real-world data sets, we generally don't observe the outcome for the target population. However, in this simulated data set we have included it for illustrative purposes. This setup allows us to later demonstrate how weighting methods can mitigate bias and approximate population-level parameters more accurately.
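As a rough illustration of how such a data set could be generated, the following is a hypothetical sketch; the actual data-generating code is linked in the footnote above, and the distributions, probabilities, and coefficients below are invented for illustration only.
[language=Python]
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def simulate(size, age_probs, gender_probs):
    # Covariates: the probabilities control how skewed the group is
    age_group = rng.choice(["18-24", "25-34", "35-44", "45+"], size=size, p=age_probs)
    gender = rng.choice(["Male", "Female"], size=size, p=gender_probs)
    income = rng.gamma(shape=2.0, scale=2000.0, size=size)
    # The outcome depends on all covariates, with the same relation in
    # sample and target, so that weighting can recover the population mean
    happiness = (40 + 5 * (gender == "Female") + 3 * (age_group == "45+")
                 + 0.001 * income + rng.normal(0, 10, size))
    return pd.DataFrame({"id": np.arange(size), "gender": gender,
                         "age_group": age_group, "income": income,
                         "happiness": happiness})

# Target: balanced covariates; sample: skewed towards young males
target_df = simulate(10_000, [0.25, 0.25, 0.25, 0.25], [0.5, 0.5])
sample_df = simulate(1_000, [0.50, 0.25, 0.15, 0.10], [0.7, 0.3])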
In real-world use cases the data is often loaded with pandas (e.g., from a CSV file). Here, we use pre-made DataFrames that can be loaded (and inspected) using the following Python code:
[language=Python]
from balance import load_data
# INFO (2023-05-14 09:00:15,410) [__init__/<module> (line 52)]: Using balance version 0.9.0
target_df, sample_df = load_data()
print("sample_df: ", sample_df.head())
Creating instances of the Sample class with the DataFrames
The main class for our analyses is the Sample class from the balance package. The following illustrates how we load the DataFrames into this class:
[language=Python]
from balance import Sample
sample = Sample.from_frame(sample_df, outcome_columns=["happiness"])
target = Sample.from_frame(target_df, outcome_columns=["happiness"])
# Usually the code will be simply:
# target = Sample.from_frame(target_df)
# This is since most times we do not have the outcome for the target. In the example in this paper we have added it just to validate later that the weights indeed help us reduce the bias of the outcome.
# Following this, we associate the Sample object instance of sample with that of the target object, enabling us to adjust the sample to match the target.
sample_with_target = sample.set_target(target)
The Sample class provides a wide range of attributes, methods, and properties. For instance, the df property reveals the DataFrame encapsulated within the instance of the class (e.g. sample_with_target.df):
Invoking the object directly provides a concise summary of its attributes:
[language=Python]
sample_with_target
Exploring the imbalances in covariates
We can use the covars() accessor, with methods such as plot() and asmd(), to get some diagnostics about the imbalance.
For example, we can use the plot() method to look at the distributions of covariates in the sample versus the target data.
[language=Python]
sample_with_target.covars().plot()
The output in Figure <ref> helps to easily identify imbalance. For example, we can see the sample has many more males than females, as opposed to a 50%-50% split in the target population. And for age_group we can see how the sample is skewed towards younger respondents, as compared to the target population.
The balance package leverages plotly <cit.> (as the default) to create interactive visualizations, but it also supports static figures using the seaborn package <cit.> for added flexibility.
By default, the asmd() method compares the (unweighted) sample with the target using dummy variables for categorical covariates, and calculates the ASMD for each of them. The aggregate ASMD per covariate can be obtained using the aggregate_by_main_covar argument, as described in Section <ref>.
[language=Python]
print(sample_with_target.covars().asmd(aggregate_by_main_covar = True).T.round(2))
The ASMD helps quantify the levels of imbalance in each covariate.
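For intuition, the ASMD can also be computed by hand on the dummy-encoded covariates. The sketch below scales the mean difference by the target's standard deviation; this scaling convention is an assumption of the sketch and may differ in detail from the package internals.
[language=Python]
# Illustrative ASMD-style computation on dummy-encoded covariates.
import pandas as pd

def asmd_table(sample_df, target_df, covars):
    s = pd.get_dummies(sample_df[covars])
    t = pd.get_dummies(target_df[covars])
    cols = s.columns.intersection(t.columns)
    diff = (s[cols].mean() - t[cols].mean()).abs()
    # Scale by the target's standard deviation (an assumption of this sketch).
    return (diff / t[cols].std()).sort_values(ascending=False)

print(asmd_table(sample_df, target_df, ["gender", "age_group"]).round(2))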
§.§.§ Fitting survey weights
In order to estimate weights for the sample, the adjust() method is used on the sample_with_target object. The default adjustment method is ipw; other methods can be invoked through an argument of adjust().
[language=Python]
# Using ipw to fit survey weights
adjusted = sample_with_target.adjust()
§.§.§ Evaluating the Results
Covariates
We can get a basic summary of the results using the summary() method:
[language=Python]
print(adjusted.summary())
It shows that the weights led to a reduction of around 60% in the mean ASMD (from 0.327 to 0.132), and that the price we paid for it is an increase in the variance of the estimator by a factor of 1.897 compared to a simple random sample (as calculated using Kish's design effect, assuming haphazard weights).
The same tools used to evaluate the bias before adjustment can be used for evaluating the effect of the weights on the balance after adjustment.
[language=Python]
adjusted.covars().plot()
The output in Figure <ref> shows how the weights help mitigate some (though not all) of the bias, for all three covariates (gender, age and income).
We can also see the improvement per covariate (averaged across its categories) using the asmd() method:
[language=Python]
print(adjusted.covars().asmd(aggregate_by_main_covar = True).T.round(2))
We can see that while we got improvements in all covariates, there is still some imbalance that remained, especially in the income variable.
Weights
Next, we wish to look at the diagnostics of the weights, to identify whether there are any extreme weights or other signs of issues that require further investigation. This can be done by calling the summary() method on the weights() of the adjusted object.
[language=Python]
print(adjusted.weights().summary().round(2))
We can see a design effect of 1.9, which corresponds to an effective sample size proportion of around 53%. Since the size of the sample was 1000, this means that the effective sample size is roughly 527.
We can also see that 65% of the weights are below 1, meaning that we down-sized 65% of our sample. The minimal weight is 0.31 and the max weight is 11.65, with almost no weights above 10. A conclusion here is that the weights are not too extreme and we get some sense of the cost that using the weights would incur on the precision of our estimates.
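These diagnostics can be reproduced directly from the raw weights vector, which makes the definitions concrete. The sketch below assumes the fitted weights are available as a NumPy array w:
[language=Python]
# Reproducing the main weight diagnostics from a raw weights vector `w`
# (assumed to be a NumPy array of the fitted weights).
import numpy as np

def weight_diagnostics(w):
    w = np.asarray(w, dtype=float)
    design_effect = np.mean(w**2) / np.mean(w)**2       # Kish's design effect
    return {
        "design_effect": round(design_effect, 2),
        "effective_sample_size": round(len(w) / design_effect, 1),
        "prop_below_mean": round(np.mean(w < w.mean()), 2),  # share of down-sized units
        "min_to_mean": round(w.min() / w.mean(), 2),
        "max_to_mean": round(w.max() / w.mean(), 2),
    }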
Outcome
The summary() method on the outcomes() of the adjusted object gives us the weighted means and confidence intervals for the adjusted sample, the target, and the unadjusted sample data.
From the results below we can see that the real population-level mean of happiness in the simulation was 56.2. In our (unweighted/unadjusted) sample it was 48.5, i.e., the bias was roughly 7.7 points. After applying the weights, we got a value of 53.3, reducing the bias to roughly only 2.9 points. Note that this comparison is only possible in a simulated environment and is given here as a proof of concept of the effect of the weights. We can also see that the CIs of the adjusted and unadjusted samples cover very different ranges, indicating that the weights produced a significant change in the estimated mean[Comparing the CI of the data with and without the weights is a good approximation of the impact of the weights, but it is not statistically precise. Future work is planned for introducing more formal confidence intervals of the impact of the weights by using a paired t-test style analysis. See the discussion and future work section for more.]. While the model reduced the bias, we know it did not remove it completely. This is because the model also did not perfectly fix the covariate imbalance, since it used some regularization in the process.
[language=Python]
print(adjusted.outcomes().summary())
adjusted.outcomes().plot()
The output of plot() is shown in Figure <ref>. It shows that we got a relatively symmetrical uni-modal distribution (before and after applying the weights), so we do not observe any strongly irregular behaviour of the outcome. Note that we are able to compare the outcome with and without the weights in the sample to the real outcome distribution in the target population only because this is simulated data. In real-world cases, we are not expected to have access to the outcome distribution of the target population. Also, it is relatively common to get outcome responses on binary or Likert scales rather than as a continuous variable; the package works with these just as well.
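Since the target outcome is known in this simulation, the bias reduction can also be verified by hand from the outcome vectors and the weights. A rough sketch (the helper name is an assumption of this example):
[language=Python]
# Checking the effect of the weights on the outcome mean by hand. This is
# only possible here because the target outcome is known in the simulation.
import numpy as np

def outcome_bias_report(y_sample, y_target, w):
    target_mean = np.mean(y_target)
    unweighted = np.mean(y_sample)
    weighted = np.average(y_sample, weights=w)
    print(f"target mean:     {target_mean:.1f}")
    print(f"unweighted mean: {unweighted:.1f} (bias {unweighted - target_mean:+.1f})")
    print(f"weighted mean:   {weighted:.1f} (bias {weighted - target_mean:+.1f})")

# outcome_bias_report(sample_df["happiness"], target_df["happiness"], w)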
Downloading data
Once we are satisfied with the weights, we can download them as a CSV file, as follows:
[language=Python]
adjusted.to_download() # Will create a download link in jupyter
# We can also prepare the data to be exported as csv
# The following code shows the first 500 characters for simplicity:
adjusted.to_csv()
§.§ How does balance implement the adjustment?
Pre-processing
Before applying any of the adjustment methods, balance performs a pre-processing step to improve the models' results. This step includes a few best practices that make the default usage of the package easy and automatic.
Transformations. balance applies the following default behaviours:
* Handling missing values: the package handles missing values automatically by adding a special indicator column for any variable that contains missing values. The advantage is that missing values are then treated as a separate category in the adjustment.
* Feature engineering: by default, the package applies feature engineering so as to fit the covariate distribution better, and not only its first moment. Specifically, each continuous variable is bucketed into 10 quantile buckets, and rare categories are grouped together so as to avoid overfitting[The user also has the option to change these default behaviours by passing different values to the relevant argument of the adjustment call.]. A rough approximation of these defaults is sketched below.
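The sketch below approximates these defaults in plain pandas. The 10 quantile buckets follow the description above, while the 5% rarity threshold and the helper names are assumptions of this sketch, not the package's actual implementation.
[language=Python]
# Illustrative approximation of the default transformations described above.
import pandas as pd

def add_missing_indicator(df: pd.DataFrame, col: str) -> pd.DataFrame:
    out = df.copy()
    out[col + "_is_missing"] = out[col].isna()  # missingness as its own category
    return out

def bucket_continuous(s: pd.Series, n_buckets: int = 10) -> pd.Series:
    return pd.qcut(s, q=n_buckets, duplicates="drop")  # 10 quantile buckets

def lump_rare_categories(s: pd.Series, min_share: float = 0.05) -> pd.Series:
    shares = s.value_counts(normalize=True)
    rare = shares[shares < min_share].index
    return s.where(~s.isin(rare), other="_lumped_other")  # group rare categories

income_bucketed = bucket_continuous(sample_df["income"])
age_lumped = lump_rare_categories(sample_df["age_group"])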
Model matrix. The model matrix of the covariates, used for the logistic regression in ipw and for the CBPS propensity score, is constructed before the fitting is done, using the transformed variables and one-hot encoding for discrete variables. The default behaviour is an additive model including all joint covariates of the target and the observed sample. However, through a formula argument, one can specify a particular relation between the variables. The formulas adopt the notation of the Python formula package <cit.>, facilitating a range of operations like addition, multiplication (for interaction effects), and power transformations. A detailed example is available in the https://import-balance.org/docs/tutorials/balance_transformations_and_formulas/"balance: transformations and formulas" tutorial <cit.>.
Adjustment through ipw
The ipw method is implemented using LASSO-regularized logistic regression.
To avoid unbalanced classes in the logistic regression, balance scales the prevalence of the target population to be similar to that of the observed sample.
The penalty factor λ of the LASSO is chosen through cross-validation. Two methods for choosing the parameter are suggested:
* Unbounded design effect: if one does not want to bound the design effect of the resulting weights (the default behaviour), the penalty is chosen as the largest value of λ such that the cross-validated error is within one standard error of the minimum value.
* Bounded design effect: if one chooses to bound the design effect, a grid search is performed over 10 values of λ for which the resulting design effect stays within the bound, and the chosen λ is the one that yields the largest ASMD reduction.
In addition, a penalty-factor argument can also be used to indicate how much the model should focus on adjusting each term of the formula. Larger penalty factors mean that the covariate is more likely to be regularized away by the LASSO penalty, and as a result the adjustment of this covariate will be smaller, i.e. it will end up less balanced. This feature can be particularly useful when certain components are believed to be more or less responsible for bias in the data, or when the user wants to explore different adjustment scenarios.
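To illustrate the general idea behind ipw-style weighting (and not the package's internal implementation, which selects the penalty as described above), one can fit an L1-regularized logistic regression with scikit-learn that discriminates sample from target units and invert the fitted propensities:
[language=Python]
# Sketch of ipw-style weights via L1-regularized logistic regression.
# NOT the package's internal code: the regularization strength here is
# picked by plain cross-validation rather than the rules described above.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV

covars = ["gender", "age_group", "income"]
X = pd.get_dummies(pd.concat([sample_df[covars], target_df[covars]]), drop_first=True)
y = np.r_[np.ones(len(sample_df)), np.zeros(len(target_df))]  # 1 = sample, 0 = target

model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5).fit(X, y)
p_sample = model.predict_proba(X[: len(sample_df)])[:, 1]  # P(unit is in the sample)
w = (1 - p_sample) / p_sample                              # inverse-propensity odds
w = w * len(target_df) / w.sum()                           # scale to the population size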
Post-processing
Weights in balance are scaled to the population size after they are estimated, and can be interpreted as the number of people from the target population that each sample unit represents. After the adjustment and scaling are done, the weights are trimmed from above. This is done in order to avoid overfitting of the model and unnecessary variance inflation. The weights are trimmed and rescaled in a way that keeps the mean and sum of the weights the same as before trimming, so that the interpretation of how many target units each sample unit represents still holds after trimming.
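A simple version of such mean-preserving trimming is sketched below; the trimming threshold (20 times the mean weight) is an assumption of this sketch.
[language=Python]
# Mean-preserving trimming from above: cap extreme weights, then rescale
# so that the sum (and hence the mean) of the weights is unchanged.
import numpy as np

def trim_weights(w, max_ratio=20.0):
    w = np.asarray(w, dtype=float)
    original_sum = w.sum()
    capped = np.minimum(w, max_ratio * w.mean())  # trim only from above
    return capped * original_sum / capped.sum()   # restore the original sum and mean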
§ FUTURE DIRECTIONS
The package offers benefits for researchers interested in analyzing data with non-response bias in the Python environment: it is easy to use, provides an end-to-end workflow, and is released as open source. While comprehensive, there is still room for improvement and expansion. This section highlights several possible areas for future development in the package.
* Better Diagnostic Tools for Covariates: The current metric of ASMD has limitations, especially when applied to a wide range of distributions and for categorical variables. Future versions could include more robust measures like the Common Language Effect Size <cit.> and better methods for handling categorical variables, such as Kullback-Leibler divergence. There is also room for adding statistical hypothesis tests for the evaluations, as well as more plots. The R package <cit.> is a good source of inspiration.
* Expanded Estimation and Diagnostic Tools for Outcomes: Currently, the package primarily provides the weighted mean and its confidence intervals. A helpful improvement would be to directly measure the estimated bias reduction caused by the weights, including a confidence interval for this estimate. Also, the current implementation focuses on the weighted average and the linearization (Taylor) estimator for the variance. Other possible statistics and variance estimators exist. The package in <cit.> already implements some of these and would be a good source of inspiration.
* Diagnostics for the bias-variance trade-offs when using weights: At present, the user is provided with a set of weights but has no easy way to check the bias-variance trade-offs for alternative levels of trimming or other tuning parameters. A future version of the package could include more diagnostic tools and automated functions for weight trimming, for example based on empirical-MSE estimation for a given outcome over a range of potential weight-trimming values. This could lead to a better balance between the variance induced by the weights and the bias they reduce, and save researchers' time spent on manual tweaking.
* Built-in Model Comparison for Multiple Weights: Our ultimate goal is to allow the most flexibility to the user by enabling easy comparisons of multiple models and adjustments to the weights, in order to choose the model that best fits their data.
* Feature Selection for Propensity Score Models: When given several potential models, it can be challenging to choose the best one. This choice could depend on various factors, such as the balance between reduced bias and incurred variance or the impact of different models on different outcomes. Further development in this area could provide useful tools for sensitivity analysis and decision making.
* Expansion Beyond Propensity Score Models: The next step for the package could be to include outcome models and doubly robust models, thus making the package more versatile and comprehensive.
These possible improvements represent exciting opportunities for the future of the package, aiming to provide a more robust and user-friendly tool for researchers in the Python environment. We welcome any feedback, suggestions, and opportunities for collaborations.
§ ACKNOWLEDGMENTS
The package was (and is) developed by many people, including: Roee Eilat, Tal Galili, Daniel Haimovich, Kevin Liou, Steve Mandala, Adam Obeng (author of the initial internal Meta version), Tal Sarig, Luke Sonnet, Sean Taylor, Barak Yair Reif, and others.
The package was open-sourced by Tal Sarig, Tal Galili and Steve Mandala, from Central Applied Science at Meta, in late 2022.
Branding created by Dana Beaty, from the Meta AI Design and Marketing Team.
§ LIMITATIONS OF THE ASMD
It is also worth noting some of the disadvantages in ASMD:
* Sensitivity to extreme values: Since ASMD is based on the first moment, it can be sensitive to outliers or extreme values in the data, which can lead to a distorted representation of the differences between the two groups. This could be mitigated by turning to robust measures, but these are currently not implemented in the package.
* Inability to detect distributional differences: ASMD focuses solely on the mean difference between the two groups, and does not account for differences in other distributional characteristics, such as variance, skewness or the number of modes. This means that two groups with similar means but different variances or shapes may have a low ASMD value, which could be misleading (see the numerical example at the end of this section). This can be addressed by looking at distribution plots using the methods available in the package.
* The need for context: ASMD values are unitless and can be difficult to interpret without context. Does an ASMD value below 0.1 indicate an effect size which is small or large? The interpretation of ASMD is often comparative within a specific research context. For example, ASMD changes could be compared across different covariates and alternative weights, so as to identify which set of weights affects the bias of which covariate.
* Limited applicability to categorical variables: ASMD is primarily designed for comparing continuous variables, and its applicability to categorical variables is more limited. In such cases the covariate can be turned into several dummy variables using one-hot encoding, and the ASMD can be calculated on these values of zeros and ones. Alternative measures that directly compare categorical distributions <cit.> are currently not implemented in the package.
Despite these limitations, the ASMD can be a useful measure for comparing the effect of the weights on the covariates.
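The second limitation can be demonstrated with a tiny numerical example: two groups with identical means but very different spreads yield an ASMD of essentially zero.
[language=Python]
# Same mean, very different spread: the mean-based ASMD is ~0 even though
# the two distributions clearly differ.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=100_000)
b = rng.normal(loc=0.0, scale=5.0, size=100_000)
print(round(abs(a.mean() - b.mean()) / b.std(), 3))  # ~0.0 despite a 5x spread difference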
§ KISH'S DESIGN EFFECT
Design effect in general
A design effect <cit.>[The text in this section is a modified version of the text we wrote for the Wikipedia article on Kish's design effect <cit.>.] is a measure of the increase in variance of an estimate due to the use of survey weights compared to an equal probability sample of the same size. Theoretically, it is calculated as follows:
D_eff = Var(θ̂_weighted)/Var(θ̂_un-weighted)
where Var_weighted and Var_unweighted are the variances of the weighted and unweighted estimates, respectively.
A design effect greater than 1 indicates that the variance of the weighted estimate is larger than that of an unweighted estimate, while a design effect less than 1 suggests that the variance of the weighted estimate is smaller (for example, when using design weights based on stratified sampling). A design effect of 1 implies that the use of weights does not affect the variance of the estimate (which happens only if all weights are equal to the same, non 0, value).
A design effect that is larger than 1 does not necessarily imply that the weights are undesirable, as they can still improve the accuracy and representativeness of the estimates. Put differently, it may be that the bias corrected by applying the weights is substantially larger than the variance added due to using them.
Kish's design effect is a specific measure for a specific parameter (the population mean), under specific assumptions. The following sections discuss some of these assumptions.
Assumptions and derivation
The formula of Kish's design effect computes the increase in variance of the weighted mean due to "haphazard" weights, which occur when y consists of observations selected using unequal probabilities, without within-cluster correlation or any relationship to the expected value or variance of the outcome measurement. From a model-based perspective <cit.>, the formula holds when all n observations (y_1,...,y_n) are (at least approximately) uncorrelated and have the same variance for some response variable of interest (y). The formula also assumes that the weights are not random variables but rather known constants. These, for example, can be the inverse of the selection probability for a pre-determined and known sampling design.
The conditions on y are trivially satisfied if the y observations are independent and identically distributed (i.i.d.) with the same expectation and variance. It is important to note that if y_1,...,y_n do not have the same expectation, the variance of the estimator cannot be computed with a simple weighted variance formula, as that estimation assumes that all y_i have the same expectation. Specifically, if there is a correlation between the weights and the outcome variable y, the expectation of y is not the same for all observations but rather depends on the specific weight value for each observation. In such cases, while the design effect formula might still be accurate (assuming other conditions are met), a different estimator for the variance of the weighted mean may be needed, such as a weighted variance estimator.
If different y_i's have distinct variances, the weighted variance might capture the correct population-level variance, but Kish's formula for the design effect may no longer be valid. A similar issue occurs if there is a correlation structure in the samples, such as when using cluster sampling.
Kish's formula estimates the increase in the variance of the weighted mean due to "haphazard" weights. Let y be observations selected using unequal selection probabilities (with no within-cluster correlation, and no relationship to the expectation or variance of the outcome measurement) <cit.>, and let y' be the observations we would have had if we had obtained them from a simple random sample. Then Kish's formula for the design effect is:
D_eff (kish) = var(y̅_w)/var(y̅') = var(∑_i=1^n w_i y_i / ∑_i=1^n w_i) / var(∑_i=1^n y_i'/n)
From a model based perspective<cit.>, this formula holds when all n observations (y_1, ..., y_n) are (at least approximately) uncorrelated (∀ (i ≠ j): cor(y_i, y_j) = 0), with the same variance (σ^2) in the response variable of interest (y). It also assumes the weights themselves are not a random variable but rather some known constants (E.g.: the inverse of probability of selection, for some pre-determined and known sampling design).
The conditions on y are trivially held if the y observations are i.i.d with the same expectation and variance. In such case we have y=y', and we can estimate var(y̅_w) by using var(y̅_w) = var(y̅)× D_eff <cit.>.
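A quick simulation, written under exactly these assumptions (i.i.d. outcomes and fixed, haphazard weights), illustrates that the design effect equals the mean squared weight divided by the squared mean weight, and that the variance of the weighted mean is inflated by that factor:
[language=Python]
# Numerical check of Kish's design effect under i.i.d. outcomes and fixed
# "haphazard" weights, per the assumptions discussed above.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 20_000
w = rng.lognormal(sigma=0.7, size=n)      # fixed, known weights
deff = np.mean(w**2) / np.mean(w)**2      # Kish's formula

y = rng.normal(size=(reps, n))            # i.i.d. outcomes, unrelated to w
var_weighted = ((y * w).sum(axis=1) / w.sum()).var()
var_unweighted = y.mean(axis=1).var()
print(round(deff, 3), round(var_weighted / var_unweighted, 3))  # should be close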
Proof
We present here a simplified proof of Kish's formula, D_eff := var(y̅_w)/var(y̅') = w^2/w̅^2 (the mean of the squared weights divided by the square of the mean weight), for the case when there are no clusters (i.e., no intraclass correlation between the elements of the sample), so that each stratum includes only one observation. The proof is shown in full in <cit.>.
var(y̅_w)
(1)= var(∑_i=1^n w_i y_i / ∑_i=1^n w_i)
(2)= var(∑_i=1^n w_i' y_i)
(3)= ∑_i=1^n var(w_i' y_i)
(4)= ∑_i=1^n w_i'^2 var(y_i)
(5)= ∑_i=1^n w_i'^2 σ^2
(6)= σ^2 ∑_i=1^n w_i'^2
(7)= σ^2 ∑_i=1^n w_i^2 / (∑_i=1^n w_i)^2
(8)= σ^2 ∑_i=1^n w_i^2 / (∑_i=1^n w_i · n/n)^2
(9)= σ^2 ∑_i=1^n w_i^2 / ((∑_i=1^n w_i / n)^2 n^2)
(10)= (σ^2/n) · (∑_i=1^n w_i^2 / n) / (∑_i=1^n w_i / n)^2
(11)= (σ^2/n) · (w^2/w̅^2)
(12)= var(y̅') · D_eff
and therefore D_eff (kish) = var(y̅_w)/var(y̅') = w^2/w̅^2.
Transitions:
* from definition of the weighted mean.
* using normalized (convex) weights definition (weights that sum to 1): w_i' = w_i/∑_i=1^n w_i.
* sum of uncorrelated random variables.
* if the weights are constants, this follows from the basic properties of the variance. Another way to say this is that the weights are known upfront for each observation i; namely, we are actually calculating var(y̅_w | w).
* assume all observations have the same variance (σ^2).
§ ESTIMATING THE VARIANCE OF THE WEIGHTED MEAN
Formulation
This section discusses the derivation of the formula presented in the paper for the variance of the weighted mean, also known as the π-estimator for the ratio-mean.[The text in this section is a modified version of the text we wrote for the Wikipedia article on the weighted mean <cit.>.]
We are interested in estimating the variance of the weighted mean when the various y_i are not assumed to be i.i.d random variables. An alternative perspective for this problem is that of some arbitrary sampling design of the data in which units are selected with unequal probabilities (with replacement) <cit.>.
Unlike classical "model based" approaches, in which the randomness is described by the randomness of the y value, here we consider the value of y_i as constant, where the variability comes from the selection procedure. We let R_i be the Bernoulli indicator that is equal to 1 if observation i is in the observed sample, and 0 if not. The probability of a unit to be sampled given a sample 𝒮 of size n is denoted by π_i:=P(R_i=1 |𝒮). Furthermore, we denote the one-draw probability of selection by p_i := P(R_i=1 in a single draw) ≈ π_i/n. For the following derivation we assume that the probability of selecting each element is fully represented by these probabilities <cit.>, i.e. selecting some element will not influence the probability of drawing another element (this does not apply to designs such as cluster sampling).
Since each outcome y_i is fixed, and the randomness comes from unit i being included in the sample or not (R_i), we often talk about the product of the two, which is a random variable. To avoid confusion in what follows, we define: y'_i = y_i · R_i. This satisfies: 𝔼[y'_i] = y_i 𝔼[R_i] = y_i π_i and 𝕍[y'_i] = y_i^2 𝕍[R_i] = y_i^2 π_i(1-π_i).
In this "design based" perspective, the weights are obtained by taking the inverse of the selection probability (i.e.: the inflation factor), i.e. w_i = 1/π_i≈1/n × p_i. The weights in this setting are considered fixed and known.
We assume that the target population size N is unknown, and is estimated by N̂ = ∑_i=1^n w_i. Our parameter of interest is the weighted mean, that can be written as a ratio:
Y̅ = ∑_i=1^N y_i/π_i/∑_i=1^N 1/π_i = ∑_i=1^N w_i y_i/∑_i=1^N w_i
This ratio is estimated by the observed sample using:
Ŷ̅̂ = ∑_i=1^n y_i/π_i/∑_i=1^n 1/π_i = ∑_i=1^n w_i y'_i/∑_i=1^n w_i R_i
This is called a ratio estimator and it is approximately unbiased for Y̅ <cit.>.
In this case, the variability of the ratio depends on the variability of the random variables both in the numerator and the denominator, as well as on their correlation. Since there is no closed analytical form to compute this variance, various methods are used for approximate estimation, primarily Taylor series first-order linearization, asymptotics, and bootstrap/jackknife <cit.>. The Taylor linearization method could lead to under-estimation of the variance for small sample sizes in general, but that depends on the complexity of the statistic. For the weighted mean, the approximate variance is supposed to be relatively accurate even for medium sample sizes <cit.>. When the sampling has a random sample size, as in Poisson sampling, the estimator is as follows <cit.>:
V (y̅_w) = 1/(∑_i=1^n w_i)^2 ∑_i=1^n w_i^2 (y_i - y̅_w)^2
We note that if π_i ≈ p_i n, then using either w_i = 1/π_i or w_i = 1/p_i gives the same estimator, since multiplying all w_i by a constant factor leaves the estimator unchanged. It also means that if we scale the sum of weights to be equal to a known-in-advance population size N, the variance calculation would look the same. When all weights are equal to one another, this formula reduces to the standard variance estimator of the mean (see also the note below).
Note that for the trivial case in which all the weights are equal to 1, the above formula is just like the maximum-likelihood formula for the variance of the mean (but note that it is not the unbiased version, i.e. it divides by n instead of (n-1)).
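The formula is straightforward to compute directly; the sketch below compares the linearization variance to a naive bootstrap over (y_i, w_i) pairs, both of which are approximations.
[language=Python]
# Taylor-linearization variance of the weighted mean, compared against a
# naive bootstrap over (y_i, w_i) pairs.
import numpy as np

def linearized_var(y, w):
    y, w = np.asarray(y, float), np.asarray(w, float)
    ybar_w = np.average(y, weights=w)
    return np.sum(w**2 * (y - ybar_w) ** 2) / np.sum(w) ** 2

def bootstrap_var(y, w, reps=2_000, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(y), size=(reps, len(y)))
    return np.array([np.average(y[i], weights=w[i]) for i in idx]).var()

rng = np.random.default_rng(2)
y = rng.normal(size=800)
w = rng.lognormal(sigma=0.5, size=800)
print(round(linearized_var(y, w), 5), round(bootstrap_var(y, w), 5))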
Proof
We show here a short proof for the variance formula presented above:
V (y̅_w) = 1/(∑_i=1^n w_i)^2 ∑_i=1^n w_i^2 (y_i - y̅_w)^2
The Taylor linearization states that a general ratio estimator Q̂ of two sums, Y and Z, can be expressed as <cit.>:
Q̂ = Ŷ/Ẑ = ∑_i=1^n w_i y'_i/∑_i=1^n w_i z'_i≈ Q + 1/Z∑_i=1^n ( y'_i/π_i - Q z'_i/π_i)
And the variance can be approximated by: <cit.>
V (Q̂) = 1/Ẑ^2 ∑_i=1^n ∑_j=1^n ( Δ̌_ij (y_i - Q̂ z_i)/π_i · (y_j - Q̂ z_j)/π_j ) = 1/Ẑ^2 [ V (Ŷ) + Q̂^2 V (Ẑ) - 2 Q̂ Ĉ (Ŷ, Ẑ) ]
where Ĉ (Ŷ, Ẑ) is the estimated covariance between the Y and Z, and Δ_ij = C(R_i, R_j).
Since Ĉ is the covariance of two sums of random variables, it would include many combinations of covariances that depend on the indicator variables. If the selection probabilities are uncorrelated (i.e., ∀ i ≠ j: Δ_ij = C(R_i, R_j) = 0), this term includes only the summation of n covariances, one for each element i, between y'_i = R_i · y_i and z'_i = R_i · z_i. This helps illustrate that the formula incorporates the effect of the correlation between y and z on the variance of the ratio estimator.
When defining z_i = 1 the above becomes: <cit.>
V (Q̂) = V (y̅_w)
= 1/N̂^2∑_i=1^n ∑_j=1^n ( Δ̌_ijy_i - y̅_w/π_iy_j - y̅_w/π_j)
where Δ̌_ij = Δ_ij/π_ij. If the selection probabilities are uncorrelated (i.e., ∀ i ≠ j: Δ_ij = C(R_i, R_j) = 0), and assuming the probability of selecting each element is very small (i.e., (1- π_i) ≈ 1), then the above reduces to the following:
V (y̅_w)
= 1/N̂^2∑_i=1^n ( (1- π_i) y_i - y̅_w/π_i)^2
= 1/(∑_i=1^n w_i)^2∑_i=1^n w_i^2 (y_i - y̅_w)^2.
Parameterised distance to local irregularity

Foivos Fioravantes^1, Nikolaos Melissinos^1, Theofilos Triommatis^2

^1 Department of Theoretical Computer Science, Faculty of Information Technology, Czech Technical University in Prague, Prague, Czech Republic
^2 School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool, L69-3BX, UK

The third author is supported by EP/S023445/1 EPSRC CDT in Distributed Algorithms, University of Liverpool.
A graph G is locally irregular if no two of its adjacent vertices have the same degree. In [Fioravantes et al. Complexity of finding maximum locally irregular induced subgraph. SWAT, 2022], the authors introduced and studied the problem of finding a locally irregular induced subgraph of a given graph G of maximum order, or, equivalently, computing a subset S of V(G) of minimum order whose deletion from G results in a locally irregular graph; S is denoted as an optimal vertex-irregulator of G. In this work we provide an in-depth analysis of the parameterised complexity of computing an optimal vertex-irregulator of a given graph G. Moreover, we introduce and study a variation of this problem, where S is a subset of the edges of G; in this case, S is denoted as an optimal edge-irregulator of G. In particular, we prove that computing an optimal vertex-irregulator of a graph G is in FPT when parameterised by the vertex integrity, neighborhood diversity or cluster deletion number of G, while it is [1]-hard when parameterised by the feedback vertex set number or the treedepth of G. In the case of computing an optimal edge-irregulator of a graph G, we prove that this problem is in FPT when parameterised by the vertex integrity of G, while it is -hard even if G is a planar bipartite graph of maximum degree 4, and [1]-hard when parameterised by the size of the solution, the feedback vertex set or the treedepth of G. Our results paint a comprehensive picture of the tractability of both problems studied here, considering most of the standard graph-structural parameters.
§ INTRODUCTION
A fundamental problem in graph theory is “given a graph G, find an induced subgraph H of G, of maximum order, that belongs in the family of graphs verifying a property Π”, in which case we say that H∈Π:
Largest Induced Subgraph with Property Π (ISPΠ) <cit.>
Input: A graph G=(V,E), an integer k, a property Π.
Question: Does there exist a set S⊆ V such that |S|≤ k and G-S∈Π?
There is a plethora of classical problems that fall under this general setting. Consider for example the Vertex Cover and the Feedback Vertex Set, where Π is the property “the graph is an independent set” and “the graph is a forest”, respectively.
In this paper we study the ISPΠ problem where Π is the property “the graph is locally irregular”, recently introduced in <cit.>. A graph G=(V,E) is called locally irregular if no two adjacent vertices in V have the same degree. We extend the work presented in <cit.>, by more thoroughly investigating the behaviour of the problem in regards to parameterised complexity. In addition, we take the first step towards the problem of finding large locally irregular (not necessarily induced) subgraphs of a given graph G. In particular, we introduce the problem where the goal is to find a subset of edges of G of maximum order, whose removal renders the graph locally irregular. Our results allow us to paint a rather clear picture concerning the tractability of both problems studied here, considering many standard graph-structural parameters (see Figure <ref> for an overview of our results).
ISPΠ and hereditarity. The ISPΠ problem has been extensively studied in the case where Π is a hereditary property. Formally, a property Π is hereditary if, for any graph G verifying that property, it holds that any induced subgraph of G also verifies that property (notice that the properties mentioned previously are indeed hereditary). It was already shown in <cit.> that ISPΠ is a hard problem for any non-trivial hereditary property. On the positive side, the ISPΠ problem always admits an FPT algorithm, when parameterised by the size of the solution, if Π is a hereditary property <cit.>. This is an important result, as it allows us to conceive efficient algorithms to solve computationally hard problems, as long as we restrict ourselves to graphs verifying such properties.
It is also worth mentioning the work in <cit.>, which provides a framework that yields exact algorithms that are significantly faster than brute-force to solve a more general version of the ISPΠ problem: given a universe, find a subset of maximum cardinality which verifies some hereditary property. On a high level, the algorithm proposed in <cit.> builds the solution which is a subset H of maximum cardinality with the wanted property, by continuously extending a partial solution X⊆ H. Note that this approach only works if Π is indeed a hereditary property.
More recently, this approach was generalised by the authors of <cit.>, who provide a framework that yields exponential-time approximation algorithms.
However, not all interesting properties are hereditary. E.g., “all vertices of the induced subgraph have odd degree”, and “the induced subgraph is d-regular”, where d is an integer given in the input (recall that a graph is d-regular if all of its vertices have the same degree d), are two non-hereditary properties. The authors of <cit.> studied the ISPΠ problem for the former property, showing that this is an -hard problem, and providing an FPT algorithm that solves the problem when parameterised by the rank-width.
Also, the authors of <cit.> studied the ISPΠ problem for the latter property. In particular, in <cit.> it is shown that finding a (connected) induced subgraph of maximum order that is d-regular, is -hard to approximate, even when restricted on bipartite or planar graphs. The authors of <cit.> also provide a linear-time algorithm to solve this problem for graphs with bounded treewidth. Lastly, it is also worth mentioning <cit.>, where the authors consider the non-hereditary property “the induced subgraph is k-anonymous”, where a graph G is k-anonymous if for each vertex of G there are at least k-1 other vertices of the same degree.
An important observation is that, in the case of non-hereditary properties, the ISPΠ problem does not necessarily admit an FPT algorithm parameterised by the size of the solution.
Indeed, the authors of <cit.> proved that when considering Π as “the induced subgraph is regular”, the ISPΠ problem is [1]-hard when parameterised by the size of the solution.
This indicates the importance of considering graph-structural parameters for conceiving efficient algorithms for such problems. This is exactly the approach followed in <cit.>, where the authors consider a generalisation of Vertex Cover, the ISPΠ problem where Π is “the graph has maximum degree k”, for an integer k given in the input.
Distance from local irregularity. In some sense, the property that interests us lies on the opposite side of the one studied in <cit.>. Recall that a graph G is locally irregular if no two of its adjacent vertices have the same degrees. The notion of locally irregular graphs was formally introduced in <cit.>, where the authors take some steps towards proving the so-called 1-2-3 Conjecture proposed in <cit.> and claimed to be solved recently in <cit.>. Roughly, this conjecture is about functions assigning weights from [k]={1,…,k} to the edges of a graph, called proper k-labellings, so that all adjacent vertices have different weighted degrees; the conjecture states that for any non-trivial graph, this should always be achievable for k≤ 3.
In <cit.>, the authors introduced and studied the problem of finding a locally irregular induced subgraph of a given graph G of maximum order (a non-hereditary property). Equivalently, given a graph, find a set of vertices of minimum cardinality, whose deletion renders the graph locally irregular; such sets are named optimal vertex-irregulators. The main focus of <cit.> was to study the complexity of computing an optimal vertex-irregulator of a given graph. Among other results, it was shown that this problem is -hard even for subcubic planar bipartite graphs, [2]-hard parameterised by the size of the solution and [1]-hard parameterised by the treewidth of the input graph. Moreover, for any constant ε <1, there cannot be a polynomial-time 𝒪(n^1-ε)-approximation algorithm. On the positive side, there are two FPT algorithms that solve this problem, parameterised by the maximum degree of the input graph plus either the size of the solution or the treewidth of the input graph. Note that the notion of vertex-irregulators proved to be fruitful in the context of proper labellings. Indeed, the authors of <cit.> observed a connection between finding large locally irregular induced subgraphs and constructing proper k-labellings that also maximise the use of weight 1 on the edges of the given graph.
Apart from improving the results of <cit.>, in this paper we also introduce the novel problem of computing a subset of a graph's edges, of minimum order, whose deletion renders the graph locally irregular; such sets are named optimal edge-irregulators. This problem is introduced as a first step towards understanding the problem of finding large locally irregular (not necessarily induced) subgraphs of a given graph. Problems concerned with finding maximum subgraphs verifying a specific property have also been extensively studied (e.g., <cit.>).
One might expect that finding edge-irregulators could be easier than finding vertex-irregulators, as is often the case with graph-theoretical problems concerned with subsets of edges whose versions considering subsets of vertices are intractable (recall, e.g., the Edge Cover, the Feedback Edge Set and even the Min Weighted Lower-Upper-Cover <cit.>). As it turns out, however, finding large edge-irregulators is also a computationally hard problem.
Our contribution. In this paper we study the complexity of computing optimal vertex and edge-irregulators. Our results allow us to identify the parameters for which the tractability of the former problem changes, considering almost all standard graph-structural parameters. We also take steps towards the same goal for the latter problem. In Section <ref> we introduce the needed notation and provide some first results. In particular, we observe that computing optimal vertex-irregulators is [1]-hard when parameterised by the treedepth or the feedback vertex set of the given graph. Section <ref> is focused on providing FPT algorithms for the problem of finding optimal vertex-irregulators, parameterised by the neighborhood diversity or the vertex integrity of the input graph. In Section <ref>, we focus on the problem of finding optimal edge-irregulators. First, we prove that this problem is -hard, even when restricted to planar bipartite graphs of maximum degree 4. We also show that the problem is [1]-hard parameterised by the size of the solution or the feedback vertex set of the input graph. Lastly, we modify the FPT algorithm for computing an optimal vertex-irregulator parameterised by the vertex integrity in order to provide an FPT algorithm that solves the edge version of the problem (once more parameterised by the vertex integrity). We close the paper in Section <ref>, where we propose some directions for further research.
§ PRELIMINARIES
For notions and definitions of graph theory not explained here, we refer the reader to <cit.>.
Let G=(V,E) be a graph and G'=(V',E') be a subgraph of G (i.e., created by deleting vertices and/or edges of G). Recall first that the subgraph G' is induced if it can be created only by deleting vertices of G. That is, for each edge uv∈ E, if u,v∈ V', then uv∈ E'. For any vertex v∈ V, let N_G(v)={u∈ V : uv∈ E} denote the neighbourhood of v in G
and d_G(v)=|N_G(v)| denote the degree of v in G. Note that, whenever the graph G is clear from the context, we will omit the subscript and simply write N(v) and d(v).
Also, for S⊆ E, denote by G-S the graph G'=(V, E∖ S). That is, G' is the graph resulting from the deletion of the edges of S from the graph G.
Let G=(V,E) be a graph. We say that G is locally irregular if, for every edge uv∈ E, we have d(u)≠ d(v). Now, let S⊆ V be such that G[V∖ S] is a locally irregular graph; any set S that has this property is denoted as a vertex-irregulator of G. Moreover, let _v(G) be the minimum order that any vertex-irregulator of G can have. We will say that S is an optimal vertex-irregulator of G if S is a vertex-irregulator of G and |S|=_v(G). Similarly, we define an
edge-irregulator of G to be any set S⊆ E such that G-S is locally irregular. Moreover, let _e(G) be the minimum order that any edge-irregulator of G can have. We will say that S is an optimal edge-irregulator of G if S is an edge-irregulator of G and |S|=_e(G).
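To make these definitions concrete, the following brute-force sketch (using the networkx library, which is not otherwise used in this paper) computes I_v(G) for very small graphs by trying all vertex subsets in order of increasing size; it is of course exponential and for illustration only.
[language=Python]
# Brute-force computation of I_v(G) on small graphs; exponential time.
from itertools import combinations
import networkx as nx

def is_locally_irregular(G):
    return all(G.degree(u) != G.degree(v) for u, v in G.edges())

def optimal_vertex_irregulator(G):
    for k in range(G.number_of_nodes() + 1):
        for S in combinations(G.nodes(), k):
            if is_locally_irregular(G.subgraph(set(G.nodes()) - set(S))):
                return set(S)  # first hit has minimum size, so |S| = I_v(G)

P3 = nx.path_graph(3)   # degrees 1,2,1: already locally irregular, I_v = 0
C4 = nx.cycle_graph(4)  # 2-regular: deleting any one vertex leaves a path, I_v = 1
print(optimal_vertex_irregulator(P3), optimal_vertex_irregulator(C4))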
The next simple observation is quite useful when proving lower bounds on an optimal vertex or edge-irregulator of a graph.
Let G=(V,E) be a graph containing two vertices u,v such that uv∈ E and d(u)=d(v). Any edge-irregulator of G contains at least one edge incident to u or v. Also, any vertex-irregulator of G contains at least one vertex in N(u)∩ N(v).
Let G=(V,E) be a graph. We say that two vertices u, v of V are twins if N(u)∖{v}=N(v)∖{u}, i.e., they have the same neighbourhoods.
Let G=(V,E) be a graph and u,v∈ V be a set of twins of G such that uv∈ E. Any vertex-irregulator of G contains at least one vertex in {u,v}.
Indeed, by Observation <ref>, we get that any vertex-irregulator S of G includes at least one neighbour of u or v. If we assume that S∩{u,v}=∅, then u and v are once more adjacent twins in G[V∖ S], contradicting the fact that S is a vertex-irregulator.
The importance of the upcoming Lemma <ref> lies in the fact that we can repeatedly apply it and reduce the size of the graph on which we are searching for a vertex-irregulator, as long as the reduced graph contains a pair of adjacent twins. This is a core argument behind the algorithms presented in Theorems <ref> and <ref>.
Let G=(V,E) be a graph and u,v∈ V be a pair of adjacent twins. Let G'=(V',E') be the graph resulting from the deletion of either u or v from G. Then, _v(G)=_v(G')+1.
Assume w.l.o.g. that u∉ V'. We first prove that _v(G)≤_v(G')+1. Indeed, assume that _v(G)>_v(G')+1 and let S' be an optimal vertex-irregulator of G'. Next, consider the graph G=G[V∖ (S'∪{u})]. From the construction of G', it follows that G=G'[V'∖ S']. Since S' is a vertex-irregulator of G', we obtain that G is locally irregular. In other words, the set S'∪{u} is a vertex-irregulator of G and |S'∪{u}|=I_v(G')+1, a contradiction.
Next, assume that _v(G)<_v(G')+1 and let S be an optimal vertex-irregulator of G. It follows from Observation <ref> that |{u,v}∩ S|≥ 1. Assume w.l.o.g. that u∈ S. Thus, and by the construction of G', we have that G'[V'∖ (S∖{u})]=G[V∖ S] and the set S∖{u} is a vertex-irregulator of G'. In other words, _v(G')≤ |S|-1=_v(G)-1, a contradiction.
We close this section with some observations on the proof that computing _v(G) is [1]-hard parameterised by the treewidth of G, initially presented in <cit.>, which allow us to show that this result holds even if we consider more “generous” parameters, such as the treedepth or the the feedback vertex set number (i.e., size of a minimum feedback vertex set) of the input graph. Recall that the treedepth of a graph G=(V,E) can be defined recursively: if |V|=1 then G has treedepth 1. Then, G has treedepth k if there exists a vertex v∈ V such that every connected component of G[V∖{v}] has treedepth at most k-1. Given a graph G and a tree T rooted at a vertex u, by attaching T on a vertex v of G we mean the operation of adding T to G and identifying u with v.
Let G be a graph with vertex cover number (i.e., size of a minimum vertex cover) k_1 and T be a rooted tree of depth k_2. Let G' be the graph resulting from attaching an arbitrary number of copies of T directly on vertices of G. Then G' has treedepth 𝒪(k_1+k_2) and feedback vertex set number 𝒪(k_1).
The reduction presented in <cit.> starts with a graph G which is part of an instance of the List Colouring problem, and constructs a graph G' by attaching some trees of depth at most 3 on each vertex of G. The List Colouring problem was shown to be [1]-hard in <cit.> when parameterised by the vertex cover number of the input graph. Thus, and by Observation <ref>, we obtain the following:
Given a graph G, it is [1]-hard to compute _v(G) parameterised by either the treedepth or the feedback vertex set number of G.
§ FPT ALGORITHMS FOR VERTEX-IRREGULATORS
In this section we present two FPT algorithms that compute an optimal vertex-irregulator of a given graph G, when parameterised by the neighbourhood diversity or the vertex integrity of G. The latter algorithm is then used to show that this problem is in FPT also when parameterised by the cluster deletion number of G. We begin by recalling the needed definitions.
The twin equivalence of G is the relation on the vertices of V according to which two vertices belong to the same equivalence class if and only if they are twins.
The neighbourhood diversity of a graph G, denoted by nd(G), is the number k of classes of the twin equivalence of G.
Let G=(V,E) be a graph with nd(G)=k and let V_1,…,V_k be the partition of V defined by the twin equivalence of G. Observe that for any i∈ [k], we have that G[V_i] is either an independent set or a clique.
Given a graph G=(V,E) such that nd(G)=k, there exists an algorithm that computes _v(G) in FPT-time parameterised by k.
Let V_1,…,V_k be the partition of V defined by the twin equivalence of G. Recall that for any i∈ [k], we have that G[V_i] is either an independent set or a clique.
We begin by constructing an induced subgraph G'=(V',E') of G by applying the following procedure: for each i∈ [k], if G[V_i] is a clique on at least two vertices, then delete all the vertices of V_i except one; let D be the set of vertices that were deleted in this fashion throughout the procedure and d=|D|. Observe that this procedure terminates after k repetitions and, thus, runs in polynomial time (in regards to |V|).
Moreover, it follows from Lemma <ref> that _v(G)=_v(G')+d. Thus, in order to compute _v(G), it suffices to compute _v(G'). To achieve that, we model this problem as an ILP on a bounded number of variables. For every i∈ [k], let V'_i=V_i∩ V'.
Also, for every i∈[k], let N(i)={j∈ [k] |∃ u∈ V'_j and v∈ V'_i s.t. uv∈ E'}. That is, N(i) is the set of indices of the neighbouring partitions of vertices in V'_i. Finally, we guess a partition of [k] into S_1 and S_2 (there are at most 2^k such partitions), such that, if S' is a vertex-irregulator of G', then S'∩ V'_i=V'_i for all i∈ S_2, and S'∩ V'_i≠ V'_i for all i∈ S_1.
Variables:
x_i ∈ [|V'_i|], for each i∈ S_1: the number of vertices remaining in V'_i.
Objective:
max ∑_i∈ S_1 x_i
Constraints:
∑_ℓ∈ N(i) x_ℓ ≠ ∑_ℓ∈ N(j) x_ℓ, for all i,j∈ S_1 such that j ∈ N(i)
The variable x_i is used in the above model to represent the vertices that will remain in V'_i, for each i∈ S_1, after the deletion of an optimal vertex-irregulator S' of G'. The constraint <ref> makes sure that any two adjacent vertices u,v∈ V' have different degrees in G'[V'∖ S']. Indeed, for each uv∈ E', there exist i,j such that u∈ V'_i and v∈ V'_j. If either i∈ S_2 or j∈ S_2 (or both), then u∈ S' or v∈ S' (or both). Thus, we may assume that i,j∈ S_1. In this case, it follows from the constraint <ref> that d_G'[V'∖ S'](u)=∑_ℓ∈ N(i) x_ℓ≠∑_ℓ∈ N(j) x_ℓ=d_G'[V∖ S'](v). In any case, G'[V'∖ S'] is locally irregular. Finally, since the model has at most k variables, we can solve it and obtain S' in FPT time, parameterised by k (by running, for example, the Lenstra algorithm <cit.>).
We now present an FPT algorithm to compute an optimal vertex-irregulator of an input graph G when parameterised by the vertex integrity of G.
A graph G=(V,E) has vertex integrity k if there exists a set U ⊆ V such that |U| = k' ≤ k and all connected components of G[V∖ U] are of order at most k - k'.
It is known that we can find such a set in FPT-time parameterised by k <cit.>.
Given a graph G=(V,E) with vertex integrity k, there exists an algorithm that computes _v(G) in FPT-time parameterised by k.
Let U be such that |U|=k'≤ k and let C_1,…, C_m be the vertex sets of the connected components of G[V∖ U]. It follows that |C_j|≤ k for all j∈[m]. Assume that we know the intersection of an optimal vertex-irregulator S of G and the set U, and let S' = S ∩ U and U' = U ∖ S (there are at most 2^|U|≤ 2^k possible intersections S' of U and S).
Notice that the graph G[V∖ S'] has an optimal vertex-irregulator that contains only vertices from ⋃_i ∈ [m]C_i. Indeed, assuming otherwise contradicts that S' is the intersection of an optimal vertex-irregulator and U. Thus, in order to find an optimal vertex-irregulator S of G, it suffices to compute S^* ⊆⋃_i ∈ [m]C_i, which is an optimal vertex-irregulator of G[V ∖ S'], for every set S' ⊆ U. Then, we return the set S^*∪ S' of minimum order. We compute S^* through an ILP with bounded number of variables. To do so, we define types and sub-types of graphs G[U'∪ C_j].
Informally, the main idea is to categorise the graphs G[U' ∪ C_j], j ∈ [m], into types based on their structure (formally defined later), whose number is bounded by k. Each type i is associated to a number no_i that represents the number of the subgraphs G[U' ∪ C_j], j ∈ [m], that belong in that type.
Then, for each type i, we will define sub-types based on the induced subgraphs G[(U' ∪ C_j) ∖ S_q], for S_q ⊆ C_j. We also define a variable no_i,q that is the number of the subgraphs G[U' ∪ C_j], j ∈ [m], that are of type i and of sub-type q in G[V∖ S].
Note that knowing the structure of these types and sub-types, together with no_i,q, is enough to compute the order of S^*. Finally, for any j ∈ [m], the graph G[U' ∪ C_j] is of order at most k. Thus, the number of types, sub-types and their corresponding variables, is bounded by a function of k. We will present an ILP formulation whose objective is to minimise the order of S^*.
We begin by defining the types. Two graphs G[U' ∪ C_i] and G[U' ∪ C_j], i,j ∈ [m], are of the same type if there exists a bijection[Recall that a function f:A→ B is a bijection if, for every a_1,a_2∈ A with a_1≠ a_2, we have that f(a_1)≠ f(a_2) and for every b∈ B, there exists an a∈ A such that f(a)=b. Recall also that the inverse function of f, denoted as f^-1, exists if and only if f is a bijection, and is such that f^-1:B→ A and for each b∈ B we have that f^-1(b)=a, where f(a)=b.]
f: C_i∪ U' → C_j∪ U' such that f(u)=u for all u∈ U' and N_G[U' ∪ C_i](u) = { f^-1(v) | v ∈ N_G[U' ∪ C_j](f(u))} for all u ∈ C_i. Note that if such a function exists, then G[U' ∪ C_i] is isomorphic to G[U' ∪ C_j].
Let p be the number of different types. Notice that p is bounded by a function of k as any graph G[U' ∪ C_i] has order at most k. Also, we can decide if two graphs G[U' ∪ C_i] and G[U' ∪ C_j], i,j ∈ [m], are of the same type in FPT-time parameterised by k. For each type i ∈ [p], set no_i to be the number of graphs G[U' ∪ C_j], j ∈ [m], of type i.
Furthermore, for each type i ∈ [p] we select a C_j, j ∈ [m], such that G[U' ∪ C_j] is of type i, to represent that type; we will denote this set of vertices by C'_i.
We are now ready to define the sub-types.
Let i ∈ [p] be a type represented by C'_i and let S^i_1,…, S^i_2^|C'_i| be an enumeration of the subsets of C'_i.
For any q ∈ [2^|C'_i|] we define a sub-type (i,q), which represents the induced subgraph G[(U' ∪ C'_i) ∖ S^i_q]. Set no_i,q to be the number of graphs G[U'∪ C_j], j∈[m], of type i that are of sub-type (i,q) in G[V∖ S^*] for a vertex-irregulator S^*, i.e., such that S^* ∩ C_j corresponds to S^i_q.
Notice that, given a vertex-irregulator S^* ⊆⋃_j ∈ [m] C_j of G[V ∖ S'], there exists a sub-type (i,q), i∈ [p], q∈ [2^|C'_i|], such that the graph G[(U' ∪ C_j)∖ S^*] is of sub-type (i,q), for all j∈ [m]. Also, assuming that we know the order of |S^i_q| and the number no_i,q for all i∈ [p], q∈ [2^|C'_i|], then |S^*| = ∑_i ∈ [p]∑_q ∈ [2^|C'_i|] no_i,q |S^i_q|.
Before giving the ILP formulation whose goal is to find a vertex-irregulator S^* while minimising the above sum, we guess the (i,q) such that no_i,q≠ 0.
Let S_2 be the set of pairs (i,q), i ∈ [p] and q ∈ [2^|C'_i|], such that there are two vertices u,v ∈ C'_i ∖ S^i_q where uv∈ E(G[(U'∪ C'_i)∖ S^i_q]) and d_G[(U'∪ C'_i)∖ S^i_q](u) = d_G[(U'∪ C'_i)∖ S^i_q](v). For every (i,q)∈ S_2, we have that no_i,q=0. Indeed, assuming otherwise contradicts the fact that S^* is a vertex-irregulator.
We guess S_1 ⊆{ (i,q) | i ∈ [p], q ∈ [2^|C'_i|]}∖ S_2 such that no_i,q≠ 0 for all (i,q) ∈ S_1. Observe that the number of different sets that are candidates for S_1 is at most some function of k.
Constants:
no_i, for i ∈ [p]: the number of components of type i.
e_uv ∈ {0,1}, for u,v∈ U': set to 1 iff uv ∈ E(G[U']).
e^i,q_u,v ∈ {0,1}, for i∈ [p], q∈ [2^|C'_i|], u ∈ U' and v∈ C'_i∖ S^i_q: set to 1 iff uv ∈ E(G[(U' ∪ C'_i) ∖ S^i_q]).
b^i,q_u ∈ [n], for i∈ [p], q∈ [2^|C'_i|] and u ∈ U': set to d_G[(U' ∪ C'_i) ∖ S^i_q](u).
d^i,q_u ∈ [n], for i∈ [p], q∈ [2^|C'_i|] and u ∈ C'_i ∖ S^i_q: set to d_G[(U' ∪ C'_i) ∖ S^i_q](u).
Variables:
no_i,q, for i ∈ [p], q∈ [2^|C'_i|]: the number of components of sub-type (i,q).
Objective:
min ∑_i ∈ [p] ∑_q ∈ [2^|C'_i|] no_i,q |S^i_q|
Constraints:
no_i,q = 0 iff (i,q)∉ S_1
∑_q ∈ [2^|C'_i|] no_i,q = no_i, for all i ∈ [p]
∑_w ∈ U' e_wv + ∑_(i,q) ∈ S_1 no_i,q b^i,q_v ≠ ∑_w ∈ U' e_wu + ∑_(i,q) ∈ S_1 no_i,q b^i,q_u, for all u,v ∈ U' with e_uv = 1
d^i,q_v ≠ ∑_w ∈ U' e_wu + ∑_(i',q') ∈ S_1 no_i',q' b^i',q'_u, for all (i,q) ∈ S_1 and u ∈ U', v ∈ C'_i ∖ S^i_q with e^i,q_u,v = 1
Assume that we have found the values no_i,q for (i,q), i∈ [p], q∈ [2^|C'_i|].
We construct an optimal vertex-irregulator of G[V∖ S'] as follows.
Start with an empty set S^*.
For each i ∈ [p] take all components C_j of type i.
Partition them into 2^|C'_i| sets 𝒞^i_q such that, for each q ∈ [2^|C'_i|], the set 𝒞^i_q contains exactly no_i,q of these components.
For any component C ∈𝒞^i_q, select all vertices represented by the set S^i_q (as it was defined before) and add them to S^*.
The final S^* is an optimal vertex-irregulator for G[V∖ S'].
Let S=S'∪ S^*. We show that S is a vertex-irregulator of G.
To do so, it suffices to verify that in the graph G[V∖ S] there are no two adjacent vertices with the same degree.
Let u,v be a pair of adjacent vertices in a component which is of sub-type (i,q), i.e., a component represented by C'_i ∖ S^i_q.
If d_G[V∖ S](u) = d_G[V∖ S](v), then (i,q)∈ S_2. Therefore, no_i,q = 0 and there is no such component in G[V∖ S].
Thus, it suffices to focus on adjacent vertices such that at least one of them is in U'.
Notice that, in G[V∖ S], the degree of a vertex u ∈ U' is equal to ∑_w ∈ U' e_wu + ∑_(i,q) ∈ S_1 no_i,q b^i,q_u. In other words, no two adjacent vertices in U' have the same degree, due to the constraint <ref>.
Lastly, the constraint <ref> guarantees that no vertex in U' is adjacent to a vertex in C_i ∖ S (for some i∈ [p]) such that both of them have the same degree in G[V∖ S]. Moreover, both S' and S^* are constructed to be minimum such sets. Thus, S is an optimal vertex-irregulator of G. Finally, since the number of variables in the model is bounded by a function of k, we can solve it and obtain S^* in FPT time, parameterised by k (by running, for example, the Lenstra algorithm <cit.>).
The previous algorithm can be used to find an optimal vertex-irregulator of a graph G in FPT-time when parameterised by the cluster deletion number of G. Note that the cluster deletion number of a graph can be computed in FPT-time parameterised by k <cit.>.
Let G=(V,E) be a graph and S⊆ V be a set of minimum order such that all the connected components of G[V∖ S] are cliques. Then G has cluster deletion number k, where k=|S|.
Given a graph G=(V,E) with cluster deletion number k, there exists an algorithm that computes _v(G) in FPT-time parameterised by k.
Let S be such that |S|=k and G[V∖ S] is a disjoint union of cliques C_1,…, C_m for m≥ 1. Our goal is to reduce the size of these cliques so that each one of them has order at most 2^k. We achieve this through the following procedure. Let i∈[m] be such that the clique C_i=(V_C_i,E_C_i) has |V_C_i|>2^k. Let V_1,…,V_p be the partition of V_C_i defined by the twin equivalence of C_i. That is, two vertices u,v∈ V_C_i belong to the same V_j, j∈[p], if and only if u and v are twins. Note that p≤ 2^k. Observe that, since C_i is a clique, the graphs C_i[V_j], j∈[p], are also cliques. In other words, for each j∈[p], all the vertices of V_j are adjacent twins. We delete all but one vertex of V_j, for each j∈[p], and repeat this process for every i∈[m] such that |V_C_i|>2^k.
Let G'=(V',E') be the resulting subgraph of G and d=|D|, where D is the set of vertices that were removed throughout this process. It follows from Lemma <ref> that _v(G)=_v(G')+d. Observe also that S⊆ V' and that each connected component of G'[V'∖ S] is a clique of at most 2^k vertices. In other words, G' has vertex integrity at most 2^k+k. To sum up, to compute _v(G) it suffices to compute _v(G'), which can be done in FPT-time by running the algorithm presented in Theorem <ref>.
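The clique-shrinking step is easy to state algorithmically; the sketch below (our own illustration, using networkx, under the reading that twins within a clique component are vertices with the same neighbourhood in the deletion set S) keeps one representative per twin class in every component of order larger than 2^k and returns the reduced graph G' together with the number d of deleted vertices.

```python
# Sketch of the clique-shrinking procedure; not the authors' code.
from collections import defaultdict
import networkx as nx

def shrink_cliques(G, S):
    """Return (G', d): G with all but one vertex per twin class removed in large cliques."""
    H = G.copy()
    removed = 0
    rest = set(G.nodes) - set(S)
    for comp in nx.connected_components(G.subgraph(rest)):    # the cliques C_1, ..., C_m
        if len(comp) <= 2 ** len(S):                          # only shrink components larger than 2^k
            continue
        classes = defaultdict(list)
        for v in comp:                                        # group vertices by their neighbourhood in S,
            classes[frozenset(set(G.neighbors(v)) & set(S))].append(v)   # i.e. by twin class
        for twins in classes.values():                        # keep one representative per class
            H.remove_nodes_from(twins[1:])
            removed += len(twins) - 1
    return H, removed
```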
§ EDGE-IRREGULATORS
In this section we begin the study of finding an optimal edge-irregulator of a given graph G. It turns out that the decision version of this problem is NP-complete, even for quite restrictive classes of graphs. Furthermore, it is also W[1]-hard parameterised by the size of the solution.
Let G be a graph and k∈ℕ. Deciding if _e(G)≤ k is NP-complete, even if G is a planar bipartite graph of maximum degree 4.
The problem is clearly in NP. We focus on showing it is also NP-hard. This is achieved through a reduction from the Planar 3-SAT problem which is known to be NP-complete <cit.>. In that problem, a 3CNF formula ϕ is given as an input. We say that a bipartite graph G'=(V,C,E) corresponds to ϕ if it is constructed from ϕ in the following way: for each literal x_i (resp. ¬ x_i) that appears in ϕ, add the literal vertex v_i (resp. v'_i) in V (for 1≤ i≤ n) and for each clause C_j of ϕ add a clause vertex c_j in C (for 1≤ j≤ m). Then the edge v_ic_j (resp. v'_ic_j) is added if the literal x_i (resp. ¬ x_i) appears in the clause C_j. Finally, we add the edge v_iv'_i for every i. A 3CNF formula ϕ is valid as input to the Planar 3-SAT problem if the graph G' that corresponds to ϕ is planar. Furthermore, we may assume that each variable appears in ϕ twice as a positive and once as a negative literal. The question is whether there exists a truth assignment to the variables of X satisfying ϕ.
Starting from a 3CNF formula ϕ, we construct a graph G such that _e(G)≤ 3n if and only if ϕ is satisfiable. The construction of G is as follows: we start with the graph G' that corresponds to ϕ. Then, for each 1≤ i≤ n, we remove the edge v_iv'_i, and attach the gadget illustrated in Figure <ref> to v_i and v'_i. Let E_i denote the edges of the gadget attached to v_i and v'_i plus the edges e_i^1,e_i^2 and e_i^3. Finally, for each 1≤ j≤ m, we add the star on 5 vertices, and identify one of its leaves with the vertex c_j. Observe that the resulting graph G is planar, bipartite and Δ(G)=4.
Before we provide the reduction, let us show two claims that are going to be useful.
Let S be an edge-irregulator of G such that |S|≤ 3n. For every 1≤ i≤ n, we have that |S∩ E_i|≥ 3.
Observe that d_G(u_5)=d_G(v_i)=d_G(u_6)=d_G(u_7). It follows that S contains at least one edge a_1 incident to u_6 or u_7 and one edge a_2 incident to v_i or u_5. We distinguish cases:
* a_1=a_2=v'_iu_6. Then S also contains an edge a_3 incident to v_i or u_5. If a_3=u_5v'_i, then S contains an additional edge incident to u_2 or u_5. If a_3=u_5u_4, then S also contains the edge u_3u_4. If a_3=u_2u_5, then S contains at least one additional edge incident to u_2 and u_1. If a_3=u_5v_i, then S contains one additional edge incident to u_2 or u_5. In any one of the above cases, we have that |S_i|≥ 3. Thus, we may assume that a_3 is incident to v_i but not to u_5. If a_3=e_i^1 or a_3=e_i^2, then S contains an additional edge incident to v_i or u_6. Finally, if a_3=v_iu_6, then S contains an additional edge incident to u_6 or u_8. Thus, if a_1=a_2=v'_iu_6, then |S_i|≥ 3.
* a_1≠ a_2. We distinguish some additional cases:
* a_1=v_iu_6. If a_2∈{e_i^3,v'_iu_9,v'_iu_5}, then S contains an additional edge incident to u_7. If a_2∈{v_iu_5,u_5u_4}, then S contains an additional edge incident to u_2. Finally, if a_2=u_5u_4, then S contains an additional edge incident to u_3.
* a_1=u_6u_7. Then S contains an additional edge incident to u_9.
* a_1∈{u_7u_9,u_7u_10,u_7u_11}. Then S contains an additional edge incident to u_7.
* a_1=u_6u_8. Then S contains an additional edge incident to u_16.
Thus, if a_1≠ a_2, then |S_i|≥ 3, which finishes the proof of the claim.
Let S be an edge-irregulator of G such that |S|≤ 3n. Then, for every 1≤ i≤ n, we have that
* if |S∩{e_i^1,e_i^2}|≥ 1 then |S∩{e_i^3,e_i^4}|=0 and
* if |S∩{e_i^3,e_i^4}|≥ 1 then |S∩{e_i^1,e_i^2}|=0.
Since the proofs of the two items are highly symmetrical, we will
only prove the first item. To do that, it suffices to show that if S does not respect the statement for some 1≤ i≤ n, then |S∩ E_i|≥ 4. Then, since |S|≤ 3n, and 1≤ i≤ n, there exists a 1≤ j≤ n such that i≠ j and |S∩ E_j|≤ 2. This contradicts Claim <ref>.
Let H=G-S. Assume first that there exists an i such that, say, e_i^1∈ S and e_i^3∈ S.
Observe that S contains at least one edge e incident to u_6 or u_7, as otherwise we would have that d_H(u_6)=d_H(u_7), contradicting the fact that S is an edge-irregulator of G. Thus, if we also have that e_i^2∈ S or that e_i^4∈ S, it follows that |S∩ E_i|≥ 4, a contradiction. Thus, we may assume that S∩ E_i={e_i^1,e_i^3,e}. If e∈{u_7u_9,u_7u_10,u_7u_11}, say e=u_7u_9, then d_H(u_7)=d_H(u_10). Also, if e=u_6u_8, then S also contains u_8u_16. Finally, if e=v_iu_6 (resp. e=u_6v'_i) then d_H(u_6)=d_H(v'_i) (resp. d_H(u_6)=d_H(v_i)). It follows from Observation <ref> that in all cases, we have that |S∩ E_i|≥ 4, a contradiction.
We are now ready to give the reduction. Let G be the graph constructed from the formula ϕ as explained above. We show that there exists a satisfying truth assignment of ϕ if and only if _e(G)≤ 3n.
For the first direction, let T be a satisfying truth assignment of ϕ. Let S be the set containing the edges e_i^1,e_i^2,u_6u_7 for every 1≤ i≤ n such that T(x_i)=true and the edges e_i^3,e_i^4,v_iu_6 for each i such that T(¬ x_i)=true. Let H=G-S. Clearly, |S|=3n. Also S is an edge-irregulator of G. Indeed, the part of the graph H that corresponds to the gadget attached to v_i and v'_i is clearly locally irregular for every i. Also, for each j, we have that d_H(c_j)≤ 3 (since C_j is satisfied by at least one literal) and any vertex in N_H(c_j) has degree equal to 4.
For the reverse direction, assume that _e(G)≤ 3n and let S be an edge-irregulator of G such that |S|=3n. Recall that due to Claim <ref>, for each i∈[n], if S contains one edge in {e_i^1,e_i^2} then it contains no edge in {e_i^3,e_i^4} and vice versa. For each i∈ [n], we set T(x_i)=true if S contains one edge in {e_i^1,e_i^2} and T(¬ x_i)=true in any other case. We claim that T is indeed a truth assignment that satisfies ϕ. Indeed, due to Claim <ref>, we know that each variable will receive exactly one truth value. Also, since S is an edge-irregulator, and due to Claim <ref>, we know that for each j∈ [m], there exists an i∈ [n] such that either v_ic_j∈ S or v'_ic_j∈ S; that is, for each clause C_j, there exists either a literal x_i or a literal ¬ x_i that has been set to true. In other words, each clause of ϕ is satisfied by T. This ends the reduction.
Let G be a graph and k∈ℕ. Deciding if _e(G)≤ k is W[1]-hard parameterised by k.
The reduction is from k-Multicoloured Clique.
k-Multicoloured Clique
A graph G'=(V,E) and a partition (V_1,…,V_k) of V into k independent sets.
Does there exist a set S⊆ V such that G'[S] is a clique of order k?
It is known that k-Multicoloured Clique is W[1]-hard parameterised by k <cit.>.
On a high level, our reduction will proceed as follows. Starting with the graph G' that is given in the input of k-Multicoloured Clique, we will first subdivide every edge of the graph G'. Then, for each i∈[k], we will attach one copy of a particular gadget to the vertices of V_i. Also, for each 1≤ i<j≤ k, we will attach a copy of our gadget to the vertices that correspond to the edges v_iv_j of G', with v_i∈ V_i and v_j∈ V_j. In total, we will add (k^2+k)/2 gadgets.
The gadgets are structured so that any edge-irregulator of the graph contains at least one edge for each gadget (so any solution has a size of at least (k^2+k)/2). Furthermore, we prove that, if we have selected only one edge from a gadget, then that edge must be incident to either a vertex of the original graph or a vertex that represents an edge of the original graph.
Finally, we show that:
* an edge-irregulator S that contains exactly one edge from each gadget (i.e. an edge-irregulator of size (k^2+k)/2) can give us a clique of size k in the original graph by selecting the vertices and edges (represented by vertices) of the original graph that are incident to the edges of S and
* if we have a clique of size k in the original graph we can construct an optimal edge-irregulator S by selecting the edges of the gadgets that are incident to the k vertices of the clique and the (k^2-k)/2 vertices that represent the edges of the clique.
We proceed with the formal proof. Assume that we are given an instance G'=(V,E) with vertex partition (V_1,…,V_k) where |V_i| = n for all i ∈ [k]. For each i ∈ [k], we denote by v_i^p, for p∈ [n], the vertices of V_i.
We construct a graph G as follows:
* Start with a copy of G'.
* Subdivide each edge e ∈ E. Let u_i,j^p,q be the vertex that corresponds to the edge v_i^pv_j^q∈ E. Also, let U_i,j be the set of vertices that corresponds to the edges between the sets V_i and V_j, i.e., the set {u_i,j^p,q| v_i^pv_j^q∈ E}.
* For each pair (i,j) where 1≤ i < j ≤ k, create a copy of the gadget H_|U_i,j|, illustrated in Figure <ref>, and add all the edges between the copy of w and the vertices of U_i,j. We denote this copy of w by w_i,j, the copy of H_|U_i,j| by H^w_i,j and the copy of y in H^w_i,j by y_i,j.
* For each i ∈ [k], create a copy of the gadget H_|V_i| and add all the edges between the copy of w and the vertices of V_i. We denote this copy of w by w_i, the copy of H_|V_i| by H^w_i and the copy of y in H^w_i by y_i.
* Finally, add leaves attached to the vertices of V_i, i ∈ [k], so that each vertex of V_i has degree kn, and attached to the vertices of U_i,j, 1≤ i<j ≤ k, so that each vertex of U_i,j has degree kn + 1.
Let G be the resulting graph.
We prove that G has an edge-irregulator of order (k^2 + k)/2 if and only if G' is a yes instance of k-Multicoloured Clique.
Assume that G' is a yes instance of k-Multicoloured Clique and C= {c_1, … , c_k} is a clique in G' with c_i∈ V_i for every i∈[k]. We will construct an edge-irregulator of G as follows. Start with an empty set S.
Notice that, for each i ∈ [k], |V_i ∩ C|=1 and let p∈[n] be such that v_i^p=c_i;
we add to S the edge v_i^p w_i.
For each pair (i,j), 1≤ i<j ≤ k, let p,q∈[n] be such that v_i^p=c_i and v_j^q=c_j; we add to S the edge u_i,j^p,qw_i,j. Notice the edge v_i^p v_j^q must exist in E since C is a clique. It follows that the vertex u_i,j^p,q, and therefore the edge u_i,j^p,qw_i,j, also exists in G. By construction, |S| = (k^2+k)/2. It only remains to prove that S is an edge-irregulator of G.
Consider the graph G-S. Observe that, for every H^w_i, i ∈ [k], we have reduced the degree of w_i by exactly one. Therefore, any two adjacent vertices of H^w_i have different degree (see Figure <ref>). The same holds true for every H^w_i,j, 1≤ i<j ≤ k.
Consider now the edges xz∈ E(G) such that x∈{w_i,w_j,w_i,j}, and z∈ V_i ∪ U_i,j∪ V_j, 1≤ i<j ≤ k. Notice that d_G-S(x)=n^2-1 and kn-1≤ d_G-S(z)≤ kn+1. For sufficiently large n, we have that n^2-1 > kn+1.
It remains to consider the edges between vertices in V_i ∪ V_j and in U_i,j for any 1≤ i<j ≤ k.
Notice that, for every 1≤ i<j ≤ k, all vertices of V_i ∪ V_j, except one vertex v_i^p∈ V_i and one vertex v_j^q∈ V_j, have degree kn, and d_G-S(v_i^p)=d_G-S(v_j^q)=kn - 1.
Also, all vertices of U_i,j, except one vertex u', have degree kn +1, and d_G-S(u')=kn. So, u' is the only vertex of U_i,j that could possibly have the same degree as a vertex in V_i∖{v_i^p} or V_j∖{v_j^q}. It follows by the construction of S that u' is actually u_i,j^p,q. Also, by the construction of G, u_i,j^p,q is adjacent only to v_i^p and v_j^q, as it represents the edge between their corresponding vertices in G'. Thus, for every 1≤ i<j ≤ k, no vertex in U_i,j has the same degree as any of its neighbours in V_i or V_j. It follows from all the arguments above that S is indeed an edge-irregulator of G.
Now we show that if _e(G)=(k^2+k)/2 then G' has a clique of size k. Let S be an edge-irregulator of G of order (k^2+k)/2.
First, we notice that for each i∈ [k], d_G(w_i)=d_G(y_i) and that for each 1≤ i <j ≤ k, d_G(w_i,j)=d_G(y_i,j).
Let E_w_i be the set of edges w_iv for v ∈ V_i and E_w_i,j be the set of edges w_i,ju for u ∈ U_i,j. Also, let w ∈{w_i | i ∈ [k]}∪{w_i,j| 1≤ i < j ≤ k }.
Since S is an edge-irregulator of G, it follows that |S∩ (E(H^w)∪ E_w)|≥ 1. Also, observe that for any pair of distinct vertices w, w' ∈{w_i | i ∈ [k]}∪{w_i,j| 1≤ i < j ≤ k }, we have that (E(H^w)∪ E_w) ∩ ( E(H^w')∪ E_w' ) = ∅. Thus, and since |S|=(k^2+k)/2, we obtain that, actually, |S∩ (E(H^w)∪ E_w)|= 1. Next, we show that S includes only edges from the set E_w, for each w ∈{w_i | i ∈ [k]}∪{w_i,j| 1≤ i < j ≤ k }. In particular we claim the following:
Let w ∈{w_i | i ∈ [k]}∪{w_i,j| 1≤ i < j ≤ k }.
It holds that S ∩ E(H^w) = ∅ and that |S ∩ E_w| = 1.
Assume that S∩ E(H^w)≠∅ and let e ∈ S ∩ E(H^w).
We distinguish cases according to which edge of H^w is e. In each case, we show that S must include an additional edge of E(H^w), which is a contradiction to the fact that |S ∩ ( E(H^w) ∪ E_w ) |= 1.
e is incident to neither w nor y: Then S must also include an additional edge incident to w or y (from previous discussion).
e is incident to y: Then, S must include an additional edge of E(H^w), as otherwise d_G-S(y)=d-1 and y would have at least one neighbour of degree d-1.
e is incident to w and e ≠ wy: Then, S must include an additional edge of E(H^w), as otherwise G-S would include a connected component isomorphic to K_2.
The previous claim also shows that S ⊆⋃_i ∈ [k]E_w_i∪⋃_1≤ i < j ≤ k E_w_i,j.
We now explain how to construct a clique of G' of order k.
Let ℓ(i)=m(i) be the index that specifies which edge incident to w_i is included in S. That is, ℓ(i) is such that w_i v_i^ℓ(i)∈ S.
Similarly, for each 1≤ i<j ≤ k, let ℓ(i,j) and m(i,j)
be the indices such that w_i,j u_i,j^ℓ(i,j) , m(i,j)∈ S.
Notice that both ℓ(i) and ℓ(i,j) are unique as S contains exactly one edge incident to each of w_i and w_i,j (by Claim <ref>).
The set C ={ v_i^ℓ(i)| i ∈[k] } induces a clique of order k in G'.
First, for any 1≤ i<j ≤ k, we show that ℓ(i) = ℓ(i,j) and m(j) = m(i,j). To simplify the notation let ℓ = ℓ(i,j) and m = m(i,j). By the definition of ℓ and m we have that w_i,j u_i,j^ℓ , m∈ S.
Now, we consider the degrees of the vertices v_i^ℓ
and u_i,j^ℓ,m.
Since w_i,j u_i,j^ℓ, m ∈ S, we have that d_G-S(u_i,j^ℓ,m)=kn.
If ℓ(i) ≠ℓ,
then d_G-S(v_i^ℓ)=kn, as S would not include any edges incident to v_i^ℓ in that case. This is a contradiction since v_i^ℓ and u_i,j^ℓ, m are adjacent in G (by construction) and remain so in G-S (as S ⊆⋃_i ∈ [k]E_w_i∪⋃_1≤ i < j ≤ k E_w_i,j). Therefore, for any 1≤ i<j ≤ k, ℓ(i) = ℓ = ℓ(i,j). Similarly, we can show that for any 1≤ i<j ≤ k, m(j) = m = m(i,j).
Now we show that for every pair of distinct vertices u,v ∈{ v_i^ℓ(i)| i ∈[k] }, we have that u and v are adjacent in G'.
W.l.o.g. let u =v_i^ℓ(i) and v = v_j^ℓ(j) for some 1≤ i < j ≤ k. We know that ℓ(i) = ℓ and ℓ (j) = m(j) = m. Therefore, the vertex u_i,j^ℓ(i,j) , m(i,j) = u_i,j^ℓ , m of G is adjacent to v_i^ℓ (i) and v_j^ℓ (j). This means that v_i^ℓ (i) and v_j^ℓ (j) are adjacent in G' as the vertex u_i,j^ℓ(i) , m(j) corresponds to the edge between these two vertices in G' (recall the construction of G).
Thus, any pair of vertices in C is a pair of adjacent vertices in G'. It follows that C is a clique.
This completes the proof.
Unfortunately, this problem exhibits a similar behaviour to finding optimal vertex-irregulators, as it also remains intractable even for “relatively large” structural parameters.
Let G be a graph and k∈ℕ. Deciding if _e(G)≤ k is W[1]-hard parameterised by either the feedback vertex set number or the treedepth of G.
The reduction is from the General Factor problem:
General Factor
A graph H=(V,E) and a list function L: V →𝒫({0,…,Δ(H)}) that specifies the available degrees for each vertex u ∈ V.
Does there exist a set S⊆ E such that d_H-S(u) ∈ L(u) for all u ∈ V?
This problem is known to be W[1]-hard when parameterised by the vertex cover number of H <cit.>.
Starting from an instance (H,L) of General Factor, we construct a graph G such that _e(G)≤ n^2, where n=|V(H)|, if and only if (H,L) is a yes-instance.
For every vertex u∈ V(H), let us denote by L(u) the set {0,1,…, d_H(u)}∖ L(u). In the case where {0,1,…, d_H(u)}∖ L(u)=∅, we set L(u)={-1}. On a high level, the graph G is constructed by adding some trees on the vertices of H. In particular, for each vertex u∈ V(H) and for each element a in L(u), we will attach a tree to u whose purpose is to prevent u from having degree a in G-S, for any optimal edge-irregulator S of G.
We begin by defining an arbitrary order on the vertices of H. That is, V(H)={u_1,u_2,…,u_n}. Next, we describe the trees we will use in the construction of G. In particular, we will describe the trees that we attach to the vertex u_i, for every 1≤ i≤ n. First, for each a_j∈L(u_i), define the value a'_j=d_H(u_i)-a_j. Also, for each j, let d_i,j=2in^4-a'_j.
For each “forbidden degree” a_j in the list L(u_i), we will attach a tree T_i,j to u_i. We define the tree T_i,j as follows.
First, for every 0≤ k≤ n^2-1,
create n^2
copies of S_d_i,j-k (the star on d_i,j-k vertices) and q additional copies of S_d_i,j-n^2+1
(the exact value of q will be defined in what follows). Then, choose one leaf from each one of the above stars, and identify them into a single vertex denoted as u_i,j; the value of q is such that d(u_i,j)=d_i,j-1=2in^4-a'_j-1.
Let T_i,j be the resulting tree and let us say that u_i,j is the root of T_i,j.
Let us now describe the construction of G. For each vertex u_i∈ V(H) and for each a_j∈L(u_i), add the tree T_i,j to H and the edge u_i,ju_i. Then, for each vertex u_i∈ V(H), for any j such that u_i,j is a neighbour of u_i, add p_i additional copies of the tree T_i,j, as well as the edges between u_i and the roots of the additional trees, so that d_G(u_i)=2in^4.
The resulting graph is G. Note that, for each vertex of V(H), we are adding at most 𝒪(n^4)
trees, each tree containing 𝒪(n^4) vertices.
Thus, the construction of G is achieved in polynomial time.
We are now ready to present our reduction. Assume first that (H,L) is a yes-instance of General Factor, and let S⊆ E be such that d_H-S(u)∈ L(u) for all u∈ V(H). We claim that S is also an edge-irregulator of G. By the construction of G, and since S only contains edges from H, there are no two adjacent vertices in G-H that have the same degree in G-S. Thus, it remains to check the pairs of adjacent vertices x,y such that, either both x and y belong to V(H), or, w.l.o.g., x∈ V(H) and y∈ V(G-H). For the first case, let x=u_i and y=u_i', for 1≤ i<i'≤ n. Then, assuming that d_G-S(u_i)=d_G-S(u_i'), we get that 2in^4-p=2i'n^4-p', where S contains 0≤ p≤ n^2 and 0≤ p'≤ n^2 edges incident to u_i and u_i' respectively. Thus, 2n^4(i-i')=p-p', a contradiction since -n^2≤ p-p'≤ n^2 and -n≤ i-i'≤ n. For the second case, for every i, let d_G-S(u_i)=2in^4-p, where the set S contains 1≤ p≤ n^2 edges of H incident to u_i. Also, by the construction of G and since S only contains edges from H, we have that for every j, d_G-S(u_i,j)=d_G(u_i,j)=2in^4-a'_j, where, recall, a'_j=d_H(u_i)-a_j for a_j∈L(u_i). Assume now that there exist i,j such that d_G-S(u_i)=d_G-S(u_i,j). Then, 2in^4-p=2in^4-d_H(u_i)+a_j and thus d_H(u_i)-p=a_j. But then d_H-S(u_i)=a_j, which is a contradiction since a_j∈L(u_i). Thus, S is an edge-irregulator of G and |S|≤ n^2 since S only contains edges of E(H).
For the reverse direction, assume that _e(G)≤ n^2 and let S be an optimal edge-irregulator of G. We will show that S is also such that d_H-S(u_i)∈ L(u_i), for every i. Let us first prove the following claim.
Let S be an optimal edge-irregulator of G. For every i,j, let T be any copy of the T_i,j tree that is attached to u_i, and let u be the root of this T_i,j. If S contains x≥ 1 edges of E_i,j=E(T)∪{uu_i}, then x≥ n^2.
Assume there exist i,j such that |S∩ E_i,j|=x≥ 1 and x≤ n^2. Among those edges, there are x_1≥ 0 edges incident to u and x_2≥ 0 edges incident to children of u (but not to u), with x_1+x_2=x< n^2.
Assume first that x_1=0. Then x=x_2 and there is no edge of S∩ E_i,j that is incident to u. Then d_G-S(u)=d_G(u) and observe that d_G(u) is strictly larger than that of any of its children (by the construction of G). It follows that S∖ (S∩ E_i,j) is also an edge-irregulator of G, contradicting the optimality of S. Thus x_1≥ 1. It then follows from the construction of G that there exist at least n^2 children of u, denoted by z_1,…,z_n^2, such that d_G-S(u)=d_G(z_k), for every 1≤ k≤ n^2. Since x<n^2, there exists at least one 1≤ k≤ n^2 such that d_G-S(u)=d_G-S(z_k), contradicting the fact that S is an edge-irregulator. Thus x≥ n^2.
It follows directly from Claim <ref> that S contains only edges of E(H). Assume that there exist i,j such that d_H-S(u_i)=a_j and a_j∈L(u_i). Then d_G-S(u_i)=2in^4-a'_j. Also, by the construction of G, u_i is adjacent to a vertex u_i,j for which (since S contains only edges of E(H)) we have that d_G-S(u_i,j)=d_G(u_i,j)=2in^4-a'_j. This is contradicting the fact that S is an edge-irregulator of G. Thus, for every i,j, we have that if d_H-S(u_i)=a_j, then a_j∈ L(u_i), which finishes our reduction.
Finally, if H has vertex cover number vc, then, by Observation <ref>, and since G is constructed by attaching trees of depth 3 directly on the vertices of H, we have that G has treedepth and feedback vertex set 𝒪(vc). This concludes our proof.
We close this section by observing that the proof of Theorem <ref> can be adapted for the case of edge-irregulators. Indeed, it suffices to replace the guessing of vertices and the variables defined on vertices, by guessing of edges and variables defined on the edges of the given graph. Finally, the definition of the sub-types is done through subgraphs produced only by deletion of edges. This leads us to the following:
Given a graph G with vertex integrity k, there exists an algorithm that computes _e(G) in FPT-time.
§ CONCLUSION
In this work we continued the study of the problem of finding optimal vertex-irregulators, and introduced the problem of finding optimal edge-irregulators. In the case of vertex-irregulators, our results are somewhat optimal, in the sense that we almost characterise exactly which are the “smallest” graph-structural parameters that render this problem tractable. The only meaningful parameter whose behaviour remains unknown is the modular-width of the input graph. The parameterised behaviour of the case of edge-irregulators is also somewhat understood, but there are still some parameters for which the problem remains open.
Another interesting direction is that of approximating optimal vertex or edge-irregulators. In particular it would be interesting to identify parameters for which either problem becomes approximable in FPT-time (recall that vertex-irregulators are not approximable within any decent factor in polynomial time <cit.>). Finally, provided that the behaviour of edge-irregulators is better understood, we would also like to propose the problem of finding locally irregular minors, of maximum order, of a given graph G.
|
http://arxiv.org/abs/2307.04084v1 | 20230709022832 | A Sustainability Roadmap for C$^3$ | [
"Martin Breidenbach",
"Brendon Bullard",
"Emilio Alessandro Nanni",
"Dimitrios Ntounis",
"Caterina Vernieri"
] | hep-ex | [
"hep-ex",
"physics.acc-ph"
] |
A Sustainability Roadmap for C^3
Martin Breidenbach, Brendon Bullard, Emilio Alessandro Nanni, Dimitrios Ntounis, Caterina Vernieri
July 9, 2023
====================================================================================
§ INTRODUCTION
An electron-positron collider gives a unique opportunity to study the Higgs boson's properties with unprecedented precision and also provide an exceptionally clean environment to search for subtle new physics effects <cit.>. A number of different "Higgs factory" proposals, based on linear and circular colliders, are now under consideration. All of these provide collisions at center of mass energies in the range of 240-370 GeV, and some also are capable of reaching higher energies.
A high-energy particle collider is a large energy-consuming research facility. As such, it is important to balance its scientific importance against its environmental cost. The environmental impact of large accelerators has been analyzed in the recent Snowmass 2021 study <cit.> of the future of particle physics in the US <cit.>. The papers <cit.> have examined the environmental cost of particular Higgs factory proposals, though often concentrating on particular elements of the total cost.
In this paper, we attempt a comprehensive evaluation of the carbon cost of the Cool Copper Collider (C^3) Higgs factory proposal <cit.> over its full lifetime, including costs from construction and from operation over the proposed timeline. The structure of this paper is as follows: in Section <ref>, we briefly review the design of C^3. In Section <ref>, we review the physics reach for C^3 and other Higgs factory proposals and introduce a metric for balancing carbon impact against the physics impact of each proposal. In Section <ref>, we analyze the power costs of operation of C^3 and describe methods for modifying the power design of the accelerator that would lead to substantial savings with little impact on the physics performance. In Section <ref>, we analyze the carbon impact of the construction of C^3 and emphasize that cut-and-cover construction, as opposed to construction in a deep tunnel, has significant advantages. In Section <ref>, we discuss options for the source of electrical power for the laboratory. In Section <ref>, we bring these analyses together to estimate the total carbon footprint of C^3. Using information from available studies and design reports, we estimate the carbon impact of other Higgs factory proposals and compare these to C^3 in the framework described in Section <ref>.
§ REVIEW OF THE ACCELERATOR DESIGN
C^3, recently proposed <cit.>, is a linear facility that will first operate at 250 GeV center-of-mass collisions. Immediately after, without further extension of the linac, it will run at 550 GeV with an RF power upgrade. The high energy operations will enable the exploration of the Higgs-top coupling, and provide direct access to the Higgs self-coupling with double Higgs production <cit.>. Furthermore, the beam polarization, which exploits the strong dependence of electroweak processes on the chirality of the initial state particles, will offer unique insights into the underlying physics, acting as a new tool for discovery <cit.>. This offers a strong complementarity with proton and circular colliders, where beam polarization is not possible.
C^3 utilizes a radically different approach to linear accelerators to build a collider with high gradient and high RF efficiency, and thus lower capital and operating costs <cit.>. C^3 is based on a distributed coupling accelerator concept, running under liquid nitrogen (LN) <cit.>, that has led to an optimized accelerating gradient and minimized breakdown problems with respect to earlier designs based on normal conducting technologies. This has yielded an overall optimization of the gradient at 70 and 120 MeV/m for the 250 GeV and 550 GeV operating points, respectively <cit.>. Much higher energies are possible if length is not the major consideration. The fundamental parameters, assumed for the analysis in this paper, are shown in Table <ref>.
By far the major development to date is the actual distributed coupling accelerator structure. C^3 will use C-band (5.712 GHz) standing wave RF accelerating structures that are 1 m long. Each has an RF waveguide to bring power in, and in the more probable operating modes, splits RF power evenly between the beam and dissipation in the structure with 43% beam loading. Operating at 80 K brings the shunt impedance up to 300 MΩ/m, allowing for efficient operation at 120 MeV/m. These gradients have been demonstrated at C-band <cit.> and with an electron beam in an X-Band (11.424 GHz) structure on the SLAC XTA beamline <cit.>. The C-band structure has been tested at low power at SLAC and at high power without beam at Radiabeam <cit.>. The gradient results in a collider with a 550 GeV center-of-mass energy capability on an 8 km footprint.
A pre-conceptual design for the overall linac cryogenics has been developed that includes the design for the CryoModules. For the 250 GeV and 550 GeV design, each linac will have 3 re-liquification cryoplants. LN will flow out along the linac in both directions, so there are 6 flow runs. The LN will be above the raft structures, with an initial velocity of ∼0.03 m/s. The LN will cool the accelerator structures by nucleate boiling with a power density of 0.4 W/cm^2, producing saturated vapor which counter-flows back to the cryoplant. Each cryo-run is about 450 meters in length. The vapor velocity near the cryoplant is ∼3 m/s.
§ COMPARISON OF HIGGS FACTORY PHYSICS REACH
Among the colliders being evaluated by the community, the International Linear Collider (ILC) <cit.>, based on superconducting RF technology, has the most advanced design <cit.>, and the ILC is currently under consideration for construction in Japan.
CERN is pursuing as its main strategy a large circular collider, the FCC <cit.>, and China is planning a similar circular collider, the CEPC <cit.>. Each of these circular colliders would require a tunnel with circumference of the order of 100 km to limit synchrotron radiation. Still, though, the expected instantaneous luminosity drops off significantly above center-of-mass energies of 350–400 GeV.
A different alternative is to construct a compact linear collider based on high gradient acceleration. CERN is also pursuing such a proposal, CLIC <cit.>, that would operate at a collision energy of 380 GeV.
The carbon footprint of the proposed future Higgs factories should be assessed relative to the expected physics reach, which has been reviewed most recently in the context of the Snowmass Community process <cit.>. The primary physics goal of a future Higgs factory is the determination of the total Higgs width and Higgs couplings with per-cent or sub-per-cent precision. A reasonable figure of merit to gauge the physics reach of each machine is the expected level of precision for each of these measurements. We note that evaluating the projected measurement precision accounts for the fact that different beam configurations (center-of-mass energy and beam polarization) have a strong impact on the physics reach of each of those machines. These differences in precision are not accounted for when comparing the total number of Higgs bosons produced alone <cit.>.
The physics reach at e^+e^- colliders increases with the center-of-mass energy, since different Higgs boson production mechanisms become accessible. At 250 GeV center-of-mass energy operations the main Higgs boson production mechanism is associated production with a Z boson (e^+e^-→ ZH), enabling a model-independent determination of the Higgs boson total width. Higgs boson production via the W-boson fusion reaction e^+e^-→νν̅H is accessible at √(s)∼500 GeV, where the only visible signals in the final state come from Higgs boson decays. This allows Higgs boson measurements governed by different systematic effects, complementary to the 250 GeV data, as well as opportunities to study effects such as separation of H → gg/bb̅/cc̅ decays and CP violation in H →τ^+τ^- <cit.>. Importantly, at high center-of-mass energies, double Higgs boson production in the ZHH channel opens up, providing direct access to the Higgs boson self-coupling λ_3. At circular machines, given the energy limitations, double Higgs boson production mechanisms are not accessible, thus allowing only for indirect and model-dependent measurements of λ_3, through loop effects in single-Higgs production.
The use of longitudinal beam polarization offers unique advantages for effective precision measurements at a linear collider, since the interaction cross sections at an e^+e^- collider have strong dependencies on beam polarization.
It has been demonstrated that at 250 GeV center-of-mass energy, the ultimate precision reach in the determination of Higgs couplings, through a Standard Model Effective Field Theory (SMEFT) analysis, for an integrated luminosity of 2 ab^-1 with polarized beams, has comparable sensitivity to 5 ab^-1 with unpolarized beams, with most of the gain coming from e^- polarization alone <cit.>. The main effect of beam polarization is to discriminate the effect of different SMEFT operators that contribute to the Higgs boson coupling. There is a similar gain of about a factor of 2.5 from discrimination of the effects of the operators contributing to the WWγ and WWZ couplings, which also enter the SMEFT analysis.
The positron polarization becomes more relevant at higher center-of-mass energies. For instance, W-boson fusion reactions, such as e^+e^-→νν̅H, proceed only from e_L^-e_R^+ initial states, providing a cross-section (or, equivalently, effective luminosity) enhancement of ∼ 2.5 for typical polarizations foreseen at future linear machines <cit.>. Here positron polarization makes a significant contribution. This implies that the same number of Higgs bosons can be produced through this process with only ∼ 40 % of the integrated luminosity, compared to having unpolarized beams.
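As a rough numerical illustration of the factor quoted above (ours, not from the text): for a process that proceeds only from e_L^- e_R^+ initial states, the polarized-to-unpolarized cross-section ratio is (1 - P_e^-)(1 + P_e^+); the 80% electron and 30% positron polarizations used below are assumed typical linear-collider values and are not specified in this document.

```python
# Back-of-the-envelope check of the effective-luminosity gain for a pure e-_L e+_R process.
def lr_enhancement(P_minus, P_plus):
    """Polarized/unpolarized cross-section ratio for a purely left-right initial state."""
    return (1 - P_minus) * (1 + P_plus)

print(lr_enhancement(-0.8, +0.3))   # ~2.3, i.e. the same Higgs yield with roughly 40% of the luminosity
```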
Moreover, beam polarization at high energy enables the suppression of relevant backgrounds, such as the dominant e^+e^-→ W^+W^- background for positive (negative) electron (positron) beam polarization, increasing the signal-over-background ratio and allowing the precise measurement of the rate of other backgrounds, as well as the reduction of detector-related systematic uncertainties, with combined measurements of datasets with four distinct initial-state polarization configurations. These effects collectively indicate the increased precision reach that beam polarization provides for linear machines <cit.>.
Additionally, electron (primarily) and positron (secondarily) polarization enhance the precision in the extraction of the Higgs couplings, compared to having unpolarized beams: a polarized initial state can yield an effective luminosity improvement factor of up to ∼ 2.5 for linear machines, allowing the same precision for various Higgs couplings to be reached with ∼ 40 % of the integrated luminosity.
For these reasons, in this analysis we propose a comparison of the carbon footprint of collider concepts relative to their expected precision in Higgs coupling measurements. Table <ref> summarizes the projected relative precision for Higgs boson couplings measurements at each collider combined with projected results from the HL-LHC. As can be seen, the overall physics reach of all proposed Higgs factories is similar <cit.> for the 240-250 GeV operations, and additional measurements become accessible for the higher center-of-mass energy runs at linear colliders. We also compare the Higgs Factory proposals in terms of total energy consumption and carbon emissions, for both construction activities and operations, with the latter being the most relevant number when evaluating each project's impact on the global climate.
We then present an estimate of energy consumption and carbon footprint per unit of physics output. This is achieved by taking the average of the relative precision over all Higgs couplings, weighting them by the relative improvement in their measurement with respect to HL-LHC:
⟨δκ/κ⟩ = [∑_i w_i (δκ/κ)_i] / [∑_i w_i]
where the sum runs over the columns of Table <ref> and the weight is defined as:
w = [(δκ/κ)_HL-LHC - (δκ/κ)_HL-LHC+HF] / (δκ/κ)_HL-LHC+HF
This definition weights measurements by their relative improvement over HL-LHC when combining the HL-LHC and future Higgs Factory (HF) results. Qualitatively, measurements that minimally improve those of HL-LHC are assigned weights near zero, while HF measurements with high precision or large improvement over HL-LHC are assigned larger weights. While other weighting schemes could be used, we argue that Equation <ref> is unbiased towards the type of physics measurement (e.g. Yukawa, self-coupling, vector coupling) and it emphasises the individual strengths of each collider facility.
For the estimation of the weighted average precision, the hcc̅ coupling was excluded, since there is no estimate for HL-LHC, whereas we assume that the hhh coupling for CEPC can be measured with the same precision as for FCC. The weighted average precision for each collider is given in the last row of Table <ref>.
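The weighting scheme can be illustrated with a few lines of Python; the relative precisions below are invented placeholders for three hypothetical couplings (they are not the values of Table <ref>) and only show how the two equations above are combined.

```python
# Toy evaluation of the weighted average precision; numbers are illustrative only.
import numpy as np

hl_lhc         = np.array([3.2, 5.0, 1.6])   # hypothetical delta_kappa/kappa, HL-LHC alone (%)
hl_lhc_plus_hf = np.array([1.0, 1.1, 0.6])   # hypothetical precision after combining with a Higgs factory (%)

w   = (hl_lhc - hl_lhc_plus_hf) / hl_lhc_plus_hf   # weight: relative improvement over HL-LHC
avg = np.sum(w * hl_lhc_plus_hf) / np.sum(w)       # weighted average precision
print(round(avg, 2))
```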
§ POWER CONSUMPTION AND OPTIMIZATIONS
The most obvious way to reduce the carbon impact of a major facility is to minimize the amount of power that it consumes, thereby minimizing the associated emissions from energy production. This is firmly within the means of the facility designers and crucially does not rely on grid electrification. The nominal operating parameters for C^3-250 are given in Table <ref>.
Several avenues can be pursued to optimize operational power requirements. Improvements in luminosity or reduction in power consumption are possible through the development of ancillary technology by increasing the RF source efficiency, increasing the efficiency of powering the accelerating structures or modification of beam parameters to increase luminosity. At present the main linac requires ∼100 MW of power with 40 MW for the RF sources and 60 MW for the cryogenics.
For the RF sources, the concept utilizes an overall RF system efficiency of 50% which is in line with present high power RF sources that are designed with efficiency in mind. However, significant advances in modern design techniques for klystrons are increasing the klystron amplifier's ultimate efficiency significantly with the inclusion of higher order mode cavities, multi-cell outputs and advanced multi-dimensional computational tools. For example, designs now exist for a 50 MW class RF source<cit.> approaching an amplifier efficiency of 70%. Multi-beam RF sources, reducing the beam perveance, have advanced design efforts exceeding 80% efficiency<cit.>. These results reinforce modern understanding on the limits of klystron efficiency <cit.> which indicate a klystron amplifier efficiency of 70-80% is possible, leading to an overall RF source efficiency of 65%.
RF pulse compression, presently not in the baseline, is also a well known technique for powering high gradient structures. For C^3, pulse compression is particularly useful due to the impact of power loss at cryogenic temperatures and due to the relatively long fill time for a copper structure operating at cryogenic temperatures. In a previous study<cit.>, it was found that low factors of pulse compression, which preserve RF efficiency in the compressor<cit.>, improve the overall efficiency of the system by 30%. Recently, additional efforts have been made to realize the extremely high Q cavities required for pulse compression with cryogenically cooled RF structures <cit.>; these include concepts operating at room temperature and inside the cryostat at 80 K.
For the baseline design <cit.> we anticipate operation with 700 ns and 250 ns flat tops respectively for gradients of 70 and 120 MeV/m and a constant power dissipation of 2.5 kW/m at 120 Hz. Figure <ref> and Figure <ref> show the RF power, dissipated energy and gradient during the pulse. While these flat top lengths were selected to limit the challenges of breakdown, increasing the flat top length and reducing the repetition rate should be investigated in order to reduce the thermal load on the linac. At present, the thermal balance between the structure fill/dump time and the flat top is approximately 50% (equal thermal load). If we were to extend the flat top lengths by a factor of two and reduce the repetition rate by a factor of two, the thermal dissipation in the main linac would decrease by 25%. This improvement would have little effect on the overall design of the accelerator, and would be acceptable if the breakdown rates remain low enough. Proving that this is possible will require high gradient testing of structures with 1400 ns and 500 ns respectively.
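The 25% figure follows from simple duty-cycle bookkeeping, sketched below (our own illustration; the roughly 50/50 split between fill/dump and flat-top losses is the one stated above, and the per-pulse energies are in arbitrary units).

```python
# Thermal-load scaling when the flat top is doubled and the repetition rate is halved.
E_fill_dump = 1.0     # energy dissipated while filling and dumping the structure (per pulse)
E_flat      = 1.0     # energy dissipated during the flat top (per pulse), ~50/50 split assumed
f_rep       = 120.0   # baseline repetition rate in Hz

baseline = f_rep * (E_fill_dump + E_flat)
modified = (f_rep / 2) * (E_fill_dump + 2 * E_flat)   # 2x flat top, half the repetition rate
print(1 - modified / baseline)                        # -> 0.25, i.e. a 25% reduction in dissipation
```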
The beam current of C^3 is relatively low thanks to the large bunch spacing and efficient accelerating structures. One could pursue the possibility of reducing the bunch spacing to increase the current. However, this will require compatibility studies with the detector design. Here we consider the scenario where the bunch spacing is reduced by a factor of two. This would keep a bunch spacing of >1 ns for both C^3-250/550, resulting in a decrease of 25% for the cryogenics power. The RF power required would only decrease by 20% because the peak RF power required would be slightly higher during the RF pulse flat top to compensate for the additional current.
We note that these approaches can all be combined for mutual benefit as shown in the last row of Table <ref>. The demonstration R&D plan <cit.> will be able to investigate these approaches and lead to potential power savings.
§ CARBON IMPACT OF CONSTRUCTION
Under the assumption that the electric grid will be successfully de-carbonized by 2040, as is the goal of many international climate plans, construction, rather than operations, may well dominate the climate impact of a new particle physics facility <cit.>.
For FCC it is projected that the whole accelerator complex[The main tunnel plus the additional buildings on the site, the materials for the accelerator and detectors, assuming a main tunnel length of 97.7 km (the updated FCC design anticipates 91 km).] will have a carbon impact similar to that of the redevelopment of a neighbourhood of a major city <cit.>. This indicates that the environmental impact of any future collider facility is going to receive the same scrutiny as that of a major urban construction project.
The bottom-up analysis in <cit.> derives an estimate of global warming potential (GWP) for the main tunnel material (concrete) manufacture alone to be equivalent to the release of 237 kton of CO_2 equivalent (CO_2e). An alternative top-down analysis is instead dependent on the character of the earth to be excavated, leading to estimates ranging from 5-10 kton CO_2e/km of tunnel construction and total emissions of 489-978 kton CO_2e[Contributions from many bypass tunnels, access shafts, large experimental caverns, and new surface sites are excluded.].
A life cycle assessment of the ILC and CLIC accelerator facilities is being performed by ARUP <cit.> to evaluate their holistic GWP, so far providing a detailed environmental impact analysis of construction. The components of construction are divided into classes: raw material supply, material transport, material manufacture, material transport to work site, and construction process. These are labelled A1 through A5, where A1-A3 are grouped as materials emissions and A4-A5 are grouped as transport and construction process emissions. The total GWP for ILC and CLIC is taken to be 266 and 127 kton CO_2e <cit.>, respectively[We use the emissions figures associated to the CLIC drive-beam design, which is more efficient than the alternative design utilizing only klystrons for RF power.]. The approximate construction GWP for the main tunnels are 6.38 kton CO_2e/km for CLIC (5.6m diameter) and 7.34 kton CO_2e/km for ILC (9.5m diameter); the FCC tunnel design is similar to that of CLIC, so 6.38 kton CO_2e/km is used for the calculation of emissions for both FCC and CEPC. While a comprehensive civil engineering report is unavailable for FCC and CEPC, we estimate the concrete required for klystron gallery, access shafts, alcoves, and caverns to contribute an additional 30% of emissions, similar to what is anticipated for CLIC. The analysis indicates that the A4-A5 components constitute 20% for CLIC and 15% for ILC. In the absence of equivalent life cycle assessment analysis for FCC and CEPC, we account for the A4-A5 contributions as an additional 25%. A summary of these parameters is given in Table <ref>.
The C^3 tunnel will be about 8 km long with a rectangular profile in each of its component systems. Assuming a cut and cover approach, all the excavated material will be replaced to yield a small berm. We estimate that for the whole accelerator complex only about 50 thousand cubic meters of spoil for the experimental hall will have to be relocated. Figure <ref> shows a schematic of the cross section, where the klystron gallery is situated directly above the accelerator hall with sufficient concrete shielding to allow constant access to the klystron gallery during operation. The application of a top-down estimate of 6-7 kton CO_2e/km obtained from the ARUP report is not appropriate for the surface site due to the differing cross section geometries of the accelerator housing. To allow for a fair comparison among facilities, we take the same basic assumptions of construction materials. In particular, construction uses a mix of CEM1 C40 concrete and 80% recycled steel, the GWP of concrete is taken to be 0.18 kg CO_2e/kg of concrete with density 2400 kg/m^3 <cit.>, and 85%/15% of emissions originate from concrete/steel production. Taking into account construction of the main linacs, injector linacs, damping rings, beam delivery system, and experimental hall, the total volume of construction material is estimated to be about 260,000 m^3 (consisting mostly of concrete by volume). This leads to a GWP of 133 kton CO_2e for the A1-A3 components and a GWP per unit length of the main linac of around 17 kton CO_2e/km. Notably, this is roughly a factor 2 larger than the GWP/km of main tunnel construction of ILC and CLIC; this suggests further tunnel geometry optimizations are achievable with a detailed engineering study. The surface site construction eliminates the need for additional infrastructure (e.g. access tunnels and turnarounds) and greatly reduces the complexity of the construction process, which we estimate to add a further 10%[This estimate is half the A4-A5 component associated to tunnelled facilities and is expected to overestimate the improvement associated to a cut and cover approach, due to significant reduction to spoil transport and operation of a boring machine] to the GWP. This yields a final estimate of 146 kton CO_2e for civil engineering.
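For transparency, the headline construction figures can be reproduced from the stated assumptions with the short estimate below (ours); the small differences with respect to the 133 kton CO_2e, 17 kton CO_2e/km and 146 kton CO_2e quoted above are rounding.

```python
# Rough re-derivation of the C^3 construction emissions from the inputs given in the text.
volume_m3        = 260_000    # total construction material, mostly concrete, m^3
rho_concrete     = 2400       # kg/m^3
gwp_concrete     = 0.18       # kg CO2e per kg of CEM1 C40 concrete
concrete_share   = 0.85       # fraction of A1-A3 emissions attributed to concrete (rest: steel)
process_overhead = 0.10       # assumed A4-A5 surcharge for cut-and-cover construction
linac_length_km  = 8.0

a1_a3_kton = volume_m3 * rho_concrete * gwp_concrete / concrete_share / 1e6
total_kton = a1_a3_kton * (1 + process_overhead)
print(round(a1_a3_kton), round(a1_a3_kton / linac_length_km, 1), round(total_kton))
```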
Unlike other Higgs factories under evaluation, the C^3 site has not been decided yet. A C^3 collider could in principle be sited anywhere in the world.
A community decision will be made regarding the actual site selection, although we note that C^3 offers a unique opportunity to realize an affordable energy frontier facility in the US in the near term, and the entire program could be sited within the existing US National Laboratories. The tunnel layout would be adapted to its location, and a cut and cover site, suitable for a horizontal layout, is also extremely attractive for both cost and schedule reasons.
The details of the siting options at FNAL are discussed in <cit.>. Sites such as the DOE Hanford site located in the Pacific Northwest have room to accommodate even bigger footprint machines within their site boundary.
§ POSSIBLE MITIGATION STRATEGY DURING OPERATIONS
The carbon footprint of the electricity production required to meet the total site power requirements of 150-175 MW can be substantial. The average carbon intensity of energy production since May 2022 is 194 and 381 g CO_2/kWh for the CAISO and PJM power grids, respectively <cit.>. This would result in emissions of 5.7 and 11.2 megatonnes of CO_2 equivalent for a 20 year run, respectively. The electrification of the grid will allow C^3 to operate much more sustainably by the time data taking begins. The U.S. “has set a goal to reach 100 percent carbon pollution-free electricity by 2035” in its 2021 emissions target report <cit.>. The U.S. is making progress toward this goal, having been ranked #1 on the Renewable Energy Country Attractiveness Index in 2021, driven primarily by widespread adoption of solar energy. The outlook for renewable energy investments has been further buoyed by the recent passage of the Inflation Reduction Act <cit.>. While full electrification by 2035 is conceivable, it is helpful to consider the powering infrastructure required to operate only with renewable energy sources, in order to evaluate the associated costs and feasibility. The three technologies of interest to this study are photovoltaic cells (solar), onshore and offshore turbines (wind), and energy storage systems (batteries) to facilitate the diurnal cycle of power generation by solar and wind sources.
Solar is the most appealing renewable energy source. It has achieved the highest market penetration among renewable sources and is expected to achieve utility-scale parity with non-renewables within the next decade. The present cost of PV cells is between 0.82 - 1.01 $/W and the land area required to operate a 3 MW scale solar farm is 6-8 acres/MW <cit.>. Assuming PV cell efficiencies will be driven well beyond the present 30% limit by multi-junction fabrication techniques, the values $0.80/W and 4 acres/MW are assumed <cit.>.
While wind energy trails solar in terms of market penetration, providing over 120 GW domestically, it would offer a complementary daily load profile to that of solar energy, where approximately twice as much power is generated at night than during the day, by both onshore and offshore wind farms <cit.>. While onshore wind has greatest penetration in the Midwest, where average wind speeds at 100m elevation can exceed 10 m/s, smaller wind turbines with lower peak output capacity and lower cut-in wind speeds can be suitable for regions where wind patterns are less intense <cit.>. Typical peak power output for onshore and offshore wind turbines are 3 MW and 10 MW with typical capacity factors (efficiency) of 40% and 60%, respectively <cit.>. The significantly higher power production capacity for offshore wind turbines offers an advantage to candidate sites located on the coasts. Fixed-bottom and floating turbines are the preferred for offshore farms on the Atlantic and Pacific coasts, respectively. Floating turbines have the additional advantage of eliminating high-frequency vibrations resulting from mechanical coupling to the sea floor, which can significantly increase the turbine's functional lifetime, and installation of a floating turbine has a significantly reduced impact on local marine life <cit.>. The costs of onshore, fixed-bottom offshore and floating offshore turbines are around 1.3, 3.25 and 5.3 $/W <cit.>.
A major challenge to full electrification is the need to deliver power to end-users reliably when generation is dependent on natural processes which fluctuate on short timescales (local weather patterns, daily cycle) and long timescales (seasons, regional climate cycles). Energy storage systems are required to eliminate dependence on non-renewables during periods of low production by renewable sources, and can be realised using mechanical, thermal, and chemical energy storage techniques. For example, pumped storage hydro-power (PSH) stations represented 99% of utility-scale energy storage in 2019, each of which has GWh-scale capacity <cit.>. While PSH stations can be used to balance load profiles on the regional scale, they can only be situated where geological constraints allow. Battery energy storage systems (BESS) are not subject to such constraints and can further be built in a distributed network near end-users, rather than in large centralised plants. However, utility-scale battery technology is still nascent, with liquid lithium-ion as the most common battery chemistry. While other designs, like lithium-sulfur, lithium-metal, and sodium-ion, can offer higher energy densities and longer lifetimes, various technical challenges must be overcome. As alternative designs are developed for the future, lithium-ion batteries can support BESS operating on the scale required for today. The world's largest BESS is located in Moss Landing, CA, and has a capacity of 1.4 GWh and can deliver 350 MW to the CAISO grid. The Edwards and Sanburn Solar and Energy Storage site, to be completed in 2023, will use 2.5 million PV modules and 110,000 lithium-ion batteries situated on 6,000 acres to produce up to 1.1 GW and store 3.32 GWh.
We rely on projections of BESS costs and capacities in the years 2040 and 2050 to appraise those associated to C^3. Reference projections for the domestic storage capacity in batteries in the years 2040 and 2050 are 120 GWh and 210 GWh, respectively <cit.>. The maximum amount of storage capacity needed to power C^3 for a 12 hour period at 150 (175) MW is 1.2 (1.4) GWh, constituting less than 1% of expected total market capacity. By 2040, hydro-pumped energy storage will constitute 20% of total storage capacity and will be relegated to storage durations of more than 12 hours. Lithium-ion battery cell lifetimes are typically on the order of 1000 cycles, and other battery chemistries have rapidly increased in lifetime in recent years, topping 600 cycles for Lithium NMC <cit.>. If a 1000 cycle lifetime is assumed for future battery technologies, and batteries would experience 300 full cycles in a year, each battery module would need to be replaced 3 times in each 10 year run period. Costs could be mitigated through battery recycling: if batteries are at minimum smelted and the valuable elements nickel and cobalt captured, 10% of the battery cost could feasibly be reclaimed. The cost of batteries designed for 10 hour storage in the years 2040 and 2050 is projected to be 125 and 100 $/kWh, respectively <cit.>. These parameters can be used to estimate the total cost of batteries for powering scenarios over the full 20 year run time.
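A short sketch (ours) of the battery bookkeeping implied by these projections; the inputs are the figures quoted above and the outputs should be read as order-of-magnitude estimates rather than costed design numbers.

```python
# Order-of-magnitude battery estimates from the quoted projections.
storage_gwh     = 1.4     # storage needed to bridge nighttime operation at 175 MW (quoted above)
cycle_life      = 1000    # assumed cycle lifetime of a battery set
cycles_per_year = 300     # assumed full cycles per year of operation
cost_per_kwh    = 125     # projected 2040 cost of 10-hour storage, $/kWh

print(cycle_life / cycles_per_year)              # ~3.3 years per battery set, i.e. a few sets per 10-year run
print(storage_gwh * 1e6 * cost_per_kwh / 1e6)    # ~175, i.e. roughly $175M per battery set
```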
Finally, cost mitigation strategies can be explored. The current compensation rate for surplus power sold back to Pacific Gas and Energy was around $525/kW/year on average from January 2022 to May 2023 <cit.>. An analysis by S&P indicates that in 2030, $55/kW/year could be generated through energy arbitrage, where energy purchased during the day can be stored and sold at night when energy prices are driven by the higher cost non-renewables <cit.>. This analysis also shows that the average cost of energy will not substantially decrease over time. Higher battery capacity would be required to capitalise on arbitrage opportunities and is therefore less appealing than selling excess energy production immediately during daytime production. An additional 150 MW of solar capacity in excess of requirements could generate $380 million. If government investment on the scale of the Production and Investment Tax Credits (PTC and ITC) outlined in the IRA were to be available during construction, the cost of batteries could be reduced by 30% and the cost of renewable power generation could be reduced by $0.0275/kWh <cit.>.
For the following analysis, a day/night cycle of 12 hours each is considered and the average power production over the course of a full day is 175 MW. The total energy storage capacity from batteries is set to provide the difference in nighttime power generation (and must be charged during the day with power generated in excess of 175 MW). Table <ref> summarises a possible design configuration using a mix of solar and wind energy.
While the composition of this energy portfolio can impact the total cost estimates, the total cost of energy infrastructure required to de-carbonize C^3 operations is approximately $1 billion over the course of 20 years of operation. It is important to note that this falls largely outside the scope of the project budget. Indeed, most of this cost will be covered by general investment by the US government in electrification of the grid. While FCC would not be able to access 550 GeV CoM energy, it is expected to require 350 MW in the 365 GeV tt̅ run configuration <cit.>. CERN receives significantly de-carbonized energy from France, where 56 nuclear reactors collectively deliver 63 GW to the grid (1.1 GW/plant on average) <cit.>. Assuming FCC operated with nuclear power alone, it would consume 30% of the power output of a single plant. A nuclear reactor today typically costs around 8 billion euros, implying that the cost of the energy infrastructure required to operate FCC sustainably is about $2.5 billion.
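The FCC comparison quoted above follows from equally simple arithmetic; the sketch below is our own re-derivation under the stated assumptions.

```python
# FCC tt-bar running supplied purely by French nuclear power: share of one plant and implied cost.
fcc_tt_power_mw = 350      # expected FCC site power at 365 GeV
plant_output_mw = 1100     # average output per French reactor (63 GW over 56 plants)
plant_cost_beur = 8.0      # typical cost of a nuclear plant today, billions of euros

share = fcc_tt_power_mw / plant_output_mw
print(round(share, 2), round(share * plant_cost_beur, 1))   # ~0.32 of a plant, ~2.5 billion
```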
The previous analysis leads to two conclusions about sustainable operation of C^3:
* The required technological innovation of solar, wind, and energy storage systems is expected to meet the site power needs for C^3 by the beginning of operations
* Market availability of these technologies will be sufficiently scaled such that they can be deployed for the proposed collider, and the associated costs borne by government investment in renewable energy will be comparable to, if not less than, those of alternative e^+e^- Higgs factory options
We would like to estimate the cost, within the budget scope, required to operate the proposed collider sustainably in a realistic scenario. A $200 million budget for renewables would support a 250 MW solar farm, fully covering the daytime needs of the facility with an average excess production of 87.5 MW that can be sold to the grid. Assuming increased capacity of domestic BESS results in negligible energy price differences between day and night through arbitrage, the facility would incur energy costs only from the additional 75 MW needed at night on average. At $0.06/kWh, this would amount to $780 million over 20 years. To effectively erase this additional energy cost, the solar farm budget can be increased to $270 million to provide twice the average site power needs. It should be emphasised that the proposed collider can achieve effective energy independence with a modest investment in solar infrastructure. Given the carbon intensity of solar, wind, nuclear, and natural gas of 11, 11, 12, and 524 gCO_2/kWh in the CAISO grid, along with the least optimistic projection of domestic renewable energy production by the US Energy Information Administration, the carbon intensity of electricity produced by the CAISO grid can be expected to fall below 125 gCO_2/kWh by 2050 <cit.>. This is driven by a doubling of solar/wind and a 25% reduction in gas in terms of total energy portfolio composition. Since half of the site power originates purely from solar, the average carbon intensity of energy consumption will be better than 68 gCO_2/kWh. This is further improved to 46 gCO_2/kWh in the high technology uptake scenario. These values are comparable to the carbon intensity in France of 38 gCO_2/kWh, which is not expected to be further reduced.
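As a rough cross-check of the numbers above, the following Python sketch reproduces the night-time energy cost and the blended carbon intensity; the inputs are the figures quoted in the text, and the 50/50 solar/grid split is an assumption of this illustration.

```python
# Back-of-the-envelope check of the night-time energy cost and the blended
# carbon intensity quoted in the text. All inputs are assumptions for illustration.

site_power_mw = 150      # nominal site power in the 250 GeV running mode
night_hours   = 12       # hours per day bought from the grid
price_per_kwh = 0.06     # $/kWh, assumed flat across day and night after arbitrage
years         = 20       # total run time considered

energy_bought_kwh = site_power_mw * 1e3 * night_hours * 365 * years
print(f"Night-time energy cost over {years} years: "
      f"${energy_bought_kwh * price_per_kwh / 1e9:.2f} B")   # ~ $0.8 B

# Blended carbon intensity: half of the consumption from on-site solar,
# half from a 2050 CAISO grid at 125 gCO2/kWh.
solar_ci, grid_ci = 11.0, 125.0
print(f"Average carbon intensity: {(solar_ci + grid_ci) / 2:.0f} gCO2/kWh")  # ~ 68
```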
§ MITIGATION STRATEGIES FOR OPERATIONS
There can be considerable emissions associated with the production of the energy required to meet site operation power requirements. This is highly dependent on the region in which the project operates; regions with highly de-carbonized electricity grids (via solar, wind, hydroelectric, and nuclear power) offer significantly reduced carbon emissions related to energy production compared with those running on non-renewable energies (gas, oil, and coal). The total emissions of each collider project are then evaluated as the product of the total amount of energy consumed and the local carbon intensity of its production.
While total de-carbonization of the electric grid by 2040 is a nominal goal, it is not assured. The 2040 projections of carbon intensity based on the stated policies scenario for Japan, China, the European Union, and the United States are roughly 150, 300, 40, and 45 t/GWh, respectively <cit.>. However, local variations in the implementation of renewable energy systems are neglected in these estimates; for example, the CERN-based colliders could take advantage of a 50-50 mix of renewable and nuclear energy. Additional mitigation strategies, such as the construction of dedicated renewable energy plants, would reduce the carbon impact of operations in other regions. This strategy has been thoroughly investigated by the Green ILC Project <cit.>. A more moderate strategy can be envisioned for the proposed collider. A 185 MW solar farm could be built with a $150 million budget <cit.>, double covering its average power requirement[This estimate considers the power optimizations in Table <ref>], such that excess power could be stored for later use at night[The additional cost of selling and purchasing energy through utility companies can be reduced through special contracts and is neglected here], allowing the facility to achieve green energy independence. The use of multi-junction photovoltaic cell fabrication techniques would improve power conversion efficiency well beyond the 30% that is common in today's cells <cit.>, allowing such a solar farm to be situated on about 5 km^2 of land <cit.>.
This estimate relies on energy storage systems supported by regional electricity grids. To better understand the feasibility of scaling all parts of energy production (which may fall under the project budget) and energy storage infrastructure (which would be funded by the US government, but would nonetheless need investment), we perform a holistic cost estimate. We first note that the energy storage capacity required to supply 150 MW continuously for 12 hours is less than 1% of the expected grid energy storage capacity in 2040 <cit.>, indicating that the US grid should be able to reasonably support collider operations at this scale using renewable energy. We assume lithium-ion batteries[Lithium-ion batteries are not considered to be viable long-term energy storage solutions; instead, technologies such as flow batteries and systems based on mechanical potential energy are favored] are the primary energy storage technology, with a lifetime of 1000 cycles, experiencing 300 cycles per year, with 10% of the battery cost reclaimed through recycling, at a base cost of 125 (100) $/kWh in 2040 (2050) <cit.>. We take the cost of energy production of solar to be $0.80/W <cit.>, while taking that of onshore, fixed-bottom offshore, and floating offshore wind turbines to be around 1.3, 3.25 and 5.3 $/W <cit.>. An energy production portfolio that provides continuous power over a 12 hour day/12 hour night period based on these technologies alone would cost approximately $1 billion. This estimate is primarily driven by the requirements of battery energy storage systems and holds for a variety of energy source mixes. This indicates a similar cost would be associated with a site located near the Pacific or Atlantic coasts, which could leverage floating and fixed-bottom turbines respectively, in the Southern US where solar would be most efficient, or proximate to large wind farms in the Midwest. A more precise cost and feasibility analysis can be performed when a candidate site is defined, as has been done for experiments operating at the South Pole, for example <cit.>. This cost analysis demonstrates that collider operations could be supported sustainably within the US within the next two decades given conservative projections of technological development.
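The following sketch (continuing the Python example above) illustrates how such a holistic estimate can be assembled from the generation and storage costs just quoted; the 70/30 solar/onshore-wind split and the assumption of roughly continuous wind output are arbitrary choices made for this illustration, and the result lands at the ~$1 billion scale quoted in the text.

```python
# Illustrative cost of a day/night-continuous 175 MW renewable portfolio,
# combining generation capital costs with battery storage.

avg_power_mw = 175.0                  # average power to be delivered around the clock
solar_frac   = 0.7                    # assumed share of the average power from solar
wind_frac    = 1.0 - solar_frac       # remainder from onshore wind

cost_per_w = {"solar": 0.80, "onshore wind": 1.30}   # $/W installed (2040 projections)

# Solar produces only ~12 h/day, so its nameplate capacity must be doubled
# to deliver its share of the 24 h average (the excess charges the batteries).
solar_capacity_w = 2.0 * solar_frac * avg_power_mw * 1e6
wind_capacity_w  = wind_frac * avg_power_mw * 1e6    # wind assumed roughly continuous

generation_cost = (solar_capacity_w * cost_per_w["solar"]
                   + wind_capacity_w * cost_per_w["onshore wind"])

# Batteries carry the solar share through a 12 h night; modules are purchased
# 6 times over 20 years (1000-cycle lifetime, 300 cycles/year), with a 10%
# recycling credit, at the 2040 price of 125 $/kWh.
storage_kwh  = solar_frac * avg_power_mw * 1e3 * 12
battery_cost = storage_kwh * 125.0 * 6 * 0.9

print(f"Generation: ${generation_cost/1e6:.0f} M, storage: ${battery_cost/1e6:.0f} M, "
      f"total: ${(generation_cost + battery_cost)/1e9:.2f} B")
```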
As a point of comparison, the power requirement of FCC would be about 30% of the output of a large nuclear plant (generating 1.1 GW on average <cit.>). At about $8 billion per facility, the cost of renewable energy infrastructure for FCC would be about $2.5 billion.
To obtain an estimate of the carbon impact of operations at future collider facilities that takes mitigation strategies into account, we first note that the carbon intensities of solar, wind, hydro, and nuclear are around 30, 15, 25 and 5 t/GWh, respectively <cit.>. These estimates have some regional variation due to differences in supply chains and local infrastructure. For instance, given the lifetime of existing nuclear plants of about 30 years, replacement or construction of entirely new facilities will be required, which might affect the overall carbon intensity. While the ultimate energy production portfolio will be different for facilities constructed in different regions, we take a common estimate of 20 t/GWh for all collider facilities in this analysis. We find this to be a reasonable estimate given that any facility can propose mitigation strategies to decouple its carbon impact from the regional average. It also reflects the expectation that clean energy infrastructure supply chains will improve over the next 20 years.
§ ANALYSIS OF TOTAL CARBON FOOTPRINT
A straightforward calculation of total energy consumption is possible using the information summarized in Table <ref>, which includes estimates of the site power P during collision mode, the annual collision time T_collisions and the total running time in years T_run for each center-of-mass energy √(s) considered. We take into account the time spent with the beam operating at full RF and cooling power outside of data-taking mode, for example for machine development, as an additional week for every 6 weeks of data-taking (i.e. +17%), represented as T_development. We take the site power requirement for the remaining period in a calendar year to be 30% of the site power requirement during data-taking (denoted by κ_down). This value is a conservative upper estimate, since without RF power and associated heat load, any accelerator can be kept cold with a small fraction of power to the cryogenics system.
Using these values, the annual energy consumed is calculated as:
E_annual = P[κ_down· T_year+(1-κ_down)(T_collisions + T_development)]
and the total energy consumption, summing over all run configurations, is
E_total=∑_r ∈ runs E_annual(r) · T_run(r)
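A minimal sketch of this bookkeeping is given below; the run-plan numbers are placeholders standing in for the entries of Table <ref>, since only the structure of the two equations above is meant to be illustrated.

```python
# Sketch of the energy bookkeeping defined by the two equations above.
# The run-plan numbers are placeholders standing in for the Table entries.

HOURS_PER_YEAR = 8760.0

def annual_energy_gwh(site_power_mw, collision_hours,
                      kappa_down=0.30, dev_fraction=1.0 / 6.0):
    """E_annual = P [ kappa*T_year + (1 - kappa)(T_collisions + T_development) ]."""
    development_hours = dev_fraction * collision_hours  # +1 week per 6 weeks of data taking
    energy_mwh = site_power_mw * (kappa_down * HOURS_PER_YEAR
                                  + (1.0 - kappa_down) * (collision_hours + development_hours))
    return energy_mwh / 1e3

# Hypothetical run plan: (site power [MW], annual collision time [h], run length [y]).
runs = [(150.0, 5000.0, 10), (175.0, 5000.0, 10)]

total_gwh = sum(annual_energy_gwh(p, t) * years for p, t, years in runs)
print(f"Total energy consumption: {total_gwh / 1e3:.2f} TWh")
```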
For the circular collider projects, FCC and CEPC, we consider separately the cumulative energy consumption of the Higgs physics runs (i.e. √(s)>240 GeV) for a focused comparison on the basis of Higgs physics reach argued in Section <ref>, but additionally include the contribution of Z-pole and WW-threshold runs which impact the climate nevertheless.
Figure <ref> shows the energy consumption for the considered collider projects. The least energy is consumed by CLIC, driven by the lowest planned run time at low energies and its marginally lower power consumption compared to the proposed collider and ILC, which are comparable. The energy consumption of CEPC is large compared to FCC because CEPC plans to collect four times the integrated luminosity at 240 GeV, with an associated tripling of the total run duration.
Figure <ref> shows the precision-weighted energy consumption for the considered collider projects, estimated by multiplying the energy consumption of Figure <ref> with the average relative precision in the last row of Table <ref>. The lowest run time, for CLIC, is now compensated by the reduced relative precision in comparison to the proposed collider and ILC, leading to overall closer precision-weighted energy consumption. Similarly, the large proposed run time for CEPC is now taken into account in conjunction with the improved precision reach, yielding a total weighted energy consumption closer to FCC.
Figure <ref> shows the associated GWP of the total energy required for operations, obtained by multiplying the total energy consumption by the respective carbon intensity. The GWP of FCC operations benefits from the de-carbonized electricity expected in France and Switzerland, despite its high total energy requirements.
Figure <ref> shows the GWP due to construction of accelerator facilities. The carbon footprint is very similar among the linear and circular colliders, and is driven primarily by the total length of the accelerator. Figure <ref> shows the total GWP from construction and operations. CLIC is the most environmentally friendly option, owing to its lead performance in operations emissions as well as its small footprint. The total GWP of the proposed collider and ILC is driven by operations, while that of CLIC, FCC, and CEPC is almost entirely driven by construction emissions. Possible reductions in the construction component could be achieved by using concrete with lower cement content than the CEM1 C40 considered in this analysis. Such cases would still leave the FCC GWP dominated by construction processes.
Finally, Figure <ref> shows the total precision-weighted GWP from construction and operations, estimated in the same way as the precision-weighted energy consumption in Figure <ref>. Given the overall similar GWP of CLIC and the proposed collider, and the superior precision reach of the latter at higher energies, the proposed collider appears to be the most environmentally friendly option when accounting for the precision-weighted total carbon footprint.
The total energy consumption is given in table <ref> for three cases:
(a) when considering the complete running scenarios of Table <ref>, which include higher √(s) runs for ILC and the proposed collider, and runs at the Z-pole and WW-threshold for CEPC and FCC;
(b) when only considering the "Higgs factory" modes of the proposed colliders, thus excluding the Z and WW runs for CEPC and FCC;
(c) and when only including the √(s)=250 GeV run for ILC and the proposed collider, since this run already provides comparable sensitivity to the Higgs couplings as the other proposed Higgs factories, as shown in Table <ref>.
The 2045 estimates for the carbon intensity in the various locations where the collider projects could be hosted are given in the 3rd row of Table <ref>, and the total carbon footprint is given in the same table for the cases considered (6th and last row). The total energy consumption and carbon footprint are also shown in Figures <ref> and <ref>.
§ CONCLUSIONS
We present the first analysis of the environmental impact of the newly proposed collider and a comparison with the other proposed facilities in terms of physics reach, energy needs and carbon footprint for both construction and operations.
The physics reach of the proposed linear and circular e^+e^- colliders has been studied extensively in the context of the US Snowmass and European Strategy processes. We zero in on the Higgs boson coupling measurement precision achievable at the proposed collider, CLIC, ILC, FCC, and CEPC and point out that they are generally similar, though linear colliders can operate at higher collision energies, allowing access to additional measurements of the Higgs boson's properties. Moreover, the use of polarization at linear facilities effectively compensates for the lower luminosity.
On this basis, the global warming potential of these facilities is compared in terms of absolute environmental impact and in terms of environmental impact per unit of physics output, obtained by a weighted average of the expected precision on Higgs coupling measurements. The operations emissions of the proposed collider could be improved through beam parameter optimization, leading to a 63 (79) MW power reduction compared to the nominal 150 (175) MW in the 250 (550) GeV running mode. Mitigation strategies using dedicated renewable energy facilities can reduce the carbon intensity of energy production to 20 t/GWh. We find that the global warming potential is driven by construction rather than by operations beyond 2040. The compact nature of linear collider facilities reduces the total volume of construction materials and opens up the option of a surface site to simplify the construction process. We conclude that linear colliders, and in particular the proposed collider, have great potential for an environmentally sustainable path forward for high energy collider facilities.
§ ADDITIONAL POINTS
Detectors operating with a short duty cycle suffer from smaller systematic uncertainties and are therefore more effective per number of Higgs bosons produced.
When assessing the energy consumption and carbon footprint of a proposed Higgs factory, one has to keep in mind the following points:
* The figure of merit when assessing the scientific output of a Higgs factory should not be the number of Higgs bosons produced per se, but rather the precision in the Physics observables of interest (particularly Higgs couplings) that can be reached for a given number of Higgs bosons produced.
* Electron (primarily) and positron (secondarily) polarization can yield an effective luminosity improvement factor for linear machines of ∼ 2.5, i.e. allowing the same precision for various Higgs couplings to be reached with ∼ 40 % of the integrated luminosity.
* Additionally, linear machines can probe higher center-of-mass energies, which offers various advantages compared to circular machines:
* At higher √(s), Higgs boson production cross section increases, enabling a more efficient production of Higgs bosons.
* At high √(s) (above ≃ 500 GeV), linear machines can probe double Higgs production via the ZHH channel, allowing for a direct measurement of the Higgs trilinear coupling λ_3.
For the electron Yukawa coupling, FCC can achieve a 𝒪(1) fractional uncertainty with the dedicated run at the Higgs mass pole, which was however not taken into account for the studies presented here.
§ ACKNOWLEDGEMENTS
The authors express their gratitude to Dan Akerib, Tom Shutt, Sridhara Dasu, Patrick Maede, and Jim Brau for their insightful discussions, which have significantly contributed to this work. The authors also extend their appreciation to Michael Peskin and Steinar Stapnes for providing feedback on the manuscript.
The work of the authors is supported by the US Department of Energy under contract DE–AC02–76SF00515.
http://arxiv.org/abs/2307.05214v1 | 20230711123549 | Theory of coherent interaction-free detection of pulses | ["John J. McCord", "Shruti Dogra", "Gheorghe Sorin Paraoanu"] | quant-ph | ["quant-ph"]
QTF Centre of Excellence, Department of Applied Physics, Aalto University, FI-00076 Aalto, Finland
Quantum physics allows an object to be detected even in the absence of photon absorption, by the use of so-called interaction-free measurements. We provide a formulation of this protocol using a three-level system, where the object to be detected is a pulse coupled resonantly into the second transition.
In the original formulation of interaction-free measurements, the absorption is associated with a projection operator onto the third state. We perform an in-depth analytical and numerical analysis of the coherent protocol, where coherent interaction between the object and the detector replaces the projective operators, resulting in
higher detection efficiencies.
We provide approximate asymptotic analytical results to support this finding. We find that our protocol reaches the Heisenberg limit when evaluating the Fisher information at small strengths of the pulses we aim to detect – in contrast to the projective protocol that can only reach the standard quantum limit.
We also demonstrate that the coherent protocol remains remarkably robust under errors in the pulse rotation phases and strengths, under relaxation and detuning, as well as for different thermalized initial states.
Theory of coherent interaction-free detection of pulses
John J. McCord, Shruti Dogra, Gheorghe Sorin Paraoanu
August 12, 2023
=======================================================
§ INTRODUCTION
Interaction-free measurements <cit.> are a type of quantum hypothesis tests whereby the presence of an object is confirmed or falsified even when the probe photons are not absorbed. As originally formulated, interaction-free measurements are based on the
observation that placing an ultrasensitive object in one arm of a Mach-Zehnder interferometer alters the output probabilities, thus allowing us to probabilistically infer its presence. This class of measurements provides a remarkable illustration of so-called negative-result measurements as first described by Renninger <cit.> and Dicke <cit.>. Furthermore, the detection efficiency can be improved by utilizing the quantum Zeno effect <cit.> through repeated “interrogations” of the object <cit.>.
Several topics in the foundations of quantum mechanics have been motivated by interaction-free measurements, such as the Hardy's paradox <cit.> – where they have been utilized to rule out local hidden variables. Others include developments in quantum thermodynamics <cit.> – where an engine is proposed that is able to do useful work on an Elitzur-Vaidman bomb without seemingly having interacted with it. Finally, interaction-free measurements can induce non-local effects between distant atoms <cit.>, while
the Zeno effect has been shown to transform a single qubit gate operation into multi-qubit entangling gates, even in non-interacting systems <cit.>. Various implementations of interaction-free measurements have been done on different experimental platforms, leading to a plethora of applications. Some examples include optical imaging, where a photosensitive object is imaged in an interaction-free manner <cit.>, counterfactual quantum cryptography, where a secret key distribution can be acquired without a particle carrying this information being transmitted through a quantum channel <cit.>, and counterfactual ghost-imaging – where ghost imaging i.e., the technique of using entangled photon pairs for detecting an opaque object with improved signal-to-noise ratio is merged with the idea of interaction-free measurements. This combined technique significantly reduces photon illumination and maintains comparable image quality of regular ghost imaging <cit.>. A related idea – combining interaction-free measurements with the concept of induced coherence – led to the realization of single-pixel quantum imaging of a structured object with undetected photons <cit.>. Other examples include counterfactual communication <cit.>, and counterfactual quantum computation <cit.>. Remarkably, these advancements have shown that information can be transmitted independent of a physical particle carrying it <cit.>. Overall, these results demonstrate that the interaction-free concept offers an unconventional yet viable avenue towards quantum advantage: tasks that manifestly cannot be achieved classically can be realized in this framework.
We recently proposed and experimentally demonstrated a novel protocol <cit.> which employs repeated coherent interrogations instead of the projective ones used in the original interaction-free concept <cit.>. This distinction is of a fundamental nature, and for clarity we will refer to the original protocol as “projective" and to ours as “coherent". We will formulate these protocols as the task of detecting the presence of a microwave pulse in a transmission line via a resonantly-activated detector realized as a three-level (qutrit) transmon. We hereafter refer to these pulses as B-pulses, which are taken close to resonance with respect to the second transition.
The connection to a specific superconducting-circuit implementation is convenient but not restrictive: indeed the concepts are general and can be readily employed in any experimental platform where a three-level system is available.
We will investigate the theoretical foundations of this protocol, providing approximate analytical results in the asymptotic limit. We will study the sensitivity of the success probability and efficiency of our coherent protocol and compare with the corresponding figures of merit of the projective protocol under a variety of realistic experimental scenarios. Our results show that coherence acts as an additional quantum resource, allowing the accumulation of information about the B-pulses under successive exposures separated by Ramsey pulses on the first transition. Moreover, the protocol proposed can be further generalized to the detection of quantized B pulses (such as photons in a single or multiple cavities).
The paper is organized as follows: in Section II, we introduce our coherent detection scheme and compare it with the standard projective scheme as often described in optical systems. We outline the description of the two hypothesis: the system evolution with only beam-splitters and the evolution with the presence of pulses we wish to detect. In Section III, we investigate the limit when the number of Ramsey sequences N is large, and subsequently explore the lower limit of B-pulse strength θ leading to sufficiently high detection efficiency. In Section IVA, we investigate how information on the presence of the pulses is acquired during each protocol by studying the success probabilities obtained when B-pulses of same strength are applied. We also investigate in Section IVB the successive probabilities of success and absorption for N = 25 Ramsey sequences when subjected to B-pulses of strength θ = π. Additionally, in Section IVC we investigate the quantum limits of each protocol by studying the Fisher information and the Fisher information of the efficiencies. In Section V, we consider several sources of error and expound on their implications for the effectiveness of our protocol. These include the effect of beam-splitter strength (Section VA), B-pulses with a variable phase (Section VB), interaction-free detection with randomly placed B-pulses (Section VC), initialization on thermal states (Section VD), effects of decoherence (Section VE), and detuned B-pulses (Section VF).
§ COHERENT INTERACTION-FREE MEASUREMENTS WITH QUTRITS
Our protocol employs repeated coherent interrogations to perform interaction-free measurements using a qutrit <cit.>. We consider a qutrit (three-level quantum system) with basis states
(|0⟩, |1⟩, |2⟩) and introduce the asymmetric Gell-Mann generators of SU(3) by σ^y_kl = -i|k⟩⟨ l| + i|l⟩⟨ k|, σ^x_kl= |k ⟩⟨ l| + |l⟩⟨ k|, with k,l ∈{0,1,2} and k < l. Our protocol is such that in certain cases it is possible to detect the presence of a series of pulses without exciting the detector into the second excited state. This is experimentally realized by trying to detect the presence of a microwave pulse in a transmission line using a transmon qutrit, which serves as a resonantly-activated detector. We require that the detector has not irreversibly absorbed the pulse at the end of the protocol, as witnessed by a non-zero occupation of the second excited state.
Moreover, we employ N Ramsey sequences with beam-splitter unitaries S(ϕ) applied to the lowest two energy levels.
S ( ϕ ) = exp [-iϕσ^y_01/2] .
or
S ( ϕ ) = 𝕀_01cosϕ/2 - i σ_01^ysinϕ/2 + |2⟩⟨ 2|,
where 𝕀_01 = |0⟩⟨ 0| + |1⟩⟨ 1| is the identity operator on the {|0⟩, |1⟩} subspace.
The microwave B-pulses to be detected are parametrized by a strength θ_j and a phase φ_j, and are represented by the unitary
B (θ_j, φ_j) = exp [-iθ_j𝐧_jσ_12/2] ,
where 𝐧_j = (cosφ_j, sinφ_j , 0) and σ_12=
(σ_12^x, σ_12^y, σ_12^z), or explicitly
B (θ_j, φ_j) = |0⟩⟨ 0| + 𝕀_12cosθ_j/2
- i(cosφ_jσ_12^x + sinφ_jσ_12^y)sinθ_j/2,
In matrix form the S and B operators read
S ( ϕ ) =
[ cosϕ/2 - sinϕ/2 0; sinϕ/2 cosϕ/2 0; 0 0 1 ],
and
B(θ_j,φ_j ) =
[ 1 0 0; 0 cosθ_j/2 -ie^-iφ_jsinθ_j/2; 0 -ie^iφ_jsinθ_j/2 cosθ_j/2 ] .
The protocol's evolution is governed by a series of j=1,N Ramsey sequences, each containing a B-pulse with arbitrary θ_j as shown in Fig. <ref>(a). In practice, these unitaries are generated by applying pulses with Hamiltonians H_01,j(t) = -iħ[Ω_01,j(t)/2]|0⟩⟨ 1| + h.c. and H_12,j(t) = ħ[Ω_12,j(t)exp(-iφ_j)/2]|1⟩⟨ 2| + h.c. resonant to the 0-1 and 1-2 transitions respectively. The beam-splitter pulses differ only by the times at which they are applied, otherwise their Rabi frequencies are identical, resulting in ϕ = ∫_-∞^∞Ω_01,j(t) dt. For the B-pulses, the Rabi frequencies in general can differ at different j's and therefore we have θ_j = ∫_-∞^∞Ω_12,j(t) dt.
At the end of the protocol, single-shot measurements (corresponding to projectors |0⟩⟨ 0|, |1⟩⟨ 1|, and |2⟩⟨ 2|) as well as
three-level state tomography can be performed.
We now outline the two main cases below.
Case 1. Absence of B-pulses
Here we study the efficiency of the protocol under a general ϕ. The usual arrangement considered in interaction-free measurements is ϕ→ϕ_N=π/(N+1) <cit.>. From Eq. (<ref>) we see that
S^N+1(ϕ_N) = exp (-i πσ_01^y/2) =-|0⟩⟨ 1| + |1⟩⟨ 0| + |2⟩⟨ 2|. This choice guarantees that for an initial state |0⟩ the resulting state after N+1 beam-splitter unitaries is |1⟩, while an initial state |1⟩ would result in a final state -|0⟩.
When no B-pulses are present, the coherent and projective protocols are identical.
Case 2. Presence of B-pulses
When B-pulses are included between each beam-splitter, the evolution of our coherent protocol with N pulses is governed by the string of unitaries
𝕌_{N} = S(ϕ) ∏_j=1^N[B(θ_j, φ_j)S(ϕ)] .
where the product is defined from right to left and the subscript { N} signifies the fact that 𝕌 is parametrized by all ϕ, θ_j, and φ_j variables from j=1 to j=N.
When starting in the state |0⟩, the final state after the application of the full sequence is 𝕌_{N} | 0 ⟩, yielding the probabilities
p_i = |⟨ i| 𝕌_{N} | 0 ⟩ |^2
for i = {0,1,2}. If the initial state is a density matrix ρ, then we have 𝕌_{N}ρ𝕌^†_{N} as the final state and the probabilities are
p_i = Tr{|i⟩⟨ i| 𝕌_{N}ρ𝕌^†_{N}} .
Table <ref> shows the resulting probabilities and the coherent interaction-free efficiency, η_ c = p_0/(p_0 + p_2), for N = 1, 2, 3, and 4 Ramsey sequences at ϕ = ϕ_N, θ = π and φ = π/2, when 𝕌_{N} acts on the initial state |0⟩.
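A minimal numerical sketch of this construction, reusing the S and B matrices defined above, is given below; for θ = π and φ = π/2 its output can be compared directly with the entries of the table.

```python
def coherent_probabilities(N, theta, varphi=np.pi / 2):
    """Final-state probabilities p_0, p_1, p_2 for N Ramsey sequences with equal B-pulses."""
    phi_N = np.pi / (N + 1)                   # optimal beam-splitter strength
    S_mat, B_mat = S(phi_N), B(theta, varphi)
    U = S_mat
    for _ in range(N):                        # U_{N} = S (B S)^N, rightmost factor acts first
        U = S_mat @ B_mat @ U
    psi = U @ np.array([1.0, 0.0, 0.0], dtype=complex)
    return np.abs(psi) ** 2

for N in (1, 2, 3, 4):
    p0, p1, p2 = coherent_probabilities(N, theta=np.pi)
    print(N, round(p0, 4), round(p2, 4), round(p0 / (p0 + p2), 4))   # eta_c in the last column
```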
We also introduce the following elements of the confusion matrix <cit.>: the positive ratio (PR) and the negative ratio (NR). For equal-strength pulses with θ_j=θ these are defined as PR(θ) = p_0(θ)/(p_0(θ) + p_1(θ)) and NR(θ) = p_1(θ)/(p_0(θ) + p_1(θ)). The quantity FPR = PR(θ = 0) is called the false positive ratio, and it characterizes the reduction in the confidence of the predictions due to the dark count probability p_0(θ = 0) (a positive detection count even in the absence of a pulse).
In contrast, for the standard projective case <cit.>
as usually implemented in optics, the POVM measurement operators after each application of the B-pulse are
P_ noabs = 𝕀_01=|0⟩⟨ 0| + |1⟩⟨ 1| ,
P_ abs = |2⟩⟨ 2|,
where the latter is the projector corresponding to an absorption event while the first describes the situation where absorption did not occur. Fig. <ref>(b) diagrammatically illustrates this detection scheme for a protocol with N steps.
Note that by including the pulse we can define the POVM measurement operators M_ abs=P_ absB and M_ noabs=P_ noabsB, satisfying the completeness property M_ abs^†M_ abs + M_ noabs^†M_ noabs = 𝕀_3, where 𝕀_3 is the 3× 3 identity matrix.
In this protocol, it is useful to define two probabilities: the probability of detection and the probability of absorption <cit.>. For N B-pulses, these probabilities can be obtained by applying Wigner's generalization of Born's rule <cit.>
p_ det = | ⟨ 0| 𝕏_{N} |0⟩|^2 ,
p_ abs = ∑_j = 1^N | ⟨ 2| B(θ_j, φ_j) 𝕏_{j-1} |0 ⟩|^2 ,
where the string of operators
𝕏_{j}=S(ϕ )∏_i=1^j[ P_ noabs B(θ_i, φ_i) S (ϕ )] ,
with the convention that the empty product equals 𝕀_3 (the 3 × 3 identity matrix), so that 𝕏_0=S(ϕ), now plays the role of the 𝕌_N unitary from the coherent case, see Eq. (<ref>).
One can readily verify that p_ det is a product of probabilities: the probability of detection on the state |0⟩ when applying S for the (N+1)th time, multiplied by the probability that the wavefunction did not collapse to |2⟩ in any of the previous N detection steps. Similarly, p_ abs is a sum of probabilities, each of them obtained as a product between the state-|2⟩ probability after applying B in step j and the probability that the wavefunction did not collapse to |2⟩ in any of the previous j-1 detection steps.
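The same bookkeeping can be written as a short numerical sketch, reusing S and B from above; the projector P_noabs keeps the unnormalized no-absorption branch, and the absorption probabilities are accumulated step by step as in the equations above.

```python
P_noabs = np.diag([1.0, 1.0, 0.0]).astype(complex)    # |0><0| + |1><1|

def projective_probabilities(N, theta, varphi=np.pi / 2):
    """p_det and p_abs of the projective protocol for N equal B-pulses."""
    phi_N = np.pi / (N + 1)
    S_mat, B_mat = S(phi_N), B(theta, varphi)
    psi = S_mat @ np.array([1.0, 0.0, 0.0], dtype=complex)   # state before the first B-pulse
    p_abs = 0.0
    for _ in range(N):
        branch = B_mat @ psi
        p_abs += np.abs(branch[2]) ** 2      # probability of collapse onto |2> at this step
        psi = S_mat @ P_noabs @ branch       # unnormalized no-absorption branch
    p_det = np.abs(psi[0]) ** 2
    return p_det, p_abs

print(projective_probabilities(1, np.pi))    # the familiar (0.25, 0.5) of the N = 1 case
```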
For mixed states, these expressions generalize immediately as
p_ det = Tr{|0⟩⟨ 0| 𝕏_{N}ρ𝕏^†_{N}} ,
p_ abs = ∑_j = 1^NTr{|2⟩⟨ 2| B(θ_j, φ_j)
𝕏_{j-1}ρ𝕏^†_{j-1} B^†(θ_j, φ_j) } ,
where ρ is the initial state.
As we will see later, in real systems under decoherence the operators S and B will also be modified accordingly.
Table <ref> shows the resulting probabilities and the projective interaction-free efficiency defined as η = p_ det/(p_ det + p_ abs), for N = 1, 2, 3, and 4, at ϕ = π/(N+1), θ = π and φ = π/2, starting with the initial state |0⟩.
Comparing the efficiencies from Table I and II, we note that there is a clear advantage of the coherent interaction-free detection protocol with respect to the projective one, with the coherent efficiency η_c already exceeding 0.999 for N=3.
Similarly to the coherent case, we can also introduce the positive and negative ratios of projective interaction-free measurements by replacing p_0 and p_1 with p_ det and 1-p_ abs-p_ det, respectively.
A few observations are in place at this point. One is that the three-state model with projection operators is able to emulate exactly the physics of an ultrasensitive object placed in one arm of a chain of Mach-Zehnder interferometers, as usually studied in quantum optics. The only difference is that in the latter case the measurement is destructive, while in our case the projector |2⟩⟨ 2| is a von Neumann non-demolition operator. However, this is not a serious issue: one can connect the |2⟩⟨ 2| detector to an instrument that simply switches off the experiment. Another way of realizing this in circuit QED, is by using a phase qubit with the states |0⟩ and |1⟩ localized in one of the wells of the washboard potential and with the state |2⟩ such that switching into the running state occurs by tunneling with some probability <cit.>.
Another observation is that in the projective case the probability p_ abs is calculated immediately after the last B-pulse while in the coherent case all probabilities are calculated after the last beam-splitter S. However, the last S acts only on the subspace {|0⟩ , |1⟩} therefore the probability of state |2⟩ remains invariant under the action of the last S. We can therefore perform a point-to-point fair comparison of the two protocols.
§ RESULTS IN THE LARGE-N LIMIT
In this section, we derive approximate expressions for the probability amplitudes when our protocol is subjected to a large number of Ramsey sequences N. We also explore the lower limit to the B-pulse strength which can still give rise to high enough interaction-free detection efficiency.
§.§ Analytical results
The coherent interaction-free protocol has been reported to yield high efficiencies when the number N of consecutive Ramsey sequences is large <cit.>. Here we present a detailed analysis of this case using analytical tools.
Let us consider the beam-splitter unitary S(ϕ_N) =exp(-iϕ_N σ_01^y/2) from Eq. (<ref>), where ϕ_N=π/(N+1) is the beam-splitter strength that presents constructive interference on state |1⟩ in the absence of B-pulses. For the B-pulse unitary we choose B(θ) = B(θ, π/2) =exp(-iθσ_12^y/2), or in other words
we take all φ_j=π/2 for simplicity (see Eq. (<ref>)).
We start with the observation that 𝕌_{N} = S (ϕ_N)[B(θ)S (ϕ_N) ]^N = [S (ϕ_N)B(θ ) ]^(N+1)B^-1(θ ). Since B(θ) does not act on the ground state, it follows that
B^-1(θ )|0⟩ = |0⟩ and therefore the final state can be obtained as
𝕌_{N}|0⟩ = [ S (ϕ_N) B(θ) ]^N+1 |0⟩ .
Next, our goal is to obtain an approximate spectral decomposition of the matrix S (ϕ_N) B(θ) in the limit ϕ_N≪ 1 and cos (θ /2) ≪ 1. The details of this calculation are presented in Appendix A. We find the eigenvalues 1, e^-iθ/2, e^i θ /2 with corresponding eigenvectors appearing as columns in the diagonalizing matrix M
M = ( [ 1 1/2ϕ_N 1/2ϕ_N; 1/4ϕ_N 2i sinθ/4e^i θ/4 -2i sinθ/4e^-i θ/4; 1/4ϕ_N θ/4 -2 sinθ/4 e^i θ/4 -2 sinθ/4e^-i θ/4 ] ) .
We can then obtain the matrix [ S (ϕ_N) B(θ) ]^N+1 as
[ S (ϕ_N) B(θ) ]^N+1 = M ·( [ 1 0 0; 0 e^-i (N+1)θ/2 0; 0 0 e^i (N+1)θ/2 ] ) ·M^-1.
Consider now the final state written in the form
c_0|0⟩ + c_1|1⟩ + c_2|2⟩ = 𝕌_{N}|0⟩. Using the results above, after some algebra we obtain
c_0 = 1 - 1/2(ϕ_N/2)^2 1/sin^2θ/4sin^2 (N+1)θ/4 ,
and
c_1 = ϕ_N/21/sinθ/4cosNθ/4sin(N+1)θ/4 ,
c_2 = ϕ_N/21/sinθ/4sinNθ/4sin(N+1)θ/4.
One can see just by inspection that the wavefunction is correctly normalized up to fourth order in ϕ_N.
The detection efficiency of the coherent protocol is given by
η_ c=p_0/p_0 + p_2 = |c_0|^2/|c_0|^2 + |c_2|^2,
which, using the results above, can be evaluated to
η_ c≈ 1- ϕ_N^2/(16 sin^2θ/4)[ cosθ/4-cos( (2N+1)θ/4) ]^2 .
This shows that the efficiency approaches 1 in an oscillatory way, exactly as observed in the numerical simulations.
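These asymptotic expressions are easy to check against the exact evolution; the sketch below (reusing the functions defined in the earlier sketches) compares the approximate amplitudes with the exact probabilities for a representative large-N case.

```python
def approx_amplitudes(N, theta):
    """Approximate c_0, c_1, c_2 from the large-N expressions above."""
    phi_N = np.pi / (N + 1)
    s4 = np.sin(theta / 4)
    c0 = 1 - 0.5 * (phi_N / 2) ** 2 * np.sin((N + 1) * theta / 4) ** 2 / s4 ** 2
    c1 = (phi_N / 2) * np.cos(N * theta / 4) * np.sin((N + 1) * theta / 4) / s4
    c2 = (phi_N / 2) * np.sin(N * theta / 4) * np.sin((N + 1) * theta / 4) / s4
    return c0, c1, c2

N, theta = 50, np.pi
exact  = coherent_probabilities(N, theta)               # from the earlier sketch
approx = [abs(c) ** 2 for c in approx_amplitudes(N, theta)]
print(exact, approx)                                    # the two should agree closely
```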
These results allow us to obtain even deeper insights into the mathematics of our protocol. In the asymptotic large-N limit and if θ≫ 1/N we can completely neglect even the first order terms in ϕ_N/2. The coefficients of the final wavefunction become c_0 ≈ 1, c_1≈ c_2≈ 0 so the protocol achieves nearly perfect efficiency. To understand why this is the case, we can calculate 𝕌_{N} = [S (ϕ_N)B(θ ) ]^(N+1)B^-1(θ ) in this limit, obtaining
𝕌_{N}≈( [ 1 O_1×2; O_2×1 ℬ(Nθ) ]),
where O_n_1× n_2 is a null matrix of dimension n_1 × n_2 and ℬ (θ) is the submatrix on the 1 – 2 subspace from Eq. (<ref>) with θ_j = θ and φ_j = π /2.
Asymptotically, the evolution is approximated as a rotation with an angle Nθ in the subspace {|1⟩ , |2⟩}. When we apply this operator to an initial state |0⟩, the state of the system remains unaltered with very high probability. This is straightforward quantitative evidence that in the coherent interaction-free protocol the state of the system at large N (mostly) does not evolve, and still one can detect the presence of a B-pulse with very high probability, which is of course an interaction-free detection.
§.§ Discussion: limits on θ
Next, we obtain the least value of B-pulse strength for which the above approximate treatment works appropriately. In general, for small B-pulse strength (e.g. θ≈ϕ_N) the coherent interaction-free detection protocol may not result in an efficient detection. We address this issue numerically by examining the probabilities of the final state as a function of the ratio θ/ϕ_N, as shown in Fig. <ref>. In this figure, the exact numerical results based on evolving the system according to Eq. (<ref>) are compared with the approximate results from Appendix A based on the treatment above. Very similar results are obtained by the use of the simpler expressions from the previous subsection.
We have checked numerically that the variation of the probabilities p_0, p_1, p_2 versus θ/ϕ_N is not very sensitive to the value of N, i.e., the probability profiles remain almost the same (as in Fig. <ref>) for any arbitrary value of N. We notice that p_0 gets close to 1 at θ/ϕ_N ≃ 4 and thereafter remains close to 1 for 4 ≤θ/ϕ_N≤ 4N. At θ/ϕ_N=4(N+1), which means θ =4π, it drops again to zero, reflecting the 4π-periodicity of the system (see also <cit.> for the experimental observation of this effect). Thus, θ≃ 4ϕ_N is the minimum value of the B-pulse strength that gives a highly efficient interaction-free detection (see the solid blue curve in Fig. <ref>). We can understand where this value comes from by examining the approximate solutions Eqs. (<ref>, <ref>, <ref>): at θ = 4ϕ_N the last sine function in these expressions becomes zero.
Further, we see that the lower and upper limits of θ/ϕ_N, i.e., 4 and 4N respectively, mark the boundaries of the plateau later discussed in Fig. 3 and observed experimentally in Fig. 6 of Ref. <cit.>, extending from θ = 4ϕ_N to θ=4 N ϕ_N = 4π-4ϕ_N. The width of the plateau is thus 4(π-2ϕ_N), which we have verified also by direct comparison with the numerical data from Fig. 3. This width is zero for N=1 (p_0 attains its maximum value close to 1 only at θ =2π and has a downward trend as θ exceeds 2π). The limits (or the width) of these plateaus of highly efficient interaction-free detection are further attributed to the periodicity of the protocol in θ with a period of 4π. In the next sections, we will be more interested in exploring the lower limit to the B-pulse strength which can give rise to near-unity interaction-free detection efficiency. It is also noteworthy that the solid curves in Fig. <ref> result from numerical simulations without considering the large-N approximation. Thus, the bounds on θ/ϕ_N obtained here represent a general characteristic of our protocol.
§ INFORMATION IN COHERENT INTERACTION-FREE DETECTION
The effect of B-pulses on the success probability and efficiency of each protocol is more thoroughly explored in this section, with the goal of providing further insights into how information on the presence of the pulses is acquired during the protocol. We begin by considering B-pulses with equal strengths and study the behavior of the success probabilities of both protocols at different B-pulse strengths θ and N Ramsey sequences. Further, we explore the successive probabilities at different N of the system evolutions for both protocols with B-pulses of strength θ = π. Finally, we provide an analysis based on Fisher information, which demonstrates that the precision at which we can determine a small θ obeys the Heisenberg scaling.
§.§ B-pulses with equal strengths
While the coherent protocol generally has a higher success probability than the projective protocol, it is useful to see just how they differ at various N's for different fixed B-pulse strengths θ.
Here we consider the success probability profiles of each protocol for various values
of N ∈ [1,25] as a function of θ as shown in
Fig. <ref>, with optimal beam-splitter strengths ϕ = ϕ_N. For a given N, all the B-pulses are of the same strength θ, varying linearly between [0,2π]. Both p_0 and p_ det are symmetrical about θ=2π, and as expected, gradually rise from 0 to a maximum value with increasing θ and tend to stay higher, forming a plateau with a noticeably wavy structure for p_0. This plateau gets wider with increasing N. The same p_0 can also be recognized as a quantitative measure of the success probability of the interaction-free measurement. Thus, the widening of the p_0 plateau (close to 1) for higher values of N allows us to conclude that setups with multiple B-pulses can perform interaction-free detection of the B-pulses with very high efficiency. Beyond a threshold θ, this efficiency becomes almost independent of θ.
This threshold θ is represented by red markers
in Fig. <ref> (a), plotted on top of the p_0
distribution as a function of N and
θ corresponding to an ideal case of identical
B-pulses. Data shown with red markers in fact correspond to
a minimum θ satisfying p_0≥ 0.85. The dashed black line represents p_0 at θ = 4ϕ_N, and the inset is the log-log plot of the populations at thresholds p_0≥ 0.25 (blue), p_0≥ 0.5 (magenta), p_0≥ 0.85 (red), and p_0≥ 0.95 (green) when considering N ∈ [25,100]. The circles are ln(θ/π) at these thresholds and the solid lines are the best fits of the form ln(aN^-1). We find that the coefficients are approximately a = 1.7, 2.2, 2.9, and 3.3 at thresholds p_0≥ 0.25, 0.5, 0.85, and 0.95, respectively. In Fig. <ref> (b), we notice that the threshold p_ det≥ 0.85 is considerably higher in θ than the corresponding threshold of the coherent protocol. In fact, at N = 2, the threshold is not even reached as can be seen from the lack of a marker in the figure.
Thus, the coherent protocol generally has higher success probabilities over a wider range of θ, even at small N.
The N^-1 scaling seen in Fig. <ref> (a) can also be obtained from Eq. (<ref>) in the following manner. A fixed value of p_0 not too close to 1 (corresponding to the chosen treshold) is obtained at relatively low values of θ by fixing the ratio θ /ϕ_N at a constant value. This immediately results in the scaling θ∼ N^-1 observed numerically. If the measurement of p_0 is utilized as a way of measuring θ, this yields Heisenberg-scaling precision. We will further confirm this result later in this Section when analyzing the Fisher information.
§.§ Successive probabilities of detection and absorption
Here we further develop insights into the coherent interaction-free and projective measurement-based protocols by looking at the detailed map of successive probabilities of detection and absorption at the end of each Ramsey sequence j when subjected to B-pulses of strength θ = π. These probabilities are denoted respectively by p_j,0 and p_j,2 for the coherent case, and p_j, det and p_j, abs for the projective case. At the end of the sequence j=N and we have, with the previous notations,
p_N,0≡ p_0, p_N,2≡ p_2, etc.
Thus, these maps show how the probability of occupation of the three levels evolve with successive j^th Ramsey sequence implementations (given N).
Fig. <ref>(a,b) presents the ground state probability plotted for j ∈ [1,N], N ∈ [1,25].
As N increases, p_j,0 tends to 1 very rapidly, as shown by the bright red in the color map, while p_ det manages to exceed 0.9 marginally for N=23. Implementing the first Ramsey sequence (j=1) results in the same values p_1,0=p_1,det
for any arbitrary N. This is due to the fact that the j=1 coherent and projective sequences do not differ in any fundamental way when performing a POVM analysis <cit.>.
In the coherent protocol, p_j,0 increases for j≥2 and then oscillates with j in the range: [0.85, 0.999], which further subsides for large N. Typically, for large N, say N=25 in the coherent protocol, the system tends to stay in the initial state |0⟩ with a very high probability (> 0.99) throughout the sequence.
Higher values of p_j, det at large N with small values of j correspond to higher ground state occupancy for the first few initial steps, which should not be mistaken for a higher probability of interaction-free detection.
Similarly, Fig. <ref>(c,d) present the probability of the second excited state p_j,2 and the probability of B-pulse absorption p_j, abs at the end of the j^th Ramsey sequence
implementation for a given N in the coherent interaction-free and projective measurement protocols, respectively. Mirroring the features of the p_j,0 map, the map of p_j,2 also exhibits a pattern of oscillations with j, where the bright blue color corresponds to p_j,2 values as low as 0.01 and the dark blue color corresponds to slightly larger values.
§.§ Fisher information of the protocols
Since our protocol is remarkably efficient it can in principle be used to provide an estimate for the B-pulse strength θ.
Here we
study the associated quantum Fisher information of our protocol and that of the projective case at θ = 0 to determine which quantum limits are reached. We imagine two situations: one in which all three probabilities are used for evaluating θ, and the other in which only two probabilities, which make up the efficiency, are used.
The Cramér-Rao bound states that
Var(θ̂) ≥1/QFI (θ),
where the variance of the parameter θ is bounded by the Fisher information of the parameter QFI(θ) <cit.>.
Moreover, the Fisher information is defined as
QFI (θ) = ∑_i=0,1,2[∂_θ (p(i|θ))]^2/p(i|θ) .
Thus, the Fisher information of the coherent protocol is
QFI_ c = (∂ p_0/∂θ)^2/p_0 + (∂ p_1/∂θ)^2/p_1 + (∂ p_2/∂θ)^2/p_2 ,
and for the projective protocol, it is
QFI_ proj = (∂ p_ det/∂θ)^2/p_ det + (∂ p_ abs/∂θ)^2/p_ abs +
[∂ (1 - p_ det - p_ abs)/∂θ]^2/(1-p_ det - p_ abs) .
The Fisher information of the efficiency of each protocol characterizes how sensitive the efficiency is with respect to a variable, e.g. θ.
Explicitly, this is
QFI_η_ (c) = (∂η_ (c)/∂θ)^2/η_ (c) + [∂ (1-η_ (c))/∂θ]^2/1- η_ (c) ,
or more compactly,
QFI_η_ (c) =
1/η_ (c)(1-η_ (c))(∂η_ (c)/∂θ)^2 .
As can be seen in Fig. <ref>, the Fisher informations QFI_ c and QFI_ proj each have maximum values at θ = 0 and θ =4π, regardless of N. In general, QFI_η and QFI_η_ c do not reach their maxima exactly at θ = 0 or 4π, and these maxima converge to θ = 0 as N →∞. This shows that the most interesting situation for determining θ with high precision occurs at small values (or values near 4π). We can also see that this should indeed be the case by examining Fig. 3a: there, for N ≫ 1 the maximum variation of p_0 – which is metrologically useful – occurs at very low values of θ.
Taking the limit of QFI_c and QFI_ proj as θ→ 0, we observe that the projective case is precisely N/2 and that for the coherent case the power law fitting is 0.42N^2.
Hence, the projective protocol reaches the standard quantum limit (SQL) and the coherent case approaches the Heisenberg limit. Similarly, QFI_η = 0.1N + 0.05 and QFI_η_ c =
0.024N^2 (for points larger than N = 35) as θ→ 0, so the SQL and Heisenberg limit are also respectively reached for the Fisher information of the efficiencies.
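A simple finite-difference estimate of these Fisher informations, built on the probability functions from the earlier sketches, can be used to reproduce the scalings quoted above; the small probe value of θ and the step size are arbitrary choices of this illustration.

```python
def fisher_information(prob_fn, N, theta, eps=1e-3):
    """Fisher information of the three-outcome distribution with respect to theta,
    estimated with central finite differences."""
    p  = np.asarray(prob_fn(N, theta))
    dp = (np.asarray(prob_fn(N, theta + eps)) - np.asarray(prob_fn(N, theta - eps))) / (2 * eps)
    mask = p > 1e-12                          # guard against vanishing outcome probabilities
    return float(np.sum(dp[mask] ** 2 / p[mask]))

def projective_outcomes(N, theta):
    p_det, p_abs = projective_probabilities(N, theta)
    return [p_det, p_abs, 1.0 - p_det - p_abs]

for N in (10, 20, 40, 80):                    # compare the growth with the N^2 and N/2 trends
    print(N,
          fisher_information(coherent_probabilities, N, theta=0.02),
          fisher_information(projective_outcomes, N, theta=0.02))
```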
For completion, we also investigate the quantum Fisher information at θ = π, where the protocol tends to be less sensitive compared to small θ, due to the formation of plateaus of probabilities (see discussion in Section IIIB and Section IVA). At θ = π, and for N ∈ [200, 10^3], QFI_proj monotonically decreases following approximately the power law 2.5N^-1, and similarly, QFI_η monotonically decreases, except at N = 1,
also following approximately the power law 2.5N^-1. QFI_c oscillates in an underdamped fashion with respect to N and converges to approximately 1.2 as N →∞. QFI_η_ c at θ = π also oscillates in an underdamped fashion and converges to approximately 0.62 as N →∞.
Next, we use the approximate expressions for the final state coefficients (c_0, c_1, c_2) obtained in the limit of large N (see Eqs. <ref>, <ref>, <ref> or Eqs. <ref>, <ref>, <ref>) and obtain the expressions for the quantum Fisher information as a function of θ and N. To this end, we again use the ratio θ /ϕ_N.
Based on the results in Subsection IIIB we know that we can approach small values of θ down to θ /ϕ_N≃ 4, and the approximate equations from Section IIIA will still be valid. Thus we can calculate analytically the Fisher informations, under the approximations θ /ϕ_N≪ N and ϕ_N ≪ 1. For
QFI_η_ c we obtain
QFI_η_ c≈4 N^2/π (θ /ϕ_N)^2[1-cos(Nπ /2 + π/4 )]^2/(θ /ϕ_N)^2 - [1-cos(Nπ /2 + π/4 )]^2.
This shows that QFI_η_ c scales as N^2 for small values of θ (of the order of ϕ_N). One can also see that this scaling does not hold if θ /ϕ_N becomes of the order of N (or in other words θ becomes comparable to π), as also seen numerically.
In the case of QFI_c we can perform a similar analysis, with the result QFI_ c∝ N^2 at large N. The final expressions are too cumbersome to be reproduced here; instead we will make some further observations based on numerical results. With increasing N, the parameter θ to be estimated decreases with N, while the variance in its estimation decreases with N^2. Further, it is seen that for an arbitrarily chosen fixed value of θ, QFI_ c saturates to a constant value for large N, which is inversely proportional to the value of θ. Fig. <ref>(d) shows the N^2 proportionality of both QFI_ c(θ = 0) and QFI_ c (θ = 4ϕ_N) along the corresponding best-fit lines. The latter was explored because 4 is the lowest value of θ/ϕ_N where the efficiency is high, as seen in Fig. <ref>. Thus, we verify that the Heisenberg limit is reached for these choices of θ.
To get some intuition of why the coherent protocol performs better than the projective one, we can examine the recursion relations in Appendix B. We can see that in the projective protocol the information about θ contained in the amplitude of state |2⟩ is erased at every application of P_ abs, and what is being measured at every step and retained in the ground- and first-excited state amplitudes is ≈cos (θ /2). In particular, for θ=π one can clearly see that each Ramsey sequence is an exact repetition of the previous sequence, since each sequence starts in state |0⟩; due to the absence of correlations between successive measurements, the scaling corresponding to the standard quantum limit is expected. In contrast, in the coherent case the information about sin (θ /2) is stored in the amplitude of state |2⟩ and then fed back into the Ramsey sequence at the next step. The evolution is unitary, therefore governed by Heisenberg scaling.
§ SOURCES OF ERRORS
In this section we investigate the sensitivity of the protocol when subjected to sources of errors. In particular, the protocol's sensitivity to beam-splitter strength is important for assessing its effect on efficiency in the subsequent evolution. We also study the sensitivity of the protocol when arbitrary phases are introduced on the B-pulses, as well as the effect of having randomly placed B-pulses, i.e., some of the B-pulses which would normally occur in the Ramsey sequence are switched off. The sensitivity of the protocol to the initial sample temperature as well as the effects of decoherence via relaxation and of detuning are also examined.
§.§ Effect of beam-splitter strength
We first consider the case θ_j = 0, φ_j =0 and analyze the protocol with respect to ϕ. The optimal choice of beam-splitter strength ϕ is π/(N+1) <cit.> for a protocol with N B-pulses, which can be seen along the principle diagonal in Fig. <ref>. This is the choice of beam-splitter strength such that only one of the two detectors in the projective protocol will click, and where there is a complete probability transfer from the initial state, i.e., ground state, to the first excited state for our coherent protocol.
Surface maps in Fig. <ref> (a,b) show the variation of the first-excited state probability p_1 and false positive ratio (FPR) as a function of N and beam-splitter angle π/(N+1) for each N ∈ [1,25].
Curiously, there are other maxima in p_1 which can be seen in Fig. <ref>.
These maxima occur after every 2(N+1) beam-splitter unitaries for a given protocol with N B-pulses. This results from the net rotation angle, i.e. 2(N+1)ϕ=2π, which after every 2(N+1) implementations brings the system back to the initial ground state with a phase of e^i π.
From Fig. <ref>(b), we see that FPR is high at low values of ϕ. In other words, it is not always advantageous to have small ϕ.
The sensitivity of the first-excited state probability p_1 to the beam-splitter strength ϕ_N ±Δϕ is shown in Fig. <ref>(c) for strength θ = 0. For θ=0 and p_2=0, p_1 and p_0 are symmetrical about ϕ_N, i.e., p_1(ϕ_N +Δϕ) = p_1(ϕ_N -Δϕ). This behavior is independent of the chosen N. We can also see that for small errors Δϕ the first derivative of p_1 is zero, which makes p_1 sensitive only to second order in Δϕ. For θ≠ 0, we have p_i(ϕ_N + Δϕ) ≠ p_i( ϕ_N - Δϕ), i ∈ {0,1,2}, and the behavior is no longer independent of N.
For the case θ =π, the projective protocol has a detection probability analytically expressed as
p_ det = [cos^2 ϕ/2]^(N+1)
and an absorption probability expressed by
p_ abs = sin^2ϕ/2∑_j=1^N[cos^2 ϕ/2]^(j-1) = sin^2ϕ/21-[cos^2ϕ/2]^N/1-cos^2ϕ/2 .
These formulas can be obtained in a straightforward way from Eq. <ref> and Eq. <ref>, and they coincide with those derived for Mach-Zehnder based experiments in quantum optics <cit.>. It is worth pausing and analyzing the meaning of these relations, as anticipated to some extent
in the comment subsequent to Eq. <ref> and Eq. <ref>. Starting in |0⟩, the system remains in this state with probability cos^2(ϕ/2) after the application of the first beam-splitter S(ϕ ). If there is no absorption on state |2⟩ after the B-pulse, it means that the second beam-splitter sees the system again in state |0⟩. After N+1 applications of the beam-splitter, the probability to find the system in the state |0⟩ is [cos (ϕ /2)] ^2(N+1). In the case of absorption, p_ abs is obtained by summing over probabilities of absorption at each application j of the B-pulse, which are given by the probability [cos (ϕ /2)] ^2(j-1) that the pulse was not absorbed in the previous j-1 steps multiplied by the probability sin^2(ϕ/2) that the system is in the state |1⟩ from which absorption to state |2⟩ is possible.
If ϕ = ϕ_N = π/(N+1), the detection probability becomes 1 in the limit of large N, which is a manifestation of localization on the state |0⟩ (suppressing the evolution in the rest of the Hilbert space) by the quantum Zeno effect. Indeed we have cos^2 (ϕ_N/2) ≈ 1 - ϕ^2_N/4 and by applying the binomial formula we obtain
p_ det≈ 1 - (N+1) ϕ_N^2/4≈ 1 - π^2/4N
and
p_ abs≈ N ϕ_N^2/4≈π^2/4N ,
which yields the efficiency η≈ p_ det.
We can now see that, in contrast to the coherent case (see Eq.(<ref>)), the scaling with N of these probabilities is slower ∼1/N, indicative of the standard quantum limit.
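The closed-form expressions and their Zeno-limit approximations are easily checked numerically; the sketch below (again using numpy) evaluates both at the optimal beam-splitter angle for a few values of N.

```python
def projective_closed_form(N, phi):
    """Closed-form p_det and p_abs for theta = pi, following the equations above."""
    c2 = np.cos(phi / 2) ** 2
    p_det = c2 ** (N + 1)
    p_abs = np.sin(phi / 2) ** 2 * (1 - c2 ** N) / (1 - c2)
    return p_det, p_abs

for N in (5, 25, 100):
    phi_N = np.pi / (N + 1)
    p_det, p_abs = projective_closed_form(N, phi_N)
    # Compare with the Zeno-limit approximations 1 - pi^2/(4N) and pi^2/(4N).
    print(N, round(p_det, 4), round(1 - np.pi ** 2 / (4 * N), 4),
          round(p_abs, 4), round(np.pi ** 2 / (4 * N), 4))
```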
In Fig. <ref>(a,b), the efficiencies resulting from the coherent protocol (η_ c) and projective protocol (η) are respectively plotted as functions of beam-splitter strength ϕ and N (similar to that of Fig. <ref>) at B-pulse strength θ=π.
In the upper triangular region where ϕ > π/(N+1) and near ϕ=π/2, η_ c is marginally higher (by ≈ 1-2%) than the optimal value of η_ c at ϕ=π/(N+1), and only for a few values of N (10, 14, 18, 21, 22, 25).
Lower values of η in Fig. <ref>(a) for ϕ > π/(N+1) are mainly the result of a higher probability of occupation of the state |1⟩, which further results in a higher probability for p_ abs.
In the lower triangular section with ϕ < π/(N+1) the efficiency reaches high values, since the beam-splitter unitaries are not capable of transferring the ground state probability to the first excited state. This results in higher p_0 and hence an apparently higher η_ c (as well as η) as per the lower triangular region of Fig. <ref>(b,a). However, one can see from Figs. <ref>
(a)(b) that the FPR increases to large values, making the protocol unusable.
Once again, we stress that ϕ=π/(N+1) is the optimal beam-splitter strength for a given N.
A direct comparison of the efficiencies η (in red) and η_ c (in blue) is presented in Fig. <ref>(c) for optimal values of the beam-splitter angle. Clearly, η_ c already exceeds 0.95 for N>5, while η remains below 0.9 even for N=25, demonstrating that the coherent interaction-free measurement protocol is considerably more efficient than the projective one.
An interesting situation is the case θ = 2π. From Eq. <ref> we can see that the matrix at this choice of θ has 1 followed by two -1's on the diagonal, and zeros on the off-diagonal (thus making the phases φ irrelevant). Classically, this is a 360° rotation that should have no effect, yet quantum-mechanically, due to the appearance of the minus signs, it has a dramatic effect. Indeed, B(θ = 2π)S(ϕ) |0⟩ = cos(ϕ/2)|0⟩ - sin(ϕ/2)|1⟩. We can see that the probability of absorption is zero! Further, after another application of S, we obtain
S(ϕ )B (θ = 2π)S(ϕ ) |0⟩ = |0⟩, therefore achieving a perfect interaction-free detection. Surprisingly, we have a situation where the efficiency of detecting a pulse that produces no absorption is maximal! Indeed, a detector based say on absorption of the pulse by a two-level system and the subsequent measurement of the excited-state probability would not be able to detect this pulse at all.
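This statement can be verified in a few lines. The sketch below (Python/NumPy, illustrative only) uses the same S and B matrices implied by the recursion relations of Appendix B; the helper names are ours.

    import numpy as np

    def S(phi):
        c, s = np.cos(phi / 2), np.sin(phi / 2)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=complex)

    def B(theta):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[1, 0, 0], [0, c, -1j * s], [0, -1j * s, c]], dtype=complex)

    ket0 = np.array([1, 0, 0], dtype=complex)
    phi = 0.7                                   # an arbitrary beam-splitter strength
    mid = B(2 * np.pi) @ (S(phi) @ ket0)
    print(abs(mid[2])**2)                       # absorption probability: 0
    final = S(phi) @ mid
    print(abs(final[0])**2)                     # back in |0> with probability 1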
§.§ B-pulses with a variable phase
Next, we consider the situation when both the B-pulse strength θ_j and phase φ_j are non-zero. Here, we investigate the efficiency of the coherent protocol at different N when subjected to various θ_j and φ_j, where j ∈ [1,N].
First, it is straightforward to verify that the results do not depend on the phase φ_j in the projective case. This can be shown immediately by examining a sequence of Ramsey pulses with the measurement operators inserted after each B-pulse. The phase appears only on the state |2⟩, and therefore it is eliminated when the non-absorptive result is obtained – that is, from the application of Eq. (<ref>), P_ abs = |0⟩⟨ 0| + |1⟩⟨ 1|.
In the coherent case however, there is a change in efficiency when either θ or the difference between the phases of consecutive B-pulses δφ≡φ_j+1 - φ_j is varied. Fig. <ref> (a) shows η_ c surface plots as a function of δφ and B-pulse strength θ at N = 2, N=5, and N=25. It is clear from these surface maps that for a wide range of δφ values, we obtain wide plateaus of high efficiencies. It is also noteworthy that small values of δφ do not cause any significant drop in the efficiency as compared to that of δφ=0. The worst case corresponds to δφ = π, where these high efficiency plateaus are significantly narrowed.
The surface maps for η_ c as a function of δφ and θ_j=θ are shown only for a few values of N, the behavior however is similar for other values of N.
The best case, i.e. wide region of high η_ c for arbitrary N corresponds to δφ=0, which is plotted as solid lines in Fig. <ref>(b).
Moreover, Fig. <ref> (b) shows cross-sections of the aforementioned surface maps at δφ =0 as well as the projective efficiency η as a function of θ. Dotted lines in Fig. <ref>(b) are the corresponding efficiency plots for the projective case, with green, red, and black colors representing cases with N=2,5, and 25 respectively. Clearly, as N increases, higher efficiencies are attained over a broader range of θ.
Next, we used the coherent efficiency η_ c as a probe for performing detailed analysis of the protocol for arbitrary B-pulses with randomly chosen strengths and phases. Fig. <ref> (c) shows the mean efficiency (η_ c^M) vs N for various choices of θ_j and φ_j with j ∈ [1,25]. The mean efficiency for each N is obtained from 10^4 repetitions, each with a realization of θ_j and φ_j.
The final probabilities and hence η_ c are independent of the B-pulse phase φ if for a given N all the B-pulses have the same phases i.e. φ_j+1=φ_j, such that δφ =φ_j+1 - φ_j = 0 where j ∈ [1,N-1]. This is verified numerically for various values of N with arbitrarily chosen values of θ_j ∈ [0,π] and φ_j=φ, where φ is chosen arbitrarily from the range [0,π].
As a first check, we took θ_j = π and δφ = 0 and reproduced the blue curve representing η_ c in Fig. <ref>(c) for different values of φ. This is represented as a solid black line in Fig. <ref> (c). Note that the solid magenta curve represents the efficiency of the projective case at θ_j = π, i.e., identical to the red curve in Fig. <ref>(c). We also numerically verified that this property is extended for arbitrary values of B-pulse strengths θ_j. This observation in fact relaxes the specifications for the B-pulse. However, as previously seen in Fig. <ref>(a), the relative values of consecutive B-pulse phases (δφ≠ 0) can significantly alter the final probability profiles, and thus η_ c.
Further, we studied η_ c^M when the phase is constant and the strengths are randomly varied such that θ_j∈ [0,π] (denoted as R_θ). Since we have established that the behavior of efficiency is not affected by a fixed value of phase, i.e. φ_j = φ, we select φ = 0, as indicated in Fig. <ref> (c). The blue dashed line in the figure shows this case for the coherent protocol, and the magenta dashed line shows the corresponding mean efficiency η^M for the projective case.
The red solid line with circular markers represents η_ c^M when θ_j = π, and the phases are randomly varied such that φ_j∈ [0,π/4] (labelled as R_φ). Clearly, η_ c^M sits near the solid black line and is thus mostly insensitive to phase in this case. However, the efficiency is lower when the range of randomly varied phase is extended such that R_φ∈ [0,π]. This is represented by the dotted red curve with circular markers in Fig. <ref> (c). Nevertheless, we conclude that small errors in the values of δφ are tolerable without much compromise in the efficiency, which makes the coherent protocol more robust.
Further, there is a marked decrease in η_ c^M when the B-pulse strengths are also random. In fact, the lowest mean efficiencies for the coherent protocol are when R_θ∈ [0, π] and R_φ∈ [0, π]. This is shown as the blue dotted line with triangular markers. Only the projective cases shown in Fig. <ref> (c) tend to be lower than this case as N becomes large. In particular, the mean projective efficiencies at R_θ∈ [0, π] and φ = 0 are consistently less than η_ c^M(R_θ, R_φ) for R_θ∈ [0, π] and R_φ∈ [0, π], and the projective efficiencies at θ_j = π and φ_j = 0 are also less than the lowest mean efficiencies of the coherent protocol after N = 8. Remarkably, this means that the coherent protocol is on average more efficient than the maximum efficiencies of the projective protocol even when subjected to random B-pulse strengths and phases in the full range [0, π].
The mean efficiencies when R_θ∈ [0, π] and R_φ∈ [0, π/4] are represented as the solid blue curve with the triangular markers, and are significantly larger than when R_θ∈ [0, π] and R_φ∈ [0, π]. Thus, as expected, the mean value η_ c^M(R_θ, R_φ) lies close to the probabilities obtained with all B-pulses of strength π and δφ=0 for large N <cit.>.
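The averaging procedure can be sketched as follows (Python/NumPy). This is only an illustration of the Monte Carlo estimate described above, assuming the coherent protocol structure of Appendix B with ϕ = π/(N+1) and η_ c = p_0/(p_0 + p_2); we use 2×10^3 repetitions rather than the 10^4 of the text, and the function names are ours.

    import numpy as np

    rng = np.random.default_rng(1)

    def S(phi):
        c, s = np.cos(phi / 2), np.sin(phi / 2)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=complex)

    def B(theta, varphi):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[1, 0, 0],
                         [0, c, -1j * np.exp(-1j * varphi) * s],
                         [0, -1j * np.exp(1j * varphi) * s, c]], dtype=complex)

    def coherent_probs(phi, thetas, varphis):
        psi = S(phi) @ np.array([1, 0, 0], dtype=complex)
        for th, ph in zip(thetas, varphis):
            psi = S(phi) @ (B(th, ph) @ psi)
        return np.abs(psi)**2                   # [p0, p1, p2]

    def mean_coherent_efficiency(N, theta_range, phi_range, reps=2000):
        phi_N = np.pi / (N + 1)
        etas = []
        for _ in range(reps):
            thetas = rng.uniform(*theta_range, size=N)
            varphis = rng.uniform(*phi_range, size=N)
            p0, _, p2 = coherent_probs(phi_N, thetas, varphis)
            etas.append(p0 / (p0 + p2))
        return np.mean(etas)

    for N in (2, 5, 10, 25):
        print(N, mean_coherent_efficiency(N, (0, np.pi), (0, np.pi)))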
§.§ Interaction-free detection with randomly placed B-pulses
In this section, we consider N consecutive Ramsey sequences
with randomly placed B-pulses. In each Ramsey sequence,
the B-pulse slot can either have a B-pulse with θ=π, i.e. maximum strength, or θ=0 (no B-pulse). In other words, this situation corresponds to arbitrarily placing maximum-strength B-pulses in the N Ramsey slots with a certain probability.
Here we consider four cases where each B-pulse slot can have a pulse with probabilities 1, 1/2, 1/4 and 1/8.
Depending upon the arrangements of the B-pulses in the full pulse sequence, the results can vary significantly.
Suppose that out of N B-pulse slots, n have B-pulses with maximum strength while N-n are vacant. The number of combinations is N!/(n!(N-n)!) where n ∈ [0,N]. The total number of combinations can reach a maximum of 10^7 at n=N/2 and N=25. Fig. <ref> shows the calculations of the Positive ratio and Negative ratio for (a) projective and (b) coherently interrogated detection schemes with different percentages of B-pulses. Different curves in fact plot the average of PR and NR values obtained from 400 repetitions with random combinations of vacant and occupied slots for the B-pulses. As shown in Fig. <ref>, curves in blue correspond to a situation with all B-pulses of strength π, which means that there is a very large flux of microwave photons resonant with the |1⟩-|2⟩ transition. Due to this large flux, whenever level |1⟩ acquires some population at the end of a beam-splitter operation, it is highly likely that our three-level system will transit to its second excited state. Despite the absorption of a fraction of photons, the positive ratio PR(θ) approaches 1, while the NR(θ) approaches 0. It is interesting to note that as n decreases, the probability of absorption of photons increases. This counterintuitive behavior is due to the abrupt and rapid decrease in the norm p_0+p_1. PR and NR are highly dependent upon the combinations; it is therefore more useful to look at their average behaviors. Consistent with these observations, it is also noteworthy that as n decreases to N/2, events leading to the absorption of photons increase, and these events increase further at large N as n decreases to N/4 and N/8, respectively.
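A sketch of the random-placement average is given below (Python/NumPy, illustrative only): each of the N slots receives a θ = π B-pulse with probability q ∈ {1, 1/2, 1/4, 1/8} and is left empty otherwise, and the outcome statistics are averaged over realizations. We assume ϕ = π/(N+1); the positive and negative ratios of the figure are then formed from the resulting detection and absorption probabilities as defined in the main text. The helper names are ours.

    import numpy as np

    rng = np.random.default_rng(2)

    def S(phi):
        c, s = np.cos(phi / 2), np.sin(phi / 2)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=complex)

    def B(theta):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[1, 0, 0], [0, c, -1j * s], [0, -1j * s, c]], dtype=complex)

    def run_sequences(thetas, phi, projective):
        """Return (ground-state probability, absorbed probability) after the pulse train."""
        psi = S(phi) @ np.array([1, 0, 0], dtype=complex)
        absorbed = 0.0
        for th in thetas:
            psi = B(th) @ psi
            if projective:
                absorbed += abs(psi[2])**2
                psi[2] = 0.0                    # absorptive measurement removes |2>
            psi = S(phi) @ psi
        if not projective:
            absorbed = abs(psi[2])**2
        return abs(psi[0])**2, absorbed

    N, reps = 25, 400
    phi_N = np.pi / (N + 1)
    for q in (1.0, 0.5, 0.25, 0.125):
        thetas_samples = np.pi * (rng.random((reps, N)) < q)   # theta = pi with probability q, else 0
        for projective in (True, False):
            stats = np.array([run_sequences(t, phi_N, projective) for t in thetas_samples])
            print(q, "projective" if projective else "coherent", stats.mean(axis=0))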
§.§ Initialization on thermal states
The initial state of a real device is sometimes not perfectly thermalized to the ground state. For a device such as the transmon, the initial state can correspond to a rather high effective temperature, of the order of 50-100 mK <cit.>.
For a general three-level system in thermal equilibrium, the density matrix is of the form
ρ = p_0|0⟩⟨ 0| + p_1|1⟩⟨ 1| + p_2|2⟩⟨ 2| ,
where the probabilities are
p_i = 1/Ze^-E_i/k_BT, i ∈{0,1,2}, E_0 = 0 ,
and the canonical partition function is
Z = ∑_iexp[-E_i/k_BT] = 1 + e^-ħω_01/k_BT + e^-ħω_02/k_BT .
By populating our initial state in accordance with Eq. <ref> using qutrit transition frequencies ω_01/(2π) = 7.20 GHz, ω_12/(2π) = 6.85 GHz, and with initial temperatures T ∈ [0, 100] mK, we see in Fig. <ref> that the efficiencies are less sensitive at lower initial temperatures, and that the coherent protocol is overall more efficient than the projective case for a given N. In fact, at the modest N = 25, the efficiency of the coherent protocol is greater than the efficiency of the projective case η at N = 250.
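The thermal occupations entering this comparison are straightforward to reproduce; below is a short sketch (Python/NumPy, illustrative) that evaluates Eq. <ref> with the qutrit frequencies quoted above, under the assumption ω_02 = ω_01 + ω_12.

    import numpy as np

    hbar = 1.054571817e-34      # J s
    kB = 1.380649e-23           # J / K

    w01 = 2 * np.pi * 7.20e9    # rad/s
    w12 = 2 * np.pi * 6.85e9
    E = np.array([0.0, hbar * w01, hbar * (w01 + w12)])   # E0 = 0; assumes w02 = w01 + w12

    def thermal_populations(T):
        if T == 0:
            return np.array([1.0, 0.0, 0.0])
        w = np.exp(-E / (kB * T))
        return w / w.sum()      # p_i = exp(-E_i / kB T) / Z

    for T in (0.03, 0.05, 0.10):                          # kelvin
        print(T, thermal_populations(T))
    # At 100 mK roughly 3% of the population sits outside |0>,
    # in line with the dark-count value quoted just below.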
The dark count probabilities across this range of initial sample temperatures are determined by p_0(θ =0)
at each of the temperatures. The dark count probabilities monotonically increase with temperature and are small across this range, less than 10^-6 until 30 mK, and reach approximately 0.031 at 100 mK. These values are the same for both the coherent and projective protocols, as the dark count probabilities are necessarily computed at θ = 0. Consequently, the dark count probabilities are also independent of N since we consistently choose ϕ = π/(N+1), i.e., ϕ_N.
§.§ Effects of decoherence
In real systems such as transmons, the action of the beam-splitters and the B-pulses
is modified due to decoherence. To account for this effect, we consider a model where the first and second levels can relax to the ground and first excited state respectively with rates Γ_10 and Γ_21 <cit.>.
The action of the beam-splitter on the density matrix is obtained from
ρ̇ = -i/ħ[H_01,j(t),ρ] + ∑_l = 0,1; k= l+1Γ_klD[σ_lk]ρ ,
while for the B-pulse we have
ρ̇ = -i/ħ[H_12,j(t),ρ] + ∑_l = 0,1; k= l+1Γ_klD[σ_lk]ρ
with H_01,j and H_12,j as introduced in Section II, and D[L]ρ = Lρ L^† - 1/2{L^†L, ρ} defining the Lindblad superoperator with jump operator L. Also note that for the transmon, direct relaxation from the second excited state to the ground state is suppressed by selection rules.
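To illustrate how such a simulation can be set up, here is a compact sketch (Python with NumPy/SciPy), not the code used for the figures: it assumes rectangular pulse envelopes with the 56 ns beam-splitter and 112 ns B-pulse durations used later in the text, builds the Liouvillian for the master equations above, and propagates the density matrix through one full sequence at θ = π. All helper names are ours.

    import numpy as np
    from scipy.linalg import expm

    # Units: rates in MHz (= 1/us), times in us, drive amplitudes in rad/us; hbar = 1.
    def ket(i):
        v = np.zeros(3, dtype=complex); v[i] = 1.0
        return v

    def liouvillian(H, collapse):
        """Column-stacked Liouvillian for drho/dt = -i[H,rho] + sum Gamma D[L]rho."""
        I = np.eye(3, dtype=complex)
        Lv = -1j * (np.kron(I, H) - np.kron(H.T, I))
        for rate, L in collapse:
            LdL = L.conj().T @ L
            Lv += rate * (np.kron(L.conj(), L)
                          - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I)))
        return Lv

    def evolve(rho, H, t, collapse):
        vec = rho.reshape(-1, order="F")
        vec = expm(liouvillian(H, collapse) * t) @ vec
        return vec.reshape(3, 3, order="F")

    Gamma10, Gamma21 = 0.1, 0.2                      # transmon-like case: Gamma21 = 2 Gamma10
    collapse = [(Gamma10, np.outer(ket(0), ket(1).conj())),
                (Gamma21, np.outer(ket(1), ket(2).conj()))]

    N, theta = 25, np.pi
    phi_N = np.pi / (N + 1)
    tau_S, tau_B = 0.056, 0.112                      # pulse durations (56 ns, 112 ns)
    # Constant drives whose decoherence-free unitaries reduce to S(phi_N) and B(theta):
    H01 = (phi_N / tau_S) / 2 * np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
    H12 = (theta / tau_B) / 2 * np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)

    rho = np.outer(ket(0), ket(0).conj())
    rho = evolve(rho, H01, tau_S, collapse)
    for _ in range(N):
        rho = evolve(rho, H12, tau_B, collapse)
        rho = evolve(rho, H01, tau_S, collapse)
    p0, p2 = rho[0, 0].real, rho[2, 2].real
    print("eta_c ~", p0 / (p0 + p2))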
In Fig. <ref>, we study the effect of various relaxation rates Γ_10, Γ_21∈ [0, 0.2] MHz on the efficiencies of the coherent (a) and projective (b) protocols at θ=π. The dashed black line in both plots corresponds to the particular case of a transmon device, where these rates are related as Γ_21 = 2Γ_10.
We see from Fig. <ref> that η_ c is consistently greater than η, where their mean values from Fig. <ref> are approximately 0.9959 and 0.9125, respectively. We also note from the slope of the contour lines, that the coherent case appears less sensitive to variation in Γ_10 compared to the projective protocol.
The dark count probabilities, i.e., the FPRs, are also reasonably low, having a maximum value of 27.4% for the worst-case scenario of a transmon with relatively large relaxation Γ_10 = 0.2 MHz.
A remarkable feature of the protocol is the robustness against decoherence acting on the 1-2 subspace. One can see from Fig. <ref>(a) that a change in Γ_21 produces a much smaller change in efficiency than a change in Γ_10 (equal-efficiency white lines are nearly vertical), which is not the case for the projective protocol. To illustrate this point, in Fig. <ref> we present p_0 (θ =π) and p_1 (θ =0) for Γ_10 = 0.1 MHz and Γ_21 = 10 MHz and N from 1 to 50. Note that Γ_21 is 100 times larger than Γ_10 and yet the protocol is usable, with the limitation coming from the increase in the dark counts (FPR) p_0 (θ =0) = 1- p_1 (θ =0) at large N. At N = 50, p_0(θ = π) = 0.977 and p_ det(θ = π) = 0.937 – whereas the dark count is p_0 (θ =0) = 1- p_1(θ = 0) = 0.263. Clearly, the p_ det of the projective protocol is more sensitive to Γ_21. This difference is even more prominent at smaller values of θ: we find numerically that as θ decreases both the p_0 (θ) and p_ det(θ) curves move to lower values while maintaining a gap between them, with the latter approaching the p_ det(θ = 0) = p_0(θ = 0) line faster. Also, the duration of the B-pulses used is 112 ns, while that of beam-splitter pulses is 56 ns, therefore 50 pulses take 8.5 μs, much longer than the relaxation time Γ_21^-1 = 100 ns of the state |2⟩. This can be understood from the fact that the B-pulse and the relaxation act jointly as a disturbance of the interferometry pattern. It also shows that, in order to apply our protocol, one only needs a good two-level system: even if the third state is affected by large decoherence, the protocol will still work. Also note that, even if the FPR is affected in a relatively stronger way by the relaxation in the 0-1 subspace due to the increase in the dark count probability p_0(θ=0), this detrimental effect is still slightly weaker than what one would expect from a naive estimation of probabilities decaying exponentially at a rate Γ_10 during the total duration of the protocol.
§.§ Detuned B-pulses
We now examine the effect of a detuning δ of the B-pulse with respect to the second transition. For simplicity we consider identical pulses, implemented by the Hamiltonian H_12(t) = ħ[Ω_12(t)exp(-iφ)/2]|1⟩⟨ 2| + h.c. - ħδ |2⟩⟨ 2|. With the usual notation θ = ∫_-∞^∞Ω_12(t) dt and with χ = δτ, where τ is the duration of the B-pulse, we find that B(θ,φ, χ) takes the form
B(θ,φ, χ) =
( [ 1 O_1×2; O_2×1 e^i χ/2ℬ ]),
where again O_n1 × n2 is the null matrix of dimension n_1 × n_2 and the submatrix ℬ has elements
ℬ_11 = ℬ_22^* = cos(√(θ^2 + χ^2)/2) - i (χ/√(θ^2 + χ^2)) sin(√(θ^2 + χ^2)/2),
and
ℬ_12 = -ℬ_21^* = - i (θ e^-iφ/√(θ^2 + χ^2)) sin(√(θ^2 + χ^2)/2).
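The detuned B-pulse matrix can be written down directly; below is a short sketch (Python/NumPy, illustrative only, with our own function names) that also checks unitarity and the resonant limit χ → 0.

    import numpy as np

    def B_detuned(theta, varphi, chi):
        W = np.sqrt(theta**2 + chi**2)
        b11 = np.cos(W / 2) - 1j * (chi / W) * np.sin(W / 2)
        b12 = -1j * (theta / W) * np.exp(-1j * varphi) * np.sin(W / 2)
        sub = np.array([[b11, b12], [-np.conj(b12), np.conj(b11)]])
        out = np.eye(3, dtype=complex)
        out[1:, 1:] = np.exp(1j * chi / 2) * sub
        return out

    Bd = B_detuned(np.pi / 2, 0.3, 0.4)
    print(np.allclose(Bd.conj().T @ Bd, np.eye(3)))          # unitary

    # chi -> 0 recovers the resonant B-pulse submatrix
    theta, varphi = np.pi / 2, 0.3
    resonant = np.array([[np.cos(theta / 2), -1j * np.exp(-1j * varphi) * np.sin(theta / 2)],
                         [-1j * np.exp(1j * varphi) * np.sin(theta / 2), np.cos(theta / 2)]])
    print(np.allclose(B_detuned(theta, varphi, 1e-9)[1:, 1:], resonant))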
In Fig. <ref> (a) and (b) we present the results of simulating the protocol up to N=25 for θ = π /2.
Remarkably, for the coherent case as N gets larger, the B-pulse can be detected even for relatively large values of χ. In other words, the small effect on the interference pattern at small values of N gets amplified at larger N. In contrast, this effect is not so prominent for the projective case. The detection bandwidth of p_0 appears to linearly increase symmetrically about χ = 0 producing a fan-out structure, whereas p_ det has less defined features and lower values.
To show how dramatically different this situation is from the two-level case, consider what would happen if we aim to detect the pulse by measuring the off-resonant Rabi oscillation produced by a pulse ℬ acting N times on an initial state with maximum population on one of the levels. Fig. <ref> (c) shows the population on the other level which can be used as a detection signal and is explicitly
(θ^2/(θ^2 + χ^2)) sin^2(N√(θ^2 + χ^2)/2) .
We can see that the detection bandwidth does not increase with N.
§ CONCLUSIONS
We have investigated a protocol for interaction-free measurements in a three-level system that uses coherent unitary evolution instead of projective measurements. We found that the coherent scheme is generally more efficient than the projective protocol, and we derived asymptotic analytical results that demonstrate conclusively the existence of this enhancement. When considering the large N limit, we determined the minimum value of B-pulse strength which yields optimal success probability and efficiency to be approximately four times the beam splitter strength. From the analysis of Fisher information, we found that for weak B-pulses our coherent interaction-free detection scheme reaches the Heisenberg limit while the projective scheme may only reach the standard quantum limit. We have explored numerically the sensitivity of our coherent interaction-free detection scheme under various imperfections and realistic conditions and compared it with the projective one. We find that the coherent protocol remains robust under experimentally-relevant variations in beam-splitter strengths, temperature, decoherence, and detuning errors. Our results open up a new route towards quantum advantage by proposing a task that cannot be achieved classically and by using coherence as a quantum resource to achieve it efficiently.
We acknowledge financial support from the Finnish Center of Excellence in Quantum Technology QTF (Projects No. 312296, No. 336810, No. 352925) of the Academy of Finland and from Business Finland QuTI (Decision No. 41419/31/2020).
§ DETAILS ABOUT THE DERIVATION OF ANALYTICAL RESULTS IN THE LARGE-N LIMIT
Here we give more details about the diagonalization of the operator S(ϕ_N)B(θ). We expand det[S(ϕ_N)B(θ)-λ𝕀_3] in powers of ϕ/2 and retain terms up to second order, obtaining
det[S(ϕ_N)B(θ)-λ𝕀_3] ≈ (1-λ)[λ^2 - 2λcos(θ/2) + 1 - (1/2)(ϕ_N/2)^2(1-λcos(θ/2))] - (1/2)(ϕ_N/2)^2[λ^2 - 2λcos(θ/2) + 1] + (ϕ_N/2)^2[1 - λcos(θ/2)].
Next, we neglect terms of the type (ϕ_N/2)^2cos(θ/2). Note that this is a better approximation than just working around θ≈π, allowing us to retain cos(θ/2) in the expression above whenever it does not get multiplied by the small factor (ϕ_N/2)^2.
In this approximation, after some algebra we obtain the eigenvalues λ_0 = 1, λ_± = ±exp (± i θ /2) with corresponding eigenvectors
|v_0 ⟩= ( [ 1; tan(ϕ_N/4); -i tan(ϕ_N/4)cot(θ/4) ]),
|v_±⟩ = ( [ sin (ϕ_N/2); a ∓ ib; ∓ ia - b ]), where a = cos (θ /2) cos (ϕ_N/2) -1 and b = sin (θ /2) cos (ϕ_N /2).
Note that |v_±⟩ = |v_∓^*⟩.
We get
[ S (ϕ_N) B(θ) ]^N+1 = M ·( [ 1 0 0; 0 e^-i (N+1)θ/2 0; 0 0 e^i (N+1)θ/2 ] ) ·M^-1,
where
M = ( [ 1 sin(ϕ_N /2) sin(ϕ_N /2); tan(ϕ_N/4) a + i b a - ib; -i tan(ϕ_N/4)cot(θ/4) ia - b -i a -b ] ) .
Using the above expressions, we obtain an approximate final state (c_0 |0⟩ + c_1 |1⟩ + c_2 |2⟩) of the three-level system,
c_0 = (1/𝒩)[ (a^2 + b^2) sin(θ/4) + tan(ϕ_N/4) sin(ϕ_N/2) ( a sin((2N + 1)θ/4) + b cos((2N + 1)θ/4) ) ] ,
and
c_1 = (2 (a^2 + b^2)/𝒩) tan(ϕ_N/4) sin((N + 1)θ/4) cos(Nθ/4),
c_2 = (2 (a^2 + b^2)/𝒩) tan(ϕ_N/4) sin((N + 1)θ/4) sin(Nθ/4),
where
𝒩 = (a^2+ b^2) sin(θ/4) - tan(ϕ_N/4) sin (ϕ_N /2) [ a sin(θ/4) - b cos(θ/4) ].
Note that here we have approximated sin(ϕ_N /2) ≈ϕ_N /2 and cos(ϕ_N /2) ≈ 1 - (ϕ_N /2)^2/2 when calculating the eigenvalues, but we have chosen to keep the full trigonometric expressions
for the eigenvectors (and subsequently in the expressions for the matrix M and for the coefficients c_0, c_1, and c_2). Compared to the case in the main text, where everything has been Taylor expanded in ϕ_N/2,
this leads to a slightly better approximation, especially at low values of θ, albeit at the expense of more complicated analytical expressions.
The probability amplitudes Eqs. <ref>, <ref>, <ref> agree well with the probability amplitudes derived in the main text, i.e. Eqs. <ref>, <ref>, <ref>. For instance, at θ = π and N = 5, these are c_0 = 0.933 (0.931), c_1 = 0.254 (0.262), and c_2 = 0.254 (0.262), where the values in parenthesis correspond to Eqs. <ref>, <ref>, <ref>, respectively. At θ = π and N = 25, these are c_0 = 0.996 (0.996), c_1= 0.0603 (0.0604), and c_2 = 0.0603 (0.0604).
The coherent detection efficiency of the protocol is given by
η_ c=p_0/p_0 + p_2 = |c_0|^2/|c_0|^2 + |c_2|^2,
which, at θ=π, is approximated to be
η_ c (θ=π) = 1 - (ϕ_N^2/16)[1-√(2)cos( Nπ/2 + π/4) ]^2 .
This is in agreement with the results in the main text.
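The quoted numbers can also be compared against an exact numerical evaluation of [S(ϕ_N)B(θ)]^(N+1)|0⟩. The sketch below (Python/NumPy, illustrative, with our own helper names) does this; small deviations from the approximate analytical values above are expected.

    import numpy as np

    def S(phi):
        c, s = np.cos(phi / 2), np.sin(phi / 2)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=complex)

    def B(theta):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[1, 0, 0], [0, c, -1j * s], [0, -1j * s, c]], dtype=complex)

    def exact_amplitudes(N, theta):
        phi_N = np.pi / (N + 1)
        U = np.linalg.matrix_power(S(phi_N) @ B(theta), N + 1)
        return U @ np.array([1, 0, 0], dtype=complex)

    for N in (5, 25):
        c = exact_amplitudes(N, np.pi)
        print(N, np.round(np.abs(c), 4))     # compare with |c_0|, |c_1|, |c_2| quoted above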
§ RECURSION RELATIONS
In this appendix, we detail how the state at each step is related recursively to the previous Ramsey sequence.
Let us consider N Ramsey sequences with the beam-splitter strength ϕ, B-pulse strength θ_j and phase φ_j, where j ∈ [1,N].
For the projective case, the state after the application of j sequences is
S(ϕ) ∏_i=1^j[P_ absB(θ_i, φ_i)S(ϕ)]|0⟩.
Let us denote this state generically by c_j,0|0⟩ + c_j,1|1⟩ + c_j,2|2⟩.
Therefore the (unnormalized) probability amplitudes are recursively related to the subsequent values at j+1 as follows:
c_j+1,0 = cosϕ/2c_j,0 - sinϕ/2cosθ_j+1/2 c_j,1 ,
c_j+1,1 = sinϕ/2 c_j,0 + cosϕ/2cosθ_j+1/2 c_j,1 ,
c_j+1,2 = c_j,2 = 0 .
For the coherent case, the state after applying j Ramsey sequences is given by S(ϕ) ∏_i=1^j[B(θ_i, φ_i)S(ϕ)]|0⟩.
Let us similarly denote this wavefunction as c_j,0|0⟩ + c_j,1|1⟩ + c_j,2|2⟩. Then the probability amplitudes c_j,0, c_j,1, and c_j,2
satisfy the following recursion relations
c_j+1,0 = cosϕ/2c_j,0
- sinϕ/2cosθ_j+1/2c_j,1
+ ie^-iφ_j+1sinϕ/2sinθ_j+1/2c_j,2,
c_j+1,1 = sinϕ/2c_j,0 + cosϕ/2cosθ_j+1/2c_j,1
- ie^-iφ_j+1cosϕ/2sinθ_j+1/2c_j,2,
c_j+1,2 = -ie^iφ_j+1sinθ_j+1/2c_j,1 + cosθ_j+1/2c_j,2 .
With these notations, we also have at the end of the sequences that c_0≡ c_N,0, c_1≡ c_N,1, c_2≡ c_N,2, to make the connection with the previous usage of coefficients c_0, c_1, and c_2.
In both cases we can now see the mechanism by which the probability corresponding to the ground state increases under successive sequences. Indeed, if at some j we have |c_j,0| ≈ 1 (and consequently |c_j,1| ≪ 1, |c_j,2| ≪ 1), then in the next step c_j+1,1 acquires a contribution sin (ϕ /2) c_j,0, which is very small since ϕ≪ 1. It also acquires a contribution
from the very small previous values c_j,1 and c_j,2 (in the coherent case). In contrast, c_j+1,0 acquires a contribution cos (ϕ /2)c_j,0≈ 1, therefore remaining the dominant probability amplitude. At the end of the sequence and for N ≫ 1 the state will be |0⟩, in agreement with the observations from Section III A related to Eq. (<ref>).
The numerical simulations of the probabilities of success (p_0(j,N), p_ det(j,N)) and of absorption (p_2(j,N), p_ abs(j,N)) shown in Fig. <ref> are obtained directly as the absolute squares of the corresponding complex coefficients discussed above.
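As a consistency check, the recursion relations above can be iterated directly and compared with the equivalent matrix product. A minimal sketch for the coherent case (Python/NumPy, illustrative only, with our own helper names):

    import numpy as np

    rng = np.random.default_rng(3)

    def S(phi):
        c, s = np.cos(phi / 2), np.sin(phi / 2)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=complex)

    def B(theta, varphi):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[1, 0, 0],
                         [0, c, -1j * np.exp(-1j * varphi) * s],
                         [0, -1j * np.exp(1j * varphi) * s, c]], dtype=complex)

    def coherent_recursion(phi, thetas, varphis):
        # state after the first beam-splitter: c_{0,0}, c_{0,1}, c_{0,2}
        c0, c1, c2 = np.cos(phi / 2), np.sin(phi / 2), 0.0 + 0.0j
        for th, ph in zip(thetas, varphis):
            C, Sn = np.cos(th / 2), np.sin(th / 2)
            d0 = np.cos(phi/2)*c0 - np.sin(phi/2)*C*c1 + 1j*np.exp(-1j*ph)*np.sin(phi/2)*Sn*c2
            d1 = np.sin(phi/2)*c0 + np.cos(phi/2)*C*c1 - 1j*np.exp(-1j*ph)*np.cos(phi/2)*Sn*c2
            d2 = -1j*np.exp(1j*ph)*Sn*c1 + C*c2
            c0, c1, c2 = d0, d1, d2
        return np.array([c0, c1, c2])

    N = 10
    phi_N = np.pi / (N + 1)
    thetas, varphis = rng.uniform(0, np.pi, N), rng.uniform(0, np.pi, N)

    psi = S(phi_N) @ np.array([1, 0, 0], dtype=complex)      # direct matrix product
    for th, ph in zip(thetas, varphis):
        psi = S(phi_N) @ (B(th, ph) @ psi)

    print(np.allclose(coherent_recursion(phi_N, thetas, varphis), psi))   # True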
[Elitzur and Vaidman(1993)] A. C. Elitzur and L. Vaidman, Quantum mechanical interaction-free measurements, Foundations of Physics 23, 987–997 (1993). doi:10.1007/BF00736012
[Renninger(1953)] M. Renninger, Zum Wellen-Korpuskel-Dualismus, Zeitschrift für Physik 136, 251 (1953). doi:10.1007/BF01325679
[Dicke(1981)] R. H. Dicke, Interaction-free quantum measurements: A paradox?, American Journal of Physics 49, 925–930 (1981). doi:10.1119/1.12592
[Peres(1980)] A. Peres, Zeno paradox in quantum theory, American Journal of Physics 48, 931 (1980). doi:10.1119/1.12204
[Kwiat et al.(1995)] P. Kwiat, H. Weinfurter, T. Herzog, A. Zeilinger, and M. A. Kasevich, Interaction-free measurement, Physical Review Letters 74, 4763 (1995). doi:10.1103/PhysRevLett.74.4763
[Kwiat et al.(1999)] P. G. Kwiat, A. G. White, J. R. Mitchell, O. Nairz, G. Weihs, H. Weinfurter, and A. Zeilinger, High-efficiency quantum interrogation measurements via the quantum Zeno effect, Physical Review Letters 83, 4725–4728 (1999). doi:10.1103/PhysRevLett.83.4725
[Ma et al.(2014)] X.-s. Ma, X. Guo, C. Schuck, K. Y. Fong, L. Jiang, and H. X. Tang, On-chip interaction-free measurements via the quantum Zeno effect, Physical Review A 90, 042109 (2014). doi:10.1103/PhysRevA.90.042109
[Peise et al.(2015)] J. Peise, B. Lücke, L. Pezzé, F. Deuretzbacher, W. Ertmer, J. Arlt, A. Smerzi, L. Santos, and C. Klempt, Interaction-free measurements by quantum Zeno stabilization of ultracold atoms, Nature Communications 6, 6811 (2015). doi:10.1038/ncomms7811
[Hardy(1992)] L. Hardy, Quantum mechanics, local realistic theories, and Lorentz-invariant realistic theories, Physical Review Letters 68, 2981–2984 (1992). doi:10.1103/PhysRevLett.68.2981
[Elouard et al.(2020)] C. Elouard, M. Waegell, B. Huard, and A. N. Jordan, An interaction-free quantum measurement-driven engine, Foundations of Physics 50, 1294–1314 (2020). doi:10.1007/s10701-020-00381-1
[Aharonov et al.(2018)] Y. Aharonov, E. Cohen, A. C. Elitzur, and L. Smolin, Interaction-free effects between distant atoms, Foundations of Physics 48, 1–16 (2018). doi:10.1007/s10701-017-0127-y
[Blumenthal et al.(2022)] E. Blumenthal, C. Mor, A. A. Diringer, L. S. Martin, P. Lewalle, D. Burgarth, K. B. Whaley, and S. Hacohen-Gourgy, Demonstration of universal control between non-interacting qubits using the quantum Zeno effect, npj Quantum Information 8, 88 (2022). doi:10.1038/s41534-022-00594-4
[Burgarth et al.(2014)] D. K. Burgarth, P. Facchi, V. Giovannetti, H. Nakazato, S. Pascazio, and K. Yuasa, Exponential rise of dynamical complexity in quantum computing through projections, Nature Communications 5, 5173 (2014). doi:10.1038/ncomms6173
[White et al.(1998)] A. G. White, J. R. Mitchell, O. Nairz, and P. G. Kwiat, "Interaction-free" imaging, Physical Review A 58, 605–613 (1998). doi:10.1103/PhysRevA.58.605
[Noh(2009)] T.-G. Noh, Counterfactual quantum cryptography, Physical Review Letters 103, 230501 (2009). doi:10.1103/PhysRevLett.103.230501
[Li et al.(2020)] Z.-H. Li, L. Wang, J. Xu, Y. Yang, M. Al-Amri, and M. S. Zubairy, Counterfactual Trojan horse attack, Physical Review A 101, 022336 (2020). doi:10.1103/PhysRevA.101.022336
[Zhang et al.(2019)] Y. Zhang, A. Sit, F. Bouchard, H. Larocque, F. Grenapin, E. Cohen, A. C. Elitzur, J. L. Harden, R. W. Boyd, and E. Karimi, Interaction-free ghost-imaging of structured objects, Optics Express 27, 2212–2224 (2019). doi:10.1364/OE.27.002212
[Hance and Rarity(2021)] J. R. Hance and J. Rarity, Counterfactual ghost imaging, npj Quantum Information 7, 88 (2021). doi:10.1038/s41534-021-00411-4
[Yang et al.(2023)] Y. Yang, H. Liang, X. Xu, L. Zhang, S. Zhu, and X.-s. Ma, Interaction-free, single-pixel quantum imaging with undetected photons, npj Quantum Information 9, 2 (2023). doi:10.1038/s41534-022-00673-6
[Salih et al.(2013)] H. Salih, Z.-H. Li, M. Al-Amri, and M. S. Zubairy, Protocol for direct counterfactual quantum communication, Physical Review Letters 110, 170502 (2013). doi:10.1103/PhysRevLett.110.170502
[Vaidman(2015)] L. Vaidman, Counterfactuality of 'counterfactual' communication, Journal of Physics A: Mathematical and Theoretical 48, 465303 (2015). doi:10.1088/1751-8113/48/46/465303
[Cao et al.(2017)] Y. Cao, Y.-H. Li, Z. Cao, J. Yin, Y.-A. Chen, H.-L. Yin, T.-Y. Chen, X. Ma, C.-Z. Peng, and J.-W. Pan, Direct counterfactual communication via quantum Zeno effect, Proceedings of the National Academy of Sciences 114, 4920–4924 (2017). doi:10.1073/pnas.1614560114
[Aharonov and Vaidman(2019)] Y. Aharonov and L. Vaidman, Modification of counterfactual communication protocols that eliminates weak particle traces, Physical Review A 99, 010103 (2019). doi:10.1103/PhysRevA.99.010103
[Calafell et al.(2019)] I. A. Calafell, T. Strömberg, D. R. M. Arvidsson-Shukur, L. A. Rozema, V. Saggio, C. Greganti, N. C. Harris, M. Prabhu, J. Carolan, M. Hochberg, et al., Trace-free counterfactual communication with a nanophotonic processor, npj Quantum Information 5, 61 (2019). doi:10.1038/s41534-019-0179-2
[Aharonov et al.(2021)] Y. Aharonov, E. Cohen, and S. Popescu, A dynamical quantum Cheshire cat effect and implications for counterfactual communication, Nature Communications 12, 4770 (2021). doi:10.1038/s41467-021-24933-9
[Hosten et al.(2006)] O. Hosten, M. T. Rakher, J. T. Barreiro, N. A. Peters, and P. G. Kwiat, Counterfactual quantum computation through quantum interrogation, Nature 439, 949 (2006). doi:10.1038/nature04523
[Dogra et al.(2022)] S. Dogra, J. J. McCord, and G. S. Paraoanu, Coherent interaction-free detection of microwave pulses with a superconducting circuit, Nature Communications 13, 7528 (2022). doi:10.1038/s41467-022-35049-z
[Wigner(1963)] E. P. Wigner, The problem of measurement, American Journal of Physics 31, 6–15 (1963). doi:10.1119/1.1969254
[Paraoanu(2006)] G. S. Paraoanu, Interaction-free measurements with superconducting qubits, Physical Review Letters 97, 180406 (2006). doi:10.1103/PhysRevLett.97.180406
[Wander et al.(2021)] A. Wander, E. Cohen, and L. Vaidman, Three approaches for analyzing the counterfactuality of counterfactual protocols, Physical Review A 104, 012610 (2021). doi:10.1103/PhysRevA.104.012610
[Sultanov et al.(2021)] A. Sultanov, M. Kuzmanović, A. V. Lebedev, and G. S. Paraoanu, Protocol for temperature sensing using a three-level transmon circuit, Applied Physics Letters 119, 144002 (2021). doi:10.1063/5.0065224
[Li et al.(2012)] J. Li, M. A. Sillanpää, G. S. Paraoanu, and P. J. Hakonen, Pure dephasing in a superconducting three-level system, Journal of Physics: Conference Series 400, 042039 (2012). doi:10.1088/1742-6596/400/4/042039
[Tempel and Aspuru-Guzik(2011)] D. G. Tempel and A. Aspuru-Guzik, Relaxation and dephasing in open quantum systems time-dependent density functional theory: Properties of exact functionals from an exactly-solvable model system, Chemical Physics 391, 130–142 (2011). doi:10.1016/j.chemphys.2011.03.014
|
http://arxiv.org/abs/2307.04016v1 | 20230708171122 | Cellular LTE and Solar Energy Harvesting for Long-Term, Reliable Urban Sensor Networks: Challenges and Opportunities | ["Alex Cabral", "Vaishnavi Ranganathan", "Jim Waldo"] | cs.NI | ["cs.NI"] |
Cellular LTE and Solar Energy Harvesting for Long-Term, Reliable Urban Sensor Networks: Challenges and Opportunities
Alex Cabral, Vaishnavi Ranganathan, Jim Waldo
August 12, 2023
========================================================================================================
§ INTRODUCTION
As the global urban population continues to grow, cities are increasingly interested in monitoring urban processes such as vehicular traffic, and public health and environmental harms including air pollution and noise, to help cities grow in a healthy and sustainable fashion <cit.>. The falling cost of sensing infrastructure and recent digital twin capabilities have encouraged city officials, researchers, and urban residents to use large-scale, low-cost sensor networks to monitor hyperlocal phenomena, inform policy and planning decisions, and collect data to support the transition to smart cities <cit.>.
We identify that, to be successful, a smart city network must be:
* reliable: the network should continue to operate and transmit data over long periods of time and across the city to ensure equitable node distribution <cit.>
* scalable: it should be easy to add/replace nodes within the network at any new location in the city <cit.>
* easy to maintain: nodes should be outfitted with hardware and firmware that minimize the need for in-person maintenance <cit.>
* real-time: data must be transmitted as quickly as possible, particularly for applications such as emergency services <cit.>, and the network must be monitored in real-time for maintenance <cit.>
* low-cost: by using existing infrastructure and services, the network can avoid added costs in installation and maintenance <cit.>
We determine that two key features of an urban sensor network's design can help to make the network fit within the aforementioned criteria. The first is connectivity, which is essential for data transmission, real-time node monitoring, and software updates. The second is power, which provides for reliable operation and data collection. The decisions that cities and network designers make in these two areas have a direct and significant impact on the criteria for a successful smart city network. For example, an urban sensor network that uses a low-power wide-area network (LPWAN) for connectivity may not satisfy the criterion of low cost because the backhaul infrastructure required, although low in per-unit cost, quickly becomes expensive when considering the number of cells required for a large, dense sensor network <cit.>. Similarly, a smart city network that relies on wired power may not be scalable, as nodes will be limited to locations that already have wired mains <cit.> and will involve additional installation and maintenance cost.
Based on a review of prior urban sensor network deployments and our experience working on a large-scale sensor network, we establish that LTE networks and solar panels are the appropriate connectivity and power choice for most urban sensor networks given the available options and necessary criteria. Although LTE performance for mobile communication in urban areas is well-researched <cit.>, the performance of IoT-specific networks when implemented in a city-scale long-term sensor network deployment is yet to be characterized. Solar power in urban sensor networks has also been evaluated on a small scale <cit.>, but not in a large-scale long-term deployment. Moreover, there are no established guidelines that can ensure reliable performance for future deployments of such large-scale LTE-connected, solar-powered sensor networks. Finally, researchers have not looked into the overlap between technical issues that arise in LTE connectivity or solar power and the socioeconomic factors that make up many “sensor deserts" <cit.>, or areas that lack nodes in cities with sensor networks.
In this work we describe the design and analyze the connectivity and power performance of a stationary 118-node LTE-M connected, solar-powered sensor network deployed for one year in Chicago, Illinois. We find that 11 of the 118 original node locations could not support LTE connectivity, despite all FCC and network provider connectivity maps indicating otherwise. A small number of cell towers and node locations are disproportionately affected by significantly delayed readings, and 44 of the 118 nodes experienced issues charging in the winter months. Furthermore, we discover that connectivity and power related issues are not equitably spread around the city, but rather are more prominent in areas that are classified as socioeconomically disadvantaged and have a larger racial minority population.
Our primary contribution is an in-depth analysis of a long-term real-world deployment assessing the feasibility and reliability of a large-scale LTE-connected and solar-powered urban sensor network. Additional contributions include: 1) highlighting the overlap between technical challenges in urban sensor networks and socioeconomic inequality, 2) revealing the inherent challenges in relying upon open data sources that are commonly used to predict connectivity and power availability for urban sensor network deployments, and 3) identifying strengths and weaknesses to define future research directions in energy harvesting systems and equitable network infrastructure deployments to ensure the just future of smart city networks.
This paper is structured as follows: Section 2 offers an overview of Related Works; Section 3 highlights why the city of Chicago is a useful case study for urban sensor networks; Section 4 highlights the design of the sensor network and datasets used; Section 5 discusses the connectivity of the sensor network, including the hardware, network carrier information, and insights from the year-long deployment; Section 6 details the powering of the sensor network, including the hardware, energy management techniques, and insights from the deployment; Section 7 provides a discussion, focusing on the implications of the challenges we discovered and the limitations of our study.
§ RELATED WORKS
In this section, we first review former and existing sensor network deployments to identify necessary criteria, prior evaluations, and known issues around inequality. We then examine LTE connectivity and solar power in urban areas, as these are the technologies we use for our sensor network.
§.§ Criteria for Urban Sensor Networks
By examining prior urban sensor network deployments, we have identified five criteria necessary for success—reliability, scalability, ease of maintenance, real-time communication, and low cost. The shortcomings of prior sensor networks have often been caused by a lack of reliability, either in terms of not functioning over time, as with malfunctioning hardware <cit.>, or not communicating data reliably over space and time <cit.>. Many prior networks have also raised the issue of scalability, which is especially prevalent when relying on electrical cables and wired power, which may be available at street lamps or traffic signals, but ultimately limits the node placement locations <cit.>. Similar initiatives have shown that reliance on these specific locations can additionally make installation and maintenance more difficult, which then increases the cost of operation <cit.>. The issue of maintenance is particularly important in urban settings, where the cost of accessing a node can be very high <cit.>.
Conversely, we find that some deployments are more successful because they achieve low-cost via the use of existing infrastructure. For example, officials in New York City chose to use an existing public safety wireless network for a new traffic control system <cit.> and Chicago's Array of Things relied on cellular networks <cit.>, decisions that helped ease installation and thus save costs.
§.§ Evaluations of Urban Sensor Network Deployments
The evaluations of real-world sensor network deployments in urban settings have often been small-scale and short-term. A small number of researchers have shared the lessons and challenges learned from urban sensor network deployments, but many of these are focused on specific data such as noise <cit.> and water quality <cit.>. Furthermore, many of these studies rely on the power grid for high computation tasks <cit.>, or use technologies such as Wi-Fi or Zigbee for data transfer <cit.>. The works that evaluate LTE-connected or solar-powered urban sensor networks are small scale and short duration studies that do not offer extended insights on reliability <cit.>.
§.§ Inequality of Sensor Networks
As smart city networks are increasingly explored and deployed, sociology and urban planning researchers have begun to evaluate the potential social implications of urban sensor networks. For example, one group of researchers evaluated prior urban sensor network deployments and identified areas deemed “sensor deserts", which are those that lack nearby sensors based on a straight line distance <cit.>. As the researchers state, sensor deserts not only add to existing forms of inequality, but the placement of sensor nodes can also affect resident perception of the distribution of resources and harms throughout a city <cit.>, creating potential political or social strife if nodes are not visible in certain areas. Similarly, others have noted the potential for smart city technologies to “further deepen the splintering of urban networks, creating deep divides between those with access to 'smart' and those without" and raising questions about the “politics of urban exclusion" <cit.>. Thus, there is an increasing push for equity as a consideration in practical sensor network deployment <cit.>.
§.§ LTE Connectivity in Urban Areas
Extensive research around mobile connectivity has revealed a variety of factors known to affect RSS and limit propagation distance for LTE signals. These include physical features such as high-rise buildings <cit.>, the distance between the cell tower and receiver <cit.>; meteorological conditions such as precipitation <cit.>, humidity <cit.>, strong winds <cit.>, temperature <cit.> and sudden weather changes <cit.>; and environmental measures such as high particulate matter concentrations <cit.>. Another major factor that affects signal strength is inter-cell interference (ICI) <cit.>, which occurs when a node moves to the edge of one cell tower's range while moving closer to another cell tower. We include all these factors in our analysis of connectivity issues in section 5.
§.§ Solar Charging in Urban Areas
Given the vast number of previously deployed solar-powered sensor networks and the many papers published about them, it might seem safe to assume that solar power is reliable for most sensor network deployments. However, there have been very few studies looking into the long-term reliability of solar power in urban settings. Dehwah et al. <cit.> evaluate the performance of a traffic monitoring sensor network in a desert city, and describe the effect of dust storms and building shadows on solar charging. However, they do not analyze in depth which locations were most affected by shadows, how the issue might be prevented in future deployments, or the potential social implications.
To our knowledge, this work presents the first in-depth analysis of a large-scale, long-term cellular, solar-powered urban sensor network towards understanding the broader impact of the technical challenges for urban communities.
§ CHICAGO AS A CASE STUDY
§.§ Building Height
According to the Council on Tall Buildings and Urban Habitat <cit.>, amongst cities around the world, Chicago has the 10th most buildings 150 meters and higher, 11th most buildings 200 meters and higher, and 5th most buildings 300 meters and higher. However, its place on those lists is expected to fall within the coming years—Chicago has only three buildings 150 meters and higher under construction and twelve proposed for construction. By comparison, Wuhan, Shenyang, and Bangkok—cities just below Chicago on the list of most 150+ meter buildings—have 49, 14, and 17, buildings under construction respectively, and dozens more proposed in both Wuhan and Shenyang. In addition, development in cities such as Mumbai, Nanning, and Nanjing, which all have several 150+ meter buildings under and proposed for construction will propel them past Chicago in the list in the coming decades. This puts Chicago currently in a unique position for evaluating the impact of built environment towards planning global urban sensor networks.
§.§ Latitude and Sunlight Hours
Chicago has a latitude of 41.88 degrees, where the sun is visible for 15 hours, 15 minutes during the summer solstice and 9 hours, 6 minutes during the winter solstice. According to data from the World Economic Forum <cit.>, the top five most populous latitudes are between the 22nd and 27th parallel north, which are all much closer to the equator and thus have more sunlight on the winter solstice, with an average of 10 hours 35 minutes.
Nevertheless, a number of highly populated cities reside at or above the 42nd parallel north, including London, Moscow, Harbin, and Toronto, as well as much of Western Europe. Cities such as New York and Beijing are also located at nearly the same latitude, receiving 9 hours 13 minutes sunlight on the winter solstice. Furthermore, as the effects of climate change disproportionately affect populations who live closer to the equator, mass migration away from the equator is expected <cit.>. Thus, understanding the performance of solar-powered sensor networks at northern latitudes is essential for future urban environmental sensing.
§.§ Segregation and Inequality
Based on 2020 United States Census Data, Chicago is the fourth most racially segregated large city (population at least 200,000) in the United States <cit.>. Fig. <ref>a highlights Chicago's racial segregation, showing where the white and non-white—primarily Black and Latine—populations live relative to each other. There is limited data comparing racial segregation in global cities, likely because many countries are more racially homogeneous than the United States.
However, segregation based on income or social status exists in many global cities, with the highest levels of inequality and segregation often found in cities of lower income countries <cit.>. According to Gini Index data from the 2019 American Community Survey <cit.>, Chicago has the 10th greatest income inequality amongst US cities, with a Gini index of 0.53 (where a 0 indicates perfect equality and 1 indicates perfect inequality). Compared to cities such as London and Johannesburg, which have the highest global Gini index values—both over 0.7—Chicago has a relatively medium-high level of income inequality <cit.>. As seen in Fig. <ref>b, the areas of Chicago that are considered most socioeconomically disadvantaged based on factors such as unemployment and poverty level also overlap with many of the areas that have a majority Black or Latine population. Thus, we believe that Chicago provides a useful case study by which to examine the potential social and equity implications that sensing technologies can introduce in cities around the globe.
§ SENSOR NETWORK AND DATA
§.§ Sensor Network Design
The sensor network, described in further detail in [blinded]
and shown in Fig. <ref>, was designed and deployed to collect air pollution data across Chicago. The network comprised of 118 unique sensor node locations, with 20 nodes allocated to local environmental justice groups for placement according to their priorities, 12 nodes at four EPA stations (3 nodes at each station) for collocation to perform calibration, and the rest placed based on locations chosen through stratified random sampling, as described in NYCCAS <cit.>, with a small subset chosen by partner organizations.
All devices that were not at EPA stations were installed at bus shelters throughout the city, as shown in Fig <ref>. These nodes were placed at the same height, about 2.5 meters above ground. Nodes at EPA stations were located on the rooftops near the EPA monitors, several meters above ground and at different heights based on the height of the building or structure housing the EPA monitor. Most of the devices were installed at their respective locations in July and August 2021, with 98 nodes (over 83%) placed by July 3rd, 2021.
§.§ Datasets
The node-related data for each reading, including the time, received signal strength (RSS), battery level, internal node temperature, and air pollutant readings were all logged with each reading and stored in an cloud server. We calculated the latency by comparing the time of the sensor reading to the time of the data's insertion into the server. Cell tower information, such as the cell tower ID, were collected when making a connection with the tower. We used OpenCellID <cit.> to link the cell tower information with locations, OSM (Open Street Maps) Buildings <cit.> to gather data about buildings surrounding the nodes, FCC Broadband <cit.> and nPerf <cit.> data to examine AT&T connectivity, Meteostat <cit.> to collect external weather data, and the Shadow Accrual Maps tool <cit.> to calculate the amount of shadow hours at each node location. Socioeconomic data were pulled from the City of Chicago Open Data Portal <cit.>.
§.§ Data Cleaning
We removed readings that had no connectivity data (N = 9,393, 0.2% of readings), readings where the signal was equal to zero (N = 11,626, 0.12%), readings where the tower location was clearly outside of Chicago, possibly due to sensors being shipped back and forth when there were issues (N = 11,778, 0.12%), and readings with a delay of more than 24 hours (N = 54,900, 0.63%), as this was likely indicative of a device issue, rather than connectivity or charging issue. We also identified 565,371 readings (12.7%) where the cell tower could not be located in the OpenCellID database; we kept these readings in for all analyses except ones involving distance and general direction of the cell tower.
§ CONNECTIVITY
§.§ Motivation for an LTE-Connected Urban Sensor Network
Despite recent advances in WiFi and low-power wide-area networks (LPWAN), such as LoRaWAN <cit.>, most urban sensor networks will rely on cellular networks in the coming years
for the following reasons: 1) Dependence on existing urban cellular networks ensures city-wide coverage without additional infrastructure. 2) Widespread global availability and flexible data plans with each generation. 3) Lower cost and ease of setup and scaling—for technologies such as LoRaWAN, scalability is a particularly pressing issue due to the cross-technology interferences that will arise from other technologies <cit.> and potential packet collisions with large sensor networks <cit.>. In addition, LPWAN require dedicated infrastructure that have a low per-unit cost, but quickly add up in costs based on the cells required to support high node density <cit.>.
Thus, to support the necessary criteria of reliability, real-time, and low cost, we use an LTE network for communication. LTE networks propose great coverage in most cities around the globe <cit.>, providing means for scaling reliably. Because the cellular infrastructure is already built and evolving, networks are easy to set up and remain low-cost, especially with the variety of LTE plans available. Finally, with the fast evolving generations of cellular communication, such networks are increasingly seen as dedicated low latency connectivity for massive IoT deployments in growing cities <cit.>.
§.§ Materials: Antenna and LTE Carrier
The sensing nodes connected via AT&T's 4G IoT LTE-M One network, which uses LTE Bands 2, 4, and 12, and operates at frequencies of 700, 1700, and 1900 MHz. Each node used a SIM card and Ignion NN03-310 antenna <cit.>, which transmits data over 3G and 4G, is tuned for channels 2, 3, 4, 5, 9, 12, 20, and 28, and operates on frequencies from 698-960 MHz and 1710-2690 MHz. The antenna was placed at the top right of the printed circuit board (PCB) [After conversations with the antenna manufacturer and a small series of tests, it was determined that antenna placement on a PCB can have a significant effect on the RSS values. It is imperative for sensing node designers to consult with antenna manufacturers to ensure correct antenna placement on custom PCB for the best connectivity.], as shown in Fig <ref>.
§.§ Methods: Node Connectivity and Data Transmission
The sensing node preserved battery life by periodically waking up to record a sample and transmit data to the cloud, as further described in Section <ref>. For this deployment, the nodes were set to transmit data every five minutes from the last recorded sample time. The data transmission process included the following series of steps: 1) The microprocessor woke up and kicked off two processes on separate threads, 2a) One thread sampled the sensor with the longest latency, typically about 8 seconds, 2b) A separate thread simultaneously initiated connection to the cloud, 3) Another array of low latency sensors were sampled, 4) The data were then packaged and transmitted to the IoT endpoint going through the cell tower, AT&T network routers etc.
§.§ Methods: Retry Logic
If a node could not connect to the cloud, it stored the reading locally, went back to sleep for five minutes, and tried to connect again. After 10 retries, if the node still could not connect, then the node was set to reboot itself. After a reboot, the node would immediately try to make a connection to the cloud and would not record local readings until it did because the node lacked a real time clock. Once the node could connect again, it transmitted all locally stored data and errors that were logged in the absence of connectivity.
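The firmware itself runs on the embedded node, but the retry behavior can be summarized with a small illustrative control loop (Python pseudocode; the callables sample, connect_and_upload, reboot, and clock_is_synced are hypothetical stand-ins, not the node's actual API).

    import time

    SAMPLE_INTERVAL_S = 5 * 60
    MAX_RETRIES = 10

    def node_loop(sample, connect_and_upload, reboot, clock_is_synced):
        backlog = []                     # readings buffered locally while offline
        failures = 0
        while True:
            if clock_is_synced():        # without an RTC, readings are only taken once time is known
                backlog.append(sample())
            try:
                connect_and_upload(backlog)   # also flushes any locally logged errors
                backlog.clear()
                failures = 0
            except ConnectionError:
                failures += 1
                if failures >= MAX_RETRIES:
                    reboot()             # after reboot, the node reconnects before sampling again
                    failures = 0
            time.sleep(SAMPLE_INTERVAL_S)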
§.§ Results: Readings and Cell Towers
For the one-year period and 118 nodes in our network, our dataset included 8,684,756 readings. We linked the readings to 417 unique cell tower locations, 65 with only 1 associated reading, 179 with 500 (0.0057%) or more readings, and 165 with 1000 (0.011%) or more readings.
§.§ Results: “Dead Zones"
Over the course of our deployment, we identified 11 locations (9.32%) at which the sensor nodes reported consistently low RSS values and ultimately failed to connect, generally within a few days of installation. These 11 locations include 10 from the main deployment beginning in July 2021 and one node location from an earlier pilot program in April 2021. Three of the 11 locations were selected for deployment by local community groups, a significantly higher proportion than in the overall deployment. Initial mitigation strategies involved moving the nodes to the closest bus shelter, which was often directly across the street. However, we discovered that the nodes had to be moved even further (sometimes multiple blocks away) to establish a connection.
We examined a number of factors to determine the potential cause of these “dead zones", including the distance between the node and cellular tower, the number of towers close to a node, evidence of inter-cell interference (ICI) <cit.>, and nearby physical urban structures, including the distance and height of the closest building to the node, and the number, tallest height, mean and median building height within 100, 250, and 500 meters of each node. We found no evidence to suggest that any of these features had an effect on a node's ability to connect, when comparing all “dead zones" to all other node locations. When comparing “dead zone" locations to the new locations each of those nodes was moved to, we found a statistically significant difference in the height of the tallest building within 100 meters of the node after relocation versus before, as shown in Fig. <ref>. This indicates that land use and urban form close to the location of stationary sensors are likely factors impacting connectivity, fitting in line with observation from prior work <cit.>.
In addition, we investigated the role of line-of-sight as a primary factor contributing to “dead zones". We examined the relation between the sensor node, cellular tower, and tallest nearby building for the two nodes found to connect to the same primary cellular tower at their original (“dead zone") and new location. We found that one of these node configurations exhibited line-of-sight interference, as shown in Fig. <ref>, as the tallest building (11.9 meters) was clearly in the path between the cellular tower and sensing node.
Due to the limited number of examples to examine, there is a need for further investigation in larger datasets, however, this evidence supports the key role of line-of-sight impediments in contributing to “dead zones".
Finally, we examine the socioeconomic factors around the node locations without connectivity. We do not find a significant difference in the socioeconomic factors when comparing node locations that can and cannot connect, likely because there are a large number of nodes around the city. However, we do note that many of the dead zone locations are in socioeconomically disadvantaged and majority Black and Latine neighborhoods, as shown in Fig. <ref>a.
§.§ Results: Signal Strength
As shown in Fig. <ref>, the yearly median signal strength for each node ranged from -61 dBm to -113 dBm, with a network-wide median of -87 dBm. There was no significant difference in the median signal strength for community-selected versus randomly-selected nodes and we did not identify a statistical relationship between surrounding physical features, such as building height or distance to buildings, and the median signal strength for the sensor node or corresponding cell tower location.
As with “dead zones", we found that the node locations with the lowest median signal strength—those less than 100 dBm—were nearly all sited in neighborhoods that are socioeconomically disadvantaged and have a higher percentage racial minority population. In fact, only one of the eight locations with a low median signal strength was sited in a majority white neighborhood, as shown in Fig. <ref>b.
§.§ Results: Latency
We found that over the entire year's worth of data, the minimum latency was 2 seconds, the median latency was 5 seconds, and the interquartile range fell between 4 and 6 seconds (our data allowed only for estimating seconds, and not milliseconds for latency).
When examining the median latency for each sensor node over the course of the study, we found a much tighter distribution than we saw for median signal strength. In fact, the entire interquartile range falls at the same value of 5 seconds. There are only three sensor locations with a median latency greater than that value, shown in Fig. <ref>c, and two of those locations overlap with those that have poor median signal strength, suggesting a correlation between signal strength and latency.
We find that only 7.24% of readings have a latency of 10 or more seconds, 1.18% have a latency of 30 or more seconds, and less than 1% (0.88%) have a latency of one minute or longer. Although these are low percentages, we examined the significantly delayed readings to determine if they occur randomly or follow a pattern. We found that the delayed readings do not occur randomly, but rather appeared disproportionately on certain dates, at certain sensor locations, and with certain cellular towers, as seen in Fig. <ref>. Interestingly, the sensor locations with the most delayed readings have no overlap with the locations that have either the lowest median signal strength or the highest median latency. However, when looking at the map of the sensor locations in Fig <ref>d, we see again that most of these locations are in neighborhoods with a majority Black or Latine population. We could not identify any temporal or location-based events, such as sporting games, that have previously been associated with cellular network delays and may have caused these significant events. Coupled with the lack of empirical evidence from the cellular service providers, we are led to conclude that the delays are likely caused by carrier-specific issues such as cell tower maintenance.
§ POWER
§.§ Motivation for a Solar-Powered Urban Sensor Network
Nodes must be continuously running to collect data over time, yet many outdoor urban spaces are not equipped with accessible wired mains <cit.>. Solar power is the most ubiquitous form of renewable energy for sensor networks, and will remain prevalent in the coming years for the following reasons: 1) Solar panels are relatively inexpensive and easy to install. 2) Solar panels can power sensors that need to operate continuously in remote or hard-to-reach locations where it may be difficult or expensive to run electrical cables or replace batteries. 3) Using solar power eliminates the need for frequent battery replacements, which creates an added burden for cities looking to deploy sensor networks.
Thus we use solar energy to power our sensor network to
achieve reliability through continuous power, scalability in allowing for power in locations that do not have outlets, ease of maintenance by limiting battery replacements, and low-cost by requiring no new infrastructure.
§.§ Materials: Battery, Solar Panel, and Power Usage
Each sensing node was outfitted with a rechargeable 2000 mAh lithium polymer battery
and a 10×13 cm Voltaic Systems P126 6W solar panel. The solar panel was attached horizontally, in a flat position, to the top of the node's respective bus shelter to maximize solar absorption, maintain security of the panel, and provide ease of installation.
To optimize for low power consumption, the microcontroller operated in a duty cycled mode, consuming as little as 40 µA between measurements. The device's four electrochemical gas sensors consume microwatts of power, while the particulate matter (PM) sensor consumes up to 80 mA power as it relies on an internal fan to circulate air. Thus to optimize the overall power usage, we sampled the gases every 60 seconds and sampled the PM and transmitted data every 5 minutes. On average, the device drew 4mA current over a 24 hour period, allowing the battery to power the sensing node, including communications, for approximately 15 days at the aforementioned sampling rate.
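As a rough consistency check of these figures, the battery life can be estimated from the capacity and the average current draw. The usable-capacity derating in the sketch below is an assumed value (covering converter losses, temperature, and cutoff voltage), not a measured one.

```python
def battery_life_days(capacity_mah, avg_current_ma, usable_fraction=0.75):
    """Back-of-envelope battery life estimate.
    usable_fraction is an assumed derating for converter losses, cold temperatures,
    and the cutoff voltage; it is not a measured value."""
    return capacity_mah * usable_fraction / avg_current_ma / 24.0

# Reported deployment values: 2000 mAh battery, ~4 mA average draw.
print(round(battery_life_days(2000, 4.0), 1))  # ~15.6 days with the assumed derating
```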
§.§ Methods: Power Saving Strategies
In October 2021, we noticed that one of the devices was no longer charging. After sending the local maintenance team to investigate, we discovered that the sun was no longer reaching the solar panel due to the change in the sun's position and the node's location surrounded by skyscrapers. We anticipated that this issue would begin to show up in other nodes as well, so determined three potential solutions to ensure the network still collected useful data throughout the winter months:
* Set the sampling interval to be more than every five minutes, which would deplete the battery less quickly by running the PM sensor and data transmission less often.
* Implement a power-saving mode to ensure devices only run when they have a certain amount of battery and sleep when they are below that value.
* Schedule devices to only run at certain times of the day, i.e. for a few hours in the middle of the day when there is sunlight.
Naturally, each option comes with its own trade-offs that had to be considered. Sampling less often would provide less temporal coverage which could cause cities to potentially miss timely notifications from sensors, make it more difficult to identify noisy or anomalous readings through techniques such as moving averages, and introduce calibration errors from datasets with different resolutions. A power-saving mode could result in large time spans with no data, creating difficulty in comparing data from different seasons and potentially resulting in a lack of data needed for calibration. Scheduling devices to only run at certain times would limit data collection to only specific hours of the day, and may not solve the issue if the number of hours is not chosen correctly.
Based on the tradeoffs and our need of data for sensor calibration, we implemented a power-saving mode to put devices into a deep sleep to avoid depleting the batteries in low- or no-light conditions. Power-saving mode was initiated when a battery's power level fell to 15% or less of its total capacity then turned off when the battery's power level had recharged to at least 40%.
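The hysteresis between the 15% entry and 40% exit thresholds can be summarised as a small state machine; the sketch below illustrates the logic only and is not the firmware patch that was deployed.

```python
PSM_ENTER_PCT = 15  # enter deep sleep at or below this state of charge
PSM_EXIT_PCT = 40   # resume sampling only after recharging to at least this level

def next_mode(current_mode, battery_pct):
    """Return 'power_saving' or 'active' given the current mode and battery level."""
    if current_mode == "active" and battery_pct <= PSM_ENTER_PCT:
        return "power_saving"
    if current_mode == "power_saving" and battery_pct >= PSM_EXIT_PCT:
        return "active"
    return current_mode
```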
§.§ Results: Data Loss due to Power Saving Mode
Between the autumn and spring equinox of the year long study period, 44 devices (37.29%) went into power saving mode (PSM), with most devices entering PSM between January and March. Seven of these devices were at community selected sites, representing about 16% of the devices in PSM, indicating the community selected sites were not disproportionately affected. In total, devices in the networks spent 19,450,915 seconds — over 33,180 hours or 1382.5 days—in PSM, resulting in about 398,000 potential sensor readings that were not captured. Most devices entered PSM numerous times, with several entering more than five times during the study period. Thus, in many locations there was adequate sunlight to keep the devices charged throughout the winter months if a larger solar panel had been used or the devices had better energy harvesting to extend the battery life with the limited charge they received.
§.§ Results: Location of Solar Charging Issues
As expected, the node locations in downtown Chicago entered PSM for a long duration of the winter due to the high number of very tall buildings in the neighborhood. However, several node locations in neighborhoods outside of downtown Chicago, that lack a high density of tall buildings, also experienced solar charging issues. In fact, the node location with the second highest amount of time spent in PSM was not in a location near tall buildings, and 8 of the 12 node locations that had the most power saving hours were outside of the downtown area, as shown in Fig. <ref>f. The figure also shows that they mostly fall in neighborhoods with a majority Black or Latine population. As seen in Fig. <ref>, shadows from trees for large portions of the day could be a potential cause for charging issues in some areas. In addition, ice build up on solar panels may cause charging issues, but this is difficult to diagnose without visiting every node location while it is in PSM. Thus, further analysis is required to determine the exact cause of charging issues in these locations that obviously lack tall buildings in the vicinity. The important takeaway is that the dynamic physical environment of solar IoT deployments need to be considered by tools that are currently being developed to estimate solar energy availability using historic data or satellite/map images <cit.>.
§.§ Results: Predicting Solar Charging Issues
We used the OSM Buildings data <cit.> and Shadow Accrual Maps tool <cit.> to determine how well we would be able to predict a sensor location having power saving issues. With the OSM Buildings data, we examined the distance to the closest building, height of the closest building, and mean and median height of buildings within 100, 250, and 500 meters of each node location. For shadows, we used the tool to calculate the amount of time each node location was in shadow on the winter equinox. Using both a logistic regression model for the binary case of power saving or not, and a linear regression model for the amount of time spent in PSM, we found no statistical significance for either the amount of time spent in shadow, or any data related to buildings around the node locations, as highlighted for one data point in Fig. <ref>.
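A hedged sketch of this regression analysis using statsmodels is shown below; the file name and feature columns are placeholders standing in for the OSM Buildings and Shadow Accrual Maps features described above, not the actual analysis scripts.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-node feature table assembled from OSM Buildings and the
# Shadow Accrual Maps tool; column names are placeholders.
nodes = pd.read_csv("node_features.csv")
features = ["shadow_minutes_winter_equinox", "dist_closest_building_m",
            "height_closest_building_m", "mean_building_height_100m"]
X = sm.add_constant(nodes[features])

# Binary outcome: did the node enter power saving mode at all?
logit = sm.Logit(nodes["entered_psm"], X).fit()
print(logit.summary())  # inspect coefficient p-values

# Continuous outcome: hours spent in power saving mode.
ols = sm.OLS(nodes["psm_hours"], X).fit()
print(ols.summary())
```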
Upon further examination, we discovered that one of the issues around using crowdsourced and open source resources is that they are not consistently updated. For example, one sensor node that was indicated to have shadow issues but did not enter PSM likely had a building present when the data were uploaded, but no longer has a building there as discovered on Google Maps. Likewise, as seen in Fig. <ref>, a node location with no building nearby that entered PSM was likely affected by the presence of a tree near the bus shelter, which was not captured in the tools we used. This points to an additional shortcoming of the available data, which focus on buildings and do not account for foliage, hyperlocal snowfall, and other physical phenomena that may impede solar charging.
§ DISCUSSION
§.§ The Potential of LTE-Connected, Solar-Powered Urban Sensor Networks
The results show immense promise for LTE-connected urban sensor networks. Most node locations had adequate signal strength to achieve connectivity, and the vast majority of sensor readings were transmitted to the cloud server within five seconds. Furthermore, there were no noticeable issues around connectivity due to temporal features such as weather or traffic patterns. We also had success using LTE to detect errors and perform software updates, including a firmware patch to add the power saving mode. These findings all point to the potential of LTE in creating reliable, scalable, easily maintainable, and real-time sensing in cities.
Solar panels proved to be a reliable energy source for over half of the year-long study, and most devices that experienced charging issues only did so between January and March. Chicago is at a more northern latitude than most of the global population, so we expect that many cities, and especially those in the Global South, would experience fewer solar charging issues. Additional improvements with solar panel efficiency <cit.> and research on smart power management strategies for renewable energy in IoT establish solar charging as a viable powering option.
The nodes that were collocated at EPA stations all experienced no charging or connectivity issues, suggesting that placing nodes on rooftops could be a viable solution to improve reliability. However, node placement is highly dependent on the application, and many cities may choose or need to place nodes closer to street level. Future research could include interpolation and machine learning techniques to correlate data from street level to rooftop nodes to address the technical issues and still collect useful data. Additionally, passive wireless reflector and relay research can find application in routing network availability from cell towers and around built infrastructure to end devices.
§.§ Implications of Connectivity and Charging Issues
Despite the success we had in using 4G LTE-M to transmit data, we discovered issues around “dead zones", delayed readings, and unequal signal strength. The cause of these issues could not often be easily identified and data sources from AT&T and the FCC indicate widespread support of the LTE network across Chicago, as seen in Fig. <ref>. Thus, the discovery of these issues raises questions on the reliability of LTE networks, especially in cities that do not have as much cellular infrastructure as Chicago.
However, we did not identify significant data loss from the connection-related issues, suggesting that LTE-connected sensor networks are likely appropriate for applications that do not rely on instant or near instant data.
For applications that cannot afford to have any delayed data, such as emergency support services, network designers will want to think about building robustness into the system to ensure real-time communication for all readings.
Despite the ubiquity of solar panels as the power source for wireless sensor networks, we found that they are not a reliable power source for urban sensor networks in cities that have limited sunlight in winter months.
In our study, we found that cellular connection and solar charging issues are not all localized to areas with tall buildings and may be spread inequitably around a city. Thus, urban sensor network deployments have the potential to exacerbate existing societal inequalities by allowing for networks to be scaled more easily in some neighborhoods than others. In turn, this can increase mistrust between residents and governments <cit.> and drive residents to make assumptions about the distribution of resources and harms based on the physical presence of sensors <cit.>. Thus, to serve people in all communities, sensor network designers should consider working with local service providers, using repeaters, multiple sensors, and other technologies to improve reliability in underserved areas. Furthermore, networking researchers and designers need to focus on equality, and not just quality or area coverage when building and deploying infrastructure.
§.§ Challenges around Data Access
Due to the lack of official up-to-date building information, we relied on open crowdsourced data to determine the location and height of buildings in the city. Similarly, because the location of cellular towers is not publicly available, we relied on data from OpenCellID. As with many open crowdsourced datasets, these data were not completely accurate or up-to-date <cit.>. This was especially clear when examining FCC carrier connectivity information, as the entire city of Chicago seemingly has coverage (Fig. <ref>), yet we found that was not the case, likely because the data are reported by carriers <cit.>. We also discovered data accuracy issues in shadow prediction using the Shadow Accrual Maps <cit.>. Other crowdsourced data, such as nPerf, presented an alternative usage issue in incompleteness, as seen in Fig. <ref>. Particularly in Chicago, there is significantly more data available in the northern part of the city and along highways, likely attributed to the increased usage of crowdsourced platforms by white people and high-income earners <cit.>. Thus, relying on crowdsourced data makes it difficult to predict locations with solar charging or connectivity issues that may arise due to building height and other urban interferences, made further difficult by the social inequities that exist in many cities and are exacerbated in crowdsourced technologies.
The difficulty in working with open crowdsourced data points to a need for new methods to obtain up-to-date
urban data. For example, researchers can help develop ways to obtain building height or cell tower location from satellite imagery or Google Maps. We may also look to develop easier ways for cities to create their own databases that are kept up-to-date or develop better community science incentives to keep crowdsourced data sources such as OSM Buildings, OpenCellID, and nPerf up-to-date and to reach new users who do not currently contribute to these datasets.
§.§ Limitations of this Study
We acknowledge that this work is limited, as it focuses on a single-city case study. Although we believe that Chicago is representative of many other large cities,
we lack the empirical evidence needed to “assess the implications and potentially transformative consequences" of how similar smart city networks would emerge in different urban contexts <cit.>. An additional limitation is that we use weather data from US government agencies and there are only three weather stations in the Chicago area. Although we also had temperature and humidity readings at each node, these sensors were located inside the node enclosures, and thus did not always provide accurate external measurements. Thus, our weather-related analyses are not hyperlocalized to most of the sensors, and it is possible that there are hyperlocal weather correlations, such as urban heat islands, that affected sensor connectivity.
§ CONCLUSION
In this work, we present the challenges and opportunities from a year-long, city-wide urban sensor network deployment. The network was created based on five specific criteria of success that we identified from past work. We provide an in-depth analysis of deployment data from the perspectives of cellular connectivity and solar energy harvesting, which are the two key features that help meet the success criteria. In addition, we highlight inherent challenges with the open data sources available for root-cause analysis of failing nodes, and identify strengths and weaknesses to define future research directions that will support large-scale, real-time energy harvesting deployments in achieving reliable, equitable smart city networks.
|
http://arxiv.org/abs/2307.03988v1 | 20230708143306 | PCG-based Static Underground Garage Scenario Generation | [
"Wenjin Li",
"Kai Li"
] | cs.AI | [
"cs.AI",
"cs.RO"
] |
PCG-based Static Underground Garage Scenario Generation
Wenjin Li, Kai Li
Wenjin Li, Kai Li are with the Department of Computer Science and Technology, Southern University of Science and Technology, Shenzhen, 518055, China
August 12, 2023
============================================================================================================================================================================
Autonomous driving technology has six levels, from L0 to L5.
Currently, only the L2 level (partial automation) can be achieved, and there is a long way to go before reaching the final level of L5 (full automation).
The key to crossing these levels lies in training the autonomous driving model.
However, relying solely on real-world road data to train the model is far from enough and consumes a great deal of resources.
Although there are already examples of training autonomous driving models through simulators that simulate real-world scenarios, these scenarios require complete manual construction.
Directly converting 3D scenes from road network formats will lack a large amount of detail and cannot be used as training sets.
Underground parking garage static scenario simulation is regarded as a procedural content generation (PCG) problem.
This paper will use the Sarsa algorithm to solve procedural content generation on underground garage structures.
Automated driving, underground garage planning, reinforcement learning, procedural content generation, Sarsa
§ INTRODUCTION
According to a recent technical report by the National Highway Traffic Safety Administration (NHTSA), 94% of road accidents are caused by human errors <cit.>. Against this backdrop, Automated Driving Systems (ADSs) are being developed with the promise of preventing accidents, reducing emissions, transporting the mobility-impaired, and reducing driving-related stress <cit.>.
Autonomous driving simulation is an important part of ADSs.
However, simulation lacks interactive and changeable scenarios <cit.>. Researchers still rely on manually constructed scenarios for large-scale training.
Procedural Content Generation for Games (PCG-G) is the application of computers to generate game content, distinguish interesting instances among the ones generated, and select entertaining instances on behalf of the players <cit.>.
In our project, we consider the underground garage as the game content that should be generated.
The problem can normally be divided into three parts. The first part is to create the digit grid map for each type of floor, as a PCG task. The second part is to convert each type of floor to the design diagram.
The last part is to simulate the whole 3D scenario map depending on the design diagram.
To simplify the simulation, we combine the last two parts as one part.
In reinforcement learning <cit.>, an agent seeks an optimal control policy for a sequential decision-making problem.
We regard the first part as a sequential decision-making problem.
Markov decision processes (MDPs) are effective models for solving sequential decision-making problems <cit.> in uncertain environments.
The agent's policy can be represented as a mapping from each state it may encounter to a probability distribution over the available actions <cit.>.
Generalized policy iteration (GPI) was demonstrated as a class of iterative algorithms for solving MDPs in <cit.>.
It contains policy iteration (PI) and value iteration (VI) as special cases and has both advantages of PI and VI.
Temporal-difference (TD) learning <cit.> is a specific implementation of GPI <cit.>.
TD methods are guaranteed to converge in the limit to the optimal action-value function, from which an optimal policy can be easily derived.
A classic TD method is Sarsa <cit.>.
The on-policy algorithm, in which the policy that is evaluated and improved is also the one used to generate behaviour, has important advantages.
In particular, it has stronger convergence guarantees when combined with function approximation, since off-policy approaches can diverge in that case.
In this paper, we use the Sarsa algorithm to create a digit grid map.
Simulation is an important step during the conversion <cit.>.
We consider the simulator can generate test scenarios automatically, including static buildings, dynamic traffic flow, and real-time calculated lighting and weather.
This paper aims to solve the static scene generation problem.
§ RELATED WORK
Abdullah <cit.> compared the space utilization efficiency of diagonal, parallel, and perpendicular parking methods and concluded that perpendicular parking methods have the highest number of spaces, using a university as a specific example.
Sawangchote <cit.> developed a heuristic algorithm for the layout of parking spaces in small-scale garages based on the space utilization of different parking methods;
Xu Hanzhe <cit.> carries out a parking space layout design based on a greedy algorithm, studying the influence of irregular contours and obstacles on the layout of parking spaces and obtaining the layout plan with the largest number of parking spaces.
Julian Togelius <cit.> finds that composing several different algorithms yields better results than any single algorithm, and he uses answer set programming for procedural content generation.
Huimin Wang <cit.> has previously proposed a model-based reinforcement learning algorithm for the path planning problem. Path planning has features similar to our specialized PCG problem, so we consider that garage generation can reuse path-planning methods for agent movement. Besides, Arunpreet Sandhu <cit.> proposes the WFC algorithm to generate similar images.
Akatsu <cit.> provides an idea for evaluating underground garage structures by feeding a series of indicators obtained from a realistic traffic survey into a modeled underground garage structure to obtain a series of evaluation results.
§ METHODOLOGY
§.§ Overall
We consider dividing the underground garage construction into two main parts, the PCG task and simulation. Notation used throughout this paper is as follows:
Since the most important aspect of static underground garage scenario generation is the planning of parking stalls, parking space planning is essentially an optimization problem of object placement. The objects to be placed are distinguished as follows:
* static object: object's position will not change after confirming the position
* dynamic object: objects can wait for further optimization after confirming the position of static objects
Now we only need to consider the distribution of dynamic objects. To better describe the object planning of the entire underground garage, we rasterize the garage and use three matrices S_i,j, R_i,j, and C_i,j to describe its state.
In this paper, we use reinforcement learning to plan the distribution of dynamic objects. Combining this distribution with the distribution of static objects yields S_i,j as the result of parking space planning. Finally, S_i,j is combined with R_i,j and C_i,j to form the plane structure of the static underground garage, which is passed into the Unity3D engine for 3D modeling to generate the static underground garage scenario.
We provide the following requirements for a reliable garage:
* Reality: The generated basement structure needs to adapt to real-world standards (such as national standards and regulations)
* Feasibility: Ensure that at least one route to any exit and entrance can be found for each parking space arranged in the basement structure
* Randomness: The structure and contour of the basement are randomly generated, and the solution generated each time will change according to the change of the random process
* Bijection: Each generated basement structure has a unique corresponding random process, and this random process must correspond to a unique basement structure
* Customizability: The structure of the basement can be self-defined
§.§ Static objects generation
First, we give a definition of structure matrix 𝒮(i,j):
𝒮(i,j) =
0, parking space or free space
-1, obstacle
1, lane
2, entrance
3, exit
At the beginning of getting this matrix, we should confirm the location of those static objects, which can be divided into three steps: contour generation, entrance and exit generation, and obstacle generation.
First, we need to generate the contour of the underground garage. Divide a w× h rectangle into w× h blocks, each with a width and height of 1. We generate n groups of two points (2n points in total) in this rectangle and use the line segment between the two points of each group as the diagonal of a rectangle. After expanding all rectangles to their corresponding grid squares, we treat the union of all rectangles as the generated underground garage contour. The following algorithm shows the generation of the underground garage contour.
After contour generation, we obtain all squares in the floor plan, which means we obtain ζ and ψ, and we then assign values to all squares in ζ and ψ:
𝒮(ζ) = 0
𝒮(ψ) = -1
Secondly, we need to determine the position of the entrance and exit. After contour generation, in order to generate a reliable position of entrance and exit, we give a definition of ξ and η. A frontier square needs to satisfy the following conditions:
𝒮(ξ) = 0
∑_i=1^8𝒮(ρ_ξ) < 0
An inner square needs to satisfy the following conditions:
𝒮(η) = 0
∑_i=1^8𝒮(ρ_η) = 0
Since entrances and exits can only be generated in ξ and cannot be generated on the corners of ξ, we only generate the entrance and exit on those squares that satisfy the following conditions:
ϵ∈ξ
∑_i=1^8𝒮(ρ_ϵ) = -3
M(ϵ_i,ϵ_j) ≥σ_1
Thirdly, we need to consider the position of obstacles in this underground garage. We only generate obstacles on those squares satisfying the following conditions:
o ∈η
M(o_i,o_j) ≥σ_2
§.§ Reinforcement Learning
Reinforcement learning (RL) is a basis to solve our PCG problem. In this paper, we first focus on finite Markov decision processes (finite MDPs).
A finite Markov decision process can be
represented as a 4-tuple M = {S, A, P, R}, where S is a
finite set of states; A is a finite set of actions; P : S× R × S × A → [0, 1] is the probability transition function; and R : S × A →ℛ is the reward function. In this paper, we denote the probability of the transition from state s to another state s' when taking action a by P(s', r|s, a) and the immediate reward received after the transition by r_s^a <cit.>.
A policy is defined as a mapping, π: S× A→ [0,1]. In this paper, we use π(s) to represent the action a in state s under the policy π. To measure the quality of a policy, action-value function, q_π(s, a) is used to estimate the expected long-term cumulative reward of taking action a in state s under a policy π. It is formally defined as:
q_π(s,a)=𝔼_π[∑_k=0^∞γ^kR_t+k+1| S_t=s, A_t=a]
where γ is a discount factor, R_t is the reward at time-step t, and E_π is the expectation with respect to the policy π.
The goal is to find an optimal policy π_* which maximizes the expectation of long-time discounted cumulative reward from any starting state s∈ S:
π_* = argmax_π E_π [∑_t=0^∞γ^t R_t|s_0=s]
In this paper, we formulate PCG as an optimization problem <cit.>, represented as a 2-tuple (M, E), where M is a finite MDP that generates a 2D integer array and E is an evaluation function that evaluates the quality of the array. We have one agent with policy π. It takes an action in state s and sends a message to the environment.
The environment receives the message and changes the state to the next state and sends rewards to the agent.
Finally, the agent and environment produce a finite Markov decision array:
S_0,A_0, R_1, S_1, A_1, R_2, S_2, A_2, R_3,…, S_T-1, A_T-1, R_T
where T is the termination time. Evaluation function E is calculated from M
E=∑_t=1^T-1 R_t
R_T is always a negative value and it is not included in E. In other words, we come back to the previous unfailed state to compute E.
Generalized policy iteration (GPI) contains two processes, policy evaluation (E) and policy improvement (I):
π_0E→ q_π_0I→π_1E→ q_π_1I→π_2E→…I→π_*E→ q_*
where q_π_i is action value function under π at episode i. The process is terminated when q and π converges to q_* and π_*. For Sarsa algorithm, policy evaluation and policy improvement are carried out simultaneously in each episode.
The agent and environment in MDP are clear. Our design is divided into two sections.
In the first section, we design the MDP for our PCG task. In the other section, we design the environment penalty based on the principle of parking lot design.
§.§ Sarsa
We use the Sarsa algorithm to solve the PCG task. First, we define the parameters of the MDP. We consider a car in a 2D plane as an agent performing a colouring task, which colours undefined squares into lane squares. The agent's state at timestep t is defined as the multi-dimensional vector:
S_t=(D, M, A_t-1)
where D is a 4-dimensional vector whose elements give the distance from the agent to the nearest free space, border, or obstacle in each of the four directions, and M is a 25-dimensional vector representing the perception range of the agent. It satisfies that all points have a Manhattan distance of less than 2 from the agent.
The agent takes action from the action set
A={UP, DOWN, LEFT, RIGHT, STAY}
The goal is to colour as much road as possible until the agent comes back to the start and takes the action STAY, leading to a terminal state. The agent receives rewards depending on the increase in the number of parking spaces. The agent also receives a penalty for certain wrong actions.
To evaluate one policy π, we predict one Markov decision array containing S, A, R for each episode. We update q(S_t, A_t) during the prediction, following the function:
q(S_t, A_t) = q(S_t, A_t) + α× (R_t+1 + γ× q(S_t+1, A_t+1)-q(S_t, A_t))
where α and γ are parameters, with 0≤α, γ≤ 1.
We use greedy method to improve one policy:
π(s) = argmax_a q(s,a)
where π(s) is the greedy action under policy π. We use ϵ-greedy action selection, in which the agent takes the greedy (maximum-value) action with probability 1-ϵ and otherwise selects an action uniformly at random. The probability of taking the greedy action π(s) in state s is:
p(s, π(s)) = (1-ϵ)+ϵ/|A|
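A minimal tabular sketch of the Sarsa update with this ϵ-greedy selection is shown below. The environment interface (reset/step) and the state encoding are simplified placeholders, not the actual grid environment used in this paper.

```python
import random
from collections import defaultdict

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT", "STAY"]

def epsilon_greedy(q, state, epsilon):
    if random.random() < epsilon:
        return random.choice(ACTIONS)                 # uniform exploration
    return max(ACTIONS, key=lambda a: q[(state, a)])  # greedy action

def sarsa_episode(env, q, alpha=0.1, gamma=0.95, epsilon=0.1):
    """One on-policy episode: the policy evaluated and improved is also the one acting."""
    state = env.reset()                               # hashable state encoding assumed
    action = epsilon_greedy(q, state, epsilon)
    done = False
    while not done:
        next_state, reward, done = env.step(action)
        next_action = epsilon_greedy(q, next_state, epsilon)
        target = reward if done else reward + gamma * q[(next_state, next_action)]
        # q(S_t,A_t) <- q(S_t,A_t) + alpha * (R_{t+1} + gamma*q(S_{t+1},A_{t+1}) - q(S_t,A_t))
        q[(state, action)] += alpha * (target - q[(state, action)])
        state, action = next_state, next_action

q_table = defaultdict(float)  # tabular action-value function q(s, a)
```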
§.§ Penalty design
The principle of parking lot design has been proposed for optimizing parking area space.
* Use rectangular areas where possible
* Make the long sides of the parking areas parallel
* Design so that parking stalls are located along the lot's perimeter
* Use traffic lanes that serve two rows of stalls
<ref> conforms to the above principles, where green squares refer to lane squares, orange squares refer to parking or free squares, and white squares refer to the entrance or exit. Contrary to <ref>, <ref> has many problems: no cycle, non-rectangular and non-parallel areas, and many lanes serving only one row of stalls.
After each action the agent receives not only a reward but also the penalties we define. Well-designed penalties guide the agent towards the desired behaviour. Based on the design principles, we propose the penalties below (a sketch of how they can be combined into the step reward is given after the list):
* Turn-back penalty when the agent takes the opposite action from the last action.
* Interval penalty based on the interval of the same actions.
* Wheeling penalty at an improper position with a certain direction.
* Step penalty for each timestamp to prevent agents from cycling consistently.
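As referenced above, the sketch below shows one way these penalties could be folded into the per-step reward. The weights and the interpretation of the interval penalty are illustrative assumptions, not the values used in the experiments.

```python
OPPOSITE = {"UP": "DOWN", "DOWN": "UP", "LEFT": "RIGHT", "RIGHT": "LEFT"}

def step_penalty(prev_action, action, same_action_run, improper_wheeling):
    """Sum of the four penalties; weights are illustrative, not the experimental values."""
    penalty = -0.1                                   # step penalty for every timestamp
    if prev_action is not None and OPPOSITE.get(action) == prev_action:
        penalty -= 1.0                               # turn-back penalty
    if 0 < same_action_run < 3:
        penalty -= 0.2                               # interval penalty (short runs assumed undesirable)
    if improper_wheeling:
        penalty -= 0.5                               # wheeling penalty at an improper position
    return penalty
```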
§.§ Convert matrix to simulated underground garage
After generating structure matrix 𝒮(i,j), we need to convert this matrix to a simulated underground garage. Here we first atomize the elements of the matrix, we define the below equation:
n = ∑_i=1^4𝒮(θ_η)
and for any square η, if:
𝒮(η) = 1
we define η as:
η =
Crossroads, n = 4
T-Junction, n = 3
Straight road, n ≤ 2
and if:
𝒮(η) = 0
we define η as different types in Figure 2:
η =
Type 1, n ≥ 3 or across n = 2
Type 2, adjacent n = 2
Type 3, n = 1
Type 4, n = 0
Then, we only need to model each type of square η in the simulator and use scripts to construct the simulated underground garage.
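A sketch of the neighbour-sum rule used in this classification is given below for lane squares; the analogous Type 1-4 classification of parking squares and the Unity3D prefab placement script are omitted, and the handling of out-of-range neighbours is an assumption.

```python
LANE = 1

def neighbour_sum(S, i, j):
    """n = sum of S over the four orthogonal neighbours (out-of-range cells ignored)."""
    total = 0
    for r, c in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]:
        if 0 <= r < len(S) and 0 <= c < len(S[0]):
            total += S[r][c]
    return total

def classify_lane_square(S, i, j):
    assert S[i][j] == LANE
    n = neighbour_sum(S, i, j)
    if n == 4:
        return "crossroads"
    if n == 3:
        return "t_junction"
    return "straight_road"  # n <= 2
```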
§.§ Construction of underground garage structure
We know that autonomous vehicles typically use multiple types of sensors to collect and process environmental information to support the vehicle's decision-making and control systems <cit.>.
The parking garage structure we generate is intended to provide training scenarios for autonomous vehicles, and the information collected during autonomous vehicle training comes from the simulated scenes, such as the lighting of light sources, the materials of various object surfaces, and information on the different light reflections of objects in the scene, and so on <cit.>. If we can better simulate the various objects in these scenes, the amount of information contained in the overall static parking garage scene will be greater, and it will better provide training data for autonomous vehicles, achieving better training effects.
The construction details of a static underground parking garage mainly include object surface texture mapping, such as:
* Lane marking texture mapping
* Wall texture mapping
* Floor texture mapping
* Lighting texture mapping
As well as collision bodies in the underground parking garage, such as:
* Column mesh collision body
* Speed bump collision body
* Parking barrier
And here we give the detailed procedure of underground garage generation in Unity3D:
* The structure matrix 𝒮_(i,j) previously generated by using reinforcement learning is used as the generated underground structure, and the R_(i,j) and C_(i,j), which define the length and width of each plot of land in reality, are passed as input into Unity3D engine.
* In the Unity engine, each different state of the land is first modeled, and then the entire underground plane is automatically generated based on the arrangement of elements in the specific structure matrix.
* After generating the plane, three-dimensional information such as walls, pillars, ceilings, obstacles, etc. are further generated based on the outline of the underground structure.
* According to the generated structure, more detailed descriptions are made, such as light tubes, ventilation ducts, and other underground details.
* According to the demand, some objects that may appear underground, such as parked vehicles and no parking signs, are randomly generated.
§ EXPERIMENTAL SETUP
§.§ Evaluation
After generating the underground garage structure, we need to evaluate it, but there is no unified, credible standard for the evaluation function. We therefore propose the following three dimensions to describe the value of an underground garage structure, combining the evaluation systems of several papers:
* the number of the parking spot
* the average parking time
* the number of unused squares
The evaluation function therefore takes the form:
y^' = k_1 * N_S + k_2 * T_S + k_3 * U_S
To obtain the proportion of weights accounted for by each of these three criteria, here we assume that there exists a corresponding evaluation function for a certain underground garage structure, and the value distribution of all solutions for that structure is roughly Gaussian distributed.
Based on this, if we have enough sampling points and can judge the relative value of the structures at those points, we can map the sampling points onto the Gaussian distribution curve one by one and then adjust the weights of our evaluation function so that the estimated value ordering of the sampling points matches the original ordering. This yields an evaluation function with a certain degree of confidence, and as more and more points are sampled, the final evaluation function becomes more credible.
Here, we sampled a series of representative experimental results and derived the following values for the three coefficients:
y^' = N_S + (-5) * T_S + (-1) * U_S
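With these fitted coefficients, the evaluation reduces to a one-line scoring function, sketched below; how N_S, T_S, and U_S are measured from a generated structure is left abstract here.

```python
def evaluate(num_spots, avg_parking_time, unused_squares, k1=1.0, k2=-5.0, k3=-1.0):
    """y' = k1 * N_S + k2 * T_S + k3 * U_S with the fitted coefficients above."""
    return k1 * num_spots + k2 * avg_parking_time + k3 * unused_squares
```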
We conducted a 5000-episode test of the Sarsa algorithm with one garage contour. For each episode, we save the matrix and its evaluation to a dictionary. In the end, we select the top 200 matrices with the highest evaluation values.
§.§ Simulation of Underground Garage
The main hardware devices used in the simulation to generate the underground garage scenario are: CPU: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz, GPU: NVIDIA GeForce GTX 1650 and the software are: Unity3D 2021.3.5f1c1, Visual Studio 2022
§ RESULTS
§.§ Sarsa Result
<ref> indicates that the agent easily reaches a local limit at around episode 400. The return then drops sharply to a small value. The curve maintains a pattern of first converging to a limit and then decreasing sharply. The agent keeps searching for a solution as long as the test does not stop.
However, we observed that as the number of episodes increases, there are instances where the agent obtains lower payoffs. This can be attributed to the ϵ-greedy strategy, which sometimes leads the agent directly to the termination state. To increase the convergence rate, we make ϵ decrease slowly. We also reset the value of ϵ if the matrix remains unchanged for 100 consecutive episodes.
<ref> shows the matrix with the highest evaluation value during the test. It is slightly inferior to the manually constructed structures in <ref> and <ref>.
§.§ Simulated Underground Garage
<ref> and <ref> show the underground garage models simulated by taking the structure matrices generated by the above reinforcement learning algorithm after 3000 iterations as input.
§ DISCUSSION
For the evaluation function, there is no unified, credible standard, and the coefficients given in this paper are only a fit to the real value curve. At the same time, since garage structures with different contours affect the three evaluation indexes we selected, the coefficient values for different contours may also be inconsistent, which may require more sampling and training through neural networks to derive the coefficients for each underground garage contour <cit.>.
However, we were able to correctly evaluate the generated underground garage parking structures using the evaluation function obtained from sampling on the 7×9 square contour. Fig. 5 and Fig. 6 are the manually designed structures considered to be of higher value, while Fig. 1 to Fig. 4 are the four most valuable structures selected by the evaluation function from the 5000 episodes generated by the algorithm. The selected structures, although not perfect, meet several of the most basic requirements of underground garage parking design and are indeed somewhat more valuable than the manually designed structures.
§ CONCLUSIONS
Sarsa, an on-policy TD algorithm, performs well in this paper. It can eventually generate reliable graphs. However, the state set is so large that it cannot converge to a single solution that achieves the highest return.
This study demonstrates the feasibility of using reinforcement learning to programmatically generate underground garage grid maps. We have yet to reach a target that can generate a reliable underground garage based on some contour. PCG of underground garage design has a long way to go.
In terms of simulation, we are currently able to construct the corresponding 3D underground parking garage, and the generated garage has certain details: real-time lighting, ventilation ducts, column network structure, etc. Details such as the various pipe layouts are not yet realistic, and the scene elements can be further rendered to achieve a more realistic effect. This will allow us to further enhance the accuracy and reliability of the generated underground garage maps. These findings provide valuable insights for the development of intelligent underground garage planning and design tools.
In the future, we will extend this work with other AI technologies, such as classification <cit.>, knowledge graphs <cit.>, deep learning <cit.>.
|
http://arxiv.org/abs/2307.07307v1 | 20230714123246 | Verification of Quantum Systems using Barrier Certificates | [
"Marco Lewis",
"Paolo Zuliani",
"Sadegh Soudjani"
] | quant-ph | [
"quant-ph",
"cs.SY",
"eess.SY"
] |
M. Lewis, P. Zuliani, S. Soudjani
Newcastle University, Newcastle upon Tyne, UK
Università di Roma “La Sapienza”, Rome, Italy
Max Planck Institute for Software Systems, Germany
Verification of Quantum Systems using Barrier Certificates
Marco Lewis1Corresponding email: [email protected]
0000-0002-4893-7658
Paolo Zuliani1,2Currently at Università di Roma; work predominately done at Newcastle University.
0000-0001-6033-5919
Sadegh Soudjani1,30000-0003-1922-6678
August 12, 2023
================================================================================================================================================================================================================================================
Various techniques have been used in recent years for verifying quantum computers, that is, for determining whether a quantum computer/system satisfies a given formal specification of correctness.
Barrier certificates are a recent novel concept developed for verifying properties of dynamical systems.
In this article, we investigate the usage of barrier certificates as a means for verifying behaviours of quantum systems.
To do this, we extend the notion of barrier certificates from real to complex variables.
We then develop a computational technique based on linear programming to automatically generate polynomial barrier certificates with complex variables taking real values.
Finally, we apply our technique to several simple quantum systems to demonstrate their usage.
§ INTRODUCTION
Quantum computers are powerful devices that allow certain problems to be solved faster than classical computers.
The research area focusing on the formal verification of quantum devices and software has witnessed the extension of verification techniques from classical systems <cit.> to the quantum realm.
Classical techniques that have been used include
theorem provers <cit.>,
Binary Decision Diagrams <cit.>,
SMT solvers <cit.> and
other tools <cit.>.
Quantum systems evolve according to the Schrödinger equation from some initial state.
However, the initial state may not be known completely in advance.
One can prepare a quantum system by making observations on the quantum objects, leaving the quantum system in a basis state, but this omits the global phase which is not necessarily known after measurement.
Further, the system could be disturbed through some external influence before it begins evolving.
This can slightly change the quantum state from the basis state to a state in superposition or possibly an entangled state.
By taking into account these uncertain factors, a set of possible initial states from which the system evolves can be constructed.
From this initial set, we can see if the system evolves according to some specified behaviour such as reaching or avoiding a particular set of states.
As an example, consider a two-qubit system that evolves according to a Hamiltonian Ĥ implementing the controlled-NOT operation.
Through measurement and factoring in for noise, we know the system starts close to |10⟩.
The controlled-NOT operation keeps the first qubit value the same and so we want to verify that, as the system evolves via Ĥ, the quantum state does not evolve close to |00⟩ or |01⟩.
The main purpose of this work is to study the application of a technique called barrier certificates, used for verifying properties of classical dynamical systems, to check properties of quantum systems similar to the one mentioned above.
The concept of barrier certificates has been developed and used in Control Theory to study the safety of dynamical systems from a given set of initial states on real domains <cit.>.
This technique can ensure that given a set of initial states from which the system can start and a set of unsafe states, the system will not enter the unsafe set.
This is achieved through separating the unsafe set from the initial set by finding a barrier.
Barrier certificates can be defined for both deterministic and stochastic systems in discrete and continuous time <cit.>.
The concept has also been used for verification and synthesis against complicated logical requirements beyond safety and reachability <cit.>.
The conditions under which a function is a barrier certificate can be automatically and efficiently checked using SMT solvers <cit.>.
Such functions can also be found automatically using learning techniques even for non-trivial dynamical systems <cit.>.
Dynamical systems are naturally defined on real domains (ℝ^n).
To handle dynamical systems in complex domains (ℂ^n), one would need to decompose the system into its real and imaginary parts and use the techniques available for real systems.
This has two disadvantages, the first being that this doubles the number of variables being used for the analysis.
The second disadvantage is that the analysis may be easier to perform directly with complex variables than their real components.
As quantum systems use complex values, it is desirable to have a technique to perform the reachability analysis using complex variables.
In this paper, we explore the problem of safety verification in quantum systems by extending barrier certificates from real to complex domains.
Our extension is inspired by a technique developed by Fang and Sun <cit.>, who studied the stability of complex dynamical systems using Lyapunov functions (where the goal is to check if a system eventually stops moving).
Further, we provide an algorithm to generate barrier certificates for quantum systems and use it to generate barriers for several examples.
§ BACKGROUND
§.§ Safety Analysis
We begin by introducing the problem of safety for dynamical systems with real state variables x∈.
More details can be found in <cit.>.
A continuous dynamical system is described by
ẋ = dx/dt = f(x), f: ℝ^n → ℝ^n,
where the evolution of the system is restricted to X ⊆ ℝ^n and f is usually Lipschitz continuous to ensure existence and uniqueness of the differential equation solution.
The set X_0 ⊆ X is the set of initial states and the unsafe set X_u ⊆ X is the set of values that the dynamics x(t) should avoid.
These sets lead to the idea of safety for real continuous dynamical systems:
A system, ẋ = f(x), evolving over X ⊆ ℝ^n is considered safe if the system cannot reach the unsafe set, X_u ⊆ X, from the initial set, X_0 ⊆ X.
That is, for all t ∈ ℝ_+ and x(0) ∈ X_0, then x(t) ∉ X_u.
The safety problem is to determine if a given system is safe or not.
Numerous techniques have been developed to solve this problem <cit.>.
Barrier certificates are discussed in Section <ref>.
Here, we describe two other common techniques.
Abstract Interpretation
One way to perform reachability analysis of a system is to give an abstraction <cit.> of the system's evolution.
Given an initial abstraction that over-approximates the evolution of the system, the abstraction is improved based on false bugs.
False bugs are generated when the current abstraction enters the unsafe space but the actual system does not.
This method has been investigated for quantum programs in <cit.>, where the authors can verify programs using up to 300 qubits.
Backward and Forward Reachability
A second approach is to start from the unsafe region and reverse the evolution of the system from there.
A system is considered unsafe if the reversed evolution enters the initial region.
This is backward reachability.
Conversely, forward reachability starts from the initial region and is considered safe if the reachable region does not enter the unsafe region.
Both backward and forward reachability are discussed in <cit.>.
§.§ Barrier Certificates
Barrier certificates <cit.> are another technique used for safety analysis.
This technique attempts to divide the reachable region from the unsafe region by putting constraints on the initial and unsafe set, and on how the system evolves.
The benefit of barrier certificates over other techniques is that one does not need to compute the system's dynamics at all to guarantee safety, unlike in abstract interpretation and backward (or forward) reachability.
A barrier certificate is a differentiable function, B: ℝ^n → ℝ, that determines safety through the properties that B has.
Generally, a barrier certificate needs to meet the following conditions:
B(x) ≤ 0 , ∀ x ∈ X_0
B(x) > 0 , ∀ x ∈ X_u
x(0) ∈ X_0 ⟹ B(x(t)) ≤ 0 , ∀ t ∈ ℝ_+.
Essentially, these conditions split the evolution space into an (over-approximate) reachable region and an unsafe region, encapsulated by Conditions (<ref>) and (<ref>) respectively.
These regions are separated by a “barrier”, which is the contour along B(x) = 0.
Condition (<ref>) prevents the system evolving into the unreachable region and needs to be satisfied for the system to be safe.
However, Condition (<ref>) can be replaced with stronger conditions that are easier to check.
For example, the definition of one simple type of barrier certificate is given.
For a system ẋ = f(x), X ⊆ ℝ^n, X_0 ⊆ X and X_u ⊆ X, a function B: ℝ^n → ℝ that obeys the following conditions:
B(x) ≤ 0 , ∀ x ∈ X_0
B(x) > 0 , ∀ x ∈ X_u
∂B/∂x f(x) ≤ 0 , ∀ x ∈ X,
is a convex barrier certificate.
Note that in Condition (<ref>): (∂B/∂x)(dx/dt) = dB/dt.
This condition can be viewed as a constraint on the evolution of the barrier as the system evolves over time.
Now, if a system has a barrier certificate, then the system is safe.
We show the safety theorem for convex barrier certificates.
If a system, ẋ = f(x), has a convex barrier certificate, B: ℝ^n → ℝ, then the system is safe <cit.>.
Proofs of Theorem <ref> are standard and can be found in, <cit.>.
The intuition behind the proof is that since the system starts in the negative region and the barrier can never increase, then the barrier can never enter the positive region.
Since the unsafe set is within the positive region of the barrier, this set can therefore never be reached.
Thus, the system cannot evolve into the unsafe set and so the system is safe.
Figure <ref> shows an example of a dynamical system with a barrier based on the convex condition.
The term “convex” is used for these barriers as the set of barrier certificates satisfying the conditions in Definition <ref> is convex.
In other words, if B_1 and B_2 are barrier certificates for a system, the function λ B_1 + (1-λ) B_2 is also a barrier certificate for any λ∈[0,1].
See <cit.> or the proof of Proposition <ref> in Appendix <ref> for (similar) details.
There is a variety of different barrier certificates to choose from, each with different benefits; the convex condition given above is simple but may not work for complicated or nonlinear systems.
In comparison, the non-convex condition given in <cit.> changes Condition (<ref>) such that (∂B/∂x) f(x) ≤ 0 for all x ∈ X with B(x) = 0 (instead of for all x ∈ X).
This is a weaker condition allowing for more functions to be a suitable barrier certificate.
However, a different computational method is required because the set of such barrier certificates is non-convex.
Each barrier certificate requires a different proof that if the system has a satisfying barrier certificate, then the system is safe.
It should be noted that Theorem <ref> only gives a one-way implication: a system does not necessarily have a barrier certificate even if it is safe.
In <cit.>, the authors showed the converse holds for systems defined on a compact manifold and using convex barrier certificates.
§ COMPLEX-VALUED BARRIER CERTIFICATES
Now we wish to extend the use of barrier certificates into a complex space (ℂ^n).
We use i = √(-1) as the imaginary unit in the rest of the paper.
The complex dynamical systems considered are of the form
ż = dz/dt = f(z), f: ℂ^n → ℂ^n,
which evolves in Z ⊆ ℂ^n.
The initial and unsafe sets are defined in the usual way except now we have Z_0 ⊆ Z and Z_u ⊆ Z, respectively. The notion of safety for this system is similar to Definition <ref>.
A complex system, ż = f(z), with Z ⊆ ℂ^n, Z_0 ⊆ Z and Z_u ⊆ Z, is considered safe if for any z(0) ∈ Z_0, then ∀ t ∈ ℝ^+, z(t) ∉ Z_u.
Whilst it is easy to extend the safety problem and required definitions into the complex plane, extending the notion of barrier certificates requires particular attention.
Conditions (<ref>), (<ref>) and (<ref>) are changed respectively to
B(z) ≤ 0, ∀ z ∈ Z_0;
B(z) > 0, ∀ z ∈ Z_u;
z(0) ∈ Z_0 ⟹ B(z(t)) ≤ 0, ∀ t ∈ ℝ_+.
Many barrier certificates use differential equations to achieve Condition (<ref>), which restricts the class of functions that can be used.
This is because differentiable complex functions must satisfy the Cauchy-Riemann equations.
For our purposes, we consider a holomorphic function, g(z): ℂ^n → ℂ, to be a function whose partial derivatives, ∂g(z)/∂z_j, exist and satisfy the Cauchy-Riemann equations (for several variables).
That is, for z_j = x_j + i y_j and g(z) = g(x, y) = u(x,y) + i v(x,y), then
∂u/∂x_j = ∂v/∂y_j and ∂u/∂y_j = -∂v/∂x_j.
Using an adapted technique developed by Fang and Sun <cit.> allows us to reason about barrier certificates in the complex plane.
We begin by introducing a family of complex functions that are key to our technique.
A function, b: ℂ^n × ℂ^n → ℂ, is real if b(z, z̄) ∈ ℝ for all z ∈ ℂ^n.
A function, B: ℂ^n → ℝ, is a complex-valued barrier function if B(z) = b(z, z̄) where b: ℂ^n × ℂ^n → ℂ is a real, holomorphic function.
Suppose now that we have a system that evolves over time, z(t). To use the complex-valued barrier function, B(z(t)), for barrier certificates we require the differential of B with respect to t.
Calculating this differential (writing u for the second argument of b, evaluated at u = z̄) reveals that
dB(z(t))/dt = db(z(t), z̄(t))/dt = (∂b(z, u)/∂z)(dz/dt) + (∂b(z, u)/∂u)(dz̄/dt) = (∂b(z, u)/∂z) f(z) + (∂b(z, u)/∂u) conj(f(z)),
where
∂b(z,u)/∂z = [ ∂b(z,u)/∂z_1, ∂b(z,u)/∂z_2, …, ∂b(z,u)/∂z_n ]
is the gradient of b(z,u) with respect to z and the gradient with respect to u is defined in a similar way.
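As a quick, self-contained sanity check of this chain rule (our own illustration, not part of the original development), the following Python/sympy snippet evaluates the expression for a concrete real, holomorphic polynomial b, namely the one behind the barrier that will reappear in the Hadamard example below, with the linear vector field f(z) = -i H z; all symbol names are ours.

```python
import sympy as sp

# b(z, u) with u playing the role of z̄; this is the real, holomorphic
# polynomial behind the Hadamard barrier discussed later in the paper.
z0, z1, u0, u1 = sp.symbols('z0 z1 u0 u1')
b = sp.Rational(11, 5) - 3*z0*u0 - z0*u1 - u0*z1 - z1*u1

# dz/dt = -i*H*z for H = [[1, 1], [1, -1]], and its conjugate written in the
# variables u_j = conj(z_j).
f = [-sp.I*(z0 + z1), -sp.I*(z0 - z1)]
f_conj = [sp.I*(u0 + u1), sp.I*(u0 - u1)]

dB_dt = (sp.diff(b, z0)*f[0] + sp.diff(b, z1)*f[1]
         + sp.diff(b, u0)*f_conj[0] + sp.diff(b, u1)*f_conj[1])
print(sp.expand(dB_dt))   # 0: this barrier is constant along trajectories
```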
Given Equation (<ref>), barrier certificates that include a differential condition can be extended into the complex domain quite naturally.
For example, the convex barrier certificate is extended to the complex domain.
For a system ż = f(z), Z ⊆ ℂ^n, Z_0 ⊆ Z and Z_u ⊆ Z; a complex-valued barrier function B: ℂ^n → ℝ, B(z) = b(z, z̄), that obeys the following conditions,
b(z, z̄) ≤ 0, ∀ z ∈ Z_0;
b(z, z̄) > 0, ∀ z ∈ Z_u;
(∂b(z, u)/∂z) f(z) + (∂b(z, u)/∂u) conj(f(z)) ≤ 0, ∀ z ∈ Z,
is a complex-valued convex barrier certificate.
With this definition, we can ensure the safety of complex dynamical systems:
If a complex system, ż = f(z), has a complex-valued convex barrier certificate, B: ℂ^n → ℝ, then the system is safe.
The set of complex-valued barrier certificates satisfying the conditions of Definition <ref> is convex.
The proofs of these results are given in Appendix <ref> and <ref> respectively.
§ GENERATING SATISFIABLE BARRIER CERTIFICATES FOR QUANTUM SYSTEMS
We now describe how to compute a complex-valued barrier function.
Throughout, let ż = f(z), Z ⊆ ℂ^n, Z_0 ⊆ Z and Z_u ⊆ Z be defined as before.
We introduce a general family of functions that will be used as “templates” for complex barrier certificates.
A k-degree polynomial function is a complex function, b: ℂ^n → ℂ, such that
b(z_1, …, z_n) = ∑_α ∈ A_n,k a_α z^α
where A_n,k := { α = (α_1, …, α_n) ∈ ℕ^n : ∑_j=1^n α_j ≤ k },
a_α ∈ ℂ,
and z^α = ∏_j=1^n z_j^α_j.
The family of k-degree polynomials are polynomial functions where no individual term of the polynomial can have a degree higher than k.
Note that k-degree polynomial functions are holomorphic.
Further, some k-degree polynomials are real.
For example, the 2-degree polynomial
b(z_1, u_1) = z_1 u_1
is real since z_1 z̄_1 = |z_1|^2 ∈ ℝ, whereas the 1-degree polynomial
b(z_1, u_1) = z_1
is not.
Thus, a subset of this family of functions is suitable to be used for barrier certificates as complex-valued barrier functions.
The partial derivative of the polynomials in Equation (<ref>) is required for ensuring the function meets Condition (<ref>). The partial derivative of the function is
∂b/∂z_j = ∑_α ∈ A_n,k a_α α_j z_j^(α_j - 1) ∏_l ≠ j z_l^α_l.
We write
B(a, z) := b(a, z, z̄) := ∑_(α, β) ∈ A_2n,k a_α,β z^α z̄^β,
where the sum runs over (α, β) ∈ A_2n,k with α = (α_1, …, α_n) and β = (α_n+1, …, α_2n),
a = (a_α,β) ∈ ℝ^A_2n,k is a vector of real coefficients to be found and z̄^β = ∏_j=1^n z̄_j^α_n+j.
The following (polynomial) inequalities find the coefficient vector:
find a^T
subject to B(a,z) ≤ 0, ∀ z ∈ Z_0
B(a,z) > 0, ∀ z ∈ Z_u
dB(a,z)/dt ≤ 0, ∀ z ∈ Z
B(a,z) ∈ ℝ
-1 ≤ a_α,β ≤ 1.
The coefficients, a_α,β ∈ ℝ, are restricted to the range
[-1, 1] since any barrier certificate B(a, z) can be normalised by dividing B by the coefficient of greatest weight, m = max |a_α,β|.
The resulting function (1/m) B(a,z) is still a barrier certificate.
A barrier certificate generated from these polynomial inequalities can then freely be scaled up by multiplying it by a constant.
§.§ An Algorithmic Solution
One approach of solving the inequalities in (<ref>) is to convert the system to real numbers and solve using sum of squares (SOS) optimisation <cit.>;
another method is to use SMT solvers to find a satisfiable set of coefficients;
or it is possible to use neural network based approaches to find possible barriers <cit.>.
We consider, as a special case, an approach where dB(a,z)/dt = 0 rather than dB(a,z)/dt ≤ 0, which allows the problem to be turned into a linear program.
This restriction allows us to consider a subset of barrier certificates that still ensures the safety of the system.
This is motivated by the fact that simple quantum systems of interest exhibit periodic behaviour; that is for all t ∈ℝ^+, z(t) = z(t + T) for some T.
The barrier must also exhibit periodic behaviour,[The barrier being periodic can be seen by interpreting the barrier as a function over time: B(t) = B(z(t)) = B(z(t+T)) = B(t+T), ∀ t ∈ℝ^+]
and this can be achieved by setting dB(a,z)/dt = 0.
Whilst there are other properties that ensure a function is periodic, these would involve non-polynomial terms such as trigonometric functions.
Further, linear programs tend to be solved faster than SOS methods.
This is because SOS programs are solved through semidefinite programming techniques, which are extensions of linear programs and therefore harder to solve.
We begin by transforming the differential constraint, dB(a,z)/dt = 0.
To obey the third condition for the complex-valued convex barrier certificate, we can substitute terms in Equation (<ref>) with the partial derivatives from Equation (<ref>).
Essentially one will end up with an equation of the form
(𝐀 a)^⊤ ζ = 0,
where ζ is a vector of all possible polynomial terms in z_j, z̄_j of degree at most k,[for k=2 acceptable terms include z_j^a, z_j z_l, z_j z̄_l, z̄_j^a, z̄_j z̄_l for 0 ≤ a ≤ 2.]
and 𝐀 is a matrix of constant values.
Therefore, each row of the resultant vector, (𝐀a)_j = 0, is added as a constraint to a linear program.
To transform the real constraint (B(a,z) ∈ ℝ) note that if x ∈ ℂ, then x ∈ ℝ if and only if x = x̄.
Therefore, B(a, z) - conj(B(a, z)) = 0 and we have
B(a, z) - conj(B(a, z)) = ∑_(α, β) ∈ A_2n,k a_α,β z^α z̄^β - ∑_(α', β') ∈ A_2n,k a_α',β' z̄^α' z^β'
= ∑_(α, β) ∈ A_2n,k (a_α,β - a_β,α) z^α z̄^β.
The whole polynomial is equal to 0 if all coefficients are 0.
Thus, taking the coefficients and noting that the a_α,β are real gives the transformed constraints a_α,β = a_β,α for α = (α_j)_j=1^n, β = (α_j)_j=n+1^2n, (α, β) ∈ A_2n,k.
These constraints to the coefficients are then also added to the linear program.
The final constraints we need to transform are the constraints on the initial and unsafe set: B(a,z) ≤ 0 for z ∈ Z_0 and B(a,z) > 0 for z ∈ Z_u, respectively.
We begin by noting that B(a,z) = c + b(a, z, z̄) where b(a, z, z̄) is a k-degree polynomial (with coefficients a) and c ∈ ℝ is a constant.
When considering the differential and real constraint steps, c is not involved in these equations since c does not appear in the differential term and c is cancelled out in the real constraint (c - c̄ = c - c = 0).
Considering the initial and unsafe constraints, we require that
∀ z ∈ Z_0, c + b(a, z, z̄) ≤ 0, and
∀ z ∈ Z_u, c + b(a, z, z̄) > 0.
Therefore, c is bounded by
max_z ∈ Z_u -b(a, z, z̄) < c ≤ min_z ∈ Z_0 -b(a, z, z̄).
Finding c = min_z ∈ Z_0 -b(a, z, z̄) and then checking max_z ∈ Z_u -b(a, z, z̄) < c will ensure the initial and unsafe constraints are met for the barrier. The final computation is given in Algorithm <ref>.
Note that the algorithm can fail since the function b may divide the state space in such a way that a section of Z_0 may lie on the same contour as a section of Z_u.
This means that either the function b is unsuitable or the system is inherently unsafe.
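A minimal, sampling-based sketch of this coefficient search is given below for the Hadamard system treated in the next section. It is our own simplification, not the algorithm itself: constraints are only imposed at finitely many sampled states (with a small slack tol for the differential constraint and a margin eps for the strict inequality), so any candidate returned this way would still need an exact a-posteriori check. All names are ours.

```python
import numpy as np
from scipy.optimize import linprog

H = np.array([[1.0, 1.0], [1.0, -1.0]])       # Hamiltonian of the Hadamard example

def features(z):
    """Real-valued degree-2 monomials used as the barrier template b(a, z, conj(z))."""
    return np.array([abs(z[0])**2, abs(z[1])**2, 2*np.real(z[0]*np.conj(z[1]))])

def dfeatures_dt(z):
    """Time derivative of the template features along dz/dt = -i H z."""
    dz = -1j * H @ z
    return np.array([2*np.real(np.conj(z[0])*dz[0]),
                     2*np.real(np.conj(z[1])*dz[1]),
                     2*np.real(dz[0]*np.conj(z[1]) + z[0]*np.conj(dz[1]))])

def sample_sphere(rng, n, cond):
    out = []
    while len(out) < n:
        z = rng.normal(size=2) + 1j*rng.normal(size=2)
        z /= np.linalg.norm(z)
        if cond(z):
            out.append(z)
    return out

rng = np.random.default_rng(0)
Z0 = sample_sphere(rng, 200, lambda z: abs(z[0])**2 >= 0.9)   # initial region
Zu = sample_sphere(rng, 200, lambda z: abs(z[0])**2 <= 0.1)   # unsafe region
Zs = sample_sphere(rng, 400, lambda z: True)                  # whole state space

eps, tol = 1e-3, 1e-9
A_ub, b_ub = [], []
# Decision vector x = (a_0, a_1, a_2, c): template coefficients plus the constant c.
for z in Z0:                                   # c + a.features(z) <= 0 on Z_0
    A_ub.append(np.append(features(z), 1.0)); b_ub.append(0.0)
for z in Zu:                                   # c + a.features(z) >= eps on Z_u
    A_ub.append(-np.append(features(z), 1.0)); b_ub.append(-eps)
for z in Zs:                                   # |dB/dt| <= tol at sampled states
    row = np.append(dfeatures_dt(z), 0.0)
    A_ub.append(row);  b_ub.append(tol)
    A_ub.append(-row); b_ub.append(tol)

res = linprog(c=np.zeros(4), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-1, 1)]*3 + [(None, None)], method="highs")
print("feasible:", res.success, "coefficients (a_0, a_1, a_2, c):", res.x)
```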
§ APPLICATION TO QUANTUM SYSTEMS
We consider quantum systems that evolve within Hilbert spaces ℋ^n = ℂ^2^n for n ∈ℕ.
We use the computational basis states |j⟩∈ℋ^n, for 0 ≤ j < 2^n, as an orthonormal basis within the space, where (|j⟩)_l = δ_jl.[δ_jl is the Kronecker delta, which is 1 if j=l and 0 otherwise.]
General quantum states, |ϕ⟩∈ℋ^n, can then be written in the form
|ϕ⟩ = ∑_j=0^2^n -1 z_j |j⟩,
where z_j ∈ ℂ and ∑_j=0^2^n - 1 |z_j|^2 = 1.[For readers familiar with the Dirac notation, z_j = ⟨j|ϕ⟩ and z̄_j = ⟨ϕ|j⟩.]
Quantum states thus reside on the unit sphere of ℂ^2^n.
For simplicity, we consider quantum systems that evolve according to the equation
d|ϕ⟩/dt = -i Ĥ |ϕ⟩,
where Ĥ is a Hamiltonian, a complex matrix such that Ĥ = Ĥ^† = conj(Ĥ)^⊤; and |ϕ⟩ is a quantum state.[We set the Planck constant ħ = 1 in the equation.]
In the rest of this section, we make use of Algorithm <ref> in order to find suitable barrier certificates for operations that are commonly used in quantum computers.
§.§ Hadamard Operation Example
The evolution of the Hadamard operation, H = 1/√(2)[ 1 1; 1 -1 ], is given by Ĥ_H = [ 1 1; 1 -1 ] and |ϕ⟩ is one qubit, z_0 |0⟩ + z_1 |1⟩.
We have z(t) = [ z_0(t); z_1(t) ] and
ż = -i Ĥ_H z = -i [ z_0 + z_1; z_0 - z_1 ].
The system evolves over the surface of the unit sphere, Z = {(z_0, z_1) ∈ ℂ^2 : |z_0|^2 + |z_1|^2 = 1}.
The initial set is defined as Z_0 = {(z_0, z_1) ∈ Z : |z_0|^2 ≥ 0.9 } and the unsafe set as Z_u = {(z_0, z_1) ∈ Z : |z_0|^2 ≤ 0.1 }.
Note that the definitions of Z_0 and Z_u are restricted by Z, therefore |z_1|^2 ≤ 0.1 and |z_1|^2 ≥ 0.9 for Z_0 and Z_u respectively.
A barrier function computed by our Algorithm <ref> is
B(z) = 11/5 - 3 z_0 z̄_0 - z_0 z̄_1 - z̄_0 z_1 - z_1 z̄_1.
By rearranging and using properties of the complex conjugate, we find that
B(z) = 2 (1/10 - |z_0|^2 + 1/2 - Re(z_0 z̄_1)).
The derivation is given in Appendix <ref>.
The first term of the barrier (1/10 - |z_0|^2) acts as a restriction on how close |ϕ⟩ stays to |0⟩ as it evolves, whereas the second term (1/2 - Re(z_0 z̄_1)) is a restriction on the phase of the quantum state.
Next, we double check that B is indeed a barrier certificate.
The system evolving according to Equation (<ref>), initial set Z_0 and unsafe set Z_u is safe.
The proposition is proved in Appendix <ref>.
A visualisation on a Bloch sphere representation of the example system and its associate barrier are given in Figure <ref>.
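As a purely numerical illustration (ours, not a substitute for the proof in the appendix), one can simulate trajectories starting in the initial region and observe that B never becomes positive and that the unsafe region is never entered:

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 1.0], [1.0, -1.0]])

def barrier(z):
    z0, z1 = z
    return 2*(0.1 - abs(z0)**2 + 0.5 - np.real(z0*np.conj(z1)))

rng = np.random.default_rng(1)
times = np.linspace(0.0, 10.0, 400)
for _ in range(50):
    p = rng.uniform(0.9, 1.0)                        # |z_0|^2 >= 0.9, i.e. z(0) in Z_0
    phases = np.exp(1j*rng.uniform(0, 2*np.pi, 2))
    z_init = np.array([np.sqrt(p), np.sqrt(1 - p)]) * phases
    assert barrier(z_init) <= 1e-9
    for t in times:
        zt = expm(-1j*H*t) @ z_init                  # z(t) = e^{-iHt} z(0)
        assert barrier(zt) <= 1e-7                   # B stays non-positive
        assert abs(zt[0])**2 > 0.1                   # the unsafe set is never reached
print("barrier conditions hold on all sampled trajectories")
```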
§.§ Phase Operation Example
The evolution of the phase operation
S = [ 1 0; 0 i ]
is given by the Hamiltonian
Ĥ_S = [ 1 0; 0 -1 ] for a single qubit z_0 |0⟩ + z_1 |1⟩.
Thus, the evolution of the system for z(t) = [ z_0(t); z_1(t) ] is
ż = -i [ z_0; -z_1 ].
Again, Z represents the unit sphere as described previously.
Two pairs of initial and unsafe regions are given. The first pair 𝒵_1 = (Z_0^1, Z_u^1) is given by
Z_0^1 = { (z_0, z_1) ∈ Z : |z_0|^2 ≥ 0.9 },
Z_u^1 = { (z_0, z_1) ∈ Z : |z_1|^2 > 0.11 };
and the second pair 𝒵_2 = (Z_0^2, Z_u^2) is given by
Z_0^2 = { (z_0, z_1) ∈ Z : |z_1|^2 ≥ 0.9 },
Z_u^2 = { (z_0, z_1) ∈ Z : |z_0|^2 > 0.11 }.
The pair 𝒵_1 starts with a system that is close to the |0⟩ state and ensures that the system cannot evolve towards the |1⟩ state.
The pair 𝒵_2 has similar behaviour with respective states |1⟩ and |0⟩.
The system for each pair of constraints is considered safe by the following barriers computed by
Algorithm <ref>:
B_1(z) = 0.9 - z_0 z̄_0,
B_2(z) = 0.9 - z_1 z̄_1,
where B_1 is the barrier for the pair 𝒵_1 and B_2 is the barrier for 𝒵_2.[These barriers can similarly be written using the Dirac notation.]
The system with different pairs of regions can be seen on Bloch spheres in Figure <ref>.
Again, both functions B_1 and B_2 are valid barrier certificates.
The system given by Equation <ref> with the set of initial states Z_0^1 and the unsafe set Z_u^1 is safe.
The system given by Equation <ref> with the set of initial states Z_0^2 and the unsafe set Z_u^2 is safe.
The proofs are omitted as they are similar to the proof given in Proposition <ref>.
These barriers give bounds on how the system evolves: the evolution may only change the phase of the quantum state, not its amplitudes.
This can be applied in general by combining barriers to show how a (disturbed) system is restricted in its evolution.
§.§ Controlled-NOT Operation Example
The final example we consider is the controlled-NOT (CNOT) operation acting on two qubits; a control qubit, |ϕ_c⟩, and a target qubit, |ϕ_t⟩, with the full quantum state being |ϕ_c ϕ_t⟩.
The CNOT operation performs the NOT operation on a target qubit (|0⟩→|1⟩ and |1⟩→|0⟩) if the control qubit is set to |1⟩ and does nothing if the control qubit is set to |0⟩.
The CNOT operation and its associated Hamiltonian are given by
CNOT = [ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ] , Ĥ_CNOT = [ 0 0 0 0; 0 0 0 0; 0 0 1 -1; 0 0 -1 1 ].
The system z(t) = (z_j(t))_j=0, …, 3 evolves according to
ż = -i [ 0; 0; z_2 - z_3; -z_2 + z_3 ].
This system evolves over Z = {(z_0, …, z_3) ∈ℂ^4 : ∑_j=0^3 |z_j|^2 = 1}.
Using this as our system, various initial and unsafe regions can be set up to reason about the behaviour of the CNOT operation.
§.§.§ Control in |0⟩
Here we consider the following initial and unsafe regions
Z_0 = { (z_j)_j=0^3 ∈ ℂ^4 : |z_0|^2 ≥ 0.9 },
Z_u = { (z_j)_j=0^3 ∈ ℂ^4 : |z_1|^2 + |z_2|^2 + |z_3|^2 ≥ 0.11 }.
The initial set, Z_0, encapsulates the quantum states that start in the |00⟩ state with high probability and Z_u captures the states that are not in the initial region with probability greater than 0.11.
These regions capture the behaviour that the quantum state should not change much when the control qubit is in the |0⟩ state.
Using Algorithm <ref>, the barrier B(z) = 0.9 - z_0 z̄_0 can be generated to show that the system is safe.
A similar example can be considered where the initial state |00⟩ is replaced with |01⟩ instead (swap z_0 and z_1 in Z_0 and Z_u).
The behaviour that the state of the system should not change much is still desired; the function B(z) = 0.9 - z_1 z̄_1 is computed as a barrier to show this behaviour is met.
§.§.§ Control in |1⟩
Now consider when the initial region has the control qubit near the state |1⟩.
The following regions are considered:
Z_0 = { (z_j)_j=0^3 ∈ ℂ^4 : |z_2|^2 ≥ 0.9 },
Z_u = { (z_j)_j=0^3 ∈ ℂ^4 : |z_0|^2 + |z_1|^2 ≥ 0.11 }.
This system starts close to the |10⟩ state and the evolution should do nothing to the control qubit.
Note that the specified behaviour does not capture the NOT behaviour on the target qubit.
Our Algorithm <ref> considers this system safe by outputting the barrier certificate B(z) = 0.9 - z_2 z̄_2 - z_3 z̄_3.
This is also the barrier if the system were to start in the |11⟩ state instead.
§ CONCLUSIONS
In this paper, we extended the theory of barrier certificates to handle complex variables, showing that complex-valued barrier certificates guarantee the safety of complex dynamical systems.
We then showed how one can automatically generate simple complex-valued barrier certificates using polynomial functions and linear programming techniques.
Finally, we explored the application of the developed techniques by investigating properties of time-independent quantum systems.
There are numerous directions for this research to take.
In particular, one can consider (quantum) systems that are time-dependent, have a control component or are discrete-time, such as quantum circuits.
Data-driven approaches for generating barrier certificates based on measurements of a quantum system can also be considered.
A final challenge to consider is how to verify large quantum systems.
Techniques, such as Trotterization, allow Hamiltonians to be simulated either by simpler Hamiltonians of the same size or of lower dimension.
How barrier certificates can ensure safety of such systems is a route to explore.
§ ACKNOWLEDGEMENTS
M.Lewis is supported by the UK EPSRC (project reference EP/T517914/1). The work of S. Soudjani is supported by the following grants: EPSRC EP/V043676/1, EIC 101070802, and ERC 101089047.
Data availability.
The public repository with an implementation of the algorithm from Section <ref> and case studies from Section <ref> is available on GitHub: <https://github.com/marco-lewis/quantum-barrier-certificates>.
§ PROOF OF THEOREM <REF>
The proof is similar to the intuition given for Theorem <ref>.
Assume by contradiction that the system has a complex-valued convex barrier certificate, but the system is not safe.
Therefore, there is an initial state z(0) ∈Z and time T ∈ℝ^+ such that z(T) ∈Z.
By the definition of our convex barrier certificate, we have that B(z(0)) ≤ 0 and B(z(T)) > 0.
Thus, the barrier must grow positively at some point during the system evolution.
However, we have that dB(z(t))/dt ≤ 0 for all t ∈ ℝ^+ based on Equation (<ref>).
The barrier cannot grow positively and so we have a contradiction.
Therefore, the system must be safe.
§ PROOF OF PROPOSITION <REF>
Let ż = f(z) be a system over Z with Z_0 and Z_u being the initial and unsafe sets as before.
Let ℬ denote the set of (complex-valued convex) barrier certificates such that for any B ∈ℬ the system f(z) is safe.
Take B_1, B_2 ∈ℬ and consider the function B(z) = λ B_1(z) + (1-λ) B_2(z), where λ∈ [0,1].
Since B_1(z) ≤ 0 and B_2(z) ≤ 0 for all z ∈ Z_0, then B(z) ≤ 0 as well.
A similar argument holds for B(z) > 0 for all z ∈ Z_u.
Finally, consider the derivative dB/dt. It is trivial to see that
dB/dt = λ dB_1/dt + (1-λ) dB_2/dt ≤ 0,
because differentiation is linear; and dB_1/dt, dB_2/dt ≤ 0 for all z ∈ Z.
Therefore, B satisfies the properties of a barrier certificate for f(z) and so B ∈ℬ.
Hence, ℬ is convex.
§ DERIVATION OF BARRIER FOR HADAMARD SYSTEM
By substituting z_j z̄_j = |z_j|^2 and noting that z + z̄ = 2 Re(z) for any z ∈ ℂ, we have that
B(z) = 11/5 - 3 |z_0|^2 - 2 Re(z_0 z̄_1) - |z_1|^2.
Since |z_1|^2 = 1 - |z_0|^2 (due to properties of quantum systems), we then have
B(z) = 6/5 - 2 |z_0|^2 - 2 Re(z_0 z̄_1),
and by simply rearranging we get
B(z) = 2 (1/10 - |z_0|^2 + 1/2 - Re(z_0 z̄_1)).
§ PROOF OF PROPOSITION <REF>
We prove this by showing that B meets the conditions of a convex barrier certificate (given in Definition <ref>). Safety is then guaranteed from Theorem <ref>.
Firstly, consider z ∈ Z_0. As |z_0|^2 ≥ 0.9, then B(z) ≤ 2(-4/5 - Re(z_0 z̄_1)).
Further, it can be seen that
2 Re(z_0 z̄_1) = z_0 z̄_1 + z̄_0 z_1 < 1 × √(1/10) + 1 × √(1/10) = √(2/5).
Note that we are taking the maximal possible value of each component and therefore this is larger than the maximal value of 2 Re(z_0 z̄_1).
Thus,
B(z) ≤ 2(-4/5 - Re(z_0 z̄_1)) < 2(-4/5 + √(2/5)) < 0.
A similar argument can be made for when z ∈ Z_u and it can be shown that B(z) > 0.
Finally, we use Equations (<ref>) and (<ref>) to get
dB/dt = -i ( -(2 z̄_0 + z̄_1)(z_0 + z_1) - z̄_0 (z_0 - z_1) + (2 z_0 + z_1)(z̄_0 + z̄_1) + z_0 (z̄_0 - z̄_1) )
= -i ( -2 z̄_0 z_1 - z_0 z̄_1 + z̄_0 z_1 + 2 z_0 z̄_1 + z̄_0 z_1 - z_0 z̄_1 )
= 0, ∀ z ∈ Z.
Therefore, the system meets the conditions given in Equations (<ref>), (<ref>) and (<ref>); the system is safe.
|
http://arxiv.org/abs/2307.04933v1 | 20230710230000 | On a generalization of symmetric edge polytopes to regular matroids | [
"Alessio D'Alì",
"Martina Juhnke-Kubitzke",
"Melissa Koch"
] | math.CO | [
"math.CO",
"math.AC",
"Primary: 52B40, Secondary: 52B20, 05B35, 13P10"
] |
2020 Mathematics Subject Classification. Primary: 52B40; Secondary: 52B20, 05B35, 13P10.
Starting from any finite simple graph, one can build a reflexive polytope known as a symmetric edge polytope. The first goal of this paper is to show that symmetric edge polytopes are intrinsically matroidal objects: more precisely, we prove that two symmetric edge polytopes are unimodularly equivalent precisely when they share the same graphical matroid. The second goal is to show that one can construct a generalized symmetric edge polytope starting from every regular matroid. Just like in the usual case, we are able to find combinatorial ways to describe the facets and an explicit regular unimodular triangulation of any such polytope. Finally, we show that the Ehrhart theory of the polar of a given generalized symmetric edge polytope is tightly linked to the structure of the lattice of flows of the dual regular matroid.
On a generalization of symmetric edge polytopes to regular matroids
Alessio D'Alì, Martina Juhnke-Kubitzke, Melissa Koch
August 12, 2023
=====================================
§ INTRODUCTION
Symmetric edge polytopes are a class of centrally symmetric reflexive lattice polytopes which has seen a lot of interest in the last few years due to their fascinating combinatorial properties <cit.> and their connections to various branches of mathematics and physics <cit.>.
Given a finite simple graph G on vertex set V = [n] := {1, 2, …, n}, the symmetric edge polytope associated with G is the lattice polytope
P_G := conv{±(e_i - e_j) : {i,j} ∈ E(G)} ⊆ ℝ^|V|,
where e_i denotes the i-th standard basis vector.
Equivalently, after assigning an arbitrary orientation to each of the edges of G, one has that
P_G = conv[M_G | -M_G],
where M_G ∈^|V|×|E| is the signed incidence matrix of G with respect to the chosen orientation (i.e., the matrix whose (v,e)-entry is 1 if v is the head of e, -1 if v is the tail of e, and 0 otherwise). The matrix M_G also serves as a representation of the graphic matroid _G associated with G. Several objects associated with _G, for instance the facets or some triangulations, can be described via the combinatorial features of the graph G, and one can rephrase many of these characterizations in terms of the matroid _G only. This is not by accident; in fact, we will prove in <Ref> that two symmetric edge polytopes _G and _H are unimodularly equivalent precisely when the graphical matroids _G and _H are isomorphic. In particular, if G and H are both 3-connected, applying Whitney's 2-isomorphism theorem yields that _G and _H are unimodularly equivalent if and only if G and H are isomorphic. We remark that the characterization in <Ref> corrects an erroneous statement of Matsui, Higashitani, Nagazawa, Ohsugi and Hibi <cit.>: see <Ref> for the details.
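For illustration (function names are ours, not from the paper), the construction just described can be carried out mechanically: build the signed incidence matrix of an arbitrarily oriented simple graph and take the columns of [M_G | -M_G].

```python
import numpy as np

def signed_incidence(n_vertices, oriented_edges):
    """Signed incidence matrix: +1 at the head and -1 at the tail of each edge."""
    M = np.zeros((n_vertices, len(oriented_edges)), dtype=int)
    for col, (tail, head) in enumerate(oriented_edges):
        M[head, col] = 1
        M[tail, col] = -1
    return M

edges = [(0, 1), (0, 2), (1, 2)]          # the triangle C_3, arbitrarily oriented
M_G = signed_incidence(3, edges)
points = np.hstack([M_G, -M_G]).T         # columns of [M_G | -M_G]
print(points)                             # their convex hull is the symmetric edge polytope of C_3
```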
It is tempting to ask what happens if we take the polytope defined by the convex hull of the columns of [M | -M] for a more general matrix M, and whether this object bears any relation to the matroid represented by M.
Most of the properties that make the usual symmetric edge polytopes pleasant are preserved in this wider environment: for instance, generalized symmetric edge polytopes are reflexive (as already observed in <cit.>) and terminal, and it is possible to describe their facets in a purely combinatorial fashion (<Ref>). We remark here that Kálmán and Tóthmérész have been working on a similar statement for extended root polytopes in the recent preprint <cit.>.
The polars of generalized symmetric edge polytopes are special instances of Lipschitz polytopes and enjoy a rich Ehrhart theory. More precisely, in the spirit of work by Beck and Zaslavsky <cit.>, we show that the lattice points in the k-th dilation of the polar ^Δ_ are in bijection with the (k+1)-cuts of or, equivalently, with the (k+1)-flows of the dual matroid ^* (<Ref>).
Finally, the existence of a regular unimodular triangulation for _M had already been proved by Ohsugi and Hibi <cit.>, while an explicit one had been provided in the case of graphs by Higashitani, Jochemko and Michałek <cit.>. We show that, via a careful analysis of signed circuits, it is possible to extend the latter result to generalized symmetric edge polytopes (<Ref>).
The paper is organized as follows. <Ref> contains some preliminaries about matroids, polytopes and toric ideals, while <Ref> is devoted to define generalized symmetric edge polytopes and prove that any two full-rank weakly unimodular representations of the same regular matroid will yield unimodularly equivalent polytopes (<Ref>).
<Ref> studies properties of generalized symmetric edge polytopes, including a partial converse to <Ref>. <Ref> focuses on the polytopes polar to generalized symmetric edge polytopes and their Ehrhart theory; the obtained results are then used to derive a facet description for generalized symmetric edge polytopes, extending the one for the graphical case from <cit.>.
<Ref> is devoted to the explicit description of a regular unimodular triangulation of any generalized symmetric edge polytope.
Finally, we collect some open questions and suggestions for future work in <Ref>.
§ PRELIMINARIES
§.§ Regular matroids, cuts and flows
The aim of this subsection is to briefly introduce regular matroids and their properties. We direct the reader to <cit.> for a more complete treatment and for general matroid terminology.
If is a matroid, we will denote by () and () the sets of its bases and circuits, respectively. If M is a matrix, we will sometimes write “basis/circuit of M” to refer to a basis/circuit of the matroid represented by M. In this case, the ground set of such a matroid will consist of the column indices of M.
Let M ∈^m × n be an integer matrix. We will say that M is:
* totally unimodular if the determinant of every square submatrix of M lies in {0, ± 1};
* weakly unimodular if the determinant of every square submatrix of M of size max{m,n} lies in {0, ± 1}.
A matroid of rank r > 0 is called regular if it satisfies any of the following equivalent properties:
* can be represented via a totally unimodular matrix;
* can be represented via a full-rank weakly unimodular matrix;
* is representable over any field.
The equivalence of (i) and (iii) is a well-known fact, see for instance <cit.>. We will now prove for clarity's sake that (i) and (ii) are equivalent: for a source in the literature, the reader can check <cit.>.
To see that (i) implies (ii) it is enough to show that, if is represented by a totally unimodular matrix, then it is also represented by a totally unimodular (and hence weakly unimodular) matrix of the form [I_r | D], where r is the rank of . For a proof of this claim, see <cit.>.
Let us now prove that (ii) implies (i). By assumption, there exists a full-rank matrix M ∈^r × n that is weakly unimodular and represents . We now proceed as in <cit.>: after choosing a basis for , we can shuffle the columns of M so that the elements of correspond to the first r columns. This amounts to multiplying M on the right by an (n × n)-permutation matrix P, an operation preserving the weakly unimodular property. Now consider the invertible submatrix N of MP obtained by taking the first r columns. Since MP is weakly unimodular and N is invertible, the determinant of the integer matrix N is either 1 or -1; in other words, N ∈GL_r(). By construction, one has that N^-1MP = [I_r | D] represents and is weakly unimodular; however, since it contains the identity matrix I_r, it must actually be totally unimodular (<cit.> or <cit.>), as desired.
We illustrate the content of the previous definition with an example that will also serve as a running example throughout.
Let be the rank 3 simple matroid with ground set [5], bases () = {123, 124, 134, 135, 145, 234, 235, 245}, and circuits () = {125, 345, 1234}, where we are using the shorthand i_1i_2… i_m for {i_1, i_2, …, i_m}. It is easy to check that is represented by the full-rank totally unimodular matrix
M = [ 1 0 0 -1 1; 0 1 0 -1 1; 0 0 1 -1 0 ],
and thus is regular. In fact, in this case is also graphic.
The assumption about the rank of being nonzero is not part of the usual definition of regular matroid in the literature: we include it to avoid nuisances with representability, see for instance <cit.>.
The class of regular matroids (including those of rank zero) is closed under duality and contains all graphic matroids.
We now introduce cuts and flows of a regular matroid, following Su and Wagner's treatment in <cit.>.
Let M ∈^r × n (where 0 < r ≤ n) be a full-rank weakly unimodular matrix. We define the lattice of integer cuts of M, denoted Γ(M), and the lattice of integer flows of M, denoted Λ(M), as
Γ(M) := row(M) ∩ ℤ^n,
Λ(M) := ker(M) ∩ ℤ^n.
The lattices of integer cuts and flows are orthogonal to each other with respect to the usual dot product. In particular, if A = [I_r | D] ∈ ℤ^r × n (with 0 < r < n) is totally unimodular and A^* := [-D^T | I_n-r], one has that
Λ(A) = Γ(A^*).
If v ∈ ℝ^n, the support of v, denoted by supp(v), is the set of indices i ∈ [n] such that v_i ≠ 0.
Given a full-rank weakly unimodular matrix M ∈ ℤ^r × n with r ≤ n, we will call a flow λ ∈ Λ(M) (respectively, a cut γ ∈ Γ(M))
* nowhere-zero if λ_i ≠ 0 for every i ∈ [n], i.e., if supp(λ) = [n];
* a k-flow (respectively, a k-cut) if |λ_i| < k for every i ∈ [n];
* a signed circuit or simple flow if it is a 2-flow and its support is a circuit of M. We denote the set of signed circuits of M by (M).
For a regular matroid , we are able to talk about the lattice of integer cuts of up to isometry: in fact, if M and M' are two full-rank weakly unimodular matrices representing , then the elements of Γ(M') correspond to elements of Γ(M) via multiplication by a signed permutation matrix (compare, e.g., <cit.>; their argument is stated for totally unimodular matrices, but goes through for full-rank weakly unimodular matrices as well). In particular, an element of Γ(M') will be a nowhere-zero cut, a k-cut or a signed circuit if and only if the corresponding element of Γ(M) is. Moreover, due to (<ref>), all the above statements go through for flows as well.
The matrix M from <Ref> has (see the brute-force check after this list)
* seventeen 2-cuts: (0,0,0,0,0), (1,0,0,-1,1), (-1,0,0,1,-1), (0,1,0,-1,1),
(0,-1,0,1,-1), (0,0,1,-1,0), (0,0,-1,1,0), (1,-1,0,0,0), (-1,1,0,0,0),
(1,0,-1,0,1), (-1,0,1,0,-1), (0,1,-1,0,1), (0,-1,1,0,-1), (1,-1,1,-1,0),
(-1,1,-1,1,0), (1,-1,-1,1,0), (-1,1,1,-1,0).
* seven 2-flows: (0,0,0,0,0), (1,1,0,0,-1), (-1,-1,0,0,1), (0,0,1,1,1),
(0,0,-1,-1,-1), (1,1,1,1,0), (-1,-1,-1,-1,0).
* six signed circuits: all the 2-flows except for the origin.
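These counts can be reproduced by brute force. The following Python snippet (our own illustration) enumerates all {-1, 0, 1}-vectors of length 5 and tests membership in the row space and in the kernel of M:

```python
import itertools
import numpy as np

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])
rank_M = np.linalg.matrix_rank(M)

def in_row_space(v):
    # v lies in row(M) iff appending v as a row does not increase the rank
    return np.linalg.matrix_rank(np.vstack([M, v])) == rank_M

two_cuts, two_flows = [], []
for v in itertools.product([-1, 0, 1], repeat=M.shape[1]):
    v = np.array(v)
    if in_row_space(v):
        two_cuts.append(v)
    if np.all(M @ v == 0):
        two_flows.append(v)

nonzero_flows = [v for v in two_flows if np.any(v != 0)]   # here these are exactly the signed circuits
print(len(two_cuts), len(two_flows), len(nonzero_flows))   # 17 7 6
```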
We record here for further reference some useful facts:
Let M be a full-rank weakly unimodular matrix.
* If λ ∈ Λ(M) and supp(λ) is a circuit of M, then every coordinate of λ has the same absolute value.
* If C is a circuit of M, then there are exactly two signed circuits (differing by a global sign) with support C.
Part (i) can be derived directly from <cit.> and constitutes a strengthening of <cit.>. The proof of part (ii) is almost verbatim the same as the one of <cit.>, using part (i) instead of <cit.>.
Let ℳ be a regular matroid with ground set E. If B is a basis of ℳ and e ∈ E ∖ B, then the fundamental circuit of e with respect to B is the unique circuit C(e, B) of ℳ contained in B ∪ {e}. Note that e ∈ C(e, B).
If, moreover, M is a full-rank weakly unimodular representation of ℳ, then by <Ref> there is a unique signed circuit χ(e, B) ∈ Λ(M) supported at C(e, B) whose e-th entry equals 1. We will call such a signed circuit the fundamental signed circuit of e with respect to B and M.
Let ℳ and M be as in <Ref>. Then, for B = {1,2,3}, one has that χ(4, B) = (1,1,1,1,0) and χ(5, B) = (-1,-1,0,0,1).
Let M ∈^r × n (where 0 < r ≤ n) be a full-rank weakly unimodular matrix and assume that the first r columns of M are linearly independent. Then, for any a_1, …, a_r ∈, there exist unique a_r+1, …, a_n ∈ such that (a_1, …, a_n) ∈Γ(M). In other words, there exists a unique cut γ∈Γ(M) having a_1, …, a_r as its first r entries.
Call the regular matroid represented by M.
If r=n, then Γ(M) = ^n and the claim is true. Assume now that r < n.
To prove existence, define γ∈^n in the following way:
* γ_i = a_i for every i ∈ [r];
* for every j ∈{r+1, …, n}, we determine γ_j by imposing that γ·(j, [r]) = 0, where (j, [r]) is the fundamental signed circuit of j with respect to the basis [r] of and the representation M.
To prove that the integer vector γ∈^n we have just defined is indeed a cut, it is enough to show that γ∈row(M); but since row(M) and (M) are orthogonal with respect to the standard dot product, this amounts to proving that γ· = 0 for every ∈(M). Since the fundamental signed circuits of M with respect to [r] form an -basis of (M) (being n-r many linearly independent vectors by construction), the claim follows.
To prove uniqueness, assume there is another cut γ' such that γ'_i = a_i for every i ∈ [r], and consider βγ' - γ∈Γ(M). By assumption, β_i = 0 for every i ∈ [r]. Since row(M) = (M)^⊥, it follows that β·λ = 0 for every λ∈(M). In particular, for every j ∈{r+1, …, n}, one has that β·(j, [r]) = 0, and thus β_j = 0. This proves that γ' = γ.
Given a matroid , denote by ^∘ the matroid obtained from by deleting all its loops. Su and Wagner proved in <cit.> that knowing the lattice of integer cuts of is enough to determine ^∘ up to isomorphism. In particular, if we know beforehand that is loopless (for instance, if is simple), we can reconstruct completely from the data of its lattice of integer cuts. This idea will serve as a blueprint for the constructions in this paper.
§.§ Polarity
Let P ⊆^d be a full-dimensional lattice polytope with ∈P∩^d (here P denotes the interior of P with respect to the Euclidean topology).
We recall that the polar of P is the polytope
P^Δ{∈^d |·≤ 1 for every ∈ P},
where we are using the usual dot product to identify ^d and its dual (^d)^*.
The polar P^Δ will not be a lattice polytope in general. If P^Δ happens to be a lattice polytope, then P is called reflexive.
For the rest of this subsection we fix a full-dimensional reflexive polytope P ⊆^d with P∩^d = {}.
If ∈ P^Δ∩^d, we denote by F_ the face of P obtained as {_i |_i · = 1}, where the _i's are the vertices of P.
Indeed, the polytope P lies entirely inside one of the halfspaces defined by the hyperplane H_{∈^d |· = 1}.
By polarity, facets of the polytope P correspond to the vertices of the polar polytope P^Δ; in particular, any facet of P will be of the form F_ for some ∈ P^Δ∩^d, and such a will be a vertex of P^Δ.
§.§ Toric ideals
We introduce some basic notation about toric ideals. For the concepts not explained here and to get further insight, see for instance <cit.>.
If M ∈^r × n is an integer matrix with columns _1, …, _n, we will denote by I_M the toric ideal associated with M, i.e. the kernel of the map
π K[x_1, …, x_n] → K[t_1^± 1, …, t_r^± 1]
x_i ↦𝐭^_i t_1^m_1,it_2^m_2,i… t_r^m_r,i,
where K is a field. Every ∈^n can be uniquely written as ^+ - ^-, where ^+ and ^- are in ^n and have disjoint supports. Any column vector ∈(M) ∩^n gives rise to a binomial ^^+ - ^^-∈(π), and the ideal I_M is generated by binomials of this form. In what follows, with a slight abuse of notation, we will use the expression “signed circuit” to denote both an element λ∈{0, ± 1}^n as in <Ref> and the associated binomial ^^+ - ^^- in (π).
When M is full-rank weakly unimodular, the toric ideal I_M is remarkably well-behaved: in fact, the set (M) of signed circuits is a universal Gröbner basis for I_M (and hence, in particular, the signed circuits of M generate I_M).
Actually, an even stronger result is true, as (M) turns out to be the Graver basis of I_M <cit.>. (In fact, since two signed circuits only differing by a global sign give rise to the same binomial up to sign, one usually picks a representative for every pair; in particular, the Graver basis of I_M will have cardinality |(M)| = 1/2|(M)|.)
Let M be as in <Ref>. The enumeration of signed circuits in <Ref> shows that the polynomials x_1x_2-x_5, x_3x_4x_5-1 and x_1x_2x_3x_4 - 1 are the Graver basis (and a universal Gröbner basis) of the toric ideal I_M.
Throughout the paper, when we say that a certain polynomial inside a polynomial ring is homogeneous, we are using the standard grading: i.e., each variable has degree 1. We record here for further reference a useful observation.
Let B ∈^m × n and let B' ∈^(m+1) × (n+1) be the matrix defined via
b'_ijb_ij if i≤ m and j ≤ n
0 if i≤ m and j = n+1
1 if i=m+1 .
Let I_B ⊆ K[x_1, …, x_n] and I_B'⊆ K[x_1, …, x_n, z] be the respective toric ideals (here x_i corresponds to the i-th column and z to the (n+1)-st, when available). Then I_B' = I_B^hom, where the homogenization is taken with respect to the variable z.
Let us first prove that I_B'⊆ I_B^hom. The toric ideal I_B' is generated by the set of its primitive binomials, i.e. its Graver basis. Let f be a primitive binomial of I_B'. Due to primitivity, the variable z can appear at most on one side of the binomial; without of loss of generality, we can hence write f = ^^+ - ^^-z^k, where = ^+ - ^- ∈^n, k ≥ 0 and (, k) ∈(B'). By construction, ∈(B) and f is homogeneous; more precisely, f is the homogenization of a binomial in I_B with respect to the variable z. It follows that I_B'⊆ I_B^hom.
Let us now prove that I_B^hom⊆ I_B'. By <cit.>, in order to find a generating set for I_B^hom it is enough to homogenize a set of polynomials forming a Gröbner basis of I_B with respect to a graded monomial order. Primitive polynomials provide such a set: in fact, the Graver basis of I_B contains the universal Gröbner basis of I_B <cit.>. Let g = ^^+ - ^^- be a primitive binomial in I_B. We can assume without loss of generality that k |^+| - |^-| ≥ 0. By construction, the homogenized polynomial ^^+ - ^^-z^k lies in I_B'. This shows that I_B^hom⊆ I_B'.
§ UNIQUENESS UP TO UNIMODULAR EQUIVALENCE
The main aim of this section is to describe how to extend the definition of a symmetric edge polytope from the context of graphs to that of regular matroids. Let us first fix some notation:
For any integer matrix M ∈ ℤ^r × n with 0 < r ≤ n, we denote by P_M the lattice polytope of ℝ^r obtained as conv[ M | -M ].
For F ∈ GL_r(ℤ), we denote by ψ_F: ℝ^r → ℝ^r the affine map sending x to Fx.
It is our goal to show that any two full-rank weakly unimodular representations of a regular matroid produce the same lattice polytope (in the sense specified in <Ref>) up to unimodular equivalence, and the same holds for the polytopes obtained via polarity. More precisely, we show the following:
Let ℳ be a regular matroid of rank r > 0 on n elements and let M_1, M_2 ∈ ℝ^r × n be two full-rank weakly unimodular representations of ℳ. Then there exists F ∈ GL_r(ℤ) such that
P_M_2 = ψ_F(P_M_1) and P_M_2^Δ = ψ_(F^T)^-1(P_M_1^Δ).
Pick two weakly unimodular full-rank (r × n)-matrices M_1 and M_2 both representing . For each i ∈{1,2}, write _i _M_i. Multiplying M_i on the right by a (signed) permutation matrix has no effect on the polytope _i: permuting the columns just permutes the list L of points we are taking the convex hull of, and changing the sign of a column is harmless because the list L consists of the columns of both M_i and -M_i. After some permutation of the columns of M_1 and M_2, we can hence assume without loss of generality the following two statements:
* the identity map [n] → [n] yields an isomorphism between the matroids represented by M_1 and M_2;
* the submatrices N_1 and N_2 obtained by selecting the first r columns of respectively M_1 and M_2 are both invertible.
Proceeding as in the proof of “(ii) implies (i)” in <Ref>, we can now multiply each M_i on the left by N_i^-1∈GL_r(), obtaining the totally unimodular matrix [I_r | D_i]. Since the identity map still yields an isomorphism between the matroids represented by [I_r | D_1] and [I_r | D_2], we can apply <cit.> to get that D_1 and D_2 are congruent modulo 2, and hence so are [I_r | D_1] and [I_r | D_2]. Since [I_r | D_1] and [I_r | D_2] are both totally unimodular, we are now in the position to use Camion's signing lemma <cit.>: i.e., we can obtain the matrix [I_r | D_2] by changing the signs of some rows and columns of [I_r | D_1]. In other words, there exist diagonal matrices R ∈GL_r() and C ∈GL_n() with only 1's and -1's on the diagonal and such that [I_r | D_2] = R · [I_r | D_1] · C.
Now let F N_2 · R · N_1^-1∈GL_r(). It follows from the discussion above that _2 = ψ_F(_1), as desired (note that C, being a signed permutation matrix, does not enter the picture).
The polar statement can now be derived like this:
_2^Δ = {∈^r |·≤ 1 for every ∈_2}
= {∈^r |·≤ 1 for every ∈ψ_F(_1)}
= {∈^r |· F≤ 1 for every ∈_1}
= {∈^r | F^T·≤ 1 for every ∈_1}
= {(F^T)^-1∈^r |·≤ 1 for every ∈_1}
= ψ_(F^T)^-1(_1^Δ).
Let be the uniform matroid U_2,3. The two full-rank totally unimodular matrices
M_1 [ 1 0 1; 0 1 1 ] and M_2 [ 1 0 1; 0 1 -1 ]
both represent . Changing the signs of both the second row and the second column of M_1 yields M_2; in formulas,
[ 1 0 1; 0 1 -1 ] = [ 1 0; 0 -1 ][ 1 0 1; 0 1 1 ][ 1 0 0; 0 -1 0; 0 0 1 ].
It is easy to verify that the two polytopes _1 and _2 are unimodularly equivalent, as guaranteed by <Ref>.
The usual symmetric edge polytope associated with a graph G is defined as _A_G, where A_G is any signed incidence matrix associated with G. The matrix A_G provides a totally unimodular representation of the graphic matroid _G, but is not full-rank. However, this is not really an issue, as we now explain.
Let be a regular matroid of rank r > 0 and let M ∈^m × n be a totally unimodular representation of with m > r. Possibly after permuting the columns of M, we can assume without loss of generality that the first r columns of M are linearly independent. Pivoting repeatedly we can then reach a matrix
M' [ I_r D; 0_m-r, r 0_m-r, n-r ],
which will again be totally unimodular by <cit.>.
The two polytopes _M and _M' are unimodularly equivalent, and projecting onto the first r coordinates shows that _M' is in turn unimodularly equivalent to _M”, where M”[ I_r | D ] is a full-rank totally unimodular representation of .
Let G = C_3 be the cycle graph on three vertices and pick
A_G [ 1 1 0; -1 0 -1; 0 -1 1 ].
Then _A_G is the symmetric edge polytope _G. Successive row operations on A_G yield that
[ 1 -1 0; 0 1 0; 0 1 1 ][ 1 0 0; 1 1 0; 0 0 1 ][ 1 1 0; -1 0 -1; 0 -1 1 ] = [ 1 0 1; 0 1 -1; 0 0 0 ],
and so _G is unimodularly equivalent to the full-dimensional polytope _M_2⊆^2, with M_2 as in <Ref>.
Selecting a directed spanning tree inside a connected finite simple graph G on r+1 vertices and n edges yields an explicit full-rank totally unimodular representation for ℳ_G in the following way (compare <cit.>):
* fix an orientation for each edge of G;
* pick a spanning tree 𝒯 for G and number its edges from 1 to r;
* assign the i-th standard basis vector _i ∈ℝ^r to the i-th edge in 𝒯 (taken with the orientation selected at the beginning);
* for any edge e⃗ in G taken with its orientation, consider the unique directed path 𝒫_e⃗ from the starting vertex to the ending vertex that only uses edges of 𝒯;
* assign to 𝒫_e⃗ the vector _e⃗ = (λ_1, …, λ_r), where λ_i equals 1 if the i-th edge of 𝒯 appears in 𝒫_e⃗ with its “correct” orientation, -1 if it is traversed backwards, 0 if it does not appear at all.
Putting together all the vectors _e⃗ as columns of a matrix yields a full-rank totally unimodular matrix [I_r | D] representing ℳ_G. By the results in this section, this also produces a full-dimensional polytope _[I_r | D]⊆^r unimodularly equivalent to the symmetric edge polytope of G (compare this to <Ref>). If the graph G is not connected, one can select a directed spanning tree for each connected component and argue analogously.
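A hedged Python sketch of this spanning-tree construction (all names are ours; it is one possible implementation, not code from the paper) is given below for the triangle C_3 considered earlier.

```python
from collections import deque

def tree_path(tree_adj, start, end):
    """Vertices on the unique path from start to end in a tree (via BFS)."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == end:
            break
        for w in tree_adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    path = [end]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

def tu_representation(vertices, edges, tree_edges):
    """Rows indexed by the r tree edges, columns by all directed edges of the graph."""
    tree_index = {e: i for i, e in enumerate(tree_edges)}
    tree_adj = {v: [] for v in vertices}
    for (u, v) in tree_edges:
        tree_adj[u].append(v)
        tree_adj[v].append(u)
    columns = []
    for (u, v) in edges:                       # directed edge u -> v
        col = [0] * len(tree_edges)
        path = tree_path(tree_adj, u, v)
        for a, b in zip(path, path[1:]):
            if (a, b) in tree_index:           # tree edge traversed with its orientation
                col[tree_index[(a, b)]] = 1
            else:                              # tree edge traversed backwards
                col[tree_index[(b, a)]] = -1
        columns.append(col)
    return [list(row) for row in zip(*columns)]

# Triangle C_3 oriented as 1->2, 1->3, 2->3, with spanning tree {1->2, 1->3}:
edges = [(1, 2), (1, 3), (2, 3)]
print(tu_representation([1, 2, 3], edges, tree_edges=[(1, 2), (1, 3)]))
# [[1, 0, -1], [0, 1, 1]]: a full-rank totally unimodular representation of the graphic matroid of C_3
```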
§ FIRST PROPERTIES OF GENERALIZED SYMMETRIC EDGE POLYTOPES
Due to <Ref>, if we are given a regular matroid of rank r>0 on n elements, we know how to define a full-dimensional polytope _⊆^r which is defined up to some unimodular equivalence not involving any translation. We now wish to prove some results about _: in the proofs we will often need to fix a specific full-rank totally unimodular representation of .
We begin by noting that the polytope _ does not see potential loops or parallel elements inside the matroid , in analogy to the usual symmetric edge polytopes (see <cit.>).
Let be a regular matroid of rank r>0 and let M be a full-rank weakly unimodular matrix representing . Let M be the submatrix of M obtained by keeping only the nonzero columns _i such that _i ≠±_j for every j<i. Then _M = _M, since the redundant columns in M do not affect the structure of _M and, as always lies in the interior of _M, the same holds for the zero columns.
Hence, the polytope _ does not see loops or parallel elements of ; as a consequence, we can replace by its simplification[In <cit.> this is called the combinatorial geometry of .] .
In the setting of <Ref>, we will say M has irredundant columns if M = M, i.e., if the regular matroid represented by M is simple.
We now wish to collect some properties of generalized symmetric edge polytopes. We point out that parts (i) to (iii) of <Ref> below were essentially already known to Ohsugi and Hibi <cit.>.
Let be a regular matroid of rank r>0. The following properties hold:
* _ is centrally symmetric;
* (_) = rk();
* _ is reflexive;
* _ is terminal, i.e., the only points of _ with integer coordinates are its vertices and the origin;
* The vertices of _ are twice as many as the atoms of the lattice of flats of . In particular, if is simple, every antipodal pair of vertices of _ corresponds to an element in the ground set of .
Part (i) is immediate by definition, no matter which representation M we choose for .
Due to <Ref> we know that, if M and M' are two full-rank weakly unimodular matrices representing , we can go from _M to _M' and from _M^Δ to _M'^Δ via unimodular maps that do not involve any translation. In particular, it is enough to prove statements (ii)–(v) for _M, where M is a full-rank totally unimodular r × n matrix representing .
Part (ii) is now immediate and, together with part (i), implies that the origin lies in the interior of _M; hence, the polar polytope _M^Δ is well-defined, and an ℋ-presentation for it is given by ^T 𝐱≤1. Since M is totally unimodular, so is [ M | -M ]^T; the polar _M^Δ must then be a lattice polytope (see for instance <cit.>), and hence _M is reflexive. This proves part (iii).
As regards part (iv), pick a lattice point 𝐱 = (x_1, …, x_r) of _M different from the origin. Then we can write 𝐱 = ∑_i λ_i 𝐯_i, where λ_i > 0, ∑_i λ_i = 1, and the vertices 𝐯_i form a set of pairwise distinct nonzero columns of . If r=1, the claim is obvious. Assume hence that r > 1. Since 𝐱≠ and _M is a centrally symmetric subset of the hypercube [-1,1]^r, we can assume without loss of generality that x_1 = 1. Then the first coordinate of every 𝐯_i must also equal 1. If 𝐱 = 𝐯_1, there is nothing to prove. Assume otherwise. Then there is a coordinate (without loss of generality, the second one) in which 𝐱 and 𝐯_1 differ. This can happen only if x_2 = 0 and (𝐯_1)_2 ∈{1, -1}. But then there must exist some j > 1 such that (𝐯_j)_2 = -(𝐯_1)_2. As a consequence, the totally unimodular matrix contains the submatrix
[ 1 1; (𝐯_1)_2 -(𝐯_1)_2 ]
with determinant 2 or -2. This yields a contradiction.
Finally, it is enough to prove the statement of part (v) when is simple. When this is the case, then M has irredundant columns; denote by _1, …, _2n the columns of . Assume by contradiction that a column of (without loss of generality, the first one) can be expressed as a convex combination of the other ones; i.e., _1 = ∑_j ∈ Jλ_j _j for some J ⊆{2, 3, …, 2n}, λ_j > 0, ∑_j ∈ Jλ_j = 1. Since M has no zero columns and _M is a centrally symmetric subset of the hypercube [-1,1]^r, we can assume without loss of generality that (_1)_1 = 1, and this in turn implies that (_j)_1 = 1 for every j ∈ J. Arguing in a similar way to part (iv), one can then build a submatrix of with determinant 2 or -2, which in turn yields the desired contradiction.
Let and M be as in <Ref>. Then _M is the polytope shown in <Ref>. One has that _M = rk(M) = 3; since the matroid is simple, the lattice points of _M are the origin and the columns of .
<Ref> gives us the tools to establish a partial converse to <Ref>.
Let M, N ∈^r × n (where 0 < r ≤ n) be two full-rank weakly unimodular matrices with irredundant columns, and assume that the polytopes _M and _N are unimodularly equivalent. Then there exist F ∈GL_r() and a signed permutation matrix P ∈^n × n such that N = FMP. In particular, N and M represent the same simple regular matroid .
By assumption there exist F ∈GL_r() and ∈^r such that _N = ψ_F(_M) +. Since 0 is the only interior point of the reflexive polytopes _N and _M, it must be that = 0, so that no translation is actually involved. Moreover, one can easily check that _N = ψ_F(_M) = _FM.
Since the matrices FM and N are both full-rank and weakly unimodular, <Ref>(v) implies that the columns of both and correspond to the vertices of _FM = _N. As a consequence, the matrices FM and N can only differ by a signed permutation of their columns; in other words, there exists a signed permutation matrix P ∈^n × n such that N = FMP, as desired.
As a consequence, we obtain that the matroidal setting is the “right” one to study even the usual symmetric edge polytopes.
Let G and H be finite simple graphs. Then the symmetric edge polytopes _G and _H are unimodularly equivalent if and only if the graphic matroids _G and _H are isomorphic.
The “if” part follows from <Ref> and <Ref>. The “only if” part follows from <Ref> and <Ref>, noting that any signed incidence matrix of a simple graph has irredundant columns by construction.
Let G and H be finite simple 3-connected graphs. Then the symmetric edge polytopes _G and _H are unimodularly equivalent if and only if G and H are isomorphic.
This follows directly from <Ref> and Whitney's 2-isomorphism theorem <cit.>.
It was claimed in <cit.> that, if G and H are finite simple graphs and G is 2-connected, then _G and _H are unimodularly equivalent if and only if G and H are isomorphic. Unfortunately, this claim is erroneous and affects the validity of <cit.> as well: indeed, there exist non-isomorphic 2-connected graphs giving rise to the same graphic matroid, and thus having unimodularly equivalent symmetric edge polytopes by <Ref>. The key to build such objects is the Whitney twist operation, see <cit.>. We provide here an explicit example.
Let G and H be the 6-vertex graphs depicted in <Ref>. Both G and H are 2-connected; moreover, since the vertex a has degree 4 in G and all vertices have degree at most 3 in H, the graphs G and H are not isomorphic. After matching the i-th letter of the English alphabet with the i-th coordinate of ℝ^6, consider the 5-dimensional symmetric edge polytopes _G and _H in ℝ^6. Letting
F = [ 0 -1 -1 0 0 0; 0 1 0 0 0 0; 1 1 2 2 1 1; 0 0 0 0 1 0; -1 -1 -1 -1 -1 0; 1 1 1 0 0 0 ]∈GL_6(),
one checks that the unimodular map ψ_F: ℝ^6 →ℝ^6 sending to F transforms _G into _H, and thus _G and _H are unimodularly equivalent.
§ FACETS OF _ AND THE EHRHART THEORY OF THE POLAR POLYTOPE
After defining generalized symmetric edge polytopes and investigating their first structural properties, it is our next goal to find a combinatorial characterization of their facets. In order to achieve this, it is fruitful to focus on the Ehrhart theory of the polar polytope. Unless specified differently, in this section we will only consider simple regular matroids of positive rank, so that by <Ref>(v) the vertices of _ will correspond to the columns of for any full-rank weakly unimodular matrix M representing .
Inspired by work of Beck and Zaslavsky <cit.>, we begin by providing a description of the lattice points in the k-th dilation of ^Δ_.
Let k be a positive integer, ℳ be a simple regular matroid of rank r > 0 and M be a full-rank weakly unimodular matrix representing ℳ. Then the map
(k · P_M^Δ) ∩ ℤ^r → {(k+1)-cuts of M}
u ↦ M^T u
is a bijection.
Let us first describe in more detail the polar polytope _M^Δ. A facet description of _M^Δ is given by ^T ≤1, which in turn implies that
k·_M^Δ = {𝐮∈^r : -k·1≤ M^T≤ k·1},
where the inequalities are meant to be taken componentwise. This implies that, if ∈ (k ·_M^Δ) ∩^r, then (M^T) is an element of row(M) ∩^n such that |(M^T)_i| ≤ k for every i ∈ [n]. This means precisely that M^T is a (k+1)-cut of M.
Vice versa, let γ be a (k+1)-cut of M. Since γ∈row(M), there exists ∈^r such that M^T = γ. Since M is full-rank, the linear map ^r →^n defined by M^T is injective, and thus is uniquely determined. Since satisfies the inequalities in (<ref>), we have that is a lattice point of k ·_M^Δ, and this finishes the proof.
Let M be as in <Ref>. Then the polar polytope ^Δ_M is shown in <Ref>. The lattice points of ^Δ_M are obtained from the 2-cuts in <Ref> by throwing away the last two coordinates.
A consequence of <Ref> is that the lattice of cuts Γ(M) can be thought of as the union of the lattice points of k ·_M^Δ as k varies in (with the convention that 0 ·_M^Δ = {}). This gives us an interpretation of the lattice of cuts as a “limit object”.
It follows from the argument in <Ref> that, if u is a lattice point of P_M^Δ, then
F_u = conv({M_i : M_i · u = 1} ∪ {-M_i : M_i · u = -1})
is a face of P_M with supporting hyperplane
H_u = {x ∈ ℝ^r : x · u = 1}
(where we are using the fact that the columns of [M | -M] correspond to the vertices of P_M). Since by <Ref> γ := M^T u is a 2-cut of M, we can rewrite F_u in the following way:
F_u = conv({M_i : γ_i = 1} ∪ {-M_i : γ_i = -1}).
In other words, the 2-cut γ = M^T u acts as an indicator vector for F_u, in the following sense: the i-th entry of γ equals +1 (respectively, -1) if and only if the vertex M_i (respectively, -M_i) belongs to F_u.
Next, we are going to define a partial order on {0, ±1}-tuples that will enable us to give a first characterization of the facets of _.
Let 𝐮, 𝐯∈{0, ±1}^m. We will write that 𝐮≼𝐯 if for every i ∈ [m] it holds that u_i = 0 or u_i = v_i. Equivalently, ≼ is the partial order induced componentwise by the relations “0 ≺ +1”, “0 ≺ -1” and “+1 and -1 are incomparable”.
Note that {0, ± 1}^m equipped with the partial order from <Ref> is isomorphic to the face lattice of the m-dimensional cross-polytope: see for example <cit.>. More specifically, the isomorphism maps γ∈{0, ± 1}^m to the face obtained as conv({𝐞_i |γ_i = 1}∪{-𝐞_i |γ_i = -1}). This foreshadows the upcoming characterizations of the facets of _.
A direct consequence of the definition of ≼ is that F_𝐮⊆ F_𝐯 if and only if M^T𝐮≼ M^T𝐯. This immediately yields a first characterization of the facets of _ when is simple.
Let be a simple regular matroid of positive rank and let M be a full-rank weakly unimodular representation of . Then the facets of _M are the faces F_𝐮 of _M for which M^T𝐮 is a ≼-maximal 2-cut of M.
The facet description in <Ref> is not completely satisfactory. Our next goal is to develop an alternate characterization that will be the “right” generalization of the description obtained by Higashitani, Jochemko and Michałek <cit.> for classical symmetric edge polytopes: see <Ref> below for a more detailed discussion.
Let be a regular matroid of positive rank and let M be a full-rank weakly unimodular representation of . We will say that the cut γ∈Γ(M) is spanning if the support of γ contains a basis of .
Let be a simple regular matroid of rank r > 0 and let M be a full-rank weakly unimodular representation of . Then the facets of _M are the faces F_𝐮 of _M for which M^T𝐮 is a spanning 2-cut of M.
Let us recall once more that, since is simple, the vertices of _M are in bijection with the columns of by <Ref>(v).
Let us show that, if γ = M^T𝐮 is a spanning 2-cut of M, then F_𝐮 is a facet of _M. Since γ is spanning, by the discussion after <Ref> we know that the face F_𝐮 contains r linearly independent vertices. Since 𝟎∉ F_𝐮, such vertices are also affinely independent; but then, since dim(_M) = r by <Ref>(ii), it follows that F_𝐮 must be a facet.
Let us now prove that all facets of _M arise in this fashion. Let G be a facet of _M. Since dim(_M) = r, the facet G must contain r linearly independent vertices 𝐯_1, …, 𝐯_r; these will correspond to certain columns of M or -M. If the i-th column of M appears among the 𝐯_j's, set γ_i = 1; if the i-th column of -M does, set γ_i = -1. Possibly after some relabeling, we can assume without loss of generality that γ_i ≠ 0 for every i ∈ [r]. By <Ref>, there exists a unique cut γ compatible with the above assignments; moreover, such a cut is spanning by construction. It only remains to show that γ is a 2-cut. By polarity, the facet G corresponds to a vertex 𝐮' of the polar polytope _M^Δ; it then follows from <Ref> that
G = F_𝐮' = conv({M_i |γ'_i = 1}∪{-M_i|γ'_i = -1}),
where γ' = M^T𝐮' is a 2-cut of M. Since γ and γ' coincide on a basis of , it follows from the uniqueness of the cut in <Ref> that γ = γ' and hence γ is a 2-cut, as desired.
Let M be as in <Ref>. Twelve of the seventeen 2-cuts enumerated in <Ref> are spanning: these are (1,0,0,-1,1), (-1,0,0,1,-1), (0,1,0,-1,1), (0,-1,0,1,-1), (1,0,-1,0,1), (-1,0,1,0,-1), (0,1,-1,0,1), (0,-1,1,0,-1), (1,-1,1,-1,0), (-1,1,-1,1,0), (1,-1,-1,1,0), (-1,1,1,-1,0).
Hence, _M has twelve facets, and each of the spanning 2-cuts serves as an indicator vector for one of them: for instance, the 2-cut (1,0,0,-1,1) corresponds to the facet obtained as the convex hull of 𝐞_1, 𝐞_1+𝐞_2+𝐞_3 and 𝐞_1+𝐞_2 (respectively, the first, minus the fourth, and the fifth column of M).
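Continuing with the same assumed matrix M, the spanning condition of <Ref> is easy to test computationally: a 2-cut is spanning exactly when the columns of M indexed by its support have rank r. The sketch below recovers the count of twelve spanning 2-cuts (hence twelve facets) from the seventeen 2-cuts.

```python
import itertools
import numpy as np

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])   # assumed matrix, as in the previous sketch
r, n = M.shape

# Because the first r columns of this M form an identity block, every 2-cut is M^T u
# for some u in {-1, 0, 1}^r; in general one would enumerate gamma and solve as before.
cuts = []
for u in itertools.product((-1, 0, 1), repeat=r):
    gamma = tuple(int(x) for x in M.T @ np.array(u))
    if all(abs(x) <= 1 for x in gamma):
        cuts.append(gamma)
cuts = sorted(set(cuts))

def is_spanning(gamma):
    support = [i for i, x in enumerate(gamma) if x != 0]
    return bool(support) and np.linalg.matrix_rank(M[:, support]) == r

spanning = [g for g in cuts if is_spanning(g)]
print(len(cuts), len(spanning))        # 17 and 12, matching the example
print((1, 0, 0, -1, 1) in spanning)    # True: the facet conv{M_1, -M_4, M_5} above
```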
Some words are needed in order to explain in which sense <Ref> generalizes the characterization of facets obtained by Higashitani, Jochemko and Michałek for classical symmetric edge polytopes <cit.>. If G is a connected graph, facets of the symmetric edge polytope _G were shown to be in bijection with integer vertex labelings such that
(i) if i and j are adjacent in G, then their labels differ at most by one;
(ii) the subgraph of G consisting of the edges {i, j} whose vertex labels differ exactly by one contains a spanning tree of G.
(For the statement to be precise, one further needs to identify any two vertex labelings that differ by a fixed constant value on each vertex.) The first author, Delucchi and Michałek observed in <cit.> that, after fixing an orientation of G, such a characterization is equivalent to asking for integer edge labelings such that
(a) each label is either 1, 0 or -1;
(b) the sum of the labels on each oriented cycle of G is zero;
(c) the set of edges with nonzero labels contains a spanning tree of G.
This last characterization corresponds to <Ref> in the special case when is the graphic matroid associated with G and M is the signed incidence matrix associated with the chosen orientation of G (although, to be fully precise, such a matrix is not full-rank). Indeed, labeling the (oriented) edges of G can be thought of as labeling the columns of the matrix M. In more detail, condition (b) can be expressed more succinctly by saying that the desired edge labelings are cuts of M, while conditions (a) and (c) further specify that they must be spanning 2-cuts.
Comparing the facet characterization from <Ref> with the one found in <Ref> immediately yields the following corollary:
Let be a simple regular matroid of positive rank and M a full-rank weakly unimodular representation of . Then a 2-cut of M is spanning if and only if it is ≼-maximal.
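For the running example, <Ref> can also be checked directly: reusing the assumed matrix and the 2-cut enumeration from the sketches above, the ≼-maximal 2-cuts turn out to be exactly the spanning ones.

```python
import itertools
import numpy as np

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])   # assumed matrix, as in the earlier sketches
r, n = M.shape

cuts = sorted({tuple(int(x) for x in M.T @ np.array(u))
               for u in itertools.product((-1, 0, 1), repeat=r)
               if all(abs(int(x)) <= 1 for x in M.T @ np.array(u))})

def preceq(a, b):
    """a ≼ b iff every entry of a is 0 or agrees with the corresponding entry of b."""
    return all(ai == 0 or ai == bi for ai, bi in zip(a, b))

def is_spanning(g):
    support = [i for i, x in enumerate(g) if x != 0]
    return bool(support) and np.linalg.matrix_rank(M[:, support]) == r

maximal = [g for g in cuts if not any(preceq(g, h) and g != h for h in cuts)]
spanning = [g for g in cuts if is_spanning(g)]
print(sorted(maximal) == sorted(spanning))   # True, illustrating the corollary
```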
The next result generalizes <cit.>. We recall that a matroid is said to be bipartite if all of its circuits have even cardinality.
Let be a simple regular bipartite matroid of rank r > 0 and let M be a full-rank weakly unimodular representation of .
Then the facets of _M are in bijection with the nowhere-zero 2-cuts of M.
By <Ref>, it is enough to prove that the spanning 2-cuts of M are exactly the nowhere-zero 2-cuts of M. Clearly, every nowhere-zero 2-cut must be spanning.
For the reverse containment, let n be the number of elements in the ground set of . If r=n, then is the uniform matroid 𝒰_n,n, the polytope _M is unimodularly equivalent to the n-dimensional cross-polytope, and its 2^n facets correspond to the nowhere-zero elements of row(M) ∩{0, ± 1}^n = {0, ± 1}^n (see also <Ref>).
Assume now that r < n. Let γ be a spanning 2-cut of M and assume without loss of generality that γ_i ≠ 0 for every i ∈ [r]. Now pick any j ∈{r+1, …, n} and consider the fundamental signed circuit (j, [r]), whose support has even cardinality because of the bipartite assumption. Since γ·(j, [r]) = 0, one has that 0 is the sum of γ_j and an odd number of elements in {+1, -1}. For parity reasons, it follows that γ_j ≠ 0, which proves the claim.
§ A REGULAR UNIMODULAR TRIANGULATION FOR _
It follows from a result of Ohsugi and Hibi <cit.> that the polytope _ always admits a regular unimodular triangulation. The aim of this section is to find an explicit description generalizing what Higashitani, Jochemko and Michałek found in the context of symmetric edge polytopes <cit.>. Since the desired characterization involves signed circuits (see <Ref>), our results will be expressed in terms of a fixed full-rank weakly unimodular representation of the given (simple) regular matroid .
If M is a full-rank weakly unimodular matrix, then [M | -M] is as well. It will be useful to describe the signed circuits of the latter in terms of the former. To achieve this goal, we need to introduce some more notation.
Let J ⊆ [n]. We will denote by η_J the injective map ^n →^2n sending (λ_1, …, λ_n) to the vector of ^2n whose i-th entry equals 0 if i ∈ J and λ_i if i ∉ J, and whose (n+i)-th entry equals -λ_i if i ∈ J and 0 if i ∉ J, for every i ∈ [n].
Basically, given an integer vector, the map η_J changes the sign of the entries indexed by an element of J, and then moves them to the second half of an integer vector twice as long. We will sometimes refer to this operation as a promotion. Note that η_J restricts to a map {0, ±1}^n →{0, ±1}^2n.
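As a quick illustration, a possible Python rendering of the promotion map (function and variable names are ours):

```python
def eta(J, lam):
    """Promotion η_J: zero out the entries of lam indexed by J and move them,
    with flipped sign, to the second half of a vector of twice the length."""
    n = len(lam)
    first = [0 if i in J else lam[i] for i in range(n)]
    second = [-lam[i] if i in J else 0 for i in range(n)]
    return first + second

# Promoting (1, 1, 0, 0, -1) along J = {4} (0-based index of the fifth entry):
print(eta({4}, [1, 1, 0, 0, -1]))   # [1, 1, 0, 0, 0, 0, 0, 0, 0, 1]
```

This particular promotion is precisely the exponent vector of the binomial x_1x_2x_-5 - 1 appearing in the example further below.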
Note that a vector of ^2n is in the image of η_J precisely when its i-th entry is 0 for every i ∈ J and its (n+i)-th entry is 0 for every i ∉ J. In particular, if the support of a vector of ^2n is contained in the support of an element of im(η_J), then that vector belongs to im(η_J) as well.
Let J ⊆ [n] and let M ∈^r × n (where 0 < r ≤ n) be a full-rank weakly unimodular integer matrix. Then ∈(M) if and only if η_J() ∈([M | -M]).
Let λ∈^n. By construction, one has that
Mλ = ∑_i ∈ [n]λ_i M_i = ∑_i ∈ [n](η_J(λ)_i - η_J(λ)_n+i) M_i = ∑_i ∈ [n]η_J(λ)_i M_i + ∑_i ∈ [n]η_J(λ)_n+i(-M_i) = [M | -M] η_J(λ).
In particular, ∈(M) if and only if η_J() ∈(), and is a 2-flow if and only if η_J() is. To prove the claim, we still need to show that () is a circuit of M if and only if (η_J()) is a circuit of .
The definition of η_J implies immediately that, if () is not minimally dependent, then (η_J()) is not minimally dependent either. Conversely, assume (η_J()) is not minimally dependent. Then there exists ∈^2n such that = and () ⊆(η_J()). By <Ref>, belongs to the image of η_J and hence () is not minimally dependent, since (η_J^-1()) ⊆(). This proves the claim.
Provided that M does not contain any zero column, the signed circuits of [M | -M] come in two flavors: on the one hand, we have the ones of the form ±(𝐞_i + 𝐞_n+i) (reflecting the relation M_i + (-M)_i = 𝟎), while on the other hand we have those obtained by promoting a signed circuit of M. This is the content of the technical lemma below.
Assume M ∈^r × n (where 0 < r ≤ n) is a full-rank weakly unimodular matrix not containing any zero column. Then
the set of signed circuits of [M | -M] is equal to {±(𝐞_i + 𝐞_n+i) : i ∈ [n]}∪{η_J(λ) : J ⊆ [n], λ a signed circuit of M}.
Let us first prove that the right hand side of (<ref>) consists of signed circuits of [M | -M]. This is clear for ±(𝐞_i + 𝐞_n+i), since
[M | -M](𝐞_i + 𝐞_n+i) = M_i + (-M)_i = 𝟎
and M does not contain any zero column by hypothesis. Moreover, by <Ref>, η_J(λ) is a signed circuit of [M | -M] for every choice of J ⊆ [n] and every signed circuit λ of M.
Let us now prove that every signed circuit of arises as in the right-hand side of (<ref>). Let = (_1, …, _2n) ∈{0, ± 1}^2n be a signed circuit of . If there exists i ∈ [n] such that _i_n+i≠ 0 then, by support minimality, must be equal to _i + _n+i up to sign. Assume then that _i_n+i = 0 for every i ∈ [n], and let J {i ∈ [n] : _n+i≠ 0}. By <Ref>, one has that = η_J() for some ∈{0, ± 1}^n; moreover, by <Ref>, such is a signed circuit of M. This finishes the proof.
We remark that the unimodularity assumption is not really crucial for Lemmas <ref> and <ref>: one could prove similar statements by substituting “signed circuits” with “circuits” (in the toric ideal meaning). However, since in this paper we are reserving the word “circuit” for its matroidal meaning, we did not want to confuse the reader unnecessarily.
Before moving on, we need to introduce some notation about toric ideals naturally arising in this context.
Let be a simple regular matroid of rank r > 0 on n elements and let M be a full-rank weakly unimodular representation of . Then, by <Ref>(iv)–(v), the lattice points of _M are the columns of [M | -M] and the origin. We will denote by I__M the toric ideal associated with the polytope _M, i.e., the one obtained as the kernel of the map
K[x_1, …, x_n, x_-1, …, x_-n, z] → K[t_1^± 1, …, t_r^± 1, s]
x_i ↦𝐭^M_is
x_-i ↦𝐭^-M_is
z ↦ s
and by I_[M | -M] the toric ideal obtained as the kernel of the map
K[x_1, …, x_n, x_-1, …, x_-n] → K[t_1^± 1, …, t_r^± 1]
x_i ↦𝐭^M_i
x_-i ↦𝐭^-M_i,
where K is a field.
We immediately obtain the following corollary of <Ref>:
Let be a simple regular matroid of rank r > 0 and let M ∈^r × n be a full-rank weakly unimodular matrix representing . Then the ideal I__M is the homogenization of I_[M | -M] with respect to the variable z. In particular, the (irreducible) projective variety V(I__M) is the projective closure of the (irreducible) affine variety V(I_[M | -M]).
<Ref> and <Ref> imply the following description for the universal Gröbner basis of the toric ideal I_[M | -M] when M is a full-rank weakly unimodular matrix not containing any zero column:
Let M ∈^r × n (where 0 < r ≤ n) be a full-rank weakly unimodular matrix without any zero column. Then the set of signed circuits, the universal Gröbner basis and the Graver basis of I_[M | -M] all coincide and consist of the following binomials:
* x_ix_-i - 1 for every i ∈ [n];
* 𝐱^η_J(λ)^+ - 𝐱^η_J(λ)^- for every J ⊆ [n] and every signed circuit λ of M such that |η_J(λ)^+| ≥ |η_J(λ)^-|.
(With a slight abuse of notation, we identify those binomials that differ only up to a global sign.)
Let M be as in <Ref> and <Ref>. The Graver basis of I_[M | -M] contains 37 binomials: 5 of the form x_ix_-i-1, 16 arising from the promotions of x_1x_2x_3x_4-1, 8 from the promotions of x_1x_2-x_5 and another 8 from the promotions of x_3x_4x_5-1. For instance, the promotions of the signed circuit λ = (1,1,0,0,-1) of M (corresponding to the binomial x_1x_2 - x_5) give rise to the following eight signed circuits of I_[M | -M]: x_1x_2-x_5, x_2-x_-1x_5, x_1 - x_-2x_5, x_1x_2x_-5-1, 1 - x_-1x_-2x_5, x_2x_-5-x_-1, x_1x_-5 - x_-2, x_-5 - x_-1x_-2. Technically, the statement of <Ref> only asks for the promotions such that |η_J(λ)^+| ≥ |η_J(λ)^-|; however, when this is not the case, we just take -λ instead of λ, and the global count is not affected.
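These counts can be reproduced by brute force. The sketch below (again using the assumed matrix M from the earlier sketches) computes the signed circuits of M as the nonzero {0, ±1}-kernel vectors with minimal support, promotes each of them over all subsets J, identifies a promotion with its negative, and adds the n binomials x_ix_-i - 1.

```python
import itertools
import numpy as np

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])   # assumed matrix, as in the earlier sketches
r, n = M.shape

def support(v):
    return frozenset(i for i, x in enumerate(v) if x != 0)

# Signed circuits of M: nonzero {0,±1}-kernel vectors whose support is minimal.
kernel = [v for v in itertools.product((-1, 0, 1), repeat=n)
          if support(v) and not np.any(M @ np.array(v))]
circuits = [v for v in kernel if not any(support(w) < support(v) for w in kernel)]

def eta(J, lam):
    first = [0 if i in J else lam[i] for i in range(n)]
    second = [-lam[i] if i in J else 0 for i in range(n)]
    return tuple(first + second)

MM = np.hstack([M, -M])
promoted = set()
for lam in circuits:
    for k in range(n + 1):
        for J in itertools.combinations(range(n), k):
            p = eta(set(J), lam)
            assert not np.any(MM @ np.array(p))        # every promotion lies in ker [M | -M]
            promoted.add(max(p, tuple(-x for x in p)))  # identify p with -p
print(len(circuits) // 2)      # 3 signed circuits of M up to sign
print(len(promoted) + n)       # 32 + 5 = 37 binomials in the Graver basis, as stated
```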
The next proposition, reminiscent of the results in <cit.>, proves the existence of a regular unimodular triangulation for I__ and serves as a first step towards an explicit description. For the correspondence between regular unimodular triangulations and squarefree initial ideals, we refer the reader to <cit.> and <cit.>.
Let be a simple regular matroid of rank r > 0 and let M ∈^r × n be a full-rank weakly unimodular representation of . Denote by S the polynomial ring K[x_1, …, x_n, x_-1, …, x_-n]. Let < be a graded monomial order of S and let <_h be any monomial order of S[z] with the property that in_<_h(f^h) = in_<(f) for every f ∈ S. Then the toric ideal I__M has a squarefree initial ideal with respect to <_h.
A concrete choice for <_h as in <Ref> (and later <Ref>) is any degrevlex order of S[z] such that z <_h v for every variable v in S.
By <Ref>, the toric ideal I__M is the homogenization of I_[M | -M] with respect to the variable z. In order to find a Gröbner basis for I__M, by <cit.> it is then enough to homogenize a set of polynomials forming a Gröbner basis for I_[M | -M] with respect to a graded monomial order. The universal Gröbner basis of I_[M | -M] is described in <Ref>; since by definition signed circuits have coefficients in {0, ± 1}, the claim follows.
We are finally able to generalize the Gröbner basis description obtained by Higashitani, Jochemko and Michałek for the usual symmetric edge polytopes.
Let be a simple regular matroid and let M ∈^r × n be a full-rank weakly unimodular representation of . Let S[z], < and <_h be as in <Ref>. Then the polynomials
(i) x_ix_-i - z^2 for every i ∈ [n];
(ii) 𝐱^η_J(λ)^+ - 𝐱^η_J(λ)^-z for every J ⊆ [n] and every signed circuit λ of M such that |η_J(λ)^+| = |η_J(λ)^-|+1;
(iii) 𝐱^η_J(λ)^+ - 𝐱^η_J(λ)^- for every J ⊆ [n] and every signed circuit λ of M such that |η_J(λ)^+| = |η_J(λ)^-|
form a Gröbner basis for I__M with respect to <_h.
The binomials of types (ii) and (iii) in <Ref> come from considering those signed circuits of where 1's and -1's are “as balanced as possible”; this is exactly what happens for classical symmetric edge polytopes, recalling that both orientations for every edge of the original undirected graph are available in that setting.
By the proof of <Ref>, we know that homogenizing the polynomials of <Ref> with respect to the variable z yields a Gröbner basis for I__M with respect to <_h.
For every i ∈ [n], the homogenization of x_ix_-i - 1 gives us one of the polynomials of type (i). Now let J ⊆ [n], let λ be a signed circuit of M, and set k ≔ |η_J(λ)^+| - |η_J(λ)^-|. After possibly swapping λ with -λ, we can assume without loss of generality that k ≥ 0.
If k = 0 or k = 1, the homogenization of 𝐱^η_J(λ)^+ - 𝐱^η_J(λ)^- yields one of the binomials of type (iii) or (ii) in the list. It is then enough to show that the homogenization of 𝐱^η_J(λ)^+ - 𝐱^η_J(λ)^- is redundant when k ≥ 2. Consider such a polynomial. There exists j ∈ [n] such that either η_J(λ)_j = 1 or η_J(λ)_n+j = 1. If η_J(λ)_j = 1, one has that
x_j ·𝐱^η_J ∪{j}(λ)^+ = 𝐱^η_J(λ)^+ and x_-j·𝐱^η_J(λ)^- = 𝐱^η_J ∪{j}(λ)^-
and we can write
𝐱^η_J(λ)^+ - 𝐱^η_J(λ)^-z^k = 𝐱^η_J(λ)^+ - 𝐱^η_J(λ)^-z^k + x_jx_-j𝐱^η_J(λ)^-z^k-2 - x_jx_-j𝐱^η_J(λ)^-z^k-2
= x_j·𝐱^η_J∪{j}(λ)^+ - 𝐱^η_J(λ)^-z^k + x_jx_-j𝐱^η_J(λ)^-z^k-2 - x_j ·𝐱^η_J ∪{j}(λ)^-z^k-2
= x_j· (𝐱^η_J∪{j}(λ)^+ - 𝐱^η_J ∪{j}(λ)^-z^k-2) + 𝐱^η_J(λ)^-z^k-2(x_jx_-j-z^2).
If instead η_J(λ)_n+j = 1, one has that
x_-j·𝐱^η_J ∖{j}(λ)^+ = 𝐱^η_J(λ)^+ and x_j·𝐱^η_J(λ)^- = 𝐱^η_J ∖{j}(λ)^-
and an analogous computation leads to
𝐱^η_J(λ)^+ - 𝐱^η_J(λ)^-z^k = x_-j· (𝐱^η_J∖{j}(λ)^+ - 𝐱^η_J ∖{j}(λ)^-z^k-2) + 𝐱^η_J(λ)^-z^k-2(x_jx_-j-z^2).
Iterating this procedure as many times as possible yields the claim.
Let M be as in <Ref> and <Ref>, and pick as a term order the degree reverse lexicographic order with x_1 > x_-1 > x_2 > x_-2 > x_3 > x_-3 > x_4 > x_-4 > x_5 > x_-5 > z. Then, by <Ref>, there is a Gröbner basis for I__M consisting of the following binomials (where we underline the leading term):
* x_1x_-1-z^2, x_2x_-2-z^2, x_3x_-3-z^2, x_4x_-4-z^2, x_5x_-5-z^2
* x_1x_2-x_5z, x_-1x_-2-x_-5z, x_-1x_5-x_2z, x_1x_-5-x_-2z, x_-2x_5-x_1z, x_2x_-5-x_-1z
* x_3x_4-x_-5z, x_-3x_-4-x_5z, x_3x_5-x_-4z, x_-3x_-5-x_4z, x_4x_5-x_-3z, x_-4x_-5-x_3z
* x_1x_2-x_-3x_-4, x_-1x_-2-x_3x_4, x_1x_3-x_-2x_-4, x_-1x_-3-x_2x_4, -x_-2x_-3+x_1x_4, -x_2x_3+x_-1x_-4.
Note that this Gröbner basis is not reduced, as the monomials x_1x_2 and x_-1x_-2 are both featured twice as the leading term of a binomial. The associated triangulation has sixteen facets and is shown in <Ref>.
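As a sanity check (this is not a Gröbner basis computation), one can verify symbolically that every binomial listed above lies in I__M, i.e., that it vanishes under the parametrization x_i ↦𝐭^M_is, x_-i↦𝐭^-M_is, z ↦ s. The SymPy sketch below uses the assumed matrix M from the earlier sketches.

```python
import sympy as sp

t1, t2, t3, s = sp.symbols('t1 t2 t3 s', positive=True)
t = (t1, t2, t3)
M_cols = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, -1, -1), (1, 1, 0)]  # assumed matrix

def x(i):
    """Image of x_i (for i > 0) or x_{-i} (for i < 0) under the toric parametrization."""
    col = M_cols[abs(i) - 1]
    sign = 1 if i > 0 else -1
    return sp.prod([tj**(sign * c) for tj, c in zip(t, col)]) * s

z = s
binomials = [
    x(1)*x(-1) - z**2, x(2)*x(-2) - z**2, x(3)*x(-3) - z**2, x(4)*x(-4) - z**2, x(5)*x(-5) - z**2,
    x(1)*x(2) - x(5)*z, x(-1)*x(-2) - x(-5)*z, x(-1)*x(5) - x(2)*z, x(1)*x(-5) - x(-2)*z,
    x(-2)*x(5) - x(1)*z, x(2)*x(-5) - x(-1)*z,
    x(3)*x(4) - x(-5)*z, x(-3)*x(-4) - x(5)*z, x(3)*x(5) - x(-4)*z, x(-3)*x(-5) - x(4)*z,
    x(4)*x(5) - x(-3)*z, x(-4)*x(-5) - x(3)*z,
    x(1)*x(2) - x(-3)*x(-4), x(-1)*x(-2) - x(3)*x(4), x(1)*x(3) - x(-2)*x(-4),
    x(-1)*x(-3) - x(2)*x(4), x(1)*x(4) - x(-2)*x(-3), x(-1)*x(-4) - x(2)*x(3),
]
print(all(sp.simplify(b) == 0 for b in binomials))   # True: all 23 listed binomials lie in the ideal
```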
Finally, the γ-polynomial of a symmetric edge polytope has been the object of much recent work after Ohsugi and Tsuchiya conjectured the nonnegativity of its coefficients in <cit.>. We wish to conclude the present article by extending to the matroidal setting a characterization of γ_1 which appeared independently in <cit.> and <cit.>.
Let be a simple regular matroid of positive rank. Then γ_1(_) = 2 ·rk(^*). In particular, γ_1(_) is nonnegative.
In what follows, let E be the ground set of the matroid . By <Ref>, the polytope _ admits a (regular) unimodular triangulation Δ_<, and hence the h^*-polynomial of _ and the h-polynomial of Δ_< coincide. Then
γ_1(_) = h^∗_1(_)- rk() by <Ref>(ii)
= h_1(Δ_<)-rk() by <Ref>
= (f_0(Δ_<)-rk()) - rk() by definition of h_1
=2 · (|E|-rk()) since Δ_< has 2 · |E| vertices
=2 ·rk(^*).
§ FUTURE DIRECTIONS
We conclude the present paper with some questions.
Are generalized symmetric edge polytopes γ-positive? A positive answer would settle the conjecture by Ohsugi and Tsuchiya on symmetric edge polytopes <cit.>.
More modestly, one could try to prove or disprove that γ_2 is always nonnegative, analogously to the classical symmetric edge polytope case treated in <cit.>.
How do properties of the generalized symmetric edge polytope (e.g., its h^*-vector) change under operations on the associated matroid? Is there any way to use Seymour's characterization of regular matroids via 1-, 2- and 3-sums <cit.>?
Can one determine a formula for the h^*-vector of generalized symmetric edge polytopes analogous to the one found by Kálmán and Tóthmérész in <cit.>?
Are there “nice” classes of regular matroids for which the h^∗-polynomial of the associated generalized symmetric edge polytope is real-rooted?
To which extent can the formulas from <cit.> be generalized to the matroidal setting?
§.§ Acknowledgements
We wish to thank Emanuele Delucchi, Akihiro Higashitani, Hidefumi Ohsugi and Lorenzo Venturello for useful comments and discussions at various stages of this project. We are grateful to Matthias Walter for helping us with using the Combinatorial Matrix Recognition Library (currently available at <http://discopt.github.io/cmr/>); in particular, in our computations we made use of the total unimodularity test described in <cit.>. We also acknowledge the use of the Macaulay2 <cit.> package <cit.> by Justin Chen. Finally, we thank Marco Caselli and Lorenzo Venturello for their help with coding in SageMath <cit.>.
|
http://arxiv.org/abs/2307.07623v1 | 20230714204818 | Unveiling the Impact of Cognitive Distraction on Cyclists Psycho-behavioral Responses in an Immersive Virtual Environment | [
"Xiang Guo",
"Arash Tavakoli",
"T. Donna Chen",
"Arsalan Heydarian"
] | cs.HC | [
"cs.HC"
] |
Unveiling the Impact of Cognitive Distraction on Cyclists Psycho-behavioral Responses in an Immersive Virtual Environment
Xiang Guo, Arash Tavakoli, T. Donna Chen, and Arsalan Heydarian
Corresponding Author: Dr. Arsalan Heydarian, [email protected]
August 12, 2023
=============================================================================================================================================
The National Highway Traffic Safety Administration reported that the number of bicyclist fatalities has increased by more than 35% since 2010. One of the main reasons associated with cyclists' crashes is the adverse effect of high cognitive load due to distractions. However, very limited studies have evaluated the impact of secondary tasks on cognitive distraction during cycling. This study leverages an Immersive Virtual Environment (IVE) simulation to explore the effect of secondary tasks on cyclists' cognitive distraction by evaluating their behavioral and physiological responses. Specifically, by recruiting 75 participants, this study explores the effect of listening to music versus talking on the phone as standardized secondary tasks on participants' behavior (i.e., speed, lane position, input power, head movement) as well as physiological responses, including heart rate variability and skin conductance metrics. Our results show that listening to high-tempo music can lead to a significantly higher speed, a lower standard deviation of speed, and higher input power. This trend is more pronounced for cyclists who have a strong habit of daily music listening (> 4 hours/day). In the high cognitive workload situation (simulated hands-free phone talking), cyclists had a lower speed with less input power and less head movement variation. Our results also indicate that participants' HRV (HF, pnni-50) and EDA features (number of SCR peaks) are sensitive to cyclists' cognitive load changes in the IVE simulator.
Cycling Safety, Physiological Responses, Heart Rate, Skin Conductance, Cognitive Distraction
§ INTRODUCTION
§.§ Cyclist Crashes and Safety
The number of bicycle users is growing as an increasing number of cities encourage low-carbon transportation by investing in infrastructure to accommodate bicyclists <cit.>. This trend did not slow during the COVID-19 pandemic; in fact, bicycling levels are reported to have increased significantly in many countries <cit.>. However, bicyclist fatalities have also risen as the number of bicyclist road users has grown over the last decade. The National Highway Traffic Safety Administration reports that the number of bicyclist fatalities has increased by more than 35% since 2010 <cit.>. In addition to the increased number of cyclists on the road, several factors are believed to contribute to this alarming trend: lack of cycling infrastructure <cit.>, poor roadway designs <cit.>, reckless driving and cycling behavior <cit.>, distraction <cit.>, lack of public education <cit.>, and driver-centric vehicle design (e.g., smaller windows, wider pillars, larger headrests) <cit.>. Among these factors, distraction has been identified as one of the main causes of traffic accidents. In the US, nine percent of fatal crashes, 15 percent of injury crashes, and 15 percent of all police-reported motor vehicle traffic crashes in 2019 were reported as distraction-affected crashes, and 566 vulnerable road users, including pedestrians and pedal cyclists, were killed in such crashes <cit.>. These fatality and injury figures are likely underestimates, as only vehicle-related crashes are reported, and very limited information is currently available on the role of cyclist distraction in fatalities. Beyond the limited data sources, the number of published studies on distracted biking is also small. As a result, our knowledge about the effect of distraction on cycling is insufficient.
§.§ Cyclist Cognitive Distraction
Cyclist distraction refers to the phenomenon where a cyclist's focus and attention are diverted from the road and the cycling task. Previous studies have reported that distractions have a major prevalence among bike users and that they play a significant role in the prediction of the traffic crash rates of cyclists, through the mediation of risky behaviors <cit.>. Studying cyclists' behaviors under the influence of distraction can provide evidence for interventions to address safety-related issues.
Similar to driving distraction, cycling distraction can be categorized into three main types: visual (taking the eyes off the road), manual (taking the hands off the handlebar), and cognitive (taking the mind off cycling) <cit.>. Apart from changes in cyclists' physical or psychological state, such as fatigue or stress, involvement in any type of secondary task (a task that is not related to the main cycling task) is the main cause of distraction. This study focuses on cognitive distraction, as it is related to the most frequently reported secondary tasks during cycling, such as listening to music or talking on the phone through earphones <cit.>. This type of distraction can significantly impair a cyclist's ability to process information, respond to changing road conditions, and make safe decisions and maneuvers while cycling. For example, a study in real traffic examined the glance behavior of teenage cyclists while listening to music; the results indicated that a substantial percentage of participants cycling with music showed decreased visual performance <cit.>. Another study found that listening to music can significantly increase the rate at which cyclists miss auditory stop signals, and that completing a task on the mobile phone (both handheld and hands-free) resulted in increased response time to an auditory stop signal as well as reduced overall auditory perception <cit.>.
§.§ Existing Methods for Studying Cognitive Distraction
The current state of knowledge on cyclist distraction is mostly retrieved from surveys or observational studies. For example, an observational study in New York City shows that headphone use is the most prevalent distraction among local cyclists <cit.>. However, observational studies are unable to track cyclists' physiological changes, which helps to understand the amount of cognitive load. Additionally, observational studies do not provide the details of secondary tasks (e.g., headphone use can be either music listening or talking on the phone). Surveys from different areas around the world have been collected to study cyclists' distracted behavior, listening to music or talking with earphones have been identified as the most prevalent distractions <cit.>. However, the subjective response collected in the surveys does not always reflect the participant's real-world response due to hypothetical bias <cit.>. Another way to collect data is the naturalistic study which records cyclists' responses in real-world conditions <cit.>. Naturalistic study guarantees real-world data but it is limited by the potential safety risks, high costs, noise-diluted data, and difficulties in environmental control <cit.>. To overcome the shortcomings of the existing methods, high-fidelity bike simulators have been developed by different researchers. The prevalence of virtual reality (VR) or Immersive Virtual Environments (IVE) technology in the past decade further provides a low-cost and controllable solution for experimental study to evaluate the responses of cyclists to different roadway designs and conditions <cit.>.
The most frequent secondary tasks, listening to music and talking through earphones, can both be categorized as cognitive distractions. One of the main challenges in the quantitative analysis of cognitive distraction is the difficulty of measuring the workload required by a given task. To understand the mechanism of distraction, a standardized secondary task with different levels of workload is required in an experimental study. To our knowledge, no prior studies have applied such methods to cyclist distraction. In other research fields, several standardized secondary tasks have been developed to simulate different levels of workload. For instance, to simulate a phone conversation, an alternative mock cellphone task was used in a driving-related study as a cognitive distraction <cit.>. The mock cellphone task was designed to simulate the cognitive load of talking on the phone, and the impact of this type of task was reported to be similar to that of a hands-free cellphone conversation in a prior study <cit.>.
§.§ Physiological Responses under Cognitive Distraction
To objectively measure cognitive distraction, physiological responses are useful metrics to record and monitor within an experiment. Previous studies have used physiological responses as a proxy to detect distraction in humans <cit.>. Specifically, prior studies in transportation engineering, mostly in driving conditions, have used eye gaze patterns <cit.>, EEG <cit.>, electrodermal activity (EDA) <cit.>, heart rate (HR) <cit.>, heart rate variability (HRV) <cit.>, and skin temperature to detect drivers' distraction <cit.>. Here, we focus on cardiovascular and skin-related metrics. Cardiovascular metrics, including HR and HRV, can be retrieved through multiple devices such as an electrocardiogram (ECG) and photoplethysmography (PPG), where PPG is the technology most commonly used in smart wearable devices and can be applied in both naturalistic and experimental conditions <cit.>. Based on the heart signal retrieved through either of these devices, features of heart activity can be calculated that are generally referred to as heart rate variability (HRV) features. These features result from signal processing applied to the heart signal in the time, frequency, and nonlinear domains. An example of a time-domain HRV feature is the root mean square of successive differences between heartbeat (R-R) intervals, referred to as RMSSD. Examples of frequency-domain HRV metrics are the power of the high-frequency band (0.15 to 0.4 Hz) and the low-frequency band (0.04 to 0.15 Hz) of the HRV spectrum, referred to as HF and LF, respectively. Lastly, an example of a nonlinear-domain HRV feature is the entropy of the beat-to-beat intervals. Previous studies have shown that these features are correlated with certain human states, such as stress level (associated with a decrease in RMSSD) and cognitive load (an increase in HF) <cit.>.
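For illustration, the sketch below shows how such HRV features can be computed from an interbeat-interval series with standard scientific Python tools; the R-R series is synthetic, and the 0.04-0.15 Hz / 0.15-0.4 Hz band limits follow the definitions above. This is a minimal sketch, not the exact pipeline used in the study.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

# Synthetic (hypothetical) R-R interval series in milliseconds; in practice this would
# come from an ECG/PPG device such as the wearable used in this study.
rng = np.random.default_rng(0)
rr_ms = 800 + 40 * np.sin(0.3 * np.arange(300)) + 20 * rng.standard_normal(300)

# Time-domain features
diff = np.diff(rr_ms)
rmssd = np.sqrt(np.mean(diff ** 2))             # RMSSD
pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)    # pNN50: % of successive differences > 50 ms

# Frequency-domain features: resample the irregular R-R series onto an even grid,
# estimate the power spectral density, and integrate the LF and HF bands.
t = np.cumsum(rr_ms) / 1000.0                   # beat times in seconds
fs = 4.0                                        # resampling frequency in Hz
t_even = np.arange(t[0], t[-1], 1.0 / fs)
rr_even = interp1d(t, rr_ms, kind='cubic')(t_even)
freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
lf_band = (freqs >= 0.04) & (freqs < 0.15)
hf_band = (freqs >= 0.15) & (freqs < 0.40)
lf, hf = np.trapz(psd[lf_band], freqs[lf_band]), np.trapz(psd[hf_band], freqs[hf_band])
print(rmssd, pnn50, lf, hf)
```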
Electrodermal activity (EDA) or sometimes referred to as Galvanic Skin Response (GSR) have also been shown to be correlated with certain human states such as stress and workload. EDA signals are generally first decomposed into two main components of tonic and phasic. Tonic is the long-term changes in the signal, whereas phasic is the momentary changes in the EDA signal. The tonic level is then used to define the skin conductance level (SCL), and the phasic component is used to define the skin conductance responses (SCR). Both SCL and SCR have been shown to be correlated with higher cognitive load and stress level <cit.>.
Within bicycle research, the application of physiological sensors has mostly centered on detecting stress, comfort, and emotions <cit.>. Additionally, some naturalistic studies have been conducted to evaluate bicyclists' behavior and physiological responses in different contextual settings <cit.>. These preliminary studies revealed that psycho-physiological metrics (e.g., heart rate (HR), gaze variability, and skin conductance) are indicators of how cyclists perceive changes in different contextual settings. While previous studies have provided significant insights into using physiological responses for distraction detection in driving, as well as for cyclist state (e.g., stress and comfort) detection, they have rarely been used to detect cyclists' cognitive distraction. In other words, we still have a very limited understanding of bicyclists' physiological responses under different levels of cognitive distraction, especially in IVE studies.
§.§ Research Goals
The goal of this experiment is to study the effect of cognitive distraction on cyclist behavior. More specifically, we are interested in applying the standardized secondary task in the IVE bicycle simulator to simulate different levels of cognitive workload, and explore cyclists' physiological responses in different situations.
The research hypotheses are:
* Listening to high-tempo music and talking on the phone can be candidates for standardized secondary tasks and can be used to simulate different cognitive distractions in the IVE.
* Listening to high-tempo music and talking on the phone result in varying levels of behavioral performance such as lower speed in comparison to a baseline of no distraction-biking.
* Listening to high-tempo music and talking on the phone result in significantly different physiological responses such as different gaze/head movement variation, HRV, and EDA in comparison to baseline no-distraction condition.
§ METHODOLOGY
The study is reviewed and approved by the Institutional Review Board for the Social and Behavioral Sciences from University of Virginia (IRB-SBS Protocol # 2148). All experiments were performed in accordance with relevant named guidelines and regulations. Informed consent was obtained from all participants and/or their legal guardians.
§.§ Experiment Design
This research studies the effect of cognitive distraction on cyclist behavior in the proposed IVE bicycle simulator framework. The cognitive distraction will be triggered by both the standardized secondary task (mock phone conversation task) and the actual task (music listening). Each participant in this within-subject design will experience 3 different conditions (baseline, music listening, and mock phone conversation) in random order.
§.§.§ Cognitive Secondary Tasks - Mock Phone Conversation Task
As an alternative to a real cellphone conversation, a mock cellphone conversation task was used in this study as a cognitive distraction, particularly because typical conversations are difficult to control experimentally. While performing the distraction task, participants were instructed to listen to a series of generic English-language sentences, analogous to the previously validated grammatical reasoning task, and respond aloud with the subject, the object, and whether the sentence was plausible. The experimenter would remotely initiate the task with a button press at the beginning of each scenario and, similarly, terminate it a few seconds before the end of every scenario. For each sentence, participants were asked to reply aloud with the subject of the sentence, the object of the sentence, and whether or not the sentence was plausible. For example, for the sentence, “A child jumped a rope,” the correct answer is “Child, Rope, and Yes.” Similarly, an implausible sentence would be, “A cat baked the television,” and the correct answer would be: “Cat, Television, and No.” The mock cellphone task was designed so that the cyclist would be cognitively loaded, and as found in prior studies, the impact of this type of task is similar to that of a hands-free cellphone conversation <cit.>.
§.§.§ Cognitive Secondary Tasks - Music Listening Task
In addition to the standardized secondary task, an actual secondary task, music listening, is considered in this experiment as well. In the music listening condition, the participants are asked to listen to a popular song from 2022, a positive song with a tempo of 174 beats per minute. The track lasts 3:05. The song is played automatically by the experimenter during cycling, and the participants hear it through the earphones of the VR headset, similar to listening to music with earphones in the real world.
§.§ Road Environment in IVE
The IVE is developed on a 1:1 scale in Unity 3D game engine and steamVR platform, based on the Water Street corridor in Charlottesville, Virginia. Water Street is well-trafficked by bicyclists and is being considered for redesign by the city of Charlottesville due to the crash history in this area as shown in Fig.<ref>(a). The IVE road starts from the intersection of West Main Street and Ridge Street and ends at the intersection of East Water Street and 9th Street NE (the Belmont Bridge). Bike lanes are designed for the road in the IVE with a standard bike lane width of 4 feet (1.2m).
§.§ Apparatus
In addition to the virtual environment simulation in Unity, several software or devices have been utilized in this study as listed below, which is based on our prior framework introduced in Guo et al 2022 <cit.>:
§.§.§ VR devices
The HTC VIVE Pro Eye headset is connected wirelessly to the control computer in the lab. Two controllers are attached to the handlebar of the simulator, and the spatial location of the controllers is detected to reflect the turning movements. The squeezing value of the trigger button on the back side of the right controller is programmed for the braking of the bike.
With the C# scripts written in the Unity scenario, the cycling performance data such as speed, lane position, head movement, and braking data are collected from the headset and controllers.
§.§.§ Bike simulator
As can be viewed from Fig.<ref>(c), an average physical Trek Verve bike (without wheels) is assembled with a series of Wahoo Kickr Smart Trainer (Climb, Headwind, and ANT+) to receive pedaling power, and simulate resistance/headwind feedback based on current cycling speed. With the bike trainer, the pedaling input power data is available after each play.
§.§.§ Smartwatch
The Empatica E4 smartwatch is used to collect participants' heart rate data and EDA data. The smartwatch is connected to a smartphone via Bluetooth. The data collection is initialized and controlled by the smartphone. The clock on the smartphone is synchronized with the control computer before each participant's test.
§.§.§ video collection
As shown in Fig.<ref>(c), the OBS studio software integrates the three video collection components: two video recordings from cameras capturing the body position of the participant and one screen recording of the participant’s point of view in IVE.
§.§ Experiment Procedure
After signing the consent form, each participant was asked to put on a smartwatch before completing the pre-experiment survey. After finishing the pre-experiment survey, instructions are given on how to use the VR headset, controllers, and bike simulator. After the bike is adjusted to a comfortable position, the participant mounts the bike and puts on the headset. Next, the participant is guided through the eye tracker calibration. After the IVE system setup, the participant is placed into a familiarization scenario (without any vehicle traffic) to become accustomed to interacting with the IVE. In this environment, the participant can practice pedaling, steering, and braking and the practice procedure can be repeated until the participant feels comfortable. If the participant feels any motion sickness, they may stop the experiment at any point and still receive compensation for participation.
Once the participant is comfortable in the training environment, they experience the three design scenarios in random order, where each experiment trial lasts about two minutes, with a two-minute break between each scenario. Once the participant has completed all three scenarios, they are asked to complete the post-experiment survey. On average, each participant spends 30 minutes completing the experimental procedure.
§.§ Participants
75 participants were recruited for the experiment. Among them 40 are female, 33 are male, 1 participant identified as other and 1 participant didn't provide gender information. Most of the participants are local bicyclists, students, and faculty members from the University of Virginia. All participants are 18 or older and without color blindness. The mean age is 24.5 with a standard deviation of 4.7, and the median age is 24.5 years old as well (one participant didn't provide the age information).
§.§ Physiological Data Analysis
In order to analyze the physiological data, we have taken advantage of multiple modeling techniques and packages scripted in python.
§.§.§ Heart Rate Variability (HRV)
Based on the interbeat interval (IBI) data recorded through the Empatica E4, we can calculate HRV features. As mentioned previously, these features span the time, frequency, and nonlinear domains. We use the pyHRV <cit.> package in Python to calculate these features in all three domains for each participant during the experiment (e.g., HF and RMSSD). Note that some of the calculated features may not be applicable to short-term data collection (e.g., LF); thus, we only focus on the features that are suitable for a short period of time.
§.§.§ Electrodermal Activity (EDA)
In order to calculate the tonic and phasic components of the EDA signal, we first denoise the EDA signal recorded through the Empatica E4 by passing it through a Butterworth filter with a high-pass cutoff of 10 Hz. We then feed the resulting signal to the well-known NeuroKit2 package <cit.>. This package decomposes the signal into tonic and phasic components and calculates a variety of EDA-related features, such as the number of phasic skin conductance responses and the skin conductance level (SCL). We compare the number of SCR peaks across participants and conditions in the study.
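A minimal sketch of this step is shown below. The file name is hypothetical, the 4 Hz sampling rate is the nominal rate of the E4 EDA channel, and the NeuroKit2 calls reflect our understanding of its interface rather than the exact scripts used in the study.

```python
import pandas as pd
import neurokit2 as nk

SAMPLING_RATE = 4  # Hz, Empatica E4 EDA channel

# Hypothetical export of one participant's EDA recording for one scenario.
eda = pd.read_csv("participant_01_music_EDA.csv", header=None).iloc[:, 0].to_numpy()

# eda_process cleans the signal, decomposes it into tonic (SCL) and phasic (SCR)
# components, and detects SCR peaks.
signals, info = nk.eda_process(eda, sampling_rate=SAMPLING_RATE)

n_scr_peaks = int(signals["SCR_Peaks"].sum())   # number of SCR peaks in the scenario
mean_scl = float(signals["EDA_Tonic"].mean())   # average skin conductance level
print(n_scr_peaks, mean_scl)
```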
§.§ Statistical Analysis
In order to compare across conditions, we use Linear Mixed Effect Models (LMM) for their capability in addressing individual differences. Different people have different baseline values for their physiological signals. For instance, a person's baseline HR might be at 60 bpm, while another person's might be at 80 bpm. Additionally, the slope of change in a person's physiological signals across conditions can differ considerably from that of another participant. LMM is similar to linear regression in measuring the main effects in a study, but with the difference that it accounts for random variation across participants, referred to as random factors <cit.>, through random intercepts and random slopes. An LMM is defined as follows:
y = Xβ + bz +ϵ
In equation (<ref>), y is the dependent variable (e.g. number of skin conductance responses), X is the matrix of predictor variables (type of distraction), β is defined as the vector of fixed-effect regression coefficients, b is defined as the matrix of random effects, z is defined as the coefficients for each random effect, and ϵ is the error term. Additionally, we can define the elements of the b and ϵ matrices as follows:
b_k∼ N(0,ψ_k^2), Cov(b_k,b_k') = ψ_kk'
ϵ_ij∼ N(0,σ^2λ_ijj), Cov(ϵ_ij,ϵ_ij') = σ^2λ_ijj'
We use lme4 package in R <cit.> for applying LMM to the data.
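The models were fit with lme4 in R; for readers working in Python, an equivalent random-intercept sketch with statsmodels is shown below (the data frame and column names are hypothetical).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per participant x scenario.
df = pd.read_csv("cycling_outcomes.csv")  # columns: participant, scenario, mean_speed, ...

# Fixed effect: scenario (Baseline / Music listening / Mock phone conversation),
# with Baseline as the reference level; random intercept per participant.
model = smf.mixedlm("mean_speed ~ C(scenario, Treatment('Baseline'))",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```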
§ RESULTS
This section reports the results of the experiment. The following subsections describe the summary statistics (from the pre-and post-experiment surveys), the bicyclists’ physical behavior (cycling speed, input power, and lateral lane position), and physiological responses in different roadway designs.
§.§ Survey Response
§.§.§ Pre-experiment Survey
Participants' attitude towards cycling is collected in the pre-experiment survey. 74 participants answered the question about what types of bicyclists they are. The majority of the participants have a positive attitude towards cycling, 6 participants (8.11%) indicated their attitude towards cycling as "No way, no how" - I do not ride a bike, 25 participants (37.84%) identified themselves as "Interested but Concerned" - I like the idea of riding but have concerns. The rest of the participants had a higher preference for cycling as 28 (33.78%) indicated themselves as "Enthused and Confident" -I like to ride and will do so with dedicated infrastructure and the remaining 15 (20.27%) chose "Strong and Fearless" - I will ride anywhere, no matter the facilities provided.
The engagement of secondary tasks both in daily life and during cycling is also collected in the pre-experiment survey. We asked the participants to estimate how many hours they spend in music listening and phone usage, as well as the frequency of music listening, and phone talking when they ride a bike. The distribution of daily hours spend on music listening and phone are displayed in Figure <ref>. The average music listening hours are 2.82 hours (sd = 2.88 hours). And unsurprisingly, the average hours spent on the phone is higher (mean = 4.30 hours, sd = 2.07 hours). Participants' music listening and phone talking frequency while riding a bike is self-reported with 5-point Likert scales, with 5 options of "Never (<10%)", "Seldom (about 25%)", "Sometimes (about 50%)", "Often (about 75%)", and "Always (>90%)". Participants were asked to choose an option that is closest to them. Table <ref> summarizes the results for these two questions. The participants have a higher frequency of music listening than talking on the phone while biking. More than half of the participants admitted that they have music-listening behavior while biking, and only about 25% of the participants reported that they had the experience talking on the phone while biking before.
§.§.§ Post-experiment Survey
In the post-experiment survey, participants' stated preferences over the three scenarios are collected in three aspects: safety, comfort and distraction. For each question, the answer is to choose their subjective ratings from a 5-point Likert scale.
For subjective safety rating, the Baseline scenario is rated as the safest scenario with an average score of 4.31/5.0, followed by the Music Listening (3.93/5.0), then the Mock phone conversation scenario (2.95/5.0), the differences between all the three scenarios are significant, as shown in Figure <ref> (Baseline v.s. Music listening, p = 0.00297; Baseline v.s. Mock phone conversation, p < 0.0001; Music listening v.s. Mock phone conversation, p < 0.0001).
For subjective comfort rating, the scores of the Baseline (4.35/5.0) and Music listening (4.32/5.0) scenarios are close to each other, and both are significantly higher than the Mock phone conversation scenario (2.88/5.0), with both p values smaller than 0.0001, as shown in Figure <ref>.
For the subjective distraction rating, the Mock phone conversation scenario is rated as the most distracting with an average score of 3.74/5.0, followed by Music listening (2.42/5.0) and then the Baseline scenario (1.64/5.0); the differences between all three scenarios are significant, as shown in Figure <ref> (all p values are smaller than 0.0001).
§.§ Cycling Performance
§.§.§ Speed
For the mean speed, as indicated in Figure <ref>-a, there is a significant difference between the Baseline and the Mock phone conversation scenarios (β = -1.262, SE = 0.435, p = 0.0428) and between the Music listening and the Mock phone conversation scenarios (β = 1.178, SE = 0.312, p = 0.0007). Bicyclists' mean speed in the three scenarios (Baseline, Music listening, and Mock phone conversation) are 18.6 km/h, 19.1 km/h, and 17.9km/h, respectively. Age group differences are found only in the Mock phone conversation scenario, where the younger group (19.3 km/h) has a significantly higher cycling speed (β = 2.903, SE = 1.058, p = 0.00783) than the older group (16.7 km/h), as indicated in Figure <ref>-b.
For the standard deviation of speed, the results show there is a significant difference between the Baseline and the Music listening scenarios (β = -0.30669, SE = 0.14391, p = 0.0348), as shown in Figure <ref>-a. Bicyclists' standard deviation of speed in the three scenarios (Baseline, Music listening, and Mock phone conversation) are 1.92 km/h, 1.74 km/h, and 1.83 km/h, respectively. For the Music listening scenario, it is found that participants who listen to music a lot (>4 hours daily) have a lower standard deviation of speed (β = -0.572, SE = 0.226, p = 0.0142), as shown in Figure <ref>-b.
§.§.§ Lateral Lane Position
For the lateral lane position, no significant differences are found between the scenarios. The average lateral lane positions for the three scenarios are Baseline 0.575 m, Music listening 0.569 m, and Mock phone conversation 0.592 m. However, significant differences are found between participants with different attitudes toward cycling. As can be seen from Figure <ref>-a, participants who hold more positive attitudes toward cycling ride farther to the left (closer to the vehicle lane).
§.§.§ Input Power
For the mean input power, the average input power in the Music listening scenario (50.6 W) is 16% higher than in the Mock phone conversation scenario (43.8 W) and 7.5% higher than in the Baseline (47.1 W). There is a significant difference between the Baseline and the Mock phone conversation scenarios (β = -6.658, SE = 2.443, p = 0.00725) and between the Music listening and the Mock phone conversation scenarios (β = 6.72, SE = 1.75, p = 0.0006), as shown in Figure <ref>-a. Age group differences are found only in the Mock phone conversation scenario, where the younger group (50.4 W) has a significantly higher cycling input power (β = 11.864, SE = 5.712, p = 0.0418) than the older group (37.8 W), as indicated in Figure <ref>-b.
§.§.§ Head Movement
The head movement result shows a significant difference between the Baseline and the Mock phone conversation scenario with β = -0.00636, SE = 0.00220, p = 0.00435, as shown on Figure <ref>-a, participants had a lower variation of head movement direction in the Mock phone conversation scenario than the Baseline. Additionally, in the music listening scenario, male participants have a higher head movement variation than female participants with β = 0.0122, SE = 0.00538, p = 0.0266 (Figure <ref>-b).
§.§ Physiological Response
§.§.§ HR/HRV
The comparison of the mean heart rate across conditions indicates that there are no significant differences between the three scenarios at a 95% confidence level. The mean HR values (beats per minute) of the Baseline, Music listening, and Mock phone conversation scenarios are 92.89, 92.07, and 90.66 bpm, respectively.
For HF-HRV, the LMM model results show that cyclists in the Mock phone conversation scenario have higher HF-HRV than the baseline (β = 232.31, SE = 114.40, p = 0.0454), as indicated in Figure <ref>.
Another interesting finding from HRV is in the pnni-50 feature. Cyclists in the mock phone conversation scenario have significantly lower pnni-50 values than the baseline with β = -4.777, SE = 1.785, p = 0.00849 (Figure <ref>a). In addition, pnni-50 is significant with cyclists' subjective safety ratings. As shown in Figure <ref>b, cyclists with higher levels of subjective safety ratings also have higher pnni-50 values in general.
§.§.§ EDA
The numbers of SCR peaks for the Baseline, Music listening, and Mock phone conversation scenarios are 5.74, 6.30, and 6.57, respectively. The Mock phone conversation scenario has a significantly higher number of SCR peaks than the Baseline (β = 0.771, SE = 0.341, p = 0.0258), as shown in Figure <ref>. The age factor is also found to be significant, with the younger group showing a lower number of SCR peaks than the older group (β = -0.881, SE = 0.341, p = 0.0258).
The mean amplitude of SCR peaks for the Baseline, Music listening, and Mock phone conversation scenarios are 0.0707 μ S, 0.0695μ S, and 0.0832 μ S. The Mock phone conversation scenario appears to have a slightly higher mean amplitude of SCR peaks than the Baseline, but the differences are not significant. No other significant differences are found at a 95% confidence level.
§ DISCUSSION
We measured three subjective ratings in the post-experiment survey: safety, comfort, and distraction. For the two types of secondary tasks, not surprisingly, the Mock phone conversation is rated as the most distracting scenario, as it requires both listening to the audio (input) and speaking the response aloud (output). For music listening, cyclists only need to listen to the audio (input). The safety rating is correlated with the distraction rating: scenarios with lower distraction ratings have higher safety ratings. In terms of comfort rating, no significant differences are found between the Baseline and Music listening scenarios, and both have a higher comfort rating than the Mock phone conversation scenario.
Different levels of cognitive distraction have different effects on cycling behavior and physiological responses. For cycling performance, the Music listening scenario has a significantly higher average speed and input power than the Mock phone conversation scenario. Because cyclists give the Music listening scenario a lower subjective distraction rating, they are more confident in keeping a higher speed with more input power in the IVE, although the safety rating of Music listening is lower than that of the Baseline.
In a previous virtual reality-based distracted cycling study, it was found that participants under a low perceptual load (visual distraction) in VR cycled at a higher intensity despite greater pain <cit.>. An earlier real-road study also reported that telephoning coincided with reduced speed, reduced peripheral vision performance, and increased risk and mental effort ratings <cit.>. The VR study created different levels of perceptual load by displaying different items in the virtual environment for an item detection task. In our study, we generated different levels of cognitive distraction, and with low cognitive load (music listening) we observe an effect similar to that of a visual distraction. With high cognitive load (mock phone conversation), the adaptive cycling behavior includes a lower speed, less input power, and less head movement, indicating a degraded ability to perceive the surrounding environment; this aligns with previous findings that drivers under cognitive load made fewer saccades, spent more time looking centrally, and spent less time looking to the right periphery <cit.>.
However, in terms of lateral lane position, our findings on cycling performance under the influence of cognitive distraction differ from those reported for driving. Based on the findings from our previous experiment <cit.>, we designed bike lanes along the whole road in this experiment, unlike the real road, where the bike lane is shared with vehicles. The introduction of bike lanes in the previous experiment was found to help bicyclists keep closer to the road curbside. In this study, a similar effect is found, as there are no significant differences in lateral lane position between scenarios. In driving-related studies, by contrast, cognitive load led to a diminished standard deviation of lateral position, implying better lane-keeping performance; however, a systematic comparison of time-to-line crossing calculations suggested a degraded safety margin of lane keeping <cit.>.
Music listening has been found to be related to emotional arousal, which has the potential to affect cycling performance. For example, listening to preferred music showed no ergogenic benefit during repeated anaerobic cycling sprints when compared to non-preferred music; however, preferred music increased motivation to exercise and decreased perceived exertion <cit.>. The cycling task in this experiment is low intensity, and the survey responses indicate high familiarity with and preference for the song played in the Music listening scenario, which may help to explain the increased speed and input power. Cyclists' engagement with the music also leads to a decreased standard deviation of speed; they keep a high speed while avoiding additional speed changes during the whole ride.
For the physiological response, no significant results are found for the mean HR, but the HR change point results show that there are fewer heart rate change points in the Mock phone conversation scenario than in the Baseline and Music listening scenarios. While there are multiple HRV features, research shows that HRV-HF can be used as a short-term measure of cognitive load <cit.>. The HRV-HF is slightly higher in the mock phone conversation than in the music and baseline conditions. Previous studies showed a positive correlation between cognitive workload and HRV measures, including the HRV-HF feature <cit.>. This finding suggests that the mock phone conversation imposes a higher cognitive load than the other two scenarios.
We also observe a slightly lower HRV pnni-50 value for the mock phone conversation scenario. Research shows that pnni-50 is correlated with parasympathetic nervous system (PNS) activity <cit.>. The lower pnni-50 and the resulting lower PNS activity during the mock phone conversation may indicate a higher workload level for cyclists in this condition relative to the other conditions <cit.>.
The mock phone conversation also has a higher number of SCR peaks. Previous research shows that a higher skin conductance response is correlated with an increase in cognitive load. Our results therefore indicate, based on the skin conductance activity, that the mock phone conversation induces an increased cognitive load <cit.>. Collectively, the physiological responses show that these measures can help reliably differentiate between levels of cognitive load resulting from cognitive distraction during cycling.
Demographic differences are found in several aspects. Generally speaking, participants who hold more positive attitudes toward cycling ride further to the left (closer to the vehicle lane), possibly because they are more confident about their ability to control the bike. The HR change-point data reveal a gender difference: male participants show a significantly lower frequency of increasing heart rate than female participants. These differences also vary across levels of cognitive load. In the Music listening scenario, the younger group has a significantly higher cycling speed and input power, as they are more used to the selected music.
Prior research has demonstrated that people's judgments and behaviors are relative and depend on their context <cit.>, so the frequency at which people normally engage in a behavior is likely to affect how distracting this behavior is while driving. In the Mock phone conversation scenario, participants who listen to music a lot (>4 hours daily) have a lower standard deviation of speed, indicating that they are more engaged in the music. The physiological responses also suggest that female cyclists are more affected by the cognitive load in the Mock phone conversation scenario, as (1) male participants have a higher head movement variation than female participants, and (2) female participants have a higher mean amplitude of SCR peaks. These results highlight the groups that require more attention when studying cyclist cognitive distraction: young people who listen to music a lot in their daily life, and female cyclists under higher cognitive distraction such as talking on the phone.
§ CONCLUSION
This research explores the effect of different cognitive distractions on bicyclists' physiological and behavioral changes. In an immersive virtual environment, a bicycle simulator with multiple physiological sensing devices was used to collect bicyclists' behavioral and physiological responses on the same road design with bike lanes. Data collection includes demographic information (age, gender, biking attitude), engagement in secondary tasks such as music listening and talking on the phone during daily life or cycling, cycling performance in the simulator (speed, lane position, input power, head movement), and physiological responses (heart rate, skin temperature). Results from 75 participants who rode the bicycle simulator through a virtual environment indicate that (1) cyclists have a significantly higher speed, a lower standard deviation of speed, and higher input power in the Music listening scenario; (2) when talking on the phone, cyclists have a lower speed, less input power, and less head movement variation; (3) when listening to music, cyclists with a strong habit of daily music listening (>4 hours/day) are more engaged in the music, with a significantly lower standard deviation of speed, while male cyclists stay closer to the vehicle lane and have a higher head movement variation; (4) lane position is not affected by the scenario, which may be an effect of introducing bike lanes in the environment; and (5) several HRV (HF, pnni-50) and EDA features (number of SCR peaks) are sensitive to cyclists' cognitive load changes in the IVE simulator.
[Photo of Xiang Guo] Dr. Xiang Guo earned his Ph.D. from the Engineering Systems and Environment department at the University of Virginia. He received his B.Sc. and M.Sc. degrees in transportation engineering from Beihang University, Beijing, China. His research interests include traffic safety, human factors, human performance modeling, virtual reality, and mixed reality.
[Photo of Arash Tavakoli] Dr. Arash Tavakoli is a Postdoctoral Scholar at Stanford University, Department of Civil and Environmental Engineering. He graduated with a Ph.D. in Civil Engineering from the University of Virginia. He has also earned his B.Sc. and M.Sc. in Civil Engineering from the Sharif University of Technology and Virginia Tech, respectively. Arash's research interest lies at the intersection of transportation engineering, computer science, and psychology.
[Photo of T. Donna Chen] Dr. T. Donna Chen is an Assistant Professor in the Department of Engineering Systems and Environment at the University of Virginia. Her research focuses on sustainable transportation systems (in particular modeling the impacts of new vehicle technologies on traveler behavior and the environment), travel demand modeling, transportation economics, and crash safety. Prior to joining academia, Dr. Chen worked in the consulting industry as a transportation planning engineer and has experience with roadway design, cost estimation, and traffic operation analyses.
[Photo of Arsalan Heydarian] Dr. Arsalan Heydarian is an Assistant Professor in the Department of Engineering Systems and Environment as well as the UVA LINK LAB. His research focuses on user-centered design, construction, and operation of intelligent infrastructure with the objective of enhancing sustainability, adaptability, and resilience in future infrastructure systems. Specifically, his research can be divided into four main research streams: (1) intelligent built environments; (2) mobility and infrastructure design; (3) smart transportation; and (4) data-driven mixed reality. Dr. Heydarian received his Ph.D. in Civil Engineering from the University of Southern California (USC), his M.Sc. in Systems Engineering from USC, and his B.Sc. and M.Sc. in Civil Engineering from Virginia Tech.
|
http://arxiv.org/abs/2307.05544v1 | 20230708211940 | Coupling high-overtone bulk acoustic wave resonators via superconducting qubits | [
"Wayne Crump",
"Alpo Välimaa",
"Mika A. Sillanpää"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall"
] |
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
In this work, we present a device consisting of two coupled transmon qubits, each of which are coupled to an independent high-overtone bulk acoustic wave resonator (HBAR). Both HBAR resonators support a plethora of acoustic modes, which can couple to the qubit near resonantly. We first show qubit-qubit interaction in the multimode system, and finally quantum state transfer where an excitation is swapped from an HBAR mode of one qubit, to an HBAR mode of the other qubit.
Coupling high-overtone bulk acoustic wave resonators via superconducting qubits
Mika A. Sillanpää
===============================================================================
Hybrid quantum systems seek to combine strengths and offset weaknesses of different quantum technologies in order to improve capability beyond that of any one technology. Superconducting circuits are one of the more mature quantum technologies at this stage and have been integrated with many other systems due to the relative ease in design and fabrication as well as good coherence times <cit.>.
Many different acoustic systems have been integrated with superconducting circuits such as nanomechanical oscillators <cit.>, phononic crystals <cit.>, bulk acoustic wave systems <cit.> and surface acoustic wave systems <cit.>. Acoustic resonators can offer great coherence properties <cit.> as well as smaller mode volumes due to the relation between wave velocity and wavelength, with the difficulty coming in coupling these resonators strongly with electromagnetic systems.
The strong coupling of acoustic modes with superconducting qubits has resulted in many experiments exploring the quantum nature of mechanical oscillations, with experiments demonstrating number splitting <cit.>, the creation of non-classical states in the acoustic mode <cit.>, Landau-Zener-Stückelberg interference <cit.>, and entanglement <cit.>. The ability to prepare acoustic resonators in arbitrary quantum states opens up the possibility of using them in applications such as quantum memories due to their coherence properties and insensitivity to electromagnetic noise.
High-overtone bulk acoustic wave resonators (HBAR) offer access to mechanical modes in the GHz regime, making them attractive for integration with superconducting qubits. The piezoelectric interaction enables coupling in the strong regime, and the acoustic state can be controlled and read out using the qubit. The system has been implemented using 3D <cit.> and 2D <cit.> transmon architectures with part or all of the qubit capacitor directly patterned on the piezo layer of the HBAR. This was later improved in both cases by using a flip-chip design <cit.>, which has led to the current state of the art <cit.>. Experiments on these systems have demonstrated the creation of non-classical multiphonon states <cit.>, dispersive readout for a parity measurement of the mechanical mode <cit.>, and sideband control of the mechanical modes <cit.>.
Work thus far has focused on the coupling of a qubit to a single HBAR device supporting a set of acoustic modes. In this work we couple two complete qubit-HBAR systems together via a qubit-qubit interaction and transfer excitations within the system, including between the HBAR modes. This demonstrates the possibility of integrating multiple HBAR devices into quantum circuits, enabling the exploration of much larger and more complex systems.
The system contains two qubits that are coupled to each other as well as individually coupled to a set of HBAR modes. The qubit-mode couplings can be described by the Jaynes-Cummings model, and the qubit-qubit coupling is capacitive and therefore expected to take the iSWAP form
<cit.>. The system as a whole can then be described by the Hamiltonian:
H/ħ = ω_1/2σ_(z,1) + ω_2/2σ_(z,2) + J (σ_(+,1)σ_(-,2) + σ_(-,1)σ_(+,2))
+ ∑_m [ ω_(m,1)( a_(m,1)^† a_(m,1) + 1/2) + g_(m,1)(a_(m,1)^†σ_(-,1) + a_(m,1)σ_(+,1))]
+ ∑_n [ ω_(n,2)( a_(n,2)^† a_(n,2) + 1/2) + g_(n,2)(a_(n,2)^†σ_(-,2) + a_(n,2)σ_(+,2))] ,
where ω_1 and ω_2 are the qubit frequencies, J is the qubit-qubit coupling, ω_(m,1) and ω_(n,2) are the HBAR mode frequencies corresponding to their respective qubits, and g_(m,1), g_(n,2) are the couplings to the HBAR modes. The σ_(i,j) are the Pauli operators and a_m, a_m^† are the annihilation and creation operators.
In order to theoretically analyze the experiments described below, we determine the time evolution of the system using the Lindblad master equation. We include the qubits' decay and dephasing, as well as mechanical mode decay.
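As an illustration of this approach, the following sketch (our own, not the authors' code) integrates the Lindblad master equation for a reduced version of the Hamiltonian above using QuTiP, keeping a single HBAR mode per qubit. The couplings and lifetimes are the values quoted later in the text; zero detunings, the Fock-space truncation, and the omission of pure dephasing are simplifying assumptions.

```python
# Lindblad sketch of a reduced two-qubit / two-mode version of Eq. (1).
import numpy as np
from qutip import tensor, qeye, destroy, basis, mesolve

N = 3                                           # Fock truncation per HBAR mode
sm_local = basis(2, 0) * basis(2, 1).dag()      # qubit lowering operator |0><1|

def embed(op_local, site, dims=(2, 2, N, N)):
    """Embed a local operator at `site` of (qubit1, qubit2, mode1, mode2)."""
    ops = [qeye(d) for d in dims]
    ops[site] = op_local
    return tensor(ops)

sm1, sm2 = embed(sm_local, 0), embed(sm_local, 1)
a1, a2 = embed(destroy(N), 2), embed(destroy(N), 3)

g1 = 2 * np.pi * 2.55e6 / 2                     # half of the quoted 2g splittings (rad/s)
g2 = 2 * np.pi * 2.85e6 / 2
J = 2 * np.pi * 16.7e6 / 2

H = (J * (sm1.dag() * sm2 + sm1 * sm2.dag())
     + g1 * (a1.dag() * sm1 + a1 * sm1.dag())
     + g2 * (a2.dag() * sm2 + a2 * sm2.dag()))

# Collapse operators from the quoted qubit and mechanical T1 values
# (pure dephasing omitted for brevity).
c_ops = [np.sqrt(1 / 2.2e-6) * sm1, np.sqrt(1 / 2.41e-6) * sm2,
         np.sqrt(1 / 380e-9) * a1, np.sqrt(1 / 320e-9) * a2]

psi0 = tensor(basis(2, 1), basis(2, 0), basis(N, 0), basis(N, 0))  # qubit 1 excited
tlist = np.linspace(0.0, 2e-6, 401)
res = mesolve(H, psi0, tlist, c_ops,
              e_ops=[sm1.dag() * sm1, sm2.dag() * sm2, a1.dag() * a1, a2.dag() * a2])
# res.expect holds the qubit and phonon populations versus time.
```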
Figure <ref> shows an optical image of the device used for the experiments.
The device consists of a superconducting circuit with two qubits, each with its own readout, flux bias control and excitation lines. The qubits have a capacitive coupling to each other, as well as to the HBAR flip chip that covers both. Each qubit has a round pad, around 80 μm in diameter, on its bottom arm, which defines the capacitive coupling to the HBAR chip. The circuit was patterned using electron-beam lithography and metallised with evaporated aluminium. Double-angle evaporation was used to create the Josephson junctions for the qubits.
The HBAR flip chip consists of a 900 nm AlN piezo layer, a 250 μm sapphire layer and a 60 nm Mo layer in-between to act as a ground plane to enhance the coupling to the mechanical modes <cit.>. The HBAR was placed by hand onto the circuit chip and glued with standard epoxy.
The qubit frequencies can be tuned in the range 3.7-4.5 GHz, and the qubits have readout resonator frequencies of 6.230 GHz and 6.013 GHz. The operating points of the qubits were chosen to maximise their coherence properties, and hence they operate at or close to their minimum frequencies, as shown in figure <ref>.
The bottom two plots of figure <ref> show two-tone measurements sweeping the qubit frequencies in the neighbourhood of their operating frequencies chosen for later experiments. The operating frequency of qubit 1 was set near its minimum at ω_1,OP/2π = 3.7778 GHz and qubit 2 at its minimum at ω_2,OP/2π = 3.6673 GHz. The many small anticrossings occur when a qubit is sweeping past an HBAR mode, while the larger anticrossing at 3.778 GHz seen in the data for qubit 2 corresponds to the qubit-qubit coupling. The spacing between HBAR modes (free spectral range, FSR) is around 22 MHz which corresponds well with the thickness of the HBAR sapphire layer. The dashed lines show the eigenvalues according to equation <ref>.
At the qubits' respective operating points, they had T_1 values of 2.2 μs and 2.41 μs, as well as T_2 values of 4.41 μs and 1.02 μs. Their respective 2g couplings to their HBAR modes were 2.55 MHz and 2.85 MHz, with mechanical T_1 values of 380 ns and 320 ns. The system had a qubit-qubit 2g coupling of 16.7 MHz.
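For orientation, these splittings set the time scales of the swap dynamics discussed below. The back-of-the-envelope sketch here (our own, not from the paper's analysis) converts the quoted values into on-resonance swap times, π/(2g) for a Jaynes-Cummings vacuum Rabi swap and π/(2J) for the qubit-qubit iSWAP.

```python
# Swap times implied by the quoted avoided-crossing splittings.
import numpy as np

for label, splitting_hz in [("qubit 1 <-> HBAR (2g = 2.55 MHz)", 2.55e6),
                            ("qubit 2 <-> HBAR (2g = 2.85 MHz)", 2.85e6),
                            ("qubit <-> qubit (2J = 16.7 MHz)", 16.7e6)]:
    g = 2 * np.pi * splitting_hz / 2            # coupling in rad/s
    print(f"{label}: full swap in ~{np.pi / (2 * g) * 1e9:.0f} ns")
# Roughly 196 ns, 175 ns and 30 ns, i.e. shorter than the quoted
# mechanical T1 values of 380 ns and 320 ns.
```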
Figure <ref> shows a vacuum Rabi oscillation experiment where an excitation is swapping between an initially excited qubit and its coupled mechanical modes. In panels (a,b) qubit 2 is being controlled and measured and we see vacuum Rabi oscillations with the mechanical modes (red arrows) and also with the other qubit (blue arrows), corresponding with the anticrossings seen in figure <ref> bottom right. In figure <ref> (c,d) qubit 1 is controlled and experiences vacuum Rabi oscillations with its coupled mechanical modes following the anticrossings seen in figure <ref> bottom left. Since the flux is tuned in the positive direction, it first sweeps on resonance with the lower mode and then with the upper mode seen in figure <ref> bottom right.
If one looks closely, the vacuum Rabi oscillation fringes can be seen to be asymmetric, especially in figure <ref> (a). The source of this asymmetry is unknown, and it results in deviations from the theory at later simulation times. Some slight asymmetry could be generated for the nearest mode by including the effect of the π pulse explicitly in the simulations, but this was not enough to reproduce the long tail of the fringes from the mode nearest the qubit operating point seen in figure <ref> (a), which extends very far, up to where qubit 1 is. It can also be seen in figure <ref> (a) that the vacuum Rabi oscillations with qubit 1 show similarly extended fringes on the right side. This behaviour may be related to the same phenomenon seen in the frequency domain, where at the avoided crossing the upper branch has less weight than the lower branch. It is possible that at least some of the asymmetry is caused by pulse distortion <cit.>.
The line cuts in figure <ref> (b) show a double-oscillation feature that occurs when qubit 2 is near the qubit 1 frequency. This is because the excitation undergoes Rabi oscillations with both the other qubit and the nearby acoustic modes at the same time, but on different time scales, hence the multiple-oscillation feature.
It is important to determine whether or not the qubits couple to the same set of acoustic modes. The issue is nontrivial: on one hand, the qubits are in close proximity to each other and share the same HBAR chip, which would point to delocalized acoustic modes; on the other hand, one could argue that the electric field of either qubit should confine the HBAR mode only below its own electrode. We attempted to carry out finite-element simulations; however, a full three-dimensional solution was beyond reach. In two dimensions and with a limited overtone number, we saw indications of a delocalized acoustic mode, with the study showing that moving the qubit coupling pad changed the strength of coupling to modes of different lateral profile. Experimentally, the issue cannot immediately be resolved in spectroscopy, since the HBAR spectral lines in figure <ref> are equal within measurement uncertainties, which is, however, expected based on the geometry. A time-domain experiment was done to confirm that the qubits couple to their individual sets of acoustic modes. This was done by swapping an excitation from qubit 1 to its acoustic mode at 3.788 GHz, then tuning qubit 1 away while tuning qubit 2 on resonance with this mode. The experiment found no response, so we conclude that the qubits indeed couple to separate modes, with any stray coupling being too weak to observe.
Finally, we demonstrate the swapping of an excitation through the degrees of freedom of the system. Figure <ref> shows the pulse sequence and measured data. The excitation swaps from the 3.7885 GHz HBAR mode coupled to qubit 1 all the way to various HBAR modes coupled to qubit 2. The resulting measurement data are similar to figure <ref> (a), as the last part of the pulse sequence is similar to that experiment; however, this excitation has travelled from an acoustic mode coupled to the opposite qubit, which is why the initial excited-state population is reduced by decoherence.
Now that we have shown the ability to transfer excitations around the system, we would in principle be able to create an entangled state between arbitrary acoustic modes. However, due to the limited coherence of the system, we were not able to measure this in practice. One needs to measure the entangled modes simultaneously under a series of tomography pulses in order to produce the density matrix of the system (see, for example, <cit.>). This was not straightforward in our system, as the acoustic modes are coupled to different qubits, meaning we would need to read out the acoustic modes in single shot to be able to correlate the results. We are limited both by our single-shot readout fidelity of <60%, and by not being in the strong dispersive regime, which would require acoustic T_1 times of 8 μs at our coupling magnitudes.
A possible simplification is to measure only an entangled state that does not occupy number states higher than |1⟩, so that one can swap the state back to the qubits and measure them. Due to the low readout fidelity, we have to use an ensemble measurement. There is a tomography pulse scheme for measuring the two-qubit density matrix using an ensemble measurement <cit.>. This requires an appropriate two-qubit gate as part of the tomography pulse scheme, in our case an iSWAP pulse. The calibration of this iSWAP pulse was problematic, with a fidelity of 55%, which was not sufficient for the two-qubit tomography. We estimate that a gate fidelity higher than roughly 70% would be required to perform the measurement.
In order to improve the fidelity of single- and two-qubit gates in the system, the FSR should be larger than the coupling by a factor of at least 20, so that a qubit parked between two modes interacts with them only dispersively. The FSR should also be larger than the inverse pulse widths, so that the pulses do not excite nearby mechanical modes. Longer coherence times for both the qubits and the acoustics are important towards this end. The ideal solution would be the development of a tunable coupler, to be able to selectively couple to modes of interest, which is important for using HBARs in quantum information processing.
In conclusion, we have fabricated and measured a sample consisting of two qubits, each coupled to an individual set of high-overtone bulk acoustic (HBAR) modes as well as to each other. An excitation was swapped from an HBAR mode coupled to one qubit to an HBAR mode coupled to the other qubit. This demonstrates the possibility of integrating multiple HBAR devices into a superconducting circuit, where complex quantum states could be stored across these devices.
We would like to thank Mikael Kervinen for useful discussion. We acknowledge the facilities and technical support of Otaniemi research infrastructure for Micro and Nanotechnologies (OtaNano) that is part of the European Microkelvin Platform. This work was supported by the Academy of Finland (contracts 307757), by the European Research Council (101019712), and by the Wihuri Foundation. We acknowledge funding from the European Union's Horizon 2020 research and innovation program under the QuantERA II Programme (13352189). The work was performed as part of the Academy of Finland Centre of Excellence program (project 336810).
[Clerk2020hybrid] A. A. Clerk, K. W. Lehnert, P. Bertet, J. R. Petta, and Y. Nakamura, "Hybrid quantum systems with circuit quantum electrodynamics," Nature Physics 16, 257-267 (2020).
[Lehnert2008Nph] C. A. Regal, J. D. Teufel, and K. W. Lehnert, "Measuring nanomechanical motion with a microwave cavity interferometer," Nature Physics 4, 555-560 (2008).
[Teufel2011a] J. D. Teufel, D. Li, M. S. Allman, K. Cicak, A. J. Sirois, J. D. Whittaker, and R. W. Simmonds, "Circuit cavity electromechanics in the strong-coupling regime," Nature 471, 204-208 (2011).
[OConnell2010] A. D. O'Connell, M. Hofheinz, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, J. M. Martinis, and A. N. Cleland, "Quantum ground state and single-phonon control of a mechanical resonator," Nature 464, 697-703 (2010).
[Safavi2019Fock] P. Arrangoiz-Arriola, E. A. Wollack, Z. Wang, M. Pechal, W. Jiang, T. P. McKenna, J. D. Witmer, R. Van Laer, and A. H. Safavi-Naeini, "Resolving the energy levels of a nanomechanical oscillator," Nature 571, 537-540 (2019).
[SchoelkopfHBAR2017] Y. Chu, P. Kharel, W. H. Renninger, L. D. Burkhart, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, "Quantum acoustics with superconducting qubits," Science 358, 199-202 (2017).
[kervinen_interfacing_2018] M. Kervinen, I. Rissanen, and M. Sillanpää, "Interfacing planar superconducting qubits with high overtone bulk acoustic phonons," Physical Review B 97, 205443 (2018).
[Delsing2014] M. V. Gustafsson, T. Aref, A. F. Kockum, M. K. Ekström, G. Johansson, and P. Delsing, "Propagating phonons coupled to an artificial atom," Science 346, 207-211 (2014).
[Nakamura2017] A. Noguchi, R. Yamazaki, Y. Tabuchi, and Y. Nakamura, "Qubit-assisted transduction for a detection of surface acoustic waves near the quantum limit," Phys. Rev. Lett. 119, 180505 (2017).
[moores_cavity_2018] B. A. Moores, L. R. Sletten, J. J. Viennot, and K. W. Lehnert, "Cavity quantum acoustic device in the multimode strong coupling regime," Physical Review Letters 120, 227701 (2018).
[Cleland2019PhEntangl] A. Bienfait, K. J. Satzinger, Y. P. Zhong, H.-S. Chang, M.-H. Chou, C. R. Conner, É. Dumur, J. Grebel, G. A. Peairs, R. G. Povey, and A. N. Cleland, "Phonon-mediated quantum state transfer and remote qubit entanglement," Science 364, 368-371 (2019).
[gokhale_epitaxial_2020] V. J. Gokhale, B. P. Downey, D. S. Katzer, N. Nepal, A. C. Lang, R. M. Stroud, and D. J. Meyer, "Epitaxial bulk acoustic wave resonators as highly coherent multi-phonon sources for quantum acoustodynamics," Nature Communications 11, 2314 (2020).
[chu_creation_2018] Y. Chu, P. Kharel, T. Yoon, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, "Creation and control of multi-phonon Fock states in a bulk acoustic-wave resonator," Nature 563, 666-670 (2018).
[kervinen2019landau] M. Kervinen, J. E. Ramírez-Muñoz, A. Välimaa, and M. A. Sillanpää, "Landau-Zener-Stückelberg interference in a multimode electromechanical system in the quantum regime," Phys. Rev. Lett. 123, 240401 (2019).
[Wollack2022entangle] E. A. Wollack, A. Y. Cleland, R. G. Gruenke, Z. Wang, P. Arrangoiz-Arriola, and A. H. Safavi-Naeini, "Quantum state preparation and tomography of entangled mechanical resonators," Nature 604, 463-467 (2022).
[Kervinen2020] M. Kervinen, A. Välimaa, J. E. Ramírez-Muñoz, and M. A. Sillanpää, "Sideband control of a multimode quantum bulk acoustic system," Phys. Rev. Applied 14, 054023 (2020).
[Lupke2022] U. von Lüpke, Y. Yang, M. Bild, L. Michaud, M. Fadel, and Y. Chu, "Parity measurement in the strong dispersive regime of circuit quantum acoustodynamics," Nature Physics 18, 794-799 (2022).
[Kwon2021GateBased] S. Kwon, A. Tomonaga, G. Lakshmi Bhai, S. J. Devitt, and J.-S. Tsai, "Gate-based superconducting quantum computing," Journal of Applied Physics 129, 041102 (2021).
[Rol2020PulseDistorsion] M. A. Rol, L. Ciorciaro, F. K. Malinowski, B. M. Tarasinski, R. E. Sagastizabal, C. C. Bultink, Y. Salathe, N. Haandbaek, J. Sedivy, and L. DiCarlo, "Time-domain characterization and correction of on-chip distortion of control pulses in a quantum processor," Applied Physics Letters 116, 054001 (2020).
[Li2017Ensemble] M. Li, G. Xue, X. Tan, Q. Liu, K. Dai, K. Zhang, H. Yu, and Y. Yu, "Two-qubit state tomography with ensemble average in coupled superconducting qubits," Applied Physics Letters 110, 132602 (2017).
|
http://arxiv.org/abs/2307.05802v1 | 20230711205357 | Sliced Wasserstein Distance between Probability Measures on Hilbert Spaces | [
"Ruiyu Han"
] | math.MG | [
"math.MG",
"math.OC",
"math.PR",
"53B12"
] |
§ INTRODUCTION
Arising in optimal transport theory, the Wasserstein distance is a metric between probability distributions which
has many applications in statistics and machine learning.
The Wasserstein distance of order p≥ 1 between two probability measures μ and ν on a Polish metric space 𝒳 is defined as
W_p(μ,ν)=(inf_π∈Π(μ,ν)∫_𝒳×𝒳(d(x,y))^p dπ(x,y) )^1/p
where Π(μ,ν) is the set of probability measures π on 𝒳×𝒳 having the marginal distributions
μ and ν <cit.>. In particular, when 𝒳=ℝ^d,
W_p(μ,ν)=(inf_π∈Π(μ,ν)∫_ℝ^d×ℝ^d|x-y|^p dπ(x,y) )^1/p.
The Wasserstein distance suffers from the curse of dimensionality, limiting its application to large-scale data analysis <cit.>. To ease the computational burden, many variants of the Wasserstein distance have been explored; among these, the sliced Wasserstein distance has received a surge of interest due to its efficiency <cit.>. The sliced Wasserstein distance compares two probability measures on ℝ^d by estimating the distances between the projected one-dimensional distributions.
A natural question arises: is it possible to define an extended notion of sliced Wasserstein distance for measures on infinite dimensional spaces, given that the Wasserstein distance exists for such measures? <cit.> defines the sliced Wasserstein distance on compact manifolds and provides real-data examples. In this paper we establish the notion of sliced Wasserstein distance between measures on an infinite dimensional separable Hilbert space from a more theoretical viewpoint, which also allows for noncompact domains. Moreover, we describe the relation between the sliced Wasserstein distance and narrow convergence of measures and quantify the approximation via empirical measures.
The definition of the sliced Wasserstein distance (<ref>) for measures on infinite dimensional spaces resembles that for measures on ℝ^d, but here the main task is to make the surface integral well-defined. The newly defined sliced Wasserstein distance indeed characterizes the narrow convergence of measures, similarly to the way the Wasserstein distance does <cit.>, but it turns out to require stronger conditions than in ℝ^d to infer the asymptotic behaviour of measures; see Theorem <ref> below. In particular, the need for further assumptions originates from the loss of compactness of the unit sphere. Meanwhile, the approximation via empirical measures does not suffer from the curse of dimensionality; see Theorem <ref>. It shares the same behaviour as the p-sliced Wasserstein distance on 𝒫_p(ℝ^d) <cit.>. Compared to the results for Wasserstein distances <cit.>, the sliced Wasserstein distance reveals its computational efficiency; see Subsection <ref> for further details.
Whether the notion of sliced Wasserstein distance has a parallel definition on a general infinite dimensional Banach space is unknown, but we point out that, within the scope of this paper, the Hilbert space structure is crucial for the sliced Wasserstein distance to be well-defined. In particular, the inner product and the orthogonal decomposition theorem allow the projection to resemble that in ℝ^d.
We also point out that for measures on ℝ^d there is another, equivalent definition of the sliced Wasserstein distance which uses the Radon transform <cit.>. The Radon transform does have several extensions in infinite dimensions <cit.>, but they either appear hard to work with <cit.> or only apply to L^2 functions on a certain probability space <cit.>. Further exploration in this direction is welcome.
The structure of the rest of the paper is simple. In Section <ref> we provide a rigorous definition of the sliced Wasserstein distance between measures on an infinite dimensional separable Hilbert space. Section <ref> is devoted to characterizing the narrow convergence of measures via the newly defined sliced Wasserstein distance. Next, in Section <ref>, we study the convergence rate of empirical measures, which is consistent with the results in finite dimensions <cit.>. At the end of the paper, in Section <ref>, we list some open problems which may be of future interest.
In the following, let the order p∈ [1,∞), and let X be an infinite dimensional separable Hilbert space whose norm, denoted by ‖·‖, is induced by the inner product ⟨·,·⟩.
§ SLICED WASSERSTEIN DISTANCE IN INFINITE DIMENSIONS
This section is devoted to establishing a well-defined notion of sliced Wasserstein distance between measures in 𝒫_p(X). Before that, let us recall the definition of the sliced Wasserstein distance on 𝒫_p(ℝ^d).
For μ,ν∈𝒫_p(ℝ^d), the sliced Wasserstein distance of order p≥ 1, denoted as SW_p, is defined as follows:
SW^p_p(μ,ν)=1/ℋ(𝕊^d-1)∫_𝕊^d-1W_p^p(μ̂_θ,ν̂_θ) dℋ^d-1(θ),
where ℋ(𝕊^d-1) denotes the surface area of the (d-1)-dimensional unit sphere and ℋ^d-1 denotes the (d-1)-dimensional Hausdorff measure. μ̂_θ:=P_θ#μ and ν̂_θ:=P_θ#ν, where P_θ: ℝ^d→ℝ, P_θ(x)=⟨θ, x ⟩ is the projection operator. For two measures μ,ν∈𝒫(X), we aim to construct an analogue of (<ref>).
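For readers who prefer a computational picture, the following Python sketch (an illustration, not part of the paper; the function name is ours, and it assumes equal-size samples and Monte Carlo directions) estimates the finite-dimensional sliced Wasserstein distance between two empirical measures on ℝ^d, using the fact that the one-dimensional W_p between equal-size empirical measures is obtained from sorted projections.

```python
# Monte Carlo estimator of the sliced Wasserstein distance on R^d.
import numpy as np

def sliced_wasserstein(x, y, p=2, n_dirs=200, rng=None):
    """x, y: (n, d) arrays of samples from mu and nu (equal sample sizes)."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    theta = rng.normal(size=(n_dirs, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)   # uniform on S^{d-1}
    # Project onto each direction, sort, and average the 1-D W_p^p.
    xp = np.sort(x @ theta.T, axis=0)
    yp = np.sort(y @ theta.T, axis=0)
    return np.mean(np.abs(xp - yp) ** p) ** (1 / p)
```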
In an infinite dimensional space there is no longer a compact unit sphere of codimension one. As an analogue of 𝕊^d-1 in ℝ^d, we take the candidate S:={x∈ X : ‖x‖=1}, which consists of unit vectors in every direction. Our goal is to make the following formal integral well-defined:
1/γ_S(S)∫_‖θ‖=1W_p^p(μ̂_θ,ν̂_θ) dγ_S(θ),
where γ_S is some finite Borel measure on S, and μ̂_θ, ν̂_θ are the pushforward measures of μ and ν onto the subspace θℝ, respectively. W_p(μ̂_θ,ν̂_θ) is the Wasserstein distance between μ̂_θ and ν̂_θ, viewed as measures on ℝ.
§.§ Surface Measure on Unit Sphere
Our primary concern is a valid Borel measure on S. To be specific, we are looking for a Borel measure on S that is strictly positive and finite. The topology on S will be the relative topology induced by X, where the topology of X is the metric topology.
The set of finite and strictly positive Borel measures on S is nonempty. One eligible way to find such a measure is to first define a strictly positive probability measure γ on X and then take γ_S to be the surface measure associated to γ. Surface measures on infinite dimensional spaces are a topic of independent interest. The existence of the surface measure associated to a probability measure on the whole space is a nontrivial matter. Initially it was only defined for sufficiently regular surfaces using tools from Malliavin calculus <cit.>; later the restrictions were relaxed in <cit.>.
We pick γ to be a non-degenerate centered Gaussian measure on X. Recall the definition of a Gaussian measure in infinite dimensions <cit.>:
Let W be a topological vector space and μ a Borel probability measure on W. μ is Gaussian if and only if, for each continuous linear functional f∈ W^*, the pushforward μ∘ f^-1 is a Gaussian measure on ℝ.
Since X is a separable Hilbert space, there is a more explicit description of a non-degenerate centered Gaussian measure γ on X. By the Karhunen-Loève expansion <cit.>, γ=ℒ(∑_i=1^∞λ_iξ_i e_i), where ℒ denotes the law, {e_i}_i∈ℕ is an orthonormal basis of X, {ξ_i}_i∈ℕ are i.i.d. standard Gaussian random variables, and the coefficients {λ_i}_i∈ℕ satisfy λ_i≠ 0 and ∑_i=1^∞λ_i^2<∞.
We refer to Section 3 of <cit.> for concrete examples of such measures γ.
We then define the surface measure γ_S, concentrated on S, associated to γ. Let S^ϵ:={x∈ X : 1-ϵ≤‖x‖≤ 1+ϵ} and let f: X→ℝ be a Borel function. We set
∫ f(x) dγ_S=lim_ϵ→ 01/2ϵ∫_S^ϵf(x) dγ(x).
By Theorem 2.11, Proposition 3.5 and Example 3.8 in <cit.>, there exists a unique Borel measure γ_S whose support is included in S such that, for every φ: X→ℝ which is uniformly continuous and bounded,
∫_Xφ(x) dγ_S(x)=(F_φ(r))'|_r=1,
where F_φ(r):=∫_‖x‖^2≤ rφ(x) dγ(x). Note that (<ref>) implies that ∫_‖θ‖=1 1 dγ_S(θ)<∞. It can be easily checked that if γ is strictly positive, then γ_S(θ)>0 for all θ∈ S.
§.§ Wasserstein Distance between Projected Measures
Fix p∈ [1,∞). Let μ,ν∈𝒫_p(X), M_p(μ):=∫_X‖x‖^p dμ(x) and M_p(ν):=∫_X‖x‖^p dν(x). In this subsection we prove that W_p^p(μ̂_θ,ν̂_θ) is uniformly continuous and bounded, given that the measures have appropriate moments; this ensures that W^p_p(μ̂_θ, ν̂_θ) is integrable with respect to γ_S.
First, we demonstrate that the quantity W_p^p(μ̂_θ,ν̂_θ) is well-defined. Let θ∈ X be a vector with ‖θ‖=1; then Z:=θℝ is a closed subspace. Every element x∈ X can be uniquely written as x=z+w with z∈ Z and w∈ Z^⊥. The projection map P̃_θ: X→ Z, P̃_θ(x)=z is well-defined. It is also measurable since it is a bounded linear map. It follows that the map P_θ: X→ℝ, P_θ(x)=⟨θ, z⟩ is also measurable. A simple observation is that x= ⟨θ,x⟩θ+ (x- ⟨θ,x⟩θ); it can be checked that ⟨θ,x⟩θ∈ Z and x- ⟨θ,x⟩θ∈ Z^⊥. Thus P_θ(x)=⟨θ,x⟩.
Given a unit vector θ∈ X, the pushforward measure μ̂_θ:=P_θ#μ is a probability measure on ℝ and μ̂_θ∈𝒫_p(ℝ); indeed, by a change of variables,
∫_ℝ|y|^p dμ̂_θ(y) = ∫_X‖P̃_θ(x)‖^p dμ(x)≤∫_X‖x‖^p dμ(x)<∞.
Therefore, for μ,ν∈𝒫_p(X) and a unit
vector θ∈ X, the Wasserstein distance W_p(μ̂_θ,ν̂_θ) is well-defined.
Now we are ready to check that W^p_p(μ̂_θ,ν̂_θ) is uniformly continuous and bounded on S. First observe that the function θ↦ W_p(μ̂_θ,ν̂_θ) is Lipschitz on S.
Given that μ,ν∈𝒫_p(X),
W_p(μ̂_θ,ν̂_θ) is Lipschitz on S with Lipschitz constant (M_p(μ))^1/p+(M_p(ν))^1/p.
Let θ,γ∈ S. Triangle inequality gives that
|W_p(μ̂_θ,ν̂_θ)-W_p(μ̂_γ,ν̂_γ)|≤ W_p(μ̂_γ,μ̂_θ)+W_p(ν̂_γ,ν̂_θ).
Notice that π_μ:=(P_θ× P_γ)#μ is a transport plan between μ̂_θ and μ̂_γ. Then
W_p^p(μ̂_θ,μ̂_γ)≤∫_ℝ^2|y-z|^p dπ_μ(y,z)=∫_X|P_θ(x)-P_γ(x)|^p dμ(x)
= ∫_X |⟨θ-γ,x⟩|^p dμ(x)≤‖θ-γ‖^p ∫_X‖x‖^p dμ(x),
where in the last inequality we use the Cauchy-Schwarz inequality. The above argument implies that
W_p(μ̂_θ,μ̂_γ)≤‖θ-γ‖(M_p(μ))^1/p.
Analogously, W_p(ν̂_θ,ν̂_γ)≤‖θ-γ‖(M_p(ν))^1/p. Therefore,
|W_p(μ̂_θ,ν̂_θ)-W_p(μ̂_γ,ν̂_γ)|≤‖θ-γ‖((M_p(μ))^1/p+(M_p(ν))^1/p).
Equipped with Lemma <ref>, we conclude this subsection with the following theorem:
Given that μ,ν∈𝒫_p(X), W^p_p(μ̂_θ,ν̂_θ) is bounded on S, in particular
∀θ∈ S, W^p_p(μ̂_θ,ν̂_θ)≤ 2^p(M_p(μ)+M_p(ν)).
Meanwhile, W^p_p(μ̂_θ,ν̂_θ) is Lipschitz on S with Lipschitz constant
p 2^p-1max{M_p(μ),M_p(ν)}^p-1/p((M_p(μ))^1/p+(M_p(ν))^1/p).
Bound:
Let π_θ∈Π(μ̂_θ,ν̂_θ). Recall that Π(μ̂_θ,ν̂_θ) consists of probability measures on ℝ×ℝ with marginals μ̂_θ and ν̂_θ, respectively. For any θ∈ S,
W_p^p(μ̂_θ,ν̂_θ)≤∫_ℝ^2|x-y|^p dπ_θ(x,y)≤ 2^p∫_ℝ^2(|x|^p+|y|^p) dπ_θ(x,y)
(<ref>)≤2^p(M_p(μ)+M_p(ν)).
Uniform Continuity: For θ,γ∈ S,
|W^p_p(μ̂_θ,ν̂_θ)-W^p_p(μ̂_γ,ν̂_γ)|
≤ pmax{W^p-1_p(μ̂_θ,ν̂_θ),W^p-1_p(μ̂_γ,ν̂_γ)} |W_p(μ̂_θ,ν̂_θ)-W_p(μ̂_γ,ν̂_γ)|
≤ p 2^p-1max{M_p(μ),M_p(ν)}^p-1/p|W_p(μ̂_θ,ν̂_θ)-W_p(μ̂_γ,ν̂_γ)|
(<ref>)≤ p 2^p-1max{M_p(μ),M_p(ν)}^p-1/p((M_p(μ))^1/p+(M_p(ν))^1/p)‖θ-γ‖,
where in the first inequality we use |a^p-b^p|≤ p max{a,b}^p-1|a-b| for a,b∈ℝ, a,b≥ 0 and p∈ [1,∞).
§.§ Sliced Wasserstein Distance in Infinite Dimensions
Now we are well-equipped to make the formal expression (<ref>) rigorous:
Let γ_S be a strictly positive, finite Borel measure on S, i.e., γ_S(S)=∫_‖θ‖=1 1 dγ_S(θ)<∞. For μ,ν∈𝒫_p(X), the p-sliced Wasserstein distance (with respect to γ_S) is defined as
SW^γ_p(μ,ν)= (1/γ_S(S)∫_‖θ‖=1W^p_p(μ̂_θ,ν̂_θ) dγ_S(θ) )^1/p.
In particular, we can take γ_S to be the surface measure on S associated to a non-degenerate centered Gaussian measure on X.
Let γ∈𝒫(X) be a non-degenerate centered Gaussian measure on X. For μ,ν∈𝒫_p(X), the p-sliced Wasserstein distance (with respect to γ) is defined as
SW^γ_p(μ,ν)= (1/γ_S(S)∫_‖θ‖=1W^p_p(μ̂_θ,ν̂_θ) dγ_S(θ) )^1/p,
where γ_S is the surface measure on S associated to γ, and γ_S(S)=∫_‖θ‖=1 1 dγ_S(θ).
(<ref>) is well-defined owing to the discussion in Subsections <ref> and <ref>. Next we show that (<ref>) is indeed a distance.
Let γ_S be a finite, strictly positive Borel measure on S. For p∈ [1,∞), SW_p^γ defined in (<ref>) is a distance.
The symmetry is obvious, and the triangle inequality follows from the triangle inequality for W_p. Let μ,ν∈𝒫_p(X) with compact support. If μ=ν, then SW_p^γ(μ,ν)=0. It remains to check that SW_p^γ(μ,ν)=0 implies μ=ν.
Notice that since W_p(μ̂_θ,ν̂_θ) is nonnegative and uniformly continuous in θ,
SW_p^γ(μ,ν)=0 implies that
W_p(μ̂_θ,ν̂_θ)≡ 0 for ‖θ‖=1. Thus P_θ#μ=P_θ#ν for every ‖θ‖=1. Pick an arbitrary f∈ X^*=X; then
∫_Xe^if(x) dμ(x)=∫_Xe^i‖f‖⟨ x, f/‖f‖⟩ dμ(x)=∫_ℝexp(i‖f‖y) dμ̂_f/‖f‖(y)
= ∫_ℝexp(i‖f‖y) dν̂_f/‖f‖(y)=∫_X e^if(x) dν(x).
By the injectivity of characteristic functions, we obtain μ=ν.
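Although the definition lives on an infinite dimensional space, it can be illustrated numerically by truncating to finitely many Karhunen-Loève coordinates. The sketch below is a heuristic finite-dimensional surrogate (the function name, the truncation dimension d, and the use of normalized samples of the truncated Gaussian γ as directions are our own choices, not an exact discretization of the surface measure γ_S): measures are represented by the coefficients of samples in the first d basis vectors.

```python
# Heuristic finite-dimensional surrogate for SW_p^gamma on a Hilbert space.
import numpy as np

def sw_hilbert_truncated(x, y, lam, p=2, n_dirs=500, rng=None):
    """x, y: (n, d) coefficient arrays; lam: (d,) Karhunen-Loeve weights."""
    rng = np.random.default_rng(rng)
    theta = rng.normal(size=(n_dirs, lam.size)) * lam   # samples of truncated gamma
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # push to the unit sphere
    xp = np.sort(x @ theta.T, axis=0)                   # projections <theta, x>
    yp = np.sort(y @ theta.T, axis=0)
    return np.mean(np.abs(xp - yp) ** p) ** (1 / p)
```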
§ NARROW CONVERGENCE OF PROBABILITY MEASURES
With the definition in hand, we are in a position to investigate some properties of the sliced Wasserstein distance. It is natural to ask whether the sliced Wasserstein distance (<ref>) can characterize the narrow convergence of measures in 𝒫_p(X), since it is well known that the Wasserstein distance characterizes the narrow convergence of probability measures <cit.>, as does the sliced Wasserstein distance on 𝒫_p(ℝ^d) <cit.>. In this section we establish the connection between narrow convergence of measures and the sliced Wasserstein distance.
To begin with, we recall the definition of narrow convergence, although it will not be directly used in the arguments below <cit.>. Note that in some contexts it is called "weak convergence" <cit.>.
We say that a sequence {μ_n}⊂𝒫(X) is narrowly convergent to μ∈𝒫(X) as n→∞ if
lim_n→∞∫_Xf(x) dμ_n(x)=∫_Xf(x) dμ(x)
for every f which is a continuous and bounded real function on X.
We will first show that for a narrowly convergent sequence with converging p-moments, the sliced Wasserstein distance goes to zero. The inequality between the sliced Wasserstein and Wasserstein distances still holds, as in 𝒫_p(ℝ^d) <cit.>.
If μ,ν∈𝒫_p(X), then SW_p^γ(μ,ν)≤ W_p(μ,ν).
There exists an optimal transport plan π between μ and ν under Wasserstein distance (see Theorem 1.7 in Chapter 1 of <cit.>). Then (P_θ× P_θ)#π is a transport plan between μ̂_θ and ν̂_θ. So
W^p_p(μ̂_θ,ν̂_θ)≤∫_X^2 |⟨θ,x⟩-⟨θ,y⟩|^p dπ(x,y).
By Cauchy-Schwarz,
SW_p^γ(μ,ν)≤ (1/γ_S(S)∫_‖θ‖=1∫_X^2 |⟨θ,x⟩-⟨θ,y⟩|^p dπ(x,y) dγ_S(θ) )^1/p
≤ (1/γ_S(S)∫_‖θ‖=1∫_X^2‖x-y‖^p‖θ‖^p dπ(x,y) dγ_S(θ) )^1/p=W_p(μ,ν).
Lemma <ref> directly gives the following theorem.
If μ^n,μ∈𝒫_p(X), μ^n converges to μ narrowly and lim_n→∞∫_X‖x‖^p dμ^n(x)=∫_X‖x‖^p dμ(x), then SW^γ_p(μ^n,μ)→ 0 as n→∞.
By Definition 6.8 and Theorem 6.9 in <cit.>, W_p(μ^n,μ)→ 0. By Lemma <ref>, SW^γ_p(μ^n,μ)→ 0.
Now we turn to characterizing narrow convergence of measures via the sliced Wasserstein distance. Unlike the finite dimensional case, to obtain narrow convergence we require, besides the condition that the sliced Wasserstein distance goes to zero, a uniform bound on the p-moments. This condition cannot be removed; see Example <ref> below for a counterexample.
If μ^n,μ∈𝒫_p(X) satisfy lim_n→∞SW_p^γ(μ^n,μ)=0 and sup_n≥ 1M_p(μ^n):=C<∞, then μ^n converges to μ narrowly.
We will prove that every subsequence {μ^n_k}_k∈ℕ admits a further subsequence that converges to μ narrowly. For simplicity of notation, denote this subsequence by {μ^n}_n∈ℕ; then we still have lim_n→∞SW_p^γ(μ^n,μ)=0 and sup_n≥ 1M_p(μ^n)=C<∞.
As
lim_n→∞∫_‖θ‖=1W_p^p(μ̂_θ^n, μ̂_θ) dγ_S(θ)=γ_S(S)lim_n→∞ (SW^γ_p(μ^n, μ) )^p=0,
up to a subsequence {n_k}_k∈ℕ the functions θ↦ W_p^p(μ̂_θ^n_k, μ̂_θ)∈ℝ^+ converge to zero for γ_S-almost every θ∈ S as k→∞.
Meanwhile, given that sup_n≥ 1M_p(μ^n)=C<∞ and μ∈𝒫_p(X), Proposition <ref> implies that for n≥ 1 the functions θ↦ W_p^p(μ̂^n_θ,μ̂_θ) share the same Lipschitz constant
p 2^p-1max{M_p(μ), C}^p-1/p(M_p(μ)^1/p+C^1/p).
This implies that, as k→∞, W_p(μ̂_θ^n_k,μ̂_θ)→ 0 for every θ∈ S, since γ is a non-degenerate Gaussian. It follows that for every θ∈ S, μ̂^n_k_θ converges to μ̂_θ narrowly. Now for every f∈ X^*=X,
lim_k→∞∫_Xexp(i⟨ f,x ⟩) dμ^n_k(x)=lim_k→∞∫_Xexp(i‖f‖⟨f/‖f‖,x ⟩) dμ^n_k(x)
= lim_k→∞∫_ℝexp(i‖f‖y) dμ̂_f/‖f‖^n_k(y)=∫_ℝexp(i‖f‖y) dμ̂_f/‖f‖(y)=∫_Xexp(i⟨ f,x ⟩) dμ(x).
By Proposition 4.6.9 of <cit.>, we obtain that μ^n_k converges to μ narrowly.
In particular, when the domain X is bounded,
If diam(X)<∞, then for μ_n,μ∈𝒫_p(X), n∈ℕ, SW^γ_p(μ_n,μ)→ 0 if and only if μ_n converges to μ narrowly.
The condition sup_n≥ 1M_p(μ_n)<∞ automatically holds, and Theorem <ref> then gives that SW^γ_p(μ_n,μ)→ 0 implies μ_n converges to μ narrowly. For the other direction, notice that x↦‖x‖^p is now a bounded continuous function, so Theorem <ref> applies.
If X=ℝ^d, then the condition sup_n≥ 1M_p(μ^n)<∞ can be derived from SW_p^γ(μ^n,μ)→ 0; see the proof of Theorem 2.1 in <cit.>. However, we emphasize that for measures on an infinite dimensional space we can no longer obtain sup_n≥ 1M_p(μ^n)<∞ from convergence in the sliced Wasserstein distance. Consider the following example.
Let p=2, μ:=δ_0 and μ^n:=δ_{n^{1/3}e_n}, where {e_k}_k∈ℕ is an orthonormal basis of X. Recall that for θ∈ X, ∑_i=1^∞|⟨θ,e_i⟩|^2=‖θ‖^2. The Monotone Convergence Theorem gives that
lim_n→∞∑_i=1^n 1/γ_S(S)∫_S|⟨θ, e_i⟩|^2 dγ_S(θ)=1/γ_S(S)∫_S∑_i=1^∞|⟨θ,e_i⟩|^2 dγ_S(θ)=1/γ_S(S)∫_S 1 dγ_S(θ)=1,
which implies that lim_n→∞1/γ_S(S)∫_S |⟨θ, e_n⟩|^2 dγ_S(θ)=0. Moreover, we know that
1/γ_S(S)∫_S |⟨θ, e_n⟩|^2 dγ_S(θ)=o(n^-1).
On the other hand, for every θ∈ S, W_2^2(μ̂^n_θ,μ̂_θ)=n^2/3|⟨θ, e_n⟩|^2. Then we obtain that
(SW_2^γ(μ^n,μ))^2=1/γ_S(S)∫_S n^2/3|⟨θ, e_n⟩|^2 dγ_S(θ)=o(n^2/3-1)→ 0 as n→∞.
Meanwhile, it is obvious that M_2(μ^n)=n^2/3, so sup_n≥ 1M_2(μ^n)=∞: the 2-moments are not uniformly bounded. Furthermore, μ^n does not converge to μ narrowly.
§ APPROXIMATION VIA EMPIRICAL MEASURES
The estimation of the distance between empirical measures and the true distribution is a prevailing problem. In this section we investigate the convergence rate of empirical measures on an infinite dimensional Hilbert space under the sliced Wasserstein distance (<ref>); in particular, we have the following theorem:
If μ∈𝒫_s(X) for s>2p, μ^n:=1/n∑_k=1^nδ_X_k with X_1,...,X_n a sample drawn from μ, then
SW^γ_p (μ^n,μ)≤ C n^-1/2p,
where the constant C is determined by p,s and M_s(μ).
The proof of Theorem <ref> relies on the following result on estimates of one dimensional empirical measure <cit.>.
Let X_1,...,X_n be a sample drawn from a Borel probability measure μ on ℝ with distribution function
F, and let μ^n:=1/n∑_k=1^nδ_X_k be the empirical measure. For all p≥ 1,
W_p^p(μ^n,μ)≤p2^p-1/√(n)∫_-∞^∞|x|^p-1√(F(x)(1-F(x))) dx.
Then we bound the right-hand side of (<ref>) by a simple observation. Let s≥ 1 and let ξ be a random variable on ℝ with distribution function F. Assume further that 𝔼|ξ|^s<∞. By Chebyshev's inequality, for x≥ 0,
(F(x)(1-F(x)) ) (1+|x|^s)= (F(x)(1-F(x)) )+ (F(x)(1-F(x)) ) |x|^s
≤ 1+ (1-F(x))|x|^s≤ 1+ 𝔼|ξ|^s,
which implies that for every x≥ 0, F(x)(1-F(x))≤(1+ 𝔼|ξ|^s)/(1+|x|^s). The same inequality holds for x≤ 0. Thus we have
F(x)(1-F(x))≤(1+ 𝔼|ξ|^s)/(1+|x|^s), ∀ x∈ℝ.
The above discussion leads to the following proof:
Notice that for θ∈ S, ⟨θ, X_1⟩,...,⟨θ, X_n⟩ is a sample drawn from μ̂_θ and P_θ#μ^n=1/n∑_k=1^nδ_⟨θ, X_k⟩. Let F_θ denote the distribution function of μ̂_θ and X_θ∼μ̂_θ . Applying Theorem <ref>, we obtain
W_p^p(P_θ#μ^n,P_θ#μ)≤p2^p-1/√(n)∫_-∞^∞|x|^p-1√(F_θ(x)(1-F_θ(x))) dx
(<ref>)≤p2^p-1/√(n)∫_-∞^∞|x|^p-1((1+ 𝔼|X_θ|^s)/(1+|x|^s))^1/2 dx(<ref>)≤p2^p/√(n)(1+M_s(μ))^1/2∫_0^∞|x|^p-1/(1+|x|^s)^1/2 dx
≤p2^p/√(n)(1+M_s(μ))^1/2 (1+∫_1^∞|x|^p-1-s/2 dx )=p2^p/√(n)(1+M_s(μ))^1/2(1+2/(s-2p)).
Tonelli's theorem gives that
(SW_p^γ(μ^n,μ))^p≤p2^p/√(n)(1+M_s(μ))^1/2(1+2/(s-2p)),
which by Jensen's inequality implies
(<ref>).
We then provide the following straightforward corollary estimating the sliced Wasserstein distance between two unknown measures, whose proof only uses triangle inequality.
If μ,ν∈𝒫_s(X) for s>2p, μ^n:=1/n∑_k=1^nδ_X_k, ν^m:=1/m∑_k=1^mδ_Y_k with X_1,...,X_n a sample drawn from μ, and Y_1,...,Y_m a sample drawn from ν, then
|SW^γ_p (μ^n,ν^m)-SW^γ_p (μ,ν)|≤ C (n^-1/2p+m^-1/2p),
where the constant C is determined by p,s and M_s(μ),M_s(ν).
The above results are consistent with those in <cit.>, where the convergence rate of SW for measures on ℝ^d does not depend on the dimension d. The reason is that the projection reduces the problem to the uniform estimation of the Wasserstein distance in one dimension.
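The rate above can also be checked empirically with the truncated surrogate sketched earlier. The experiment below is illustrative only (the weights λ_i = 2^{-i} and the sample sizes are arbitrary choices); it compares two independent size-n samples from the same measure, whose sliced distance should decay roughly like n^{-1/(2p)} by the corollary above.

```python
# Empirical check of the n^{-1/(2p)} decay on a truncated Hilbert space.
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5 ** np.arange(1, 21)                  # truncated Karhunen-Loeve weights
p, n_dirs = 2, 300

def sw_p(x, y):
    theta = rng.normal(size=(n_dirs, lam.size)) * lam
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    xp, yp = np.sort(x @ theta.T, axis=0), np.sort(y @ theta.T, axis=0)
    return np.mean(np.abs(xp - yp) ** p) ** (1 / p)

for n in [100, 400, 1600, 6400]:
    est = np.mean([sw_p(rng.normal(size=(n, lam.size)) * lam,
                        rng.normal(size=(n, lam.size)) * lam) for _ in range(5)])
    print(n, round(est, 4))                    # expect roughly C * n^{-1/4} for p = 2
```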
§.§ Comparison to quantization in Wasserstein metric
In this subsection we display some results on the convergence rate of empirical measures under the Wasserstein distance, for measures defined on both finite and infinite dimensional spaces <cit.>, in comparison with our results in Section <ref>.
For measures on finite dimensional spaces, the Wasserstein distance suffers from the curse of dimensionality. To be specific, we refer to the results in <cit.>, which read:
Let μ∈𝒫(ℝ^d) and p>0. Assume M_q(μ)<∞ for some q>p. For n≥ 1, let μ^n:=1/n∑_k=1^nδ_X_k with X_1,...,X_n a sample drawn from μ. There exists a constant C depending only on p,q,d such that, for all n≥ 1,
W_p^p(μ^n,μ)≤ C M_q^p/q(μ) × n^-1/2 + n^-(q-p)/q if p>d/2 and q≠ 2p; n^-1/2log(1+n)+n^-(q-p)/q if p=d/2 and q≠ 2p; n^-p/d+n^-(q-p)/q if 0<p<d/2 and q≠ d/(d-p).
It follows that if p is fixed and d is large, the dominant term in the convergence rate is n^-p/d, which approaches 1 as d grows.
On the other hand, for measures on infinite dimensional spaces, <cit.> studied the convergence rate of the Wasserstein distance between a certain class of infinite dimensional measures and their empirical measures. We state their results here. The probability measures are defined on the Hilbert space 𝒳=L^2={x∈ℝ^∞:∑_m=1^∞x_m^2<∞}.
Define the distribution class
𝒫_poly(q,b,M_q):={ μ: 𝔼_X∼μ[ ∑_m=1^∞(m^bX_m)^2]^q/2≤ M_q^q}.
If p, q, b are constants such that 1 ≤ p < q and b > 1/2, then there exist
positive constants c_p,q,b, c̅_p,q,b depending on (p,q,b) such that
c_p,q,b M_q(log n)^-b≤sup_μ∈𝒫_poly(q,b,M_q) W_p(μ^n,μ)≤c̅_p,q,bM_q(log n)^-b.
Define the distribution class
𝒫_exp(q,α,M_q):={ μ: 𝔼_X∼μ[ ∑_m=1^∞(α^m-1X_m)^2]^q/2≤ M_q^q}.
If p, q, α are constants such that 1 ≤ p < q and α > 1, then there exist
positive constants c_p,q,α, c̅_p,q,α depending on (p,q,α) such that
c_p,q,α M_q e^-√(logαlog n)≤sup_μ∈𝒫_exp(q,α,M_q) W_p(μ^n,μ)≤c̅_p,q,αM_qe^-√(logαlog n).
The convergence rate in Wasserstein distance is a finite power of (log n)^-1 for polynomial decay and a finite power of e^-√(log n) for exponential decay, both of which are significantly slower than that of n^-1/2p in Theorem <ref>. We conclude that the sliced Wasserstein distance indeed reduces the computational complexity.
§ UNSOLVED ISSUES
In this last section, we list some issues which fall within the scope of this article but currently remain unsolved.
* It will be of future interest to investigate whether a sliced Wasserstein distance on 𝒫_p(X) can be defined via the Radon transform. If it exists, the next question is whether it is equivalent to the one defined in this article and, if the two can be compared, which one is preferable.
* The choice of the reference measure γ seems to have no influence on the properties of the distance within the scope of this article, but it is unclear whether it has a deeper impact. For example, would a shift of the reference measure provide a better or worse actual convergence rate for empirical measures?
* In <cit.>, the trimmed Wasserstein distance is defined. A parallel definition for measures in 𝒫_p(X) seems promising and is expected to share similar properties. Rigorous statements and proofs are of interest.
RH sincerely thanks Professor Dejan Slepčev for the stimulating discussion and precious advice.
|
http://arxiv.org/abs/2307.05603v1 | 20230710203941 | Can You Improve My Code? Optimizing Programs with Local Search | [
"Fatemeh Abdollahi",
"Saqib Ameen",
"Matthew E. Taylor",
"Levi H. S. Lelis"
] | cs.SE | [
"cs.SE",
"cs.LG",
"cs.PL"
] |
This paper introduces a local search method for improving an existing program with respect to a measurable objective. Program Optimization with Locally Improving Search (POLIS) exploits the structure of a program, defined by its lines. POLIS improves a single line of the program while keeping the remaining lines fixed, using existing brute-force synthesis algorithms, and continues iterating until it is unable to improve the program's performance. POLIS was evaluated with a 27-person user study, where participants wrote programs attempting to maximize the score of two single-agent games: Lunar Lander and Highway. POLIS was able to substantially improve the participants' programs with respect to the game scores. A proof-of-concept demonstration on existing Stack Overflow code measures its applicability to real-world problems.
These results suggest that POLIS could be used as a helpful programming assistant for programming problems with measurable objectives.
§ INTRODUCTION
Recent advances in large language models and program synthesis have enabled the development of powerful artificial intelligence assistants for computer programmers. For example, Copilot <cit.> can provide an initial solution to a problem if the programmer is unsure of how to approach the problem or auto-complete what the programmer writes to speed up coding.
Copilot and other assistants were designed to interact with the programmer throughout the development of the program. This paper considers a setting where the assistant interacts with the programmer only after a working version of the program is available. In this paper's setting, the assistant attempts to improve the programmer's solution with respect to a real-valued, measurable objective function, something systems such as Copilot cannot perform.
We introduce Program Optimization with Locally Improving Search (POLIS), an intelligent assistant to improve existing programs. POLIS leverages the ability of existing synthesizers to generate high-quality (short) programs by treating each line of an existing program as an independent program synthesis task. POLIS uses an enumeration algorithm for synthesis, called bottom-up search <cit.>, for each line of the program. Since POLIS selects the best solution encountered in each bottom-up search, it can be seen as a hill-climbing algorithm in the program-line space. Despite not using any models to guide its search, POLIS can handle complex programs because it divides the original problem into much smaller sub-problems by considering the synthesis of one line at a time.
To evaluate POLIS, 27 programmers wrote programs for playing Lunar Lander and Highway, two single-agent games commonly used to evaluate reinforcement learning algorithms. POLIS was able to improve the score of all programs written by the participants, often by a large margin. Our results also show that the modified programs often retain most of the structure of the original programs. As a result, the users who wrote the programs are likely to understand POLIS's modifications to their implementations. We also present a proof-of-concept demonstration of POLIS's ability to fix bugs in 4 simple programs posted on Stack Overflow.
POLIS's modified programs can be seen as the result of the work done by an effective human-AI team. This is because bottom-up search would not be able to synthesize the resulting programs from scratch, as the programs are long and complex. However, bottom-up search is able to substantially improve human-generated programs. As our results demonstrate, human programmers are unable to write, on their own, programs of the quality obtained with POLIS. These results suggest that POLIS can be a helpful assistant to programmers for problems with measurable objectives.
This paper makes two contributions. First, it defines a problem setting for intelligent programming assistants in which the assistant attempts to improve existing programs with respect to an objective function. Second, it introduces POLIS, a system that employs a novel local search algorithm based on a simple brute-force search algorithm.
§ RELATED WORK
POLIS is related to intelligent programming assistants, program synthesis, programmatically interpretable policies, and program enhancement algorithms.
§.§ Intelligent Programming Assistants
Intelligent assistants for programmers are getting popular and have become a popular area of research lately. SnipPy <cit.> is one such tool that allows the programmer to synthesize instructions by defining input-output examples in the context of live programming. Similarly, Blue-Pencil <cit.> is a system that identifies repetitive tasks that arise in programming and suggests transformations for such tasks. reCode <cit.> observes code transformation to identify other places of the code that would require similar changes. code-completion-statistical introduced a statistical model for code completion and guo2022learning introduced a model for code completion that leaves “holes” where the model is uncertain.
POLIS differs from these works in how it assists the programmer. Instead of real-time interactions during the development of the program, we consider the scenario where the programmer provides a complete, compilable version of their program. POLIS leverages human-defined code structure to improve the user's implementation with a simple synthesizer.
§.§ Program Synthesis
The task of synthesizing programs that satisfy a specification is a long-standing problem <cit.> and it has received much attention lately <cit.>. While previous works attempt to improve the synthesis process and generate programs that satisfy a given specification, POLIS uses program synthesis to optimize existing programs with respect to a given objective function.
§.§ Programmatic Policies
One way to solve the problems considered in this work is to synthesize programs encoding a policy for solving the tasks. Neurally directed program search (NDPS) <cit.> synthesizes programs while imitating a neural oracle. Viper <cit.> also employs imitation learning to train decision trees encoding policies. In order to provide better search guidance for synthesis, Propel <cit.> trains neural policies that are not “too different” from the synthesized program. Sketch-SA <cit.> is another such system that uses imitation learning to synthesize a sketch of a policy; the policy is synthesized from the sketch by evaluating it directly in the environment.
Oracle-free programmatically interpretable reinforcement learning (π-PRL) <cit.> and Bilevel Synthesis (Bi-S) <cit.> bypass the need of an oracle to guide the synthesis of programmatic policies. π-PRL uses a differentiable language and trains the model using policy gradient methods, while Bi-S uses the result of a search in a feature space to guide the search in the programmatic space.
POLIS differs from these algorithms because they were designed to synthesize programs from scratch, while POLIS focuses on leveraging the structure of existing programs.
§.§ Program Enhancement
Refactoring is a well-known program enhancement technique used to improve a program's quality without affecting its external behavior <cit.>. Another way of enhancing a program is Automated Program Repair (APR), which refers to the process of fault localization in software and the development of patches using search-based software engineering and logic rules <cit.>. For instance, genetic programming has been used to develop bug-fixing patches without affecting software functionality <cit.>. POLIS is different from these techniques because a) POLIS improves programs with respect to an objective function and its external behavior is likely to change; and b) while POLIS fixes unintended programmer mistakes (similar to APR), it is likely to also change sub-optimal parts of the program, improving overall performance.
§ PROBLEM DEFINITION
Rather than using a general-purpose language like Python, which defines a very large program space, we use a domain-specific language (DSL) to define a more constrained space of programs for solving a programming task. A DSL is defined as a context-free grammar (V, Σ, R, S), where V is a finite set of non-terminals, Σ is a finite set of terminals, and R is the set of relations corresponding to the
production rules of the grammar, and S is the grammar's start symbol. An example of a DSL defined by a grammar G is shown below, where V = {S, C, B}, Σ = {c_1, c_2, c_3, b_1, b_2, if-then-else}, R is the set of relations (e.g., C → c_1), and S is the start symbol.
S → if(B) then S else S | C
C → c_1 | c_2 | c_3 | CC
B → b_1 | b_2
This DSL allows programs with a single instruction (c_1, c_2, or c_3), or multiple commands using nested if-then-else blocks. Let G be the set of programs (possibly infinite) that can be written with grammar G. Each program p ∈ G is defined by a pair {T, L}, where T is a multiset of non-terminal symbols and L defines a partition of symbols from T into program lines, i.e., L defines how a programmer organizes the symbols in T in a text editor. Note that two programs that have identical functionality could have different partitions L.
POLIS takes as input a program p ∈ G, and an objective function F (a real-valued evaluation of the program), and outputs a program p' ∈ G that is at least as good as p and approximates a solution for max_p ∈ G F(p), assuming a maximization problem.
§ : A PROGRAMMING ASSISTANT
The pseudocode in Algorithm <ref> shows the local search algorithm POLIS employs. It receives an existing program p, two time limits, t and t_l, for the overall running time of the search and for the running time allowed to optimize each line of code, respectively, and an evaluation function F. POLIS returns a new program, p', that is at least as good as p in terms of F-value. While there is time available to improve the input program, POLIS iterates through each line (the for loop in line <ref>) and attempts to synthesize a program that replaces the code in the i-th line of p such that the objective function F is improved. This is achieved with a call to the synthesizer (line <ref>), which returns a version of p where the i-th line of p is replaced by a program that optimizes F. The synthesizer can return the program unchanged if its original i-th line yields the best F-value or if it exceeds its time limit before finding a better line. Lastly, POLIS returns the optimized program (line <ref>) if the search reaches a local optimum, i.e., the improved program p' has the same F-value as p.
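A minimal Python sketch of this loop may help make the structure concrete; synthesize_line and evaluate are placeholders for the bottom-up synthesizer and the objective F (their names and interfaces are assumptions of this sketch, not the authors' implementation):

import time

def polis(program_lines, synthesize_line, evaluate, t, t_l):
    """Hill-climbing in program-line space: repeatedly try to replace one
    line at a time with a synthesized line that improves the objective F."""
    best = list(program_lines)
    best_score = evaluate(best)
    deadline = time.time() + t
    while time.time() < deadline:
        improved = False
        for i in range(len(best)):
            # Time-limited synthesis of a replacement for line i; the call
            # returns the original line if nothing better is found within t_l.
            candidate_line = synthesize_line(best, i, t_l)
            candidate = best[:i] + [candidate_line] + best[i + 1:]
            score = evaluate(candidate)
            if score > best_score:
                best, best_score, improved = candidate, score, True
        if not improved:        # local optimum reached
            break
    return best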
Our system uses size-based bottom-up search (BUS) <cit.> as the synthesizer. BUS was shown to outperform other uninformed enumeration-based synthesizers <cit.>. BUS starts by enumerating the smallest possible programs of a given language. It then combines the smallest programs with the production rules of the DSL to generate larger programs. One can use different metrics of "size" for defining BUS's enumeration procedure. A commonly used metric, which we use in our implementation, is the number of nodes in the abstract syntax tree representing the synthesized programs. That is, in its first iteration BUS generates all programs whose tree has a single node, then all programs whose tree has two nodes, and so on, until a solution is found. In its first iteration, for the DSL shown in Equation <ref>, BUS generates the programs c_1, c_2, c_3, b_1, b_2. Then, in its second iteration BUS generates the programs c_1 c_1, c_1 c_2, c_1 c_3, c_2 c_1, c_2 c_2, c_2 c_3, and so on. One advantage of BUS is that, once it finds a solution program, the program is provably the smallest one that solves the problem. Another advantage is that all programs generated in the search are executable, which allows one to run them and perform an observational equivalence check (i.e., the search only keeps one of two programs that produce the same set of output values for a given set of input values of interest).
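The enumeration itself can be sketched as follows; the program objects, their run method, and the (combine, arity) encoding of production rules are assumptions made only for illustration:

import itertools

def bottom_up_search(terminals, rules, inputs, outputs, max_size):
    """Size-based bottom-up enumeration with an observational-equivalence check."""
    programs = {}           # AST size -> programs of that size
    seen = set()            # output signatures already produced

    def register(prog, size):
        sig = tuple(prog.run(x) for x in inputs)
        if sig in seen:                       # observationally equivalent
            return None
        seen.add(sig)
        programs.setdefault(size, []).append(prog)
        return prog if list(sig) == list(outputs) else None

    for t in terminals:                       # size-1 programs (single AST node)
        if (sol := register(t, 1)) is not None:
            return sol
    for size in range(2, max_size + 1):
        for combine, arity in rules:          # e.g. (lambda a, b: IfThenElse(a, b), 2)
            # split the remaining size budget among the children
            for sizes in itertools.product(range(1, size), repeat=arity):
                if sum(sizes) + 1 != size:
                    continue
                for kids in itertools.product(*(programs.get(s, []) for s in sizes)):
                    if (sol := register(combine(*kids), size)) is not None:
                        return sol            # provably the smallest solution
    return None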
§.§ Domain-Dependent Implementation Details
We evaluate POLIS on programmatic policies for playing games, which are written by human programmers. A programmatic policy is a program encoding a function (policy) that receives a state of a game and returns the action the agent should take at that state. In what follows, we describe POLIS's implementation details.
§.§.§ Input-Output Examples
For the task of writing programmatic policies for playing games, we use the approach introduced in earlier work on programmatically interpretable reinforcement learning <cit.> to define a set of input-output examples. That is, we train a neural policy that generates a set of input-output pairs: for a set of observations o (input), we store the neural policy's chosen action a (output). We use DQN <cit.> to train a neural policy π for 2000 episodes. We let the agent follow π in the environment for 2000 steps and collect all the observation-action pairs along with their Q-values.
§.§.§ Evaluation Function
We use two evaluation functions. The function F is given by running the programmatic policy and computing its game score. This evaluation function is computationally expensive, since we need to play the game several times to evaluate a program, due to the stochastic nature of the environments.
Instead of computing F for all programs generated in search, we keep a list of the current k-best programs with respect to an action-agreement metric: the number of observations each program correctly maps to the action a neural policy π selects for that observation. The action-agreement metric we use is computed as ∑_o ∈ T1[p(o) = π(o)]/|T|, where T is the set of input-output examples, 1[·] is the indicator function, p(o) and π(o) are the actions returned by the program p and policy π, respectively, for observation o. We evaluate the value of F only for the programs in the k-best set.
Once the synthesizer runs out of time, it returns the best program in the set of k best with respect to F, not with respect to the action agreement metric. We use k=20 in our experiments.
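A sketch of how the cheap action-agreement metric and the k-best filtering could be combined is shown below; the function names and interfaces are illustrative, not the actual implementation:

import heapq

def action_agreement(program, examples):
    """Fraction of observations on which the program picks the same action
    as the neural policy (the input-output examples)."""
    return sum(program(obs) == act for obs, act in examples) / len(examples)

def best_of_k(candidates, examples, game_score, k=20):
    """Keep the k programs with the highest action agreement and only run the
    expensive game-score evaluation F on those."""
    top_k = heapq.nlargest(k, candidates,
                           key=lambda p: action_agreement(p, examples))
    return max(top_k, key=game_score)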
§.§.§ Highlights
Highlights ranks a set of observations according to the largest difference in Q-values for different actions available at a given observation. We employ the idea of highlights to further optimize the computational cost of our evaluation function by using a small number of input-output examples. Instead of collecting a large number of observation-action pairs uniformly at random, we collect the 400 observations ranked most important by Highlights <cit.>.
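The selection step can be sketched as follows, assuming the Q-values are stored as one list per observation; this is a simplified stand-in for the Highlights algorithm of <cit.>, not its full implementation:

def highlights_top(observations, q_values, n=400):
    """Keep the n observations with the largest gap between the best and
    worst Q-value available at that observation (the most 'important' states)."""
    ranked = sorted(zip(observations, q_values),
                    key=lambda pair: max(pair[1]) - min(pair[1]),
                    reverse=True)
    return [obs for obs, _ in ranked[:n]]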
§.§.§ Bayesian Optimization
The real numbers n in the DSL (Figure <ref>) are set using Bayesian optimization <cit.>. Bottom-up enumeration in the synthesizer generates programs with the symbol n, later replaced with real values by the optimizer. The optimizer chooses these values while attempting to optimize for the action agreement metric.
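As an illustration, the constant-tuning step might look as follows; scikit-optimize's gp_minimize is used here only as a stand-in Bayesian optimizer (the paper does not name a specific library), and fill_constants is an assumed helper on the program object:

from skopt import gp_minimize

def tune_constants(program_template, examples, n_constants, bounds=(-10.0, 10.0)):
    """Choose real values for the symbols n in a synthesized program so that the
    action-agreement metric is maximized (gp_minimize minimizes, so we negate)."""
    def loss(values):
        program = program_template.fill_constants(values)   # replace each n
        agreement = sum(program(obs) == act for obs, act in examples) / len(examples)
        return -agreement
    result = gp_minimize(loss, [bounds] * n_constants, n_calls=30, random_state=0)
    return program_template.fill_constants(result.x)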
§.§.§ Restarts
The initial program and the set of input-output pairs define the optimization landscape POLIS traverses with its hill-climbing algorithm. POLIS's greedy approach to optimization could lead to the algorithm returning locally optimal solutions. An effective strategy for dealing with locally optimal solutions is to restart the search from a different starting location in the optimization landscape once the search stops in a local optimum <cit.>. To restart the search and allow for different initial starting conditions, we train a different DQN agent to generate a new set of input-output pairs every time we restart the algorithm. A restart is triggered in Algorithm <ref> when line <ref> is reached and POLIS still has time available for synthesis.
§ USER STUDY EVALUATION
This section describes the experimental design of the study.[Our implementation and the data collected in our user study is available at <https://github.com/FatemehAB/POLIS>.]
§.§ Problem Domains
We use POLIS to improve programs written by users to play two games: Lunar Lander and Highway (Figure <ref>).
Both games have a game score, which serves as a clear metric for evaluating the quality of the programs.
Lunar Lander In this game the player controls three thrusters of a spaceship trying to land on the moon. Each thruster can be either on or off. The game score is maximized if the player does not use the thrusters unnecessarily and gently reaches the landing pad. We use the LunarLander-v2 implementation from OpenAI Gym <cit.>.
Highway In this game the player controls a car on a three-lane highway. The game score is higher when the player drives fast, avoids collisions, and spends more time in the rightmost lane. The player can change lanes, increase, or reduce speed. We use the implementation of highway-env.
§.§ User Study Design
We developed a web-based system based on HIPPO Gym <cit.> to conduct the user study and advertised it in mailing lists of graduate and undergraduate Computing Science students at our university.[The study was approved by the University of Alberta Research Ethics Office (Pro00113586).] Each participant first electronically signed a consent form, explaining that they would write a program to play a computer game. It also explained that their compensation would be impacted by the game score of their final program; higher game scores would result in higher monetary compensation. The minimum compensation was $15. We used the following formulae to compute the compensation of each participant: 15+ (100+x) × (1/30) and 15 + x × (1/5) for Lunar Lander and Highway, respectively. x represents the participants' average game score over 100 and 25 episodes of Lunar Lander and Highway, respectively (an episode is completed when the player finishes landing the spaceship in Lunar Lander or when the player crashes the car or a time limit is reached in Highway). The maximum compensation was capped at $25.
After agreeing with the terms of the study, each participant was randomly assigned to one of the two games. Then, they read a tutorial about the assigned game. In the tutorial, we explained the features in each observation passed as an input parameter to the program as well as the actions available to the player. Our tutorial had a few examples with screenshots of the game showing situations where different actions were applied to different observations of the game. The tutorial finished with a multiple-choice question about the game; immediate feedback was provided to the participant showing whether they chose the correct or wrong answer. If an answer was incorrect, the participant would have as many attempts as needed to answer it correctly.
Following the game tutorial, each participant read a tutorial about our DSL. The tutorial presented the DSL (Figure <ref>) and explained Boolean and algebraic expressions as well as the programming structures our DSL supports. Similarly to the game tutorial, we provided several examples of programs that can be written in our DSL. The tutorial finished with a multiple-choice question where the participant had to select, among four options, the program that was accepted in our DSL; the participant had as many attempts as needed to answer the question correctly.
Before writing a program for playing the game, the participant had the chance to play the game using their keyboard for a maximum of 10 minutes. Our graphical user interface showed, in real-time, the observation values and the game score each participant obtained for each run of the game. The participant could choose to stop playing the game at any time (within the 10 minutes allowed by our system) and start writing their program. Our goal with this step of the study was to allow the participant to develop a strategy for playing the game, something they could try to encode in their programs.
We provided the participants with a Python-like editor, where the keywords of the DSL are highlighted. The editor also had an example of a simple program for playing the game. For Highway, the initial program moves the car to the right lane if the car is not already there; the player takes no action otherwise. Our interface also allowed the participants to go back to the tutorials while writing their program.
Our interface also showed the game so that participants could execute their program and see its behavior. Similarly to the interface where the participant played the game, we showed the observation values and the game scores in real-time. The participant could stop the simulation at any time to inspect the values of the observations. We stored all programs the participants evaluated so that they could be used as input for our evaluation. The total time allowed for the experiment was 60 minutes. The participant could submit the final version of their program at any time within the 60-minute limit. We used the final program submitted to compute the participant's monetary compensation. The participant then answered demographic questions before leaving.
§ USER STUDY RESULTS
In our results, we abbreviate standard deviation as SD and interquartile range as IQR.
§.§ Demographics
40 people consented to participate and 26 completed the survey. The average age was 20.96 (SD of 4.13), with their ages ranging from 18 to 40; 20 of the participants identified themselves as male, 5 as female, and 1 withheld gender information.
Most (20) had received or were pursuing undergraduate education, 4 had completed high school, and 2 were pursuing post-secondary training. Most (25) had not done any form of game artificial intelligence (AI) research and about half of them had not taken any AI courses. More than one-third of the participants (10) rarely or never played computer games and others occasionally or often played computer games.
We asked about the participants' programming experience: 22 had more than one year of experience and 4 had less than a year. We also asked about their knowledge of Python, how hard it was to write a program in our DSL, and how hard it was to write a program for solving the game. We used a 5-point, Likert-like scale: 1 being "novice" in Python and "very easy" for writing programs, to 5 being "expert" in Python and "very hard" for writing programs. The median responses to these three questions were: 3 (IQR = 1), 2.5 (IQR = 2), and 4 (IQR = 1), respectively. On average, the participants had some experience in Python, found it easy to use our DSL, but found it hard to write a program to play the game. To evaluate POLIS, we considered the data from those who submitted at least one working program (different from the example program we provided), resulting in a total of 27 participants (one of them did not complete the survey).
§.§ Computational Results
<Ref> show the results for Lunar Lander and Highway, respectively. Here, each participant is represented by an ID. The game score of both the participants' and POLIS's programs is an average of the score the program obtained in 100 episodes of Lunar Lander and 25 episodes of Highway. The game score shown for POLIS is the average over 10 independent runs of the system. Each run of POLIS can result in different game scores due to the random initialization of the neural policy used to generate input-output pairs. We also present the standard deviation, minimum, and maximum game scores across these 10 independent runs. We performed 5 restarts for each run of the system; the result of a run is the best program encountered across the 5 restarts. The average scores we present for both participants and POLIS are for the program that achieved the highest average score throughout the study; the program the participant submits is not necessarily the program with the highest score. The number of lines of code (LoC) indicates how many lines the original program has. In both tables, we sort the rows according to the participant's program game score, from lowest (top) to highest (bottom). The number of edited lines (Edited LoC) refers to the average number of lines that POLIS modifies in the restart that resulted in the best program of a given run. We also show the average number of car collisions in Highway (Hits).
POLIS's average score is higher for all programs written in our study. Even the minimum value across the 10 independent runs is often much higher than the score of the program the participants wrote. A Wilcoxon signed-rank test pointed to a large effect size for the average results of both domains: 0.624 for Lunar Lander (p<4.9 × 10^-4) and 0.621 for Highway (p < 3.1 × 10^-5).
For Lunar Lander, POLIS provided quite significant improvements to some of the participants' scores (e.g., IDs 3 and 11), but for some others the improvements were minor (e.g., IDs 4 and 5). The number of lines edited for the programs of participants 4 and 5 is much smaller than for the other programs, which indicates that POLIS quickly reached a local optimum for these programs. Interestingly, for Highway, POLIS improved the performance of all programs to an average game score above 33 (the best program a participant wrote achieved a score of 35.71). Moreover, POLIS substantially reduced the number of collisions, in some cases from more than 20 to fewer than 3 collisions. Since POLIS does not change the overall structure of the program, we conjecture that the participants identified the program structure needed to play Highway, which makes the programs for that game more amenable to POLIS's improvements.
The Lunar Lander results might be pointing to a limitation of POLIS, namely its inability to improve programs that need simultaneous changes to more than one line of code.
§.§ Representative Program
The program shown in Figure <ref> is a representative program written by one of the participants of our study for the Highway domain; we refer to this program as p in this section. This program obtains an average game score of 6.8 over 25 episodes. Figure <ref> shows POLIS's improved program for p, which we will refer to as p'. We lightly edited p' for readability.
POLIS's p' obtains an average game score of 39.0 over 25 episodes, a major improvement over the original program. The participant of our study made a mistake while writing the first if-statement of p, as the Boolean condition checks whether o[5] is equal to o[1] and whether o[5] - o[1] > 200; the two parts of the expression cannot be simultaneously true, because once o[5] is equal to o[1], we have that o[5] - o[1] is zero. As a result, the player never slows down (action 4). The participant's intention with this if-statement was likely to slow the car down if the player's car was in the same lane as the nearest car on the road (the condition "o[5] is equal to o[1]" returns true if the cars are in the same lane).
POLIS not only fixed the problem with the Boolean condition in the participant's program, but also changed the player's strategy. Instead of slowing down if another car is in the same lane, p' only slows down when changing lanes; o[3] is the car's velocity on the y-axis, which is different from zero when the car is changing lanes. Since the car is changing lanes, o[1] cannot be zero, as o[1] is zero when the car is in the leftmost lane. Unlike p, p' changes lanes when there is another car in the same lane. This is encoded in the elif structure of the program, which can be translated as: if the nearest car is in the same lane (o[5] is equal to o[1]) and the car is not already in the rightmost lane (line 7), then move to the right lane (action 2; line 8). The agent will move to the left lane if already in the rightmost lane (action 0; line 10).
POLIS's improved program prefers to drive in the rightmost lane if the car driving in the same lane is not the closest (i.e., there is still time to change lanes). The program maximizes its score by driving in the rightmost lane. Finally, POLIS's program does nothing (action 1) if it is not changing lanes and there is no car in front of it. POLIS's strategy is a cautious one, as the car slows down while changing lanes but never accelerates. This cautious strategy achieves a much higher game score than the participant's program.
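For illustration only, the behaviour described above can be reconstructed roughly as follows; this is not the actual program in Figure <ref>, the observation indices follow the text, and the lane constant is an assumption:

RIGHTMOST_LANE = 2   # assumed: lanes indexed 0, 1, 2 on the three-lane highway

def reconstructed_policy(o):
    # action codes from the text: 0 = move left, 1 = do nothing,
    # 2 = move right, 4 = slow down
    if o[3] != 0:                     # non-zero y-velocity: the car is changing lanes
        return 4                      # slow down only while changing lanes
    if o[5] == o[1]:                  # nearest car is in the same lane
        if o[1] != RIGHTMOST_LANE:    # not yet in the rightmost lane
            return 2                  # move to the right lane
        return 0                      # already rightmost: move to the left lane
    return 1                          # otherwise do nothing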
§ PROOF OF CONCEPT: STACK OVERFLOW
To demonstrate that is general and can be applied to problems other than games and also to languages with more complex structures such as loops, we collected four programs with implementation problems on Stack Overflow and translated them to our Python-like language so that could fix them. Three of the four programs are sorting algorithms; the last program attempts to compute the cumulative sum of a set of numbers.
Figure <ref> shows the DSL used in this experiment. The input parameter indicates that the program can accept and return a variable number of arguments depending on the problem being solved. Compared to the DSL used in the user study, this DSL accepts more data types (arrays) and more complex structures (loops).
POLIS corrected all three sorting programs with an evaluation function that simply counts the number of input examples that are correctly mapped to the desired output. The problems with the Stack Overflow sorting programs were simple (e.g., one of the programs used the wrong comparison operator in the Boolean expression of a while loop), and POLIS was able to fix them by changing a single line of the original programs.
The fourth program we collected on Stack Overflow attempts to solve a "cumulative sum problem," which is defined as follows. Given an array of numbers, the goal is to replace each element with index i in the array with the sum of all elements with index j ≤ i. For example, the expected output for the array [4,3,6] is [4, 7, 13]. Figure <ref> shows the incorrect implementation of a program for solving the cumulative sum problem and POLIS's corrected version for the problem. The cumulative sum program had two implementation errors: the Boolean expression of the while-loop and the list used in the operation within the loop. POLIS could not fix them by simply using the number of input examples correctly mapped to the desired outputs. Instead, we used an F function that computed the sum of the absolute differences between each element of the list the program produced as output and the desired list of numbers. Using this F function, POLIS corrected the program, as shown in Figure <ref>.
In this proof-of-concept experiment, we manually generated the input-output examples, similar to how a programmer would come up with a set of test cases for their program. Such a set could possibly be used to define POLIS's F function, so it can attempt to correct the implementation errors.
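For illustration, such a distance-based evaluation function could be sketched as follows; the helper name and the usage example are hypothetical, not taken from the authors' code:

def cumulative_sum_distance(program, test_cases):
    """Evaluation function F for the cumulative-sum repair: the (negated) sum of
    absolute differences between the program's output list and the expected list,
    summed over all input-output examples."""
    total = 0
    for xs, expected in test_cases:
        produced = program(xs)
        total += sum(abs(a - b) for a, b in zip(produced, expected))
    return -total    # larger is better, so negate the distance

# usage on the instance from the text
examples = [([4, 3, 6], [4, 7, 13])]
correct = lambda xs: [sum(xs[:i + 1]) for i in range(len(xs))]
assert cumulative_sum_distance(correct, examples) == 0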
§ CONCLUSIONS
In this paper, we present POLIS, a system capable of improving existing programs with respect to a measurable, real-valued metric. POLIS employs a simple synthesizer within the loop of its local search. POLIS divides the problem of improving an existing implementation into smaller sub-problems by considering each line of the program as an independent program synthesis task. This way, POLIS employs a bottom-up search synthesizer that attempts to replace a single line of the original program at a given time, while all the other lines remain unchanged. We conducted a user study where 27 participants wrote programs to play two games. POLIS was able to improve the performance of the programs of all participants, often by a large margin. Since POLIS performs local changes with an enumerative synthesizer, its modified program shares the same structure as the original program. The similarity of the programs allowed us to understand how POLIS was able to improve the performance of a representative program from our study. We also performed a proof-of-concept experiment with four programs collected from Stack Overflow to demonstrate that POLIS can also be applied to other application domains and handle more complex languages such as those with loops. POLIS was able to correct all four programs. The results of our experiments suggest that POLIS can be used as a programming assistant in scenarios where one is interested in improving an existing program with respect to a measurable, real-valued metric.
§ ACKNOWLEDGEMENTS
This research was supported by Canada's NSERC and the CIFAR AI Chairs program. The research was carried out using computational resources from Compute Canada.
Part of this work has taken place in the Intelligent Robot Learning (IRL) Lab at the University of Alberta, which is supported in part by research grants from the Alberta Machine Intelligence Institute (Amii); a Canada CIFAR AI Chair, Amii; Compute Canada; Huawei; Mitacs; and NSERC.
We thank the anonymous reviewers for their feedback.
|
http://arxiv.org/abs/2307.04852v1 | 20230710184746 | $\texttt{AlgRel.wl}$: Algebraic Relations for the Product of Propagators in Feynman integrals | [
"B. Ananthanarayan",
"Souvik Bera",
"Tanay Pathak"
] | hep-ph | [
"hep-ph",
"hep-th",
"math-ph",
"math.MP"
] |
AlgRel.wl: Algebraic Relations for the Product of Propagators in Feynman integrals
August 12, 2023
============================================================
§ INTRODUCTION
In this work, we consider the formalism first proposed by Tarasov to derive algebraic relations for the product of propagators for functional reduction <cit.>. We systematically develop an algorithm inspired by the original work and
present a realization of it in Mathematica, which is provided to the user as a package called AlgRel.wl.
We have used the package to simplify and analyze many important and interesting Feynman Integrals that are
amenable to treatment using this formalism.
Feynman integrals play an important role in precision calculations in quantum field theory. There are various methods to evaluate them <cit.>. Even with all these methods, it is at times still challenging to compute Feynman integrals. More often, other techniques are used to facilitate this computation. In <cit.> the method of functional reduction was introduced to derive functional relations between Feynman integrals. These relations reduce the original integral into a sum of integrals which are easier to evaluate. The focus of the present work is this new way to obtain functional relations by deriving the algebraic relations for the product of propagators. This method in turn then leaves some undetermined free parameters which can be chosen at will. Appropriate choices of these parameters result in various functional equations for Feynman integrals<cit.>.
The method can be applied to any one-loop diagram, indeed as already pointed out in
detail by Tarasov. Despite this, no working code has been provided in the past. In the present work, we provide an automated package to derive the algebraic relation for the product of propagators. Our code here fills this gap in the possibility of finding widespread use of formalism. Since our goal is an efficient algorithmic implementation to find the algebraic relation, we introduce a recursive way of method. The free parameters in the resulting relation can then be chosen in an appropriate manner to derive the functional equations for the Feynman integrals. More specifically, for presentation purposes, we will focus on the cases when all these free parameters are zero and the original Feynman integral with many massive propagators can be written as a sum of integrals with fewer massive propagators, which was also pointed out in <cit.>[We also briefly discuss a case when we choose a non-zero parameter in Appendix <ref>]. For the one-loop integrals, this procedure can be used to reduce the original integral to a sum of integrals with at most one massive propagator. We apply the method for up to 6-point, one-loop integrals and show that the N-point one loop integral with all massive propagators and general external momenta can be written as a sum of 2^N-1 integral with just one massive propagator. Though the method is not readily generalizable to the higher loop we yet extend the uses to cover
certain cases of 2- and even 3-loop Feynman integrals. In a similar manner, this approach is also applicable to higher loops. Our findings show that we require at least
4 propagators in order for the formalism to be viable and to be of utility as far as the simplification is
concerned. We explain this feature in some detail.
We, however, notice that such functional reduction is one of the many possibilities obtained after choosing the free parameters obtained from the algebraic relation <cit.>.
In view of the proposed method of functional reduction of Feynman integrals, the package has been built in such a way that the final result still has arbitrary parameters which can be chosen suitably for the functional reduction procedure. Using a few of the analytical results available for the one-loop integrals, we explicitly show how the complexity in the evaluation of these integrals can be reduced. Since the Feynman integrals can be written in terms of hypergeometric functions <cit.> this reduction in complexity gives rise to reduction formulae for the hypergeometric functions. Thus it can be used to establish new relations between multi-variable hypergeometric functions. We discovered many new reduction formulae for such hypergeometric functions, which, to the
best of our knowledge, have not appeared anywhere in the literature. We also discuss in detail how further reduction formulae can be obtained from already available results for the one-loop cases. Such relations between hypergeometric functions were also obtained in <cit.>, where explicit relations between hypergeometric functions were derived via the evaluation of Feynman integrals. In order to make the results accessible, we provide several examples in a single notebook that allows
the reader to appreciate the power of the formalism, based on the code that is provided along with this work.
The article is organized as follows. In section <ref> we discuss the method in detail with one loop bubble integral as an example and explicitly present how the reduction in complexity has been achieved for the integral. In section <ref>, we present the algorithm of the package and discuss its usage in detail. In section <ref>, various results obtained for one, two, and three-loop integrals are presented. In section <ref>, we discuss the various analytic results in terms of multi-variable hypergeometric functions already derived for the one-loop N-point integrals <cit.> and show how the present work helps in deriving the reduction formulae for the multi-variable hypergeometric functions using them. Finally, we conclude the paper with some summary and discussions in section <ref>. In Appendix <ref>, we provide a list of various reduction formulae that we derive, along with some details on how to further extend the list given there.
The package, along with a notebook that contains all the examples discussed in the paper, can be found in the GitHub repository <https://github.com/TanayPathak-17/Algebraic-relation-for-the-product-of-propagators>.
§ THE METHOD
We now explain the method to find the algebraic relation of the product of propagators with the help of the one-loop bubble integral.
Consider the one-loop bubble integral corresponding to bubble diagram Fig.<ref>,
I_2(p^2,m_1,m_2)= ∫d^4k/(k^2-m_1^2)((k-p)^2-m_2^2)
To find the algebraic relation for the product of propagators, we instead consider a more general propagator, depending on only one loop-momenta, of the following form
d_i= (k+q_i)^2-m_i^2
where k is the loop-momentum, q_i's are dependent on external momenta and can be zero as well and m_i is the mass of the propagator.
With the general propagators, we now have
I_2((q_1-q_2)^2,m_1,m_2) = ∫d^4k/d_1d_2
where substituting q_1=0 and q_2=-p we recover Eq.(<ref>).
We seek the algebraic relation for the integrand, by introducing a new denominator D_1 along with coefficients x_1 and x_2, of the following form
1/d_1d_2 = x_1/D_1 d_2 + x_2/D_1 d_1
where D_i= (k+P_i)^2-M_i^2 is defined similar to Eq.(<ref>).
The unknowns that are introduced can be fixed using the above equation, while the remaining parameters are arbitrary and can be fixed at will in such a way that the resulting relationship will give rise to integrals which are easier to compute.
Using Eq.(<ref>) we get
D_1= x_1 d_1 +x_2 d_2
Comparing the coefficients of k^2, k and the remaining k independent term we get
x_1+x_2 =1
x_1 q_1 + x_2 q_2 = P_1
-M_1^2 + P_1^2 - (-m_1^2 + q_1^2) x_1 - (-m_2^2 + q_2^2) x_2 =0
Solving for x_1, x_2 and P_1 we get
x_1 = √((m_1^2-m_2^2+(q_1-q_2)^2)^2-4 (m_1^2-m_2^2) (q_1-q_2)^2)+m_1^2-m_2^2+q_1^2+q_2^2-2 q_1 q_2/2 (q_1-q_2)^2
x_2 = -√((m_1^2-m_2^2+(q_1-q_2)^2)^2-4 (m_1^2-m_2^2) (q_1-q_2)^2)-m_1^2+m_2^2+q_1^2+q_2^2-2 q_1 q_2/2 (q_1-q_2)^2
P_1 = -√((m_1^2-m_2^2+(q_1-q_2)^2)^2-4 (m_1^2-m_2^2) (q_1-q_2)^2)+m_1^2-m_2^2-q_1^2+q_2^2/2 (q_1-q_2)^2(q_1-q_2)
In the above equation, M_1 is still an arbitrary variable that can be chosen at will. Choosing various values of M_1 will result in different functional equations <cit.> for the bubble integral. For the present work, we will focus on one of the simple choices i.e. M_1=0. Integrating Eq.(<ref>) and substituting q_1=0 and q_2= -p we have
I_2(p^2,m_1,m_2)= x_1 I_2((P_1-p)^2,0,m_2)+ x_2 I_2(P_1^2,m_1,0)
Hence we see that the general two-point bubble integral with non-zero masses can be written in terms of two integrals with just one mass. Diagrammatically Eq.(<ref>) can be represented as in Fig.<ref>.
To see how the complexity in the computation has been reduced in the Eq.(<ref>), we refer to a few analytic results. The general result for the massive bubble diagram can be written in terms of the Appell F_4 function <cit.>.
I_2(p,m_1,m_2) = (m_2^2)^d/2-2Γ (d/2-1) Γ (2-d/2)/Γ (d/2) F_4(2-d/2,1;d/2,2-d/2;p^2/m_2^2,m_1^2/m_2^2)
+(m_1^2)^d/2-1Γ (1-d/2) /m_2^2 F_4(d/2,1;d/2,d/2;p^2/m_2^2,m_1^2/m_2^2)
where,
F_4(a,b;c,d;x,y)= ∑_m,n=0^∞(a)_m+n(b)_m+n/((c)_m(d)_n m!n!) x^m y^n
is the Appell F_4 hypergeometric function with region of convergence (ROC) given by √(|x|)+ √(|y|) < 1.
The analytic expression result for I_2(p,m,0) is readily available in <cit.>.
I_2^(d)( p^2; m^2, 0 )=-Γ(1-d/2) m^d-4 _2F_1[[ 1,2-d/2 ;; d/2 ; ]p^2/m^2]
Using the above relation in Eq.(<ref>), we get the following for the right-hand side
-m_1^d-4Γ (1-d/2) (-√((-m_1^2+m_2^2+p^2)^2 -4 m_2^2 p^2)+m_1^2-m_2^2+p^2) /2 p^2
× _2F_1[[ 1,2-d/2 ;; d/2 ; ](p^2+m_1^2-m_2^2-√((p^2-m_1^2+m_2^2)^2-4 p^2 m_2^2))^2/4 p^2 m_1^2]
-m_2^d-4Γ (1-d/2)/2 p^2
(√((-m_1^2+m_2^2+p^2)^2-4 m_2^2 p^2)-m_1^2+m_2^2+p^2) _2F_1[[ 1,2-d/2 ;; d/2 ; ](p^2+m_1^2-m_2^2-√((p^2-m_1^2+m_2^2)^2-4 p^2 m_2^2)/2 p-p)^2/m_2^2]
The above relation can be viewed as a reduction formula without making reference to the underlying Feynman integral and the result is shown in Eq.(<ref>) and Eq. (<ref>). In a similar manner, evaluation of other Feynman integrals can be used to obtain the relationship between hypergeometric functions <cit.>. Such a reduction of hypergeometric functions with a higher number of variables to those with a lesser number of variables also helps when the analytic continuation has to be done to reach a certain kinematical region. For the case of Appell F_4 the elaborate analytic continuation has been performed explicitly in <cit.> or using automatized algorithms <cit.> for more general multi-variable hypergeometric functions. This whole process still does not guarantee convergence for all the values of the parameter space <cit.>. While for the case of _2F_1 complete table of analytic continuations is available <cit.>. The procedure to find the analytic continuations also gets more complicated with the increase in the number of variables even with the use of automatized packages.
§ PACKAGE : ALGORITHM AND USAGE
§.§ Algorithm
We now present a general algorithm for the case when we have N denominators to find algebraic relation recursively.
Consider the general situation with a product of N denominators, 1/(d_1⋯ d_N).
* We first find the algebraic relation by taking d_1 and d_2
1/d_1d_2 = x_1/D_1 d_2 + x_2/D_1 d_1
* We then multiply the above equation by 1/d_3
1/d_1d_2d_3 = x_1/D_1 d_2d_3 + x_2/D_1 d_1d_3
* We then find the algebraic relation of each pair of d_is again using Eq.(<ref>).
* Then in the resulting relation, we repeat this process until all the denominators are exhausted.
The final result will be a sum of 2^N-1 terms where N is the total number of denominators we started with.
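For concreteness, applying these steps to N = 3 gives, schematically (with generic coefficients x_i and auxiliary denominators D_i, whose labels need not match those of the explicit examples in the following sections),
1/d_1d_2d_3 = x_1/D_1 d_2d_3 + x_2/D_1 d_1d_3 = x_1 x_3/D_1 D_2 d_3 + x_1 x_4/D_1 D_2 d_2 + x_2 x_5/D_1 D_3 d_3 + x_2 x_6/D_1 D_3 d_1
i.e., 2^3-1 = 4 terms, each containing a single original (massive) propagator.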
It is to be noted that the above procedure is a slight modification of the original method<cit.>. In <cit.>, we start by seeking the following algebraic relation for the product of N propagators
1/d_1⋯ d_N = x_1/D_1d_2⋯ d_n + ⋯ + x_N/d_1⋯ d_N-1D_1
Comparing the coefficients of k^2 and k and using the constant term, we get an under-determined set of equations. Such a system will leave x_3, x_4, ⋯, x_N undetermined. Such a procedure, when used recursively with each term on the RHS of the above equation, will finally result in N! terms in total, unlike the 2^N-1 terms obtained using the procedure presented here. Also, the arbitrariness in the choice of the coefficients x_i in the original algorithm is now present in the choice of the parameters M_i.
§.§ Usage
The recursive algorithm presented previously has been automated in the accompanying package AlgRel.wl. Below we demonstrate the usage of the package.
After downloading the package AlgRel.wl and putting it in the same directory as the notebook, we can call the package as follows:
Input
SetDirectory[NotebookDirectory[]];
<<AlgRel.wl
Print
AlgRel.wl v1.0
Authors : B. Ananthanarayan, Souvik Bera, Tanay Pathak
ROC2.wl v1.0
The package has been made assuming the form d_i= (k+q_i)^2-m_i^2 for the propagator, where k, q and m can be changed as per the convenience of the user. The only command of the package is AlgRel, which can be called as follows
Input
AlgRel[Propagator's number,{k,q,m},{P,M},x,Substituions]
Output
{{Algebraic relation},{Values}}
The various elements of the input are as follows
* Propagator's number: It is a list of numbers used to denote the various propagators. The numbering need not be sequential, which eases the use of the package in the case of many propagators (see Section <ref> for an example).
* {k,q,m}: It is a list containing three variables corresponding to the general propagator d_i= (k+q_i)^2-m_i^2. The first element denotes the loop momentum, the second denotes the combination of external momenta (which can be zero as well), and the third denotes the mass of the propagator.
* {P,M}: It is a list containing two variables. They are used to set the variables for the auxiliary propagator introduced for obtaining the algebraic relation, D_i= (k+P_i)^2-M_i^2. The loop momentum is automatically taken from the previous list.
* x: It is used to denote the variable for the coefficients in the algebraic relation, Eq.(<ref>).
* Substitutions: It is a list of substitutions for the q_i and M_i.
The output of the above command is a nested list with the following two sub-lists:
* {Algebraic relation}: It gives the algebraic relation for the product of propagators, Eq.(<ref>).
* {Values}: It is a list of the values obtained for P_i and x_i.
Consider the example of Bubble integrand. To obtain the result for it we can use the following command
Input
AlgRel[{1, 2},{k,q,m},{P, M}, x,{q[1]-> 0,q[2]->-p,M[1]->0}]
Output
{{x[1]/((k+P[1])^2 (-m[1]^2+k^2))+x[2]/((k+P[1])^2 (-m[2]^2+(k-p)^2))},
{x[1]->(p^2+m[1]^2-m[2]^2+Sqrt[(p^2+m[1]^2-m[2]^2)^2-4 p^2 m[1]^2])/(2 p^2),...}}
Due to its length, the second element of the output (i.e., the substitution list) is not shown fully. It contains the values of the x_i and P_1 as given in Eq.(<ref>).
In the next section, we look at a few one-loop and two-loop examples where such a procedure is helpful. Numerical checks at the integrand level have been performed for all the algebraic relations as a check of their correctness. For the cases where it was possible to achieve numerical stability, we have also checked these relations by carrying out explicit numerical integration using <cit.>.
§ RESULTS
We will now look at results for one-loop and higher-loop cases that are obtained with the help of the package. All the results are also presented in the accompanying notebook.
§.§ One-loop vertex integral
We next consider the reduction of the one-loop vertex integral, which is given by
I_3= ∫d^4k/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+p_1+p_2)^2 -m_3^2)
We proceed as described in the previous section. We use the generalized propagators and perform the substitutions accordingly so that the result reduces to Eq.(<ref>). This can be done using the following command
Input
AlgRel[{1,2,3},{k,q,m},{P,M},x,{q[1]->0,q[2]->p1,q[3]->p1+p2}]
The result will be the relation which is a sum of 4 terms, as follows
x_1 x_3/(k^2-m_1^2) (k+P_1)^2 (k+P_2)^2 +x_1 x_4/(k+P_1)^2 (k+P_2)^2 ((k+p_1+p_2)^2-m_3^2)
+x_2 x_5/(k+P_1)^2 (k+P_3)^2 ((k+p_1)^2-m_2^2)
+x_2 x_6/(k+P_1)^2 (k+P_3)^2 ((k+p_1+p_2)^2-m_3^2)
where
x_1 = √((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)+m_1^2-m_2^2+p_1^2/2 p_1^2,
x_2 = -√((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)-m_1^2+m_2^2+p_1^2/2 p_1^2,
P_1 = √((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)+m_1^2-m_2^2+p_1^2/2 p_1^2p_1,
x_3 =√((m_1^2-m_3^2+(-p_1-p_2)^2)^2-4 m_1^2 (-p_1-p_2)^2)+m_1^2-m_3^2+(p_1+p_2)^2/2 (-p_1-p_2)^2,
x_4 = -√((m_1^2-m_3^2+(-p_1-p_2)^2)^2-4 m_1^2 (-p_1-p_2)^2)-m_1^2+m_3^2+(p_1+p_2)^2/2 (-p_1-p_2)^2,
P_2 = √((m_1^2-m_3^2+(p_1+p_2)^2)^2-4 m_1^2 (-p_1-p_2)^2)+m_1^2-m_3^2+(p_1+p_2)^2/2 (p_1+p_2)^2 (p_1+p_2) ,
x_5 = √((m_2^2-m_3^2+p_2^2)^2-4 m_2^2 p_2^2)+m_2^2-m_3^2+p_1^2+(p_1+p_2)^2-2 p_1 (p_1+p_2)/2 p_2^2,
x_6 = -√((m_2^2-m_3^2+p_2^2)^2-4 m_2^2 p_2^2)-m_2^2+m_3^2+p_1^2+(p_1+p_2)^2-2 p_1 (p_1+p_2)/2 p_2^2,
P_3 = √((m_2^2-m_3^2+p_2^2)^2-4 m_2^2 p_2^2)+m_2^2-m_3^2-p_1^2+(p_1+p_2)^2/2 p_2^2p_2
Integrating Eq.(<ref>) over the loop momentum k, we get the vertex integral written as a sum of vertex integrals, each with just one massive propagator.
§.§ One loop box integral
We now consider one loop box integral which can be written as
I_4= ∫d^4k/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_1 +p_2)^2 -m_3^2)((k+ p_1 +p_2+p_3)^2 -m_4^2)
We can get the algebraic relation using the following command
Input
AlgRel[{1,2,3,4},{k,q,m},{P,M},x,{q[1]->0,q[2]->p1,q[3]->p1+p2,q[4]->p1+p2+p3}]
Substituting q_1=0, q_2 = p_1, q_3= p_1+p_2, q_4= p_1+p_2+p_3 and M_i=0, i=1 ⋯ 7, and simplifying, we get
1/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_1 +p_2)^2 -m_3^2)((k+ p_1 +p_2+p_3)^2 -m_4^2) =
x_1 x_3 x_7/(k^2-m_1^2) (k+P_1)^2 (k+P_2)^2 (k+P_4)^2+x_1 x_3 x_8/(k+P_1)^2 (k+P_2)^2 (k+P_4)^2 ((k+p_1+p_2+p_3)^2-m_4^2)
+x_1 x_4 x_9/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 ((k+p_1+p_2)^2-m_3^2)
+x_1 x_4 x_10/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 ((k+p_1+p_2+p_3)^2-m_4^2)
+x_2 x_5 x_11/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 ((k+p_1)^2-m_2^2)
+x_2 x_5 x_12/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 ((k+p_1+p_2+p_3)^2-m_4^2)
+x_2 x_6 x_13/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 ((k+p_1+p_2)^2-m_3^2)+
x_2 x_6 x_14/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 ((k+p_1+p_2+p_3)^2-m_4^2)
where the values of the unknowns can be obtained from the accompanying notebook. Integrating Eq.(<ref>) over the loop momentum k, we get the box integral written as a sum of 8 box integrals, each with just one massive propagator.
§.§ One-loop pentagon integral
The one-loop pentagon integral is given by [For this and the subsequent subsection we will use the shorthand notation p_i_1+p_i_2 + ⋯ = p_i_1 i_2⋯, so as to avoid very lengthy expressions. ]
I_5= ∫d^4k/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_12)^2 -m_3^2)((k+ p_123)^2 -m_4^2)((k+ p_1234)^2 -m_5^2)
We can get the algebraic relation using the following command
Input
AlgRel[{1,2,3,4,5},{k,q,m},{P,M},x,{q[1]->0,q[2]->p1,q[3]->p1+p2
,q[4]->p1+p2+p3,q[5]->p1+p2+p3+p4}]
Doing the substitution as before and simplifying we get
1/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_12)^2 -m_3^2)((k+ p_123)^2 -m_4^2)((k+ p_1234)^2 -m_5^2)
= x_1 x_3 x_7 x_15/(k^2-m_1^2) (k+P_1)^2 (k+P_2)^2 (k+P_4)^2 (k+P_8)^2
+x_1 x_3 x_7 x_16/(k+P_1)^2 (k+P_2)^2 (k+P_4)^2 (k+P_8)^2 ((k+p_1234)^2-m_5^2)
+x_1 x_3 x_8 x_17/(k+P_1)^2 (k+P_2)^2 (k+P_4)^2 (k+P_9)^2 ((k+p_123)^2-m_4^2)
+x_1 x_3 x_8 x_18/(k+P_1)^2 (k+P_2)^2 (k+P_4)^2 (k+P_9)^2 ((k+p_1234)^2-m_5^2)
+x_1 x_4 x_9 x_19/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 (k+P_10)^2 ((k+p_12)^2-m_3^2)
+x_1 x_4 x_9 x_20/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 (k+P_10)^2 ((k+p_1234)^2-m_5^2)
+x_1 x_4 x_10 x_21/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 (k+P_11)^2 ((k+p_123)^2-m_4^2)
+x_1 x_4 x_10 x_22/(k+P_1)^2 (k+P_2)^2 (k+P_5)^2 (k+P_11)^2 ((k+p_1234)^2-m_5^2)
+x_2 x_5 x_11 x_23/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 (k+P_12)^2 ((k+p_1)^2-m_2^2)
+x_2 x_5 x_11 x_24/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 (k+P_12)^2 ((k+p_1234)^2-m_5^2)
+x_2 x_5 x_12 x_25/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 (k+P_13)^2 ((k+p_123)^2-m_4^2)
+x_2 x_5 x_12 x_26/(k+P_1)^2 (k+P_3)^2 (k+P_6)^2 (k+P_13)^2 ((k+p_1234)^2-m_5^2)
+x_2 x_6 x_13 x_27/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 (k+P_14)^2 ((k+p_12)^2-m_3^2)
+x_2 x_6 x_13 x_28/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 (k+P_14)^2 ((k+p_1234)^2-m_5^2)
+x_2 x_6 x_14 x_29/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 (k+P_15)^2 ((k+p_123)^2-m_4^2)+
x_2 x_6 x_14 x_30/(k+P_1)^2 (k+P_3)^2 (k+P_7)^2 (k+P_15)^2 ((k+p_1234)^2-m_5^2)
where the values of x_i and P_i can be obtained from the accompanying notebook. Integrating Eq.(<ref>) over the loop momentum k, we get the pentagon integral written as a sum of 16 pentagon integrals, each with just one massive propagator.
§.§ Six-point integral
The six-point integral corresponding to the Fig.<ref>, is
I_6= ∫d^4k/(k^2-m_1^2)((k+p_1)^2-m_2^2)((k+ p_1 +p_2)^2 -m_3^2)((k+ p_123)^2 -m_4^2)
×1/((k+ p_1234)^2 -m_5^2)((k+ p_12345 )^2 -m_6^2)
As in the previous examples we use the following command to obtain the algebraic relations
Input
AlgRel[{1,2,3,4,5,6},{k,q,m},{P,M},x,{q[1]->0,q[2]->p1,q[3]->p1+p2
,q[4]->p1+p2+p3,q[5]->p1+p2+p3+p4,q[6]->p1+p2+p3+p4+p5}]
We omit the result as it is lengthy. The full result can be obtained from the accompanying notebook.
§.§ Two-loop box integral
To find algebraic relation for the product of propagators for a two-loop integral we would use a loop-by-loop approach. To illustrate the method let us consider a two-loop example, the two-loop box integral, corresponding to the diagram Fig.<ref>. The integral is as follows
I_4,2= ∫∫d^4k_1d^4k_2/(k_1^2-m_1^2)((k_1+p_1)^2-m_2^2)(k_2^2-m_3^2)((k_2+p_3)^2-m_4^2)((k_1-k_2+p_1+p_2)^2-m_5^2)
The propagators are numbered such that i represents the propagator d_i.
Firstly we will find the algebraic relation for the product of propagators numbered 1 and 2, which has only the loop-momenta k_1 we can use the following command
Input
AlgRel[{1,2},{k1,q,m},{P,M},x,{q[1]->0,q[2]->p1}]
Similarly, for propagators numbered 3 and 4 we can use the following command
Input
AlgRel[{3,4},{k2,q,m},{Q,M},y,{q[3]->0,q[4]->p3}]
The final relation that we obtain, with M_i=0, is (see the accompanying notebook)
1/(k_1^2-m_1^2)((k_1+p_1)^2-m_2^2)(k_2^2-m_3^2)((k_2+p_3)^2-m_4^2) = x_1 y_1/(k_1^2-m_1^2) (k_2^2-m_3^2) (k_1+P_1)^2 (k_2+Q_1)^2
+x_2 y_1/(k_2^2-m_3^2) (k_1+P_1)^2 (k_2+Q_1)^2 ((k_1+p_1)^2-m_2^2)+x_1 y_2/(k_1^2-m_1^2) (k_1+P_1)^2 (k_2+Q_1)^2 ((k_2+p_3)^2-m_4^2)
+x_2 y_2/(k_1+P_1)^2 (k_2+Q_1)^2 ((k_1+p_1)^2-m_2^2) ((k_2+p_3)^2-m_4^2)
where
x_1= √((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)+m_1^2-m_2^2+p_1^2/2 p_1^2,
x_2= -√((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)-m_1^2+m_2^2+p_1^2/2 p_1^2
P_1= √((m_1^2-m_2^2+p_1^2)^2-4 m_1^2 p_1^2)+m_1^2-m_2^2+p_1^2/2 p_1^2p_1,
y_1=√((m_3^2-m_4^2+p_3^2)^2-4 m_3^2 p_3^2)+m_3^2-m_4^2+p_3^2/2 p_3^2,
y_2= -√((m_3^2-m_4^2+p_3^2)^2-4 m_3^2 p_3^2)-m_3^2+m_4^2+p_3^2/2 p_3^2,
Q_1= √((m_3^2-m_4^2+p_3^2)^2-4 m_3^2 p_3^2)+m_3^2-m_4^2+p_3^2/2 p_3^2p_3
Multiplying both sides of Eq.(<ref>) by 1/((k_1-k_2+p_1+p_2)^2-m_5^2) will give the required algebraic relation for the two-loop box integral.
§.§ Two-loop double box integral
Next, we consider the two-loop double-box integral corresponding to the diagram Fig.<ref>. The integral is as follows
I_4,2= ∫∫d^4k_1d^4k_2/(k_1^2-m_1^2)(k_2^2-m_2^2)((k_2+p_2)^2-m_3^2)((k_2+p_23)^2-m_4^2)((k_1+ p_23)^2-m_5^2)
×1/((k_1+ p_234)^2-m_6^2)((k_1-k_2)^2-m_7^2)
The propagators are numbered such that i represents the propagator d_i.
To find the algebraic relation for the product of propagators numbered 1,5 and 6 we can use the following command
Input
AlgRel[{1,5,6},{k1,q,m},{P,M},x,{q[1]->0,q[5]->p2+p3,q[6]->p2+p3+p4}]
Substituting the values of q_1, q_5 and q_6 corresponding to the Feynman integral, we get
x_1 x_4/(k_1+P_1)^2 (k_1+P_2)^2 ((k_1+p_234)^2-m_6^2)+x_2 x_5/(k_1+P_1)^2 (k_1+P_3)^2 ((k_1+p_23)^2-m_5^2)
+x_1 x_3/(k_1^2-m_1^2) (k_1+P_1)^2 (k_1+P_2)^2+x_2 x_6/(k_1+P_1)^2 (k_1+P_3)^2 ((k_1+p_234)^2-m_6^2)
Similarly, for propagators numbered 2,3 and 4, we can use the following command
Input
AlgRel[{2,3,4},{k2,q,m},{Q,M},y,{q[2]->0,q[3]->p2,q[4]->p2+p3}]
which gives the following result after substituting the values of q_2, q_3 and q_4 corresponding to the Feynman integral
y_1 y_4/(k_2+Q_1)^2 (k_2+Q_2)^2 ((k_2+p_23)^2-m_4^2)+y_1 y_3/(k_2^2-m_2^2) (k_2+Q_1)^2 (k_2+Q_2)^2
+y_2 y_5/(k_2+Q_1)^2 (k_2+Q_3)^2 ((k_2+p_2)^2-m_3^2)+y_2 y_6/(k_2+Q_1)^2 (k_2+Q_3)^2 ((k_2+p_23)^2-m_4^2)
All the values of the parameters P_i, Q_i, x_i and y_i can be obtained from the accompanying notebook. To get the algebraic relation for the integrand in Eq.(<ref>) we multiply Eq.(<ref>) and (<ref>) together and then multiply both sides of the equation by 1/((k_1-k_2)^2-m_7^2).
We see that, unlike the one-loop case, we will have 3 massive propagators in each term. In fact, with the present procedure to find the algebraic relation for any two-loop integral, we will have at least 3 massive propagators in each integral. For this reason, the present procedure will not be helpful for integrals like the sunset integral, where there are only 3 propagators.
§.§ Three-loop ladder integral
I_4,3 = ∫∫∫d^4k_1 d^4k_2 d^4k_3/(k_1^2-m_1^2)(k_2^2-m_2^2)(k_3^2-m_3^2)((k_3+p_2)^2-m_4^2)((k_3+p_23)^2-m_5^2)((k_2+ p_23)^2-m_6^2)
× 1/((k_1+ p_23)^2-m_7^2)((k_1+ p_234)^2-m_8^2)((k_1-k_2)^2-m_9^2)((k_2-k_3)^2-m_10^2)
We use a similar strategy as before for this case too to obtain the algebraic relation. The result contains 32 terms and is presented in the accompanying notebook.
§ REDUCTION OF HYPERGEOMETRIC FUNCTIONS
The Feynman integral evaluation gives results in terms of hypergeometric functions. The formalism to find algebraic relation for the product of propagators of Feynman integrals can be employed to find relations between hypergeometric functions <cit.>.
In this section, we point out some analytic results on N-point function <cit.> and various hypergeometric relations that can be obtained from them with the present analysis.
It is well-known that the general one-loop N-point function with zero external momenta and different masses (m_i, i = 1,…, N-1), with unit powers of propagators, can be expressed in terms of Lauricella F_D function <cit.>
I^(N)(m_1,…, m_N-1) = π^d / 2 i^1-d(-m_N^2)^d / 2-NΓ(N-d / 2)/Γ(N)
× F_D^(N-1)(N-d/2, 1, …, 1 ; N | 1-m_1^2/m_N^2, …, 1-m_N-1^2/m_N^2)
where F_D^L is the Lauricella function of L-variables given by
F_D^(L) (a, b_1, …, b_L ; c | z_1, …, z_L) = ∑_j_1=0^∞⋯∑_j_L=0^∞(a)_j_1+⋯+j_L(b_1)_j_1⋯(b_L)_j_L/(c)_j_1+⋯+j_L×z_1^j_1⋯ z_L^j_L/j_1 ! ⋯ j_L ! .
and d is the dimension. The general result (i.e., Eq. (<ref>)) is an (N-1)-fold hypergeometric series. If one of the masses m_1, m_2, ⋯, m_N-1 vanishes, then the function F^(N-1)_D reduces to F^(N-2)_D, using the following relation
F_D^(L) (a, b_1, …, b_L-1, b_L ; c | z_1, …, z_L-1, 1)
= Γ(c) Γ(c-a-b_L)/Γ(c-a) Γ(c-b_L) F_D^(L-1)(a, b_1, …, b_L-1 ; c-b_L | z_1, …, z_L-1)
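For L = 1, where F_D^(1) is just the Gauss function _2F_1 and F_D^(0) ≡ 1, the above relation reduces to the familiar Gauss summation theorem,
_2F_1(a,b;c;1) = Γ(c) Γ(c-a-b)/Γ(c-a) Γ(c-b),  Re(c-a-b)>0,
which serves as a quick check of the reduction formula in its simplest instance.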
Using the method presented, N-1 masses vanish; hence the result reduces to a 0-fold series and is just a factor dependent on the m_i. To evaluate the result (<ref>) outside its associated ROC, one has to explicitly perform an analytic continuation, which is at times difficult to obtain for multi-variable hypergeometric functions. Thus a reduction of the result to just a mass-dependent constant is helpful when the analytic continuation of Eq.(<ref>) is required. Such a result should also be viewed as a reduction formula for the Lauricella F_D^L, obtained here via a physical problem <cit.>, which might otherwise be hard to derive.
For a general one-loop N- point function with non-zero external momenta, the general result can be written as a generalized Lauricella hypergeometric function with (N-1)(N+2)/2 variables <cit.>. For the case of general vertex integral, this will result in a generalized Lauricella function with 5 variables <cit.>. On the other hand, using Eq.(<ref>) the result can be written in terms of a hypergeometric function of 3-variables <cit.>.
Comparing Eq. (<ref>) and (<ref>) we see that the evaluation of bubble integral has reduced from the evaluation of Appell F_4 which has two variables to that of hypergeometric _2F_1 with one variable. Such a result can be viewed as a general reduction formula without any explicit relation to the Feynman integrals it has been obtained from. Substituting a=d/2, p^2/m_2^2=x and m_1^2/m_2^2=y, we get the following relation
y^a-1 F_4(a,1,a,a,x,y)-F_4(2-a,1,a,2-a,x,y) =1/2 x((-x+y-1)-√(-2 (x+1) y+(x-1)^2+y^2))
× _2F_1[[ 1,2-a ;; a ; ]((x-y+1)+√((x-1)^2+y^2-2 (x+1) y))^2/4 x]+((1-x-y)
+√(-2 (x+1) y+(x-1)^2+y^2))× y^a-2 _2F_1[[ 1,2-a ;; a ; ](√((x-1)^2+y^2-2 (x+1) y)-(x+y-1))^2/4 x y])
Here a can take any value except negative integers and positive integers greater than 2.
We can further simplify the above relation by using the following relation of F_4 <cit.>
F_4(α, β ; β, β ;-x/(1-x)(1-y),-y/(1-x)(1-y)) =(1-x)^α(1-y)^α_2 F_1(α, α-β+1 ; β ; x y),
For our case α=1, β=a. Thus we get
F_4(2-a,1,a,2-a,x,y) =1/2 x((-x+y-1)-√(-2 (x+1) y+(x-1)^2+y^2))
× _2F_1[[ 1,2-a ;; a ; ]((x-y+1)+√((x-1)^2+y^2-2 (x+1) y))^2/4 x]+((1-x-y)
+√(-2 (x+1) y+(x-1)^2+y^2))× y^a-2 _2F_1[[ 1,2-a ;; a ; ](√((x-1)^2+y^2-2 (x+1) y)-(x+y-1))^2/4 x y])
+ y^a-1(1-x-y-√(x^2-2 x (y+1)+(y-1)^2)/2 x y) _2F_1[[ 1,2-a ;; a ; ](√((x+y-1)^2-4 x y)+x+y-1)^2/4 x y]
As a consequence of this we get F_4(1,1;1,1;x,y)
F_4(1,1;1,1;x,y)= 1/√((x+y-1)^2-4 x y)
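As a quick low-order consistency check of this identity (an illustration added here, not a result quoted from the literature), both sides expand to the same double series:
F_4(1,1;1,1;x,y)= ∑_m,n=0^∞((m+n)!/(m! n!))^2 x^m y^n = 1+x+y+x^2+4xy+y^2+⋯
1/√((x+y-1)^2-4xy) = (1-2(x+y)+(x-y)^2)^-1/2 = 1+x+y+x^2+4xy+y^2+⋯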
We can also consider the result for the bubble integral with general masses and unit power of propagators, for which the result is given as follows <cit.>
I_2= (m_2^2)^a-2Γ(2-a) ×∑_j=0^∞∑_l=0^∞1/j ! l !(x)^j(1-y)^l ×(2-a)_j+l(1)_j+l(1)_j/(2)_2 j+l
With the help of the reduction procedure, the result for bubble integral is given by Eq.(<ref>). This equality of Eq.(<ref>) and (<ref>) thus provides a reduction formula for the hypergeometric series in Eq.(<ref>), which can be written as follows
∑_j=0^∞∑_l=0^∞(2-a)_j+l(1)_j+l(1)_j/(2)_2 j+l(x)^j/j!(1-y)^l/l! =1/2 x((-x+y-1)-√(-2 (x+1) y+(x-1)^2+y^2))
× _2F_1[[ 1,2-a ;; a ; ]((x-y+1)+√((x-1)^2+y^2-2 (x+1) y))^2/4 x]+((1-x-y)+
+√(-2 (x+1) y+(x-1)^2+y^2))× y^a-2 _2F_1[[ 1,2-a ;; a ; ](√((x-1)^2+y^2-2 (x+1) y)-(x+y-1))^2/4 x y])
We can also obtain new hypergeometric relations by deriving other functional equations for the Feynman integrals (see Appendix <ref>). Using Eq.(<ref>) and (<ref>) we get
y^{a-1} F_4(a,1;a,a;x,y) - F_4(2-a,1;a,2-a;x,y) = (a-1)/(2x) ( y^{a-2} (x+y-1) _2F_1(1,2-a;3/2; (x+y-1)^2/(4xy)) + (x-y+1) _2F_1(1,2-a;3/2; (x-y+1)^2/(4x)) )
We further obtain
F_4(2-a,1;a,2-a;x,y) = (a-1)/(2x) ( y^{a-2} (x+y-1) _2F_1(1,2-a;3/2; (x+y-1)^2/(4xy)) + (x-y+1) _2F_1(1,2-a;3/2; (x-y+1)^2/(4x)) )
+ y^{a-1} ((1-x-y-√(x^2-2x(y+1)+(y-1)^2))/(2xy)) _2F_1(1,2-a;a; (√((x+y-1)^2-4xy)+x+y-1)^2/(4xy))
We provide a list of various reduction formulae that can be derived using Eq.(<ref>) and (<ref>) in the appendix <ref>.
The right-hand sides of Eq.(<ref>) and (<ref>) can further be equated to give a relation between sums of hypergeometric _2F_1 functions. An interesting consequence of this relation is obtained with a= 3/2
-tanh ^-1(x-y+1/2 √(x))+ ^-1(2 √(x)√(y)/x+y-1)/2 = ^-1(2 √(x)√(y)/√(x^2-2 x (y+1)+(y-1)^2)-x-y+1)-
tanh ^-1(√(-2 (x+1) y+(x-1)^2+y^2)+x-y+1/2 √(x))
As before, such a reduction also helps if the analytic continuation has to be performed to reach a certain kinematical region. We can find the analytic continuations for the series in Eq.(<ref>) using automated tools <cit.>, but this still does not guarantee that the full parameter space has been covered. In contrast, the complete list of analytic continuations for the hypergeometric _2F_1 <cit.> is available and well implemented in existing software. The complexity of the analytic continuation procedure also increases with the number of variables of the hypergeometric function, due to the increased difficulty of finding the ROC of the resulting series.
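As an illustration, an arbitrary-precision library such as mpmath evaluates _2F_1 with its analytic continuation built in, so arguments outside the unit disc pose no problem; the snippet below is only a sketch (the parameter value a=3/2 is an arbitrary choice, and for real z>1 the returned value corresponds to one side of the branch cut).

import mpmath as mp

a = mp.mpf('1.5')                    # illustrative parameter choice
for z in (0.5, 3.7, mp.mpc(2, 5)):   # the last two arguments lie outside the unit disc
    # mp.hyp2f1 performs the analytic continuation internally
    print(z, mp.hyp2f1(1, 2 - a, a, z))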
We notice that the procedure is sufficiently general, and one can obtain a large number of reduction formulae using it by performing the following steps:
* We take the Eq.(<ref>) or any other general result for N-point integral from <cit.>.
* For an N-point function we have a product of N propagators. We take any two propagators and find the algebraic relation between them (a much-simplified special case is sketched in code after this list). This results in a sum of two terms, for which the number of variables in the result, as in Eq.(<ref>), is reduced by one. This gives a reduction formula between, say, an L-variable hypergeometric function (where L is a function of N) and an (L-1)-variable hypergeometric function.
* We apply the previous step again, resulting in a relation between the L-variable and the (L-2)-variable hypergeometric functions. The previous step likewise gives a relation between the (L-1)-variable and the (L-2)-variable hypergeometric functions.
* We apply the procedure recursively until we have an algebraic relation expressing the product of N massive propagators as a sum of 2^{N-1} terms, such that each term contains a product of N propagators with just one massive propagator.
* The final result of the procedure is a collection of relations between the L-, (L-1)-, ..., 1-variable hypergeometric functions.
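For orientation only, the following sketch shows the degenerate special case referred to in the second step, in which the two chosen propagators carry the same loop momentum, so that plain partial fractioning already yields the algebraic relation; the general relations of <cit.>, which involve shifted momenta and auxiliary masses, are not reproduced here.

import sympy as sp

K, m1, m2 = sp.symbols('K m1 m2')          # K stands for the squared loop momentum k^2
product = 1 / ((K - m1**2) * (K - m2**2))  # two massive propagators with equal momenta

# Partial fractioning in K rewrites the product as a sum of terms,
# each containing only one massive propagator.
relation = sp.apart(product, K)
print(relation)
print(sp.simplify(relation - product))     # prints 0, confirming the identity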
§ SUMMARY AND DISCUSSION
We have presented an automated package for finding the algebraic relations for products of propagators. These relations were used by Tarasov <cit.> to derive functional relations for Feynman integrals. The results obtained using the package are sufficiently general and can be used further to obtain functional relations for the Feynman integrals by appropriately choosing the arbitrary parameters. In the present work we focused on automating the method to derive the algebraic relations for the propagators by suitably implementing a recursive algorithm (a slight modification of Tarasov's algorithm <cit.>). Furthermore, using a loop-by-loop approach we provided a systematic way to use these relations for higher-loop integrals too. These relations involve free parameters which can be chosen suitably. Using various examples up to three loops, we showed how, with a simple choice of these free parameters, we can reduce integrals with large numbers of massive propagators to integrals with fewer massive propagators <cit.>, which can thus be computed easily. For the one-loop case, we obtained results for up to the 6-point integral with the procedure and wrote them as a sum of 2^{N-1} integrals (for the N-point integral) with one massive propagator. We also showed how the procedure can be used for higher-loop integrals, where a loop-by-loop strategy has been applied for finding the relations.
Since the general results for the one-loop N-point integrals are explicitly known for various cases in terms of multi-variable hypergeometric functions, we showed how the present work can be used to obtain a large list of reduction formulae for these functions. As a demonstrative example, we used the one-loop bubble integral, where the reduction of the Appell F_4 to the hypergeometric _2F_1 can be obtained. We also derived another reduction formula for a 2-variable hypergeometric series, Eq.(<ref>), in terms of the hypergeometric _2F_1. The relations thus obtained can be treated as general reduction formulae for these functions without reference to the Feynman integral they were derived from. These relations hence provide a way to derive non-trivial reduction formulae for multi-variable hypergeometric functions from physical problems. They are also helpful, especially in situations where the analytic continuation of multi-variable hypergeometric functions has to be obtained to evaluate them outside their ROC, which is not easy to derive otherwise.
The present procedure of finding algebraic relations for the product of propagators can be used only if the propagators depend on just one loop momentum. For this reason, the procedure cannot be applied with full generality to multi-loop integrals, and a loop-by-loop approach has to be adopted. Hence the procedure is not helpful for integrals such as the sunset integral, or in cases where for each loop momentum k_i there is just one propagator. To apply such a procedure to sunset-like integrals, a generalization of the procedure to the multi-variable case, where the propagators can depend on more than one loop momentum, has to be developed.
As we have seen, the algebraic relations obtained reduce the complexity of the Feynman integral. Specifically, for the simple case of the one-loop bubble (in Section <ref>), we saw that the result for the general bubble integral, which was expressed in terms of the two-variable hypergeometric Appell F_4 function, was reduced to _2F_1, a single-variable hypergeometric function. It would be worth studying such reductions in complexity for other non-trivial cases of Feynman integrals which result in multi-variable hypergeometric functions of even more variables. Since obtaining analytic expressions might not be feasible for such cases, a detailed numerical study would be an important application of these algebraic relations, after the proper functional relations have been obtained by a proper choice of the arbitrary variables.
§ FUNCTIONAL REDUCTION WITH M_I≠ 0
In this appendix, we point out other possibilities for the choice of the arbitrary parameters M_i <cit.>. This choice leads to functional reduction equations different from those already presented. It also gives rise to different reduction formulae, as in Section <ref>.
Consider the bubble integral of Section <ref>. This time we choose a different non-zero value of M_1. Since Feynman integrals are relatively easier to compute with equal masses, a suitable choice is M_1= m_1. With this choice we get, similar to Eq.(<ref>), the following relation
I_2(p^2,m_1,m_2)= x_1 I_2((P_1-p)^2,m_1,m_2)+ x_2 I_2(P_1^2,m_1,m_1)
with
x_1= m_1^2-m_2^2+p^2/p^2, x_2= m_2^2-m_1^2/p^2, P_1= -m_1^2-m_2^2+p^2/p^2 p
We see that on the right-hand side of Eq.(<ref>) we have a partial simplification. We can then exploit the symmetry of the I_2 integral under the exchange m_1↔ m_2. We perform the exchange m_1↔ m_2 in Eq.(<ref>) and add the resulting equation to it. Simplifying, we get
I_2(p^2,m_1,m_2)= p^2+m_1^2-m_2^2/2p^2 I_2((p^2+m_1^2-m_2^2/p)^2,m_1,m_1) +
p^2-m_1^2+m_2^2/2p^2 I_2(( p^2-m_1^2+m_2^2/p)^2,m_2,m_2)
The value of I_2(p^2,m,m) is <cit.>
I_2(p^2,m,m) = m^{d-4} Γ(2-d/2) _2F_1(1, 2-d/2; 3/2; p^2/(4m^2))
Substituting this in Eq.(<ref>) we get another functional equation for the bubble Feynman integral.
§ REDUCTION FORMULAE
F_4(1,1;1,1;x,y)=1/√((x+y-1)^2-4 x y)
F_4(3/2,1;1/2,3/2;x,y)= x-y+1/x^2-2 x (y+1)+(y-1)^2
F_4(5/2,1;-1/2,5/2;x,y)= (x-y+1) (x^2-2 x (y+5)+(y-1)^2)/(x^2-2 x (y+1)+(y-1)^2)^2
F_4(1/2,1;3/2,1/2;x,y)= -1/√(x)(-tanh ^-1(√(x^2-2 x (y+1)+(y-1)^2)+x-y+1/2 √(x))+
^-1(2 √(x)√(y)/√(x^2-2 x (y+1)+(y-1)^2)-x-y+1)+ ^-1(2 √(x)√(y)/√((x+y-1)^2-4 x y)+x+y-1))
F_4(1/2,1,3/2;1/2;x,y)=1/2√(x)(tanh ^-1(x-y+1/2 √(x))-2 ^-1(2 √(x)√(y)/√((x+y-1)^2-4 x y)+x+y-1)
+ ^-1(2 √(x)√(y)/x+y-1) )
F_4(0,1;2,0;x,y)= √(-2 (x+1) y+(x-1)^2+y^2)+x-y+1/2 x
F_4(2-a,1,a,2-a,1,1)= 1/2(1-i √(3)) _2F_1(1,2-a;a;-√(-1))
_2F_1(1,2-a;3/2; (x-y+1)^2/(4x)) = ((√(-2(x+1)y+(x-1)^2+y^2)+(y-x-1))/((1-a)(x-y+1))) _2F_1(1,2-a;a; ((x-y+1)+√((x-1)^2+y^2-2(x+1)y))^2/(4x))
_2F_1(1,2-a;3/2; (x+y-1)^2/(4xy)) = ((√(-2(x+1)y+(x-1)^2+y^2)-(x+y-1))/((1-a)(x+y-1))) _2F_1(1,2-a;a; (√((x-1)^2+y^2-2(x+1)y)-(x+y-1))^2/(4xy))
We can use the following relation as given in <cit.> and obtain formulae for F_1
F_4(α, β; γ, β; -x/((1-x)(1-y)), -y/((1-x)(1-y))) = (1-x)^α (1-y)^α F_1(α, γ-β, α-γ+1; γ; x, xy)
We can further exploit the relation between F_1 and F_2 to derive reduction formulae for F_2. F_2 can further be related to other hypergeometric functions including F_3, G_1,G_2,G_3,H_2,H_3,H_4,H_6 and H_7. So we can derive reduction formulae for all of these two variable hypergeometric functions.
In a similar manner using the following relation <cit.> we can obtain formulae for H_3
F_4(α, β; γ, β; x, y) = (1-x-y)^{-α} H_3(α, γ-β; γ; xy/(x+y-1)^2, x/(x+y-1))
|
http://arxiv.org/abs/2307.04495v1 | 20230710113346 | Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML | [
"Simon Raedler",
"Juergen Mangler",
"Stefanie Rinderle-Ma"
] | cs.SE | [
"cs.SE",
"cs.AI",
"H.1.0; I.2.4"
] |
Model-Driven Engineering for Machine Learning]Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML
[1,2]Simon [email protected]
1]Juergen [email protected]
1]Stefanie [email protected]
[1]TUM School of Computation, Information and Technology; Department of Computer Science, Technical University of Munich, Boltzmannstraße 3, Garching b. München, 85748, Germany
[2]Business Informatics Group, Technical University of Vienna, Favoritenstraße 9-11/194-3, Vienna, 1040, Austria
Motivation: Systems Engineering is a transdisciplinary and integrative approach that enables the design, integration, and management of complex systems across the systems engineering life cycle. In order to use data generated by cyber-physical systems (CPS), systems engineers cooperate with data scientists to develop customized mechanisms for data extraction, data preparation, and/or data transformation. While the interfaces of CPS may be generic, data generated for custom applications must be transformed and merged in specific ways so that system engineers or dedicated applications can interpret it and gain additional insights. To foster efficient cooperation between systems engineers and data scientists, the systems engineers have to provide a fine-grained specification that describes (a) all parts of the CPS, (b) how the CPS might interact, (c) what data is exchanged between them, (d) how the data interrelates, and (e) what the requirements and goals of the data extraction are. A data scientist can then iteratively (including further refinements of the specification) prepare the necessary custom machine-learning models and components.
Methods: This work introduces a method supporting the collaborative definition of machine learning tasks by leveraging model-based engineering with the systems modeling language SysML. The method supports the identification and integration of various data sources, the required definition of semantic connections between data attributes, and the definition of data processing steps within the machine learning support.
Results: By consolidating the knowledge of domain and machine learning experts, a powerful tool to describe machine learning tasks by formalizing knowledge in the systems modeling language SysML is introduced. The method is evaluated based on two use cases, i.e., a smart weather system that predicts weather forecasts based on sensor data, and a waste-prevention case for 3D printer filament that cancels the printing if the intended result cannot be achieved (image processing). Further, a user study is conducted to gather insights from potential users regarding the perceived workload and usability of the elaborated method.
Conclusion: Integrating machine-learning-specific properties into systems engineering techniques allows non-data scientists to understand the formalized knowledge, to define specific aspects of a machine learning problem, and to document knowledge about the data; it further supports data scientists in using the formalized knowledge as input for an implementation based on (semi-)automatic code generation. In this respect, this work contributes by consolidating knowledge from various domains and therefore fosters the integration of machine learning in industry by involving several stakeholders.
=====
Acknowledgments
This project has been partially supported and funded by the Austrian Research Promotion Agency (FFG) via the Austrian Competence Center for Digital Production (CDP) under the contract number 881843.
§ INTRODUCTION
Leveraging data to allow experts to make informed decisions during the lifecycle of a product has recently been defined as data-driven engineering <cit.>.
The knowledge required for implementing data-driven engineering can be characterized in a two-fold way <cit.>, i.e., by i) profound machine learning skills with respect to processing and analytics of data and implementation of algorithms, and ii) by domain knowledge regarding the product of interest, relevant product lifecycle data, and related business processes with the entangled IT infrastructures to identify data provenance and information flows.
Regarding i) profound machine learning skills, a recent industrial survey revealed that companies have too few machine learning experts and too little knowledge to implement solutions themselves. Further, few experts are available on the market <cit.>.
To still connect the domain and machine learning knowledge, various methods have been recently proposed in literature <cit.>.
However, these methods lack support for defining machine learning tasks and do not sufficiently represent the perspective of engineers.
Additionally, the methods mainly integrate engineering methods into data science methodologies supporting data scientists, rather than allowing engineers to apply the methods themselves to elaborate the machine learning support.
Therefore, this work aims to integrate machine learning knowledge into systems engineering to support engineers in the definition of machine learning tasks, to consequently enable data-driven engineering and, ultimately, to support product development by defining the prerequisites for the machine learning integration. Particularly, means of Model-Based Engineering (MBE) are adapted to define tasks for data-driven engineering by leveraging data from the product lifecycle of a system.
The method of this work builds upon the systems modeling language SysML <cit.>, a general-purpose modeling language allowing to formalize a system from various viewpoints and disciplines. The interdisciplinary formalization of systems knowledge refers to the term Model-Based Systems Engineering (MBSE) <cit.>. Additionally, the CRISP-DM <cit.> methodology is used as a basis for the organization of the machine learning task definition. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is a methodology consisting of common approaches used by data mining professionals to work out a data mining project from inception (requirements and business understanding) through processing (data understanding, data preparation and modeling) to evaluation and deployment.
Ultimately, the method proposed in this work aims to formalize machine learning tasks during product development and to use the formalized knowledge to derive parts of the machine learning implementation and to guide the implementation. The method is evaluated using a case study representing a weather station with multiple subsystems to predict weather forecasts, and a second study on preventing the waste of 3D printer filament by canceling the printing if the intended result cannot be achieved.
The contribution of this work is manifold:
* The proposal of a SysML metamodel extension to include stereotypes that are used to describe machine learning functions for domain-specific data objects
* A method that fits to the latest research areas of the modeling community and is called MDE4AI <cit.>
* A means of structuring the models based on the CRISP-DM methodology.
* Two case studies using the proposed concepts for modeling machine learning support based on simple input data, followed by a discussion of the strengths and weaknesses of the method.
* A user study showing the workload and usability of the method as rated by experts and computer scientists.
This work lays a foundation for allowing non-programmers to define machine learning tasks by formalizing knowledge from the problem domain into a high-level model and to communicate formalized knowledge.
Additionally, semantic connection of data from various Product-Lifecycle Management (PLM) <cit.> sources allows to describe the origination and composition of data relations.
With the availability of such models, the goal is to support the automatic decomposition of SysML models and the (semi-)automatic generation of executable machine learning modules.
This work constitutes an extension of our previous work presented in
<cit.> and expands <cit.> in several ways by
* providing more extensive background information to foster understanding.
* extending the presented method with a generic and fine-grained sample of the modeling method.
* applying the method in two case studies from industry.
* conducting a user study on the perceived workload and usability of mechanical engineers and computer scientists
* discussing advantages and disadvantages of the method in a more thorough way.
The remainder of this paper is structured as follows: Section <ref> presents the background regarding MBSE, data science methodologies and related work of data-driven engineering. In Section <ref>, the elaborated method is introduced in detail and evaluated based on two case studies in Section <ref>. Further, a user study is presented in Section <ref> that evaluates the perceived workload and the usability of the method with mechanical engineers and computer scientists.
Based on the findings of the evaluation and the user study, an extensive discussion on advantages and disadvantages is presented in Section <ref>. Finally, the study is summarized in conclusion with future remarks in Section <ref>.
§ BACKGROUND
First, the concepts of model-based systems engineering (MBSE) and the systems modeling language SysML are explained. Second, machine learning and the CRISP-DM <cit.> methodology are introduced, acting as a basis for the method presented in Section <ref>. Next, related methods are depicted with special focus on data-driven engineering. Finally, Section <ref> presents a summary of the background.
§.§ Model-Based Systems Engineering and SysML
Systems engineering, particularly MBSE, aims to integrate various engineering disciplines in product development to establish a single source of truth by formalizing system requirements, behavior, structure and parametric relations of a system. Conventional systems engineering focuses on storing artifacts in several (text) documents that are maintained in case of changes. In a model-based method, the relevant information to describe an abstract system is stored in a model <cit.>. The literature concerning graphical MBSE methods promises to increase design performance while supporting the communication of relevant stakeholders of a system <cit.>.
MBSE is a term explicitly considering aspects of a system. Nevertheless, other terms can be considered interchangeable depending on the level of automation and the focus of the application[See <https://modelling-languages.com/clarifying-concepts-mbe-vs-mde-vs-mdd-vs-mda/> for a discussion.]. Independent of the level of automation and the focus of the modeling language, a metamodel defines the modeling concept, relations and all possible instances of a specific set of models. Models are instances of metamodels describing a specific system. The model characteristics must match all aspects of the associated metamodel. However, extensions such as additional attributes can be added directly on a model without changing the metamodel. If a metamodel does not represent an aspect, an extension for a specific group of use cases can be defined using so-called stereotypes <cit.>. A stereotype is a means of modeling to extend metaclasses by defining additional semantics for a specific class concept. A metaclass is a class describing a set of classes, e.g. the metaclass block is a general purpose structuring mechanism that describes a system, subsystem, logical or physical component without the software-specific details implicitly given in UML structured classes <cit.>.
The use of stereotypes in modeling methods have been proven to support the understanding and standardization of a model <cit.>.
In MBSE, the Systems Modeling Language SysML is the most prominent modeling language <cit.>. SysML is based on the UML standard with a special focus on the formalization of systems instead of modeling classes and objects for software engineering. The language supports the formalization of structural, behavioral and functional specifications <cit.>.
Structural diagrams describe the composition of systems and subsystems with their attributes and relations <cit.>. Figure <ref> depicts core elements of a block definition diagram modeled in the Eclipse-based open-source software Papyrus[<https://www.eclipse.org/papyrus/index.php>]. On top of <ref>, a Block with the name Human is defined, consisting of one attribute of type String with the attribute name Name and the visibility public indicated by the plus (+). A block can also have operations, ports etc. which are not relevant for this work and, therefore not introduced here. Underneath the Human-Block, two inheriting elements are defined by the white arrows between the blocks. The attribute Name is inherited from the parent block marked by the tailing dash. One child has an additional property Age, which only affects the block (as long as no deeper inheritance is available). The second block consists of a subsystem, indicated by the black diamond being a part association (a.k.a. composition). A part association determines that a block describes a whole element and a part of the whole element is additionally described in another element[See <https://sysmlforum.com/sysml-faq/what-are-diff-among-part-shared-referenced-associations.html> for a discussion]. The 1 and the 0..2 indicate the multiplicity, allowing to define the cardinality, e.g. number of elements. In this sample, it means one element Child2 can have zero, one or two legs. The white diamond between Leg and Shoe indicates a shared association, which is a weaker form of the part association. It refers to a relationship where the part element is still valid if the whole element is deleted, e.g. if the element Leg is not valid anymore, the Shoe is still valid. The multiplicity * indicates that one can have any number of shoes.
Since various software represent slightly different parts, the description of the block definition diagram can vary.
In SysML, the execution of single activities can be modeled using activity diagrams, while their ordering is captured in state diagrams. A state diagram has an entry point and an exit point. The arrow between the states indicates a transition and describes that one state has been completed and another is active. Behind a state, the execution of one or multiple activities can be triggered, whereas an activity is a sequential execution of single actions <cit.>, see <ref>.
§.§ Data Science and Methodologies
Data Science and Business Intelligence refer to the extraction of information and knowledge from data through analysis to assist people with various types of insights, such as analysis or prediction, among many others <cit.>.
The digging of such information to derive knowledge is called data mining (DM)<cit.>.
Machine learning (ML) is one subfield of DM, which automatically allows computer programs to improve through experience <cit.>.
Machine learning algorithms aim to solve a (specific) problem to eliminate the need for being explicitly programmed <cit.>.
To support the implementation of machine learning applications, methodologies have been proposed in a general manner <cit.>. Additionally, extensions of such methods with particular support for data science in the engineering domain are introduced <cit.>.
In the literature, the methods of CRISP-DM <cit.> and KDD <cit.> are assessed in a comparative study <cit.>. According to <cit.>, CRISP-DM is a kind of implementation of the KDD process. In the following, CRISP-DM is described and used as the basis for the structure of the proposed method described in Section <ref>.
In CRISP-DM, six core steps are defined supporting the implementation of a DM application:
* Business Understanding: Project objectives and requirements are captured, and an understanding at the business level is achieved. Based thereon, a DM problem is defined and a rough roadmap is elaborated.
* Data Understanding: Data is collected to understand the situation from a data point of view.
* Data Preparation: The construction of the final dataset for the learning algorithm based on raw data and data transformations.
* Modeling: One or several algorithms are selected and applied to the dataset elaborated in the previous step. In this step, so-called hyperparameter tuning is applied to vary parameter values and achieve the most valuable result.
* Evaluation: The result of the algorithm is evaluated against metrics and the objectives from the first step.
* Deployment: The achievements are presented in a way that a customer or an implementation team can use it for further integration.
§.§ Related Work
In the literature, various methods supporting the formalization of data-driven engineering or machine learning using modeling languages are given.
The method of <cit.> is based on the Kevoree Modeling Framework KMF <cit.>, which is similar to the Eclipse Modeling Framework (EMF) that is the basis for the open source modeling framework Papyrus[<https://www.eclipse.org/papyrus/>]. <cit.> proposes to model the domain knowledge and small learning units in a single domain modeling method since both are highly entangled. The method is based on a textual modeling syntax and describes what should be learned, how and from which attributes and relations. Additionally, templates are given to render code based on the model. However, the open-source framework seems to be out of maintenance since the repository is not updated since 2017[<https://github.com/dukeboard/kevoree-modeling-framework>].
An actively maintained framework family with means to model machine learning is shown in <cit.>. The method is based on the MontiAnna framework <cit.> and focuses on modeling artificial neural networks. The MontiAnna framework is part of the MontiCore Workbench Family <cit.>.
Similar to <cit.>, textual modeling is used to formalize the learning units and related input and output. The formalization is used as input for template-based code generation. However, the method does not reflect domain-specific (business) knowledge from an engineering perspective.
In <cit.>, focus is put on the integration of executable machine learning units modeled on a cloud platform, enabling the fast deployment of distributed systems. However, as of the current development state, the method is rigid regarding extensibility and advanced data preparation. Additionally, the integration of domain knowledge is hardly given, and a focus on the formalization of data-driven algorithms is not present.
The integration of ML in CPS modeling is supported by the textual modeling framework ThingML+<cit.>. The method extends the ThingML <cit.> modeling method, intended to support the development of IoT devices. As with the other methods, focus is put on machine learning modeling without considering domain knowledge. The method allows deriving executable code based on model transformation using xtext.
§.§ Summary
MBSE has been proven beneficial in increasing the design performance of systems <cit.>. According to <cit.>, the number of components and functions will increase in the future, leading to more complex systems that require advanced support in development and analysis using means of data science.
Development support for data science is given in methodologies such as CRISP-DM. However, guidance specific to the engineering domain is limited <cit.>, and an integration into a model-based method is, to the authors' knowledge, unavailable.
In the literature, various methods introduce specific metamodels and languages to describe a data science task and eventually enable deriving executable code. However, these methods are not based on an MBSE-compatible modeling language such as SysML; instead, they introduce isolated domain-specific modeling environments.
Therefore, little support for interdisciplinary communication is given, and the methods are more applicable to computer scientists than to domain outsiders such as mechanical engineers with little knowledge of programming. Moreover, the domain-specific modeling methods are not aligned with the CRISP-DM methodology, leading to little support from a methodological perspective. Last but not least, the proposed methods use model transformation to reduce the implementation effort, but are seldom built in a generic way that allows extending the modeling or the derivation of code without extensive changes in the generation. Therefore, maintenance and applicability in practice are rather limited.
§ METHOD
This section describes a method to formalize machine learning tasks based on SysML and the application of an extended metamodel.
In the following, first, the extension of the SysML metamodel using stereotypes is described.
Special attention is given to the package structure for organizing the stereotypes, extensibility for different purposes, and generalization so that stereotypes can be used for multiple use cases.
Second, a package structure aligned with the CRISP-DM methodology is presented, enabling to guide the application of the newly defined stereotypes.
Next, a syntax and semantics are introduced, allowing the formalized machine learning model, enriched with the introduced stereotypes, to be interpreted.
Finally, SysML state diagrams are used to define the tasks' execution order.
§.§ Metamodel Extension using Stereotypes
In the following subsections, six packages are introduced, which allow to group stereotypes that semantically describe required functionalities.
Subsequently, an exemplary stereotype hierarchy for defining higher-order functions for domain-specific data transformation purposes is described in detail.
§.§.§ Stereotype Package Structure
SysML packages are used to group and organize a model and to reduce the complexity of system parts.
Similarly, it can be applied for the organization of stereotypes, as depicted in Figure <ref>.
The organization of the stereotypes is as follows: in Common, general stereotypes are defined that are used as a basis in other packages; e.g., a stereotype ML is defined in Common, and each stereotype related to machine learning inherits from it to indicate that it is a machine learning stereotype. Additionally, stereotypes can be defined that categorize other stereotypes; e.g., an abstract Pre-Processing stereotype allows identifying that all inheriting stereotypes are introduced for the data preparation step of the CRISP-DM methodology.
In Attributes, stereotypes for a more detailed definition of attributes are defined. These attribute stereotypes cannot be applied to blocks, only to attributes of a block. Consequently, the stereotypes extend primitive data types such as Integer or Float. The purpose of the extension is to add characteristics describing the data, e.g. valid ranges of a value, the format of a datetime property, or a regular expression to collect or describe a part of a text value.
The package DataStorage defines, from a general perspective, the data interfaces required for loading and processing data from various data sources, e.g. SQL servers, Application Programming Interfaces (APIs) or file formats such as CSV.
The purpose of these stereotypes is to support the data understanding step of the CRISP-DM methodology. Additionally, they bridge the gap between business and data understanding due to the explicit formats. Further details are given in Section <ref>.
In the Algorithm package, various machine learning algorithms are defined and grouped with respect to algorithm types, e.g. regression or clustering algorithms. Particularly, the focus is put on key characteristics of an algorithm implementation, such as mandatory hyper-parameter or the stereotype description. Optional algorithm parameters are not described in the stereotype, but can be added during the modeling, as later illustrated in Figure <ref>.
The PreProcessing package (a.k.a. data preparation) is the most complex and extensive package due to the number of functionalities required. Additionally, a survey revealed that computer scientists spend the most effort on preparing and cleaning data <cit.>. Within this package, functions are defined that transform the data so that a cleaned dataset applicable to the machine learning algorithm is obtained.
Finally, the AlgorithmWorkflow package consists of stereotypes for the states of the state diagram, allowing the implementation order of the machine learning tasks to be defined. Typically in SysML, states are connected to activities, which are a sequence of execution steps. However, in practice, we found that it is very time-consuming to prepare activities first. Additionally, a function abstracted as a single block can be considered as a set of activities. Consequently, state diagrams are used instead of activity diagrams to reduce the implementation effort and complexity.
§.§.§ Stereotypes Hierarchy
As mentioned in Section <ref>, each package represents a specific hierarchy of stereotypes, allowing to describe various aspects of machine learning subtasks.
An example definition of stereotypes related to data pre-processing is depicted in Figure <ref>. As described in Section <ref>, stereotypes can be hierarchically composed to describe specific attributes only once for a set of stereotypes.
On top, the ML stereotype defined in the Common package is depicted, indicating that all inheriting stereotypes are related to machine learning. Formalizing a machine learning task is intended to be iteratively, which is why some stereotypes are abstract, illustrated by italic letters.
If a stereotype is abstract, it means that the stereotype requires further refinement or that a child stereotype with additional information is required; e.g., DataTransformation cannot be used without further details, as it could be an arbitrary transformation of data.
The purpose of abstraction is to support the early definition of tasks in the product development without details already being known, e.g., the final file-format used to store the data.
From top to bottom in Figure <ref>, the level of detail increases and the task is chosen in a more fine-grained manner. Consequently, leaves are the most fine-grained representation. The inheritance additionally allows grouping functions of a specific kind, e.g., functions regarding outlier detection.
Due to the grouping of functions, the composition of stereotypes strongly depends on the preferences of the implementing expert and the purpose of the composition in terms of inheritance of attributes.
Note that attributes defined in a parent stereotype are also available in child or grandchild stereotypes, respectively. Therefore, each level should only represent mandatory attributes. This especially applies to algorithms with many hyper-parameters, e.g. logistic regression with more than 20 parameters and attributes[<https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>]. In case a parameter is not defined in the stereotype, it can still be added during the modeling and application of the stereotypes. A sample can be found in Section <ref>.
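To illustrate the split between mandatory stereotype attributes and optional ones added at modeling time, the following sketch shows how a code generator might map both onto a scikit-learn estimator (the concrete parameter values are assumptions chosen for illustration).

from sklearn.linear_model import LogisticRegression

# Values a generator could read from the model: the mandatory attribute comes from the
# stereotype definition, the optional ones only if the modeler added them to the block.
mandatory = {"penalty": "l2"}
optional = {"C": 0.5, "max_iter": 200}

clf = LogisticRegression(**mandatory, **optional)  # the remaining parameters keep their defaults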
Additionally, it is possible to add a set of values using Enumerations for a single attribute, e.g. MissingValueFunction highlighted in green. In this respect, modeling is more precise and guided by a fixed set of valid options.
Similarly, specific stereotypes can be used as an attribute, which means that only blocks or attributes that apply the specific stereotype can be assigned, e.g. Method_Attribute_Input indicating that only properties with a stereotype defined in the package Attributes can be applied because each attribute stereotype inherit from that stereotype.
Finally, the application of the keyword BlackBox can be used if a function shall be hidden due to security reasons or the implementation is unknown, e.g. BlackBox_Outliers on the right side of Figure <ref>.
§.§ Package structure guiding the implementation.
CRISP-DM as described in Section <ref> consists of six steps, each describing a specific aspect required for the development of a machine learning project. Figure <ref> illustrates the package structure aligned with the CRISP-DM methodology.
Business Understanding consists of block definition diagrams describing the system under study with the composition from a system configuration point of view. In this respect, the VAMOS method (Variant Modeling with SysML, <cit.>) is integrated to describe a specific system configuration. The integration of the VAMOS method focuses on the data interfaces and attributes of a particular configuration of a system, as different configurations of a system might lead to other data output. In this method, the VAMOS method is used to focus on data interfaces. Therefore, other systems engineering knowledge is presented in other diagrams, which is out of the scope of this work. Still, the knowledge modeled in other diagrams is connected to the instance of a block used in the VAMOS method and therefore, multiple disciplines are enabled to work on the same model.
The second step, Data Understanding, details the Business Understanding with the definition of the delivered data on an attribute and data-format level. Particularly, the data type and the name of the delivered data attribute are described using block definition diagrams. Additionally, attribute stereotypes are used to describe the data in detail, as described in Section <ref>. With the application of stereotypes on the block level, the type of data interface is defined, e.g. CSV files or SQL servers. As a result of the formalization of the interfaces in this package, the information exchange between systems engineering and data engineering can be considered complete.
Based on the Data Understanding, the Pre-Processing is applied to transform and prepare the data in a final dataset that can be used in the Modeling. In the Pre-Processing, the most effort is required due to the possible number of required data transformations to create a dataset usable for machine learning. The result of the Pre-Processing is a final dataset, considered to be ready for the machine learning algorithm.
Within the Modeling, algorithms are applied to the final dataset. Additionally, train-test-splitting and other required functions on the machine learning algorithm are applied.
In the Evaluation package, various metrics are used to asses and prove the validity of the algorithm result of the Modeling package.
Finally, the Workflow package describes the execution order of the formalizations in the previous packages using state diagrams. For each state, a custom stereotype is applied that allows connecting a block carrying a stereotype inherited from ML. Assigning blocks to states removes the necessity to define activities, making the method less heavy to apply and reducing the time needed to formalize the machine learning.
Typically in CRISP-DM, the very last step is the deployment. However, the deployment is considered out of scope in this work and therefore the method ends with the workflow.
§.§ Syntax and Semantics
For the purpose of implementing ML functionalities, the utilization of the functional programming paradigm is intuitive <cit.>. It utilizes higher-order functions, invoked on (data-)objects, which return objects. This allows for step-by-step decomposition, filtering and transformation of data, without side effects (changes to variables), in comparison to the imperative programming paradigm.
This sequence of function invocations aligns well with how UML and other modeling languages implement abstraction levels to reflect a relevant selection of properties and to focus on the aspects of interest <cit.>. Functions are black boxes with processing capability that are associated with the (data-)artifacts upon which they can be called, and with the data artifacts they produce as output. The abstraction is realized by describing functions or sets of functions with a single stereotype, and instances with blocks.
A class in UML is defined among others by attributes, stereotypes, operations (methods), constraints and relationships to other classes. In SysML, a block describes a system or subsystem with a definition similar to a class in UML. A machine learning task and its respective subtasks can be seen as a system with subsystems. Therefore, each subtask is modeled using blocks, aligned with the syntax described in section <ref>. Particularly, only input values represented as attributes of a block and the relation to other blocks are modeled. The operations (methods) are defined as stereotypes with abstracted implementations. Attributes defined on the stereotype are mandatory input values for the definition of a machine learning subtask. The attributes defined on a block itself are optional for documentation or to extend the stereotype with fine-grained details, e.g. the utc attribute in the Format_Date2 block in Figure <ref>. The output of a subtask (block) is implicitly defined in the implementation of the code snippet related to a stereotype and not explicitly depicted in the model. The output of a block can be used as input for other blocks, e.g. the CSV_1 block as input for the Format_Date block. Figure <ref> depicts a few samples of the aforementioned syntax and semantics. On the top right, a date conversion subtask is modeled as Format_Date. The date conversion stereotype has a mandatory attribute to define the format of the output of the conversion. The input for the date conversion is the block CSV_1, connected using a part association. In this sample, the date attribute is the only matching input value due to the stereotype Datetime. However, if the input is ambiguous, because the datetime is stored for instance as an integer or multiple attributes of the connected block are in the correct input format, it is necessary to add additional attributes to the date conversion to select the particular input, e.g. a new attribute whose value is the particular input attribute from the connected block.
The block Format_Date2 inherits from Format_Date. Therefore, the input and the attributes are the same except for manually overwritten values, e.g. changes to the output datetime format or the added additional attribute utc. Another sample in Figure <ref> shows the integration of multiple inputs. The Merge_DF block consists of two input blocks, and the attributes on which the merging function shall be applied are defined using an attribute that consists of two values (MergeOn). The MergeOn attribute is mandatory and therefore defined on the stereotype. Although the implicit execution order of the subtasks is defined by the associations and the necessity to compute inputs first, the execution order might be ambiguous, e.g. whether to execute Format_Date or Merge_DF first. As described in section <ref>, structural diagram elements, such as blocks, require integration into behavioral diagrams to allow the definition of an execution order <cit.>. To enable the connection of a block with a state in a state diagram, custom stereotypes are applied. The stereotypes for the states consist of a single mandatory attribute. The mandatory attribute references a block with a stereotype that inherits from the root parent stereotype ML.
§ CASE STUDIES
This section presents two case studies, i.e., a weather system that predicts weather forecasts based on sensor data, and an image similarity check that makes it possible to assess whether the actual print of a 3D model with a 3D printer corresponds to the desired output.
As a result, the printing process can be stopped prematurely, saving filament and time.
§.§ UC1 - Weather Forecast based on Sensor Data
Figure <ref> illustrates the composition of the weather system, which is split into two parts. On the left side, a local station is equipped with various sensors, delivering a CSV file with measurements; on the right side, a weather forecast service additionally delivers a CSV file with weather forecasts over the internet.
From a systems engineering perspective, the weather system is a cyber-physical system and can be configured with various sensors.
Figure <ref> depicts the SysML model of the weather system with a specific configuration aligned with Figure <ref>.
Particularly, Figure <ref> depicts a method aligned with <cit.> that allows variations to be formalized. Additionally, the modeling of the system from a business perspective is the first step of the method. Focus is put on the values of interest, which are the output values of the subsystems, to keep the business understanding as concise as possible. In the middle of the figure, the core weather system configuration is depicted. The surrounding subsystems are sensors or other subsystems, e.g., an API (right side). The attributes of the sensors are the output values of each subsystem, to align with the CRISP-DM business understanding that aims to get a general idea of the system and of where the data originates.
To transform the business understanding into valuable data understanding, connections between the system in the business understanding and the output data formats are established.
Particularly, a realization connection between the CPS and the blocks describing the data format using stereotypes inheriting from ML is modeled.
In the blocks, each attribute has a type representing the actual data type in the data source and a stereotype with a ML attribute describing the representation in the machine learning method, e.g., CSV_2 attribute date_date is of type String and is mapped to the stereotype Datetime that considers aspects such as the datetime format.
Additionally, stereotype attributes are defined such as the Encoding or the Delimiter to describe the composition of the CSV file.
Figure <ref> depicts a set of subtasks applied to the data sources defined in Figure <ref>.
For an explanation of Figure <ref>, please refer to Section <ref>.
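To give an impression of what these formalized subtasks correspond to on the implementation side, a minimal sketch of code that could be generated from the CSV, Format_Date and Merge_DF blocks is shown below (file names, column names, delimiters and the target format are illustrative assumptions, not part of the model).

import pandas as pd

# Blocks carrying a CSV stereotype become data-frame loads; Encoding and Delimiter
# are taken from the stereotype attributes.
csv_1 = pd.read_csv("sensor.csv", delimiter=";", encoding="utf-8")
csv_2 = pd.read_csv("forecast.csv", delimiter=",", encoding="utf-8")

# Format_Date: the attribute carrying the Datetime stereotype is the implicit input;
# the mandatory stereotype attribute fixes the output format.
csv_1["date"] = pd.to_datetime(csv_1["date"]).dt.strftime("%Y-%m-%d %H:%M")

# Format_Date2 inherits from Format_Date and only overrides single values (e.g. utc).
csv_2["date"] = pd.to_datetime(csv_2["date"], utc=True).dt.strftime("%Y-%m-%d %H:%M")

# Merge_DF: the mandatory MergeOn attribute names the join key(s) of both inputs.
merged = csv_1.merge(csv_2, on="date")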
Figure <ref> illustrates the application of a train-test split and the integration of the split data into two different regression algorithms, which are specified in a mandatory attribute. As per the definition of the stereotypes, no further parameters are mandatory. For the RandomForestRegressor, the optional hyper-parameter max_depth is defined.
Figure <ref> depicts the prediction and the application of metrics such as the mean absolute error (MAE). The mandatory parameter text is a placeholder allowing text to be added that is reported together with the evaluation result.
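A minimal sketch of the implementation a data scientist could derive from these modeling and evaluation blocks is given below; the synthetic data stands in for the final dataset of the pre-processing step, and the choice of the second regressor is an assumption.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the final dataset produced by the pre-processing blocks.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                        # e.g. humidity, pressure, wind, forecast
y = X @ np.array([0.5, -0.2, 0.1, 0.8]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "LinearRegression": LinearRegression(),
    "RandomForestRegressor": RandomForestRegressor(max_depth=8),  # optional hyper-parameter from the block
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"MAE for {name}: {mae:.3f}")              # corresponds to the 'text' placeholder of the metric block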
The method's final step is integrating the blocks into an execution workflow. Figure <ref> illustrates the execution order of the algorithm steps. As can be seen, the Format_Date2 block modeled in Figure <ref> is not depicted in the workflow, meaning that it is not taken into account during the implementation and is left out as an artifact from formalization time. The states' names serve to readily understand the workflow, and the blocks are connected via the ML_Block_Connection stereotype.
As the scope of this work is to formalize the machine learning and not to improve the executable code or to derive the code automatically, the result of the machine learning and the implementation itself are not depicted and left to future work.
§.§ UC2 - 3D Printer Success Evaluation during Printing
The purpose of the application is to detect faulty 3D prints during the printing process by comparing the actual status of the printed model with the intended model.
This use case illustrates the method's applicability to other data sources, such as image data, and the integration of the method into an executable workflow engine.
Additionally, the integration of pre-trained models is depicted by integrating TensorFlow Hub.
The idea of image similarity is based on an image similarity tutorial[<https://towardsdatascience.com/image-similarity-with-deep-learning-c17d83068f59>].
The use case process is described below and illustrated in Figure <ref>.
We adopt the CPEE process engine <cit.> to orchestrate the application process, as the CPEE provides a lightweight and straightforward user interface to orchestrate any application that allows interaction via REST web services.
Figure <ref> shows the workflow of the application, consisting of image generation and printing.
The first three process steps define the slicing of a STL file and the generation of the reference images.
Particularly, a Python script is called that generates the slices based on a given STL file and stores the generated reference images for later comparison and similarity check.
The second part of the process consists of a loop that prints a slice, takes a photo with a camera from the top center of the working area, and calls a similarity script to compare the intended and actual printed model.
The image similarity algorithm is defined using the machine learning formalization method, proposed here.
The defined algorithm provides a similarity index compared to a threshold value.
If the threshold is exceeded, the printing process is aborted, otherwise, it is repeated.
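A minimal sketch of how the similarity check could be exposed as a REST service callable from the process engine is shown below (the endpoint name, payload fields and default threshold are assumptions; the actual similarity computation is only stubbed here and corresponds to the model formalized in the following paragraphs).

from flask import Flask, jsonify, request

app = Flask(__name__)

def compute_distance(reference_path: str, camera_path: str) -> float:
    # Stub for the formalized similarity pipeline; it would load both images,
    # embed them and return their cosine distance.
    return 0.0

@app.route("/similarity", methods=["POST"])
def similarity():
    payload = request.get_json()
    distance = compute_distance(payload["reference_image"], payload["camera_image"])
    threshold = float(payload.get("threshold", 0.3))   # adjustable by the user in the workflow
    return jsonify({"distance": distance, "cancel": distance > threshold})

if __name__ == "__main__":
    app.run(port=5000)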
The machine learning model integrated into the printing process is formalized below.
Figure <ref> shows input data consisting of two images: the image sliced from the STL file and the photo from the 3D printer camera.
In contrast to the first use case, the data attributes are not further detailed with stereotypes because the input data do not show any variations, i.e. the format and resolution of the images do not change.
Figure <ref> depicts the scaling of the images such that they have the same dimension.
The conversion parameter L allows comparing the images on a black-and-white basis.
Normalization of the pixels and colors between 0 and 1 is also applied.
The normalization in the block Convert_PixelsAndNormalize should be defined as a new stereotype.
In this case, we show the application of the CustomCode stereotype, allowing for the injection of program code, which allows rapid prototyping.
However, flaws such as vulnerability or hijacking of the method might be introduced, leading to reduced understanding and reproducibility.
Additionally, it is not the purpose of the method to insert programmed code.
For further discussion, see Section <ref>.
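For illustration, the preprocessing formalized in these blocks, whether generated from dedicated stereotypes or injected via CustomCode, could correspond to a few lines such as the following (file names and the target resolution are assumptions).

import numpy as np
from PIL import Image

def load_normalized(path, size=(224, 224)):
    # Scale to a common resolution, convert to grayscale ("L") and map the pixels to [0, 1].
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

reference = load_normalized("slice_0042.png")   # sliced reference image (placeholder name)
photo = load_normalized("camera_0042.jpg")      # camera photo of the current print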
With respect to potentially wrong use of the method, Figure <ref> depicts the wrongly used CustomCode stereotype on top and, below it, the correct use of stereotypes for the same result with a slightly changed code sequence.
Further, the two images are fed to the classification algorithm, as illustrated in Figure <ref>.
The input value Model describes a TensorFlow Hub input, a pre-trained model to classify images.
Finally, the result is measured using cosine distance metrics.
The threshold for canceling the printing is implemented in the workflow and can be adjusted by the user.
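A sketch of this embedding and distance step is given below; the TensorFlow Hub handle is an assumed example of a feature-vector model, and RGB input is used because most pre-trained embedding models expect three channels.

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

# Assumed feature-vector model; any TensorFlow Hub image-embedding module with a
# (224, 224, 3) input can be used in the same way.
EMBED_URL = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4"
embed = hub.KerasLayer(EMBED_URL)

def to_batch(path):
    img = Image.open(path).convert("RGB").resize((224, 224))
    return np.asarray(img, dtype=np.float32)[np.newaxis] / 255.0

ref_vec = embed(tf.constant(to_batch("slice_0042.png"))).numpy()[0]
cam_vec = embed(tf.constant(to_batch("camera_0042.jpg"))).numpy()[0]

cosine_distance = 1.0 - float(np.dot(ref_vec, cam_vec) /
                              (np.linalg.norm(ref_vec) * np.linalg.norm(cam_vec)))
cancel_print = cosine_distance > 0.3            # threshold adjustable in the CPEE workflow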
Finally, Figure <ref> depicts the execution sequence of the algorithm.
§ USER STUDY
Typical users of the presented method are computer scientists and engineers from various disciplines, depending on the application area.
Therefore, this study aims to assess and compare computer scientists' and mechanical engineers' subjective workload and user experience regarding understanding, modifying, and creating machine learning functions in a model-based method.
Further, the time required for applying changes or creating constructs in SysML is assessed to allow a comparison of the participants based on previous experiences, e.g., programming or modeling prior knowledge.
Since the study and the modeling are conducted using the SysML modeling tool Papyrus[<https://www.eclipse.org/papyrus/>], it is impossible to eliminate distortions due to the usability of the underlying tool, e.g.,
“How to model a block”.
Therefore, the study director will provide verbal assistance if a participant requires support due to the tool's usability.
Large sample sizes are necessary to enable quantitative evaluation, which is not applicable due to resource constraints.
Therefore, the principles of discount usability are applied to test only a small group of customers and to identify the main usability problems by applying small qualitative user studies with three to five users, a detailed scenario, and a think-aloud method <cit.>.
According to <cit.>, a 70% chance to find 80% of the usability issues is given with five users.
However, in the literature there are reports that increasing the number of participants from five to ten significantly changes the number of issues found <cit.>.
In this respect, a total number of 12 users were tested, equally distributed among the two groups, Computer Scientists (CS) and Mechanical Engineers (ME).
In the following, the experimental setting is illustrated.
Next, an introduction to the evaluation procedure is given, followed by an introduction of the test cases in Section <ref>.
Finally, the results of the user studies are depicted in Section <ref>.
A discussion on the implications from the user study is given in Section <ref>.
§.§ Experimental Setting
The user study was conducted with 12 participants.
Each participant has a university degree (B.Sc., M.Sc., or Ph.D.) and received a basic introduction to programming at university.
Half of the participants are CSs, and half MEs.
Other engineers can serve as potential users and equally valid test users, as well.
However, to obtain a more homogeneous group, engineers are limited to MEs.
Due to the participants' different knowledge in modeling, programming, and data science, a self-assessment of their experience was made at the beginning of the user test.
Table <ref> summarizes the knowledge levels of the participants based on their highest university degree, years of experience, position at the current job, and self-assessment on the three relevant dimensions.
§.§ Evaluation Procedure
The study started with a basic introduction to SysML and an overview of the method introduced in this work, taking approximately 10 minutes and involving the presentation of two predefined block definition diagrams as samples with a focus on the modeling and understanding of a block definition diagram and the application of the introduced stereotypes.
Following this, the users had to perform three tasks, i.e.,
(1) showing that they understand the purpose of the modeling and the basic idea of the method by describing the modeled methods in Figure <ref>, (2) replacing a CSV stereotype with Text-file stereotype and redefining the attribute properties of the text file, and (3) adding a new function by connecting a new block with a particular stereotype to an existing block.
Each of the tasks (1) – (3) is subdivided into sub-activities to allow fine-grained evaluation of the tasks and the performance achieved by the participants.
The sub-activities are presented with their tasks in Table <ref>.
For each participant, the time taken to perform the tasks is recorded.
After each of the three tasks, the NASA Task Load Index (NASA-TLX, <cit.>) and the System Usability Scale (SUS, <cit.>) questionnaires are filled out by the users to assess the participants' subjective workload and the perceived usability.
Before filling out the questionnaire, the users were explicitly told to evaluate the method's usability, not Papyrus's.
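To make the usability scoring concrete, the following Python sketch shows how raw SUS questionnaire answers (ten items on a 1–5 Likert scale) are typically converted to the 0–100 SUS score; the response values shown are hypothetical and not taken from the study data.

def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert-scale answers (1 = strongly disagree, 5 = strongly agree)."""
    assert len(responses) == 10
    # Odd-numbered items (index 0, 2, ...) are positively worded: score - 1.
    # Even-numbered items (index 1, 3, ...) are negatively worded: 5 - score.
    adjusted = [(r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses)]
    return sum(adjusted) * 2.5

# Hypothetical answers of one participant for one task.
example = [4, 2, 5, 1, 4, 2, 4, 2, 5, 1]
print(sus_score(example))  # -> 85.0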
§.§ Test Cases
Table <ref> depicts the subtasks to accomplish the tasks of the user study.
Therefore, each subtask is assessed by the study leader to determine whether it was completed correctly.
If a user could not find a specific button due to the usability of Papyrus, but could justify why it is being searched for, e.g., “I need to remove a stereotype and add a new one so that a new function is defined”, the task is evaluated as correct.
To achieve reproducibility, the tasks were set exactly with the following wordings:
Task 1 Understanding: Please describe what can be seen in the currently displayed diagram and what function it fulfills. Additionally, please answer the following questions:
* What are the two input files, and in which format?
* What values are stored within CSV_2?
* What is the type of date_date, and how is it represented in the ML model?
* What are the path and encoding of the two input files?
* What are the properties of DataFrame_Merge Stereotype?
Task 2 Function Exchange: Behind the here presented TextFile function, a CSV stereotype is defined.
However, the type is incorrect.
Please change the file type to Text-File.
Additionally, set the encoding to UTF-8 and the path to C:/file.txt.
Task 3 Adding a Function: In the following view, you can see two input files connected to a merge block.
Additionally, a normalization of the merge block is required.
Please add the function for Normalization and set the value of the normalization method to MaxAbsScalar.
§.§ Survey Results
Figure <ref> shows boxplots of the required times for the individual tasks grouped per task and training of the participants in CS or ME.
For Task1, the time required is higher than for Task2 and Task3, whereas Task2 and Task3 show a comparable average and distribution.
One reason for the higher time for Task1 is that the users had to describe a model, which is inherently more time-consuming.
It was also observed that repetitive tasks made the users faster, which was also reported as feedback by the participants.
Further, the dispersion of Task1 for ME is higher compared to CS.
This scatter might be explained by the participants' varying levels of experience with modeling and data science.
However, there was no correlation between the time spent and the correctness of the execution of the sub-activities.
Regarding the dispersion of CS, interestingly, Tasks 2 and 3 vary more than Task1.
This can mainly be explained by the familiarity with the Papyrus modeling environment.
Thus, participants with more Papyrus experience had completed the tasks much faster than those who used Papyrus for the first time.
Figure <ref> shows the result of the individual tasks in terms of correctness in relation to the subtasks of Table <ref>.
CS perform better for T1 and T2, which can be explained by the more extensive prior UML experience that CS gained during their university education.
In T3, however, ME perform better.
This can be explained by an outlier value for CS that performs significantly below the average.
The overall accuracy of ME increased with the evolving tasks although the average of T2 is lower than for T1.
The results of the applied NASA-TLX test to indicate the perceived workload of the participants for the specific tasks are presented in Figure <ref>.
The lower the value of a dimension of the NASA-TLX, the lower the perceived workload.
Consequently, a low scale value is seen as positive.
The Effort dimension shows, for example, that with increasing experience or task, the perceived effort decreases.
Further, the frustration increases and the performance decreases compared to T1.
For T3, the standard error is larger than for T1 and T2.
Both might be justified due to the increasing complexity of the tasks.
However, it is a contrast compared to the achieved accuracy in Figure <ref>.
The raw overall scores of the tasks are depicted in Table <ref>.
According to <cit.>, the workload is categorized as `medium', which is the second best score and ranges from 10 to 29 points.
The cumulative results of CS and ME show a decreasing workload across the tasks.
For CS, the workload appears to be higher than for ME, especially for T3.
Based on the user feedback, no justification can be given for the difference between CS and ME.
The results of the SUS test with different rating scales are shown in Table <ref> based on <cit.>.
Figure <ref> presents the SUS score as a boxplot, prepared with an online tool for analyzing SUS questionnaire results <cit.>.
The adjective scale score in the boxplot is aligned with <cit.>, which is based on <cit.>.
The figure highlights that each task achieves the rating good for both CS and ME.
The standard error of CS is slightly higher than for ME, which can also be seen in Table <ref>.
The values of quartile scale shown in Table <ref> are according to <cit.> and acceptability scale according to <cit.>.
ME increased the score in T3, while T1 and T2 are equal.
CS decreased the score across the tasks.
However, the changes in the scores are small and therefore not meaningful.
Figure <ref> depicts the percentile scale based on <cit.>.
Since raw SUS scores are neither uniformly nor normally distributed, a percentile scale was created based on 5000 SUS studies.
In this respect, the comparison shows that the tests achieved a percentile between 60 and 79.
T3 for ME performed best with a percentile of 79.
For CS and ME the average percentile is 66.
T1 and T2 for ME have exactly the same value, which is why they are shown as one colour in the Figure.
§ DISCUSSION
This section discusses advantages and potential flaws of the newly introduced method to formalize machine learning tasks. The structure of the section is as follows: First, the metamodel's extension and the stereotypes' proposed structure are discussed. Next, the benefits and shortcomings of the modeling semantic are assessed with a particular focus on the applicability and potential ambiguous interpretation. Next, potential risks of model-driven machine learning and future work are presented. Finally, the implications of the user study are presented and discussed.
§.§ Stereotypes and Structure of the Custom Metamodel
The integration of custom stereotypes has been proven beneficial in the literature <cit.>. In this method, the use of stereotypes to encapsulate and abstract knowledge about machine learning tasks is beneficial as implementation details are hidden, thus supporting communication between different engineers not necessarily experienced in machine learning or programming.
With structuring the stereotypes using packages, a stereotype organization aligned to the CRISP-DM methodology is given, supporting refinements and extension in a fine-grained, hierarchical manner. Particularly, the definition of blackbox and abstract stereotypes allows the description of various functions without the necessity to specify each machine learning function in detail.
In the custom metamodel, custom Enumerations are defined to limit the number of attribute values, which reduces the risk of incorrectly specified models. Another opportunity to reduce the scope of possible selections is to restrict the number of allowed stereotypes, e.g., so that only stereotypes inheriting from the abstract stereotype PreProcessing can be assigned as a value for a specific attribute.
However, the filtering of stereotypes requires specific rules that have not yet been integrated or elaborated.
Although various methods are defined using stereotypes, the level of detail might be too little for practical application. DateConversion, for example, can be applied to manifold input values and various outputs, e.g., output representation as a string or Coordinated Universal Time (UTC). Adding multiple DateConversion stereotypes for each case is possible. Still, with a growing number of stereotypes, the complexity of selecting the correct, unambiguous stereotype increases while the maintainability decreases. Similarly, if too many stereotype attributes have to be set, the complexity and the effort for the application increases.
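To illustrate this level-of-detail trade-off, the following Python sketch mimics what a single DateConversion stereotype with two attributes (input format and output representation) could encapsulate; the parameter names and formats are illustrative assumptions and are not part of the metamodel.

from datetime import datetime, timezone

def date_conversion(value, input_format="%d.%m.%Y %H:%M", output="utc"):
    """Hypothetical realization of a DateConversion stereotype:
    the stereotype attributes map to the function parameters."""
    parsed = datetime.strptime(value, input_format)
    if output == "utc":
        # Represent the date as a UTC timestamp (seconds since epoch),
        # assuming the parsed value is already given in UTC.
        return parsed.replace(tzinfo=timezone.utc).timestamp()
    if output == "string":
        # Represent the date as an ISO 8601 string.
        return parsed.isoformat()
    raise ValueError(f"unsupported output representation: {output}")

print(date_conversion("24.12.2021 18:30"))                   # UTC timestamp
print(date_conversion("24.12.2021 18:30", output="string"))  # ISO string

A single parameterized stereotype of this kind keeps the number of stereotypes small, at the cost of more attributes that have to be set correctly.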
With respect to these uncertainties about the level of detail required for a fine-grained definition of machine learning tasks, industrial case studies have to be conducted to elaborate and validate a sufficient degree of detail and, additionally, to define future work.
§.§ Complexity of Unambiguous Modeling
The definition of an implementation structure aligned with the CRISP-DM methodology starting from the business understanding and ending with the definition of evaluation and workflows, is promising to be useful due to the integration of a comprehensive and mature methodology in a MBSE method. Additionally, more experienced computer scientists aware of CRISP-DM can rely on experiences and the benefits of CRISP-DM. Furthermore, in practice, one third of data scientists lack business understanding and communication skills<cit.>, which can be supported by the model-based method of CRISP-DM.
Each block implementing a ML stereotype within the implementation structure can be seen as an encapsulated subtask. Each subtask provides an output that can be used as input for another block. However, the given method does not explicitly specify the output of a block. Therefore, the output is defined by the implementing computer scientist, which may lead to different results depending on the implementer's experience and on the permissiveness of the semantics, which allows arbitrary associations to be created that may not be implementable.
In this respect, future work requires the integration of model checking to reduce orphan associations, infeasible implementations and unwanted side effects on changing associations.
Despite the ambiguity of the modeling and the potential errors in the associations, the method supports the elaboration and definition of machine learning tasks from early development, which is beneficial. The authors believe that these initial flaws of the method diminish with continued application due to the possibility of reusing certain parts of the formalization. The reuse additionally allows to preserve knowledge and contributes to standardization in the modeling and implementation, which further leads to a reduction of cost and risk in the design <cit.> and the maintenance of machine learning applications.
§.§ Potential of Model-Driven Machine Learning
The given proposal to describe machine learning tasks using a model-based method has some benefits but also disadvantages.
A core disadvantage is the initial effort to introduce stereotypes and formalize the model.
In this respect, traditional programming might be less time consuming and therefore, users might use the CustomCode stereotype to inject code.
However, the purpose of the method is not to allow code injection, due to vulnerability risks and the reduced documentation and understanding by others.
Consequently, future work is required to investigate an extension of the method that allows generating code from the model, but with limitations so that code injections as described in the use case are not possible.
Another disadvantage of the stereotypes is the potential effort for maintenance if interfaces are proprietary or rapidly changing, e.g. due to configuration changes or replacement of machines.
Closely related, for huge projects, the complexity of the resulting models might be very high, including potential errors in the model or ambiguous associations, which might be very hard to find and thus lead to additional communication effort.
Nevertheless, the shortcoming of a complex ramp-up might also turn into a benefit due to the possibility of introducing model libraries containing well-defined models, leading to standardized parts that can be reused. Further, the method allows using the formalization as documentation of the implemented technologies, which improves maintainability and extendability for various engineers. Additionally, with further investigations regarding model validation and model debugging features, errors in the semantics can be found and repaired without actually implementing the machine learning application. However, to use this efficiently, the integration into advanced model lifecycle management <cit.> might be necessary to allow collaborative working.

Due to the non-programming description of machine learning, the method is promising for increasing communication among various disciplines. In particular, with the integration of the general-purpose language SysML and the intersection of CRISP-DM and MBSE, the heterogeneous communities are broadly supported, which favors the implementation of machine learning in industrial practice and supports knowledge transfer regarding machine learning within enterprises. Further, the method can be integrated into early product development because the abstract definition allows foreseeing various data interfaces that might otherwise be forgotten during development. This potentially leads to increased accuracy of the machine learning applications and might reduce the number of failing machine learning projects, which is a well-known problem in industry <cit.>.
In this section, the advantages and potential shortcomings of the method have been shown. However, the key advantages of formalized knowledge was not detailed yet. The machine-readable artifacts (models) are usable with model transformations so to generate executable code, such as a Python script.
Particularly, each ML stereotype consists of knowledge to describe a specific subtask, which corresponds to a function in a programming language, e.g., a date conversion. The function parameters are defined in the stereotype (mandatory parameters) or on the block (optional parameters). Since stereotypes have to be uniquely named, each can be mapped to a generic code template in a dedicated programming language, e.g., Python. The templates consist of fixed code and generic parts with placeholders, which are filled based on the model's attributes. The state diagram defines the execution order; all blocks are a well-encapsulated functionality; hence, each block can generate a single code block in a Jupyter Notebook[<https://ipython.org/notebook.html>]. With the automatic derivation of executable machine learning code, the effort for documentation and implementation is reduced, potentially leading to fewer errors in the interpretation.
In this respect, future work consists of implementing a proof of concept showing that a derivation and decomposition of formalized machine learning knowledge is beneficial.
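As a sketch of how such a derivation could look, the snippet below fills a generic Python code template from the attributes of a block carrying the DataFrame_Merge stereotype; the template text, attribute names, and generated cell are illustrative assumptions and not the implemented transformation.

# Minimal sketch of template-based code generation from a stereotyped block.
# Each stereotype name maps to a code template whose placeholders are filled
# from the block's (mandatory and optional) attribute values.
TEMPLATES = {
    "DataFrame_Merge": (
        "df_{name} = {left}.merge({right}, on='{key}', how='{how}')"
    ),
}

def generate_cell(stereotype, attributes):
    """Return one notebook-cell string for a block implementing `stereotype`."""
    template = TEMPLATES[stereotype]
    return template.format(**attributes)

# Hypothetical block taken from a model: two input files merged on a key.
cell = generate_cell(
    "DataFrame_Merge",
    {"name": "merged", "left": "df_csv_1", "right": "df_csv_2",
     "key": "id", "how": "inner"},
)
print(cell)  # -> df_merged = df_csv_1.merge(df_csv_2, on='id', how='inner')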
§.§ Implications from the User Study
The user study was conducted with two groups that are representative for using the method presented in this work in practice.
The results show that the majority of the tasks were successfully accomplished.
From a study perspective, the users could perform each task without additional guidance on the modeling method.
Still, problems occurred with the user-interface of Papyrus, e.g., expanding a group of elements to select a block element for modeling. However, learning effects could be observed among the tasks on both CS and ME.
The assessment of the NASA-TLX showed that the mental demand for each task is comparable.
A similar observation can be made for the level of frustration, which is slightly lower for the first task.
Contrary to expectations, the participants perceived the effort as decreasing.
With regard to the task, the effort for modeling should have been higher than for understanding a model.
Nevertheless, it can be inferred that both CS and ME can use the method without being excessively strained in terms of task load.
From a usability perspective, the method achieved good results.
Users rated especially the consistency of the method as very high.
Comparing the method with others using the percentile curve, it achieved a percentile rank of 66.
However, the first positive results could be due to some shortcomings in the study design.
In particular, the influence of Papyrus on the ratings might be larger than anticipated in the study design.
The users' perception of usability may be shaped more by their experience with Papyrus than by the method itself, even though they were told beforehand to focus on the method.
In this respect, a paper prototype where users had to move paper snippets on the table might have been more valuable.
Furthermore, most of the participants reported their data science knowledge as low and yet were able to explain what happens in a given model or create a model building block themselves.
However, modeling their own data science application might not be possible, as the general understanding of data science is too low.
Nevertheless, it can be seen as a result of the study that the modeled knowledge can be used as a communication medium.
Therefore, it should also be possible for non-data scientists to perform a plausibility analysis, as they can gain an understanding of the process without understanding programming code.
However, this would need to be evaluated in a further study.
Similarly, an evaluation of the results with the help of a larger study should be sought.
§ CONCLUSIONS
In this work, the definition of machine learning tasks by means of SysML is presented. Particularly, the metamodel of SysML is extended with stereotypes to reflect functions from the machine learning domain. Additionally, the CRISP-DM methodology is used as the basis for the structure of the models to organize the development with specific viewpoints.
The method is evaluated in a case study showing the integration of machine learning task definitions in a cyber-physical system, as well as in a case study where a workflow engine is integrated to interrupt a 3D printing task if the desired result cannot be achieved.
Additionally, a user study is performed to obtain an overview of the perceived workload using the NASA-TLX questionnaire and to assess the usability of the method using the SUS questionnaire.
The findings of the evaluation showed that the entire workflow of a machine learning solution can be reflected using SysML. Additionally, the connection between the domain of (mechanical/electrical) engineers and machine learning experts is shown.
With the MBSE integration and the involvement of various stakeholders from different disciplines, an improvement in communication is expected as shown in a user study.
The user study implies that non-experts in data science can use the method as medium of communication.
Future work consists of the extension of the method to automatically derive executable machine learning code acting as a basis for the implementation. In addition, a case study must be conducted to develop a minimum level of detail required to sufficiently define a machine learning model that can be used for communication, and thus guide the implementation of the executable code through the formalization of the machine learning model.
|
http://arxiv.org/abs/2307.06306v1 | 20230712170232 | Locally Adaptive Federated Learning via Stochastic Polyak Stepsizes | [
"Sohom Mukherjee",
"Nicolas Loizou",
"Sebastian U. Stich"
] | cs.LG | [
"cs.LG",
"math.OC",
"stat.ML"
] |
Locally Adaptive Federated Learning via Stochastic Polyak Stepsizes
Sohom Mukherjee, Nicolas Loizou, Sebastian U. Stich
August 12, 2023
====================================================================
State-of-the-art federated learning algorithms such as FedAvg require carefully tuned stepsizes to achieve their best performance. The improvements proposed by existing adaptive federated methods involve tuning of additional hyperparameters such as momentum parameters, and consider adaptivity only in the server aggregation round, but not locally. These methods can be inefficient in many practical scenarios because they require excessive tuning of hyperparameters and do not capture local geometric information. In this work, we extend the recently proposed stochastic Polyak stepsize (SPS) to the federated learning setting, and propose new locally adaptive and nearly parameter-free distributed SPS variants (FedSPS and FedDecSPS). We prove that FedSPS converges linearly in strongly convex and sublinearly in convex settings when the interpolation condition (overparametrization) is satisfied, and converges to a neighborhood of the solution in the general case. We extend our proposed method to a decreasing stepsize version FedDecSPS, that converges also when the interpolation condition does not hold. We validate our theoretical claims by performing illustrative convex experiments. Our proposed algorithms match the optimization performance of FedAvg with the best tuned hyperparameters in the i.i.d. case, and outperform FedAvg in the non-i.i.d. case.
§ INTRODUCTION
Federated Learning (FL) <cit.> has recently become popular as a collaborative learning paradigm where multiple clients jointly train a machine learning model without sharing their local data. Despite the recent success of FL, federated optimization still faces various challenges in practical scenarios, such as a lack of adaptivity—FL using vanilla SGD updates may be unsuitable for heavy-tailed stochastic gradient noise distributions, arising frequently in training large-scale models such as ViT <cit.>. On the other hand, there has been an increasing interest in adaptive and parameter-free optimization algorithms <cit.>, which converge as fast as optimally tuned algorithms, with minimal or no assumptions about knowledge of properties of the data (e.g., Lipschitz constants, strong-convexity constants).
Various adaptive federated methods such as FedAdam <cit.> and FedAMS <cit.> have been proposed to address the above issue, but the convergence analysis for these methods require knowledge of problem-dependent parameters as well as strong assumptions like bounded stochastic gradients. Moreover, we should also note that a majority of these existing adaptive federated methods consider server-side adaptivity, i.e., essentially adaptivity only in the aggregation step. Some methods like Local-AMSGrad <cit.> and Local-AdaAlter <cit.> do consider local (client-side) adaptivity, but they perform some form of stepsize aggregation in the communication round, thereby using the same stepsize on all clients. Therefore, there is no method using full local adaptivity on clients and no existing proofs for the same, showing a gap in current literature. Intuitively it makes sense to use fully locally adaptive stepsizes on each client to capture the local geometric information of each objective function <cit.>, but it is not trivial to observe how using such asynchronous stepsizes on different clients will affect the convergence—especially in the heterogeneous client situations.
The stochastic Polyak step-size (SPS) proposed in <cit.> requires the values of the current and the optimal stochastic loss
and converges sublinearly under convexity and non-convexity (linearly under strong convexity) without requiring the knowledge of unknown quantities such as the gradient Lipschitz constant. DecSPS <cit.> showed that it is possible to replace the optimal stochastic loss with a lower bound (as in the deterministic setting <cit.>), making the method nearly[nearly refers to the fact that the lower bound on the stochastic loss is still unknown, but can easily be obtained for many practical situations—the loss is non-negative for standard regression and classification tasks, hence we can choose the lower bound as zero <cit.>.] parameter-free[Practically relevant variants of parameter-free algorithms may still contain some hyperparameters that do not depend on any problem-dependent quantities, and we call such hyperparameters as “free hyperparameters” (as opposed to “problem-dependent hyperparameters” which depend on some properties of the data or functions).]. To the best of our knowledge, no previous work has considered using SPS stepsizes in the distributed or federated setting.
The quest for finding a fully locally adaptive method for federated optimization, that works with minimal tuning, motivates us to look at the stochastic Polyak stepsize in distributed setting. To this end, we propose the FedSPS algorithm by incorporating the SPS stepsize in the local client updates. We obtain exact convergence of our locally adaptive FedSPS when the interpolation condition is satisfied (overparameterized case), and convergence to a neighbourhood for the general case. Refining the analysis for FedSPS in the small stepsize regime (which will be shown to be equivalent to constant stepsize FedAvg) allows us to obtain exact convergence even for the non-interpolating case. We also extend our method to a decreasing stepsize version FedDecSPS (following ideas from DecSPS <cit.>—proposed for the non-interpolating case in centralized setting), that provides exact convergence in the general non-interpolating setting without the aforementioned small stepsize assumption. Table <ref> provides a summary of our theoretical results and comparison to other methods highlighting the assumptions, number of problem-dependent hyperparameters, and whether local adaptivity involved for each case. Finally, we experimentally observe that the optimization performance of FedSPS is always on par or better than that of tuned FedAvg, and also close to that of other adaptive federated algorithms like FedAMS.
Contributions. We summarize our contributions as follows:
* We design the first fully locally adaptive and nearly parameter-free method for federated learning called FedSPS, and prove sublinear and linear convergence to a neighbourhood of the optimum, for the convex and strongly convex cases, respectively (Theorem <ref>, Remark <ref>). This is in contrast to existing adaptive federated methods such as FedAdam <cit.> and FedAMS <cit.>, both of which involve the problem-dependent hyperparameter of learning rate that depends on the knowledge of gradient Lipschitz constant, and employ adaptivity only for server aggregation.
* We propose FedDecSPS (Corollary <ref>) that enjoys local adaptivity and provably converges also in the non-interpolating regime due to decreasing stepsizes. The convergence guarantee matches the asymptotic convergence rates for federated algorithms such as FedAvg, and as a special case recovers the rates of DecSPS for a single worker (up to a logarithmic factor) without the restrictive bounded iterates assumption used in the original work.
* We empirically verify our theoretical claims by performing relevant illustrative experiments, as well as obtain competitive performance of the proposed method compared to tuned baselines for the convex case in i.i.d. as well as non-i.i.d. settings.
§.§ Related work
Adaptive gradient methods and SPS.
It is widely observed that careful stepsize selection plays a particularly important role in the convergence of SGD. Simple solutions such as a constant stepsize need knowledge of (often unknown) problem parameters, while polynomially decreasing stepsizes <cit.> suffer from slow convergence. Recently, adaptive stepsize methods that use some optimization statistics (e.g., loss history, gradient history) have become popular for deep learning applications. Such methods, including Adam <cit.> and AdaGrad <cit.>, work well in practice, but they still contain problem-dependent hyperparameters, and their convergence depends on unrealistic assumptions <cit.>. Another adaptive method with sound theoretical guarantees is the Polyak stepsize <cit.>, which has been recently extended to the stochastic setting by <cit.>, where SGD with the stochastic Polyak stepsize (SPS) has been proposed and analyzed for the first time. Extensions of the original SPS have already been proposed for solving structured non-convex problems <cit.> and in the update
rule of stochastic mirror descent <cit.>. Further follow-up works have come up with various ways to overcome the limitations of vanilla SPS, such as when optimal stochastic loss values are not known <cit.>, or when the interpolation condition does not hold <cit.>, as well as a proximal variant for tackling regularization terms <cit.>.
Adaptive federated optimization.
In <cit.>, the authors provide a general framework for adaptive federated optimization FedOpt, including particular instances such as FedAdam and FedYogi, by using the corresponding centralized adaptive methods as the server optimizer. Several works followed on the idea of server side adaptivity, some recent ones being CD-Adam <cit.> and FedAMS <cit.>, but they still contain problem-dependent hyperparameters in theory, and consequently need extensive hyperparameter tuning in practice.
Fully locally adaptive stepsizes on the client side have not been explored before, except in one concurrent work <cit.>
of which we became aware when finalizing this manuscript. Their proposed method is not based on Polyak stepsizes, but on an estimator for the inverse local Lipschitz constant from <cit.>.
§ PROBLEM SETUP
In this work, we consider the following sum-structured (cross-silo) federated optimization problem
f^⋆ := min_x∈ℝ^d[ f(x) := 1/n∑_i=1^n f_i(x) ] ,
where the components f_i: ℝ^d → ℝ are distributed among n local clients and are given in stochastic form f_i(x) := 𝔼_ξ∼D_i[F_i(x, ξ)], where D_i denotes the distribution of ξ over parameter space Ω_i on client i ∈ [n]. Standard empirical risk minimization is an important special case of this problem, when each D_i presents a finite number m_i of elements {ξ^i_1,…,ξ^i_m_i}. Then f_i can be rewritten as f_i(x) = 1/m_i∑_j=1^m_i F_i(x,ξ_j^i).
We do not make any restrictive assumptions on the data distributions D_i, and therefore our analysis covers the case of heterogeneous (non-i.i.d.) data where D_i ≠ D_j, ∀ i ≠ j, and the local minima x_i^⋆ := argmin_x∈ℝ^d f_i(x) can be far away from the global minimizer of (<ref>).
Federated averaging. A common approach to solving (<ref>) in the distributed setting is FedAvg <cit.>, also known as Local SGD <cit.>. This involves the clients performing a local step of SGD in each iteration, and the clients communicate with a central server after every τ iterations—their iterates are averaged on the server and sent back to all clients. For a non-communication iteration t and client i ∈ [n], the local iterate is updated according to x_t+1^i = x_t^i - γ_t^i g_t^i, and for a communication iteration (i.e., (t+1) a multiple of τ), the update will be x_t+1^i = 1/n∑_i=1^n(x_t^i - γ_t^i g_t^i), where g_t^i := ∇ F_i(x_t^i, ξ^i_t) is the stochastic gradient. FedAvg corresponds to the special case of Algorithm <ref> with constant stepsizes γ_t^i ≡ γ_0 (Line 4).
Stochastic Polyak stepsize.
We use the notion of SPSmax from the original paper <cit.>. They only consider the centralized setting, which corresponds to n=1 in our notation. Considering the centralized setting of finite-sum optimization on a single worker, min_x∈ℝ^d[ f_1(x) := 1/m∑_j=1^m F_1(x, ξ^1_j) ], the SPSmax stepsize for SGD (with a single stochastic sample) is given by γ_t = min{(F_1(x_t, ξ^1_j) - F_1^⋆)/(c ‖∇ F_1(x_t, ξ^1_j)‖^2), γ_b }, where F_1^⋆ := inf_ξ∈D_1, x∈ℝ^d F_1(x,ξ), γ_b > 0 is an upper bound on the stepsize that controls the size of the neighbourhood (in fact γ_b trades off adaptivity for accuracy as discussed later in Remark <ref>), and c > 0 is a constant scaling factor. Instead of using the optimal function values of each stochastic function as suggested in the original paper, we use a lower bound on the function values ℓ_1^⋆, which is easier to obtain for many practical tasks as shown in follow-up work <cit.>. Therefore, we have γ_t = min{(F_1(x_t, ξ^1_j) - ℓ_1^⋆)/(c ‖∇ F_1(x_t, ξ^1_j)‖^2), γ_b }, where ℓ_1^⋆ ≤ F_1^⋆.
§ PROPOSED METHOD
FedSPS.
We begin by proposing a fully locally (i.e., client-side) adaptive federated optimization algorithm FedSPS (Algorithm <ref>) with asynchronous stepsizes, i.e., the stepsizes are different across the clients, and also across the local steps for a particular client. The FedSPS stepsize for a client i and local iteration t will be given by
γ_t^i = min{(F_i(x_t^i, ξ_t^i) - ℓ_i^⋆)/(c ‖∇ F_i(x_t^i, ξ_t^i)‖^2), γ_b } ,
where c, γ_b > 0 are constants as explained before, ξ_t^i is the sample at time t on worker i, F_i(x_t^i, ξ_t^i) is the stochastic loss, g_t^i := ∇ F_i(x_t^i, ξ^i_t) is the stochastic gradient, and ℓ_i^⋆ ≤ F_i^⋆ = inf_ξ^i∈D_i, x∈ℝ^d F_i(x,ξ^i) is a lower bound on the minima of all functions on worker i. Since the loss functions are non-negative for most practical machine learning tasks, we can use ℓ_i^⋆ = 0 as discussed before, for running our algorithms. We analyse FedSPS in the strongly convex and convex settings and prove convergence guarantees without any dependence on problem-dependent hyperparameters (Theorem <ref>). We also empirically verify that γ_b and c are indeed free hyperparameters that do not need tuning, through a relevant sensitivity analysis in Section <ref>. Note that the notations used throughout can be trivially extended to the mini-batch setting as described in Appendix <ref>. In the following, we provide an example which shows the convergence benefits of locally adaptive federated optimization.
[Local adaptivity can improve convergence]
For a parameter a>0, consider the finite-sum optimization problem min_x∈ℝ[ f(x) := 1/2∑_i=1^2 f_i(x) ], with f_1(x) = (a/2)x^2, f_2(x) = (1/2)x^2, in the interpolation regime. If we solve this problem using mini-batch SGD, x_t+1 = x_t - (γ/2)(∇ f_1(x_t) + ∇ f_2(x_t)), we are required to choose a stepsize γ ≤ 2/L, where L = (1+a)/2, to enable convergence, and therefore Ω(a log(1/ϵ)) steps are needed. However, if we solve the same problem using locally adaptive distributed SGD of the form x_t+1 = x_t - 1/2(γ_1 ∇ f_1(x_t) + γ_2 ∇ f_2(x_t)), then the complexity can be near-constant. Concretely, for any stepsizes γ_i ∈ [γ_i^⋆/2, 3γ_i^⋆/2], with γ_1^⋆ = 1/a, γ_2^⋆ = 1, the iteration complexity is 𝒪(log(1/ϵ)), which can be arbitrarily better than Ω(a log(1/ϵ)) when a → ∞.
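Before moving to the analysis, the following Python sketch shows one FedSPS local step on a single client, i.e., the stepsize rule above combined with the SGD update; the loss/gradient interface, the quadratic toy objective, and the small guard against zero gradients are illustrative assumptions and not tied to our implementation.

import numpy as np

def fedsps_local_step(x, loss_and_grad, xi, c=0.5, gamma_b=1.0, ell_star=0.0):
    """One FedSPS local update on a client.

    loss_and_grad(x, xi) is assumed to return the stochastic loss
    F_i(x, xi) and its gradient; ell_star is the lower bound on the loss
    (zero for non-negative losses such as the cross-entropy)."""
    loss, grad = loss_and_grad(x, xi)
    grad_norm_sq = np.dot(grad, grad) + 1e-12  # guard against zero gradient
    gamma = min((loss - ell_star) / (c * grad_norm_sq), gamma_b)
    return x - gamma * grad, gamma

# Hypothetical quadratic client objective F_i(x, xi) = 0.5 * xi * ||x||^2.
loss_and_grad = lambda x, xi: (0.5 * xi * np.dot(x, x), xi * x)
x, gamma = fedsps_local_step(np.ones(3), loss_and_grad, xi=2.0)
print(gamma, x)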
§ CONVERGENCE ANALYSIS OF FEDSPS
§.§ Assumptions on the objective function and noise
[L-smoothness]
Each function F_i(x, ξ): ℝ^d × Ω_i → ℝ, i ∈ [n]
is differentiable for each ξ ∈ supp(D_i) and there exists a constant L ≥ 0 such that for each x, y ∈ ℝ^d, ξ ∈ supp(D_i):
‖∇ F_i(x, ξ) - ∇ F_i(y, ξ)‖ ≤ L ‖x - y‖.
Note that Assumption <ref> implies L-smoothness of each f_i(x) and of f(x). The assumption of each F_i(x, ξ) being smooth is often used in federated and decentralized optimization literature (for e.g., Assumption 1a of <cit.>, or Assumption 3 of <cit.>).
[μ-convexity]
There exists a constant μ ≥ 0 such that for each i ∈ [n], ξ ∈ supp(D_i) and for all x, y ∈ ℝ^d:
F_i(y, ξ) ≥ F_i(x, ξ) + ⟨∇ F_i(x, ξ), y - x⟩ + (μ/2)‖y - x‖^2 .
For some of our results, we assume μ-strong convexity for a parameter μ > 0, or convexity (when μ = 0). Furthermore we assume (as mentioned in the introduction) access to stochastic functions F_i(x,ξ) on each client i, with 𝔼_ξ∼D_i[∇ F_i(x,ξ)] = ∇ f_i(x) and 𝔼_ξ∼D_i[F_i(x,ξ)] = f_i(x).
Finite optimal objective difference.
For each i ∈ [n] we denote f_i^⋆ := inf_x∈ℝ^d f_i(x). Recall that we defined F_i^⋆ := inf_ξ∈D_i, x∈ℝ^d F_i(x,ξ), and need knowledge of lower bounds ℓ_i^⋆ ≤ F_i^⋆ for our algorithm. We define the quantity
σ_f^2 := 1/n∑_i=1^n (f_i(x^⋆) - ℓ_i^⋆) = f^⋆ - 1/n∑_i=1^n ℓ_i^⋆ ,
that will appear in our complexity estimates, and thus we implicitly assume that σ_f^2 < ∞ (finite optimal objective difference). Note that <cit.> studied SPS on finite-sum problems (which are a special case of the problems we consider here) and assumed knowledge of inf_x∈ℝ^d F_i(x,ξ) for every ξ ∈ D_i.
As calculating this for every stochastic function might be infeasible in practice, we assume here access to a lower bound instead <cit.>. Thus our definition (<ref>) of the finite objective difference σ_f^2 is slightly weaker than theirs, but can recover it in the special case when ℓ_i^⋆ = F_i^⋆ = f_i^⋆. Moreover, σ_f^2 also acts as our measure of heterogeneity between clients. This is in line with previous works on federated optimization in the non-i.i.d. setting, such as <cit.> that used Γ := f^⋆ - 𝔼_i f_i^⋆ as the heterogeneity measure. We can relate σ_f^2 to the more standard measures of function heterogeneity ζ^2 = 1/n∑_i=1^n ‖∇ f_i(x) - ∇ f(x)‖_2^2 and gradient variance σ^2 = 1/n∑_i=1^n 𝔼_ξ^i‖∇ F_i(x, ξ^i) - ∇ f_i(x)‖^2 in the federated literature <cit.> as shown in the following proposition (proof in Appendix <ref>). For the case of convex functions, it suffices <cit.> to compare with ζ_⋆^2 = 1/n∑_i=1^n ‖∇ f_i(x^⋆)‖_2^2 and σ_⋆^2 = 1/n∑_i=1^n 𝔼_ξ^i‖∇ F_i(x^⋆, ξ^i) - ∇ f_i(x^⋆)‖^2, calculated at the global optimum x^⋆ = argmin_x∈ℝ^d f(x). We can observe that σ_f^2 is actually a stronger assumption than bounded noise at the optimum (ζ_⋆, σ_⋆), but weaker than uniformly bounded noise (ζ, σ).
Using the definitions of σ_f^2, ζ_⋆^2, and σ_⋆^2 as defined above, we have: (a) ζ_⋆^2 ≤ 2 L σ_f^2, and (b) σ_⋆^2 ≤ 2 L σ_f^2.
§.§ Convergence of fully locally adaptive FedSPS
In this section we provide the convergence guarantees of FedSPS on sums of convex (or strongly convex) functions. We do not make any restriction on γ_b, and thus denote this as the fully locally adaptive setting that is of most interest to us. All proofs are provided in Appendix <ref>.
Assume that Assumptions <ref> and <ref> hold and c ≥ 2τ^2, then after T iterations (T/τ communication rounds) of FedSPS (Algorithm <ref>) it holds
(a) Convex case:
1/(Tn)∑_t=0^T-1∑_i=1^n 𝔼[f_i(x_t^i) - f_i(x^⋆)]
≤ 2/(Tα) ‖x_0 - x^⋆‖^2 + 4 γ_b σ_f^2/α ,
where α := min{1/(2cL), γ_b }. If μ>0, and c ≥ 4τ^2, we have
(b) Strongly convex case:
𝔼‖x̄_T - x^⋆‖^2 ≤ A (1-μα)^T ‖x_0 - x^⋆‖^2 + 2 γ_b σ_f^2/(αμ) ,
where A = 1/(μα), and x̄_t := 1/n∑_i=1^n x_t^i.
τ is a user selected input parameter to determine the number of local steps, and γ_b trades off adaptivity (potentially faster convergence for large γ_b) and accuracy (higher for small γ_b). Moreover, as c only depends on the input parameter[This is a bit reminiscent of FedAvg convergence results, where τ typically appears as a constraint on the stepsize (which we cannot impose here, as γ_b can be arbitrary). For FedAvg, the stepsize has to decrease linearly in τ, while here c scales quadratically in τ. In the experiments we observe that the impact of c is quite weak, and typically choosing a small constant suffices.] τ and not on properties of the function (e.g. L or μ), it is also a free parameter. The algorithm provably converges (up to the indicated accuracy) for any choice of these parameters. The lower bounds ℓ^⋆_i can be set to zero for many machine learning problems as discussed before.
The convergence criterion of the first result (<ref>) is non-standard, as it involves the average of all iterates x_t^i on the left hand side, and not the average x̄_t more commonly used. However, note that every τ-th iteration these quantities are the same, and thus our result implies convergence of 1/(Tn)∑_t=0^(T-1)/τ∑_i=1^n 𝔼[f_i(x̄_tτ) - ℓ^⋆_i]. The proof techniques involve extending the error-feedback framework (that originally works for equal stepsizes) <cit.> to work for fully un-coordinated local stepsizes for the first time in our work.
Comparison with SPS <cit.>.
We will now compare our results to <cit.> that studied SPS for a single worker (n=1). First, we note that in the strongly convex case, we almost recover Theorem 3.1 in <cit.>. The only differences are that they have A=1 and allow weaker bounds on the parameter c (c ≥ 1/2, vs. our c>4), but we match other constants. We note that the dependency on c in our case could be weakened with a dedicated analysis, while removing A would be more difficult: in the distributed case we cannot expect a decrease of _t - ^⋆^2 in every step (as synchronization happens only every τ-th step), but we recover the exact same rate, decrease factor (1-μα) asymptotically, requiring only log (A) more steps to reach the same accuracy. In the convex case, we again recover <cit.> up to constants and the stronger condition on c (vs. c>1 in their case).
∙ (Special case I) Exact convergence of FedSPS in interpolation regime:
We highlight the linear convergence of FedSPS in the interpolation case (σ_f = 0) in the following corollary.
Assume interpolation, σ_f^2=0 and let the assumptions of Theorem <ref> be satisfied with μ>0. Then
𝔼‖x̄_T - x^⋆‖^2 ≤ A (1-μα)^T ‖x_0 - x^⋆‖^2 .
∙ (Special case II) Exact convergence of FedSPS in the small stepsize regime:
It can be observed that Theorem <ref> shows convergence of FedSPS to a neighborhood of the solution. Decreasing γ_b (smaller than 1/(2cL)) can improve the accuracy, but the error is at least Ω(σ_f^2) even when γ_b → 0. This issue is also persistent in the original work on SPSmax <cit.>.[Plugging in γ_b < 1/(2cL) into the definition of α (Theorem <ref>) gives α = γ_b, and the second term in (<ref>) or <cit.> becomes 2σ_f^2/μ—the neighbourhood of convergence that does not vanish even if γ_b → 0.] However, we remark that when the stepsize upper bound γ_b is chosen extremely small—not allowing for adaptivity—then FedSPS becomes identical to constant stepsize FedAvg (or Local SGD). This is not reflected in Theorem <ref>, which cannot recover the exact convergence known for FedAvg. We address this in the next theorem, proving exact convergence of small stepsize FedSPS (equivalent to an analysis of FedAvg under the σ_f^2 assumption).
Assume that Assumptions <ref> and <ref> hold and γ_b ≤ min{1/(2cL), 1/(20Lτ)}, then after T iterations (T/τ communication rounds) of FedSPS (Algorithm <ref>) it holds that
(a) Convex case:
1/T∑_t=0^T-1 𝔼[f(x̄_t) - f^⋆] = 𝒪( 1/(T γ_b) ‖x_0 - x^⋆‖^2 + γ_b L σ_f^2 + γ_b^2 L τ^2 σ_f^2 ) ,
and when μ > 0,
(b) Strongly convex case:
𝔼‖x̄_T - x^⋆‖^2 = 𝒪( ‖x_0 - x^⋆‖^2/(μγ_b) (1-μγ_b)^T + γ_b L σ_f^2/μ + γ_b^2 L τ^2 σ_f^2/μ ) .
This theorem shows that by choosing an appropriately small γ_b, any arbitrary target accuracy ϵ >0 can be obtained. We are only aware of <cit.> that studies FedAvg under similar assumptions as us (Γ := f^⋆ - E_i f_i^⋆ measuring heterogeneity). However, their analysis additionally required bounded stochastic gradients and their convergence rates are weaker (e.g., not recovering linear convergence under interpolation when σ_f^2 = 0).
§ DECREASING FEDSPS FOR EXACT CONVERGENCE
In the previous section, we have proved that FedSPS converges in the interpolation setting without having to pick (or know) a particular value of the stepsize parameter γ_b.
However, for the non-interpolating setting (σ_f > 0), we need to choose a small value of γ_b to ensure convergence, trading off adaptivity for achieving exact convergence. For example, by choosing γ_b = 1/√(T) for the convex case (<ref>) in Theorem <ref>, we get a rate of 𝒪((R_0^2 + L σ_f^2)/√(T)), where R_0^2 := ‖x_0 - x^⋆‖^2—recovering the known asymptotic rates for federated algorithms, but losing the adaptivity of FedSPS (the required upper bound γ_b might allow only small stepsizes to be picked). In this section, we draw inspiration from the decreasing SPS stepsize DecSPS <cit.> for the non-interpolating case, to develop FedDecSPS, that achieves the same rate (up to log factors) without compromising adaptivity.
FedDecSPS. In order to obtain exact convergence to arbitrary accuracy (without the small stepsize assumption) in the heterogeneous setting with σ_f > 0, we propose a decreasing stepsize version of FedSPS, called FedDecSPS. The FedDecSPS stepsize for client i and local iteration t is given by γ_t^i = (1/c_t) min{ (F_i(x_t^i, ξ^i_t) - ℓ_i^⋆)/‖∇ F_i(x_t^i, ξ_t^i)‖^2, c_t-1γ_t-1^i }, where (c_t)_t=0^∞ is any non-decreasing positive sequence of real numbers. We also set c_-1 = c_0, γ_-1^i = γ_b, getting γ_0^i = (1/c_0) min{ (F_i(x_0^i, ξ^i_0) - ℓ_i^⋆)/‖∇ F_i(x_0^i, ξ_0^i)‖^2, c_0 γ_b }. We obtain the following convergence result for FedDecSPS (the proof is deferred to Appendix <ref>).
Assume that Assumption <ref> holds, and F_i(x, ξ) are convex, for each x ∈ ℝ^d, ξ ∈ supp(D_i), i ∈ [n], and c_t ≥ 2 τ^2, then after T iterations (T/τ communication rounds) of FedDecSPS it holds
1/(Tn)∑_t=0^T-1∑_i=1^n 𝔼[f_i(x_t^i) - f_i(x^⋆)]
≤ 2c_T-1/(T c_0 α_0) ‖x_0 - x^⋆‖^2 + 2c_T-1γ_b(2+ τ^2)/α_0 · 1/T∑_t=0^T-1 σ_f^2/c_t^2 ,
where α_0 := min{1/(2 c_0 L), γ_b }.
In Theorem <ref>, compared to the convex analysis of FedSPS (<ref>) in Theorem <ref>, we have a factor of ∑_t=0^T-1 1/c_t^2 which aids exact convergence. Considering the general heterogeneous setting (σ_f > 0), choosing c_t = Θ(√(t + 1)), and observing that ∑_t=0^T-1 1/(t+1) ≤ 1 + log T, gives us an asymptotic rate of 𝒪(1/√(T)) which matches previous works on federated optimization <cit.>. We highlight this in Corollary <ref> below.
Under the setting of Theorem <ref>, for c_t = 2 τ^2√(t+1) we have
1/(Tn)∑_t=0^T-1∑_i=1^n 𝔼[f_i(x_t^i) - f_i(x^⋆)] = 𝒪( (R_0^2 + γ_b σ_f^2 log T)/(α_0 √(T)) ) .
Note that our results above for FedDecSPS do not rely on the restrictive bounded iterates assumption (i.e., assuming ‖x̄_t - x^⋆‖^2 ≤ R^2, for all t ∈ [T-1]), that was necessary in the original work on DecSPS <cit.> that considered only the case n=1. We follow a modified proof technique using the upper bound c_t ≤ c_T-1, which enables us to remove the bounded iterates assumption, but we do incur an additional log factor (Corollary <ref>).
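To make the FedDecSPS rule concrete, the sketch below computes the stepsize for one client across local iterations with the choice c_t = c_0·√(t+1) used later in our experiments; the loss/gradient values, the interface, and the small guard against zero gradients are hypothetical and only illustrate the recursion through c_{t-1}γ_{t-1}^i.

import numpy as np

def feddecsps_stepsize(loss, grad, t, prev_gamma, prev_c,
                       c0=0.5, gamma_b=1.0, ell_star=0.0):
    """FedDecSPS stepsize for local iteration t on one client.

    prev_gamma and prev_c are gamma_{t-1}^i and c_{t-1}; for t = 0 they are
    initialized as gamma_b and c_0, so the rule reduces to the stated
    gamma_0^i."""
    c_t = c0 * np.sqrt(t + 1)
    polyak = (loss - ell_star) / (np.dot(grad, grad) + 1e-12)
    gamma = (1.0 / c_t) * min(polyak, prev_c * prev_gamma)
    return gamma, c_t

# Hypothetical sequence of (loss, gradient) pairs observed on one client.
gamma, c = 1.0, 0.5  # gamma_{-1} = gamma_b, c_{-1} = c_0
for t, (loss, grad) in enumerate([(3.0, np.array([2.0, 2.0])),
                                  (1.0, np.array([1.0, 1.0]))]):
    gamma, c = feddecsps_stepsize(loss, grad, t, gamma, c)
    print(t, gamma)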
§ EXPERIMENTS
Experimental setup.
For all federated training experiments we have 500 communication rounds (the no. of communication rounds being T/τ as per our notation), 5 local steps on each client (τ = 5, unless otherwise specified for some ablation experiments), and a batch size of 20 (i.e., |B| = 20). We perform convex experiments in the i.i.d. as well as non-i.i.d. settings. Results are reported for both settings without client sampling (10 clients) and with client sampling (10 clients sampled uniformly at random from 100 clients with participation fraction 0.1, and data split among all 100 clients), i.e., n=10 active clients throughout. The i.i.d. experiments involve randomly shuffling the data and equally splitting the data between clients. For non-i.i.d. experiments, we assign every client samples from exactly two classes of the dataset, the splits being non-overlapping and balanced with each client having the same number of samples <cit.>. All experiments are performed on a machine equipped with one NVIDIA GeForce GTX 1080 and an Intel Core i7-4790K Processor with 16GB RAM. Our code is based on publicly available repositories[SPS (<https://github.com/IssamLaradji/sps>), FedAMS (<https://github.com/jinghuichen/FedCAMS>)].
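For the non-i.i.d. split, a simplified Python sketch of the label-based partitioning described above is given below (two classes per client, balanced and non-overlapping); it assumes the number of clients equals half the number of classes, so the exact shard assignment used in our code may differ.

import numpy as np

def two_class_split(labels, n_clients, seed=0):
    """Assign each client samples from exactly two classes (non-overlapping,
    balanced). Assumes len(unique classes) == 2 * n_clients."""
    rng = np.random.default_rng(seed)
    classes = rng.permutation(np.unique(labels))
    pairs = classes.reshape(n_clients, 2)  # two classes per client
    client_indices = []
    for pair in pairs:
        idx = np.where(np.isin(labels, pair))[0]
        client_indices.append(rng.permutation(idx))
    return client_indices

# Hypothetical labels for a 10-class dataset split over 5 clients.
labels = np.repeat(np.arange(10), 100)
splits = two_class_split(labels, n_clients=5)
print([len(s) for s in splits])  # -> [200, 200, 200, 200, 200]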
FedSPS.
The implementation is done according to Algorithm <ref>. Since all our experimental settings involve non-negative loss functions, we can use the lower bound ℓ_i^⋆ = 0 <cit.>, throughout. The only free hyperparameters associated with this algorithm are the upper bound on stepsizes γ_b and the scaling parameter c. In the following, we perform empirical sensitivity analysis for these free hyperparameters, and conclude that our method is indeed insensitive to changes in these parameters.
We start with benchmarking our method by running some initial convex experiments performing classification of the MNIST (i.i.d.) dataset <cit.> with a logistic regression model, without client sampling. In Figure <ref>(a), we compare the effect of varying γ_b ∈{1, 5, 10} on FedSPS, and varying γ∈{0.1, 0.01} on FedAvg. We find that FedAvg is not robust to changing stepsize—converging well for stepsize 0.1, but very slowly for stepsize 0.01. On the contrary, all instances of FedSPS converge to a neighbourhood of the optimum—the size of the neighbourhood being proportional to γ_b as suggested by the theory. We perform another ablation study on how stepsize adaptation occurs for different values of γ_b ∈{0.1, 0.5, 1, 5, 10} in Figure <ref>(b). We find that for γ_b ≤ 0.5, the stepsize is constant and equal to γ_b and there is essentially no adaptivity (this is the small stepsize regime mentioned previously in the theory), while for bigger values of γ_b ≥ 1 there is adaptive behaviour.
We now fix γ_b = 1, and perform an ablation study to understand the effect of varying SPS scaling parameter c on the convergence in Figure <ref>(c). For number of local steps τ=5, we vary c from 0.01 to 40 (i.e., of the order of square of τ). Unlike what is predicted by our theory, empirically we observe that small c works better and larger c leads to slower convergence. Moreover, all values of c ∈{0.01, 0.1, 0.5, 1.0} have similarly good convergence, thereby implying our method is robust to this hyperparameter and needs no tuning. We provide additional plots for τ∈{10, 20, 50, 100} local steps in Appendix <ref> to confirm that this observation is valid across all values of τ, and plot the optimal value of c versus τ for each case in Figure <ref>(d). Gaining insights from above experiments we now fix c=0.5 for all of the experiments that follow.
FedDecSPS. We evaluate the performance of FedDecSPS with c_t = c_0 √(t+1). Similar to the sensitivity analysis of FedSPS towards c (Figure <ref>), we performed ablation studies for a fixed value of γ_b and varying c_0 as well as τ. The observation is the same as in the previous case: the optimal value of c_0 does not scale with τ as suggested by theory and we fix c_0=0.5 for all experiments. Similarly we fix γ_b = 1, following similar observations as before. We compare the convergence of FedSPS and FedDecSPS for the case of heterogeneous data on clients (i.e., σ_f>0) in Figure <ref> (c) and (d). We observe that heterogeneity has a worse effect on the convergence of FedSPS as compared to FedDecSPS, and this verifies our previous theory for both methods.
FedAvg and FedAMS. We compare the performance of our methods—FedSPS and FedDecSPS against the FedAvg baseline, and the state-of-the-art adaptive federated algorithm FedAMS <cit.>. FedAvg and FedAMS need extensive tuning for client learning rate η_l, server learning rate η, as well as max stabilization factor ϵ, and β_1, β_2. We refer readers to Appendix <ref> for details on the grid search performed and the optimal set of hyperparameters.
Comparison.
For the convex setting of logistic regression on MNIST dataset (i.i.d. setting), without client sampling, we now compare FedSPS with FedAvg and FedAMS in Figure <ref>(a). We see that the convergence of FedSPS matches that of the best tuned FedAvg. Note that while the best tuned FedAMS slightly outperforms our method, it requires considerable tuning depicted by the large margin between best and worst learning rate performances. For additional convex experiments in the more practical setting with client sampling, we take the problem of binary classification of LIBSVM <cit.> datasets (w8a, mushrooms, ijcnn, phishing, a9a) with logistic regression model in the i.i.d. setting. We report the performance on w8a in Figure <ref>(b), where FedSPS again converges similarly as tuned FedAvg, and better than FedAMS. We defer rest of the LIBSVM dataset plots to Appendix <ref>. In the non-i.i.d. case we compare our proposed FedSPS and FedDecSPS to the FedAvg baseline, adaptive federated methods FedAMS and FedADAM, as well as another state-of-the-art federated method MIME <cit.>. In this setting FedDecSPS does better than FedSPS, and our methods outperform the best tuned FedAvg, FedADAM and MIME.
§ CONCLUSION
To further advance federated and fully decentralized learning, it is important to reduce the need for hyperparameter tuning, especially when tuning is costly or impossible.
In this paper, we extended the recently proposed SPS to the federated optimization setting, obtaining a locally adaptive and nearly parameter-free federated algorithm FedSPS.
We prove that FedSPS converges sublinearly and linearly to a neighbourhood of the solution for convex and strongly convex cases, respectively. We further extend our method to the decreasing stepsize version FedDecSPS, which enables exact convergence in the non-interpolating setting without compromising adaptivity.
We observe in experiments on convex problems that FedSPS converges as well as the best tuned FedAvg in the i.i.d. setting, while outperforming FedAvg for non-i.i.d. data.
For the sake of completeness and comparison with existing literature, we also introduce an additional version of our algorithm in the Appendix <ref> called FedSPS-Global, that uses aggregation of stepsizes in communication rounds, similar to <cit.>. Although not the main focus of our work, we also include some theoretical insights and positive practical results for small-scale deep learning experiments in the Appendix <ref>, hinting it might be possible to easily extend our proposed approach to non-convex settings.
§ ACKNOWLEDGMENTS
This work was supported by the Helmholtz Association's Initiative and Networking Fund on the HAICORE@FZJ partition. Nicolas Loizou acknowledges support from CISCO Research.
plain
PART:
*Appendix
The Appendix is organized as follows. We begin by introducing some general definitions and inequalities used throughout the proofs, in Section <ref>. Proofs for convergence analysis of FedSPS are provided in Section <ref>—including convex and strongly convex cases. Section <ref> provides some additional theoretical details for FedSPS. A proof for convergence analysis of FedDecSPS follows in Section <ref>. The additional version of FedSPS called FedSPS-Global is described in Section <ref>. We provide some theoretical results for FedSPS in the non-convex setting in Section <ref>. Finally, additional experiments for FedSPS and FedSPS-Global in convex and non-convex settings are provided in Section <ref>.
toc
§ TECHNICAL PRELIMINARIES
Let us present some basic definitions and inequalities used in the proofs throughout the appendix.
§.§ General definitions
The function f : ℝ^d → ℝ is convex, if for all x, y ∈ ℝ^d
f(y) ≥ f(x) + ⟨∇ f(x), y - x⟩ .
The function f : ℝ^d → ℝ is L-smooth, if there exists a constant
L > 0 such that for all x, y ∈ ℝ^d
‖∇ f(x) - ∇ f(y)‖ ≤ L ‖x - y‖ ,
or equivalently <cit.>
f(y) ≤ f(x) + ⟨∇ f(x), y - x⟩ + (L/2)‖y - x‖^2 .
For an L-smooth function f : ℝ^d → ℝ, having an optimum at x^⋆ ∈ ℝ^d, we have for any x ∈ ℝ^d
‖∇ f(x) - ∇ f(x^⋆)‖^2 ≤ 2L ( f(x) - f(x^⋆) ) .
For an L-smooth and μ-strongly convex function f : ℝ^d → ℝ, the following bound holds
1/(2cL) ≤ (f(x) - f^⋆)/(c ‖∇ f(x)‖^2) ≤ (f(x) - ℓ^⋆)/(c ‖∇ f(x)‖^2) ≤ 1/(2cμ) ,
where f^⋆ = inf_x f(x), and ℓ^⋆ is a lower bound ℓ^⋆ ≤ f^⋆.
§.§ General inequalities
For an arbitrary set of n vectors {a_i}_i=1^n, a_i ∈ ℝ^d
1/n∑_i=1^n ‖a_i - 1/n∑_j=1^n a_j‖^2 ≤ 1/n∑_i=1^n ‖a_i‖^2 .
For an arbitrary set of n vectors {a_i}_i=1^n, a_i ∈ ℝ^d
‖∑_i=1^n a_i‖^2 ≤ n ∑_i=1^n ‖a_i‖^2 .
For given two vectors a, b ∈ ℝ^d
2⟨a, b⟩ ≤ γ‖a‖^2 + γ^-1‖b‖^2 , ∀γ > 0 .
For given two vectors a, b ∈ ℝ^d
‖a + b‖^2 ≤ (1 + λ)‖a‖^2 + (1 + λ^-1)‖b‖^2, ∀λ > 0 .
For any random variable X
𝔼‖X - 𝔼X‖^2 ≤ 𝔼‖X‖^2 .
§ CONVERGENCE ANALYSIS OF FEDSPS
§.§ FedSPS inequalities
Under the assumption that each F_i is L-smooth (Assumption <ref>), the stepsizes of FedSPS satisfy, for all rounds t ∈ [T-1] and all clients i ∈ [n],
α ≤ γ_t^i ≤ γ_b ,
where α = min{1/(2cL), γ_b }.
This lemma is easily obtained by plugging in the definition of FedSPS stepsizes into (<ref>).
FedSPS stepsizes γ_t^i follow the fundamental inequality
‖γ_t^i g_t^i‖^2 ≤ (γ_t^i/c) [F_i(x_t^i,ξ_t^i) - ℓ_i^⋆] .
We observe that it holds from the definition of FedSPS stepsizes
‖γ_t^i g_t^i‖^2 = γ_t^i · min{(F_i(x_t^i,ξ_t^i) - ℓ_i^⋆)/(c ‖∇ F_i(x_t^i,ξ_t^i)‖^2), γ_b } · ‖∇ F_i(x_t^i,ξ_t^i)‖^2
≤ (γ_t^i/c) [F_i(x_t^i,ξ_t^i) - ℓ_i^⋆] .
§.§ Proof for convex case of FedSPS
Proof outline and notation.
Before we start with the proof, let us recall the notation:
below we will frequently use g_t^i to denote the stochastic gradient computed on the random component sampled by worker i at iteration t, so g_t^i = ∇ F_i(x_t^i, ξ_t^i), and the stepsize γ_t^i = min{(F_i(x_t^i,ξ_t^i) - ℓ_i^⋆)/(c ‖∇ F_i(x_t^i, ξ_t^i)‖^2), γ_b }. We define the (possibly virtual) average iterate x̄_t := 1/n∑_i=1^n x_t^i. We follow the proof template for FedAvg (Local SGD) <cit.>, deriving a difference lemma to bound R_t := 1/n∑_i=1^n ‖x̄_t - x_t^i‖^2, and plugging it into the decrease of ‖x̄_t+1 - x^⋆‖^2. Our proof introduces differences at various points, such as in the difference lemma (<ref>) and the decrease (<ref>), to incorporate local adaptivity using properties of FedSPS stepsizes (Lemma <ref> and Lemma <ref>) and the finite optimal objective difference assumption (σ_f^2 < ∞).
§.§.§ Difference Lemmas
For c ≥ 2 τ^2, the variance of the iterates between the clients R_t := 1/n∑_i=1^n _t - _t^i^2, is bounded as
R_t ≤1/2n τ∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i [F_i(_j^i,ξ_j^i)- ℓ_i^⋆] ,
where t-1-k(t) denotes the index of the last communication round (k ≤τ - 1).
We use the property that x̄_t = x_t^i for every t that is a multiple of τ. Therefore, there exists a k(t) ≤ τ - 1 such that R_(t-1)-k(t) = 0. So we have,
R_t = 1/n∑_i=1^n ‖∑_j=(t-1)-k(t)^t-1 γ_j^i g_j^i - v‖^2 (<ref>)≤ 1/n∑_i=1^n ‖∑_j=(t-1)-k(t)^t-1 γ_j^i g_j^i‖^2 ,
where v := 1/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1 γ_j^i g_j^i. The inequality follows by the fact that v is the mean of the terms in the sum.
With the inequality ‖∑_i=1^τ a_i‖^2 ≤ τ∑_i=1^τ ‖a_i‖^2 for vectors a_i ∈ ℝ^d, and the property of the Polyak stepsize we deduce:
R_t (<ref>)≤ τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1 ‖γ_j^i g_j^i‖^2
(<ref>)≤ τ/(nc)∑_i=1^n ∑_j=(t-1)-k(t)^t-1 γ_j^i [F_i(x_j^i,ξ_j^i) - ℓ_i^⋆]
≤ 1/(2nτ)∑_i=1^n ∑_j=(t-1)-k(t)^t-1 γ_j^i [F_i(x_j^i,ξ_j^i) - ℓ_i^⋆] ,
where we used the assumption c ≥ 2 τ^2 to obtain the last inequality.
For γ_b ≤ 1/(20Lτ), the variance of the iterates between the clients, R_t := 1/n∑_i=1^n ‖x̄_t - x_t^i‖^2, is bounded as
𝔼 R_t ≤ 1/(3L τ)∑_j=(t-1)-k(t)^t-1 𝔼[ f(x̄_j) - f^⋆] + 64 L γ_b^2 τ∑_j=(t-1)-k(t)^t-1 σ_f^2 ,
where (t-1)-k(t) denotes the index of the last communication round (k(t) ≤ τ - 1).
Again we derive a bound on the variance of the iterates between the clients. We use the property that _t = _t^i for every t that is a multiple of τ. Define k(t) ≤τ-1 as above. We will now prove that it holds
R_t ≤32 γ_b^2 τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1∇ F_i(_j, ξ_j^i) ^2
(<ref>)≤64 γ_b^2 L τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1 [ F_i(_j, ξ_j^i) - ℓ_i^⋆] ,
and consequently (after taking expectation) and using γ_b ≤1/20Lτ:
R_t≤1/3L τ∑_j=(t-1)-k(t)^t-1 [ f(_j) - f^⋆] + 64 L γ_b^2τ∑_j=(t-1)-k(t)^t-1σ_f^2 .
Note that if t is a multiple of τ, then R_t = 0 and there is nothing to prove. Otherwise note that
R_t+1 = 1/n∑_i=1^n _t - _t^i + γ_t^i ∇ F(_t^i,ξ_t^i) - ^2 ,
where := 1/n∑_i=1^n γ_t^i ∇ F(_t^i,ξ_t^i) denotes the average of the client updates. With the inequality å + ^2 ≤ (1+τ^-1) å^2 + 2τ^2 for τ≥ 1, å, ∈^d, we continue:
R_t+1 (<ref>)≤(1+1/τ) R_t + 2 τ/n∑_i=1^n γ_t^i ∇ F_i(_t^i, ξ_t^i) - ^2
(<ref>)≤(1+1/τ) R_t + 2 τ/n∑_i=1^n γ_t^i ∇ F_i(_t^i, ξ_t^i)^2
≤(1+1/τ) R_t + 2 τγ_b^2 /n∑_i=1^n ∇ F_i(_t^i, ξ_t^i)^2
(<ref>)≤(1+1/τ) R_t + 4 τγ_b^2 /n∑_i=1^n ( ∇ F_i(_t^i, ξ_t^i) - ∇ F_i(_t, ξ_t^i)^2 + ∇ F_i(_t, ξ_t^i)^2 )
(<ref>)≤(1+1/τ) R_t + 4 τγ_b^2 /n∑_i=1^n ( L^2 _t - _t^i^2 + ∇ F_i(_t, ξ_t^i)^2 )
≤(1+2/τ) R_t + 4 τγ_b^2 /n∑_i=1^n ∇ F_i(_t, ξ_t^i)^2 ,
where we used γ_b ≤1/10L τ. Equation (<ref>) now follows by unrolling, and noting that (1+2/τ)^j ≤ 8 for all 0 ≤ j ≤τ.
§.§.§ Proof of Theorem <ref> (a) (general case valid for all stepsizes)
Distance to optimality.
We now proceed by using the definition of the virtual average: _t+1 = 1/n∑_i=1^n (_t^i - γ_t^i ≫_t^i ).
_t+1 - ^⋆^2
≤_t - ^⋆^2 - 2/n∑_i=1^n _t - ^⋆ , γ_t^i ≫_t^i + 1/n∑_i=1^n γ_t^i ≫_t^i^2
= _t - ^⋆^2 - 2/n∑_i=1^n _t^i - ^⋆ , γ_t^i ≫_t^i + 1/n∑_i=1^n γ_t^i ≫_t^i^2 - 2/n∑_i=1^n _t - _t^i, γ_t^i ≫_t^i
(<ref>)≤_t - ^⋆^2 - 2/n∑_i=1^n _t^i - ^⋆ , γ_t^i ≫_t^i + 1/n∑_i=1^n γ_t^i ≫_t^i^2 + 1/n∑_i=1^n ( _t - _t^i^2 +γ_t^i ≫_t^i ^2 )
= _t - ^⋆^2 - 2/n∑_i=1^n _t^i - ^⋆ , γ_t^i ≫_t^i + 2/n∑_i=1^n γ_t^i ≫_t^i^2 + R_t .
Upper bound (valid for arbitrary stepsizes).
We now proceed in a similar fashion as outlined in the proof of <cit.>. Using the property (<ref>) of the FedSPS stepsize we get:
_t+1 - ^⋆^2
(<ref>)≤_t - ^⋆^2 - 2/n∑_i=1^n _t^i - ^⋆ , γ_t^i ≫_t^i + 2/nc ∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + R_t
(<ref>)≤_t - ^⋆^2 - 2/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + 2/nc ∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + R_t
= _t - ^⋆^2 - 2/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - ℓ_i^⋆ + ℓ_i^⋆ - F_i(^⋆,ξ_t^i) ]
+ 2/nc ∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + R_t
= _t - ^⋆^2 - (2 - 2/c) 1/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - ℓ_i^⋆ ] + 2/nc ∑_i=1^n γ_t^i [F_i(^⋆,ξ_t^i) - ℓ_i^⋆] + R_t
≤_t - ^⋆^2 - (2 - 2/c) 1/n
∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ^⋆_i] + γ_b 1/n∑_i=1^n [F_i(^⋆,ξ_t^i) - ℓ_i^⋆] + R_t ,
where the last inequality followed by the assumption c > 2.
We again use the assumption c>2 and simplify further:[The attentive reader will observe that any c > 1/2 would be sufficient to show convergence of the function suboptimality (note that <cit.> used c ≥1/2, but only showed convergence of the iterate distance to optimality).]
_t+1 - ^⋆^2 ≤_t - ^⋆^2 - 1/n
∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ^⋆_i] + 2 γ_b σ_t^2 + R_t ,
where we defined σ_t^2 := 1/n∑_i=1^n [F_i(^⋆,ξ_t^i) - ℓ_i^⋆].
After rearranging:
1/n∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ^⋆_i]
≤_t - ^⋆^2 - _t+1 - ^⋆^2 + 2 γ_b σ_t^2 + R_t .
We now plug in the bound on R_t calculated above in equation (<ref>):
1/n∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ^⋆_i]
≤_t - ^⋆^2 - _t+1 - ^⋆^2 + 2 γ_b σ_t^2 + 1/2n τ∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i [F_i(_j^i,ξ_j^i)- ℓ_i^⋆]
≤_t - ^⋆^2 - _t+1 - ^⋆^2 + 2 γ_b σ_t^2 + 1/2n τ∑_i=1^n ∑_j=(t-1)-(τ-1)^t-1γ_j^i [F_i(_j^i,ξ_j^i)- ℓ_i^⋆] .
We now sum this equation up from t=0 to T-1, and divide by T:
1/T∑_t=0^T-11/n∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ^⋆_i]
≤1/T∑_t=0^T-1( _t - ^⋆^2 - _t+1 - ^⋆^2 )
+ 2 γ_b 1/T∑_t=0^T-1σ_t^2 + 1/T∑_t=0^T-11/2n∑_i=1^n γ_t^i [F_i(_t^i,ξ_j^t) - ℓ^⋆_i] ,
by noting that each component in the last term appears at most τ times in the sum. We can now move the last term to the left:
1/T∑_t=0^T-11/2 n∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ^⋆_i]
≤1/T_0 - ^⋆^2 + 2 γ_b 1/T∑_t=0^T-1σ_t^2 .
It remains to note γ_t^i ≥α := min{1/2c L, γ_b }, therefore:
1/n∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆]
≥1/n∑_i=1^n α [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] .
To summarize:
1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(_t^i,ξ_t^i) - ℓ^⋆_i]
≤1/T α_0 - ^⋆^2 + 2 γ_b/α1/T∑_t=0^T-1σ_t^2 .
We can further simplify the left-hand side by lower bounding it as follows
1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(_t^i,ξ_t^i) - ℓ^⋆_i]
= 1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) + F_i(^⋆,ξ_t^i) - ℓ^⋆_i]
= 1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)] + 1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(^⋆,ξ_t^i) - ℓ^⋆_i]
= 1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)] + 1/2T∑_t=0^T-1σ_f^2
= 1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)] + σ_f^2/2
≥1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)]
From (<ref>) and (<ref>) we have
1/T∑_t=0^T-11/2 n∑_i=1^n [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)]
≤1/T α_0 - ^⋆^2 + 2 γ_b/α1/T∑_t=0^T-1σ_t^2 .
We now take full expectation to get the claimed result.
§.§.§ Proof of Theorem <ref> (a) (special case with small step sizes)
We now give an additional bound that tightens the result when the parameter γ_b is chosen to be a small value, smaller than 1/2cL. As a consequence of this assumption (see Equation (<ref>)), it holds γ_t^i ≡γ_b for all t and i ∈ [n].
The proof now follows a similar template as above, but we can now directly utilize the imposed upper bound on γ_b (v.s. the bound on c used previously).
Upper bound (valid for small stepsizes).
Using the definition of the virtual average: _t+1 = 1/n∑_i=1^n (_t^i - γ_t^i ≫_t^i ).
_t+1 - ^⋆^2
≤_t - ^⋆^2 - 2/n∑_i=1^n _t - ^⋆ , γ_t^i ≫_t^i + 1/n∑_i=1^n γ_t^i ≫_t^i^2
We now observe
-_t - ^⋆ , γ_t^i ≫_t^i = - γ_t^i _t - _t^i , ≫_t^i - γ_t^i _t^i - ^⋆ , ≫_t^i
(<ref>)≤ - γ_t^i [F_i(_t,ξ_t^i) - F_i(_t^i,ξ_t^i) - L/2_t - _t^i^2 ] - γ_t^i
_t^i - ^⋆ , ≫_t^i
(<ref>)≤ - γ_t^i [F_i(_t,ξ_t^i) - F_i(_t^i,ξ_t^i) - L/2_t - _t^i^2 + F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) ]
= - γ_t^i [ F_i(_t,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + γ_t^i L/2_t - _t^i^2 .
Therefore
_t+1 - ^⋆^2
(<ref>),(<ref>)≤_t - ^⋆^2 - 2/n∑_i=1^n γ_t^i[ F_i(_t,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + L/n∑_i=1^n γ_t^i _t - _t^i^2 + 1/n∑_i=1^n γ_t^i ≫_t^i^2 .
We now use the observation that γ_t^i = γ_b (by the assumption that γ_b is small—we could have made this simplification also earlier), and continue:
_t+1 - ^⋆^2
≤_t - ^⋆^2 - 2 γ_b /n∑_i=1^n[ F_i(_t,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + γ_b L R_t + γ_b^2/n∑_i=1^n ≫_t^i^2 .
We now use
≫_t^i^2 (<ref>)≤ 2 ∇ F_i(_t, ξ_t^i) - ∇ F_i(_t^i,ξ_t^i)^2 + 2 ∇ F_i(_t, ξ_t^i)^2
(<ref>)≤ 2 L^2 _t - _t^i^2 + 2 ∇ F_i(_t, ξ_t^i)^2
and the assumption γ_b ≤1/10L, to obtain
_t+1 - ^⋆^2
≤_t - ^⋆^2 - 2 γ_b /n∑_i=1^n[ F_i(_t,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + 2 γ_b L R_t + 2 γ_b^2/n∑_i=1^n ∇ F_i(_t, ξ_t^i)^2
≤_t - ^⋆^2 - 2 γ_b /n∑_i=1^n[ F_i(_t,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + 2 γ_b L R_t + 4 Lγ_b^2/n∑_i=1^n [F_i(_t,ξ_t^i) - ℓ_i^⋆] .
To simplify, let's take expectation over the randomness in iteration t:
_t+1 - ^⋆^2 ≤_t - ^⋆^2 - 2 γ_b [f(_t) - f^⋆] + 2 γ_b L [R_t] + 4 Lγ_b^2 [f(_t) - f^⋆ + σ_f^2] .
Because f(_t)-f^⋆≥ 0 and γ_b ≤1/10L, we can further simplify:
_t+1 - ^⋆^2
≤_t - ^⋆^2 - γ_b [f(_t) - f^⋆] + 2 γ_b L [R_t] + 4 Lγ_b^2 σ_f^2 .
After re-arranging and taking expectation:
γ_b [f(_t) - f^⋆]
≤_t - ^⋆^2 - _t+1 - ^⋆^2 + 2 γ_b L [R_t] + 4 Lγ_b^2 σ_f^2 .
We now plug in the bound on R_t from Equation (<ref>):
γ_b [f(_t) - f^⋆]
≤_t - ^⋆^2 - _t+1 - ^⋆^2 + 2 γ_b/3 τ∑_j=(t-1)-k(t)^t-1 [ f(_j) - f^⋆] +4( γ_b^2 + 32 τ^2 γ_b^3) L σ_f^2 ,
and after summing over t=0 to T-1, and dividing by T:
1/T∑_t=0^T-1γ_b [f(_t) - f^⋆]
≤1/T∑_t=0^T-1( _t - ^⋆^2 - _t+1 - ^⋆^2 ) + 2γ_b/3T∑_t=0^T-1 [ f(_t) - f^⋆] + 4( γ_b^2 + 32 τ^2 γ_b^3) L σ_f^2 ,
and consequently:
1/T∑_t=0^T-1 [f(_t) - f^⋆]
≤3/T γ_b _0 - ^⋆^2 + 12( γ_b + 32 τ^2 γ_b^2)L σ_f^2 .
§.§ Proof for strongly convex case of FedSPS
Here we outline the proof in the strongly convex case. As the proof is closely following the arguments from the convex case (just with the difference of applying μ-strong convexity whenever convexity was used), we just highlight the main differences for the readers.
§.§.§ Proof of Theorem <ref> (b) (general case valid for all stepsizes)
In the respective proof in the convex case, convexity was used in Equation (<ref>). If we use strong convexity instead, we obtain
_t+1 - ^⋆^2
(<ref>)≤_t - ^⋆^2 - 2/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) + μ/2_t^i -^⋆^2 ]
+ 2/nc ∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + R_t
≤_t - ^⋆^2 - 2/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) + μ/2_t - ^⋆^2 ] + 2/n∑_i=1^n γ_t^i μ_t - _t^i^2
+ 2/nc ∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + R_t
≤_t - ^⋆^2 - 2/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) + μ/2_t - ^⋆^2 ]
+ 2/nc ∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + 2 R_t
where the second inequality used _t - ^⋆^2 ≤ 2 _t - _t^i^2 + 2 _t^i - ^⋆^2 and the last inequality used γ_t^i μ≤1/2c≤1/2. By applying the lower bound α on the stepsize, we see that convergence is governed by a linearly decreasing term:
_t+1 - ^⋆^2
≤ (1-μα) _t - ^⋆^2 - 2/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + 2/nc ∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + 2 R_t ,
while the remaining terms are the same (up to a small change in the constant in front of R_t) as in the convex case. The increased constant can be handled by imposing a slightly stronger condition on c (here we used c≥ 4τ^2, compared to c≥2τ^2 in the convex case). The rest of the proof follows by the same steps, with one additional departure: when summing up the inequalities over the iterations t=0 to T-1 we need to use an appropriate weighting to benefit from the linear decrease. This technique is standard in the analysis (illustrated in <cit.>, carried out in the setting with a residual R_t in <cit.>, and with a residual R_t in a distributed setting in <cit.>).
§.§.§ Proof of Theorem <ref> (b) (special case with small step sizes)
Convexity was used in Equation (<ref>). If we use μ-strong convexity instead, we obtain:
-_t - ^⋆ , γ_t^i ≫_t^i (<ref>)≤ - γ_t^i [F_i(_t,ξ_t^i) - F_i(_t^i,ξ_t^i) - L/2_t - _t^i^2 + F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) + μ/2_t^i - ^⋆^2]
≤ - γ_t^i [ F_i(_t,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + γ_t^i L _t - _t^i^2 - γ_t^i μ/2_t - ^⋆^2 ,
where the last inequality used _t - ^⋆^2 ≤ 2 _t - _t^i^2 + 2 _t^i - ^⋆^2. The last term is essential to get a linear decrease in the first term. For illustration, Equation (<ref>) now changes to
_t+1 - ^⋆^2
≤(1- γ_b μ) _t - ^⋆^2 - γ_b [f(_t) - f^⋆] + 4 γ_b L [R_t] + 4L γ_b^2 σ_f^2
(the constant in front of [R_t] increased by a factor of two because of the estimate in (<ref>) and can be controlled by the slightly stronger condition on γ_b). From there, the proof is very standard and follows exactly the template outlined in e.g. <cit.> or <cit.>.
§ ADDITIONAL DETAILS FOR FEDSPS
§.§ Extending FedSPS to the mini-batch setting
Note that for the sake of simplicity in notation of the proposed method and algorithms, we used a single stochastic sample ξ_t^i. However, this can be trivially extended to the mini-batch setting. For a batch _t^i, the stochastic gradient would become 1/|_t^i|∑_ξ∈_t^i^∇ F_i(_t^i, ξ), and the stochastic loss would become 1/|_t^i|∑_ξ∈_t^i^F_i(_t^i, ξ). We use this mini-batch setting for our practical experiments.
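As an illustration of this convention, the following Python sketch computes the mini-batch FedSPS stepsize on a single client; the names `loss_fn`, `grad_fn` and `ell_star` are our own illustrative placeholders rather than the paper's released code, and the small constant added to the denominator is only a numerical safeguard.

```python
import numpy as np

def minibatch_fedsps_stepsize(params, batch, loss_fn, grad_fn,
                              ell_star=0.0, c=0.5, gamma_b=1.0):
    """Sketch: the mini-batch stochastic loss and gradient are plain averages
    over the batch, and the SPS ratio is clipped at gamma_b."""
    losses = np.array([loss_fn(params, xi) for xi in batch])
    grads = np.stack([grad_fn(params, xi) for xi in batch])
    batch_loss = losses.mean()
    batch_grad = grads.mean(axis=0)
    sps = (batch_loss - ell_star) / (c * float(batch_grad @ batch_grad) + 1e-12)
    return min(sps, gamma_b)
```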
§.§ Comparison of heterogeneity measures
§.§.§ Proof of Proposition <ref>
We look at the two cases separately as follows:
Function heterogeneity.
We recall the definitions ζ^2_⋆ = 1/n∑_i = 1^n ∇ f_i(^⋆) - ∇ f(^⋆)_2^2 and σ_f^2 = 1/n∑_i=1^n (f_i(^⋆) - ℓ_i^⋆). The global optimal point is denoted by ^⋆, and the local optima by _i^⋆. Note that ∇ f(^⋆) = 0, ∇ f_i(_i^⋆)=0, and we make use of these facts in our proof.
Assuming L-smoothness of each f_i(), we can apply Lemma <ref> to each of them with = ^⋆ to obtain
∇ f_i(^⋆) - ∇ f_i(^⋆_i)^2 ≤ 2L ( f_i(^⋆) - f_i^⋆) , ∀ i ∈ [n].
Using this result we can bound ζ^2_⋆ from above as follows (noting that ∇ f(^⋆)=0, ∇ f_i(_i^⋆)=0):
ζ^2_⋆ = 1/n∑_i = 1^n ∇ f_i(^⋆) - ∇ f(^⋆)_2^2
= 1/n∑_i = 1^n ∇ f_i(^⋆) - ∇ f_i(_i^⋆)^2
(<ref>)≤1/n∑_i = 1^n 2 L ( f_i(^⋆) - f_i^⋆)
(<ref>)≤1/n∑_i = 1^n 2 L ( f_i(^⋆) - ℓ_i^⋆)
= 2L σ_f^2
Gradient variance.
We recall the definitions σ_⋆^2 = 1/n∑_i=1^n_ξ^i∇ F_i(^⋆, ξ^i) - ∇ f_i(^⋆)^2 and σ_f^2 = 1/n∑_i=1^n (f_i(^⋆) - ℓ_i^⋆). The global optimal point is denoted by ^⋆, local optimum on client i by _i^⋆ and the local optima of the functions on worker i by ^⋆_ξ^i. Note that ∇ f_i(_i^⋆)=0, ∇ F_i(^⋆_ξ^i, ξ^i) = 0 and we make use of these facts in our proof.
Assuming L smoothness of each F_i(, ξ^i), we can apply Lemma <ref> to each function on any client i, with = ^⋆ to obtain
∇ F_i(^⋆, ξ^i) - ∇ F_i(^⋆_ξ^i, ξ^i)^2 ≤ 2L ( F_i(^⋆, ξ^i) - F_i(^⋆_ξ^i, ξ^i) ) , ∀ i ∈ [n] , ξ^i ∼_i .
Using this result we can bound σ^2_⋆ from above as follows
σ^2_⋆ = 1/n∑_i=1^n_ξ^i∇ F_i(^⋆, ξ^i) - ∇ f_i(^⋆)^2
(<ref>)≤1/n∑_i = 1^n_ξ^i∇ F_i(^⋆, ξ^i)^2
= 1/n∑_i = 1^n (_ξ^i[∇ F_i(^⋆, ξ^i) - ∇ F_i(^⋆_ξ^i, ξ^i)^2] )
(<ref>)≤2 L/n∑_i = 1^n ( _ξ^i[F_i(^⋆, ξ^i) - F_i^⋆(^⋆_ξ^i, ξ^i)] )
(<ref>)≤2 L/n∑_i = 1^n ( _ξ^i[F_i(^⋆, ξ^i) - ℓ_i^⋆] )
= 2 L/n∑_i = 1^n ( f_i(^⋆) - ℓ_i^⋆)
= 2L σ_f^2
§ CONVERGENCE ANALYSIS OF FEDDECSPS
In this section, we provide a proof for the convergence analysis of FedDecSPS (Theorem <ref>). The proof reuses some of our results from the proof of Theorem <ref> (a) before, changing the relevant parts to incorporate the properties of the decreasing FedDecSPS stepsize (adapted from DecSPS <cit.>, in Lemma <ref> and Lemma <ref> below).
§.§ FedDecSPS inequalities
Under the assumption that each F_i is L-smooth (Assumption <ref>), and (c_t)_t=0^∞ is any non-decreasing positive sequence of real numbers, the stepsizes of FedDecSPS satisfy for all rounds t ∈ [T-1]: γ_t^i ≤γ_t-1^i, and
min{1/2c_t L, c_0γ_b/c_t}≤γ_t^i ≤c_0 γ_b/c_t
This lemma is a simple adaptation of Lemma 5.1 from DecSPS <cit.> to our federated setting. Firstly, we can observe from the definition that γ_t^i ≤c_t-1/c_tγ^i_t-1≤γ^i_t-1. Next, we prove the bounds on γ_t^i using induction. For t=0, we can directly use Lemma <ref> as below
γ_b ≥γ_0^i = 1/c_0min{ F_i(_0^i, ξ^i_0) - ℓ_i^⋆/∇ F_i(_0^i, ξ_0^i)^2, c_0γ_b}≥min{1/2c_0L, γ_b}
For the induction step, we assume that the proposition holds true for γ_t^i
min{1/2c_t L, c_0γ_b/c_t}≤γ_t^i ≤c_0 γ_b/c_t
Then we have γ_t+1^i = 1/c_t+1min{ F_i(_t+1^i, ξ^i_t+1) - ℓ_i^⋆/∇ F_i(_t+1^i, ξ_t+1^i)^2, c_tγ_t^i}, where from the induction hypothesis we know c_tγ_t^i ∈ [min{1/2 L, c_0γ_b }, c_0 γ_b]. This bound now implies that the proposition also holds true for γ_t+1^i, since again by Lemma <ref> we have F_i(_t+1^i, ξ^i_t+1) - ℓ_i^⋆/∇ F_i(_t+1^i, ξ_t+1^i)^2≥1/2L. This concludes the induction step of the proof.
FedDecSPS stepsizes γ_t^i satisfy the following fundamental inequality
γ_t^i ≫_t^i^2 ≤γ_t^i/c_t [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] .
We observe that it holds from the definition of FedDecSPS stepsizes
γ_t^i ≫_t^i^2 = γ_t^i ∇ F_i(_t^i,ξ_t^i)^2 = γ_t^i ·1/c_tmin{F(_t^i,ξ_t^i) - ℓ_i^⋆/∇ F(_t^i,ξ_t^i)^2, c_t-1γ_t-1^i }∇ F_i(_t^i,ξ_t^i)^2
≤γ_t^i/c_t [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] .
§.§ Difference Lemma
The variance of the iterates between the clients R_t := 1/n∑_i=1^n _t - _t^i^2, is bounded as
R_t ≤τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i/c_j [F_i(_j^i,ξ_j^i)- F_i(^⋆,ξ_j^i)] + c_0 γ_b τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-11/c_j^2[F_i(^⋆,ξ_j^i) - ℓ_i^⋆] ,
where t-1-k(t) denotes the index of the last communication round (k ≤τ - 1) before iteration t.
We can refine (<ref>) from Lemma <ref> for the case of FedDecSPS stepsizes as follows:
R_t (<ref>)≤τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i ≫_j^i^2
(<ref>)≤τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i/c_j [F_i(_j^i,ξ_j^i)- ℓ_i^⋆]
= τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i/c_j [F_i(_j^i,ξ_j^i)- F_i(^⋆,ξ_j^i) + F_i(^⋆,ξ_t^i) - ℓ_i^⋆]
= τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i/c_j [F_i(_j^i,ξ_j^i)- F_i(^⋆,ξ_j^i)] + τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i/c_j[F_i(^⋆,ξ_j^i) - ℓ_i^⋆]
(<ref>)≤τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i/c_j [F_i(_j^i,ξ_j^i)- F_i(^⋆,ξ_j^i)]
+ c_0 γ_b τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-11/c_j^2[F_i(^⋆,ξ_j^i) - ℓ_i^⋆]
§.§ Proof of Theorem <ref>
Distance to optimality.
We have here exactly the same bound as in (<ref>), since there is no c_t involved in this step
_t+1 - ^⋆^2
≤_t - ^⋆^2 - 2/n∑_i=1^n _t^i - ^⋆ , γ_t^i ≫_t^i + 2/n∑_i=1^n γ_t^i ≫_t^i^2 + R_t .
Upper bound (valid for decreasing stepsizes).
We use the bound on FedDecSPS stepsizes (<ref>) to get:
_t+1 - ^⋆^2
(<ref>)≤_t - ^⋆^2 - 2/n∑_i=1^n _t^i - ^⋆ , γ_t^i ≫_t^i + 2/nc_t∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + R_t
(<ref>)≤_t - ^⋆^2 - 2/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + 2/nc_t∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - ℓ_i^⋆] + R_t
= _t - ^⋆^2 - 2/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) ]
+ 2/nc_t∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) + F_i(^⋆,ξ_t^i) - ℓ_i^⋆] + R_t
= _t - ^⋆^2 - (2 - 2/c_t) 1/n∑_i=1^n γ_t^i [ F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i) ] + 2/nc_t∑_i=1^n γ_t^i [F_i(^⋆,ξ_t^i) - ℓ_i^⋆] + R_t
≤_t - ^⋆^2 - 1/n
∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)] + 2γ_b c_0/c_t^2σ_t^2 + R_t ,
where we used the assumption c_t>2 and the upper bound (<ref>) on γ_t^i from Lemma <ref>, as well as the definition σ_t^2 := 1/n∑_i=1^n [F_i(^⋆,ξ_t^i) - ℓ_i^⋆] from before to obtain the last inequality. After re-arranging we get:
1/n∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)]
≤_t - ^⋆^2 - _t+1 - ^⋆^2 + 2 γ_b c_0/c_t^2σ_t^2 + R_t .
We now plug in the bound on R_t calculated above in equation (<ref>):
1/n∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)]
≤_t - ^⋆^2 - _t+1 - ^⋆^2 + 2 γ_b c_0/c_t^2σ_t^2 +
τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1γ_j^i/c_j [F_i(_j^i,ξ_j^i)- F_i(^⋆,ξ_j^i)] +
c_0 γ_b τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-11/c_j^2[F_i(^⋆,ξ_j^i) - ℓ_i^⋆]
We now sum this equation up from t=0 to T-1 and divide by T, observing that the last two terms can occur at most τ times:
1/T∑_t=0^T-11/n∑_i=1^n γ_t^i [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)] ≤1/T∑_t=0^T-1(_t - ^⋆^2 - _t+1 - ^⋆^2) + 2 γ_b c_0 1/T∑_t=0^T-1σ_t^2/c_t^2 +
τ^2/Tn∑_t=0^T-1∑_i=1^nγ_t^i/c_t [F_i(_t^i,ξ_t^i)- F_i(^⋆,ξ_t^i)] +
c_0 γ_b τ^2/Tn∑_t=0^T-1∑_i=1^n 1/c_t^2[F_i(^⋆,ξ_t^i) - ℓ_i^⋆]
a≤1/T∑_t=0^T-1(_t - ^⋆^2 - _t+1 - ^⋆^2) + 2 γ_b c_0 1/T∑_t=0^T-1σ_t^2/c_t^2 +
1/2Tn∑_t=0^T-1∑_i=1^nγ_t^i [F_i(_t^i,ξ_t^i)- F_i(^⋆,ξ_t^i)] +
τ^2 c_0 γ_b 1/T∑_t=0^T-1σ_t^2/c_t^2 ,
where for (a) we used c_t ≥ 2τ^2 in the second-to-last term, and the definition of σ_t^2 in the last term. Now, we can rearrange the terms to obtain
1/2Tn∑_t=0^T-1∑_i=1^nγ_t^i [F_i(_t^i,ξ_t^i)- F_i(^⋆,ξ_t^i)] ≤1/T∑_t=0^T-1(_t - ^⋆^2 - _t+1 - ^⋆^2) + c_0γ_b(2 + τ^2) 1/T∑_t=0^T-1σ_t^2/c_t^2
Noting that γ_t^i ≥min{1/2c_t L, c_0 γ_b/c_t} = c_0α_0/c_t, where α_0 := min{1/2 c_0L, γ_b }, and that c_t ≤ c_T-1 we can lower bound the left hand side as:
1/2Tn∑_t=0^T-1∑_i=1^nγ_t^i [F_i(_t^i,ξ_t^i)- F_i(^⋆,ξ_t^i)]
≥1/2Tn∑_t=0^T-1∑_i=1^n c_0α_0/c_t [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)]
≥1/2Tnc_0 α_0/c_T-1∑_t=0^T-1∑_i=1^n [F_i(_t^i,ξ_t^i) - F_i(^⋆,ξ_t^i)] .
Now plugging (<ref>) into (<ref>) and performing a telescoping sum for the first term we obtain
1/2Tn∑_t=0^T-1∑_i=1^n[F_i(_t^i,ξ_t^i)- F_i(^⋆,ξ_t^i)] ≤c_T-1/T c_0 α_0∑_t=0^T-1(_t - ^⋆^2 - _t+1 - ^⋆^2)
+ c_T-1γ_b(2 + τ^2)/α_01/T∑_t=0^T-1σ_t^2/c_t^2
≤c_T-1/T c_0 α_0_0 - ^⋆^2 + c_T-1γ_b(2 + τ^2)/α_01/T∑_t=0^T-1σ_t^2/c_t^2 .
We now take full expectation to get the claimed result.
§ FEDSPS-GLOBAL
For the sake of comparison with existing locally adaptive FL algorithms such as Local-AMSGrad <cit.> and Local-AdaAlter <cit.> (both of which perform some form of stepsize aggregation on the server), we introduce a synchronous stepsize version of our algorithm called FedSPS-Global.
§.§ Algorithm
FedSPS-Global.
In FedSPS-Global (Algorithm <ref>), the stepsize is constant across all clients, and also across the local steps of each client, within a particular round t. There can be several choices for the aggregation formula for the “global” stepsize—we use a simple average of the local SPS stepsizes, as given below
γ =
1/n∑_i = 1^nmin{F_i(_t^i, ξ_t^i) - ℓ_i^⋆/c ∇ F(_t^i, ξ_t^i)^2, γ_b } .
We provide a theoretical justification of this choice of aggregation formula, and compare it with other possible choices, in Appendix <ref>. As is apparent from (<ref>), we need to use stale quantities from the last round to calculate the stepsize for the current round. This is justified by the empirical observation that stepsizes do not vary much between consecutive rounds, a reasonable assumption that has also been made in previous work on adaptive distributed optimization <cit.>. Due to the stale stepsize update in this method, we need to provide a starting stepsize γ_0 to the algorithm, which is used by all clients until the first communication round. FedSPS-Global should be thought of as a heuristic method that uses cheap stepsize computations and offers empirical performance similar to FedSPS.
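A minimal Python sketch of this aggregation step is given below; it assumes the server already received, from the previous round, each client's stochastic loss gap F_i - ℓ_i^⋆ and squared gradient norm, which is an assumption of this illustration rather than a description of the actual implementation.

```python
import numpy as np

def fedsps_global_stepsize(loss_gaps, grad_sq_norms, c=0.5, gamma_b=1.0):
    """Sketch of the synchronous "global" stepsize: a simple average of the
    clipped local SPS stepsizes, computed from stale (last-round) quantities."""
    local = [min(gap / (c * gsq + 1e-12), gamma_b)
             for gap, gsq in zip(loss_gaps, grad_sq_norms)]
    return float(np.mean(local))
```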
§.§ Choice of aggregation formula for FedSPS-Global
We can have various choices for aggregating the local stepsizes to calculate the global stepsize. The first obvious choice would be to perform a simple average of the local stepsizes across all clients. This is given by the aggregation formula γ^(a) = 1/n∑_i=1^nmin{ f_i(_t^i) - ℓ_i^⋆/c || ≫_t^i ||^2, γ_b}, which is used in our proposed method (Algorithm <ref>). Two other plausible choices of aggregation formula could be γ^(b) = min{1/n∑_i=1^n [f_i(_t^i) - ℓ_i^⋆] /c 1/n∑_i=1^n || ≫_t^i ||^2, γ_b } and γ^(c) = min{1/n∑_i=1^n [f_i(_t^i) - ℓ_i^⋆] /c 1/n∑_i=1^n≫_t^i^2, γ_b }. Among these choices, γ^(c) represents the “correct” SPS stepsize if we follow the original definition and replace batches with clients in the distributed setup. In the following Proposition we show theoretically that γ^(b) ≤min{γ^(a), γ^(c)}. Experimentally we find the simple averaging of local stepsizes, i.e., γ^(a), to work best, followed closely by γ^(b), while the “correct” SPS stepsize γ^(c) explodes in practice. Therefore, we choose γ^(a) as our aggregation formula, which also has some added advantages for the associated proofs. We believe that a reason behind the good performance of FedSPS-Global is the low inter-client and intra-client variance of the stepsizes explained in Section <ref>, which is also why simple averaging of the local stepsizes works.
Using the definitions of the three aggregation formulae for synchronous SPS stepsizes γ^(a), γ^(b), and γ^(c), as defined above in Section <ref>, we have the following inequalities
* γ^(c)≥γ^(b).
* γ^(a)≥γ^(b).
combining which we get γ^(b) ≤min{γ^(a), γ^(c)}.
We look at the two cases separately as follows:
* From Lemma <ref> it is easy to observe that
∑_i = 1^n ≫_t^i^2 (<ref>)≤ n ∑_i = 1^n ≫_t^i^2
which can be rearranged as
∑_i=1^n [f_i(_t^i) - ℓ_i^⋆]/1/n∑_i=1^n≫_t^i^2 ≥∑_i=1^n [f_i(_t^i) - ℓ_i^⋆]/1/n∑_i = 1^n ≫_t^i^2
The required statement follows trivially from the above inequality.
* From Chebyshev's inequality we have
1/n∑_i=1^nf_i(_t^i) - ℓ_i^⋆/≫_t^i^2≥∑_i=1^n [f_i(_t^i) - ℓ_i^⋆]/n·1/n∑_i=1^n1/≫_t^i^2
From AM-HM inequality we obtain
∑_i=1^n1/≫_t^i^2/n≥n/∑_i=1^n≫_t^i^2
Plugging in this into the above we get
1/n∑_i=1^nf_i(_t^i) - ℓ_i^⋆/≫_t^i^2≥1/n∑_i=1^n [f_i(_t^i) - ℓ_i^⋆]/1/n∑_i=1^n≫_t^i^2
The required statement follows trivially from the above inequality.
§.§ FedSPS-Global experimental details
The implementation is done according to Algorithm <ref>[Note that for any experiment in the Appendix that involves FedSPS-Global, we refer to FedSPS (from the main paper) as FedSPS-Local in the legends of plots, for the sake of clarity between the two approaches.]. All remarks we made about the choice of scaling parameter c, upper bound on stepsize γ_b (including the smoothing technique for non-convex experiments, explained later in Section <ref>), and ℓ_i^⋆ in the previous section on FedSPS also remain the same here. As mentioned before, this method needs an extra hyperparameter γ_0, which is the stepsize used by all clients until the first communication round. Empirically we found that setting γ_0 = γ_0^0, i.e., the local SPS stepsize for client 0 at iteration 0, works quite well in practice. This is explained by the fact that the SPS stepsizes have low inter-client and intra-client variance (Figure <ref>). Experimentally we find that FedSPS-Global converges almost identically to the locally adaptive FedSPS (Figure <ref> and Figure <ref>).
§ EXTENSION OF FEDSPS TO NON-CONVEX SETTING
Although not the main focus of this paper, we outline here some theoretical results of extending FedSPS to the non-convex setting. Our non-convex results have the limitation of requiring small stepsizes, but we do not need the additional strong assumption of bounded stochastic gradients used in previous work on adaptive federated optimization. We also provide some positive empirical results for the non-convex case in Section <ref>.
§.§ Convergence analysis for non-convex functions
We now discuss the convergence of FedSPS on non-convex functions. Unfortunately, it is required to impose additional assumptions in this section, mainly because the Polyak stepsize was designed for convex objectives.
[Bounded variance]
Each function f_i ^d →, i ∈ [n] has stochastic gradient with bounded local variance, that is, for all ∈^d,
_ξ∼_i∇ F_i(, ξ) - ∇ f_i()_2^2 ≤σ^2 .
Moreover, global variance of the loss function on each client is bounded, that is, for all ∈^d,
1/n∑_i = 1^n ∇ f_i() - ∇ f()_2^2 ≤ζ^2 .
This assumption is frequently used in the analysis of FedAvg <cit.>, as well as adaptive federated methods <cit.>. The local variance denotes randomness in stochastic gradients of clients, while the global variance represents heterogeneity between clients. Note that ζ = 0 corresponds to the i.i.d. setting. We further note that Equation (<ref>) corresponds to the variance Assumption in <cit.> (their Equation (8) with ρ=1, δ=ζ^2) that they also required for the analysis in the non-convex setting.
The following theorem applies to FedSPS and FedAvg with small stepsizes:
Under Assumptions <ref> and <ref>, if γ_b < min{1/2cL, 1/25Lτ} then after T steps, the iterates of FedSPS and FedAvg satisfy
Φ_T = ( F_0/γ_b T + γ_b L σ^2/n + γ_b^2 L^2 τ (σ^2+τζ^2) )
where Φ_T :=min_0 ≤ t ≤ T-1∇ f(_t)^2 and F_0:=f(_0)- f^⋆.
The proof of this theorem again relies on the assumption that the stepsize is very small, but otherwise follows the template of earlier work <cit.> and precisely recovers their result. For the sake of completeness we still add a proof in the Appendix, but do not claim novelty here. The theorem states that when using a constant stepsize, the algorithms reach a neighborhood of the solution, where the neighbourhood is dependent on the local variance σ and global variance ζ, and for the i.i.d. case of ζ = 0 the neighbourhood is smaller and less dependent on number of local steps τ. By choosing an appropriately small γ_b, any arbitrary target accuracy ϵ can be reached.
A limitation of our result is that it only applies to the small stepsize regime, where the adaptivity is governed by the stepsize bound γ_b. However, when comparing to other theoretical work on adaptive federated learning algorithms we observe that related work has similar (or stronger) limitations, as e.g. both <cit.> (FedAMS) and <cit.> (FedAdam) require a uniform bound on the stochastic gradients (which we do not need) and also require effective stepsizes smaller than 1/τ L, similar to our case.
§.§.§ Proof of Theorem <ref>
Small Stepsize.
We start with the observation that γ_t^i = γ_b must hold for all iterations t and clients i. This is due to our strong assumption on the small stepsizes.
Bound on R_t.
Similarly to the convex case, we need a bound on [R_t]. We note that when deriving the bound in Equation (<ref>) we did not use any convexity assumption, so this bound also holds for arbitrary smooth functions:
R_t≤32 γ_b^2 τ/n∑_i=1^n ∑_j=(t-1)-k(t)^t-1F_i(_j, ξ_j^i)^2 .
and consequently (after taking expectation) combined with Assumption <ref>:
R_t ≤ 32 γ_b^2 τ∑_j=(t-1)-k(t)^t-1 [∇ f(_j)^2 + σ^2/τ + ζ^2]
≤1/16L^2 τ∑_j=(t-1)-k(t)^t-1∇ f(_j)^2 + 32 γ_b^2 τ^2 [σ^2/τ + ζ^2] ,
where the last estimate follows from γ_b ≤1/25τ L.
Decrease.
We study the (virtual) average _t. By smoothness and definition of _t+1 we have:
f(_t+1)
≤ f(_t) - γ_b/n∑_i=1^n ∇ f(_t), ≫_t^i + γ_b^2 L/21/n∑_i=1^n ≫_t^i^2 .
The scalar product can be estimated as follows:
- 1/n∑_i=1^n ∇ f(_t), ≫_t^i =
- 1/n∑_i=1^n ( ∇ f(_t), ∇ F_i(_t,ξ_t^i) + ∇ f(_t), ∇ F_i(_t^i,ξ_t^i) - ∇ F_i(_t,ξ_t^i))
≤ - 1/n∑_i=1^n ∇ f(_t), ∇ F_i(_t,ξ_t^i) + 1/2∇ f(_t)^2 + 1/2n∑_i=1^n ∇ F_i(_t^i,ξ_t^i) - ∇ F_i(_t,ξ_t^i)^2
≤ - 1/n∑_i=1^n ∇ f(_t), ∇ F_i(_t,ξ_t^i) + 1/2∇ f(_t)^2 + 1/2 L^2 R_t .
and after taking expectation:
- [1/n∑_i=1^n ∇ f(_t), ≫_t^i]
≤ -1/2∇ f(_t)^2 + 1/2L^2 R_t
We now also use smoothness to estimate the last term in (<ref>):
1/n∑_i=1^n≫_t^i^2
= 1/n∑_i=1^n ∇ F_i(_t^i,ξ_t^i) - ∇ f_i(_t^i)^2 + 1/n∑_i=1^n ∇ f_i(_t^i)^2
≤σ^2/n + 2 ∇ f(_t)^2 + 2 1/n∑_i=1^n ∇ f_i(_t^i) - ∇ f_i(_t) ^2
≤σ^2/n + 2 ∇ f(_t)^2 + 2 L^2 R_t .
By combining these estimates, and using γ_b ≤1/10L, we arrive at:
f(_t+1)
≤ f(_t) - γ_b/2∇ f(_t)^2 + γ_b L^2 R_t+ γ_b^2 L/2(2 ∇ f(_t)^2 + σ^2/n + 2L^2 R_t)
≤ f(_t) - γ_b/4∇ f(_t)^2 + 2γ_b L^2 R_t + γ_b^2 L σ^2/2n .
We now re-arrange and plug in the estimate of R_t from (<ref>):
γ_b/4∇ f(_t)^2
≤ f(_t) - f(_t+1) +γ_b^2 L σ^2/2n + γ_b/8 τ∑_j=(t-1)-k(t)^t-1∇ f(_j)^2 + 64 γ_b^3 τ^2 L^2 [σ^2/τ + ζ^2]
and after summing from t=0 to T-1 and dividing by T:
γ_b/4T∑_t=0^T-1∇ f(_t)^2
≤1/T∑_t=0^T-1( f(_t) - f(_t+1) ) + γ_b^2 L σ^2/2n + 64 γ_b^3 τ^2 L^2 [σ^2/τ + ζ^2]
+ γ_b/8T∑_t=0^T-1∇ f(_t)^2
implying that
1/8T∑_t=0^T-1∇ f(_t)^2 ≤1/T γ_b (f(_0)- f^⋆) + γ_b L σ^2/2n + 64 γ_b^2 τ^2 L^2 (σ^2/τ + ζ^2) .
§ ADDITIONAL EXPERIMENTS
§.§ Visualization of FedSPS stepsize statistics
Intuitively, it may seem at first glance that using fully locally adaptive stepsizes can lead to poor convergence due to different stepsizes on each client. However, as we have already seen in our theory, as well as verified in experiments, this is not the case—our fully locally adaptive FedSPS indeed converges. In this remark, we try to shed further light on the convergence of FedSPS by looking at the stepsize statistics across clients. Figure <ref> shows the “intra-client” and “inter-client” stepsize plots for i.i.d. and non-i.i.d. experiments. “Intra-client” visualizes the variance in stepsizes for a particular client across different local steps. “Inter-client” visualizes the variance in stepsizes across all clients. We can notice that both these variances are small, explaining the good convergence behaviour of FedSPS.
§.§ Effect of varying c on convergence
We provide additional experiments on the effect of varying the FedSPS scale parameter c on convergence, for different numbers of local steps τ∈{10, 20, 50, 100} (convex case: logistic regression on the i.i.d. MNIST dataset without client sampling) in Figure <ref>. Similarly as before, we observe that smaller c results in better convergence, and any c ∈{0.01, 0.1, 0.5} works well. Moreover, the effect of varying c on the convergence is the same for all τ, clarifying that in practice there is no square-law relation between c and τ, unlike what is suggested by the theory.
§.§ Additional convex experiments
Additional convex experiments for the rest of the LIBSVM datasets (i.i.d.) have been shown in Figure <ref>. The first column represents the training loss and the second column the test accuracy. The third column represents the FedSPS stepsize statistics as described before in Section <ref>. We can make similar observations as before in the main paper—the proposed FedSPS methods (without tuning) converge as well as or slightly better than FedAvg and FedAdam with the best tuned hyperparameters. The stepsize plots again highlight the low inter-client and intra-client stepsize variance.
§.§ Non-convex experiments
For non-convex experiments, we look at multi-class classification of the MNIST dataset using the LeNet architecture <cit.> in the i.i.d. as well as the non-i.i.d. setting (with client sampling) in Figure <ref>. Across all these experiments, we find that FedSPS and FedSPS-Global converge almost identically (again highlighting the reasoning about low stepsize variance), and their convergence is also very close to that of FedAvg with the best tuned local learning rate. FedSPS and FedSPS-Global converge better than FedAMS in the non-convex MNIST case (both i.i.d. and non-i.i.d.). Moreover, it is also remarkable that the FedSPS methods have much better generalization performance than FedAMS.
Similarly as before, we fix c=0.5 for all non-convex experiments. For the upper bound on stepsizes, we use the smoothing technique for the rest of the experiments, as suggested by <cit.>, to avoid sudden fluctuations in the stepsize. For a client i ∈ [n] and iteration t, the adaptive iteration-dependent upper bound is given by γ_b, t^i = 2^||/m_iγ_b, t-1^i, where || is the batch size, m_i is the number of data examples on that client, and we fix γ_b,0 = 1. Since all our experimental settings involve non-negative losses, we can use ℓ_i^⋆ = 0 as before.
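A one-line sketch of this smoothing rule (our own illustrative code, not the released implementation) reads:

```python
def smoothed_upper_bound(prev_gamma_b, batch_size, m_i):
    """gamma_{b,t}^i = 2^(|B|/m_i) * gamma_{b,t-1}^i, with gamma_{b,0}^i = 1."""
    return (2.0 ** (batch_size / m_i)) * prev_gamma_b
```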
§.§ Hyperparameters tuning for FedAvg and FedAMS
Here we provide more details on hyperparameters for each dataset and model needed by FedAvg and FedAMS. We perform a grid search for the client learning rate η_l ∈{0.0001, 0.001, 0.01, 0.1, 1.0}, and server learning rate η∈{0.001, 0.01, 0.1, 1.0}. Moreover, for FedAMS we also choose β_1 = 0.9, β_2 = 0.99, and the max stabilization factor ϵ∈{10^-8, 10^-4, 10^-3, 10^-2, 10^-1, 10^0}. The grid search leads to the following set of optimal hyperparameters presented in Table <ref>.
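For concreteness, a minimal sketch of this grid search could look as follows; `train_and_evaluate` is a hypothetical routine standing in for one full federated training run returning a validation score, and is not part of the paper's code.

```python
from itertools import product

CLIENT_LRS = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
SERVER_LRS = [1e-3, 1e-2, 1e-1, 1.0]
EPSILONS   = [1e-8, 1e-4, 1e-3, 1e-2, 1e-1, 1.0]  # max stabilization factor, FedAMS only

def grid_search(train_and_evaluate, fedams=False):
    """Sketch: try every configuration on the grid and keep the best one.
    The third entry of each configuration is None when tuning FedAvg."""
    configs = product(CLIENT_LRS, SERVER_LRS, EPSILONS if fedams else [None])
    return max(configs, key=lambda cfg: train_and_evaluate(*cfg))
```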
|
http://arxiv.org/abs/2307.04661v1 | 20230710155909 | On the power of graph neural networks and the role of the activation function | [
"Sammy Khalife",
"Amitabh Basu"
] | cs.LG | [
"cs.LG"
] |
On the power of graph neural networks]On the power of graph neural networks and the role of the activation function
Johns Hopkins University, Department of Applied Mathematics and Statistics
[email protected]
[email protected]
[2020]68T07, 68Q19, 05D10, 11J85
[
Amitabh Basu
August 12, 2023
===================
In this article we present new results about the expressivity of Graph Neural Networks (GNNs). We prove that for any GNN with piecewise polynomial activations, whose architecture size does not grow with the graph input sizes, there exists a pair of non-isomorphic rooted trees of depth two such that the GNN cannot distinguish their root vertex up to an arbitrary number of iterations. The proof relies on tools from the algebra of symmetric polynomials. In contrast, it was already known that unbounded GNNs (those whose size is allowed to change with the graph sizes) with piecewise polynomial activations can distinguish these vertices in only two iterations. Our results imply a strict separation between bounded and unbounded size GNNs, answering an open question formulated by <cit.>.
We next prove that if one allows activations that are not piecewise polynomial, then in two iterations a single neuron perceptron can distinguish the root vertices of any pair of non-isomorphic trees of depth two (our results hold for activations like the sigmoid, hyperbolic tan and others). This shows how the power of graph neural networks can change drastically if one changes the activation function of the neural networks. The proof of this result utilizes the Lindemann-Weierstrass theorem from transcendental number theory.
§ INTRODUCTION
Graph Neural Networks (GNNs) form a popular framework for a variety of computational tasks involving network data, with applications ranging from analysis of social networks, structure and functionality of molecules in chemistry and biological applications, computational linguistics, simulations of physical systems, techniques to enhance optimization algorithms, to name a few. The interested reader can look at <cit.>, which is a small sample of a large and actively growing body of work.
Given the rise in importance of inference and learning problems involving graphs and the use of GNNs for these tasks, significant progress has been made in recent years to understand their computational capabilities. See the excellent recent survey <cit.> for an exposition of some aspects of this research. One direction of investigation is on their so-called separation power which is the ability of GNNs to distinguish graphs with different structures. In this context, it becomes natural to compare their separation power to other standard computation models on graphs, such as different variants of the Wesfeiler-Lehman algorithm <cit.>, and the very closely related color refinement algorithm <cit.>. These investigations are naturally connected with descriptive complexity theory, especially to characterizations in terms of certain logics; see <cit.> for excellent introductions to these different connections. A closely related line of work is to investigate how well general functions on the space of graphs can be approximated using functions represented by GNNs; see <cit.> for a sample of work along these lines. Our work in this paper focuses on improving our understanding of the separation power of GNNs.
At a high level, the computational models of GNNs, Wesfeiler-Lehman/color refinement type algorithms and certain logics in descriptive complexity are intimately connected because they all fall under the paradigm of trying to discern something about the global structure of a graph from local neighborhood computations. Informally, these algorithms iteratively maintain a state (a.k.a. “color”) for each vertex of the graph and in every iteration, the state of a vertex is updated by performing some predetermined set of operations on the set of current states of its neighbors (including itself). The different kinds of allowed states and allowed operations determine the computational paradigm. For instance, in GNNs, the states are typically vectors in some Euclidean space and the operations for updating the state are functions that can be represented by deep neural networks. As another example, in the color refinement algorithm, the states are multisets of some predetermined finite class of labels and the operations are set operations on these multisets. A natural question then arises: Given two of these models, which one is more powerful, or equivalently, can one of the models always simulate the other? Several mathematically precise answers to such questions have already been obtained. For instance, it has been proved independently by <cit.> and <cit.> that the color refinement algorithm precisely captures the expressiveness of GNNs in the sense that there is a GNN distinguishing two nodes of a graph (by assigning them different state vectors) if and only if color refinement assigns different multisets to these nodes. Such a characterization holds for unbounded GNNs, i.e. GNNs for which the underlying neural network sizes can grow with the size of the input graphs. This implies a characterisation of the distinguishability of nodes by GNNs as being equivalent to what is known as Graded Modal Counting Logic (GC2); see <cit.> for some recent, quantitatively precise results in this direction.
Reviewing these equivalences in a recent survey <cit.>, Grohe emphasizes the fact that the above-mentioned equivalence between the separation power of GNNs and the color refinement algorithm has only been established for unbounded GNNs whose neural network sizes are allowed to grow as a function of the size of the input graphs. Question 1 on his list of important open questions in this topic asks what happens if one considers bounded GNNs, i.e., the size of the neural networks is fixed a priori and cannot change as a function of the size of the input graphs. Do bounded GNNs have the same separation power as unbounded GNNs and color refinement? We answer this question in the negative, by constructing, for any given bounded GNN, two non-isomorphic rooted trees of depth two such that their root nodes cannot be distinguished by the GNN. Interestingly, only the sizes of the trees depend on the GNN, but their depth does not. This result is stated formally in Theorem <ref> and it holds for bounded GNNs with piecewise polynomial activations (this includes, e.g., ReLU activations). We prove a second result that shows how the activation function dramatically impacts the expressivity of bounded-size GNNs: if one allows activation functions that are not piecewise polynomial, the root nodes of any two non-isomorphic rooted trees of depth two can be distinguished by a single neuron perceptron. This result is formally stated in Theorem <ref>.
The rest of this article is organized as follows. In Section <ref> we present the main definitions and formal statement of our results. In Section <ref> we give an overview of the proofs. Sections <ref> and <ref> fill in the technical details.
§ FORMAL STATEMENT OF RESULTS
We assume graphs to be finite, undirected, simple, and vertex-labelled:
a graph is a tuple G = (V(G),E(G),P_1(G),...,P_ℓ(G)) consisting of a finite vertex set V(G), a binary edge relation E(G) ⊂ V(G)^2 that is symmetric and irreflexive, and unary relations P_1(G),⋯ ,P_ℓ(G) ⊂ V(G) representing ℓ > 0 vertex labels. In the following, the number ℓ of labels, which we will also call colors, is supposed to be fixed and does not grow with the size of the input graphs. When there is no ambiguity about which graph G is being considered, N(v) refers to the set of neighbors of v in G not including v. | G | will denote the number of vertices of G. We use simple curly brackets for a set X={x ∈ X} and double curly brackets for a multiset Y={{y ∈ Y }}. For a set X, | X| is the cardinal of X. When m is a positive integer, 𝔖_m is the set of permutations of {1, ⋯, m}.
Let m be a positive integer. A function f: ℝ^m→ℝ is piecewise polynomial iff there exist multivariate polynomials P_1, ⋯, P_r ∈ℝ[X_1, ⋯, X_m] such that for any x ∈ℝ^m, there exists i∈{1, ⋯, r} such that f(x)=P_i(x). The degree of a piecewise polynomial function f is 𝖽𝖾𝗀(f):=max{𝖽𝖾𝗀(P_1), ⋯, 𝖽𝖾𝗀(P_r)}. The number of polynomial pieces of a piecewise polynomial f is the smallest r such that f can be represented as above.
For any positive integer m, a polynomial P ∈ℝ[X_1, ⋯, X_m] is said to be symmetric if for any permutation π∈𝔖_m of {1, ⋯, m} and any v_1, …, v_m ∈ℝ, P(v_π(1), ⋯, v_π(m)) = P(v_1, ⋯, v_m) . For any k ∈{1, ⋯, m}, the elementary symmetric polynomial s_k is given by s_k(X_1, ⋯,X_m):=∑_1≤ j_1 < j_2 < ⋯ < j_k ≤ m X_j_1… X_j_k.
Given a set X, an embedding ξ is a function that takes as input a graph G and a vertex v∈ V(G), and returns an element ξ(G,v)∈ X for each vertex v of the graph. An embedding is equivariant if and only if for any pair of isomorphic graphs G, G', and any isomorphism f from G to G', it holds that
ξ(G,v) = ξ(G',f(v)). We say that ξ refines ξ' if and only if for any graph G and any v, v' ∈ V(G), ξ(G, v) = ξ(G, v') ⟹ξ'(G, v) = ξ'(G,v').
Given a graph G, and v ∈ V(G), let (G, v) ↦𝖼𝗈𝗅(𝖦,𝗏) be the function which returns the color of the node v. The color refinement refers to a procedure that returns a sequence of equivariant embeddings cr^t, computed recursively as follows:
- cr^0(G,v) = 𝖼𝗈𝗅(G,v)
- For t≥ 0, 𝖼𝗋^t+1(G,v) := ( 𝖼𝗋^t(G,v), {{𝖼𝗋^t(G,w): w ∈ N(v) }})
In each round, the algorithm computes a coloring that is finer than the one computed in the previous round, that is, 𝖼𝗋^t+1 refines 𝖼𝗋^t.
For some t ≤ n := | G|, this procedure stabilises: the coloring does not become strictly finer anymore.
We refer the reader to the seminal work <cit.> for comments about the history and connections between the color refinement and Weisfeiler-Lehman algorithms.
A GNN is a recursive embedding of vertices of a labelled graph by relying on the underlying adjacency information and node features.
Each vertex v is attributed an indicator vector ξ^0(v) of size ℓ, encoding the color of the node v: the colors being indexed by the palette {1, ⋯, ℓ}, ξ^0(v)=e_i (the i-th canonical vector) if the color of the vertex v is i. The GNN is fully characterized by:
∘ A combination function 𝖼𝗈𝗆𝖻: ℝ^2ℓ⟶ℝ^ℓ which is a feedforward neural network with given activation function σ:ℝ⟶ℝ.
∘ The update rule of the GNN at iteration t ∈ℕ for any labelled graph G and vertex v ∈ V(G), is given as follows:
ξ^0(v) is the indicator vector of the color of v, ξ^t+1(v) = 𝖼𝗈𝗆𝖻 (ξ^t(v), ∑_w ∈ N(v)ξ^t(w) )
This type of GNN is sometimes referred to as a recurrent GNN. The general definition (cf. for instance <cit.>) usually considers a sequence of combine and aggregation functions which may depend on the iteration t. The aggregation function replaces the sum over the neighborhood, i.e., at each iteration 𝖼𝗈𝗆𝖻(ξ^t(v), 𝖺𝗀𝗀({{ξ^t(w) : w ∈ N (v) }})) is applied. It has been proved in <cit.> that for any 𝖺𝗀𝗀 function, there is a GNN (of potentially larger size) whose aggregation function is the summation and which refines any GNN with this aggregation function. The results of this article extend to GNNs whose combination and aggregation functions are allowed to be different in different iterations, but are multivariate piecewise polynomials. For ease of presentation, we restrict to
recurrent GNNs.
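As a small illustration (not the general architecture-dependent formulation), the update rule above can be simulated as follows in Python, with the combination function passed in as an arbitrary callable standing in for the feedforward network.

```python
import numpy as np

def run_recurrent_gnn(adj, init, comb, iterations):
    """xi^{t+1}(v) = comb(xi^t(v), sum_{w in N(v)} xi^t(w));
    adj maps vertices to neighbour sets, init gives xi^0(v) in R^l."""
    xi = {v: np.asarray(init[v], dtype=float) for v in adj}
    for _ in range(iterations):
        xi = {v: np.asarray(comb(xi[v],
                                 sum((xi[w] for w in adj[v]), np.zeros_like(xi[v]))))
              for v in adj}
    return xi
```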
Given these definitions, we can now formally state the previously known results about the expressivity of unbounded GNNs (Theorems <ref> and <ref>). Namely, in Theorem <ref>, the size of the GNN is allowed to grow with n.
<cit.>
Let d ≥ 1, and let ξ^d be a vertex invariant computed by a GNN after d iterations. Then 𝖼𝗋^d refines ξ^d, that is, for all graphs G, G' and vertices v ∈ V(G), v' ∈ V(G'), 𝖼𝗋^d(G,v)=𝖼𝗋^d(G',v') ⟹ξ^d(G,v)=ξ^d(G',v').
<cit.><cit.> Let n ∈ℕ. Then there is a recurrent GNN such that for all t=0, ⋯, n, the vertex invariant ξ^t computed in the t-th iteration of the GNN refines 𝖼𝗋^t on all graphs of order at most n.
In contrast, we prove Theorems <ref> and <ref> for bounded GNNs:
For any GNN, i.e., choice of combination function, represented by a feedforward neural network with piecewise polynomial activation, and any natural number I ∈ℕ, there exists a pair of rooted trees T and T' (unicolored, i.e. ℓ=1) of depth two with root nodes s and s' respectively such that:
* 𝖼𝗋^2(T,s) ≠𝖼𝗋^2(T',s'), i.e. s and s' can be distinguished with color refinement in two iterations.
* ξ^t(T,s) = ξ^t(T',s') for all t ≤ I, i.e., s and s' cannot be distinguished by the GNN until iteration I+1.
In two iterations, a single neuron perceptron with an activation that is not piecewise polynomial such as σ∈{exp, sigmoid, cosh, sinh, tanh} can distinguish the root nodes of any pair of non-isomorphic rooted trees of depth two.
§ OVERVIEW OF THE PROOFS
To establish our first result, we will use rooted trees of the form shown in Figure <ref> which is a tree of depth two whose depth one vertices have prescribed degrees k_1, …, k_m, with k_1, …, k_m ≥ 1. Given a GNN with piecewise polynomial activation and a natural number I∈ℕ, we will show that there exist two sets of integers k_1, ⋯, k_m and k'_1, ⋯, k'_m that are not the same up to permutations, such that for the corresponding rooted trees T[k_1, ⋯, k_m] and T[k'_1,⋯,k'_m], the GNN cannot distinguish s and s' for the first I iterations, i.e. ξ^t(T,s) = ξ^t(T',s') for any t ∈{1, ⋯, I}.
Note that the natural numbers m, and k_1, ⋯, k_m and k'_1, ⋯, k'_m will depend on I, the activation and the size of the neural network considered.
The proof of the first result is structured as follows. Since the trees are parameterized by m-tuples of integers k_1, …, k_m, the embedding of the root node computed by the GNN at any iteration is a function of these m integers. Since the activations are piecewise polynomials, these embeddings of the root node are also piecewise multivariate symmetric polynomial functions of k_1, …, k_m (Lemma <ref>). Then, we show that there exists a large enough region of ℝ^m on which this piecewise polynomial function coincides with a single polynomial. This region is large enough in the following sense: we prove that it contains more integral vectors than the number of possible values a symmetric polynomial with bounded degree can take on these vectors, even after identifying vectors up to permutations of the coordinates.
This implies that the polynomial will take the same value on two distinct integral vectors whose coordinates are not identical up to permutations.
When translating this result back to the world of GNNs, this implies that the two embeddings of the root nodes of the trees corresponding to these two vectors will coincide. To conclude a separation between bounded and unbounded GNNs, we justify that the unbounded ones can separate these two vertices. This is based on the previous result (Theorem <ref>) stating that unbounded GNNs refine color refinement.
Our second result states that for activations that are not piecewise polynomial, a one neuron perceptron GNN can distinguish the root nodes of any pair of nonisomorphic trees of depth two.
In particular, we prove this when the activation function is the exponential, the sigmoid, or the hyperbolic sine, cosine or tangent function. This is done by showing that the condition ξ^2(s)=ξ^2(s') corresponds to a relation between the exponentials of the integers k_1, ⋯, k_m and k'_1, ⋯, k'_m. Applying the Lindemann-Weierstrass Theorem from transcendental number theory (Lemma <ref> and Theorem <ref>) leads to the conclusion that k'_1, …, k'_m must be a permutation of k_1, …, k_m, showing that the trees are isomorphic.
§ COLLISION WITH PIECEWISE POLYNOMIAL ACTIVATIONS
The following statement is a reformulation of the fundamental theorem of symmetric polynomials, where we added a constraint on the degree of the decomposition. We provide a proof for completeness.
Let P be a symmetric multivariate polynomial of m variables of degree q with q ≤ m. Then P can be written as a polynomial of degree q of the elementary symmetric polynomials s_1, ⋯, s_q.
For α=(α_1,⋯, α_m)∈ℕ^m, define the multidegree of a monomial:
𝗆𝖽𝖾𝗀(X_1^α_1⋯ X_m^α_m) :=∑_i=1^mα_i (q+1)^m-i
.
By definition, the leading term of a polynomial is the monomial with greatest multidegree. We present a proof by induction on the multidegree of P.
We first need the following claim.
Claim: Let P ∈ℝ[X_1, ⋯, X_m] be a symmetric polynomial. Let c_αX^α=c_αX_1^α_1⋯ X_m^α_m be the leading term of P (c_α≠ 0). Then c_αX^π(α) is also a monomial in P, for every permutation π of {1, ⋯, m}.
Define ϕ^π : ℝ[X_1, ⋯, X_m] →ℝ[X_1, ⋯ X_m], be the linear map between polynomials which maps variable X_i to X_π(i). Since P is symmetric, then ϕ^π(P) = P (we use here the equivalence between equality of ϕ^π(P) and P as functions and as formal algebraic objects). Since c_αX^α is a term in P and c_αX^π(α) is a monomial in ϕ^π(P) for any permutation π, we must have c_αX^π(α) as a monomial in P.
Base case. The property is true for any polynomial of multidegree 0 (constant polynomial).
Induction step. Let c_αX^α be the leading term of P. Then c_αX^π(α) is a monomial in P for every π∈𝔖_m by the previous claim. Therefore, the leading term's exponent α =(α_1, ⋯, α_m) must satisfy α_1 ≥α_2 ≥⋯≥α_m. Since P has degree q then α_i = 0 for any i ≥ q+1.
Let d_m = α_m, d_m-1= α_m-1 -α_m, ⋯, d_i = α_i - α_i+1, ⋯, d_1 = α_1 - α_2.
Define Q'(X_1, ⋯, X_m):=s_1^d_1⋯ s_m^d_m. Q' is a polynomial of s_1, ⋯, s_q and the leading term of Q' is c'_αX^α where c'_α≠ 0. In particular, 𝖽𝖾𝗀(Q')≤ q.
Now, let P':=P-c_α/c'_αQ'.
Then 𝗆𝖽𝖾𝗀(P') < 𝗆𝖽𝖾𝗀(P), and P' is symmetric because P and Q' are. Applying the induction hypothesis to P', get Q” such that P'=Q”(s_1, ⋯, s_q). Define Q := c_α/c'_αQ' + Q”. Q is a polynomial of degree at most q of s_1, ⋯, s_q and Q=P, which completes the induction step.
Let q be a positive integer. Then, there exists m∈ℕ and two integral vectors (k_1, ⋯, k_m) ∈ℕ^m and (k'_1, ⋯, k'_m)∈ℕ^m that are not equal up to a permutation such that for any sequence of symmetric piecewise polynomial functions (f_p: ℝ^p→ℝ )_p∈ℕ satisfying 𝖽𝖾𝗀(f_p) ≤ q for all p∈ℕ (bounded degree condition on each polynomial piece for any p), f_m(k_1, ⋯, k_m) = f_m(k'_1, ⋯, k'_m).
Let m be any natural number such that m > max{q^2, 2q}. For any natural number M, let F_M be the box { (k_1, ⋯, k_m) ∈ℤ^m: ∀ i 1 ≤ k_i ≤ M}.
Let Ω_M:= {{{ x_1, ⋯, x_m }}: (x_1, ⋯, x_m) ∈ F_M }; in other words, Ω_M is the set of multisets of size m whose elements, when arranged as vectors, are in the box F_M[Another way to define Ω_M is as the orbits of the action of the symmetric group 𝔖_m on F_M.].
Consider
Φ: Ω_M⟶ℤ^q
S ↦ ( s_1(S), s_2(S), ⋯, s_q(S) )
which is well-defined because of the symmetry of the elementary symmetric polynomials s_1, …, s_q. Note that for any i = 1, …, q, s_i is a sum of m i monomials whose maximum value is M^i on F_M. Therefore, |𝖨𝗆(Φ)|≤ (∑_i=1^qmiM^i)^q≤ M^q^2(mqq)^q
where the last inequality follows from mi≤mq because m > 2q.
On the other hand, |𝖣𝗈𝗆(Φ)| = M+m-1m (number of multisets of size m whose elements are taken from a set of size M).
Now, let f_m: ℝ^m→ℝ be the m-th function in the given sequence of symmetric piecewise polynomial functions where each polynomial piece has degree at most q. Let r be the number of pieces of f_m. Then, there is a subset of 𝖣𝗈𝗆(Φ) with at least 1/rM+m-1m elements where f_m is a symmetric polynomial P of degree at most q. Lemma <ref> tells us that P can be expressed as a polynomial of degree at most q of the elementary symmetric polynomials s_1, ⋯, s_q. Due to the pigeonhole principle, any such polynomial will be equal on two distinct multisets {{k_1, ⋯, k_m}} and {{k'_1, ⋯, k'_m}} in Ω_M as soon as:
1/rM+m-1m_number of points > M^q^2(mqq)^q ≥ |𝖨𝗆(Φ)|_number of values P can take at most
Such a value for M can be found by noticing that M+m-1m is a polynomial of M of degree m whereas M^q^2(mqq)^q is a polynomial of M of degree q^2. Since we chose m to be greater than q^2, there exists M ∈ℕ such that Equation <ref> is true. Hence there exist k and k' whose coordinates are not equal up any permutation and such that s_i(k_1, ⋯, k_m)=s_i(k'_1, ⋯, k'_m) for any i∈{1, ⋯, m}. In turn f_m(k_1, ⋯, k_m)=f_m(k'_1, ⋯, k'_m).
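To illustrate the counting argument, the following short Python check (with parameters chosen by us purely for illustration) finds the smallest M satisfying the strict inequality for given q, m and r, confirming that such an M exists because the left-hand side grows like M^m with m > q^2.

```python
from math import comb

def smallest_valid_M(q, m, r):
    """Smallest M with (1/r) * C(M+m-1, m) > M^(q^2) * (C(m, q) * q)^q."""
    assert m > max(q * q, 2 * q)
    M = 1
    while comb(M + m - 1, m) <= r * M ** (q * q) * (comb(m, q) * q) ** q:
        M += 1
    return M

print(smallest_valid_M(q=2, m=5, r=1))  # a concrete M for a degree-2 example
```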
Let ξ^t(T[k_1, …, k_m],s) be the embedding obtained via a GNN with piecewise activation functions after t iterations, where ξ^0(v)=1 for all vertices v ∈ V(T[k_1, …, k_m]).
Then, for any iteration t, there exists a symmetric multivariate piecewise polynomial function F such that ξ^t(T[k_1,…,k_m],s)=F(k_1, ⋯, k_m).
Furthermore, the degree of F does not depend on m, but only on the underlying neural network and t.
We first prove by induction on t that, for any vertex v ∈ V(T[k_1, ⋯, k_m]), ξ^t(T[k_1, …, k_m],v) is a piecewise polynomial function of the k_i's.
Base case: for t=0 this is trivial since all vertices are initialised with the constant polynomial 1, whose degree does not depend on m.
Induction step: Suppose the property is true at iteration t, i.e., for each node w, ξ^t(T[k_1, …, k_m],w) is a multivariate piecewise polynomial of the k_i's. Since
ξ^t+1(T[k_1, …, k_m],v) = ϕ(ξ^t(T[k_1, …, k_m],v), ∑_w∈ N(v)ξ^t(T[k_1, …, k_m],w))
where ϕ is a piecewise multivariate polynomial function, by composition ξ^t+1(T[k_1, …, k_m],v) is a multivariate piecewise polynomial of k_1, ⋯, k_m. By induction, the degree of ξ^t+1(T[k_1, …, k_m],v) depends only on t and the degree of ϕ, which does not depend on m.
Finally, we know from <cit.> that the color refinement algorithm refines any GNN at any iteration. Since the tuple obtained by color refinement for the vertex s is invariant with respect to permutations of the k_i's, ξ^t(T[k_1, …, k_m],s) is also invariant with respect to permutations of the k_i's.
We already know <cit.> that color refinement refines any recurrent GNN (even with an architecture of unbounded size). We prove the existence of pairs of graphs that can be separated by the color refinement algorithm, but cannot be separated by a recurrent GNN of fixed (but arbitrary) size.
We use T[k_1, ⋯, k_m] to refer to the tree illustrated in Figure <ref>. This tree has depth two, a root node s, and contains m nodes at depth one. Each vertex i at depth 1 has exactly k_i-1 “children” at depth two (and therefore k_i neighbors, where k_i is a positive integer). In the following, all vertices have color label 1.
Claim: Let T[k_1, ⋯, k_m] and T'[k'_1, ⋯, k'_m] be two rooted trees given by Figure <ref>. If the k_i's and k'_i's are not equal up to a permutation, the color refinement distinguishes s and s' after two iterations, i.e. 𝖼𝗋^2(s) ≠𝖼𝗋^2(s').
Simply note that
𝖼𝗋^2(s) = ( 𝖼𝗋^1(s), {{𝖼𝗋^1(x_1), ⋯, 𝖼𝗋^1(x_m) }} )
𝖼𝗋^1(s) = ( 𝖼𝗋^0(s), {{ 1, ⋯, 1 }} ) (the multiset contains m ones)
∀ i ∈{1, ⋯, m } 𝖼𝗋^1(x_i) = ( 𝖼𝗋^0(x_i) , {{ 1, ⋯, 1 }} ) (the multiset contains k_i ones)
hence 𝖼𝗋^2(s) is uniquely determined by the multiset {{k_1, ⋯, k_m}}.
Let T > 0 be a positive integer, and for 0 ≤ t ≤ T, let f_t(k_1, ⋯, k_m):=ξ^t(T[k_1, …, k_m],s) be the value returned by a GNN with piecewise polynomial activation after t iterations (note that the embeddings are one-dimensional because only one color is used). Due to Lemma <ref>, for any t∈{0, ⋯, T}, ((k_1, ⋯, k_m) ↦ f_t(k_1, ⋯, k_m))_m∈ℕ is a sequence of symmetric piecewise multivariate polynomials with bounded degrees (the degree of f_t does not depend on m). Lemma <ref> tells us that there exists m∈ℕ, and two vectors k∈ℕ^m and k'∈ℕ^m whose coordinates are not equal up to permutations, such that for any t∈{0, ⋯, T}, f_t(k_1, ⋯, k_m) =f_t(k'_1, ⋯, k'_m).
Note that in Theorem <ref>, depth two is minimal: for any pair of non-isomorphic rooted trees of depth one, any GNN with a one-neuron perceptron, an injective activation function, weights set to one, and zero bias can distinguish their root vertices in one iteration. Indeed, in that case, ξ^1(s) = σ(1+𝖽𝖾𝗀(s)) if the GNN is recurrent with a combine function given by ϕ: ℝ^2→ℝ,(x_1, x_2) ↦σ(x_1 + x_2). Hence, ξ^1(s)≠ξ^1(s') as soon as σ is injective and s and s' have distinct degrees.
§ ACTIVATIONS THAT ARE NOT PIECEWISE POLYNOMIAL
In this Section we present a proof of Theorem <ref>. We prove that any pair of non-isomorphic rooted trees of depth two, i.e., trees of the form T[k_1, ⋯, k_m] and T'[k'_1, ⋯, k'_n] (here the k_i's and k'_i's are all greater than or equal to 1, cf. Figure <ref>), can be distinguished by a bounded GNN with any of the following activation functions: exponential, sigmoid, or a hyperbolic sine, cosine or tangent function. Consider the following 1-neuron perceptron ϕ with activation function σ, ϕ: ℝ^2→ℝ, ϕ(x_1,x_2) = σ( x_1 + x_2). Then it is easy to see that:
∀ v ∈ V(T[k_1, ⋯, k_m]) ξ^1(v) = σ( ξ^0(v) + ∑_w∈ N(v)ξ^0(w)) = σ(1 + 𝖽𝖾𝗀(v))
ξ^2(v) = σ(σ(1 + 𝖽𝖾𝗀(v)) + ∑_w∈ N(v)σ(1 + 𝖽𝖾𝗀(w)))
In particular ξ^2(s) = σ( σ( 1 + m) + ∑_i=1^mσ(k_i+1))
Now suppose σ is either injective on ℝ, or nonnegative and injective on ℝ^+ (this is the case for the exponential, the sigmoid, the hyperbolic tan, and the hyperbolic cosine and sine), s and s' are vertices of two trees with potentially different number of leaves m and n, then
ξ^2(s) = ξ^2(s') ⟺ ∑_i=0^mσ(k_i+1) =∑_i=0^nσ(k'_i+1)
where k_0:=1+m and k'_0:=1+n. The goal of the remainder of this section is to prove that the right hand side equality of Statement <ref> implies m=n and k_i's are the same as k'_i's, up to a permutation, for the activation functions σ of Theorem <ref>.
If α_1, ⋯, α_n are distinct algebraic numbers, then the exponentials e^α_1, ⋯, e^α_n are linearly independent over the algebraic numbers.
Let n and m be positive integers, and α_1, ⋯, α_n and α'_1, ⋯, α'_m be algebraic numbers. Then ∑_i=1^n e^α_i = ∑_i=1^m e^α'_i if and only if m=n and the α_i's and α'_i's are equal up to a permutation.
(⟸ ) is clear. For (⟹ ), by contradiction suppose that the α_i's and α'_i's are not equal up to a permutation. First, if the α_i's (resp. α'_i's) are not distinct, one can group them by their number of occurrences in both sums. Then we would obtain a nontrivial linear dependence with integer coefficients between exponentials of distinct algebraic numbers, which contradicts Theorem <ref> (Lindemann–Weierstrass).
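A quick numerical illustration (not part of the proof): the multisets {{2,2,2,4,4}} and {{1,3,3,3,4}} agree on every symmetric polynomial of degree at most 2 (both have s_1=14 and s_2=76), yet the sums ∑_iσ(k_i+1) already differ for σ=exp and σ=sigmoid, in line with the results above:

import math

A = [2, 2, 2, 4, 4]
B = [1, 3, 3, 3, 4]
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

for name, f in [("degree-2 symmetric: (k+1)^2", lambda x: x * x),
                ("exp", math.exp),
                ("sigmoid", sigmoid)]:
    sA = sum(f(k + 1) for k in A)
    sB = sum(f(k + 1) for k in B)
    print(f"{name:30s} {sA:.10f}  {sB:.10f}  equal: {abs(sA - sB) < 1e-9}")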
Without loss of generality, suppose the k_i's and k'_i's are ordered in increasing order. For ease of notation, let α and α' be the vectors defined as α_i = k_i+1 for all i∈{1, ⋯, m} and α'_i=k'_i+1 for all i∈{1, ⋯, n}. We will now prove that Statement <ref> implies α=α' in each case.
- σ∈{𝗌𝗂𝗀𝗆𝗈𝗂𝖽, 𝗍𝖺𝗇𝗁}.
In the case of the sigmoid, Statement <ref> yields the following equation after multiplication by the product of the denominators:
( ∑_i=1^m e^α_i∏_j=1, j≠ i^m (1 +e^α_j) ) ∏_i=1^n (1+ e^α'_i)= ( ∑_i=1^n e^α'_i∏_j=1, j≠ i^n (1+e^α'_j) ) ∏_i=1^m (1+e^α_i)
After developing and grouping each hand side into linear combinations of exponentials we obtain an equation of the form:
∑_S ⊆{1, ⋯, m}, T ⊆{1, ⋯, n}γ_S,Texp(α_S + α'_T) =∑_S ⊆{1, ⋯, m}, T ⊆{1, ⋯, n}γ_S,Texp(α'_S + α_T)
where for S ⊆{1, ⋯, m}, α_S:=∑_i ∈ S α_i (resp. for T⊆{1, ⋯, n}, α'_T:=∑_i ∈ T α'_i). Note that γ_∅, T=0 for all subsets T ⊆{1, …, n}. We will prove by induction on m (the size of the vector α) that in these conditions, Equation <ref> implies that m=n and α= α'.
Base case:
If α has size one and α' has size n > 0, then the equation boils down to exp(α_1)=∑_i=1^nexp(α'_i) which is true if and only if n=1 and α_1=α'_1 using Lemma <ref>.
Induction step:
We suppose the following property true for some nonnegative integer m: for any nonnegative integers α_1, ⋯, α_m and α'_1, ⋯, α'_n, ∑_i=1^mσ(α_i)=∑_i=1^nσ(α'_i) implies m=n and α=α'.
Suppose now that ∑_i=1^m+1σ(α_i)=∑_i=1^nσ(α'_i). Since γ_∅,T = 0 for all T⊆{1, …, n}, the smallest term on the left hand side of (<ref>) is exp(α_1) and the smallest term on the right hand side is exp(α'_1). Using Lemma <ref>, this implies that α_1 = α'_1. Therefore ∑_i=2^m+1σ(α_i) =∑_i=2^nσ(α'_i). We can apply the induction assumption to the vector (α_2, ⋯, α_m+1) of size m to obtain that m=n-1 and (α_2, ⋯, α_m+1)=(α'_2, ⋯, α'_n). This proves that m+1=n and α=α', which ends the induction.
If σ=𝗍𝖺𝗇𝗁=(exp (2·)-1)/(exp (2·)+1), Equation <ref> becomes, after multiplication by the product of the denominators:
( ∑_i=1^n (e^2α_i-1 )∏_j=1, j ≠ i^n(e^2α_j +1) ) ∏_j=1 ^m (1+e^2α'_j) = ( ∑_i=1^m (e^2α'_i-1) ∏_j=1, j ≠ i^m(e^2α'_j +1) ) ∏_j=1 ^n (1+e^2α_j)
After developing into a linear combination of exponentials on each side, the arguments containing α_T with T≠∅ on the left hand side and α'_T with T≠∅ on the right hand side have positive algebraic coefficients. There are also arguments of the form α'_T on the left hand side and α_T on the right hand side (in other words, γ_∅, T≠ 0, unlike the sigmoid case). However, note that the coefficients corresponding to these terms are (algebraic and) negative. Hence, as a consequence of Lemma <ref>, the arguments with negative coefficients in front of the exponentials must match up on each side, and we are left with an equation similar to Equation <ref> (the arguments have a factor 2), where again γ_∅, T = 0. We can apply the same reasoning by induction as for the sigmoid case, to prove that α=α'.
- σ∈{sinh, cosh}. If σ = cosh, then Equation <ref> becomes:
( ∑_j=1^nexp(iα_j) - ∑_j=1^mexp(iα'_j) ) + ( ∑_j=1^mexp(-iα'_j) - ∑_j=1^nexp(-iα_j) ) =0
Due to Lemma <ref>, this can only happen if m=n and for all j ∈{1, ⋯, n}, α_j = α'_j, because iα_j, iα'_j are algebraic for any j ∈{1, ⋯, n}, and the α_j's and α'_j's are ordered and positive. We conclude that α=α'. The case σ∈{sinh} can be treated similarly.
|
http://arxiv.org/abs/2307.05995v1 | 20230712081937 | Canonical partition function and distance dependent correlation functions of a quasi-one-dimensional system of hard disks | [
"V. M. Pergamenshchik",
"T. Bryk",
"A. Trokhymchuk"
] | cond-mat.dis-nn | [
"cond-mat.dis-nn",
"cond-mat.stat-mech"
] |
V.M. Pergamenshchik^1,2,*, T. Bryk^3,4, A. Trokhymchuk^3
^1 Institute of Physics, National Academy of Sciences of Ukraine, prospekt Nauky, 46, Kyiv 03039, Ukraine, ^*[email protected]
^2 Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668, Warsaw, Poland
^3 Institute for Condensed Matter Physics, NAS of Ukraine, 1 Svientsistsky Str, Lviv, 79011, Ukraine
^4 Institute of Applied Mathematics and Fundamental Sciences, Lviv National Polytechnic University, UA-79013 Lviv, Ukraine
The canonical NLT partition function of a quasi-one-dimensional
(q1D) single-file system of equal hard disks [J. Chem. Phys. 153, 144111
(2020)] provides an analytical description of the thermodynamics and
ordering in this system (a pore) as a function of the linear density Nd/L,
where d is the disk diameter.
We derive the analytical formulae for the distance dependence of the
translational pair distribution function and the distribution function of distances
between next neighbor disks, and then demonstrate their use by calculating
the translational order in the pore. In all cases, the order is found to be
of a short range and to exponentially decay with the disks' separation. The
correlation length presented for different pore widths and densities shows a
non-monotonic dependence with a maximum at Nd/L = 1 and tends to
the 1D value for a vanishing pore width. The results indicate a special role
of this density when the pore length L is equal exactly to N disk
diameters. Comparison between the theoretical results for an infinite system and the results of a molecular dynamics simulation for a finite system with periodic boundary conditions is presented and discussed.
quasi-one-dimensional pore hard-disk system partition function correlation functions
64.75.Yz 61.20.Gy 61.20.Ja 61.20.Ne 05.10.-a
§ INTRODUCTION
The statistical description of many-particle systems must deal with many,
even infinite number of degrees of freedom and as many integrals. As this
limit can be studied only theoretically, the analytical results and
particularly exact ones are of great importance. To solve a statistical
mechanical problem implies to reduce the problem of calculation of its
partition function (PF) and pair correlation functions to a finite number of
dimensions, finite number of integrals and other mathematical actions. This
is most often a task impossible and we try to learn the physics of many-body
system and develop the appropriate mathematical tools by studying simplified
models. In particular, a strong simplification can be achieved by
considering geometries with reduced dimensionality and, in particular,
one-dimensional (1D). A great number of 1D models considered in the last
century and summarized in the book <cit.> has proved to be very
usefully related to the physics in two and three dimensions. In the theory
of liquids, modeling molecules as hard spheres, the distinguished example of
the 1D physics is the exact solution for the PF of a 1D gas of hard core
molecules, now known as Tonks' gas <cit.>.
The 1D Tonks gas is much simpler than any 2D system, nevertheless Tonks'
solution has become the analytical platform for further expansion into the
world of 2D hard disk (HD) systems via moving to certain
quasi-one-dimensional (q1D) models. The simplest q1D HD system is such that
each disk can touch no more than one next neighbor from both sides (the
so-called single-file system); the width of such q1D pore must be below (√(3)/2+1)d where d is the HD diameter. The analytical theory of HDs
in q1D pore was first considered by Wojciechowski et al <cit.> for a
system periodically replicated in the transverse direction. Later Kofke and
Post <cit.> proposed an approach that enables one to consider HDs in a
q1D pore in the thermodynamic limit using the well-known transfer matrix
method introduced in statistical physics by Kramers and Wannier <cit.>. This theory has become the main tool in studying a q1D
single-file HD system <cit.>
that allowed one to address also q1D systems of non-circular particles
(e.g., <cit.>). The virial expansion for disks in the q1D geometry
has been addressed by Mon <cit.>. Thanks to the analytical methods
nowadays HDs in the q1D geometry have been intensively used for thermodynamic
calculations and as a model glass former to study glass transitions and HD
dynamics <cit.>. The approximate analytical theory
for HDs was also developed for one- and two-dimensional random pore geometry
by the scaled particle method <cit.>. The new interest has been
brought about by the studies of actual physical ultracold systems such as
Bose-Einstein condensates created in practically 1D or q1D electromagnetic
traps <cit.>. Although mathematically quantum and classical gases are
very different, the classical 1D and q1D models can provide some technical
and even physical insight.
The transfer matrix method is essentially related to the pressure-based NPT ensemble which does not directly predict pressure as a function of
system's width D and length L as the Gibbs free energy is
parameterized by the pressure P. Another peculiarity is that the
transfer matrix approach essentially employs the periodic boundary
conditions along the pore which allows one to reduce the PF to the trace of
the transfer matrix <cit.>. Recently one of us derived analytically
the canonical NLT PF of a q1D HD single-file system (from now on q1D
implies also single-file system) both for a finite number of disks N
and in the thermodynamic limit <cit.>. Although the derivation of
the PF in <cit.> has used certain approximation which is addressed
in more detail in the next section, the obtained results provide
convenient analytical tools. In particular, the PF of a NLT ensemble is
not related to the periodic boundary conditions and finding the
thermodynamic properties of a q1D HD system for given L and D is
reduced to solving single transcendental equation which can be easily done
numerically. The PF, pressure along and across the pore, distribution of the
contact distances between neighboring HDs along the pore, and distribution
of HD centers across the pore for given linear density ρ =Nd/L are
found analytically. In this paper we derive and employ another fundamental
thermodynamic quantity, the pair distribution function (PDF).
The transfer matrix method directly gives the leading correlation length
that describes the correlations between the disks' transverse coordinates y_i and y_i+n as a function of the difference n between their
order numbers <cit.>. At the
same time, the most important longitudinal ordering is related to PDFs that
are functions of the actual distance R between disks along the
system. The analytical formula for such PDF g_1D(R) is known only for
a 1D gas of noninteracting (Tonks' gas) <cit.> and
interacting <cit.> hard core molecules. In a q1D system, finding
the PDF g(R) for large R by the transfer matrix method directly from
its definition is admittedly a formidable problem <cit.>. The
large R behavior of the PDF is possible to get by means of the following
nontrivial numerical procedures: either by inverse Fourier transform of the
structure factor obtained from the joint solution of two integral equations
<cit.>, or by first planting the system's configuration from the
transfer matrix eigenstates and then averaging over these planted
configurations <cit.>. The main goal of this paper is to develop an
alternative, analytical approach to the PDFs of a q1D HD system based on the
NLT ensemble PF, which is not related to periodic boundary conditions
along the pore, and demonstrate its implementation.
From the analytical canonical PF of a q1D HD system <cit.>, we
derive a formula for the translational PDF g(R) which requires
computing a few integrals and can be straightforwardly implemented
numerically. We also derive the PDF for the distance between next
neighbors. Both PDFs are presented for an infinite system, but the canonical
PF allows one to obtain the formulae for finite systems, too. Usually, the
PDF for a 1D gas is derived by resorting to Laplace's transform related to
the NPT ensemble <cit.>, but in earlier works Frenkel <cit.> and Nagamiya <cit.> used a more direct technique related
to converting infinite products into exponentials. We also use the last
technique and derive the PDF for a q1D HD system directly from the canonical
PF. The method is first demonstrated by application to the PDF of a 1D Tonks
gas and the derived formulae then used to calculate the translational PDF g(R) and its correlation length for the range of the q1D pore widths
and wide range of linear densities of a q1D HD system. In all cases, the
correlations are found to exponentially decay with the disks' separation.
The correlation length presented for the total range of the q1D pore widths
and different densities shows a non-monotonic density dependence with a
maximum at the density Nd/L=1 and, for vanishing pore width, tends to
the 1D value of a Tonks gas. The theoretical PDFs g(R) and g_1(R) are compared with the results of molecular dynamics (MD)
simulations presented for q1D systems of N=400 and 2000 HDs.
It is found that the theoretical and computer simulation results for g_1(R) nearly coincide for high and low densities. At the same time, at
the intermediate densities in the vicinity of ρ∼ 1, to coincide
with the MD results, the theoretical g_1 should be obtained for a
slightly higher density. Analysing MD simulation data for the PDF g(R), we came to a tentative conclusion that this difference can be
attributed to the approximation used in <cit.> and, possibly, to a
pressure difference between a finite system with periodic boundary condition
considered in computer simulations and an infinite system considered in the
theory.
The paper is structured as follows. The canonical PF and the methods of its
calculations are introduced in Sec. 2, and then, in Sec. 3, the formulas for
PDFs g(R) and g_1(R) are derived. In Sec. 4, these
formulae are used to study the theoretical PDFs, the results are compared
with the MD data and discussed in detail. The final Sec.5 is a brief
conclusion.
§ THE CANONICAL PARTITION FUNCTION OF A Q1D HD SYSTEM
Consider a pore of length L, confined between two parallel hard walls
separated by the width D, and filled with N HDs of diameter d=1. All lengths will be measured in units of the HD diameter. The reduced
width Δ =(D-d)/d, that gives the actual pore width attainable to
HD centers, in the single-file q1D case ranges from 0 in the 1D case to
the maximum √(3)/2≈ 0.866. The i-th disk has two
coordinates, x_i along and y_i across the pore; y
varies in the range -Δ /2≤ y≤Δ /2; the pore volume is
LD. The transverse center-to-center distance between two neighbors, δ y_i=y_i+1-y_i, determines the contact distance σ between them along the pore:
σ (δ y_i) = min|
x_i+1(y_i+1)-x_i(y_i)| ,
σ (δ y_i) = √(d^2-δ y_i^2),
σ _m(Δ ) = √(d^2-Δ ^2)≤σ≤ d .
The minimum possible contact distance, σ _m, depends on the
pore width Δ and obtains for δ y_i=±Δ when
the two disks are in contact with the opposite walls. Thus, each set of
coordinates {y}=y_1,y_2,… ,y_N determines the
correspondent densely packed state of the total length L^'{y}=∑_i=1^N-1σ (δ y_i), which we call condensate
<cit.>. The minimum condensate length (the distance between centers
of the first and N th disk) is (N-1)σ _m, the maximum length
can be as large as (N-1)d, but it cannot exceed L-d, i.e., (N-1)σ _m<L^'≤ L_max^' where L_max^'=min [(N-1)d,L-d].
The PF of this q1D HD system is given by the integral <cit.>:
Z=Δ∫_-∞^∞
dα/N!∫_(N-1)σ _m^
L_m^'dL^'e^ iα L^'(L-1-L^')^N( ∫_σ _m^
1dσ/√(1-σ ^2)σ e^-iασ) ^N-1,
where i is the imaginary unit. To derive the above PF, in <cit.>, the integration over the transverse coordinates y was changed to
that over σ (δ y). The integration over different δ y_i
is not independent and so is the integration over σ (δ y_i),
but in <cit.> this integration was performed for each σ
(δ y_i) from σ _m to 1 independently. It turns out that
the PF obtained under the above approximation coincides with that obtained
in <cit.> for a system periodic in the transverse direction. However,
in <cit.> such periodic condition in y was not imposed. The above
approximation is supposed to be valid in the limit of large N as in this
limit the main contribution to the PF has to come from the average contact
distance σ which does lie within the range from σ
_m to 1. This approximation has allowed one to solve the problem
analytically for the system in a box and avoid the periodic boundary
condition in x.
It is convenient to rewrite this PF in the exponential form:
Z=Δ/N!∫_(N-1)σ _m^
L_m^'dL^'∫ dα e^ S ,
where
S=iα L^'+Nln (L-1-L^')+(N-1)ln( ∫_σ _m^1dσ/√(1-σ ^2)σ e^-iασ) .
Equations (<ref>) and (<ref>) give the PF in the general case of a q1D
HD system for large N and L.
The integrand of Z is a regular function of α so that the α-integration contour, in particular, its central part that gives
the principal contribution to the integral, can be shifted while the ends
remain along the real axis. In the thermodynamic limit N→∞, L→∞ ,Nd/L=ρ =const, we can compute
the PF (<ref>) by the steepest descent method. In the limit N→∞ the integral (<ref>) is exactly determined by the saddle point
which, for given N, L and σ _m, is the stationary point
of the function S(iα ,L^'), Eq. (<ref>). It is
convenient to introduce real a=iα since α at the
saddle point lies on the imaginary axis and the integration contour has to
be properly deformed. The two equations ∂ S/∂ a=∂
S/∂ L^'=0 that determine the saddle point, can be reduced
to the single equation for a=a_N which reads:
L/N-1/a_N=I^'(a_N)/I(a_N) ,
where
I(a_N) = ∫_σ _m^1dσ/√(1-σ ^2)σexp (-a_Nσ ) ,
I^'(a_N) = ∫_σ _m^1dσ/√(1-σ
^2)σ ^2exp (-a_Nσ ) .
The solution a_N of Eq. (<ref>), which gives the total
longitudinal pressure P_L=k_BTa_N/D <cit.> and
longitudinal force k_BTa_N <cit.>, where k_BT is
Boltzmann's constant times temperature, depends on the per disk pore length L/N and, via σ _m, on the pore width D, and fully
determines the free energy. The free energy per disk,F/N, which
therefore is the function of the pore length L, pore width D and
the temperature T, is F(L,D,T)/N=-TS(a_N)/N=-Ts_N where s_N is system's per disk entropy:
s_N=a_Nσ _N+ln( L-Nσ _N) +N-1/Nln
I(a_N) ,
where σ _N is the average value of the contact distance σ in the condensate [i.e., average of L^'/(N-1)] <cit.>:
σ _N=L/N-1/a_N .
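To illustrate how these equations are used in practice, the following minimal Python sketch (an illustration, not code from the original work; the root bracket is chosen by hand) solves the transcendental equation for a_N by quadrature and bracketing root finding. The substitution σ =√(1-u^2) removes the endpoint singularity of I and I^', and the common factor e^-aσ _m is divided out since only the ratio I^'/I enters; the result yields σ _N and, through P_L=k_BTa_N/D, the longitudinal pressure:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def ratio_Iprime_I(a, Delta):
    # I'(a)/I(a) after the substitution sigma = sqrt(1 - u^2), u in [0, Delta];
    # the factor exp(-a*sigma_m) cancels in the ratio and keeps the numbers finite
    sigma_m = np.sqrt(1.0 - Delta ** 2)
    w = lambda u: np.exp(-a * (np.sqrt(1.0 - u * u) - sigma_m))
    num = quad(lambda u: np.sqrt(1.0 - u * u) * w(u), 0.0, Delta)[0]
    den = quad(w, 0.0, Delta)[0]
    return num / den

def solve_aN(rho, Delta):
    # root of  l_N - 1/a = I'(a)/I(a),  with l_N = 1/rho and d = 1
    l_N = 1.0 / rho
    eq = lambda a: l_N - 1.0 / a - ratio_Iprime_I(a, Delta)
    a_N = brentq(eq, 1e-3, 1e4)        # bracket chosen by hand
    return a_N, l_N - 1.0 / a_N        # returns a_N and sigma_N

print(solve_aN(rho=1.0, Delta=0.5))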
Finally, for N→∞, the PF can be cast in the two
equivalent forms:
Z_∞ = ς _NΔ/N!exp (Ns_N)
= ς _NΔ/N!(L-Nσ _N)^NI(a_N)^N-1exp
(Na_Nσ _N) ,
where ς _N is a prefactor ∼ 1/√(N) originating
from the Gaussian integration along the steepest descent contour, whose exact
form is not needed here. In the 1D case, all σ's are equal to d and
this expression goes over into the Tonks PF Z_1D up to the factor Δ ^N which in this case represents the independent transverse
degrees of freedom: Z_∞→Δ ^NZ_1D where
Z_1D=1/N!( L-Nd) ^Nθ( L-Nd)
and θ( x) is the step function equal to 1 for x>0 and 0 otherwise. Now consider the general case of a finite
system. In what follows, the number of HDs and the total length of a finite
system are denoted as n and L_n, respectively (instead of N and L). The integral (<ref>) can be transformed to the one
along the real axis α as follows:
Z_ n,L_n = Δ/n!∫_
(n-1)σ _m^L_mdL^'(
L_n-1-L^') ^n
×∫_-∞^∞dα/2π| I(iα
)| ^n-1cos[ L^'α
+(n-1)φ _α] ,
where L_m=min (n-1,L_n-1) and
φ _α = arg I(iα ),
I(iα ) = ∫_σ _m^1dσ/√(1-σ ^2)σ e^-iασ .
Although the Gaussian approximation at the saddle point cannot give an exact
result for a system with finite number of disks, choosing the α
integration contour passing through the saddle point provides the best
convergence of the integrals (which has been confirmed numerically). Hence
to compute the PF we shift the central part of the α integration
contour downward and integrate over the real variable t along the line α =-ia_n+t that crosses the imaginary axis at α
=-ia_n. The best choice for the shift a_n is the root of the
following modified equation (<ref>):
L_n/n-n-1/na_n=I^'(a_n)/I(a_n) ,
whose rhs is defined in Eqs. (<ref>) and (<ref>). Then the PF Z_n,L_n can be transformed as follows:
Z_n,L_n = Δ/n!∫_(n-1)σ _m^L_mdL^'e^a_nL^'( L_n-1-L^') ^n
×∫_-∞^∞dt/2π(
I_s^2+I_c^2) ^(n-1)/2cos [L^'t+(n-1)φ ] .
where
I_s(t) = -∫_σ _m^1dσ/√(1-σ ^2)σ e^-a_nσsin (tσ ) ,
I_c(t) = ∫_σ _m^1dσ/√(1-σ ^2)σ e^-a_nσcos (tσ ) ,
φ (t)= arg( I_c+iI_s) ={[ arctanI_s/I_c ,I_c>0 ,; π +arctanI_s/I_c ,I_c<0,I_s>0 ,; -π +arctanI_s/I_c ,I_c<0,I_s<0 . ]. .
The density n/L_ n and the reduced pore width Δ, which enter the integrals above via σ _m, fully determine
the partition function Z_ n,L_ n through
Eqs. (<ref>) - (<ref>).
§ DERIVATION OF THE PDF FROM THE CANONICAL PARTITION FUNCTION
The PDF as a function of separation R is the probability to find
a particle at a distance R from another particle whose coordinate x_0 is fixed, say at x_0=0. Here we derive g(R) for a
q1D HD systems directly from the PF Z_N of the canonical NLT
ensemble.
The q1D PF Z_N{x_i,y_i}, Eqs. (<ref>) and (<ref>), is a
functional of the particles' longitudinal x coordinates and transverse
y coordinates. In the particular case of a q1D system, the general
formula for the PDF g(R) equivalent to its definition is obtained from
the canonical PF for the N particle system by fixing the x
coordinate of n-th disk at x_n=x and then summing over all
possible n (the range of n will be clarified later on) :
g(R)=1/ρ∑_n=1Z_N{x_0=0,y_0,x_1,y_1,...,x_n=R,y_n,...,x_N,y_N}/
Z_N{x_0=0,y_0,x_1,y_1,...,x_n,y_n,...x_N,y_N} .
Note that y_0 and y_n are not fixed so that the particles 0 and n can move in the transverse direction. The PF in the
numerator splits into a product of two PFs, Z_n for n disks
(of which n-1 are free to move) in the space 0<x_k<R, and Z_N-n, L-R for N-n moving disks in the space R<x_k<L-R-d/2, Fig. 1:
g(R)=1/ρ∑_n=1Z_n,RZ_N-n,L-R/Z_N,L .
Figure 1 demonstrates that the numbers of free disks, contact distances σ, and the disk-vertical wall contact distance d/2 have to
be adjusted in each PF individually. As a result, the form of Eq. (<ref>)
that determines a_n, is also slightly modified.
Consider first Z_n,R. We assume that R>1; the case R<1,
possible only for n=1, will be considered separately. In the system of
size R>1, there are n-1 freely moving HDs, ∫
dy_0=Δ, n contact distances σ, no vertical walls and
thus no contact distances d/2. Hence (R-d-L^')^n in PF (
<ref>) has to be replaced by (R-L^')^n-1 and I(iα
)^n-1 by I(iα )^n. Then the PF Z_n,R takes the form
Z_n,R = Δ/(n-1)!∫_ nσ _m^ L_n,mdL^'e^a_nL^'(
R-L^') ^ n-1
×∫_-∞^∞dt/2π
(I_s^2+I_c^2)^ n/2cos [L^'t+nφ
(t)] ,
where L_n,m=min (n,R). The angle φ (t) is defined in
Eq. (<ref>), a_n is the root of equation (<ref>), and the
relation between σ _n and a_n is just the properly modified
Eq. (<ref>) ,
σ _n=R/n-n-1/na_n .
Next consider Z_N-n,L-R in Eq. (<ref>), the PF for N-n HDs
in the range R<x<L. Here all disks are free to move, there are N-n contact distances σ, and the single vertical wall at
the pore end. Then the PF Z_N-n,L-R can be presented in the form
Z_ N-n,L-R = 1/(N-n)!∫_σ
_m^ L_N-n,mdle^L^'a_N-n(L-R-1/2-L^')^ N-n
×∫_-∞^∞dt/2π
(I_s^2+I_c^2)^(N-n)/2cos [L^'t+(N-n)φ (t)] ,
where L_ N-n, m=min (N-n,L-R-1/2) and a_
N-n is the root of the following modified Eq. (<ref>) :
L-R-1/2/N-n-1/a_N-n=I^'(a_
N-n)/I(a_ N-n) .
At last, consider Z_N, L in Eq. (<ref>) that is the PF for N
HDs in the range 0<x<L. Here all disks are free to move, ∫
dy_0=Δ, there are N contact distances σ and the single vertical wall at the pore end. Then the PF Z_N,L can be
presented in the form
Z_N,L = Δ/N!∫_ Nσ _m^ L_N,mdL^'e^a_NL^'(L-1/2-L^')^N
×∫_-∞^∞dt/2π
(I_s^2+I_c^2)^ N/2cos [L^'t+Nφ
(t)] ,
where L_ N,m=min (N,L-1/2) and a_N is the root
of the equation (<ref>). Making use of the PFs (<ref>), (<ref>),
and (<ref>) in the general formula (<ref>) gives the PDF of a q1D HD
system for finite N and L.
The general result for g(R) can be further simplified in the
thermodynamic limit. This case, usually considered the most important one,
is presented in detail in the next section. To illustrate our method of
deriving the PDF directly from the canonical NLT PF we first derive
the PDF for 1D Tonks' gas.
§.§ PDF of a q1D HD system in the thermodynamic limit
§.§.§ PDF of an infinitely long 1D HD system (Tonks' gas)
The PDF g(R) for a 1D HD is given by the general formula (<ref>) in
which the three PFs are obtained from the Tonks' PF, Eq. (<ref>
):
g_1D(R)=1/ρ∑_n=1N!|R-n|^n-1[L-R-(N-n)]^N-n/
(n-1)!(N-n)!(L-N)^N θ (R-n) .
In the limit N→∞, neglecting O(n/N), one also has
(N-n)!≅ N!/N^n and
(L-R-(N-n))^N-n = (L-N)^N-n( 1-R-n/N(l_N-1)) ^N-n
= (L-N)^N-n[ exp( -R-n/l_N-1) +O( n
/N) ]
→ (L-N)^N-nexp( -R-n/l_N-1) ,
where l_N=L/N=1/ρ. Making use of these results in Eq. (<ref>
) and introducing the step function, θ (R-n)=1 for R≥ n
and θ (R-n)=0 otherwise, one finally obtains
g_1D(R)=1/ρ∑_n=1|R-n|^n-1exp( -R-n/l_N-1) /(n-1)!(l_N-1)^n θ (R-n) ,
which is the well-known PDF of 1D Tonks' gas <cit.>.
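For reference, this expression is straightforward to evaluate numerically; a minimal illustrative Python sketch (not from the original work), with d=1 and l_N=1/ρ:

import math

def g_1D(R, rho):
    l = 1.0 / rho                      # l_N; requires rho < 1 (below close packing)
    total = 0.0
    for n in range(1, int(R) + 1):     # theta(R - n) keeps only terms with n <= R
        total += (R - n) ** (n - 1) * math.exp(-(R - n) / (l - 1.0)) \
                 / (math.factorial(n - 1) * (l - 1.0) ** n)
    return total / rho

print([round(g_1D(R, rho=0.8), 4) for R in (1.05, 1.5, 2.0, 3.0, 6.0)])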
§.§.§ PDF of an infinitely long q1D HD system.
In an infinitely long q1D HD system, the above thermodynamic limit result,
Eqs. (<ref>) and (<ref>), is applicable both for Z_ N,
L and Z_ N-n, L-R as the number of particles N-n and volume L-R are infinite, but the PF Z_
n,R for the finite n disk system has to be found directly from the
general formula (<ref>) [or from the original form (<ref>) without
the contour shift]. Adjusting Eqs. (<ref>)-(<ref>) to the above PFs
of interest, one has :
Z_N-n,L-R = ς _N-n/(N-n)![L-R-(N-n)σ
_N-n]^N-nexp [(N-n)s_N-n] ,
Z_N,L = ς _NΔ/N!(L-Nσ _N)^Nexp (N
s_N) .
Here s_ N-n=a_ N-nσ_ N-n+ln I(a_N-n) and s_
N=a_ Nσ_ N+ln I(a_ N),
where the pair σ_ N, a_ N is
determined by l_N=L/N=1/ρ from the Eqs. (<ref>) and (<ref>
) and the pair σ_ N-n, a_ N-n by l_ N-n=(L-R)/(N-n) from similar equations
l_N-n-1/a_N-n = I^'(a_N-n)/I(a_N-n) ,
σ _N-n = l_N-n-1/a_N-n .
Substituting these expressions in the general formula (<ref>) for g(R) and taking into account that in the thermodynamic limit the
preexponential factors ς_ N and ς_ N-n are equal, we get :
g(R) = 1/ρ∑_n=1Z_n,RN![L-R-(N-n)σ
_N-n]^N-n/(N-n)!(L-Nσ _N)^N
×exp [N(s_N-n-s_N)-ns
_N-n] .
Now we find s_N-n by expanding about s_N and
using the smallness of n/N. First, up to O(n/R), one has N(
s_N-n-s_N)≅ N( ∂s_N/∂
l_N) (l_N-n-l_N), where
l_N-n-l_N = L-R/N-n-L/N
= R-l_Nn/N[ 1+O(n/L)] .
The l_N derivative obtains regarding (<ref>) and (<ref>
):
∂s_N/∂ l_N=1/a_N∂ a_N/∂ l_N+a_N=a_N∂σ _N/∂ l_N .
Then one expands s_N-n about s_N regarding (
<ref>):
ns_N-n ≅ ns_N+n∂s
_N/∂ l_N(l_N-n-l_N)
= ns_N+O(n/N) ,
to finally obtain
N(s_N-n-s_N)≅ -a_N∂σ
_N/∂ l_N(R-nl_N) .
Next we show that the N-n th power of the ratio in (<ref>) gives rise
to an exponential:
[ L-R-(N-n)σ _N-n/L-Nσ _N] ^N-n
= ( l_N-σ _N-n/l_N-σ _N) ^N-n(
1-R-nσ _N-n/L-Nσ _N-n) ^N-n
≅ ( l_N-σ _N-n/l_N-σ _N)
^N-nexp( -R-nσ _N/l_N-σ _N) .
In turn, the first factor in the last line can also be reduced to an
exponential whose exponent cancels out the one in Eq. (<ref>):
( l_N-σ _N-n/l_N-σ _N) ^N-n = (
1+σ _N-σ _N-n/l_N-σ _N) ^N-n≅
[ 1+a_N/N∂σ _N/∂ l_N]
^N-n ≅ exp[ a_N∂σ _N/∂ l_N
(R-nl_N)] .
Making use of the results (<ref>)-(<ref>) in formula (<ref>),
after some straightforward algebra and convenient rescaling, we obtain the
PDF in the final form:
g(R)=1/ρ∑_ n=1^ n_max |R-nσ_ n|^ n-1exp{ -R-nσ _N/l_N-σ_N + n[ a_ nσ_ n-a_Nσ_N+lnI(a_ n) /I(a_N)] }/
(n-1)!(l_N-σ_N)^ n J_ n(R) .
Here J_n(R) is the following integral:
J_n(R) = n∫_σ _m^l_mdle^ na_n(l-σ _n)( R/n-l/|R/n-σ _n|) ^n-1
×∫_-∞^∞dt/2π[
I_c(t)^2+I_s(t)^2/I(a_n)^2] ^n/2cos
[n(lt+φ )] ,
where I_c(t),I_s(t),φ (t) and a_n,σ _n are
given in Eqs. (<ref>), (<ref>) and Eqs. (<ref>), (<ref>),
respectively. Deriving Eqs. (<ref>) and (<ref>), we changed from the
variable L^' to l=L^'/n so that the upper l integration limit is now l_m=min (1,R/n). To avoid dealing with
extremely small quantities and extremely fast oscillations, we made the
following convenient rescaling: we divided R/n-l by |R/n-σ _n| and, to compensate, introduced the factor |R-nσ _n|^n-1; similarly, the factor exp [na_nσ _n+nln I(a_n)]
compensates for the denominator I(a_n)^n and exp
[-na_nσ _n] in the integrand.
The maximum
n_max in summation of Eq. (<ref>) is the maximum number of disks
at close contact which can be put in the space between the particle fixed at
x=0 and the point x=R:
n_max(R)=R-mod(R,σ_m)/σ_m .
Note that the expression for g(R) appears to be considerably simpler
if no contour shift and rescaling have been applied:
g(R) = 1/ρ∑_ n=1^ n_maxnexp( -R-nσ_N/l_N-σ_N) / (n-1)!(l_N-σ _N)^ n
×∫_σ_m^ l_mdl(
R/n-l) ^n-1∫_ -∞^∞
dα/2π| I(iα )| ^ n/2cos
[n(lα +φ_α)] ,
where I(iα) and φ_α are defined in (<ref>
). But the formulae (<ref>) and (<ref>) actually provide a much
better convergence and much simpler numerics.
§.§.§ The 1D limit
It is important to see how the results obtained for a q1D HD system behave
approaching a 1D HD system, i.e., in the limit D→ d when Δ→ 0, and σ_m , σ_n , σ_N→ 1. To this end, we first estimate the σ integrals in
this limit:
I(a) = e^ -aΔ +O(Δ ^2) ,
I_c = e^ -aΔcos t+O(Δ ^2) ,
I_s = e^ -aΔsin t+O(Δ ^2) .
As a result,
φ → -t ,
I_ c^ 2+I_ s^ 2/I(a_ n)^ 2 → 1 ,
∫_ -∞^∞dt/
2π[ I_ c^ 2+I_
s^ 2/ I(a_ n)^ 2] ^ n/2cos [n(lt+φ )] → δ (l-1) ,
J_ n → 1 .
We see that in the 1D limit, the g(R), Eq. (<ref>), goes over into
the Tonks g_1D(R), Eq. (<ref>).
§.§.§ Probability to find next neighbor at a distance R
The term with n=1 in the PDF g(R) is proportional to the
probability g_1(R)=Z_1,RZ_N-1,L-R/ρ Z_N,L to have next
neighbour disk 1 of disk 0 at a distance R including R<1. Here we derive this quantity for an infinitely long q1D HD system.
The case n=1 is particular because the small distance between neighbor
disks sets certain restriction on the integration over their transverse
coordinates y which depends on their distributions. Now we have to
consider the two neighbor disks, 0 and 1, within the large system. The
result is similar to that obtained in <cit.> in deriving the y
distribution across the pore. This distribution has the form ∝φ (y)^2 where φ is the following integral:
φ (y)=∫_-Δ /2^Δ /2dy^'e^-a_Nσ (y-y^').
Here σ (y-y^') is defined in eq.(<ref>), and, compared
with formulae of <cit.>, the integration variable σ is
changed to y. This derivation shows that to place the two disks into a
large system, it is sufficient to consider correlations between disk 0 and
one disk on the left of disk 0, call it disk -1, and that between disk 1
and one disk, call it disk 2, on the right of disk 1. Then, rather than
disks 0 and 1 we consider disks -1,0,1, and 2 which results in the
following extensions:
∫_-Δ /2^Δ /2dy_0 → ∫_-Δ /2^Δ /2dy_-1∫_-Δ /2^Δ /2dy_0e^-a_Nσ (y_0-y_-1)=∫_-Δ /2^Δ /2dy_0φ (y_0),
∫_-Δ /2^Δ /2dy_1 → ∫_-Δ /2^Δ /2dy_1∫_-Δ /2^Δ /2dy_2e^-a_Nσ (y_2-y_1)=∫_-Δ /2^Δ /2dy_1φ (y_1).
Regarding the equalities σ _N-1=σ _N+1 and s
_N-1=s_N+1 valid up to O(1/N) and retaining only the R dependent terms z(R), one has:
z(R)=∫_-Δ /2^Δ /2dy_0∫_-Δ /2^Δ
/2dy_1φ (y_0)φ (y_1)θ[
R^2+(y_0-y_1)^2-1] exp( -a_NR) ,
where the θ function eliminates states in which the cores of disks 0 and 1 overlap and we used that 1/(l_N-σ _N)=a_N.
Normalizing on unity, one finally obtains:
g_1(R)=z(R)/∫_σ _m^∞dRz(R) .
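The following grid-based Python sketch (an illustration, assuming d=1 and reusing solve_aN from the earlier sketch) evaluates φ (y), z(R) and the normalized g_1(R) directly from the three expressions above; the normalization integral is truncated since exp(-a_NR) cuts off the tail:

import numpy as np

def g1_curve(rho, Delta, R_values, ngrid=200):
    a_N, _ = solve_aN(rho, Delta)                 # from the earlier sketch
    y = np.linspace(-Delta / 2, Delta / 2, ngrid)
    dy = y[1] - y[0]
    # phi(y) = int dy' exp(-a_N * sigma(y - y')) with sigma(dy) = sqrt(1 - dy^2)
    phi = np.array([np.sum(np.exp(-a_N * np.sqrt(1.0 - (yi - y) ** 2))) * dy for yi in y])
    def z(R):
        theta = (R * R + (y[:, None] - y[None, :]) ** 2) >= 1.0
        return np.exp(-a_N * R) * np.sum(np.outer(phi, phi) * theta) * dy * dy
    sigma_m = np.sqrt(1.0 - Delta ** 2)
    R_grid = np.linspace(sigma_m, sigma_m + 30.0 / a_N, 3000)
    norm = np.sum([z(R) for R in R_grid]) * (R_grid[1] - R_grid[0])
    return [z(R) / norm for R in R_values]

print(g1_curve(rho=0.8, Delta=0.5, R_values=[0.90, 1.00, 1.05, 1.25]))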
In relation with the approximation used to derive the PF (<ref>) and
described in Sec.2, we stress that the distribution φ (y)^2 with φ given in ( <ref>) is very different from that in the system
periodic in the y direction <cit.>, and the approximation
influences this distribution only through the value of a_N.
§ RESULTS AND DISCUSSION
Figures 2 and 3 present the PDF g_1(R) for next neighbor disks
obtained from Eq. (<ref>) for a set of linear densities ρ =N/L
and two reduced pore widths Δ. The sharp peak at R=1 is
present at all densities including very high, but in this case its height is
incomparable with second peak centered at the average interdisk spacing l_N=1/ρ. The second peak appears and strengthens as density
becomes higher and higher. The concentration of spacings R at the
average distance indicates a high order along the pore. For densities ρ near the close packing, this also implies a high overall zigzag
order since R≅ l_N approaches the minimum separation σ
_m for which disks stay very close to the walls. In contrast, the fact
that there is a high peak at R=1 which is particularly pronounced for
the density ρ =1 with l_N =1 shows that the ordering at this
density is not necessarily related to a zigzag-type order. We shall give
this issue more consideration later on, as the peculiarity of the separation R=1 and density ρ =1 will get additional indications.
Right now we would only like to explain the reason for the cusp at R=1, whose presence in the PDFs g_1(R) and g(R) has been well
known <cit.>. To this end, the θ function in the formula for z(R), Eq. (<ref>), is replaced by
the explicit dependence of the integration limits on R, i.e.,
z(R)={[ ∫_-Δ /2^Δ /2-√(1-R^2)dy_0∫_y_0+
√(1-R^2)^Δ /2dy_1φ (y_0)φ (y_1), R≤
1 ,; ; ∫_-Δ /2^Δ /2dy_0∫_-Δ /2^Δ
/2dy_1φ (y_0)φ (y_1), R>1 . ].
This formula shows that the increase of the disk transverse free path in y with distance R abruptly stops at its maximum constant value Δ at R=1.
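A tiny numeric illustration of this point (not from the original text): the transverse area allowed by the θ function grows with R and saturates at its maximum value Δ ^2 exactly at R=1, which is the origin of the cusp:

import numpy as np

def allowed_area(R, Delta, ngrid=400):
    dy = Delta / ngrid
    y = -Delta / 2 + (np.arange(ngrid) + 0.5) * dy    # midpoint grid over (-Delta/2, Delta/2)
    ok = (R * R + (y[:, None] - y[None, :]) ** 2) >= 1.0
    return ok.sum() * dy * dy

Delta = 0.5
for R in (0.90, 0.95, 0.99, 1.00, 1.05):
    print(R, round(allowed_area(R, Delta), 4), "  Delta^2 =", Delta ** 2)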
In Fig. 2a, the theoretical g_1(R) is superimposed on the MD
simulation data <cit.> for the same Δ =0.5. It is seen that
the theoretical and MD simulation results for high ρ =1.111 and low
ρ =0.8 practically coincide whereas for the intermediate densities ρ=1.053 and 1.01 they look very different. Actually, however, a
perfect fit can be achieved by a small increase of these theoretical
densities respectively to 1.065 and 1.032, Fig. 2b. This mismatch is
addressed after presenting the case of pore width Δ =0.866 in
Fig. 3. This figure shows our theoretical g_1(R) for the densities
for which in <cit.> the PDF normalized on the density, g/ρ ,
was obtained by a Monte Carlo simulation for short distances R<1.5. While g_1(R) contains the contribution of a single next neighbor, g(R)
in <cit.> includes the contributions of both next and next-next
neighbor. Nevertheless, the peaks for ρ =1.6 and 1.4 are concentrated
at distances R<1 where the contribution of the next-next neighbor is
indirect and negligible so that our g_1/ρ and the g/ρ can be
compared. After dividing by the corresponding ρ, these curves in Fig. 3
come into good agreement with their counterparts from <cit.>.
The results for ρ =1 and 0.6, however, cannot be compared as the
role of the next-next neighbor for these curves in <cit.> is
essential.
The first idea is that the reason for the aforementioned mismatch lies in
the approximation described in Sec.2 which was used in the derivation of the
PF in <cit.>. This can be checked by developing the theory which
uses neither the above approximation nor the periodic boundary condition in the
x direction. At the same time, to address the mismatch between our
theoretical and MD results at the intermediate densities for Δ =0.5, it is also important to resort to Fig. 4 which presents our theoretical g(R) and the MD results for ρ =1. Figure 4a shows that the general
trend is that the theoretical peaks obtained for an infinitely long system
are slightly wider and lower than those obtained by the MD simulations for a
system with periodic boundary conditions. The theoretical correlation length
1/0.124≈ 8.1 is slightly shorter than that of the fit to MD data
1/0.1015≈ 9.85, Fig. 4b. This last figure, however, also
demonstrates a visually appreciable difference between the theoretical and
MD g(R): the last does not vanish for large R and remains at a
level on the order of 0.01 for both system sizes N=400 and N=2000.
The residual correlations persist for all R≳ 50, are highly
fluctuating, showing no tendency to decreasing and are even higher for the
larger system. This points to the possibility that the slower correlation
decay of MD data is connected to the system finite size, i.e., the
effect which was already addressed in <cit.>. Another reason can be
the periodic boundary conditions employed in MD simulations: imposing a
correlation at the distance equal to the system length L can also
enforce the longitudinal correlation value. The relation between theoretical
and computer simulation data was already addressed in <cit.> where we
compared the data for the transverse disk distributions. It was found that
the former always predict fewer disks at the walls, i.e., at R∼σ
_m, and slightly more disks at a distance R∼ 1 than the latter. The reason is that the space for windowlike defects with R∼ 1 is
related to the system size L: it diminishes sharper with the linear
density ρ for shorter L and for sufficiently high ρ
in a finite size system is not available at all. At the same time, the
probability of a window R∼ 1 in the zigzag arrangement in an
infinite q1D system is nonzero for any ρ below close packing <cit.>. For this reason it can be expected that the peaks of g_1(R) and g(R) in an infinite theoretical system are slightly stretched
toward higher R, hence are wider and slightly lower than those
obtained in computer simulations of a finite system with periodic boundary
conditions, which is the case of Fig.4. In terms of correlation decay, this
means that an infinite system has a shorter correlation length than that in
a finite system with periodic boundary conditions. Clearly, in terms of
pressure and density it implies that the pressure sensitively depends on the
ordering details: in a finite system pressure is slightly higher than in an
infinite system and the mismatch between the two PDFs can be eliminated by
shifting the density of an infinite system to a slightly higher value, which
was demonstrated in Fig.2b. Another question is why this shift is mostly
needed at the intermediate densities, i.e., those between the dense packing
and gas values. Before addressing this we first consider the results
presented in Figs. 5 and 6.
The longitudinal pair correlations as a function of the disks' number
difference, g_2(|n_2-n_1|), have been investigated in detail by
the transfer matrix method <cit.>. At
the same time, the PDF g(R) as function of the disk separation R
for given density cannot be directly obtained by this method. Formula (<ref>) considerably simplifies its calculation and enables one to get its
systematic understanding by means of the direct calculation. The density ρ determines a_N (i.e., the pressure) via simple
transcendental Eq. (<ref>) in which Δ enters via minimum
contact distance σ _m, Eq. (<ref>), and σ _N
is given by Eq. (<ref>). We obtained the PDF g(R), Eq. (<ref>
), by performing the integration in Eq. (<ref>) numerically, Figs. 5 and
6 <cit.>. Contrary to our suggestion based on computer simulations
data <cit.> and in line with the results of the transfer matrix method
<cit.>, our findings on the longitudinal correlations in the
thermodynamic limit show an exponential decay for all pore widths and
densities. The correlation length is a monotonically increasing function of
density, Fig. 5. To combine both width and density effects, we fixed the
ratio ( ρ /ρ _max) of the actual density ρ to the maximum density ρ _max(Δ ) for a given pore
width Δ, and then found the correlation lengths for different Δ in the total range of the single-file widths, 0≤Δ≤√(3)/2≈ 0.866, Fig. 6. For a given Δ the
maximum density is ρ _max(Δ )=1/σ _m(Δ )=1/√(
1-Δ ^2). It follows that as Δ runs from 0 to
0.866, the actual density ρ = (ρ /ρ _max)/√(1-Δ ^2) monotonically increases from 0 to 1.1547(ρ /ρ
_max). In particular, for the same Δ, the actual ρ is
higher for higher ρ /ρ _max. The results for (ρ /ρ
_max)=0.866, 0.9539 and 0.9875 are presented in Fig. 6.
First, it is seen that, for the same Δ, the correlation length is
larger for a higher density. Second, as the width approaches zero, the
correlation length tends to the value obtained for the 1D Tonks gas from g_1D(R), Eq. (<ref>). Third, the width and density monotonically
grow along the curves in Fig. 6. It is seen, however, that the correlation
length does not monotonically increase as both the width and density do:
there is a maximum at each of the three curves. But the most interesting
observation is that all these maxima occur at the density ρ =1 when
a pore length interval equal to the disk diameter d is on average
occupied by one disk. This is another peculiarity of these density and
disks' separation indicated above.
Thus, Figs. 2, 3, and 6 show that the PDFs g_1(R) and g(R) have
peculiarities at the density ρ =1 in the form of certain peaks or
maxima. Moreover, for Δ =0.5, density ρ =1 is in the
intermediate range between high and low densities where we found a high
sensitivity of the pressure to density. Consider this effect which can be
related to the mismatch between the correlations in an infinite and finite
system with periodic boundary condition. It has been suggested that the peak
at the distribution of next neighbors at R=1, Figs.2, 3, is related to the
tendency of the system to produce windowlike defects to increase the entropy
as such a defect enables disk's travel across the pore <cit.>. However, our finding that the correlation
length has a maximum at ρ =1 for any pore width, Fig.6, is
unexpected and cannot be explained by this idea alone. At higher densities,
the peak at R=1 is diminishing and the peak at another distinguished,
namely the average distance l_N=L/N<1, is rising and eventually
dominates the one at R=1. As the peak of g_1(R) at R=L/N
is definitely related to the longitudinal component of the zigzag order, it
is natural to connect the peak at ρ =1, at least partially, to the
nascent longitudinal ordering, too. In the light of this idea, the maxima of
the correlation length become reminiscent of the correlation length increase
at a phase transition. Of course, there is no transition at ρ =1,
but a kind of pretransitional effect seems to show up. Interestingly, in
recent paper on the same q1D HD system <cit.>, the authors found,
also for all widths Δ's, well developed compressibility peaks at ρ≈ 1 showing that at this density the system is softer even than
at lower densities, which is in line with the above idea. We may then
speculate that at ρ =1, this effect is somehow related to the
increase of the correlation length and to enforced correlation sensitivity
at densities in the vicinity of ρ =1. Approaching thermodynamic
equilibrium, a system tends to find more space to increase its entropy. For
densities near close packing, it does not have much choice: the interdisk space is
very limited, correlating many such spacings along the system requires
extremely fine adjustments so that the correlations are determined by the
average distance L/N. At low densities, the interdisk spacings are large
and uncorrelated, so that again entropy-wise the correlations are connected
to L/N rather than to the global system's size. But at the intermediate
densities, when the longitudinal and nascent transverse orders compete, the
system tends to benefit from both interdisk spaces, the strict L/N and
those nearby L/N∼ 1. To do so, it searches for the space by correlating
interdisk spaces along the system so that the system size comes into play.
As a result, the size effect can manifest itself in the pressure: slight
increase of density adjusts the pressure in an infinite system to that in a
finite one.
§ CONCLUSION
We derived the formulae for the two important PDFs g(R) and g_1(R) for a q1D HD system and demonstrated that they can be readily
used. Apart from that, based on our findings on the correlation lengths, we
suggested that the density ρ =1 plays a distinguished role in the
zigzag transformation with density irrespective of the pore width. We
related this to a nascent longitudinal order and the system tendency to
correlate multiple interdisk spacings along the system to increase its
entropy. To this effect we attributed the high sensitivity of the system
pressure to its density in the vicinity of ρ =1 which was also
revealed in <cit.>. As the pressure is affected by a system size and
can be slightly higher in a finite system with periodic boundary conditions
than in an infinite system, the PDF g(R) and next-neighbor
distribution g_1(R), which nearly coincide with computer simulation
data for high and low densities, can differ for intermediate densities in
the vicinity of ρ =1, but can be made coinciding by the
correspondent density increase in an infinite system. Of course, one obvious
reason for the observed mismatch between the theoretical predictions and
simulation data can be the approximation described in Sec.2, but one cannot
also exclude an effect of the pressure difference between a finite system
with the periodic boundary condition and infinite system, which is possible
at the intermediate densities. Note that the theoretical results <cit.> based on the transfer matrix approach show a good fit to
the simulation data for high, low, and intermediate densities. As the
periodic boundary conditions along the pore are essential for both these
approaches, the relation between the results obtained for a finite and
infinite systems is yet to be clarified. The investigation of a similar
problem in the physics of one-dimensional ultra cold quantum gases shows
that this problem is nontrivial and worth addressing <cit.>.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or
personal relationships that could have appeared to influence the work
reported in this paper.
§ ACKNOWLEDGMENTS
V.M.P. is grateful to Center for Theoretical Physics PAS for hospitality and
financial support via (Polish) National Science Center Grant No.
2019/34/E/ST2/00289 and project POLONEZ BIS reg. no. 2022/45/P/ST3/04237.
T.B. and A.T. were supported by the National Research Foundation of Ukraine
under the grant 2020.02/0115.
99
Lieb E. H. Lieb, D. C. Mattis, Mathematical Physics in One
Dimension: Exactly Soluble Models of Interacting Particles, Academic, New
York, 2013.
Tonks L. Tonks, The complete equation of state of one, two and
three-dimensional gases of hard elastic spheres, Phys. Rev. 50
(1936) 955-963. https://doi.org/10.1103/PhysRev.50.955
Wojc K.W. Wojciechowski, P. Pieranski, J. Małecki, A hard-disk
system in a narrow box. I. Thermodynamic properties, J. Chem. Phys. 76,
6170-6175 (1982). https://doi.org/10.1063/1.443019
Kofke D.A. Kofke, A.J. Post, Hard particles in narrow pores.
Transfer-matrix solution and the periodic narrow box, J. Chem. Phys. 98
(1993) 4853- 4861. https://doi.org/10.1063/1.479206
Kramers H. A. Kramers, G. H. Wannier, Statistics of the
Two-Dimensional Ferromagnet. Part I, Phys. Rev. 60 (1941) 252-262. https://doi.org/10.1103/PhysRev.60.252.
Varga S. Varga, G. Balló, P. Gurin, Structural properties of
hard disks in a narrow tube, J. Stat. Mech. Theory Exp. P11006 (2011).
10.1088/1742-5468/2011/11/P11006
Varga2013 P. Gurin, S. Varga, Pair correlation functions of two-
and three-dimensional hard-core fluidsconfined into narrow pores: Exact
results from transfer-matrix method, J. Chem. Phys. 139 (2013) 244708-6.
Godfrey2015 M. Godfrey, M. Moore, Understanding the ideal glass
transition: Lessons from an equilibrium study of hard disks in a channel,
Phys. Rev. E 91 (2015) 022120-15. https://doi.org/10.1103/PhysRevE.91.022120
Robinson J.F. Robinson, M.J. Godfrey, M.A. Moore, Glasslike
behavior of a hard-disk fluid confined to a narrow channel, Phys. Rev. E 93
(2016) 032101-10. https://doi.org/10.1103/PhysRevE.93.032101
Hu Y. Hu, L. Fu, P. Charbonneau, Correlation lengths in
quas-ione-dimensional systems via transfer matrices, Mol. Phys. 116 (2018)
3345-3354. https://doi.org/10.1080/00268976.2018.1479543
Comment Y. Hu, P. Charbonneau, Comment on “Kosterlitz-Thouless-type caging-uncaging transition in a
quasi-one-dimensional hard disk system”, Phys. Rev.
Research 3 (2021) 038001-5. https://doi.org/10.1103/PhysRevResearch.3.038001
squares P. Gurin, G. Odriozola, S. Varga, Critical behavior of
hard squares in strong confinement, Phys. Rev. E 95 (2017) 042610-10.
https://doi.org/10.1103/PhysRevE.95.042610
Mon K.K. Mon, Virial series expansion and Monte Carlo studies of
equation of state for hard spheres in narrow cylindrical pores, Phys. Rev. E
97 (2018) 052114-7. https://doi.org/10.1103/PhysRevE.97.052114
Yamchi M.Z. Yamchi, S.S. Ashwin, R.K. Bowles, Inherent
structures, fragility, and jamming: Insights from quasi-one-dimensional hard
disks, Phys. Rev. E 91 (2015) 022301-12. https://doi.org/10.1103/PhysRevE.91.022301
Hicks C.L. Hicks, M.J. Wheatley, M.J. Godfrey, M.A. Moore, Gardner
transition in physical dimensions, Phys. Rev. Lett. 120 (2018) 225501. https://doi.org/10.1103/PhysRevLett.120.225501
AiT A. Huerta, T. Bryk, V.M. Pergamenshchik, A. Trokhymchuk,
Collective dynamics in quasi-one-dimensional hard disk system, Frontiers in
Physics 9 (2021) 636052-15.
Holovko M.F. Holovko, V.I. Shmotolokha, W. Dong, Analytical theory
of one- and two-dimensional hard sphere fluids in random porous media, Cond.
Matter Phys. 13 (2010) 23607-7.
http://dspace.nbuv.gov.ua/handle/123456789/32097
BEC M.A. Cazalilla, R. Citro, T. Giamarchi, E. Orignac, M. Rigol,
One dimensional bosons: From condensed matter systems to ultracold gases,
Rev. Mod. Phys. 83 (2011) 1405-1466. https://doi.org/10.1103/RevModPhys.83.1405
JCP2020 V.M. Pergamenshchik, Analytical canonical partition
function of a quasi-one-dimensional system of hard disks. J. Chem. Phys. 153
(2020) 144111-10. https://doi.org/10.1063/5.0025645
Yukhnovski I.R. Yukhnovski, M.F. Holovko, Statistical Theory of
Classical Equilibrium Systems, Naukova Dumka, Kyiv, 1980 (in Russian).
Santos A. Santos, A Concise Course on the Theory of Classical
Liquids. Basics and Selected Topics, Lecture Notes in Physics, Vol. 923,
Springer International Publishing, Switzerland, 2016.
https://doi.org/10.1007/978-3-319-29668-5
Frenkel J. Frenkel, Kinetic Theory of Liquids, Dover Publications,
NY, 1946.
Nagamiya T. Nagamiya. Statistical mechanics of one-dimensional
substances I, Proc. Phys.-Math. Soc. Japan 22 (1940) 705-729. https://doi.org/10.11429/ppmsj1919.22.8-9_705
WE A. Huerta, T.M. Bryk, V.M. Pergamenshchik, A.D. Trokhymchuk,
Kosterlitz-Thouless-type caging-uncaging transition in a
quasi-one-dimensional hard disk system, Phys. Rev. Research 2 (2020)
033351-5. https://doi.org/10.1103/PhysRevResearch.2.033351
ArXiv After this paper was submitted, Montero and Santos presented
their analytical theory of g(R) which is based on the formulae equivalent
to the transfer matrix approach with the periodic boundary condition <cit.>.
Santos2 A.M. Montero, A. Santos, Structural properties of
hard-disk fluids under single-file confinement, arXiv:2304.14290v1 (2023).
https://arxiv.org/abs/2304.14290
Saika R.K. Bowles, I. Saika-Voivod, Landscapes, dynamic
heterogeneity, and kinetic facilitation in a simple off-lattice model, Phys.
Rev. E 73 (2006) 011503-4 . https://doi.org/10.1103/PhysRevE.73.011503
Santos1 A.M. Montero, A. Santos, Equation of state of hard-disk
fluids under single-file confinement, J. Chem. Phys. 158 (2023) 154501-5.
doi:10.1063/5.0139116
JofPhysics M.T. Batchelor, X.W. Guan, N. Oelkers, C. Lee, The 1D
interacting Bose gas in a hard wall box, J. Phys. A 38 (2005) 7787. DOI
10.1088/0305-4470/38/36/001
|
http://arxiv.org/abs/2307.04224v1 | 20230709163518 | Reach of Segre-Veronese Manifolds | [
"Paul Breiding",
"Sarah Eggleston"
] | math.AG | [
"math.AG",
"math.DG"
] |
We compute the reach, extremal curvature and volume of a tubular neighborhood for the Segre–Veronese variety intersected with the unit sphere.
Keywords. Tensors, Rank-One Tensors, Reach, Curvature, Tubes.
§ INTRODUCTION
In this paper we study the metric geometry of rank-one tensors. More specifically, we compute the reach and the volume of tubular neighborhoods of the Segre–Veronese variety, i.e. the variety of rank-one tensors in the space of partially symmetric tensors. Since rank-one tensors form a cone, we intersect the Segre–Veronese variety with the unit sphere, thus obtaining the (spherical) Segre–Veronese manifold; the proof that this is a manifold is provided below. We first describe the setting.
Let H_n,d denote the vector space of homogeneous polynomials in n+1 variables x_0,…, x_n of degree d.
We consider the Bombieri-Weyl inner product ⟨ , ⟩ on H_n,d: this is the inner product corresponding to the orthonormal basis vectors m_α := √(dα) x^α, where dα = d!/α_0!⋯α_n! is the multinomial coefficient for α = (α_0,…,α_n). The reason for this choice is that the Bombieri-Weyl inner product is invariant under an orthogonal change of coordinates; this was proved by Kostlan <cit.>.
The norm of a polynomial f∈ H_n,d is ‖ f‖ =√(⟨ f, f⟩), and the sphere is 𝕊(H_n,d):={ f∈ H_n,d|‖ f‖ =1}.
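As a quick numerical illustration (ours, not part of the original article), the following Python sketch evaluates the Bombieri-Weyl inner product from coefficients in the standard monomial basis and checks the identity ⟨ℓ^d, k^d⟩ = ⟨ℓ, k⟩^d, a consequence of the reproducing-kernel property recalled later in the paper; all function names and test values are our own.

from math import factorial, isclose
from itertools import product

def multinom(d, alpha):
    # multinomial coefficient d!/(alpha_0! ... alpha_n!)
    c = factorial(d)
    for a in alpha:
        c //= factorial(a)
    return c

def bw_inner(f, g, d):
    # Bombieri-Weyl inner product of two degree-d forms given as dictionaries
    # {exponent tuple: coefficient} in the standard monomial basis; since the
    # m_alpha are orthonormal, <f, g> = sum_alpha f_alpha * g_alpha / multinom(d, alpha)
    return sum(f.get(a, 0.0) * g.get(a, 0.0) / multinom(d, a) for a in set(f) | set(g))

def power_of_linear_form(ell, d):
    # coefficients of (ell_0 x_0 + ... + ell_n x_n)^d in the standard monomial basis
    coeffs = {}
    for alpha in product(range(d + 1), repeat=len(ell)):
        if sum(alpha) == d:
            c = multinom(d, alpha)
            for li, ai in zip(ell, alpha):
                c *= li ** ai
            coeffs[alpha] = c
    return coeffs

d, ell, k = 3, (0.3, -1.2, 0.5), (2.0, 0.1, -0.7)
f, g = power_of_linear_form(ell, d), power_of_linear_form(k, d)
assert isclose(bw_inner(f, g, d), sum(a * b for a, b in zip(ell, k)) ** d)
assert isclose(bw_inner(f, f, d), sum(a * a for a in ell) ** d)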
For n,d≥ 1, we denote the real variety of powers of linear forms in H_n,d by
𝕍_n,d := {± ℓ^d|ℓ is a linear form in x_0,…,x_n}.
The hat indicates that 𝕍_n,d is the cone over a spherical variety
:=𝕍_n,d∩𝕊(H_n,d),
which we call the (spherical) Veronese variety. Its dimension is n.
We fix r-tuples of positive integers d=(d_1,…,d_r) and n=(n_1,…,n_r) and write
:= H_n_1,d_1⊗⋯⊗ H_n_r,d_r.
The elements in are called
partially symmetric tensors. They are multihomogeneous forms in r sets of variables. The number d=d_1+⋯ +d_r is called the total degree of the tensors in .
For a tensor F = ∑_α_1,…,α_r F_α_1,…, α_r m_α_1⊗⋯⊗ m_α_r∈, we use the short form F = (F_α_1,…, α_r).
The following defines an inner product on :
⟨ F, G ⟩ := ∑_α_1,…,α_r F_α_1,…, α_r· G_α_1,…, α_r, where F = (F_α_1,…, α_r), G = (G_α_1,…, α_r)∈.
With this, becomes a Euclidean space, and we can measure volumes and distances in . The norm of a tensor F∈ is ‖ F‖ := √(⟨ F, F⟩), and the angular distance is d_𝕊( F, G):= arccos⟨ F, G⟩ for F, G in the unit sphere 𝕊()⊂.
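In coordinates (a small sketch under our own conventions, not from the paper): if each factor is written in its orthonormal Bombieri-Weyl basis, the coefficient vector of a simple tensor f_1⊗⋯⊗f_r is the Kronecker product of the factors' coefficient vectors, and the inner product above factors accordingly. The same coordinate representation is reused in the numerical curvature check further below.

import numpy as np
from math import comb
from functools import reduce

def bw_coords_of_power(ell, d):
    # coordinates of (l0*x0 + l1*x1)^d (two variables for simplicity) in the
    # orthonormal basis m_(d-a, a) = sqrt(binom(d, a)) x0^(d-a) x1^a
    l0, l1 = ell
    return np.array([np.sqrt(comb(d, a)) * l0 ** (d - a) * l1 ** a for a in range(d + 1)])

rng = np.random.default_rng(0)
degrees = (2, 3)
ells = [rng.standard_normal(2) for _ in degrees]
ks = [rng.standard_normal(2) for _ in degrees]

F = reduce(np.kron, [bw_coords_of_power(l, d) for l, d in zip(ells, degrees)])
G = reduce(np.kron, [bw_coords_of_power(k, d) for k, d in zip(ks, degrees)])

lhs = F @ G   # <f_1 (x) f_2, g_1 (x) g_2>
rhs = np.prod([bw_coords_of_power(l, d) @ bw_coords_of_power(k, d)
               for l, k, d in zip(ells, ks, degrees)])
assert np.isclose(lhs, rhs)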
The (spherical) Segre–Veronese variety in is
:= { f_1⊗⋯⊗ f_r| f_i∈𝕍_n_i,d_i}∩𝕊().
This is the variety of products of powers of linear forms in that have unit norm. Tensors in are also called decomposable, simple or rank-one tensors. We prove in Proposition <ref> that is an embedded smooth submanifold of 𝕊() of dimension =n_1+⋯+n_r; hence, we call a (spherical) Segre–Veronese manifold.
The main focus of this paper is the reach and the volume of a tubular neighborhood of . We briefly recall what these are:
the medial axis Med(S) of a subset S⊂𝕊() is the set of all points F∈𝕊() such that there exist at least two points G_1, G_2 ∈ S with d_𝕊( F, S) = d_𝕊( F, G_i), i=1,2. The reach of a subset S is its minimum distance to the medial axis:
τ(S):=inf_ F ∈ Sd_𝕊( F, Med(S)).
This notion was first introduced by Federer <cit.>.
In our first main theorem we calculate the reach of the (spherical) Segre–Veronese manifold.
Let d=(d_1,…,d_r) and n=(n_1,…,n_r) be r-tuples of positive integers, and let d:=d_1+⋯+ d_r≥ 2 be the total degree. The reach of the (spherical) Segre–Veronese manifold is
τ()= π/4 if d≤ 5, and τ()= √(d/(2(d-1))) if d>5.
In particular, the reach only depends on the total degree d and not on the dimensions of the Veronese varieties 𝕍_n_i,d_i.
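A small sanity check of the case split (our own script; the formula is taken verbatim from the theorem): the curvature bound √(d/(2(d-1))) decreases in d and drops below the bottleneck bound π/4 exactly after d=5.

from math import pi, sqrt

def reach_segre_veronese(d):
    # reach as the minimum of the bottleneck bound pi/4 and the curvature bound
    return min(pi / 4, sqrt(d / (2 * (d - 1))))

for d in range(2, 9):
    print(d, round(sqrt(d / (2 * (d - 1))), 4), round(reach_segre_veronese(d), 4))
# the second column crosses pi/4 ~ 0.7854 between d = 5 and d = 6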
This extends a theorem by Cazzaniga, Lerario and Rosana <cit.>, who proved this formula for the Veronese variety, which is the special case r=1. Another special case worth mentioning is d_1=⋯=d_r=1, which corresponds to the Segre variety.
Since is a smooth submanifold of the sphere, its reach is the minimum of the inverse of its largest radius of curvature and its smallest bottleneck. We also compute these. The next theorem explains which curves in have maximal and minimal curvature; this is proved in Section <ref>.
Let the total degree of the (spherical) Segre–Veronese manifold be d=d_1+⋯+d_r≥ 2. Consider a geodesic
γ(t) = γ_1(t)⊗⋯⊗γ_r(t) ∈.
* The maximum curvature of γ(t) is √(2(d-1)/d). It is attained by curves where the γ_i(t) are curves of constant speed ‖γ_i'(t)‖ = √(d_i/d).
* The minimal curvature is √(2(d_ℓ-1)/d_ℓ), where d_ℓ=min{d_1,…,d_r}.
It is attained by curves where γ_ℓ(t) is a geodesic parametrized by arc length in 𝕍_n_ℓ,d_ℓ and the other γ_i(t) are constant.
Our third main result concerns the volume of the tubular neighborhood
U(ε) := { F∈𝕊() | d_𝕊( F, ) < ε}.
In Section <ref> we compute this volume in terms of complete matchings in a weighted graph. For a tuple (m_1,…,m_r) of nonnegative integers let G=(V,E) be the complete graph on m:=m_1+⋯+m_r vertices. Recall that the tuple of degrees is d = (d_1,…, d_r). We define weights on E as follows: the vertices of G are partitioned into r groups V= ℐ_1⊔⋯⊔ℐ_r of cardinalities |ℐ_k| = m_k. The weight w(e) of an edge e between vertices in group ℐ_k is w(e)=d_k(d_k-1). Given a perfect matching C⊂ E we define its weight to be w(C):=∏_e∈ C w(e).
This defines the function
D_ d(m_1,…,m_r) := (-1)^m/2∑_C⊂ E perfect matching w(C).
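The function D_ d can be evaluated directly from this definition by brute-force enumeration of perfect matchings. The following Python sketch (ours, only practical for small m) does exactly that; it is reused in the verification of the Segre example further below.

def perfect_matchings(vertices):
    # all perfect matchings of a list of vertices, as lists of pairs
    if not vertices:
        yield []
        return
    first, rest = vertices[0], vertices[1:]
    for i, partner in enumerate(rest):
        for matching in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + matching

def D(degrees, sizes):
    # D_d(m_1,...,m_r): signed weighted count of perfect matchings of the complete
    # graph on m_1 + ... + m_r vertices; an edge inside group k has weight
    # d_k*(d_k - 1), an edge between two groups has weight 1
    group = [k for k, m in enumerate(sizes) for _ in range(m)]
    m_total = len(group)
    if m_total % 2 == 1:
        return 0
    total = 0
    for matching in perfect_matchings(list(range(m_total))):
        w = 1
        for u, v in matching:
            if group[u] == group[v]:
                w *= degrees[group[u]] * (degrees[group[u]] - 1)
        total += w
    return (-1) ** (m_total // 2) * total

print(D((1, 1, 1, 1), (2, 2, 1, 1)))   # -10, as in the Segre example below
print(D((2, 2), (1, 1)))               # two degree-2 factors with one vertex each: -1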
We now have the following result.
Let n =, N = (𝕊()) and c=N-n. Define
J_i(ε) = ∫_0^ε (sinϕ)^N-n+2i· (cosϕ)^n-2i dϕ
and
a_i = Γ(c/2)/(2^i Γ(i+c/2)) ∑_m_1,…,m_r∈ℕ: m_i≤ n_i, m_1+⋯+m_r = 2i D_ d(m_1,…,m_r).
Then, for ε < τ() we have
vol(U(ε)) = √(d_1^n_1⋯ d_r^n_r)/2^r-1·vol(𝕊^n_1) ⋯vol(𝕊^n_r) ·vol(𝕊^c-1)·∑_0≤ 2i≤ n a_i· J_i(ε).
The proof of this theorem is based on computing the Weingarten map of 𝕏_ n, d, which we do in Theorem <ref>. We show that the Weingarten map of 𝕏_ n, d admits a block structure where the diagonal blocks are the Weingarten maps of the Veronese factors.
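For concreteness, here is a sketch (ours, not from the paper) that evaluates the right-hand side of the theorem numerically, reusing the D helper from the previous sketch; the quadrature rule, the example parameters and all names are our own choices.

import numpy as np
from math import gamma, pi, comb, prod, sqrt
from itertools import product as iproduct

def sphere_vol(k):
    # surface volume of the unit k-sphere
    return 2 * pi ** ((k + 1) / 2) / gamma((k + 1) / 2)

def J(i, eps, n, N, num=4001):
    # J_i(eps) = int_0^eps sin(phi)^(N-n+2i) cos(phi)^(n-2i) dphi, trapezoid rule
    phi = np.linspace(0.0, eps, num)
    vals = np.sin(phi) ** (N - n + 2 * i) * np.cos(phi) ** (n - 2 * i)
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(phi)) / 2)

def tube_volume(degrees, dims, eps):
    # evaluates the tube-volume formula as stated, for eps below the reach
    r, n = len(degrees), sum(dims)
    N = prod(comb(ni + di, di) for ni, di in zip(dims, degrees)) - 1  # dim of the ambient sphere
    c = N - n
    series = 0.0
    for i in range(n // 2 + 1):
        s = sum(D(degrees, m)
                for m in iproduct(*(range(ni + 1) for ni in dims)) if sum(m) == 2 * i)
        a_i = gamma(c / 2) / (2 ** i * gamma(i + c / 2)) * s
        series += a_i * J(i, eps, n, N)
    pref = sqrt(prod(di ** ni for di, ni in zip(degrees, dims))) / 2 ** (r - 1)
    pref *= prod(sphere_vol(ni) for ni in dims) * sphere_vol(c - 1)
    return pref * series

print(tube_volume((1, 2), (1, 1), 0.2))   # degrees (1,2), dimensions (1,1), eps = 0.2 < pi/4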
§.§ Acknowledgements
We thank Antonio Lerario and Andrea Rosana for helping in finding several references related to differential geometry and for carefully explaining their paper <cit.> to us. We also thank Jan Draisma for a discussion which led to Theorem <ref>.
§.§ Organization of the paper
In Section <ref> we discuss the differential geometry and curvature of manifolds defined by tensor products of vectors. We then apply the results from Section <ref> in Section <ref> to study the curvature of the (spherical) Segre–Veronese manifold 𝕏_ n, d. In particular, we work out the Weingarten map of 𝕏_ n, d. In Section <ref> we compute the reach and prove Theorems <ref> and <ref>. Finally, in Section <ref>, we compute the volume of the tubular neighborhood and prove Theorem <ref>.
§ TENSOR PRODUCTS OF RIEMANNIAN MANIFOLDS
The tensor space ℝ^m_1+1⊗⋯⊗ℝ^m_r+1 is a Euclidean space for the inner product defined by ⟨ x_1⊗⋯⊗ x_r , y_1⊗⋯⊗ y_r⟩ = ⟨ x_1, y_1⟩⋯⟨ x_r, y_r⟩, where ⟨ x, y⟩ = x^T y. Write N:=m_1+⋯+m_r+r-1; then 𝕊^N is the sphere in ℝ^m_1+1⊗⋯⊗ℝ^m_r+1.
We consider for 1≤ i≤ r a smooth embedded submanifold 𝕄_i of the sphere 𝕊^m_i⊂ℝ^m_i+1. We define the tensor product of these manifolds to be
𝕄_1⊗⋯⊗𝕄_r:= { x_1 ⊗⋯⊗ x_r | x_1∈𝕄_1,…, x_r∈𝕄_r}.
For 1≤ i≤ r let 𝕄_i be a smooth Riemannian submanifold of 𝕊^m_i of dimension n_i, and denote
𝕄:= 𝕄_1⊗⋯⊗𝕄_r.
Furthermore, denote the tensor product map by ψ: 𝕄_1×⋯×𝕄_r→𝕄, ( x_1,…, x_r)↦ x_1 ⊗⋯⊗ x_r.
Then:
* 𝕄 is a Riemannian submanifold of 𝕊^N of dimension n_1+⋯+n_r.
* The tangent space of 𝕄 at x= x_1 ⊗⋯⊗ x_r is
T_ x𝕄 = T_ x_1𝕄_1⊗ x_2 ⊗⋯⊗ x_r + ⋯ + x_1⊗ x_2 ⊗⋯⊗ T_ x_r𝕄_r.
* ψ is a local isometry.
For 1≤ i≤ r let 𝒜_i = (𝒰_i_j,φ_i_j)_j be an atlas for 𝕄_i, such that u∈𝒰_i_j implies that the antipodal point - u∉𝒰_i_j. Such an atlas exists, since 0∉𝕄_i. Define the open sets 𝒰_i_1,…,i_r := ψ(U_i_1×⋯× U_i_r); then ψ|_U_i_1×⋯× U_i_r is an isomorphism, so we have an atlas for 𝕄 with charts 𝒰_i_1,…,i_r
and maps (φ_i_1×…×φ_i_r) ∘ (ψ|_U_i_1×⋯× U_i_r)^-1. This also shows that we have 𝕄= (𝕄_1×⋯×𝕄_r)= n_1+⋯ +n_r. The Riemannian structure on the ambient space 𝕊^N induces a Riemannian structure on 𝕄.
For the second statement, we use that T_( x_1,…, x_r)𝕄 = T_ x_1𝕄_1×⋯× T_ x_r𝕄_r. For 1≤ i≤ r let v∈ T_ x_i𝕄_i.
By multilinearity, the derivative of ψ at ( x_1,…, x_r) maps
D_( x_1,…, x_r)ψ (0,…,0, v,0,…, 0) = x_1⊗⋯⊗ x_i-1⊗ v⊗ x_i+1⊗⋯⊗ x_r.
This proves the second statement, since T_ x𝕄 is the image of D_( x_1,…, x_r)ψ.
Finally, for v∈ T_ x_i𝕄_i and w∈ T_ x_j𝕄_j we have
⟨ x_1⊗⋯⊗ v⊗⋯⊗ x_r, x_1⊗⋯⊗ w⊗⋯⊗ x_r⟩ = ⟨ v, w⟩, i=j
⟨ v, x_i⟩ ⟨ w, x_j⟩, i≠ j
Since ⟨ v, x_i⟩ = ⟨ w, x_j⟩=0, this shows that the inner product between the images of (0,…,0, v,0,…, 0) and (0,…,0, w,0,…, 0) under D_( x_1,…, x_r)ψ is ⟨ v, w⟩, if i=j, and 0 otherwise. This shows D_( x_1,…, x_r)ψ preserves inner products on a basis of T_( x_1,…, x_r)𝕄 and hence is an orthogonal map. This proves the third statement.
Using the notation of Proposition <ref> we can now write
= 𝕍_n_1,d_1⊗⋯⊗𝕍_n_r,d_r.
Furthermore, Proposition <ref> implies that is a smooth submanifold of the sphere of dimension = n_1+⋯+n_r.
Therefore, we will henceforth call it the (spherical) Segre–Veronese manifold.
§.§ The second fundamental form of a tensor product manifold
Recall that the second fundamental form II_ x of a Riemannian submanifold 𝕄⊂𝕊^N at a point x∈𝕄 is the trilinear form
II_ x: T_ x𝕄× T_ x𝕄× N_ x𝕄→ℝ, ( v, w, a)↦⟨∂ v( u)/∂ w|_ u = x, a ⟩,
where v( u) is a (local) smooth tangent field of 𝕄 with v( x)= v.
For a fixed a∈ N_ x𝕄 the Weingarten map is the linear map
L_ a: T_ x𝕄→ T_ x𝕄,
such that II_ x ( v, w, a) = ⟨ v, L_ a( w)⟩.
The next proposition provides the Weingarten map for a tensor product of manifolds.
Let 𝕄_1,…, 𝕄_r be as in Proposition <ref> and 𝕄 = 𝕄_1⊗⋯⊗𝕄_r. Consider a point x = x_1⊗⋯⊗ x_r∈𝕄 and a normal vector a∈ N_ x𝕄. A matrix representation of the Weingarten map of 𝕄 at x in direction a relative to orthonormal coordinates is
L_ a = [ L_1 L_1,2 ⋯ L_1,r; (L_1,2)^T L_2 ⋯ L_2,r; ⋮ ⋮ ⋱ ⋮; (L_1,r)^T (L_2,r)^T ⋯ L_r ],
where the matrices L_i,j and L_i are defined as follows: let v_1^(i),…, v_n_i^(i) be an orthonormal basis for the tangent space T_ x_i𝕄_i.
* The off-diagonal blocks are
L_i,j := [ ⟨ x_1⊗⋯⊗ v_k^(i)⊗⋯⊗ v_ℓ^(j)⊗⋯⊗ x_r, a⟩ ]_1≤ k≤ n_i, 1≤ℓ≤ n_j∈ℝ^n_i× n_j.
* Write R_i := x_1⊗⋯⊗ x_i-1⊗ N_ x_i𝕄_i ⊗ x_i+1⊗⋯⊗ x_r,
and let the orthogonal projection of a onto R_i be x_1⊗⋯⊗ x_i-1⊗ a_i ⊗ x_i+1⊗⋯⊗ x_r.
Then a_i∈ N_ x_i𝕄_i, and L_i∈ℝ^n_i× n_i is a matrix representation of the Weingarten map L_ a_i of 𝕄_i at x_i in direction a_i with respect to the orthonormal basis v_1^(i),…, v_n_i^(i).
By Proposition <ref>, x_1⊗⋯⊗ v_k^(i)⊗⋯⊗ x_r for 1≤ k≤ n_i and 1≤ i≤ r is an orthonormal basis of T_ x𝕄.
Fix tangent vectors v = x_1⊗⋯⊗ v_k^(i)⊗⋯⊗ x_r
and
w:= x_1⊗⋯⊗ v_ℓ^(j)⊗⋯⊗ x_r. Furthermore, let v_k^(i)( u_i) be a local smooth tangent field of 𝕄_i with v_k^(i)( x_i) = v_k^(i). Then, we obtain a local smooth tangent field of 𝕄 with v( x) = v by setting
v( u_1⊗⋯⊗ u_r) := u_1⊗⋯⊗ v_k^(i)( u_i)⊗⋯⊗ u_r.
By multilinearity,
∂ v( u)/∂ w =
x_1⊗⋯⊗∂ v_k^(i)( u_i)/∂ v_ℓ^(i)⊗⋯⊗ x_r, if i= j.
x_1⊗⋯⊗ v_k^(i)⊗⋯⊗ v_ℓ^(j)⊗⋯⊗ x_r, if i≠ j;
This shows that the off-diagonal blocks of L_ a are the matrices L_i,j.
For the diagonal blocks (i=j) we observe that x_1⊗⋯⊗∂ v_k^(i)( u_i)/∂ v_ℓ^(i)⊗⋯⊗ x_r∈ R_i, so
⟨ v, L_ a( w)⟩ = II_ x( v, w, a) = ⟨ x_1⊗⋯⊗∂ v_k^(i)( u_i)/∂ v_ℓ^(i)⊗⋯⊗ x_r, a⟩
= ⟨∂ v_k^(i)( u_i)/∂ v_ℓ^(i), a_i⟩= ⟨ v_ℓ^(i), L_ a_i( v_k^(i))⟩.
This settles the case i=j.
§ GEODESICS AND SECOND FUNDAMENTAL FORM OF SEGRE–VERONESE MANIFOLDS
Recall from (<ref>) that the (spherical) Segre–Veronese manifold is
= 𝕍_n_1,d_1⊗⋯⊗𝕍_n_r,d_r.
We now use the results from the previous section to compute geodesics, the second fundamental form and the Weingarten map for a (spherical) Segre–Veronese manifold 𝕏_ n, d. The first step towards this goal is considering the Veronese manifold (r=1).
§.§ Veronese manifolds
The Bombieri-Weyl inner product on the space of homogeneous polynomials H_n,d has the property that
⟨ f, ℓ^d⟩ = f(ℓ_0,…,ℓ_n), where ℓ( x) = ℓ_0x_0+⋯ + ℓ_nx_n;
that is, taking the inner product of f∈ H_n,d with ℓ^d∈𝕍_n,d evaluates f at the coefficient vector of ℓ. One calls ( x, y)↦⟨ x, y⟩^d a reproducing kernel for H_n,d.
Recall that the scaled monomials m_α = √(dα) x^α form an orthonormal basis for the space of polynomials H_n,d. We first prove a lemma on the structure of the tangent space of the Veronese manifold.
Consider m_(d,0,…,0) = x_0^d∈. Then the monomials m_(d-1,0,…,0,1,0,…,0) =√(d) x_0^d-1x_k, where the 1 is in position k, for 1≤ k≤ n, form an orthonormal basis of the tangent space T_x_0^d.
It follows from <cit.> that T_x_0^d is spanned by √(d) x_0^d-1x_k, 1≤ k≤ n. The fact that these monomials are orthonormal follows directly from the definition of the Bombieri-Weyl inner product.
We denote the two linear spaces from <cit.>:
P :=span{ m_α|α_0 < d-2} and
W:=span{ m_α|α_0 = d-2}.
The spaces P and W are orthogonal to each other. Lemma <ref> implies the following.
N_x_0^d = P⊕ W.
The next theorem follows from Equations (28) and (29) in <cit.>.
Let f∈ N_x_0^d = P⊕ W and L_ f be the Weingarten map of at x_0^d and f.
* If f∈ P, then L_ f = 0.
* If f ∈ W, then L_ f can be represented in orthonormal coordinates by the matrix
L_ f = √(d-1/d) [ √(2)· f_1,1 f_1,2 ⋯ f_1,n; f_2,1 √(2)· f_2,2 ⋯ f_2,n; ⋮ ⋮ ⋱ ⋮; f_n,1 f_n,2 ⋯ √(2)· f_n,n ],
where
f = ∑_1≤ i<j≤ n f_i,j√(d(d-1)) x_0^d-2x_ix_j + ∑_1≤ i≤ n f_i,i√(d(d-1)/2) x_0^d-2x_i^2.
Recall that a random symmetric n× n matrix L=(ℓ_i,j) has distribution L∼GOE(n), if ℓ_i,j∼ N(0,1/2) for i≠ j and ℓ_i,i∼ N(0,1) and all entries are independent (except for the symmetry condition). The probability density of L is proportional to exp(-1/2Trace(L^TL)).
Let f∈ N_x_0^d and L_ f be the Weingarten map of at x_0^d and f. If f is Gaussian with respect to the Bombieri-Weyl metric then
L_ f∼√(2(d-1)/d) GOE(n).
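The scaling in this corollary is easy to check by simulation. The sketch below (ours) draws the matrix of the preceding theorem with independent standard Gaussian coefficients f_ij and compares the empirical entry variances with those of √(2(d-1)/d)·GOE(n).

import numpy as np

rng = np.random.default_rng(1)
n, d, samples = 4, 3, 200_000

U = np.triu(rng.standard_normal((samples, n, n)), k=1)   # f_ij for i < j
diag = rng.standard_normal((samples, n))                  # f_ii
M = U + np.transpose(U, (0, 2, 1))
M[:, np.arange(n), np.arange(n)] = np.sqrt(2) * diag      # sqrt(2)*f_ii on the diagonal
L = np.sqrt((d - 1) / d) * M                              # the Weingarten matrix of the theorem

# sqrt(2(d-1)/d)*GOE(n) has off-diagonal variance (d-1)/d and diagonal variance 2(d-1)/d
print(L[:, 0, 1].var(), (d - 1) / d)
print(L[:, 0, 0].var(), 2 * (d - 1) / d)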
§.§ Segre–Veronese manifold
We now turn to the Segre–Veronese manifold 𝕏_ n, d. We first show that 𝕏_ n, d is a homogeneous space. This allows us to compute geodesics and the second fundamental form at the distinguished point
E := x_0^d_1⊗⋯⊗ x_0^d_r∈𝕏_ n, d.
The Bombieri-Weyl inner product on has the property
⟨ f_1⊗⋯⊗ f_r, g_1⊗⋯⊗ g_r⟩ := ⟨ f_1, g_1⟩⋯⟨ f_r, g_r⟩.
Moreover, it is invariant under orthogonal change of variables; i.e., for orthogonal matrices Q_1∈ O(n_1+1),…, Q_r∈ O(n_r+1) we have
⟨ ( f_1∘ Q_1)⊗⋯⊗ ( f_r∘ Q_r), ( g_1∘ Q_1)⊗⋯⊗ ( g_r∘ Q_r)⟩ = ⟨ f_1⊗⋯⊗ f_r, g_1⊗⋯⊗ g_r⟩.
This invariance was proved by Kostlan in <cit.>.
We extend this linearly to an isometric action of O(n_1+1)×⋯× O(n_r+1) on . Furthermore, the orthogonal group O(n_i+1) acts transitively on 𝕍_n_i, d_i for every i. This implies the following.
is a homogeneous space under the isometric O(n_1+1)×⋯× O(n_r+1) action on restricted to .
Next, we provide an explicit description of geodesics in 𝕏_ n, d.
Let γ(t) be a geodesic in 𝕏_ n, d parametrized by arc length and passing through E. Up to the action by a tuple of orthogonal matrices in O(n_1+1)×⋯× O(n_r+1) the geodesic γ(t) has the form
γ(t) = γ_1(t) ⊗⋯⊗γ_r(t),
where
γ_i(t) = (cos(d_i^-1/2 a_i(t)) x_0 + sin(d_i^-1/2 a_i(t)) x_1)^d_i
and a_i(t),1≤ i≤ r, are smooth functions with a_1'(t)^2+⋯ +a_r'(t)^2 = 1.
Let γ(t) = γ_1(t) ⊗⋯⊗γ_r(t) be a curve through E. From Proposition <ref> (3) it follows that
‖γ'(t)‖^2 = ‖γ_1'(t)‖^2 + ⋯ + ‖γ_r'(t)‖^2.
Therefore, γ(t) is a geodesic parametrized by arc length if and only if
‖γ_1'(t)‖^2 + ⋯ + ‖γ_r'(t)‖^2=1.
After a rotation in every factor we can assume that γ_i(t) = (cos(w_i(t)) x_0 + sin(w_i(t)) x_1)^d_i where the w_i(t) are smooth functions; then
γ_i'(t) = d_i· (cos(w_i(t)) x_0 + sin(w_i(t)) x_1)^d_i-1· (-sin(w_i(t)) x_0+ cos(w_i(t)) x_1) · w_i'(t).
The norm of this polynomial is ‖γ_i'(t) ‖^2 = d_i · w_i'(t)^2. Setting a_i(t) := d_i^1/2 w_i(t) we see that γ(t) is a geodesic if and only if a_1'(t)^2 +⋯ + a_r'(t)^2 = 1.
For our formulation of the Weingarten map of 𝕏_ n, d we first need to obtain a certain decomposition of the normal space. We define the spaces
𝒲_i := span{ m_α_1⊗⋯⊗ m_α_r| (α_i)_0 = d_i-2, (α_k)_0 = d_k for k≠ i},
𝒢_i,j := span{ m_α_1⊗⋯⊗ m_α_r| (α_i)_0 = d_i-1, (α_j)_0 = d_j -1, (α_k)_0 = d_k , for k≠ i,j}
and set
𝒲 := ⊕_1≤ i ≤ r𝒲_i,
𝒢 := ⊕_1≤ i<j≤ r𝒢_i,j,
𝒫 := (𝒲⊕𝒢)^⊥∩ N_ E𝕏_ n, d.
The next result extends Lemma <ref> to the case r≥ 2.
If r≥ 2, the normal space has the orthogonal decomposition
N_ E𝕏_ n, d = 𝒫⊕𝒲⊕𝒢.
By the inner product rule for simple tensors (<ref>), and since the monomials m_α are orthogonal, the decomposition 𝒫⊕𝒲⊕𝒢 is an orthogonal decomposition.
Therefore, we only have to show that 𝒲,𝒢⊂ N_ E𝕏_ n, d.
It follows from Proposition <ref> (2) that
T_ E𝕏_ n, d = T_x_0^d_1𝕍_n_1,d_1⊗ x_0^d_2⊗⋯⊗ x_0^d_r + ⋯ + x_0^d_1⊗ x_0^d_2⊗⋯⊗ T_x_0^d_r𝕍_n_r,d_r.
Lemma <ref> implies that
T_x_0^d_ℓ𝕍_n_ℓ,d_ℓ = span{ m_α_1⊗⋯⊗ m_α_r| (α_ℓ)_0 = d_ℓ-1, (α_k)_0 = d_k , for k≠ℓ}.
The space 𝒲_i is spanned by simple tensors m_α_1⊗⋯⊗ m_α_r, such that the i-th factor m_α_i is orthogonal to both T_x_0^d_i𝕍_n_i,d_i and x_0^d_i. This already shows, using (<ref>), that 𝒲_i⊥ T_ E𝕏_ n, d. Consequently, 𝒲⊂ N_ E𝕏_ n, d.
The space 𝒢_i,j is spanned by
simple tensors m_α_1⊗⋯⊗ m_α_r, such that the i-th factor m_α_i is orthogonal to x_0^d_i and the j-th factor m_α_j is orthogonal to x_0^d_j. Since T_ E𝕏_ n, d is spanned by simple tensors that have at most one factor different than x_0^d_k, the inner product rule (<ref>) implies that 𝒢_i,j⊥ T_ E𝕏_ n, d for all i,j, hence 𝒢⊂ N_ E𝕏_ n, d.
Let us work out the decomposition from Lemma <ref> in the case of the Segre manifold. This is the case d = 1 = (1,…,1). Since H_n, 1≅ℝ^n, we can view elements in = H_n_1, 1⊗⋯⊗ H_n_r,1 as r-dimensional matrices F=(F_i_1,…,i_r), where 0≤ i_j≤ n_j for 1≤ j≤ r.
Figure <ref> shows an order-three tensor illustrating the case r=3. We have for the Segre manifold
T_ E𝕏_ n, 1 = {(F_i_1,…,i_r)| there is exactly one i_j greater than zero}
Moreover, 𝒲 = ∅ and
𝒢 = {(F_i_1,…,i_r)| there are exactly two i_js greater than zero},
𝒫 = {(F_i_1,…,i_r)| there are at least three i_js greater than zero}.
Figure <ref> shows the case for r=3 the tangent space of 𝕏_ n, 1 in red, 𝒢 in green and 𝒫 in blue.
We can now prove a theorem on the structure of the Weingarten map of 𝕏_ n, d.
Consider a normal vector F ∈ N_ E𝕏_ n, d = 𝒫⊕𝒲⊕𝒢 and L_ F be the Weingarten map of 𝕏_ n, d at E and F. Then, L_ F
is represented in orthonormal coordinates by the matrix
L_ F=
[ L_1 L_1,2 ⋯ L_1,r; (L_1,2)^T L_2 ⋯ L_2,r; ⋮ ⋮ ⋱ ⋮; (L_1,r)^T (L_2,r)^T ⋯ L_r ]∈ℝ^n× n, n=,
defined as follows:
Let us write F = P + W + G, where P∈𝒫, W∈𝒲 and G ∈𝒢. Decompose further:
W = ∑_1≤ i≤ r W_i, G = ∑_1≤ i<j≤ r G_i,j,
where W_i = m_(d_1,0,…,0)⊗⋯⊗ m_(d_i-1,0,…,0)⊗ f_i⊗ m_(d_i+1,0,…,0)⋯⊗ m_(d_r,0,…,0)∈𝒲_i with
f_i = ∑_1≤ k<ℓ≤ n_i f_i, (k,ℓ) m_(d_i-2,0,…, k-th1,…,ℓ-th1,…,0) + ∑_1≤ k ≤ n_i f_i, (k,k) m_(d_i-2,0,…,k-th2,…,0),
and
G_i,j = ∑_k=1^n_i∑_ℓ = 1^n_j g_(i,j),(k,ℓ) m_(d_1,0,…,0)⊗…⊗ m_(d_i-1,0,…, k-th1, …, 0)⊗…⊗ m_(d_r,0,…, 0) .
Then,
L_i = √(d_i-1/d_i)[ √(2)· f_i,(1,1) f_i,(1,2) ⋯ f_i,(1,n_i); f_i,(2,1) √(2)· f_i,(2,2) ⋯ f_i,(2,n_i); ⋮ ⋮ ⋱ ⋮; f_i,(n_i,1) f_i,(n_i,2) ⋯ √(2)· f_i,(n_i,n_i) ]∈ℝ^n_i× n_i
and
L_i,j = [ g_(i,j),(1,1) g_(i,j),(1,2) ⋯ g_(i,j),(1,n_j); g_(i,j),(2,1) g_(i,j),(2,2) ⋯ g_(i,j),(2,n_j); ⋮ ⋮ ⋱ ⋮; g_(i,j),(n_i,1) g_(i,j),(n_i,2) ⋯ g_(i,j),(n_i,n_j) ]∈ℝ^n_i× n_j.
(In particular, L_ F depends on the components 𝒲 and 𝒢. If F∈𝒫, we have L_ F = 0.)
Proposition <ref> implies the block structure of L_ F. The structure of the diagonal blocks is given by Theorem <ref>. The structure of the off-diagonal blocks comes from the fact that m_(d_i-1,0⋯,k-th1,⋯,0)
for 1≤ k≤ n_i are an orthonormal basis of the tangent space T_x_0^d_i𝕍_n_i,d_i by Lemma <ref>.
An immediate corollary of Theorem <ref> comes next.
Let F = (F_α_1,…,α_r)∈ N_ E𝕏_ n, d and L_ F be the Weingarten map of 𝕏_ n, d at E in the normal direction F. If F is Gaussian with respect to the Bombieri-Weyl norm, then
L_ F∼[ L_1 L_1,2 ⋯ L_1,r; (L_1,2)^T L_2 ⋯ L_2,r; ⋮ ⋮ ⋱ ⋮; (L_1,r)^T (L_2,r)^T ⋯ L_r ]
,
where
L_k∼√(d_k(d_k-1)/2) GOE(n_k) and L_i,j∼ N(0, I_n_i⊗ I_n_j),
and all blocks L_k, L_i,j, 1≤ k≤ r, 1≤ i<j≤ r, are independent.
[Weingarten map of the Segre manifold]
We consider again the case of the Segre manifold, where d = 1 = (1,…,1). In this case, the diagonal blocks of L_ F are all zero and the off-diagonal blocks are independent standard normal matrices.
For instance, when n_1 = n_2 = 2 and n_3 = n_4=1 we have two 2× 2 diagonal zero blocks and two 1× 1 diagonal zero blocks:
L_ F = [
[ 0 0 F_1100 F_1200 F_1010 F_1001; 0 0 F_2100 F_2200 F_2010 F_2001; F_1100 F_2100 0 0 F_0110 F_0101; F_1200 F_2200 0 0 F_0210 F_0201; F_1010 F_2010 F_0110 F_0210 0 F_0011; F_1001 F_2001 F_0101 F_0201 F_0011 0 ]],
where F = ∑_i=0^2∑_j=0^2∑_k=0^1∑_ℓ=0^1 F_ijkℓ x_i⊗ x_j⊗ x_k⊗ x_ℓ. We will revisit this in Example <ref> below.
§ REACH OF THE SEGRE–VERONESE MANIFOLD
We compute the reach τ() of the Segre–Veronese manifold. We adapt the strategy from <cit.> and calculate the reach as the minimum of two quantities:
τ() = min{ρ_1, ρ_2},
where ρ_1 is the inverse of the maximal curvature of a geodesic curve in that is parametrized by arc length:
1/ρ_1 = sup{‖ P_ E(γ”(0)) ‖|γ is a geodesic in parametrized by arc length};
here P_ E denotes the orthogonal projection onto N_ E.
The other quantity, ρ_2 , is the width of the smallest bottleneck:
ρ_2 = min{1/2 d_𝕊( F, E) | F∈, F≠ E and F- E ∈ N_ E⊕ℝ· E}.
The goal of this section is to prove the following proposition, giving formulas for both ρ_1 and ρ_2.
Let d=(d_1,…,d_r) and n=(n_1,…,n_r) be r-tuples of positive integers, and let d:=d_1+⋯+ d_r≥ 2. For the (spherical) Segre–Veronese manifold of total degree d, we have
* ρ_1 = √(d/(2(d-1))).
* ρ_2 = π/4.
We prove Proposition <ref> (<ref>) in Section <ref> and (<ref>) in Section <ref>. Because the reach is the minimum of ρ_1 and ρ_2, this proves Theorem <ref>.
§.§ Extremal curvature of the Segre–Veronese manifold
Let γ(t) be a geodesic in parametrized by arc length. By orthogonal invariance (Lemma <ref>) we can assume that γ(0)= E.
As shown in Lemma <ref>, geodesics in parametrized by arc length that pass through E can, without loss of generality, be written as
γ(t) = γ_1(t) ⊗…⊗γ_r(t)
where
γ_i(t) = (cos(d_i^-1/2 a_i(t)) x_0 + sin(d_i^-1/2 a_i(t)) x_1)^d_i
and a_i(t),1≤ i≤ r, are smooth functions such that a_1'(t)^2+⋯ +a_r'(t)^2 = 1.
The first derivative of γ_i(t) at t=0 is
γ'_i(0) = a_i'(0) ·√(d_i) x_0^d_i-1x_1 = a_i'(0) · m_(d_i-1,1,0,…,0).
The second derivative is
γ”_i(0) = -a_i'(0)^2 x_0^d_i + a_i”(0) √(d_i) x_0^d_i-1 x_1 + a_i'(0)^2 (d_i-1) x_0^d_i-2 x_1^2
= -a_i'(0)^2 m_(d_i,0,…,0) + a_i”(0) m_(d_i-1,1,0,…,0) + a_i'(0)^2 √(2(d_i-1)d_i) m_(d_i-2,2,0,…,0).
These formulas give the first and second derivatives of the factors of γ(t).
Next, we compute the derivative of the geodesic γ(t) itself. The first derivative is
γ'(t) = ∑_i=1^r γ_1(t) ⊗…⊗γ_i-1(t) ⊗γ'_i(t) ⊗γ_i+1(t) ⊗…⊗γ_r(t).
The second derivative is
γ”(t) = ∑_i=1^r γ_1(t) ⊗…⊗γ_i-1(t) ⊗γ”_i(t) ⊗γ_i+1(t) ⊗…⊗γ_r(t)
+ 2∑_1≤ i<j ≤ rγ_1(t) ⊗…⊗γ'_i(t) ⊗…⊗γ'_j(t) ⊗…⊗γ_r(t)
The second derivative γ_i”(0) is the sum of three terms. Tensoring the first term in (<ref>) with γ_j(0)=x_0^d_j, j≠ i, gives a multiple of E. Tensoring the second term in γ”_i(0) with γ_j(0)=x_0^d_j, j≠ i, gives a point in the tangent space T_ E𝕏_ n, d by Lemma <ref>. Therefore, projecting γ”(0) onto N_ E, we have
P_ E(γ”(0)) = W + G,
where W∈𝒲 and G∈𝒢 (see (<ref>) for the definition of these spaces) are given by
W = ∑_i=1^r a_i'(0)^2 √(2(d_i-1)d_i) m_(d_1,0,…,0)⊗…⊗ m_(d_i-2,2,0,…,0)⊗…⊗ m_(d_r,0,…,0),
G = 2∑_1≤ i<j ≤ r a_i'(0) a_j'(0) m_(d_1,0,…,0)⊗…
⊗ m_(d_i-1,1,0,…,0)⊗…⊗ m_(d_j-1,1,0,…,0)⊗…⊗ m_(d_r,0,⋯,0).
Let us write
θ_i:=a_i'(0).
As 𝒲⊥𝒢 and the m_α form orthonormal bases, the magnitude of P_ E(γ”(0)) is
‖ P_ E(γ”(0)) ‖^2 = ‖ W ‖^2 + ‖ G ‖^2
= ∑_i=1^r θ_i^4 ·2(d_i-1)/d_i + 4∑_1≤ i < j≤ rθ_i^2 θ_j^2
= ∑_i=1^r θ_i^4 ·2(d_i-1)/d_i + 2∑_i=1^r θ_i^2 ∑_j ≠ iθ_j^2
= ∑_i=1^r ( θ_i^4 ·2(d_i-1)/d_i + 2 θ_i^2(1-θ_i^2) ), (because ∑_i=1^r θ_i^2=1)
= 2∑_i=1^r (θ_i^2 - θ_i^4/d_i) .
To maximize this expression under the constraint ∑_i=1^r θ_i^2=1 we consider the Lagrange function
ℒ(θ_1,…,θ_r,λ) := ∑_i=1^r (θ_i^2 - θ_i^4/d_i) - λ(1-∑_i=1^r θ_i^2) .
Setting the derivatives of ℒ to zero, we have
0 = ∂ℒ/∂θ_i = 2θ_i - 4/d_iθ_i^3 + 2λθ_i ⟹ θ_i = √(d_i(1+λ)/2) or θ_i = 0.
Let us first consider the case when the θ_i are not equal to zero. In this case, the equation ∑_i=1^r θ_i^2 = 1 implies
1 = ∑_i=1^r θ_i^2 = ∑_i=1^r d_i(1+λ)/2 = d(1+λ)/2 ,
where d=d_1+⋯ +d_r is the total degree. This shows λ = 2/d - 1, so that
θ_i = √(d_i/d).
Thus, in this case
‖ P_ E(γ”(0)) ‖ = √( 2 ∑_i=1^r d_i/d - d_i/d^2) = √(2(d-1)/d)
For the other critical values of (θ_1,…,θ_r) we get √(2(d'-1)/d'), where d' = ∑_i∈ I d_i is the total degree of a subset I⊂{1,…,r} of factors. Since x↦√(2(x-1)/x) is an increasing function for x≥ 1, this shows that √(2(d-1)/d) is indeed the maximal curvature. It also shows that √(2(d_ℓ-1)/d_ℓ) is the minimal curvature, where d_ℓ = min{d_1,…,d_r}.
We have shown above that √(2(d-1)/d) is the maximal curvature, and that √(2(d_ℓ-1)/d_ℓ), where d_ℓ = min{d_1,…,d_r}, is the minimal curvature. The curves that attain these curvatures are given by the critical values θ_i. To realize them we can choose constant speed curves defined by a_i(t) = θ_i· t.
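The computation above can also be double-checked numerically. The sketch below (ours, not from the paper) builds the constant speed geodesic with a_i(t)=θ_i t in Bombieri-Weyl coordinates (two variables per factor suffice), approximates γ''(0) by central differences, projects it onto the normal space at E and compares the norm with √(2(d-1)/d).

import numpy as np
from math import comb, sqrt
from functools import reduce

def veronese_coords(w, d):
    # coordinates of (cos(w) x0 + sin(w) x1)^d in the orthonormal basis m_(d-a, a)
    return np.array([sqrt(comb(d, a)) * np.cos(w) ** (d - a) * np.sin(w) ** a
                     for a in range(d + 1)])

def gamma_curve(t, degrees, thetas):
    # gamma(t) = gamma_1(t) (x) ... (x) gamma_r(t) with a_i(t) = theta_i * t
    factors = [veronese_coords(th * t / sqrt(di), di) for th, di in zip(thetas, degrees)]
    return reduce(np.kron, factors)

def unit(k, length):
    v = np.zeros(length)
    v[k] = 1.0
    return v

degrees = (2, 3)
d_total = sum(degrees)
thetas = [sqrt(di / d_total) for di in degrees]   # the maximizing speeds theta_i = sqrt(d_i/d)

h = 1e-4
gpp = (gamma_curve(h, degrees, thetas) - 2 * gamma_curve(0.0, degrees, thetas)
       + gamma_curve(-h, degrees, thetas)) / h ** 2

E = gamma_curve(0.0, degrees, thetas)
tangent = [reduce(np.kron, [unit(1 if j == i else 0, dj + 1) for j, dj in enumerate(degrees)])
           for i in range(len(degrees))]

proj = gpp - (gpp @ E) * E            # remove the radial component
for u in tangent:
    proj -= (proj @ u) * u            # remove the tangential components

print(np.linalg.norm(proj), sqrt(2 * (d_total - 1) / d_total))   # both ~ 1.2649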
§.§ Bottlenecks of the Segre–Veronese manifold
We compute ρ_2, the width of the smallest bottleneck of the Segre–Veronese manifold.
Recall that ρ_2 is the minimum over the distances 1/2 d_𝕊( F, E) where F∈ with F≠ E and F- E∈ N_ E⊕ℝ· E. The latter is equivalent to
⟨ F- E, G⟩ = 0 for all G∈ T_ E𝕏_ n, d.
We have
T_ E𝕏_ n, d = T_x_0^d_1𝕍_n_1,d_1⊗ x_0^d_2⊗⋯⊗ x_0^d_r + ⋯ + x_0^d_1⊗ x_0^d_2⊗⋯⊗ T_x_0^d_r𝕍_n_r,d_r
by Proposition <ref> (2).
We check that F- E is orthogonal to each summand in this decomposition:
let us write
F = ℓ_1^d_1⊗⋯⊗ℓ_r^d_r
and consider the inner product of F- E with elements from the first summand in the decomposition of T_ E𝕏_ n, d above. By Lemma <ref>
the monomials
x_0^d_1-1x_k, for 1≤ k≤ n_1, span the tangent space T_x_0^d_1𝕍_n_1,d_1.
For G = (x_0^d_1-1x_k)⊗ x_0^d_2⊗⋯⊗ x_0^d_r∈ T_x_0^d_1𝕍_n_1,d_1 we have that
⟨ F- E, G ⟩ = ⟨ F, (x_0^d_1-1x_k)⊗ x_0^d_2⊗⋯⊗ x_0^d_r⟩ by (<ref>)=⟨ℓ_1^d_1, x_0^d_1-1x_k⟩ ∏_i=2^r ⟨ℓ_i^d_i,x_0^d_i⟩
by (<ref>)=⟨ℓ_1, x_0⟩^d_1-1 ⟨ℓ_1,x_k⟩ ∏_i=2^r ⟨ℓ_i,x_0⟩^d_i.
This inner product is zero for every 1≤ k≤ n_1 if either ℓ_1=x_0 or ⟨ℓ_1,x_0⟩ =0.
We proceed similarly for the other summands in the decomposition of T_ E𝕏_ n, d.
Ultimately, we find that
⟨ F- E, G⟩ = 0 for all G∈ T_ E𝕏_ n, d if and only if either ℓ_1=⋯=ℓ_r=x_0 or there is at least one ℓ_i with ⟨ℓ_i, x_0⟩=0, in which case ⟨ F, E⟩ =0 by (<ref>). Since F≠ E, it must be that the latter holds.
Therefore, the bottlenecks all have width arccos 0 = π/2, so ρ_2 = 1/2·π/2 = π/4.
§ VOLUME OF THE TUBULAR NEIGHBORHOOD
Recall from Theorem <ref> that the reach of the (spherical) Segre–Veronese manifold is τ()=π/4, if d≤ 5, and τ()=√(d/(2(d-1))), if d>5.
In this section we prove Theorem <ref> by computing the volume of the tubular neighborhood for ε < τ()
U(ε) := { F∈𝕊() | d_𝕊( F, ) < ε}.
The proof will be completed in Section <ref> below.
For the computation we use Weyl's tube formula <cit.>.
We denote
n = = n_1+⋯ + n_r, N = (𝕊()),
and
J_i(ε) = ∫_0^ε (sinϕ)^N-n+2i· (cosϕ)^n-2i dϕ.
Then Weyl's tube formula implies that the volume of U(ε) is given as the following linear combination of the functions J_i:
vol(U(ε)) = ∑_0≤ 2i≤ nκ_2i· J_i(ε),
with coefficients
κ_2i = ∫_ G∈(∫_ F ∈ N_ G : ‖ F‖ = 1 m_2i(L_ F) d F) d G,
where m_2i(L_ F) denotes the sum of the 2i-principal minors of the Weingarten map L_ F in normal direction F. The coefficients κ_2i are called curvature coefficients. They are isometric invariants of .
It follows from Lemma <ref> that the integral in the formula for κ_2i is independent of G, so that
κ_2i= vol() ∫_ F ∈ N_ E : ‖ F‖ = 1 m_2i(L_ F) d F,
where now the inner integral is over the sphere in the normal space of E=x_0^d_1⊗⋯⊗ x_0^d_r. The volume of the (spherical) Segre–Veronese manifold is computed next.
vol() = √(d_1^n_1⋯ d_r^n_r)/2^r-1·vol(𝕊^n_1) ⋯vol(𝕊^n_r).
In this case the map ψ in Proposition <ref> is 2^r-1:1.
Proposition <ref> (3) therefore implies
vol() = 1/2^r-1·vol(𝕍_n_1,d_1) ⋯vol(𝕍_n_r,d_r).
Finally, vol(𝕍_n_i,d_i) =√(d_i^n_i)·vol(𝕊^n_i) for every i (see, e.g., <cit.>).
The volume of the k-sphere is
vol(𝕊^k) = 2 π^k+1/2/Γ (k+12).
The main task in computing the volume of U(ε) therefore is integrating the principal minors of the Weingarten map L_ F over the normal space. For this we pass from the uniform distribution on the sphere to the Gaussian distribution. Since L_λ· F = λ· L_ F for F ∈ N_ E and λ∈ℝ, we have
m_2i(L_λ· F)=λ^2i· m_2i(L_ F).
Suppose that F is a Gaussian vector in the normal space; that is, a random tensor in N_ E with probability density (2π)^-c/2exp(-1/2‖ F‖^2). Then, the two random variables ‖ F ‖ and F/‖ F ‖ are independent. We define the scalars
λ_i := 𝔼_ F ∈ N_ E Gaussian ‖ F‖^2i.
Using (<ref>) we can then pass between the uniform distribution on the sphere and the Gaussian distribution as follows:
𝔼_ F ∈ N_ E Gaussian m_2i(L_ F)= λ_i·𝔼_ F ∈ N_ E uniform in the sphere m_2i(L_ F)
Since ‖ F‖^2 has χ_c^2-distribution with c= N_ E degrees of freedom, λ_i is the i-th moment of χ_c^2; i.e.,
λ_i = 2^i Γ(i+c/2)/Γ(c/2).
We have thus proved the following reformulation of (<ref>).
Let c= N_ E. Then
κ_2i= vol() ·vol(𝕊^c-1) · Γ(c/2)/(2^i Γ(i+c/2))·𝔼_ F ∈ N_ E Gaussian m_2i(L_ F).
For computing the expectation of m_2i(L_ F) we can rely on Corollary <ref>. Recall that this corollary implies that if F is Gaussian, L_ F is a random symmetric matrix with independent blocks
L_ F∼[ L_1 ⋯ L_1,r; ⋮ ⋱ ⋮; (L_1,r)^T ⋯ L_r ], [ L_k∼√(d_k(d_k-1)/2) GOE(n_k),; L_i,j∼ N(0, I_n_i⊗ I_n_j) ].
In general it is difficult to evaluate the expected value of the minors of this random matrix. We make an attempt using graph theory in the next subsection.
§.§ Perfect matchings in graphs and random determinants
In this section we give a formula for m_2i(L_ F) when F is Gaussian using concepts from graph theory. In the following, the degrees d=(d_1,…,d_r) are fixed. Define the following random symmetric matrix with independent blocks:
L_ d(m_1,…,m_r) := [ L_1 ⋯ L_1,r; ⋮ ⋱ ⋮; (L_1,r)^T ⋯ L_r ], [ L_k∼√(d_k(d_k-1)/2) GOE(m_k),; L_i,j∼ N(0, I_m_i⊗ I_m_j) ].
This differs from (<ref>) in that we allow the sizes of the blocks to be arbitrary, not necessarily given by the dimension n_i = 𝕍_n_i,d_i.
We can write the expected principal minors of L_ F as
𝔼_ F ∈ N_ E Gaussian m_2i(L_ F) = ∑_m_1,…,m_r∈ℕ: m_i≤ n_i, m_1+⋯+m_r = 2i 𝔼det(L_ d(m_1,…,m_r)).
Recall the definition of D_ d(m_1,…,m_r) from (<ref>): for a tuple (m_1,…,m_r) of nonnegative integers let G=(V,E) be the complete graph on m:=m_1+⋯+m_r vertices. The vertices are partitioned into r groups V= ℐ_1⊔⋯⊔ℐ_r of cardinalities |ℐ_k| = m_k. The weight w(e) of an edge between vertices in group ℐ_k is w(e)=d_k(d_k-1). The weight of an edge across groups is 1. Given a perfect matching C⊂ E its weight is w(C):=∏_e∈ C w(e).
Then,
D_ d(m_1,…,m_r) = (-1)^m/2∑_C⊂ E perfect matching w(C)
The main goal of this section is to prove the following characterization of the function D_ d. In combination with (<ref>), Lemma <ref> and Lemma <ref> the next proposition completes the proof of Theorem <ref>.
Let (m_1,…,m_r) be nonnegative integers. Then,
D_ d(m_1,…,m_r) = 𝔼det(L_ d(m_1,…,m_r)).
Recall from Example <ref> that the random matrix for the Segre manifold with n_1=n_2=2, n_3=n_4=1 and degrees 1 = (1,1,1,1) is
L_1(2,2,1,1) = [
[ 0 0 F_1100 F_1200 F_1010 F_1001; 0 0 F_2100 F_2200 F_2010 F_2001; F_1100 F_2100 0 0 F_0110 F_0101; F_1200 F_2200 0 0 F_0210 F_0201; F_1010 F_2010 F_0110 F_0210 0 F_0011; F_1001 F_2001 F_0101 F_0201 F_0011 0 ]],
where the entries are all i.i.d. standard Gaussian. We compute the expected determinant of this matrix using Theorem <ref>. The corresponding graph has n_1+n_2+n_3+n_4=6 vertices and four groups ℐ_1={1,2},ℐ_2={3,4},ℐ_3={5},ℐ_3={6}:
[fill = purple!30] (1) at (-1,0) [circle,draw,inner sep=2pt] 1;
[fill = teal!30] (3) at (3,0) [circle,draw,inner sep=2pt] 3;
[fill = teal!30] (4) at (3,2) [circle,draw,inner sep=2pt] 4;
[fill = blue!30] (5) at (-1,2) [circle,draw,inner sep=2pt] 5;
[fill = purple!30] (2) at (1,-1) [circle,draw,inner sep=2pt] 2;
[fill = orange!30] (6) at (1,3) [circle,draw,inner sep=2pt] 6;
(1) edge node (3);
(1) edge node (4);
(1) edge node (5);
(1) edge node (6);
(2) edge node (3);
(2) edge node (4);
(2) edge node (5);
(2) edge node (6);
(3) edge node (5);
(3) edge node (6);
(4) edge node (5);
(4) edge node (6);
(5) edge node (6);
The edges within groups all have weight zero (they can be deleted). All other edges have weight one, so D_1(2,2,1,1)= 𝔼det L_1 (2,2,1,1) is given by the negative of the number of perfect matchings in this graph. We can match {1,2} with {3,4}. There are two possible such matches. Or we can match 1 with either 5 or 6, in which case we have to match 2 with either 3 or 4. There are 4 such matches. Finally, we can also match 2 with either 5 or 6, and by symmetry there are again 4 such matches. In total these are 10 matches, which shows that D_1(2,2,1,1)=-10.
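As an independent check (ours, not from the paper), one can sample the random matrix L_1(2,2,1,1) directly and compare a Monte Carlo estimate of its expected determinant with the matching count above and with the D helper sketched earlier; agreement is up to Monte Carlo error.

import numpy as np

rng = np.random.default_rng(2)
sizes = (2, 2, 1, 1)
idx = np.cumsum((0,) + sizes)

def sample_segre_weingarten():
    # one draw of L_1(2,2,1,1): zero diagonal blocks, independent standard
    # Gaussian off-diagonal blocks, symmetric overall
    L = np.zeros((6, 6))
    for a in range(4):
        for b in range(a + 1, 4):
            block = rng.standard_normal((sizes[a], sizes[b]))
            L[idx[a]:idx[a + 1], idx[b]:idx[b + 1]] = block
            L[idx[b]:idx[b + 1], idx[a]:idx[a + 1]] = block.T
    return L

est = np.mean([np.linalg.det(sample_segre_weingarten()) for _ in range(200_000)])
print(est)                              # roughly -10
print(D((1, 1, 1, 1), (2, 2, 1, 1)))    # exact value from the matching helper: -10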
Let m:=m_1+⋯+m_r and write
L_ d(m_1,…,m_r) = (ℓ_i,j)_1≤ i,j≤ m.
Since the expectation is linear, Laplace expansion of the determinant yields
𝔼det L_ d(m_1,…,m_r) = ∑_σ∈𝔖_msgn(σ) 𝔼∏_i=1^m ℓ_i,σ(i),
where 𝔖_m is the symmetric group on m elements. The ℓ_i,σ(i) are all Gaussian with mean ℓ_i,σ(i) = 0 and independent. This implies that the only terms whose expectation is not zero are those where all ℓ_i,σ(i) appear as a square. In other words, only those expectations are not zero where σ∈𝔖_m has the property that σ(i)≠ i for all i and
σ(i)=j implies σ(j)=i. Such σ∈𝔖_m only exist when m is even.
If m is odd, we therefore have 𝔼det L_ d(m_1,…,m_r)=0. Since for m odd no perfect matchings can exist, we also have D_ d(m_1,…,m_r)=0.
If m is even, on the other hand, the σ∈𝔖_m with the above property are precisely products of m2 transpositions, so that
𝔼det L_ d(m_1,…,m_r) = (-1)^m/2∑_σ∈𝔖_m: σ is a product of m/2 transpositions 𝔼∏_i=1^m ℓ_i,σ(i).
There is a 1:1 correspondence between products of m/2 transpositions and perfect matchings C⊂ E, where E is the set of edges in the complete graph G=(V,E) on m vertices. Let C={(i_1,i_2), (i_3,i_4),…, (i_m-1,i_m)} be the matching corresponding to σ; i.e., σ(i_j)=i_j+1. Then, using independence we get
𝔼∏_i=1^m ℓ_i,σ(i) = 𝔼(ℓ_i_1,i_2^2 ⋯ℓ_i_m-1,i_m^2) = 𝔼ℓ_i_1,i_2^2 ⋯𝔼ℓ_i_m-1,i_m^2 = σ_i_1,i_2^2⋯σ_i_m-1,i_m^2,
where σ_i_j,i_j+1^2 is the variance of ℓ_i_j,i_j+1. By definition of L_ d(m_1,…,m_r) the variance of the off-diagonal entries in the diagonal blocks is d_k(d_k-1), while the variance of the entries in the off-diagonal blocks of L_ d(m_1,…,m_r) is 1. That is:
σ_i_j,i_j+1^2 = d_k(d_k-1) if i_j,i_j+1∈ℐ_k, and σ_i_j,i_j+1^2 = 1 if i_j and i_j+1 lie in different groups of vertices,
which shows that 𝔼∏_i=1^m ℓ_i,σ(i) = w(C), so D_ d(m_1,…,m_r)= 𝔼det L_ d(m_1,…,m_r).
alpha
|
http://arxiv.org/abs/2307.07480v1 | 20230714170348 | Whitney Twins, Whitney Duals, and Operadic Partition Posets | [
"Rafael S. González D'León",
"Joshua Hallam",
"Yeison A. Quiceno D"
] | math.CO | [
"math.CO",
"06A07, 06A11, 18M70"
] |
Whitney twins, Whitney duals, and operadic partition posets
Sergey V. Baryshev
August 12, 2023
===========================================================
We say that a pair of nonnegative integer sequences ({a_k}_k≥ 0,{b_k}_k≥ 0) is Whitney-realizable if there exists a poset P for which (the absolute values) of the Whitney numbers of the first and second kind are given by the numbers a_k and b_k respectively. The pair is said to be Whitney-dualizable if, in addition, there exists another poset Q for which their Whitney numbers of the first and second kind are instead given by b_k and a_k respectively. In this case, we say that P and Q are Whitney duals. We use results on Whitney duality, recently developed by the first two authors, to exhibit a family of sequences which allows for multiple realizations and Whitney-dual realizations. More precisely, we study edge labelings for the families of posets of pointed partitions Π_n^∙ and weighted partitions Π_n^w which are associated to the operads and ^2 respectively. The first author and Wachs proved that these two families of posets share the same pair of Whitney numbers. We find EW-labelings for Π_n^∙ and Π_n^w and use them to show that they also share multiple nonisomorphic Whitney dual posets.
In addition to EW-labelings, we also find two new EL-labelings for Π_n^∙ answering a question of Chapoton and Vallette. Using these EL-labelings of Π_n^∙, and an EL-labeling of Π_n^w introduced by the first author and Wachs, we give combinatorial descriptions of bases for the operads , , and ^2. We also show that the bases for and ^2 are PBW bases.
§ INTRODUCTION
To a finite graded poset (partially ordered set) P with a minimal element (denoted 0̂ throughout) we can associate a pair of sequences of integers {w_k(P)}_k≥ 0 and {W_k(P)}_k≥ 0 known as the Whitney numbers of the first and second kind respectively. These two sequences are poset
invariants and encode relevant information in areas where partially ordered structures arise naturally. For example, Whitney showed in <cit.> that the coefficients of the chromatic polynomial of a graph are the Whitney numbers of the first kind of a poset one can associate to a graph (its bond lattice). The Whitney numbers of the first kind keep track of the Möbius function at each rank level and the Whitney numbers of the second kind keep track of the number of elements at each rank level.
§.§ Whitney-realizable and dualizable sequences
In <cit.>, the first and second authors introduced the concept of a Whitney dual of a graded poset P with a 0̂.
We say that two graded posets P and Q are Whitney duals if, after taking absolute values, the sequences of Whitney numbers of the first and second kind of P are equal to the sequences of Whitney numbers of the second and first kind of Q. That is, the Whitney numbers of P and Q are swapped with respect to one another. In <cit.>, the authors also defined a new type of poset edge labeling, which is called an EW-labeling (or Whitney labeling). The authors show that these labelings provide a sufficient condition for the existence of a Whitney dual for any graded poset P admitting such a labeling. Moreover, they describe an explicit construction of the Whitney dual associated to a given EW-labeling.
One can readily observe from the definition, that nothing prevents the existence of multiple Whitney duals to a graded poset P. Hence, the concept of Whitney duality is more precisely a duality between the sequences of numbers involved rather than a duality between posets. We say that a pair of nonnegative integer sequences ({a_k}_k≥ 0,{b_k}_k≥ 0) is Whitney-realizable if there exists a poset P such that ({|w_k(P)|}_k≥ 0,{W_k(P)}_k≥ 0)=({a_k}_k≥ 0,{b_k}_k≥ 0). We will call two posets P and Q Whitney twins if they realize the same pair of sequences. We say that a Whitney-realizable pair is Whitney-dualizable if ({b_k}_k≥ 0,{a_k}_k≥ 0) is also Whitney-realizable.
Determining which pairs of nonnegative integer sequences ({a_k}_k≥ 0,{b_k}_k≥ 0) are Whitney-realizable or Whitney-dualizable both seem to be challenging questions. In this article, we present results related to the non-uniqueness of Whitney realizations and dualizations of a pair ({a_k}_k≥ 0,{b_k}_k≥ 0) by finding and exploring the algebraic and combinatorial consequences of EW-labelings on two families of posets which come from the theory of symmetric operads. The two particular families of posets are associated to the permutative operad and to the double commutative operad ^2.
§.§ Operadic posets and EL/CL-labelings
In <cit.>, Vallette defined a family of partition posets Π_n^ associated to a basic-set quadratic operad .
These posets are an operadic generalization of the poset of set partitions Π_n ordered by refinement. There, the author shows that the top cohomology H^top(Π_n^) _n-modules are, up to tensoring with the sign representation, equal to the Koszul dual cooperad ^ to . He also shows that the Cohen-Macaulay property of the maximal intervals of Π_n^ is equivalent to the Koszul property of and ^. Hence, the application of combinatorial techniques on the family Π_n^ is relevant in determining the algebraic properties of and ^. One such technique is the theory of lexicographic shellability for posets introduced by Björner <cit.> and further developed by Björner and Wachs in <cit.> (see also <cit.>). The main idea behind the theory of lexicographic shellability is that the maximal intervals of a poset P which admit a type of edge labeling, known as an EL-labeling (or a CL-labeling in more generality), are Cohen-Macaulay. Finding an EL or CL-labeling for a poset Π_n^ then implies under Vallette's relation that and ^ are Koszul operads. As an application of EL and CL-labelings for partition posets, Bellier-Millès, Delcroix-Oger, and Hoffbeck <cit.> showed that if an EL or CL-labeling of Π_n^ satisfies a certain condition that they call being isomorphism-compatible, then the operad has a Poincaré–Birkhoff–Witt (PBW) basis determined by the labeling. PBW bases are useful because they imply that the operads are Koszul as was shown by Hoffbeck <cit.> for totally ordered PBW bases and in more generality for partially ordered PBW bases in <cit.>.
We note that the posets Π_n^
have appeared before in a different but related context. They are relevant in finding compositional (or substitutional) inverses to species within Joyal's theory of combinatorial species (see<cit.>) as was shown by Méndez and Yang in <cit.>.
§.§ Pointed and weighted partition posets
Vallette <cit.> showed that the pointed partition poset Π_n^∙ is isomorphic to the operadic poset Π_n^ associated to the operad 𝒫erm. In Section <ref>, we give an EW-labeling of Π_n^∙ and give an explicit description of its Whitney dual in terms of pointed Lyndon forests in Section <ref>.
In <cit.>, Chapoton and Vallette show that the maximal intervals of Π_n^∙ are totally semimodular. By the results in <cit.>, this implies that they are also CL-shellable and hence Cohen-Macaulay. By the result in <cit.> this in turn implies that 𝒫erm, and its Koszul-dual operad 𝒫reℒie, are Koszul. The authors in <cit.> leave open the question of whether or not Π_n^∙ admits the more restrictive property of being EL-shellable. EL-shellability and CL-shellability have been shown recently by Li <cit.> to not be equivalent in general for posets. The authors in <cit.> propose a possible EL-labeling of Π_n^∙ and claim that this labeling has the additional property of being isomorphism-compatible. We show in Section <ref> that the proposed labeling does not satisfy the requirements for being an EL-labeling. We then provide a new EL-labeling which answers the open question in <cit.>. This labeling has the same set of labels as our EW-labeling for Π_n^∙, but differs in how these labels are partially ordered. We show this EL-labeling is isomorphism-compatible, which in turn gives a PBW basis for the operad using the results in <cit.>. Although our EW-labeling for Π_n^∙ is not directly an EL-labeling, we show that reversing the order on the labels gives an EL-labeling for the order dual poset (Π_n^∙)^*. This provides a second answer to the open question in <cit.>. We also show that the former EL-labeling for Π_n^∙ is isomorphism-compatible, giving us a PBW basis for .
In <cit.>, Dotsenko and Khoroshkin introduced the weighted partition poset Π_n^w. They showed that Π_n^w is isomorphic to the poset Π_n^^2 associated to the operad ^2 of algebras with two totally commutative products. The combinatorial and homological properties of Π_n^w were extensively studied by González D'León and Wachs in <cit.>. In their study, the authors introduced an EL-labeling for Π_n^w. In Section <ref> we prove that this labeling is an EW-labeling and hence Π_n^w has a Whitney dual. In Section <ref>, we give an explicit description of this Whitney dual in terms of bicolored Lyndon forests. We also show in Section <ref> that this labeling is isomorphism-compatible which gives a PBW basis for ^2.
§.§ Nonuniqueness of Whitney realizations
In <cit.> the authors show that Π_n^w and Π_n^∙ are Whitney twins (though they do not use this terminology). Indeed as a consequence of their Theorem 2.8, Proposition 2.1, and the follow up discussion in Section 2.4 in <cit.>, the Whitney numbers of the first and second kind are given for all k≥ 0 by the sequences
w_k(Π_n^∙)=w_k(Π_n^w) =(-1)^k\binom{n-1}{k}n^k
W_k(Π_n^∙)=W_k(Π_n^w) =\binom{n}{k}(n-k)^k.
This already implies the nonuniqueness of realizations for a Whitney-realizable sequence. We show that the Whitney duals constructed with the EW-labelings for Π_n^∙ and Π_n^w are not isomorphic for n≥ 4. Since they have the same Whitney numbers of both kinds, we get multiple non-isomorphic Whitney duals for both Π_n^∙ and Π_n^w, implying further the nonuniqueness of dual realizations of Whitney-dualizable sequences. We also show that there is a third family _n of Whitney duals to Π_n^∙ and Π_n^w which for n≥ 3 is not isomorphic to any of the Whitney duals discussed before. The family _n is also shown in future work by the first two authors to be associated with a more general type of Whitney labeling. The three nonisomorphic families of Whitney dual posets to Π_n^∙ and Π_n^w also constitute a new example of the nonuniqueness of Whitney realizations.
§.§ Organization of this article
The rest of the article is structured as follows. In Section <ref> we review EW-labelings and EL-labelings, and we describe the labelings of Π_n^w and Π_n^∙. In Section <ref>, we give explicit descriptions of the Whitney duals of Π_n^w and Π_n^∙. In Section <ref> we consider the algebraic consequences of these labelings. Specifically we use these labelings to describe bases for , , and ^2, the latter two in particular being PBW bases. In Section <ref>, we discuss the nonuniqueness of Whitney realizations using our results for Π_n^w and Π_n^∙ and their associated Whitney duals.
Some results in this work have been announced as part of the third author's master's thesis in <cit.>.
§ EW-LABELINGS
In this section we describe three edge labelings: one for the weighted partition poset, which was introduced already in <cit.>, and two new edge labelings for the pointed partition poset. The edge labeling for the weighted partition poset, was shown in <cit.> to be an EL-labeling and here we show that is also an EW-labeling. Of the two labelings for the pointed partition poset, one is an EW, which we also show is a dual EL-labeling, and the second is an EL-labeling (but not an EW-labeling). We show that the two labelings have the same sets of words of labels, however the labels come from two different partial orders. Our main use of these labelings is three-fold: constructing Whitney duals for the two posets, understanding their homotopy type and cohomology of the respective order complexes, and finding PBW bases of the corresponding operads and bases for their dual (co)operads. We start with a brief review of Whitney numbers, Whitney duals, and edge labelings.
§.§ Whitney numbers and Whitney duals
We will assume some familiarity with posets. For a more in-depth review of posets as well as any undefined terms, see <cit.>.
For a review of poset topology see <cit.>.
All the posets we consider in this article will be finite, graded, and contain a minimum element which we denote by 0̂. We will use ρ(x) for the rank of an element x.
The (one-variable) Möbius function of a poset P, denoted by μ, is defined recursively by
μ(0̂) =1
and for x≠0̂,
μ(x) = -∑_y<xμ(y).
Note that the one-variable Möbius function coincides with the classical two-variable Möbius function μ(0̂,x) on the interval [0̂,x]. See Figure <ref> for examples of the Möbius function. The k^th Whitney number of the first kind, denoted by w_k(P), is defined by
w_k(P) = ∑_x∈ P
ρ(x) = kμ(x).
For the poset P in Figure <ref>, the Whitney numbers of the first kind are given by the sequence (1, -3 ,2) and for the poset Q these are given by the sequence (1,-3,1).
The k^th Whitney number of the second kind, denoted by W_k, is defined by
W_k(P) = #{x∈ P |ρ(x)=k}.
In Figure <ref>, the Whitney numbers of the second kind of P are given by (1, 3, 1) and of Q are given by (1, 3 , 2).
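Both invariants are straightforward to compute for small posets. The following sketch (ours, not from the paper) takes the cover relations of a graded poset with a minimum element and returns the two Whitney sequences; the five-element test poset reproduces the pair of sequences quoted for P above.

def whitney_numbers(covers, bottom):
    # Whitney numbers (w_k) and (W_k) of a finite graded poset with minimum `bottom`,
    # given its cover relations as {x: iterable of elements covering x};
    # assumes every element lies above `bottom`
    elements = set(covers) | {y for ys in covers.values() for y in ys}
    rank = {bottom: 0}
    while len(rank) < len(elements):           # graded: any saturated chain gives the rank
        for x, ups in covers.items():
            if x in rank:
                for y in ups:
                    rank[y] = rank[x] + 1
    down = {x: set() for x in elements}        # down[y] = elements covered by y
    for x, ups in covers.items():
        for y in ups:
            down[y].add(x)
    below, mobius = {x: set() for x in elements}, {}
    for y in sorted(elements, key=rank.get):   # build strict order ideals rank by rank
        for x in down[y]:
            below[y] |= below[x] | {x}
        mobius[y] = 1 if y == bottom else -sum(mobius[x] for x in below[y])
    top = max(rank.values())
    w = [sum(mobius[x] for x in elements if rank[x] == k) for k in range(top + 1)]
    W = [sum(1 for x in elements if rank[x] == k) for k in range(top + 1)]
    return w, W

covers = {"0": {"a", "b", "c"}, "a": {"1"}, "b": {"1"}, "c": {"1"}, "1": set()}
print(whitney_numbers(covers, "0"))   # ([1, -3, 2], [1, 3, 1])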
By comparing the Whitney numbers of the first and second kind of P and Q in Figure <ref>, the reader may notice a peculiar phenomenon. The Whitney numbers of P and Q switch (up to a sign). It turns out that this phenomenon, which was first described in <cit.> and further studied in <cit.>, occurs for many other pairs of posets and motivates the next definition.
Let P and Q be ranked posets. We say P and Q are Whitney duals if for all k,
|w_k(P)| = W_k(Q) and |w_k(Q)| = W_k(P).
Using this definition, we can see that P and Q in Figure <ref> are Whitney duals.
§.§ Consequences of ER, EL, and EW-labelings
To approach Whitney duality, the first two authors in <cit.> used the poset topology technique of edge labelings. Here, we review a few concepts related to edge labelings, but for further details the reader can visit <cit.> or some of the classical articles <cit.>
Let P be a poset. We use ℰ(P) to denote the set of edges in the Hasse diagram of P (which is in bijection with the set of cover relations in P). An edge labeling of P is a map λ: ℰ(P)→Λ where Λ is a set of partially ordered labels. See Figure <ref> for examples of an edge labeling where the set of labels is {a,b,c} which is ordered alphabetically. Recall that a chain x_0 x_1⋯ x_n is said to be saturated if it is maximal in the interval [x_0,x_n]. Given an edge labeling λ, we say that a saturated chain, x_0 x_1⋯ x_n, is increasing if λ(x_i-1 x_i) < λ(x_i x_i+1) for all 1≤ i≤ n-1. Similarly, x_0 x_1⋯ x_n, is ascent-free if λ(x_i-1 x_i) ≮λ(x_i x_i+1) for all 1≤ i≤ n-1. Returning to our example in Figure <ref>, we see that among maximal chains of P, the chain 0̂ a1̂ is increasing (since ab is an increasing sequence). On the other hand, the maximal chains 0̂ b 1̂ and 0̂ c1̂ are ascent-free. We want to remark that the example in Figure <ref> is rather small and in general there are saturated chains that are neither increasing nor ascent-free, but these two particular types of chains are the ones of interest in the following discussion.
§.§.§ ER and EL-labelings
We say an edge labeling is an ER-labeling if every interval has a unique increasing maximal chain. Moreover, we say an ER-labeling is an EL-labeling if in each interval, the unique increasing maximal chain also precedes every other chain in lexicographic order. One can check that the labeling of P in Figure <ref> is both an ER and an EL-labeling. Indeed, the lexicographic requirement holds trivially on rank 0 and 1 intervals, so the only interval to check the lexicographic condition is on the full poset (which is also an interval in this case). The increasing chain is labeled ab and this precedes both ba and ca in lexicographic order. One of the main reasons we are interested in ER and EL-labelings is because of the topological and combinatorial consequences given by the following two theorems.
Let P be a graded poset with an ER-labeling λ: ℰ(P) →Λ. Then for every x<y in P we have that
μ(x,y) = (-1)^ρ([x,y]) |{c | c is an ascent-free maximal chain of [x,y]}|.
Let P be a graded poset with an EL-labeling λ: ℰ(P) →Λ. Then for every x<y in P we have that:
* The order complex Δ((x,y)) is shellable. Moreover, it has the homotopy type of a wedge of |{c | c is an ascent-free maximal chain of [x,y]}| many spheres, each of dimension ρ([x,y])-2. As a consequence, [x,y] is Cohen-Macaulay.
* The set
{c∖{x,y} | c is an ascent-free maximal chain of [x,y]}
forms a basis for the top reduced cohomology H^ρ([x,y])-2((x,y)) of Δ((x,y)).
In this work we will be particularly interested on the consequences of Theorems <ref> and <ref> for intervals of the form [0̂,x] for all x in a poset P.
§.§.§ EW-labelings
In order to construct a Whitney dual, we need to impose two additional conditions on an ER-labeling. Note that in the following definition, we do not require the labeling to be an EL-labeling.
Let λ be an edge labeling of P. We say λ is an EW-labeling if the following hold.
* λ is an ER-labeling.
* (The rank two switching property) For every interval [x,y] with ρ(y)-ρ(x)=2, if the increasing chain is labeled ab, there exists a unique chain in [x,y] labeled ba.
* (Injectivity of ascent-free chains) For every x<y∈ P, every ascent-free maximal chain in [x,y] has a unique sequence of labels.
We already noted that the labeling of P in Figure <ref> is an ER-labeling. In fact, it is an EW-labeling too. Clearly we have injectivity of ascent-free chains. Moreover, in the only rank two interval, the increasing chain is labeled by ab and there is exactly one other chain in that interval labeled by ba. As we saw, the poset P has a Whitney dual (namely Q in Figure <ref>). This is no coincidence, rather it is a consequence of the following theorem.
Let P be a poset with an EW-labeling λ. Then P has a Whitney dual. Moreover, we can construct a Whitney dual Q to P that depends on λ.
In Section <ref>, we describe a specific construction of such a Whitney dual Q using λ.
§.§ An EW-labeling of the weighted partition poset
In this subsection, we describe an EW-labeling of the weighted partition poset. First, we briefly discuss the weighted partition poset.
A weighted set is a pair (A,v) where v∈{0,…, |A|-1}. We will also denote weighted sets with the simpler notation A^v. A weighted partition of [n] is a collection of weighted sets =B_1^v_1/B_2^v_2/⋯/B_t^v_t such that
B_1/B_2/⋯/B_t is a partition of [n]. The poset of weighted partitions, Π_n^w, is the set of weighted
partitions of [n] with cover order relation given by
=A_1^w_1/A_2^w_2/… /A_s^w_s⋖ B_1^v_1/ B_2^v_2/ … /B_s-1^v_s-1='
if the following conditions hold:
* A_1/A_2/⋯ /A_s ⋖ B_1/B_2/ ⋯ /B_s-1 in Π_n
* if B_k=A_i∪ A_j, where i≠ j, then v_k-(w_i + w_j) ∈{0,1}
* if B_k = A_i then v_k = w_i
See Figure <ref> for a depiction of Π_3^w. As was noted in the introduction, Π_n^w is (isomorphic to) the poset of partitions for the operad ^2.
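To make the definition concrete (and to verify the Whitney numbers quoted in the introduction), the following brute-force sketch (ours, practical only for small n) generates Π_n^w from its cover relations and feeds it to the whitney_numbers helper from the previous section's sketch.

from itertools import product
from math import comb

def set_partitions(elems):
    # all set partitions of a list, as tuples of frozensets
    if not elems:
        yield ()
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        yield part + (frozenset({first}),)
        for i, block in enumerate(part):
            yield part[:i] + (block | {first},) + part[i + 1:]

def weighted_partitions(n):
    # weighted partitions of [n] as frozensets of (block, weight) pairs
    for part in set_partitions(list(range(1, n + 1))):
        for weights in product(*(range(len(b)) for b in part)):
            yield frozenset(zip(part, weights))

def weighted_covers(n):
    covers = {wp: set() for wp in weighted_partitions(n)}
    for wp in covers:
        blocks = list(wp)
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                (A, wa), (B, wb) = blocks[i], blocks[j]
                rest = [bw for k, bw in enumerate(blocks) if k not in (i, j)]
                for u in (0, 1):   # the merged block gets weight wa + wb + u
                    covers[wp].add(frozenset(rest + [(A | B, wa + wb + u)]))
    return covers

n = 3
bottom = frozenset((frozenset({i}), 0) for i in range(1, n + 1))
w, W = whitney_numbers(weighted_covers(n), bottom)
print(w, W)
print([(-1) ** k * comb(n - 1, k) * n ** k for k in range(n)],   # expected w_k
      [comb(n, k) * (n - k) ** k for k in range(n)])             # expected W_k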
In <cit.>, González D'León and Wachs gave an EL-labeling for Π_n^w. Here we show that this labeling is in fact an EW-labeling. We now review the definition of their labeling.
Let us start by defining the set of edge labels, Λ^w_n. For each a ∈ [n], let Γ_a:= {(a,b)^u |
a<b ≤ n, u ∈{0,1}}.
We partially order Γ_a by letting (a,b)^u ≤ (a,c)^v if b≤ c and u ≤ v. Note that Γ_a is isomorphic to the direct product of the chain a+1< a+2 <… < n and
the chain 0 < 1. Now define Λ^w_n to be the
ordinal sum
Λ^w_n := Γ_1 ⊕Γ_2 ⊕⋯⊕Γ_n-1. See Figure <ref> for a depiction of the Hasse diagram of Λ^w_4.
We are now ready to describe the edge labeling. The map λ_w:(Π_n^w)→Λ_n^w is defined as follows: let ⋖' in
Π^w_n so that ' is obtained from by merging two blocks A^w_A and B^w_B of into a
new block (A ∪ B)^w_A + w_B+u of ', where u ∈{0,1} and where we assume without loss of
generality that min A < min B. We
define then
λ_w(') = (min A, min B)^u.
See Figure <ref> for an example of this labeling on Π_3^w.
The following theorem was proved in <cit.>.
λ_w is an EL-labeling (and hence also an ER-labeling).
According to Definition <ref>, to show that λ_w is an EW-labeling, we need to check the rank two switching property and the injectivity condition on ascent-free chains. To see that the latter is satisfied, note that the information contained in the collection of labels of the form (min A, min B)^u is enough to trace which blocks of a weighted partition are merged at each step, which in turn is enough to recover any saturated chain starting at any particular weighted partition. Hence the sequence of labels in each interval
uniquely determines a chain.
To show that λ_w is an EW-labeling, we are left to show that it
satisfies the rank two switching property.
As explained in <cit.>, there are three types of rank two intervals in Π_n^w. These intervals are depicted in Figure <ref> together with their edge labels.
For each type, the reader can check that the rank two switching property holds.
λ_w is an EW-labeling of Π_n^w. Consequently, Π_n^w has a Whitney dual.
We will give a combinatorial description of the corresponding Whitney dual in Section <ref>.
§.§ An EW-labeling of the pointed partition poset
A pointed set is a pair (A,p) where A is a nonempty set and p∈ A. In the following we will use the notation A^p for (A,p). A pointed partition of [n] is a collection ={B_1^p_1,B_2^p_2,…,B_m^p_m} where π={B_1,B_2,…,B_m} is a partition of [n], called its underlying partition, and B_i^p_i are pointed sets for all i.
We will also use the notation B_1^p_1/B_2^p_2/⋯/ B_m^p_m for {B_1^p_1,B_2^p_2,…,B_m^p_m}. The poset of pointed partitions Π_n^∙ is the partial order on the set of all pointed partitions of [n] with cover order relation given by ={A_1^q_1,A_2^q_2,...,A_l^q_l}⋖'={B_1^p_1,B_2^p_2,...,B_m^p_m} whenever
* π⋖π' in Π_n.
* if B_h=A_i∪ A_j then p_h ∈{q_i,q_j}.
* if B_h=A_i then p_h=q_i.
Thus to move up in a cover, exactly two blocks are merged and the pointed element of this new block is one of the pointed elements of the merged blocks. We will represent the pointed element for each block by placing a tilde above the pointed element. For example, {1478}^4 will be denoted by 14̃78. The Hasse diagram of Π_3^∙ is illustrated in Figure <ref>. As noted in the introduction, Π_n^∙ is (isomorphic to) the poset of partitions for the operad .
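The same brute-force approach applies to Π_n^∙ (the sketch below, ours, reuses set_partitions, product and whitney_numbers from the earlier sketches); for n=3 it returns exactly the Whitney numbers computed for Π_3^w, illustrating on a small case that the two posets are Whitney twins.

def pointed_partitions(n):
    # pointed partitions of [n] as frozensets of (block, pointed element) pairs
    for part in set_partitions(list(range(1, n + 1))):
        for points in product(*(sorted(b) for b in part)):
            yield frozenset(zip(part, points))

def pointed_covers(n):
    covers = {pp: set() for pp in pointed_partitions(n)}
    for pp in covers:
        blocks = list(pp)
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                (A, pa), (B, pb) = blocks[i], blocks[j]
                rest = [bp for k, bp in enumerate(blocks) if k not in (i, j)]
                for p in (pa, pb):   # the merged block keeps one of the two pointed elements
                    covers[pp].add(frozenset(rest + [(A | B, p)]))
    return covers

n = 3
bottom = frozenset((frozenset({i}), i) for i in range(1, n + 1))
print(whitney_numbers(pointed_covers(n), bottom))   # ([1, -6, 9], [1, 6, 3]), the same pair as for the weighted poset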
Suppose we are merging two blocks A and B with min A< min B. We say that this merge is a 0-merge if the pointed element of A∪ B is the pointed element of B. Similarly, we say the merge is a 1-merge if the pointed element of A∪ B is the pointed element of A. For example, if we merge the blocks 12̃4 with 35̃ to get 12345̃ we have done a 0-merge. On other hand, if we had obtained 12̃345, we would have done a 1-merge. From time to time, we will need to discuss merges where we do not know whether it is a 0 or 1-merge. In these cases, we will refer to it as an u-merge, always bearing in mind that u∈{0,1}.
We now give an edge labeling of Π_n^∙. We first define the poset of labels. Let Λ_n^∙ be the set {(a,b)^u| 1≤ a < b≤ n u∈{0,1}}. To define the order relation on Λ_n^∙, let A_a be the antichain on the set A_a={(a, b)^0 | a<b≤ n } and let C_a be the chain on the set {(a,b)^1| a <b≤ n} where (a, b)^1< ( a,c )^1 whenever b<c. Then we define Λ_n^∙ as the ordinal sum
Λ_n^∙:=A_1⊕ C_1⊕ A_2⊕ C_2 ⊕⋯⊕ A_n-1⊕ C_n-1.
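Unpacking the ordinal sum, the order relation of Λ_n^∙ can be described explicitly: (a,b)^u ≤ (c,d)^v in Λ_n^∙ if and only if a<c; or a=c and u<v; or a=c, u=v=1 and b≤ d; or (a,b)^u=(c,d)^v. In particular, two distinct labels (a,b)^0 and (a,d)^0 with the same first coordinate are incomparable.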
The Hasse diagram of Λ_4^∙ is given in Figure <ref>. Note that the underlying sets of Λ^w_n and Λ_n^∙ are the same, but their partial orders are different. Now suppose that in the cover relation ⋖', we u-merge blocks A and B. Then we define the labeling λ_∙: ℰ(Π_n^∙)→Λ_n^∙ by
λ_∙(⋖')= (min A, min B)^u.
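For instance, the 0-merge of 12̃4 and 35̃ into 12345̃ considered above receives the label (1,3)^0, while the corresponding 1-merge producing 12̃345 receives the label (1,3)^1.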
In Figure <ref> we illustrate the labeling λ_∙ of Π_3^∙.
We now turn our attention to proving that λ_∙ is an EW-labeling. First, let us note that a label λ_∙(⋖') completely determines which two blocks of merge to form a block of ' and which element in the resulting block is pointed. Hence, for every ∈Π_n^∙ the cover relations over have distinct labels. Thus starting at any element ∈Π_n^∙, a sequence of valid labels completely determines a saturated chain starting at . Thus we obtain the following proposition.
The labeling λ_∙ of equation (<ref>) is injective on maximal chains in any interval of Π_n^∙.
Next we show that λ_∙ is an ER-labeling.
For a finite set A of positive integers we denote by Π_A the poset of partitions of A and by Π_A^∙ the poset of pointed partitions supported on A. We also use U(x)={y∈ P | y≥ x} to denote the (principal) upper filter generated by an element x in a poset P. It turns out that the upper filter of any element of Π_n^∙ is isomorphic to another pointed partition poset and that this isomorphism preserves the labeling λ_∙. We make this explicit next.
Let ={B_1^p_1,…, B_l^p_l}∈Π_n^∙ with min B_1< ⋯ < min B_l. Let
Φ:U()→Π_{min B_1, …, min B_l}^∙
be the map defined as follows:
* For a pointed set A^q with A=B_j_1∪⋯∪ B_j_r with j_1<…<j_r and q=p_j_s for some s∈ [r] we define Φ(A^q):={min B_j_1∪…∪min B_j_r}^min B_j_s.
* For any ∈ U() we define Φ():={Φ(A^q)| A^q ∈}.
Then the map Φ is an isomorphism preserving the labeling λ_∙ defined in equation (<ref>), i.e., for any ⋖' in U() we have that
λ_∙(Φ() ⋖Φ('))=λ_∙(⋖').
Before we prove the lemma, let us provide a quick example of the map Φ. Suppose that α = 145̃6/ 27̃9/38̃. Then 124567̃9/ 38̃ is in U(α) and Φ(124567̃9/ 38̃) = 12̃/3̃. The pointed block 12̃ comes from the fact that we merged the blocks 145̃6 and 27̃9 and chose to keep 7 pointed. As a result we point 2 when we apply the map Φ since 2 is the minimum element in the block containing 7. The block 38̃ does not get merged, but since we reduce to the minimum element of the block when applying Φ, we get the pointed block 3̃.
We will show first that the function Φ preserves the u-merging of two blocks u∈{0,1}. Let A_1=B_j_1∪⋯∪ B_j_r with j_1<j_2<⋯ <j_r and q_1=p_j_l for some l∈ [r] and let A_2=B_k_1∪⋯∪ B_k_t with k_1<k_2<⋯ <k_t and q_2=p_k_m for some m∈ [t]. Without loss of generality we assume j_1<k_1 so min A_1 < min A_2.
We denote A_1^q_1∪_u A_2^q_2=(A_1∪ A_2)^q the u-merging of the pointed blocks A_1^q_1 and A_2^q_2 where q=q_1 if u=1 and q=q_2 if u=0.
Φ(A_1^q_1∪_u A_2^q_2) =Φ({ B_j_1∪⋯∪ B_j_r∪ B_k_1∪⋯∪ B_k_t}^q)
={min B_j_1∪⋯∪min B_j_r∪min B_k_1∪⋯∪min B_k_t}^q̃
={min B_j_1∪⋯∪min B_j_r}^min B_j_l∪_u {min B_k_1∪⋯∪min B_k_t}^min B_k_m
=Φ(A_1^q_1)∪_u Φ(A_2^q_2),
where q̃=min B_j_l if u=1 and q̃=min B_k_m if u=0. Since the blocks of are in bijection with the blocks of min B_1/⋯/min B_l and all elements of U() are obtained uniquely by a sequence of u-merges of blocks of and the elements of Π_{min B_1, …, min B_l}^∙ are obtained uniquely by a sequence of u-merges of the blocks of min B_1/⋯/min B_l, we conclude that Φ is a bijection. Moreover, Φ and Φ^-1 preserve cover relations and hence Φ is a poset isomorphism.
Now, to see that the labeling according to λ_∙ of equation (<ref>) is preserved, note that in a cover relation where we u-merge the blocks A_1^q_1 and A_2^q_2 the label is
(min A_1, min A_2)^u=(min B_j_1,min B_k_1)^u,
which is the same label obtained by u-merging the blocks Φ(A_1^q_1) and Φ(A_2^q_2).
As we explain in the proof of the following proposition, Lemma <ref> essentially reduces the task of finding a unique increasing chain in each interval to finding an increasing chain in every maximal interval.
The labeling λ_∙ of equation (<ref>) is an ER-labeling of Π_n^∙.
Let ,' ∈Π_n^∙ such that ≤'. We want to show that there is a unique increasing saturated chain in [,'].
Assume first that =0̂ and '=[n]^p, so [,']=[0̂,[n]^p] is a maximal interval. We will construct an increasing saturated chain in [0̂,[n]^p] and show that such chain is the only increasing saturated chain in [0̂,[n]^p]. Consider the chain c_[n]^p whose label sequence is as follows.
λ_∙(c_[n]^p)=
(1,2)^1 (1,3)^1⋯ (1,n-1)^1 (1,n)^1 if p =1,
(1,p)^0(1,2)^1⋯ (1,p-1)^1(1,p+1)^1 ⋯ (1,n)^1 if 1<p<n,
(1,n)^0(1,2)^1⋯ (1,n-1)^1 if p=n.
Because of Proposition <ref>, there is at most one such chain with the above label sequence. It is not hard to check that such a chain does in fact exist. In the case p=1, it is easy to see that the chain is increasing. On the other hand, if p≠ 1, the chain is also increasing since (1,p)^0 is smaller than any label of the form (1,b)^1 and the remaining values are increasing in Λ_n^∙.
We now show that the chain c_[n]^p is indeed the only increasing chain in [0̂,[n]^p]. We discuss the case when p≠ 1. The case when p=1 follows the same idea. Note that if c' is any other chain in [0̂,[n]^p] it must have as final label either (1,a)^0 or (1,a)^1 for some a≠ 1 since in the last step the block with minimal label 1 must always be involved. It follows that for c' to be increasing all the labels along the chain must be of the form (1,b)^u for some b and u. Hence c' has to be constructed by a step-by-step process of merging blocks with the block that contains the element 1. Hence, the labels in the second component will form a permutation of the elements {2,3,…, n}. Since p has to be the pointed element, we will have a step where the label (1,p)^0 appears. Since (1,p)^0 and (1,a)^0 are not comparable when a≠ p, we see that c' cannot have the label (1,a)^0 where a≠ p as c' would not be increasing. Hence, all other labels are of the form (1,a)^1 and the only way to order them increasingly is as in equation (<ref>). By Proposition <ref>, λ_∙ is injective and so c'=c_[n]^p.
Now, we consider an interval of the form [0̂,] where ∈Π_n^∙ and has at least two blocks. Let ={B_1^p_1,…,B_l^p_l} where min B_1 <⋯ <min B_l. For each i=1,…,l, let c_B_i^p_i be the unique increasing chain of [0̂,B_i^p_i]. To see why such chains exist and are unique, apply the same idea from the previous paragraph to each of the intervals [0̂,B_i^p_i]. We will now consider the word of labels of c_B_i^p_i, λ_∙(c_B_i^p_i). Note that this word will be empty if |B_i|=1. Now let c_ be the chain in [0̂,] that first merges the elements with labels in B_1 as instructed in c_B_1^p_1, then merges the elements with labels in B_2 as instructed in c_B_2^p_2, and so on. Then c_ has the word of labels obtained by the concatenation of words
λ_∙(c_)=λ_∙(c_B_1^p_1)λ_∙(c_B_2^p_2)⋯λ_∙(c_B_l^p_l).
Note that this chain is increasing because min B_1 <⋯ <min B_l and there is only one chain with this word of labels because of
Proposition <ref>. In order to see that λ_∙(c_) is the unique increasing chain in [0̂,], let c' be any other increasing chain in this interval and, for every i=1,…,l, let
w_i=λ_∙(c')_i_1λ_∙(c')_i_2⋯λ_∙(c')_i_|B_i|
be the subword
of λ_∙(c') whose labels belong to the steps in c'
where blocks with elements in B_i were merged. Since w_i is a subword of an increasing word it must also be increasing. Then by the discussion in the paragraph above, we conclude that there is a unique way to apply the merges in order to get an increasing word and this word is λ_∙(c_B_i^p_i). Note that all the labels from all these words are comparable among each other since the min B_i are all different. There is then a unique shuffle of the subwords λ_∙(c_B_i^p_i) that leads to an increasing word λ_∙(c') which is λ_∙(c_B_1^p_1)λ_∙(c_B_2^p_2)⋯λ_∙(c_B_l^p_l). So we have that c'=c_.
Finally, consider an interval of the form [, '] in Π_n^∙ with ={B_1^p_1,…,B_l^p_l}. We have by Lemma <ref> that [, '] is isomorphic to an interval [0̂,”] in the poset Π_{min B_1, …, min B_l}^∙ through an isomorphism that preserves the labels of the maximal chains. Hence by the discussion in the paragraph before we have that there is a unique increasing chain in the interval [0̂,”] of the latter poset and hence in [, '], completing the proof.
To finish showing that the labeling λ_∙ is an EW-labeling, we just need to show that λ_∙ has the rank two switching property. There are two types of rank two intervals in Π_n^∙. In type I, two disjoint pairs of blocks are merged independently of each other, while in type II three blocks are all merged together. These two types are shown in Figure <ref>. In type II, there are 3 possible choices for which initial block the pointed element at the top of the interval comes from. From Figure <ref> one can readily see that we have the following proposition.
The labeling λ_∙ of equation (<ref>) satisfies the rank two switching property.
By Propositions <ref>, <ref> and <ref> we have that the labeling λ_∙ on Π_n^∙ satisfies Definition <ref> which proves the following theorem.
The labeling λ_∙ is an EW-labeling of Π_n^∙. As a consequence, the poset Π_n^∙ has a Whitney dual.
We will give a combinatorial description of the corresponding Whitney dual in Section <ref>.
§.§ EL-labelings for the pointed partition poset
In <cit.> Chapoton and Vallette show that Π_n^∙ has a CL-labeling and hence is Cohen-Macaulay. In Remark 1.11 of that paper, they leave open the question of whether maximal intervals of Π_n^∙ have EL-labelings. We give a positive answer to their question below by providing an EL-labeling for Π_n^∙ (which restricts to an EL-labeling in every maximal interval). Before we do, let us note that Bellier-Millès, Delcroix-Oger and Hoffbeck <cit.> propose an edge labeling for Π_n^∙ that the authors claim is an EL-labeling. However, we later argue that the proposed labeling does not satisfy the conditions to be an EL-labeling (see Remark <ref>).
One might hope that our previous EW-labeling λ_∙ is an EL-labeling. Unfortunately, this is not the case. Indeed, consider the rank two interval [ 1̃/2̃/3̃,123̃]. The unique increasing chain 1̃/2̃/3̃ 13̃/2̃ 123̃ has word of labels (1,3)^0(1,2)^1. However, this sequence is not lexicographically comparable with the word of labels (1,2)^0(1,3)^0 of the chain 1̃/2̃/3̃ 12̃/3̃ 123̃ in the same interval (see Figure <ref>). Although λ_∙ is not an EL-labeling, if we keep the same edge labels, but instead use the ordering of the labels that we used for the weighted partition poset, we do get an EL-labeling. More specifically, we claim that the labeling λ_∙_2: ℰ(Π_n^∙) →Λ_n^w where λ_∙_2(') := λ_∙(') is an EL-labeling.
The following theorem has a very similar proof to the one in Proposition <ref> and <cit.>. To avoid a lengthy discussion we just provide the relevant steps in the proof, which can be verified by the reader.
The labeling λ_∙_2 is an EL-labeling of Π_n^∙. Consequently, Π_n^∙ is EL-shellable and its maximal intervals are Cohen-Macaulay.
We need to show that in each interval [,'] of Π_n^∙ there is a unique increasing maximal chain and that this chain is lexicographically first.
First we consider an interval of the form [0̂,[n]^p]. For this type of interval the reader can verify that there is an increasing maximal chain that has word of labels
λ_∙_2(c' _[n]^p)=
(1,2)^1⋯(1,n)^1 if p =1,
(1,2)^0⋯(1,p)^0(1,p+1)^1⋯(1,n)^1 if 1<p<n,
(1,2)^0⋯(1,n)^0, if p=n,
and which is of the form (where we represent each pointed partition by its unique nonsingleton block):
c'_[n]^p=(0̂⋖ [2]^2⋖⋯⋖ [p]^p⋖ [p+1]^p⋖⋯⋖ [n]^p).
We note that an argument similar to the one given in the proof of Proposition <ref> shows that this is the only increasing maximal chain in [0̂,[n]^p]. To show that this chain is lexicographically smallest, suppose this were not the case. Then there is some other maximal chain, d, whose word of labels is not lexicographically larger. Let d_1d_2⋯ d_n-1 be the word of labels of d and assume that the first time it disagrees with the increasing chain is at d_i-1. Note that we may assume that i>2 since the first label along the increasing chain is the smallest possible label.
First, suppose that i-1<p and that (1,i)^0 ≮ d_i-1. Then based on Λ_n^w, d_i-1 must be of the form (1,b)^1 where b<i. But this is impossible since by the time d adds the label d_i-1, b was already in the same block as 1 as d agrees with c'_[n]^p up to this step. Now suppose that i-1≥ p and that (1,i)^1 ≮ d_i-1. This would imply that d_i-1 is of one of the following forms: (1,b)^0 with b>i or (1,b)^u with b< i and u∈{0,1}. By this point along d, p is the pointed element in its block and this block contains 1. So, all the labels at this point must have an exponent of 1 and thus it cannot be of the form (1,b)^0. It also cannot be of the form (1,b)^1 with b<i since b is already in the block with 1 by this point along d. We conclude that the unique maximal chain is lexicographically smallest.
For an interval of the form [0̂,] where is of the form ={B_1^p_1,…,B_l^p_l} with min B_1 <⋯ <min B_l and l≥ 2, we consider for each i the unique increasing chain c'_B_i^p_i in [0̂,B_i^p_i]. Then the unique maximal chain c'_ in [0̂,] with word of labels
λ_∙_2(c'_)=λ_∙(c'_B_1^p_1)λ_∙(c'_B_2^p_2)⋯λ_∙(c'_B_l^p_l)
is the unique increasing chain and is lexicographically first among maximal chains in [0̂,].
Finally, for an interval of the form [, '] in Π_n^∙, we use Lemma <ref> to reduce to one of the two previous cases. Note that the lemma still applies in this case since the labelings λ_∙_2 and λ_∙ only differ in the order structure on the poset of labels.
At this point, the reader may be wondering whether λ_∙_2 is an EW-labeling. By looking at the last of the rank two intervals of type II in Figure <ref>, we can conclude that the unique increasing chain has a word of labels (a,b)^0(a,c)^0, but there is no chain with a word of labels (a,c)^0(a,b)^0. Hence, the EL-labeling λ_∙_2 of Theorem <ref> fails the rank two switching property and is not an EW-labeling.
As we mentioned earlier, λ_∙ is an EW-labeling, but is not an EL-labeling. Nevertheless, if we take the order dual of Π_n^∙ and reverse the ordering on the labels for λ_∙, we do get an EL-labeling of the order dual.
Given a poset P, let P^* be the order dual of P. Moreover, given a labeling λ:ℰ(P)→Λ of a poset P with label poset Λ, we define the dual labeling λ^*:ℰ(P^*)→Λ^* of the order dual poset P^* to be given by
λ^*(y⋖_P^* x) = λ(x⋖_P y).
In other words, the edge labels do not change when passing from P to its order dual P^*, just the ordering on the labels.
The labeling λ^*_∙ is an EL-labeling of Π_n^∙^*. Consequently the maximal intervals of the order dual are EL-shellable.
First note that since we reverse the order of the labels from λ_∙ to get λ^*_∙, an increasing chain in an interval [, ] of Π_n^∙^* is exactly the order dual of an increasing chain in the interval [, ] of Π_n^∙. It follows that since λ_∙ is an ER-labeling, λ^*_∙ is also an ER-labeling. So to finish the proof, we need only show that in every interval of Π_n^∙^*, the increasing chain with respect to λ^*_∙ is lexicographically smallest. Note that when we restrict the (unique) increasing chain on an interval to a smaller subinterval, that restriction is again the (unique) increasing chain in the said subinterval. Now, since the order on the labels is reversed, it is enough to show that in any interval of Π_n^∙ the last label along the increasing chain is strictly larger than the other possible last labels of other chains in that interval. The rest of the argument will follow by induction on the smaller subinterval that is obtained by removing the last step on the unique increasing chain. By appealing to Lemma <ref>, we only need to check this condition for increasing chains in intervals of the form [0̂,] in Π_n^∙. This is what we do next.
Consider the interval [0̂, ] and let c_ be the increasing chain. Suppose that the last cover relation on c_ is . We will show that the label λ_∙() is strictly larger than any other label of the form λ_∙('). Suppose that is of the form
={B_1^p_1,…,B_s^p_s,B_s+1^p_s+1,…,B_l^p_l}
and where B_1^p_1,…,B_s^p_s are the nonsingleton blocks and min B_1 <⋯ <min B_s.
Note that for any '⋖, we have labels of the form λ_∙('⋖)=(min B_i,a)^u with a∈ B_i, i=1,…,s and u∈{0,1}. Hence, the largest possible label that appears along the edges of [0̂,] is of the form (min B_s,a)^u. Moreover, since min B_s ≥min B_i for all 1≤ i≤ s, we know that (min B_s,a)^u is larger than any label in [0̂, ] of the form (min B_i, b)^u with i≠ s. Since the elements of B_s must be merged together when going from 0̂ to , a label of this form must occur on every maximal chain in [0̂,]. It follows that the increasing chain must end with a label of the form (min B_s,a)^u.
Now we distinguish between two cases whether |B_s|=2 or |B_s|>2. If |B_s|=2 there is only one label of the form (min B_s,a)^u where a is the unique element in B_s∖{min B_s} and u∈{0,1} is uniquely determined to be u=0 if p_s=a or u=1 otherwise. In either case, all other labels in [0̂,] are of the form (min B_i,b)^u, which are strictly smaller than (min B_s,a)^u.
Now suppose that |B_s|>2 and let b=max B_s. If p_s≠ b then the label in the uppermost cover relation of c_ is (min B_s,b)^1 (see the proof of Proposition <ref>) which is the largest label among all labels that appear in the interval [0̂,]. Since b is the largest value in its block, there is only one ' with this property. That is '={B_1^p_1,…,B_s∖{b}^p_s,{b}^b,B_s+1^p_s+1,…,B_l^p_l} which is the second to last element of c_.
If p_s= b then the label in the uppermost cover relation of c_ is (min B_s,c)^1 where c=max (B_s ∖{b}). Note that the only label of the form (min B_s,a)^u larger than (min B_s,c)^1 is (min B_s,b)^1.
But this label cannot actually appear among the cover relations '⋖ since this would indicate that p_s=b which is not the case. Thus (min B_s,c)^1 is the largest possible label that can occur in [0̂, ] and this also determines uniquely '={B_1^p_1,…,B_s∖{c}^b,{c}^c,B_s+1^p_s+1,…,B_l^p_l} which is the second to last element of c_. We conclude that λ^*_∙ is an EL-labeling.
We finish this section with a remark regarding a proposed edge labeling of Π_n^∙ given by Bellier-Millès, Delcroix-Oger, and Hoffbeck in <cit.> [In the ArXiv v2 version of this paper, the authors have not included this labeling anymore. We include here an explanation of why the proposed labeling is not an EL-labeling.].
The authors in <cit.> define a labeling λ̃:ℰ(Π_n^∙)→ℕ×ℕ where ℕ×ℕ has the lexicographic order. To describe their labeling, let ⋖' be such that the two pointed blocks in which were u-merged to get ' are A_i^q_i and A_j^q_j, with a=min(A_i)< b=min(A_j). Then define
λ̃(⋖')=
(b,a+n-||) if u=0,
(b,b+n-||) if u=1.
With this labeling in the interval [0̂,123̃] of Π_3^∙, we see that the chains 0̂⋖1̃2/3̃< 123̃ and 0̂⋖ 12̃/ 3̃<123̃ have words of labels (2,2)(3,2) and (2,1)(3,2) respectively, which are both increasing in the lexicographic order on ×. This shows that λ̃ already fails to satisfy the requirement of the uniqueness of the increasing chain in the interval [0̂,[3]^3]. Note that this issue does not arise in the interval [0̂,[3]^1]. So λ̃ is an EL-labeling of at least one maximal interval of Π^∙_3. Moreover since [0̂,[3]^1] ≅ [0̂,[3]^3], their labeling shows all maximal intervals of Π_3^∙ have an EL-labeling.
However, by extending this idea one can show that if n≥ 6, λ̃ is not an EL-labeling for any maximal interval of Π_n^∙. To see why, note that if n≥ 6, [0̂, [n]^i] always contains an interval of the form [0̂, [6]^j/7̃/⋯/ ñ] where j∈ [6]. If j=4,5,6 then the interval [0̂, 123̃/4̃/⋯/ñ] is in [0̂, [6]^j/7̃/⋯/ ñ] and so we have the same problem as before. If j=1,2,3, then we claim the interval [123^j/4̃/… /ñ, 123^j/456̃/7̃⋯/ñ] has two increasing chains. The chain
123^j/4̃/5̃/6̃/⋯ /ñ ⋖ 123^j/4̃5/6̃/⋯ /ñ ⋖ 123^j/456̃/⋯ /ñ
has label sequence (5,7)(6,7) and the chain
123^j/4̃/5̃/6̃/⋯ /ñ ⋖ 123^j/45̃/6̃/⋯ /ñ ⋖ 123^j/456̃/⋯ /ñ
has label sequence (5,6)(6,7), both of which are increasing. It follows that λ̃ is not an EL-labeling in general.
§ COMBINATORIAL DESCRIPTIONS OF WHITNEY DUALS
In this section we give combinatorial descriptions of the Whitney duals of Π_n^w and Π_n^∙ that come from the EW-labelings we discussed in Section <ref>.
§.§ Constructing Whitney duals
We start with a quick review of how to construct Whitney duals from EW-labelings. The full details can be found in <cit.>. Note that in <cit.>, the authors introduce two (isomorphic) Whitney duals that can be constructed using an EW-labeling. The first construction, which is denoted Q_λ(P) in <cit.>, is obtained by taking a quotient of the poset of saturated chains containing 0̂. This quotient is based on a quadratic relation on these chains. The second construction, denoted R_λ(P), is a poset on ascent-free saturated chains containing 0̂ and whose order relation is given by sorting a label into an existing ascent-free chain. Here we will only use the sorting description.
Let Λ be a poset of labels and consider the following sorting algorithm on words over Λ. Given the word w=w_1w_2⋯ w_iw_i+1⋯ w_n, let i be the smallest index such that w_i<w_i+1. If no such i exists, the algorithm terminates and returns w. If i does exist, swap w_i and w_i+1 to get the word w'=w_1w_2⋯ w_i+1w_i⋯ w_n. Next, apply this procedure to w' and continue until the algorithm terminates. Once the algorithm terminates one obtains a unique ascent-free word with the same underlying multiset of labels as the original word. We denote this unique word as sort(w).
As an example of the sorting algorithm described above, consider the poset of labels Λ shown in Figure <ref>. Applying the sorting algorithm to adbca gives the following.
adbca → dabca → dacba → dcaba
Thus, sort(adbca) = dcaba.
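For readers who prefer a computational description, the following short Python sketch (ours, not part of the original development) implements the sorting algorithm above. The label poset is passed as an explicit set of strict relations; the poset used in the last two lines is a hypothetical one chosen to be consistent with the computation just carried out, since the actual poset Λ is only given in Figure <ref>.

def sort_word(w, less):
    # Repeatedly swap the leftmost ascent until the word is ascent-free.
    # w    : a string or list of labels
    # less : a set of pairs (x, y) meaning x < y in the label poset
    w = list(w)
    while True:
        i = next((j for j in range(len(w) - 1) if (w[j], w[j + 1]) in less), None)
        if i is None:                     # no ascent left: w is ascent-free
            return ''.join(w)
        w[i], w[i + 1] = w[i + 1], w[i]   # swap the leftmost ascent

# hypothetical label poset with a, b below c, d and with a, b incomparable
less = {('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')}
print(sort_word('adbca', less))           # prints: dcaba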
Suppose that P is a poset with an EW-labeling λ. Let R_λ(P) be the set of pairs (x,w) where x ∈ P and w is the sequence of labels along a 0̂-x ascent-free chain. Order the elements of R_λ(P) by (x,w) ⋖ (y,u) if and only if x ⋖ y in P and u =sort(wλ(x⋖ y)), where wλ(x⋖ y) denotes the concatenation of the words w and λ(x⋖ y) and the sort is done with respect to the ordering of the labels of λ.
Suppose that P is a poset with an EW-labeling λ. Then P and R_λ(P) are Whitney duals.
The reader can verify that the poset labeled Q in Figure <ref> is R_λ(P) where λ is the EW-labeling of P given in the figure. In addition to being EW-labelings, where ascent-free chains have a unique word of labels in their own interval, λ_w and λ_∙ have the additional property that the sequence of labels along a saturated chain starting at 0̂ completely determines the elements on that chain. So the word of labels identifies the saturated chain uniquely in the poset and as a result, when we consider R_λ(P) the “x" in the pair (x,w) is redundant. In other words, we need only to consider ascent-free chains as elements of R_λ(P).
Figure <ref> depicts Π_3^w and the Whitney dual corresponding to the EW-labeling described in the previous section. See Figure <ref> for an isomorphic version of R_λ_w(Π_n^w) whose elements are described in terms of a family of forests.
§.§ Combinatorial families for the Whitney duals
We now turn our attention to giving combinatorial descriptions of the Whitney duals of Π_n^∙ and Π_n^w. First, we need to describe the combinatorial objects on which the Whitney duals will be defined.
A tree is an undirected graph in which any two vertices are connected by exactly one path. We say that a tree is rooted if there is a distinguished vertex that we call the root. If, in order to travel through the unique path from a vertex b to the root we need to pass through a vertex a, we say that a is an ancestor of b. In the specific cases that {a,b} is an edge, we say that a is the parent of b (or equivalently, b is a child of a). Every vertex in a rooted tree T which has at least one child is considered an internal vertex. If it has no child we say that the vertex is a leaf. A planar tree is a rooted tree in which the set of children of each internal vertex comes equipped with a total order (which we represent by placing the vertices from left to right in this order). A binary tree is a rooted planar tree in which every internal vertex has two children, a left child and a right child. All the trees we consider from now on are both rooted and planar, so we will be referring to them (informally) as “binary trees" when the context makes it clear.
We say a binary tree is a bicolored binary tree if there is a function color that assigns to each internal vertex x a number color(x)∈{0,1} (a color). Note that in all of our figures, we represent the color 0 with blue and the color 1 with red.
A linear extension of a binary tree T is a listing v_1,v_2, …,v_n-1 of the internal vertices of T such that each vertex precedes its parent.
Let T be a bicolored binary tree and v a vertex of T. We define the valency of v, ν(v), to be the smallest leaf label of the subtree rooted at v. Note that, by this definition, if w is an ancestor of v we have that ν(v)≥ν(w). Hence, since the leaves are labeled by the totally ordered set [n], there is a unique linear extension v_1,v_2,…,v_n-1 of T such that
ν(v_1)≥ν(v_2) ≥⋯≥ν(v_n-1).
We will call this linear extension the reverse-minimal linear extension of the internal vertices of T. Figure <ref> depicts the reverse-minimal linear extension of the tree in Figure <ref>. Note that we can extend the notion of reverse linear extension to forests with leaf set [n]. For example, see the forest in Figure <ref>. As we will see later, the reverse-minimal linear extension gives us a recipe to build a forest step-by-step in a way that corresponds to the ascent-free chains of the weighted and pointed partition posets.
Let v be an internal vertex of a bicolored binary tree T. We denote as L(v) the left child of v and as R(v) the right child of v.
We say that T is normalized if for every internal vertex v we have that
(N) ν(v)=ν(L(v)).
In other words, a tree is normalized if the smallest leaf label always appears in a leaf to its left. One can check that the tree depicted in Figure <ref> is normalized. We say a forest is normalized if all the trees in the forest are normalized. Whenever T is normalized, we say that an internal vertex v is Lyndon if L(v) is a leaf or else if L(v) is not a leaf then we have that
Lν(R(L(v)))>ν(R(v)).
Returning to our example in Figure <ref>, we see that the internal vertices that are labeled by 1,2,3,4,5,6 are all Lyndon since for each, their left child is a leaf. The internal vertex labeled as 7 is also Lyndon but 8 is not. To see why 7 is Lyndon, note that R(L(7)) is the leaf labeled 9 (and hence has valency 9) and R(7)=5 which has valency 2. Thus, ν(R(L(7)))>ν(R(7)). To see that 8 is not Lyndon, note that ν(R(L(8)))=2≯ 3= ν(R(8)), i.e. inequality <ref> is not satisfied.
The Whitney duals for the weighted partition poset and the pointed partition poset can both be described using special types of normalized bicolored binary trees together with a sliding procedure used to merge such trees. We now discuss these special types of trees.
§.§.§ Pointed Lyndon forests
Let us define first the objects used to describe the Whitney dual of the pointed partition poset.
A normalized bicolored binary tree T is said to be a pointed Lyndon tree if for each internal vertex v of T such that L(v) is not a leaf it must be that:
(PL1) color(L(v))≥ color(v)
and
(PL2) If color(L(v))=color(v)=1, then v is a Lyndon node.
We say a forest F is a pointed Lyndon forest if all the connected components of F are pointed Lyndon trees. We denote as _n^∙ the set of all pointed Lyndon forests whose leaf label set is [n].
The tree in Figure <ref> is a pointed Lyndon tree. Indeed, we need only check the conditions for the internal vertices labeled by 7 and 8 since the left children of the other internal vertices are leaves. For 7, both it and its left child 6 are colored by 1 (red) and 7 is Lyndon. For 8, it is colored 0 (blue) so we do not need to check anything else.
§.§.§ Bicolored Lyndon forests
Let us now define the trees used for the Whitney dual of the weighted partition poset.
Let T be a normalized bicolored binary tree. We say that T is a bicolored Lyndon tree if, for each internal vertex v of T whose left child is an internal vertex, either v is a Lyndon vertex or it must be that:
(CL) color(L(v))>color(v).
We say a forest F is a bicolored Lyndon forest if all the connected components of F are bicolored Lyndon trees. We denote as _n^w the set of all bicolored Lyndon forests whose leaf label set is [n].
The tree in Figure <ref> is a bicolored Lyndon tree. As mentioned earlier, the only internal vertex that is not a Lyndon vertex is the one labeled by 8. Its color is 0 (blue) and its left child is colored by 1 (red), so it satisfies condition (<ref>).
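Since both families of trees are defined by local conditions at the internal vertices, they are easy to test mechanically. The following Python sketch (ours; the encoding of a tree as a leaf label or a triple (left, right, color) is an assumption, as are the two small example trees, which merely exhibit the same contrasting behavior as the trees T_1 and T_2 discussed in the next paragraph) checks conditions (N), (L), (PL1), (PL2) and (CL).

def valency(t):
    # smallest leaf label of the subtree t; leaves are integers,
    # internal vertices are triples (left, right, color)
    return t if isinstance(t, int) else min(valency(t[0]), valency(t[1]))

def is_leaf(t):
    return isinstance(t, int)

def is_normalized(t):
    if is_leaf(t):
        return True
    left, right, _ = t
    return (valency(left) < valency(right)      # condition (N)
            and is_normalized(left) and is_normalized(right))

def is_lyndon_vertex(t):
    # condition (L) at the root of t; holds automatically if L(t) is a leaf
    left, right, _ = t
    return is_leaf(left) or valency(left[1]) > valency(right)

# both checks below assume the tree is normalized (see is_normalized above)
def is_pointed_lyndon(t):
    if is_leaf(t):
        return True
    left, right, color = t
    ok = True
    if not is_leaf(left):
        ok = (left[2] >= color                                              # (PL1)
              and not (left[2] == color == 1 and not is_lyndon_vertex(t)))  # (PL2)
    return ok and is_pointed_lyndon(left) and is_pointed_lyndon(right)

def is_bicolored_lyndon(t):
    if is_leaf(t):
        return True
    left, right, color = t
    ok = is_leaf(left) or is_lyndon_vertex(t) or left[2] > color            # (CL)
    return ok and is_bicolored_lyndon(left) and is_bicolored_lyndon(right)

T1 = ((1, 3, 0), 2, 1)   # bicolored Lyndon but not pointed Lyndon
T2 = ((1, 2, 0), 3, 0)   # pointed Lyndon but not bicolored Lyndon
print(is_bicolored_lyndon(T1), is_pointed_lyndon(T1))   # True False
print(is_bicolored_lyndon(T2), is_pointed_lyndon(T2))   # False True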
At this point, the reader may be wondering if pointed Lyndon trees and bicolored Lyndon trees are the same. To see that this is not the case, consider the trees in Figure <ref>. We claim that the tree T_1 is bicolored, but not pointed. All the internal vertices of T_1 are Lyndon, so it is automatically a bicolored Lyndon tree. However, it is not a pointed Lyndon tree because the color of the root is larger than its left child, a violation of condition (<ref>). On the other hand, we claim that T_2 is a pointed Lyndon tree, but not a bicolored Lyndon tree. It is pointed since both vertices are colored by 0 (blue). However, it is not a bicolored Lyndon tree since the root is not a Lyndon vertex and the root's color is not strictly larger than its left child, a violation of condition (<ref>).
Note that condition (<ref>) implies that the family of classical Lyndon trees coincides with the subfamily of bicolored Lyndon trees that have only internal vertices of color 0 (blue) or only internal vertices of color 1 (red). On the other hand, condition (<ref>) implies that the family of classical Lyndon trees coincides with the subfamily of pointed Lyndon trees which have all internal nodes colored 1 (red).
§.§ A Whitney dual for Π_n^∙
Let F be a pointed Lyndon forest. We explain how to associate an ascent-free saturated chain c(F) starting at 0̂ in Π_n^∙. Recall that F has a unique reverse-minimal linear extension order on the internal vertices. To construct an ascent-free chain from F, we start with the bottom element, 1̃/2̃/⋯ /ñ. In the first step, we merge together the two blocks given by the leaves of the left and right subtrees rooted at the first internal vertex. We keep the point on the element of the left subtree if the first internal vertex is colored 1 (red) and we keep the point on the element of the right subtree if it is colored 0 (blue). We continue doing this so that at the i^th step we merge together the elements of the left and right subtrees rooted at the i^th internal vertex and keep the pointed element of the left subtree if it is colored 1 (red) or of the right subtree if it is colored 0 (blue).
Figure <ref> has a depiction of a pointed Lyndon forest F and its corresponding chain c(F). In the first step of the chain, we merge the blocks containing 6 and 7 since they are the leaves of the left and right subtrees rooted at the first internal vertex. We keep 6 pointed because this first internal vertex is colored by 1 (red). Continuing this process gives the saturated chain seen in the figure.
Let F be a pointed Lyndon forest and let a_i be the valency of the left child of the i^th internal vertex and b_i the valency of the right child of the i^th internal vertex. That is, a_i = ν(L(i)) and b_i =ν(R(i)). Note that a_i is also the valency of i since our trees are normalized (i.e. ν(i) =ν(L(i))). Then, it is straightforward to see that the sequence of labels along the chain c(F) is (a_1,b_1)^u_1(a_2,b_2)^u_2⋯ (a_k,b_k)^u_k where u_i is the color of the i^th internal vertex.
Let us return to the forest in Figure <ref>. Since the left child of the first internal vertex is the leaf labeled 6, the right child is the leaf labeled 7, and the first internal vertex is colored 1 (red), we have that (a_1,b_1)^u_1 = (6,7)^1. Moving to the second internal vertex we see its left child is the leaf 5 and the right child is the leaf 8. Since it is colored by 1 (red) the next label is (5,8)^1. Next, moving to the third internal vertex, we see that the left child is the leaf 4 and the right child is the first internal vertex whose valency is 6. Since the third internal vertex is colored 1 (red), the corresponding label is (4,6)^1. Continuing this, we see the sequence we get from the forest is
(6,7)^1 (5,8)^1 (4,6)^1 (3,4)^0 (2,5)^0 (1,9)^1 (1,2)^1
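As a quick illustration, the following Python sketch (ours; the nested-tuple encoding of the forest is an assumption, read off from the label word just computed) produces the word of labels of c(F) by listing the internal vertices in reverse-minimal order.

def valency(t):
    # smallest leaf label of the subtree t
    return t if isinstance(t, int) else min(valency(t[0]), valency(t[1]))

def internal_vertices(t, depth=0):
    if isinstance(t, int):
        return []
    return ([(t, depth)]
            + internal_vertices(t[0], depth + 1)
            + internal_vertices(t[1], depth + 1))

def label_word(forest):
    verts = [vd for t in forest for vd in internal_vertices(t)]
    # reverse-minimal order: valencies weakly decreasing; vertices with equal
    # valency lie on a common path and the deeper one must come first
    verts.sort(key=lambda vd: (-valency(vd[0]), -vd[1]))
    return [(valency(v[0]), valency(v[1]), v[2]) for v, _ in verts]

# the forest of the example above, encoded as nested tuples (left, right, color)
F = [(3, (4, (6, 7, 1), 1), 0),              # tree with leaves 3, 4, 6, 7
     ((1, 9, 1), (2, (5, 8, 1), 0), 1)]      # tree with leaves 1, 2, 5, 8, 9
print(label_word(F))
# [(6, 7, 1), (5, 8, 1), (4, 6, 1), (3, 4, 0), (2, 5, 0), (1, 9, 1), (1, 2, 1)]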
Note that the sequence (and hence its chain) is ascent-free with respect to the ordering of the labels for λ_∙. It turns out that this is not a coincidence, as we show next.
The map sending F to c(F) is a bijection between pointed Lyndon forests whose leaf label set is [n] and ascent-free chains of Π_n^∙ starting at 0̂, where the ascent-free condition is defined with respect to λ_∙.
First, we show that the map is well-defined. That is, c(F) is in fact ascent-free for all pointed Lyndon forests F. Assume that the internal vertices are 1,2,…, k, listed in the reverse-minimal ordering. Let (a_1,b_1)^u_1(a_2,b_2)^u_2⋯ (a_k,b_k)^u_k be the sequence of labels along c(F). As mentioned earlier, a_i is the valency of i. Since we are using the reverse-minimal order, this gives us a_1≥ a_2≥⋯≥ a_k. If a_i> a_i+1, then (a_i,b_i)^u_i > (a_i+1,b_i+1)^u_i+1 in the ordering of labels of λ_∙. On the other hand, if a_i=a_i+1, then i+1 is an ancestor of i and i must be in the left tree rooted at i+1. Since the reverse-minimal ordering is a linear extension of the internal vertices, it must be the case that i is the left child of i+1. Since F is a pointed Lyndon forest, condition (<ref>) implies that u_i = color(i) ≥ color(i+1) =u_i+1. If i+1 is not Lyndon, then condition (<ref>) implies that u_i and u_i+1 are not both equal to 1; combined with the previous inequality, this means that either u_i=1 and u_i+1=0, in which case (a_i,b_i)^u_i > (a_i+1,b_i+1)^u_i+1, or u_i=u_i+1=0, in which case the two labels lie in the antichain A_a_i and are incomparable. On the other hand, if i+1 is Lyndon, b_i+1=ν(R(i+1)) <ν(R(L(i+1))) =b_i. Then either (a_i,b_i)^u_i > (a_i+1,b_i+1)^u_i+1 (if u_i>u_i+1 or u_i=1=u_i+1) or (a_i,b_i)^u_i and (a_i+1,b_i+1)^u_i+1 are incomparable (if u_i=0=u_i+1). In all cases there is no ascent at position i, and it follows that the map is well-defined.
We claim that the map sending F to c(F) is invertible. Suppose we have an ascent-free chain with sequence
(a_1,b_1)^u_1(a_2,b_2)^u_2⋯ (a_k,b_k)^u_k.
Build a forest recursively by first placing n isolated vertices (which will be the leaves) labeled by [n]. Now assume that you are at the i^th step of this process. Let T_1 be the connected component of the forest with minimal leaf label a_i and let T_2 be the connected component of the forest with minimal leaf label b_i. Add a vertex colored u_i and add edges from this vertex to the roots of T_1 and T_2. Repeat this process until each pair (a_j,b_j)^u_j has been used and call the resulting forest F. Since (a_1,b_1)^u_1(a_2,b_2)^u_2⋯ (a_k,b_k)^u_k is ascent-free, it must be the case that a_1≥ a_2≥⋯≥ a_k. It then follows that the reverse minimal ordering on F is exactly the order in which the internal vertices were added in the process. Upon observing this, it is clear that c(F) is the chain with label sequence (a_1,b_1)^u_1(a_2,b_2)^u_2⋯ (a_k,b_k)^u_k. So, if we can show this map is well-defined, we will have that the map sending F to c(F) is invertible and thus a bijection. We do this next.
Let F be a forest obtained by using the inverse procedure described in the previous paragraph. We need to show that F is a pointed Lyndon forest. First, it is clear from the construction that F is a bicolored binary forest. Since a_i<b_i for all i, we also have that F is normalized. Now consider an internal vertex v and suppose that v is the i^th internal vertex in the reverse minimal order. If L(v) is not a leaf, then L(v) must immediately precede v in the reverse minimal ordering. That is, L(v) is the (i-1)^th internal vertex. Since F is normalized, v and L(v) have the same valency and so a_i-1 = a_i. Since (a_1,b_1)^u_1(a_2,b_2)^u_2⋯ (a_k,b_k)^u_k is ascent-free, this means that u_i-1≥ u_i. Thus, color(L(v)) =u_i-1≥ u_i=color(v) and so condition (<ref>) is satisfied. If u_i-1 =1 = u_i, then the fact that a_i-1 = a_i implies that b_i-1>b_i (otherwise either a label would be repeated in the sequence or the sequence would have an ascent). Thus,
ν(R(L(v)))= b_i-1 > b_i = ν(R(v)) and so v is a Lyndon vertex (equation (<ref>)). It follows that condition (<ref>) is also satisfied. We conclude that F is a pointed Lyndon forest. Thus, the inverse map is well-defined, completing the proof.
By the previous theorem and Theorem <ref>, we can describe the Whitney dual of the pointed partition poset using pointed Lyndon forests. To do this, we will need to describe how to merge trees in a Lyndon forest. Suppose that T_1 and T_2 are trees in a Lyndon forest with roots r_1 and r_2 where the minimum leaf label of T_1 is less than the minimum leaf label of T_2. Let u∈{0,1}. To u-merge T_1 with T_2, we first create a new vertex r and color it so that color(r)=u. Then we add edges from r to r_1 and r_2. If the resulting tree is a pointed Lyndon tree, we stop. If it is not, we slide the new internal vertex r together with its right subtree past its left child and check if the result is a pointed Lyndon tree. We continue this process until we obtain a pointed Lyndon tree.
An example of a 1-merge of two pointed Lyndon trees is illustrated in Figure <ref>. First, we add a vertex r colored by 1 (red) and add edges from r to the roots of T_1 and T_2. We then need to check whether or not this construction results in a pointed Lyndon tree. Since color(L(r))=0<1=color(r), we have a violation of condition (<ref>) at r and hence, this is not yet a pointed Lyndon tree. We then slide r, together with its right subtree, to its left, interchanging r and L(r). After sliding r to its left we have that color(L(r))=color(r)=1. However, 4=ν(R(L(r)))<ν(R(r))=5, resulting in a violation of condition (<ref>), so we slide r once more to its left to finally get a valid pointed Lyndon tree.
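The sliding step admits a compact recursive description (as we read the procedure): if r has left child x with subtrees A,B and right subtree S, then sliding r past x replaces the tree r[x[A,B],S] by x[r[A,S],B]. The Python sketch below is ours; the tuple encoding and the particular trees T1, T2 are assumptions consistent with the example just discussed.

def valency(t):
    return t if isinstance(t, int) else min(valency(t[0]), valency(t[1]))

def is_leaf(t):
    return isinstance(t, int)

def violation_at(t):
    # do conditions (PL1)/(PL2) fail at the root of t?
    left, right, color = t
    if is_leaf(left):
        return False
    lyndon = valency(left[1]) > valency(right)   # condition (L) at the root
    return left[2] < color or (left[2] == color == 1 and not lyndon)

def u_merge(t1, t2, u):
    # assumes t1, t2 are pointed Lyndon trees with valency(t1) < valency(t2)
    r = (t1, t2, u)
    stack = []                       # vertices passed while sliding r down
    while violation_at(r):
        (a, b, c), s, u = r          # r = (x, s, u) with x = (a, b, c)
        stack.append((b, c))         # x keeps its color c and its right subtree b
        r = (a, s, u)                # r slides past x, keeping its right subtree s
    for b, c in reversed(stack):     # reattach the vertices passed on the way down
        r = (r, b, c)
    return r

# the two trees of the example above (in our encoding), 1-merged:
T1 = ((1, 4, 1), (2, 3, 1), 0)
T2 = ((5, 7, 1), 6, 0)
print(u_merge(T1, T2, 1))
# (((1, ((5, 7, 1), 6, 0), 1), 4, 1), (2, 3, 1), 0)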
Let us remark here that this process always terminates in a valid pointed Lyndon tree. This is the case since if we keep sliding until we cannot anymore, the left child of the root will be a leaf. We are now in a position to put an ordering on the pointed Lyndon forests.
The poset of pointed Lyndon forests is the set _n^∙ together with the cover relation F⋖ F' whenever F' is obtained from F when exactly two trees of F are u-merged for some u∈{0,1}.
We illustrate _3^∙ in Figure <ref>. The sliding procedure described to merge two pointed Lyndon trees is just a way to explain the sorting procedure used to define the Whitney dual R_λ in Definition <ref>. Note that in the definition of the map sending F to c(F) we do not need F to be a pointed Lyndon forest, nor do we need to use the reverse-minimal ordering on the internal vertices. Indeed, as long as F is a normalized bicolored binary forest and we use some linear extension of the internal vertices, the map still produces a saturated chain in Π_n^∙ starting at 0̂. In the case that the corresponding chain is not ascent-free, swapping the labels in an ascent either corresponds to reordering the internal vertices to get the reverse-minimal order or corresponds to the sliding procedure. For example, say we have the following label sequence in Π_4^∙
(1,2)^1(3,4)^1
This corresponds to a pointed Lyndon forest with two components where the internal vertices are ordered so that the internal vertex above leaves 1 and 2 comes first. When we swap the labels in the sequence to get the ascent-free sequence
(3,4)^1(1,2)^1
we are just reordering the internal vertices so the one above leaves 3 and 4 comes first.
For an example where sliding occurs, consider the sliding procedure shown in Figure <ref>. The sequence of labels of the pointed Lyndon forest in the upper left corner of the figure is
(5,7)^1(5,6)^0(2,3)^1(1,4)^1(1,2)^0
Adding the red vertex r corresponds to adding the label (1,5)^1 at the end of the sequence since it merges together the components with minimal leaf label 1 and 5. So we would have the sequence
(5,7)^1(5,6)^0(2,3)^1(1,4)^1(1,2)^0(1,5)^1.
This is the label sequence for a saturated chain starting at 0̂ of Π_n^∙. However, it is not ascent-free since (1,2)^0(1,5)^1 is an ascent. Moreover, the corresponding tree in the upper right corner of Figure <ref> is not a pointed Lyndon tree. As λ_∙ has the rank two switching property, we can swap these two labels to get the sequence
(5,7)^1(5,6)^0(2,3)^1(1,4)^1(1,5)^1(1,2)^0.
This swap corresponds to sliding the root r to the left of its left child giving the tree in the bottom right corner of Figure <ref> whose label sequence is the one given above. This sequence has an ascent at (1,4)^1(1,5)^1 and the corresponding tree is not a pointed Lyndon tree. Swapping labels, we get
(5,7)^1(5,6)^0(2,3)^1(1,5)^1(1,4)^1(1,2)^0
which is ascent-free. Again, this swap corresponds to sliding r once more past its left child, giving the tree in the bottom left corner of Figure <ref>, which is the pointed Lyndon tree corresponding to this sequence.
Because the sliding procedure corresponds to the sorting procedure in the Whitney dual, we get the following.
_n^∙ is a Whitney dual to Π_n^∙. In particular, _n^∙≅ R_λ_∙(Π_n^∙).
We omit the full details of this proof here since it is rather technical and amounts to a case-by-case analysis of the ways one can have ascents in the chains. For all the details, the reader can consult <cit.>.
§.§ A Whitney dual for Π_n^w
Here we give a combinatorial description for the Whitney dual of the weighted partition poset. The method closely follows what we did in the previous subsection. As such, we do not provide as many detailed examples.
In <cit.>, it was shown that the maximal ascent-free saturated chains of Π_n^w with respect to λ_w are in bijection with bicolored Lyndon trees. This bijection can be modified to give a bijection between ascent-free chains of Π_n^w starting at 0̂ and bicolored Lyndon forests. It follows that the elements of the Whitney dual R_λ_w(Π_n^w) can be described using bicolored Lyndon forests. We briefly explain this bijection.
Let F be a bicolored Lyndon forest. As mentioned in Section <ref>, F has a unique reverse-minimal linear extension on the internal vertices. To construct our ascent-free chain c(F), we follow a similar procedure as in the case for pointed Lyndon forests (see Figure <ref>). We start with the bottom element, 1^0/2^0/⋯ /n^0. At the i^th step we merge together the elements of the left and right subtrees rooted at the i^th internal vertex. We add together the weights of these blocks and add 1 to this weight if this internal vertex is colored by 1. If the internal vertex is colored by 0, we do not add 1 to this sum. See <cit.> for an example.
To describe the covering relation in R_λ(Π_n^w), we need to describe a method to merge bicolored Lyndon trees. Just like the case for the pointed partition poset, the covering relation is defined using a sliding process. Let T_1 and T_2 be bicolored Lyndon trees with the minimum leaf label of T_1 less than the minimum leaf label of T_2. Let u∈{0,1}. To u-merge T_1 and T_2, we first create a new vertex r and color it so that color(r)=u. Then we add edges from r to the roots of T_1 and T_2. If the resulting tree is a bicolored Lyndon tree, we stop. If not, we slide the new internal vertex r together with its right subtree past its left child. We continue this procedure until we get a bicolored Lyndon tree. See Figure <ref> for an example of this procedure.
The poset of bicolored Lyndon forests is the set of bicolored Lyndon forests on [n], _n^w, together with the cover relation F⋖ F' whenever F' is obtained from F when exactly two trees of F are u-merged for some u∈{0,1}.
Figure <ref> depicts _3^w.
_n^w is a Whitney dual of Π_n^w. In particular, _n^w≅ R_λ_w(Π_n^w).
As is the case with the pointed partition poset, the proof is a case by case analysis of the ways one might have an ascent on an ascent-free chain when we add a new label. See <cit.> for all the details.
In Π_n^w, the interval [0̂, [n]^0] is isomorphic to the partition lattice Π_n. The labeling λ_w restricted to this interval is also an EW-labeling, and hence a subposet of _n^w is also a Whitney dual for the partition lattice. In [0̂, [n]^0] all labels are of the form (a,b)^0. Thus, all the ascent-free chains correspond to bicolored Lyndon forests where every vertex must be Lyndon. Moreover, all the internal vertices are colored 0 (blue), so we can just assume that the internal vertices are uncolored. We then get a Whitney dual of Π_n where the elements are normalized binary forests whose internal vertices are Lyndon and where the covering relation is given by merging together trees to get normalized binary trees whose vertices are Lyndon. By restricting to forests with internal vertices colored 0 (blue) in Figure <ref>, we get a depiction of this Whitney dual of Π_3. We should note that restricting λ_w to [0̂, [n]^0] yields Stanley's labeling <cit.> for Π_n. Stanley's labeling was used in <cit.> to construct a Whitney dual to Π_n isomorphic to the poset of increasing spanning forests _n. It follows then that the subposet of _n^w formed by Lyndon forests (with blue internal nodes) is isomorphic to _n.
§ ALGEBRAIC CONSEQUENCES
§.§ Homological consequences of the EL-labelings
In Section <ref> we showed that λ_∙ is an EL-labeling of the order dual of Π_n^∙ and λ_∙_2 is an EL-labeling of Π_n^∙. A poset and its order dual have the same order complex and hence also have the same cohomology. As a result, Theorem <ref> implies that the two EL-labelings give bases for the cohomology of maximal intervals of Π_n^∙. These bases are indexed by the ascent-free chains in the two labelings.
In Section <ref> we showed that the maximal ascent-free chains of Π_n^∙ with respect to λ_∙ are indexed by pointed Lyndon trees. To each pointed Lyndon tree T, we gave a bijection mapping T to the ascent-free chain c(T).
Let 𝒯Lyn_n,p^∙ be the set of pointed Lyndon trees such that along the path from the leaf labeled p to the root, if the path moves to the left, the internal vertex label is 0 and if it is to the right, the label is given by 1. In Figure <ref> the reader can easily observe all the trees in 𝒯Lyn_3,p^∙ for p=1,2,3 by selecting the subset of maximal elements in _3^∙ with the leaf p decorated with a tilde (p̃). Note that T∈𝒯Lyn_n,p^∙, if and only if the top element of c(T) is [n]^p. We have the following consequences of Theorem <ref> and the characterization of the ascent-free chains of λ_∙ presented in Section <ref>.
For every p∈ [n] we have that
* The order complex Δ((0̂,[n]^p)) is shellable and has the homotopy type of a wedge of | 𝒯Lyn_n,p^∙| many spheres of dimension n-3. As a consequence, the interval [0̂,[n]^p] is Cohen-Macaulay.
* The set
{c(T) | T∈𝒯Lyn_n,p^∙} forms a basis for H^n-3(0̂,[n]^p).
Note that, as a consequence of Theorem <ref> and the fact that the intervals [0̂,[n]^p] are isomorphic for all p, the sets 𝒯Lyn_n,p^∙, p∈ [n], are all equinumerous.
Now let us turn our attention to the consequences of the EL-labelings of Π_n^∙ for the operad 𝒫reℒie. In the theory of nonassociative algebras, a 𝒫reℒie-algebra (pre-Lie algebra) is a vector space V that comes equipped with a binary operation ∘ which satisfies for every v,w,z ∈ V the relation
(v ∘ w) ∘ z - v ∘ (w ∘ z) =(v ∘ z) ∘ w - v ∘ (z ∘ w).
Let 𝒫reℒie(n) be the multilinear component of the free pre-Lie algebra on n generators. In <cit.>, Vallette
proved the following theorem (in terms of homology, which we reinterpret here in terms of cohomology).
We have the following 𝔖_n-module isomorphism
𝒫reℒie(n)≅_𝔖_n⊕_p∈[n]H^n-3(0̂,[n]^p)⊗ sgn_n,
where sgn_n is the sign representation of 𝔖_n.
Using Theorem <ref> we obtain a corresponding basis for 𝒫reℒie(n), which we describe now. Let T= T_L ∧^u T_R denote a normalized bicolored binary tree where T_L and T_R are respectively the left and right subtrees from the root and u is the color of the root.
Define Θ(T) to be the element in 𝒫reℒie(n) defined recursively by Θ(T)=a when T=a is the one-leaved tree with leaf-label a, and if T=T_L ∧^u T_R then
Θ(T)=Θ(T_L)∘Θ(T_R) if u=1,
Θ(T_R)∘Θ(T_L) if u=0.
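A recursive implementation of Θ is immediate. The Python sketch below is ours: trees are encoded as nested tuples (left, right, color), and the tree T is our encoding of the pointed Lyndon tree obtained at the end of the sliding example in the previous section, which is the same tree used in the example that follows.

def theta(t):
    # Θ(T): leaves map to themselves; a vertex of color 1 gives Θ(T_L)∘Θ(T_R),
    # a vertex of color 0 gives Θ(T_R)∘Θ(T_L)
    if isinstance(t, int):
        return str(t)
    left, right, color = t
    l, r = theta(left), theta(right)
    return '(' + (l + '∘' + r if color == 1 else r + '∘' + l) + ')'

T = (((1, ((5, 7, 1), 6, 0), 1), 4, 1), (2, 3, 1), 0)
print(theta(T))   # ((2∘3)∘((1∘(6∘(5∘7)))∘4))

Up to the outermost pair of parentheses, the printed string is the monomial computed in the example below.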
As an example of this definition, let T be the pointed Lyndon tree in the bottom left of Figure <ref>. One can check that the associated monomial Θ(T) is (2∘ 3)∘ ((1∘ (6 ∘ (5∘ 7))) ∘ 4). Theorems <ref> and <ref> imply the following theorem.
The set
{Θ(T) | T∈𝒯Lyn_n^∙} forms a basis for 𝒫reℒie(n).
In <cit.>, the authors proved a theorem analogous to Theorem <ref>, providing a basis for the reduced cohomology H^n-3(0̂,[n]^i) of the maximal intervals of Π_n^w for i=0,…, n-1, and for the multilinear component ℒie^2(n) of the free bibracketed Lie algebra on n generators. Those bases are indexed in terms of the bicolored Lyndon trees, 𝒯Lyn^w_n, since these index the ascent-free chains of λ_w. We will show that the same set of trees indexes a basis for H^n-3(0̂,[n]^p) and 𝒫reℒie(n).
In <cit.> the authors prove that there is a rank-preserving bijection between Π_n^∙ and Π_n^w. We prove here the following further statement about their sets of saturated chains from 0̂.
There is a label-preserving bijection between saturated chains from 0̂ in (Π_n^∙,λ_∙) (or (Π_n^∙,λ_∙_2)) and in (Π_n^w,λ_w).
First note that between λ_∙ and λ_∙_2, the edge labels are the same, only the ordering of the labels is different. So we can find the bijection using λ_∙.
In (Π_n^∙,λ_∙) at every step on a saturated chain from 0̂ we u-merge two blocks (A,p) and (B,q) such that min A < min B and assign the label
λ_∙_2(⋖')=(min A, min B)^u.
In (Π_n^w,λ_w) at every step we merge two blocks (A,i) and (B,j) to obtain the block (A∪ B, i+j+u) and assign the label
λ_w(⋖')=(min A, min B)^u.
Note that in both cases, at every merging step from bottom to top we are free to choose between u=0 or u=1 and hence the sets of words of labels for saturated chains from 0̂ are equal. Since the saturated chains are uniquely determined by their words of labels in both labelings, the words of labels induce a bijection among saturated chains.
Theorem <ref> gives us analogous results to Theorem <ref> and Theorem <ref>, but this time in terms of bicolored Lyndon trees.
Let 𝒯Lyn_n,p^w be the set of bicolored Lyndon trees such that along the path from the leaf labeled p to the root, if the path moves to the left, the internal vertex label is 0 and if it is to the right, the label is given by 1.
Note that the definition of 𝒯Lyn_n,p^w amounts to selecting the bicolored Lyndon trees whose associated maximal chains belong to the interval [0̂,[n]^p].
For every p∈ [n] we have that
* The order complex Δ((0̂,[n]^p)) has the homotopy type of a wedge of |𝒯Lyn_n,p^w| many spheres of dimension n-3. Hence the interval [0̂,[n]^p] is Cohen-Macaulay.
* The set
{c(T) | T∈𝒯Lyn_n,p^w} forms a basis for H^n-3(0̂,[n]^p), where c(T) gives the corresponding maximal chain associated to T in Π_n^∙.
* The set
{Θ(T) | T∈𝒯Lyn_n^w} forms a basis for 𝒫reℒie(n).
Theorem <ref> says that λ_∙_2 is an EL-labeling of Π_n^∙. Theorem <ref> implies that the ascent-free words of labels according to λ_∙_2 and λ_w are the same. The ascent-free words of labels of λ_w are indexed by bicolored Lyndon trees by <cit.>. The reader can check that the bicolored Lyndon trees in 𝒯Lyn_n,p^w are precisely the ones that index the ascent-free chains in [0̂,[n]^p] according to λ_∙_2.
Vallette also concludes in <cit.> that a criterion to show that a basic-set quadratic operad 𝒫 and its Koszul dual 𝒫^! have the property of being Koszul is to show that all maximal intervals of its associated operadic partition poset Π^𝒫 are Cohen-Macaulay. Theorems <ref> and <ref> then give new proofs of the following theorem.
The operads 𝒫erm and 𝒫reℒie are Koszul operads.
§.§ CL-labelings compatible with isomorphisms and PBW bases
In <cit.> the authors introduce a new compatibility condition on CL-labelings of operadic posets which gives rise to a Poincaré–Birkhoff–Witt
(PBW) basis for the corresponding operad. This PBW basis comes from the increasing chains, as opposed to the ascent-free chains that are used to give a basis for the cohomology of the poset, and hence for the Koszul dual of the operad.
The property defined in <cit.>, which a CL-labeling may or may not have, is called being compatible with isomorphisms of subposets. We refer the reader to that article for the complete context and proper definitions, which we mostly omit here. Informally, this property requires for operadic posets (in our case with binary generators) that on intervals of the collection {Π_n^𝒫}_n≥ 1 that are isomorphic due to the action of the operad 𝒫, but perhaps on different sets of inputs, there is also a consistent map between the words of labels of saturated chains on those “𝒫-isomorphic” intervals. The requirement is that increasing chains map to increasing chains, ascent-free chains map to ascent-free chains and the lexicographic partial order on chains is preserved on 𝒫-isomorphic intervals.
Both of the labelings (Π_n^w,λ_w) and (Π_n^∙,λ_∙_2) depend only on the minimal elements of the blocks that are being merged at each step and the generator of the corresponding operad that is being used to merge the blocks. Because the min function is preserved under the unique order isomorphism between two totally ordered sets of the same cardinality, we follow a very similar argument as the one in <cit.> to conclude the following theorem.
The EL-labelings (Π_n^w,λ_w) and (Π_n^∙,λ_∙_2) are compatible with isomorphisms of subposets.
The following two theorems then highlight the relevance of the notion of CL-labelings compatible with isomorphisms in the context of operad theory.
A quadratic basic-set operad 𝒫 whose operadic poset Π^𝒫_n admits a CL-labeling compatible with isomorphisms of subposets admits a partially ordered PBW basis given by the increasing maximal chains of the CL-labeling, where the order is given by the lexicographic order on saturated chains.
An operad equipped with a partially ordered PBW basis is Koszul.
We obtain as a corollary of Theorems <ref>, <ref>, and <ref> a new proof of the fact that the operads 𝒞om^2, 𝒫erm, and their Koszul duals ℒie^2 and 𝒫reℒie are all Koszul operads.
To determine the corresponding PBW bases predicted by Theorem <ref> we use the increasing chains of both EL-labelings λ_w (described in <cit.>) and λ_∙_2 (described in the proof of Theorem <ref>). Note that from Theorem <ref> it follows that the increasing chains in both (Π_n^w,λ_w) and (Π_n^∙,λ_∙_2) have the same words of labels. These increasing words of labels are indexed by the following family of trees. Let lcomb_n^w be the set of left-combs of the form
((1∧^c_1 2)∧^c_2 3)⋯ ) ∧^c_n-1 n,
where for some i∈ [n] we have that c_1=⋯ =c_i-1=0 and c_i=⋯ =c_n-1=1 (See Figure <ref>).
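For instance, for n=3 the set lcomb_3^w consists of the three left-combs (1∧^1 2)∧^1 3, (1∧^0 2)∧^1 3 and (1∧^0 2)∧^0 3, whose images under Θ are the monomials (1∘ 2)∘ 3, (2∘ 1)∘ 3 and 3∘ (2∘ 1), respectively.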
We have that
* The EL-labeling (Π_n^∙,λ_∙_2) determines the PBW basis for 𝒫erm formed by the identity and tree-monomials of the form
{Θ(T) | T∈lcomb_n^w}_n≥ 1.
* The EL-labeling (Π_n^w,λ_w) determines a PBW basis for 𝒞om^2 formed by the identity and tree-monomials of the form
((1∘_c_1 2)∘_c_2 3)⋯ ) ∘_c_n-1 n,
where for some i∈ [n] we have that c_1=⋯ =c_i-1=0 and c_i=⋯ =c_n-1=1.
§ WHITNEY TWINS AND NON-UNIQUENESS OF WHITNEY DUALS
§.§ Whitney twins
The reader might have noticed at this point that the pointed and the weighted partition posets are closely related. From Figure <ref>, we can see that already the posets Π_3^∙ and Π_3^w are not isomorphic. This can be easily shown in general for Π_n^∙ and Π_n^w since all maximal intervals in Π_n^∙ are isomorphic but this is not the case in Π_n^w. In particular, for the latter poset the intervals [0̂,[n]^0] and [0̂,[n]^n-1] are isomorphic to Π_n which is not the case for any maximal interval in Π_n^∙ for n≥ 3.
The reader can verify from Figure <ref> that the Whitney numbers of the first and second kind are the same for Π_3^∙ and Π_3^w. In <cit.> the authors prove that this is true for any n≥ 1. Indeed there is a rank preserving bijection Π_n^w→Π_n^∙ induced by transforming, in a weighted partition, every weighted set A^w into the pointed set A^p_w where A={p_0< p_1<… <p_|A|-1}. The authors then use the fact that the two posets are uniform to conclude that their Whitney numbers of the first and second kind are the same. This gives an example of the next definition.
Two graded posets P and Q are said to be Whitney twins if their Whitney numbers of the first and second kind are the same, i.e., they satisfy
w_k(P)=w_k(Q) and W_k(P)=W_k(Q)
for all k.
Thus in our new terminology, the results in <cit.> can be recast into the following proposition.
For all n≥ 1, the posets
Π_n^∙ and Π_n^w are Whitney twins.
Note that if P_1 and P_2 are Whitney twins and Q_1 and Q_2 are Whitney duals of P_1 and P_2 respectively, then Q_1 and Q_2 are Whitney twins. Thus we also have the following immediate corollary from Proposition <ref> and Theorems <ref> and <ref>.
For all n≥ 1, the posets _n^w and _n^∙ are Whitney twins.
We should note that if P and Q are isomorphic, they are Whitney twins. Thus, at this point, it could be that _n^w and _n^∙ are Whitney twins merely because they are isomorphic. We will show in Theorem <ref> that this is only true for n≤ 3 and is not the case for n≥ 4.
§.§ Non-uniqueness of Whitney duals
As mentioned in Corollary <ref>, _n^w and _n^∙ are Whitney twins. Here we explain why they are not isomorphic in general. This in turn will show that a poset can have multiple (non-isomorphic) Whitney duals. We also argue that another poset _n already studied by Reiner <cit.> and Sagan <cit.> is a third non-isomorphic Whitney dual of Π_n^w and Π_n^∙.
For n≥4, _n^w and _n^∙ are not isomorphic. Consequently, Π_n^w and Π_n^∙ have multiple Whitney duals.
Consider the maximal interval of _4^w depicted in Figure <ref>. This interval occurs in _n^w for all n≥ 4 since adding isolated vertices to the forests of the interval does not change the interval's structure. We claim that there are no intervals in _n^∙ (for n≥ 4) that start at 0̂ and are isomorphic to the interval in Figure <ref>. Note that if we can verify this claim we will be done.
Suppose that such an interval in _n^∙ exists and let I be this interval. Note that the cover relation on _n^∙ only depends on the relative order of the leaf labels and not the actual leaf labels themselves. So I must be isomorphic to an interval starting at 0̂ in _4^∙. A simple check (see <cit.> for a complete argument) of the intervals of _4^∙ shows that no intervals starting at 0̂ are isomorphic to I, completing the proof.
Reiner <cit.> introduced a family of posets of rooted spanning forests _n and Sagan <cit.> computed the Whitney numbers of these posets. The poset _n is formed by rooted spanning forests, where a cover relation occurs when two rooted trees are merged at their roots, the new root being chosen from the two roots that have been merged (see Figure <ref> for an example; there the square (red) nodes represent the roots of the trees).
As mentioned in <cit.>, the Whitney numbers of Π_n^w and Π_n^∙ are switched as compared to those of _n, which implies that _n is also a Whitney dual to both posets. From Figures <ref>, <ref>, and <ref> it is already evident that _3 is not isomorphic to _3^w≅_3^∙. We show here that in fact _n is not isomorphic to _n^w or _n^∙ for n≥ 3.
For n≥ 3, _n^∙ and _n are not isomorphic.
Note first that _n is a uniform graded poset according to the definition in <cit.>. More specifically, if F∈_n is an element of rank ρ(F)=i, then the filter U(F) in _n is isomorphic to _n-i. Indeed, the rules of merging in the filter U(F) depend only on the roots of F, and any F∈_n of rank ρ(F)=i has n-i roots.
When n=3, the posets _3 and ^∙_3 are clearly non-isomorphic as can be appreciated from Figures <ref> and <ref>, so let us assume that n≥ 4.
Consider the pointed Lyndon forest F of Figure <ref>. Since the root of the nontrivial tree in F is a Lyndon node whose right subtree has minimal element 4, which is larger than 2 and 3, the filter U(F) in _n^∙ is isomorphic to ^∙_3. Now, if there is an isomorphism f:_n^∙→_n, this induces an isomorphism U(F)≅ U(f(F))≅_3 since the element f(F) has rank n-3, but this is a contradiction.
The proof of the following theorem follows the same idea as in Theorem <ref>.
For n≥ 3, _n^w and _n are not isomorphic.
We should note that the first two authors have found a CW-labeling (a more general version of an EW-labeling) of Π_n^w whose corresponding Whitney dual is _n. This will be further discussed in forthcoming work and can already be found in the arXiv version of <cit.>.
§ ACKNOWLEDGMENTS
We would like to thank Joan Bellier-Millès, Bérénice Delcroix-Oger, and Eric Hoffbeck, for the helpful conversation and explanation of the context of CL-labelings compatible with isomorphisms coming from their work in <cit.>.
|
http://arxiv.org/abs/2307.07259v1 | 20230714102031 | An $(\infty,n)$-categorical straightening-unstraightening construction | [
"Lyne Moser",
"Nima Rasekh",
"Martina Rovelli"
] | math.AT | [
"math.AT",
"math.CT",
"18N65, 55U35, 18N50, 18N45, 18N10"
] |
Fakultät für Mathematik, Universität Regensburg, 93040 Regensburg, Germany
[email protected]
Max Planck Institute for Mathematics, Bonn, Germany
[email protected]
Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, USA
[email protected]
2020 Mathematics Subject Classification: 18N65; 55U35; 18N50; 18N45; 18N10.
An (∞,n)-categorical straightening-unstraightening construction
Lyne Moser, Nima Rasekh, and Martina Rovelli
August 12, 2023
===============================================================
We provide an (∞,n)-categorical version of the straightening-unstraightening construction, asserting an equivalence between the (∞,n)-category of double (∞,n-1)-right fibrations over an (∞,n)-category and that of the (∞,n)-functors out of it valued in (∞,n-1)-categories. We realize this in the form of a Quillen equivalence between appropriate model structures: on the one hand, a model structure for double (∞,n-1)-right fibrations over a generic precategory object W in (∞,n-1)-categories and, on the other hand, a model structure for (∞,n)-functors from its homotopy coherent categorification ℭ W valued in (∞,n-1)-categories.
§ INTRODUCTION
§.§ The rise of fibrations
Categories have established themselves in a variety of mathematical settings, ranging from geometry and topology to algebra and representation theory, which has served as a motivation for developing a proper theory of categories and in particular the study of the category of sets. Indeed, the fact that every category has hom sets implies that every category embeds in its presheaf category and is hence equivalent to its category of representable functors. This result is known as the Yoneda lemma and the embedding as the Yoneda embedding.
Combining our understanding of the category of sets with the Yoneda embedding has far reaching implications for category theory and many important applications. For example, it enables us to reduce limits of the most complicated diagrams to equalizers and products. It also permits us to formally study geometrically motivated objects, such as schemes, via their category of sheaves, and in fact this can be seen as a key innovation of the Grothendieck school in algebraic geometry.
In recent decades categories have been generalized to a variety of higher categories with the goal of capturing the relevant structure. On the one side, categories form a 2-category, keeping track of categories, functors and natural transformations, which, for instance, permits us to effectively discuss adjunctions inside the 2-category of categories and is hence an effective way to capture purely categorical data. On the other side, topological spaces form an (∞,1)-category with structure given by spaces, continuous maps and weak homotopy equivalences, which permits us to make sense of homotopy invariant constructions internal to the (∞,1)-category and presents us with effective methods to capture purely homotopical data. However, not every mathematical structure exhibits only categorical or homotopical data, but rather combines these two. Examples include (∞,1)-categories themselves, derived stacks on schemes, but also n-manifolds studied from the perspective of the theory of bordisms. In all those cases, the objects naturally form an (∞,n)-category which combines the structure of an n-dimensional category with appropriately chosen homotopical data.
In a similar way to categories, these (∞,n)-categories have found a variety of applications, particularly in the cases n=1 and n=2. This provides a motivation for a higher categorical analogue of the Yoneda lemma and the study of presheaves, which, however, has proven to be far more challenging than initially anticipated. Indeed, as part of the Yoneda lemma we would like to associate to each object in our (∞,n)-category a representable functor. However, here functoriality is given by composition, which is not strict in all relevant models and examples, necessitating an alternative approach.
Fortunately, this issue already arises when working with presheaves valued in groupoids! As these played an important role in modern algebraic geometry, one major philosophy for dealing with such situations was already developed in the sixties: categorical fibrations. At that time Grothendieck defined what we now call Grothendieck fibrations and the Grothendieck construction, associating to each groupoid-valued functor a fibration, and showed that categories of stacks can be described as categories of such fibrations.
In the setting of (∞,1)-categories the Grothendieck construction served as a motivation for Lurie to develop the “straightening-unstraightening” adjunction, which establishes an equivalence between presheaves valued in (∞,1)-categories (resp. spaces) and Cartesian (resp. right) fibrations. This result has since been the basis for a wide range of (∞,1)-categorical results, such as the Yoneda embedding and symmetric monoidal (∞,1)-categories, among others. On the other hand, the situation regarding (∞,n)-categories for n>1 has remained far less clear.
§.§ Towards a general straightening for (∞,n)-categories
As explained above, the straightening construction has been a key input needed to advance the theory of (∞,1)-categories. As such we anticipate a similarly important role for a straightening construction for (∞,n)-categories, which would establish an equivalence between presheaves valued in the (∞,n)-category of (∞,n-1)-categories and an appropriately defined notion of fibration over a certain object W. As a result, several flavors of such a construction have already been studied in a variety of settings. In each one of those cases the authors have chosen one (or several similar) models of (∞,n)-categories as a basis for a straightening construction.
* Lurie has generalized his own construction for (∞,1)-categories to (∞,2)-categories using scaled simplicial sets <cit.>. This approach has been further studied by Gagna–Harpaz–Lanari <cit.> and by Abellán-García–Stern <cit.>.
* In <cit.> the second author studied a straightening construction focused on models of (∞,n)-categories developed by Bergner–Rezk <cit.>, which works for all n but restricts to fibrations where the basis W is the strict nerve of a strictly enriched category.
* In <cit.> Nuiten presents a straightening construction that holds for all n but only applies to fibrations where the basis W is an n-fold complete Segal space.
While these results have greatly contributed to a better understanding of fibrations, they also come with some shortcomings, which restrict their applicability. In order to illustrate possible challenges, let us focus here on the case where the basis W is a precategory object in the category of -spaces, which can be endowed with a model structure that makes it a model of (∞,n-1)-categories <cit.>. The category of such objects is defined as a full subcategory of simplicial objects in -spaces with discrete level 0 and has been shown to carry a model structure that makes it a model of (∞,n)-categories <cit.>. Now, the results in (1) would only apply to the case n=2, and even in that case one would have to translate via an intricate web of equivalences of various models of (∞,2)-categories <cit.>. The approaches (2) and (3) do apply to all n; however, the basis object W does not range over all objects in , but instead only over the ones coming from strict nerves of strictly enriched categories or n-Segal categories (i.e., (∞,n)-categories). While up to equivalence these include all objects of , we now mention a few concrete reasons why we would want a straightening construction for all objects, rather than just some objects which cover all equivalence classes.
* From a computational perspective, we would like to have the ability to define and study colimits of graphs without necessarily forming their corresponding category, as doing so might significantly complicate constructions. As a simple example, the graph given by a single loop is a finite graph with two non-degenerate simplices. This means the data of a fibration over such a graph has a very simple structure. On the other hand, its associated category, known as the free endomorphism category, is in fact infinite.
* As is in particular a presheaf category, every (∞,n)-category permits an explicit filtration via a chain of subsimplices, which provides us with an effective tool to prove a variety of results via induction. However, while the starting point and the end result of the filtration are (∞,n)-categories, the various steps in between will generally not be, and so any inductive argument that involves straightening will need the ability to straighten general objects, rather than just (∞,n)-categories. An example of the effectiveness of this method is the proof of <cit.>, which relies on the inductive argument given in <cit.>.
* As (co)limits in an (∞,n)-category involve an infinite amount of coherent interlocking data, we often would like to use strictification methods to compute the (co)limits more effectively. However, this requires using the free contractible homotopy coherent diagram, which internalizes all the required homotopy coherence. For a general diagram it is nearly impossible to compute; a straightening construction, however, would give us a direct way of computing the desired free homotopy coherent diagram by applying straightening to the identity fibration, which can then be used to explicitly calculate (co)limits. Indeed, this is precisely the method used by Riehl and Verity to give an explicit description of weighted (co)limits, which they use to identify (co)limits in quasi-categories <cit.>. Note here that, while they do not use the terminology of straightening explicitly, their description in <cit.> precisely coincides with the straightening construction found in <cit.>.
These are just some of the reasons motivating us to develop a straightening construction for all n and for all simplicial objects, rather than just simplicial objects that come from strict nerves or (∞,n)-categories.
§.§ A straightening for (∞,n)-categories
Our work remedies exactly this situation by constructing a straightening functor for all objects in the category , which establishes a general equivalence between functors and fibrations. Given an object W∈, on one side, we consider the category of -enriched functors W^→ equipped with the projective model structure [ W^,]_. Here, the -enriched category W denotes the homotopy coherent categorification of W, where is the left adjoint of the homotopy coherent nerve → <cit.>. On the other side, we consider the category of maps in the larger ambient category over W equipped with the model structure W for double (∞,n-1)-right fibrations <cit.>.
Our main result now consists of constructing an explicit enriched Quillen equivalence between these two model structures, which appears as <ref>.
Let W be an object in . There is a Quillen equivalence enriched over and natural in W
[](1) W;
[right of=1,xshift=3.9cm](2) [ W^,]_;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]_W ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]_W ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
As part of proving this result, we also construct a partial inverse, meaning a left adjoint from functors to fibrations, that also induces an enriched Quillen equivalence. This result appears as <ref>.
Let be a fibrant -enriched category. There is a Quillen equivalence enriched over
[](1) [^,]_;
[right of=1,xshift=3.8cm](2) ;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]_^ ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]∫_^ ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
Let us record various key aspects and implications of the main result.
* For n=0, the adjunction _W⊣_W is a Quillen equivalence between right fibrations over a general precategory W∈() and -enriched functors out of its homotopy coherent categorification. We hence get an alternative version of the original straightening construction due to Lurie <cit.>, which uses Segal categories as its model of (∞,1)-categories instead of quasi-categories. We should note that straightening in the context of Segal categories has only been studied over strict nerves of categories <cit.>. Moreover, the Quillen equivalence ∫_^⊣_^ provides a reverse equivalence for Kan-enriched categories, which has been studied via quasi-categories <cit.>, whereas the case of Segal categories was again restricted to fibrations over strict nerves <cit.>.
* In <cit.> Gepner–Haugseng define a theory of enriched ∞-categories and in particular show that (∞,n)-categories, in the sense of Segal n-categories, coincide with ∞-categories enriched in (∞,n-1)-categories <cit.>. Hence, as is a model of (∞,n-1)-categories and both model structures and the adjunction are enriched, it follows from <cit.> that the Quillen equivalence _W⊣_W establishes an equivalence of (∞,n)-categories.
* In <cit.> Riehl–Verity introduce ∞-cosmoi as a formal 2-categorical method to study various (∞,1)-categorical (and some (∞,n)-categorical) aspects. In particular they show in <cit.> that for an appropriate choice of model category (which by <cit.> applies to W), their subcategory of bifibrant objects gives us an ∞-cosmos, meaning we get an ∞-cosmos of double (∞,n-1)-right fibrations. Hence, our enriched Quillen equivalence induces a biequivalence of ∞-cosmoi from the ∞-cosmos of functors valued in (∞,n-1)-categories to the ∞-cosmos of double (∞,n-1)-right fibrations <cit.>.
This in particular means that all 2-categorical phenomena (such as adjunctions) will uniquely correspond to each other <cit.>. This result is new even for n=0, as the straightening construction by Lurie is not an enriched adjunction and hence cannot be cosmological <cit.>.
* While the Quillen equivalence ∫_^⊣_^ only holds over homotopy coherent nerves of strictly enriched categories, it adds significant computational power to our results. In fact we already used it as an inverse to prove that the adjunction _W⊣_W is a Quillen equivalence. Beyond this example, note that all representable objects in are themselves given as nerves of strictly enriched categories, meaning the two Quillen equivalences give us a detailed characterization of functors and fibrations in that situation.
* Note that having a Quillen equivalence _W⊣_W for a general object W∈ is the best result we expect to hold and there should not be such a Quillen equivalence for a general object W∈. Indeed, any non-trivial structure in W_0 would be lost when applying the homotopy coherent categorification , which in particular means that the fibrations would carry additional information beyond enriched functors out of W valued in . See <cit.> for several counter-examples for various results regarding fibrations over simplicial objects with non-trivial degree 0.
* As we explained above, the projective cofibrant replacement of the terminal functor is a key ingredient for the computation of limits of homotopy coherent diagrams. The adjunction _W⊣_W now gives us an effective method to compute it.
Indeed, as all objects in W are cofibrant, the derived counit map of the adjunction _W⊣_W, which is a projective cofibrant replacement, is given by the strict counit map. Applying this to the terminal object in [ W^,], we can conclude that a cofibrant replacement is given via _W(𝕀_W) (as the identity is the terminal object in W).
§.§ Applying necklace calculus
As part of our work in <cit.> we introduced a necklace calculus that was an integral part of our study of the homs of homotopy coherent categorifications. Given the computational strength of the method, we also anticipated further applications. In this work we present precisely such an application, applying necklace calculus to give concrete computations of the straightening functor.
Concretely, we want to show that the straightening construction preserves various classes of morphisms in order to establish it is left Quillen. While we can use formal ideas to reduce it to certain generating diagrams (<ref>), any progress beyond that point requires explicit computations, which are only possible via necklace calculus. Examples of the kind of results we are able to obtain using this method can be found in <ref> and <ref>.
§.§ Notations and conventions
We write:
* ∈^ for the representable at m≥ 0, and ∂ for the boundary of ,
* F[,]×∈ for the representable at (,[])∈,
* [m,,]∈ for the representable at ([m],,[])∈,
* [m,X]∈ for the product [m]× X, and ∂[m,X] for the product ∂[m]× X for m≥ 0 and X∈,
The categories and , are naturally included into , and we regard the above as objects of it without further specification. We refer to the objects of as -spaces.
§.§ Acknowledgements
The third author is grateful for support from the National Science Foundation under Grant No. DMS-2203915.
§ STRICT AND HOMOTOPY COHERENT NERVES
We first collect here the necessary background for the content of the paper. In <ref>, we recall the model structure for complete Segal -spaces modeling (∞,n-1)-categories. In <ref>, we recall two models of (∞,n)-categories given by the model structures for strictly enriched categories and weakly enriched categories in complete Segal -spaces. These are related by Quillen equivalences given by the strict and homotopy coherent categorification-nerve adjunctions, which we recollect in <ref>. Finally, in <ref>, we recall the notion of necklaces and how they are used to describe the hom -spaces of the homotopy coherent categorification.
§.§ Model structure for (infinity,n-1)-categories
We recall the model structure on for (∞,n-1)-categories given by Rezk's complete Segal -spaces <cit.>.
For n≥1, recall from <cit.> Joyal's cell category [n]. For n=1, =[0] is the terminal category, and for n>1, the category is the wreath product Δ≀[n-2] (see e.g. <cit.>).
We denote by the Kan-Quillen model structure. The model structure is defined recursively as a localization of the injective model structure on the category of -presheaves valued in with respect to a set S_ of maps in .
The set S_[0] is the empty set, and for n>1 the set S_ consists of the following monomorphisms:
* the Segal maps [1](θ_1)⨿_[0]…⨿_[0][1](θ_ℓ)↪[ℓ](θ_1,…,θ_ℓ), for all ℓ≥ 1 and θ_1,…,θ_ℓ∈[n-2],
* the completeness map [0]↪ N seen as a map in through the canonical inclusion ↪ induced by pre-composition along the projection →Δ given by [ℓ](θ_1,…,θ_ℓ)↦ [ℓ], where denotes the free-living isomorphism,
* the recursive maps [1](A)↪[1](B), where A↪ B∈[n-2] ranges over all maps in S_[n-2].
Note that by <cit.> the model structure obtained by localizing the injective model structure with respect to the set S_ is cartesian closed. This is enough to guarantee that the model structure is excellent in the sense of <cit.>.
§.§ Enriched model structures for (infinity,n)-categories
As the model structure is excellent, the category supports the left proper model structure from <cit.>, obtained as a special instance of <cit.>. We recall the features of this model structure needed in this paper. They rely on the notion of homotopy category, and we refer the reader to <cit.> for a definition in the case of .
In the model structure ,
* a -enriched category is fibrant if, for all objects a,b∈, the hom -space _(a,b) is fibrant in ,
* a -enriched functor F→ is a weak equivalence
if the induced functor between homotopy categories F→ is essentially surjective on objects, and for all objects a,b∈ the induced map
F_a,b_(a,b)→_(Fa,Fb)
is a weak equivalence in .
Many of the -enriched categories that feature in this paper have the following property, so we introduce a terminology that streamlines the exposition.
A -enriched category is directed if
* its set of objects is {0,1,…,m}, for some m≥ 0,
* for 0≤ j≤ i≤ m, the hom -space _(i,j) is given by
_(i,j)=∅ if j<i
[0] if j=i.
In particular, the composition maps in a directed -enriched category involving the above hom -spaces are uniquely determined. Moreover, the value of a -enriched functor from a directed -enriched category is also uniquely determined on these hom -spaces. Similarly, it is enough to verify the naturality conditions for -enriched natural transformations between such -enriched functors with respect to these hom -spaces.
§.§ Weakly enriched model structures for (infinity,n)-categories
Let denote the full subcategory of spanned by those ()-spaces W such that W_0 is discrete, i.e., such that W_0 is in the image of ↪. The canonical inclusion I→ admits a left adjoint L, so there is an adjunction
[](2) ;
[right of=2,xshift=2.7cm](3) ;
3.;
[->] ((3.west)-(0,5pt)) to node[below,la]I ((2.east)-(0,5pt));
[->] ((2.east)+(0,5pt)) to node[above,la]L ((3.west)+(0,5pt));
[la] at ((2.east)!0.5!(3.west)) ;
In <cit.>, Bergner–Rezk construct two model structures on the category : the “projective-like” and the “injective-like” model structures. Here, we denote these two model structures by and , respectively. As shown in <cit.>, these model structures are Quillen equivalent via the identity functor.
Let and denote the injective and projective model structures on the category of simplicial objects in .
An object W is fibrant in (resp. ) if W is fibrant in (resp. ) and the Segal map
W_m→ W_1×^(h)_W_0…×^(h)_W_0 W_1
is a weak equivalence in , for all m≥ 1. Here, the ordinary pullbacks are homotopy pullbacks because they are taken over the discrete object W_0 (see <cit.>).
§.§ Strict and homotopy coherent nerves
There is a canonical inclusion
N→,
which admits a left adjoint c→. We refer to N as the strict nerve and to c as the strict categorification. The following appears as <cit.>.
The adjunction c⊣ N is a Quillen equivalence
[](1) ;
[right of=1,xshift=3.2cm](2) ;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]N ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]c ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
In <cit.>, we construct a homotopy coherent version of this categorification–nerve adjunction, and we briefly recall it here.
First, the homotopy coherent categorification–nerve adjunction by Cordier–Porter <cit.>, as depicted below left, induces by post-composition an adjunction as below right.
[](1) ;
[right of=1,xshift=1.5cm](2) ;
2;
[->] ((2.west)-(0,5pt)) to node[below,la] ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la] ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
[right of=2,xshift=3cm](1) ()^;
[right of=1,xshift=3.3cm](2) ()^;
[->] ((2.west)-(0,5pt)) to node[below,la]_* ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]_* ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
The adjunction _*⊣_* in turn restricts to an adjunction between full subcategories
[](1) ;
[right of=1,xshift=2.7cm](2) ;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]_* ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]_* ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
Next, the diagonal functor δΔ→Δ×Δ induces an adjunction as below left, which induces by base-change an adjunction between categories of enriched categories as below right.
[](1) ^;
[right of=1,xshift=1.8cm](2) ^;
[->] ((2.west)-(0,5pt)) to node[below,la](δ_*)_* ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la] ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
[right of=2,xshift=3cm](1) ;
[right of=1,xshift=2.5cm](2) ;
[->] ((2.west)-(0,5pt)) to node[below,la]((δ_*)_*)_* ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]_* ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
The homotopy coherent categorification-nerve adjunction is then defined to be the following composite of adjunctions.
[](1) ;
[right of=1,xshift=2.7cm](2) ;
[right of=2,xshift=2.5cm](3) ;
at ((1.west)-(6pt,.3pt)) ;
at ((3.east)+(7pt,-1.5pt)) ;
[->] ((3.west)-(0,5pt)) to node[below,la]((δ_*)_*)_* ((2.east)-(0,5pt));
[->] ((2.east)+(0,5pt)) to node[above,la]_* ((3.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
[->] ((2.west)-(0,5pt)) to node[below,la]_* ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]_* ((2.west)+(0,5pt));
[la] at ((2.east)!0.5!(3.west)) ;
The following appears as <cit.>.
The adjunction ⊣ is a Quillen equivalence
[](1) ;
[right of=1,xshift=3.1cm](2) ;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la] ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la] ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
We also show in <cit.> the following result, which compares the strict and the homotopy coherent nerves. In particular, this shows that the homotopy coherent nerve is an injective fibrant replacement of the strict nerve.
Let be a fibrant -enriched category. The canonical map
φ N→
is a weak equivalence in .
§.§ Necklaces and homotopy coherent categorification
We recollect here some useful results from <cit.> studying the hom -spaces of the categorification . For this, we first recall the main terminology about necklaces, introduced in <cit.>.
A necklace is a simplicial set, i.e., an object in , given by a wedge of representables
T=[m_1]∨…∨[m_t]
obtained by gluing m_i∈[m_i] to 0∈[m_i+1] for all 1≤ i≤ t-1. By convention, if t>1, then m_i>0 for all 1≤ i≤ t. We say that [m_i] is a bead of T, and an initial or final vertex in some bead is a joint of T. We write B(T) for the set of beads of T.
We consider the necklace T to be a bi-pointed simplicial set (T,α,ω) where α is the initial vertex α=0∈[m_1]↪ T and ω is the final vertex ω=m_t∈[m_t]↪ T.
We write for the full subcategory of the category _*,* of bi-pointed simplicial sets spanned by the necklaces.
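As a small illustration of these definitions, consider the necklace
T=[2]∨[1],
obtained by gluing the vertex 2∈[2] to the vertex 0∈[1]. Its vertices are 0,1,2,3, its set of beads is B(T)={[2],[1]}, with the two beads spanned by the vertices {0,1,2} and {2,3} respectively, its joints are the vertices 0, 2, and 3, and it is bi-pointed by α=0 and ω=3.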
Given a simplicial set K and a,b∈ K_0, we denote by K_a,b the simplicial set bi-pointed at (a,b)[0]⨿[0]→ K. A necklace in K_a,b is a bi-pointed map T→ K_a,b with T a necklace. We denote by Kab_/K_a,b the category of necklaces T→ K_a,b in K from a to b.
A necklace T=[m_1]∨…∨[m_t]→ K_a,b is totally non-degenerate if, for all 1≤ i ≤ t, its restriction to the i-th bead
[m_i]↪[m_1]∨…∨[m_t]=T → K
is a non-degenerate m_i-simplex of K. We write Kab for the full subcategory of Kab spanned by the totally non-degenerate necklaces.
Using the language of necklaces, we obtain as <cit.> the following description of the hom -spaces of the categorification .
Let W be an object in and a,b∈ W_0. Then there is a natural isomorphism in
_ W(a,b)≅( _T∈W_-,⋆,⋆ab_ T(α,ω)),
where _T∈W_-,⋆,⋆ab_ T(α,ω)∈ is given at ∈ and ≥ 0 by the colimit _T∈W_-,,ab_ T(α,ω) in .
We get a description of the hom -spaces of the categorification in terms of totally non-degenerate necklaces, under the assumption of each level being a 1-ordered simplicial set; see <cit.>.
For m≥ 0, by <cit.>, the simplicial set is 1-ordered.
The following appears as <cit.>.
Let W be an object in and a,b∈ W_0. Suppose that, for all ∈ and ≥ 0, the simplicial set W_-,, is 1-ordered. Then there is a natural isomorphism in
_ W(a,b)≅( _T∈W_-,⋆,⋆ab_ T(α,ω))
where _T∈W_-,⋆,⋆ab_ T(α,ω)∈ is given at ∈ and ≥ 0 by the colimit _T∈W_-,,ab_ T(α,ω) in .
We now describe the category of totally non-degenerate necklaces of a 1-ordered simplicial set, and recall the bead functor construction from <cit.>.
Let K be a 1-ordered simplicial set and a,b∈ K_0. By <cit.>, a necklace f T→ K_a,b is totally non-degenerate if and only if the map f is a monomorphism in . Hence the objects of the category Kab are monomorphisms T↪ K_a,b with T a necklace, and, by the cancellation property of monomorphisms, the morphisms are precisely monomorphisms U↪ T of necklaces over K_a,b. In particular, this category is a poset.
Given a monomorphism g U↪ T between necklaces, there is an induced map B(g) B(U)→ B(T) between the sets of beads, which sends a bead [m_i] of U to the unique bead B(g)([m_i]) of T that contains [m_i].
Since all morphisms in Kab are monomorphisms, the above assignment induces a functor BKab→.
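As a minimal example of the bead map, let T=[2]∨[1] be the necklace obtained by gluing 2∈[2] to 0∈[1], and let U=[1]∨[1]∨[1] be its spine, i.e., the sub-necklace of T spanned by the edges 01, 12, and 23 (bi-pointed at the same initial and final vertices). The inclusion g U↪ T induces the map B(g) B(U)→ B(T) sending the beads of U spanned by 01 and by 12 to the bead [2] of T, and the bead of U spanned by 23 to the bead [1] of T.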
§ GROTHENDIECK CONSTRUCTION OVER THE HOMOTOPY COHERENT NERVE
In this section, we build a first Quillen equivalence between strictly enriched functors valued in (∞,n-1)-categories and right double (∞,n-1)-fibrations. In <ref>, we first recall the model structure for right double (∞,n-1)-fibrations, and in <ref> the projective model structure for enriched functors. In <ref>, we build a Grothendieck construction over the homotopy coherent nerve and prove that it gives a Quillen equivalence
[](1) [^,]_;
[right of=1,xshift=3.8cm](2) ;
2,;
[->] ((2.west)-(0,5pt)) to node[below,la]_^ ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]∫_^ ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
for every fibrant -enriched category . To do so, we compare with the Grothendieck construction over the strict nerve constructed by the second author in <cit.>.
§.§ Model structure for right double (infinity,n-1)-fibrations
Let us fix an object W of , seen as an object of . In this section, we first recall the model structure W for right double (∞,n-1)-fibrations introduced by the second-named author in <cit.>.
We denote by the injective model structure on the category of ()-presheaves valued in . Then recall that by <cit.> the slice category W admits a model structure created by the forgetful functor W→. We denote this model structure by W.
By <cit.> and <cit.>, a set of generating (trivial) cofibrations for the model structure W is given by the set containing the maps
∂[m,Y]⨿_∂[m,X][m,X]ι_m× f[m,Y]→ W
for all m≥ 0, where ι_m∂↪ denotes the boundary inclusion, f X↪ Y ranges over all maps in a set of generating (trivial) cofibrations in , and [m,Y]→ W over all such maps in .
The model structure W is defined to be the localization of the model structure W with respect to the following monomorphisms:
* the monomorphism
[0,,0][⟨ m⟩,,0][m,,0]→ W,
for all m≥ 0 and ∈, where ⟨ m⟩ is the map induced by the morphism [0]→ [m] of Δ that picks m∈ [m] and [m,,0]→ W ranges over all such maps in ,
* the monomorphism
[0,X][𝕀_[0],f][0,Y]→ W,
where f X↪ Y ranges over all maps in S_ (see <ref>), and [0,Y]→ W over all such maps in .
To describe the fibrant objects of this model structure, we use the following result to claim that certain homotopy pullback squares can be computed as strict pullbacks.
Let D be a discrete -space, i.e., in the image of ↪. Then every pullback square of the following form in is a homotopy pullback in .
[](1) P;
[right of=1,xshift=.5cm](2) Y;
[below of=1](3) X;
[below of=2](4) D;
[->] (1) to node[above,la]f' (2);
[->] (1) to node[left,la]g' (3);
[->] (2) to node[right,la]g (4);
[->] (3) to node[below,la]f (4);
1;
Given an element d∈ D, we denote by _d X, _d Y, and _dP the fibers in of the maps X→ D, Y→ D, and P→ D at d. Consider the following pullback square in
[](1) ∐_d∈ D_d X×_d Y;
[right of=1,xshift=2.8cm](2) ∐_d∈ D_d Y;
[below of=1](3) ∐_d∈ D_d X;
[below of=2](4) ∐_d∈ D{d};
[->] (1) to (2);
[->] (1) to (3);
[->] (2) to node[right,la]g (4);
[->] (3) to (4);
1;
where _d X→_d X denotes a fibrant replacement in .
Given that by construction the bottom map is a fibration and the model structure is right proper by <cit.>, this square is a homotopy pullback square by <cit.>.
Using the fact that D is discrete and that is cartesian closed, one can check that the square is weakly equivalent to the original square (<ref>). Hence, the latter is also a homotopy pullback, as desired.
By <cit.>, the fibrant objects in W admit the following description.
A map p P→ W in is a right double (∞,n-1)-fibration if it satisfies the following conditions:
* it is a fibration in W,
* for all m≥ 0, the following square is a (homotopy) pullback square in ,
[](1) P_m;
[right of=1,xshift=.6cm](2) P_0;
[below of=1](3) W_m;
[below of=2](4) W_0;
[->] (1) to node[above,la]p_m (2);
[->] (1) to node[left,la]⟨ m⟩^* (3);
[->] (2) to node[right,la]⟨ m⟩^* (4);
[->] (3) to node[below,la]p_0 (4);
1;
* for all a∈ W_0, the (homotopy) fiber _a P of p P_0→ W_0 at a is fibrant in .
The category W can be made into a tensored and cotensored -enriched category, with tensor described as follows. Given an object p P→ W in W and an object X∈, the tensor p⊗ X is the object of W obtained as the composite
p⊗ X P× X P W,
where π denotes the canonical projection. The following result can be deduced from <cit.>.
The model structure W is enriched over .
Every map f W→ Z in , seen as a map of , induces by post-composition a functor f_!W→Z and the latter admits as a right adjoint the functor f^*Z→W obtained by taking pullbacks along f. This adjunction has good homotopical properties.
Let f W→ Z be a map in . The adjunction f_!⊣ f^* is a Quillen pair
[](1) W;
[right of=1,xshift=3.9cm](2) Z;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]f^* ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]f_! ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
It is further a Quillen equivalence when f W→ Z is a weak equivalence in or in .
The fact that f_!⊣ f^* is a Quillen pair is <cit.>, and the fact that it is a Quillen equivalence when f is a weak equivalence in is the first bullet point of <cit.>. It remains to show that it is a Quillen equivalence when f W→ Z is a weak equivalence in .
For this, we first show that every object W∈ has weakly constant objects in the sense of <cit.>. Using <cit.>, this happens if and only if any fibrant replacement W→W in the model structure from <cit.> in the case where =, denoted here by ()^_CSeg, is homotopically constant, meaning that each map W_0,[0]→W_0, is a weak equivalence in for all ∈.
We take a fibrant replacement W→W in the model structure from <cit.> in the case where =, denoted here by ()^_C_0Seg. We show that W is homotopically constant and that it is a fibrant replacement of W in ()^_CSeg. By running the small object argument to W in order to obtain W, we see that each step preserves the property of being homotopically constant. Hence, as W_0,[0]=W_0, by assumption, it follows that W_0,[0]→W_0, is a weak equivalence in for all ∈. The fact that W is fibrant in ()^_CSeg follows from applying <cit.> using that, as W is homotopically constant, it is fibrant in the model structure from <cit.> in the case where =, denoted here by ()^_hcC_0Seg.
This shows that every object of has weakly constant objects. Hence we can now apply the third bullet point of <cit.> to deduce that f_!⊣ f^* is a Quillen equivalence when f W→ Z is a weak equivalence in . To see that this applies in our case, note that the model structure is cartesian closed so that the cartesian mapping space condition is automatic, and that the functor I→ ()^_hcC_0Seg preserves and reflects weak equivalences by <cit.> and a generalization of the argument in <cit.>.
§.§ Projective model structure for enriched functors
The category is cartesian closed and so it can be seen as a -enriched category. Let us fix a -enriched category . We denote by [^,] the category of -enriched functors from ^ to and -enriched natural transformations between them.
By <cit.>, the category [^,] of -enriched functors supports the projective model structure [^,]_.
The category [^,] can be made into a tensored and cotensored -enriched category, with tensor described as follows. Given a -enriched functor F^→ and an object X∈, the tensor F⊗ X is the -enriched functor obtained as the composite
F⊗ X^.
The following result can be deduced from <cit.>.
The model structure [^,]_ is enriched over .
By <cit.>, a set of generating (trivial) cofibrations for the model structure [^,]_ is given by the set containing the -enriched natural transformations
_(-,a)⊗ X→_(-,a)⊗ Y
for all objects a∈, where X↪ Y ranges over a set of generating (trivial) cofibrations of .
Every -enriched functor F→ in induces by pre-composition a functor F^* [^,]→ [^,]. The latter admits as a left adjoint the functor F_! [^,]→ [^,] obtained by taking the -enriched left Kan extension along F. Note that the adjunction F_!⊣ F^* is enriched over by <cit.>, i.e., the functor F_! commutes with tensors. This adjunction has good homotopical properties, as we now recall from <cit.>.
Let F→ be a -enriched functor. The adjunction F_!⊣ F^* is a Quillen pair
[](1) [^,]_;
[right of=1,xshift=3.6cm](2) [^,]_;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]F^* ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]F_! ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
It is further a Quillen equivalence when F→ is a weak equivalence in .
§.§ Grothendieck constructions
In <cit.>, the second author constructs a Grothendieck construction over the strict nerve. We here give an alternative – though equivalent by <cit.> – presentation of this construction, alongside a new variant over the homotopy coherent nerve.
Let us fix a -enriched category . There are Grothendieck constructions
∫_^N [^,]→N and ∫_^ [^,]→.
On objects, they send a -enriched functor F^→ to the maps in
π_F∫_^N F→ N and π_F∫_^ F→
given at level 0 by the canonical projection
(π_F)_0 (∫_^N F)_0=(∫_^ F)_0∐_a∈ Fa→=(N)_0=()_0
and at level m≥ 1 by the pullbacks in
[](1) (∫_^N F)_m;
[right of=1,xshift=1.5cm](2) (N)_m;
[below of=1](3) (∫_^N F)_0;
[below of=2](4) (N)_0;
[->] (1) to node[above,la](π_F)_m (2);
[->] (1) to node[left,la]⟨ m⟩^* (3);
[->] (2) to node[right,la]⟨ m⟩^* (4);
[->] (3) to node[below,la](π_F)_0 (4);
1;
[right of=2,xshift=2cm](1) (∫_^ F)_m;
[right of=1,xshift=1.5cm](2) ()_m;
[below of=1](3) (∫_^ F)_0;
[below of=2](4) ()_0;
4.;
[->] (1) to node[above,la](π_F)_m (2);
[->] (1) to node[left,la]⟨ m⟩^* (3);
[->] (2) to node[right,la]⟨ m⟩^* (4);
[->] (3) to node[below,la](π_F)_0 (4);
1;
We further provide the simplicial structure of ∫_^ F. The one for ∫_^N F can be constructed in a similar manner, or deduced from the isomorphism from <cit.>.
We first state the following lemma, which is simply an application of the universal property of pullbacks.
Let [ℓ]→ [m] be a morphism in Δ such that (ℓ)=m. Then there is a unique map ^* (∫_^ F)_m→ (∫_^ F)_ℓ in making the following diagram commute.
[](1) (∫_^ F)_ℓ;
[above left of=1,xshift=-1cm,yshift=.2cm](1') (∫_^ F)_m;
[right of=1,xshift=1.3cm](2) ()_ℓ;
[above left of=2,xshift=-1cm,yshift=.2cm](2') ()_m;
[below of=1](3) (∫_^ F)_0;
[below of=2](4) ()_0;
[dashed,->] (1') to node[right,la,yshift=5pt,pos=0.4]^* (1);
[->] (2') to node[right,la,yshift=5pt,pos=0.4]^* (2);
[->,bend right] (1') to node[left,la]⟨ m⟩^* (3);
[->] (1) to node[above,la](π_F)_ℓ (2);
[->] (1') to node[above,la](π_F)_m (2');
[->] (1) to node[left,la]⟨ℓ⟩^* (3);
[->] (2) to node[right,la]⟨ℓ⟩^* (4);
[->] (3) to node[below,la](π_F)_0 (4);
1;
For m≥ 1, given that the coface maps d^i [m-1]→ [m] for 0≤ i<m and the codegeneracy morphisms s^j [m]→ [m-1] for 0≤ j≤ m-1 in Δ satisfy the condition in <ref>, we get induced face maps d_i (∫_^ F)_m→ (∫_^ F)_m-1 for 0≤ i<m and degeneracy maps s_j (∫_^ F)_m-1→ (∫_^ F)_m for 0≤ j≤ m-1 compatible with those of through the projection π_F.
It remains to define the face maps d_m (∫_^ F)_m→ (∫_^ F)_m-1 for m≥ 1. If m=1, define
d_1 (∫_^ F)_1≅∐_a,b∈_(a,b)× Fb∐_a∈ Fa=(∫_^ F)_0
where _a,b^F_(a,b)× Fb→ Fa denotes the unique map corresponding under the adjunction (-)× Fb⊣_(Fb,-) to the induced map F_a,b_(a,b)→_(Fb,Fa) on hom -spaces. If m>1, define d_m to be the following composite
[](1) (∫_^ F)_m≅ ()_m×_()_0 (∫_^ F)_0;
[right of=1,xshift=7cm](2) ()_m-1×_()_0 ()_1×_()_0 (∫_^ F)_0;
[below of=2,yshift=.6cm](4) ()_m-1×_()_0 (∫_^ F)_1;
[below of=4](5) ()_m-1×_()_0 (∫_^ F)_0;
[below of=5,yshift=.5cm](6) (∫_^ F)_m-1;
[->] ((1.south)-(1.5cm,0)) to node[below,la]d_m ((6.west));
[->] (1) to node[above,la]ρ^*×_()_0 (∫_^ F)_0 (2);
at ((2)!0.5!(4)) 270≅;
at ((5)!0.5!(6)) 270≅;
[->] (4) to node[right,la]()_m-1×_()_0 d_1 (5);
where ρ[m-1]⨿_[0][1]→[m] is the map of which is induced by the morphisms d^m [m-1]→ [m] and ⟨ m-1,m⟩ [1]→ [m] of Δ. Note that the face map d_m as defined above is compatible with the corresponding face of through the projection π_F.
The simplicial identities then follow from those of and the unitality of F. Hence this defines a simplicial object ∫_^ F→, i.e., an object of .
Now, on morphisms, the functors ∫_^N and ∫_^ send a -enriched natural transformation η F→ G to the maps in
∫_^Nη∫_^N F→∫_^N G and ∫_^η∫_^ F→∫_^ G
given at level 0 by the map
(∫_^Nη)_0=(∫_^η)_0∐_a∈η_a∐_a∈ Fa→∐_a∈ Ga
and at level m≥ 1 by the unique map determined by the universal property of pullbacks. The compatibility of η with the simplicial structure of ∫_^ F and ∫_^ G follows from the -enriched naturality condition of η.
By <cit.>, the functor ∫_^N admits a right adjoint _^N and the following appears as <cit.>.
Let be a -enriched category. The adjunction ∫_^N⊣_^N is a Quillen equivalence
[](1) [^,]_;
[right of=1,xshift=3.8cm](2) N;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]_^N ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]∫_^N ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
We now want to prove that the Grothendieck construction ∫_^ is also a Quillen equivalence. First we have the following.
There is an adjunction
[](1) [^,];
[right of=1,xshift=2.6cm](2) ;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]_^ ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]∫_^ ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
Note that the categories involved are locally presentable and so, by the adjoint functor theorem, it is enough to show that the functor ∫_^ [^,]→ preserves colimits. This follows from the definition of ∫_^ and the fact that coproducts and pulling back along ()_m → ()_0 commute with colimits, as is locally cartesian closed.
We first prove that the above adjunction also forms a Quillen pair.
Let F^→ be a -enriched functor and consider an object X∈. There is a natural isomorphism in
∫_^ (F⊗ X)≅ (∫_^ F)⊗ X.
In particular, the adjunction ∫_^⊣_^ is enriched over .
This follows directly from the definition of ∫_^ and the fact that coproducts and pullbacks in commute with products.
Let be an -enriched category. The adjunction ∫_^⊣_^ is a Quillen pair enriched over
[](1) [^,]_;
[right of=1,xshift=3.8cm](2) ;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]_^ ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]∫_^ ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
By <ref>, a set of generating (trivial) cofibrations for [^,]_ is given by the set containing the -enriched natural transformations
_(-,a)⊗ X→_(-,a)⊗ Y
for all objects a∈, where X↪ Y ranges over a set of generating (trivial) cofibrations of . Using <ref>, such a -enriched natural transformation is sent by the functor ∫_^ [^,]_→ to the map in
(∫_^_(-,a))⊗ X↪ (∫_^_(-,a))⊗ Y,
which is a (trivial) cofibration in by <ref>. This shows that the functor ∫_^ is left Quillen.
The fact that the Quillen pair is enriched over follows from the -enrichment of both model structures given by <ref> together with <ref> showing that the functor ∫_^ is compatible with these enrichments.
We now show that, in the case where is fibrant in , the above Quillen pair is further a Quillen equivalence.
Let be a fibrant -enriched category. The following diagram of functors commutes up to weak equivalence,
[](1) [^,]_;
[below of=1,xshift=-2.8cm](2) N;
[below of=1,xshift=2.8cm](3) ;
[->] (1) to node[left,la,yshift=4pt,xshift=-5pt]∫_^N (2);
[->] (1) to node(a)[right,la,yshift=4pt,xshift=5pt]∫_^ (3);
[->] (2) to node[below,la]φ_! (3);
[la,above,xshift=-5pt][n][.6]2a≃;
where φ N→ is the canonical map from <ref>.
Let F^→ be a -enriched functor. We first construct a natural map η_Fφ_! ∫_^N F→∫_^ F in . By definition of the strict Grothendieck constructions, we have the following equality in
(∫_^N F)_0=(∫_^ F)_0,
and we set (η_F)_0 to be the identity. Given the following diagram of cospans in ,
[](1) (∫_^N F)_0;
[right of=1,xshift=1cm](2) (N)_0;
[right of=2,xshift=1cm](3) (N)_m;
[->] (1) to node[above,la](π_F)_0 (2);
[->] (3) to node[above,la]⟨ m⟩^* (2);
[below of=1](1') (∫_^ F)_0;
[below of=2](2') ()_0;
[below of=3](3') ()_m;
[->] (1') to node[below,la](π_F)_0 (2');
[->] (3') to node[below,la]⟨ m⟩^* (2');
[d] (1) to (1');
[d] (2) to (2');
[->] (3) to node[left,la]φ_m node[right,la]≃ (3');
there is a unique induced map (∫_^N F)_m→ (∫_^ F)_m between their pullbacks in , and we set (η_F)_m to be that map. Note that these maps (η_F)_m for m≥ 0 assemble into a map η_Fφ_!∫_^N F→∫_^ F in . Moreover, this assignment is natural in F.
Now, since all vertical maps in the diagram of cospans above are weak equivalences in by <ref> and the pullbacks of the cospans are in particular homotopy pullbacks by <ref>, then the induced map (η_F)_m (∫_^N F)_m→ (∫_^ F)_m is also a weak equivalence in . This shows that the map η_Fφ_!∫_^N F→∫_^ F in is a weak equivalence in . Hence, it is in particular a weak equivalence in its localization .
Let be a fibrant -enriched category. The Quillen pair ∫_^⊣_^ is a Quillen equivalence enriched over
[](1) [^,]_;
[right of=1,xshift=3.8cm](2) ;
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]_^ ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]∫_^ ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
We have a triangle of left Quillen functors from <ref>
[](1) [^,]_;
[below of=1,xshift=-2.8cm](2) N;
[below of=1,xshift=2.8cm](3) ;
[->] (1) to node[left,la,yshift=4pt,xshift=-5pt]∫_^N (2);
[->] (1) to node[right,la,yshift=4pt,xshift=5pt]∫_^ (3);
[->] (2) to node[below,la]φ_! (3);
[la,above,xshift=-5pt][n][.6]2a≃;
which commutes up to isomorphism at the level of homotopy categories by <ref>. Since the map φ N→ is a weak equivalence in by <ref>, the functor φ_! is a Quillen equivalence by <ref>. Moreover, by <ref> the functor ∫_^N is a Quillen equivalence. Hence, by 2-out-of-3, we conclude that the functor ∫_^ is also a Quillen equivalence, as desired.
§ STRAIGHTENING-UNSTRAIGHTENING: DEFINITION AND COMPUTATIONS
In this section, we introduce the straightening-unstraightening adjunction
[](1) W;
[right of=1,xshift=2.8cm](2) [ W^,];
2,;
[->] ((2.west)-(0,5pt)) to node[below,la]_W ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]_W ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
for every object W∈, and study its point-set properties. In <ref>, we construct the above adjunction, and in <ref>, we state some properties and computations of the straightening functor, whose proofs are deferred to the appendix. Then, in <ref>, we compute the straightening of maps of the form
∂[m,Y]⨿_∂[m,X][m,X]↪ L[m,Y] and [0,X][⟨ m⟩,𝕀_X] L[m,X]
for m≥ 0 and X↪ Y a monomorphism in .
§.§ Definition of straightening
In this section, we construct the straightening-unstraightening adjunction.
Given an object W in , we denote by ∫_ W the category of elements of W. Then there is a canonical equivalence of categories
W≃^(∫_ W)^.
Let us fix an object W∈, seen as an object of . Let m,≥ 0, ∈, and σ[m,,]→ W be a map in .
Since W is in , the map σ corresponds under the adjunction L⊣ I from <ref> to a map σ L[m,,]→ W in . We write W_σ for the following pushout in (and hence in ).
[](1) L[m,,];
[right of=1,xshift=1.8cm](2) W;
[below of=1](3) L[m+1,,];
[below of=2](4) W_σ;
[->] (1) to node[above,la]σ (2);
[->] (1) to node[left,la]L [d^m+1,,] (3);
[->] (2) to node[right,la]ι_σ (4);
[->] (3) to node[below,la]σ' (4);
4;
By applying the colimit-preserving functor (-)_0→ to the above pushout, we get that
(W_σ)_0≅ W_0⨿{⊤},
where ⊤ is the image under σ' of the object m+1∈ L[m+1,,]_0≅{0,1,…,m+1}.
We define the -enriched functor _W(σ) W^→ to be the following composite
_W(σ) W^ ( W_σ)^.
This construction extends to a functor
_W∫_ W→ [ W^, ],
and by left Kan extending along the Yoneda embedding and using <ref>, we obtain a straightening-unstraightening adjunction
[](1) W;
[right of=1,xshift=2.8cm](2) [ W^,];
2.;
[->] ((2.west)-(0,5pt)) to node[below,la]_W ((1.east)-(0,5pt));
[->] ((1.east)+(0,5pt)) to node[above,la]_W ((2.west)+(0,5pt));
[la] at ((1.east)!0.5!(2.west)) ;
§.§ Naturality, enrichment, and computations of straightening
We now state some properties and computations of the straightening functor, whose proofs are deferred to the Appendix, as they involve technical results related to necklace calculus.
We show in <ref> that the straightening functor is natural in the following sense.
Let f W→ Z be a map in . Then the following square of left adjoint functors commutes up to a natural isomorphism.
[](1) W;
[right of=1,xshift=2.8cm](2) [ W^,];
[below of=1](3) Z;
[below of=2](4) [ Z^,];
[->] (1) to node[above,la]_W (2);
[->] (1) to node[left,la]f_! (3);
[->] (2) to node[right,la]( f)_! (4);
[->] (3) to node[below,la]_Z (4);
[la,above][n][.5]23≅;
Moreover, the straightening functor is enriched, as we show in <ref>.
Let W be an object in and p P→ W be an object in W. Then there is a natural isomorphism in [ W^,]
_W (p⊗ X)≅_W(p)⊗ X.
In particular, the adjunction _W⊣_W is enriched over .
In <ref>, we compute the straightening of maps [,f][ℓ,X]→ L [m,Y] in with [ℓ]→ [m] an injective map in Δ and f X→ Y a map in between connected -spaces.
For this, we first recall the following result, computing L[m,Y] as a certain pushout. Here we write π_0→ for the left adjoint to the inclusion ↪.
By <cit.>, the object L[m,Y] of can be computed as the following pushout in .
[](1) ∐_m+1 Y;
[right of=1,xshift=1.6cm](2) [m,Y];
[below of=1](3) ∐_m+1π_0 Y;
[below of=2](4) L[m,Y];
[->] (1) to (2);
[->] (1) to (3);
[->] (2) to (4);
[->] (3) to (4);
4;
A map f A→[m,Y] in induces a map A→ L[m,Y] by post-composing with the canonical map [m,Y]→ L[m,Y] from the above pushout. For simplicity, we also write f for this map, namely f A→ L[m,Y].
Recall from <cit.> that a -space Y is said to be connected if the set π_0 Y is isomorphic to a point.
For ∈ and ≥ 0, the representable [,]=× in is a connected -space.
The computation of the straightening of [,f] is given by a generalization of the construction of the straightening at representables. We introduce the following notation; compare to <ref>.
We write for the following pushout in (and hence in ).
[](1) L [ℓ,X];
[right of=1,xshift=2cm](2) L [m,Y];
[below of=1](3) L [ℓ+1,X];
[below of=2](4) ;
[->] (1) to node[above,la]L [,f] (2);
[->] (1) to node[left,la]L [d^ℓ+1,X] (3);
[->] (2) to node[right,la]ι_f (4);
[->] (3) to (4);
4;
By <ref>, there is an isomorphism in
_0≅{0,1,…,m+1}.
Let [ℓ]→ [m] be an injective map in Δ, and f X→ Y be a map in between connected -spaces. The straightening functor _L[m,Y] sends the object [,f][ℓ,X]→ L[m,Y] in L[m,Y] to the -enriched functor
_L[m,Y] ([,f]) L[m,Y]^^.
We then compute in <ref> the straightening of a map [,f][ℓ,X]→ L [m,Y] in in the case where [ℓ]→ [m] is an injective map in Δ and f X↪ Y is a monomorphism in between connected -spaces.
We define a functor (-)+1Δ→Δ which sends an object [m] to the object [m+1], and a map [ℓ]→ [m] to the map
+1 [ℓ+1]→ [m+1]
with (+1)(i)=(i) for all 0≤ i≤ℓ and (+1)(ℓ+1)=m+1.
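For instance, applied to the coface map d^1 [1]→ [2] (which sends 0↦ 0 and 1↦ 2), this functor produces the map d^1+1 [2]→ [3] with 0↦ 0, 1↦ 2, and 2↦ 3; in other words, +1 extends the original map by sending the new top element ℓ+1 to the new top element m+1.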
Recall the bead functor B[m+1]im+1→ from <ref>.
Given a necklace T, let us denote by B_ω^T its last bead. Then, given a monomorphism g U↪ T between necklaces, since the last vertex of U is mapped to the last vertex of V, the last bead B_ω^U of U must be mapped into the last bead B_ω^T of T. In particular, the induced map B(g) B(U)→ B(T) between sets of beads from <ref> is such that B(g)(B_ω^U)=B_ω^T.
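As a toy illustration of this remark (a standard necklace computation, independent of the present setting), consider the spine inclusion
g U=[1]∨[1]∨[1]↪ T=[2]∨[1].
Here B(T)={[2],[1]} with last bead B_ω^T=[1], and B_ω^U is the last bead [1] of U; since g preserves the last vertex, it carries B_ω^U onto B_ω^T, so B(g)(B_ω^U)=B_ω^T, while the two remaining beads of U land in the first bead [2] of T.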
For 0≤ i≤ m, we define a functor
([m+1]im+1)^→.
It sends an object T↪[m+1]_i,m+1 in [m+1]im+1 to the -space
(∏_B(T)∖{B_ω^T} Y)× X if B_ω^T⊆(+1)
∅ else
and a map g U↪ T in [m+1]im+1 to the map in
(∏_B(T)∖{B^T_ω} Y)× X (∏_B(U)∖{B^U_ω} Y)× X if B^U_ω⊆ B^T_ω⊆(+1)
∅→ (∏_B(U)∖{B^U_ω} Y)× X if B^U_ω⊆(+1), B^T_ω⊈(+1)
∅→∅ else ,
where f̂ B(g)^* is the composite in
(∏_B(T)∖{B_ω^T} Y)× X → (∏_B(U)∖ B(g)^-1{B_ω^T} Y)× (∏_B(g)^-1{B_ω^T} X) → (∏_B(U)∖{B_ω^U} Y)× X,
where the first map is B(g)^* and the second map is 𝕀_∏_B(U)∖ B(g)^-1{B_ω^T} Y× (∏_B(g)^-1{B_ω^T}∖{B_ω^U} f)×𝕀_X.
For ∈ and ≥ 0, we write _, for the composite
_, ([m+1]im+1)^.
Recall the homotopy coherent nerve functor c^h→ from <ref>.
For 0≤ i≤ m, we define a functor
H^i_m+1[m+1]im+1→.
It sends an object T↪[m+1]_i,m+1 in [m+1]im+1 to the space _ T(α,ω) and a map g U↪ T in [m+1]im+1 to the map in
( g)_α,ω_ T(α,ω)→_ U(α,ω).
Let φ^≅ be one of the two canonical isomorphisms, and consider the inclusion
ι≅^↪^φ≅.
Then ^ is canonically enriched over and we consider the -enrichment on induced via φ^≅.
Let [ℓ]→ [m] be an injective map in Δ, and f X↪ Y be a monomorphism in between connected -spaces. For 0≤ i≤ m, there is a natural isomorphism in
_L[m,Y]([,f])(i)≅(^H^i_m+1_([m+1]im+1)^ι).
§.§ Straightening of boundary and pushout-product maps
We first compute the straightening of a map ∂[m,X]→ L[m,Y] in where m≥ 0 and f X↪ Y is a monomorphism in between connected -spaces.
Recall that the boundary ∂ can be computed as the following coequalizer in
∐_0≤ s<t≤ m[m-2]⇉∐_0≤ s≤ m[m-1]→∂.
We denote by ι_m∂↪ the boundary inclusion.
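As a sanity check (a standard simplicial identity, recorded only for concreteness), for m=2 the coequalizer reads
∐_0≤ s<t≤ 2[0]⇉∐_0≤ s≤ 2[1]→∂[2],
exhibiting the boundary of the 2-simplex as its three edges glued along their three common vertices.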
Since products in commute with colimits, we get that ∂[m,X] can be computed as the following coequalizer in
∐_0≤ s<t≤ m[m-2,X]⇉∐_0≤ s≤ m[m-1,X]→∂[m,X].
We define a functor
[f][0][∂,m] ([m+1]0m+1)^→.
It sends an object T↪[m+1]_0,m+1 in [m+1]0m+1 to the -space
(∏_B(T)∖{B_ω^T} Y)× X if T≠[m+1]
∅ if T=[m+1]
and a map g U↪ T in [m+1]0m+1 to the map in
(∏_B(T)∖{B^T_ω} Y)× X (∏_B(U)∖{B^U_ω} Y)× X if U,T≠[m+1]
∅→ (∏_B(U)∖{B^U_ω} Y)× X if U≠[m+1], T=[m+1]
For 0< i≤ m, there is a coequalizer in ()^([m+1]im+1)^
∐_0≤ s<t≤ m[f][i][d^sd^t]⇉∐_0≤ s≤ m[f][i][d^s]→[f][i][𝕀_[m]],
and for i=0, there is a coequalizer in ()^([m+1]im+1)^
∐_0≤ s<t≤ m[f][0][d^sd^t]⇉∐_0≤ s≤ m[f][0][d^s]→[f][0][∂,m].
To show that the above diagrams are coequalizers in ()^([m+1]im+1)^, it is enough to show that they are coequalizers in when evaluating the functors at each object T↪[m+1]_i,m+1 in [m+1]im+1. This is a straightforward computation.
We can now compute the straightening of the map [ι_m,f]∂[m,X]→ L[m,Y].
Let m≥ 0, and f X↪ Y be a monomorphism in between connected -spaces. For 0< i≤ m, there is a natural isomorphism in
_L[m,Y]([ι_m,f])(i)≅(^H^i_m+1_([m+1]im+1)^ι[f][i][𝕀_[m]]),
and for i=0, there is a natural isomorphism in
_L[m,Y]([ι_m,f])(0)≅(^H^0_m+1_([m+1]0m+1)^ι[f][0][∂,m]).
For 0<i≤ m, we write [f][i][∂,m][f][i][𝕀_[m]]. Since the straightening functor
_L[m,Y]L[m,Y]→ [ L[m,Y]^,]
commutes with colimits, for 0≤ i≤ m, we obtain using <ref> a coequalizer in
∐_0≤ s<t≤ m_L[m,Y]([d^sd^t,f])(i)⇉∐_0≤ s≤ m_L[m,Y]([d^s,f])(i)→_L[m,Y]([ι_m,f])(i)
Now, by applying the colimit-preserving functor
(^H^i_m+1_([m+1]im+1)^ι(-)) ()^([m+1]im+1)^→
to the coequalizer in ()^([m+1]im+1)^ from <ref>
∐_0≤ s<t≤ m[f][i][d^sd^t]⇉∐_0≤ s≤ m[f][i][d^s]→[f][i][∂,m],
we obtain a coequalizer in , which is isomorphic to the one above by
<ref>. Hence, we get an isomorphism in
_L[m,Y]([ι_m,f])(i)≅(^H^i_m+1_([m+1]0m+1)^ι[f][i][∂,m]).
From now on, we assume that f X↪ Y is a monomorphism in with only its target Y connected and we compute the straightening of the pushout-product map
ι_m× f∂[m,Y]⨿_∂[m,X][m,X]→ L[m,Y].
We define a functor
_m^0(f) ([m+1]0m+1)^→.
It sends an object T↪[m+1]_0,m+1 in [m+1]0m+1 to the -space
∏_B(T) Y if T≠[m+1]
X if T=[m+1]
and a map g U↪ T in [m+1]0m+1 to the map in
∏_B(T) Y∏_B(U) Y if U,T≠[m+1]
X Y∏_B(U) Y if U≠[m+1], T=[m+1].
Recall from <cit.> that every -space X can be decomposed as a coproduct X= ∐_j X_j, where each X_j is a connected -space.
Let X=∐_j X_j be a decomposition of X into its connected components and write f=∑_j f_j∐_j X_j↪ Y. There is a pushout in ()^([m+1]0m+1)^
with top ∐_j[f_j][0][∂,m]→[𝕀_Y][0][∂,m], left ∐_j[f_j][0][∂,m]→∐_j[f_j][0][𝕀_[m]], and pushout corner ^0_m(f).
To show that the above square is a pushout in ()^([m+1]0m+1)^, it is enough to show that it is a pushout in when evaluating at each object T↪[m+1]_i,m+1 in [m+1]im+1. This is a straightforward computation.
We can now compute the straightening of the pushout-product map
ι_m× f∂[m,Y]⨿_∂[m,X][m,X]→ L[m,Y].
Let m≥ 0, Y be a connected -space, and f X↪ Y be a monomorphism in . For 0< i≤ m, there is a natural isomorphism in
_L[m,Y](ι_m× f)(i)≅(^H^i_m+1_([m+1]im+1)^ι[𝕀_Y][i][𝕀_[m]]),
and for i=0, there is a natural isomorphism in
_L[m,Y](ι_m× f)(0)≅(^H^0_m+1_([m+1]0m+1)^ι^0_m(f)).
For 0<i≤ m, we write [f][i][∂,m][f][i][𝕀_[m]] and ^i_m(f)[𝕀_Y][i][𝕀_[m]]. Let X=∐_j X_j be a decomposition of X into its connected components and write f=∑_j f_j∐_j X_j↪ Y. Since the straightening functor _L[m,Y]L[m,Y]→ [ L[m,Y]^,] commutes with colimits, for 0≤ i≤ m, we have a pushout in
with top ∐_j_L[m,Y]([ι_m,f_j])(i)→_L[m,Y]([ι_m,𝕀_Y])(i), left ∐_j_L[m,Y]([ι_m,f_j])(i)→∐_j_L[m,Y]([𝕀_[m],f_j])(i), and pushout corner _L[m,Y](ι_m× f)(i).
Now, by applying the colimit-preserving functor
(^H^i_m+1_([m+1]im+1)^ι(-)) ()^([m+1]im+1)^→
to the pushout in ()^([m+1]im+1)^
(with top ∐_j[f_j][i][∂,m]→[𝕀_Y][i][∂,m], left ∐_j[f_j][i][∂,m]→∐_j[f_j][i][𝕀_[m]], and pushout corner ^i_m(f))
obtained from <ref> if i=0 and by direct inspection if 0<i≤ m, we obtain a pushout in , which is isomorphic to the one above by
<ref>. Hence, we get an isomorphism in
_L[m,Y](ι_m× f)(i)≅(^H^i_m+1_([m+1]im+1)^ι^i_m(f)).
Later, we will need to compare the straightening of the pushout-product map ι_m× f with that of the map [𝕀_[m],𝕀_Y][m,Y]→ L[m,Y]. The straightening of the latter admits the following description.
Let m≥ 0, and Y be a connected -space. For 0≤ i≤ m, there is a natural isomorphism in
_L[m,Y]([𝕀_[m],𝕀_Y])(i)≅(^H^i_m+1_([m+1]im+1)^ι[𝕀_Y][i][𝕀_[m]]).
This is obtained by taking =𝕀_[m] and f=𝕀_Y in <ref>.
By <ref>, for 0< i≤ m, there is an isomorphism in
_L[m,Y](ι_m× f)(i)≅_L[m,Y]([𝕀_[m],𝕀_Y])(i),
and so the two straightenings only differ at i=0. Then the component of the -enriched natural transformation in [ L[m,Y]^,]
_L[m,Y](ι_m×f)_L[m,Y](ι_m×f)→_L[m,Y]([𝕀_[m],𝕀_Y])
at 0<i≤ m is an isomorphism, and at 0 it is obtained by applying the functor
(^H^i_m+1_([m+1]im+1)^ι(-)) ()^([m+1]im+1)^→
to the canonical inclusion ^0_m(f)↪[𝕀_Y][0][𝕀_[m]].
§.§ Straightening of F[0,X]->LF[m,X]
We now compute the straightening of a map [⟨ m⟩, 𝕀_X][0,X]→ L[m,X] in for m≥ 0 and X a connected -space.
There is a natural isomorphism in
[𝕀_X][⟨ m⟩]≅ L[m,X]⨿_[0] L[1,X],
where the gluing happens along the vertices m∈ L[m,X] and 0∈ L[1,X].
By <ref>, since X is a connected -space, we get that L[0,X]=[0]. The result is then straightforward from the definition of (see <ref>) in the case where =⟨ m⟩ and f=𝕀_X.
Recall the suspension functor Σ→ sending an object X∈ to the directed -enriched category Σ X with object set {0,1} and hom -space _Σ X(0,1)=X. The following appears as <cit.>.
There is a natural isomorphism in
L[1,X]≅Σ X.
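For orientation (this is simply the usual directed suspension, spelled out): Σ X has the two objects 0 and 1, the endomorphism -spaces of 0 and 1 contain only the identities, and
_Σ X(0,1)=X, _Σ X(1,0)=∅.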
There is a natural isomorphism in
[𝕀_X][⟨ m⟩]≅ L[m,X]⨿_[0]Σ X,
where the gluing happens along the objects m∈ L[m,X] and 0∈Σ X.
This is obtained by applying the left adjoint functor → to the isomorphism from <ref> and using <ref>.
Let m≥ 0, and X be a connected -space. For 0≤ i≤ m, there is a natural isomorphism in
_L[m,X]([⟨ m⟩,𝕀_X])(i)≅_ L[m,X]⨿_[0]Σ X(i,m+1).
By <ref>, we have isomorphisms in
_L[m,X]([⟨ m⟩,𝕀_X])(i)≅_[𝕀_X][⟨ m⟩](i,m+1)≅_ L[m,X]⨿_[0]Σ X(i,m+1).
Later, we will need to compare the straightening of the map [⟨ m⟩,𝕀_X] with that of the map [𝕀_[m],𝕀_X][m,X]→ L[m,X]. The straightening of the latter admits the following description.
There is a natural isomorphism in
[𝕀_X][𝕀_[m]]≅ L[m+1,X].
This is straightforward from the definition of (<ref>) in the case where =𝕀_[m] and f=𝕀_X.
Let m≥ 0, and X be a connected -space. For 0≤ i≤ m, there is a natural isomorphism in
_L[m,Y]([𝕀_[m],𝕀_X])(i)≅_ L[m+1,X](i,m+1).
By <ref>, we have isomorphisms in
_L[m,X]([𝕀_[m],𝕀_X])(i)≅_[𝕀_X][𝕀_[m]](i,m+1)≅_ L[m+1,X](i,m+1).
By <ref>, the component of the -enriched natural transformation in [ L[m,X]^,]
_L[m,X]([⟨ m⟩, 𝕀_X])_L[m,X]([⟨ m⟩, 𝕀_X])→_L[m,X]([𝕀_[m],𝕀_X])
at 0≤ i≤ m is the map in
_ L[m,X]⨿_[0]Σ X(i,m+1)→_ L[m+1,X](i,m+1)
induced by the action on hom -spaces of the canonical -enriched functor
L[m,X]⨿_[0]Σ X→ L[m+1,X].
Finally, we observe that in the case where m=0, as LF[0,X]≅ F[0] and [0]≅ [0], we have that _[0] is a functor
_[0]≅[0]→ [[0]^,]≅.
Hence we get the following simplification of <ref> in the case where m=0.
Let X be a connected -space. There is a natural isomorphism in
_[0]([0,X])≅ X.
Let f X→ Y be a map in between connected -spaces. By <ref>, the induced map in
_[0]([0,f])_[0]([0,X])→_[0]([0,Y])
is simply given by the map f X→ Y itself.
§ STRAIGHTENING-UNSTRAIGHTENING: QUILLEN EQUIVALENCE
In this section, we prove that the straightening-unstraightening adjunction gives a Quillen equivalence
_W W→[ W^,]_ and _W [ W^,]_→ W,
for every object W of . In <ref>, we first give a useful reduction trick for proving that the straightening functor preserves (trivial) cofibrations. Then, in <ref>, we prove that the above adjunction is a Quillen pair before and after localizing. Finally, in <ref>, we prove that it is a Quillen equivalence by comparing it to the Quillen equivalence from <ref>.
§.§ Reduction trick
In order to prove that the straightening functor is left Quillen, we will make use of the following reduction trick several times.
Let f W→ Z be a map in and let g A→ B be a map in W. If the image of g under _WW→ [ W^,] is a (trivial) cofibration in [ W^,]_, then the image of g under the composite of functors _Z∘ f_!W→ [ Z^,] is a (trivial) cofibration in [ Z^,]_.
Suppose that g is a map in W such that _W(g) is a (trivial) cofibration in [ W^,]_. Since ( f)_! [ W^,]_→ [ Z^,]_ is left Quillen by <ref>, then -natural transformation ( f)_! _W(g) is a (trivial) cofibration in [ Z^,]_. By <ref>, we have a natural isomorphism ( f)_! _W(g)≅_Z(f_!(g)) and so _Z(f_!(g)) is a (trivial) cofibration in [ W^,]_, as desired.
As a consequence, if we want to show that a map g A→ B in W is sent by the functor _WW→ [ W^,] to a (trivial) cofibration in [ W^,]_, it is enough to show that g A→ B seen as a map in B (using the identity at B) is sent by _BB→ [ B^,] to a (trivial) cofibration in [ B^,]_. Indeed, this follows from the above proposition and the fact that p_!(g)=g, where p B→ W denotes the map defining B as an object in W.
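In symbols, this consequence merely restates the previous two sentences: with p B→ W the structure map of B as an object of W, one has
_W(g)=_W(p_!(g))≅( p)_!_B(g),
and ( p)_! preserves (trivial) cofibrations because it is left Quillen, so _W(g) is a (trivial) cofibration as soon as _B(g) is.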
§.§ Quillen pair before localizing
We first prove here that the straightening-unstraightening adjunction is a Quillen pair before localizing.
Let W be an object in . The adjunction _W⊣_W is a Quillen pair
_W W→[ W^,]_ and _W [ W^,]_→ W.
To prove the above result, by the reduction trick from <ref>, it is enough to show that a map in L[m,Y]
ι_m× f∂[m,Y]⨿_∂[m,X][m,X]→ L [m,Y]
with m≥ 0 and f X↪ Y a generating (trivial) cofibration in is sent by the functor _L[m,Y]L[m,Y]→ [ L[m,Y]^,] to a (trivial) cofibration in [ L[m,Y]^,]_.
Let m≥ 0, Y be a connected -space, and f X↪ Y be a (trivial) cofibration in . Then the induced map
^0_m(f)→[𝕀_Y][0][𝕀_[m]]
is a (trivial) cofibration in ()^([m+1]0m+1)^_.
By definition of the injective model structure, it is enough to show that, for each object T↪[m+1]_0,m+1 of [m+1]0m+1, the induced map
^0_m(f)(T)→[𝕀_Y][0][𝕀_[m]](T)
is a (trivial) cofibration in . However, recalling <ref>, a direct computation shows that
^0_m(f)(T)→[𝕀_Y][0][𝕀_[m]](T)= ∏_B(T) Y∏_B(T) Y if T≠[m+1]
X Y if T=[m+1].
Hence we get the result, as f is by assumption a (trivial) cofibration in .
For m≥ 0, the functor
(^H^0_m+1_([m+1]0m+1)^ι (-)) ()^([m+1]0m+1)^_→
is left Quillen.
This is obtained as a combination of <cit.>, noticing that the functor H^0_m+1 from <ref> coincides with the functor H_m+1 from <cit.>.
Let m≥ 0, Y be a connected -space, and f X↪ Y be a (trivial) cofibration in . Then the map
_L[m,Y](ι_m× f)_0_L[m,Y](ι_m× f)(0)→_L[m,Y]([𝕀_[m],𝕀_Y])(0)
is a (trivial) cofibration in .
By <ref>, the above map in can be computed as the map
(^H^0_m+1_([m+1]0m+1)^ι (^0_m(f)→[𝕀_Y][0][𝕀_[m]])).
Since the map ^0_m(f)→[𝕀_Y][0][𝕀_[m]] is a (trivial) cofibration in ()^([m+1]0m+1)^_ by <ref>, then by <ref> the above map is a (trivial) cofibration in .
Let be a directed -enriched category with object set {0,1,…,m}, and η F→ G be a -enriched natural transformation in [^,]. Suppose that:
* for 0<i≤ m, the map η_i F(i)→ G(i) is an isomorphism in ,
* the map η_0 F(0)→ G(0) is a (trivial) cofibration in .
Then η F→ G is a (trivial) cofibration in [^,]_.
We deal with the case where η_0 is a cofibration in ; the case of a trivial cofibration proceeds similarly.
We show that the -enriched natural transformation η F→ G has the left lifting property with respect to all trivial fibrations in [^,]_. Let χ P→ Q be a trivial fibration in [^,]_, i.e., the map χ_i P(i)→ Q(i) is a trivial fibration in , for all 0≤ i≤ m. Consider a commutative square in [^,]
χ∘β=γ∘η F→ Q,
with top β F→ P, left η F→ G, right χ P→ Q, and bottom γ G→ Q, in which we look for a diagonal δ G→ P; its component at 0 is the commutative square
χ_0∘β_0=γ_0∘η_0 F(0)→ Q(0)
in , in which we look for a diagonal δ_0 G(0)→ P(0).
We construct a -enriched natural transformation δ G→ P making the first square commute. For i=0, we set δ_0 to be a lift in its level-0 component square, which exists since η_0 is a cofibration and χ_0 is a trivial fibration by assumption, and, for 0<i≤ m, we set δ_i to be the composite
δ_i G(i) F(i) P(i).
We show that the δ_i's assemble into a -enriched natural transformation. Since η and β are -enriched natural transformations, the enriched naturality condition for δ clearly holds in the case where 0<i≤ j≤ m, and, for 0<i≤ m, it follows from the chain of equalities
δ_0∘ G_0,i = δ_0∘η_0∘ F_0,i∘(η_i^-1×𝕀) = β_0∘ F_0,i∘(η_i^-1×𝕀) = P_0,i∘(β_i×𝕀)∘(η_i^-1×𝕀) = P_0,i∘(δ_i×𝕀)
of maps G(i)×_(0,i)→ P(0), where the first equality uses the naturality of η, the second the identity δ_0∘η_0=β_0, the third the naturality of β, and the last the definition of δ_i.
So we have a -enriched natural transformation δ G→ P as desired.
Let m≥ 0, Y be a connected -space, and f X↪ Y be a (trivial) cofibration in . Then the induced -enriched natural transformation
_L[m,Y](ι_m× f)_L[m,Y](ι_m× f)→_L[m,Y]([𝕀_[m],𝕀_Y])
is a (trivial) cofibration in [ L[m,Y]^,]_.
For 0<i≤ m, by <ref>, the map
_L[m,Y](ι_m× f)_i_L[m,Y](ι_m× f)(i)→_L[m,Y]([𝕀_[m],𝕀_Y])(i)
is an isomorphism in , and by <ref> the map
_L[m,Y](ι_m× f)_0_L[m,Y](ι_m× f)(0)→_L[m,Y]([𝕀_[m],𝕀_Y])(0)
is a (trivial) cofibration in . Hence the result follows from <ref>.
Recall from <cit.> that a set of generating cofibrations in is given by the monomorphisms
Xf Y (∂↪)×(∂↪)
with ∈ and ≥ 0.
We are now ready to show that the straightening functor is left Quillen.
Let W be an object in . Then the functor
_WW→ [ W^,]_
preserves cofibrations.
By <ref>, it is enough to show that the functor _W sends the generating cofibrations
∂[m,Y]⨿_∂[m,X][m,X][m,Y] → W
in W with m≥ 0, f X↪ Y a monomorphism in of the form
Xf Y (∂↪)×(∂↪),
and [m,Y]→ W a map in , to a cofibration in [ W^,]_. By <ref>, it is enough to show that, for m≥ 1 and f X↪ Y as above, the functor _L[m,Y]L[m,Y]→ [ L[m,Y]^,] sends the map
∂[m,Y]⨿_∂[m,X][m,X][m,Y] L[m,Y]
to a cofibration in [ L[m,Y]^,]_.
Since by <ref> the -space Y=×=[,] is connected, by <ref> the induced -enriched natural transformation
_L[m,Y](ι_m× f)_L[m,Y](ι_m× f)→_L[m,Y]([𝕀_[m],𝕀_Y])
is a cofibration in [ L[m,Y]^,]_, as desired.
Recall from <cit.> that a set of generating trivial cofibrations in is given by the monomorphisms
Xf Y (∂↪)×(Λ^t[]↪)
with ∈, ≥ 1, and 0≤ t≤.
Note that, since is a localization of , a trivial cofibration f X↪ Y in is also a trivial cofibration in .
Let W be an object in . Then the functor
_WW→ [ W^,]_
preserves trivial cofibrations.
By <ref>, it is enough to show that the functor _W sends the generating trivial cofibrations
∂[m,Y]⨿_∂[m,X][m,X][m,Y] → W
in W with m≥ 0, f X↪ Y a monomorphism in of the form
Xf Y (∂↪)×(Λ^t[]↪),
and [m,Y]→ W a map in , to a trivial cofibration in [ W^,]_. By <ref>, it is enough to show that, for m≥ 1 and f X↪ Y as above, the functor _L[m,Y]L[m,Y]→ [ L[m,Y]^,] sends the map
∂[m,Y]⨿_∂[m,X][m,X][m,Y] L[m,Y]
to a trivial cofibration in [ L[m,Y]^,]_.
Since by <ref> the -space Y=×=[,] is connected and by <ref> the map f is a trivial cofibration in , by <ref> the induced -enriched natural transformation
_L[m,Y](ι_m× f)_L[m,Y](ι_m× f)→_L[m,Y]([𝕀_[m],𝕀_Y])
is a trivial cofibration in [ L[m,Y]^,]_, as desired.
The functor
_WW→ [ W^,]_
preserves (trivial) cofibrations by <ref>, and so it is left Quillen.
§.§ Quillen pair after localizing
We now prove that the desired straightening-unstraightening adjunction is a Quillen pair with respect to the desired localization.
Let W be an object in . The adjunction _W⊣_W is a Quillen pair enriched over
_W W→[ W^,]_ and _W [ W^,]_→ W.
To prove the above result, by <cit.>, <ref>, and the reduction trick from <ref>, it is enough to show that a map in L[m,X]
[⟨ m⟩,𝕀_X][0,X]→[m,X]
for m≥ 1 and X a connected -space is sent by the functor
_L[m,X]L[m,X]→ [ L[m,X]^,]
to a weak equivalence in [ L[m,X]^,]_; and that a map in
[𝕀_[0],f][0,X]→[0,Y]
for f X↪ Y a trivial cofibration in between connected -spaces is sent by the functor _[0]→ to a weak equivalence in .
We first deal with the map [⟨ m⟩,𝕀_X]. Recall the functor Σ_m→ which sends an object X∈ to the pushout Σ_mXΣ X⨿_[0]…⨿_[0]Σ X of m copies of Σ X along consecutive sources and targets. Then by <cit.> combined with <cit.> we have the following result.
Let m≥ 1, and X be a connected -space. The -enriched functor
Σ_m X→ L[m,X]
is a weak equivalence in .
Let m≥ 1, and X be a connected -space. The -enriched functor
Σ_m+1 X→ L[m,X]⨿_[0]Σ X
is a weak equivalence in .
Consider the following diagram of spans in
from the span Σ_m X ← [0]↪Σ X to the span L[m,X]← [0]↪Σ X: in each span the left leg picks out the object m and the right leg is the map 0 [0]→Σ X, and the vertical comparison maps are Σ_m X→ L[m,X] together with the identities of [0] and Σ X,
where 0 [0]→Σ X is a cofibration in and Σ_m X→ L[m,X] is a weak equivalence in by <ref>. In particular, the pushout in of each span is a homotopy pushout in and every vertical -enriched functor is a weak equivalence in . Hence the induced -enriched functor between pushouts
Σ_m+1 X→ L[m,X]⨿_[0]Σ X
is also a weak equivalence in , as desired.
Let m≥ 1, and X be a connected -space. The -enriched functor
L[m,X]⨿_[0]Σ X→ L[m+1,X]
is a weak equivalence in .
We have a commutative triangle in
with vertices Σ_m+1 X, L[m,X]⨿_[0]Σ X, and L[m+1,X], comparing the two maps out of Σ_m+1 X via the canonical -enriched functor L[m,X]⨿_[0]Σ X→ L[m+1,X],
where Σ_m+1 X→ L[m+1,X] and Σ_m+1 X→ L[m,X]⨿_[0]Σ X are weak equivalences in by <ref>. Hence, by 2-out-of-3, the -enriched functor
L[m,X]⨿_[0]Σ X→ L[m+1,X]
is a weak equivalence in , as desired.
Let m≥ 1, and X be a connected -space. The induced -enriched natural transformation
_L[m,X]([⟨ m⟩,𝕀_X])_L[m,X]([⟨ m⟩,𝕀_X])→_L[m,X]([𝕀_[m],𝕀_X])
is a trivial cofibration in [ L[m,X]^,]_.
By <ref>, we know that the above -enriched natural transformation is a cofibration in [ L[m,X]^,]_. Hence it remains to show that it is also a weak equivalence in [ L[m,X]^,]_.
For 0≤ i≤ m, by <ref> the map
_L[m,X]([⟨ m⟩,𝕀_X])_i_L[m,X]([⟨ m⟩,𝕀_X])(i)→_L[m,X]([𝕀_[m],𝕀_X])(i)
is given by the map in
_ L[m,X]⨿_[0]Σ X(i,m+1)→_ L[m+1,X](i,m+1)
induced by the weak equivalence L[m,X]⨿_[0]Σ X→ L[m+1,X] in from <ref>. Hence, by definition of the weak equivalences in , we get that (<ref>) is a weak equivalence in . This shows that
_L[m,X]([⟨ m⟩,𝕀_X])_L[m,X]([⟨ m⟩,𝕀_X])→_L[m,X]([𝕀_[m],𝕀_X])
is a weak equivalence in [ L[m,X]^,]_, as desired.
Let W be an object in . Then the straightening functor _WW→ [ W^,] sends a map
[0,,0][m,,0]→ W
with m≥ 1, ∈, and [m,,0]→ W a map in , to a trivial cofibration in [ W^,]_.
Note that, by <ref>, it is enough to show that the straightening functor _L[m,,0]L[m,,0]→ [ L[m,,0]^,] sends the map
[0,,0][m,,0][m,,0]
to a trivial cofibration in [ L[m,,0]^,]_. Since by <ref> the representable F[,0]= is connected, by <ref> the induced -enriched natural transformation
_L[m,,0]([⟨ m⟩,𝕀_[,0]])_L[m,,0]([⟨ m⟩,𝕀_[,0]])→_L[m,,0]([𝕀_[m],𝕀_[,0]])
is a trivial cofibration in [ L[m,,0]^,]_, as desired.
We now deal with the map [𝕀_[0],f].
Let f X↪ Y be a trivial cofibration in between connected -spaces. Then the induced map
_[0]([𝕀_[0],f])_[0]([0,X])→_[0]([0,Y])
is a trivial cofibration in .
This is straightforward from the computation in <ref>.
Note that all maps f X→ Y in S_ are such that X and Y are connected -spaces. Indeed, this can be shown by induction on n≥ 1.
Let W be an object in . Then the straightening functor _WW→ [ W^,] sends a map
[0,X][0,Y]→ W
with f X↪ Y a map in S_, and [0,Y]→ W a map in , to a trivial cofibration in [ W^,]_.
Note that, by <ref>, it is enough to show that the straightening functor _[0]→ sends the map
[𝕀_[0],f][0,X]→[0,Y]
to a trivial cofibration in . Since by <ref> the -spaces X and Y are connected, by <ref> the induced map
_[0]([𝕀_[0],f])_[0]([0,X])→_[0]([0,X])
is a trivial cofibration in , as desired.
As the functor _WW→ [ W^,]_ is left Quillen by <ref>, to show that the functor
_WW→ [ W^,]_
from the localization is also left Quillen, it is enough by <cit.> to show that it sends the maps from <ref> with respect to which we localize the model structure W to obtain the model structure W to weak equivalence in [ W^,]_. But this follows from <ref>.
The fact that the Quillen pair is enriched follows from the -enrichment of both model structures given by <ref> together with <ref> showing that the functor _W is compatible with these enrichments.
§.§ Quillen equivalence
Finally, we prove that the straightening-unstraightening Quillen pair is further a Quillen equivalence.
We first prove that the functor _→ [()^,]_ is a Quillen equivalence in the case where is a fibrant -enriched category, by showing that the following square of left Quillen functors
∫^_ [^,]_→, _ →[()^,]_, (ε_)_! [()^,]_→[^,]_, and 𝕀 [^,]_→[^,]_,
commutes up to a natural isomorphism at the level of homotopy categories, where the -enriched functor ε_→ is the component at of the (derived) counit of the adjunction ⊣. Note that we already know that every functor in the above square except for _ is a Quillen equivalence, as we will also recall later.
Let us fix a -enriched category and an object c∈. The latter can also be regarded as an object c[0]→ in .
There is an isomorphism in [()^, ]
_(c)≅_(-,c).
By applying the colimit-preserving functor → to the pushout of <ref> in the case where W= and σ=c[0]→ and using <ref>, we obtain the following pushout in .
Its top map is c [0]→, its left map is 0 [0]→Σ([0]), its right map is (ι_c), and its pushout corner is (_c).
Then, by definition, we get that _(c) is the composite
_(c)()^(_c)^.
Given an object a∈, a direct computation shows that there is an isomorphism in
_(_c)(a,⊤)≅_(a,c),
as the pushout (_c) is obtained from by adding a unique object ⊤ and a unique morphism c→⊤ together with free composites with it. Hence this gives isomorphisms in [()^, ]
_(c)≅_(_c)(-,⊤)∘(ι_c)≅_(-,c).
There is an isomorphism in [^, ]
(ε_)_!_(c)≅_(-,c).
This follows from <ref> and the fact that the left Kan extension of the representable _(-,c) along ε_ is the representable _(-,c) by <cit.>, as the -enriched functor ε_→ is the identity on objects.
As a consequence, the map 𝕀_c[0]→∫_^_(-,c) in induces a -enriched natural transformation in [^,]
(ε_)_!_(𝕀_c)_(-,c)≅ (ε_)_!_(c)→ (ε_)_!_∫_^_(-,c).
However, this construction is not natural in c∈, and so in <ref>, we construct a retraction of the above, which happens to be natural in c∈. Namely, we prove the following.
There is a -enriched natural transformation in [^,]
φ_c (ε_)_!_∫_^_(-,c)→_(-,c)
which is natural in c∈, and is a retraction of the -enriched natural transformation (ε_)_!_(𝕀_c)_(-,c)≅ (ε_)_!_(c)→ (ε_)_!_∫_^_(-,c).
We now prove that the comparison map is a weak equivalence, hence inducing an isomorphism at the level of homotopy categories.
The map
𝕀_c[0]→∫_^_(-,c)
is a weak equivalence in .
By <cit.>, the map 𝕀_c[0]→∫_^N_(-,c) is a weak equivalence in N. Hence, by applying the left Quillen functor φ_!N→ induced by the canonical map φ N→, we get a weak equivalence in
𝕀_cφ_![0]→φ_!∫_^N_(-,c).
Now, by <ref>, we have a weak equivalence in
φ_!∫_^N_(-,c)→∫_^_(-,c)
and hence we obtain that the composite
𝕀_c[0]φ_!∫_^N_(-,c)→∫_^_(-,c)
is also a weak equivalence in , as desired.
The -enriched natural transformations
(ε_)_!_(𝕀_c)_(-,c)≅ (ε_)_!_(c)→ (ε_)_!_∫_^_(-,c)
and
φ_c (ε_)_!_∫_^_(-,c)→_(-,c)
are weak equivalences in [^,]_.
Since (ε_)_!_→ [^,]_ preserves weak equivalences by <ref>, the fact that (ε_)_!_(𝕀_c) is a weak equivalence in [^,]_ follows directly from <ref>. Then by 2-out-of-3 we get that φ_c is also a weak equivalence in [^,]_, as it is a retraction of (ε_)_!_(𝕀_c) by <ref>.
Suppose that is a combinatorial model category and that is a set of generating cofibrations between cofibrant objects. Write _0={A,B| A→ B∈}⊆ for the set of sources and targets of the generating cofibrations. Recall that every cofibrant object in can be obtained as a retract of transfinite compositions of pushouts along generating cofibrations in . Since pushouts of cofibrant objects along cofibrations are homotopy pushouts and transfinite compositions of cofibrations between cofibrant objects are homotopy colimits by <cit.>, this tells us that every cofibrant object is a retract of a homotopy colimit of objects in _0.
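Schematically (a standard cell-complex argument; the indexing below is purely illustrative), a cofibrant object X is a retract of the colimit of a transfinite sequence
∅=X_0→ X_1→⋯→ X_β→⋯,
in which each map X_β→ X_β+1 is a pushout of a coproduct of maps in ; under the stated hypotheses these pushouts are homotopy pushouts and the transfinite composition is a homotopy colimit, which is the shape of argument behind the statement above.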
We are now ready to show the desired result.
The following diagram of left Quillen functors commutes up to a natural weak equivalence.
Its sides are ∫^_ [^,]_→, _ →[()^,]_, (ε_)_! [()^,]_→[^,]_, and the identity of [^,]_; its commutativity amounts to a natural weak equivalence (ε_)_!∘_∘∫^_≃𝕀.
First note that the functors in the diagram are all left Quillen by <ref>. Hence, it is enough to show that there is a weak equivalence in [^,]_
(ε_)_!_∫_^ F→ F
at any cofibrant object F^→ in [^,]_ that is natural in F. However, by combining <ref>, every cofibrant object is a retract of a homotopy colimit of objects of the form _(-,c)⊗ X, with c∈ an object and X∈, and so it is enough to show that, for every object c∈ and every object X∈, there is a weak equivalence in [^,]_
φ_c,X (ε_)_! _ (∫^__(-,c)⊗ X)→_(-,c)⊗ X
that is natural in c and X. As all functors involved preserve tensors by <ref>, we define φ_c,X to be
(ε_)_! _ (∫^__(-,c)⊗ X)≅ (ε_)_! _ (∫^__(-,c))⊗ X_(-,c)⊗ X.
Then φ_c,X is clearly natural in X∈ and it is natural in c∈ by <ref>. Moreover, as the model structure [^,]_ is enriched over by <ref> and φ_c is a weak equivalence in [^,]_ by <ref>, then so is φ_c,X=φ_c⊗ X, which concludes the proof.
From the commutativity of the above square at the level of homotopy categories, we can deduce the following.
Let be a fibrant -enriched category. Then the Quillen pair _⊣_ is a Quillen equivalence enriched over
_ →[()^,]_ and _ [()^,]_→ .
We have a triangle of left Quillen functors
∫^_ [^,]_→, _ →[()^,]_, (ε_)_! [()^,]_→[^,]_, and 𝕀 [^,]_→[^,]_,
which commutes up to isomorphism at the level of homotopy categories by <ref>. Since is fibrant in , the -enriched functor ε_→ is a weak equivalence in as it is the component of the derived counit of the Quillen equivalence from <ref>. Hence the functor (ε_)_! is a Quillen equivalence by <ref>. Moreover, by <ref> the functor ∫_^ is a Quillen equivalence. Hence, by 2-out-of-3, we conclude that the functor _ is also a Quillen equivalence, as desired.
We are now ready to deduce that, for any object W∈, the straightening functor _W [ W^,]_→W is a Quillen equivalence. For this, we use the following change of base lemma.
Let f W→ Z be a weak equivalence in . The Quillen pair _W⊣_W is a Quillen equivalence
_W W→[ W^,]_ and _W [ W^,]_→ W
if and only if the Quillen pair _Z⊣_Z is a Quillen equivalence.
_Z Z→[ Z^,]_ and _Z [ Z^,]_→ Z
First, note that, since → preserves weak equivalences by <ref>, the -enriched functor f W→ Z is a weak equivalence in . Then, by <ref>, we have a commutative square of left Quillen functors
( f)_!∘_W ≅ _Z∘ f_! W→[ Z^,],
where f_! and ( f)_! are Quillen equivalences by <ref>. Hence, by 2-out-of-3, we conclude that the functor _W is a Quillen equivalence if and only if the functor _Z is a Quillen equivalence.
Let W be an object in . The Quillen pair _W⊣_W is a Quillen equivalence enriched over
_W W→[ W^,]_ and _W [ W^,]_→ W.
The component of the derived unit of the Quillen equivalence from <ref> at W gives a weak equivalence W→ W in , where W→ W is a fibrant replacement in . Then, by <ref> the functor _ W is a Quillen equivalence, and so by <ref> we get that the functor _W is also a Quillen equivalence.
§ STRAIGHTENING-UNSTRAIGHTENING: TECHNICAL RESULTS
In this section, we prove the technical results stated in <ref>. In <ref>, we prove <ref>; in <ref>, <ref>; in <ref>, <ref>; and, finally, in <ref>, <ref>.
§.§ Naturality of straightening
We aim to show <ref> which states that the straightening functor is natural in W. For this, we first need the following result. Recall the notion of a cofinal functor from <cit.>.
Consider a commutative square in , with horizontal functors F and G and vertical functors I and J satisfying J∘ F=G∘ I, together with the induced square of presheaf categories with functors F_!, G_!, I^*, and J^* and its canonical natural transformation F_! I^*⇒ J^* G_!.
Suppose that, for every object b∈, the induced functor between comma categories
J↓ I b↓ F→ Jb↓ G, (a∈, b Fa∈)↦ (Ia∈,Jb JFa=GIa∈)
is cofinal. Then the canonical natural transformation in the induced square of presheaf categories is a natural isomorphism F_! I^*≅ J^* G_!, i.e., the Beck-Chevalley condition is satisfied.
Let H^→ be a functor and b∈ be an object. By the pointwise formula for left Kan extension, we have natural isomorphisms in
J^* G_! H(b)=G_! H(Jb)≅((Jb↓ G)^→^),
F_! I^* H(b) =F_! (H∘ I^)(b) ≅((b↓ F)^→^^)
≅((b↓ F)^(Jb↓ G)^→^).
Hence the result follows from the fact that the functor (J↓ I)^ (b↓ F)^→ (Jb↓ G)^ is final as its opposite J↓ I b↓ F→ Jb↓ G is cofinal by assumption.
Recall that is a full subcategory of ^. Hence a -enriched category can be thought of as a functor ×→, and a -enriched functor F→ as a natural transformation between such functors.
Consider a commutative square in , with horizontal -enriched functors F and G and vertical -enriched functors I and J satisfying J∘ F=G∘ I, together with the induced square of -enriched functor categories with functors F_!, G_!, I^*, and J^* and its canonical natural transformation F_! I^*⇒ J^* G_!.
Suppose that, for every object b∈ and every ∈, ≥ 0, the induced functor between comma categories
J_,↓ I_, b↓ F_,→ Jb↓ G_,
is cofinal. Then the canonical natural transformation in the induced square of functor categories is a natural isomorphism F_! I^*≅ J^* G_!, i.e., the Beck-Chevalley condition is satisfied.
Let H^→ be a -enriched functor and b∈. By using the pointwise formula for -enriched left Kan extensions from <cit.> and applying the evaluation functor _,→ for every ∈ and ≥ 0, we see that there are natural isomorphisms of sets
_,(F_! I^* H(b))≅_,((F_,)_!I_,^*H_,(b)), _,(J^* G_!H(b))≅_,(J_,^* (G_,)_! H_,(b)).
Hence the result follows from applying <ref> and using that isomorphisms in are levelwise.
Let us fix an object W∈ and a map σ[m,,]→ W in with m,≥ 0 and ∈. Consider the following pushout square in .
Its top map is σ L[m,,]→ W, its left map is L[d^m+1,,] L[m,,]→ L[m+1,,], its right map is ι_σ W→ W_σ, its bottom map is σ' L[m+1,,]→ W_σ, and its corner W_σ is the pushout.
For '∈, '≥ 0, and b∈ W an object, the functor
(ι_σ)_','↓( L[d^m+1,,])_',' b↓ (σ)_','→ (ι_σ) b ↓ (σ')_','
is cofinal.
Let 0≤ i≤ m+1 and f (ι_σ) b→ (σ')i be a morphism in ( W_σ)_','. We need to show that the comma category ((ι_σ)_','↓( L[d^m+1,,])_',')↓ (i,f) is non-empty and connected. To see this, we will show the following:
* there exist 0≤ j≤ m, a morphism g b→ (σ)j in ( W)_',', and a morphism φ j→ i in ( L[m+1,,])_',' such that f=(σ')φ∘ (ι_σ) g,
* given another such factorization (j',g',φ') of f, then there exists a morphism β j'→ j in ( L[m,,])_',' such that φ∘β=φ' and g=(σ)β∘ g'.
When 0≤ i≤ m, as the functor (ι_σ)_',' ( W)_','→ ( W_σ)_',' is fully faithful as a pushout of a fully faithful functor, there exists a morphism g b→ (σ)i such that f=(ι_σ)g, and hence the tuple (i,g,𝕀_i) gives a factorization of f as in (i). It is in fact a terminal object in the comma category ((ι_σ)_','↓( L[d^m+1,,])_',')↓ (i,f) and so it satisfies (ii) as well.
It remains to consider the case where i=m+1. In order to show (i) and (ii), using the same notations as above, we will construct a commutative diagram in ( W_σ)_',' of the following form.
Namely, writing f=(T,γ) (ι_σ)b→⊤, we exhibit factorizations f=(σ')φ∘(ι_σ)g and f=(σ')φ'∘(ι_σ)g', with (ι_σ)g=(T',γ') (ι_σ)b→(σ')j, (σ')φ=(B_ω^T,γ_ω) (σ')j→⊤, (ι_σ)g'=(T_1,γ_1) (ι_σ)b→(σ')j', and (σ')φ'=(T_2,γ_2) (σ')j'→⊤, related by a morphism (ι_σ)(σ)β=(T_1',γ_1') (σ')j'→(σ')j satisfying (σ')φ∘(ι_σ)(σ)β=(σ')φ' and (ι_σ)(σ)β∘(ι_σ)g'=(ι_σ)g.
Recall from <ref> that
_ W_σ((ι_σ) b,⊤)≅(_T∈(W_σ)_-,⋆,⋆(ι_σ) b⊤_ T(α,ω)),
where ⊤=(σ')(m+1). Hence a morphism f (ι_σ)b→⊤ in ( W_σ)_-,',', i.e., an element of _ W_σ((ι_σ) b,⊤)_',', consists of a pair (T,γ) of a necklace T→ ((W_σ)_',')_(ι_σ)b,⊤ and an element γ∈_ T(α,ω)_'.
Now note that, if τ[m']→ (W_σ)_-,',' is such that τ(m')=⊤, then by definition of (W_σ)_-,',' as the pushout (L[m+1,,]))_-,','⨿_(L[m,,])_-,',' W_-,',' in , the m'-simplex τ factors through σ'_-,',' (L[m+1,,]))_-,','→ (W_σ)_-,','. In particular, the last bead B_ω^T↪ T→ ((W_σ)_',')_(ι_σ)b,⊤ satisfies this condition and so it factors through a simplex B_ω^T→ (L[m+1,,]_-,',')_j,m+1 for some 0≤ j≤ m.
We write T=T'∨ B_ω^T. Then by <cit.>, there is an isomorphism of sets
_ T(α,ω)_'≅_ T'(α,ω)_'×_ B_ω^T(α,ω)_k'
and so the element γ∈_ T(α,ω)_' corresponds to a pair (γ',γ_ω) with γ'∈_ T'(α,ω)_' and γ_ω∈_ B_ω^T(α,ω)_k'. Using <ref>, the pair (T',γ') corresponds to an element of _ W_σ((ι_σ)b,(ι_σ)(σ)j)_',', and so to an element g∈_ W(b,(σ)j)_',' by fully faithfulness of (ι_σ)_',', and the pair (B_ω^T,γ_ω) to an element of φ∈_ L[m+1,,](j,m+1)_',' by the above discussion. Moreover, as T=T'∨ B_ω^T and γ is sent to (γ',γ_ω) by the isomorphism (<ref>), we get that f=(σ')φ∘ (ι_σ) g, proving (i).
Now, if (j',g',φ') is another factorization of f of the desired form, then using <ref> the morphism g' consists of a pair (T_1,γ_1) of a necklace T_1→ (W_-,',')_b,(σ)j' and an element γ_1∈_ T_1(α,ω)_' and the morphism φ' consists of a pair (T_2,γ_2) of a necklace T_2→ ( L[m+1,,]_-,',')_j',m+1 and an element γ_2∈_ T_2(α,ω)_'. Moreover, as f=(σ')φ'∘ (ι_σ) g', we get that T=T_1∨ T_2 and that the pair (γ_1,γ_2) corresponds to γ∈_ T(α,ω)_' under the isomorphism of sets from <cit.>
_ T(α,ω)_'≅_ T_1(α,ω)_'×_ T_2(α,ω)_k'.
As T'↪ T is by definition the subnecklace of T containing all beads of T except for B_ω^T, we get that T_1↪ T'. Write T'=T_1∨ T_1'. As T_1∨ T_1'∨ B_ω^T=T'∨ B_ω^T=T=T_1∨ T_2 we also have that T_2=T_1'∨ B_ω^T. Then by <cit.> there are isomorphisms of sets
_ T'(α,ω)_'×_ B_ω^T(α,ω)_' ≅_ T_1(α,ω)_'×_ T'_1(α,ω)_k'×_ B_ω^T(α,ω)_'
≅_ T_1(α,ω)_'×_ T_2(α,ω)_',
and so there is a unique element γ'_1∈_ T'_1(α,ω)_k' such that the pair (γ_1,γ'_1) corresponds to γ' and the pair (γ'_1,γ_ω) to γ_2, where we used that the pairs (γ',γ_ω) and (γ_1,γ_2) both correspond to γ∈_ T(α,ω)_' under the isomorphisms (<ref>) and (<ref>), respectively. In particular, this implies that φ∘β=φ' and g=(σ)β∘ g', proving (ii).
There is an isomorphism in [ W^,]
(σ)_! _L[m,,](𝕀_[m,,]) ≅_W(σ).
First recall that, by the definition of the straightening functor on representables from <ref>, we have the following isomorphism in [ L[m,,]^,]
_L[m,,](𝕀_[m,,])≅ ( L[d^m+1,,])^*_ L[m+1,,](-,m+1)
and the following isomorphism in [ W^,]
_W(σ)=(ι_σ)^* _ W_σ(-,⊤)≅ (ι_σ)^* (σ')_! _ L[m+1,,](-,m+1),
where we use that the left Kan extension of the representable _ L[m+1,,](-,m+1) along σ' L[m+1,,]→ W_σ is the representable _ W_σ(-,⊤) by <cit.>. Then, by combining <ref>, we get that the following square in
with horizontal functors (σ)_! [ L[m,,]^,]→[ W^,] and (σ')_! [ L[m+1,,]^,]→[( W_σ)^,], and vertical functors ( L[d^m+1,,])^* [ L[m+1,,]^,]→[ L[m,,]^,] and (ι_σ)^* [( W_σ)^,]→[ W^,],
commutes up to a natural isomorphism (σ)_!( L[d^m+1,,])^*≅ (ι_σ)^* (σ')_!. By taking the component of this isomorphism at the representable _ L[m+1,,](-,m+1), we get isomorphisms in [ W^,]
(σ)_! _L[m,,](𝕀_[m,,])
= (σ)_! ( L[d^m+1,,])^*_ L[m+1,,](-,m+1)
≅ (ι_σ)^* (σ')_! _ L[m+1,,](-,m+1)≅_W(σ).
We are now ready to prove <ref>
which we restate for the reader's convenience.
Let f W→ Z be a map in . Then the following square of left adjoint functors commutes.
( f)_!∘_W ≅ _Z∘ f_! W→[ Z^,].
Since all functors involved are left adjoints and there is an equivalence of categories W≃^(∫_ W)^, it is enough to show that, for every representable object σ[m,,]→ W in W, we have an isomorphism in [ Z^,]
( f)_! _W(σ)≅_Z (f_!σ)=_Z(fσ).
By applying <ref> once to σ and once to the composite fσ and using the functoriality of left Kan extensions and of the functor , we get isomorphisms in [ Z^,]
( f)_! _W(σ) ≅ ( f)_! (σ)_! _L[m,,](𝕀_[m,,])
≅ ( (fσ ))_!_L[m,,](𝕀_[m,,]) ≅_Z(fσ).
A similar argument in the case of quasi-categories can be found in <cit.>. While our proof is motivated by theirs, we carefully separate our argument into a formal part (deducing the desired Beck-Chevalley condition from an abstract cofinality assumption in <ref>) and a computational part (using specific properties of necklaces to deduce the desired cofinality in <ref>). This separation paves the way for possible future generalizations via appropriate computations.
§.§ Straightening of F[l,X]->LF[m,Y], general case
We first prove <ref> which gives a formula for the straightening of a map [,f][ℓ,X]→ L [m,Y] in with [ℓ]→ [m] an injective map in Δ and f X→ Y a map in between connected -spaces.
Recall the definition of the object in from <ref>. We first want to understand necklaces in . To do so, we first rewrite as a certain pushout in .
Let [ℓ]→ [m] be an injective map in Δ, and f X→ Y be a map in . Then is the following pushout in .
Its top map is (∐_m+1 Y)⨿ X→[m,Y]⨿_[ℓ,X][ℓ+1,X], its left map is (∐_m+1 Y)⨿ X→(∐_m+1π_0 Y)⨿π_0 X, and its pushout corner is the object in question.
This is an instance of pushouts commuting with pushouts, using <ref>.
We now fix an injective map [ℓ]→ [m] in Δ and a map f X→ Y in between connected -spaces.
There is an isomorphism in
_0≅{0,1,…,m+1}.
By applying the colimit-preserving functor (-)_0→ to the pushout [m,Y]⨿_[ℓ,X][ℓ+1,X], we get an isomorphism in
([m,Y]⨿_[ℓ,X][ℓ+1,X])_0≅(∐_m+1 Y)⨿ X.
Now, by applying (-)_0→ to the pushout from <ref> and using that π_0 Y≅π_0 X≅{*}, we get isomorphisms in
_0≅{0,1,…,m}⨿{ℓ+2}≅{0,1,…,m+1}.
For ∈ and ≥ 0, by applying the colimit-preserving evaluation functor (-)_-,,→ to the pushout from <ref>, we obtain the following pushout in .
Its top map is (∐_m+1∐_Y_,[0])⨿(∐_X_,[0])→(∐_Y_,[m])⨿_∐_X_,[ℓ](∐_X_,[ℓ+1]), its left map is (∐_m+1∐_Y_,[0])⨿(∐_X_,[0])→∐_m+2[0], and its pushout corner is _-,,.
We write Q→[m+1] for the unique map in making the following diagram commute.
Namely, Q is the unique map out of the pushout of L[d^ℓ+1,X] L[ℓ,X]→ L[ℓ+1,X] along L[,f] L[ℓ,X]→ L[m,Y] whose composite with ι_f L[m,Y]→ is L[d^m+1,!] L[m,Y]→[m+1] and whose composite with the structure map L[ℓ+1,X]→ is L[+1,!] L[ℓ+1,X]→[m+1].
For ∈, ≥ 0, and 0≤ i≤ m, the map in
Q_-,,_-,,→[m+1]_-,,=[m+1]
induces by post-composition a functor
(Q_-,,)_!_-,,im+1→[m+1]im+1.
We show that it is a discrete fibration as defined e.g. in <cit.>.
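For the reader's convenience we recall the definition (in generic notation; our conventions follow the cited reference): a functor P E→ B is a discrete fibration if
for every object e of E and every morphism f b→ P(e) in B there is a unique morphism f̃ e'→ e in E with P(f̃)=f.
This unique-lifting property is exactly what is verified in the proof below.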
For ∈, ≥ 0, and 0≤ i≤ m, the functor
(Q_-,,)_!_-,,im+1→[m+1]im+1
is a discrete fibration.
Given a necklace T→ (_-,,)_i,m+1, consider its image T→[m+1]_i,m+1 under (Q_-,,)_! and let f U→ T be a map in [m+1]im+1. Then the composite
U T→ (_-,,)_i,m+1,
is the unique lift of f via (Q_-,,)_!. Hence (Q_-,,)_! is a discrete fibration.
We now construct a functor that takes a necklace to a totally non-degenerate one in order to get a generalization of the bead functor from <ref> to all necklaces, and we study the compatibility of the discrete fibration (Q_-,,)_! with this construction.
Let K be a simplicial set and a,b∈ K_0. As explained in <cit.>, for every necklace T→ K_a,b, there is a unique epimorphism of simplicial sets T↠T over K_a,b to a totally non-degenerate necklace T→ K_a,b. Moreover, a map g U→ T in Kab induces a unique map gU→T between the induced totally non-degenerate necklaces over K_a,b making the following square in Kab commute.
Its top map is g U→ T, its vertical maps are the epimorphisms from U and T onto the associated totally non-degenerate necklaces, and its bottom map is the induced map g between them.
This yields a functor (-)Kab→Kab.
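As a small concrete case (standard, and independent of the present setting): if a necklace T=[2]→ K_a,b has its single bead given by s_0(e) for a non-degenerate 1-simplex e of K, then the associated totally non-degenerate necklace is [1]→ K_a,b given by e, and the epimorphism from T onto it is s^0; in general one factors each bead as a surjection onto a simplex followed by a non-degenerate simplex of K.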
For ∈ and ≥ 0, a simplex of _-,, is non-degenerate if and only if its image under the map Q_-,,_-,,→[m+1] in is a non-degenerate simplex of [m+1].
Let σ[m']→_-,, be an m'-simplex of _-,,. By the description of _-,, given in <ref>, if m+1 is in the image of σ, such an m'-simplex comes from an m'-simplex
[m']{x}×[ℓ+1]↪(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1])
for some x∈ X_,, and if m+1 is not in the image of σ, such an m'-simplex comes from an m'-simplex
[m']{y}×[m]↪(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1])
for some y∈ Y_,. These are sent by Q_-,, to m'-simplices
[m']{x}×[ℓ+1]+1[m+1] or [m']{y}×[m]d^m+1[m+1].
Hence an m'-simplex σ of _-,, is non-degenerate if and only if the corresponding m'-simplex σ in {x}× F[ℓ+1] or {y}× F[m] is non-degenerate if and only if the image of σ under Q_-,, is a non-degenerate simplex of [m+1].
For ∈, ≥ 0, and 0≤ i≤ m, the functor
(Q_-,,)_!_-,,im+1→[m+1]im+1
restricts to a functor
(Q_-,,)_!_-,,im+1→[m+1]im+1.
It suffices to show that the functor (Q_-,,)_! sends a totally non-degenerate necklace in (_-,,)_i,m+1 to a totally non-degenerate necklace in [m+1]_i,m+1. Since (Q_-,,)_! is given by post-composition with the map Q_-,,_-,,→[m+1], this follows directly from <ref> as totally non-degenerate necklaces are precisely those necklaces whose beads are sent to non-degenerate simplices.
For ∈, ≥ 0, and 0≤ i≤ m, the following diagram in commutes.
(Q_-,,)_!∘(-) = (-)∘(Q_-,,)_! as functors _-,,im+1→[m+1]im+1.
Let T=[m_1]∨…∨[m_t]→ (_-,,)_i,m+1 be a necklace. Then the (unique) totally non-degenerate necklace T→ (_-,,)_i,m+1 is obtained by replacing each bead σ_i[m_i]↪ T→_-,, by the corresponding non-degenerate m'_i-simplex σ'_i obtained by factoring σ_i
[m_i]↠[m'_i]_-,,
as an epimorphism followed by a non-degenerate simplex, for all 1≤ i≤ t. By the description of _-,, given in <ref>, if m+1 is in the image of σ_i and so also in the image of σ'_i, then σ_i and σ'_i come from simplices
[m_i]↠[m'_i]{x_i}×[ℓ+1]↪(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1])
for some x_i∈ X_,, and if m+1 is not in the image of σ_i and so also not in the image of σ'_i, then σ_i and σ'_i come from simplices
[m_i]↠[m'_i]{y_i}×[m]↪(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1])
for some y_i∈ Y_,. These are sent by Q_-,, to simplices
[m_i]↠[m'_i]{x_i}×[ℓ+1]+1[m+1]
[m_i]↠[m'_i]{y_i}×[m]d^m+1[m+1].
In particular, the (unique) totally non-degenerate necklace T→[m+1]_i,m+1 obtained from T=[m_1]∨…∨[m_t]→ (_-,,)_i,m+1→[m+1]_i,m+1 is also given by the above factorization of its beads. Hence the diagram commutes.
Recall from <cit.> that there is an equivalence between the categories of functors ([m+1]im+1)^→ and of discrete fibrations over [m+1]im+1. We now identify the set-valued functor corresponding to the discrete fibration (Q_-,,)_! under this equivalence. For this, recall the notations from <ref>.
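Concretely, the correspondence (recalled here in generic notation) sends a functor F A^op→ Set to its category of elements ∫ F→ A: objects are pairs (a,x) with a an object of A and x∈ F(a), and morphisms (a,x)→(a',x') are morphisms g a→ a' of A with F(g)(x')=x; the projection to A is a discrete fibration, and the fiber over a recovers the set F(a).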
For 0≤ i≤ m, we define a functor
([m+1]im+1)^→.
It sends an object T→[m+1]_i,m+1 in [m+1]im+1 to the -space
(∏_B(T)∖{B_ω^T} Y)× X if B_ω^T⊆(+1)
∅ else .
It sends a map g U→ T in [m+1]im+1 to the map in
(∏_B(T)∖{B^T_ω} Y)× X (∏_B(U)∖{B^U_ω} Y)× X if B^U_ω⊆ B^T_ω⊆(+1)
∅→ (∏_B(U)∖{B^U_ω} Y)× X if B^U_ω⊆(+1), B^T_ω⊈(+1)
∅→∅ else,
where f̂ B(g)^* is the composite in from <ref> replacing g U→ T with gU→T.
For ∈ and ≥ 0, we write _, for the composite
_, ([m+1]im+1)^.
For ∈, ≥ 0, and 0≤ i≤ m, the discrete fibration
(Q_-,,)_!_-,,im+1→[m+1]im+1
corresponds to the functor
_, ([m+1]im+1)^→.
We first show that there is an isomorphism of sets between the fiber at a given necklace T→[m+1]_i,m+1 of the functor (Q_-,,)_!_-,,im+1→[m+1]im+1 and the value at this necklace of the functor _, ([m+1]im+1)^→.
We write fib_T→[m+1]_i,m+1((Q_-,,)_!) for the fiber of (Q_-,,)_! at the necklace T→[m+1]_i,m+1. Let T=[m_1]∨…∨[m_t]↪[m+1]_i,m+1 be the (unique) totally non-degenerate necklace obtained from T→[m+1]_i,m+1 by <ref>.
Given a necklace T→ (_-,,)_i,m+1 which is sent by the functor (Q_-,,)_! to the necklace T→[m+1]_i,m+1, it follows from <ref> that the (unique) totally non-degenerate T→ (_-,,)_i,m+1 obtained from T→ (_-,,)_i,m+1 by <ref> is sent by (Q_-,,)_! to the totally non-degenerate necklace T↪[m+1]_i,m+1. Moreover, since T→ (_-,,)_i,m+1 is the composite
T↠T→ (_-,,)_i,m+1,
it is uniquely determined by T→ (_-,,)_i,m+1.
Now, if B_ω^T=[m_t]⊆(+1), we show that there is an isomorphism of sets
fib_T→[m+1]_i,m+1((Q_-,,)_!)≅∏_B(T)∖{B_ω^T} Y_,× X_,=_,(T).
For 1≤ i≤ t-1, the ith bead σ_i[m_i]↪T→ (_-,,)_i,m+1 of T corresponds, by the description of _-,, given in <ref> and the fact that m+1 is not in the image of σ_i, to a non-degenerate m_i-simplex
[m_i]{y_i}×[m]↪(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1]),
for some y_i∈ Y_,. Then, the last bead σ_t B_ω^T=[m_t]↪T→ (_-,,)_i,m+1 corresponds, by the description of _-,, given in <ref> and the fact that m+1 is in the image of σ_t, to a non-degenerate m_t-simplex
[m_t]{x}×[ℓ+1]↪(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1]),
for some x∈ X_,. Then the data (T→[m+1]_i,m+1,{y_i}_1≤ i≤ t-1,x) uniquely determine the necklace T→ (_-,,)_i,m+1, hence giving the desired isomorphism.
If B_ω^T=[m_t]⊈(+1), then
fib_T→[m+1]_i,m+1((Q_-,,)_!)=∅=_,(T).
Indeed, then there are no m_t-simplices of _-,, that contain m+1 and can be mapped to [m_t]⊆[m+1] by Q_-,,_-,,→[m+1], as Q_-,, acts as +1 on simplices containing m+1. So there are no lifts of T→[m+1]_i,m+1.
It remains to show that the isomorphisms (<ref>) and (<ref>) assemble into a natural isomorphism. For this, note that if g U→ T is a map in [m+1]im+1, then by the proof of <ref>, the map g acts on the fibers of (Q_-,,)_! by pre-composition
g^*fib_T→[m+1]_i,m+1((Q_-,,)_!)→fib_U→[m+1]_i,m+1((Q_-,,)_!).
Using that the map g U→ T is completely determined by the induced map gU↪T obtained by <ref>, a direct computation using the above description of morphisms on fibers of (Q_-,,)_! and the definition of _, on morphisms shows that the isomorphisms (<ref>) and (<ref>) are natural in T→[m+1]_i,m+1.
We now aim to describe the hom -spaces of the categorification .
For m≥ 0 and 0≤ i≤ m, we define a functor
[m+1]im+1→.
It sends an object T→[m+1]_i,m+1 in [m+1]im+1 to the space _ T(α,ω) and a map g U→ T in [m+1]im+1 to the map in
( g)_α,ω_ T(α,ω)→_ U(α,ω).
For 0≤ i≤ m, there is a natural isomorphism in
_T∈_-,⋆,⋆im+1_ T(α,ω) ≅^_⋆,⋆_[m+1]im+1,
where ^_⋆,⋆_[m+1]im+1 is the object of given at ∈ and ≥ 0 by the colimit in of the functor weighted by _,.
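The proof below relies on the following standard reduction (stated in generic notation): for a weight W A^op→ Set and a functor F A→ with cocomplete, the W-weighted colimit of F agrees with the ordinary (conical) colimit of F over the category of elements of W,
colim^W F ≅ colim_∫ W F∘π,
where π ∫ W→ A denotes the canonical projection.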
Let ∈ and ≥ 0. By <ref> the category of elements of the functor _, ([m+1]im+1)^→ is given by the discrete fibration
(Q_-,,)_!_-,,im+1→[m+1]im+1.
So by <cit.> we have natural isomorphisms in
_T∈_-,,im+1_ T(α,ω) = __-,,im+1 (Q_-,,)_!
≅^_,_[m+1]im+1.
Recall the inclusion ι↪ from <ref>.
For 0≤ i≤ m, there is a natural isomorphism in
_T∈_-,⋆,⋆im+1_ T(α,ω) ≅^_([m+1]im+1)^ι,
where ^_([m+1]im+1)^ι is the -enriched colimit of ι weighted by .
By <ref> and <cit.>, we have isomorphisms in
_T∈_-,⋆,⋆im+1_ T(α,ω) ≅^_⋆,⋆_[m+1]im+1
≅^_([m+1]im+1)^ι.
For 0≤ i≤ m, there is a natural isomorphism in
_(i,m+1)≅(^_([m+1]im+1)^ι),
where ^_([m+1]im+1)^ι is the -enriched colimit of ι weighted by .
By <ref>, we have natural isomorphisms in
_(i,m+1) ≅(_T∈_-,⋆,⋆im+1_ T(α,ω))
≅(^_([m+1]im+1)^ι).
With this computation, we are now ready to calculate the straightening of the desired map.
For 0≤ i≤ m, the construction from <ref> defines a functor [-]Y→ ()^([m+1]im+1)^. Given a map σ X'→ X in , regarded as a map from fσ X'→ Y to f X→ Y in Y, there is an induced natural transformation [fσ]→ which acts as σ on the copies of X' and X in each component of [fσ] and .
Using this functoriality, we get the following. We write X≅_[,] X[,] as a colimit of representables in , where we recall that X is the source of the given map f X→ Y in between connected -spaces.
For 0≤ i≤ m, there is a natural isomorphism in ()^([m+1]im+1)^
_[,]X[fσ]≅.
The component at every object T→[m+1]_i,m+1 in [m+1]im+1 of the canonical map _[,]X[fσ]→ in ()^([m+1]im+1)^
is an isomorphism in . Indeed, this follows from the fact that, for every object T→[m+1]_i,m+1 in [m+1]im+1, we have isomorphisms in
_[,]X((∏_B(T)∖{B_ω^T} Y)×[,]) ≅ (∏_B(T)∖{B_ω^T} Y)× (_[,]X[,])
≅ (∏_B(T)∖{B_ω^T} Y)× X,
where the first isomorphisms holds since products in commute with colimits.
For 0≤ i≤ m, there is a natural isomorphism in
_[,] X^_([m+1]im+1)^ι[fσ]≅^_([m+1]im+1)^ι.
This follows directly from the isomorphism from <ref>, and the fact that the functor ^_([m+1]im+1)^ι(-) ()^([m+1]im+1)^→ preserves colimits.
There is a natural isomorphism in [ L[m,Y]^,]
_[,]X_[fσ](-,m+1)∘(ι_fσ)≅_(-,m+1)∘(ι_f).
Let 0≤ i≤ m. By applying the colimit-preserving functor → to the isomorphism from <ref>, we get natural isomorphisms in
_[,] X( ^_([m+1]im+1)^ι[fσ])
≅(_[,] X^_([m+1]im+1)^ι[fσ])
≅(^_([m+1]im+1)^ι).
Then, by <ref>, this yields a natural isomorphism in
_[,] X_[fσ](i,m+1)≅_(i,m+1).
Since the above isomorphisms are natural in 0≤ i≤ m, they assemble into a natural isomorphism in [ L[m,Y]^,]
_[,] X_[fσ](-,m+1)∘(ι_fσ)≅_(-,m+1)∘(ι_f).
We finally prove <ref> which we restate for the reader's convenience.
Let [ℓ]→ [m] be an injective map in Δ, and f X→ Y be map in between connected -spaces. The straightening functor _L[m,Y] sends the object [,f][ℓ,X]→ L[m,Y] in L[m,Y] to the -enriched functor
_L[m,Y] ([,f]) L[m,Y]^^.
Write X≅_[,] X[,] as a colimit of representables in . Since _L[m,Y]L[m,Y]→ [ L[m,Y]^,] commutes with colimits and by its definition on representables, we have isomorphisms in [ L[m,Y]^,]
_L[m,Y]([,f]) ≅_[,] X_L[m,Y]([,fσ])
≅_[,] X_(L[m,Y]_[,fσ])(-,⊤)∘(ι_fσ).
By <ref>, we see that there is an isomorphism L[m,Y]_[,fσ]≅[fσ] in that identifies ⊤ with m+1, for every map σ[,]→ X in . Hence, combining with <ref>, we get isomorphisms in [ L[m,Y]^,]
_[,] X_(L[m,Y]_[,fσ]) (-,⊤)∘(ι_fσ)
≅_[,] X_[fσ](-,m+1)∘(ι_fσ)
≅_(-,m+1)∘(ι_f).
All together they assemble into the desired isomorphism in [ L[m,Y]^,].
§.§ Enrichment of straightening
We now prove <ref> which states that the straightening functor is enriched over .
Let us fix m,≥ 0, ∈, and an object X∈. We first compare the straightening of the canonical projection
[𝕀_[m],π][m,,]× X→[m,,]
with the tensor by X of the straightening of the identity at [m,,].
For 0≤ i≤ m, there is an isomorphism in
_[π][𝕀_[m]](i,m+1)≅_[𝕀_[,]][𝕀_[m]](i,m+1)× X.
For every object T→[m+1]_i,m+1 in [m+1]im+1, a direct computation shows that
[π][i][𝕀_[m]](T)=(∏_B(T)[,])× X=[𝕀_[,]](T)× X=([𝕀_[,]]⊗ X)(T)
and so we have [π][i][𝕀_[m]]=[𝕀_[,]]⊗ X in ()^([m+1]im+1)^. Since the inclusion functor ι→ preserves products, we obtain an isomorphism in ()^([m+1]im+1)^
ι[π][i][𝕀_[m]]= ι([𝕀_[,]]⊗ X)≅ (ι[𝕀_[,]])⊗ι X.
Now, since weighted colimits commute with each other and tensors are an example of such by <cit.>, we get isomorphisms in
^_([m+1]im+1)^ι[π][i][𝕀_[m]] ≅^_([m+1]im+1)^(ι[𝕀_[,]]⊗ι X)
≅ (^_([m+1]im+1)^ι[𝕀_[,]])×ι X.
Finally, as → commutes with products and (ι X)=X, we get isomorphisms in
(^_([m+1]im+1)^ι[π][i][𝕀_[m]]) ≅ ((^_([m+1]im+1)^ι[𝕀_[,]])×ι X)
≅ (^_([m+1]im+1)^ι[𝕀_[,]])× X.
By <ref>, this gives an isomorphism in
_[π][𝕀_[m]](i,m+1)≅_[𝕀_[,]][𝕀_[m]](i,m+1)× X.
There is an isomorphism in [ L[m,,]^,]
_L[m,,]([𝕀_[m],π])≅_L[m,,](𝕀_[m,,])⊗ X.
Let 0≤ i≤ m. By <ref>, we have isomorphisms in
_L[m,,]([𝕀_[m],π])(i)
≅_[π][𝕀_[m]](i,m+1) ≅_[𝕀_[,]][𝕀_[m]](i,m+1)× X
≅_L[m,,](𝕀_[m,,])(i)× X=(_L[m,,](𝕀_[m,,])⊗ X)(i).
Since the above isomorphisms are natural in 0≤ i≤ m, they assemble into a natural isomorphism in [ L[m,,]^,]
_L[m,,]([𝕀_[m],π])≅_L[m,,](𝕀_[m,,])⊗ X.
From this, we can deduce <ref> which we restate for the reader's convenience.
Let W be an object in and X∈. Then the following square of left adjoint functors commutes up to a natural isomorphism.
[Diagram: the square with horizontal straightening functors _W W→[ W^,] on top and bottom and vertical functors (-)⊗ X on both sides, filled by a natural isomorphism.]
Since all functors involved are left adjoints and there is an equivalence of categories W≃^(∫_ W)^, it is enough to show that, for every representable object σ[m,,]→ W in W, we have an isomorphism in [ W^,]
_W(σ⊗ X)≅_W(σ)⊗ X.
By <ref>, we have an isomorphism in [ L[m,,]^,]
_L[m,,](𝕀_[m,,]⊗ X)=_L[m,,]([𝕀_[m],π])≅_L[m,,](𝕀_[m,,])⊗ X.
As (σ)_! preserves tensors by <ref>, we get isomorphisms in [ W^,]
(σ)_!_L[m,,](𝕀_[m,,]⊗ X) ≅ (σ)_!(_L[m,,](𝕀_[m,,])⊗ X)
≅ ((σ)_!_L[m,,](𝕀_[m,,]))⊗ X.
Finally, by <ref>, this gives an isomorphism in [ W^,]
_W(σ⊗ X)≅_W(σ)⊗ X.
§.§ Straightening of F[ℓ,X] → LF[m,Y], monomorphism case
We now prove <ref> computing the straightening of a map [,f][ℓ,X]→ L [m,Y] in in the special case where f X→ Y is a monomorphism in .
Let us fix an injective map [ℓ]→ [m] in Δ and a monomorphism f X↪ Y in between connected spaces. Under these assumptions, we first show that the simplicial set _-,, is 1-ordered as defined in <cit.>.
For ∈ and ≥ 0, the simplicial set _-,, is 1-ordered.
By <ref>, we have that _0,,≅{0,1,…,m+1} and by construction every 1-simplex goes from i to j where i ≤ j. Hence the relation ≼__-,, is anti-symmetric.
For m'≥ 1, we first show that the Segal map
_m',,→_1,,×__0,,…×__0,,_1,,
restricts to a map
_m',,^nd→{ g[m']→_-,,| g mono},
where [m'] denotes the spine of [m']. Let σ[m']→_-,, be a non-degenerate m'-simplex of _-,,. By the description of _-,, given in <ref>, if m+1 is in the image of σ, such an m'-simplex comes from a non-degenerate m'-simplex
[m']{x}×[ℓ+1]↪(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1]),
for some x∈ X_,, and if m+1 is not in the image of σ, such an m'-simplex comes from a non-degenerate m'-simplex
[m']{y}×[m]↪(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1]),
for some y∈ Y_,. Since the simplicial sets [ℓ+1] and are 1-ordered by <ref>, it follows that the induced map
[m']↪[m']{x}×[ℓ+1] or [m']↪[m']{y}×
is a monomorphism. Now, as X_,⊆ Y_,, the composites
{x}×[ℓ+1]↪ (∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1])→_-,,
{y}×[m]↪ (∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1])→_-,,
are also monomorphisms and so the induced map [m']↪[m']_-,, is a composite of monomorphisms, hence a monomorphism.
Now we show that the map (<ref>) is injective. Let σ,τ[m']→_-,, be non-degenerate m'-simplices of _-,, such that their restrictions along [m']↪[m'] coincide. First note that m+1 is in the image of σ if and only if it is in the image of τ. Indeed, if m+1 is in the image of σ, then it is in the image of its restriction along [m']↪[m'], and so it must be in the image of τ; and conversely.
In the case where m+1 is in the image of σ and τ, as before, the m'-simplices σ and τ come from non-degenerate m'-simplices
σ[m']→{x}×[ℓ+1] and τ[m']→{x'}×[ℓ+1],
for some x,x'∈ X_,. As the restrictions of σ and τ along [m']↪[m'] coincide, and the map
(∐_Y_,) ⨿_∐_X_,[ℓ] (∐_X_,[ℓ+1])→_-,,
is injective on 1-simplices, it follows that x=x'. So σ and τ are two non-degenerate m'-simplices of {x}×[ℓ+1] whose restrictions along [m']↪[m'] coincide. Hence, as [ℓ+1] is 1-ordered by <ref>, it follows that σ=τ and so σ=τ.
The case where m+1 is not in the image of σ and τ proceeds similarly, replacing {x}×[ℓ+1] and {x'}×[ℓ+1] by {y}× and {y'}× for some y,y'∈ Y_,.
Since is 1-ordered, in order to compute the hom -spaces of the categorification , we need to study totally non-degenerate necklaces in . For this, we first show that the restriction of the discrete fibration
(Q_-,,)_!_-,,im+1→[m+1]im+1
from <ref> to subcategories of totally non-degenerate necklaces is in fact its pullback.
For ∈, ≥ 0, and 0≤ i≤ m, we have the following pullback square in .
[Diagram: the pullback square whose horizontal maps are the inclusions _-,,im+1↪_-,,im+1 and [m+1]im+1↪[m+1]im+1, and whose vertical maps are the functors (Q_-,,)_!.]
To show that the above square is a pullback, it is enough to show that a necklace in (_-,,)_i,m+1 is totally non-degenerate if and only if its image under the functor
(Q_-,,)_!_-,,im+1→[m+1]im+1
is a totally non-degenerate necklace in [m+1]_i,m+1. However, this follows directly from <ref> as totally non-degenerate necklaces are precisely those necklaces whose beads are sent to non-degenerate simplices.
Since discrete fibrations are closed under pullbacks, as a consequence of <ref>, we obtain the following.
For ∈, ≥ 0, and 0≤ i≤ m, the functor
(Q_-,,)_!_-,,im+1→[m+1]im+1
is a discrete fibration.
We now identify the functor corresponding to this discrete fibration. For this, recall the functor ([m+1]im+1)^→ from <ref>.
For ∈, ≥ 0, and 0≤ i≤ m, the discrete fibration
(Q_-,,)_!_-,,im+1→[m+1]im+1
corresponds to the functor
_, ([m+1]im+1)^→.
Recall that the equivalence between discrete fibrations and set-valued functors from <cit.> is natural with respect to functors F→ which act by sending a discrete fibration over to its pullback along F and a functor → to its pre-composition with F. Hence, by <ref> the functor ([m+1]im+1)^→ corresponding to the discrete fibration (Q_-,,)_!_-,,im+1→[m+1]im+1 is obtained by restricting the functor ([m+1]im+1)^→ from <ref> along the inclusion [m+1]im+1↪[m+1]im+1. But this is precisely the functor _, ([m+1]im+1)^→.
Recall the functor H^i_m+1[m+1]im+1→ from <ref>.
For 0≤ i≤ m, there is a natural isomorphism in
_T∈_-,⋆,⋆im+1_ T(α,ω) ≅^_⋆,⋆_[m+1]im+1 H^i_m+1,
where ^_⋆,⋆_[m+1]im+1 H^i_m+1 is the object of given at ∈ and ≥ 0 by the colimit in of the functor H^i_m+1 weighted by _,.
The proof works as in <ref>, using <ref>.
For 0≤ i≤ m, there is a natural isomorphism in
_T∈_-,⋆,⋆im+1_ T(α,ω) ≅^H^i_m+1_([m+1]im+1)^ι,
where ^H^i_m+1_([m+1]im+1)^ι is the -enriched colimit of ι weighted by H^i_m+1.
The proof works as in <ref>, using <ref>.
Finally, we can compute the hom -spaces of the categorification , and hence the straightening of the desired map.
For 0≤ i≤ m, there is a natural isomorphism in
_(i,m+1)≅(^H^i_m+1_([m+1]im+1)^ι),
where ^H^i_m+1_([m+1]im+1)^ι is the -enriched colimit of ι weighted by H^i_m+1.
Recall from <ref> that, for every ∈ and ≥ 0, the simplicial set _-,, is 1-ordered. Hence, by <ref>, we have natural isomorphisms in
_(i,m+1) ≅(_T∈_-,⋆,⋆im+1_ T(α,ω))
≅(^H^i_m+1_([m+1]im+1)^ι).
We can now prove <ref> which we restate for the reader's convenience.
Let [ℓ]→ [m] be an injective map in Δ, and f X↪ Y be a monomorphism in between connected -spaces. For 0≤ i≤ m, there is a natural isomorphism in
_L[m,Y]([,f])(i)≅(^H^i_m+1_([m+1]im+1)^ι).
By <ref>, we have natural isomorphisms in
_L[m,Y]([,f])(i) ≅_(i,m+1) ≅(^H^i_m+1_([m+1]im+1)^ι).
§ CONSTRUCTION OF THE COMPARISON MAP
The goal of this is to prove <ref>. In <ref>, we first construct a -enriched functor Π_m,Y L[m+1,Y]→ L[m,Y]⨿_[0]Σ Y. Then, building on this, in <ref>, we construct, for every object c∈, the desired -enriched natural transformation in [^,]
(ε_)_!_∫_^_(-,c)→_(-,c).
We then verify that it is a section of the -enriched natural transformation
_(-,c)≅ (ε_)_!_(c) (ε_)_!_∫_^_(-,c),
and that it is natural in c∈, hence proving <ref>.
§.§ The projection CL[m+1,Y] → CL[m,Y] ⨿ Σ Y
Let us fix m≥ 0 and a connected -space Y. In this section, we build a -enriched functor
Π_m,Y L[m+1,Y]→ L[m,Y]⨿_[0]Σ Y.
First, we let the -enriched functor Π_m,Y be the identity on objects, and it remains to describe its action on hom -spaces. For this, we first study the totally non-degenerate necklaces in L([m,Y]⨿_F[0,Y][1,Y])_-,,. We get the following description.
For ∈, ≥ 0, and 0≤ i<j≤ m+1, we have the following pullback square in .
[Diagram: the pullback square whose horizontal maps are the inclusions L([m,Y]⨿_F[0,Y][1,Y])_-,,ij↪L[m+1,Y]_-,,ij and [m]∨[1]ij↪[m+1]ij, and whose vertical maps are induced by (Q_-,,)_!.]
As a consequence, we get that the hom -spaces of the -enriched categories coincide when 0≤ i<j≤ m.
For ∈, ≥ 0, and 0≤ i<j≤ m, there is an isomorphism of categories
L([m,Y]⨿_F[0,Y][1,Y])_-,,ij≅L[m+1,Y]_-,,ij.
When 0≤ i<j≤ m, there is an isomorphism of categories
[m]∨[1]ij≅[m+1]ij.
Hence, the desired result follows from <ref>.
For 0≤ i<j≤ m, there is an isomorphism in
(Π_m,Y)_i,j_ L[m+1,Y](i,j)≅_ L[m,Y]⨿_[0]Σ Y(i,j).
By <ref>, we have the following isomorphisms in
_ L [m+1,Y](i,j)≅(_T∈L[m+1,Y]_-,⋆,⋆ij_ T(α,ω)) and
_ L[m,Y]⨿_[0]Σ Y(i,j)≅(_T∈L([m,Y]⨿_F[0,Y][1,Y])_-,⋆,⋆ij_ T(α,ω)),
where the left-hand sides are related by (Π_m,Y)_i,j and the right-hand sides by the canonical map between the colimits.
By <ref>, there is a canonical isomorphism in between the right-hand terms and hence we get an isomorphism between the left-hand terms as desired.
It remains to construct the action on the hom -spaces whose target is m+1. For this, we first provide a more combinatorial description of the category [m+1]im+1 and its subcategory [m]∨[1]im+1.
For 0≤ i≤ m, we define the category air_i,m+1 to be the poset such that
* its objects are pairs (J,V) of subsets {i,m+1}⊆ J⊆ V⊆{i,…,m+1},
* there is a morphism (J,V)→ (J',V') if and only if V⊆ V' and J'⊆ J.
We write air_i,m+1^m for the full subcategory of air_i,m+1 consisting of those pairs (J,V) such that m∈ J⊆ V.
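For instance, for m = 1 and i = 0, the poset air_0,2 has exactly three objects, namely ({0,2},{0,2}), ({0,2},{0,1,2}), and ({0,1,2},{0,1,2}), while the full subcategory air^1_0,2 consists of the single object ({0,1,2},{0,1,2}).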
For 0≤ i≤ m, there are isomorphisms of categories
[m+1]im+1≅ air_i,m+1 and [m]∨[1]im+1≅ air^m_i,m+1.
The left-hand isomorphism is induced by the functor [m+1]im+1→ air_i,m+1 which sends a totally non-degenerate necklace T↪[m+1]_i,m+1 to the pair (J_T,V_T) of subsets of [m+1]_0 consisting of the joints and vertices of T, respectively. It is straightforward to check that this defines an isomorphism of categories and that it restricts to an isomorphism as depicted above right.
We then obtain the following description of the hom -spaces from <cit.>.
For 0≤ i≤ m and T↪[m+1]_i,m+1 a totally non-degenerate necklace, there is an isomorphism in
_ T(α,ω)≅∏_V_T∖ J_T[1].
For 0≤ i≤ m, we define a functor
(-)^+m air_i,m+1→ air^m_i,m+1,
which sends a pair (J,V) to the pair (J∪{m},V∪{m}). This is well-defined.
For 0≤ i≤ m, the above functor induces under the isomorphisms from <ref> a functor
(-)^+m[m+1]im+1→[m]∨[1]im+1.
This functor adds the vertex m to the set of joints and vertices of a totally non-degenerate necklace T↪[m+1]_i,m+1.
Note that the composite of functors
air^m_i,m+1↪ air_i,m+1 air^m_i,m+1
is the identity, and hence the composite of functors
[m]∨[1]im+1↪[m+1]im+1[m]∨[1]im+1
is also the identity.
As the outer square of functors below commutes by <ref>, we get a functor
(-)^+mL[m+1,Y]_-,,im+1→L([m,Y]⨿_[0,Y][1,Y])_-,,im+1
given by the universal property of the pullback from <ref>.
[Diagram: the outer square formed by the identity of L[m+1,Y]_-,,im+1, the discrete fibrations (Q_-,,)_!, and the functor (-)^+m[m+1]im+1→[m]∨[1]im+1 factors through the pullback L([m,Y]⨿_F[0,Y][1,Y])_-,,im+1 via the dashed functor (-)^+m.]
For 0≤ i≤ m and T↪[m+1]_i,m+1 a totally non-degenerate necklace, the inclusion V_T∪{m}∖ J_T∪{m}⊆ V_T∖ J_T induces by pre-composition a map in
φ_T _ T(α,ω)≅∏_V_T∖ J_T[1] ⟶ _ (T^+m)(α,ω)≅∏_V_T∪{m}∖ J_T∪{m}[1].
It is straightforward to check that this construction is natural in T∈[m+1]im+1.
For ∈ and ≥ 0, we get an induced map between colimits in making the following diagrams commute
[Diagram: the square with top map γ_T _ T(α,ω)→_T∈L[m+1,Y]_-,,im+1_ T(α,ω), left map φ_T, right map (φ^i_m,Y)_,, and bottom map γ_T^+m into _T∈L([m,Y]⨿_F[0,Y][1,Y])_-,,im+1_ T(α,ω),]
for all T∈L[m+1,Y]_-,,im+1, where γ denote the colimit cones. Since the above defined map is natural in ∈ and ≥ 0, this induces a map in
φ_m,Y^i _T∈L[m+1,Y]_-,⋆,⋆im+1_ T(α,ω) ⟶ _T∈L([m,Y]⨿_F[0,Y][1,Y])_-,⋆,⋆im+1_ T(α,ω).
We are now ready to build the desired -enriched functor.
We construct a -enriched functor
Π_m,Y L[m+1,Y]→ L[m,Y]⨿_[0]Σ Y
between directed -enriched categories such that
* it is the identity on objects,
* for 0≤ i<j≤ m, the map on hom -spaces is given by the isomorphism in from <ref>
(Π_m,Y)_i,j_ L[m+1,Y](i,j)≅_ L[m,Y]⨿_[0]Σ Y(i,j),
* for 0≤ i≤ m, the map on hom -spaces is given by the map in obtained by taking the diagonal of the map in from <ref>
(Π_m,Y)_i,m+1_ L[m+1,Y](i,m+1)≅(_T∈L[m+1,Y]_-,⋆,⋆im+1_ T(α,ω)) ⟶ _ L[m,Y]⨿_[0]Σ Y(i,m+1)≅(_T∈L([m,Y]⨿_F[0,Y][1,Y])_-,⋆,⋆im+1_ T(α,ω)).
It is straightforward to check that this construction is compatible with composition.
Moreover, we observe that, by construction, we have the following.
The composite of -enriched functor
L[m,Y] L[m+1,Y] L[m,Y]⨿_[0]Σ Y
coincides with the canonical inclusion of the coproduct.
§.§ The comparison map
Let us fix a -enriched category and an object c∈. We want to construct a -enriched natural transformation in [^,]
(ε_)_!_∫_^_(-,c)→_(-,c).
Since the composite of left adjoints (ε_)_!_ preserves colimits, we have the following isomorphisms in [^,]
(ε_)_!_∫_^_(-,c) ≅ (ε_)_!_(_[m,,]∫_^_(-,c)[m,,])
≅_[m,,]∫_^_(-,c)(ε_)_!_(π__(-,c)σ).
Hence to construct (<ref>), we need to define, for every map σ[m,,]→∫_^_(-,c) in , a -enriched natural transformation in [^,]
(ε_)_!_(π__(-,c)σ)→_(-,c)
natural in σ∈∫_(∫_^_(-,c)).
For every m≥ 0 and every Y∈, there is a one-to-one correspondence between maps in
σ[m,Y]→∫_^_(-,c)
and -enriched functors
F_σ L[m,Y]⨿_[0]Σ Y→
such that F_σ(m+1)=c.
Moreover, for every morphism [ℓ]→ [m] in Δ, every map f X→ Y in , and every map σ[m,Y]→∫_^_(-,c) in , the following boat commutes,
[Diagram: the square whose left vertical map is L[,f] L[ℓ,X]→ L[m,Y], whose horizontal maps go into L[ℓ,X]⨿_[0]Σ X and L[m,Y]⨿_[0]Σ Y, and whose right-hand legs are F_σ[,f] and F_σ into ,]
where the horizontal maps are the canonical inclusions of the coproducts.
Composites in
[m,Y]∫__(-,c) →
correspond by Yoneda to composites in
Y (∫__(-,c))_m→ ()_m,
which by definition of (∫__(-,c))_m correspond to composites in
Y()_m×_()_0× ()_1×_()_0{c}→ ()_m.
By the universal property of pullback and Yoneda, above composites in correspond to composites in
F[m,Y]→ F[m,Y]⨿_[0] F[1,Y]
with σ(m+1)=c, and so by the adjunction L⊣ I and <ref> to composites of -enriched functors
L F[m,Y]→ L F[m,Y]⨿_[0]Σ Y
such that F_σ(m+1)=c. In particular, we can deduce from this the desired bijection between maps σ and -enriched functors F_σ. Finally, the commutativity of the boat follows from the naturality of the adjunction L⊣ I.
Let us now fix m≥ 0, a connected -space Y, and a map σ[m,Y]→∫_^_(-,c) in . In particular, one can take Y=[,].
We write τ for the composite of maps in
τ[m,Y]∫_^_(-,c)→.
We aim to construct a -enriched natural transformation in [^, ]
φ_σ (ε_)_!_(τ)→_(-,c).
By <ref> in the case where =𝕀_[m] and f=𝕀_Y, there is an isomorphism in [ L[m,Y]^,]
_L[m,Y](𝕀_[m,Y])≅ L[d^m+1,Y]^* _ L[m+1,Y](-,m+1).
Recall from <ref> that the map σ[m,Y]→∫_^_(-,c) corresponds to a -enriched functor F_σ L[m,Y]⨿_[0]Σ Y→ such that F_σ(m+1)=c. We further have a commutative diagram in
[Diagram: a commutative square expressing that L[d^m+1,Y] followed by Π_m,Y and F_σ agrees with τ followed by ε_ as functors L[m,Y]→.]
Then the -enriched functor F_σΠ_m,Y induces a -enriched natural transformation in [ L[m+1,Y]^, ]
(F_σΠ_m,Y)_-,m+1_ L[m+1,Y](-,m+1)→Π_m,Y^*F_σ^*_(-,c).
By restricting along L[d^m+1,Y], we obtain a -enriched natural transformation in [ L[m,Y]^, ]
γ_σ L[d^m+1,Y]^*_ L[m+1,Y](-,m+1)→ L[d^m+1,Y]^*Π_m,Y^*F_σ^*_(-,c),
which by <ref> and the fact that the above diagram commutes amounts to a -enriched natural transformation in [ L[m,Y]^, ]
γ_σ_L[m,Y](𝕀_[m,Y])→ (τ)^*(ε_)^*_(-,c).
By transposing along the adjunctions (τ)_!⊣ (τ)^* and (ε_)_!⊣ (ε_)^*, we get a -enriched natural transformation in [^, ]
φ_σ (ε_)_!_(τ)≅ (ε_)_!(τ)_!_L[m,Y](𝕀_[m,Y])→_(-,c),
where the isomorphism holds by <ref> in the case where f=τ.
To show that this construction is natural, we will make use of the following formal results.
Let F→ be a -enriched functor and c∈ be an object. The unit of the adjunction F_!⊣ F^* evaluated at _(-,c) is the -enriched natural transformation in [^,]
F_-,c_(-,c)→_(F(-),Fc)=F^*_(-,Fc)≅ F^*F_!_(-,c).
By the enriched Yoneda lemma, the -enriched natural transformation F_-,c is determined by the fact that it sends 𝕀_c∈_(c,c) to 𝕀_Fc∈_(Fc,Fc). It then follows that the pair (_(-,Fc),F_-,c) is initial in the category of pairs (G,η) of a -enriched functor G→ and a -enriched natural transformation η_(-,c)⇒ F^* G. To see this, note that the latter is again determined by the image of 𝕀_c∈_(c,c) under η, namely η(𝕀_c)∈ G(Fc). This shows that F_-,c is the unit of the adjunction F_!⊣ F^*.
Consider a commutative square in as below left.
[Diagram: on the left, a commutative square of -enriched functors with horizontal maps F and G and vertical maps I and J; on the right, the induced square of functors I^*, F_!, G_!, J^* between the corresponding presheaf categories [^,], inhabited by a natural transformation.]
Then the component at a -enriched functor H^→ of the natural transformation F_!I^*→ J^*G_! as in the above right square in is given by the -enriched natural transformation F_!I^*(H)→ J^*G_!(H) in [^,] corresponding under the adjunction F_!⊣ F^* to the -enriched natural transformation in [^,]
I^*(H) I^*G^*G_!(H)=F^*J^*G_!(H),
where η denotes the unit of the adjunction G_!⊣ G^*.
We now show that the construction φ_σ from <ref> is natural in σ. Let us fix a morphism [ℓ]→ [m] in Δ, a map f X→ Y in between connected -spaces, and a map σ[m,Y]→∫_^_(-,c) in .
The following diagram in [ L[ℓ,X]^,] commutes.
[Diagram: the square with top map γ_σ [,f] out of L[d^ℓ+1,X]^*_ L[ℓ+1,X](-,ℓ+1), left map L[d^ℓ+1,X]^* L[+1,f]_-,ℓ+1 followed by the canonical isomorphism L[d^ℓ+1,X]^* L[+1,f]^*_ L[m+1,Y](-,m+1)≅ L[,f]^* L[d^m+1,Y]^*_ L[m+1,Y](-,m+1), bottom map L[,f]^*γ_σ, and right map the identity of L[,f]^*(τ)^*(ε_)^*_(-,c).]
Combining <ref>, the following diagram of -enriched functors commutes.
[Diagram: the commutative diagram of -enriched functors in which L[d^ℓ+1,X] followed by Π_ℓ,X and F_σ[,f] agrees with L[,f] followed by L[d^m+1,Y], Π_m,Y, and F_σ, as functors L[ℓ,X]→.]
By taking the induced -enriched natural transformations between hom -spaces with target ℓ+1 and restricting along L[d^ℓ+1,X], we get the desired result.
The -enriched natural transformation in [ L[m,Y]^,]
_L[m,Y]([,f]) L[,f]_!_L [ℓ,X](𝕀_[ℓ,Y])≅_L [m,Y]([,f])→_L[m,Y](𝕀_[m,Y])
corresponds under the adjunction L[,f]_!⊣ L[,f]^* to the -enriched natural transformation in [ L[ℓ,X]^,]
L[d^ℓ+1,X]^* L[+1,f]_-,ℓ+1_LF[ℓ,X](𝕀_F[ℓ,X])→ L[,f]^*_LF[m,Y](𝕀_F[m,Y])
Consider the following diagram in .
[Diagram: the pushout square with maps L[d^ℓ+1,X] L[ℓ,X]→ L[ℓ+1,X] and L[,f] L[ℓ,X]→ L[m,Y] and legs J and ι_f, together with the induced functor G into L[m+1,Y] compatible with L[+1,f] and L[d^m+1,Y].]
Then we can deduce from <ref> that the -enriched natural transformation in [ L[m,Y]^,]
_L[m,Y]([,f])_L [m,Y]([,f])→_L[m,Y](𝕀_[m,Y])
is given by the -enriched natural transformation
(ι_f)^*G_-,m+1 (ι_f)^*_(-,m+1)→ L[d^m+1,Y]^*_ L[m+1,Y](-,m+1).
Consider the following diagram of natural transformations.
[Diagram: the pasting of the two induced natural transformations between the presheaf categories over L[ℓ,X], L[ℓ+1,X], L[m,Y], , and L[m+1,Y], involving the functors L[d^ℓ+1,X]^*, L[,f]_!, J_!, (ι_f)^*, L[+1,f]_!, G^*, and L[d^m+1,Y]^*.]
By <ref>, the component of the upper natural transformation at the representable object _ L[ℓ+1,X](-,ℓ+1)∈ [ L[ℓ+1,X]^,] is given by the isomorphism in [ L[m,Y]^,]
L[,f]_! _L[ℓ,X] (𝕀_[ℓ,X]) ≅_L[m,Y]([,f]).
Then, combining <ref>, the component of the lower natural transformation at _ L[ℓ+1,X](-,ℓ+1) is given by the -enriched natural transformation in [ ^,]
G_-,m+1_(-,m+1)→ G^*_ L[m+1,Y](-,m+1).
Hence, by the above, their composite is given by the -enriched natural transformation in [ L[m,Y]^,]
L[,f]_!_L [ℓ,X](𝕀_[ℓ,Y])≅_L [m,Y]([,f])_L[m,Y](𝕀_[m,Y]).
On the other hand, combining <ref>, the component of the composite of the two natural transformations at _ L[ℓ+1,X](-,ℓ+1) is given by the -enriched natural transformation in [ L[m,Y]^,] corresponding under the adjunction L[,f]_!⊣ L[,f]^* to the restriction along L[d^ℓ+1,X] of the -enriched natural transformation in [ L[ℓ+1,X]^,]
L[+1,f]_-,ℓ+1_ L[ℓ+1,X](-,ℓ+1)→ L[+1,f]^*_ L[m+1,Y](-,m+1).
Using that L[d^ℓ+1,X]^* L[+1,f]^*= L[,f]^* L[d^m+1,Y]^* and <ref>, it is isomorphic to the -enriched natural transformation in [ L[ℓ,X]^,]
L[d^ℓ+1,X]^* L[+1,f]_-,ℓ+1_LF[ℓ,X](𝕀_F[ℓ,X])→ L[,f]^*_LF[m,Y](𝕀_F[m,Y]).
This gives the desired result.
The following diagram in [ L[m,Y]^,] commutes.
[Diagram: the square with top map L[,f]_! γ_σ [,f], left leg the isomorphism L[,f]_!_LF[ℓ,X](𝕀_F[ℓ,X])≅_LF[m,Y]([,f]) followed by _L[m,Y]([,f]), right map the counit ϵ of L[,f]_!⊣ L[,f]^*, and bottom map γ_σ.]
This is obtained by transposing the commutative diagram from <ref> along the adjunction L[,f]_!⊣ L[,f]^* using <ref>.
The following diagram in [^,] commutes.
[Diagram: the square with top map φ_σ[,f], left map (ε_)_!_([,g]), bottom map φ_σ, and right map the identity of _(-,c).]
This is obtained by transposing the diagram from <ref> along the adjunctions (τ)_!⊣ (τ)^* using <ref> and the definitions of φ_σ and φ_σ[,f].
By <ref>, the -enriched natural transformations in [^,]
φ_σ (ε_)_!_(τ)→_(-,c),
with σ[m,,]→∫_^_(-,c) in , form a natural cone over _(-,c). Hence we get an induced -enriched natural transformation in [^,]
φ_c (ε_)_!_∫_^_(-,c)≅_[m,,]∫_^_(-,c) (ε_)_!_(τ)→_(-,c).
We first show that φ_c provides a retract of the map (ε_)_! _(𝕀_c).
The following composite in [^,] is the identity.
_(-,c)≅ (ε_)_! _(c) (ε_)_! _∫__(-,c)_(-,c)
First note that the above composite is simply given by the component of the colimit at 𝕀_c[0]→∫__(-,c) of φ_c, i.e., it is the -enriched natural transformation in [^,]
φ_𝕀_c (ε_)_! _(c)≅ (ε_)_! c^* _F[0](𝕀_F[0])→_(-,c)
from <ref> taking σ=𝕀_c[0]→∫__(-,c) and τ=c[0]→. By noticing that F_𝕀_c [1]→ is the -enriched functor that picks the identity morphism 𝕀_c in , we see that, by construction, the -enriched natural transformation φ_𝕀_c corresponds under the adjunctions c_!⊣ c^* and (ε_)_!⊣ (ε_)^* to the map in [[0],]≅
γ_𝕀_c_[0](𝕀_[0])≅[0]→ c^*(ε_)^* _(-,c)≅_(c,c),
which picks the identity 𝕀_c. Using the Yoneda Lemma and <ref> in the case where F=F_𝕀_c, we get that the component at c of the -enriched natural transformation
φ_𝕀_c_(-,c)≅ (ε_)_! _(c)→_(-,c)
takes 𝕀_c to 𝕀_c and so must be the identity.
We now show that the construction φ_c is natural in c∈. Let us fix a morphism g c→ d in the underlying category of the -enriched category . Let us further fix m≥ 0, a connected -space Y∈, and a map σ[m,Y]→∫_^_(-,c) in .
The following diagram in [ L[m,Y]^,] commutes.
[Diagram: the triangle with maps γ_σ_L[m,Y](𝕀_[m,Y])→(ε_)^*(τ)^*_(-,c), γ_∫_^ g_*σ into (ε_)^*(τ)^*_(-,d), and (ε_)^* (τ)^* g_* between the two targets.]
By <ref> and the definitions of γ_σ and γ_∫_^ g_*σ, this amounts to showing that the following triangle in [ L[m,Y]^,] commutes.
[Diagram: the triangle with maps L[d^m+1,Y]^*(Π_m,YF_σ)_-,m+1 and L[d^m+1,Y]^*(Π_m,YF_∫_^ g_*σ)_-,m+1 out of L[d^m+1,Y]^*_ L[m+1,Y](-,m+1), and L[d^m+1,Y]^*Π_m,Y^* F_σ^* g_* between their targets.]
As L[d^m+1,Y]Π_m,Y F_σ=(τ) ε_= L[d^m+1,Y]Π_m,Y F_∫_^ g_*σ, to check the commutativity of the above diagram, it is enough to verify that it commutes at the object m∈ L[m,Y]. For this, note that the map in
Y≅[0,Y][m,Y] ∫_^_(-,c)∫_^_(-,d)
is given by Yoneda and the connectedness of Y by a map in
Y_(e,c)_(e,d),
where f is the map corresponding to Y≅[0,Y][m,Y] ∫_^_(-,c). Hence the commutativity of the above diagram follows from the commutativity of the following diagram.
[Diagram: the triangle with maps f_ L[m+1,Y](m,m+1)=Y→_(e,c), g_*_(e,c)→_(e,d), and their composite g_*f.]
The following diagram in [^,] commutes.
[Diagram: the triangle with maps φ_σ (ε_)_!_(τ)→_(-,c), φ_∫_^ g_*σ into _(-,d), and g_* between the two targets.]
This is obtained by transposing the diagram from <ref> along the adjunctions (τ)_!⊣ (τ)^* and (ε_)_!⊣(ε_)^* using <ref> and the definitions of φ_σ and φ_∫_^ g_*σ.
The -enriched natural transformations in [^,]
φ_c (ε_)_!_∫_^_(-,c)→_(-,c)
are natural in objects c∈.
We need to show that, given a morphism g c→ d in , the following diagram commutes.
[Diagram: the square with top map φ_c, left map (ε_)_!_∫_^ g_*, bottom map φ_d, and right map g__(-,c)→_(-,d).]
As (ε_)_!_∫_^_(-,c)≅_[m,,]∫_^_(-,c) (ε_)_!_(τ), by the universal property of colimits, it is enough to check that this diagram commutes at each component σ[m,,]→∫^__(-,c) of the colimit. However, this is precisely <ref>.
|
http://arxiv.org/abs/2307.05046v1 | 20230711065207 | On the Finite Variable-Occurrence Fragment of the Calculus of Relations with Bounded Dot-Dagger Alternation | [
"Yoshiki Nakamura"
] | cs.LO | [
"cs.LO"
] |
We introduce the k-variable-occurrence fragment, which is the set of terms having at most k occurrences of variables.
We give a sufficient condition for the decidability of the equational theory of the k-variable-occurrence fragment using the finiteness of a monoid.
As a case study, we prove that for Tarski's calculus of relations with bounded dot-dagger alternation (an analogy of quantifier alternation in first-order logic), the equational theory of the k-variable-occurrence fragment is decidable for each k.
§ INTRODUCTION
Since the satisfiability problem of first-order logic is undecidable <cit.> in general,
(un-)decidable classes of first-order logic are widely studied <cit.>;
for example, the undecidability holds even for the Kahr–Moore–Wang (KMW) class[Recall the notation for prefix-vocabulary classes <cit.>.
E.g., [∀∃∀, (0, ω)] denotes the set of prenex sentences of first-order logic without equality, function symbols, or constants such that
the quantifier prefix of the sentence is ∀∃∀ and
the sentence has ω (countably infinitely many) binary relation symbols and no 1-ary or i-ary relation symbols for i ≥ 3.] [∀∃∀, (0, ω)] <cit.>,
but it is decidable for the Bernays–Schönfinkel–Ramsey (BSR) class [∃^*∀^*, all]_= <cit.>.
The calculus of relations (CoR) <cit.>, revived by Tarski, is an algebraic system on binary relations; its expressive power is equivalent to that of the three-variable fragment of first-order logic with equality <cit.>, w.r.t. binary relations.
The equational theory of CoR is undecidable <cit.>[In <cit.>, the undecidability of the equational theory is shown for more general classes of relation algebras.] in general, which follows from the undecidability of the KMW class,
but, for example, it is decidable for the (existential) positive fragment <cit.> and the existential fragment <cit.> of CoR, which follows from the decidability of the BSR class.
As for undecidability, it holds even for the 1-variable fragment <cit.> and even for the 1-variable fragment only with union, composition, and complement <cit.>, where the k-variable fragment denotes the set of terms having at most k variables.
Then, from the above undecidability result for the 1-variable fragment of CoR <cit.>,
the following natural question arises: is the equational theory decidable for the k-variable-occurrence fragment of CoR?
Here, the k-variable-occurrence fragment denotes the set of terms having at most k occurrences of variables.
For example, when a, b are variables and I is a constant,
the term (a · b) · (I· (a · b)) has 4 occurrences of variables
and 1 occurrence of constants;
thus, this term is in the 4-variable-occurrence fragment (cf. the term is in the 2-variable fragment since the variables a, b occur).
While it may seem that this restriction immediately implies decidability, the equational theory of the k-variable-occurrence fragment over some (single) algebra is undecidable in general even when k = 0 (<Ref>).
Our contribution is to prove that the equational theory of the k-variable-occurrence fragment is decidable for CoR with bounded dot-dagger alternation,
where the dot-dagger alternation <cit.> is an analogy of the quantifier alternation in first-order logic.
Note that the equational theory of the k-variable fragment is undecidable in general for CoR <cit.> (even with bounded dot-dagger alternation (<Ref>)).
Our strategy is to prove that the number of terms in the k-variable-occurrence fragment is finite up to the semantic equivalence relation.
To this end, (1) we decompose terms as much as possible; and then (2) we show that each decomposed part is finite up to the semantic equivalence relation by collecting valid equations.
By the preprocessing of step (1), one can see that for step (2) it essentially suffices to prove the finiteness of a certain monoid (using the method of <Ref>).
Its finiteness is not clear, as it is undecidable whether a (finitely presented) monoid is finite in general;
but, fortunately, we can prove the finiteness (<Ref>) by finding valid equations (<Ref>).
The rest of this paper is structured as follows.
<Ref> introduces the k-variable-occurrence fragment for general algebras and gives a framework to prove the decidability from the finiteness of a monoid.
<Ref> recalls the syntax and semantics of CoR and the dot-dagger alternation hierarchy.
In <Ref>, based on <Ref>, we prove that the equational theory of CoR with bounded dot-dagger alternation is decidable.
<Ref> concludes this paper.
We write ℕ for the set of all non-negative integers.
For l, r ∈ ℕ, we write [l, r] for the set {i ∈ ℕ | l ≤ i ≤ r}.
For a set A, we write |A| for the cardinality of A and ℘(A) for the power set of A.
For a set A and an equivalence relation ∼ on A, we write A/∼ for the quotient set of A by ∼ and [a]_∼ for the equivalence class of an element a ∈ A on ∼.
§ ON THE K-VARIABLE-OCCURRENCE FRAGMENT
We fix * as a non-empty finite set of variables.
We fix * as a finite algebraic signature; is a map from a finite domain (of functions) to .
For each f, n∈, we write f^(n); it is the function symbol f with arity n.
We also let ^(n)f^(m)∈| m = n.
The set * of -terms over is defined as the minimal set closed under the following two rules:
a ∈⟹ a ∈*; (f^(n)∈ and _1, …, _n ∈) ⟹ f(_1, …, _n) ∈*.
An -algebra A is a tuple *A, f^A_f^(n)∈, where A is a non-empty finite set and f^AA^n →A is an n-ary map for each f^(n)∈.
A valuation →A on an -algebra A is a map;
we write →A for the unique homomorphism extending .
For a class of -algebras,
the equivalence relation ∼ on terms is defined by: t_1 ∼ t_2 iff, for every algebra A in the class and every valuation on A, the unique homomorphic extension of the valuation maps t_1 and t_2 to the same element.
For a set of terms, the equational theory of that set over the class is the set of pairs ⟨t_1, t_2⟩ of its terms such that t_1 ∼ t_2.
For an -term , let *() be the number of occurrences of variables in :
*()
1 (∈)
∑_i = 1^n(_i) ( = f(_1, …, _n)).
For each set of -terms,
the k-variable-occurrence fragment *k is the set ∈|() ≤ k.
(Similarly, let *k∈|() = k.)
Clearly, = ⋃_k ∈k.
The equational theory of the k-variable-occurrence fragment is undecidable in general, even when k = 0.
It follows from the reduction from the word problem for monoids.
Let M = M, ∘^M, I^M be a (finitely) presented monoid with finite generators C = c_1, …, c_l such that the word problem for M is undecidable (by Markov <cit.> and Post <cit.>).
We define c_1^(1), …, c_l^(1)∪I^(0) and the -algebra A A, f^A_f^(n)∈ by:
A = M;
c_i^A(x) = c_i ∘^M x for i ∈1, l;
I^A = I^M.
By definition, any two words a_1 … a_n and b_1 … b_m over C are equivalent in M iff
a_1(a_2(… a_n(I) …)) and b_1(b_2(… b_m(I) …)) are equivalent over the algebra A.
In the rest of this section, we fix as a class of -algebras.
§.§ On the finiteness of k-variable-occurrence fragment: from 1 to k
How can we show the decidability of the equational theory of the k-variable-occurrence fragment?
We consider proving it from the finiteness up to the semantic equivalence relation:
Let ⊆ be a subterm-closed[A set ⊆ is subterm-closed if for every ∈, if [2] is a subterm of , then [2] ∈.] set.
If the set is finite,
the equational theory of over is decidable.
Moreover, it is decidable in DLOGTIME-uniform NC^1 if the input is given as a well-bracketed string.
Because is fixed and is finite, for each ∈, one can calculate the index of the equivalence class of on by using the (finite and possibly partial) Cayley table of each operator; thus, the equational theory is decidable.
Moreover, according to this algorithm, if the input is given as a well-bracketed string, one can also construct a parenthesis context-free grammar such that for all [1], [2] ∈,
the well-bracketed string encoding the equation [1] = [2] is in the language iff [1] [2].
Hence, the complexity is shown because every language recognized by a parenthesis context-free grammar is in ALOGTIME <cit.> (ALOGTIME is equivalent to DLOGTIME-uniform NC^1 <cit.>).
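To make the procedure concrete, the following is a minimal Python sketch, assuming the finite quotient is presented explicitly by a map from variables to equivalence-class indices and by a (possibly partial) Cayley table for each operator; the encoding and all names are illustrative, not the paper's.

def class_of(term, var_class, tables):
    # Terms are ('var', x) or ('app', f, [subterms]); the class index of a term is
    # computed bottom-up by table lookup.
    if term[0] == 'var':
        return var_class[term[1]]
    _, f, args = term
    return tables[f][tuple(class_of(t, var_class, tables) for t in args)]

def equivalent(t1, t2, var_class, tables):
    # Equal class index iff the two terms are semantically equivalent.
    return class_of(t1, var_class, tables) == class_of(t2, var_class, tables)

With the quotient fixed in advance, each query is a single bottom-up pass over the two input terms.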
For the k-variable-occurrence fragment, the finiteness of 1 (with <Ref>) can imply the decidability of the equational theory of k (<Ref>) by the following decomposition lemma.
Here, we write *[2]a for the term in which each a has been replaced with [2].
Let ⊆ be a subterm-closed set.
Let k ≥ 2, a ∈.
Then, for all ∈k, there are _0 ∈1, f^(n)∈, _1, …, _n ∈k-1
such that = _0f(_1, …, _n)a.
By induction on .
Since k ≥ 2,
there are g^(m)∈, [2]_1, …, [2]_m ∈ s.t. = g([2]_1, …, [2]_m)
and ∑_i = 1^m([2]_i) = k.
Case ([2]_i) ≤ k - 1 for all i:
By letting _0 a, we have = _0 g([2]_1, …, [2]_m)a.
Otherwise:
Let i be s.t. ([2]_i) = k.
Since [2]_i ∈k,
let [3] ∈1, f^(n)∈, _1, …, _n ∈k-1
be the ones obtained by IH w.r.t. [2]_i, so that [2]_i = [3]f(_1, …, _n)a.
By letting _0 g([2]_1, …, [2]_i-1, [3], [2]_i+1, …, [2]_m),
we have:
_0f(_1, …, _n)a = g([2]_1, …, [2]_i-1, [3], [2]_i+1, …, [2]_m)f(_1, …, _n)a
= g([2]_1, …, [2]_i-1, [3]f(_1, …, _n)a, [2]_i+1, …, [2]_m) ([2]_j) = 0 if j ≠ i
= g([2]_1, …, [2]_i-1, [2]_i, [2]_i+1, …, [2]_m) = . [2]_i = [3]f(_1, …, _n)a
Hence, this completes the proof.
[of <Ref>]
If = ∘^(2), I^(0) and a, b, c ∈,
the term [1] = I∘ ((a ∘ (b ∘ c)) ∘I) ∈3 has the following decomposition:
[1] = (I∘ (a ∘I))a ∘ (b ∘ c)a.
Then I∘ (a ∘I) ∈1 and a, (b ∘ c) ∈2.
The following is an illustration of the decomposition, where the number written in each subterm [2] denotes ([2]):
[Figure: the term I∘((a∘(b∘c))∘I) drawn as a tree with each subterm annotated by its number of variable occurrences, split into the context I∘(a∘I) (whose single variable occurrence is the hole a) and the subterm a∘(b∘c) plugged in for a.]
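The decomposition in the proof is easy to mechanize. The following Python sketch does so for a binary signature, with terms encoded as nested tuples ('var', x), ('const', c), or (op, left, right) and the hole written as a distinguished variable a; the encoding and names are illustrative assumptions.

def occ(t):
    # Number of variable occurrences in a term.
    if t[0] == 'var':
        return 1
    if t[0] == 'const':
        return 0
    return occ(t[1]) + occ(t[2])

def plug(context, u, a='a'):
    # Substitute u for the hole variable a.
    if context == ('var', a):
        return u
    if context[0] in ('var', 'const'):
        return context
    op, l, r = context
    return (op, plug(l, u, a), plug(r, u, a))

def decompose(t, a='a'):
    # For occ(t) >= 2, return (context, plugged) with t == plug(context, plugged),
    # where the context has at most one variable occurrence (the hole a) and every
    # immediate subterm of plugged has strictly fewer occurrences than t.
    assert occ(t) >= 2
    op, l, r = t
    if occ(l) < occ(t) and occ(r) < occ(t):
        return (('var', a), t)        # both subterms already lost an occurrence
    heavy, light, left_heavy = (l, r, True) if occ(l) == occ(t) else (r, l, False)
    ctx, plugged = decompose(heavy, a)
    context = (op, ctx, light) if left_heavy else (op, light, ctx)
    return (context, plugged)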
Using this decomposition iteratively, we have the following:
Let ⊆ be a subterm-closed set.
Assume that 1 is finite.
Then, for each k ∈, the set k is finite.
It suffices to prove:
for all k ≥ 2, k is finite.
By induction on k.
We have:
(k)
≤(_0f(_1, …, _n)a|
f^(n)∈, _1, …, _n ∈k-1, _0 ∈1) <Ref>
≤∑_f^(n)∈(_0f(_1, …, _n)a|_1, …, _n ∈k-1, _0 ∈1).
Then the set _0f(_1, …, _n)a|_1, …, _n ∈k-1, _0 ∈1 is finite
because k-1 is finite (by IH) and satisfies the congruence law.
Thus, the last term above is finite since is finite.
Hence k is finite.
§.§ The monoid of the 1-variable-occurrence fragment
Thanks to <Ref>, we can focus on the 1-variable-occurrence fragment.
For the 1-variable-occurrence fragment, it suffices to consider a monoid.
For a set A of characters, we write A^* for the set of all words (i.e., finite sequences) over the language A.
We write [1] [2] for the concatenation of words [1] and [2]
and write * for the empty word.
We write *[1] for the length of a word [1].
Let * be the (possibly infinite) set of characters defined by:
*⋃_f^(n)∈, i ∈1,nf(_1, …, _i-1, , _i+1, …, _n) |∀ j ∈1, n∖i, _j ∈0.
( denotes “blank”.)
For a word ∈^* and a term ∈,
let [1]* be the term defied by:
[1]*
f(_1, …, _i-1, [2], _i+1, …, _n) ([1] = f(_1, …, _i-1, , _i+1, …, _n) [2])
([1] = )
.
[of <Ref>]
If = ∘^(2), I^(0), then = (I∘), ((I∘I) ∘), … (∘I), ….
For example, if [1] = ((I∘I) ∘) (I∘) (∘I), we have:
[1]a = (((I∘I) ∘) (I∘) (∘I)a) =
(I∘I) ∘ ((I∘) (∘I)a)
=
(I∘I) ∘ (I∘ ((∘I)a)) = (I∘I) ∘ (I∘ (a ∘I)).
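For the running example, the context characters and the application of a word of contexts can be sketched as follows; the term encoding and the names LEFT and RIGHT are illustrative assumptions.

LEFT, RIGHT = 'I*_', '_*I'        # the characters (I∘) and (∘I) of the example alphabet

def apply_char(ch, t):
    if ch == LEFT:
        return ('comp', ('const', 'I'), t)      # I ∘ t
    if ch == RIGHT:
        return ('comp', t, ('const', 'I'))      # t ∘ I
    raise ValueError(ch)

def apply_word(word, t):
    # The first character of the word becomes the outermost context,
    # so the characters are applied from the last one inwards.
    for ch in reversed(word):
        t = apply_char(ch, t)
    return t

# apply_word([LEFT, RIGHT], ('var', 'a')) builds the term I ∘ (a ∘ I).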
For all ∈1, there are ∈^* and [2] ∈^(0)∪ s.t. = [1][2].
By easy induction on .
Let * be the equivalence relation on ^* defined by:
[1] *[2] [1]a[2]a
If ^* is finite, then 1 is finite.
By <Ref> (and that the set ^(0)∪ is finite).
Moreover, if 0 is finite, it suffices to consider a finite subset of , as follows:
Assume that 0 is finite.
Let _0 = _1, …, _n⊆0 be such that 0 = _1_, …, _n_.
Let *⊆ be the finite set defined by:
*⋃_f^(n)∈, i ∈1,nf(_1, …, _i-1, , _i+1, …, _n) |∀ j ∈1, n∖i, _j ∈_0.
Then ^* is finite ⟹ ^* is finite.
For every a ∈, there is b ∈ s.t. a b.
By the congruence law of , for every ∈^*, there is some [2] ∈^* s.t. [2].
Since ^* is finite, this completes the proof.
[of <Ref>]
If = ∘^(2), I^(0) (so, = (I∘), ((I∘I) ∘), … (∘I), …) and is the class of all monoids,
we have: 0 = I_.
Thus the set = (I∘), (∘I) is sufficient for considering the finiteness of ^*.
Thus, to prove the finiteness of k,
it suffices to prove that both 0 and ^* are finite:
If 0 and ^* are finite,
then for each k ∈,
the set k is finite (hence, the equational theory of k over is decidable).
We have:
0 and ^* are finite
⟹
^* is finite (by <Ref>)
⟹
1 is finite (by <Ref>)
⟹
k is finite (by <Ref>).
Hence by <Ref>.
§.§ Finiteness from finding equations
For the finiteness of ^*,
we consider finding equations [1]_i, [2]_i| i ∈ I and then applying the following:
Let ⊆ be a finite set.
Let (<) ⊆ (^*)^2 be a well-founded relation s.t.
* < satisfies the congruence law (i.e., [2] < [2]' ⟹[1][2][1]' < [1][2]'[1]');
* < has no infinite antichains.[This assumption is used only in the direction of <ref>⟹<ref>.]
Then, the following are equivalent:
* There is a finite set [1]_i, [2]_i| i ∈ I⊆ (<) ∩ () such that
the language ^* (⋃_i ∈ I[2]_i) ^* over the alphabet is cofinite.[A language L over an alphabet A is cofinite if its complemented language A^* ∖ L is finite.]
* ^* is finite.
<ref>⟹<ref>:
By induction on the well-founded relation <, we prove:
For every [1] ∈^*, there is some [2] ∈^* ∖ (^* (⋃_i ∈ I[2]_i) ^*) such that
[1] [2].
If ∈^* ∖ (^* (⋃_i ∈ I[2]_i) ^*), by letting [2] =.
Otherwise, since ∈^* (⋃_i ∈ I[2]_i) ^*,
there are i ∈ I and ', ”∈^* such that = ' [2]_i ”.
By ' [1]_i ” < ' [2]_i ” (the congruence law of <) and IH,
there is [2] ∈^* ∖ (^* (⋃_i ∈ I[2]_i) ^*) s.t. ' [1]_i ”[2].
We also have ' [2]_i ”' [1]_i ” (by [2]_i [1]_i with the congruence law of ).
Thus ' [2]_i ”[2] (by transitivity of ).
<ref>⟹<ref>:
Let W be the union, over all equivalence classes X of words, of the set of <-minimal elements of X.
Let (W) be the subword closure of W (i.e., the minimal set W' ⊇ W s.t. w'ww”∈ W' ⟹ w ∈ W').
Let V ((W) ) ∖(W).
Then, (^* V ^*) = ^* ∖(W) holds, as follows.
For ⊆:
Let w ∈^*, v ∈ V, w' ∈^*.
If we assume w v w' ∈(W), then v ∈(W), but this contradicts v ∈ V; thus, w v w' ∉(W).
For ⊇:
By induction on the length of w ∈^* ∖(W).
If w ∈ V, clear.
Otherwise (i.e., w ∉V),
let w = w' a (note that w is not the empty word, because the empty word is always in (W), as W is non-empty).
Then, w' ∈^* ∖(W)
(if not, since w' ∈(W) and w ∉(W), w ∈ V, thus reaching a contradiction).
Thus by IH, w' ∈^* V ^*, and thus w ∈^* V ^*.
Hence, we have ^* ∖ (^* V ^*) = (W).
Now, the set W is finite because ^* is finite and for each X ∈^*, the number of minimal elements is finite (because < has no infinite antichains);
thus (W) is finite; thus V is finite.
Let V = v_1, …, v_n.
For every i ∈1, n, there is w_i ∈ W ∩v_i_ s.t. w_i < v_i, because
v_i is not minimal w.r.t. (<) ∩v_i_^2.
Thus, w_i, v_i| i ∈1, n is the desired set.
The shortlex order (aka length-lexicographical order) is an example of < in <Ref> (because it is a well-ordering <cit.>
and its congruence law is also easy to verify).
While it is undecidable whether a given (finitely presented) monoid ^* is finite <cit.> (see also <cit.>) in general (cf. <ref> of <Ref>),
it is decidable (in linear time) whether the language of a given regular expression of the form ^* (⋃_i ∈ I[2]_i) ^* is cofinite (cf. <ref> of <Ref>):
The following is decidable in linear time (more precisely, 𝒪(n) time on a RAM machine for n the number of symbols in the given regular expression):
Given a regular expression of the form ^* (⋃_i ∈ I[2]_i) ^* over the alphabet , is its language cofinite?
By the Aho-Corasick algorithm,
we can construct a deterministic finite automaton (DFA) from a given regular expression of the form ^* (⋃_i ∈ I[2]_i) ^* in linear time <cit.>.
By taking the complemented language of the DFA,
it suffices to show that the following problem is in 𝒪(n) time: given a DFA with n states, is its language finite?
Then we can give the following algorithm:
From the graph induced by the DFA, remove all the states not reachable from the starting state and remove all the states not reachable to any accepting states by using the depth-first search;
check whether there exists some cycle in the graph by the depth-first search.
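The following Python sketch implements this test; for simplicity it builds the pattern-avoidance automaton naively rather than with the linear-time Aho–Corasick construction, which is enough to illustrate the idea (all names are illustrative).

def is_cofinite(alphabet, patterns):
    # The language Sigma*(p_1 | ... | p_k)Sigma* is cofinite iff the set of words
    # avoiding every p_i as a factor is finite, i.e. iff the avoidance automaton
    # below has no cycle reachable from its start state.
    prefixes = {p[:i] for p in patterns for i in range(len(p) + 1)}

    def hits(word):                    # some pattern is a suffix of `word`
        return any(word.endswith(p) for p in patterns)

    def step(state, c):                # longest suffix of state+c that is a pattern prefix
        s = state + c
        while s not in prefixes:
            s = s[1:]
        return s

    live = {q for q in prefixes if not hits(q)}
    visited, on_stack = set(), set()

    def has_cycle(q):
        visited.add(q); on_stack.add(q)
        for c in alphabet:
            r = step(q, c)
            if r not in live:
                continue
            if r in on_stack or (r not in visited and has_cycle(r)):
                return True
        on_stack.discard(q)
        return False

    return '' not in live or not has_cycle('')

# e.g. is_cofinite('ab', ['aa', 'bb', 'ab', 'ba']) is True: only '', 'a', 'b' avoid all factors.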
Thus, thanks to <ref>⟹<ref> of <Ref>, we can focus on finding a finite set of equations.
While it is undecidable in general whether there exists such a set,
we give a possibly non-terminating pseudo-code in <Ref>,
which can help to find equations (e.g., <Ref>).[Usually, to calculate is a bottleneck. For relaxing this problem, for example, hashing words by using some algebras in is practically useful for reducing the number of calls (since if the hash of two words [1], [2] are different, then we immediately have that [1], [2] are not equivalent w.r.t. ).]
[Algorithm 1: pseudo-code of a possibly non-terminating procedure that searches for a finite set of equations as in <Ref>.]
When is given as a total recursive function, <Ref> is a semi-algorithm (that is, if ^* is finite, the algorithm is terminated and returns True; otherwise, not terminated).
This is because <ref>⟹<ref> of <Ref> also holds.
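One way to realize such a search in Python is sketched below, assuming an oracle equiv(w1, w2) for the equivalence of two context words (e.g., obtained by evaluating the corresponding terms over the algebras at hand, possibly after hashing as in the footnote) and the is_cofinite test sketched above; this is an illustrative sketch rather than the paper's Algorithm 1.

from itertools import count, product

def find_equations(alphabet, equiv):
    # Enumerate words in shortlex order; whenever a word is equivalent to an earlier,
    # shortlex-smaller representative, record the pair as an equation.  Return the
    # equations once their right-hand sides make Sigma*(...)Sigma* cofinite.
    representatives, equations = [''], []
    for n in count(1):
        for letters in product(alphabet, repeat=n):
            w = ''.join(letters)
            if any(rhs in w for _, rhs in equations):
                continue                # already reducible by a recorded equation
            smaller = next((v for v in representatives if equiv(v, w)), None)
            if smaller is None:
                representatives.append(w)
            else:
                equations.append((smaller, w))
                if is_cofinite(alphabet, [rhs for _, rhs in equations]):
                    return equations

Each recorded pair consists of a shortlex-smaller word and an equivalent larger one, so once the recorded right-hand sides make the language cofinite, the finiteness of the quotient follows by the lemma above; if the quotient is infinite, the search may run forever.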
§ THE CALCULUS OF RELATIONS WITH BOUNDED DOT-DAGGER ALTERNATION
In the remaining part of this paper, as a case study of the k-variable-occurrence fragment presented in <Ref>, we consider the calculus of relations with bounded dot-dagger alternation.
In this section, we recall the definitions of the calculus of relations (CoR) and the dot-dagger alternation hierarchy.
§.§ CoR: syntax and semantics
We fix as a non-empty finite set of variables.
Consider the finite algebraic signature *^(0), ⊤^(0), -^(1), ∪^(2), ∩^(2), I^(0), D^(0), ·^(2), †^(2)∪π^(1)| (we consider algebras of binary relations and each π is used for a projection of binary relations).
The set * of CoR terms is defined as follows:
*∋[1], [2], [3] a ||⊤|[2] ∪[3] |[2] ∩[3] |[2]^-
|I|D|[2] ·[3] |[2] †[3] |[2]^π*(a ∈, π∈1, 2^1, 2).
Additionally, for a term , we use ^⌣ to denote the term ^⌣^1 ↦ 2, 2 ↦ 1.
Here, we use the infix notation for binary operators, the superscript notation for unary operators, and parenthesis in ambiguous situations, as usual.
For binary relations R, S on a set W,
the identity relation I_W on W, the difference relation D_W on W,
the (relational) composition (relative product) R · S, the dagger (relative sum) R † S, and the projection R^π are defined by:
I_W x,y∈ W^2 | x = yidentity
D_W x,y∈ W^2 | x ≠ ydifference
R · S x, y∈ W^2 |∃ z ∈ W, x, z∈ R z, y∈ Srelative product
R † S x, y∈ W^2|∀ z ∈ W, x, z∈ R z, y∈ Srelative sum
R^π x_1, x_2∈ W^2 |x_π(1), x_π(2)∈ R*(projection).
A structure is a tuple ||, a^_a ∈, where
|| is a non-empty set and a^⊆ ||^2 is a binary relation for each a ∈.
For a structure ,
the binary relation map *_→(||^2) is the unique homomorphism extending a_ = a^ w.r.t. the set-theoretic operators and the aforementioned binary relation operators; i.e.,
*_ is defined as follows:
a_ a^(a ∈) _∅⊤_ ||^2 I_I_|| D_D_||
[1] ∪[2]_ [1]_∪[2]_ [1] ∩[2]_ [1]_∩[2]_ [1]^-_ ||^2 ∖_
[1] ·[2]_ [1]_·[2]_ [1] †[2]_ [1]_†[2]_ [1]^π_ [1]_^π.
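The semantics above is straightforward to implement. The following Python sketch evaluates CoR terms over a finite structure, with binary relations represented as sets of pairs; the term encoding and operator names are illustrative assumptions, not the paper's.

from itertools import product

def ev(term, W, val):
    # W is the carrier; val maps each variable to a set of pairs over W.
    op = term[0]
    if op == 'var':    return val[term[1]]
    if op == 'empty':  return set()
    if op == 'top':    return {(x, y) for x, y in product(W, W)}
    if op == 'I':      return {(x, x) for x in W}
    if op == 'D':      return {(x, y) for x, y in product(W, W) if x != y}
    if op == 'not':    return {(x, y) for x, y in product(W, W)} - ev(term[1], W, val)
    if op == 'or':     return ev(term[1], W, val) | ev(term[2], W, val)
    if op == 'and':    return ev(term[1], W, val) & ev(term[2], W, val)
    if op == 'comp':   # relative product: some intermediate z
        R, S = ev(term[1], W, val), ev(term[2], W, val)
        return {(x, y) for x, y in product(W, W) if any((x, z) in R and (z, y) in S for z in W)}
    if op == 'dagger': # relative sum: every intermediate z
        R, S = ev(term[1], W, val), ev(term[2], W, val)
        return {(x, y) for x, y in product(W, W) if all((x, z) in R or (z, y) in S for z in W)}
    if op == 'proj':   # term = ('proj', t, (p1, p2)) with p1, p2 in {1, 2}
        R, (p1, p2) = ev(term[1], W, val), term[2]
        return {(x1, x2) for x1, x2 in product(W, W) if ((x1, x2)[p1 - 1], (x1, x2)[p2 - 1]) in R}
    raise ValueError(op)

For instance, the converse t^⌣ is the projection with (p1, p2) = (2, 1).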
It is well-known that w.r.t. binary relations, CoR has the same expressive power as the three-variable fragment of first-order logic with equality:[Namely, for every formula with two distinct free variables z_1, z_2 in the three-variable fragment of first-order logic with equality, there is ∈ such that for all , λ z_1 z_2. _ = _.
Conversely, for every ∈, there is such that for all , _ = λ z_1 z_2. _.
Here, λ z_1 z_2. _x, y∈ ||^2 |.]
W.r.t. binary relations, the expressive power of is equivalent to that of the three-variable fragment of first-order logic with equality.
Let * be the class of all structures.
Let *_≥ m (resp. *_≤ m) be the class of structures of || ≥ m (resp. || ≤ m).
For ⊆, the equivalence relation * on is defined by:
[1] *[2] [1]_ = [2]_ for every ∈.
For ⊆, the [equational theory]equational theory of over is the set [1], [2]∈^2 |[1] [2].
We mainly consider : the equational theory over .
The following are some instances w.r.t. :
a · (b · c) (a · b) · c a ∩ (b ∪ c) (a ∩ b) ∪ (a ∩ c) (a^⌣)^⌣ a
a · a^⌣ a^⌣· a a · (b † c) (a · b) † c a^1 ↦ 1, 2 ↦ 1 (a ∩I) ·⊤.
The following propositions hold because for each m ∈, the number of structures of || ≤ m is finite up to isomorphism and each structure is finite.
For each m ∈, _≤ m is finite.
Let ⊆ be a subterm-closed set and m ≥ 1.
Then, is finite _≥ m is finite.
Additionally, the equational theory of over is decidable the equational theory of over _≥ m is decidable.
Because [1] [2] [1] _≤ m - 1[2] [1] _≥ m[2].
By <Ref> with <Ref>, _≤ m - 1 is finite and
the equational theory of over _≤ m - 1 is decidable.
§.§ The dot-dagger alternation hierarchy
The sets, *_n, *_n_n ∈, are the minimal sets satisfying the following:
* _0 = _0 = ∈|;
* For n ≥ 0, _n ∪_n ⊆_n+1∩_n+1;
* For n ≥ 1, if [2], [3] ∈_n, then [2] ∪[3], [2] ∩[3], [2] ·[3], [2]^π∈_n and [2] †[3] ∈_n+1;
* For n ≥ 1, if [2], [3] ∈_n, then [2] ∪[3], [2] ∩[3], [2] †[3], [2]^π∈_n and [2] ·[3] ∈_n+1.
For example, a · b ∈_1 and a · (b † c) ∈_2 (the term a · b means that for some z, a(x, z) and b(z, y).
The term a · (b † c) means that for some z, for every w, a(x, z) and (b(z, w) or c(w, y)).
Here, x and y indicate the source and the target, respectively, and each a'(x', y') denotes that there is an a'-labelled edge from x' to y').
The dot-dagger alternation hierarchy is an analogy of the quantifier alternation hierarchy in first-order logic (by viewing · as ∃ and † as ∀).
This provides a fine-grained analogy of <Ref> w.r.t. the number of quantifier alternations, as follows:
W.r.t. binary relations, the expressive power of _n (resp. _n) is equivalent to that of the level Σ_n (resp. Π_n) in the quantifier alternation hierarchy of the three-variable fragment of first-order logic with equality.
Because there are recursive translations for <Ref> <cit.>, the following (un-)decidability results follow from those in first-order logic.
The equational theory of _n (resp. _n) is decidable if n ≤ 1
and is undecidable if n ≥ 2.
When is a countably infinite set, they follow from the BSR class [∃^*∀^*, all]_= <cit.> and the reduction class [∀∃∀^3, (ω, 1)] <cit.>.
We can strengthen this result even if = 1 by using a variant of the translation in <cit.> for encoding countably infinitely many variables by one variable.
(See <cit.> <Ref> for more details.)
§ ON THE K-VARIABLE-OCCURRENCE FRAGMENT OF _N
We now consider (_n)k: the k-variable-occurrence fragment of the level _n in the dot-dagger alternation hierarchy.
Clearly, _n = ⋃_k ∈(_n)k.
While the equational theory of _n is undecidable in general (<Ref>), we show that the equational theory of (_n)k is decidable (<Ref>).
Our goal in this section is to show the following:
For each n, k ∈, (_n)k is finite.
Combining with <Ref> yields the following decidability and complexity upper bound.
The complexity lower bound is because the equational theory can encode the boolean sentence value problem <cit.> (even if n = k = 0), as
a given boolean sentence is true iff ∼_⊤, where is the term obtained from by replacing , , T, F with ∩, ∪, ⊤,, respectively.
For n, k ∈, the equational theory of (_n)k over is decidable.
Moreover, it is complete for DLOGTIME-uniform NC^1 under DLOGTIME reductions if the input is given as a well-bracketed string.
To prove <Ref>, we consider the finiteness of 0 in <Ref>
and the finiteness of a monoid for (_n)k in <Ref>, respectively (cf. <Ref>).
§.§ On the finiteness of 0
For the finiteness of 0, by <Ref>, it suffices to show the following:
0_≥ 3 = []__≥ 3, [⊤]__≥ 3, [I]__≥ 3, [D]__≥ 3.
W.r.t. _≥ 3,
we prove that the four elements are closed under each operator.
For the operators ∩, -, ·, ⌣, this is shown by the following Cayley tables:
∩ ⊤ I D
⊤ ⊤ I D
I I I
D D D
-
⊤
⊤
I D
D I
· ⊤ I D
⊤ ⊤ ⊤ ⊤
I ⊤ I D
D ⊤ D ⊤
⌣
⊤ ⊤
I I
D D
.
Note that D·D_≥ 3⊤ holds thanks to “≥ 3”.
When (||) ≥ 3, we have: x, y∈D·D_
iff (∃ z ∈ ||, z ≠ x z ≠ y) iff || ∖x, y≠∅ iff True (cf. <Ref>).
(Similarly for D·⊤_≥ 2⊤.)
For the other operators (∪, †, π), they can be expressed by using ∩, -, ·, ⌣ as follows:
[1] ∪[2] ([1]^-∩[2]^-)^-,
[1] †[2] ([1]^-·[2]^-)^-,
^1 ↦ 1, 2 ↦ 2,
^1 ↦ 1, 2 ↦ 1 (∩I) ·⊤,
^1 ↦ 2, 2 ↦ 2⊤· (∩I), and
^1 ↦ 2, 2 ↦ 1 = ^⌣.
Hence, this completes the proof.
D·D⊤, whereas D·D_≥ 3⊤.
For example when || = 1, since D_ = _,
we have D·D_ = ∅≠ || = ⊤_.
(D·D is not equivalent to any of the four constants w.r.t. ; thus, there are more than four constants up to equivalence w.r.t. .)
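The tables above can also be double-checked mechanically. The following sketch reuses the evaluator ev sketched in the previous section and brute-forces the composition table over a three-element carrier; since the four constants contain no variables, only the size of the carrier matters.

W = [0, 1, 2]
consts = {'empty': ('empty',), 'top': ('top',), 'I': ('I',), 'D': ('D',)}

def name_of(rel):
    # The constant the relation coincides with, per the closure argument above.
    return next(n for n, t in consts.items() if ev(t, W, {}) == rel)

for x, tx in consts.items():
    for y, ty in consts.items():
        print(x, '*', y, '=', name_of(ev(('comp', tx, ty), W, {})))
# In particular this prints D * D = top, as claimed for carriers of size at least 3.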
§.§ Monoid for (_n)k
Next, we decompose terms, and then we reduce the finiteness of (_n)k to that of a monoid (cf. <Ref>).
For each n, k ∈,
if (_n)1 is finite, (_n)k is finite.
By specializing with (_n)k and 𝒞 with , in <Ref> and <Ref>.
For each n, k ∈,
(_n)k is finite iff (_n)k is finite.
⟸:
For every term in (_n)k,
there is some [2] in (_n)k such that [2]^-.
Such [2] can be obtained from the term ^- by taking the complement normal form using the following equations (see also <Ref>):
⊤^- ∼_ ^- ∼_⊤ I^- ∼_D D^- ∼_I
([2] ∪[3])^- ∼_[2]^-∩[3]^- ([2] ∩[3])^- ∼_[2]^-∪[3]^- ([2]^-)^- ∼_[2] ([2]^π)^- ∼_ ([2]^-)^π.
⟹:
As with ⟸.
Let a ∈.
For all n ≥ 2, ∈(_n)1,
there are _0 ∈(_1)1 and _1 ∈(_n-1)1 such that _≥ 3_0_1a.
By induction on the pair of n and .
We distinguish the following cases.
Case ∈(_n-1)1:
Clear, by IH (∵ _n-2⊆_n-1).
Case ∈(_n-1)1:
By letting _0 = a and _1 =.
Case = [2] ∪[3]:
By () = 1, ([2]) = 0 or ([3]) = 0 holds.
Sub-case ([2]) = 0:
By <Ref>, let [2]' ∈, ⊤, I, D be s.t. [2] _≥ 3[2]'.
By IH w.r.t. [3], let [3]_0 ∈(_1)1, [3]_1 ∈(_n-1)1 be s.t. [3] _≥ 3[3]_0[3]_1a.
By letting _0 = [2]' ∪[3]_0 and _1 = [3]_1, we have _≥ 3[1]_0 [1]_1a.
Sub-case ([3]) = 0:
As with Sub-case ([2]) = 0.
Case = [2] ∩[3], [2] ·[3], [2]^π:
As with Case = [2] ∪[3].
The following is an illustrative example of the decomposition of <Ref>:
[Figure: (D†D) · (((D·D) † (((D†D)·a) † (D·D))) · (D·D))  _≥ 3  D · ((⊤ † ((D·a) † ⊤)) · ⊤)  =  (D · (a · ⊤))[(⊤ † ((D·a) † ⊤)) / a], drawn as trees with each subterm annotated by its level in the dot-dagger hierarchy (the whole term at level 3, the context D · (a · ⊤) at level 1, and the plugged subterm ⊤ † ((D·a) † ⊤) at level 2).]
For each n ∈, if (_1)1 is finite, (_n)1 is finite.
By induction on n.
Case n ≤ 1:
By the assumption (note that (_0)1⊆(_1)1).
Case n ≥ 2:
By the assumption, (_1)1 is finite.
By IH with <Ref>, (_n-1)1 is finite.
Combining them with <Ref> (and <Ref> for changing and _≥ 3 mutually) yields that (_n)1 is finite.
For ⊆, let *⊆ be the set of all terms over the signature .
Then we have:
If ∩, ·, I, D1 is finite,
then (_1)1 is finite.
Note that [1], [2] ∈_1 [3] |[1] ∪[2] |[1] ∩[2] |[1] ·[2] |[1]^π (where [3] ∈_0, π∈1, 2^1, 2)
and [3], [3]' ∈_0 a |[3] ∪[3]' |[3] ∩[3]' |[3]^-|⊤||[3]^π (where a ∈, π∈1, 2^1, 2).
By taking the complement (-) and projection (π) normal form and replacing with I∩D and ⊤ with I∪D, for each ∈(_1)1 and a ∈,
there are _0 ∈∪, ∩, ·, I, D1 and _1 ∈-∪π|π∈1, 2^1, 21 such that _0_1a.
Moreover, by the distributive law of ∪ w.r.t. · and ∩, for each ∈∪, ∩, ·, I, D1,
there are n ∈ and _1, …, _n ∈∩, ·, I, D1 such that
_1 ∪…∪_n.
Because ∩, ·, I, D1 is finite (by the assumption) and -∪π|π∈1, 2^1, 21 is clearly finite, (_1)1 is finite.
Hence, this completes the proof.
(See <cit.> <Ref> for more details of the proof.)
Combining <Ref> yields that to prove that (_n)k is finite,
it suffices to prove that ∩, ·, I, D1 is finite.
Let * be the set of characters of <Ref> from the signature ∩^(2), ·^(2), I^(0), D^(0), ⌣^(1).
That is, *(∩), (∩), (·), (·) |∈(∩, ·, I, D, ⌣)0∪⌣.
(While ⌣ does not occur in ∩, ·, I, D, we introduce ⌣ for replacing the primitive character (D·) with ⌣ (·D). This is not essential but is useful for reducing the number of equations and for simplifying the notation (<Ref>).)
Let *_≥ 5 be the equivalence relation on ^* defined by:
[1] _≥ 5[2] [1]a_≥ 5[2]a where a ∈ is any variable (recall <Ref>).[The condition “≥ 5” is needed for some equations in <Ref> (see also <Ref>).]
If ^*_≥ 5 is finite, then ∩, ·, I, D1 is finite.
Since ^*_≥ 5 is finite,
we have that ∩, ·, I, D, ⌣1_≥ 5 is finite (<Ref>);
thus, ∩, ·, I, D1_≥ 5 is finite.
Hence by <Ref>, this completes the proof.
We consider the following finite subset of (cf. <Ref>):
∪_I
∪_D
∩_I
∩_D
·_D
⌣
Let ⊆ be the finite set *, *, *, *,
where , , are abbreviations of (∩I), (∩D), (·D),
respectively.
If ^*_≥ 5 is finite, then ^*_≥ 5 is finite.
It suffices to prove the following: for every a ∈, there is ∈^* such that a _≥ 5.
Case a = (∩), (·):
Since () = 0, by using <Ref>, both cases are shown by distinguishing the following four sub-cases:
_≥ 3 _≥ 3⊤ _≥ 3I _≥ 3D
a = (∩)
a = (·)
.
Case a = (∩), (·):
By (∩) = (∩) and applying the above case analysis for (- ∩), this case can be proved (similarly for (·)).
Case a =:
Since ∈.
Thus, our goal is to prove that ^*_≥ 5 is finite.
§.§ On the finiteness of the monoid
For the finiteness of ^*_≥ 5 (cf. <Ref>),
we present the 21 equations in <Ref>.[The most technical part of the paper is to collect these equations;
they are obtained by running a program based on <Ref> using ATP/SMT systems.]
For i ∈1, 21, let [1]_i, [2]_i be words such that [1]_i = [2]_i denotes the i-th equation.
For each i ∈1, 21, [1]_i _≥ 5[2]_i.
We prove [1]_ia_≥ 5[2]_ia, where a is any variable.
This equation can be translated to the validity of a first-order sentence via the standard translation <cit.>.
Here, we add the formula ∃ x_1, …, x_5, ⋀_i,j ∈1, 5; i ≠ j x_i ≠ x_j as an axiom, for forcing _≥ 5.
Thanks to this encoding, each of them can also be tested by using ATP/SMT systems.
Nevertheless, in the following, we give an explicit proof of <Ref> as an example.
By using the standard translation,
<Ref> is translated into the following formula in first-order logic, where x_0, y_0 are free variables:
(∃ y_1, (∃ y_2, a(x_0, y_2) y_2 ≠ y_1) y_1 ≠ y_0)
↔ (∃ y_1, (∃ y_2, (∃ y_3, a(x_0, y_3) y_3 ≠ y_2) y_2 ≠ y_1) y_1 ≠ y_0).
This formula is valid under _≥ 5, which can be shown by using the axiom above (notice that under _≥ 5, y_1 on the left and y_1, y_2 on the right always exist, by taking a vertex not assigned by any variable occurring in each formula;
thus, both formulas are equivalent to the formula ∃ y, a(x_0, y)).
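As an illustration of the ATP/SMT route mentioned above, the displayed equivalence can be handed to a solver directly. The following sketch (ours) uses the Z3 Python API rather than the TPTP encoding of <cit.>; the sort and constant names are ad hoc, and the five pairwise distinct constants play the role of the axiom forcing at least five elements.
\begin{lstlisting}[language=Python, frame=h]
# Checking the displayed equivalence with Z3 (illustrative; pip install z3-solver).
from z3 import (DeclareSort, Function, BoolSort, Consts, Exists, And, Not,
                Distinct, Solver)

V = DeclareSort('V')                    # universe of a structure
a = Function('a', V, V, BoolSort())     # interpretation of the variable a
x0, y0, y1, y2, y3 = Consts('x0 y0 y1 y2 y3', V)
c1, c2, c3, c4, c5 = Consts('c1 c2 c3 c4 c5', V)

lhs = Exists([y1], And(Exists([y2], And(a(x0, y2), y2 != y1)), y1 != y0))
rhs = Exists([y1], And(Exists([y2], And(Exists([y3], And(a(x0, y3), y3 != y2)),
                                        y2 != y1)), y1 != y0))

s = Solver()
s.add(Distinct(c1, c2, c3, c4, c5))     # axiom forcing at least five elements
s.add(Not(lhs == rhs))                  # negate the claimed equivalence
print(s.check())                        # expected: unsat, i.e., the formula is valid
\end{lstlisting}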
Even without the encoding to first-order logic, this equation can also be shown as follows:
()a = (a ·D) ·D = a · (D·D) associativity law
= a ·⊤⊤_≥ 3D·D
= a · (D·D·D) ⊤_≥ 3⊤·D and ⊤_≥ 3D·D
= ((a ·D) ·D) ·D = ()a. associativity law
See <cit.> <Ref> for all the equations.
The language ⋃_i ∈1, 21^* [2]_i ^* over is cofinite.
It suffices to prove that, for some n ∈, there is no word ∈^* ∖ (⋃_i ∈1, 21^* [2]_i ^*) of length at least n (since the set of words of length at most n - 1 is finite).
This holds when n ≥ 29, which can be tested by using Z3 (an ATP/SMT system) <cit.>
and can also be checked by drawing the minimal DFA (see <cit.> <Ref> for more details).
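The test can be phrased over strings and regular expressions, mirroring the SMT-LIB2 encoding shown in the appendix. The sketch below (ours) uses the Z3 Python API; the toy alphabet and the two factor words are placeholders, not the actual six-letter alphabet and the 21 right-hand sides [2]_i, and the length bound 4 stands in for the bound 29 of the statement.
\begin{lstlisting}[language=Python, frame=h]
# Cofiniteness test via Z3's regular expressions (illustrative; the alphabet
# {i, d, c} and the factors below are placeholders for the real data).
from z3 import String, Length, InRe, Re, Union, Concat, Star, Not, Solver

w = String('w')
sigma = Union(Re('i'), Re('d'), Re('c'))           # placeholder alphabet
factors = [Re('ii'), Re('dcd')]                    # placeholder right-hand sides
bad = Union(*[Concat(Star(sigma), f, Star(sigma)) for f in factors])

s = Solver()
s.add(InRe(w, Star(sigma)))     # w ranges over Sigma^*
s.add(Not(InRe(w, bad)))        # w contains none of the factors
s.add(Length(w) >= 4)           # placeholder bound; 29 in the statement
print(s.check())                # 'unsat' would establish the bound (here: 'sat')
\end{lstlisting}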
Thus, we have obtained the following:
^*_≥ 5 is finite.
By <Ref>, we can apply <Ref>,
where < is the shortlex order on ^* induced by: < < <.
By the form, [1]_i < [2]_i is clear for each i ∈1, 21.
Finally, <Ref> is obtained as follows:
We have:
^*_≥ 5 is finite (<Ref>)
⟹ ^*_≥ 5 is finite (<Ref>)
⟹ ^∩, ·, I, D1 is finite (<Ref>)
⟹ (_1)1 is finite (<Ref>)
⟹ (_n)1 is finite (<Ref>)
⟹ (_n)k is finite (<Ref>).
The finite axiomatizability of the equational theory of (_n)k over immediately follows from the finiteness of (_n)k.
§ CONCLUSION
We have introduced the k-variable-occurrence fragment
and presented an approach for deriving the decidability of the equational theory from finiteness.
As a case study, we have proved that the equational theory of (_n)k is decidable,
whereas that of _n is undecidable in general.
We leave the decidability open for the equational theory of CoR with full dot-dagger alternation (i.e., k, in this paper).
Our approach may apply to some other algebras/logics.
It would be interesting to consider the finite variable-occurrence fragment for other systems (e.g., CoR with antidomain <cit.>, dynamic logics <cit.>).
It would also be interesting to extend our result to first-order logic with equality (cf. <Ref>)—for example, is the k-atomic-predicate-occurrence fragment of the m variable fragment of first-order logic with equality decidable?
§ PROOF OF <REF>
(In this section, we refer to, e.g., <cit.> for standard terminology in first-order logic.)
First, we show <Ref> when is a countably infinite set:
When is a countably infinite set,
the equational theory of _n (resp. _n) is decidable if n ≤ 1 and is undecidable if n ≥ 2.
It suffices to consider _n.
Case n ≤ 1:
Let [1], [2] ∈_1.
Let z_1, z_2 be distinct variables of first-order formulas.
By using the recursive translation of <Ref> <cit.>,
let [1] (resp. [2]) be the formula in the level Σ_1 of the three variable fragment of first-order logic with equality with two free variables z_1, z_2
such that (w.r.t. binary relations) λ z_1 z_2. [1] (resp. λ z_1 z_2. [2])[We use λ z_1 z_2. [1] to denote the term having the semantics λ z_1 z_2. [1]_x_1, x_2∈ ||^2 | on a structure . (In <cit.>, the notation “[]_z_1 z_2” is used instead of λ z_1 z_2. [1].)] is semantically equivalent to [1] (resp. [2]) over .
Then, [1] ∼_[2] the formula ∀ z_1 z_2, [1] ↔[2] is valid over .
By taking the prenex normal form[ does not contain the empty structure; thus we can take the prenex normal form.],
let [1] ∼_∃ x_1 … x_n, [1]_0 (resp. [2] ∼_∃ y_1 … y_m, [2]_0), where [1]_0 and [2]_0 are quantifier-free formulas.
Then,
[1] ≁_[2]
¬ (∀ z_1 z_2, [1] ↔[2])
(∃ z_1 z_2, [1] ¬[2]) (∃ z_1 z_2, ¬[1] [2])
(∃ z_1 z_2, (∃ x_1 … x_n, _0) (∀ y_1 … y_m, ¬[2]_0))
(∃ z_1 z_2, (∃ y_1 … y_m, ¬[1]_0) (∀ x_1 … x_n, [2]_0)) .
By taking the prenex normal form, the last sentence is equivalent to a sentence in the BSR class [∃^*∀^*, all]_=.
Because the satisfiability problem of the BSR class is decidable <cit.>,
the equational theory of _1 over is decidable.
Case n ≥ 2:
The class [∀∃∀^3, (ω, 1)] (hence, [∀∃∀^3, (0, ω)]) is a conservative reduction class <cit.>; thus, the satisfiability problem of the class [∀∃∀^3, (0, ω)] is undecidable.
Let (∀ x, ∃ y, [1]) (∀ x y z, [2]) be a sentence in the class [∀∃∀^3, (0, ω)], where [1], [2] are quantifier-free.
Since (∀ x y z, [2]) and (∃ x, ∀ y, ¬[1]) are in the level Σ_2 of the three variable fragment of first-order logic,
by using the recursive translation of <Ref> <cit.>,
let [1] (resp. [2]) be the term in _2
such that λ z_1 z_2. (∀ x y z, [2]) (resp. λ z_1 z_2. ∃ x, ∀ y, ¬[1]) is semantically equivalent to [1] (resp. [2]) over , where z_1, z_2 are any pairwise distinct variables.
Then, we have:
(∀ x, ∃ y, [1]) (∀ x y z, [2]) ¬((∀ x, ∃ y, [1]) (∀ x y z, [2]))
(∀ x y z, [2]) → (∃ x, ∀ y, ¬[1])
((∀ x y z, [2]) (∃ x, ∀ y, ¬[1])) ↔ (∃ x, ∀ y, ¬[1])
[1] ∪[2] ∼_[2].
Thus, as we can give a reduction from the satisfiability problem of the class [∀∃∀^3, (0, ω)],
the equational theory of _2 over is undecidable.
§.§ Proof of <Ref> when is finite
Assume that is a countably infinite set.
Then, the equational theory over is undecidable
for complement-free _2 (i.e., the class of terms in _2 not having the complement operator (-)).
We give a reduction from the equational theory of _2 (which is undecidable by <Ref>).
By taking the complement normal form (cf. <Ref>),
it suffices to give a reduction from the equational theory of terms in _2 such that the complement operator (-) only applies to variables.
Let [1], [2] be such terms.
Let _[1], [2] be the finite set of all variables occurring in [1] or [2].
Let ∙̅_[1], [2]→ (∖_[1], [2]) be an injective map.
Let [1]', [2]', [3], [3]' be the terms defined as follows:
[1]'
[2]'
[3] † (⋂_a ∈_[1], [2] (a ∪a̅)) †
[3]' ⊤· (⋃_a ∈_[1], [2] (a ∩a̅)) ·⊤.
([3] and [3]' are used to force a_∪a̅_ = ||^2 and a_∩a̅_ = ∅, respectively.)
Then, we have:
[1] ≁_[2] ([1]' ∩[3]) ∪[3]' ≁_ ([2]' ∩[3]) ∪[3]'.
⟹:
Let be s.t. [1]_≠[2]_.
Let ' be the structure in which a̅^' |'|^2 ∖ a^'.
Then, [1]_ = [1]'_' and [2]_ = [2]'_' hold by a^-_ = a̅_'.
Also by a̅^' = |'|^2 ∖ a^', [3]_' = |'|^2 and [3]'_' = ∅ hold.
Thus, ([1]' ∩[3]) ∪[3]'_' = [1]'_' = [1]_≠[2]_ = [2]'_' = ([2]' ∩[3]) ∪[3]'_'.
⟸:
Let be s.t. ([1]' ∩[3]) ∪[3]'_≠([2]' ∩[3]) ∪[3]'_.
Then, we have a ∩a̅_ = ∅ for a ∈_[1], [2]
(∵ If we assume a ∩a̅_≠∅,
then [3]'_ = ||^2, and thus ([1]' ∩[3]) ∪[3]'_ = ||^2 = ([2]' ∩[3]) ∪[3]'_;
so reaching a contradiction); thus [3]'_ = ∅.
Similarly, we have a ∪a̅_ = ||^2 for a ∈_[1], [2]
(∵ If we assume a ∪a̅_≠ ||^2;
then [3]_ = ∅, and thus ([1]' ∩[3]) ∪[3]'_ = ∅ = ([2]' ∩[3]) ∪[3]'_;
so reaching a contradiction); thus [3]_ = ||^2.
By a ∩a̅_ = ∅ and a ∪a̅_ = ||^2, we have a^-_ = a̅_,
and thus [1]_ = [1]'_ and [2]_ = [2]'_.
Thus, [1]_ = [1]'_ = ([1]' ∩[3]) ∪[3]'_≠([2]' ∩[3]) ∪[3]'_ = [2]'_ = [2]_.
Finally, because [1]', [2]' ∈_2, [3] ∈_1, [3]' ∈_1,
we have (([1]' ∩[3]) ∪[3]'), (([2]' ∩[3]) ∪[3]') ∈_2.
Hence, this completes the proof.
If is a non-empty set,
the equational theory of _2 is undecidable.
Let ' = a_1, a_2, … be a countably infinite set.
We give a reduction from the equational theory of complement-free _2 over ' (cf. <Ref>).
Let a^Δ (a^1 ↦ 1, 2 ↦ 1∩ a^1 ↦ 2, 2 ↦ 2);
note that a^Δ_ = x, y|x, x, y,y∈a_ for any .
Let ([1]) be the term defined by:
(a_i) a^Δ∩ (a · ((a^-∩I) · a)^i) ([2]^π) a^Δ∩([2])^π
(⊤) a^Δ ([2] ∪[3]) ([2]) ∪([3])
() ([2] ∩[3]) ([2]) ∩([3])
(I) a^Δ∩I ([2] ·[3]) ([2]) ·([3])
(D) a^Δ∩D ([2] †[3]) a^Δ∩ (((a^Δ)^-∪([2])) † ((a^Δ)^-∪([3]))).
(Cf. <cit.>.
This construction can be viewed as a variant of relativization in logic, i.e., the translations (∃ x, ) ↦ (∃ x, A(x) ∧ ) and
(∀ x, ) ↦ (∀ x, A(x) →).
The construction above is refined so as not to increase the dot-dagger alternation.)
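For readability, the recursive translation can be rendered as a small recursive function over term syntax trees. The sketch below (ours) only pretty-prints the translation as strings; the constructor names and the abbreviation aΔ for a^{1↦1,2↦1} ∩ a^{1↦2,2↦2} are ad hoc.
\begin{lstlisting}[language=Python, frame=h]
# Illustrative rendering (ours) of the translation as strings; "aD" stands for
# the term a^{1->1,2->1} cap a^{1->2,2->2}, and terms are nested tuples such
# as ('dagger', ('var', 2), ('D',)).
def T(t):
    op = t[0]
    if op == 'var':                    # T(a_i) = aD cap (a . ((a^- cap I) . a)^i)
        return f"(aD cap (a . ((a^- cap I) . a)^{t[1]}))"
    if op == 'top':                    # T(top) = aD
        return "aD"
    if op == 'bot':                    # T(bot) = bot
        return "bot"
    if op == 'I':
        return "(aD cap I)"
    if op == 'D':
        return "(aD cap D)"
    if op == 'proj':                   # T(phi^pi) = aD cap T(phi)^pi
        return f"(aD cap {T(t[2])}^{t[1]})"
    if op in ('cup', 'cap', 'dot'):    # union, intersection, composition commute with T
        return f"({T(t[1])} {op} {T(t[2])})"
    if op == 'dagger':                 # relativized dagger
        return f"(aD cap ((aD^- cup {T(t[1])}) dagger (aD^- cup {T(t[2])})))"
    raise ValueError(op)

print(T(('dagger', ('var', 2), ('D',))))   # T(a_2 dagger D)
\end{lstlisting}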
For every [1] such that the complement operator does not occur in [1],
* if [1] ∈_0, then ([1]) ∈_1;
* if [1] ∈_1, then ([1]) ∈_1;
* if [1] ∈_2, then ([1]) ∈_2.
By straightforward induction on [1] using (a_i) ∈_1.
Note that a^Δ∈_0 (= _0).
For a structure , let ^t be the structure defined by:
|^t| x |x, x∈ a^ a_i^^t (a_i)_.
Then, the following two hold:
For every and , _^t = ()_.
By induction on .
Case = ⊤:
⊤_^t = |^t|^2 = a^Δ_By Def. of ^t
= (⊤)_.
Case = , I, D:
As with the case of = ⊤.
Case = [2] ·[3]:
[2] ·[3]_^t = [2]_^t·[3]_^t = ([2])_·([3])_IH
= ([2])·([3])_
= ([2] ·[3])_.
Case = [2] ∪[3], [2] ∩[3]:
As with the case of = [2] ·[3].
Case = [2] †[3] (note that the operator † depends on the universe):
Note that if [2]_^t = ([2])_, we have the following:
[2]^-_^t = |^t|^2 ∖[2]_^t = |^t|^2 ∖([2])_By assumption
= |^t|^2 ∩([2])^-_
= a^Δ∩([2])^-_.
By using this, we have:
[2] †[3]_^t = |^t|^2 ∖ ([2]^-_^t·[3]^-_^t)
= |^t|^2 ∖ ( a^Δ∩([2])^-_·a^Δ∩([3])^-_) By IH with the above
= a^Δ∩ ((a^Δ∩([2])^-) · (a^Δ∩([3])^-))^-_
= a^Δ∩ (((a^Δ)^-∪([2])) † ((a^Δ)^-∪([3])))_
= ([2] †[3])_.
Hence, this completes the proof.
[cf. <cit.>]
For every , there is a structure [2] such that = [2]^t.
Let [2] = |[2]|, a^[2]_a ∈a be the structure defined by:
|[2]| || ∪x, y, i, j|x, y∈a_i_, j ∈1, i;
a^[2] I_||∪x, x, y, i, 1, x, y, i, i, y|x, y∈a_i_
∪x, y, i, j-1, x, y, i, j|x, y∈a_i_, j ∈2, i.
Here, we assume that || and x, y, i, j|x, y∈a_i_, j ∈1, i are disjoint (the other case can be shown in the same manner by renaming vertices).
The following illustrates the conversion from [1] to [2]:
[Figure omitted: a three-vertex structure with two a_1-edges (one of them a loop) and one a_2-edge is converted into [2] by subdividing each a_i-edge into an a-path through i fresh vertices and adding an a-loop at every original vertex.]
By the construction of [2], we have:
x |x, x∈ a^[2] = || 𝒯(a_i)_[2] = a_i^.
Thus we have |[2]^t| = || and a_i^[2]^t = a_i^.
Hence, [2]^t =.
From these two, we have:
[1] ≁_[2] [1]_≁_[2]_
[1]_[2]^t≁_[2]_[2]^t⟹: <Ref>. ⟸: By letting [1] = [2]^t
([1])_[2]≁_([2])_[2]<Ref>
([1]) ≁_([2]).
Because [1], [2] are in _2, we have that ([1]), ([2]) are in _2 (<Ref>).
Hence, since ([1]) and ([2]) contain only the single variable a, this completes the proof.
Therefore, <Ref> holds even if is a non-empty finite set (particularly if = 1):
Assume that is a non-empty finite set.
The equational theory of _n (resp. _n) is decidable if n ≤ 1
and is undecidable if n ≥ 2.
By <Ref>.
§ PROOF COMPLETION FOR <REF>
§.§ Complement normal form
The following holds:
⊤^- ∼_ ^- ∼_⊤ I^- ∼_D D^- ∼_I
([2] ∪[3])^- ∼_[2]^-∩[3]^- ([2] ∩[3])^- ∼_[2]^-∪[3]^- ([2]^-)^- ∼_[2] ([2]^π)^- ∼_ ([2]^-)^π.
Easy.
Although ([1] ·[2])^-∼_[1]^-†[2]^- and ([1] †[2])^-∼_[1]^-·[2]^- hold,
we do not need them in the following, because the complement operator applies to neither · nor † by the definition of _k and _k.
Let ∈(_1)1 and a ∈.
There are _0 ∈^∪, ∩, ·, , ⊤, I, D∪π|π∈1, 2^1, 21 and _1 ∈^-1 such that ∼__0_1a.
For ∈(_0)1:
By applying the rewriting rules induced from the equations in <Ref> (from left to right) as much as possible,
we can obtain a term ' ∈(_0)1 such that
∼_' and the complement operators (-) only apply to a variable.
(Note that _0 = ^∪, ∩, -, , ⊤, I, D∪π|π∈1, 2^1, 2.)
Then, there are _0 ∈^∪, ∩, , ⊤, I, D∪π|π∈1, 2^1, 21 and _1 ∈^-1 such that ' = _0_1a.
Since ∼__0_1a, we have obtained such _0 and _1.
For ∈(_1)1:
By easy induction on using the case for ∈(_0)1.
§.§ Projection normal form (PNF)
The following holds:
⊤^π ∼_⊤ ^π ∼_ I^π ∼_I (π(1) ≠π(2))
⊤ (π(1) = π(2))
D^π ∼_D (π(1) ≠π(2))
(π(1) = π(2))
([2] ∪[3])^π ∼_[2]^π∪[3]^π ([2] ∩[3])^π ∼_[2]^π∩[3]^π ([2]^-)^π ∼_ ([2]^π)^- ([2]^π)^π' ∼_[2]^π∘π'
([2] ·[3])^π ∼_[2] ·[3] (π = 1 ↦ 1, 2 ↦ 2)
[3]^⌣·[2]^⌣ (π = 1 ↦ 2, 2 ↦ 1)
([2] ∩[3]^⌣) ·⊤ (π = 1 ↦ 1, 2 ↦ 1)
⊤· ([2]^⌣∩[3]) (π = 1 ↦ 2, 2 ↦ 2)
([2] †[3])^π ∼_[2] †[3] (π = 1 ↦ 1, 2 ↦ 2)
[3]^⌣†[2]^⌣ (π = 1 ↦ 2, 2 ↦ 1)
([2] ∪[3]^⌣) † (π = 1 ↦ 1, 2 ↦ 1)
† ([2]^⌣∪[3]) (π = 1 ↦ 2, 2 ↦ 2)
.
They are also easy, from the semantics.
For example,
for ([2] ∪[3])^π∼_[2]^π∪[3]^π,
x_1, x_2∈([2] ∪[3])^π_ x_π(1), x_π(2)∈[2] ∪[3]_
x_π(1), x_π(2)∈[2]_x_π(1), x_π(2)∈[3]_
x_1, x_2∈[2]^π_x_1, x_2∈[3]^π_
x_1, x_2∈[2]^π∪[3]^π_.
For ([2]^π)^π'∼_[2]^π∘π',
x_1, x_2∈([2]^π)^π'_ x_π'(1), x_π'(2)∈[2]^π_
x_π(π'(1)), x_π(π'(2))∈[2]_
x_(π∘π')(1), x_(π∘π')(2)∈[2]_
x_1, x_2∈[2]^π∘π'_.
For ([2] ·[3])^π, we distinguish the following cases:
Case π = 1 ↦ 1, 2 ↦ 2: Clear.
Case π = 1 ↦ 2, 2 ↦ 1:
x_1, x_2∈([2] ·[3])^π_ x_2, x_1∈[2] ·[3]_
∃ z, x_2, z∈[2]_z, x_1∈[3]_
∃ z, z, x_2∈[2]^π_x_1, z∈[3]^π_
x_1, x_2∈[3]^π·[2]^π_.
Case π = 1 ↦ 1, 2 ↦ 1:
x_1, x_2∈([2] ·[3])^π_ x_1, x_1∈[2] ·[3]_
∃ z, x_1, z∈[2]_z, x_1∈[3]_
∃ z, x_1, z∈[2]_x_1, z∈[3]^⌣_
∃ z, x_1, z∈[2] ∩[3]^⌣_z, x_2∈⊤_
x_1, x_2∈([2] ∩[3]^⌣) ·⊤_.
Case π = 1 ↦ 2, 2 ↦ 2:
As with the case of π = 1 ↦ 1, 2 ↦ 1.
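Each of these projection equations can also be spot-checked semantically on random small structures. The sketch below (ours; the helper names are ad hoc) does this for the case π = 1 ↦ 1, 2 ↦ 1 of the composition rule.
\begin{lstlisting}[language=Python, frame=h]
# Random semantic spot-check of (phi . psi)^{1->1,2->1} = (phi cap psi^conv) . top
# (illustrative only).
import random
from itertools import product

def compose(r, s):
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

random.seed(0)
for _ in range(100):
    n = random.randint(1, 4)
    full = set(product(range(n), repeat=2))
    phi = {p for p in full if random.random() < 0.5}
    psi = {p for p in full if random.random() < 0.5}
    comp = compose(phi, psi)
    # left-hand side: project both coordinates of phi.psi to the first one
    lhs = {(x1, x2) for x1 in range(n) for x2 in range(n) if (x1, x1) in comp}
    # right-hand side: (phi cap converse(psi)) composed with the full relation
    rhs = compose(phi & {(y, x) for (x, y) in psi}, full)
    assert lhs == rhs
print("ok")
\end{lstlisting}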
Let ∈^∪, ∩, ·, , ⊤, I, D∪π|π∈1, 2^1, 21 and a ∈.
Then, there are _0 ∈^∪, ∩, ·, , ⊤, I, D1 and _1 ∈^π|π∈1, 2^1, 21 such that ∼__0_1a.
By applying the rewriting rules induced from the equations in <Ref> (from left to right) as much as possible,
we can obtain a term ' ∈^∪, ∩, ·, , ⊤, I, D∪π|π∈1, 2^1, 21 such that
∼_' and the projection operators π only apply to a variable.
Then,
for the term ', there are _0 ∈^∪, ∩, , ⊤, I, D1 and _1 ∈^π|π∈1, 2^1, 21 such that ' = _0_1a.
Since ∼__0_1a, we have obtained such _0 and _1.
§.§ Union normal form
For every ∈^∪, ∩, ·, , ⊤, I, D1,
there is _0 ∈^∪, ∩, ·, I, D1 such that ∼__0.
By ∼_I∩D and ⊤∼_I∪D, let _0 be the term in which has been replaced with I∩D and ⊤ has been replaced with I∪D.
Then, ∼__0 and _0 ∈^∪, ·, ∩, I, D1.
The following holds:
([2]_1 ∪[2]_2) ∩[3] ∼_ ([2]_1 ∩[3]) ∪ ([2]_2 ∩[3]) [3] ∩ ([2]_1 ∪[2]_2) ∼_ ([3] ∩[2]_1) ∪ ([3] ∩[2]_2)
([2]_1 ∪[2]_2) ·[3] ∼_ ([2]_1 ·[3]) ∪ ([2]_2 ·[3]) [3] · ([2]_1 ∪[2]_2) ∼_ ([3] ·[2]_1) ∪ ([3] ·[2]_2).
Easy.
For every ∈^∪, ∩, ·, I, D1,
there are n ≥ 1 and _1, …, _n ∈^∩, ·, I, D1 such that ∼__1 ∪…∪_n.
By straightforward induction on using the equations in <Ref>.
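The distribution step can be made explicit as a small function that returns the ∪-free disjuncts. The sketch below (ours; nested-tuple terms as in the earlier sketches) implements exactly this normal form.
\begin{lstlisting}[language=Python, frame=h]
# Union normal form (illustrative): distribute cup outward so that a
# {cup, cap, dot, I, D}-term becomes a list of cup-free terms.
def union_nf(t):
    op = t[0]
    if op == 'cup':
        return union_nf(t[1]) + union_nf(t[2])
    if op in ('cap', 'dot'):                 # 'dot' is relational composition
        return [(op, l, r) for l in union_nf(t[1]) for r in union_nf(t[2])]
    return [t]                               # I, D, or a variable

# Example: (I cup D) . (a cap (I cup D))  ->  four cup-free terms
for s in union_nf(('dot', ('cup', ('I',), ('D',)),
                   ('cap', ('var', 'a'), ('cup', ('I',), ('D',))))):
    print(s)
\end{lstlisting}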
§.§ Proof of <Ref>
If ^∩, ·, I, D1∼_ is finite,
(_1)1∼_ is finite.
We have:
*(∵ the assumption)
*(∵<Ref>)
*(∵<Ref>)
*(<Ref> with the finiteness of ^π|π∈1, 2^1, 21∼_)
*(<Ref> with the finiteness of ^-1∼_)
§ PROOF COMPLETION OF <REF>
Using the standard encoding to formulas of first-order logic <cit.>, we can automatically check the validity of these equations.
See <cit.> for the TPTP files; they have been checked by at least Z3 (Z3tptp 4.8.11.0) <cit.>, Vampire 4.7 (linked with Z3 4.8.13.0) <cit.>, and CVC5 1.0.3 <cit.>.
Our encoding is based on the encoding into first-order formulas <cit.> (see also <cit.>).
Note that an earlier TPTP encoding for the calculus of relations was presented by Höfner and Struth <cit.>; it is based on the axioms of relation algebras.
Apart from the automated checking above, in the following, we present an explicit proof for each equation (w.r.t. ∼__≥ 5).
For =:
a ∩I = a ∩ (I∩I) I∼_I∩I
= (a ∩I) ∩I. associativity law
For =:
a ∩D = a ∩ (D∩D) D∼_D∩D
= (a ∩D) ∩D. associativity law
For =:
a ∩I = a^⌣∩I. ∩I∼_^⌣∩I
For =:
a ∩I = a^⌣∩I∩I∼_^⌣∩I
= (a ∩I)^⌣. PNF; <Ref>
For =:
(a ∩D) ∩I = a ∩ (D∩I) associativity law
= a ∩ (I∩D) commutativity law
= (a ∩I) ∩D. associativity law
For =:
a^⌣∩D = (a ∩D)^⌣. PNF
For =:
a = a^⌣⌣. PNF
For =:
(a ∩I) ∩D = a ∩ (I∩D) associativity law
= a ∩∼_I∩D
= ∼_∩
= ·D∼_·
= (a ∩) ·D∼_∩
= (a ∩ (D∩I)) ·D∼_D∩I
= ((a ∩D) ∩I) ·D. associativity law
For =:
(a ∩I) ∩D = a ∩ (I∩D) associativity law
= a ∩∼_I∩D
= ∼_∩
= ((a ∩I) ·D) ∩I. ⋆1
Here, for <Ref>, we consider the translated first-order formula:
False↔ ((∃ y_1, (a(x_0, y_1) x_0 = y_1) y_1 ≠ y_0) x_0 = y_0).
This formula is valid over because x_0 = y_1 y_1 ≠ y_0 x_0 = y_0 is unsatisfiable.
For =:
(a ·D) ∩I = ((a ∩D) ·D) ∩I⋆2
Here, for <Ref>, we consider the translated first-order formula:
((∃ y_1, a(x_0, y_1) y_1 ≠ y_0) x_0 = y_0) ↔ ((∃ y_1, (a(x_0, y_1) x_0 ≠ y_1) y_1 ≠ y_0) x_0 = y_0).
This formula is valid over because (y_1 ≠ y_0 x_0 = y_0) ↔ (x_0 ≠ y_1 y_1 ≠ y_0 x_0 = y_0) always holds.
For =:
(a ∩I) ·D = ((a ∩I) ·D) ∩D⋆3
Here, for <Ref>, we consider the translated first-order formula:
((∃ y_1, a(x_0, y_1) x_0 = y_1) y_1 ≠ y_0) ↔ ((∃ y_1, (a(x_0, y_1) x_0 = y_1) y_1 ≠ y_0) x_0 ≠ y_0).
This formula is valid over because (y_1 ≠ y_0 x_0 = y_0) ↔ (x_0 ≠ y_1 y_1 ≠ y_0 x_0 = y_0) always holds.
For =:
(a ·D) ·D = a · (D·D) associativity law
= a ·⊤⊤∼_D·D
= ((a ·D) ∩D) ·D. ⋆4
Here, for <Ref>, we consider the translated first-order formula:
(∃ y_1, a(x_0, y_1)) ↔ (∃ y_2, ((∃ y_1, a(x_0, y_1) y_1 ≠ y_2) x_0 ≠ y_2) y_2 ≠ y_0).
For the right-hand side formula, over _≥ 4, we have:
(∃ y_2, ((∃ y_1, a(x_0, y_1) y_1 ≠ y_2) x_0 ≠ y_2) y_2 ≠ y_0)
↔ (∃ y_2 y_1, a(x_0, y_1) y_1 ≠ y_2 x_0 ≠ y_2 y_2 ≠ y_0) prenex normal form (prenex)
↔ (∃ y_1, a(x_0, y_1) (∃ y_2, y_2 ≠ y_1 y_2 ≠ x_0 y_2 ≠ y_0)) prenex
↔ (∃ y_1, a(x_0, y_1)). True↔ (∃ y_2, y_2 ≠ y_1 y_2 ≠ x_0 y_2 ≠ y_0) over _≥ 4
Hence, the translated formula for <Ref> is valid over _≥ 4.
For =:
(a ·D) ·D = a ·⊤·_⊤ =
= a · (⊤·D) ⊤∼__≥ 2⊤·D
= (a ·⊤) ·Dassociativity law
= ((a ·D) ·D) ·D. ·_⊤ =
Here, (·_⊤ =) means the following:
·⊤ = · (D·D) ⊤∼__≥ 3D·D
= (·D) ·Dassociativity law
For =:
((a ∩D) ·D) ·D = (a ∩D) ·⊤·_⊤ =
= ((a ·D) ∩I) ·⊤⋆5
= (((a ·D) ∩I) ·D) ·D. ·_⊤ =
Here, for <Ref>, we consider the translated first-order formula:
(∃ y_1, a(x_0, y_1) x_0 ≠ y_1) ↔ (∃ y_2, (∃ y_1, a(x_0, y_1) y_1 ≠ y_2) x_0 = y_2).
This formula is valid over , which can be shown by using [x_0/y_2] ↔ (∃ y_2, x_0 = y_2).
For =:
((a ·D) ·D) ∩D = (a ·⊤) ∩D·_⊤ =
= ((a ·⊤) ∩I) ·D⋆6
= ((a ·D·D) ∩I) ·D. ·_⊤ =
Here, for <Ref>, we consider the translated first-order formula:
((∃ y_1, a(x_0, y_1)) x_0 ≠ y_0) ↔ (∃ y_1, ((∃ y_2, a(x_0, y_2)) x_0 = y_1) y_1 ≠ y_0).
This formula is valid over , which can be shown by using [y_0/y_1] ↔ (∃ y_1, y_1 = y_0).
For =:
(a^⌣·D)^⌣·D = (D· a) ·DPNF
= D· (a ·D) associativity law
= ((a ·D)^⌣·D)^⌣. PNF
For =:
((a ∩I) ·D)^⌣ = D· (a^⌣∩I) PNF
= (D· (a^⌣∩I)) ∩D⋆7
= ((a ∩I) ·D)^⌣∩D. PNF
Here, for <Ref>, we consider the translated first-order formula:
(∃ x_1, x_0 ≠ x_1 (a(y_0, x_1) x_1 = y_0)) ↔ (∃ x_1, x_0 ≠ x_1 (a(y_0, x_1) x_1 = y_0)) x_0 ≠ y_0.
This formula is valid over because (x_0 ≠ x_1 x_1 = y_0) ↔ (x_0 ≠ x_1 x_1 = y_0 x_0 ≠ y_0) always holds.
For =:
(((a ·D) ∩I) ·D)^⌣ = D· ((D· a^⌣) ∩I) PNF
= (⊤· (a^⌣∩D)) ∩D⋆8
= ((D·D) · (a^⌣∩D)) ∩D⊤∼__≥ 3D·D
= (D· (D· (a^⌣∩D))) ∩Dassociativity law
= (((a ∩D) ·D) ·D)^⌣∩D. PNF
Here, for <Ref>, we consider the translated first-order formula:
(∃ x_2, x_0 ≠ x_2 ((∃ x_1, x_2 ≠ x_1 a(y_0, x_1)) x_2 = y_0))
↔ ((∃ x_1, a(y_0, x_1) x_1 ≠ y_0) x_0 ≠ y_0).
This formula is valid over , which can be shown by using [y_0/x_2] ↔ (∃ x_2, x_2 = y_0).
For =:
(a^⌣·D)^⌣·D = (D· a) ·DPNF
= D· (a ·D) associativity law
= ((a ·D)^⌣·D)^⌣PNF
For =:
(((a ·D) ∩I) ·D)^⌣ = D· ((D· a^⌣) ∩I) PNF
= ((⊤· (a^⌣∩D)) ∩D) ⋆9
= (((a ∩D) ·⊤)^⌣∩D) PNF
= ((((a ∩D) ·D) ·D)^⌣∩D) ·_⊤ =
Here, for <Ref>, we consider the translated first-order formula:
(∃ x_1, x_0 ≠ x_1 ((∃ x_2, x_1 ≠ x_2 a(y_0, x_2)) x_1 = y_0))
↔ ((∃ x_2, a(y_0, x_2) x_2 ≠ y_0) x_0 ≠ y_0).
This formula is valid over , which can be shown by using [y_0/x_1] ↔ (∃ x_1, x_1 = y_0).
For =:
(((a ·D)^⌣·D) ·D)^⌣
= ((a ·D)^⌣·⊤)^⌣·_⊤ =
= ⊤· (a ·D) PNF
= (⊤· a) ·Dassociativity law
= (a^⌣·⊤)^⌣·DPNF
= ((a^⌣·D) ·D)^⌣·D. ·_⊤ =
For =:
(((((((((a ∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D
= ((D· (((((((a ∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D)
∩D)) ∩D) ·DPNF
= ((D· ((((D· (((a ∩D) ·D) ∩D)) ∩D) ·D)
∩D)) ∩D) ·DPNF
= D· ((((D· ((((D· (a ∩D)) ∩D) ·D) ∩D))
∩D) ·D) ∩D) ⋆10
= D· ((((D· (((((a^⌣∩D) ·D)^⌣∩D) ·D) ∩D))
∩D) ·D) ∩D) PNF
= D· (((((((((a^⌣∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D) ∩D) PNF
= ((((((((((a^⌣∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D)^⌣∩D) ·D)^⌣. PNF
Here, for <Ref>, we consider the translated first-order formula:
(∃ y_1, ( (∃ x_1, x_0 ≠ x_1 ((∃ y_2, (
(∃ x_2, x_1 ≠ x_2 ((∃ y_3, (a(x_2, y_3) x_2 ≠ y_3) y_3 ≠ y_2) x_2 ≠ y_2))
x_1 ≠ y_2) y_2 ≠ y_1) x_1 ≠ y_1)) x_0 ≠ y_1) y_1 ≠ y_0)
↔
(∃ x_1, (∃ y_1, ((∃ x_2, x_1 ≠ x_2 ((∃ y_2, (
(∃ x_3, x_2 ≠ x_3 (a(x_3, y_2) x_3 ≠ y_2))
x_2 ≠ y_2) y_2 ≠ y_1) x_2 ≠ y_1)) x_1 ≠ y_1) y_1 ≠ y_0) x_1 ≠ y_0).
By taking the prenex normal form on each side, this formula is equivalent to the following:
( ∃ x_1 x_2 y_1 y_2 y_3, a(x_2, y_3)
x_0 ≠ x_1 x_1 ≠ x_2 x_2 ≠ y_3
y_3 ≠ y_2 y_2 ≠ y_1 y_1 ≠ y_0
x_0 ≠ y_1 x_1 ≠ y_1
x_1 ≠ y_2 x_2 ≠ y_2
)
↔( ∃ x_1 x_2 x_3 y_1 y_2, a(x_3, y_2)
x_0 ≠ x_1 x_1 ≠ x_2 x_2 ≠ x_3
x_3 ≠ y_2 y_2 ≠ y_1 y_1 ≠ y_0
x_1 ≠ y_0 x_1 ≠ y_1
x_2 ≠ y_1 x_2 ≠ y_2
).
For the left-hand side formula, over _≥ 5, we have:
( ∃ x_1 x_2 y_1 y_2 y_3, a(x_2, y_3)
x_0 ≠ x_1 x_1 ≠ x_2 x_2 ≠ y_3
y_3 ≠ y_2 y_2 ≠ y_1 y_1 ≠ y_0
x_0 ≠ y_1 x_1 ≠ y_1
x_1 ≠ y_2 x_2 ≠ y_2
)
↔( ∃ x_2 y_3, a(x_2, y_3) x_2 ≠ y_3
∃ y_2, y_2 ≠ y_3 y_2 ≠ x_2
∃ x_1, x_1 ≠ x_0 x_1 ≠ x_2 x_1 ≠ y_2
∃ y_1, y_1 ≠ y_2
y_1 ≠ y_0 y_1 ≠ x_0 y_1 ≠ x_1
) prenex
↔( ∃ x_2 y_3, a(x_2, y_3) x_2 ≠ y_3
∃ y_2, y_2 ≠ y_3 y_2 ≠ x_2
∃ x_1, x_1 ≠ x_0 x_1 ≠ x_2 x_1 ≠ y_2
) True↔∃ y_1, y_1 ≠ y_2 y_1 ≠ y_0 y_1 ≠ x_0 y_1 ≠ x_1 over _≥ 5
↔( ∃ x_2 y_3, a(x_2, y_3) x_2 ≠ y_3
∃ y_2, y_2 ≠ y_3 y_2 ≠ x_2
) True↔∃ x_1, x_1 ≠ x_0 x_1 ≠ x_2 x_1 ≠ y_2 over _≥ 4
↔ (∃ x_2 y_3, a(x_2, y_3) x_2 ≠ y_3) True↔∃ y_2, y_2 ≠ y_3 y_2 ≠ x_2 over _≥ 3
↔ (∃ x y, a(x, y) x ≠ y).
By the same argument, for the right-hand side formula, over _≥ 5, we have:
( ∃ x_1 x_2 x_3 y_1 y_2, a(x_3, y_2)
x_0 ≠ x_1 x_1 ≠ x_2 x_2 ≠ x_3
x_3 ≠ y_2 y_2 ≠ y_1 y_1 ≠ y_0
x_1 ≠ y_0 x_1 ≠ y_1
x_2 ≠ y_1 x_2 ≠ y_2
) ↔ (∃ x y, a(x, y) x ≠ y).
Combining them, the translated formula from <Ref> is valid over _≥ 5.
For =:
(((((((a ·D)^⌣·D) ∩I) ·D)^⌣·D) ∩I) ·D)
= ((((D· ((D· (a ·D)) ∩I)) ·D) ∩I) ·D) PNF
= (D· ((D· ((((D· a^⌣) ·D) ∩I) ·D)) ∩I)) ⋆11
= (((((((a ·D)^⌣·D) ∩I) ·D)^⌣·D) ∩I) ·D)^⌣. PNF
Here, for <Ref>, we consider the translated first-order formula:
(∃ y_1, ( (∃ y_2, (∃ x_1, x_0 ≠ x_1 ((∃ x_2,
x_1 ≠ x_2 (∃ y_3, a(x_2, y_3) y_3 ≠ y_2)) x_1 = y_2)) y_2 ≠ y_1) x_0 = y_1) y_1 ≠ y_0)
↔
(∃ x_1, x_0 ≠ x_1 ((∃ x_2, x_1 ≠ x_2 (∃ y_1,
((∃ y_2, (∃ x_3, x_2 ≠ x_3 a(y_2, x_3)) y_2 ≠ y_1) x_2 = y_1) y_1 ≠ y_0)) x_1 = y_0)).
By taking the prenex normal form on each side, this formula is equivalent to the following:
( ∃ x_1 x_2 y_1 y_2 y_3, a(x_2, y_3)
x_0 ≠ x_1 x_1 ≠ x_2
y_0 ≠ y_1 y_1 ≠ y_2 y_2 ≠ y_3
x_0 = y_1 x_1 = y_2
)
↔( ∃ x_1 x_2 x_3 y_1 y_2, a(y_2, x_3)
x_0 ≠ x_1 x_1 ≠ x_2 x_2 ≠ x_3
y_0 ≠ y_1 y_1 ≠ y_2
x_1 = y_0 x_2 = y_1
).
For the left-hand side formula, we have:
( ∃ x_1 x_2 y_1 y_2 y_3, a(x_2, y_3)
x_0 ≠ x_1 x_1 ≠ x_2
y_0 ≠ y_1 y_1 ≠ y_2 y_2 ≠ y_3
x_0 = y_1 x_1 = y_2
)
↔( ∃ x_1 x_2 y_3, a(x_2, y_3)
x_0 ≠ x_1 x_1 ≠ x_2
y_0 ≠ x_0 x_0 ≠ x_1 x_1 ≠ y_3
) [z'/z] ↔∃ z, z = z'
↔ (∃ x_2 y_3, a(x_2, y_3) x_0 ≠ y_0) True↔∃ x_1, x_1 ≠ x_0 x_1 ≠ x_2 x_1 ≠ y_3 over _≥ 5
↔ (∃ x y, a(x, y) x_0 ≠ y_0). By renaming
For the right-hand side formula, we have:
( ∃ x_1 x_2 x_3 y_1 y_2, a(y_2, x_3)
x_0 ≠ x_1 x_1 ≠ x_2 x_2 ≠ x_3
y_0 ≠ y_1 y_1 ≠ y_2
x_2 = y_1 x_1 = y_0
)
↔( ∃ x_3 y_1 y_2, a(y_2, x_3)
x_0 ≠ y_0 y_0 ≠ y_1 y_1 ≠ x_3
y_0 ≠ y_1 y_1 ≠ y_2
) [z'/z] ↔∃ z, z = z'
↔ (∃ x_3 y_2, a(y_2, x_3) x_0 ≠ y_0) True↔∃ y_1, y_1 ≠ y_0 y_1 ≠ x_3 y_1 ≠ y_0 y_1 ≠ y_2 over _≥ 5
↔ (∃ x y, a(x, y) x_0 ≠ y_0). By renaming
Hence, we have proved all the equations in <Ref>.
§ THE MINIMAL DFA AND SMT-LIB2 FILE FOR <REF>
(The files in this section can also be seen in <cit.>.)
<Ref> is the SMT-LIB2 file for showing:
there is no word w of length at least 29 such that w ∈^* ∖ (⋃_i ∈1, 21^* [2]_i ^*).
<Ref> is the output by Z3 <cit.> (Z3 version 4.11.0).
Thus, we have that ^* ∖ (⋃_i ∈1, 21^* [2]_i ^*) is finite.
Additionally, <Ref> is the SMT-LIB2 file for showing:
there is a word w of length at least 28 such that w ∈^* ∖ (⋃_i ∈1, 21^* [2]_i ^*).
<Ref> is the output by Z3 <cit.> (Z3 version 4.11.0).
For example, the following word of length 28 is in ^* ∖ (⋃_i ∈1, 21^* [2]_i ^*):
.
(This word can be obtained by uncommenting the last line of <Ref>. See also <Ref>.)
[ frame=h, caption=The SMT-LIB 2 file of <Ref> for length ≥ 29.,label=universal,captionpos=t,float,abovecaptionskip=-]programs/universal.smt2
[ frame=h, caption=The output by Z3 (Z3 version 4.11.0) for <Ref>.,label=universal_out,captionpos=t,float,abovecaptionskip=-]programs/universal.smt2.z3.out
[ frame=h, caption=The SMT-LIB 2 file of <Ref> for length ≥ 28.,label=universal_28,captionpos=t,float,abovecaptionskip=-]programs/universal_28.smt2
[ frame=h, caption=The output by Z3 (Z3 version 4.11.0) for <Ref>.,label=universal_28_out,captionpos=t,float,abovecaptionskip=-]programs/universal_28.smt2.z3.out
§.§ Minimal DFA for <Ref>
<Ref> presents the minimal DFA of (⋃_i ∈1, 21^* [2]_i ^*)
(shown online, since the DFA is large; see also <cit.> for the dot file).
Its language is cofinite because the DFA is acyclic except for the accepting state.
(The red colored edges spell out the aforementioned word of length 28, which is not accepted by the DFA.)
Thus, the minimal DFA also shows the cofiniteness of its language graphically.
[Figure omitted: the minimal DFA for <Ref>; see the dot file in <cit.>.]
[->] (1586.8bp,720.36bp) .. controls (1600.9bp,686.9bp) and (1630.0bp,635.0bp) .. (1675.1bp,635.0bp) .. controls (1675.1bp,635.0bp) and (1675.1bp,635.0bp) .. (1911.2bp,635.0bp) .. controls (2126.1bp,635.0bp) and (2179.2bp,609.0bp) .. (2394.2bp,609.0bp) .. controls (2394.2bp,609.0bp) and (2394.2bp,609.0bp) .. (2822.5bp,609.0bp) .. controls (2906.3bp,609.0bp) and (2908.8bp,1266.4bp) .. (2908.1bp,1439.4bp);
(2244.6bp,624.5bp) node ;
red
[->] (1605.5bp,748.29bp) .. controls (1615.0bp,748.39bp) and (1625.7bp,748.5bp) .. (1646.2bp,748.71bp);
;
strokecol
(1625.9bp,756.5bp) node ;
[->] (1702.0bp,764.46bp) .. controls (1721.3bp,774.98bp) and (1749.2bp,787.0bp) .. (1775.5bp,787.0bp) .. controls (1775.5bp,787.0bp) and (1775.5bp,787.0bp) .. (2195.4bp,787.0bp) .. controls (2284.5bp,787.0bp) and (2305.1bp,761.0bp) .. (2394.2bp,761.0bp) .. controls (2394.2bp,761.0bp) and (2394.2bp,761.0bp) .. (2822.5bp,761.0bp) .. controls (2842.4bp,761.0bp) and (2863.4bp,825.01bp) .. (2865.7bp,834.0bp) .. controls (2895.2bp,948.04bp) and (2904.9bp,1314.1bp) .. (2907.5bp,1439.8bp);
(2294.8bp,782.5bp) node ;
[->] (1706.0bp,749.0bp) .. controls (1725.6bp,749.0bp) and (1752.1bp,749.0bp) .. (1775.5bp,749.0bp) .. controls (1775.5bp,749.0bp) and (1775.5bp,749.0bp) .. (2195.4bp,749.0bp) .. controls (2284.5bp,749.0bp) and (2305.1bp,723.0bp) .. (2394.2bp,723.0bp) .. controls (2394.2bp,723.0bp) and (2394.2bp,723.0bp) .. (2822.5bp,723.0bp) .. controls (2894.5bp,723.0bp) and (2905.9bp,1280.7bp) .. (2907.7bp,1439.5bp);
(2294.8bp,744.5bp) node ;
[->] (1702.0bp,733.54bp) .. controls (1721.3bp,723.02bp) and (1749.2bp,711.0bp) .. (1775.5bp,711.0bp) .. controls (1775.5bp,711.0bp) and (1775.5bp,711.0bp) .. (2195.4bp,711.0bp) .. controls (2284.5bp,711.0bp) and (2305.1bp,685.0bp) .. (2394.2bp,685.0bp) .. controls (2394.2bp,685.0bp) and (2394.2bp,685.0bp) .. (2822.5bp,685.0bp) .. controls (2898.4bp,685.0bp) and (2906.9bp,1276.3bp) .. (2907.9bp,1439.5bp);
(2294.8bp,706.5bp) node ;
red
[->] (1682.6bp,778.02bp) .. controls (1699.3bp,865.85bp) and (1749.2bp,1128.9bp) .. (1770.0bp,1238.9bp);
;
strokecol
(1726.3bp,1016.5bp) node ;
[->] (1303.8bp,2878.1bp) .. controls (1323.3bp,2882.4bp) and (1350.1bp,2887.0bp) .. (1373.9bp,2887.0bp) .. controls (1373.9bp,2887.0bp) and (1373.9bp,2887.0bp) .. (2822.5bp,2887.0bp) .. controls (2844.5bp,2887.0bp) and (2854.3bp,2881.8bp) .. (2865.7bp,2863.0bp) .. controls (2902.2bp,2803.1bp) and (2907.2bp,1713.2bp) .. (2907.9bp,1490.4bp);
(2111.0bp,2894.5bp) node ;
[->] (1302.8bp,2861.5bp) .. controls (1322.3bp,2855.6bp) and (1349.4bp,2849.0bp) .. (1373.9bp,2849.0bp) .. controls (1373.9bp,2849.0bp) and (1373.9bp,2849.0bp) .. (2822.5bp,2849.0bp) .. controls (2845.6bp,2849.0bp) and (2854.5bp,2840.2bp) .. (2865.7bp,2820.0bp) .. controls (2898.6bp,2760.6bp) and (2906.5bp,1707.5bp) .. (2907.8bp,1490.2bp);
(2111.0bp,2856.5bp) node ;
[->] (1296.8bp,2890.8bp) .. controls (1315.6bp,2906.2bp) and (1344.5bp,2925.0bp) .. (1373.9bp,2925.0bp) .. controls (1373.9bp,2925.0bp) and (1373.9bp,2925.0bp) .. (2822.5bp,2925.0bp) .. controls (2850.2bp,2925.0bp) and (2855.0bp,2905.6bp) .. (2865.7bp,2880.0bp) .. controls (2893.2bp,2814.5bp) and (2905.5bp,1714.1bp) .. (2907.7bp,1490.4bp);
(2111.0bp,2932.5bp) node ;
red
[->] (1285.8bp,2843.2bp) .. controls (1297.5bp,2810.5bp) and (1316.4bp,2754.5bp) .. (1327.2bp,2705.0bp) .. controls (1352.2bp,2591.4bp) and (1365.5bp,2454.4bp) .. (1371.8bp,2377.9bp);
;
strokecol
(1324.7bp,2730.5bp) node ;
[->] (1403.7bp,2340.6bp) .. controls (1423.3bp,2336.0bp) and (1450.2bp,2331.0bp) .. (1474.3bp,2331.0bp) .. controls (1474.3bp,2331.0bp) and (1474.3bp,2331.0bp) .. (2112.0bp,2331.0bp) .. controls (2256.2bp,2331.0bp) and (2623.4bp,2334.3bp) .. (2759.3bp,2286.0bp) .. controls (2762.7bp,2284.8bp) and (2864.0bp,2214.2bp) .. (2865.7bp,2211.0bp) .. controls (2900.4bp,2148.5bp) and (2906.7bp,1640.2bp) .. (2907.8bp,1490.3bp);
(2152.7bp,2338.5bp) node ;
[->] (1396.5bp,2327.1bp) .. controls (1415.2bp,2310.4bp) and (1444.2bp,2290.0bp) .. (1474.3bp,2290.0bp) .. controls (1474.3bp,2290.0bp) and (1474.3bp,2290.0bp) .. (1626.9bp,2290.0bp) .. controls (1968.0bp,2290.0bp) and (2053.1bp,2272.0bp) .. (2394.2bp,2272.0bp) .. controls (2394.2bp,2272.0bp) and (2394.2bp,2272.0bp) .. (2680.4bp,2272.0bp) .. controls (2775.1bp,2272.0bp) and (2802.1bp,2219.5bp) .. (2842.7bp,2134.0bp) .. controls (2857.5bp,2102.9bp) and (2859.3bp,2093.8bp) .. (2865.7bp,2060.0bp) .. controls (2905.2bp,1850.5bp) and (2908.5bp,1593.2bp) .. (2908.2bp,1490.2bp);
(2152.7bp,2283.5bp) node ;
[->] (1401.9bp,2361.4bp) .. controls (1421.4bp,2370.2bp) and (1448.8bp,2380.0bp) .. (1474.3bp,2380.0bp) .. controls (1474.3bp,2380.0bp) and (1474.3bp,2380.0bp) .. (1827.7bp,2380.0bp) .. controls (2197.9bp,2380.0bp) and (2292.2bp,2377.0bp) .. (2658.9bp,2326.0bp) .. controls (2745.3bp,2314.0bp) and (2777.6bp,2315.1bp) .. (2842.7bp,2257.0bp) .. controls (2855.3bp,2245.8bp) and (2859.9bp,2242.8bp) .. (2865.7bp,2227.0bp) .. controls (2891.1bp,2158.4bp) and (2904.1bp,1641.5bp) .. (2907.4bp,1490.2bp);
(2152.7bp,2384.5bp) node ;
red
[->] (1382.4bp,2319.1bp) .. controls (1399.5bp,2242.3bp) and (1446.2bp,2033.3bp) .. (1467.9bp,1936.0bp);
;
strokecol
(1425.1bp,2134.5bp) node ;
[->] (389.24bp,2700.9bp) .. controls (392.04bp,2421.2bp) and (414.85bp,281.82bp) .. (458.65bp,234.0bp) .. controls (583.55bp,97.627bp) and (687.03bp,186.0bp) .. (871.96bp,186.0bp) .. controls (871.96bp,186.0bp) and (871.96bp,186.0bp) .. (2822.5bp,186.0bp) .. controls (2837.3bp,186.0bp) and (2863.3bp,254.12bp) .. (2865.7bp,264.0bp) .. controls (2893.7bp,379.7bp) and (2905.3bp,1243.4bp) .. (2907.7bp,1439.7bp);
(1676.1bp,193.5bp) node ;
[->] (417.85bp,2729.8bp) .. controls (452.31bp,2728.5bp) and (510.03bp,2726.1bp) .. (558.8bp,2724.2bp);
(488.35bp,2735.5bp) node ;
[->] (416.18bp,2740.6bp) .. controls (474.48bp,2760.1bp) and (616.02bp,2803.0bp) .. (738.34bp,2803.0bp) .. controls (738.34bp,2803.0bp) and (738.34bp,2803.0bp) .. (1125.0bp,2803.0bp) .. controls (1168.4bp,2803.0bp) and (1213.1bp,2827.3bp) .. (1250.3bp,2852.8bp);
(822.76bp,2810.5bp) node ;
red
[->] (403.02bp,2756.7bp) .. controls (411.78bp,2772.0bp) and (423.63bp,2791.6bp) .. (435.65bp,2808.0bp) .. controls (443.19bp,2818.3bp) and (452.15bp,2829.0bp) .. (467.25bp,2845.9bp);
;
strokecol
(438.15bp,2820.5bp) node ;
[->] (2657.4bp,1595.9bp) .. controls (2702.3bp,1578.8bp) and (2794.2bp,1541.5bp) .. (2865.7bp,1498.0bp) .. controls (2870.9bp,1494.8bp) and (2876.2bp,1491.1bp) .. (2889.3bp,1481.1bp);
(2779.8bp,1551.5bp) node ;
[->] (2658.8bp,1610.1bp) .. controls (2687.2bp,1615.5bp) and (2730.7bp,1627.5bp) .. (2759.3bp,1653.0bp) .. controls (2788.3bp,1678.8bp) and (2804.8bp,1721.4bp) .. (2815.9bp,1760.2bp);
(2729.6bp,1660.5bp) node ;
[->] (2644.2bp,1632.1bp) .. controls (2654.8bp,1652.2bp) and (2669.7bp,1680.7bp) .. (2681.9bp,1706.0bp) .. controls (2690.4bp,1723.6bp) and (2685.2bp,1733.2bp) .. (2699.9bp,1746.0bp) .. controls (2725.1bp,1767.9bp) and (2763.2bp,1776.1bp) .. (2800.1bp,1780.2bp);
(2729.6bp,1780.5bp) node ;
red
[->] (2655.9bp,1619.0bp) .. controls (2664.7bp,1624.2bp) and (2674.3bp,1630.7bp) .. (2681.9bp,1638.0bp) .. controls (2685.2bp,1641.1bp) and (2696.3bp,1657.6bp) .. (2712.7bp,1682.5bp);
;
strokecol
(2679.4bp,1645.5bp) node ;
[->] (2367.2bp,1273.1bp) .. controls (2395.4bp,1247.7bp) and (2447.6bp,1203.7bp) .. (2499.1bp,1177.0bp) .. controls (2634.9bp,1106.6bp) and (2720.1bp,1066.6bp) .. (2842.7bp,1158.0bp) .. controls (2885.9bp,1190.2bp) and (2901.3bp,1355.9bp) .. (2906.7bp,1439.8bp);
(2629.2bp,1136.5bp) node ;
[->] (2349.9bp,1322.3bp) .. controls (2360.0bp,1386.5bp) and (2391.9bp,1541.2bp) .. (2476.1bp,1636.0bp) .. controls (2511.2bp,1675.5bp) and (2537.7bp,1661.4bp) .. (2581.5bp,1691.0bp) .. controls (2610.2bp,1710.3bp) and (2667.4bp,1780.0bp) .. (2699.9bp,1792.0bp) .. controls (2729.7bp,1803.0bp) and (2766.4bp,1797.2bp) .. (2801.0bp,1787.8bp);
(2579.0bp,1698.5bp) node ;
red
[->] (2374.9bp,1293.0bp) .. controls (2384.6bp,1293.0bp) and (2395.5bp,1293.0bp) .. (2415.5bp,1293.0bp);
;
strokecol
(2395.2bp,1300.5bp) node ;
[->] (2359.0bp,1319.4bp) .. controls (2391.6bp,1383.0bp) and (2482.3bp,1546.8bp) .. (2599.5bp,1645.0bp) .. controls (2626.8bp,1667.9bp) and (2664.0bp,1684.6bp) .. (2701.1bp,1698.3bp);
(2528.8bp,1612.5bp) node ;
[->] (2076.8bp,1669.5bp) .. controls (2092.4bp,1646.7bp) and (2119.0bp,1617.0bp) .. (2151.7bp,1617.0bp) .. controls (2151.7bp,1617.0bp) and (2151.7bp,1617.0bp) .. (2479.6bp,1617.0bp) .. controls (2537.3bp,1617.0bp) and (2545.4bp,1587.2bp) .. (2599.5bp,1567.0bp) .. controls (2698.5bp,1530.1bp) and (2817.8bp,1492.6bp) .. (2884.5bp,1472.1bp);
(2478.6bp,1624.5bp) node ;
[->] (2088.6bp,1706.6bp) .. controls (2095.0bp,1709.0bp) and (2101.9bp,1711.3bp) .. (2108.5bp,1713.0bp) .. controls (2304.0bp,1763.4bp) and (2360.1bp,1738.1bp) .. (2558.5bp,1776.0bp) .. controls (2621.9bp,1788.1bp) and (2635.8bp,1801.6bp) .. (2699.9bp,1809.0bp) .. controls (2736.3bp,1813.2bp) and (2747.8bp,1819.3bp) .. (2782.3bp,1807.0bp) .. controls (2787.0bp,1805.3bp) and (2791.8bp,1802.9bp) .. (2804.8bp,1794.4bp);
(2436.9bp,1767.5bp) node ;
red
[->] (2071.6bp,1667.2bp) .. controls (2082.4bp,1636.3bp) and (2100.0bp,1584.8bp) .. (2113.5bp,1540.0bp) .. controls (2125.1bp,1501.2bp) and (2136.8bp,1455.9bp) .. (2146.6bp,1416.6bp);
;
strokecol
(2111.0bp,1559.5bp) node ;
[->] (2090.5bp,1697.9bp) .. controls (2108.1bp,1699.4bp) and (2131.2bp,1701.0bp) .. (2151.7bp,1701.0bp) .. controls (2151.7bp,1701.0bp) and (2151.7bp,1701.0bp) .. (2396.2bp,1701.0bp) .. controls (2476.3bp,1701.0bp) and (2496.3bp,1704.6bp) .. (2576.5bp,1706.0bp) .. controls (2614.9bp,1706.7bp) and (2658.7bp,1706.9bp) .. (2699.7bp,1707.0bp);
(2395.2bp,1708.5bp) node ;
[->] (1777.2bp,1238.2bp) .. controls (1777.0bp,1138.9bp) and (1782.9bp,825.0bp) .. (1867.4bp,825.0bp) .. controls (1867.4bp,825.0bp) and (1867.4bp,825.0bp) .. (2195.4bp,825.0bp) .. controls (2321.3bp,825.0bp) and (2351.6bp,799.0bp) .. (2477.6bp,799.0bp) .. controls (2477.6bp,799.0bp) and (2477.6bp,799.0bp) .. (2822.5bp,799.0bp) .. controls (2852.2bp,799.0bp) and (2854.5bp,822.47bp) .. (2865.7bp,850.0bp) .. controls (2908.8bp,956.13bp) and (2909.4bp,1314.8bp) .. (2908.3bp,1439.5bp);
(2345.0bp,823.5bp) node ;
[->] (1783.3bp,1297.1bp) .. controls (1802.1bp,1389.2bp) and (1864.4bp,1671.8bp) .. (1930.7bp,1734.0bp) .. controls (2052.3bp,1848.0bp) and (2127.1bp,1828.0bp) .. (2293.8bp,1828.0bp) .. controls (2293.8bp,1828.0bp) and (2293.8bp,1828.0bp) .. (2479.6bp,1828.0bp) .. controls (2603.9bp,1828.0bp) and (2635.5bp,1836.4bp) .. (2759.3bp,1825.0bp) .. controls (2769.6bp,1824.1bp) and (2773.1bp,1825.9bp) .. (2782.3bp,1821.0bp) .. controls (2789.6bp,1817.1bp) and (2796.2bp,1811.4bp) .. (2808.6bp,1797.9bp);
(2294.8bp,1835.5bp) node ;
red
[->] (1806.5bp,1268.0bp) .. controls (1816.2bp,1268.0bp) and (1827.1bp,1268.0bp) .. (1847.1bp,1268.0bp);
;
strokecol
(1826.7bp,1275.5bp) node ;
[->] (1794.4bp,1291.8bp) .. controls (1821.3bp,1330.2bp) and (1874.7bp,1409.0bp) .. (1912.7bp,1480.0bp) .. controls (1952.9bp,1555.4bp) and (1960.1bp,1576.0bp) .. (1990.1bp,1656.0bp) .. controls (2003.8bp,1692.7bp) and (1995.2bp,1718.3bp) .. (2031.1bp,1734.0bp) .. controls (2138.8bp,1781.1bp) and (2443.1bp,1798.5bp) .. (2558.5bp,1776.0bp) .. controls (2607.8bp,1766.4bp) and (2660.9bp,1742.4bp) .. (2703.3bp,1720.9bp);
(2244.6bp,1786.5bp) node ;
[->] (1477.4bp,1936.8bp) .. controls (1481.0bp,1993.7bp) and (1498.0bp,2112.0bp) .. (1574.7bp,2112.0bp) .. controls (1574.7bp,2112.0bp) and (1574.7bp,2112.0bp) .. (1911.2bp,2112.0bp) .. controls (2081.2bp,2112.0bp) and (2123.7bp,2120.0bp) .. (2293.8bp,2120.0bp) .. controls (2293.8bp,2120.0bp) and (2293.8bp,2120.0bp) .. (2822.5bp,2120.0bp) .. controls (2848.6bp,2120.0bp) and (2861.9bp,2053.2bp) .. (2865.7bp,2036.0bp) .. controls (2888.7bp,1933.1bp) and (2902.6bp,1608.5bp) .. (2907.0bp,1490.3bp);
(2194.4bp,2125.5bp) node ;
[->] (1505.2bp,1907.0bp) .. controls (1524.8bp,1907.0bp) and (1551.3bp,1907.0bp) .. (1574.7bp,1907.0bp) .. controls (1574.7bp,1907.0bp) and (1574.7bp,1907.0bp) .. (2112.0bp,1907.0bp) .. controls (2401.2bp,1907.0bp) and (2474.8bp,1893.1bp) .. (2759.3bp,1841.0bp) .. controls (2769.6bp,1839.1bp) and (2773.6bp,1841.8bp) .. (2782.3bp,1836.0bp) .. controls (2792.4bp,1829.3bp) and (2800.6bp,1818.9bp) .. (2811.9bp,1800.1bp);
(2152.7bp,1913.5bp) node ;
red
[->] (1478.9bp,1877.5bp) .. controls (1492.3bp,1719.0bp) and (1555.9bp,969.68bp) .. (1572.2bp,777.65bp);
;
strokecol
(1525.5bp,1334.5bp) node ;
[->] (1499.6bp,1889.1bp) .. controls (1518.7bp,1876.2bp) and (1547.0bp,1861.0bp) .. (1574.7bp,1861.0bp) .. controls (1574.7bp,1861.0bp) and (1574.7bp,1861.0bp) .. (2112.0bp,1861.0bp) .. controls (2192.8bp,1861.0bp) and (2212.9bp,1866.0bp) .. (2293.8bp,1866.0bp) .. controls (2293.8bp,1866.0bp) and (2293.8bp,1866.0bp) .. (2479.6bp,1866.0bp) .. controls (2574.1bp,1866.0bp) and (2661.6bp,1783.6bp) .. (2710.0bp,1729.8bp);
(2111.0bp,1868.5bp) node ;
[->] (689.75bp,1678.8bp) .. controls (688.05bp,1840.1bp) and (685.06bp,2601.0bp) .. (780.05bp,2601.0bp) .. controls (780.05bp,2601.0bp) and (780.05bp,2601.0bp) .. (1225.4bp,2601.0bp) .. controls (1388.5bp,2601.0bp) and (1220.2bp,2358.9bp) .. (1345.2bp,2254.0bp) .. controls (1370.5bp,2232.8bp) and (1497.9bp,2201.0bp) .. (1725.3bp,2201.0bp) .. controls (1725.3bp,2201.0bp) and (1725.3bp,2201.0bp) .. (2011.6bp,2201.0bp) .. controls (2092.4bp,2201.0bp) and (2112.5bp,2196.0bp) .. (2193.4bp,2196.0bp) .. controls (2193.4bp,2196.0bp) and (2193.4bp,2196.0bp) .. (2680.4bp,2196.0bp) .. controls (2730.3bp,2196.0bp) and (2736.0bp,2160.1bp) .. (2759.3bp,2116.0bp) .. controls (2812.9bp,2014.7bp) and (2820.4bp,1874.7bp) .. (2820.9bp,1802.4bp);
(1776.5bp,2208.5bp) node ;
red
[->] (719.08bp,1649.0bp) .. controls (728.81bp,1649.0bp) and (739.69bp,1649.0bp) .. (759.7bp,1649.0bp);
;
strokecol
(739.34bp,1656.5bp) node ;
[->] (690.83bp,1619.1bp) .. controls (694.03bp,1486.4bp) and (708.51bp,949.05bp) .. (736.84bp,783.0bp) .. controls (748.36bp,715.45bp) and (711.53bp,635.0bp) .. (780.05bp,635.0bp) .. controls (780.05bp,635.0bp) and (780.05bp,635.0bp) .. (1476.3bp,635.0bp) .. controls (1517.4bp,635.0bp) and (1545.3bp,678.0bp) .. (1564.7bp,720.36bp);
(1124.0bp,642.5bp) node ;
[->] (689.83bp,1619.2bp) .. controls (688.6bp,1460.7bp) and (687.78bp,723.0bp) .. (780.05bp,723.0bp) .. controls (780.05bp,723.0bp) and (780.05bp,723.0bp) .. (1074.8bp,723.0bp) .. controls (1155.3bp,723.0bp) and (1170.2bp,1347.2bp) .. (1172.8bp,1523.9bp);
(923.16bp,730.5bp) node ;
[->] (994.96bp,545.06bp) .. controls (1013.6bp,528.45bp) and (1042.6bp,508.0bp) .. (1072.8bp,508.0bp) .. controls (1072.8bp,508.0bp) and (1072.8bp,508.0bp) .. (1827.7bp,508.0bp) .. controls (2012.6bp,508.0bp) and (2058.7bp,495.0bp) .. (2243.6bp,495.0bp) .. controls (2243.6bp,495.0bp) and (2243.6bp,495.0bp) .. (2822.5bp,495.0bp) .. controls (2844.5bp,495.0bp) and (2854.2bp,500.32bp) .. (2865.7bp,519.0bp) .. controls (2890.1bp,558.39bp) and (2904.3bp,1261.9bp) .. (2907.5bp,1439.4bp);
(1960.4bp,514.5bp) node ;
[->] (974.3bp,595.96bp) .. controls (974.03bp,836.84bp) and (975.83bp,2449.0bp) .. (1072.8bp,2449.0bp) .. controls (1072.8bp,2449.0bp) and (1072.8bp,2449.0bp) .. (1225.4bp,2449.0bp) .. controls (1748.0bp,2449.0bp) and (1871.5bp,2578.0bp) .. (2394.2bp,2578.0bp) .. controls (2394.2bp,2578.0bp) and (2394.2bp,2578.0bp) .. (2822.5bp,2578.0bp) .. controls (2850.9bp,2578.0bp) and (2854.9bp,2557.2bp) .. (2865.7bp,2531.0bp) .. controls (2905.7bp,2434.2bp) and (2908.0bp,1675.8bp) .. (2908.0bp,1490.7bp);
(1960.4bp,2553.5bp) node ;
[->] (1002.7bp,560.7bp) .. controls (1022.3bp,557.49bp) and (1049.0bp,554.0bp) .. (1072.8bp,554.0bp) .. controls (1072.8bp,554.0bp) and (1072.8bp,554.0bp) .. (1911.2bp,554.0bp) .. controls (2081.5bp,554.0bp) and (2123.5bp,533.0bp) .. (2293.8bp,533.0bp) .. controls (2293.8bp,533.0bp) and (2293.8bp,533.0bp) .. (2822.5bp,533.0bp) .. controls (2854.4bp,533.0bp) and (2860.3bp,599.31bp) .. (2865.7bp,629.0bp) .. controls (2894.3bp,786.29bp) and (2904.9bp,1290.7bp) .. (2907.5bp,1439.5bp);
(1960.4bp,561.5bp) node ;
red
[->] (977.3bp,595.57bp) .. controls (991.48bp,737.89bp) and (1052.7bp,1352.2bp) .. (1069.8bp,1524.5bp);
;
strokecol
(1023.6bp,1067.5bp) node ;
[->] (1076.4bp,1583.8bp) .. controls (1083.7bp,1705.1bp) and (1114.6bp,2155.0bp) .. (1173.2bp,2155.0bp) .. controls (1173.2bp,2155.0bp) and (1173.2bp,2155.0bp) .. (1727.3bp,2155.0bp) .. controls (1852.8bp,2155.0bp) and (1884.1bp,2158.0bp) .. (2009.6bp,2158.0bp) .. controls (2009.6bp,2158.0bp) and (2009.6bp,2158.0bp) .. (2680.4bp,2158.0bp) .. controls (2754.0bp,2158.0bp) and (2801.4bp,1902.1bp) .. (2817.4bp,1802.1bp);
(1960.4bp,2164.5bp) node ;
[->] (1074.3bp,1584.0bp) .. controls (1072.9bp,1692.5bp) and (1072.1bp,2063.4bp) .. (1121.5bp,2169.0bp) .. controls (1277.4bp,2502.6bp) and (1540.9bp,2426.0bp) .. (1909.2bp,2426.0bp) .. controls (1909.2bp,2426.0bp) and (1909.2bp,2426.0bp) .. (2529.8bp,2426.0bp) .. controls (2553.2bp,2426.0bp) and (2579.2bp,2431.0bp) .. (2608.4bp,2438.2bp);
(1868.4bp,2433.5bp) node ;
[->] (1075.2bp,1523.9bp) .. controls (1077.8bp,1377.6bp) and (1093.4bp,748.0bp) .. (1173.2bp,748.0bp) .. controls (1173.2bp,748.0bp) and (1173.2bp,748.0bp) .. (1476.3bp,748.0bp) .. controls (1495.9bp,748.0bp) and (1517.7bp,748.0bp) .. (1545.9bp,748.0bp);
(1324.7bp,755.5bp) node ;
red
[->] (1103.5bp,1554.0bp) .. controls (1113.0bp,1554.0bp) and (1123.8bp,1554.0bp) .. (1144.2bp,1554.0bp);
;
strokecol
(1124.0bp,1561.5bp) node ;
[->] (509.95bp,2887.9bp) .. controls (528.56bp,2904.6bp) and (557.6bp,2925.0bp) .. (587.74bp,2925.0bp) .. controls (587.74bp,2925.0bp) and (587.74bp,2925.0bp) .. (740.34bp,2925.0bp) .. controls (867.67bp,2925.0bp) and (899.13bp,2935.9bp) .. (1026.1bp,2946.0bp) .. controls (1113.8bp,2953.0bp) and (1135.3bp,2963.0bp) .. (1223.4bp,2963.0bp) .. controls (1223.4bp,2963.0bp) and (1223.4bp,2963.0bp) .. (2822.5bp,2963.0bp) .. controls (2836.2bp,2963.0bp) and (2864.4bp,2903.7bp) .. (2865.7bp,2899.0bp) .. controls (2904.8bp,2760.8bp) and (2907.7bp,1707.9bp) .. (2907.9bp,1490.4bp);
(1726.3bp,2970.5bp) node ;
red
[->] (505.81bp,2842.9bp) .. controls (522.13bp,2819.0bp) and (547.24bp,2782.3bp) .. (571.22bp,2747.2bp);
;
strokecol
(538.54bp,2802.5bp) node ;
[->] (517.92bp,2864.1bp) .. controls (587.18bp,2857.3bp) and (769.54bp,2841.0bp) .. (922.16bp,2841.0bp) .. controls (922.16bp,2841.0bp) and (922.16bp,2841.0bp) .. (1125.0bp,2841.0bp) .. controls (1200.3bp,2841.0bp) and (1170.6bp,2758.6bp) .. (1203.9bp,2691.0bp) .. controls (1251.1bp,2594.9bp) and (1268.7bp,2573.0bp) .. (1304.2bp,2472.0bp) .. controls (1329.0bp,2401.5bp) and (1291.9bp,2361.3bp) .. (1345.2bp,2309.0bp) .. controls (1452.6bp,2203.9bp) and (1492.4bp,2234.0bp) .. (2193.4bp,2234.0bp) .. controls (2193.4bp,2234.0bp) and (2193.4bp,2234.0bp) .. (2680.4bp,2234.0bp) .. controls (2726.1bp,2234.0bp) and (2734.6bp,2206.5bp) .. (2759.3bp,2168.0bp) .. controls (2797.2bp,2108.9bp) and (2813.8bp,1893.4bp) .. (2819.4bp,1802.1bp);
(1676.1bp,2237.5bp) node ;
[->] (515.48bp,2879.6bp) .. controls (522.04bp,2882.2bp) and (529.18bp,2884.6bp) .. (536.04bp,2886.0bp) .. controls (580.45bp,2894.9bp) and (592.65bp,2887.0bp) .. (637.94bp,2887.0bp) .. controls (637.94bp,2887.0bp) and (637.94bp,2887.0bp) .. (1175.2bp,2887.0bp) .. controls (1195.2bp,2887.0bp) and (1217.3bp,2883.7bp) .. (1245.3bp,2878.1bp);
(872.96bp,2894.5bp) node ;
[->] (2750.1bp,1685.1bp) .. controls (2753.4bp,1680.9bp) and (2756.6bp,1676.4bp) .. (2759.3bp,1672.0bp) .. controls (2769.8bp,1654.6bp) and (2766.4bp,1647.2bp) .. (2777.3bp,1630.0bp) .. controls (2808.0bp,1581.5bp) and (2830.9bp,1581.6bp) .. (2865.7bp,1536.0bp) .. controls (2875.2bp,1523.6bp) and (2884.2bp,1509.0bp) .. (2896.4bp,1487.3bp);
(2821.5bp,1606.5bp) node ;
[->] (2755.9bp,1692.2bp) .. controls (2780.8bp,1676.5bp) and (2818.6bp,1649.6bp) .. (2842.7bp,1618.0bp) .. controls (2870.4bp,1581.7bp) and (2888.7bp,1532.0bp) .. (2901.7bp,1489.2bp);
(2821.5bp,1666.5bp) node ;
red
[->] (2758.9bp,1701.0bp) .. controls (2767.0bp,1700.6bp) and (2775.4bp,1701.8bp) .. (2782.3bp,1706.0bp) .. controls (2798.2bp,1715.7bp) and (2807.7bp,1734.4bp) .. (2816.3bp,1760.4bp);
;
strokecol
(2779.8bp,1713.5bp) node ;
[->] (2754.8bp,1723.2bp) .. controls (2763.7bp,1729.4bp) and (2773.6bp,1736.8bp) .. (2782.3bp,1744.0bp) .. controls (2787.8bp,1748.6bp) and (2793.5bp,1753.8bp) .. (2806.1bp,1766.1bp);
(2779.8bp,1751.5bp) node ;
[->] (1179.2bp,1524.6bp) .. controls (1202.1bp,1363.6bp) and (1321.9bp,597.0bp) .. (1524.5bp,597.0bp) .. controls (1524.5bp,597.0bp) and (1524.5bp,597.0bp) .. (1911.2bp,597.0bp) .. controls (2081.6bp,597.0bp) and (2123.3bp,571.0bp) .. (2293.8bp,571.0bp) .. controls (2293.8bp,571.0bp) and (2293.8bp,571.0bp) .. (2822.5bp,571.0bp) .. controls (2910.3bp,571.0bp) and (2909.7bp,1263.5bp) .. (2908.2bp,1439.7bp);
(2060.8bp,599.5bp) node ;
[->] (1200.0bp,1569.5bp) .. controls (1219.3bp,1580.0bp) and (1247.2bp,1592.0bp) .. (1273.6bp,1592.0bp) .. controls (1273.6bp,1592.0bp) and (1273.6bp,1592.0bp) .. (2061.8bp,1592.0bp) .. controls (2120.5bp,1592.0bp) and (2134.6bp,1579.0bp) .. (2193.4bp,1579.0bp) .. controls (2193.4bp,1579.0bp) and (2193.4bp,1579.0bp) .. (2479.6bp,1579.0bp) .. controls (2559.5bp,1579.0bp) and (2581.2bp,1548.7bp) .. (2658.9bp,1567.0bp) .. controls (2707.4bp,1578.4bp) and (2724.9bp,1581.0bp) .. (2759.3bp,1617.0bp) .. controls (2794.7bp,1654.1bp) and (2810.1bp,1713.6bp) .. (2818.0bp,1759.8bp);
(2010.6bp,1599.5bp) node ;
[->] (1204.0bp,1554.0bp) .. controls (1223.6bp,1554.0bp) and (1250.1bp,1554.0bp) .. (1273.6bp,1554.0bp) .. controls (1273.6bp,1554.0bp) and (1273.6bp,1554.0bp) .. (1869.4bp,1554.0bp) .. controls (1976.0bp,1554.0bp) and (2002.0bp,1540.0bp) .. (2108.5bp,1536.0bp) .. controls (2180.8bp,1533.3bp) and (2703.5bp,1527.8bp) .. (2759.3bp,1574.0bp) .. controls (2773.1bp,1585.5bp) and (2800.6bp,1695.8bp) .. (2815.8bp,1760.4bp);
(2010.6bp,1551.5bp) node ;
red
[->] (1177.4bp,1583.7bp) .. controls (1190.3bp,1756.5bp) and (1255.8bp,2632.8bp) .. (1271.3bp,2841.3bp);
;
strokecol
(1224.4bp,2219.5bp) node ;
;
strokecol
(2908.93bp,1465.0bp) ellipse (21.39bp and 21.39bp);
(2908.93bp,1465.0bp) ellipse (25.43bp and 25.43bp);
(2908.9bp,1465.0bp) node 1;
;
strokecol
(112.21bp,2696.0bp) ellipse (21.43bp and 21.43bp);
(112.21bp,2696.0bp) node 0;
;
strokecol
(195.64bp,2993.0bp) ellipse (21.43bp and 21.43bp);
(195.64bp,2993.0bp) node 6;
;
strokecol
(287.55bp,2613.0bp) ellipse (29.9bp and 29.9bp);
(287.55bp,2613.0bp) node 12;
;
strokecol
(287.55bp,2789.0bp) ellipse (29.9bp and 29.9bp);
(287.55bp,2789.0bp) node 35;
;
strokecol
(588.74bp,2723.0bp) ellipse (29.9bp and 29.9bp);
(588.74bp,2723.0bp) node 18;
;
strokecol
(387.95bp,3051.0bp) ellipse (29.9bp and 29.9bp);
(387.95bp,3051.0bp) node 23;
;
strokecol
(1374.95bp,2348.0bp) ellipse (29.9bp and 29.9bp);
(1374.9bp,2348.0bp) node 17;
;
strokecol
(387.95bp,2731.0bp) ellipse (29.9bp and 29.9bp);
(387.95bp,2731.0bp) node 24;
;
strokecol
(387.95bp,273.0bp) ellipse (21.43bp and 21.43bp);
(387.95bp,273.0bp) node 3;
;
strokecol
(1274.55bp,2871.0bp) ellipse (29.9bp and 29.9bp);
(1274.6bp,2871.0bp) node 11;
;
strokecol
(488.35bp,2867.0bp) ellipse (29.9bp and 29.9bp);
(488.35bp,2867.0bp) node 27;
;
strokecol
(689.14bp,1649.0bp) ellipse (29.9bp and 29.9bp);
(689.14bp,1649.0bp) node 33;
;
strokecol
(2821.5bp,1781.0bp) ellipse (21.43bp and 21.43bp);
(2821.5bp,1781.0bp) node 2;
;
strokecol
(488.35bp,273.0bp) ellipse (29.9bp and 29.9bp);
(488.35bp,273.0bp) node 20;
;
strokecol
(781.05bp,1649.0bp) ellipse (21.43bp and 21.43bp);
(781.05bp,1649.0bp) node 4;
;
strokecol
(872.96bp,566.0bp) ellipse (29.9bp and 29.9bp);
(872.96bp,566.0bp) node 21;
;
strokecol
(973.36bp,566.0bp) ellipse (29.9bp and 29.9bp);
(973.36bp,566.0bp) node 19;
;
strokecol
(2629.2bp,2444.0bp) ellipse (21.43bp and 21.43bp);
(2629.2bp,2444.0bp) node 5;
;
strokecol
(2729.59bp,2444.0bp) ellipse (29.9bp and 29.9bp);
(2729.6bp,2444.0bp) node 22;
;
strokecol
(2436.89bp,1293.0bp) ellipse (21.43bp and 21.43bp);
(2436.9bp,1293.0bp) node 7;
;
strokecol
(2528.8bp,1216.0bp) ellipse (29.9bp and 29.9bp);
(2528.8bp,1216.0bp) node 13;
;
strokecol
(2629.2bp,1606.0bp) ellipse (29.9bp and 29.9bp);
(2629.2bp,1606.0bp) node 28;
;
strokecol
(2152.67bp,1396.0bp) ellipse (21.43bp and 21.43bp);
(2152.7bp,1396.0bp) node 8;
;
strokecol
(2244.58bp,1078.0bp) ellipse (29.9bp and 29.9bp);
(2244.6bp,1078.0bp) node 14;
;
strokecol
(2344.98bp,1293.0bp) ellipse (29.9bp and 29.9bp);
(2345.0bp,1293.0bp) node 29;
;
strokecol
(1868.45bp,1268.0bp) ellipse (21.43bp and 21.43bp);
(1868.4bp,1268.0bp) node 9;
;
strokecol
(1960.36bp,1695.0bp) ellipse (29.9bp and 29.9bp);
(1960.4bp,1695.0bp) node 15;
;
strokecol
(2060.76bp,1695.0bp) ellipse (29.9bp and 29.9bp);
(2060.8bp,1695.0bp) node 30;
;
strokecol
(1575.74bp,748.0bp) ellipse (29.9bp and 29.9bp);
(1575.7bp,748.0bp) node 10;
;
strokecol
(1676.14bp,749.0bp) ellipse (29.9bp and 29.9bp);
(1676.1bp,749.0bp) node 16;
;
strokecol
(1776.54bp,1268.0bp) ellipse (29.9bp and 29.9bp);
(1776.5bp,1268.0bp) node 31;
;
strokecol
(1475.34bp,1907.0bp) ellipse (29.9bp and 29.9bp);
(1475.3bp,1907.0bp) node 32;
;
strokecol
(2729.59bp,1707.0bp) ellipse (29.9bp and 29.9bp);
(2729.6bp,1707.0bp) node 25;
;
strokecol
(1174.15bp,1554.0bp) ellipse (29.9bp and 29.9bp);
(1174.2bp,1554.0bp) node 26;
;
strokecol
(1073.76bp,1554.0bp) ellipse (29.9bp and 29.9bp);
(1073.8bp,1554.0bp) node 34;
The minimal DFA of ⋃_i ∈1, 21^* [2]_i ^*.
(See online).
|
http://arxiv.org/abs/2307.04954v2 | 20230711005644 | Hybrid hidden Markov LSTM for short-term traffic flow prediction | [
"Agnimitra Sengupta",
"Adway Das",
"S. Ilgin Guler"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
Hybrid hidden Markov LSTM for short-term traffic flow prediction
Agnimitra Sengupta, Adway Das, S. Ilgin Guler
==================================================================================================================================================
Deep learning (DL) methods have outperformed parametric models such as historical average, ARIMA and variants in predicting traffic variables into the short and near-short term future, which is critical for traffic management. Specifically, recurrent neural networks (RNN) and their variants (e.g. long short-term memory) are designed to retain long-term temporal correlations and therefore are suitable for modeling sequences. In contrast, multi-regime models assume the traffic system to evolve through multiple states (say, free-flow and congestion) with distinct characteristics, and hence separate models are trained to characterize the traffic dynamics within each regime. For instance, Markov-switching models with a hidden Markov model (HMM) for regime identification are capable of capturing complex dynamic patterns and non-stationarity. Interestingly, both HMM and LSTM can be used for modeling an observation sequence from a set of latent or hidden state variables. In LSTM, the latent variable is computed in a deterministic manner from the current observation and the previous latent variable, while in HMM the set of latent variables is a Markov chain. Inspired by research in natural language processing, a hybrid hidden Markov-LSTM model capable of learning complementary features in traffic data is proposed for traffic flow prediction. Results indicate significant performance gains from using the hybrid architecture compared to conventional methods such as Markov-switching ARIMA and LSTM.
§ INTRODUCTION
Accurate traffic predictions in the short or near-short term future, spanning from 5 minutes to 1 hour, play a vital role in efficient traffic management, encompassing traffic control and congestion mitigation. The effectiveness of various traffic control strategies, such as ramp metering or detour suggestions, heavily depends on precise traffic forecasting in the near future. However, achieving precise forecasting across both free-flow and congested traffic states is often challenging due to the inherent uncertainty and chaotic characteristics of transportation systems.
Numerous statistical models, both parametric and non-parametric, have been developed to accurately model the temporal aspects of traffic data.
Parametric models, including historical average (HA) algorithms and autoregressive integrated moving average (ARIMA) <cit.>, fail to uncover complex traffic dynamics as shown in <cit.>. To partially adapt to the complexities of traffic dynamics, multi-variable prediction models <cit.> and state-space models <cit.> were developed. Additionally, trend retrieval using simple averaging, principal component analysis (PCA), and wavelet methods has been discussed in the literature <cit.> to account for the apparent similarity of daily traffic flow time series.
Alternatively, multi-regime prediction models assume that a traffic system evolves through multiple regimes or states (say, free-flow, congestion) with distinct characteristics, and separate regression models are developed to predict traffic flow within each regime <cit.>. These models often use a hidden Markov model (HMM) <cit.> for the identification of traffic regimes; see, for example, <cit.>. Although multi-regime models are observed to identify the local trend within the time series more efficiently, their overall performance was not significantly improved due to errors incurred while switching between regimes <cit.>.
Despite their reasonable performance, the specific functional forms and methodological assumptions of parametric models limit their ability to adapt to the non-linearities associated with short-term trends, which is a major shortcoming. Moreover, traffic data is observed to exhibit chaotic behaviour during congestion, which makes it highly unstable <cit.>.
In contrast, non-parametric techniques do not specify any functional form; rather, they rely on pattern recognition to handle large data quantities. As a result, these approaches can better model traffic patterns with greater transferability and robustness across datasets <cit.>. Nearest neighbors algorithms <cit.>, support vector machines <cit.>, and Bayesian networks <cit.> are among the machine learning (ML) models that have demonstrated successful performance in small-scale traffic prediction problems. However, the success of such models often depends on suitable feature definitions that require engineering judgement <cit.>.
Deep learning (DL) models use a multi-layer neural framework to capture complex relations in non-linear data <cit.>, requiring little to no feature engineering. Following this success, DL models have been extensively used in traffic time series modeling <cit.>. Specifically, recurrent neural networks (RNN) <cit.> and variants like long short-term memory (LSTM) <cit.> are designed to preserve temporal correlations between observations in a time series, and hence are better suited for traffic forecasting as well <cit.>.
Recent research in natural language processing highlights the structural similarity between RNN and HMM, and their capability to learn complementary features from language data <cit.>. Consequently, hybrid models combining both RNN (specifically, long short-term memory or LSTM) and HMM can provide improved modeling capabilities for complex sequential data, such as traffic time series.
This study focuses on investigating hybrid models that leverage the joint usage of HMM and LSTM for the task of traffic flow prediction. Additionally, we explore the feasibility of incorporating duration-based state transitions within the HMM framework in these models.
The remainder of the paper is organized as follows. First, we present an overview on hidden (semi-) Markov models and LSTM, followed by the proposed hybrid models in this study. Next, the performance of hybrid models are compared with the baseline models. Finally, some concluding remarks are presented.
§ BACKGROUND
In this section, we provide a background on hidden Markov models (HMMs) and discuss the sojourn time distributions that are considered in this study. Subsequently, we present an overview of the long short-term memory (LSTM) model, which serves as the basis for the modifications that will be described in the following section.
§.§ Hidden Markov models
Markov chains model dynamical systems with the assumption that the state of the system at a time t only depends on the state in the immediately prior time step, t-1. However, such an assumption often does not hold true for complex dynamic systems. An alternative to Markov chains, the hidden Markov model (HMM) <cit.> assumes the existence of a latent (hidden) process that follows a Markov chain from which observations 𝐗 are generated. Therefore, for an observation sequence 𝐗={x_1,x_2,⋯,x_T} in [1,T], there exists an unobserved state sequence 𝐙={z_1,z_2,⋯,z_T}, where the hidden states z_t, belonging to the state-space Q={q_1,q_2,⋯,q_M}, follow a Markov chain governed by:
* a state-transition probability matrix 𝐀=[a_ij]∈ℝ^M× M where a_ij= p(z_t+1=q_j| z_t=q_i)
* initial state matrix π=[π_i]∈ℝ^1× M with π_i=p(z_1=q_i) (i.e., the prior)
Further, for each hidden state z_t, the corresponding observation x_t is a realization of an emission process 𝐁=[b_j(x)], where b_j(x)=p(x| z=q_j). We assume b_j(x) follows a Gaussian mixture model (GMM), as defined in Equation <ref>.
p(x_t| z=q_j)= ∑_l=1^kc_jl𝒩(x_t|μ_jl,Σ_jl)
where ∑_l=1^kc_jl=1, ∀ j={1,⋯,M}, k is the number of Gaussian mixture components and 𝒩(x_t|μ_jl,Σ_jl) denotes a Gaussian probability density with mean μ_jl and covariance Σ_jl for state j and mixture component l. The number of hidden states (M) and mixture components (k) are the two hyperparameters of the model which have to be provided apriori.
Therefore, the joint probability density function of the observation 𝐗 can be expressed as:
p(𝐗)=p(z_1)∏_t=1^T-1p(z_t+1| z_t)∏_t=1^Tp(x_t| z_t)
The optimum parameters [𝐀,𝐁, π] that locally maximize the total observation likelihood (Equation <ref>) of the observation 𝐗 are estimated using an expectation-maximization algorithm known as the Baum-Welch algorithm <cit.>.
Furthermore, the probability of the system being in a given latent state z_t corresponding to 𝐱_𝐭 is computed using the Viterbi algorithm.
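As an illustration of this estimation and decoding pipeline, the following minimal Python sketch fits a Gaussian-emission HMM with Baum-Welch and decodes the most likely state sequence with Viterbi. It relies on the hmmlearn library purely as an example (an assumption, not necessarily the implementation used in this study), uses a synthetic input sequence, and assumes the default geometric sojourn density.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic stand-in for a normalized, detrended traffic sequence (T observations, 1 feature).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))

# M = 3 hidden states, one Gaussian emission per state; fit() runs Baum-Welch (EM),
# predict() runs Viterbi decoding, predict_proba() returns the state posteriors.
model = GaussianHMM(n_components=3, covariance_type="full", n_iter=100, random_state=0)
model.fit(X)

states = model.predict(X)            # most likely hidden-state sequence
posteriors = model.predict_proba(X)  # p(z_t = q_j | X), shape (T, M)
print(model.transmat_.round(3))      # estimated state-transition matrix A
```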
§.§.§ Sojourn time distribution
An inherent assumption of the HMM is that the number of time steps (u) spent in a given state q_j (a.k.a sojourn time) is geometrically distributed (denoted by d_j) as shown below.
d_j(u) = a_jj^u(1-a_jj)
However for some dynamical systems, the probability of a state change depends on the time spent in the current state. Therefore, geometrically distributed sojourn time fails to model such systems.
An alternative solution is to explicitly estimate the duration density d(u), which results in a hidden semi-Markov model (HSMM). In this study, we compare Gamma, Weibull and logarithmic distributions for the sojourn density, in addition to the default choice of geometric distribution. For each of these assumptions, the parameters of the HSMM are estimated by maximizing the likelihood of the joint probability density function of 𝐗, as shown in Equation <ref> <cit.>.
p(𝐗)=p(z_1) d_z_1(u_1) {∏_t=1^T-1 p(z_t+1| z_t) d_z_t(u_t)} p(z_T| z_T-1) D_z_T(u_T) ∏_t=1^T p(x_t| z_t)
In the above equation, the survival function, D_j(u), is used to represent the time spent in the last state since the system is not observed beyond time T. Using this survival function improves the parameter estimation and, provides a more accurate prediction of the last state visited.
§.§ Long short-term memory
Feed-forward neural network architectures are not explicitly designed to handle sequential data. A class of DL approaches, recurrent neural network (RNN), uses a feedback mechanism where the output from a previous time step is fed as an input to the current time step such that information from the past can propagate into future states. This feedback mechanism preserves the temporal correlation and makes it suitable to capture the temporal evolution of traffic parameters. However, RNNs are incapable of handling the long-term dependencies in temporal data due to the vanishing gradient problem <cit.>. Long short-term memory (LSTM) <cit.>, a type of RNN, consists of memory cells in its hidden layers and several gating mechanisms, which control information flow within a cell state (or, memory) to selectively preserve long-term information.
The objective is to update the cell state, C_t, over time using the input x_t and the previous time step's hidden state, h_t-1. This process involves several key operations. First, a forget gate, f_t, selectively filters information from the past. Then, an input gate, i_t, regulates the amount of information from the candidate memory cell, C̃_t, that should be incorporated into the current cell state, C_t. Finally, an output gate, o_t, governs the update of the hidden state, h_t. See Figure <ref>. The computations are represented as follows:
C̃_t = tanh(W_c[h_t-1,x_t] + b_c)
C_t = f_t ⊙ C_t-1 + i_t⊙C̃_t
h_t = o_t⊙tanh(C_t)
The outputs from the forget gate, f_t, input gate, i_t, and output gate, o_t are computed as shown below:
f_t = σ(W_f[h_t-1,x_t] + b_f)
i_t = σ(W_i[h_t-1,x_t] + b_i)
o_t = σ(W_o[h_t-1,x_t] + b_o)
Here, σ and tanh represent non-linear activation functions, while W_f, W_i, W_o, and W_c denote weight matrices corresponding to the forget gate, input gate, output gate, and candidate memory cell, respectively. Similarly, b_f, b_i, b_o, and b_c represent the corresponding bias vectors.
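The gate equations above can be made concrete with a short NumPy sketch of a single LSTM time step. The weight and bias containers below are illustrative placeholders (random values), not trained parameters.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM time step mirroring the gate equations above.

    W[k] has shape (hidden_dim, hidden_dim + input_dim) and b[k] has shape
    (hidden_dim,) for the forget (f), input (i), output (o) gates and the
    candidate cell (c).
    """
    z = np.concatenate([h_prev, x_t])       # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])      # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])      # input gate
    o_t = sigmoid(W["o"] @ z + b["o"])      # output gate
    C_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate memory cell
    C_t = f_t * C_prev + i_t * C_tilde      # cell state update
    h_t = o_t * np.tanh(C_t)                # hidden state update
    return h_t, C_t

# Tiny usage example with random weights (hidden_dim = 4, input_dim = 1).
rng = np.random.default_rng(0)
H, D = 4, 1
W = {k: rng.normal(scale=0.1, size=(H, H + D)) for k in "fioc"}
b = {k: np.zeros(H) for k in "fioc"}
h, C = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):
    h, C = lstm_step(x, h, C, W, b)
```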
§.§ Modeling traffic data
Vehicle arrival rates in traffic are commonly modeled as a Poisson process, assuming that vehicles arrive independently at a specific location or time according to a Poisson distribution with a fixed average rate, λ. However, in real-world traffic scenarios, the average arrival rate λ(t) fluctuates throughout the day, resulting in a non-homogeneous Poisson process. This means that the rate of vehicle arrivals varies over time, reflecting changes in traffic flow and congestion levels.
This variation in the Poisson average rate captures the dynamic nature of traffic patterns, aligning with real-world observations of different traffic conditions at different times of the day, such as high flow periods during rush hours or lower flow periods during off-peak times.
In our study, we analyze the data representing the fluctuations in vehicle counts, i.e., the changes in vehicle arrival at each time step. While the arrival rates in traffic are commonly modeled as a Poisson process, the observed fluctuation data follows a Skellam distribution, which arises from taking the difference between two Poisson random variables with parameters λ(t) and λ(t+δ t), where δ t represents the time gap between consecutive observations. Throughout the day, the average rate of vehicle arrivals fluctuates, leading to variations in the parameters of the Skellam distribution. Consequently, the distribution of traffic fluctuation can be described as a mixture of Skellam distributions, with each component representing a specific average arrival rate.
In our approach, we model the temporal sequence of traffic fluctuations using a hidden Markov model (HMM). In other words, the HMM categorizes the traffic fluctuations, which are assumed to be realizations of a mixture of Skellam distributions. More specifically, the HMM approximates the output space by employing a mixture of Gaussian distributions, allowing for effective modeling and inference of the underlying traffic patterns. However, since the HMM assumes a continuous distribution while the Skellam distribution is discrete, the fluctuations are normalized using the mean and standard deviation of the Skellam distribution.
Furthermore, it is worth noting that the independence of observations justifies the use of an HMM, as each observation in the traffic data is considered to be independent of previous observations. Moreover, despite the varying average trend in real traffic data throughout the day, the fluctuations tend to exhibit sporadic behavior, indicating little to no duration dependency. To account for this characteristic, multiple duration densities are employed to model the sojourn durations and find the distribution(s) that best fit the data.
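The following short sketch illustrates this view of the data: vehicle counts are drawn from a non-homogeneous Poisson process, their first-order difference is Skellam-distributed, and the fluctuations are standardized before being passed to a Gaussian-emission HMM. The rate profile and the use of the empirical mean and standard deviation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative time-varying Poisson rate over one day of 5-minute intervals.
t = np.arange(288)
lam = 50 + 40 * np.sin(2 * np.pi * (t - 60) / 288).clip(min=0)

# Vehicle counts per interval; their first-order difference is Skellam-distributed,
# since it is the difference of two Poisson variables with rates lam(t) and lam(t+1).
counts = rng.poisson(lam)
delta_flow = np.diff(counts)

# Standardize the fluctuations (Skellam(l1, l2) has mean l1 - l2 and variance l1 + l2)
# before modeling them with a Gaussian-emission HMM.
delta_norm = (delta_flow - delta_flow.mean()) / delta_flow.std()
```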
§ METHODOLOGY
The hidden Markov model (HMM) and long short-term memory (LSTM) are two distinct models that generate latent space representations for modeling the distribution of an observed sequence. While they have structural similarities and divergences, they can be considered as special instances of a more comprehensive framework called the Generative Unified Model (GUM).
In the GUM framework, there exists a hidden or latent state which provides information about the observation. In the HMM, the hidden state z follows Markovian dynamics, meaning that the current state z_t depends only on the previous state z_t-1 and is conditionally independent of the observation x (as represented by Equation <ref>).
On the other hand, in the LSTM model, the latent variable h_t is deterministic and is a function of the previous latent variable h_t-1 and the current observation x_t (see Equation <ref>). The LSTM captures temporal dependencies in the sequence by updating its hidden state representation based on the previous state and current observation.
These structural dissimilarities between the HMM and LSTM result in the two models learning complementary feature representations of an observed sequence. This phenomenon has been highlighted in research conducted in the field of natural language processing (NLP) <cit.>. For instance, hybrid RNN-HMM architectures were explored in text sequence modeling <cit.>, where the HMM modeled the underlying statistical patterns in the input sequence, while the LSTM captured the temporal dependencies using its hidden state representation.
In the context of traffic prediction, we aim to leverage this complementary feature learning phenomenon by combining HMM and LSTM in two proposed architectures.
By combining these two models, we can effectively capture both the statistical patterns and the temporal dynamics of the traffic data, leading to enhanced prediction accuracy.
This stands in contrast to the baseline model, which is a simple LSTM model (See Figure <ref>(a)) that solely relies on the temporal history of x to predict its future value. Additionally, unlike multi-regime models that employ separate prediction models for different states, the hybrid models train a single prediction model that captures the system's evolution within the latent space.
§.§ Model architectures
In this study, two architectures of a hybrid HMM-LSTM model are considered: the sequential hybrid (S-Hybrid) and the concatenated hybrid (C-Hybrid).
§.§.§ Sequential hybrid
In the sequential hybrid model (S-Hybrid), the first step is to train an HMM on the input sequence 𝐗 to estimate the probability of the system being in each hidden state q ∈ Q at time t. The HMM captures the time-evolution of state probabilities based on the observed sequence. These HMM features, representing the probabilities of being in different hidden states, are then used as input to train the LSTM to learn the temporal dependencies in the sequence.
The latent outputs (h) from the LSTM are processed through a series of dense layers to generate the final prediction. This S-Hybrid approach effectively combines the probability information obtained from the HMM with the LSTM's capabilities, enhancing the model's ability to anticipate state transitions. As a result, it is expected that this modeling approach can potentially lead to improved prediction performance by leveraging the complementary strengths of the HMM and LSTM. See Figure <ref>(b) for the architecture.
𝐌 = [ p(z_t-T=1) p(z_t-T+1=1) ⋯ p(z_t-1=1); p(z_t-T=2) p(z_t-T+1=2) ⋯ p(z_t-1=2); ⋯ ⋯ ⋯ ⋯; p(z_t-T=S) p(z_t-T+1=S) ⋯ p(z_t-1=S); ]
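A minimal sketch of how such a feature matrix can be assembled is given below: the state posteriors p(z_t = q_j | X) over a sliding window of T past steps are stacked for every prediction time. The window length, the Dirichlet stand-in for the posteriors, and the use of a routine such as hmmlearn's predict_proba are illustrative assumptions.

```python
import numpy as np

def hmm_feature_windows(posteriors, window):
    """Stack the last `window` columns of state posteriors for each prediction time.

    posteriors: array of shape (T_total, S) with p(z_t = q_j | X), e.g. as returned
    by GaussianHMM.predict_proba.  Returns an array of shape
    (T_total - window, S, window), i.e. one matrix M (states x time lags) per sample.
    """
    T_total, S = posteriors.shape
    return np.stack(
        [posteriors[t - window:t].T for t in range(window, T_total)], axis=0
    )

# Example: S = 5 states, a window of 12 past steps (one hour of 5-minute data).
rng = np.random.default_rng(0)
post = rng.dirichlet(np.ones(5), size=300)      # stand-in for predict_proba output
M_windows = hmm_feature_windows(post, window=12)
print(M_windows.shape)                          # (288, 5, 12)
```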
§.§.§ Concatenated hybrid
In the concatenated hybrid model (C-Hybrid), the latent outputs from two distinct LSTM networks are combined. One LSTM network is trained on the input sequence X, while the other LSTM network is trained on the sequence of hidden states obtained from the HMM. These two sets of latent outputs, representing the learned temporal dependencies from both the input sequence and the HMM state sequence, are concatenated together. The concatenated features are then fed into a series of densely connected layers to generate the final prediction. By integrating the latent outputs from the LSTM networks trained on different sequences, the model potentially captures the complementary information and leverages it to enhance prediction accuracy. This approach allows the model to benefit from both the statistical patterns captured by the HMM and the temporal dynamics captured by the LSTM. See Figure <ref>(c) for the architecture.
§.§ Model training
The baseline deep learning (DL) model employed in this study is a stacked LSTM architecture. It consists of three LSTM layers with 20, 20 and 10 units, followed by four dense layers with 10, 10, 6 and 2 units, respectively, each with a LeakyReLU activation function <cit.>.
Additionally, a statistical benchmark model called the HMM-based regime-switching autoregressive model (AR-HMM) <cit.> was chosen for comparison. The AR-HMM is a widely recognized model frequently used in traffic data analysis literature.
The hyperparameters of the proposed architectures were selected to ensure that the number of trainable parameters is comparable for each DL model. Specifically, the same architecture is utilized for the S-Hybrid model as the LSTM model, with the only difference being the number of feature channels in S-Hybrid, which is set equal to the number of hidden HMM states considered.
In the case of the C-hybrid model, it consists of two branches. Each branch comprises two LSTM layers with 20 and 10 units, respectively. The feature outputs from these branches are merged, and then passed through four dense layers with 10, 6, 6, and 1 units, respectively with LeakyReLU activation.
To ensure the generalizability of the models, the models are trained, validated, and tested on three separate sets. The dataset is divided into three parts: 60% for model training, 15% for validation, and 25% for testing. This division allows for assessing the model's performance on unseen data and helps prevent overfitting. To address overfitting, the model parameters are tuned throughout the training process based on their performance on the validation set.
The models are trained to minimize the mean squared error (MSE) loss function for a sufficiently large number of epochs, until the validation loss starts to increase. The model with the lowest validation error is selected as the final model.
We use Adadelta <cit.> as the optimizer, with a learning rate of 0.20, a ρ value of 0.95, and an epsilon of 1e-7.
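For concreteness, a minimal Keras sketch of the C-Hybrid architecture and the training configuration described above is given below. It assumes TensorFlow/Keras; the input window length and the number of HMM states are illustrative, the layer sizes and optimizer settings follow the description above, and the final dense layer is left linear. This is a sketch, not the exact implementation used in this study.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW = 12    # illustrative input sequence length
N_STATES = 5   # number of HMM hidden states

# Branch 1: raw flow-fluctuation sequence.
x_in = layers.Input(shape=(WINDOW, 1), name="delta_flow")
x = layers.LSTM(20, return_sequences=True)(x_in)
x = layers.LSTM(10)(x)

# Branch 2: HMM hidden-state probability sequence.
p_in = layers.Input(shape=(WINDOW, N_STATES), name="hmm_state_probs")
p = layers.LSTM(20, return_sequences=True)(p_in)
p = layers.LSTM(10)(p)

# Merge the two latent representations and map them to the one-step-ahead prediction.
h = layers.Concatenate()([x, p])
for units in (10, 6, 6):
    h = layers.Dense(units)(h)
    h = layers.LeakyReLU()(h)
out = layers.Dense(1)(h)

model = Model(inputs=[x_in, p_in], outputs=out)
model.compile(
    optimizer=tf.keras.optimizers.Adadelta(learning_rate=0.20, rho=0.95, epsilon=1e-7),
    loss="mse",
)
model.summary()
```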
§ DATA
The performance evaluation of the proposed models is conducted on a dataset obtained from the California Department of Transportation's Performance Measurement System (PeMS). This dataset is widely used for traffic data modeling. The traffic data, including flow, occupancy, and speed, is collected from vehicle detector stations (VDS) located along freeways and ramps. The raw data is sampled at 30-second intervals and then aggregated at 5-minute intervals.
Flow data for one year (i.e., 104942 samples) from VDS 1114805 on California Interstate 05 NB in District 11 were used for training and testing of the models. The performance of the models is evaluated on the 25% of the dataset that was never used in training.
In this study, the models are employed to predict fluctuations in traffic flow, specifically the change in flow between successive time steps (Δ Flow), instead of directly predicting the absolute flow values at a detector location. It is important to note that modeling the first-order difference, i.e., the flow fluctuations, can be advantageous for capturing short-term dynamics in the time series, whereas modeling the actual time series can be more suitable for capturing long-term trends and patterns <cit.>. Figure <ref> shows the detrended time series (Δ Flow) corresponding to the flow observed at the detector during a 48-hour cycle.
§ RESULTS
In this study, three DL models are considered for the prediction task: 1) a stacked LSTM model trained on flow fluctuations, 2) a stacked LSTM model trained on the probabilities of hidden state transitions (i.e., S-Hybrid), and 3) a merged LSTM model that takes both the flow fluctuations and the probabilities of hidden state transitions as inputs (i.e., C-Hybrid).
We evaluate the model performances for single-step prediction horizons, by comparing the prediction mean with the corresponding true values using three metrics: root mean squared error (RMSE), mean absolute percentage error (MAPE) and R^2 as defined below.
RMSE=√(1/N∑_i=1^N (y_i-ŷ_i)^2)
MAPE=1/N∑_i=1^N |(y_i-ŷ_i)/y_i|
R^2=1-(∑_i=1^N (y_i-ŷ_i)^2)/(∑_i=1^N (y_i-y̅)^2)
where y_i represents the 'ground truth' or true value of observation i, ŷ_i is the predicted value of y_i, and y̅ is the mean of the true values, for i=1,2,…, N.
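These metrics can be computed with a few lines of NumPy, as sketched below on toy arrays; note that MAPE is undefined whenever a true value y_i equals zero, which can occur for flow fluctuations.

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    # Undefined when y_true contains zeros; such samples would need to be masked.
    return np.mean(np.abs((y_true - y_pred) / y_true))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([3.0, -2.0, 5.0, 1.0])
y_hat = np.array([2.5, -1.0, 4.0, 1.5])
print(rmse(y, y_hat), mape(y, y_hat), r2(y, y_hat))
```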
In the `Methodology' section, the hybrid models are described as utilizing HMM features derived from the input data, specifically the flow fluctuations (Δ Flow), to perform the prediction task. To enable this, HMMs are trained on the input data with different configurations. In this study, HMMs are trained assuming 3 and 5 latent states, and a Gaussian mixture model is used with either 1 or 2 mixture components.
To characterize the duration densities of the states, various distributions such as geometric, logarithmic, gamma, and Weibull distributions are assessed. For each case studied, the AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) are computed to select the models based on a trade-off between model fit and complexity. The results are presented in Table <ref>. It is worth noting that the AIC and BIC values for models with 1 and 2 Gaussian mixture components are consistently similar. Hence, for the sake of brevity and reduced model complexity, results obtained with 1 Gaussian component are reported.
Across all the evaluated distributions, the AIC and BIC values indicate that a model with 3 latent states is preferable.
This configuration provides a good balance between model fit and complexity. On the other hand, using 5 latent states introduces more complexity, without substantial improvement in model performance, resulting in higher AIC and BIC values.
[The computations for these evaluations are conducted using the Hidden Markov Model package <cit.>.]
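For the geometric-sojourn (plain HMM) case, the model-selection computation can be sketched as follows; the parameter count shown is one common convention for a one-dimensional HMM with a single Gaussian per state, and both the data and the use of hmmlearn are illustrative assumptions (the semi-Markov variants require a package that supports explicit sojourn densities).

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 1))   # stand-in for the normalized flow fluctuations

def aic_bic(model, X):
    log_likelihood = model.score(X)
    M = model.n_components
    # Free parameters for a 1-D HMM with one Gaussian per state:
    # initial probabilities (M-1) + transitions M(M-1) + means M + variances M.
    k = (M - 1) + M * (M - 1) + 2 * M
    n = X.shape[0]
    return 2 * k - 2 * log_likelihood, k * np.log(n) - 2 * log_likelihood

for M in (3, 5):
    fitted = GaussianHMM(n_components=M, n_iter=100, random_state=0).fit(X)
    aic, bic = aic_bic(fitted, X)
    print(M, round(aic, 1), round(bic, 1))
```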
Using the trained hidden (semi-)Markov models, the most likely states at each time instant are identified, as shown in Figures <ref> and <ref>. As can be seen, the system dynamically transitions between the hidden states within the day. However, due to the different sojourn densities, the identified states and their durations are quite different.
In the case of geometric and logarithmic sojourn densities, the system is observed to exhibit abrupt state changes as flow increases and congestion builds on the roadway. Therefore, traffic is observed to be unstable during congestion, as highlighted by <cit.>.
Conversely, the system is observed to remain in each state for an increased time duration when Gamma and Weibull sojourn distributions are assumed. Therefore, abrupt fluctuations in flow during congestion are not efficiently captured in these two cases.
The performance of the S-Hybrid and C-Hybrid DL models for different configurations of hidden (semi) Markov models are compared in Tables <ref> and <ref> respectively. Upon analyzing the results, it is evident that models utilizing geometric and logarithmic sojourn densities demonstrate superior performance compared to models using Gamma and Weibull sojourn densities. This observation is consistent with the characteristics of the observed state transitions in the time-series data as depicted in Figures <ref> and <ref>.
The geometric distribution assumes a constant probability of transitioning from one state to another, regardless of the duration already spent in the current state.
On the other hand, the logarithmic distribution considers the time already spent in a state, allowing for a wider range of durations and better adaptation to the observed patterns in the traffic data. This flexibility in capturing varying durations enables the logarithmic sojourn density to more effectively model the complex dynamics of traffic behavior, leading to slightly better performance compared to the geometric sojourn density. This aligns well with the assumption of vehicle arrival being modeled as a Poisson process and the (near) independence assumption.
In this study, a logarithmic sojourn density with 5 states and 1 Gaussian emission is found to marginally outperform the geometric sojourn density in terms of prediction accuracy for both the S-Hybrid and C-Hybrid DL models.
The optimized parameters for the fitted HMM model with logarithmic sojourn density, 5 states, and 1 Gaussian emission distribution are denoted by Equation <ref>.
A = [ 0.000 0.0161 0.258 0.714 .012; 0.109 0.000 0.009 0.046 0.836; 0.441 0.002 0.000 0.548 0.009; 0.768 0.008 0.144 0.000 0.081; 0.038 0.572 0.029 0.361 0.000; ]
π = [ 1 0 0 0 0; ]
p = [ 0.307 0.455 0.934 0.420 0.245; ]
s = [ 1 1 1 1 1; ]
μ = [ -0.8594 -0.7903 -0.0098 0.8144 -0.8524; ]
σ = [ 0.3430 6.1145 0.1494 0.4009 2.3412; ]
where A represents the hidden state-transition matrix, π corresponds to the initial state probabilities, p and s denote the scale and shift parameters of the sojourn density, and μ and σ represent the parameters of the Gaussian emission function.
The proposed DL models with logarithmic sojourn density with 5 states and 1 Gaussian emission are compared with two baseline models – a stacked LSTM and HMM-based regime-switching autoregressive model (AR-HMM) with lags 1, 10 and 20.
As observed from Table <ref>, the hybrid models perform significantly better than the baseline models, with C-Hybrid outperforming S-Hybrid.
Figure <ref> demonstrates the prediction performance for a 24-hour period. It is evident from the figure that AR-HMM and LSTM follow similar trends, while the hybrid models are significantly better at capturing the abrupt flow changes.
However, in the free-flow regime (2 to 6 hrs), S-Hybrid fails to capture the trend. This is because the system predominantly remains in state 3 during free flow, and hence the input to the LSTM, i.e., the hidden state probabilities over time, does not change appreciably. Therefore, the model generates a near-constant output. In contrast, C-Hybrid captures these low-magnitude fluctuations comparatively better than S-Hybrid.
Further, we compare the feature-space representations of the penultimate layers of the models to identify specific patterns that enhance prediction capabilities of hidden Markov-LSTM models. We use t-Stochastic Neighbor Embedding, a non-linear technique for dimension reduction, to reduce the high dimensional feature output to two dimensions <cit.>.
Flow fluctuation data were labeled into four traffic regimes: 1) low flow (0 to 6 hr), 2) increasing flow (6 to 8 hr), 3) high flow (8 - 18 hr) or congestion and 4) decreasing flow (18 to 24 hr) which are suitably color-coded as shown below.
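A minimal sketch of this projection and regime coloring is given below; the arrays `features` (penultimate-layer outputs) and `regime_labels` (the four regimes defined above) are assumed to be available.

```python
# Project penultimate-layer features to 2-D with t-SNE and color by traffic regime.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

for regime, color in zip(("low", "increasing", "high", "decreasing"),
                         ("tab:blue", "tab:green", "tab:red", "tab:orange")):
    mask = regime_labels == regime
    plt.scatter(embedding[mask, 0], embedding[mask, 1], s=5, c=color, label=regime)
plt.legend()
plt.show()
```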
Figure <ref> illustrates the learned feature space for the four different regimes obtained using the stacked LSTM, S-Hybrid, and C-Hybrid models. When considering the LSTM model, it is evident that different traffic states overlap, making it challenging to distinguish between them. However, with the incorporation of HMM features into the model, we observe clear separations in the feature space for the hybrid models. Notably, the C-Hybrid model exhibits superior separability for low flow traffic states (as shown in Figure <ref>), which likely contributes to its superior performance. Table <ref> presents a comparison of the variances of outputs belonging to specific traffic regimes in the 6-dimensional feature space for each model. It is worth noting that the LSTM model demonstrates high dispersion of outputs within the feature space for data from the same traffic states, along with significant overlap between data from multiple regimes. In contrast, the hybrid models incorporating HMM features effectively localize features in the space, resulting in enhanced performance.
§ CONCLUSIONS
Hidden Markov models (HMMs) and recurrent neural networks such as the long short-term memory (LSTM) network are capable of modeling an observation sequence from a set of latent (hidden) state variables. The latent variables in an LSTM are determined deterministically from the current observation and the previous latent variable, while in an HMM the set of latent variables forms a Markov chain.
Recent research highlights the structural similarity between LSTM and HMM, and their capability to learn complementary features from input data. Therefore, appropriate hybridization of these models could lead to a better modeling of the data compared to the individual models.
Our study adopts a hybrid approach combining an HMM and an LSTM to model the temporal sequence of traffic fluctuations. Specifically, the HMM captures the underlying patterns and state changes in traffic dynamics, where vehicle arrivals are typically assumed to follow a Poisson distribution. The HMM's capability to model the short-term fluctuations in traffic, often resembling a Skellam distribution, enables us to accurately characterize the variability in traffic behavior. Additionally, we incorporate the LSTM, which retains long-term temporal correlations, allowing us to capture complex dynamic patterns and non-stationarity within the traffic data.
As shown in this study, hybrid models that jointly use HMM and LSTM to perform the task of traffic flow prediction outperform in terms of prediction accuracy the LSTM and auto-regressive HMM regime switching models in capturing chaotic behavior in traffic data by learning complementary features.
The testing of the models on loop detector data shows that the LSTM and AR-HMM models result in an RMSE of 0.8235 and 0.8766, respectively, while the S-Hybrid and C-Hybrid models result in an RMSE of 0.5326 and 0.4203, respectively, which corresponds to an approximately 31-52% improvement in performance. Transferability of hybrid hidden Markov-LSTM models to predict traffic on out-of-distribution datasets will be explored in future studies.
|
http://arxiv.org/abs/2307.04122v1 | 20230709082919 | Enhancing Low-Light Images Using Infrared-Encoded Images | [
"Shulin Tian",
"Yufei Wang",
"Renjie Wan",
"Wenhan Yang",
"Alex C. Kot",
"Bihan Wen"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
The low-light image enhancement task is essential yet challenging, as it is intrinsically ill-posed.
Previous arts mainly focus on low-light images captured in the visible spectrum using pixel-wise losses, which limits the capacity to recover brightness, contrast, and texture details due to the small number of incoming photons.
In this work, we propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter, which allows for the capture of more photons and results in improved signal-to-noise ratio due to the inclusion of information from the IR spectrum. To verify the proposed strategy, we collect a paired dataset of low-light images captured without the IR cut-off filter, with corresponding long-exposure reference images with an external filter.
The experimental results on the proposed dataset demonstrate the effectiveness of the proposed method, showing better performance quantitatively and qualitatively. The dataset and code are publicly available at
Low-light enhancement, infrared photography, computational photography
§ INTRODUCTION
Due to the small number of photons captured by the camera, the images captured under low-light environments usually suffer from poor visibility, intense noise, and artifacts. To enhance the visibility of the images captured in low-light environments, previous works mainly focus on modelling the mapping
relationship between low-light images and corresponding normally-exposed images.
Specifically, current deep learning based methods have the following paradigms: learning an end-to-end model using paired datasets in <cit.>; GAN-based networks in <cit.>; encoder-decoder based models in <cit.>. However, the aforementioned methods are all based on the visible information that already exists in the corrupted inputs in RGB space,
i.e., even if they can achieve pleasant perceptual quality, they cannot perform reliably due to the lack of incident photons <cit.>.
Besides, there are various limitations of the current mainstream methods, e.g.,
end-to-end training using pixel reconstruction loss leads to a regression-to-mean problem; GAN-based training requires careful hyper-parameter tuning and lacks enough supervision for noise removal.
Recently, infrared-light-based methods have attracted great attention in low-level computer vision tasks
as they introduce extra information from infrared spectroscopy.
There are several works that have explored the use of infrared light in computational photography. Specifically, Zhuo et al. <cit.> propose to use an additional Near-Infrared (NIR) flash image instead of a normal flash image to restore the details of a noisy input image, which requires the user to take two photos of the same scene in a static environment and thus easily causes misalignment between the inputs;
Zhang et al. <cit.> propose a dual-camera system to capture a NIR image and a normal visible image of the same scene concurrently, at the cost of additional acquisition hardware.
In this paper, we propose a novel prototype that utilizes information from the infrared spectrum without the need for additional devices. Most solid-state (CCD/CMOS) based digital cameras are equipped with IR cutoff filters to avoid color distortion caused by the high sensitivity to IR light. Conversely, we remove the IR cutoff filter so that the CMOS can receive more incident photons located on the infrared spectrum, resulting in increased brightness, higher signal-to-noise ratio, and improved details as shown in Fig. <ref>. A paired dataset, namely the IR-dataset, of IR-RGB images captured under low-light environments and their reference normally-exposed RGB images, is collected under different scenes. We further propose a novel flow-based model that can enhance visibility by modelling the distribution of normally-exposed images and address the color distortion caused by the lack of an IR cutoff filter through our proposed color alignment loss (CAL).
In summary, the contributions of our work are threefold:
*
We collect a paired dataset under a novel prototype, i.e., IR-RGB images captured under low-light environments and their normally-exposed reference RGB images, which supports future studies.
* We propose a flow-based model with our proposed color alignment loss, which can effectively address the color distortion caused by removing the IR-cut filter.
* We conduct extensive experiments on our collected datasets that demonstrate removing the IR-cut filter can lead to better-quality restored images in low-light environments. Besides, our proposed framework achieves superior performance compared with SOTA methods.
§ METHODOLOGY
§.§ Dataset Collection
The dataset is collected by a modified Nikon D3300 camera, in which the internal IR cut-off filter is removed.
The paired images are captured using a stable tripod and remote control to minimize misalignment. The low-light images are captured using the aforementioned device without IR cut-off filter. To capture the normally-exposed reference images in the visible light spectrum, an external IR filter, which has the same cut-off wavelength as the internal one, is carefully put in front of the lens to ensure that no camera shift occurs during the long exposure. To better explore the effectiveness of removing the IR cut-off filter in a low-light environment, we also collect a set of low-light images in the visible light spectrum (e.g., the example in Fig. <ref>).
We divide our dataset into a training set and an evaluation set. Specifically, the training set includes 236 pairs of low-light images without cut-off filter and their corresponding reference images (472 images in total). The evaluation set has 80 pairs of low-light images with and without the cut-off filter and their corresponding reference images.
§.§ Preliminary
Previously, mainstream deep learning based models have mainly relied on pixel reconstruction losses. However, due to their limited capacity to distinguish unwanted artifacts from the real distribution of normally-exposed images, they may lead to unpleasant visual quality with blurry outputs <cit.>.
Inspired by the extraordinary performance of flow-based models <cit.>, we found that learning a conditional probability distribution can handle the aforementioned problem by covering the various possible distributions of natural images. Specifically, the recent state-of-the-art LLFlow model <cit.> has shown great performance in using a normalizing flow conditioned on corrupted inputs to capture the conditional distribution of normally exposed images. In this work, we inherited the
core idea of conditional flow with the likelihood estimation proposed in <cit.> as the backbone of our method.
The conditional probability density function of normally exposed images can be modified as follows:
f_cond(y|x) = f_z(Θ(y;x))|∂Θ/∂ y(y;x)|,
where Θ(·) is the invertible network with N invertible layers {θ ^1, θ ^2, …, θ ^N}, and the latent representation z=Θ(y;x) is mapped from the corrupted input x and the normally exposed image y. By characterizing the model with maximum likelihood estimation, the model can be optimized with the negative log-likelihood loss function:
ℒ_nll (x, y) = -log f_cond(y|x)
= -log f_z(Θ (y; x))
- ∑_n=0^N-1log |∂θ^n/∂ z^n(z^n; g^n(x_l))|,
where g(·) is the encoder that outputs conditional features of the layers θ ^i from the invertible network.
§.§ Color Alignment Loss
Although the benchmark methods perform well on the visible light spectrum, when directly applied to the collected dataset their performance degrades severely in some extreme cases due to the additional infrared light. To further alleviate the color distortion caused by removing the IR filter, and inspired by histogram-matching techniques <cit.> used in remote sensing, we propose to minimize the divergence of the color distribution between the generated images and reference images. Specifically, by representing the color information using differentiable histograms in the RGB color channels, we emphasize the color distributions of the generated and reference images rather than the local details. To further measure the differences between these distributions, we propose using the Wasserstein distance, which provides a more stable gradient compared with the commonly used KL divergence. The details are as follows:
§.§.§ Differentiable Histogram
Since the low-light images are taken without the existence of an IR cut-off filter, they admit more red light, which leads to color bias in the red channel. To suppress the color distortion, we propose to minimize the divergence of the channel-wise differentiable histogram between the generated and reference images.
Assume that x∈ℝ^C × H × W is an image where C, H and W refer to its number of channels, height, and width respectively.
To calculate its channel-wise histogram bounded by an arbitrary range [a;b], we fit the histogram with R uniformly spaced bins, denoted by nodes t_i ∈{t_1 = a, t_2, …, t_R = b}, with step size Δ = (b-a)/(R-1). By matching the pixel values of different channels of the image to the histogram nodes, the value h_r of the histogram H at each node can then be calculated as:
h_r = ∑_C 1/(1+δ·(p_i,j-t_r)^2), r = 1,2,…, R
where δ is a constant scaling factor. After collating and normalizing h_r, we could get the final one-dimensional histogram H(x) with size R on different channels.
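A minimal PyTorch sketch of this soft histogram is given below, assuming the kernel of the equation above, the range [0, 1], R = 64 nodes per channel, and an illustrative value for the scaling factor δ.

```python
# Differentiable per-channel histogram via the soft kernel 1 / (1 + delta * (p - t_r)^2).
import torch

def soft_histogram(img: torch.Tensor, bins: int = 64, a: float = 0.0, b: float = 1.0,
                   delta: float = 400.0) -> torch.Tensor:
    """img: (C, H, W) tensor with values in [a, b]; returns a (C, bins) normalized histogram."""
    nodes = torch.linspace(a, b, bins, device=img.device)  # nodes t_1, ..., t_R
    pixels = img.flatten(1).unsqueeze(-1)                   # (C, H*W, 1)
    weights = 1.0 / (1.0 + delta * (pixels - nodes) ** 2)   # soft assignment of each pixel to each node
    hist = weights.sum(dim=1)                               # (C, bins)
    return hist / hist.sum(dim=1, keepdim=True)             # normalize per channel
```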
§.§.§ Wasserstein Metric
Inspired by Wasserstein distance (W-distance) to measure the distance between distributions on a given metric space <cit.>, we propose to optimize the histograms of images using W-distance as follows
W_p (H_ŷ, H_y) = inf_ŷ∼ H_ŷ, y∼ H_y(𝔼||ŷ-y||^p)^1/p,
where H_ŷ and H_y denote differentiable histograms of the restored image ŷ and ground-truth image y respectively through Eq. (<ref>).
An explicit formula can be obtained since the dimension of the variable is 1 as follows,
W_p (H_ŷ, H_y) = ||F_ŷ^-1 - F_y^-1||_p
= (∫_a^b |F_ŷ^-1(α) - F_y^-1(α)|^p dα)^1/p,
where F_y and F_ŷ are the cumulative distribution of H_y and H_ŷ respectively. It could be further simplified when p=1 and the variable is discrete:
ℒ_CA = W_1(H_ŷ, H_y)
= ∑_ℝ |F_ŷ(t)-F_y(t)|d t.
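In the discrete case above, the distance is simply the L1 difference of the cumulative histograms. A sketch of the resulting color alignment loss, reusing the `soft_histogram` helper from the previous snippet, could look as follows; averaging over channels is an assumption.

```python
# Color alignment loss: 1-D Wasserstein (W1) distance between per-channel histograms.
import torch

def color_alignment_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (C, H, W) images in [0, 1]; uses soft_histogram from the previous sketch."""
    cdf_pred = soft_histogram(pred).cumsum(dim=-1)
    cdf_target = soft_histogram(target).cumsum(dim=-1)
    return (cdf_pred - cdf_target).abs().sum(dim=-1).mean()  # sum over bins, mean over channels

# total_loss = nll_loss + 0.01 * color_alignment_loss(restored, reference)  # lambda = 0.01 in the experiments
```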
The negative log-likelihood and the color alignment loss jointly define the total loss as follows
ℒ = ℒ_nll + λ·ℒ_CA,
where λ is a weighting constant to adjust the scales of color alignment loss for specific settings.
§ EXPERIMENTS
§.§ Experimental settings.
All the captured images are resized to the resolution of 400×600 for training and testing.
For our model, the weighting factor λ of CAL is set to 0.01 to cast the loss component value onto a similar numerical scale during training;
to simplify the task, we bound the range of the channel-wise histogram values to [0.0;1.0], and the bin size is set to 64 per channel.
§.§ Evaluations results.
To evaluate the performance of different methods on the proposed dataset, we retrain all the methods using the same training data, i.e., the training set of our proposed dataset. For a fair comparison, we explore training hyper-parameters of competitors in a wide range and report the best performance we obtained. We report the experimental results in Table <ref> and visual comparison in Fig. <ref>.
Based on our evaluation and analysis of the experimental results in the table, Retinex-theory-based methods, e.g., RetinexNet <cit.>, KinD <cit.>, and KinD++ <cit.>, exhibit limited generalization ability and unpleasant outputs. We conjecture the reason is that the aforementioned methods assume the existence of an invariant reflectance map across low-light inputs and ground truth images and require a shared network to extract both illumination and reflectance maps from them
, which is not feasible in our setting. Besides, our method achieves the best performance among all competitors in terms of both fidelity and perceptual quality.
§.§ Ablation Study
1) Effectiveness of removing IR cut-off filter. To further verify the effect of removing the internal IR cut-off filter, we compare both quantitative and visual results that were restored from standard RGB space and IR-RGB space separately. For the models evaluated on the visible light spectrum, we utilize the pretrained/released models from SOTA methods trained on a large-scale dataset so that they have good generalization ability to different scenarios.
As shown in Table <ref>, the quantitative results calculated from the IR-light-encoded images with our model are much higher than those directly restored from the standard visible light spectrum. Besides, for the same method, especially for methods trained in a fully supervised manner, there exists an obvious performance gap when converting the input space from the IR-visible spectrum to only the visible spectrum, which demonstrates that removing the IR cut-off filter may lead to a higher signal-to-noise ratio in extremely dark environments.
Besides, as shown in Fig. <ref>, the reconstructed image with IR light performs better in recovering local features and details of the image.
2) The effectiveness of color alignment loss. To validate the assumption of using color alignment loss can improve the imaging quality, we compare the visual quality difference of the usage of color alignment loss. As shown in Fig. <ref>, the result with CAL shows better perceptual quality with aligned color correctness and higher contrast. However, the original method without CAL appears to have obvious color distortion and blurry edges.
§ CONCLUSION
In this paper, we present a novel strategy for tackling the low-light image enhancement task, which introduces more incoming photons from the IR spectrum. The proposed prototype leads to a higher signal-to-noise ratio in extremely dark environments. Based on the proposed prototype, a paired dataset is collected under different scenarios.
Experimental results on the proposed dataset show that our method achieves the best performance in both quantitative results and perceptual quality. Our prototype sheds light on potential new designs for digital cameras that exploit the spectroscopic information captured from the infrared spectrum, providing better image quality and more practical solutions for customers.
|
http://arxiv.org/abs/2307.04616v1 | 20230710145810 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | [
"Maksim Kuprashevich",
"Irina Tolstykh"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG",
"I.2.0; I.4.0; I.4.9"
] |
MiVOLO: Multi-input Transformer for Age and Gender Estimation
Maksim Kuprashevich Irina Tolstykh
[email protected] [email protected]
Layer Team, SaluteDevices
August 12, 2023
============================================================================================================================================
Age and gender recognition in the wild is a highly challenging task: apart from the variability of conditions, pose complexities, and varying image quality, there are cases where the face is partially or completely occluded. We present MiVOLO (Multi Input VOLO), a straightforward approach for age and gender estimation using the latest vision transformer. Our method integrates both tasks into a unified dual input/output model, leveraging not only facial information but also person image data. This improves the generalization ability of our model and enables it to deliver satisfactory results even when the face is not visible in the image. To evaluate our proposed model, we conduct experiments on four popular benchmarks and achieve state-of-the-art performance, while demonstrating real-time processing capabilities. Additionally, we introduce a novel benchmark based on images from the Open Images Dataset. The ground truth annotations for this benchmark have been meticulously generated by human annotators, resulting in high accuracy answers due to the smart aggregation of votes. Furthermore, we compare our model's age recognition performance with human-level accuracy and demonstrate that it significantly outperforms humans across a majority of age ranges.
Finally, we grant public access to our models, along with the code for validation and inference. In addition, we provide extra annotations for used datasets and introduce our new benchmark. The source code and data can be accessed at <https://github.com/WildChlamydia/MiVOLO.git>
§ INTRODUCTION
Age and gender recognition of a person in a photo is a highly important and complex task in computer vision. It is crucial for various real-world applications, including retail and clothes recognition, surveillance cameras, person identification, shopping stores and more. Additionally, this task becomes even more challenging in uncontrolled scenarios. The significant variability of all conditions such as image quality, angles and rotations of the face, partial facial occlusion, or even its absence in the image, coupled with the necessary speed and accuracy in real-world applications, makes the task quite challenging.
Our objective was to develop a simple and easy to implement approach capable of simultaneously recognizing both age and gender, even in situations where the face is not visible. We aimed for scalability and speed in our solution.
In this paper, "gender recognition" refers to a well-established computer vision problem, specifically the estimation of biological sex from a photo using binary classification. We acknowledge the complexity of gender identification and related issues, which cannot be resolved through a single photo analysis. We do not want to cause any harm to anyone or offend in any way.
Meanwhile, gender recognition is a classification task, while age estimation can be solved either through regression or classification.
Many popular benchmarks and research papers <cit.> <cit.> consider age as a classification problem with age ranges. However, this approach can be inaccurate because age, by its nature, is a regression problem. Moreover, it is inherently imbalanced <cit.>. Treating it as a classification causes several issues. For a classification model, it makes no difference whether it misclassifies into a neighboring class or deviates by several decades from the ground truth. Additionally, as stated in <cit.>, classification models cannot approximate age ranges from unseen classes, while regression models can. However, regression models are much trickier to train, and collecting or cleaning datasets for such task is more challenging.
In this paper, we consider the following popular benchmarks: IMDB-Clean <cit.>, UTKFace <cit.>, Adience <cit.>, FairFace <cit.>. These are some of the most famous datasets containing both age and gender ground truth. IMDB-Clean is the largest available dataset for this task, but it consists of celebrities and is heavily biased. This bias poses a problem for recognition in the wild, you can see an example in Figure <ref>. For more details, refer to the <ref> section.
Therefore, in our work, we introduce a completely new benchmark comprising 84,192 pairs (FaceCrop, BodyCrop) randomly selected from the Open Images Dataset<cit.>.
These images were annotated on a crowd-sourcing platform, and we have achieved remarkably high accuracy using a weighted averaging votes strategy.
While most existing works focus on estimating age and/or gender solely from face images, this work introduces the MiVOLO model, which is built upon the visual transformer model VOLO <cit.>. The model allows for the simultaneous prediction of age and gender by incorporating both face and body features.
Our model, trained using both body and face images, achieves SOTA results on the 4 largest benchmarks.
Additionally, it attains a high frame rate of 971 frames per second (FPS) when utilizing a batch size of 512 on the NVIDIA V100. Moreover, our model accommodates the inclusion of images that may lack visible faces.
Human-level estimation is also an open question. Accuracy heavily depends on conditions and is unclear in this task. Some articles <cit.> state that neural network models have already surpassed human-level performance. However, there are not many works where exact human-level performance has been estimated, and we did not find any that have been conducted on images with full-sized persons in the wild.
In this paper, we estimated this level using random images from the IMDB-clean dataset.
The main contributions of our work can be summarized as follows:
* We provide publicly available models that achieved SOTA results in 4 benchmarks.
* We have developed a readily implementable architecture called MiVOLO, capable of simultaneously handling faces and bodies. It enables accurate age and gender prediction, even in cases where humans may struggle. The architecture supports predictions with and without face input. MiVOLO has achieved top-1 results on 4 popular benchmarks, 2 of them without any fine-tuning on training data.
* Additionally, we have once again demonstrated that a carefully implemented multi-output (multi-task) approach can provide a significant performance boost compared to single-task models.
* We have also shown that multi-input models able to gain generalization ability in the same way as multi-task models.
* The original UTKFace dataset has been restored to include full-sized images.
* The annotations of IMDB-clean, UTK and FairFace datasets have been modified to include annotations of all detectable persons and faces in each image using our models.
* Human-level estimation for the task with a substantial sample size.
* A completely new, very well balanced dataset that we propose to use as a benchmark for age and gender recognition in the wild.
§ RELATED WORKS
Facial age and gender recognition. Typically, solving the gender task separately is not of great interest in research or business. Therefore, most methods and benchmarks either tackle both age and gender tasks or focus solely on age. Convolutional neural networks (CNNs) have become the state-of-the-art in most computer vision challenges, although in recent years, there has been a trend to replace them in certain tasks. Levi et al. <cit.> were the first to use CNNs, evaluating their approach on the Adience dataset <cit.>, which contains age and gender as classes.
The network they implemented is a convolutional model with two fully-connected layers. It achieves an accuracy of 50.7 ± 5.1 for age.
With significant advancements in computer vision neural networks, many methods have been developed, some based on face recognition techniques and models <cit.>, suggesting the existence of powerful generic models for faces that can be adapted to downstream tasks.
Some papers <cit.> even employ more general models as encoders, such as VGG16 <cit.>, particularly for ordinal regression approaches in age estimation.
Other methods utilize CNN networks for direct classification or regression for age recognition <cit.> <cit.>.
As of the writing of this article, the state-of-the-art model on Adience for age classification used the Attention LSTM Networks approach <cit.>, achieving an accuracy of 67.47. However, they did not employ their model for gender prediction.
Recognition using face and body images.
Most methods for age or gender estimation are based on facial analysis. Some consider the body for age <cit.> or gender <cit.> recognition, but in very few works <cit.>, joint recognition using both face and body pictures has been utilized. Therefore, it is difficult to find a baseline in open sources to start with.
Only a few works exist that utilize full-body images of individuals. The earliest attempt <cit.> predates the era of neural networks and employed classical image processing techniques to predict age.
A more recent study <cit.> utilized both face and body images together in a single neural network for age and gender prediction. Another paper <cit.> employed face and body images with a late fusion approach, but solely for gender prediction.
Datasets and benchmarks. Our focus primarily lies on datasets containing both age and gender information. The largest existing dataset for these tasks is IMDB-Wiki <cit.> <cit.>. However, the ground truth answers in this dataset do not appear to be clean. Therefore, we used the cleaned version <cit.>.
Another interesting dataset is UTKFace <cit.>, which also contains both age and gender information but is much smaller, with only annotations for face crops.
The MORPH <cit.> dataset is also notable for age estimation, although the domain captured in this dataset cannot be considered as representing wild conditions.
KANFace <cit.> is another large dataset of images and videos that includes gender information.
The CACD dataset <cit.> is also sizeable and features celebrities from different age groups, making it highly useful, but it does not include gender information.
The aforementioned datasets above contains age information suitable for regression.
The Adience dataset <cit.> contains both age and gender, but age is presented as 8 classes.
FairFace <cit.> is a big and well-balanced dataset, where age is categorized into ranges.
All these datasets are focused on faces, but for most of them it is possible to additionally generate person (body) information.
For our training experiments we use only IMDB-clean and UTKFace, as the biggest datasets with a suitable image domain and annotations.
The FairFace and Adience are employed specifically for benchmarking purposes.
Visual Transformer Models. For many years, convolutional neural networks have dominated the field of computer vision. However, transformers have been increasingly gaining prominence in various tasks and benchmarks. Transformers are powerful and versatile models, and they are far from reaching their limits. One of the first transformer models applied to computer vision was ViT <cit.>, which achieved great success and inspired the exploration of many other variants <cit.> <cit.>. VOLO<cit.> is also a transformer-based model, but it efficiently combines the worlds of CNNs and Transformers and performs extremely well. We chose the VOLO model because it converges quickly and requires less data in our experience. Additionally, VOLO is one of the fastest transformer-based vision models.
Human level for age estimation.
In <cit.>, a comparison was made between the mean absolute error (MAE) of human and machine age estimation. The study examined the FG-NET dataset and the Pinellas County Sheriff's Office (PCSO) dataset (currently unavailable). The authors found that the human MAE on the FG-NET dataset <cit.> was 4.7, while on the PCSO dataset it was 7.2. For the machine results, they obtained 4.6 and 5.1, respectively. They also claimed that their algorithm performed worse than humans in the age range ∈[0, 15] years. The authors noted that this age range is not present in the FG-NET dataset <cit.>, which caused the observed difference. When excluding this range, the estimated human MAE for FG-NET is also very close - 7.4. Eventually, the authors concluded that their model is more accurate than humans.
§ DATASETS
§.§ IMDB-clean
We primarily conducted our experiments on the IMDB-Clean dataset, which comprises 183,886 training images, 45,971 validation images, and 56,086 test images. We utilized the original split of the dataset. The images in this dataset are highly suitable for our tasks and represent a wild domain. However, it is important to note that the dataset only includes images of celebrities, which introduces a bias. Additionally, it suffers from significant class imbalance (Figure <ref>), similar to other datasets.
For the face-only baseline, we utilized this dataset without making any modifications. For experiments involving body images, we generated face and person bounding boxes for all individuals detectable with our model in each image.
§.§ UTKFace
The original dataset only includes bounding boxes for cropped face images, as we performed backward matching to the original full-sized images. This process also involved double-checking by the face encoder. During this process, we encountered 4 faces that did not match back to the original images, so we dropped those images from our annotation. The remaining images maintain the original annotations but with bounding boxes generated by our detector.
The original dataset does not provide any predefined training, validation, and test splits. To align with other works, we utilized the exact same split as described in <cit.>. In this subset, the ages are in range ∈ [21, 60], totalling in 13,144 training and 3,287 test images.
§.§ FairFace
The FairFace<cit.> dataset comprises 86,744 training images and 10,954 validation images. The ground truth attributes in this dataset cover race, gender, and age, categorized into nine classes: (0-2, 3-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+). The dataset is also very well balanced by races.
For measuring gender and age classification accuracy, we utilize a validation set.
To gather information about bodies, we utilize 'fairface-img-margin125' images and employ our detector model to localize the centered face and its corresponding body region.
§.§ Adience
The Adience<cit.> dataset consists of 26,580 facial images, depicting 2,284 subjects across eight age group classes (0-2, 4-6, 8-13, 15-20, 25-32, 38-43, 48-53, 60-100). The dataset additionally includes labels for the binary gender classification task.
The images in the Adience<cit.> dataset were captured under varying conditions, including differences in head pose, lighting conditions, and image quality.
For our analysis, we utilize coarse aligned images and refrain from applying any other aligning methods.
To refine the facial localization on the images, we employ our detector model. We deliberately avoid using in-plane aligned versions of the faces to prevent distortion. The validation of our models and computation of classification accuracy are performed using all five-fold sets.
§.§ New Lagenda dataset
§.§.§ Lagenda benchmark
Due to issues such as bias in datasets containing celebrities and professional photos, we introduce a completely new benchmark in our paper for age and gender recognition tasks in wild conditions. We named this benchmark the Layer Age Gender Dataset (Lagenda), after the name of our team. To create it, we initially sampled random person images from the Open Images Dataset <cit.> (OID). This dataset offers a high level of diversity, encompassing various scenes and domains.
The images were annotated using a crowd source platform. To ensure high-quality annotations for age estimation, we implemented strict control measures. Control tasks (honeypots) were included to maintain accuracy. Each honeypot had a 7-year age range, within ±3 years of the true age. Therefore, the accuracy on these control tasks can be seen as just CS@3 (see <ref>).
Control measures included:
* Mandatory training for all users before proceeding.
* Users had to pass an examination; CS@3 below 20% resulted in a ban.
* Annotation tasks consisted of 6 examples and 1 hidden control task, totaling 7 tasks per suite.
* After completing 10 task suites, users with average CS@3 below 20% were banned, and their answers rejected.
These measures were implemented to prevent significant noise from bots and cheaters.
Our dataset was annotated with an overlap of 10, meaning that each real task received 10 votes for both age and gender.
In the last step, we balanced the dataset by age distribution using 5-year groups and ensured gender distribution within each one. As a result, we obtained 67,159 images with 84,192 persons, comprising 41,457 male and 42,735 female samples. Please refer to Figure <ref> for a visualization of the dataset distribution.
§.§.§ Votes ensembling
After completing the annotation process, we encountered the challenge of determining how to utilize the obtained votes.
Table <ref> provides a list of all the methods that were tested. In addition to other statistical methods, we employed a weighted mean strategy. It was implemented as follows:
A(v) = ( ∑_i=1^N v_i · e^(MAE(u_i))^-1 ) / ( ∑_i=1^N e^(MAE(u_i))^-1 )
where A is the final age prediction for the vector of user votes v, N is the size of v, i.e., the number of users who annotated this sample, and MAE(u_i) denotes the individual MAE across all control tasks for the i-th user u_i.
We used an exponential weighting factor because there is a substantial difference in annotation quality between users with MAE of 3 and 4, for example. This approach outperformed other variants significantly.
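A small numpy sketch of this aggregation is shown below. The weight is read here as w_i = (e^(MAE(u_i)))^-1 = e^(-MAE(u_i)); if the intended weight is instead e^(1/MAE(u_i)), only the `weights` line changes.

```python
# Aggregate one sample's age votes with exponential weights based on each annotator's control-task MAE.
import numpy as np

def aggregate_age(votes: np.ndarray, annotator_mae: np.ndarray) -> float:
    """votes: ages given by N annotators; annotator_mae: their MAE on control tasks."""
    weights = np.exp(-annotator_mae)   # assumed reading of the weight in the equation above
    return float(np.sum(weights * votes) / np.sum(weights))

print(aggregate_age(np.array([30.0, 34.0, 41.0, 29.0]), np.array([3.1, 4.8, 9.0, 2.5])))
```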
Gender was aggregated using the simple mode(v), where v is an array of elements ∈ {male, female}. We discarded all answers where the mode occurred with a frequency of less than 75%. Based on the control tasks, the resulting gender accuracy is estimated to be 99.72%. We can roughly claim that human accuracy for this task is less than or equal to this level.
§.§.§ Lagenda trainset
Experiments in this work required not only a high-quality benchmark but also a large amount of training data. Therefore, besides our benchmark, we also collected data from other sources, mostly from our production. These images are in almost the same visual domain as images from OID<cit.>.
Our training dataset contains approximately 500,000 images in total, which have been annotated in exactly the same way as the benchmark.
In the text, we refer to this proprietary training and validation data as the Lagenda trainset.
Although we cannot make this data publicly available, we provide a demo with the model trained on it (the link can be found in the Github repository).
§ METHOD
§.§ MiVOLO: Multi-input age & gender model
Our model is depicted in Figure <ref>.
For each input pair (FaceCrop, BodyCrop) of size 224 × 224, we independently apply the original VOLO<cit.> patch embedding module, which tokenizes the crops into image patches of size 8 × 8.
Two representations are then fed into a feature enhancer module for cross-view feature fusion, which is achieved using cross-attention. The module is illustrated in Fig. <ref>, block 2. Once the features are enriched with additional information, we perform a simple concatenation, followed by a Multi-Layer Perceptron (MLP) that creates a new fused joint representation and reduces the dimensionality of the features.
This feature fusion allows us to pay attention to important features from both inputs and disregard less significant ones. Additionally, it handles scenarios where one of the inputs is empty, ensuring meaningful information is extracted even from a single view.
The fused features are then processed using the VOLO two-stage architecture<cit.>, which involves a stack of Outlookers, tokens downsampling, a sequence of transformers, and two heads on top.
The last two linear layers update the class embedding into a 3-dimensional vector: two output values for gender and one for the normalized age value. Unlike <cit.>, which uses multiple heads for separate age and gender predictions, MiVOLO produces a single vector for each image containing both outputs.
We use combination of two losses for training:
* WeightedMSE loss function for age prediction with weights from LDS<cit.>
* BinaryCrossEntropy loss function for gender prediction
As demonstrated in Table <ref>, multi-task learning enables us to achieve improvements in both tasks.
Moreover, early feature fusion allows us to maintain almost the same high performance as that of the original VOLO (see <ref>).
§.§ Data preprocessing
Each face and body crop image we resize by using letterbox with padding to preserve the aspect ratio, followed by RGB channel Z-score normalization, using the Imagenet original values. The resize algorithm used is bilinear.
The ground truth answers are also processed with min-max normalization:
ỹ_i = (y_i - y_min)/(y_max - y_min).
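A sketch of this preprocessing is given below, assuming OpenCV for the bilinear resize, zero padding centered on the canvas, and the standard ImageNet statistics; the age bounds for min-max normalization are dataset-dependent.

```python
# Letterbox resize to 224x224 with centered zero padding, then ImageNet Z-score normalization.
import cv2
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess_crop(crop: np.ndarray, size: int = 224) -> np.ndarray:
    """crop: (H, W, 3) RGB uint8 image; returns a (size, size, 3) float array."""
    h, w = crop.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(crop, (int(round(w * scale)), int(round(h * scale))),
                         interpolation=cv2.INTER_LINEAR)
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    top, left = (size - resized.shape[0]) // 2, (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return (canvas / 255.0 - IMAGENET_MEAN) / IMAGENET_STD

def normalize_age(y: float, y_min: float, y_max: float) -> float:
    """Min-max normalization of the age target (equation above); bounds depend on the dataset."""
    return (y - y_min) / (y_max - y_min)
```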
To obtain face-body pairs, we follow these steps:
* The input image is first passed through a detector to find all faces and persons. We specifically trained YOLOv8 <cit.> for the publicly available version of our code.
* Using the lists of face and person objects obtained, we run the Assign(faces, persons) algorithm to associate faces with corresponding persons. This method makes use of the Hungarian algorithm; a minimal sketch of this assignment step is given below. Unassigned faces or bodies can still be utilized as independent inputs.
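The sketch below shows one way such an assignment could be set up with the Hungarian algorithm, using the fraction of each face box contained in each person box as the (negated) cost; the exact cost function of Assign(faces, persons) is not specified in the text, so this choice and the overlap threshold are assumptions.

```python
# Assign detected faces to detected person boxes with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def face_in_person(face, person):
    """Fraction of the face box (x1, y1, x2, y2) that lies inside the person box."""
    ix1, iy1 = max(face[0], person[0]), max(face[1], person[1])
    ix2, iy2 = min(face[2], person[2]), min(face[3], person[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    face_area = (face[2] - face[0]) * (face[3] - face[1])
    return inter / face_area if face_area > 0 else 0.0

def assign(faces, persons, min_overlap=0.3):
    cost = -np.array([[face_in_person(f, p) for p in persons] for f in faces])
    rows, cols = linear_sum_assignment(cost)      # minimizes total cost, i.e. maximizes overlap
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= min_overlap]
```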
Unlike faces, body images of persons pose several specific challenges. The body can be heavily occluded, and there can be many different small parts of the body appearing in the image that are not useful for recognition. Such images require a more complex preprocessing approach. Additionally, the nature of bounding boxes introduces another challenge. While face crops rarely contain other people's faces or bodies, body crops often do.
We implemented additional preprocessing steps for body images:
* Check for intersections of the current body bounding box body_i with all detected objects in the image. If any intersection exists, regardless of its size, apply DetachObject(body_i) that removes all the objects intersected with the i-th. This also applies to the paired face crop.
* The remaining image may contain unwanted artifacts. To handle these artifacts, we added a trimming operation Trim(b_i). In Figure <ref>, result of this operation can be observed.
* If the resulting body image is too small in terms of pixels or size compared to the original crop, it is considered useless and discarded. A rough sketch of these cleaning steps is given after this list.
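The rough sketch below approximates these steps: DetachObject is emulated by zeroing out every intersecting detection, Trim by cropping away fully-zero border rows and columns, and the final size check uses an arbitrary placeholder threshold.

```python
# Clean a body crop: mask out intersecting detections, trim empty borders, reject tiny remains.
import numpy as np

def clean_body_crop(crop: np.ndarray, other_boxes, min_keep_ratio: float = 0.3):
    """crop: (H, W, 3) array; other_boxes: intersecting detections in crop coordinates (x1, y1, x2, y2)."""
    cleaned = crop.copy()
    for x1, y1, x2, y2 in other_boxes:              # "DetachObject": remove overlapping objects
        cleaned[max(y1, 0):y2, max(x1, 0):x2] = 0
    rows = np.where(cleaned.any(axis=(1, 2)))[0]    # "Trim": drop fully-empty border rows/columns
    cols = np.where(cleaned.any(axis=(0, 2)))[0]
    if rows.size == 0 or cols.size == 0:
        return None
    cleaned = cleaned[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    if cleaned.shape[0] * cleaned.shape[1] < min_keep_ratio * crop.shape[0] * crop.shape[1]:
        return None                                 # too little of the body is left
    return cleaned
```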
§.§ Performance
We consider the VOLO-D1 model variation as our baseline, which consists of 25.8M parameters. In comparison, the MiVOLO-D1 model has 27.4M parameters. Figure <ref> demonstrates that while MiVOLO-D1 is slightly slower than the original version, it still exhibits high performance. All measurements were conducted using a single V100 GPU with float16 precision. When dealing with a single input (even in a mixed batch), we have the option to skip the first PatchEmbedding step for the missing input, leading to a significantly faster inference time.
§ EXPERIMENTS
Our code is based on PyTorch <cit.> and timm <cit.>. We use the VOLO <cit.> model as our baseline.
§.§ Evaluation metrics
In this section, we present the model's performance using various metrics. For gender prediction and age prediction in classification benchmarks, we utilize the classification accuracy metric.
In regression age benchmarks, the model's performance is evaluated based on two metrics: Mean Absolute Error (MAE) and Cumulative Score (CS). MAE is calculated by averaging the absolute differences between the predicted ages and the actual age labels in the testing dataset. CS is computed using the following formula:
CS_l = N_l/N× 100%
Here, N represents the total number of testing examples, while N_l denotes the count of examples for which the absolute error between the estimated age and the true age does not exceed l years.
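Both metrics follow directly from the predicted and true ages, as in the short sketch below.

```python
# Mean absolute error and cumulative score CS@l for age estimation.
import numpy as np

def mae(pred: np.ndarray, true: np.ndarray) -> float:
    return float(np.mean(np.abs(pred - true)))

def cumulative_score(pred: np.ndarray, true: np.ndarray, l: int = 5) -> float:
    """Percentage of samples whose absolute age error does not exceed l years."""
    return float(np.mean(np.abs(pred - true) <= l) * 100.0)
```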
§.§ VOLO Experiments on Open Source Datasets
First, we conducted experiments on the IMDB-clean and UTKFace datasets to establish a good baseline and identify model limitations. In this section, the original images, annotations, and data splits were used.
For the age estimation task, our baseline model, VOLO-D1, was trained using only the face input. We employed the AdamW optimizer with an initial learning rate of 1.5e-5 and a weight decay of 5e-5. The model was trained for 220 epochs individually on both the IMDB-clean and UTKFace datasets. The base learning rate batch size was set to 192. At the start of training, we performed a warmup with lr=1e-6 for 25 epochs with gradual increase.
The following data augmentations were applied during training:
* RandAugment with a magnitude of 22 and bilinear resizing.
* Random bounding box jitter for position and size, both with a magnitude of 0.45.
* Reprob with p=0.5.
* Random horizontal flip with p=0.5.
Additionally, we incorporated drop and drop-path with p=0.32.
We performed several experiments, exploring different parameters and loss functions. For age estimation, we tried WeightedFocalMSE loss and WeightedHuber loss, but simple WeightedMSE yielded the best performance.
As shown in Table <ref> our results are state-of-the-art without any additional data or advanced techniques on IMDB-clean and UTKFace datasets.
For the age & gender VOLO-D1 model, we followed the same training process. To address the discrepancy in the magnitudes of loss between age and gender, we weighted the gender loss with w=3e-2. We did not change anything else, including the number of epochs.
By adding a second output (gender) to the age model, we expected to observe the same effect as reported in the study <cit.>, where a single model performs better than multiple separate models, leveraging the benefits of learning two tasks simultaneously. And, indeed, we obtained a significantly better MAE for the age, while also achieving impressive accuracy for gender classification. Please refer to Table <ref> for the detailed results.
§.§ MiVOLO Experiments on Open Source Datasets
We made some minor adjustments to the training process for the model. To reduce training time, we initialized the model from a single-input multi-output VOLO checkpoint. We initialized weights of the body PatchEmbedding block with the same weights as the face PatchEmbedding block. The Feature Enhancer Module was initialized with random parameters.
During training, we froze the face PatchEmbedding block since it was already trained. We trained the model for an additional 400 epochs, incorporating random dropout of the body input with a probability of 0.1, and random dropout of the face input with a probability of 0.5. Face inputs were only dropped for samples with suitable body crops. If a face input was dropped, the model received an empty (zero tensor) input for face PatchEmbedding, and the same for empty body inputs.
These techniques were implemented to adapt the model for various mixed cases and to improve its understanding of input images, resulting in enhanced generalization. We also set the learning rate to 1e-5. To preserve the structural integrity of the data, all augmentations, excluding jitter, are applied simultaneously.
The remaining parts of the training procedure are unchanged.
We conducted experiments on the IMDB-clean dataset using our MiVOLO. Table <ref> shows a comparison between the single-input VOLO and the multi-input MiVOLO. The results indicate that the best performance across all benchmarks is achieved by using both face and body crops. The model trained on our dataset consistently outperforms the one trained on IMDB.
To evaluate the quantitative performance of MiVOLO when only body images are available, we conducted an experiment where all faces were removed from the data. Additionally, we excluded any images that did not meet the requirements specified in Section <ref>. The IMDB-clean, UTKFace and Lagenda test datasets retained 84%, 99.6% and 89% of their images, respectively. Results are displayed in Table <ref> and Figure <ref> (b).
§.§ Lagenda experiments
We repeated all previous experiments on our Lagenda trainset. We trained three variants of the model: VOLO-D1 face-only age, VOLO-D1 face-only age & gender, and MiVOLO-D1 face + persons age & gender. We kept all training parameters unchanged, following the same configuration as for the IMDB-clean dataset.
Please refer to Table <ref> and Table <ref> for the results. As expected, the amount of data played a crucial role in the performance of our model. We observed significant improvements and achieved SOTA results for the Lagenda, UTKFace, and IMDB-clean datasets by utilizing the face & body multi-input approach. Remarkably, we also obtained satisfactory results for body-only inference.
In Figure <ref>, we provide an illustration of a successful recognition result without visible faces in a random picture sourced from the internet.
Model generalizes very well, even though it has never seen images like this with persons shown from the back.
Relationship between MAE and age for final models is shown in Figure <ref> (a) and (b).
§.§ Adience and FairFace experiments
Due to the model's impressive generalization capabilities, we decided to apply MiVOLO to popular classification benchmarks such as FairFace <cit.> and Adience <cit.>. Since our model cannot be trained explicitly for classification, we utilized our final MiVOLO-D1 age & gender model without any modifications. The only change made was mapping the regression output to classification ranges. As shown in Table <ref>, we achieved SOTA results for both datasets without any additional changes.
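One plausible way to perform this mapping is to pick the group whose interval contains the predicted age and fall back to the nearest interval midpoint for ages between groups; the paper does not spell out its exact rule, so the sketch below (using the Adience groups) is an assumption.

```python
# Map a continuous age prediction to the Adience age-group classes.
import numpy as np

ADIENCE_GROUPS = [(0, 2), (4, 6), (8, 13), (15, 20), (25, 32), (38, 43), (48, 53), (60, 100)]

def age_to_group(age: float) -> int:
    for idx, (lo, hi) in enumerate(ADIENCE_GROUPS):
        if lo <= age <= hi:
            return idx
    midpoints = np.array([(lo + hi) / 2 for lo, hi in ADIENCE_GROUPS])
    return int(np.argmin(np.abs(midpoints - age)))  # fall back to the nearest group midpoint

print(age_to_group(22.4))  # -> 3, i.e. the 15-20 group is nearest by midpoint
```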
§ HUMAN LEVEL ESTIMATION AND VOTES ENSEMBLING FOR AGE RECOGNITION
§.§ Human level for age estimation
As described in Section <ref>, during the annotation of the Lagenda benchmark, control tasks (honeypots) were generated from the IMDB-clean dataset. A total of 3,000 random examples were sampled for this purpose. Users were not aware of which examples were honeypots and annotated them alongside other tasks. This approach provided a reliable source for estimating the human level of performance in the task.
Figure <ref> illustrates the distribution of MAE values among the users. The mean of this distribution is 7.22, and the median is 7.05. The averaged maximum error is 28.56, while the minimum mean error for a specific user is 4.54.
We have briefly described paper <cit.> in section <ref>.
We disagree with the method of excluding certain age ranges as it can potentially lead to incorrect conclusions. The authors claimed that their model's accuracy is either equal to or surpasses human accuracy. However, since we can only consider the results obtained on the FG-NET dataset due to the aforementioned issue, we have only one estimation where the model achieved an MAE of 4.6 compared to 4.7 in humans. Given this small difference and the sample size of 1,002 images, the statistical evidence does not appear to be substantial. Furthermore, it is important to note that both datasets have specific visual domains, which can further affect the generalizability of the results.
To accurately compare human and machine performance, it is crucial to take into account the entire range of ages and images from the wild domain.
As can be seen in Figure <ref>, the previous suggestion about low neural network and high human performance in the age range of [0, 15] years no longer holds.
It turned out that both humans and neural network exhibit an increase in error and its dispersion with the age of the person in the image.
Overall, we can confidently state that our model surpasses human annotators across the majority of age ranges. Furthermore, as shown in Table <ref>, the model achieved a MAE of 6.66 on IMDB-clean with body-only images. This demonstrates that, on average, our model outperforms humans even when considering body-only mode.
§ CONCLUSIONS
We have introduced a simple yet highly efficient model, MiVOLO, that achieves state-of-the-art results on 4 benchmarks, demonstrating its capability to function robustly even in the absence of a face image.
To contribute to the research community, we are providing the weights of the models, which have been trained on Open Sourced data.
In addition, we have enriched and expanded the annotations of 3 prominent benchmarks, IMDB-clean, UTKFace and FairFace. Furthermore, we have developed our own diverse and unbiased Lagenda benchmark, which contains challenging real-world images and is publicly available.
For the task of age annotation aggregation, we employed an intuitive yet remarkably accurate method and evaluated its performance.
Our investigation into the comparison of human and machine accuracy in age recognition tasks revealed that our current model consistently outperforms humans across various age ranges, exhibiting superior overall accuracy.
§ FUTURE WORK AND DISCUSSION
Despite the fact that we achieved our goals, some questions remain open. We still cannot be sure about the physically possible MAE on these or any other age recognition task in computer vision.
However, the weighted mean from human annotators gives us a very interesting estimation of a certain achievable level in the age recognition task, which is 3.5.
Our approach can be significantly improved by incorporating new class-agnostic segmentation approaches, such as the Segment Anything Model <cit.>. These approaches can provide accurate masks for the body, which would be highly beneficial.
Certainly, even in our very well-balanced dataset, there is a lack of data in the higher age ranges, particularly around 80 years and beyond. As we have shown, the largest contribution to the achieved MAE comes from this range, so it needs to be addressed in future work.
Additionally, this task requires a huge amount of data in order to train a perfect model. However, due to the nature of the task, it is very difficult to obtain it. Therefore, we expect that our method can be combined with Masked Autoencoders <cit.> or other scalable self-supervised method.
|
http://arxiv.org/abs/2307.04808v1 | 20230710180113 | Autonomous feedback stabilization of a cavity-coupled spin oscillator | [
"Julian Wolf",
"Olive H. Eilbott",
"Josh A. Isaacs",
"Kevin P. Mours",
"Dan M. Stamper-Kurn"
] | physics.atom-ph | [
"physics.atom-ph",
"quant-ph"
] |
[email protected]
[Present address: ]Eikon Therapeutics, Hayward, CA 94545, USA
Department of Physics, University of California, Berkeley, California 94720, USA
Challenge Institute for Quantum Computation, University of California, Berkeley, California 94720, USA
[Present address: ]Max-Planck-Institut für Quantenoptik, Garching, Germany
Munich Center for Quantum Science and Technology (MCQST), 80799 Munich, Germany
Department of Physics, University of California, Berkeley, California 94720, USA
Challenge Institute for Quantum Computation, University of California, Berkeley, California 94720, USA
Technische Universität Kaiserslautern, 67663 Kaiserslautern, Germany
Department of Physics, University of California, Berkeley, California 94720, USA
Challenge Institute for Quantum Computation, University of California, Berkeley, California 94720, USA
Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
We report out-of-equilibrium stabilization of the collective spin of an atomic ensemble through autonomous feedback by an optical cavity.
For a magnetic field applied at an angle to the cavity axis, dispersive coupling to the cavity provides sensitivity to a combination of the longitudinal and transverse spin.
Coherent backaction from this measurement, conditioned by the optical cavity susceptibility, stabilizes the collective spin state at an arbitrary energy.
The set point tracking and closed-loop gain spectrum of the feedback system are characterized and found to agree closely with analytic predictions.
Autonomous feedback stabilization of a cavity-coupled spin oscillator
Dan M. Stamper-Kurn
August 12, 2023
=====================================================================
As in the case of classical systems, the state and evolution of quantum systems can be tailored by feedback control <cit.>.
At a scientific level, the development of a quantum control theory, one that integrates entanglement and non-classical effects of dissipation and measurement, opens a new line of inquiry into non-equilibrium and open quantum systems.
At an applied level, feedback control allows quantum devices to operate robustly, mitigating errors in system preparation and calibration as well as decoherence.
Feedback control underpins important tasks such as error correction in quantum computation <cit.> and sensing <cit.>, entanglement purification <cit.>, and adaptive measurement <cit.>.
Quantum feedback control can be divided broadly into the two categories of measurement-based and autonomous feedback.
In the measurement-based approach, properties of a quantum system are read out on a classical sensor, the measurement record of which is used by an extrinsic classical control device to alter the ensuing coherence, dissipation, and measurement operations on the system <cit.>.
The feedback system's design must account for noise and backaction that are intrinsic to quantum measurement.
By comparison, in autonomous (or coherent) quantum feedback the corrective response that steers a quantum subsystem is built into the quantum system itself.
Control is achieved by structuring the drive and dissipation of an open quantum system so that entropy is reliably extracted as the quantum system is steered to the desired final state.
Examples of such schemes include autonomous error correction in bosonic code spaces <cit.>, autonomous generation of entanglement between quantum bits <cit.>, quantum state preparation <cit.>, optical noise cancellation <cit.>, and generating spin squeezing of atoms in a driven optical resonator <cit.>.
In this work, we develop a coherent feedback scheme to stabilize the energy of an ensemble of quantum spins.
Our scheme employs optical backaction in a driven cavity to realize closed-loop autonomous feedback.
Under negative-feedback conditions, we observe that cavity spin optodynamics <cit.> deterministically steer the collective spin toward a steady-state energy that is set by the frequency of the driving optical field.
By examining both the light that drives the system and the atomic spins that respond to this drive, we quantify the tuning sensitivity as well as the closed-loop gain spectrum of the autonomous feedback system and find close agreement with a theoretical model.
The feedback system comprises the collective spin of an ultracold atomic gas interacting with an optical cavity mode.
In particular, an ensemble of ≈ 1400 non-degenerate atoms, cooled to around 3, is trapped predominantly in a single antinode of a standing-wave optical dipole trap (ODT, wavelength 842) resonant with a TEM00 mode of an optical Fabry–Pérot cavity <cit.>.
The atoms are initially prepared in the |f = 2,m_f = 2⟩ electronic ground state.
The atomic ensemble interacts strongly with a second TEM00 cavity mode (the “pump” mode) whose frequency is detuned slightly (≡ - = 2 π×-35) from the atomic transition (frequency , wavelength 780).
The half-linewidth of the cavity at the wavelength is κ / 2 π = 1.82.
We minimize the effects of the spatial dependence of the atom–cavity coupling by trapping atoms at a location where the trapping field and pump field antinodes coincide (schematica) <cit.>.
The symmetric coupling of the atoms to the cavity field allows the ensemble to be addressed in terms of a total (dimensionless) spin F = 2 ≈ 2800 and a mean dispersive cavity–atom coupling = g_0^2 /, where g_0 = 2 π×13 is the vacuum Rabi coupling of a single atom to the pump mode, averaged over the atom's motion in the ODT <cit.>.
For a cavity pumped with circularly polarized (, or σ_- relative to the cavity axis) light, the dynamics of the system are governed by the Hamiltonian <cit.> (Appendix A)
Ĥ =
- ħĉ^†ĉ
+ ħ
+ ħ[ α_0 - α_1 ] ĉ^†ĉ,
written in a frame rotating at the pump frequency .
Here, ≡ - is the pump–cavity detuning, ĉ is the cavity pump mode annihilation operator, and = g_F |B⃗| / ħ is the Larmor frequency (where g_F is the Landé g-factor, |B⃗| the applied magnetic field strength, and is the Bohr magneton).
The constants α_0 = 2/3 and α_1 = 1/6 describe the scalar and vector interactions, respectively, between the atoms and the cavity pump field (Appendix A).
We have treated atom–cavity interactions as purely dispersive, accounting for || being large compared to the atomic half-linewidth (γ = 2 π×3) and to the collective atom–cavity coupling strength √(N) g_0.
The collective atom–cavity interaction is the sum of two terms.
A scalar (spin-independent) dispersive atom–light interaction shifts the cavity resonance frequency proportional to atom number .
With being constant during the few- duration of the spin feedback experiments, it is useful to absorb this static frequency shift into an effective constant pump–cavity detuning ≡ - α_0.
In addition, a vector (spin-dependent) atom–cavity interaction shifts the resonance frequency of the σ_- cavity mode by an amount proportional to the projection of the collective spin onto the cavity axis (Appendix A).
We now outline how this quantum system autonomously includes the essential elements of a feedback control system.
In such a control system, a control variable, which represents the state of the plant (the subsystem to be controlled), is measured by a sensor.
A comparator generates an error signal as the difference of the sensor output and an externally determined set point.
A controller conditions the error signal and acts on the plant.
Under proper negative-feedback conditions, the control variable is stabilized unconditionally.
Identifying = sin + cos, with being the angle between the applied magnetic field and the cavity axis k̂ (schematica), we observe that the cavity is sensitive both to the longitudinal spin (equivalently, the bare spin energy) and the transverse spin .
The longitudinal spin plays the role of the control variable, and the cavity shift proportional to acts as a coherent sensor.
Accounting for this shift, the net detuning of the pump light from the cavity, averaged over the fast effects of Larmor precession, is given by ≡ + cos, with ≡α_1.
The system Hamiltonian hamiltonian-general can now be rewritten as
Ĥ =
- ħĉ^†ĉ
+ ħ
- ħĉ^†ĉ sin.
The net detuning represents the control system error signal, proportional to the difference between the instantaneous value of and the externally determined longitudinal spin set point
= -/cos.
The final term in hamiltonian-feedback completes the autonomous control system, serving as the feedback actuator.
As described in Refs. <cit.>, the Larmor precessing transverse spin modulates the cavity field intensity through the spin-dependent dispersive interaction.
In turn, this modulation, conditioned by cavity dynamics, acts resonantly on the precessing spin and alters its energy.
As in the case of cavity optomechanics, the resulting energy dynamics of the spin ensemble can be described in terms of cavity-induced sideband asymmetry.
Modulation of the cavity resonance by the precessing spin shifts optical power from the pump light into first-order frequency sidebands with optical frequencies ±.
While a free-space modulator would generate sidebands with equal power, here, the cavity spectrum induces a sideband asymmetry (schematicb).
For a pump blue-detuned from cavity resonance (> 0), the cavity induces stronger emission on the Stokes (red) sideband, reducing the net energy of the optical pump and, in turn, increasing the energy of the spin ensemble.
Similarly, a pump red-detuned from cavity resonance (< 0) reduces the energy of the spin ensemble.
With the correct sign of cos in Fz-final, the response of the spin ensemble in either case brings closer to zero.
The system arrives at a stable steady state, with ⟨⟩ =, that is determined by the pump frequency (through ) and that, notably, is independent of the initial state of the collective spin and insensitive to many perturbations.
We first confirm experimentally that the spin ensemble is autonomously stabilized to a state determined by the external set point.
To this end, the collective spin is initiated to ⟨(t = 0) ⟩ = 0 using a coherent rf π/2-pulse at drive frequency , such that (t = 0) =.
The cavity is then pumped with light at a constant and allowed to evolve.
The light emitted by the cavity is measured on a balanced heterodyne detector <cit.> (total detection efficiency = 2.2, schematica), allowing the power in the Stokes and anti-Stokes sidebands to be detected as independent time traces (dc-pullinga).
The difference in the power of the two sidebands directly measures the instantaneous energy transfer from the pump light to the collective spin.
The cumulative sum of this difference measures the total energy δ E(t) added to the collective spin, leading up to time t.
As shown in dc-pullinga and b, an initial < 0 leads to an enhancement of the anti-Stokes sideband and a net energy transfer δ E < 0, driving the spin to a low energy state, while > 0 has the opposite effect.
The spin state achieved after long evolution times under autonomous feedback, at a given , is shown in dc-pullingc.
Here, we measure the longitudinal spin by terminating the feedback, reorienting the magnetic field along the cavity axis, and measuring the spin-dependent cavity shift (Appendix B).
For each , the measured response shows a tuning range, centered about = 0, within which the steady-state spin energy varies linearly with .
The sideband-based energy transfer measurements show the same trend as the spin measurements, through the relation ħ⟨⟩(t) = δ E(t), but are found to be less precise.
By fitting the response curves to an analytical model (discussed below and in Appendix C), we determine the linear sensitivity of the steady-state spin to near = 0.
This linear sensitivity (dc-pullingd) matches well to the prediction of the set point equation (Fz-final) for a range of field angles.
Outside the linear tuning range || > |2 cos|, one would expect the feedback system to rail, driving the spin ensemble to one of its extremal energy states.
Such a saturated response is observed for ≳55.
Here, the cavity field modulation amplitude, proportional to sin, is large, driving the spin quickly to its steady state.
In contrast, for shallower angles (≲55), the collective spin undergoes dephasing during feedback, reducing its total magnitude to || F⃗ || < 2 before the system can reach its steady state.
Next, we investigate the dynamical response of the autonomous feedback system.
Considering the Hamiltonian of hamiltonian-feedback and adding terms accounting for pumping into and (non-Hermitian) leakage out of the cavity mode, the cavity field evolves according to
/ tĉ
= ( + ) ĉ
- κĉ
+ κη,
where η is the coherent-state amplitude of the field pumping the cavity.
The energy of the collective spin, meanwhile, evolves according to
/ t =
ĉ^†ĉ sin.
For ≫, as in our experiment, the optodynamical Larmor frequency shift (analogue of the optomechanical spring shift) <cit.> is small and the transverse spin can be approximated as = sin t; in practice, this relies on terms with nontrivial commutation relations only entering in at a higher order than is being considered, and amounts to treating as equal to its expectation value.
We expect a cavity field comprising a carrier at frequency and sidebands at frequencies ±: ĉ = ĉ_0 + ĉ_+ e^- t + ĉ_- e^ t.
In the limit of small modulation depth ( / 2 κ) F_⊥ |sin| ≪ 1, the amplitudes and phases of the sidebands ĉ_± can be calculated directly from cavity-field.
Inserting this solution for the cavity field into F-pulling, we find that the cavity resonance frequency is pulled toward its set point at a damping rate given as
≡1// t
= -2 ^2 sin^2 cos^3/κ^3.
Here, ≡ κ^2 / (κ^2 + ^2) with = |η|^2; this can be measured directly from the spectrum of the cavity output.
This simple model allows for straightforward simulation of how the system will act under a variety of conditions.
We probe the dynamics of our feedback system in two experiments.
First, we characterize the system's closed-loop transfer function by pumping the cavity with a time-varying tone (t) = sin t and measuring the response δ E(t) (ac-responsea).
At each modulation frequency, the closed-loop gain is calculated as the ratio between the response and the perturbation:
[] =
2 /ħ T∫_0^T t δ E(t) exp(- t),
where T = 2 π s /, for integer s (ac-responseb, black circles).
For a pure integrator system such as ours with damping rate , we expect a closed-loop gain of
[ω] =
( / ω)/1 + ( / ω),
which should describe the system well for ≪κ, 2 cos (ac-responseb, gray line).
Our measurements match this expectation qualitatively, but the data quality is limited by a signal-to-noise ratio of approximately 1 (dominated by shot noise on the detection of the sideband photons, which is exacerbated by the low ) as well as by saturation at the large set point modulation depth used for this measurement.
Second, and more quantitatively, we characterize the impulse response function of the feedback system.
Here, we initialize the collective spin near ⟨f̂_z ⟩ = +1, and then suddenly impose feedback with a set point of / = -1 (ac-responsec).
Time-resolved direct spin measurements track the system evolution toward the set point (Appendix B).
For regions over which is approximately constant (namely, | ⟨f̂_z ⟩ | ≤ 1), damping-rate states that ⟨⟩ should approach exponentially.
This allows the damping rate of the system to be directly measured, giving a value of = 450 ± 60 (ac-responsec).
For the same experimental parameters (= 60, = 2 π×300, = 2.4, = 1100), damping-rate predicts = 1600.
This disagreement is due, in part, to the large modulation depth used for this experiment: here, ( / 2 κ) sin = 0.4, which warrants the inclusion of higher-order terms.
Accounting for these corrections reduces the expected gain to = 790 (Appendix C).
The analytical model still does not account for the dephasing of the spin ensemble.
Constructing an accurate model for dephasing in our system is not straightforward, but any form of dephasing will have the effect of decreasing , and thus , which may explain the remaining discrepancy.
In this work, we have shown that autonomous feedback generated by optical backaction of a driven cavity onto a spin ensemble stabilizes the ensemble energy at an energy determined by the cavity pump frequency.
The optical cavity emission provides a real-time record of the feedback dynamics.
In future work, information from this real-time optical signal may also be used to enhance the feedback stabilization through additional measurement-based feedback <cit.>.
Our system can equivalently be described as autonomous feedback stabilization of the optical cavity's resonance frequency.
From this viewpoint, the control variable is .
The spin ensemble now plays the part of the controller by which the cavity is autonomously tuned to be in resonance with the light with which it is driven.
Our feedback setup stabilizes the spin ensemble to a specific value of the longitudinal spin, but does not control the phase at which this spin undergoes Larmor precession because of the time translation symmetry of our scheme.
In future work, it will be interesting to investigate methods for fuller control of the quantum spin state, e.g., applying phase coherent modulation at the Larmor frequency, either to the optical pump field or to an applied magnetic field, so as to stabilize the Larmor precession phase.
Another target for future investigation is the fluctuation of the spin ensemble under steady-state feedback.
In steady state, the ensemble should respond to the quantum noise of the cavity field, generating fluctuations in the longitudinal spin as well as the Larmor precession phase.
At the same time, coherent feedback suppresses longitudinal spin fluctuations.
The balance between quantum-optical fluctuations and coherent dissipation, achieved in the steady state and away from thermal equilibrium, may be revealed in the spectrum of the cavity output.
However, in our current setup, technical noise on , the pump light spectrum, and optical detectors obscures this quantum noise signature.
We acknowledge support from the National Science Foundation Quantum Leap Challenge Institutes program (Grant No. OMA-2016245), from the National Science Foundation (Grant No. PHY-1707756), from the Air Force Office of Scientific Research (Grant No. FA9550-19-1-0328), and from Army Research Office through the Multidisciplinary University Research Initiative program (Grant No. W911NF-20-1-0136).
The contributions of J. A. I. are funded by the Heising-Simons Foundation (Grant No. 2020-2479).
The contributions of O. H. E. are supported by the National Science Foundation Graduate Research Fellowship Program (Grant No. DGE-175281).
§ A. DERIVATION OF THE AUTONOMOUS SPIN STABILIZATION HAMILTONIAN
The Hamiltonian for the autonomous spin stabilization system can be written, generically, as a sum of cavity, spin, and interaction terms: Ĥ = + +.
In the lab frame, the cavity Hamiltonian is simply given by ^lab = ħĉ^†ĉ.
We find it helpful to move to a frame rotating at the frequency of the cavity pump laser, such that
= -ħĉ^†ĉ
= -ħn̂,
where ≡ - is the pump–cavity detuning.
Here, the cavity annihilation operator ĉ and the photon occupation operator n̂ include light of both right- and left-handed circular polarizations.
Although the left- and right-handed cavity modes interact differently with the atomic ensemble, their bare energies are approximately degenerate, and here they can be considered together as n̂ = n̂_+ + n̂_-.
For an ensemble of noninteracting atoms indexed i at positions r⃗_i and spin projection f_z^(i) along the direction of the magnetic field, the spin Hamiltonian is given by = ∑_i ħ(r⃗_i) f̂_z^(i).
Here, the local spin precession frequency is given by ħ(r⃗) = g_F |B⃗(r⃗)|, where g_F is the Landé g-factor and is the Bohr magneton.
For a localized ensemble of atoms, the magnetic field is approximately constant, such that this can be rewritten in terms of an average spin precession frequency and a total spin projection = ∑_i f̂_z^(i):
= ħ.
Generically, the interaction between the cavity and atom i is described by
^(i)
= ħ∑_g,e
g_g;e^+(r⃗_i) ĉ^†_+ σ̂_e;g^(i) δ_m+1, m'
+
g_g;e^-(r⃗_i) ĉ^†_- σ̂_e;g^(i) δ_m-1, m' +
h.c.
Here, the summation runs over all possible transitions from the ground-state
manifold g ≡ | f = 2, m ⟩ to the excited states e ≡ | f' = 3, m' ⟩, with polarization-dependent coupling strengths g_g;e^±.
When the cavity–atom detuning is large compared to the hyperfine splittings _f' in the excited (f' = 3) manifold that is being addressed, the excited states can be eliminated.
This approximation results in a spin-dependent dispersive interaction Hamiltonian, describing dynamics within the ground-state manifold:
^(i)
= ħ |U(r⃗_i)|^2 {α_0 ( n̂_+ + n̂_- ) +
α_1 ( n̂_+ - n̂_- ) f̂_k^(i)
+
α_2 [
( n̂_+ + n̂_- ) ( f̂_k^(i))^2 -
ĉ_- ĉ_+ ( f̂_+^(i))^2 -
ĉ_+ ĉ_- ( f̂_-^(i))^2
]
},
where |U(r⃗_i)|^2 is the local relative intensity of the cavity pump mode, where ĉ_± are the annihilation operators for left- and right-handed cavity modes, which are approximately degenerate in our system, and where f̂_k and f̂_± are the spin operators relative to a quantization axis along the cavity axis k̂.
Here, the scalar, vector, and tensor interactions between the spin and the cavity field are described by coupling coefficients (α_0, α_1, α_2) → (2/3, 1/6, 0) in the limit of large (coupling-coefficients).
In our system, the atomic ensemble is primarily localized within a single antinode of the cavity pump field, which allows the local cavity field U(r⃗_i) to be treated as approximately constant.
This leaves
= ħ{α_0 n̂ +
α_1 ( n̂_+ - n̂_- ) },
such that the total system Hamiltonian, in the limit || ≫ |_f'|, is given by
Ĥ
= -ħn̂ +
ħ
+
ħ{α_0 n̂ +
α_1 ( n̂_+ - n̂_- ) }.
When the cavity is pumped with only right-handed (σ_-) light, this reduces to hamiltonian-general.
When the cavity is pumped with only left-handed light, the sign of the cavity–spin interaction flips, and with it the sign of the gain of the feedback system.
§ B. NONDESTRUCTIVE MEASUREMENT OF THE COLLECTIVE ATOMIC SPIN STATE
When the externally applied magnetic field is parallel to the cavity axis (= 0), the system Hamiltonian (hamiltonian-general), corresponding to pumping the cavity with right-handed (σ_-) light commutes with the total spin energy since this becomes equivalent to the projection of the spins along the cavity axis:
Ĥ^- =
- ħ( - 2/3 + 1/6) ĉ^†ĉ
+ ħ,
where the superscript on Ĥ^- indicates that this Hamiltonian only considers the right-handed cavity mode.
If the dispersive shift to the cavity resonance condition is measured by comparing the resonance frequencies with and without the presence of atoms, it will be given by
^-
= - 2/3 + 1/6⟨⟩,
where the superscript on ^- indicates that this is the dispersive shift to the right-handed cavity mode.
If the atom number were known exactly, this measurement would be sufficient to determine the collective spin energy of the ensemble; however, variable atom loss between state preparation and readout makes this impractical.
By pumping the cavity with left-handed light, different information can be acquired.
Considering the case of an external field parallel to the cavity axis, the Hamiltonian can be derived which describes the left-handed (σ_+) cavity mode:
Ĥ^+ =
- ħ( - 2/3 - 1/6) ĉ^†ĉ
+ ħ,
which corresponds to a dispersive shift
^+
= - 2/3 - 1/6⟨⟩.
Using a pair of liquid crystal variable retarders (LCVRs) at the input and the output of the cavity, the polarization of the light pumping the cavity can be switched rapidly between left- and right-handed without otherwise affecting the detection chain.
By measuring the resonance frequencies of each of the polarizations with the atomic ensemble present in the cavity, and then repeating both measurements with the atoms absent, the total atom number and collective spin can be recovered (final-sweeps):
= -3/4^+ + ^-/;
⟨⟩ = -3 ^+ - ^-/.
The same effect can be achieved by changing the orientation of the magnetic field to = 180 between the first and second measurements of , such that = -, but this takes too long to be practical due to the self-inductance of the coils used to generate the field.
The effect can also be achieved by using an external rf field to drive a π-pulse on the collective spin, taking → - between measurements; this approach has been successfully used in the past, but its dependence on the calibration of the rf drive makes it less appealing than switching the polarization of the pump light.
§ C. QUANTUM MODEL OF A COLLECTIVE SPIN COUPLED TO AN OPTICAL CAVITY
The damping rate of the autonomous feedback system can be derived by considering how the system evolves in time.
Considering the Hamiltonian hamiltonian-feedback, and including terms accounting for pumping into and (non-Hermitian) leakage out of the cavity mode, the cavity field evolves according to
/ tĉ
= ( + ) ĉ
- κĉ
+ κη,
where η is the coherent-state amplitude of the field pumping the cavity.
Here, the field operator ĉ corresponds to the cavity field at frequency .
Without any coupling to the cavity, = 0, and field-evolution can be solved directly, giving
ĉ_0
= ηκ/κ - .
If the effects of the coupling between the cavity and the collective spin are small, the perturbation to the field can be approximated by
ĉ(t)
= ĉ_0 + ĉ'(t).
The collective spin, meanwhile, evolves according to
/ t = -F̂_y
+ ĉ^†ĉcos;
/ tF̂_y
=
+ ĉ^†ĉ( sin - cos);
/ t = - ĉ^†ĉsin.
For ≫ ||, as in our experiment, the transverse components admit solutions F̂_y ∝ F_⊥sin t.
For spins precessing at frequency at polar angle χ and azimuthal angle ϕ = t, the projection of the spin along the cavity axis looks like = F_⊥sincosϕ + cos, where F_⊥≡ F sinχ and F_z ≡ F cosχ.
Substituting this into field-evolution gives
/ tĉ'
= (
+ F_⊥sincos t
+ cos) ( ĉ_0 + ĉ' )
- κ( ĉ_0 + ĉ' )
+ κη.
To lowest order, it seems reasonable to expect a solution that looks like effective cavity drives at frequencies ± due to the modulation of the bare pump field ĉ_0 by the precessing spins.
This leads to the ansatz
ĉ'(t)
= ĉ_+ e^ t + ĉ_- e^- t.
Plugging this into field-equation-of-motion and ignoring quickly rotating terms ∼ e^± 2 t as well as terms of order [( / 2 κ) F_⊥sin]^2 gives
ĉ_0 = ηℒ( + F_z cos);
ĉ_+ = /2/κF_⊥sin ℒ( + F_z cos + )
ĉ_0;
ĉ_- = /2/κF_⊥sin ℒ ( + F_z cos - )
ĉ_0.
Here, ℒ() ≡κ / (κ - ) refers to a Lorentzian line at center frequency with width κ.
It is desirable to find the effect of the cavity field on the spin energy F_z.
This is given by spin-evolution, and depends on the instantaneous occupation number n̂≡ĉ^†ĉ of the cavity mode:
n̂ = n̂_0 +
( ĉ^†_+ ĉ_0 + ĉ_0^† ĉ_- ) e^- t +
( ĉ^†_- ĉ_0 + ĉ_0^† ĉ_+ ) e^ t
= n̂_0 -
/2/κF_⊥sinn̂_0
×[
ℒ(- - F_z cos + )
e^- t
-
ℒ ( + F_z cos + )
e^- t
+
ℒ (- - F_z cos - )
e^ t
-
ℒ( + F_z cos - )
e^ t]
Again, terms of order ( / κ)^2, corresponding to second-order sidebands, have been ignored.
Using the cycle-averages e^±ϕsinϕ = ± / 2, this gives the mean change in energy of the collective spin to be
/ t⟨F̂_z ⟩
= -1/2 F_⊥^2 sin^2 ^2/κ⟨ĉ_0^†ĉ_0 ⟩
×[
ℒ( + F_z cos + )
-
ℒ( + F_z cos - )
],
where the overline indicates time averaging over a Larmor precession cycle.
As expected, for < - F_z cos, the first Lorentzian term is larger, and F_z decreases; conversely, F_z increases for > - F_z cos.
The damping rate of the feedback system can be calculated as the exponential rate at which it approaches resonance in the limit F_z cos→ -.
In the unresolved sideband regime ≪κ, the asymmetry between the sidebands reduces to
ℒ( + F_z cos + )
- ℒ( + F_z cos - )
≈ -4 ( + F_z cos)/κ^2.
Using this approximation along with change-in-Fz and carrier-amplitude, and noting that η^2 = corresponds to the mean on-resonance photon occupation of the cavity, the damping rate of the system looks like
= - cos/ + F_z cos/ t⟨F̂_z ⟩
= - 2 F_⊥^2 sin^2 cos^3/κ^3,
where = κ^2 / (κ^2 + [ + F_z cos]^2) is the true cavity-filtered photon occupation of the cavity.
For the parameters used in our experiment, this amounts to a damping rate of = 1600.
Notably, however, these parameters do not fall well within the low-modulation regime used to approximate carrier-amplitude.
The inclusion of higher-order terms [( / 2 κ) F_⊥sin]^2 has the effect of reducing the carrier amplitude found in carrier-amplitude.
In particular, the full expression for the amplitude looks like
ĉ_0
= ηℒ̃(0)
×[
1 +
( F_⊥sin/2 κ)^2
ℒ̃(0)
[ ℒ̃() + ℒ̃(-) ]
]^-1,
where ℒ̃(ν) ≡ℒ( + F_z cos + ν) has been written for brevity.
For the parameters used in our experiment, this amounts to a correction factor of 0.7, resulting in a correction factor of 0.5 to and to the final damping rate: = 790.
The solutions given by ansatz-a and ansatz-b still ignore quickly rotating terms corresponding to higher-order sidebands; however, these effects are confirmed experimentally to be small.
In order to simulate the dynamics of the system, F_z ≡⟨F̂_z ⟩ can be treated as a c-number and gain-final can be used to propagate F_z forward in time.
In the absence of any dephasing, this treatment can be made complete by requiring that the total spin is conserved, F_z^2 + F_⊥^2 = 4 ^2.
Dephasing can be included heuristically by decreasing F_⊥ over time.
In practice, this decrease can take many functional forms, but a simple exponential decay captures much of the system dynamics.
Simulating the feedback process, then, amounts to propagating two coupled differential equations:
/ t F_z
= β(F_⊥) F_z;
/ t F_⊥
= -β(F_⊥) F_z^2/F_⊥ - Γ F_⊥.
The resulting values of F_z can be used as a model function for least-squares fitting, where Γ, as well as an overall offset to F_z which accounts for systematic measurement errors, are allowed to vary (s-curve-fits).
These fits are used to extract the sensitivities reported in dc-pullingd.
|
http://arxiv.org/abs/2307.04414v1 | 20230710084225 | Optical-power-dependent splitting of magnetic resonance in nitrogen-vacancy centers in diamond | [
"Shuji Ito",
"Moeta Tsukamoto",
"Kensuke Ogawa",
"Tokuyuki Teraji",
"Kento Sasaki",
"Kensuke Kobayashi"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"physics.app-ph"
] |
Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan
Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan
Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan
National Institute for Materials Science, Tsukuba, Ibaraki 305-0044, Japan
Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan
Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan
Institute for Physics of Intelligence, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
Trans-scale Quantum Science Institute, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
Nitrogen-vacancy (NV) centers in diamonds are a powerful tool for accurate magnetic field measurements.
The key is precisely estimating the field-dependent splitting width of the optically detected magnetic resonance (ODMR) spectra of the NV centers.
In this study, we investigate the optical power dependence of the ODMR spectra using NV ensembles in nanodiamonds (NDs) and a single-crystal bulk diamond.
We find that the splitting width decays exponentially and saturates as the optical power increases.
Comparison between NDs and a bulk sample shows that while the decay amplitude is sample-dependent, the optical power at which the decay saturates is almost sample-independent.
We propose that this unexpected phenomenon is an intrinsic property of the NV center arising from non-axisymmetric deformation or impurities.
Our finding indicates that diamonds with less deformation are advantageous for accurate magnetic field measurements.
Optical-power-dependent splitting of magnetic resonance in nitrogen-vacancy centers in diamond
Kensuke Kobayashi
Received / Accepted
==============================================================================================
§ INTRODUCTION
A nitrogen-vacancy (NV) center in a diamond is a defect where a nitrogen atom replaces a carbon atom in the lattice with a vacancy at its neighboring site.
The NV center has an electron spin S=1, and its peculiar spin-dependent optical transitions enable the optical initialization and readout of the ground-state spin.
This property has been applied to the quantum sensing of local magnetic fields <cit.> and temperature <cit.>.
Researchers have applied the technique to measure various physical properties, such as observing the electron flow in graphene <cit.> and the stray fields from magnetic domain walls of a single-crystal antiferromagnet Cr_2O_3 <cit.>.
The basis for these achievements is the ability to accurately measure local magnetic fields on the order of μT using NV centers.
Optically detected magnetic resonance (ODMR) is a typical and basic measurement technique for quantum sensing using NV centers.
This technique measures the microwave (MW) frequency dependence of the photoluminescence (PL) intensity (red) when the NV centers are continuously irradiated with an excitation light (green) and MW.
The ODMR spectrum presents a magnetic resonance signal between the ground state spin levels m_S=0 and m_S=±1.
The resonance frequency splits against the magnetic field due to the Zeeman effect <cit.> and shifts in the same direction against temperature change <cit.>.
In addition, the splitting of the resonance frequency is affected by crystal strain <cit.>, electric field <cit.>, and hyperfine interactions <cit.>.
Therefore, it is essential for accurate sensing to estimate the splitting width purely due to the magnetic field from the ODMR spectra.
Commonly used diamond samples are single-crystal bulk diamonds and nanodiamonds (NDs) with grain sizes ranging from tens to hundreds of nanometers <cit.>.
Depending on whether the diamond is a bulk crystal or nanoparticles, there are variations in crystal strains, impurity density, and crystal orientation.
The ODMR spectra of NV centers vary with the excitation light power.
For example, the contrast and linewidth vary with the degree of initialization and spin relaxation associated with optical excitation <cit.>.
These dependencies only affect sensitivity but not accuracy.
Recently, however, it was reported that the ODMR spectra of NV centers in NDs at low magnetic fields change with the optical power, degrading the accuracy of temperature measurements <cit.>.
They found that a change in the ODMR splitting up to 2.8 MHz (equivalent to Zeeman splitting for 50 μT) occurred depending on the optical power.
This unexpected observation directly affects the accuracy of the conversion of the ODMR splitting to magnetic field, which is a critical issue in achieving the μT-order magnetic field measurements necessary for the physical properties measurements.
In particular, in wide-field imaging of magnetic field and temperature using a CMOS camera and NV ensembles <cit.>, inhomogeneity of the optical power within the field of view could result in degradation of the measurement of the magnetic field and temperature distributions.
Thus, it is crucial to investigate the extent to which this phenomenon is universal for various samples, i.e., bulk diamonds as well as NDs.
In this study, we investigate the dependence of the ODMR splitting on the optical power using several NV ensemble samples.
We first investigate the NV ensembles in NDs with a grain size of 100 nm, the same size as in the previous study <cit.>.
We confirm the reported behavior of the ODMR splitting to decrease with increasing optical power.
In addition, we measure the ODMR spectra over a broader optical power range than in the previous study.
We thereby find the splitting decays exponentially with the optical power and saturates at a constant value.
We observe similar behavior in NDs with a different grain size of 50 nm.
We then investigate NV ensembles in a single-crystal bulk diamond with much fewer impurities and strain than NDs and find a weaker but similar behavior.
We prove the irrelevance of magnetic field and temperature on this observation and discuss possible mechanisms to account for this phenomenon.
Finally, we propose that repeated photoionization of impurities averages out the local non-axisymmetric environment of the NV centers, and we suggest a systematic way to deal with this phenomenon.
This paper is organized as follows.
Sec. <ref> describes the experimental setup and defines the optical power in this study.
Sec. <ref> reproduces the previous study <cit.> using NDs and confirms that the ODMR spectra change with optical power.
Sec. <ref> shows that a similar phenomenon occurs even in the single-crystal bulk diamond.
Sec. <ref> analyzes the dependence of the ODMR splitting on the optical power.
In Sec. <ref>, we discuss the influence of the magnetic field and temperature, possible mechanisms, and implications of the present finding.
Sec. <ref> presents our conclusions.
§ EXPERIMENTS
Figure 1(a) shows an overview of the experimental setup <cit.>.
All measurements in this study are performed in a confocal system at room temperature.
A green laser with a wavelength of 520 nm (Oxxius, LBX-520-70-CSB-PPA) is applied for initialization and readout of the NV centers.
The intensity of the green laser is adjusted using several fixed neutral density filters as appropriate.
The intensity of the red emission from the NV centers is detected by an avalanche photodiode (APD) after passing through a dichroic mirror, a 514 nm notch filter, a 650 nm long-pass filter, and an 800 nm short-pass filter.
When measuring NV centers in nanodiamonds, the red emission counts are suppressed with a fixed neutral density filter so that they stay within the APD measurement range.
For spin manipulation of the NV centers, we use a MW antenna: a grounded coplanar waveguide consisting of a 1.6 mm thick PCB substrate and an 18 μm thick copper foil with a 2 mm wide centerline, terminated with a 50 Ω resistor.
The antenna is impedance matched so that the MW power at the sample position has no frequency dependence during the measurement, which we confirm from the S11 parameter.
Microwaves are output from a vector signal generator at approximately -13 dBm and fed to the antenna through a MW amplifier (typ. +45 dB).
In all measurements in this paper, the microwave power is fixed at the above values.
We use three types of diamond samples, #1, #2, and #3, in the present study: NDs with nominal grain sizes of ϕ50 nm (#1) and of ϕ100 nm (#2), and NV ensemble in a bulk diamond film (#3).
The NDs are those commercially available from Adámas Nanotechnologies, NDNV50nmHi10ml for #1 and NDNV100nm10ml for #2.
In the measurements of #1 and #2, we prepare a ND film [see Fig. 1(b)], which consists of the NDs spin-coated on a cover glass at 600 rpm <cit.>.
The thickness of the ND film made by this method is typically about 200–1000 nm <cit.>.
The number of NDs in #1 and #2 within a laser irradiation area is estimated to be several hundred and more than 20, respectively.
The ND film is fixed to the antenna with carbon tape.
A surface of the ND film is at a height of 0.44 mm above the antenna.
In addition to NDs, this study investigates a bulk diamond film (#3).
It was synthesized using a custom-built microwave plasma chemical vapor deposition (MPCVD) system <cit.>.
High-pressure and high-temperature type-Ib (100) single crystalline diamond plates were used as substrates. ^12C concentrated (>99.95%) methane gas was used as a carbon source.
First, an undoped thick film with a total thickness of ∼70 μm was grown on the substrate by chemical vapor deposition (CVD).
A ^15N doped CVD layer was then overgrown on the undoped film with a gas ratio of ^15N/C of 4000 ppm.
An expected ^15N concentration is ∼10 ppm and a film thickness is ∼5 μm.
This nitrogen density is consistent with the NV's coherence T_2 = 29 μs obtained by Hahn echo <cit.>.
We fix #3 directly to the antenna with carbon tape for the measurement.
A surface of the bulk diamond film is at a height of 0.73 mm above the antenna.
In this study, NV centers spontaneously formed during the MPCVD process are used for characterization.
We perform the present study under three different magnetic fields: a zero field (A), an environmental field (B), and a biased field (C).
We apply magnetic fields for the conditions A and C.
We use two coils beside and beneath the sample stage to generate magnetic fields perpendicular and parallel to the optical axis, respectively, as shown in Fig. 1(a).
Using a tesla meter (Lake Shore Cryotronics F71), we evaluate the magnetic fields at the sample position as 6.3 μT, 88.7 μT, and 196.7 μT for the conditions A, B, and C, respectively.
The upper panel of Fig. 1(b) shows an optical microscope image of the spin-coated NDs ϕ50 nm (#1).
The lower panel shows the PL intensity map at the spot surrounded by a red frame in the upper panel.
The color bar indicates PL intensity in a unit of kilo counts per sec (kcps).
The data set for #1 is obtained using the standard ODMR measurement at the red circle.
As the dependence of the ODMR spectra on the optical power of the excitation light is the central topic in this study, it is important to calibrate the optical power (P_opt).
We evaluate P_opt from the green laser intensity and the irradiated area with an accuracy of 10 %.
The green laser intensity is measured between the objective lens and the diamond sample using an optical power meter (Thorlab, Power Meter PM100D, sensor S121C).
The irradiation area is estimated as the spot size of the red luminescence from a single NV center near the surface of a high quality bulk diamond provided by H. Watanabe in AIST, Japan <cit.>.
The spot size is calculated as a circle whose diameter is the full width at half maximum of the intensity distribution.
Figure 1(c) presents an example of the PL intensity map from a single NV center used to determine the spot size, where the diamond surface is defined as the xy-plane.
Ten PL intensity maps of a single NV center are fitted by the two-dimensional (2D) Gaussian function, and the obtained average of their full width at half-maximum, 386 ± 2 nm, is used as the laser spot diameter.
The cross sections of the experimental data (markers) and the 2D Gaussian fitting (solid line) are shown in the upper side and right side panels of Fig. 1(c).
Both panels show that the fits are consistent with the experimental data.
All the experimental conditions in this study are compiled in Table <ref>.
NDs ϕ100 nm #2' in Table <ref> indicates the data set obtained at a different location of the same sample as NDs ϕ100 nm #2.
The estimated densities of nitrogen, [N], and NV center, [NV], are also given in Table <ref>.
We include the previous study (Ref. <cit.>) in Table <ref> in the same cell as 2B as their measurements were carried out in an environmental geomagnetic field (∼50 μ T) using NDs ϕ100 nm supplied by Adámas Nanotechnologies.
§ RESULTS AND DISCUSSIONS
§.§ ODMR Spectra of Nanodiamond NVs
The upper panel of Fig. 2(a) is the ODMR spectrum as a function of the MW frequency obtained at P_opt=0.55 kW/cm^2 shown by markers.
This result is for 2A (see Table <ref>).
The vertical axis indicates the PL contrast, namely the normalized contrast of the PL intensities with and without MW.
In this measurement, the swept frequency range is 60 MHz.
The splitting between the dips in the ODMR spectrum is due to crystal strain and electric fields that break the axial symmetry of the NV centers. The impacts of such non-axisymmetric deformation were treated in Refs. <cit.>.
Below we call these factors as “deformation”.
We note that similar observations for the NDs ensemble were reported before, for example, in Fig. 1(d) of Ref. <cit.>. Their shapes are generally consistent with ours, while the splitting is slightly larger than that in the present study as they applied a magnetic field of 100 μ T.
Also, similar ODMR spectra obtained in a single ND were reported in Fig. 3(a) of Ref. <cit.>.
From now on, we quantify the splitting based on the values obtained by fitting with a double Lorentzian function.
This fitting method is meaningful because it is commonly used for magnetometry with NVs.
We will discuss the validity and limitations of this method later in Sec. <ref>.
The solid line in the upper panel of Fig. 2(a) is a fitted curve.
We define Δ as the difference between the two dip frequencies obtained by this fitting.
Δ is 11.5±0.2 MHz in this specific case, which is consistent with the literature values of 10–20 MHz for NDs <cit.>.
We measure the ODMR spectra by increasing P_opt from 0.55 kW/cm^2.
The lower panel of Fig. 2(a) shows the spectrum for 2A obtained at P_opt=38.4 kW/cm^2, which is the maximum optical power used in the present study.
We discuss later that the temperature increase due to laser heating is inconsequential within the present optical power range.
As in the upper panel, the markers show experimental data, and the solid curved line results from a double Lorentzian fitting.
The PL contrast decreases from 2.7% at P_opt=0.55 kW/cm^2 to 0.5% at P_opt=38.4 kW/cm^2 because the increase in the optical power enhances the spin initialization rate, i.e., the transition rate from m_S=±1 to m_S = 0.
The spectrum also possesses two dips, but careful inspection reveals a slight change in shape between the upper and lower panels.
The dashed and solid vertical lines show the dip positions obtained by the fitting at P_opt=0.55 kW/cm^2 and P_opt=38.4 kW/cm^2, respectively.
Δ is determined to be 9.4±0.3 MHz for P_opt=38.4 kW/cm^2.
Thus, Δ decreases with increasing P_opt.
Similar behavior was reported in Fig. 3(a) of Ref. <cit.>, suggesting that Δ of NVs in NDs actually depends on the optical power, which is usually not considered.
In our case, Δ changes by approximately 2.1 MHz between the two different P_opt.
Significantly, ignoring deformation, this variation corresponds to about 38 μT according to a magnetic field conversion widely used in the NV research field.
Therefore, this phenomenon can be relevant in applying NVs to magnetic field measurements.
The above finding is not an artifact caused by a double Lorentzian fitting.
To confirm this, Fig. 2(b) presents the ODMR spectra measured at P_opt=0.55, 2.12, 4.24, 8.21, 15.2, and 31.3 kW/cm^2, which are incrementally shifted from bottom to top.
The markers are the experimental data, where the spline interpolation curves are superposed by the solid lines.
Since the PL contrast varies depending on P_opt, we appropriately normalize the spectra to focus only on the shape.
The cross markers (+) point to the dip positions in the spline interpolation curves.
Their behavior again supports that the two dips become closer for a larger P_opt.
Although we do not show the data here, the results for condition 2'A and for the ϕ50 nm NDs (1A) are consistent with those of 2A; some of these results appear later in Figs. 4(d), 4(e), and 4(f).
§.§ ODMR Spectra of Bulk Diamond NVs
We next focus on the bulk diamond film #3 to investigate whether the optical power dependence observed in NDs is also present there.
The upper panel of Fig. 3 presents the ODMR spectrum for the condition 3A obtained at P_opt=0.55 kW/cm^2.
The horizontal axis range is 10 MHz, much smaller than that in Fig. 2(a).
The obtained spectrum shown by the markers has two sharp dips, as expected for the NVs in bulk diamonds.
As performed for the analysis of NDs, we fit the experimental data with a double Lorentzian function.
We estimate the splitting between the two dips to be Δ=3.55±0.02 MHz, a comparable value to the width of 3.03 MHz due to the hyperfine interaction in ^15N <cit.>.
Presumably, the deformation is much less than 1 MHz because it is buried in this hyperfine splitting.
Thus, the bulk diamond differs from NDs because the hyperfine interaction prevails over the deformation.
In addition, the resonance line width is significantly narrower than in the NDs.
This reflects that the density of impurities, such as nitrogen impurities (P1 centers), which cause the decoherence <cit.>, is low in #3.
Indeed, the typical nitrogen concentration of a type 1b diamond, the raw material of NDs, is about 100 ppm, whereas the single-crystal diamond in this study is about 10 ppm.
Now, we discuss the ODMR spectra at increased optical powers.
The lower panel in Fig. 3 shows the ODMR spectrum by the markers in the condition 3A obtained at P_opt=38.4 kW/cm^2.
The markers are experimental data, and the solid curved line results from a double Lorentzian function fitting.
As in the NDs, the decrease in contrast is due to the larger initialization rate at larger optical power.
In Fig. 3, the dashed and solid vertical lines indicate the dip positions obtained by the fitting at P_opt=0.55 kW/cm^2 and P_opt=38.4 kW/cm^2, respectively.
Δ is now 3.44±0.01 MHz, smaller than the value of 3.55±0.02 MHz obtained at the lowest optical power.
As in the NDs case, Δ becomes smaller in the larger optical power in the bulk diamond.
Interestingly, the optical power dependence is present even when the ^15N hyperfine interaction causes the splitting.
However, the reduction of Δ in the bulk diamond is much smaller than in NDs.
§.§ Analysis of Splitting
We systematically examine the dependence of Δ on P_opt.
We start with the condition 2A.
The upward triangle markers in Fig. 4(a) show the experimentally observed Δ as a function of P_opt between 0.55 kW/cm^2 and 38.4 kW/cm^2.
We already showed the results of Δ at the minimum (P_opt=0.55 kW/cm^2) and maximum (P_opt=38.4 kW/cm^2) optical powers in the upper and lower panels in Fig. 2(a), respectively.
Figure 4(a) clearly shows that Δ decays monotonically with increasing P_opt and saturates at P_opt≳ 15 kW/cm^2.
Previous study <cit.> reported a similar dependence of Δ on P_opt.
Their results are superposed in Fig. 4(a) by the markers (+).
Significantly, the decaying behavior is almost the same between their results and ours, while they did not reach the optical power to saturate Δ.
It is well established that the PL intensity from an NV center, which is determined by the relaxation rate peculiar to its optical process, saturates for a large P_opt <cit.>.
However, this saturation is irrelevant to the present observation: we perform the experiment with a laser intensity small enough that the PL intensity remains linear in P_opt.
Figure 4(c) confirms that the PL intensity from NDs in the condition 2A is proportional to P_opt.
Ref. <cit.> also worked in this sufficiently small optical power region.
An optical power dependence of the splitting in such a low-intensity regime is unexpected; our work quantitatively confirms the observation of Ref. <cit.> over a wider optical power range.
It was previously reported <cit.> that the linewidth of the ODMR spectrum of the NV ensemble decreases with increasing P_opt for an optical power as small as in the present study.
However, they did not mention a decrease in Δ of the ODMR spectra.
While we observe a systematic change in Δ, no systematic change in the linewidth is detected.
For more quantitative discussion, we analyze the behavior of 2A shown in Fig. 4(a) using the following exponential fit.
Δ(P_opt) = Aexp(-P_opt/P_0)+Δ_0,
where A, P_0, and Δ_0 are the amplitude, the saturation power, and the offset, respectively.
The dotted line in Fig. 4(a) is the result of this fitting.
A semi-log plot of only the first term of Eq. (<ref>) is shown in Fig. 4(b) with the same markers as Fig. 4(a).
The linear variation is consistent with the exponential function.
Unlike Fig. 4(a), Fig. 4(b) does not include the previous result <cit.> because no convergence value (offset Δ_0) is available.
We next examine the behavior of the bulk diamond film (condition 3A).
Figure 4(a) shows the P_opt dependence of Δ.
While the decrease of Δ is not as significant as in NDs (2A), the magnified view in the inset of Fig. 4(a) proves that an exponential decay of Δ is also present in the bulk diamond case.
Figure 4(b) depicts the decaying component extracted by the fitting to Eq. (<ref>), which looks very similar to the 2A case.
The fact suggests a common mechanism behind the present exponential decay of Δ in the NDs and the bulk diamond, even though different reasons cause the dip splitting.
We find similar behavior in all the measured conditions at zero fields (1A, 2A, 2'A, and 3A in Table I) and obtain the parameters A, P_0, and Δ_0.
Figure 4(d) shows the obtained amplitude A for the four conditions.
From left to right, the bars indicate the conditions 1A, 2A, 2'A, and 3A, and the vertical axis is expressed on a semi-log scale.
Comparing 1A, 2A, and 2'A, the A values are almost the same for NDs with different grain sizes.
On the other hand, the bulk diamond (3A) has an A value about one order of magnitude smaller than those of the NDs (roughly 1/20).
Figure 4(e) shows the saturation power P_0 for different conditions.
While the amplitude A significantly differs between NDs and the bulk diamond, there is relatively little difference in P_0 between the two; P_0 ∼ 3.8 kW/cm^2 for NDs and P_0 ∼ 7.4 kW/cm^2 for the bulk diamond.
Importantly, the values of P_0 are close for the different diamonds.
The offsets Δ_0 are shown in Fig. 4(f).
They reduce in the order of conditions 1A, 2A, 2'A, and 3A, which seems to coincide with the degree of deformation of NVs.
We intuitively expect that the smaller the crystal size is, the greater the deformation tends to be, affecting the sensitivity of the NVs to the optical power.
We come back to this fact later.
With the results and analysis explained so far, we have established that the ODMR spectra of NVs depend on the excitation light power even when the power is sufficiently small.
This phenomenon occurs in both NDs and the bulk diamond.
The amplitude of the decay (A) largely depends on the samples, but the behavior of exponentially decaying with the optical power characterized by P_0 seems an essential feature of NVs.
The quantitative establishment of the universality of this phenomenon is the main achievement of the present study.
The fact also means that the excitation light power can be relevant for accurate magnetic field measurements using NVs.
§.§ Possible Mechanisms
We are interested in the possible causes of the observed optical power dependence.
The zero-field splitting (ZFS), the coupling between the NV spin and the magnetic field, and the deformation are the most critical factors in defining the energy structure of an NV center in the ground state <cit.>.
The hyperfine interaction between the NV spin and the neighboring nuclear spins is also often relevant.
Therefore, it is essential as a starting point to investigate whether the present phenomenon is related to these four factors.
This section will examine them individually and then explore other possibilities.
We start with the ZFS, which might be subject to the optical power through the heating by the laser.
We define the ZFS as the average of the frequencies of the two dips obtained by a double Lorentzian fit.
Around room temperature at zero magnetic fields, the ZFS in the ODMR spectrum decreases linearly with increasing temperature <cit.>.
The dependences of ZFS on the optical power in the conditions 1A, 2A, and 3A are shown in Figs. 5(a), (b), and (c), respectively.
The figures indicate no signal of systematic change in ZFS due to the optical power.
Indeed, the variation of ZFS is much smaller than the amplitude A in Fig. 4(d).
Thus, heating by laser irradiation is not responsible for the present optical power dependence.
We estimate the maximum temperature change in this experiment to be about 12 K since the maximum frequency shift observed is approximately 850 kHz, as shown in Fig. 5(a).
Next, we discuss the influence of the magnetic field.
The upper and lower panels of Fig. 6 show the ODMR spectra in conditions 2A (zero magnetic field) and 2C (biased magnetic field of 196.7 μ T), respectively [the spectrum shown in the upper is the same as that in the upper panel in Fig. 2(a)].
Both are obtained with the minimum optical power (P_opt=0.55 kW/cm^2).
The markers are experimental data, and the solid curved lines are fitted by a double Lorentzian function.
The dashed and solid vertical lines show the dip positions obtained by the fit for 2A and 2C, respectively.
As expected from the Zeeman effect, the solid vertical lines are outside the two dashed lines, confirming that Δ increases in the magnetic field.
We obtain the spectra for the conditions 2A, 2B, and 2C as P_opt is modulated.
The acquired behaviors of Δ are plotted as a function of P_opt in the inset of Fig. 7(a).
Due to the Zeeman effect, Δ vertically shifts from 2A to 2B to 2C.
Importantly, there is no significant variation in the spectral shapes of 2A, 2B, and 2C except for this vertical shift.
We obtain the offset Δ_0 by the fitting to Eq. (<ref>) and plot Δ-Δ_0 against P_opt in the main panel of Fig. 7(a).
The behavior of 2A, 2B, and 2C are superposed on each other almost perfectly.
We plot the amplitude A, the saturation power P_0, and the offset Δ_0 for each field obtained by the fitting in Figs. 7(b), (c), and (d), respectively.
Δ_0 increases with increasing magnetic field [Fig. 7(d)], reflecting the Zeeman effect, although further quantitative analysis is complicated in this magnetic field region due to the considerable influence of deformation in NDs <cit.>.
On the other hand, A and P_0 do not change significantly as shown in Figs. 7(b) and (c), respectively.
Thus, in our examined regime, there is no visible correlation between the optical power dependence and the magnetic field.
Third, we consider the hyperfine interaction. The optical power dependence in the bulk diamond NVs is minimal, only about 1/20 of that in the nanodiamond NVs [see Figs. 4(a) and 4(d)].
However, the contribution of the hyperfine interaction to Δ is reasonably assumed to be almost similar in the two types of diamonds.
Therefore, if the hyperfine interaction was responsible for the present phenomenon, it would be difficult to explain the marked difference between both.
Consequently, we can conclude that the hyperfine interaction is not the leading cause of this phenomenon.
As the final factor, we examine the deformation.
In NDs, the deformation is about 10 MHz [Figs. 2(a) and 4(a)], while the value is well below 1 MHz in the bulk diamond, as discussed in Sec. IIIB.
Now, the amplitude A to characterize the optical power dependence is ∼ 2 MHz for NDs and ∼ 0.1 MHz for the bulk diamond [Fig. 4(d)].
For the former, the ratio of A to the deformation is about 2/10 = 0.2.
For the latter, the ratio is at least 0.1/1 = 0.1 and is comparable to the NDs' case.
The ratio of ND to bulk diamond deformation also corresponds to the ratio of nitrogen impurity density [see Table <ref>].
This suggests that either the deformation/impurity itself or the impurity-derived deformation would be responsible for this phenomenon.
Although this argument is not fully quantitative, it suggests a correlation between the deformation/impurity and the optical power dependence.
We now infer a plausible mechanism based on deformation caused by impurities.
Previous work on single NV centers indicated that the electric field from charge traps causes deformation <cit.>.
This might also be the cause of the deformations in the NV ensembles studied here.
If the charge traps originate from impurities, the magnitude of the deformation will correlate with the impurity density, consistent with our observations.
It is known that the charge state of impurities changes with photoionization.
For example, as the optical power is increased, the time that the NV center retains its charge state decreases exponentially on the millisecond scale <cit.>.
As this charge generated by photoionization moves around, the electric field would be time-averaged, suppressing deformation.
The relationship between the ionization rate at thermal equilibrium and the photoionization rate determines the coefficient of the exponential change.
When the optical power is sufficiently large, the electric field and crystal strain, which cannot be averaged, remain as a finite deformation.
Ref. <cit.> also noted that deformation due to charge can change the shape of the ODMR spectrum to a non-Lorentzian distribution.
This is consistent with the fact that the ODMR spectrum deviates from the double Lorentzian fitting, and its shape changes with optical power [see Figs. 2(a) and (b)].
Investigating both the dip position and its shape will help to elucidate the mechanism.
We note that further experimental and theoretical efforts are needed because many parameters could be involved in the mechanism.
On the experimental side, comparing bulk samples with systematically varied impurities and deformations, and investigating this optical power-dependent splitting in a single NV center with charge-induced deformation <cit.>, would be helpful.
For bulk samples, the magnetic field can be swept over a range that is sufficiently wide compared to the deformation; this will clarify which parameters of the ground-state Hamiltonian appear to depend on optical power.
Pulsed ODMR <cit.> will provide information on the time the effect of the laser irradiation remains, which can be used to validate the mechanism.
On the theoretical side, it is helpful to investigate what fitting function is appropriate to reproduce the ODMR spectral shape and what defects are candidates for photoionization.
§ CONCLUSION
We investigate the optical power dependence of the splitting of the ODMR spectra using various NV ensemble samples.
In addition to reproducing the previous study using NDs <cit.>, we find that the optical power dependence saturates at a larger optical power than in their study.
Since we also observe the same phenomenon in the single-crystal diamond, which has far fewer impurities and much less non-axisymmetric deformation than NDs, we attribute our observation to the NV center's intrinsic nature.
We quantitatively discuss the parameters that could be responsible for this phenomenon and infer that deformation is an important parameter.
We also point out that slow dynamics in the optical excitation and emission processes of single NV centers may be responsible.
The present optical power dependence can be critical in accurate magnetometry using NVs.
This effect may degrade the accuracy of magnetometry using NDs by a few tens of μT.
Even when using high-quality bulk diamonds, we must be careful when discussing magnetic fields of a few μT near zero field.
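For scale, a rough estimate (our assumption here, not a calculation given in the text) follows from mapping the splitting onto an axial magnetic field via Δ ≃ 2γ_e B with γ_e ≈ 28 MHz/mT, so that the ND amplitude A ∼ 2 MHz corresponds to

\[
  \delta B \;\approx\; \frac{\delta\Delta}{2\gamma_e}
  \;\approx\; \frac{2\ \mathrm{MHz}}{2 \times 28\ \mathrm{MHz/mT}}
  \;\approx\; 36\ \mu\mathrm{T},
\]

while the bulk-diamond amplitude A ∼ 0.1 MHz corresponds to roughly 2 μT, consistent with the figures quoted above.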
We can minimize this degradation by applying a strong optical power, based on the phenomenological exponential behavior discussed here.
We also suggest that using diamonds with fewer impurities and less deformation can reduce the influence on accurate magnetic field measurements.
Further experimental verification and theoretical discussion on deformation, impurity densities, and a comprehensive range of magnetic fields will help to identify the mechanism of this phenomenon.
§ ACKNOWLEDGEMENTS
We thank K. M. Itoh for letting us use the confocal microscope system, and H. Watanabe for his high quality diamond, which we used in the estimation of the spatial resolution of our system [Fig. 1(c)].
We appreciate the fruitful discussion with J. Inoue.
We also thank MEXT-Nanotechnology Platform Program “Microstructure Analysis Platform" for technical support.
K.S. acknowledges the support of Grants-in-Aid for Scientific Research No. JP22K03524.
K.K. acknowledges the support of Grants-in-Aid for Scientific Research (Nos. JP23H01103, JP19H00656, and JP19H05826).
T.T. acknowledges the support of MEXT Q-LEAP (JPMXS0118068379), JST CREST (JPMJCR1773), JST Moonshot R&D (JPMJMS2062), MIC R&D for construction of a global quantum cryptography network (JPMI00316), JSPS KAKENHI (Nos. JP20H02187 and JP20H05661).
§ REFERENCES

[MazeNature2008] J. R. Maze, P. L. Stanwix, J. S. Hodges, S. Hong, J. M. Taylor, P. Cappellaro, L. Jiang, M. V. G. Dutt, E. Togan, A. S. Zibrov, A. Yacoby, R. L. Walsworth, and M. D. Lukin, Nature 455, 644 (2008). https://doi.org/10.1038/nature07279
[DegenAPL2008] C. L. Degen, Applied Physics Letters 92, 243111 (2008). https://doi.org/10.1063/1.2943282
[BalasubramanianNature2008] G. Balasubramanian, I. Y. Chan, R. Kolesov, M. Al-Hmoud, J. Tisler, C. Shin, C. Kim, A. Wojcik, P. R. Hemmer, A. Krueger, T. Hanke, A. Leitenstorfer, R. Bratschitsch, F. Jelezko, and J. Wrachtrup, Nature 455, 648 (2008). https://doi.org/10.1038/nature07278
[Taylor2008] J. M. Taylor, P. Cappellaro, L. Childress, L. Jiang, D. Budker, P. R. Hemmer, A. Yacoby, R. Walsworth, and M. D. Lukin, Nature Physics 4, 810 (2008). https://doi.org/10.1038/nphys1075
[SchirhaglARPC2014] R. Schirhagl, K. Chang, M. Loretz, and C. L. Degen, Annual Review of Physical Chemistry 65, 83 (2014). https://doi.org/10.1146/annurev-physchem-040513-103659
[Rondin2014] L. Rondin, J.-P. Tetienne, T. Hingant, J.-F. Roch, P. Maletinsky, and V. Jacques, Reports on Progress in Physics 77, 056503 (2014). https://doi.org/10.1088/0034-4885/77/5/056503
[Levine2019] E. V. Levine, M. J. Turner, P. Kehayias, C. A. Hart, N. Langellier, R. Trubko, D. R. Glenn, R. R. Fu, and R. L. Walsworth, Nanophotonics 8, 1945 (2019). https://doi.org/10.1515/nanoph-2019-0209
[Barry2020] J. F. Barry, J. M. Schloss, E. Bauch, M. J. Turner, C. A. Hart, L. M. Pham, and R. L. Walsworth, Reviews of Modern Physics 92, 015004 (2020). https://doi.org/10.1103/revmodphys.92.015004
[AcostaPRL2010] V. M. Acosta, E. Bauch, M. P. Ledbetter, A. Waxman, L.-S. Bouchard, and D. Budker, Physical Review Letters 104, 070801 (2010).
[NeumannNL2013] P. Neumann, I. Jakobi, F. Dolde, C. Burk, R. Reuter, G. Waldherr, J. Honert, T. Wolf, A. Brunner, and J. H. Shim, Nano Letters 13, 2738 (2013).
[ToyliPNAS2013] D. M. Toyli, F. Charles, D. J. Christle, V. V. Dobrovitski, and D. D. Awschalom, Proceedings of the National Academy of Sciences 110, 8417 (2013).
[TetienneSciAdv2017] J.-P. Tetienne, N. Dontschuk, D. A. Broadway, A. Stacey, D. A. Simpson, and L. C. L. Hollenberg, Science Advances 3, e1602429 (2017). https://doi.org/10.1126/sciadv.1602429
[ku2020] M. J. H. Ku, T. X. Zhou, Q. Li, Y. J. Shin, J. K. Shi, C. Burch, L. E. Anderson, A. T. Pierce, Y. Xie, A. Hamo, U. Vool, H. Zhang, F. Casola, T. Taniguchi, K. Watanabe, M. M. Fogler, P. Kim, A. Yacoby, and R. L. Walsworth, Nature 583, 537 (2020). https://doi.org/10.1038/s41586-020-2507-2
[hedrich2021] N. Hedrich, K. Wagner, O. V. Pylypovskyi, B. J. Shields, T. Kosub, D. D. Sheka, D. Makarov, and P. Maletinsky, Nature Physics 17, 659 (2021). https://doi.org/10.1038/s41567-021-01205-3
[FoyAPMI2020] C. Foy, L. Zhang, M. E. Trusheim, K. R. Bagnall, M. Walsh, E. N. Wang, and D. R. Englund, ACS Appl. Mater. Interfaces 12, 26525 (2020). https://doi.org/10.1021/acsami.0c01545
[VanOort1990] E. V. Oort and M. Glasbeek, Chemical Physics Letters 168, 529 (1990). https://doi.org/10.1016/0009-2614(90)85665-y
[Dolde2011] F. Dolde, H. Fedder, M. W. Doherty, T. Nöbauer, F. Rempp, G. Balasubramanian, T. Wolf, F. Reinhard, L. C. L. Hollenberg, F. Jelezko, and J. Wrachtrup, Nature Physics 7, 459 (2011). https://doi.org/10.1038/nphys1969
[felton2009] S. Felton, A. M. Edmonds, M. E. Newton, P. M. Martineau, D. Fisher, D. J. Twitchen, and J. M. Baker, Physical Review B 79, 075203 (2009). https://doi.org/10.1103/PhysRevB.79.075203
[igarashi2012] R. Igarashi, Y. Yoshinari, H. Yokota, T. Sugi, F. Sugihara, K. Ikeda, H. Sumiya, S. Tsuji, I. Mori, H. Tochio, Y. Harada, and M. Shirakawa, Nano Letters 12, 5726 (2012).
[fu2007] C.-C. Fu, H.-Y. Lee, K. Chen, T.-S. Lim, H.-Y. Wu, P.-K. Lin, P.-K. Wei, P.-H. Tsao, H.-C. Chang, and W. Fann, Proceedings of the National Academy of Sciences 104, 727 (2007).
[dreau2011] A. Dréau, M. Lesik, L. Rondin, P. Spinicelli, O. Arcizet, J.-F. Roch, and V. Jacques, Physical Review B 84, 195204 (2011). https://doi.org/10.1103/PhysRevB.84.195204
[acosta2013] K. Jensen, V. M. Acosta, A. Jarmola, and D. Budker, Physical Review B 87, 014115 (2013). https://doi.org/10.1103/PhysRevB.87.014115
[fujiwara2020] M. Fujiwara, A. Dohms, K. Suto, Y. Nishimura, K. Oshimi, Y. Teki, K. Cai, O. Benson, and Y. Shikano, Physical Review Research 2, 043415 (2020). https://doi.org/10.1103/PhysRevResearch.2.043415
[ScholtenJAP2021] S. C. Scholten, A. J. Healey, I. O. Robertson, G. J. Abrahams, D. A. Broadway, and J.-P. Tetienne, Journal of Applied Physics 130, 150902 (2021). https://doi.org/10.1063/5.0066733
[TsukamotoAPL2021] M. Tsukamoto, K. Ogawa, H. Ozawa, T. Iwasaki, M. Hatano, K. Sasaki, and K. Kobayashi, Applied Physics Letters 118, 264002 (2021). https://doi.org/10.1063/5.0054809
[Tsukamoto2022] M. Tsukamoto, S. Ito, K. Ogawa, Y. Ashida, K. Sasaki, and K. Kobayashi, Scientific Reports 12, 13942 (2022). https://doi.org/10.1038/s41598-022-18115-w
[misonou2020] D. Misonou, K. Sasaki, S. Ishizu, Y. Monnai, K. M. Itoh, and E. Abe, AIP Advances 10, 025206 (2020). https://doi.org/10.1063/1.5128716
[OgawaJPSJ2023] K. Ogawa, M. Tsukamoto, K. Sasaki, and K. Kobayashi, Journal of the Physical Society of Japan 92, 014002 (2023). https://doi.org/10.7566/JPSJ.92.014002
[TerajiPSSA2015] T. Teraji, T. Yamamoto, K. Watanabe, Y. Koide, J. Isoya, S. Onoda, T. Ohshima, L. J. Rogers, F. Jelezko, P. Neumann, J. Wrachtrup, and S. Koizumi, physica status solidi (a) 212, 2365 (2015).
[Bauch2020] E. Bauch, S. Singh, J. Lee, C. A. Hart, J. M. Schloss, M. J. Turner, J. F. Barry, L. M. Pham, N. Bar-Gill, S. F. Yelin, and R. L. Walsworth, Physical Review B 102, 134210 (2020). https://doi.org/10.1103/physrevb.102.134210
[ohashi2013] K. Ohashi, T. Rosskopf, H. Watanabe, M. Loretz, Y. Tao, R. Hauert, S. Tomizawa, T. Ishikawa, J. Ishi-Hayase, S. Shikata, C. L. Degen, and K. M. Itoh, Nano Letters 13, 4733 (2013).
[JelezkoPSS2006] F. Jelezko and J. Wrachtrup, physica status solidi (a) 203, 3207 (2006). https://doi.org/10.1002/pssa.200671403
[Mittiga2018] T. Mittiga, S. Hsieh, C. Zu, B. Kobrin, F. Machado, P. Bhattacharyya, N. Z. Rui, A. Jarmola, S. Choi, D. Budker, and N. Y. Yao, Physical Review Letters 121, 246402 (2018). https://doi.org/10.1103/physrevlett.121.246402
[Aslam2013] N. Aslam, G. Waldherr, P. Neumann, F. Jelezko, and J. Wrachtrup, New Journal of Physics 15, 013064 (2013). https://doi.org/10.1088/1367-2630/15/1/013064
|
http://arxiv.org/abs/2307.07522v1 | 20230709211656 | The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence | [
"Hector Zenil",
"Jesper Tegnér",
"Felipe S. Abrahão",
"Alexander Lavin",
"Vipin Kumar",
"Jeremy G. Frey",
"Adrian Weller",
"Larisa Soldatova",
"Alan R. Bundy",
"Nicholas R. Jennings",
"Koichi Takahashi",
"Lawrence Hunter",
"Saso Dzeroski",
"Andrew Briggs",
"Frederick D. Gregory",
"Carla P. Gomes",
"Christopher K. I. Williams",
"Jon Rowe",
"James Evans",
"Hiroaki Kitano",
"Joshua B. Tenenbaum",
"Ross King"
] | cs.AI | [
"cs.AI",
"cs.LG"
] |
The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence
Hector Zenil,^1,2,3,4,∗ Jesper Tegnér,^21,28 Felipe S. Abrahão,^4,8,27,
Alexander Lavin,^19,20 Vipin Kumar,^6 Jeremy G. Frey,^7 Adrian Weller,^1,2
Larisa Soldatova,^9 Alan R. Bundy,^5
Nicholas R. Jennings,^10 Koichi Takahashi,^11,12,13
Lawrence Hunter,^14 Saso Dzeroski,^15
Andrew Briggs,^16 Frederick D. Gregory,^17
Carla P. Gomes,^18
Christopher K. I. Williams,^1,5 Jon Rowe,^1,22 James Evans,^23
Hiroaki Kitano,^1,24 Joshua B. Tenenbaum,^25 Ross King^1,2,26
^1The Alan Turing Institute
^2Department of Chemical Engineering and Biotechnology, University of Cambridge
^3Oxford Immune Algorithmics
^4Algorithmic Nature Group, LABORES for the Natural and Digital Sciences
^5School of Informatics, University of Edinburgh
^6Department of Computer, Science and Engineering, University of Minnesota
^7Department of Chemistry, University of Southampton
^8Centre for Logic, Epistemology and the History of Science, University of Campinas, Brazil.
^9Department of Computing, Goldsmiths, University of London
^10Vice-Chancellor's Office, Loughborough University
^11RIKEN Center for Biosystems Dynamics Research,
^12RIKEN Innovation Design Office
^13Keio University
^14Center for Computational Pharmacology, School of Medicine, University of Colorado
^15Department of Knowledge Technologies, Jozef Stefan Institute
^16Department of Materials, University of Oxford
^17DEVCOM ARL Army Research Office
^18Department of Computer Science, Cornell University
^19Pasteur Labs
^20Institute for Simulation Intelligence
^21Living Systems Laboratory, BESE, CEMSE, King Abdullah University of Sciences and Technology
^22School of Computer Science, University of Birmingham
^23Knowledge Lab, University of Chicago
^24The Systems Biology Institute, Okinawa Institute of Science and Technology
^25Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
^26Chalmers Institute of Technology
^27DEXL, National Laboratory for Scientific Computing, Brazil.
^28Department of Medicine, Karolinska Institutet, Stockholm, Sweden.
^∗To whom correspondence should be addressed; E-mail: [email protected]
==========================================
Recent advances in machine learning and AI, including Generative AI and LLMs, are disrupting technological innovation, product development, and society as a whole. AI's contribution to technology can come from multiple approaches that require access to large training data sets and clear performance evaluation criteria, ranging from pattern recognition and classification to generative models. Yet, AI has contributed less to fundamental science in part because large sets of high-quality data for scientific practice and model discovery are more difficult to access. Generative AI, in general, and Large Language Models in particular, may represent an opportunity to augment and accelerate the scientific discovery of fundamental deep science with quantitative models. Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery, including self-driven hypothesis generation and open-ended autonomous exploration of the hypothesis space. Integrating AI-driven automation into the practice of science would mitigate current problems, including the replication of findings, systematic production of data, and ultimately democratisation of the scientific process. Realising these possibilities requires a vision for augmented AI coupled with a diversity of AI approaches able to deal with fundamental aspects of causality analysis and model discovery while enabling unbiased search across the space of putative explanations. These advances hold the promise to unleash AI's potential for searching and discovering the fundamental structure of our world beyond what human scientists have been able to achieve. Such a vision would push the boundaries of new fundamental science rather than merely automate current workflows, and instead open doors for technological innovation to tackle some of the greatest challenges facing humanity today.
§ INTRODUCTION
With the scientific revolution in the seventeenth century, the notion of mathematical modeling using equations became the efficient language of choice to understand and predict events in the natural world. Four hundred years later, we have vast amounts of data and increasing access to computational power. Recently, we have witnessed ever more comprehensive applications of machine learning that accelerate science in unprecedented ways, along with many open questions about how to quantify the resulting speed-up of discovery (Fig. <ref>). One consequence of this increase in scientific production, enabled by the digital revolution, is that it is becoming more and more challenging for individual scientists to keep abreast of their fields and digest the relevant literature.
To advance science and perform end-to-end high-quality scientific investigations, scientists require one or two orders of magnitude more hypothesis-led experiments than are currently humanly possible.
Laboratories are under pressure to perform an increasing number of experiments needed to replicate results. This makes collaborations more challenging, given all the associated overheads, in particular for interdisciplinary research. It may be that science is becoming too difficult for humans to continue to perform by themselves and will need AI in the driving seat of knowledge discovery to continue the endeavour of human science.
To conceptualise how AI can augment science, we distinguish the following levels. First, AI and machine learning can operate as extractors of information. This includes text mining of scientific literature to find relevant papers and, in the best case, extract knowledge and synthesise a body of research from vast sources. A more “modern” use of AI has made existing workflows or procedures more efficient, for example faster and more automatic. This includes augmented computations and simulations in physics. Another route is to make an analysis workflow more automatic by constructing a loss function that incorporates several parameter-dependent steps into a single (complex) optimisation problem. The first version of AlphaFold is one example. Yet, at these two levels, AI primarily supports and augments current scientific practice. A third level is where AI could potentially discover something novel by learning not only a useful but a “true” representation of a given process in nature. For example, a useful compressed latent representation could be learned by training an AI system on data. Alternatively, the scientist could impose soft priors such that certain symmetries and invariants exist in the problem, thus forcing the AI system to discover interpretable structures in the physical process. For example, recent work on flow-related problems in physics and chemistry, using physics-inspired neural networks (PINNs), demonstrated the feasibility of AI for science beyond a black-box model <cit.>. This line of work is similar to the “classical” model-based analysis of nature initiated by the scientific revolution. Here in this review, we focus on the prospect not only of augmenting science by finding useful, interpretable representations, but of finding representations that lead to new scientific discoveries by involving AI in what we refer to as a closed-loop science-AI iterative cycle.
Here we suggest a next level of applying AI to science and leading science by developing such closed-loop science: AI systems integrated with laboratory automation to execute cycles of planned experiments <cit.>. Such systems fully automate simple forms of scientific research
and can facilitate collaboration across disciplines and between partners - humans or AI systems. AI can speed up the scientific discovery process and has the potential to advance AI itself in areas relevant to fundamental science, such as causal discovery, automation of experimental science, and the expansion of scientific knowledge. One current bottleneck is knowledge representation which, by nature, is biased toward the limited understanding of the human (scientific) mind. Here we can either use soft priors exploiting deep structures in the scientific area of interest or, better, actually develop AI systems that will discover effective interpretable representations of the scientific area in question. This is another inspiring direction in which science may move, and scientists will have to decide (if they have the option) to let machines build their internal languages and representations to do their science. This will not happen all of a sudden,it is most likely happening already, even if such language is in some way rudimentary. Here we refer to them as black boxes in areas such as deep learning, where scientists often struggle to make sense of it and increasingly use AI tools to guide and decipher AI results.
Applications of AI have so far been successful, but their success has been largely limited to industrial applications, problems of classification, and data science. A recipe for success has been to pick a problem that has a well-defined metric for performance. The problems should preferentially have a history of increasingly successful attempts to solve them. Examples include board games (chess, Go) and bioinformatics workflows (transcription factor binding, protein folding, antibiotics). Yet, despite being impressive, the success of these examples hinges upon clever search algorithms and efficient implementation of end-to-end workflows. In the end, no new fundamental laws of nature are being discovered. Furthermore, as a reflection of the lack of fundamental laws, the success of these AI systems remains challenging to disentangle. To advance beyond this state of affairs, we argue that we need AI systems that can discover new representations and generative laws of the scientific problem at hand. The notion of an iterative closed-loop discovery scheme constitutes one putative path forward. Yet, only a limited number of successful examples have closed the full loop of scientific discovery <cit.>.
Human scientists today need to think about how to create AI systems that can partner with scientists and take on responsibilities over the complete arc of scientific discovery: from observation and intervention, to hypothesis generation from a domain knowledge base, to conducting experiments, evaluating results, and rejecting or validating the assumptions, to integrating them into the current knowledge base and filing them with the relevant existing literature <cit.>.
Thus, the question is how to make substantial and meaningful advances in AI to enable us to go even further in accelerating science, hitherto driven exclusively by humans,
to not only rapidly expand human knowledge and improve the impact of scientific practice, but also to increase its reliability, availability, reproducibility, verifiability, transparency, and trustworthiness as the processes involved in scientific discovery become more automated.
In Fig. <ref>, we propose some quantitative measures that will not apply to all cases but rather to instances where a combination of AI and human approaches can further accelerate science. Nevertheless, the expectation is that AI will provide a real gain on most fronts and in most domains.
§ AI IN SCIENTIFIC DISCOVERY
§.§ Challenges
Humans are traditionally biased and prone to well-known cognitive fallacies, to which science is hardly a stranger <cit.>. One common and increasingly discussed issue is reproducibility across all domains <cit.>.
Humans are ill-equipped to deal with the repetitive tasks that reproducibility entails, and there are all sorts of inducements for consciously or unconsciously making dubious moves, particularly when it comes to the game of funding and high-impact publishing <cit.>.
Confirmation bias, fake rigour, omission of prior assumptions and hypotheses, ad hoc methodologies, cherry-picked experimentation, selective data, hype and overstatement of results, network community effects, “rich-get-richer” phenomena widening the inequality gap in science, and poor reporting are examples <cit.>.
We used to think that science was entirely objective, but history has taught us that it is also driven by community choices and groups, where it becomes clear that political and social preferences and underlying cognitive biases can interfere with scientific progress <cit.>.
All these problems are leading to a crisis impacting scientific networks, putting collaborative networks at a disadvantage and favouring competitive ones, and often compromising the very principles of scientific practice.
Closed-loop-AI-led science has the potential to mitigate all these problems because it can bootstrap itself with the right mechanisms to detach itself from human-led science and its own biases, even if human scientists initially transfer them. Furthermore, this leaves scientists with the task of initially guiding the AI as to the type of meaningful research that should be conducted, but then letting it explore regions of the scientific space that may never be reachable by human scientists, while retaining the option to keep what human scientists believe is of greatest interest and letting the closed-loop-AI system continue with less human-relevant content, searching for novelty in terms of what is potentially interesting to pursue. That is, to have AI bootstrap itself out of and above the AI-science loop without human guidance.
One challenge in this direction is that automation can easily fall into the over-fitting trap without human input, and mechanisms to avoid this have to be in place. However, it has been found that simplicity and randomness are powerful mechanisms for avoiding local minima and maxima when iterating over search algorithms <cit.>.
A striking feature of supervised machine learning is its propensity for over-parametrisation <cit.>. Deep networks contain millions of parameters, often exceeding the number of data points by orders of magnitude, so the model often starts to over-fit right from the beginning <cit.>.
Broadly speaking, networks are designed to interpolate the data, learning/constructing an associated manifold by driving the training error to zero.
Deep neural networks in particular are widely regarded as black-box approaches, ill-equipped to offer explanations of the produced classification models despite their often superhuman ability <cit.>. One strategy that has enabled researchers to make progress in understanding the workings and limitations of deep learning is the use of what have been called `generative models' <cit.>. This involves training adversarial algorithms represented by neural networks that systematically tamper with data while asking them to generate novel examples <cit.>. By observing the resulting examples and how the classifier fails, researchers can understand the model's limitations and improve the classifier.
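The following toy sketch conveys the flavour of probing a classifier by systematically tampering with its inputs; it uses a single fast-gradient-sign step on a hand-written logistic-regression model, which is a much simpler stand-in for the generative adversarial setups cited above, and all weights and inputs are hypothetical:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tamper(x, y, w, b, eps=0.5):
    # One fast-gradient-sign step: move x in the direction that increases the loss.
    p = sigmoid(x @ w + b)            # model confidence for class 1
    grad_x = (p - y) * w              # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # systematically tampered input

w, b = np.array([2.0, -1.0]), 0.5     # hypothetical trained weights
x, y = np.array([0.3, -0.2]), 1.0     # an example the model classifies correctly
x_adv = tamper(x, y, w, b)
print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))  # confidence before vs. after tampering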
However, current approaches in science (see Fig. <ref>), including most machine and deep learning methods, rely heavily on traditional statistics and information theory. Consequently, such models are insufficient to capture certain fundamental properties of data and the world related to recursive and computable phenomena, and they are ill-equipped to deal with high-level functions such as inference, abstraction, modelling, and causation, being fragile and easily deceived <cit.>.
Most of these algorithms fail to scale to domains outside the training set. Such algorithms lack mechanisms for abstraction and logical inference, and they fail at generalisation <cit.>. For example, in the case of driverless cars, one does not want a car to crash millions of times to learn how not to crash, so current techniques such as adversarial networks offer a way to produce examples in which not driving appropriately leads to an event that is labelled a crash <cit.>. However, driving and crashing are events where cause and effect need to be learned, which current approaches cannot do.
When AI leads science so that laboratory experiments are automated to execute cycles of planned experiments, AI frees humans from repetitive, tedious, and error-prone tasks and can deal with vast amounts of data that no human could handle <cit.>.
These human scientists, in turn, can feed the AI systems back with new insights and novel theories.
Thus, such an emerging feedback loop of AI-human collaboration will synergistically boost scientific discovery toward previously unattainable results, rigour, and dissemination.
To overcome the above limitations and challenges, we claim that it will be necessary to foster new theories and methods, as well as human and technological resources in AI, data science, and interdisciplinarity, so that scientists become capable of dealing with this AI-human interplay at both an infrastructural and a metastructural level. One of these methods may involve AI that guides AI and translates results to humans, and this intermediate AI may not be of the same type. For example, causal and model-driven AI may be required to disentangle other AI systems to which human scientists cannot relate when they lack a mechanistic explicative component, whether there is one or not. This may lead to some sort of meta-AI that may not require artificial general intelligence but would require a different set of skills than purely statistical machine learning approaches.
§.§ Historical Context
Applications of AI in science are quite broad and cover many fields. The idea of automating reasoning goes back to Leibniz, and its modern incarnation can be traced back to efforts to build computing machines in Europe, in particular the heroic efforts of Alan Turing's work at Bletchley to automate the problem of code breaking and his ideas of an imitation game <cit.>. It can also be traced back to Joshua Lederberg (Nobel laureate) <cit.>, Ed Feigenbaum (Turing Award winner) <cit.>, Karl Djerassi (co-inventor of the contraceptive pill) <cit.>, and colleagues at Stanford in the 1960s, who worked on automating mass spectroscopy for the Viking Mars lander <cit.>. AI has a long tradition of taking scientific discovery as an area of study. In the 1970s the Nobel laureate and Turing Award winner Herbert Simon developed Bacon, an AI system for science <cit.>. Since this pioneering work, much has been achieved, and there are now many convincing examples of AI systems making clear contributions to scientific knowledge (e.g. the very recent <cit.>).
Eurisko <cit.> and Cyrano <cit.> are two examples of other attempts to perform automated discovery from basic principles in a variety of technical fields, in particular in mathematics, chemistry, and a few other domains.
These are systems that can be viewed as heuristic search systems, with the additional advantage that they can reconfigure their own search space.
Some commercial products are specifically designed to be applied to knowledge and scientific discovery. For example, DataRobot <cit.> promotes Eureqa <cit.>, having acquired Nutonian <cit.>. Eureqa was designed to create models from time series data and is based on creating random equations from mathematical building blocks through evolutionary search to explain the data <cit.>. It has been called a “Virtual Data Scientist” <cit.>.
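To convey the flavour of this building-block search (not the actual Eureqa algorithm, which uses evolutionary search over much richer expression trees), a toy random-search sketch over hypothetical data might look as follows:

import random
import numpy as np

def identity(v):
    return v

OPS = [np.add, np.subtract, np.multiply]
FUNCS = [np.sin, np.cos, identity]

def random_expression():
    # Build a random candidate model f(x) = op(a*g(x), b*h(x)) from simple building blocks.
    op, g, h = random.choice(OPS), random.choice(FUNCS), random.choice(FUNCS)
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    expr = lambda x: op(a * g(x), b * h(x))
    return expr, (op.__name__, round(a, 2), g.__name__, round(b, 2), h.__name__)

def random_search(x, y, n_candidates=20000):
    best_desc, best_err = None, np.inf
    for _ in range(n_candidates):
        expr, desc = random_expression()
        err = float(np.mean((expr(x) - y) ** 2))
        if err < best_err:
            best_desc, best_err = desc, err
    return best_desc, best_err

# Hypothetical data from an unknown law y = 2 sin(x) + 0.5 x (which lies in the search space).
x = np.linspace(0, 6, 100)
y = 2.0 * np.sin(x) + 0.5 * x
print(random_search(x, y))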
A team of researchers from Google DeepMind launched a machine learning project called AlphaFold in 2018 to participate in the Critical Assessment of Techniques for Protein Structure Prediction, or CASP <cit.>. CASP is a biennial competition that assesses the state of the art in three-dimensional protein structure modelling. In its first version, AlphaFold was particularly successful at predicting the most accurate structure for targets rated as the most difficult by the competition's organisers, but it was not until the second program, AlphaFold 2, in 2020, that the team achieved a level of accuracy much higher than any other group before, scoring above 90 for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree to which a structure predicted by a computational program is similar to the structure validated experimentally, with 100 being a complete match. AlphaFold relied on a lot of human knowledge already generated in the years before, especially in areas such as molecular dynamics. The program was designed to include the expert domain knowledge in the form of the training data. How much molecular biological knowledge was introduced is still not known, but while it required a team that did draw heavily on domain expertise to tune it, most of the predictive power came from the AlphaFold 2 tool itself <cit.>.
A precursor of AI in physics is the project GALILEO (Guided Analysis of Logical Inconsistencies Leads to Evolved Ontologies) <cit.>. The GALILEO project tried to model the repair of faulty theories of Physics whose predictions were contradicted by empirical evidence.
One successful application of machine learning to climate data, for example, was the discovery of climate dipoles <cit.>.
Physics-driven AI has the potential to change how we approach science, shifting our current, predominantly data-reliant (as opposed to model-centred) scientific method by placing the mechanistic model at the centre of modelling itself.
Paradoxically, current physics-led AI and machine learning research have distracted researchers from more fundamental research, even though the discussion has started, and researchers will hopefully eventually get around to the first principles they claim to care about.
On the knowledge side, there are many applications of knowledge extraction of interest, such as for drug re-purposing by pharmaceutical companies <cit.>.
On task-oriented problem solving, we can find an increasing number of workflow systems that understand scientific tasks and carry them out.
There have been some success stories demonstrating that by collecting and integrating available molecular data into computational models, accurate predictions of interventions in the system can actually be made. An example is the Robot Scientist program <cit.> that was able to autonomously execute high-throughput hypothesis-led
research investigating yeast-based functional genomics, with the next-generation scientific program later using the same principles for drug screening. In another example, a computational model of Halobacterium salinarum NRC-1 was first constructed through massive data integration and machine learning-driven inference of the regulatory network <cit.>.
Another example was the ambitious whole-cell computational model of the life cycle of the human pathogen Mycoplasma genitalium <cit.>. The model accounted for all annotated gene functions and was validated against a broad range of data. Now, the model encompasses approximately 500 genes and their interactions.
In the area of neural networks, there has been, for example, an effort to make them `understand' cause and effect by algorithmic training. While more research is needed, fundamental research is aware that alternative approaches are required to capture the complexities of hypothesis and model generation or selection <cit.>.
In this sense, the research in this type of higher-order AI, such as deconvolution from searching for generative processes from the entire algorithmic space <cit.>, will also be crucial to advance current research.
To present a summary of the current state of AI applications to each scientific domain, Table <ref>
displays an organisation of scientific domains[Note that:
Complexity includes systems and intelligence as defined by the Santa Fe Institute;
Manufacturing notably includes ML-based design of sensors and chips;
and Earth systems includes oceans, land, air, and near space (see https://earthdna.orgearthdna.org).]
and the applicable AI algorithms' classes and approaches. Scientific domains are approximately ordered from smallest physical scales to largest.
Overlapping areas are not reflected in this high-level table (e.g., semi-supervised RL methods, or the representation of neural networks (NNs) that conflates various deep learning types like LSTMs and Transformers), not to mention complex, context-dependent multidisciplinarity. Table <ref>'s content reflects the consensus and understanding of a subset of this paper's authors. While supervised statistical methods have made contributions to almost every area of knowledge, these are of very different types, mostly ranging from identification to classification. Some areas are more difficult than others across all approaches, such as mathematics, philosophy, and epistemology. In general, statistical approaches rank poorly at finding first principles or adding new mechanistic knowledge to scientific domains.
Generative AI (GenAI) and Large Language Models (LLMs) are promising to advance science by assimilating and synthesising the vast corpus of human knowledge embedded in scientific literature. Through this synthesis, LLMs can interconnect disparate ideas, construct unique hypotheses, and venture into uncharted areas of scientific knowledge. However, this exploration is bound by the data they have been trained on, creating a theoretical bubble that could lead to model collapse through excessive training on the same data.
To burst this bubble, it is essential to supplement LLMs with other methods and multiple sources. For instance, active learning could serve to maximise information gain, challenging the model with fresh data and different viewpoints cross-pollinating from different scientific domains. Hybrid models blending AI with symbolic reasoning could tackle scientific problems requiring high-level abstraction, thus broadening LLMs' capabilities. This approach would therefore fall into the neuro-symbolic category for purposes of scientific discovery.
Indeed, an area where LLMs could be especially impactful is in scientific model discovery. By analysing patterns and correlations in vast datasets, LLMs could help identify mathematical relations and possibly reveal new potential (physical or computational) laws, just as they learn grammar from natural language statistics. This could expedite the scientific process, enabling more rapid breakthroughs.
Furthermore, LLMs could make a significant contribution to causal analysis. By processing extensive scientific literature, they could draw links between causes and effects that might be overlooked by human researchers, proposing novel causal hypotheses for testing. Pairing this with counterfactual reasoning, where the AI predicts the outcome of modifying specific variables, could deepen our understanding of cause-effect relationships, and help simulate alternative model outcomes.
However, it is also important to acknowledge the limitations of current LLMs, and of statistical machine learning (ML) in general, which currently lack the depth needed for breakthroughs and require data of sufficient quality and diversity for an LLM `temperature' (favouring less likely statistical patterns) to explore the long tails of the distribution of scientific results, where potential breakthrough science lies, away from incremental average science. A collaborative approach, in which human scientists guide the AI, can help harness the strengths of both worlds while mitigating the current weaknesses of LLMs and statistical ML, ensuring a more effective utilisation of this technology today.
§ ASPECTS OF AI-LED CLOSED-LOOP SCIENCE
The ability to predict and design (inverse design), while exceptionally useful, will not necessarily lead to new fundamental discoveries (new theories) unless AI and human goals in scientific discovery are aligned and synergistically intertwined so as to impose similar objectives, quantified and introduced into, for example, a loss function.
This is because scientific discovery cycles, such as those illustrated in Figs. <ref>, are not isolated parts but belong within a greater cycle of scientific inquiry spanning an entire topic or field comprised of a community of scientists.
It is the larger learning cycle that fuels the questions in the smaller learning cycles.
The larger cycle is fuelled by human curiosity and human challenges and has a strong historical and social component, but the shorter cycles, being more well-defined, are more amenable to automation.
Nevertheless, the larger cycles may be needed to kick-start the discovery process of the smaller learning cycles.
In this sense, one option to integrate human scientists and AI-driven science is for humans to build the context of the greater cycle (for example, fulfilling the role of the `Final Theory' and `Background knowledge' steps at the leftmost smaller cycle in Fig. <ref>), feeding the AI with new insights, and to leave the AI to independently deal with the smaller cycles (such as the rightmost smaller cycle in Fig. <ref>), guided by the greater ones. LLMs could, for example, be very useful as a technical interface, translating human high-level larger-cycle aspirations into their respective “divide-and-conquer” breakdown into smaller cycles. If one aims at the highest degree of automation of the discovery cycle, more sophisticated forms of AI should include automation of the validation, dissemination, refereeing, and other aspects of human science and its practice.
To tackle such challenges, we propose in the following sections the steps and technology suggested to conduct an entire cycle of AI-led scientific discovery <cit.>, as in Fig. <ref>.
§.§ Hypothesis Generation
One of the central components of the scientific practice is the `hypothetico-deductive' method <cit.>.
An additional set of epistemological tools is induction <cit.>, abduction <cit.> and counterfactual reasoning<cit.>.
To automate those knowledge processes, a deduction can be combined with simulation to infer the experimental consequences of hypotheses. Matching simulation with experimental output will be a reliable basis for an AI to accept or reject a hypothesis.
Such experimental output is tested with multiple interventions in the automated series of perturbation analyses <cit.>.
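A minimal toy sketch of this simulate-and-compare loop is given below (all names, tolerances, and the linear-gain example are hypothetical illustrations, not a prescription):

import numpy as np

def closed_loop(hypotheses, simulate, run_experiment, tol=0.05, n_cycles=10):
    # Keep only hypotheses whose simulated predictions match each automated measurement.
    surviving = list(hypotheses)
    for _ in range(n_cycles):
        if len(surviving) <= 1:
            break
        condition = np.random.uniform(0.0, 1.0)       # next planned intervention
        observed = run_experiment(condition)          # e.g. a robot-executed measurement
        surviving = [h for h in surviving
                     if abs(simulate(h, condition) - observed) < tol]
    return surviving

# Toy example: which linear gain explains the system's response to an input u?
true_gain = 2.0
hypotheses = [0.5, 1.0, 1.5, 2.0, 2.5]
simulate = lambda gain, u: gain * u
run_experiment = lambda u: true_gain * u + np.random.normal(0.0, 0.01)
print(closed_loop(hypotheses, simulate, run_experiment))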
However, while one traditional approach to automate induction may follow, for example, new methods for clustering and regression, automating abduction and the creation of counterfactual scenarios may pose an even more challenging problem.
For this purpose, it would require the AI algorithm to explore irreducibly novel possibilities that are emergent to the current state of knowledge in which the AI is situated <cit.>.
In this sense, neural networks are unlikely to be useful in the process of hypothesis generation, nor is any statistical machine learning. This is because they need training, and not only is training over hypothesis generation exactly the problem to be solved in the first place, but training over previous hypotheses, dividing them into rejected or valid, may undermine the freedom and the unbiased exploration that is desired of regions of interest in the hypothesis space.
For hypothesis generation, what is needed is a bottom-up approach (e.g., a model-driven AI) or a hybrid one able to conduct cycles of systematic hypothesizing,
from either partial or exhaustive enumerations (even if redundant though universal) <cit.>.
A bottom-up approach that deals with this open-endedness concerning the role of novelty is the field of algorithmic information dynamics (AID) <cit.>, a framework for causal discovery and causal analysis based on algorithmic information theory and perturbation analysis.
Open-ended innovation in hypothesis generation and how to create and search over unbounded hypothesis spaces in less well-specified domains is an open challenge in itself, where research on the topics of this document can help make progress. These spaces and the methods exploring them usually have to deal with problems of intractability or uncomputability <cit.>.
Each method has its advantages and drawbacks and lies at different extremes of the causal inference spectrum.
Guiding heuristics based on first principles are needed to explore the hypothesis space <cit.>. Dovetailing partial results is necessary to avoid infinitely long cycles running the search. Here aspects of computability and tractability will be in play at every step, which we will need measures to deal with unless less powerful techniques are implemented (e.g. propositional logic or domain-restricted spaces such as a set of genetic circuits).
At one extreme are the statistical tools that confound correlation and causation but can help scientists make a call and guide their experiments, viz. graphical models that combine probability with symbolic logic, reasoning, and interventional calculus.
The statistical approach often leads to less computationally expensive methods and, although in general they may present distortions or biases toward some selected features <cit.>, they return sound results in cases where one knows a priori that the underlying generative processes are purely stochastic, stationary, and ergodic.
At the other extreme is AID, which searches for sets of agnostic generative models compatible with observations, and exploits these models as testable underlying mechanisms and causal first principles <cit.>, regardless of those being stochastic, computable, or mixed processes.
In addition to offering less constrained methods, for example deconvolution algorithms <cit.> and optimisation in non-differential spaces <cit.>, this approach offers results in direction to tackling the abduction and counterfactual problem, as for example shown in new methods for open-ended evolutionary computation <cit.>, and synergistic distributed computation <cit.>.
However, bottom-up approaches like AID may not be humanly understandable, or when they are, scrutinising them may require great computational effort, as is the case in other areas such as automatic theorem proving (e.g., the four-colour theorem).
LLMs may here again provide an advantage to interface between these model spaces as natural language processors integrating otherwise disparate systems translating among different domain databases and knowledge bases.
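To make the perturbation-analysis idea behind AID concrete, the following crude sketch ranks the edges of a network by how much deleting each one shifts an algorithmic-complexity estimate of its adjacency matrix; note that zlib compression is only a rough stand-in for the CTM/BDM estimators used in the actual AID literature, and the example network is hypothetical:

import zlib
import numpy as np

def k_approx(bits):
    # Crude upper bound on algorithmic complexity via lossless compression.
    return len(zlib.compress(np.packbits(bits).tobytes(), 9))

def edge_perturbation_scores(adj):
    # Shift in the complexity estimate when each edge is deleted; large positive shifts
    # suggest structural (generative) elements, negative shifts suggest noise-like insertions.
    base = k_approx(adj.flatten())
    scores = {}
    for i, j in zip(*np.nonzero(adj)):
        perturbed = adj.copy()
        perturbed[i, j] = 0
        scores[(int(i), int(j))] = k_approx(perturbed.flatten()) - base
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical network: a regular ring (highly compressible) plus one irregular shortcut edge.
n = 32
adj = np.zeros((n, n), dtype=np.uint8)
for i in range(n):
    adj[i, (i + 1) % n] = 1
adj[3, 17] = 1
print(edge_perturbation_scores(adj)[:3])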
§.§ Experimentation and Sensing
One key task is to create AI systems for scientific discovery able to conduct experimentation and hypothesis testing independently of human instruction, or with little of it.
This is because what is desired to take scientific discovery to the next level is not the programming of algorithms able to conduct experiments, but open-ended algorithms able to set their own goals and experiments guided by previously conducted experiments (their own or from the human literature).
To this end, with the machine embodied to perform as a physical scientist, instrument-driven approaches render robotics key to making progress in physical experimentation, so that more and more of the physical execution of experiments will be done using robotics.
This will increase the productivity of science, as robots work cheaper, faster, more accurately, and for longer than humans.
Furthermore, if not embodied, the scientific experiment may collapse into a problem of data analysis and inference without the hypothesis, model, and theory testing that requires positive or negative feedback from the empirical side. Thus only a tiny part of the scientific discovery cycle would be tackled.
Neural networks can help physical machines embed themselves in a physical world for representation purposes, as neural networks have proven useful in representing all sorts of images. Still, innovation in robotics and mechatronics will be required to accommodate the depth and range of scientific experiments, in particular when it comes to accuracy and precision (which should not present a problem), while also helping with the current, very human problem of reproducibility <cit.>.
This is expected to have a significant impact on the reproducibility of science, as automating science requires semantic precision.
LLMs will also interface between human and robot instructions, making it easier to create tools that automate experiments in natural language, effectively instantiating a robot assistant able to process human instructions for scientific experimentation.
§.§ Rejection, Validation and Model Selection
Model selection and reduction have been a recurring theme across several sub-fields, such as computational biology and neuroscience, with special reference to dynamical forward models. The idea is that if a complex nonlinear model can be reduced in complexity (fewer state variables and parameters), the investigator can more readily discern which parameters and state variables are most crucial to the model's behaviour, facilitating model analysis and understanding. One example is the reduction of the four-dimensional Hodgkin–Huxley model to a two-dimensional FitzHugh–Nagumo (FHN) system <cit.>. The core idea was to perform a time-scale separation into fast and slow subsystems. This has been used in a number of model reduction studies, including studies of the cell cycle.
Techniques for dimension reduction, feature, and model selection will be helpful at this stage, from statistical approaches such as principal component analysis to more sophisticated ones such as minimal information loss techniques.
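As a toy illustration of the simplest of these techniques, the sketch below applies PCA (via an SVD) to trajectories of a hypothetical multi-variable model to estimate how many effective state variables dominate its behaviour; the simulated trajectory, variable counts, and the 99% variance threshold are assumptions made for illustration and are not taken from any cited study.

```python
# Hypothetical example: estimate how many state variables dominate a model's
# behaviour by applying PCA to its simulated trajectories.
import numpy as np

rng = np.random.default_rng(0)

# Fake trajectory of a 4-variable model driven by two underlying modes + noise.
t = np.linspace(0.0, 10.0, 500)
modes = np.stack([np.sin(t), np.cos(0.5 * t)], axis=1)            # (500, 2)
mixing = rng.normal(size=(2, 4))                                   # map modes to 4 observables
trajectory = modes @ mixing + 0.01 * rng.normal(size=(500, 4))     # (500, 4)

# PCA via SVD of the centred data matrix.
centred = trajectory - trajectory.mean(axis=0)
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# Keep enough components to explain ~99% of the variance: a crude analogue of
# a Hodgkin-Huxley -> FitzHugh-Nagumo style reduction in state dimension.
n_keep = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
reduced = centred @ components[:n_keep].T
print("explained variance ratio:", np.round(explained, 3))
print("reduced state dimension:", reduced.shape[1])
```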
Another core idea for model selection is that each hypothesis formed will have a predicted probability of being correct,
possibly along with the associated cost of the respective experiment. This may be the monetary cost of executing the experiment, plus a temporal discount rate to value finding results more quickly. It has been empirically shown that using a Bayesian approach to experiment selection is sound and outperforms experiments chosen manually <cit.>.
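A minimal sketch of such cost-aware, probability-weighted experiment selection follows; the candidate experiments, probabilities, costs, payoff, and discount rate are invented for illustration and do not come from the cited work.

```python
# Hypothetical ranking of candidate experiments by expected utility:
# P(hypothesis correct) * discounted payoff - monetary cost.
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    p_correct: float   # predicted probability that the tested hypothesis is correct
    cost: float        # monetary cost of running the experiment
    duration: float    # days until the result is available

def expected_utility(e: Experiment, value_if_correct: float = 100.0,
                     daily_discount: float = 0.99) -> float:
    discounted_value = value_if_correct * (daily_discount ** e.duration)
    return e.p_correct * discounted_value - e.cost

candidates = [
    Experiment("knockout_gene_A", p_correct=0.30, cost=20.0, duration=2.0),
    Experiment("knockout_gene_B", p_correct=0.10, cost=5.0, duration=1.0),
    Experiment("double_knockout", p_correct=0.45, cost=60.0, duration=7.0),
]
best = max(candidates, key=expected_utility)
print(best.name, round(expected_utility(best), 2))
```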
Current AI has shown the ability to yield valuable insights from noisy or incomplete data, optimise procedure design, and learn notions of structure amongst heterogeneous observations. Neural networks have shown utility in isolating proper signals from noisy datasets spanning disciplines from physics to biology; such capabilities could be critical to establishing scientific conclusions as we reach the practical limit of experimental data quality <cit.>. Approaches from optimisation have demonstrated an ability to reduce the expense of experimental campaigns by optimising sampling patterns using, for instance, bandit-style methods to more rapidly design electric batteries or iteratively specify experimental conditions in biology. Structure learning techniques from the graphical model literature could find use in identifying statistically meaningful relationships from large amounts of unannotated data <cit.>.
§.§ Knowledge Representation and Natural Language Processing
Ingested knowledge may no longer need to be machine-readable in a rule-based or probabilistic form, given that LLMs can interface between these representations; however, their possible caveats, such as low-level hidden misalignments, are difficult to unveil, which makes traceability and liability difficult. LLMs can allow machines to read, interpret, and exploit the current knowledge from a scientific domain in human natural language and digest the relevant literature in the target area.
An AI-led scientific discovery approach will require at least access to the space of interest, so that the system is able to validate or reject a hypothesis based on contradiction or confirmation of previous knowledge, which may be difficult in a black box like an LLM. The LLM will therefore need to be self-explanatory, with the caveat that the output explanation may not reflect the internal statistical derivation of what the LLM ends up producing. An independent system and a more explainable mechanistic process may need to verify the output.
Without LLMs, this task would have required massive databases and curation efforts for domains that are not already significantly represented in a computable fashion.
Although all sorts of languages can be used to represent knowledge, some domains will be aptly represented by propositional-logic rules, such as simplified genetic circuits, to avoid these potential misalignments from LLMs or statistical ML in general.
Other domains will require more sophisticated representations, either to encompass the greater complexity of an extended domain or to deal with the greater sophistication of, e.g., a domain such as biomedicine, where expert-system rules with ifs, dos, and whiles are required, hence the full power of first-order logic and Turing-completeness.
For example, knowledge representation systems/ontologies are well developed in biology: The Gene Ontology (GO), nascent Causal Activity Models with the GO, Human Phenotype Ontology, Chemical Entities of Biological Interest, Ontology of Biomedical Investigation, among others <cit.>. So are integration efforts built on these ontologies, e.g., Monarch <cit.>.
The JST MIRAI `Robotic Biology' project can also provide technologies to help adoption, such as LabCode, a common formal language for experimental protocols, LabLive, a laboratory information IoT platform, and real-time parallel workflow scheduling software that can decompose processes in a given protocol and assign each to different robots/equipment so these are executed considering dependencies and concurrency between them.
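As a toy illustration of the simpler, propositional end of this spectrum, the sketch below encodes a simplified genetic circuit as implication rules and rejects a hypothesis when forward chaining derives a contradiction; the rules, literals, and negation convention are invented for illustration only.

```python
# Toy propositional rule base for a simplified genetic circuit.
# "not:<literal>" marks a negated literal; a contradiction is a literal
# appearing together with its negation in the deductive closure.
RULES = [
    ({"gene_A_active"}, "protein_A_present"),
    ({"protein_A_present"}, "gene_B_repressed"),
    ({"gene_B_repressed"}, "not:protein_B_present"),
]

def forward_chain(facts):
    derived, changed = set(facts), True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def consistent(hypothesis, facts):
    closure = forward_chain(facts | {hypothesis})
    return not any("not:" + lit in closure for lit in closure)

# The hypothesis "protein_B_present" contradicts the rules when gene A is active.
print(consistent("protein_B_present", {"gene_A_active"}))  # False
```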
Another example is statistical relational learning (SRL), which combines relational learning and probability theory and is an area of ML research (e.g. <cit.>),
enabling the representation of beliefs about relational data using probabilistic models.
Relational Learning (RL) is a general representation language based on first-order predicate logic <cit.>.
Such probabilistic logic models enable the specification of graphical models (Bayesian networks, Markov networks, etc.) over large relational domains.
One of the fundamental design goals of the representation formalisms developed in SRL is to abstract away from concrete entities and to represent instead general principles that are intended to be universally applicable. A key advantage of RL is that it can easily incorporate background scientific knowledge, and learn about structured objects such as scientific models particularly appropriate for utilising background bioinformatic data <cit.>.
These approaches can be further enhanced or complemented by the do-calculus <cit.> or algorithmic information dynamics <cit.>.
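The flavour of such weighted relational rules can be conveyed with the toy sketch below; it is not a full SRL or Markov logic implementation, and the relations, weights, and logistic link are assumptions chosen purely for illustration.

```python
# Toy weighted-rule scoring of a relational query "functionally_related(x, y)".
import math

interacts = {("p53", "MDM2"), ("MDM2", "p53"), ("BRCA1", "BARD1")}
located_in = {("p53", "nucleus"), ("MDM2", "nucleus"), ("BRCA1", "nucleus")}

def rule_interaction(x, y):      # strong evidence: direct interaction, weight 2.0
    return 2.0 if (x, y) in interacts else 0.0

def rule_colocation(x, y):       # weaker evidence: shared compartment, weight 0.5
    shared = {c for a, c in located_in if a == x} & {c for a, c in located_in if a == y}
    return 0.5 if shared else 0.0

def p_related(x, y, bias=-1.0):
    score = rule_interaction(x, y) + rule_colocation(x, y) + bias
    return 1.0 / (1.0 + math.exp(-score))    # logistic link to a probability

print(round(p_related("p53", "MDM2"), 3))    # high: interaction and co-location
print(round(p_related("p53", "BRCA1"), 3))   # lower: co-location only
```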
Deep neural networks are also good at capturing the apparent granularity and complexity of natural phenomena in a computable form (in weighted vectors of numerical matrices). The success of neural networks implies that once one captures an object in an optimal way, classification is trivial, as it was for deep learning in the protein-folding challenge <cit.>, albeit with limitations.
Assuming that an appropriate formalism to record observation could be found for any domain, a modeller may be faced with a severe feature selection problem, which translates into a question of the identity of the relevant state variables of the systems of interest, e.g., drug docking dynamics for drug discovery or cytokines for cell dynamics.
On the one hand, all the entities of the system that are measured could define the set of state variables to be represented, e.g. drugs or proteins, augmented with the set of rules to which the entities may be subjected, such as thermodynamics or collisions.
However, this type of representation could quickly become very complex <cit.>.
On the other hand, a certain subset of combinations of measured state variables may be a useful representation of the governing dynamics driving a possible system, and this is a question that needs to be asked and resolved for scientific domains on a case-by-case basis.
Such a feature selection problem in computably representable objects is often found in analyses in which one is assuming a pure stochastic nature of the system's generative processes, although the system also comprises deterministic, mechanistic, or computable subprocesses <cit.>.
In addition, even in cases where the whole algorithmic space of possibilities is covered, analysing the information content carried by a network depends highly on the multidimensional space into which it is embedded <cit.>, where distortions may be exponential for multidimensionality-agnostic encodings.
Thus, developing expressive and efficient frameworks to computationally represent and capture a wide range of scientific knowledge about processes, models, observations and hypotheses is key.
Capturing scientific knowledge will push the limits of the state of the art.
In the opposite direction of knowledge representation by machines, the AI for scientific discovery may need to communicate in the form of a publication or other scientific means to explain the innovation and methods behind the discovery to humans and to articulate its significance and impact.
A choice that has to be made, on a case-by-case basis, is whether it is required that AI conducts the experiments without much human understanding or whether it is acceptable not to have a sophisticated translation of both the hypotheses generated and the process arriving at a conclusion.
In cases where there is a requirement for human understanding, and even for the most general case, at least partial interpretation by human scientists may be required.
Thus, knowledge representation and natural language processing techniques will need to be jointly developed to both:
feed the system with the current knowledge relevant to the hypothesis space;
and guide the search (in cases of human-machine interaction) or be able to follow up the inference process and interpret the results <cit.>.
These requirements will force us to make progress on humanly readable and interpretable machine-human translation.
§.§ Integration, Interpretation and Interfacing
One of the most challenging aspects of scientific discovery is integrating a new piece of information with the corpus of existing human knowledge.
Analysing the data will require moving to the larger learning loop where there is a broader view of the results for possible (re-)interpretation.
This is because while the specific objective for the target hypothesis may have been rejected, one of the main serendipity checkpoints is the reinterpretation of results in a broader context.
Machine learning systems have proven incredibly useful for automated knowledge base construction. They have recently contributed to the creation of multiple large databases describing, for instance, genome-wide association studies and drug-disease interactions directly from the published literature <cit.>. This ability to create massive knowledge bases that rapidly and effectively contextualise new findings could substantially accelerate scientific discovery by ensuring that seemingly disparate dots are more rapidly connected.
However, exploring and understanding user context requires automating certain social, political, and economic aspects of interconnected knowledge that are intrinsic to science <cit.>.
The AI systems' interactions with scientists must be guided by a knowledge-rich user model that enables the AI systems to act as colleagues, as LLMs may now allow.
This constitutes an inextricable loop in which human scientists and AI-scientists are parts of a whole system, which the AI algorithm should try to optimise.
A striking example of such an optimal interplay has been the evolution of machine-human chess collaboration.
After the defeat of Garry Kasparov, it became standard to have human chess players practice with computers, and for champions, it became impossible to reach the demanded level of play without intensive computer training <cit.>. To this day, the strongest freestyle chess teams have been those able to strike a perfect balance between human and computer training and playing.
Again, neural networks and statistical machine learning will not help in this process, at least not on their own or in their traditional architectures.
What is most likely needed here is first an inference engine able to extract knowledge readable by humans as well, especially under human-machine schemes.
Classical logical inference engines are key, but so are hybrid approaches combining statistical learning and symbolic computation.
Techniques such as feature selection and data dimension reduction will be helpful.
Secondly, an AI algorithm that can simulate the network topological properties of scientific production <cit.> and perform the first five steps of the full cycle of AI-led scientific discovery, while taking into account the relational structures and biases that emerge when the AI-human relationship is analysed as a single system.
The application of AI to science will confer multiple advantages, and eliminate some of the disadvantages of having a human in the loop, such as biases and lack of reproducibility. Yet, if humans rely on automated scientific discovery, verifiability and transparency are crucial because the coupled AI-human system has to be able to be formally verified to ensure that it matches the goals and that the results match the process.
In this manner, the AI algorithm should be designed to continuously reiterate its data gathering from the outputs and behaviours of the whole system the AI is part of.
The same holds for the human scientist, who needs to be able to perform, evaluate, and produce analytical reasoning while participating in this coupled computational-social system.
§.§ Closing the Loop
Finally, connecting all the steps will require a meta-algorithm that will need to systematically manage each cycle and even decide when to break or restart the cycles (see Fig. <ref>), if human intervention is taking place.
The whole cycle should be open to human intervention, and the AI algorithm should both reiterate the new insights and data given by humans and counter any bias that these may introduce.
Therefore, the “grand challenge” that we propose ranges over automating not only laboratory practices and theory making, but also writing a paper, refereeing, and disseminating achievements.
Technology for remote web control and monitoring of full-cycle scientific discovery may require technologies such as TypeScript, React, GraphQL, Jest, and Redux to create a web-based beamline control system.
Techniques such as optimisation and anomaly detection can be used to find possible gaps and even glitches (found or promoted). These gaps can be exploited to reinterpret data, explore other regions of the hypothesis space and kick-start the process of hypothesis generation again, thus closing and restarting the discovery cycle.
§ CONCLUSION: THE FUTURE OF AI IN SCIENTIFIC DISCOVERY
Academic fields are disrupted to such an extent that future progress has become almost unthinkable without the involvement of some machine learning.
We have explored some of the challenges and opportunities in utilising and exploiting AI. We argue that a closed-loop formulation not only augments and accelerates scientific discovery but also leads science in new directions, thus disrupting the future of human science. Such AI-led, closed-loop experimentation, may also mitigate current challenges, such as the production and replication of data.
The application of AI in scientific discovery presents us with very different challenges compared to the application of AI to games such as chess, shogi, or Go <cit.>. However, recent developments surprisingly suggest that some scientific challenges may not be that different from these games <cit.>.
To make contributions to fundamental science, of the kind that goes into textbooks of first principles of major fields, do we require AI equipped with sufficient intelligence and autonomy to render it capable of sensing and making observations so as to ask novel and relevant questions? Do we want AI in scientific discovery to remain guard-railed, or to run unsupervised or semi-supervised? And guard-railed by humans, or by other tertiary systems that we may trust? These are questions that scientists and policymakers will have to face and answer soon, if not now.
We envision future AI systems having the potential to transform scientific discovery and enable an unprecedented expansion of knowledge, while leaving some open questions unanswered: to what extent do we want to exert full control over what is explored or experimented on; to what extent do we want to be able to understand the results; and at what moment are we willing to let human scientific understanding be left behind, as it already is in some ways, with no single scientist able to know or catch up with their entire field? Certainly, for some questions, limits are desirable; for others, the question is how much humans are willing to sacrifice, such as understanding, in exchange for possibly solving problems such as cancer or climate change.
Lavin2021si
A. Lavin, et al.,
abs/2112.03235 (2021).
King2004a
R. D. King, et al., Nature 427, 247 (2004).
King2009
R. D. King, et al., Science 324 (2009).
ROSS
R. D. King, Scientific American 304, 72 (2011).
Wang2019
D. Wang, et al., Proceedings of the ACM on Human-Computer
Interaction pp. 1–24 (2019).
Nosek2012
B. A. Nosek, J. R. Spies, M. Motyl, Perspectives on Psychological
Science 7, 615 (2012).
Fanelli2017
D. Fanelli, R. Costas, J. Ioannidis, PNAS 14, 3714 (2017).
Nuzzo2015
R. Nuzzo, Nature pp. 182–185 (2015).
Goodman2018
S. N. Goodman, D. Fanelli, J. P. Ioannidis, Getting to Good: Research
Integrity in the Biomedical Sciences pp. 96–102 (2018).
Harris2019
J. K. Harris, et al., Public Health Reports 134, 109
(2019).
Kaanders2021
P. Kaanders, P. Sepulveda, T. Folke, P. Ortoleva, B. D. Martino, bioRxiv p. 2021.06.29.450332 (2021).
BarabSci
W. Dashun, B. Albert-László, The Science of Science (Cambridge
University Press, Cambridge, UK, 2021).
Fortunato2018
S. Fortunato, et al., Science 359 (2018).
Nature2016
Nature, Nature 537, 465 (2016).
Colizza2006-ed
V. Colizza, A. Flammini, M. A. Serrano, A. Vespignani, Nat. Phys. 2, 110 (2006).
Baker2016-cd
M. Baker, Nature 533, 452 (2016).
Baddeley2015-jp
M. Baddeley, EMBO Rep. 16, 902 (2015).
Resnik2016
D. B. Resnik, K. C. Elliott, Accountability in research 23, 31
(2016).
HernandezOrozco2021
S. Hernández-Orozco, et al., Frontiers in Artificial
Intelligence 3, 567356 (2021).
Venturi2019-mr
L. Venturi, A. Bandeira, J. Bruna, Journal on Machine Learning Research
20, 1 (2019).
Goodfellow2016
Y. Goodfellow, Y. Bengio, A. Courville (MIT Press, 2016).
blackbox
V. Buhrmester, D. Münch, M. Arens (2019).
Rudin2019-pv
C. Rudin, Nature Machine Intelligence 1, 206 (2019).
Salakhutdinov2015-su
R. Salakhutdinov, Annual Review of Statistics and Its Application 2, 361 (2015).
Creswell2018-qa
A. Creswell, et al., IEEE Signal Process. Mag. 35, 53
(2018).
Bian2021-vh
Y. Bian, X.-Q. Xie, J. Mol. Model. 27, 71 (2021).
Calude2017
C. S. Calude, G. Longo, Foundations of Science 22, 595 (2017).
Zenil2020
H. Zenil, Entropy 22, 612 (2020).
Scholkopf2021
B. Scholkopf, et al., Proceedings of the IEEE 109, 612
(2021).
Colbrook2022
M. J. Colbrook, V. Antun, A. C. Hansen, Proceedings of the National
Academy of Sciences 119 (2022).
Nadeau2003-vk
C. Nadeau, Mach. Learn. 52, 239 (2003).
Spooner2021-px
J. Spooner, V. Palade, M. Cheah, S. Kanarachos, A. Daneshkhah, Applied
Sciences 11, 471 (2021).
Kitano2016
H. Kitano, AI Magazine 37 (2016).
Turing1
J. Copeland, Alan Turing: The codebreaker who saved `millions of lives' - BBC
News (2012).
Turing2
J. Copeland, D. Proudfoot, Alan Turing, Codebreaker and Computer Pioneer -
AlanTuring.net The Turing Archive for the History of Computing (2004).
Lederberg
L. L. Cavalli-Sforza, Cell 132 (2008).
Feigenbaum1
IEEE, IEEE Intelligent Systems 26 (2011).
Djerassi
J. I. Seeman, Chemical & Engineering News pp. 10–14 (2013).
DENDRAL
J. Lederberg, E. A. Feigenbaum, B. G. Buchanan, R. K. Lindsay, Applications of Artificial Intelligence for Organic Chemistry: The DENDRAL
Project (McGraw-Hill, 1980).
Buchanan1984
B. G. Buchanan, E. H. Shortliffe, Rule-Based Expert Systems: The MYCIN
Experiments of the Stanford Heuristic Programming Project
(Addison-Wesley, Reading, MA, 1984).
Langley1987
P. W. Langley, H. A. Simon, G. Bradshaw, J. M. Zytkow, Scientific
Discovery: Computational Explorations of the Creative Process (MIT Press,
Cambridge, Mass, 1987).
Burger2020
B. Burger, et al., Nature 583 (2020).
Jumper2021-gb
J. Jumper, et al., Nature 596, 583 (2021).
lenat
D. B. Lenat, Machine Learning, R. Michalski, J. Carbonell, Mitchell
T.M., eds. (Springer, Berlin, Heidelberg, 1983).
hasse
K. W. Haase, Discovery Systems AI Memo 898, Tech. rep., Artificial
Intelligence Laboratoy MIT, Cambridge Mass. (1986).
DataRobot
DataRobot, DataRobot - AI Cloud - The Next Generation of AI.
Eureqa
Eureqa, Eureqa Models | DataRobot.
Nutonian
Nutonian, DataRobot AI Cloud Platform.
Dubcakova2011-he
R. Dubčáková, Genet. Program. Evolvable Mach. 12,
173 (2011).
Awange2018-el
J. L. Awange, B. Paláncz, R. H. Lewis, L. Völgyesi, Mathematical
Geosciences (Springer International Publishing, Cham, 2018), pp. 321–357.
Wei2019-vt
G.-W. Wei, Nature Machine Intelligence 1, 336 (2019).
Skolnick2021-ty
J. Skolnick, M. Gao, H. Zhou, S. Singh, J. Chem. Inf. Model. 61,
4827 (2021).
Liu2021-zj
J. Liu, et al., Geophys. Res. Lett. 48 (2021).
Gupta2021-py
R. Gupta, et al., Mol. Divers. 25, 1315 (2021).
Liu2021-repurpose
R. Liu, L. Wei, P. Zhang, Nat Mach Intell 3, 68 (2021).
Bonneau2007
R. Bonneau, et al., Cell 131, 1354 (2007).
Karr2012
J. R. Karr, et al., Cell 150, 389 (2012).
Luo2020-xz
Y. Luo, J. Peng, J. Ma, Nat Mach Intell 2, 426 (2020).
Zenil2019b
H. Zenil, N. A. Kiani, A. A. Zea, J. Tegnér, Nature Machine
Intelligence 1, 58 (2019).
Gil2014-ch
Y. Gil, M. Greaves, J. Hendler, H. Hirsh, Science 346, 171
(2014).
Popper1972
K. R. Popper, Objective Knowledge: An Evolutionary Approach (Oxford
University Press, New York, 1972).
King2011
R. D. King, M. Liakata, C. Lu, S. G. Oliver, L. N. Soldatova, Journal of
the Royal Society Interface 8, 1440 (2011).
Russell1912
Bertrand Russell, The Problems of Philosophy (Home University
Library, 1912).
Pearl1995
J. Pearl, Biometrika 82, 669 (1995).
Zenil2020cnat
H. Zenil, N. Kiani, F. Abrahão, J. Tegnér, Scholarpedia
Journal 15, 53143 (2020).
Abrahao2022
F. S. Abrahão, H. Zenil, Philosophical Transactions of the Royal Society
A: Mathematical, Physical and Engineering Sciences 380 (2022).
Morgan1971-ly
C. G. Morgan, Artificial Intelligence 2, 179 (1971).
Thieme2005-ij
S. Thieme, Knowledge Representation and Organization in Machine
Learning (Springer-Verlag, Berlin/Heidelberg, 2005), pp. 177–191.
Zenil2018-pk
H. Zenil, et al., SSRN Electron. J. (2018).
Zenil2019
H. Zenil, et al., iScience pp. 1160––1172 (2019).
Lenat1982-aj
D. B. Lenat, Artificial Intelligence 19, 189 (1982).
Zenil2017a
H. Zenil, N. A. Kiani, J. Tegnér, Physical Review E 96,
012308 (2017).
Hernandez-Orozco2018
S. Hernández-Orozco, F. Hernández-Quiroz, H. Zenil, Artificial
Life 24, 56 (2018).
Hernandez-Orozco2018a
S. Hernández-Orozco, N. A. Kiani, H. Zenil, Royal Society Open
Science 5, 180399 (2018).
Abrahao2017
F. S. Abrahão, K. Wehmuth, A. Ziviani, Theoretical Computer
Science 785, 83 (2019).
Abrahao2018
F. S. Abrahão, K. Wehmuth, A. Ziviani, Complex Systems 27
(2018).
Lindner1999-wy
B. Lindner, L. Schimansky-Geier, Physical Review E 60, 7270
(1999).
Drton2017-bz
M. Drton, M. H. Maathuis, Annual Review of Statistics and Its
Application 4, 365 (2017).
Eddy2004-ub
S. R. Eddy, Nat. Biotechnol. 22, 1177 (2004).
Stevens2000-fu
R. Stevens, C. A. Goble, S. Bechhofer, Brief. Bioinform. 1, 398
(2000).
Bard2004-ej
J. B. L. Bard, S. Y. Rhee, Nat. Rev. Genet. 5, 213 (2004).
Shefchek2020-ci
K. A. Shefchek, et al., Nucleic Acids Res. 48, D704
(2020).
Raedt2008
L. D. Raedt, Logical and Relational Learning (Springer Berlin
Heidelberg, Berlin, Heidelberg, 2008).
Orhobor2020
O. I. Orhobor, N. N. Alexandrov, R. D. King, Machine Learning 2020
109:11 109, 2195 (2020).
Pearl2012
J. Pearl, Uncertainty in Artificial Intelligence - Proceedings of the 28th
Conference, UAI 2012 pp. 4–11 (2012).
Tang2019-ni
C. Tang, et al., Neural Netw. 117, 163 (2019).
Abrahao2021
F. S. Abrahão, K. Wehmuth, H. Zenil, A. Ziviani, Entropy 23
(2021).
Chowdhury2005-gl
G. G. Chowdhury, Annual Review of Information Science and Technology
37, 51 (2005).
Cambria2014-yd
E. Cambria, B. White, IEEE Comput. Intell. Mag. 9, 48 (2014).
Andronis2011-uk
C. Andronis, A. Sharma, V. Virvilis, S. Deftereos, A. Persidis, Brief.
Bioinform. 12, 357 (2011).
McCarthy2007-ns
J. McCarthy, Artificial Intelligence 171, 1174 (2007).
Campbell2002-tv
M. Campbell, A. J. Hoane, F. Hsu, Artificial Intelligence 134, 57
(2002).
Evans2011
J. A. Evans, J. G. Foster, Science 331 (2011).
Silver2016
D. Silver, et al., Nature 7587, 484– (2016).
Hassabis2017
D. Hassabis, Nature pp. 413–414 (2017).
Kitano2021
H. Kitano, npj Systems Biology and Applications 2021 7:1 7, 1
(2021).
Kitano1997
H. Kitano, et al., AI Magazine 18, 73 (1997).
Kitano1998-eo
H. Kitano, M. Asada, I. Noda, H. Matsubara, IEEE Robot. Autom. Mag.
5, 30 (1998).
|
http://arxiv.org/abs/2307.04780v2 | 20230710082045 | Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation | [
"Fernando Torales Acosta",
"Vinicius Mikuni",
"Benjamin Nachman",
"Miguel Arratia",
"Bishnu Karki",
"Ryan Milton",
"Piyush Karande",
"Aaron Angerami"
] | cs.LG | [
"cs.LG",
"hep-ex",
"hep-ph",
"nucl-ex",
"physics.ins-det"
] |
[email protected]
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
National Energy Research Scientific Computing Center, Berkeley Lab, Berkeley, CA 94720, USA
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Berkeley Institute for Data Science, University of California, Berkeley, CA 94720, USA
Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA
Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606, USA
Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA
Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA
Computational Engineering Division, Lawrence Livermore National Laboratory, Livermore CA 94550
Nuclear and Chemical Science Division, Lawrence Livermore National Laboratory, Livermore, CA 94550
Score based generative models are a new class of generative models that have been shown to accurately generate high dimensional calorimeter datasets. Recent advances in generative models have used images with 3D voxels to represent and model complex calorimeter showers. Point clouds, however, are likely a more natural representation of calorimeter showers, particularly in calorimeters with high granularity. Point clouds preserve all of the information of the original simulation, more naturally deal with sparse datasets, and can be implemented with more compact models and data files. In this work, two state-of-the-art score based models are trained on the same set of calorimeter simulations and directly compared.
Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation
Aaron Angerami
Received / Accepted
================================================================================
§ INTRODUCTION
Detector simulations are essential tools for data analysis by connecting particle and nuclear physics predictions to measurable quantities. The most precise detector simulations are computationally expensive. This is especially true for calorimeters, which are designed to stop most particles and thus require modeling interactions from the highest accessible energies down to the lowest ones. Well-established experiments typically have bespoke fast simulations that capture the salient aspects of the precise simulations (usually based on Geant <cit.>) at a fraction of the computational cost. Traditionally, fast simulations are constructed to reproduce a series of low-dimensional observables. Furthermore, assembling an effective fast simulation is time intensive. If there was a way to build a fast simulation automatically and using the full detector dimensionality, then data analysis at existing and developing experiments could be greatly enhanced.
Deep learning (DL) has been used to build automated and high-dimensional fast simulations (`surrogate models') for calorimeters. Starting from Generative Adversarial Networks (GANs) <cit.> <cit.> and now including Variational Autoencoders <cit.> <cit.>, Normalizing Flows <cit.> <cit.>, and Diffusion Models <cit.> <cit.>, deep learning based calorimeter simulations have rapidly improved over the last years. They are even starting to be used in actual experimental workflows, such as the ATLAS Collaboration fast simulation <cit.>. The recent CaloChallenge <cit.> community comparison showcased the state-of-the-art methods deployed to increasingly granular current and future detectors. As segmented detectors, calorimeters are naturally represented as (possibly irregular) images. Nearly all proposed methods for DL-based calorimeter simulations are based on an image format (fixed grid of pixels). However, these data are unlike natural images in a number of ways, most notably in their sparsity. As such, image-based approaches pioneered in industry may not be the most effective for particle interactions.
Since most cells in a calorimeter image are empty, a more natural representation of these data may be a point cloud. Point clouds are a set of attributes assigned to locations in space. In the calorimeter case, the attribute is energy and the location is the cell coordinates. A calorimeter point cloud would require far fewer numbers to specify than an image representation, since only cells with non-zero energy would be recorded. The main challenge for point cloud models, in contrast to image-based approaches, is that they must cope with variable-length outputs that respect permutation invariance. With a lag compared to image-based approaches, point cloud generative models for particle/nuclear physics applications have seen a rapid development in recent years <cit.>. However, until recently, these models have never been applied to calorimeter simulations.
The first (and until now, only) publication describing point cloud generative models applied to calorimeters is Ref. <cit.>, which proposed generating Geant `hits' (deposits of energy) prior to their discretization into cells. This innovative idea enables the separation of material interactions from readout geometry. However, the number of hits vastly exceeds the number of non-zero cells, which makes this task difficult. In this paper, we explore point cloud generative models applied directly to cell-level information. In other words, we take calorimeter images and compare state-of-the-art generative models that represent the same inputs as either images or (zero-suppressed) point clouds. As a case study, the two representations are compared using simulations of a high-granularity hadronic calorimeter, similar to the design planned for the ePIC detector at the future Electron-Ion Collider <cit.>.
This paper is organized as follows. Section <ref> describes the DL models used for the comparison. Both the image-based and point-cloud representations are generated with diffusion models in order to make the comparison as direct as possible. The simulation of the calorimeter dataset is found in Sec. <ref>. Discussion of the advantages and disadvantages of both representation, as well as numerical results are presented in Sec. <ref>. The paper ends with conclusions and outlook in Sec. <ref>.
§ DEEP LEARNING MODELS
Generative models for detector simulation aim to precisely emulate physics-based models, like those based on Geant, but using far less time than the full simulation. With 𝒪(100) detector components, neural network architectures solely based on fully connected layers can efficiently produce high fidelity samples, resulting in surrogate models thousands of times faster than the standard simulation routines <cit.>. For higher detector granularity (𝒪(1k) - 𝒪(10k)), the use of data symmetries becomes crucial to achieve precision. These can be directly included in the model design through dedicated neural network architectures or included in the data pre-processing <cit.>. For generative models such as normalizing flows, introducing flexible network architectures is often not trivial as the model invertibility and tractable Jacobian of the transformation places a strong restriction on the model design. A second difficulty is to achieve a stable training routine of the surrogate model. At finer granularities, neural network models tend to become larger to accommodate the data complexity, often resulting in unstable training schedules. This issue becomes more prominent in generative models such as variational autoencoders, where the latent space can vary rapidly, leading to an unstable response of the decoder network, or GANs, where the adversarial training requires careful tuning of the model hyperparameters to achieve a stable training.
Diffusion models are a class of generative neural networks that allow for stable training paired with high flexibility in the model design. Data is slowly perturbed over time using a time parameter t ∈ℝ that determines the perturbation level. The task of the neural network is to approximate the gradients of the log probability of the data, or the score function ∇_xp(x) ∈ℝ^D, based on data observations x∈ℝ^D in the D-dimensional space. This can be approximated by a denoising score-matching strategy <cit.>. In the implementation used in this paper, data observations x∼ p_data(x) are perturbed using the kernel 𝐱_t∼ q(𝐱_t|𝐱)=𝒩(𝐱_t;α_t𝐱,σ_t^2𝐈), with time-dependent parameters α and σ determining the strength of the perturbation to be applied. In the variance-preserving setting of diffusion processes, σ_t^2 = 1 - α_t^2. For the time-dependence, a cosine schedule is used such that α_t = cos(0.5π t).
The loss function to be minimized is implemented using a velocity parameterization:
ℒ_θ = 𝔼_ϵ,t‖𝐯_t - 𝐯̂_t,θ‖^2,
where the time-dependent network output with trainable parameters θ, 𝐯̂_t,θ, is compared with the velocity of the perturbed data at time t, 𝐯_t ≡α_tϵ-σ_t𝐱, with ϵ∼𝒩(0,𝐈). The score function is then identified as
∇_xlogp̂_θ(𝐱_t) = -𝐱_t - (α_t/σ_t) 𝐯̂_t,θ(𝐱_t).
The data generation from the trained diffusion models is implemented using the DDIM sampler proposed in Ref. <cit.> that can be interpreted as an integration rule <cit.> with update rule specified by:
𝐱_s = α_s𝐱̂_θ(𝐱_t) + σ_s (𝐱_t - α_t𝐱̂_θ(𝐱_t))/σ_t.
For a fair comparison, all diffusion models are trained using the same score-matching strategy and fixed number of 512 time steps during sampling.
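A framework-agnostic sketch of this setup is shown below: the cosine schedule, the velocity target entering the loss, and a single DDIM update, with `v_model` standing in for the trained network. It is a simplified illustration of the equations above written in plain NumPy, not the released training code of either model.

```python
import numpy as np

def alpha_sigma(t):
    # Cosine schedule; variance preserving, so sigma^2 = 1 - alpha^2.
    return np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)

def perturb(x, t, eps):
    # q(x_t | x) and the velocity target used in the loss ||v - v_hat||^2.
    alpha, sigma = alpha_sigma(t)
    return alpha * x + sigma * eps, alpha * eps - sigma * x

def ddim_step(x_t, t, s, v_model):
    # One deterministic DDIM update from time t to the earlier time s.
    alpha_t, sigma_t = alpha_sigma(t)
    alpha_s, sigma_s = alpha_sigma(s)
    v_hat = v_model(x_t, t)
    x0_hat = alpha_t * x_t - sigma_t * v_hat          # implied clean sample
    return alpha_s * x0_hat + sigma_s * (x_t - alpha_t * x0_hat) / sigma_t

def sample(shape, v_model, n_steps=512):
    # Start from pure noise at t = 1 and integrate down to t = 0.
    x = np.random.standard_normal(shape)
    ts = np.linspace(1.0, 0.0, n_steps + 1)
    for t, s in zip(ts[:-1], ts[1:]):
        x = ddim_step(x, t, s, v_model)
    return x
```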
The fast point cloud diffusion model (FPCD) follows <cit.>, where a permutation equivariant estimation of the score function is obtained by the combination of a DeepSets <cit.> architecture with attention layers <cit.>. During the point cloud simulation, two models are also defined: one that learns the number of non-empty cells, conditioned on the initial energy of the incoming particle, and one model that learns the score function of the normalized point cloud, also conditioned on the energy of the particle to be simulated and the number of hits to be generated. This model is trained on Dataset 1, described in Sec. <ref>.
The model trained on the image dataset (CaloScore) is adapted from <cit.> with a few modifications. Compared to the original implementation, the calorimeter simulation task is now broken down into two diffusion models: one that learns only the energy deposits in each layer of the calorimeter, conditioned on the initial energy of the particle to be simulated, and one model that learns to generate normalized voxels per layer, conditioned on the energy deposition in each layer and the initial energy of the particle to be simulated. Additionally, the original U-Net <cit.> model is combined with attention layers. These changes increase the model expressiveness and the generation fidelity. This model is trained on Dataset 2, described in Sec. <ref>.
§ DETECTOR AND DATA DESCRIPTIONS
§.§ Calorimeter Simulation
The DD4HEP framework <cit.> is used to run Geant simulations of a high-granularity iron-scintillator calorimeter (based on the CALICE-style design <cit.>), which has dimensions similar to those of the forward hadronic calorimeter in the future ePIC detector (LFHCAL <cit.>) at the EIC. Specifically, the sampling structure comprises 0.3 cm scintillator tiles sandwiched between 2.0 cm thick steel plates. It consists of a total of 55 layers. The transverse area of the scintillator is set to 10 cm×10 cm, somewhat larger than in Ref. <cit.>. It adopts a non-projective geometry with tower elements arranged in parallel to the z axis and has its front face at z=3.8 m.
1.7 million events of single π^+ particles incident on the center of the calorimeter are simulated. The incident momentum, P_Gen., was generated uniformly in log_10 space in the range 1.0 < P_Gen. < 125 GeV/c. In order to hit the center of the calorimeter, the pions were generated with a polar angle of θ_Gen. = 17^∘. Because the detector is symmetric about ϕ, the particles are generated in the range 0^∘ < ϕ_Gen. < 360^∘.
An energy threshold corresponding to 0.3 MeV is used to select hits for further analysis.
§.§ Datasets
Dataset 1 is the point cloud representation of the Geant showers, while Dataset 2 represents the same showers using the image representation. Both Dataset 1 and Dataset 2 used in training share the same parent Geant simulation, such that the fast point cloud diffusion model and the image model are trained on different representations of the same set of calorimeter showers.
Dataset 1 is created by taking the Geant simulation and converting it to a format based on JetNet data <cit.>, that stores information on jets and their constituents in a zero-suppressed point cloud representation. The Geant data is stored in files containing two datasets, clusters and cells. The cluster dataset contains the P_Gen of the incident pion, as well as the number of hits in the calorimeter. The cell data is comprised of a constant number of 200 cells per event. Empty cells, or cells with deposited energy below the threshold are masked, with all values set to 0.0, and ignored during training.
The x, y, and z distributions of the Geant simulation are initially discrete, resulting from the digitization step of the simulation, with values equal to the centers of the cells in each dimension. The point cloud model struggles to learn extremely sharp features, as the score function is not well-defined for discrete inputs. To circumvent this, a uniform smearing within a cell-width is applied to the cells along each dimension to obtain continuous distributions for the final point cloud dataset. This maintains the same distributions at histogram-level when binning according to the cell-width, but yields a point cloud dataset with smooth x, y, and z distributions. Without this smearing, the distributions in x, y, and z resemble a series of delta functions that the point cloud model struggles to learn. The point cloud model is trained on this smeared point cloud representation of the Geant simulation.
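A minimal sketch of this smearing step is shown below; the array shapes, units, and the mask convention are assumptions made for illustration.

```python
import numpy as np

def smear_cells(xyz, cell_width, mask, rng=np.random.default_rng(0)):
    """xyz: (n_events, 200, 3) cell-centre coordinates; mask: 1 for real hits, 0 for padding.
    Adds uniform jitter within one cell width so x, y, z become continuous."""
    jitter = rng.uniform(-0.5, 0.5, size=xyz.shape) * cell_width
    return xyz + jitter * mask[..., None]     # padded (masked) cells stay at 0.0
```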
Dataset 2 is created by converting the point cloud dataset into an image format. Images at the original granularity would be too large for the generative model. The calorimeter cells were therefore clustered into groups of 5 along each axis of the detector to create voxels, where 5×5×5 cells = 1 voxel. The energies of the cells making up each voxel were summed and assigned to that voxel's total energy. The final image format consists of 11×11×11 voxels. A hit in the voxelized dataset, as referenced in Section <ref>, is defined as any voxel with energy deposition above threshold.
For the final comparison, generated samples from the point cloud model are voxelized using the same method as for Dataset 2. All comparisons are in this image format, at the same resolution of 11 × 11 × 11 voxels per image.
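The 5×5×5 grouping of cells into voxels can be sketched as follows, assuming integer cell indices per hit along each axis; this is an illustrative reconstruction, not the code used by the authors.

```python
import numpy as np

def voxelize(ix, iy, iz, e, n_cells=55, group=5):
    """ix, iy, iz: integer numpy arrays of cell indices in [0, n_cells);
    e: hit energies. Returns an (11, 11, 11) image of summed voxel energies."""
    n_vox = n_cells // group
    image = np.zeros((n_vox, n_vox, n_vox))
    np.add.at(image, (ix // group, iy // group, iz // group), e)
    return image
```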
Images representing the full resolution of the calorimeter with 55×55×55 voxels were not used, as this would result in unmanageably large datasets (see Table <ref>), and would represent the largest calorimeter image training ever done. The point cloud model was trained on the full resolution because point clouds naturally represent the calorimeter at full granularity. Training the point cloud model on this more natural representation is in line with the goal of this work to investigate advantages/disadvantages of two representations of the calorimeter data. It is also for this reason that the generated point cloud distributions are shown separately, while the direct comparisons between models are done in the image representation. Investigating possible advantages of a point-cloud model trained directly on the voxelized dataset is left to future work.
§ RESULTS
All generated samples along with Geant are converted to the same image format at the same resolution of 11× 11× 11 voxels per event for a fair comparison. A variety of distributions are used to evaluate the quality of the generated images. After comparing calorimeter images generated by both models, the point cloud representation of Geant is compared to the generated samples of the point-cloud model to provide additional insight into the previous image-based comparison. For all comparisons, the Earth mover's distance (EMD) <cit.>, also known as the 1-Wasserstein distance <cit.>, between generated and Geant distributions is calculated.
The EMD score is a distance-like measure of the dissimilarity between two distributions. It roughly represents the minimum amount of work needed to transform one distribution into another. While this is not the only possible metric, it is a standard and widely-used statistic that was also the main distance deployed in <cit.>, where an image based model was compared to a Wasserstein-GAN. All EMD scores in Figures <ref>, <ref> and <ref> are calculated on the final voxelized distributions.
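As an illustration, the EMD between the total-deposited-energy distributions of Geant and a generative model can be computed as sketched below; the array shapes and the choice of observable are assumptions for the example.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_score(images_geant, images_model):
    """images_*: (n_events, 11, 11, 11) arrays of voxel energies."""
    e_geant = images_geant.reshape(len(images_geant), -1).sum(axis=1)
    e_model = images_model.reshape(len(images_model), -1).sum(axis=1)
    return wasserstein_distance(e_geant, e_model)
```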
Figure <ref> shows a qualitative assessment of the generative models using the 2-dimensional distribution of the average energy deposition in three layers. All voxels with an expected energy deposition above 0 are populated in both the image and point cloud based models, with very few additional hits. The calorimeter showers have diverse shapes, as well as different overall distributions of voxels due to the variation of ϕ_Gen.. The qualitative similarities in each image in Fig. <ref> indicate that the models reproduce the various showers from the training dataset well. Each image contains a ring due to θ_Gen. being fixed while varying ϕ_Gen..
Table <ref> shows the model size, the size of each dataset, and the time to generate 100k calorimeter showers. The disk size and sample time under the point cloud model are for showers in the point cloud representation. The AUC is obtained from a classifier trained to distinguish the samples of each model from Geant, using only the voxelized image format. Both models have very good AUCs, reasonably close to 0.5, with the image model having the lower AUC. The point cloud model is smaller by a factor of 4 compared to the image based model, and samples events 3 times faster. Lastly, the point cloud dataset requires over 100 times less disk space than the image format at full granularity.
Figure <ref> compares the total energy deposited in the calorimeter and total number of calorimeter hits, where a hit is defined as any voxel with energy above threshold. The EMD is also calculated between Geant and the different generative models.
Both the image-based diffusion model and the point-cloud based diffusion model are in good agreement with Geant at small deposited energies, deviating no more than 10%. At the highest deposited energies, however, both diffusion models begin to fall away from Geant, with the point-cloud model generating less energy, and the image based model generating slightly more energy than Geant. These trends begin at about 10 GeV, with the point-cloud model deviating slightly earlier. The point-cloud model also shows a slightly higher EMD score than the image based model.
Events in the region where the deviations are largest, past 20 GeV of deposited energy, are rare, and statistical fluctuations begin to dominate the Geant distributions.
The number of hits shows a similar trend, though with larger deviations. At a small number of hits, both models show good agreement with Geant, with deviations slightly above 10%. At 15 or more hits, both models begin to deviate well past 10%, with the point cloud model oversampling the number of hits, and the image based model generating fewer hits than Geant.
Figures <ref> and <ref> show the average deposited energy along the x, y, and z coordinates. Both models struggle in the first and last layers in the x and y coordinates, but show good agreement in the middle layers. While the image-based model shows larger deviations in the first and last layers of the calorimeter compared to the point-cloud model, it has an overall lower EMD in both distributions. The two-pronged feature of these distributions is a result of generating the pions at a fixed polar angle and varying ϕ. It should be noted that there are few to no hits in the first and last x and y layers of the calorimeter, so even a very small deviation from Geant will result in a large deviation percentage (bottom panels of Fig. <ref> and <ref>). Similarly, as there are fewer hits towards the back of the detector, deviations increase slightly for the very last layers. However, the z-distributions show both models in very good agreement with the original Geant predictions, a possible effect of the z-distribution of hits being less dependent on the generated θ and ϕ ranges.
All three distributions show the point cloud samples are systematically lower than the original Geant distributions. This indicates the point cloud model would benefit from learning the energy per layer directly, as is done in the image model described in Sec. <ref>. This difference likely explains why this small bias is observed in the point cloud model, but not in the image model, and it is an avenue for improving the point cloud model.
Following <cit.>, a classifier was trained to distinguish between generated showers and Geant showers. The classifier is comprised of two fully connected layers of size 256 using the ReLU activation function. The classifier is trained only on vectors of voxelized images of each dataset. The area under the receiver-operator curve (AUC) for the image model was 0.673. The AUC for the point-cloud model was 0.726. Generally, being closer to 0.5, where the classifier is maximally confused, is the target. However, the AUC obtained by both models is very promising, as having an AUC even slightly below 1.0 is non-trivial.
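A sketch of such a classifier is given below; the loss, optimizer, and any training details beyond the two 256-unit ReLU layers are assumptions, since they are not specified in the text.

```python
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

n_voxels = 11 * 11 * 11
classifier = nn.Sequential(
    nn.Linear(n_voxels, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),                      # logit: generated (1) vs Geant (0)
)

def train_step(x, y, opt, loss_fn=nn.BCEWithLogitsLoss()):
    # x: (batch, n_voxels) flattened images; y: (batch,) float labels in {0, 1}.
    opt.zero_grad()
    loss = loss_fn(classifier(x).squeeze(1), y)
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def auc(x, y):
    scores = torch.sigmoid(classifier(x)).squeeze(1).cpu().numpy()
    return roc_auc_score(y.cpu().numpy(), scores)
```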
A key advantage of the point cloud model is that the distributions at the sub-voxel level can be shown. The point cloud model already simulates the data at the original granularity of the calorimeter, and voxelization is only necessary for the image representation. The original output of the point cloud model is compared to the continuous (or smeared) Geant distributions.
Figure <ref> shows the number of hits in the point cloud representation of the calorimeter showers. In the point-cloud representation, a hit is defined as any cell with an energy deposit above threshold.
The point-cloud model reproduces the total number of cell hits well, much better than the voxel hit distribution, shown in Fig. <ref>. This may indicate that while the point cloud model is overall similar to Geant in both representations, small deviations in point cloud distributions can be summed into larger deviations during the voxelization process, where 125 individual cells are combined into a single voxel. However, there is a large symmetry group under which mismodelings in the bigger space may not affect the modeling in the coarser space, so further investigation is needed. Nevertheless, the very good agreement with Geant in the number of cell hits and the degrading agreement in the number of voxel hits indicate that the first diffusion model of the point cloud model architecture is performing well, while the second model, responsible for sampling the cell distributions, would likely benefit from additional tuning.
Similar conclusions can be derived from Fig. <ref>, which shows the generated point samples at the full detector granularity in good agreement with Geant. Fig. <ref> shows the average x, y, and z coordinate distributions, as well as the cell log_10E distribution, in the point representation. Again, there are larger relative deviations in the first and last layers in the x, y, and z coordinates, where there are very few hits, just as in the image representation. However, there is very good agreement with the Geant simulation in layers containing a reasonable number of hits.
§ CONCLUSION AND OUTLOOK
In this paper, we make the first direct comparison between two score based generative models using either images or point clouds as representations of the same training data. We use Geant calorimeter simulations of a high-granularity hadronic calorimeter. Both models perform well for most distributions, with very similar AUCs, but the image-based diffusion model invariably has a lower EMD in each comparison to Geant.
Overall, the performance of the point-cloud diffusion model is very close to the image model. This is despite the point cloud model being disadvantaged in this work in a few important ways.
First, the calorimeter showers from the FPCD model are closest to Geant in the point cloud representation at the full calorimeter granularity, as shown in Fig. <ref> and <ref>, but they are later voxelized for comparison. This may compound mismodeling during the voxelization; however, further investigation is needed.
Second, the point cloud model is adapted from a model architecture initially designed for jet data from the JetNet datasets. While the high-level structure of the datasets is very similar, the data itself are quite different. For example, the first diffusion model making up the point cloud model was initially much larger, as predicting the jet multiplicity is in general a more difficult problem than predicting the number of non-empty cells in a calorimeter shower. Reducing the size of the first diffusion model of the point cloud model architecture had no impact on performance while speeding up training. The second diffusion model making up the point cloud model architecture, responsible for sampling the cell x, y, z, and E, was directly adapted from <cit.>. Further tuning of the point cloud model, particularly of the cell model, can likely close the small remaining gap in performance. The image model, in contrast, is based on CaloScore, which was tuned specifically for calorimeter showers.
Lastly, the image-based model uses the energy deposition in each layer in addition to the generated particle momentum to condition the second diffusion model making up its architecture. The second diffusion model making up the point cloud model is solely conditioned on the generated particle momentum. This might explain why the point cloud model has systematically lower mean energy distributions (see Fig. <ref> and <ref>) compared to both Geant and the image based model.
These potential sources of improvement in the point cloud model should not detract from its already very reasonable performance, deviating from Geant by more than 10% only in the sparsest layers, where the image based model also struggles. At the same time, the point cloud model offers several advantages over the image model.
First, there is the sheer size of the data. The point cloud data saved to HDF5 files is a factor of 100 smaller, using the same zlib compression, than the image based dataset at full granularity, with no voxelization. As calorimeters continue to increase in granularity, this difference will only increase.
Second, information is lost during the voxelization process; cell hits with the same x, y, z coordinates but different energies are summed over in the image representation. This is true even if images are produced at the full granularity of the calorimeter, where hits within single cells are summed over. This means that voxelized datasets cannot naturally be reverted back to a point cloud representation.
Additionally, as was shown in this work, the generated point clouds can be voxelized afterwards, or converted into other representations that better fit specific use cases.
This work establishes a benchmark for future research on generative models, offering valuable insights into the challenges of modeling hadronic showers in highly granular calorimeters using image-based techniques, while also exploring the potential of point-cloud methods. The current advantages of point clouds, in combination with improvements to close the remaining performance gap described earlier, will likely make point cloud based models a clear choice for highly granular calorimeters. This work should serve as a reference for studies utilizing future calorimeters based on the CALICE design, including those intended for use in CMS at the LHC and ePIC at the EIC.
§ CODE AVAILABILITY
The code used to produce the point cloud results shown in this document is available at <https://github.com/ftoralesacosta/GSGM_for_EIC_Calo>. The code for the image based model and comparisons of images is available at <https://github.com/ViniciusMikuni/Calo4EIC>. Example Geant4 datasets and generated samples are available at <https://zenodo.org/record/8128598>.
§ ACKNOWLEDGMENTS
We acknowledge support from DOE grant award number DE-SC0022355. This research used resources from the LLNL institutional Computing Grand Challenge program and the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award HEP-ERCAP0021099. M.A acknowledges support through DOE Contract No. DE-AC05-06OR23177 under which Jefferson Science Associates, LLC operates the Thomas Jefferson National Accelerator Facility. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
|
http://arxiv.org/abs/2307.05910v1 | 20230712043840 | Short-range expansion for the quantum many-body problem | [
"Ronen Weiss",
"Diego Lonardoni",
"Stefano Gandolfi"
] | nucl-th | [
"nucl-th"
] | |
http://arxiv.org/abs/2307.04901v1 | 20230710210646 | Verifying a quasi-classical spin model of perturbed quantum rewinding in a Fermi gas | [
"J. Huang",
"Camen A. Royse",
"I. Arakelyan",
"J. E. Thomas"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas"
] |
^1Department of Physics, North Carolina State University, Raleigh, NC 27695, USA
We systematically test a quasi-classical spin model of a large spin-lattice in energy space, with a tunable, reversible Hamiltonian and effective long-range interactions. The system is simulated by a weakly interacting Fermi gas undergoing perturbed quantum rewinding using radio-frequency (RF) pulses. The model reported here is found to be in quantitative agreement with measurements of the ensemble-averaged energy-resolved spin density. This work elucidates the effects of RF detunings on the system and measurements, pointing the way to new correlation measurement methods.
Verifying a quasi-classical spin model of perturbed quantum rewinding in a Fermi gas
J. Huang, Camen A. Royse, I. Arakelyan, and J. E. Thomas
August 12, 2023
====================================================================================
Measurement of coherence, entanglement, and correlations in time-reversible many-body spin lattices is of great interest, broadly impacting our understanding of quantum measurement and information processing <cit.>. A nearly ideal platform for simulating large spin-lattices is a weakly interacting Fermi gas, containing N≃ 10^5 atoms with a tunable, reversible Hamiltonian. The trapped cloud behaves as a spin lattice in energy-space with effective long-range interactions <cit.> and enables new tests of classical versus quantum spin evolution <cit.>.
Spin waves are observed in weakly interacting, nearly collisionless Fermi gases, which have been explained by several models <cit.>. Previously, a 1D quasi-classical spin evolution model that uses the exact energy-dependent couplings was found to yield agreement with spin-density profiles measured for the evolution of an initially x-polarized spin sample <cit.>. However, it appeared that this model failed to explain perturbed quantum rewinding experiments, where an RF pulse rotates the entire spin system by an angle ϕ_x about the x axis as a perturbation in between forward and backward evolutions. In a quantum picture, the ϕ_x rotation changes the relative phases of the superposed total angular momentum states that describe the system, i.e., |S,M_x⟩→ e^-iM_xϕ_x|S,M_x⟩ for each state, leading to ϕ_x-dependent coherence amplitude between states differing in M_x <cit.>. To fit the data, a scattering length of ≈2.5 times the measured value was needed in the previous work <cit.>, questioning the adequacy of the quasi-classical model.
In this work, we report precise, systematic tests of a modified quasi-classical spin model using single-shot measurements of the spin density profiles from perturbed quantum rewinding experiments. Such experiments are ideal for testing the model, since unperturbed rewinding experiments can be implemented in advance to prove that the system is reversed properly without model-dependent fits. We show the advantages of single-shot data analysis for studies of ensemble-averaged energy-resolved spin density, and quantitatively demonstrate the important roles of different RF detunings during the forward and backward evolution periods. By using two detunings as separate fit parameters, the data is explained by the model using the measured scattering length. The new approach reported here validates the modified quasi-classical treatment of this quantum spin system and suggests detuning-independent measurement methods for future correlation studies, avoiding probabilistic methods in data selection <cit.>.
Our experiments <cit.> employ degenerate clouds of ^6Li containing a total of N=6.5× 10^4 atoms initially in a single spin state. The cloud is confined in a harmonic, cigar-shaped optical trap, with oscillation frequencies ω_x/2π=24.4 Hz in the axial direction and ω_r/2π=650 Hz in the radial direction. The corresponding Fermi temperature is T_F=0.73 μK, with T/T_F=0.31. RF pulses prepare coherent superpositions of the two lowest hyperfine-Zeeman states, which are denoted by |1⟩≡|↑_z⟩ and |2⟩≡|↓_z⟩. The experiments are done in the weakly interacting regime, where the energy-state-changing collision rate is negligible over the time scale of the measurements <cit.>.
As the single particle energies are fixed and the energy distribution is time independent <cit.>, we approximate the cigar-shaped weakly interacting Fermi gas as a one-dimensional (1D) spin “lattice" in energy space <cit.>, with a Hamiltonian
H(a)/ħ=a∑_i,j≠ ig_ij s⃗_i·s⃗_j+∑_iΩ'E_i s_zi+Δ(t)S_z.
We associate a “site” i with the energy E_i=(n_i+1/2) hν_x of the i^th harmonic oscillator state along the cigar axis x. For each E_i, we define a dimensionless collective spin vector s⃗(E_i)≡s⃗_i.
The first term in Eq. <ref> is the site-to-site interaction, proportional to the s-wave scattering length a and to the overlap of the harmonic oscillator probability densities for colliding atoms. In a WKB approximation, g_ij∝ 1/√(|E_i-E_j|), which is an effective long-range interaction in the energy-space lattice <cit.>. For a zero temperature Fermi gas, the average interaction energy (in rad/s) is ag̅=6.8 n_0ħ a/m, where n_0 is the peak density. For our experimental parameters, with a=5.2 a_0, ag̅/2π≃ 2.0 Hz.
The second term in Eq. <ref> is an effective site-dependent Zeeman energy, arising from the quadratic spatial variation of the bias magnetic field along x, which produces a spin-dependent harmonic potential. As ω_r/ω_x=26.6, the corresponding effect on the radial motion is negligible, enabling a 1D approximation, where all atoms at site i have the same Zeeman energy. In Eq. <ref>, Ω'=-δω_x/(ħω_x), with δω_x/2π=14.9 mHz for our trap <cit.>. For the mean energy E̅_x≃ k_B T_F/4, Ω' E̅_x/2π≃ 2.0 Hz.
The last term in Eq. <ref> arises from the time-dependent global detuning Δ(t), which plays a central role in the analysis of the rewinding data. Here, S_z=∑_i s_zi. For a typical evolution time of 200 ms, Δ(t)≃0.4 Hz. Fluctuations in the bias magnetic field and magnetic tuning of the scattering length cause Δ(t) to change at 5 kHz/G for |1⟩-|2⟩ superposition states.
To implement perturbed quantum rewinding, we employ the pulse sequence shown in Fig. <ref> <cit.>. The system is initially prepared in a pure z-polarized spin state, |ψ_0z⟩. The first (π/2)_y pulse (0.5 ms), defined to be about the y-axis, creates an x-polarized state, |ψ_0x⟩. Here, the y- and x-axes are defined in the rotating frame of the RF pulses (RF-frame). Then, the system is allowed to evolve forward for a time τ_f. A voltage-controlled change of the RF phase by π/2 permits rotation about the x-axis by an angle ϕ_x. Applying a (π)_y pulse (1 ms) and magnetically tuning the scattering length from a→ -a (10 ms) inverts the sign of the Hamiltonian in Eq. <ref>, causing the system to evolve backward for a time τ_b <cit.>. As described below, we perform experiments both with and without the final (π/2)_y pulse, after which the spatial profiles of the |↑_z⟩ and |↓_z⟩ states are measured by two resonant absorption imaging pulses, separated by 10 μs, to obtain the single-shot spin density S_z(x)=[n_↑_z(x)-n_↓_z(x)]/2. For each shot, S_z(x) is normalized to the total central density n(0)=n_↑_z(0)+n_↓_z(0) to minimize errors arising from shot-to-shot variation in the atom number and cloud width. All spatial profiles are folded about x=0 and displayed for 0≤ x≤σ_TF.
The reversibility of the system is tested (result shown in Fig. <ref>) using the pulse sequence of Fig. <ref> with
ϕ_x=0 and without the final (π/2)_y pulse. This sequence measures the component of the collective spin vector s⃗_i that was along the z-axis just prior to imaging. The longitudinal (z) component is insensitive to the detuning Δ(t) that causes a rotation of s⃗_i about the z-axis relative to the RF-frame, enabling a robust test. In the data analysis, since S_z=0 for ϕ_x=0, global spin balance is enforced to minimize the error from small shot-to-shot changes in the detuning of the RF pulses, which arises from magnetic field fluctuations.
In these experiments, it is essential to carefully calibrate the bias magnetic field B_0 at which the s-wave scattering length vanishes. This is best done by quantifying the reversal quality at different magnetic fields, which is model-independent and, in contrast to the method adopted in Ref. <cit.>, less sensitive to the initial conditions. B_0 is found by minimizing the sum of the mean square differences between the forward and backward spin density profiles at corresponding times <cit.>. Unperturbed rewinding experiments done at scattering lengths of ± 5.2 a_0 and ± 8.0 a_0 suggest that B_0=527.150(5) G, which is 30 mG lower than the B_0 of Ref. <cit.>.
Fig. <ref> shows rewinding data (6-shot average) at corresponding forward (red) and backward (blue) evolution times for a=8.0 a_0 and -8.0 a_0, respectively. With the calibrated B_0, the corresponding forward and backward spin density profiles are in good agreement for reversal at 280 ms (top row), while reversal at 400 ms (bottom row) leads to larger differences between corresponding forward and backward profiles.
Having established that the system is reversible for scattering lengths up to ± 8.0 a_0 and τ_f=τ_b≤ 280 ms, data are mainly obtained with τ≡τ_f=τ_b =200 ms at ± 5.2 a_0 using the full pulse sequence of Fig. <ref>. This provides stringent tests of quasi-classical collective spin vector models. Here, the final (π/2)_y pulse is included to measure the transverse spin components that were along the x-axis in the RF frame in Fig. <ref> just prior to imaging. For ϕ_x=0 and a detuning Δ(t) that is constant over the total sequence, the system is expected to rewind to the initial state, where the density profiles for both spins are Thomas-Fermi. For ϕ_x≠ 0, however, the rewinding is perturbed, producing complex spin density profiles after the full sequence. Fig. <ref> shows single-shot spin density profiles for ϕ_x=π/2,π,3π/2. We obtain the corresponding energy-space profiles, s_zi≡ s_z(E), by inverse Abel-transformation <cit.> of the spatial profiles, which is valid in a WKB approximation when energy space coherence is negligible and a quasi-continuum approximation is valid, as in our experiments <cit.>.
To understand the perturbed rewinding data of Fig. <ref>, we include a time-dependent global detuning, Δ(t), in the Hamiltonian of Eq. <ref>. The detuning determines the relative angle between the RF-frame and the Bloch frame φ_fb in Fig. <ref>. Here, the RF frame is defined by x_RF and y_RF axes that rotate about the z-axis at the instantaneous RF frequency, ω_RF(t), tracking the total phase of the RF field. We define the rotation axes for all of the RF pulses in Fig. <ref> to be in the RF frame, i.e., x≡ x_RF and y≡ y_RF. The Bloch frame is defined by x_B and y_B axes that rotate at the instantaneous hyperfine resonance frequency ω_HF(t) for an atom of axial energy E=0.
The detuning, Δ(t)=ω_HF(t)-ω_RF(t), causes the components of spin vectors in the Bloch frame to rotate relative to the RF-frame by generally different angles φ_f=∫_τ_f dt Δ(t) and φ_b=∫_τ_bdt Δ(t), during the forward and backward evolution times, respectively, even for τ_b=τ_f. For measurements of spin components in the RF frame, the final state of the cloud can be written as |ψ_f⟩=e^-iπ/2 S_y|ψ_f1⟩, where
|ψ_f1⟩ is the state just prior to the final (π/2)_y pulse. Taking τ_f=τ_b=τ, we find <cit.>
|ψ_f1⟩=e^-iπ S_ye^i(φ_b-φ_f)S_z W_ϕ(φ_f,τ)|ψ_0x⟩,
where |ψ_0x⟩=e^-iπ/2 S_y|ψ_0z⟩ is the fully x-polarized state and
W_ϕ(φ_f,τ)=e^i/ħH_0(a)τe^-i ϕ_x S_x(φ_f)e^-i/ħH_0(a)τ.
Here, S_x(φ_f)=S_xcosφ_f-S_ysinφ_f with S_x and S_y the x- and y-components of the total spin vector in the RF frame. H_0(a) is defined by Eq. <ref> for Δ(t)=0.
For each shot, the operator s_zi is measured for an ensemble of atoms in a selected energy group E_i∈ [E,E+Δ E]. The energy resolution Δ E of the inverse Abel-transform method is small enough that all of the atoms in the energy group evolve identically over the time scale of the pulse sequence. A single-shot measurement of the spin density profile then yields the ensemble-averages, ⟨ψ_f|s_zi|ψ_f⟩≡⟨ψ_0x|s̃'_xi|ψ_0x⟩. Here,
s̃'_xi= cosφ_fb s̃_xi-sinφ_fb s̃_yi,
with s̃'_xi being the x-component of the spin vector operator relative to the RF frame just before the final (π/2)_y pulse as shown in Fig. <ref>. s̃'_xi is given in terms of the components in the Bloch frame, s̃_xi≡ W^†_ϕ(φ_f,τ)s_xiW_ϕ(φ_f,τ) and similarly for s̃_yi. For each measurement, we see that the difference between the forward and backward phase shifts, φ_f-φ_b≡φ_fb, determines the relative contribution of the s̃_xi and s̃_yi spin components in the Bloch frame to the measured projection in the RF frame, s̃'_xi. In addition, Eq. <ref> shows that the forward phase shift φ_f determines the effective rotation axis for the ϕ_x pulse.
To predict the measured ⟨ψ_f|s_zi|ψ_f⟩, we employ a mean-field approximation to obtain a quasi-classical model <cit.>, where the Heisenberg equations are solved numerically by treating the collective spin vectors as classical variables, which ignores quantum correlations between the spin vectors for different energy groups. The Heisenberg equations of motion for the collective spin vectors take a simple form in energy space, ṡ⃗̇_i(t)=ω⃗_i(t)×s⃗_i(t), with
ω⃗_i(t)=a∑_j≠ ig_ij s⃗_j (t)+Ω'E_i ê_z+Δ(t)ê_z.
For a given choice of the forward and backward detunings, i.e., the phases φ_f and φ_b, s_zi is determined by numerical integration. An Abel transform of s_zi≡ s_z(E) then yields the corresponding spin density s_z(x) <cit.>.
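As an illustration, the quasi-classical evolution can be integrated numerically along the following lines. This is a schematic sketch, not the analysis code used for the figures: the number of energy groups, the coupling normalization, and the constant detuning are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only; not the experimental calibration.
n_sites = 40                               # number of energy groups ("sites")
nu_x = 24.4                                # axial trap frequency (Hz)
E = (np.arange(n_sites) + 0.5) * nu_x      # site energies E_i (in Hz)
dE = np.abs(E[:, None] - E[None, :])
g = np.zeros((n_sites, n_sites))           # WKB couplings g_ij ~ 1/sqrt(|E_i - E_j|)
g[dE > 0] = 1.0 / np.sqrt(dE[dE > 0])
g *= 2 * np.pi * 2.0 / g.sum(axis=1).mean()      # crude scaling to a ~2 Hz interaction
Omega_E = 2 * np.pi * 2.0 * E / E.mean()         # site-dependent Zeeman term, ~2 Hz at the mean energy
Delta = 2 * np.pi * 0.4                          # global detuning (rad/s), taken constant here

def rhs(t, y, sign):
    # ds_i/dt = omega_i x s_i, with omega_i from the mean-field Hamiltonian.
    s = y.reshape(n_sites, 3)
    omega = sign * (g @ s)                 # interaction term (sign = +1 forward, -1 reversed)
    omega[:, 2] += sign * Omega_E          # spin-dependent harmonic (Zeeman) term
    omega[:, 2] += Delta                   # the detuning is not reversed by the (pi)_y pulse
    return np.cross(omega, s).ravel()

s0 = np.tile([1.0, 0.0, 0.0], n_sites)     # x-polarized initial state
tau = 0.2                                  # evolution time (s)
fwd = solve_ivp(rhs, (0.0, tau), s0, args=(+1,), max_step=1e-3)
s_z_of_E = fwd.y[:, -1].reshape(n_sites, 3)[:, 2]   # s_z(E_i) after forward evolution

Continuing the integration from the forward end point with sign=-1 mimics the rewinding step, up to the unreversed detuning term.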
Experimentally, 60 shots are taken for each set of parameters. Examples of single-shot data are shown in Fig. <ref> and in the supplement <cit.>. Due to the complexity of the spatial profiles for ϕ_x≠0, single-shot data analysis is essential for this experiment. Small variations (≤5%) in cloud parameters result in shifted spatial profiles, even for fixed φ_f and φ_b, so averaging over shots with slightly different initial conditions can wash out the fine structure. Fig. <ref> compares two quasi-classical models with the single-shot data (blue dots). The first model, adopted from Ref. <cit.>, assumes φ_f≡φ_b mod 2π, and the fits (black-dashed curves) to the data in Fig. <ref> (a,e) and (b,f) for τ=200 ms and a=5.2 a_0 require a fitted scattering length of a_fit=9.0 a_0, in disagreement with the measured value. These results confirm the large discrepancy between the data and the quasi-classical model ignoring φ_fb that was observed in our previous study of information scrambling <cit.>. For the second, modified model, the forward and backward evolution phases φ_f and φ_b are treated as two free parameters. In this case, the model (red curves) is in good agreement with data taken for ϕ_x=π/2, π and 3π/2 with τ=200 ms at both 5.2 a_0 and 8.0 a_0 and for τ=400 ms at 5.2 a_0. Additional data with ϕ_x in steps of π/4 were obtained to test the model further and demonstrate equally good agreement <cit.>. Section IV B of the supplement explains the sources of minor defects observed in the data series for 8.0 a_0, τ=200 ms and 5.2 a_0, τ=400 ms.
It is observed that, for the small scattering length, a=5.2 a_0, and short forward evolution time τ=200 ms, the data can be fitted using φ_f and φ_b as two free parameters (red curves) or by using φ_f=φ_b as one parameter and the scattering length a as another free parameter (black-dashed curves). However, for the long forward evolution time of 400 ms or for the large scattering length of 8.0 a_0, the data cannot be fitted for any scattering length with the assumption of φ_b=φ_f. In contrast, the modified model reported in this work, which includes forward and backward evolution phases as separate parameters, fits the data very well using the measured scattering length.
The modified model explicitly shows the difficulty of multi-shot averaged measurements of transverse spin components, such as s_x, where the averages of cosφ_fb and sinφ_fb in Eq. <ref> tend to vanish. Previously, the imperfect phase control problem was partially circumvented by using a maximum likelihood estimation <cit.>. However, Eq. <ref>, which is valid for both quasi-classical and full quantum treatments, suggests that multi-shot averaged measurements of energy-space spin operator products, such as ⟨ψ_f|s_zis_zj|ψ_f⟩ =⟨ψ_0x|s̃'_xis̃'_xj|ψ_0x⟩, are important, since the random-phase averages of cos^2φ_fb and sin^2φ_fb are nonzero. This method enables improved out-of-time-order correlation measurements in quantum gases, where the W operator is unchanged and the operator V=s_xi is replaced with V=s_xis_xj, since the initial x-polarized state is an eigenstate of both operators <cit.>.
In summary, this work verifies that a modified quasi-classical spin vector model of weakly interacting Fermi gases explains perturbed quantum rewinding experiments, using measurements of single-shot spin density profiles with sufficient resolution to enable quantitative study. The modified model reported here elucidates the effects of uncontrolled forward and backward evolution phases, φ_f and φ_b, on the system and measurements, resolving an outstanding conflict with a previous treatment <cit.>. Our results suggest new correlation analysis methods based on energy-resolved operator products, which yield signals that are independent of the uncontrolled RF detuning without assuming phase distributions <cit.>. Applying such methods to measure the time dependence of correlations between transverse components ⟨ψ_0x|s̃_⊥ i·s̃_⊥ j|ψ_0x⟩ allows the study of entanglement development in a large system <cit.> and investigations of many-body dynamics and information propagation<cit.>. Such experiments will be a topic of future work.
Primary support for this research is provided by the Air Force Office of Scientific Research (FA9550-16-1-0378). Additional support is provided by the National Science Foundation (PHY-2006234).
^*Corresponding authors: [email protected]
[email protected]
References
[1] R. J. Lewis-Swan, A. Safavi-Naini, J. J. Bollinger, et al., Unifying scrambling, thermalization and entanglement through measurement of fidelity out-of-time-order correlators in the Dicke model, Nature Communications 10, 1581 (2019).
[2] J. Eisert, M. Friesdorf, and C. Gogolin, Quantum many-body systems out of equilibrium, Nature Physics 11, 124–130 (2015).
[3] A. M. Kaufman et al., Quantum thermalization through entanglement in an isolated many-body system, Science 353, 794–800 (2016).
[4] X. Du, L. Luo, B. Clancy, and J. E. Thomas, Observation of anomalous spin segregation in a trapped Fermi gas, Phys. Rev. Lett. 101, 150401 (2008).
[5] X. Du, Y. Zhang, J. Petricka, and J. E. Thomas, Controlling spin current in a trapped Fermi gas, Phys. Rev. Lett. 103, 010401 (2009).
[6] U. Ebling, A. Eckardt, and M. Lewenstein, Spin segregation via dynamically induced long-range interactions in a system of ultracold fermions, Phys. Rev. A 84, 063607 (2011).
[7] S. Pegahan, J. Kangara, I. Arakelyan, and J. E. Thomas, Spin-energy correlation in degenerate weakly interacting Fermi gases, Phys. Rev. A 99, 063620 (2019).
[8] F. Piéchon, J. N. Fuchs, and F. Laloë, Cumulative identical spin rotation effects in collisionless trapped atomic gases, Phys. Rev. Lett. 102, 215301 (2009).
[9] S. S. Natu and E. J. Mueller, Anomalous spin segregation in a weakly interacting two-component Fermi gas, Phys. Rev. A 79, 051601 (2009).
[10] C. Deutsch, F. Ramirez-Martinez, C. Lacroûte, F. Reinhard, T. Schneider, J. N. Fuchs, F. Piéchon, F. Laloë, J. Reichel, and P. Rosenbusch, Spin self-rephasing and very long coherence times in a trapped atomic ensemble, Phys. Rev. Lett. 105, 020401 (2010).
[11] S. Smale, P. He, B. A. Olsen, K. G. Jackson, H. Sharum, S. Trotzky, J. Marino, A. M. Rey, and J. H. Thywissen, Observation of a transition between dynamical phases in a quantum degenerate Fermi gas, Science Advances 5, eaax1568 (2019).
[12] A. P. Koller, M. L. Wall, J. Mundinger, and A. M. Rey, Dynamics of interacting fermions in spin-dependent potentials, Phys. Rev. Lett. 117, 195302 (2016).
[13] M. Gärttner, J. G. Bohnet, A. Safavi-Naini, M. L. Wall, J. J. Bollinger, and A. M. Rey, Measuring out-of-time-order correlations and multiple quantum spectra in a trapped-ion quantum magnet, Nature Physics 13, 781 (2017).
[14] J. Eisert, M. Friesdorf, and C. Gogolin, Quantum many-body systems out of equilibrium, Nature Physics 11, 124–130 (2015).
[15] D. Schubert, J. Richter, F. Jin, K. Michielsen, H. De Raedt, and R. Steinigeweg, Quantum versus classical dynamics in spin models: Chains, ladders, and square lattices, Phys. Rev. B 104, 054415 (2021).
[16] A. Das, M. Kulkarni, H. Spohn, and A. Dhar, Kardar-Parisi-Zhang scaling for an integrable lattice Landau-Lifshitz spin chain, Phys. Rev. E 100, 042116 (2019).
[17] M. Lakshmanan, Th. W. Ruijgrok, and C. J. Thompson, On the dynamics of a continuum spin system, Physica 84A, 577–590 (1976).
[18] P. Ball, Evolution of spins looks surprisingly classical, Physics World 34(10), 6ii (2021).
[19] M. Gärttner, P. Hauke, and A. M. Rey, Relating out-of-time-order correlations to entanglement via multiple-quantum coherences, Phys. Rev. Lett. 120, 040402 (2018).
[20] S. Pegahan, I. Arakelyan, and J. E. Thomas, Energy-resolved information scrambling in energy-space lattices, Phys. Rev. Lett. 126, 070601 (2021).
[21] See the Supplemental Material for a description of the experimental details and of the quasi-classical spin model.
[22] G. Pretzler, A new method for numerical Abel-inversion, Zeitschrift für Naturforschung A 46, 639–641 (1991).
[23] P. Jurcevic, B. Lanyon, P. Hauke, et al., Quasiparticle engineering and entanglement propagation in a quantum many-body system, Nature 511, 202–205 (2014).
[24] P. Hauke and L. Tagliacozzo, Spread of correlations in long-range interacting quantum systems, Phys. Rev. Lett. 111, 207202 (2013).
§ SUPPLEMENTAL MATERIAL
This supplemental material presents the experimental and theoretical details of the measurements and modeling of quantum rewinding in a weakly interacting Fermi gas. A new method is introduced for calibration of the magnetic field where the scattering length vanishes. With this precise measurement, systematic experimental defects in the perturbed quantum rewinding experiments are minimized. The critical role of time-dependent detuning is explained by deriving a new collective spin vector model that properly includes it. Finally, additional single-shot data from the perturbed quantum rewinding experiments are presented, illustrating excellent agreement with the quasi-classical model reported in this work.
§.§ Experimental Methods
For the experiments presented in this work, the sample comprises a mixture of the two lowest hyperfine states of ^6Li, denoted |1⟩≡|↑_z⟩ and |2⟩≡|↓_z⟩, which is evaporatively cooled to degeneracy near the broad 1-2 Feshbach resonance at a bias magnetic field of 832.2 G. After the sample is prepared, state |1⟩ is eliminated by an imaging pulse applied in the weakly interacting regime near 1200 G to create a z-polarized sample. The bias magnetic field is then tuned close to the zero crossing at 527.150 G, where the scattering length a nearly vanishes. At this field, a resonant radio-frequency (RF) (π/2)_y pulse, i.e., a rotation about the y-axis in the RF frame, coherently excites the spin state |2⟩, creating a 50-50 superposition of states |1⟩ and |2⟩. In addition to the bias magnetic field, a control magnetic field is applied along the bias field axis by a pair of auxiliary magnet coils, which are wrapped around the primary bias magnet containers located on the top and bottom of the experimental vacuum chamber. The auxiliary coils enable magnetic field control of the scattering length in the zero-crossing region.
With the coherent superposition state created, the trapped cloud is x-polarized and evolves at the chosen scattering length a for a selected evolution time t_fk, after which the spatial profiles of both hyperfine components are measured. Spin wave formation leads to spin segregation, where the maxima in the |1⟩ and |2⟩ densities are spatially separated <cit.>. This initial evolution is defined as “forward" evolution. To implement "backward" evolution, a π_y pulse is applied after a forward evolution time τ_f, interchanging the two spin states, and the auxiliary magnetic field is swept down over 10 ms to flip the scattering length from a to -a. Ideally, from this point, the sign of the Hamiltonian is inverted, which is equivalent to letting the system evolve backward in time. After a backward evolution time τ_b=τ_f, the system is expected to evolve back to its unsegregated x-polarized state.
§.§.§ Quantum Rewinding Measurements
The system status is observed for a number of different forward evolution times t_fk and backward evolution times t_bk, by imaging the density of both spins, using two resonant optical pulses, separated in time by 10 μs. Spin density spatial profiles are extracted by subtracting normalized spatial profiles for the two states and dividing by 2:
S_z(x) ≡ 1/2 [n_1(x)-n_2(x)] / [n_1(0)+n_2(0)] = 1/2 Δn(x)/n_tot(0).
Spin segregation is quantified by measuring the central spin density S_z(x=0) = 1/2 (Δ n(0)/n_tot(0)) at the center of the cloud. The larger the absolute value of the central spin density, the more segregated the system: for the unsegregated system, i.e., immediately after the coherent excitation RF (π/2)_y pulse, the central spin density is ideally zero.
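For concreteness, the data reduction described above amounts to the following sketch, where n1 and n2 are the measured axial density profiles of the two states on a common position grid; the array names and the identification of the cloud center with the total-density peak are illustrative assumptions.

import numpy as np

def spin_density(n1, n2):
    # S_z(x) = [n1(x) - n2(x)] / [2 * n_tot(0)], normalized to the central total density.
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    n_tot = n1 + n2
    return 0.5 * (n1 - n2) / n_tot[np.argmax(n_tot)]

def central_spin_density(n1, n2):
    # Segregation measure: the spin density at the center (density peak) of the cloud.
    sz = spin_density(n1, n2)
    return sz[np.argmax(np.asarray(n1) + np.asarray(n2))]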
Perturbed quantum rewinding experiments are done by adding a perturbing ϕ_x pulse before the π_y pulse. This pulse is generated by connecting a voltage-controlled phase shifter in series with the output of the RF generator, so that a ϕ_y pulse can be either unshifted in phase or phase shifted by 90^∘ to obtain a ϕ_x pulse. Immediately after the perturbing ϕ_x pulse, the sign of the Hamiltonian is reversed, as described in the previous paragraph, and the system evolves backward for a time period of τ_b=τ_f. Just before the final (π/2)_y pulse, the magnetic field is swept with the auxiliary coils to give the original scattering length a, and the final (π/2)_y pulse is applied to observe the transverse components of the spin vectors.
§.§.§ Detuning
Ideally, during the experimental cycle, for zero detuning, the Bloch frame overlaps with the RF frame, which means that the ϕ_x, π_y and the last (π/2)_y rotations in the RF frame are also done about the x-axis or y-axis in the Bloch frame. However, there are uncontrolled time-dependent global detunings Δ(t), producing relative angles between the RF and Bloch frames during the evolution periods τ_b and τ_f. Experimentally, all rotations are done in the RF frame, which is defined by the RF generator. At a forward evolution magnetic field B_f, which is near resonance, Δ≃ 0. However, as the magnetic field is swept down to B_b during an experimental cycle, the detuning changes by up to several kHz. This results in a large phase difference and corresponding angle between the two frames, which is imperfectly controlled due to field fluctuations. As described in detail in <ref>, the phase shift accumulated during the forward evolution period, φ_f, controls the effective rotation axis of the perturbation relative to the Bloch vector. In addition, the difference between φ_f and the phase accumulated during the backward evolution period, φ_b, determines the measured spin components.
For Hamiltonian reversal experiments, when only the z-component of the spin vectors is measured, there is no sensitivity to the detuning, because the detuning is equivalent to a rotation about the z-axis. In contrast, for perturbed quantum rewinding experiments, where the transverse components of the spin vectors are measured, understanding the roles of φ_f and φ_b is critical for correct data analysis and comparison with predictions.
Although the detuning is not controlled, the experimental results of this work show that complex single-shot data are surprisingly well fitted by the model described below, where the different accumulated phase shifts for the two evolution periods, φ_f and φ_b, are properly included.
§.§ Collective Spin Vector Evolution Model
To understand how the time-dependent global detuning Δ(t) affects the perturbed quantum rewinding measurements, we derive the final state of the system after the pulse sequence of Fig. 2 of the main text, which is reproduced here for convenience, Fig. <ref>.
Prior to the pulse sequence, the optically trapped atoms are initially prepared in a z-polarized state,
|ψ_0z⟩=Π_i |↑_zi⟩.
The system runs forward for a time τ_f and backward for τ_b, after which the spatial profiles of the |↑_z⟩ and |↓_z⟩ states are measured. Note that the pulse durations for the y- and x- axis rotations are <<τ_f, τ_b. In between the forward and backward evolutions, a perturbing pulse is applied, which rotates the system about the x-axis by an angle ϕ_x.
In Fig. <ref>, the x and y axes are defined in the “RF frame," where the x-axis of the RF frame is defined to rotate about the z-axis at the instantaneous frequency of the RF generator. The x-axis, therefore, tracks the total phase of the RF field. The detuning Δ(t) is defined as the difference between the instantaneous hyperfine resonance frequency for an atom at rest and the instantaneous radiofrequency. When the hyperfine frequency is larger than the radiofrequency, spin vectors will appear to rotate counterclockwise as seen looking down the z-axis from above, i.e., through a positive angle, relative to the RF frame. In the experiments, changes in the applied bias magnetic field, as used to reverse the scattering length, and uncontrolled magnetic field fluctuations, tune the hyperfine frequency at a rate of ≃ 5 kHz/G. The detuning causes the components of spin vectors in the Bloch frame to rotate relative to the RF-frame by generally different angles, during the forward and backward evolution times, respectively, even for τ_b=τ_f=τ as used in the experiments. We define the forward and backward phase shifts,
φ_f=∫_τ_f dt Δ(t) and φ_b=∫_τ_b dt Δ(t).
To find the final state including the global detuning, we write the Hamiltonian of Eq. 1 of the main text in the general form
H(a)/ħ= H_0(a)/ħ+Δ(t)S_z,
where S_z is the z-component of the dimensionless total spin vector.
Here, the time-independent part of the Hamiltonian, for Δ=0, is defined by
H_0(a)/ħ=a∑_i,j≠ ig_ij s⃗_i·s⃗_j+∑_iΩ'E_i s_zi
and
[H_0(a),S_z]=0.
Referring to Fig. <ref>, for measurements of spin components in the RF frame, we see that the final state of the cloud for τ_f=τ_b=τ is
|ψ_f⟩=e^-iπ/2 S_ye^-i/ħ H_0(-a)τ-iφ_b S_ze^-iπ S_ye^-i ϕ_x S_xe^-i/ħ H_0(a)τ-iφ_f S_ze^-iπ/2 S_y|ψ_0z⟩.
Eq. <ref> is readily simplified using
e^-i/ħ H_0(-a)τ-iφ_b S_ze^-iπ S_y=e^-iπ S_y[e^iπ S_ye^-i/ħ H_0(-a)τ-iφ_b S_ze^-iπ S_y].
Using Eq. <ref> and noting that the (π)_y rotation inverts S_z, we see that
e^iπ S_y[H_0(-a)/ħτ+φ_b S_z]e^-iπ S_y=-H_0(a)/ħτ-φ_b S_z.
With Eq. <ref>, we obtain
|ψ_f⟩=e^-i 3π/2 S_ye^+iφ_b S_ze^+i/ħ H_0(a)τ e^-i ϕ_x S_xe^-iφ_f S_ze^-i/ħ H_0(a)τe^-iπ/2 S_y|ψ_0z⟩.
Now,
e^-i ϕ_x S_xe^-iφ_f S_z=e^-iφ_f S_z[e^+iφ_f S_ze^-i ϕ_x S_xe^-iφ_f S_z].
It is easy to show that
S_x(φ_f)≡ e^+iφ_f S_z S_x e^-iφ_f S_z=S_xcosφ_f-S_ysinφ_f,
which follows from S_x”(φ_f)=-S_x(φ_f) and the initial conditions S_x(0)=S_x and S_x'(0)=-S_y. Then
e^-i ϕ_x S_xe^-iφ_f S_z= e^-iφ_f S_z e^-iϕ_x S_x(φ_f).
Again using Eq. <ref>, we then obtain
|ψ_f⟩=e^-i 3π/2 S_ye^+i(φ_b-φ_f) S_ze^+i/ħ H_0(a)τ e^-i ϕ_x S_x(φ_f)e^-i/ħ H_0(a)τe^-iπ/2 S_y|ψ_0z⟩.
Defining the operator
W_ϕ(φ_f,τ)=e^i/ħ H_0(a)τe^-i ϕ_x S_x(φ_f)e^-i/ħH_0(a)τ,
and the x-polarized state just after the first (π/2)_y rotation,
|ψ_0x⟩=e^-iπ/2 S_y|ψ_0z⟩
we obtain the final state in the simple form,
|ψ_f⟩=e^-i 3π/2 S_ye^+i(φ_b-φ_f)S_z W_ϕ(φ_f,τ)|ψ_0x⟩.
As explained in the main text, in a single shot, we can measure the operator s_zi for the ensemble of atoms in the i^th energy group. Noting that e^+i 3π/2 S_ys_zie^-i 3π/2 S_y=+s_xi, we find
⟨ψ_f|s_zi|ψ_f⟩=⟨ψ_0x|W^†_ϕ(φ_f,τ)e^-i(φ_b-φ_f)S_zs_xi e^i(φ_b-φ_f)S_zW_ϕ(φ_f,τ)|ψ_0x⟩.
By using Eq. <ref> with φ_f→φ_fb≡φ_f-φ_b, S_x→ s_xi and S_y→ s_yi, we see that
e^i(φ_f-φ_b)S_zs_xi e^-i(φ_f-φ_b)S_z=s_xicosφ_fb-s_yisinφ_fb.
Then,
⟨ψ_f|s_zi|ψ_f⟩=⟨ψ_0x|W^†_ϕ(φ_f,τ)s_xiW_ϕ(φ_f,τ)|ψ_0x⟩cosφ_fb-
⟨ψ_0x|W^†_ϕ(φ_f,τ)s_yiW_ϕ(φ_f,τ)|ψ_0x⟩sinφ_fb
Eq. <ref> shows that a single-shot measurement of the spin density profile then yields, via inverse-Abel transformation <cit.>, the ensemble-averages,
⟨ψ_f|s_zi|ψ_f⟩≡⟨ψ_0x|s̃'_xi|ψ_0x⟩,
where
s̃'_xi= cosφ_fb s̃_xi-sinφ_fb s̃_yi.
Here,
s̃_xi≡ W^†_ϕ(φ_f,τ)s_xiW_ϕ(φ_f,τ)
and similarly for s̃_yi, reproducing the results given in the main text.
We see that for each measurement, the difference between the forward and backward phase shifts, φ_f-φ_b≡φ_fb, determines the relative contribution of the s̃_xi and s̃_yi spin components in the Bloch frame to the measured projection in the RF frame, s̃'_xi. In addition, Eq. <ref> shows that the forward phase shift φ_f determines the effective rotation axis for the ϕ_x pulse.
To compare the prediction of Eq. <ref> to single-shot measurements in a system containing a large number of spins, we employ a quasi-classical model. In this case, we treat the Heisenberg equations for the spin-vectors s⃗_i as evolution equations for classical vectors, which neglects quantum correlations. These equations are readily evaluated for any chosen φ_f and φ_b by numerical integration, enabling fits to single-shot data with φ_f and φ_b as fit parameters.
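Schematically, fitting one Abel-inverted shot with φ_f and φ_b as free parameters can be organized as below. Both names are placeholders: sz_data stands for the measured single-shot profile s_z(E), and simulate_sz_of_E for a numerical integration of the quasi-classical equations followed by evaluation of the measured projection for given phases.

import numpy as np
from scipy.optimize import least_squares

def fit_phases(sz_data, simulate_sz_of_E, phi0=(0.0, 0.0)):
    # sz_data: Abel-inverted single-shot profile s_z(E) on a fixed energy grid.
    # simulate_sz_of_E: callable (phi_f, phi_b) -> model s_z(E) on the same grid.
    def residuals(phi):
        return simulate_sz_of_E(phi[0], phi[1]) - sz_data

    fit = least_squares(residuals, x0=np.asarray(phi0, dtype=float))
    phi_f, phi_b = np.mod(fit.x, 2 * np.pi)
    return phi_f, phi_b, fit.cost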
§.§ Quantifying Hamiltonian Reversal
This section introduces the method of quantifying the quality of rewinding by Hamiltonian reversal.
Two sets of data are required to examine the result of the Hamiltonian reversal in the experiments. One set of data represents the state of the system at different forward evolution times, and the other set represents the state at corresponding backward evolution times. The first set is taken at a magnetic field B_f for a number of times 0≤ t_fk≤τ_f, where t=0 is the time of the initial coherent excitation pulse and τ_f is the maximum forward evolution time. The second set is taken at a backward evolution magnetic field B_b at corresponding times t_bk=τ_f+τ_bk, as discussed below.
The z-component of the spin density for the forward evolving system, S^f_z(x,t_k), is measured at different evolution times t_fk≡ t_k by imaging the density of both spins at a time t_fk relative to the time t≡ 0 of the initial
(π/2)_y RF pulse. To enable a determination of the reversal quality, the same quantity, S^b_z(x,t_k), is measured for the Hamiltonian reversed system at t_bk≡τ_f+τ_bk, where t_bk is the total time relative to the coherent excitation pulse. Here, τ_bk is the amount of time that the system evolves backward with the reversed Hamiltonian. To match the spin density spatial profiles for the forward evolution with the corresponding backward evolution ones, the evolution times need to be matched, i.e., τ_bk=τ_f-t_fk=τ_f-t_k and t_bk= 2τ_f-t_k. χ^2_k is defined as the normalized mean square difference between forward and backward evolution spin density profiles for t_k,
χ^2_k ≡ ∑_x [ (S^b_z(x,t_k)-S^f_z(x,t_k)) / S^f_z(x,t_k) ]^2.
To quantify the reversal, we employ χ^2≡⟨χ^2_k⟩, the average χ_k^2 for all of the t_k in the data set. Small χ^2 means that the forward and backward S_z(x) profiles overlap very well, which corresponds to a good reversal.
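A direct transcription of this definition is sketched below, assuming paired forward and backward profiles on a common spatial grid; the small floor on the denominator is an implementation choice to avoid division by near-zero spin densities and is not part of the definition above.

import numpy as np

def chi2_k(sz_fwd, sz_bwd, floor=1e-3):
    # Normalized mean-square difference between forward and backward profiles at one t_k.
    denom = np.where(np.abs(sz_fwd) > floor, sz_fwd, floor)
    return np.sum(((sz_bwd - sz_fwd) / denom) ** 2)

def chi2(profile_pairs):
    # Average chi^2_k over all evolution times t_k in the data set.
    return np.mean([chi2_k(f, b) for f, b in profile_pairs])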
§.§.§ Zero-crossing Measurement
A critical constant for the implementation of any experiments involving quantum rewinding is the zero-crossing magnetic field B_0 where the scattering length vanishes. Careful calibration of this field ensures that the Hamiltonian reversal in the experimental sequence is done properly. In this work, B_0 is precisely measured by quantifying Hamiltonian reversal for two different forward evolution magnetic fields B_f = 528.803 G and 529.713 G, where the scattering lengths are ≈5 a_0 and ≈8 a_0 respectively. Data is taken for five slightly different backward evolution magnetic fields B_b near the zero crossing for each B_f.
The magnitudes of all of the magnetic fields are measured precisely using RF spectroscopy, by applying a π pulse (15 ms) with a known RF frequency. The resonance frequencies of the RF pulse for the atomic transition are in one-to-one correspondence with the magnetic fields. With this property, the magnetic field can be calculated with mG precision from the resonance RF frequency for a π pulse that fully transfers atoms from |2⟩ to |1⟩.
By fitting a parabola to χ^2, as defined above, for the five different B_b, the optimum reversal magnetic field B_b,opt is obtained for the corresponding B_f. The zero-crossing magnetic field is located at the midpoint between B_f and B_b,opt. Fig. <ref> displays the results of this measurement. Because this measurement is insensitive to the detuning, as described in the main paper, averaging is allowed; each data point is the result of averaging 5 shots. The top two panels (a) and (b) show the data series taken for a=± 5.2 a_0 with Hamiltonian reversal at τ_f=400 ms, and the bottom two, (c) and (d), show the results for a=± 8.0 a_0 with Hamiltonian reversal at τ_f=240 ms. The parabolic fit of χ^2 for the a=± 5.2 a_0 series gives B_b,opt=525.488 G for B_f=528.803 G, and for ±8.0 a_0 gives B_b,opt=524.596 G for B_f=529.713 G. The two series of experiments yield the result B_0=527.150(5) G, which is 30 mG lower than the previous result <cit.>. Note that the scattering lengths "5.2 a_0" and "8.0 a_0" are calculated based on the zero-crossing magnetic field measured in the above experiment.
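The calibration step can be sketched as follows; the χ^2 values in the example are made up and only illustrate the parabolic-fit logic.

import numpy as np

def optimum_reversal_field(B_b, chi2_vals):
    # Vertex of a parabolic fit to chi^2 versus the backward-evolution field.
    c2, c1, _ = np.polyfit(B_b, chi2_vals, 2)
    return -c1 / (2.0 * c2)

def zero_crossing(B_f, B_b, chi2_vals):
    # B_0 sits midway between the forward field and the optimum reversal field.
    return 0.5 * (B_f + optimum_reversal_field(B_b, chi2_vals))

# Example with made-up chi^2 values for B_f = 528.803 G:
B_b = [525.46, 525.47, 525.48, 525.49, 525.50]
chi2_vals = [2.1, 1.4, 1.1, 1.3, 1.9]
print(zero_crossing(528.803, B_b, chi2_vals))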
§.§.§ Reversibility in Different Regimes
This technique for quantifying the reversal quality can also be used to compare the reversibility of systems in different regimes of scattering length and forward evolution time. This is achieved by weighting the disagreement between forward and backward spatial profiles differently for different t_k: by construction, the disagreement is weighted more heavily for small t_k, where the system has evolved backward long enough for discrepancies between backward and forward evolution to appear. For small t_k, S^f_z(x, t_k) is usually small, i.e., close to a horizontal line near the x-axis, because the system has not segregated very much for short forward evolution times. Hence, the denominator in Eq. <ref> amplifies the magnitude of χ^2_k for small t_k.
For the perturbed quantum rewinding experiments, the evolution times are always chosen to be τ_b=τ_f, which means that the reversal quality at t_k=0 ms is extremely critical. Hence, χ^2_k at t_k=0 ms needs to be weighted highly in the test of reversibility, especially for the purpose of choosing regimes to do perturbed quantum rewinding experiments.
Note that the χ^2 in Fig. <ref>(a) is one order of magnitude larger than it is in (c), which means that the system reverses more accurately with a=8.0 a_0 and τ_f = 240 ms than with a=5.2 a_0 and τ_f=400 ms. This matches the comparisons of the central spin density evolutions shown in Fig. <ref>(b) and (d): for the a=5.2 a_0 and τ_f=400 ms data series, there is a clear sign of segregation (Δ n(0)/n_tot(0)>0) for Hamiltonian-reversed data at t_k=0 ms, even when the optimum reversal magnetic field is adopted. In contrast, for a=8.0 a_0 and τ_f=240 ms, an almost perfect reversal is observed at t_k=0 ms.
With this method of quantifying quantum rewinding, a systematic study of the reversibility of a system in different regimes can be done by fixing τ_f and varying the scattering length and by fixing the scattering length and varying τ_f. This study is ongoing and requires a large amount of data, and so will not be pursued further here.
§.§ Testing the Quasi-Classical Spin Model
To test the quasi-classical spin model, as reported in this work, three series of perturbed quantum rewinding experiments are performed at: 5.2 a_0 with τ=200 ms, 8.0 a_0 with τ=200 ms, and 5.2 a_0 with τ=400 ms. The S_z(x) profiles are measured as described in <ref>. All spatial profiles shown in this work are folded over the center of the cloud, x=0, followed by equal-width binning into 50 bins. To extract energy-space information, S_z(E), Abel inversion is applied to the spatial profiles with 16 expansion terms <cit.>. As energy-space profiles are less sensitive to experimental defects shown in Fig. <ref>, the model is fitted to the S_z(E) extracted from single-shot data with φ_f and φ_b as free parameters. Data from the three experimental series are in good agreement with the model.
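A generic sketch of the cited Pretzler-style inversion is given below: the underlying profile is expanded in cosine basis functions, each basis function is forward Abel-projected numerically, and the expansion coefficients are obtained by linear least squares against the measured line-integrated profile. This is an illustration of the method, not the analysis code used for the data; the grids, the number of radial integration points, and the cloud radius R are assumptions.

import numpy as np

def abel_project(f_of_r, y, R, n_r=400):
    # Forward Abel transform H(y) = 2 * integral_y^R f(r) r dr / sqrt(r^2 - y^2).
    y = np.asarray(y, dtype=float)
    H = np.zeros_like(y)
    for k, yk in enumerate(y):
        r = np.linspace(yk, R, n_r)[1:]   # drop r = y to sidestep the integrable singularity
        if r.size and r[-1] > yk:
            H[k] = 2.0 * np.trapz(f_of_r(r) * r / np.sqrt(r**2 - yk**2), r)
    return H

def abel_invert(y, measured_profile, R, n_terms=16):
    # Expand the underlying profile in cosine basis functions and fit their projections.
    basis = [lambda r, n=n: 1.0 - (-1.0)**n * np.cos(n * np.pi * np.asarray(r) / R)
             for n in range(1, n_terms + 1)]
    design = np.column_stack([abel_project(fn, y, R) for fn in basis])
    coeffs, *_ = np.linalg.lstsq(design, measured_profile, rcond=None)
    return lambda r: sum(c * fn(r) for c, fn in zip(coeffs, basis))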
§.§.§ Primary Data for a=5.2 a_0 and τ=200 ms
Perturbed quantum rewinding measurements are employed to precisely test the quasi-classical spin model presented in <ref>. Data is mainly taken at 5.2 a_0 with τ=200 ms, as this set of parameters is expected to provide an almost perfect Hamiltonian reversal. Nine different ϕ_x values are used as a perturbation to the reversal, ranging from 0 to 2π in steps of π/4. Fig. <ref> shows examples of single-shot data taken for different ϕ_x. Note that in these figures φ_f and φ_b are not necessarily the same across the whole data set because of fluctuations in RF detuning, as discussed in <ref>. The systematic measurements and fits done for the perturbed quantum rewinding experiments show that the complicated structure observed in the spatial profiles is very sensitive to the initial conditions (cloud size σ_TF and atom number N), as well as to the RF detunings for the two evolution periods τ_f and τ_b. Even small variations in these parameters result in slightly shifted or skewed spatial profiles. Hence, for the purpose of quantitative tests of the quasi-classical spin model using perturbed quantum rewinding experiments, as presented in this work, single-shot analysis is essential, since averaging over shots with imperfectly controlled experimental parameters washes out the fine structure in the spatial profiles. The measured single-shot profiles presented in this work have adequate spatial resolution to capture small details in the profiles. Having minimized experimental defects by careful calibrations, the measured single-shot data provide stringent tests of predictions based on the model of <ref>.
§.§.§ Additional Data Sets
In this section, we present additional data for increased scattering length and evolution time.
The validity of the modified quasi-classical spin model of <ref> is demonstrated for the primary data set with a=5.2 a_0 and τ=200 ms in <ref>. To test the model further, a series of additional perturbed quantum rewinding experiments are done with τ=200 ms and 8.0 a_0 and with τ=400 ms and a=5.2 a_0 for three ϕ_x values: π/2, π and 3π/2. Less quantitative agreement between the model and data is observed in many of the single shots from these additional experiments, especially in the spatial profiles.
The disagreement appears from differences between the spin imbalances in the model and in the data. In Fig. <ref>, the basic model (red curves) assumes that all of the applied RF pulses are on resonance, while drifts in the detuning alter the pulse area. It is clear that the basic model still captures the shape and oscillations of the data (blue dots), but with an offset. In the experiments with longer experimental cycles or larger magnetic field sweeps, there is a larger probability that one or more of the RF pulses are slightly off resonance with the hyperfine frequency, since the RF frequency is fixed for all pulses, but the magnetic field can fluctuate because of the limitation on the stability of the auxiliary coils. Imperfect RF pulses result in a measurement with the wrong spin imbalance for the given ϕ_x, compared to ideal measurements. To include the effect of imbalance in the model, the atom numbers for the two spin states are adjusted from N_1 and N_2 to N_1-δ N_tot and N_2+δ N_tot, with N_tot = N_1+N_2 being total atom number and δ a reasonably small (≤ 10%) fraction. In our system, the total density n_tot(x) = n_1(x)+n_2(x) is invariant during the experimental cycle. Hence, to include the adjustment of spin imbalance in the modeled spatial profile, the outcomes n_1(x) and n_2(x) are first determined from the model and then scaled to n_1(x)-δ n_tot(x) and n_2(x)+δ n_tot(x). In this way, the total atom number and density profile remain the same in the model output before and after the adjustment. With this adjustment to the outcome of the regular model, the improved fits shown as the green curves in Fig. <ref> are obtained.
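The imbalance adjustment described above amounts to a one-line rescaling of the model output, with δ a small fitted fraction (≤10%); the function and argument names below are illustrative.

import numpy as np

def adjust_imbalance(n1_model, n2_model, delta):
    # Rescale model populations by a fraction delta of the (invariant) total density.
    n_tot = np.asarray(n1_model) + np.asarray(n2_model)
    return n1_model - delta * n_tot, n2_model + delta * n_tot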
Another reason for the disagreement between the model and the data set with τ=400 ms at a=5.2 a_0 is the imperfect Hamiltonian reversal shown in Section <ref>. The unperturbed quantum rewinding experiment done with this set of experimental parameters suggests that the system is not precisely reversed in this regime. Therefore, it is reasonable that the perturbed rewinding data can not be predicted by the model as quantitatively as for the data obtained in the regime where the reversibility of the system is clearly better.
|