Slow Liquid Scintillator Candidates for MeV-scale Neutrino Experiments

Ziyi Guo [THU,LAB][cor1], Minfang Yeh [BNL], Rui Zhang [NJU], De-Wen Cao [NJU], Ming Qi [NJU], Zhe Wang [THU,LAB][cor2], Shaomin Chen [THU,LAB]
[cor1] Corresponding author: [email protected]
[cor2] Corresponding author: [email protected]
[THU] Department of Engineering Physics, Tsinghua University, Beijing 100084, China
[LAB] Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, China
[BNL] Brookhaven National Laboratory, Upton, New York 11973, USA
[NJU] School of Physics, Nanjing University, Nanjing 210093, China

Slow liquid scintillator Cherenkov detectors have been proposed as part of several future neutrino experiments because they can provide both directionality and energy measurements. This feature is expected to enhance the sensitivities for MeV-scale neutrino physics, including solar physics, the search for supernova relic neutrinos, and geo-science studies. In this study, the characteristics of a slow liquid scintillator were investigated, along with the light yields and decay time constants for various combinations of linear alkylbenzene (LAB), 2,5-diphenyloxazole (PPO), and 1,4-bis(2-methylstyryl)benzene (bis-MSB). The results of our study indicate that LAB with 0.07 g/L of PPO and 13 mg/L of bis-MSB is the best candidate for an effective separation between Cherenkov and scintillation light with a reasonably high light yield.

Keywords: Cherenkov; scintillation; slow liquid scintillator; neutrino detection

§ INTRODUCTION

The China Jinping Underground Laboratory (CJPL) is located in Sichuan Province, China. With an overburden of about 2400 m <cit.> and located approximately 1000 km from the closest nuclear power plant, CJPL is an ideal site for low-background, MeV-scale neutrino experiments. The Jinping Neutrino Experiment was proposed at CJPL <cit.> with primary goals focusing on solar neutrinos, geoneutrinos, and supernova relic neutrinos (also referred to as the diffuse supernova neutrino background).

In MeV-scale low-energy neutrino experiments, the directional information of charged particles can be reconstructed from Cherenkov light. A number of studies <cit.> have indicated that directional information (for particle identification) and an accurate measurement of the energy of charged particles may provide extra discriminating power for background suppression in MeV-scale neutrino experiments. For example, the solar-angle cut on the direction of charged particles is a powerful selection criterion for solar neutrino events. Another study <cit.> demonstrated that the atmospheric neutrino background from neutral- and charged-current interactions, which is one of the major backgrounds in the search for supernova relic neutrino events, can be effectively suppressed if electrons and muons are distinguished by particle identification from neutrons and protons, which produce no Cherenkov light. In addition, it is possible to further perform particle identification based on the ratio of the Cherenkov light yield to the scintillation light yield. Both features are also useful for neutrinoless double beta decay experiments <cit.>, neutrino CP phase measurements <cit.>, proton decay searches <cit.>, and the study of geoneutrinos <cit.>.

The detection scheme with a slow scintillator is now under consideration in the Jinping Neutrino Experiment <cit.> and THEIA <cit.>. Although the concept of diluted scintillators was pioneered as part of the LSND experiment <cit.>, its low light yield makes it unsuitable for dedicated low-energy neutrino experiments.
It is important to note that discriminating the individual solar neutrino flux components from the recoiled electron energy spectrum requires a reasonable energy resolution, at least 4.5% at a 1 MeV energy deposit (i.e., 500 photoelectrons/MeV) <cit.>. This requirement exceeds the photoelectron yield limit of water or heavy-water Cherenkov detectors. While liquid scintillator detectors can meet this requirement, those adopted in present neutrino experiments only provide energy information, because the extremely small amount of Cherenkov light emitted by charged particles is completely overwhelmed by the intense scintillation light. A new type of slow liquid scintillator (water- or oil-based) has much larger time constants, which provides the opportunity to separate the Cherenkov light from the scintillation light. The concept of water-based liquid scintillators (WbLS) was proposed as early as <cit.>, and recent efforts toward achieving this can be found in <cit.>.

Linear alkylbenzene (LAB) was revisited in <cit.>. A quadruple coincidence system was used to select vertical cosmic-ray muons. It was observed that LAB has a large decay time constant of 35 ns and is therefore classified as a slow liquid scintillator. This feature can be used to separate Cherenkov and scintillation light by analyzing the time profile of the analog output of a photomultiplier tube (PMT), given that the prompt time region is dominated by the former while later times are dominated by the latter. The slower the fluorescence, the better the separation ability.

Since the light yield of LAB is much lower than that of the typical liquid scintillators widely used in neutrino experiments, especially in low-energy solar neutrino experiments, the CHESS experiment <cit.> made good progress by adding 2,5-diphenyloxazole (PPO) and applying fast photon detectors <cit.> to enhance the light yield while maintaining the scintillation-Cherenkov separation ability. However, for large neutrino detectors (light below 400 nm is quickly absorbed in LAB) or neutrino detectors using acrylic material (whose transmittance is cut off at about 300 nm), the light propagation loss cannot be ignored. We still need to shift the emission spectrum from short wavelengths to the longer range (>400 nm) to reduce the light propagation loss. This can be achieved by adjusting the concentrations of PPO and 1,4-bis(2-methylstyryl)benzene (bis-MSB) <cit.>. On the other hand, adding too much PPO and bis-MSB will weaken the scintillation-Cherenkov separation ability, because the Cherenkov photons will be absorbed and the time constants will become smaller. This is especially critical when using a more economical PMT detection approach (the timing precision of a PMT is on the nanosecond scale and mass production is possible). The balance between time profile and light yield is vital to both the Cherenkov separation ability and a high energy resolution, and thus requires further study.

In this study, we investigated the effect of adding both PPO and bis-MSB to LAB. We first scanned the light yields and time profiles with cosmic-ray muons for various combinations of LAB with PPO and bis-MSB, and used an energy transfer model to describe the inverse relationship between them (Section <ref>). We then measured the scintillation emission spectra (Section <ref>) and the transmission in acrylic (Section <ref>), which is a typical material for scintillator containers.
We further measured the attenuation length (Section <ref>) for a typical sample with a long-arm apparatus. We evaluate the performance of the candidate samples for neutrino detection in Section <ref>, and finally, we summarize the findings of our study in Section <ref>.

§ STUDY OF LIGHT YIELD AND SCINTILLATION TIME

§.§ Apparatus

The detector setup is shown in Fig. <ref>, where four plastic scintillators were positioned vertically. The four coincident signals were used to trigger on a muon traveling from top to bottom. Two additional plastic scintillators were placed next to the bottom coincidence scintillator to serve as a veto counter, excluding events with muon shower activity, which can distort the characteristic time profile. Approximately 15.4 L of a liquid scintillator sample was placed in an acrylic container (36.4 cm in height), and the container was placed between the second and third coincidence scintillators. PPO and bis-MSB were weighed by an electronic balance (1 mg minimum division) and dissolved in a 500 mL beaker filled with LAB. The concentrated solution was poured into the acrylic container, and the mixture was thoroughly stirred. The inner surface of the acrylic container was lined with a layer of black coarse acrylic to suppress reflections. The top and bottom PMTs, symmetrically aligned with the acrylic container, were then immersed in the liquid scintillator.

The light signals from the six plastic scintillators and the liquid scintillator were collected by eight PMTs. The top and bottom PMTs used to acquire the signals of the liquid scintillator were Hamamatsu model R1828-01, with a 46 mm diameter effective photocathode area. The quantum efficiency is more than 10% from 300 to 530 nm, and the rise time of the anode pulse is 1.3 ns. Other parameters of the PMTs can be found elsewhere <cit.>. Once a trigger signal was issued, a 10-bit, 1 GHz flash analog-to-digital converter (FADC, model CAEN V1751) opened a 4096-ns window and read out the voltage waveforms of all eight PMTs.

§.§ Event selection

To eliminate electronic noise and multi-track events, several characteristic variables, namely the peak, width, and charge (waveform area), were studied for each waveform.

* Electronic noise was primarily caused by the power supply of the PMTs. The waveform of this noise is much narrower than that produced by a photon. Cuts on both the peak-to-charge ratio and the peak-to-width ratio were applied to significantly suppress this background (see the sketch after this list).

* Muon showers may occur when an energetic muon undergoes spallation on a nucleus. This background was first rejected by the two anti-coincidence scintillators. A cut on the charges of the four coincidence scintillators was also applied to reject events whose charge deviated significantly from the average.
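These selection variables can be made concrete with a short sketch (the width definition and cut thresholds below are hypothetical choices for illustration; the paper does not specify them):

```python
import numpy as np

def waveform_features(w, dt=1.0):
    """Peak, width (time above half maximum), and charge (area) of a PMT waveform."""
    peak = w.max()
    charge = w.sum() * dt
    width = np.count_nonzero(w > 0.5 * peak) * dt
    return peak, width, charge

def is_electronic_noise(w, max_peak_to_charge=0.5, max_peak_to_width=0.3):
    # Noise pulses are much narrower than photon pulses, so large
    # peak-to-charge or peak-to-width ratios flag them (thresholds hypothetical).
    peak, width, charge = waveform_features(w)
    return peak / charge > max_peak_to_charge or peak / width > max_peak_to_width
```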
For the purpose of comparison, we selected 2,000 candidates for the top PMT and 2,000 for the bottom PMT. The event rate was about 1.7/min. The average waveforms of the selected candidates were then calculated, as shown in Figures <ref> and <ref>, respectively. The relative differences in gain and acceptance between the two PMTs were corrected in these figures using the gain calibration and a Monte-Carlo simulation. Since vertical muons only come from the top, the top PMT can only detect the isotropic scintillation light rather than the forward Cherenkov light emitted by these muons, while the bottom PMT can detect both.

We investigated the time profiles from the two PMTs and observed a clear enhancement in the first 20 ns of the bottom-PMT amplitude with respect to the top one. This is due to the contribution of the prompt Cherenkov component, and the peak height depends strongly on the concentrations of PPO and bis-MSB. A previous study <cit.> indicated that the absorption of light is intense for wavelengths below 400 nm, and the number of Cherenkov photons is reduced as a consequence.

§.§ Time profile measurement

We constructed a function for the time profile with the PMT time response convoluted, taking both the Cherenkov and scintillation light contributions into account:

f_b(t) = [A_c·δ(t-t_c) + A_s·n(t-t_s)] ⊗ gaus(σ_b),

where A_c is the amplitude of the Cherenkov light, t_c is the mean arrival time of the Cherenkov light, and δ(t) represents the time profile of the prompt Cherenkov emission, which is a delta function since the emission is instantaneous compared with the nanosecond timing precision of the PMT. A_s and t_s are the amplitude and start time of the scintillation light, respectively, n(t) is the time profile of the scintillation emission, and gaus(σ_b) is the PMT time response function. In a binary or ternary scintillator system, emissions may feature a finite rise time or be slightly lengthened in duration due to the finite time of intermolecular energy transfer <cit.>. In organic solution scintillators, emissions present a finite rise time τ_r and a decay time τ_d, so that the normalized pulse shape of the scintillation light can be written as

n(t) = (τ_r+τ_d)/τ_d^2 · (1-e^(-t/τ_r)) · e^(-t/τ_d).

In contrast to the waveform of the bottom PMT, the waveform of the top PMT includes only the scintillation light contribution, which is expressed as

f_t(t) = A_s·n(t-t_s) ⊗ gaus(σ_t),

where A_s and t_s are the same as in Eq. (<ref>), n(t) is the time profile of Eq. (<ref>), and the time resolution σ_t is also taken into account. Both time constants τ_r and τ_d can be determined from Eqs. (<ref>) and (<ref>). For example, from a LAB sample with 0.07 g/L PPO and 13 mg/L bis-MSB we determined τ_r=(1.16±0.12) ns and τ_d=(26.76±0.19) ns. The fitting results for the top and bottom PMT waveforms are shown in Figs. <ref> and <ref>, respectively.
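The convolution model above can be evaluated numerically along the following lines (a minimal sketch on a uniform time grid; the binning and normalization choices are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def n_pulse(t, tau_r, tau_d):
    """Normalized scintillation pulse shape with rise time tau_r and decay time tau_d."""
    shape = (tau_r + tau_d) / tau_d**2 * (1.0 - np.exp(-t / tau_r)) * np.exp(-t / tau_d)
    return np.where(t >= 0.0, shape, 0.0)

def f_bottom(t, A_c, t_c, A_s, t_s, tau_r, tau_d, sigma):
    """Bottom-PMT model: prompt Cherenkov spike plus scintillation, smeared by gaus(sigma)."""
    dt = t[1] - t[0]                            # uniform grid spacing, e.g. 1 ns FADC bins
    y = A_s * n_pulse(t - t_s, tau_r, tau_d)
    y[np.argmin(np.abs(t - t_c))] += A_c / dt   # delta function as a one-bin spike of area A_c
    return gaussian_filter1d(y, sigma / dt)     # Gaussian PMT time response
```

The top-PMT model f_t(t) is the same expression with A_c = 0; both can be fitted to the averaged waveforms with, e.g., scipy.optimize.curve_fit to extract τ_r, τ_d, A_c, and A_s.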
§.§ Light yield measurement

The number of scintillation photoelectrons (PE) D_s detected by the bottom PMT can be expressed as

D_s = A_s/A_e,

where A_e is the single-PE charge obtained from the PMT gain calibration and A_s is the fitted amplitude from Eqs. (<ref>) and (<ref>). The total number of scintillation photons N_s is obtained by dividing D_s by the detection efficiency ε_s,

N_s = D_s/ε_s = A_s/(ε_s·A_e).

The detection efficiency was estimated with a Geant4 <cit.>-based Monte-Carlo simulation. The geometry shown in Fig. <ref> was implemented in the simulation, and muons were sampled according to the Gaisser formula <cit.>. The simulation considered the emission spectra of the different samples, the quantum efficiency spectrum of the PMT, and the wavelength dependence of the attenuation length. Standard electromagnetic and muon-nucleus processes were both included. More details of the simulation can be found in <cit.>. The dominant uncertainty was found to be associated with the PMT quantum efficiency and was estimated to be 10%.

The scintillation light yield Y can be calculated as

Y = N_s/E_vis = A_s/(ε_s·A_e·E_vis),

where E_vis is the total visible energy, estimated to be (69.1±1.9) MeV from the simulation. It is important to note that this light yield includes the contribution of the hard-UV portion of the Cherenkov light, because of the absorption and re-emission of photons. Since these photons lose the directional information of the original Cherenkov photons, they are treated as part of the effective scintillation yield. The quenching effect for muons is described by Birks' constant, which is 0.015 cm/MeV for low-energy electrons <cit.>. The difference between the results with and without Birks' constant gives an uncertainty of 2.8%.

The distance from the light production point along the muon track in the scintillator to each PMT photocathode is several tens of centimeters, which is much shorter than the attenuation length at wavelengths longer than 400 nm. In the shorter-wavelength region, however, the attenuation cannot be ignored. The attenuation length spectrum can be represented as the combination of all solution components,

1/L = ∑_i n_i/(n_0i·L_i),

where L is the total attenuation length, n_i is the concentration of the i-th component, and L_i is the attenuation length measured at concentration n_0i. <cit.> and <cit.> give the attenuation length spectra of pure LAB, LAB+3 g/L PPO, and LAB+3 g/L PPO+15 mg/L bis-MSB; the inherent attenuation length spectra of PPO and bis-MSB can then be extracted from those of the compounds according to Eq. (<ref>):

1/L_PPO = 1/L_LAB+PPO - 1/L_LAB,
1/L_bis-MSB = 1/L_LAB+PPO+bis-MSB - 1/L_LAB - 1/L_PPO.

For the LAB sample with 0.07 g/L PPO and 13 mg/L bis-MSB, the numbers of measured PEs at the top and bottom PMTs are shown in Table <ref>; the uncertainties are all fitting errors. The number of detected Cherenkov PEs was 5.47±0.22. The light yield for the sample was estimated to be (4.01±0.60)×10^3 photons/MeV.

§.§ Scanning of light yield and scintillation time

We varied the concentration of PPO for different bis-MSB concentrations, measuring the scintillation light yield, rise time constant, decay time constant, and Cherenkov photoelectron yield for each sample. The results and the scintillation photon detection efficiencies are shown in Table <ref>. These quantities affect the performance of the separation between scintillation and Cherenkov light. The decay time constants and scintillation light yields of all test samples are plotted in Fig. <ref> and show an inverse relationship. The effect of the wavelength shifter bis-MSB on the decay time constants and scintillation light yields is relatively insignificant at low concentrations, while increasing the PPO concentration results in higher light yields and smaller time constants.

To understand this inverse relationship between the scintillation light yield and the decay time constant, the mechanism of light emission in the scintillator was examined. As shown in Fig. <ref>, incident charged particles deposit their energy in the liquid scintillator, some of which can be transferred between molecules.
The light yield Y from the energy transfer was modeled in <cit.>, as expressed by Eq. (<ref>), in which the amount of energy transfer is parameterized by the PPO concentration A:

Y = D · 1/(1+λ_sq·A/λ_e) · 1/(1+λ_i/(λ_a·A)),

where D is the number of excited solvent molecules, λ_sq is the self-quenching factor, λ_e is the rate of photon emission after self-quenching, λ_i is the internal loss factor of the solvent molecules, and λ_a is the rate of energy transfer from the solvent (donor) molecules to the solute (acceptor) PPO molecules. The self-quenching effect is due to the interaction between unexcited and excited PPO molecules, in which the excitation energy is lost by collision. Since self-quenching can be neglected at low PPO concentrations (<10 g/L), the light yield simplifies to

Y = D·A/(A+λ_i/λ_a).

The decay time constant τ can be described as the sum of the solute intrinsic lifetime τ_s and the energy migration transfer (or hopping) time <cit.>,

τ = τ_s + A_0/(k_h·A),

where k_h is the effective energy migration transfer rate at a given concentration A_0. The number of energy migration transfer processes caused by solvent-solvent collisions is inversely proportional to the PPO concentration A. Combining Eqs. (<ref>) and (<ref>), we obtain the relationship between the light yield and the decay time constant,

τ = τ_s - A_0λ_a/(k_hλ_i) + (A_0λ_aD/(k_hλ_i))·(1/Y) ≡ τ_0 + C/Y,

where τ_0 ≡ τ_s - A_0λ_a/(k_hλ_i) and C ≡ A_0λ_aD/(k_hλ_i). This clearly indicates an inverse relationship between the decay time constant τ and the scintillation light yield Y. We observed that Eq. (<ref>) is consistent with our measurements, as indicated in Fig. <ref>.
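The two-parameter relationship of Eq. (<ref>) is straightforward to fit (a sketch; the scan points below are hypothetical stand-ins for the Table <ref> entries):

```python
import numpy as np
from scipy.optimize import curve_fit

def tau_of_Y(Y, tau0, C):
    # Eq. (<ref>): decay time constant versus scintillation light yield.
    return tau0 + C / Y

# Hypothetical (light yield, decay time) pairs from a PPO concentration scan.
Y = np.array([2.0e3, 4.0e3, 6.0e3, 8.0e3])    # photons/MeV
tau = np.array([33.0, 27.0, 24.5, 23.0])      # ns
(tau0, C), cov = curve_fit(tau_of_Y, Y, tau)
```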
As shown in Table <ref>, when the PPO concentration increases beyond 0.1 g/L, the decay time constant becomes so small that separating scintillation light from Cherenkov light by pulse shape discrimination becomes difficult. Meanwhile, the addition of bis-MSB reduces the number of Cherenkov photons, even though it shifts the wavelength into the detectable region. Samples of LAB with 0.07∼0.1 g/L PPO and 0∼13 mg/L bis-MSB could be good slow liquid scintillator candidates and should be investigated more thoroughly.

§ EMISSION SPECTRUM

The emission spectra of the candidate samples were measured using an RTI fluorescence spectrometer (made by Ocean Optics) excited at 260 nm. The relevant spectra are shown in Fig. <ref>. LAB emits light at 280∼300 nm. However, in a bulk solution, both re-absorption and re-emission occur during light propagation and shift the wavelength upward. It is important to note that the wavelength range from 380 to 500 nm is detectable by the PMT and transparent to the acrylic. From the emission spectra, we conclude that a significant amount of additional bis-MSB is required. Fig. <ref> shows that additions of 13 mg/L or more of bis-MSB give similar wavelength spectra. The formulas without bis-MSB have a better Cherenkov separation capability but may lead to a lower photoelectron yield: there is a trade-off between the photoelectron yield and the Cherenkov separation capability. These emission spectra were implemented in the simulation for evaluating the detection efficiency in Section <ref>. It is important to note that the sample container cell of the RTI fluorescence spectrometer is 10×10×10 cm^3, so the measured spectra are not taken at the origin of excitation. This finite cell size was reflected in our simulation of the 15.4 L device.

Combining the emission spectra with the scanning results in Table <ref>, we chose the formula of LAB with 0.07 g/L of PPO and 13 mg/L of bis-MSB as our slow liquid scintillator candidate, since it has a reasonable light yield and time constants while retaining more than half of the Cherenkov photons relative to pure LAB. The emission spectrum is shifted to the detectable range above 390 nm, falling into the region with almost no optical loss in acrylic (see Section <ref>).

§ OPTICAL TRANSMISSION OF ACRYLIC

Since acrylics are compatible with LAB-based liquid scintillators in terms of chemical and optical properties, they are widely used for scintillator vessels in neutrino experiments. However, to contain a kiloton-scale liquid scintillator, the acrylic vessel must be at least several centimeters thick, as in the SNO experiment <cit.>; as a result, the optical transmission loss in acrylic cannot be ignored. To evaluate this effect, we performed a qualitative study of the transmission of a UV-transparent acrylic sample (made by DONCHAMP, China).

The test stand consisted of a deuterium lamp and a spectrometer (Ocean Optics) with a 10-mm-thick test sample plate in between, as shown in Fig. <ref>. The lamp light was set perpendicularly incident on the acrylic plate, and the spectrometer measured the transmitted light. For comparison, the light intensity spectra were measured with and without the acrylic sample; the corresponding spectra are referred to as K_1 and K_0, respectively. The ratio of K_1 to K_0 is a function of the transmissivity t and reflectivity r of the acrylic sample, as illustrated in Fig. <ref>. For vertically incident light, the transmitted intensity is t(1-r)^2 + t^3r^2(1-r)^2 + ⋯, and the ratio can therefore be written as

K_1/K_0 = t(1-r)^2/(1-t^2r^2).

The reflectivity r can be derived from the Fresnel formula,

r = ((n-1)/(n+1))^2,

where n is the refractive index. The transmissivity as a function of wavelength, obtained from Eqs. (<ref>) and (<ref>), is shown in Fig. <ref>. The acrylic sample was found to be nearly transparent in the visible wavelength range (i.e., >400 nm) and almost opaque below 270 nm. As shown in Fig. <ref>, the emission spectrum of pure LAB lies below 400 nm and should be shifted upward to avoid absorption in the acrylic.
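Since Eq. (<ref>) is quadratic in t, it can be inverted in closed form (a sketch; the acrylic refractive index value is an assumption):

```python
import math

def transmissivity(k, n=1.49):
    """Invert K1/K0 = t(1-r)^2 / (1 - t^2 r^2) for t, with r from the Fresnel formula.

    k is the measured intensity ratio K1/K0; n ~ 1.49 (acrylic) is assumed.
    """
    r = ((n - 1.0) / (n + 1.0)) ** 2
    a, b = k * r**2, (1.0 - r) ** 2
    # k*r^2*t^2 + (1-r)^2*t - k = 0; keep the positive root.
    return (-b + math.sqrt(b**2 + 4.0 * a * k)) / (2.0 * a)
```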
§ ATTENUATION LENGTH OF THE SCINTILLATOR

If all the chemical components of the scintillator were precisely known, the attenuation length could easily be obtained using Eq. (<ref>). Since the impurities in a sample are difficult to know, a photometer was used to measure the attenuation of the slow liquid scintillator candidates <cit.>. A schematic of this photometer is shown in Fig. <ref>. An LED lamp was mounted at the top of the photometer, and the light was focused by a lens so that it traveled through a diaphragm and a 1 m-long stainless steel pipe filled with the liquid scintillator. The liquid level in the pipe was controlled by a solenoid valve and a liquid level sensor. A PMT (Hamamatsu R7724, 51 mm diameter) was installed at the bottom of the apparatus to receive the light; its wavelength response peaks at 420 nm <cit.>. The slow liquid scintillator used in the measurement was LAB with 0.07 g/L of PPO and 13 mg/L of bis-MSB, whose emission spectrum partially overlaps with that of the LED light used in the experiment.

As shown in Fig. <ref>, the LED spectrum is not monochromatic, so the light attenuation cannot be described by a single exponentially decreasing curve. Instead, the intensity of the transmitted light I(x) is a weighted average over the LED spectrum f(λ),

I(x) = I_0 ∫ f(λ) e^(-x/L(λ)) dλ,

where I_0 is the intensity of the incident light. For convenience, two exponentials were used to describe the data,

I(x) = I_0[α e^(-x/L_1) + (1-α) e^(-x/L_2)],

where α is the fraction of the component with the longer attenuation length L_1, while L_2 is the shorter attenuation length. The fitting result is shown in Fig. <ref>: α was determined to be 0.925±0.003, while L_1 and L_2 were determined to be (9.37±0.44) m and (0.16±0.02) m, respectively. It should be noted that the measured attenuation lengths include contributions from absorption, re-emission, and scattering effects <cit.>. It is understood that with a purification process the attenuation length can be extended to 20 m.

§ PERFORMANCE EVALUATION FOR THE CANDIDATE SAMPLE

In this section, we evaluate whether the candidate samples can satisfy the detection requirements of both solar and supernova relic neutrino experiments <cit.>. The detection performance is demonstrated by an analytical calculation with an empirical detector model.

§.§ Solar neutrino study

The impact of the slow liquid scintillator on future solar neutrino experiments was studied, for example for the neutrino experiment at Jinping <cit.>, in which a minimum light yield of 500 photoelectrons/MeV is required. To evaluate the effect, we set up a detector model, as shown in Fig. <ref>, similar to SNO+ <cit.>. The target material, filled in an acrylic or nylon inner vessel, was 1 kiloton (6.5 m radius) of liquid scintillator; 4,000 or more PMTs were placed around the inner vessel in the buffer water; and the height and diameter of the water tank were both 12 m. The photoelectron yield E of a scintillator neutrino detector is calculated as

E = Y·ϵ·c·q,

where Y is the light yield, ϵ is the light propagation efficiency, c is the photocathode coverage, and q is the quantum efficiency. We assumed a PMT photocathode coverage of 70% and an average quantum efficiency of 20%. The efficiency loss due to the acrylic vessel was ignored owing to the wavelength shifter. Accounting for self-attenuation, the light propagation efficiency is 46.2% for events at the center of the detector. For the sample of LAB with 0.07 g/L of PPO and 13 mg/L of bis-MSB, the scintillation photoelectron yield is roughly E = 4010×46.2%×70%×20% = 260 PE/MeV. If the PMT photocathode coverage can reach 100% with the help of light concentrators <cit.>, PMTs with high quantum efficiency (>30%) are adopted, and the attenuation length reaches 15 m <cit.> (i.e., an average propagation efficiency of 60%), then the photoelectron yield could be increased to 720 PE/MeV, corresponding to an energy resolution of 3.7% at 1 MeV of detectable energy. This value meets the requirement in the Jinping proposal for the solar neutrino study <cit.>.
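The two scenarios above amount to the following arithmetic (a short sketch reproducing the quoted numbers):

```python
def pe_yield(Y, eps, c, q):
    """Photoelectron yield E = Y * eps * c * q, in PE/MeV."""
    return Y * eps * c * q

baseline = pe_yield(4010, 0.462, 0.70, 0.20)   # ~260 PE/MeV
upgraded = pe_yield(4010, 0.60, 1.00, 0.30)    # ~720 PE/MeV
sigma_1MeV = upgraded ** -0.5                  # ~3.7%, Poisson-limited resolution at 1 MeV
```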
§.§ Supernova relic neutrino study

The detection of supernova relic neutrinos <cit.> does not place a stringent requirement on the light yield. The suppression of the neutral- and charged-current backgrounds induced by atmospheric neutrinos relies on the identification of neutrons and protons. In the region of interest, the positron signals lie between 10 and 30 MeV. Unlike the recoiled protons, these positrons are above the Cherenkov production threshold. About 500 Cherenkov photons are emitted in the 400∼700 nm range by a 10 MeV positron. Assuming a quantum efficiency of 30%, a light propagation efficiency of 60%, and a photocathode coverage of 100%, approximately 90 photoelectrons are predicted to be detected by the PMTs in the forward Cherenkov ring direction. Based on the separation capability shown in Fig. <ref>, where the signal-to-background ratio (Cherenkov to scintillation) within the first 10 ns is approximately 1, these 90 photoelectrons from the Cherenkov light should be easily identified. The sample with 0.07 g/L of PPO and 13 mg/L of bis-MSB, or formulas with similar PPO and bis-MSB concentrations, could be potential candidates for supernova relic neutrino detection; further studies are necessary to assess the gain in sensitivity.

§ CONCLUSION AND OUTLOOK

In this study, liquid scintillator mixtures of LAB, PPO, and bis-MSB with different compounding ratios were investigated. An inverse relationship between the light yields and decay time constants of these samples was observed and explained by the mechanism of energy transfer between scintillator molecules. The emission spectra of these samples were also reported. The addition of PPO and bis-MSB can enhance the light yield and shift the emission spectrum toward a more optically transparent region. For the first time, samples of LAB with around 0.07 g/L of PPO and 13 mg/L of bis-MSB, or formulas with similar concentrations, are shown to display a good balance between scintillation decay time and light yield. This combination, together with the PMT detection approach, could serve as a good slow liquid scintillator candidate for solar and supernova relic neutrino experiments. The concentrations of PPO and bis-MSB could be further optimized by using a large test apparatus <cit.> and by performing extensive offline simulations and analyses. For the direction reconstruction of low-energy electrons below 10 MeV, a more complete full-detector simulation and reconstruction method is being studied; its impact on the detection of geoneutrinos through electron-neutrino scattering will be reported.

§ ACKNOWLEDGEMENTS

This work was supported in part by the National Natural Science Foundation of China (No. 11620101004 and 11475093), the Key Laboratory of Particle & Radiation Imaging (Tsinghua University), and the CAS Center for Excellence in Particle Physics (CCEPP). The portion of this work performed at Brookhaven National Laboratory was supported in part by the United States Department of Energy under contract DE-AC02-98CH10886.
http://arxiv.org/abs/1708.07781v3
{ "authors": [ "Ziyi Guo", "Minfang Yeh", "Rui Zhang", "De-Wen Cao", "Ming Qi", "Zhe Wang", "Shaomin Chen" ], "categories": [ "physics.ins-det" ], "primary_category": "physics.ins-det", "published": "20170825153517", "title": "Slow Liquid Scintillator Candidates for MeV-scale Neutrino Experiments" }
[email protected] Theory Center, Thomas Jefferson National Accelerator Facility, Newport News, VA 23606, USA Center for Exploration of Energy and Matter,Indiana University, Bloomington, IN 47403, USA Physics Department, Indiana University, Bloomington, IN 47405, USA [email protected] Center for Exploration of Energy and Matter,Indiana University, Bloomington, IN 47403, USA Department of Physics and Astronomy, Ghent University, Belgium Theory Center, Thomas Jefferson National Accelerator Facility, Newport News, VA 23606, USA Instituto de Ciencias Nucleares,Universidad Nacional Autónoma de México, Ciudad de México 04510, Mexico Center for Exploration of Energy and Matter,Indiana University, Bloomington, IN 47403, USA Physics Department, Indiana University, Bloomington, IN 47405, USA Universität Bonn,Helmholtz-Institut für Strahlen- und Kernphysik, 53115 Bonn, Germany Theory Center, Thomas Jefferson National Accelerator Facility, Newport News, VA 23606, USA Theory Center, Thomas Jefferson National Accelerator Facility, Newport News, VA 23606, USA Center for Exploration of Energy and Matter,Indiana University, Bloomington, IN 47403, USA Physics Department, Indiana University, Bloomington, IN 47405, USA School of Informatics, Computing, and Engineering,Indiana University, Bloomington, IN 47405, USA Joint Physics Analysis Center JLAB-THY-17-2539Analyticity constitutes a rigid constrainton hadron scattering amplitudes. This property is used to relate modelsin different energy regimes. Using meson photoproduction as a benchmark,we show how to test contemporarylow energy models directly against high energy data.This method pinpoints deficiencies of the modelsand treads a path to further improvement. The implementation of this techniqueenables one to produce more stable and reliable partial wavesfor future use in hadron spectroscopy and new physics searches.Analyticity Constraints for Hadron Amplitudes: Going High to Heal Low Energy Issues G. Fox December 30, 2023 ===================================================================================== Introduction.— Determination of varioushadronic effects represents a major challengein searches for New Physics through precision measurements <cit.>. For example, the possible identification ofBeyond Standard Model signals in B meson decays is hinderedby uncertainties in hadronic final state interactions. The strongly coupled nature of QCD prevents usfrom computing these effects directlyfrom the underlying microscopic formulation.Nevertheless, one can use the first principles ofS-matrix theory to impose stringent constraintson hadron scatteringamplitudes <cit.>.These approaches are encountering a renewed interest even in the more formal context of strongly coupled theories <cit.>.In this Letter, we show how to use analyticityto relate the amplitudes at high energies to the physicsat low energies, where resonance effects dominate. This is not only important for reducing hadronicuncertainties in the aforementioned processes,but is of interest on its own merits forunraveling the spectrum of QCD. According to phenomenological predictions and lattice QCD simulations, the current spectrum summarized in theParticle Data Group (PDG) is far fromcomplete <cit.>. For example, the recent discoveries ofunexpected peaks in data indicate thatthe true hadron spectrum is far more complex than predicted <cit.>. 
As a working case, we focus here on the baryon sector in the intermediate energy range. In the PDG these N^* and Δ resonances are referred to as "poorly known" <cit.>, despite the large amount of data available. The ambiguities encountered when identifying resonances are related to the fact that, as the center of mass energy increases, so does the number of contributing partial waves, vastly complicating the reaction models used in data analysis. The 2-3 GeV mass region is of particular interest for baryon spectroscopy since, besides the ordinary quark model multiplets, it is expected to contain a new form of exotic light quark matter that is dominated by excitations of the gluon field <cit.>. The recent upgrade at Jefferson Lab <cit.> is providing high statistics data on hadron photoproduction. New amplitude analysis methods are a prerequisite to achieve a robust extraction of hadron resonance parameters.

Many research groups carry out low energy, coupled channel, partial wave analyses (PWA) for baryon spectroscopy. Currently, the most active are ANL-Osaka <cit.>, Bonn-Gatchina <cit.>, JPAC <cit.>, Jülich-Bonn <cit.>, MAID <cit.>, and SAID <cit.>. These groups perform global fits to hadro- and/or photoproduction data using a finite set of partial waves to extract baryon resonance properties <cit.>. In these approaches the high energy data are largely ignored. As we show in this Letter, these data can greatly impact the baryon spectrum analyses through analyticity. Specifically, we implement Finite Energy Sum Rules derived from dispersion relations <cit.>, and use simple approximations to describe the high energy data. The sum rules relate the amplitudes in the baryon resonance region to the high energy dynamics, where the amplitudes are described by exchanges of meson Regge poles <cit.>. We apply our method to the existing data on π^0 and η photoproduction <cit.>. These cases constitute a first step towards a straightforward and systematic implementation of high energy constraints into low energy amplitudes, and provide a template for further application in data analysis.

Analyticity constraints for photoproduction.— The reaction γ p → x p, where x=π^0,η, is completely described in terms of four independent scalar amplitudes A_i(s,t). These are analytic functions of the Mandelstam variables s (the square of the center of mass energy) and t (the square of the momentum transfer) <cit.>. At fixed t, each A_i(s,t) satisfies an unsubtracted dispersion relation involving the discontinuity with respect to s along the unitarity cut and the crossed-channel unitarity cut in u = 2m_p^2 + m_x^2 - s - t. Charge conjugation symmetry relates the discontinuity along the crossed channel cut to that of the direct channel. This symmetry is made explicit by writing the amplitude as a function of the variable[As customary, all dimensional variables are given in units of GeV.] ν ≡ (s-u)/2. For large |ν| and small t kinematics, the amplitudes are well approximated by Regge poles, via crossed channel exchanges. In this region, the amplitudes take the form

A_i(ν,t) = ∑_n β^(n)_i(t)·ν^(α^(n)(t)-1).

The Regge poles are determined by the trajectories α^(n)(t) and the residues β^(n)_i(t). The index n runs over all possible exchanges. This approximation holds only if |ν| is greater than some cutoff Λ above the resonance region. For |ν| < Λ, the amplitude is dominated by direct channel resonances, and thus it can be well approximated by a finite number of partial waves. One can write a dispersion relation using Cauchy's theorem with the contour in Fig.
<ref>, and calculate explicitly the integral over the circle |ν| = Λ assuming the form in Eq. (<ref>). One readily obtains <cit.>

∫_0^Λ Im A_i(ν,t) ν^k dν = ∑_n β_i^(n)(t)·Λ^(α^(n)(t)+k)/(α^(n)(t)+k).

The amplitudes A_1,2,4 and A_3 are even and odd functions of ν, respectively. Here k is an arbitrary positive integer, odd for A_1,2,4 and even for A_3. We give the value of Λ in terms of an energy cutoff s_max, which introduces an additional t dependence: Λ = s_max + (t - 2m_p^2 - m_x^2)/2. We restrict the sum on the right-hand side of Eq. (<ref>) to the dominant t-channel Regge poles. Each A_i receives a contribution from both isoscalar and isovector exchanges. Natural parity exchanges (with P = (-)^J) dominate A_1 and A_4, while the unnatural ones (with P = (-)^(J+1)) dominate A_2 and A_3. More specifically, the n=ρ,ω Regge poles contribute to A_1 and A_4, while A_2 and A_3 are determined by exchanges of the n=b,h,ρ_2,ω_2.[Even though there are some experimental indications of the existence of the ρ_2 and ω_2 <cit.>, they have been observed by one single group, and thus need further confirmation <cit.>.] The trajectories are nearly degenerate for all the natural exchanges <cit.>, and in the kinematical region of interest they can be well approximated by α(t) ≡ α^(ρ)(t) = α^(ω)(t) = 1 + 0.9(t - m_ρ^2) for i=1,4; a similar degenerate trajectory is used for the unnatural exchanges entering i=2,3.

At high energy the contribution of unnatural versus natural exchanges to observables in the forward direction is suppressed (for example, at a beam energy of 9 GeV). This can be compared with polarization observables, such as the beam asymmetry Σ,[The beam asymmetry is Σ ≡ (σ_⊥ - σ_∥)/(σ_⊥ + σ_∥), with σ_⊥(∥) the differential cross section for photons polarized perpendicular (parallel) to the reaction plane.] which are sensitive to the interference between the natural and the unnatural Regge poles. If one neglects the unnatural contributions, Σ = 1. The recent measurement of the π^0 and η beam asymmetries at GlueX <cit.> confirms that Σ > 0.9, so that the unnatural exchanges contribute ≲ 5% to the observables. In the following, we consider only the amplitudes dominated by natural exchanges, A_1 and A_4.

We use low energy models as input to determine the left-hand side of Eq. (<ref>), and use it to predict the residues. To this aim we define the effective residues

β_i(t) = (α(t)+k)/Λ^(α(t)+k) ∫_0^Λ Im A^PWA_i(ν,t) ν^k dν,

where A^PWA_i is the amplitude calculated from the low-energy models. Because of the Regge trajectory degeneracy, the β_i's describe the sum of the contributions of both the isovector and isoscalar exchanges. Consistency of the single-pole hypothesis requires the effective residues of Eq. (<ref>) to be independent of k.[For example, if one added another nondegenerate trajectory α_2 < α, the effective residue would depend on k as β̂_i = β_i + (β_i,2/Λ^(α-α_2))·(α+k)/(α_2+k). The second term becomes negligible for Λ sufficiently large.] For |ν| > Λ, the amplitudes can be expressed in terms of the effective residues as <cit.>

A_i(ν,t) = [i + tan(πα(t)/2)]·β_i(t)·ν^(α(t)-1).

These A_i(ν,t) are the high energy amplitudes calculated from the low energy models entering the sum rules. Comparing the observables calculated with them to data allows us to check the quality of the low energy models. In the high energy limit, the differential cross section becomes

dσ/dt ≃ (1/32π)·[|A_1|^2 - t·|A_4|^2] = (ν^(2α(t)-2)/32π)·[1 + tan^2(πα(t)/2)]·[β_1^2(t) - t·β_4^2(t)].
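Numerically, the effective residues and the cross-section prediction above reduce to a simple quadrature. The following is a minimal sketch (here `im_A` is a placeholder for the imaginary part of a PWA model amplitude; all interfaces are hypothetical, not from the paper):

```python
import numpy as np

def effective_residue(im_A, t, alpha, Lam, k, n_pts=2000):
    """beta(t) = (alpha+k)/Lam**(alpha+k) * integral_0^Lam Im A(nu,t) nu^k dnu."""
    nu = np.linspace(0.0, Lam, n_pts)
    return (alpha + k) / Lam ** (alpha + k) * np.trapz(im_A(nu, t) * nu ** k, nu)

def dsigma_dt(nu, t, alpha, beta1, beta4):
    """High-energy differential cross section from the effective Regge residues."""
    return (nu ** (2.0 * alpha - 2.0) / (32.0 * np.pi)
            * (1.0 + np.tan(np.pi * alpha / 2.0) ** 2)
            * (beta1 ** 2 - t * beta4 ** 2))
```

Stability of `effective_residue` under k = 3, 5, 7, 9 is then the practical consistency check described in the text.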
Results.— We next discuss what these constraints can tell us about the existing low energy analyses. We consider the effective residues for k=3,5,7,9. For π^0, we use the SAID partial wave model, which is valid up to s_max = (2.4 GeV)^2 <cit.>. For η, the amplitudes need to be extrapolated below the physical ηN threshold, down to the πN threshold (see Fig. <ref>). Among the various models, only that of <cit.>, valid up to s_max = (2 GeV)^2, is given in terms of analytical functions that allow for this continuation <cit.>.

The two effective residues β_1,4(t) are shown in Figs. <ref> and <ref> for π^0 and in Figs. <ref> and <ref> for η, respectively. In the case of π^0, we restrict the analysis to the small -t region, because of subleading Regge cut contributions which are known to dominate the cross section at higher -t <cit.>. We note that the residues are fairly independent of k. Conversely, the k dependence for η is large. This points to a problem in the low energy model: possible reasons are that the resonant content at energies below 2 GeV is underestimated, or that resonances in the 2-3 GeV region are relevant. In either case the low energy model can be improved using these constraints.

In Fig. <ref> we predict the high energy π^0 differential cross section computed with Eq. (<ref>) using the effective residues. Both the magnitude and the shape of the t dependence show a remarkable agreement with the data. The energy dependence is given by the trajectories in Eq. (<ref>). In the region of interest, the t dependence is fully determined by the low energy amplitudes through the integral over the imaginary part, see Eq. (<ref>). There is a dip in the cross section data near -t = 0.5 GeV^2, which can be traced to the zero of the dominant β_4(t) at -t ≃ 0.7 GeV^2 in Fig. <ref>. The predictions are almost independent of the moment k: the t dependence is identical for moments up to k=9, and the overall normalization changes by at most 20%.

The predictions for η are shown in Fig. <ref>. Since the effective residues computed from the low energy model have a significant k dependence, we show the cross section for fixed k=3, which happens to have the correct overall normalization. The prediction agrees very well with the data up to somewhat higher -t, but it underestimates the cross section in the forward, -t < 0.25 GeV^2, region. This effect originates from the small value of β_4 in this region, as can be seen in Figs. <ref> and <ref>. It is worth noting that the available PWA models <cit.> strongly disagree in this specific t region. In particular, in this model there is a peculiar cancellation between isoscalar and isovector exchanges, which results in a smaller effective residue <cit.>. This illustrates how the implementation of our approach can impact the low energy analyses.

Conclusions.— We discussed a technique which uses analyticity to constrain low energy hadron effects with high energy data. We benchmarked it against meson photoproduction, one of the main reactions used to study hadron spectroscopy. In this specific case, we showed the effectiveness of the approach in identifying potential deficiencies in the low energy models. We showed explicitly how the baryon spectrum determines the seemingly unrelated meson exchanges dominating forward scattering at high energies, and vice versa. Experiments at Jefferson Lab are currently exploring meson photoproduction above the baryon resonance region. The technique presented here can be applied to these forthcoming data and make a significant impact on baryon spectroscopy research. The approach can be extended to other hadron reactions, and can help control the hadronic effects that drive the uncertainties in New Physics searches, especially in the heavy flavor sector.

Acknowledgments.— This material is based upon work supported in part by the U.S.
Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177. This work was also supported in part by the U.S. Department of Energy under Grant DE-FG0287ER40365, the National Science Foundation under Grants PHY-1415459 and PHY-1205019, the IU Collaborative Research Grant, the Research Foundation Flanders (FWO-Flanders), PAPIIT-DGAPA (UNAM) grant No. IA101717, CONACYT (Mexico) grant No. 251817, and Red Temática CONACYT de Física en Altas Energías (Red FAE, Mexico).
http://arxiv.org/abs/1708.07779v1
{ "authors": [ "JPAC Collaboration", "V. Mathieu", "J. Nys", "A. Pilloni", "C. Fernández-Ramírez", "A. Jackura", "M. Mikhasenko", "V. Pauk", "A. P. Szczepaniak", "G. Fox" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170825153235", "title": "Analyticity Constraints for Hadron Amplitudes: Going High to Heal Low Energy Issues" }
Lionel Barnett, Adam B. Barrett, and Anil K. Seth
Sackler Centre for Consciousness Science and Department of Informatics, University of Sussex, Brighton BN1 9QJ, UK
[email protected] (correspondence)

Granger-Geweke causality (GGC) is a powerful and popular method for identifying directed functional (`causal') connectivity in neuroscience. In a recent paper, <cit.> raise several concerns about its use. They make two primary claims: (1) that GGC estimates may be severely biased or of high variance, and (2) that GGC fails to reveal the full structural/causal mechanisms of a system. However, these claims rest, respectively, on an incomplete evaluation of the literature, and a misconception about what GGC can be said to measure. Here we explain how existing approaches [as implemented, for example, in our popular MVGC software <cit.>] resolve the first issue, and discuss the frequently-misunderstood distinction between functional and effective neural connectivity which underlies Stokes and Purdon's second claim.

Keywords: Granger causality; functional connectivity; effective connectivity; statistical inference

Granger-Geweke causality (GGC) is a powerful analysis method for inferring directed functional (`causal') connectivity from time-series data, which has become increasingly popular in a variety of neuroimaging contexts <cit.>. GGC operationalises a statistical, predictive notion of causality in which causes precede, and help predict, their effects. When implemented using autoregressive modelling, GGC can be computed in both time and frequency domains, in both bivariate and multivariate (conditional) formulations. Despite its popularity and power, the use of GGC in neuroscience and neuroimaging has remained controversial. In a recent paper, <cit.> raise two primary concerns: (1) that GGC estimates may be severely biased or of high variance, and (2) that GGC fails to reveal the full structural/causal mechanisms of a system. Here, we explain why these concerns are misplaced.

Regarding the first claim, <cit.> describe how bias and variance in GGC estimation arise from the use of separate, independent, full and reduced regressions. However, this problem has long been recognised <cit.> and, moreover, has already been solved by methods which derive GGC from a single full regression[We note here that the "partition matrix" solution proposed by <cit.> is incorrect; see, e.g., <cit.>.]. These methods essentially extract reduced-model parameters from the single full model via factorisation of the spectral density matrix. Well-documented approaches include Wilson's frequency-domain algorithm <cit.>, Whittle's time-domain algorithm <cit.>, and a state-space approach which devolves to the solution of a discrete-time algebraic Riccati equation <cit.>. Thus, the source of bias and variance discussed in <cit.> has already been resolved.

This is clearly illustrated in Fig. (fig:sgc), where we plot estimated frequency-domain GGC for the 3-node VAR model in <cit.>, Example 1, using the single-regression state-space method <cit.>. We remark that identical results are obtained using the time-domain spectral factorisation method of <cit.>, as implemented in the current (v1.0, 2012) release of the associated MVGC Matlab software package <cit.>. Fig. (fig:sgc) may be directly compared with Fig. 2 in <cit.>; we see clearly that all estimates are strictly non-negative, and that the exaggerated bias and variance associated with the dual-regression approach are absent.
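To make the estimators at issue concrete, here is a minimal sketch of the textbook dual-regression time-domain GC estimator, the very procedure whose bias and variance are under discussion, using ordinary least squares (an illustration, not the MVGC implementation):

```python
import numpy as np

def var_residual_cov(x, p):
    """OLS fit of a VAR(p) to data x (variables x time); returns the residual covariance."""
    n, T = x.shape
    Y = x[:, p:]
    Z = np.vstack([x[:, p - k:T - k] for k in range(1, p + 1)])  # stacked lagged regressors
    A = Y @ Z.T @ np.linalg.inv(Z @ Z.T)                         # VAR coefficient estimates
    E = Y - A @ Z
    return E @ E.T / (T - p)

def gc_dual_regression(x, src, p):
    """Geweke's time-domain GC from variable `src` to the remaining variables.

    The reduced model is fitted separately to the data -- exactly the step
    that induces the bias/variance discussed in the text.
    """
    keep = [i for i in range(x.shape[0]) if i != src]
    full = var_residual_cov(x, p)[np.ix_(keep, keep)]
    reduced = var_residual_cov(x[keep], p)
    return np.linalg.slogdet(reduced)[1] - np.linalg.slogdet(full)[1]
```

The single-regression methods cited above instead derive the reduced-model innovations from the full model (via spectral factorisation or a discrete-time algebraic Riccati equation), so only the first fit is ever performed on data.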
Therefore, <cit.> are in error when they state that "Barnett and Seth […] have proposed fitting the reduced model and using it to directly compute the spectral components …". This is important to note because our MVGC toolbox has been widely adopted within the community, with >3,500 downloads and a significant number of high-impact research publications using the method <cit.>. Thus, we can reassure users of the toolbox that the problems of bias and variance described by <cit.> do not apply. Sample variance is, of course, still evident, as is bias due to non-negativity of the GGC sample statistic (which may be countered by standard surrogate data methods), but both remain well below their minimum values across all model orders for the dual-regression case <cit.>. Fig. (fig:bivar) further compares the bias and variance of time-domain GGC for the example system for single and dual regressions, at model order 3, across a wide range of time-series lengths. A single regression consistently leads to substantially less bias and variance, except at large time-series lengths, where bias and variance drop off for both methods.

<cit.> do correctly identify a fundamental cause of the problem with dual-regression GGC estimation: even if the full process is a finite-order autoregression, the reduced process will generally not be finite-order autoregressive; rather, it will be VARMA, or equivalently, a finite-order state-space process <cit.>, which may be poorly modelled as a finite-order VAR <cit.>. The problem is in fact more pervasive than this: the full process itself may have a strong moving-average (MA) component and be poorly modelled as a finite-order VAR. This is because common features of neurophysiological data acquisition, sampling and preprocessing procedures, such as subsampling and other temporal aggregation, filtering, measurement noise and sub-process extraction, will all, in general, induce an MA component <cit.>. This is particularly pertinent to fMRI data, where the haemodynamic response acts as a slow MA filter. Fortunately, the state-space and non-parametric approaches mentioned above handle VARMA data parsimoniously, hence avoiding this problem.

Moving on to the second claim, Stokes and Purdon note that GGC reflects a combination of `transmitter' and `channel' dynamics, and is independent of `receiver' dynamics. Again, this independence has been previously identified, as a direct consequence of the invariance of GGC under certain affine transformations <cit.>. But why should this independence matter? They suggest that it runs "counter to intuitive notions of causality intended to explain observed effects" since, according to them, "neuroscientists seek to determine the mechanisms that produce `effects' within a neural system or circuit as a function of inputs or `causes' observed at other locations". In fact, this view resonates more strongly with approaches such as Dynamic Causal Modelling (DCM) <cit.>, usually characterised as `effective connectivity', which attempt to find the optimal mechanistic (circuit-level) description that explains observed data. GGC, on the other hand, models dependencies among observed responses and is therefore an example of (directed) `functional connectivity' <cit.>. Essentially, the distinction is between making inferences about an underlying physical causal mechanism <cit.> and making inferences about directed information flow <cit.>.
DCM is able to deliver evidence for circuit-level descriptions of neural mechanism from a limited repertoire of tightly-framed hypotheses, which must be independently motivated and validated <cit.>; it is, in particular, unsuited to exploratory analyses. GGC, on the other hand, is data-driven and "data-agnostic" (it makes few assumptions about the generative process, beyond that it be reasonably parsimoniously modelled as a linear stochastic system), and as such is well-suited to exploratory analyses. It delivers an information-theoretic interpretation of the neural process which is both amenable to statistical inference and stands as an effect size for directed information flow between components of the system <cit.>. Both approaches address valid questions of interest to neuroscientific analysis.

Concluding, GGC represents a conceptually satisfying and statistically powerful method for (directed) functional connectivity analysis in neuroscience and neuroimaging. Currently available implementations [e.g., <cit.>] deal appropriately with the issues of bias and variance associated with dual-regression methods. However, a range of additional challenges remain in further developing this useful technique. These include issues of stationarity, linearity and exogenous influences, as noted by <cit.>, and in addition the influences of noise, sampling rates and temporal/spatial aggregation engendered by neural data acquisition <cit.>.

§ ACKNOWLEDGEMENTS

ABB is funded by EPSRC grant EP/L005131/1. All authors are grateful to the Dr. Mortimer and Theresa Sackler Foundation, which supports the Sackler Centre for Consciousness Science.
http://arxiv.org/abs/1708.08001v2
{ "authors": [ "Lionel Barnett", "Adam B. Barrett", "Anil K. Seth" ], "categories": [ "stat.ME" ], "primary_category": "stat.ME", "published": "20170826174931", "title": "Solved problems and remaining challenges for Granger causality analysis in neuroscience: A response to Stokes and Purdon (2017)" }
t-Structures on stable derivators and Grothendieck hearts

Manuel Saorín[The first named author was supported by the research projects from the Ministerio de Economía y Competitividad of Spain (MTM2016-77445-P) and the Fundación `Séneca' of Murcia (19880/GERM/15), both with a part of FEDER funds.], Jan Šťovíček[The second named author was supported by Neuron Fund for Support of Science.], Simone Virili[The third named author was supported by the Ministerio de Economía y Competitividad of Spain via a grant `Juan de la Cierva-formación'. He was also supported by the Fundación `Séneca' of Murcia (19880/GERM/15) with a part of FEDER funds.]

We consider a family of 3D models for the axi-symmetric incompressible Navier-Stokes equations. The models are derived by changing the strength of the convection terms in the axisymmetric Navier-Stokes equations written using a set of transformed variables. We prove the global regularity of the family of models in the case that the strength of convection is slightly stronger than that of the original Navier-Stokes equations, which demonstrates the potential stabilizing effect of convection.

§ INTRODUCTION AND MAIN RESULT

The three-dimensional (3D) Euler and Navier-Stokes equations govern the motion of an incompressible fluid in the absence of external forcing:

𝐮_t+𝐮·∇𝐮=-∇p+νΔ𝐮,   ∇·𝐮=0.

Here 𝐮(x,t): 𝐑^3×[0,T)→𝐑^3 is the 3D velocity vector of the fluid, and p(x,t): 𝐑^3×[0,T)→𝐑 describes the scalar pressure. The viscous term νΔ𝐮 models the viscous forcing in the fluid. In the case ν=0, equations (<ref>) are referred to as the Euler equations, and in the case ν>0, as the Navier-Stokes equations. The divergence-free condition ∇·𝐮=0 guarantees the incompressibility of the fluid. The Euler and Navier-Stokes equations are among the most fundamental nonlinear partial differential equations (PDEs) in nature, yet they are far from being fully understood. The fundamental question regarding the global regularity of the Euler and Navier-Stokes equations with smooth initial data in the 3D setting remains open, and it is generally viewed as one of the most important open questions in mathematical fluid mechanics; see the surveys <cit.>.

The Euler equations have the scaling invariance

𝐮(x,t)→(λ/τ)𝐮(x/λ, t/τ),   p(x,t)→(λ^2/τ^2)p(x/λ, t/τ),

and for the Navier-Stokes equations, due to the viscous term, the two-parameter symmetry group in (<ref>) is restricted to the one-parameter group

𝐮(x,t)→(1/τ^1/2)𝐮(x/τ^1/2, t/τ),   p(x,t)→(1/τ)p(x/τ^1/2, t/τ).

Smooth solutions of the Euler/Navier-Stokes equations (<ref>) enjoy the energy identity

(1/2)∫|𝐮(x,t)|^2dx + ν∫_0^t∫_𝐑^3|∇𝐮(x,s)|^2dxds = (1/2)∫|𝐮(x,0)|^2dx,

which implies the following a priori estimates for Navier-Stokes:

(1/2)∫_𝐑^3|𝐮(x,t)|^2dx, ∫_0^t∫_𝐑^3|∇𝐮(x,s)|^2dxds ≤ C.

The above estimates seem to be the only known coercive a priori estimates for smooth solutions of the Navier-Stokes equations (<ref>).
The main difficulty for the global regularity problem of the 3D Navier-Stokes equations lies in the fact that these known a priori estimates (<ref>) are supercritical with respect to the invariant scaling of the equations (<ref>); see <cit.> for more discussion of this supercritical barrier. For the 3D Euler equations, due to the lack of a regularization mechanism (there is no viscosity), proving the global regularity of the solutions becomes even more challenging.

In this work, we consider a family of 3D models for the Navier-Stokes equations with axial symmetry, which was proposed in <cit.>: u_1,t+u^ru_1,r+u^zu_1,z =ν(∂_r^2+(3/r)∂_r+∂_z^2)u_1+2u_1ϕ_1,z, ω_1,t+u^rω_1,r+u^zω_1,z =ν(∂_r^2+(3/r)∂_r+∂_z^2)ω_1+(u_1^2)_z, -(∂_r^2+(3/r)∂_r+∂_z^2)ϕ_1 =ω_1, with the Biot-Savart law given by u^r=-ϵ rϕ_1,z, u^z=2ϵϕ_1+ϵ rϕ_1,r. We give the derivation of this model in Section <ref> for the sake of completeness. In (<ref>), the parameter ϵ characterizes the strength of the convection. The case ϵ=1 corresponds to the original axi-symmetric Euler/Navier-Stokes equations, and the case ϵ=0 corresponds to the 3D model investigated in <cit.>. This family of models was proposed in <cit.> to study the effect of convection on the depletion of nonlinearity or the formation of finite-time singularities. The family shares several regularity results with the original Euler and Navier-Stokes equations, including an energy identity and two well-known non-blowup criteria. The numerical results in <cit.> suggest that the inviscid models with weak convection can develop a self-similar singularity, and that this singularity scenario does not seem to persist as the strength of the convection terms increases, in particular for the original axisymmetric Euler equations. For the family of viscous models with ϵ∈ [1,2), we can obtain a maximum principle for a modified circulation quantity Γ^ϵ=u_1r^{2/ϵ}, i.e., Γ^ϵ_L^∞≤Γ^ϵ_0_L^∞, which is subcritical with respect to the invariant scaling for ϵ>1.

The models (<ref>) can also be written in a velocity-pressure form as 𝐯_t+ϵ𝐯·∇𝐯=-∇ p+νΔ𝐯+(2ϵ-2)v^θ v^r 𝐞_θ/r, where the velocity 𝐯(x,t) is a rescaling of the velocity in model (<ref>): 𝐯=(u^r/ϵ)𝐞_𝐫+(u^z/ϵ)𝐞_𝐳+(u^θ/ϵ^{3/2})𝐞_θ, u^θ=ru_1. The viscous models (<ref>) enjoy the same scaling-invariance as the Navier-Stokes equations, and smooth solutions enjoy an energy identity similar to (<ref>) for ϵ∈ [0,2). To be specific, the a priori energy estimate is supercritical. We also have the following maximum principle for the modified total circulation Γ^ϵ in the case that ϵ∈ [1, 2): Γ^ϵ=u_1r^{2/ϵ}, Γ^ϵ_L^∞≤Γ^ϵ_0_L^∞. The a priori estimate for Γ^ϵ_L^∞ is subcritical with respect to the invariant scaling of the models, which is the key in our proof of the global regularity.

Next we state our main result. Consider the viscous models (<ref>) with ϵ∈ (20/19,2) and data 𝐯(·, 0)∈ H^4(𝐑^3). Then the solution 𝐯(x,t) is globally regular in time. This result further demonstrates the potential stabilizing effect of the convection terms, consistent with the numerical results in <cit.> showing that the self-similar singularity of the inviscid models with weak convection does not persist as the strength of the convection terms increases.

To prove the main result, Theorem <ref>, we use an L^p estimate for ω_1 and an L^q estimate for u_1. To control the nonlinear vortex stretching term in the equation of ω_1 using the viscous term, we only need to use the subcritical a priori estimate (<ref>) and the Hardy inequality, under the condition that q=2p-pϵ'/2 for some ϵ'<ϵ.
However, for the nonlinear term in the equation of u_1, the subcritical a priori estimate (<ref>) seems insufficient, because it can only control the angular component of the velocity. We use a combination of the supercritical energy estimate (<ref>) and the subcritical estimate of Γ^ϵ (<ref>) to control the nonlinear term in the equation of u_1. To bound the nonlinear term using the viscous term, we need the condition ϵ > 20/19 in (<ref>). In our proof of the main result in section <ref>, we only carry out the L^2 estimate for ω_1; using any L^p estimate for ω_1 with p∈ (1, +∞) leads to the same result under the condition ϵ > 20/19.

The rest of this paper is organized as follows. In section <ref>, we derive the family of models that we investigate in this work and list some regularity results for these models. We also give a brief review of recent regularity results for the Navier-Stokes equations with axial symmetry. In section <ref>, we prove our main result, Theorem <ref>.

§ DERIVATION OF THE MODELS AND REVIEW OF THE LITERATURE

Recently, the Euler and Navier-Stokes equations with axial symmetry have attracted a lot of interest. The global regularity problem in this setting remains open, although a lot of progress has been made. Let 𝐞_𝐫, 𝐞_θ and 𝐞_𝐳 be the standard orthonormal vectors defining the cylindrical coordinates, 𝐞_𝐫=(x_1/r, x_2/r, 0)^T, 𝐞_θ=(x_2/r, -x_1/r, 0)^T, 𝐞_𝐳=(0,0,1)^T, where r=√(x_1^2+x_2^2) and z=x_3. Then the 3D velocity field 𝐮(x,t) is called axi-symmetric if it can be written as 𝐮(x,t)=u^r(r, z, t)𝐞_𝐫+u^θ(r, z, t)𝐞_θ+u^z(r, z, t)𝐞_𝐳, where u^r, u^θ and u^z do not depend on the θ coordinate. We denote the axi-symmetric vorticity field ω as ω(x,t)=∇×𝐮(x,t)=ω^r(r, z, t)𝐞_𝐫+ω^θ(r, z, t)𝐞_θ+ω^z(r, z, t)𝐞_𝐳, and then the Euler and Navier-Stokes equations with axial symmetry can be written using the cylindrical coordinates as u^θ_t+u^ru^θ_r+u^zu_z^θ =ν(Δ-1/r^2)u^θ-u^ru^θ/r, ω^θ_t+u^rω_r^θ+u^zω_z^θ =ν(Δ-1/r^2)ω^θ+(2/r)u^θ u^θ_z+u^rω^θ/r, -[Δ-1/r^2]ϕ^θ =ω^θ, where the radial and axial velocity fields u^r(r,z,t) and u^z(r,z,t) are recovered from the stream function ϕ^θ based on the Biot-Savart law u^r=-∂_zϕ^θ, u^z=r^-1∂_r(rϕ^θ). Note that the equations for the angular velocity (<ref>) and the angular vorticity (<ref>), together with the Biot-Savart law (<ref>)-(<ref>), form a closed system.

Equations (<ref>) have a formal singularity on the axis r=0 due to the 1/r terms. Using the fact that the angular components u^θ(r,z), ω^θ(r,z) and ϕ^θ(r,z) can all be viewed as odd functions of r <cit.>, Hou and Li introduced the following transformed variables in <cit.>, u_1=u^θ/r, ω_1=ω^θ/r, ϕ_1=ϕ^θ/r, to remove the formal singularity in (<ref>). This leads to the following reformulated axi-symmetric Navier-Stokes equations: u_1,t+u^ru_1,r+u^zu_1,z =ν(∂_r^2+(3/r)∂_r+∂_z^2)u_1+2u_1ϕ_1,z, ω_1,t+u^rω_1,r+u^zω_1,z =ν(∂_r^2+(3/r)∂_r+∂_z^2)ω_1+(u_1^2)_z, -[∂_r^2+(3/r)∂_r+∂_z^2]ϕ_1 =ω_1, with the Biot-Savart law given by u^r=-rϕ_1,z, u^z=2ϕ_1+rϕ_1,r. In <cit.>, a family of 3D models for the axi-symmetric Euler and Navier-Stokes equations was proposed by changing the Biot-Savart law (<ref>): u^r=-ϵ rϕ_1,z, u^z=2ϵϕ_1+ϵ rϕ_1,r, to study the potential stabilizing effect of the convection terms. The viscous model (<ref>) enjoys the following scaling-invariance: u_1(r, z,t)→(1/τ)u_1(r/τ^{1/2}, z/τ^{1/2}, t/τ), ω_1(r, z,t)→(1/τ^{3/2})ω_1(r/τ^{1/2},z/τ^{1/2}, t/τ). The modified velocity field (<ref>) is still divergence-free: ∇·𝐯=(1/(ϵ r))((u^rr)_r+(u^zr)_z)=0.
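The subcriticality of Γ^ϵ asserted above follows from this scaling by a one-line computation, recorded here for convenience. Under (<ref>), with r=τ^{1/2}r̃, we have Γ^ϵ=u_1r^{2/ϵ}→(1/τ)u_1(r̃,z̃,t/τ)·τ^{1/ϵ}r̃^{2/ϵ}=τ^{1/ϵ-1}Γ^ϵ(r̃,z̃,t/τ), so the quantity controlled by the maximum principle scales with the factor τ^{1/ϵ-1}. For ϵ=1 this exponent vanishes and the bound on Γ^ϵ_L^∞ is scale-invariant (critical); for ϵ>1 the factor diverges as τ→0, i.e. a uniform bound on Γ^ϵ_L^∞ rules out concentration at small scales, and the estimate is subcritical.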
It was proved in <cit.> that the models (<ref>) with ϵ∈ [0, 2) share several regularity results with the original Euler and Navier-Stokes equations, including an energy identity, the conservation of a modified circulation quantity, the BKM non-blowup criterion, and the Prodi-Serrin non-blowup criterion. Smooth solutions to the models (<ref>) with ϵ∈[0,2) enjoy the following energy identity with u^θ=ru_1: (1/2)(d/dt)∫ (u^r)^2+(u^z)^2+(1/(2-ϵ))(u^θ)^2 rdrdz=-ν∫ |∇ u^r|^2+|∇ u^z|^2+(u^r)^2/r^2+(1/(2-ϵ))(u^θ)^2/r^2 rdrdz. Note that the modified energy functional in (<ref>), E_ϵ=∫ (u^r)^2+(u^z)^2+(1/(2-ϵ))(u^θ)^2 rdrdz, is equivalent to that of the original Euler and Navier-Stokes equations, E_1: min(1, 1/(2-ϵ)) E_1≤ E_ϵ≤max(1, 1/(2-ϵ)) E_1. Based on (<ref>), we have the following a priori estimates of the solutions: u^θ(r, z, t)_L^2, u^r(r, z, t)_L^2, ∫_0^t ϕ_1,z(s)_L^2^2ds=∫_0^t (1/ϵ^2)u^r(s)/r_L^2^2ds≤ C.

We define the modified total circulation Γ^ϵ as Γ^ϵ=u_1r^{2/ϵ}, and then Γ^ϵ satisfies the following equation: Γ^ϵ_t+u^rΓ^ϵ_r+u^zΓ^ϵ_z=ν(Δ -(2/r)(2/ϵ-1)∂_r+(1/r^2)(2/ϵ)(2/ϵ-2))Γ^ϵ. Then for the inviscid model with ν=0, or the viscous model with ν>0 and ϵ≥ 1, we have the following maximum principle: Γ^ϵ(r, z, t)_L^∞≤Γ^ϵ(r, z, 0)_L^∞=Γ^ϵ_0_L^∞. For the viscous models with ϵ>1, the quantity Γ^ϵ is indeed subcritical with respect to the invariant scaling of the equations in (<ref>), which is the key in our proof of the global regularity result for the models in this paper.

Both the inviscid models and the viscous models enjoy the following BKM type criterion for smooth initial data with decay at infinity. If ∫_0^T∇×𝐯(x, t)_𝐁𝐌𝐎dt<+∞, then 𝐯(x,t)∈ L^∞( H^4(𝐑^3), [0, T]). The viscous models also enjoy a Prodi-Serrin type regularity criterion for smooth initial data with decay at infinity. If 𝐯(x,t)∈ L^q(L^p(𝐑^3), (0, T)), 3/p+2/q=1, p∈ (3, +∞], q∈ [2, +∞), then 𝐯(x,t)∈ L^∞( H^4(𝐑^3), [0, T]).

In <cit.>, convincing numerical evidence is presented to show that the inviscid models with weak convection could develop a stable self-similar singularity on the symmetric axis. The singularity scenario in <cit.> is different from that at the boundary described in <cit.> in the sense that the center of the singularity region is not stationary but travels along the symmetric axis. As the strength of the convection terms increases, the self-similar singularity scenario becomes less stable. Such a finite-time singularity scenario does not seem to persist for the models with strong convection (ϵ≥ϵ_0 for some ϵ_0 >0), in particular for the original axi-symmetric Euler equations. These results demonstrate the potential stabilizing effect of the convection terms. In this work, we prove the global regularity of the viscous models when the strength of convection is slightly stronger than that of the original Navier-Stokes equations, i.e. ϵ∈ (20/19,2). The result proved in this work further demonstrates the potential stabilizing effect of convection in the axi-symmetric Navier-Stokes equations. The modified total circulation Γ^ϵ (<ref>) is subcritical with respect to the scaling (<ref>) for all ϵ>1. However, the estimate (<ref>) can only control the angular component of the velocity, and using the technique presented in this work we can only prove the regularity of the models for ϵ∈ (20/19,2), not (1,2).

Some important progress has been made recently regarding the regularity of the axi-symmetric Navier-Stokes equations; see <cit.>, and we mention a few related works below. In <cit.>, Hou and Li proposed a 1D model by restricting the equations (<ref>) to the symmetric axis.
Using a cancellation property in the equation for u_1,z, they proved the global regularity of the 1D model with or without viscosity. In <cit.>, the cancellation property used in <cit.> was further exploited, and several critical regularity criteria concerning only the angular velocity were proved. In particular, the authors of <cit.> showed that if r^du^θ∈ L^q(L^p(𝐑^3), (0, T)) with d∈ [0, 1), (p, q)∈ (3/(1-d),∞]×[2/(1-d),∞], 3/p+2/q≤ 1-d, then the solutions can be smoothly extended beyond T. In <cit.>, the global regularity was obtained if |Γ|≤ C|ln r|^{-2}, and this result was later improved to |Γ|≤ C|ln r|^{-3/2} in <cit.>. The cancellation property in the equation of u_1,z is crucial for the results in <cit.>. However, for the family of models (<ref>) that we study in this paper, this cancellation is destroyed due to the change of strength of the convection terms in (<ref>).

§ PROOF OF THE MAIN RESULT

In this section we prove the main result, Theorem <ref>. We need the following Hardy inequality in 1D; see <cit.>. If λ>1, σ≠ 1, f(r) is a nonnegative measurable function, and F(r)=∫_0^r f(t)dt for σ>1, F(r)=∫_r^∞ f(t)dt for σ<1, then ∫_0^∞ r^{-σ}F^λ dr≤ (λ/|σ-1|)^λ∫_0^∞ r^{-σ}(rf)^λ dr.

We also need the following elliptic estimates <cit.>. For axi-symmetric smooth functions ϕ_1(r,z) and ω_1(r,z) in 𝐑^3, which satisfy the elliptic equation -Δϕ_1-(2/r)∂_rϕ_1=ω_1, we have the following estimates: ∇^2 ϕ_1_L^2≤ Cω_1_L^2, ∇^2 ϕ_1,z_L^2≤ C∇ω_1_L^2.

For a smooth solution u_1 of the model (<ref>) and ϵ'∈(1, ϵ), ∫ |u_1(r, z)|^{ϵ'} f(r)^2rdrdz≤ C_1(r_1)∫ |∂_rf|^2rdrdz+Cr_1^{-2ϵ'/ϵ}∫_{r≥ r_1}f^2rdrdz, with C_1(r_1)=CΓ_0^ϵ^{ϵ'}_L^∞r_1^{2-2ϵ'/ϵ}(ϵ/(ϵ-ϵ'))^2, lim_{r_1→ 0^+}C_1(r_1)=0. Let ψ(r) be a radial cutoff function such that ψ(r)∈ C^∞(𝐑), ψ(r)=1 for r≤ 1, ψ(r)=0 for r≥ 2, 0≤ψ(r)≤ 1, |ψ_r(r)|≤ 2. Denote by ψ_{r_1}(r) the rescaled cutoff ψ(r/r_1); then we have ∫ |u_1|^{ϵ'}f(r)^2rdrdz=∫ |u_1|^{ϵ'}(f(r)ψ_{r_1}(r)+f(r)(1-ψ_{r_1}(r)))^2 rdrdz≤ 2∫_{r≤ 2r_1} |u_1|^{ϵ'}f(r)^2|ψ_{r_1}(r)|^2rdrdz+2∫_{r≥ r_1}|u_1|^{ϵ'}f(r)^2(1-ψ_{r_1}(r))^2rdrdz. Using the maximum principle (<ref>), we have |u_1(r,z,t)|≤Γ^ϵ_L^∞r^{-2/ϵ}≤Γ_0^ϵ_L^∞r^{-2/ϵ}. Putting (<ref>) in the first term on the RHS of (<ref>), and using the Hardy inequality (<ref>), we get ∫_{r≤ 2r_1} |u_1|^{ϵ'}f(r)^2|ψ_{r_1}(r)|^2rdrdz ≤∫_{r≤ 2r_1}Γ_0^ϵ_L^∞^{ϵ'}r^{-2ϵ'/ϵ}|f(r)ψ_{r_1}(r)|^2rdrdz≤ CΓ_0^ϵ^{ϵ'}_L^∞(ϵ/(ϵ-ϵ'))^2∫_{r≤ 2r_1} r^{2-2ϵ'/ϵ}|∂_r(f(r)ψ_{r_1}(r))|^2rdrdz≤ CΓ_0^ϵ^{ϵ'}_L^∞(ϵ/(ϵ-ϵ'))^2[∫_{r≤ 2r_1}r^{2-2ϵ'/ϵ}|∂_r f|^2rdrdz+∫_{r≥ r_1}r^{2-2ϵ'/ϵ}|f|^2|∂_rψ_{r_1}|^2rdrdz]≤ CΓ_0^ϵ^{ϵ'}_L^∞(ϵ/(ϵ-ϵ'))^2r_1^{2-2ϵ'/ϵ}∂_rf^2_L^2+CΓ_0^ϵ^{ϵ'}_L^∞(ϵ/(ϵ-ϵ'))^2r_1^{-2ϵ'/ϵ}∫_{r≥ r_1}f^2rdrdz. For the second term in (<ref>), using the estimate (<ref>), we have ∫_{r≥ r_1}|u_1|^{ϵ'}f^2(r)(1-ψ_{r_1}(r))^2rdrdz≤ CΓ_0^ϵ^{ϵ'}_L^∞r_1^{-2ϵ'/ϵ}∫_{r≥ r_1}f^2rdrdz. Adding up estimates (<ref>) and (<ref>), we prove (<ref>).

Next we give the proof of Theorem <ref>. Without loss of generality, we assume that ν=1 in our proof. We denote ϵ'=20/19<ϵ and consider the following two quantities: ∫ |ω_1|^2rdrdz, ∫ |u_1|^qrdrdz, q=4-ϵ'. Multiplying the equation of ω_1 (<ref>) by ω_1, we get (d/dt)(1/2)∫ω_1^2rdrdz+(1/2)∫ u^r (ω_1^2)_r+u^z(ω_1^2)_z rdrdz=∫ 2u_1u_1,zω_1rdrdz+∫ (Δω_1+(2/r)ω_1,r)ω_1rdrdz. Using integration by parts, we can show that the convection terms vanish due to the incompressibility condition (u^rr)_r+(u^zr)_z=0: ∫ u^r (ω_1^2)_r+u^z(ω_1^2)_z rdrdz=-∫ω_1^2((u^rr)_r+(u^zr)_z)drdz=0. For the viscous term on the RHS of (<ref>), we have ∫Δω_1 ω_1rdrdz=-∫|∇ω_1|^2rdrdz. Next we treat the first-order derivative term on the RHS of (<ref>) as ∫(2/r)ω_1,rω_1rdrdz=∫ (ω_1^2)_rdrdz=-∫ω_1(0, z, t)^2dz≤ 0.
Using integration by parts and Young's inequality leads to |∫ 2u_1u_1,zω_1rdrdz|=|∫ u_1^2ω_1,zrdrdz|≤(1/2)∫ u_1^4 rdrdz+(1/2)∫ |∇ω_1|^2rdrdz. For the first term on the RHS of (<ref>), using Lemma <ref> and q=4-ϵ', we get (1/2)∫ u_1^4rdrdz=(1/2)∫ |u_1|^{ϵ'}|u_1|^qrdrdz≤ C(r_1)∫ |∇(|u_1|^{q/2})|^2rdrdz+C∫ |u_1|^qrdrdz. Adding up estimates (<ref>) and (<ref>) in (<ref>), we have (d/dt)∫ω_1^2rdrdz+∫ |∇ω_1|^2rdrdz≤ C(r_1) ∫ |∇(|u_1|^{q/2})|^2rdrdz+C∫ |u_1|^q rdrdz.

Next we consider the equation of u_1, (<ref>), and multiply both sides by |u_1|^{q-2}u_1 to obtain (d/dt)(1/q)∫ |u_1|^qrdrdz+(1/q)∫ u^r(|u_1|^q)_r+u^z(|u_1|^q)_z rdrdz = ∫ 2|u_1|^qϕ_1,zrdrdz+∫ (Δ u_1+(2/r)u_1,r)|u_1|^{q-2}u_1rdrdz. Again the convection terms vanish due to incompressibility. For the diffusion term, using estimates similar to those for the ω_1 equation, we arrive at ∫ (Δ u_1+(2/r)u_1,r)|u_1|^{q-2}u_1rdrdz=-(4(q-1)/q^2)∫ |∇ (|u_1|^{q/2})|^2rdrdz-(2/q)∫ |u_1(0, z)|^qdz≤ -(4(q-1)/q^2)∫ |∇ (|u_1|^{q/2})|^2rdrdz.

Next we decompose the nonlinear term on the RHS of (<ref>) into two parts: ∫ 2|u_1|^qϕ_1,zrdrdz=2∫ |u_1|^qϕ_1,zψ(r)+|u_1|^qϕ_1,z(1-ψ(r))rdrdz, where ψ(r) is the cut-off function defined in (<ref>) that satisfies ψ(r)=1 for r≤ 1. For the second term on the RHS of (<ref>), Young's inequality implies ∫_{r≥ 1} |u_1|^qϕ_1,z(1-ψ(r))rdrdz≤∫_{r≥ 1} |u_1|^{2q}+|ϕ_1,z|^2rdrdz≤∫_{r≥ 1} |ru_1|^2 r^{-2}|u_1|^{2q-2}+|ϕ_1,z|^2rdrdz≤Γ_0^ϵ^{2q-2}_L^∞∫_{r≥ 1} |ru_1|^2rdrdz+(1/ϵ^2)∫_{r≥ 1} |u^r/r|^2rdrdz≤ C, where we have used the a priori estimates (<ref>) in the last step.

As for the first term on the RHS of (<ref>), we denote g(r,z)=ϕ_1,z(r,z)ψ(r) and have |∫ 2|u_1|^qg(r,z) rdrdz| ≤ 2∫ |u_1|^α |u_1|^β |g(r,z)|rdrdz, where the exponents α, β are α=(16-8ϵ')/(4-ϵ'), β=(ϵ')^2/(4-ϵ'), α+β=q=4-ϵ'. Then applying Young's inequality with L_1=(4-ϵ')/(4-2ϵ'), L_2=(4-ϵ')/ϵ', 1/L_1+1/L_2=1, we obtain ∫ 2|u_1|^q|g|rdrdz ≤(1/2)∫ |u_1|^{L_1α} rdrdz+C∫ |u_1|^{L_2β} |g|^{L_2}rdrdz=(1/2)∫ |u_1|^4rdrdz+C∫ |u_1|^{ϵ'} |g(r,z)|^{(4-ϵ')/ϵ'}rdrdz. The first term in (<ref>) is treated as in the estimate (<ref>). For the second term on the RHS of (<ref>), using Lemma <ref> with r_1=2 and the fact that g(r,z)=0 for r≥ 2, we obtain ∫ |u_1|^{ϵ'}|g(r,z)|^{(4-ϵ')/ϵ'}rdrdz≤ C∫ |∂_r(|g(r,z)|^{(4-ϵ')/(2ϵ')})|^2rdrdz =C ∫ |g_r(r,z)|^2|g(r,z)|^{(4-3ϵ')/ϵ'}rdrdz ≤ Cg(r,z)_L^∞^{(4-3ϵ')/ϵ'}∇ g(r,z)_L^2^2. Using the following interpolation inequalities, ∇ g(r,z)_L^2 ≤ Cg(r,z)_L^2^{1/2}∇^2g(r,z)_L^2^{1/2}, g(r,z)_L^∞ ≤ Cg(r,z)_L^2^{1/4}∇^2g(r,z)_L^2^{3/4}, we have ∇ g(r,z)_L^2^{(11ϵ'-4)/(6ϵ')} g(r,z)_L^∞^{(4-3ϵ')/ϵ'}≤ Cg(r,z)_L^2^{(ϵ'+4)/(6ϵ')}∇^2 g(r,z)_L^2^{(8-4ϵ')/(3ϵ')}. Using (<ref>) in (<ref>), we have ∫ |u_1|^{ϵ'}|g(r,z)|^{(4-ϵ')/ϵ'}rdrdz≤ Cg(r,z)_L^∞^{(4-3ϵ')/ϵ'}∇ g(r,z)_L^2^2 = C[g(r,z)_L^∞^{(4-3ϵ')/ϵ'}∇ g(r,z)_L^2^{(11ϵ'-4)/(6ϵ')}] ∇ g(r,z)_L^2^{(4+ϵ')/(6ϵ')}≤ Cg(r,z)_L^2^{(4+ϵ')/(6ϵ')}∇ g(r,z)_L^2^{(4+ϵ')/(6ϵ')}∇^2 g(r,z)_L^2^{(8-4ϵ')/(3ϵ')}. In deriving the above estimate, we have used the interpolation inequality (<ref>) in such a way that the exponents of g(r,z)_L^2 and ∇ g(r,z)_L^2 are the same on the RHS of (<ref>).

Since ϵ'=20/19, using Young's inequality with L_1=12ϵ'/(4+ϵ'), L_2=6ϵ'/(8-4ϵ'), 1/L_1+1/L_2=1, in (<ref>), we have ∫ |u_1|^{ϵ'}|g(r,z)|^{(4-ϵ')/ϵ'}rdrdz≤ C(δ) (g(r,z)_L^2^{(4+ϵ')/(6ϵ')}∇ g(r,z)_L^2^{(4+ϵ')/(6ϵ')})^{L_1}+δ(∇^2 g(r,z)_L^2^{(8-4ϵ')/(3ϵ')})^{L_2}=C(δ)g(r,z)_L^2^2∇ g(r,z)_L^2^2+δ∇^2 g(r,z)_L^2^2. Since g(r,z)=ϕ_1,zψ(r) and ψ(r) is constant for r≤ 1, we have g(r,z)_L^2^2≤ Cϕ_1,z^2_L^2, ∇ g(r,z)_L^2^2≤ C∇ϕ_1,z_L^2^2+C1_{r≥ 1}ϕ_1,z_L^2^2, ∇^2 g(r,z)_L^2^2≤ C∇^2ϕ_1,z_L^2^2+C1_{r≥ 1}∇ϕ_1,z_L^2^2+C1_{r≥ 1}ϕ_1,z_L^2^2.
By the a priori estimate (<ref>), we have 1_{r≥ 1}ϕ_1,z_L^2^2=(1/ϵ^2)1_{r≥ 1}u^r/r_L^2^2≤(1/ϵ^2)u^r_L^2^2≤ C. In view of (<ref>), we get from (<ref>) and (<ref>): ∫ |u_1|^{ϵ'}|g(r,z)|^{(4-ϵ')/ϵ'}rdrdz ≤ Cδ∇^2ϕ_1,z_L^2^2+C∇ϕ_1,z_L^2^2ϕ_1,z_L^2^2+C∇ϕ_1,z_L^2^2+Cϕ_1,z_L^2^2+C. Employing the elliptic estimate (<ref>) in (<ref>), we deduce ∫ |u_1|^{ϵ'}|g(r,z)|^{(4-ϵ')/ϵ'}rdrdz ≤ Cδ∇ω_1_L^2^2+Cω_1_L^2^2ϕ_1,z_L^2^2+Cω_1_L^2^2+Cϕ_1,z_L^2^2+C. Putting the estimates (<ref>), (<ref>), (<ref>) in (<ref>), we get (d/dt)∫ |u_1|^qrdrdz+ C(1-C_1(r_1))∇(|u_1|^{q/2})^2_L^2-Cδ∇ω_1_L^2^2≤ Cϕ_1,z_L^2^2ω_1_L^2^2+Cϕ_1,z_L^2^2+Cω_1_L^2^2+C. Finally, choosing r_1, δ small enough, and adding up (<ref>) with (<ref>), gives (d/dt)∫ |ω_1|^2+|u_1|^qrdrdz+C∫ |∇ω_1|^2+|∇(|u_1|^{q/2})|^2rdrdz≤ C(ϕ_1,z_L^2^2+1) (∫ |ω_1|^2+|u_1|^q rdrdz)+Cϕ_1,z_L^2^2+C, which together with the a priori estimate (<ref>) implies ∫ω_1^2(T)+|u_1(T)|^qrdrdz+∫_0^T∫ |∇ω_1|^2+|∇(|u_1|^{q/2})|^2rdrdzds≤ C, where the constant C may depend on the initial data and T.

To prove the global regularity of the solutions, we consider 𝐯(x)_L^4≤ Cu^θ_L^4 +Cu^r_L^4+Cu^z_L^4. For the u^θ_L^4 term in (<ref>), using estimates (<ref>), (<ref>), and (<ref>), we obtain ∫ u_1^4r^4rdrdz =∫_{r≤ 1} |u_1|^q |u_1|^{ϵ'}r^4 rdrdz+∫_{r≥ 1} (u^θ)^2 |Γ^ϵ|^2/r^{4/ϵ-2}rdrdz≤Γ_0^ϵ_L^∞^{ϵ'}∫ |u_1|^qrdrdz+Γ^ϵ_0_L^∞^2∫ (u^θ)^2rdrdz≤ C.

Then we consider the equation for ω^θ=rω_1, which is ω^θ_t+u^rω^θ_r+u^zω^θ_z=(u^r/r)ω^θ+((u^θ)^2)_z/r+(Δ-1/r^2)ω^θ. Multiplying both sides of (<ref>) by ω^θ and integrating, we get (1/2)(d/dt)∫ (ω^θ)^2rdrdz+(1/2)∫ u^r((ω^θ)^2)_r+u^z((ω^θ)^2)_z rdrdz =∫(u^r/r)(ω^θ)^2rdrdz+∫((u^θ)^2)_z (ω^θ/r)rdrdz-∫ |∇ω^θ|^2+(ω^θ)^2/r^2 rdrdz. The convection terms vanish due to the incompressibility condition, and for the first nonlinear term in (<ref>), we have ∫(u^r/r)(ω^θ)^2rdrdz≤u^r/r_L^∞∫(ω^θ)^2rdrdz=ϵϕ_1,z_L^∞ω^θ_L^2^2≤ C(ϕ_1,z_L^2+∇^2ϕ_1,z_L^2)ω^θ_L^2^2≤ C(ϕ_1,z_L^2+∇ω_1_L^2)ω^θ_L^2^2, where we have used the Biot-Savart law (<ref>), u^r/r=-ϵϕ_1,z(r,z), the Sobolev embedding, and the elliptic estimate (<ref>) in the last step. For the second nonlinear term in (<ref>), we have |∫((u^θ)^2)_z(ω^θ/r)rdrdz|=|∫((u^θ)^2/r)ω^θ_zrdrdz|≤(1/2)∫(u^θ)^4/r^2rdrdz+(1/2)∇ω^θ_L^2^2. The first integral term in (<ref>) is estimated as ∫(u^θ)^4/r^2rdrdz =∫_{r≥ 1}(u^θ)^4/r^2rdrdz+∫_{r≤ 1}(u^θ)^4/r^2rdrdz≤∫ (u^θ)^4rdrdz+∫_{r≤ 1} u_1^4r^2rdrdz≤∫ (u^θ)^4rdrdz+∫_{r≤ 1} |u_1|^{4-ϵ'}|u_1|^{ϵ'}r^2rdrdz≤ C+CΓ^ϵ_0_L^∞^{ϵ'}≤ C.

Adding up the estimates (<ref>), (<ref>) and (<ref>) in (<ref>), and using Gronwall's inequality, we get that ω^θ(t)_L^2≤ C. Then, since u^r𝐞_𝐫+u^z𝐞_𝐳=ϵ∇× (-Δ)^{-1}(ω^θ𝐞_θ), using the Sobolev embedding, we have u^r𝐞_𝐫+u^z𝐞_𝐳_L^6≤ C∇ (u^r𝐞_𝐫+u^z𝐞_𝐳)_L^2≤ Cω^θ_L^2≤ C. Then, based on the a priori estimate (<ref>), we have u^r𝐞_𝐫+u^z𝐞_𝐳_L^4≤u^r𝐞_𝐫+u^z𝐞_𝐳^{3/4}_L^6u^r𝐞_𝐫+u^z𝐞_𝐳^{1/4}_L^2≤ C. This together with the estimate (<ref>) and the Prodi-Serrin criterion (<ref>) implies the global regularity of the solutions.

We have proved our main result, Theorem <ref>, using the L^2 estimate for ω_1 and the L^{4-ϵ'} estimate for u_1; an L^p estimate for ω_1 and an L^q estimate for u_1 with q=2p-pϵ'/2 yield the same result for any p>1.

Acknowledgments. The research was in part supported by the NSF Grants No. DMS-1613861 and DMS-1318377.
{ "authors": [ "Thomas Y Hou", "Pengfei Liu", "Fei Wang" ], "categories": [ "math.AP", "35Q35" ], "primary_category": "math.AP", "published": "20170824194429", "title": "Global regularity for a family of 3D models of the axisymmetric Navier-Stokes equations" }
Many embryonic deformations during development are the global result of local cell shape changes and other local active cell sheet deformations. Morphogenesis therefore relies not only on the ability of the tissue to produce these active deformations, but also on the ability to regulate them in such a way as to overcome the intrinsic variability of and geometric constraints on the tissue. Here, we explore the interplay of regulation and variability in the green alga Volvox, whose spherical embryos turn themselves inside out to enable motility. Through a combination of light sheet microscopy and theoretical analysis, we quantify the variability of this inversion and analyse its mechanics in detail to show how shape variability arises from a combination of geometry, mechanics, and active regulation.

§ INTRODUCTION

Julian Huxley's pronouncement, “In some colony like [the green alga] Volvox, there once lay hidden the secret to the body and shape of [humans]” <cit.>, emphasises that morphogenesis across kingdoms relies on the fundamental ability of organisms to, firstly, produce active forces that drive the deformations of cell sheets underlying the development of many organs and tissues and, secondly, regulate these active deformations in such a way as to complete morphogenesis. Unravelling the biomechanics of these processes is therefore of crucial importance to understand pathological errors and foster bioengineering to address these errors <cit.>. Local cellular changes can produce forces that are transmitted along the cell sheet to drive its global deformations <cit.>. Simple events of cell sheet folding such as ventral furrow formation in Drosophila can be driven primarily by cell shape changes <cit.>. In more complex metazoan developmental processes such as gastrulation <cit.>, optic cup formation <cit.>, neurulation <cit.> and related processes <cit.>, the effect of such cell shape changes is overlaid by that of other cellular changes such as cell migration, cell intercalation, cell differentiation, and cell division. In all of these processes, however, these local cellular changes occur in specific regions of the cell sheet and at specific stages of morphogenesis. On the one hand, the spatio-temporal distribution of these local cellular changes affects the global tissue shape. On the other hand, a certain amount of noise is unavoidable in biological systems; indeed, it may even be necessary for robust development, as demonstrated for example by <cit.>, who showed that variability in cell growth is necessary for reproducible sepal size and shape in Arabidopsis.

While some processes may be subject to less intrinsic variability than others, one must therefore ask: how are these processes orchestrated so that development can complete despite the intrinsic biological variability? Differences in the observed shapes of organisms at certain stages of development (i.e. what one might term their geometric variability) stem from a combination of mechanical variability (i.e. differences in mechanical properties or mechanical state) and active variability (i.e. differences in the active forces generated by individual cells). What experimental data there are suggest that the mechanical properties are subject to a large amount of variability <cit.>.
Finally, differences in the mechanical stress state of the tissue are another facet of mechanical variability that is induced by active variability. The first mechanical models of morphogenesis <cit.> represented cells as discrete collections of springs and dashpots; they were soon followed by elastic continuum models <cit.>. Notable among this early modelling of morphogenesis is, for example, the work of <cit.>, who combined models of several mechanisms of sea urchin gastrulation with measurements of mechanical properties to test the plausibility of these different mechanisms. These models heralded the emergence of a veritable plethora of mechanical modelling approaches over the subsequent decades <cit.>, though the choice of model must ultimately be informed by the questions one seeks to answer <cit.>. More recent endeavours were directed at deriving models that can represent the chemical and mechanical contributions to morphogenesis and their interactions <cit.> and at establishing the continuum laws that govern these out-of-equilibrium processes <cit.>. There is, however, a rather curious gap in the study of the variability of development: the importance of quantifying morphogenesis and its variability has been recognised <cit.>, yet accounts of the variability of development, e.g. in the loach <cit.>, have often been merely descriptive. For this reason, the interplay between mechanics and active variability has seemingly received little attention and hence a question we believe to be fundamental appears to lie in uncharted waters: how does active variability lead to geometric variability? Conversely, what does geometric variability tell us about active variability?

This is the question that we explore in this paper in the context of the development of the multicellular green alga Volvox (Fig. <ref>a). Volvox and the related Volvocine algal genera have been recognised since the work of <cit.> as model organisms for the evolution of multicellularity <cit.>, spawning more recent investigations of kindred questions in fluid dynamics and biological physics <cit.>. The cells of Volvox (Fig. <ref>b) are differentiated into biflagellated somatic cells and a small number of germ cells, or gonidia, that will form daughter colonies <cit.>. The somatic cells in the adult are embedded in a glycoprotein-rich extracellular matrix <cit.>. The germ cells undergo several rounds of cell division, after which each embryo consists of several thousand cells arrayed to form a thin spherical sheet confined to a fluid-filled vesicle. Cells are connected to their neighbours by cytoplasmic bridges (Fig. <ref>b), thin membrane tubes resulting from incomplete cell division <cit.>. Those cell poles whence the flagella will emanate, however, point into the sphere at this stage, and so the embryos must turn themselves inside out through an opening at the anterior pole of the cell sheet (the phialopore), to enable motility and thus complete their development <cit.>. Because of this process of inversion, Volvox has become a model organism for the study of cell sheet deformations, too <cit.>. Inversion in Volvox <cit.> and in related species <cit.> results from cell shape changes only, without the complicating additional processes found in metazoan development discussed above. This simplification facilitates the study of morphogenesis.
While different species of Volvox have developed different ways of turning themselves inside out <cit.>, here we focus on the so-called type-B inversion arising, for example, in Volvox globator <cit.>. This shares features such as invagination and involution with developmental events in metazoans <cit.>. This inversion scenario is distinct from type-A inversion, in which four lips open at the anterior of the shell and peel back to achieve inversion <cit.>. Type-B inversion begins with the appearance of a circular bend region at the equator of the embryo (Fig. <ref>c,d, Fig. <ref>a): cells there become wedge-shaped by developing narrow basal stalks <cit.>. At the same time, the cells move relative to the cytoplasmic bridges so as to be connected at their thin stalks, thus splaying the cells and bending and, eventually, invaginating the cell sheet <cit.>. <cit.> showed that inversion is arrested in the absence of analogous motion of cells relative to the cytoplasmic bridges in type-A inversion in Volvox carteri. The relative motion results from a kinesin associated with the microtubule cytoskeleton (Fig. <ref>, figure supplement 1); orthologues of this kinesin are found throughout the Volvocine algae <cit.>. After invagination, the posterior hemisphere moves into the anterior (Fig. <ref>e), the phialopore widens and the anterior hemisphere moves over the subjacent posterior (Fig. <ref>f) while `rolling' over a second circular bend region, the anterior cap <cit.>. Additional cell shape changes (Fig. <ref>d–f, Fig. <ref>b–d) in the anterior and posterior hemispheres are implicated in the relative contraction and expansion of either hemisphere with respect to the other <cit.>. This plethora of cell shape changes is possible as Volvox cells do not have a cell wall <cit.>.

In a previous study <cit.>, we combined light sheet microscopy and theory to analyse the early stages of inversion, showing that only a combination of active bending and active stretching (i.e. expansion or contraction) can account for the cell sheet deformations observed during invagination. The crucial role of active stretching was also highlighted by <cit.> who showed that type-A inversion in Volvox carteri cannot complete if acto-myosin mediated contraction is inhibited chemically. We later analysed the mechanics of this competition between bending and stretching in more detail <cit.>. Here, we analyse experimentally the variability of the shapes of inverting Volvox globator at consecutive stages of inversion. We refine our theoretical model to capture later stages of the inversion process, and finally combine theory and experiment to untangle the geometric, mechanical, and active contributions to the observed spatial structure of the shape variations.

§ RESULTS

We acquired three-dimensional time-lapse visualisations of inverting Volvox globator embryos (Video 1) using a selective-plane-illumination-microscopy setup (Methods) based on the OpenSPIM system <cit.>. Data were recorded for 13 parent spheroids containing, on average, 6 embryos. Summary statistics for 33 embryos were obtained from the recorded z-stacks and, for a more quantitative analysis of inversion, embryo outlines were traced on midsagittal sections of 11 of the recorded inversion processes, selected for optimal image quality (Methods). In our previous work <cit.>, we discussed in detail three geometric descriptors of the traced embryo outlines, which we have reproduced for this dataset (Methods): * the distance e (Fig.
<ref>a) from the posterior pole to the plane of the circular bend region; this serves as an indicator of the progress of the `upwards' movement of the posterior hemisphere; * the embryonic surface area A (Fig. <ref>b), which was computed by determining a surface of revolution from each half of the midsagittal slice and averaging the two values for each timepoint; * the minimal (most negative) value κ_∗ of the meridional curvature in the bend region (Fig. <ref>c). We have computed three additional descriptors associated with the progress of later stages of inversion: * the diameter d of the phialopore (Fig. <ref>d) as an indicator of progress of inversion of the anterior hemisphere; * the width w of the bend region (Fig. <ref>e), where the bend region is defined as the region of negative curvature; * the position of the bend region (Fig. <ref>f), measured along the arclength of the deformed shell from the posterior pole to the midpoint of the bend region. The computation of these descriptors is discussed in the Methods section.

Each of these descriptors evolves in qualitatively similar ways in individual embryos, yet their evolution occurs over different timescales in different embryos, and the local maxima in surface area (Fig. <ref>b) and in phialopore width (Fig. <ref>d) occur at different relative times in different embryos. This initial impression of the variability of inversion is confirmed by the analysis of three summary statistics: (1) the duration of inversion, from appearance of the bend region to closure of the phialopore, (2) the diameter of the embryos post-inversion, (3) the relative time during inversion at which the phialopore starts to open. Histograms of these quantities in Fig. <ref>a–c reveal considerable variability, thus showing that the noise affects not only the global duration of inversion but also the relative timing of parts of it. We additionally note that there is no correlation between the size of an embryo and the duration of its inversion (Fig. <ref>d), not even between embryos from the same parent spheroid. It is natural to ask to what extent the different deformations of inversion must arise in a particular order: while invagination occurs before phialopore opening in all our samples, analysis of characteristic `checkpoints' of inversion (Fig. <ref>, figure supplement 1) reveals that there is still considerable leeway in the timing of posterior inversion and phialopore opening. To further quantify the variability of inversion, we must define an average inversion sequence; our averaging approach must take into account these different types of variability.

§.§ The Local Variability of Inversion

To define an average inversion sequence and analyse its mechanics, we compare the local geometry of the traced curves. The question of how to define an appropriate metric for this kind of comparison goes back at least to the work of D'Arcy Thompson <cit.>, and is altogether a rather philosophical one, to which there is no unique answer. Thompson showed, for example, how the outlines of fish of different species could be mapped onto one another by dilations, shears, and compositions thereof. In Volvox inversion, these shape differences are likely to arise from variations in cell shape and variations in the positions of cell shape changes.
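One standard way of comparing such sequences while allowing for local stretching, in time and along the outlines, is dynamic time warping, which underlies the alignments described below. The following Python fragment is an illustrative sketch rather than the code used for the analyses reported in the Methods: the descriptor arrays and the point-wise distance are placeholders.

    import numpy as np

    def dtw_align(a, b, dist=None):
        """Dynamic-time-warping alignment of two descriptor sequences.
        a, b: arrays of shape (n, d) and (m, d); each row is one time point."""
        if dist is None:
            dist = lambda x, y: np.linalg.norm(x - y)
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                # a step may advance either sequence or both: local stretching in time
                D[i, j] = dist(a[i - 1], b[j - 1]) + min(
                    D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        # backtrack to recover the optimal correspondence of time points
        path, i, j = [], n, m
        while i > 1 or j > 1:
            path.append((i - 1, j - 1))
            step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        path.append((0, 0))
        return D[n, m], path[::-1]

The warping path returned by this recursion matches corresponding stages of two inversions even when they unfold at different rates.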
Accordingly, our averaging method allows for these local variations as well as for differences in the timing of the cell shape changes (as suggested by the analysis of the summary statistics), while recognising that the posterior poles and the rims of the phialopores of the different embryos must correspond to each other. Our approach is thus based on minimising the Euclidean distance between individual embryo shapes and their averages, with alignments obtained using dynamic time warping (Methods and Fig. <ref>, figure supplement 1). Results are shown in Fig. <ref>.

Averaging approaches that do not consider both stretching in time of individual inversions and local stretching of corresponding points of individual shapes tend to give unsatisfactory results: the simplest averaging approach is to align the inversion sequences by a single time point, say when the posterior-to-bend distance reaches half of its initial value (Methods and Fig. <ref>, figure supplement 2). The absence of time stretching, however, means that large variations arise at later stages of inversion. (Given the dramatic embryonic shape changes during inversion, it is not surprising that there should be no single parameter that could be used to align inversions of different embryos.) A better alignment is obtained if we allow stretching in time (Methods and Fig. <ref>, figure supplement 3), but this method, without local stretching of individual shapes relative to each other, produces unsatisfactory kinks in the bend region of the average shapes (Fig. <ref>, figure supplement 3).

The averages reveal that inversion seems to proceed at an approximately constant speed relative to the average inversion sequence (Fig. <ref>a,b). However, the alignment shows that different stages of inversion take different times in different embryos (Fig. <ref>a), with some embryos seeming to linger in certain stages. This is the same non-linearity that we already saw earlier on the timelines in Fig. <ref>, figure supplement 1 obtained from the measurements in Fig. <ref>. To analyse the local variations of the embryo shapes, we define, at each point of the average shapes, a covariance ellipse. The curves that are parallel to the average shape and tangent to the covariance ellipse define what we shall term the standard deviation shape. These standard deviation shapes measure the variability of the average shapes and are shown in Fig. <ref>. The variations they represent naturally divide into two components: first, those variations that are parallel to the average shape, and second, those perpendicular to the average shape. The former represent mere local stretches of the average shapes, while the latter correspond to actual variations of the shapes; we shall therefore refer to the thickness of the standard deviation shapes as `shape variation' in what follows. We report the mean shape variation and its standard error in Fig. <ref>c. This plot shows that the mean shape variation reaches a maximal value around the stages in Fig. <ref>g–i: different embryos start from the same shape and reach the same inverted shape after inversion (up to a scaling), but may take different inversion paths. Plotting the mean shape variation for different averaging methods (Fig. <ref>, figure supplement 1), we confirm that the present averaging method yields a better alignment than the alternative methods discussed earlier. It is intriguing, however, to note the spatial structure of the local shape variations.
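Before examining that structure, we note that the decomposition just described is straightforward to compute from corresponding points of the aligned outlines. The sketch below is again illustrative: the array layout is an assumption, and the actual analysis operates on the traced splines.

    import numpy as np

    def shape_variation(points, tangents):
        """Per-point decomposition of shape variability.
        points:   (N, P, 2) array; N embryos, P corresponding outline points each,
                  aligned to the average shape.
        tangents: (P, 2) unit tangents of the average shape.
        Returns the tangential ('stretch') and normal ('shape') standard
        deviations at each of the P points."""
        mean = points.mean(axis=0)               # average shape, (P, 2)
        dev = points - mean                      # deviations from it, (N, P, 2)
        normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
        t_var = np.einsum('npk,pk->np', dev, tangents).var(axis=0)
        n_var = np.einsum('npk,pk->np', dev, normals).var(axis=0)
        return np.sqrt(t_var), np.sqrt(n_var)

With this construction in hand, we now return to the spatial structure of the local shape variations.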
In particular, during the early stages of posterior inversion (Fig. <ref>d–f), the shape variation is smaller in the active bend region than in the adjacent anterior cap (Fig. <ref>e, the second bend region of increased positive curvature). As the phialopore opens and the anterior begins to peel back over the partially inverted posterior (Fig. <ref>h), the relative shape variation becomes smaller in the anterior cap. The initially small variation in the bend region is especially intriguing since this is where cells become wedge-shaped to drive invagination, while the anterior cap bends passively <cit.>. In other words, the shape variation is reduced in the part of the cell sheet where the active cell shape changes driving inversion arise. This correspondence characterises what one might term, from a teleological point of view, a `good' inversion. We shall focus on a less exalted question, the answer to which will be falsifiable, however: how is this spatial structure of the variability related to the mechanics of inversion? Before addressing this question, we need to analyse the mechanics of inversion in some more detail.

§.§ Active Bending and Stretching during Inversion

Which active deformations are required for inversion? In our previous work <cit.>, we addressed this question for the early stages of inversion: at a mechanical level of description, invagination arises from an interplay of active bending and stretching <cit.> associated with different types of cell shape changes. A key role is played by the cells close to the equator of the cell sheet (Fig. <ref>d, Fig. <ref>a), which become wedge-shaped <cit.>, thus splaying the cells and hence imparting intrinsic curvature to the cell sheet <cit.>. Yet no such cell wedging has been reported at the anterior cap at later stages of inversion, when the anterior hemisphere peels back over the partly inverted posterior (Fig. <ref>f, Fig. <ref>d). To resolve this conundrum, we ask whether the additional cell shape changes observed during type-B inversion <cit.> are sufficient to explain anterior peeling: cells in the anterior hemisphere have flattened, ellipsoidal shapes, while cells on the posterior side of the anterior cap are pencil-shaped (Fig. <ref>e, Fig. <ref>d).

We have previously described the early stages of inversion using a mathematical model <cit.> in which cell shape changes appear as local variations of the intrinsic (meridional and circumferential) curvatures κ_s^0,κ_ϕ^0 and stretches f_s^0,f_ϕ^0 of an elastic shell. We recall the difference between open, one-dimensional elastic filaments and two-dimensional elastic shells in this context: the former can simply adopt a shape in which the curvature and stretch are everywhere equal to their intrinsic values. For the latter, by contrast, the intrinsic curvatures and stretches may not be compatible with the global geometry, causing the shell to deform elastically and adopt actual (meridional and circumferential) curvatures κ_s,κ_ϕ and stretches f_s,f_ϕ different from the imposed intrinsic curvatures and stretches. In order to address these later stages of inversion, we must first generalise our previous mathematical model using ideas from morphoelasticity (Methods). Indeed, that model was derived under the assumption of small strains.
While the elastic strains are small indeed (since the metric tensor, which describes the deformed shape, is close to the intrinsic tensor defined by the cell shape changes), the geometric strains are large: both the metric tensor of the deformed shell and the intrinsic tensor differ considerably from the metric tensor of the undeformed sphere.

To test whether anterior peeling can be achieved by contraction of the cell sheet alone, we impose functional forms for the intrinsic stretches of the shell (Fig. <ref>a–c) representing these cell shape changes, but we do not modify the intrinsic curvatures in the anterior hemisphere (Fig. <ref>c). In particular, the linear variation of the circumferential stretch in the anterior hemisphere represents the different orientations of the ellipsoidal cells at the phialopore (Fig. <ref>c), where the long axis is the circumferential axis, and at the anterior cap, where the long axis is the meridional axis <cit.>. In our quasi-static simulation, we approximate the shape in Fig. <ref>h by a configuration with inverted posterior hemisphere (Fig. <ref>d), and displace the intrinsic `peeling front' (Fig. <ref>a,b). The shell responds by peeling (Fig. <ref>e), with shapes in qualitative agreement with the experimentally observed shapes. Since the peeling front is located at the anterior cap, where the shape variation is reduced during anterior peeling as discussed previously, we again see a correlation between reduced shape variations and the location of the active cell shape changes driving inversion. These considerations suggest that contraction is sufficient to drive the peeling stage of inversion, even without changes in intrinsic curvature. Although the position of the cytoplasmic bridges (Fig. <ref>d), on the inside end of the cells at the end of inversion <cit.>, suggests that the intrinsic curvature may change sign in the anterior hemisphere, too, this appears to be a secondary effect. Hence intrinsic bending complements intrinsic stretching. By contrast, our previous work <cit.> revealed that stretching complements bending during invagination. The roles of stretching and bending are thus interchanged during inversion of the posterior and anterior hemispheres, and the embryo uses these two different deformation modes for different tasks during inversion.

§.§.§ Analysis of Cell Shape Changes

For a more quantitative analysis of the data and to validate our model, we proceed to fit the elastic model to the experimental average shapes (Methods). In the model, we impose a larger extent of the phialopore than in the biological system, where the phialopore is initially very small (Fig. <ref>a). This is an important simplification to deal with the discrete nature of the few cells that meet up at the phialopore. Nonetheless, using fifteen fitting parameters to represent previously observed cell shape changes <cit.> in terms of the intrinsic stretches and curvatures (Methods), the model captures the various stages of inversion (Fig. <ref>). This supports our interpretation of the observed cell shape changes (Fig. <ref>) and their functions. Comparing the geometric descriptors discussed previously (Fig. <ref>) for the experimental averages and the fitted shapes (Fig. <ref>, figure supplement 1), we notice that the fitted shapes underestimate the width of the bend region. Because curvature is a second derivative of shape, it is not surprising that larger differences arise in the minimal bend region curvature of the average and fitted shapes (Fig.
<ref>, figure supplement 1). Nonetheless, the fitted values of the intrinsic curvature of the cell sheet also resolve a cell shape conundrum: during invagination, the curvature in the bend region increases (Fig. <ref>), yet <cit.> reported similar wedge-shaped cells in the bend region at early and late invagination stages, although the number of wedge-shaped cells in the bend region increases as invagination progresses <cit.>. The fitted parameters indeed suggest a constant value of the intrinsic curvature at early stages of inversion, while the actual curvature in the bend region increases (Fig. <ref>a). This serves to illustrate that the intrinsic parameters cannot simply be read off the deformed shapes and confirms that there is but a single type of cell shape change, expanding in a wave to encompass more cells, and thus driving invagination. It is only at later stages of inversion, when the wedge-shaped cells in the bend region become pencil-shaped <cit.>, that both the intrinsic curvature and the actual curvature in the bend region decrease (Fig. <ref>a).

The fitted shapes also yield the posterior and anterior limits of the bend region (Fig. <ref>b), i.e. the original positions, relative to the undeformed sphere, of the corresponding cells. Because of the varying spatial stretches of the shell, these positions cannot simply be read off the deformed shapes, but must be inferred from the fits. The fitted data suggest that invagination results from an intrinsic bend region of constant width, complemented by other cell shape changes (Fig. <ref>d, Fig. <ref>b,c). The region of wedge-shaped cells (and, by implication, of negative intrinsic curvature) starts to expand into the posterior at constant speed (i.e. at a constant number of cell shape changes per unit of time) between the stages in Fig. <ref>e,f. Anterior inversion starts about five minutes later when this region begins to expand into the anterior just after the stage in Fig. <ref>g.

Fig. <ref> also shows the stretches f_s,f_ϕ in the fitted shapes. It is particularly interesting to relate the values of f_s,f_ϕ in the fitted shapes to the measurements of individual cells by <cit.>: before inversion starts, the cells are teardrop-shaped, and measure 3-5 µm in the plane of the cell sheet. As invagination starts, the cells in the posterior hemisphere become spindle-shaped, measuring 2-3 µm. This suggests values f_s,f_ϕ≈ 0.6-0.66 in the posterior hemisphere during invagination, in agreement with the fitted data (Fig. <ref>d). At later stages of inversion, the cells in the bend region become pencil-shaped, measuring 1.5-2 µm in the meridional direction, suggesting smaller values f_s≈ 0.4-0.5 there, again in agreement with the fitted data (Fig. <ref>h). The large stretches f_s>2 seen in the anterior cap during inversion of the posterior hemisphere (Fig. <ref>f) cannot be accounted for by the disc-shaped cells in the anterior (which only measure 4-6 µm in the meridional direction). While examination of the thin sections of <cit.> does suggest, in qualitative agreement with the fits, that the largest meridional stretches arise in the anterior cap, the fact that the model overestimates the actual values of these stretches may stem from the simplified modelling of the phialopore. Further, at the very latest stages of inversion (Fig.
<ref>j), the fitted shapes suggest very small values f_s<0.3 and corresponding values f_ϕ>3 that are not borne out by the cell measurements.

§.§.§ Phialopore Opening and Cell Rearrangement

To understand how these values of the stretches at odds with the observed cell shape changes arise in the fitted shapes, we must analyse the opening of the phialopore in more detail. The observations of <cit.> show that the cytoplasmic bridges stretch considerably, to many times their initial length, as the phialopore opens. Circumferential elongation of cells as a means to increase effective radius was discussed in some detail by <cit.>, but is not sufficient to explain the circumferential stretches observed at the phialopore. Additional elongation of cytoplasmic bridges as a means to further increase the effective radius (Fig. <ref>) may suffice to produce the large circumferential stretches, but does not explain the small values of meridional stretch at the phialopore in the fitted shapes. For this reason, we additionally imaged the opening of the phialopore using confocal laser scanning microscopy (Methods) to resolve single cells close to the phialopore (Video 2).

[Figure: Mechanisms of phialopore stretching: cell shape changes, stretching of cytoplasmic bridges, and cell rearrangements. Red lines represent cytoplasmic bridges; fainter colours signify other, out-of-plane cells. CSC: cell shape changes, CB: cytoplasmic bridge.]

The data reveal that cells rearrange near the phialopore, suggesting an additional mechanism to stretch the phialopore sufficiently for the anterior to be able to peel over the inverted posterior (Fig. <ref>). Video 2 shows how, initially, only a small number of cells form a ring at the anterior pole. When the phialopore widens, cells that were initially located away from this initial ring come to be positioned at the rim of the phialopore. It is unclear whether the cytoplasmic bridges between these cells stretch or break, or whether these cells were not connected by cytoplasmic bridges in the first place. While such cell rearrangement is beyond the scope of the current model, it is nevertheless captured qualitatively by the small values of f_s near the phialopore. <cit.> observed elongation of cytoplasmic bridges near the phialopore of Volvox aureus, but not in small fragments of broken-up embryos, and concluded that the elongation of cytoplasmic bridges was the result of passive mechanical forces. By contrast, in our model, the opening of the phialopore is the result of active cell shape changes there. This discrepancy may herald a breakdown of the approximations made to represent the phialopore. The data also hint that there may be a different mechanical contribution at later stages of inversion (Fig. <ref>i), where the rim of the phialopore may be in contact with the inverted posterior. Since the model does not resolve the rim of the phialopore in the first place, we do not pursue this further here. For completeness of the mechanical analysis, we analyse such a contact configuration in Appendix <ref>, where we also discuss a toy problem to highlight the intricate interplay of mechanics and geometry in the contact configuration.

§.§ Mechanics and Regulation of Local Shape Variations

We now return to the spatial structure of the shape variations discussed previously.
It is clear that some of this structure is geometrical: since the shapes are aligned so that the positions of their centres of mass along the axis coincide, the shape variations accumulate, and are thus expected, for example, to increase in the anterior hemisphere, towards the phialopore, as at the stage in Fig. <ref>c. At the same stage, however, the shape variation is smaller in the bend region than in the adjacent anterior cap. Yet both of these regions are close to the centre of mass, and so we do not expect this difference to arise from mere geometric accumulation of shape variations. We must therefore ask: can this structure arise purely mechanically (i.e. from a uniform distribution of the intrinsic parameters), but possibly as a statistical fluke, or must there be some regulation (i.e. non-uniform variation of the intrinsic parameters)?

To answer this question, we analyse random perturbations of the fitted intrinsic parameters of the inversion stage in Fig. <ref>c. We observe that, if the relative size of perturbations (the `noise level') exceeds about 4% at this stage of inversion, computation of the perturbed shapes fails for some parameter choices. This mechanical effect is not surprising: our previous analysis of invagination <cit.> revealed strong shape non-linearities and the possibility of bifurcations as the magnitude of the intrinsic curvature in the bend region is increased. While we may therefore expect more leeway in some parameters than in others, we shall simply discard those perturbations for which the computation fails; further estimation of the distribution of possible perturbations is beyond the scope of the present discussion. We now estimate, for each noise level, the mean shape variation from 1000 perturbations of the fitted shape. By comparing this to the mean shape variation estimated from the N=22 embryo halves in Fig. <ref>c, we roughly estimate a noise level of 7.5% (Fig. <ref>a). At this noise level, about 15% of perturbations fail; while the non-uniformities are small, they are statistically significant (Methods).

With this noise level, we obtain 10000 samples of N=22 perturbations to the fitted shape each (Fig. <ref>b), and we compute their averages in the same way as for the experimental samples. While these samples qualitatively capture the spatial structure of the shape variation, they overestimate the shape variation at the poles. More strikingly, they feature a local maximum of the shape variation in the bend region, rather than in the anterior cap. From the sample distribution of the position of these local maxima (Fig. <ref>c), it is clear that the experimental distribution, with its local maximum in the anterior cap, is very unlikely to arise under this model. We make this statement more precise statistically in the Methods section. To explain the observed structure of the shape variation, we therefore allow more variability in the meridional stretch in the anterior cap (with a noise level of 80%, compared to 2.5% for the remaining parameters to reproduce the mean shape variation). The resulting distribution is consistent with the experimentally observed position of the local maximum of shape variation in the anterior cap (Fig. <ref>b,c). While still overestimating the variability near the posterior pole, this modified distribution of the parameter variability captures the magnitude of the variability in the anterior cap much better than the original one. Thus, at this early stage of inversion (Fig.
<ref>c), the observed embryo shapes are consistent with an increased variation of the intrinsic meridional stretch in the anterior cap. We can take the interpretation of this active regulation (or lack thereof) further by relating it to the observed cell shape changes: at the stage of Fig. <ref>c, the variations of the meridional stretch in the anterior cap correspond to the formation of disc-shaped cells there (Fig. <ref>d). This indicates that invagination and initiation of the expansion of the anterior hemisphere (via the formation of disc-shaped cells) are really two separate processes, the relative timing of which is not crucial. (The formation of disc-shaped cells starting at different times also explains the large noise level in the meridional stretch under the modified model, although there is no fitting involved here.) This adds to our earlier point, that these processes rely on different deformation modes (active bending for invagination and active contraction and stretching for inversion of the anterior hemisphere). These considerations also rationalise our second observation concerning the spatial structure of shape variations, that the variation in the anterior cap is reduced as inversion of the posterior hemisphere ends (Fig. <ref>h): there are no longer two separate processes at work. We finally point to a purely mechanical aspect of the structure of the shape variations: despite the increased variability in the anterior cap, the mechanics ensure that the variability is lowest in the bend region, where the main cell shape changes driving invagination take place.

§ DISCUSSION

In this paper, we have combined experiment and theory to analyse the variability of Volvox inversion and obtain a detailed mechanical description of this process. From observations of the structure of the variability of the shapes of inverting Volvox embryos, we showed, using our mathematical model, that this structure results from a combination of geometry, mechanics, and active regulation. The simplest scenario with which the observed shape variations are consistent is that type-B inversion in Volvox globator results from two separate processes, with most of the variability at the invagination stage attributed to the relative timing of these processes in individual embryos. The difference between these processes is mirrored, at a mechanical level, by the different types of deformations driving them: the first process, to invert the posterior hemisphere, mainly relies on active bending, whereas the second process, to invert the anterior hemisphere, is mainly driven by active expansion and contraction. We anticipate that these ideas and methods can be applied to other morphogenetic events in other model organisms to add to our understanding of the regulation of morphogenesis: what amount of regulation, be it spatial or temporal, of the cell-level processes is there, and how does it relate to the amount required mechanically for the processes to be able to complete? Additionally, <cit.> showed that diffusion of two morphogens with inhibition à la <cit.> has error-correcting properties that can explain the precise domain specification that is observed in Drosophila embryos in spite of the huge variability of morphogen gradients <cit.>.
Does the interplay of geometry and mechanics yield analogous error-correcting properties?

While we have begun to analyse the mechanical regulation of development in the context of Volvox inversion, our answers thus far have been either negative (excluding certain mechanisms of regulation) or of what one might term the Occam's razor variety (invoking the law of parsimony to find the simplest modification of the model that can explain the observations). This approach of testing falsifiable hypotheses mitigates the risk of drawing conclusions that are mere teleology <cit.>. Nonetheless, a fuller answer to the questions above requires estimation of the variability of the model parameters from the experimental data, yet that endeavour entails significant statistical, computational, and experimental difficulties: to estimate the variability with statistical significance, we need a large number of experimental samples to estimate the experimental distribution; and for each step of the optimisation algorithm used to estimate the large number of variability parameters, a large number of computational samples must be computed to estimate the distribution under the model. Similar difficulties arise when estimating the variability allowed mechanically. While we have previously noted <cit.> that the dynamic data for type-B inversion suggest that invagination proceeds without a `snap-through' bifurcation, there is no general requirement for individual developmental paths to lie on one and the same side of a mechanical bifurcation boundary. This poses an additional challenge for modelling approaches.

After this discussion of general challenges for a mechanobiological analysis of morphogenesis and its regulation, we mention some of the remaining questions specific to Volvox inversion: our model does not resolve the details of the phialopore, and hence does not describe the closure of the phialopore at the end of inversion, which remains a combined challenge for experiment and theory: as discussed above, the cytoplasmic bridges elongate drastically at the phialopore <cit.>, and confocal imaging has revealed the possibility of rearrangements within the cell sheet at the phialopore. Do some cytoplasmic bridges rend to make such rearrangements possible? Understanding the details of the opening of the phialopore may also require answering a more fundamental question, the answer to which has remained elusive <cit.>: what subcellular structures are located within the cytoplasmic bridges, and how is it possible for them to stretch to such an extent? At the theoretical level, rearrangements of cells near the phialopore raise more fundamental questions of morphoelasticity <cit.>: in particular, how does one describe the evolution of the boundary of the manifold underlying the elastic description? Cytoplasmic bridges rending next to the phialopore would lead to the formation of lips similar to those seen in type-A inversion <cit.>. Is there a simple theory to describe the elasticity of this non-axisymmetric setup?

At the close of this discussion, it is meet to briefly dwell on a question of more evolutionary flavour: how did different species of Volvox evolve different ways of turning themselves inside out? Mapping inversion types to a phylogenetic tree of Volvocine algae shows that different inversion types evolved several times independently in different lineages <cit.>. Additionally, <cit.> reported that in Volvox rousseletii and Volvox capensis, inversion type depends on the (sexual or asexual) reproduction mode.
This may be a manifestation of the poorly understood role of environmental and evolutionary cues in morphogenesis <cit.>, but it is natural to wonder whether there is a mechanical side to this issue. Ultimately, this is another incentive to study the mechanics of type-A inversion in more detail.

§ METHODS AND MATERIALS

§.§ Acquisition of Experimental Data

Wild-type strain Volvox globator Linné (SAG 199.80) was obtained from the Culture Collection of Algae at the University of Göttingen, Germany <cit.>, and cultured as previously described <cit.> with a cycle of 16 h light at 24°C and 8 h dark at 22°C.

§.§.§ OpenSPIM Imaging

A selective plane illumination microscope (SPIM) was assembled based on the OpenSPIM setup <cit.>, with modifications to accommodate a Stradus® Versalase™ laser system with multiple wavelengths (Vortran Laser Technology, Inc., Sacramento, CA, USA) and a CoolSnap Myo CCD camera (1940× 1460 pixels; Photometrics, AZ, USA). Moreover, to decrease the loss of data due to shadowing, a second illumination arm was added to the setup (Fig. <ref>). Illumination from both sides improved the image quality and enabled re-slicing of the z-stacks when embryos began to spin during anterior inversion. Volvox globator parent spheroids were mounted in a column of low-melting-point agarose and suspended in fluid medium in the sample chamber. To visualise the cell sheet deformations of inverting Volvox globator embryos, chlorophyll-autofluorescence was excited at λ=561 nm and detected at λ>570 nm. Z-stacks were recorded at intervals of 60 s over 4-6 hours to capture inversion of all embryos in a parent spheroid. We acquired time-lapse data of 13 different parent spheroids, each containing 4-7 embryos.

[Figure: SPIM imaging setup. a: beamsplitter cube, b: mirror, c: beam expander, d: cylindrical lens, e: telescope, f: illumination objective, g: detection objective, h: emission filter, i: camera.]

§.§.§ Confocal Laser Scanning Microscopy

Samples were immobilised on glass-bottom dishes by embedding them in low-melting-point agarose and covered with fluid medium. Chlorophyll-autofluorescence was excited at λ = … nm and detected at λ>647 nm. Z-stacks were recorded at intervals of 30 s over 1-2 hours to capture inversion of a single embryo. Trajectories of individual cells close to the phialopore were obtained using Fiji <cit.>. Experiments were carried out using an Observer Z1 spinning-disk microscope (Zeiss, Germany).

§.§.§ Image Tracing

To ensure optimal image quality (traceability) for the quantitative analyses of inversion, we selected, from the inversion processes recorded with the SPIM, 11 inversions (in 6 different parent spheroids) in which the acquisition plane was initially approximately parallel to the midsagittal plane of the embryos. Midsagittal cross-sections were obtained using Fiji <cit.> and Amira (FEI, OR, USA). Splines were fitted to these cross-sections using the following semi-automated approach implemented in Python/C++: in a preprocessing step, images were bandpass-filtered to remove short-range noise and large-range intensity correlations. Low-variance Gaussian filters were applied to smooth out the images slightly.
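A minimal sketch of this preprocessing step (our own illustration, not the authors' Python/C++ code; the filter widths are assumptions):

    import numpy as np
    from scipy import ndimage

    def preprocess(image, sigma_noise=1.0, sigma_bg=20.0, sigma_smooth=0.8):
        """Bandpass-filter an image and smooth it slightly.

        A difference of Gaussians removes short-range noise (sigma_noise)
        and large-range intensity correlations (sigma_bg); a final
        low-variance Gaussian filter smooths the result. All widths are
        illustrative values, not those used in the study.
        """
        img = np.asarray(image, dtype=float)
        bandpassed = (ndimage.gaussian_filter(img, sigma_noise)
                      - ndimage.gaussian_filter(img, sigma_bg))
        return ndimage.gaussian_filter(bandpassed, sigma_smooth)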
Splines were obtained from the pre-processed images I(x) using the active contour model <cit.>, with modifications to deal with intensity variations and noise in the image: the spline x_s(s), where s is arclength, minimises an energy

ℰ[x_s] = ℰ_image[x_s] + ℰ_spline[x_s] + ℰ_skel[x_s],

where

ℰ_image[x_s] = -α∫ I(x_s(s)) ds,
ℰ_spline[x_s] = β∫ ‖∂²x_s/∂s²‖² ds + γ(∫ds - L_0)²,
ℰ_skel[x_s] = δ∫ I_skel(x_s(s)) ds,

wherein α, β, γ, δ are parameters, L_0 is the estimated length of the shape outline, and I_skel is obtained by skeletonising I using the algorithm of <cit.> to minimise the number of branches. The energy ℰ was minimised using stochastic gradient descent. Initial guesses for the splines were obtained by manually initialising about 15 timepoints for each inversion using a few guidepoints and polynomial interpolation. An initial guess for other frames was obtained from these frames by interpolation; these interpolated shapes were used to estimate L_0.

With δ = 0, the standard active contour model of <cit.> is recovered. We found that this model was not sufficient to yield fits of sufficient quality, because of the existence of local minima at small values of α, while larger values of α lead to noisy splines. Thresholding methods on their own were not sufficient either, because of branching and, in particular, since they failed to capture the bend region properly. Dynamic thresholding methods <cit.> are not applicable either, because of the fast variations of the brightness of the images. The modified active contour model did, however, produce good fits when we progressively reduced δ to zero with increasing iteration number of the minimisation scheme, yielding smooth splines while overcoming the local minima (or, from the point of view of the skeletonisation method, choosing the correct, branchless part of the skeleton). All outlines obtained from this algorithm were manually checked and corrected.

§.§ Analysis of Traced Embryo Shapes

From the traced cell sheet outlines, anterior-posterior axes of the embryos were determined as follows: for shapes for which the bend region was visible on either side of the cross-section, the embryo axis was defined to be the line through the centre of mass of the shape that is perpendicular to the common tangent to the two bend regions (the apex line). Shapes were then rotated and translated manually so that their axes coincided. Since embryos do not rotate much before the flagella grow, the orientation of the axes of the earliest traces (for which the bend regions are not apparent) was taken to be the same as that of the earliest timepoint for which two bend regions were visible. The intersection of the embryo trace and axis defines the posterior pole. After manually recentring some embryos with more pronounced asymmetry, embryos were halved to obtain N=22 embryo halves.

§.§.§ Computation of Inversion Descriptors

From the aligned shapes, the geometric descriptors of inversion reported in Fig. <ref> were computed as follows: the posterior-to-bend distance e was computed as the distance from the apex line to the posterior pole. The maximal surface area A_max and the most negative value of curvature κ_∗ in the bend region were computed as described previously <cit.>; traces were smoothed before computing the curvature. The phialopore width d was computed as the absolute distance between the two ends of a complete embryo trace.
The bend region was defined as the region of negative curvature; the distance between the first and last points of negative curvature defined the bend region width. The bend region position is defined by the distance, along the embryo trace, between the posterior pole and the midpoint of the bend region. (The latter may differ from the point where the most negative value of curvature is attained.) The values of bend region width and position obtained for each embryo half were averaged to yield the reported values.

§.§.§ Aligning and Averaging Embryo Shapes

To align embryos to each other, one embryo half was arbitrarily taken as the reference shape, and T=10 regularly spaced timepoints were chosen for fitting. (These timepoints were chosen to be well after invagination had started and before the phialopore had closed, so that defining the start and end of inversion was not required.) For each of the remaining N-1 embryo halves, a scale and T corresponding timepoints were then sought, with shapes being (linearly) interpolated at intermediate timepoints. The interpolated and scaled shapes were centred so that the centres of mass of the cross-sections coincided. This fixes the degree of freedom of translation parallel to the embryo axis; the position perpendicular to the axis is fixed by requiring that the embryo axes coincide (Fig. <ref>, figure supplement 1a). The motivation for using the centres of mass of the cross-sections (rather than those of the embryos, which assign the same mass to each cell by assigning more mass to those points of the cross-section that are farther away from the embryo axis) is a biological one: because of the cylindrical symmetry of the cell shape changes, this average assigns the same mass to each cell shape change. For aligning embryo shapes, we distribute M=100 averaging points uniformly along the (possibly different) arclengths of the embryo halves. Corresponding points were determined using dynamic time warping (DTW) as described by e.g. <cit.>, and the distances between these shapes and their averages were minimised as explained in what follows. The parameters describing the alignment are thus the scale factors S_1 = 1, S_2, …, S_N and the averaging time points τ_1 = (τ_11, τ_12, …, τ_1T), τ_2, …, τ_N, where τ_1 is fixed. Each choice of these parameters yields a set of shapes X_1 = (x_11, …, x_1M), X_2, …, X_N with points matched up by maps σ_1, σ_2, …, σ_N obtained from the DTW algorithm. The effect of the local stretching allowed by the DTW algorithm is illustrated in Fig. <ref>, figure supplement 1b,c. The mean shapes having been determined, the sum of Euclidean distances between shapes of individual embryos and the mean,

∑_t=1^T {∑_n=1^N ∑_m=1^M (x_nσ_n(m) - x̄_m)²}^1/2, where x̄_m = (1/N) ∑_n=1^N x_nσ_n(m),

was minimised over the space of all these alignment parameters using a Matlab (The MathWorks, Inc.) routine, modified to incorporate the variant of the Nelder–Mead algorithm suggested by <cit.> for problems with a large number of parameters. After the algorithm had converged, each of the alignment parameters was modified randomly, and the algorithm was run again. This was repeated until the alignment score defined by (<ref>) did not decrease further. The means x̄_1, x̄_2, …, x̄_M for the alignment minimising (<ref>) define the average embryo shapes. Aligning shapes in this way using dynamic time warping requires a considerable amount of computer time.
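For concreteness, here is a minimal sketch of dynamic time warping with the diagonal-band restriction invoked below (our own illustration, assuming Euclidean distances between trace points; not the code used in the study):

    import numpy as np

    def banded_dtw(a, b, band=10):
        """DTW cost between point sequences a, b (arrays of shape (len, 2)),
        with the DTW matrix restricted to a band of half-width `band` around
        the diagonal; backtracking through D recovers the matching sigma."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(max(1, i - band), min(m, i + band) + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]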
To make the problem computationally tractable, we invoked the usual heuristics of only computing pairwise DTW distances, and of reducing the size of the DTW matrix by only computing a band centred on the diagonal. To verify the algorithm, we also ran several instantiations of the alignment algorithm without DTW (i.e. with σ_n = id) and with larger parameter randomisations, confirming that the modified Nelder–Mead algorithm finds an appropriate alignment. This also enabled us to verify that results do not change qualitatively if the centres of mass of the cross-sections are replaced with those of the embryo halves (even though, as noted in the main text, the shapes without DTW are unsatisfactory since they have kinks in the bend region that are not seen in individual embryo shapes).

For the simple alternative averaging method in Fig. <ref>, figure supplement 2, different numbers of averaging points were distributed at equal arclength spacing along all individual shapes. Differences in arclengths of individual embryos mean that the rims of the phialopores of individual embryo halves are not necessarily matched up (Fig. <ref>, figure supplement 1c). No time stretching was applied. The averaging method in Fig. <ref>, figure supplement 3, is the method discussed above, without DTW (i.e. with σ_n = id).

§.§ Elastic Model

We consider a spherical shell of radius R and uniform thickness h ≪ R (Fig. <ref>a), characterised by its arclength s and distance from the axis of revolution ρ(s), to which correspond arclength S(s) and distance from the axis of revolution r(s) in the axisymmetric deformed configuration (Fig. <ref>b). We define the meridional and circumferential stretches

f_s(s) = dS/ds, f_ϕ(s) = r(s)/ρ(s).

The position vector of a point on the midsurface of the deformed shell is thus

r(s,ϕ) = r(s)u_r(ϕ) + z(s)u_z,

in a right-handed set of axes (u_r, u_ϕ, u_z), and so the tangent vectors to the deformed midsurface are

e_s = r'u_r + z'u_z, e_ϕ = ru_ϕ,

where dashes denote differentiation with respect to s. By definition, r'² + z'² = f_s², and so we may write r' = f_s cosβ, z' = f_s sinβ. Hence the normal to the deformed midsurface is

n = (r'u_z - z'u_r)/f_s = cosβ u_z - sinβ u_r.

We now make the Kirchhoff `hypothesis' <cit.>, that the normals to the undeformed midsurface remain normal to the deformed midsurface (Fig. <ref>c). Taking a coordinate ζ across the thickness h of the undeformed shell, the position vector of a general point in the shell is

r(s,ϕ,ζ) = ru_r + zu_z + ζn = (r - ζ sinβ)u_r + (z + ζ cosβ)u_z.

The tangent vectors to the shell are thus

e_s = f_s(1 - κ_sζ)(cosβ u_r + sinβ u_z), e_ϕ = ρ f_ϕ(1 - κ_ϕζ)u_ϕ,

where κ_s = β'/f_s and κ_ϕ = sinβ/r are the curvatures of the deformed midsurface. The metric of the deformed shell under the Kirchhoff hypothesis accordingly takes the form

dr² = f_s²(1 - κ_sζ)² ds² + f_ϕ²(1 - κ_ϕζ)² ρ² dϕ².

The geometric and intrinsic deformation gradient tensors are thus

F^g = ([ f_s(1-κ_sζ) 0; 0 f_ϕ(1-κ_ϕζ) ]), F^0 = ([ f_s^0(1-κ_s^0ζ) 0; 0 f_ϕ^0(1-κ_ϕ^0ζ) ]),

where f_s^0, f_ϕ^0 and κ_s^0, κ_ϕ^0 are the intrinsic stretches and curvatures of the shell.
Thence, invoking the standard multiplicative decomposition of morphoelasticity <cit.>, the elastic deformation gradient tensor is

F = F^g(F^0)^-1 = ([ f_s(1-κ_sζ)/(f_s^0(1-κ_s^0ζ)) 0; 0 f_ϕ(1-κ_ϕζ)/(f_ϕ^0(1-κ_ϕ^0ζ)) ]).

While we do not make any assumption about the geometric or intrinsic strains derived from F^g and F^0, respectively, we assume that the elastic strains derived from F remain small; we may thus approximate

ε_ss ≈ f_s(1-κ_sζ)/(f_s^0(1-κ_s^0ζ)) - 1, ε_ϕϕ ≈ f_ϕ(1-κ_ϕζ)/(f_ϕ^0(1-κ_ϕ^0ζ)) - 1,

with the off-diagonal elements vanishing, ε_sϕ = ε_ϕs = 0. For a Hookean material with elastic modulus E and Poisson's ratio ν <cit.>, the elastic energy density (per unit extent in the meridional direction) is found by integrating across the thickness of the shell:

ℰ/(2πρ) = E/(2(1-ν²)) ∫_-h/2^h/2 (ε_ss² + ε_ϕϕ² + 2νε_ssε_ϕϕ) dζ
= Eh/(2(1-ν²)) {(1 + (h²/4)κ_s^0²)E_s² + (1 + (h²/4)κ_ϕ^0²)E_ϕ² + 2ν(1 + (h²/12)(κ_s^0² + κ_s^0κ_ϕ^0 + κ_ϕ^0²))E_sE_ϕ}
+ Eh³/(24(1-ν²)) {K_s² + K_ϕ² + 2νK_sK_ϕ - 4κ_s^0E_sK_s - 4κ_ϕ^0E_ϕK_ϕ - 2ν(κ_s^0 + κ_ϕ^0)(E_ϕK_s + E_sK_ϕ)},

where we have expanded the energy up to third order in the thickness, and where we have defined the shell strains and curvature strains

E_s = (f_s - f_s^0)/f_s^0, E_ϕ = (f_ϕ - f_ϕ^0)/f_ϕ^0, K_s = (f_sκ_s - f_s^0κ_s^0)/f_s^0, K_ϕ = (f_ϕκ_ϕ - f_ϕ^0κ_ϕ^0)/f_ϕ^0.

As in our previous work <cit.>, the elastic modulus is an overall constant that ensures that ℰ has units of energy, but does not otherwise affect the shapes. We shall also assume that ν = 1/2 for incompressible biological material; the cell size measurements of <cit.> for type-A inversion in Volvox carteri support this assumption qualitatively. (These considerations also explain why we do not perturb these mechanical parameters in our analysis of the shape variations.) We finally set h/R = 0.15 as in our previous work.

§.§.§ Derivation of the Governing Equations

The derivation of the governing equations proceeds similarly to standard shell theories <cit.>. In fact, the resulting equations turn out to have a form very similar to those of standard shell theories, but a host of extra terms arise in the expressions for the shell stresses and moments due to the assumptions of morphoelasticity. The variation of the elastic energy takes the form

δℰ/(2πρ) = n_s δE_s + n_ϕ δE_ϕ + m_s δK_s + m_ϕ δK_ϕ,

with

δE_s = δf_s/f_s^0 = (1/f_s^0)(secβ δr' + f_s tanβ δβ), δE_ϕ = δf_ϕ/f_ϕ^0 = δr/(f_ϕ^0 ρ),
δK_s = δ(f_sκ_s)/f_s^0 = δβ'/f_s^0, δK_ϕ = δ(f_ϕκ_ϕ)/f_ϕ^0 = (cosβ/(f_ϕ^0 ρ)) δβ,

wherein dashes again denote differentiation with respect to s, and where the shell stresses and moments are defined by

n_s = Eh/(1-ν²) {E_s + νE_ϕ + (h²/12)(3κ_s^0²E_s + ν(κ_s^0² + κ_s^0κ_ϕ^0 + κ_ϕ^0²)E_ϕ - 2κ_s^0K_s - ν(κ_s^0 + κ_ϕ^0)K_ϕ)},
n_ϕ = Eh/(1-ν²) {E_ϕ + νE_s + (h²/12)(3κ_ϕ^0²E_ϕ + ν(κ_s^0² + κ_s^0κ_ϕ^0 + κ_ϕ^0²)E_s - 2κ_ϕ^0K_ϕ - ν(κ_s^0 + κ_ϕ^0)K_s)},

and

m_s = Eh³/(12(1-ν²)) {K_s + νK_ϕ - 2κ_s^0E_s - ν(κ_s^0 + κ_ϕ^0)E_ϕ},
m_ϕ = Eh³/(12(1-ν²)) {K_ϕ + νK_s - 2κ_ϕ^0E_ϕ - ν(κ_s^0 + κ_ϕ^0)E_s}.

Defining

N_s = n_s/(f_s^0 f_ϕ), N_ϕ = n_ϕ/(f_ϕ^0 f_s), M_s = m_s/(f_s^0 f_ϕ), M_ϕ = m_ϕ/(f_ϕ^0 f_s),

the variation becomes

δℰ/2π = [r N_s secβ δr + r M_s δβ] - ∫ {(d/ds(r N_s secβ) - f_s N_ϕ) δr - (r f_s N_s tanβ + f_s M_ϕ cosβ - d/ds(r M_s)) δβ} ds.

The Euler–Lagrange equations of (<ref>) are thus

d/ds(r N_s secβ) - f_s N_ϕ = 0, d/ds(r M_s) - f_s M_ϕ cosβ - r f_s N_s tanβ = 0.

To remove the singularity that arises in the second of (<ref>) when β = π/2, we define the transverse shear tension T = -N_s tanβ, as in standard shell theories.
The governing equations can then be rearranged to give

dN_s/ds = f_s((N_ϕ - N_s)cosβ/r + κ_s T), dM_s/ds = f_s((M_ϕ - M_s)cosβ/r - T).

By differentiating the definition of T and using the first of (<ref>), one finds that

dT/ds = -f_s(κ_s N_s + κ_ϕ N_ϕ + (T/r)cosβ).

Together with the geometrical equations r' = f_s cosβ and β' = f_s κ_s, equations (<ref>) and (<ref>) describe the deformed shell. The five required boundary conditions can be read off the variation (<ref>) and the definition of T:

β = 0, r = 0, T = 0 at the posterior pole,
N_s = 0, M_s = 0 at the phialopore.

We solve these equations numerically using the boundary-value-problem solver of Matlab (The MathWorks, Inc.). For completeness, we note that if external forces are applied to the shell, and δ𝒲 is the variation of the work done by these forces, then the variational condition is δℰ + δ𝒲 = 0. In that case, it is useful to write the variation (<ref>) in terms of δr and δz. We note that δr' = -f_s sinβ δβ and δz' = f_s cosβ δβ, and so

f_s δβ = cosβ δz' - sinβ δr'.

Using this geometric relation and integrating by parts, we obtain

δℰ/2π = [r M_s δβ] + {r N_s cosβ - (sinβ/f_s)(M_ϕ cosβ - d/ds(r M_s))} δr + {r N_s sinβ + (cosβ/f_s)(M_ϕ cosβ - d/ds(r M_s))} δz
+ ∫ {f_s N_ϕ - d/ds(r N_s cosβ - (sinβ/f_s)(M_ϕ cosβ - d/ds(r M_s)))} δr ds - ∫ d/ds(r N_s sinβ + (cosβ/f_s){M_ϕ cosβ - d/ds(r M_s)}) δz ds.

§.§.§ Limitations of the Theory

The theory presented here has a singularity in a biologically relevant limit: the intrinsic deformation gradient F^0 becomes singular at |κ_s^0| = (h/2)^-1 or |κ_ϕ^0| = (h/2)^-1. This value corresponds precisely to the case of cells that are constricted to a point at one cell pole. The way around this issue would presumably involve writing down an energy directly relative to the (possibly incompatible) intrinsic configuration of the shell. Working in the intrinsic configuration of the shell raises another issue to contend with, however: intrinsic volume conservation, which implies that the thickness H of the intrinsically deformed shell, which is close to the thickness of the deformed shell by assumption, differs from the thickness h of the undeformed shell. For a doubly curved shell, the relative thickness η = H/h is a function of both the intrinsic stretches f_s^0, f_ϕ^0 and the intrinsic curvatures κ_s^0, κ_ϕ^0. The volume of an element of shell is

∫_-H/2^H/2 f_s^0 f_ϕ^0 (1 - κ_s^0ζ)(1 - κ_ϕ^0ζ) ρ ds dϕ dζ = f_s^0 f_ϕ^0 H (1 + (H²/12)κ_s^0κ_ϕ^0) ρ ds dϕ.

It follows that η satisfies the cubic equation

((h²/12) f_s^0 f_ϕ^0 κ_s^0 κ_ϕ^0) η³ + f_s^0 f_ϕ^0 η - (1 + h²/(12R²)) = 0,

the solution of which can be expressed in closed form. It is clear that this equation always has a solution if κ_s^0κ_ϕ^0 > 0. If κ_s^0κ_ϕ^0 < 0, there is a solution if and only if

|κ_s^0κ_ϕ^0| < (4f_s^0f_ϕ^0/(3h))² (1 + h²/(12R²))^-2.

Since 16/9 < 4, this condition may fail before the intrinsic geometry becomes singular, so this additional condition is not vacuous. This brief discussion therefore points to some interesting, more fundamental problems in the theory of morphoelastic shells.

There is an additional subtlety associated with the geometric and intrinsic deformation gradient tensors in Eq. (<ref>): the components of F^g are expressed in (<ref>) relative to the (natural) mixed basis {ê_s, ê_ϕ} ⊗ {Ê_s, Ê_ϕ}, where ê_s, ê_ϕ are the unit vectors tangent to the deformed configuration of the shell and Ê_s, Ê_ϕ are defined analogously for the undeformed configuration. We have implicitly written down the components of F^0 relative to the same basis.
In general, however, the components of F^0 in (<ref>) are those relative to the basis {ê^0_s, ê^0_ϕ} ⊗ {Ê_s, Ê_ϕ}, where the unit basis {ê^0_s, ê^0_ϕ} can a priori be specified freely. We have neglected these additional degrees of freedom in the above derivation; the question of how to define a natural intrinsic tangent basis {ê^0_s, ê^0_ϕ} is, however, an interesting one, since the intrinsic stretches and curvatures need not be compatible.

§.§ Fitting Embryo Shapes

For the purpose of fitting the model to the observed average shapes, we define a family of piecewise constant or linear functional forms for the intrinsic stretches and curvatures, shown in Fig. <ref>. This family of intrinsic stretches and curvatures is defined in terms of fifteen parameters, which are to be fitted for. Their functional forms are based on observations of cell shape changes by <cit.>, summarised below:

* The intrinsic stretches f_s^0, f_ϕ^0 vary in both hemispheres (Fig. <ref>a): in the posterior hemisphere, the initially teardrop-shaped cells thin into spindle-shaped cells (Fig. <ref>c,d, Fig. <ref>b), while, in the anterior hemisphere, they flatten into disc-shaped (`pancake-shaped') cells (Fig. <ref>d,e, Fig. <ref>c). While the evolution towards spindle-shaped cells appears to occur at the same time all over the posterior hemisphere, the data from thin sections suggest that the transition to disc-shaped cells starts at the bend region and progresses towards the phialopore (Fig. <ref>d,e). Moreover, the spindle-shaped cells are isotropic, f_s^0 ≈ f_ϕ^0, while the pancake-shaped cells are markedly anisotropic: next to the bend region, the long axis of their elliptical cross-section is the meridional one; next to the phialopore, it is the circumferential axis (Fig. <ref>c).

* The meridional intrinsic curvature κ_s^0 (Fig. <ref>b) is expected to vary most drastically in the region where paddle-shaped cells with thin wedge ends form (Fig. <ref>d, Fig. <ref>a). Because of the motion of cytoplasmic bridges relative to the cells, some additional, yet slighter, variation may be expected.

* The variations of the circumferential intrinsic curvature κ_ϕ^0 are less clear: on the one hand, κ_ϕ^0 does not vary as drastically as the meridional one, because of the anisotropy of the paddle-shaped cells. This is a marked difference to type-A inversion, where the flask-shaped cells are isotropic <cit.> and both intrinsic curvatures therefore vary more dramatically in the bend region. On the other hand, some variation of the circumferential intrinsic curvature may be expected because of the motion of cytoplasmic bridges (Fig. <ref>c). We impose a continuous functional form for κ_ϕ^0, regularising a step function over a distance Δs in arclength (Fig. <ref>c), but we do not fit for Δs since we lack detailed information about the cell shape changes that define it.

The other geometrical parameter of the shell, the angular extent P of the phialopore, is not fitted for. We arbitrarily set P = 0.3. The reasons for this simplification are discussed in the main text.

Numerical shapes were fitted to the average shapes by distributing M=100 points uniformly along the arclength of the numerical and average shapes, and minimising a Euclidean distance between them using a Matlab (The MathWorks, Inc.) routine, modified as discussed above. A custom-written adaptive stepper was used to move about in parameter space and select the initial guess for the Nelder–Mead simplex.
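Schematically, the fitting loop can be sketched as follows (our own illustration in Python rather than Matlab; solve_shell, which stands for the boundary-value solve of the shell equations above, and all other names are hypothetical):

    import numpy as np
    from scipy.optimize import minimize

    def shape_distance(params, average_shape, solve_shell):
        # Compute the model shape for the given intrinsic parameters and
        # measure the Euclidean distance to the average shape, both sampled
        # at M = 100 points distributed uniformly in arclength.
        model_shape = solve_shell(params, n_points=100)   # (100, 2) array
        return np.linalg.norm(model_shape - average_shape)

    # Nelder-Mead fit, warm-started as described in the text:
    # res = minimize(shape_distance, params_previous_stage,
    #                args=(average_shape, solve_shell), method="Nelder-Mead")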
For each shape, the fit for the previous stage of inversion was used as the initial guess for the optimisation.

§.§ Shape Perturbations and Statistical Statements

To define perturbations for the F=15 fitted model parameters P_0 ∈ ℝ^F at noise level δ, we draw N independent uniform random samples X ∼ 𝒰[0,1]^F on the unit interval and define the perturbed parameters, componentwise, as P = P_0(1 + 2δ(X - 1/2)).

§.§.§ Uniformity of the Distribution of Perturbations

As discussed in the main text, some of these perturbed parameters must be discarded. As a result, the samples that are retained are uniform on an unknown set 𝒜 ⊆ [0,1]^F with means μ. To establish that these means are not all the same, we derive confidence intervals for μ_i - μ_j. Since |X_i - X_j| ⩽ 1, we may bound the variance of these differences by Var(X_i - X_j) ⩽ 1, and hence, by the central limit theorem, a 100(1-p)% confidence interval is

⟨X_i⟩ - ⟨X_j⟩ ± z/√(N), z = Φ^-1(1 - p/(F(F-1))),

wherein Φ^-1 is the inverse of the cumulative distribution function of the 𝒩(0,1) distribution, and where we have included a multiple-testing correction over the F(F-1)/2 pairwise differences. At noise level δ = 0.075, we have run 10000 perturbations, finding M = max⟨X⟩ ≈ 0.526 and m = min⟨X⟩ ≈ 0.485. With M - m ≈ 0.041 and N = 10000, we infer that the 99% confidence interval for the maximum difference of the means does not contain zero, and hence that the means are not all the same. We notice, however, that these deviations of the means are small, in that they are not statistically significantly different from 0.5.

§.§.§ Position of the Maxima of Shape Variation

We now make quantitative our statement, based on the cumulative distributions in Fig. <ref>c, that the experimental distribution of shape variation (with a maximum in the anterior cap) is very unlikely to arise under the uniform model. We ask: what is the probability p, under the uniform model, for the maximum in shape variation to lie in the anterior cap (Fig. <ref>c)? For 10000 perturbations, we found that 757 had a maximum in the anterior cap. Among all 10000 perturbations, 2345 yielded a single maximum in shape variation, with 60 of these maxima in the anterior cap. With 99% confidence, we therefore have upper bounds p < 0.0757 + 0.0129 < 0.09 from all perturbations, and p < 0.0256 + 0.0266 < 0.06 if we restrict to shape variations with a single maximum.

§ ACKNOWLEDGEMENTS

We are grateful to D. Page-Croft and C. Hitch for instrument fabrication. We thank S. Hilgenfeldt for asking a question at the right time, T. B. Berrett for a conversation on matters statistical, and the Engineering and Physical Sciences Research Council, the Schlumberger Chair Fund, and the Wellcome Trust for partially funding this work.

In this appendix, we analyse, for completeness of the mechanical analysis, the configuration where the rim of the phialopore is in contact with the inverted posterior. We also analyse a toy problem to illustrate the intricate interplay of geometry and mechanics during contact.

§.§ Elastic Model in the Contact Configuration

Let P be the angular extent of the axisymmetric phialopore at the anterior pole of the shell. Here, we discuss the contact problem where the shell has deformed in such a way that the rim of the phialopore (at θ = π - P = Q, where θ = s/R is the polar angle) is in contact with the shell at some as yet unknown position θ = C, as shown in Fig. <ref>a,b.

[Figure: Analysis of the contact problem. (a) Undeformed configuration and (b) contact configuration. The phialopore at θ = Q = π - P touches the shell at θ = C, where θ is the polar angle.
(c) Increasing circumferential stretch f_ϕ with advancing position θ_p of the peeling front, at constant intrinsic stretch f_ϕ^0. Insets: configuration with inverted posterior (as in Fig. <ref>d), at the beginning of contact, and at a later stage. (d) Advancing contact position with advancing peeling front. Insets: configurations at the beginning of contact and at a later stage, as in (c).]

As in the derivation of the governing equations without contact (Methods), we shall express the variations in terms of δr and δβ. The third variation, δz, is not independent of the former two, and so the condition that the vertical positions of the shell at the point of contact and at the phialopore match must be incorporated via a Lagrange multiplier, U. <cit.> raised a related issue in the derivation of the shape equations for vesicles. The Lagrangian for the problem is therefore

ℒ = ℰ - 2πU ∫_C^Q f_s sinβ dθ,

where the prefactor has been introduced for mere convenience. We note the variation of Eq. (<ref>),

δℒ/2π = δℰ/2π - [U tanβ δr]_C_+^Q + U f_s(C_+) sinβ(C) δC + U ∫_C^Q {f_s κ_s sec²β δr - f_s secβ δβ} dθ.

Next, expanding the condition β(C_-) = β(C_+) of geometric continuity that we have already implicitly applied in the above, we note that

δβ(C_-) + f_s(C_-)κ_s(C_-) δC = δβ(C_+) + f_s(C_+)κ_s(C_+) δC.

Since the outer part of the shell can rotate freely with respect to the inner part at the points of contact, the variations δβ(C_±) and δβ(Q) are, by contrast, independent. This is not true of the variations δr(C_±) and δr(Q), however:

δr(Q) = δr(C_-) + f_s(C_-)cosβ(C) δC = δr(C_+) + f_s(C_+)cosβ(C) δC.

Analogous expansions were used by <cit.> for discussing an adhesion problem for vesicles. Next, a straightforward calculation reveals that the governing equations (<ref>) and (<ref>) remain unchanged if we define T = -N_s tanβ + U secβ/r for C ⩽ θ ⩽ Q. For convenience, we adjoin the equation dz/ds = f_s sinβ to the system (thereby fixing the degree of freedom of vertical translation). The system thus becomes a system of six first-order differential equations on two regions, with two unknown parameters (the contact position C and the Lagrange multiplier U). We thus have to impose fourteen boundary conditions:

r(0) = 0, z(0) = 0, β(0) = 0, T(0) = 0,
r(Q) = r(C), z(Q) = z(C), N_s(Q) = 0, M_s(Q) = 0,

as well as the continuity conditions at θ = C,

⟦β⟧ = 0, ⟦r⟧ = 0, ⟦z⟧ = 0, ⟦M_s⟧ = 0,

and

r(C)N_s secβ(C) - U tanβ(C) = r(Q)N_s(Q) secβ(Q) - U tanβ(Q),
T = -N_s tanβ(C) + U secβ(C)/r(C),
ℰ/2π = r(C) f_s N_s + r(C) M_s(C) f_s κ_s.

We also note that the conditions r(Q) = r(C) and z(C) = z(Q) do not take into account the finite, but small, thickness of the shell. A more detailed condition would require knowledge of the nature of the contact (and is anyway beyond the remit of a thin shell theory).

We briefly explore shapes in the contact configuration in what follows. We start from a configuration where the posterior hemisphere has inverted, as in Fig. <ref>d, and advance the peeling front, but now without increasing the intrinsic circumferential stretch f_ϕ^0 at the phialopore. As the peeling front advances, the circumferential stretch at the phialopore increases (Fig. <ref>c) at constant f_ϕ^0, showing how the phialopore is pushed open by the posterior hemisphere. The procession of the point of contact between the posterior and the phialopore along the inverted posterior speeds up with advancing peeling front position (Fig. <ref>d): the closer the point of contact is to the posterior, the more the posterior resists the progression of the contact point, owing to the changing tangent angle.

The inset configurations in Fig.
<ref>c,d also suggest that, as the peeling front advances, the regime of contact at a point discussed here gives way to a second contact regime, where the contact is over a finite extent of the meridian of the shell. We do not pursue this further.

§.§ Asymptotic Analysis of a Toy Problem

Some analytic progress can be made, and additional insight into the contact configuration can be gained, by asymptotic analysis of a toy problem: two elastic spherical shells, an inner shell of radius R_1 and an outer, open shell of radius R_2 > R_1, touch at the respective angular positions Θ_1 and Θ_2 < Θ_1 (Fig. <ref>a), so that R_2/R_1 = sinΘ_1/sinΘ_2. The intrinsic stretches and curvatures are those of the undeformed shells. For the remainder of this section, we non-dimensionalise distances with respect to the radius R_1 of the inner shell; stresses we non-dimensionalise with Eh. If the outer shell is moved relative to the inner shell by a distance d (Fig. <ref>b), the two shells deform in asymptotically small regions near the point of contact. This point of contact moves a distance dΞ down along the inner shell, determined by matching the displacements of the contact point and the forces exerted by one shell on the other. We assume in particular that the nature of the contact is such that the shells do not exert torques on each other. Since we have non-dimensionalised distances with R_1, our asymptotic small parameter is

ε² = h²/(12(1-ν²)R_1²) ≪ 1.

[Figure: Asymptotic toy contact problem. (a) Two shells of radii R_1 and R_2 are in contact at angular positions Θ_1 and Θ_2, respectively. (b) Relative motion of one shell with respect to the other by a distance d induces deformations of the shell in an asymptotic inner layer of size δ, and causes the point of contact to move by a distance dΞ along the inner shell. (c) Contours of Ξ in the (Θ_1,Θ_2) plane.]

The classical leading-order scalings for this problem are discussed by <cit.>, for example: deformations are localised to asymptotic inner regions of width δ ∼ ε^1/2, in which deviations of the tangent angle from its equilibrium value are of order d/δ, and we assume that d ≪ δ. We introduce an inner coordinate ξ, and write the polar angles as θ_1 = Θ_1 + δξ + 𝒪(d), θ_2 = Θ_2 + δξ. We thus expand

β_1(θ_1) = Θ_1 + (d/δ)b_1(ξ), β_2(θ_2) = Θ_2 + (d/δ)b_2(ξ).

Assuming that δ² ≪ d ≪ δ, we then have the leading-order expansions

N_s^(1) = Eh δ d σ_1(ξ), N_s^(2) = Eh δ d σ_2(ξ),
N_ϕ^(1) (∗)= Eh E_ϕ^(1) + ν N_s^(1) = Eh δ a_1(ξ), N_ϕ^(2) (∗)= Eh E_ϕ^(2) + ν N_s^(2) = Eh δ a_2(ξ),

where a_1, a_2 are hoop strains. We note that the relations marked (∗) are only valid at leading order, where we may approximate f_s ≈ f_ϕ ≈ 1. Let F_r and F_z denote the (suitably scaled) radial and vertical forces exerted by the outer shell on the inner shell. We obtain the leading-order force balances from the energy variation (<ref>): using dashes to denote differentiation with respect to ξ,

σ_1' sin²Θ_1 - b_1''' cosΘ_1 sinΘ_1 = F_z δ(ξ), σ_1' sinΘ_1 cosΘ_1 - a_1 + b_1''' sin²Θ_1 = F_r δ(ξ).

This system is closed, at leading order, by the geometric relation a_1' = -b_1, as in <cit.>. Eliminating σ_1, we obtain

b_1'''' + b_1 = (F_r - F_z cotΘ_1) δ'(ξ).

The matching conditions b_1 → 0 as ξ → ±∞ reduce the number of undetermined constants to four, which are determined by the jump conditions at the contact point ξ = 0. The asymptotic balance for the outer shell is of course the same, but we must remember that the system has been non-dimensionalised with the radius of the inner shell, for which reason a geometric factor arises in the equations.
Thus

b_2'''' + (sinΘ_1/sinΘ_2)^4 b_2 = 0,

with the matching condition b_2 → 0 as ξ → ∞, leaving two boundary conditions to be imposed on this equation. Since the shells do not exert any moments on each other, b_2'(0) = 0. The second condition is obtained from the force balance: the vertical force balance can be integrated once to yield

sinΘ_1 sinΘ_2 {σ_2 - cotΘ_2 (sinΘ_2/sinΘ_1)^4 b_2''} = F_z.

Matching to the undeformed, unstressed shell as ξ → ∞ implies F_z = 0. The radial force boundary condition resulting from (<ref>) is

sinΘ_1 cosΘ_2 {σ_2(0) + tanΘ_2 (sinΘ_2/sinΘ_1)^4 b_2''(0)} = F_r,

which, upon imposing (<ref>), reduces to

b_2''(0) = (sinΘ_1/sinΘ_2)^3 F_r.

Let U_r^(1), U_z^(1) and U_r^(2), U_z^(2) denote the respective (non-dimensional) displacements of the contact point ξ = 0, scaled with d. Then

U_r^(1) = sinΘ_1 ∫_0^∞ b_1 dξ = -(F_r/(2√2)) sinΘ_1, U_z^(1) = -cosΘ_1 ∫_0^∞ b_1 dξ = (F_r/(2√2)) cosΘ_1,
U_r^(2) = sinΘ_1 ∫_0^∞ b_2 dξ = -√2 F_r sinΘ_1, U_z^(2) = -sinΘ_1 cotΘ_2 ∫_0^∞ b_2 dξ = √2 F_r sinΘ_1 cotΘ_2.

In particular, these expressions once again contain additional geometric factors resulting from the non-dimensionalisation. The values of the two remaining undetermined constants, F_r and Ξ, are finally obtained by imposing continuity of the displacement of the contact point, i.e.

U_r^(1) + Ξ cosΘ_1 = U_r^(2), U_z^(1) + Ξ sinΘ_1 = U_z^(2) + 1.

Notice that arclength is computed here from the anterior pole of the shell to match the asymptotic setup of <cit.>, and so the `vertical' axis is pointing downwards in Fig. <ref>, giving rise to some sign changes. In particular, we obtain

Ξ = 3sinΘ_1 / (1 + 2cosecΘ_2 sin(2Θ_1 - Θ_2)).

The contours of this expression are plotted in Fig. <ref>c. The very non-linear nature of this expression illustrates that the contact geometry is quite intricate; in particular, Θ_2(Θ_1) at fixed Ξ is not a monotonic function, but, as expected (since it is easier for the contact point to slide along the inner shell the more parallel the shell is to the axis of symmetry), at fixed Θ_1, Ξ increases with Θ_2.
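As a quick numerical check of the expression for Ξ, one can tabulate it over the (Θ_1,Θ_2) plane (a sketch in Python, assuming the formula as reconstructed above; the grid ranges are illustrative only):

    import numpy as np

    def Xi(theta1, theta2):
        # Contact-point displacement factor from the formula above;
        # cosec(x) = 1/sin(x).
        return 3.0 * np.sin(theta1) / (
            1.0 + 2.0 * np.sin(2.0 * theta1 - theta2) / np.sin(theta2))

    # Contours of Xi for Theta_2 < Theta_1 (cf. the figure caption above):
    t1, t2 = np.meshgrid(np.linspace(0.3, 2.8, 200), np.linspace(0.2, 2.7, 200))
    xi = np.where(t2 < t1, Xi(t1, t2), np.nan)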
On identification of self-similar characteristics using the Tensor Train decomposition method with application to channel turbulence flow
=========================================================================================================================================

Thomas von Larcher (Freie Universität Berlin, Institute of Mathematics, Arnimallee 9, 14195 Berlin, Germany; [email protected]) and Rupert Klein (Freie Universität Berlin, Institute of Mathematics, Arnimallee 6, 14195 Berlin, Germany)

A study on the application of the Tensor Train decomposition method to 3D direct numerical simulation data of channel turbulence flow is presented. The approach is validated with respect to compression rate and storage requirement. In tests with synthetic data, it is found that grid-aligned self-similar patterns are well captured, and the application to non grid-aligned self-similarity also yields satisfying results. It is observed that the shape of the input Tensor significantly affects the compression rate. Applied to data of channel turbulence flow, the Tensor Train format allows for surprisingly high compression rates whilst ensuring low relative errors.

§ INTRODUCTION

Multidimensional data sets, i.e. data of dimension 3 or higher, require massive storage capacity that strongly depends on the (Tensor) dimension, d, and on the number of entries per dimension, n. The datasize or storage requirement scales with 𝒪(n^d), n = max_i{n_i}. It is that curse of dimensionality that makes it difficult to handle higher-order Tensors, or big data, in an appropriate manner. Tensor product decomposition methods, <cit.>, were originally developed to yield low-rank, i.e. data-sparse, representations or approximations of high-dimensional data in mathematical applications. It has been shown that those methods are as good as approximations by classical functions, e.g. polynomials and wavelets, and that they allow very compact representations of large data sets. Novel developments focus on hierarchical Tensor formats such as the Tree-Tucker format, <cit.>, and the Tensor Train format, <cit.>, <cit.>. Nowadays, those hierarchical methods are successfully applied in, e.g., physics and chemistry, where they are used in many-body problems and quantum states.

Here, we apply the Tensor Train decomposition to data of a 3D direct numerical simulation (DNS) of a turbulence channel flow. We aim at capturing self-similar structures that might be hidden in the data, as explained in the next paragraph. However, as those multiscale flow structures in highly irregular flows are not commonly aligned with the underlying grid but are translated, stretched, and rotated, we first use synthetic data to evaluate the suitability of the method to detect self-similarity in general.

In wall-bounded turbulent flows, characteristic coherent patterns (heterogeneous structures, limited in time, that appear irregularly at varying positions) are observed in distinct spatial regions <cit.>, <cit.>. A number of experimental and numerical studies were able to describe some (quasi-)coherent features rather accurately; e.g., quasi-streamwise vortical structures are identified as a form of quasi-coherent structures <cit.>, <cit.>. A primary focus of research is the near-wall region, i.e. y^+ < 40 (with y^+ the wall coordinate), where low-speed streaks are observed moving away from the wall, and, consequently, a flow towards the wall is required for continuity reasons.
Indeed, regions of such wall-directed, rapidly moving fluid, called sweeps, are observed, e.g. <cit.>, and it has been discussed that these patterns, as well as the low-speed streaks, contribute positively to the production of turbulent energy, e.g. <cit.>, <cit.>. Today, modelling the unresolved, so-called small scales is still a subject of intense research in large eddy simulation (LES). One branch follows the idea of reconstructing the fluctuations of the unresolved scales. For example, <cit.> uses a stochastic approach of an adaptive deconvolution model for the subgrid scale closure; <cit.> suggests a variational finite element formulation; <cit.> develops a variational multiscale method where adaptation controls the influence of an eddy viscosity model. Self-similarity, however, is not resolved per se in multiscale models. For this purpose, wavelet decomposition methods have been applied to data of turbulent flows, e.g. <cit.>, and they were recently used to split the coherent signal from the incoherent part, e.g. <cit.>, <cit.>. Similarity models, e.g. <cit.>, and fractal models, e.g. <cit.>, are promising tools as they extrapolate self-similarity to the small scales.

Our study is concerned with the question of whether Tensor decomposition methods can support the development of improved understanding and quantitative characterisation of multiscale behaviour of turbulent flows, cf. e.g. <cit.>. Recent high-resolution numerical simulations confirm established theoretical approaches according to which turbulent flows contain hierarchies of self-similar structures like vortex tubes or sheet-like flow patterns, e.g. <cit.>. Provided that our tests yield promising results, those quantitative features could be helpful in developing a LES closure approach based on, and extending, the idea of fractal or dynamic SGS models, e.g. <cit.>, <cit.>. If proved positive, a long-term goal would therefore be the construction of a self-consistent closure for LES of turbulent flows that explicitly exploits the Tensor decomposition approach's capability of capturing self-similar structures. Our approach is automatically linked with the following questions: (i) Can real data from multiscale dynamics be approximated or represented by the Tensor Train decomposition technique, and how compact are the resulting storage schemes, i.e. what compression rate can be achieved at which level of accuracy? (ii) Does the Tensor Train approximated data retain the dynamics? (iii) Is the Tensor Train format suitable for detecting cascades of scales in real data, and in turbulence data in particular? In this article, we present results of evaluating the Tensor Train format with respect to (i) in particular. Promising results would encourage future work.

§ THE TENSOR TRAIN FORMAT

In this section, we introduce the Tensor Train format, the Tensor decomposition method of choice for our study. A number of Tensor formats were developed in the past, e.g. the r-term approximation or so-called canonical format and the Tucker format <cit.>, and we refer to e.g. <cit.> and <cit.> for surveys of low-rank approximation techniques and specifically Tensor approximation formats. Previous studies show that the Tensor Train format allows for a much higher attainable compression rate compared to the low-rank approximation formats just mentioned. The Tensor Train format, however, is based on the Tucker decomposition. In the Tucker format, a Tensor is decomposed into a product of a core Tensor and factor matrices.
The factor matrices can be treated as the principal components of the Tensor, and the core Tensor encodes the level of interaction between the factor matrices. The Tucker format uses the advantages of the higher-order singular value decomposition, hereafter HOSVD for short, with a closed set of low-rank Tensors. The main disadvantage of the Tucker format is its high storage requirement, as the core Tensor takes r^d elements, with r the rank of the cores, resulting in a total storage requirement of 𝒪(dnr + r^d). The Tensor Train format, on the other hand, is designed such that the storage requirement scales linearly with the order d: 𝒪(dnr²), cf. <cit.>.

The matrix SVD decomposes a matrix into a product of matrices, which then represents the original matrix: A = USV^T, where A ∈ ℝ^m×n is the original matrix, U ∈ ℝ^m×m and V ∈ ℝ^n×n are orthonormal matrices, and S ∈ ℝ^m×n is a diagonal matrix with the singular values, σ_n, of A on its diagonal. Note that σ_1 ≥ σ_2 ≥ … ≥ σ_n ≥ 0 and that the number of non-zero singular values is equal to the rank of A: rank(A) = r. If the spectrum of the singular values contains only a few large entries and a sharp cut-off to the remaining tail, considerable data compression rates can be realized by truncating the matrices to the significant part of the spectrum. E.g., an approximation of the original matrix, A, with the first three singular values reads A ≈ σ_1u_1v_1^T + σ_2u_2v_2^T + σ_3u_3v_3^T. Moreover, a truncation to rank r results in a storage requirement of mr + r + nr instead of mn for the original matrix A.

The key idea of Tensor Train decomposition is to separate a high-dimensional Tensor into its component Tensors (modes). The modes are then Tensors of order 3 by construction. Principally, the representation of a d-dimensional Tensor X in the Tensor Train format reads

X(n_1,…,n_d) = ∑_k_1=1^r_1 … ∑_k_d-1=1^r_d-1 U_1(n_1,k_1) U_2(k_1,n_2,k_2) … U_d(k_d-1,n_d).

The component Tensors U_i are obtained by the step-by-step application of the matrix SVD. The procedure is as follows: in the first step, the d-dimensional Tensor X is reshaped into a 2-dimensional matrix A_1 ∈ ℝ^n_1 × n_2…n_d, to which a SVD is applied, A_1 = U_1S_1V_1^T, and the first mode U_1 ∈ ℝ^n_1×r_1 is obtained. The remaining matrices of the SVD are contracted to one matrix, A_2 = (S_1V_1^T) ∈ ℝ^r_1 × n_2…n_d, which is used for the second step. In the second step, A_2 is first reshaped so that A_2 ∈ ℝ^r_1n_2 × n_3…n_d. Then, a SVD is applied again, and the second mode U_2 ∈ ℝ^r_1×n_2×r_2 is obtained by reshaping the matrix of left singular vectors into a Tensor of order 3. By following this procedure successively, the remaining modes U_3,…,U_d are obtained. Note that the SVD needs to be applied d-1 times in total. As an example, figure <ref> shows a graphical representation of the resulting Tensor Train network for an input Tensor of order 4.

The Tensor Train decomposition reveals some important topics: (a) in the Tensor Train network, the different modes U_i are linked by the ranks r_i (as sketched in figure <ref>); (b) the parameters r_i determine the approximation quality. In the case of Tensor approximation, as opposed to representation, the r_i are limited to some maximum rank, r, and a lower value of r usually results in a lower storage requirement for the approximated Tensor, but also in decreased approximation quality. The ranks r_k are therefore also called compression ranks or TT-ranks (TT as Tensor Train). This is because, in the sense of the procedure described previously, the value of r determines the number of rows to be kept for the next step and sets the cut-off of the matrices A_i.
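A minimal NumPy sketch of this TT-SVD procedure, together with its application to the sine series discussed below (the study itself uses the Xerus library; this self-contained illustration is ours):

    import numpy as np

    def tt_svd(X, max_rank):
        # Successive truncated SVDs yield the TT cores U_1, ..., U_d; each
        # core has shape (r_{i-1}, n_i, r_i) with r_0 = r_d = 1. Truncation
        # to max_rank gives an approximation; a large enough max_rank gives
        # an exact representation.
        dims, cores, r_prev = X.shape, [], 1
        A = np.asarray(X).reshape(1, -1)
        for i in range(len(dims) - 1):
            A = A.reshape(r_prev * dims[i], -1)
            U, S, Vt = np.linalg.svd(A, full_matrices=False)
            r = min(max_rank, len(S))          # compression rank r_i
            cores.append(U[:, :r].reshape(r_prev, dims[i], r))
            A = S[:r, None] * Vt[:r]           # contract S V^T for the next step
            r_prev = r
        cores.append(A.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        # Contract the Tensor Train network back into a full Tensor.
        T = cores[0]
        for U in cores[1:]:
            T = np.tensordot(T, U, axes=([-1], [0]))
        return T.reshape([c.shape[1] for c in cores])

    # Usage on the sine series of the first example below:
    x = np.linspace(0.0, 2.0 * np.pi, 2**14)
    f = 3.125 * np.sin(x) * np.sin(2 * x) * np.sin(4 * x)
    T = f.reshape(2, 2, 4096)
    cores = tt_svd(T, max_rank=2)
    rel_err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
    storage = sum(c.size for c in cores)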
The hope is that the omitted rows will be of small norm, so that the truncated component Tensors represent the significant part of the input Tensor. However, the optimal value of r is not known a priori, and it is not guaranteed that the omitted rows are indeed of small norm. This has consequences for the application of the Tensor Train decomposition to real data, as we will see later.

§ RESULTS

Here, we present the results of the Tensor Train approximation applied to data. We use the open-source Tensor library Xerus, developed at the Technical University Berlin, Germany, <cit.>. The relative error of the approximation w.r.t. the original Tensor is measured in the Frobenius norm, which, for a d-dimensional Tensor X, reads

||X|| = √(∑_n_1=1^N_1 ∑_n_2=1^N_2 ⋯ ∑_n_d=1^N_d x²_n_1⋯n_d),

and the relative error in the Frobenius norm then reads

e = ||U - X||/||X||,

with U as approximation of X.

§.§ Principal operation of the Tensor Train approximation

We compute a data series of a function, f(x), that involves an overlay of sine functions of different periods (figure <ref>). The function reads

f(x) = 3.125 sin(x) sin(2x) sin(4x), x ∈ [0,2π].

Here, f(x) is sampled at 2^14 = 16384 points. By construction, f(x) involves similarity but not self-similarity in the proper sense. Formally, the data series is a Tensor of order 1, i.e. a vector, X(n_1) ∈ ℝ with n_1 = 1,…,2^14. Without loss of generality, we can reshape the 1d-Tensor into a Tensor of order 3, that is, T(n_1,n_2,n_3) ∈ ℝ with T[2,2,4096]. Note that the choice of the shape of the Tensor T is initially arbitrary but has a significant impact on the result, as we will see below. Applying the Tensor Train decomposition to T, it turns out that the input Tensor is approximated exactly, i.e. represented, for TT-rank ≥ 2, and we find r_1 = 1 and r_2 = 2. Figure <ref>a shows a sketch of the approximated Tensor written in the Tensor Train format. The modes U_1 and U_2 read

U_1 = (1/√2) [ -1 1 ], U_2 = (1/√2) [ -1 1; 1 1 ],

and the basis is given by the components of mode U_3 ∈ ℝ^2×4096 (see figure <ref>b). The storage of T in the Tensor Train format requires 8100 entries, which is less than half the storage requirement in the original Tensor notation. Note that, in the limit r_k = 1 ∀ k, the Tensor approximation yields a relative error e = 0.33 with a storage requirement of 4102 entries.

Now, we perform a test on the sensitivity of the Tensor Train approximation to the shape of the input Tensor. Suppose we reshape the given data series, (<ref>), into a Tensor T_2 of order 4, where T_2[2,2,2,2048]. Written in the Tensor Train format, TT-rank 4 is needed as the minimum TT-rank to represent T_2 (instead of TT-rank 2 for T in the former example). Note that TT-rank 2 approximates the input Tensor with a relative error e ≈ 0.05. Table <ref> shows the relative error and the storage requirement for TT-rank 1 to 4.

§.§ Detection of self-similar structures

Now, we extend the first example and test the ability of the Tensor Train format to detect self-similar structures hidden in data series. For this purpose, we compute a data series that consists of a 2^4 = 16 entry sequence of triangles; thus, it is in line with the power-of-two sequence. The 16-entry sequence is not only repeated but also scaled up in every loop, see figure <ref>. The data series comprises 2^10 = 1024 entries in total. As shown previously, the impact of the shape of the input Tensor on the resulting storage requirement can be significant.
Therefore, we reshape the data series into a Tensor of order 9, where each of the 9 dimensions has 2 entries except the second to last one, which has 4 entries, i.e. T_3[2,2,2,2,2,2,2,4,2]. Thus, the self-similar structure in the data series is maintained and, furthermore, remains grid-aligned and is not disturbed by the splitting of the Tensor. We find TT-rank 2 as the minimum rank to represent the input Tensor in the Tensor Train format. The storage requirement, 66 entries (≈6.4 % of the original Tensor notation), is very low. Variations of the input Tensor's shape result in very low storage requirements, too, but it turns out that the shape of T_3 demands the lowest storage requirement and the lowest TT-rank. Table <ref> shows results for varieties of the input Tensor entries.

Finally, we consider the properties of the Tensor Train format in case the input Tensor involves self-similar structures not aligned with the grid, that is, the self-similar patterns are not in line with the power-of-two sequence. For that purpose, we compute a top-hat function where the squares are scaled up, see figure <ref>a. The data series contains 2^14 = 16384 entries in total. To test the effect of noisy data, we perform two additional tests and add random noise of different amplitude, here amplitude factors 1 and 10, to the data series. We reshape the data series into a Tensor of order 14, so that each dimension has 2 entries, T_4[2,2,2,2,2,2,2,2,2,2,2,2,2,2]. With this arbitrary choice, we intentionally neglect our a priori knowledge of the character of the data series. This is reasonable, as we cannot expect detailed knowledge of the data structure in the case of real data.

Figure <ref>b shows the storage requirement (hereafter denoted as datasize) over the relative error for both the data series without noise and the series with noise amplitude factors 1 and 10, for a number of TT-ranks. In the case of no noise, Tensor T_4 is represented at TT-rank 5, where the relative error is about 10^-15, and storage in the Tensor Train format requires about 500 entries, thus ≈3.0 % of the original datasize. Limiting the TT-rank to 2 (4) yields an approximation error of about 0.2 (0.02) %, and storage of the approximated Tensor in the Tensor Train format requires about 100 (350) entries, i.e. 0.6 (2.1) % of the datasize in the original Tensor notation. Approximation of the noisy data series for TT-ranks 5 to about 80 shows a drastic increase in the storage requirement whilst only small changes in the relative error; obviously, this is due to the small-scale noise, the approximation of which naturally requires many more data entries.

Both data series, with noise amplitude 1 and with noise amplitude 10, are approximated exactly at TT-rank 128, at which the relative error drops to 10^-15. In both cases, the storage requirement of the representations in the Tensor Train format (43690 entries for both cases) substantially exceeds the storage requirement of the source data in the original Tensor format (16384 entries). For both noisy data series, we find the original datasize at TT-rank 45, where e ≈ 0.03 (noise factor 10) and e ≈ 0.003 (noise factor 1), resp. Furthermore, for a given datasize, the relative error strongly depends on the noise amplitude as long as the noise is not well approximated. Interestingly, the relative error agrees well with that of the no-noise case up to TT-rank 4 (TT-rank 3 for noise amplitude 10), indicating that the self-similar top-hat structure (large-scaled relative to the small-scale noise), but not the noise itself, is represented by low TT-ranks.
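The rank sweep behind these curves can be sketched as follows, reusing tt_svd and tt_reconstruct from above (make_tophat_series stands for a hypothetical generator of the scaled top-hat series; the noise handling follows the text):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = make_tophat_series(n=2**14)                      # hypothetical helper
    noisy = signal + 1.0 * rng.standard_normal(signal.size)   # amplitude factor 1

    T = noisy.reshape((2,) * 14)
    for r in (2, 3, 4, 5, 10, 45, 80, 128):
        cores = tt_svd(T, max_rank=r)
        datasize = sum(c.size for c in cores)
        rel_err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
        print(r, datasize, rel_err)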
Figure <ref>c shows the Euclidean norm of the difference between the original Tensor and the approximated Tensor. At low TT-ranks, the Euclidean norm agrees well in all cases, underlining the previous statement that large-scale structures are resolved at low TT-ranks. At large TT-ranks, the graphs of both noisy data series show a significant drop at TT-rank 127, marking the exact approximation of the input Tensor. Figure <ref>d confirms that picture, since the self-similar top-hat structure but not the noisy part (small-scaled w.r.t. the top-hat structure) is approximated well at TT-rank 4 (except the small-scale top-hats at the beginning of the data series). With TT-rank 2 as the maximum permitted TT-rank, the approximation of the top-hat structures is less accurate, but at least the general trend is captured. That result is of course linked with the mathematics of the algorithm discussed previously, since the given TT-rank is linked with the number of rows, i.e. the number of singular values, to be kept in the Tensor Train decomposition procedure (cf. section <ref>). Low rank, therefore, means a loss of information particularly about small scales, which is usually hidden in the trailing singular values of the matrices. In other words, large-scale structures are already being represented whilst small-scale patterns are still coarsely approximated. In summary, the tests with different synthetic data series show that approximation in the Tensor Train format is efficient and promising in detecting self-similar patterns, in particular if those structures are grid-aligned. Even if the self-similar structures are not grid-aligned, the approximation in the Tensor Train format yields satisfactory results, particularly for data series without noise. Furthermore, we found that the minimal TT-rank to represent a data series in the Tensor Train format is sensitive to the shape of the input Tensor. The latter statement has also been discussed by <cit.>. §.§ Channel turbulence flow We use data from a 3D direct numerical simulation (DNS), computed with a pseudo-spectral Fourier-Chebyshev method, of an incompressible, isothermal plane channel flow, figure <ref>a, computed by <cit.>, <cit.>, to which we refer for an in-depth description of the data generation. Data are generated at Re_τ=590, where Re_τ is the friction-based Reynolds number. The channel geometric factor h has been set to h=1. The original grid spatial resolution is 600× 385× 600 in (x,y,z). Here, we do not use the data on the original grid but on the so-called fine grid, which is 600× 352× 600 in (x,y,z), and we have applied a post-processing procedure to obtain a constant grid increment throughout the y-direction instead of the polynomial distribution of the original grid. We refer to <cit.> for a detailed description of the post-processing algorithms that convert the original data to the equidistant fine grid data. In wall units the grid increment of the fine grid is Δ y^+=3.37. Compared to the signatures of the original grid data, e.g. fig. 3 and 4 in <cit.>, no quantitative loss of information is recognized due to the fine-grid post-processing procedure. However, the resolution of the near-wall region is significantly reduced. In particular, the viscous sublayer (y^+<5) is not resolved anymore. The scalar q-criterion proposed by <cit.>, <cit.> is a measure to demarcate vortex structures within turbulent flows.
The q-criterion reads q = 1/2 (|| A||^2 - || S||^2), where || S|| = [tr(SS^t)]^1/2, || A|| = [tr(AA^t)]^1/2, with S and A as the symmetric and anti-symmetric part of the velocity gradient Tensor ∇ U, respectively. Figure <ref>d shows an iso-surface map of the q-criterion for a given snapshot. Here, vortex tubes of different size and shape, rotated and stretched, can be identified, indicating the highly turbulent flow. Here, we apply the Tensor Train decomposition to velocity data (fig. <ref>b); the datasize is 380160000 ≈ 4·10^8 entries per snapshot. Since the tests with synthetic data series have already revealed the sensitivity to the shape of the input Tensor (see the previous subsection), we consider two different approaches. Firstly, the input Tensor retains the original data structure, that is, it is a 4th-order Tensor T[600,352,600,3] where the last dimension contains the components of velocity. This approach is referred to as TT-approximation hereafter. On the other hand, we decompose every dimension into its prime factors, resulting in a Tensor of order 19, T[n_1,…,n_19]=T[2,2,2,3,5,5, 2,2,2,2,2,11, 2,2,2,3,5,5, 3]. The latter approach is referred to as QTT-approximation hereafter, as it is similar to the Quantics Tensor Train (QTT) approach, <cit.>, <cit.>, inasmuch as the binary representation is realized as well as possible. Figure <ref> shows the results of the Tensor Train decomposition for both approaches. Here, the diagram shows the storage requirement over the relative error for various TT-ranks. Obviously, the QTT-approximation yields a smaller relative error with a significantly smaller storage requirement. For example, given a relative error of 0.085, the QTT-approximation demands a storage requirement of about 18000 entries, whereas the TT-approximation demands about 60000 entries. Owing to the different shapes of the input Tensors, the necessary TT-rank is much larger in the QTT-approach. Figure <ref> for the TT-approximation and figure <ref> for the QTT-approximation display 2D-slices of the approximated data for selected TT-ranks, and a 2D-slice of the original data is also shown for comparison. Figure <ref> displays the magnitude of velocity computed from the components after TT-approximation and figure <ref> shows the streamwise component of the velocity. In addition, 1D cross-sections along the x-coordinate are given in figure <ref>. Generally, images of lower TT-ranks show a smooth reflection of the turbulent flow, and with increasing TT-rank small-scale patterns are mapped with enhanced resolution. Interestingly, in the TT-approximation, TT-rank 2 involves the box-shaped mean velocity profile typical for channel turbulence flows, figure <ref>e, cf. e.g. <cit.>. In that sense, Tensor Train decomposition restricted to TT-rank 2 apparently acts as an averaging process. Now, we are interested in identifying those regions in the channel flow that show the most significant deviations of the approximated data from the original. In both cases, TT-approximation and QTT-approximation, reconstruction of the velocity field for higher TT-ranks, for example TT-rank 100 in figure <ref>c and <ref>b, already shows several details of small-scale features of the turbulent flow by eye, even if the relative error is significant. For the QTT-approximation, figure <ref>e for TT-rank 20 and <ref>f for TT-rank 100 show the difference between the approximated and the original data. Obviously, significant deviations are located towards the near-wall region, while the flow interior is mostly unaffected.
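As an implementation note, the q-criterion field shown in the iso-surface maps can be computed from the gridded velocity components in a few lines of numpy. This is a sketch that assumes uniform grid spacings dx, dy, dz, as holds for the equidistant fine grid used here:

import numpy as np

def q_criterion(u, v, w, dx, dy, dz):
    """q = 0.5 * (||A||^2 - ||S||^2) on a uniform grid, via central differences."""
    grads = np.array([np.gradient(c, dx, dy, dz) for c in (u, v, w)])
    # grads[i, j] holds du_i/dx_j at every grid point
    S = 0.5 * (grads + grads.transpose(1, 0, 2, 3, 4))   # symmetric part of grad U
    A = 0.5 * (grads - grads.transpose(1, 0, 2, 3, 4))   # antisymmetric part
    return 0.5 * ((A**2).sum(axis=(0, 1)) - (S**2).sum(axis=(0, 1)))

Iso-surfaces of positive q then demarcate the vortex tubes seen in the figure.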
This picture becomes more and more visible as more details of the small-scale features are resolved, i.e. with increasing TT-rank. § CONCLUDING REMARKS In this article, the Tensor Train decomposition technique, a specific branch of the family of Tensor product decomposition methods, has been applied to DNS data of channel turbulence flow. We showed that the Tensor Train format yields significant compression rates whilst ensuring low relative errors. In particular, the QTT-approximation, which in our definition decomposes the entries of each dimension of the input Tensor into its prime factors, shows considerably better results than the approximation of the original data Tensor of order 4. Applied to channel turbulence data, the Tensor Train format allows for surprisingly high compression rates. QTT-approximation at TT-rank 100 results in a compression factor of roughly 1000 and in a relative error of about 5 %. At TT-rank 2000, the relative error is about 0.2 % with a compression factor of about 5, that is a storage requirement of ≈10^8 entries (instead of ≈4·10^8 for the original data). However, it demonstrates that a low-rank representation of those highly irregular and chaotic flows in the Tensor Train format cannot be expected. Our study is concerned with the detection and quantitative characterization of self-similar patterns in turbulent flow data. Therefore, we first applied synthetic data to demonstrate the Tensor Train decomposition's capability to capture self-similarities. The synthetic data tests with and without noise provide promising results. Grid-aligned self-similarity is well captured, and non-grid-aligned self-similarity is approximated at low TT-ranks, strengthening the suitability of the method. Moreover, with no noise, non-grid-aligned self-similar structures are approximated exactly at low TT-ranks. This research has been funded by Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 'Scaling Cascades in Complex Systems', Project B04 'Multiscale Tensor decomposition methods for partial differential equations'. The authors thank Prof. Illia Horenko (CRC 1114 Mercator Fellow) as well as Prof. Reinhold Schneider and Prof. Harry Yserentant for rich discussions and for steady support. Thomas von Larcher thanks Sebastian Wolf and Benjamin Huber (both at TU Berlin, Germany) very much for developing the Tensor library Xerus, which has been used for data analysis, as well as for their round-the-clock support in the project. The data were generated and processed using resources of the North-German Supercomputing Alliance (HLRN), Germany, and of the Department of Mathematics and Computer Science, Freie Universität Berlin, Germany. The authors thank Alexander Kuhn and Christian Hege (both at Zuse Institute Berlin, Germany) for steady support in data processing and data visualisation.
http://arxiv.org/abs/1708.07780v1
{ "authors": [ "Thomas von Larcher", "Rupert Klein" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20170825153446", "title": "On identification of self-similar characteristics using the Tensor Train decomposition method with application to channel turbulence flow" }
The rainbow connection number of enhanced power graph [This work was partially supported by CONACYT.] Luis A. Dupont, Daniel G. Mendoza and Miriam Rodríguez. Facultad de Matemáticas, Universidad Veracruzana Circuito Gonzalo Aguirre Beltrán S/N; Zona Universitaria; Xalapa, Ver., México, CP 91090. e-mail: [email protected] Let G be a finite group. The enhanced power graph of G, denoted by Γ_G^e, is the graph with vertex set G in which two vertices x,y are adjacent if there exists z∈ G such that x,y∈⟨ z⟩. Let ζ be an edge-coloring of Γ_G^e. In this article, we calculate the rainbow connection number of the enhanced power graph Γ_G^e. Keywords: enhanced power graph; power graph; rainbow path; rainbow connection number. AMS Mathematics Subject Classification: 05C25, 05C38, 05C45. § INTRODUCTION Let G be a finite group. The power graph of G, denoted by Γ_G, is the graph whose vertex set is G, with two elements adjacent if one is a power of the other. In <cit.> the authors found that the power graph is contained in the commuting graph, asked how close these graphs are, and then defined the enhanced power graph of a finite group. We denote the enhanced power graph by Γ_G^e; its vertex set is the group G, and two distinct vertices x,y∈V(Γ_G^e) are adjacent if x,y∈⟨ z⟩ for some z∈ G. Later, the enhanced power graph of a group was studied by Sudip Bera and A. K. Bhuniya <cit.>. In 2008, Chartrand, Johns, McKeon and Zhang <cit.> introduced the concept of rainbow connection of graphs. This concept was motivated by communication of information between agencies of the USA government after the September 11, 2001 terrorist attacks. The situation that helps to unravel this issue about communications has the following graph-theoretic model. Let Γ be a connected graph with vertex set V(Γ) and edge set E(Γ). We define a coloring ζ: E(Γ)→{1,...,k} with k∈ℕ. A path P is a rainbow path if no two edges of P have the same color. If for each pair of vertices u,v∈ V(Γ), Γ has a rainbow path from u to v, then Γ is rainbow-connected under the coloring ζ, and ζ is called a rainbow k-coloring of Γ. The rainbow connection number of Γ, denoted by rc(Γ), is the minimum k for which there exists a rainbow k-coloring of Γ. We will calculate the rainbow connection number of the enhanced power graph following the approach carried out by the authors of <cit.> for the power graph, in terms of InvMax_G, the set of maximal involutions of G. Their main theorems can be summarized as follows. Let InvMax_G≠∅ and let G be a finite group of order at least 3. Then rc(Γ_G)=3 if 1≤ |InvMax_G|≤ 2, and rc(Γ_G)=|InvMax_G| if |InvMax_G|≥ 3. If InvMax_G=∅, let G be a finite group. * If G is cyclic, then rc(Γ_G)=1 if |G| is a prime power, and rc(Γ_G)=2 otherwise. * If G is noncyclic, then rc(Γ_G)= 2 or 3. In this paper we compute the rainbow connection number of Γ_G^e and characterize it in terms of the independence cyclic set, of which the set of maximal involutions is a particular case. This paper is organized as follows. In Section 2 we give definitions and some properties of the rainbow connection number, and we describe a way to guarantee a coloring for enhanced power graphs. In Section 3 we prove the main theorems that determine rc(Γ_G^e).
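To make these objects concrete, the following small computational sketch (our own helper, not part of the cited works) builds the edge set of the enhanced power graph from a group's multiplication map and checks the two extreme cases:

from itertools import combinations

def enhanced_power_graph_edges(n_elems, op, identity=0):
    """Edge set of the enhanced power graph: x ~ y iff x, y lie in a common
    cyclic subgroup <z>.  Elements are encoded as 0, ..., n_elems - 1."""
    def cyclic(z):                      # the cyclic subgroup <z>
        seen, g = {identity}, z
        while g not in seen:
            seen.add(g)
            g = op(g, z)
        return seen
    subgroups = [cyclic(z) for z in range(n_elems)]
    return {frozenset((x, y))
            for x, y in combinations(range(n_elems), 2)
            if any(x in S and y in S for S in subgroups)}

# Z_6 (addition mod 6) is cyclic, so the graph is complete and rc = 1.
assert len(enhanced_power_graph_edges(6, lambda a, b: (a + b) % 6)) == 15

# The Klein four-group (xor on {0,1,2,3}) is noncyclic with three maximal
# involutions; only the identity joins the cyclic subgroups, so the graph
# is a star and rc > 1.
assert len(enhanced_power_graph_edges(4, lambda a, b: a ^ b)) == 3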
§ DEFINITIONS AND PROPERTIES We start the section with a proposition that follows from the definition of the enhanced power graph. rc(Γ_G^e)=1 if and only if Γ_G^e is complete, if and only if G is cyclic. We call Max_G={x_1,...,x_m} an essential cyclic set if * for all g∈ G, g∈⟨ x_i⟩ for some i, * ⟨ x_i⟩≠⟨ x_j⟩ for i≠ j, * each ⟨ x_i⟩ is a maximal cyclic subgroup. Therefore <ref> can be rewritten as follows: |Max_G|=1 if and only if G is a cyclic group, if and only if rc(Γ_G^e)=1. If |Max_G|=2, then rc(Γ_G^e)=2. Since Γ_G^e is not complete, we have rc(Γ_G^e)≥ 2. Then we set [ E_1 = {{a,b}| a,b∈⟨ x_1⟩}; E_2 = {{a,b}| a,b∈⟨ x_2⟩} ]. We note that the only path between x_1_j and x_2_i, for all x_1_j∈⟨ x_1⟩ and x_2_i∈⟨ x_2⟩, is (x_1_j,e,x_2_i); hence the coloring ζ: E(G)⟶{1,2} with f↦ i if f∈ E_i is a rainbow 2-coloring of Γ_G^e. We define the independence cyclic set of Max_G, denoted by ics(G), as ics(G)={x_i∈ Max_G| ⟨ x_i⟩∩⟨ x_j⟩={e} for all j≠ i}. The independence cyclic number of Max_G, denoted by icn(G), is icn(G)=|ics(G)|. We note that InvMax_G⊆ ics(G)⊆ Max_G. If |Max_G|=3, then rc(Γ_G^e)=2 if icn(G)=1, and rc(Γ_G^e)=3 if icn(G)=3. Let Max_G={x_1,x_2,x_3} be an essential cyclic set. We need not consider paths whose two endpoints both lie in ⟨ x_i⟩ for some i, because one color suffices to color such a path. The difficulty arises when the two endpoints lie in different ⟨ x_i⟩. Case icn(G)=1. Without loss of generality we suppose ⟨ x_1⟩∩⟨ x_2⟩={e}=⟨ x_1⟩∩⟨ x_3⟩ and ⟨ x_2⟩∩⟨ x_3⟩≠{e}. Since G is not a cyclic group, rc(Γ_G^e)≥ 2. Let h∈⟨ x_2⟩∩⟨ x_3⟩ with h≠ e; thus [ E_1={{a,b}| {a,b}⊂⟨ x_1⟩}⋃{{a,b}| {a,b}⊂⟨ x_2⟩ with a,b≠ e}; E_2= {{e,g}| g∈⟨ x_2⟩∪⟨ x_3⟩}⋃{{a,b}| a∈⟨ x_3⟩∖⟨ x_2⟩, b∈⟨ x_2⟩∩⟨ x_3⟩, b≠ e} ]. In particular {h,g}∈ E_2 for all g∈⟨ x_3⟩∖⟨ x_2⟩. Then we give a 2-coloring of Γ_G^e: [ ζ:E(G)⟶{1,2}; f↦ i ] if f∈ E_i. Case icn(G)=3. We suppose that |InvMax_G|=0, and without loss of generality ⟨ x_i⟩∩⟨ x_j⟩={e} for 1≤ i<j≤ 3. We give a 3-coloring of Γ_G^e, with [ E_1 ={{x_i,e}| i=1,2,3 }; E_2 = {{e,x_i_j}| x_i_j∈⋃_i=1^3⟨ x_i⟩∖{x_i}}; E_3 = {{a,b}| a,b∈⟨ x_i⟩ for i=1,2,3} ], and the coloring [ ζ:E(G) ⟶ {1,2,3}; f ↦ i ] if f∈ E_i. Now suppose that InvMax_G=Max_G, with E_i={{a,b}| a,b∈⟨ x_i⟩} as the edge sets and the coloring given as in <ref>. We claim that no 2-coloring of Γ_G^e exists: suppose, to the contrary, that there is one. Let u∈⟨ x_1⟩, v∈⟨ x_2⟩ and w∈⟨ x_3⟩. Then we have ζ(u,e)=1 and ζ(e,v)=2, so (u,e,v) is a desired rainbow path. Likewise ζ(u,e)=1 and ζ(e,w)=2, but then (v,e,w) is not a rainbow path. From <ref> we may ask what happens when none of the ⟨ x_i⟩ intersects another ⟨ x_j⟩ nontrivially, and what happens when all the ⟨ x_i⟩ intersect in some common elements. For this, we have the following propositions. The following proposition is analogous to <cit.>. Let Max_G={x_1,...,x_m} be an essential cyclic set with InvMax_G=Max_G. Then rc(Γ_G^e)=m. For Γ_G^e we give an m-coloring. For each i=1,...,m we set E_i(G)={{a,b}| a,b∈⟨ x_i⟩}, since for u∈⟨ x_i⟩ and v∈⟨ x_j⟩ with i≠ j there is only one path between them, namely (u,e,v), and the coloring is given by [ ζ: E(G) ⟶ {1,...,m}; f ↦ i if f∈ E_i ]. We can see the diagram in figure <ref>. Let Max_G={x_1,...,x_m} be an essential cyclic set with m≥ 2, and h_i,j∈⟨ x_i⟩∩⟨ x_j⟩ for 1≤ i<j≤ m. If h_i,j≠ h_r,s whenever i≠ r or j≠ s, then rc(Γ_G^e)=2. By <ref> we only need to give the coloring for ⟨ x_i⟩ and ⟨ x_j⟩ with i< j. We fix E_1(G)={{a,h_i,j}| a∈⟨ x_i⟩∖⟨ x_j⟩}⋃{{a,b}| a,b∈⟨ x_i⟩} and E_2(G)={{b,h_i,j}| b∈⟨ x_j⟩∖⟨ x_i⟩}⋃{{a,b}| a,b∈⟨ x_j⟩}. Then we always have a rainbow path from x_i_r∈⟨ x_i⟩ to x_j_s∈⟨ x_j⟩ with i<j, given by (x_i_r,h_i,j,x_j_s), and the coloring is the same as in <ref>. The next definition guarantees the existence of a coloring for Γ_G^e.
An awning is a collection H_1,...,H_m-1 for which the following holds: * H_i=A_i⋃̇ B_i={h_i,i+1,...,h_i,m}⊂⟨ x_i⟩ for i=1,...,m-1, * for all i<j, h_i,j∈⟨ x_i⟩∩⟨ x_j⟩, * for i<j with j=2,...,m-1, if h_j,s=h_i,r∈ H_j∩ H_i (s∈{j+1,...,m}, and r∈{i+1,...,m}), the following holds: * r=j, h_i,r∈ A_i, then h_j,s∈ B_j; * r=j, h_i,r∈ B_i, then h_j,s∈ A_j; * r=s>j, h_i,r∈ A_i, then h_j,r∈ A_j; * r=s>j, h_i,r∈ B_i, then h_j,r∈ B_j. The case in <ref> is a particular case in which G does not have an awning. By the definition of an awning we mean that, if we have an awning, then we need H_i={e} for at most one i, and no more. If G has an awning and |Max_G|≥ 3, then icn(G)≤ 1. In particular, |InvMax_G|≤ 1. Suppose that icn(G)=2; then H_i_1=H_i_2={e}. Hence (x_i_1,e,x_i_2) is a rainbow path, and (x_i_1,e,x_i_3) is another rainbow path, but for (x_i_2,e,x_i_j) we do not have a rainbow path in Γ_G^e. If |⋂ H_i|≥ m-1 with |Max_G|=m, then G has an awning. If |Max_G|=2 then icn(G)=0 or 2, and G has an awning. If G has an awning, then icn(G)=1. In particular |InvMax_G|≤ 1. We note that the coloring, whether we work with InvMax_G or with ics(G), does not change: both can be colored with only one color. The only difference is that in InvMax_G each subgroup has only two elements, whereas for a set taken from ics(G) there are more than two elements; but the behaviour in the coloring is exactly the same, because in a set taken from ics(G) all the elements are adjacent to each other, so one color is enough to color the whole set. In the following properties we only consider the set ics(G) unless otherwise indicated. If G has an awning, then rc(Γ_G^e)=2. We give Γ_G^e a rainbow 2-coloring: for 1≤ r < s ≤ m, let [ E_r,s^1 = {{a,h_r,s}| a∈⟨ x_r⟩\⟨ x_s⟩; h_r,s∈ A_r}; E_r,s^2 = {{b,h_r,s}| b∈⟨ x_s⟩\⟨ x_r⟩; h_r,s∈ A_r}; E_r,s^2 = {{a,h_r,s}| a∈⟨ x_r⟩\⟨ x_s⟩; h_r,s∈ B_r}; E_r,s^1 = {{b,h_r,s}| b∈⟨ x_s⟩\⟨ x_r⟩; h_r,s∈ B_r} ]. Write E_1=⋃_1≤ r < s ≤ m E_r,s^1 and E_2=⋃_1≤ r < s ≤ m E_r,s^2, and define a coloring [ ζ: E(Γ_G^e) ⟶ {1,2}; f ↦ i, if f∈ E_i ]. We now check that this is a 2-coloring of Γ_G^e. We color step by step; consider the (j,s)-step. If these edges have been colored in an earlier step, i.e., if h_j,s=h_i,r with i<j, coloring problems could arise when r=j or r=s. For r=j (r=s), conditions (a)-(d) of <ref> guarantee that the colors assigned in the earlier step are preserved, so the 2-coloring we gave is not affected. If rc(Γ_G^e)=2, then for any order of Max_G we have an awning. Suppose rc(Γ_G^e)=2, let E_1⋃̇E_2=E be the edge set of Γ_G^e with a 2-coloring given as in <ref>, and let Max_G={x_1,...,x_m} be an essential cyclic set of Γ_G^e. Then there is h∈⟨ x_i⟩∩⟨ x_j⟩ such that {x_i,h}∈ E_1 and {h,x_j}∈ E_2 (or {x_i,h}∈ E_2 and {h,x_j}∈ E_1). We define h_i,j:=h and, moreover, H_i:={h_i,1,...,h_i,m}=:A_i⋃̇B_i such that A_i={h_i,j| {x_i,h_i,j}∈ E_1} and B_i={h_i,j| {x_i,h_i,j}∈ E_2}, where (a)-(b) of <ref> are met. If G has an awning for some order on Max_G, then G has an awning for every order. rc(Γ_G^e)=2 if and only if G has an awning and G is not a cyclic group. By <ref> and <ref>. By <ref> we obtain a proposition similar to <cit.>. Let Max_G={x_1,...,x_m} be an essential cyclic set. If InvMax_G≠∅, then |InvMax_G|≤ rc(Γ_G^e). As in the proof of <cit.>. Let Max_G={x_1,...,x_m} be an essential cyclic set. If icn(G)≥ 3, then 3≤ rc(Γ_G^e). Suppose that |InvMax_G|=0 and let ics(G)={x_1,...,x_k} be an independence cyclic set with k≥ 3.
We cannot give a 2-coloring for the graph induced by ⟨ x_1⟩∪⋯∪⟨ x_k⟩, but we will give a 3-coloring induced by the following edge sets [ E_1 ={{x_i,e}| i=1,...,m }; E_2 = {{e,x_i_j}| x_i_j∈⋃_i=1^m⟨ x_i⟩∖{x_i}}; E_3 ={{a,b}| a,b∈⟨ x_i⟩ for each i} ], with the rest of the edges colored as in <ref> and <ref>. Thus the 3-coloring is given as in <ref>. If |InvMax_G|≥ 3 then the edge sets are [ E_1 ={{x_i,e}| i=l+1,...,m }; E_2 = {{e,x_i_j}| x_i_j∈⋃_i=l+1^m⟨ x_i⟩∖{x_i}}; E_3 ={{a,b}| a,b∈⟨ x_i⟩ for each i}; E_i ={{x_i,e}| i=1,...,l } ], and the coloring is given by [ ζ:E(G) ⟶ {1,...,l}; f ↦ i ] if f∈ E_i. § MAIN THEOREMS In this section we prove our main theorems. Let Max_G={x_1,...,x_m} be an essential cyclic set. If icn(G)=1 then rc(Γ_G^e)=1 if and only if m=1. In particular, if |InvMax_G|=1 then rc(Γ_G^e)=1 if and only if G≅ℤ_2. By <ref>, <ref> and <ref>. Let Max_G={x_1,...,x_m} be an essential cyclic set. If icn(G)=1 then rc(Γ_G^e)=2 if and only if G has an awning. By <ref>, <ref> and <ref>. Let Max_G={x_1,...,x_m} be an essential cyclic set with m≥ 3. If icn(G)=1 then rc(Γ_G^e)=3 if and only if G does not have an awning. By <ref>. Let Max_G={x_1,...,x_m} be an essential cyclic set with m≥ 4. If icn(G)=2, then rc(Γ_G^e)=3. By <ref> and <ref>. Let Max_G={x_1,...,x_m} be an essential cyclic set. If icn(G)≥ 3, then rc(Γ_G^e)=|InvMax_G|. By <ref> and <ref>. Let Max_G={x_1,...,x_m} be an essential cyclic set with icn(G)=0; then rc(Γ_G^e)=1, 2 or 3. Case 1: by <ref>. Case 2: by <ref>. Case 3: by <ref>, <ref>. 10 Aalipour G. Aalipour, S. Akbari, P. J. Cameron, R. Nikandish and F. Shaveisi. On the structure of the power graph and the enhanced power graph of a group. The Electronic Journal of Combinatorics, 24(3), #P3.16, 2017. Abe S. Abe and N. Iiyori. A generalization of prime graphs of finite groups. Hokkaido Math. J., 29(2):391–407, 2000. AndersonLivings D. F. Anderson, P. S. Livingston. The zero-divisor graph of a commutative ring. J. Algebra, 217:434–447, 1999. Atani S. E. Atani. An ideal-based zero divisor graph of a commutative semiring. Glasnik Matematicki, 44(64):141–153, 2009. Bera-Bhuniya S. Bera, A. K. Bhuniya. On some properties of enhanced power graph. arXiv:1606.03209v1, 2016. Bera-Bhuniya2 S. Bera, A. K. Bhuniya. Normal subgroup based power graph of a finite group. Communications in Algebra, 45(8):3251–3259, 2017. Chakrabarty I. Chakrabarty, S. Ghosh, M. K. Sen. Undirected power graphs of semigroups. Semigroup Forum, 78:410–426, 2009. Chartrand G. Chartrand, G. L. Johns, K. A. McKeon, P. Zhang. Rainbow connection in graphs. Math. Bohem., 133:85–98, 2008. Diestel R. Diestel. Graph theory. Volume 173 of Graduate Texts in Mathematics. Springer-Verlag, Berlin, third edition, 2005. Ma X. Ma, M. Feng, and K. Wang. The rainbow connection number of the power graph of a finite group. Graphs and Combinatorics, 32:1495, 2016. Redmond S. P. Redmond. An ideal-based zero divisor graph of a commutative ring. Communications in Algebra, 31:4425–4443, 2003. Willians J. S. Williams. Prime graph components of finite groups. J. Algebra, 69:487–513, 1981.
http://arxiv.org/abs/1708.07598v1
{ "authors": [ "Luis A. Dupont", "Daniel G. Mendoza", "Miriam Rodríguez" ], "categories": [ "math.CO", "05C25, 05C38, 05C45" ], "primary_category": "math.CO", "published": "20170825015658", "title": "The rainbow connection number of enhanced power graph" }
Divergence, Entropy, Information: An Opinionated Introduction to Information Theory Philip Chodrow ================================================= Information theory is a mathematical theory of learning with deep connections to topics as diverse as artificial intelligence, statistical physics, and biological evolution. Many primers on information theory paint a broad picture with relatively little mathematical sophistication, while many others develop specific application areas in detail. In contrast, these informal notes aim to outline some elements of the information-theoretic "way of thinking," by cutting a rapid and interesting path through some of the theory's foundational concepts and results. They are aimed at practicing systems scientists who are interested in exploring potential connections between information theory and their own fields. The main mathematical prerequisite for the notes is comfort with elementary probability, including sample spaces, conditioning, and expectations. We take the Kullback-Leibler divergence as our most basic concept, and then proceed to develop the entropy and mutual information. We discuss some of the main results, including the Chernoff bounds as a characterization of the divergence; Gibbs' Theorem; and the Data Processing Inequality. A recurring theme is that the definitions of information theory support natural theorems that sound "obvious" when translated into English. More pithily, "information theory makes common sense precise." Since the focus of the notes is not primarily on technical details, proofs are provided only where the relevant techniques are illustrative of broader themes. Otherwise, proofs and intriguing tangents are referenced in liberally-sprinkled footnotes. The notes close with a highly nonexhaustive list of references to resources and other perspectives on the field.§ WHY INFORMATION THEORY? Briefly, information theory is a mathematical theory of learning with rich connections to physics, statistics, and biology. Information-theoretic methods quantify complexity and predictability in systems, and make precise how observing one feature of a system assists in predicting other features. Information-theoretic thinking helps to structure algorithms; describe processes in natural and engineered systems; and draw surprising connections between seemingly disparate fields. Formally, information theory is a subfield of probability, the mathematical study of uncertainty and randomness. Information theory is distinctive in its emphasis on properties of probability distributions that are independent of how those distributions are represented. Because of this representation-independence, information-theoretic quantities often have claim to be the most "fundamental" properties of a system or problem, governing its complexity, learnability, and intrinsic randomness. In the original formulation of <cit.>, information theory is a theory of communication: specifically, the transmission of a signal of some given complexity over an unreliable channel, such as a telephone line corrupted by a certain amount of white noise. Here we will emphasize a slightly different role for information theory, as a theory of learning more generally. This emphasis is consistent with the original formulation, since the communication problem may be viewed as the problem of the message receiver learning the intent of the sender based on the potentially corrupted transmission.
However, the emphasis on learning allows us to more easily glimpse some of the rich connections of information theory to other disciplines. Special consideration in these notes will be given to statistical motivations for information-theoretic concepts. Theoretical statistics is the mathematical design of methods for learning from data; information-theoretic considerations determine when such learning is possible and to what extent. We will close with a connection to physics; some connections to biology are cited in the references. § WHY NOT START WITH ENTROPY? Entropy is easily the information-theoretic concept with the widest popular currency, and many expositions take entropy as their starting point. We will, however, choose a different point of departure and derive entropy along the way. Our primary object is the Kullback-Leibler (KL) divergence between two distributions, also called in some contexts the relative entropy, relative information, or free energy.[For the remainder of these notes I'll stick with “divergence” – though there are many other interesting objects called “divergences” in mathematics, we won't be discussing any of them here, so no confusion should arise.] Why start with the divergence? Well, there's a simple reason – while we'll focus on discrete random variables here, we'd like to develop a theory that, wherever possible, applies to continuous random variables as well. The divergence is well-defined for both discrete random variables and continuous ones; that is, if p and q are two continuous distributions satisfying certain regularity properties, then d(p,q) is a uniquely determined, nonnegative real number whether p and q are discrete or continuous. In contrast, the natural definition of entropy (so called differential entropy) for continuous random variables has two bad behaviors. First, it can be negative, which is undesirable for a measure of uncertainty. Second, and arguably worse, the differential entropy is not even uniquely defined. There are multiple ways to describe the same continuous distribution – for example, the following three distributions are the same: * “The Gaussian distribution with mean 0 and variance 1.” * “The Gaussian distribution with mean 0 and standard deviation 1.” * “The Gaussian distribution with mean 0 and second moment equal to 1.” Technically, the act of switching from one of these descriptions to another can be viewed as a smooth change of coordinates[I.e. a diffeomorphism: smooth, invertible functions on coordinate space whose inverses are also smooth.] in the space of distribution parameters. For example, we move from the first description to the second by changing coordinates from (μ, σ^2) to (μ, σ), which we can do by applying the function f(x,y) = (x, √(y)). Regrettably, the differential entropy is not invariant under such coordinate changes – change the way you describe the distribution, and the differential entropy changes as well. This is undesirable. The foundations of our theory should be independent of the contingencies of how we describe the distributions under study. The divergence passes this test in both discrete and continuous cases; the differential entropy does not.[It is possible to define alternative notions of entropy that attempt to skirt these issues; however, they have their own difficulties. 
<https://en.wikipedia.org/wiki/Limiting_density_of_discrete_points>] Since we can define the entropy in terms of the divergence in the discrete case, we'll start with the divergence and derive the entropy along the way. § INTRODUCING THE DIVERGENCE It is often said that the divergence d(p,q) between distributions p and q measures how "surprised" you are if you think the state of the world is q but then measure it to be p. However, this idea of surprise isn't always explained or made precise. To motivate the KL divergence, we'll start from a somewhat unusual beginning – the Chernoff bounds – which makes exact the role that the divergence plays in governing how surprised you ought to be. Let's begin with a simple running example. You are drawing from an (infinite) deck of standard playing cards, with four card suits {♣, ♠, ♥, ♦} and thirteen card values {1,…,13}. We'll view the sets of possible values as alphabets: 𝒳 = {♣, ♠, ♥, ♦} is the alphabet of possible suits, and 𝒴 = {1,…,13} the alphabet of possible values. We'll let X and Y be the corresponding random variables, so for each realization, X ∈𝒳 and Y ∈𝒴. Suppose that I have a prior belief that the distribution of suits in the deck is uniform. My belief about the suits can be summarized by a vector q = (1/4,1/4, 1/4,1/4). It's convenient to view q as a single point in the probability simplex 𝒫^𝒳 of all valid probability distributions over 𝒳. [Probability Simplex] For any finite alphabet 𝒳 with |𝒳| = m, the probability simplex 𝒫^𝒳 is the set 𝒫^𝒳≜{ q ∈ℝ^m| ∑_iq_i = 1,q_i ≥ 0 ∀ i} . It's helpful to remember that 𝒫^𝒳 is an (m-1)-dimensional space; the "missing" dimension is due to the constraint ∑_i q_i = 1. When m = 3, 𝒫^𝒳 is an equilateral triangle; when m = 4 a tetrahedron, and so on. If q is your belief, you would naturally expect that, if you drew enough cards, the observed distribution of suits would be "close" to q, and that if you could draw infinitely many cards, the distribution would indeed converge to q. Let's make this precise: define p̂_n ∈𝒫^𝒳 to be the distribution of suits you observe after pulling n cards. It's important to remember that p̂ is a random vector, which changes in each realization. But it would be reasonable to expect that p̂→ q as n→∞, and indeed this is true almost surely (with probability 1) according to the Strong Law of Large Numbers, if q is in fact the true distribution of cards in the deck. But what happens if you keep drawing cards and the observed distribution p̂_n is much different than your belief q? Then you would justifiably be surprised, and your "level of surprise" can be quantified by the probability of observing p̂_n if the true distribution were q, which I'll denote ℙ(p̂_n ; q). We'd naturally expect ℙ(p̂_n ; q) to become small when n grows large. Indeed, there is a quite strong result here – ℙ(p̂_n ; q) decays exponentially in n, with a very special exponent. [Kullback-Leibler Divergence] For p,q ∈𝒫^𝒳, the Kullback-Leibler (KL) divergence of q from p is d(p,q) ≜∑_x ∈𝒳 p(x) logp(x)/q(x) , where we are using the conventions that log∞ = ∞, log 0 = -∞, 0 / 0 = 0, and 0 ×∞ = 0. Suppose that the card suits are truly distributed according to p ≠ q. Then, e^-nd(p,q)/(n+1)^m≤ℙ(p̂_n;q) ≤ e^-n d(p,q) . So, the probability of observing p̂_n when you thought the distribution was q decays exponentially, with the exponent given by the divergence of your belief q from the true distribution p.
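As a quick numerical sanity check of the theorem, here is a short computation; it is a sketch in which the probability of observing an exact empirical type is evaluated with the multinomial formula:

from math import lgamma, log

def log_prob_of_type(counts, q):
    """log-probability under q of observing exactly these suit counts."""
    n = sum(counts)
    out = lgamma(n + 1) - sum(lgamma(k + 1) for k in counts)
    return out + sum(k * log(qi) for k, qi in zip(counts, q) if k > 0)

q = [0.25, 0.25, 0.25, 0.25]          # belief: uniform suits
p = [0.0, 0.0, 0.5, 0.5]              # truth: only red cards remain
d_pq = sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

for n in (100, 1000, 10000):
    counts = [round(pi * n) for pi in p]   # the empirical type closest to p
    print(n, -log_prob_of_type(counts, q) / n, d_pq)
    # the per-draw rate approaches d(p, q) = log 2 as n grows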
Another way to say this: ignoring non-exponential factors, - 1/n logℙ(p̂_n;q)≅ d(p,q) , that is, d(p,q) is minus the log-probability, per card drawn, of what you observed. The Chernoff bounds thus provide firm mathematical content to the idea that the divergence measures surprise. To make this result concrete, suppose I start with the belief that the deck is uniform over suits. So, my belief is q = (1/4,1/4, 1/4,1/4) over the alphabet {♣, ♠, ♥, ♦}. Unbeknownst to me, you have removed all the black cards from the deck, which therefore has true distribution p = (0,0, 1/2, 1/2). I draw 100 cards and record the suits. How surprised am I by the suit distribution I observe? The divergence between my belief and the true deck distribution is d(p,q) ≈ 0.69, so Theorem <ref> states that the dominating factor in the probability of my observing an empirical distribution close to p in 100 draws is e^-0.69 × 100≈ 10^-30. I am quite surprised indeed! Can you have "negative surprise?" Gibbs' inequality states that the answer is no: For all p,q ∈𝒫^𝒳, it holds that d(p,q) ≥ 0, with equality iff p = q. In words, you can never have "negative surprise," and you are only unsurprised if what you observed is exactly what you expected. There are lots of ways to prove Gibbs' Inequality; here's one with Lagrange multipliers. Fix p ∈𝒫^𝒳. We'd like to show that the problem min_q ∈𝒫^𝒳 d(p,q) has value 0 and that this value is achieved at the unique point q = p. We need two gradients: the gradient of d(p,q) with respect to q and the gradient of the implicit constraint g(q) ≜∑_x ∈𝒳 q(x) = 1. The former is ∇_q d(p,q)= ∇_q [ ∑_x ∈𝒳 p(x) logp(x)/q(x)] = - ∇_q [ ∑_x ∈𝒳 p(x) log q(x) ] = - p ⊘ q, where p ⊘ q is the elementwise quotient of vectors, and where we recall the convention 0 / 0 = 0. On the other hand, ∇_q g(q) = 1, the vector whose entries are all unity. The method of Lagrange multipliers states that we should seek λ∈ℝ such that ∇_q d(p,q) = λ∇_q g(q), or - p ⊘ q = λ1, from which it's easy to see that the only solution is q = p and λ = -1. It's a quick check that the corresponding solution value is d(p,p) = 0, which completes the proof. Theorem <ref> is the primary sense in which d behaves "like a distance" on the simplex 𝒫. On the other hand, d is unlike a distance in that it is not symmetric and does not satisfy the triangle inequality.[In fact, d is related to a "proper" distance metric on 𝒫^𝒳, which is usually called the Fisher Information Metric and is the fundamental object of study in the field of information geometry <cit.>.] Let's close out this section by noting one of the many connections between the divergence and classical statistics. Maximum likelihood estimation is a fundamental method of modern statistical practice; tools from linear regression to neural networks may be viewed as likelihood maximizers. The divergence offers a particularly elegant formulation of maximum likelihood estimation: likelihood maximization is the same as divergence minimization. Let θ be some statistical parameter, which may be multidimensional; for example, in the context of fitting normal distributions, we may have θ = (μ, σ^2); in the context of regression, θ may be the regression coefficients β. Let p_X^θ be the probability distribution over X with parameters θ. Let {x_1, …,x_n} be a sequence of i.i.d. observations of X. Maximum likelihood estimation encourages us to find the parameter θ such that θ^* = argmax_θ∏_i = 1^n p_X^θ(x_i) , i.e. the parameter value that makes the data most probable.
To express this in terms of the divergence, we need just one more piece of notation: let p̂_X be the empirical distribution of observations of X. Then, it's a slightly involved algebraic exercise to show that the maximum likelihood estimation problem can also be written θ^* = argmin_θ d(p̂_X, p_X^θ). This is rather nice – maximum likelihood estimation consists in making the parameterized distribution p_X^θ as close as possible to the observed data distribution p̂_X in the sense of the divergence.[This result is another hint at the beautiful geometry of the divergence: the operation of minimizing a distance-like measure is often called "projection." Maximum likelihood estimation thus consists in a kind of statistical projection.] § ENTROPY After having put it off for a while, let's define the Shannon entropy. If we think about the divergence as a (metaphorical) distance, the entropy measures how close a distribution is to the uniform. [Shannon Entropy] The Shannon entropy of p ∈𝒫^𝒳 is H(p) ≜ -∑_x ∈𝒳 p(x) log p(x). When convenient, we will also use the notation H(X) to refer to the entropy of a random variable X distributed according to p. The Shannon entropy is related to the divergence according to the formula H(p) = log m - d(p, u) , where m = |𝒳| is the size of the alphabet 𝒳 and where u is the uniform distribution on 𝒳.[Here is as good a place as any to note that for discrete random variables, the divergence can also be defined in terms of the entropy. Technically, d is the Bregman divergence induced by the Shannon entropy, and can be characterized by the equation d(p,q) = -[H(p) - H(q) - ⟨∇ H(q), p - q⟩] . Intuitively, d(p,q) is minus the approximation loss associated with estimating the difference in entropy between p and q using a first-order Taylor expansion centered at q. This somewhat artificial-seeming construction turns out to lead in some very interesting directions in statistics and machine learning.] This formula makes it easy to remember the entropy of the uniform distribution – it's just log m, where m is the number of possible choices. If we are playing a game in which I draw a card from the infinite deck, the suit of the card is uniform, and the entropy of the suit distribution is therefore H(X) = log 4 = 2log 2. In words, d(p, u) is your surprise if you thought the suit distribution was uniform and then found it was in fact p. If you are relatively unsurprised, then p is very close to uniform. Indeed, Gibbs' inequality (Theorem <ref>) immediately implies that H(p) assumes its largest value of log m exactly when p = u. Theorem <ref> provides one useful insight into why the Shannon entropy does not generalize naturally to continuous distributions. Whereas equation (<ref>) expresses the entropy in terms of the uniform distribution on 𝒳, there can be no analogous formula for continuous random variables on ℝ because there is no uniform distribution on ℝ. §.§ A Bayesian Interpretation of Entropy The construction of the entropy in terms of the divergence is fairly natural. We use the divergence to measure how close p is to the uniform distribution, flip the sign so that high entropy distributions are more uniform, and add a constant term to make the entropy nonnegative. This formulation of the entropy turns out to have another interesting characterization in the context of Bayesian prediction. In Bayesian prediction, I will pull a single card from the deck.
Before I do, I ask you to provide a distribution p over the alphabet {♣, ♠, ♥, ♦} representing your prediction about the suit of the card I will pull. As examples, you can choose p = (1,0,0,0) if you are certain that the suit will be ♣, or p = (1/4, 1/4, 1/4, 1/4) to express maximal ignorance. After you guess, I pull the card, obtaining a sample x ∈𝒳, and reward you based on the quality of your prediction relative to the outcome x. I do this based on a loss function f(p,x); after your guess I give you f(p,x) dollars, say. If we assume that my aim is to (a) encourage you to report your true beliefs about the deck and (b) reward you based only on what happened (i.e. not on what could have happened), then there is essentially only one appropriate loss function f, which turns out to be closely related to the entropy. More formally, A loss function f is proper if, for any alphabet 𝒴 and random variable Y on 𝒴, p_X|Y = y = argmin_p ∈𝒫^𝒳 𝔼[f(p, X)|Y = y]. In this definition, it's useful to think of Y as some kind of "side information" or "additional data." For example, Y could be my telling you that the card I pulled is a red card, which could influence your predictive distribution. When f is proper, you have an incentive to factor that into your predictive distribution. While it may feel that "of course" you should factor this in, not all loss functions encourage you to do so. For example, if f is constant, then you have no incentive to use Y at all, since each guess is just as good as any other. A proper loss function guarantees that you can maximize your payout (minimize your loss) by completely accounting for all available data when forming your prediction, which should therefore be p_X|Y = y. Thus, a proper loss function ensures that the Bayesian prediction game is "honest". A loss function f is local if f(p,x) = ψ(p, p(x)) for some function ψ. The function f is local iff f can be written as a function only of my prediction p and how much probabilistic weight I put on the event that actually occurred – not events that "could have happened" but didn't. Thus, a local loss function ensures that the Bayesian prediction game is "fair." Somewhat amazingly, the log loss function given by f(p,x) = -log p(x) is the only loss function that is both proper and local (both honest and fair), up to an affine transformation. Let f be a local and proper loss function. Then, f(p, x) = A log p(x) + B for some constants A < 0 and B ∈ℝ. Without loss of generality, we'll take A = -1 and B = 0. The entropy in this context occurs as the expected log-loss when you know the distribution of suits in the deck. If you know, say, that the proportions in the deck are p = (1/4, 1/4, 1/4, 1/4) and need to formulate your predictive distribution, Theorem <ref> implies that your best guess is just p, since you have no additional side information. Then... [Entropy, Bayesian Characterization] The (Shannon) entropy of p is your minimal expected loss when playing the Bayesian prediction game in which the true distribution of suits is p. To see that this definition is consistent with the one we saw before, we can simply compute the expectation: 𝔼[f(p,X)] = 𝔼[-log p(X)] = -∑_x ∈𝒳 p(x) log p(x), which matches Definition <ref>. The second equality follows from the fact that, if you are playing optimally, p is both the true distribution of X and your best predictive distribution.
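These characterizations are easy to check numerically. The short sketch below computes the entropy as an expected log-loss and also illustrates that the excess loss of a suboptimal guess q is exactly d(p,q), a fact implicit in the definitions above:

import numpy as np

def expected_log_loss(guess, truth):
    """E[-log guess(X)] with X ~ truth (convention 0 * log 0 = 0)."""
    m = truth > 0
    return -np.sum(truth[m] * np.log(guess[m]))

p = np.array([0.25, 0.25, 0.25, 0.25])         # true suit distribution
print(expected_log_loss(p, p), 2 * np.log(2))  # optimal guess: H(p) = 2 log 2

q = np.array([0.7, 0.1, 0.1, 0.1])             # a worse guess
gap = expected_log_loss(q, p) - expected_log_loss(p, p)
print(gap, np.sum(p * np.log(p / q)))          # the gap equals d(p, q)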
§ CONDITIONAL ENTROPY The true magic of probability theory is conditional probabilities, which formalize the idea of learning: ℙ(A|B) represents my best belief about A given what I know about B. While the Shannon entropy itself is quite interesting, information theory really starts becoming a useful framework for thinking probabilistically when we formulate the conditional entropy, which encodes the idea of learning as a process of uncertainty reduction. In this section and the next, we'll need to keep track of multiple random variables and distributions. To fix notation, we'll let p_X ∈𝒫^𝒳 be the distribution of a discrete random variable X on alphabet 𝒳, p_Y ∈𝒫^𝒴 the distribution of a discrete random variable Y on alphabet 𝒴, and p_X,Y∈𝒫^𝒳×𝒴 their joint distribution on alphabet 𝒳×𝒴. Additionally, we'll denote the product distribution of marginals as p_X⊗ p_Y∈𝒫^𝒳×𝒴; that is, (p_X⊗ p_Y)(x,y) = p_X(x) p_Y(y). [Conditional Entropy] The conditional entropy of X given Y is H(X|Y) ≜ -∑_x, y ∈𝒳×𝒴 p_X,Y(x,y) log p_X|Y(x|y) . It might seem as though H(X|Y) ought to be defined as H̃(X|Y) = -∑_x, y ∈𝒳×𝒴 p_X|Y(x|y) log p_X|Y(x|y) , which looks more symmetrical. However, a quick think makes clear that this definition isn't appropriate, because it doesn't include any information about the distribution of Y. If Y is concentrated around some very informative (or uninformative) values, then H̃ won't notice that some values of Y are more valuable than others. In the framework of our Bayesian interpretation of the entropy above, the conditional entropy is your expected loss in the guessing game assuming you receive some additional side information. For example, consider playing the suit-guessing game in the infinite deck of cards. Recall that the suit distribution is uniform, with entropy H(X) = H(u) = 2 log 2. Suppose now that you get side information – when I draw the card from the deck, before I ask you to guess the suit, I tell you the color (black or red). Since for each color there are just two possible suits, the entropy decreases. Formally, if X is the suit and Y the color, it's easy to compute that H(X|Y) = log 2. Comparing to our previous calculation that H(X) = 2log2, we see that knowing the color reduces your uncertainty by half. The conditional entropy is somewhat more difficult to express in terms of the divergence, but it does have a useful relationship with the (unconditional) entropy. The conditional entropy is related to the unconditional entropy as H(X|Y) = H(X,Y) - H(Y), where H(X,Y) is the entropy of the distribution p_X,Y. This theorem is easy to remember, because it looks like what you get by recalling the definition of the conditional probability and taking logs: p_X|Y(x|y) = p_X,Y(x,y)/p_Y(y). Indeed, take logs and compute the expectations over X and Y to prove the theorem directly. Another way to remember this theorem is to just say it out: the uncertainty you have about X given that you've been told Y is equal to the uncertainty you had about both X and Y, less the uncertainty that was resolved when you learned Y. From this theorem, it's a quick use of Gibbs' Inequality to show: H(X|Y) ≤ H(X). That is, knowing Y can never make you more uncertain about X, only less. This makes sense – after all, if Y is not actually informative about X, you can just ignore it. Theorem <ref> implies that H(X) - H(X|Y) ≥ 0.
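For the suit/color example, these identities can be verified directly; a minimal sketch (suits ordered as clubs, spades, hearts, diamonds):

import numpy as np

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Joint distribution of (suit, color) for the uniform deck:
# rows = suits (club, spade, heart, diamond), cols = color (black, red).
p_xy = np.array([[0.25, 0.0],
                 [0.25, 0.0],
                 [0.0, 0.25],
                 [0.0, 0.25]])
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

H_cond = H(p_xy.ravel()) - H(p_y)       # chain rule: H(X|Y) = H(X,Y) - H(Y)
print(H(p_x), H_cond, H(p_x) - H_cond)  # 2 log 2, log 2, and log 2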
This difference quantifies how much Y reduces uncertainty about X; if H(X) - H(X|Y) = 0, for example, then H(X|Y) = H(X) and it is natural to say that Y "carries no information" about X. We encode the idea of information as uncertainty reduction in the next section. § INFORMATION THREE WAYS Thus far, we've seen two concepts – divergence and entropy – that play fundamental roles in information theory. But neither of them exactly resembles an idea of "information," so how does the theory earn its name? Our brief note at the end of the last section suggests that we think about information as a relationship between two variables X and Y, in which knowing Y decreases our uncertainty (entropy) about X. As it turns out, the idea of information that falls out of this motivation is a remarkably useful one, and can be formulated in many interesting and different ways. [Mutual Information] The mutual information I(X,Y) in Y about X is I(X,Y) ≜ H(X) - H(X|Y). In the context of the Bayesian guessing game, I(X,Y) is the "value" of being told the suit color, compared to having to play the game without that information. From our calculations above, in the suit-guessing game, I(X,Y) = H(X) - H(X|Y) = 2log 2 - log 2 = log 2. Let's now express mutual information in two other ways. Remarkably, these follow directly via simple algebra, but each identity provides a new way to think about the meaning of the mutual information. The mutual information may also be written as: I(X,Y)= d(p_X,Y, p_X⊗ p_Y) = 𝔼_Y[d(p_X|Y, p_X)] We'll start by unpacking equation (<ref>), which expresses the mutual information I(X,Y) as the divergence between the true joint distribution p_X,Y and the product distribution p_X⊗ p_Y. Recall that p_X⊗ p_Y is what the distribution of X and Y would be, were they independent random variables. Combining this observation with Gibbs' inequality, we obtain the following important facts: I(X,Y) ≥ 0, with equality if and only if X and Y are independent. So, I(X,Y) is something like a super-charged correlation coefficient, in that it measures the degree of statistical dependence between X and Y. However, the mutual information is more powerful than the correlation coefficient in two ways. First, I(X,Y) detects all kinds of statistical relationships, not just linear ones. Second, while the correlation coefficient can vanish for dependent variables, this never happens for the mutual information. Zero mutual information implies independence, period. As a quick illustration, it's not hard to calculate or intuit that if X is the suit color and Z is the numerical value of the card pulled, then I(X,Z) = 0. Intuitively, if we were playing the suit-guessing game and I offered to tell you the card's face-value, you would be rightly annoyed. That's an unhelpful ("uninformative") offer, because the face-values and suit colors are independent.[So, why don't we just dispose of correlation coefficients and use I(X,Y) instead? Well, correlation coefficients can be estimated from data relatively simply and are fairly robust to error. In contrast, I(X,Y) requires that we have reasonably good estimates of the joint distribution p_X,Y, which is not usually available. Furthermore, it can be hard to distinguish I(X,Y) = 10^-6 from I(X,Y) = 0, and statistical tests of significance that address this problem are much more complex than those for correlation coefficients.] Equation (<ref>) has another useful consequence.
Since that formulation is symmetric in X and Y, The mutual information is symmetric: I(X,Y) = I(Y,X) . Now let's unpack equation (<ref>). One way to read this is as quantifying the danger of ignoring available information: d(p_X|Y = y, p_X) is how surprised you would be if you ignored the information Y=y and instead kept using p_X as your belief. If I told you that the deck contained only red cards, but you chose to ignore this and kept u = (1/4,1/4,1/4,1/4) as your guess, you would be surprised to keep seeing red cards turn up draw after draw. Formulation (<ref>) expresses the mutual information as the expected surprise you would experience by ignoring your available side information Y, with the expectation taken over all the possible values the side information could assume. While this formulation may seem much more opaque than (<ref>), it turns out to be remarkably useful when thinking geometrically, as it expresses the mutual information as the average "distance" between the marginal p_X and the conditionals p_X|Y. Pursuing this thought usefully expresses the mutual information as something like the "moment of inertia" for the joint distribution p_X,Y. § WHY INFORMATION SHRINKS The famous 2nd Law of Thermodynamics states that, in a closed system, entropy increases. The physicists' concept of entropy is closely related to but slightly different from the information theorist's concept, and we therefore won't make a direct attack on the 2nd Law in these notes. However, there is a close analog of the 2nd Law that gives much of the flavor and can be formulated in information theoretic terms. Whereas the 2nd Law states that entropy grows, the Data Processing Inequality states that information shrinks. Let X and Y be random variables, and let Z = g(Y), where g is any function g:𝒴→𝒵. Then, I(X,Z) ≤ I(X,Y). This is not the most general possible form of the Data Processing Inequality, but it has the right flavor. The meaning of this theorem is both "obvious" and striking in its generality. If you are using Y to predict X, then any processing you do to Y can only reduce your predictive power. Data processing can enable tractable computations; reduce the impact of noise in your observations; and improve your visualizations. The one thing it can't do is create information out of thin air. No amount of processing is a substitute for having sufficient, salient data. We'll pursue the proof of the Data Processing Inequality, as the steps are quite enlightening. First, we need the conditional mutual information: [Conditional Mutual Information] The conditional mutual information of X and Y given Z is I(X,Y|Z) = ∑_z ∈𝒵 p_Z(z)d(p_X,Y|Z = z,p_X|Z = z⊗ p_Y|Z = z). The divergence in the summand is naturally written I(X,Y|Z = z), in which case we have I(X,Y|Z) = ∑_z ∈𝒵 p_Z(z) I(X,Y|Z = z), which has the form of an expectation of mutual informations conditioned on specific values of Z. The conditional mutual information is naturally understood as the value of knowing Y for the prediction of X, given that you already know Z. Somewhat surprisingly, both of the cases I(X,Y|Z) > I(X,Y) and I(X,Y|Z) < I(X,Y) may hold; that is, knowing Z can either increase or decrease the value of knowing Y in the context of predicting X. We have I(X, (Y,Z)) = I(X, Z) + I(X, Y|Z). The notation I(X, (Y,Z)) refers to the (regular) mutual information between X and the random variable (Y,Z), which we can regard as a single random variable on the alphabet 𝒴×𝒵.
We can compute directly, dividing up sums and remembering relations like p_X,Y,Z(x,y,z) = p_X,Y|Z(x, y|z)p_Z(z). Omitting some of the more tedious algebra, I(X,(Y,Z))= d(p_X,Y,Z, p_X⊗ p_Y,Z) = ∑_x, z ∈𝒳×𝒵 p_X,Z(x,z)logp_X,Z(x,z)/p_X(x)p_Z(z)+ ∑_x, y, z ∈𝒳×𝒴×𝒵 p_Z(z)p_X,Y|Z(x, y| z) logp_X,Y|Z(x,y|z)/p_Y|Z(y|z)p_X|Z(x|z)= I(X,Z) + I(X,Y|Z) , as was to be shown. As always, the Chain Rule has a nice interpretation if you think about estimating X by first learning Z, and then Y. At the end of this process, you know both Y and Z, and therefore have information I(X,(Y,Z)). This information splits into two pieces: the information you gained when you learned Z, and the information you gained when you learned Y after already knowing Z. We are now ready to prove the Data Processing Inequality.[Proof borrowed from <http://www.cs.cmu.edu/ aarti/Class/10704/lec2-dataprocess.pdf>] Since Z = g(Y), that is, Z is a function of Y alone, we have that Z ⊥ X | Y, that is, given Y, Z and X are independent.[In fact, Z ⊥ X|Y is often taken as the hypothesis of the Data Processing inequality rather than Z = g(Y), as it is somewhat weaker and sufficient to prove the result.] So, I(X,Z|Y) = 0. On the other hand, using the chain rule in two ways, I(X, (Y,Z))= I(X,Z) + I(X,Y|Z) = I(X,Y) + I(X,Z|Y). Since I(X,Z|Y) = 0 by our argument above, we obtain I(X,Y) = I(X,Z) + I(X,Y|Z). Since I(X,Y|Z) is nonnegative by Gibbs' inequality, we conclude that I(X,Z) ≤ I(X,Y), as was to be shown. The Data Processing Inequality states that, in the absence of additional information sources, processing generically leaves you with less information than you started with. The 2nd Law of Thermodynamics states that, in the absence of additional energy sources, the system dynamics leave you with less order than you started with. These formulations suggest a natural parallel between the concepts of information and order, and therefore a natural parallel between the two theorems. We'll close out this note with an extremely simplistic-yet-suggestive way to think about this. Let X_0 and Y_0 each be random variables reflecting the possible locations and momenta of two particles at time t = 0. We'll assume (a) that the particles don't interact, but that (b) the experimenter has placed the two particles very close to each other with similar momenta. Thus, the initial configuration of the system is highly ordered, reflected by I(X_0, Y_0) > 0. If we knew Y_0, then we'd also significantly reduce our uncertainty about X_0. How does this system evolve over time? We're assuming no interactions, so each of the particles evolves separately according to some short-timescale dynamics, which we can write X_1 = g_x(X_0) and Y_1 = g_y(Y_0). Using the data processing inequality twice, we have I(X_1, Y_1) ≤ I(X_0, Y_1) ≤ I(X_0, Y_0). Thus, the dynamics tend to reduce information. Of course, we can complicate this picture in various ways, by considering particle interactions or external potentials, either of which require a more sophisticated analysis. The full 2nd Law, which is beyond the scope of these notes, is most appropriate for considering these cases. § SOME FURTHER READING Those interested in these topics have many opportunities to explore them in more detail. The below is a short list of some of the resources I have found most intriguing and useful, in addition to those cited in the introduction. §.§ Information Theory "in General" * Claude Shannon's original work <cit.>. * Shannon's entertaining information-theoretic study of written English <cit.>.
* The text of <cit.> is the standard modern overview of the field for both theorists and practitioners.
* Colah's blog post “Visual Information Theory” at <http://colah.github.io/posts/2015-09-Visual-Information/> is both entertaining and extremely helpful for getting basic intuition around the relationship between entropy and communication.

§.§ Information Theory, Statistics, and Machine Learning

* An excellent and entertaining introduction to these topics is the already-mentioned <cit.>.
* Those who want to explore further will likely enjoy <cit.>, but I would suggest doing this one after MacKay.
* Readers interested in pursuing the Bayesian development of entropy much more deeply may enjoy <cit.>, which provides an extremely thorough development of decision theory with a strong information-theoretic perspective.
* The notes for the course “Information Processing and Learning” at Carnegie-Mellon's famous Machine Learning department are excellent and accessible; find them at <http://www.cs.cmu.edu/~aarti/Class/10704/lecs.html>.

§.§ Information Theory, Physics, and Biology

* Marc Harper has some intriguing papers in which he views biological evolutionary dynamics as learning processes through the framework of information theory; a few are <cit.>.
* John Baez and his student Blake Pollard wrote a very nice and easy-reading review article on the role of information concepts in biological and chemical systems <cit.>.
* More generally, John Baez's blog is a treasure-trove of interesting vignettes and insights on the role that information plays in the physical and biological worlds: <https://johncarlosbaez.wordpress.com/category/information-and-entropy/>. For a more thoroughly worked-out connection between information dissipation and the Second Law of Thermodynamics, see <https://johncarlosbaez.wordpress.com/2012/10/08/the-mathematical-origin-of-irreversibility/>.

[Amari and Nagaoka, 2007] Amari, S.-I. and Nagaoka, H. (2007). Methods of Information Geometry. American Mathematical Society.
[Baez and Pollard, 2016] Baez, J. and Pollard, B. (2016). Relative entropy in biological systems. Entropy, 18(2):46.
[Bernardo and Smith, 2008] Bernardo, J. M. and Smith, A. F. (2008). Bayesian Theory. John Wiley and Sons, New York.
[Cover and Thomas, 1991] Cover, T. M. and Thomas, J. A. (1991). Elements of Information Theory. John Wiley and Sons, New York.
[Csiszar and Shields, 2004] Csiszar, I. and Shields, P. C. (2004). Information Theory and Statistics: A Tutorial. Foundations and Trends in Communications and Information Theory, 1(4):417–528.
[Harper, 2009] Harper, M. (2009). Information geometry and evolutionary game theory. arXiv:0911.1383.
[Harper, 2010] Harper, M. (2010). The replicator equation as an inference dynamic. arXiv:0911.1763.
[MacKay, 2003] MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 4th edition.
[Shannon, 1948] Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27:379–423.
[Shannon, 1951] Shannon, C. E. (1951). Prediction and entropy of printed English. Bell System Technical Journal, 30(1):50–64.
Power-counting during single-field slow-roll inflation

Peter Adshead (a), C. P. Burgess (b,c), R. Holman (d), and Sarah Shandera (e)

(a) Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
(b) Physics & Astronomy, McMaster University, Hamilton, ON, Canada L8S 4M1
(c) Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada
(d) Minerva Schools at KGI, 1145 Market St, San Francisco, CA 94103, USA
(e) Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA

Preprint IGC-17/8-2

Abstract: We elucidate the counting of the relevant small parameters in inflationary perturbation theory. Doing this allows for an explicit delineation of the domain of validity of the semi-classical approximation to gravity used in the calculation of inflationary correlation functions. We derive an expression for the dependence of correlation functions of inflationary perturbations on the slow-roll parameter ϵ = -Ḣ/H^2, as well as on H/M_p, where H is the Hubble parameter during inflation. Our analysis is valid for single-field models in which the inflaton can traverse a Planck-sized range in field values and where all slow-roll parameters have approximately the same magnitude. As an application, we use our expression to seek the boundaries of the domain of validity of inflationary perturbation theory for regimes where this is potentially problematic: models with small speed of sound and models allowing eternal inflation.

§ INTRODUCTION AND SUMMARY

There is considerable evidence, from both the observed temperature fluctuations in the cosmic microwave background (CMB) <cit.> and the distribution of large-scale structure <cit.> across the observable universe, pointing to the existence of a specific pattern of near-scale-invariant primordial fluctuations. Furthermore, the properties of these primordial fluctuations inferred from observations seem well-described by quantum fluctuations <cit.> during an early accelerated epoch, usually assumed to be due to inflation <cit.>. The success of this description argues that large-scale quantum-gravity effects are not only detectable but that they have, in fact, been detected: verily a triumph of modern physics.

Any quantum treatment of gravity eventually collides with its non-renormalizability <cit.>, which arises because its coupling constant (Newton's constant, 8π G = 1/M_p^2) has dimensions of inverse mass in fundamental units, for which ħ = c = 1. Because G has negative mass-dimension, any perturbative series in G inevitably becomes a low-energy expansion, since the dimensionless combination is GE^2/2π = (E/4π M_p)^2, for some E ≪ M_p. Gravity (like other non-renormalizable theories <cit.>) therefore naturally lends itself to a description in terms of an effective field theory (EFT) <cit.>. EFTs are relevant because they are designed specifically to exploit efficiently the low-energy limit whenever a system enjoys a large hierarchy of scales, such as E ≪ M (with M ∼ M_p). The lagrangian of an EFT is usually a complicated expression: when written order-by-order in powers of the heavier mass, it is an arbitrary real, local function of all possible combinations of the fields and their derivatives consistent with the symmetries of the problem.
It is because powers of derivatives come together with inverse powers of the heavy scale, M (where usually, though as we see below not exclusively, for gravity M ∼ M_p), that this complicated lagrangian is useful. Only a few interactions prove to be relevant at low orders in the 1/M expansion. Indeed, the key question one asks with any EFT is: precisely which interactions appear in exactly which ways inside which graphs within a graphical expansion, at any fixed order in the low-energy ratio E/M? Answering this question is called `power counting' the EFT, and the answer is important for two reasons. First (as applied to the contributions you compute), it is central to being able to systematically exploit the predictions of the EFT, because it identifies precisely which interactions enter at any given desired order in the low-energy expansion. Second (as applied to the contributions you do not compute), it shows precisely what the theoretical error is, by showing how large the leading neglected contributions are. Quantifying the size of this theoretical error is clearly a prerequisite for any meaningful comparison with observations.

But, as we review below, for gravity there is also a bonus: it is this kind of calculation that shows why semi-classical calculations are usually (but not always) a good approximation in gravitating systems. For example, it reveals the semi-classical expansion during cosmology to be a series in powers of (H/4π M_p)^2, where H is the spacetime's Hubble parameter. This answers a question that is probably not asked often enough: why is a classical analysis valid in the first place, and where does it start to break down?

Power-counting is also related to, but logically distinct from, issues of technical naturalness.[See e.g. ref. <cit.> for a review (similar in spirit to the approach used here) of what technical naturalness is and why it is regarded as a useful criterion when developing theories describing Nature.] It is related inasmuch as technical naturalness asks whether the sizes of effective couplings change as one integrates out states of various masses, with the particular focus being the most energetic states, since these are the ones that potentially contribute dangerously to a few vulnerable low-energy quantities (such as small scalar masses or small vacuum energies). Power counting is relevant to addressing naturalness questions because the point of a power-counting argument is to identify systematically how various large mass scales contribute to any particular observable. But power counting and technical naturalness are distinct because, although it can be an uncomfortable embarrassment (that one usually feels requires explaining) to have a theory that is not technically natural, the theory itself remains a self-consistent tool in which one can make sensible theoretical predictions. By contrast, if power counting indicates an uncontrolled dependence on a large mass then this really signals our inability to make quantitative predictions at all (at least purely within the low-energy limit). If a theory is in a regime where power-counting does not allow an expansion in small energy ratios it might or might not be technically natural; we simply do not know, since we cannot compute with it reliably enough to tell.
If a theory is not technically natural its renormalized parameters are being chosen in an odd way that begs for an explanation, but this in itself need not undermine our understanding of how to make predictions (or, as is sometimes argued, of EFT methods in general).

Since inflationary predictions of primordial fluctuations might provide the first observation of a quantum-gravity effect, power-counting during inflation is particularly important. In this note we provide the power-counting expression for the simplest single-field slow-roll models, in which all slow-roll parameters are similar in size. We start, in <ref>, by using a brief review of standard results for semiclassical power-counting in pure gravity to define notation useful later. (EFT aficionados should feel free to skip this section.) For inflationary applications there are a variety of things called EFTs <cit.>, and our power-counting arguments apply to most of them. Although we explicitly make our arguments for a scalar-metric theory expanded about a rolling background (closer in spirit to ref. <cit.>), we believe our arguments also go through for the more general framework of fluctuations about single-clock backgrounds given in ref. <cit.>.

Our main inflationary result is derived in <ref>, with <ref> tracking the low-energy factors that control the semiclassical approximation and <ref> adding the additional information about dependence on slow-roll parameters. These sections culminate in eq. PCinf, which expresses how any graph contributing to a connected n-point correlation function (at horizon exit, k = aH) depends on the two small parameters: the slow-roll parameter, ϵ, and the ratio of H to other, higher, scales. Besides reproducing standard results (as is argued to be true for the lowest few n-point functions in <ref>), this expression allows the determination of when the underlying semiclassical expansion breaks down, thereby allowing a systematic inference of the boundaries of its domain of validity. Simpler `unitarity arguments' can also identify which small parameter controls perturbation theory; however, a full power-counting result does more, because it systematically identifies how many powers of each small parameter enter from any particular graph in a perturbative expansion.

We use the power-counting formula, eq. PCinf, to explore the edges of the perturbative regime in two different ways. First, in <ref>, we explore `small-c_s' models, in which the propagation of signals can occur with speeds much smaller than the speed of light. We show why these theories push the envelope of the vanilla power-counting arguments used here, and sketch how these arguments might be extended to include a small-c_s regime in a controlled way, particularly for models like DBI inflation <cit.> that enjoy additional symmetries.[We also briefly discuss in this section some of the other dangers that small-c_s models must address in specific realizations.] Finally, <ref> discusses the regime of eternal inflation, which we argue also probes calculational boundaries in interesting ways. In this case we argue that although semiclassical perturbation theory usually is in very good shape, for sufficiently small slow-roll parameters the ϵ-expansion can become subdominant to the semiclassical expansion. In this regime it can be inconsistent to include ϵ-dependence without also including quantum effects and higher-derivative corrections to General Relativity.
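To get a feel for why the semiclassical expansion works so well, it helps to put in numbers. The following sketch evaluates the loop-counting parameter (H/4πM_p)^2 discussed above; the reduced Planck mass is the standard M_p ≃ 2.4 × 10^18 GeV, while the values of H are assumed, representative high-scale-inflation numbers rather than measurements:

```python
from math import pi

M_p = 2.435e18                 # reduced Planck mass in GeV
for H in (1e13, 1e14):         # assumed inflationary Hubble scales in GeV
    print(f"H = {H:.0e} GeV -> (H/4 pi M_p)^2 = {(H / (4 * pi * M_p))**2:.1e}")
# Output is ~1e-13 and ~1e-11: even for high-scale inflation the semiclassical
# expansion parameter is tiny, which is why classical calculations work so well.
```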
§ POWER-COUNTING WITH GRAVITY

Power counting is the central step when working with EFTs, since it systematically identifies which of the many effective interactions can contribute to observables at any order in the small quantities underlying the EFT expansion (for reviews, see for instance ref. <cit.>). This section briefly summarizes standard results when these methods are applied to gravity <cit.>, before generalizing to include the effects of small slow-roll parameters. The reader familiar with this story should feel free to skip ahead to <ref>.

§.§ GREFT

General Relativity is not renormalizable. Although this was regarded as a problem back in the day, we now know that non-renormalizability in itself is not that remarkable, since other very predictive theories (like the Fermi theory of weak interactions or low-energy interactions of pions) also share this property. Non-renormalizability is generic whenever there are couplings (like Newton's constant, G, or the Fermi constant, G_F) with engineering dimensions that are inverse powers of mass, and the central observation that gives them predictive power is that any series in this coupling is necessarily a low-energy expansion: one works in powers of GE^2, where E is a typical scale in the observables of interest. (The discussion below follows closely the treatment in ref. <cit.>.)

EFTs are the natural language for describing this sort of low-energy expansion, since they organize interactions from the get-go into a derivative expansion, in order to identify most efficiently those that dominate at low energies. Given that gravity is described by the spacetime metric, g_μν, the effective lagrangian describing pure gravity to which one is led in this way is

-Ł_eff/√(-g) = λ + (M_p^2/2) g^μν R_μν + [a_41 R^2 + a_42 R_μν R^μν + a_43 R_μνλρ R^μνλρ + ⋯] + (1/M^2)[a_61 R^3 + a_62 R R_μν R^μν + ⋯] + ⋯,

corresponding to a sum over all possible curvature invariants. In this expression the first line represents the usual terms of General Relativity, consisting of a cosmological constant, λ, and the Einstein-Hilbert action, while the second line contains curvature-squared terms, the third line curvature-cubed terms, and so on. We have introduced the reduced Planck mass, M_p^2 = (8π G)^{-1}.

As written, the couplings a_di are dimensionless (in 4 dimensions), and to this end an appropriate power of an overall mass scale M is factored out of the curvature-cubed and higher terms. Two things are important in this expansion: the dimensionless quantities a_di are usually at most order-unity;[This property is not true for some popular theories, like Higgs inflation <cit.> or curvature-squared inflation <cit.>, and it is this property that makes these theories difficult to obtain from known UV completions <cit.>.] and the scale M is likely to be much smaller than M_p. To see why these are true, imagine obtaining these interactions by integrating out a heavy particle of mass M. Within perturbation theory such a calculation generically predicts that M appears as required on dimensional grounds, and that the a_di are proportional to dimensionless couplings and suppressed by powers of 2π (see below for more details on why). One also might expect that integrating out a particle of mass M would contribute an amount δM_p^2 ∝ M^2 to the Einstein-Hilbert action, as well as an amount δλ ∝ M^4 to the cosmological constant. But the correction to the Einstein-Hilbert term is negligible if M ≪ M_p (unlike the contribution R^3/M^2, which completely swamps any prior contribution of order R^3/M_p^2).
It is an unsolved puzzle why the cosmological constant is not also dominated by contributions from the largest values of M, so we simply drop λ until discussing inflationary models in later sections.

Finally, it should also be noted that many of the interaction terms in the action of eq. GReffdef are redundant, in that their coefficients do not appear independently in observables. Two common reasons for this are if the term in question is a total derivative (such as a □R term, not written explicitly above) or if the term can be removed by performing a field redefinition. As argued in more detail in ref. <cit.>, in practice this latter criterion means that terms can be dropped that vanish when using the lowest-order field equations (such as the vacuum Einstein equations in the present case). For pure gravity with λ = 0 this allows the dropping of any terms involving the undifferentiated Ricci tensor or Ricci scalar (R_μν or R = g^μν R_μν).

§.§ Semiclassical perturbation theory

For the purposes of estimating how these interactions contribute to observables, we imagine working in semiclassical perturbation theory. This involves expanding about a classical solution,

g_μν(x) = ĝ_μν(x) + h_μν(x)/M_p,

and rewriting eq. GReffdef as a sum of effective interactions

Ł_eff = Ł̂_eff + M^2 M_p^2 ∑_n (c_n/M^{d_n}) Ø_n(h_μν/M_p),

where Ł̂_eff = Ł_eff(ĝ_μν) is the lagrangian density evaluated at the background configuration. The sum over n runs over the labels for a complete set of interactions, Ø_n, each of which involves N_n ≥ 2 powers of the field h_μν. (N_n ≠ 1 because of the background field equations satisfied by ĝ_μν.) The parameter d_n counts the total number of derivatives appearing in Ø_n (acting either on the background or the perturbation), and so the factor M^{-d_n} is what is required to keep the coefficients, c_n, dimensionless. For instance, an interaction like (c_n/M_p) h^μν ∇̂_λ h_νρ ∇̂^λ h^ρ_μ, with indices raised and covariant derivatives built using the background metric, ĝ_μν, would have d_n = 2 and N_n = 3. The overall prefactor, M^2 M_p^2, is chosen so that the kinetic terms (i.e. those terms in the sum for which d_n = N_n = 2) are M- and M_p-independent. As is clear from the example, the operators Ø_n depend implicitly on the classical background, ĝ_μν, about which the expansion is performed.

The coefficients c_n are calculable in terms of the a_di, but if M ≪ M_p the c_n's cannot all be order unity. Comparing eqs. GReffdef and GReffphih shows that the absence of M_p in all of the curvature-squared and higher terms of GReffdef implies the c_n for these interactions should be of order

c_n = (M^2/M_p^2) g_n (if d_n > 2),

where g_n is at most order-unity and independent (up to logarithms) of M and M_p. Perturbation theory proceeds by separating Ł_eff - Ł̂_eff into quadratic and higher-order parts,

Ł_eff = (Ł̂_eff + Ł_0) + Ł_int,

where Ł_0 consists of those terms in Ł_eff for which N_n = 2 and d_n ≤ 2. All other terms are lumped into Ł_int. Expanding the path integral in powers of Ł_int allows the integral over h_μν to be expressed as a sum of Gaussian integrals, classifiable in terms of Feynman graphs, with Ł_0 defining the propagators of these graphs and Ł_int their vertices in the usual way. Standard arguments show that this is a semiclassical expansion, inasmuch as each loop corresponds to an additional order in ħ (though tracking powers of ħ in this way does not in itself yet identify the semiclassical parameter whose smallness makes the loop expansion a good approximation).
To identify more cleanly what parameter controls the loop expansion we make the following dimensional argument. Imagine computing an amputated Feynman graph, whose E external lines are removed and so carry no dimensions. The propagators, G(x,y), associated with each of the I internal lines in this graph come from inverting the differential operator appearing in Ł_0. What matters about these for the present purposes is that they do not depend on M and M_p, although they can depend on scales (like the Hubble scale, H) that arise in the background configuration, ĝ_μν.

The factors of M and M_p all come from vertices in the Feynman graph of interest, since they all come from terms in Ł_int. Each time the interaction Ø_n contributes a vertex to the graph it comes with a factor of c_n M_p^{2-N_n} M^{2-d_n}. If the graph contains V_n vertices of type n it therefore acquires a factor

∏_n [c_n M_p^{2-N_n} M^{2-d_n}]^{V_n} = M_p^{2-2L-E} ∏_n [c_n M^{2-d_n}]^{V_n},

where the equality uses the identity

2I + E = ∑_n N_n V_n,

which expresses that the end of each line in the graph must occur at a vertex, as well as the definition,

L = 1 + I - ∑_n V_n,

of the number of loops, L, of the graph.

Given this identification of how M and M_p arise, the dependence of the Feynman graph of interest on the other, low-energy, scales defined by the graph's external lines is set by evaluating the Feynman rules. The result is particularly simple in the special case where there is only one such low-energy scale, since in this case its appearance is dominantly determined (up to logarithms) on dimensional grounds. Because dimensional arguments are complicated by ultraviolet divergences (arising due to the singularities in the propagators, G(x,y), in the coincidence limit y → x), for the purposes of making the dimensional argument it is very convenient to regularize these divergences using dimensional regularization. In this case all divergences arise as poles as the spacetime dimension approaches 4, and the overall dimension of the graph is set by the physical scale appearing in the external legs (such as the scale H characterizing the size of a derivative of the background classical configuration).

Denoting this external scale by H, the contribution of a graph involving E (amputated) external lines, L loops and V_n vertices involving d_n derivatives becomes

A_E(H) ≃ H^2 M_p^2 (1/M_p)^E (H/4π M_p)^{2L} ∏_n [c_n (H/M)^{d_n-2}]^{V_n},

with factors of 4π also included using standard arguments (see, e.g. <cit.>). Keeping in mind the factors of M and M_p hidden in some of the c_n's (c.f. eq. cndgt2h), it is useful to separate out the interactions with more than two derivatives in this result, to get

A_E(H) ≃ H^2 M_p^2 (1/M_p)^E (H/4π M_p)^{2L} [∏_{d_n=2} c_n^{V_n}] ∏_{d_n≥4} [g_n (H/M_p)^2 (H/M)^{d_n-4}]^{V_n}.

Notice the dimension of A_E here is what would be expected for the coefficient of ϕ^E in an expansion of the one-particle irreducible (1PI) action: i.e. A_E has dimension (mass)^{4-E}, as appropriate given its external lines have been amputated. Eq. PCresultGR identifies which combination of scales justifies regarding quantum effects to be small enough to allow semi-classical methods in General Relativity.
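Since eq. PCresultGR is just a product of ratios, it is straightforward to evaluate mechanically. The following minimal sketch (the numerical scales and couplings are illustrative placeholders, not outputs of the theory) estimates the size of a graph from its external-line, loop and vertex data, and confirms that adding a loop built purely from d_n = 2 (Einstein-Hilbert) vertices costs exactly one factor of (H/4πM_p)^2:

```python
from math import pi

def amplitude_estimate(H, M_p, M, E, L, vertices):
    """Order-of-magnitude size of an amputated E-point, L-loop graph.

    `vertices` lists tuples (c_n, d_n, V_n): the dimensionless coupling, the
    number of derivatives in the interaction, and how often the vertex appears.
    """
    est = H**2 * M_p**2 * (1.0 / M_p)**E * (H / (4 * pi * M_p))**(2 * L)
    for c_n, d_n, V_n in vertices:
        est *= (c_n * (H / M)**(d_n - 2))**V_n
    return est

H, M_p, M = 1e14, 2.4e18, 1e16   # illustrative scales, in GeV

# Tree-level 3-point graph from a single Einstein-Hilbert (d_n = 2) vertex:
tree = amplitude_estimate(H, M_p, M, E=3, L=0, vertices=[(1.0, 2, 1)])
# The same process at one loop, built from three d_n = 2 vertices:
loop = amplitude_estimate(H, M_p, M, E=3, L=1, vertices=[(1.0, 2, 3)])
print(loop / tree, (H / (4 * pi * M_p))**2)  # equal: extra d_n = 2 vertices
                                             # cost no further suppression
```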
A generic necessary condition for graphs with more loops (for a fixed number of external lines) to be parametrically suppressed compared to those with fewer loops is to have H be small enough to ensure

H/4π M_p ≪ 1,

while the suppression of interactions coming from higher-derivative interactions additionally requires

g_n (H/M_p)^2 (H/M)^{d_n-4} ≪ 1 (for d_n ≥ 4).

Repeated insertions of two-derivative interactions (i.e. those coming from the Einstein-Hilbert action) do not generically generate large contributions because these satisfy

c_n ≃ 1 (for d_n = 2).

The lack of suppression of these interactions expresses the principle of equivalence, since it shows that they are all generically equally important in a given low-energy process.

Eq. PCresultGR is central to all applications of General Relativity, since it identifies systematically which interactions are important in any given physical process. In particular, because H is always in the numerator it shows that control over semiclassical methods always requires a low-energy approximation (as is indeed expected due to the presence of the dimensionful non-renormalizable coupling, G). In practice, most people using General Relativity treat it as a classical field theory, and eq. PCresultGR shows why this is usually valid: for a fixed number of external lines the dominant contributions come if L = 0 and V_n = 0 for all interactions for which d_n > 2. Since d_n is even (and the assumption that λ is negligible implies d_n ≥ 2), this means that the dominant processes are those computed at tree level using only interactions with precisely d_n = 2 derivatives; that is, classical processes computed using only the Einstein-Hilbert action. But eq. PCresultGR also shows which interactions contribute at subleading order: the dominant corrections are those using only d_n = 2 interactions but with L = 1, or those with L = 0 for which the d_n = 2 interactions are supplemented with only a single d_n = 4 interaction. That is, the next-to-leading contribution is suppressed compared to classical General Relativity by (H/4π M_p)^2 and comes from one-loop General Relativity plus tree graphs containing exactly one curvature-squared interaction. It is the coefficients of the tree-level, curvature-squared interactions that renormalize the UV divergences that arise in the one-loop graphs. And so on, to any required accuracy in powers of H/M and H/M_p.

§ POWER-COUNTING IN SIMPLE INFLATIONARY MODELS

We now repeat and extend the above arguments to simple inflationary models, following the treatment in ref. <cit.>.

§.§ Scalar-metric models

We start by adding N dimensionless scalar fields, θ^i, though later restrict to single-field models. The effective lagrangian obtained as a derivative expansion is then

-Ł_eff/√(-g) = v^4 V(θ) + (M_p^2/2) g^μν [W(θ) R_μν + G_ij(θ) ∂_μθ^i ∂_νθ^j] + A(θ)(∂θ)^4 + B(θ) R^2 + C(θ) R (∂θ)^2 + (E(θ)/M^2)(∂θ)^6 + (F(θ)/M^2) R^3 + ⋯,

with terms involving up to two derivatives written explicitly and the rest written schematically, inasmuch as R^3 collectively represents all possible independent curvature invariants involving six derivatives, and so on. As in the previous section, the explicit mass scales v, M_p and M are extracted so that the functions V(θ), W(θ), A(θ), B(θ), etc., are dimensionless. Eq. Leffdef normalizes the scalar fields so that their kinetic term has a Planck-mass coefficient. With inflationary applications in mind we take M ≪ M_p, and V ≃ v^4 ≪ M^4 when θ ≃ Ø(1).
Again expanding about a classical solution,

θ^i(x) = ϑ^i(x) + ϕ^i(x)/M_p and g_μν(x) = ĝ_μν(x) + h_μν(x)/M_p,

allows the lagrangian of eq. Leffdef to be written as

Ł_eff = Ł̂_eff + M^2 M_p^2 ∑_n (c_n/M^{d_n}) Ø_n(ϕ/M_p, h_μν/M_p),

where as before Ł̂_eff = Ł_eff(ϑ, ĝ_μν) and the interactions, Ø_n, involve N_n = N_n^(ϕ) + N_n^(h) ≥ 2 powers of the fields ϕ^i and h_μν. Also as before, the parameter d_n counts the number of derivatives appearing in Ø_n, the coefficients c_n are dimensionless, and the prefactor, M^2 M_p^2, ensures the kinetic terms (and so also the propagators) are M- and M_p-independent.

Following the steps of the previous section, we assign M and v dependence to the coefficients c_n (for d_n ≠ 2) so that eq. Leffphih captures the same dependence as does eq. Leffdef. For simplicity, we first treat derivatives of the background and fluctuations on an equal footing, as was done in the previous section; we relax this assumption in the next section. For d_n > 2 this implies c_n is given by eq. cndgt2h, where g_n is order unity, and for terms with no derivatives (i.e. those coming from the scalar potential, V(θ)) we have

c_n = (v^4/M^2 M_p^2) λ_n (if d_n = 0),

where the dimensionless couplings λ_n are independent of M_p and M (and are related to slow-roll parameters in what follows). In terms of the λ_n's the above assumptions imply the scalar potential has the schematic form[Notice our assumption that ϕ is normalized by M_p implies a qualitative steepness for the scalar potential: V = v^4 U(ϕ/M_p), where U(x) is order unity when evaluated at order-unity arguments. Technical naturalness asks whether this form remains valid after integrating out other states (see ref. <cit.> for how the power-counting arguments used here can help assess this).]

V(ϕ) = v^4 [λ_0 + λ_2 (ϕ/M_p)^2 + λ_4 (ϕ/M_p)^4 + ⋯],

which shows that we choose V to range through values of order v^4 as the ϕ^i range through values of order M_p. When applied to inflationary models in later sections we make the further `slow-roll' assumptions that constrain the λ_n to be smaller than order unity. Although these choices do not capture all possible inflationary models, they do capture those for which the inflaton rolls over a Planckian range and for which all slow-roll parameters are of similar size, such as single-field models with an observable tensor-to-scalar ratio, r.

The natural scale for the scalar masses under the above assumptions is m ≃ √(V'')/M_p ≃ √ϵ v^2/M_p ≃ √ϵ H, where the natural value for the Hubble scale is H ≃ √V/M_p ≃ v^2/M_p. Assuming the classical background is described by slow-roll inflation, the derivatives of the canonically normalized scalar fields φ^i = M_p ϑ^i also satisfy

φ̇ ≃ V'/H ≃ M_p V'/√V ≃ √(ϵV) ≃ √ϵ v^2 ≃ √ϵ H M_p.

§.§ Semiclassical perturbation theory

Our goal is to identify how any observable depends on both the small energy ratios H/M and H/M_p as well as the slow-roll parameter, ϵ. To start off, let us take ϵ ≃ Ø(1) and purely count powers of H/M_p, by repeating the dimensional power-counting argument of earlier sections. In subsequent sections we dial ϵ down to examine its competition with H/M_p.

To this end we expand Ł_eff = (Ł̂_eff + Ł_0) + Ł_int, and examine the size of a Feynman graph having E external lines, with the external lines characterized by the single low-energy scale H. We track the powers of M, v and M_p coming from the vertices and determine the H-dependence on dimensional grounds, in the manner that led to eq. PCresult0, including making explicit the factors of v, M and M_p hidden in the c_n's when d_n ≠ 2.
For cosmological applications it also proves useful to normalize amplitudes differently than in eq. PCresult0. Whereas the previous section normalizes A_E as appropriate to amputated Feynman graphs, for cosmology it is more useful to track correlation functions,[For cosmology one usually also separately tracks dependence on H and on the mode momentum k/a, but these are the same size if we assume all momentum components have a similar size and are evaluated during the epoch of most interest: horizon exit.] for which we attach a propagator to each external line and integrate over the space-time location of the amputated graph. Since power-counting here associates a factor of H for each dimension, the resulting Feynman amplitude scales with the parameters according to

B_E ≃ A_E H^{2E-4}.

Combining everything leads to the result

B_E(H) ≃ (M_p^2/H^2)(H^2/M_p)^E (H/4π M_p)^{2L} [∏_{d_n=2} c_n^{V_n}] × [∏_{d_n=0} λ_n^{V_n}] ∏_{d_n≥4} [g_n (H/M_p)^2 (H/M)^{d_n-4}]^{V_n},

which uses H ≃ v^2/M_p to rewrite the potentially dangerous d_n = 0 term as

∏_{d_n=0} [λ_n (v^4/H^2 M_p^2)]^{V_n} ≃ ∏_{d_n=0} λ_n^{V_n}.

Although insertions of scalar interactions can sometimes undermine the underlying H/M_p expansion <cit.>, this does not happen for potentials of the form assumed in inflationary models.

Eq. PCresult shows that, under the assumptions given, the presence of scalar fields does not undermine the validity of the underlying semiclassical expansion, which again relies on the low-energy approximation of eq. Loopcond: H ≪ 4π M_p. Just as for pure gravity, the leading contribution comes from classical physics, though this time using the zero- and two-derivative parts of the action. This is what justifies standard classical treatments of inflation. Again as before, the dominant subleading terms are down by (H/4π M_p)^2 <cit.> and arise at one loop, together with classical contributions involving appropriate counterterms with up to four derivatives. These power-counting results can be used to study the sensitivity of the inflationary choices made when specifying the effective action, with the generic result that integrating out a heavy field of mass m gives contributions of the same form as those already found in the action, but with v ≃ M ≃ m <cit.>, together with potential non-adiabatic corrections that can invalidate the underlying EFT description <cit.>.

§.§ Slow-roll suppression

Generically, derivatives of the background and derivatives of the fluctuations may be of parametrically different sizes. In inflation this occurs because derivatives of background fields are additionally suppressed by powers of the slow-roll parameters. We now dial down the generic slow-roll parameter, ϵ, and find that there are two ways that ϵ modifies eq. PCresult. First, the assumed flatness of the inflationary potential (assuming all slow-roll parameters to be of the same order of magnitude) allows us to write the s-th derivative of the scalar potential as

∂^s V/∂φ^s ≃ ϵ^{s/2} V/M_p^s ≃ ϵ^{s/2} v^4/M_p^s, and so λ_n ≃ ϵ^{N_n/2} λ̂_n,

where λ̂_n is order unity and N_n counts the number of lines that meet at the vertex in question. Using this in eq. PCresult shows that insertions of scalar interactions do not change the powers of H/M_p but always cost powers of the slow-roll parameters,[Of course this conclusion relies on the assumed form V = v^4 U(ϕ/M_p), where U(x) is order unity when evaluated at order-unity arguments.] with a factor of √ϵ arising for each scalar line that meets at the vertex. The other way slow-roll parameters enter into eq.
PCresult is through scalar background-field derivatives, which we assume satisfy eq. SReqn and its higher slow-roll extensions,

d^n φ/dt^n ≃ ϵ^{n/2} H^n M_p.

This kind of suppression arises once scalar-field derivatives are expanded about their background (assuming all slow-roll parameters are similar in size), as in ∂_μ(φ + ϕ) = φ̇ δ^0_μ + ∂_μϕ, and so on. (We assume negligible background gradient energy in φ, as required for inflation.)

To track these factors we replace the two labels (d, i) counting the numbers of derivatives and fields in an interaction with five labels that do so separately for background and fluctuating fields: (d, i; D, I_s, I_h). Here i and d respectively denote the number of powers of background fields φ (but not ĝ_μν) appearing in the vertex and the number of times these background fields are differentiated. The quantity I_s similarly counts the number of powers of the fluctuating scalar fields (ϕ) and I_h counts the number of powers of metric fluctuations (h_μν), while D counts the total number of derivatives except those that act on the background scalar field (which are separately counted by d). We incorporate the slow-roll suppression due to background evolution by requiring any vertex with these labels to be suppressed by

c_n → c^{d,i}_{D,I_s,I_h} ≃ ϵ^{d/2} ĉ^{d,i}_{D,I_s,I_h} (for d_n = d + D = 2) and g_n → g^{d,i}_{D,I_s,I_h} ≃ ϵ^{d/2} ĝ^{d,i}_{D,I_s,I_h} (for d_n = d + D > 2),

where the ĉ and ĝ are order-unity constants. Notice we assume no slow-roll suppression when the scalar field appears undifferentiated in the action anywhere except for the scalar potential, so that (for example) there is no additional suppression by powers of ϵ associated with any ϑ-dependence in G_ij(ϑ) or W(ϑ).[Ignoring ϵ suppression in W(ϑ) likely over-estimates its size in models where the small size of V(ϑ) is understood because ϑ is a pseudo-Goldstone boson.]

Using these choices in eq. PCresult finally leads to the following inflationary power-counting estimate for a Feynman graph with E external lines:

B_E(H) ≃ (M_p^2/H^2)(H^2/M_p)^E (H/4π M_p)^{2L} {∏_{i,I_s,I_h} [∏_{d=0,1,2} (ϵ^{d/2} ĉ^{d,i}_{2-d,I_s,I_h})^{V^{d,i}_{2-d,I_s,I_h}}]} × [∏_{i,I_s,I_h} (ϵ^{I_s/2} λ̂_{I_s})^{V^{0,i}_{0,I_s,I_h}}] ∏_{i,I_s,I_h} {∏_{d+D≥4} [ϵ^{d/2} ĝ^{d,i}_{D,I_s,I_h} (H/M_p)^2 (H/M)^{d+D-4}]^{V^{d,i}_{D,I_s,I_h}}}.

Here the products are over all vertex types appearing in the graph, labeled according to the number of background scalar fields (i) and of metric (I_h) or scalar (I_s) perturbations participating in the interaction. The vertices are labelled according to the number of derivatives on background scalar fields (d) and the total number of derivatives, d_n = D + d, on all background fields or fluctuations. This expression summarizes the ϵ and H/M_p dependence of a general Feynman graph under simple inflationary assumptions, and so is the main result of this section.

§.§ Single-field slow-roll inflation

Before exploring the trade-off between ϵ and H/M_p in exotic situations, it is worth first verifying that the above rules capture the known dependence of fluctuations in situations already considered in the literature. In making contact with the literature we must become more explicit about the gravitational sector, for which the fluctuation h_μν contains both scalar and tensor parts.
At this point we also specialize to single-field models, which in practice means that we can always choose G_ij(θ) to be a constant, so that the inflaton kinetic term is proportional to √(-g) ∂_μϕ ∂^μϕ and so does not contain any trilinear or higher inflaton self-interactions (though it does contain trilinear and higher interactions coupling powers of the metric fluctuation to two inflaton fluctuations).

For the purposes of tracking ϵ it is convenient to work in a gauge where both ϕ and the scalar part of h have diagonal ϕ-ϕ and h-h kinetic terms unsuppressed by ϵ, while the off-diagonal ϕ-h kinetic mixing is order √ϵ. This is precisely the counting one would have in an inflationary model when expanding the inflaton kinetic term √(-(ĝ+h)) ∂_μ(φ + ϕ) ∂^μ(φ + ϕ) out to quadratic order, using the above counting rules that convert φ̇ → √ϵ H M_p. We must also come to grips with the gauge-dependence of the gravitational sector. Since we regard our scalar to be canonically normalized, we effectively work in a non-unitary gauge for which the scalar field can be tracked separately from the metric fluctuation, though only one combination of these survives in physical quantities. The result in unitary gauge (and for dimensionless tensor modes, t_μν) can be found by the rescaling[To write the full action at cubic order and higher, we also need the non-linear terms in the gauge transformation <cit.>. Including these terms gives, parametrically, ζ ≃ (ϕ/√ϵ M_p)(1 + Ø(√ϵ ϕ/M_p) + …), and so will not change our leading-order results below.]

ζ ≃ ϕ/(φ̇/H) ≃ ϕ/(√ϵ M_p) and t_μν ≃ h_μν/M_p.

With these rules we expect the leading contribution to the variance of ϕ and h to correspond to the lowest-order result for a Feynman graph with E = 2 and L = 0 that only uses vertices taken from the 2-derivative interactions. The diagonal terms arise unsuppressed by powers of H/M_p or of ϵ, while as discussed above the off-diagonal terms are down by at least one power of √ϵ. The result therefore is of order

B_hh(H) ≃ B_ϕϕ ≃ H^2, while B_ϕh ≃ √ϵ H^2.

Converting to curvature fluctuations and dimensionless strain using eq. zetadef then leads to the usual estimates

B_ζζ ≃ H^2/(ϵ M_p^2) and B_tt ≃ H^2/M_p^2.

The leading powers of H/M_p in the bispectra are similarly obtained by choosing E = 3 and L = 0, with no vertices used except those with d_n = 2. The leading powers of ϵ are then found from the estimates using the simplest graph involving only a single 3-point vertex. For the quantities ⟨hhh⟩ and ⟨hϕϕ⟩ this leads to the ϵ-unsuppressed estimates

B_hhh(H) ≃ B_hϕϕ ≃ H^4/M_p,

since the required unsuppressed cubic vertex comes from either the Einstein-Hilbert action or the inflaton kinetic term. The same is not true for ⟨hhϕ⟩ or ⟨ϕϕϕ⟩, since there is no cubic interaction of these types arising unsuppressed by ϵ in the d_n ≤ 2 lagrangian. Since the cubic scalar interaction is order ϵ^{3/2}, it is subdominant to the interactions obtained by inserting a single h-ϕ kinetic mixing into ⟨hhh⟩ or ⟨hϕϕ⟩, leading to the estimates

B_hhϕ(H) ≃ B_ϕϕϕ ≃ √ϵ H^4/M_p.

Again converting to dimensionless strain and curvature fluctuation using eq. zetadef then leads to the usual estimates <cit.>

B_ttt(H) ≃ B_ttζ(H) ≃ H^4/M_p^4 and B_tζζ(H) ≃ B_ζζζ(H) ≃ H^4/(ϵ M_p^4).
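These estimates are simple enough to evaluate directly. In the sketch below the values of H and ϵ are assumptions chosen for illustration (they happen to land the curvature power spectrum near the observed ~10^{-9} ballpark), and all order-unity factors are dropped, as appropriate for power counting:

```python
M_p = 2.4e18           # reduced Planck mass, GeV
H, eps = 1e13, 1e-2    # assumed inflationary scale and slow-roll parameter

P_zeta = H**2 / (eps * M_p**2)    # B_{zeta zeta}      ~ H^2/(eps M_p^2)
P_t = H**2 / M_p**2               # B_{tt}             ~ H^2/M_p^2
B_zeta = H**4 / (eps * M_p**4)    # B_{zeta zeta zeta} ~ H^4/(eps M_p^4)

print(f"P_zeta ~ {P_zeta:.1e}")                          # ~2e-9 here
print(f"tensor-to-scalar ~ {P_t / P_zeta:.0e}")          # ~eps, up to O(1)
print(f"B_zeta / P_zeta^2 ~ {B_zeta / P_zeta**2:.0e}")   # ~eps: parametrically
                                                         # small non-Gaussianity
```

Note the last line: the dimensionless ratio of the curvature bispectrum to the squared power spectrum scales as ϵ, which is the power-counting version of the familiar statement that single-field slow-roll models predict slow-roll-suppressed non-Gaussianity.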
Continuing in this fashion for the tri-spectra, using the tree graphs with E = 4, L = 0 and V_n = 0 unless d_n = 2, similarly leads to

B_hhhh(H) ≃ B_hhϕϕ ≃ H^6/M_p^2, while B_hhhϕ(H) ≃ B_hϕϕϕ ≃ √ϵ H^6/M_p^2 and B_ϕϕϕϕ ≃ ϵ H^6/M_p^2,

and so the leading contributions to the dimensionless strain and curvature perturbation correlations scale as

B_tttt(H) ≃ B_tttζ(H) ≃ H^6/M_p^6, while B_ttζζ ≃ B_tζζζ ≃ B_ζζζζ ≃ H^6/(ϵ M_p^6),

and so on to any order, and for any correlation function, desired.

§.§ Examples near the perturbative boundary

In this section we turn to several examples for which the previous power-counting points to nontrivial classes of graphs that must be summed to infer reliably the properties of correlations. The purpose of doing so is to highlight the need in these cases for additional arguments in order to ensure the theory retains some predictive power.

§.§.§ Small sound speed

One example along these lines arises when higher-derivative interactions involving the inflaton become important. This limit is normally discussed in terms of a small sound speed, c_s ≪ 1, since expanding a higher-derivative scalar self-interaction using X = -[∂(φ + ϕ)]^2 = φ̇^2 + 2φ̇ϕ̇ - (∂ϕ)^2 produces modifications to the speed of mode propagation:

X + c_41 X^2/M^4 ⊃ (1 + 2c_41 φ̇^2/M^4) ϕ̇^2 - (∇ϕ)^2,

and so c_s^{-2} ≃ 1 + 2c_41 φ̇^2/M^4, where c_41 is a dimensionless effective coupling. It is clear that in order to obtain c_s much different from unity one must choose φ̇ and M to satisfy

φ̇/M^2 ≃ √ϵ H M_p/M^2 ≃ √ϵ v^2/M^2 ≃ Ø(1),

and the purpose of this section is to outline the extent to which such a choice poses a threat to the power-counting given above. Clearly a minimal requirement is the validity of the semiclassical expansion itself, but at face value eq. PCinf says this requires only H ≪ 4π M_p and H ≪ M, neither of which in itself precludes the condition of eq. largephidot. The effective theory's validity also imposes additional conditions, such as that the background evolution must be adiabatic <cit.>. Since M is the smallest UV scale in the EFT, in the present context adiabaticity implies both ȧ/a = H ≪ M (which we already impose) as well as the weaker condition φ̇/φ ≃ √ϵ H ≪ M. These also seem consistent with eq. largephidot.

Does anything preclude the regime of eq. largephidot? It is clear that to the extent that ϵ ≪ 1, eq. largephidot requires M ≪ v (for instance M ≃ ϵ^{1/4} v would do the job). The issue is whether or not it is legitimate in an EFT to have the universe be dominated by an energy density, V ≃ v^4, that is larger than the UV scales being integrated out: v ≫ M.

It happens that having an energy density above the UV scale in itself need not rule out the use of EFT methods. It is possible if the large energy density cannot be converted to more dangerous forms that violate the central EFT assumption that the motion involves only the specified low-energy degrees of freedom <cit.>. Dangerous processes from this point of view are those (for example) that transfer too much kinetic energy to background fields or cause excessive particle production.[Although examples (often supersymmetric) can arise for which a large background energy, V ≃ v^4 with v ≫ M, does not preclude using an EFT whose domain of validity is energies below M, these scenarios tend to be special and can break down at later, less protected, points in the history of the universe (such as reheating) <cit.>.] But the condition in eq.
largephidot requires more than just that the potential energy satisfy V ≃ v^4 ≫ M^4; it also demands that the scalar kinetic energy be of order (or larger than) UV scales, since φ̇^2 ≳ M^4. What makes this precarious is that this kinetic energy must not be extractable to excite either UV particles (by assumption, not in the EFT) or to provoke non-adiabatic background evolution (which again is not reliably captured by the EFT). Were it not for an explicit example, one would be tempted to conclude from all this that small sound speeds must be beyond the domain of sensible EFT methods.

§.§.§ DBI inflation

The explicit example that seems to argue otherwise is DBI inflation <cit.>, for which the inflaton arises as the centre-of-mass coordinate of a brane, and for which relativistic kinematics is argued to imply an action with a kinetic term of the form

Ł = -√(-g) T √(1 - X/T),

where X = -(∂ϕ)^2 and T is the brane's tension. What is unusual about the DBI action is that it keeps all orders in X but neglects all second and higher derivatives of ϕ, and this is believed to be a sensible regime because relativistic kinematics should not break down even for speeds near the speed of light, which in this case corresponds to the limit |X/T| ≃ Ø(1). At the classical level this is self-consistent, inasmuch as the classical equations coming from eq. DBI drive φ̈ → 0 as φ̇^2 → T. And in the regime |X/T| ≃ Ø(1) the speed of sound predicted for the inflaton becomes small, as expected from the above estimates.

The special structure of the DBI action has been argued to be preserved by a symmetry (a nonlinear realization of the spacetime symmetries that are broken by the presence of the brane) <cit.>, and this symmetry is likely to ensure the presence of the special square-root factor in front of all effective interactions, including those involving higher derivatives of ϕ. It is quite possible that this, together with the use of a different scaling regime for which spatial derivatives are of order H/c_s, might lead to a consistent power-counting formulation[Notice that the power-counting issue differs from the issue of the technical naturalness of small c_s discussed in ref. <cit.>. Technical naturalness asks whether having small c_s at one scale ensures it remains small when run to other scales, for which the dangerous interactions are usually those involving the heaviest fields (which by assumption are not present in the EFT). By contrast, power counting questions whether or not having small c_s undermines the entire low-energy expansion on which semiclassical methods are based.] by extending the preliminary steps taken in ref. <cit.> (see also <cit.>).

Our discussion above shows how worthwhile it would be to develop such a systematic power-counting calculation (along the lines of the above) for DBI and, by extension, some class of small-c_s models, showing how semiclassical calculations can be regarded as the leading terms in an expansion in powers of a small parameter (and, if so, explicitly what this parameter is). In the absence of such a power-counting result it is difficult to know how to quantify precisely the theoretical error made when using such semiclassical methods, and thereby to know how far to trust their predictions.

Although the existence of such a power-counting scheme would mean that a DBI-type action could be self-consistent, it would not automatically guarantee that it provides a good description of any specific microscopic brane setup.
This is because there are usually additional issues that need checking, as can be seen by keeping in mind the simple example of a first-quantized relativistic point particle. The Nambu-like action for such a particle is very similar to the DBI action (due to the similarly broken spacetime symmetries), and would at first sight seem equally tricky to power-count in the extreme relativistic limit (for which the centre-of-mass coordinate, x(t), satisfies √(1 - ẋ^2) → 0, similar to the relativistic DBI limit √(1 - X/T) → 0). Yet we know that a consistent power-counting formulation in this case exists, and is most easily given in the second-quantized framework obtained once the relevant antiparticle is also included. In order for an EFT description cast purely in terms of the centre-of-mass motion to be valid, one must check whether or not the relativistic motion starts to produce particle-antiparticle pairs, or to radiate other states to which the moving particle couples. Similarly, for relativistic DBI constructions one must identify the extent to which any assumed relativistic motion stimulates brane, string or Kaluza-Klein excitations that have been assumed to be integrated out when formulating the EFT involving only ϕ and its derivatives, but neglecting these other states <cit.>.

§.§.§ Eternal inflation

Our power-counting discussion shows that classical dynamics always dominates when H/M_p is sufficiently small, regardless of the size of ϵ. Working classically to subdominant order in ϵ therefore implicitly assumes a hierarchy of the form H/M_p ≪ ϵ^s for some positive s, whose value depends on precisely how far one wishes to work in the slow-roll expansion. But for any specific values of ϵ and H/M_p it is also clear that beyond some order in ϵ it becomes invalid to work purely at classical order. For instance, reasonable values for H and ϵ might be H/(√ϵ M_p) ≃ 10^{-5} and ϵ ≃ 10^{-2}, for which H/M_p ≃ ϵ^3. In this case working beyond 6th order in ϵ necessarily also requires including loop corrections and including the action's higher-derivative terms in any classical calculation.

An extreme example to which this observation is pertinent is the case of eternal inflation, which corresponds to choosing parameters so that

δϕ/(φ̇/H) ≃ H/(4π √ϵ M_p) ≳ Ø(1).

This condition ensures that inflationary stochastic fluctuations can compete with classical evolution over Hubble time-scales. Clearly, because ϵ ≲ (H/4π M_p)^2 in this regime, contributions suppressed by ϵ at any loop order (say, tree level) can compete with contributions unsuppressed by ϵ but arising at one higher loop.

As has been pointed out elsewhere, the eternal-inflation regime is parametrically consistent with a perturbative analysis (although not if c_s is too small <cit.>), since both control parameters H/M_p and ϵ can be arbitrarily small while still satisfying eq. eternal. Where the above power-counting becomes interesting is if effects are computed for which ϵ being nonzero plays a role, implying the keeping of a fixed order in the ϵ expansion.
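A quick numerical reading of the example just quoted (a sketch; the amplitude 10^{-5} and ϵ = 10^{-2} are the values assumed in the text) makes the competition explicit:

```python
from math import log, pi, sqrt

eps = 1e-2
amp = 1e-5                     # assumed value of H/(sqrt(eps) M_p) from the text
H_over_Mp = amp * sqrt(eps)

# H/M_p expressed as a power of eps: here s = 3, so one loop ~ (H/4 pi M_p)^2
# competes with classical terms of order eps^6.
print(f"H/M_p = {H_over_Mp:.0e} = eps^{log(H_over_Mp) / log(eps):.1f}")

# Eternal-inflation criterion H/(4 pi sqrt(eps) M_p) >~ 1 for these values:
print(f"H/(4 pi sqrt(eps) M_p) = {amp / (4 * pi):.1e}")   # ~8e-7, far below 1
```

For these (roughly observationally motivated) numbers the stochastic fluctuations are far too small to satisfy eq. eternal, so reaching the eternal-inflation regime requires either a much larger H/M_p or a much smaller ϵ.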
Power-counting then shows that once one keeps any terms linear in ϵ (or smaller), it becomes inconsistent to work only within the classical approximation, using only up-to-two-derivative interactions, such as those of General Relativity coupled to an inflaton. For stochastic formulations of eternal inflation <cit.> these arguments indicate that the inclusion of drift is only consistent, while neglecting quantum and higher-derivative corrections to General Relativity, in the regime

H/(4π M_p) ≳ √ϵ ≳ (H/4π M_p)^2,

where the first inequality restates the condition of eq. eternal for eternal inflation and the second inequality is the condition that the drift (normally proportional to V' ∝ √ϵ) be larger than one-loop corrections. It also means that all corrections to the noise (and subdominant contributions to the drift) arising at Ø(ϵ), such as those found in ref. <cit.>, must also be accompanied by loop and higher-derivative corrections if applied within the eternal-inflation regime. In view of the recent resurgence of interest in stochastic methods <cit.>, it clearly would be worthwhile developing systematic power-counting tools of equal power for the stochastic regime.

§ ACKNOWLEDGEMENTS

We thank Subodh Patil, Eva Silverstein, Andrew Tolley, and David Tong for helpful discussions about power-counting and small-c_s models. We also thank the Banff International Research Station for support and hospitality while this work was in progress. This work was partially supported by funds from the Natural Sciences and Engineering Research Council (NSERC) of Canada. Research at the Perimeter Institute is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI. The work of PA was supported in part by the United States Department of Energy through grant DE-SC0015655.

References

[CMB] P. A. R. Ade et al. (Planck collaboration), “Planck 2015 results. XX. Constraints on inflation,” Astron. Astrophys. 594 (2016) A20, arXiv:1502.02114 [astro-ph.CO]; G. Hinshaw et al. (WMAP collaboration), “Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results,” Astrophys. J. Suppl. 208, 19 (2013), arXiv:1212.5226 [astro-ph.CO].
[LSS] S. A. Rodríguez-Torres et al., “Clustering of quasars in the First Year of the SDSS-IV eBOSS survey: Interpretation and halo occupation distribution,” Mon. Not. Roy. Astron. Soc. 468, no. 1, 728 (2017), arXiv:1612.06918 [astro-ph.CO]; T. M. C. Abbott et al. [DES Collaboration], “Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing,” arXiv:1708.01530 [astro-ph.CO].
[Fluctuations] V. F. Mukhanov and G. V. Chibisov, JETP Lett. 33, 532 (1981) [Pisma Zh. Eksp. Teor. Fiz. 33, 549 (1981)]; A. H. Guth and S. Y. Pi, Phys. Rev. Lett. 49, 1110 (1982); A. A. Starobinsky, Phys. Lett. B 117, 175 (1982); S. W. Hawking, Phys. Lett. B 115, 295 (1982); V. N. Lukash, Pisma Zh. Eksp. Teor. Fiz. 31, 631 (1980); Sov. Phys. JETP 52, 807 (1980) [Zh. Eksp. Teor. Fiz. 79 (1980)]; W. Press, Phys. Scr. 21, 702 (1980); K. Sato, Mon. Not. Roy. Astron. Soc. 195, 467 (1981).
[firstINF] A. H. Guth, “The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems,” Phys. Rev. D23 (1981) 347–356; A. D. Linde, “A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Homogeneity, Isotropy and Primordial Monopole Problems,” Phys. Lett. B108 (1982) 389–393; A. Albrecht and P. J.
Steinhardt, “Cosmology for Grand Unified Theories with Radiatively Induced Symmetry Breaking,” Phys. Rev. Lett. 48 (1982) 1220–1223; A. D. Linde, “Chaotic Inflation,” Phys. Lett. B129 (1983) 177–181.
[NRGR] R. P. Feynman, “Quantum theory of gravitation,” Acta Phys. Polon. 24 (1963) 697; B. S. DeWitt, “Quantum Theory of Gravity. 1. The Canonical Theory,” Phys. Rev. 160 (1967) 1113; “Quantum Theory of Gravity. 2. The Manifestly Covariant Theory,” Phys. Rev. 162 (1967) 1195; S. Mandelstam, “Feynman Rules For The Gravitational Field From The Coordinate Independent Field Theoretic Formalism,” Phys. Rev. 175 (1968) 1604.
[EFTreview] S. Weinberg, “Phenomenological Lagrangians,” Physica A96 (1979) 327; H. Leutwyler, “Principles of chiral perturbation theory,” arXiv:hep-ph/9406283; A. V. Manohar, “Effective field theories,” arXiv:hep-ph/9606222; C. P. Burgess, “Introduction to Effective Field Theory,” Ann. Rev. Nucl. Part. Sci. 57 (2007) 329, arXiv:hep-th/0701053.
[GREFT0] J. F. Donoghue, “Introduction to the Effective Field Theory Description of Gravity,” arXiv:gr-qc/9512024; J. F. Donoghue and T. Torma, “On the power counting of loop diagrams in general relativity,” Phys. Rev. D54, 4963 (1996), arXiv:hep-th/9602121.
[GREFT] C. P. Burgess, “Quantum gravity in everyday life: General relativity as an effective field theory,” Living Rev. Rel. 7 (2004) 5, arXiv:gr-qc/0311082.
[CCRev] C. P. Burgess, “The Cosmological Constant Problem: Why it's hard to get Dark Energy from Micro-physics,” arXiv:1309.4133 [hep-th].
[IEFTWbg] S. Weinberg, “Effective Field Theory for Inflation,” Phys. Rev. D77, 123541 (2008), arXiv:0804.4291 [hep-th].
[InfEFT] C. Cheung, P. Creminelli, A. L. Fitzpatrick, J. Kaplan and L. Senatore, “The Effective Field Theory of Inflation,” JHEP 0803, 014 (2008), arXiv:0709.0293 [hep-th].
[InfEFT2] R. Bean, D. J. H. Chung and G. Geshnizjani, “Reconstructing a general inflationary action,” Phys. Rev. D78, 023517 (2008), arXiv:0801.0742 [astro-ph].
[DBI] M. Alishahiha, E. Silverstein and D. Tong, “DBI in the sky,” Phys. Rev. D70 (2004) 123505, arXiv:hep-th/0404084.
[Nonminscalar] D. S. Salopek, J. R. Bond and J. M. Bardeen, “Designing Density Fluctuation Spectra in Inflation,” Phys. Rev. D40, 1753 (1989); R. Fakir and W. G. Unruh, “Improvement on cosmological chaotic inflation through nonminimal coupling,” Phys. Rev. D41, 1783 (1990).
[HI] F. L. Bezrukov and M. Shaposhnikov, “The Standard Model Higgs boson as the inflaton,” Phys. Lett. B659, 703 (2008), arXiv:0710.3755 [hep-th].
[Staro] A. A. Starobinsky, “A New Type of Isotropic Cosmological Models Without Singularity,” Phys. Lett. 91B, 99 (1980).
[InfPowerCount] C. P. Burgess, H. M. Lee and M. Trott, “Power-counting and the Validity of the Classical Approximation During Inflation,” JHEP 0909 (2009) 103, arXiv:0902.4465 [hep-ph]; “Comment on Higgs Inflation and Naturalness,” JHEP 1007 (2010) 007, arXiv:1002.2730 [hep-ph].
[HIdiff] J. L. F. Barbon and J. R. Espinosa, “On the Naturalness of Higgs Inflation,” Phys. Rev. D79 (2009) 081302, arXiv:0903.0355 [hep-ph]; C. P. Burgess, S. P. Patil and M. Trott, “On the Predictiveness of Single-Field Inflationary Models,” JHEP 1406 (2014) 010, arXiv:1402.1476 [hep-ph].
[UVblock] C. P. Burgess, M. Cicoli, S. de Alwis and F.
Quevedo, “Robust Inflation from Fibrous Strings”, JCAP 1605 (2016) no.05, 032 [arXiv:1603.06789 [hep-th]].

EFTadiabatic1 G. Shiu and I. Wasserman, “On the signature of short distance scale in the cosmic microwave background”, Phys. Lett. B536, 1 (2002) [arXiv:hep-th/0203113]; N. Kaloper, M. Kleban, A. E. Lawrence and S. Shenker, “Signatures of short distance physics in the cosmic microwave background”, Phys. Rev. D66, 123510 (2002) [arXiv:hep-th/0201158]; N. Kaloper, M. Kleban, A. Lawrence, S. Shenker and L. Susskind, “Initial conditions for inflation”, JHEP 0211, 037 (2002) [arXiv:hep-th/0209231].

EFTadiabatic C. P. Burgess, J. M. Cline and R. Holman, “Effective field theories and inflation”, JCAP 0310 (2003) 004 [arXiv:hep-th/0306079].

EFTnonadiabatic C. P. Burgess, J. M. Cline, F. Lemieux and R. Holman, “Are inflationary predictions sensitive to very high energy physics?”, JHEP 0302, 048 (2003) [arXiv:hep-th/0210233].

Maldacena J. M. Maldacena, “Non-Gaussian features of primordial fluctuations in single field inflationary models”, JHEP 0305 (2003) 013 [astro-ph/0210603].

adiabatic R. Flauger, M. Mirbabayi, L. Senatore and E. Silverstein, “Productive Interactions: heavy particles and non-Gaussianity”, arXiv:1606.00513 [hep-th].

DBIsym J. Hughes, J. Liu and J. Polchinski, “Virasoro-Shapiro From Wilson”, Nucl. Phys. B316 (1989) 15; J. Hughes and J. Polchinski, “Partially Broken Global Supersymmetry and the Superstring”, Nucl. Phys. B278 (1986) 147.

DBIsym2 C. de Rham and A. J. Tolley, “DBI and the Galileon reunited”, JCAP 1005 (2010) 015 [arXiv:1003.5917 [hep-th]].

TNSmallcs L. Senatore, K. M. Smith and M. Zaldarriaga, “Non-Gaussianities in Single Field Inflation and their Optimal Limits from the WMAP 5-year Data”, JCAP 1001 (2010) 028 [arXiv:0905.3746 [astro-ph.CO]]; L. Senatore and M. Zaldarriaga, “A Naturally Large Four-Point Function in Single Field Inflation”, JCAP 1101 (2011) 003 [arXiv:1004.1201 [hep-th]]; D. Baumann and D. Green, “Equilateral Non-Gaussianity and New Physics on the Horizon”, JCAP 1109 (2011) 014 [arXiv:1102.5343 [hep-th]].

SmallcsPC L. Leblond and S. Shandera, “Simple Bounds from the Perturbative Regime of Inflation”, JCAP 0808 (2008) 007 [arXiv:0802.2290 [hep-th]]; S. Shandera, “The structure of correlation functions in single field inflation”, Phys. Rev. D79 (2009) 123518 [arXiv:0812.0818 [astro-ph]].

SmallcsBounce C. de Rham and S. Melville, “Unitary null energy condition violation in P(X) cosmologies”, Phys. Rev. D95 (2017) no.12, 123523 [arXiv:1703.00025 [hep-th]].

CheckOtherModes M. Becker, L. Leblond and S. E. Shandera, “Inflation from wrapped branes”, Phys. Rev. D76, 123516 (2007) [arXiv:0709.1170 [hep-th]]; X. Chen, “Fine-Tuning in DBI Inflationary Mechanism”, JCAP 0812, 009 (2008) [arXiv:0807.3191 [hep-th]].

eternalptbv P. Creminelli, S. Dubovsky, A. Nicolis, L. Senatore and M. Zaldarriaga, “The Phase Transition to Slow-roll Eternal Inflation”, JHEP 0809 (2008) 036 [arXiv:0802.1067 [hep-th]]; I. S. Kohli and M. C. Haslam, “Stochastic Eternal Inflation in a Bianchi Type I Universe”, Phys. Rev.
D93 (2016) no.2, 023514 [arXiv:1508.02670 [gr-qc]].

StochInf A. A. Starobinsky, “Stochastic De Sitter (inflationary) Stage In The Early Universe”, Lect. Notes Phys. 246 (1986) 107; A. A. Starobinsky and J. Yokoyama, “Equilibrium state of a selfinteracting scalar field in the De Sitter background”, Phys. Rev. D50 (1994) 6357 [astro-ph/9407016].

StochCorr C. P. Burgess, R. Holman and G. Tasinato, “Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation”, JHEP 1601 (2016) 153 [arXiv:1512.00169 [gr-qc]].

StochDecoh C. P. Burgess, R. Holman, G. Tasinato and M. Williams, “EFT Beyond the Horizon: Stochastic Inflation and How Primordial Quantum Fluctuations Go Classical”, JHEP 1503 (2015) 090 [arXiv:1408.5002 [hep-th]].

RecentStoch V. K. Onemli, “Vacuum Fluctuations of a Scalar Field during Inflation: Quantum versus Stochastic Analysis”, Phys. Rev. D91 (2015) 103537 [arXiv:1501.05852 [gr-qc]]; V. Vennin and A. A. Starobinsky, “Correlation Functions in Stochastic Inflation”, Eur. Phys. J. C75 (2015) 413 [arXiv:1506.04732 [hep-th]]; D. Boyanovsky, “Effective field theory during inflation. II. Stochastic dynamics and power spectrum suppression”, Phys. Rev. D93 (2016) 043501 [arXiv:1511.06649 [astro-ph.CO]]; H. Assadullahi, H. Firouzjahi, M. Noorbala, V. Vennin and D. Wands, “Multiple Fields in Stochastic Inflation”, JCAP 1606 (2016) no.06, 043 [arXiv:1604.04502 [hep-th]]; J. Grain and V. Vennin, “Stochastic inflation in phase space: Is slow roll a stochastic attractor?”, JCAP 1705 (2017) no.05, 045 [arXiv:1703.00447 [gr-qc]]; M. Motaharfar, E. Massaeli and H. R. Sepangi, “Power spectra in warm G-inflation and its consistency: stochastic approach”, arXiv:1705.04049 [gr-qc]; H. Collins, R. Holman and T. Vardanyan, “The quantum Fokker-Planck equation of stochastic inflation”, arXiv:1706.07805 [hep-th]; C. Pattison, V. Vennin, H. Assadullahi and D. Wands, “Quantum diffusion during inflation and primordial black holes”, arXiv:1707.00537 [hep-th].
Frozen Mode Regime in Finite Periodic Structures

Tsampikos Kottos

Department of Physics, Wesleyan University, Middletown CT-06459, USA; Air Force Research Laboratory, Sensors Directorate, Wright-Patterson Air Force Base, OH 45433, USA; KBRwyle, Dayton, OH 45431

Periodic structures with a Bloch dispersion relation supporting a stationary inflection point (SIP) can display a unique scattering feature, the frozen mode regime (FMR). The FMR is much more robust than common cavity resonances; it is much less sensitive to the boundary conditions, structural imperfections, and losses. Using perturbation theory, we analyze the FMR in the realistic case of a finite fragment of a periodic structure. We show that in close proximity of the SIP frequency, the character of the FMR is qualitatively different from the known case of a semi-infinite structure.

Introduction – The ability to engineer composite structures with a predefined wave dispersion relation is one of the greatest achievements of the last thirty years <cit.>. An outgrowth of this technological development was the realization of photonic and phononic band-gap materials, which are now used routinely to control the propagation of light and sound by creating stop bands and by adjusting and reducing the wave's group velocity. Direct consequences of the latter accomplishment are the enhancement of linear (like absorption or amplification) and nonlinear (like Kerr) effects, and the efficient manipulation of the direction of electromagnetic or acoustic signals.

There are many ways that one can achieve slow wave propagation, where the group velocity v_g = ∂ω/∂k|_{k=k_0} ≈ 0, via dispersion management. Example cases include degenerate or regular band edges, corresponding to frequency dispersion relations ω(k)-ω_0 ∝ (k-k_0)^{2m} with an integer m>1 or m=1, respectively. Another type of slow wave is associated with stationary inflection point (SIP) singularities ω(k)-ω_0 ∝ (k-k_0)^{2m+1}. The latter results in the formation of the so-called frozen mode regime (FMR) <cit.>, whose most prominent feature is the nearly total conversion of an input signal into a slow (frozen) mode with dramatically enhanced amplitude. This feature has to be contrasted with the situation encountered at (degenerate) band edges, where the incident wave is typically reflected at the interface between free space and the slow-wave medium. The FMR is not a resonance phenomenon; it is not particularly sensitive to the size and shape of the photonic/phononic structure and it can tolerate all kinds of structural imperfections. On top of that, the FMR can withstand much stronger losses than any known cavity resonance. The above features make the FMR very attractive for a variety of applications in optics, microwave, RF and acoustics. In combination with non-reciprocity, the FMR can lead to the phenomenon of electromagnetic unidirectionality <cit.>. In a nonreciprocal photonic structure with gain, the FMR can also result in cavity-less unidirectional lasing <cit.>.

Due to the underlying mathematical complexity, the FMR has been fully analyzed only in semi-infinite periodic multilayered structures <cit.> and multimode waveguide arrays <cit.>. Here we investigate the scattering problem for a finite multi-mode structure whose periodic counterpart displays SIPs.
Using an abstract transfer matrix formalism together with a matrix perturbation approach, we study the transmission characteristics of such set-ups in the FMR. We derive theoretical expressions for the energy flux carried by the slow propagating mode(s) and identify a new scaling behavior with respect to the frequency detuning. Specifically, we find that the energy flux associated with the slow propagating mode(s) undergoes a transition at critical sample lengths L_C ∝ |ω-ω_SIP|^{-1/3}, from an S_p ∼ O(1) behavior (characteristic of semi-infinite structures) to an S_p ∝ |ω-ω_SIP|^{-2/3} law. The latter divergence is balanced by a simultaneous development of an energy flux carried by pairs of evanescent modes. Our results are confirmed via detailed simulations for set-ups with both symmetric and asymmetric spectra that support SIPs.

Transfer Matrix Formalism near SIPs – We consider (finite) periodic composite structures whose infinite counterpart has a dispersion relation ω(k) which supports a SIP at some frequency ω_SIP. In the absence of any non-reciprocal elements, the dispersion relation is reciprocal, ω(k)=ω(-k), and therefore an ω_SIP is associated with two counter-propagating slow modes at ±k_SIP. In such structures, two sets of three modes are responsible for the SIPs at ±k_SIP. The wave propagation can be analyzed using the transfer matrix approach. The transfer matrix T(z, z_0; ω) connects the wave amplitudes (in mode space – see Ref. <cit.> for a coupled mode theory implementation) Ψ of a monochromatic wave at two different positions z and z_0 through the relation Ψ(z) = T(z, z_0; ω)Ψ(z_0). For the specific case of periodic structures, the transfer matrix of a unit cell, 𝒯(ν) ≡ 𝒯(1, 0; ω_0+ν), dictates the transport. Here we assume that the length of the unit cell is L_uc = 1, ω_0 = ω_SIP, and ν is the frequency detuning. We consider a minimal model for which the unit transfer matrix 𝒯(ν) is 6×6 and analytic around the SIP. Since a symmetric spectrum develops two SIPs at ν=0, 𝒯(0) can be represented by its Jordan normal form as

𝒯(0) = g_0(0) ([ J^- 0; 0 J^+ ]) g_0^{-1}(0);  J^± ≡ e^{±ik_0} ([ 1 1 0; 0 1 1; 0 0 1 ]),

where g_0(0) = [j_0^-, j_1^-, j_2^-, j_0^+, j_1^+, j_2^+] is an invertible 6×6 matrix whose columns are the Jordan basis vectors, and ±k_0 = ±k_SIP. When ν ≠ 0 (but still ν→0), 𝒯(ν) reduces to its normal form <cit.>

g_0(ν)^{-1} 𝒯(ν) g_0(ν) = ([ T^-(ν) 0; 0 T^+(ν) ]),

where T^±(ν) = J^± + T_1^± ν + ⋯ ≡ e^{±ik_0}(I_3 + Z^±(ν)) and the matrix g_0(ν) depends analytically on ν in the vicinity of ν=0. Next we focus on the eigenvalue problem associated with the individual blocks of the normal form Eq. (<ref>). Let us consider, for example, the block matrix T^-(ν) or the equivalent problem associated with the matrix Z^-(ν). Simple-minded perturbation theory is not useful in cases like ours, where the leading term of the operator expansion is nilpotent, i.e. (Z^-(0))^3 = 0. Indeed, in such cases the standard Taylor series assumed for the eigenvalues is not the appropriate expansion; rather, one has to develop the eigenvalue perturbation expansion using a Puiseux series <cit.>. Nevertheless, a singular perturbation theory provides a recipe to “re-construct” the appropriate operator expansion after identifying the correct leading-order term <cit.>. Using this approach we find that Z^-(ν) = Z_0^-(ν̃) + Z_1^- ν̃ + ⋯, where

Z_0^-(ν̃) ≡ ([ 0 1 0; 0 0 1; 0 0 0 ]) + ν̃ ([ 0 0 0; 0 0 0; 1 0 0 ])  and  ν̃ ≡ [Z^-(ν)]_{31} = -3!ν/ω‴(-k_0) + 𝒪(ν^2).
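As a quick numerical illustration of the Puiseux behavior just described (an addition of ours, not part of the original derivation), the following Python sketch builds Z_0^-(ν̃) and checks that its eigenvalues split as ν̃^{1/3} times the three cube roots of unity:

    import numpy as np

    def Z0(nu):
        # Leading operator Z_0^-(nu~): nilpotent Jordan part plus the
        # (3,1) perturbation entry; note (Z0(0))**3 = 0.
        Z = np.array([[0., 1., 0.],
                      [0., 0., 1.],
                      [0., 0., 0.]], dtype=complex)
        Z[2, 0] = nu
        return Z

    for nu in [1e-3, 1e-6, 1e-9]:
        lam = np.sort_complex(np.linalg.eigvals(Z0(nu)))
        # Puiseux prediction: nu^(1/3) * c_n with c_n = exp(2j*pi*n/3)
        pred = np.sort_complex(nu**(1/3) * np.exp(2j * np.pi * np.arange(3) / 3))
        print(f"nu = {nu:.0e}  max |lam - pred| = {np.abs(lam - pred).max():.1e}")

Since the characteristic polynomial of Z_0^-(ν̃) is λ^3 = ν̃, the agreement is exact up to floating-point error, whereas a naive Taylor ansatz λ = O(ν̃) would miss the much larger ν̃^{1/3} splitting.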
The diagonalization of Z_0^-(ν̃) gives

G_0^{-1}(ν̃) Z_0^-(ν̃) G_0(ν̃) = ν̃^{1/3} Λ_0;  Λ_0 = diag(c_0, c_1, c_2),

where c_n ≡ e^{i2πn/3} and the similarity transformation matrix G_0(ν̃) is a Vandermonde matrix <cit.> of order 3. Further, the diagonalization process for Z^-(ν̃) (or equivalently T^-(ν)) can be continued order-by-order, leading to the compact form

e^{-S} G_0(ν̃)^{-1} T^-(ν) G_0(ν̃) e^{S} = e^{-ik_0}(I_3 + ν̃^{1/3}Λ_0 + ν̃^{2/3}Λ_1 + ⋯),

where the matrix S ≡ S(ν̃^{1/3}) = ν̃^{1/3}S_1 + ν̃^{2/3}S_2 + ⋯ and Λ_1, ⋯ are diagonal matrices. A similar treatment applies to the eigenvalue problem associated with T^+(ν). The above approach allows us to evaluate perturbatively the eigenvalues θ_n^∓(ν) and the eigenvectors f_n^∓(ν) of the unit transfer matrix 𝒯(ν). We get

𝒯(ν) f_n^∓(ν) = θ_n^∓(ν) f_n^∓(ν),  n = 0, 1, 2,
θ_n^∓(ν) ≈ e^{∓ik_0 + λ_n^∓};  λ_n^∓(ν) ≡ α_0^∓ c_n ν^{1/3},
f_n^∓(ν) ≈ [1 - σ_2^∓ λ_n^∓ + η^∓ (λ_n^∓)^2] j_0^∓ + [λ_n^∓ - σ_1^∓ (λ_n^∓)^2] j_1^∓ - (λ_n^∓)^2 j_2^∓,

where α_0^∓ = (3!/ω‴(∓k_0))^{1/3}, η^∓ = γ_3^∓ - γ_1^∓ + (1/2)((σ_1^∓)^2 + σ_1^∓σ_2^∓ - (σ_2^∓)^2), σ_l^∓ = (1/3)[T_1^∓]_{l+1,l}/[T_1^∓]_{31}, γ_l^∓ = (1/3)[T_1^∓]_{l,l}/[T_1^∓]_{31}, and j_n^∓ is the Jordan basis of 𝒯(0). We assume that ν→0^+ and that the incident wave enters the finite structure from the left interface at z=0; for an example see the dispersion relation in Fig. <ref>b. We are now ready to decompose any wave inside the structure into the forward (backward) propagating mode f_0^- (f_0^+) and the evanescent modes f_1^-, f_2^+ (f_1^+, f_2^-), and thus evaluate the associated conversion coefficients. We shall also analyze the energy flux carried by these modes and determine its scaling with respect to the detuning ν.

Conversion Coefficients – We consider a finite structure consisting of N periods of the unit cell. In contrast to the semi-infinite case <cit.>, finite structures involve two interfaces, at z=0 and z=N, and therefore both forward and backward modes can participate in the scattering process. When ν→0^+, the eigenmodes Eq. (<ref>) associated with different blocks in Eq. (<ref>) become degenerate within each block. This observation forces us to construct a new “well-behaved” basis ℬ_fb = { f_0^-, f̃_1^-, f̃_2^-, f_0^+, f̃_1^+, f̃_2^+ }, where the new basis vectors f̃_1^∓ = (f_1^∓ - f_0^∓)/(α_0^∓ ν^{1/3}(c_1-1)) and f̃_2^∓ = -(c_2 f_2^∓ + c_1 f_1^∓ + f_0^∓)/(3(α_0^∓)^2 ν^{2/3}), together with f_0^±, remain independent in the limit ν→0^+. Next we introduce semi-infinite leads and couple them to the left and right of the structure. We shall assume that the leads do not develop any spectral singularity around ω_0. We then request continuity of Ψ(z) at the interfaces z=0, N together with the scattering condition that the incident wave enters the structure from the left, i.e. that the coefficients of the backward modes in the right lead are zero. Finally, the identification of the appropriate (non-degenerate in the ν→0 limit) basis guarantees that the scattering problem has a unique solution and that the expansion coefficients {φ_1^+, φ_2^+, φ_3^-, φ_1^-, φ_2^-, φ_3^+} of Ψ(z=0^+) in the basis ℬ_fb exist for any incident wave. Obviously, the specific values of the expansion coefficients depend on the particular form of the incident wave. Nevertheless, some features are independent of the incident waveform; we find, for example, that φ_l^±(ν) = φ_l^±(0) + 𝒪(ν^{1/3}), while the envelopes scale as |φ_3^±(0)| ∼ 𝒪(N^{-1}) and |φ_j^±(0)| ∼ 𝒪(N^0), j=1,2, in the large-N limit.
Correspondingly, in terms of the eigenmodes of 𝒯(ν), the expansion of Ψ(z=0^+) is given as

Ψ(z=0^+) = ∑_{σ=±} [ -φ_3^{-σ}/(3(λ_0^{-σ})^2) - φ_2^σ/((c_1-1)λ_0^{-σ}) + φ_1^σ ] f_0^{-σ} + [ -c_1 φ_3^{-σ}/(3(λ_0^{-σ})^2) + φ_2^σ/((c_1-1)λ_0^{-σ}) ] f_1^{-σ} + [ -c_2 φ_3^σ/(3(λ_0^σ)^2) ] f_2^σ,

where σ = +/- corresponds to forward/backward modes. Substituting the scaling expressions for the expansion coefficients φ_l^±, together with λ_0^{-σ} (see Eq. (<ref>)), into Eq. (<ref>) allows us to estimate the scaling of the conversion coefficients. Specifically, we find that each of the square-bracket terms in Eq. (<ref>) scales as

[⋯] ∼ β_2/(N ν^{2/3}) + β_1/ν^{1/3},

where β_1, β_2 are constants independent of N and ν. Equation (<ref>) signifies a scaling transition from 1/ν^{2/3} (small samples) to 1/ν^{1/3} (large samples) at some critical sample length L_C = L_uc N_C ∝ L_uc/ν^{1/3}. While the latter scaling law for the conversion coefficients is already known from the case of semi-infinite structures, the former is completely new and a trademark of the finite-length nature of the scattering setting.

Modal energy flux – We now turn our focus to the consequences of the scaling (<ref>) for the modal energy flux. First we recall that near a SIP the Bloch dispersion relation takes the form ω - ω_0 ∝ (k - k_0)^3. The group velocity of the slow propagating mode(s) is

v_g = ∂ω/∂k ∝ (k - k_0)^2 ∝ (ω - ω_0)^{2/3},

while the associated energy flux contribution S_p is

S_p = W_p v_g ∝ W_p ν^{2/3},

where W_p is the energy density of the slow propagating mode. An estimate for the scaling of W_p is provided by the behavior of the conversion coefficients associated with f_0^±, see Eqs. (<ref>,<ref>), i.e. W_p ∝ |β_2/(Nν^{2/3}) + β_1/ν^{1/3}|^2. In other words, W_p undergoes a transition from a 1/ν^{4/3} (for N < N_C) to a 1/ν^{2/3} (for N > N_C) scaling with respect to the detuning ν. In the latter limit of “semi-infinite” samples, the sole contribution to the energy flux comes from the slow mode and thus S = S_p = W_p v_g ∝ 1, as expected also from previous studies <cit.> (see also Appendix). In contrast, in finite scattering set-ups, the contribution S_p of the slow propagating mode(s) to the total energy flux S is

S_p = W_p v_g ∝ W_p ν^{2/3} ∝ ν^{-2/3},

where we have used Eq. (<ref>) together with the scaling behavior of W_p for short samples. The anomalous scaling Eq. (<ref>) of the modal energy flux of the propagating modes near the SIP can only be balanced by the same type (but opposite in sign) of divergence of the modal energy flux S_ev carried by the two pairs of forward and backward evanescent modes. This is necessary in order to obtain a total energy flux S ∼ O(1), and it is a new feature associated with the fact that the scattering set-up is finite. In the remainder of this paper we check these theoretical predictions using some simple numerical examples.

Tight-binding model – We first consider a tight-binding (TB) model supporting a symmetric dispersion relation with two symmetric SIPs, see Fig. <ref>a,b. This system can be realized as a quasi-one-dimensional array of coupled resonators <cit.>. The system consists of M=3 chains of coupled resonators, where the resonators of each chain have equal nearest-neighbor coupling (set to 1 as the coupling unit). The vertical inter-chain coupling between the nearest chains is κ_1. In addition, the resonators of the first two chains have an on-site potential contrast κ_0 (with respect to the resonators of the third chain) and they are also coupled via an inter-chain diagonal coupling κ_2.
In this TB model a monochromatic electromagnetic wave is described by

ω E_l^(1) = E_{l-1}^(1) + E_{l+1}^(1) + κ_1 E_l^(2) + κ_2 E_{l+1}^(2) + κ_0 E_l^(1),
ω E_l^(2) = E_{l-1}^(2) + E_{l+1}^(2) + κ_1 (E_l^(1) + E_l^(3)) + κ_2 E_{l-1}^(1) + κ_0 E_l^(2),
ω E_l^(3) = E_{l-1}^(3) + E_{l+1}^(3) + κ_1 E_l^(2),

where E_l^(m) is the field amplitude at site l of chain m. Substituting E_l^(m) = A^(m) e^{ikl} in Eq. (<ref>), we get ω u_A = D u_A with

D ≡ ([ ϵ(k) v(k) 0; v^*(k) ϵ(k) κ_1; 0 κ_1 2cos k ]),

where ϵ(k) = 2cos k + κ_0, v(k) = κ_1 + κ_2 e^{ik} and u_A = (A^(1), A^(2), A^(3))^T. The dispersion relation ω(k) is then obtained by setting det(D - ωI_3) = 0. Generally there are three bands for this model, and we mainly focus on the band supporting SIPs, characterized by ω'(±k_0) = ω''(±k_0) = 0 and ω‴(±k_0) ≠ 0. An example is given in Fig. <ref>b for the parameter values κ_0 = κ_1 = κ_2 = 5, with SIPs at ±k_0 = ±π/2 and ω_0 = -5. The scattering sample is attached on the left and on the right to semi-infinite leads, which are composed of three decoupled chains with constant nearest-neighbor coupling κ_L in each chain. Thus the leads support a traveling wave whenever its frequency is within the band ω(k_L) = 2κ_L cos k_L, where -π ≤ k_L < π. The field amplitude in each lead-chain can be written as a sum of two counter-propagating waves, i.e., E_l^(m) = a^(m) e^{i|k_L|l} + b^(m) e^{-i|k_L|l}. In the simulations, we assume κ_L = 4 so that b^(m) represents the amplitude of the incoming waves, since v_g ≡ ∂ω/∂k_L|_{-|k_L|} > 0. Finally, the energy flux through a section l in the scattering domain can be defined using the continuity equation,

d/dt [∑_m E_l^(m)* E_l^(m)] = F_{l-1→l} - F_{l→l+1},

where F_{l-1→l} ≡ 2 Im[∑_m E_{l-1}^(m) E_l^(m)* + E_{l-1}^(1) κ_2 E_l^(2)*] denotes the flux flowing from section l-1 to l. At the same time the field amplitudes can be parametrized as E_{l-1}^(m) = a_{l-1}^(m) + b_{l-1}^(m) and E_l^(m) = a_{l-1}^(m) e^{ik} + b_{l-1}^(m) e^{-ik}, where ω = 2cos k and k is in general a complex number. The self-consistency requirement E_l^(m) ≡ a_{l-1}^(m) e^{ik} + b_{l-1}^(m) e^{-ik} = a_l^(m) + b_l^(m), together with Eq. (<ref>), allows us to calculate the unit transfer matrix 𝒯(ν) such that Ψ_l = 𝒯(ν)Ψ_{l-1}, where Ψ_l ≡ (a_l^(1), b_l^(1), a_l^(2), b_l^(2), a_l^(3), b_l^(3))^T.

We are now ready to analyze numerically the scaling of the modal energy fluxes of the TB model Eq. (<ref>); a minimal numerical sketch of the Bloch matrix D and of the SIP location is given below. First we have verified that Eq. (<ref>) is valid for 𝒯(0) with the aforementioned parameters. In Fig. <ref>c we report our numerical findings for the modal energy flux associated with the slow propagating (S_p) and evanescent (S_ev) modes for three different system sizes N. We find that while for ν→0 these quantities scale according to the new scaling law Eq. (<ref>), the modal fluxes saturate to a constant value at different ν_C ∝ 1/N^3, in accordance with our theoretical prediction, see Eq. (<ref>). In Fig. <ref>d we report the data for one of the N values on a linear-linear plot. We find that S_ev^∓, associated with each of the two pairs of evanescent modes (corresponding to the T^∓ blocks in Eq. (<ref>)), balances the divergence of the S_p contribution so that the total flux is S ∼ O(1).

Non-reciprocal layered structures – It is straightforward to reproduce Eqs. (<ref>,<ref>,<ref>) for finite set-ups with spectral non-reciprocity, i.e. ω(k) ≠ ω(-k). Here, instead, we confirm numerically the validity of these equations for the example of a multilayer periodic magnetic photonic crystal (PC) with a proper spatial arrangement, see Fig. <ref>a <cit.>.
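Returning to the tight-binding model above: the following sketch (our illustrative addition, using the parameter values κ_0 = κ_1 = κ_2 = 5 quoted in the text) assembles the Bloch matrix D(k) and uses finite differences to check that the lowest band satisfies ω ≈ -5 and ω' ≈ ω'' ≈ 0 at k = π/2, i.e. the SIP:

    import numpy as np

    def D(k, k0=5.0, k1=5.0, k2=5.0):
        # Bloch matrix of the three-chain model (Hermitian), Eq. above.
        eps = 2 * np.cos(k) + k0
        v = k1 + k2 * np.exp(1j * k)
        return np.array([[eps, v, 0.0],
                         [np.conj(v), eps, k1],
                         [0.0, k1, 2 * np.cos(k)]])

    def band(k):
        # lowest of the three bands; it passes through omega = -5 at k = pi/2
        return np.sort(np.linalg.eigvalsh(D(k)))[0]

    kc, h = np.pi / 2, 1e-3
    w1 = (band(kc + h) - band(kc - h)) / (2 * h)                # ~ omega'(k_0)
    w2 = (band(kc + h) - 2 * band(kc) + band(kc - h)) / h**2    # ~ omega''(k_0)
    print(f"omega = {band(kc):.6f}, omega' = {w1:.2e}, omega'' = {w2:.2e}")

Both finite differences vanish up to O(h^2) and eigensolver noise, consistent with the cubic dispersion ω - ω_0 ∝ (k - k_0)^3 at the SIP.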
When analyzing the modal energy flux, we find that the forward slow propagating mode carries an energy flux which scales according to Eq. (<ref>), while the pair of associated evanescent modes balances this divergence in a similar manner, see Fig. <ref>c,d. The energy flux of the remaining (fast) backward propagating mode does not show any divergence and has a minimal contribution to the total energy flux, see Fig. <ref>d.

Conclusions – We developed a theory of the FMR for realistic finite structures. We find that the character of the FMR undergoes a transition, which is reflected in a dramatic change of the scaling behaviour (with respect to the detuning ν) of the modal energy flux of the slow propagating modes at critical lengths L_C ∝ 1/ν^{1/3}. As opposed to the semi-infinite case, below this length-scale the energy flux is carried even by (pairs of) evanescent modes. Our results might have important applications to non-reciprocal transport.

Acknowledgments – (I.V.) acknowledges support from AFOSR via LRIR14RY14COR.

D13 P. A. Deymier, Acoustic Metamaterials and Phononic Crystals (Springer, 2013).

JJWM08 J. D. Joannopoulos, S. G. Johnson, J. N. Winn, and R. D. Meade, Photonic Crystals: Molding the Flow of Light, 2nd ed. (Princeton University Press, Princeton, NJ, 2008).

FV06 A. Figotin and I. Vitebskiy, Waves Random Complex Media 16, 293 (2006).

FV11 A. Figotin and I. Vitebskiy, Slow Wave Phenomena in Photonic Crystals (review article), Laser & Photonics Reviews 5, 201 (2011).

FV13 A. Figotin and I. Vitebskiy, Electromagnetic Unidirectionality in Magnetic Photonic Crystals; book chapter in: Magnetophotonics: From Theory to Applications, Springer Series in Materials Science, 2013.

RKVK14 H. Ramezani, S. Kalish, I. Vitebskiy, and T. Kottos, Phys. Rev. Lett. 112, 043904 (2014).

FV01 A. Figotin, I. Vitebskiy, Phys. Rev. E 63, 066609 (2001).

FV06a A. Figotin, I. Vitebskiy, J. Magn. Magn. Mater. 300, 117 (2006).

FV06b A. Figotin, I. Vitebskiy, Phys. Rev. E 74, 066613 (2006).

GSSB12 N. Gutman, C. Martijn de Sterke, A. A. Sukhorukov, L. C. Botten, Phys. Rev. A 85, 033804 (2012).

GDSSS12 N. Gutman, W. Hugo Dupree, Y. Sun, A. S. Sukhorukov, C. M. de Sterke, Opt. Express 20, 3519 (2012).

SLCPK08 A. Sukhorukov, A. Lavrinenko, D. Chigrin, D. Pelinovsky, Y. Kivshar, J. Opt. Soc. Am. B 25, C65 (2008).

CMT W.-P. Huang, J. Opt. Soc. Am. A 11, 963 (1994).

K95 T. Kato, Perturbation theory of linear operators, Springer (1995).

MPT V. N. Bogaevski and A. Povzner, Algebraic Methods in Nonlinear Perturbation Theory (Springer-Verlag, Berlin, 1991).

LT85 P. Lancaster, M. Tismenetsky, The theory of matrices, Academic Press (1985).

KMMVK16 U. Kuhl, F. Mortessagne, E. Makri, I. Vitebskiy, T. Kottos, submitted (2016).

SFKMS17 T. Stegmann, J. A. Franco-Villafane, U. Kuhl, F. Mortessagne, T. H. Seligman, Phys. Rev. B 95, 035413 (2017).

BKMM13 M. Bellec, U. Kuhl, G. Montambaux, F. Mortessagne, Phys. Rev. B 88, 115437 (2013).

Supplemental Materials

§ SEMI-INFINITE STRUCTURES

We can also consider semi-infinite scattering set-ups for the case of coupled waveguide arrays, see Fig. 1a,b. In this case the semi-infinite scattering domain is attached to one semi-infinite lead (say, on the left). Again we assume that the lead does not develop any singularity in its spectrum around ω = ω_0. We consider an incident wave sent from the lead towards the scattering domain, thus exciting the forward mode(s). In this case the forward modes consist of one forward slow propagating mode f_0^-(ν) and two forward evanescent modes f_1^-(ν) and f_2^+(ν).
It is important to notice that in the limit ν→0^+, the modes f_0^-(ν) and f_1^-(ν) are degenerate, because f_0^-(ν) - f_1^-(ν) → 0, see Eq. (5). Following the same strategy as in the case of finite structures, we construct a “well-behaved” basis for the forward modes as ℬ_f = { f_0^-(ν), f̃_1^-(ν), f_2^+(ν) } with f̃_1^-(ν) ≡ (f_1^-(ν) - f_0^-(ν))/(α_0^- ν^{1/3}(c_1-1)), which are linearly independent as ν→0^+. Next we decompose the propagating waves in this basis. At the interface z=0, the wave state Ψ is assumed to be continuous. By matching the boundary condition, the expansion coefficients {φ_1, φ_2, φ_3} of Ψ(z=0^+) in the basis ℬ_f are obtained. Although these coefficients depend on the specific form of the incident wave, they are nevertheless characterized by some general features, i.e. φ_l(ν) = φ_l(0) + 𝒪(ν^{1/3}). As a result, in terms of forward Bloch modes, we obtain the expansion of Ψ(z=0^+) as

Ψ(z=0^+) = [φ_1 - φ_2/(α_0^-(c_1-1)ν^{1/3})] f_0^- + [φ_2/(α_0^-(c_1-1)ν^{1/3})] f_1^- + φ_3 f_2^+.

This result is similar to the one obtained in layered structures. The only difference is the additional forward evanescent mode originating from the T^+ block in Eq. (2). This mode has a relatively negligible contribution to the field when ν→0^+. Following the same argumentation as in layered structures we can estimate, for semi-infinite systems, the scaling behaviour of the modal energy flux of the slow propagating mode. Specifically, using Eq. (<ref>) we can evaluate the wave Ψ(z) transmitted into the lossless semi-infinite periodic structure. The latter is composed of propagating Ψ_p(z) and evanescent Ψ_ev(z) contributions. An estimate of the scaling of these components with the detuning ν is given by the conversion coefficients associated with f_0^- (propagating amplitude) and f_1^- (diverging evanescent amplitude), see Eq. (<ref>). We get

|Ψ_pr|^2 ∝ |Ψ_ev|^2 ∝ ν^{-2/3}.

At the interface z=0 of the semi-infinite structure, the two diverging Bloch components interfere destructively, thereby satisfying the continuous boundary conditions at z=0. As the distance z from the interface increases, the evanescent contribution Ψ_ev(z) vanishes, while the diverging propagating component Ψ_p(z) provides the sole contribution to the transmitted field Ψ(z) and determines its saturation value. In short, in the vicinity of a SIP, the wave transmitted into the semi-infinite periodic structure has a diverging saturation amplitude (<ref>) and a vanishing group velocity. The latter can be easily calculated from the dispersion relation, which in the vicinity of a SIP is

ω(k) - ω_SIP ∝ (k - k_SIP)^3,

resulting in a vanishing group velocity which scales as

v_g ∝ (ω - ω_SIP)^{2/3}.

Using Eqs. (<ref>,<ref>) we find that the energy flux

S ∝ v_g |Ψ_pr|^2

provided by the frozen mode remains finite even at ω = ω_SIP. Under certain conditions, the energy flux of the transmitted frozen mode can be close to that of the incident wave, implying effective coupling with the incident wave.

§ SIMULATION DETAILS OF MULTI-LAYERED STRUCTURE

The basic unit of the PC shown in Fig. 2a contains three components: two nonmagnetic misaligned anisotropic layers (red and blue) with a magnetic layer (grey) in between. The magnetic layer – in the presence of a static magnetic field or spontaneous magnetization – guarantees the violation of time-reversal symmetry, while the anisotropic layers break the mirror reflection symmetry; a necessary condition for achieving spectral non-reciprocity, i.e. ω(k) ≠ ω(-k). We can find appropriate parameters for which the spectrum supports one SIP (see Fig. 2b).
The numerical analysis for the multilayered structure was performed using the transfer matrix approach (for a detailed presentation see Ref. [6]). The permeability tensor for the gyrotropic magnetic layer has the form

μ̂_f = ([ μ_xx iβ 0; -iβ μ_xx 0; 0 0 1 ]),

while the permittivity tensor is

ϵ̂_f = ([ ϵ_f iα 0; -iα ϵ_f 0; 0 0 1 ]),

where α is the gyrotropic parameter responsible for Faraday rotation. The width of this layer is L_f. Similarly, the permeability tensor for the birefringent layers is the 3×3 identity matrix, μ̂ = 1, while the permittivity tensor takes the form

ϵ̂_{1,2} = ([ ϵ_A + δ cos(2ϕ_{1,2}) δ sin(2ϕ_{1,2}) 0; δ sin(2ϕ_{1,2}) ϵ_A - δ cos(2ϕ_{1,2}) 0; 0 0 1 ]),

where ϵ_A is the permittivity of the medium. Moreover, δ is the magnitude of the in-plane anisotropy and ϕ_{1,2} is the angular orientation of the principal axes in the xy-plane. The width of these layers is L_{1,2} = L. In our simulations, we have placed the multilayered structure in air and used the following parameter values: ϵ_f ≈ 3.9, α ≈ 0.9, μ_xx ≈ 0.814, β ≈ 0.015, ϵ_A ≈ 7.3, δ = 0.54, L_f = L/2 and L = 1. Finally, we have used ϕ_1 = π/4 and ϕ_2 = 0. The SIP (see Fig. 1c) was found at ω_0 = 2.5c/L_uc, where L_uc = 2.5.
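For completeness, the material tensors above can be assembled in a few lines. The following sketch (an illustrative addition of ours; the default arguments are the parameter values quoted in the text) builds the three 3×3 tensors:

    import numpy as np

    def mu_magnetic(mu_xx=0.814, beta=0.015):
        # permeability of the gyrotropic magnetic layer
        return np.array([[mu_xx, 1j * beta, 0],
                         [-1j * beta, mu_xx, 0],
                         [0, 0, 1]])

    def eps_magnetic(eps_f=3.9, alpha=0.9):
        # permittivity of the magnetic layer; alpha drives Faraday rotation
        return np.array([[eps_f, 1j * alpha, 0],
                         [-1j * alpha, eps_f, 0],
                         [0, 0, 1]])

    def eps_birefringent(phi, eps_A=7.3, delta=0.54):
        # permittivity of a misaligned anisotropic layer with axis angle phi
        c, s = np.cos(2 * phi), np.sin(2 * phi)
        return np.array([[eps_A + delta * c, delta * s, 0],
                         [delta * s, eps_A - delta * c, 0],
                         [0, 0, 1]])

    eps1, eps2 = eps_birefringent(np.pi / 4), eps_birefringent(0.0)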
A Fast Approximation Scheme for Low-Dimensional k-Means

Vincent Cohen-Addad, University of Copenhagen

We consider the popular k-means problem in d-dimensional Euclidean space. Recently, Friggstad, Rezapour, Salavatipour [FOCS'16] and Cohen-Addad, Klein, Mathieu [FOCS'16] showed that the standard local search algorithm yields a (1+ε)-approximation in time (n·k)^{1/ε^{O(d)}}, giving the first polynomial-time approximation scheme for the problem in low-dimensional Euclidean space. While local search achieves optimal approximation guarantees, it is not competitive with state-of-the-art heuristics such as the famous k-means++ and D²-sampling algorithms. In this paper, we aim at bridging the gap between theory and practice by giving a (1+ε)-approximation algorithm for low-dimensional k-means running in time n·k·(log n)^{(dε^{-1})^{O(d)}}, thus matching the running time of the k-means++ and D²-sampling heuristics up to polylogarithmic factors. We speed up the local search approach by making a non-standard use of randomized dissections that allows us to find the best local move efficiently using a quite simple dynamic program. We hope that our techniques could help design better local search heuristics for geometric problems.

§ INTRODUCTION

The k-means objective is arguably the most popular clustering objective among practitioners. While originally motivated by applications in image compression, the k-means problem has proven to be a successful objective to optimize in order to pre-process and extract information from datasets. Its most successful applications now stem from machine learning problems such as learning mixtures of Gaussians, Bregman clustering, or DP-means <cit.>. Thus, it has become a classic problem in both machine learning and theoretical computer science. Given a set of points in a metric space, the k-means problem asks for a set of k points, called centers, that minimizes the sum of the squares of the distances of the points to their closest center.

The most famous algorithm for k-means is arguably the Lloyd heuristic [also referred to as Lloyd-Forgy] introduced in the 80s <cit.>, sometimes referred to as “the k-means algorithm”. While this algorithm is very competitive in practice and yields empirically good approximate solutions on real-world inputs, it is known that its running time can be exponential in the input size and that it can return arbitrarily bad solutions in the worst case (see <cit.>). This induces a gap between theory and practice. Thus, to fix this unsatisfactory situation, Arthur and Vassilvitskii <cit.> designed a variant of the Lloyd heuristic, called k-means++, and proved that it achieves an O(log k) approximation. The k-means++ algorithm has now become a standard routine that is part of several machine learning libraries (<cit.>) and is widely used in practice. While this has been a major step towards reducing the gap between theory and practice, it has remained an important problem to conceive algorithms with nearly-optimal approximation guarantees.

Unfortunately, the k-means problem is known to be APX-hard even for (high-dimensional) Euclidean inputs <cit.>. Hence, to design competitive approximation schemes one needs to restrict attention to classes of “more structured” inputs that are important in practice. Low-dimensional Euclidean inputs form a class of inputs that naturally arise in image processing and machine learning (see the examples in <cit.> or in <cit.>).
Thus, finding a polynomial-time approximation scheme (PTAS) for O(1)-dimensional Euclidean inputs of k-means has been an important research problem for the last 20 years, since the seminal work of <cit.>. Recently, Friggstad et al. <cit.> and Cohen-Addad et al. <cit.> both showed that the classic local search heuristic with neighborhoods of magnitude (d/ε)^{O(d)} achieves a (1+ε)-approximation. While this has been an important result for the theory community, it has a much weaker impact for practitioners since the running time of the algorithm is n^{(d/ε)^{O(d)}}. Therefore, to reduce the gap between theory and practice, it is natural to ask for near-optimal approximation algorithms with competitive running time. This is the goal of this paper.

Fast local search techniques are important. The result of Friggstad et al. and Cohen-Addad et al. was preceded by several results showing that local search achieves good approximation bounds, or even yields exact algorithms, for various problems (see <cit.>). Furthermore, there is a close relationship between local search and clustering: the standard local search heuristics achieve very good approximation guarantees in various settings (in addition to the two aforementioned papers, see <cit.>). Moreover, local search approaches are extremely popular in practice because they are easy to tune, easy to implement, and easy to run in parallel. Thus, it has become part of the research agenda of the theory community to develop fast local search approaches while preserving the guarantees on the quality of the output (see <cit.>).

§.§ Our Results

We show that our fast local search algorithm (Algorithm <ref>) yields a PTAS for the slightly more general variant of the k-means problem where centers can have an opening cost (a.k.a. weight).

There exists a randomized algorithm (Algorithm <ref>) that returns a (1+ε)-approximation to the center-weighted d-dimensional Euclidean k-means problem in time n·k·(log n)^{(dε^{-1})^{O(d)}} with probability at least 1/2.

We would like to remark that the doubly exponential dependency on d is needed: Awasthi et al. <cit.> showed that Euclidean k-means is APX-hard when d = Ω(log n). Note that it is possible to obtain an arbitrarily small probability of failure p > 0 by repeating the algorithm log(1/p) times. As far as we know, this is the first occurrence of a local search algorithm whose neighborhood size only impacts the running time by polylogarithmic factors.

§.§ Other Related Work

The k-means problem is known to be NP-hard, even when restricted to inputs lying in the Euclidean plane (Mahajan et al. <cit.>, and Dasgupta and Freund <cit.>), and was recently shown to be APX-hard in Euclidean space of dimension Ω(log n) (<cit.>). There has been a large body of work on approximation algorithms for the Euclidean k-means problem (see <cit.>); very recently, Ahmadian et al. <cit.> gave a 6.357-approximation, improving over the 9-approximation of Kanungo et al. <cit.>. Given the hardness results, researchers have focused on different scenarios. There have been various (1+ε)-approximation algorithms when k is considered a fixed parameter (see <cit.>).
Another successful approach has been the definition of “stable instances” to characterize the real-world instances stemming from machine learning and data analysis (see for example <cit.>), or the context of smoothed analysis (see for example <cit.>). In the case of low-dimensional inputs, Bandyapadhyay and Varadarajan showed that local search with neighborhoods of size ε^{-O(d)} achieves a (1+ε)-approximation <cit.> when allowed to open O(εk) extra centers. As mentioned before, this result was improved by Friggstad et al. <cit.> and Cohen-Addad et al. <cit.>, who showed that even when constrained to open exactly k centers, local search achieves a (1+ε)-approximation.

Related work on local search. Local search heuristics belong to the toolbox of all practitioners (see Aarts and Lenstra <cit.> for a general introduction). As mentioned before, there is a tight connection between local search and clustering: Arya et al. <cit.> proved that local search with a neighborhood size of 1/ε yields a (3+2ε)-approximation to k-median. For the k-means problem, Kanungo et al. <cit.> showed a similar result by proving that the approximation guarantee of local search for Euclidean k-means is 9+ε. For more applied examples of local search and clustering, see <cit.>. For other theoretical examples of local search for clustering, we refer to <cit.>.

Related work on k-median. The k-median problem has been widely studied. For the best known results in terms of approximation for general metric-space inputs we refer to Li and Svensson <cit.> and Byrka et al. <cit.>. More related to our results are the approximation schemes for k-median in low-dimensional Euclidean space given by Arora et al. <cit.>, who gave a (1+ε)-approximation algorithm running in time n^{ε^{-O(d)}}. This was later improved by Kolliopoulos and Rao <cit.>, who obtained a running time of 2^{ε^{-O(d)}}·n·polylog n. Quite surprisingly, it is unclear whether the techniques used by Arora et al. and by Kolliopoulos and Rao could be used to obtain a (1+ε)-approximation for the k-means problem; this has induced a 20-year gap between the first PTAS for k-median and the first PTAS for k-means in low-dimensional Euclidean space. See Section <ref> for more details.

§.§ Overview of the Algorithm and the Techniques

Our proof is rather simple. Given a solution L, our goal is to identify – in near-linear time – a minimum-cost solution L' such that |L - L'| + |L' - L| ≤ δ for some (constant) parameter δ. If cost(L) - cost(L') = O(ε·cost(OPT)/k), then we can immediately apply the result of Friggstad et al. <cit.> or Cohen-Addad et al. <cit.>: the solution L is locally optimal and its cost is at most (1+O(ε))·cost(OPT). Finding L' has to be done in near-linear time since we could repeat this process up to Θ(k) times until reaching a local optimum. Hence, the crux of the algorithm is to efficiently identify L'; it proceeds as follows (see Algorithm <ref> for a full description):

* Compute a random recursive decomposition of L (see Section <ref>);
* Apply dynamic programming on the recursive decomposition; we show that there exists a near-optimal solution whose interface between different regions has small complexity.

To obtain our recursive decomposition we make a quite non-standard use of the classic quadtree dissection techniques (see Section <ref>). Indeed, the k-means problem is famous for being “resilient” to random quadtree approaches – this is partly why a PTAS for the low-dimensional k-median problem was obtained 20 years ago while the first PTAS for k-means was only found last year.
More precisely, the classic quadtree approach (which works well for k-median) defines portals on the boundary of the regions of the dissection and forces the clients of a given region that are served by a center lying in a different region (in the optimal solution) to make a detour through the closest portal. This is a key property, as it allows one to bound the complexity of the interface between regions. Unfortunately, when dealing with squared distances, making a detour could result in a dramatic cost increase, and it is not clear that this can be compensated by the fact that the event of separating a client from its center happens with small probability (applying the analysis of Arora et al. <cit.> or Kolliopoulos et al. <cit.>). This problem comes from the fact that some facilities of OPT and L might be too close to the boundary of the dissection (and so their clients might have to make too large a detour, relative to their cost in OPT, through the portals). We call these facilities the “moat” facilities (as they fall in a bounded-size “moat” around the boundaries).

We overcome this barrier by defining a more structured near-optimal solution as follows: (1) when a facility of OPT is too close to the boundary of our dissection we simply remove it, and (2) if a facility of the current solution L is too close to a boundary of our dissection we add it to OPT. Of course, this induces two problems: first, we have to bound the cost of removing the facilities of OPT, and second, we have to show that adding the facilities of L does not result in a solution containing more than k centers. This is done through some technical lemmas and using the concept of isolated facilities inherited from Cohen-Addad et al. <cit.>. Section <ref> shows the existence of a near-optimal solution S^* whose set of moat facilities (facilities that are too close to the boundaries) is exactly the set of moat facilities of L (and so we already know their locations, and hence the exact cost of assigning a given client to such a facility).

We now aim at using the result of Friggstad et al. <cit.> and Cohen-Addad et al. <cit.>: if for all sets Δ_1 ⊆ L and Δ_2 ⊆ S^* we have cost(L) - cost(L - Δ_1 ∪ Δ_2) = O(ε·cost(OPT)/k), then cost(L) ≤ (1+O(ε))·cost(S^*). Thus, we provide a dynamic program (in Section <ref>) that finds the best solution S such that (1) its set of moat facilities is exactly the set of moat facilities of L, and (2) |S-L| + |L-S| ≤ δ for some fixed constant δ. The dynamic program simply “guesses” the approximate locations of the centers of (L-S) ∪ (S-L); we show that since each such center is far from the boundary, its location can be approximated.

§.§ Preliminaries

In this article, we consider the k-means problem in d-dimensional Euclidean space: given a set A of points (also referred to as clients) and candidate centers C in ℝ^d, the goal is to output a set S ⊆ C of size k that minimizes ∑_{a ∈ A} dist(a,S)², where dist(a,S) = min_{c ∈ S} dist(a,c). We refer to S as a set of centers or facilities. Our results naturally extend to any objective function of the form ∑_{a ∈ A} dist(a,S)^p for constant p. For ease of exposition, we focus on the k-means problem.
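Since the objective just defined is used repeatedly below, here is a minimal numpy sketch of it (an illustrative addition of ours; A and S are arrays of points of shape (|A|, d) and (|S|, d)):

    import numpy as np

    def kmeans_cost(A, S):
        # sum over clients of the squared distance to the nearest center
        d2 = ((A[:, None, :] - S[None, :, :]) ** 2).sum(axis=-1)  # |A| x |S|
        return d2.min(axis=1).sum()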
As in Friggstad et al. <cit.>, we also consider the more general version called weighted k-means in which, in addition to the sets A and C, we are given a weight function w : C ↦ ℝ_+ and the goal is to minimize ∑_{c ∈ S} w(c) + ∑_{a ∈ A} dist(a,S)². A classic result of Matousek <cit.> shows that, if C = ℝ^d, it is possible to compute in linear time a set C' of linear size (with polynomial dependency on d and ε) such that the optimal solution using the centers in C' costs at most (1+ε) times the cost of the optimal solution using C. Hence, we assume without loss of generality that |C| has size linear in |A|, and we let n = |A| + |C|.

Isolated Facilities. We make use of the notion of isolated facilities introduced by Cohen-Addad et al. <cit.>, defined as follows. Let ε_0 < 1/2 be a positive number and let OPT and L be two solutions for the k-clustering problem with exponent p (in our case, Euclidean k-means, p = 2). Given a facility f_0 ∈ OPT and a facility ℓ ∈ L, we say that the pair (f_0, ℓ) is 1-1 ε_0-isolated if most of the clients served by ℓ in L are served by f_0 in OPT, and most of the clients served by f_0 in OPT are served by ℓ in L; formally, if

|N_L(ℓ) ∩ N_OPT(f_0)| ≥ max{ (1-ε_0)|N_L(ℓ)|, (1-ε_0)|N_OPT(f_0)| },

where N_L(ℓ) and N_OPT(f_0) denote the sets of clients served by ℓ in L and by f_0 in OPT, respectively. When ε_0 is clear from the context we refer to 1-1 ε_0-isolated pairs as 1-1 isolated pairs. Let k̅ denote the number of facilities f of OPT that are not in a 1-1 ε_0-isolated region. There exists a constant c and a set S_0 of facilities of OPT of size at least ε_0³k̅/6 that can be removed from OPT at low cost:

cost(OPT ∖ S_0) ≤ (1 + c·ε_0)·cost(OPT) + c·ε_0·cost(L).

Observe that since ε_0 < 1/2, each facility of OPT belongs to at most one isolated region. Let ι denote the facilities of OPT that are not in an isolated region. In the rest of the paper, we will use these notions with ε_0 = ε³.

§.§ Fast Local Search

This section is dedicated to the description of the fast local search algorithm for the k-means problem (Algorithm <ref>; a minimal sketch of its outer loop is given just below). It relies on a dynamic program called FindImprovement that finds the best improvement of the current solution in time n·poly_{ε,d}(log n). We then show that the total number of iterations of the do-while loop is O(k).

§ DISSECTION PROCEDURE

In this section, we recall the classic definition of quadtree dissection. For simplicity we give the definition for ℝ², but it generalizes directly to any fixed dimension d; see Arora <cit.> and Arora et al. <cit.> for a complete description. Our definition of quadtree is standard and follows the definition of <cit.>; our contribution lies in the structural properties we extract from the dissection, summarized by Lemma <ref>.
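Before turning to the dissection itself, here is the minimal sketch of the outer loop of Algorithm <ref> promised above (an illustrative addition building on kmeans_cost from the Preliminaries sketch). The paper's FindImprovement is the dynamic program of the later sections; as a stand-in with the same interface — but of course without the claimed running time — we use a naive single-swap search:

    def naive_find_improvement(A, C, S_idx):
        # Placeholder for FindImprovement: return the best solution
        # obtained from the set of center indices S_idx by one swap.
        cost = lambda idx: kmeans_cost(A, C[sorted(idx)])
        best, best_cost = S_idx, cost(S_idx)
        for i in S_idx:
            for j in range(len(C)):
                if j in S_idx:
                    continue
                cand = (S_idx - {i}) | {j}
                c = cost(cand)
                if c < best_cost:
                    best, best_cost = cand, c
        return best

    def local_search(A, C, S_idx, eps, opt_estimate):
        # Outer do-while loop: stop once the improvement found drops
        # below eps * OPT / k, which happens after O(k) iterations.
        k = len(S_idx)
        while True:
            S_new = naive_find_improvement(A, C, S_idx)
            gain = kmeans_cost(A, C[sorted(S_idx)]) - kmeans_cost(A, C[sorted(S_new)])
            if gain <= eps * opt_estimate / k:
                return S_idx
            S_idx = S_new

Here opt_estimate stands for a constant-factor estimate of cost(OPT), e.g. obtained from the O(1)-approximation used as a preprocessing step later in the paper.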
It follows that the depth of the tree is log = O(log n).Standard techniques show that such a quadtree can be computedin n·log n ·poly(^-d),see <cit.> for more details. The total number of nodes of the quadtree isn·log n ·poly(^-d).Given two integers a,b ∈ [0,), the (a,b)-shifted dissection consists in shifting the x- and y- coordinates of all the vertical and horizontal lines by a and b respectively. For a shifted dissection, we naturally define the level of a bounding box to be its depth in the quadtree. From this, we define the level of a line to be the level of the square it bounds.For a given square of the decomposition, each boundary ofthe square defines a subline of one of the 2 lines of the grid. It follows that each line at level i consists of 2^i sublines of length /2^i.Given an (a,b)-shifted quadtree dissection of a set of n points,and given a set of points U, we say that a point p of U is an i,γ- point if it is at distance less than γ·/2^i of a line of the dissection that is at level i. We say that a point p of U is a γ- point if there exists an i such that p is a i,γ- point. When γ is clear from the context, we simply call such a point apoint. We have:For any p ∈ U, the probability thatp is a γ- point is at most γlog = O(γlog n).Let i be an integer in [0,…,log]and consider the horizontal lines at level i (an analogousreasonning applies to the vertical lines).By definition, the number of dissection lines that are at distanceat most γ/2^ifrom p is γ/2^i. We now bound the probability that one of them isat level i (and so the probability of p being at distance less than γ/2^i of a horizontal line of length /2^i).For any such line l, we have: Pr_a[lis at level i] = 2^i/. Hence, Pr_a[p is a i,γ-point] ≤∑_l: (l,p) ≤γ/2^iPr_a[lis at level i] ≤γ. The lemma follows by taking a union bound over all i.We now consider an optimal solutionand any solution L. In the following, we will focus on γ- centers of L andfor γ = ^13/log n. In the rest of the paper, γ is fixed to that value and so γ- centers are simply calledcenters.Let ι the facilities ofthat are not 1-1 isolated.We define a weigth function w̃ :L ∪∪↦_+ as follows. For each facility s ∈ιwe define (s) as the sum of w(s) and the cost of serving all the clients served by s inby the closest facility ℓ in L plus w(ℓ). For each facility s ∈ Lwe define (s) = w(s) + ∑_c served by s in L(c,s). Similarly, for each s ∈ - ι, we let (s) = w(s) + ∑_c served by s in (c,s). We show: There exists a constant c_0 such that ∑_s ∈ι(s) ≤c_0 (() + (L)). Consider a facility s ∈ι and the closest facility l in L that (1) serves in L at least one client that is served by s inand that (2) minimizes the following quantity: η = min_c (c, l)^2 + (c, s)^2. Let c^* be a client that minimizes the quantity (c,l)^2+(c,s)^2. Let N(s) be the set of all clients served by s in . We have: |N(s)| η≤∑_c ∈ N(s)(c,L)^2+(c,)^2.We have that the total cost of sending all the clients in N(s) is at most (by triangle inequality): ∑_c ∈ N(s) ((c, s) + (s,l))^2 ≤∑_c ∈ N(s) ((c, s) + (s,c) + (c,l))^2. Note that there exists a constant c_0 such that the above sum is at most c_0 ∑_c ∈ N(s)(c, s)^2 + (s,c^*)^2 + (c^*,l)^2 and so at most c_0 (|N(s)|η + ∑_c ∈ N(s)(c, )^2). The lemma follows by combining with the above bound on |N(s)|η. In the following we denote by ϕ : ι↦ L the mapping from each non-isolated facility ofto its closest facility in L. 
We define ι_L to be the set of non-isolated facilities of L.We define Event (L ∪, w̃) as follows: * The set ofcenters S_1 of L ∪ι is such that (S_1) = ∑_c ∈ S_1(c)≤^9 ∑_c ∈ L ∪ι(c) = ^9 (L ∪ι), and* The set ofcenters S_1 of ι_L ∪ι is such that|S_0| ≤^9 |ι_L ∪ι| = ^9k̅ (recall that k̅ is the number of non 1-1-isolated facilities of(and so of L as well)).The following lemma follows from Lemma <ref>, applying Markov's inequality and taking a union bound over the probability of failures ofproperty (1) and (2).The probability that Event (L ∪, w̃) happens is at least 1/2. We apply Lemma <ref> and obtain that any element of L ∪ is acenter with probability O(ρ^-1log n). Sinceρ^-1 =(c ^-12log n)^-1, we obtain that this probability is at most ^12. Thus, taking linearity of expectation we have that the expected sizeof S_0 is at most ^12 |S ∪ι| and that the expected value of (S_0) is at most ^12(S ∪ι). Applying Markov's inequality toobtain concentration bounds on both quantities and then taking a unionbound over the probabilities of failure yields the lemma.We finally conclude this section with some additional definitions that are used in the following sections. We define a basic region of a decomposition of a to be a region of the dissection that contains exactly 1 points of L. The other squares of the decompositionare simply called regions.§ A STRUCTURED NEAR-OPTIMAL SOLUTIONThis section is dedicated to the following proposition. Let L be any solution. Let _L be a random quadtree dissectionof L as per Sec. <ref>.Suppose Event (L ∪,) happens.Then there exists a constant c and a solution S^* of cost at most (1+c ·) ()+ c·(L) and such that the set ofcenters of S^* is equal to the set ofcenters of L. We prove Propositon <ref> by explicitly constructing S^*. We iteratively modifyin four main steps: * Modifyby replacing f_0 by ℓ_0 forfor each 1-1 isolated pair (f_0,ℓ_0)where ℓ_0 or f_0 is acenter. This yields a near-optimal solution S_0(Lemma <ref>).* Replace ineachcenter s that is in ι by ϕ(s) (as per Section <ref>). This yields a near-optimal solution S_1 (Lemma <ref>).* Apply Theorem <ref> (Theorem III.7 in <cit.>)to obtain a near-optimalsolution S_2 that has at most k -c_2 ^9·k where k is the number of facilities ofthat are not 1-1 isolated (Lemma <ref>).* Add thecenters of L that are non 1-1 isolated to S_2. This yields a near-optimalsolution S_3 that has at most k centers. See Section <ref>for a detailed proof.§.§ 1-1 Isolated Pairs We start fromand for each 1-1 isolated facility (f_0,ℓ_0), f_0 ∈, ℓ_0 ∈ L, where ℓ_0 or f_0 is acenter, we replacef_0 by ℓ_0 in . This results in a solution S_0 whose structural properties are captured by the following lemma. For any f_0 ∈ (resp. ℓ_0 ∈), let (f_0) (resp.(ℓ_0)) bethe set of clients served by f_0 in(resp.the set of clients served by ℓ_0 in L). Assuming Event (L∪,) happens,there exists a constant c_0 such that(S_0)≤ (1+c_0 ·) () + c_0 ·(L).Since Event (L∪,) happens, we have by Lemma <ref>thatthe total opening cost of thecenters plus the total service cost induced by the clients served by the centers isbounded by c_1 ·· ((L) + ()) for some constant c_1. More formally, we can write:∑_(f_0,ℓ_0): 1-1isolated pair and ℓ_0 or f_0 is acenter(w(ℓ_0)+ ∑_a ∈(ℓ_0)(a, ℓ_0)) ≤c_1 ·· ((L) + ()) Thus, we need to bound the cost for the clients that are in (f_0) -(ℓ_0), for each 1-1 isolated pair f_0,ℓ_0 whereℓ_0 or f_0 is acenter.Consider such a pair f_0,ℓ_0. We bound the cost of the clients served by f_0 in by the cost of rerouting them toward ℓ_0. 
We can thus write for each such client c: (c,ℓ_0)^2 ≤ ((c, f_0) + (ℓ_0,f_0))^2. Also, (c,ℓ_0)^2 ≤ (1+)^2(c, f_0)^2 +(1+^-1)^2(ℓ_0,f_0))^2. Note that (c,f_0)^2 is the cost paid by c in . Thus we aim at bounding (ℓ_0,f_0).Applying the triangle inequality, we obtain (ℓ_0,f_0)^2 ≤ ((ℓ_0,c_1) + (c_1,f_0))^2,for any c_1 in (f_0) ∩(ℓ_0). Hence,(ℓ_0,f_0)^2≤1/|(f_0) ∩(ℓ_0)|∑_c_1 ∈(f_0) ∩(ℓ_0)((ℓ_0,c_1) + (c_1,f_0))^2.Now, ∑_c_1 ∈(f_0) ∩(ℓ_0) ((ℓ_0,c_1) + (c_1,f_0))^2≤ 3∑_c_1 ∈(f_0) ∩(ℓ_0)(c_1,ℓ_0)^2 +(c_1,f_0)^2.Combining, we obtain that the total cost for the clients in (f_0) - (ℓ_0) is at most∑_c ∈(f_0) - (ℓ_0)(c,ℓ_0)^2≤ (1+)^2 (∑_c ∈(f_0) - (ℓ_0)(c,f_0)^2 ) + |(f_0) - (ℓ_0)|(1+^-1)^2(ℓ_0,f_0)^2≤ (1+)^2 (∑_c ∈ N(f_0) - N(ℓ_0)(c,f_0)^2 ) + (1+^-1)^2|(f_0) - (ℓ_0)|/|(f_0) ∪(ℓ_0)|· 3∑_c_1 ∈(f_0) ∩(ℓ_0)(c_1,f_0)^2 +(c_1,ℓ_0)^2The lemma follows from applying the definition of 1-1 isolation: |(f_0) - (ℓ_0)|/|(f_0) ∪(ℓ_0)| ≤^3, and summing over all 1-1 isolated pair.§.§ Replacing the Moat Centers of In this section, we consider the solution S_0 and define a solution S_1 whose set ofcenters is a subset of thecenters of L. Namely: There exists a constant c_1 and a solution S_1 such that thecenters of S_1 are a subset of thecenters of L and (S_1) ≤ (1+c_1 ) (S_0) + c_1(L). Note that by Lemma <ref>, all the 1-1 isolated facilities of S_0 that arecenters are also in L. Thus, we focus on thecenters of S_0 that are not 1-1-isolated (and so, by definition, in ι).We replace each center s ∈ι by the center ϕ(s) ∈ L (as per Section <ref>): the bound on the cost follows immediately from the definition ofand by combining Lemma <ref> and Lemma <ref>.§.§ Making Room for Non-Isolated Moat FacilitiesWe now consider the solution S_1 described in the previous section that satisfies the condition of Lemma <ref>. The following lemma is a direct corollary ofTheorem <ref> (from <cit.>).Given S_1 and L, there exists a solution S_2 ⊆ S_1and constants c_2,c_3such that* |S_2| ≤ k - c_2 ·^9 k, where k is the number of facilities thatare not 1-1 isolated in(or in L it is the same number),and* (S_2) ≤ (1+c_3 ·) () +c_3 ··(L). §.§ Adding theNon-Isolated Moat Facilities and Proof of Proposition <ref> We now consider the set of centers S_3 consisting ofthe centers in S_2 and the non-1-1-isolatedcenters ofL. Combining Lemmas <ref>, <ref>, and <ref> yields the bound on the cost of S_3.By definition of S_3 and applying Lemmas <ref> and <ref>, we have that the set of centers of S_3 is exactly the set ofcenters of L. By Lemma <ref> (and because Event (L ∪,) happens), we have that the total number of centers in S_3 is at most k. § PROOF OF THEOREM <REF> We summarize: By Proposition <ref>, we have that there exists a near-optimal solution S^* whose set ofcenters is the set ofcenters of L.By Proposition <ref>, we have that,FindImprovement identifies a solution S'that is δ-close w.r.t. L, whose set ofcenters is the set ofcenters of L, and such that (L)-(S') ≥ (1-) ((L) -(_δ)), where _δ is the minimum costsolution S such that |S - L| + |L - S| ≤δ. We now arguethat:If FindImprovement outputs a solution S_4 such that (L) - (S_4) ≤()/kthen there exists a constant c^* such that(L) ≤ (1+c^* ·)(). Assuming (L) - (S_4) ≤()/k impliesby Proposition <ref> that(L) - (_δ) ≤ 2()/k,since < 1/2. 
Now, consider the solution S^* defined in Section <ref>. By Theorem 1 in <cit.> (see also <cit.> for a slightly better dependency on ε in the unweighted case), if for any pair of sets Δ_1 ⊆ L, Δ_2 ⊆ S_2 with |Δ_2| ≤ |Δ_1| = (d ε^{-1})^{O(d)} we have cost(L) − cost(L − Δ_1 ∪ Δ_2) ≤ cost(OPT)/k, then there exists a constant c_6 such that cost(L) ≤ (1 + c_6·ε) cost(S_2). To obtain such a bound we want to apply Proposition <ref>, and so we need to show that any such solution M = L − Δ_1 ∪ Δ_2 is such that its moat centers are the moat centers of L. This follows immediately from Proposition <ref>: the moat centers of S^* are the moat centers of L. Therefore, we can apply Proposition <ref> and we have cost(L) ≤ (1 + c^*·ε) cost(OPT) for some constant c^*.

We now bound the running time of Algorithm <ref>. The running time of Algorithm <ref> is at most n·k·(log n)^{(d ε^{-1})^{O(d)}}. By Proposition <ref>, we only need to bound the number of iterations of the loop (lines <ref> to <ref>) of Algorithm <ref>. Let cost(S_0) denote the cost of the initial solution. The number of iterations of the loop is log(cost(S_0)/cost(OPT)) / log(1/(1 − 1/k)). Since log(1/(1 − 1/k)) ≥ 1/k, assuming cost(S_0) ≤ O(cost(OPT)) we have that the total number of iterations is at most O(k). To obtain cost(S_0) ≤ O(cost(OPT)), it is possible to use the algorithm of Guha et al. <cit.>, which outputs an O(1)-approximation in time n·k·polylog(n), as a preprocessing step (for the computations at line <ref>), without increasing the overall running time. We conclude by bounding the probability of failure: by Lemma <ref>, the Event happens with probability at least 1/2. Since the random dissection is repeated independently c·log(k) times, the probability of failure of the Event for a given iteration of the while loop is at most 2^{−c·log k} = k^{−c}. Now, by Lemma <ref>, the do-while loop is repeated a total of at most O(k) times, thus the probability of failure is at most O(k^{−c+1}).

§ A DYNAMIC PROGRAM TO FIND THE BEST IMPROVEMENT

For a given solution L, we define a solution L' to be δ-close from L if |L − L'| + |L' − L| ≤ δ. Let OPT_δ denote the best (i.e., minimum-cost) solution that is δ-close from L and whose set of moat centers is the set of moat centers of L. In the following we refer to this solution as the best δ-close solution.

As a preprocessing step, we round the weights of the centers to the closest (1+ε)^i/n, for some integer i. It is easy to see that this only modifies the total value by a factor (1+ε).

For each region R, we define the center of the region c_R to be the center of the square R. For each point p that is outside of R and at distance at least ∂R/log n from R, we define the coordinates of p w.r.t. c_R as follows. Consider the coordinates of the vector c_R p, rounded to the closest (1 + ε/log n)^i ε^{14} ∂R/log n, for some integer i, and let c̃_R p denote the resulting list of coordinates. Let s be the point such that the coordinates of the vector c_R s are equal to c̃_R p. We define the coordinates of p rounded w.r.t. R to be the coordinates of s.

For each region R we also define the grid G_R of R as the d-dimensional grid of size (2 log n/ε^{14}) × ... × (2 log n/ε^{14}) on R. Note that the distance between two consecutive points of the grid is ε^{14} ∂R/(2 log n). For each point p that is inside of R, we define the coordinates of p rounded w.r.t. R to be the coordinates of the closest grid point.

The following fact follows from the definition, recalling that the region sizes are in the interval [1, poly(n)]. For any region R, the number of different coordinates rounded w.r.t. R is at most O((log n/ε^{14})^{2d}). We now describe the dynamic program.
Each entry of the table is defined by the following parameters:

* a region R,
* a list of the rounded coordinates w.r.t. R of the centers of L − OPT_δ and OPT_δ − L,
* a list of the (rounded) weights of the centers of L − OPT_δ and OPT_δ − L,
* a boolean vector of length δ indicating whether the i-th center in the above lists is in L − OPT_δ (value 0) or in OPT_δ − L (value 1).

The following fact follows from the definition and Fact <ref>. The total number of entries that are parameterized by region R is at most (log n/ε^{14})^{O(dδ)}.

We now explain how to fill up the table. We maintain the following constraint when we compute a (possibly partial) solution L': there is no center of (L' − L) ∪ (L − L') that is a moat center. Under this constraint, we proceed as follows, starting with the basic regions, which define the base case of our DP. The base-case regions contain only a single candidate center. Hence, the algorithm proceeds as follows: it fills up the table entries that are parameterized by

* any boolean vector of length δ,
* the rounded coordinates of the unique candidate center inside R and a set of δ−1 rounded coordinates for the centers of (L−OPT_δ) ∪ (OPT_δ−L) outside R, or
* a set of δ rounded coordinates for the centers of (L−OPT_δ) ∪ (OPT_δ−L) outside R.

Additionally, we require that the boolean vector is consistent with the rounded coordinates: the candidate center inside R is already in L if and only if its corresponding boolean entry is 0. The algorithm iterates over all possible rounded coordinates for the at most δ centers of (L−OPT_δ) ∪ (OPT_δ−L) outside R and, for each possibility, it computes the cost. Note that this can be done in time n·δ.

We now consider the general case, which consists in merging table entries of child regions. Fix a table entry parameterized by a region R and the rounded coordinates of the centers of (L−OPT_δ) ∪ (OPT_δ−L). We define which table entries of the child regions are compatible given the rounded coordinates. For a table entry of a child region R_1, with the rounded coordinates of the centers of (L−OPT_δ) ∪ (OPT_δ−L), we require the following for every center c_0 ∈ (L−OPT_δ) ∪ (OPT_δ−L). Denote by c̃_0^1 the coordinates of c_0 rounded w.r.t. c_{R_1}, namely its values in the table entry for R_1, and let c̃_0^R denote its rounded coordinates w.r.t. R, namely its values in the table entry for R. We require:

* If c̃_0^R is outside R, we say that the table entries are compatible for c_0 if the coordinates of the vectors c_R c̃_0^1 and c_R c̃_0^R are all within a (1 ± ε/log n) factor.
* If c_0 is inside R, we say that the table entries are compatible for c_0 if the point of the grid G_R that is the closest to c̃_0^1 is c̃_0^R.
* The entries corresponding to c_0 in the boolean vectors are the same.

The following lemma follows immediately from the above facts and the definition. The running time of the dynamic program is n (log n/ε)^{O(dδ)}.

We now turn to the proof of correctness. For a given region R and a δ-close solution S, we define the table entry of R induced by S to be the table entry parameterized by R and the coordinates of the centers of (L−S) ∪ (S−L) rounded w.r.t. R. Consider the best δ-close solution OPT_δ. For any level i of the quadtree dissection and for any region R at level i, we have that the table entry induced by OPT_δ has cost at most ∑_{c ∈ R} ((1+ε/log n)^i dist(c, OPT_δ))^2. Observe that we consider a δ-close solution that has the same set of moat centers as L. Hence, if a client is served by a moat center in OPT_δ, we know exactly the position of this center (as it is also in L and so cannot be removed).
Thus, for any region R, the set of clients in R is served by either a center in R, or a center at distance at least ε^{13} ∂R/log n from the boundary of R, or a center of L that is a moat center.

We now proceed by induction. We consider the base case: let R be a region at the maximum level, and consider the table entry induced by OPT_δ. We claim that for each client c in R, the cost induced by the solution for this table entry is at most (1+ε/log n) dist(c, OPT_δ). Indeed, since OPT_δ has the same set of moat centers, each client that is served by a center outside of R that is at distance at most ε^{13} ∂R/log n from the boundary is served by a moat center of L, and so there is no approximation in its service cost. Each client that is served by a center of OPT_δ − L is at distance at least ε^{13} ∂R/log n, and so the error induced by the rounding is at most ε·dist(c, OPT_δ)/log n. Finally, the cost of the clients in R served by the unique center of R (if there is one) is exact.

Thus, assume that this holds up to level i−1. Consider a region R at level i and the table entry induced by OPT_δ. The inductive hypothesis implies that for each of the table entries of the child regions that are induced by OPT_δ, the cost of the clients in each subregion is at most (1+ε/log n)^{i−1} ∑_{c ∈ R} dist(c, OPT_δ)^2. By definition, we have that each client of R that is served in OPT_δ by a center that is outside R is at distance at least ε^{13} ∂R/log n, or is served by a moat center of L (and so the distance is known exactly). Thus the rounding error incurred for the cost of the clients of R served by a center outside R is at most (1+ε/log n)^i dist(c, OPT_δ).

We now turn to the rounding error introduced for the centers that are inside R. Let c be such a center. We have that the error introduced is at most the distance between two consecutive grid points, and so at most ε^{14} ∂R/(2 log n). Now, observe that, again because OPT_δ shares the same moat centers as L, each client a of a child region R_1 that suffers some rounding error and that is served by c is at distance at least ε^{13} ∂R_1/log n from c, and so, combining with the inductive hypothesis, the error incurred is at most (1+ε/log n)^i dist(a, c).

Proposition <ref> follows from combining Lemmas <ref> and <ref>. Let ε > 0 be a small enough constant, let L be a solution, let 𝒟 be a decomposition, and let 𝒞 be the set of moat centers of L. The dynamic program FindImprovement outputs a solution S' such that cost(L) − cost(S') ≥ (1−ε)(cost(L) − cost(OPT_δ)), where OPT_δ is a minimum-cost solution that is δ-close from L and whose set of moat centers is the set of moat centers of L. Its running time is n·(log n/ε^{14})^{O(dδ)}.
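To make the overall control flow concrete, the following is a minimal C sketch of the outer local-search loop around FindImprovement. It is only an illustration under the assumptions of this section: the callbacks cost and improve are hypothetical placeholders standing in for the cost evaluation and for the dynamic program above, not part of the formal description.

    #include <stdbool.h>

    typedef double (*cost_fn)(const int *sol, int k);
    typedef bool (*improve_fn)(int *sol, int k, int delta); /* the DP FindImprovement */

    /* Outer local-search driver: keep applying FindImprovement while the gain
     * exceeds a 1/k fraction of the current cost.  Each accepted step multiplies
     * the cost by at most (1 - 1/k), so starting from an O(1)-approximation the
     * loop terminates after O(k) iterations, matching the bound
     * log(cost(S_0)/cost(OPT)) / log(1/(1 - 1/k)). */
    void local_search(int *sol, int k, int delta, cost_fn cost, improve_fn improve)
    {
        double before = cost(sol, k);
        for (;;) {
            if (!improve(sol, k, delta))        /* no delta-close improvement found */
                break;
            double after = cost(sol, k);
            if (before - after <= before / k)   /* gain below the 1/k threshold: stop */
                break;
            before = after;
        }
    }

Note that the analysis states the stopping rule with cost(OPT)/k; since cost(OPT) is unknown, the sketch uses the current cost as a proxy, which only affects the constants in the iteration bound.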
http://arxiv.org/abs/1708.07381v2
{ "authors": [ "Vincent Cohen-Addad" ], "categories": [ "cs.DS", "cs.CG" ], "primary_category": "cs.DS", "published": "20170824130038", "title": "A Fast Approximation Scheme for Low-Dimensional $k$-Means" }
State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China; University of Chinese Academy of Sciences, Beijing 100049, China. [][email protected];[email protected] State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China; Center for Cold Atom Physics, Chinese Academy of Sciences, Wuhan 430071, China; Department of Theoretical Physics, Research School of Physics and Engineering, Australian National University, Canberra ACT 0200, Australia

05.30.Fk, 02.30.Ik, 03.75.Ss

In this letter we present a unified derivation of the pressure equation of state, thermodynamics and scaling functions for the one-dimensional (1D) strongly attractive Fermi gases with SU(w) symmetry. These physical quantities provide a rigorous understanding of a universality class of quantum criticality characterised by the critical exponents z = 2 and correlation length exponent ν = 1/2. Such a universality class of quantum criticality can occur when the Fermi sea of one branch of charge bound states starts to fill or becomes gapped at zero temperature. The quantum critical cone can be determined through the double peaks in the specific heat, which serve to mark two crossover temperatures fanning out from the critical point. Our method opens further study of quantum phases and phase transitions in strongly interacting fermions with large SU(w) and non-SU(w) symmetries in one dimension.

A unified approach to the thermodynamics and quantum scaling functions of one-dimensional strongly attractive SU(w) Fermi Gases
Xi-Wen Guan
December 30, 2023

The experimental realization of 1D quantum gases, such as repulsive Bose gases <cit.>, the Yang-Gaudin model <cit.>, and multicomponent attractive Fermi gases <cit.>, has provided a remarkable test ground for exactly solvable models. The mathematical theory of Bethe ansatz integrable models has become testable in ultracold atoms. The Bethe ansatz has also found success for other significant models like the Kondo physics <cit.>, the BCS pairing model <cit.>, strongly correlated electronic systems <cit.>, spin ladders <cit.> and quantum degenerate gases <cit.>. Recent studies of the 1D Fermi gases with high spin symmetries <cit.> have given many theoretical predictions on the existence of multiparticle bound states, quantum liquids and phase transitions. In this regard, exploring exactly solvable models of interacting fermions with high mathematical symmetries is highly desirable in order to understand new phases of matter. However, the thermodynamic properties of exactly solvable models with high symmetries at finite temperatures are notoriously difficult to extract and present a formidable challenge.

Building on Yang-Yang thermodynamic Bethe ansatz equations, here we present a unified approach to the thermodynamics and quantum critical scalings in 1D strongly attractive Fermi gases with SU(w) symmetry. Analytical results for the equation of state (EOS), dimensionless ratios and scaling functions of thermal and magnetic properties provide a rigorous understanding of a universality class of quantum criticality of free fermions.
The quantum critical region can be determined through the double maxima in the specific heat, which characterize the two crossover temperatures fanning out from the critical point. These results suggest to experimentally test the universal nature of multicomponent quantum liquids through the 1D strongly attractive Fermi gases of ultracold atoms <cit.>.

The model. The 1D SU(w) Fermi gases with δ-function interaction confined to length L are described by the following Hamiltonian <cit.>

H = -ħ^2/2m ∑_{i=1}^N ∂^2/∂x_i^2 + g_{1D} ∑_{1≤i<j≤N} δ(x_i - x_j) - E_z - μN,

with the chemical potential μ and the effective Zeeman energy E_z = ∑_{r=1}^w (1/2) r(w-r) n_r H_r. Here N is the total particle number. There are w possible hyperfine states |1⟩, |2⟩, ..., |w⟩ that the fermions can occupy. Experimentally, g_{1D} = -2ħ^2/(m a_{1D}), with a_{1D} the effective scattering length in 1D <cit.>, can be tuned from a weak interaction to a strong coupling regime via Feshbach resonances. For our convenience, from now on we choose our units such that ħ^2 = 2m = 1, unless we particularly use the units. In this model, the two-body charge bound states involve the Bethe ansatz roots {λ_j ± ic/2}, j = 1...M_2, and the three-body bound states {λ_j ± ic, λ_j}, j = 1...M_3, and so on, where M_2 and M_3 are the numbers of two-body and three-body bound states, respectively. The thermodynamics of the model is determined by the effective external fields H_r, the chemical potential, the interaction between different particles, and spin wave fluctuations.

The TBA equations. The thermodynamics of the Hamiltonian (<ref>) is determined by the following TBA equations <cit.>

ϵ^(r)(k) = rk^2 - rμ - H_r - (1/12) r(r^2-1) c^2 - ∑_{q=1}^w â_{rq} ∗ F[ϵ^(q)] + ∑_{q=1}^∞ a_q ∗ F[η_{r,q}],
η_{r,l}(k) = l·(2H_r - H_{r-1} - H_{r+1}) - a_l ∗ F[ϵ^(r)] - ∑_{q=1}^∞ U_{lq} ∗ F[η_{r,q}] + ∑_{q=1}^∞ S_{lq} ∗ F[η_{r-1,q}] + ∑_{q=1}^∞ S_{lq} ∗ F[η_{r+1,q}],

where we denote

a_n(x) = (1/2π) n|c|/((nc/2)^2 + x^2), â_{lj}(x) = ∑_{q=1, 2q ≠ l+j}^{min(l,j)} a_{l+j-2q}(x), F[ε] ≜ -T ln[1 + exp(-ε/T)].

In the above equations, ∗ denotes the convolution (f ∗ g)(λ) = ∫_{-∞}^∞ f(λ-λ') g(λ') dλ', and the functions U_{lj}(x) and S_{lj}(x) are given in <cit.>. From the dressed energies ϵ^(r)(k) for bound states of r atoms, r = 1, ⋯, w, one can obtain the pressure

p = ∑_{r=1}^w (rT/2π) ∫_{-∞}^∞ dk ln(1 + e^{-ϵ^(r)/T}).

The summation of the pressures of all charge bound states serves as the EOS, from which we can obtain the full thermodynamics of the model at temperatures ranging from zero to high. This form of the EOS gives rise to the additivity nature of quantum liquids at low temperatures <cit.>.

We are interested in the low temperature behaviour of interacting fermions with high symmetries in 1D. We can see from the TBA equations (<ref>) that the ferromagnetic ordering (the second term in (<ref>)) drives the spin contributions η_{r,l}(k) to the dressed energies of charge bound states exponentially small in the strongly attractive regime. Consequently, we can ignore all the string contributions in the above TBA equations when the temperature is much less than the binding energies of the charge bound states, ε_r = (1/48) r(r^2-1) g_{1D}^2. In the recent study <cit.>, it has been proved that the dimensionless Wilson ratios, i.e. either the ratio of the susceptibility χ to the specific heat c_V divided by the temperature,

R_W^s = (4/3)(π k_B/(μ_B g))^2 χ/(c_V/T),

or the ratio of the compressibility κ to the specific heat c_V divided by the temperature,

R_W^c = (k_B^2 π^2/3) κ/(c_V/T),

essentially capture the quasiparticle nature of a Fermi liquid <cit.>, as their values characterize the interaction effect in the Fermi liquid.
Here k_B is Boltzmann's constant, μ_B is the Bohr magneton, and g is the Lande factor. The two types of dimensionless ratios (<ref>) and (<ref>) characterise a competition between the fluctuations of two thermodynamic quantities. Thus a constant Wilson ratio implies that the two types of fluctuations are on an equal footing with respect to the temperature, regardless of the microscopic details of the many-body system <cit.>. In Fig. <ref>, we demonstrate that the compressibility Wilson ratio elegantly maps out the full phase diagram of the SU(2) Fermi gas <cit.> with a strong attraction at T = 0.001 ε_2/k_B, where ε_2 is the binding energy of a bound pair. It turns out that the low temperature SU(2) TBA equations (<ref>) provide rigorous results for quantum liquid behaviour and quantum criticality, prompting us to explore universal thermodynamics and quantum scaling functions for the high symmetry Fermi gases with strong attractions through the SU(w) TBA equations.

Without loss of accuracy in the low temperature thermodynamics, we simplify the TBA equations (<ref>) into the following form

ϵ^(r)(k) = V_r - ∑_{q=1}^w â_{rq} ∗ F[ϵ^(q)], r = 1, 2, ⋯, w,

with V_r = rk^2 - rμ - H_r - (1/12) r(r^2-1) c^2, where the last effective field H_w is set to zero due to the spin-singlet charge bound states. For example, the SU(3) case, i.e. w = 3 in Eq. (<ref>), determines the low temperature properties of 1D three-component strongly interacting fermions <cit.>.

Equation of state. In the strong coupling region, i.e. |c| ≫ E_F (E_F is the Fermi energy), and at low temperatures, we have the following expansion form

-a_n ∗ F[ϵ^(m)](k) ≈ (n|c|/((nc/2)^2 + k^2)) p^(m)/m + (1/2π)(32/(3m√m))(1/(n^3|c|^3)) Γ(5/2) T^{5/2} Li_{5/2}(-e^{A^(m)/T}),

which is obtained by integration by parts. In (<ref>), Li_s(x) is the polylog function. Substituting this relation into the TBA equations (<ref>), we have

ϵ^(r)(k) = V_r + ∑_{m=1}^w ∑_{q=1}^{min(r,m)} (4/|c|)(p^(m)/m)(r+m-2q) - ∑_{m=1}^w ∑_{q=1}^{min(r,m)} (16/|c|^3)(p^(m)/m)(r+m-2q)^3 k^2 + ∑_{m=1}^w Z_{rm} (T^{5/2}/|c|^3) Li_{5/2}(-e^{A^(m)/T}),

where, as in the definition of â_{lj}, terms with 2q = r+m are omitted from the sums. In the above equation, we defined the matrix

Z_{rm} = ∑_{q=1}^{min(r,m)} (4/√π) 1/((r+m-2q)^3 m^{3/2}),

and A^(m) collects the constant terms (the terms independent of k) in the dressed energy ϵ^(m)(k). We would like to mention that the polylog functions involve different modes of generating functions of Fermi integrals. In contrast to the Sommerfeld expansion in powers of the temperature t, the polylog functions capture the thermal and quantum fluctuations required near quantum criticality. Therefore, the polylog functions essentially characterise the singular behaviour of the 1D strongly interacting fermions even near quantum phase transitions. If we consider the first three orders in the pressure, only the constant term and the quadratic term (the O(k^2) terms) contribute to the thermodynamic quantities, and we can safely drop higher order terms in k.
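For later use, it is convenient to record the elementary integral behind the polylog structure below (a standard step, spelled out here for completeness). For a quadratic dressed energy ϵ(k) = Dk^2 - A with D > 0, expanding the logarithm and integrating the Gaussians term by term gives

∫_{-∞}^{∞} dk ln(1 + e^{(A - Dk^2)/T}) = ∑_{m≥1} ((-1)^{m+1}/m) e^{mA/T} ∫_{-∞}^{∞} dk e^{-mDk^2/T} = √(πT/D) ∑_{m≥1} (-1)^{m+1} m^{-3/2} e^{mA/T} = -√(πT/D) Li_{3/2}(-e^{A/T}).

The series converges for A ≤ 0, and the final polylog form extends to A > 0 by analytic continuation. Applied to the dressed energies below together with the pressure formula (<ref>), this identity directly yields the Li_{3/2} form of the EOS.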
Explicitly, we express the dressed energy as

ϵ^(r) = r D_r k^2 - A^(r).

Integrating Eq. (<ref>) by parts, we have

p^(m) = -(√m/(2√(π D_m))) T^{3/2} Li_{3/2}(-e^{A^(m)/T}).

The parameter D_r is a modification of the quadratic term in the dressed energy ϵ^(r)(k) and can be read off from Eq. (<ref>),

D_r = 1 - ∑_{m=1}^w ∑_{q=1}^{min(r,m)} (16/|c|^3)(p^(m)/(rm))(r+m-2q)^3.

Here we only consider the first three orders in the dressed energy equations, thus D_r ≈ 1, and the EOS becomes

p^(r) = -(√r/(2√π)) T^{3/2} Li_{3/2}(-e^{A^(r)/T}),
A^(r) = A_0^(r) - ∑_{m=1}^w D_{rm} p^(m)/|c| - T^{5/2} Z_{rm} Li_{5/2}(-e^{A^(m)/T})/|c|^3,

where A_0^(r) and D_{rm} are determined by Eq. (<ref>),

A_0^(r) = rμ + H_r + (1/12) r(r^2-1) c^2, D_{rm} = ∑_{q=1}^{min(r,m)} 4/(m(r+m-2q)).

In order to simplify the EOS, we define the dimensionless quantities and parameters

p̃^(r) = p^(r)/|c|^3, Ã^(r) = A^(r)/|c|^2, μ̃ = μ/|c|^2, h_r = H_r/|c|^2, t = T/|c|^2.

Then the dimensionless EOS is given by

p̃^(r) = -(√r/(2√π)) t^{3/2} Li_{3/2}(-e^{Ã^(r)/t}),
Ã^(r) = Ã_0^(r) - ∑_{m=1}^w D_{rm} p̃^(m) - Z_{rm} t^{5/2} Li_{5/2}(-e^{Ã^(m)/t}),

where Ã_0^(r) = rμ̃ + h_r + (1/12) r(r^2-1). To simplify our notation, we further define the matrices

(Li_s)_{rm} = Li_s(-e^{Ã^(r)/t}) δ_{rm}, (D)_{rm} = D_{rm}, (Ẑ)_{rm} = Z_{rm}, (Ã)_{r1} = Ã^(r), (p̃)_{r1} = p̃^(r), (M_r)_{rm} = (√r/(2√π)) δ_{rm},
(F_s)_{rm} ≜ t^s Li_s(-e^{Ã^(r)/t}) δ_{rm}, (f_s)_{rm} ≜ t^s Li_s(-e^{Ã_0^(r)/t}) δ_{rm}.

Li_{5/2}, Li_{3/2}, M_r, D and Ẑ are square matrices (or column matrices when they stand rightmost in the related terms); p̃ and Ã are column matrices. For example, in the SU(2) case

D = ( [ 0 2; 4 1 ] ), Z = ( [ 0 √2/√π; 4/√π 1/(4√(2π)) ] ),

and in the SU(3) case

D = ( [ 0 2 2/3; 4 1 16/9; 2 8/3 1 ] ), Z = ( [ 0 √2/√π 1/(6√(3π)); 4/√π 1/(4√(2π)) 112/(81√(3π)); 1/(2√π) 56/(27√(2π)) √3/(16√π) ] ).

With the help of these notations, we write a unified expression of the EOS for the SU(w) strongly attractive Fermi gases,

p̃ = -M_r t^{3/2} Li_{3/2}, Ã = Ã_0 - D p̃ - Ẑ t^{5/2} Li_{5/2}.

The last term in the function Ã is negligible in the pressure. Nevertheless, it is necessary in the calculation of the scaling functions or phase boundaries. After a lengthy iteration, we get a closed form of the EOS,

p̃ = -M_r F_{3/2}, Ã ≈ Ã_0 + D M_r f_{3/2} + D M_r f_{1/2} D M_r f_{3/2}.

Furthermore, we can obtain all the thermodynamic quantities of the system in equilibrium by standard thermodynamic relations via the pressure Eq. (<ref>), which serves as the grand thermodynamic potential of the system. In this context, the partial derivatives of the pressure with respect to any chemical potential, external field, or the temperature are essential in our approach. Thus we take the derivative of the pressure Eq. (<ref>) with respect to the variable η, η = μ̃, h_1, h_2, .... It follows that

∂p̃/∂η = -M_r F_{1/2} ∂Ã/∂η,
∂Ã/∂η = ∂Ã_0/∂η - D ∂p̃/∂η - Ẑ F_{3/2} ∂Ã/∂η.

By solving the above two linear equations, we obtain the first order derivative thermodynamic properties

∂p̃/∂η = -M_r F_{1/2} (I + D M_r F_{1/2} + (D M_r F_{1/2})^2) ∂Ã_0/∂η,
∂Ã/∂η = (I + D M_r F_{1/2} + (D M_r F_{1/2})^2) ∂Ã_0/∂η.

For the second order phase transitions, the second derivatives of the pressure, the compressibility or the susceptibility for instance, give a deep insight into the quantum criticality of the system. Similarly, the second order thermodynamic quantities can be obtained,

∂^2 p̃/(∂η_1 ∂η_2) = [ -M_r F_{1/2}(I + D M_r F_{1/2}) D M_r F_{-1/2} - M_r F_{-1/2} ] (∂Ã/∂η_1 ∂Ã/∂η_2).
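As a quick numerical illustration of this EOS (a minimal sketch in C, not part of the derivation; the function names are ours), the polylog on the branch Ã ≤ 0 can be evaluated from its defining series Li_s(-e^a) = ∑_{k≥1} (-1)^k e^{ka}/k^s, which converges for a ≤ 0; close to a critical point (a → 0) the convergence becomes slow, and the branch Ã > 0 would require an analytic continuation instead.

    #include <math.h>

    /* Truncated series Li_s(-exp(a)) = sum_{k>=1} (-1)^k exp(k*a)/k^s; valid for a <= 0. */
    static double polylog_neg_exp(double s, double a, int kmax)
    {
        double sum = 0.0;
        for (int k = 1; k <= kmax; k++) {
            double term = exp(k * a) / pow((double)k, s);
            sum += (k % 2 == 0) ? term : -term;   /* (-1)^k alternation */
        }
        return sum;
    }

    /* Dimensionless pressure of the r-atom branch:
     * p~(r) = -(sqrt(r)/(2 sqrt(pi))) t^{3/2} Li_{3/2}(-e^{A~(r)/t}). */
    double pressure_branch(int r, double A_tilde, double t)
    {
        const double PI = 3.14159265358979323846;
        return -sqrt((double)r) / (2.0 * sqrt(PI))
               * pow(t, 1.5) * polylog_neg_exp(1.5, A_tilde / t, 400);
    }

Summing pressure_branch over r = 1, ..., w with the iterated Ã^(r) above reproduces the EOS; in practice the coupled equations for Ã would be solved by the short iteration described in the text.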
On the other hand, the derivative of the pressure with respect to t is always a tedious task. After carefully solving the above equations involving the derivatives of the pressure, we obtain the entropy s̃ = ∂p̃/∂t,

∂p̃/∂t = -M_r (3/(2t)) F_{3/2} + M_r F_{1/2} Ã/t - M_r F_{1/2}(I + D M_r F_{1/2}) × (-D M_r F_{1/2} Ã/t + D M_r (3/(2t)) F_{3/2}),
∂Ã/∂t = [ I + D M_r F_{1/2} + (D M_r F_{1/2})^2 ] (-D M_r F_{1/2} Ã/t + D M_r (3/(2t)) F_{3/2} + Ẑ F_{3/2} Ã/t - Ẑ (5/(2t)) F_{5/2}).

This result contains not only the linear-temperature-dependent behaviour of the entropy in the Luttinger liquid (for T ≪ E_F) but also the universal quantum scalings of the entropy in the quantum critical region (for T ≫ E_F) in the vicinity of the critical point. Similarly, the second derivative of the pressure with respect to t is given by

∂^2 p̃/∂t^2 = -M_r( (3/(4t^2)) F_{3/2} + (1/t) F_{1/2} B + F_{-1/2} B^2 + F_{1/2} D M_r( (3/(4t^2)) F_{3/2} + (1/t) F_{1/2} B + F_{-1/2} B^2 ) ),

with B = ∂Ã/∂t - Ã/t. In contrast to previous studies on multicomponent interacting fermions <cit.>, the above closed forms of the thermodynamics are very useful for analyzing the quantum liquid behaviour and critical scalings of 1D interacting fermions with SU(w) symmetry.

Quantum criticality. In the vicinities of the phase boundaries in the phase diagrams of the 1D interacting fermions with SU(w) symmetries <cit.>, a discontinuity emerges in the polylog functions Li_s(x) in the EOS, namely,

lim_{μ/t → 0^+} Li_s(-e^{μ/t}) = -(μ/t)^s/Γ(s+1), lim_{μ/t → 0^-} Li_s(-e^{μ/t}) = 0.

The sign change of Ã^(r) leads to a sudden change of the polylog functions at the critical point of a phase transition; see for example Fig. <ref>, where a sudden enhancement of the Wilson ratio is observed when the driving parameter is tuned across any phase boundary. The condition A^(r) > 0 implies the existence of the bound states of r atoms in the given quantum phase. As a consequence, in the vicinities of the phase boundaries, any thermodynamic quantity can be separated into two parts: the background part and the discontinuous part. The background part involves the states which do not undergo a sudden change, and the calculation of this part is cumbersome <cit.>. The discontinuous part can be obtained by analyzing the first order of divergence in the EOS, i.e. the part involving a sudden change in the density of a certain branch of bound states. For example, at the phase transition from the fully paired state into the FFLO phase, the regular part mainly relates to the thermal fluctuations described by functions of A^(2), whereas the singular part results from a sign change of A^(1), which indicates the crossover of the density of unpaired fermions from zero to non-zero as the Fermi sea of the unpaired fermions starts to fill with particles.

We further calculate the phase transitions driven by the chemical potential or the external fields. Supposing that the quantum phase transition is driven by the sign change of Ã^(r), we observe that the thermodynamic quantities naturally split into background and discontinuous parts as

∂p̃/∂η = -M_r F_{1/2}(I + D M_r F_{1/2} + (D M_r F_{1/2})^2) ∂Ã_0/∂η = ∂p̃/∂η|_0 - M_r F_{1/2} ∂Ã_0/∂η,

where ∂p̃/∂η|_0 denotes the background part, i.e.
∂p̃/∂η|_0 = -M_r F_{1/2}(I + D M_r F_{1/2} + (D M_r F_{1/2})^2) ∂Ã_0/∂η |_{F_s^(r) = 0}.

Then we obtain the explicit scaling form of the first derivative thermodynamic quantities of the SU(w) Fermi gases,

∂p̃/∂η = ∂p̃/∂η|_0^(r) - (√r/(2√π)) (∂Ã_0^(r)/∂η) t^{1/2} Li_{1/2}(-e^{r(μ̃-μ_{cr})/t}),

where μ_{cr} is the critical chemical potential for the phase transition induced by the change of the r-atom bound states. This method can be further applied to the second order thermodynamic quantities, explicitly

∂^2 p̃/(∂η_1 ∂η_2) ≈ ∂^2 p̃/(∂η_1 ∂η_2)|_0^(r) - (√r/(2√π)) (∂Ã_0^(r)/∂η_1)(∂Ã_0^(r)/∂η_2) t^{-1/2} Li_{-1/2}(-e^{r(μ̃-μ_{cr})/t}).

We thus read off the critical exponents from these scaling functions, i.e. the dynamic critical exponent z = 2 and the correlation length exponent ν = 1/2. In particular, the specific heat is given by

c̃_V/t ≈ ∂^2 p̃/∂t^2 ≈ ∂^2 p̃/∂t^2|_0 - M_r( (3/(4t^2)) F_{3/2} - (1/t) F_{1/2} Ã_0/t + F_{-1/2} (Ã_0)^2/t^2 ),

which gives

∂^2 p̃/∂t^2 ≈ ∂^2 p̃/∂t^2|_0^(r) - (√r/(2√(πt))) ℋ(r(μ̃-μ_{cr})/t),

where the function

ℋ(x) = (3/4) Li_{3/2}(-e^x) - x Li_{1/2}(-e^x) + x^2 Li_{-1/2}(-e^x).

Solving the equation dℋ(x)/dx = 0, we get two solutions, y_1 ≈ 0.639844 and y_2 ≈ 0.276201, that determine the two peaks of the specific heat at quantum criticality in the SU(w) Fermi gases, i.e.

t^*_1 = -y_1 r(μ̃-μ_{cr}), t^*_2 = y_2 r(μ̃-μ_{cr}).

The two crossover temperatures fanning out from the critical point indicate the quantum critical region beyond the quantum liquid phases, see Fig. <ref>. Recent studies of quantum criticality in the 1D Heisenberg spin chain <cit.> and the 1D Bose gas <cit.> confirm the existence of such a critical cone when a quantum phase transition occurs.

For clarity and possible experimental use, we present the explicit scaling forms of the thermodynamic properties of the 1D strongly attractive SU(w) Fermi gases. The polarization of the system is m̃ = ∑_{k=1}^w (1/2) k(w-k) ñ_k, where ñ_r = ∂p̃/∂h_r is the density of the r-atom bound states. Consider the phase transition related to the sign change of Ã^(r), i.e. the phase transition that occurs when the Fermi sea of the charge bound states of r atoms starts to fill or becomes gapped at zero temperature. Based on the above unified scaling forms (<ref>)-(<ref>), we can directly present the scaling functions of the physical quantities,

ñ ≈ ñ_0 - (r√r/(2√π)) t^{1/2} ℱ(r(μ̃-μ_{cr})/t),
m̃ ≈ m̃_0 - (1/2)(w-r)(r√r/(2√π)) t^{1/2} ℱ(r(μ̃-μ_{cr})/t),
κ̃ ≈ κ̃_0 - (r^2 √r/(2√π)) t^{-1/2} 𝒢(r(μ̃-μ_{cr})/t),
χ̃ ≈ χ̃_0 - (1/2) r(w-r)(√r/(2√π)) t^{-1/2} 𝒢(r(μ̃-μ_{cr})/t),
c̃_V/t ≈ c̃_{V0}/t - (√r/(2√π)) t^{-1/2} ℋ(r(μ̃-μ_{cr})/t),

with

ℱ(x) = Li_{1/2}(-e^x), 𝒢(x) = Li_{-1/2}(-e^x), ℋ(x) = (3/4) Li_{3/2}(-e^x) - x Li_{1/2}(-e^x) + x^2 Li_{-1/2}(-e^x).

Here the first term in each of the above quantities denotes the background part. Notice that for the SU(2) case we conventionally take the magnetic field H = 2H_1, and the dimensionless forms become p̃^(r) = p^(r)/((1/2)|c|^3), Ã^(r) = A^(r)/((1/2)|c|^2), μ̃ = μ/((1/2)|c|^2), h = H/((1/2)|c|^2), t = T/((1/2)|c|^2). These scaling functions provide exact results on the quantum critical phenomena of the 1D SU(w) Fermi gases; see also the theory of quantum criticality <cit.>.

In summary, we have presented a unified approach to the thermodynamics and quantum scaling functions for the 1D strongly attractive Fermi gases with SU(w) symmetry. In particular, we have obtained the two crossover temperature lines fanning out from the critical point that confirm the existence of the critical cone at quantum criticality. Meanwhile, the quantum liquids can be probed through the dimensionless ratios, revealing the important free fermion nature of 1D interacting fermions.
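As a final worked illustration of these scaling forms, consider the density near a transition driven by the sign change of Ã^(r). Writing x = r(μ̃-μ_{cr})/t and using the polylog limits quoted above, Li_{1/2}(-e^x) ≈ -e^x for x → -∞ and Li_{1/2}(-e^x) → -x^{1/2}/Γ(3/2) = -(2/√π) x^{1/2} for x → +∞, the singular part of the density interpolates between two branches,

ñ - ñ_0 ≈ (r√r/(2√π)) t^{1/2} e^{x} for x → -∞, ñ - ñ_0 ≈ (r√r/π) √(r(μ̃-μ_{cr})) for x → +∞,

i.e. the density is exponentially activated below the transition and grows as a temperature-independent square root above it, which is the hallmark of the free-fermion universality class with z = 2 and ν = 1/2.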
Our results pave the way to experimentally study quantum criticality of the fermionic alkaline-earth atoms that display an exact SU(w) spin symmetry with w = 2I+1 <cit.>. Here I is the nuclear spin. The study of critical phenomena and quantum correlations in ultracold atoms with high symmetries has become a new frontier in atomic physics.

Acknowledgments. The authors thank Yu-Peng Wang, Yu-Zhu Jiang, Feng He, Song Cheng and Peng He for helpful discussions. This work is supported by the NSFC under grant number 11374331 and the key NSFC grant No. 11534014. XWG has been partially supported by the Australian Research Council.

[Exp-B1] Kinoshita T, Wenger T and Weiss D S 2004 Science 305 1125
[Exp-B2] Paredes B, Widera A, Murg V, Mandel O, Fölling S, Cirac I, Shlyapnikov G V, Hänsch T W and Bloch I 2004 Nature (London) 429 277
[Exp-B3] Kinoshita T, Wenger T and Weiss D S 2006 Nature 440 900
[Exp-B4] Van Amerongen A H, Van Es J J P, Wicke P, Kheruntsyan K V and van Druten N J 2008 Phys. Rev. Lett. 100 090402
[Exp-B5] Haller E, Gustavsson M, Mark M J, Danzl J G, Hart R, Pupillo G and Nägerl H-C 2009 Science 325 1224
[Exp-B6] Yang B, Chen Y Y, Zheng Y G, Sun H, Dai H N, Guan X W, Yuan Z S and Pan J W 2016 arXiv:1611.00426 [cond-mat.quant-gas]
[Liao:2010] Liao Y A, Rittner A S C, Paprotta T, Li W H, Partridge G B, Hulet R G, Baur S K and Mueller E J 2010 Nature 467 567
[Wenz:2013] Wenz A N, Zürn G, Murmann S, Brouzos I, Lompe T and Jochim S 2013 Science 342 457
[Pag14] Pagano G, Mancini M, Cappellini G, Lombardi P, Schäfer F, Hu H, Liu X J, Catani J, Sias C, Inguscio M and Fallani L 2014 Nat. Phys. 10 198-201
[Andrei:1983] Andrei N, Furuya K and Lowenstein J H 1983 Rev. Mod. Phys. 55 331
[Dukelsky:2004] Dukelsky J, Pittel S and Sierra G 2004 Rev. Mod. Phys. 76 643
[Ess05] Essler F H L, Frahm H, Göhmann F, Klümper A and Korepin E 2005 The One-Dimensional Hubbard Model (Cambridge: Cambridge University Press)
[Takahashi] Takahashi M 1999 Thermodynamics of One-Dimensional Solvable Models (Cambridge: Cambridge University Press)
[WangYP:2015] Wang Y P, Yang W L, Cao J P and Shi K 2015 Off-Diagonal Bethe Ansatz for Exactly Solvable Models (Berlin, Heidelberg: Springer-Verlag)
[Wang99] Wang Y P 1999 Phys. Rev. B 60 9236
[Batchelor:2007] Batchelor M T, Guan X W, Oelkers N and Tsuboi Z 2007 Adv. Phys. 56 465
[Cazalilla] Cazalilla M A, Citro R, Giamarchi T, Orignac E and Rigol M 2011 Rev. Mod. Phys. 83 1405
[GuaBL13] Guan X W, Batchelor M T and Lee C H 2013 Fermi gases in one dimension: From Bethe ansatz to experiments Rev. Mod. Phys. 85 1633
[Yu:2016] Yu Y C, Chen Y Y, Lin H Q, Roemer R A and Guan X W 2016 Phys. Rev. B 94 195129
[WuHZ03] Wu C, Hu J P and Zhang S C 2003 Phys. Rev. Lett. 91 186402
[CazR14] Cazalilla M A and Rey A M 2014 Rep. Prog. Phys. 77 124401
[Schlottmann:1993] Schlottmann P 1993 J. Phys.: Condens. Matter 5 5869
[LeeGBY11] Lee J Y, Guan X W and Batchelor M T 2011 J. Phys. A: Math. Theor. 44 165002
[Jiang:2016] Jiang Y Z, He P and Guan X W 2016 J. Phys. A 49 174005
[He:2010] He P, Yin X G, Guan X W, Batchelor M T and Wang Y P 2010 Phys. Rev. A 82 053633
[Oelkers:2006] Oelkers N, Batchelor M T, Bortz M and Guan X W 2006 J. Phys. A: Math. Gen. 39 1073
[Yang] Yang C N 1967 Phys. Rev. Lett. 19 1312
[Gaudin] Gaudin M 1967 Phys. Lett. A 24 55
[Sut68] Sutherland B 1968 Phys. Rev. Lett. 20 98
[Tak70] Takahashi M 1970 Prog. Theor. Phys. 44 348-358
[Ols98] Olshanii M 1998 Phys. Rev. Lett. 81 938
[Sommerfeld] Sommerfeld A 1928 Z. Phys. 47 1
[Wilson1975] Wilson K G 1975 Rev. Mod. Phys. 47 773
[Wang98] Wang Y P 1998 Int. J. Mod. Phys. B 12 3465
[Guan-PRL] Guan X W, Yin X G, Foerster A, Batchelor M T, Lee C H and Lin H Q 2013 Phys. Rev.
Lett. 111 130401
[Guan:2008] Guan X W, Batchelor M T, Lee C and Zhou H Q 2008 Phys. Rev. Lett. 100 200401
[Foerster:2012] Kuhn C C N and Foerster A 2012 New J. Phys. 14 013008
[Guan-Ho] Guan X W and Ho T L 2011 Phys. Rev. A 84 023616
[He:2017] He F, Jiang Y Z, Yu Y C, Lin H Q and Guan X W 2017 arXiv:1702.05903 [cond-mat.stat-mech]
[QC-Book] Sachdev S 1999 Quantum Phase Transitions (Cambridge: Cambridge University Press)
[Giamarchi:2004] Giamarchi T 2004 Quantum Physics in One Dimension (Oxford: Oxford University Press)
[Gor2010] Gorshkov A V, Hermele M, Gurarie V, Xu C, Julienne P S, Ye J, Zoller P, Demler E, Lukin M D and Rey A M 2010 Nat. Phys. 6 289
[Cazalilla:2009] Cazalilla M A, Ho A F and Ueda M 2009 New J. Phys. 11 103033
http://arxiv.org/abs/1708.07939v1
{ "authors": [ "Yi-Cong Yu", "Xiwen Guan" ], "categories": [ "cond-mat.quant-gas" ], "primary_category": "cond-mat.quant-gas", "published": "20170826062447", "title": "A unified approach to the thermodynamics and quantum scaling functions of one-dimensional strongly attractive $SU(w)$ Fermi Gases" }
http://arxiv.org/abs/1708.07641v2
{ "authors": [ "Amalia Betancur", "Dipsikha Debnath", "James S. Gainer", "Konstantin T. Matchev", "Prasanth Shyamsundar" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170825075915", "title": "Measuring the mass, width, and couplings of semi-invisible resonances with the Matrix Element Method" }
§ INTRODUCTION

A doubly synchronous binary asteroid is subject to the BYORP effect, which can either increase or decrease its angular momentum <cit.>. Simultaneously, both components of the system experience the YORP effect <cit.>. Under some conditions a stable equilibrium between these two torques is possible, and it is the subject of this article.

The interaction between the YORP and BYORP torques is non-trivial. While BYORP inputs momentum into the orbital motion of the two components <cit.>, YORP alters the rotation rates of the two components separately. The orbital period and the two rotational periods of the components get slightly out of synchronization, and the components turn with respect to each other, until the further offset is stopped by gravity torques, which originate when the binary system gets lopsided. These gravity torques help to distribute the torques inside the binary system. If the exterior torques are small enough, the system remains tidally locked. When studying the dynamics of such a system, we can just add up the BYORP and the two YORP torques. Although the torques act on different bodies, they are re-distributed in such a way that the system remains synchronized and rotates as a whole.

The idea of an equilibrium between YORP and BYORP is illustrated in Figure <ref>. The binary system is composed of two Rubincam propellers with wedges of slightly different height. Then wedges A and B create a YORP torque on the primary, wedges E and F create a YORP torque on the secondary, and the superstructures C and D create BYORP. The BYORP torque is negative and its magnitude linearly increases with the distance between the asteroids. The YORP torque is positive and independent of the distance between the asteroids. If the superstructures C and D are small enough, the total torque is positive at a close distance between the asteroids. So the angular momentum of the system increases, and the two components move further apart. The distance between the superstructures C and D increases, and the larger lever arm causes a larger BYORP torque. It is negative, and when the distance between the asteroids is large enough, BYORP ultimately compensates YORP, the further outward motion is stopped, and the system arrives at an equilibrium.

This basic idea is treated in the paper more rigorously. In Section <ref> we present the mathematical formalism, which describes the entangled YORP-BYORP evolution. In Section <ref> we apply the formalism to a set of photometric and radar shape models of asteroids, and in Section <ref> we discuss the implications of such equilibria for the evolution of asteroids.

Throughout this paper we neglect the tangential YORP <cit.> and mutual shadowing of the components, which could alter the torque acting on the system. We assume the obliquity of the system to be fixed, although both YORP and BYORP have components which alter obliquity. Thus our statements about the stability of the orbit are valid only if the obliquity of the orbit is stable. We know that in many cases orbits with obliquities 0^∘, 90^∘ or 180^∘ are stable, thus we will concentrate on these cases in Section <ref>. The orbit of the binary system is assumed circular, and the evolution of eccentricity is disregarded.
We know that in many cases zero eccentricity is stable with respect to BYORP <cit.>, and in some cases when it is not, the eccentricity is damped by tides <cit.>.

§ THEORY

The axial component of the YORP torque is expressed by the equation

T_z = Φ/c ∮_S (dS × r)_z p_z.

Here Φ = Φ_0 A_0^2/(√(1-e^2) A^2) is the time-averaged solar energy flux at the asteroid's position, with Φ_0 being the solar constant at the distance A_0 = 1 AU, A is the semimajor axis of the asteroid's heliocentric orbit, e is its eccentricity, c the speed of light, and p_z a function depending on the latitude on the asteroid and the asteroid's obliquity <cit.>. The integral is to be taken over the surfaces of both asteroids comprising the binary. This equation neglects shadowing and multiple scattering of light from the primary to the secondary, and vice versa.

We direct the x axis from the primary A to the secondary B, the z axis towards the angular momentum of the system, and place the origin of the coordinate system at the center of mass. Then the radius-vectors of points on the two components are

r_A = -μr + r'_A = (-μ r + x_A, y_A, z_A),
r_B = (1-μ)r + r'_B = ((1-μ) r + x_B, y_B, z_B).

Here μ = M_B/(M_A + M_B) is the mass fraction of the secondary, r_A and r_B are the radius-vectors of points on the two components with respect to the center of mass of the system, r'_A and r'_B are the radius-vectors with respect to the centers of mass of the two components, and r is the vector joining the centers of mass of A and B.

Then we substitute Eqn. (<ref>) into Eqn. (<ref>) and find

T_z = Φ/c (R_A^3 C_A + R_B^3 C_B - a R_T μ R_A^2 B_A + a R_T (1-μ) R_B^2 B_B).

Here R_A and R_B are the volume-equivalent radii of the primary and the secondary, and R_T = R_A + R_B is the total radius. The density of the primary and the secondary is assumed the same, so that R_A = (1-μ)^{1/3} R_T/((1-μ)^{1/3} + μ^{1/3}) and R_B = μ^{1/3} R_T/((1-μ)^{1/3} + μ^{1/3}). a = r/R_T is the distance between the components expressed in terms of the total radius. The dimensionless YORP and BYORP coefficients are determined as

C_A = ∮_{S_A} (dS_A^2/R_A^2 × r_A/R_A)_z p_z,
C_B = ∮_{S_B} (dS_B^2/R_B^2 × r_B/R_B)_z p_z,
B_A = ∮_{S_A} (dS_A^2/R_A^2 × e_x)_z p_z,
B_B = ∮_{S_B} (dS_B^2/R_B^2 × e_x)_z p_z.

Here e_x is the basis vector in the x direction. The former two coefficients are the same as the dimensionless YORP τ_z used by <cit.>, while the latter two are similar to the BYORP coefficients used by <cit.>. All four coefficients are independent of the size of an asteroid and depend only on its shape. In general, the coefficients are large only for asymmetric asteroids.

It is convenient to introduce the dimensionless total YORP and BYORP torques, determined as

C = ((1-μ) C_A + μ C_B) / ((1-μ)^{1/3} + μ^{1/3})^3,
B = (μ^{2/3}(1-μ)^{2/3} / ((1-μ)^{1/3} + μ^{1/3})^2) (-μ^{1/3} B_A + (1-μ)^{1/3} B_B).

Then the total torque experienced by the asteroid can be expressed in the simpler form

T_z = Φ/c R_T^3 (C + aB).

From Eqn. (<ref>) we see that the dependence between T_z and a is linear. Depending on the signs of the coefficients C and B, one of the four cases presented in Figure <ref> will occur. Assuming that the system rotates in the positive direction, in the upper left case the system will decay, in the lower left case it will merge, in the upper right case it will reach a stable equilibrium, while in the lower right case the equilibrium is unstable and the system will either merge or decay depending on the initial conditions. Indeed, since dT_z/da = B, an equilibrium with C > 0 and B < 0 is restored after small displacements, whereas one with C < 0 and B > 0 is not. The upper right case in Figure <ref> is somewhat similar to the stable equilibrium described by <cit.>, although that equilibrium is attained not between BYORP and YORP, but between BYORP and tides.
In fact, the Jacobson & Scheeres equilibrium is stable only from the point of view of the secondary, while the primary's rotation state continues evolving due to tidal effects, causing its rotation rate to slow. In contrast, the equilibrium between YORP and BYORP described here completely stops the evolution of the rotational state.

When the equilibrium is present, it can be determined from Eqn. (<ref>) by equating T_z to 0. Thus for the radius of the equilibrium orbit we get

a_0 = -C/B = |C/B|,

which is positive because C and B have opposite signs at an equilibrium.

§ APPLICATIONS

To understand which YORP and BYORP coefficients are realistic, we consider a set of photometric and radar shape models, and compute their YORP and BYORP coefficients using Eq. (<ref>). The set of photometric shape models contains 1593 shapes of 910 different asteroids from the DAMIT database <cit.>. (Some asteroids have more than one shape model, and we treat the different models independently.) The set of radar shapes includes 26 shape models from <cit.>. We plot the resulting coefficients C and B for obliquities ϵ = 0 and ϵ = 90^∘ in Figure <ref>.

Both C and B have nearly equal probabilities of being positive and negative, but for Figure <ref> we take their absolute values. The sign of C is altered if the asteroid spins in the opposite direction, and the sign of B is altered by a 180^∘ flip of the asteroid around its rotation axis. So we assume the different signs of C and B to be equally probable, effectively treating each shape model as four different shape models, with ±C and ±B.

From Figure <ref> we see that the values C_1 = 0.001 and C_2 = 0.01, B_1 = 0.01 and B_2 = 0.1 are typical, so we use them to make the following set of plots. We fix C_A, C_B, B_A, and B_B at some values among ±C_1, ±C_2, ±B_1, and ±B_2. We allow different size ratios, thus treating μ as a parameter. In Figure <ref> we plot the equilibrium radius a for each size ratio μ. Different panels of the figure correspond to different signs of the YORP and BYORP coefficients.

The shadowed area marks the distances a which not only formally turn Eqn. (<ref>) to zero, but are also physically feasible. The smallest possible distance is limited by the two components touching each other,

a_min = 1.

The biggest possible distance is determined by the Hill limit,

a_max = √(ρ/(3ρ_⊙)) A_min/R_⊙.

Here ρ_⊙ and R_⊙ are the density and the radius of the Sun, ρ is the density of the asteroid, and A_min is the closest distance between the asteroid and the Sun. Close to the Hill limit the dynamics of the binary system get much more complicated. Moreover, the exact value of the Hill limit slightly depends on the mass fraction μ. Still, we neglect these fine details in our estimates. The upper boundary of the shadowed area in Figure <ref> corresponds to A_min = 1 AU and ρ = 2.5 g cm^-3.

We see that in many cases a lies in the shadowed area. These systems represent possible stably rotating doubly synchronous binaries.

To estimate the probability of a doubly synchronous binary ending up in such a stable equilibrium, for each value of μ evaluated we consider all possible asteroid shape pairs and orientations, computing the total number of these that could reside in an equilibrium state. In Figure <ref> such probabilities are plotted versus the mass fraction μ for two different obliquities ϵ, two different perihelion distances A_min, and two different sets of shape models. We see that the results for both sets of models, both obliquities, and both perihelion distances are similar.
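For concreteness, the per-pair test behind Figures <ref> and <ref> can be summarized by the following minimal C sketch (our own illustration; the structure and function names are hypothetical). It evaluates C and B from Eqs. (<ref>)-(<ref>) and accepts a pair if the equilibrium radius a_0 = -C/B falls between the touching limit and the Hill limit:

    #include <math.h>
    #include <stdbool.h>

    typedef struct { double a0; bool feasible; } equil_t;

    /* Equilibrium test for one shape pair (C_A, C_B, B_A, B_B) and mass fraction mu. */
    equil_t equilibrium_radius(double CA, double CB, double BA, double BB,
                               double mu, double a_max)
    {
        double u = cbrt(1.0 - mu);                 /* (1-mu)^{1/3} */
        double v = cbrt(mu);                       /*     mu^{1/3} */
        double C = ((1.0 - mu) * CA + mu * CB) / pow(u + v, 3.0);
        double B = (u * u) * (v * v) / pow(u + v, 2.0) * (-v * BA + u * BB);
        equil_t e = { 0.0, false };
        if (C > 0.0 && B < 0.0) {                  /* stable case: dT_z/da = B < 0 */
            e.a0 = -C / B;                         /* T_z(a0) = 0                  */
            e.feasible = (e.a0 >= 1.0 && e.a0 <= a_max);
        }
        return e;
    }

Counting, for each μ, the fraction of accepted shape, sign, and orientation combinations reproduces the probabilities discussed next (the mirrored case C < 0, B > 0 for a retrograde rotation is covered by the sign symmetry discussed above).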
The probability p of a binary ending up in a stable equilibrium is about 0.04 at μ ≈ 0.5, and is over 0.1 for μ < 0.1 or μ > 0.9. Binaries with very small or very large mass fractions are unlikely to become doubly synchronous, but even the probability of 0.04 is significant enough to pay attention to, as it could modify the persistence of a special class of doubly synchronous binaries in the asteroid population <cit.>.

§ DISCUSSION

The common understanding of the evolution of small asteroids is deduced from YORP cycles. In this picture, each asteroid is accelerated by YORP to its disruption limit, forms a binary, and the binary decays due to BYORP, tidal interaction, or chaotic dynamics. Then the asteroid enters the next YORP cycle, loses mass once again, and so on. It has been argued by <cit.> that asteroids can stop their dynamical evolution because of a stable equilibrium between the tangential and normal YORP effects. Here we find another possible equilibrium, this time at the binary stage of the YORP cycle.

If each asteroid with some probability decays into a binary with such a shape and such a mass ratio that it appears in equilibrium, then after sufficiently many YORP cycles such an equilibrium will necessarily be attained. Then the YORP evolution of the asteroid will be stopped for a very long time, until a collision or a close encounter disrupts the otherwise stable system. This implies that an asteroid can spend more time in stable equilibria than in YORP cycles, which should drastically slow down the YORP evolution.

[radar] Asteroid Radar Research, 2016. http://echo.jpl.nasa.gov/asteroids/shapes/shapes.html
[cuk05] Ćuk M., Burns J. A., 2005. Effects of thermal radiation on the dynamics of binary NEAs. Icarus 176, 418-431
[damit] Durech J., Sidorin V., Kaasalainen M., 2010. DAMIT: a database of asteroid models. A&A 513, A46
[golubov16] Golubov O., Kravets Y., Scheeres D. J., Krugly Yu. N., 2016. Physical models for the normal YORP and diurnal Yarkovsky effects. MNRAS 458(4), 3977
[golubov12] Golubov O., Krugly Yu. N., 2012. Tangential Component of the YORP Effect. ApJL 752, L11
[jacobson16] Jacobson S. A., Marzari F., Rossi A., Scheeres D. J., 2016. Matching asteroid population characteristics with a model constructed from the YORP-induced rotational fission hypothesis. Icarus 277, 381
[jacobson11] Jacobson S. A., Scheeres D. J., 2011. Long-term Stable Equilibria for Synchronous Binary Asteroids. ApJL 736, L19
[mcmahon10] McMahon J., Scheeres D. J., 2010. Detailed prediction for the BYORP effect on binary near-Earth asteroid (66391) 1999 KW4 and implications for the binary population. Icarus 209, 494
[margot15] Margot J.-L., Pravec P., Taylor P., Carry B., Jacobson S., 2015. Asteroid Systems: Binaries, Triples, and Pairs. In Asteroids IV (P. Michel et al., eds.), pp. 355-374. Univ. of Arizona, Tucson
[rubincam00] Rubincam D. P., 2000. Radiative spin-up and spin-down of small asteroids. Icarus 148, 2
[steinberg11] Steinberg E., Sari R., 2011. Binary YORP Effect and Evolution of Binary Asteroids. Astronomical Journal 141, 55
http://arxiv.org/abs/1708.07925v1
{ "authors": [ "Oleksiy Golubov", "Daniel J. Scheeres" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170826032319", "title": "Equilibrium rotation states of doubly synchronous binary asteroids" }
Multi-task Self-Supervised Visual Learning
Carl Doersch^† Andrew Zisserman^†,*
^†DeepMind ^*VGG, Department of Engineering Science, University of Oxford
December 30, 2023

The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using the separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, considering parallel processing on multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of points at which data are exchanged. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform, and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluated on multi-core CPUs, it consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.

§ INTRODUCTION

The two-dimensional discrete wavelet transform (DWT) is a very versatile image-processing instrument. It is employed in several image-compression standards (e.g., JPEG 2000). As a consequence, many works deal with its fast implementation on all sorts of computer systems, including parallel architectures. As might be expected, many developers have adapted this transform to massively-parallel architectures, especially GPUs. However, all of these adaptations are based on the most popular separable schemes, namely the convolution and lifting schemes. The separable convolution scheme can be computed in just two calculation steps, however using a large number of arithmetic operations. In contrast, the separable lifting scheme exhibits the smallest number of operations but the largest number of steps. It is natural to expect that the number of operations should be proportional to the transform performance. This is especially true on single-core CPUs. However, the number of steps also forms a bottleneck, mainly on multi-core processors.

In this paper, we show that the optimal scheme for multi-core CPUs lies apart from the separable convolution and lifting schemes. To the best of our knowledge, this problem has not been addressed in the literature yet. The newly introduced scheme does not retain the separable property, as its operations cannot be associated with a horizontal or vertical direction. In order to evaluate the proposed scheme, we performed several experiments on high-end server CPUs. The evaluation is performed with the CDF 5/3 wavelet, employed, e.g., in the JPEG 2000 standard. However, the presented schemes are general and they are not limited to any particular wavelet.

The rest of this paper is organized as follows. Section 2 discusses related work and introduces the mathematical notation used in the rest of the paper. Section 3 presents the proposed non-separable scheme and its adaptation to a particular platform. Section 4 evaluates the discussed schemes on multi-core CPUs.
Finally, Section 5 summarizes and closes the paper.

§ BACKGROUND AND RELATED WORK

This section introduces some notations and definitions to be used in the paper, and then reviews conventional methods for the computation of the 2-D transform.

The widely-used z-transform is employed for the description of wavelet filters. Such filters are represented by polynomials in z, like G(z). Since this paper is focused on the 2-D transform, it is necessary to extend this notation into two dimensions. So, two-dimensional filters look like G(z_m, z_n), where the subscript m refers to the horizontal axis and n to the vertical axis. The G^* indicates a polynomial transposed to the original G.

The DWT splits the input signal into two components, according to the parity of its samples. The components are often referred to as L and H. The transform can be computed by a pair of quadrature mirror filters, referred to as G, followed by subsampling by a factor of 2. Formally, this can be represented by the polyphase matrix

[ G_1^(o) G_1^(e); G_0^(o) G_0^(e) ],

where the operators (e) and (o) denote the even and odd terms of G. This equation defines the one-dimensional convolution scheme. Further, Sweldens showed <cit.> how the convolution scheme can be decomposed into a sequence of simple steps. These filters are referred to as the lifting steps, and the scheme as the lifting scheme. The following paragraph discusses the lifting scheme in detail.

The initial polyphase matrix (<ref>) is factored into several pairs of lifting steps. In each pair, the first step is called the predict step and the second one the update step. Formally, this can be represented by the product of polyphase matrices

∏_k [ 1 U^(k); 0 1 ] [ 1 0; P^(k) 1 ],

where 2K is the number of the lifting steps, and P^(k) and U^(k) represent the kth predict and update filters. For simplicity, the superscript (k) is omitted in the following text.

On multi-core CPUs, the processing of a single signal sample or of several adjacent samples is mapped onto independent cores. Due to the data exchange, the cores must use some synchronization method to avoid race conditions. In the lifting scheme, these synchronizations can be required before the lifting steps. In this paper, the synchronizations are indicated by the | symbol placed before a polyphase matrix. For example, M_2 | M_1 refers to a sequence of two lifting steps separated by some synchronization method.

Usually, the 2-D transform <cit.> is defined as the tensor product of 1-D transforms. Unlike the 1-D case, the 2-D transform splits the input signal into a quadruple of wavelet coefficients (LL, HL, LH, and HH). To describe the 2-D polyphase matrices, the predict and update operators must be extended into two dimensions. Considering the separable lifting scheme, the predict and update lifting steps are applied in both directions sequentially. It should be noted that the horizontal and vertical steps can be arbitrarily interleaved. The 2-D lifting then follows from the sequence

[ 1 0 U^* 0; 0 1 0 U^*; 0 0 1 0; 0 0 0 1 ] | [ 1 U 0 0; 0 1 0 0; 0 0 1 U; 0 0 0 1 ] | [ 1 0 0 0; 0 1 0 0; P^* 0 1 0; 0 P^* 0 1 ] | [ 1 0 0 0; P 1 0 0; 0 0 1 0; 0 0 P 1 ] | .

Note the synchronization | before each of the matrices. As the sequence can be hard to imagine, the individual matrices are illustrated in Figure <ref> for the CDF 5/3 wavelet <cit.>. For multiple lifting pairs, the scheme is applied to each such pair separately. Recall that the separable lifting scheme has the smallest possible number of arithmetic operations and the highest number of steps. Another scheme used for the 2-D transform is the separable convolution.
In this case, all calculations in a single direction are performed in a single step. The drawback of this is the highest number of operations. The scheme can formally be described as

𝐍^V | 𝐍^H | ,

where 𝐍^H is a product of all steps in the horizontal direction and 𝐍^V in the vertical one. The convolution is followed by the subsampling.

So far, several studies have compared the performance of the separable lifting and convolution schemes on parallel architectures. For example, the authors of <cit.> compared these schemes on GPUs. Although the results of their comparison are ambiguous, they concluded that the separable convolution is more efficient than its separable lifting counterpart in most cases. They also claimed that fusing several consecutive steps might significantly speed up the execution, even if the complexity of the resulting fused step is higher. In this regard, the authors failed to consider the possibility of a partial fusion, where the number of steps is reduced but remains greater than one. Other notable works can be found in <cit.>.

This work is based on our previous work in <cit.>. In those papers, we introduced several non-separable schemes for the calculation of the 2-D DWT suitable for graphics cards (GPUs). We also presented a trick leading to a reduction of arithmetic operations. The trick is also exploited in this paper. Here, we extend the previously presented schemes to the multi-core CPU platform. This is the point investigated in the following section.

§ PROPOSED SCHEME

This section presents non-separable schemes suitable for multi-core CPUs. The contribution of the paper starts with this section.

The above-described approaches did not exploit the possibility of a fusion of the polyphase matrices. Having this in mind, all horizontal and vertical calculations of the corresponding pair of matrices can be performed in a single step. The drawback of this approach is a higher number of operations and memory accesses. Since CPUs are very sensitive to the total number of arithmetic operations, it is appropriate to apply the fusion to the lifting scheme. In this way, the non-separable lifting scheme is formed. The scheme halves the number of steps of its separable counterpart. On the other hand, the number of operations is increased. The scheme consists of a spatial predict and a spatial update step. Interestingly, the predict step is completely responsible for the HH coefficient, whereas the update step for the LL one. Formally, the scheme is defined by

[ 1 U U^* UU^*; 0 1 0 U^*; 0 0 1 U; 0 0 0 1 ] | [ 1 0 0 0; P 1 0 0; P^* 0 1 0; PP^* P^* P 1 ] | .

The PP^* and UU^* are the spatial filters (tensor products of the 1-D filters). For the CDF 5/3 wavelet, the scheme is illustrated in Figure <ref>. For multiple lifting pairs, the scheme is separately applied to each such pair.

As mentioned above, an optimization approach can adapt the schemes to a particular platform. The number of operations or memory accesses can be reduced, while the number of computational steps remains unaffected. Regardless of the underlying platform, an important observation can be made. A special form of the operations guarantees that the CPU cores never access the results belonging to other cores. These operations comprise constants (monomials with zero exponents). As the convolution is a linear operation, these polynomials can be detached from the original operations and calculated using the separable scheme (due to its lowest number of operations). Such schemes are referred to as adapted schemes.
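To illustrate how the synchronization points | translate into code, the following is a minimal OpenMP sketch of a single CDF 5/3 predict/update pair on a 1-D signal (our own simplified illustration, not the evaluated implementation; border extension is omitted):

    #include <omp.h>

    /* One predict/update pair of the CDF 5/3 lifting on a signal of even length n.
     * The implicit barrier at the end of each "omp for" realizes the "|" step. */
    void cdf53_lifting_pair(float *x, int n)
    {
        #pragma omp parallel
        {
            /* predict: odd samples become high-pass coefficients, P = -1/2 */
            #pragma omp for
            for (int i = 1; i < n - 1; i += 2)
                x[i] -= 0.5f * (x[i - 1] + x[i + 1]);
            /* synchronization "|": implicit barrier of the loop above */
            /* update: even samples become low-pass coefficients, U = +1/4 */
            #pragma omp for
            for (int i = 2; i < n - 1; i += 2)
                x[i] += 0.25f * (x[i - 1] + x[i + 1]);
        }
    }

The separable 2-D lifting performs four such synchronized steps per pair, whereas the non-separable scheme above needs only the spatial predict and the spatial update; the adapted variant further splits off the constant parts P_0 and U_0, as formalized next.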
As mentioned above, an optimization approach can adapt the schemes to a particular platform. The number of operations or memory accesses can be reduced, while the number of computational steps remains unaffected. Regardless of the underlying platform, an important observation can be made. A special form of the operations guarantees that the CPU cores never access results belonging to other cores. These operations comprise constants (monomials with zero exponents). As the convolution is a linear operation, these polynomials can be detached from the original operations and calculated using the separable scheme (which has the lowest number of operations). Such schemes are referred to as adapted schemes. Formally, the original polynomials are split as P = P_0 + P_1 and U = U_0 + U_1, where P_0 and U_0 are the desired constants. The P_1 and U_1 parts are kept in the original non-separable scheme. For a better understanding, the adapted non-separable scheme is illustrated in Figure <ref>.

§ EVALUATION

Since the above-listed properties do not provide sufficient information on performance in real environments, the performance on real multi-core CPUs is compared in this section.

In order to evaluate the considered schemes, high-performance server CPUs were used, along with code written in the C language using the OpenMP interface. The evaluation was performed primarily on Intel Xeon and Intel Xeon Phi server processors. Their technical parameters are summarized in Table <ref>. In the following paragraphs, several experiments on these CPUs are presented.

In the first experiment, shown in Figure <ref>, the optimal number of threads was examined. The measurements were conducted with the separable and non-separable schemes and the CDF 5/3 wavelet. The transform performance was measured on tiles of size 1024×1024, comprising single-precision floating-point values. The presented results are medians of 100 measurements. The time is given in nanoseconds per pixel (ns/pel). It is clear from the figure that the curves roughly follow a 1/x function, where x is the number of threads. Therefore, the measurements show that the optimal number of threads roughly corresponds to the maximum number of threads available. Note the phenomenon that occurs when the number of threads exceeds the number of CPU cores (i.e., 8 for the Xeon, 61 for the Xeon Phi).

In the second experiment, in Figure <ref>, the optimal transform tile size was examined. The number of threads found optimal in the previous experiment was used. For the Xeon CPU, the optimal power-of-two tile size was found to be 1024×1024; for the Xeon Phi, 2048×2048. Note that the tile size does not necessarily have to be a power of two, but this is a suitable choice, for example, for compatibility with JPEG 2000.

In the last experiment, in Figure <ref>, we were interested in real performance. The x-axis shows the size of the image edge. The input and output images were supplied from the main memory. Note that, beyond a certain point, the image sizes exceed the CPU cache size. The experiment confirms that the non-separable scheme consistently outperforms the original separable lifting scheme. For example, for an 8192×8192 image, the speedup factor is about 10% on the Xeon and 25% on the Xeon Phi processor.

In summary, we can conclude that the reduction in transform steps can improve performance, at least on some platforms. All the source code used in this article, together with all the results, is available in a repository <cit.> on the website of the authors' affiliation.

§ SUMMARY

This paper introduces and discusses the non-separable lifting scheme for computation of the two-dimensional discrete wavelet transform on multi-core CPUs. We found that the non-separable scheme outperforms its separable counterpart in most cases. We can confirm that fusing consecutive steps of the original lifting scheme might speed up the execution, irrespective of its higher complexity in terms of arithmetic operations. The presented scheme is general, and it can be used in conjunction with any wavelet transform. For future work, we plan to extend our approach to other wavelets and possibly other non-separable schemes.
The implementation can also be further improved using appropriate SIMD extensions. Finally, we plan to investigate other multi-core platforms, such as multi-core ARM processors.

This work has been supported by the Ministry of Education, Youth and Sports of the Czech Republic from the National Programme of Sustainability (NPU II) project IT4Innovations excellence in science (LQ1602), and the Technology Agency of the Czech Republic (TA CR) Competence Centres project V3C – Visual Computing Competence Center (no. TE01020415).
Maximum A Posteriori Estimation of Distances Between Deep Features in Still-to-Video Face Recognition

Andrey V. Savchenko
National Research University Higher School of Economics
Laboratory of Algorithms and Technologies for Network Analysis,
36 Rodionova St., Nizhny Novgorod, Russia
[email protected]

Natalya S. Belova
National Research University Higher School of Economics
20 Myasnitskaya St., Moscow, Russia
[email protected]

The paper deals with still-to-video face recognition under the small sample size problem, based on the computation of distances between high-dimensional deep bottleneck features. We present a novel statistical recognition method, in which the still-to-video recognition task is cast as Maximum A Posteriori estimation. In this method we maximize the joint probability density of the distances to all reference still images. It is shown that this likelihood can be estimated using the known asymptotically normal distribution of the Kullback-Leibler discriminations between nonnegative features. An experimental study with the LFW (Labeled Faces in the Wild), YTF (YouTube Faces) and IJB-A (IARPA Janus Benchmark A) datasets is provided. We demonstrate that the proposed approach can be applied with state-of-the-art deep features and dissimilarity measures. Our algorithm achieves 3-5% higher accuracy when compared with conventional aggregation of the decisions obtained for all frames.

§ ABBREVIATIONS

* CNN - Convolutional Neural Network
* i.i.d. - independent identically distributed
* IJB-A - IARPA Janus Benchmark A
* KL - Kullback-Leibler divergence
* LFW - Labeled Faces in the Wild
* MAP - Maximum A Posteriori
* ML - Maximum Likelihood
* NN - Nearest Neighbor
* YTF - YouTube Faces

§ INTRODUCTION

The usage of deep learning technologies <cit.> instead of traditional computer vision methods <cit.> has recently made it possible to achieve near human-level performance in various face recognition tasks, such as verification <cit.> or identification <cit.>. Moreover, contemporary techniques can deal even with the well-known crucial issues that appear in practical applications of face recognition, e.g., the unconstrained environment (varying illumination and pose, partial occlusion) <cit.>, or the small sample size problem <cit.>, when usually only a single facial image per person is available <cit.>. The latter problem is solved using transfer learning methods <cit.>, in which large external datasets of celebrities are used to train a deep convolutional neural network (CNN) <cit.>. This CNN is further applied to extract features of the training images from the limited sample of subjects of interest, using the outputs of a bottleneck layer <cit.>. Such a transfer learning technique makes it possible to create a classifier that ideally performs nearly as well as if a rich dataset of photos of these individuals were present <cit.>. Though algorithms that perform recognition from a single test photo have their own practical applications, e.g., the search for a person in a social network, one of the most significant use cases is provided by surveillance and authentication systems.
Hence, increasing attention is nowadays being paid to video face recognition <cit.>, in which a set or a sequence of observed images of the same individual is available for decision-making <cit.>. Despite the large number of recent papers devoted to video face recognition <cit.>, this task still remains very challenging. For example, most of the known algorithms suffer from a heavy off-line training load <cit.>, and they were experimentally studied only with databases in which the still images were gathered under a controlled environment. In this paper we propose a novel still-to-video face recognition method suitable for applications with a small sample of still photos per person, which is based on a probabilistic interpretation <cit.> without learning temporal coherence <cit.>. At first, the nearest neighbor (NN) reference images are examined for each video frame. Next, the computed distances to all still images are used to weight the recognition results for each frame, based on an estimate of their reliability. The higher the likelihood of the computed vector of distances for a particular individual, the higher the weight corresponding to this frame and this subject. The likelihoods (joint probability densities) are computed using the idea of the maximum-likelihood approximate NN algorithm <cit.>, by assuming that the Kullback-Leibler (KL) divergence <cit.> is used as the dissimilarity between features. However, we demonstrate that our approach can be successfully applied with much more widely used dissimilarity measures.

The rest of the paper is organized as follows. Section <ref> contains the literature survey of related papers. In Section <ref> we present a simple statistical formulation of the still-to-video recognition task <cit.> using the KL minimum discrimination principle. In Section <ref> we propose the novel approach, in which the maximum a posteriori (MAP) rule is regularized using the computation of the joint probability densities of distances based on the asymptotic properties of the KL divergence. In Section <ref> we present the complete algorithm, in which the initial assumptions are relaxed and the computational complexity is taken into account. Section <ref> presents the experimental results on recognition of videos from either the IJB-A (IARPA Janus Benchmark A) <cit.> or the YouTube Faces (YTF) datasets <cit.>, with the still images from the Labeled Faces in the Wild (LFW) dataset <cit.>, using such deep CNNs as VGGNet <cit.> and the Lightened CNN <cit.>. Finally, concluding comments are given in Section <ref>.

§ RELATED WORKS

Compared to still-to-still recognition, the requirement of real-time processing in video face recognition prevents the implementation of overly sophisticated recognition algorithms <cit.>, especially when the number of individuals is rather large <cit.>. However, more information about the individuals can be exploited from several video frames <cit.>, and the recognition results can generally be improved when using multiple images or video sequences <cit.>. Most of the papers devoted to person identification or retrieval in surveillance videos <cit.> address the video-to-video face recognition problem, in which query videos are matched against a set of reference video sets <cit.>. However, real-world applications usually involve the still-to-video face recognition scenario, in which only very few still images per person are enrolled into the training set while the input sequence of video frames is captured <cit.>.
This problem is very difficult, because the testing videos generally have low quality and are captured under poor illumination and pose <cit.>. There are two types of still-to-video recognition methods <cit.>. The first group of methods exploits the temporal information in a video sequence. Liu and Chen used hidden Markov models in the recognition of video frames <cit.>. The paper <cit.> describes a successful application of a state space model <cit.> parametrized by a tracking state vector and a recognizing identity variable, simultaneously characterizing the kinematics and identity of humans. All such methods require large datasets of video clips to train the extraction of dynamic information. Moreover, their performance relies on strong temporal coherence in the training and testing environments <cit.>. Hence, another group of methods, based on the accumulation of multiple observations of the same individual, has recently been studied. These methods are usually based on the fusion of evidence from multiple measurements <cit.>. For example, the authors of <cit.> extended the probabilistic appearance-based face recognition approach to work with multiple images and video sequences. Shakhnarovich et al. <cit.> assumed that the multiple frames for each human subject follow a normal distribution, and they used the KL divergence <cit.> to measure the distance between distributions.

Several important works on still-to-video face recognition adopted a metric learning framework. For example, Zhu et al. <cit.> extended the well-known Mahalanobis metric learning to point-to-set and set-to-set distance learning. Huang et al. <cit.> applied a Euclidean-to-Riemannian metric learning framework to image-to-set object classification and still-to-video face recognition tasks. Nowadays, it is more typical to map each face image into a feature vector using a deep CNN. A naive approach would then be to represent a video face as a set of frame-level face features from the CNN <cit.>. Such a representation obviously maintains all the information across all frames. To speed up the recognition process, the features of all video frames are aggregated into a single vector (video face representation) using such pooling strategies as average and max pooling <cit.>. The Eigen-PEP representation <cit.> integrates the visual information from all relevant video frames using part-based average pooling through the probabilistic elastic part model. The intermediate representation is then compressed through principal component analysis, and only a number of principal eigen dimensions are kept. Yang et al. <cit.> proposed learning the frame weights in an aggregation module, which consists of two attention blocks driven by a memory storing all the extracted features. Canziani and Culurciello proposed the CortexNet neural architecture <cit.> in order to obtain a robust and stable representation of temporal visual inputs. In the paper <cit.>, two video sequences representing head motion and facial expressions are compared using a new positive definite kernel based on the concept of principal angles between two linear subspaces. Most of the described methods learn the relationship between the still images and video frames but do not directly handle bad-quality frames, which are very likely to make the recognition perform badly. To deal with this problem, another kind of method, such as <cit.>, was proposed, which first selects the best-quality frames, performs facial alignment, and then integrates the recognition results of the selected frames <cit.>.
Thus, one can notice from this brief survey that methods which do not learn temporal coherence are more widely used nowadays. Hence, in this paper we decided to focus on the application of statistical recognition of individual frames <cit.> described by modern deep CNN features <cit.>.

§ STATISTICAL STILL-TO-VIDEO FACE RECOGNITION

The task of closed-set still-to-video face identification is to assign an observed sequence of T video frames to one of C>1 classes (identities). The classes are specified by the training set of R ≥ C still images. We consider the supervised learning case, in which the class label (subject id) c(r) ∈ {1,...,C} of the rth photo is known. For simplicity, we assume that only one image per person is available (C=R, c(r)=r), and that all frames contain the facial region of only one identity, i.e., the faces are detected, and the whole video clip is clustered so as to extract sequential frames containing the same subject. At first, each image is described with a feature vector using a preliminarily trained deep CNN, as described in the introduction. The outputs at the last layer of this CNN for the tth frame and the rth reference image are used as the D-dimensional feature vectors 𝐱(t)=[x_1(t),...,x_D(t)] and 𝐱_r=[x_{r;1},...,x_{r;D}], respectively.

Let us assume that the normalized feature vector 𝐱(t) is an estimate of the multinomial distribution of a (hypothetical) random variable X(t), t ∈ {1,...,T}. We also assume that each rth instance represents the probability distribution of a random variable X_r, r ∈ {1,...,R}. The video face recognition task is then reduced to a problem of statistical testing of the simple hypotheses W_r, r ∈ {1,...,R}, for all frames. The statistically optimal decision is defined as follows:

max_{r ∈ {1,...,R}} P(W_r | 𝐱(1),...,𝐱(T)),   (1)

where the posterior probability is estimated using the Bayes rule:

P(W_r | 𝐱(1),...,𝐱(T)) = f(𝐱(1),...,𝐱(T) | W_r) / ∑_{i=1}^R f(𝐱(1),...,𝐱(T) | W_i).   (2)

In this paper we focus on the case of full prior uncertainty; hence the prior probabilities of observing each subject are equal, so rule (1) represents the maximum likelihood (ML) criterion. Finally, it is necessary to aggregate the frame recognition results <cit.>. Kittler et al. <cit.> presented a statistical interpretation of a number of common methods for cross-modal fusion, such as the product, sum, maximum, and majority vote rules, which are also appropriate for late integration over a set of observations from a single modality. For example, it can be shown that, under the assumption that the frames in the video sequence correspond to independent identically distributed (i.i.d.) random variables, the likelihood is estimated using the product rule <cit.>

f(𝐱(1),...,𝐱(T) | W_r) = ∏_{t=1}^T ∏_{d=1}^D (x_{r;d})^{n · x_d(t)},   (3)

where n is the sample size used to estimate the distribution 𝐱(t). In practice, this parameter can be chosen proportional to the total number of pixels in the input image. By substituting (3) into (2) and dividing its numerator and denominator by ∏_{t=1}^T ∏_{d=1}^D (x_d(t))^{n · x_d(t)}, one obtains the following estimate of the posterior probability:

P(W_r | 𝐱(1),...,𝐱(T)) = exp(-n ∑_{t=1}^T ∑_{d=1}^D x_d(t) ln(x_d(t)/x_{r;d})) / ∑_{i=1}^R exp(-n ∑_{t=1}^T ∑_{d=1}^D x_d(t) ln(x_d(t)/x_{i;d})),   (4)

or

P(W_r | 𝐱(1),...,𝐱(T)) = exp(-n ∑_{t=1}^T I(𝐱(t):𝐱_r)) / ∑_{i=1}^R exp(-n ∑_{t=1}^T I(𝐱(t):𝐱_i)),   (5)

where I(𝐱(t):𝐱_r) is the KL divergence between the feature vectors 𝐱(t) and 𝐱_r. Hence, the ML criterion (1) can be written in the simplified form

min_{r ∈ {1,...,R}} ∑_{t=1}^T I(𝐱(t):𝐱_r).   (6)

Thus, under the assumption of i.i.d. frames, the ML decision (1) for the still-to-video face recognition task <cit.> is equivalent to the KL minimum information discrimination principle <cit.>.
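For illustration, the criteria (5)-(6) can be sketched in a few lines of R (hypothetical helper names, not the authors' code; X is a T×D matrix of L1-normalized frame features and refs an R×D matrix of L1-normalized reference features, so that rows can be treated as multinomial estimates):

# Sketch of the ML rule (6) and posterior estimate (5).
kl_div <- function(p, q) sum(p * log(p / q))          # I(p : q)

ml_decision <- function(X, refs, n = 100) {
  total_kl <- apply(refs, 1, function(xr)             # sum_t I(x(t) : x_r)
    sum(apply(X, 1, kl_div, q = xr)))
  w <- exp(-n * (total_kl - min(total_kl)))           # eq. (5), stabilized
  list(label = which.min(total_kl),                   # eq. (6)
       posterior = w / sum(w))
}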
§ MAXIMUM A POSTERIORI ESTIMATION OF DISTANCES BETWEEN DEEP BOTTLENECK FEATURES

In this paper we use the idea of the maximum-likelihood approximate NN method from the paper <cit.>. Namely, we exploit the known property of the KL divergence between two densities, which can be considered as the information for discrimination in favor of the first density against the second one <cit.>. Hence, another recognition criterion is used, in which the maximum posterior probability based on the joint distribution of the R-dimensional random vectors of distances 𝐈(t)=[I(𝐱(t):𝐱_1),..., I(𝐱(t):𝐱_R)] is sought:

max_{r ∈ {1,...,R}} f(W_r | 𝐈(1),...,𝐈(T)).   (7)

To deal with this conditional density of distances, let us first consider the much simpler marginal conditional distribution f(𝐈(t)|W_r), t ∈ {1,...,T}. Using the natural assumption of independence between the still photos of all identities in the training set, we estimate the joint density of the distances between the reference images and the tth frame, f(𝐈(t)|W_r), as follows:

f(𝐈(t)|W_r) = f(I(𝐱(t):𝐱_r)|W_r) · ∏_{i=1, i≠r}^R f(I(𝐱(t):𝐱_i)|W_r).   (8)

At first, let us consider each term in the second multiplier. We propose to estimate the conditional density function f(I(𝐱(t):𝐱_i)|W_r) of the distance between the observed object from the rth class and the ith instance using the known asymptotic properties of the KL divergence. It is known <cit.> that, if i ≠ r, then 2n times the KL divergence I(𝐱(t):𝐱_i) is asymptotically distributed as a non-central chi-squared with D-1 degrees of freedom and non-centrality parameter 2n · I(𝐱_r:𝐱_i). This asymptotic distribution is also known to hold for such probabilistic dissimilarities as the chi-squared distance, the Jensen-Shannon divergence, etc. <cit.>. As the number of features D is usually large, it is possible to approximate the non-central chi-squared distribution with a Gaussian distribution <cit.>. Hence, we use the asymptotically normal distribution of the distance between the facial image from the rth class and the ith reference photo:

𝒩( I(𝐱_r:𝐱_i) + (D-1)/n,  (4n·I(𝐱_r:𝐱_i) + (D-1)) / (2n²) ).   (9)

Thus, the conditional distributions in the second multiplier in (8) are estimated as follows:

f(I(𝐱(t):𝐱_i)|W_r) = exp( -n · ϕ_{r;i}(𝐱(t)) ),   (10)

where ϕ_{r;i}(𝐱(t)) denotes

ϕ_{r;i}(𝐱(t)) = (1/(2n)) ln( π(4I(𝐱_r:𝐱_i) + (D-1)/n)/n ) + (I(𝐱(t):𝐱_i) - I(𝐱_r:𝐱_i) - (D-1)/n)² / (4I(𝐱_r:𝐱_i) + (D-1)/n).   (11)

On the one hand, the number of outputs (features) of the last layer in modern DNNs is very high (D ≫ 1) <cit.>. On the other hand, the feature vector dimensionality D should be much less than the number of samples n in order to estimate the conditional densities (3) reliably (D ≪ n). Hence, it is possible to approximately rewrite eq. (11) as <cit.>

ϕ_{r;i}(𝐱(t)) ≈ (I(𝐱(t):𝐱_i) - I(𝐱_r:𝐱_i))² / (4I(𝐱_r:𝐱_i)).   (12)

The first multiplier in (8) cannot be computed similarly, because if i=r, then I(𝐱_r:𝐱_i)=0, and the asymptotic distribution (9) does not hold in practice. However, in such a case this conditional density can be estimated using eq. (3). Similarly to (4), (5), one can show that

f(I(𝐱(t):𝐱_r)|W_r) ∝ exp(-n I(𝐱(t):𝐱_r)).   (13)

By combining expressions (10), (12), (13), the marginal distribution (8) can be written using the softmax operation as follows:

f(𝐈(t)|W_r) = exp(-n · (I(𝐱(t):𝐱_r) + ∑_{i=1}^R ϕ_{r;i}(𝐱(t)))) / ∑_{j=1}^R exp(-n · (I(𝐱(t):𝐱_j) + ∑_{i=1}^R ϕ_{j;i}(𝐱(t)))).   (14)
After that, it is possible to apply known rules <cit.> for combining the statistical classifiers (14). For example, under the i.i.d.-frames assumption, criterion (7) is equivalent to the rather simple video face recognition rule

min_{r ∈ {1,...,R}} ∑_{t=1}^T ( I(𝐱(t):𝐱_r) + ∑_{i=1}^R ϕ_{r;i}(𝐱(t)) ),   (15)

which can be viewed as an extension of the conventional statistical classification (6) with the new regularization term ∑_{i=1}^R ϕ_{r;i}(𝐱(t)). However, Kittler et al. <cit.> argued that the sum rule is much more robust to errors in unknown density estimation. In our task this rule can be formulated in the MAP form:

max_{r ∈ {1,...,R}} ∑_{t=1}^T f(W_r | 𝐈(t)),   (16)

where the posterior probability f(W_r | 𝐈(t)) is defined using the Bayes rule (2) with the proposed ML estimate (14) of the distance distribution.

§ PROPOSED ALGORITHM

Though the previous sections contain many unrealistic assumptions about the independence of sequential frames and the representation of feature vectors as estimates of distributions of hypothetical random variables, the resulting criteria (15), (16) look rather straightforward. In this section we demonstrate how this approach can be implemented in more realistic scenarios. First, it is necessary to emphasize that the criteria discussed in the previous section involve only computations of the KL divergence between the features of all frames and every instance in the database (12). Thus, it is possible to replace the KL divergence I(𝐱(t):𝐱_r) in (12), (13) with an arbitrary dissimilarity measure ρ(𝐱(t),𝐱_r) between the deep bottleneck features 𝐱(t) and 𝐱_r. For instance, the KL minimum information discrimination criterion (6) is equivalent to the NN rule with simple accumulation <cit.>:

min_{r ∈ {1,...,R}} ∑_{t=1}^T ρ(𝐱(t),𝐱_r).   (17)

The sum rule <cit.> implements the MAP estimate of the class, which corresponds to the maximal average posterior probability <cit.>:

max_{c ∈ {1,...,C}} ∑_{t=1}^T exp(-n·ρ_c(𝐱(t))) / ∑_{i=1}^C exp(-n·ρ_i(𝐱(t))).   (18)

It is important to emphasize that our conclusion about the asymptotic distribution (10) of the KL divergence is in agreement with the well-known assumption of a Gaussian distribution of dissimilarity measures between high-dimensional feature vectors, which is supported by many experiments <cit.>.

Secondly, though the small sample size problem usually appears in practical applications of video face recognition, it is not typical to have only one instance per class in the available training set. Hence, we propose to extend criterion (15) to the case of C ≤ R:

max_{c ∈ {1,...,C}} ∑_{t=1}^T exp(-n · (ρ_c(𝐱(t)) + (λ/C) ∑_{i=1}^C (ρ_i(𝐱(t)) - ρ_{c;i})²/ρ_{c;i})) / ∑_{j=1}^C exp(-n · (ρ_j(𝐱(t)) + (λ/C) ∑_{i=1}^C (ρ_i(𝐱(t)) - ρ_{j;i})²/ρ_{j;i})).   (19)

Here we added a smoothing parameter λ>0 to tune the influence of the regularization. The distance between the feature vector 𝐱(t) and the cth identity in this criterion can be defined using the idea of single-linkage clustering:

ρ_c(𝐱(t)) = min_{r ∈ {1,...,R}, c(r)=c} ρ(𝐱(t),𝐱_r).   (20)

The distance between images from the cth and ith identities can be computed as the average distance between the instances of these classes:

ρ_{c;i} = (1/(R_c R_i)) ∑_{r=1}^R ∑_{r_i=1}^R δ(c-c(r)) δ(i-c(r_i)) ρ(𝐱_r,𝐱_{r_i}),   (21)

where δ(c) is the discrete delta function (indicator) and R_c = ∑_{r=1}^R δ(c-c(r)) is the total number of still photos of the cth subject. Finally, it is necessary to note that the run-time complexity of the proposed criterion (19), O(T(R+C²)), is more than C times higher than the complexity of the baseline ML rule (17). Hence, our approach cannot be applied directly in practical tasks where the number of classes is rather large <cit.>.
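A compact R sketch may clarify the computation in (19) (again with hypothetical helpers, not the authors' implementation: rho is the T×C matrix of distances ρ_c(𝐱(t)) from (20), and rho_ref the C×C matrix of distances ρ_{c;i} from (21), assumed strictly positive):

# Sketch of the regularized MAP criterion (19).
map_regularized <- function(rho, rho_ref, n = 100, lambda = 7) {
  C <- ncol(rho)
  scores <- sapply(1:C, function(cl) {                 # exponent per class
    Rf  <- matrix(rho_ref[cl, ], nrow(rho), C, byrow = TRUE)
    reg <- rowSums((rho - Rf)^2 / Rf) / C              # regularization term
    -n * (rho[, cl] + lambda * reg)
  })
  post <- exp(scores - apply(scores, 1, max))          # softmax per frame
  post <- post / rowSums(post)
  which.max(colSums(post))                             # sum rule over frames
}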
To speed up the decision process, we propose to modify criterion (19) by examining only M ≪ C candidate classes. Namely, the sums of distances between all frames and each class are computed identically to the conventional approach (17):

ρ_c(𝐱(1),...,𝐱(T)) = (1/T) ∑_{t=1}^T ρ_c(𝐱(t)),   (22)

and the M candidate classes {c_1,...,c_M} with the lowest distances (22) are chosen. This Mth smallest element search (selection algorithm) is known to have linear average complexity, O(C). In the final decision only these M candidates are checked, so criterion (19) is modified as follows:

max_{c ∈ {c_1,...,c_M}} ∑_{t=1}^T exp(-n · (ρ_c(𝐱(t)) + (λ/C) ∑_{i=1}^C (ρ_i(𝐱(t)) - ρ_{c;i})²/ρ_{c;i})) / ∑_{m=1}^M exp(-n · (ρ_{c_m}(𝐱(t)) + (λ/C) ∑_{i=1}^C (ρ_i(𝐱(t)) - ρ_{c_m;i})²/ρ_{c_m;i})).   (23)

Our complete procedure is presented in Algorithm <ref>. Its runtime complexity is equal to O(T(R+MC)). Here the decision is made only after all T frames have been observed. However, it is not difficult to implement online recognition in our approach by looking for the class label c^* right after the next video frame with the same subject becomes available.

§ EXPERIMENTAL RESULTS

In most experimental studies of still-to-video face recognition methods, it is usually assumed that the still photos are of high quality and resolution, in frontal view, with normal lighting and neutral expression <cit.>. However, in this paper we decided to consider the most challenging experimental setup, in which the training set contains photos gathered in an uncontrolled environment <cit.>. Namely, we chose all photos and videos of the C=1589 classes in the intersection of people from the well-known LFW and YTF datasets. It is important to mention that, though the YTF dataset contains videos of celebrities from the LFW, the quality of the images in these datasets is completely different. The training set is filled with all R=4732 photos of these C=1589 identities taken from the LFW dataset <cit.>. Our testing set contains 3353 video clips from the YTF <cit.> (on average, 183 frames per person). In our experiments we discovered that it is not necessary to process all frames in each video; hence, only every fifth frame of each video is included in the testing set, giving T=36 frames per video on average.

The Caffe framework is applied to extract deep bottleneck features using two publicly available CNN models for unconstrained face recognition, namely, the well-known VGG-13 network (VGGNet) <cit.> and the Lightened CNN <cit.>. The fc8 layer of the VGGNet extracts D=4096 non-negative features from a 224×224 RGB image. The Lightened CNN (version C) extracts D=256 (possibly negative) features from a 128×128 grayscale image. The outputs of these CNNs were L_2-normalized to form the final feature vectors, which are matched using the Euclidean distance. The features of the VGGNet are positive; hence we also perform L_1 normalization to treat them as probability distributions and match them using the KL divergence.

The proposed Algorithm <ref> is implemented in a stand-alone C++ application with the Qt 5 framework. In the first experiments we examined the dependence of the top-1 accuracy on the most important parameters of our algorithm, namely, the importance λ of the regularization term ∑_{t=1}^T ∑_{i=1}^C (ρ_i(𝐱(t)) - ρ_{c;i})²/ρ_{c;i} (23), and the number of candidate classes M. The results are presented in Fig. <ref> and Fig. <ref>, respectively. First of all, the error rates depend drastically on the deep CNN used for feature extraction. The accuracy of the Lightened CNN <cit.> is approximately 20% higher than the accuracy of the much more widely used VGGNet <cit.>.
It is worth noting that treating the features from the latter network as probability distributions and matching them with the KL divergence (4) allows the error rate to be decreased by 1.4% in the best case. Secondly, the U-curves in Fig. <ref> prove that a proper choice of the parameter λ can significantly influence the recognition accuracy. This is especially true for the VGGNet features, for which the difference in error rates reaches 2.5%. However, the optimal value of this parameter is practically identical (λ=6..8) for all three combinations of CNN and dissimilarity measure. At the same time, the curves in Fig. <ref> reach stability very quickly as the number of candidate classes M increases. For example, the accuracy for the VGGNet features does not increase further if M ≥ 16. Though the examination of all M=C classes yields the lowest error rates, this value is not recommended for practical usage due to the very low recognition speed. Hence, in the remainder of this paper we use near-optimal values of these parameters: λ=7 and M=64.

In the next experiments we examine the influence of additive noise on the face recognition quality. A uniform random number from the range [-X_max; X_max] was added to every pixel of each frame of all testing video clips, where X_max ≥ 0 determines the noise level. The original videos from the YTF dataset are recognized when X_max=0. The proposed approach (23) is compared with conventional video face recognition techniques, namely: 1) the ML rule (17) applied to all frames <cit.>; 2) selection of one representative frame with the k-medoids clustering method (k=1) and NN matching (17) of this frame <cit.> (hereinafter "ML with clustering"); and 3) estimation of the MAP class for each frame (18) (hereinafter "MAP"). The parameter n of the proposed criterion and the MAP rule is tuned in order to provide the highest validation accuracy. The main results of these experiments are shown in Fig. <ref>, Fig. <ref> and Fig. <ref> for the VGGNet features matched with the Euclidean distance and the KL divergence, and for the Lightened CNN features, respectively.

Based on these results, one can draw the following conclusions. Firstly, though the additive noise leads to degraded recognition accuracy in most cases, the increase in error rate is rather low even if a number of magnitude up to X_max=10 is added to the value of each pixel. Secondly, though the KL divergence outperforms the conventional Euclidean distance on the original YTF dataset (X_max=0), the latter dissimilarity is more robust to the presence of noise. In fact, it is better to apply more robust probabilistic dissimilarities, which are based on testing for statistical homogeneity of the feature vectors <cit.>. Nevertheless, the Lightened CNN again significantly outperforms the VGGNet even for very high noise levels. Thirdly, though the rather popular accumulation of all frames into one centroid (video face representation <cit.>) in the "ML with clustering" method makes it possible to increase performance T-fold, its error rate is 2-5% and 2-6% higher than the error rates of the baseline ML rule (17) and the average posterior probability pooling (18) <cit.>, respectively. Moreover, if this approach is applied in online video recognition right after a new tth frame is observed, the gain in performance over the other methods is not so obvious. Indeed, it is easy to implement criteria (17), (23) in an adaptive mode <cit.>, so that the processing of the tth frame is aggregated with the results obtained for the recognition of the previous (t-1) frames.
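Such an adaptive variant is straightforward; a minimal sketch follows (dist_to_classes is a hypothetical helper returning the C distances ρ_c(𝐱(t)) of (20) for one frame):

# Online version of criterion (17): re-decide after every new frame.
online_ml <- function(frames, dist_to_classes) {
  acc <- 0
  labels <- integer(length(frames))
  for (t in seq_along(frames)) {
    acc <- acc + dist_to_classes(frames[[t]])  # aggregate frames 1..t
    labels[t] <- which.min(acc)                # current decision c*(t)
  }
  labels
}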
It is not surprising that frame selection methods <cit.> have more potential in video-to-video recognition or video verification tasks <cit.>, in which the matching of all pairs of frames from two video clips is very expensive. Finally, the most important conclusion here is that the proposed Algorithm <ref> achieves the highest recognition accuracy in all cases. Our approach is 3-4% more accurate when compared to the other methods. Moreover, though our idea is based on looking for correspondence between the distances ρ_i(𝐱(t)) (17) and ρ_{c;i} (18), (19), the improvement in error rate does not significantly degrade even in the presence of noise in the testing videos. It is important to highlight that, though the original version of the proposed approach (14) was based on the properties of the KL divergence between positive features, our algorithm can be successfully used with state-of-the-art distances and arbitrary feature vectors.

In the last part of this section we provide preliminary results for the IJB-A dataset <cit.>. The IJB-A 1:N protocol is primarily used for testing video-to-video recognition. Moreover, each split is used to recognize only 33% of the subjects; the rest can be used to train an algorithm under the strict condition that no such imagery contains the same subjects that are in the test split. Hence, we decided to implement a more difficult scenario of still-to-video recognition with all C=500 subjects. We put all R=5712 still photos of these subjects into the training set. The testing set contains all 2043 videos from this dataset, with approximately 8 frames per video. The average accuracies for all methods described above are shown in Table <ref>.

In contrast to the previous experiments, here the features from the VGGNet <cit.> allow classifying videos 25% more accurately when compared to the Lightened CNN <cit.>, which is probably caused by the presence of many non-frontal photos in this dataset. However, all the previous conclusions about the still-to-video recognition methods remain the same. The selection of the most representative frame in the ML with clustering is 0.6-0.9% less accurate than the matching of all frames in criterion (17). The pooling of posterior probabilities (18) is here much preferable to the ML rule (17). At the same time, the error rate of the proposed Algorithm <ref> is 4.5-5.5% and 1.1-2.5% lower than the error rates of the widely used criteria (17) and (18), respectively.

§ CONCLUSION

In this paper we proposed a novel statistical approach to still-to-video face recognition (Algorithm <ref>), in which the joint density of the distances to all reference images is maximized (7), (8). We have shown that this approach is implemented by the introduction of a special regularization term (23) into the nearest neighbor matching (17) of high-dimensional features from the outputs of deep CNNs. In this regularization it is assumed that the resulting class c^*(t) for the recognition of an individual tth video frame is reliable only if the distances between the features of this frame and the rth still image are approximately equal to the distances between the reference images from the c^*(t) and c(r) classes for all r ∈ {1,...,R}. This assumption is known to be asymptotically correct for the KL divergence and the rather simple probabilistic model from Section <ref> <cit.>. However, it was experimentally demonstrated that the proposed approach can be combined even with the state-of-the-art Euclidean distance (Fig. <ref>).
Moreover, our algorithm makes it possible to increase accuracy even for general feature vectors with possibly negative values (Fig. <ref>). We demonstrated how to tune the parameters of the proposed algorithm (Fig. <ref>) in order to drastically reduce its computational complexity by a proper choice of the M candidate classes (compare criterion (23) with the original one (19)).

The main direction for further research on the proposed algorithm is its application with a more accurate approximation of the distance probability distributions, e.g., the usage of the more appropriate Weibull distribution <cit.>. Secondly, it is important to make our regularization smoother by taking into account the temporal coherence of sequential frames <cit.>. Thirdly, it is necessary to combine our approach with modern techniques of frame weighting in the aggregation module (22) <cit.> in order to distinguish between frames of different quality <cit.>. Finally, though the deep bottleneck features are known to be highly embedded in the manifold space, non-Euclidean metrics, e.g., such techniques as point-to-set metric learning <cit.> or geometry-aware feature matching <cit.>, can also be considered in future experiments.

§ ACKNOWLEDGEMENT

The paper is supported by Russian Federation President grant no. MD-306.2017.9. The work in Section 3 was conducted by A.V. Savchenko at the Laboratory of Algorithms and Technologies for Network Analysis, National Research University Higher School of Economics, and supported by RSF grant 14-41-00039.

§ REFERENCES

lecun2015deep Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436–444.
prince2012computer S. J. Prince, Computer vision: models, learning, and inference, Cambridge University Press, 2012.
taigman2014deepface Y. Taigman, M. Yang, M. Ranzato, L. Wolf, Deepface: Closing the gap to human-level performance in face verification, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1701–1708.
liu2015targeting J. Liu, Y. Deng, T. Bai, Z. Wei, C. Huang, Targeting ultimate accuracy: Face recognition via deep embedding, arXiv preprint arXiv:1506.07310.
learned2016labeled E. Learned-Miller, G. B. Huang, A. RoyChowdhury, H. Li, G. Hua, Labeled faces in the wild: A survey, in: Advances in Face Detection and Facial Image Analysis, Springer, 2016, pp. 189–248.
savchenko2016search A. V. Savchenko, Search techniques in intelligent classification systems, Springer, 2016.
savchenko2015statistical A. V. Savchenko, N. S. Belova, Statistical testing of segment homogeneity in classification of piecewise–regular objects, International Journal of Applied Mathematics and Computer Science 25 (4) (2015) 915–925.
tan2006face X. Tan, S. Chen, Z.-H. Zhou, F. Zhang, Face recognition from a single image per person: A survey, Pattern Recognition 39 (9) (2006) 1725–1745.
cao2013practical X. Cao, D. Wipf, F. Wen, G. Duan, J. Sun, A practical transfer learning algorithm for face verification, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 3208–3215.
parkhi2015deep O. M. Parkhi, A. Vedaldi, A. Zisserman, Deep face recognition, in: BMVC, Vol. 1, 2015, p. 6.
wu2016light X. Wu, R. He, Z. Sun, T. Tan, A light cnn for deep face representation with noisy labels, arXiv preprint arXiv:1511.02683.
savchenko2017deep A. V. Savchenko, Deep convolutional neural networks and maximum-likelihood principle in approximate nearest neighbor search, in: 8th Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2017), Springer, 2017, pp. 42–49.
cui2013fusing Z. Cui, W. Li, D. Xu, S. Shan, X. Chen, Fusing robust face region descriptors via multiple metric learning for face recognition in the wild, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3554–3561.
cevikalp2010face H. Cevikalp, B. Triggs, Face recognition based on image sets, in: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 2567–2573.
mian2013image A. Mian, Y. Hu, R. Hartley, R. Owens, Image set based face recognition using self-regularized non-negative coding and adaptive distance metric learning, IEEE Transactions on Image Processing 22 (12) (2013) 5252–5262.
wolf2003kernel L. Wolf, A. Shashua, Kernel principal angles for classification machines with applications to image sequence interpretation, in: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1, 2003.
huang2013coupling Z. Huang, X. Zhao, S. Shan, R. Wang, X. Chen, Coupling alignments with recognition for still-to-video face recognition, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 3296–3303.
liu2014toward L. Liu, L. Zhang, H. Liu, S. Yan, Toward large-population face identification in unconstrained videos, IEEE Transactions on Circuits and Systems for Video Technology 24 (11) (2014) 1874–1884.
zhu2015still Y. Zhu, Z. Zheng, Y. Li, G. Mu, S. Shan, G. Guo, Still to video face recognition using a heterogeneous matching approach, in: Proceedings of the IEEE International Conference on Biometrics Theory, Applications and Systems (BTAS), 2015, pp. 1–6.
yang2016neural J. Yang, P. Ren, D. Chen, F. Wen, H. Li, G. Hua, Neural aggregation network for video face recognition, arXiv preprint arXiv:1603.05474 (accepted for CVPR17).
savchenko2017maximum A. V. Savchenko, Maximum-likelihood approximate nearest neighbor method in real-time image recognition, Pattern Recognition 61 (2017) 459–469.
kullback1997information S. Kullback, Information theory and statistics, Courier Corporation, 1997.
shakhnarovich2002face G. Shakhnarovich, J. W. Fisher, T. Darrell, Face recognition from long-term observations, in: European Conference on Computer Vision, Springer, 2002, pp. 851–865.
klare2015pushing B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, A. K. Jain, Pushing the frontiers of unconstrained face detection and recognition: Iarpa janus benchmark a, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1931–1939.
wolf2011face L. Wolf, T. Hassner, I. Maoz, Face recognition in unconstrained videos with matched background similarity, in: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 529–534.
savchenko2012adaptive A. V. Savchenko, Adaptive video image recognition system using a committee machine, Optical Memory and Neural Networks 21 (4) (2012) 219–226.
zhang2006weighted Y. Zhang, A. M. Martínez, A weighted probabilistic approach to face recognition from multiple images and video sequences, Image and Vision Computing 24 (6) (2006) 626–638.
liu2003video X. Liu, T. Chen, Video-based face recognition using adaptive hidden markov models, in: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1, 2003.
huang2012benchmarking Z. Huang, S. Shan, H. Zhang, S. Lao, A. Kuerban, X. Chen, Benchmarking still-to-video face recognition via partial and local linear discriminant analysis on cox-s2v dataset, in: Asian Conference on Computer Vision, Springer, 2012, pp. 589–600.
zhou2002face S. Zhou, V. Krueger, R. Chellappa, Face recognition from video: A condensation approach, in: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, 2002, pp. 221–226.
kittler1998combining J. Kittler, M. Hatef, R. P. Duin, J. Matas, On combining classifiers, IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (3) (1998) 226–239.
zhu2013point P. Zhu, L. Zhang, W. Zuo, D. Zhang, From point to set: Extend the learning of distance metrics, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 2664–2671.
huang2014learning Z. Huang, R. Wang, S. Shan, X. Chen, Learning euclidean-to-riemannian metric for point-to-set classification, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1677–1684.
schroff2015facenet F. Schroff, D. Kalenichenko, J. Philbin, Facenet: A unified embedding for face recognition and clustering, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 815–823.
chen2015end J.-C. Chen, R. Ranjan, A. Kumar, C.-H. Chen, V. M. Patel, R. Chellappa, An end-to-end system for unconstrained face verification with deep convolutional neural networks, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2015, pp. 118–126.
li2014eigen H. Li, G. Hua, X. Shen, Z. Lin, J. Brandt, Eigen-pep for video face recognition, in: Proceedings of the Asian Conference on Computer Vision, Springer, 2014, pp. 17–33.
yang2017cortexnet A. Canziani, E. Culurciello, Cortexnet: a generic network family for robust visual temporal representations, arXiv preprint arXiv:1706.02735.
wong2011patch Y. Wong, S. Chen, S. Mau, C. Sanderson, B. C. Lovell, Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition, in: Proceedings of the IEEE Computer Vision and Pattern Recognition Workshops (CVPRW), 2011, pp. 74–81.
burghouts2008distribution G. Burghouts, A. Smeulders, J.-M. Geusebroek, The distribution family of similarity distances, in: Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS), 2008, pp. 201–208.
harandi2014manifold M. T. Harandi, M. Salzmann, R. Hartley, From manifold to manifold: Geometry-aware dimensionality reduction for spd matrices, in: European Conference on Computer Vision, Springer, 2014, pp. 17–32.
§ INTRODUCTION

Statistical analyses of angular or directional data have found applications in a variety of fields, such as geology (earth's magnetic poles), meteorology (wind directions) and bioinformatics (backbone structures of proteins). Directional data can be univariate or multivariate, and one way of representing such data is via angles measured on a circle [0, 2π) (element-wise when multivariate), hence the name angular. Angular methods are also applicable to any interval that wraps around (e.g., [0, L) or [-L/2, L/2) for some L > 0) when transformed to the circle [0, 2π). The wraparound condition on the support invalidates the direct applicability of many standard statistical methods. There is substantial literature devoted to the development of descriptive and inferential techniques for directional data (see, e.g., <cit.>), with the traditional univariate case as the primary focus, although the bivariate case is gaining increasing interest <cit.> along with the emergence of new applications. Bivariate angular data can now be found in a variety of modern scientific problems, with many notable applications arising from the field of computational biology <cit.>. A major area of research in protein bioinformatics involves modeling and predicting protein 3-D structures, which requires proper handling of the paired backbone torsion angles. Formal analyses of these bivariate angle pairs thus require rigorous statistical techniques and models.

A unique feature in the modeling of directional data is the use of angular probability distributions, or mixtures thereof (see Section <ref>), which are inherently different from their linear (Euclidean) counterparts because of the wraparound nature of their supports. Bayesian methods provide flexible tools for analyzing and modeling such data. First, one may incorporate prior information, if available, into the modeling. Second, one may use powerful computational methods, i.e., Markov chain Monte Carlo (MCMC, see Section <ref>) for sampling from the posterior, to fit such models and assess the fitted models. Third, one may readily compute posterior quantities of interest while coherently accounting for uncertainty in the model parameters. Within this context, this package was developed for fitting Bivariate Angular Mixtures using Bayesian Inference (hence, the package name BAMBI).

In BAMBI we implement the two most popular angular distributions, namely the wrapped normal (or Gaussian) and the von Mises distributions, and consider both univariate and bivariate versions of these. BAMBI provides functionality for modeling univariate and bivariate angular data using these distributions, and for fitting finite mixture models of these distributions. We first introduce the basics of these distributions and mixture models. It should be noted that the bivariate distributions considered in this paper have support [0, 2π)^2 (i.e., on a torus), which is distinct from those defined on the surface of the unit sphere, such as the von Mises-Fisher distribution.

§.§ Wrapped Normal Distributions

For univariate continuous data, the angular analogue of the normal distribution on the real line is the wrapped normal distribution, obtained by wrapping a normal random variable around the unit circle (see, e.g., <cit.>). Formally, let X be a normal random variable with mean μ and variance σ² > 0. Then the distribution of ψ = X mod 2π is called the wrapped normal distribution with mean μ and variance σ², and is denoted by WN(μ, σ²).
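This wrapping is trivial to mimic in R (a two-line illustration under the above notation, not BAMBI code):

x   <- rnorm(1000, mean = pi, sd = 2)   # linear normal draws
psi <- x %% (2 * pi)                    # wrapped draws, i.e., WN samples on [0, 2*pi)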
The density of ψ ∼ WN(μ, σ²) is given by:

f_WN(ψ | μ, σ) = (1/(σ√(2π))) ∑_{ω ∈ ℤ} exp[ -(1/(2σ²)) (ψ - μ - 2πω)² ];  ψ ∈ [0, 2π),

where ℤ denotes the set of all integers. Since the density contains a summation over the entire ℤ, without loss of generality, we let μ ∈ [0, 2π) to ensure identifiability.

The multivariate generalization of the above distribution is straightforward <cit.>. The distribution of a random vector ψ = (ψ_1, ⋯, ψ_p)^⊤ with probability density

(1/√(|Σ|(2π)^p)) ∑_{ω ∈ ℤ^p} exp[ -(1/2)(ψ - μ - 2πω)^⊤ Σ^{-1} (ψ - μ - 2πω) ];  ψ ∈ [0, 2π)^p,

with μ ∈ [0, 2π)^p and Σ positive definite, is called the p-variate wrapped normal distribution with mean vector μ and variance matrix Σ, denoted by WN_p(μ, Σ).

Although (<ref>) and (<ref>) are the most common parameterizations of the wrapped normal distributions found in the literature, to facilitate comparability with the von Mises distribution (defined in Section <ref>), we shall use the equivalent representations obtained through the re-parameterizations κ = 1/σ² and Δ = Σ^{-1}. BAMBI handles the univariate and bivariate cases, namely p = 1 and p = 2. Thus, the form of the univariate wrapped normal density we use is

f_WN(ψ | μ, κ) = √(κ/(2π)) ∑_{ω ∈ ℤ} exp[ -(κ/2)(ψ - μ - 2πω)² ];  ψ ∈ [0, 2π),

with μ ∈ [0, 2π) and κ > 0; and that of the bivariate density is

f_WN_2(ψ_1, ψ_2 | μ_1, μ_2, κ_1, κ_2, κ_3) = (√(κ_1κ_2 - κ_3²)/(2π)) ∑_{(ω_1, ω_2) ∈ ℤ²} exp[ -(1/2){ κ_1(ψ_1 - μ_1 - 2πω_1)² + κ_2(ψ_2 - μ_2 - 2πω_2)² + 2κ_3(ψ_1 - μ_1 - 2πω_1)(ψ_2 - μ_2 - 2πω_2) } ],

where ψ_1, ψ_2, μ_1, μ_2 ∈ [0, 2π), κ_1, κ_2 > 0 and κ_3² ≤ κ_1κ_2, obtained by letting μ = (μ_1, μ_2)^⊤ and

Δ = [ κ_1 κ_3; κ_3 κ_2 ].

Figure <ref> plots the univariate wrapped normal densities with μ = π and κ = 0.01, 1 and 10, which shows that the density is symmetric around μ and becomes more concentrated as κ increases. Similarly, the bivariate wrapped normal density is also symmetric around (μ_1, μ_2) and becomes more concentrated as κ_1 and/or κ_2 increases, while the parameter κ_3 regulates the association between the random coordinates. This can be visualized from Figure <ref>, displaying the surfaces of the density created via the BAMBI function surface_model for different parameter combinations. (Code for generating these plots can be found in the replication R script for this paper.) The upper panels of Figure <ref> show how the density becomes more concentrated when κ_1 and κ_2 are increased (while keeping κ_3 fixed). In contrast, the lower panels of Figure <ref> display density surfaces showing how the association between the random coordinates changes (from positive to negative) when κ_3 is changed (from negative to positive, since κ_3 is the off-diagonal element of the inverse covariance matrix), while keeping κ_1 and κ_2 fixed.

Note that when κ → 0 (or Δ → 0_{2×2}), the distribution of ψ = X mod 2π converges to the uniform distribution over [0, 2π) (or [0, 2π)²). Hence, we shall include the cases κ = 0 and κ_1 = κ_2 = κ_3 = 0 in the support of these parameters, and define the associated densities by their limits. The precision parameter κ (κ_1, κ_2 in the bivariate case) is conceptually similar to the concentration parameters in the von Mises distribution (see Section <ref>). Therefore, to aid comparability, we shall call κ (κ_1 and κ_2) the concentration parameter(s) of the univariate (bivariate) wrapped normal model. In BAMBI, evaluation of the univariate and bivariate wrapped normal densities is implemented through the functions dwnorm and dwnorm2, respectively; random data from these models can be generated using rwnorm and rwnorm2, respectively.
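For example, a bivariate wrapped normal model can be simulated from and evaluated as follows (the argument names mirror the parameterization above; see the package manual for the exact signatures and defaults):

library(BAMBI)
set.seed(1)
samp <- rwnorm2(n = 10, kappa1 = 3, kappa2 = 3, kappa3 = -1,
                mu1 = pi, mu2 = pi)      # 10 draws from WN_2
dwnorm2(samp, kappa1 = 3, kappa2 = 3, kappa3 = -1,
        mu1 = pi, mu2 = pi)              # density at the sampled points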
§.§ von Mises Distributions

Wrapped normal models have a high computational cost in practice. Although the sum over ℤ in the expression for the density can be well approximated by a sum over the set A = {-3,-2,-1,0,1,2,3} (i.e., 3 integer displacements, covering ± 3 standard deviations from the mean), the number of terms in the sum grows exponentially as the dimension increases. For instance, in the bivariate case, even if ℤ is approximated by the set A, the (double) sum in the density consists of 49 terms. Because of this difficulty, the von Mises distribution is a widely used alternative, as it is able to approximate the wrapped normal while being less computationally intensive <cit.>. A random variable ψ is said to follow the von Mises distribution (also called the circular normal distribution, <cit.>) with mean parameter μ and concentration parameter κ, denoted ψ ∼ vM(μ, κ), if ψ has the density

f_vM(ψ | μ, κ) = (1/(2π I_0(κ))) exp(κ cos(ψ - μ));  ψ ∈ [0, 2π),

where μ ∈ [0, 2π), κ ≥ 0 and I_r(·) denotes the modified Bessel function of the first kind and order r. Letting κ = 0 makes (<ref>) the uniform density over [0, 2π), and when κ → ∞, (<ref>) converges to a normal density. An intuitive explanation of the latter result follows from the fact that, when the concentration parameter κ is large, ψ - μ ≈ 0, so that cos(ψ - μ) ≈ 1 - (ψ - μ)²/2, which makes the exponent in the density (<ref>) approximately proportional to that of the N(μ, (1/√κ)²) density. A formal proof can be found in <cit.>. Figure <ref> plots the von Mises densities with μ = π and κ = 0.01, 1 and 10, which shows that the density is symmetric around μ and becomes more concentrated as κ increases, and that the density is broadly similar to the associated univariate wrapped normal density.

A multivariate generalization of the univariate von Mises distribution is, however, not as straightforward as for the wrapped normal distribution, as there is no unique way of defining a multivariate distribution with univariate von Mises-like marginals. In the bivariate case, two versions of the bivariate von Mises distribution have been suggested for practical use, namely the sine model <cit.> and the cosine model <cit.>. They are comparable to the bivariate normal model both in terms of the number of parameters (five) and the interpretability of those parameters. Other generalizations with more parameters have been studied theoretically <cit.>.

Let ψ = (ψ_1, ψ_2)^⊤ be a random vector with support [0, 2π)². Then ψ is said to follow the (bivariate) von Mises sine distribution with mean parameters μ_1, μ_2, concentration parameters κ_1, κ_2, and association parameter κ_3, denoted ψ ∼ vM_s(μ_1, μ_2, κ_1, κ_2, κ_3), if ψ has the probability density

f_vM_s(ψ_1, ψ_2 | μ_1, μ_2, κ_1, κ_2, κ_3) = C_s(κ_1, κ_2, κ_3) exp[ κ_1 cos(ψ_1 - μ_1) + κ_2 cos(ψ_2 - μ_2) + κ_3 sin(ψ_1 - μ_1) sin(ψ_2 - μ_2) ],

where κ_1, κ_2 ≥ 0, -∞ < κ_3 < ∞, μ_1, μ_2 ∈ [0, 2π), and the normalizing constant is given by

C_s(κ_1, κ_2, κ_3)^{-1} = 4π² ∑_{m=0}^∞ \binom{2m}{m} (κ_3²/(4κ_1κ_2))^m I_m(κ_1) I_m(κ_2).

In contrast, ψ is said to follow the (bivariate) von Mises cosine distribution with mean parameters μ_1, μ_2, concentration parameters κ_1, κ_2, and association parameter κ_3, denoted ψ ∼ vM_c(μ_1, μ_2, κ_1, κ_2, κ_3), if ψ has the probability density[<cit.> define the density with -κ_3 instead of κ_3 in the exponent.
However, that makes the normalizing constant equal to C_c(κ_1, κ_2, -κ_3) in our current notation (i.e., in the form shown in (<ref>)), and not C_c(κ_1, κ_2, κ_3) as given in the paper. See Appendix <ref> for a proof.]

f_vM_c(ψ_1, ψ_2 | μ_1, μ_2, κ_1, κ_2, κ_3) = C_c(κ_1, κ_2, κ_3) exp[ κ_1 cos(ψ_1 - μ_1) + κ_2 cos(ψ_2 - μ_2) + κ_3 cos(ψ_1 - μ_1 - ψ_2 + μ_2) ].

Here, similarly to the sine model, κ_1, κ_2 ≥ 0, -∞ < κ_3 < ∞, μ_1, μ_2 ∈ [0, 2π), and the normalizing constant is given by

C_c(κ_1, κ_2, κ_3)^{-1} = 4π² { I_0(κ_1) I_0(κ_2) I_0(κ_3) + 2 ∑_{m=1}^∞ I_m(κ_1) I_m(κ_2) I_m(κ_3) }.

From (<ref>) and (<ref>) it is easy to see that when κ_3 = 0, both the von Mises sine and cosine densities become products of univariate von Mises densities, implying independence between the two random coordinates. In addition, when κ_1 and κ_2 are also zero, both densities become uniform over [0, 2π)². <cit.> and <cit.> provide explicit forms for the marginal and conditional distributions in the sine and cosine models; the conditional distributions in both the sine and cosine models are univariate von Mises, whereas the marginal distributions, although not von Mises, are symmetric around μ_1 and μ_2.

One key difference between the bivariate wrapped normal model and the bivariate von Mises models is that, in the latter, κ_3 is not required to satisfy κ_3² ≤ κ_1κ_2 and can thus take any value in (-∞, ∞). Consequently, the densities can be bimodal; <cit.> show that the sine (cosine) joint density is unimodal if κ_3² < κ_1κ_2 (κ_3 ≥ -κ_1κ_2/(κ_1+κ_2)), and bimodal otherwise. This flexibility gives the two bivariate von Mises distributions richer sets of possible contour plots and the ability to model a larger class of angular data.

Figures <ref> and <ref> display the surfaces of the von Mises sine and von Mises cosine densities, respectively, with μ_1 = μ_2 = π, κ_1 = κ_2 = 1 and various values of κ_3. From Figure <ref>, it can be seen that the density is bimodal when κ_3 = ±2 (or, more generally, for |κ_3| ≥ 1 when κ_1 = κ_2 = 1), and unimodal when |κ_3| < 1. It can also be seen that the density surface (or the contours) of a sine model with κ_3 = ξ is essentially a mirror image of that with κ_3 = -ξ, for any ξ ∈ (-∞, ∞); see, e.g., the upper-left and the lower-right panels of Figure <ref>. Such is, however, not the case for the cosine density, as depicted in Figure <ref>. The cosine density is bimodal when κ_3 is very negative (κ_3 < -0.5 when κ_1 = κ_2 = 1; see, e.g., the upper-left and upper-middle panels of Figure <ref>), and is unimodal otherwise. Moreover, flipping the sign of κ_3 does not yield density surfaces (or contours) that are mirror images of each other.

An interesting feature of both the sine and cosine densities is that they both approximate the regular bivariate normal density (on ℝ²) when the concentration parameters κ_1 and κ_2 are large and the densities are unimodal (<cit.>, <cit.>). This property is analogous to that of the univariate von Mises distribution. A heuristic explanation of this result again follows from the fact that, when the distributions are unimodal and κ_1, κ_2 are large, then ψ_1 and ψ_2 are highly concentrated around μ_1 and μ_2. This means ψ_i - μ_i ≈ 0, so that sin(ψ_i - μ_i) ≈ (ψ_i - μ_i) and cos(ψ_i - μ_i) ≈ 1 - (ψ_i - μ_i)²/2 for i = 1, 2.
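To illustrate how the series-based normalizing constant C_s can be handled numerically, the sine density can be evaluated by truncating the series (a naive sketch with a truncation point M of our choosing; BAMBI's internal implementation may differ):

# Naive evaluation of the vM_s density with C_s truncated at m = 0..M.
dvmsin_naive <- function(psi1, psi2, mu1, mu2, k1, k2, k3, M = 50) {
  m <- 0:M
  Cinv <- 4 * pi^2 * sum(choose(2 * m, m) * (k3^2 / (4 * k1 * k2))^m *
                           besselI(k1, m) * besselI(k2, m))
  exp(k1 * cos(psi1 - mu1) + k2 * cos(psi2 - mu2) +
        k3 * sin(psi1 - mu1) * sin(psi2 - mu2)) / Cinv
}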
§.§ Summary Measures for Univariate and Bivariate Angular Distributions

Circular summary measures are useful for describing various aspects of angular distributions. The circular mean or mean direction (see <cit.>) of an angular random variable ψ is defined as E_c(ψ) = arctan[ E(sin ψ)/E(cos ψ) ], and the circular variance of ψ is given by Var_c(ψ) = 1 - E[cos(ψ - E_c(ψ))]. Note that 0 ≤ Var_c(ψ) ≤ 1. When considering the joint distribution of paired angular random variables (ψ_1, ψ_2), their association can be measured using circular correlation. Multiple parametric circular correlation coefficients have been proposed in the literature, and here we consider two of them. Let μ_1 and μ_2 be the circular means of ψ_1 and ψ_2 respectively. Then the Jammalamadaka-Sarma (JS) circular correlation coefficient <cit.> is defined as ρ_JS(ψ_1, ψ_2) = E[sin(ψ_1 - μ_1) sin(ψ_2 - μ_2)] / √(E[sin^2(ψ_1 - μ_1)] E[sin^2(ψ_2 - μ_2)]). Now let (ψ_1^(1), ψ_2^(1)) and (ψ_1^(2), ψ_2^(2)) be independent and identically distributed (IID) copies of (ψ_1, ψ_2). Then the Fisher-Lee (FL) circular correlation coefficient <cit.> is defined by ρ_FL(ψ_1, ψ_2) = E[sin(ψ_1^(1) - ψ_1^(2)) sin(ψ_2^(1) - ψ_2^(2))] / √(E[sin^2(ψ_1^(1) - ψ_1^(2))] E[sin^2(ψ_2^(1) - ψ_2^(2))]). Both ρ_JS and ρ_FL have properties similar to the ordinary correlation coefficient. In particular, ρ_JS, ρ_FL ∈ [-1, 1], and they are equal to 1 (-1) under a perfect positive (negative) toroidal-linear (T-linear) relationship <cit.>.

Note that all distributions considered in BAMBI have circular mean(s) equal to the respective mean parameter(s). For the univariate models, the circular variances are just functions of the associated concentration parameter (see <cit.>). In particular, if ψ ∼ WN(μ, κ) then Var_c(ψ) = 1 - exp(-σ^2/2) with σ^2 = 1/κ, and for ψ ∼ vM(μ, κ), Var_c(ψ) = 1 - I_1(κ)/I_0(κ). For a bivariate wrapped normal model with Σ = (Σ_ij) = Δ^-1, the marginal circular variance of the first coordinate is 1 - exp(-Σ_11/2), ρ_FL = sinh(2Σ_12)/√(sinh(2Σ_11) sinh(2Σ_22)) and ρ_JS = sinh(Σ_12)/√(sinh(Σ_11) sinh(Σ_22)) <cit.>, where sinh denotes the hyperbolic sine function. For bivariate von Mises models (both sine and cosine forms), these expressions, provided in Appendix <ref>, are much more complicated, and involve infinite series of products of modified Bessel functions (see <cit.>). In BAMBI we implement circular variances and correlation coefficients for all three bivariate models considered in this article. In addition, a function for calculating sample circular correlation coefficients is also provided, where the sample analogs of ρ_JS and ρ_FL, along with two other non-parametric circular correlation coefficients, are considered (see Section <ref>); a minimal version of these sample measures is sketched below.
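To illustrate these definitions, here is a small self-contained base-R sketch (ours; BAMBI's circ_cor function, described in Section <ref>, provides a full implementation including the Fisher-Lee form and non-parametric measures) computing the sample circular mean, circular variance, and the sample analog of ρ_JS.

# Sample circular mean: quadrant-aware inverse tangent of mean sine and cosine
circ_mean <- function(psi) atan2(mean(sin(psi)), mean(cos(psi))) %% (2 * pi)

# Sample circular variance: 1 minus the mean resultant length
circ_var <- function(psi) 1 - sqrt(mean(sin(psi))^2 + mean(cos(psi))^2)

# Sample analog of the Jammalamadaka-Sarma circular correlation
circ_cor_js <- function(psi1, psi2) {
  s1 <- sin(psi1 - circ_mean(psi1))
  s2 <- sin(psi2 - circ_mean(psi2))
  sum(s1 * s2) / sqrt(sum(s1^2) * sum(s2^2))
}

set.seed(1)
x <- runif(100, 0, 2 * pi)
y <- (x + rnorm(100, sd = 0.3)) %% (2 * pi)  # strong positive T-linear association
circ_cor_js(x, y)  # close to 1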
§.§ Mixture Models

Mixture models are convex combinations (mixtures) of two or more probability distributions, and provide a semi-parametric approach to modeling complex datasets with multiple noticeably distinct clusters. Mixture models of both univariate and multivariate (non-wrapped) normal distributions are well studied in the literature (e.g., see <cit.>), and implemented in many statistical packages, such as the R <cit.> packages mixtools <cit.>, mclust <cit.>, and Rmixmod <cit.>. However, these are not applicable to mixture models for angular data. This is a key motivation for our creation of BAMBI, which considers finite mixture models of univariate and bivariate angular distributions (the single function fit_angmix handles the fitting of all such models; see Section <ref> Category <ref>).

Let K denote the (finite) number of components, let { f(·|θ_j): j = 1, ⋯, K } denote the component densities (f can be univariate or bivariate) with θ_j denoting the parameter vector associated with the j-th component, and let p = (p_1, ⋯, p_K)^⊤ denote the vector of mixing proportions (or weights) with p_j ≥ 0 and ∑_j=1^K p_j = 1. Then the mixture density is defined as f_mix(· | p; θ_1, ⋯, θ_K) = ∑_j=1^K p_j f(·|θ_j) (a small numerical illustration is given at the end of this subsection). In practice, the number of components K necessary to fit the data is usually unknown, and thus should be estimated on the basis of the data itself. (See Section <ref> for a discussion of estimating the number of components.)

An important special case of the general mixture model (<ref>) is the mixture of product components, also called a conditional independence model. Here, one assumes each multivariate component density f(·|θ_j) to be a product of univariate densities; specifically, for the bivariate angular models considered in BAMBI, this is achieved by letting κ_3 = 0 in each component. Note that a mixture of product components does not imply independence in the final mixture density. In fact, such a model can reasonably approximate a wide class of more complicated models, while being computationally less involved (see <cit.>); however, one often needs a larger K compared to a general (non-product) mixture model to achieve similar results, thus offsetting some of the potential computational gains. In BAMBI a product component mixture can be fitted via fit_angmix by setting the argument cov.restrict = "ZERO" (see Section <ref> under Category <ref>).

The possible bimodality of the bivariate von Mises distributions also deserves mention in the context of mixture modeling. In practice, each component of a mixture model is often used to represent one single (unimodal) cluster in the data. However, as discussed in Section <ref>, both von Mises sine and cosine models can be bimodal depending on the values of the concentration and association parameters. When bimodality is present in some of the component-specific densities, the final mixture model can be harder to interpret. To avoid this issue, it is possible to restrict the parameter spaces associated with the concentration and association parameters (by letting κ_3^2 < κ_1 κ_2 in the sine model, and κ_3 ≥ -κ_1 κ_2/(κ_1+κ_2) in the cosine model) to force unimodality in each component-specific density. Consequently, a larger K may be needed to achieve similar results, which increases model complexity. In BAMBI we provide an option of having only unimodal von Mises component densities. This is achieved by setting the logical argument unimodal.component = TRUE in fit_angmix (defaults to FALSE). See the discussion in Section <ref>, Category <ref>.
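As promised above, the following is a minimal base-R illustration (ours, independent of BAMBI's dvmmix-type functions described in Section <ref>) of the mixture density (<ref>) for a two-component univariate von Mises mixture.

# von Mises density, using base R's modified Bessel function besselI()
dvm1 <- function(psi, mu, kappa) {
  exp(kappa * cos(psi - mu)) / (2 * pi * besselI(kappa, 0))
}

# Two-component mixture: f_mix = p_1 f(.|theta_1) + p_2 f(.|theta_2)
pmix  <- c(0.6, 0.4)
mu    <- c(pi / 2, 3 * pi / 2)
kappa <- c(5, 2)

dmix <- function(psi) {
  pmix[1] * dvm1(psi, mu[1], kappa[1]) + pmix[2] * dvm1(psi, mu[2], kappa[2])
}

psi <- seq(0, 2 * pi, length.out = 500)
plot(psi, dmix(psi), type = "l", ylab = "mixture density")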
§.§ Related Work and Motivation for BAMBI

§.§.§ Literature

Several papers have addressed inferential problems relating to mixtures of bivariate angular distributions. <cit.> consider the mixture of bivariate von Mises cosine distributions, and suggest an EM algorithm for frequentist estimation of the associated parameters. Their approach is used in <cit.> in the context of modeling protein backbone angles. In other work, <cit.> consider a Bayesian non-parametric model involving an infinite mixture of von Mises sine distributions. In BAMBI we focus on classical finite mixtures, providing a unifying framework for Bayesian estimation of all three bivariate angular models presented earlier.

§.§.§ Software

To the best of our knowledge, no previous packages or libraries handle finite mixture modeling for univariate or bivariate angular data, whether in R or otherwise. In fact, the only available software (as of the time of writing this manuscript) that has functionality for bivariate von Mises models is the C++ library mocapy++, in the context of Dynamic Bayesian Networks <cit.>. However, mocapy++ does not implement bivariate wrapped normal models. The overarching goal of BAMBI is to create a unified platform that implements the descriptive and inferential statistical tools required to analyze bivariate and univariate angular data. First, BAMBI provides functions for density evaluation, computation of various summary measures (such as circular mean, variance and correlation coefficient), and random data generation from bivariate and univariate angular models and their mixtures. Second, it has functions for fitting these models to real angular data using Bayesian methods. Third, it implements a number of post-processing steps required in any Bayesian statistical analysis. For example, visual and numerical assessment of goodness of fit can be done using a number of native functions, as well as coda package functions, which are applicable to BAMBI outputs (angmcmc objects) through a convenient as.mcmc.list method. Furthermore, BAMBI has functions for model selection as well as random data generation and density evaluation from fitted models, which are useful in posterior predictive analyses.

It is to be noted that while it is possible to use general-purpose MCMC samplers such as Stan <cit.>, JAGS <cit.> and WinBUGS <cit.> for fitting the angular mixture models considered in BAMBI, there are important motivations for developing specialized implementations for these models. First, special care needs to be taken while handling the normalizing constants in the von Mises sine and cosine densities, which contain infinite series of products of Bessel functions that can be numerically unstable for some ranges of parameter values; such cases are handled in BAMBI via (quasi) Monte Carlo approximations. Second, computations for Bayesian mixture modeling benefit from using a latent allocation structure, as done in BAMBI (see Section <ref>), which allows independent sampling of the component-specific parameters. Such an approach cannot be used in Stan due to the discreteness of the allocation (p. 79, Section 6.2 of the reference manual v2.18.0); consequently, the dimensionality of the parameter vector can hinder convergence of MCMC sampling for mixtures with many components. In contrast, JAGS/WinBUGS allow incorporation of discrete latent allocation; however, their sampling techniques do not make use of the gradient of the target (log) posterior density.
As discussed in Section <ref>, Hamiltonian Monte Carlo uses the gradient and hence is typically more efficient for sampling from intractable distributions. Finally, the analytic gradients necessary for efficient MCMC sampling in these models are built into BAMBI.

§.§ Organization of the Paper

The remainder of this article is organized as follows. In Section <ref> we review Bayesian methods for fitting angular mixture models to data. In Section <ref> we describe the capabilities of BAMBI, covering all functions and datasets available in the package and providing brief overviews of their usage. Then, in Section <ref>, we illustrate angular mixture modeling on datasets included in BAMBI. The paper concludes with a brief summary and possible directions for future development in Section 5. A derivation of the von Mises cosine model normalizing constant, formulas for circular variances and correlation coefficients in the von Mises sine and cosine models, analytic forms of the gradients needed for efficient MCMC sampling (discussed in Section <ref>), and MCMC parameter traceplots associated with one of the examples considered in Section <ref> are provided in the Appendices.

§ METHODS

§.§ Overview

We adopt a Bayesian approach for fitting angular mixture models to data. Let Y^⊤ = (y_1, ⋯, y_n) be the data matrix (or data vector in the univariate case), with each y_i being a bivariate vector of angles in [0, 2π)^2 (or a univariate angle in [0, 2π)). We are interested in fitting a mixture density of the form (<ref>) for a given number of components K. For example, in bivariate wrapped normal mixtures, the density for the j-th component is given by f_j ≡ f_WN_2(·|θ_j) =: f_WN_2, j, where θ_j^⊤ = (κ_1j, κ_2j, κ_3j, μ_1j, μ_2j) denotes the vector of (model) parameters for the j-th component, j = 1, ⋯, K, and the mixture density is given by f_mix = ∑_j=1^K p_j f_WN_2, j. For a specified K, our objective is to estimate the parameter vector η^⊤ = (θ^⊤, p^⊤), which consists of the model parameters θ^⊤ = (θ^⊤_1, ⋯, θ^⊤_K) and the mixing proportions p^⊤ = (p_1, ⋯, p_K), based on Y. Often, K itself will also need to be estimated. In the following, we review some commonly used techniques in Bayesian mixture model fitting.

§.§ Bayesian mixture modeling

Under a Bayesian framework a prior distribution must be specified for the parameter vector; it can be non-informative (or diffuse) if a priori information is unavailable. Let π(θ, p) denote the joint prior density for η. Often the prior distributions of θ and p are assumed to be independent so that (with a slight abuse of notation; here π(y) stands for the appropriate prior density of the random variable y) π(θ, p) = π(θ) π(p). Moreover, parameters from different components are often assumed to be independent, so that π(θ) = ∏_j=1^K π(θ_j). Let L(Y | θ, p) = ∏_i=1^n f_mix(y_i | θ, p) denote the likelihood function of the data. Then the posterior density of η given the data is π(θ, p | Y) ∝ L(Y | θ, p) π(p) ∏_j=1^K π(θ_j), which is the basis for Bayesian inference on η. It is to be noted that the prior densities π(θ_j) all need to be proper in order to ensure that the posterior density π(θ, p | Y) is proper <cit.>. Specific comments about the choice of priors used in the current setting are provided in Section <ref>.
Note that the associated posterior mean, median or mode, commonly used as point estimates of the parameters, are not available in closed form for our distributions of interest. Additionally, π(θ, p | Y) is intractable for directly simulating IID samples, and thus some kind of Markov chain Monte Carlo (MCMC) technique is used in practice as an alternative. Starting from some initial point, an MCMC algorithm generates a Markov chain which has the target posterior density π(θ, p | Y) as its invariant distribution. Various summary measures of the posterior distributions – such as the mean, the mode (known as the maximum a posteriori or MAP parameter value), and quantiles – can then be approximated based on the MCMC realizations. In practice, the MCMC algorithm must be run long enough for the Markov chain to converge, so that the realizations approximately follow the target posterior distribution. For this purpose the chain is given a burn-in period, where the initial iterations are discarded. In BAMBI, the function fit_angmix fits a Bayesian angular mixture model with a specified number of components, and the function fit_incremental_angmix fits angular mixtures with an incremental number of components to determine an optimum number of components. In the following we briefly review the MCMC techniques used – the Gibbs sampler (GS), the Metropolis-Hastings algorithm, and Hamiltonian Monte Carlo (HMC) – and describe how they are used for sampling from the posterior distributions of the model parameters and mixing proportions in these two functions.

§.§ Gibbs sampler (GS)

The Gibbs sampler (GS) <cit.> breaks the Markov chain updates for the parameter vector into blocks. For example, when η = (η_1, η_2) the GS generates the N-th state of the Markov chain (η_1^(N), η_2^(N)) from the previous state (η_1^(N-1), η_2^(N-1)) with the steps:

* Generate η_1^(N) from π(η_1 | η_2^(N-1), Y).
* Generate η_2^(N) from π(η_2 | η_1^(N), Y).

The GS is most effective when it is easy to sample from the (full) conditional posterior densities π(η_1 | η_2, Y) and π(η_2 | η_1, Y). Note that when η_1 and η_2 are vectors, this is sometimes called the blocked Gibbs sampler. For mixture models, an efficient Gibbs sampling step for the mixing proportions p (when K > 1) can be obtained by adopting a so-called Data Augmentation scheme, where one introduces (“augments”) unobserved data to make the conditional distributions simpler <cit.>. Here, we introduce (hidden) component indicators ζ_i^⊤ = (ζ_i1, ⋯, ζ_iK) corresponding to each observation y_i, where ζ_ij is 1 if the i-th observation comes from the j-th component, and 0 otherwise, for i = 1, ⋯, n and j = 1, ⋯, K. Thus, given ζ_ij = 1, the density of y_i is simply f(y_i | θ_j), i.e., the density of the j-th component evaluated at y_i. Moreover, n_j := ∑_i=1^n ζ_ij is the total number of observations coming from this density. It is customary to assume a Dirichlet(α) prior for p, where α^⊤ = (α_1, ⋯, α_K) with α_j > 0 for all j, so that π(p) ∝ ∏_j=1^K p_j^α_j - 1. Note that α_j = 1 for all j represents the uniform prior. Let Z^⊤ = (ζ_1^⊤, ⋯, ζ_n^⊤) and let θ^(N-1), p^(N-1) and Z^(N-1) be the (N-1)-th MCMC realizations of θ, p and Z respectively. Then the N-th realization of p (and Z) is obtained as follows:

* For i = 1, ⋯, n, generate ζ_i^(N) from Multinomial(1; τ_i1^(N-1), ⋯, τ_iK^(N-1)) independently, and define n_j^(N) := ∑_i=1^n ζ_ij^(N), where τ_ij^(N-1) = p_j^(N-1) f(y_i | θ_j^(N-1)) / ∑_h=1^K p_h^(N-1) f(y_i | θ_h^(N-1)) are the posterior membership probabilities.
* Generate p^(N) from Dirichlet(α_1 + n_1^(N), ⋯, α_K + n_K^(N)).

Thus when K > 1, the latent allocations ζ_i generated during the Gibbs sampling step for p lead to simplifications that reduce the computational burden substantially. Note that, conditional on the ζ_i's, the y_i's have independent single-component densities f(·|θ_j_i), with j_i being the position of the non-zero entry of ζ_i. Thus, given the ζ_i's, the θ_j's are independent, with only the data points coming from component j contributing to the respective likelihoods. Consequently the θ_j's can be sampled independently from their (component-specific) conditional posterior densities. To complete the GS scheme for the mixture model, it remains to sample the θ_j's from π(θ_j | Z, p, Y). As these distributions are still intractable for direct IID simulation, we use a Markov chain simulation technique for sampling, and then combine this step with the GS updates for p and Z. In the following we describe two such Markov chain simulation techniques, and how they are used in BAMBI.

§.§ Metropolis-Hastings Algorithm

The Metropolis-Hastings algorithm <cit.> is simple and widely used for Markov chain simulation. Formally, let x be the current state of a Markov chain Φ with stationary density q. Let g(· | x) be a proposal density defined on the state space of Φ that is easy to sample from. Then the next state x' of the Markov chain Φ is obtained as follows:

* Generate x^* from g(· | x).
* Define r(x^*, x) = min{1, [q(x^*) g(x | x^*)] / [q(x) g(x^* | x)]}, and set the next state x' equal to x^* with probability r(x^*, x) and equal to x with probability 1 - r(x^*, x).

The random walk variant of Metropolis-Hastings (RWMH) uses a proposal density g(· | x) that is symmetric about x; e.g., by taking g(· | x) to be the density of Y_x = x + Y_0, where Y_0 is a normal random variable with mean zero. Under RWMH, g(x | x^*) = g(x^* | x), and hence r(x^*, x) = min{1, q(x^*)/q(x)}, thus simplifying computations. In BAMBI, RWMH is implemented with independent normal proposals. Note that the variance of the density g(· | x) strongly affects the acceptance probabilities r(x^*, x). Convergence of the Markov chain will be slow if the variance of g(· | x) is too large or too small. <cit.> suggest maintaining an acceptance rate of 20-30% as a general rule-of-thumb. In BAMBI we provide an auto-tuning feature that implements adaptive tuning during the burn-in period. Briefly, the acceptance rate and scale of the sampled parameters are monitored at regular intervals, and the proposal variances are adjusted accordingly (see the documentation of fit_angmix for details). We limit adaptation to the burn-in period, so that the desired properties of the final MCMC samples are retained.

§.§ Hamiltonian/Hybrid Monte Carlo (HMC)

Simple RWMH can become quite inefficient in multi-dimensional problems. A powerful alternative to RWMH when the gradient of the posterior density has an analytical form is Hamiltonian (also called Hybrid) Monte Carlo (HMC) <cit.>. HMC makes use of the gradient of the log posterior density and an auxiliary random variable, and incorporates tools from molecular dynamics to furnish proposal states coming from high posterior density regions. This allows a much faster exploration of the state space than a RWMH scheme.
A gentle and detailed introduction to HMC with applications to statistical problems can be found in <cit.>. Briefly, in HMC an auxiliary random variable ρ called the momentum is considered along with the variable of interest (the vector of model parameters in our case), which is classically called the position in physical problems and is denoted here by q.[In the classical HMC literature, the momentum and position variables are usually denoted by p and q; however, we keep the notation p for the mixing proportions and write ρ for the momentum.] Furthermore, two energy functions U(q) and K(ρ) are introduced, followed by a Hamiltonian function H(q, ρ) which is usually the sum of those two energies, i.e., H(q, ρ) = U(q) + K(ρ). U(q), called the potential energy, is defined as the negative log posterior density of q (plus any fixed constant) in MCMC applications, and K(ρ), called the kinetic energy, is usually defined as K(ρ) = ρ^⊤ M^-1 ρ / 2 for some fixed positive definite matrix M. This form of K(ρ) corresponds to the negative log density (plus a constant) of the zero-mean normal distribution with variance matrix M. In practice, M is typically taken to be diagonal, often the identity matrix (as used in BAMBI), or a scalar multiple of the identity matrix. Let ∇U(q) denote the gradient vector of U(q) with respect to q. Further, let ϵ > 0 be a small real number, called the step-size, and L ≥ 2 a positive integer, called the number of leapfrog steps. Then one step of HMC that updates (via the leapfrog method) the current state q to the next state q' can be described as follows:

* Generate ρ from N(0, M) and let q^(0) = q and ρ^(0) = ρ - (ϵ/2) ∇U(q^(0)).
* For t = 1, ⋯, L define q^(t) = q^(t-1) + ϵ M^-1 ρ^(t-1) and ρ^(t) = ρ^(t-1) - (ϵ/γ_t) ∇U(q^(t)), where γ_t = 1 for t = 1, ⋯, L-1 and γ_L = 2.
* Let q^* = q^(L) and ρ^* = -ρ^(L), and define β(q^*, ρ^*; q, ρ) = min{1, exp[H(q, ρ) - H(q^*, ρ^*)]}.
* Finally, define the new state q' equal to q^* with probability β(q^*, ρ^*; q, ρ), and equal to q with probability 1 - β(q^*, ρ^*; q, ρ).

Special care needs to be taken for the cases where the variables being sampled are constrained: for our angular models, the μ_i's are angles in [0, 2π), and the (raw) concentration parameters are positive. See Section 5.5.1.5 of <cit.> for more details. Since HMC approximates the dynamics by discretization, the step-size ϵ needs to be sufficiently small for the proposals to have a high acceptance rate. However, if ϵ is too small, convergence of the Markov chain will be slow. Thus, ϵ requires tuning to obtain a reasonable acceptance rate (∼40-90%, with 65% being optimal, as suggested by <cit.>). In BAMBI we provide an auto-tune feature for ϵ, similar to the one for the proposal standard deviation in RWMH (see Section <ref>), which adaptively tunes ϵ during burn-in to ensure a reasonable acceptance rate (60-90% by default). Care is also required in choosing the number of leapfrog steps L, since an L that is too large or too small can lead to poor convergence. While setting an appropriate L can be challenging for high dimensional parameter vectors, here the independence of the components π(θ_j | Z, p, Y) means that only two (for univariate models) or five (for bivariate models) parameters need to be sampled at a time. Thus, the default L = 10 used in BAMBI, which works well empirically, suffices for mixtures with any number of components. As suggested in <cit.>, randomly choosing ϵ and L from some fairly small interval at the beginning of every HMC step may improve convergence of the chain.
In BAMBI, ϵ is by default randomly chosen at each iteration from an interval of the form (ϵ_0(1-δ), ϵ_0(1+δ)) for a fixed ϵ_0 > 0 (which can be auto-tuned in BAMBI) and a given δ ∈ (0, 1), while L is kept fixed. However, these settings can be changed; in particular, L can also be randomly chosen from the set of integers contained in an interval (L_0/exp(A), L_0 exp(A)) for some given L_0 > 0 and A > 0, or both ϵ and L can be specified to be non-random. See the documentation of fit_angmix for more details. When properly tuned, HMC can achieve faster convergence and better exploration of the target density than RWMH, for a similar computational cost. Note that the computational cost of each HMC iteration is higher due to the L gradient evaluations; however, HMC usually requires fewer iterations to reach stationarity, and successive samples have lower autocorrelation. Hence, HMC is our recommended sampling approach in BAMBI. HMC, while powerful, does not solve all the challenges associated with MCMC sampling algorithms; in particular, both RWMH and HMC can get trapped in local modes. One possible remedy is to run multiple independent chains; see Section <ref>. By default, BAMBI uses HMC to sample θ. All angular densities considered here, both univariate and bivariate, admit analytic gradients, enabling efficient implementation. Expressions for the conditional log posterior density and its gradients are provided in the following section.

§.§ Using RWMH or HMC for angular mixture models

Consider the mixture model (<ref>) with density f(·|θ_j) for the j-th component, j = 1, ⋯, K. It follows that given the component indicators Z, information on p is superfluous, and the complete-data (i.e., given Y and Z) likelihood for θ = (θ_1, ⋯, θ_K) is given by: L(θ | Z, Y) ∝ ∏_i=1^n ∏_j=1^K f(y_i|θ_j)^ζ_ij. Recall that the joint prior density of θ is ∏_j=1^K π(θ_j). Hence, the complete-data posterior density of θ is given by: π(θ | Z, Y) ∝ { ∏_i=1^n ∏_j=1^K f(y_i|θ_j)^ζ_ij } ∏_j=1^K π(θ_j). Therefore, by taking the logarithm, the complete-data log posterior density (LPD) for θ = (θ_1, ⋯, θ_K) given the component indicators Z is obtained as ℓ_complete-data(θ) := log π(θ | Z, Y) = C + ∑_i=1^n ∑_j=1^K ζ_ij log f(y_i|θ_j) + ∑_j=1^K log π(θ_j) = C + ∑_j=1^K { ∑_i=1^n ζ_ij log f(y_i|θ_j) + log π(θ_j) } = C + ∑_j=1^K { ∑_i: ζ_ij = 1 log f(y_i|θ_j) + log π(θ_j) }, where C is a constant free of θ. The above expression shows that conditional on Z, the θ_j's are independent, and that the complete-data log posterior density of θ_j is of the form ℓ_j(θ_j) = C_j + ∑_i: ζ_ij = 1 log f(y_i|θ_j) + log π(θ_j), where the C_j's are constants (free of θ). Given the current GS draw of Z, samples from the conditional posterior density associated with ℓ_j in (<ref>) can therefore be drawn independently for all j = 1, ⋯, K. For each j ≥ 1, we let exp(ℓ_j) play the role of the target density q (see Section <ref>) in RWMH, or let -ℓ_j play the role of the potential energy U (see Section <ref>) in HMC; the gradient of U with respect to θ_j, ∇U, is therefore the negative of the gradient ∇ℓ_j. From (<ref>), it follows that ∇ℓ_j(θ_j) = ( ∑_i: ζ_ij = 1 ∂ log f(y_i|θ_j)/∂θ_j ) + ∂ log π(θ_j)/∂θ_j = ( ∑_i: ζ_ij = 1 (1/f(y_i|θ_j)) · ∂ f(y_i|θ_j)/∂θ_j ) + ∂ log π(θ_j)/∂θ_j. For the von Mises distributions (both univariate and bivariate), form (<ref>) is easier to work with, whereas form (<ref>) is more useful for the wrapped normal distributions. A schematic implementation of one full sweep of this sampling scheme is sketched below.
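The following is a minimal, self-contained R sketch (ours, not BAMBI's internal code) of one sweep of the sampler described above for a K-component mixture of univariate von Mises densities: a Gibbs draw of the allocations Z (using the posterior membership probabilities τ_ij) and of the mixing proportions p, followed by an RWMH update of each component's parameters on the (μ_j, log κ_j) scale; fit_angmix uses HMC for this last step by default. The prior settings (flat prior on μ, zero-mean normal prior on log κ, Dirichlet parameter α = 4) mirror the BAMBI defaults described elsewhere in this paper, but the function and argument names here are our own illustrative choices.

# Univariate von Mises density (base R only)
dvm1 <- function(y, mu, kappa) exp(kappa * cos(y - mu)) / (2 * pi * besselI(kappa, 0))

one_sweep <- function(y, pmix, mu, kappa, alpha = 4, sd_prop = 0.25) {
  n <- length(y); K <- length(pmix)

  # Gibbs step 1: draw allocations from the posterior membership probabilities
  tau <- sapply(1:K, function(j) pmix[j] * dvm1(y, mu[j], kappa[j]))
  z   <- apply(tau, 1, function(w) sample.int(K, 1, prob = w))
  nj  <- tabulate(z, nbins = K)

  # Gibbs step 2: draw mixing proportions from Dirichlet(alpha + n_j),
  # via normalized independent Gamma draws
  g <- rgamma(K, shape = alpha + nj)
  pmix <- g / sum(g)

  # RWMH step for each component: complete-data log posterior of (mu_j, log kappa_j);
  # flat prior on mu (constant, omitted), N(0, 1000) prior on log kappa
  lpost <- function(j, mu_j, lkap_j) {
    sum(log(dvm1(y[z == j], mu_j, exp(lkap_j)))) +
      dnorm(lkap_j, 0, sqrt(1000), log = TRUE)
  }
  for (j in 1:K) {
    mu_new   <- (mu[j] + rnorm(1, sd = sd_prop)) %% (2 * pi)
    lkap_new <- log(kappa[j]) + rnorm(1, sd = sd_prop)
    if (log(runif(1)) < lpost(j, mu_new, lkap_new) - lpost(j, mu[j], log(kappa[j]))) {
      mu[j] <- mu_new; kappa[j] <- exp(lkap_new)
    }
  }
  list(pmix = pmix, mu = mu, kappa = kappa, z = z)
}

Iterating one_sweep() (with a burn-in period, and in BAMBI's case with adaptive tuning of the proposal scale during burn-in) yields the MCMC samples used for posterior inference.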
Full analytic expressions for all model-specific gradients are provided in Appendix <ref>. Note that parameters with non-negative support are often sampled more efficiently on the log scale; we use this strategy for sampling the concentration parameters κ (in univariate models) and κ_1, κ_2 (in bivariate models).

§.§ Choice of priors

The selection of priors constitutes an important step in Bayesian analyses, as they play a key role in the final inference. This is comparatively more standard for the component-specific model parameters θ_j. As discussed, proper prior distributions for the model parameters are required to ensure posterior propriety. For the mean parameters μ (in univariate models) and μ_1, μ_2 (in bivariate models), the prior distributions can be taken to be members of the same family as the distribution being used in the mixture model (e.g., a von Mises sine prior for (μ_1, μ_2) in a von Mises sine mixture model) to aid conjugacy. <cit.> use this conjugate prior for the mean parameters in their von Mises sine (infinite) mixture model. Note that conjugacy for the mean parameter is not achievable except in trivial cases for the wrapped normal distributions (both univariate and bivariate). In BAMBI we set a uniform prior over [0, 2π) (if univariate) or [0, 2π)^2 (if bivariate) for the mean parameter(s), which can be viewed as a special case of the von Mises and wrapped normal distributions (see Sections <ref> and <ref>). Conjugacy is also possible for the concentration and association parameters; e.g., <cit.> consider such a family for the von Mises sine model. However, that approach does not aid sampling, as the resulting unnormalized densities involve infinite sums of products of modified Bessel functions. As a simple alternative, we suggest using independent normal distributions with zero mean as the priors for the association parameter κ_3, as well as for the logs of the concentration parameters κ, κ_1, and κ_2 (i.e., the priors for the concentration parameters are log-normal). These prior distributions can be made informative or diffuse through appropriate choices of the variance hyper-parameter. Priors are assigned independently to each parameter, and truncation is performed to reflect any specified constraints in the model (such as κ_3^2 < κ_1 κ_2 in a bivariate wrapped normal model, or a von Mises sine model with unimodal density).

Care is required in the selection of the prior for the mixing proportions p, as an ill-chosen prior may result in very poor fits. This is particularly true when K is too large (i.e., the mixture is overfitted). Note that overfitting is a necessary step when the true number of components is unknown and needs to be estimated; see Section <ref> for more details. It is customary to assume a Dirichlet(α) prior for p, where α^⊤ = (α_1, ⋯, α_K) with α_j > 0, often with the special case α_j = α_0 for all j. When the mixture is overfitted, the asymptotic results in <cit.> show that the α_j's strongly influence how the spurious mixture components are handled by the limiting posterior density. In particular, if max_j=1,⋯,K α_j < d/2, where d = dim(θ_j) is the dimension of the component-specific parameter vector, then the spurious components vanish asymptotically. On the other hand, if min_j=1,⋯,K α_j > d/2, then the spurious components asymptotically get superimposed on some of the existing components with positive mixing proportions <cit.>.
The subsequent estimation of K depends on which way the overfitting is handled by the posterior density (see Section <ref>); thus the α_j's all need to be appropriately either small or large <cit.>. A uniform prior with α_j = α_0 = 1 is a rather poor choice in this regard. In BAMBI, estimation of K is done assuming the use of α_j > d/2 for all j, in conjunction with a model selection criterion; our default is α_j = α_0 = (r + r(r+1)/2)/2 + 3, as used in <cit.>, where r denotes the dimension of the data, i.e., r is 1 or 2 according to whether the model is univariate or bivariate (consequently, all α_j's are either 4 or 5.5).

§.§ Estimating the number of components K from data

Suppose the data were generated from a mixture of K_0 (non-empty, non-identical) components. In practice, K_0 will not be known, and therefore mixture modeling requires estimating the appropriate number of components from the data. In the Bayesian setting, the estimation of K_0 requires an overfitted mixture model, i.e., one that has spurious or superfluous components. There are two ways of introducing superfluous components to overfit a mixture model, and the subsequent estimation of K_0 should reflect which way is taken. First, the superfluous components can be arbitrarily introduced at regions with no data points (“leave some groups empty”), and assigned zero mixing proportions. Then, the number of non-empty components in the fitted mixture provides a good estimate of K_0. Second, the spurious components can be superimposed on some of the existing components (“let two component-specific parameters be identical”), and assigned positive mixing proportions. Here, the total number of components in the fitted mixture, after accounting for model complexity (via some model selection criterion), provides a reasonable estimate of K_0. Note that the prior distribution of the mixing proportions p affects the way overfitting is handled by the posterior, and hence the associated prior hyper-parameters need to be wisely chosen (see Section <ref>). A detailed discussion on the estimation of the number of components can be found in <cit.>.

In BAMBI we assume that the superfluous components are introduced in the second (“let two component-specific parameters be identical”) way. Consequently, K_0 is estimated by incrementally fitting the data with one additional component at a time (starting from K = 1), until a model with K + 1 components fails to improve upon the previous fit with K components (as determined by a model selection criterion); that value of K is then used as an estimate of K_0. Multiple model selection criteria exist in the literature; we review six such criteria implemented in BAMBI and comment on their applicability in MCMC simulations. In the following, η = (θ, p) denotes the entire parameter vector, and Y = (y_1, …, y_n) is the vector/matrix of n independent observations.

* Watanabe-Akaike Information Criterion (WAIC) <cit.>. Given the dataset y_1, ⋯, y_n, the Markov chain realizations {η_1, ⋯, η_N} of the parameter vector, and the pointwise densities {p(y_i | η_s): i = 1, ⋯, n; s = 1, ⋯, N}, define the computed log pointwise posterior predictive density LPPD = ∑_i=1^n log( (1/N) ∑_s=1^N p(y_i | η_s) ). Then WAIC is defined as WAIC = LPPD - p_W, where p_W is a correction term to adjust for the effective number of parameters. Two forms of the adjustment term have been proposed in the literature, both being approximations based on Bayesian cross validation.
In the first approach, the (computed) p_W is defined as p_W = 2 ∑_i=1^n [ log( (1/N) ∑_s=1^N p(y_i | η_s) ) - (1/N) ∑_s=1^N log p(y_i | η_s) ], whereas in the second approach the (computed) p_W is defined by p_W = ∑_i=1^n var̂ log p(y_i | η), where for i = 1, ⋯, n, var̂ log p(y_i | η) denotes the estimated variance of log p(y_i | η) based on the realizations η_1, ⋯, η_N.

* Leave One Out Cross Validation Information Criterion (LOOIC) <cit.>. Under the same set-up as WAIC, the LOOIC is defined as LOOIC = ∑_i=1^n log( ∑_s=1^N w_i^s p(y_i | η_s) / ∑_s=1^N w_i^s ), where for each s = 1, ⋯, N, w^s = (w_1^s, ⋯, w_n^s) is a vector of importance sampling weights, typically calculated via the Pareto smoothed importance sampling method <cit.>. Because of the importance sampling weights, LOOIC can be more stable in practice than WAIC. See <cit.> for a gentle and thorough introduction to both WAIC and LOOIC, including applications and case studies. Because both WAIC and LOOIC are based on the mixture likelihood and do not explicitly depend on the sampled model parameters (and thus remain unaffected by the presence of multiple permutation and non-permutation modes), in BAMBI we recommend using either of these two criteria for selecting the number of mixture components. Both WAIC and LOOIC are made available in BAMBI via their implementations in the R package loo <cit.>, which also provides a compare() function for comparing WAICs/LOOICs based on the estimated difference in expected log predictive density (elpd). In BAMBI, during an incremental model fitting via fit_incremental_angmix with crit = 'WAIC' or crit = 'LOOIC', a test of the hypothesis H_0K: elpd for the fitted model with K components ≥ elpd for the fitted model with K + 1 components, is performed at every K ≥ 1. The test statistic used is an approximate z-score based on the normalized estimated elpd difference between the two models (obtained from compare(), which provides the estimated elpd difference along with its standard error). The incremental fitting stops if H_0K cannot be rejected (at level alpha, defaulting to 0.05) for some K ≥ 1; this K is then regarded as the optimum number of components.

* Marginal Likelihood (ML). Marginal likelihood is arguably the most natural and intuitive model selection criterion used in the Bayesian paradigm. As the name suggests, the marginal likelihood is the likelihood obtained by integrating the parameters out of the joint density of the data and parameters, and provides a natural way of measuring the “marginal” effect of the data. In the context of Bayesian model selection, the marginal likelihood provides a way of selecting an optimum model, in that the model with the largest marginal likelihood provides the best fit. Given the likelihood L(Y | η) and prior density π(η), the marginal likelihood is the constant (dependent only on the data): m(Y) = ∫_ℰ L(Y | η) π(η) dη, where ℰ denotes the support of the parameter vector η. Note that m(Y) is the reciprocal of the normalizing constant required to define the posterior density. Evaluation of the marginal likelihood m(Y) in practice, however, is typically challenging, as it tends to be a high-dimensional intractable integral (as in our case). Multiple estimation techniques based on samples from the posterior density π(η | Y) have been proposed in the literature; in BAMBI we implement bridge sampling <cit.>. Briefly, the key idea is to first write m(Y) as m(Y) = [ ∫_ℰ h(η) L(Y | η) π(η) g(η) dη ] / [ ∫_ℰ h(η) g(η) π(η | Y) dη ] = E_g[h(η) L(Y | η) π(η)] / E_π(·|Y)[h(η) g(η)], where g is a density, called the proposal density, and h is a function, called the bridge function.
Then one approximates the above ratio by m̂(Y) = [ (1/n_2) ∑_j=1^n_2 h(η_j^*) L(Y | η_j^*) π(η_j^*) ] / [ (1/n_1) ∑_j=1^n_1 h(η_j^†) g(η_j^†) ], where η_1^†, ⋯, η_n_1^† are MCMC samples from the posterior density π(·|Y) and η_1^*, ⋯, η_n_2^* are samples from the proposal density g. Note that h and g play crucial roles in the estimation of m(Y), and must be optimally chosen for accurate results. See <cit.> for a gentle and detailed tutorial on bridge sampling. In BAMBI, the marginal likelihood can be used to select the optimal number of mixture components in a fit_incremental_angmix run by specifying crit = 'LOGML'. This ensures computation of the log marginal likelihood via bridge sampling for every mixture model during the incremental run, and the model attaining the first minimum of the negative log marginal likelihood is treated as the optimum model. It should, however, be noted that for mixture models, optimal selection of h and g is typically difficult due to the multi-modality of the posterior density; see <cit.> for a review of some of the available methods. In BAMBI, the marginal likelihood is computed by leveraging the function bridge_sampler from the R package bridgesampling <cit.>, and the authors of bridgesampling warn against the use of bridge_sampler in mixture models <cit.>. As such, use of this method in BAMBI is not recommended, even though the functionality is available.

* Akaike Information Criterion (AIC) <cit.>. Let L̂ be the maximum value of the likelihood function for the model and let m be the number of estimated parameters in the model. Then AIC is defined as AIC = -2 log L̂ + 2m.

* Bayesian Information Criterion (BIC) <cit.>. Under the same setup, if n denotes the number of data points, BIC is defined by BIC = -2 log L̂ + m log(n). Observe that both AIC and BIC depend on the maximum value L̂, which, in general, is not directly available in MCMC simulations. A possibly suboptimal estimate of the global maximum is given by the maximum value of the likelihood function computed at the MCMC samples. During model selection, the model with the minimum AIC (or BIC) can be treated as the optimal model. In BAMBI, AIC/BIC can be used for selecting the optimum number of components in a fit_incremental_angmix run by specifying crit = 'AIC' or crit = 'BIC'. This ensures computation of the AIC/BIC of every mixture model fitted during the incremental fitting; the model attaining the first minimum AIC/BIC is treated as the optimum model. It should be noted, however, that AIC and BIC are both based on asymptotic normality results that do not hold for mixture models with multiple modes, and hence their use in selecting the number of mixture components may lead to inconsistent results. Thus, though implemented, using AIC or BIC is not recommended in BAMBI.

* Deviance Information Criterion (DIC) <cit.>. DIC is another model selection criterion which, similar to AIC and BIC, is based on an asymptotic result for large samples. Let D(η) = -2 log p(Y | η) denote the deviance, where η denotes the vector of all parameters in the model and p(Y | η) denotes the likelihood. Let {η_1, ⋯, η_N} denote the MCMC realizations of the parameters. Define the (estimated) effective number of parameters p_D by p_D = D̄(η) - D(η̄), where D̄(η) = N^-1 ∑_s=1^N D(η_s) and η̄ = N^-1 ∑_s=1^N η_s. Another commonly used form for p_D is given by p_D = var̂ D(η)/2 <cit.>, where var̂ D(η) denotes the estimated variance of D(η) based on η_1, ⋯, η_N. Then DIC is defined as DIC = p_D + D̄(η) = D(η̄) + 2 p_D.
In BAMBI, DIC can be used for selecting the optimum number of components in a fit_incremental_angmix run by specifying crit = 'DIC'. This ensures computation of the DIC of every mixture model fitted during the incremental fitting; the model attaining the first minimum DIC is treated as the optimum model. It should be noted that the use of DIC can be unstable in practice. For example, if the first form of p_D, i.e., p_D = D̄(η) - D(η̄), is used, DIC becomes heavily dependent on the plug-in estimator η̄. However, in Bayesian mixture modeling the posterior mean is not always a suitable plug-in estimator for the parameter vector, as it may lie between different modes of the posterior density <cit.>. The problem is exacerbated by the presence of label switching in the MCMC samples (see Section <ref>). Moreover, depending on how the information on the latent component indicators is handled, multiple versions of DIC can be constructed here. <cit.> consider no fewer than eight variants, but are unable to recommend any of them for practical use. Likewise, we caution against the use of DIC, although the functionality is available in BAMBI.

§.§ Label switching

Label switching is a fundamental aspect of Bayesian mixture modeling that requires proper care. Briefly, when exchangeable priors π(θ_j) are placed on the model parameters θ_j, the resulting posterior distribution becomes invariant to permutations of the component labels j. As a result, the posterior density consists of symmetric or permutation modes that are identical up to permutation of the component labels. A well-mixing MCMC algorithm will explore these permutation modes, causing the component labels to switch over the course of an MCMC simulation. This phenomenon is called label switching, and is required in MCMC-based Bayesian mixture modeling to justify convergence. A fundamental limitation of MCMC-based Bayesian mixture modeling is that the chains may become trapped at local modes, rather than fully exploring the symmetric modes. One possible remedy is to run multiple independent chains to improve exploration of the posterior. Alternatively, one may embed a deliberate random relabeling step into the sampler, i.e., adopt a so-called permutation sampling <cit.> scheme: after each draw of the random allocation, components are relabeled according to a random permutation of 1, ⋯, K. This is, in fact, a specific example of a sandwich algorithm <cit.>, where a computationally inexpensive step (drawing a random permutation of {1, …, K}, and then relabeling the components according to that random permutation) is sandwiched in between the two steps (drawing the allocation vector, and drawing the component-specific parameters) of a Data Augmentation algorithm. Sandwich algorithms often converge faster than the original Data Augmentation algorithm; in the case of permutation sampling, the chain is forced to visit the permutation modes (and potentially the non-permutation modes) more frequently. These potential improvements in convergence would not be achieved by simply randomly switching the labels of the MCMC samples post hoc. In permutation sampling, inclusion of the random label switching step results in a modified MCMC algorithm that is theoretically proven to be at least as good as the original Data Augmentation algorithm in terms of convergence rates <cit.>.
However, care must be taken if the RWMH or HMC updates for the component-specific parameters are adaptively tuned according to the scales and variabilities of the sampled model parameters (doing so properly requires keeping track of each component label). In BAMBI, permutation sampling can be performed after burn-in by setting perm_sampling = TRUE (defaults to FALSE) in a fit_angmix call. Although label switching is required for MCMC convergence in Bayesian mixture modeling, its presence in the MCMC samples makes inference on the different components via posterior means or quantiles challenging (note that MAP estimation is not affected). A number of techniques have been proposed to handle this problem; see, e.g., <cit.> and <cit.>. The available methods either need to be applied during MCMC sampling (on-line) or after simulating the entire chain (post-processing). Several post-processing techniques that undo label switching are implemented in the useful R package label.switching <cit.>, and in BAMBI we provide a wrapper called fix_label for the main label.switching function from that package. All the methods available in label.switching are appropriately implemented in fix_label, which takes an angmcmc object (see Section <ref>) as input, and may require additional user inputs depending on the method used. The Kullback-Leibler divergence based method of <cit.> (method = 'STEPHENS') is used by default if permutation sampling is performed during the original MCMC run; otherwise, the default method is the data-based algorithm of <cit.> (method = 'DATA-BASED'); neither requires any additional input other than an angmcmc object.

§.§ Initialization of the parameters, and the use of multiple chains for faster convergence

MCMC algorithms can converge faster if the initial values are chosen well. The function fit_angmix, when called without supplying starting parameter values, will automatically initialize the latent allocation to the mixture components. The default (and recommended) option is initialization via a k-means algorithm: toroidal angle pairs are first projected onto the surface of a unit sphere, and then the Cartesian coordinates of the projected spherical points are clustered. Random initial allocation is also provided as an option, but is not recommended, as it may lead to slow convergence. Once an initial allocation is obtained, the component-specific parameters are estimated via the method of moments (see <cit.> for more details on these estimators), and the mixing proportions are estimated by the sample proportions. When explicit starting values of the model parameters and mixing proportions are provided to fit_angmix, no initial allocation is necessary. This is particularly useful for estimating the number of components, when mixture models are being fitted incrementally (e.g., via fit_incremental_angmix). Under incremental model fitting, the parameters of a (K+1)-component mixture can be initialized directly from the parameter estimates of a K-component mixture; the extra component is simply taken as a “copy” of an existing component (preferably the one with the largest mixing proportion), and the associated mixing proportion is distributed equally between the two identical components. This method is expected to work well when the posterior density handles overfitting in the “let two component-specific parameters be identical” way (see Sections <ref> and <ref>), which is the approach taken in BAMBI.
As such, this is the default method of initializing parameters in fit_incremental_angmix when K > 2; however, k-means allocation followed by moment estimation can also be used, by setting prev_par = FALSE. Finally, we note that even with good initial values, MCMC samplers can still get trapped in local modes for a large number of iterations, rather than fully exploring the posterior density. One possible remedy is to run multiple independent chains to improve exploration of the posterior, which is implemented in BAMBI. The argument n.chains (set to 3 by default) specifies the number of independent chains to run in fit_angmix. These chains can be run in parallel; see Section <ref> Category IV for more details.

§ BAMBI PACKAGE

This section overviews the functionalities of BAMBI. At the core of the package is the angmcmc object, which is created when a model fitting function is used. In the following we first describe angmcmc objects, then describe the datasets included in BAMBI, and finally discuss the functions available in BAMBI and comment on their usability. However, this is not an exhaustive manual; all functions in BAMBI include R documentation, which should serve as the definitive resource.

§.§ "angmcmc" objects

angmcmc objects are classed lists belonging to the S3 class angmcmc that are created when the function fit_angmix is used. An angmcmc object contains a number of elements, including the dataset and its dimension (i.e., univariate or bivariate), the model being fitted, the tuning parameters used, the MCMC samples of the parameter vector, and, at each iteration, the (hidden) component indicators for the data points and the log-likelihood and log posterior density values (up to additive constants). When printed, an angmcmc object returns a brief summary of the function arguments used and the acceptance rate of the proposal states (in HMC and RWMH). An angmcmc object can be used as an argument for the diagnostic and post-processing functions available in BAMBI for making further inferences.

§.§ Datasets

BAMBI contains two illustrative datasets, namely wind (univariate) and tim8 (bivariate), each measured on the radian scale [0, 2π). wind The wind dataset consists of 239 observations on wind direction (originally measured in 10s of degrees, and then converted into radians) measured at Saturna Island, British Columbia, Canada during October 1-10, 2016 (obtained from the Environment Canada website). There was a severe storm during October 4-7 at Saturna Island, which caused significant fluctuations in wind direction. tim8 The dataset tim8 consists of 490 pairs of backbone dihedral angles (ϕ, ψ) for Triose Phosphate Isomerase (8TIM). The three-dimensional structure of 8TIM is available from the Protein Data Bank (PDB). The protein is an example of a TIM barrel, a common type of protein fold exhibiting alternating α-helices and β-sheets. The backbone angles for this protein were obtained by using the DSSP software (<cit.>) on the PDB file for 8TIM, and then converted into radians. Both datasets ship with the package and can be loaded with R's data() function, as illustrated below.
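For instance, assuming the datasets are accessed via R's standard data() mechanism (a minimal sketch):

library(BAMBI)

data("wind")   # univariate: 239 wind directions in [0, 2*pi)
data("tim8")   # bivariate: 490 (phi, psi) backbone dihedral angle pairs

str(wind)      # inspect the univariate dataset
head(tim8)     # first few angle pairs

# A quick look at the bivariate angular data (a Ramachandran-type scatter plot)
plot(tim8, xlab = expression(phi), ylab = expression(psi))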
§.§ Functions

In BAMBI, all five models described in Section <ref>, namely the univariate von Mises (vm), univariate wrapped normal (wnorm), bivariate von Mises sine (vmsin), bivariate von Mises cosine (vmcos) and bivariate wrapped normal (wnorm2), and their (within same model) mixtures, are implemented. The functions in BAMBI can be classified into six major categories:

Functions for evaluating density and generating random samples from an angular distribution. The functions dvm, dwnorm, dvmsin, dvmcos and dwnorm2 evaluate the density, and the functions rvm, rwnorm, rvmsin, rvmcos and rwnorm2 generate random samples, from the models vm, wnorm, vmsin, vmcos and wnorm2 respectively. The parameters of the models are specified as arguments; otherwise, default values (zero means, unit concentrations, and zero association) are used. Density evaluations require computation of the normalizing constants, which for the vmcos model requires proper care, especially when κ_1, κ_2 or |κ_3| is large. This is because the analytic expression involves an infinite (alternating, if κ_3 < 0) series of products of modified Bessel functions, which becomes numerically unstable when these parameters are large. As such, when κ_3 < -5 or max(κ_1, κ_2, |κ_3|) > 50, the integral defining the reciprocal of the normalizing constant is evaluated numerically using a (quasi) Monte Carlo method. By default, n_qrnd = 10^4 pairs of Sobol numbers are used for this purpose; however, n_qrnd, or a two-column matrix qrnd containing random/quasi-random numbers between 0 and 1, can be supplied for this approximation[The user may perform a Monte Carlo approximation for the normalizing constant even when numerical evaluation of the analytic formula is stable, by changing force_approx_const to TRUE from its default value FALSE.]. For the vmsin model, evaluation of the constant via its analytic form is much more stable, as the associated infinite series consists only of non-negative terms. For univariate and bivariate wrapped normal models, the default absolute integer displacement for approximating the wrapped normal sum is 3, which can be changed to any value in {1, 2, 3, 4, 5} through the argument int.displ. Note that int.displ regulates how many terms are used to approximate the infinite sum present in the univariate and bivariate wrapped normal densities in (<ref>) and (<ref>). For example, int.displ = M implies that the infinite sum in the univariate wrapped normal density will be approximated by a finite sum of 2M + 1 terms, with the summation index ω ranging over {0, ±1, …, ±M}. For a bivariate wrapped normal density, setting int.displ = M ensures that the infinite double sum is approximated by a finite double sum, with the paired summation index (ω_1, ω_2) ranging over {0, ±1, …, ±M}^2.

Random data generation from the von Mises models (both univariate and bivariate) is done via rejection sampling. In the univariate case, the von Mises random deviates are efficiently generated using a rejection sampling scheme from a wrapped Cauchy distribution <cit.>. For the bivariate models, two forms of random sampling are implemented. In the first method, random deviates are generated via a naive bivariate rejection sampler with uniform proposal density (the majorization constant is numerically evaluated). In the second method (proposed in the web appendix of <cit.>), random deviates are first generated from the marginal distribution of one coordinate, and then the other coordinate is drawn from the corresponding conditional distribution (which is von Mises in both models). The authors note that this latter scheme has a typical efficiency rate of at least 60%. It is to be noted that while this scheme is usually more efficient than the naive rejection sampler (especially when the concentration is high), it does have an often substantial overhead due to the numerical computations required for determining appropriate proposal density parameters.
These overheads often outweigh the efficiency gains, especially if the sample size and/or the concentration parameters are small. In BAMBI, therefore, the naive rejection sampler is used by default when the sample size is moderate or small (< 100), or when the concentration parameters are small (< 0.1).[When the concentration parameters are large, the density becomes concentrated in (a) very narrow region(s). As such, the efficiency of the naive sampler, which draws proposal random deviates from a uniform density over the entire support, can be 15-20% or less. However, even then the overall runtimes of the naive method are often still comparable to <cit.>'s method when the sample size is moderate or small.] For wrapped normal distributions (both univariate and bivariate) a random deviate is easily obtained by sampling from the unwrapped normal distribution (using rnorm if univariate, and rmvnorm from package mvtnorm if bivariate), and then wrapping into [0, 2π).

Functions for evaluating density and generating random samples from a finite mixture model with a fixed number of components. Analogous to the functions for single component densities, the functions dvmmix, dwnormmix, dvmsinmix, dvmcosmix and dwnorm2mix evaluate the density, and the functions rvmmix, rwnormmix, rvmsinmix, rvmcosmix and rwnorm2mix generate random samples, from mixtures of vm, wnorm, vmsin, vmcos and wnorm2 respectively. All model parameters and mixing proportions must be provided as input arguments.

Functions for visualizing and summarizing bivariate models. To visualize the density of any of the three bivariate angular mixture models (with specified parameters and number of components) considered in this paper, the functions surface_model and contour_model can be used, which respectively plot the surface and the contour of a mixture density. To compute summary statistics for a single bivariate angular distribution, the function circ_varcor_model can be used, which calculates the circular variance and correlation coefficients (both Jammalamadaka-Sarma and Fisher-Lee forms; see Section <ref>). However, summarizing angular mixture models via circular variances and correlations is not recommended, as interpretation of the results can be challenging when multiple clusters are present in the data.[To calculate the circular variances and correlations for a mixture density, one can simulate from the density first, and then approximate the population quantities by their sample analogs on the basis of the simulated data.] The function circ_cor implements the sample Jammalamadaka-Sarma and Fisher-Lee circular correlation coefficients, as well as two forms of Kendall's tau <cit.> as non-parametric measures. The sample circular variance can be computed using the var.circular function from the R package circular <cit.>.

Functions for fitting a single component model or a finite mixture model with a specified number of components to a given dataset using MCMC. Given a dataset, and using the methods discussed in Section <ref>, the function fit_angmix generates MCMC samples for the parameters in an angular mixture model with a specified number of components. Available models for bivariate input data (which must be supplied as a two-column matrix or data frame) are vmsin, vmcos and wnorm2, and for univariate data, vm and wnorm. The argument ncomp specifies the number of components in the mixture model, with ncomp = 1 representing the single component case (i.e., fitting a single density).
A Gibbs sampler is used to generate the latent component indicators, and conditional on this allocation the model parameters are sampled either by HMC (the default) or by RWMH (specified through the argument method). A permutation sampling step can be added after burn-in by setting the logical argument perm_sampling to TRUE. The tuning parameters epsilon and L in HMC, and propscale in RWMH, have pre-specified default values, and there is an auto-tuning feature for epsilon and propscale which is used by default, but can be turned off by setting the logical argument autotune = FALSE. The burn-in proportion can be specified through the argument burnin.prop, which is set to 0.5 by default. For HMC, the options to use random epsilon and L at each iteration are specified via the logical arguments epsilon.random and L.random respectively.

In practice, using multiple chains is recommended, and the argument n.chains specifies the number of chains to be used. These chains can be run in parallel if the logical argument chains_parallel is set to TRUE. The parallelization is implemented using future_lapply from R package future.apply; an appropriate future::plan() must be set in advance to ensure that the chains run in parallel (otherwise the chains will run sequentially); see Section <ref> for an example. To retain reproducibility while running multiple chains in parallel, the same RNG state is passed at the beginning of each chain. This is done by specifying future.seed = TRUE in the future.apply::future_lapply call. Then at the beginning of the i-th chain, before drawing any parameters, i Uniform(0, 1) random numbers are generated using runif(i) (and then thrown away). This ensures that the RNG states across chains prior to sampling of the parameters are different (but reproducible), and hence no two chains can become identical, even if they have the same starting values and tuning parameters. This, however, creates a difference between a fit_angmix call with multiple chains that is run sequentially by setting chains_parallel = FALSE, and another call that is run sequentially because of a sequential plan() (or no plan()) with chains_parallel = TRUE. In the former, base::lapply is used instead of future_lapply, which means that different RNG states are passed at the initiation of each chain.

There are options for choosing the prior hyperparameters. The priors for the association parameter κ_3 (in bivariate models) and for the logarithms of the concentration parameters κ (in univariate models) and κ_1, κ_2 (in bivariate models) are taken to be normal distributions (i.e., the priors for κ, κ_1, κ_2 are log-normal), all with zero means. The default variance for these normal priors is 1000, which provides diffuse priors, although it can be set by the user via the argument norm.var. A fixed non-informative Uniform(0, 2π) prior is used for the mean parameters. The Dirichlet prior parameters α_j for the mixing proportions p_j can be supplied through the argument pmix.alpha, which can either be a positive real number (the same for all α_j) or a vector of the same length as pmix. It is recommended that the α_j be chosen large for proper handling of overfitted mixtures; following <cit.>, all α_j default to (r + r(r+1)/2)/2 + 3, where r denotes the dimension of the data (i.e., r = 1 for univariate data and r = 2 for bivariate data). See Sections <ref> and <ref> for more details. The argument cov.restrict specifies any (additional) restriction to be imposed on the component specific association parameters while fitting the model.
The available choices are "POSITIVE", "NEGATIVE", "ZERO" and "NONE". Note that when cov.restrict = "ZERO", fit_angmix fits a mixture with product components. By default, cov.restrict = "NONE", which does not impose any (additional) restriction.

When the model is "vmsin" or "vmcos", the component densities can be bimodal. However, one can restrict these densities to be unimodal by setting the logical argument unimodal.component to TRUE (it defaults to FALSE). For the "wnorm" and "wnorm2" models, the default absolute integer displacement for approximating the wrapped normal sum is 3, which can be changed to any value in {1, 2, 3, 4, 5} through the argument int.displ. For the "vmcos" model, the normalizing constant is numerically approximated using a quasi Monte Carlo method when analytic evaluation suffers from numerical instability. The arguments qrnd and n_qrnd can be used to alter the default settings used for these approximations. See the documentation of fit_angmix for more details.

The function fit_angmix creates an angmcmc object, which can be used for assessing the fit, post-processing, and estimating parameters.

Functions for assessing the fit. Goodness of fit for MCMC-based Bayesian modeling depends on both the convergence of the Markov chain and the appropriateness of the model used. BAMBI contains a number of functions that can be used to examine these two aspects. The functions paramtrace and lpdtrace respectively plot the parameter and log posterior density traces for visual assessment of convergence. These two functions are called together in the plot method for angmcmc objects. The as.mcmc.list method for angmcmc objects provides a convenient way of converting an angmcmc object to an mcmc.list object from package coda, which provides several additional functions for convergence diagnostics.

Once convergence is justified, the appropriateness of the fitted model can be visually assessed via the S3 functions densityplot (from lattice) and contour. The first plots the density surface (for bivariate data) or density curve (for univariate data) of the fitted mixture model, and the second plots the associated contours of a bivariate model. Note that these plots provide a visual assessment of the goodness of fit under the assumption that the Markov chains have converged and the parameters can be estimated on the basis of the MCMC samples. As such, convergence of the MCMC samples must be ensured prior to this step; otherwise these visual diagnostics can lead to misleading conclusions. The comparative goodness of fit of two mixture models can be assessed on the basis of the model selection criteria implemented in BAMBI, namely marginal likelihood, AIC, BIC, DIC, WAIC and LOOIC, via the functions bridge_sampler.angmcmc, AIC, BIC, DIC, waic.angmcmc and loo.angmcmc. As with the diagnostic plots, one general caveat for using any of these model selection criteria is that one should first ensure convergence of the associated Markov chain; otherwise, the results may be misleading.

Functions for post-processing and estimating parameters. BAMBI provides several post-processing functions to aid inference on the basis of the generated MCMC samples. The function add_burnin_thin adds additional burn-in and/or thinning to an angmcmc object, and the function select_chains extracts a subset of chains.
These two functions can be helpful if convergence diagnostics indicate that some of the chains are mixing poorly and/or require additional burn-in or thinning for convergence. As described in Section <ref>, care should be taken to ensure that there is no label switching in the MCMC output if inference is being made on the basis of the posterior mean/median. If present, label switching can be fixed by applying the wrapper function fix_label to the angmcmc object, which outputs another angmcmc object with label switching fixed.

Point estimates of the parameters are obtained by applying the function pointest to a fitted angmcmc object. The function pointest calculates point estimates by applying fn on the MCMC samples, where fn is either a function or a character string specifying the name of a function. The default for fn is mean, which computes the posterior mean. If fn is "MODE" or "MAP", then the (MCMC-based approximate) MAP estimate is returned. Posterior quantiles can be estimated by the (sample) quantiles of the MCMC realizations using the S3 function quantile.angmcmc. These quantiles can be used to construct credible sets. For example, if ξ_ζ denotes the ζ-th (sample) quantile of the MCMC observations for 0 < ζ < 1, the central 95% credible interval is given by (ξ_0.025, ξ_0.975). Both of these functions can be applied to specific parameters and/or component labels by setting the arguments par.name and comp.label accordingly. The S3 function summary.angmcmc prints (estimated) posterior means and the central 95% credible intervals for all the parameters.

The estimated latent allocation from an angmcmc object can be obtained using the function latent_allocation, which first estimates the parameters via pointest, computes the posterior membership probabilities (see (<ref>)) for each data point, and then assigns each data point to the class with the largest membership probability. The (estimated) log-likelihood of an angmcmc object can be extracted as a logLik object using the S3 function logLik.angmcmc. Note that there are two methods of obtaining the log-likelihood from an angmcmc object. In the default method (method = 1), the final log-likelihood is computed by applying a function fn (defaulting to max) on the iteration-wise log-likelihood values obtained during the original MCMC sampling. On the other hand, if method = 2, the parameters are first estimated (using pointest), and then the log-likelihood is computed at the estimated parameters.

Density evaluation and random data generation from a fitted model can be done using the functions d_fitted and r_fitted respectively. Both functions take an angmcmc object as input and apply the appropriate model specific density evaluation and random data generation functions with the estimate of the parameter vector obtained via pointest.
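Assembled into a pipeline, the post-processing steps above might be used as follows. This is a hedged sketch continuing the fit object from the earlier simulated example; the probs argument to quantile is our assumption about the generic's interface.

R> fit <- fix_label(fit)                      # undo label switching first
R> pointest(fit, fn = "MODE")                 # approximate MAP estimates
R> quantile(fit, probs = c(0.025, 0.975))     # central 95% credible limits
R> memb <- latent_allocation(fit)             # estimated component memberships
R> newdat <- r_fitted(100, fit, fn = "MODE")  # draws for posterior predictive checks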
The actual MCMC samples for one or more parameters, in one or more components, from one or more chains can be extracted for further analysis via the function extractsamples applied to an angmcmc object.

Functions for incremental mixture model fitting and estimation of the number of components. Using the methods and model selection criteria described in Sections <ref> and <ref>, the function fit_incremental_angmix fits mixture models with an incremental number of components by calling fit_angmix at each step, and uses a Bayesian model selection criterion to determine an optimal number of components. The arguments start_ncomp and max_ncomp provide the starting and maximum number of components to be used in the incremental fitting, which are set to 1 and 10 respectively by default. The available model selection criteria (specified via the argument crit) are 'LOGML', 'AIC', 'BIC', 'WAIC', 'LOOIC' and 'DIC', each of which is computed for every intermediate fit. The initial values for the starting model (or for a model with ≤ 2 components) are obtained by default using moment estimation on k-means clusters (they can also be supplied directly by the user). For the subsequent models (with number of components ≥ 3), the initial values are by default obtained from the MCMC-based MAP parameter estimates for the previous model with one fewer component (see Section <ref>). This can be overridden by setting prev_par = FALSE, to use k-means clustering followed by moment estimation instead. By default, only the “best” chain, i.e., the one with the maximum average log posterior density, is used for the computation of the model selection criterion and for parameter estimation (if prev_par is set to TRUE). This default helps safeguard against situations where some of the chains get trapped at local optima. However, samples from all chains can be used for these computations by setting use_best_chain = FALSE. The function stops when crit achieves its first minimum, or when max_ncomp is reached, and returns a list with the following elements:

* fit.best is an angmcmc object corresponding to the optimal or best fit.
* crit.all provides a vector (list) of model selection criterion values for each incremental model fitted.
* crit.best is the value of the model selection criterion for the model with the optimal number of components.
* maxllik.all contains the maximum log-likelihoods (obtained from the MCMC iterations) for all fitted models.
* maxllik.best is the maximum log-likelihood for the optimal model.
* ncomp.best is the optimal number of components associated with the “best” model.
* fit.all is a list consisting of angmcmc objects for all numbers of components fitted during the model selection process. Any element of this list can be used as an argument for any function that takes angmcmc objects as input. However, this can be very memory intensive, and as such it is by default not returned (it can be returned by setting return_all = TRUE).

The angmcmc object corresponding to the best fit and the associated value of the model selection criterion can also be extracted from the output of fit_incremental_angmix using the convenience functions bestmodel and bestcriterion respectively (a schematic sketch combining these pieces is given at the start of the next section).

§ ILLUSTRATIONS

In this section we illustrate the functionalities of BAMBI by fitting mixture models to the angular datasets included in the package. The following command

R> library(BAMBI)

loads the package after it has been installed. For reproducibility of the results presented, the same random seed 12321 is used in all of our examples.
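All the examples below follow the same basic pattern, summarized in the following schematic sketch (the model, data and criterion vary by example; the list-style $ access to the return elements is our assumption based on the description in the previous section):

R> fits <- fit_incremental_angmix(model = "vmsin", data = tim8,
+                                 crit = "LOOIC", start_ncomp = 1,
+                                 max_ncomp = 10, n.iter = 2e4, n.chains = 3)
R> fit.best <- bestmodel(fits)   # angmcmc object for the optimal fit
R> bestcriterion(fits)           # criterion value at the optimum
R> fits$ncomp.best               # optimal number of components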
§.§ Fitting mixture models on the tim8 bivariate data

The tim8 dataset consists of 490 backbone torsion angle pairs (ϕ, ψ) for the protein 8TIM. The protein is an example of a TIM barrel, a common type of protein fold exhibiting alternating α-helices and β-sheets. Its Ramachandran plot (i.e., a scatterplot of the (ϕ, ψ) pairs) is generated using the following R command and shown in Figure <ref>.

R> plot(tim8, pch = 16,
+       xlim = c(0, 2*pi), ylim = c(0, 2*pi),
+       main = "8TIM",
+       col = scales::alpha("black", 0.6))

Note that this scatterplot projects the torus onto a 2D surface, and hence cannot show the wraparound nature of the angles. The projection is not unique, and the appearance of the scatterplot depends on how the angles are represented, e.g., in [-π, π) instead of [0, 2π). Moreover, one should be careful to note that the top and bottom boundaries of these plots join together, as do the left and right boundaries. In Figure <ref>, about 3-6 visually distinct clusters can be seen; note, however, that the points around (4.5, 0) and (5.5, 6) may in fact form a single cluster. Such features cannot be correctly modeled by statistical methods that ignore circularity. This is therefore a suitable example for illustrating the need for mixtures of (bivariate) angular distributions.

To fit a bivariate mixture model with a specified number of components to the dataset, we can use the BAMBI function fit_angmix by specifying a model and the number of components to be used. For example, the following R command fits a 4 component vmsin mixture by generating 3 MCMC chains with 20,000 samples each for the mixture model parameters. HMC is used for sampling the model parameters, with the tuning parameter epsilon adaptively tuned during burn-in (which by default consists of the first half of all iterations, i.e., the first 10,000 iterations), and L taking its default value 10. A fit_angmix call creates an angmcmc object, which can then be used for various post-processing tasks, including convergence assessment, parameter estimation and visualization of the goodness of fit.

R> set.seed(12321)
R> fit.vmsin.4comp <- fit_angmix("vmsin", tim8, ncomp = 4, n.iter = 2e4,
+                                n.chains = 3)

Note that in order for the independent chains to be run in parallel, an appropriate plan() from R package future needs to be set first; otherwise the chains will run sequentially. For example, running the commands

R> library(future)
R> plan(multiprocess, gc = TRUE)

before the fit_angmix call will ensure that the three chains are run in parallel, provided resources are available. We suggest setting gc = TRUE in plan to allow proper garbage collection in the parallel workers, even though it adds some overhead. This is because the parallel workers can end up leaving heavy memory footprints, especially when mixture models are being fitted incrementally.

In the previous example with fit_angmix, the number of components K was specified through ncomp. However, the “true” K generally needs to be estimated from the data. For this purpose, we use fit_incremental_angmix with start_ncomp = 2, which fits angular mixtures with an incremental number of components (starting with 2 components) and uses a Bayesian model selection criterion to determine an optimal model. We use this function to fit optimal mixtures of the vmsin, vmcos and wnorm2 models separately.
By specifying n.chains = 3 and n.iter = 2e4 in fit_incremental_angmix, each incremental model is fitted using three chains with 20,000 iterations each (with the first 10,000 iterations treated as burn-in, during which epsilon in HMC is tuned). By default, the algorithm uses the MCMC-based MAP estimates from the preceding fitted model with one fewer component (if available) as starting parameter values (see Section <ref>). We use the default leave-one-out cross validation information criterion ('LOOIC') as the model selection criterion for determining the optimal model. When the function stops, we extract the best fitted model using the function bestmodel, and assess convergence of the associated chains. After justifying convergence, we provide point and interval estimates of the parameters and visually examine the goodness of fit.

Fitting the vmsin mixture model. We start with the vmsin model. The R commands are as follows:

R> set.seed(12321)
R> fit.vmsin <- fit_incremental_angmix(model = "vmsin", data = tim8,
+                                      crit = "LOOIC",
+                                      start_ncomp = 2,
+                                      max_ncomp = 10,
+                                      n.iter = 2e4,
+                                      n.chains = 3)

The algorithm stops at 5 components and determines the optimal number of components to be 4 on the basis of the 'LOOIC' values. The MCMC-based maximum log-likelihood estimates for the intermediate models are -945.3684, -853.3546, -803.4193 and -793.8154, which are steadily increasing. This is expected, since a “smaller” mixture should be nested within a “larger” mixture when properly fitted. We extract the optimal fitted model from the output of fit_incremental_angmix via bestmodel:

R> fit.vmsin.best <- bestmodel(fit.vmsin)

Before estimating parameters, we first need to assess convergence and stationarity of the Markov chains. For this purpose, we first look at the (non-normalized) log posterior density (LPD) trace plots, which can be obtained using the function lpdtrace:

R> lpdtrace(fit.vmsin.best)

The resulting plot, displayed in Figure <ref>, shows that all three chains have stabilized into similar LPD ranges after burn-in, without noticeable trends or patterns. Next, we look at the parameter traces, plotted using the function paramtrace:

R> paramtrace(fit.vmsin.best)

These trace plots are displayed in the panels of Figure <ref> through Figure <ref> in Appendix <ref>, and show adequate signs of convergence and stationarity for the samples within each chain. Stationarity of a chain can be formally tested using Geweke's convergence diagnostic <cit.>, which tests the equality (of the means) of the first and last parts of a Markov chain using standard z-scores. The test is implemented in R package coda, and can be applied to fit.vmsin.best by first converting it into a coda mcmc.list object via the S3 function as.mcmc.list:

R> mcmc.vmsin.best <- coda::as.mcmc.list(fit.vmsin.best)

and then applying the coda function geweke.diag to the output. We apply geweke.diag to mcmc.vmsin.best with both frac1 and frac2 set to 0.5, to test the equality of the first and second halves. This produces a list of size 3 containing the z-statistics for each chain. These values are displayed in Figure <ref> as barplots.
The R commands are as follows:

R> geweke_res <- coda::geweke.diag(mcmc.vmsin.best, frac1 = 0.5,
+                                  frac2 = 0.5)
R> par(mfrow = c(1, 3), mar = c(5, 6, 2, 1))
R> for(j in 1:3) {
+    barplot(geweke_res[[j]]$z, horiz = TRUE,
+            names = names(geweke_res[[j]]$z),
+            xlim = range(geweke_res[[j]]$z, -3, 3),
+            ylab = "", xaxt = 'n', las = 2)
+    axis(1, las = 1)
+    title(main = paste("Chain", j), xlab = "geweke.diag")
+ }

From these barplots, we see that all the z-scores lie more or less between ±3, thus indicating adequate similarity between the first and second halves of the chains. Package coda provides functions for a number of additional diagnostic plots and formal tests that may be used after conversion to an mcmc.list object. Note that not every test is applicable in every situation. For example, the Gelman-Rubin test <cit.>, available via the coda function gelman.diag, assumes normality of the posterior density; this assumption is clearly violated when the posterior density is multimodal. In this example, as for mixture models with several components in general, multimodality of the posterior density is commonly observed. Posterior multimodality is evident in the parameter traceplots for the current fit: the sampled values of the same parameter across the independent chains exhibit noticeable differences (see, e.g., the panels of Figure <ref>). Note that this can be due to both permutation (i.e., label-switching) and non-permutation modes. These modes lie in regions of similar posterior density, as the LPD traces show. This also indicates that the various regions of the posterior density are being well explored by the three chains together.

Next, we consider parameter estimation and assessment of the goodness of fit. Since the combined MCMC samples are multimodal, posterior mean point estimates computed from the raw MCMC samples will not be meaningful, as they will lie in between modes. A simple alternative is to use the MCMC-based approximate MAP estimate, which is unaffected by multimodality; otherwise proper care must be taken. The inferential difficulties associated with having permutation modes in the MCMC samples can (potentially) be resolved by undoing the label switching. The function fix_label (with its default settings) can be used for this purpose:

R> fit.vmsin.best <- fix_label(fit.vmsin.best)

The parameter traceplots obtained (using paramtrace after undoing label switching) are displayed in Figures <ref> through <ref> in Appendix <ref>. Compare these traceplots with the ones displayed in Appendix <ref>. It can be seen that this procedure indeed removes some of the permutation modes, in that the parameter traces for the three independent chains now largely overlap. However, some non-unique modes are still present, as can be seen in the traces of κ_3 and μ_2 in component 4, displayed in panels (d) and (f) of Figure <ref>: e.g., κ_3 “jumps” between modes at approximately 2 and -2 over the course of the MCMC simulation. These modes might be genuine non-permutation modes, or permutation modes that fix_label is unable to resolve.

In BAMBI, parameter estimates are computed using the function pointest, which can find point estimates of the whole parameter vector as well as of its sub-vectors. Note that the function supports multiple methods of estimation. In particular, the argument fn in pointest specifies the function to be evaluated on the MCMC samples for estimation. For example, fn = mean computes the MCMC posterior mean, while fn = "MODE" returns an MCMC-based approximate MAP estimate.
We use pointest to find the MAP and posterior mean estimates (after applying fix_label), and then note their differences. The R commands are as follows.

R> round(pointest(fit.vmsin.best, fn = "MODE"), 2)

             1     2     3     4
pmix      0.43  0.16  0.35  0.06
kappa1   36.26  6.15  4.63  3.72
kappa2   27.93  2.25  7.80  0.00
kappa3  -12.03 -0.45 -0.58 -1.85
mu1       5.22  4.66  4.46  1.84
mu2       5.54  6.14  2.41  4.94

R> round(pointest(fit.vmsin.best, fn = mean), 2)

             1     2     3     4
pmix      0.43  0.17  0.34  0.06
kappa1   36.39  7.35  4.34  4.32
kappa2   29.19  2.06  8.12  0.07
kappa3  -12.74 -0.38 -1.08 -0.19
mu1       5.23  4.67  4.43  1.71
mu2       5.55  6.17  2.43  3.60

We note that the approximate MAP estimate and the posterior mean estimate agree reasonably well on the first three components. However, they disagree on the remaining fourth component regarding the values of mu2 and kappa3. This is not surprising, since the MCMC samples for these two parameters have multiple (possibly non-permutation) modes, as we saw earlier. In this case, their posterior mean estimates lie in between modes, and hence are not good point estimates. To visualize the differences between these estimates, we plot the contours and surfaces of the corresponding fitted model densities using the S3 functions contour (from graphics) and densityplot (from lattice) for angmcmc objects:

R> contour(fit.vmsin.best, fn = "MODE")
R> lattice::densityplot(fit.vmsin.best, fn = "MODE")
R> contour(fit.vmsin.best, fn = mean)
R> lattice::densityplot(fit.vmsin.best, fn = mean)

The contour plots are shown in Figures <ref> and <ref>, and the density surfaces in Figures <ref> and <ref>. As can be seen, these plots are visually quite similar, despite the differences in the point estimates. This is due to the fact that the component most affected by the existence of multiple modes has a low mixing proportion. Nonetheless, in the current setting, the MAP estimate is the better of the two for the reasons described.

Finally, we compute interval estimates of the parameters. This is done by the S3 function summary for angmcmc objects, which computes the MCMC posterior mean along with a 95% credible interval:

R> summary(fit.vmsin.best)

                            1                      2
pmix        0.43 (0.38, 0.48)      0.17 (0.12, 0.22)
kappa1   36.39 (29.01, 45.41)     7.35 (4.99, 10.68)
kappa2   29.19 (21.99, 38.96)      2.06 (1.21, 3.16)
kappa3  -12.74 (-18.49, -7.50)    -0.38 (-1.76, 1.16)
mu1         5.23 (5.20, 5.26)      4.67 (4.54, 4.80)
mu2         5.55 (5.51, 5.58)      6.17 (5.99, 6.28)

                            3                       4
pmix        0.34 (0.29, 0.38)    0.064 (0.043, 0.09)
kappa1      4.34 (3.45, 5.43)       4.32 (1.64, 7.76)
kappa2     8.12 (6.00, 10.61)   0.073 (0.00012, 0.55)
kappa3    -1.08 (-2.29, 0.025)     -0.19 (-3.10, 3.05)
mu1         4.43 (4.35, 4.52)       1.71 (1.44, 2.05)
mu2         2.43 (2.37, 2.49)       3.60 (0.66, 5.97)

We also use this example to illustrate r_fitted, which generates random deviates from a fitted model, with parameters estimated using pointest. The corresponding function d_fitted evaluates the density. These can be useful for posterior predictive checks. We draw observations from the best (4 component) fitted vmsin model, construct the Ramachandran plot for the generated dataset (exhibited in Figure <ref>) and compare it with the original Ramachandran plot. The following R commands are used for this purpose.

R> set.seed(12321)
R> vmsin.data <- r_fitted(nrow(tim8), fit.vmsin.best, fn = "MODE")
R> plot(vmsin.data, xlab = "phi", ylab = "psi",
+       xlim = c(0, 2*pi), ylim = c(0, 2*pi),
+       pch = 16, col = scales::alpha("black", 0.6))
R> title("Data generated from best fitted vmsin")

Observe the similarity between Figures <ref> and <ref>.
The different clusters of the actual data points are reproduced well in the simulated data, which corroborates the goodness of fit.

Fitting the vmcos mixture model. Next, we fit vmcos mixtures to the data by using fit_incremental_angmix with model = "vmcos". We set n_qrnd = 1e4 (the default used in dvmcos), which specifies that 10,000 pairs of quasi-random Sobol numbers are to be used to approximate the vmcos normalizing constant in cases where its analytic computation is unstable. In a low dimensional problem with finite variance (such as ours), the Sobol sequence (or low discrepancy quasi-random sequences in general) often provides a better Monte Carlo approximation than (pseudo-) random sequences. In fact, for two dimensional problems, the rate of convergence of a Sobol sequence based Monte Carlo approximation is O((log N)^2/N), as opposed to O(1/√(N)) for an ordinary (pseudo-) random sequence based Monte Carlo approximation <cit.>, where N denotes the number of (quasi) random pairs used. See the documentation of dvmcos for examples comparing analytic, quasi Monte Carlo, and ordinary Monte Carlo approximations of the (normalizing constant of the) vmcos density. For fit_incremental_angmix (or more specifically in fit_angmix), Monte Carlo approximations based on 10^4 pairs of Sobol numbers typically provide reasonable approximations while keeping the computational burden moderate. The following R commands are used for incrementally fitting vmcos mixture models.

R> set.seed(12321)
R> fit.vmcos <- fit_incremental_angmix(model = "vmcos", data = tim8,
+                                      crit = "LOOIC",
+                                      start_ncomp = 2,
+                                      max_ncomp = 10,
+                                      n.iter = 2e4,
+                                      n.chains = 3,
+                                      use_best_chain = FALSE,
+                                      n_qrnd = 1e4)

As in the vmsin case, the algorithm stops at 5 components and determines the optimal number of components to be 4. We first extract the “best” fitted model via bestmodel:

R> fit.vmcos.best <- bestmodel(fit.vmcos)

and then plot the log posterior and parameter traces. These plots show similar convergence properties, and are omitted for brevity. For parameter estimation, we compute both the (approximate) MAP and posterior mean estimates (after undoing label switching), and plot the contours and surfaces of the associated fitted model densities. The following are the associated R commands:

R> fit.vmcos.best <- fix_label(fit.vmcos.best)
R> contour(fit.vmcos.best, fn = "MODE")
R> lattice::densityplot(fit.vmcos.best, fn = "MODE")
R> contour(fit.vmcos.best, fn = mean)
R> lattice::densityplot(fit.vmcos.best, fn = mean)

The fitted contours, displayed in Figures <ref> and <ref>, and the density surfaces, displayed in Figures <ref> and <ref>, are noticeably similar. They are also broadly similar to the ones obtained for the fitted vmsin models. Estimated posterior means along with estimated 95% credible intervals are obtained using the S3 function summary.angmcmc as follows.

R> summary(fit.vmcos.best)

                            1                        2
pmix        0.43 (0.38, 0.49)        0.16 (0.12, 0.21)
kappa1   48.70 (37.89, 61.88)       7.97 (5.11, 12.56)
kappa2   41.49 (30.79, 55.64)       2.15 (0.002, 4.57)
kappa3  -12.61 (-18.64, -7.35)    -0.0077 (-1.90, 2.12)
mu1         5.23 (5.20, 5.26)        4.67 (4.54, 4.81)
mu2         5.55 (5.51, 5.58)        6.17 (5.99, 6.28)

                            3                        4
pmix        0.34 (0.29, 0.38)     0.065 (0.043, 0.095)
kappa1      5.16 (3.80, 6.69)        3.62 (0.90, 6.77)
kappa2     8.79 (6.36, 11.81)     0.19 (0.00013, 1.33)
kappa3     -0.98 (-2.15, 0.12)      -0.20 (-1.54, 0.90)
mu1         4.43 (4.34, 4.51)        1.59 (1.31, 1.93)
mu2         2.43 (2.36, 2.49)        3.13 (0.14, 6.13)

Fitting the wnorm2 mixture model. Finally, we fit wnorm2 mixtures to the data.
The R commands used are as follows:

R> set.seed(12321)
R> library(future)
R> plan(multiprocess(workers = 3))
R> fit.wnorm2 <- fit_incremental_angmix(model = "wnorm2", data = tim8,
+                                       crit = "LOOIC",
+                                       start_ncomp = 2,
+                                       max_ncomp = 10,
+                                       n.iter = 2e4,
+                                       n.chains = 3,
+                                       use_best_chain = FALSE)

Here also, the function stops at 5 components and determines the optimal number of components to be 4.[It should be noted that the runtime for wrapped normal fitting is considerably longer than for the von Mises sine models, due to the computational burden; see Section <ref>.] As in the previous two cases, after extracting the best model, we assess convergence via trace plots (omitted for brevity). We find the MCMC-based MAP and posterior mean estimates (after undoing label switching), and also find credible interval estimates. Finally, we assess the goodness of fit through the fitted contours and density surfaces. The following R commands perform these tasks.

R> fit.wnorm2.best <- bestmodel(fit.wnorm2)
R> lpdtrace(fit.wnorm2.best)
R> paramtrace(fit.wnorm2.best)
R> fit.wnorm2.best <- fix_label(fit.wnorm2.best)
R> contour(fit.wnorm2.best, fn = "MODE")
R> lattice::densityplot(fit.wnorm2.best, fn = "MODE")
R> contour(fit.wnorm2.best, fn = mean)
R> lattice::densityplot(fit.wnorm2.best, fn = mean)
R> summary(fit.wnorm2.best)

The contours and density surfaces displayed in Figures <ref> and <ref> are noticeably similar. They are also broadly similar to the fitted vmsin and vmcos mixture model density contours and surfaces. The estimated credible intervals along with the MCMC posterior means for the fitted 4 component wnorm2 mixture are obtained using the S3 function summary.angmcmc as follows.

R> summary(fit.wnorm2.best)

                           1                       2
pmix       0.44 (0.38, 0.49)       0.16 (0.12, 0.21)
kappa1  35.57 (28.05, 44.49)      7.13 (4.55, 10.90)
kappa2  28.03 (20.93, 36.91)       1.48 (0.75, 2.44)
kappa3   12.32 (7.38, 18.02)      0.20 (-0.90, 1.12)
mu1        5.23 (5.20, 5.26)       4.66 (4.54, 4.78)
mu2        5.55 (5.52, 5.58)       6.17 (5.98, 6.28)

                           3                        4
pmix       0.34 (0.29, 0.38)    0.063 (0.043, 0.087)
kappa1     3.68 (2.86, 4.65)      8.05 (2.42, 17.30)
kappa2    7.68 (5.62, 10.32)      0.46 (0.034, 0.91)
kappa3     1.00 (0.10, 2.00)      1.13 (-1.36, 3.09)
mu1        4.43 (4.35, 4.51)       1.62 (1.37, 1.94)
mu2        2.43 (2.37, 2.50)      3.06 (0.044, 6.23)

Comparative analysis of the three fitted models. So far, we have considered mixtures of vmsin, vmcos and wnorm2 densities and fitted them to the tim8 data. The associated optimal numbers of components were determined via the leave-one-out information criterion (LOOIC) in incremental fitting schemes. We then plotted the fitted density contours and surfaces to assess the goodness of fit, and noticed that these plots are broadly similar across the three optimal fitted mixture models (of vmsin, vmcos and wnorm2 densities). It is natural then to ask which of these three fitted bivariate mixture models best explains the data. This can again be answered via LOOIC. For angmcmc objects, LOOIC can be conveniently computed using the S3 function loo from package loo.
However, here we do not need to recompute them, since they were already computed during the incremental model fitting and can be extracted via the convenience function bestcriterion as follows:

R> vmsin.4.looic <- bestcriterion(fit.vmsin)
R> wnorm2.4.looic <- bestcriterion(fit.wnorm2)
R> vmcos.4.looic <- bestcriterion(fit.vmcos)

Now we compare the three models via loo::compare on the basis of their LOOICs:

R> comp <- loo::compare(vmsin.4.looic, vmcos.4.looic, wnorm2.4.looic)
R> comp

               elpd_diff se_diff elpd_loo p_loo  looic
vmsin.4.looic        0.0     0.0   -826.0  25.5 1652.0
wnorm2.4.looic      -0.9     3.6   -826.9  26.5 1653.8
vmcos.4.looic       -5.3     3.1   -831.3  25.2 1662.7

The documentation of loo from package loo v2.1.0 says: “When comparing two fitted models, we can estimate the difference in their expected predictive accuracy by the difference in elpd_loo or elpd_waic (or multiplied by -2, if desired, to be on the deviance scale). When that difference, elpd_diff, is positive then the expected predictive accuracy for the second model is higher. A negative elpd_diff favors the first model. When using compare() with more than two models, the values in the elpd_diff and se_diff columns of the returned matrix are computed by making pairwise comparisons between each model and the model with the best ELPD (i.e., the model in the first row)”. Thus the above output provides a ranking of the three models based on their (estimated) expected log predictive density (elpd) values; a higher elpd indicates better predictive accuracy and thus a better fit. The fitted vmsin model appears to have the highest elpd (see the column elpd_diff in the above output), followed by the fitted wnorm2 model and the fitted vmcos model. However, these elpds are estimates, and the variabilities of these estimates need to be considered when making comparisons. To address this, we make use of the standard errors of the differences provided in the column se_diff and construct approximate 95% credible interval estimates of the pairwise elpd differences (viz., elpd_diff ± 2 se_diff) for the fitted model pairs (wnorm2, vmsin) and (vmcos, vmsin). An elpd difference is considered significant (at the 95% level) if the corresponding interval estimate does not contain zero. The R commands are as follows.

R> find_ci <- function(x, digits = 1) {
+    round(c(lower = unname(x[1] - 2*x[2]),
+            upper = unname(x[1] + 2*x[2])),
+          digits = digits)
+ }
R> t(apply(comp[-1, c("elpd_diff", "se_diff")], 1, find_ci))

               lower upper
wnorm2.4.looic  -8.1   6.3
vmcos.4.looic  -11.5   0.9

This shows that the approximate 95% interval estimates of the elpd differences between the best fitted vmsin and wnorm2 models, and between the best fitted vmsin and vmcos models, are (-8.1, 6.3) and (-11.5, 0.9) respectively, both containing zero. It therefore follows that the three fitted (four component) mixture models are not significantly different in terms of their goodness of fit to these data.

§.§ Fitting mixture models on the wind (univariate) data

The wind data contains 239 observations on wind direction in radians measured at Saturna Island, British Columbia, Canada, during October 1-10, 2016. As a result of a severe storm that occurred during that period, the data shows significant variability with an interesting bi- (or possibly tri-) modality.
Figure <ref> shows a histogram of the data, constructed by applying the default hist function to wind[, "angle"]. As in the bivariate case, we use fit_incremental_angmix to fit mixtures of vm and wnorm separately with an incremental number of components (starting at 1) and determine an optimal size in each case. To fit each mixture model, we first generate 20,000 MCMC samples for the parameters, with the (default) first half taken as burn-in. Except for n.iter, the defaults for all other arguments are used in these examples. After generating the MCMC samples, we assess their convergence via LPD and parameter trace plots. Following this, we visualize the fits via density curves constructed using the S3 function densityplot (which requires lattice). Finally, we compute point and interval estimates for each parameter using the S3 function summary.angmcmc.

Fitting the vm mixture model. We start with vm. The R commands are as follows:

R> set.seed(12321)
R> fit.vm <- fit_incremental_angmix(model = "vm", data = wind[, "angle"],
+                                   crit = "LOOIC",
+                                   start_ncomp = 1, max_ncomp = 10,
+                                   n.iter = 2e4,
+                                   n.chains = 3)

The function stops at 3 components and determines the optimal number of components to be 2. After it stops, we extract the angmcmc object corresponding to the best model from its output, and inspect its LPD and parameter traces for convergence (omitted for brevity).

R> fit.vm.best <- bestmodel(fit.vm)
R> lpdtrace(fit.vm.best)
R> paramtrace(fit.vm.best)

We first use fix_label to undo label switching, and then assess the goodness of fit through density curves fitted using the MAP and posterior mean estimates:

R> fit.vm.best <- fix_label(fit.vm.best)
R> lattice::densityplot(fit.vm.best, fn = "MODE")
R> lattice::densityplot(fit.vm.best, fn = mean)

The plots are displayed in Figures <ref> and <ref>, and show noticeable similarity. Finally, we compute the MCMC posterior means and the associated 95% credible intervals using the S3 function summary:

R> summary(fit.vm.best)

                       1                  2
pmix   0.24 (0.13, 0.43)  0.76 (0.57, 0.87)
kappa  7.81 (1.34, 22.40)  1.01 (0.59, 1.71)
mu     5.30 (5.08, 5.49)  2.75 (2.49, 3.04)

Fitting the wnorm mixture model. Next, we perform similar exercises with the wnorm model. The following R commands are used.

R> set.seed(12321)
R> fit.wnorm <- fit_incremental_angmix(model = "wnorm", data = wind[, "angle"],
+                                      crit = "LOOIC",
+                                      start_ncomp = 1, max_ncomp = 10,
+                                      n.iter = 2e4,
+                                      n.chains = 3)
R> fit.wnorm.best <- bestmodel(fit.wnorm)
R> lpdtrace(fit.wnorm.best)
R> paramtrace(fit.wnorm.best)
R> fit.wnorm.best <- fix_label(fit.wnorm.best)
R> lattice::densityplot(fit.wnorm.best, fn = "MODE")
R> lattice::densityplot(fit.wnorm.best, fn = mean)

As in the "vm" case, here also the function stops at 3 components and determines the optimal number of components to be 2. The LPD and parameter trace plots are omitted for brevity. The density curves fitted using the MAP and posterior mean estimates are shown in Figures <ref> and <ref> respectively, and are noticeably similar. They are also broadly similar to the plots associated with the fitted vm mixture densities shown in Figure <ref>. Finally, we compute the MCMC posterior means and 95% credible intervals using the S3 function summary.

R> summary(fit.wnorm.best)

                       1                  2
pmix   0.28 (0.16, 0.43)  0.72 (0.57, 0.84)
kappa  5.09 (1.16, 14.21)  0.78 (0.43, 1.35)
mu     5.35 (5.12, 5.53)  2.71 (2.45, 3.00)

Comparison between the two models. As in the bivariate case, we compare the fitted vm and wnorm mixture models using their LOOIC values.
We first extract the LOOICs using the convenience function bestcriterion:

R> vm.2.looic <- bestcriterion(fit.vm)
R> wnorm.2.looic <- bestcriterion(fit.wnorm)

Then we compare the two models based on their estimated expected log predictive densities, by using loo::compare on vm.2.looic and wnorm.2.looic:

R> loo::compare(wnorm.2.looic, vm.2.looic)

elpd_diff        se
      1.0       1.0

Clearly, an approximate 95% credible interval estimate for the elpd difference, obtained as elpd_diff ± 2 se, contains zero. This implies that the fitted vm model and the fitted wnorm model do not differ significantly in terms of their goodness of fit to these data.

§ CONCLUDING REMARKS AND FUTURE WORK

Angular data, both univariate and bivariate, arise naturally in a variety of modern scientific problems, and their analyses require the appropriate use of rigorous statistical tools and distributions specifically developed for such data. The lack of comprehensive software implementing such methods (in R or otherwise) has hindered their applicability in practice, especially for bivariate angular models and mixtures thereof. The package BAMBI is our contribution to this area, providing a platform that implements a set of formal statistical tools and methods for analyzing such data, and is readily accessible to practitioners.

There are various directions in which the software could be extended in future releases. Some possible features under consideration include the following.

* Implementation of additional angular distributions, such as the wrapped Cauchy.
* Additional methods of density evaluation and random simulation from fitted models on the basis of MCMC samples.
* Visualizations of bivariate angles with toroidal plots.
* Use of parallel tempering or related methods during MCMC simulations for faster exploration of the posterior density.
* Proper handling of the overfitting heterogeneity that takes place in high dimensional mixture models when some of the component specific parameters in two different components are identical. <cit.> suggests the use of sparse priors for the component specific location parameters to deal with this problem.

§ ACKNOWLEDGMENT

The authors thank the two anonymous referees who reviewed this work and provided valuable, thorough and constructive suggestions on both the package and the manuscript. The authors are also thankful to Anahita Nodehi from Tarbiat Modares University, Iran, for notifying them of a few typos and providing helpful suggestions on the R package documentation.

§ THE NORMALIZING CONSTANT FOR VON MISES COSINE DENSITY

The normalizing constant for the density (<ref>) is given by

C_c(κ_1, κ_2, κ_3) = [(2π)^2 { I_0(κ_1) I_0(κ_2) I_0(κ_3) + 2 ∑_n=1^∞ I_n(κ_1) I_n(κ_2) I_n(κ_3) }]^-1.

Without loss of generality, we first assume that the mean parameters in the density (<ref>) are all zero, i.e., μ_1 = μ_2 = 0.
Therefore, our objective boils down to evaluating the integral

C_c(κ_1, κ_2, κ_3)^-1 = ℐ = ∫_0^2π ∫_0^2π exp(κ_1 cos x + κ_2 cos y + κ_3 cos(x-y)) dx dy.

Now, from equation 9.6.34 of <cit.>, it follows that

exp(κ_1 cos x) = I_0(κ_1) + 2 ∑_l=1^∞ I_l(κ_1) cos(lx),
exp(κ_2 cos y) = I_0(κ_2) + 2 ∑_m=1^∞ I_m(κ_2) cos(my), and
exp(κ_3 cos(x-y)) = I_0(κ_3) + 2 ∑_n=1^∞ I_n(κ_3) cos(n(x-y)).

Therefore, the integrand in (<ref>) can be written as

I_0(κ_1) I_0(κ_2) I_0(κ_3) + 2 I_0(κ_2) I_0(κ_3) ∑_l=1^∞ I_l(κ_1) cos(lx) + 2 I_0(κ_3) I_0(κ_1) ∑_m=1^∞ I_m(κ_2) cos(my) + 2 I_0(κ_1) I_0(κ_2) ∑_n=1^∞ I_n(κ_3) cos(n(x-y)) + 8 ∑_l=1^∞ ∑_m=1^∞ ∑_n=1^∞ I_l(κ_1) I_m(κ_2) I_n(κ_3) cos(lx) cos(my) cos(n(x-y)),

plus terms involving products of exactly two of the cosine sums, which integrate to zero by the same orthogonality arguments used below. Note that for any positive integer q,

∫_0^2π cos(qz) dz = ∫_0^2π sin(qz) dz = 0,

which implies, for a positive integer n,

∫_0^2π ∫_0^2π cos(n(x-y)) dx dy = ∫_0^2π cos(nx) dx ∫_0^2π cos(ny) dy + ∫_0^2π sin(nx) dx ∫_0^2π sin(ny) dy = 0.

(Equality of the double and the iterated integrals is ensured by the Fubini theorem, which is applicable as the integrands and the ranges of integration are all finite.) Thus the (double) integrals of the second, third and fourth terms in (<ref>) are all zero. Hence,

ℐ = (2π)^2 I_0(κ_1) I_0(κ_2) I_0(κ_3) + 8 ∫_0^2π ∫_0^2π ∑_l=1^∞ ∑_m=1^∞ ∑_n=1^∞ I_l(κ_1) I_m(κ_2) I_n(κ_3) cos(lx) cos(my) cos(n(x-y)) dx dy.

Now, for the second term in (<ref>), first note that

∫_0^2π ∫_0^2π ∑_l=1^∞ ∑_m=1^∞ ∑_n=1^∞ |I_l(κ_1) I_m(κ_2) I_n(κ_3) cos(lx) cos(my) cos(n(x-y))| dx dy
≤ ∫_0^2π ∫_0^2π ∑_l=1^∞ ∑_m=1^∞ ∑_n=1^∞ I_l(κ_1) I_m(κ_2) I_n(|κ_3|) dx dy
= ∑_l=1^∞ ∑_m=1^∞ ∑_n=1^∞ ∫_0^2π ∫_0^2π I_l(κ_1) I_m(κ_2) I_n(|κ_3|) dx dy (by Fubini-Tonelli)
= (2π)^2 (∑_l=1^∞ I_l(κ_1)) (∑_m=1^∞ I_m(κ_2)) (∑_n=1^∞ I_n(|κ_3|)) < ∞,

where the equality in the third line follows from the Fubini-Tonelli theorem for non-negative integrands. Therefore, the Fubini theorem for general integrands can be applied to ensure the interchangeability of the sums and the integrals in the second term of (<ref>). In particular, one can write

∫_0^2π ∫_0^2π ∑_l=1^∞ ∑_m=1^∞ ∑_n=1^∞ I_l(κ_1) I_m(κ_2) I_n(κ_3) cos(lx) cos(my) cos(n(x-y)) dx dy = ∑_l=1^∞ ∑_m=1^∞ ∑_n=1^∞ I_l(κ_1) I_m(κ_2) I_n(κ_3) ∫_0^2π ∫_0^2π cos(lx) cos(my) cos(n(x-y)) dx dy.

Now, for any positive integers l, m, n,

cos(lx) cos(my) cos(n(x-y)) = cos(lx) cos(nx) cos(my) cos(ny) + cos(lx) sin(nx) cos(my) sin(ny).

Observe that for any two positive integers p and q,

∫_0^2π cos(pz) cos(qz) dz = π 𝟙_{p=q} and ∫_0^2π cos(pz) sin(qz) dz = 0,

where 𝟙 denotes the indicator function. Therefore, for any positive integers l, m, n,

∫_0^2π ∫_0^2π cos(lx) cos(nx) cos(my) cos(ny) dx dy = π 𝟙_{l=n} · π 𝟙_{m=n} = π^2 𝟙_{l=m=n}

and

∫_0^2π ∫_0^2π cos(lx) sin(nx) cos(my) sin(ny) dx dy = 0,

which implies

∫_0^2π ∫_0^2π cos(lx) cos(my) cos(n(x-y)) dx dy = π^2 𝟙_{l=m=n}.

Therefore, combining (<ref>), (<ref>) and (<ref>), we get

ℐ = (2π)^2 I_0(κ_1) I_0(κ_2) I_0(κ_3) + 8π^2 ∑_l=1^∞ ∑_m=1^∞ ∑_n=1^∞ I_l(κ_1) I_m(κ_2) I_n(κ_3) 𝟙_{l=m=n} = (2π)^2 { I_0(κ_1) I_0(κ_2) I_0(κ_3) + 2 ∑_n=1^∞ I_n(κ_1) I_n(κ_2) I_n(κ_3) }.

This completes the proof.

§ CIRCULAR VARIANCE AND CORRELATION COEFFICIENTS

§.§ von Mises sine model

Let (ψ_1, ψ_2) ∼ vmsin(μ_1, μ_2, κ_1, κ_2, κ_3).
Then:

* The Fisher-Lee circular correlation coefficient (<ref>) between ψ_1 and ψ_2 is given by

ρ_FL(ψ_1, ψ_2) = (1/C̅_s ∂C̅_s/∂κ_3)(1/C̅_s ∂^2C̅_s/∂κ_1∂κ_2) / √((1/C̅_s ∂^2C̅_s/∂κ_1^2)(1 - 1/C̅_s ∂^2C̅_s/∂κ_1^2)(1/C̅_s ∂^2C̅_s/∂κ_2^2)(1 - 1/C̅_s ∂^2C̅_s/∂κ_2^2)).

* The Jammalamadaka-Sarma circular correlation coefficient (<ref>) between ψ_1 and ψ_2 is given by

ρ_JS(ψ_1, ψ_2) = (1/C̅_s ∂C̅_s/∂κ_3) / √((1 - 1/C̅_s ∂^2C̅_s/∂κ_1^2)(1 - 1/C̅_s ∂^2C̅_s/∂κ_2^2)).

* The circular variance of ψ_i, i = 1, 2, is given by

Var(ψ_i) = 1 - 1/C̅_s ∂C̅_s/∂κ_i.

Here C̅_s = 1/C_s, where C_s is the normalizing constant of the von Mises sine distribution as defined in (<ref>). Infinite series expressions for the partial derivatives of C̅_s are provided below, where \binom{2m}{m} denotes the binomial coefficient.

∂C̅_s/∂κ_1 = 4π^2 ∑_m=0^∞ \binom{2m}{m} (κ_3^2/(4κ_1κ_2))^m I_{m+1}(κ_1) I_m(κ_2)
∂C̅_s/∂κ_2 = 4π^2 ∑_m=0^∞ \binom{2m}{m} (κ_3^2/(4κ_1κ_2))^m I_m(κ_1) I_{m+1}(κ_2)
∂C̅_s/∂κ_3 = 8π^2 ∑_m=1^∞ m \binom{2m}{m} κ_3^{2m-1}/(4κ_1κ_2)^m I_m(κ_1) I_m(κ_2)
∂^2C̅_s/∂κ_1^2 = 4π^2 ∑_m=0^∞ \binom{2m}{m} (κ_3^2/(4κ_1κ_2))^m (I_{m+1}(κ_1)/κ_1 + I_{m+2}(κ_1)) I_m(κ_2)
∂^2C̅_s/∂κ_2^2 = 4π^2 ∑_m=0^∞ \binom{2m}{m} (κ_3^2/(4κ_1κ_2))^m I_m(κ_1) (I_{m+1}(κ_2)/κ_2 + I_{m+2}(κ_2))
∂^2C̅_s/∂κ_1∂κ_2 = 4π^2 ∑_m=0^∞ \binom{2m}{m} (κ_3^2/(4κ_1κ_2))^m I_{m+1}(κ_1) I_{m+1}(κ_2)

§.§ von Mises cosine model

Let (ψ_1, ψ_2) ∼ vmcos(μ_1, μ_2, κ_1, κ_2, κ_3). Then:

* The Fisher-Lee circular correlation coefficient (<ref>) between ψ_1 and ψ_2 is given by

ρ_FL(ψ_1, ψ_2) = (1/C̅_c {∂C̅_c/∂κ_3 - ∂^2C̅_c/∂κ_1∂κ_2})(1/C̅_c ∂^2C̅_c/∂κ_1∂κ_2) / √((1/C̅_c ∂^2C̅_c/∂κ_1^2)(1 - 1/C̅_c ∂^2C̅_c/∂κ_1^2)(1/C̅_c ∂^2C̅_c/∂κ_2^2)(1 - 1/C̅_c ∂^2C̅_c/∂κ_2^2)).

* The Jammalamadaka-Sarma circular correlation coefficient (<ref>) between ψ_1 and ψ_2 is given by

ρ_JS(ψ_1, ψ_2) = (1/C̅_c {∂C̅_c/∂κ_3 - ∂^2C̅_c/∂κ_1∂κ_2}) / √((1 - 1/C̅_c ∂^2C̅_c/∂κ_1^2)(1 - 1/C̅_c ∂^2C̅_c/∂κ_2^2)).

* The circular variance of ψ_i, i = 1, 2, is given by

Var(ψ_i) = 1 - 1/C̅_c ∂C̅_c/∂κ_i.

Here C̅_c = 1/C_c is the reciprocal of the von Mises cosine normalizing constant, as given in (<ref>). Infinite series expressions for the partial derivatives of C̅_c are given as follows.

∂C̅_c/∂κ_1 = 4π^2 { I_1(κ_1) I_0(κ_2) I_0(κ_3) + ∑_m=1^∞ I_m(κ_2) I_m(κ_3) [I_{m+1}(κ_1) + I_{m-1}(κ_1)] }
∂C̅_c/∂κ_2 = 4π^2 { I_0(κ_1) I_1(κ_2) I_0(κ_3) + ∑_m=1^∞ I_m(κ_1) I_m(κ_3) [I_{m+1}(κ_2) + I_{m-1}(κ_2)] }
∂C̅_c/∂κ_3 = 4π^2 { I_0(κ_1) I_0(κ_2) I_1(κ_3) + ∑_m=1^∞ I_m(κ_1) I_m(κ_2) [I_{m+1}(κ_3) + I_{m-1}(κ_3)] }
∂^2C̅_c/∂κ_1^2 = 2π^2 { I_0(κ_2) I_0(κ_3) [I_0(κ_1) + I_2(κ_1)] + ∑_m=1^∞ I_m(κ_2) I_m(κ_3) [I_{m-2}(κ_1) + 2I_m(κ_1) + I_{m+2}(κ_1)] }
∂^2C̅_c/∂κ_2^2 = 2π^2 { I_0(κ_1) I_0(κ_3) [I_0(κ_2) + I_2(κ_2)] + ∑_m=1^∞ I_m(κ_1) I_m(κ_3) [I_{m-2}(κ_2) + 2I_m(κ_2) + I_{m+2}(κ_2)] }
∂^2C̅_c/∂κ_1∂κ_2 = 2π^2 { 2 I_1(κ_1) I_1(κ_2) I_0(κ_3) + ∑_m=1^∞ I_m(κ_3) [I_{m+1}(κ_1) + I_{m-1}(κ_1)][I_{m+1}(κ_2) + I_{m-1}(κ_2)] }

§ GRADIENTS

For notational simplicity, we shall omit the subscripts i and j. Note that, in the sequel, θ stands for the parameter vector of one generic component, and not for the entire parameter vector of all components.

§.§ Wrapped normal models

* Univariate case. Here θ^⊤ = (κ, μ), and

∂f(ψ | θ)/∂κ = 1/(2κ^{1/2}√(2π)) ∑_{ω∈ℤ} exp[-κ/2 (ψ - μ - 2πω)^2] [1 - κ(ψ - μ - 2πω)^2]
∂f(ψ | θ)/∂μ = κ^{3/2}/√(2π) ∑_{ω∈ℤ} exp[-κ/2 (ψ - μ - 2πω)^2] (ψ - μ - 2πω).

* Bivariate case.
Here θ^⊤ = (κ_1, κ_2, κ_3, μ_1, μ_2), ψ^⊤ = (ψ_1, ψ_2), and

∂f(ψ | θ)/∂κ_1 = 1/(4π√(κ_{12.3})) ∑_{(ω_1, ω_2)∈ℤ^2} E_{ω_1, ω_2} [κ_2 - κ_{12.3}(ψ_1 - μ_1 - 2πω_1)^2]
∂f(ψ | θ)/∂κ_2 = 1/(4π√(κ_{12.3})) ∑_{(ω_1, ω_2)∈ℤ^2} E_{ω_1, ω_2} [κ_1 - κ_{12.3}(ψ_2 - μ_2 - 2πω_2)^2]
∂f(ψ | θ)/∂κ_3 = 1/(2π√(κ_{12.3})) ∑_{(ω_1, ω_2)∈ℤ^2} E_{ω_1, ω_2} [κ_3 - κ_{12.3}(ψ_1 - μ_1 - 2πω_1)(ψ_2 - μ_2 - 2πω_2)]
∂f(ψ | θ)/∂μ_1 = √(κ_{12.3})/(2π) ∑_{(ω_1, ω_2)∈ℤ^2} E_{ω_1, ω_2} [κ_1(ψ_1 - μ_1 - 2πω_1) + κ_3(ψ_2 - μ_2 - 2πω_2)]
∂f(ψ | θ)/∂μ_2 = √(κ_{12.3})/(2π) ∑_{(ω_1, ω_2)∈ℤ^2} E_{ω_1, ω_2} [κ_3(ψ_1 - μ_1 - 2πω_1) + κ_2(ψ_2 - μ_2 - 2πω_2)],

where

E_{ω_1, ω_2} = exp[-1/2 {κ_1(ψ_1 - μ_1 - 2πω_1)^2 + κ_2(ψ_2 - μ_2 - 2πω_2)^2 + 2κ_3(ψ_1 - μ_1 - 2πω_1)(ψ_2 - μ_2 - 2πω_2)}]

and κ_{12.3} = κ_1κ_2 - κ_3^2.

§.§ von Mises models

* Univariate case. Here θ^⊤ = (κ, μ), and

∂ log f(ψ | θ)/∂κ = cos(ψ - μ) - I_1(κ)/I_0(κ)
∂ log f(ψ | θ)/∂μ = κ sin(ψ - μ).

* Bivariate sine model. Here θ^⊤ = (κ_1, κ_2, κ_3, μ_1, μ_2), ψ^⊤ = (ψ_1, ψ_2), and

∂ log f(ψ | θ)/∂κ_1 = cos(ψ_1 - μ_1) - (∂C̅_s(κ_1, κ_2, κ_3)/∂κ_1)/C̅_s(κ_1, κ_2, κ_3)
∂ log f(ψ | θ)/∂κ_2 = cos(ψ_2 - μ_2) - (∂C̅_s(κ_1, κ_2, κ_3)/∂κ_2)/C̅_s(κ_1, κ_2, κ_3)
∂ log f(ψ | θ)/∂κ_3 = sin(ψ_1 - μ_1) sin(ψ_2 - μ_2) - (∂C̅_s(κ_1, κ_2, κ_3)/∂κ_3)/C̅_s(κ_1, κ_2, κ_3)
∂ log f(ψ | θ)/∂μ_1 = κ_1 sin(ψ_1 - μ_1) - κ_3 cos(ψ_1 - μ_1) sin(ψ_2 - μ_2)
∂ log f(ψ | θ)/∂μ_2 = κ_2 sin(ψ_2 - μ_2) - κ_3 sin(ψ_1 - μ_1) cos(ψ_2 - μ_2),

where C̅_s(κ_1, κ_2, κ_3) = 1/C_s(κ_1, κ_2, κ_3) and expressions for the partial derivatives are provided in Appendix <ref>.

* Bivariate cosine model. Here θ^⊤ = (κ_1, κ_2, κ_3, μ_1, μ_2), ψ^⊤ = (ψ_1, ψ_2), and

∂ log f(ψ | θ)/∂κ_1 = cos(ψ_1 - μ_1) - (∂C̅_c(κ_1, κ_2, κ_3)/∂κ_1)/C̅_c(κ_1, κ_2, κ_3)
∂ log f(ψ | θ)/∂κ_2 = cos(ψ_2 - μ_2) - (∂C̅_c(κ_1, κ_2, κ_3)/∂κ_2)/C̅_c(κ_1, κ_2, κ_3)
∂ log f(ψ | θ)/∂κ_3 = cos(ψ_1 - μ_1 - ψ_2 + μ_2) - (∂C̅_c(κ_1, κ_2, κ_3)/∂κ_3)/C̅_c(κ_1, κ_2, κ_3)
∂ log f(ψ | θ)/∂μ_1 = κ_1 sin(ψ_1 - μ_1) + κ_3 sin(ψ_1 - μ_1 - ψ_2 + μ_2)
∂ log f(ψ | θ)/∂μ_2 = κ_2 sin(ψ_2 - μ_2) - κ_3 sin(ψ_1 - μ_1 - ψ_2 + μ_2),

where C̅_c(κ_1, κ_2, κ_3) = 1/C_c(κ_1, κ_2, κ_3) and infinite series expressions for the partial derivatives are provided in Appendix <ref>.

§ TRACE PLOTS FOR 4 COMPONENT VMSIN

§ TRACE PLOTS FOR 4 COMPONENT VMSIN WITH LABEL SWITCHING FIXED
{ "authors": [ "Saptarshi Chakraborty", "Samuel W. K. Wong" ], "categories": [ "stat.CO" ], "primary_category": "stat.CO", "published": "20170825163746", "title": "BAMBI: An R package for Fitting Bivariate Angular Mixture Models" }
X-ray Flux of Jets of Swift J1753-0127 from TCAF

A. Jana, S. K. Chakrabarti, & D. Debnath

1 Indian Center for Space Physics, 43 Chalantika, Garia St. Rd., Kolkata, 700084, India.
2 S. N. Bose National Centre for Basic Sciences, Salt Lake, Kolkata, 700106, India.

[email protected], [email protected], [email protected]

The black hole candidate Swift J1753.5-0127 was discovered on 2005 June 30 by the Swift Burst Alert Telescope. We study the accretion flow properties during its very first outburst through a careful analysis of the evolution of its spectral and temporal properties using the two-component advective flow (TCAF) paradigm. RXTE proportional counter array spectra in the 2.5-25 keV band are fitted with the current version of the TCAF model fits file to estimate physical flow parameters, such as the two component (Keplerian disk and sub-Keplerian halo) accretion rates, the properties of the Compton cloud, the probable mass of the source, etc. The source is found to be in harder (hard and hard-intermediate) spectral states during the entire phase of the outburst, with very significant jet activity. Since in the TCAF solution the model normalization is constant for any particular source, any requirement of a significantly different normalization to obtain a better fit on certain days points to an X-ray contribution from components not taken into account in the current TCAF model fits file. By subtracting this contribution using the actual normalization, we derive the contribution of X-rays from the jets and outflows. We study its properties, such as its magnitude and spectrum. We find that on some days up to about 32% of the X-ray flux is emitted from the base of the jet itself.

§ INTRODUCTION

Stellar mass black hole candidates (BHCs) exhibiting transient behavior generally reside in binaries. They show occasional outbursts of variable duration, ranging from a few weeks to months. In between two outbursts, these transient BHCs stay in long periods of quiescence. During an outburst, the compact object (here, a BHC) accretes matter from its companion via Roche-lobe overflow and/or wind accretion, which forms a disk-like structure commonly known as an accretion disk. Electromagnetic radiation from radio to γ-rays is emitted from the disk, which makes it observable. It is believed that an outburst is triggered by a sudden rise in viscosity in the disk, which increases the accretion rates in the inner disk, causing the outburst (Chakrabarti, 2013). Rapid evolution of the spectral and temporal properties is observed during an outburst of a transient BHC, and these properties are found to be strongly correlated. In the hardness-intensity diagram (HID; Fender et al. 2004; Debnath et al. 2008) or the accretion rate ratio intensity diagram (ARRID; Jana et al. 2016), observations in different spectral states are found to be associated with different branches. Generally four spectral states, namely the hard (HS), hard-intermediate (HIMS), soft-intermediate (SIMS) and soft (SS) states, are observed during an outburst. Each state is defined by certain characteristic spectral and temporal features. The HS and HIMS are dominated by non-thermal, high energy radiation, with the observation of monotonically rising/falling low frequency quasi-periodic oscillations (QPOs), whereas the SIMS and SS are dominated by thermal radiation with sporadic QPOs (in the SIMS) or no QPOs (in the SS) (for more details, see Nandi et al. 2012; Debnath et al. 2010, 2013 and references therein). According to Debnath et al.
(2017), outbursts are of two types: type-I or classical, where all spectral states are observed, and type-II or harder, where the SS is absent. The latter type of outburst is termed a `failed' outburst. For instance, the 2005 outburst of Swift J1753.5-0127 is of type-II.

A black hole (BH) X-ray spectrum consists of both thermal and non-thermal components. The thermal component is basically a multicolor blackbody emitted from the standard Keplerian disk (Shakura & Sunyaev 1973). The non-thermal component is of power-law (PL) type, and it originates from the so-called `hot corona' or `Compton cloud' (Sunyaev & Titarchuk 1980). In the two-component advective flow (TCAF) solution (Chakrabarti & Titarchuk 1995), this corona is identified with the CENtrifugal pressure supported BOundary Layer (CENBOL), which naturally forms behind the centrifugal barrier due to the pile-up of the free-falling, weakly viscous (below critical viscosity), sub-Keplerian (low angular momentum) matter. Soft photons from the Keplerian disk gain energy through repeated inverse-Compton scattering off the hot electrons in the CENBOL and emerge as high energy photons with a power-law distribution in energy.

Recently, this TCAF solution has been included in HEASARC's spectral analysis software package XSPEC as an additive table model to fit BH spectra (Debnath et al. 2014, 2015a). A few transient BHCs have been studied by our group during their X-ray outbursts to build a clear picture of the evolution of the physical properties of these sources (Mondal et al. 2014, 2016; Debnath et al. 2015a,b, 2017; Chatterjee et al. 2016; Jana et al. 2016; Bhattacharjee et al. 2017; Molla et al. 2017).

Jets and outflows are important features of accretion disk dynamics. According to the TCAF paradigm, jets and outflows are produced primarily from the CENBOL region (Chakrabarti 1999a; Das & Chakrabarti 1999). If this region remains hot, as in the hard and hard-intermediate states, jets can be produced; otherwise they cannot. Generally, the inflow rate increases as the object goes from the hard state to the hard-intermediate state, and higher outflow rates are also observed in the intermediate states. It is also reported in the literature that blobby jets are possible in the intermediate states (Chakrabarti, 1999b; 2001; Nandi et al. 2001) due to the higher optical depth at the base of the jet, which episodically cools and separates the jets. In softer states, this region is quenched and the outflow rate is reduced (see also Garain et al. 2013). Collimation of the jets could be accomplished by toroidal flux tubes emerging from generally convective disks (Chakrabarti & D'Silva 1994; D'Silva & Chakrabarti 1994). There are several papers in the literature that invoke diverse mechanisms for the acceleration of this matter, a discussion of which is beyond the scope of the present paper. In the present paper, we introduce a new method to estimate the X-ray flux emitted from the base of the jets during the entire period of the 2005 outburst of Swift J1753.5-0127 and compare it with the radio observations.

Radio jets are common in active galactic nuclei (AGNs). They have also been observed in several Galactic BHCs, such as GRS 1758-258 (Rodriguez et al., 1992) and 1E 1740.7-2942 (Mirabel et al., 1992). Compact radio jets have been detected in BHCs such as GRS 1915+105 (Dhawan et al., 2000) and Cyg X-1 (Stirling et al., 2001).
The BHCs GRS 1915+105 (Mirabel & Rodriguez, 1994) and GRO J1655-40 (Tingay et al., 1995; Hjellming & Rupen, 1995) show superluminal jets. Though jets are most prominent in radio, they can be observed in other energy bands, such as X-rays and γ-rays. High energy γ-ray jets have been observed in Cyg X-1 (Laurent et al. 2011; Jourdain et al. 2012) and V404 Cyg (Loh et al. 2016). Large scale, decelerating, relativistic X-ray emitting jets have been observed in the BHC XTE J1550-564 (Corbel et al. 2002a, 2002b; Kaaret et al. 2006; Tomsick et al. 2003). In this case, radio blobs were predicted to move at relativistic speeds, with the blobs emitting in X-rays. H 1743-322 also showed a similar X-ray jet (Corbel et al. 2005). Kaaret et al. (2006) reported a large scale X-ray jet in the BHC 4U 1755-33. A relation between IR and X-ray jets has been found in the BHC GRS 1915+105 (Eikenberry et al. 1998; Lasso-Cabrera & Eikenberry, 2013). The X-ray jet of SS 433 is well known, even close to the compact object.

A correlation between the X-ray and radio band intensities in compact jets was first found in the BHC GX 339-4 (Hannikainen et al. 1998). The standard correlation is F_R∝ F_X^b with b ∼ 0.6-0.7 (Corbel et al. 2003; Gallo et al. 2003). This empirical relation is thought to be universal, although for some BHCs a steeper PL with index ∼ 1.4 is observed (Jonker et al. 2004; Coriat et al. 2011). Some BHCs have also shown a dual track in the correlation plot. Dual correlation indices were observed for the BHCs GRO J1655-40 (Corbel et al. 2004), H 1743-322 (Coriat et al. 2011), XTE J1752-223 (Ratti et al. 2012), and MAXI J1659-152 (Jonker et al. 2012). Until now, radio and X-ray correlation studies were done using quasi-simultaneous radio and X-ray flux data. Usually, the total X-ray flux (disk plus jet) is used for the correlation.

It is reported that jets emit across the entire electromagnetic spectrum, from radio to γ-rays. Thus the X-rays emitted from BHCs when jets are present are the net contribution coming from both the jet and the accretion disk. Until now, there was no way to separate the contributions of these two components. In the present paper, for the first time, we make an attempt to separate these two components from the total observed X-rays using the unique aspects of spectral studies with the TCAF solution. These aspects are that the radiation from the accretion flow is contributed by the Keplerian disk (dominating the soft X-ray band) and by the `hot Compton cloud' region, i.e., the CENBOL (dominating the hard X-ray band), and that the normalization can be treated as a constant across the spectral states.

Swift J1753.5-0127 was discovered on 2005 June 30 by the Swift/BAT instrument at RA=17^h 53^m 28^s.3, DEC=-01^∘ 27' 09”.3 (Palmer et al. 2005). The BHC Swift J1753.5-0127 has a short orbital period (2.85 hrs according to Neustroev et al. 2014; 3.2± 0.2 hrs according to Zurita et al. 2007). Neustroev et al. (2014) also estimated the mass of the source to be < 5 M_⊙ and the companion mass to be between 0.17-0.25 M_⊙, with a disk inclination angle >40^∘. On the contrary, Shaw et al. (2016) estimated the mass as >7.4 M_⊙. The distance of the source is estimated to be 4-8 kpc (Cadolle Bel et al. 2007). Radio jets were also observed during the 2005 outburst of the source (Fender et al. 2005; Soleri et al. 2010). Several authors have studied the radio/X-ray correlation for this source. It does not fall on the traditional correlation track; rather, it shows a steeper power-law index of ∼ 1-1.4 (Soleri et al. 2010; Rushton et al. 2016; Kolehmainen et al. 2016). In Debnath et al.
(2017; hereafter Paper-I), a detailed study of the spectral and temporal properties of this object during its 2005 outburst (from 2005 July 2 to 2005 October 19) was made. They used the TCAF model fits file to fit the spectra and obtained the accretion flow properties of the source during the outburst. Based on the variations of the TCAF model fitted (spectral) physical flow parameters and the observed QPO frequencies, the entire 2005 outburst was classified into two harder spectral states, namely HS and HIMS, and these states were observed in the sequence: HS (Ris.) → HIMS (Ris.) → HIMS (Dec.) → HS (Dec.). They also estimated the mass of the BHC to be in the range of 4.75-5.90 M_⊙, or 5.35^+0.55_-0.60 M_⊙.

According to the TCAF solution, the model normalization (N) is a function of intrinsic parameters, such as the distance, the mass, and the constant inclination angle of the binary system. So, N is a constant for a particular BHC across its spectral states, unless there is a precession in the disk that changes the projected emission surface area, or there are significant outflow or jet activities, which so far are not included in the current version (v0.3) of the TCAF model fits file. As reported in Paper-I, there are significant deviations from the constant N in a few observations during the outburst. This allows us to estimate the amount of jet flux by separating it from the total X-ray luminosity in our spectral study with the current version of the TCAF solution, keeping the model normalization frozen at the lowest observed value. The spectral property of the residual X-ray emission is also found.

The paper is organized in the following way. In §2, we briefly discuss the relation of the jet with the spectral states. In §3, we briefly present a method to estimate the jet flux from the total X-ray flux. In §4, we present results on our estimated jet flux and its evolution during the entire 2005 outburst of Swift J1753.5-0127. We compare our estimated jet flux with the radio fluxes observed during the outburst and study the correlation between the X-ray and radio jet flux components. Finally, in §5, a brief discussion and concluding remarks are presented.

§ DISK-JET CONNECTION WITH SPECTRAL STATES

In general, there are two types of jets: continuous outflows (compact jets) and discrete ejections (blobby jets: Chakrabarti & Nandi 2000; Chakrabarti et al. 2002). In TCAF, the CENBOL acts as the base of the jet (Chakrabarti 1999a). Ejection of the matter depends on the shock location (X_s), the compression ratio (R) and the inflow rate. A schematic diagram of the inflow and outflow is shown in the second panel of Fig. 1. The jet moves subsonically up to the sonic surface (∼ 2.5 X_s) and then moves away supersonically, thereby reducing its temperature during expansion and emitting in the UV, IR to radio (Chakrabarti 1999ab; Chakrabarti & Manickam 2000, hereafter CM00). The subsonic region upscatters seed photons from the Keplerian disk and downscatters CENBOL photons, contributing to softer X-rays, which we define as the jet X-ray flux (F_ouf) in this paper. This does not include the X-rays emitted from the interaction of the jet with the ambient medium. If the CENBOL is not hot, i.e., the object is not in the hard or hard-intermediate states, compact jets are not possible. However, as the shock moves in due to larger inflow rates and the consequent post-shock cooling, as in soft-intermediate states, the outflow rate increases and the subsonic region has a relatively high optical depth (Chakrabarti 1999b).
In some outburst sources, the Keplerian matter may rise much faster than the sub-Keplerian flow, as in the present case (Paper-I). Thus, the shock disappears even in the HIMS, and blobby jets may arise in the HIMS as well. In the presence of high Keplerian accretion rates, the CENBOL cools down due to the high supply of soft photons from the Keplerian disk. Hence it is quenched and we do not see any jet in this state. The results from these considerations are given in Fig. 1 (left panel), where the `generic' variation of the ratio of the outflow (Ṁ_out) and inflow (Ṁ_in) rates (R_ṁ=Ṁ_out/Ṁ_in) with the shock compression ratio (R) is shown. Clearly, the ratio (R_ṁ) is maximum when the compression ratio is intermediate, as in the hard-intermediate and soft-intermediate states. The observed jet in this spectral state is dense and compact initially, but becomes increasingly blobby as the transition to the soft-intermediate state is approached. This is due to the rapid cooling of the jet base: the outflowing matter gets separated, since even the subsonic flow region becomes suddenly supersonic (Chakrabarti, 1999b; Das & Chakrabarti, 1999; CM00).

§ FLUX AND SPECTRUM OF X-RAYS FROM THE BASE OF THE JET

A detailed study of the evolution of the spectral and timing properties of the BHC Swift J1753.5-0127 during its 2005 outburst using the TCAF solution is presented in Paper-I. Depending upon the variation of the TCAF model fitted physical flow parameters and the nature of the QPOs (if present), they classified the entire outburst (from 2005 July 2 to 2005 October 19) into two harder (HS and HIMS) spectral states. No signatures of the softer states (SIMS and SS) were observed. This could be due to a lack of viscosity, which prevented the Keplerian disk from achieving a significant rate close to the black hole.

While fitting spectra with the current version (v0.3) of the TCAF solution, the model normalization (N) is found to vary in a very narrow range (1.41-1.81), except for a few days when the radio flux was higher. This may be because of the non-inclusion of the jet mechanism in the current TCAF model fits file. This motivated us to introduce a new method to detect an X-ray jet and calculate its contribution to the total X-ray flux. We use 2.5-25 keV RXTE/PCA data to calculate the X-ray flux from the base of the outflow. In the presence of a jet, the total X-ray flux (F_X) is contributed by the radiation emitted from both the disk and the base of the jet. So, during the days with significant X-rays in the outflow, we require higher values of the model normalization to fit the spectra, since the present version of our TCAF model fits file is only concerned with the emission from the disk, and no contribution from the jets is added. If the jet is absent, a constant or nearly constant TCAF model normalization is capable of fitting the entire outburst (see Molla et al. 2016, 2017; Chatterjee et al. 2016). In Paper-I, the TCAF normalization was found to be constant at ∼ 1.6 during the entire 2005 outburst of Swift J1753.5-0127, except for 5 observations when it assumed higher values (≥ 2.0) in the initial period of the HIMS (dec.). However, in the HS (dec.), a minimum normalization of ∼ 1.41 was required to fit the spectral data of 2005 September 17 (MJD=53630.31). We assume that there was very little X-ray jet or outflowing matter on that day, and that the entire X-ray flux was contributed only by the accretion disk and CENBOL, i.e., by the inflowing matter alone.
This is also the theoretical outcome (Chakrabarti, 1999b). When we compared with the radio data, it was observed that the radio flux contributions were also minimum during these days of observation. To calculate the X-ray flux contribution F_inf coming only from the inflow, we refitted all the spectra by freezing the model normalization at 1.41. Then, we take the difference of the resulting spectrum from the total flux to calculate the jet X-ray flux F_ouf. In other words, the flux of the jet, relative to MJD=53630.31, can be written as

F_ouf = F_X - F_inf.    (1)

Here, the F_X and F_inf fluxes (in units of 10^-9 ergs cm^-2 s^-1) are calculated using the `flux 2.5 25.0' command after obtaining the best fitted spectrum in XSPEC. F_X is basically the TCAF model flux in the energy range of 2.5-25 keV with free normalization, as reported in Paper-I, whereas F_inf is the TCAF model flux in the same energy range with the constant normalization N=1.41.

§ RESULTS

§.§ Evolution of Jet X-rays

X-ray fluxes from the jets or outflow (F_ouf) are calculated using Eq. (1). The variation of the derived jet X-ray flux (F_ouf) during the entire phase of the 2005 outburst of Swift J1753.5-0127 is shown in Fig. 2(c). To make a comparison, we show the variation of the 4.8 GHz VLA radio flux as reported by Soleri et al. (2010) in Fig. 2(d). The first radio observation was ∼ 5 days after the first RXTE/PCA observation, and thus missed the initial part of the two harder spectral states. Note that the radio flux is maximum during the middle of the HIMS, namely in the late stage of the HIMS in the rising phase and the early stage of the HIMS in the declining phase, precisely as anticipated from the outflow rate behavior in Fig. 1. As the object started to return to the hard state, the outflow rate went down (Fig. 2c) and thus the radio flux also started to go down (Fig. 2d). During the initial 5 days (MJD=53553.05-53557.24), the X-ray flux was completely dominated by the inflowing component (F_inf) and reached its peak on 2005 July 7 (MJD=53557.24), which was the day of the HS to HIMS transition (Paper-I). The jet X-ray flux (F_ouf) started to increase from the transition day and reached its maximum on 2005 July 13 (MJD=53564.91). After that, the jet X-ray flux started to decrease; initially the flux reduced rapidly for the next ∼ 6 days and then decreased very slowly, remaining roughly constant until the end of our observations, except for a weak local peak observed around 2005 August 11 (MJD=53593.23).

The TCAF normalization (N) also shows a behavior similar to that of the jet X-ray flux F_ouf plotted in Fig. 2c. It was constant in the first few observations. Then it increased and attained its maximum value on the same day (MJD=53564.91) when F_ouf shows its peak value. After that, it decreased fast and became almost constant from ∼ MJD=53570 until the end of our observations. This additional requirement on N arises from the emission of X-rays from the base of the jet, particularly in the subsonic region, which is not included in the present version of the TCAF model fits file. The four plots in Fig. 3(a-d) show spectra from four different spectral states (dates marked with red square boxes in Fig. 2e), fitted with free (black solid curve) or frozen (red dashed curve) normalization of the TCAF model. The jet spectrum is also shown (blue dot-dashed curve). It clearly shows that the jet was becoming stronger as the outburst progressed and was strongest in the HIMS (dec.). The contribution from the jet was then rapidly reduced as the shock receded farther away in the HS (dec.).
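To make the bookkeeping behind Eq. (1) concrete, the following is a minimal sketch of the flux subtraction in Python; the flux values and array contents are illustrative placeholders for this discussion, not the fitted values from Paper-I or from this work.

import numpy as np

# Illustrative placeholder fluxes in units of 10^-9 ergs cm^-2 s^-1.
F_X   = np.array([3.2, 4.1, 4.8, 3.5, 1.2])  # 2.5-25 keV TCAF flux, free normalization
F_inf = np.array([3.0, 3.4, 3.3, 2.9, 1.1])  # same band, normalization frozen at N = 1.41

F_ouf = F_X - F_inf                  # jet (outflow) X-ray flux, Eq. (1)
jet_percent = 100.0 * F_ouf / F_X    # per-observation jet contribution

for fx, fo, p in zip(F_X, F_ouf, jet_percent):
    print(f"F_X = {fx:4.1f}, F_ouf = {fo:4.2f}, jet fraction = {p:4.1f}%")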
In the strong jet-dominated region (HIMS in the rising and the declining phases), F_ouf is observed to be of the order of 10^-9 ergs cm^-2 s^-1, whereas towards the end of the outburst, when the jet is weak, it decreases by a factor of about one hundred. We also calculated the contribution of the jet to the total X-ray emission. On average, the flux of the X-ray jet is ∼ 12.5% of the total X-ray flux (F_X). When the jet activity is strong, the contribution rises up to ∼ 32% (see Appendix Table I). The spectrum of the X-ray emission from the jet appears to be harder than the disk spectrum, which is expected when the base of the jet is optically thin. Note also that the spectral slope of the jet component is different, with a turnover at a lower energy than that of the disk, as is expected from an expanded system. Though we did not plot the lower-energy region, we expect it to consist of downscattered radiation emitted from the inflow.

§.§ Correlation between the Radio and X-ray Jets

The first radio observation of Swift J1753.5-0127 was made with MERLIN on 2005 July 3 at 1.7 GHz (Fender et al. 2005). WSRT and the VLA also observed the BHC (Soleri et al. 2010). The VLA observed the BHC at 1.4 GHz, 4.8 GHz and 8.4 GHz. The first VLA observation was made on 2005 July 8 (MJD=53558), with a radio flux of F_R=2.79 mJy at 4.8 GHz. After that, F_R slightly decreased on MJD=53561, before attaining its peak on 2005 July 15 (MJD=53566). The X-ray jet attained its peak roughly two days prior to the radio, i.e., on 2005 July 13 (MJD=53564.91). There is a ∼ 9 day gap between the 2nd and 3rd radio observations. So, it is hard to find the exact delay between the X-ray jet and radio peak fluxes, although there is a gap of ∼ 2 days. Similar to F_ouf, F_R also decreased after its peak. F_R decreased rapidly until the HIMS (Dec.) to HS (Dec.) transition day (MJD=53589), and then decreased slowly, becoming almost constant from ∼ MJD=53590.

It is known from the literature that there exists a correlation between the radio and X-ray wave bands from jets. In Fig. 4(a-d), we plot F_R versus F_X. We use the results of the available quasi-simultaneous observations of the 4.8 GHz VLA and the 2.5-25 keV RXTE/PCA. In an effort to find a relation, we fit the data with F_R ∼ F_X^b, where b is a constant. In Fig. 4a, we show the variation of the jet X-ray flux (F_ouf) with the radio flux (F_R) from the quasi-simultaneous observations. We obtained b ∼ 0.59 ± 0.11. The relation with the X-ray flux from the inflow (F_inf), shown in Fig. 4b, required an index b∼ 1.28± 0.11. The relation between the soft X-ray (3-9 keV) flux and the radio flux (Fig. 4c), which is the standard practice, yields b∼1.05 ± 0.14. When we use F_R and the total F_X in the 2.5-25 keV range, we find b∼ 1.13 ± 0.12 (Fig. 4d). From these plots, we conclude that the entire X-ray flux (the sum of those from the inflow and outflow) is well correlated only at lower fluxes, be it in the 3-9 keV range (Fig. 4c) or in the 2.5-25 keV range (Fig. 4d). However, if we consider the outflow X-ray flux (F_ouf) instead of F_X, then the correlation of F_ouf vs. F_R (Fig. 4a) is found to be weak. In contrast, a good correlation is obtained between F_R and the X-ray flux from the inflow (F_inf) at all levels of flux (Fig. 4b). It is possible that the nature of the jet deviates from compactness as the intermediate state is approached. This behavior is compatible with the observed fact that compact jets are generally well correlated with the radio flux, while blobby jets are not. Swift J1753.5-0127 is less luminous in radio as compared to other BHCs (Soleri et al. 2010).
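The power-law fits quoted above amount to linear least-squares fits in log-log space. The sketch below illustrates this procedure on synthetic flux pairs; the numbers stand in for the quasi-simultaneous 4.8 GHz VLA and 2.5-25 keV RXTE/PCA points and are not the measured fluxes.

import numpy as np

# Synthetic placeholder data for the fit F_R ~ F_X^b.
F_X = np.array([0.9, 1.4, 2.1, 3.0, 4.2])       # X-ray flux (arbitrary units)
F_R = np.array([0.40, 0.65, 1.00, 1.50, 2.30])  # radio flux (mJy)

# A linear fit in log-log space: log10(F_R) = b * log10(F_X) + log10(A).
b, log_A = np.polyfit(np.log10(F_X), np.log10(F_R), 1)
print(f"best-fit index b = {b:.2f}, normalization A = {10**log_A:.2f}")
# The fitted relation is then F_R = A * F_X**b.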
Note that even during the strong jet observations, the total X-ray flux is not entirely contributed by the jets. A large contribution always comes from the accretion disk. This may be the reason why our result does not follow the standard index b (∼0.6-0.7). Rushton et al. (2016) also found a similar result. They found the correlation index to be ∼0.99±0.12 in the soft (0.6-10 keV) and ∼0.96±0.06 in the hard (15-150 keV) X-ray bands, using the data of the Swift/XRT and Swift/BAT instruments, respectively.

§ DISCUSSIONS AND CONCLUDING REMARKS

In this paper, we use a novel approach to obtain the spectral evolution of the X-rays from the outflow component of Swift J1753.5-0127 during its 2005 outburst, by exploiting the fact that the normalization of a TCAF fit having X-ray contributions from the inflow alone remains constant across the states. We use the 2.5-25 keV RXTE/PCU2 data of the BHC Swift J1753.5-0127 during its 2005 outburst. Much higher normalization values were required to fit the spectra on a few days belonging to the HIMS (dec.). Assuming that the minimum TCAF model normalization of 1.41, obtained on 2005 September 17 (MJD=53630.31), corresponds to the 2.5-25 keV flux contributed by the accretion flow only, we estimated the outflow contribution in the rest of the observations. This was done by separating the accretion disk spectrum and flux (F_inf) from the total spectrum and flux by refitting all spectra, keeping the normalization frozen at 1.41. The X-ray flux contribution from the outflow (F_ouf) was obtained using Eq. 1. The time dependence of the X-ray flux and spectrum from the outflow was thus obtained, and the flux variation appears to be similar to the observed radio flux data (see Fig. 2d).

The variations of F_inf and F_ouf showed that although the disk flux initially increased rapidly and attained its maximum on 2005 July 7 (MJD=53557.24), the jet flux stayed roughly constant. Starting from the time when F_inf was maximum, i.e., when the spectral state changed from hard to hard-intermediate, the jet flux also started to increase, attaining its maximum on 2005 July 13 (MJD=53564.91). In the declining phase, the jet flux decreased, became roughly constant in the later phase of the outburst, and finally became negligible. If we interpret the radio intensity as directly related to the outflow rate, then it should follow the behavior of the outflow rate (ṁ R_ṁ, with the R_ṁ variation as in Fig. 1) that was predicted by Chakrabarti (1999ab) in the presence of shocks. Here, ṁ is the sum of the disk and halo component rates, which increased from the HS to the HIMS (Mondal et al. 2014, 2016; Debnath et al. 2015a,b; Jana et al. 2016; Molla et al. 2017).

In deriving the properties of the X-rays from the jets, we assumed that the significant variation of the TCAF model normalization (N) is entirely due to the variation of the jet contribution in X-rays. Since the outflow rate is supposed to increase in the HIMS, it is likely that the X-ray contribution would also go up. We needed N=2.61 (maximum) on MJD=53564.91 for the fitting, when F_ouf is observed to be maximum. The correlation between these two is good as long as the compactness of the jet is maintained. Higher outflow rates may have caused blobbiness (Chakrabarti, 1999b, 2000), and the variation of the outflow contribution with radio was no longer well correlated at higher flux. During the radio jet-dominated region, i.e., the HIMS (dec.), the X-ray jet had a flux of around 10^-9 ergs cm^-2 s^-1, whereas during the declining phase, the flux dropped to ∼ 10^-11 ergs cm^-2 s^-1, which is about 100 times lower. There are a few examples of X-ray flux measurements of inner jets.
For example, Nandi et al. (2005) showed that the X-ray flux from the jets of the BHC SS 433 is around 10^-10 ergs cm^-2 s^-1 in the 3-25 keV energy band. For 4U 1755-33, the X-ray flux from the jet is observed to be around 10^-16 ergs cm^-2 s^-1 in the quiescent state (Angelini & White, 2003). In the later part of the 2005 outburst of the BHC Swift J1753.5-0127, the radio flux (F_R) was found to be roughly constant at its lower value (∼ 0.4 mJy). Toward the end of our observations, the jets may be moderately stronger in radio but weaker in the X-ray band. Overall, the jet X-ray contribution is found to be ∼ 12.5% of the total X-ray flux. When the jet is strong, i.e., in the HIMS, the outflow contribution is about 32% of the inflow contribution, surprisingly similar to the ratio of the flow rates predicted in the HIMS (Chakrabarti, 1999a). Our result is consistent with what is observed in other similar compact sources.

In the TCAF solution, the jets are considered to emerge out of the CENBOL (Chakrabarti 1999ab), which is the `hot' puffed-up region acting as the Compton cloud. The CENBOL acts as the base of the jet. While the CENBOL is the post-shock compressed matter flowing inward, the matter in the jet is expanding outward and is relatively optically thin. This explains why the spectrum from the jet is flatter. As matter expands and interacts with the entangled magnetic fields, it emits radio waves, generally far away from the black hole.

Both the X-ray and the radio emission from the outflow depend on the outflow rate. However, the X-ray component is strong only if the outflow rate is high, as happens when the object goes to the HIMS. Since the shock is weaker there, the outflow must be radiation driven, rather than thermal pressure driven. The jets could be blobby when the optical depth is high, and then the correlation between the two fluxes breaks down. On the other hand, the X-ray emission from the inflow causes F_inf to rise from the HS to the HIMS as well. The outflow rate is controlled by the shock strength, i.e., by the compression ratio R (Fig. 1). Hence, it is expected that a correlation between F_inf and F_R should exist. Since F_ouf ≪ F_inf, this translates into a correlation between the total F_X and F_R. An empirical relation (F_R∝ F_X^b with b ∼ 0.6-0.7) was found by Hannikainen et al. (1998), Corbel et al. (2003) and Gallo et al. (2003), although some `outliers' were found to have a steeper power-law index (b ∼ 1.4) (Jonker et al. 2004; Coriat et al. 2011). Using the quasi-simultaneous VLA observations at 4.8 GHz and the 2.5-25 keV RXTE/PCA TCAF model fitted total X-ray flux, we find b∼ 1.13 ± 0.12 for F_R and F_X, i.e., F_R∝ F_X^1.13 ± 0.12. Instead of the 2.5-25 keV total X-ray flux (F_X), using the 3-9 keV soft X-ray flux we find a less steep exponent of b∼ 1.05± 0.14. Our result is consistent with those of several other authors, who also found a steeper exponent for this particular BHC, with b∼ 1.0-1.4 (Soleri et al. 2010; Rushton et al. 2016; Kolehmainen et al. 2016). This BHC is less luminous in radio, which may be the reason behind the steeper index (Soleri et al. 2010). When F_inf and F_R are compared, the index is ∼ 1.28 ± 0.11. When F_ouf and F_R are compared, b ∼ 0.59 ± 0.11. The observed points in the strongly jet-dominated region are not well correlated in the latter case (F_ouf vs. F_R, see Fig. 4a). This may be due to the possible blobby nature of the jets in the high flux HIMS (dec.)
region of the outburst. In the future, we would like to estimate the X-ray jet fluxes for a few other transient BHCs, such as MAXI J1836-194 and XTE J1118+480, where deviations from the constancy of the TCAF model normalization have been observed (see Jana et al. 2016; Chatterjee et al. 2016), using the same method described in this paper, as well as for persistent sources such as GRS 1915+105, GX 339-4 and V404 Cyg.

§ ACKNOWLEDGMENTS

A.J. and D.D. acknowledge support from the ISRO sponsored RESPOND project fund (ISRO/RES/2/388/2014-15). D.D. also acknowledges support from the DST sponsored Fast-track Young Scientist project fund (SR/FTP/PS-188/2012).

Angelini, L., & White, N. E. 2003, ApJ, 586, L71
Bhattacharjee, A., Banerjee, I., Banerjee, A., et al. 2017, MNRAS, 466, 1372
Cadolle Bel, M., Ribó, M., Rodriguez, J., et al. 2007, ApJ, 659, 549
Chakrabarti, S. K., & D'Silva, S. 1994, ApJ, 424, 138
Chakrabarti, S. K., & Titarchuk, L. G. 1995, ApJ, 455, 623 (CT95)
Chakrabarti, S. K. 1999a, A&A, 351, 185
Chakrabarti, S. K. 1999b, Ind. J. Phys., 73B, 6, 931
Chakrabarti, S. K., & Manickam, S. G. 2000, ApJ, 531, L41
Chakrabarti, S. K. 2000, CQGra, 17, 2427
Chakrabarti, S. K., & Nandi, A. 2000, Ind. J. Phys., 75(B), 1 (arxiv:12526)
Chakrabarti, S. K. 2001, AIPC, 558, 831
Chakrabarti, S. K., Goldoni, P., Wiita, P. J., et al. 2002, ApJ, 576, L45
Chakrabarti, S. K. 2013, Astron. Soc. of India Conf. Ser., 8, 1
Chatterjee, D., Debnath, D., Chakrabarti, S. K., et al. 2016, ApJ, 827, 88
Corbel, S., Fender, R., & Tzioumis, A. 2002a, IAU Circ., 7795, 2
Corbel, S., Fender, R., & Tzioumis, A. 2002b, Science, 298, 196
Corbel, S., Nowak, M. A., Fender, R. P., Tzioumis, A. K., & Markoff, S. 2003, A&A, 400, 1007
Corbel, S., Fender, R. P., Tomsick, J. A., Tzioumis, A. K., & Tingay, S. 2004, ApJ, 617, 1272
Corbel, S., Kaaret, P., Fender, R. P., et al. 2005, ApJ, 632, 504
Coriat, M., Corbel, S., Prat, L., et al. 2011, MNRAS, 414, 677
Das, T. K., & Chakrabarti, S. K. 1999, CQGra, 16, 3879
Debnath, D., Chakrabarti, S. K., Nandi, A., et al. 2008, BASI, 36, 151
Debnath, D., Chakrabarti, S. K., & Nandi, A. 2010, A&A, 520, A98
Debnath, D., Chakrabarti, S. K., & Nandi, A. 2013, AdSpR, 52, 2143
Debnath, D., Chakrabarti, S. K., & Mondal, S. 2014, MNRAS, 440, L121
Debnath, D., Mondal, S., & Chakrabarti, S. K. 2015a, MNRAS, 447, 1984
Debnath, D., Molla, A. A., Chakrabarti, S. K., & Mondal, S. 2015b, ApJ, 803, 59
Debnath, D., Jana, A., Chakrabarti, S. K., et al. 2017, ApJ (submitted) (arXiv:1703.05479) (Paper-I)
Dhawan, V., Mirabel, I. F., & Rodríguez, L. F. 2000, ApJ, 543, 373
D'Silva, S., & Chakrabarti, S. K. 1994, ApJ, 424, 149
Eikenberry, S. S., Matthews, K., Morgan, H., et al. 1998, ApJ, 494, L61
Fender, R. P., Belloni, T. M., & Gallo, E. 2004, MNRAS, 355, 1105
Fender, R. P., Garrington, S., & Muxlow, T. 2005, ATel, 558, 1
Garain, S. K., Ghosh, H., & Chakrabarti, S. K. 2013, ASInC, 8, 11
Gallo, E., Fender, R. P., & Pooley, G. G. 2003, MNRAS, 344, 60
Hannikainen, D. C., Hunstead, R. W., Campbell-Wilson, D., & Sood, R. K. 1998, A&A, 337, 460
Hjellming, R. M., & Rupen, M. P. 1995, Nature, 375, 464
Jana, A., Debnath, D., Chakrabarti, S. K., et al. 2016, ApJ, 803, 107
Jonker, P. G., Gallo, E., Dhawan, V., et al. 2004, MNRAS, 351, 1359
Jonker, P. G., Miller-Jones, J. C. A., Homan, J., et al. 2012, MNRAS, 423, 3308
Jourdain, E., Roques, J. P., Chauvin, M., & Clark, D. J. 2012, ApJ, 761, 27
Kaaret, P., Corbel, S., Tomsick, J. A., et al. 2006, ApJ, 641, 410
Kolehmainen, M., Fender, R., Jonker, P. G., et al. 2016, AN, 337, 485
Lasso-Cabrera, N. M., & Eikenberry, S. S. 2013, ApJ, 775, 82
Laurent, P., Rodriguez, J., Wilms, J., et al. 2011, Sci, 332, L438
Loh, A., Corbel, S., Dubus, G., et al. 2016, MNRAS, 462, L111
Mirabel, I. F., Rodríguez, L. F., Cordier, B., Paul, J., & Lebrun, F. 1992, Nature, 358, 215
Mirabel, I. F., & Rodríguez, L. F. 1994, Nature, 371, 46
Molla, A. A., Debnath, D., Chakrabarti, S. K., et al. 2017, ApJ, 834, 88
Molla, A. A., Debnath, D., Chakrabarti, S. K., et al. 2016, MNRAS, 460, 3163
Mondal, S., Debnath, D., & Chakrabarti, S. K. 2014, ApJ, 786, 4
Mondal, S., Chakrabarti, S. K., & Debnath, D. 2014b, Ap&SS, 353, 223
Mondal, S., Chakrabarti, S. K., & Debnath, D. 2015, ApJ, 798, 57
Mondal, S., Chakrabarti, S. K., & Debnath, D. 2016, Ap&SS, 361, 309
Nandi, A., Chakrabarti, S. K., Vadawale, S. V., & Rao, A. R. 2001, A&A, 380, 245
Nandi, A., Chakrabarti, S. K., Belloni, T., et al. 2005, MNRAS, 359, 629
Nandi, A., Debnath, D., Mandal, S., & Chakrabarti, S. K. 2012, A&A, 542, A56
Neustroev, V. V., Veledina, A., Poutanen, J., et al. 2014, MNRAS, 445, 2424
Palmer, D. M., Barthelmey, S. D., Cummings, J. R., et al. 2005, ATel, 546, 1
Ratti, E. M., Jonker, P. G., Miller-Jones, J. C. A., et al. 2012, MNRAS, 423, 2656
Rodríguez, L. F., Mirabel, I. F., & Marti, J. 1992, ApJ, 401, L15
Rushton, A. P., Shaw, A. W., Fender, R. P., et al. 2016, MNRAS, 463, 628
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Shaw, A. W., Charles, P. A., Casares, J., & Hernández Santisteban, J. V. 2016, MNRAS, 463, 1314
Soleri, P., Fender, R. P., Tudose, V., et al. 2010, MNRAS, 406, 1471
Stirling, A. M., Spencer, R. E., de la Force, et al. 2001, MNRAS, 327, 1273
Sunyaev, R. A., & Titarchuk, L. G. 1980, A&A, 86, 121
Tingay, S. J., et al. 1995, Nature, 374, 141
Tomsick, J. A., Corbel, S., Fender, R., et al. 2003, ApJ, 582, 983
Zurita, C., Torres, M. A. P., Durant, M., et al. 2007, ATel, 1130, 1
http://arxiv.org/abs/1708.08054v2
{ "authors": [ "Arghajit Jana", "Sandip K. Chakrabarti", "Dipak Debnath" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170827054540", "title": "Properties of X-ray Flux of Jets During 2005 Outburst of Swift J1753.5-0127 Using TCAF Solution" }
http://arxiv.org/abs/1708.08019v1
{ "authors": [ "Anna I. Toth" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170826213014", "title": "One-and-a-half-channel Kondo model and its family of non-Fermi liquids" }
Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544, USA National Research Nuclear University MEPhI, Kashirskoe sh. 31, 115409, Moscow, Russia Moscow Institute of Physics and Technology,Institutskiy per. 9, Dolgoprudny, Moscow Region 141700, Russia National Institutes for Quantum and Radiological Sciences and Technology, 8-1-7 Umemidai, Kizugawa, Kyoto 619-0215, Japan National Institutes for Quantum and Radiological Sciences and Technology, 8-1-7 Umemidai, Kizugawa, Kyoto 619-0215, Japan Institute of Physics of the Czech Academy of Sciences v.v.i. (FZU), Na Slovance 1999/2, 18221, Prague, Czech Republic A. M. Prokhorov Institute of General Physics of the Russian Academy of Sciences, Vavilov Street 38, Moscow, 119991, Russia. In contrast to hydrodynamic vortices, vortices in plasmacontain an electric current circulating around the center of the vortex, which generates a magnetic field localized inside.Using computer simulations, we demonstrate that the magnetic field associated with the vortex gives rise to a mechanismof dissipation of the vortex pair in a collisionless plasma, leading to fast annihilation of the magnetic field with its energy transforminginto the energy of fast electrons, secondary vortices, and plasma waves. Two major contributors to the energy damping of double vortex system, namely, magnetic field annihilation and secondary vortex formation, are regulated by the size of the vortex with respect to the electron skin depth, which scales with the electron gamma-factor, γ_e, as R/d_e ∝γ_e^1/2. Magnetic field annihilation appears to be dominant in mildly relativistic vortices, while for the ultrarelativistic case, secondary vortex formation is the main channel for damping of the initial double vortex system. On annihilation of the relativistic electron vortex pair in collisionless plasmas S.V. Bulanov December 30, 2023 =================================================================================§ INTRODUCTION Formation and evolution of localized nonlinear structures such as vortices and solitons play a crucial role in the physics of continuous media <cit.>.For instance, drift wave dynamics in tokamak plasmas can be described within the framework of the Hasegawa-Mima (HM) equation <cit.>, which has a well-known point vortex solution. The vortices may affect energy and particle transport significantly <cit.>. The formation of finite-radius relativistic electron vortex structuresassociated with quasistatic magnetic field generation provides one of the pathways for the electromagnetic field energydepletion in laser plasmas <cit.>.The late stage of the vortex evolution resulting in strong plasma density modulations has been revealed in the experiments<cit.> using proton radiography. Electron vortex pairs are also observed in simulations of relativistic shocks, being responsible for electron energization in the upstream region <cit.>. Understanding the dynamics of vortex structures in plasmas is important for developing the theory of relativistic plasma turbulence <cit.>. Relativistic electron vortex dynamics may also be a significant factor in the late stages of relativistic Weibel-like instability, which can arise in superstrong laser-plasma interaction <cit.>, as well as in colliding astrophysical flows of electron-positron plasmas <cit.>. 
In contrast to hydrodynamical vortices, which are sustained by fluids composed of neutral particles, vortices in plasmas are sustained by the rotational motion of charged particles, leading to a nonzero circular electric current, which forms a magnetic field inside the vortex <cit.>. In the case of small radius vortices, which correspond to the point-vortex solution of the HM equation, the vortex internal energy is conserved during the interaction process. However, in the case of finite radius vortices, we expect the finite-radius and electromagnetic interaction effects to become prominent, leading to fast vortex energy dissipation with its transformation into the energy of fast particles. Below, using two dimensional (2D) Particle-In-Cell (PIC) simulations with the code REMP <cit.>, we demonstrate how pairs of vortices interact beyond the point vortex approximation. We reveal the effect of relativistic annihilation of the magnetic field of binary electron vortices, which leads to the damping of the vortex pair.

§ SIMULATION SETUP

The simulation parameters are as follows. For clarity, we describe the simulation setup in terms of an arbitrary spatial scale parameter, λ, and then immediately rescale the model to the physically relevant units. We set up a slab of electron plasma (assuming immobile ions) with a constant density gradient along the x axis, so that the electron plasma density equals n_e/n_max = 0.1 at x = 55 λ and n_e/n_max = 1 at x = 95 λ, with width 40 λ and zero electron temperature. We measure spatial parameters in λ, temporal parameters in 2 π / ω_0 = λ / c, densities in n_0=m_e ω_0^2 / 4 π e^2, and electromagnetic fields in E_0 = m_e ω_0 c / e, where m_e is the electron mass, e is the absolute value of the electron charge, and c is the speed of light in vacuum. For the sake of simplicity, we introduce circularly symmetric electron vortices. They are initiated by accumulating a localized magnetic field during a number of timesteps at the beginning of the simulation <cit.>. For the simulations presented, electron vortices are formed with various maximum magnetic fields: B_max = 0.5, 1, 2, 4, 6.5, 35 in plasma with n_max=0.16, 0.36, 0.64, 1, 4, 16, respectively. Hereafter, we will refer to the simulation parameters by the magnetic field amplitude B_max. The vortex centres are located around the points x=75 λ and y= -4 λ, 4 λ. We choose our parameters in such a way that the condition ω_pe^2 ≪ω_B^2 holds <cit.>, so the electrons can be considered magnetized. Here ω_pe^2=4 π n_e e^2/m_e is the plasma frequency and ω_B = e B / m_e c is the electron gyrofrequency. The computational grid is 150 λ× 120 λ with 32 nodes per λ; the boundary conditions are periodic. We have also qualitatively verified the results of our simulations with a larger domain resolution (64 and 128 nodes per λ). The initial number of particles per cell corresponding to the maximum electron density is equal to 100. The total number of particles is about 10^8. The integration timestep is 0.0155. The total time of the simulations is 500 time units.

For the sake of clarity, we further rescale our numerical model to the physically relevant units that emerge from a simple electron vortex model. It can be formulated as follows. Let us assume that the electron moves in a circular orbit around uniformly distributed, immobile and positively charged ions. Then, the electric field experienced by the electron is E=2 π e n R, where R is the radius of the electron vortex and n is the ion density.
Assuming the electron to have a speed v_e ≈ c, we obtain the magnetic field to be B=2 π e n R. The radial force balance for the electron can be written as v_e p_e /R = e E, which gives an expression connecting the electron vortex radius and the electron momentum, R = (p_e c / 2 π n e^2)^1/2≈ d_e √(2 γ_e). Thus, we fix λ = (4π^2 n_max/n_0 · m_e c /p_e)^1/2 R, normalizing all spatial quantities to R, temporal frequencies to the crossing frequency ω_cr = c/R, fields to E_0' = m_e ω_cr c/ e, and densities to n_0'=m_e ω_cr^2 / 4 π e^2.

§ PIC SIMULATION RESULTS AND THEORETICAL ESTIMATES

In our simulations, we expect to observe the following scenario: first, when the two vortices are far away from each other (> 5 R), they would be stationary unless we were to take into account the effects of a finite vortex radius. In the latter case, we can expect that the vortices will move perpendicularly to the density gradient (parallel to the y-axis), due to the conservation of Ertel's invariant I = Ω / n, where Ω is the vorticity and n is the electron density <cit.>. The velocity of such motion is estimated as Ω R^2 |∇ n/n|, which is ⪅ c /80 and has turned out to be fairly consistent with the simulation results presented below. Then, when the vortex interaction becomes significant (it scales as K_0(|Δ y/d_e|) with the vortex separation Δ y, where K_0 is the modified Bessel function of the second kind; see, e.g., <cit.>), we expect the binary vortex to start moving along the x axis and possibly follow one of the complicated trajectories discussed in Ref. <cit.>. The typical velocities of such motion are V_bin≈ 0.2 - 0.5 c. Eventually, the tightening of the vortex binary down to separations ∼ R will bring the finite-radius effects into play, which are beyond the scope of applicability of the point vortex theory described in Ref. <cit.>.

To reveal the finite vortex radius effects and the effects of the magnetic interaction, we perform the PIC simulations. Figure 1 illustrates the typical evolution of the B_z component of the magnetic field observed during the simulation (for B_max=2). When the binary vortex system is tight enough (i.e., the distance between the closest points of the vortices is ∼ d_e, where d_e=c/ω_pe is the electron skin depth; Fig. 1, t=330), the point vortex approximation breaks down. The electron currents of the two vortices, both directed along the x axis at the closest point of approach, attract each other and form a magnetic-dipole vortex structure (Fig. 1, t=414) <cit.>. The structure observed has an analogue in hydrodynamics, known as the Larichev-Reznik dipole vortex solution <cit.>. This type of structure is believed to be stable in the hydrodynamic case <cit.>. However, in our case, the magnetic structure moves along the +x direction, losing the majority of its magnetic energy by turning it into electromagnetic waves (Fig. 1, t=467; Fig. 3b), accelerated electrons, and von Karman-like streets of secondary vortices (Fig. 1, t=488, 501; Fig. 3b, Fig. 3c); the secondary vortex formation, though, does not decrease the total magnetic energy of the system significantly. The direction of the binary vortex motion may be deflected from straight propagation along the x axis: the binary components disintegrate unequally into the secondary vortices, and the resulting binary vortex with unequal components deflects in the direction of the larger vortex component, in agreement with <cit.>. The rapidly accelerated electrons are a sign of the relativistic magnetic field annihilation.
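The force-balance scaling derived above, R = d_e √(2 γ_e), can be illustrated with a short sketch; the γ_e values below are chosen for illustration and do not correspond to the exact parameters of our runs.

import numpy as np

# Sketch of the vortex radius scaling R = d_e * sqrt(2 * gamma_e), with
# gamma_e ~ p_e / (m_e c) in the relativistic limit.  Illustrative values only.
for gamma_e in [1.5, 5.0, 50.0, 500.0]:
    R_over_de = np.sqrt(2.0 * gamma_e)
    regime = "mildly relativistic" if gamma_e < 10.0 else "ultrarelativistic"
    print(f"gamma_e = {gamma_e:6.1f}  ->  R/d_e = {R_over_de:5.1f}  ({regime})")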
The annihilation of the magnetic field was observed in PIC simulations previously in a different geometry <cit.>, between the azimuthal magnetic fields formed by two parallel laser pulses propagating in a nonuniform underdense plasma, and it led to electron heating. Though the overall physics of Ampère's law is the same in both cases, as is the signature of rapid electron energization, in <cit.> the displacement current arose as a result of the magnetic fields expanding towards each other due to the negative density gradient along the propagation axis of the laser pulses. In our case, the two vortices are pushed towards each other by the finite-radius effect of the vortex drift motion. Still, in both cases the dynamics of the magnetic fields is guided by the conservation of Ertel's invariant. The process of secondary vortex formation may be caused by vortex boundary bending, observed in simulations previously <cit.>. Secondary vortices are not subject to the vortex film instability <cit.>, as the finite vortex radius effects dominate the motion of the vortices, which are separated by a few d_e. The role of the relativistic effects is demonstrated using auxiliary simulations with n_max = 0.36 and a large range of B_max from 0.1 to 2. It was demonstrated that the magnetic field damping time in the nonrelativistic case is at least three times longer, and the electric fields coming from the displacement current term in Ampère's law are negligible, see <cit.>.

A simple model of the magnetic field annihilation of electron vortices may be written as follows. The radius of a vortex is connected to the electron momentum by the relation R/d_e = (2 p_e / m_e c)^1/2. Thus, nonrelativistic vortices have radius R ≤ d_e and ultrarelativistic vortices have R ≫ d_e. Ampère's law is generally stated as ∇×𝐁 = (4π/c) 𝐉 + (1/c) ∂𝐄/∂ t. It may be rewritten as an order-of-magnitude estimate, using |∇×𝐁|≈ |∂ B / ∂ y| ∼ |B / d|, where d is the typical spatial gradient scale length, |𝐉|≈ e n_e c in the limit v_e ∼ c, and |∂𝐄 /∂ t| ∼ E / τ, where τ is the typical temporal scale. Finally, it yields d/d_e = B / (1 + E/ ω_pe τ) (here B and E are dimensionless). Thus, it is clear from this equation that reaching the d_e scale (d ≤ d_e) is necessary for the magnetic field annihilation through the displacement current term (see, e.g., <cit.>). Thus, the more relativistic the vortex is (in terms of the parameter p_e/m_e c ≈γ), the harder it is to squeeze the dipole vortex down to the d_e scale. That being said, large vortices (on the d_e scale) are harder to damp via the magnetic field annihilation.

Let us compare two types of simulations with the same parameters except for the signs of the magnetic fields in the vortices. In one case the vortices move towards each other and interact (Figure 2, blue line); in the other case they move away from each other and do not decay on the timescale of the simulations (Figure 2, dashed black line). Figure 2 shows the rate of magnetic energy dissipation in both simulations. Here, we can distinguish at least two mechanisms of vortex dissipation: slow (dashed lines, dissipation time larger than 10^3 ω_cr^-1) and fast (solid lines, typically less or much less than 10^3 ω_cr^-1). The first mechanism can probably be attributed to the formation of spiral density waves in the electron plasma, which are seen in the early stages of the simulations (e.g., see the spiral perturbations of the electron density in Fig. 3a and Fig. 4a).
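As a numerical illustration of the order-of-magnitude form of Ampère's law introduced above, the sketch below evaluates the gradient scale d/d_e = B/(1 + E/(ω_pe τ)) with and without the displacement current term; the choices E = 0, E ∼ B and ω_pe τ = 1 are assumptions made purely for illustration, not values extracted from the runs.

def gradient_scale(B, E, omega_pe_tau):
    """Gradient scale d (in units of d_e) that balances Ampere's law."""
    return B / (1.0 + E / omega_pe_tau)

for B in [0.5, 2.0, 35.0]:
    d_no_disp = gradient_scale(B, 0.0, 1.0)  # no displacement current (E = 0)
    d_disp = gradient_scale(B, B, 1.0)       # assume E ~ B during fast annihilation
    print(f"B = {B:5.1f}: d/d_e = {d_no_disp:5.2f} (E = 0), {d_disp:5.2f} (E ~ B)")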
In our simulations, this slow, spiral-wave mechanism dissipates no more than 20% of the magnetic energy during the simulation time, so it will not impact the characteristic lifetime of the electron vortex, or at least it will contribute on a longer timescale than the fast dissipation, which is discussed below. In turn, the fast vortex dissipation can destroy the vortex pair on a much shorter timescale. Synchrotron losses, in comparison to the case of electromagnetic solitons, are also negligible in the electron vortex case <cit.>.

As a result of the magnetic energy dissipation, we observe a bunch of electrons being accelerated approximately in the +x direction, adding up to ∼ 60 m_e c to the electron momentum in comparison to the maximum electron momentum of the stationary electron vortices in the case of B_max = 35. Figure 4 demonstrates the effect of the electron acceleration. The energy of the electrons is large enough for the bunch to escape the plasma region. According to Figure 2, we see that the more relativistic vortices, with larger γ-factors, are harder to annihilate, in agreement with our theoretical model. Secondary vortices, which are more prominent in the simulations with higher γ factors of the initial vortices, are also more stable against the magnetic field annihilation, which results in the saturation of the magnetic field energy in the system (see Figure 2, aqua and purple lines).

It is also important to note that the immobile ion approach is justified only if ω_pi/ω_pe≪ 1 and 2 π / ω_pi is greater than the total simulation time. Besides, the binary vortex motion should be fast enough that we can ignore the ion motion: V_bin/R ≫ω_pi, where R is the typical radius of the vortex. Otherwise, the binary system of vortices does not move according to the HM equation; instead, the vortices evolve independently <cit.> until the two vortex boundaries collide. The effects of ion inertia on the binary vortex system will be considered in a separate paper.

The plasma density gradient used in our simulation setup is implemented in order to achieve an adiabatic switching-on of the vortex interaction effects. Thus, we may observe the same effect of vortex damping in homogeneous plasmas when forming tight binary systems of vortices using our numerical scheme. However, in order to exclude the effect of the initial generation process, which would inevitably cause strong coupling within the vortex pair, and to demonstrate the stability of single electron vortices, we decided to form the vortices far away from each other, making sure that the vortex generation process does not impact their interaction and that the magnetic field energy is almost constant over the simulation time (for non-interacting vortices). The dashed black line in Figure 2 demonstrates the evolution of the magnetic energy in the single vortex drift case. In general, the lifetime of the electron vortex binaries in a homogeneous plasma appears to be longer than in the nonzero density gradient case.

It is also natural to discuss a system of binary vortices with the same polarization of the magnetic field. In the point vortex approximation, they will simply rotate around each other in the case of a homogeneous plasma <cit.>.
However, it turns out that finite radius vortices are subject to a merger process, which may also lead to minor electromagnetic energy dissipation (Figure 2, dashed brown line) via the spiral density wave formation by the resulting ellipsoidal vortex <cit.>, in general agreement with the results of hydrodynamical simulations of the 2D vortex merger process <cit.>.

§ CONCLUSIONS

In conclusion, we presented computer simulation results on the interaction of electron vortex binaries. These structures are often seen in 2D PIC simulations of various laser-plasma configurations and are crucial for understanding superstrong magnetic field evolution and turbulence in relativistic plasmas. If the binary vortex system is tight enough, the point vortex approximation breaks down, and the binary vortex is subject to fast annihilation. The vortex annihilation leads to the acceleration of electron bunches, which in turn leads to propagating electrostatic waves. In the case of a larger γ factor of the initial vortices (i.e., for simulations with B_max=4 and above), we also observe the formation of von Karman streets of secondary vortices, the motion of which is stabilized by the drift motion due to the finite-radius effects. Mildly relativistic electron vortex pairs damp mainly through the annihilation of the magnetic field, while ultrarelativistic electron vortex pairs decay via the secondary vortex formation. We believe that the results obtained will be useful for the development of a theory describing electromagnetic turbulence in relativistic plasmas <cit.>.

This work utilized the MIPT-60 cluster, hosted by the Moscow Institute of Physics and Technology (we thank Ilya Seleznev for running it smoothly for us). SVB acknowledges support at the ELI-BL by the project High Field Initiative (CZ.02.1.01/0.0/0.0/15_003/0000449) from the European Regional Development Fund. KVL is grateful to the ELI-Beamlines project for hospitality during the final stages of this work. KVL thanks Veniamin Blinov for fruitful discussions.

P. G. Saffman, Vortex Dynamics (Cambridge: Cambridge University Press, 1993).
S. Nettel, Wave Physics: Oscillations - Solitons - Chaos (Springer, 2009).
A. Hasegawa and K. Mima, Phys. Fluids 21, 87 (1978).
J. Nycander and M. B. Isichenko, Phys. Fluids 2, 2042 (1990); M. Kono and W. Horton, Phys. Fluids 3, 3255 (1991); D. D. Hobson, Phys. Fluids 3, 3027 (1991).
S. I. Krasheninnikov, Phys. Lett. A 380, 3905 (2016); Y. Zhang and S. I. Krasheninnikov, Phys. Plasmas 23, 124501 (2016).
S. V. Bulanov, M. Lontano, T. Zh. Esirkepov, F. Pegoraro, and A. M. Pukhov, Phys. Rev. Lett. 76, 3562 (1996).
L. Romagnani, A. Bigongiari, S. Kar, S. V. Bulanov, C. A. Cecchetti, M. Galimberti, T. Zh. Esirkepov, R. Jung, T. V. Liseykina, A. Macchi, J. Osterholz, F. Pegoraro, O. Willi, and M. Borghesi, Phys. Rev. Lett. 105, 175002 (2010).
N. Naseri, S. G. Bochkarev, P. Ruan, V. Yu. Bychenkov, V. Khudik, and G. Shvets, Phys. Plasmas 25, 012118 (2018).
S. V. Bulanov, T. Zh. Esirkepov, M. Lontano, and F. Pegoraro, Plasma Phys. Rep. 23, 660 (1997); B. N. Kuvshinov and T. J. Schep, Plasma Phys. Rep. 42, 523 (2016).
M. S. Wei, F. N. Beg, E. L. Clark, A. E. Dangor, R. G. Evans, A. Gopal, K. W. D. Ledingham, P. McKenna, P. A. Norreys, M. Tatarakis, M. Zepf, and K. Krushelnick, Phys. Rev. E 70, 056412 (2004).
Y. Kazimura, J. I. Sakai, T. Neubert, and S. V. Bulanov, Astrophysical Journal, 498, L183-L186 (1998).
K. V. Lezhnin, F. F. Kamenets, T. Zh. Esirkepov, S. V. Bulanov, Y. J. Gu, S. Weber, and G. Korn, Phys. Plasmas 23, 093116 (2016).
T. Zh. Esirkepov, Comput. Phys. Commun. 135, 144 (2001).
S. S. Bulanov, T. Zh. Esirkepov, F. F. Kamenets, and F. Pegoraro, Phys. Rev. E 73, 036408 (2006).
A. V. Gordeev, Plasma Phys. Rep. 36, 30 (2010).
H. Ertel, Meteorol. Zs. 59, 277 (1942).
N. M. Naumova, J. Koga, K. Nakajima, T. Tajima, T. Zh. Esirkepov, S. V. Bulanov, and F. Pegoraro, Phys. Plasmas 8, 4149 (2001); T. Nakamura and K. Mima, Phys. Rev. Lett. 100, 205006 (2008).
V. D. Larichev and G. M. Reznik, Doklady Akad. Sci. SSSR 231, 12 (1976).
J. C. McWilliams, G. R. Flierl, V. D. Larichev, and G. M. Reznik, Dynamics of Atmospheres and Oceans, 5, 219 (1981).
Y. J. Gu, O. Klimo, D. Kumar, Y. Liu, S. K. Singh, T. Zh. Esirkepov, S. V. Bulanov, S. Weber, and G. Korn, Phys. Rev. E 93, 013203 (2016).
Y. Y. Wang, F. Y. Li, M. Chen, S. M. Weng, Q. M. Lu, Q. L. Dong, Z. M. Sheng, and J. Zhang, Sci. China Phys. Mech. Astron. 60, 115211 (2017).
T. Esirkepov, S. V. Bulanov, K. Nishihara, and T. Tajima, Phys. Rev. Lett. 92, 255001 (2004).
K. V. Lezhnin, A. R. Kniazev, S. V. Soloviev, F. F. Kamenets, S. A. Weber, G. Korn, T. Zh. Esirkepov, and S. V. Bulanov, Proc. of SPIE 10241, 102410Y-1 (2017).
E. A. Overman II and N. J. Zabusky, Phys. Fluids 25, 1297 (1982).
http://arxiv.org/abs/1708.07803v2
{ "authors": [ "K. V. Lezhnin", "F. F. Kamenets", "T. Zh. Esirkepov", "S. V. Bulanov" ], "categories": [ "physics.plasm-ph" ], "primary_category": "physics.plasm-ph", "published": "20170825163714", "title": "On annihilation of the relativistic electron vortex pair in collisionless plasmas" }
There exist constant radial surfaces, 𝒮, that may not be globally embeddable in ℝ^3 for Kerr spacetimes with a>√(3)M/2. To compute the Brown and York (B-Y) quasi-local energy (QLE), one must isometrically embed 𝒮 into ℝ^3. On the other hand, the Wang and Yau (W-Y) QLE embeds 𝒮 into Minkowski space. In this paper, we examine the W-Y QLE for surfaces that may or may not be globally embeddable in ℝ^3. We show that their energy functional, E[τ], has a critical point at τ=0 for all constant radial surfaces in t=constant hypersurfaces using Boyer-Lindquist coordinates. For τ=0, the W-Y QLE reduces to the B-Y QLE. To examine the W-Y QLE in these cases, we write the functional explicitly in terms of τ under the assumption that τ is only a function of θ. We then use a Fourier expansion of τ(θ) to explore the values of E[τ(θ)] in the space of coefficients. From our analysis, we discovered an open region of complex values for E[τ(θ)]. We also study the physical properties of the smallest real value of E[τ(θ)], which lies on the boundary separating real and complex energies.

§ INTRODUCTION

It is not possible to define a local measure of the gravitational energy associated with the curvature of spacetime, due to the equivalence principle of general relativity. However, it is possible to define a quasi-local energy (QLE) density with respect to a field of observers t⃗ and a 2-surface 𝒮 bounding some 3-volume in a spacetime manifold ℳ. In 1993, Brown and York (B-Y) gave a natural method for devising such an energy using a Hamilton-Jacobi approach <cit.>. To understand their expression for QLE, we first introduce Fig. <ref>, which includes notations for all submanifolds of ℳ and their respective metrics. It also includes the notations for the normal and tangent vectors defined in ℳ. Looking at equation 4.5 of <cit.>, the B-Y QLE is defined as

E = -1/8π∫_𝒮_t[N k - N^μ v^ν(K_μν - K g_μν)]√(σ_t) dx^2 - E^0,

where the integral term is the physical space energy and E^0 is the reference energy, which emerges from the freedom to choose the zero point energy in any Hamilton-Jacobi formulation; here k is the mean curvature of 𝒮_t embedded in the spacelike hypersurface Σ_t, K_μν is the extrinsic curvature tensor of Σ_t embedded in ℳ, and K is its trace. The lapse and shift are given by N and N⃗, respectively. Hawking and Horowitz proposed a similar definition of QLE in 1996 <cit.>.

One choice for E^0 suggested by B-Y involves isometrically embedding 𝒮_t in some flat reference space and computing the corresponding reference energy. This gives

E^0 = -1/8π∫_𝒮_t[N k_0 - N^μ v_0^ν((K_0)_μν-K_0 η_μν)]√(σ_t) dx^2,

where N and N⃗ are the same as in Eq. <ref> and η is the metric of the flat space. Their reason for choosing the reference space to be flat is that one would expect the QLE to be zero for a flat spacetime. Given 𝒮_t defined in a maximal hypersurface of a stationary spacetime, B-Y suggested that one use the Eulerian observers defined by t⃗=u⃗ as the observers and ℝ^3 as the reference space. Using these suggestions, the B-Y QLE reduces to

E_BY = 1/8π∫_𝒮_t(k-k_0)√(σ_t) dx^2.

The surface isometric embedding theorem (proposed by Weyl and proved independently by Nirenberg <cit.> and Pogorelov <cit.>) states that a closed surface with a Riemannian metric of positive Gaussian curvature can be uniquely isometrically embedded into ℝ^3. In 1994, Martinez analyzed Eq.
<ref> for Kerr spacetimes using a small angular momentum approximation <cit.>. With this approximation, Martinez found that the B-Y QLE at the event horizon is given by

E = 2M_ir = √((M+√(M^2-a^2))^2+a^2),

where M_ir is the irreducible mass, a is the angular momentum per unit mass, and M is the mass of the black hole. In 1973, Larry Smarr showed that the event horizon of a Kerr black hole with a>√(3)M/2 has a region centered at the poles with negative Gaussian curvature <cit.>. Since the Gaussian curvature is not positive everywhere, the theorem of Nirenberg and Pogorelov is not applicable. Thus an isometric embedding into ℝ^3 may not exist at all, and an existing isometric embedding may not be unique. This implies that the B-Y QLE is not well defined at the event horizon for spacetimes with large angular momentum. See Appendix <ref> for a discussion on surface isometric embeddings. The existence of negative Gaussian curvature creates a demarcation between constant radial surfaces for which Eq. <ref> is well defined everywhere and those where it is only partially defined. This demarcation is illustrated in Fig. <ref>. One can explicitly write r^* for a constant radial surface by finding the root of its Gaussian curvature at the poles. We begin the derivation of r^* by first introducing the metric of the constant radial surface in Kerr. The line element of Kerr in Boyer-Lindquist coordinates is given by

dl^2_ℳ = g_tt dt^2 + 2 g_tϕ dt dϕ + g_rr dr^2 + g_θθ dθ^2 + g_ϕϕ dϕ^2,

where

g_tt = -(1 - 2M r/Ξ), g_tϕ = -(2 M r/Ξ) a sin^2θ, g_rr = Ξ/Δ, g_θθ = Ξ, and g_ϕϕ = (r^2 + a^2(1 + 2M r sin^2θ/Ξ)) sin^2θ

are the non-zero components of the Kerr metric. The definitions of Ξ and Δ are

Ξ := r^2 + a^2 cos^2θ and Δ := r^2 - 2 M r + a^2.

The 2-surface 𝒮_t for which quasi-local energy is computed is defined in a t=constant hypersurface Σ with constant radius R. The choice of t is inconsequential since the spacetime is stationary. For this reason, we drop the subscript t from subsequent notation. Inserting dt=dr=0 and r=R in Eq. <ref> gives the line element of 𝒮 as

dl_𝒮^2 = (R^2+a^2 cos^2θ) dθ^2 + (R^2+a^2+ 2 R a^2 M sin^2θ/Ξ) sin^2θ dϕ^2 ≡ σ_θθ dθ^2 + σ_ϕϕ dϕ^2,

where σ_θθ and σ_ϕϕ are the non-zero components of the induced metric σ on 𝒮. The Gaussian curvature of 𝒮 is given by

𝒦 = [σ_θθσ^2_ϕϕ,θ + σ_ϕϕ(σ_θθ,θσ_ϕϕ,θ - 2σ_θθσ_ϕϕ,θθ)]/(4σ^2_θθσ^2_ϕϕ).

Solving for the root of Eq. <ref> at θ=0 gives

r^*(a,M) = (-3^1/3 a^2 + Γ^2/3)/(3^2/3 Γ^1/3), where Γ = 27 M a^2 + √(3) a^2 √(243 M^2 + a^2).

This is the only non-zero real root of the Gaussian curvature at the poles. Recently, Yu and Liu studied QLE for r^*<R<√(3)a with unrestricted angular momentum <cit.>. Their analysis remains in the regime of strictly positive Gaussian curvature. In this paper, we study the Wang and Yau (W-Y) QLE for constant radial surfaces in Kerr with r_+<R<r^*. In the W-Y approach, one embeds 𝒮 into Minkowski space ℝ^3,1 instead of ℝ^3. Because 𝒮 is a co-dimension 2 surface with respect to ℝ^3,1, the isometric embedding equations are underdetermined, giving infinitely many embeddings. To solve this problem, W-Y introduced the scalar field τ on 𝒮, which determines a unique embedding into ℝ^3,1 given a choice of τ. Choosing τ also chooses a unique field of observers on 𝒮.
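As a cross-check on this closed form, the pole limit of the Gaussian curvature can be evaluated symbolically. The following sketch is our own illustration (it is not part of the original analysis): it builds 𝒦 from the induced metric above, takes the θ→0 limit, and compares the numerical root in r against r^*(a,M), assuming the dimensionally consistent reading of Γ given above. Any values printed come from this sketch alone.

```python
import sympy as sp

# Induced 2-metric of the constant-r surface in Boyer-Lindquist coordinates.
r, th = sp.symbols('r theta', positive=True)
a, M = 1, 1                      # geometric units, a = M = 1
Xi  = r**2 + a**2*sp.cos(th)**2
stt = Xi
spp = (r**2 + a**2 + 2*M*r*a**2*sp.sin(th)**2/Xi)*sp.sin(th)**2

# Gaussian curvature of S, as quoted in the text.
dspp = sp.diff(spp, th)
K = (stt*dspp**2 + spp*(sp.diff(stt, th)*dspp - 2*stt*sp.diff(spp, th, 2))) \
    / (4*stt**2*spp**2)

# K is 0/0 at the pole, so take the limit theta -> 0 (this may take a moment).
K_pole = sp.simplify(sp.limit(K, th, 0))
r_root = sp.nsolve(K_pole, r, 1.6)

# Closed-form r*(a, M), with Gamma = 27*M*a^2 + sqrt(3)*a^2*sqrt(243*M^2 + a^2).
Gamma  = 27*M*a**2 + sp.sqrt(3)*a**2*sp.sqrt(243*M**2 + a**2)
r_star = (-sp.cbrt(3)*a**2 + Gamma**sp.Rational(2, 3)) \
         / (3**sp.Rational(2, 3)*sp.cbrt(Gamma))
print(r_root, sp.N(r_star))      # the two values should agree
```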
Using the W-Y approach, Eq. <ref> is redefined as

E[τ] = 1/8π∫(-k̅√(1+|∇τ|^2) + ⟨∇τ | ∇v⃗̅⃗ | u⃗̅⃗⟩)√(σ) dx^2 - 1/8π∫k̂√(σ̂) dx^2,

where the first integral is the physical space energy and the second is the reference energy. Here k̅ is the mean curvature of 𝒮 embedded in ℳ with respect to the spacelike normal v⃗̅⃗, and k̂ is the mean curvature of the convex shadow 𝒮̂ embedded in ℝ^3. The 2-metric of the convex shadow is written as σ̂. All necessary information for this paper regarding the normal basis {u⃗̅⃗, v⃗̅⃗} and the convex shadow is contained in Appendix <ref>; they are also defined in <cit.>. The purpose of the appendix is to give the reader a self-contained explanation of the physical motivations behind the W-Y formalism. The W-Y QLE is defined as the minimum of Eq. <ref> with respect to τ, which is equivalent to minimizing with respect to all possible observer fields. We are unaware of any research that explores QLE near the event horizon for extreme Kerr spacetimes using a Hamilton-Jacobi approach. Given the generalization of the B-Y QLE by W-Y, we believe their definition is a good starting point to explore this area of research. It can be shown for a Kerr spacetime that a critical point of Eq. <ref> is found at τ=0 regardless of the value of R. Given τ=0, the W-Y QLE functional reduces to Eq. <ref>. To glean some insight into the behavior of E[τ] in this region, we explore the W-Y QLE using numerical techniques. We restrict τ to be a function of θ only, to simplify the W-Y QLE functional and make the calculation more tractable. Given this restriction on τ, the main results of this analysis are the following: (1) there exists a boundary separating admissible real energies from inadmissible complex energies, and the minimum real value, E_min, of E[τ(θ)] lies on this boundary; (2) τ=0, which is a critical point of the W-Y QLE functional for constant radial surfaces with R<r^*, is not admissible within their formalism; and (3) the physical behavior of E_min disagrees with the behavior one would expect from the analysis of Martinez. We structure the paper in the following way. In Sec. <ref> we write the W-Y QLE functional in terms of τ. In Sec. <ref> we show that τ=0 is a critical point of the W-Y QLE functional for Kerr regardless of the value of R. In Sec. <ref> we present our numerical analysis. Finally, in Sec. <ref> we have further discussions and conclusions.

§ EXPRESSING THE W-Y QLE IN TERMS OF τ
The purpose of this section is to write Eq. <ref> explicitly in terms of τ for constant radial surfaces. This will be used in Sec. <ref> for our numerical analysis.
To this end, we separate this section into two subsections. The first derives the physical energy in terms of τ, while the second derives the reference energy in terms of τ. Before we continue with our derivations, we must define the mean curvature vector H⃗. Let X⃗(η^a) represent the spacetime coordinates of 𝒮 embedded in ℳ, where η^a={θ,ϕ} are the Boyer-Lindquist coordinates of 𝒮. At each point p∈𝒮 there also exists a spacelike tangent plane 𝒯_s(p) that is spanned by an orthogonal basis made of spacelike tangent vectors ζ⃗_a=∂X⃗/∂η^a. Given an arbitrary normal basis {u⃗,v⃗} on 𝒮, the mean curvature vector can be written as

H⃗ = H_u⃗ u⃗ + H_v⃗ v⃗,

where H_u⃗ is the fractional rate of expansion of 𝒮 along the timelike normal u⃗ and is given by

H_u⃗ = σ^ab⟨ζ⃗_a | ∇u⃗ | ζ⃗_b⟩,

and H_v⃗ is the fractional rate of expansion of 𝒮 along the spacelike normal v⃗ and is given by

H_v⃗ = σ^ab⟨ζ⃗_a | ∇v⃗ | ζ⃗_b⟩.

The covariant derivative is taken with respect to the Kerr metric g for both H_u⃗ and H_v⃗. The mean curvature vector is the direction of maximal expansion of 𝒮 in ℳ and is independent of the normal basis in which it is computed.

§.§ The physical contribution to the W-Y QLE in terms of τ
In this subsection we follow the prescription given in <cit.> to compute the physical portion of QLE. This is done in three steps: * Compute the normal basis {u⃗'⃗,v⃗'⃗} of 𝒮 that satisfies

v⃗'⃗ = H⃗/|H⃗|.

* Transform {u⃗'⃗,v⃗'⃗} to {u⃗̅⃗,v⃗̅⃗} using

u⃗̅⃗ = u⃗'⃗coshα + v⃗'⃗sinhα, v⃗̅⃗ = u⃗'⃗sinhα + v⃗'⃗coshα,

where α is the hyperbolic angle that minimizes the physical energy in Eq. <ref> and is given by

sinhα = -Δτ/(|H⃗|√(1+|∇τ|^2)).

* Use {u⃗̅⃗,v⃗̅⃗} to express the physical energy in terms of τ. We will refer to {u⃗'⃗,v⃗'⃗} and {u⃗̅⃗,v⃗̅⃗} as the non-preferred and preferred normals, respectively. For step 1, we begin with the non-preferred normal basis

v⃗'⃗ = {0, 1/√(g_rr), 0, 0} and u⃗'⃗ = β{-g_ϕϕ/g_tϕ, 0, 0, 1}, where β = 1/√(g_ϕϕ(1-g_ϕϕg_tt/g^2_tϕ)).

Here, u⃗'⃗ is the timelike normal of Σ restricted to 𝒮. Since Σ is a maximal hypersurface of Kerr and 𝒮⊂Σ, writing the mean curvature vector in terms of {u⃗'⃗,v⃗'⃗} gives

H⃗ = H_u⃗'⃗ u⃗'⃗ + H_v⃗'⃗ v⃗'⃗ with H_u⃗'⃗ = 0,

which satisfies Eq. <ref> and completes step 1. For step 2 we use cosh^2α - sinh^2α = 1 to write

coshα = √(1 + (Δτ)^2/(|H⃗|^2(1+|∇τ|^2))).

Inserting sinhα and coshα into Eqs. <ref> and <ref> to transform from {u⃗'⃗,v⃗'⃗} to {u⃗̅⃗,v⃗̅⃗} completes step 2. To complete step 3, we begin by inserting v⃗̅⃗ into Eq. <ref> to compute k̅, which gives

k̅ = -√(g_rr)coshα(Γ^r_θθ/σ_θθ + Γ^r_ϕϕ/σ_ϕϕ).

Next we insert k̅ into the first term of the physical space energy, giving

k̅√(1+|∇τ|^2) = -√(g_rr(Γ^r_θθ/σ_θθ + Γ^r_ϕϕ/σ_ϕϕ)^2(1+|∇τ|^2) + (Δτ)^2),

where

|∇τ|^2 = τ^2_,θ/σ_θθ and Δτ = σ^ab∇_a∇_bτ = (τ_,θθ - Γ^θ_θθτ_,θ)/σ_θθ - Γ^θ_ϕϕτ_,θ/σ_ϕϕ.

For the second term of the physical space energy, we map ∇τ from 𝒮 to ℳ using

∇τ = σ^abτ_,aζ⃗_b.

Inserting u⃗̅⃗ and v⃗̅⃗ into the second term of the physical space energy gives

⟨∇τ | ∇v⃗̅⃗ | u⃗̅⃗⟩ = τ_,θ/σ_θθ((v̅_t,θ - Γ^t_θt v̅_t)u̅^t + (v̅_r,θ - Γ^r_θr v̅_r)u̅^r - Γ^t_θϕ v̅_t u̅^ϕ).
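The Christoffel symbols entering k̅ are simple because only g^rr is non-zero in the r-row of the inverse Kerr metric, so Γ^r_θθ = -(1/2)g^rr∂_r g_θθ and Γ^r_ϕϕ = -(1/2)g^rr∂_r g_ϕϕ. The sketch below is our own illustration (not the authors' code): it builds k̅ at τ=0 (where coshα=1) and verifies that the a→0 limit reproduces the familiar Schwarzschild mean curvature, with the sign fixed by the conventions above.

```python
import sympy as sp

r, th, a, M = sp.symbols('r theta a M', positive=True)
Xi, Delta = r**2 + a**2*sp.cos(th)**2, r**2 - 2*M*r + a**2
g_rr   = Xi/Delta
g_thth = Xi
g_phph = (r**2 + a**2 + 2*M*r*a**2*sp.sin(th)**2/Xi)*sp.sin(th)**2

# Gamma^r_AA = -(1/2) g^{rr} d_r g_AA, since only g^{rr} enters the r-row.
Gr_thth = -sp.diff(g_thth, r)/(2*g_rr)
Gr_phph = -sp.diff(g_phph, r)/(2*g_rr)

# kbar at tau = 0 (cosh(alpha) = 1), with sigma the induced 2-metric.
kbar0 = -sp.sqrt(g_rr)*(Gr_thth/g_thth + Gr_phph/g_phph)

# Schwarzschild check: the a -> 0 limit gives 2*sqrt(1 - 2*M/r)/r.
print(sp.simplify(sp.limit(kbar0, a, 0)))
```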
Combining Eqs. <ref> and <ref> and integrating over 𝒮 gives the physical contribution to the W-Y QLE in terms of derivatives of τ, completing step 3.

§.§ The reference contribution to the W-Y QLE in terms of τ
In section 3 of <cit.>, it was shown that

E^0 = -1/8π∫_𝒮[N k_0 - N^μ v_0^ν((K_0)_μν - K_0 η_μν)]√(σ) dx^2 = 1/8π∫k̂√(σ̂) dx^2,

where k̂ is the mean curvature of the convex shadow embedded in ℝ^3. Therefore, one only needs to isometrically embed 𝒮̂ in ℝ^3 and integrate the mean curvature to find the reference energy. Assuming τ is only a function of θ, the metric components of 𝒮̂ are given by

σ̂_θθ = σ_θθ + τ^2_,θ, σ̂_ϕϕ = σ_ϕϕ.

Let the Cartesian coordinates of 𝒮̂ be defined as

x(θ,ϕ) = ρ(θ)cosϕ, y(θ,ϕ) = ρ(θ)sinϕ, z(θ) = f(θ),

where ρ(θ) and f(θ) are smooth real-valued functions on the domain θ∈[0,π]. Equating the line element on 𝒮̂ with that of Euclidean space, we get

ρ(θ) = √(σ_ϕϕ), f_,θ(θ) = √(σ_θθ - σ^2_ϕϕ,θ/(4σ_ϕϕ) + τ^2_,θ).

Now we can write the mean curvature in terms of the derivatives of ρ and f with respect to θ. The principal curvatures in the θ and ϕ directions are

k̂_θθ = (f_,θρ_,θθ - ρ_,θf_,θθ)/(f_,θ^2 + ρ_,θ^2)^3/2 and k̂_ϕϕ = -f_,θ/(ρ√(f_,θ^2 + ρ_,θ^2)),

respectively. The mean curvature is the sum of the principal curvatures and is given by

k̂ = -(f_,θ^3 + ρρ_,θf_,θθ + f_,θ(ρ_,θ^2 - ρρ_,θθ))/((f_,θ^2 + ρ_,θ^2)^3/2 ρ).

We integrate Eq. <ref> over 𝒮̂ to get the contribution to QLE from the reference action. With Eqs. <ref>, <ref> and <ref>, the W-Y QLE functional is completely determined by τ_,θ, τ_,θθ and τ_,θθθ.

§ THE CRITICAL POINT OF THE W-Y QLE FUNCTIONAL FOR CONSTANT RADIAL SURFACES
In section 6 of <cit.>, W-Y derived the Euler-Lagrange equation of E[τ], which is given by

-(k̂σ̂^ab - σ̂^acσ̂^bdk̂_cd)∇_b∇_aτ/√(1+|∇τ|^2) + σ^ab∇_a(∇_bτ coshα|H⃗|/√(1+|∇τ|^2)) - Δα - σ^ab∇_a⟨ζ⃗_b | ∇v⃗̅⃗ | u⃗̅⃗⟩ = 0,

where we refer to the four terms as (1)-(4) in order of appearance. All covariant derivatives are taken with respect to the 2-metric on 𝒮, except for the covariant derivative on v⃗̅⃗, which is taken with respect to the spacetime metric g. To show that τ=0 is a solution to Eq. <ref>, we write each term explicitly in terms of τ. The first term of Eq. <ref> written explicitly in terms of τ is given by

(1) = -1/√(1+τ^2_,θ/σ_θθ)((τ_,θθ - Γ^θ_θθτ_,θ)(k̂ - σ̂^θθk̂_θθ)/(σ_θθ + τ^2_,θ) - Γ^θ_ϕϕτ_,θ(k̂ - σ̂^ϕϕk̂_ϕϕ)/σ_ϕϕ).

The second term is

(2) = ∂_θ(|H⃗|coshα/√(1+τ^2_,θ/σ_θθ))τ_,θ/σ_θθ + |H⃗|coshα Δτ/√(1+τ^2_,θ/σ_θθ),

where coshα and Δτ are given by Eqs. <ref> and <ref>, respectively. Term 3 is simply

(3) = (α_,θθ - Γ^θ_θθα_,θ)/σ_θθ - Γ^θ_ϕϕα_,θ/σ_ϕϕ.

Letting V_a = ⟨ζ⃗_a | ∇v⃗̅⃗ | u⃗̅⃗⟩, the last term in Eq. <ref> is given by

(4) = σ^ab∇_aV_b = (V_θ,θ - Γ^θ_θθV_θ)/σ_θθ + (V_ϕ,ϕ - Γ^θ_ϕϕV_θ)/σ_ϕϕ.

It is easy to see that the first three terms vanish for τ=0. Next we show that the fourth term also vanishes for τ=0. Writing out V_θ and V_ϕ, one gets

V_θ = ∂_θv̅^νu̅_ν + Γ_θα^νv̅^αu̅_ν and V_ϕ = Γ_ϕr^t v̅^r u̅_t.

From Eqs. <ref> and <ref>, it is clear that {u⃗̅⃗,v⃗̅⃗}={u⃗'⃗,v⃗'⃗} for τ=0. It is also clear that the first term of Eq. <ref> is equal to zero, since v⃗'⃗ only has a radial component and the contravariant components of u⃗'⃗ are only non-zero for time. Furthermore, the second term of Eq. <ref> reduces to Γ_θr^t v'^r u'_t, where Γ_θr^t=0. This gives V_θ=0. Inserting V_θ=0 into Eq. <ref> gives (4) = V_ϕ,ϕ/σ_ϕϕ. Since V_ϕ is independent of ϕ, term 4 vanishes. This shows explicitly that τ=0 is a critical point regardless of one's choice of R. Indeed, it was shown in <cit.> that for any axi-symmetric surface, the fourth term of Eq. <ref> always vanishes and τ=0 is always a solution.
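Before examining whether this critical point is a minimum, the two pieces of E[τ] can be assembled at τ=0 as a sanity check on the conventions above. The quadrature sketch below is our own illustration (with assumed grid sizes and tolerances): it re-derives the τ=0 expression for k̅ from the previous sketch, uses the surface-of-revolution formulas for k̂, and reproduces the classic B-Y value R(1-√(1-2M/R)) in the Schwarzschild limit; for Kerr it gives the B-Y energy of 𝒮, which requires R > r^* so that f_,θ remains real at τ=0.

```python
import numpy as np
import sympy as sp

rs, ths, a_, M_ = sp.symbols('r theta a M', positive=True)
Xi  = rs**2 + a_**2*sp.cos(ths)**2
grr = Xi/(rs**2 - 2*M_*rs + a_**2)
stt = Xi
spp = (rs**2 + a_**2 + 2*M_*rs*a_**2*sp.sin(ths)**2/Xi)*sp.sin(ths)**2
kbar0 = -sp.sqrt(grr)*(-sp.diff(stt, rs)/(2*grr*stt)
                       - sp.diff(spp, rs)/(2*grr*spp))

F = {k: sp.lambdify((rs, ths, a_, M_), v, 'numpy')
     for k, v in [('stt', stt), ('spp', spp),
                  ('dspp', sp.diff(spp, ths)), ('kbar', kbar0)]}

def E_tau0(R, a, M, n=400001):
    """W-Y energy at tau = 0 (equal to the B-Y energy); requires R > r*."""
    th = np.linspace(1e-6, np.pi - 1e-6, n)
    stt, spp = F['stt'](R, th, a, M), F['spp'](R, th, a, M)
    rho  = np.sqrt(spp)
    f_p  = np.sqrt(np.maximum(stt - F['dspp'](R, th, a, M)**2/(4*spp), 0.0))
    rho_p  = np.gradient(rho, th)
    rho_pp = np.gradient(rho_p, th)
    f_pp   = np.gradient(f_p, th)
    khat = -(f_p**3 + rho*rho_p*f_pp + f_p*(rho_p**2 - rho*rho_pp)) \
           / ((f_p**2 + rho_p**2)**1.5*rho)
    dA = np.sqrt(stt*spp)            # sqrt(det sigma); sigma_hat = sigma at tau = 0
    phys = -np.trapz(F['kbar'](R, th, a, M)*dA, th)/4.0   # (2*pi)/(8*pi) = 1/4
    ref  =  np.trapz(khat*dA, th)/4.0
    return phys - ref

print(E_tau0(3.0, 1e-6, 1.0), 3.0*(1 - np.sqrt(1 - 2/3.0)))  # Schwarzschild check
print(E_tau0(2.0, 1.0, 1.0))                                  # Kerr with R > r*
```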
However, τ=0 is not necessarily a local or global minimum; see <cit.> for a criterion for a local minimum of a critical point in terms of a mean curvature inequality.

§ NUMERICAL RESULTS
In this section, we apply the direct search algorithm developed by Torczon <cit.> to minimize E[τ(θ)], which is given by Eqs. <ref>, <ref> and <ref>, in the space of coefficients. Without loss of generality, we will use a=M=1 for our numerical analysis unless stated otherwise. The value of r^* is approximately 1.65 for this choice of a and M. To apply the direct search algorithm, we use a Fourier expansion to express τ_,θ as

τ_,θ(θ) = F_0(θ) + ∑^κ_n=1 a_n sin nθ,

where θ is the polar angle in Boyer-Lindquist coordinates, F_0(θ) is an initial guess of the optimal τ_,θ, and a_n are the Fourier coefficients. Symmetry about the equator excludes all but the odd values of the Fourier coefficients of sin(nθ). The expansion lacks cosine modes due to boundary conditions on the derivative of τ at the poles. We choose our initial guess to be

F_0(θ) = √(σ_ϕϕ/sin^2θ - σ_θθ).

This function gives an integrand of E[τ(θ)] that is well behaved at the poles. It also gives an initial guess reasonably close to a solution of the Euler-Lagrange equation for all radii. The image of the convex shadow and its mean curvature at R=3/2 are shown in Fig. <ref>. There is nothing notable about R=3/2; we simply use it as an illustrative example for surfaces with R < r^*. We will continue to use this radius for further examples. All statements made for R=3/2 apply equally to all radii below r^* unless specified otherwise. The complexity of the space of coefficients increases with the dimension. As one increases the number of coefficients used to minimize E[τ(θ)], the likelihood of getting caught in local minima increases. To mitigate this difficulty, we begin with just one Fourier coefficient set to zero. We then apply the direct search algorithm to find the smallest real value of E[τ(θ)] in the space of a_1. Once a_1 is obtained, we add a_3=0 and search in the space of a_1 and a_3. Here we allow both a_1 and a_3 to change until we find the minimum in two dimensions. We iteratively increase the number of coefficients until the change in E_min is less than 10^-2 for each additional coefficient added. The number of coefficients needed increases as one approaches r_+, due to increasing curvature gradients of 𝒮. Our direct search algorithm was coded using Mathematica, and all integrals were done using the NIntegrate function.

§.§ The boundary separating admissible and non-admissible values of the W-Y QLE functional
There are three criteria within the W-Y QLE formalism that determine whether a choice of τ is admissible. These criteria can be found in section 4 of <cit.> as well as section 5 of <cit.>. The purpose of the second and third criteria is to ensure that the value of E[τ(θ)] is positive. We will not focus on these, since we do not obtain negative energies for any of our results. Instead, we will focus on the first criterion, which is

𝒦 - (1+|∇τ|^2)^-1 det(∇_a∇_bτ) > 0,

where 𝒦 is the Gaussian curvature of 𝒮 and all covariant derivatives are taken with respect to σ. This requires that the Gaussian curvature of the convex shadow, given a choice of τ, is strictly positive everywhere. If this criterion is met, W-Y can guarantee the existence and uniqueness of E[τ(θ)]. Unfortunately, our analysis indicates that this criterion cannot be met at τ=0.
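For readers unfamiliar with direct search methods, the following minimal compass-search sketch conveys the strategy used above; it is our own simplified stand-in (Torczon's multidirectional search is more elaborate, and the actual implementation here was in Mathematica). Rejecting non-finite trial values mirrors the restriction to real values of E[τ(θ)].

```python
import numpy as np

def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=5000):
    """Minimal compass/pattern search: poll +/- step along each coordinate
    direction, move to any improving point, and shrink the step otherwise.
    Trial points where f returns nan/inf (e.g. a complex energy) are rejected."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if np.isfinite(ft) and ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x, fx

# Usage sketch: x holds the odd Fourier coefficients (a_1, a_3, ...) and f
# evaluates E[tau(theta)] by quadrature, returning nan when E is complex.
```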
While minimizing in the space of coefficients, we discovered a boundary separating τ's with real values of E[τ(θ)] from those with complex values. This can be seen in Fig. <ref>, where we use two Fourier coefficients, a_1 and a_3, to visualize the QLE landscape. The white gap in the middle of the plot represents complex values of E[τ(θ)], whose existence can be understood by examining Eq. <ref>. Here, complex energies arise for choices of τ_,θ that satisfy

τ^2_,θ < σ^2_ϕϕ,θ/(4σ_ϕϕ) - σ_θθ.

We will show that these choices of τ are inadmissible using our numerical results. To demonstrate that choices of τ with complex energies are not admissible, we compare the Gaussian curvature of 𝒮 and 𝒮̂ for the initial guess and τ_min in Fig. <ref>. From Fig. <ref>a, we see that the Gaussian curvature of the convex shadow for the initial guess is strictly positive and significantly different than the curvature of 𝒮. On the other hand, Fig. <ref>b shows that the Gaussian curvature of the convex shadow at τ_min is similar to the curvature of 𝒮 within the interval of positive Gaussian curvature. Outside of this interval, the Gaussian curvature of 𝒮 becomes negative while the shadow's curvature is flat. This indicates that the optimization algorithm tends toward a τ that embeds 𝒮 into ℝ^3 as much as possible. In fact, if we allow the algorithm to cross the boundary of admissible solutions by taking the real part of the QLE functional, it converges to τ=0. This implies that choices of τ within the boundary do not have shadows with strictly positive Gaussian curvature. We believe this is due to the unnecessary restriction that τ is a function of θ only. In general, we should allow τ to depend on both θ and ϕ and solve both the isometric embedding equation and the Euler-Lagrange equation.

§.§ The physical relevance of E_min
In this section we analyze the physical behavior of E_min and compare it to what one would expect based on the results of Martinez. We begin by plotting E_min as a function of R in Fig. <ref>. The vertical dividing line is placed at the critical radius r^*≈1.65. Above r^*, E_min is equivalent to the B-Y QLE. Below the plot in Fig. <ref> are the convex shadows at τ_min associated with radii R={1.05, 1.1, 1.2, 1.4, 1.6}. The mean curvature of these shadows can be seen in Fig. <ref>. These plots show a Gibbs phenomenon that occurs when the Gaussian curvature of 𝒮 becomes negative, which, as we showed in Fig. <ref>, is also when the Gaussian curvature of 𝒮̂ at τ_min becomes zero. To physically interpret the results in Fig. <ref>, we analyze E_min at the outer event horizon for black holes with increasing angular momentum. This will give us a reference for how E_min should behave at the event horizon once the angular momentum exceeds √(3)M/2. We will also interpret the results by looking at the field of observers associated with E_min and comparing them to the Eulerian observers chosen by B-Y. In Fig. <ref>a, we plot E_min, which is equivalent to the B-Y QLE for a<√(3)M/2, at various values of a between (r_+,0) and (r_+,√(3)M/2). We see that these two plots agree for a ≤ 0.4. As the angular momentum grows, the low angular momentum approximation starts to deviate from the B-Y QLE. The most important feature of this plot is the fact that the B-Y QLE decreases as angular momentum increases.
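The location of this boundary can be read off directly from the inequality above. The short sketch below is our own illustration for R=3/2 and a=M=1: it evaluates the right-hand side of the inequality on a θ grid, prints the polar opening angle below which τ_,θ=0 yields a complex integrand, and confirms that the initial guess F_0 stays on the real side everywhere.

```python
import numpy as np

a = M = 1.0
R = 1.5                                        # R < r*

th  = np.linspace(1e-4, np.pi/2, 200000)       # pole to equator
Xi  = R**2 + a**2*np.cos(th)**2
stt = Xi
spp = (R**2 + a**2 + 2*M*R*a**2*np.sin(th)**2/Xi)*np.sin(th)**2

# tau_,theta^2 must exceed this bound pointwise, or E[tau(theta)] turns complex.
bound = np.gradient(spp, th)**2/(4*spp) - stt

theta_c = th[np.argmax(bound < 0)]             # first angle where the bound is negative
print(f'tau_,theta = 0 gives a complex integrand for theta < {theta_c:.3f} rad')

F0_sq = spp/np.sin(th)**2 - stt                # square of the initial guess
print('min of F0^2 - bound:', (F0_sq - bound).min())   # should be >= 0
```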
Looking at Fig. <ref>b, we plot E_min at r_+ + ϵ, where ϵ=10^-5, for values of a between the points (r_+,√(3)M/2) and (r_+,1). Here we see that E_min predicts a growing energy with increased angular momentum. So why is E_min significantly greater than the predicted 2M_ir, and why does it disagree with the trend of decreasing QLE with increased angular momentum? The reason lies in the field of observers that are chosen at τ_min. Assuming the isometric embedding of 𝒮 is a surface of revolution, the interval for which 𝒮 is embeddable in ℝ^3 is determined by Eq. <ref> with τ=0. For surfaces in the regime of strictly positive Gaussian curvature, σ_θθ is strictly greater than or equal to σ^2_ϕϕ,θ/(4σ_ϕϕ). Surfaces with regions of negative Gaussian curvature can only be partially embedded in ℝ^3 between θ^** < θ < π-θ^**. Here, θ^** is the smaller root of Eq. <ref> with τ=0. We will refer to this interval as the "interval of embeddability". In Fig. <ref>, we plot the inner product between the field of observers given by W-Y at τ_min and the Eulerian observers that would be chosen by B-Y as a function of θ. We see that the observers at τ_min and the Eulerian observers agree within some ϵ around 0.7<θ<π-0.7. For this choice of angular momentum and radius, θ^** is approximately equal to 0.64. This shows that E_min chooses the Eulerian observers within the interval of embeddability and smoothly transitions to observers that are boosted with respect to the Eulerian observers outside of this interval. As one approaches (r_+,M), the interval of embeddability decreases. This implies that more of the observers chosen by the W-Y QLE procedure at τ_min are boosted with respect to the Eulerian observers. We also found that the magnitude of the boosts increases as one approaches (r_+,M). This is why E_min has a growing energy with increased angular momentum. It also explains why E_min at the event horizon is significantly greater than twice the irreducible mass when a>√(3)M/2.

§ CONCLUSION AND FURTHER DISCUSSION
In this paper, we analyzed the W-Y QLE functional, with the restriction that τ is only a function of θ, for constant radial surfaces with R<r^*. These surfaces may not be embeddable in ℝ^3, but they are embeddable in ℝ^3,1. We discovered an open region of complex values for E[τ(θ)] while minimizing the functional in the space of coefficients. Our results suggest that the smallest real value of E[τ(θ)] lies on the boundary separating real and complex energies. Our results also suggest that there does not exist a convex shadow whose Gaussian curvature is strictly non-negative for choices of τ within the region of complex energies. We also analyzed the behavior of E_min to glean some insight into its possible physical relevance. In Fig. <ref>, we saw a sudden increase in E_min for surfaces with R<r^*≈1.65. It is uncertain if these energies are physically meaningful, since no results exist for such surfaces. To gain some clarity, we examined E_min at the event horizon as a function of angular momentum. For a<√(3)M/2, the results of Martinez suggest that the QLE is comparable to twice the irreducible mass of the black hole, which decreases with increasing angular momentum. Above √(3)M/2, E_min increases with increasing angular momentum. We attributed this change in behavior to the difference between the Eulerian observers chosen by B-Y and the field of observers chosen by W-Y at τ_min.
In Fig. <ref>, we showed that the W-Y observers at τ_min agree with the Eulerian observers within the interval of embeddability and transition to boosted observers outside of this interval. Our results are contingent on τ being a function of θ alone. For a true understanding of the W-Y QLE applied to extreme Kerr spacetimes near the event horizon, one must allow τ to be a function of both θ and ϕ. This is an interesting avenue for future research.

§ ACKNOWLEDGEMENTS
We wish to thank Po-Ning Chen, Rory Conboye, Matthew Corne, and Ye-Kai Wang for stimulating discussions. We also thank the Information Directorate of the Air Force Research Laboratory and the Griffiss Institute for providing us with an excellent environment for research. This work was supported in part through the VFRP and SFFP programs, as well as AFRL grant #FA8750-15-2-0047.

§.§ Physical motivations behind the W-Y QLE formalism
Let ℳ be an arbitrary spacetime manifold. The B-Y QLE given by Eq. <ref> does not give a general prescription for how to choose the field of observers in ℳ, nor does it give the reference space. For stationary spacetimes, B-Y suggested the Eulerian observers associated with maximal hypersurfaces as the observers and ℝ^3 as the reference space. This choice is reasonable when the extrinsic curvature of 𝒮 along u⃗ vanishes and the embedding of 𝒮 exists. However, it was shown that these choices do not work for surfaces in general and can give non-zero values of QLE for flat spacetimes <cit.>. This is due to the second term in Eq. <ref>. If the extrinsic curvature of 𝒮_t along u⃗ does not vanish, there is no way to account for this curvature in ℝ^3 when computing the reference energy. To address this problem, W-Y used flat spacetime as their reference space. This extends the application of the B-Y QLE to dynamical spacetimes. The extension of the reference space from ℝ^3 to Minkowski space ℝ^3,1 creates a new challenge. Since 𝒮_t is a co-dimension two surface with respect to ℝ^3,1, the isometric embedding equations are underdetermined. To address this problem, W-Y introduced the scalar field τ on 𝒮_t, which allows them to define a unique embedding into Minkowski space up to a choice of τ. They then construct a procedure to associate each choice of τ with two observer fields which are used to compute QLE. One field exists in the physical space and is denoted as t⃗=Nu⃗̅⃗+N⃗, while the other exists in Minkowski space and is denoted as t⃗_0=Nu⃗_0+N⃗. These observers are chosen such that the extrinsic curvature of 𝒮_t along u⃗̅⃗ embedded in ℳ is equal to the extrinsic curvature of 𝒮_t along u⃗_0 embedded in ℝ^3,1. The notation u⃗̅⃗ is used to distinguish the unique timelike normal on 𝒮_t whose extrinsic curvature agrees with u⃗_0, as opposed to an arbitrary timelike normal u⃗. This matching of extrinsic curvature along timelike normals is given by the constraint

⟨u⃗̅⃗, H⃗⟩ = ⟨u⃗_0, H⃗_0⟩,

where H⃗_0 is the mean curvature vector of 𝒮_t embedded in ℝ^3,1. This addresses the problem of the second term in Eq. <ref>.
Next we discuss the isometric embedding into Minkowski space and how the lapse and shift are chosen. Let i:𝒮_t ↪ ℝ^3,1 represent an isometric embedding of 𝒮_t into Minkowski space. In principle, one would compute the reference energy using a field of observers who are at rest with respect to i(𝒮_t). If we work in the rest frame of these observers, at each point p ∈ i(𝒮_t) we have t⃗_0 = {1,0,0,0}. Let τ be the time component of i(𝒮_t); then the embedding takes the form x⃗_0={τ,x^1,x^2,x^3}. One can alternatively write the embedding as

x⃗_0 = x⃗̂⃗ + τt⃗_0,

where x⃗̂⃗={0,x^1,x^2,x^3} are the spatial coordinates of i(𝒮_t) that lie in a three dimensional Euclidean plane orthogonal to t⃗_0. This projection 𝒮̂ onto ℝ^3 is defined as the shadow of i(𝒮_t) with respect to t⃗_0. Vectors with hats exist on the shadow, while vectors with the zero subscript exist on i(𝒮_t). Starting from Eq. <ref>, the metric of the shadow σ̂ is given by

⟨x⃗_,a | x⃗_,b⟩ = ⟨x⃗̂⃗_,a + τ_,a t⃗_0 | x⃗̂⃗_,b + τ_,b t⃗_0⟩ = ⟨x⃗̂⃗_,a | x⃗̂⃗_,b⟩ - τ_,aτ_,b,

which implies

σ̂_ab = (σ_t)_ab + τ_,aτ_,b.

The isometric embedding of 𝒮_t into ℝ^3,1 using 𝒮̂ and τ is shown in Fig. <ref>. A necessary condition on the choice of τ is that the shadow 𝒮̂ be a smooth convex surface in ℝ^3. This condition is used to prove the existence and uniqueness of i(𝒮_t) given the observer field t⃗_0. It can be seen from the embedding theorem of Nirenberg and Pogorelov and Eq. <ref> that any isometric embeddings of 𝒮_t in Minkowski space with the same convex shadow and scalar field τ must be congruent. This completes the discussion on embedding 𝒮_t into ℝ^3,1. Since the field of observers in Minkowski space is defined as being at rest with respect to i(𝒮_t), the lapse and shift are chosen such that

t⃗_0 = Nu⃗_0 + N⃗.

Using the embedding as described in Fig. <ref>, it can be shown that

t⃗_0 = {1,0,0,0} = √(1+|∇τ|^2) u⃗_0 - ∇τ, where ∇τ = σ_t^abτ_,a x⃗_0,b.

Here we see that N=√(1+|∇τ|^2) and N⃗ = -∇τ. The corresponding field of observers in ℳ is

t⃗ = √(1+|∇τ|^2) u⃗̅⃗ - ∇τ,

where the coordinates x⃗_0 in Eq. <ref> are replaced with the coordinates of 𝒮_t in ℳ. With the observer fields and the isometric embedding into ℝ^3,1 written in terms of τ, the discussion of the physical motivations behind the W-Y formalism is complete.

§.§ A discussion on the isometric embedding theorem
There are several common misconceptions about isometric embedding of a closed surface into ℝ^3. We take this opportunity to address these issues. 1. Isometric embeddings do not preserve symmetry: One reason why the current formalism does not work is the assumption that τ is a function of θ only, i.e. that τ is axi-symmetric. The Killing field of a Riemannian metric does not extend to the embedding, that is, it does not extend to be a Killing field of the ambient space. In particular, it is possible that an axi-symmetric metric admits an isometric embedding into ℝ^3 that is not a surface of revolution. 2. Non-embeddability: The surface isometric embedding theorem guarantees the existence and uniqueness of a global isometric embedding if the Gaussian curvature is positive everywhere. However, there does not seem to be any non-trivial non-embeddability theorem. In particular, for a surface with a metric that has negative Gauss curvature at some point, isometric embedding into ℝ^3 is still possible. There are many closed surfaces in ℝ^3 with negative Gauss curvature somewhere, but these isometric embeddings are not expected to be unique. 3. Global isometric embedding vs.
local isometric embedding: The theorem of Frolov on the non-embeddability near a point of negative Gauss curvature <cit.> seems to contradict a well-known local isometric embedding theorem <cit.>, which states that if a surface has negative Gauss curvature at a point, then there exists a neighborhood of that point which can be isometrically embedded into ℝ^3. This local isometric embedding theorem holds as long as the Gauss curvature is positive, negative, or changes sign cleanly. This apparent contradiction implies that Frolov's theorem does not necessarily eliminate the existence of embeddings into ℝ^3 for these surfaces. In particular, one cannot rule out embeddings that are not surfaces of revolution.
Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027, USA
Re-based double perovskites (DPs) have garnered substantial attention due to their high Curie temperatures (T_C) and their display of a complex interplay of structural and metal-insulator transitions (MIT). Here we systematically study the ground state electronic and structural properties for a family of Re-based DPs A_2BReO_6 (A=Sr, Ca and B=Cr, Fe), which are related by a common low energy Hamiltonian, using density functional theory + U calculations. We show that the on-site interaction U of Re induces orbital ordering (denoted C-OO), with each Re site having an occupied d_xy orbital and a C-type alternation among d_xz/d_yz, resulting in an insulating state consistent with the experimentally determined insulators Sr_2CrReO_6, Ca_2CrReO_6, and Ca_2FeReO_6. The threshold value of U_Re for orbital ordering is reduced by inducing E_g octahedral distortions of the same C-type wavelength (denoted C-OD), which serves as a structural signature of the orbital ordering; octahedral tilting also reduces the threshold. The C-OO, and the concomitant C-OD, are a spontaneously broken symmetry for the Sr based materials (i.e. a^0a^0c^- tilt pattern), while not for the Ca based systems (i.e. a^-a^-b^+ tilt pattern). Spin-orbit coupling does not qualitatively change the physics of the C-OO/C-OD, but can induce relevant quantitative changes. We prove that a single set of U_Cr, U_Fe, U_Re captures the experimentally observed metallic state in Sr_2FeReO_6 and the insulating states in the other three systems. We predict that the C-OO is the origin of the insulating state in Sr_2CrReO_6, and that the concomitant C-OD may be experimentally observed at sufficiently low temperatures (i.e. space group P4_2/m) in pure samples. Additionally, given our prescribed values of U, we show that the C-OO induced insulating state in Ca_2CrReO_6 will survive even if the C-OD amplitude is suppressed (e.g. due to thermal fluctuations). The role of the C-OO/C-OD in the discontinuous, temperature driven MIT in Ca_2FeReO_6 is discussed. 71.30.+h, 75.70.Cn, 75.47.Lx, 75.25.Dk, 71.15.Mb
Structural and metal-insulator transitions in rhenium based double perovskites via orbital ordering
Chris A. Marianetti
======================================================================================================
§ INTRODUCTION
§.§ General Background
There is a huge phase space of possibilities for perovskite based transition metal oxides with more than one type of transition metal which nominally bears d electrons, and experimental efforts are continuing to expand in this direction, including chemical synthesis <cit.> and layer-by-layer growth by pulsed laser deposition <cit.>. Given that many of these materials will exhibit strongly correlated electron behavior, it will be critical to have appropriate first-principles based approaches which can be applied to this vast phase space in order to guide experimental efforts, allowing for the development of novel, functional materials. Nearly two decades ago, room-temperature ferrimagnetism (sometimes loosely referred to as ferromagnetism) was discovered in the double-perovskite (DP) transition metal oxide (TMO) Sr_2FeMoO_6 <cit.>, attracting much attention to DP TMOs due to their rich physics and potential for spintronic applications <cit.>. Recent first-principles efforts have shown promise in identifying new, novel materials in this phase space <cit.>. Among the various double perovskites, Re-based DPs are a particularly intriguing class, and the small set A_2BReO_6 (A=Sr, Ca and B=Cr, Fe) already contains a wealth of interesting physics and impressive metrics. Moreover, this particular set of Re-based DP materials forms a sort of family which descends from the same low energy Hamiltonian of Re dominated orbitals, despite the fact that Cr and Fe have different numbers of electrons; this can be deduced from nominal charge counting along with some amount of post facto knowledge (see Section <ref> for a more detailed explanation). Given that Sr and Ca are isovalent (i.e. nominally 2+), these two cations serve as a binary parameter to modify the degree and type of octahedral tilting, changing the bandwidth of the system. Switching between Cr and Fe changes the valence by two electrons and alters the B site energy. However, Cr and Fe are totally analogous in the sense that both yield a filled spin shell given a predominant octahedral crystal field and a high spin configuration (i.e. t_2g,↑^3 and t_2g,↑^3e_g,↑^2, respectively). Experiment dictates that the resulting four permutations of A_2BReO_6 yield both metallic and insulating ground states, insulator to metal transitions as a function of temperature (at reasonable temperature scales), structural transitions as a function of temperature, and in some cases very high ferrimagnetic to paramagnetic transition temperatures. Moreover, this Re-based family of DPs contains unexplained phenomena, such as the discontinuous, isostructural phase transition in Ca_2FeReO_6. Therefore, there are a variety of phenomenological, qualitative, and quantitative challenges which need to be addressed in this family.
Given that all of these compounds are strongly magnetically ordered at low temperatures, it is reasonable to expect that DFT+U might provide an overarching, qualitative view of the physics; perhaps even quantitative. In this work, we use DFT+U calculations to investigate the electronic and structural aspects of A_2BReO_6 (A=Sr, Ca, and B=Cr, Fe), systematically accounting for the effects of octahedral distortions and rotations, in addition to carefully exploring the effect of the Hubbard U for both the B sites and Re. We show that a single set of U_Re, U_Fe, U_Cr can obtain qualitative agreement with known experiments on all four compounds. Particular attention is paid to isolating the effects of the Hubbard U by additionally considering cubic reference structures in the absence of any octahedral distortions or tilting. Finally, we explore the effect of spin-orbit coupling, demonstrating that it can perturb the C-OO and the resulting C-OD, but the qualitative trends hold. The rest of the paper is organized as follows. Sections <ref> and <ref> address the previous literature of the Re-based double perovskites and orbital ordering physics in other perovskites, respectively. Section <ref> details the computational methods and provides a brief discussion on the value of U, while a detailed analysis of the optimal U values is given in Section <ref>. Section <ref> provides a minimal analysis of the various physical mechanisms at play in this family of materials, highlighting the key findings in our paper; while detailed calculations which shape our conclusions can be found in Sections <ref> and <ref>. Section <ref> discusses future experiments which could test our predictions, and Section <ref> presents the summary of the paper.

§.§ Literature review of A_2BReO_6 (A=Sr, Ca and B=Cr,Fe)
Here we review the experimental literature, in addition to some of the theoretical literature, on our Re-based compounds of interest: A_2BReO_6 (A=Sr, Ca, and B=Cr, Fe). All four compounds form a perovskite structure with the Re/B atoms ordering in a q_sc = (1/2,1/2,1/2) motif with respect to the primitive simple cubic perovskite lattice vectors (see Figure <ref>). All systems are ferrimagnetically ordered below room temperature, with the Re and B atoms having opposite spins. We begin by presenting an experimental table of the crystal structures for the ground state and at temperatures above the structural transition, except for Ca_2CrReO_6, which is not known to have a transition near room temperature (see Table <ref>). Additionally, we tabulate the transition temperatures and the nature of the ground state (i.e. metal vs. insulator). We will also discuss other experimental viewpoints from the literature, some with dissenting views, that are not represented in this table. Bulk Sr_2FeReO_6 is tetragonal at 5K (I4/m, space group 87) as shown in Fig. <ref>(a), metallic (even in well ordered samples) <cit.>, has a^0a^0c^- octahedral tilting, and the in-plane and out-of-plane ∠Fe-O-Re are 171.9 and 180^∘, respectively <cit.>. Upon increasing temperature, it undergoes a tetragonal-to-cubic phase transition at T_t=490K to space group 225 (Fm3̅m), removing the octahedral tilting. Ca_2FeReO_6 is monoclinic at 7K (P2_1/n, space group 14-2) and has a^-a^-b^+ octahedral tilting (see Fig. <ref>(b)). It is generally known as an insulator at low temperature <cit.>, though Fisher et al.
suggested that it may be a bad metal <cit.>. With increasing temperature, Ca_2FeReO_6 undergoes a concomitant structural and metal-insulator transition (MIT) at 140K <cit.>. Interestingly, the structures above and below the transition have the same space group symmetry, and octahedral tilting, but the structural parameters are slightly different <cit.>. Based on the experimental results, we infer that the predominant structural change at the phase transition is the enhancement and reorientation of a local axial octahedral distortion of the Re-O octahedron (i.e. linear combinations of A_1g+E_g octahedral modes, as defined from the cubic reference) which orders in a C-type antiferro (i.e. q_fcc = (0,1/2,1/2)) manner with respect to the primitive face-centered cubic lattice vectors of the double perovskite (see Figure <ref>). We refer to this as a C-type octahedral distortion (C-OD) (see Table <ref> for projections onto the octahedral mode amplitudes). The C-OD will be demonstrated to be a signature of orbital ordering of the Re electrons, which we will prove to be the common mechanism of the MIT in this entire family of materials. The C-OD is not a spontaneously broken symmetry in the space group of Ca_2FeReO_6 (e.g. there is a small non-zero amplitude in the high temperature phase), and there are two symmetry inequivalent variants (i.e. C-OD^+ and C-OD^-, see Fig. <ref>) which represent the low and high temperature structures, respectively. Incidentally, the C-OD is a spontaneously broken symmetry in the Sr-based crystals (due to the a^0a^0c^- tilt pattern), whereby C-OD^+ and C-OD^- are identical by symmetry. In order to clearly characterize the C-OD, the bond lengths of the Re-O octahedron from the experimental structures are summarized in Fig. <ref>. Above the structural transition, the Re-O bonds are split into three sets of two equal bond lengths (where the equal bonds arise from the inversion symmetry at the Re site), but two of the three sets are very similar. Specifically, at T=300K, d_Re1-O1=1.959Å, d_Re1-O2=1.954Å, and d_Re1-O3=1.939Å, where Re1-O1 and Re1-O2 are approximately within the a-b plane and Re1-O3 is approximately along the c-axis (see Fig. <ref>). In order to quantify relevant aspects of the octahedral distortions, we will define a parameter d_|x-y|=|d_Re-O1-d_Re-O2|, which is small in the high temperature phase (i.e. d_|x-y|=0.005Å at T=300K); d_|x-y| is precisely the amplitude of the E_g^(0) octahedral mode in the unrotated local coordinate system. For the symmetry equivalent Re within the unit cell (i.e. Re2), the nearly equivalent O1 and O2 bond lengths are swapped (i.e. d_|x-y| is identical but the direction of the long/short bonds is reversed), while Re-O3 is identical. Upon changing to the low temperature phase, there is a modest change whereby the Re-O3 bond length shifts up by 0.006Å (i.e. equivalently in both Re), and a more dramatic change whereby the splitting between Re-O1 and Re-O2 becomes substantially larger (i.e. d_|x-y|=0.014Å). As in the high temperature structure, symmetry dictates that the direction of d_|x-y| alternates between the two Re sites. The main difference is that d_|x-y| acquires an appreciable value in the low temperature phase, and the C-OD switches between C-OD^+ and C-OD^- (see Section <ref> for a more detailed discussion).
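The arithmetic behind d_|x-y| is elementary, but to make the convention explicit, the following snippet (our own illustration; only the 300K bond lengths quoted above are used) evaluates it along with the axial splitting of the octahedron.

```python
# Re-O bond lengths (Angstrom) for Ca2FeReO6 at T = 300 K, quoted above.
d_O1, d_O2, d_O3 = 1.959, 1.954, 1.939   # O1, O2 ~ in-plane; O3 ~ along c

d_xy = abs(d_O1 - d_O2)                  # E_g^(0) amplitude in the local frame
print(f'd_|x-y| (300 K) = {d_xy:.3f} A') # 0.005 A here, vs 0.014 A below the MIT

axial = d_O3 - 0.5*(d_O1 + d_O2)         # axial (tetragonal-like) splitting
print(f'axial splitting = {axial:.3f} A')
```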
Interestingly, Granado et al. suggested that there is phase separation between 10K and 650K, with all three phases being monoclinic <cit.>. More specifically, the most abundant phases are found to be the M1 and M2 phases, with fractions of 55% and 45%, respectively, and the main differences between the two phases are the b-lattice parameter and the angle β. Similarly, Westerburg et al. also observed two different phases below 300K <cit.>. We note that the M1 and M2 phases in Granado et al.'s results <cit.> are similar to the low-T and high-T phases reported by Oikawa et al., where the separation was not detected <cit.>. M1, which has the largest portion at low temperature, has a b lattice parameter which is ∼0.015Å smaller and a β which is ∼0.1^∘ larger than those of the M2 phase, which constitutes ∼90% of the high-T phase <cit.>. Similarly, at 140K, the low-T phase has smaller b and larger β than the high-T phase in Oikawa et al.'s report <cit.>. Having clarified the nature of the experimentally measured structural distortions in Ca_2FeReO_6, we return to the issue of the MIT as addressed in the literature. Since there are nominally only Re t_2g states near the Fermi level, the MIT is a gapping of these states. Oikawa et al. suggested that the d_xy+d_yz and d_xy+d_zx orbitals are randomly arranged at the Re sites in the metallic phase, whereas the d_yz+d_zx orbitals are preferentially occupied in the insulating phase; the splitting between the d_yz+d_zx and d_xy orbitals produces the energy gap <cit.>. Previous local spin density functional theory (LSDA) studies showed that Ca_2FeReO_6 is metallic without considering an on-site Coulomb repulsion term U for Re (U_Re) <cit.>, and a gap is opened with the large value U=3-4 eV <cit.>. Gong et al. concluded that the Re t_2g states order into a d_xy+d_zx configuration using the modified Becke-Johnson (mBJ) exchange-correlation potential, and showed that Ca_2FeReO_6 is insulating; though this study did not explicitly identify the C-type orbital ordering that drives this insulating state. A noteworthy approximation made in their work is that the atomic coordinates are relaxed within GGA, where d_|x-y| is only 0.004Å, which is far smaller than the low temperature experimental value in the insulating state. The importance of this amplified C-OD amplitude will be clearly demonstrated within our work. Antonov et al. <cit.> reported the electronic structure of Ca_2FeReO_6 using LSDA+U+spin-orbit-coupling calculations. Using structures obtained from experiment at different temperatures, which encompass the discontinuous phase transition at T=140K <cit.>, they showed that the spin and orbital moments also have abrupt changes across the transition, while both change linearly with the structures from temperatures below and above the MIT <cit.>. Sr_2CrReO_6 has been determined to be tetragonal with an a^0a^0c^- octahedral tilt pattern (i.e. space group I4/m, see Fig. <ref>(c)) <cit.>. Teresa et al. reported a structural transition at T=260K, going from I4/m to Fm3̅m (with increasing temperature), whereby the octahedral tilts and the tetragonality are disordered. Alternatively, Kato et al. found that Sr_2CrReO_6 is still I4/m <cit.> at room temperature, implying that the transition temperature was even higher in these particular samples; while Winkler et al. found that it was cubic (Fm3̅m) at room temperature <cit.>. This is likely a minor discrepancy given that the structure measured by Kato et al.
at room temperature (300K) only has small deviations from Fm3̅m: the in-plane Cr-O-Re angle in the tetragonal structure is 179.7^∘, close to 180^∘, and the lattice parameters are nearly cubic with √(2)a=7.817 and c=7.809 Å <cit.>. Recent experiments have found Sr_2CrReO_6 to be insulating at low temperatures, in contrast with earlier work which found metallic states. Specifically, Hauser et al. found that a Sr_2CrReO_6 film grown on STO, where the strain is less than 0.05%, is insulating at 2K with a 0.21eV energy gap <cit.>. Alternatively, numerous samples obtained from chemical synthesis were all found to be metallic <cit.>, in addition to previous thin film samples <cit.>. It should be noted that Kato et al. emphasized that Sr_2CrReO_6 is a very bad metal, and lies in the vicinity of a Mott-insulating state <cit.>. Moreover, Hauser et al. suggested that oxygen vacancies are the reason why the Sr_2CrReO_6 samples reported in previous studies were metallic <cit.>. Indeed, previously reported metallic Sr_2CrReO_6 samples have a large amount of defects, such as Cr/Re anti-site defects: 9% <cit.>, 15% <cit.>, 10-12% <cit.>, and 23.3% <cit.>. However, there is not yet a theoretical justification for why Sr_2CrReO_6 might be an insulator. Unfortunately, full structural parameters have not yet been extracted from the insulating film at low temperatures <cit.>, which could reveal signatures of the orbitally ordered insulator which we predict in our analysis (see Section <ref>). To our knowledge, there are only a few experiments on Ca_2CrReO_6 <cit.>, finding a monoclinic crystal structure (space group P2_1/n, see Fig. <ref>(d)) and an insulating ground state. The energy gap has not been reported yet, though the reflectivity spectra and optical conductivity were measured <cit.>. Theoretically, the recent mBJ study of Gong et al. suggested that the energy gap is 0.38eV, much larger than that of Ca_2FeReO_6. The resistivity curve suggests that it is still insulating at room temperature, but the resistivity will have an error given the 12-13.7% of B-site disorder <cit.>, similar to the case of Sr_2CrReO_6. In addition, structural parameters as a function of temperature have not yet been reported, which will be relevant to testing the predictions in our study. In the existing literature, the effects of electronic correlation, orbital ordering, and octahedral distortions have not been sufficiently isolated to give a universal understanding of this family. Most importantly, the origin of the MIT and its relation to orbital ordering and the concomitant C-OD have not been elucidated. Theory and computation will be critical to separating cause from effect.

§.§ Orbital ordering
Orbital ordering is a well known phenomenon in transition metal oxides <cit.>, and it can drive a material into an insulating ground state. Two main mechanisms which drive orbital ordering are the electron-lattice (e-l) coupling, with a very relevant scenario being the well known Jahn-Teller (JT) effect, and electron-electron (e-e) interactions. Disentangling these two effects in a real system can be challenging, as both mechanisms result in orbital ordering and a concomitant lattice distortion; though the latter could be vanishingly small in the case of e-e driven orbital ordering. A complicating factor in both theory and experiment is that preexisting structural distortions (e.g. octahedral tilting) may preclude the orbital ordering from being a spontaneously broken symmetry, meaning that orbital ordering is always present and the only question is a matter of degree.
In the event that the e-e interactions are driving the ordering, a further question is whether orbital ordering is critical to realizing the insulating state (i.e. a Slater-like, e-e driven, orbitally ordered insulator) or whether Mott physics generates the insulating state (i.e. the system remains insulating even if the orbitals are thermally disordered). This latter question can also be cumbersome to disentangle. In the context of DFT+U calculations, the Hubbard U captures a very relevant portion of the e-e interactions which drive orbital ordering, similar to the U in a model Hamiltonian which gives rise to superexchange <cit.>. The e-l coupling is accounted for in the DFT portion of the calculation (assuming a local or semi-local approximation to the DFT functional). If experimentally deduced orbital ordering is accounted for at the level of DFT (i.e. U=0), then e-l couplings are likely playing a dominant role; while if DFT does not predominantly capture the orbital ordering, then the e-e interactions are likely playing a dominant role. In this case of dominant e-e interactions, if a particular spatial ordering is a necessary condition to drive an insulating state within DFT+U (for a physical value of U), then the resulting insulating state could be labeled as Slater-like. If an insulating state is achieved for an arbitrary ordering of the orbitals, then the system would be considered Mott-like. Classic examples of perovskites which display antiferro orbital ordering, and are insulators, include the 3d^4 LaMnO_3 <cit.> and the 3d^9 KCuF_3 <cit.>, which have ordering of e_g electrons; the 3d^1 (t_2g) materials LaTiO_3 and YTiO_3 <cit.>; and the 3d^2 (t_2g) perovskites LaVO_3 and YVO_3 <cit.>. It is useful to make some empirical characterization of these classic examples to provide context for the orbital ordering we identify in this study (see Table <ref>). All of these systems are insulators until relatively high temperatures. All of the aforementioned examples have the GdFeO_3 tilt pattern (a^-a^-b^+) except KCuF_3, and thus there is only orbital degeneracy in KCuF_3 (assuming a reference state where the E_g^(1) strain mode is zero). Therefore, antiferro orbital ordering could be a spontaneously broken symmetry for KCuF_3, while the other systems will always display some degree of orbital polarization and octahedral distortion. In all cases, the most relevant lattice distortion is an E_g^(0) distortion, driven by both e-l and e-e interactions. The e-l coupling is generally much larger for scenarios involving e_g electrons as compared to t_2g. DFT+U calculations can be helpful in disentangling the effects of e-e interactions and e-l coupling. In the aforementioned classic examples of orbital ordering involving e_g electrons, important contributions are realized from both e-e interactions and e-l coupling. In LaMnO_3, an antiferro E_g^(0) Jahn-Teller distortion, and the corresponding orbital ordering, is found even at the level of GGA (i.e. U_Mn=0): the e-l coupling is strong enough to recover 0.8 of the experimentally observed Jahn-Teller distortion <cit.>. However, a non-zero U_Mn is needed to properly capture the energy stabilization, insulating ground state, and full magnitude of the Jahn-Teller distorted, orbitally ordered state.
In KCuF_3, pure GGA is sufficient to spontaneously break symmetry and obtain the antiferro E_g^(0) Jahn-Teller distortion that is observed in experiment, though the stabilization energy is grossly underestimated and the distortion magnitude is too small <cit.>. Including the on-site U gives reasonable agreement with experiment (both within DFT+U and DFT+DMFT) <cit.>. Alternatively, if one remains in the cubic reference structure, preventing coupling with the lattice, an on-site U of 7eV can drive the orbitally ordered insulator with a corresponding transition temperature of roughly 350K <cit.>. Therefore, both mechanisms can drive the same instability, but in isolation the on-site U recovers a larger component of the stabilization energy; though both ingredients are necessary to quantitatively describe experiment. In both LaMnO_3 and KCuF_3, e-e interactions and e-l coupling both play a direct, relevant role. In the t_2g-based systems, the e-l coupling is expected to be smaller. DFT+U studies for LaTiO_3 show that at U=0, the system is metallic and has a very small E_g^(0) distortion of d_|x-y|=0.004-0.005Å <cit.>, in contrast to experiment, which yields an insulator with d_|x-y|=0.021Å. Increasing the e-e interactions to U=3.2eV/J=0.9eV <cit.>, an insulator is obtained and d_|x-y|=0.018Å, in much better agreement with experiment (see Table <ref>). DFT will always have a small value of d_|x-y| due to the broken symmetry caused by the octahedral tilting, and the e-l coupling within DFT provides no strong enhancement of this distortion. Applying the Hubbard U both orders the orbitals and induces an appreciable value of d_|x-y|. This concomitant d_|x-y| distortion may increase the potency of the Hubbard U, such as increasing the resulting band gap (see results for the Re-based family in Section <ref>). The vanadates behave in a similar fashion. DFT (i.e. U=0) calculations for LaVO_3, for the low-temperature phase P2_1/n (or alternatively, P2_1/a or P2_1/b), showed that LaVO_3 is metallic with d_|x-y|=0.001-0.002Å <cit.>, in stark disagreement with experiment, which shows insulating behavior and d_|x-y|=0.061Å (see Table <ref>). Hybrid functional calculations, which are very similar in nature to DFT+U, recover the insulating state and an appreciable d_|x-y| amplitude. In these t_2g-based systems, DFT gets d_|x-y| wrong by a factor of approximately 4-5 and 30-60 for the titanates and the vanadates, respectively; a much more dramatic failure than in the e_g-based materials. Within experiment, one cannot easily isolate different terms in the Hamiltonian, though it may be possible to thermally quench the lattice distortion and determine if the orbital polarization persists. If so, this would strongly indicate that e-e interactions are dominant in driving orbital ordering. Furthermore, experiment could possibly determine if the system remains gapped upon thermally disordering the orbitals. As mentioned above, octahedral tilting is often a higher energy scale which already breaks symmetry, and it will not be totally clear what the quenched value of the distortion and/or the orbital polarization should be. Below we tabulate the amplitude of the E_g^(0) distortion and the metallic/insulating nature at a low temperature (i.e. a temperature below the orbital ordering) and a high temperature (i.e. either above the orbital ordering or the highest temperature measured); the magnetic transition temperatures are also included. Three scenarios can be identified.
First, the E_g^(0) distortion may be essentially unchanged as a function of temperature, or even enhanced, while the material remains insulating (i.e. KCuF_3, LaTiO_3, and YTiO_3). Second, the E_g^(0) distortion may be largely quenched via temperature and the system concomitantly becomes metallic (i.e. LaMnO_3). Third, the E_g^(0) distortion may be largely quenched and the system remains insulating (i.e. LaVO_3 and YVO_3). In the first two scenarios, little can be deduced without further analysis: the orbital ordering and structural distortion are either frozen in or are simultaneously washed out. For the vanadates, we learn that the lattice distortion is irrelevant for attaining the insulating state: e-e interactions drive the orbital ordering. Further analysis would be needed to know if the insulator is Slater-like or Mott-like. The orbital ordering which we identify in the 3d-5d Re-based DPs of this study has a number of distinct circumstances as compared to these classic 3d single perovskites. First, in the Re-DPs the magnetic transition temperatures (T_mag) are a much larger energy scale (see Table <ref>), which means that the spins are strongly ordered well before orbital physics comes into play; whereas the reverse is true in the single 3d perovskites. Another difference is that the electronic structure of the 3d-5d DPs is generally governed by 5d orbitals, which may have a non-trivial spin-orbit interaction <cit.>. While 5d orbitals are more delocalized than 3d orbitals, it should be kept in mind that the rock salt ordering of the DPs results in relatively small effective Re bandwidths (see Section <ref>). A well studied class of DPs where orbital ordering may be relevant is the A_2BB'O_6 double perovskites, where B has a fully filled or empty d shell and B' is a 5d transition metal. One well studied family is the B'=5d^1 Mott insulators, such as Ba_2BOsO_6 (B=Li, Na) <cit.> and Ba_2BMoO_6 (B=Y, Lu) <cit.> (see Table <ref>). These materials have very weak magnetic exchange interactions (e.g. the T_C of Ba_2NaOsO_6 is 6.8-8 K <cit.>), and exotic phases have been proposed, such as quantum spin liquids, valence-bond solids, or spin-orbit dimer phases <cit.>. Xiang et al. <cit.> studied Ba_2NaOsO_6 using first-principles calculations, and suggested that an insulating phase cannot be obtained within GGA+U up to U-J=0.5 Ryd: orbital ordering is not observed in their electronic band structure within GGA+U. They also show that Ba_2NaOsO_6 is insulating within GGA+U when including spin-orbit coupling (SOC) with U-J=0.2 Ryd and the [111] magnetization axis. Gangopadhyay et al. <cit.> also proposed that SOC is essential to obtain a nonzero band gap, using hybrid functional + SOC calculations. Based on experiment, Erickson et al. proposed that Ba_2NaOsO_6 has orbital ordering with a non-zero wavevector, deduced in part from the small negative Weiss temperature from magnetic susceptibility measurements <cit.>. Another analogous example is the Ir-based double perovskite Sr_2CeIrO_6 (see Table <ref>), where Ce has a filled shell and the Ir 5d shell nominally has 5 electrons (or one hole) in the e_g^π + a_1g orbitals (i.e. descendants of t_2g) <cit.>, and this results in weak antiferromagnetic coupling (i.e. T_N=21K). Additionally, orbital ordering has been identified in this material, where the hole orders in the e_g^π shell among the d_xz and d_yz orbitals with an antiferro modulation.
The orbital ordering is accompanied by an E_g^(0) structural distortion, though the experimental temperature dependence is rather unusual. At 2K and 300K, d_|x-y|=0.008Å and d_|x-y|=0.043Å, respectively (see Table <ref>), showing a strong increase in amplitude with increasing temperature <cit.>; while d_|x-y|=0.049Å is obtained within GGA+U (U=4eV and J=1eV) <cit.>. The authors attribute the orbital ordering to the Jahn-Teller effect, though they demonstrate that U_Ir is a necessary condition for opening a band gap <cit.>. It should be noted that the wavevector of the antiferro orbital and structural ordering in this system is the same as what we identify in the Re-based family in the present work. Re-based double perovskites are quite distinct from the aforementioned double perovskites with an empty or fully filled d shell on the B ion. Unlike these latter materials, Re-based DPs have nonzero magnetic spin for B (e.g. Cr has spin 3/2 and Fe has spin 5/2), and thus have a strong antiferromagnetic exchange interaction between B and B'; resulting in a T_C that is much higher than room temperature (e.g. T_C in Sr_2CrReO_6 is 620K, see Table <ref>). Therefore, the spin degrees of freedom are locked in until relatively high temperatures, creating an ideal testbed to probe orbital physics. The family of Re-based DP's evaluated in this study is ideally distributed in parameter space about the orbital ordering phase transition. § COMPUTATION DETAILS We used the projector augmented wave (PAW) method <cit.> in order to numerically solve the Kohn-Sham equations, as implemented in the VASP code <cit.>. The exchange-correlation functional was approximated using the revised version of the generalized gradient approximation (GGA) proposed by Perdew et al. (PBEsol) <cit.>. In all cases, the spin-dependent version of the exchange-correlation functional is employed, both with and without spin-orbit coupling (SOC). A plane wave basis with a kinetic energy cutoff of 500 eV was employed. We used a Γ-centered k-point mesh of 9×9×7 (11×11×9 for density of states). Wigner-Seitz radii of 1.323, 1.164, and 1.434 Å were used for site projections on Cr, Fe, and Re atoms, respectively, as implemented in the VASP-PAW projectors. The GGA+U scheme within the rotationally invariant formalism and the fully localized limit double-counting formula <cit.> is used to study the effect of electron correlation. The electronic and structural properties critically depend on U_Re, and therefore we carefully explore a range of values. We also explore how the results depend on U_Cr and U_Fe, which play a secondary but relevant role in the physics of these materials. We do not employ an on-site exchange interaction J for any species, as this is already accounted for within the spin-dependent exchange-correlation potential <cit.>. A post facto analysis of our results demonstrates that a single set of values (which are reasonable as compared to naive expectations and previous work) can account for the electronic and crystal structure of this family (see Section <ref>), and it is useful to provide this information at the outset for clarity.
In the absence of spin-orbit coupling, values of U_Fe=4 eV, U_Cr=2.5 eV, and U_Re=2 eV are found; including spin-orbit coupling requires U_Re to be slightly decreased to 1.9 eV in order to maintain the proper physics. In subsequent discussions, the units of U will always be in electron volts (eV), and this may be suppressed for brevity. We used experimental lattice parameters throughout (see Table <ref>), and the reference temperature is 300K unless otherwise specified. Atomic positions within the unit cell were relaxed until the residual forces were less than 0.01 eV/Å. In select cases we do relax the lattice parameters as well to ensure no qualitative changes occur, and indeed the changes are small and inconsequential in all cases tested. § RESULTS AND DISCUSSION §.§ General Aspects of the Electronic Structure We begin by discussing the nominal charge states of the transition metals, the basic energy scales, and the common mechanism of the metal-insulator transition in these compounds, which is a C-type antiferro orbital ordering. A perfect cubic structure (Fm3̅m) is first considered, in the absence of SOC (which will be addressed at the end of this discussion). Given the Re double perovskite A_2BReO_6 (A=Sr, Ca; B=Fe, Cr), nominal charge counting dictates that the transition metal pair BRe must collectively donate 8 electrons to the oxygen (given that A_2 donates 4 electrons), and it is energetically favorable (as shown below) to have Re^5+ (d^2) and B^3+ (Cr → d^3 and Fe → d^5) in a high spin configuration. The Re spin couples antiferromagnetically to the B spin via superexchange, yielding a ferrimagnetic state. Given that the nominally d^5 Fe has a half-filled shell when fully polarized, and that the nominally d^3 Cr has a half-filled t_2g-based shell when fully polarized, none of these compounds would be expected to have Fe or Cr states at the Fermi energy when strongly polarized. Given that Re is in a d^2 configuration, group theory dictates that the system will be metallic with majority spin Re states present at the Fermi energy within band theory. These naive expectations are clearly realized in DFT calculations (i.e. U=0), as illustrated in Sr_2CrReO_6 and Sr_2FeReO_6 using the Fm3̅m structure (see Figure <ref>). It is useful to compare the Re states crossing the Fermi energy, which are substantially narrower for Sr_2CrReO_6 as compared to Sr_2FeReO_6. Relatedly, the Cr states hybridize less with the Re states, and are further from them, as compared to the case of Fe. The net result is that the Cr-based compounds will have a smaller effective Re bandwidth, and therefore stronger electronic correlations, which result in a higher propensity to form an insulating state. At the level of DFT+U, or any static theory for that matter, one can only obtain an insulator from the fully spin-polarized scenario outlined above via an additional spontaneously broken symmetry, which could be driven either via the on-site Re Coulomb repulsion U_Re, structural distortions (which include effects of electron-phonon coupling), or combinations thereof. As we will detail in the remainder of the paper, structural distortions alone (i.e. if U_Re=0) cannot drive an insulating state in any of the four Re-based materials studied. Therefore, a non-zero U_Re is a necessary condition to drive the insulating state, but the minimum required value of U_Re will be influenced by the details of the structural distortions, in addition to the on-site U of the 3d transition metal and the SOC.
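To make the nominal charge counting above concrete, the following minimal Python sketch reproduces the oxidation-state bookkeeping for A_2BReO_6. It is purely illustrative (the helper function and valence table are ours, not part of the computational workflow), but it encodes exactly the argument used in the text:

# Illustrative bookkeeping for A_2BReO_6 (A=Sr,Ca; B=Cr,Fe).
# A_2 donates 4 electrons and O_6 requires 12, so B and Re must donate 8.

VALENCE = {"Cr": 6, "Fe": 8, "Re": 7}  # neutral-atom d+s valence electrons

def nominal_d_counts(b_ion, b_oxidation=3, re_oxidation=5):
    """Nominal d-electron counts (n_B, n_Re), assuming B^3+ / Re^5+."""
    assert b_oxidation + re_oxidation == 8, "B and Re must donate 8 electrons"
    n_b = VALENCE[b_ion] - b_oxidation   # remaining electrons occupy the d shell
    n_re = VALENCE["Re"] - re_oxidation
    return n_b, n_re

for b in ("Cr", "Fe"):
    print(b, nominal_d_counts(b))  # Cr -> (3, 2); Fe -> (5, 2)

Running the sketch returns d^3/d^2 for the Cr/Re pair and d^5/d^2 for the Fe/Re pair, i.e. precisely the nominal configurations quoted above.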
To illustrate the points above, we show that DFT+U calculations (with the only nonzero U being U_Re=2.6) for Sr_2CrReO_6 with the nuclei frozen in the Fm3̅m structure result in a spontaneously broken symmetry of the electrons, where the Re orbitals order and result in an insulating state (see Figure <ref>, panel a). We investigated ordered states consistent with q_fcc=(0,0,0), q_fcc=(0,0,1/2), q_fcc=(0,1/2,1/2), and q_fcc=(1/2,1/2,1/2) (where q is a fractional coordinate of the reciprocal lattice vectors constructed from the primitive FCC DP lattice vectors; see Fig. <ref>); resulting in a ground state of q_fcc=(0,1/2,1/2) (i.e. C-type ordering). Specifically, a Re d_xy orbital is occupied on every site and there is a C-type alternation between d_xz and d_yz (see schematic in Figure <ref>, panels a and b). This C-type antiferro orbital ordering (denoted C-OO) is generic among this A_2BReO_6 family. We will demonstrate that other orderings are possible and even favorable under certain conditions. For example, for small values of U_Re, the orbitals order in a ferro fashion (denoted F-OO), whereby the d_xy and either the d_xz or d_yz is occupied at every Re site. For intermediate values of U_Re, a ferri version of the C-OO ordering (denoted FI-OO) is found, though it is destroyed by octahedral tilts. These detailed scenarios are explored in Section <ref>. We now turn to the importance of structural distortions, such as the E_g octahedral distortions which are induced by the orbital ordering. We first remain in the Fm3̅m structure and lower the value of U_Re to 2.3eV, demonstrating that the orbital ordering is destroyed and the gap is closed (see Figure <ref>, panel b). Subsequently, we allow any internal relaxations of the ions consistent with q_fcc=(0,0,0) or q_fcc=(0,1/2,1/2), demonstrating that an E_g^(0) octahedral distortion with C-type wavevector (denoted C-OD) condenses (see Figure <ref> (c)/(d) for schematic); lowering the structural symmetry from Fm3̅m to P4_2/mnm (see symmetry lineage in Figure <ref>) and allowing the C-OO to occur at U_Re=2.3eV (see Figure <ref>, panel c). This demonstrates how the C-OD can be an essential ingredient for realizing the orbitally ordered insulating state, by influencing the critical value of U_Re for the transition. Incidentally, it should be noted that when the orbital ordering changes, the structural distortion changes as expected. For example, ferro orbital ordering (i.e. F-OO) will lead to a ferro octahedral distortion (i.e. F-OD). The above analysis proves that it is reasonable to characterize the insulating state as an orbitally ordered state, despite the fact that the C-OD structural distortion could play a critical role in moving the MIT phase boundaries to smaller values of U_Re. We will demonstrate that this renormalization of the critical U_Re via the C-OD allows a common value of U_Re to realize the insulating state in Sr_2CrReO_6, while retaining a metallic state in Sr_2FeReO_6; and we predict that the orbitally ordered state can persist in the near absence of the C-OD in Ca_2CrReO_6, where electronic correlations are strongest. Given that the C-OD does not occur in the absence of U_Re, we refrain from characterizing this as a Jahn-Teller effect (or pseudo Jahn-Teller effect in the case where the C-OO/C-OD is not a spontaneously broken symmetry), which could have been a primary driving force given the orbital degeneracy (or near degeneracy) present in these systems.
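The q_fcc notation used above can be visualized with a toy sketch: for a modulation with wavevector q (in fractional coordinates of the reciprocal lattice of the primitive FCC cell), the sign of the alternation at a Re site with integer coordinates n in the FCC basis is the phase factor cos(2π q·n)=±1. The short Python snippet below is hypothetical (the site list and orbital assignment are for illustration only), but it shows how q_fcc=(0,1/2,1/2) generates the C-type alternation of d_xz/d_yz described above:

import numpy as np

q_C = np.array([0.0, 0.5, 0.5])  # C-type wavevector q_fcc = (0, 1/2, 1/2)

# A handful of Re sites, written as integer coefficients n of the
# primitive FCC lattice vectors.
sites = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]

for n in sites:
    sign = int(round(np.cos(2 * np.pi * np.dot(q_C, n))))  # +1 or -1
    occupied = "d_xz" if sign > 0 else "d_yz"  # d_xy is occupied on every site
    print(n, sign, occupied)

In this toy model the phase is uniform along the first FCC direction and flips under a single step along either of the other two, producing the antiferro (C-type) alternation between the two orbitals.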
Another generic consideration is octahedral tilting, which will influence both the C-OO and the C-OD. The a^-a^-b^+ tilt pattern of the Ca-based systems is a relatively large energy scale, and therefore the tilts in these systems exist independently of the orbital ordering and/or the C-OD. Alternatively, the a^0a^0c^- tilt pattern of the Sr-based systems is a much weaker energy scale, and therefore it may be somewhat coupled to the orbital ordering and the concomitant C-OD. These statements will be investigated in detail below (see Section <ref>), where we find that the differences of Sr/Ca are dominant over those of Fe/Cr in terms of setting the effective Re bandwidth; which results in an ordering of Sr_2FeReO_6, Sr_2CrReO_6, Ca_2FeReO_6, Ca_2CrReO_6 (from largest to smallest effective Re bandwidth, i.e. from weakest to strongest electronic correlations). For example, the resulting Re bandwidths are 1.84, 1.70, 1.50, and 1.35 eV, respectively (using U_Re=0, U_Fe=4, and U_Cr=2.5). Furthermore, the a^0a^0c^- tilt pattern may be isolated from the C-OO/C-OD, as they break symmetry in a distinct manner (see symmetry lineage in Figure <ref>). Therefore, the C-OO/C-OD will be a spontaneously broken symmetry in the Sr-based systems (should it occur). Alternatively, the a^-a^-b^+ tilt pattern already has a sufficiently low symmetry such that the C-OO/C-OD is not a spontaneously broken symmetry. Therefore, the C-OD cannot strictly be a signature of orbital ordering in the case of the a^-a^-b^+ tilt pattern. However, experiment dictates that the magnitude of the C-OD is a useful metric given the discontinuous structural phase transition at 140K between two crystal structures of the same space group (P2_1/n, no. 14-2), whereby the magnitude of the C-OD changes discontinuously; and the variant switches from C-OD^+ to C-OD^-. Another generic consideration is the effect of the on-site Coulomb repulsion U for the 3d transition metals, which do not nominally have states at low energies given the half-filled shell of Fe (or t_2g subshell of Cr). However, in reality there is a non-trivial amount of 3d states at low energies due to hybridization, more so for Fe than for Cr, and this determines the effective Re-d bandwidth. While U_Re can drive orbital ordering even in the absence of U_Fe/U_Cr, as previously illustrated above (see Fig. <ref>), we will demonstrate the quantitative influence of U_Fe/U_Cr in renormalizing the critical value of U_Re for orbital ordering. First, considering U_Re=0, one can clearly see an unmixing of 3d states from the Re-d states as U_Fe/U_Cr is applied, further narrowing the effective Re-d bandwidth (compare panels a↔c and e↔g in Fig. <ref>). This effect is more dramatic in the case of Fe, which started with a larger degree of hybridization. Focussing on the Fe compound, we see that applying U_Re=2 does not drive the orbitally ordered insulator even when U_Fe=4, and thus the system remains metallic despite the diminished Re-d bandwidth. Alternatively, when applying U_Re=2 to the Cr compound, the addition of U_Cr=2.5 is sufficient to move the critical value of U_Re below 2eV, and an orbitally ordered insulator is obtained. This demonstrates that, while indirect, the on-site U for the 3d transition metal can play a critical role. Interestingly, U_Cr also turns out to be critical for stabilizing the experimentally observed a^0a^0c^- tilt pattern in Sr_2CrReO_6 (see Section <ref>). Yet another generic consideration is the spin-orbit coupling.
We demonstrate that the SOC is a relatively small perturbation in this system by comparing the Re states near the Fermi energy for the cubic reference structure computed using GGA (i.e. U=0) with and without SOC (see Fig. <ref>, panels c and d). As shown, the DOS only exhibits small changes upon introducing SOC. Indeed, we will demonstrate that the SOC can shift the phase boundary of the C-OO/C-OD by small amounts, and this can be very relevant in the Ca-based systems (including a strong magnetization direction dependence in Ca_2FeReO_6, see Section <ref>). Finally, we discuss how temperature will drive the insulator to metal transition and the structural transition associated with the C-OD. For the most part, we will only address ground state properties in this study, as finite temperatures are beyond our current scope; though some of our analysis will shed light on what may occur. As outlined above, the insulating ground state in this family of materials is driven by C-type orbital ordering on the Re sites, though two main factors will influence the critical value of U_Re: the C-OD and octahedral tilting. One can imagine several different scenarios which could play out depending on the energy scales. First, the temperature of the electrons could disorder the C-OO. Given that our DFT+U calculations predict that this C-OO induced insulator is Slater-like (i.e. the gap closes given ferro and other orbital orderings, see Sections <ref> and <ref>), the material will become metallic upon disordering the orbitals. Given the weak nature of the electron-phonon coupling (i.e. the C-OD cannot condense without an on-site U_Re), this means that the C-OD would disorder along with the orbitals. A different scenario can be envisioned at an opposite extreme, whereby the energy scale for orbital ordering is very large and we can neglect the electronic temperature and only consider the phonons. In this case, temperature could disorder the C-OD and/or the octahedral tilts, which would substantially increase the critical value of U_Re, driving the system into a metallic state. We will entertain this latter scenario (see Section <ref>, and Figs. <ref> in particular), though it does not appear consistent with our preferred values of U when including SOC, unless there is a reorientation of the magnetization direction as seen in experiment (see Section <ref>, and Fig. <ref>). In reality, it is possible that all ingredients may be needed in order to properly capture the MIT and structural transitions from first-principles, and our paper will lay the groundwork for future study. §.§ Crystal and electronic structures Here we compute the crystal and electronic structure of A_2BReO_6 (A=Sr, Ca and B=Cr, Fe), exploring a range of Hubbard Us for all transition metals. We approach the four materials in order of increasing strength of electronic correlations: Sr_2FeReO_6, Sr_2CrReO_6, Ca_2FeReO_6, Ca_2CrReO_6. We will address orbital ordering, axial octahedral distortions, the octahedral tilt pattern, the presence of a band gap, and relative structural energetics. §.§.§ Sr_2FeReO_6 Experimentally, Sr_2FeReO_6 is found to be a metal with an a^0a^0c^- octahedral tilt pattern and I4/m symmetry (see Section <ref>). Given that Sr will have a smaller propensity to drive octahedral tilts relative to Ca, and that in Figure <ref> we showed that Re has a larger effective bandwidth in Fe-based systems as opposed to Cr-based systems, it is easy to understand why Sr_2FeReO_6 is the only metal among the four compounds considered.
Here we explore the interplay of octahedral tilts, octahedral distortions, and the Hubbard U in detail (see Fig. <ref>); including at least six different crystal structures (i.e. all structures in Fig. <ref> except Fm3̅m and P2_1/n). We will use the acronym OD (i.e. octahedral distortion) to generically refer to any spatial ordering of E_g^(0) octahedral distortions (E_g^(0) is shown schematically in Fig. <ref>, panels c and d, and mathematically defined in Ref. <cit.>), such as C-OD for C-type ordering, F-OD for ferro ordering, etc.; and the same nomenclature will be used for the orbital ordering (i.e. OO generically refers to C-OO, F-OO, etc.). In the higher symmetry structures which lack an OD (i.e. I4/m and I4/mmm), the Hubbard U may cause the electrons to spontaneously break space group symmetry despite the fact that we will prevent the nuclei from breaking symmetry; allowing us to disentangle different effects. This is achieved by using a reference crystal structure obtained from relaxing with U_Re=0 and then retaining this structure for U_Re>0 (this process is repeated for different values of U_Fe/U_Cr). Anytime a reference structure is employed, it will be indicated using an asterisk. Given that the OO/OD is a spontaneously broken symmetry for I4/m, we could have created a reference structure simply by enforcing space group symmetry, but this is not possible in the Ca-based systems where the OO/OD is not a spontaneously broken symmetry; and we prefer to have a uniform approach throughout. We note that in all cases we retain the small degree of tetragonality in the lattice parameters, so there is technically always a very small tetragonal distortion (√(2)a=7.865Å and c=7.901Å <cit.>). Fully relaxing the lattice parameters had a very small effect on the results in the test cases we evaluated (see Table <ref>). In all panels, solid points indicate an insulator, while hollow points indicate a metal. It should be noted that the structures with an OO (e.g. C-OO, F-OO, etc.) are merged into the same line for brevity, despite the fact that they have different space groups (see Fig. <ref>). The C-OO can easily be distinguished as it is always insulating in this compound (it is only favorable at larger values of U_Re), and the F-OO is always metallic (it is only favorable at smaller values of U_Re). The same statements clearly follow for C-OD and F-OD, given that the orbital ordering is what causes the structural distortion. Interestingly, we will show that there is a different state which can occur at intermediate values of U_Re in the region between the F-OO/F-OD and the C-OO/C-OD, and this is a ferri orbital ordering (FI-OO) and corresponding octahedral distortion (FI-OD); though the smaller magnitude OO/OD within the FI-OO/FI-OD is always nearly zero. These three regimes, F-OO/F-OD, FI-OO/FI-OD, and C-OO/C-OD, are easy to identify due to kinks in the curves, as we shall point out. The FI-OD will prove not to be important given that it tends to lose a competition with octahedral tilting. For each structure, we present the relative energy Δ E (i.e. the energy of a reference structure with respect to the ground state; panels a and b), the band gap (panels c and d), the amplitude of the OD (denoted d_|x-y|; panels e and f), and the magnitude of the OO, defined as the orbital polarization (panels g and h): P_xz,yz = (1/N_τ) ∑_τ |n_d_yz^τ - n_d_xz^τ| / (n_d_yz^τ + n_d_xz^τ), where n_d^τ is the occupancy of a given minority spin d orbital, τ labels a Re site in the unit cell, and N_τ is the number of Re atoms in the unit cell.
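This definition is straightforward to evaluate once the site-resolved minority-spin occupancies are in hand. A minimal Python sketch follows (the occupancies are made up, purely for illustration):

import numpy as np

def orbital_polarization(n_xz, n_yz):
    """P_xz,yz as defined above: the average over Re sites of
    |n_yz - n_xz| / (n_yz + n_xz)."""
    n_xz, n_yz = np.asarray(n_xz, float), np.asarray(n_yz, float)
    return float(np.mean(np.abs(n_yz - n_xz) / (n_yz + n_xz)))

# Hypothetical occupancies for a two-Re-site cell:
print(orbital_polarization([0.9, 0.1], [0.1, 0.9]))  # C-OO-like -> 0.8
print(orbital_polarization([0.5, 0.5], [0.5, 0.5]))  # unpolarized -> 0.0

Note that, because of the per-site absolute value, P_xz,yz measures the magnitude of the polarization but not its spatial phase; the different orderings are instead distinguished by the plateaus and kinks in the curves, as discussed below.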
We first focus on the left column of panels in Fig. <ref> (i.e. a, c, e, and g), where U_Fe=4, though the qualitative behavior is nearly independent of U_Fe; the few small differences will be noted as they arise. Focussing on the blue curves corresponding to the *a^0a^0a^0 structure (where the nuclei are constrained to space group I4/mmm), we see that d_|x-y| is zero, as it must be when the nuclei are confined to this space group. Despite this fact, P_xz,yz reveals a small symmetry breaking of the electronic density for U_Re≤2.2 (see panel g) where F-OO is found; this sharply transitions to a new plateau for 2.3≤U_Re≤2.7 where FI-OO is found; and finally there is a sharp transition to the C-OO insulating state for U_Re≥2.8. Therefore, the MIT occurs at approximately U_Re=2.8 in this scenario. Inspecting the relative energy, Δ E is roughly constant up until approximately U_Re=2, whereafter Δ E increases linearly due to the fact that the ground state structure has formed the C-OO/C-OD. Allowing the C-OD to condense, but still in the absence of tilts, will shift the orbital ordering to lower values of U_Re; and this is illustrated in the red curves labeled a^0a^0a^0+OD (space groups Fmmm and P4_2/mnm for the F-OD and C-OD, respectively). A jump in the value of the OD amplitude d_|x-y| can be seen occurring concomitantly with the orbital polarization. Clearly, the C-OD cooperates with the C-OO, allowing the latter to form at smaller values of U_Re and saturate at larger values. Here Δ E has two clear kinks in the slope, given that the curve begins as roughly constant, then changes to linear when the ground state forms the C-OO/C-OD, and then becomes constant once again when the C-OO/C-OD forms in this a^0a^0a^0+OD structure. We can now explore the results where we allow a^0a^0c^- tilts, but not the OD (i.e. the nuclei are frozen in the I4/m space group, see green curves labeled *a^0a^0c^-). The tilts also reduce the threshold U_Re needed to drive the C-OO insulating state as compared to the *a^0a^0a^0 reference structure. Serendipitously, this reduction is roughly the same as the a^0a^0a^0+OD reference structure; though we see that when comparing the energetics of these two cases, *a^0a^0c^- is favorable up to the largest U_Re considered (see panel a). It should be noted that the ferri FI-OO state is not realized in this case (i.e. F-OO transitions directly to C-OO). Finally, we can allow both a^0a^0c^- tilts and the OD (i.e. space group P4_2/m, see black curves labeled a^0a^0c^-+OD), which cooperate to strongly reduce the threshold for the C-OO insulating state to U_Re=2.1. Interestingly, this appears to occur because the tilts have a preference for converting the FI-OD to the C-OD (see panels e and g), which appears reasonable given that the tilt pattern of the Re alternates in the z-direction with the same phase as the C-OD. All of the same generic trends can be observed in the right column where U_Fe=0, though all transitions are shifted to higher values of U_Re, as is expected for a larger effective Re bandwidth.
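The three regimes just discussed can also be told apart directly from the signed per-site polarizations p_τ=(n_d_yz^τ-n_d_xz^τ)/(n_d_yz^τ+n_d_xz^τ): F-OO has a common sign on every site, C-OO alternates in sign with equal amplitude, and FI-OO alternates with unequal amplitude. The following schematic Python sketch encodes these criteria (the tolerance and the sample occupancies are our illustrative choices, not values from the calculations):

import numpy as np

def classify_oo(n_xz, n_yz, tol=0.05):
    """Crude classification of the orbital-ordering regime from per-site
    minority-spin d_xz/d_yz occupancies."""
    n_xz, n_yz = np.asarray(n_xz, float), np.asarray(n_yz, float)
    p = (n_yz - n_xz) / (n_yz + n_xz)  # signed per-site polarization
    if np.all(np.abs(p) < tol):
        return "no OO"
    if np.all(p > tol) or np.all(p < -tol):
        return "F-OO"   # same orbital favored on every site
    if np.ptp(np.abs(p)) < tol:
        return "C-OO"   # equal-amplitude antiferro alternation
    return "FI-OO"      # alternating, but with unequal amplitudes

print(classify_oo([0.9, 0.1], [0.1, 0.9]))    # -> C-OO
print(classify_oo([0.9, 0.45], [0.1, 0.55]))  # -> FI-OO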
Given that Sr_2FeReO_6 is metallic in experiment, and that we expect U_Fe=4 to be a reasonable value, we would infer that U_Re≤2.0 in order to be consistent with experiment (see Section <ref> for a more detailed discussion). In the region U_Re≤2.0, the energy differences are nearly constant, and it is worth noting that the predicted energy gain for octahedral tilting is reasonable given the experimentally observed transition from I4/m → Fm3̅m at T=490K. It is also useful to determine the effect of U on the magnetic moment of the transition metal sites, in addition to the number of electrons (N_d) in the correlated manifold (see Table <ref>). The Fe and Re moments are 3.65 (4.09) and 0.82 (1.34) μ_B, respectively, within GGA (GGA+U). The number of d electrons decreases by roughly 0.15 electrons for Fe as U is turned on, reflecting the unmixing of the Fe states; while the changes in Re are more modest. §.§.§ Sr_2CrReO_6 As we discussed in the literature review (see Section <ref>), several experiments suggested that Sr_2CrReO_6 is metallic with space group I4/m (demanding that d_|x-y|=0) <cit.>. However, Hauser et al. recently proposed that a fully-ordered Sr_2CrReO_6 film on the STO substrate is in fact a semiconductor with E_gap=0.21 eV <cit.>, and further suggested that the previously reported metallicity of Sr_2CrReO_6 may be due to oxygen vacancies <cit.>. Our calculations lend support to the observations of Hauser et al., showing that the C-OO/C-OD can induce an insulating state for reasonable values of U_Re. Here we perform the same analysis for Sr_2CrReO_6 as in the previous section for Sr_2FeReO_6, demonstrating the same generic behavior, but different quantitative thresholds (see Fig. <ref>). The main notable difference observed in Sr_2CrReO_6 as compared to Sr_2FeReO_6 is the energy scale for octahedral tilting, where U_Cr plays a role in stabilizing the tilt pattern. For example, when U_Cr=0, octahedral tilting is either unstable or stabilized by less than 1meV, depending on whether or not one relaxes lattice parameters in addition to internal coordinates (see Table <ref>). However, applying a non-zero U_Cr results in a small stabilization energy for a^0a^0c^- tilting, and this effect only depends weakly on U_Re prior to the onset of the C-OO (i.e. U_Re<2; see Fig. <ref>, panels a and b). Clearly, a non-zero U_Cr is essential to obtaining an energy scale for octahedral tilting which is consistent with a tilt transition of T=260K (see Table <ref>). Otherwise, all of the same trends from Sr_2FeReO_6 can be seen in Sr_2CrReO_6. If we then take a value of U_Cr=2.5, an insulating state can only be achieved if U_Re⪆2 for the ground state structure P4_2/m (i.e. a^0a^0c^-+C-OD). Given our preferred values of U (i.e. U_Cr=2.5 and U_Re=2), Sr_2CrReO_6 is insulating, as in experiment. However, given these values of U, the C-OD is a necessary condition for realizing the insulating state (i.e. compare the black and green curves in Fig. <ref>, panel c), and the C-OD is only energetically favorable by 5.5 meV (i.e. green curve in panel a). Therefore, if thermal fluctuations of the phonons were to disorder the C-OD, the system would be driven through the MIT.
The system could remain insulating in the absence of the C-OD if U_Re⪆2.4, but then Sr_2FeReO_6 would be insulating with a C-OD stabilized by 34.4 meV for U_Re=2.4; inconsistent with experiment. Therefore, the C-OD should condense at sufficiently low temperatures in experiment, and the space group should be measured to be P4_2/m instead of I4/m given sufficiently clean samples. Later we demonstrate that SOC introduces quantitative changes, but the same general conclusion holds. Future experiments can test this prediction. Given that the experimental insulating state was realized via growth on STO, it is worthwhile to determine the influence of imposing the STO lattice parameter (a=3.905Å), which amounts to ∼0.04% compressive strain compared to the optimized lattice parameter within GGA+U (with U_Cr=2.5 and U_Re=2). We find that this strain has only a small effect on energy differences, resulting in a difference of -17.8 meV for P4_2/m-P4_2/mnm, as compared to -18.5 meV for the bulk case in Table <ref>; and therefore we do not believe the substrate has any substantial effect. The magnetic moments and number of electrons as a function of U are summarized in Table <ref>. The Cr and Re moments are 2.24 (2.65) and 1.07 (1.53) μ_B, respectively, within GGA (GGA+U). However, note that the total moment is constant (1μ_B/f.u.) within both GGA and GGA+U. The number of d electrons decreases by ∼0.06 for Cr as U is turned on, which is almost half of the change of N_d of Fe in Sr_2FeReO_6. The smaller change of N_d(Cr) also reflects the weaker Cr-Re hybridization. §.§.§ Ca_2FeReO_6 We now move on to the case of Ca_2FeReO_6, which has the lower symmetry space group P2_1/n (a^-a^-b^+ tilt, see Fig. <ref>) and is measured to be an insulator with a 50meV energy gap at low temperature (see Section <ref> for a detailed review). Given the smaller size of Ca relative to Sr, the tilts in both Ca-based materials are large in magnitude (see Table <ref> for octahedral mode amplitudes) and retained up to the highest temperatures that have been studied in experiment (i.e. 300K and 550K for Ca_2CrReO_6 and Ca_2FeReO_6, respectively). For example, the two in-plane and one out-of-plane ∠Fe-O-Re are 151.2, 151.8, and 152.4^∘ at 7K, and both DFT and DFT+U accurately capture the large magnitude of the octahedral tilts: ∠Fe-O-Re are 149.9, 151.1, and 150.5^∘ using DFT; and 149.7, 150.0, and 150.8^∘ using DFT+U (U_Fe=4 and U_Re=2). Furthermore, these large tilts substantially reduce the effective Re bandwidth, resulting in a smaller critical value of U_Re needed to drive the C-OO induced insulating state, as we will now illustrate. In Sec. <ref>, we briefly discussed the structures of Ca_2FeReO_6 obtained at low and high temperatures <cit.>, as summarized in Fig. <ref>. According to experiment, there is an appreciable C-OD amplitude below the phase transition (e.g. C-OD^+, d_|x-y|=0.014Å at T=7K), and it is highly suppressed and swapped to the alternate variant above the transition temperature of T=140K (e.g. C-OD^-, d_|x-y|=0.005Å at T=300K). It should be emphasized that the C-OD is not a spontaneously broken symmetry in this structure, in contrast to the Sr case (see symmetry lineage in Fig. <ref>). We now elaborate on the fact that there are two types of C-OD within the monoclinic P2_1/n structure (see schematic in Fig. <ref>). We will use the notation C-OD^+ to denote the ordering where a given Re-O octahedron has the same sign for the E_g^(0) mode (defined in the unrotated coordinate system) and the rotation mode (i.e.
both modes positive or both modes negative); whereas C-OD^- indicates opposite signs. The structures below and above the MIT exhibit C-OD^+ and C-OD^-, respectively. Note that C-OD^+ and C-OD^- are distinguishable only in the monoclinic (i.e. a≠b) double perovskites, whereas they are symmetry equivalent in the tetragonal double perovskites (e.g. in the Sr-based systems). GGA results in C-OD^+, and C-OD^- is not even metastable (i.e. it relaxes back to C-OD^+); though the C-OD^+ amplitude is negligibly small (i.e. d_|x-y|<0.001Å). The energies of C-OD^+ and C-OD^- become distinct as U_Re increases, while the relative stability also depends on U_Fe. As depicted in Fig. <ref>(a), C-OD^+ switches to C-OD^- at U_Re=1.4 when U_Fe=4, and the energy difference increases as a function of U_Re. When U_Fe=0, as shown in Fig. <ref>(b), C-OD^+ is always favorable, and its stability increases in the range U_Re=1.8-2.5eV. Since the energy difference between the two orderings is very small, we simply follow C-OD^+ (i.e. the low temperature variant) whenever applying DFT+U. In terms of the C-OD amplitudes, GGA+U and GGA agree more closely with the low-T and high-T structures, respectively (see Figure <ref>), though GGA+U overestimates and GGA underestimates d_|x-y|. We now perform the same analysis as for the Sr-based systems, except that the untilted structure does not need to be considered given its large energy scale. In the Sr-based systems, we considered high symmetry reference structures, where we allowed the electrons to spontaneously break symmetry but prevented the structure from doing so by fixing it at the relaxed U_Re=0 structure (though non-zero U_Cr/U_Fe was included in creating a relaxed reference structure). The same recipe can be followed in the Ca-based cases, despite the fact that the U_Re=0 structure has an identical space group symmetry, and this reference structure will be referred to as *a^-a^-b^+; where the asterisk indicates that this is a reference structure in which we have effectively removed the C-OD which is induced by orbital ordering. Comparison to the reference structure will give insight into the importance of the C-OD in realizing the C-OO. Additionally, we will also study the unrelaxed experimentally measured structures from T=120K and T=160K, which straddle the T=140K phase transition. Due to the strong octahedral tilting, only the C-OO/C-OD is found in the Ca-based results, as opposed to the Sr-based systems where ferro and ferri OO/OD's are observed. We begin by focussing on the reference structure *a^-a^-b^+, depicted by a green curve, where the C-OD amplitude is negligibly small irrespective of U_Fe (see Fig. <ref>, panels e and f). Increasing U_Re causes the orbital polarization to increase, and an insulating state (solid point) is eventually realized at U_Re=2.4 for the case of U_Fe=4 (see Fig.
<ref>, panels c, e, and g). For U_Fe=4, the relative energy Δ E increases rather slowly for U_Re⪅1.4, and the slope increases thereafter due to the fact that the ground state experiences the C-OO/C-OD at U_Re≈1.4. As in the Sr-based systems, turning off the U on the 3d transition metal shifts the metal-insulator phase boundary to larger values of U_Re, and an insulating state is not achieved for U_Re≤2.5 if U_Fe=0 (panels d, f, and h). Hereafter we focus our discussion on U_Fe=4, as all the same qualitative trends hold upon decreasing U_Fe. The experimental T=160K structure (depicted by a red curve) shows relatively small differences as compared to the *a^-a^-b^+ reference structure, with the band gap being quantitatively similar. We now move on to the fully relaxed structure, where the C-OD amplitude is allowed to relax as U_Re is increased (depicted as a black curve). For U_Re≤1.3, both the orbital polarization (i.e. P_xz,yz) and the C-OD amplitude (i.e. d_|x-y|) are very small with a weak U_Re dependence; comparable in magnitude to the reference structure. Once U_Re>1.3, there is a sharp increase in the C-OO/C-OD amplitude, and the system becomes an insulator for U_Re≥1.7. Therefore, the cooperation of the C-OO and C-OD greatly reduces the critical U_Re needed to drive the insulating state, from a value of U_Re=2.4 in the reference *a^-a^-b^+ structure down to a value of 1.7; which is the same trend as in the case of the Sr-based systems. It is interesting to compare the relaxed C-OD amplitude to that of the T=120K experimental structure, depicted as a blue curve. In the relaxed structure, the smallest value of U_Re which has an insulating state is 1.7, and already the C-OD amplitude is nearly twice that of the experimental T=120K structure. However, later we demonstrate that including SOC dampens the C-OD amplitude (though not enough to agree with experiment, see Section <ref>, Fig. <ref>). The T=120K and T=160K experimental structures produce a critical U_Re of 2.0 and 2.4 for the C-OO/C-OD, respectively, which is still appreciably different. Given the substantial renormalization of the critical U_Re between the *a^-a^-b^+ reference structure and the fully relaxed structure, and analogously between the two experimental structures, it is interesting to consider the possibility of the anharmonic phonon free energy being the primary driving force of the MIT as a function of temperature. In this scenario, the structural transition is driven by the phonon free energy, and the resulting change in the structure is sufficient to renormalize the critical value of U_Re and drive the system through the MIT. Using our prescribed values of U_Re=2.0 and U_Fe=4 (given that we are not yet using spin-orbit coupling), we plot the site/orbital projected electronic density-of-states for the *a^-a^-b^+ reference structure and the relaxed a^-a^-b^+ structure (see Fig. <ref>). As shown, the result is a metal for the *a^-a^-b^+ structure and an insulator for the a^-a^-b^+ structure, with the latter having a gap of 110meV; slightly larger than the relatively small experimental gap of 50meV.
While a greater value of U_Re would yield an insulator in the *a^-a^-b^+ structure, this sort of tuning is discouraged by the fact that Sr_2FeReO_6 would wrongly be driven into a C-OO/C-OD insulating state, in contradiction with experiment (assuming a common value of U_Re is utilized). Future work will determine if this phonon driven scenario is dominant, as opposed to the other extreme where temperature disorders the electrons (see Section <ref> for further discussion of these scenarios). §.§.§ Ca_2CrReO_6 Similar to Ca_2FeReO_6, Ca_2CrReO_6 results in a monoclinic structure P2_1/n (a^-a^-b^+ tilt, see Fig. <ref>) with an insulating ground state. Both DFT and DFT+U reasonably capture the large magnitude of the octahedral tilts: the two in-plane and one out-of-plane ∠Cr-O-Re are 153.8, 154.1, and 154.9^∘ using DFT; 151.7, 151.0, and 152.7^∘ using DFT+U (U_Cr=2.5 and U_Re=2); and 153.1, 154.3, and 155.0^∘ as measured at T=300K in experiment <cit.>. In terms of the C-OD amplitude, the experimental value of d_|x-y| at 300K reported by Kato et al. is 0.003Å, which is smaller than the d_|x-y|=0.005Å of Ca_2FeReO_6 at the same temperature <cit.>; this suggests that the C-OD has been disordered at 300K, yet the transport still suggests an insulating state. Unfortunately, the low temperature values of d_|x-y| have not yet been measured, but we will demonstrate that a large C-OD amplitude is expected, just as in the case of Ca_2FeReO_6. Just as in the case of Ca_2FeReO_6, the C-OD may form in either the C-OD^+ or C-OD^- variant. Unlike Ca_2FeReO_6, the C-OD^+ ordering is more stable over a broad range of U_Re, as depicted in Figs. <ref>. The energy difference between the C-OD variants is relatively small as compared to the case of Ca_2FeReO_6, which might be due to the smaller difference between the respective a and b lattice parameters. More specifically, b-a is 0.070Å in Ca_2FeReO_6, while b-a is 0.026Å in Ca_2CrReO_6. In both cases, the energy difference between C-OD^+/C-OD^- is well within the error of DFT+U. As in the case of Ca_2FeReO_6, here we only present the results of the C-OD^+ ordering. We now perform the same analysis as in the case of Ca_2FeReO_6, computing the orbital polarization, C-OD amplitude, band gap, and relative energy of the ground state structure a^-a^-b^+ and the reference structure *a^-a^-b^+ as a function of U (see Fig. <ref>). The same trends are observed as compared to Ca_2FeReO_6, with the only differences being quantitative changes due to the smaller effective Re bandwidth in the Cr-based systems. Interestingly, the C-OD amplitude rapidly saturates after its onset, and the relative energy difference Δ E shows three distinct regions. The third region, corresponding to U_Re>1.6 and U_Cr=2.5, corresponds to the formation of the C-OO in the *a^-a^-b^+ reference structure, whereby the energy penalty of U_Re in the *a^-a^-b^+ structure is reduced via polarization.
This region could not be clearly seen in the Ca_2FeReO_6 case given that the corresponding transition occurs just preceding the maximum value of U_Re in the plot, and the magnitude of the effect should be smaller given the larger effective Re bandwidth. Most importantly, the critical threshold of U_Re for driving the MIT is strongly reduced, requiring only U_Re=1.4 in the relaxed structure (with U_Cr=2.5); and a similar renormalization occurs in the reference structure *a^-a^-b^+, which now only needs U_Re=1.7 to achieve an insulating state. This has interesting implications, as the critical U_Re is now sufficiently small in the reference structure that the insulating state may survive in the absence of any appreciable C-OD amplitude. If we assume our preferential values of U_Re=2 and U_Cr=2.5, we see that both the relaxed structure and the reference structure are insulators (see Fig. <ref> for the projected DOS). This result is consistent with the experimental measurements on Ca_2CrReO_6, which find no appreciable C-OD amplitude, as in our reference structure, yet still measure an insulating state <cit.>; though further experiments are clearly needed in this system before drawing conclusions. One could argue that choosing a smaller value of U_Re could yield the same behavior as Ca_2FeReO_6, where the loss of the C-OD amplitude destroys the C-OO and results in a metallic state, but this sort of tuning would be forbidden by the fact that U_Re≥2.0 is needed to obtain the experimentally observed insulating state in Sr_2CrReO_6. Therefore, Ca_2CrReO_6 could be a concise example where orbital ordering can clearly be observed in the (near) absence of a concomitant structural distortion (i.e. at a temperature where the C-OD is suppressed but the C-OO survives). §.§ Effect of spin-orbit coupling The strength of the spin-orbit coupling (λ) can be up to 0.5eV in the 5d transition metal oxides, which is non-negligible when compared to U and the bandwidth. In the better known example of the iridates, the t_2g bandwidth is approximately 1eV, and thus a spin-orbit coupling of λ=0.3-0.5eV plays an important role in realizing the insulating state <cit.>. The effect of SOC in the Re-based DPs will be smaller than in the iridates, given that the t_2g bandwidth of Re is closer to 2eV and the strength of the SOC of Re will also be smaller due to the smaller atomic number of Re. For example, our comparison of the Re-projected DOS with and without SOC in the Sr-based systems demonstrated changes of approximately 0.2eV (see Fig. <ref>, panels c and d). While SOC does not qualitatively change any major trends, the small quantitative changes can be relevant, as we will demonstrate. In this section, we will explore the magnetic anisotropy energy as a function of U_Re, in addition to repeating our previous analysis of the orbital polarization, the OD amplitude, band gap, and relative energetics. Here we will only consider U_Fe=4 and U_Cr=2.5. We begin by considering the magnetic anisotropy energy (E_ma) as summarized in Fig. <ref>. We define E_ma as the relative energy (per Re) of a given magnetic orientation with respect to the energy of the [001] orientation (e.g. E_ma[010]=E[010]-E[001]). The magnetic orientation is particularly important since the threshold of U_Re for the C-OO/C-OD depends on the magnetic orientation, and shifts as large as 0.4eV can be observed for Ca_2FeReO_6.
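In practice, each point on the E_ma curves corresponds to an independent non-collinear calculation with the quantization axis constrained via the standard VASP tags LSORBIT and SAXIS. The Python sketch below generates such inputs; it is illustrative only (the species ordering in LDAUL/LDAUU and the LDAUTYPE shown are assumptions made for the example; our actual settings follow Section <ref>):

AXES = {"[100]": (1, 0, 0), "[010]": (0, 1, 0),
        "[001]": (0, 0, 1), "[101]": (1, 0, 1)}

def incar_for_axis(axis, u_3d=4.0, u_re=1.9):
    # Species order A, B(3d), Re, O is assumed to match the POSCAR.
    sx, sy, sz = AXES[axis]
    return "\n".join([
        "GGA      = PS            ! PBEsol",
        "ENCUT    = 500",
        "LSORBIT  = .TRUE.        ! non-collinear run with SOC",
        f"SAXIS    = {sx} {sy} {sz}        ! quantization axis {axis}",
        "LDAU     = .TRUE.",
        "LDAUTYPE = 1             ! rotationally invariant DFT+U (one plausible choice)",
        "LDAUL    = -1 2 2 -1",
        f"LDAUU    = 0 {u_3d} {u_re} 0",
        "LDAUJ    = 0 0 0 0       ! J = 0, as discussed in the computation details",
    ])

for axis in AXES:
    print(incar_for_axis(axis), end="\n\n")

The anisotropy energy is then assembled as E_ma[hkl]=E[hkl]-E[001] per Re, exactly as defined above.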
For Sr_2FeReO_6, the magnetization along [001] is most stable in our calculations, as shown in Fig. <ref>, panel (a), whereas the magnetic moments are aligned in the ab-plane in the experiment at 298K <cit.>. This appears to be a discrepancy, though we only explored the [100] and [010] directions within the ab-plane, so it is possible that some other direction within the plane is lower. Also, our calculations are at T=0, while the experiments were done at T=298K. Otherwise, this could serve as an interesting failure of the method (albeit for a very small energy scale). Nonetheless, Sr_2FeReO_6 is metallic with U_Re<2.0 in all orientations that we explored. For Sr_2CrReO_6, the magnetizations along the [100] and [010] directions are equivalent, as shown in Fig. <ref>, panel (b). Interestingly, [001] is more stable for small U_Re, but this trend is reversed once the system goes through the C-OO/C-OD, and there is a magnetic easy ab plane for U_Re≥1.8. Given our preferred values of U_Re=1.9 and U_Cr=2.5 (see Section <ref>), DFT+U results in an easy ab plane. A recent experiment by Lucy et al. showed that a Sr_2CrReO_6 film on SrTiO_3 and (LaAlO_3)_0.3(Sr_2AlTaO_6)_0.7, corresponding to 0.09% and 1.04% compressive strains, results in a magnetic easy axis within the ab plane at both low (20K) and high T (300K) <cit.>. For Ca_2FeReO_6, Rietveld refinement determined that the magnetization easy axis below T_MIT is the b-axis (i.e. [010]), while above T_MIT the magnetization easy axis changes <cit.>; though there is not yet consensus on the direction. Granado et al. suggested that the Fe and Re moments lie in the ac-plane, where the magnetization angle from the a axis is 55^∘ (close to [101]) <cit.>, whereas Oikawa et al. showed that [001] is the easy axis <cit.>. We will explore [100], [010], [001], and [101] in the ground state structure, while primarily focussing on [001] in the *a^-a^-b^+ reference structure; though with the latter we investigate a few scenarios using [101]. By using the experimental atomic coordinates and LDA+U calculations (U_Re=3 and J_Re=0.7), Antonov et al. showed that [010] is the easy axis and [001] is lower in energy than [100], for both the low-T and high-T experimental structures <cit.>. Gong et al. found the same result using the mBJ potential <cit.>, despite the fact that they were using the GGA relaxed structure, which more closely resembles the experimental structure above the phase transition. We also found the same ordering, which proved to be independent of the value of U_Re, even when crossing the C-OO/C-OD transition (see Fig. <ref>, panel c). Given that above the MIT Granado et al. found [101] to be the easy-axis, we also explore this direction; demonstrating that it is very similar to [100]. Interestingly, the magnetic orientation can have an appreciable effect on the onset of the C-OO/C-OD. For Ca_2CrReO_6, we are not aware of any experimental data on the magnetic easy axis. From an mBJ study with a GGA-relaxed structure, Gong et al. reported that [010] is the easy axis, and E_ma[001] > E_ma[100] <cit.>. Alternatively, our GGA+U+SOC calculations suggest that [100] is the easy axis for U_Re≥0.9 (see Fig. <ref>, panel (d)). Given our preferred values of U_Re=1.9 and U_Cr=2.5 (see Section <ref>), we would expect an easy axis of [100], with [010] and [001] very close in energy. Having established the easy axis for each material, we now repeat the previous analysis probing the behavior as a function of the Hubbard U, but now including SOC and the easy axis as determined from DFT+U (see Figs.
<ref>-<ref>); and it should be kept in mind that the predicted easy-axis for Sr_2FeReO_6 disagrees with experiment. Summarizing, we consider Sr_2FeReO_6 [001], Sr_2CrReO_6 [100], Ca_2FeReO_6 [010], and Ca_2CrReO_6 [100]. Given that SOC will break the block diagonal structure of the single-particle density matrix in the spin sector, it is useful to introduce a more general measure of orbital polarization than the definition used in equation (<ref>); and we will utilize the standard deviation of the eigenvalues of the local single-particle density matrix for the correlated subspace, denoted σ_τ (this is a component of the DFT+U energy functional, see Ref. <cit.> for a detailed derivation): σ_τ = √(∑_m (n_m^τ - μ_τ)^2 / N_orb) and μ_τ = ∑_m n_m^τ / N_orb, where m labels an eigenvalue of the single-particle density matrix for the correlated subspace (i.e. the eigenvalues of the 10×10 single-particle density matrix for the case of d electrons), τ labels a Re site in the unit cell, and N_orb=10 for d electrons. The orbital polarization is then defined to be σ_τ. We begin with the Sr-based materials, Sr_2FeReO_6 and Sr_2CrReO_6, characterizing the effect of the SOC for the relaxed structure a^0a^0c^-+OD (e.g. P4_2/m for a^0a^0c^-+C-OD, etc.) and the reference structure I4/m (*a^0a^0c^-) (see Fig. <ref>). The previously presented results without SOC are included to facilitate comparison, in addition to providing updated values for our new metric of orbital polarization σ_τ. As expected, SOC is a relatively small perturbation in all cases, though there are some interesting differences. We begin by examining the orbital polarization for the reference structures *a^0a^0c^-, where the C-OD amplitude is restricted to be zero (see panels g and h). For smaller values of U_Re, prior to the C-OO transition, SOC enhances the orbital polarization at a given value of U_Re in the F-OO state (comparing lines with up and down triangles). For Sr_2CrReO_6, the critical U_Re for the C-OO transition is shifted down by about 0.2eV (compare lines with up and down triangles), indicating that the SOC is facilitating the onset of the C-OO and the resulting MIT. This renormalization of U_Re is much smaller for Sr_2FeReO_6 and cannot be seen at the resolution we have provided. In both cases, the magnitude of the orbital polarization beyond the C-OO transition is very similar with and without the SOC. Allowing the C-OD to condense in the relaxed structures shows similar behavior (see red and blue curves). In both Sr_2FeReO_6 and Sr_2CrReO_6, SOC pushes the onset of the C-OO/C-OD to smaller values of U_Re; more substantially in the case of Cr. As a result, including SOC causes the gap to open at slightly smaller values of U_Re: approximately 0.1 less for Sr_2FeReO_6 and 0.2 less for Sr_2CrReO_6. Notably, the C-OD amplitude for the metallic phase of Sr_2FeReO_6 is dampened to zero, in agreement with experiment. Somewhat counterintuitively, SOC results in smaller C-OD amplitudes for values of U_Re beyond the MIT, despite causing an earlier onset of the C-OD. For the relative energetics, in both compounds SOC decreases the stabilization energy of the C-OD for U_Re⪆2.1 (see panels a and b), consistent with the reduced magnitude of the C-OD. Given our preferred value of U_Re=1.9 for SOC, we find that Sr_2FeReO_6 is metallic with space group I4/m (i.e. no condensation of the OD), consistent with experiment; while Sr_2CrReO_6 is insulating with a non-zero C-OD amplitude (i.e.
space group P4_2/m), stabilized by roughly 14meV. In the Ca-based systems, the effects of SOC are slightly more pronounced (see Fig. <ref>), which is likely associated with the smaller Re t_2g bandwidth, but the trends are all the same as in the Sr-based materials. We begin by analyzing the orbital polarization in the reference structure *a^-a^-b^+, where the C-OD has effectively been removed (see panels g and h, curves with pink-up and blue-down triangles). For small values of U_Re, SOC mildly enhances the orbital polarization, but the differences diminish once both cases form the C-OO insulator. However, SOC has a more dramatic effect in the Ca-based systems in terms of shifting the C-OO induced MIT to smaller values of U_Re, giving a reduction of 0.7 and 0.4eV for the Fe-based and Cr-based material, respectively (see panels c and d, curves with pink-up and blue-down triangles). For the relaxed structures (see red and blue curves), the C-OD is activated at much smaller values of U_Re in both materials, more so for the case of Ca_2CrReO_6. Furthermore, Ca_2FeReO_6 reaches a relatively smaller value of the C-OD amplitude beyond the C-OO induced MIT, while Ca_2CrReO_6 saturates at roughly the same value. Given our preferred value of U_Re=1.9, both Ca_2FeReO_6 and Ca_2CrReO_6 are insulators with an appreciable C-OD amplitude, consistent with known experiments (though the low temperature structural parameters of Ca_2CrReO_6 have not yet been measured). Furthermore, SOC has reduced the C-OD amplitude of Ca_2FeReO_6, moving it closer to the experimental value (see panel e, red curve). For Ca_2FeReO_6, we also investigate the behavior of the [001] magnetization direction for both the reference structure *a^-a^-b^+ and the ground state structure a^-a^-b^+, which is essential given that experiment dictates [001] is approximately the easy-axis above the MIT, where the C-OD is suppressed. For a^-a^-b^+, the [001] orientation is higher in energy than [010], with the difference being enhanced as U_Re increases (see Fig. <ref>, panel c, green curve). Furthermore, for [001] the threshold value of U_Re for the onset of the C-OO/C-OD is increased, and the magnitude of the band gap and C-OD amplitude are diminished at a given value of U_Re (see Fig. <ref>, panels c and e, dark green triangles). More relevantly, the same trends are observed in the reference structure *a^-a^-b^+, but the effect is amplified (light green triangles). In particular, the critical value of U_Re for the C-OO/C-OD dramatically increases from 1.8 to 2.2 eV as the magnetization switches from [010] to [001] (compare pink and light green curves, respectively). We also investigate the case of the [101] magnetization direction. The overall features of [101] are similar to the case of [001] (not shown), except that the critical value of U_Re for the C-OO/C-OD in the reference structure is increased to 2.4eV. In Section <ref>, where SOC was not yet included, we elucidated the possibility that a suppression of the C-OD (e.g. via thermal fluctuations) closes the band gap by moving the critical value of U_Re beyond our expected value of U_Re=2.0 within GGA+U (see Figure <ref>). This could have been a viable mechanism for the MIT, but SOC is strong enough to alter this scenario (see Figure <ref>, panels a and b, using U_Re=1.9). Given the [010] magnetization direction, the gap is reduced in the reference structure, but it does not close, unlike the case where SOC is not included. However, the experiments of Oikawa et al.
dictate that [001] should be the easy axis of the high temperature structure, in contradiction with DFT+U+SOC using our reference structure (though our predicted energy difference is less than 6meV). If we consider the [001] direction in the reference structure *a^-a^-b^+, we see that the gap has indeed closed (see Figure <ref>, panel (c)); the gap also closes for the [101] direction. Therefore, it is possible that the reorientation of the magnetization is important to the MIT. In summary, we see that for U_Re=1.9, Sr_2FeReO_6 is a metal, while the remaining systems are C-OO induced insulators. The general physics that was deduced in the absence of SOC holds true, with some small renormalizations of various observables. Slightly reducing the value of U_Re allows for results which are qualitatively consistent with experiment, with the caveat that the easy-axis of Sr_2FeReO_6 disagrees with experiment. Another interesting feature of SOC is the nonzero orbital moments of Re. The spin and orbital moments of Re within GGA+SOC and GGA+U+SOC with U_Re=1.9 are summarized in Table <ref>. The direction of the Re orbital moment is opposite to that of the spin moment, in agreement with previous experiments <cit.> and GGA+SOC <cit.>. As presented in Table <ref>, varying results have been measured for the magnitude of the spin and orbital moments by different groups. However, the |m_L/m_S| values are more or less consistent <cit.>, since this quantity is not affected by possible uncertainties in the calculated number of holes <cit.>; thus these values are better suited for comparing theory and experiment. While GGA largely underestimates the experimental |m_L/m_S| values, GGA+U gives a much better estimate of |m_L/m_S|. §.§ Optimal U values Exploring a range of U is a necessary burden for several reasons. First, the procedure for constructing both the interactions and the double-counting correction is still an open problem. Second, given that the DFT+U method is equivalent to DFT+DMFT when the DMFT impurity problem is solved within Hartree-Fock <cit.>, DFT+U contains well known errors which may be partially compensated by artificially renormalizing the U to smaller values. Given that our most basic concern in this paper is to develop a qualitative, and perhaps even semi-quantitative, understanding of an entire family of Re-based double perovskites, performing an empirical search for a single set of U's which can capture the physics of this family was essential. In Secs. <ref> and <ref>, we have explored various observables for a range of values of U. Clearly, U_Re is the main influence, as it is a necessary condition for driving the C-OO insulating state in the entire family of materials, in addition to the C-OD. However, we also demonstrated that the U of the 3d transition metal could play an important indirect role, via renormalizing the critical value of U_Re for the C-OO/C-OD to smaller values. Also, for the case of Sr_2CrReO_6, a nonzero U_Cr was important for properly capturing the energetics of the a^0a^0c^- tilt pattern. For the 3d transition metals, we typically only explored U=0 and another value which is in line with expectations based on previous literature or methods for computing U. For Cr, we used U_Cr=2.5 eV, which is similar to values used for CaCrO_3 <cit.> and Cr-related DPs (U=3 eV and J=0.87 eV) <cit.>. For Fe, we focus on U_Fe=4 eV, as widely used elsewhere <cit.>.
Excessive tuning of U_Fe or U_Cr is not needed based on our results, and the nonzero values that we evaluated were either necessary to capture a given phenomenon (i.e. the tilts in Sr_2CrReO_6), or were needed for a consistent and reasonable value of U_Re (via the indirect influence of U_Fe or U_Cr). Therefore, U_Fe=4 eV and U_Cr=2.5 eV are reasonable values to adopt, though a range of values would likely suffice. In the case of U_Re, we explored a large number of values between 0-3.2eV. The overall goal for selecting a set of U's is to obtain the proper ground states in the entire family of materials, which is nontrivial given that Sr_2FeReO_6 is metallic and the rest are insulators. While it is possible for U_Re to have small changes due to differences in screening among the four materials, these differences should be relatively small given the localized nature of the d orbitals which comprise the correlated subspace; and therefore we do seek a common value for all four compounds. We conclude that U_Re=2.0 and 1.9 are reasonable values within GGA+U and GGA+U+SOC, respectively, and these values will properly result in a metal for Sr_2FeReO_6 and insulators for the rest. The predicted band gap E_gap for Ca_2FeReO_6 (i.e. 105meV and 150meV within GGA+U and GGA+U+SOC, respectively) is somewhat larger than in experiment (i.e. 50meV), but this seems reasonable given the nature of the approximations we are dealing with. For Ca_2CrReO_6, we obtain E_gap=250 and 270meV using GGA+U and GGA+U+SOC, respectively (the experimental gap is not known); while E_gap of Sr_2CrReO_6 within GGA+U and GGA+U+SOC is 120 and 40meV, respectively, somewhat smaller than the experimental value of 200meV <cit.>. It is also interesting to compute U_Re via the linear response approach <cit.>. In Sr_2CrReO_6 and Ca_2CrReO_6, we obtained U_Re=1.3 for both systems; the calculation employed a supercell containing 8 Re atoms. Therefore, linear response predicts a relatively small value for U, consistent with 5d electrons, but too small to be qualitatively correct: Sr_2CrReO_6 could not be an insulator with such a small value. §.§ Future challenges for experiment The central prediction of our work is that the minority spin Re d_xz/d_yz orbitals order in a q_fcc=(0,1/2,1/2) motif, along with occupied minority spin Re d_xy orbitals, in Sr_2CrReO_6, Ca_2FeReO_6, and Ca_2CrReO_6. This section explores how this prediction may be tested in experiment. This orbital ordering results in a narrow gap insulator in our calculations, consistent with the insulating states observed in experiment for these compounds (see Section <ref>). However, more direct signatures of the orbital ordering are desired. Perhaps the most straightforward experiment is precisely resolving the crystal structure of insulating Sr_2CrReO_6 at low temperatures. Given that the C-OO breaks the symmetry of the I4/m space group, inducing the C-OD, experiment may be able to detect the resulting P4_2/m space group at low temperatures. Such a measurement would serve as a clear confirmation of our predicted orbital ordering. Precisely resolving the bond lengths of Ca_2CrReO_6 at low temperatures would also be beneficial.
While the C-OO/C-OD is not a spontaneously broken symmetry in Ca_2CrReO_6, an enhancement of d_|x-y| is predicted in our calculations, similar to what has already been experimentally observed in the case of Ca_2FeReO_6. Other experiments could possibly probe the orbital ordering directly, such as X-ray linear dichroism. Once again, Sr_2CrReO_6 may be the best test case given that the orbital ordering is a spontaneously broken symmetry. § SUMMARY In summary, we investigate the electronic and structural properties of Re-based double perovskites A_2BReO_6 (A=Sr, Ca and B=Cr, Fe) through density-functional theory + U calculations, with and without spin-orbit coupling. All four compounds share a common low energy Hamiltonian, which is a relatively narrow Re t_2g minority spin band that results from strong antiferromagnetic coupling to the filled 3d majority spin shell (or sub-shell) of the B ion. Cr results in a narrower Re t_2g bandwidth than Fe, while Ca-induced tilts result in a narrower Re t_2g bandwidth than Sr-induced tilts; this yields a ranking of the Re t_2g bandwidth as Sr_2FeReO_6, Sr_2CrReO_6, Ca_2FeReO_6, and Ca_2CrReO_6 (from largest to smallest). Spin-orbit coupling is demonstrated to be a relatively small perturbation, though it can still result in relevant quantitative changes. In general, we show that the on-site U_Re drives a C-type (i.e. q_fcc=( 0,1/2,1/2) given the primitive face-centered cubic unit cell of the double perovskite) antiferro orbital ordering (denoted C-OO) of the Re d_xz/d_yz minority spin orbitals, along with minority d_xy being filled on each site, resulting in an insulating ground state. This insulator is Slater-like, in the sense that the C-type ordering is critical to opening a band gap. Interestingly, this C-OO can even occur in a cubic reference structure (Fm3̅m), in the absence of any structural distortions, for reasonable values of U_Re. Furthermore, allowing structural distortions demonstrates that this C-OO is accompanied by a local E_g structural distortion of the octahedra with C-type ordering (denoted as C-OD); and it should be emphasized that U_Re is a necessary condition for the C-OO/C-OD to occur. The C-OO/C-OD will be a spontaneously broken symmetry for a^0a^0c^--type tilt patterns as in the Sr based systems (i.e. I4/m → P4_2/m), but not for the a^-a^-b^+-type tilting pattern of the Ca based systems (i.e. P2_1/n → P2_1/n). While U_Re is a necessary condition for obtaining an insulating state, the presence of the C-OD will reduce the critical value of U_Re necessary for driving the orbitally ordered insulating state, as will the U on the 3d transition metal. Furthermore, the C-OD is necessary for reducing the critical U_Re to a sufficiently small value such that Sr_2FeReO_6 remains metallic while Sr_2CrReO_6 is insulating. More specifically, using a single set of interaction parameters (i.e. U_Re=1.9 eV, U_Fe=4 eV, U_Cr=2.5 eV, when using SOC), we show that Sr_2CrReO_6, Ca_2CrReO_6, and Ca_2FeReO_6 are all insulators, while Sr_2FeReO_6 is a metal; consistent with the most recent experiments.
Previous experiments concluded that Sr_2CrReO_6 was half-metallic <cit.>, but recent experiments showed that fully ordered films grown on an STO substrate are insulating <cit.>. We show that Sr_2CrReO_6 is indeed insulating with U_Re=1.9 eV, so long as the structure is allowed to relax and condense the C-OD. Given that the C-OD is a spontaneously broken symmetry in this case, the challenge for experimental verification will be resolving the P4_2/m space group at low temperatures instead of the higher symmetry I4/m group. While the C-OD is not a spontaneously broken symmetry in Ca_2FeReO_6, experiment dictates that there is an unusual discontinuous phase transition at T=140 K between two structures with the same space group, P2_1/n; with the high temperature structure being metallic and the low temperature structure being insulating. The main structural difference between the experimental structures is the C-OD amplitude: d_|x-y| is 0.016 and 0.005 Å in the structures at 120 K and 160 K, respectively. Additionally, the C-OD changes variants across the transition, going from C-OD^+ (120 K) to C-OD^- (160 K). The appreciable C-OD^+ amplitude measured in low temperature experiments is consistent with our prediction of a large C-OD amplitude which is induced by the C-OO. The same trends are found in Ca_2CrReO_6, which has a narrower Re bandwidth and results in a more robust insulator with a larger band gap. Predicting the transition temperature from first principles will be a great future challenge given that the temperature of the electrons and the phonons may need to be treated on the same footing, all while accounting for the spin-orbit coupling. SOC is a small quantitative effect, though it can have a relevant impact, such as lowering the threshold value of U_Re for inducing the C-OO/C-OD in the Ca-based compounds; it even has a strong dependence on the magnetization direction for Ca_2FeReO_6. GGA+U+SOC predicts the easy axis of Sr_2CrReO_6 and Ca_2FeReO_6 to be {100} and [010], respectively, consistent with experiment, and also compares well to the experimental measurements of the magnitude of the orbital moment. It should be emphasized that U_Re, and the C-OO/C-OD which it induces, is critical to obtaining the qualitatively correct easy axis in Sr_2CrReO_6. In the case of Sr_2FeReO_6, GGA+U+SOC predicts a [001] easy axis, in disagreement with one experiment which measured the easy axis to be in the a-b plane. Additionally, the GGA+U+SOC predicted ratios of orbital to spin moment m_L/m_S are close to the experimental values, whereas GGA+SOC largely underestimates them. § ACKNOWLEDGMENTS We thank K. Oikawa and K. Park for helpful discussions. This work was supported by the grant DE-SC0016507 funded by the U.S. Department of Energy, Office of Science. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
http://arxiv.org/abs/1708.07798v2
{ "authors": [ "Alex Taekyung Lee", "Chris A. Marianetti" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170825162311", "title": "Structural and metal-insulator transitions in rhenium based double perovskites via orbital ordering" }
H. Aziz B. E. Lee Data61, CSIRO and UNSW, Sydney 2052, Australia Tel.: +61-2-8306 0490 Fax: +61-2-8306 0405 [email protected], [email protected] The Expanding Approvals Rule: Improving Proportional Representation and Monotonicity Haris Aziz Barton E. Lee====================================================================================== Proportional representation (PR) is often discussed in voting settings as a major desideratum. For the past century or so, it has been common both in practice and in the academic literature to jump to single transferable vote (STV) as the solution for achieving PR. Some of the most prominent electoral reform movements around the globe are pushing for the adoption of STV. It has been termed a major open problem to design a voting rule that satisfies the same PR properties as STV and better monotonicity properties. In this paper, we first present a taxonomy of proportional representation axioms for general weak order preferences, some of which generalise and strengthen previously introduced concepts. We then present a rule called the Expanding Approvals Rule (EAR) that satisfies properties stronger than the central PR axiom satisfied by STV, can handle indifferences in a convenient and computationally efficient manner, and also satisfies better candidate monotonicity properties. In view of this, our proposed rule seems to be a compelling solution for achieving proportional representation in voting settings. JEL Classification: C70 · D61 · D71 § INTRODUCTION Of all modes in which a national representation can possibly be constituted, this one [STV] affords the best security for the intellectual qualifications desirable in the representatives—John Stuart Mill (Considerations on Representative Government, 1861). A major unsolved problem is whether there exist rules that retain the important political features of STV and are also more monotonic—<cit.>. We consider a well-studied voting setting in which n voters express ordinal preferences over m candidates and, based on these preferences, k≤ m candidates are selected. The candidates may or may not be from particular parties but voters express preferences directly over individual candidates.[The setting is referred to as a preferential voting system. It is more general and flexible than settings in which voters vote for their respective parties and then the number of seats apportioned to the parties is proportional to the number of votes received by the party <cit.>.] This kind of voting problem is encountered not only in parliamentary elections but also in forming any kind of representative body. When making such a selection by a voting rule, a desirable requirement is that of proportional representation. Proportional representation stipulates that voters should get representation in a committee or parliament according to the strengths of their numbers. It is widely accepted that proportional representation is the fairest way to reflect the diversity of opinions among the voters.[Proportional representation may be the fairest form of representation but it also allows an extreme group to have some representation, at least when the group is large enough. PR also need not be the most effective approach to a stable government. <cit.> wrote that “It [PR] makes it difficult to form a cabinet which can command a parliamentary majority and so makes for weak government.”] For the last 120 years or so, the most widely used and accepted way to achieve it is via single transferable vote (STV) <cit.> and its several variants.
In fact STV is used for elections in several countries including Australia, Ireland, India, and Pakistan. It is also used to select representative committees in hundreds of settings including professional organisations, scientific organizations, political parties, school groups, and university student councils all over the globe.[Notable uses of STV include Oscar nominations, internal elections of the British Liberal Democrats, and selection of Oxford Union, Cambridge Union, and Harvard/Radcliffe Undergraduate Councils.] The reason for the widespread adoption of STV is partly due to the fact that it has been promoted to satisfy proportional representation axioms. In particular, STV satisfies a key PR axiom called Proportionality for Solid Coalitions (PSC) <cit.>. <cit.> argues that “It is the fact that STV satisfies PSC that justifies describing STV as a system of proportional representation.” <cit.> also calls the property the “essential feature of STV, which makes it a system of proportional representation.” <cit.> motivated PSC by the idea that a minority should not need to coordinate its reports and should deserve some of its highly preferred candidates to be selected as long as enough voters are `solidly committed to such candidates.' PSC captures the idea that as long as voters have the same top candidates (possibly in different ordering), they do not need to coordinate their preferences to get a justified number of such candidates selected. In this sense PSC is also similar in spirit to the idea that if a clone of a candidate is introduced, it should not affect the selection of the candidate. Voters from the same party not having to coordinate their reports so as to maximize the number of winners from their own party can be viewed as a weak form of group-strategyproofness. PSC can also be seen as a voter's vote not being wasted due to lack of coordination with like-minded voters. PSC has been referred to as “a sine qua non for a fair election rule” by <cit.>.[There are two PSC axioms that differ only in whether the Hare quota or the Droop quota is used. The one with respect to the Droop quota has also been referred to as DPC (Droop's proportionality criterion) <cit.>. <cit.> went as far as saying that “I assume that no member of the Electoral Reform Society will be satisfied with anything that does not satisfy DPC.”] Although STV is not necessarily the only rule satisfying PR properties, it is at times synonymous with proportional representation in academia and policy circles. The outcome of STV can also be computed efficiently, which makes it suitable for large-scale elections.[Although it is easy to compute one outcome of STV, checking whether a certain set is a possible outcome of STV is NP-complete <cit.>.] Another reason for the adoption of STV is historical. Key figures proposed ideas related to STV or pushed for the adoption of STV. The ideas behind STV can be attributed to several thinkers including C. Andrae, T. Hare, H. R. Droop, and T. W. Hill. For a detailed history of the development of the STV family of rules, please see the article by <cit.>. In a booklet, <cit.> explains the rationale behind different components of the STV rule. STV was supported by influential intellectuals such as John Stuart Mill, who placed STV “among the greatest improvements yet made in the theory and practice of government.” <cit.> note the British influence on the spread of STV among countries with historical association with Great Britain.
With historical, normative, and computational motivation behind it, STV has become the `go-to' rule for PR and has strong support.[One notable exception was philosopher Michael Dummett who was a stringent critic of STV. He proposed a rival PR method called the Quota Borda System (QBS) and pushed its case <cit.>. However, even he agreed that in terms of achieving PR, “[STV] guarantees representation for minorities to the greatest degree to which any possible electoral system is capable of doing” <cit.>[page 137].] It is also vigorously promoted by prominent electoral reform movements across the globe including the Proportional Representation Society of Australia (<http://www.prsa.org.au>) and the Electoral Reform Society (<https://www.electoral-reform.org.uk>). Despite the central position of STV, it is not without some flaws. It is well-understood that it violates basic monotonicity properties even when selecting a single candidate <cit.>. Increasing the ranking of the winning candidate may result in the candidate not getting selected. STV is also typically defined for strict preferences, which limits its ability to tackle more general weak orders. There are several settings where voters may be indifferent between two candidates because the candidates have the same characteristics that the voter cares about. It could also be that the voter does not have the cognitive power or time to distinguish between two candidates and does not wish to break ties arbitrarily. It is not clearly resolved in the literature how STV can be extended to handle weak orders without compromising on its computational efficiency or some of the desirable axiomatic properties it satisfies.[<cit.> and <cit.> propose one way to handle indifferences; however, this leads to an algorithm that may take time O(m!).] The backdrop of this paper is that improving upon STV in terms of both PR as well as monotonicity has been posed as a major challenge <cit.>. Contributions We propose a new voting rule called the Expanding Approvals Rule (EAR) that has several advantages. (1) It satisfies an axiom called Generalised PSC that is stronger than PSC. (2) It satisfies some natural monotonicity criteria that are not satisfied by STV. (3) It is defined on general weak preferences rather than just for strict preferences and hence constitutes a flexible and general rule that finds a suitable outcome in polynomial time for both strict and dichotomous preferences. Efficient computation of a rule is an important concern when we deal with the election of large committees. Our work also helps understand the specifications under which different variants of STV satisfy different PR axioms. Apart from understanding how far STV and EAR satisfy PR axioms, one of the conceptual contributions of this paper is to define a taxonomy of PR axioms based on PSC and identify their relations with each other.
In particular, we propose a new axiom for weak preferences called Generalised PSC that simultaneously generalises PSC (for strict preferences) and proportional justified representation (for dichotomous preferences). § MODEL AND AXIOMS In this section, we lay the groundwork of the paper by first defining the model and then formalizing the central axioms by which proportional representation rules are judged. §.§ Model We consider the standard social choice setting with a set of voters N={1,…, n}, a set of candidates C={c_1,…, c_m} and a preference profile ≽=(≽_1,…,≽_n) such that each ≽_i is a complete and transitive relation over C. Based on the preference profile, the goal is to select a committee W⊂ C of predetermined size k. Since our new rule is defined over weak orders rather than strict orders, we allow the voters to express weak orders. We write a ≽_i b to denote that voter i values candidate a at least as much as candidate b and use ≻_i for the strict part of ≽_i, i.e., a ≻_i b iff a ≽_i b but not b ≽_i a. Finally, ∼_i denotes i's indifference relation, i.e., a ∼_i b if and only if both a ≽_i b and b ≽_i a. The relation ∼_i results in (non-empty) equivalence classes E_i^1,E_i^2, …, E_i^m_i for some m_i such that a ≻_i a' if and only if a∈ E_i^l and a'∈ E_i^l' for some l<l'. Often, we will use these equivalence classes to represent the preference relation of a voter as a preference list i: E_i^1,E_i^2, …, E_i^m_i. If candidate c is in E_i^j, we say it has rank j in voter i's preference. For example, we will denote the preferences a ∼_i b ≻_i c by the list i: {a,b}, {c}. If each equivalence class is of size 1, the preferences are called strict preferences or linear orders. Strict preferences will be represented by a comma-separated list of candidates. If for each voter the number of equivalence classes is at most two, the preferences are referred to as dichotomous preferences. When the preferences of the voters are dichotomous, the voters can be seen as approving a subset of candidates. In this case, for each voter i∈ N, the first equivalence class E_i^1 is also referred to as the approval set A_i. The vector A=(A_1,…, A_n) is referred to as the approval ballot profile. Since our model concerns ordinal preferences, when a voter i is completely indifferent between all the candidates, it means that E_i^1=C and we do not ascribe any intensity with which these candidates are liked or disliked by voter i. None of our results connecting dichotomous preferences with approval-based committee voting depend on how complete indifference is interpreted in terms of all approvals versus all disapprovals. The model allows for voters to express preference lists that do not include some candidates. In that case, the candidates not included in the list will be assumed to form the last equivalence class. §.§ PR under Strict Preferences In order to understand the suitability of voting rules for proportional representation, we recap the central PR axiom from the literature. It was first mentioned and popularised by <cit.>. It is defined for strict preferences. A set of voters N' is a solid coalition for a set of candidates C' if every voter in N' strictly prefers every candidate in C' ahead of every candidate in C\ C'. That is, for all i∈ N' and for any c'∈ C', ∀ c∈ C\ C': c'≻_i c. The candidates in C' are said to be supported by voter set N'. Importantly, the definition of a solid coalition does not require voters to maintain the same order of strict preferences among candidates in C' nor C\ C'.
Rather, the definition requires only that all candidates in C' are strictly preferred to those in C\ C'. Also notice that a set of voters N' may be a solid coalition for multiple sets of candidates and that the entire set of voters N is trivially a solid coalition for the set of all candidates C. Let q∈ (n/(k+1), n/k]. We say a committee W satisfies q-PSC if for every positive integer ℓ, and for every solid coalition N' supporting a candidate subset C' with size |N'|≥ℓ q, the following holds: |W∩ C'|≥min{ℓ, |C'|}. If q=n/k, then we refer to the property as Hare-PSC. If q=n/(k+1)+ϵ for small ϵ>0, then we refer to the property as Droop-PSC.[Droop-PSC is also referred to as Droop's proportionality criterion (DPC). Technically speaking the Droop quota is n/(k+1)+1. The exact value n/(k+1) is referred to as the Hagenbach-Bischoff quota.] There are some reasons to prefer the `Droop' quota n/(k+1)+ϵ for small ϵ>0. Firstly, for k=1 the use of the Droop quota leads to rules that return a candidate that is most preferred by more than half of the voters. Secondly, STV defined with respect to the Droop quota ensures slight majorities get slight majority representation. Hare-PSC was stated as an essential property that a rule designed for PR should satisfy <cit.>. When preferences are strict and k=1, <cit.> refers to the restriction of Droop-PSC under these conditions as the majority principle. The majority principle requires that if a majority of voters are solidly committed to a set of candidates C', then one of the candidates from C' must be selected. Consider the following profile with 9 voters and where k=3. The voters in set N'={1,2,3} form a solid coalition with respect to the Hare quota who support the candidates in {c_1,c_2,c_3,c_4}. The voters in set N”={4,5,6,7,8,9} form a solid coalition with respect to the Hare quota who support three candidate subsets {e_1}, {e_1, e_2} and {e_1, e_2, e_3}. 1: c_1, c_2, c_3, c_4,... 2: c_4, c_1, c_2, c_3,... 3: c_2, c_3, c_4, c_1,... 4: e_1, e_2, e_3, ... 5: e_1, e_2, e_3,... 6: e_1, e_2, e_3, ... 7: e_1, e_2, e_3, ... 8: e_1, e_2, e_3, ... 9: e_2, e_1, e_3, ... One can also define a weak version of PSC. In some works <cit.>, the weaker version has been attributed to the original definition of PSC as defined by Dummett. For example, <cit.> in their Definition 2.9 term weak PSC as the property put forth by Dummett although he advocated the stronger property of PSC. Let q∈ (n/(k+1), n/k]. A committee W satisfies weak q-PSC if for every positive integer ℓ, and for every solid coalition N' supporting a candidate subset C' with |C'|≤ℓ and with size |N'|≥ℓ q, the following holds: |W∩ C'|≥min{ℓ, |C'|}. For weak q-PSC, we restrict our attention to solid coalitions who support sets of candidates of size at most ℓ, whereas in q-PSC we impose no such restriction. Note that q-PSC implies weak q-PSC but the reverse need not hold. Furthermore, the conditions |C'|≤ℓ and |W∩ C'|≥min{ℓ, |C'|} are together equivalent to C'⊆ W. We also note that under strict preferences and k=1, if a majority of the voters have the same most preferred candidate, then weak Droop-PSC implies that the candidate is selected. In particular, this implies that the majority principle is satisfied when weak Droop-PSC is satisfied. We now present a lemma connecting (weak) q-PSC for different values of q. The proof is omitted since it is implied by a stronger lemma (Lemma <ref>) proven in Section <ref>. Let q, q' be real numbers such that q<q'. If a committee W satisfies (weak) q-PSC then W satisfies (weak) q'-PSC.
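Under strict preferences, any set C' that some coalition solidly supports must equal the set of top-|C'| candidates of every coalition member, so the binding constraints can be enumerated by scanning voter prefixes. The following minimal sketch of a q-PSC test builds on that observation; it is our own illustration (the paper defers its testing procedure to the appendix), and the function name and input encoding are assumptions.

from collections import Counter
from fractions import Fraction

def satisfies_q_psc(prefs, committee, q, weak=False):
    """Test (weak) q-PSC under strict preferences.

    prefs: list of linear orders, each a list of candidates (most preferred
    first); committee: the elected set W; q: quota as a Fraction.
    Every solidly supported set C' is a prefix set of each coalition member,
    so enumerating the prefix sets occurring in the profile covers all
    binding solid coalitions (sub-coalitions only impose weaker demands)."""
    W = set(committee)
    m = len(prefs[0])
    for j in range(1, m + 1):
        # group voters by the (unordered) set of their j most preferred candidates
        coalitions = Counter(frozenset(p[:j]) for p in prefs)
        for cand_set, size in coalitions.items():
            l = int(Fraction(size) / q)        # largest l with size >= l*q
            if weak and l < len(cand_set):
                continue                        # weak q-PSC binds only when l >= |C'|
            if len(W & cand_set) < min(l, len(cand_set)):
                return False
    return True

# e.g. Hare-PSC check: satisfies_q_psc(prefs, W, Fraction(len(prefs), k))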
In this paper we focus on PR axioms related to q-PSC where q is a real number contained in the interval (n/(k+1), n/k]. The reason for focusing on these values is that a committee satisfying q-PSC is guaranteed to exist for any preference profile when q>n/(k+1), whilst whenever q≤ n/k a unanimity-like principle is satisfied. That is, if all voters form a solid coalition for a size-k candidate subset C' then there is a unique committee satisfying q-PSC, i.e., W=C'. However, in principle, the PR axioms and many results in this paper can be considered with values of q outside of the interval (n/(k+1), n/k]. In Section <ref>, we generalise the PSC property to weak preferences, something that has not previously been done in the literature. §.§ Candidate Monotonicity Axioms PR captures the requirement that cohesive groups of voters should get sufficient representation. Another desirable property is candidate monotonicity, which requires that increased support for an otherwise-elected candidate should never cause this candidate to become unelected. Candidate monotonicity involves the notion of a candidate being reinforced. We say a candidate is reinforced if its relative position is improved while not changing the relative positions of all other candidates. More formally, we say that candidate c is reinforced in preference ≽_i to obtain preference ≽_i', if (1) c ≽_i d ⟹ c ≽_i' d for all d∈ C∖{c}; (2) d ≽_i' c ⟹ d ≽_i c for all d∈ C∖{c}; (3) there exists a d∈ C such that d ≽_i c and c ≻_i' d (in this case c is said to cross over d); and (4) d ≽_i e ⟺ d ≽_i' e for all d,e ≠ c. We are now in a position to formalise some natural candidate monotonicity properties of voting rules. The definitions apply not just to strict preferences but also to weak preferences. One of the definitions (RRCM) is based on the ranks of candidates as specified in the preliminaries. * Candidate Monotonicity (CM): if a winning candidate is reinforced by a single voter, it remains a winning candidate. * Rank Respecting Candidate Monotonicity (RRCM): if a winning candidate c is reinforced by a single voter without changing the respective ranks of other winning candidates in each voter's preferences, then c remains a winning candidate. * Non-Crossing Candidate Monotonicity (NCCM): if a winning candidate c is reinforced by a single voter without ever crossing over another winning candidate, then c remains a winning candidate. * Weak Candidate Monotonicity (WCM): if a winning candidate is reinforced by a single voter, then some winning candidate still remains winning. NCCM and WCM are extremely weak properties but STV violates them even for k=1. We observe the following relations between the properties. * RRCM and NCCM are equivalent for linear orders. * CM ⟹ RRCM ⟹ NCCM. * CM ⟹ WCM. * Under k=1, WCM, RRCM, NCCM, and CM are equivalent. * Under dichotomous preferences, RRCM and CM are equivalent. Note that if a rule fails CM for k=1, then it also fails RRCM, NCCM, and WCM. § PR UNDER GENERALISED PREFERENCE RELATIONS The notion of a solid coalition and PSC can be generalised to the case of weak preferences. In this section, we propose a new axiom called generalised PSC which not only generalises PSC (which is only defined for strict preferences) but also Proportional Justified Representation (PJR), a PR axiom that is only defined for dichotomous preferences. A set of voters N' is a generalised solid coalition for a set of candidates C' if every voter in N' weakly prefers every candidate in C' at least as high as every candidate in C\ C'.
That is, for all i∈ N' and for any c'∈ C', ∀ c∈ C\ C': c'≽_i c. We note that under strict preferences, a generalised solid coalition is equivalent to a solid coalition. Let c^(i,j) denote voter i's j-th most preferred candidate. In case the voter's preference has indifferences, we use lexicographic tie-breaking to identify the candidate in the j-th position. Let q∈ (n/(k+1), n/k]. A committee W satisfies generalised q-PSC if for every positive integer ℓ and all generalised solid coalitions N' supporting a candidate subset C' with size |N'|≥ℓ q, there exists a set C”⊆ W with size at least min{ℓ, |C'|} such that for all c”∈ C”: ∃ i∈ N': c”≽_i c^(i, |C'|). The idea behind generalised q-PSC is identical to that of q-PSC and in fact generalised q-PSC is equivalent to q-PSC under linear preferences. Note that in the definition above, a voter i in the solid coalition of voters N' does not demand membership of candidates from the solidly supported subset C' but of any candidate that is at least as preferred as a least preferred candidate in C'. Generalised weak q-PSC is a natural weakening of generalised q-PSC in which we require that C' is of size at most ℓ. Let q∈ (n/(k+1), n/k]. A committee W satisfies weak generalised q-PSC if for every positive integer ℓ, and every generalised solid coalition N' supporting a candidate subset C' with |C'|≤ℓ and with size |N'|≥ℓ q, there exists a set C”⊆ W with size at least min{ℓ, |C'|} such that for all c”∈ C”: ∃ i∈ N': c”≽_i c^(i, |C'|). The following example shows that generalised q-PSC is a weak property when solid coalitions equal, or just barely exceed, the quota q. Let N={1, 2, 3, 4}, C={a, b, …, j}, k=2, and suppose voter 1 and 2's preferences are given as follows: 1: {a,b,…, j} 2: a, b,…, j We consider generalised PSC with respect to the Hare quota; that is, q_H=n/k=2. There is a generalised solid coalition N'={1, 2} with |N'|≥ q_H supporting candidate subset C'={a}. The generalised q_H-PSC axiom requires the election of ℓ=1 candidate into W who is at least as preferred as either voter 1 or 2's most preferred candidate. Since voter 1 is indifferent between all candidates, electing any candidate, such as j∈ C, will satisfy the axiom – this is despite candidate j being voter 2's strictly least preferred candidate. We now show that if a committee W satisfies generalised (weak) q-PSC then the committee also satisfies generalised (weak) q'-PSC for all q'>q. Let q, q' be real numbers such that q<q'. If a committee W satisfies generalised (weak) q-PSC then W satisfies generalised (weak) q'-PSC. Let q<q' and suppose that the committee W satisfies generalised (weak) q-PSC. We wish to show that generalised (weak) q'-PSC is also satisfied by W. To see this, notice that any generalised solid coalition N' requiring representation under generalised (weak) q'-PSC also requires at least as much representation under generalised (weak) q-PSC since |N'|≥ℓ q' implies that |N'|≥ℓ q. Under linear orders, generalised PSC and generalised weak PSC coincide respectively with PSC and weak PSC. Generalising PSC to the case of weak preferences is important because it provides a useful link with PR properties defined on dichotomous preferences. Proportional Justified Representation (PJR) <cit.> is a proportional representation property for dichotomous preferences <cit.>.
Recall the following definition of PJR: A committee W with |W|=k satisfies PJR for an approval ballot profile A=(A_1, …, A_n) over a candidate set C if for every positive integer ℓ≤ k there does not exist a set of voters N^*⊆ N with |N^*|≥ℓ n/k such that |⋂_i∈ N^* A_i|≥ℓ but |(⋃_i∈ N^* A_i)∩ W|<ℓ. Under dichotomous preferences, generalised weak Hare-PSC implies PJR. For the purpose of a contradiction, let W be a committee of size k and suppose that generalised weak Hare-PSC holds but PJR does not. If PJR does not hold, then there must exist a set N^* of voters and a positive integer ℓ such that |N^*|≥ℓ n/k=ℓ q_H (where q_H is the Hare quota) and both |⋂_i∈ N^* A_i|≥ℓ and |(⋃_i∈ N^* A_i)∩ W|<ℓ. Note that if i∈ N^* it must be that i is not indifferent between all candidates (i.e. A_i≠∅, C), otherwise (<ref>) cannot hold. This means that the result is independent of whether a voter with a preference ∼_i leading to a single equivalence class is defined to have a preference represented via the approval ballot A_i=∅ or A_i=C (both of which induce the same single equivalence class over candidates). Now it follows that N^* is a generalised solid coalition for each candidate subset C'⊆⋂_i∈ N^* A_i since every candidate in C' is weakly preferred to every candidate in C for all i∈ N^*. Since |⋂_i∈ N^* A_i|≥ℓ, we can select a subset C' with exactly ℓ candidates so that |C'|=ℓ. Thus, if generalised weak Hare-PSC holds then there exists a set C”⊆ W with size at least min{ℓ, |C'|}=ℓ such that for all c”∈ C” there exists i∈ N^*: c”≽_i c^(i, ℓ). But note that for any voter j∈ N^* we have c^(j, ℓ)∈ A_j and hence for this particular candidate c” and voter i∈ N^* we have c”∈ A_i. It follows that C”⊆ (⋃_i∈ N^* A_i)∩ W, and |(⋃_i∈ N^* A_i)∩ W|≥ |C”| ≥ℓ, which contradicts (<ref>). Under dichotomous preferences, PJR implies generalised Hare-PSC. Suppose that for dichotomous preferences, a committee W of size k satisfies PJR. Then there exists no set of voters N^*⊆ N with |N^*|≥ℓ n/k such that |⋂_i∈ N^* A_i|≥ℓ but |(⋃_i∈ N^* A_i)∩ W|<ℓ. Equivalently, for every set of voters N^*⊆ N with |N^*|≥ℓ n/k, the following holds: |⋂_i∈ N^* A_i|≥ℓ ⟹ |(⋃_i∈ N^* A_i)∩ W|≥ℓ. We now prove that for all generalised solid coalitions N^* of size |N^*|≥ℓ n/k=ℓ q_H (where q_H is the Hare quota) supporting a candidate subset C' there exists a set C”⊆ W with size at least min{ℓ, |C'|} such that for all c”∈ C”: ∃ i∈ N^*: c”≽_i c^(i, |C'|). Consider a solid coalition N^* of size |N^*|≥ℓ n/k supporting candidate subset C'. * Suppose there exists some voter i∈ N^* who has one of her least preferred candidates c in C'. In that case, each candidate in C' is at least as preferred for i as c. Hence the condition of generalised Hare-PSC is trivially satisfied by any committee. * The other case is that for each i∈ N^* and each c∈ C', c∈max_≽_i(C).[Here max_≽_i(C) denotes the equivalence class of (strictly) most preferred candidates in C with respect to ≽_i.] Equivalently, for each i∈ N^* and each c∈ C', c∈ A_i. Hence C'⊆⋂_i∈ N^* A_i. Since W satisfies PJR, it follows that |(⋃_i∈ N^* A_i)∩ W|≥ℓ. In that case, we know that there exists a set C”=(⋃_i∈ N^* A_i)∩ W⊆ W of size at least min{ℓ, |C'|} such that for all c”∈ C”, ∃ i∈ N^*: c”≽_i c^(i, ℓ). Hence the condition of generalised Hare-PSC is again satisfied. This completes the proof. Under dichotomous preferences, PJR, weak generalised Hare-PSC, and generalised Hare-PSC are equivalent.
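Because the quantifiers in the PJR definition range over all sufficiently large voter groups, a direct check amounts to exhaustive enumeration; the sketch below does exactly that and is exponential, in line with the hardness result recalled next. The encoding and function name are our own, illustrative assumptions.

import math
from fractions import Fraction
from itertools import combinations

def satisfies_pjr(approvals, committee, k):
    """Brute-force PJR check for approval ballots (a list of sets A_i).
    Searches for a witness group N* with |N*| >= l*n/k that has at least l
    commonly approved candidates yet fewer than l committee members approved
    in total; returns False iff such a group exists."""
    n = len(approvals)
    W = set(committee)
    for l in range(1, k + 1):
        min_size = math.ceil(Fraction(l * n, k))          # |N*| >= l*n/k
        for size in range(min_size, n + 1):
            for group in combinations(range(n), size):
                common = set.intersection(*(approvals[i] for i in group))
                if len(common) < l:
                    continue                               # group not cohesive enough
                union = set.union(*(approvals[i] for i in group))
                if len(union & W) < l:
                    return False                           # PJR violation found
    return True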
Since it is known that testing PJR is coNP-complete <cit.>, it follows that testing generalised PSC and generalised weak PSC is coNP-complete. Testing generalised PSC and generalised weak PSC is coNP-complete even under dichotomous preferences. On the other hand, PSC and weak PSC can be tested efficiently (please see the appendix). Figure <ref> depicts the relations between the different PR axioms. § THE CASE OF STV In this section, we define the family of STV rules for instances where voters submit strict preferences. The family is formalised as Algorithm <ref>. STV is a multi-round rule; in each round either a candidate is selected as a winner or one candidate is eliminated from the set of potential winners. Depending on the quota q and the reweighting rule applied, one can obtain particular STV rules <cit.>. To distinguish between variants of STV based on different quota values q, we denote the STV rule with quota q by q-STV. When the quota q is equal to the Hare or Droop quota we simply refer to the q-STV variant as Hare-STV or Droop-STV, respectively. One of the most common rules is Hare-STV with discrete reweighting. This implies that a subset of voters of size n/k is removed from the profile once their most preferred candidate in the current profile has been selected. STV modifies the preference profile ≻ by deleting candidates. We will denote by C(≻) the current set of candidates in the profile ≻. When k=1, STV is referred to as Instant-Runoff Voting (IRV) or as the Alternative Vote (AV). [Illustration of Droop-STV] For this example we consider the Droop-STV rule with uniform fractional reweighting. This reweighting method means that line <ref> of Algorithm <ref> is executed as follows: first calculate the total weight of the voters in N', i.e., T=∑_i∈ N' w_i; then the weight of each voter i∈ N' is updated from w_i to w_i×(T-q_D)/T where q_D is the Droop quota. To illustrate this STV rule consider the following profile with 9 voters and suppose we wish to elect a committee of size k=3: 1: c_1, c_2, c_3, e_1, e_2, e_3, e_4, d_1 2: c_2, c_3, c_1, e_1, e_2, e_3, e_4, d_1 3: c_3, c_1, d_1, c_2, e_1, e_2, e_3, e_4 4: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 5: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 6: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 7: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 8: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 9: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 In the first round e_1 is selected and the total weight of the voters in set {4, 5, 6, 7, 8, 9} goes down by the Droop quota q_D, i.e., slightly more than 2.25. Candidate e_1 is then removed from the preference profile. In the second round, e_2 is selected and the total weight of the voters in the set {4, 5, 6, 7, 8, 9} is now 6-2q_D, i.e., slightly less than 1.5. Candidate e_2 is then removed from the preference profile. After that, since no candidate has plurality support, with respect to the current weights, of at least the quota, one candidate is deleted. Candidates e_3 and e_4 are removed in succession as they have plurality support no more than 1.5 with respect to the current voting weights. Then candidate c_1 is elected since she has plurality support of 1+6-2q_D, i.e., slightly less than 2.5, which exceeds the quota q_D. STV has been claimed to satisfy Proportionality for Solid Coalitions/Droop's Proportionality Criterion <cit.>. On the other hand, STV violates just about every natural monotonicity axiom that has been proposed in the literature. In STV, voters are viewed as having an initial weight of one.
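To make the weight mechanics of Algorithm <ref> concrete, here is a compact sketch of q-STV with uniform fractional reweighting for strict preferences. Ties among quota-reaching or weakest candidates are broken lexicographically purely for definiteness — the algorithm itself leaves this choice open — so this is an illustration of the family rather than the exact rule.

from fractions import Fraction

def stv(prefs, k, q):
    """q-STV with uniform fractional reweighting (strict preferences).
    prefs: list of linear orders (lists of candidates, most preferred first);
    q: quota as a Fraction, e.g. the Droop quota n/(k+1) plus a small epsilon."""
    n = len(prefs)
    w = [Fraction(1)] * n                           # initial voting weights
    ballots = [list(p) for p in prefs]              # working copies of the profile
    candidates = {c for p in prefs for c in p}
    W = []
    while len(W) < k:
        if len(candidates) + len(W) == k:           # remaining candidates fill W
            W.extend(sorted(candidates))
            break
        support = {c: Fraction(0) for c in candidates}
        for i, b in enumerate(ballots):             # weighted plurality support
            if b:
                support[b[0]] += w[i]
        reaching = [c for c in candidates if support[c] >= q]
        if reaching:
            c = max(reaching, key=lambda x: (support[x], x))   # elect
            W.append(c)
            T = support[c]
            for i, b in enumerate(ballots):         # supporters keep a (T-q)/T share
                if b and b[0] == c:
                    w[i] *= (T - q) / T
        else:
            c = min(candidates, key=lambda x: (support[x], x)) # eliminate weakest
        candidates.discard(c)
        for b in ballots:                           # delete c from the profile
            if c in b:
                b.remove(c)
    return W

Note that because several candidates can tie for lowest plurality support (as c_1, c_2, c_3 eventually do in the example above), different elimination tie-breaks may leave a different member of such a tied group in the final committee.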
When a candidate supported by a voter is selected, the voter's weight may decrease. STV can use fractional reweighting or discrete reweighting.[Fractional reweighting in STV has been referred to as Gregory or `senatorial' <cit.>.] We will show that fractional reweighting is crucial for some semblance of PR. Incidentally, fractional reweighting is not necessarily introduced to achieve better PR but primarily to minimize the “stochastic aspect” of tie-breaking in STV <cit.>. The following result shows that if STV resorts to discrete reweighting then it does not even satisfy weak PSC. Discrete reweighting refers to the modification of voter weights in Line 9 of Algorithm <ref> such that the total weight of voters in N' decreases by some integer greater than or equal to q. Under strict preferences and for any q, q'∈ (n/(k+1), n/k], q-STV with discrete reweighting does not satisfy weak q'-PSC. Let N={1,2, …, 10}, C={c_1, …, c_8}, k=7 and consider the following profile: 1: c_1, c_5, c_6, c_7, c_8, c_2, c_3, c_4 2: c_2, c_5, c_6, c_7, c_8, c_1, c_3, c_4 3: c_3, c_5, c_6, c_7, c_8, c_1, c_2, c_4 4: c_4, c_5, c_6, c_7, c_8, c_1, c_2, c_3 5: c_5, c_6, c_7, c_8, c_1, c_2, c_3, c_4 6: c_5, c_6, c_7, c_8, c_1, c_2, c_3, c_4 7: c_5, c_6, c_7, c_8, c_1, c_2, c_3, c_4 8: c_5, c_6, c_7, c_8, c_1, c_2, c_3, c_4 9: c_5, c_6, c_7, c_8, c_1, c_2, c_3, c_4 10: c_5, c_6, c_7, c_8, c_1, c_2, c_3, c_4 For fixed q∈(n/(k+1), n/k] we use q-STV with discrete reweighting to select the candidates. Under discrete reweighting, the total weights of voters are modified by some integer p≥ 2 (refer to Line 9 of Algorithm <ref>). In this proof we focus on the case where p=2; a similar argument can be applied to prove the proposition for larger integer values. Note that 2=⌈ q⌉ for any q∈(n/(k+1), n/k]. Applying the q-STV rule, first c_5, c_6 and c_7 are selected. Each time we select one of these candidates, the total weight of voters in the set N'={5, 6, 7, 8, 9, 10} goes down by 2. Thus the remaining four candidates are to be selected from c_1, c_2, c_3, c_4 and c_8. At this stage candidate c_8 has the lowest plurality support (equal to zero) and is removed from all preference profiles and the list of potentially elected candidates, and hence c_8∉ W. Now for any fixed q'∈(n/(k+1), n/k], weak q'-PSC requires that at least four candidates from {c_5, c_6, c_7, c_8} be selected, since |N'|≥ 4× q', but using discrete reweighting only three candidates are selected by q-STV. The proof above uses a similar argument to Example 1 in <cit.>, which concerns an approval voting setting. Next we show that STV satisfies PSC with fractional reweighting. The proof details are in the appendix. For any q∈ (n/(k+1), n/k], under strict preferences q-STV satisfies q-PSC. Below we provide an example of STV violating CM for k=1. The example assumes the Droop quota, i.e., Droop-STV; however, the same example violates CM for any q-STV rule with q∈ (n/(k+1), n/k]. [Example showing that STV violates CM for k=1.] Consider the following instance of 100 voters with strict preferences. Total number of voters Corresponding preferences 28: c,b,a 5: c,a,b 30: a,b,c 5: a,c,b 16: b,c,a 16: b,a,c We consider the single-winner election setting with the Droop quota; that is, k=1 and q_D=50+ε for sufficiently small ε>0. Under the Droop-STV rule the outcome is W_STV={a}. To see this notice that the plurality support of the candidates a,b,c is 35, 32, 33, respectively.
Since no candidate receives plurality support ≥ q_D we remove the candidate with the lowest plurality support, i.e., candidate b, and the 32 voters previously supporting candidate b now give their plurality support to their second preference. Thus, the updated plurality support of the two remaining candidates a and c is 35+16=51 and 33+16=49, and hence candidate a is elected. Now to show a violation of CM we consider an instance where two voters originally with preferences c,a,b change their preferences to a,c,b, i.e., a reinforcement of the previously winning candidate a. The new profile is shown below: Total number of voters Corresponding preferences 28: c,b,a 3: c,a,b 30: a,b,c 7: a,c,b 16: b,c,a 16: b,a,c In this modified setting, the Droop-STV outcome is W_STV'={b}, which is a violation of candidate monotonicity (CM). To see this notice that the plurality support of the candidates a,b,c is 37, 32, 31, respectively. Since no candidate receives plurality support ≥ q_D we remove the candidate with the lowest plurality support, i.e., candidate c, and the 31 voters previously supporting candidate c now give their plurality support to their second preference. Thus, the updated plurality support of the two remaining candidates a and b is 37+3=40 and 32+28=60, and hence candidate b is elected. § EXPANDING APPROVALS RULE (EAR) We now present the Expanding Approvals Rule (EAR). The rule utilises the idea of j-approval voting whereby every voter is asked to approve their j most preferred candidates, for some positive integer j. At a high level, EAR works as follows. An index j is initialised to 1. The voting weight of each voter is initially 1. We use a quota q that is between n/(k+1) and n/k. While k candidates have not been selected, we do the following. We perform j-approval voting with respect to the voters' current voting weights. If there exists a candidate c with approval support at least the quota q, we select such a candidate. If there exists no such candidate, we increment j by one and repeat until k candidates have been selected. The rule is formally specified as Algorithm <ref>. It is well-defined for weak preferences. EAR is based on a combination of several natural ideas that have been used in the design of voting rules. * Candidates are selected in a sequential manner. * A candidate needs to have at least the Droop quota of `support' to be selected. * The voting weight of a voter is reduced if some of her voting weight has already been used to select some candidate. The way voting weight is reduced is fractional. * We use j-approval voting for varying j. When considering weak orders, we adapt j-approval voting so that a voter not only approves her j most preferred candidates but also any candidate that is at least as preferred as the j-th most preferred candidate. One way to view j-approval for weak orders is as follows: (1) break all ties temporarily to get an artificial linear order; (2) identify the j-th candidate d in the artificial linear order; (3) approve all candidates that are at least as weakly preferred as d. For EAR, the default value of q that we propose is q̅=n/(k+1) + (1/(m+1))(⌊ n/(k+1)⌋+1-n/(k+1)). The reason for choosing this quota is that q̅ can be viewed as q̅=n/(k+1)+ε where ε is small enough so that for any ℓ≤ k, ℓ·q̅ <⌊ℓ n/(k+1)⌋+1. In particular, this implies that if there exists a solid coalition N' of size |N'|≥ℓ q_D (where q_D is the Droop quota) then |N'|≥ℓq̅. On the other hand, ε is large enough so that the algorithm is polynomial in the input size.
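Reading the default quota with the floor brackets restored as above, the constant can be computed directly; the short snippet below is our own check, not code from the paper, and it reproduces the numerical values quoted in the examples that follow.

import math
from fractions import Fraction

def default_quota(n, k, m):
    # q̄ = n/(k+1) + (1/(m+1)) * (⌊n/(k+1)⌋ + 1 − n/(k+1))
    base = Fraction(n, k + 1)
    return base + Fraction(1, m + 1) * (math.floor(base) + 1 - base)

print(float(default_quota(9, 3, 8)))    # ≈ 2.33, as in the EAR illustration below
print(float(default_quota(100, 5, 7)))  # ≈ 16.71, as in the Hare-EAR comparison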
We also propose a default priority ordering that is with respect to rank maximality under EAR. This way of tie-breaking is one but not the only way to ensure that EAR satisfies RRCM. For any candidate a, its corresponding rank vector is r(a)=(r_1(a), …, r_m(a)) where r_j(a) is the number of voters who have a in their j-th most preferred equivalence class. We compare rank vectors lexicographically. One rank vector r=(r_1,…, r_m) is better than r'=(r_1',…, r_m') if for the smallest i such that r_i≠ r_i', it holds that r_i>r_i'. Finally, we propose the following natural way of implementing the reweighting in Step <ref>. If the total support for c in the j-approval election is T, then for each i∈ N who supported c, we reweight as follows: w_i⟵ w_i×(T-q)/T. This ensures that exactly q weight is reduced. In the following example, we demonstrate how EAR works. [Illustration of EAR] Consider the following profile with 9 voters and where k=3. Note that the default quota is q̅≈ 2.33. 1: c_1, c_2, c_3, e_1, e_2, e_3, e_4, d_1 2: c_2, c_3, c_1, e_1, e_2, e_3, e_4, d_1 3: c_3, c_1, d_1, c_2, e_1, e_2, e_3, e_4 4: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 5: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 6: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 7: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 8: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 9: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1 In the first round e_1 is selected and the total weight of the voters in set {4, 5, 6, 7, 8, 9} goes down by q̅ to 6-q̅≈ 3.67. In the second round, since no other candidate has sufficient weight when we run the 1-approval election, we consider the 2-approval election. Under 2-approval, candidate e_2 receives support 6-q̅≈ 3.67, which exceeds q̅, and hence is elected. When e_2 is selected, the total weight of the voters in set {4, 5, 6, 7, 8, 9} goes down again by q̅. At this point the total weight of voters in set {4, 5, 6, 7, 8, 9} is 6-2q̅≈ 1.33 and no unselected candidate has approval support more than q̅ under 2-approval. So EAR considers 3-approval, whereby candidates c_1 and c_3 both get support 3. Hence EAR selects c_1, due to the default priority ordering L (rank-maximality ordering), and the winning committee is {e_1, e_2, c_1}. The example above produced an outcome which coincided with the Droop-STV outcome (recall Example <ref>). Next, we present an example showing that STV and EAR are different even for k=1. [Example showing that STV and EAR are different even for k=1.] Recall Example <ref>, which showed that the following instance of 100 voters with strict preferences Total number of voters Corresponding preferences 28: c,b,a 3: c,a,b 30: a,b,c 7: a,c,b 16: b,c,a 16: b,a,c leads to the single-winner Droop-STV outcome W={b}. More generally, for any quota q∈ (n/(k+1), n/k] the q-STV outcome is also {b}. We now show that in this case the EAR outcome is W'={a} and hence the STV and EAR outcomes do not coincide even for k=1. To see that the EAR outcome is W'={a} we proceed as follows: first, we consider the 1-approval election, which gives candidates a,b,c approval support 37, 32, 31, respectively. Since no candidate attains approval support beyond the default quota q̅=50.25 we move to the 2-approval election. In the 2-approval election candidates a,b,c attain approval support 56, 90, 54, respectively. Since all candidates attain approval support beyond q̅ we apply our default priority ordering L (rank-maximality), which leads to candidate a being elected.
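Putting the pieces together, the following sketch implements EAR for weak orders given as lists of indifference classes, using the default quota, the rank-maximality priority, and the reweighting rule just described. The encoding and helper names are our own, and lexicographic tie-breaking between equal rank vectors is assumed, so treat this as an illustration rather than a reference implementation.

import math
from fractions import Fraction

def ear(prefs, k):
    """Sketch of the Expanding Approvals Rule for weak orders.
    prefs: one entry per voter -- a list of sets of candidate names
    (indifference classes, most preferred first, covering all candidates)."""
    n = len(prefs)
    candidates = sorted(set().union(*(eq for v in prefs for eq in v)))
    m = len(candidates)
    base = Fraction(n, k + 1)                       # default quota q̄ = n/(k+1) + ε
    q = base + Fraction(1, m + 1) * (math.floor(base) + 1 - base)
    # rank-maximality: r_j(c) = number of voters with c in their j-th class
    r = {c: [0] * m for c in candidates}
    for v in prefs:
        for j, eq in enumerate(v):
            for c in eq:
                r[c][j] += 1
    priority = sorted(candidates, key=lambda c: ([-x for x in r[c]], c))
    w = [Fraction(1)] * n                           # voting weights
    approved = [set() for _ in range(n)]            # current approval sets
    depth = [0] * n                                 # next class to expand, per voter
    W = []
    for j in range(1, m + 1):                       # expanding approval threshold
        for i, v in enumerate(prefs):               # approve the j most preferred
            while len(approved[i]) < j and depth[i] < len(v):   # plus ties
                approved[i] |= v[depth[i]]
                depth[i] += 1
        while len(W) < k:
            support = {c: sum(w[i] for i in range(n) if c in approved[i])
                       for c in candidates if c not in W}
            reaching = [c for c, s in support.items() if s >= q]
            if not reaching:
                break                               # widen j: nobody reaches the quota
            c = min(reaching, key=priority.index)   # highest-priority quota-reacher
            W.append(c)
            T = support[c]                          # spend exactly q weight
            for i in range(n):
                if c in approved[i]:
                    w[i] *= (T - q) / T
        if len(W) == k:
            return W
    return W

On the nine-voter profile of the illustration above, this sketch elects e_1, then e_2, then c_1, matching the committee derived by hand.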
In case voters do not specify certain candidates in their list and do not wish their vote weight to be used to approve such candidates, EAR can be suitably tweaked so as to allow this requirement. In this case, candidates are selected as long as a selected candidate can get approval weight q. The required number of remaining candidates can be selected according to some criterion. Another way EAR can be varied is that instead of using L as the priority ordering, the candidate with the highest weighted support that is at least q is selected. We point out that the rule's outcome can be computed efficiently. EAR runs in polynomial time O((n+m)^2). The rank-maximal vectors can be computed in O(n+m) and the ordering based on rank maximality can be computed in time O((n+m)^2). In each round, the smallest j is found for which there are some candidates in C∖ W that have an approval score of at least q. The candidate c which is rank-maximal is identified. All voters who approved of c have their weight modified accordingly, which takes linear time. Hence the whole algorithm takes time at most O((n+m)^2). A possible criticism of EAR is that the choice of quota as well as reweighting makes it complicated enough to not be usable by hand or to be easily understood by the general public. However, we have shown that without resorting to fractional reweighting, STV already fails weak PSC. Since EAR is designed for proportional representation, which is only meaningful for large enough k, it may not be the ideal rule for k=1. Having said that, we mention the following connection with a single-winner rule from the literature. For k=1 and under linear orders with every candidate in the list, EAR is equivalent to the Bucklin voting rule <cit.>. For k=1 and under linear orders for all but a subset of equally least preferred candidates, applying the tweak in Remark <ref> leads to EAR being equivalent to the Fallback voting rule <cit.>. Under dichotomous preferences and using the Hare quota, EAR bears similarity to Phragmén's first method (also called Enström's method) described by <cit.> (page 59). However, the latter method when extended to strict preferences does not satisfy Hare-PSC. Although EAR has connections with previous rules, extending them to the case of multiple winners and to handle dichotomous, strict and weak preferences simultaneously while satisfying desirable PR properties requires careful thought. We observe some simple properties of the rule. It is anonymous (the names of the voters do not matter). It is also neutral as long as lexicographic tie-breaking is not required to be used. Under linear orders and when using EAR with the default quota, if more than half the voters most prefer a candidate, then that candidate is selected. This is known as the majority principle. EAR is defined with the default quota q̅. However, it is possible to consider variants of EAR with other quota values such as the Hare quota. We refer to the variant of EAR with the Hare quota as Hare-EAR. The proposition below shows that the choice of this quota can lead to different outcomes. In particular, we show that EAR (defined with the default quota) can lead to a different outcome than that attained under Hare-EAR. Under dichotomous preferences, Hare-EAR and EAR are not equivalent. Let |N|=100 and k=5. Denote the Hare quota by q_H=20 and note that the default quota is q̅≈ 16.71. Let the preferences of the voters be (i.e. expressing the top equivalence class of most preferred candidates): Total number of voters Corresponding preferences 17: {a} 17: {b,f} 17: {c} 17: {d} 17: {e} 1: {f} 14: {g}
First note that the rank-maximal ordering L (with lexicographic tie-breaking applied for equal rank-maximal vectors) is f▹ a▹ b▹ c▹ d▹ e▹ g. This ordering is independent of voter weights, is calculated at the start of the algorithm, and is never updated throughout the algorithm. Now under Hare-EAR, in the 1-approval election no candidate has support exceeding q_H. Thus, we move to a 2-approval election whereby all voters support all candidates and all candidates have support 100. Thus, we select the candidate with the highest rank-maximal priority, which is candidate f, and reweight its supporters (i.e. reduce all voter weights to 8/10, since all voters support all candidates). Repeating this process leads to the election of the first five candidates according to the priority ranking L, i.e., W={f,a,b,c,d}. Now under EAR, in the 1-approval election there are 6 candidates {a,b,c,d,e,f} with support at least q̅. Thus, we select the candidate with highest L priority, i.e., candidate f, and then reweight each of its 18 supporters' weights to (18-q̅)/18≈ 0.07. Now there still remain 4 candidates {a,c,d,e} with support 17>q̅; since the sets of voters supporting these candidates are disjoint, the winning committee is W'={f,a,c,d,e}. Note that W≠ W'. § PROPORTIONAL REPRESENTATION AND CANDIDATE MONOTONICITY UNDER EAR §.§ Proportional Representation under EAR We show that EAR satisfies the central PR axioms for general weak order preference profiles. Under weak orders, EAR satisfies generalised Droop-PSC. Let W be an outcome of EAR and suppose for the purpose of a contradiction that generalised Droop-PSC is not satisfied. That is, there exists a positive integer ℓ and a generalised solid coalition N' such that |N'|≥ℓ q_D (where q_D is the Droop quota) supporting a candidate subset C', and for every set C”⊆ W with |C”|≥min{ℓ, |C'|} there exists c”∈ C” such that ∀ i∈ N': c^(i, |C'|)≻_i c”. Without loss of generality we may assume that the solid coalition N' is chosen such that ℓ≤ |C'| and so min{ℓ, |C'|}=ℓ. Let j^* be the smallest integer such that in the j^*-approval election each voter in N' supports all candidates in C'. We claim that the j^*-approval election must be reached. Suppose not; then it must be that |W|=k at some earlier j-approval election where j<j^*. But if |W|=k this implies that after reweighting ∑_i∈ N w_i = n-kq̅=n/(k+1)-kε̅, where ε̅=(1/(m+1))(⌊ n/(k+1)⌋+1-n/(k+1)). However, in every j'-approval election for j'≤ j each voter i∈ N' only supports candidates weakly preferred to c^(i, |C'|). Let Ĉ_i be the set of candidates which voter i finds weakly preferable to c^(i, |C'|) and define Ĉ=∪_i∈ N'Ĉ_i. The total weight of voters in N' at the termination of the algorithm (i.e. at the end of the j-approval election) is reduced by at most |C”| q̅ where C”=W∩Ĉ. But by assumption, since C”⊆ W and every candidate c”∈ C” is weakly preferred to c^(i, |C'|) for some voter i∈ N', it must be that |C”|<min{ℓ, |C'|}=ℓ. It follows that ∑_i∈ N' w_i≥ℓ q_D-(ℓ-1)q̅≥ n/(k+1)-(ℓ-1)ε̅, which contradicts (<ref>) since N'⊆ N. Thus, we conclude that the j^*-approval election is indeed reached. Now at the j^*-approval election each voter i∈ N' supports only the candidates in the set Ĉ_i, excluding those already elected in an earlier approval election. Recall that Ĉ=∪_i∈ N'Ĉ_i and let m^* be the number of candidates elected from Ĉ in earlier approval elections. Since |N'|≥ℓ q_D implies that |N'|≥ℓq̅, it follows that the total weight of voters in N' when the j^*-approval election is reached is at least (ℓ-m^*)q̅.
Since C'⊆Ĉ_i for all i∈ N', every unelected candidate in C' attains support at least (ℓ-m^*)q̅; note that there are at least |C'|-m^*≥ℓ-m^* of these. In addition, each voter i∈ N' also supports the unelected candidates in Ĉ_i\ C'. Thus, the algorithm can only terminate if the weight of voters in N' is reduced below q̅, which can only occur if at least (ℓ-m^*) candidates from Ĉ are elected. It then follows that |W∩Ĉ|=m^*+(ℓ-m^*)=ℓ. By again defining C”=W∩Ĉ we attain a contradiction, since C”⊆ W and |C”|≥ℓ but (<ref>) does not hold since C”⊆Ĉ. Note that whether EAR satisfies PSC or generalised PSC does not depend on what priority tie-breaking is used (Step <ref>) or how the fractional reweighting is applied (Step <ref>). Recalling Lemma <ref> we have the following corollary. Under weak orders, EAR satisfies generalised Hare-PSC. Under linear orders, EAR satisfies the majority principle. Under linear orders, Droop-PSC implies the majority principle. We get the following corollary from Corollary <ref>. Under dichotomous preferences, EAR satisfies proportional justified representation. We had observed that under dichotomous preferences, generalised Hare-PSC implies PJR. Incidentally, the fact that there exists a polynomial-time algorithm to satisfy PJR for dichotomous preferences was the central result of two recent papers <cit.>. We have shown that EAR can in fact satisfy a property stronger than PJR that is defined with respect to the Droop quota. Since EAR satisfies generalised PSC, it implies that there exists a polynomial-time algorithm to compute a committee satisfying generalised PSC. Interestingly, we already observed that checking whether a given committee satisfies generalised PSC is coNP-complete. §.§ Candidate Monotonicity under EAR We show that EAR satisfies rank respecting candidate monotonicity (RRCM). In what follows we shall refer to the profile of all voter preferences (weak or strict) as simply the profile. EAR satisfies rank respecting candidate monotonicity (RRCM). Consider a profile ≽ with election outcome W and let c_i∈ W. Now consider another modified profile ≽' in which c_i's rank is improved while not harming the rank of other winning candidates, relative to ≽, and denote the election outcome under ≽' by W'. Since we use rank-maximality to define the order L, note that the relative position of c_i is at least as good under ≽' as it is under ≽. Let the order of candidates selected under ≽ be c_1,…, c_i, …, c_|W|. In the modified profile ≽', let us trace the order of candidates selected. For the first candidate c_1, either it is selected first for exactly the same reason as it is selected first under ≽ or, alternatively, now c_i is selected. If c_i is selected, our claim has been proved. Otherwise, the same argument is used for candidates after c_1 until c_i is selected. This leads immediately to the following corollaries of the above proposition. EAR satisfies non-crossing candidate monotonicity (NCCM). For k=1, EAR satisfies candidate monotonicity (CM). For dichotomous preferences, EAR satisfies candidate monotonicity. Consider a dichotomous profile P and another dichotomous profile P' in which winning candidate c_i's rank is improved while not harming the rank of other winning candidates. This implies that the rank of c_i is improved while not affecting the ranks of any other alternatives, including the winning candidates. Since EAR satisfies rank respecting candidate monotonicity (RRCM), it follows that for dichotomous preferences, EAR satisfies candidate monotonicity. On the other hand, EAR does not satisfy CM or WCM for k>1. EAR does not satisfy WCM.
Let N={1, 2, 3, 4}, C={a, b, c, d, e, f}, k=2, and let strict preferences be given by the following preference profile:

1: a, c, f, d, …
2: d, b, f, a, c, …
3: a, d, c, b, f, …
4: f, e, c, d, …

With this preference profile EAR outputs the winning committee W={a, f}. First, a is selected into W and the weight of each voter i∈{1,3} is reduced to w_i=(2-q̅)/2, where q̅ is the default quota. Since q̅>4/3 we infer that w_i<1/3. Moving to the 2-approval election, no candidate receives support of at least q̅. Finally, in the 3-approval election candidate f receives support at least 2, which exceeds the quota, and candidate c attains support 1+2w_i. If 1+2w_i<q̅ then candidate f is the only candidate exceeding the quota and so is elected. On the other hand, if 1+2w_i≥q̅ then both candidates c and f exceed the quota; however, due to the default (rank-maximal) priority ordering, f is still elected into W. Now consider a reinforcement of f by voter 1 (shift f from third to first place), described by the following preference profile:

1: f, a, c, d, …
2: d, b, f, a, c, …
3: a, d, c, b, f, …
4: c, e, f, d, …

With these preferences the winning committee is W'={d, c}. In the 2-approval election both candidates a and d attain support of at least q̅; however, due to the priority ordering L (rank maximality), candidate d is selected into W' and each voter i∈{2,3} has their weight reduced to w_i=(2-q̅)/2<1/3. In the 3-approval election both candidates c and f attain support of at least 2≥q̅. Furthermore, c and f are equally ranked with respect to rank maximality, but applying lexicographic tie-breaking leads to c being elected into W'.

The above proposition was proven for the default quota q̅; however, the same counterexample can be used to prove the statement for any other quota q∈ (n/(k+1), n/k].

§ OTHER RULES

In the literature, several rules have been defined for PR purposes. We explain how EAR is better at achieving a strong degree of PR, or has other relative merits.

§.§ Quota Borda System (QBS)

<cit.> proposed a counterpart to STV called QPS (Quota Preference Score), or its more specific version QBS (Quota Borda System). The rule works for complete linear orders and is designed to obtain a committee that satisfies Droop-PSC. It does so by examining the prefixes (of increasing sizes) of the preference lists of voters and checking whether there exists a corresponding solid coalition for a set of voters. If there is such a solid set of voters, then the appropriate number of candidates with the highest Borda count are selected.[The description of the rule is somewhat informal and long in the original books of Dummett, which may have led to The Telegraph terming the rule “a highly complex arrangement” <cit.>.] We can partition the PSC demands into demands pertaining to ranks j=1 to m. For any given j, we only consider candidates in C^(j), the set of candidates involved in the preferences of voters up to their first j positions. In Algorithm <ref>, we present a formal description of QBS. Although Dummett did not show that the rule satisfies some axiom which STV does not, he argued that QPS satisfies the Droop proportionality criterion and is somewhat less “chaotic” than STV.
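To make the weighted j-approval mechanics used in the worked examples above (and in the comparisons below) concrete, the following is a minimal Python sketch of a single EAR selection round. It is purely illustrative: the function and variable names are ours, it assumes complete preference profiles, and it is not an implementation taken from this paper or its references. The rank of a candidate is taken as 1 plus the number of strictly preferred candidates, and the fractional reweighting scales all supporters' weights by (total-q)/total, matching the examples above.

def ranks(pref):
    """pref is a list of indifference classes (sets), most preferred first.
    rank(c) = 1 + number of candidates strictly preferred to c."""
    rk, above = {}, 0
    for tier in pref:
        for c in tier:
            rk[c] = above + 1
        above += len(tier)
    return rk

def ear_round(prefs, weights, elected, priority, quota, j):
    """One j-approval round of EAR on a complete profile: elect a single
    candidate and reweight, or return None (the caller then moves from the
    j-approval to the (j+1)-approval election)."""
    rks = [ranks(p) for p in prefs]
    support = {}
    for i, rk in enumerate(rks):
        for c, rc in rk.items():
            if rc <= j and c not in elected:
                support[c] = support.get(c, 0.0) + weights[i]
    eligible = [c for c in priority if support.get(c, 0.0) >= quota]
    if not eligible:
        return None
    winner = eligible[0]          # highest rank-maximal priority wins ties
    # Fractional reweighting: supporters lose a total weight of `quota`,
    # pro rata (equal weights are all scaled by (total - quota)/total).
    sup = [i for i, rk in enumerate(rks) if rk[winner] <= j]
    total = sum(weights[i] for i in sup)
    for i in sup:
        weights[i] *= (total - quota) / total
    return winner

Iterating ear_round with j fixed after each election, and incrementing j only when no candidate meets the quota, until k candidates are elected should reproduce, for instance, the committees W and W' computed above when run with quota q_H or q̅ and the rank-maximal priority L.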
<cit.> argues that QBS is chaotic as well, and his Example 3 implicitly shows that QBS in fact violates WCM.[<cit.> wrongly claims that QBS satisfies the stronger axiom of CM.] Tideman also feels that QBS is overly designed to satisfy Droop-PSC but is not robust enough to go beyond this criterion, especially if voters in a solid coalition perturb their preferences. EAR has some important advantages over QBS: (1) it can easily handle indifferences, whereas QBS is not well-defined for indifferences; in particular, in order for QBS to be suitably generalised to indifferences while still satisfying generalised PSC, it may become an exponential-time rule[QBS checks for PSC requirements and adds a suitable number of candidates to represent the corresponding solid coalition of voters. In order to work for generalised PSC, QBS would have to identify solid coalitions of voters and meet their requirements, which means that it would need to solve the problem of testing generalised PSC, which is coNP-complete.]; (2) EAR can easily handle voters expressing partial lists, by implicitly having a last indifference class, whereas QBS cannot; (3) EAR satisfies an established PR property, PJR, in the case of dichotomous preferences, whereas, as said earlier, QBS is not well-defined for indifferences, even dichotomous ones; (4) EAR addresses a criticism of <cit.>: “Suppose there are voters who would be members of a solid coalition except that they included an “extraneous” candidate, which is quickly eliminated, among their top choices. These voters' nearly solid support for the coalition counts for nothing which seems to me inappropriate.” We demonstrate this last flaw of QBS in the explicit example below; EAR does not have this flaw.

Consider the following profile with 9 voters, where k=3.

1: c_1, c_2, c_3, e_1, e_2, e_3, e_4, d_1
2: c_2, c_3, c_1, e_1, e_2, e_3, e_4, d_1
3: c_3, c_1, d_1, c_2, e_1, e_2, e_3, e_4
4: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1
5: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1
6: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1
7: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1
8: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1
9: e_1, e_2, e_3, e_4, c_1, c_2, c_3, d_1

In the example, {e_1, e_2, e_3} is the outcome of QBS. Although PSC is not violated for the voters in {1,2,3}, the outcome appears unfair to them because they almost form a solid coalition. Since they form one-third of the electorate, they may feel that at least one candidate among c_1, c_2, c_3 deserves to be selected. In contrast, it was shown in Example <ref> that EAR does not have this flaw and instead produces the outcome {e_1, e_2, c_1}.

§.§ Chamberlin-Courant and Monroe

There are other rules that have been proposed within the class of “fully proportional representation” rules, such as Monroe <cit.> and Chamberlin-Courant (CC) <cit.>. <cit.> used the term “fully proportional representation” to refer to PR-oriented rules that take into account the full preference list. Recently, variants of these rules called Greedy Monroe and Greedy CC <cit.> have been discussed. However, none of these rules satisfies even weak PSC <cit.>. A reason for this is that voters are assumed not to care about how many of their highly preferred candidates are in the committee, as long as the most preferred one is present. Monroe and CC are also NP-hard to compute <cit.>.

§.§ Phragmén's First Method

Phragmén's first method was first considered by Phragmén but not published or pursued by him <cit.>.
In this method, voters approve their most preferred candidates that have not yet been selected. The candidate with the highest weight of approval is selected. The total weight of the voters whose approved candidate was selected is reduced by the Hare quota if this total weight exceeds the Hare quota; otherwise all such voters' weights are set to zero. Although Phragmén's first method seems to have been defined primarily for dichotomous preferences, the same definition of the rule works for linear orders. However, the rule applied to linear orders does not satisfy Hare-PSC, as the example below shows.

Consider the following profile with 9 voters, where k=3.

1: c_1, c_2, c_3, …
2: c_2, c_3, c_1, …
3: c_3, c_1, c_2, …
4: e_1, c_1, c_2, c_3, …
5: e_2, c_1, c_2, c_3, …
6: e_3, c_1, c_2, c_3, …
7: e_4, c_1, c_2, c_3, …
8: e_5, c_1, c_2, c_3, …
9: e_6, c_1, c_2, c_3, …

In the example, {e_1, e_2, e_3} is a possible outcome of the rule. When e_1 is selected, voter 4's weight goes to zero. Then when e_2 is selected, voter 5's weight goes to zero. Finally e_3 is selected. Hare-PSC requires that c_1, c_2, or c_3 be selected.

The next example shows that Hare-EAR and Phragmén's first rule are not equivalent. Consider the profile with 4 voters, where k=2, and with dichotomous preferences given as follows:

1: {b}
2: {a, c}
3, 4: {a}

Note that in this example the Hare quota is q_H=2. Under Hare-EAR the winning committee is W={a, c}. First a is elected, since her approval support is 3≥ q_H and all other candidates have support less than the quota. Once candidate a is elected, all of her supporting voters have their weights reduced to w_i=1/3. Now there is no candidate with support beyond the quota q_H, and so we move to a 2-approval election. In this case the support of candidate b is 2-1/3<q_H and the support of candidate c is 2≥ q_H, and so c is elected into W. Under Phragmén's first rule we obtain W'={a, b}. First candidate a is elected since she has maximal support; then all supporting voters have their weights reduced to 1/3. Then the weighted support of candidate b is 1 and the weighted support of candidate c is 1/3; hence Phragmén's first rule elects b into the committee. Thus, Hare-EAR and Phragmén's first rule are not equivalent under dichotomous preferences.

§.§ Phragmén's Ordered Method

A compelling rule is Phragmén's Ordered Method, which can even be generalised to weak orders. Under strict preferences, it satisfies weak Droop-PSC <cit.>. On the other hand, even under strict preferences, it does not satisfy Droop-PSC <cit.>. If we are willing to forego PSC, then Phragmén's Ordered Method seems to be an exceptionally useful rule for strict preferences because, unlike STV, it satisfies both candidate monotonicity and committee monotonicity <cit.>. Committee monotonicity requires that for any outcome W of size k, there is a possible outcome W' of size k+1 such that W'⊃ W.

§.§ Thiele's Ordered Methods

Thiele's methods are based on identifying candidates that are most preferred by the largest weight of voters. Any voter who has had j candidates selected has current weight 1/(j+1). <cit.> presented an example (Example 13.15) that can be used to show that Thiele's ordered methods do not satisfy weak Droop-PSC under strict preferences.

§.§ CPO-STV rules

A class of STV-related rules is CPO-STV, proposed by <cit.>.
The rules try to achieve a PR-type objective while ensuring that, for k=1, a Condorcet winner is returned whenever one exists.[Since these rules are Condorcet-consistent, they are vulnerable to the no-show paradox <cit.>.] One particular rule within this class is Schulze-STV <cit.>. All of these rules are only defined for linear orders and hence do not satisfy generalised PSC. Furthermore, they all require enumeration of all possible committees and pairwise comparisons between them. Hence they are exponential-time rules and impractical for large elections. <cit.> writes that CPO-STV is “computationally tedious, and for an election with several winners and many candidates it may not be feasible.” <cit.> also considered whether CPO-STV rules satisfy Droop-PSC but was unable to prove that they satisfy PSC (page 282). In any case, having an exponential-time rule satisfying PSC may not be compelling, because there exists a trivial exponential-time algorithm that satisfies PSC: enumerate committees, check whether they satisfy PSC, and then return one of them.

§ CONCLUSIONS

In this paper, we undertook a formal study of proportional representation under weak preferences. The generalised PSC axiom we proposed generalises several well-studied PR axioms in the literature. We then devised a rule that satisfies the axiom. Since EAR has relative merits over STV and Dummett's QBS (two known rules that satisfy PSC), it appears to be a compelling solution for achieving PR via voting. At the very least, it is another useful option in the toolbox of representative voting rules and deserves further consideration and study. The relative merits of STV, QBS, and EAR are summarised in Table <ref>.

EAR can be modified to also work for `participatory budgeting' settings in which candidates may have a non-unit `cost' and the goal is to select a maximal set of candidates such that the total cost of the candidates does not exceed a certain budget B. In our original setting of committee voting, EAR with q=n/k can be seen as follows. We initialise the weight of each voter as 1. Each candidate is viewed as having unit cost and the budget is k. A candidate c is selected if including c does not exceed the budget (k) and if c has approval support of at least n/k. When c is selected, the total weight of the voters supporting c is decreased by n/k times the unit cost. If no such candidate exists, we increment the j-approval election to a (j+1)-approval election. The process continues until no candidate can be included without exceeding the budget limit. We now explain how to modify EAR to work for non-unit costs and budgets. We again initialise the weight of each voter as 1. A candidate c is selected if including c does not exceed the budget B and if c has approval support of at least n/B times the cost of c. When c is selected, the total weight of the voters supporting c is decreased by n/B times the cost of c. If no such candidate exists, we increment the j-approval election to a (j+1)-approval election. The process continues until no candidate can be included without exceeding the budget limit. Proportional representation axioms related to those studied in this paper have only recently been extended and explored in the participatory budgeting setting by <cit.>.

Our work also sheds light on the complexity of computing committees that satisfy PR axioms, as well as the complexity of testing whether a given committee satisfies a given PR axiom.
We found that whereas a polynomial-time algorithm such as EAR finds a committee that satisfies generalised PSC, testing whether a given committee satisfies properties such as generalised PSC or generalised weak PSC is computationally hard. These findings are summarised in Table <ref>.

§ ACKNOWLEDGMENTS

Haris Aziz is supported by a Julius Career Award. Barton Lee is supported by a Scientia PhD fellowship. The authors thank Markus Brill, Edith Elkind, Dominik Peters and Bill Zwicker for comments.

[Aiyar(1930)]Aiya30a K. V. K. Aiyar. Proportional Representation by The Single Transferable Vote: The System and its Methods. The Madras Law Journal Press, 1930.
[Aleskerov and Karpov(2013)]AlKa13a F. Aleskerov and A. Karpov. A new single transferable vote method and its axiomatic justification. Social Choice and Welfare, 40(3): 771–786, 2013.
[Aziz and Huang(2016)]AzHu16a H. Aziz and S. Huang. Computational complexity of testing proportional justified representation. Technical Report arXiv:1612.06476, arXiv.org, 2016.
[Aziz et al.(2017)Aziz, Brill, Conitzer, Elkind, Freeman, and Walsh]ABC+16a H. Aziz, M. Brill, V. Conitzer, E. Elkind, R. Freeman, and T. Walsh. Justified representation in approval-based committee voting. Social Choice and Welfare, pages 461–485, 2017.
[Aziz et al.(2018a)Aziz, Elkind, Huang, Lackner, Sánchez-Fernández, and Skowron]AEH+18 H. Aziz, E. Elkind, S. Huang, M. Lackner, L. Sánchez-Fernández, and P. Skowron. On the complexity of extended and proportional justified representation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2018a.
[Aziz et al.(2018b)Aziz, Lee, and Talmon]ALT18 H. Aziz, B. E. Lee, and N. Talmon. Proportionally representative participatory budgeting: Axioms and algorithms. In Proceedings of the 17th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2018b.
[Black(1958)]Blac58a D. Black. The Theory of Committees and Elections. Cambridge University Press, 1958.
[Bowler and Grofman(2000)]BoGr00a S. Bowler and B. Grofman. Introduction: STV as an Embedded Institution. In Elections in Australia, Ireland, and Malta under the Single Transferable Vote: Reflections on an Embedded Institution, pages 1–14, 2000.
[Brams and Sanver(2009)]BrSa09a S. Brams and R. Sanver. Voting systems that combine approval and preference. In The Mathematics of Preference, Choice and Order, pages 215–237. Springer, 2009.
[Brill et al.(2017)Brill, Freeman, Janson, and Lackner]BFJL16a M. Brill, R. Freeman, S. Janson, and M. Lackner. Phragmén's voting methods and justified representation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), pages 406–413. AAAI Press, 2017.
[Chamberlin and Courant(1983)]ChCo83a J. R. Chamberlin and P. N. Courant. Representative deliberations and representative decisions: Proportional representation and the Borda rule. The American Political Science Review, 77(3): 718–733, 1983.
[Conitzer et al.(2009)Conitzer, Rognlie, and Xia]CRX09a V. Conitzer, M. Rognlie, and L. Xia. Preference functions that score rankings and maximum likelihood estimation. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), pages 109–115. AAAI Press, 2009.
[Doron and Kronick(1977)]DoKr77a G. Doron and R. Kronick. Single transferrable vote: An example of a perverse social choice function. American Journal of Political Science, 21(2): 303–311, 1977.
[Dummett(1984)]Dumm84a M. Dummett. Voting Procedures.
Oxford University Press, 1984.
[Dummett(1997)]Dumm97a M. Dummett. Principles of Electoral Reform. Oxford University Press, 1997.
[Elkind et al.(2014)Elkind, Faliszewski, Skowron, and Slinko]EFSS14a E. Elkind, P. Faliszewski, P. Skowron, and A. Slinko. Properties of multiwinner voting rules. In Proceedings of the 13th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 53–60, 2014.
[Elkind et al.(2017)Elkind, Faliszewski, Skowron, and Slinko]EFSS17a E. Elkind, P. Faliszewski, P. Skowron, and A. Slinko. Properties of multiwinner voting rules. Social Choice and Welfare, 2017.
[Faliszewski et al.(2017)Faliszewski, Skowron, Slinko, and Talmon]FSST17a P. Faliszewski, P. Skowron, A. Slinko, and N. Talmon. Multiwinner voting: A new challenge for social choice theory. In U. Endriss, editor, Trends in Computational Social Choice, chapter 2. 2017.
[Geller(2002)]Gell02a C. Geller. Single transferable vote with Borda elimination: a new vote-counting system. Technical Report 2201, Deakin University, Faculty of Business and Law, 2002.
[Hill(2001)]Hill01a I. D. Hill. Difficulties with equality of preference. Voting Matters, 13, 2001.
[Janson(2016)]Jans16a S. Janson. Phragmén's and Thiele's election methods. Technical Report arXiv:1611.08826 [math.HO], arXiv.org, 2016.
[Meek(1994)]Meek94a B. L. Meek. A new approach to the single transferable vote, paper II. Voting Matters, 1, 1994.
[Monroe(1995)]Monr95a B. L. Monroe. Fully proportional representation. The American Political Science Review, 89(4): 925–940, 1995.
[Moulin(1988)]Moul88 H. Moulin. Condorcet's principle implies the No Show Paradox. Journal of Economic Theory, 45: 53–64, 1988.
[Procaccia et al.(2008)Procaccia, Rosenschein, and Zohar]PSZ08a A. D. Procaccia, J. S. Rosenschein, and A. Zohar. On the complexity of achieving proportional representation. Social Choice and Welfare, 30: 353–362, 2008.
[Pukelsheim(2014)]Puke14a F. Pukelsheim. Proportional Representation: Apportionment Methods and Their Applications. Springer, 2014.
[Sánchez-Fernández et al.(2016)Sánchez-Fernández, Fernández, and Fisteus]SFF16a L. Sánchez-Fernández, N. Fernández, and L. A. Fisteus. Fully open extensions to the D'Hondt method. Technical Report arXiv:1609.05370 [cs.GT], arXiv.org, 2016.
[Sánchez-Fernández et al.(2017a)Sánchez-Fernández, Elkind, and Lackner]SEL17a L. Sánchez-Fernández, E. Elkind, and M. Lackner. Committees providing EJR can be computed efficiently. Technical Report arXiv:1704.00356, arXiv.org, 2017a.
[Sánchez-Fernández et al.(2017b)Sánchez-Fernández, Elkind, Lackner, Fernández, Fisteus, Basanta Val, and Skowron]SFF+17a L. Sánchez-Fernández, E. Elkind, M. Lackner, N. Fernández, J. A. Fisteus, P. Basanta Val, and P. Skowron. Proportional justified representation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2017b.
[Schulze(2002)]Schu02a M. Schulze. On Dummett's 'Quota Borda System'. Voting Matters, 15(3), 2002.
[Schulze(2011)]Schu11a M. Schulze. Free riding and vote management under proportional representation by single transferable vote. 2011.
[Telegraph(2011)]Tele11a The Telegraph. Professor Sir Michael Dummett, December 2011.
[Tideman(1995)]Tide95a N. Tideman. The single transferable vote. Journal of Economic Perspectives, 9(1): 27–38, 1995.
[Tideman and Richardson(2000)]TiRi00a N. Tideman and D. Richardson. Better voting methods through technology: The refinement-manageability trade-off in the single transferable vote.
Public Choice, 103(1-2): 13–34, 2000.
[Tideman(2006)]Tide06a T. N. Tideman. Collective Decisions And Voting: The Potential for Public Choice. Ashgate, 2006.
[Woodall(1994)]Wood94a D. R. Woodall. Properties of preferential election rules. Voting Matters, 3, 1994.
[Woodall(1997)]Wood97a D. R. Woodall. Monotonicity of single-seat preferential election rules. Discrete Applied Mathematics, 77(1): 81–98, 1997.
[Zwicker(2016)]Zwic15a W. S. Zwicker. Introduction to the theory of voting. In F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia, editors, Handbook of Computational Social Choice, chapter 2. Cambridge University Press, 2016.

§ STV

Proof of Proposition <ref>. Let q∈ (n/(k+1), n/k] and assume that all voters have strict preferences. Suppose for the purpose of a contradiction that W is the STV outcome for some instance and q-PSC is not satisfied. That is, there exists a positive integer ℓ and a solid coalition N'⊆ N with |N'|≥ℓ q supporting a set of candidates C', such that |W∩ C'|<min{ℓ, |C'|}. Without loss of generality we assume that |C'|≥ℓ. If this were not the case, i.e., |C'|<ℓ, one could simply consider a smaller subset of voters in N', and these voters would necessarily still solidly support all candidates in C'. Thus, given that N' solidly supports the candidate set C' with |C'|≥ℓ and |N'|≥ℓ q, we wish to derive a contradiction from the assumption that j=|W∩ C'|<min{ℓ, |C'|}=ℓ.

The STV algorithm iteratively elects or eliminates a single candidate.[Technically speaking, if the sum of the numbers of elected candidates, W, and unelected but also uneliminated candidates, C^*, equals k, then all the unelected and uneliminated candidates C^* are elected simultaneously in the same iteration. Within this proof, and for simplicity of notation, we assume that in such a situation the election of the candidates in C^* occurs sequentially, with supporting voter weights reduced as per lines 8-10 of Algorithm <ref>. Clearly this assumption is without loss of generality.] Let T be the number of iterations which the STV algorithm runs through to output W, and let (c_1, …, c_T) be a sequence of candidates such that in iteration t∈{1, …, T} candidate c_t is either elected or eliminated. Note that T≥ k and it need not be the case that ∪_t{c_t}=C. Furthermore, since |W∩ C'|=j it must be the case that |∪_t{c_t}∩ C'|≥ j.

First, we claim that C'⊆∪_t {c_t}. If this were not the case, then in every iteration voters in N' would support only candidates in C', and since precisely j candidates in C' were elected, the remaining weight of voters in N' would be >(ℓ -j)q. But this is a contradiction; the STV algorithm cannot terminate while voters in N have remaining weight >q (recall the footnote of the previous paragraph), since this would imply that <k candidates have been elected. Thus, we conclude that C'⊆∪_t {c_t}. This implies that there exists a unique ordering of the candidates in C', (c_1', …, c_|C'|'), which maintains the ordering (c_1, …, c_T); i.e., simply define the one-to-one mapping σ: {1, …, |C'|}→{1, …, T} such that (c_1', …, c_|C'|')=(c_σ(1), …, c_σ(|C'|)).

Second, we claim that the j candidates (c_|C'|-(j-1)', c_|C'|-(j-2)', …, c_|C'|') are all elected. Starting with iteration t=σ(|C'|), suppose that candidate c_|C'|' is not elected. This would then imply that at the start of iteration t voters in N' have weight >(ℓ-j)q≥ q, having supported only candidates in C' in earlier iterations, of which precisely j have been elected.
Furthermore, since c_|C'|' is the only unelected and uneliminated candidate in C', all voters in N' support candidate c_|C'|'. Thus, the weighted-plurality support is >q, which contradicts the assumption that c_|C'|' is not elected. We conclude that candidate c_|C'|' is in fact elected, and so in all earlier iterations s<t the weight of voters in N' is >(ℓ-j+1)q and precisely (j-1) candidates from C' are elected in earlier iterations. Now consider iteration t'=σ(|C'|-1) and suppose that candidate c_|C'|-1' is not elected. At the start of this iteration, each voter in N' supports with their plurality vote either candidate c_|C'|' or candidate c_|C'|-1', and in total these voters have weight >(ℓ-j+1)q≥2q. Thus, at least one of these candidates must have weighted-plurality support >q, and so no candidate is eliminated in this iteration; we conclude that candidate c_|C'|-1' must be elected. This argument can be repeated for all iterations t”=σ(|C'|-2), …, σ(|C'|-(j-1)), and so we conclude that the j candidates (c_|C'|-(j-1)', c_|C'|-(j-2)', …, c_|C'|') are all elected.

Finally, we argue that candidate c_|C'|-j' is also elected, which contradicts the original assumption that |W∩ C'|=j<ℓ. To see this, note that at the start of iteration t^*=σ(|C'|-j) the total weight of voters in N' is at least ℓ q (applying the claim from the previous paragraph) and each voter in N' has a plurality vote for one of the (j+1) candidates in ∪_i=0^j{c_|C'|-i'}. Noting that ℓ≥ j+1, there exists at least one candidate in ∪_i=0^j{c_|C'|-i'} who receives weighted-plurality support > q, and so no candidate is eliminated in this iteration. In particular, this implies that candidate c_|C'|-j' is elected. Combining this with the previous claims, we see that j+1 candidates are elected, which contradicts the original assumption and completes the proof.

§ COMPLEXITY OF TESTING PSC

Under linear orders, it can be tested in polynomial time whether a committee satisfies PSC.

For each i from 1 to m one can look at prefixes of preference lists of size i. For these prefixes, we can check whether there exists a corresponding solid coalition. For each such solid coalition we can check whether the appropriate number of candidates has been selected. The same idea can be used for weak PSC.

Under linear orders, it can be tested in polynomial time whether a committee satisfies weak PSC.

Under dichotomous preferences, the problem of testing generalised PSC is coNP-complete.

Under dichotomous preferences, generalised PSC is equivalent to PJR. Since <cit.> and <cit.> showed that testing PJR is coNP-complete, it follows that testing generalised PSC is coNP-complete as well.
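To illustrate the first of these observations, the following Python sketch (our own illustrative code, not from the paper) implements the polynomial-time PSC test for linear orders described above. It relies on the facts that, under strict preferences, a maximal solid coalition supporting a set C' is exactly the set of voters whose top-|C'| candidates coincide with C' as a set, and that maximal coalitions place the strongest PSC demands, so it suffices to check them.

from math import floor

def satisfies_psc(prefs, committee, quota):
    """Polynomial-time q-PSC test for strict preferences.

    prefs[i]  : voter i's linear order, a list of candidates, best first.
    committee : the set W under test;  quota : q with n/(k+1) < q <= n/k.
    """
    m = len(prefs[0])
    W = set(committee)
    for t in range(1, m + 1):
        # Group voters by the *set* of their top-t candidates; each group is
        # the maximal solid coalition supporting that set.
        groups = {}
        for pref in prefs:
            key = frozenset(pref[:t])
            groups[key] = groups.get(key, 0) + 1
        for Cp, size in groups.items():
            demand = min(floor(size / quota), t)   # min(ell, |C'|)
            if len(W & Cp) < demand:
                return False
    return True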
{ "authors": [ "Haris Aziz", "Barton Lee" ], "categories": [ "cs.GT", "cs.AI", "91A12, 68Q15", "F.2; J.4" ], "primary_category": "cs.GT", "published": "20170825001043", "title": "The Expanding Approvals Rule: Improving Proportional Representation and Monotonicity" }
In a simple model of propagation of asymmetric Gaussian beams in nonlinear waveguides, described by a reduction to ordinary differential equations of generalized nonlinear Schrödinger equations (GNLSEs) with cubic-quintic (CQ) and saturable (SAT) nonlinearities and a graded-index profile, the beam widths exhibit two different types of beating behavior, with transitions between them. We present an analytic model to explain these phenomena, which originate in a 1:1 resonance in a 2 degree-of-freedom Hamiltonian system. We show how small oscillations near a fixed point close to 1:1 resonance in such a system can be approximated using an integrable Hamiltonian and, ultimately, by a single first order differential equation. In particular, the beating transitions can be located from coincidences of roots of a pair of quadratic equations, with coefficients determined (in a highly complex manner) by the internal parameters and initial conditions of the original system. The results of the analytic model agree with numerics of the original system over large parameter ranges, and allow new predictions that can be verified directly. In the CQ case we identify a band of beam energies for which there is only a single beating transition (as opposed to 0 or 2) as the eccentricity is increased. In the SAT case we explain the sudden (dis)appearance of beating transitions for certain values of the other parameters as the graded index is changed.

§ INTRODUCTION

In the sequence of papers <cit.> a variational approach was taken to investigate the propagation of asymmetric (elliptic) Gaussian beams in nonlinear waveguides, with cubic-quintic and saturable nonlinearities and a parabolic graded-index (GRIN) profile, as described by suitable generalized nonlinear Schrödinger equations (GNLSEs). The beam widths in the two directions transverse to the direction of propagation were found to obey a set of ordinary differential equations which can be identified as the equations of motion of a point particle in certain rather complicated, but tractable, 2d potentials. Numerical analysis of these equations revealed “beating” phenomena: in addition to fast oscillations, the beam widths exhibit a (relatively) slow periodic variation. Furthermore, two types of beating were identified: In type I beating the amplitude of oscillation of the beam width in one direction remains greater than the amplitude of oscillation in the other direction, whereas in type II, there is an interchange between the widths in the two transverse directions. The type of beating depends on the parameters of the system and the initial eccentricity of the beam. Remarkably, as the initial eccentricity or other parameters are changed, there can be a transition between types, and this transition is characterized by a singularity in the ratio of the periods of the beating and of the fast oscillatory motion. The intention of the current paper is to provide a theoretical analysis of the beating phenomena and, in particular, to present an approximate analytic method to find the transitions between types. The relevant tool is the analysis of small oscillations in 2 degree-of-freedom Hamiltonian systems near a fixed point which is close to 1:1 resonance.
The fact that resonance is the source of “beating” or “energy transfer” phenomena in mechanical systems is well known. A classic example can be found in the paper of Breitenburger and Mueller <cit.> on the elastic pendulum, which the authors describe as a “paradigm of a conservative, autoparametric system with an internal resonance”. The paper <cit.> has other features in common with our work (such as the use of action-angle variables and the fact that the analytic approximation used is a single elliptic function equation), but it is in the much simpler context of 1:2 resonance. For other examples of autoparametric resonance see, for example, <cit.>. The most widely used tool for the analysis of systems near resonance is the multiple time scale method; see for example <cit.> for thorough presentations and many examples. For a typical modern application see <cit.>. However, averaging techniques present an alternative <cit.>, and in the context of Hamiltonian systems, working in action-angle coordinates has substantial advantages <cit.>. A typical study of a system near resonance will involve looking at the bifurcations of special solutions. In this context much attention has been paid to the definition and identification of nonlinear normal modes; see <cit.> for a review, and <cit.> for an example in the context of 1:1 resonance. The 2 degree-of-freedom Hamiltonian systems we study have a discrete symmetry, and are approximated by a family of systems with 1:1 resonance studied nearly 40 years ago by Verhulst <cit.>. Verhulst showed the existence of an approximate second integral and used this to study bifurcations of special solutions and their stability. Our work differs from that of Verhulst and other works on 1:1 resonance in several regards. The bifurcation question we pose depends not only on the internal parameters of the system, but also on the initial conditions. The question is not only one of identifying different types of solutions of the system, but also of seeing how the type of solution changes as both the initial conditions and the internal system parameters are varied. We have not seen a similar study in the highly complex context of 1:1 resonance. Our methodology uses action-angle variables and canonical transformations (though in an appendix we show how to apply standard two time scale techniques). Unlike in most existing studies, it is necessary to compute the relevant canonical transformation to second order. However, this does not affect the result that once the correct canonical transformation has been applied, the resulting approximating Hamiltonian depends only on a single combination of the angle variables and is integrable. The equations of motion for the integrable Hamiltonian can be reduced to a single first order differential equation, and the rich bifurcation structure of the systems we study reduces to understanding the bifurcations of roots of a pair of quadratic polynomials, with coefficients that depend (in a complex, nonexplicit manner) on the internal parameters of the systems and the initial conditions. Comparison with numerical results shows our method gives high-quality results in a significant region of parameter space, and allows a variety of interesting new predictions. The structure of this paper is as follows.
In the next section we review the relevant models from nonlinear optics and the collective variable approximation used to obtain equations for the propagation of beam widths, and present the main findings of papers <cit.> along with some further numerical results. In section 3, we develop our method of integrable approximation for small oscillations in a 2 degree-of-freedom Hamiltonian system near a fixed point close to 1:1 resonance. In section 4 we describe the application of this method to the specific systems relevant to beam propagation, confirming existing numerical results and presenting new predictions. In section 5 we summarize and conclude. Appendix A completes some technical details omitted from the main text, and Appendix B describes an alternate method of approximation of the full equations using a two time expansion. This is a more ad hoc approach than the one explained in section 3, but we include it as it is more commonly used in the literature, and for certain values of parameters it gives better results.

Before closing this introduction we mention a number of points concerning the relevance of the work in this paper to optical solitons. We will describe in the next section the manner in which we use ordinary differential equations (ODEs) to study the behavior of solutions of GNLSEs. The use of ODEs to study GNLSEs is widespread; see for example <cit.>. In particular, the last two papers use ODE methods in the study of rotating solitons. Our work extends the catalog of interesting bifurcations that can be observed in the context of GNLSEs; for another example, see the papers <cit.> for a case of a saddle-loop bifurcation. Finally, we mention that we neglect dispersive terms in the GNLSEs we study. This is justifiable in the context of new optical materials <cit.> characterized by Kerr coefficients of the order 10^-11–10^-12 cm^2/W, making the critical intensity for self-focusing small enough that it can be reached using microsecond pulses and possibly even continuous wave (CW) laser beams.

§ MODELS, THE COLLECTIVE VARIABLE APPROACH AND NUMERICAL RESULTS

We consider beam propagation in a nonlinear, graded-index fiber, as described by one of the following GNLSEs:

2iψ_z + ψ_xx + ψ_yy + (|ψ|^2 - Q|ψ|^4 - g(x^2+y^2))ψ = 0,
2iψ_z + ψ_xx + ψ_yy + (|ψ|^2/(1+α^2|ψ|^2) - g(x^2+y^2))ψ = 0.

Here, modulo suitable normalizations <cit.>, ψ is the strength of the electric field, z is the longitudinal coordinate, x,y are transverse coordinates, and Q,α,g are parameters. The first equation is the case of cubic-quintic nonlinearity (CQ), the second is the case of saturable nonlinearity (SAT). In the low intensity limit these models are similar, but for higher intensity they display different physical properties. In both cases, the higher order nonlinearity prevents the beam collapse associated with the standard Kerr nonlinearity <cit.>. The term -g(x^2+y^2)ψ reflects the graded-index nature of the fibre, namely that the refractive index n falls with distance r from the center of the fibre according to the law n^2 = n_0^2 - Gr^2; the physical significance of this is explained in <cit.>.

The collective variable approximation (CVA), introduced for the study of self-focusing beams in <cit.>, is a variational technique to approximate solutions of nonlinear Schrödinger-type equations which has been used and validated in many different situations <cit.>. The method replaces partial differential equations such as (<ref>) and (<ref>) by a system of ordinary differential equations for the coefficients of an ansatz for the full solution.
The GNLSEs (<ref>) and (<ref>) are variational equations for action principles based on the Lagrangian densities

L_CQ = i(ψψ^*_z - ψ^*ψ_z) + |ψ_x|^2 + |ψ_y|^2 - (1/2)|ψ|^4 + (Q/3)|ψ|^6 + g(x^2+y^2)|ψ|^2,
L_SAT = i(ψψ^*_z - ψ^*ψ_z) + |ψ_x|^2 + |ψ_y|^2 + (ln(1+α^2|ψ|^2) - α^2|ψ|^2)/α^4 + g(x^2+y^2)|ψ|^2.

We assume ψ takes the form of the trial function

ψ_T(x,y,z) = A(z) exp(iϕ(z) - x^2/(2a_x^2(z)) + ib_x(z)x^2 - y^2/(2a_y^2(z)) + ib_y(z)y^2),

where A,ϕ,a_x,a_y,b_x,b_y are currently undetermined, real functions of only the longitudinal coordinate z. This trial function describes an elliptic Gaussian beam, with a_x,a_y representing the widths of the beam in the x,y directions, b_x,b_y the curvatures of the beam wavefront, A the normalized amplitude of the electric field, and ϕ a longitudinal phase factor. Our choice of a Gaussian shape for the trial function is appropriate because the Gaussian is an exact solution of the linear Schrödinger equation for GRIN waveguides <cit.>. Substituting the trial function in the Lagrangian densities (<ref>),(<ref>) and computing the integrals over the variables x,y, we obtain reduced densities for the functions A,ϕ,a_x,a_y,b_x,b_y. The corresponding Euler-Lagrange equations in the CQ case are

Ȧ = -(b_x + b_y)A,
ȧ_x,y = 2a_x,y b_x,y,
ḃ_x,y = 1/(2a_x,y^4) - 2b_x,y^2 - g/2 - (A^2/a_x,y^2)(1/8 - QA^2/9),
ϕ̇ = -1/(2a_x^2) - 1/(2a_y^2) + (3/8 - 5QA^2/18)A^2.

Here a dot denotes differentiation with respect to z. In the SAT case the equations for A,a_x,a_y remain the same, but those for b_x,b_y,ϕ are replaced by

ḃ_x,y = 1/(2a_x,y^4) - 2b_x,y^2 - g/2 + (ln(1+α^2A^2) + Li_2(-α^2A^2))/(2α^4A^2a_x,y^2),
ϕ̇ = -1/(2a_x^2) - 1/(2a_y^2) + (α^2A^2 - 2ln(1+α^2A^2) - Li_2(-α^2A^2))/(2α^4A^2).

Here Li_2(x) = ∑_k=1^∞ x^k/k^2 is the Spence or dilogarithm function <cit.>. For both the CQ and SAT cases we observe that A^2a_xa_y is conserved <cit.>, and we write A^2a_xa_y = 4E (4E is the beam energy) and use this to eliminate A(z). The phase ϕ(z) evidently plays no role in determining the other functions and can be computed by a simple quadrature once the other functions have been found. Furthermore, it is clear that we can write b_x (b_y) in terms of a_x (a_y) and its z-derivative. Thus we can reduce the system of 6 equations to a pair of second order equations for a_x,a_y. After some more calculation it emerges that these are simply the equations of motion

ä_x = -∂V/∂a_x, ä_y = -∂V/∂a_y

for a particle in a potential V(a_x,a_y), where for CQ

V = V_CQ ≡ (1/2)(1/a_x^2 + 1/a_y^2) - E/(a_xa_y) + 16QE^2/(9a_x^2a_y^2) + (g/2)(a_x^2 + a_y^2),

and for SAT

V = V_SAT ≡ (1/2)(1/a_x^2 + 1/a_y^2) - (a_xa_y/(4Eα^4)) Li_2(-4Eα^2/(a_xa_y)) + (g/2)(a_x^2 + a_y^2).

Thus integration of equations (<ref>) for the potentials (<ref>) and (<ref>) provides a first approximation to solutions of the GNLSEs (<ref>) and (<ref>). Full numerical solutions of GNLSEs have been given in both the SAT <cit.> and the CQ <cit.> cases with g=0. In <cit.> it was shown that the breathing frequencies found numerically are similar to those obtained by the CVA technique. In <cit.> it was shown that the shape of the beam obtained numerically for a saturable medium remains similar to Gaussian, even for an asymmetric initial condition. However, the use of direct numeric methods to give an overall picture of the behavior of a GNLSE, as a function of all the various parameters, remains a computationally overwhelming task, and having a qualitatively correct analytic or semianalytic model is therefore useful for developing physical insight <cit.>.
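As an aside for readers wishing to reproduce the phenomenology discussed in the remainder of this section, the equations of motion (<ref>) with the potential V_CQ are straightforward to integrate with standard library routines. The following Python/SciPy sketch (our own illustrative code, not that used to generate the figures) uses the parameter values of Figure 1 below, with a_0=1 and the initial conditions introduced in the next paragraph.

import numpy as np
from scipy.integrate import solve_ivp

def grad_V_cq(ax, ay, E, Q, g):
    # partial derivatives of V_CQ with respect to a_x and a_y
    dVdax = -1.0/ax**3 + E/(ax**2 * ay) - 32*Q*E**2/(9 * ax**3 * ay**2) + g*ax
    dVday = -1.0/ay**3 + E/(ax * ay**2) - 32*Q*E**2/(9 * ax**2 * ay**3) + g*ay
    return dVdax, dVday

def rhs(z, u, E, Q, g):
    ax, ay, vx, vy = u
    gx, gy = grad_V_cq(ax, ay, E, Q, g)
    return [vx, vy, -gx, -gy]

# Parameters of Figure 1 (a_0 = 1, so Q = K_CQ a_0^2/(4E) and g = g a_0^4).
a0, r, E, K_CQ, g = 1.0, 1.14, 2.039, 0.71, 0.01
Q = K_CQ * a0**2 / (4 * E)

u0 = [a0 * r, a0 / r, 0.0, 0.0]   # beam launched with b_x(0) = b_y(0) = 0
sol = solve_ivp(rhs, (0.0, 3000.0), u0, args=(E, Q, g),
                rtol=1e-10, atol=1e-12, max_step=0.5)
a_x, a_y = sol.y[0], sol.y[1]     # beam widths as functions of z = sol.t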
Appropriate initial conditions for (<ref>) are

a_x(0) = a_0r, a_y(0) = a_0/r, ȧ_x(0) = ȧ_y(0) = 0.

The latter two conditions are equivalent to taking b_x(0) = b_y(0) = 0. Note that both the CQ and the SAT system have the scaling symmetry

a_x → λa_x, a_y → λa_y, a_0 → λa_0, r → r, z → λ^2z, Q → λ^2Q, α → λα, g → λ^-4g, E → E.

Thus for CQ we do not need to study the dependence of solutions on the 5 parameters Q,g,E,a_0,r, but only on the 4 scale invariant quantities Qa_0^-2, ga_0^4, E, r. On occasion we will work with the scale invariant quantity K_CQ = 4QEa_0^-2 instead of the quantity Qa_0^-2. (For SAT, replace all instances of Q in the previous two sentences with α^2, and take K_SAT = 4α^2Ea_0^-2.) Note that since in both models there is symmetry between a_x and a_y, there is an r → 1/r inversion symmetry, and thus we need only study r ≤ 1 or r ≥ 1.

In the papers <cit.> the ODE systems above were studied numerically. For appropriate choices of the parameters, “beating” phenomena were observed: in addition to (relatively) fast “breathing” oscillations, the beam widths exhibit a (relatively) slow periodic variation. Two types of beating were identified: In type I beating, the amplitude of oscillation of the beam width in one direction remains greater than the amplitude of oscillation in the other direction. In type II beating, there is an interchange between the widths in the two transverse directions. This is illustrated in Figure 1, which shows solutions of the CQ system for E=2.039, K_CQ=0.71, ga_0^4=0.01, and two choices of r: r=1.14 gives type I beating, whereas r=1.16 gives type II beating. The type of beating depends on the parameters of the system and, as evident from Figure 1, on the initial eccentricity of the beam. Remarkably, as the initial eccentricity is increased, or as other parameters are changed, there can be a transition between types. The approach to this transition is characterized by a divergence in the ratio of the periods of the slow beating and of the fast oscillatory motion. In Figure 2 this ratio (determined from numerical simulations) is plotted as a function of r^2 for the CQ system, with E=2.039, K_CQ=0.71 and ga_0^4=0, 0.01, 0.02, 0.03. (The reason for the choice of the coordinate r^2 on the x-axis is simply to make the plot clearer.) For r just above 1 the beating is type II, then there is a transition to type I, and then a second transition back to type II. The dependence on the system parameters of the two critical values of r, which we denote collectively by r_c, is explored further in Figure 3. In Figure 3a the values of r_c are plotted as a function of ga_0^4 for three different values of K_CQ and a constant value of E; in Figure 3b, r_c is plotted as a function of ga_0^4 for three different values of E and a constant value of K_CQ. In general we see that r_c increases as a function of ga_0^4 (for fixed E, K_CQ). From Figure 3b we see that, since the (solid) red curve lies above the (dashed) blue curve, which lies above the (dot-dashed) black curve, r_c also increases as a function of E (for fixed ga_0^4, K_CQ). But in Figure 3a we see a difference between the upper and lower branches of r_c. We deduce that the higher value of r_c also increases with K_CQ (for fixed E, ga_0^4), but the lower value decreases.
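The classification of a trajectory as type I or type II that underlies Figures 2 and 3 can be automated directly from the definition, by tracking the slow envelope of each width's fast-oscillation amplitude and testing whether the two envelopes cross. The following sketch is our own rough heuristic, not the procedure used in <cit.>; the window length (a few fast periods) must be chosen by hand.

import numpy as np

def running_amplitude(a, z, window):
    """Peak-to-peak amplitude of a(z) over a sliding window spanning a few
    fast-oscillation periods (an O(N^2) loop; adequate for a sketch)."""
    amp = np.empty_like(a)
    for i in range(len(a)):
        mask = np.abs(z - z[i]) <= window / 2
        amp[i] = a[mask].max() - a[mask].min()
    return amp

def beating_type(a_x, a_y, z, window):
    Ax = running_amplitude(a_x, z, window)
    Ay = running_amplitude(a_y, z, window)
    # Discard half-filled windows at the two ends of the run.
    core = (z > z[0] + window) & (z < z[-1] - window)
    s = np.sign(Ax[core] - Ay[core])
    # Type I: one amplitude envelope dominates throughout; type II: they cross.
    return "I" if np.all(s >= 0) or np.all(s <= 0) else "II"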
We shall see later that for other values of ga_0^4, K_CQ, E there can be just a single transition, or no transitions at all, as r is increased from 1. Transitions between beating types are also observed in the SAT system, again with a complex dependence on the parameters ga_0^4, K_SAT and E. The aim of this paper is to provide an integrable approximation for equations (<ref>) with the potentials (<ref>) and (<ref>), giving a theoretical model that predicts where the transitions between types take place.

§ SMALL OSCILLATIONS NEAR 1:1 RESONANCE

In this section we describe a general process of approximation near a 1:1 resonance for a 2 degree-of-freedom Hamiltonian system with Hamiltonian

H = (1/2)(p_x^2 + p_y^2) + V(a_x,a_y).

Here a_x,a_y are the coordinates, p_x,p_y are the conjugate momenta, and the potential V (which typically will depend on a number of parameters) is symmetric, V(a_x,a_y) = V(a_y,a_x). We assume that for typical values of the parameters the potential has an isolated symmetric minimum (at a_x = a_y = a_min, say) at which the system is close to 1:1 resonance. Note that because of the symmetry, ∂^2V/∂a_x^2(a_min,a_min) = ∂^2V/∂a_y^2(a_min,a_min). Thus the Hessian matrix of the potential at (a_min,a_min) has eigenvectors (1, ±1)^T with eigenvalues ∂^2V/∂a_x^2(a_min,a_min) ± ∂^2V/∂a_x∂a_y(a_min,a_min). The condition for being close to 1:1 resonance (i.e. equal eigenvalues) is therefore simply ∂^2V/∂a_x∂a_y(a_min,a_min) ≈ 0. For these systems we study orbits with initial conditions as given in (<ref>).

The process of approximating such a system with an integrable system has 3 steps. The first step is to expand in normal coordinates near the fixed point, retaining only terms up to order 4 in the potential. Thus we write

a_x = a_min + (ζ_2 + ζ_1)/√2, a_y = a_min + (ζ_2 - ζ_1)/√2

and expand to fourth order to obtain

H_1 = (1/2)(p_1^2 + p_2^2 + ω_1^2ζ_1^2 + ω_2^2ζ_2^2) + a_1ζ_1^2ζ_2 + a_2ζ_2^3 + a_3ζ_1^4 + a_4ζ_1^2ζ_2^2 + a_5ζ_2^4,

where p_1,p_2 are the conjugate momenta to the coordinates ζ_1,ζ_2, and ω_1,ω_2,a_1,a_2,a_3,a_4,a_5 are constants that depend on the parameters of the original potential V. The Hamiltonian H_1 has ζ_1 → -ζ_1 symmetry as a consequence of the symmetry of H, and is the general Hamiltonian with this symmetry and a quartic potential. Aspects of the behavior of this Hamiltonian at, or close to, 1:1 resonance have been studied previously, for example in <cit.>. The initial conditions for this system, corresponding to (<ref>), are

ζ_1(0) = (a_0/√2)(r - 1/r), ζ_2(0) = (1/√2)(a_0(r + 1/r) - 2a_min), p_1(0) = p_2(0) = 0.

Symmetric solutions correspond to the initial condition ζ_1(0) = 0. In regarding H_1 as an approximation for H we are neglecting terms of fifth order and above.

The second step is to make the canonical transformation to action-angle coordinates associated with the quadratic part of the Hamiltonian H_1, i.e. to substitute

ζ_1 = √(2J_1/ω_1)cosθ_1, p_1 = -√(2J_1ω_1)sinθ_1, ζ_2 = √(2J_2/ω_2)cosθ_2, p_2 = -√(2J_2ω_2)sinθ_2.
This gives

H_2 = ω_1J_1 + ω_2J_2 + (2J_1a_1/ω_1 + 3J_2a_2/ω_2)√(J_2/2ω_2)cosθ_2 + (J_2/2ω_2)^(3/2) 2a_2cos3θ_2 + (a_1J_1/ω_1)√(J_2/2ω_2)(cos(2θ_1-θ_2) + cos(2θ_1+θ_2)) + 3a_3J_1^2/(2ω_1^2) + a_4J_1J_2/(ω_1ω_2) + 3a_5J_2^2/(2ω_2^2) + (a_4J_1J_2/(2ω_1ω_2))(cos(2θ_1-2θ_2) + cos(2θ_1+2θ_2)) + (2a_3J_1^2/ω_1^2 + a_4J_1J_2/(ω_1ω_2))cos2θ_1 + (a_3J_1^2/(2ω_1^2))cos4θ_1 + (a_4J_1J_2/(ω_1ω_2) + 2a_5J_2^2/ω_2^2)cos2θ_2 + (a_5J_2^2/(2ω_2^2))cos4θ_2.

Here θ_1,θ_2 are the angle variables, and J_1,J_2 the conjugate actions. The initial conditions for the action variables are

J_1(0) = (a_0^2ω_1/4)(r - 1/r)^2, J_2(0) = (ω_2/4)(a_0(r + 1/r) - 2a_min)^2.

The initial conditions for the angle variables depend on the signs of ζ_1(0) and ζ_2(0). If ζ_1(0)>0 (ζ_2(0)>0) then, from (<ref>), we should take θ_1(0)=0 (θ_2(0)=0), and otherwise θ_1(0)=π (θ_2(0)=π). Due to the ζ_1 → -ζ_1 symmetry of H_1, the Hamiltonian H_2 has period π (and not 2π) as a function of θ_1, and thus the choice of the θ_1 initial condition is irrelevant. The choice of the θ_2 initial condition, however, is important. We are introducing a non-physical discontinuity in the approximation procedure when the sign of ζ_2(0) changes, i.e. when r + 1/r = 2a_min/a_0. We will see the effects of this later, in our results for the SAT potential.

The third step involves a canonical change of coordinates (θ_1,θ_2,J_1,J_2) → (ϕ_1,ϕ_2,K_1,K_2) defined by a generating function of the second type, G_2(θ_1,θ_2,K_1,K_2) <cit.>, chosen to eliminate the nonresonant terms from the Hamiltonian (i.e. all the trigonometric terms of order ||J||^(3/2) or ||J||^2 except the one involving cos(2θ_1-2θ_2)). The full change of coordinates is given by

ϕ_1 = ∂G_2/∂K_1, J_1 = ∂G_2/∂θ_1, ϕ_2 = ∂G_2/∂K_2, J_2 = ∂G_2/∂θ_2.

The generating function G_2 should be taken in the form

G_2 = K_1θ_1 + K_2θ_2 + A_1sinθ_2 + A_2sin3θ_2 + A_3sin(2θ_1-θ_2) + A_4sin(2θ_1+θ_2) + A_5sin2θ_1 + A_6sin4θ_1 + A_7sin2θ_2 + A_8sin4θ_2 + A_9sin6θ_2 + A_10sin(2θ_1+2θ_2) + A_11sin(2θ_1+4θ_2) + A_12sin(4θ_1+2θ_2) + A_13sin(2θ_1-4θ_2) + A_14sin(4θ_1-2θ_2),

where the coefficients A_1,…,A_14 are functions of K_1,K_2 chosen to eliminate the nonresonant trigonometric terms in the Hamiltonian to the required order. A_1,A_2,A_3,A_4 are of order ||K||^(3/2) and A_5,…,A_14 are of order ||K||^2. The calculations are long, but straightforward with the help of a symbolic manipulator, and the final Hamiltonian is found to be simply

H_3 = ω_1K_1 + ω_2K_2 + b_1K_1^2 + b_2K_1K_2 + b_3K_2^2 + (b_4K_1^2 + b_5K_1K_2)cos(2(ϕ_1-ϕ_2)),

where

b_1 = 3a_3/(2ω_1^2) - a_1^2(8ω_1^2 - 3ω_2^2)/(4ω_1^2ω_2^2(2ω_1-ω_2)(2ω_1+ω_2)),
b_2 = a_4/(ω_1ω_2) - 3a_1a_2/(ω_1ω_2^3) - 2a_1^2/(ω_1ω_2(2ω_1-ω_2)(2ω_1+ω_2)),
b_3 = 3a_5/(2ω_2^2) - 15a_2^2/(4ω_2^4),
b_4 = (ω_2-ω_1)a_1^2/(2ω_1^2ω_2^2(2ω_1-ω_2)),
b_5 = a_4/(2ω_1ω_2) - a_1a_2(4ω_1^2 - 3ω_1ω_2 - 4ω_2^2)/(2ω_1ω_2^3(2ω_1-ω_2)(2ω_1+ω_2)) - a_1^2/(ω_1^2ω_2(2ω_1-ω_2)).

The Hamiltonian H_3 given in (<ref>) is an integrable approximation of the original Hamiltonian H given in (<ref>). H_3 is a normal form for the “natural” Hamiltonian H_1 at or near 1:1 resonance. Note that in the case of exact resonance, ω_1 = ω_2, the coefficient b_4 vanishes. Also, close to resonance the corresponding term in H_3 is of lower order than the other terms, and in <cit.> it is omitted. However, we choose to retain it to avoid any assumption on the relative orders of magnitude of |ω_1 - ω_2| and ||K||. The integrability of H_3 is evident, as it depends on the modified angle variables ϕ_1,ϕ_2 only through the combination ϕ_1-ϕ_2.
As a consequence the quantity K_1+K_2 is conserved, in addition to the Hamiltonian itself. We denote the value of the Hamiltonian by ℰ and the value of K_1+K_2 by P (both should be computed from the system parameters and initial conditions); we write ℰ rather than E to avoid confusion with the beam energy parameter of Section 2. The full equations of motion are

ϕ̇_1 = ∂H_3/∂K_1 = ω_1 + 2b_1K_1 + b_2K_2 + (2b_4K_1 + b_5K_2)cos(2(ϕ_1-ϕ_2)),
ϕ̇_2 = ∂H_3/∂K_2 = ω_2 + b_2K_1 + 2b_3K_2 + b_5K_1cos(2(ϕ_1-ϕ_2)),
K̇_1 = -∂H_3/∂ϕ_1 = 2K_1(b_4K_1 + b_5K_2)sin(2(ϕ_1-ϕ_2)),
K̇_2 = -∂H_3/∂ϕ_2 = -2K_1(b_4K_1 + b_5K_2)sin(2(ϕ_1-ϕ_2)).

Using the two conservation laws it is possible to eliminate K_2 and ϕ_1-ϕ_2 from the K_1 equation of motion to get a single equation for K_1:

K̇_1^2 = -4((b_1-b_2+b_3+b_4-b_5)K_1^2 + ((b_2-2b_3+b_5)P + ω_1-ω_2)K_1 + b_3P^2 + ω_2P - ℰ)((b_1-b_2+b_3-b_4+b_5)K_1^2 + ((b_2-2b_3-b_5)P + ω_1-ω_2)K_1 + b_3P^2 + ω_2P - ℰ).

Equation (<ref>) is a central result of this paper. To solve (<ref>) it is necessary to translate the initial conditions for J_1,J_2,θ_1,θ_2 into initial conditions for K_1,K_2. This step requires details of the canonical transformation. Due to their length, the full equations determining the initial values of K_1,K_2 are given in Appendix A (equations (<ref>)-(<ref>)). Note there are two cases, depending on whether θ_2(0) is 0 or π. Note also that there is no guarantee that these equations will have a solution with real, positive K_1,K_2. In the case of the SAT system, for a certain range of parameter values we have experienced numerical problems with the solution of (<ref>)-(<ref>), specifically for initial values of J_2 close to zero, close to the jump from θ_2(0)=0 to θ_2(0)=π. However, typically there are values of K_1(0),K_2(0) close to the given values of J_1(0),J_2(0). Once the initial values of K_1,K_2 have been computed, the values of the constants ℰ and P can be found and equation (<ref>) can be solved.

The right hand side of (<ref>) is the product of two quadratic factors in K_1, with up to 4 real roots, and typical solutions will be oscillatory between two roots. When there is a double root there is the possibility of the period of the oscillation becoming infinite, marking a bifurcation in the solution. There are two ways that a double root can occur: by the vanishing of the discriminant of one of the quadratic factors, or by one of the roots of the first factor coinciding with one of the roots of the second. The discriminants of the quadratic factors are

Δ_1 = ((b_2+b_5)^2 - 4b_3(b_1+b_4))P^2 + 2((b_2-2b_3+b_5)ω_1 + (-2b_1+b_2-2b_4+b_5)ω_2)P + 4(b_1-b_2+b_3+b_4-b_5)ℰ + (ω_1-ω_2)^2,
Δ_2 = ((b_2-b_5)^2 - 4b_3(b_1-b_4))P^2 + 2((b_2-2b_3-b_5)ω_1 + (-2b_1+b_2+2b_4-b_5)ω_2)P + 4(b_1-b_2+b_3-b_4+b_5)ℰ + (ω_1-ω_2)^2.

A simple algebraic manipulation shows that the first factor and second factor have coincident roots if either Δ_3=0 or Δ_4=0, where

Δ_3 = b_3P^2 + ω_2P - ℰ,
Δ_4 = ((b_1b_5^2 - b_2b_4b_5 + b_3b_4^2)/(b_4-b_5)^2)P^2 + ((b_4ω_2 - b_5ω_1)/(b_4-b_5))P - ℰ.

From (<ref>), we see that the first case occurs when the repeated root is at K_1=0. It should be emphasized that the occurrence of a double root on the RHS of (<ref>) is a necessary condition for a bifurcation of the solution (giving rise to a transition between types) but not a sufficient condition. For example, if the solution describes an oscillation on the interval between two adjacent roots of the RHS, and the two other roots outside this interval merge, this will have no effect on the solution.
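In numerical practice, candidate transition points can be located by scanning a parameter and monitoring the roots of the two quadratic factors directly, rather than evaluating Δ_1,…,Δ_4 separately. A minimal sketch (our own code; E3 denotes ℰ):

import numpy as np

def rhs_factors(b, w1, w2, P, E3):
    """Coefficient lists of the two quadratic factors in the K_1 equation."""
    b1, b2, b3, b4, b5 = b
    C = b3*P**2 + w2*P - E3                       # shared constant term
    return ([b1 - b2 + b3 + b4 - b5, (b2 - 2*b3 + b5)*P + w1 - w2, C],
            [b1 - b2 + b3 - b4 + b5, (b2 - 2*b3 - b5)*P + w1 - w2, C])

def min_root_gap(b, w1, w2, P, E3):
    """Smallest distance between any two of the four roots; a (near-)zero
    value flags a candidate transition (cf. Delta_1 .. Delta_4)."""
    q1, q2 = rhs_factors(b, w1, w2, P, E3)
    roots = np.concatenate([np.roots(q1), np.roots(q2)])
    return min(abs(x - y) for i, x in enumerate(roots) for y in roots[i+1:])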
We illustrate, in Figure 4, with two concrete examples of equation (<ref>) emerging from the CQ system described in Section 2. In both cases Qa_0^-2 = 0.077 and ga_0^4 = 0; in the first case r=1.01 and in the second case r=1.045. In both cases we plot the roots of the RHS as a function of the single remaining parameter E. (The choice to plot the roots for fixed values of Qa_0^-2, ga_0^4 and r and to vary E is just an illustration; we could just as easily vary any of the other parameters or a combination thereof.) In the first case there are 4 points P_1,P_2,P_3,P_4 at which there are double roots; however, transitions only occur at the two points P_1,P_4 (marked in Figure 4 with large dots). In the second case there are 5 points P_1,P_2,P_3,P_4,P_5 at which there are double roots; however, transitions only occur at the two points P_2,P_4. In both cases, the first transition is from type II to type I, and the second transition is from type I back to type II, as indicated by Roman numerals on the plot.

The theoretical explanation of this is as follows. In the first case, r=1.01, there are 4 values of E for which there is a double root. The points labelled P_3 and P_4 on the diagram are associated with the vanishing of the discriminant Δ_2; the point labelled P_1 is a double root at 0, associated with the condition Δ_3=0; and the point labelled P_2 is associated with the vanishing of the discriminant Δ_1. The motion takes place between the root at K_1 ≈ 0.00015 and an adjacent root: for values of E below P_2 the adjacent root is below, for values of E above P_2 the adjacent root is above. Thus the double root at P_1 indicates a value of E at which there is a bifurcation, and the period of oscillation diverges. The double root at P_2 is a special solution for which K_1 and K_2 are constant (looking at (<ref>)-(<ref>) it can be seen that there are 3 kinds of solution of this type, each corresponding to the vanishing of one of the three factors on the RHS of this equation; these are related to the nonlinear normal modes of the system <cit.>). The point P_2 does not, however, give rise to a transition in the behavior of the CQ system; the beating period diverges there, but the type does not change. The double root at P_3 also does not mark a transition. This is precisely the case described above, in which the oscillation is on the interval between 2 roots, and the other two roots outside this interval merge. The point P_4, however, does mark a second transition, from type I beating back to type II.

Proceeding to the second example in Figure 4, there are now 5 cases of a double root. P_2 and P_4 are associated with the vanishing of the discriminant Δ_2, and P_3 with the vanishing of the discriminant Δ_1. P_1 is a double root at zero associated with the condition Δ_3=0, and P_5 is associated with the final possibility, Δ_4=0. There are, however, only 2 transitions, associated with the points P_2 and P_4, for similar reasons to the case described in the previous paragraph.

In this section we have explained how small oscillations of the original Hamiltonian (<ref>) near its fixed point and near (symmetric) 1:1 resonance can be approximated using the integrable Hamiltonian (<ref>) and the single differential equation (<ref>). We have arrived at a simple analytic approximation for beating transitions, viz.
a necessary condition for a transition between type I and type II beating is the vanishing of one of the four quantities Δ_1,Δ_2,Δ_3,Δ_4 given in (<ref>),(<ref>),(<ref>),(<ref>). It should be emphasized that this is far from trivializing the original problem. There is substantial complexity hidden in the relationship between the parameters and initial conditions of the original Hamiltonian and those of the integrable Hamiltonian. Also, determining which of the vanishing conditions gives a physical transition can be subtle. In Section 4 we apply the approximation to the CQ and SAT models from Section 2 and validate its predictions against numerical results.

§ APPLICATION TO THE MODELS

§.§ The CQ Model

The potential of the CQ model, given by (<ref>), has an isolated minimum when a_x = a_y = a_min ≡ 4E√(2Q)/3√(E-1)C_0, where C_0>0 is a solution of the equation

1 - C_0^2 = 1024E^4Q^2g/81(E-1)^3 C_0^6.

The 1:1 resonance condition is E = E_res, where

E_res = 4/(1 + √(1 + 65536Q^2g/81)).

In the case of zero grade index, g=0, we have C_0=1 and the resonance condition is simply E=2. The model is valid if the parameters E,Q,g are chosen so that E≈E_res and the initial conditions (see (<ref>)) satisfy a_0≈a_min and r≈1. The relevant parameters for the quartic Hamiltonian (<ref>) are

ω_1^2 = 81/512 (E-1)^2((2-E)C_0^2+E-1)/(E^4Q^2C_0^6),
ω_2^2 = 81/512 (E-1)^3(3-2C_0^2)/(E^4Q^2C_0^6),
a_1 = 243/8192 (E-1)^5/2(2(E-3)C_0^2 - 3(E-1))/(Q^5/2C_0^7E^5),
a_2 = 243/8192 (E-1)^7/2(2C_0^2-5)/(Q^5/2C_0^7E^5),
a_3 = 729/262144 (E-1)^3(2(5-E)C_0^2 + 3(E-1))/(C_0^8E^6Q^3),
a_4 = 729/131072 (E-1)^3(10(3-E)C_0^2 + 21(E-1))/(C_0^8E^6Q^3),
a_5 = 3645/262144 (E-1)^4(7 - 2C_0^2)/(C_0^8E^6Q^3).

The detailed recipe for checking whether a given set of parameters and initial conditions E,Q,g,a_0,r might give rise to a transition is as follows:

* Compute the coefficients ω_1^2,ω_2^2,a_1,a_2,a_3,a_4,a_5 using (<ref>). This is the only stage of the recipe that is model dependent. Compute the coefficients b_1,b_2,b_3,b_4,b_5 from (<ref>).
* Compute the initial conditions J_1(0),J_2(0) from (<ref>) and θ_1(0),θ_2(0) from the comments following (<ref>). In the case of CQ, all the parameter values which we used gave θ_1(0)=0 (we took r>1 throughout) and θ_2(0)=π.
* Compute the initial conditions K_1(0),K_2(0) using (<ref>)-(<ref>). This is the only stage of the recipe that is not completely explicit, and involves solving two equations in two variables (see the code sketch below). If no real solution can be found, the method fails. A suitable initial guess for the solution is K_1(0)≈J_1(0) and K_2(0)≈J_2(0).
* Determine the value of E, the constant value of the Hamiltonian H_3, using (<ref>), taking cos(2(ϕ_1-ϕ_2))=1. Determine the value of P=K_1+K_2.
* Compute Δ_1,Δ_2,Δ_3,Δ_4 from (<ref>),(<ref>),(<ref>),(<ref>). Values of E,Q,g,a_0,r for which any of these quantities vanish are candidates for transitions.

Figure 5 displays the results. Figure 5a shows numeric values and candidate analytic approximations of r_c as a function of E for Qa_0^-2=0.077 and ga_0^4=0.01. The dots denote numeric values of transitions in the original system. The solid curves show candidate analytic approximations of 3 distinct types: (1) (black) values for which Δ_2=0 (a closed loop with a cusp on the axis at r=1), (2) (green) values for which Δ_3=0 (a simple open curve) and (3) (red) values for which Δ_4=0 (two crossing open curves). For the values of Qa_0^-2 and ga_0^4 specified, it seems there are two branches of parameter values for which there are transitions.
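The promised sketch of step 3 of the recipe follows. Since the Appendix A relations (<ref>)-(<ref>) are too long to reproduce here, eq_K1 and eq_K2 below are hypothetical callables standing in for their right-hand sides (with the sign choice fixed by θ_2(0)); the numerical strategy is simply a standard nonlinear solve started from the guess suggested in the text.

```python
from scipy.optimize import fsolve

def solve_initial_K(J1_0, J2_0, eq_K1, eq_K2):
    """Solve J1 = eq_K1(K1, K2), J2 = eq_K2(K1, K2) for K1(0), K2(0).

    eq_K1, eq_K2 stand in for the Appendix A relations; as noted in the
    text, a real, positive solution is not guaranteed to exist.
    """
    def residual(K):
        K1, K2 = K
        return [eq_K1(K1, K2) - J1_0, eq_K2(K1, K2) - J2_0]

    # The text suggests K1(0) ~ J1(0), K2(0) ~ J2(0) as an initial guess.
    K, info, ok, msg = fsolve(residual, x0=[J1_0, J2_0], full_output=True)
    if ok != 1 or min(K) <= 0:
        raise RuntimeError("no admissible (real, positive) solution: " + msg)
    return K  # (K1(0), K2(0)); then P = K1 + K2 and E = H3 evaluated there
```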
We denote the lower branch (on the plot) by r_c,1(E), which exists for E greater than a certain value which we denote by E_c,1, and the upper branch by r_c,2(E), which exists for E greater than a certain value which we denote by E_c,2, with E_c,2≈1.975 < E_c,1≈1.977. On the lower branch, as E increases from E_c,1, r_c,1(E) at first follows the approximation Δ_2=0, until a triple point at which the curves Δ_2=0 and Δ_4=0 intersect. As E increases further, r_c,1(E) follows the approximation Δ_4=0. Surprisingly, this approximation stays reasonably accurate for the full range shown on the figure, even though r_c,1(E) rises to approximately 1.12. On the upper branch, as E increases from E_c,2, r_c,2(E) at first follows the approximation Δ_3=0, until a triple point at which the curves Δ_2=0 and Δ_3=0 intersect. As E increases further, r_c,2(E) follows the approximation Δ_2=0. However, the quality of this approximation rapidly decreases as E and r_c,2(E) increase further, with the discrepancy already visible on the plot for r_c≈1.06. Figure 5b shows numeric values and the correct analytic approximation (made up of pieces of the curves Δ_2=0, Δ_3=0 and Δ_4=0) in the cases (1) ga_0^4=0, (2) ga_0^4=0.01, (3) ga_0^4=0.02, all for Qa_0^-2=0.077. In addition, stars indicate numerical values of transitions obtained for the quartic system with Hamiltonian (<ref>). For small values of r, the numerical values of the transitions for the exact Hamiltonian and the approximate quartic Hamiltonian (<ref>) are, as we would expect, very close. However, as r increases, we see that the results for the quartic Hamiltonian rapidly diverge from the results for the exact Hamiltonian, while, remarkably, the analytic approximation continues to be a reasonable approximation for the exact Hamiltonian. This may find an explanation in the fact that while the exact Hamiltonian (<ref>), the quartic approximation (<ref>) and the integrable approximation (<ref>) all agree close to the fixed point, the global properties of the exact Hamiltonian are expected to be closer to those of the integrable approximation than to those of the quartic approximation. Note also in Figure 5b that the intercepts of the curves on the E axis, which we have denoted above by E_c,1 and E_c,2, are very close to the values of E determined by the resonance condition (<ref>), which are (1) E=2, (2) E≈1.977, (3) E≈1.954. However, even though the intercepts for the two curves obtained for each set of parameter values are very close, they are not identical. This is something that is difficult to establish a priori by direct numerics for the original systems (as the beating periods, for values of r close to 1, are very long), but once the analytic approximation is available to give accurate candidate values for the transition locations, it is possible to verify them a posteriori. Thus in the small band of values E_c,2<E<E_c,1 there is only a single beating transition as the beam eccentricity is increased. As r is increased from 1 there is immediately type I beating, and as r is increased further there is only a single transition to type II (as opposed, for example, to the situation in Figure 2, where as r is changed from 1 type II beating is seen, and then there are two transitions). Using the analytic approximation it can be shown (see Appendix A) that the points E_c,1, E_c,2 are determined by the conditions

ω_1 - ω_2 + P(b_2 - 2b_3 ∓ b_5) = 0

(minus for E_c,1, plus for E_c,2) for a solution with r=1.
(The condition r=1 implies J_1(0) = K_1(0) = 0, and then equation (<ref>) gives a single equation from which to determine K_2(0) from J_2(0) = ω_2(a_0 - a_min)^2.) Figure 6 shows the dependence of E_c,1 and E_c,2 on ga_0^4 for two values of Qa_0^-2, as computed by the analytic approximation, along with a few numeric values (computed a posteriori). In addition, the value of E_res from (<ref>) is shown, this being the value of E for which there is exact 1:1 resonance in the linear approximation. We see that the values of E_c,1, E_c,2 and E_res all decrease monotonically with ga_0^4.

§.§ The SAT Model

The potential of the SAT model, given by (<ref>), has an isolated minimum when a_x = a_y = a_min ≡ a_0√(K_SAT/K_0), where K_0, which depends on the parameters E, K_SAT, ga_0^4, is a solution of the equation

K_0^2/(4E) + Li_2(-K_0) + ln(1+K_0) - K_SAT^2 ga_0^4/(4E) = 0.

(Recall that the constant K_SAT is defined by K_SAT = 4α^2Ea_0^-2.) The resonance condition can be written K_0 = K_res, where K_res is the solution of

Li_2(-K_res) + 2ln(1+K_res) - K_res/(1+K_res) = 0.

K_res has numerical value approximately 5.017. We recall that for our analytic model to be most effective we need to be near resonance, and the initial conditions should be close to the minimum, i.e. a_0≈a_min, or K_SAT≈K_0, and r≈1. These conditions give K_SAT ≈ K_res = 5.017 and E ≈ 6.550(1 - ga_0^4). In practice we will look at a large range of values of E and K_SAT, but focus on this region. We also recall that in our model the sign of ζ_2(0) (as given in (<ref>)) plays a critical role. From (<ref>) we have ζ_2(0)=0 (or equivalently K_SAT=K_0) when <cit.>

E = -K_SAT^2(1-ga_0^4)/(4(Li_2(-K_SAT) + ln(1+K_SAT))).

The relevant parameters for the quartic Hamiltonian (<ref>) in the SAT case are

ω_1^2 = 2/a_0^4 (K_0^2/K_SAT^2 + ga_0^4),
ω_2^2 = 4/(a_0^4K_SAT^2)(-2Eln(1+K_0) + K_0^2 + 2EK_0/(1+K_0)),
a_1 = √(2K_0)/(K_SAT^5/2a_0^5)(2Eln(1+K_0) - 3K_0^2 - 2EK_0/(1+K_0)),
a_2 = √(2K_0)/(3K_SAT^5/2a_0^5)(-2Eln(1+K_0) - 3K_0^2 + 2EK_0(1+3K_0)/(1+K_0)^2),
a_3 = K_0/(4K_SAT^3a_0^6)(-2Eln(1+K_0) + 5K_0^2 + 2EK_0/(1+K_0)),
a_4 = K_0/(2K_SAT^3a_0^6)(-2Eln(1+K_0) + 15K_0^2 + 2EK_0(1-K_0)/(1+K_0)^2),
a_5 = K_0/(12K_SAT^3a_0^6)(2Eln(1+K_0) + 15K_0^2 - 2EK_0(1+10K_0+K_0^2)/(1+K_0)^3).

The method is identical to that given for CQ in the previous subsection, so we can immediately present the results. For fixed values of E and ga_0^4 we look for values of r giving beating transitions as a function of K_SAT. Both numeric and analytic results suggest there is a qualitative difference in behavior for E above and below a critical threshold, and our results are consistent with the value of this threshold being approximately 6.550(1 - ga_0^4), as found above. Figure 7 displays results for ga_0^4=0 and E=6.3 (below the threshold, left) and E=6.7 (above the threshold, right). The numeric results show that below the threshold, there are two ranges of K_SAT for which there is a single beating transition, from type I (for r below r_c) to type II (for r above r_c). For values of K_SAT below or above these two ranges, there is only type I beating, and for values between the two ranges there is only type II beating. The analytic approximation reproduces these results well. In this region of parameter space there are values for which Δ_2=0 (indicated in black in the figure) and Δ_3=0 (indicated in green).
It is the latter that are physically relevant, and the values of r_c predicted by the analytic model are accurate over a good range. Moving “above the threshold”, numerics show there is a range of values of K_SAT for which, as r is increased from 1, the beating is initially type II, then there is a transition to type I. For some of these values there is then a further transition back to type II for quite high values of r. It should be mentioned that these latter transitions were initially discovered using the analytic approximation, and confirmed numerically a posteriori. The analytic approximation reproduces the first transition very well, using pieces of the Δ_2=0 and Δ_4=0 degeneracy curves. The upper transition is not reproduced well, which is not surprising bearing in mind the values of r involved. Pieces of the Δ_2=0 and Δ_3=0 degeneracy curves are close to some of the results, but for a small range of values of K_SAT and r the model fails, as there is no solution of equations (<ref>)-(<ref>). Two branches of the Δ_3=0 degeneracy curve come to an abrupt end (in the plot we have connected the ends with a dashed line, which is not associated with any degeneracy). The values of the parameters involved are precisely those for which ζ_2(0)≈0. Figure 8 enlarges upon these results for different values of E and ga_0^4. In the 4 panels here, the upper panels (a and b) show results for values of E above the threshold, and the lower panels (c and d) show results for values of E below the threshold. In the left panels (a and c), ga_0^4=0; in the right panels (b and d), ga_0^4=0.02. For ga_0^4=0, the values E=6.3,6.4,6.5,6.7,7.0 are shown, the first three of which are below the threshold (in panel c), and the last two above the threshold (in panel a). For ga_0^4=0.02, the values E=6.3,6.4,6.5,6.7 are shown, the first two of which are below the threshold (in panel d), and the last two above the threshold (in panel b). Note specifically that for ga_0^4=0 the case E=6.5 is below the threshold (approximately 6.55), while for ga_0^4=0.02 it is above (as the threshold drops to approximately 6.42). Thus (for example) for E=6.5, K_SAT=5 and ga_0^4=0, no beating transitions are observed as the beam eccentricity is increased; but if the grade index is changed to ga_0^4=0.02, there are two beating transitions. The analytic theory fully explains this phenomenon. Indeed, for all the cases shown in Figure 8, the analytic theory is in excellent quantitative agreement with numerics for lower values of r, and gives reasonable qualitative predictions for higher values of r. Another conclusion from Figure 8 is that for values of E below the threshold, we can find two values of K_SAT that give rise to a given value of r_c, but for E above the threshold this need not be the case; furthermore, the gap in r_c values increases with the given value of E. In Figure 9 we illustrate this phenomenon more clearly. For the case ga_0^4=0, we show contours in the K_SAT,E plane that give rise to the values r_c = 1/0.95≈1.053 (black), r_c = 1/0.92≈1.087 (blue) and r_c = 1/0.895≈1.117 (red). It is clear that the “gap” between the two branches of each contour increases with r. Note that in the upper branch of each contour there is a small section denoted by a dashed line where the analytic method fails (the dashed line is a straight line between the last two points on each side for which the method works).
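Both the resonance value K_res and the curve along which ζ_2(0)=0 are easy to evaluate numerically, since the dilogarithm is available in standard libraries. A minimal sketch (Li2 and E_of_KSAT are our own helper names; SciPy's spence(z) computes Li_2(1-z)):

```python
import numpy as np
from scipy.special import spence
from scipy.optimize import brentq

def Li2(x):
    # SciPy's spence(z) equals the dilogarithm Li_2(1 - z).
    return spence(1.0 - x)

# Resonance condition: Li_2(-K) + 2 ln(1+K) - K/(1+K) = 0.
f = lambda K: Li2(-K) + 2.0 * np.log(1.0 + K) - K / (1.0 + K)
K_res = brentq(f, 1.0, 10.0)
print(K_res)                  # approximately 5.017

# Curve along which zeta_2(0) = 0 (i.e. K_SAT = K_0):
def E_of_KSAT(K_SAT, g_a0_4=0.0):
    return -K_SAT**2 * (1.0 - g_a0_4) / (4.0 * (Li2(-K_SAT) + np.log(1.0 + K_SAT)))

print(E_of_KSAT(K_res))       # near the threshold value ~6.55 for g*a0^4 = 0
```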
As expected, the regions where the method fails straddle the curve (<ref>), indicated by a dashed turquoise curve. Note further that in many cases the analytic method works well far beyond the region in which this is expected, but there are some exceptions.

§ CONCLUSIONS AND DISCUSSION

In this paper we have described the beating phenomena observed in the equations of motion for the beam widths obtained in a collective variable approximation to solutions of the GNLSEs relevant for beams in nonlinear waveguides with cubic-quintic (CQ) and saturable (SAT) nonlinearities and a graded-index profile. We have described the different types of beating, and the transitions between them. Arguing that the origin of these phenomena is in a 1:1 Hamiltonian resonance, we have developed an approximation scheme for small oscillations in a class of 2 degree-of-freedom Hamiltonian systems with an isolated fixed point close to 1:1 resonance. We have shown that such oscillations can be described by an integrable Hamiltonian, or, alternatively, by a single first order differential equation (<ref>). Understanding the bifurcations of the system, which include the beating transitions, can be reduced to looking at the bifurcations of the roots of a pair of quadratic equations. Applying our general methodology to the specific cases of the CQ and SAT models, we managed to reproduce numerical results for beating transitions over a large range of parameter values. The theory allows us to map out the regions (of parameter space and beam eccentricities) where beating transitions do and do not exist. Amongst other things, in the CQ case we identified a band of beam energies for which there is only a single beating transition (as opposed to 0 or 2) as the beam eccentricity is increased, and in the SAT case we explained the appearance and disappearance of transitions with changes of the grade index. We expect our methods to have applications to related problems in nonlinear optics, for nonlinearities other than the ones studied here, for different beams, such as super-Gaussian beams <cit.>, and for optical bullets <cit.>. We are encouraged by the fact that there is some recent experimental evidence <cit.> of breathing in optical solitons, albeit in a dissipative setting. We also hope the general theory of 1:1 resonances that we have developed will find application in the settings of nonlinear mechanics and astronomy, as well as suitable extensions for 1:1:1 resonances in higher dimensional systems (see for example the recent papers <cit.>).

§ FURTHER TECHNICAL DETAILS

As explained in Section 3, the Hamiltonian (<ref>) is an integrable approximation to the Hamiltonian (<ref>), and is obtained from (<ref>) via a canonical transformation and the neglect of higher order terms. The only need for explicit details of the canonical transformation is to compute the initial conditions of the variables K_1,K_2 from the initial conditions of J_1,J_2 given in (<ref>). The equations to be solved are

J_1 = K_1 ∓ 4K_1a_1/(4ω_1^2-ω_2^2) √(2K_2/ω_2) + (a_1^2(48ω_1^4 - 8ω_1^3ω_2 - 40ω_2^2ω_1^2 + 2ω_1ω_2^3 + 5ω_2^4)/(4ω_2^2ω_1^3(2ω_1+ω_2)^2(2ω_1-ω_2)^2) - 5a_3/(2ω_1^3))K_1^2 + ((40ω_1^3 + 28ω_1^2ω_2 - 6ω_1ω_2^2 - 3ω_2^3)a_1^2/(ω_1^2ω_2(2ω_1-ω_2)^2(2ω_1+ω_2)^2(ω_1+ω_2))
+ (12ω_1^3 + 11ω_1^2ω_2 - 10ω_1ω_2^2 - 6ω_2^3)a_1a_2/(2ω_1^2ω_2^3(ω_1+ω_2)(4ω_1^2-ω_2^2)) - (3ω_1+2ω_2)a_4/(2ω_1^2ω_2(ω_1+ω_2)))K_1K_2,

J_2 = K_2 ∓ 2((2ω_1^2-ω_2^2)K_1a_1/(ω_1ω_2(4ω_1^2-ω_2^2)) + K_2a_2/ω_2^2)√(2K_2/ω_2) + (33a_2^2/(4ω_2^5) - 5a_5/(2ω_2^3))K_2^2 + (16ω_1^4 + 8ω_1^3ω_2 - 12ω_1^2ω_2^2 - 2ω_1ω_2^3 + 3ω_2^4)a_1^2K_1^2/(2ω_1^2ω_2^3(2ω_1-ω_2)^2(2ω_1+ω_2)^2) + ((8ω_1^4 + 16ω_1^3ω_2 - 10ω_1^2ω_2^2 - 8ω_1ω_2^3 + ω_2^4)a_1^2/(ω_1^2ω_2^2(2ω_1-ω_2)^2(2ω_1+2ω_2)^2(ω_1+ω_2)) + (40ω_1^3 + 44ω_1^2ω_2 - 9ω_1ω_2^2 - 16ω_2^3)a_1a_2/(2ω_1ω_2^4(ω_1+ω_2)(2ω_1-ω_2)(2ω_1+ω_2)) - (2ω_1+3ω_2)a_4/(2ω_1ω_2^2(ω_1+ω_2)))K_1K_2.

Here the upper signs should be taken in the square root terms in the case θ_2(0)=0 and the lower signs in the case θ_2(0)=π. In Section 4.1, in the study of the CQ system, we stated the conditions (<ref>) for the value r_c giving a beating transition to tend to 1. We briefly describe the origin of these conditions. The symmetric solutions with a_x=a_y of (<ref>), arising from the initial condition r=1, correspond to solutions with K_1≡0 of (<ref>)-(<ref>). From (<ref>), the values of P and E for such a solution must evidently satisfy b_3P^2 + ω_2P - E = 0, which is just the condition Δ_3=0, see (<ref>). As explained in Section 3, a necessary condition for a beating transition is the vanishing of one of the quantities Δ_1,Δ_2,Δ_3,Δ_4. To determine E_c,1 in Section 4.1 we want r_c→1 for a solution of Δ_2=0. Clearly this requires Δ_2=Δ_3=0, and some simple algebra then gives the condition ω_1-ω_2+P(b_2-2b_3-b_5)=0. To determine E_c,2, however, is not so straightforward, as for this we want r_c→1 for a solution of Δ_3=0, and apparently we do not have two equations. The resolution of this conundrum is as follows. Although we stated above that the symmetric solutions of (<ref>) correspond to solutions with K_1≡0 of (<ref>)-(<ref>), the latter in fact provide a blow-up of the former: there is a 3-parameter family of the latter and only a 2-parameter family of the former. Solving (<ref>)-(<ref>) in the case K_1≡0, we obtain K_2=P (constant), ϕ_2 = ϕ_2(0) + (ω_2 + 2b_3P)z, and that ϕ_1 must satisfy the ODE

ϕ̇_1 = ω_1 + b_2P + b_5P cos(2(ϕ_2(0) + (ω_2 + 2b_3P)z - ϕ_1)).

This latter equation can be solved explicitly, and for a general choice of the constant of integration will give a complicated function ϕ_1(z). However, for a beating transition we seek a solution that is characterized by a single frequency, i.e. we need

ϕ_1(z) = ϕ_1(0) + (ω_2 + 2b_3P)z.

Substituting this into the differential equation, we obtain

ω_2 + 2b_3P = ω_1 + b_2P + b_5P cos(2(ϕ_2(0) - ϕ_1(0))).

Since the initial conditions ϕ_1(0),ϕ_2(0) take the values 0 or π, we deduce that ω_1 - ω_2 + P(b_2 - 2b_3 + b_5) = 0, as required.

§ A TWO TIME EXPANSION APPROACH

In this appendix we outline a two-time expansion approach <cit.>, which is an alternative to the procedure based on canonical transformations given in Section 3. We wish to look at solutions of the Hamiltonian system with Hamiltonian (<ref>) and initial conditions (<ref>). We assume the system has an isolated symmetric minimum at which the system is close to 1:1 resonance. To apply a two-time technique we need to introduce a small parameter ϵ explicitly into the equations. Our systems involve a number of system parameters, for example in the CQ case the parameters E,Q,g, for which the resonance condition is (<ref>). We introduce a small parameter by selecting one system parameter and writing this as its value at resonance plus a small perturbation.
However, for reasons described in <cit.>, the “small perturbation” here should be quadratic in the small parameter. Thus, for example in CQ, we have to consider two possibilities, E = E_res ± ϵ^2, where E_res (which depends on the other system parameters Q,g) is the value of E at resonance. The two resulting expansions will differ just in signs. This is the counterpart in the two-time method of the need to choose θ_2(0) to be 0 or π in Section 3 and the resulting choice of signs in equations (<ref>)-(<ref>). However, we emphasize that it is not the same, so the resulting method is different; in particular, the “choice” in Section 3 involves the initial conditions as well as the system parameters. Taking, as before, the minimum of the potential V to be at a_x=a_y=a_min, we now write

a_x = a_min + ϵã_x, a_y = a_min + ϵã_y

and expand to 4th order in ϵ. The order 0 terms are irrelevant and can be discarded. The order 1 terms vanish by definition of a_min. In the other terms there is dependence on all the system parameters. However, by making an assignment of the form E = E_res ± ϵ^2, discarding all terms of order higher than 4 and performing a suitable rescaling, we obtain an approximate potential of the form

Ṽ = 1/2 C_1(ã_x^2 + ã_y^2) + ϵ(C_2(ã_x^3 + ã_y^3) + C_3ã_xã_y(ã_x + ã_y)) + ϵ^2(C_4(ã_x^4 + ã_y^4) + C_5ã_xã_y(ã_x^2 + ã_y^2) + C_6ã_x^2ã_y^2 + C_7(ã_x^2 + ã_y^2) + C_8ã_xã_y).

Here C_1,…,C_8 are all functions of the system parameters excluding the parameter replaced by ϵ. Note that as a result of the dependence of the system parameters on ϵ there are now quadratic terms in ã_x,ã_y in the O(ϵ^2) terms. Following the usual two-time formalism, we seek solutions of the system with potential Ṽ in the form

ã_x = Λ_1(ϵ^2 z)cos(√(C_1)z) + Λ_2(ϵ^2 z)sin(√(C_1)z) + ϵã_x,1(z,ϵ^2 z) + ϵ^2ã_x,2(z,ϵ^2 z) + …,
ã_y = Λ_3(ϵ^2 z)cos(√(C_1)z) + Λ_4(ϵ^2 z)sin(√(C_1)z) + ϵã_y,1(z,ϵ^2 z) + ϵ^2ã_y,2(z,ϵ^2 z) + ….

Here Λ_1(ϵ^2 z), Λ_2(ϵ^2 z), Λ_3(ϵ^2 z), Λ_4(ϵ^2 z) are functions of the slow variable ϵ^2 z. Substituting into the equations of motion and equating order-by-order, the first order terms ã_x,1, ã_y,1 can be determined, and a system of first order equations is obtained that Λ_1,Λ_2,Λ_3,Λ_4 must satisfy to guarantee the absence of secular terms in ã_x,2, ã_y,2. Writing

R_1 = Λ_1^2 + Λ_2^2 + Λ_3^2 + Λ_4^2,
R_2 = Λ_1^2 + Λ_2^2 - Λ_3^2 - Λ_4^2,
R_3 = Λ_1Λ_3 + Λ_2Λ_4,
R_4 = Λ_1Λ_4 - Λ_2Λ_3

(cf. <cit.>), we obtain the system

R_1' = 0,
R_2' = 4R_4(γ_1R_1 + γ_2 + (γ_3+γ_4)R_3),
R_3' = -γ_3R_2R_4,
R_4' = -R_2(γ_1R_1 + γ_2 + γ_4R_3),

where the constants γ_1,γ_2,γ_3,γ_4 are certain combinations of the constants C_1,…,C_8. (Note R_3^2 + R_4^2 = 1/4(R_1^2 - R_2^2).) Thus R_1 is an invariant, as are the quantities

Q_2 = R_2^2 + 4(1 + γ_4/γ_3)(R_3 + (γ_1R_1+γ_2)/(γ_3+γ_4))^2,
Q_3 = R_4^2 - (γ_4/γ_3)(R_3 + (γ_1R_1+γ_2)/γ_4)^2.

Note that R_1,Q_2,Q_3 are related by

Q_2 + 4Q_3 = R_1^2 - 4(γ_1R_1+γ_2)^2/(γ_4(γ_3+γ_4)).

Using the invariants it is possible to write a single differential equation for the quantity R_3:

(R_3')^2 = -4γ_4(γ_3+γ_4)((R_3 + (γ_1R_1+γ_2)/(γ_3+γ_4))^2 - γ_3Q_2/(4(γ_3+γ_4)))((R_3 + (γ_1R_1+γ_2)/γ_4)^2 + γ_3Q_3/γ_4).

This has the same form as (<ref>): the right-hand side is a product of two quadratic factors in R_3, and similar techniques can be used to discuss bifurcations of its solutions. Specifically, there can be a double root if the discriminant of one of the factors vanishes (i.e. if Q_2 or Q_3 vanishes), or if the factors have a common root.
The latter happens in the two cases

((2γ_1 ± γ_4)R_1 + 2γ_2)^2 + 4Q_3γ_3γ_4 = 0.

As in Section 3, detecting beating transitions requires translating the initial conditions to the constants of motion R_1,Q_2,Q_3 and checking up to 4 conditions. We have implemented this method for the CQ and SAT systems and found some satisfactory results which we do not report here; in certain cases the results were better than those found using the method based on canonical transformations. However, there are numerous reasons to prefer the method based on canonical transformations. The two-time method requires deciding how to explicitly introduce a small parameter, and different ways of doing this give different results. It also requires advance knowledge of the correct relative order of magnitude of the oscillations around the fixed point and the deviation of the system parameters from their resonance values. In general, the algebraic manipulations required to implement the two-time method, most of which we have omitted in our account here, are substantially more complicated than those required for the method based on canonical transformations; in particular, the reduction of the system (<ref>) to a single differential equation (<ref>) is a surprise that emerges from ad hoc manipulations, whereas the parallel steps in the canonical formalism are standard, based on the integrability of the Hamiltonian (<ref>). Finally, from our numerical experiments it emerges that while the results based on the vanishing of the discriminant of one of the factors of the right-hand side of (<ref>) are good, the results based on conditions (<ref>) are poor.
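For completeness, the bookkeeping of the two-time reduction can be made concrete in a few lines of code. The sketch below (with placeholder values for γ_1,…,γ_4 and the initial slow amplitudes, not taken from either model) computes the invariants R_1, Q_2, Q_3 from initial values Λ_1(0),…,Λ_4(0) and evaluates the four degeneracy conditions for the right-hand side of the R_3 equation.

```python
def invariants(L, g):
    """Invariants R1, Q2, Q3 of the slow-flow system.

    L = (L1..L4): initial values of the slow amplitudes Lambda_i(0);
    g = (g1..g4): the gamma constants (combinations of C1..C8).
    """
    L1, L2, L3, L4 = L
    g1, g2, g3, g4 = g
    R1 = L1**2 + L2**2 + L3**2 + L4**2
    R2 = L1**2 + L2**2 - L3**2 - L4**2
    R3 = L1*L3 + L2*L4
    R4 = L1*L4 - L2*L3
    Q2 = R2**2 + 4*(1 + g4/g3) * (R3 + (g1*R1 + g2)/(g3 + g4))**2
    Q3 = R4**2 - (g4/g3) * (R3 + (g1*R1 + g2)/g4)**2
    return R1, Q2, Q3

def degeneracy_conditions(R1, Q2, Q3, g):
    """Double-root indicators: Q2 = 0, Q3 = 0, or a common root (two signs)."""
    g1, g2, g3, g4 = g
    common_plus  = ((2*g1 + g4)*R1 + 2*g2)**2 + 4*Q3*g3*g4
    common_minus = ((2*g1 - g4)*R1 + 2*g2)**2 + 4*Q3*g3*g4
    return Q2, Q3, common_plus, common_minus

# Placeholder numbers, purely illustrative.
g = (0.2, -0.05, 0.8, 0.3)
R1, Q2, Q3 = invariants((0.1, 0.0, 0.12, 0.0), g)
print(degeneracy_conditions(R1, Q2, Q3, g))
```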
http://arxiv.org/abs/1708.07706v1
{ "authors": [ "David Ianetz", "Jeremy Schiff" ], "categories": [ "nlin.PS", "math-ph", "math.MP", "physics.optics" ], "primary_category": "nlin.PS", "published": "20170825120051", "title": "Analytic Methods to Find Beating Transitions of Asymmetric Gaussian Beams in GNLS equations" }
http://arxiv.org/abs/1708.08064v1
{ "authors": [ "Lahcen Boulanba", "Mohamed Mellouk" ], "categories": [ "math.PR", "60F10, 60H15, 60G15" ], "primary_category": "math.PR", "published": "20170827083932", "title": "Large deviations for a stochastic Cahn-Hilliard equation in Hölder norm" }
J. Differential Equations

^aDepartment of Mathematics, Shanghai Normal University, Shanghai, 200234, P. R. China
^bDepartment of Applied Mathematics, Western University, London, Ontario, Canada N6A 5B7

In this paper, we present a method of higher-order analysis on the bifurcation of small limit cycles around an elementary center of integrable systems under perturbations. This method is equivalent to the higher-order Melnikov function approach used for studying bifurcation of limit cycles around a center, but simpler. Attention is focused on planar cubic polynomial systems, and in particular it is shown that the system studied by H. Żoła̧dek in the article Eleven small limit cycles in a cubic vector field (Nonlinearity 8, 843–860, 1995) can indeed have eleven limit cycles under perturbations at least up to 7th order. Moreover, the pattern of numbers of limit cycles produced near the center is discussed up to 39th-order perturbations, and no more than eleven limit cycles are found.

Bifurcation of limit cycles; Higher-order analysis; Darboux integral; Focus value.

34C07 34C23

§ INTRODUCTION

Bifurcation theory of limit cycles is important for both the theoretical development of qualitative analysis and applications in solving real problems. It is closely related to the well-known Hilbert's 16th problem <cit.>, whose second part asks for the upper bound, called the Hilbert number H(n), on the number of limit cycles that the following system,

dx/dt = P_n(x,y), dy/dt = Q_n(x,y),

can have, where P_n(x,y) and Q_n(x,y) represent nth-degree polynomials in x and y. This problem has motivated many mathematicians and researchers in other disciplines to develop mathematical theories and methodologies in the areas of differential equations and dynamical systems. However, this problem has not been completely solved even for quadratic systems since Hilbert proposed it at the Second International Congress of Mathematicians in 1900. The maximal number of limit cycles obtained for some quadratic systems is 4 <cit.>. However, whether H(2)=4 is still open. For cubic polynomial systems, many results have been obtained on the lower bound of the number of limit cycles. So far, the best result for cubic systems is H(3) ≥ 13 <cit.>. Note that the 13 limit cycles obtained in <cit.> are distributed around several singular points. When the problem is restricted to the maximum number of small-amplitude limit cycles, denoted by M(n), bifurcating from a focus or a center in system (<ref>), one of the best-known results is M(2)=3, which was obtained by Bautin in 1952 <cit.>. For n=3, a number of results in this research direction have been obtained. So far the best result for the number of small limit cycles around a focus is 9 <cit.>, and that around a center is 12 <cit.>. One of the most powerful tools for analyzing local bifurcation of limit cycles around a focus or a center is normal form theory (e.g., see <cit.>). Suppose system (<ref>) has an elementary focus or an elementary center at the origin. With computation methods using computer algebra systems (e.g., see <cit.>), we obtain the normal form expressed in polar coordinates as

dr/dt = r(v_0 + v_1r^2 + v_2r^4 + ⋯ + v_kr^2k + ⋯),
dθ/dt = ω_c + τ_0 + τ_1r^2 + τ_2r^4 + ⋯ + τ_kr^2k + ⋯,

where r and θ represent the amplitude and phase of the motion, respectively. v_k (k=0,1,2,⋯) is called the kth-order focus value. v_0 and τ_0 are obtained from linear analysis.
The first equation of (<ref>) can be used for studying the bifurcation and stability of limit cycles, while the second equation can be used to determine the frequency of the bifurcating periodic motion. Moreover, the coefficients τ_j can be used to determine the order or critical periods of a center (when v_j=0, j≥0). Particular attention has been paid to near-integrable polynomial systems, described in the form of

dx/dt = M^-1(x,y,μ)H_y(x,y,μ) + ε p(x,y,ε,δ),
dy/dt = -M^-1(x,y,μ)H_x(x,y,μ) + ε q(x,y,ε,δ),

where 0<ε≪1, μ and δ are vector parameters; H(x,y,μ) is an analytic function in x, y and μ; p(x,y,ε,δ) and q(x,y,ε,δ) are polynomials in x and y, and analytic in δ and ε. M(x,y,μ) is an integrating factor of the unperturbed system (<ref>)|_ε=0. Suppose the unperturbed system (<ref>)|_ε=0 has an elementary center. Then, considering the bifurcation of limit cycles in system (<ref>) around the center, we may use normal form theory to obtain the first equation of (<ref>) as follows:

dr/dt = r[v_0(ε) + v_1(ε)r^2 + v_2(ε)r^4 + ⋯ + v_i(ε)r^2i + ⋯],

where v_i(ε) = ∑_k=1^∞ ε^k V_ik, i=0,1,2,…, in which V_ik denotes the ith ε^k-order focus value; this notation will be used throughout this paper. Note that v_i(ε)=O(ε), since the unperturbed system (<ref>)|_ε=0 is an integrable system. Further, because system (<ref>) is analytic in ε, we can rearrange the terms in (<ref>) to obtain

dr/dt = V_1(r)ε + V_2(r)ε^2 + ⋯ + V_k(r)ε^k + ⋯,

where V_k(r) = ∑_i=0^∞ V_ik r^2i+1, k=1,2,…. Similarly, for the normal form of system (<ref>) we have the θ differential equation, given by dθ/dt = T_0(r) + O(ε), with T_0(0)≠0, and thus

dr/dθ = (V_1(r)ε + V_2(r)ε^2 + ⋯ + V_k(r)ε^k + ⋯)/(T_0(r) + O(ε)).

Assume that the solution r(θ,ρ,ε) of (<ref>), satisfying the initial condition r(0,ρ,ε)=ρ, is given in the form of

r(θ,ρ,ε) = r_0(θ,ρ) + r_1(θ,ρ)ε + r_2(θ,ρ)ε^2 + ⋯ + r_k(θ,ρ)ε^k + ⋯,

with 0<ρ≪1. Then r_0(0,ρ)=ρ and r_k(0,ρ)=0 for k≥1. If there exists a positive integer K such that V_k(r)≡0, 1≤k<K, and V_K(r)≢0, then it follows from (<ref>) that

r_0(θ,ρ)=ρ, r_k(θ,ρ)=0, 1≤k<K, r_K(θ,ρ) = V_K(ρ)/T_0(ρ) θ.

Thus, the displacement function d(ρ) of system (<ref>) can be written as

d(ρ) = r(2π,ρ,ε) - ρ = 2π V_K(ρ)/T_0(ρ) ε^K + O(ε^K+1).

Therefore, if we want to determine the number of small-amplitude limit cycles bifurcating from the center in system (<ref>), we only need to study the number of isolated zeros of V_K(ρ) for 0<ρ≪1, and we have to obtain the expression of the first non-zero coefficient V_K(r) in (<ref>) by computing V_iK for i≥0. The above discussion shows that the basic idea of using focus values is actually the same as that of the Melnikov function method. Using H(x,y)=h to parameterize the section (i.e. the Poincaré map), we obtain the displacement function of (<ref>), given by

d(h) = M_1(h)ε + M_2(h)ε^2 + ⋯ + M_k(h)ε^k + ⋯,

where

M_1(h) = ∮_H(x,y,μ)=h M(x,y,μ)[q(x,y,0,δ)dx - p(x,y,0,δ)dy],

evaluated along closed orbits H(x,y,μ)=h for h∈(h_1,h_2). Then, we can study the first non-zero Melnikov function M_k(h) in (<ref>) to determine the number of limit cycles in system (<ref>). In the following, we remark on the comparison of the Melnikov function method and the method of normal forms (or focus values). (1) Let H=h, 0<h-h_1≪1, define closed orbits around the center of system (<ref>)|_ε=0. It is easy to see that for any integer K≥1, equation (<ref>) holds if and only if M_k(h)≡0, 1≤k<K, and M_K(h)≢0 in (<ref>). Moreover, V_K(ρ) for 0<ρ≪1 and M_K(h) for 0<h-h_1≪1 have the same maximum number of isolated zeros.
(2) As we can see, V_k(r) can be obtained by the computation of normal forms or focus values. (3) In particular, when the original system is not a Hamiltonian system but an integrable system, even computing the coefficients of the first-order Melnikov function is much more involved than the corresponding computation using the method of normal forms. (4) However, the method of normal forms (or focus values) is restricted to Hopf and generalized Hopf bifurcations, while the Melnikov function method can also be applied to study the bifurcation of limit cycles from homoclinic/heteroclinic loops or any closed orbits. When we apply the method of normal form computation, some unnecessary perturbation parameters are involved in the computation of higher-order focus values, which can be extremely computation demanding (in both time and memory) and makes it much more difficult to solve the problem. Meanwhile, before we use the first non-zero coefficient V_K(r) in (<ref>) to find limit cycles, we need to prove that V_k(r)≡0, 1≤k<K. The unnecessary parameters involved could greatly increase the difficulty of proving that. In this paper, without loss of limit cycles, we introduce a linear transformation to eliminate unnecessary parameters from system (<ref>). With fewer parameters in (<ref>), we can use the approximation of first integrals to prove V_k(r)≡0. The idea will be illustrated in Section 2. We will apply our method to study the bifurcation of small-amplitude limit cycles in the system

dx/dt = a + 5/2 x + xy + x^3 + ∑_k=1^n ε^k p_k(x,y),
dy/dt = -2ax + 2y - 3x^2 + 4y^2 - ax^3 + 6x^2y + ∑_k=1^n ε^k q_k(x,y),

where

p_k(x,y) = a_00k + ∑_i+j=1^3 a_ijk x^i y^j, q_k(x,y) = b_00k + ∑_i+j=1^3 b_ijk x^i y^j,

in which a_ijk and b_ijk are ε^k-order coefficients (parameters). The unperturbed system (<ref>)|_ε=0 has a rational Darboux integral <cit.>,

H_0 = f_1^5/f_2^4 = (x^4+4x^2+4y)^5/(x^5+5x^3+5xy+5x/2+a)^4,

with the integrating factor M = 20f_1^4f_2^-5. It can be shown that for a<-2^5/4, system (<ref>)|_ε=0 has a center at E_0 = (-a/2, -(a^2+2)/4). The system (<ref>)|_ε=0 was proposed in <cit.>, where it was claimed that this system could have 11 limit cycles around the center, based on a study of the second-order Melnikov function. Later, Yu and Han applied the normal form computation method and obtained only 9 limit cycles around E_0 <cit.> by analyzing the ε- and ε^2-order focus values. Recently, it has been shown <cit.> that errors were made in <cit.> in choosing 12 integrals as the basis of the linear space of corresponding Melnikov functions of system (<ref>)|_ε=0. In fact, among the 12 chosen integrals, two can be expressed as linear combinations of the other ten, and therefore only 9 limit cycles can exist, agreeing with the result shown in <cit.>. The rest of the paper is organized as follows. In the next section, we consider system (<ref>) and construct a transformation to reduce the number of perturbation parameters, which greatly simplifies the analysis in the following section. Section 3 is devoted to the computation of higher ε^k-order focus values and the existence of 11 limit cycles in system (<ref>), which requires computing at least the ε^7-order focus values. Finally, a conclusion is drawn in Section 4.

§ PRELIMINARIES

The method of focus values (or normal forms) is one of the important and powerful tools for the study of small-amplitude limit cycles generated from Hopf bifurcation. In general, a sufficient number of focus values is needed if one wants to find more small-amplitude limit cycles.
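Before turning to the computational challenges, we note that the Darboux integrability claim of Section 1 can be verified directly with a computer algebra system. A minimal sympy sketch (checking both that H_0 is constant along the unperturbed flow and that M = 20f_1^4f_2^-5 is the corresponding integrating factor):

```python
import sympy as sp

x, y, a = sp.symbols('x y a')

f1 = x**4 + 4*x**2 + 4*y
f2 = x**5 + 5*x**3 + 5*x*y + sp.Rational(5, 2)*x + a
H0 = f1**5 / f2**4
M = 20 * f1**4 / f2**5

# Unperturbed vector field of the cubic system above.
P = a + sp.Rational(5, 2)*x + x*y + x**3
Q = -2*a*x + 2*y - 3*x**2 + 4*y**2 - a*x**3 + 6*x**2*y

# H0 is a first integral iff its derivative along the flow vanishes.
print(sp.simplify(sp.diff(H0, x)*P + sp.diff(H0, y)*Q))   # 0

# Integrating-factor form: H0_y = M*P and H0_x = -M*Q.
print(sp.simplify(sp.diff(H0, y) - M*P))                  # 0
print(sp.simplify(sp.diff(H0, x) + M*Q))                  # 0
```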
One main challenge is that the computation of focus values becomes more and more difficult as the order of the focus values goes up. That is why computer algebra systems such as Maple and Mathematica have been used for computing the focus values to improve the computational efficiency (e.g., see <cit.>). Another approach is to eliminate certain parameters from the system, which is the method we shall develop here for near-integrable systems. In most studies of near-integrable systems, full perturbations like the polynomials p(x,y,ε,δ) and q(x,y,ε,δ) given in system (<ref>) are considered. The parameter vector δ usually represents the coefficients in p and q. When normal forms are used to study small limit cycles, it is easy to obtain and solve the focus values of ε order (the coefficients in V_1(r)), because they are linear functions of the system parameters, namely the coefficients in p(x,y,0,δ) and q(x,y,0,δ). For the ε^k-order focus values (the coefficients in V_k(r)), more parameters are involved in the computation. One can observe that some parameters are not necessary for obtaining the maximum number of limit cycles, and they only increase the difficulty of finding limit cycles. When the first n functions V_k(r) in (<ref>), 1≤k≤n, are applied to studying the bifurcation of limit cycles, in order to remove unnecessary parameters without reducing the number of limit cycles, we may use the following transformation:

x → x + e_1(ε)x + e_2(ε)y + e_3(ε),
y → y + e_4(ε)x + e_5(ε)y + e_6(ε),
t → t + e_7(ε)t,
μ → μ + e_8(ε),

where e_i(ε) = e_i1ε + e_i2ε^2 + ⋯ + e_inε^n, i=1,…,8. Note that (<ref>)|_ε=0 is an identity map. Thus, (<ref>) keeps the unperturbed part of system (<ref>) unchanged. Furthermore, the new system obtained by using (<ref>) can still be written in the same form as (<ref>). So we only need to find proper e_i(ε)'s to get simpler perturbation functions without loss of generality. To illustrate how to obtain the e_i(ε), we take system (<ref>) as an example. The coefficients a_ijk and b_ijk in (<ref>) are the parameters. Substituting the transformation (<ref>) into system (<ref>) yields

dx/dt = a + 5/2 x + xy + x^3 + ∑_k=1^n ε^k p̃_k(x,y) + o(ε^n),
dy/dt = -2ax + 2y - 3x^2 + 4y^2 - ax^3 + 6x^2y + ∑_k=1^n ε^k q̃_k(x,y) + o(ε^n),

where

p̃_k(x,y) = ã_00k + ∑_i+j=1^3 ã_ijk x^i y^j, q̃_k(x,y) = b̃_00k + ∑_i+j=1^3 b̃_ijk x^i y^j.

Obviously, the coefficients ã_ijk and b̃_ijk in (<ref>) are linear in the e_mk, m=1,…,8. Let E_k = (e_1k,e_2k,⋯,e_8k)^T. For any 1≤k≤n, ã_ijk and b̃_ijk can be written in the form of

ã_ijk = A_ij E_k + η_ijk, b̃_ijk = B_ij E_k + ζ_ijk,

where A_ij and B_ij are 1×8 matrices, and η_ijk and ζ_ijk, given by

η_ijk = η_ijk(E_1,⋯,E_k-1, a_ml1,⋯,a_mlk, b_ml1,⋯,b_mlk),
ζ_ijk = ζ_ijk(E_1,⋯,E_k-1, a_ml1,⋯,a_mlk, b_ml1,⋯,b_mlk),

are polynomials in the e_ml, 1≤l≤k-1, and in the coefficients of the perturbation functions (<ref>). Note that A_ij and B_ij do not depend on k. We hope to find proper values for the e_ik that make some of the coefficients ã_ijk and b̃_ijk vanish or satisfy certain conditions, so that the computation of the focus values becomes easier. For instance, we can choose, for 1≤k≤n,

ã_10k = ã_01k = ã_20k = ã_11k = ã_02k = ã_30k = 0,
ã_p_k ≜ p̃_k(-a/2, -(a^2+2)/4) = 0, ã_q_k ≜ q̃_k(-a/2, -(a^2+2)/4) = 0.

The last two equations in (<ref>) keep the equilibrium of system (<ref>) in a neighborhood of E_0 with radius o(ε^n).
A direct computation yields

ã_10k = 2ae_2k + e_6k + 5/2 e_7k + η_10k, ã_01k = 1/2 e_2k + e_3k + η_01k,
ã_20k = 3e_2k + 3e_3k + e_4k + η_20k, ã_11k = e_5k + e_7k + η_11k,
ã_02k = -3e_2k + η_02k, ã_30k = 2e_1k + ae_1k + e_7k + η_30k,
ã_p_k = -1/4 a(4+a^2)e_1k - 1/8(4+a^2)(2+a^2)e_2k + 1/4(4+a^2)e_3k + 1/4 a^2 e_4k + 1/8 a(2+a^2)e_5k - 1/2 a e_6k + e_8k + η̃_k,
ã_q_k = -1/8 a^2(16+3a^2)e_1k - 1/16 a(16+3a^2)(2+a^2)e_2k + 1/4 a(16+3a^2)e_3k + 1/4 a(4+a^2)e_4k + 1/8(4+a^2)(2+a^2)e_5k - 1/4(4+a^2)e_6k + 1/8 a(a^2+8)e_8k + ζ̃_k,

where η̃_k and ζ̃_k are functions of the η_ijl and ζ_ijl with 1≤l≤k-1, respectively. Because

det[∂(ã_10k,ã_01k,ã_20k,ã_11k,ã_02k,ã_30k,ã_p_k,ã_q_k)/∂(e_1k,e_2k,e_3k,e_4k,e_5k,e_6k,e_7k,e_8k)] = 3/4(32-a^4) < 0

for a<-2^5/4, we can solve (<ref>) for the e_mk to obtain

e_mk = e_mk(η_10k,η_01k,η_20k,η_11k,η_02k,η_30k,η̃_k,ζ̃_k), 1≤m≤8,

which can be rewritten, by using (<ref>), as

e_mk = ẽ_mk(E_1,⋯,E_k-1, a_ij1,⋯,a_ijk, b_ij1,⋯,b_ijk).

Note that e_m1 only depends on a_ij1 and b_ij1. Therefore, for all 1≤m≤8, 1≤k≤n, e_mk can be expressed as a polynomial in the a_ijl and b_ijl, 1≤l≤k. In other words, (<ref>) has solutions for all 1≤k≤n. Thus, without loss of generality, we assume that (<ref>) takes the following form,

p_k(x,y) = a_00k + a_21k x^2y + a_12k xy^2 + a_03k y^3,
q_k(x,y) = b_00k + b_10k x + b_01k y + b_20k x^2 + b_11k xy + b_02k y^2 + b_30k x^3 + b_21k x^2y + b_12k xy^2 + b_03k y^3,

with

a_00k = 1/64 (a^2+2)[(a^2+2)^2 a_03k + 2a(a^2+2)a_12k + 4a^2 a_21k],
b_00k = 1/64 {8a^3 b_30k + 16a(2b_10k - ab_20k) + 4(a^2+2)(4b_01k - 2ab_11k + a^2b_21k) - (a^2+2)^2[4b_02k - 2ab_12k - (a^2+2)b_03k]}.

As mentioned in Section 1, to find limit cycles around E_0 in system (<ref>), we apply normal form theory to compute the focus values and then solve the multivariate polynomial equations obtained from the focus values. In particular, we have

b_01k = 1/16 [4a(2b_11k - ab_21k) - (a^2+2)^2(a_12k + 3b_03k) + 4(a^2+2)(2b_02k - aa_21k - ab_12k)],

solved from the zeroth-order focus value V_0k = 0, where

V_0k = 1/32 {16b_01k - 4a(2b_11k - ab_21k) + (a^2+2)^2(a_12k + 3b_03k) - 4(a^2+2)(2b_02k - aa_21k - ab_12k)}.

Higher-order focus values are relatively complex, and we shall study them in Section 3. When we want to use the focus values V_iK in V_K(r), i=0,1,2,⋯, to study limit cycles, we first need to show that V_k(r)≡0, 1≤k<K, or dr/dt = O(ε^K) in (<ref>). In order to prove this, we use the approximation of first integrals, and claim that if there exists an analytic function H_K(x,y,ε) such that

(M^-1H_y + εp) ∂H_K/∂x + (-M^-1H_x + εq) ∂H_K/∂y = O(ε^K),

then dr/dt = O(ε^K). This claim can be easily proved by using the closed contour H_K=h as the parameter to express the displacement function. Usually, as in <cit.>, the method of focus values is used only to prove how many limit cycles system (<ref>) can have around the equilibrium point. Combining it with the approximation of first integrals, we can obtain the maximal number of small limit cycles for parameters in a neighborhood of the critical conditions. Furthermore, if the focus values are linear functions in the parameters, we have the following global result.

Consider system (<ref>) and assume V_k(r)≡0, 1≤k<K. Suppose that for an integer m≥1, each V_iK, 0≤i<m, is linear in δ, and further the following two conditions hold:

(i) rank[∂(V_0K,⋯,V_m-1,K)/∂(δ_1,⋯,δ_m)] = m,
(ii) V_K(r)≡0 if V_iK=0, i=0,1,⋯,m-1.

Then, for any given N>0, there exist ε_0>0 and a neighborhood V of the origin such that system (<ref>) has at most m-1 limit cycles in V for 0<|ε|<ε_0, |δ|≤N.
Moreover, m-1 limit cycles can appear in an arbitrary neighborhood of the origin for some values of (ε,δ).

The above theorem can be proved following the proof given in <cit.> with a minor modification, so the proof is omitted here.

§ HIGHER-ORDER ANALYSIS LEADING TO 11 LIMIT CYCLES IN SYSTEM (<ref>)

In this section, we focus on system (<ref>) and show that it can have 11 limit cycles by using perturbations at least up to 7th order. In the following, we will use the transformed system (<ref>) with the simplified perturbations given in (<ref>) for the analysis. In order to compute the focus values of this system, we first shift the equilibrium of system (<ref>), (x,y) = (-a/2 + o(ε^n), -(a^2+2)/4 + o(ε^n)), to the origin and then use a computer algebra system and software package (e.g., the Maple program in <cit.>) to obtain the focus values in terms of the parameters a, a_ijk and b_ijk. We shall give a detailed analysis for the first few lower-order focus values, and then summarize the results obtained from the higher-order analysis. For convenience, define the vectors:

W_k^8 = (V_1k, V_2k, ⋯, V_8k),
W_k^9 = (V_1k, V_2k, ⋯, V_9k),
W_k^10 = (V_1k, V_2k, ⋯, V_10k),
S_k^8 = (b_10k, b_20k, b_11k, b_02k, b_30k, b_21k, b_12k, b_03k),
S_k^9 = (b_10k, b_20k, b_11k, b_02k, b_30k, b_21k, b_12k, b_03k, a_03k),
S_k^10 = (b_10k, b_20k, b_11k, b_02k, b_30k, b_21k, b_12k, b_03k, a_03k, a_12(3m)),

where in S_k^10, k=7m for Case (A) and k=13m for Case (B) (m≥1, an integer), to be considered in Sections 3.4 and 3.5; the determinants:

Δ_k^8 = det[∂W_k^8/∂S_k^8], Δ_k^9 = det[∂W_k^9/∂S_k^9], Δ_k^10 = det[∂W_k^10/∂S_k^10];

and the functions:

F_1 = -373423834799904305184768/(5a^36(a^4-32)^8),
F_2 = 3013505105717894236809449177088/(5a^45(a^4-32)^9),
F_3 = -57397219210893210316046010501071634432/(a^55(a^4-32)^10),
F_4 = -279638476916415193342384256641414767487418/(a^66(a^4-32)^11),
G_1 = -258237837/(32a^9(a^4-32)),
G_2 = 23476167/(64a^11(a^4-32)^2) (57697a^4 - 35728a^2 - 88704),
G_3 = -23476167/(1024a^13(a^4-32)^3) (2304313595a^8 - 1702233920a^6 - 11829269248a^4 - 39211065344a^2 + 8642101248),
G_4 = -75246080/(a^10(a^4-32)),
G_5 = 9405760/(3a^12(a^4-32)^2) (75767a^4 - 46944a^2 - 96768),
G_6 = -180880/(3a^14(a^4-32)^3) (11681524055a^8 - 8555309984a^6 - 56944147200a^4 - 204210659328a^2 + 30640177152),
G_7 = 2006968901247765/(2883584a^11(a^4-32)),
G_8 = -154382223172905/(2883584a^13(a^4-32)^2) (48667a^4 - 30160a^2 - 52416),
G_9 = 66163809931245/(46137344a^15(a^4-32)^3) (6314158847a^8 - 4591849024a^6 - 29599122432a^4 - 112639700992a^2 + 11915624448).

Note that F_i ≠ 0, i=1,2,3, and G_i ≠ 0, i=1,2,…,9, since a^4-32 > 0 for a < -2^5/4.
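Since the nonvanishing of the F_i and G_i underpins the counting arguments below, it is a useful sanity check to evaluate them numerically. A small sketch (the sample value a=-3 is an arbitrary choice satisfying a < -2^5/4):

```python
def F_G_values(a):
    """Evaluate F_1..F_4 and G_1..G_9 defined above at a given value of a."""
    d = a**4 - 32
    F = [-373423834799904305184768 / (5 * a**36 * d**8),
         3013505105717894236809449177088 / (5 * a**45 * d**9),
         -57397219210893210316046010501071634432 / (a**55 * d**10),
         -279638476916415193342384256641414767487418 / (a**66 * d**11)]
    G = [-258237837 / (32 * a**9 * d),
         23476167 / (64 * a**11 * d**2) * (57697*a**4 - 35728*a**2 - 88704),
         -23476167 / (1024 * a**13 * d**3) * (2304313595*a**8 - 1702233920*a**6
             - 11829269248*a**4 - 39211065344*a**2 + 8642101248),
         -75246080 / (a**10 * d),
         9405760 / (3 * a**12 * d**2) * (75767*a**4 - 46944*a**2 - 96768),
         -180880 / (3 * a**14 * d**3) * (11681524055*a**8 - 8555309984*a**6
             - 56944147200*a**4 - 204210659328*a**2 + 30640177152),
         2006968901247765 / (2883584 * a**11 * d),
         -154382223172905 / (2883584 * a**13 * d**2) * (48667*a**4 - 30160*a**2 - 52416),
         66163809931245 / (46137344 * a**15 * d**3) * (6314158847*a**8 - 4591849024*a**6
             - 29599122432*a**4 - 112639700992*a**2 + 11915624448)]
    return F, G

a = -3.0                       # any a < -2**1.25 (about -2.378) will do
F, G = F_G_values(a)
assert all(abs(v) > 0 for v in F + G)
print("all F_i and G_i are nonzero at a =", a)
```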
NoticingG_10 for a < - 2^5/4, we haveV_91 0 if a_031 0.Moreover, _1^80 and (<ref>) indicate that perturbing W_1^8 and V_01around the solutions S_1^8 and b_011 (see (<ref>)) does yield9 small limit cycles around the equilibrium E_0. It is seen from (<ref>) thatV_91= V_101= V_111= 0 for a_031= 0.For convenience, define the critical condition S_1c^8,satisfying (<ref>), W_1^8 = 0 and a_031= 0, as S_1c^8 : {[ b_011 = C_1 a_121 -9/8a^3a_211,; a_031 = b_211 = 0, b_121 = 7/2a_211,b_021 = - 6a_121, b_031 = 8/3a_121,; b_111 = 9 aa_121 + 9/2a_211, b_101 = C_2 a_121 + C_3 a_211,; b_201 = C_4 a_121 + C_5a_211,b_301 = C_6 a_121 + C_7 a_211, ].where C_i's are given in Appendix A.We have the following result. The equilibrium E_0 of system (<ref>) is a center up to ε-order,i.e. all ε-order focus values vanish if and only if the condition S_1c^8 holds. Furthermore, there exist at most 9 small limit cycles around E_0 for all parameters a_ij1 andb_ij1, and 9 small limit cycles can be obtained for some parameter values near S_1c^8.The existence of 9 small limit cycles has been shown underthe solution S_1^8 witha_031 0 and det_1^80.It is obvious that the critical condition S_1c^8isnecessary for all ε-order focus valuesto vanish since V_i1 = 0,0 ≤ i ≤ 11 under thiscondition. To provesufficiency, under the critical condition S_1c^8,we use (<ref>) to obtainthe following ε-order approximation of the first integral, H_1(x,y,ε)=f_1+ε f_11/f_2+ε f_21,where f_1 and f_2 are given in (<ref>),f_11=a_121r_1+a_211r_2 and f_21=a_121r_3+a_211r_4 with[ r_1 = -1/48[a^2(3a^2+4)(5+2y+2x^2+x^4) +220 -192ax + 280y; +120x^2 -64y^2 +128ax^3 +64x^2y +76x^4 ],; r_2 =-1/8a(5a^2-4) +5x -1/8a(a^2-4)(2y+2x^2+x^4)+2xy -2x^3,; r_3 =1/192[a^2(3a^2 + 4)(4a - 15x+ 10xy + 10x^3) +304a+16a^3-180x;-40 ( 16ay - 8ax^2 + 5xy - 23x^3 - 8xy^2 + 16ax^4 + 8x^3y ) ],;r_4= a^2/8(a^2 - 1) - a(15/32a^2 +5/8)x+5/2x^2(3/2+ y - x^2)+5/16a(a^2 - 4)x(y + x^2). ] This implies that settingthe first 10 focus values V_i1=0, i=0,⋯,9 yields V_i1=0 for all i ≥ 10. Moreover, due to that all V_i1 are linear in all parameters a_ij1 and b_ij1,by Theorem <ref> at most 9small limit cycles can be obtained for this case.The proof is complete. Now suppose the condition S_1c^8 holds and so allε-orderfocus values vanish, we then need to use the ε^2-order focusvalues to study bifurcation of limit cycles.With an almost exact same procedure as that used inthe ε-order analysis,we can find a solution S_2^8 such that W_2^8=0, and then [ V_92 = G_1 a_032, V_102 = G_2 a_032, V_112 = G_3 a_032, _2^8 = F_10, ] where F_1 and G_i's are given in (<ref>).Note that the above equations are exactly the same as those given in(<ref>) and (<ref>), if we replace k = 1 by k = 2in (<ref>) and (<ref>).This clearly shows that there can exist 9 limit cycles around theequilibrium E_0 when all ε-order focus values vanish.It is also noted that all V_i2 are linear polynomialsin a_ij2 and b_ij2. Similarly, we see that setting a_032= 0 in (<ref>) yieldsV_92= V_102=V_112= 0,implying that the solution S_2^8 with a_032= 0 and b_012 given in (<ref>) definesa necessary condition for all ε^2-order focus values tovanish. 
This critical condition is given below:

S_2c^8: b_012 = 9/64 a^4 a_211^2 - 9/8 a^3 a_212 + C_1a_122 + C_8a_121a_211 + C_9a_121^2,
a_032 = 0, b_032 = 8/3 a_122 + 5a_121^2, b_212 = a/2 a_121(5aa_121 - 9a_211),
b_122 = 7/2 a_212 - 1/4 a_121(31aa_121 - 45a_211),
b_102 = C_2a_122 + C_3a_212 + C_10a_121^2 + C_11a_211^2 + C_12a_121a_211,
b_202 = C_4a_122 + C_5a_212 + C_13a_121^2 + C_14a_211^2 + C_15a_121a_211,
b_112 = 9aa_122 + 9/2 a_212 + 9a^3/32 a_211^2 + C_16a_121^2 + C_17a_121a_211,
b_022 = -6a_122 + C_18a_121^2 + C_19a_121a_211,
b_302 = C_6a_122 + C_7a_212 + C_20a_121^2 + C_21a_211^2 + C_22a_211a_121,

where the C_i's are given in Appendix A. We have the following theorem.

Assume that S_1c^8 holds. The equilibrium E_0 of system (<ref>) is a center up to ε^2-order if and only if S_2c^8 holds. Furthermore, there exist at most 9 small limit cycles around E_0 for all parameters a_ij2 and b_ij2, and 9 small limit cycles exist for some parameter values near S_2c^8.

Similarly, we only need to prove sufficiency. With S_1c^8 and S_2c^8 holding, we can use (<ref>) to find the following ε^2-order approximation of the first integral,

H_2(x,y,ε) = (f_1 + εf_11 + ε^2f_12)/(f_2 + εf_21 + ε^2f_22),

where f_11 and f_21 are given in H_1(x,y,ε) (see Eq. (<ref>)), and

f_12 = a_122r_1 + a_212r_2 + a_121^2s_1 + a_211^2s_2 + a_121a_211s_3,
f_22 = a_122r_3 + a_212r_4 + a_121^2s_4 + a_211^2s_5 + a_121a_211s_6,

in which r_i, i=1,2,3,4, are given in (<ref>), and the s_i, i=1,2,…,8, are listed in Appendix B. The existence of 9 small limit cycles is easily seen from V_92 ≠ 0 and Δ_2^8 ≠ 0 when a_032 ≠ 0 under the critical condition S_2c^8. On the other hand, the above results show that setting V_i2 = 0, 0≤i≤9, results in V_i2 = 0 for all i≥10. Further, all the V_i2's are linear in a_ij2 and b_ij2, and S_2c^8 is the unique solution of V_i2 = 0, 0≤i≤9. Then, by Theorem <ref>, at most 9 small limit cycles can be obtained around E_0 for all parameters a_ij2 and b_ij2.

§.§ ε^3-order analysis

In this section, we assume the critical condition {S_1c^8, S_2c^8}, which stands for both of the critical conditions S_1c^8 and S_2c^8 holding, under which all ε- and ε^2-order focus values vanish. Thus, we use the ε^3-order focus values V_i3 to study the bifurcation of limit cycles around the equilibrium E_0. With a similar procedure, but for this order solving the 9 equations W_3^9 = 0, we obtain the solution S_3^9 for which

V_103 = G_4a_121^3, V_113 = G_5a_121^3, V_123 = G_6a_121^3, Δ_3^9 = F_2 ≠ 0,

where F_2 and the G_i's are given in (<ref>). Note that for this order there is one more independent coefficient, a_033, in S_3^9 for solving W_3^9 = 0, compared to the solutions S_1^8 and S_2^8, which have only 8 independent coefficients to be used for solving the first 8 focus value equations. The equations in (<ref>) show that when all ε- and ε^2-order focus values vanish, the ε^3-order focus values can have solutions such that V_i3 = 0, i=0,1,⋯,9, but V_103 ≠ 0, as well as Δ_3^9 ≠ 0, implying that 10 small limit cycles can bifurcate from the equilibrium E_0.
Setting a_121 = 0 in (<ref>), we have V_103 = V_113 = V_123 = 0, implying that under the solution S_3^9 with a_121 = 0 and b_013 given in (<ref>), the equilibrium E_0 might be a center up to ε^3 order. This critical condition is given by

S_3c^9: b_013 = 9/32 a^4 a_211a_212 - 9/8 a^3 a_213 + C_1a_123 + C_8a_122a_211 + C_23a_211^3,
a_121 = a_033 = 0, b_033 = 8/3 a_123, b_023 = -6a_123 + C_19a_122a_211,
b_213 = -9a/16 a_211(8a_122 + a_211^2), b_123 = 7/2 a_213 + 45/32 a_211(8a_122 + a_211^2),
b_113 = 9aa_123 + 9/2 a_213 + a_211[9/16 a^3 a_212 + C_17a_122 + C_24a_211^2],
b_103 = C_2a_123 + C_3a_213 + a_211[2C_11a_212 + C_12a_122 + C_25a_211^2],
b_203 = C_4a_123 + C_5a_213 + a_211[2C_14a_212 + C_15a_122 + C_26a_211^2],
b_303 = C_6a_123 + C_7a_213 + a_211[2C_21a_212 + C_22a_122 + C_27a_211^2],

under which the critical conditions S_1c^8 and S_2c^8 are simplified. Here, the C_i's are given in Appendix A. We have the following theorem.

Let {S_1c^8, S_2c^8} hold. The equilibrium E_0 of system (<ref>) is a center up to ε^3-order if and only if the condition S_3c^9 holds. Furthermore, there exist 10 small limit cycles around E_0 for some parameter values of a_ij3 and b_ij3 near the critical value defined by S_3c^9 when V_103 ≠ 0.

Similarly again, we only need to prove sufficiency. Under the condition {S_1c^8, S_2c^8, S_3c^9}, we obtain the following ε^3-order approximation of the first integral,

H_3(x,y,ε) = (f_1 + εa_211r_1 + ε^2(a_122r_1 + a_212r_2 + a_211^2s_2) + ε^3f_31)/(f_2 + εa_211r_4 + ε^2(a_122r_3 + a_212r_4 + a_211^2s_5) + ε^3f_32),

where

f_31 = a_123r_1 + a_213r_2 + a_211(a_122t_1 + a_212t_2 + a_211^2t_3),
f_32 = a_123r_3 + a_213r_4 + a_211(a_122t_4 + a_212t_5 + a_211^2t_6),

in which r_i, i=1,2,3,4, are given in (<ref>), and s_2, s_5 and the t_i, i=1,2,…,6, are listed in Appendix B. This implies that setting V_i3 = 0, 0≤i≤10, yields V_i3 = 0 for all i≥11. Then, there exist at most 10 small limit cycles for this case. On the other hand, 10 small limit cycles do exist, since when a_121 ≠ 0 we have V_103 ≠ 0 and Δ_3^9 ≠ 0.

§.§ ε^4–ε^6-order analysis

The analyses for the ε^4-, ε^5- and ε^6-orders are similar to those of the ε^1-, ε^2- and ε^3-orders, respectively. Let {S_1c^8, S_2c^8, S_3c^9} hold. Following the same procedure used in the previous sections, we can solve the equations W_4^8 = 0 to obtain a solution S_4^8 such that

V_94 = G_1a_034, V_104 = G_2a_034, V_114 = G_3a_034, Δ_4^8 = F_1 ≠ 0,

which has exactly the same form as the equations given in (<ref>) and (<ref>), implying that perturbing the ε^4-order focus values from the solution S_4^8 and b_014 (see (<ref>)) can yield 9 limit cycles around the equilibrium E_0. Similarly, the solution S_4^8 and b_014 with a_034 = 0 yields a critical condition S_4c^8, under which the equilibrium E_0 is a center up to ε^4 order. Then let {S_1c^8, S_2c^8, S_3c^9, S_4c^8} hold. In the same way, we can solve the equations W_5^8 = 0 to obtain a solution S_5^8 such that

V_95 = G_1A_035, V_105 = G_2A_035, V_115 = G_3A_035, Δ_5^8 = F_1 ≠ 0,

where

A_035 = a_035 + 1/48 a_122a_211(140a_122 + 35a_211^2).

This shows that perturbing the ε^5-order focus values near the solution S_5^8 and b_015 given in (<ref>) can also yield 9 limit cycles around the equilibrium E_0. It is easy to see that the solution of A_035 = 0,

a_035 = -35/48 a_122a_211(4a_122 + a_211^2),

yields V_95 = V_105 = V_115 = 0.
Now, we combine the solution S_5^8, b_015 and a_035 to obtain the critical condition S_5c^8, under which the equilibrium E_0 becomes a center up to ε^5 order. The lengthy critical conditions S_4c^8 and S_5c^8 are omitted here for brevity. Summarizing the above results leads to the following theorem.

System (<ref>) can have maximal 9 limit cycles around the equilibrium E_0 under the condition {S_1c^8, S_2c^8, S_3c^9} by perturbing the ε^4-order focus values around the critical value S_4c^8; and under the critical condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8} by perturbing the ε^5-order focus values near the critical point S_5c^8. The equilibrium E_0 becomes a center up to ε^4 order under the condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8}, and a center up to ε^5 order under the condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8}.

The proof of the center conditions in Theorem <ref> is similar to those of Theorems <ref>, <ref> and <ref>, by finding the ε^4-order and ε^5-order approximations of the first integrals. This is the major and tedious part. For the higher-order analysis, the proofs are similar; we omit the detailed proofs in the following analysis for brevity.

Next, suppose the condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8} is satisfied; then all ε^k-order (k=1,2,…,5) focus values vanish. Following a similar analysis to that for the ε^3 order, we solve the equations W_6^9 = 0 to obtain a solution S_6^9 such that

V_106 = G_4a_122^2(a_122 + 9/8 a_211^2),
V_116 = G_5a_122^2(a_122 + 9/8 a_211^2),
V_126 = G_6a_122^2(a_122 + 9/8 a_211^2), Δ_6^9 = F_2 ≠ 0,

which indeed shows the existence of 10 limit cycles around the equilibrium E_0, generated by perturbing the ε^6-order focus values near the solution S_6^9 under the condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8}. Moreover, when a_122 = -9/8 a_211^2 or a_122 = 0, we have V_106 = V_116 = V_126 = 0, indicating that the solution S_6^9 with either a_122 = -9/8 a_211^2 or a_122 = 0, plus b_016 given by (<ref>)|_k=6, yields a critical condition S_6c^9a (corresponding to the former) or S_6c^9b (corresponding to the latter), under which all ε^6-order focus values vanish. Thus, under the critical condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8, S_6c^9} (S_6c^9 equals either S_6c^9a or S_6c^9b), the equilibrium E_0 becomes a center up to ε^6 order. We have the following theorem for this order.

System (<ref>) can have maximal 10 limit cycles bifurcating from the equilibrium E_0 under the condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8} by perturbing the ε^6-order focus values near the critical point S_6c^9a or S_6c^9b. Further, the equilibrium E_0 becomes a center up to ε^6 order under the condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8, S_6c^9}, for which all ε^k-order (k=1,2,…,6) focus values vanish.

Suppose the condition {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8, S_6c^9} holds. Then all the ε^k-order (k=1,2,…,6) focus values vanish. We have two cases for the higher-order analysis, defined as

Case (A): {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8, S_6c^9a},
Case (B): {S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8, S_6c^9b}.
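The order-by-order statements above can also be probed numerically: for concrete choices of a, ε and the perturbation coefficients, one can integrate system (<ref>) near E_0 and inspect the Poincaré displacement d(ρ). The sketch below is deliberately crude (the section-return logic simply takes the last recorded crossing, and the perturbation pert must be supplied by the user); it is meant only to illustrate the setup, not as a definitive implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(a, eps, pert=lambda x, y: (0.0, 0.0)):
    """Vector field of the cubic system; pert returns the perturbation (p, q)."""
    def rhs(t, u):
        x, y = u
        p, q = pert(x, y)
        dx = a + 2.5*x + x*y + x**3 + eps*p
        dy = -2*a*x + 2*y - 3*x**2 + 4*y**2 - a*x**3 + 6*x**2*y + eps*q
        return [dx, dy]
    return rhs

def displacement(rho, a, eps, pert, T=50.0):
    """Approximate Poincare displacement along the section y = y0, x > x0."""
    x0, y0 = -a/2.0, -(a**2 + 2)/4.0          # the center E0
    def section(t, u): return u[1] - y0       # crossings of the section
    section.direction = -1.0                  # fixed orientation ~ full returns
    sol = solve_ivp(make_rhs(a, eps, pert), (0.0, T), [x0 + rho, y0],
                    events=section, rtol=1e-10, atol=1e-12)
    x_ret = sol.y_events[0][-1][0]            # last crossing (crude choice)
    return x_ret - (x0 + rho)

# Example: unperturbed system, a = -3; d(rho) should vanish (a center).
print(displacement(0.05, a=-3.0, eps=0.0, pert=lambda x, y: (0.0, 0.0)))
```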
§.§.§ ε^7-order analysis

Under the condition (A) defined in (<ref>) with a_122 = -9/8 a_211^2, we obtain det_7^10 = F_3 a_211^4 and det_7^11 = F_4 a_211^10, which shows that det_7^10 ≠ 0 and det_7^11 ≠ 0 when a_211 ≠ 0, implying that we may have solutions such that the first ten focus values vanish but V_117 ≠ 0, and so 11 small limit cycles may be obtained. Indeed, we can solve the first ten focus value equations W_7^10 = 0 to obtain a solution S_7^10 such that V_117 = G_7 a_211^7, V_127 = G_8 a_211^7, V_137 = G_9 a_211^7, which clearly shows that V_117 ≠ 0 if a_211 ≠ 0. In addition, since det_7^10 ≠ 0 when a_211 ≠ 0, 11 small limit cycles exist. Letting a_211 = 0, we have V_117 = V_127 = V_137 = 0, leading to a critical condition S_7c^10, defined by

S_7c^10: {[ b_017 = C_1 a_127 - 9/8 a^3 a_217 + C_8 C_28 + 9/32 a^4 C_29 + C_23 C_30,; a_211 = a_123 = a_037 = 0, b_037 = 8/3 a_127,; b_027 = -6 a_127 + C_19 C_28, b_217 = -9a/16 (8 C_28 + C_30),; b_127 = 7/2 a_217 + 45/32 (8 C_28 + C_30),; b_107 = C_2 a_127 + C_3 a_217 + 2 C_11 C_29 + C_12 C_28 + C_25 C_30,; b_207 = C_4 a_127 + C_5 a_217 + 2 C_14 C_29 + C_15 C_28 + C_26 C_30,; b_117 = 9 a a_127 + 9/2 a_217 + 9 a^3/16 C_29 + C_17 C_28 + C_24 C_30,; b_307 = C_6 a_127 + C_7 a_217 + 2 C_21 C_29 + C_22 C_28 + C_27 C_30, ].

where the C_i's are given in Appendix A. We have the following result.

Let { S_1c^8, S_2c^8, S_3c^9, S_4c^8, S_5c^8, S_6c^9a } hold. The equilibrium E_0 of (<ref>) becomes a center up to ε^7 order under S_7c^10, for which all ε^7-order focus values vanish. Furthermore, there exist 11 small limit cycles around E_0 for parameter values of a_ij7 and b_ij7 near the critical point S_7c^10.

§.§.§ Higher-order analysis

For higher-order analysis (k ≥ 8), we first briefly list the results for a few orders to see the patterns, and then summarize the results in a table for higher orders. The analyses of the ε^k (k=8,9,10,11) orders show the same pattern, giving 9 limit cycles for each order, as follows:

[ [ Order k:; (k=8,9,10,11) ] {S_k^8, W_k^8}, {[ V_9k = G_1 A_03k, V_10k = G_2 A_03k,; V_11k = G_3 A_03k, det_k^8 = F_1 ≠ 0, ]. ]

where {S_k^m, W_k^m} denotes the solution S_k^m solved from W_k^m = 0, and

[ A_038 = a_038, A_039 = a_039,; A_0310 = a_0310 + 35/48 a_124 a_212 (4 a_124 + a_212^2),; A_0311 = a_0311 + 35/48 [ a_125 a_212 (8 a_124 + a_212^2) + a_124 a_213 (4 a_124 + 3 a_212^2) ]. ]

This clearly shows that for each order k=8,9,10,11, one can solve A_03k = 0 to get a unique solution for a_03k, under which (together with the solutions S_k^m and b_01k obtained in the previous orders and the current order) the equilibrium E_0 becomes a center up to that order. When the equilibrium E_0 is a center up to 11th order, as given in (<ref>), we obtain the following result for order 12:

[ Order 12: {S_12^9, W_12^9}, {[ V_1012 = G_4 a_124^2 (a_124 + 9/8 a_212^2),; V_1112 = G_5 a_124^2 (a_124 + 9/8 a_212^2),; V_1212 = G_6 a_124^2 (a_124 + 9/8 a_212^2), det_12^9 = F_2 ≠ 0, ]. ]

which has exactly the same pattern as order 6, shown in (<ref>), indicating that 10 limit cycles can be obtained from this order, and there are two solutions of the equations V_1012 = V_1112 = V_1212 = 0: a_124 = -9/8 a_212^2 and a_124 = 0, which are again similar to those in order 6. When a_124 = 0, it will be shown in Section 4.7 that it yields the same pattern as that for Case (B) in higher orders. So in this section, we choose a_124 = -9/8 a_212^2, like we chose a_122 = -9/8 a_211^2 in order 6 to obtain the center condition.
Let a_124 = -9/8 a_212^2, under which (together with the solutions obtained from previous orders and this order) the equilibrium E_0 becomes a center up to ε^12 order. Then, we have the result for the ε^13 order:

[ Order 13: {S_13^9, W_13^9}, {[ V_1013 = 81/64 G_4 a_212^4 (a_125 + 9/4 a_212 a_213),; V_1113 = 81/64 G_5 a_212^4 (a_125 + 9/4 a_212 a_213),; V_1213 = 81/64 G_6 a_212^4 (a_125 + 9/4 a_212 a_213), det_13^9 = F_2 ≠ 0, ]. ]

which shows that perturbing the ε^13-order focus values can also yield 10 small limit cycles around the equilibrium E_0. It can be seen from (<ref>) that either a_212 = 0 or a_125 = -9/4 a_212 a_213 leads to the equilibrium E_0 being a center. However, it can be shown that setting a_212 = 0 at this order will not yield 11 small limit cycles at the next order, though it will resume the same pattern at higher orders. So let a_125 = -9/4 a_212 a_213. Then, we obtain the following result for the ε^14 order:

[ Order 14: {S_14^10, W_14^10}, {[ V_1114 = G_7 a_212^7, V_1214 = G_8 a_212^7,; V_1314 = G_9 a_212^7, det_14^10 = F_3 ≠ 0, ]. ]

which shows that perturbing the ε^14-order focus values can yield 11 limit cycles around the equilibrium E_0, and setting a_212 = 0 leads to E_0 being a center up to ε^14 order.

It has been noted that choosing a_212 = 0 at order 13 or 14 makes a difference. More precisely, as shown in Table <ref>, if taking a_125 = -9/4 a_212 a_213 at order 13, we obtain 11, 9, 9, 9, 9 small limit cycles for orders 14–18; while if taking a_212 = 0 at order 13, then the limit cycles obtained for orders 14–18 are 9, 10, 9, 9, 10, and the two different choices merge into the same pattern from order 19. Note that the choice a_212 = 0 at order 13 does not yield 11 small limit cycles at order 14, but gives two additional occurrences of 10 small limit cycles, at orders 15 and 18. However, it returns to the general pattern at order 19. So we treat a_212 = 0 as a special case of the case a_125 = -9/4 a_212 a_213.

Summarizing the above results, we have the following pattern: 11 limit cycles are obtained from the ε^7 order, then 9 limit cycles from four consecutive ε^k orders (k=8,9,10,11), then 10 limit cycles from two consecutive ε^k orders (k=12,13), and finally a return to 11 limit cycles at the ε^14 order. This pattern, starting from order 8 with four orders of 9 limit cycles, followed by two orders of 10 limit cycles, and then one order of 11 limit cycles, has been verified up to ε^35 order. We call this the 9^4-10^2-11^1 generic pattern, and the corresponding solution (or center condition) is called a generic solution (or generic center condition). By generic we mean that one should always choose a non-zero solution (if it exists) when solving the center conditions at each order. Other types of solutions are called non-generic. For example, as discussed above, if choosing the non-generic solution a_212 = 0 at order 13, then 11 limit cycles will be missed at order 14, but the solution procedure will return to the generic 9^4-10^2-11^1 pattern at order 19. However, it should be noted that a non-generic solution in Case (A) does not always lead to the generic 9^4-10^2-11^1 pattern. For instance, choosing the non-generic solution a_124 = 0 at order 12 will generate solutions in the form of the generic pattern of Case (B) at a higher order, as shown in the next section.

It has been observed from the above analysis that the value of the parameter a in the Hamiltonian function does not affect the number of limit cycles. In other words, a cannot be used to increase the number of bifurcating limit cycles.
Thus, to simplify the computations in the higher-order (k ≥ 15) analysis, we set a = -3, which greatly reduces the computational effort. We summarize the results of Case (A) in Table <ref>, where k is the order of the ε^k focus values, (S_k^m, W_k^m) represents the solution S_k^m solved from W_k^m = 0, and LC denotes the limit cycles around the equilibrium E_0 obtained by perturbing the ε^k-order focus values. The "Condition for Center" in each row only lists the condition for the current row, which assumes that the conditions in the previous rows hold. For example, when k = 4, S_4c^8 only gives the center condition for k = 4, which should be combined with the conditions given in the previous rows, S_1c^8, S_2c^8 and S_3c^9, to get a complete center condition { S_1c^8, S_2c^8, S_3c^9, S_4c^8 }. Note that the critical condition S_kc^8 contains the solutions S_k^8, the b_01k given in (<ref>) and a particular coefficient. For example, S_2c^8 = {S_2^8, b_012, a_032}, S_3c^9 = {S_3^9, b_013, a_121}, and S_7c^10 = {S_7^10, b_017, a_211}, etc. The solutions of these key coefficients are given below.

[ 9 LC: k=1,2,4,8,9: a_03k = 0;
k=5: a_035 = -35/48 a_122 a_211 (4 a_122 + a_211^2);
k=10: a_0310 = -35/48 a_124 a_212 (4 a_124 + a_212^2);
k=11: a_0311 = -35/48 [ a_212 a_125 (8 a_124 + a_212^2) + a_213 a_124 (4 a_124 + 3 a_212^2) ];
k=15: a_0315 = -735/256 a_213^5;
k=16: a_0316 = 35/768 a_213^3 (128 a_127 - 27 a_214 a_213);
k=17: a_0317 = -35/768 a_213 [ 32 a_127 (2 a_127 - 3 a_214 a_213) - a_213^2 (128 a_128 + 54 a_214^2 - 27 a_215 a_213) ];
k=18: a_0318 = 35/6 a_213^3 a_129 - 35/24 a_213 a_128 (4 a_127 - 3 a_214 a_213) - 35/48 a_127 [ a_214 (4 a_127 + 3 a_214 a_213) - 6 a_213^2 a_215 ] + 3/4096 a_213^2 [ 1120 a_214 (a_214^2 + 6 a_215 a_213) + 3 a_213^2 (2269 a_213^2 - 560 a_216) ];
k=22: a_0322 = 35/768 a_214^3 (128 a_1210 - 486 a_215^2 - 27 a_216 a_214);
k=23: a_0323 = 35/768 a_214^2 [ a_214 (128 a_1211 - 27 a_217 a_214) + a_215 (384 a_1210 - 108 a_214 a_216 - 198 a_215^2) ];
k=24,25: a_03k = ⋯; k=29–32: a_03k = ⋯;

10 LC: k=3: a_121 = 0; k=6: a_122 = -9/8 a_211^2; k=12: a_124 = -9/8 a_212^2; k=13: a_125 = -9/4 a_213 a_212; k=19: a_127 = -9/4 a_213 a_214; k=20: a_128 = -9/8 (a_214^2 + 2 a_215 a_213); k=26: a_1210 = -9/8 (a_215^2 + 2 a_216 a_214); k=27: a_1211 = -9/4 (a_217 a_214 + a_216 a_215); k=33: a_1213 = -9/4 (a_218 a_215 + a_217 a_216); k=34: a_1214 = -9/8 (a_217^2 + 2 a_219 a_215 + 2 a_218 a_216);

11 LC: k=7m, m=1–5: a_21m = 0 ]

where '⋯' represents lengthy expressions omitted for brevity. In addition, in Table <ref>, the blue and red colors denote the solutions and center conditions corresponding to 10 and 11 small limit cycles, respectively.

§.§ Higher-order analysis for Case (B)

We now turn to Case (B), for which we choose a_122 = 0 at the ε^6 order. Thus, the results starting from the ε^6 order are different from those given in Table <ref>. Now, under the condition a_122 = 0, together with the conditions obtained in previous orders, the equilibrium E_0 becomes a center up to ε^6 order.
Then, for the ε^7-order focus values, we solve W_7^8 = 0 to obtain S_7^8 and then V_97 = G_1 A_037, V_107 = G_2 A_037, V_117 = G_3 A_037, det_7^8 = F_1 ≠ 0, where A_037 = a_037 + 35/768 a_211 [ a (a^2 - 4) a_123 a_211^3 + 16 a_124 a_211^2 + 16 a_123 (4 a_123 + 3 a_212 a_211) ], which shows that for Case (B) only 9 small limit cycles can be obtained from the ε^7 order. Then, solving A_037 = 0 gives a unique solution for a_037, under which, together with the conditions obtained in the previous orders as well as S_7^8 and b_017, the equilibrium E_0 becomes a center up to ε^7 order.

Next, the ε^8-order analysis shows that 10 limit cycles can be obtained by solving W_8^9 = 0 to obtain the solution S_8^9, under which the higher-order focus values become V_108 = 9/8 G_4 a_211^2 a_123^2, V_118 = 9/8 G_5 a_211^2 a_123^2, V_128 = 9/8 G_6 a_211^2 a_123^2, det_8^9 = F_2 ≠ 0. This clearly indicates that either a_211 = 0 or a_123 = 0, together with b_018, leads to the equilibrium E_0 being a center up to ε^8 order.

If taking a_211 = 0, then we again obtain 10 small limit cycles from the ε^9 order by solving W_9^9 = 0 to obtain the solution S_9^9 and V_109 = G_4 a_123^3, V_119 = G_5 a_123^3, V_129 = G_6 a_123^3, det_9^9 = F_2 ≠ 0. Thus, for the equilibrium E_0 to be a center up to ε^9 order, a_123 must be taken to be zero (with b_019), yielding the same result as that generated from Case (A) at order 9 (and so the result at order 8 also becomes the same). In other words, choosing the non-generic solution a_211 = 0 at order 8 makes the higher-order solutions (k ≥ 9) follow the generic pattern of Case (A).

Now we consider the choice a_123 = 0 at the ε^8 order. It can be shown that under this condition only 9 limit cycles exist for the ε^9 order. Then, for the ε^10 order, we solve W_10^9 = 0 to obtain the solution S_10^9 and then get

[ V_1010 = 9/8 G_4 a_211^2 (a_124^2 - 5/16 a_211^4 a_124 + 429/40960 a_211^8),; V_1110 = 9/8 G_5 a_211^2 (a_124^2 - 5/16 a_211^4 a_124 + 429/40960 a_211^8),; V_1210 = 9/8 G_6 a_211^2 (a_124^2 - 5/16 a_211^4 a_124 + 429/40960 a_211^8), det_10^9 = F_2 ≠ 0, ]

which gives two solutions leading to a center at E_0. One of them is a_211 = 0, which yields the same solution as that obtained in Case (A) at order 10. Thus, choosing the non-generic solution a_211 = 0 at this order leads to the generic pattern of Case (A) starting from the ε^11 order (i.e., for k ≥ 11). The second solution, given by a_124 = 1/64 (10 ± 1/10 √(5710)) a_211^4, is a generic solution for Case (B), different from the generic pattern of Case (A). Then, following a similar computation procedure as that used in Case (A), we obtain the generic solutions up to ε^39 order. The results are given in Table <ref>, showing a 9^6-10^6-11^1 generic pattern starting from order 14. The notations used in this table are the same as those used in Table <ref>. For each k, the key coefficient used to obtain the center condition is given below.
[ 9 LC: k=7: a_037 = -35/768 a_211 [ 64 a_123^2 - a_211 (15 a_123 a_211^2 - 16 a_124 a_211 - 48 a_123 a_212) ];
k=9: a_039 = ⋯;
k=14: a_0314 = -35/48 a_212^3 a_128;
k=15: a_0315 = -35/48 a_212^2 (a_129 a_212 + 3 a_128 a_213);
k=16: a_0316 = -35/768 a_212 [ 48 a_213^2 a_128 + a_212 (16 a_1210 a_212 + 48 a_129 a_213 + 48 a_128 a_214 - 15 a_128 a_212^2) ];
k=17–19: a_03k = ⋯; k=27–32: a_03k = ⋯;

10 LC: k=8: a_123 = 0;
k=10: a_124 = (100 ± √5710)/640 a_211^4;
k=11: a_125 = (100 ± √5710)/10240 a_211^3 [ 64 a_212 + a (a^2 - 4) a_211^2 ];
k=12: a_126 = ⋯;
k=20: a_128 = (100 ± √5710)/640 a_212^4;
k=21: a_129 = (100 ± √5710)/160 a_212^3 a_213;
k=22–25: a_12(k-12) = ⋯;
k=33: a_1215 = -(100 ± √5710)/10240 a_213 [ a_213^2 (15 a_213^2 - 64 a_216) - 64 a_214 (a_214^2 + 3 a_213 a_215) ];
k=34–38: a_12(k-18) = ⋯;

11 LC: k=13m, m=1,2,3: a_21m = 0 ]

§.§ Non-generic solutions

A couple of non-generic solutions have been discussed in Case (B) (see Section 3.5), showing that setting a_211 = 0 at order 8 or 10 (see Eqns. (<ref>) and (<ref>)) leads to the 9^4-10^2-11^1 generic pattern of Case (A) for orders greater than 10 or 11. These two examples give a route from Case (B) to Case (A). In this section, we present several more non-generic solutions to show other possibilities; they eventually return to either the 9^4-10^2-11^1 generic pattern of Case (A) or the 9^6-10^6-11^1 generic pattern of Case (B). Other cases can be discussed similarly. Since the discussions for the different cases are similar, we will not give the details, but list the cases below and summarize the results in Table <ref>.

(A1) In Case (A), at order 13: a_212 = 0, leading to Case (A).
(A2) In Case (A), at order 12: a_124 = 0, leading to Case (B).
(B1) In Case (B), at order 11: a_211 = 0, leading to Case (B).

For each k, the key coefficient used to obtain the center condition is given below.

[ Case (A1): k=13: a_212 = 0;
k=14: a_0314 = -35 a_125/48 [ 4 a_125 a_214 + a_213 (8 a_126 + a_213^2) ];
k=16: a_0316 = -35/48 [ a_127 a_213 (8 a_126 + a_213^2) + a_126 a_214 (4 a_126 + 3 a_213^2) ];
k=17: a_0317 = -35/48 a_128 a_213 (a_213^2 + 8 a_126) - 35/48 a_127 (4 a_213 a_127 + 8 a_214 a_126 + 3 a_214 a_213^2) - 35/48 a_126 (3 a_214^2 a_213 + 4 a_126 a_215 + 3 a_215 a_213^2);
k=15: a_125 = 0;
k=18: a_126 = -9/8 a_213^2 ]

[ Case (B1): k=11: a_211 = 0;
k=12: a_0312 = -35/48 a_212 (a_126 a_212^2 + 3 a_125 a_212 a_213 + 4 a_125^2);
k=13,15,17: a_03k = ⋯;
k=14,16,18: a_12(k/2-2) = 0;

Case (A2): k=13: a_0313 = -35/48 a_212 [ a_127 a_212^2 + a_126 (3 a_213 a_212 + 8 a_125) ] + 35/768 a_125 (15 a_212^4 - 48 a_212^2 a_214 - 48 a_212 a_213^2 - 64 a_125 a_213);
k=15,17: a_03k = ⋯;
k=12,14,16,18: a_12(k/2-2) = 0 ]

Therefore, there are four possible routes for the non-generic solutions: from Case (A) to Case (A) or Case (B); and from Case (B) to Case (A) or Case (B).
§.§ Summary of this section

Summarizing the results obtained in Sections 3.4, 3.5 and 3.6, we have the following theorem.

For system (<ref>), based on the higher-order focus values, there exist two generic patterns. One is the 9^4-10^2-11^1 pattern, starting from order 8 with four consecutive orders of 9 limit cycles, followed by two consecutive orders of 10 limit cycles, and then one order of 11 limit cycles, verified up to ε^35 order; the other is the 9^6-10^6-11^1 pattern, starting from order 14 with six consecutive orders of 9 limit cycles, followed by six consecutive orders of 10 limit cycles, and then one order of 11 limit cycles, verified up to ε^39 order. Other non-generic solutions deviate from the current pattern for certain orders and eventually return to either the 9^4-10^2-11^1 pattern or the 9^6-10^6-11^1 pattern.

Finally, we propose a conjecture on the number of limit cycles around E_0 for system (<ref>).

Conjecture. For the perturbed system (<ref>), the maximal number of small limit cycles which can bifurcate from the equilibrium E_0 is 11.

§ CONCLUSION

In this paper, we have applied high-order focus value computation to prove that system (<ref>) can have 11 limit cycles around the equilibrium of (<ref>), obtained by perturbing at least the ε^7-order focus values. Moreover, no more than 11 limit cycles have been found up to the ε^39-order analysis. It is believed that system (<ref>) can have at most 11 small limit cycles around the equilibrium.

§ ACKNOWLEDGMENT

This work was supported by the National Natural Science Foundation of China (NSFC No. 11501370) and the Natural Sciences and Engineering Research Council of Canada (NSERC No. R2686A02).

§ APPENDIX A

The coefficients C_i in (<ref>), (<ref>) and (<ref>) are given below.

[ C_1 = -3/16 (3 a^4 + 4 a^2 + 44), C_2 = -a/48 (3 a^4 + 12 a^2 + 116); C_3 = -1/8 (a^4 + 2 a^2 + 5), C_4 = -1/16 (9 a^4 - 20 a^2 + 172); C_5 = -3a/8 (3 a^2 - 8), C_6 = -a/12 (a^2 - 120); C_7 = -1/8 (3 a^2 - 80), C_8 = 3/128 a (7 a^4 + 68 a^2 - 900); C_9 = 1/64 (3 a^6 + 40 a^4 - 860 a^2 - 1600), C_10 = -a/576 (303 a^4 + 1596 a^2 + 8096); C_11 = -a/64 (55 a^2 - 256), C_12 = -1/384 (569 a^4 - 1660 a^2 + 1420); C_13 = 1/192 (21 a^6 + 80 a^4 - 2996 a^2 - 9040), C_14 = 3/64 (7 a^4 + 8 a^2 + 160); C_15 = a/128 (49 a^4 + 220 a^2 - 380), C_16 = a/32 (9 a^4 + 36 a^2 - 172); C_17 = 3/64 (15 a^4 + 28 a^2 - 916), C_18 = -1/16 (3 a^4 + 4 a^2 + 340); C_19 = -3a/8 (a^2 - 16), C_20 = a/96 (45 a^4 + 266 a^2 - 644); C_21 = a/32 (41 a^2 + 72), C_22 = 1/192 (303 a^4 + 1728 a^2 - 3620); C_23 = -9/512 a (a^4 + 12 a^2 + 200), C_24 = -9/256 (a^4 + 160); C_25 = -1/512 (70 a^6 - 471 a^4 + 128 a^2 - 300), C_26 = -9/512 a (a^4 - 108 a^2 + 696); C_27 = 3/256 (21 a^4 + 58 a^2 - 1880), C_28 = a_125 a_212 + a_124 a_213; C_29 = a_215 a_212 + a_214 a_213, C_30 = 3 a_213 a_212^2 ]

§ APPENDIX B

The coefficients s_i involved in H_2 (see Eq. (<ref>)) and t_i involved in H_3 (see Eq. (<ref>)) are given as follows.
[ s_1 = 1/3072 a^6 (3a^2+16)(10+4y+4x^2+3x^4) -1/192 (2850+1824a^2-85a^4) +a/18 (6a^2-319)x -1/288 (5902+1568a^2+45a^4)y -a/9 (53+10x^2)xy +13/6 y^2 -1/72 a^2 (3a^2+4) y(y-x^2) -1/96 (1074+200a^2+45a^4)x^2 -29/18 x^2 y -2/9 y^3 +a/36 (12-4a^2+3a^4)x^3 -1/1152 (6746+3140a^2+91a^4+3a^6)x^4 +2/3 x^2 y^2;

s_2 = 1/128 a^2 (24-10a^2+5a^4) -1/64 (1120-24a^2+2a^4-a^6)(y+x^2) -1/2 x^2 y +a/16 (4+5a^2)x -25/4 x^2 -a/256 (a^2-4) x [ 32(y-x^2) - 3a^3 x^3 ] -1/8 (73-2a^2)x^4;

s_3 = 1/384 a (-4716+376a^2+25a^4+15a^6) -1/96 (2260-164a^2-15a^4)x +1/192 a^5 (5+3a^2)(y+x^2) -1/12 a (a^2-4) y(y-x^2) -2/3 xy^2 +2x^3 y -1/48 [ a (1151+10a^2)y - a (1511+40a^2)x^2 + (100-4a^2-3a^4)xy + (140-76a^2+11a^4)x^3 ] -1/768 a (10216+100a^2-2a^4-9a^6)x^4;

s_4 = 1/9216 a (96656+17952a^2+3640a^4+24a^6+9a^8) -5/4608 (2256+7272a^2-56a^4+6a^6+9a^8)x -5/72 a (224+8a^2+3a^4)y -5/144 a (642-8a^2-3a^4)x^2 -5/1152 (368-1628a^2-92a^4+3a^6)xy -5/2 ay^2 +5/1152 (5272+86a^4+2964a^2-3a^6)x^3 -275/36 ax^2 y +5/288 (372+4a^2+3a^4)xy^2 -5/144 a (140+12a^2+3a^4)x^4 -5/288 (268+3a^4+4a^2)x^3 y -5/18 xy^3 -25/18 ax^4 y +5/6 x^3 y^2;

s_5 = 1/256 a (-16a^2-8a^4+a^6+1120) +5/256 (560+9a^4-2a^6-4a^2)x -5/64 a (20-11a^2)x^2 -5/128 a^2 (3a^2+4) x(y+x^2) +175/8 xy +265/16 x^3 +5/32 a (a^2-4) x^2 (y-x^2) -5/8 x^3 y;

s_6 = 1/768 a^2 (540a^2+3904-8a^4+3a^6) -5/1536 a (484-100a^2-23a^4+12a^6)x -5/12 a^2 (a^2-1)y -35/384 (244-4a^2-7a^4)x^2 +5/768 a (5132+84a^2-13a^4)xy -5/768 a (-3756-76a^2+13a^4)x^3 +5/192 (380+3a^4+4a^2)x^2 y +5/48 a (a^3-4) xy(y-x^2) -5/192 (11a^4+20a^2+12)x^4 -5/6 x^2 y^2 +5/2 x^4 y. ]

[ t_1 = 1/384 a (-4716+376a^2+25a^4+15a^6) -1/96 (2260-164a^2-15a^4)x +1/192 a^5 (5+3a^2)(y+x^2) -1/48 a (1151+10a^2)y -2/3 xy^2 +2x^3 y -1/48 a (1511+40a^2)x^2 +1/48 (100-4a^2-3a^4)xy -1/12 a (a^2-4) y(y-x^2) +1/48 (140-76a^2+11a^4)x^3 +1/768 a (-100a^2-10216+9a^6+2a^4)x^4;

t_2 = 1/64 a^2 (24-10a^2+5a^4) -1/32 (1120-24a^2+2a^4-a^6)(y+x^2) +1/8 a (5a^2+4)x -25/2 x^2 -1/4 a (a^2-4) x(y-x^2) -x^2 y -1/128 (2336-64a^2+12a^4-3a^6)x^4;

t_3 = -1/2048 a^5 (20+a^4)(5+2y+2x^2+2x^4) -1/128 a (550-343a^2) -1/256 (6800-24a^2-10a^4+5a^6)x -3/64 a (25a^2+34)y -1/32 a (45a^2+61)x^2 -1/1024 a^4 (2-a^2) x (8y-8x^2+3ax^3) -3/16 (50-a^2)xy +3/16 (80-a^2)x^3 +1/32 a (a^2-4)x^2 y -1/64 a (197-2a^2)x^4 +1/4 x^3 y;

t_4 = 1/768 a^2 (3904+540a^2-8a^4+3a^6) -5/1536 a (484-100a^2-23a^4+12a^6)x -5/12 a^2 (a^2-1)y -35/384 (244-7a^4-4a^2)x^2 -5/768 a (-5132-84a^2+13a^4)xy -5/768 a (-3756-76a^2+13a^4)x^3 +5/192 (380+4a^2+3a^4)x^2 y +5/48 a (a^2-4) xy(y-x^2) -5/192 (12+20a^2+11a^4)x^4 -5/6 x^2 y^2 +5/2 x^4 y;

t_5 = 1/128 a (1120-16a^2-8a^4+a^6) +5/128 (560-4a^2+9a^4-2a^6)x +5/32 a (11a^2-20)x^2 -5/64 a^2 (4+3a^2) x(y+x^2) +175/4 xy +265/8 x^3 +5/16 a (a^2-4)x^2 (y-x^2) -5/4 x^3 y;

t_6 = -1/2048 a^2 (2544-2336a^2+8a^4+3a^6) -5/2048 a (2632+276a^2+3a^4-8a^6)x +5/512 (1240-4a^2-41a^4+4a^6)x^2 +5/1024 a (-1832+892a^2+a^4)xy +5/1024 a (-1368+696a^2+a^4)x^3 -5/256 a^2 (3a^2+4)x^2 (y-x^2) +375/32 x^2 y -65/16 x^4 -5/128 a (a^2-4)x^3 y +5/16 x^4 y ]
http://arxiv.org/abs/1708.07864v1
{ "authors": [ "Yun Tian", "Pei Yu" ], "categories": [ "math.DS" ], "primary_category": "math.DS", "published": "20170825190816", "title": "Bifurcation of small limit cycles in cubic integrable systems using higher-order analysis" }
Nanoscale chemical mapping of laser-solubilized silk

^1Tokyo Institute of Technology, Meguro-ku, Tokyo 152-8550, Japan
^2Nihon Thermal Consulting Co., 1-5-11, Nishi-Sinjuku, Shinjuku-ku, Tokyo 160-0023, Japan
^3Nanotechnology facility, Swinburne University of Technology, John st., Hawthorn, 3122 VIC, Australia
^4Center for Physical Sciences and Technology, Savanoriu ave. 231, LT-02300 Vilnius, Lithuania
^5Infrared Microspectroscopy Beamline, Australian Synchrotron, Clayton, VIC 3168, Australia
^6Institute for Frontier Materials, Deakin University, Geelong, VIC 3220, Australia
^7Research Institute of Electronics, Shizuoka University, Naka Ku, 3-5-3-1 Johoku, Hamamatsu, Shizuoka 4328561, Japan
^8Melbourne Center for Nanofabrication, Australian National Fabrication Facility, Clayton 3168, Melbourne, Australia

[email protected]; [email protected]

25 August 2017

A water-soluble amorphous form of silk was made by ultra-short laser pulse irradiation and detected by nanoscale IR mapping. An optical absorption-induced nanoscale surface expansion was probed to yield the spectral response of silk at IR molecular fingerprinting wavelengths with a high ∼20 nm spatial resolution defined by the tip of the probe. Silk microtomed sections of 1-5 μm in thickness were prepared for nanoscale spectroscopy, and a laser was used to induce amorphisation. Comparison of silk absorbance measurements carried out by table-top and synchrotron Fourier transform IR spectroscopy proved that chemical imaging at high spatial resolution and specificity (able to discriminate between amorphous and crystalline silk) is reliably achieved by nanoscale IR. Nanoscale material characterization using synchrotron IR radiation is discussed.

Keywords: silk, FT-IR, synchrotron radiation

§ INTRODUCTION

In analytical material science, absorption of IR light is used for fingerprinting (chemical imaging) of particular molecules and specific compounds, and provides insight into interactions in their immediate vicinity. However, challenges arise when this information needs to be obtained from sub-wavelength and sub-cellular dimensions, in particular at the IR and terahertz spectral bands of absorption <cit.> or scattering <cit.>. Optical properties of sub-wavelength structures and patterns have opened an entirely new direction in photonics and in the design of highly efficient optical elements for control of the intensity, phase, polarisation, and spin and orbital momenta of light based on flat planar geometries that are nevertheless rich in nanoscale features <cit.>. We aim at reaching that level of control in the domain of chemical spectral imaging. There, interpretation of near-field data requires further knowledge of the probe's interaction with the substrate and of the phase and amplitude of the light reflected/transmitted from sub-wavelength structures in order to reveal the complex peculiarities of light-matter interactions at the nanoscale; this field is now advancing with strongly concentrated efforts <cit.>. Recently, electron tunneling control by a single-cycle terahertz pulse illuminated onto the tip of a scanning tunneling microscope (STM) needle was demonstrated at 10 V/nm fields <cit.>. STM reaches atomic precision in surface probing and spectroscopic characterisation, and can even be carried out on a water surface <cit.>.

Absorbance spectra quantify and identify resonant molecules and chemical structures through their individual or collective excitations, as detected in transmission (or inferred from reflection).
By sweeping the excitation wavelength through the fingerprinting region of a particular material, the usual optical excitation relaxation pathway, ending with thermal energy deposition into the host material, can be sensitively measured using an atomic force microscopy (AFM) approach <cit.>, which opened up the rapidly growing AFM-IR field <cit.> (also known as nano-IR; Fig. <ref>(a)). How reliably one can determine IR properties with an AFM nano-tip based on thermal expansion is currently still under debate, due to a lack of knowledge of the actual anisotropy of thermal and mechanical properties at the nanoscale, the 3D molecular conformation and alignment, and the interaction volume. To establish the correspondence between nano-IR and spectroscopy, it is necessary to compare IR spectral imaging in near-field and far-field modes with nano-IR. Another challenge of the nano-IR technique is that a very thin sub-micrometer film has to be prepared and mounted on a thermally conductive substrate. Microtomed thin sections usually have thicknesses above the optimum and need to be embedded into an epoxy host, which interferes with the nano-IR signal from micro-specimens.

In order to demonstrate that microtomed slices of samples with features at or below a micrometer in lateral cross-section can be measured using nano-IR and provide reliable IR spectral information, we acquired absorbance spectra from areas down to the single-pixel hyper-spectral resolution accessible on table-top FT-IR spectrometers, and at high resolution, which was achieved with a solid immersion lens using high-brightness synchrotron FT-IR microscopy. Such a comparison of IR properties read out at the nanoscale and in the far field was carried out in this study using silk, a bio-polymer with a complex structure comprised of crystalline and amorphous building blocks <cit.>.

Here, micrometer-thick slices of silk were used to measure thermal expansion under a specific wavelength of excitation with ∼20-nm-sharp AFM tips and to compare with high-resolution spectra measured with the ATR FT-IR method using synchrotron radiation at the Infrared Microspectroscopy (IRM) Beamline (Australian Synchrotron) as well as a table-top FT-IR spectrometer. Amorphisation of silk induced by single ultra-short laser pulses <cit.> has been spectroscopically recognised using nano-IR.

§ SAMPLES AND METHODS

Domestic (Bombyx mori) silk fibers, stripped of their sericin-rich cladding <cit.>, were probed during the spectroscopic imaging experiments. Pulsed laser radiation was used to induce local structural modifications of silk. The AFM-based nano-IR experiments with ∼25-nm-diameter tips were benchmarked against more conventional methods, such as attenuated total reflectance (ATR) at the Australian Synchrotron IRM Beamline (∼1.9 μm resolution) and a table-top FT-IR spectrometer (∼6 μm resolution).

For this set of experiments, method-agnostic silk samples, in the form of thin slices, had to be prepared. For cross-sectional observation, the natural silk fibers were aligned and embedded into an epoxy adhesive (jER 828, Mitsubishi Chemical Co., Ltd.). Fibers fixed in the epoxy matrix were microtomed into 1-5 μm-thick slices, which are mechanically robust enough to be measured using standard FT-IR setups without any supporting substrate. This was particularly important to increase the sensitivity of the far-field absorbance measurements and to diminish reflective losses.
Both longitudinal (L) and transverse (T) slices of the silk fibers were prepared by microtome (RV-240, Yamato Khoki Industrial Co., Ltd). The slices were thinner than the original silk fibers. For synchrotron ATR FT-IR, an aluminum disk was used to support the thin silk sections when they were brought into contact with the 100 μm diameter facet tip of the Ge ATR hemisphere (refractive index n = 4).

Synchrotron ATR-FTIR mapping was performed using an in-house developed ATR-FTIR device at the IR Microspectroscopy Beamline, which has high-speed, high-resolution surface characterisation capabilities with spatial resolution down to 1.9 μm <cit.>. A 100 μm tip ATR accessory for an FT-IR spectrometer (Hyperion 3000, Bruker) was used with a Ge contact lens of NA = n sinφ ≃ 2.4, where n = 4 is the refractive index and φ = 36.9° is the half-angle of the focusing cone. A deep sub-wavelength resolution r = 0.61λ_IR/NA ≃ 1.5 μm is achievable for the IR wavelengths of interest at the Amide band of λ_IR = 1600-1700 cm^-1, or 6.25-5.9 μm. The nano-IR experiments, i.e., an AFM readout of the height changes in response to IR sample excitation, were carried out with a nano-IR2 tool (Anasys Instruments, Santa Barbara, CA) with a ∼20 nm diameter tip. During a continuous scan over the selected region at 0.1 Hz frequency, digitisation was carried out resulting in 25× 100 nm^2 pixels in x× y. The oscillation frequency of the AFM needle was 190 kHz. A table-top FT-IR spectrometer (Spotlight, PerkinElmer) was used with a detector array with ∼6 μm pixel resolution.

Localized modification of silk was carried out via exposure to 515 nm wavelength, 230 fs duration pulses (Pharos, Light Conversion Ltd.) in an integrated industrial laser fabrication setup (Workshop of Photonics, Ltd.). Fibers were imaged and laser radiation was focused using an objective lens of numerical aperture NA = 0.5 (Mitutoyo). Single pulse exposures were carried out with pulse energy E_p. Optical and scanning electron microscopy (SEM) were used for structural characterisation of the laser-modified regions.

§ RESULTS AND DISCUSSION

A highly crystalline natural silk fiber can be thermally amorphised only through rapid 2× 10^3 K/s thermal quenching from the melt <cit.>, as demonstrated for very tiny amounts of silk measured in nanograms (1 ng occupies a sphere of 5.7 μm diameter). Amorphous silk is water soluble and can be used as a 3D printing material for scaffolds, desorbable implants <cit.>, bio-resists <cit.>, and can even be utilised as an electron beam resist <cit.>. The amorphous-to-crystalline transition of silk fibroin is usually achieved via simple water and alcohol bath processing at moderately elevated temperatures of ∼80°C <cit.>. UV 266 nm wavelength nanosecond pulsed laser irradiation was also used to enrich amorphous silk fibroin with crystalline β-sheets <cit.>.

To realise fast thermal quenching and to retrieve the amorphous silk phase <cit.>, we used ultrashort 230 fs duration, 515 nm wavelength laser pulses tightly focused into a focal spot of d = 1.22λ/NA ≃ 1.3 μm; the numerical aperture of the objective lens was NA = 0.5 and the wavelength was λ = 515 nm. Single pulse irradiation was carried out to ablate nanograms of silk and create a molten phase which is thermally quenched fast enough <cit.> to prevent crystallisation (Fig. <ref>(b)).
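The quoted spot size and ATR resolution follow directly from the stated parameters. The following back-of-the-envelope check (a Python sketch using only the numbers given in the text) reproduces them:

```python
import math

# Diffraction-limited focal spot of the 515 nm, NA = 0.5 writing beam
d_focus = 1.22 * 0.515 / 0.5                  # ~1.26 um, quoted as ~1.3 um

# Effective NA of the Ge ATR contact lens: NA = n * sin(phi)
NA_atr = 4.0 * math.sin(math.radians(36.9))   # ~2.40

# ATR resolution at the Amide band edge (1600 cm^-1 -> 6.25 um)
lam_ir = 1e4 / 1600.0                         # wavelength in um
r_atr = 0.61 * lam_ir / NA_atr                # ~1.59 um, quoted as ~1.5 um

print(f"d_focus = {d_focus:.2f} um, NA_ATR = {NA_atr:.2f}, r_ATR = {r_atr:.2f} um")
```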
The threshold of optically recognisable modification of the silk fiber during laser irradiation was 8 nJ, which corresponds to 2.8 TW/cm^2 (0.6 J/cm^2 fluence) per pulse and is typical for polymer glasses <cit.>.

The nano-IR spectrum, i.e., the low-frequency band of the AFM-detected height changes in response to a spectral sweep of the excitation across the IR absorbance bands at 1 kHz frequency, was measured on a transverse (T) microtomed cross section of a silk fiber embedded into a micro-thin epoxy (Fig. <ref>(a)). A single-wavelength chemical map measured at the specific Amide I band of 1660 cm^-1, associated with the amorphous silk components, is shown in (b), with a clearly discernible amorphous rim of the ablation crater, which, in contrast, is not recognisable at the crystalline β-sheet region of 1700 cm^-1 (inset of (b)). Typical single-point measurement spectra are shown in Fig. <ref>(c) for the Amide I region, with a clear distinction between the epoxy matrix and the amorphous and crystalline counterparts of silk. The amorphous silk of the molten quenched phase has only 200 nm thickness as observed by SEM and AFM; however, it was distinguished using illumination at the corresponding absorption bands (λ_ex ≃ 6 μm).

Thin microtomed slices placed on a high thermal conductivity substrate have been shown to enhance the speed of nano-IR imaging, since the thermalisation time scale is t_th ≃ ρ c_p h^2/η <cit.>, where h is the thickness of the film, η is the thermal conductivity, ρ is the mass density, and c_p is the specific heat capacity.

§ CONCLUSIONS AND OUTLOOK

It was shown that by using micro-thin slices, which are sub-wavelength for the IR spectral range of interest, it was possible to obtain a spectral band readout using the nano-IR method, i.e., the surface height changes due to thermal expansion following the absorbance spectrum of silk. Amorphous silk created by ultra-fast thermal quenching at the irradiation location of a fs-laser pulse was distinguished on the chemical map and imaged with a lateral resolution defined by the digitisation x× y ≡ 25× 100 nm^2, which can potentially reach the limit defined by the tip <cit.>, 20× 20 nm^2 in this study.
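As an aside to the imaging-speed remark above, the quadratic thickness dependence of t_th ≃ ρ c_p h^2/η can be illustrated numerically. The silk material constants below are assumed, order-of-magnitude values of our own, not numbers from the study:

```python
# Thermalisation time t_th ~ rho * c_p * h^2 / eta for a thin film.
# Assumed, illustrative silk parameters (not from the text):
rho = 1300.0    # kg/m^3, assumed mass density
c_p = 1400.0    # J/(kg K), assumed specific heat capacity
eta = 0.3       # W/(m K), assumed thermal conductivity

for h_um in (0.2, 1.0, 5.0):          # section thicknesses relevant to the study
    h = h_um * 1e-6                   # m
    t_th = rho * c_p * h**2 / eta     # s
    print(f"h = {h_um:4.1f} um -> t_th ~ {t_th*1e6:8.2f} us")
# A 5x thinner slice thermalises 25x faster, which is why thin sections
# on a conductive substrate speed up nano-IR mapping.
```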
Experiments were carried out via beamtime project No. 11119 at the Australian Synchrotron IRM Beamline. Window-on-Photonics R&D, Ltd. is acknowledged for joint development grant and laser fabrication facility.iopart-num10 url<#>1#1urlprefixURL Dazzi1 Dazzi A, Prater C B, Hu Q, Chase D B, Rabolt J F and Marcott C 2012 Appl. Spectrosc. 66 1365 –1384Hillenbrand Hillenbrand R, Taubner T and Keilmann F 2002 Nature 418 159 – 162Kuznetsov Kuznetsov A I, Miroshnichenko A E, Brongersma M L, Kivshar Y S and Lukyanchuk B 2016 Science 354 aag2472Kruk Kruk S, Hopkins B, Kravchenko I I, Miroshnichenko A, Neshev D N and Kivshar Y S 2016 APL Photonics 1 030801Woessner Woessner A, Alonso-Gonzalez P, Lundeberg M B, Gao Y, Barrios-Vargas J E, Navickaitė G, Ma Q, Janner D, Watanabe K, Cummings A W, Taniguchi T, Pruneri V, Roche S, Jarillo-Herrero P, Hone J, Hillenbrand R and Koppens F H 2016 Nat. Commun. 17 10783Ni Ni G X, Wang L, Goldflam M D, Wagner M, Fei Z, McLeod A S, Liu M K, Keilmann F, Özyilmaz B, Neto A H C, Hone J, Fogler M M and Basov D N 2016 Nat. Photon. 10 244 – 248Huber Huber M A, Plank M, Eisele M, Marvel R E, Sandner F, Korn T, Schüller C, Haglund R F, Huber R and Cocker T L 2016 Nano Lett. 16 1421 – 1427Zenin Zenin V A, Andryieuski A, Malureanu R, Radko I P, Volkov V S, Gramotnev D K, Lavrinenko A V and Bozhevolnyi S I 2015 Nano Lett. 15 8271 – 8276Khanikaev Khanikaev A B, Arju N, Fan Z, Purtseladze D, Lu F, Lee J, Sarriugarte P, Schnell M, Hillenbrand R, Belkin M A and Shvets G 2016 Nat. Commun. 7 12045Fei Fei Z, am M D G, Wu J S, Dai S, Wagner M, McLeod A S, Liu M K, Post K W, Zhu S, Janssen G C A M, Fogler M M and Basov D N 2015 Nano Lett. 15 8271 – 8276Katayama Yoshioka K, Katayama I, Minami Y, Kitajima M, Yoshida S, Shigekawa H and Takeda J 2016 Nat. Photon. 10 762–765Guo Guo J, Bian K, Lin Z and Jiang Y 2016 J. Chem. Phys. 145 160901Dazzi Dazzi A, Prazeres R, Glotin F and Ortega J M 2005 Opt. Lett. 30 2388 – 2390Carminati Dazzi A, Glotin F and Carminati R 2010 J. Appl. Phys. 107 124519NComm15 Cho S Y, Yun Y S, Lee S, Jang D, Park K Y, Kim J K, amd K Kang B H K, Kaplan D L and Jin H J 2015 Nat. Commun. 6 7145Yoshioka Yoshioka T, Tashiro K and Ohta N 2016 Biomacromolecules 17 1437 – 1448Hu Hu X, Kaplan D and Cebe P 2006 Macromolecules 39 6161 – 6170Yazawa16 Yazawa K, Ishida K, Masunaga H, Hikima T and Numata K 2016 Biomacromolecules 17 1057–1066 17sr7419 Ryu M, Bačytis A, Wang X, Vongsvivut J, Hikima Y, Li J, Tobin M J, Juodkazis S and Morikawa J 2017 Sci. Reports 7 741916b054101 Maximova K, Wang X W, Balčytis A, Fan L, Li J and Juodkazis S 2016 Biomicrofluidics 10 054101Pimm Vongsvivut J, Heraud P, Zhang W, Kralovec J A, McNaughton D and Barrow C J 2014 Food Bioprocess. Technol. 7 265Â – 277Cebe Cebe P, Hu X, Kaplan D L, Zhuravlev E, Wurm A, Arbeiter D and Schick C 2013 Sci. Rep. 3 1130Hotz Li C, Hotz B, Ling S, Guo J, Haas D S, Marelli B, Omenetto F, Lin S J and Kaplan D L 2016 Biomaterials 110 24 – 33Li Li G, Li Y, Cher G, He J, Han Y, Wang X and Kaplan D L 2015 Adv. Healthc. Mater. 4 1134 – 1151Tao Tao H, Kaplan D L and Omenetto F G 2012 Adv. Mater. 24 2824 – 2837Sun Sun Y L, Li Q, Sun S M, Huang J C, Zheng B Y, Chen Q D, Shao Z Z and Sun H B 2015 Nat. Commun. 6 8612Kim Kim S, Marelli B, Brenckle M A, Mitropoulos A N, Gil E S, Tsioris K, Tao H, Kaplan D L and Omenetto F G 2014 Nat. Nanotechn. 9 306 – 31015a11863 Morikawa J, Ryu M, Balčytis A, Seniutinas G, Fan L, Mizeikis V, Li J L, Wang X W, Zamengo M, Wang X and Juodkazis S 2015 RSC Adv. 
6 11863–11869
Tsuboi1 Tsuboi Y, Ikejiri T, Shiga S, Yamada K and Itaya A 2001 Appl. Phys. A 73 637–640
16le16133 Malinauskas M, Žukauskas A, Hasegawa S, Hayasaki Y, Mizeikis V, Buividas R and Juodkazis S 2016 Light: Sci. Appl. 5 e16133
Katzenmeyer Katzenmeyer A M, Aksyuk V and Centrone A 2013 Anal. Chem. 85 1972–1979
Wang2017 Wang L, Wang H, Wagner M, Yan Y, Jakob D S and Xu X G 2017 Science Advances 3 e1700255
Hikima Hikima Y, Morikawa J and Hashimoto T 2011 Macromolecules 44 3950–3957
http://arxiv.org/abs/1709.01799v1
{ "authors": [ "Meguya Ryu", "Hanae Kobayashi", "Armandas Balcytis", "Xuewen Wang", "Jitraporn Vongsvivut", "Jingliang Li", "Norio Urayama", "Vygantas Mizeikis", "Mark Tobin", "Saulius Juodkazis", "Junko Morikawa" ], "categories": [ "physics.app-ph", "cond-mat.mes-hall" ], "primary_category": "physics.app-ph", "published": "20170825074328", "title": "Nanoscale chemical mapping of laser-solubilized silk" }
§ INTRODUCTION

Scatterplots—one of the most widely used types of statistical graphics <cit.>—are commonly used to visualize two continuous variables using visual marks mapped to a two-dimensional Cartesian space, where the color, size, and shape of the marks can represent additional dimensions. However, scatterplots are so-called overlapping visualizations <cit.> in that the visual marks representing individual data points may begin to overlap each other in screen space in situations when the marks are large, when there is insufficient screen space to fit all the data at the desired resolution, or simply when several data points share the same value. The latter is particularly problematic for discrete variables with small domains, such as for nominal and ordinal data <cit.>, due to the increased incidence of shared values. This kind of overlap is known as overplotting (or overdrawing) in visualization, and is problematic because it may lead to data points being entirely hidden by other points, which in turn may lead to the viewer making incorrect assessments of the data. In particular, the issue with small-domain discrete variables has led to scatterplots being almost exclusively used for continuous variables; Figure <ref>(a) shows an example of what happens if a data dimension mapped to an axis has this discrete property.

However, realistic multidimensional datasets often contain a combination of both continuous and discrete data dimensions, and even if a dataset is entirely continuous, the physical limitations of computer screens mean that overplotting may (and most likely will) still occur even if no value overlap exists in the data (Figure <ref>). Several approaches have been proposed to address this problem <cit.>, the most prominent being transparency, clustering, and jittering. The first of these, transparency, does not so much address the problem as sidestep it by making the visual marks semi-transparent so that an accumulation of overlapping points in the same area is still visible. However, this will not scale well for large datasets, and also causes blending issues if color is used to encode additional variables. Clustering, on the other hand, attempts to organize overlapping marks into visual groups that summarize the distribution <cit.>, but increases the complexity of the scatterplot. Finally, jittering perturbs visual marks using a random displacement <cit.> so that no mark falls on the exact same screen location as any other mark (Figure <ref>(b)), but this approach is still prone to overplotting for large datasets. Jittering also introduces uncertainty in the data that is not aptly communicated by the scatterplot, since marks will no longer be placed at their true location in the Cartesian space.

In this paper, we propose the concept of gathering as an alternative to scattering and jittering, and then show how we can use this visual transformation to define a novel visualization technique called a gatherplot. Gathering is a generalization of the linear mapping used by scatterplots, and works by partitioning the graphical axis into segments based on the data dimension and then organizing points into stacked groups for each segment, which avoids overplotting.
This means that the gather operation relaxes the continuous spatial mapping traditionally used for a graphical axis; instead, each discrete segment occupies a certain amount of screen space that is all defined to map to the exact same data value. This is communicated using graphical brackets on the axis that show the value or interval for each segment (Figure <ref>(c)). The gatherplot technique, then, is merely a scatterplot where the gather transformation is used on one or both of the graphical axes. Additional data dimensions can be used to cluster together different points within a stacked group; Figure <ref> shows how the origin of cars, communicated using color, is also used to organize these marks into discrete groupings. Furthermore, if the user is trying to assess relative proportions rather than absolute numbers, the aspect ratio of the visual marks in each stacked group can be changed independently to fill the available space (Figure <ref>(d)). Because we define a common model for scatterplots, jitterplots, and gatherplots alike, our prototype implementation makes it easy to transition freely between them.

The contributions of our paper are the following: (1) the concept of the gather visual transformation as a generalization of linear visual mappings; (2) the gatherplot technique, an application of the gather operation to scatterplots to solve the overplotting problem; and (3) results from both a crowdsourced graphical perception evaluation studying the effectiveness of gatherplots compared to jitterplots and an expert review involving multiple visualization experts using the technique for in-depth data analysis.

In the remainder of this paper, we first review the literature on statistical graphics and overplotting. We then present the gather operation and use it to define gatherplots. This is followed by our crowdsourced evaluation and expert review. We close with implementation notes, conclusions, and our future plans.

§ BACKGROUND

Our goal with gatherplots is to generalize scatterplots to a representation that maintains their simplicity and familiarity while eliminating overplotting. With this in mind, below we review prior art that generalizes scatterplots for mitigating overplotting. We also discuss related visualization techniques specifically designed for nominal variables.

§.§ Characterizing Overplotting

While there are many ways to categorize visualization techniques, a particularly useful classification for our purposes is one introduced by Fekete and Plaisant <cit.>, which splits visualizations into two types:

* Overlapping visualizations: These techniques enforce no layout restrictions on visual marks, which may lead to them overlapping on the display and causing occlusion. Examples include scatterplots, node-link diagrams, and parallel coordinates.

* Space-filling visualizations: A visualization that restricts layout to fill the available space and to avoid overlap. Examples include treemaps, adjacency matrices, and choropleth maps.

Fekete and Plaisant <cit.> investigated the overplotting phenomenon for a 2D scatterplot, and found that it has a significant impact as datasets grow. The problem stems from the fact that even with two continuous variables that do not share any coordinate pairs, the size ratio between the visual marks and the display remains more or less constant. Furthermore, most datasets are not uniformly distributed.
This all means that overplotting is bound to happen for realistic datasets.

Ellis and Dix <cit.> survey the literature and derive a general approach to reduce clutter. According to their treatment, there are three ways to reduce clutter in a visualization: by changing the visual appearance, through space distortion, or by presenting the data over time. Some trivial but impractical mechanisms they list include decreasing mark size, increasing display space, or animating the data. Below we review more practical approaches based on appearance and distortion.

§.§ Appearance-based Methods

Practical appearance-based approaches to mitigate overplotting include transparency, sampling, kernel density estimation (KDE), and aggregation. Transparency changes the opacity of the visual marks, and has been shown to convey overlap for up to five occurrences <cit.>. However, there is still an upper limit for how much overlap is perceptible to the user, and the blending caused by overlapping marks of different colors makes identifying specific colors difficult. Sampling uses stochastic methods to statistically reduce the data size for visualization <cit.>. This may reduce the amount of overplotting, but since the sampling must be random, it can never reliably eliminate it.

KDE <cit.> and other binned aggregation methods <cit.> replace a cluster of marks with a single entity that has a distinct visual representation. However, these methods are difficult to apply to scatterplots because scatterplots operate on the principle of object identity, meaning that each visual mark is supposed to represent a single entity. Splatterplots <cit.> overcome this by overlaying individual marks side-by-side with the aggregated entities, using marks to show outliers and aggregated entities to show the general trends. However, even with only a few aggregated entities, the resulting color-blended image becomes visually complex and challenging to read and understand.

§.§ Distortion-based Methods

Unfortunately, appearance-based clutter reduction methods <cit.> are not well-suited for discrete variables, since such dimensions may cause many data points to map to the exact same screen location. In such a situation, changing the appearance of the marks does not help. For such data, distortion-based techniques may be better. The canonical distortion technique is jittering, where a random displacement is used to subtly modify the exact screen space position of a data point. This has the effect of spreading data points apart so that they are easier to distinguish. However, most naïve jittering mechanisms apply the displacement indiscriminately to all data points, regardless of whether they are overlapping or not. This has the drawback of distorting all points away from their true location on the visual canvas, and still does not completely eliminate overplotting.

Bezerianos et al. <cit.> use a more structured approach to displacement, where overlapping marks are organized onto the perimeter of a circle. The circle is grown to a radius where all marks fit, which means that its size is also an indication of the number of participating points. However, this mechanism still introduces uncertainty in the spatial mapping, and it is also not clear how well it scales for very dense data. Nevertheless, it is a good example of how deterministic displacement can be used to great effect for eliminating overplotting.

Trutschl et al.
<cit.> propose a deterministic displacement ("smart jittering") that adds meaning to the location of jittering based on clustering results. Similarly, Shneiderman et al. <cit.> propose a related structured displacement approach called hieraxes, which combines hierarchical browsing with two-dimensional scatterplots. In hieraxes, a two-dimensional visual space is subdivided into rectangular segments for different categories in the data, and points are then coalesced into stacked groups inside the different segments. This idea is obviously very similar to our gatherplots technique, but the main difference is that we in this paper derive gatherplots as generalizations of scatterplots, and also define mechanisms for laying out the stacked groups, organizing them by another dimension, and modifying their aspect ratio to support relative assessments.

§.§ Visualizing Nominal Variables

While we have already ascertained that scatterplots are not optimal for nominal variables, there exists a multitude of visualization techniques that are <cit.>. Simplest among them are histograms, which allow for visualizing the item count for each nominal value <cit.>, but much more complex representations are possible. One particular usage for visualizing nominal data that is of practical interest is for making inferences based on statistical and probabilistic data. Cosmides and Tooby <cit.> used frequency grids as discrete countable objects, and Micallef et al. <cit.> extend this with six different area-proportional representations of nominal data organized into different classes.

As a parallel to our work on gatherplots, one particular multidimensional visualization technique that is closely related to scatterplots is parallel coordinate plots <cit.>. However, just like scatterplots, parallel coordinate plots are often plagued by overplotting due to high data density and discrete data dimensions. The work by Kosara et al. <cit.> to extend parallel coordinate plots into parallel sets is interesting because it specifically addresses the overplotting concern by grouping points with the same value into a segment on the parallel axis. This is precisely the same idea we will apply to scatterplots in this work.

§ THE GATHER TRANSFORMATION

Position along a common scale is the most salient of all visual variables <cit.>, and so mapping a data dimension to positions on a graphical axis is a standard operation in data visualization. We call this mapping a visual transformation. However, most statistical treatments of data, such as Stevens' classical theory on the scales of measurement <cit.>, do not take the physical properties of display space into account. This is our purpose in the following section.

§.§ Problem Definition

Let V = <f, s> be a visual transformation that consists of a transformation function f and a mark size s. Furthermore, assume that f transforms a data point p_d ∈ D from a data dimension D to a coordinate on a graphical axis p_c ∈ C by f(p_d) = p_c. Given a dataset D_i ⊆ D, we say that a particular visual transformation V_j exhibits overlap if ∃ p_x, p_y ∈ D_i ∧ x ≠ y : |f_j(p_x) - f_j(p_y)| < s_j. In other words, overlap occurs for a particular dataset and visual transformation if there exists at least one case where the visual marks of two separate data points in the dataset fall within the same interval on the graphical axis. The overlap index of a dataset and visual transformation is defined as the number of unique pairs of points that overlap.
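To make the definition concrete, here is a minimal sketch of the overlap-index computation for a single graphical axis (a hypothetical implementation of ours, not code from the paper):

```python
from itertools import combinations

def overlap_index(points, f, s):
    """Count the unique pairs of data points whose marks fall within the
    mark size s of each other on the graphical axis after applying the
    visual transformation function f (one-dimensional case)."""
    coords = [f(p) for p in points]
    return sum(1 for ci, cj in combinations(coords, 2) if abs(ci - cj) < s)

# Example: a linear (scatter) transformation from a [0, 10] data domain
# onto a 500-pixel axis, with 5-pixel visual marks.
scatter = lambda p: p * 50.0
data = [0.0, 0.02, 3.0, 3.05, 9.0]
print(overlap_index(data, scatter, s=5.0))  # -> 2 overlapping pairs
```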
For a one-dimensional visualization, only a single transformation is used, and the visualization and dataset are said to exhibit overplotting iff they exhibit overlap. For a two-dimensional visualization, however, the visualization and dataset will only exhibit overplotting iff there is overlap in both visual transformations and data dimensions. Analogously, the overplotting index is the number of unique overplotting incidences for that particular visual transformation and dataset. This has two practical implications: (1) even a dataset that consists only of nominal variables may not exhibit overplotting if there is at most one instance of each nominal value, and (2) a dataset consisting of continuous values may still exhibit overplotting if any two points in the dataset are close enough that they get mapped to within the size of the visual marks on the screen. The corollary is that overplotting is a function of both the visualization technique and the dataset.

§.§ Definition: The Gather Transformation

We build on the previous idea of structured displacement <cit.> by proposing a novel visual transformation function called a gather transformation f_gather that non-linearly segments the graphical axis C and organizes data points in each segment to eliminate overplotting. The gather transformation V_gather = <f_gather, s_gather> consists of a transformation function f_gather that maps data points p_d ∈ D to coordinates p_c ∈ C, and a visual mark sizing function (instead of a scalar) s_gather that yields a visual mark size given the same data point. The gather transformation function is special in that it eliminates overplotting by subdividing the graphical axis C into n contiguous segments C = { C_1, C_2, …, C_n }, where n is the size of the domain of the gather transformation function, i.e., the number of unique elements in the data dimension D. When mapping a data point p_d to the graphical axis, f_gather will return an arbitrary graphical coordinate p_c ∈ C_i for whatever coordinate segment C_i that p_d belongs to. Practically speaking, the coordinates p_c ∈ C_i will be chosen to efficiently pack visual marks into the available display space without causing overplotting (i.e., using a regular spacing of size s_gather). Several different methods exist for adapting the gather transformation to the dataset D. One approach is to keep the segments C_1, …, C_n of equal size and find a constant visual mark size s_gather(p_d) = s_max that ensures that all points fit within the most dense segment. The constant mark size makes visual comparison straightforward. Another approach is to adapt the segment size to the density of the data while still keeping the mark size constant. This minimizes empty space in the visual transformation and allows the mark size to be maximized. A third approach is to vary the mark size proportionally to the number of points in a segment. This will make comparison of the absolute number of points in each segment difficult, but may facilitate relative comparisons if marks are distinguished in some other way (e.g., using color). For data dimensions D that have a very large number of unique values, it often makes sense to first quantize the data using a function p_q = Q(p_d) so that the number of elements n is kept manageable (on the order of 10 or less for most visualizations).
For example, a data dimension representing a person's age might heuristically be quantized into ranges of 10 years: 0-9 years, 10-19 years, 20-29 years, and so on. In a gather transformation, the coordinate axis has been partitioned into segments, where the order of the segments on the axis depends on the data. For nominal data, the segments can be reordered freely, both by the algorithm and by the user. For ordinal or quantized data, the order is given by the data relation. Furthermore, it often makes sense to be able to order the points inside each segment C_i using the gather transformation function f_gather, for example using a second data dimension (possibly visualized using color) to group related items together. Appropriate visual representations of data where the gather transformation has been applied are also important. The stacked entities of gathered points (one per coordinate segment C_i) should typically maintain object identity, so that each constituent point and its size is discernible as a discrete visual mark. Similarly, a visual representation of the segmented graphical axis should externalize the segments as labeled intervals instead of labeled major and minor ticks; this will also communicate the discontinuous nature of the axis itself to the viewer.

§.§ Using the Gather Transformation

To give an example in one-dimensional space, parallel coordinate plots <cit.> use multiple graphical axes, one per dimension D_i, and organize them in parallel while rendering data points as polylines connecting data values on one axis to adjacent ones. However, traditional parallel coordinate plots merely use a scatter transformation on each graphical axis, which makes the technique prone to overplotting. Multiple authors have studied ways of mitigating this problem, for example by reorganizing the position of nominal values <cit.>, using transparency, applying jitter, or by clustering the data <cit.>. An alternative approach is to use the gather representation for each graphical axis to minimize overplotting. This will cause each axis to be segmented into intervals, and we can then resize the segments according to the number of items falling into each segment so that segments with many data points become proportionally larger than those with fewer points. Finally, if the data dimensions represent nominal data, it may make sense to use a global segment ordering function so that there is a minimum of lateral movement for the majority of points as they connect to adjacent axes. This will also minimize line crossings between the parallel axes. This particular visualization technique (a parallel coordinate plot with the gather transformation applied to each graphical axis) is essentially equivalent to parallel sets <cit.>. In fact, by applying our generalized gather transformation to the axis, we are actually proposing a new type of stacked visualization where each entity is still represented by a line. In a sense, this technique combines parallel coordinates and parallel sets, because the grouped lines maintain the illusion of a single entity for an axis with nominal categorical values (similar to parallel sets), yet integrate directly with a parallel coordinate axis with continuous values. The main difference is that this new parallel coordinate/set variation allows each axis to be either categorical or continuous, meaning that one axis can represent a person's gender and the next their height.
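To illustrate how such a transformation can be realized, the following one-axis sketch (our own illustration under simplifying assumptions, not the prototype's API) subdivides an axis of a given width into equal segments, one per unique value, and packs the marks at a regular spacing inside their segment, corresponding to the first adaptation method described above:

// Minimal one-axis gather transformation (illustrative sketch):
// equal segments, constant spacing sized for the densest segment.
function gatherTransform(data, key, width) {
  const values = [...new Set(data.map(key))];        // n unique values -> n segments
  const segWidth = width / values.length;
  const maxCount = Math.max(...values.map(v => data.filter(d => key(d) === v).length));
  const s = segWidth / maxCount;                     // spacing that avoids overplotting
  const placed = new Map(values.map(v => [v, 0]));   // marks placed per segment so far
  return data.map(d => {
    const i = values.indexOf(key(d));                // segment index of this point
    const k = placed.get(key(d));
    placed.set(key(d), k + 1);
    return segWidth * i + s * (k + 0.5);             // regular packing inside segment i
  });
}

// Example: six items in three classes gathered onto an axis of width 300.
const items = [{c: 1}, {c: 2}, {c: 2}, {c: 3}, {c: 3}, {c: 3}];
console.log(gatherTransform(items, d => d.c, 300));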
§ GATHERPLOTS: A 2D GATHERING REPRESENTATION

Here we apply the concept of gathering to scatterplots to alleviate overplotting, focusing on optimal layouts of the gathered entities, graphical representations of chart elements, and novel interactions.

§.§ Applying Gathering to Perpendicular Axes

Applying the gather transformation results in a segmentation of the output range of an axis scale, where items with the same value are arranged to avoid overplotting. Applying gathering to two perpendicular axes defining a Cartesian space results in a gatherplot: a 2D visualization technique that aggregates entities for each axis. However, given this basic visual representation, there are many open design possibilities for aspect ratio, layout, and item shapes. We discuss these design parameters in the treatment below.

§.§ Layout Modes

Gatherplots organize entities into stacked groups according to a discrete variable to eliminate overplotting. However, the result depends on the context, especially on the size distribution of the groups, the aspect ratio of the assigned space, and the task at hand. This makes finding an optimal layout difficult. One solution is to provide interaction techniques to change the layout, but too many layout options may confuse the user. Our approach is to provide one generally optimal visualization for the most common aspect ratios and tasks, and a small number of optional methods to change it. As a result, we derive the following three layout modes (examples in Figure <ref>):

* Absolute mode. Here the stacked groups are sized to follow the aspect ratio of the assigned region. The node size of the items is determined by the largest marks that can fill the assigned region without overlapping. This means that, for the same assigned space, the group with the maximum number of members determines the overall size of the nodes.

* Relative mode. In this mode, the node size and aspect ratio are adapted so that every stacked group has equal dimensions. This special mode makes it easier to investigate ratios when the user is interested in the relative distributions of subgroups rather than the absolute number of members. Items also change their shape from a circle (absolute mode) to a rectangle.

* Streamgraph mode. Here the stacked groups are reorganized so that they maintain the same number of elements along their shorter edge. This mode is used for regions whose width and height are drastically different (in our prototype implementation, we use a heuristic aspect-ratio threshold of 3 for activating this mode). In this situation there are usually many more groups along the long axis of the region. The purpose of this mode is to make it easier to compare the sizes across these many groups. The resulting graphic resembles ThemeRiver <cit.> as the number of entities increases.

The choice between absolute mode and streamgraph mode happens automatically based on the aspect ratio and without user intervention. Therefore, only one interactive option is required: toggling between absolute mode and relative mode. Our intuition is that the absolute mode should be good enough most of the time, and when very specific tasks are required, the user can switch to the relative mode. However, gatherplots involve many more design possibilities beyond these layout modes. Below follows our treatment of these possibilities and the rationale for our decisions.

§.§.§ Area- vs. Length-Oriented Layout
Maintaining the aspect ratio of all stacked groups means that the size of a group is represented by its area. The length of a group is only used in special cases when the aspect ratio is very high or low. According to Cleveland and McGill <cit.>, length is far more effective than area for graphical perception. However, Figure <ref> shows three problems associated with a layout that enables length-based size comparison. In this view, the items are stacked along the vertical axis to make size comparison along the horizontal axis easier. The widths of the rectangles are all set equal so that length alone represents the size of the subgroups. However, the resulting shapes vary drastically between thin lines and wide rectangles, which may cause users to lose the sense that the items are equal. Furthermore, to make length-based comparison easier, the stacking should be aligned to one side of the available space: left, right, top, or bottom. In this case, the bottom is selected to make comparison along the X axis easier. However, this creates two additional problems. The first is that the centers of mass of the stacked groups differ so much that the notion of belonging to the same value can be misleading. The second is that the choice of alignment direction is arbitrary and depends on the task. For example, in this view it is more difficult to compare along the Y axis. In this sense, the layout is biased toward the X axis while sacrificing performance along the Y axis. For this reason, the most general choice is to use center alignment with an aspect ratio resembling that of the assigned region to avoid bias.

§.§.§ Uniform vs. Variable Area Allocation

In gatherplots, we assign a uniform range to the different values of overlappable variables. In some cases, assigning variable areas can make sense and create interesting visualizations. As a simple example, one can argue that making the output range of the gather transformation function proportional to the number of items with that value uses the space most efficiently. This results in the layout shown in Figure <ref>, which is essentially a mosaic plot <cit.>.

§.§.§ Role of Relative Mode

Since gathering assigns a discrete, noncontinuous range on each graphical axis, each stacked group can be grown to fill all available space. This relative mode is useful for two specific tasks:

* Getting the relative percentage of the subgroups in a group (Figure <ref>). Because groups of different sizes are normalized to the same size, any comparison of area results in a relative comparison.

* Finding the distribution of outliers. When there are many items on the screen in absolute mode, all node sizes must be reduced. This can make outliers hard to locate.

§.§ Graphical Representations

In this section, we discuss the visual representation of gatherplots and how it differs from traditional scatterplots.

§.§.§ Continuous Color Dimension

The gather transform sorts items according to a data property, such as a variable also assigned for coloring the items. This removes the scattered color patterns in the stacked groups that are common in other techniques such as Gridl <cit.>. This is particularly useful for continuous color scales, making the variation of colors easier to perceive (Figure <ref>).

§.§.§ Shape for Items

Scatterplots typically use a small circle or dot as the visual representation of an item, but many variations exist that use glyph shapes to convey multidimensional variables <cit.>.
However, in the relative mode, the aspect ratio of the nodes sometimes changes according to the aspect ratio of the box assigned to that value. Also, as gathering changes the size of the nodes to fit in one cluster, a node can become too small or too large compared to other nodes. This results in several unique design considerations for item shapes. After trying various design alternatives, we recommend using a rectangle with a constant rounded-corner radius and without stroke lines. The constant rounded corners allow a node to appear circular when it is small, as in Figure <ref>(b), and as a rectangle that shows the degree of stretching otherwise, as shown in Figure <ref>(b). Figure <ref> shows some previous trials with various shapes.

§.§.§ Design of Tick Marks

The single-line tick marks of scatterplots are not appropriate for gatherplots. Because we are representing a range rather than a single point, a range tick marker is better. Without such a visual representation, when the user is confronted with a number, it can be confusing to determine whether adjacent nodes with different offsets have the same value or not. After considering a few visual representations, we recommend a bracket-type marker for this purpose. Figure <ref> shows various types of markers for range representation. The bracket is optimal in that it uses less ink and creates less density with adjacent ticks.

§.§.§ Tick Labels for Numbers

In gatherplots, a tick mark represents a range. This creates a problem for the data label. Because the label is a number, the naïve representation places the beginning and ending numbers on either side, as shown in Figure <ref>. However, this creates a very dense region between adjacent marks and, worse, the same number is repeated in this region, thereby wasting space. We recommend using a plus-minus sign to represent the bin size, which creates conceptually consistent tick labels. One limitation of this approach is that bins of arbitrary size can produce labels with arbitrary floating-point numbers that occupy an inappropriate amount of space.

§.§.§ Extension of the Scatterplot Matrix

One subtle difficulty in working with a scatterplot matrix arises when the user wants to see only a single axis, with no dimension defined on the other axis. Alternatively, one may assign the same dimension to both the X and Y axes while exploring various dimensions. Both cases create overdrawing in a traditional scatterplot matrix. In gatherplots, however, an undefined axis results in the aggregation of all nodes into one group, which is a natural logical extension. This enables the scatterplot matrix to have an additional row and column with undefined axes. Figure <ref> shows an example of this using the cars dataset. Two dimensions, displacement and MPG, were used to create a 3-by-3 gatherplot matrix. Note that 7 out of the 9 charts are new compared to scatterplots, while adding information to the whole picture. The diagonals also enable seeing distributions, which is an improvement over the standard scatterplot matrix.

§.§.§ Applications for Continuous Variables

Gatherplots can also be used to mitigate overplotting caused by continuous variables. Figure <ref>(a) shows how gatherplots handle the overplotting caused by continuous variables. The plot uses relative mode with two random variables. The relative mode makes it easier to identify the outliers and their distribution. One limitation of gatherplots is that managing a continuous variable requires binning, yet binning creates arbitrary boundaries. In this sense, gatherplots can be misleading.
However, combining gatherplots with scatterplots makes this problem less severe.

§.§.§ Animated Transitions

The many shape and layout transitions involved in gatherplots can be confusing to users. Animation can be a powerful tool to reduce this confusion and maintain the user's mental model. Heer and Robertson investigated the effectiveness of animated transitions and found that animating a statistical chart can improve the perception of statistical data <cit.>. Robertson et al. <cit.> found that animation leads to an enjoyable and exciting experience, even if the analysis was not effective. Elmqvist et al. <cit.> used animation for scatterplots to maintain congruence. Drawing on all of this work, gatherplots use animated transitions for all state and layout changes. In addition to animating axis dimension changes, animation is used to show the transition between scatterplots and jitterplots. This ameliorates potential misconceptions of the data distribution in the gatherplots.

§.§.§ Axis Folding Interaction

For an exploration tool for real-world datasets, it is crucial to have a means to filter unwanted data. To aid this process with the gather transform, we provide an optional mechanism to go back to the original continuous linear scale function. We give each axis tick an interactive control by which it can be filtered out (minimized) or focused (maximized). We call this axis folding, because it can be explained mentally by folding paper. When minimized, or folded, the visualization space is shrunk by applying linear scales instead of nonlinear gather scales. This results in overplotting, as if a scatterplot were used for that axis. Maximization simply folds all values except the value of interest in order to assign the maximum visual space to that value. Figure <ref> shows axis folding applied to third-class adult passengers in the Titanic dataset.

§ IMPLEMENTATION

We have implemented a web-based demonstration of gatherplots and published it online.[<http://www.gatherplot.org>] Users can load various datasets and compare each visualization with scatterplots and jittered scatterplots with one button click. In the top right area, an interactive guided walk-through is provided; users can follow the instructions step by step to experience gatherplots. At the bottom, a discussion board is provided. The purpose of the discussion board is to accumulate discussions during the expert review. Other people can also join the evaluation process. The gatherplot prototype implementation was developed using D3.js[<http://www.d3js.org>] and Angular.js[<http://www.angularjs.org>]. Figure <ref> shows a screenshot of the implemented website. To test various layouts and node shapes, an intermediate version that allows various tweaks is also available.[<http://www.gatherplot.org>]

§ EVALUATION OF GATHERPLOTS USING CROWDSOURCING

The purpose of this study is to examine the effectiveness of gatherplots, especially how the different modes of gatherplots influence certain types of tasks for crowdsourced workers. Crowdsourcing platforms have been widely used and have been shown to be reliable platforms for evaluation studies <cit.>. Therefore, we conducted our experiment on Amazon Mechanical Turk.[<https://www.mturk.com>]

§.§ Experiment Design

Gatherplots were developed to overcome limitations of conventional scatterplots. We believe that gatherplots solve the issue of overdrawing while maintaining structural identity with scatterplots.
Jittered scatterplots were selected as the comparison, as they are a widely accepted standard technique that maintains consistency with scatterplots. We also wanted to measure how effective the different modes of gatherplots were. Therefore, we designed the experiment with four conditions: scatterplots with jittering (jitter), gatherplots in absolute mode (absolute), gatherplots in relative mode (relative), and gatherplots with a check button to switch between absolute and relative mode (both). We adopted a between-subjects design to eliminate learning effects from experiencing the other modes. The exact test environment is available for review.[<https://purdue.qualtrics.com/SE/?SID=SV_9YX7LCgsiwv0Voh>] Note that the questions for each condition were generated randomly.

§.§ Participants

A total of 240 participants (103 female) completed our survey. Because some questions involved the concepts of absolute numbers and probability, we limited the demographic to the United States to reduce the influence of language. Also, to ensure the quality of the workers, the qualification was an approval rate above 0.95 with more than 1,000 approved HITs. Only three of the 240 participants did not use English as their first language; 119 had a bachelor's degree or higher, and 42 had a high school degree. We filtered out random clickers whose time to complete any one question was shorter than a reasonable minimum of 5 seconds. Eventually, we had a total of 211 participants.

§.§ Task

Different layouts of gatherplots could support different types of tasks. After reviewing tasks for nominal variables, we selected retrieving a value as a low-level task and comparing and ranking as high-level tasks. For the comparing and ranking tasks, two different types of questions were asked: tasks that consider absolute values, such as frequency, and tasks that consider relative values, such as percentage. Therefore, five different questions were generated for each visualization. For gatherplots, our interest is mainly in the difference between questions considering absolute values and those considering relative values. The five types of questions are as follows:

* Type 1: retrieving a value of one subgroup

* Type 2: comparing the absolute size of a subgroup between groups

* Type 3: ranking the absolute sizes of a subgroup between groups

* Type 4: comparing the relative size of a subgroup between groups

* Type 5: ranking the relative sizes of a subgroup between groups

To reduce the chance of one chart being optimal by luck for a specific task, two charts with the same problem structure were provided. The resulting questionnaire thus contained 10 questions for each participant. Each question was followed by a question asking for the confidence of the estimation on a 7-point Likert scale, and the time spent on each question was measured.

§.§ Hypotheses

We believe that different types of tasks will benefit from different types of layouts. Our hypotheses are as follows:

H1 For retrieving a value of one subgroup (Type 1), the absolute, relative, and both modes reduce the occurrence of errors compared to the jitter mode.

H2 For tasks considering absolute values (Types 2 and 3), the absolute mode reduces errors.

H3 For tasks considering relative values (Types 4 and 5), the relative mode reduces errors.

§.§ Results

The results were analyzed with respect to accuracy (correct or incorrect), time spent, and confidence of estimation.
Based on our hypotheses, we analyzed the different modes for each type of question: retrieving a value, absolute-value tasks, and relative-value tasks.

§.§.§ Accuracy

The numbers and percentages of participants who answered correctly and incorrectly are shown in Figure <ref>. In total, we had 42 participants for jitter, 56 for absolute, 56 for relative, and 57 for the both mode. As the measure for each question was either correct or incorrect, a logistic regression was employed using PROC LOGISTIC in SAS. For the retrieving-value task (Type 1), both the absolute view and the relative view had significant main effects (Wald Chi-Square = 18.58, p < 0.01 and Wald Chi-Square = 21.05, p < 0.01, respectively) with a significant interaction effect (Wald Chi-Square = 19.53, p = 0.03) (H1 confirmed). For absolute-value tasks (Types 2 and 3), both the absolute view and the relative view had significant main effects (Wald Chi-Square = 10.35, p < 0.01 and Wald Chi-Square = 10.35, p < 0.01, respectively) with a significant interaction effect (Wald Chi-Square = 4.31, p = 0.03) (H2 confirmed). For relative-value tasks (Types 4 and 5), only the relative view had a significant effect (Wald Chi-Square = 5.10, p = 0.02) (H3 confirmed).

§.§.§ Time Spent

The time spent (in seconds) on each question was compared using a mixed-model ANOVA with repeated measures. For the retrieving-value task, the average time spent was 44.26 s for jitter, 56.84 s for absolute, 52.45 s for relative, and 56.57 s for both. There was no significant difference between the interfaces (p > 0.05 for all cases). For the absolute-value tasks (Types 2 and 3), the average time spent was 30.74 s for jitter, 32.3 s for absolute, 33.6 s for relative, and 47.91 s for both. The interface had a significant main effect (F(3, 207) = 11.5, p < 0.01). However, when we conducted pairwise comparisons with p values adjusted using simulation, the only significant difference in time spent was for the both interface, which took longer (p < 0.01 for all comparisons). For the relative-value tasks (Types 4 and 5), the average time spent was 26.6 s for jitter, 31.12 s for absolute, 31.38 s for relative, and 46.78 s for both. The interface had a significant main effect (F(3, 207) = 10.12, p < 0.01). Again, in pairwise comparisons with p values adjusted using simulation, the only significant difference in time spent was for the both interface, which took longer (p < 0.01 for all comparisons).

§.§.§ Confidence

A 7-point Likert-scale rating was used for the participants' confidence in their estimations. For the value-retrieving task (Type 1), a Kruskal-Wallis non-parametric test revealed that the type of interface had a significant impact on the confidence level (χ^2(3) = 74.57, p < 0.01). The mean ratings were 4.8 for jitter, 6.3 for absolute, 6.0 for relative, and 6.25 for both. A post-hoc pairwise Wilcoxon rank sum test was employed with Bonferroni correction to adjust errors. The jitter interface was rated significantly lower than the other three modes (p < 0.01 for all cases). There was no difference between the absolute, relative, and both interfaces. For absolute-value tasks (Types 2 and 3), a Kruskal-Wallis non-parametric test revealed that the type of interface had a significant impact on the confidence level (χ^2(3) = 18.32, p < 0.01). The mean ratings were 5.4 for jitter, 5.7 for absolute, 5.0 for relative, and 5.8 for both.
A post-hoc pairwise Wilcoxon rank sum test was employed with Bonferroni correction to adjust errors. The both mode was rated significantly higher than the relative and jitter modes (p < 0.01 for both comparisons), but showed no difference from the absolute mode. The absolute mode was rated significantly higher than the relative and jitter modes (p < 0.01). For relative-value tasks (Types 4 and 5), a Kruskal-Wallis non-parametric test revealed that the type of interface did not have a significant impact (χ^2(3) = 4.1, p = 0.2). The mean ratings were 4.7 for jitter, 4.9 for absolute, 4.9 for relative, and 4.8 for both. One possible explanation for this result is that the relative tasks might be harder than the others; the low percentage of correct answers is also visible in Figure <ref>. To investigate this, we tested the confidence level across task types. A Kruskal-Wallis non-parametric test revealed that the type of task had a significant impact on the confidence level (χ^2(2) = 148.1, p < 0.01). The mean ratings were 5.9 for retrieving a value, 5.5 for absolute tasks, and 4.8 for relative tasks. A post-hoc pairwise Wilcoxon rank sum test with Bonferroni correction showed that all three task types differed significantly (p < 0.01 for all cases).

§ DISCUSSION WITH EXPERT FEEDBACK

While the performance of executing low-level unit tasks can explain the functional part of a new visualization, there are also more qualitative aspects, such as aesthetics or playfulness. Moreover, complex interaction techniques and features make it practically difficult to design tests with statistical validation. For this reason, a complementary evaluation of a visualization technique with experts can be used <cit.>. To facilitate structured evaluations, a set of tasks and questions was given. The experts were asked to follow the tasks and write their feedback on the discussion board. They could see other people's comments. After each session, we organized the feedback by moving comments to existing threads or creating new topics. In addition, the authors asked the online data visualization and visual analytics community for opinions. The intention was to get an initial response regarding adoption. Because these responses are voluntary and free from conflicts of interest, they can be valuable for gauging the general adoption of a new visualization technique. Because the feedback deals with advanced features and design choices, this section is combined with a discussion of the issues raised by the expert reviews.

§.§ Expert Reviews

We obtained opinions from two graduate students and one professor whose major field is information visualization. The two sessions with the graduate students were conducted in a lab environment, where one of the authors was available to answer quick questions or provide feedback if necessary. In general, however, they were asked to follow the online guidelines and use the discussion board to leave feedback. They took about 70 minutes to finish the reviews.
The professor was on his own while reviewing. The original responses were archived and are available on the demonstration website at <http://www.gatherplot.org>. The responses were positive in general, especially about the aesthetics and the layouts. However, many in-depth issues were discovered. The most frequently mentioned problem was the difficulty of comparing the absolute numbers of subgroups between groups of different sizes. In particular, the comparison between a large-percentage subgroup in a small group and a small-percentage subgroup in a large group is difficult. For example, estimating from Figure <ref>(a) and (b) whether more second-class female or third-class male passengers survived is difficult. The fundamental reason this is difficult in gatherplots is that area is less effective than length for perception <cit.>. This task is well supported by the layout shown in Figure <ref>, which was dropped during the design process. However, the experts also provided other plausible suggestions for handling this better, such as tooltips that show the number of items in a group. One interesting solution was to use the size of the smaller group as a mask for an anchor box that would be overlaid on the larger groups, so that size estimation becomes easier. Another interesting suggestion was changing to a bar chart or pie chart. For example, when there are only a few items in the groups, the large size of the items can make estimation in relative mode inaccurate. During the design process, the authors implemented this transformation, in which the rectangular shapes become thinner until they become lines. However, support for this mode was later dropped, because it loses the sense of individual entities. One reviewer suggested a subtle transition where the bin size changes incrementally in small steps to help maintain the sense of object constancy. Relative mode was deemed useful for understanding Bayesian inference problems, although one reviewer mentioned the difficulty of finding the right setup with only a small number of options. Once the correct setting was applied, it helped the understanding of a counterintuitive result. The stretched-out rectangles were also helpful in reminding the user that the relative view was applied.

§.§ Community Feedback

Over three days, the post received comments from seven experts of the data visualization and visual analytics community. All of them were positive. Interesting remarks include the following: one expert commented that he had been looking for a tool to create this kind of plot for a long time; another requested an open-source library for gatherplots and the ability to test their own datasets. These comments imply that there may be a general demand for the technique. Finally, one expert remarked that this could serve as a general tool, which entails meeting standard requirements.

§ CONCLUSION AND FUTURE WORK

We have proposed the concept of the gather transformation, which enables space-filling layouts without overdrawing while maintaining object constancy. We then applied this transformation to scatterplots, resulting in gatherplots, a generalization of scatterplots that enables overview without clutter. While gatherplots are optimal for categorical variables, they can also be used to ameliorate overplotting caused by continuous ordinal variables. We discussed several aspects of gatherplots, including layout, coloring, tick format, and matrix formations.
We also evaluated the technique with a crowdsourced user study showing that gatherplots are more effective than jittering, and that the absolute and relative modes serve specific types of tasks better. Finally, in-depth feedback from an expert review involving visualization researchers revealed several limitations of the gatherplots technique. We addressed these weaknesses and suggested possible remedies. We believe that gathering is a general framework for formulating the transition from overlapping visualizations to space-filling visualizations without losing the sense of individual objects. In the future, we plan to study the application of this framework to other visual representations to explore novel visualizations. For example, parallel sets can be reconstructed to render individual lines instead of block lines, which would enable combining both categorical and continuous variables. Gathering also enables mixing nominal and ordinal variables on a single axis. This can be pursued further, for example in a gathering lens that gathers the underlying objects according to a data property. If we apply this lens to a selected boundary in a crowded region of a scatterplot, the underlying distribution of that region can be revealed.
http://arxiv.org/abs/1708.08033v1
{ "authors": [ "Deokgun Park", "Sung-Hee Kim", "Niklas Elmqvist" ], "categories": [ "cs.HC" ], "primary_category": "cs.HC", "published": "20170827014529", "title": "Gatherplots: Generalized Scatterplots for Nominal Data" }
Nonconforming Finite Element Discretisation for Semilinear Problems with Trilinear Nonlinearity

Carsten Carstensen[Department of Mathematics, Humboldt-Universität zu Berlin, 10099 Berlin, Germany. Distinguished Visiting Professor, Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai-400076. Email [email protected]] · Gouranga Mallik [Department of Mathematics, Indian Institute of Science, Bangalore 560012 India. Email [email protected]] · Neela Nataraj [Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India. Email [email protected]]

The Morley finite element method (FEM) is attractive for semilinear problems with the biharmonic operator as a leading term in the stream function vorticity formulation of the 2D Navier-Stokes problem and in the von Kármán equations. This paper establishes a best-approximation a priori error analysis and an a posteriori error analysis of discrete solutions close to an arbitrary regular solution on the continuous level to semilinear problems with a trilinear nonlinearity. The analysis avoids any smallness assumptions on the data and so has to provide discrete stability by a perturbation analysis before the Newton-Kantorovich theorem can provide the existence of discrete solutions. An abstract framework for the stability analysis in terms of discrete operators from the medius analysis leads to new results on the nonconforming Crouzeix-Raviart FEM for second-order linear non-selfadjoint and indefinite elliptic problems with L^∞ coefficients. The paper identifies six parameters and sufficient conditions for the local a priori and a posteriori error control of conforming and nonconforming discretisations of a class of semilinear elliptic problems, first in an abstract framework and then in the two semilinear applications. This leads to new best-approximation error estimates and to a posteriori error estimates in terms of explicit residual-based error control for the conforming and Morley FEM.

Keywords: nonconforming, Morley finite element, elliptic, semilinear, stream function vorticity formulation, 2D Navier-Stokes equations, von Kármán equations, a posteriori, second-order linear non-selfadjoint and indefinite elliptic, Crouzeix-Raviart

§ INTRODUCTION

§.§ Motivation

The nonconforming finite element methods (FEMs) have recently been rehabilitated by the medius analysis, which combines arguments from traditional a priori and a posteriori error analysis <cit.>. In particular, nonconforming finite element schemes can be equivalent <cit.> or superior to conforming finite element schemes <cit.>. The conforming FEMs for fourth-order problems require C^1 conformity and lead to cumbersome implementations, while the nonconforming Morley FEM is as simple as quadratic Lagrange finite elements; the reader may consider the finite element program in <cit.> with less than 30 lines of Matlab for a proof of its simplicity.
The second-best scheme for easy implementation of fourth-order problems is the C^0 interior penalty method (C0IP) <cit.>, with the benefit of higher-order variants and the disadvantage of a (critical) stability parameter choice. Optimal convergence rates are known for the adaptive Morley FEM <cit.> in fourth-order problems, but are open for C0IP; cf. <cit.> for the state of the art in second-order applications. Hence the advantage of higher-order schemes is not guaranteed for C0IP, which leaves the Morley FEM as the method of choice. This relevance of the nonconforming Morley FEM for fourth-order problems is not reflected in the contributions in the literature on the attractive application to semilinear problems with the linear biharmonic operator as the leading term (plus quadratic lower-order contributions). There are important model applications of this problem in the stream-function formulation of the incompressible 2D Navier-Stokes equations <cit.> and in the von Kármán equations <cit.> for nonlinear plates in solid mechanics. This paper enriches the general theory of semilinear problems with trilinear lower-order terms from conforming FEMs <cit.> to nonconforming FEMs with the medius analysis. This overcomes the smallness assumption (on the load f) in <cit.> and adds a posteriori error control beyond <cit.> for a dG discretisation. The Morley FEM allows for additional benefits and leads, for instance, to guaranteed lower eigenvalue bounds <cit.>.

§.§ Discrete Stability

This paper considers the local approximation of a general regular solution u to N(u)=0 without any extra conditions. The invertible Fréchet derivative DN(u) of the nonlinear function N:X→ Y^* at a regular solution u is by definition a linear bijection between the Banach spaces X and Y^*; this is equivalent to an inf-sup condition on the associated bilinear form DN(u;∙,∙)=a+b: X× Y→ℝ (split into two contributions a and b in Section 2). For a nonconforming finite element discretisation with some finite element space X_h× Y_h ⊄ X× Y, in the absence of further conditions, the inf-sup condition for a+b: X× Y→ℝ does not imply an inf-sup condition for the discrete bilinear form a_h+b_h:X_h× Y_h→ℝ. Section 2 studies two general bilinear forms â and b̂ defined on a superspace X̂×Ŷ of X_h× Y_h and X× Y, and introduces four parameters in (H1)-(H4) with a sufficient condition for an inf-sup condition to hold for a_h+b_h: X_h× Y_h→ℝ; this enables a Petrov-Galerkin scheme and is the first contribution of this paper. There will be three applications of this abstract framework in this paper. The first of them is on former results in <cit.> on a nonconforming Crouzeix-Raviart FEM for well-posed second-order linear non-selfadjoint and indefinite elliptic problems: Since the framework applies tools from the medius analysis, there are no smoothness assumptions, and the feasibility and best-approximation property for sufficiently small mesh-sizes is newly established in this paper for the Crouzeix-Raviart FEM with L^∞ coefficients (compared to piecewise Lipschitz continuous coefficients in <cit.>).

§.§ Fourth-order semilinear problems

The second and third applications of the discrete stability framework of Section 2 concern semilinear problems with a trilinear nonlinearity: the stream function formulation of the incompressible 2D Navier-Stokes problem <cit.> in Section 4 and the von Kármán equations <cit.> in Section 5 with conforming and Morley FEM. The abstract stability result (a) overcomes the high regularity assumption u ∈ H^2_0(Ω) ∩ H^3(Ω) and (b) is not restricted to small data as in <cit.>.
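To fix ideas, the prototypical structure behind the two applications may be sketched as follows (an illustration for orientation only; the concrete bilinear and trilinear forms are specified in Sections 4 and 5). With a bounded trilinear form Γ, a source term F, and a bilinear form a stemming from the leading (biharmonic) part, the semilinear problem and its linearisation at u read

N(u;v) := a(u,v) + Γ(u,u,v) − F(v) = 0 for all v∈ Y, DN(u;w,v) = a(w,v) + Γ(u,w,v) + Γ(w,u,v) =: (a+b)(w,v),

so that b(w,v) := Γ(u,w,v) + Γ(w,u,v) collects the u-dependent lower-order contributions in the split DN(u;∙,∙)=a+b. One Newton step for the discrete problem N_h(u_h)=0 in (B) below then solves the linear system DN_h(u_h^(k);w,∙) = −N_h(u_h^(k);∙) in Y_h^* and updates u_h^(k+1) := u_h^(k) + w.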
§.§ Overview of further results

The main abstract results are stated in (A)-(D) below. Here and throughout the paper, it is assumed that the mesh is sufficiently fine to approximate well the solutions to the linear problems associated with the leading elliptic differential operator. Throughout this subsection, let N:V→ V^* be a differentiable function in a Hilbert space V with dual V^* and with one fixed regular solution u to N(u)=0. The Hilbert space is a Sobolev space H_0^m(Ω) associated to some polyhedral bounded Lipschitz domain Ω⊂ℝ^n that is partitioned by arbitrarily fine shape-regular triangulations into simplices. The latter form a family 𝕋 and, given any δ>0, let 𝕋(δ) denote the (nonempty) subset of all triangulations of maximal mesh-size smaller than or equal to δ. For each 𝒯∈𝕋, suppose there is a conforming or nonconforming finite element space V_h(𝒯) and a differentiable function N_h:V_h(𝒯)→ V^*_h(𝒯) with additional conditions; in particular, there is a norm ‖∙‖_V on V+V_h(𝒯) that extends the norm in V. (Notice the simplified notation N_h≡ N_h(𝒯).) This paper discusses conditions in (H1)-(H6) sufficient for the subsequent consequences.

(A). There exist ϵ,δ>0 such that, for all 𝒯∈𝕋(δ), there exists a unique discrete solution u_h∈V_h(𝒯) to N_h(u_h)=0 with ‖u-u_h‖_V≤ϵ.

(B). There exist ϵ,δ,ρ>0 such that (A) holds and, for all 𝒯∈𝕋(δ) and for any initial iterate u_h^(0)∈V_h(𝒯) with ‖u_h-u_h^(0)‖_V≤ρ, the Newton scheme converges quadratically to u_h.

(C). There exist ϵ,δ,C_qo>0 such that (A) holds and, for all 𝒯∈𝕋(δ),

‖u-u_h‖_V ≤ C_qo ( min_{v_h∈V_h(𝒯)} ‖u-v_h‖_V + apx(𝒯) )

with some approximation term apx(𝒯) to be specified in the particular application.

A local reliable and efficient a posteriori error control holds even for inexact solve (owing to a termination in an iterative solver) in the sense of

(D). There exist ϵ,δ,C_rel,C_eff>0 such that any approximation v_h∈ V_h(𝒯) with ‖u-v_h‖_V≤ϵ and 𝒯∈𝕋(δ) satisfies

C_rel^{-1} ‖u-v_h‖_V ≤ ‖N(v_h)‖_{V^*} + min_{v∈ V} ‖v_h-v‖_V ≤ C_eff ‖u-v_h‖_V.

It is part of the abstract results in Sections 2 and 3 to identify the reliability and efficiency constants in the above displayed estimate and to prove that the positive constants ϵ, δ, ρ, C_qo, C_rel, and C_eff are mesh-independent. The abstract error control in (D) is the point of departure in the applications to the stream function formulation of the incompressible 2D Navier-Stokes problem <cit.> in Section 4 and the von Kármán equations <cit.> in Section 5. This paper establishes the first reliable estimate of ‖N(v_h)‖_{V^*} and min_{v∈V} ‖v_h-v‖_V in terms of an explicit residual-based error estimator for the conforming and Morley FEM and discusses its efficiency.

§.§ Outlook

This presentation is restricted to quadratic problems, in which the weak formulation involves a trilinear form, for a simple outline that covers two important semilinear fourth-order problems. The generalisation to more general and stronger nonlinearities, however, requires appropriate growth conditions in various norms and involves a more technical framework.
The presentation matches exactly the nonconforming applications (Crouzeix-Raviart and Morley finite elements); other schemes like the discontinuous Galerkin schemes <cit.>, with their discrete norms and various jump conditions, could be included with additional technicalities. The ideas developed in this paper extend to other semilinear problems, to optimal control and obstacle problems <cit.> (governed by fourth-order plates and very thin plates), and to fully nonlinear Monge-Ampère equations based on the vanishing moment method <cit.>. Moreover, the lowest-order version of skeletal or polytopal, hybridizable discontinuous Galerkin and higher-order hybrid methods <cit.> is a perturbation of the nonconforming Crouzeix-Raviart finite element method for the Poisson problem. It is therefore expected that the Morley FEM is related to the lowest-order variant of skeletal schemes for PDEs governed by fourth-order elliptic equations <cit.>. In this way this paper stimulates the development of the a priori and a posteriori error analysis of those schemes.

§.§ General notation

Standard notation on Lebesgue and Sobolev spaces applies throughout the paper; ‖∙‖ abbreviates ‖∙‖_{L^2(Ω)} with the L^2 scalar product (∙,∙)_{L^2(Ω)}, while the duality brackets <∙,∙>_{V^*× V} are reserved for a dual pairing in V^*× V; ‖∙‖_∞ abbreviates the norm in L^∞(Ω); H^m(Ω) denotes the Sobolev space of order m with norm ‖∙‖_{H^m(Ω)}; H^{-1}(Ω) (resp. H^{-2}(Ω)) is the dual space of H^1_0(Ω):={v∈ H^1(Ω): v|_∂Ω=0} (resp. H^2_0(Ω):={v∈ H^2(Ω): v|_∂Ω=∂ v/∂ν|_∂Ω=0}). With a regular triangulation 𝒯 of the polygonal Lipschitz domain Ω⊂ℝ^n into simplices, associate its piecewise constant mesh-size h_𝒯∈ P_0(𝒯) with h_𝒯|_T:=h_T:=diam(T) ≈ |T|^{1/n} for all T∈𝒯 and its maximal mesh-size h_max:=max h_𝒯. Here and throughout, P_k(𝒯):={v∈ L^2(Ω): ∀ T∈𝒯, v|_T∈ P_k(T)} denotes the piecewise polynomials of degree at most k∈ℕ_0, and Π_k denotes the L^2(Ω) (resp. L^2(Ω;ℝ^n) or L^2(Ω;ℝ^{n× n})) orthogonal projection onto P_k(𝒯) (resp. P_k(𝒯;ℝ^m) or P_k(𝒯;ℝ^{m× m})). Oscillations of degree k read osc_k(∙,𝒯):=‖h_𝒯^p(I-Π_k)∙‖_{L^2(Ω)} with the square osc_k^2(∙,𝒯):= osc_k(∙,𝒯)^2, for p=1 for second-order problems in Section 2 and p=2 for fourth-order problems in Sections 4 and 5. The notation A≲ B means there exists a generic h_𝒯-independent constant C such that A≤ CB; A≈ B abbreviates A≲ B≲ A. In the sequel, C_rel and C_eff denote generic reliability and efficiency constants. The set of all n× n real symmetric matrices is 𝕊:=ℝ^{n× n}_sym.

§ WELL-POSEDNESS OF THE DISCRETE PROBLEM

This section presents sufficient conditions for the stability of nonconforming discretizations of a well-posed linear problem. Subsection 2.1 introduces four parameters (H1)-(H4) and a condition on them sufficient for a discrete inf-sup condition for the sum a+b of two bilinear forms a, b: X× Y→ℝ extended to superspaces X̂⊃ X+X_h and Ŷ⊃ Y+Y_h. Subsection 2.2 discusses a first application to second-order non-selfadjoint and indefinite elliptic problems <cit.>.

§.§ Abstract discrete inf-sup condition

Let X̂ (resp. Ŷ) be a real Banach space with norm ‖∙‖_X̂ (resp. ‖∙‖_Ŷ) and suppose X and X_h (resp. Y and Y_h) are two complete linear subspaces of X̂ (resp. Ŷ) with inherited norms ‖∙‖_X:=(‖∙‖_X̂)|_X and ‖∙‖_{X_h}:=(‖∙‖_X̂)|_{X_h} (resp. ‖∙‖_Y:=(‖∙‖_Ŷ)|_Y and ‖∙‖_{Y_h}:=(‖∙‖_Ŷ)|_{Y_h}). Let â,b̂:X̂×Ŷ→ℝ be bounded bilinear forms and abbreviate a:=â|_{X× Y}, a_h:=â|_{X_h× Y_h} and b:=b̂|_{X× Y}, b_h:=b̂|_{X_h× Y_h}. Let the bilinear forms a and b be associated to the linear operators A and B∈ L(X;Y^*), e.g., Ax:=a(x,∙)∈ Y^* for all x∈ X.
Suppose that the linear operator A∈ L(X; Y^*) (resp. A+B∈ L(X;Y^*)) associated to the bilinear form a (resp. a+b) is invertible and

0<α̂ := inf_{x̂∈X̂, ‖x̂‖_X̂=1} sup_{ŷ∈Ŷ, ‖ŷ‖_Ŷ=1} â(x̂,ŷ);

0<β := inf_{x∈ X, ‖x‖_X=1} sup_{y∈ Y, ‖y‖_Y=1} (a+b)(x,y).

Suppose that three linear operators P∈ L(Ŷ;Y_h), Q∈ L(X_h; X), and Q̂∈ L(Y_h;Y) exist and lead to parameters δ_1,δ_2,δ_3,Λ_4≥ 0 in

(H1) δ_1:=sup_{x_h∈ X_h, ‖x_h‖_{X_h}=1} sup_{y_h∈ Y_h, ‖y_h‖_{Y_h}=1} â(A^{-1}(b̂(x_h,∙)|_Y), y_h-Q̂y_h);

(H2) δ_2:=sup_{x_h∈ X_h, ‖x_h‖_{X_h}=1} sup_{y∈Ŷ, ‖y‖_Ŷ=1} â(x_h + A^{-1}(b̂(x_h,∙)|_Y), y-Py);

(H3) δ_3:=sup_{x_h∈ X_h, ‖x_h‖_{X_h}=1} ‖b̂(x_h,(1-Q̂)∙)‖_{Y_h^*};

(H4) ∃Λ_4<∞ ∀ x_h∈ X_h: ‖(1-Q)x_h‖_X̂ ≤ Λ_4 dist_{‖∙‖_X̂}(x_h,X).

Abbreviate the bound ‖b̂‖_{X̂× Y} of the bilinear form b̂|_{X̂× Y} simply by ‖b‖, and set ‖a‖:=‖A‖_{L(X;Y^*)} as well as ‖A^{-1}‖:=‖A^{-1}‖_{L(Y^*;X)}, whenever there is no risk of confusion (e.g. with the L^2 norm ‖∙‖ of a Lebesgue function). If (H4) holds with 0≤Λ_4<∞, set

β̂ := β / ( Λ_4β + ‖a‖(1+Λ_4(1+‖A^{-1}‖‖b‖)) ) > 0.

In the applications discussed in this paper, δ_1+δ_2+δ_3 from (H1)-(H3) will be smaller than α̂β̂, so that the subsequent result provides a discrete inf-sup condition with β_h>0.

Under the aforementioned notation, (<ref>)-(<ref>) and (H1)-(H4) imply

α̂β̂-(δ_1+δ_2+δ_3) ≤ β_h := inf_{x_h∈ X_h, ‖x_h‖_{X_h}=1} sup_{y_h∈ Y_h, ‖y_h‖_{Y_h}=1} (a_h+b_h)(x_h,y_h).

Given any x_h∈ X_h with ‖x_h‖_{X_h}=1, define x:=Qx_h, ξ:=A^{-1}(b̂(x_h,∙)|_Y)∈ X, and η:=A^{-1}(b(x,∙)|_Y)∈ X. The inf-sup condition (<ref>) and Aη=Bx lead to

β‖x‖_X ≤ ‖Ax+Bx‖_{Y^*} = ‖A(x+η)‖_{Y^*} ≤ ‖a‖‖x+η‖_X.

This and triangle inequalities imply

(β/‖a‖)‖x‖_X ≤ ‖x+η‖_X ≤ ‖x-x_h‖_X̂ + ‖x_h+ξ‖_X̂ + ‖η-ξ‖_X.

The definition of ξ and η, the boundedness of the operator A^{-1} and of the bilinear form b̂|_{X̂× Y} show

‖ξ-η‖_X = ‖A^{-1}(b̂(x-x_h,∙)|_Y)‖_X ≤ ‖A^{-1}‖‖b‖‖x-x_h‖_X̂.

The combination of (<ref>)-(<ref>) reads

(β/‖a‖)‖x‖_X ≤ ‖x_h+ξ‖_X̂ + (1+‖A^{-1}‖‖b‖)‖x-x_h‖_X̂.

Since (H4) implies ‖x-x_h‖_X̂ ≤ Λ_4‖x_h+ξ‖_X̂, the estimate (<ref>) results in

‖x‖_X ≤ (‖a‖/β)(1+Λ_4(1+‖A^{-1}‖‖b‖))‖x_h+ξ‖_X̂.

The triangle inequality and (<ref>)-(<ref>) lead to

1=‖x_h‖_{X_h} ≤ ‖x-x_h‖_X̂ + ‖x‖_X ≤ (Λ_4 + (‖a‖/β)(1+Λ_4(1+‖A^{-1}‖‖b‖)))‖x_h+ξ‖_X̂.

With the definition of β̂ in (<ref>), this reads β̂ ≤ ‖x_h+ξ‖_X̂. For the given x_h+ξ∈X̂ and for any 0<ϵ<α̂, the inf-sup condition (<ref>) implies the existence of some y∈Ŷ with ‖y‖_Ŷ=1 and

(α̂-ϵ)‖x_h+ξ‖_X̂ ≤ â(x_h+ξ,y) = â(x_h+ξ,y-Py) + â(x_h+ξ,Py).

Since â(ξ, y_h)=b̂(x_h, y_h) for y_h:=Py, the latter term is equal to

â(x_h+ξ,y_h) = a_h(x_h,y_h)+b_h(x_h,y_h)+â(ξ,y_h-Q̂y_h)+b̂(x_h, Q̂y_h-y_h).

Let γ_h:=a_h(x_h,y_h)+b_h(x_h,y_h); then (H1)-(H3) and (<ref>) lead to

â(x_h+ξ,y) ≤ γ_h+δ_1+δ_2+δ_3.

The combination of (<ref>)-(<ref>) and ϵ↘0 in the end result in

α̂β̂-(δ_1+δ_2+δ_3) ≤ γ_h ≤ ‖a_h(x_h,∙)+b_h(x_h,∙)‖_{Y_h^*}.

The last estimate holds for an arbitrary x_h with ‖x_h‖_{X_h}=1 and so proves the discrete inf-sup condition α̂β̂-(δ_1+δ_2+δ_3)≤β_h.

It is well known that a positive β_h>0 in (<ref>) implies the best-approximation property of the Petrov-Galerkin scheme <cit.> in the following sense. Suppose (X̂,â) is a Hilbert space and u∈ X, u_h∈ X_h, and F̂∈Ŷ^* satisfy (a+b)(u,∙)=F:=F̂|_Y∈ Y^* and (a_h+b_h)(u_h,∙)=F_h:=F̂|_{Y_h}∈ Y_h^*. Then

β_h ‖u-u_h‖_X̂ ≤ M min_{x_h∈ X_h} ‖u-x_h‖_X̂ + sup_{y_h∈ Y_h, ‖y_h‖_{Y_h}=1} (F_h(y_h)-(â+b̂)(u,y_h))

with the bound M:=‖â+b̂‖_{X̂× Y_h} ≤ ‖â‖+‖b̂‖ of the bilinear form (â+b̂)|_{X̂× Y_h}. The proof of the quasi-optimal convergence for a stable discretisation is nowadays standard in all finite element textbooks in the context of the Strang-Fix lemmas.
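As a simple illustration of the parameters (a sketch for orientation; the rigorous conforming case is worked out in Subsection 2.2 below): for a conforming discretisation with X̂=X, Ŷ=Y and X_h⊂ X, Y_h⊂ Y, one may choose Q and Q̂ as the identity and P as a projection onto Y_h. Then (1-Q)x_h=0 and (1-Q̂)y_h=0, so that Λ_4=0=δ_1=δ_3 and β̂=β/‖a‖, and the theorem reduces to β_h ≥ α̂β/‖a‖ - δ_2; i.e., only the consistency parameter δ_2 of the projection P remains to be controlled.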
§.§ Second-order linear non-selfadjoint and indefinite elliptic problems

This subsection applies (H1)-(H4) to second-order linear non-selfadjoint and indefinite elliptic problems and establishes a priori estimates for conforming and nonconforming FEMs under more general conditions on the smoothness of the coefficients of the elliptic operator and for Ω⊂ℝ^n vis-à-vis <cit.>.

§.§.§ Mathematical model

The strong form of a second-order problem with L^∞ coefficients A, 𝐛, and γ reads: Given f∈ L^2(Ω), seek u∈ V:=H^1_0(Ω) such that

ℒu := -∇·(A∇ u + u𝐛) + γ u = f.

The coefficients A∈ L^∞(Ω;𝕊), 𝐛∈ L^∞(Ω;ℝ^n), γ∈ L^∞(Ω) satisfy 0<λ≤λ_1(A(x))≤⋯≤λ_n(A(x))≤λ̄<∞ for the eigenvalues λ_j(A(x)) of the SPD matrix A(x) for a.e. x∈Ω. For u,v∈ V, the expression

a(u,v):=∫_Ω(A∇ u)·∇ v

defines a scalar product on V (and V is endowed with this scalar product in the sequel) equivalent to the standard scalar product in the sense that the H^1-seminorm |∙|_{H^1(Ω)}:=‖∇∙‖ in V satisfies λ^{1/2}|∙|_{H^1(Ω)} ≤ ‖∙‖_a := a(∙,∙)^{1/2} ≤ λ̄^{1/2}|∙|_{H^1(Ω)}. Given the bilinear form b:V× V→ℝ with

b(u,v) := ∫_Ω(u𝐛·∇ v+γ uv) for all u,v ∈ V

and the linear form F∈ L^2(Ω)^*⊂ H^{-1}(Ω)=:V^* with F(v):=∫_Ω fv for all v∈ V, the weak formulation of (<ref>) seeks the solution u∈ V to

(a+b)(u,v):=a(u,v)+b(u,v)=F(v) for all v∈ V.

In the absence of further conditions on the smoothness of the coefficients, any higher regularity of the weak solution u∈ H^1_0(Ω) of (<ref>) in the form u∈ H^s(Ω) for any s>1 is not guaranteed, even for f∈ C^∞(Ω) <cit.>.

§.§.§ Triangulations

Throughout this paper, 𝕋 is a set of shape-regular triangulations of the polyhedral bounded Lipschitz domain Ω⊂ℝ^n into simplices. Given an initial triangulation 𝒯_0 of Ω, the newest-vertex bisection defines a local mesh-refinement that leads to the set of shape-regular triangulations 𝒯∈𝕋. Shape-regularity means that there exists a universal constant κ>0 such that the maximal diameter diam(B) of a ball B⊂ K satisfies κ h_K ≤ diam(B) ≤ diam(K)=:h_K for any K∈𝒯∈𝕋. Given 𝒯∈𝕋, let h_𝒯∈ P_0(𝒯) be piecewise constant with h_𝒯|_K=h_K=diam(K) for K∈𝒯 and let h_max:=h_max(𝒯):=max h_𝒯; recall 𝕋(δ):={𝒯∈𝕋 : h_max(𝒯)≤δ} for any δ>0. The set of all sides of the shape-regular triangulation 𝒯 of Ω into simplices is denoted by ℰ. The sets of all internal vertices (resp. boundary vertices) and interior sides (resp. boundary sides) of 𝒯 are denoted by 𝒱(Ω) (resp. 𝒱(∂Ω)) and ℰ(Ω) (resp. ℰ(∂Ω)).

§.§.§ Conforming FEM

Let P_1(𝒯) denote the piecewise affine functions in L^∞(Ω) with respect to the triangulation 𝒯, so that the associated P_1 conforming finite element spaces without and with (homogeneous) boundary conditions read

S^1(𝒯):=P_1(𝒯)∩ C(Ω̄) and S^1_0(𝒯):={v_C∈ S^1(𝒯): v_C=0 on ∂Ω}.

The interior nodes 𝒱(Ω) label the nodal basis functions φ_z with patch ω_z:={φ_z>0}=int(supp φ_z) around z∈𝒱(Ω). Given some finite-dimensional finite element space V_h with S_0^1(𝒯)⊆ V_h⊂ V≡ H^1_0(Ω), the discrete formulation of (<ref>) seeks u_h∈ V_h with

a(u_h,v_h)+b(u_h,v_h)=F(v_h) for all v_h∈ V_h.

The arguments of <cit.> are rephrased in the following lemma (proven in the appendix), which allows the application of Theorem <ref> in the subsequent theorem.

For any ϵ>0 there exists some δ>0 such that the solution z∈ V≡ H^1_0(Ω) to a(z,∙)=g for g∈ L^2(Ω)⊂ H^{-1}(Ω) satisfies, for all 𝒯∈𝕋(δ),

min_{z_C∈ S^1_0(𝒯)} ‖z-z_C‖_a + min_{Q_0∈ P_0(𝒯;ℝ^n)} ‖A∇ z - Q_0‖ ≤ ϵ‖g‖.

Adopt the aforementioned assumptions on a and b in (<ref>)-(<ref>) and suppose that (<ref>) is well-posed in the sense that it allows for a unique solution u for all right-hand sides f∈ L^2(Ω).
Then

0<β := inf_{x∈ V, ‖x‖_a=1} sup_{y∈ V, ‖y‖_a=1} (a+b)(x,y),

and for any positive β_0<β there exists δ>0 such that

β_0 ≤ β_h := inf_{x_h∈ V_h, ‖x_h‖_a=1} sup_{y_h∈ V_h, ‖y_h‖_a=1} (a+b)(x_h,y_h)

holds for all S_0^1(𝒯)⊂ V_h:=X_h=Y_h⊂ V with respect to 𝒯∈𝕋(δ). Moreover, the solution u to (<ref>) and u_h to (<ref>) satisfy

‖u-u_h‖_a ≤ (‖a+b‖/β_0) min_{v_h∈ V_h} ‖u-v_h‖_a.

The invertibility of a linear operator from one Banach space into the dual of another is equivalent to an inf-sup condition <cit.>; in particular, the well-posedness assumption of the theorem implies β>0. The remaining assertions follow from Theorem <ref> with â=a, b̂=b, and S_0^1(𝒯)⊂ V_h=X_h=Y_h ⊂ X̂=Ŷ=X=Y=V=H^1_0(Ω), endowed with the norm ‖∙‖_a. Then α=α̂=1=‖a‖ and β is the constant in (<ref>). To conclude the discrete inf-sup condition, it is sufficient to verify that the parameters involved in (H1)-(H4) can be chosen such that the discrete inf-sup constant in Theorem <ref> is positive. Moreover, the discrete inf-sup constants of a+b are equal to those of the dual problem a+b^* with b^*(u,v):=b(v,u). Therefore, Theorem <ref> is applied to a and b^* (rather than to a and b). Let Q and Q̂ be the identity, while P∈ L(V;V_h) denotes the Galerkin projection onto V_h with respect to a, i.e., a(v-Pv,∙)=0 in V_h for all v∈ V. Then the parameters in (H1), (H3), and (H4) are δ_1=δ_3=Λ_4=0. The choice of the parameter δ_2 in (H2) concerns v∈ V and u_h∈ V_h with ‖v‖_a=1=‖u_h‖_a and the solution z:=A^{-1}(b^*(u_h,∙))∈ V to a(z,∙)=b(∙,u_h). Notice that g:=𝐛·∇ u_h+γ u_h∈ L^2(Ω) satisfies b(φ,u_h)=∫_Ω(𝐛·∇ u_h+γ u_h)φ=∫_Ω gφ for all φ∈ V and (with the Friedrichs constant C_F for ‖∙‖≤λ^{-1/2}C_F‖∙‖_a in V)

‖g‖ ≤ ‖A^{-1/2}𝐛‖_∞‖u_h‖_a + ‖γ‖_∞‖u_h‖ ≤ (‖𝐛‖_∞+C_F‖γ‖_∞)λ^{-1/2} =: C.

The Galerkin orthogonality with P, the definition of z, and a Cauchy inequality with ‖v‖_a=1 in the end show

a(u_h+A^{-1}(b^*(u_h,∙)),v-Pv) = a(z,v-Pv) = a(z-Pz,v) ≤ ‖z-Pz‖_a.

Given any ϵ>0, Lemma <ref> leads to δ>0 such that for all 𝒯∈𝕋(δ) there exists some z_C∈ S_0^1(𝒯) with

‖z-Pz‖_a ≤ ‖z-z_C‖_a ≤ ϵ‖g‖ ≤ ϵC,

with (<ref>) in the last step. The combination of the previous inequalities proves (H2) with δ_2:=ϵC. Theorem <ref> applies with β̂=β and β_h ≥ β-ϵC. This proves the assertion β_h≥β_0 for sufficiently small ϵ and δ. The quasi-optimal convergence (<ref>) follows from Corollary <ref> without the second term in the conforming discretisation.

The proof requires only that the discrete space V_h satisfies S_0^1(𝒯)⊂ V_h⊂ H^1_0(Ω) and so allows for conforming hp finite element spaces. The condition 𝒯∈𝕋(δ) allows for local mesh-refining as long as max h_𝒯 is sufficiently small.

§.§.§ Nonconforming FEM

This subsection establishes the first best-approximation-type a priori error estimate for the lowest-order nonconforming FEM in any space dimension n≥2 under the assumptions on the coefficients of Subsubsection <ref>, as an application of Theorem <ref>. This generalises <cit.> from piecewise Lipschitz continuous to L^∞ coefficients. The nonconforming Crouzeix-Raviart (CR) finite element spaces read

CR^1(𝒯) := {v ∈ P_1(𝒯): ∀ E ∈ℰ, v is continuous at mid(E)},

CR^1_0(𝒯) := {v ∈ CR^1(𝒯): v(mid(E))=0 for all E ∈ℰ(∂Ω)}.

Here mid(E) denotes the midpoint of a simplex E, obtained by taking the arithmetic mean of all its vertices.
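For orientation, a standard explicit formula (a known fact recalled here for illustration, not specific to this paper): in 2D, the side-oriented basis function ψ_E of the CR space associated with an edge E of a triangle T∈𝒯 reads ψ_E|_T = 1-2λ_z, where λ_z denotes the barycentric coordinate of the vertex z of T opposite E. Indeed, λ_z vanishes along E, so ψ_E(mid(E))=1, while λ_z=1/2 at the midpoints of the two other edges, so ψ_E vanishes there; this is the duality ψ_E(mid(F))=δ_EF used below.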
The CR finite element spaces give rise to thebilinear forms a_pw,b_pw:CR_0^1()× CR_0^1()→ defined, for all u_CR,v_CR∈ CR_0^1(), bya_pw(u_CR,v_CR):=∑_T∈∫_T(𝐀∇ u_CR)·∇ v_CR,b_pw(u_CR,v_CR):=∑_T∈∫_T(u_CR𝐛·∇ v_CR+γ u_CRv_CR).The nonconforming FEM seeks the discrete solution u_CR∈ CR_0^1() toa_pw(u_CR,v_CR)+b_pw(u_CR,v_CR)=F(v_CR) v_CR∈ CR_0^1().Notice that ∇_pw∙ with the piecewise action∇_pw of the gradient ∇ is a norm on CR_0^1() and so is∙_pw:= A^1/2∇_pw∙. The subsequent theorem implies the unique solvability and boundedness of discrete solutions for sufficiently fine meshes.Suppose thatis a bijection and so ^-1 is bounded and(<ref>)holds with β=^-1>0. Thenthere exist positive δ and β_0such that any ∈(δ) satisfiesβ_0≤β_h:=inf_w_ CR∈ CR_0^1()w_ CR_ pw=1sup_ v_ CR∈ CR_0^1()v_ CR_ pw=1(a_ pw+b_ pw)(w_ CR,v_ CR). Let H^1():={v∈ L^2(Ω) | ∀ T∈,v|_T∈ H^1(T)} and endow the vector spaceV:=X:=Y:={v∈ H^1() | ∀ E∈, ∫_E[v]_E=0}⊃ V+CR_0^1()with the norm ∙_pw:= A^1/2∇_pw∙.Here and throughout the paper, the jump of v∈Vacross any interior faceE=∂ K_+∩∂ K_-∈(Ω) shared bytwo simplices K_+ and K_-reads[v]_E:= v|_K_+-v|_K_-onE=∂ K_+∩∂ K_-(thenω_E:=int( K_+∪ K_-)), while [v]_E:=v|_E along any boundary faceE∈(∂Ω) according to the homogeneous boundary condition on∂Ω (and thenω_E:=int(K) for K∈ with E∈(K)).The boundedness of a+b follows from a piecewise Friedrichs inequalityv̂≤ C_pwF( ∑ _E∈|ω_E|^-1 |∫_E [v̂]ds |^2 + ∇_pwv̂^2 )^1/2known for all v̂∈V with the volume |ω_E| of the side-patch |ω_E|≈ h_E^n. Forv̂∈V and E∈,the integral∫_E [v̂]ds=0 vanishes; hence the piecewise Friedrichsinequality reduces to v̂≤ C_pwF∇_pwv̂. This enables a proof that (V,a) is a Hilbert space thatb is abounded bilinear form with respect to those norms. Consequently, α=1=a and (<ref>)holds with some β=^-1>0.Define the nonconforminginterpolation operatorI_CR∈ L(V;CR_0^1()) by I_CRv:= ∑_E ∈ℰ(_E v ds) ψ_Efor all v ∈Vwith the side-oriented basis functions ψ_E of CR_0^1() with ψ_E( mid(F))=δ_EF, the Kronecker symbol, for all sides E,F ∈ℰ.For any v_CR∈ CR_0^1(), the conforming companion operator Q:=:= J ∈ L(CR_0^1();V) withJv_CR∈ P_4()∩ C^0(Ω̅) from <cit.> satisfies (a) that w:=v_CR-Jv_CR⊥ P_1() is L^2 orthogonal to the space P_1() of piecewise first-order polynomials, (b)the integral mean property of the gradientΠ_0(∇_pw(v_CR-J v_CR))=0,and(c) the approximation and stability property(with a universal constantΛ_CR) h_^-1(v_CR-J v_CR) +∇_pw(v_CR-J v_CR) ≤Λ_CRmin_v∈ H^1_0(Ω)∇_pw ( v_CR-v) .(The proofs in <cit.> are in 2D, but can be generalised to any dimension). Note that J is a right inverse to I_CR in the sense that I_CRJv_CR=v_CR holds for all v_CR∈ CR_0^1(). The inequality (<ref>) implies (H4) with Λ_4=(λ/λ)^1/2Λ_CR.The bilinear forms a≡ a_pw, b≡ b_pw:V×V→ read, for all u,v∈V, as a(u,v):=∑_T∈∫_T(𝐀∇u)·∇vandb(u,v):=∑_T∈∫_T(u𝐛·∇v +γuv).As in the stability proof of the conforming FEM, Theorem <ref> applies to a andb^* (rather than to a andb).The proof of (H1) concernsu_CR,v_CR∈ CR_0^1() withu_CR_pw=1= v_CR _pw andthe solutionz=A^-1(b(∙, u_CR) |_V)∈ V toa(z,∙)= b(∙, u_CR) in V. 
The right-hand side is the L^2 scalar product of the test function in V with g:= b·∇_pw u_CR+γ u_CR∈ bounded withthe discrete Friedrichs inequality ∙≤ C_dF∇_pw∙ in CR_0^1() <cit.> byg≤ ( b_∞ +C_dFγ_∞)λ^-1/2=: C_0.Since ∇_pw w⊥P_0(;^n)in L^2(Ω;^n) for w:= v_CR-Jv_CR, Lemma <ref> applies for any ϵ>0 and leads to δ>0 so that, for ∈(δ) with the L^2 projection Π_0, a(A^-1(b^*(u_CR,∙)|_V),v_CR-J v_CR)= a_pw(z,w)=∫_Ω ((1-Π_0) A∇ z )·∇_pww ≤ (1-Π_0) A∇ z∇_pww ≤ϵg Λ_CRλ^-1/2≤ C_1ϵ =:δ_1with (<ref>) for v=0 and ∇_pw v_CR≤λ^-1/2 v_CR_pw =λ^-1/2in the end for C_1:=C_0Λ_CRλ^-1/2. Thisconcludes the proof of(H1). The proof of (H2) concernsu_CR∈ CR_0^1(), v∈V with u_CR_pw=1=v_pw, and the solutionz∈ V to a(z,∙)= b(∙, u_CR) in V as before. The operator P:V→ CR^1_0(), however, is not I_CR becausethe oscillating coefficients A prevent the immediate cancellation property for a(u_CR,v-Pv )=0. The latteris a consequence ofthe best-approximation P in the Hilbert space V onto its linear and closed subspaceCR_0^1(); so let Pv∈ CR_0^1() be the unique minimiser in v -Pv_pw=min_v_CR∈ CR_0^1()v - v_CR_pw≤v_pw =1.Lemma <ref> applies for any ϵ>0 and leads to δ>0 so that, foreach ∈(δ), there exists some z_C∈ S^1_0()⊂ CR_0^1()with z-z_C_pw≤ϵ C_0. This, a(u_CR+z_C,v-Pv )=0,andv- P v_pw≤ 1 in the end provide(H2) (with b^* replacing b):a(u_CR+ A^-1(b^*(u_CR,∙)|_V),v-Pv) =∫_Ω( A∇_pw(z-z_C)·∇_pw(v-Pv)≤ z-z_C_pwv- P v_pw≤C_0 ϵ=:δ_2The proof of (H3) concerns u_CR,v_CR∈ CR_0^1() with u_CR_pw=1= v_CR_pw and w:= v_CR- Jv_CR.This and(<ref>) (with v=0)proveb^*(u_CR,v_CR- Jv_CR) =∫_Ω( b·∇_pwu_CR+γ u_CR) w= ∫_Ω g w≤h_maxgΛ_CR∇_pw v_CR≤ C_0Λ_CRλ^-1/2δ.Without loss of generality, assume δ≤ϵ. Then (H3) follows withδ_3:= C_3ϵ for C_3:=C_0Λ_CRλ^-1/2.(It is remarkable thatin the last inequalities,the extra property w:=v_CR-Jv_CR⊥ P_1()leads to the bound λ^-1/2Λ_CRosc_1(g,),butthat can easily be exploited solely for piecewise smooth or at least piecewise continuous b and γ). Since(H1)-(H4) hold for a andb^*,Theorem <ref> proves β_h≥β - (C_1+C_2+C_3)ϵwithpositive β<β defined in (<ref>). Any positiveϵ<β/(C_1+C_2+C_3) concludes the proof; in fact, anyconstant β_0 with 0<β_0< β can be realised in (<ref>) by small δ>0. The following best-approximation-type error estimate generalises a result in<cit.>. Let u∈ H^1_0(Ω) solve (<ref>) and set p:= A∇ u+u b∈ H( div,Ω). There exists δ>0 such that for all ∈(δ), the discrete problem (<ref>) has a unique solution u_ CR∈ CR_0^1() and u, u_ CR,p and its piecewise integral mean Π_0 p satisfyu-u_ CR_ pw ≲ u-I_ CRu_ pw+p-Π_0p+ osc_1(f-γ u,).Givene_CR:=I_CRu-u_CR, the discrete inf-sup condition of Theorem <ref> implies the existence of v_CR∈ CR_0^1() with v_CR_pw≤ 1/β_0 ande_CR_pw = a_pw(e_CR,v_CR)+b_pw(e_CR,v_CR).Recall from (a)-(b) in the proof of Theorem <ref> thatv:= Jv_CR satisfies I_CRv=v_CR and Π_1v=Π_1v_CR.Since a(u,v)=-b(u,v)+F(v) and u_CR solves (<ref>),w:=v-v_CRsatisfiesa_pw(e_CR,v_CR) =a_pw(u,v_CR)-a_pw(u_CR,v_CR)=F(w) -a_pw(u,w)-b(u,v)+b_pw(u_CR,v_CR).This leads in (<ref>) to e_CR = F(w) -a_pw(u,w)-b_pw(u,w) -b_pw(u-I_CRu, v_CR)=∫_Ω(f-γ u) w - ∫_Ω p·∇_pww -b_pw(u-I_CRu, v_CR).Since ∇_pww P_0(;^n) in L^2(Ω;^n) and w P_1(;^n), an upper bound for the first two terms on the right-hand side is(I-Π_1)(f-γ u)w - (p-Π_0 p)·∇_pw w≤(p-Π_0 p+ osc_1(f-γ u,)) λ^-1/2 w_pw≤Λ_CRλ^-1/2β_0^-1(p-Π_0 p+ osc_1(f-γ u,)).This and a triangle inequality conclude the proof. 
§.§.§ A modified CR-FEM for general right-hand sides The nonconforming scheme of the previous subsection allows for a right-hand side in L^2(Ω),while conforming variants directly apply to f ∈ H^-1(Ω). This subsection briefly discusses the modification forf ∈ H^-1(Ω) and a surprisinganalog to Theorem <ref> withoutoscillation terms. Givenf ∈ H^-1(Ω),the modified CR-FEMseeksu_CR∈ CR_0^1() such that a_pw(u_CR,v_CR)+b_pw(u_CR,v_CR) =<f,Jv_CR>_H^-1(Ω) × H^1_0(Ω)for allv_CR∈ CR_0^1()with the duality brackets <∙ ,∙ >_H^-1(Ω) × H^1_0(Ω) on the right-hand side of (<ref>) acting on f ∈ H^-1(Ω) and the test function Jv_CR∈ H^1_0(Ω). The bilinear form b(∙,∙) is replaced in (<ref>) by a modification b_pw(∙,∙) defined, for u_CR,v_CR∈ CR_0^1(), by b_pw(u_CR,v_CR):=∑_T∈∫_T(u_CR𝐛·∇_pwv_CR+γ u_CRJv_CR).The difference tob_pw (u_CR,v_CR) from(<ref>) is in the finalapplication of Jv_CR rather than v_CR with the conforming companion operator J from the proof of Theorem <ref>. Let u∈ H^1_0(Ω) solve (<ref>) with the right-hand side f≡ f∈ H^-1(Ω) and set p:= A∇ u+u b∈ H( div,Ω). There exists δ>0 such that for all ∈(δ), the discrete problem (<ref>) has a unique solution u_CR∈ CR_0^1() and u,u_ CR,p and its piecewise integral mean Π_0 p satisfyu-u_ CR_ pw≲ u-I_ CRu_ pw+p-Π_0p. The stability of the modified bilinear form a_pw(∙, ∙) +b_pw(∙, ∙) follows from the methodology of this section. An immediateproof follows from the stability (<ref>) and a perturbation argument: For any v_CR, w_CR∈ CR_0^1() with v_CR_pw=1=w_CR_pw, | b_pw(v_CR, w_CR ) - b_pw(v_CR,w_CR) |≤γ_∞ v_CRw_CR - J w_CR≤γ_∞ C_pwFΛ_CRh_maxwith a piecewiseFriedrichs inequality and(<ref>) in the last step.The combination with (<ref>) and a triangle inequalityprove stability of the modified schemeβ_0/2≤β_h:=inf_w_CR∈ CR_0^1()w_CR_pw=1sup_ v_CR∈ CR_0^1()v_CR_pw=1(a_pw+b_pw)(w_CR,v_CR)for any ∈𝕋( β_0/(2γ_∞ C_pwFΛ_CR)).The proof of the a priori error estimate follows the arguments of the proof of Theorem <ref>.Givene_CR:=I_CRu-u_CR, the stability of the modified scheme leads to some v_CR∈ CR_0^1() with norm v_CR_pw≤ 2/β_0 and e_CR_pw = a_pw( I_CRu- u_CR ,v_CR)+b_pw(I_CRu- u_CR,v_CR).Letv:=Jv_CR and w:= v-v_CR. Then (<ref>)and(<ref>)implye_CR_pw=a_pw( I_CRu-u ,v_CR)-(p, ∇_pw w)_L^2(Ω)- (u- I_CRu , γ v + b·∇_pw v_CR)_L^2(Ω) .The point is that all terms with v disappear and no oscillation terms remain. In fact, all other terms are controlled by u- I_CRu_pw or by p-Π_0 p as in the proof ofTheorem <ref>; further details are omitted.§ A CLASS OFSEMILINEAR PROBLEMS WITHTRILINEAR NONLINEARITY This section is devoted to an abstract framework for an a priori and a posteriori analysis to solve a class of semilinear problems that includes the applications in Section 4 and 5. §.§ A priori error controlSupposeX and Y are real Banach spaces and let thequadratic function N:X→ Y^* be of the formN(x):= x+Γ(x,x,∙)with a leading linear operator A∈ L(X;Y^*) andF∈ Y^* for the affine operator x:=Ax-F for all x∈ X and a bounded trilinear form Γ: X× X× Y→.To approximate a regular u solution to N(u)=0, the discrete version involves some discrete spaces X_h and Y_h plusa discrete functionF_h∈ Y_h^*,_hx_h:=A_hx_h-F_h, and abounded trilinear form Γ_h:X_h× X_h× Y_h→ with N_h(x_h)=_h x_h+Γ_h(x_h,x_h,∙). The discrete problem seeksu_h∈ X_h such that_hu_h+Γ_h(u_h,u_h,∙)=0 inY_h^*.The a priori error analysis is based on the Newton-Kantorovich theorem andadaptsthe abstract discrete inf-sup results of Subsection <ref>. Some further straightforward notation is required for this. 
Suppose that there exists some invertible bounded linear operatoroperator A (i.e. Av=a(v,∙) in Y for all v∈X) on extended Banach spaces X and Y andsuppose thatthere exists a boundedextension Γ:X×X×Y→withΓ:=Γ_X×X×Y:= sup_x∈X x_X=1sup_ξ∈X ξ_X=1sup_y∈Y y_Y=1Γ(x,ξ,y)<∞of Γ=Γ|_X× X× Y withΓ_h=Γ|_X_h× X_h× Y_h. Given the regular solution u∈ X to N(u)=0 in (<ref>), let thebilinear form b:X×Y→ be the linearisation ofΓ at the solution u , i.e.,b(∙,∙):= Γ(u,∙, ∙)+Γ(∙,u,∙),and be bounded by b:=b_X×Y≤2 u_XΓ.Adopt the notation(<ref>) for the bilinear forms a,a_h,b, and b_h as respective restrictions of a and b and supposeF∈Y^* exists with F:=F|_Y andF_h:=F|_Y_h.Recallthat the bounded linear operatorA is invertible and the so the associatedbilinear form a is bounded and satisfies (<ref>)with some positive α.Recallthat u is a regular solution to N(u)=0 in the sense thatN(u)=0 and DN(u)∈ L(X;Y^*) with DN(u)=(a+b)(∙,∙) satisfies the inf-sup condition (<ref>). Suppose all the aforementioned bilinear forms satisfy (H1)-(H4) with some operatorsP∈ L(Y;Y_h), Q∈ L(X_h; X), and ∈ L(Y_h;Y). In addition to(H1)-(H4)supposethat δ_5,δ_6≥ 0 satisfy(H5)δ_5:=(F-Au)(1-)∙_Y_h^*; (H6)∃ x_h∈ X_h such that δ_6:=u-x_h_X. The non-negativeparameters δ_1,δ_2,δ_3,δ_5,δ_6and α,β, b all dependon the fixed regular solution u to N(u)=0 and this dependence is suppressed in the notation for simplicity. Under the present assumptions and with the additionalsmallness assumption 4 δΓ < β_0 (in the notation of (<ref>)-(<ref>)) the properties(A)-(B) hold for thefixed discretisation at handin the following sense. Suppose that Γ>0 forotherwise N is a linear equation with a unique solution and the results of Section 2 apply.Given a regular solutionu∈ X to N(u)=0, assume the existence ofextendedbilinear forms a and b with (<ref>)-(<ref>) and α>0 (resp. β>0 in(<ref>) and β>0 in(<ref>)).Suppose that(H1)-(H6) hold with parameters δ_1,…,δ_6≥ 0 andthatx_h∈ X_hsatisfies(H6). Suppose thatβ_0 :=αβ-(δ_1+δ_2+δ_3+2Γδ_6) >0andδ :=β_0^-1(δ_5+ aδ_6+δ_6 (x_h_X_h+u_X)Γ +δ_3/2 )≥ 0satisfy 4 δΓ < β_0. Thenϵ:=δ_6 +δ + r_- with m:=2 Γ/β_0 >0, h:= δ m≥ 0,r_-:= (1-√(1-2h ))/m- δ≥ 0 , andρ:=(1+√(1-2h))/ m>0 satisfy(i) there exists a solution u_h∈ X_h to N_h(u_h)=0with u-u_h _X≤ϵ and (ii)given any v_h∈ X_hwith v_h-u_h_X_h≤ρ, the Newton scheme with initial iteratev_h converges R-quadraticallyto the discrete solutionu_h in (i). If even 4 ϵΓ≤β_0, then (iii)there is at most one solution u_h∈ X_h to N_h(u_h)=0with u-u_h _X≤ϵ. The proof is based on the Newton-Kantorovich convergence theorem found, e.g., in <cit.> for X= Y=^n and in <cit.> for Banach spaces. The notation is adopted to the setting ofTheorem <ref>.Assume the Frechét-derivative DN_h(x_h) of N_h at some x_h∈ X_h satisfiesD N_h(x_h)^-1_L( Y_h^*; X_h)≤ 1/β_0 andD N_h(x_h)^-1N_h(x_h)_X_h≤δ.Suppose that D N_h is Lipschitz continuous with Lipschitz constant 2 Γand that4 δΓ≤β_0. Then there exists a unique root u_h∈ B(x_1,r_-) toN_h in the ball around the first iterate x_1 := x_h - D N_h(x_h)^-1N_h(x_h)and this is the only root in B(x_h,ρ) with r_-, ρ from (<ref>). If even 4 δΓ < β_0, thenthe Newton scheme with initial iterate x_h leads to a sequence in B(x_h,ρ) thatconverges R-quadratically to u_h. Proof of Theorem <ref>. Suppose that δ≥ 0 and Γ>0sothatr_-≥ 0 in (<ref>)well defined. 
The bounded trilinear formΓ_h=Γ|_ X_h×X_h×Y_hleads to the Frechét-derivative DN_h( x_h)∈ L(X_h;Y_h^*) with DN_h( x_h;ξ_h,η_h)=a_h(ξ_h,η_h)+Γ_h(x_h, ξ_h,η_h) +Γ_h( ξ_h,x_h,η_h)for allx_h, ξ_h∈ X_h, η_h∈ Y_h.The definitions of a and b and their extensions and discrete versions with (H1)-(H4) allowinTheorem <ref> for a positive inf-sup constant β_1:=αβ-(δ_1+δ_2+δ_3)in (<ref>) for the bilinear formDN(u)|_X_h× Y_h= a_h+Γ(u,∙,∙)+Γ(∙,u,∙)= a_h+b_hfor the extended nonlinear form N(x)=A(x)-F+ Γ(x,x,∙) ∈Y^*for x∈X and its derivativeDN(u) at u.This discrete inf-sup condition (<ref>) anda triangle inequality with x_h from (H6)lead to an inf-sup constant0< β_0:=β_1 - 2Γδ_6 ≤β_h:=inf_ξ_h∈ X_h ξ_X_h=1sup_η_h∈ Y_h η_h_Y_h=1DN_h(x_h;ξ_h,η_h)for the bilinear formDN_h(x_h;∙,∙)=a_h+Γ_h(x_h,∙,∙) +Γ_h(∙,x_h,∙).The discrete inf-sup constant is a singular value and equal to the norm of the inverse operator;1/β_0 is an upper bound of the operator norm of thediscrete inverse. This provesthe first estimate of (<ref>).It also provesin the second estimate of (<ref>) thatDN_h(x_h)^-1N_h(x_h)_X_h≤β_0^-1N_h(x_h)_Y_h^*and it remains to estimate N_h(x_h) in the norm of Y_h^*. Given anyy_h∈ Y_h with y_h_Y_h=1 and y:= y_h∈ Y,an exact Taylor expansion withN(u;y)=0 shows N_h(x_h;y_h)= N_h(x_h;y_h)-N(u;y)= F(y-y_h) +a_h( x_h,y_h)-a(u,y) +Γ_h( x_h, x_h,y_h)-Γ(u,u,y)=F(y-y_h) -a( u,y-y_h) +a(x_h-u,y_h)+Γ_h( x_h, x_h,y_h) -Γ(u,u,y).Inabbreviated duality brackets, thefirst two terms in (<ref>) are equal toF(y-y_h) -a( u,y-y_h) =⟨F-Au,(-I)y_h⟩≤δ_5with (H5). The definition of δ_6 in (H6) provesa(x_h-u,y_h)≤aδ_6.Up to the factor 2, the last two terms in (<ref>) are equal to2Γ_h( x_h, x_h,y_h)-2Γ(u,u,y) =Γ( x_h-u, x_h,y_h)+Γ( x_h,x_h-u,y_h)+Γ(u,x_h-u,y)+Γ(x_h-u,u,y)-b(x_h,y-y_h).≤2δ_6(x_h_X_h+ u_X)Γ +δ_3.The combination of the preceding three displayed estimates with(<ref>) implies β_0^-1N_h(x_h)_Y_h^*≤δwith δ≥ 0 from(<ref>). Thecombination of (<ref>) and(<ref>) shows the second inequalityin (<ref>). The smallness assumptionreads h<1/2 and is statedexplicitly in the theorem; hencethe Newton-Kantorovich Theorem <ref> applies. Let us interrupt the proof for a brief discussion of the extreme but possiblecaseδ=0 with the implicationsδ_6=δ_5=δ_3=0 andx_h=u in (H6). The proof of (<ref>) remains validin this case and thenN_h(x_h)=0 guaranteesthat u=x_h is the discrete solution u_h. In this very particular situation, the Newton scheme convergesand leads to the constant sequence x_h=x_1=x_2=... with thelimit x_h=u_h.Theorem <ref> applies with r_-=0=ϵ and provides(i)-(iii).Therefore, throughout the remainder of this proof suppose that δ>0 and soρ,ϵ, r_- >0 inTheorem <ref> show the existence of a discrete solutionu_h to N_h(u_h)=0 in B(x_1,r_-)and this is the only discrete solution in B(x_h,ρ). This and triangle inequalities lead tou-u_h_X≤u-x_h_X+ x_1-x_h_X_h +x_1-u_h_X_h≤δ_6+δ+r_-= ϵforthe Newton correction x_1-x_h is estimated in the second inequality of(<ref>). This proves the existence of a discrete solutionu_h inX_h∩B(u,ϵ)as asserted in (i). Theorem <ref> implies (ii) and it remains to prove the uniqueness of discrete solutions inB(u,ϵ) under the additional assumption that 4 ϵΓ≤β_0, i.e.,2mϵ≤ 1. Recallthat the limit u_h∈B(x_1,r_-)in (i)-(ii) is the only discrete solution in B(x_h,ρ). Suppose there exists a second solutionu_h∈ X_h∩B(u,ϵ) toN_h(u_h)=0. The uniqueness in B(x_h,ρ) and a triangle inequality imply that ρ< x_h- u_h_X≤u- u_h_X +u- x_h_X≤ϵ+δ_6≤ 2ϵ≤ 1/mwith the smallness assumption on ϵ in the end. 
But this leads to a contradiction withthedefinition of ρ in(<ref>) and soconcludes the proof of (iii).In the applications, if h_max is chosen sufficiently small, the parameters δ_1,δ_2,δ_3, δ_5, andδ_6 are also small.In particular, δ from (<ref>) is small and so is ϵ. This ensures 4 δΓ≤4 ϵΓ< β_0 so thatTheorem <ref> applies.The convergence speed in the Newton-Kantorovich theorem is knownto be h=δ m and this parameter is uniformly smaller than one in the applications. Hence the number of iterations in the Newton scheme does not increaseas the mesh-size decreases.§.§ Best-approximationThis subsection discusses the best-approximation result (C) for regular solutions of semilinear problems with trilinear nonlinearity under the assumption (H1)-(H6) with parameters δ_1,…,δ_6 and α (resp. β) from (<ref>)(resp.(<ref>)).The extra termN̂ (u)_ Y_h^* in the best-approximation result inTheorem <ref>will be discussed afterwards and leads tosome best-and data-approximation term.If u is a regular solution to N(u)=0 and δ andϵ:=δ_6 +δ + r_- from(<ref>)-(<ref>)satisfy 2mϵ≤ 1, then there exists C_ qo>0 such thattheunique discrete solution u_h∈ X_h∩B(u,ϵ )satisfies the best-approximation property u-u_h_X≤ C_ qo(min_v_h∈ X_hu-v_h_X +N(u)_Y_h^*). Given the best-approximation u_h^* to u in X_h with respect to the norm inX, set e_h:=u_h^*-u_h∈ X_h and apply the discrete inf-supcondition(<ref>) to the bilinear form DN(u)|_X_h× Y_h withthe constantβ_1:=αβ-(δ_1+δ_2+δ_3) from the proof of Theorem <ref>.This leads toy_h∈ Y_h with y_h_Y_h≤ 1/β_1 and e_h_X_h= DN(u;e_h,y_h).Since the quadratic Taylor expression of N at u for N_h(u_h;y_h)=0 is exact, e:=u-u_h∈X satisfies0=N(u;y_h)-DN(u;e,y_h) - D^2N(u;e,e,y_h).The sum of (<ref>) and (<ref>),D^2N(u;e,e,y_h)=2 Γ(e,e,y_h), and y_h_Y_h≤ 1/β_1 proveβ_1 e_h_X_h≤N(u)_Y_h^* +DN(u)u-u_h^*_X +Γe_X^2.This,a triangle inequality, and min_x_h∈ X_hu-x_h_X= u-u_h^* _X show(β_1-Γe_X)e_X≤N(u)_Y_h^*+(β_1+ DN(u))min_x_h∈ X_hu-x_h_X.Recall 4ϵΓ≤β_0≤β_1 ande_X≤ϵ from Theorem <ref>, so that 3β_1/4≤β_1-Γe_X leadsin (<ref>) toC_qo=3/4 max{1/ β_1,1 + DN(u)/β_1 } andapx():=N(u)_Y_h^*in the asserted best-approximation.This concludes the proof. Two examples for the term apx():=N(u)_Y_h^* conclude this subsection.If Y_h⊂ Y, then apx()=N(u)_Y_h^*≤N(u)_Y^*=0.Hence, Theorem <ref> implies the quasi-optimality of the conforming FEM.For the second-order linear non-selfadjoint and indefinite elliptic problems of Subsection 2.2, Γ=0 and β_0=β_1 etc. is feasible inTheorem <ref> and the best-approximation estimate holds. Theapproximationterm apx() is the norm ofthe functional F-(a_pw+b_pw) (u,∙ ) in V_h^*. This is exactlythe extraterm in Corollary <ref> that leads to the additional two terms in Theorem <ref>.§.§ A posteriori error controlThe regular solution u to N(u)=0 is approximated by some v_h∈ X_h sufficiently close to u such that the Theorem <ref> below asserts reliability (<ref>) and efficiency(<ref>)-(<ref>). Any v_h∈ X_h with u-Qv_h_X<β/Γ satisfiesu-v_h_X≤N(Qv_h)_Y^*/β-Γu-Qv_h_X+Qv_h-v_h_X,Qv_h-v_h_X≤Λ_4u-v_h_X,N(Qv_h)_Y^*≤ (1+Λ_4)(DN(u)+ β )u-v_h_X. Abbreviate ξ=Qv_h and e:=u-ξ. Recall that the bilinear form a+b is associatedto the derivative DN(u;∙,∙)∈ L(X;Y^*)with an inf-sup constant β>0. Hence for any0<ϵ<β there exists some y∈ Y with y_Y=1 and (β-ϵ)e_X≤ DN(u;e,y).Since N(u)=0 and N is quadratic, the finite Taylor seriesN(ξ,y)=- DN(u;e,y)+ D^2N(u;e,e,y)is exact. 
This, D^2N(u;e,e,y)=2Γ(e,e,y), and (<ref>) imply(β-ϵ)e_X ≤ -N(ξ,y)+Γ(e,e,y) ≤N(ξ)_Y^*+Γe_X^2.With ϵ↘ 0 and β-Γe_X>0, this leads toe_X≤N(ξ)_Y^*/β-Γe_X.A triangle inequalityu-v_h_X≤e_X+Qv_v-v_h_Xconcludes the proof of (<ref>).Recall that (H4) implies (<ref>). This and a triangle inequality showe_X≤u-v_h_X(1+Λ_4).The identity (<ref>) results inN(ξ)_Y^*≤DN(u;e)_Y^*+Γ(e,e,∙)_Y^*≤(DN(u)+Γe_X)e_X.The combination of the previous two displayed estimates proves (<ref>). The discrete function v_h can be estimated in the sense of (D) from the introduction. In addition tothe assumptions of Theorem <ref> suppose thatu- v_h_X≤ϵ≤κβ /(Γ (1+Λ_4)) holds for some positive κ<1 andv_h∈ X_h. ThenC_ rel,1:=1/(β(1-κ)) and C_ rel,2:= 1+LC_ rel,1for L:=a+2Λ( u _X+ϵ(1+Λ_4)) satisfy reliability in the sense that u- v_h_X≤ C_ rel,1N(v_h)_Y^* +C_ rel,2Qv_h-v_h_Xand efficiency with(<ref>) and with C_ eff,1:= ((1+Λ_4) (DN(u)+β)+ LΛ_4) inN(v_h)_Y^*≤ C_ eff,1u- v_h_X .Recall the abbreviationsξ=Qv_h and e:=u-ξ.A triangle inequality and (H4) show that e_X ≤ (1+Λ_4)u- v_h_X≤ϵ(1+Λ_4)≤κβ/ Γ. This and Theorem <ref> imply u-v_h_X≤N(Qv_h)_Y^*/β(1-κ)+Qv_h-v_h_X.The derivative DN is globally Lipschitz continuous with a Lipschitz constant 2Λ, the functionN isLipschitz continuous in the closedball B(u,ϵ(1+Λ_4))in X with a Lipschitz constantL.Sincev_h, Qv_h∈B(u,ϵ(1+Λ_4)),N(Qv_h)_Y^*≤N(v_h)_Y^* + L Qv_h-v_h_X.The combination of the previous displayed estimates proves the asserted reliability. The efficiency employs the Lipschitz continuity as well and then utilises(<ref>)-(<ref>) to verify N(v_h)_Y^*≤N(Qv_h)_Y^*+L Qv_h-v_h_X≤ C_ eff,1u-v_h_X.Thisconcludes the proof.§ STREAM FUNCTIONVORTICITY FORMULATION OF THEINCOMPRESSIBLE 2D NAVIER-STOKES PROBLEMThis section is devoted to the stream function vorticity formulation of 2D Navier-Stokesequations with right-hand side f∈ in apolygonal bounded Lipschitz domain Ω⊂ℝ^2:There exists <cit.> at least onedistributional solution u∈ V:= toΔ^2 u+∂/∂ x_1((-Δ u)∂ u/∂ x_2)-∂/∂ x_2((-Δ u)∂ u/∂ x_1)=f in Ω.The analysis of extreme viscosities lies beyond the scope of this paper and the viscosity(the factor in front of the bi-Laplacian in (<ref>)) is setone throughout this paper.§.§ Continuous problemThe weak formulation to (<ref>) seeks u∈ V such that a(u,v)+Γ(u,u, v)=F( v)v∈ V.The associated bilinear form a: V× V→ and the trilinear form Γ: V× V× V→ reada(η, χ):=ΔηΔχ, Γ(η,χ,ϕ):=Δη(∂χ/∂ x_2∂ϕ/∂ x_1-∂χ/∂ x_1∂ϕ/∂ x_2),and F∈ V^* is given by F( ϕ):= f ϕ for all η,χ,ϕ∈ V.The Hilbert space V≡ H^2_0(Ω) with the scalar product a(∙,∙) is endowed with the H^2 seminorm∙:=|∙|_H^2(Ω) and ∙_V^* denotes the dual norm. The bilinear form a(∙,∙) is equivalent to thescalar product in V and the trilinear formΓ(∙,∙,∙) is bounded (owing to the continuous embeddingV⊂ H^2(Ω)↪ W^1,4(Ω))with⟨ N(u), v⟩=N(u; v):=a(u, v)-F(v)+Γ(u,u, v) for allu,v∈ V.The 2D Navier-Stokes equations in the weak stream function vorticityformulation (<ref>) seeks u∈ V with N(u)=0.The regularity results for the biharmonic operator Δ^2in <cit.> ensure that z∈ V with a(z,∙)∈ H^-1(Ω)⊂ V^* belongs toH^2+s(Ω) for some elliptic regularity indexs ∈(1/2,1]andz_H^2+s(Ω)≤ C a(z,∙)_H^-1(Ω).The regularity results for theNavier-Stokes problem in <cit.> ensure thatany weak solution u∈ V to N(u)=0 satisfies u∈ H^2+s(Ω). This makes thecontinuous embeddings H^2+s(Ω) ↪ W^1,∞(Ω) (for s>0) andH^2+s(Ω)↪ W^2,4(Ω) (for s>1/2) available throughout this (and the subsequent) section. 
The embeddingsand inequalitiesimplyfor u∈ H^2+s(Ω) and for θ∈ V, ϕ∈ H^1(Ω) thatΓ(u,θ,ϕ)≲u_H^2+s(Ω)θ_H^2(Ω)ϕ_H^1(Ω).Consequently,the derivative b(∙,∙) := DN(u;∙,∙):=Γ(u,∙,∙)+Γ(∙,u,∙) at the solution uis a bounded bilinear form in H^2(Ω)× H^1(Ω) and will be key in the subsequent analysis. §.§ Conforming FEMLet V_C be a conforming finite element space contained in C^1(Ω)∩ V; for example, the spaces associated with Bogner-Fox-Schmit, HCT, or Argyris elements <cit.> and a regular triangulationof Ω into triangles. The conforming finite element formulationseeks u_C∈ V_C withN_h(u_C;v_C):=N(u_C;v_C):=a(u_C, v_C)-F( v_C)+Γ(u_C,u_C, v_C)=0v_C∈ V_C.If u is a regular solution to N(u)=0, then there exist positiveϵ, δ, and ρ such that (A)-(C) hold with apx()≡ 0for all ∈(δ). Set X=Y=V,X_h=Y_h=V_C, a(∙,∙):=a(∙,∙), andb(∙,∙):=b(∙,∙) :=Γ(u,∙,∙)+Γ(∙,u,∙).Forand Q chosen as identity,the parameters in the hypotheses (H1) and (H3)-(H5) are δ_1=δ_3=Λ_4=δ_5=0.For the proof of (H2), suppose θ_h≡ x_h∈ V_C⊂ V with θ_h =1 and recallfrom the end of the previous subsection that b(θ_h, ∙)∈ H^-1(Ω).Hencethe solutionz∈ V to the biharmonic problema(z,ϕ)=b(θ_h,ϕ)ϕ∈ Xsatisfies z∈ H^2+s(Ω) andz_H^2+s(Ω)≤ C θ_h= C.(Note that z is called A^-1(b(x_h,∙)|_Y) in Subsection <ref>). This regularity and the Galerkin projection P with theGalerkin orthogonality and the approximationproperty z-Pz≲ h_max^s <cit.> leadfor any y∈ Y≡ V with y =1 to a(x_h + z, y-Py ) =a(z,y-Py ) = a(z-Pz,y) ≲ h_max^s.This proves (H2) withδ_2≲ h_ max^s. The choice x_h=Pu implies (H6) with δ_6≲ h_ max^s(from the higher regularity of u andu-Pu≲ h_max^s). Consequently,for sufficiently small maximal mesh-size h_max,Theorem <ref> providesthe discrete inf-sup condition 1≲β_h andTheorem <ref> applies. Since V_C is a conforming finite element space,Theorem <ref> holds withapx():= N(u)_Y_h^*≡ 0.This concludes the proof. The explicit residual-based a posteriori error estimator for the stream function vorticity formulation of 2D Navier-Stokes equations requires some notation for the differential operators: For any scalar function v, vector field Φ=(ϕ_1,ϕ_2)^T,and tensor σ with the 4 entries σ_11, σ_12, σ_21, and σ_22 in form of a 2× 2 matrix,∇ v=[ ∂ v/∂ x_1; ∂ v/∂ x_2 ],Curlv=[ -∂ v/∂ x_2;∂ v/∂ x_1 ],curl[ ϕ_1; ϕ_2 ] =∂ϕ_2/∂ x_1-∂ϕ_1/∂ x_2, DΦ= [ ∂ϕ_1/∂ x_1 ∂ϕ_1/∂ x_2; ∂ϕ_2/∂ x_1 ∂ϕ_2/∂ x_2 ], [ ϕ_1; ϕ_2 ]=∂ϕ_1/∂ x_1+∂ϕ_2/∂ x_2 ,Curl[ ϕ_1; ϕ_2 ] =[ -∂ϕ_1/∂ x_2∂ϕ_1/∂ x_1; -∂ϕ_2/∂ x_2∂ϕ_2/∂ x_1 ] ,and σ =[ ∂σ_11/∂ x_1 +∂σ_12/∂ x_2; ∂σ_21/∂ x_1 +∂σ_22/∂ x_2 ].For any K∈ and E∈(Ω), define the volume and edgeerror estimators by η_K^2 :=h_K^4Δ^2 u_C - curl(Δ u_C∇ u_C) -f^2_L^2(K),η_E^2 :=h_E^3 [ (D^2 u_C)]_E ·ν_E^2_L^2(E) +h_E[Δ u_C ]_E^2_L^2(E)+h_E^3[Δ u_C∇ u_C]_E·τ_E^2_L^2(E)with the unit tangential (resp. normal) vectorτ_E(resp. ν_E) alongthe edge E∈.Recall osc_m(∙,):=h_^2(I-Π_m)∙_L^2(Ω) form∈ℕ_0 in all fourth-order applications. Ifu∈ V is a regular solution to N(u)=0 and m∈_0, then there exist positiveϵ,δ, C_ rel, and C_ eff such that, for any ∈(δ), the unique discrete solutionu_C∈V_C to (<ref>) withu-u_C<ϵ satisfies C_ rel^-2 u-u_C^2 ≤∑_K∈η_K^2 +∑_E∈ (Ω)η_E^2≤ C_ eff^2( u-u_C^2 + osc_m^2(f)).The proof utilizes a quasiinterpolation operator.For any ∈ there exists an interpolation operator Π_h: H^2_0(Ω)→ V_C such that,for 0≤ k≤ m≤ 2 andφ∈ H^2_0(Ω),φ-Π_hφ_H^k(K)≲ h_K^m-k|φ|_H^q(ω_K)holds for any in the triangle K∈andtheinterior ω_K of the unionω_Kof the triangles in sharing a vertex with K. 
This follows from <cit.> once the required scaling properties of the degrees of freedom are clarified. The Argyris or the HCT finite element schemes involve some normal derivative anddo not form an affine finite element family, butan almost affine finite element element family <cit.>. It is by now understood that this guarantees the appropriate scaling properties. This isexplicitlycalculated in<cit.> for the HCT finite elements and also followsfor the Argyris finite elements, as employede.g. in <cit.>. Since the result is frequently accepted <cit.>,further details are omitted.Proof of Theorem <ref>. Continue the notation of the proof of Theorem <ref>with X=Y=V, X_h=Y_h=V_C, Q=1,etc. and recall that, for sufficiently small δ, Theorem <ref>guarantees u-u_C < β/Γ. Hence Corollary <ref> implies (for v_h≡ u_C) u-u_C≤C_ rel,1N(u_C)_V^*.With Π_h from Lemma <ref>,someappropriateϕ∈ V with ϕ=1satisfies N(u_C)_V^*= N(u_C;ϕ)= N(u_C;ϕ-Π_hϕ). Two successive integrations by parts result ina(u_C,ϕ-Π_hϕ)= (Δ^2 u_C) (ϕ-Π_hϕ)+[Δ u_C]_E ∇(ϕ-Π_hϕ)·ν_E - (ϕ-Π_hϕ)[(D^2 u_C)]_E ·ν_E.An integration by parts in the nonlinear termΓ(u_C,u_C,ϕ-Π_hϕ) leads toΓ(u_C,u_C,ϕ-Π_hϕ)=∑_K∈∫_K Δ u_C∇ u_C· Curl(ϕ-Π_hϕ)=∑_K∈∫_K (ϕ-Π_hϕ) curl(-Δ u_C∇ u_C) +∑_E∈∫_E(ϕ-Π_hϕ) [Δ u_C∇ u_C]_E·τ_E.Those identities show that(<ref>) is equal to a sumover edges of jump contributions plus a sum over triangle of volume contributions; the latter is( Δ^2 u_C- curl (Δ u_C∇ u_C)-f ) (ϕ-Π_hϕ)≲∑_K∈η_K h_K^-2 || ϕ- Π_h ϕ_L^2(K)and controlledwith standard manipulationsbased on Lemma <ref> (with k=0 and m=2) and the finite overlap of the patches (ω_K:K∈). The jump contributions includesome trace inequality as well and are otherwise standard as in linear problems that involve the bi-Laplacian. For instance, the nonlinear jump contributionfor each edge E reads∫_E(ϕ-Π_hϕ) [Δ u_C∇ u_C]_E·τ_E =∫_E(ϕ-Π_hϕ) [Δ u_C]_E∇ u_C·τ_Ein case ofan interior edge E shared by the two triangles T_+ and T_- that form the patch ω_E andvanishes in case of a boundary edge E⊂∂Ω (with ϕ=Π_hϕ=0 on ∂Ω).The continuity of ∇ u_C leads to the previous equality. This term is controlled by the residual h_E^3/2[Δ u_C]_E∇ u_C·τ_E_L^2(E) times h_E^-3/2ϕ-Π_hϕ_L^2(E)≲ h_E^-2ϕ-Π_hϕ_L^2(T_±)+h_E^-1ϕ-Π_hϕ_H^1(T_±)≲ | ϕ |_H^2(ω_T_±)with a trace inequality on one of the two triangles T_± in the first and Lemma <ref> (for k=0,1)in the second estimate. The remaining terms are controlled in a similar way. Some words are in order about the termh_E^3/2[Δ u_C]_E∇ u_C·τ_E_L^2(E), in whichan inverse inequality along the interior edge E=∂ T_+∩∂ T_-(shared by T_±∈) of the polynomial ∇ u_C·τ_E (unique as a trace from T_±) shows∇ u_C·τ_E _L^∞(E)≲ h_E^-1 u_C _L^∞(E). This and the global continuous embedding H^2(Ω)↪ L^∞(Ω) leads toh_E^3/2[Δ u_C]_E∇ u_C·τ_E_L^2(E)≲ h_E^1/2[Δ u_C]_E _L^2(E) u_C .Since u_C ≲ 1, the nonlinear edge contribution is controlledby another contribution h_E^1/2[Δ u_C]_E _L^2(E) to η_E; in other words, this nonlinear edge contribution can be omitted. The overall strategy in the efficiency prooffollows thebubble-function technique due to Verfürth <cit.>. 
The emphasis in this paper is on the nonlinear contributionsand on the interaction of the various nonlinear terms with the volume estimator.We will give two examples only to illustrate some details and start withthe cubic bubble-functionb_K ∈ W^1,∞_0(K)(the product of all three barycentric coordinates times 27) of the triangle K∈ with 0≤ b_K≤max b_K=1.Let f_K:=Π_m f∈ P_m(K) be theL^2(K)orthogonal polynomial projection of f∈ L^2(K) fordegree m∈ℕ_0so that f-f_K_L^2(K)=h_K^-2 osc_m(f,K).Since g:=Δ^2 u_C- curl (Δ u_C∇ u_C)-f_K is a polynomial of degree at most max{k-4,(k-2)(k-1)-1,m} (recall that k is thedegree of thefinite element functions), an inverse estimatereadsg_K^2≲∫_Kρ_K gfor the test function ρ_K:=b_K^2g∈ H^2_0(K)⊂ V.The above integrations by parts (<ref>)-(<ref>)with the test functionϕ-Π_hϕ replaced by ρ_K are restricted to K forthe support of b_K and ∇ b_Kis K.This leads to the first equality in ∫_K g ρ_K =a(u_C, ρ_K)+Γ(u_C,u_C,ρ_K) -∫_Kρ_Kf_K=a(u_C-u, ρ_K)+Γ(u_C,u_C,ρ_K) - Γ(u,u,ρ_K) +∫_Kρ_K(f-f_K)and(<ref>)leads to the second. Except for the last term (that leads to oscillations in the end),elementary algebra,Γ(u,u,ρ_K)- Γ(u_C,u_C,ρ_K)=Γ(u-u_C,u,ρ_K)+Γ(u_C,u-u_C,ρ_K), Cauchy, and Hölder inequalities bound theabove terms upto a constantbyu-u_C_H^2(K)( (1 +| u |_W^1,∞(Ω)) ρ_K_H^2(K)+| u_C|_H^2(Ω) | ρ_K|_W^1,∞(K)).Theinverse estimates ρ_K_H^2(K) + | ρ_K|_W^1,∞(K)≲ h_K^-2ρ_K_L^2(K)≤ h_K^-2 g _L^2(K) lead in the preceding estimates (after division byh_K^-2 g _L^2(K)) toh_K^2g_L^2(K)≲ u-u_C_H^2(K)+ osc_m(f,K).This and atriangle inequalityprove efficiencyη_K≲ u-u_C_H^2(K)+ osc_m (f,K) of the volume contribution.The patchω_E of aninterior edgeE∈ is the interior of the union of the two neighbouringtriangles insharing the edge E and may be a non-convex quadrilateral.Observe that the shape-regularity inimplies theshape-regularity ofthe largest rhombusR contained in the patch ω_E that has E as one diagonal.Let b_R∈ H^1_0(R)⊂ H^1_0(ω_E) be the (piecewise quadratic)edge-bubble function of E in R (with 0≤ b_R≤max b_R=1)and let Φ_E∈ P_1(R) be the affine function that vanishes along E and satisfies ∇Φ_E =h_E^-1ν_E. Then b_E:= Φ_E b_R^3∈ H^2_0(R)⊂ H^2_0(ω_E)satisfies∇ b_E ·ν_E= h_E^-1 b_R^3 along E and | b_E|_L^∞(ω_E)≲ 1 as in<cit.>.Extend [Δ u_C]_E constantly in the normal direction to E and setϱ_E:=h_E^2 [Δ u_C]_Eb_E ∈ H^2_0(R)⊂ H^2_0(ω_E).An inverseestimate in the beginning, ∇ϱ_E·ν_E=h_E b_R^3[Δ u_C]_E on E,and piecewise integrations by parts lead to h_E [Δ u_C]_E_L^2(E)^2≲h_Eb_R^3/2[Δ u_C]_E_L^2(E)^2 =∫_E ∇ϱ_E·ν_E [Δ u_C]_E =∫_ω_E(Δ u_CΔϱ_E -ϱ_EΔ^2_pw u_C).The test-functionϱ_Ein (<ref>) showsthat, the right-hand side readsa(u_C-u,ϱ_E)+Γ(u_C,u_C,ϱ_E)-Γ(u,u,ϱ_E)+ ∫_ω_E(f-Δ_pw ^2 u_C+ curl_pw (Δ u_C∇ u_C))ϱ_E.A Cauchy inequality in the first, the arguments for (<ref>) in the second term, and thebound (η_T_++η_T_-)h_E^-2ϱ_E _L^2(ω_E) for the third term lead to h_E [Δ u_C]_E_L^2(E)^2≲(u-u_C_H^2(ω_E)+η_T_++η_T_-)(h_E^-2ϱ_E _L^2(ω_E)+ |ϱ_E |_H^2(ω_E)).The function ϱ_E is polynomial in each of the two open triangles in R∖ (E∪∂ R) and allows for inverse estimates. Since|b_E|≲ 1 a.e., this proves that the last factor is controlled byh_E^-2ϱ_E_L^2(ω_E)≲[Δ u_C]_E_L^2(ω_E)≲h_E^1/2[Δ u_C]_E_L^2(E)for the constant extension of[Δ u_C]_E in the direction of ν_E in the last step. 
The combination of the previous two displayed inequalities with the above efficiency of thevolume contribution concludes the proof of h_E^1/2[Δ u_C]_E_L^2(E)≲u-u_C_H^2(ω_E)+η_T_++η_T_-≲u-u_C_H^2(ω_E)+ osc_m (f,{T_+,T_-}).The efficiency of h_E^3[ (D^2 u_C)]_E ·ν_E^2_L^2(E) is also established through an adoption of the corresponding arguments in<cit.>. Hence the straightforward details are omitted. §.§ Morley FEMThe nonconforming Morley element space V_h:=()associated with thetriangulationof the polygonal domainΩ⊂ℝ^2 into triangles reads():={ v_M∈ P_2()v_Mis continuous at (Ω)and vanishes at (∂Ω),∫_E[∂ v_M/∂ν]_E=0 for all E ∈ (Ω),∫_E∂ v_M/∂ν=0 for all E∈ (∂Ω) }.The discrete formulationseeks u_M∈() such that N_h(u_M;v_M):=a_pw( u_M, v_M)-F( v_M) +Γ_pw( u_M, u_M, v_M)=0v_M∈().Here and throughout this section, V:=V+ () is endowed with the mesh-dependentnormφ_pw :=√(a_pw(φ,φ)) for φ∈V and, for all η,χ,ϕ∈(), a_pw(η,χ):=∑_K∈∫_K D^2 η:D^2χ, Γ_pw(η,χ,ϕ):= ∑_T∈∫_T Δη(∂χ/∂ x_2∂ϕ/∂ x_1-∂χ/∂ x_1∂ϕ/∂ x_2). The a priori error estimatemeansbest-approximation upto first-order terms and so refines<cit.> for the Morley FEM and generalises it for any regular solution. If u∈ H^2_0(Ω) is a regular solution to N(u)=0, then there exist positiveϵ, δ, and ρ such that(A)-(C) hold for all ∈(δ) withapx() ≲u-I_ M u_ pw + h_Δ u ∇ u+osc_0(f,)≲ h_ max^s.The proof requires the following four lemmas. For any v∈ V+ (), the Morley interpolationI_M(v)∈()defined by(I_M v)(z)=v(z)for anyz∈(Ω)and ∫_E∂ I_M v/∂ν_E=∫_E∂ v/∂ν_E for anyE∈ satisfies (a) D^2_ pw I_M =Π_0 D^2 and (b) h_K^-2(1-I_M)v _L^2(K)+h_K^-1∇(1-I_M) v_L^2(K) +D^2 I_Mv_L^2(K)≲D^2v_L^2(K).Let H^2((ω_K)) denote the piecewise H^2 functions onthe neighbourhoodω_K, piecewise with respect tothe triangulation(ω_K) of all triangles T with zero distance to K∈. Let |∙|_H^2((ω_K)) be the corresponding seminorm as the local contributions of ∙_pw associated with ω_K.There exists an enrichment operator E_M:()→ V such that φ_M∈() satisfies(a)∑_m=0^2 h_K^2m|φ_M-E_Mφ_M|_H^m(K)^2 ≲h_K^4|φ_M|_H^2((ω_K))^2 K∈; (b) h_^-2(φ_M-E_Mφ_M)_^2≲∑_E∈ h_E [D^2φ_M]_Eτ_E_L^2(E)^2≲φ_M-E_Mφ_M_ pw^2≤Λmin_φ∈ V D_h^2(φ_M-φ)_^2; (c)I_ME_Mφ_M=φ_M,andφ_M-E_Mφ_M⊥ P_0() in L^2(Ω).The Sobolev embeddings for conforming functions depend on the domainΩ, while their discrete counterparts for nonconforming functions require particular attention.For any 1≤ p<∞, there exists a constant C=C(Ω,p,∢)(which depends on p, Ω, and the shape regularity of ) withv_L^∞(Ω)+v_W^1,p()≤ Cv_ pwfor all v∈ H^2_0(Ω)+().The main observation is that the enrichment operator E_M fromLemma <ref> maps into the HCT finite element spaceplus squaredbubble-functions<cit.>;sov_M-E_Mv_M is a piecewise polynomial of degree at most 6 for any v_M∈() (with respect to some refinement of , where each triangle T is divided into threesub-triangles by connecting each vertex with its center of inertia). This leads toinverse estimates such as | v_M-E_Mv_M |_W^1,∞(T)≲ h_T^-1 | v_M-E_Mv_M |_H^1(T)≲ h_T^-2 v_M-E_Mv_M _L^2(T).Lemma <ref>.b shows for v∈ H^1_0(Ω) that the right-hand side is controlled by h_^-2 (v_M-E_Mv_M) _L^2(Ω)≲ v_M-E_Mv_M _pw≤Λmin_φ∈ V v_M-φ_pw≤Λ v+v_M _pw.Since T∈ is arbitrary, this proves| v_M-E_Mv_M |_W^1,∞(Ω,)≲ v+v_M _pw.Sincev_M-E_Mv_M is Lipschitz continuous withLipschitz constant| v_M-E_Mv_M |_W^1,∞() andvanishes at the vertices of T∈,|| v_M-E_Mv_M ||_L^∞(Ω)≤ h_max | v_M-E_Mv_M |_W^1,∞(Ω,) holds for the maximal mesh-sizeh_max≤diam(Ω). This and (<ref>)imply(with C_1≈ 1)v_M-E_Mv_M _L^∞(Ω)≤C_1v+v_M_pw. 
The boundedness ofthecontinuous 2D Sobolev embeddingH^2(Ω)↪L^∞ (Ω) leads to∙_L^∞(Ω)≤ C_2 ∙ in H^2_0(Ω). Consequently, witha triangle inequality in the beginning, v+v_M _L^∞(Ω) ≤v_M-E_M v_M _L^∞(Ω)+ v+ E_Mv_M _L^∞(Ω)≤C_1 v+v_M_pw+C_2 v+ E_M v_M .Thetriangle inequality and Lemma <ref>.b (again with φ=-v) showv+E_M v_M≤ v+v_M _pw+v_M -E_Mv_M _pw≲ v+v_M _pw.The combination of (<ref>) with the previously displayed estimate showsthe first assertionv+v_M _L^∞(Ω)≲ v+v_M _pw. The proof of the second assertion is similar with(<ref>)-(<ref>). The boundedness ofthecontinuous 2D Sobolev embeddingH^2(Ω)↪ W^1,p(Ω) leads to| ∙|_W^1,p(Ω)≤ C(p,Ω) ∙ in H^2_0(Ω). Consequently,| v+v_M |_W^1,p(Ω,) ≤| v+E_M v_M|_W^1,p(Ω) +| v_M -E_Mv_M|_W^1,p(Ω,)≤ C(p,Ω)v+E_M v_M + |Ω|^1/p | v_M -E_Mv_M|_W^1,∞(Ω,)with the area |Ω|≈ 1≈ C(p,Ω).Recall(<ref>)and (<ref>) in the end to control theprevious upper bound in terms of v+E_M v_M + v+ v_M _pw≲v+ v_M _pw. This concludes the proof of the second assertion| v+v_M |_W^1,p(Ω,)≲v+ v_M _pw.The bound fora_pw is immediate from(<ref>) for the norm∙_pw in V≡ V+ ().The boundΓ_pw =√(2) C(Ω,4,∢)^2 in|Γ_pw(η,χ,ϕ)|≤Γ_pwη_pwχ_pwϕ_pwfor all η, χ, ϕ∈V≡ V+ ()followsfrom (<ref>)with Hölder inequalities and Lemma <ref>.For 1/2<s≤ 1 thereexists a positive constant C such that any η∈ H^2+s(Ω) andφ_M∈() satisfya_pw(η,φ_M-E_Mφ_M)≤ Ch_max^sη_H^2+s(Ω)φ_M_ pw. Proof of Theorem <ref>.Set X=Y=V, X_h=Y_h=V_h, X=V+V_h, a(∙,∙):=a_pw(∙, ∙), b(∙,∙):=Γ_pw( u,∙,∙)+Γ_pw(∙,u,∙) and P=I_M, Q=𝒞= E_M. The regularity u∈ H^2+s(Ω) of Subsection <ref> with s>1/2 allows forthebounded global Sobolev embeddingsH^2+s(Ω) ↪ W^2,4(Ω)↪ W^1,∞(Ω). This and Lemma <ref> lead for θ∈X and ϕ∈ H^1(Ω) to|Γ_pw(u,θ,ϕ)|+|Γ_pw(θ,u,ϕ)|≲(u_W^2,4(Ω)θ_W^1,4(Ω,) +u_W^1,∞(Ω)θ_pw)ϕ_H^1(Ω)≲u_H^2+s(Ω) θ_pwϕ_H^1(Ω).For θ_M∈() with θ_M_pw=1, the aforementioned estimates imply that b(θ_M,∙)∈ H^-1(Ω)and so the solution z∈ V to the biharmonic problema(z,ϕ)=b(θ_M,ϕ)ϕ∈ Vsatisfies z∈ H^2+s(Ω) and z _H^2+s(Ω)≲ 1 <cit.>. The regularityz∈ H^2+s(Ω) and Lemma <ref>(resp. Lemma <ref>)imply(H1)(resp.(H2))with δ_1≲ h_ max^s (resp. δ_2≲ h_ max^s). The estimate (<ref>) and Lemma <ref> verify (H3) withδ_3≲ h_ max.Lemma <ref>.b leads to (H4) with Λ_4=Λ. For any y_M∈ M() withy_M_pw=1,Lemma <ref> guarantees a_pw(u, y_M-E_My_M)≲ h_max^su_H^2+s(Ω)≈ h_max^s,whileLemma <ref> shows F(y_M-E_My_M)≲ h_max^2 f≲h_max^s. This implies(H5) with δ_5≲h_max^s.Choose x_h=I_M u so that (H6) holds with δ_6≲ h_ max^s.In conclusion,for sufficiently small mesh-size h_max,the discrete inf-sup inequality of Theorem <ref> holdswith β_h≥β_0>0.Moreover, Theorems <ref> and <ref> apply and prove (A)-(C). To compute apx()=N(u)_M()^*,letϕ_M ∈ M() satisfyϕ_M_pw=1 and apx() = N(u; ϕ_M). Since N(u, E_Mϕ_M)=0, the differenceψ :=ϕ_M-E_M ϕ_M∈V satisfies apx() = N(u;ψ )= a_ pw (u- I_M u, ψ)-F((1-Π_0)ψ)+ Γ_ pw(u,u,ψ)withLemma <ref>.c for a_ pw (I_M u, ψ)=0 and Π_0 ψ=0 a.e. in the last step. This, the finite overlap of (ω_K:K∈) in Lemma <ref>.a and <ref>.b for ψ_pw≲ 1lead to (<ref>). §.§ A posteriori error estimateFor any K∈ and E∈, define the volume and edge error estimatorsby η_K^2:=h_K^4 curl(-Δ u_M∇ u_M)-f_L^2(K)^2 and η_E^2 :=h_E[D^2 u_M]_Eτ_E_L^2(E)^2 +h_E^3[Δ u_M∇ u_M]_E·τ_E_L^2(E)^2 +h_E^3{Δ u_M∇ u_M}_E·τ_E_L^2(E)^2.Here and throughout this section, the average of ϕ∈X across the interior edge E=∂ K_+∩∂ K_-∈(Ω) shared by two triangles K_±∈reads {ϕ}_E:=(ϕ|_K_++ϕ|_K_-)/2, while {ϕ}_E:=ϕ|_E along any boundary edge E∈(∂Ω). 
If u∈ V is a regular solution to (<ref>), then there exist positive δ,ϵ, and C_ relsuch that, for any ∈(δ), the discrete solution u_M∈() to (<ref>)with u-u_M_ pw≤ϵsatisfiesC_ rel^-2 u-u_M_ pw^2 ≤∑_K∈η_K^2+∑_E∈η_E^2. Let u_M be the solution to (<ref>) close to u and apply Theorem <ref> with X=Y=V, X_h=Y_h=V_h,v_h= u_M, and Q:=E_M from Lemma <ref>.Suppose thatϵ,δ satisfy Theorem <ref> and, if necessary, are chosen smaller such that, for any ∈(δ), exactly one discrete solution u_ M∈ X_Mto (<ref>) satisfiesu-u_ M_ pw≤ϵ≤β/( 2(1+Λ )Γ).Lemma <ref>.b implies u_M-E_M u_M_ pw≤Λ u -u_M _ pw≤Λϵ. This andtriangle inequalities showE_M u _M +u_M_pw ≤ u_M-E_M u_M _pw + 2 u_M_pw≤2u+(2+ Λ)ϵ=:M; u- E_Mu_M ≤u- u_M_pw+ u_M- E_M u_M_pw≤ (1+Λ)ϵ≤β/( 2Γ ).Consequently,the abstract residual (<ref>) in Theorem <ref> impliesu- u_M_pw≤ 2β^-1N(E_M u_M)_ V^* +u_M- E_M u_M_pw.There exists someϕ∈ V with ϕ=1 andN(E_Mu_M)_V^*= N(E_M u_M;ϕ)= a(E_M u_M,ϕ)-F(ϕ) +Γ(E_M u_M,E_M u_M,ϕ)=N( u_M;ϕ)+ a_pw(E_M u_M- u_M,ϕ)+Γ(E_M u_M,E_M u_M,ϕ) -Γ_pw( u_M, u_M,ϕ)with the definition of N and of N. This, theboundofa_pw,elementary argumentswith the trilinear form and its bound Γ_ pw from Remark <ref>,and M proveN(E_M u_M)_ V^*≤N( u_M;ϕ)+(1+ M Γ_ pw)u_M- E_Mu_M_ pw.Sinceu_Msolves(<ref>), N_h( u_M;ϕ)=N_h( u_M;χ) holds for χ:=ϕ-I_M ϕ withtheMorley interpolation I_M ϕ of ϕ. Since Lemma <ref>.aimplies a_pw( u_M,ϕ-I_Mϕ)=0,an integration by parts in the nonlinear term Γ_pw(∙,∙,∙) leads toN( u_M;ϕ) =∑_K∈∫_K Δ u_M∇ u_M· curlχ -F(χ)=∑_K∈∫_K ( curl(-Δ u_M∇ u_M)-f)χ +∑_E∈∫_E [Δ u_M∇ u_M·τ_E]_E{χ}_E+∑_E∈∫_E{Δ u_M∇ u_M}_E·τ_E [χ]_E.This and standard arguments with Cauchy and trace inequalities plusLemma <ref>.b with ϕ=1 eventually lead to some constant C_A≈ 1 withC_A ^-2N( u_M;ϕ)^2 ≤∑_K∈η_K^2+∑_E∈η_E^2. Piecewise inverse estimates u_M- E_Mu_M_pw≲h_^-2 ( u_M- E_Mu_M )_L^2(Ω)andLemma <ref>.b with the tangential jump residualslead to some constant C_B≈ 1 with C_B ^-2 u_M- E_Mu_M_pw^2 ≤∑_E∈ h_E [D^2 u_M]_Eτ_E_L^2(E)^2.This is bounded by ∑_E∈η_E^2.The combination of(<ref>)-(<ref>) concludes the proof with C_ rel= 2 β^-1 C_A+(1+ 2β^-1(1+M Γ _pw)) C_B.The efficiency of the estimator remainsas an open questionowing to the average term{Δ u_M∇ u_M·τ_E}_E_L^2(E) in η_E(for the remaining contributions are efficient).The sum of all those contributions associated to those terms,however, converge(at least) with linear rate inthat S:= (∑_E∈h_E^3{Δ u_M∇ u_M·τ_E}_E_L^2(E)^2)^1/2≲h_max u _H^2+s(Ω) u_M_pw=O(h_max). Before a sketch of the proof concludes this remark, it should be stressed that (<ref>)can be a higher-order term:Consider a uniform mesh in a singular situation with re-entering corners (withan exact solution of reduced regularityu∉ H^3(Ω) <cit.>) with asuboptimal convergence rate s<1. Then S in(<ref>) is of higher-order. The proof of (<ref>) starts with atriangle inequality S^2 ≤∑_T∈∑_E∈(T)h_E^3 (Δ u_M∇ u_M) |_T_L^2(E)^2.The discrete trace inequality(i.e. a trace inequality followed by an inverse inequality)for each summand shows h_E^3 (Δ u_M∇ u_M) |_T_L^2(E)^2≲ h_E^2Δ u_M∇ u_M_L^2(T)^2. Recallthe piecewise constant mesh-sizeh_∈ P_0(), h_|_T:=h_T:=diam(T) for T∈, with maximum h_max:=max h_≤δ. The shape regularity ofshowsS≲h_Δ_pw u_M∇_pw u_M_L^2(Ω)≤h_Δ u ∇_pw u_M_L^2(Ω) +h_Δ_pw (u-u_M)∇_pw u_M_L^2(Ω)with a triangle inequality in the last step. Recall that u∈ H^2+s(Ω) for s>1/2 enablesthe bounded embedding H^s(Ω)↪ L^2p(Ω) for any p with 1<p<1/(1-s). 
This and a Hölder inequality with 1/p+1/p'=1 leads toΔ u ∇_pw u_M_L^2(Ω)≤Δ u _L^2p(Ω)∇_pw u_M_L^2p'(Ω).Lemma <ref> shows that the last term is controlledbyu_M_pw. Consequently,h_Δ u ∇_pw u_M_L^2(Ω)≲ h_max u _H^2+s(Ω) u_M_pw.The analysis of the second termstarts with 0<r≤ s≤ 1 andthe elementary observation h_Δ_pw (u-u_M)∇_pw u_M_L^2(Ω)≤√(2) h_max^1-r u-u_M_pw| h_^ru_M |_W^1,∞(Ω,).Theasserted convergence rate followswithu-u_M_pw≲h_max^su _H^2+s(Ω). The maximum of the remaining term| h_^ru_M |_W^1,∞(Ω,)=h_T^r | u_M |_W^1,∞(T) is attained for (at least) one T∈. An inverse inequality andLemma <ref>in the end showh_T^r | u_M |_W^1,∞(T)≲ | u_M |_W^1,2/r(T)≤| u_M |_W^1,2/r(Ω,)≲ u_M_pw.Consequently,h_Δ_pw (u-u_M)∇_pw u_M_L^2(Ω)≲h_max^1+s-r u_M_pw. The combination of the previous estimates proves(<ref>). The lack of local efficiencyis part of amore general structuraldifficulty. Whenever volume terms require a piecewise integration by parts with Morley finite element test functions, there arise averageterms like {ϕ}_E in Theorem <ref>, which are not residuals. This prevents an efficiency analysis in this section as well as in <cit.> or<cit.>. It is left as an open problem for future research and may causea modification of the discrete scheme. In the vibration of a biharmonic plate or in the von Kármán equations of the subsequent section, this difficulty does not arise. § VON KÁRMÁN EQUATIONSGiven a load function f∈ L^2(Ω), the von Kármán equations model the deflection of a very thin elastic plate with vertical displacement u∈ and the Airy stress function v∈ such thatΔ^2 u =[u,v]+fand Δ^2 v =-[u,u]in Ω.Withthe co-factor matrix(D^2 v )of D^2 v, the von Kármán brackets read[u,v]:=∂^2 u/∂ x_1^2∂^2 v/∂ x_2^2 +∂^2 u/∂ x_2^2∂^2 v /∂ x_1^2 -2∂^2 u/∂ x_1∂ x_2∂^2 v/∂ x_1∂ x_2=(D^2 u):D^2 v.§.§ Continuous problemTheweak formulationofthe von Kármán equations (<ref>) seeksu,v∈ V:=H^2_0(Ω) with a(u,φ_1)+ γ(u,v,φ_1)+γ(v,u,φ_1) = (f,φ_1)_L^2(Ω)φ_1∈ Va(v,φ_2)-γ(u,u,φ_2)=0φ_2 ∈ V.Here and throughout this sectionabbreviate, for all η,χ,φ∈ V, a(η,χ):= D^2 η:D^2χandγ(η,χ,φ):=- [η,χ]φ.The abstract theory of Sections <ref>-<ref> applies for the real Hilbert spaceX:=V× Vwith its dual X^* tothe operator N:X→X^*defined byN(Ψ;Φ):=⟨N(Ψ), Φ⟩:=A(Ψ,Φ)-F(Φ)+Γ(Ψ,Ψ,Φ)for all Ξ=(ξ_1,ξ_2), Θ=(θ_1,θ_2),Φ=(φ_1,φ_2)∈Xandthe abbreviations A(Θ,Φ) :=a(θ_1,φ_1)+a(θ_2,φ_2),F(Φ) :=(f,φ_1)_, Γ(Ξ,Θ,Φ) :=γ(ξ_1,θ_2,φ_1)+γ(ξ_2,θ_1,φ_1)-γ(ξ_1,θ_1,φ_2).Note that A(∙,∙) is a scalar product in Xand the trilinearform Γ(∙,∙,∙) is bounded <cit.>.It is known <cit.> that there exist a solution Ψ∈ X with N(Ψ)=0. Any solution has the regularityΨ∈ H^2+s(Ω):=(H^2+α(Ω))^2 for1/2<s≤ 1 depending on the polygonal bounded Lipschitz domain Ω <cit.>. This allows for the boundedness Γ(Ψ,Θ,Φ)≤ C Ψ_H^2+s(Ω)ΘΦ_H^1(Ω)for any Θ∈ X and Φ∈ H^1_0(Ω;^2).§.§ Conforming FEMWith the notation of Section <ref> on V_C⊂ H^2_0(Ω), the conformingfinite element formulation seeks Ψ_C=(u_C,v_C)∈ X_h:=V_C× V_C such thatN(Ψ_C;Φ_C)=0 for allΦ_C ∈X_h.If Ψ∈ X is a regular solution to N(Ψ)=0, then there exist positiveϵ, δ, and ρ such that(A)-(C) hold with apx()≡ 0for all ∈(δ). The proofis analogous to that of Theorem <ref>and hence omitted. The a priori error analysis is derived in<cit.> with a fixed point iteration (of linear convergence). 
For any K∈ and E∈, definethe volume and edge error estimators by η_K^2 :=h_K^4Δ^2 u_C-[u_C,v_C]-f^2_L^2(K) +h_K^4 Δ^2 v_C+1/2[u_C,u_C]^2_L^2(K), η_E^2 :=h_E^3[ (D^2 u_C)]_E·ν_E^2_L^2(E) +h_E^3[ (D^2 v_C)]_E·ν_E^2_L^2(E)+h_E[D^2 u_Cν_E]_E·ν_E^2_L^2(E) +h_E[D^2 v_Cν_E]_E·ν_E^2_L^2(E) .If Ψ∈ Xis a regular solutionto N(Ψ)=0,then there exist positiveδ, ϵ, C_ rel, and C_ effsuch that, for all ∈(δ), the unique discrete solution Ψ_C=(u_C,v_C)∈X_h to (<ref>) with Ψ-Ψ_C <ϵsatisfies C_ rel^-2Ψ-Ψ_C^2 ≤∑_K∈η_K^2 +∑_E∈η_E^2≤ C_ eff^2(Ψ-Ψ_C^2+ osc_0^2(f)).For Y=X,Y_h=X_h, we proceed as in the proof of Theorem <ref> and, for sufficiently small δ, derive(H1)- (H6) and u-u_C < β/Γ fromTheorem <ref>. Hence Corollary <ref> implies for v_h≡Ψ_C= (u_C, v_C) thatΨ-Ψ_C≲N(Ψ_C)_X^*=N(Ψ_C;Φ)for someΦ∈ X with Φ=1 and its approximationΠ_hΦ∈ X_h (Π_h from Lemma <ref> applies componentwise).Abbreviate (χ_1,χ_2):=χ :=Φ-Π_hΦ and deduce from(<ref>) thatN(Ψ_C)_X^*=N(Ψ_C;(χ_1,χ_2)). Successive integrations by parts showA(Ψ_C,χ )= (Δ^2 u_C) χ_1 + (Δ^2 v_C) χ_2 +[D^2 u_C]_Eν_E·∇χ_1 + [D^2 v_C]_Eν_E·∇χ_2 - χ_1 [(D^2 u_C)]_E·ν_E - χ_2 [(D^2 v_C)]_E·ν_E .This and the definition of Γ(∙,∙,∙) lead to the residualA(Ψ_C,Φ-Π_hΦ)-F(Φ-Π_hΦ)+Γ(Ψ_C,Ψ_C,Φ-Π_hΦ)= (Δ^2 u_C-[u_C,v_C]-f)χ_1+(Δ^2 v_C+[u_C,u_C])χ_2-([(D^2 u_C)]_E·ν_E)χ_1 +[D^2 u_C]_Eν_E·∇χ_1 -([(D^2 v_C)]_E·ν_E)χ_2 + [D^2 v_C]_Eν_E·∇χ_2.The two edge terms in the above expression that involve ∇χ_j for j=1,2can be rewritten as[D^2 u_C]_Eν_E·∇χ_1+[D^2 v_C]_Eν_E·∇χ_2=[D^2 u_Cν_E]_E·ν_E ∂χ_1/∂ν+[D^2 v_Cν_E]_E·ν_E ∂χ_2/∂ν +[D^2 u_Cν_E]_E·τ_E ∂χ_1/∂τ+[D^2 v_Cν_E]_E·τ_E ∂χ_2/∂τ.The last two terms involve tangential derivatives and so vanish for u_C and v_C belong to .Standard arguments analogous to <cit.> with a Cauchyinequality, an inverse inequality, and Lemma <ref>conclude the proof of the reliability. The proof of the efficiency of the volume term η_K is immediately adopted fromthat of<cit.>. The arguments in the proof of efficiency for the edge termsh_E[D^2 u_Cν_E]_E·ν_E_L^2(E) and h_E[D^2 v_Cν_E]_E·ν_E_L^2(E) are the sameas for the (linear) biharmonicequation and can be adopted from<cit.> or <cit.>.Furtherdetails are omitted. §.§ Morley FEMThe Morley FEM seeks Ψ_M∈X_M:=() ×()⊂X:=X+X_M(endowed with the norm ∙_pw) such thatN_h(Ψ_M;Φ_M):=A_pw(Ψ_M,Φ_M) +Γ_pw(Ψ_M,Ψ_M,Φ_M)-F(Φ_M)=0 Φ_M ∈X_M.Here and throughout this subsection,for all Ξ=(ξ_1,ξ_2),Θ=(θ_1,θ_2), Φ=(φ_1,φ_2)∈X,A_pw(Θ,Φ):=a_pw(θ_1,φ_1)+a_pw(θ_2,φ_2), F(Φ):= fφ_1, Γ_pw(Ξ,Θ,Φ):=b_pw(ξ_1,θ_2,φ_1)+b_pw(ξ_2,θ_1,φ_1)-b_pw(ξ_1,θ_1,φ_2),and, for all η,χ,φ∈V:=H^2_0(Ω)+ (),a_pw(η,χ):= D^2 η:D^2χ andb_pw(η,χ,φ):=- [η,χ]φ. (The boundedness ofa_pw is immediate andthat of Γ_pw follows from Lemma <ref>.)If Ψ∈ X is a regular solution to N(Ψ)=0, then there exist positiveϵ, δ, and ρ such that(A)-(C) hold for any ∈(δ) with apx()≲Ψ-I_MΨ_ pw +osc_0(f+ [u,v],) +osc_0([u,u],)≲h_ max^s.Set Y=X, Y_h=X_M, X= X+X_M, a(∙,∙):=A_pw(∙,∙), b(∙,∙):=2Γ_pw( Ψ,∙,∙) and P=I_M,Q=𝒞=E_M.GivenΨ∈ H^2+s(Ω), Θ, Φ∈X, piecewise inequalities andthe bounded globalSobolev imbeddingH^2+s(Ω)↪ W^2,4(Ω) (for s>1/2) showΓ_pw(Ψ,θ,Φ) ≲Ψ_H^2+s(Ω)Θ_pwΦ_L^4(Ω).For Θ_M ∈ X_M with Θ_M_pw=1,the linear functional Γ(Ψ,Θ_M,∙)∈ H^-1(Ω) leads to aunique solution Z∈X to the biharmonic problemA( Z,Φ)=Γ(Ψ,Θ_M,Φ) for all Φ∈ Xwith Z∈ H^2+s(Ω) <cit.>. 
Forφ_M∈(), the inverse estimateφ_M-E_Mφ_M_L^4(K)≤ C h_K^-1/2φ_M-E_Mφ_M_L^2(K) for all K∈,the bound for Γ_pw, andLemma <ref>.a imply δ_3≲ h_ max^3/2.Theremaining conditions for the parameters in the (H1)-(H2) and (H4)-(H6)are verified as in the proof of Theorem <ref>. For some Φ_M∈ X_M with Φ_M_pw=1, apx()=N(Ψ)_X_h^*=N(Ψ; Φ_M). This, N(Ψ; E_MΦ_M)=0,(<ref>),andLemmas <ref>-<ref>lead for (χ_1,χ_2):=χ:=Φ_M-E_MΦ_M toapx() = N(Ψ; Φ_M-E_M Φ_M)= A_pw(Ψ, χ )-F(χ) +Γ_pw(Ψ,Ψ,χ)= A_pw(Ψ-I_M Ψ,χ) - (f+ [u,v], χ_1)_L^2(Ω)+1/2([u,u], χ_2)_L^2(Ω)≲Ψ-I_M Ψ_ pw + osc_0(f+ [u,v],) + osc_0([u,u],)≲h_ max^swith arguments as inthe final part of the proofof Theorem <ref>. Hence Theorems <ref> and <ref>apply and prove (A)-(C). For any K∈ and E∈, define the volume and edgeerror estimatorsby η_K^2 := h_K^4[u_M,v_M]+f_L^2(K)^2+ h_K^4 [u_M,u_M]_L^2(K)^2, η_E^2:= h_E[D^2 u_M]_Eτ_E_L^2(E)^2 +h_E[D^2 v_M]_Eτ_E_L^2(E)^2. If Ψ=(u,v)∈X is a regular solution to N(Ψ)=0, then there existδ,ϵ, C_ rel, and C_ eff such that, for any ∈(δ), the discrete solution Ψ_M=(u_M,v_M)∈ X_M:=()×() to (<ref>) with Ψ-Ψ_ M_ pw≤ϵsatisfiesC_ rel^-2Ψ-Ψ_ M_ pw^2 ≤∑_K∈η_K^2+∑_E∈η_E^2≤ C_ eff^2 (Ψ-Ψ_M_ pw^2+ osc_0^2(f)).Let Ψ_M be the solution to (<ref>) close to Ψ and apply Theorem <ref> withY=X, Y_h=X_M,v_h=Ψ_M, and Q=E_M. Suppose thatϵ,δ satisfy Theorem <ref> and, if necessary, are chosen smaller such that, for any ∈(δ), exactly one discrete solution Ψ_M∈ X_Mto (<ref>) satisfies Ψ-Ψ_M_ pw≤ϵ≤β/( 2(1+Λ )Γ). Lemma <ref>.b impliesΨ_ M- E_MΨ_ M_ pw≤ΛΨ-Ψ_M_ pw≤Λϵ. This andtriangle inequalities showE_MΨ_M + Ψ_M_pw ≤Ψ_M-E_MΨ_M _pw + 2Ψ_M_pw≤2 Ψ +(2+ Λ)ϵ=:M;Ψ- E_MΨ_M ≤Ψ- Ψ_M_pw+ Ψ_M- E_MΨ_M_pw≤ (1+Λ)ϵ≤β/( 2Γ).Consequently,the abstract residual (<ref>) in Theorem <ref> impliesΨ-Ψ_M_pw≤2β^-1N(E_MΨ_M)_ X^*+Ψ_M-E_MΨ_M_pw.There exists Φ∈X with Φ=1 and N(E_MΨ_M)_ X^* =N(E_MΨ_M;Φ)=A_ pw(E_MΨ_M,Φ)-F(Φ)+Γ(E_MΨ_M,E_MΨ_M,Φ)with the definition of N. This and the definition of N(Ψ_M;Φ) lead toN(E_MΨ_M)_ X^* =N(Ψ_M;Φ)+A_pw(E_MΨ_M-Ψ_M,Φ) +Γ(E_MΨ_M,E_MΨ_M,Φ)-Γ_pw(Ψ_M,Ψ_M,Φ)≤N(Ψ_M;Φ) + (1+MΓ_ pw )Ψ_M-E_MΨ_M_pwwith theboundofA_pw, elementary argumentswith the trilinear form andits boundΓ_ pw(deduced from Lemma <ref> as in Remark <ref>),and Min the second step. Since Ψ_M solves(<ref>),N(Ψ_M;Φ)=N(Ψ_M; χ) holds for χ:=(χ_1,χ_2):=Φ-I_MΦwith theMorley interpolation I_M Φ of Φ. Since Lemma <ref>.aimplies A_pw(Ψ_M,χ)=0, the definitions of Γ_ pw(∙,∙,∙)and F(∙) lead toN(Ψ_M; Φ) = Γ_pw(Ψ_M,Ψ_M,χ)- F(χ) = 1/2( [u_M,u_M], χ_2 )_L^2(Ω) - (f+[u_M,v_M], χ_1)_L^2(Ω)≤ ( ∑_K∈η_K^2 )^1/2 h_^-2(Φ-I_MΦ)_L^2(Ω)≤ C_A ( ∑_K∈η_K^2 )^1/2with weighted Cauchy inequalities in the second last step and the constant C_A≈ 1 from Lemma <ref>.bwith Φ=1in the end. The combination with (<ref>) readsΨ-Ψ_M_pw≤2 β^-1C_A( ∑_K∈η_K^2 )^1/2 +(1+2β^-1 (1+MΓ _pw )) Ψ_M-E_MΨ_M_pw.The last termiscontrolled as in (<ref>) andthis concludes the proof of thereliability estimate withC_ rel= max{ 2β^-1 C_A ,(1+ 2β^-1(1+M Γ _pw))C_B}.The proof of the efficiency of the volume term η_K is immediately adopted fromthat of<cit.>. The arguments in the proof of efficiency for the edge term η_E are the sameas for the (linear) biharmonicequation and can be adopted from<cit.>. Further details are omitted. The adaptation of the nonconforming scheme that allows for a right-hand side f ∈ H^-1(Ω) is possible by arguments in Section <ref>. § ACKNOWLEDGEMENTS The authors thank for the comments by the anonymous referee that led to Subsection <ref>. 
The research of the first author has been supported by the Deutsche Forschungsgemeinschaft in the Priority Program 1748 under the project "foundation and application of generalized mixed FEM towards nonlinear problems in solid mechanics" (CA 151/22-2).The research of the second author is supported by the NBHM Grant 0204/58/2018/R&D-II/14721. The finalization of this paper has been supported by DST SERB MATRICS grant of the second author MTR/2017/000199 and SPARC project(id 235) entitled the mathematics and computation of plates.§ APPENDIX §.§ Proof of Lemma <ref>The first part of the assertion is included in <cit.>and so is merely outlined for convenient reading of the second.The Rellich compact embedding theoremH^1_0(Ω)c↪ L^2(Ω)leads to c↪ H^-1(Ω) in the sequel.Hence S:={ g∈| g=1} is pre-compact in V^*. The operatorA∈ L(V;V^*), associated to the scalar product a via Av=a(v,∙) for allv∈ V (note A in contrast to the coefficients A); A isinvertible and A^-1∈ L(V^*;V) maps S onto W:=A^-1(S) pre-compact in H^1_0(Ω). The open balls B(z,ϵ/6) in Varoundz∈Wwith radius ϵ/6 with respect to the norm ∙_a form an open cover of the compact set W and so have a finite sub-cover for z_1,…, z_J∈W, W⊂∪_j=1^JB(z_j,ϵ/6)⊂ V.Since (Ω) is dense in V, there exists ζ_j∈(Ω) withz_j-ζ_j_a <ϵ/6. The smoothness of ζ_j proves ζ_j-I_Cζ_j_a≤ C h_max≤ C δfor any triangulation ∈(δ) and the nodal interpolation I_C in S_0^1(); the constant C depends on max_j=1, …,JD^2ζ_j,the shape-regularity parameter κ, and on λ.For any g∈∖{0} withz=A^-1(g)/g∈ W from (<ref>), there exists at least one index j∈{1,…,J} withz∈ B(z_j,ϵ/6). This, the choice of ζ_j, and (<ref>)with δ:=ϵ/(6C) provez-I_Cζ_j_a ≤ z-z_j_a+z_j-ζ_j_a+ζ_j-I_Cζ_j_a<ϵ/3+Cδ <ϵ/2.A rescaling of this leads to  A^-1(g)-g I_Cζ_j_a =g z-I_Cζ_j_a≤ϵg/2. This proves that the first term in the asserted inequalityis bounded by the right-hand side.The analysis of the second term considers the pre-compact subsetA∇ W={ A∇ z: Tz=g∈ S} of L^2(Ω;^n), where T: L^2(Ω) ⟶ H^1_0(Ω) is the solution map with z=Tg ∈ H^1_0(Ω).Sincethe open balls B(Q,ϵ/6)around Q∈ A∇ Win the L^2 norm form an open cover of the compact closure A∇ W in L^2(Ω;^n),there exists Q_1,…, Q_K inA∇ W withA∇ W ⊂∪_k=1^KB(Q_k,ϵ/6)⊂ L^2(Ω;^n).Since (Ω;^n) is dense in L^2(Ω;^n),there exists Φ_k∈(Ω;^n) withQ_k-Φ_k <ϵ/6. The smoothness of Φ_k and a Poincaré inequality(on simplices with constant h_T/π) proveΦ_k-Π_0 Φ_k≤ ||∇Φ_k|| h_max/π≤ C δfor any triangulation ∈(δ) with the L^2 projection Π_0 ontoP_0(;^n). The constant C=max{ ||∇Φ_1||,…, ||∇Φ_K||} depends on the smoothness of thefunctions Φ_1 ,…, Φ_K. For any g∈∖{0} withz=A^-1(g)/g∈ W from (<ref>), there exists at least one index k∈{1,…,K} withA∇ z∈ B(Q_k,ϵ/6). This, the choice of Φ_k, and (<ref>)with δ:=ϵ/(6C) prove(1-Π_0)A∇ z≤ A∇ z - Π_0 Φ_k ≤ A∇ z -Q_k+Q_k -Φ_k+(1-Π_0)Φ_k<ϵ/2.A rescaling of this proves (1-Π_0)A∇ z≤ϵg/2 for all Az=g∈ L^2(Ω) (with arbitrary norm g≥ 0). This concludes the proof. amsplain
Bayesian naturalness, simplicity, and testability applied to the B-L MSSM GUT

Panashe Fundira and Austin Purves
Department of Physics, Manhattanville College
Purchase, New York 10577, United States
[email protected]
[email protected]

December 30, 2023

Recent years have seen increased use of Bayesian model comparison to quantify notions such as naturalness, simplicity, and testability, especially in the area of supersymmetric model building. After demonstrating that Bayesian model comparison can resolve a paradox that has been raised in the literature concerning the naturalness of the proton mass, we apply Bayesian model comparison to GUTs, an area to which it has not been applied before. We find that the GUTs are substantially favored over the non-unifying puzzle model. Of the GUTs we consider, the B-L MSSM GUT is the most favored, but the MSSM GUT is almost equally favored.

§ INTRODUCTION

The naturalness principle — the principle that a correct theory should not require fine-tuning of its parameters to agree with experimental data — has been widely used in physics both to make predictions and to guide theoretical study<cit.>. When a theory does require fine-tuning to be in agreement with experimental data, it is in conflict with the naturalness principle and is said to have a fine-tuning problem. For example, the horizon problem and the flatness problem are both fine-tuning problems in the big bang theory. Together, they motivated the discovery of the theory of inflation<cit.> and the theory of the ekpyrotic universe<cit.>. Notable examples of contemporary fine-tuning problems include the cosmological constant problem<cit.>, the strong CP problem<cit.>, and the little hierarchy problem of the Minimal Supersymmetric Standard Model (MSSM)<cit.>. Each of these examples has inspired many new theoretical developments. See Ref. Nobbenhuis:2006yf for a review of possible solutions inspired by the cosmological constant problem. The strong CP problem has inspired theories of axions<cit.>, and a variety of other possible solutions<cit.>[See Refs. Diaz-Cruz:2016pmm,Swain:2010rr for brief lists of possible solutions to the strong CP problem.]. The little hierarchy problem has helped inspire the Super-Little Higgs theory <cit.>, the Extended MSSM <cit.>, and the NMSSM <cit.>. Perhaps the most influential of fine-tuning problems is that of fine-tuning in the mass of the standard model (SM) Higgs boson. This is known as the (big) hierarchy problem <cit.>.[It has been argued that the big hierarchy problem is not a true fine-tuning problem because it is dependent on how quadratic divergences are regularized. For example, see Ref. Farina:2013mla.]

The lack of experimental evidence of new physics from recent particle physics experiments such as the LHC has left us with a plurality of models that are consistent with experimental data. This has led to increasing reliance upon naturalness, both for making predictions and for motivating various theories of physics beyond the standard model, especially supersymmetry. In order to make quantitative predictions using the naturalness principle, it is necessary to have a quantitative definition of naturalness and a criterion to represent the naturalness principle. This arrived in the form of the Barbieri-Giudice (BG) sensitivity, introduced in Refs. Ellis:1986yg,Barbieri:1987fn.
The BG sensitivity is a measure of fine-tuning, so that large BG sensitivity corresponds to a lack of naturalness. That is,

large BG sensitivity ⇔ fine-tuned ⇔ unnatural,
small BG sensitivity ⇔ not fine-tuned ⇔ natural.

This became a dominant way to quantify fine-tuning (for example, see Refs. Antoniadis:2014eta,Ciafaloni:1996zh,de Carlos:1993yy,Casas:2003jx,Casas:2004gh,Casas:2005ev,Allanach:2006jc,Giusti:1998gz). However, some shortcomings of the BG sensitivity have been identified, suggesting that it is not appropriate for every situation <cit.>.

The naturalness principle is not without its detractors. It has been pointed out that the anthropic principle could result in apparent violation of the naturalness principle in our observable universe <cit.>. This idea seems to have gained traction recently as the LHC has not revealed a solution to the hierarchy problem. It has also been argued that the naturalness principle is more an interesting historical and sociological factor in physics than a useful aid in objectively determining the truth value of theories <cit.>.

At the same time, however, there is growing recognition that the naturalness principle may be rooted in Bayesian model comparison. The connection between the naturalness principle and Bayesian model comparison was noticed in Ref. Strumia:1999fr. In Ref. Allanach:2007qk, it was shown that Bayesian model comparison could be used to reach some of the same conclusions that previously had been reached by the naturalness principle. Then Refs. Cabrera:2008tj,Cabrera:2009dm,Ghilencea:2012qk,Fichet:2012sn used Bayesian model comparison to derive the BG sensitivity. Not only did this show that physicists' most popular quantitative measure of fine-tuning can be derived from Bayesian model comparison, but Bayesian model comparison also provides an objective interpretation of that fine-tuning measure. In Ref. Fichet:2012sn it is furthermore shown that the derived fine-tuning measure encompasses not only the BG sensitivity but also some of the refinements proposed in Refs. Anderson:1994dz,Athron:2007ry, and that it addresses some of the ambiguities in the BG sensitivity.

The Bayesian approach to naturalness has caught on and been put to use in recent years. It was applied to a number of MSSM-related model comparisons in Refs. Fowlie:2014xha,Fowlie:2014faa,Fowlie:2015uga,Athron:2017fxj. It has also been applied to more general extensions of the SM in Ref. Clarke:2016jzm and to the relaxion mechanism in Ref. Fowlie:2016jlx. In Ref. AbdusSalam:2015uba, Bayesian naturalness is used to argue that natural supersymmetry is still viable, and to identify the more natural allowed regions of parameter space. In Ref. Kim:2013uxa it is used to compare the CMSSM to the CNMSSM. In Ref. Dumont:2013wma it is used to study higher-dimensional operators in the Higgs sector. In Ref. Ghilencea:2012gz the Bayesian roots of the BG sensitivity are used to provide context and interpretation of an analysis based on the BG sensitivity. Some of these developments have been discussed in Ref. Ghilencea:2013fka. Useful review can be found in Ref. Kvellestad:2015cpa.

It has also been noticed, though less talked about in the physics literature, that Bayesian model comparison accounts for the simplicity and testability of theories. See Ref. Mackay for an earlier paper on this and see Ref. FowlieTalk for a more recent discussion. See Refs. Nesseris:2012cq,March,Kunz:2006mc for some discussion of this in the context of evaluating cosmological models. See Refs. Earman,Forster,Oppy for discussion in the philosophy of science literature. In Ref.
Nesseris:2012cq the connection to testability (predictiveness) is made explicit.

Bayesian naturalness has been developed in the context of discussing supersymmetry and the little hierarchy problem, and it has seen the most use within this context; see, for example, the references in the previous paragraph (see Refs. Clarke:2016jzm,Fowlie:2016jlx for two examples outside this context). In this paper, we apply Bayesian model comparison to gauge unification, an area to which it has not been applied before. Some of the Grand Unified Theories (GUTs) that we consider have additional parameters related to threshold corrections, making them more complicated than a simple GUT. Our analysis shows that despite this added complexity, these GUTs are preferable to the puzzle model because the observables are rather insensitive to the additional parameters.

We also apply Bayesian model comparison to the proton mass, which is important because applying the BG sensitivity to it leads to a paradox. The BG sensitivity seems to suggest that the proton mass is fine-tuned even though most physicists would agree that it is not actually fine-tuned<cit.>. We show that with Bayesian naturalness this paradox is resolved, thus justifying the claim that Bayesian naturalness is a more correct way to quantitatively understand naturalness. We will also apply the language of Bayesian naturalness to a simple GUT. Simplicity sits alongside naturalness as another intuitive, or aesthetic, criterion that physicists have used to guide research throughout history. The fact that Bayesian model comparison can quantitatively weigh both naturalness and simplicity has uses that we demonstrate by applying it to a number of more complicated GUTs, a task that requires original development of a Monte Carlo integration program utilizing the computational power of a Graphics Processing Unit (GPU).

The paper is organized as follows. In section <ref> the Bayesian naturalness formalism is discussed, reviewing some existing literature on the topic. In section <ref> we review the derivation of the BG sensitivity from Bayesian model comparison in a simple case of a model with one observable and one parameter. Then we apply Bayesian model comparison to the proton mass, and to a number of GUTs. Section <ref> presents our conclusions and discussion. <ref> contains some brief details about how we implement Monte Carlo integration on a GPU. We have released our code to the public<cit.>.

§ BAYESIAN MODEL COMPARISON

§.§ Bayes factors

Bayes' theorem is a fundamental theorem in probability that follows deductively from the definition of conditional probability. In the context of making scientific inferences it can be written as

p(ℳ|d) = p(d|ℳ)p(ℳ)/p(d),

where ℳ is some model that is being considered, and d is some experimental data. The quantity p(ℳ|d) is called the posterior probability of model ℳ, and p(ℳ) is called the prior probability.

There are two major problems with using Bayes' theorem to calculate posterior probabilities directly. First, such statements depend on the prior probabilities of all other hypothetical models that could predict data d. This enters Bayes' theorem through the quantity p(d). Second, such statements will depend on the prior probability of the model being considered, p(ℳ), which is subjective. The first of these two problems is avoided by considering the ratio of posterior probabilities of two models, so that the prior probability of the data, p(d), cancels out.
Let us refer to the two models as 𝒫 and ℳ (the reason for this choice will become clear in subsection <ref>):

p(𝒫|d)/p(ℳ|d) = p(d|𝒫)/p(d|ℳ) p(𝒫)/p(ℳ).

The second problem is avoided by focusing on the quantity p(d|𝒫)/p(d|ℳ). This quantity is referred to as the Bayes factor, B_𝒫ℳ:

p(𝒫|d)/p(ℳ|d) = B_𝒫ℳ p(𝒫)/p(ℳ),
B_𝒫ℳ = p(d|𝒫)/p(d|ℳ).

The Bayes factor quantifies whether the data d favor model 𝒫 over model ℳ (B_𝒫ℳ > 1) and how strongly. The farther the Bayes factor is from unity, the more strongly the data d favor model 𝒫 over ℳ. In practice one is usually considering not the sum total of all relevant experimental data, but a specific experimental measurement or a set of experimental measurements. Referring to the data of immediate consideration as d_2, and all other baseline data as d_1, and using Bayes' theorem, the ratio of posterior probabilities can then be written

p(𝒫|d_2,d_1)/p(ℳ|d_2,d_1) = p(d_2|𝒫,d_1)/p(d_2|ℳ,d_1) p(𝒫|d_1)/p(ℳ|d_1)
= p(d_2|𝒫,d_1)/p(d_2|ℳ,d_1) p(d_1|𝒫)/p(d_1|ℳ) p(𝒫)/p(ℳ)
= B_𝒫ℳ^(2) B_𝒫ℳ^(1) p(𝒫)/p(ℳ).

The B_𝒫ℳ^(2) and B_𝒫ℳ^(1) are called partial Bayes factors<cit.>. For brevity and compact notation, from here on in this paper, when we say "Bayes factor" we are referring to a partial Bayes factor; we do not explicitly write the ^(1) or ^(2) superscripts or subscripts, nor do we explicitly include baseline data in our notation. It should be clear from context what data is being considered throughout this paper.

The numerator and denominator of the Bayes factor are referred to as the Bayesian evidence for 𝒫 and ℳ, and are denoted 𝒵_𝒫 and 𝒵_ℳ respectively, so that the Bayes factor can be written

B_𝒫ℳ = 𝒵_𝒫/𝒵_ℳ.

The Bayesian evidence for a model is simply the probability of the data d assuming the model ℳ is true. The Bayes factor for some data d is the crucial quantity for comparing two models with respect to how much the data favor one model over the other. Note that B_𝒫ℳ = 1/B_ℳ𝒫. Interpretation of Bayes factors is sometimes aided by the Jeffreys' scale <cit.>.

§.§ Bayesian evidences

Bayes factors are computed by computing the Bayesian evidences in the numerator and the denominator. Computing the Bayesian evidence for a model with n unknown parameters, referred to as θ_i where i = 1⋯ n, requires assigning a prior probability distribution, p(θ_i), to the parameters. Suppressing the indices from here forward, the Bayesian evidence can then be written as

𝒵_ℳ = ∫ p(d|θ)p(θ|ℳ)dθ,

where the integral is over the entire n-dimensional parameter space and dθ denotes the volume element in parameter space. The probability p(d|θ) is referred to as a likelihood function. Many experimental results are Gaussian likelihood functions, due to the central limit theorem<cit.>. For example, the experimental result that the Z-boson mass is M_Z = 91.1876±0.0021 GeV <cit.> means the likelihood p(d|M_Z) is a Gaussian function of M_Z with central value 91.1876 GeV and standard deviation 0.0021 GeV. The m observables are referred to as 𝒪_i where i=1⋯ m. The Bayesian evidence can then be written as

𝒵_ℳ = ∫ p(d|𝒪)p(θ|ℳ)dθ,

where the relationship between the observables and the parameters, 𝒪(θ), in the model ℳ is used to compute the integral. As long as there is no covariance between observables (or it can be neglected), the likelihood function p(d|𝒪) can be written as a product of separate likelihood functions for each observable. If the experimental uncertainty is sufficiently small, these likelihood functions can be approximated by Dirac δ-functions inside an integral.
That is, using the form of a normalized Gaussian probability distribution with mean μ and variance σ^2,

∫ 1/(σ√(2π)) e^-1/2(x-μ)^2/σ^2 f(x)dx ≈ ∫ δ(x-μ) f(x)dx,

as long as f(x) does not vary too much close to x=μ. Both of these simplifications were used explicitly in Ref. Fowlie:2014xha, for example.

§.§ Puzzle models

Comparisons between two specific models, for example the CMSSM and the MSSM, are instructive. However, they do not capture the whole of the naturalness principle. It was shown in Ref. Fichet:2012sn that a puzzle model, referred to as 𝒫, can aid in this. A puzzle model, 𝒫, defined as a model in which the observables are simply considered to be fundamental parameters, is particularly useful for demonstrating the existence of the (big and little) hierarchy problem. Consider arguments that the CMSSM has a little hierarchy problem. To what model should the CMSSM be compared? Comparison to the SM is problematic because, depending on how one handles regularization of quadratic divergences, the SM may have a big hierarchy problem that dwarfs the CMSSM's little hierarchy problem <cit.>. Instead, the CMSSM should be compared to a puzzle model, defined such that the electroweak scale is a fundamental parameter of the model. Then the Bayes factor favors the puzzle model<cit.>.[Such a comparison is carried out in Refs. Fowlie:2014xha,Balazs:2013qva by comparing the CMSSM to the SM less quadratic divergences. It is pointed out in Ref. Balazs:2013qva that the latter is essentially a puzzle model.] The fact that Bayesian model comparison favors the puzzle model over the CMSSM is a manifestation of the little hierarchy problem. The Bayes factor can then serve as a quantitative measure of fine-tuning, like the BG sensitivity. The Bayes factor in Bayesian comparison with a puzzle model can be used to reveal a number of fine-tuning problems, including the big and little hierarchy problems, and the cosmological constant problem.

§.§ Prior distributions

We use the log-uniform prior for parameters throughout this paper:

p(θ)dθ = 1/log(θ_max/θ_min) 1/θ dθ.

Note that the log-uniform prior follows from requiring that the log of θ have a uniform distribution. The quantity log(θ_max/θ_min) is called the prior volume. The log-uniform prior is not uncommon in the literature on Bayesian naturalness. See, for example, Refs. Fichet:2012sn,Fowlie:2016jlx,Athron:2017fxj,Fowlie:2015uga,Fowlie:2014faa,Fowlie:2014xha.

The justification is that it is invariant under power and scale transformations of the parameters. That is to say, if the parameter θ has a log-uniform prior, then θ' = bθ^a also has a log-uniform prior. Note that in the region between θ_min and θ_max, the log-uniform prior is proportional to 1/θ. This is similar to certain cases of the noninformative Jeffreys' prior and reference priors that take the form p(θ)∝ 1/θ. See Ref. Kass for review.

This invariance is important for application to physics models. For example, the relevant observable in the hierarchy problem is the electroweak scale. The puzzle model will have the electroweak scale as a fundamental parameter with a log-uniform prior. But to which exact quantity should the log-uniform prior be assigned? Should it be m_Z, m_Z^2, or perhaps even m_W or the Higgs VEV, v? With the log-uniform prior, all of these choices are equivalent because assigning the log-uniform prior to one means they all have a log-uniform prior. A similar question arises in gauge unification.
Should the fundamental parameter in gauge unification models be taken to be the gauge coupling, g, or α = g^2/(4π)? With the log-uniform prior, both choices are equivalent.

§ IMPLICATIONS

§.§ BG-sensitivity and the hierarchy problems

The BG sensitivity arises in a straightforward way from the Bayesian formulation of naturalness. Derivation of the BG sensitivity from Bayesian naturalness has been done in Refs. Cabrera:2008tj,Cabrera:2009dm,Ghilencea:2012qk,Fichet:2012sn. Here, we review this derivation in a simple case and we use the result in the next subsection.

Consider a Bayesian comparison between a puzzle model, 𝒫, and a candidate model, ℳ, with a parameter, θ. The comparison will use a single observable 𝒪 that has been measured in experiments and found to have value 𝒪_ex with very small experimental uncertainty. The likelihood function can then be approximated as a δ-function:[It is not required that a likelihood function p(d|𝒪) be normalized so that its integral with respect to 𝒪 is unity. However, any overall coefficient will just cancel out in B_𝒫ℳ, so it is not necessary to account for the normalization here.] p(d|𝒪) = δ(𝒪-𝒪_ex). As discussed in subsection <ref>, log-uniform priors should be used for both the observable and the parameter. The Bayesian evidence for the puzzle model then becomes

𝒵_𝒫 = 1/log(𝒪_max/𝒪_min) 1/𝒪_ex.

Assuming there is a unique point in the one-dimensional parameter space for which the observable takes its experimental value, the Bayesian evidence for the candidate model becomes

𝒵_ℳ = 1/log(θ_max/θ_min) 1/θ_ex 1/(∂𝒪/∂θ)|_θ_ex,

where θ_ex denotes the value of θ for which 𝒪 takes its experimental value. That is, 𝒪(θ_ex) = 𝒪_ex. The Bayes factor, B_𝒫ℳ = 𝒵_𝒫/𝒵_ℳ, then becomes

B_𝒫ℳ = log(θ_max/θ_min)/log(𝒪_max/𝒪_min) (θ/𝒪)(∂𝒪/∂θ)|_θ_ex = log(θ_max/θ_min)/log(𝒪_max/𝒪_min) (∂log𝒪/∂logθ)|_θ_ex.

The partial derivative on the right, (∂log𝒪/∂logθ)|_θ_ex, is exactly the BG sensitivity. The ratio multiplying it is the ratio of prior volumes. A large BG sensitivity would imply that the puzzle model is favored and the candidate model disfavored, as long as the ratio of the prior volumes is not too different from unity.

Note the few assumptions necessary to demonstrate the connection between the Bayes factor and the BG sensitivity: similar prior volumes, log-uniform priors for both the observable and the parameter, and a sufficiently precise measurement of the observable so that the likelihood can be approximated by a δ-function.

The observation that prior volumes affect the Bayes factor is not new. For example, Bartlett's paradox<cit.> points out that when an alternative hypothesis uses an improper prior for a parameter to be estimated (for example, a uniform prior over the interval (-∞,∞)), the posterior probability of the null hypothesis can become unity regardless of the data.

Some comments are in order regarding the ratio of prior volumes, which multiplies the BG sensitivity in equation (<ref>). Consider the case of the little hierarchy problem, which was the original motivation for the invention of the BG sensitivity. Here the observable, 𝒪, is the electroweak scale. In the standard model, the electroweak scale is proportional to the mass term of the Higgs doublet. In the standard model without quadratic divergences, the mass of the Higgs doublet is a fundamental parameter. Therefore the standard model is equivalent to the puzzle model for the purposes of computing Bayes factors.
The candidate model under consideration is the MSSM, where the fundamental parameter of interest to the little hierarchy problem is usually taken to be the μ-term. The SM Higgs doublet mass and the MSSM μ-term are of the same basic type. That is, they are both dimensionful mass terms not protected by any symmetry. Therefore it is reasonable to suppose them to have similar prior volumes. So the ratio of prior volumes is close to unity and the Bayes factor is approximately just the BG sensitivity. Not only does this explain why the BG sensitivity is appropriate for quantifying fine-tuning in the MSSM, but it also gives the BG sensitivity a concrete interpretation in terms of probabilities. Perhaps most importantly, it suggests how we might identify situations in which the BG sensitivity is not appropriate for quantifying fine-tuning.

§.§ Proton mass

The mass of the proton can be estimated as the energy scale at which the strong coupling constant becomes non-perturbative. As noted in Refs. Anderson:1994dz,Athron:2007ry, this energy scale is very sensitive to the high-scale boundary value of the strong coupling.["High scale" is usually taken to be the Planck mass.] As a result, the proton mass has a very high BG sensitivity. This is a paradox if the BG sensitivity is taken as a measure of fine-tuning, since most physicists would agree that the proton mass is not fine-tuned.

In search of a resolution of this paradox, it is pointed out in Ref. Anderson:1994dz that one difference between the sensitivity in the proton mass and the sensitivity in some fine-tuning problems, such as the (big and little) hierarchy problem, is that the sensitivity in the proton mass is a global sensitivity. That is, the proton mass is highly sensitive over the entire parameter space. In the hierarchy problems, however, the electroweak scale is only highly sensitive in a small part of the parameter space, and it is that specific part of the parameter space that nature has chosen. Motivated by this observation, the authors of Ref. Anderson:1994dz propose a new measure of fine-tuning that is normalized by a kind of average sensitivity so that the proton mass, according to this new measure, is not fine-tuned.

In fact, according to Bayesian naturalness, the proton is not fine-tuned. This means it addresses the paradox noted in Ref. Anderson:1994dz automatically. The results of the previous section will help illuminate this. Following the analysis in Ref. Anderson:1994dz, using the one-loop renormalization group equation (RGE), the low-energy value of the strong coupling can be written as

α^-1_3(μ) = α^-1_3(M_P) - b_3/2π log(μ/M_P),

where M_P is the Planck mass. For this discussion of the proton mass we neglect any threshold corrections, as their impact on the numerical results would not be large enough to change any of our conclusions. Using the scale at which the strong coupling becomes unity as the proton mass, that is, α^-1_3(m_p) = 1 where m_p is the proton mass, yields

1 = α^-1_3(M_P) - b_3/2π log(m_p/M_P).

Treating m_p as the observable and α_3(M_P) as a fundamental parameter, we can write an expression for the observable in terms of the parameter. Abbreviating α_3(M_P) as α_3,

m_p(α_3) = M_P e^(2π/b_3)(α_3^-1-1).

Then we can calculate the BG sensitivity:

∂log m_p/∂logα_3 = -2π/b_3 α_3^-1 ≈ 45,

where the numerical result is obtained by substituting α_3^-1 ≈ 50 and b_3 = -7<cit.>.
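As a quick numerical check, this sensitivity is easy to reproduce in a few lines of Python (the same language as the single-threaded verification code described in the appendix). This is an illustrative sketch only: the variable names are ours, and α_3^-1(M_P) = 50 is the approximate input value used above.

import numpy as np

M_P = 1.22e19     # Planck mass in GeV
b3 = -7.0         # slope factor b_3 used above
a3_inv = 50.0     # assumed high-scale value alpha_3^{-1}(M_P)

# Proton mass estimated as the scale where alpha_3 becomes unity:
# m_p(alpha_3) = M_P * exp[(2*pi/b3) * (alpha_3^{-1} - 1)]
def m_p(a_inv):
    return M_P * np.exp(2.0 * np.pi / b3 * (a_inv - 1.0))

# BG sensitivity d(log m_p)/d(log alpha_3) by a central finite difference;
# note that d(log alpha_3) = -d(log alpha_3^{-1})
eps = 1e-6
sens = -(np.log(m_p(a3_inv * (1 + eps)))
         - np.log(m_p(a3_inv * (1 - eps)))) / (2 * eps)
print(sens)       # ~ 45, matching -2*pi*alpha_3^{-1}/b3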
If the BG sensitivity is taken as an indicator of fine-tuning, this would suggest that the proton mass is fine-tuned, a conclusion most physicists would intuitively disagree with. But when B_𝒫ℳ is taken as the indicator of fine-tuning, we see that the ratio of the prior volumes enters the calculation. Equation (<ref>) can be used to calculate B_𝒫ℳ in this case:

B_𝒫ℳ = log(α_3_max/α_3_min)/log(m_p_max/m_p_min) (-2π/b_3) α_3^-1.

Unlike in the case of the little hierarchy problem, there is no reason to suppose that the ratio of prior volumes should be close to unity. The observable, m_p, is a dimensionful mass while α_3 is a dimensionless coupling constant, so there is no reason they should have similar prior volumes. Therefore a large BG sensitivity should not be taken to imply that B_𝒫ℳ is large and the proton mass fine-tuned.

Not only is there no reason to suppose that the ratio of the prior volumes is close to unity, but such a supposition seems biased toward favoring the puzzle model and finding the proton mass to be fine-tuned. Suppose that the prior for α_3 is chosen to cover the range from 0.001 to 1, or three orders of magnitude. We then find from equation (<ref>) that the range of values of m_p that could be obtained from the model spans approximately 390 orders of magnitude. This is due to the large global sensitivity noted in Ref. Anderson:1994dz. Supposing that the prior volume of m_p should be comparable to the prior volume of α_3 (three orders of magnitude in this case) arbitrarily restricts the puzzle model to a relatively narrow interval around the observed value of m_p while the candidate model is allowed to span a much larger interval. This, unsurprisingly, would make the Bayes factor appear to favor the puzzle model.

Perhaps it would be sufficient to leave the discussion of the proton mass here. The reader is hopefully convinced that there is no reason the ratio of prior volumes should be close to unity, and thus no reason to take the BG sensitivity at face value as an indicator of fine-tuning in the proton mass. Furthermore, assuming the ratio of prior volumes to be close to unity is biased toward finding the proton mass to be fine-tuned. Given that such an assumption is implicit in using the BG sensitivity as an indicator of fine-tuning (see subsection <ref>), it is no surprise that the BG sensitivity makes the proton mass appear fine-tuned. Thus the paradoxical result of applying the BG sensitivity to the proton mass is resolved. According to Bayesian naturalness, the proton mass is not fine-tuned and there is no reason to take the BG sensitivity as an indicator of fine-tuning in the proton mass.

That said, the discussion up to this point suggests a possible heuristic for restricting the ratio of prior volumes, thereby enabling us to make some more concrete statements about B_𝒫ℳ. This heuristic, which we call the fairness heuristic, is not really necessary given the above discussion, but it is interesting to explore its implications. The fairness heuristic is stated as follows: the prior volumes for the puzzle model and the candidate model should be chosen so that the two models are allowed to cover the same range of values for the observable. We refer to this heuristic simply as the fairness heuristic because it enforces a kind of fairness in choosing prior volumes for the two models.
Applied to the case of the proton mass, the fairness heuristic says simply

m_p_max = m_p(α_3_max),
m_p_min = m_p(α_3_min).

An appropriate choice for α_3_max is unity, because if it were any larger the theory would be non-perturbative. Applying the fairness heuristic using equation (<ref>) yields

log(m_p_max/m_p_min) = 2π/b_3 (α_3_max^-1 - α_3_min^-1) ≈ -2π/b_3 α_3_min^-1.

B_𝒫ℳ then becomes

B_𝒫ℳ ≈ -ln(α_3_min) α_3_min α_3^-1 ≲ 4,

where the numerical result is obtained using the fact that α_3_min must be less than the measured value α_3 ≈ 1/50.

A few comments are in order, as there are strong grounds for questioning the fairness heuristic. First, when comparing more than two models, as we do below, it is not clear which of multiple candidate models should be used to determine the prior volume of the puzzle model. Second, the fairness heuristic is only meaningful when the prior distributions have some maximal and minimal values for the parameters. Prior distributions which cover the interval (-∞,∞) have no such maximal and minimal values, and the fairness criterion is meaningless there. Third, even if the prior has maximal and minimal values, in some cases these may not correspond to any maximal and minimal values of the observables. In such cases, there is no way to apply the fairness heuristic. Finally, a model that predicts that an observable should lie in a narrow range ought to be strongly favored, but applying the fairness heuristic in such a case would result in a low prior volume for the puzzle model, favoring the puzzle model instead of the highly predictive candidate model. In light of these points, we do not claim that the fairness heuristic should be used for model comparisons beyond this particular example. We emphasize again that the fairness heuristic is not really necessary to reach our central conclusion in this subsection, which is that, according to Bayesian naturalness, the proton mass is not necessarily fine-tuned and the BG sensitivity should not be taken at face value as an indicator of fine-tuning in the proton mass.

§.§ The weak-scale MSSM GUT

In previous subsections we have discussed Bayesian naturalness as it relates to the (big and little) hierarchy problem, and to the proton mass. These problems are not unfamiliar to the literature on naturalness, and it is increasingly recognized that Bayesian naturalness reliably reproduces physicists' intuitive notion of naturalness. As we mentioned in the introduction, it has also been noticed that Bayesian model comparison reproduces physicists' notion of simplicity<cit.>. We demonstrate an example by applying Bayesian model comparison to the weak-scale MSSM GUT<cit.> (see Ref. Martin:1997ns and references therein for review). This will also be a useful preliminary to applying Bayesian model comparison to the more complicated GUTs below.

The MSSM contains the SM matter fields each with scalar superpartners, the SM gauge bosons each with fermionic superpartners (gauginos), and two Higgs supermultiplets to effect electroweak symmetry breaking. One important consequence of the additional particle content of the MSSM is that it changes the slope factors in the renormalization group equations of the gauge couplings. When the gauge couplings are evolved to high scales under the MSSM slope factors, they unify almost exactly (i.e. within experimental uncertainty). This unification is shown in Fig. <ref>.
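The near-exact unification just described can be illustrated with a short Python sketch. The code below (variable names ours) uses the one-loop running and the experimental inputs quoted later in this subsection, estimating M_u from the condition α_1(M_u) = α_2(M_u) and then checking where α_3^-1 lands.

import numpy as np

M_Z = 91.1876                                  # GeV
a_MZ = np.array([0.016946, 0.033793, 0.1181])  # alpha_1, alpha_2, alpha_3 at M_Z
b = np.array([33.0/5.0, 1.0, -3.0])            # MSSM slope factors b_1, b_2, b_3

def alpha_inv(mu):
    # One-loop running: alpha_a^{-1}(mu) = alpha_a^{-1}(M_Z) - b_a/(2*pi) * ln(mu/M_Z)
    return 1.0 / a_MZ - b / (2.0 * np.pi) * np.log(mu / M_Z)

# Unification scale from the condition alpha_1(M_u) = alpha_2(M_u)
M_u = M_Z * np.exp(2.0 * np.pi * (1.0 / a_MZ[0] - 1.0 / a_MZ[1]) / (b[0] - b[1]))
print(M_u)              # ~ 2e16 GeV
print(alpha_inv(M_u))   # alpha_3^{-1}(M_u) lands within about half a percent
                        # of alpha_1^{-1}(M_u) = alpha_2^{-1}(M_u)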
Supersymmetry in the MSSM is broken via soft mass parameters, which raise the masses of the scalar superpartners and the gauginos above those of their SM counterparts. The soft mass parameters are typically assumed to be all around the same mass scale, called M_SUSY. At mass scales much less than M_SUSY, the scalar superpartners and gauginos are decoupled and the particle content and slope factors are effectively those of the SM. The parameter M_SUSY is then important to unification because it defines the scale at which the slope factors change from their SM to their MSSM values.

The weak-scale MSSM GUT assumes that the SUSY scale, M_SUSY, is approximately equal to the electroweak scale, M_Z. LHC searches have placed a lower bound on sparticle masses that requires that the SUSY scale be substantially higher. Below we add threshold corrections to this model so that the SUSY scale can be higher. First we discuss the weak-scale MSSM GUT, as it serves as an instructive example of how Bayesian model comparison reproduces conclusions that otherwise would be rooted in an aesthetic principle that simple theories are to be favored.

The weak-scale MSSM GUT is favored on the basis of simplicity because it can explain the measured values of the three gauge couplings α_1, α_2, and α_3 using fewer than three parameters. The intuitive conclusion is that the GUT is favorable because having fewer parameters is simpler. In this subsection we use Bayesian model comparison to arrive at the same conclusion.

We choose the two parameters of the GUT to be the unified gauge coupling, α_u, and the scale of unification, M_u. The SUSY scale, M_SUSY, representing the average mass of the sparticles, is assumed to be approximately the electroweak scale, that is, M_SUSY ≈ M_Z. We work with log-uniform priors, so our results are independent of scaling or power-law redefinitions of the parameters, as shown in subsection <ref>.

The non-unifying model treats the gauge couplings as separate fundamental parameters, so it is a puzzle model. In the GUT, one-loop RGEs relate the observables to the parameters:

α_a^-1 = α_u^-1 + b_a/2π ln(M_u/M_Z),

where a∈{1,2,3}. The Bayes factor favoring the GUT, ℳ, over the non-unifying model, 𝒫, is

B_ℳ𝒫 = 𝒵_ℳ/𝒵_𝒫,

where

𝒵_ℳ = ∫ p(d|𝒪)p(θ|ℳ)dθ,
𝒵_𝒫 = ∫ p(d|𝒪)p(𝒪|𝒫)d𝒪.

The likelihood function is a product of three Gaussian likelihood functions, one for each of the three gauge couplings. The Bayesian evidence for the puzzle model is an integral over a three-dimensional parameter space, and each of the three Gaussian factors in the likelihood function may be approximated as a δ-function to evaluate the integral. Assuming the same prior for all three gauge couplings, that is,

α_1_max = α_2_max = α_3_max ≡ α_max,
α_1_min = α_2_min = α_3_min ≡ α_min,

and using the experimental values<cit.>

α_1 = 0.016946, α_2 = 0.033793, α_3 = 0.1181,

yields

𝒵_𝒫 = 1/ln^3(α_max/α_min) 1/(α_1α_2α_3) ≈ 124,

where we have chosen α_max to be unity and α_min to be the fine-structure constant (≈ 1/137).

The Bayesian evidence for the GUT is an integral over a two-dimensional parameter space. The Gaussian likelihood functions for α_1 and α_2 can be approximated as δ-functions because their experimental uncertainties, σ_1 and σ_2, are much smaller than σ_3, the experimental uncertainty in α_3. The experimental uncertainties are<cit.>

σ_1 = 3.5×10^-6, σ_2 = 1.9×10^-5, σ_3 = 0.0011.

These δ-functions can be used to evaluate the two-dimensional integral.
Note that this necessitates a change of variables, yielding a Jacobian factor:

|(∂𝒪_i/∂θ_j)| = 1/2π α_1^2α_2^2/(α_u^2 M_u) |b_1-b_2|,

where 𝒪_i = (α_1,α_2) and θ_j = (α_u,M_u). Using a normalized Gaussian likelihood for α_3, the integral evaluates to

𝒵_ℳ = 1/(σ_3√(2π)) e^-1/2(α_3-α_3_ex)^2/σ_3^2 1/ln(α_max/α_min) 1/ln(M_u_max/M_u_min) 2πα_u/(α_1^2α_2^2|b_1-b_2|) ≈ 1.00×10^5,

where we have used σ_3 = 0.0011<cit.>, chosen M_u_max to be the Planck scale, M_P = 1.22×10^19 GeV, chosen M_u_min to be the Z mass, 91.1876 GeV,<cit.> chosen α_u_max to be unity, chosen α_u_min to be the fine-structure constant (≈ 1/137), and used the well-known MSSM slope factors

b_3^MSSM = -3, b_2^MSSM = 1, b_1^MSSM = 33/5.

The minimal and maximal values of the parameters are summarized in Table <ref>. Note that there is a unique value for α_u and M_u that yields the experimentally observed values of α_1 and α_2 when substituted into equation (<ref>). In equation (<ref>), α_u refers to that unique value. This constraint was imposed by the two δ-functions that were used to evaluate the integral. Furthermore, α_3 in equation (<ref>) refers to the unique value that is yielded by substituting those unique values of α_u and M_u into equation (<ref>). In general, this value need not match, or even be close to, α_3_ex. The fact that it does match (within experimental uncertainty) is what makes this unification model so appealing. It is also what makes the normalized Gaussian factor in the front of equation (<ref>) quite large, so that the conclusion from Bayesian model comparison echoes the physicist's intuitive conclusion based on simplicity.

The approximate analytical result in equation (<ref>) is attainable in the weak-scale MSSM GUT. But the GUTs discussed below are more complicated, so all the Bayesian evidences computed below are computed using Monte Carlo integration run on a GPU. For a few brief details on the implementation, see <ref>. We have released our code to the public<cit.>. Monte Carlo integration was also used to verify the result in (<ref>), yielding 𝒵 = (1.0569±.0014)×10^5, where the uncertainty given here and in our other results throughout this paper is the 1σ statistical uncertainty arising from the Monte Carlo integration. The ∼6% discrepancy is evidently not due to the Monte Carlo integration and is probably due to approximating the Gaussian likelihood functions as δ-functions.

Taking the quotient of the Bayesian evidences to compute the Bayes factor yields

B_ℳ𝒫 ≈ 806,

so the Bayes factor strongly favors the weak-scale MSSM GUT over the puzzle model. This result is exactly in line with physicists' intuitive conclusion that the GUT is more favorable based on simplicity. So how has simplicity manifested itself in the Bayesian analysis? As mentioned earlier, due to the GUT having fewer free parameters than the puzzle model (a characteristic of simple theories), its prior parameter space is more restricted. Since this restricted parameter space is consistent with the observed data, it is much more probable that a random point selected from this restricted parameter space is consistent with the data than a random point selected from the much broader and less restricted parameter space of the puzzle model. Put another way, if the puzzle model were true, it would be quite surprising and a coincidence that the observed data happen to be consistent with unification, but if the GUT is true then it is not a coincidence but rather is to be expected.
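For concreteness, a single-threaded Python analogue of this Monte Carlo computation is sketched below. Averaging the likelihood over draws from the log-uniform priors is an unbiased estimator of 𝒵_ℳ. Because the likelihood is appreciable in only a tiny fraction of the prior volume, a naive estimator of this kind needs on the order of 10^9 samples for a useful estimate; that cost is the practical motivation for the GPU implementation described in the appendix, and the released code differs from this sketch in detail.

import numpy as np

rng = np.random.default_rng(0)
M_Z, M_P, a_fs = 91.1876, 1.22e19, 1.0 / 137.0
b = np.array([33.0/5.0, 1.0, -3.0])            # MSSM slope factors
a_ex = np.array([0.016946, 0.033793, 0.1181])  # measured alpha_1, alpha_2, alpha_3
sig = np.array([3.5e-6, 1.9e-5, 1.1e-3])       # experimental uncertainties

total, n_chunks, chunk = 0.0, 1000, 1_000_000
for _ in range(n_chunks):
    # Log-uniform priors: alpha_u in [1/137, 1], M_u in [M_Z, M_P]
    au_inv = np.exp(-rng.uniform(np.log(a_fs), 0.0, chunk))
    t = rng.uniform(0.0, np.log(M_P / M_Z), chunk) / (2.0 * np.pi)
    a_pred = 1.0 / (au_inv[:, None] + np.outer(t, b))  # predicted alpha_a
    L = np.prod(np.exp(-0.5 * ((a_pred - a_ex) / sig) ** 2)
                / (sig * np.sqrt(2.0 * np.pi)), axis=1)
    total += L.mean()
print(total / n_chunks)  # estimates Z_M; cf. (1.0569 +/- 0.0014) x 10^5 above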
This probabilistic language is not unfamiliar to physicists discussing such matters as gauge unification, and it illuminates why the Bayesian analysis favors the simpler theory. The more precise the measurement of the observables, the greater the coincidence and the more strongly Bayesian model comparison favors the simple model. This fact is reflected in the Bayesian analysis by the experimental uncertainty, σ_3, appearing in the denominator of the Gaussian factor in equation (<ref>). Of course, if the observed data did not agree with the weak-scale MSSM GUT, that is, if α_3 were different from α_3_ex by much more than the experimental uncertainty, the Gaussian factor in equation (<ref>) would rapidly go to zero, bringing the credibility of the GUT with it.

§.§ The MSSM GUT

The weak-scale MSSM GUT serves as a concrete example of Bayesian model comparison favoring a simple theory. However, the non-observation of sparticles at the LHC implies that the SUSY scale, if it exists, must be well above the electroweak scale. Accommodating this within an MSSM GUT requires modeling threshold corrections, going beyond the weak-scale MSSM GUT. One way to model the SUSY threshold corrections is to consider the effect of the colored superpartners decoupling at a higher mass scale than the non-colored superpartners.[This model of SUSY threshold corrections was studied in Ref. Ovrut:2012wg in the context of the B-L MSSM.] This splits the mass scale M_SUSY into two mass scales: M_SUSY_c, where the colored superpartners decouple, and M_SUSY_n, where the non-colored superpartners decouple. Such a model is theoretically well motivated, because the gluino is pushed to a higher mass than the other gauginos by one-loop corrections involving the strong coupling constant. The squarks are in turn pushed to a higher mass than the other scalar superpartners by one-loop corrections involving the gluino. See Refs. Martin:1997ns,Ovrut:2012wg,Ovrut:2015uea,Deen:2016vyh for some discussion and relevant results. The result is that the colored superpartners on average decouple at a significantly higher mass scale than the non-colored superpartners.

The slope factors above the mass scales M_SUSY_c and M_SUSY_n are the MSSM slope factors given in equation (<ref>). Between the mass scales M_SUSY_c and M_SUSY_n the slope factors, denoted here b_a^mid, are <cit.>

b_3^mid = -7, b_2^mid = -1/2, b_1^mid = 11/2.

Below those scales the slope factors are the well-known SM slope factors:

b_3^SM = -7, b_2^SM = -19/6, b_1^SM = 41/10.

The relationship between the observables and the parameters of this model is then

α_a^-1 = α_u^-1 + b_a^MSSM/2π ln(M_u/M_SUSY_c) + b_a^mid/2π ln(M_SUSY_c/M_SUSY_n) + b_a^SM/2π ln(M_SUSY_n/M_Z),

where a∈{1,2,3}. Splitting the SUSY scale in this theoretically motivated way turns out to provide just the right correction to allow the gauge couplings to unify, even with the SUSY scales well above the electroweak scale. This unification is shown in Fig. <ref>.

Viewing all of these scales as free parameters, one could argue that this GUT lacks simplicity because of the additional free parameters it introduces. It actually has more parameters than the puzzle model (four parameters, α_u, M_SUSY_n, M_SUSY_c, and M_u, as opposed to three parameters α_3, α_2, and α_1). On the other hand, the observables depend only logarithmically on these parameters, so the observables are not very sensitive to changes in these parameters. This means that the model lacks fine-tuning or, equivalently, is natural. When a model is natural, but not simple, how should these conflicting judgments be weighed?
Bayesian model comparison gives a quantitative means to weigh the conflicting intuitive judgements of naturalness and lack of simplicity. In order to compute the Bayesian evidence for this unification model, we need priors for the four parameters M_SUSY_c, M_SUSY_n, M_u, and α_u. There are a few reasonable choices. Perhaps the most obvious choice is to use the same prior for all of the mass scales. This prior, which we will refer to as the unconstrained SUSY threshold prior, is summarized in Table <ref>.

This choice is unrealistic, however, because it allows the two SUSY scales to be drastically separated, while typically one would expect all sparticles to have masses mildly scattered around a single mass scale. The results in Refs. Martin:1997ns,Ovrut:2012wg,Ovrut:2015uea,Deen:2016vyh suggest a mild separation of less than a factor of ten. We therefore introduce a parameter h defined by

M_SUSY_c = h M_SUSY_n,

and assign to it a log-uniform prior with maximal and minimal values. Then M_SUSY_n is no longer treated as a fundamental parameter; it is calculated from M_SUSY_c and h, which are treated as fundamental parameters. We refer to this prior as the constrained SUSY threshold prior. The minimal value, h_min, is always set to unity, corresponding to no SUSY threshold correction. We consider h_max values of 10, 100, and 1000. This is summarized in Table <ref>. We compute the Bayesian evidences for the MSSM GUT with both unconstrained and constrained SUSY threshold priors. Step functions are used in the likelihood to enforce M_Z ≤ M_SUSY_n ≤ M_SUSY_c ≤ M_u. The results are given in Table <ref>.

The MSSM GUT with SUSY thresholds is favored over the puzzle model (𝒵 = 124). This is true even though it is a more complicated GUT. Additional scales (parameters) are needed to make this GUT agree with experimental data, making it more complicated than even the puzzle model. However, importantly, the observables are relatively insensitive to these new scales, meaning that the GUT is natural. Bayesian model comparison automatically accounts for this and weighs it against the lack of simplicity. This example demonstrates that Bayesian model comparison gives a quantitative way to weigh the intuitive notions of simplicity and naturalness, which is especially useful in cases like this where naturalness and simplicity seem to be in conflict and intuitive considerations alone cannot judge which of those criteria should be weighed more heavily.

Also of interest is the fact that the constrained prior, which is better motivated anyway, is more favored than the unconstrained prior, even though the Bayesian evidence does not depend strongly on the value of h_max. We regard h_max = 10 as the most well-justified value because it is suggested by the results in Refs. Martin:1997ns,Ovrut:2012wg,Ovrut:2015uea,Deen:2016vyh.

Fig. <ref> shows the posterior probability distribution of log M_SUSY_c in the case of the constrained SUSY threshold priors. The posterior probability distribution is given by Bayes' theorem. That is,

p(log M_SUSY_c|d) = p(d|log M_SUSY_c) p(log M_SUSY_c)/p(d),

where all probabilities are in the MSSM GUT with constrained SUSY threshold priors. Since M_SUSY_c is a fundamental parameter with a log-uniform prior, p(log M_SUSY_c) is constant, as is p(d). We use Monte Carlo integration to compute p(d|log M_SUSY_c). The result is plotted in arbitrary units. This figure is limited to the constrained SUSY threshold priors because they are better motivated and favored by the Bayesian model comparison. The figure shows that the SUSY scale is not constrained to be weak-scale.
It can be orders of magnitude higher and still be consistent with unification. However, exactly how much higher the SUSY scale can be depends on the prior chosen for the SUSY threshold. With h_max = 10 it can be up to around 10 TeV, above the current LHC bounds. With h_max = 1000, it can be as high as 10^9 GeV.

In addition to threshold corrections at the SUSY scale, GUT models may also account for threshold corrections at the unification scale. See Refs. Dienes:1996du,Deen:2016vyh,Deen:2016zfr,Kaplunovsky:1992vs,Mayr:1993kn,Dolan:1992nf,Kiritsis:1996dn,Klaput:2010dg,deAlwis:2012bm,Bailin:2014nna for examples and discussion in the context of string theory. One way to do this is by introducing new parameters, Δ_1, Δ_2, and Δ_3, which contain the threshold corrections to the three gauge couplings. See, for example, Refs. Langacker:1992rq,Deen:2016vyh. The relationship between the parameters and observables is modified from equation (<ref>) to

α_a^-1 = α_u^-1 + Δ_a/4π + b_a^MSSM/2π ln(M_u/M_SUSY_c) + b_a^mid/2π ln(M_SUSY_c/M_SUSY_n) + b_a^SM/2π ln(M_SUSY_n/M_Z),

where a∈{1,2,3}. We assign to |Δ_a| a log-uniform prior with minimal value Δ_min = 1 and consider maximal values of Δ_max = 10 and 100. The sign of each of the Δ_a is randomly selected to be positive or negative with equal probability. This is summarized in Table <ref>. Our results for the Bayesian evidences are given in Table <ref>.

The results show that the Bayesian evidence is not much affected by the introduction of the unification threshold corrections with Δ_max = 10. With Δ_max = 100, however, the Bayesian evidence is reduced by about a factor of two. Given that the results are so dependent upon the prior for the unification threshold corrections, we should be cautious about how we interpret the results.

§.§ The B-L MSSM GUT

The B-L MSSM GUT was proposed and studied in a series of papers<cit.>. It involves a two-step breaking of the unified gauge theory by two Wilson lines, which are denoted χ_T_3R and χ_B-L. The two Wilson lines each have a mass scale associated with them, denoted M_χ_T_3R and M_χ_B-L, at which they partially break the gauge group. These two mass scales are not necessarily equal, and the gauge group in the intermediate regime between those mass scales depends on which mass scale is higher.

In the case that χ_B-L has the higher mass scale, it breaks the unified SO(10) gauge group to a left-right type SU(3)_C× SU(2)_L× SU(2)_R× U(1)_B-L gauge group with slope factors

b_3^LR = 10, b_2^LR = 14, b_R^LR = 14, b_B-L^LR = 19.

At the lower scale, M_χ_T_3R, the χ_T_3R Wilson line further breaks the SU(2)_R factor to U(1)_R, yielding the B-L MSSM gauge group, SU(3)_C× SU(2)_L× U(1)_R× U(1)_B-L. We refer to this case as left-right type unification.

In the other case, that χ_T_3R has the higher mass scale, it breaks the unified SO(10) gauge group to a Pati-Salam type SU(4)_C× SU(2)_L× U(1)_R gauge group with slope factors

b_4^PS = 6, b_2^PS = 14, b_R^PS = 20.

At the lower scale, M_χ_B-L, the χ_B-L Wilson line breaks the SU(4)_C factor to SU(3)_C× U(1)_B-L, yielding the B-L MSSM gauge group, SU(3)_C× SU(2)_L× U(1)_R× U(1)_B-L. We refer to this case as Pati-Salam type unification.

For this paper, the slope factors are all that is needed, but the complete particle content of the theory in between the two Wilson line scales is given in Ref. Ovrut:2012wg. Interestingly, in either the left-right type unification or the Pati-Salam type unification, the effect of this intermediate regime is to push the gauge couplings in the right direction to help them unify.
This fact will result in a higher Bayesian evidence for the B-L MSSM GUT.

Below the scales of the two Wilson lines, the particle content of the theory is that of the MSSM plus three right-handed neutrino supermultiplets. The third-family right-handed sneutrino soft mass squared becomes negative due to one-loop radiative corrections, triggering radiative breaking of the U(1)_R× U(1)_B-L factor to U(1)_Y, yielding the familiar MSSM gauge group, SU(3)_C× SU(2)_L× U(1)_Y. This symmetry breaking is analogous to radiative electroweak symmetry breaking in the MSSM. The associated scale is called the B-L scale, M_B-L. The boundary condition relating the U(1) gauge couplings at the B-L scale is

α_1^-1 = 2/5 α_B-L^-1 + 3/5 α_R^-1.

The parameter sin^2θ_R is defined in a way analogous to the Weinberg angle,

sin^2θ_R = α_B-L^-1/(2/3 α_R^-1 + α_B-L^-1),

where the gauge couplings are evaluated at the B-L scale. Finally, at the SUSY scale, M_SUSY, the superpartners are integrated out and the standard model is obtained. We will consider the B-L MSSM GUT both with and without the SUSY scale being split into M_SUSY_c and M_SUSY_n. For more details on the B-L MSSM see Refs. Ovrut:2012wg,Ovrut:2015uea. An example unification scenario is shown in Fig. <ref>. It was shown in Refs. Ovrut:2012wg,Ovrut:2015uea that the exact value of M_B-L has no impact on the unification of gauge couplings, so for the present analysis it can be ignored.

Applying the boundary conditions and slope factors for left-right type unification, the relationship between the observables (α_3, α_2, α_1) and the parameters (M_SUSY, M_χ_T_3R, M_χ_B-L, α_u) is

α_3^-1 = α_u^-1 + b_3^LR/2π ln(M_χ_B-L/M_χ_T_3R) + b_3^MSSM/2π ln(M_χ_T_3R/M_SUSY) + b_3^SM/2π ln(M_SUSY/M_Z),
α_2^-1 = α_u^-1 + b_2^LR/2π ln(M_χ_B-L/M_χ_T_3R) + b_2^MSSM/2π ln(M_χ_T_3R/M_SUSY) + b_2^SM/2π ln(M_SUSY/M_Z),
α_1^-1 = α_u^-1 + (2/5 b_B-L^LR + 3/5 b_R^LR)/2π ln(M_χ_B-L/M_χ_T_3R) + b_1^MSSM/2π ln(M_χ_T_3R/M_SUSY) + b_1^SM/2π ln(M_SUSY/M_Z).

For Pati-Salam type unification, the relationship is (in the Pati-Salam regime α_3 and α_B-L descend from the unified SU(4)_C coupling, so b_3^PS = b_B-L^PS = b_4^PS)

α_3^-1 = α_u^-1 + b_3^PS/2π ln(M_χ_T_3R/M_χ_B-L) + b_3^MSSM/2π ln(M_χ_B-L/M_SUSY) + b_3^SM/2π ln(M_SUSY/M_Z),
α_2^-1 = α_u^-1 + b_2^PS/2π ln(M_χ_T_3R/M_χ_B-L) + b_2^MSSM/2π ln(M_χ_B-L/M_SUSY) + b_2^SM/2π ln(M_SUSY/M_Z),
α_1^-1 = α_u^-1 + (2/5 b_B-L^PS + 3/5 b_R^PS)/2π ln(M_χ_T_3R/M_χ_B-L) + b_1^MSSM/2π ln(M_χ_B-L/M_SUSY) + b_1^SM/2π ln(M_SUSY/M_Z).

Regarding the prior for the scales M_χ_B-L and M_χ_T_3R, we take an approach similar to the one we use for the SUSY threshold. We consider both an unconstrained prior, where the scales may take any values between M_Z and M_P, and a constrained one. The unconstrained prior is summarized in Table <ref>.

For the constrained prior, we introduce a parameter f defined by

M_χ_B-L = f M_χ_T_3R,

and assign to it a log-uniform prior with maximal and minimal values. Then M_χ_T_3R is no longer treated as a fundamental parameter; it is calculated from M_χ_B-L and f, which are treated as fundamental parameters. We consider f_max values of 10, 100, and 1000, with f_min taking values of 1/10, 1/100, or 1/1000 respectively. Note that, unlike the case of the SUSY threshold where M_SUSY_c was always the higher scale due to theoretical considerations, we are allowing either M_χ_T_3R or M_χ_B-L to be higher, and thus allowing either a left-right type or a Pati-Salam type unification. The constrained prior is summarized in Table <ref>.

Step functions are used in the likelihood to enforce M_Z ≤ M_SUSY_n ≤ M_SUSY_c ≤ min(M_χ_T_3R, M_χ_B-L).
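The structure of the constrained prior and of the step-function constraint can be made concrete with a short Python sketch of a single prior draw. The function names are ours, the Gaussian likelihood and slope-factor bookkeeping are omitted, and the unsplit SUSY scale is used for brevity.

import numpy as np

rng = np.random.default_rng(1)
M_Z, M_P = 91.1876, 1.22e19
f_min, f_max = 0.1, 10.0   # constrained prior for f = M_chi_BL / M_chi_T3R

def draw_scales():
    # M_chi_BL and M_SUSY log-uniform in [M_Z, M_P]; M_chi_T3R fixed by f
    M_BL = np.exp(rng.uniform(np.log(M_Z), np.log(M_P)))
    f = np.exp(rng.uniform(np.log(f_min), np.log(f_max)))
    M_T3R = M_BL / f
    M_SUSY = np.exp(rng.uniform(np.log(M_Z), np.log(M_P)))
    return M_SUSY, M_T3R, M_BL

def ordering_factor(M_SUSY, M_T3R, M_BL):
    # Step-function factor in the likelihood: a draw violating
    # M_Z <= M_SUSY <= min(M_chi_T3R, M_chi_BL) contributes zero
    return float(M_Z <= M_SUSY <= min(M_T3R, M_BL))

M_SUSY, M_T3R, M_BL = draw_scales()
# M_chi_BL > M_chi_T3R (f > 1) selects left-right type unification;
# M_chi_BL < M_chi_T3R (f < 1) selects Pati-Salam type unification
scheme = "left-right" if M_BL >= M_T3R else "Pati-Salam"
print(scheme, ordering_factor(M_SUSY, M_T3R, M_BL))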
The Bayesian evidences with the constrained and unconstrained priors are given in Table <ref>. One thing to note when comparing these results to the results for the MSSM GUT with SUSY threshold corrections in Table <ref> is that the B-L MSSM GUT is favored over the MSSM GUT with SUSY threshold corrections when the unconstrained prior is used for both. This is due to the fact that the B-L MSSM GUT can be successful with either left-right type or Pati-Salam type unification, that is, with either M_χ_T_3R < M_χ_B-L or M_χ_B-L < M_χ_T_3R. The MSSM GUT with SUSY threshold corrections, in contrast, only successfully unifies with M_SUSY_n < M_SUSY_c. When constrained priors are used, neither the B-L MSSM GUT nor the MSSM GUT with SUSY threshold corrections is substantially favored over the other. This is due to the fact that with the constrained prior we always have M_SUSY_n < M_SUSY_c, so the fact that the MSSM GUT does not unify otherwise is moot. Recall that the constrained prior was chosen this way because theoretical considerations suggested it.

With the constrained prior, the Bayesian evidence does not depend strongly on the values of f_min and f_max. We nevertheless regard f_min = 1/10, f_max = 10 as the most reasonable choice because each Wilson line scale is related to the inverse radius of a non-contractible curve in the Calabi-Yau manifold of the underlying string theory<cit.>. They are thus both related to the string compactification scale and there is no reason to suspect that they should be different by more than an order of magnitude. See Ref. Ovrut:2012wg for discussion.

Fig. <ref> shows the posterior probability distribution of log M_SUSY in the B-L MSSM GUT with constrained priors. The result is plotted in arbitrary units. This figure is limited to the constrained priors because they are better motivated and favored by the Bayesian model comparison. The figure shows that the SUSY scale is not constrained to be weak-scale. It can be orders of magnitude higher and still be consistent with unification. However, exactly how much higher the SUSY scale can be depends on the prior chosen. With f_min = 1/10, f_max = 10 it can be up to around 10^7 GeV. With f_min = 1/1000, f_max = 1000, it can be as high as 10^13 GeV.

The "two-step" shape of the posteriors is due to the two different unification schemes, left-right type and Pati-Salam type. Comparing the left-right type slope factors in equation (<ref>) to the Pati-Salam type slope factors in equation (<ref>), we see that the latter slope factors are more different from each other, allowing a stronger push toward unification with less separation of the Wilson line scales. This means the Pati-Salam type unification scheme can accommodate a higher SUSY scale and still achieve unification, resulting in the longer, lower "step" in each of the three posterior distributions shown in Fig. <ref>. The abrupt cutoff at M_SUSY ≈ 10^13 GeV is due to the step function assigning zero likelihood to points for which M_SUSY > M_χ_B-L.

The B-L MSSM GUT may of course include the same threshold corrections that we included in the MSSM GUT. That is, SUSY threshold corrections modeled by separating the SUSY scale into M_SUSY_c and M_SUSY_n, and unification threshold corrections modeled by the parameters Δ_1, Δ_2, and Δ_3. Considering first only the SUSY threshold corrections, the relationship between the parameters and observables is equations (<ref>) and (<ref>) appropriately modified to include the SUSY threshold terms in equation (<ref>).
That is, in the case of left-right type unification,

α_3^-1 = α_u^-1 + b_3^LR/2π ln(M_χ_B-L/M_χ_T_3R) + b_3^MSSM/2π ln(M_χ_T_3R/M_SUSY_c) + b_3^mid/2π ln(M_SUSY_c/M_SUSY_n) + b_3^SM/2π ln(M_SUSY_n/M_Z),
α_2^-1 = α_u^-1 + b_2^LR/2π ln(M_χ_B-L/M_χ_T_3R) + b_2^MSSM/2π ln(M_χ_T_3R/M_SUSY_c) + b_2^mid/2π ln(M_SUSY_c/M_SUSY_n) + b_2^SM/2π ln(M_SUSY_n/M_Z),
α_1^-1 = α_u^-1 + (2/5 b_B-L^LR + 3/5 b_R^LR)/2π ln(M_χ_B-L/M_χ_T_3R) + b_1^MSSM/2π ln(M_χ_T_3R/M_SUSY_c) + b_1^mid/2π ln(M_SUSY_c/M_SUSY_n) + b_1^SM/2π ln(M_SUSY_n/M_Z).

And in the case of Pati-Salam type unification,

α_3^-1 = α_u^-1 + b_3^PS/2π ln(M_χ_T_3R/M_χ_B-L) + b_3^MSSM/2π ln(M_χ_B-L/M_SUSY_c) + b_3^mid/2π ln(M_SUSY_c/M_SUSY_n) + b_3^SM/2π ln(M_SUSY_n/M_Z),
α_2^-1 = α_u^-1 + b_2^PS/2π ln(M_χ_T_3R/M_χ_B-L) + b_2^MSSM/2π ln(M_χ_B-L/M_SUSY_c) + b_2^mid/2π ln(M_SUSY_c/M_SUSY_n) + b_2^SM/2π ln(M_SUSY_n/M_Z),
α_1^-1 = α_u^-1 + (2/5 b_B-L^PS + 3/5 b_R^PS)/2π ln(M_χ_T_3R/M_χ_B-L) + b_1^MSSM/2π ln(M_χ_B-L/M_SUSY_c) + b_1^mid/2π ln(M_SUSY_c/M_SUSY_n) + b_1^SM/2π ln(M_SUSY_n/M_Z).

We use a constrained prior with h_max = 10 and f_min = 1/10, f_max = 10. This is summarized in Table <ref>.

Considering both the SUSY threshold corrections and the unification threshold corrections, the relationship between the parameters and observables is the same as in equations (<ref>) and (<ref>) with Δ_1/(4π), Δ_2/(4π), and Δ_3/(4π) added to the three equations respectively, as in equation (<ref>). We use the same prior as in the B-L MSSM GUT with SUSY threshold corrections, together with the unification threshold correction prior with Δ_max = 10. This choice is made because Δ_max = 10 had a substantially higher Bayesian evidence than Δ_max = 100 in the case of the MSSM GUT with unification threshold corrections. This is summarized in Table <ref>. As in the case of the MSSM GUT with unification threshold corrections, the sign of each Δ_a is randomly selected to be positive or negative with equal probability. The Bayesian evidences are given in Table <ref>. Note that the word "threshold" is used for brevity in the tables instead of "threshold corrections". The results show that including the threshold corrections in the B-L MSSM GUT does not substantially change the Bayesian evidence.

§.§ A non-SUSY GUT

With sufficiently large unification threshold corrections, there arises the possibility of unification without any SUSY below the unification scale. The tools developed in this paper allow us to consider such a non-SUSY GUT easily, if we assume that the slope factors are (at least approximately) those of the SM. The relationship between the parameters and observables is simply

α_a^-1 = α_u^-1 + Δ_a/4π + b_a^SM/2π ln(M_u/M_Z),

where a∈{1,2,3}. For a prior we consider Δ_max = 100 and 1000. Using Δ_max = 10 does not yield large enough threshold corrections to permit unification in this model (that is, the Bayesian evidence would be zero). The prior is summarized in Table <ref>. The results are given in Table <ref>. The results demonstrate that even though large threshold corrections may allow unification without SUSY, the SUSY GUTs are still favored by the Bayesian analysis.

§ CONCLUSION AND DISCUSSION

This paper gives 22 Bayesian evidences, all of which are repeated in Table <ref>. From these and our other results we make the following conclusions.

* In our results, every GUT is substantially more supported than the puzzle (non-unifying) model. Even GUTs that have more parameters than the puzzle model are more supported. This implies that even though the additional parameters may be in conflict with simplicity, they more than make up for it by enabling unification. This is due to the fact that the observables are relatively insensitive to the additional parameters in the GUTs, corresponding to a degree of naturalness.
§ CONCLUSION AND DISCUSSION This paper gives 22 Bayesian evidences, all of which are repeated in Table <ref>. From these and our other results we make the following conclusions. * In our results, every GUT is substantially more supported than the puzzle (non-unifying) model. Even GUTs that have more parameters than the puzzle model are more supported. This implies that even though the additional parameters may be in conflict with simplicity, they more than make up for it by enabling unification. This is due to the fact that the observables are relatively insensitive to the additional parameters in the GUTs, corresponding to a degree of naturalness. An intuitive understanding of simplicity and naturalness would not be sufficient to reach this conclusion because it offers no way to weigh naturalness against the lack of simplicity. A quantitative language that can simultaneously weigh both simplicity and naturalness is needed, and Bayesian model comparison provides just that. * The constrained priors for the SUSY threshold corrections and Wilson line scales are more supported than the unconstrained priors. This is reassuring because the constrained priors are better motivated theoretically. * In the MSSM GUT and the B-L MSSM GUT, adding unification threshold corrections slightly lowers the Bayesian evidence. While there may be other reasons for believing in significant unification threshold corrections, from the standpoint of gauge unification in these SUSY GUTs, large unification threshold corrections are unnecessary. * The most strongly supported model (with the exception of the weak-scale MSSM GUT, which has M_SUSY ≈ M_Z, incompatible with LHC data) is the B-L MSSM GUT. That said, its support over the MSSM GUT, while statistically significant with regard to the Monte Carlo integration, is so slight as to be barely worth mentioning. We still regard the B-L MSSM as the stronger model for reasons other than these results (it provides a theory of R-parity and neutrino masses, and is motivated by string theory). * The SUSY GUTs are all more strongly supported than the non-SUSY GUT that relies on large unification threshold corrections. While it may be tempting to rely on large unification threshold corrections to allow unification in models which do not otherwise allow it, the SUSY models, which do not rely on such large unification threshold corrections, are more strongly supported. * Based on the posteriors for the SUSY scale in Figs. <ref> and <ref>, SUSY GUTs are consistent with SUSY scales well above current LHC bounds. It would be a mistake to think that unification is a feature unique to weak-scale SUSY. That said, the posteriors do tend to be weighted more toward lower SUSY scales, and this speaks to the utility of the Very Large Hadron Collider (VLHC) or other next-generation colliders (see Ref. Fowlie:2014xha for a paper on this topic). * We showed that the quantified Bayesian naturalness, unlike the BG sensitivity, suggests that the proton mass is natural, consistent with physicists' intuitive notions of naturalness. Bayesian naturalness thus accomplishes what had already been accomplished by some ad hoc refinements of the BG sensitivity <cit.>. But Bayesian naturalness follows from principles that are more basic rather than ad hoc. While Bayesian model comparison has already been extensively applied to supersymmetry, it has seen relatively little use in other areas of theoretical physics (Refs. Clarke:2016jzm, Fowlie:2016jlx, and now this paper are three examples). We are led to wonder what other open questions in fundamental physics may be productively addressed using Bayesian model comparison. Where neither experimental data nor intuitive notions of naturalness, simplicity, and testability are able to decisively settle a question, Bayesian model comparison may be useful. String theory, eternal inflation, and the anthropic principle all come to mind. Some of these have been mentioned in Ref. FowlieTalk. Lastly, we should point out that quantitative analysis using Bayesian model comparison should not replace qualitative analysis using intuitive notions of naturalness, simplicity, and testability.
On the contrary, because analysis using Bayesian model comparison can be computationally intensive, and often only verifies things that are obvious to the experienced physicist, Bayesian model comparison should take its place alongside intuitive notions of naturalness, simplicity, and testability as one of several useful tools for guiding physics research. In fact, Bayesian model comparison provides a strong justification for the continued use of those intuitive notions. § MONTE CARLO INTEGRATION ON THE GPU The results in equations (<ref>) and (<ref>) were both verified using Monte Carlo integration[See Ref. recipes for an explanation of Monte Carlo integration.] implemented in single-threaded Python code. Due to the larger number of parameters in the other GUTs considered, the Bayesian evidences do not have straightforward analytic solutions, so we rely entirely on Monte Carlo integration for those results. We determined that it would be necessary to implement the Monte Carlo integration using the parallel processing power of a Graphics Processing Unit (GPU) to get acceptable performance. We have released our code to the public <cit.>. Monte Carlo integration was chosen over other methods of numerical integration for three reasons. First, these are integrals over multiple variables. Many other methods of integration take time that is exponential in the number of variables of integration; Monte Carlo integration does not. Second, with Monte Carlo integration, the standard deviation of the samples can be used to straightforwardly obtain an estimate of the uncertainty in the result. Third, because each sample is independent of all the others, Monte Carlo integration lends itself easily to parallelization. Therefore it can better utilize the capabilities of modern computer hardware, including GPUs. We implemented the Monte Carlo integration using OpenGL 4.0 and the C programming language, along with the Simple DirectMedia Layer 2.0 library for window and OpenGL context creation, and the Epoxy library for OpenGL function pointer management. The Monte Carlo integration was executed on a computer with an NVIDIA GTX 960 GPU, running Xubuntu 16.04 with NVIDIA proprietary graphics drivers. Creating a 1000×500 pixel window and drawing a single quad over the entire window results in the fragment shader being invoked at each of the 1000×500 pixels. The GPU then executes the fragment shader invocations in parallel, as is the standard mode of operation for a GPU in computer graphics applications. We implement the Monte Carlo sample generation in the fragment shaders using the OpenGL Shading Language (GLSL). This approach was used instead of compute shaders because early development was done with OpenGL 3.3, which does not support compute shaders. In each fragment shader invocation, we generate many Monte Carlo samples, average them, and output the average as a 32-bit floating point number in one of the color channels normally used for graphical output to the screen. Since this is not a graphics application, we instruct OpenGL to render to a texture in video memory, rather than to the screen. The C code then reads this texture data into main memory, where the output from all the fragment shader invocations can be averaged and the final result computed. OpenGL was chosen over a ready-made GPU compute application or library because, to our knowledge, existing ready-made GPU compute applications and libraries do not utilize state-of-the-art Philox pseudo-random number generation (discussed below). OpenGL was chosen over CUDA for platform independence, and it was chosen over OpenCL because the authors are more familiar with it and it has all the needed functionality.
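As an aside for readers who would like to experiment without writing GPU code: the Philox family is also exposed on the CPU through NumPy's numpy.random.Philox bit generator. The snippet below is our own minimal sketch (not the released code) of counter-based parallel sampling applied to a toy one-parameter evidence integral with a narrow Gaussian likelihood.

```python
import numpy as np

# Toy one-parameter model: uniform prior on [0, 1] and a very narrow Gaussian
# likelihood, mimicking the nearly-zero integrand described in the footnote below.
mu, sigma = 0.5, 1e-4

def likelihood(x):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def partial_mean(worker, n):
    # Each worker jumps to an independent stream of the counter-based Philox
    # generator, so all streams could be generated in parallel with no shared
    # state and no risk of overlapping sequences.
    rng = np.random.Generator(np.random.Philox(key=2017).jumped(worker))
    return likelihood(rng.random(n)).mean()

n_workers, n_per_worker = 64, 200_000
means = [partial_mean(w, n_per_worker) for w in range(n_workers)]
print("evidence estimate:", np.mean(means))                    # exact value ~ 1
print("std. error:", np.std(means) / np.sqrt(n_workers))
```

Because each worker stream is obtained by jumping the counter, the partial means can be computed in any order, or simultaneously, without coordination; this is the same property the fragment shader implementation exploits.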
The uncertainty in the result of a Monte Carlo integration is proportional to one over the square root of the number of samples. Obtaining a fractional uncertainty below 1% required 𝒪(10^12) samples.[Such a large number of samples is required because the integrand contains extremely narrow Gaussian likelihood functions, which are nearly zero over the vast majority of the volume over which we integrate.] This presents a problem for popular pseudo-random number generators (PRNGs). The problem, and its solution, which have been thoroughly understood in Refs. Manssen:2012vj, philox, are briefly reviewed in this paragraph and the next. The problem with most popular PRNGs is that they require each pseudo-random number in a sequence of pseudo-random numbers to be generated successively, rather than in parallel. Generating 𝒪(10^12) pseudo-random numbers successively on the CPU would take too long, and storing them would exhaust RAM. Generating multiple smaller sequences in parallel is a partial solution, but it is still RAM and CPU intensive, and in practice it is difficult to guarantee that the smaller sequences are not correlated in a way that could lead to systematic error. The solution to this problem is to instead use counter-based PRNGs, wherein each pseudo-random number x_n is simply defined by a function b applied to a counter n: x_n = b(n). The entire sequence can be generated in parallel. We use the Philox counter-based PRNG introduced in Ref. philox and further tested in Ref. Manssen:2012vj, which is highly performant, uses little memory, and has passed stringent tests for unwanted correlations. Specifically, we use the Philox-4×32-7 counter-based PRNG. Our GPU-based Monte Carlo integration was used to verify equation (<ref>), and showed a 5000× speed increase over the single-threaded Python code.[This remarkable speed increase may be partially due to the fact that we made no attempt to optimize the Python code. Nevertheless, an 𝒪(1000)× speed increase is to be expected, since the NVIDIA GTX 960 GPU has about 1000 cores.] Running the code to compute any one of the Bayesian evidences given in this paper takes about 3 minutes with 10^12 samples. § ACKNOWLEDGMENTS The authors are grateful to an anonymous reviewer for their thorough reading and detailed comments that helped improve and expand the manuscript. A. Purves thanks Burt A. Ovrut and Sogee Spinner for mentorship and continued support. He thanks Rehan Deen for helpful conversations. He thanks the faculty, staff, and administration of Manhattanville College for support, encouragement, and helpful conversations. He thanks his students for their penetrating questions. P. Fundira thanks Edward Schwartz and the faculty of the Department of Mathematics and Computer Science for their encouragement and support. He thanks Bothwell and Tsungirirayi Fundira for the opportunity to attend Manhattanville College. He thanks Farirai A. Fundira for his companionship over the last four years. This work was not supported by any specific grant. Giudice:2008biG. F. Giudice, “Naturally Speaking: The Naturalness Criterion and Physics at the LHC,” In *Kane, Gordon (ed.), Pierce, Aaron (ed.): Perspectives on LHC physics* 155-178 [arXiv:0801.2562 [hep-ph]]. Nelson P. Nelson, “Naturalness in Theoretical Physics,” Am. Sci. 73, 60 (1985). Grinbaum:2009skA.
Grinbaum, “Which fine-tuning arguments are fine?,” Found. Phys. 42, 615 (2012) doi:10.1007/s10701-012-9629-9 [arXiv:0903.4055 [physics.hist-ph]]. Guth:1980zmA. H. Guth, “The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems,” Phys. Rev. D 23, 347 (1981). doi:10.1103/PhysRevD.23.347 Khoury:2001wfJ. Khoury, B. A. Ovrut, P. J. Steinhardt and N. Turok, “The Ekpyrotic universe: Colliding branes and the origin of the hot big bang,” Phys. Rev. D 64, 123522 (2001) doi:10.1103/PhysRevD.64.123522 [hep-th/0103239]. Bousso:2007gpR. Bousso, “TASI Lectures on the Cosmological Constant,” Gen. Rel. Grav. 40, 607 (2008) doi:10.1007/s10714-007-0557-5 [arXiv:0708.4231 [hep-th]]. Martin:2012btJ. Martin, “Everything You Always Wanted To Know About The Cosmological Constant Problem (But Were Afraid To Ask),” Comptes Rendus Physique 13, 566 (2012) doi:10.1016/j.crhy.2012.04.008 [arXiv:1205.3365 [astro-ph.CO]]. Peccei:2006asR. D. Peccei, “The Strong CP problem and axions,” Lect. Notes Phys. 741, 3 (2008) doi:10.1007/978-3-540-73518-2_1 [hep-ph/0607268]. Martin:1997nsS. P. Martin, “A Supersymmetry primer,” Adv. Ser. Direct. High Energy Phys. 21, 1 (2010) [Adv. Ser. Direct. High Energy Phys. 18, 1 (1998)] doi:10.1142/9789812839657_0001, 10.1142/9789814307505_0001 [hep-ph/9709356]. Barbieri:2000gfR. Barbieri and A. Strumia, “The `LEP paradox',” hep-ph/0007265. Nobbenhuis:2006yfS. Nobbenhuis, “The Cosmological Constant Problem, an Inspiration for New Physics,” Ph.D. thesis, Utrecht U. (2006), gr-qc/0609011. Kim:2008hdJ. E. Kim and G. Carosi, “Axions and the Strong CP Problem,” Rev. Mod. Phys. 82, 557 (2010) doi:10.1103/RevModPhys.82.557 [arXiv:0807.3125 [hep-ph]]. Kim:1979ifJ. E. Kim, “Weak Interaction Singlet and Strong CP Invariance,” Phys. Rev. Lett. 43, 103 (1979). doi:10.1103/PhysRevLett.43.103 Shifman:1979ifM. A. Shifman, A. I. Vainshtein and V. I. Zakharov, “Can Confinement Ensure Natural CP Invariance of Strong Interactions?,” Nucl. Phys. B 166, 493 (1980). doi:10.1016/0550-3213(80)90209-6 Dine:1981rtM. Dine, W. Fischler and M. Srednicki, “A Simple Solution to the Strong CP Problem with a Harmless Axion,” Phys. Lett. B 104, 199 (1981). doi:10.1016/0370-2693(81)90590-6 Diaz-Cruz:2016pmmJ. L. Diaz-Cruz, W. G. Hollik and U. J. Saldana-Salazar, “Addressing the strong CP problem with quark mass ratios,” arXiv:1605.03860 [hep-ph]. Banerjee:2000qwH. Banerjee, D. Chatterjee and P. Mitra, “Is there still a strong CP problem?,” Phys. Lett. B 573, 109 (2003) doi:10.1016/j.physletb.2003.08.058 [hep-ph/0012284]. Blinov:2016kteN. Blinov and A. Hook, JHEP 1606, 176 (2016) doi:10.1007/JHEP06(2016)176 [arXiv:1605.03178 [hep-ph]]. Hook:2014cdaA. Hook, “Anomalous solutions to the strong CP problem,” Phys. Rev. Lett. 114, no. 14, 141801 (2015) doi:10.1103/PhysRevLett.114.141801 [arXiv:1411.3325 [hep-ph]]. Swain:2010rrJ. Swain, “Black Holes and the Strong CP Problem,” arXiv:1005.1097 [gr-qc]. Csaki:2005fcC. Csaki, G. Marandella, Y. Shirman and A. Strumia, “The Super-little Higgs,” Phys. Rev. D 73, 035006 (2006) doi:10.1103/PhysRevD.73.035006 [hep-ph/0510294]. Bellazzini:2009ixB. Bellazzini, C. Csaki, A. Delgado and A. Weiler, “SUSY without the Little Hierarchy,” Phys. Rev. D 79, 095003 (2009) doi:10.1103/PhysRevD.79.095003 [arXiv:0902.0015 [hep-ph]]. Babu:2008geK. S. Babu, I. Gogoladze, M. U. Rehman and Q. Shafi, “Higgs Boson Mass, Sparticle Spectrum and Little Hierarchy Problem in Extended MSSM,” Phys. Rev. D 78, 055017 (2008) doi:10.1103/PhysRevD.78.055017 [arXiv:0807.3055 [hep-ph]]. Dermisek:2005arR.
Dermisek and J. F. Gunion, “Escaping the large fine tuning and little hierarchy problems in the next to minimal supersymmetric model and h → aa decays,” Phys. Rev. Lett. 95, 041801 (2005) doi:10.1103/PhysRevLett.95.041801 [hep-ph/0502105]. Farina:2013mlaM. Farina, D. Pappadopulo and A. Strumia, “A modified naturalness principle and its experimental tests,” JHEP 1308, 022 (2013) doi:10.1007/JHEP08(2013)022 [arXiv:1303.7244 [hep-ph]]. Ellis:1986ygJ. R. Ellis, K. Enqvist, D. V. Nanopoulos and F. Zwirner, “Observables in Low-Energy Superstring Models,” Mod. Phys. Lett. A 1, 57 (1986). Barbieri:1987fnR. Barbieri and G. F. Giudice, “Upper Bounds on Supersymmetric Particle Masses,” Nucl. Phys. B 306, 63 (1988). Antoniadis:2014etaI. Antoniadis, E. M. Babalic and D. M. Ghilencea, “Naturalness in low-scale SUSY models and "non-linear" MSSM,” Eur. Phys. J. C 74, no. 9, 3050 (2014) [arXiv:1405.4314 [hep-ph]]. Ciafaloni:1996zhP. Ciafaloni and A. Strumia, “Naturalness upper bounds on gauge mediated soft terms,” Nucl. Phys. B 494, 41 (1997) [hep-ph/9611204]. de Carlos:1993yyB. de Carlos and J. A. Casas, “One loop analysis of the electroweak breaking in supersymmetric models and the fine tuning problem,” Phys. Lett. B 309, 320 (1993) [hep-ph/9303291]. Casas:2003jxJ. A. Casas, J. R. Espinosa and I. Hidalgo, “The MSSM fine tuning problem: A Way out,” JHEP 0401, 008 (2004) [hep-ph/0310137]. Casas:2004ghJ. A. Casas, J. R. Espinosa and I. Hidalgo, “Implications for new physics from fine-tuning arguments. 1. Application to SUSY and seesaw cases,” JHEP 0411, 057 (2004) doi:10.1088/1126-6708/2004/11/057 [hep-ph/0410298]. Casas:2005evJ. A. Casas, J. R. Espinosa and I. Hidalgo, “Implications for new physics from fine-tuning arguments. II. Little Higgs models,” JHEP 0503, 038 (2005) doi:10.1088/1126-6708/2005/03/038 [hep-ph/0502066]. Strumia:1999frA. Strumia, “Naturalness of supersymmetric models,” hep-ph/9904247. Allanach:2006jcB. C. Allanach, “Naturalness priors and fits to the constrained minimal supersymmetric standard model,” Phys. Lett. B 635, 123 (2006) doi:10.1016/j.physletb.2006.02.052 [hep-ph/0601089]. Giusti:1998gzL. Giusti, A. Romanino and A. Strumia, “Natural ranges of supersymmetric signals,” Nucl. Phys. B 550, 3 (1999) doi:10.1016/S0550-3213(99)00153-4 [hep-ph/9811386]. Anderson:1994dzG. W. Anderson and D. J. Castano, “Measures of fine tuning,” Phys. Lett. B 347, 300 (1995) [hep-ph/9409419]. Athron:2007ryP. Athron and D. J. Miller, “A New Measure of Fine Tuning,” Phys. Rev. D 76, 075010 (2007) doi:10.1103/PhysRevD.76.075010 [arXiv:0705.2241 [hep-ph]]. Allanach:2007qkB. C. Allanach, K. Cranmer, C. G. Lester and A. M. Weber, “Natural priors, CMSSM fits and LHC weather forecasts,” JHEP 0708, 023 (2007) doi:10.1088/1126-6708/2007/08/023 [arXiv:0705.0487 [hep-ph]]. Cabrera:2008tjM. E. Cabrera, J. A. Casas and R. Ruiz de Austri, “Bayesian approach and Naturalness in MSSM analyses for the LHC,” JHEP 0903, 075 (2009) doi:10.1088/1126-6708/2009/03/075 [arXiv:0812.0536 [hep-ph]]. Cabrera:2009dmM. E. Cabrera, J. A. Casas and R. Ruiz de Austri, “MSSM Forecast for the LHC,” JHEP 1005, 043 (2010) doi:10.1007/JHEP05(2010)043 [arXiv:0911.4686 [hep-ph]]. Ghilencea:2012qkD. M. Ghilencea and G. G. Ross, “The fine-tuning cost of the likelihood in SUSY models,” Nucl. Phys. B 868, 65 (2013) doi:10.1016/j.nuclphysb.2012.11.007 [arXiv:1208.0837 [hep-ph]]. Fichet:2012snS. Fichet, “Quantified naturalness from Bayesian statistics,” Phys. Rev. D 86, 125029 (2012) doi:10.1103/PhysRevD.86.125029 [arXiv:1204.4940 [hep-ph]]. Fowlie:2014xhaA.
Fowlie, “CMSSM, naturalness and the "fine-tuning price" of the Very Large Hadron Collider,” Phys. Rev. D 90, 015010 (2014) doi:10.1103/PhysRevD.90.015010 [arXiv:1403.3407 [hep-ph]]. Fowlie:2014faaA. Fowlie, “Is the CNMSSM more credible than the CMSSM?,” Eur. Phys. J. C 74, no. 10, 3105 (2014) doi:10.1140/epjc/s10052-014-3105-y [arXiv:1407.7534 [hep-ph]]. Fowlie:2015ugaA. Fowlie, “The little-hierarchy problem is a little problem: understanding the difference between the big- and little-hierarchy problems with Bayesian probability,” arXiv:1506.03786 [hep-ph]. Athron:2017fxjP. Athron, C. Balazs, B. Farmer, A. Fowlie, D. Harries and D. Kim, “Bayesian analysis and naturalness of (Next-to-)Minimal Supersymmetric Models,” arXiv:1709.07895 [hep-ph]. Clarke:2016jzmJ. D. Clarke and P. Cox, “Naturalness made easy: two-loop naturalness bounds on minimal SM extensions,” JHEP 1702, 129 (2017) doi:10.1007/JHEP02(2017)129 [arXiv:1607.07446 [hep-ph]]. Fowlie:2016jlxA. Fowlie, C. Balazs, G. White, L. Marzola and M. Raidal, “Naturalness of the relaxion mechanism,” JHEP 1608, 100 (2016) doi:10.1007/JHEP08(2016)100 [arXiv:1602.03889 [hep-ph]]. AbdusSalam:2015ubaS. S. AbdusSalam and L. Velasco-Sevilla, “Where to look for natural supersymmetry,” Phys. Rev. D 94, no. 3, 035026 (2016) doi:10.1103/PhysRevD.94.035026 [arXiv:1506.02499 [hep-ph]]. Kim:2013uxaD. Kim, P. Athron, C. Balazs, B. Farmer and E. Hutchison, “Bayesian naturalness of the CMSSM and CNMSSM,” Phys. Rev. D 90, no. 5, 055008 (2014) doi:10.1103/PhysRevD.90.055008 [arXiv:1312.4150 [hep-ph]]. Ghilencea:2012gzD. M. Ghilencea, H. M. Lee and M. Park, “Tuning supersymmetric models at the LHC: A comparative analysis at two-loop level,” JHEP 1207, 046 (2012) doi:10.1007/JHEP07(2012)046 [arXiv:1203.0569 [hep-ph]]. Dumont:2013wmaB. Dumont, S. Fichet and G. von Gersdorff, “A Bayesian view of the Higgs sector with higher dimensional operators,” JHEP 1307, 065 (2013) doi:10.1007/JHEP07(2013)065 [arXiv:1304.3369 [hep-ph]]. Ghilencea:2013fkaD. M. Ghilencea, “A new approach to Naturalness in SUSY models,” PoS Corfu 2012, 034 (2013) [arXiv:1304.1193 [hep-ph]]. Kvellestad:2015cpaA. Kvellestad, “Chasing SUSY Through Parameter Space,” Ph.D. thesis, Oslo U. (2015). Mackay D. Mackay, “Bayesian Methods for Adaptive Models,” Ph.D. thesis, California Institute of Technology (1992). FowlieTalk A. Fowlie, “Bayesian Approach to Naturalness,” talk given at conference on “Fine-tuning, the Multiverse and Life,” University of Sydney, 2016. <http://www.physics.usyd.edu.au/ luke/2016FTConf/talks/2_1_Fowlie.pdf> Nesseris:2012cqS. Nesseris and J. Garcia-Bellido, JCAP 1308, 036 (2013) doi:10.1088/1475-7516/2013/08/036 [arXiv:1210.7652 [astro-ph.CO]]. March M. C. March, G. D. Starkman, R. Trotta, P. M. Vaudrevange, “Should we doubt the cosmological constant?,” Mon Not R Astron Soc 410, 4 (2011) doi: 10.1111/j.1365-2966.2010.17614.x . Kunz:2006mcM. Kunz, R. Trotta and D. Parkinson, Phys. Rev. D 74, 023503 (2006) doi:10.1103/PhysRevD.74.023503 [astro-ph/0602378]. Earman John Earman, “Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory,” The MIT Press, Cambridge, Massachusetts, USA (1992). Forster M. R. Forster, “Bayes and Bust: Simplicity as a Problem for a probabilist's Approach to Confirmation,” The British Journal for the Philosophy of Science 46, 3 (1995). Oppy G. Oppy, “Bayes not Bust! Why Simplicity is no Problem for Bayesians,” British Journal for the Philosophy of Science 58 4 (2007). git <https://github.com/AustinNH/gut-bayesian> Balazs:2013qvaC. Balazs, A. Buckley, D.
Carter, B. Farmer and M. White, “Should we still believe in constrained supersymmetry?,” Eur. Phys. J. C 73, 2563 (2013) doi:10.1140/epjc/s10052-013-2563-y [arXiv:1205.1568 [hep-ph]]. Jeffreys H. Jeffreys, “Theory of probability”, 3rd ed., Clarendon Press, Oxford, U.K. (1961) Olive:2016xmwC. Patrignani et al. [Particle Data Group], “Review of Particle Physics,” Chin. Phys. C 40, no. 10, 100001 (2016). doi:10.1088/1674-1137/40/10/100001Kass R. E. Kass, L. Wasserman, “The Selection of Prior Distributions by Formal Rules,” Journal of the American Statistical Association 91, no. 435, 1343 (1996) doi:10.2307/2291752Bartlett M. S. Bartlett, “A Comment on D. V. Lindley's Statistical Paradox,” Biometrika 44, no. 3/4, 533 (1957) doi:10.2307/2332888. Dimopoulos:1981zbS. Dimopoulos and H. Georgi, “Softly Broken Supersymmetry and SU(5),” Nucl. Phys. B 193, 150 (1981). doi:10.1016/0550-3213(81)90522-8 Langacker:1990jhP. Langacker, “Precision tests of the standard model,” In *Boston 1990, Proceedings, Particles, strings and cosmology* 237-269 and Pennsylvania Univ. Philadelphia - UPR-0435T (90,rec.Oct.) 33 p. (015721) (see HIGH ENERGY PHYSICS INDEX 29 (1991) No. 9950)Ellis:1990wkJ. R. Ellis, S. Kelley and D. V. Nanopoulos, “Probing the desert using gauge coupling unification,” Phys. Lett. B 260, 131 (1991). doi:10.1016/0370-2693(91)90980-5 Amaldi:1991cnU. Amaldi, W. de Boer and H. Furstenau, “Comparison of grand unified theories with electroweak and strong coupling constants measured at LEP,” Phys. Lett. B 260, 447 (1991). doi:10.1016/0370-2693(91)91641-8 Langacker:1991anP. Langacker and M. x. Luo, “Implications of precision electroweak experiments for M_t, ρ_0, sin^2θ_W and grand unification,” Phys. Rev. D 44, 817 (1991). doi:10.1103/PhysRevD.44.817 Giunti:1991taC. Giunti, C. W. Kim and U. W. Lee, “Running coupling constants and grand unification models,” Mod. Phys. Lett. A 6, 1745 (1991). doi:10.1142/S0217732391001883 Dienes:1996duK. R. Dienes, “String theory and the path to unification: A Review of recent developments,” Phys. Rept.287, 447 (1997) doi:10.1016/S0370-1573(97)00009-4 [hep-th/9602045]. Deen:2016vyhR. Deen, B. A. Ovrut and A. Purves, “The minimal SUSY BL model: simultaneous Wilson lines and string thresholds,” JHEP 1607, 043 (2016) doi:10.1007/JHEP07(2016)043 [arXiv:1604.08588 [hep-ph]]. Deen:2016zfrR. Deen, B. A. Ovrut and A. Purves, “Supersymmetric Sneutrino-Higgs Inflation,” Phys. Lett. B 762, 441 (2016) doi:10.1016/j.physletb.2016.09.059 [arXiv:1606.00431 [hep-ph]]. Kaplunovsky:1992vsV. S. Kaplunovsky, “One loop threshold effects in string unification,” hep-th/9205070. Mayr:1993knP. Mayr, H. P. Nilles and S. Stieberger, “String unification and threshold corrections,” Phys. Lett. B 317, 53 (1993) doi:10.1016/0370-2693(93)91569-9 [hep-th/9307171]. Dolan:1992nfL. Dolan and J. T. Liu, “Running gauge couplings and thresholds in the type II superstring,” Nucl. Phys. B 387, 86 (1992) doi:10.1016/0550-3213(92)90047-F [hep-th/9205094]. Kiritsis:1996dnE. Kiritsis, C. Kounnas, P. M. Petropoulos and J. Rizos, “Universality properties of N=2 and N=1 heterotic threshold corrections,” Nucl. Phys. B 483, 141 (1997) doi:10.1016/S0550-3213(96)00550-0 [hep-th/9608034]. Klaput:2010dgM. A. Klaput and C. Paleani, “The computation of one-loop heterotic string threshold corrections for general orbifold models with discrete Wilson lines,” arXiv:1001.1480 [hep-th]. deAlwis:2012bmS. P. de Alwis, “Gauge Threshold Corrections and Field Redefinitions,” Phys. Lett. 
B 722, 176 (2013) doi:10.1016/j.physletb.2013.04.007 [arXiv:1211.5460 [hep-th]]. Bailin:2014nnaD. Bailin and A. Love, “Reduced modular symmetries of threshold corrections and gauge coupling unification,” JHEP 1504, 002 (2015) doi:10.1007/JHEP04(2015)002 [arXiv:1412.7327 [hep-th]]. Langacker:1992rqP. Langacker and N. Polonsky, “Uncertainties in coupling constant unification,” Phys. Rev. D 47, 4028 (1993) doi:10.1103/PhysRevD.47.4028 [hep-ph/9210235]. Ovrut:2012wgB. A. Ovrut, A. Purves and S. Spinner, “Wilson Lines and a Canonical Basis of SU(4) Heterotic Standard Models,” JHEP 1211, 026 (2012) doi:10.1007/JHEP11(2012)026 [arXiv:1203.1325 [hep-th]]. Marshall:2014keaZ. Marshall, B. A. Ovrut, A. Purves and S. Spinner,“Spontaneous R-parity Breaking, Stop LSP Decays and the Neutrino Mass Hierarchy,” Phys. Lett. B 732, 325 (2014) doi:10.1016/j.physletb.2014.03.052 [arXiv:1401.7989 [hep-ph]]. Marshall:2014cwaZ. Marshall, B. A. Ovrut, A. Purves and S. Spinner, “LSP Squark Decays at the LHC and the Neutrino Mass Hierarchy,” Phys. Rev. D 90, no. 1, 015034 (2014) doi:10.1103/PhysRevD.90.015034 [arXiv:1402.5434 [hep-ph]]. Ovrut:2014rbaB. A. Ovrut, A. Purves and S. Spinner, “A statistical analysis of the minimal SUSY BL theory,” Mod. Phys. Lett. A 30, no. 18, 1550085 (2015) doi:10.1142/S0217732315500856 [arXiv:1412.6103 [hep-ph]]. Ovrut:2015ueaB. A. Ovrut, A. Purves and S. Spinner, “The minimal SUSY B-L model: from the unification scale to the LHC,” JHEP 1506, 182 (2015) doi:10.1007/JHEP06(2015)182 [arXiv:1503.01473 [hep-ph]].recipes William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery, “Numerical Recipes 3rd Edition: The Art of Scientific Computing (3 ed.),” Cambridge University Press, New York, NY, USA (2007) Manssen:2012vjM. Manssen, M. Weigel and A. K. Hartmann, “Random number generators for massively parallel simulations on GPU,” Eur. Phys. J. ST 210, 53 (2012) doi:10.1140/epjst/e2012-01637-8 [arXiv:1204.6193 [physics.comp-ph]]. philox J. K. Salmon, M. A. Moraes, R. O. Dror, D. E. Shaw “Parallel Random Numbers: As Easy As 1, 2, 3,” Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, SC '11 (ACM, New York, NY, USA, 2011)
http://arxiv.org/abs/1708.07835v2
{ "authors": [ "Panashe Fundira", "Austin Purves" ], "categories": [ "hep-ph", "hep-th" ], "primary_category": "hep-ph", "published": "20170825180000", "title": "Bayesian naturalness, simplicity, and testability applied to the $B-L$ MSSM GUT" }
Bounds of the rank of the Mordell–Weil group of Jacobians of Hyperelliptic Curves Department of Mathematics and Statistics, Amherst College, Amherst, MA 01002, USA [email protected] <http://www3.amherst.edu/ hdaniels/> Department of Mathematics, University of Connecticut, Storrs, CT 06269, USA [email protected] <http://alozano.clas.uconn.edu/> Department of Mathematics, University of Connecticut, Storrs, CT 06269, USA [email protected] Primary: 11G10, Secondary: 14K15. In this article we extend work of Shanks and Washington on cyclic extensions and on elliptic curves associated to the simplest cubic fields. In particular, we give families of examples of hyperelliptic curves C: y^2=f(x) defined over ℚ, with f(x) of degree p, where p is a Sophie Germain prime, such that the rank of the Mordell–Weil group of the jacobian J/ℚ of C is bounded in terms of the genus of C and the 2-rank of the class group of the (cyclic) field defined by f(x), and we exhibit examples where this bound is sharp. Let q≥ 7 be a prime such that p=(q-1)/2 is also prime, and let L=ℚ(ζ_q)^+ be the maximal totally real subfield of ℚ(ζ_q). Let f(x) be the minimal polynomial of ζ_q+ζ_q^-1 or -(ζ_q+ζ_q^-1), let C/ℚ be the hyperelliptic curve y^2=f(x), of genus g=(p-1)/2, and let J/ℚ be its jacobian. Then, there are constants ρ_∞ and j_∞, which depend on q, such that rank_ℤ(J(ℚ)) ≤ dim_𝔽_2 Sel^(2)(ℚ,J) ≤ ρ_∞ + j_∞ + dim_𝔽_2(Cl^+(L)[2]), where ρ_∞+j_∞ ≤ p-1. Further, if one of the following conditions is satisfied: * the Davis–Taussky conjecture holds (Conjecture <ref>), or * the prime 2 is inert in the extension ℚ(ζ_p)^+/ℚ, or * q≤ 92459, then ρ_∞=0 and j_∞=g=(p-1)/2, and dim_𝔽_2 Sel^(2)(ℚ,J) ≤ g + dim_𝔽_2(Cl(L)[2]). In fact, if the Davis–Taussky conjecture holds (see Remark <ref>), then the bound of Theorem <ref> becomes dim_𝔽_2 Sel^(2)(ℚ,J) ≤ g. The organization of the paper is as follows. In Section <ref>, we review the method of 2-descent as implemented by Stoll in <cit.>.
In Sections <ref>, <ref>, and <ref>, we specialize the 2-descent method to the situations we encounter in the rest of the paper, namely the case when f(x) defines a totally real extension, or a cyclic extension of ℚ, of prime degree. In Section <ref>, we give a new proof of Washington's theorem using the method of 2-descent. In Section <ref> we provide examples of hyperelliptic curves of genus g=(p-1)/2, where p is a Sophie Germain prime, and prove Theorem <ref>. Finally, in Section <ref>, we illustrate the previous sections with examples of curves and jacobians, and show how their ranks compare to the bounds. The authors would like to thank Keith Conrad, Gürkan Dogan, Franz Lemmermeyer, Paul Pollack, and Barry Smith for several helpful comments and suggestions. We would also like to express our gratitude to David Dummit for very useful suggestions and for noticing an error in an earlier version of the paper. Finally, we would like to express our thanks to the referees, who have given us helpful suggestions and pointed out some errors in previous versions of this paper. § STOLL'S IMPLEMENTATION OF 2-DESCENT In this section we summarize the method of 2-descent as implemented by Stoll in <cit.>. The method was first described by Cassels <cit.>, and by Schaefer <cit.> and Poonen-Schaefer <cit.> in more generality. Throughout the rest of this section we will focus on computing the dimension of the 2-Selmer group of the jacobian J of a hyperelliptic curve C, given by an affine equation of the form C: y^2=f(x), where f∈ℚ[x] is square-free and deg(f) is odd (Stoll also treats the case when deg(f) is even, but we do not need it for our purposes). In this case, the curve C is of genus g=(deg(f) - 1)/2 with a single point at infinity in the projective closure. Before we can compute the dimension of the 2-Selmer group, we must define a few objects of interest and examine some of their properties. We will follow the notation laid out in <cit.>. Let Sel^(2)(ℚ,J) be the 2-Selmer group of J over ℚ, and let Ш(ℚ,J)[2] be the 2-torsion of the Tate-Shafarevich group of J (as defined, for instance, in Section 1 of <cit.>). Selmer and Sha fit in the following fundamental short exact sequence: 0 → J(ℚ)/2J(ℚ) → Sel^(2)(ℚ,J) → Ш(ℚ,J)[2] → 0. With this sequence in hand, we get a relationship between the rank of J(ℚ) and the 𝔽_2-dimensions of the other groups that we have defined: rank_ℤ J(ℚ) + dim_𝔽_2 J(ℚ)[2] + dim_𝔽_2 Ш(ℚ,J)[2] = dim_𝔽_2 Sel^(2)(ℚ,J). Using equation (<ref>), we get our first upper bound on the rank: rank_ℤ J(ℚ) ≤ dim_𝔽_2 Sel^(2)(ℚ,J) - dim_𝔽_2 Ш(ℚ,J)[2] ≤ dim_𝔽_2 Sel^(2)(ℚ,J). This upper bound is computable, in the sense that J(ℚ)[2] and the Selmer group are computable, as we describe below. For any field extension K of ℚ and f∈ℚ[x], let L_K = K[T]/(f(T)) denote the algebra defined by f, and let N_K denote the norm map from L_K down to K. We denote L_K=K[θ], where θ is the image of T under the reduction map K[T]→ K[T]/(f(T)), and L_K is a product of finite extensions of K: L_K=L_K,1×⋯× L_K,m_K, where m_K is the number of irreducible factors of f(x) in K[x]. Here, the fields L_K,j correspond to the irreducible factors of f(x) in K[x], and the map N_K: L_K→ K is just the product of the norms on each component of L_K. That is, if α=(α_1,α_2,…,α_m_K), then N_K(α)=∏_i=1^m_K N_L_K,i/K(α_i), where N_L_K,i/K: L_K,i→ K is the usual norm map for the extension of fields L_K,i/K. In order to ease notation, we establish the following notational conventions: when K=ℚ we will drop the field from the subscripts altogether, and if K=ℚ_v, we will just use the subscript v. This convention will apply to anything that has a field as a subscript throughout the rest of the paper. As an example, L_v=ℚ_v[T]/(f(T)) and L=ℚ[T]/(f(T)).
Following standard notational conventions, we let 𝒪_K, I(K), and Cl(K) denote the ring of integers of K, the group of fractional ideals in K, and the ideal class group of K, respectively. We define analogous objects for the algebra L_K as products over each component, as follows: 𝒪_L_K = 𝒪_L_K,1×⋯×𝒪_L_K,m_K, I(L_K) = I(L_K,1)×⋯× I(L_K,m_K), Cl(L_K) = Cl(L_K,1)×⋯×Cl(L_K,m_K). Let K be a field extension of ℚ, and let L=ℚ[T]/(f(T)) be as before. * Let I_v(L) denote the subgroup of I(L) generated, in each component, by fractional ideals in L_i with support above a prime v in ℚ. For a finite set S of places of ℚ, let I_S(L)=∏_v∈ S∖{∞} I_v(L). * For any field extension K of ℚ, let H_K = ker( N_K : L_K^×/(L_K^×)^2 → K^×/(K^×)^2 ). For any place v of ℚ, we let res_v : H → H_v be the canonical restriction map induced by the natural inclusion of fields ℚ ↪ ℚ_v. * Let 𝒟(C) denote the group of degree-zero divisors on C with support disjoint from the principal divisor div(y). In our case, the curve is given by C: y^2=f(x), and the support of div(y) consists exactly of the points with coordinates (α,0), where α is a root of f, and the unique point at infinity. Now for each K there is a homomorphism F_K : 𝒟(C)(K) → L^×_K, ∑_P n_P P ↦ ∏_P (x(P) - θ)^n_P, and this homomorphism induces a homomorphism δ_K : J(K) → H_K with kernel 2J(K) by <cit.>. By abusing notation, we also use δ_K to denote the induced map J(K)/2J(K) → H_K. All of these facts, together with some category theory, give us the following characterization of the 2-Selmer group of J over ℚ. The 2-Selmer group of J over ℚ can be identified as follows: Sel^(2)(ℚ,J) = {ξ∈ H : res_v(ξ) ∈ δ_v(J(ℚ_v)) for all v}. In order to take advantage of this description of the Selmer group, we need some additional facts about the 2-torsion of J and the maps δ_K. Let K be a field extension of ℚ. (1) For a point P∈ C(K) not in the support of div(y), δ_K(P-∞) = (x(P) - θ)·(L_K^×)^2. (2) Let f = f_1⋯ f_m_K be the factorization of f over K into monic irreducible factors. Then, to every factor f_j, we can associate an element P_j∈ J(K)[2] such that: (i) The points {P_j} generate J(K)[2] and satisfy ∑_j=1^m_K P_j = 0. (ii) Let h_j be the polynomial such that f = f_j h_j. Then δ_K(P_j) = ((-1)^deg(f_j) f_j(θ) + (-1)^deg(h_j) h_j(θ))·(L_K^×)^2. (3) dim_𝔽_2 J(K)[2] = m_K - 1. Let I_v = ker(N : I_v(L)/I_v(L)^2 → I(ℚ)/I(ℚ)^2), and let val_v : H_v → I_v be the map induced by the valuations on each component of L_v. Considering all primes at once, we get another map val : H → I(L)/I(L)^2. More specifically, the map val is the product of the maps val_v(res_v(·)) over all finite places v. Next, the following lemma helps us compute the dimensions of various groups when K is a local field. Let K be a v-adic local field, and let d_K=[K:ℚ_2] if v=2 and d_K = 0 if v is odd. Then: (1) dim_𝔽_2 J(K)/2J(K) = dim_𝔽_2 J(K)[2] + d_K g = m_K-1+d_K g. (2) dim_𝔽_2 H_K = 2 dim_𝔽_2 J(K)/2J(K) = 2(m_K-1+d_K g). (3) dim_𝔽_2 I_K = m_K - 1. With all of this machinery, the description of Sel^(2)(ℚ,J) given in Proposition <ref> can be refined as follows. Let S = {∞,2}∪{v : v^2 divides disc(f) }. Then Sel^(2)(ℚ,J) = {ξ∈ H : val(ξ)∈ I_S(L)/I_S(L)^2, res_v(ξ)∈δ_v(J(ℚ_v)) for all v∈ S}.
This new characterization suggests the following method to compute Sel^(2)(ℚ,J): S1: Find the set S. S2: For each v∈ S, determine J_v = δ_v(J(ℚ_v))⊆ H_v. S3: Find a basis for a suitable finite subgroup H⊆ L^×/(L^×)^2 such that Sel^(2)(ℚ,J)⊆ H. S4: Compute Sel^(2)(ℚ,J) as the inverse image of ∏_v∈ S J_v under ∏_v∈ S res_v : H → ∏_v∈ S H_v. Ignoring any complications that arise from computing and factoring the discriminant of f, we focus on steps 2 and 3. We omit the details of how to carry out step 4, since we are only interested in an upper bound for the 𝔽_2-dimension of Sel^(2)(ℚ,J). Step 2 can be broken down into three substeps: S2.1: For all v∈ S∖{∞}, compute J_v = δ_v(J(ℚ_v)) and its image G_v = val_v(J_v) in I_v. S2.2: If G_v = 0 for some odd v, remove v from S. S2.3: Compute J_∞. To complete step 2.3, we need the following lemma. With notation as above: (1) dim_𝔽_2 J(ℝ)/2J(ℝ) = m_∞-1 - g. (2) J_∞ is generated by {δ_∞(P-∞) : P∈ C(ℝ)}. (3) The value of δ_∞(P-∞) only depends on the connected component of C(ℝ) containing P. Next, for step 3, we see that if we let G = ∏_v∈ S∖{∞} G_v ⊆ I(L)/I(L)^2, then the group {ξ∈ H : val(ξ)∈ G} contains Sel^(2)(ℚ,J). In fact, the larger group H = {ξ∈ L^×/(L^×)^2 : val(ξ)∈ G } also contains the 2-Selmer group, and we can compute a basis for it using the following two steps. S3.1: Find a basis of V = ker(val : L^×/(L^×)^2 → I(L)/I(L)^2). S3.2: Enlarge this basis to get a basis of H = val^-1(G). Stoll deduces an upper bound and a formula for the 𝔽_2-dimension of the 2-Selmer group (see Lemma 4.10 and the discussion under Step 4), as follows. With notation as above, dim_𝔽_2 Sel^(2)(ℚ,J) ≤ (m_∞-1) + dim_𝔽_2(Cl(L)[2]) + dim_𝔽_2 ker(G→Cl(L)/2Cl(L)).
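As an illustration of the local computations in step S2.1: when v does not divide disc(f), Hensel's lemma shows that the number m_v of irreducible factors of f over ℚ_v equals the number of irreducible factors of f modulo v, so the dimension m_v - 1 of I_v can be read off from finite-field factorizations. The following SymPy sketch is our own illustration, not part of Stoll's implementation; it uses the quintic field that will appear in Section <ref>.

```python
from sympy import symbols, Poly, discriminant

x = symbols('x')
# minimal polynomial of zeta_11 + zeta_11^{-1}, so L = Q(zeta_11)^+
f = x**5 + x**4 - 4*x**3 - 3*x**2 + 3*x + 1

print("disc(f) =", discriminant(f, x))  # 14641 = 11^4, so S = {infinity, 2, 11}
for v in [2, 3, 5, 7, 23]:
    # none of these v divide disc(f), so factoring mod v computes m_v
    m_v = len(Poly(f, x, modulus=v).factor_list()[1])
    print(f"v = {v}: m_v = {m_v}, dim I_v = {m_v - 1}")
```

The inert primes (v = 2, 3, 5, 7) give m_v = 1 and hence I_v = 0, while v = 23 ≡ 1 (mod 11) splits completely and gives m_v = 5; of course, only the primes in S can contribute to G in any case.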
In the next section, we modify the proof of the bound in Proposition <ref> to allow for an extra condition at infinity, before we specialize to totally real, and then cyclic, extensions. §.§ About the proof of Proposition <ref> The following commutative diagram helps explain where the Selmer group fits: the restriction maps send the top row J(ℚ)/2J(ℚ) ↪ H → I(L)/I(L)^2 (with maps δ and val) to the bottom row ∏_v J(ℚ_v)/2J(ℚ_v) ↪ ∏_v H_v → ∏_v I_v(L)/I_v(L)^2 (with maps ∏_v δ_v and ∏_v val_v), the middle vertical map being ∏_v res_v. The 2-Selmer group of J over ℚ is then given, as in Prop. <ref>, by Sel^(2)(ℚ,J) = {ξ∈ H : val(ξ)∈ I_S(L)/I_S(L)^2, res_v(ξ)∈δ_v(J(ℚ_v)) for all v∈ S}. The Selmer group is thus contained in H⊆ L^×/(L^×)^2 and, more precisely, Sel^(2)(ℚ,J)⊆{ξ∈ H : val(ξ) ∈ G, res_∞(ξ) ∈ J_∞}, where J_∞=δ_∞(J(ℝ)), the group G is the product ∏_v∈ S∖{∞} G_v ⊆ I(L)/I(L)^2, and recall that H is the kernel of the norm map from L^×/(L^×)^2 down to ℚ^×/(ℚ^×)^2. Thus, Sel^(2)(ℚ,J) is contained in the larger group H̃ = {ξ∈ L^×/(L^×)^2 : val(ξ) ∈ G, res_∞(ξ) ∈ J_∞}. We emphasize here that the definition of H in <cit.> does not impose a condition at ∞, while the definition of H̃ does, in order to improve the accuracy of the bound (thus H̃⊆ H). In an attempt to simplify notation, let L_J_∞ be the subspace of L^×/(L^×)^2 cut out by a condition at the infinite primes: L_J_∞ = L^×/(L^×)^2 ∩ res_∞^-1(J_∞). Thus, H̃={ξ∈ L_J_∞ : val(ξ) ∈ G}, and Sel^(2)(ℚ,J)⊆H̃. Note that H̃ is the largest subgroup of L_J_∞ such that val(H̃)≅ G∩val(L_J_∞). Let us first show that indeed val(H̃)≅ G∩val(L_J_∞). Indeed: * Suppose ξ∈H̃ and consider val(ξ). By definition, since ξ∈H̃, we have that val(ξ) is in G, and ξ∈ L_J_∞, thus val(ξ)∈val(L_J_∞). Hence, val(ξ)∈ G∩val(L_J_∞). * Conversely, suppose g ∈ G∩val(L_J_∞). Then, there is some ξ∈ L_J_∞ such that val(ξ)=g. In particular, res_∞(ξ) ∈ J_∞, and since val(ξ)=g∈ G, it follows that ξ∈H̃. Hence, val(ξ)∈val(H̃). Also, let us show that H̃ is the largest subgroup of L_J_∞ such that val(H̃)≅ G∩val(L_J_∞). Suppose that ξ∈ L_J_∞ and val(ξ) ∈ G∩val(L_J_∞). Then, ξ∈ L_J_∞ and val(ξ)∈ G, so ξ∈H̃ by definition. Next, we define subspaces V and W of L^×/(L^×)^2 as follows: * Let {ξ_i}_i=1^r be generators of G∩val(L_J_∞), with dim_𝔽_2(G∩val(L_J_∞))=r, and for each 1≤ i≤ r pick one μ_i∈ L_J_∞ such that val(μ_i)=ξ_i. Let W be the subspace generated by {μ_i}_i=1^r. Note that W⊆ L_J_∞⊆ L^×/(L^×)^2. In particular, res_∞(w)∈ J_∞ for all w∈ W. Moreover, W and val(W) are isomorphic by construction, so W ≅ val(W) = G∩val(L_J_∞) ⊆ G∩val(L^×/(L^×)^2) = ker(G→Cl(L)/2Cl(L)). Thus, r = dim_𝔽_2 W = dim_𝔽_2(G∩val(L_J_∞)) ≤ dim_𝔽_2 ker(G→Cl(L)/2Cl(L)). * Next, let us write V = ker(val : L_J_∞ → I(L)/I(L)^2). It follows that H̃ = V ⊕ W (note that val(V) is trivial, while val(w) is non-trivial for every w≠ 0 in W). Let V = ker(val : L_J_∞ → I(L)/I(L)^2), let U = (𝒪_L^×/(𝒪_L^×)^2)∩ L_J_∞, and let Cl(L_J_∞) be the class group defined as follows: Cl(L_J_∞)=I(L)/P(L_J_∞), where I(L) is the group of fractional ideals of L, and P(L_J_∞) is the group of principal fractional ideals 𝔄=(α) with a generator such that res_∞(α)∈ J_∞. Then, there is an exact sequence: 0 → U → V → Cl(L_J_∞)[2] → 0. Consider the commutative diagram whose top row is L^×∩ res_∞^-1(J_∞) → L^×∩ res_∞^-1(J_∞) → L_J_∞ → 0, with first map the squaring map, and whose bottom row is 0 → I(L) → I(L) → I(L)/I(L)^2 → 0, again with first map the squaring map and with vertical maps induced by val, and apply the snake lemma. Let P(L) be the subgroup of all principal fractional ideals, let P^+(L) be the subgroup of principal ideals generated by a totally positive element, and let P(L_J_∞) be as above. Since the trivial signature (1,1,…,1)∈ J_∞, it follows that P^+(L) ⊆ P(L_J_∞) ⊆ P(L), and therefore there are surjections Cl(L) ↞ Cl(L_J_∞) ↞ Cl^+(L), where Cl^+(L)=I(L)/P^+(L) is the narrow class group of L. Putting all this together (and writing dim for dim_𝔽_2), we obtain a bound: dim Sel^(2)(ℚ,J) ≤ dim(H̃) = dim U + dim Cl(L_J_∞)[2] + dim W = dim(𝒪_L^×/(𝒪_L^×)^2∩ res_∞^-1(J_∞)) + dim Cl(L_J_∞)[2] + dim(G∩val(L_J_∞)) ≤ dim(𝒪_L^×/(𝒪_L^×)^2∩ res_∞^-1(J_∞)) + dim Cl^+(L)[2] + dim(G∩val(L_J_∞)). We note here that dim(𝒪_L^×/(𝒪_L^×)^2∩ res_∞^-1(J_∞)) ≤ m_∞ -1, where we have used the fact that dim(𝒪_L^×/(𝒪_L^×)^2)=m_∞, and the fact that res_∞(-1) is not in J_∞ (because J_∞=δ_∞(J(ℝ))⊆ H_∞, which is the kernel of the norm map, so N(j)=1 for j∈ J_∞, but the norm N(-1)=-1 because the degree of L is odd). We will improve on the bound given by (<ref>) above by making certain assumptions about G and a more careful analysis of the dimension of the subgroup of totally positive units. Before we state our refinements, we review some of the results on totally positive units that we shall need. It is worth pointing out that the third line of Eq. (<ref>) is not necessarily an improvement over the bound in Prop. <ref> if dim Cl^+(L)[2] > dim Cl(L)[2]. In our setting, we will seek conditions under which dim Cl^+(L)[2] = dim Cl(L)[2], and then our bound in Eq. (<ref>) will be an improvement, due to the more careful counting of units according to their signs at the infinite places. §.§ Totally Positive Units Let L be a totally real Galois number field of prime degree p>2, with embeddings τ_i : L → ℝ, for i=1,…,p, and maximal order 𝒪_L. Let Cl(L) = Cl(𝒪_L) be the ideal class group of L, and let Cl^+(L) be the narrow class group. Let V_∞ = {± 1}^p ≅ (𝔽_2)^p and, by abuse of notation, we extend the map res_∞ (as in Definition <ref>, where we note that H_∞≅ V_∞ and H⊆ L^×/(L^×)^2) to res_∞: L^×/(L^×)^2 → V_∞ by res_∞(α) = (sgn(τ_1(α)), sgn(τ_2(α)),…, sgn(τ_p(α))). Let 𝒪_L^× be the unit group of 𝒪_L, and let 𝒪_L^×,+ be the subgroup of totally positive units. Thus, ker(res_∞|_𝒪_L^×/(𝒪_L^×)^2) = 𝒪_L^×,+/(𝒪_L^×)^2. We refer the reader to <cit.> for heuristics and conjectures about the dimension of the group of totally positive units (in particular, the conjecture on page 4). In the following theorem, we use the notation of <cit.>.
Let ρ, ρ^+, and ρ_∞ be defined by ρ = dim_𝔽_2 Cl(L)/2Cl(L), ρ^+ = dim_𝔽_2 Cl^+(L)/2Cl^+(L), and ρ_∞ = dim_𝔽_2 𝒪_L^×,+/(𝒪_L^×)^2. Then: * ρ_∞ = p - dim_𝔽_2 res_∞(𝒪_L^×/(𝒪_L^×)^2) = dim_𝔽_2 {± 1 }^p/res_∞(𝒪_L^×/(𝒪_L^×)^2). * We have 0 → {± 1 }^p/res_∞(𝒪_L^×/(𝒪_L^×)^2) → Cl^+(L) → Cl(L) → 0. In particular, max{ρ,ρ_∞} ≤ ρ^+ ≤ ρ_∞+ρ, and ρ^+=ρ_∞+ρ if and only if the exact sequence splits. * (Armitage-Fröhlich) ρ^+-ρ ≤ (p-1)/2. For part (1), note that ρ_∞ = dim_𝔽_2 𝒪_L^×,+/(𝒪_L^×)^2 is the dimension of ker(res_∞|_𝒪_L^×/(𝒪_L^×)^2), and the dimension of 𝒪_L^×/(𝒪_L^×)^2 is p. Thus, the dimension of the image of res_∞|_𝒪_L^×/(𝒪_L^×)^2 is p minus the dimension of the kernel. For part (2), see Section 2 of <cit.>, and in particular Equation (2.9). Part (3) is shown in <cit.>, where it is proved that ρ^+-ρ ≤ ⌊ r_1/2 ⌋, where r_1 is the number of real embeddings of L. Here r_1=p is an odd prime, so the proof is concluded. From the statement of the previous theorem, we see that ρ^+≥ρ_∞. However, ρ≥ρ_∞ is not necessarily true. In the following result, a condition is given that implies ρ≥ρ_∞ (see also <cit.>, Section 3). Let L/ℚ be a finite abelian extension with Galois group of odd exponent n, and suppose that -1 is congruent to a power of 2 modulo n. Then, in the notation of Theorem <ref>, we have ρ=ρ^+. In particular, ρ≥ρ_∞. We obtain the following corollary. Let L/ℚ be a cyclic extension of odd prime degree p, and suppose that the order of 2 in (ℤ/pℤ)^× is even. Then, ρ=ρ^+. In particular, dim_𝔽_2 Cl(L)[2] = dim_𝔽_2 Cl^+(L)[2]. Suppose that Gal(L/ℚ)≅ℤ/pℤ for some prime p>2, such that the order of 2 in (ℤ/pℤ)^× is even (since Gal(L/ℚ) is cyclic of order p, this is equivalent to -1 being congruent to a power of 2 modulo p). Hence, Theorem <ref> applies, and ρ=ρ^+. The odd primes below 100 such that the order of 2 is odd modulo p are 7, 23, 31, 47, 71, 73, 79, and 89, so the corollary applies to all other primes not in this list (i.e., 3, 5, 11, 13, 17, 19, etc.).
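This short list is easy to reproduce by machine; the following two-line SymPy check is our own verification, not part of the original computations.

```python
from sympy import primerange, n_order

# odd primes p < 100 for which the multiplicative order of 2 mod p is odd
print([p for p in primerange(3, 100) if n_order(2, p) % 2 == 1])
# output: [7, 23, 31, 47, 71, 73, 79, 89]
```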
Let K/ℚ be an (imaginary) abelian extension of the rationals of degree n, let h_K be the class number of K, let C_K ⊆ 𝒪_K^× be the group of circular units of K (as defined in <cit.>, p. 376), and let C_K^+ ⊆ 𝒪_K^×,+ be the subgroup of circular units that are totally positive. Let K^+ be the maximal real subfield of K, and let h_K^+ be its class number. Let h_K^- = h_K/h_K^+. Further, assume that each prime p which ramifies in K does not split in K^+. Then: * The index [𝒪_K^×,+:(𝒪_K^×)^2] is a divisor of the index [C_K^+:C_K^2]. * If the discriminant of K is plus or minus a power of a prime, then h_K^- is odd if and only if [C_L^+:C_L^2]=1, where C_L is the subgroup of circular units of L=K^+. * Suppose the discriminant of K is plus or minus a power of a prime, [K^+:ℚ]=n is a power of an odd prime p, and the order of 2 mod p is even. Then, h_K^- is odd if and only if h_K^+ is odd. The results are, respectively, Lemma 5, Theorem 3, and Theorem 4 of <cit.>. Next, we cite a result of Estes which extends work of Davis (<cit.>) and Stevenhagen (<cit.>). See also <cit.>. Let q and p be primes such that q=2p+1. If 2 is inert in ℚ(ζ_p)^+, where ζ_p is a primitive p-th root of unity, then the class number of ℚ(ζ_q) is odd. The following result combines the results of Davis, Estes, Stevenhagen, and Garbanati, and gives a specific criterion to check that ρ_∞=0 for the maximal real subfield of a cyclotomic field. Let q and p>2 be primes such that q=2p+1, and let L=ℚ(ζ_q+ζ_q^-1), where ζ_q is a primitive q-th root of unity. Further, assume that the prime 2 is inert in the extension ℚ(ζ_p)^+/ℚ. Then, ρ_∞ = dim_𝔽_2 𝒪_L^×,+/(𝒪_L^×)^2 = 0. Let K=ℚ(ζ_q) and let K^+ = L = ℚ(ζ_q+ζ_q^-1). We note that the discriminant of K is a power of a prime (namely q), and therefore the primes of K or L that are ramified (namely the primes above q) are totally ramified, so they do not split. Moreover, p=(q-1)/2 is prime and [L:ℚ]=p. Thus, the hypotheses of Theorem <ref> are satisfied for K and L. If 2 is inert in ℚ(ζ_p)^+/ℚ, then Theorem <ref> shows that h_K is odd, and therefore h_K^- is odd as well, since h_K^- = h_K/h_K^+ by definition. Since the discriminant of K is a power of q, Theorem <ref> part (2) shows that [C_L^+:C_L^2]=1, and therefore [𝒪_L^×,+:(𝒪_L^×)^2]=1 as well, by part (1). We conclude that ρ_∞=0. There is in fact a conjecture of Davis and Taussky which says that ρ_∞=0 in the case of L=ℚ(ζ_q+ζ_q^-1), where p=(q-1)/2 is a Sophie Germain prime. For more on the Davis–Taussky conjecture see <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Let q and p be primes such that q=2p+1, and let L=ℚ(ζ_q+ζ_q^-1), where ζ_q is a primitive q-th root of unity. Then, C^+_L=C_L^2. (Thus, it follows that ρ_∞ = dim_𝔽_2 𝒪_L^×,+/(𝒪_L^×)^2 = 0.) In the next result we note that the Davis–Taussky conjecture is equivalent to the class number of ℚ(ζ_q) being odd. (We thank David Dummit for pointing out the following equivalence to us.) Let q and p be primes such that q=2p+1, let L=ℚ(ζ_q+ζ_q^-1), where ζ_q is a primitive q-th root of unity, and let K=ℚ(ζ_q). Then, C^+_L=C_L^2 if and only if the class number of K is odd. If h^+_K is even, then h^-_K is even (see, for instance, <cit.>, pp. 773-774, for a proof of this fact). Since h_K=h_K^+ h_K^-, it follows that h_K^- is odd if and only if h_K is odd. Since the discriminant of K is a power of q (prime), Theorem <ref>, part (2), implies that h^-_K is odd if and only if C^+_L = (C_L)^2. Hence, the Davis–Taussky conjecture holds if and only if h_K^- is odd, if and only if h_K is odd. We conclude this section with some remarks about how to compute an upper bound for ρ_∞ in the cyclic case, working in coordinates over 𝔽_2. Let L be a cyclic extension of ℚ of degree p>2, and let Gal(L/ℚ)=⟨σ⟩. Then, L is totally real (since L is Galois over ℚ, it is either totally real or totally imaginary, and [L:ℚ]=p>2 is odd, so p=2r_2 is impossible). Let u≠± 1 be a fixed (known) unit in 𝒪_L^×, and let res_∞(u) = (ε_1,…,ε_p), where τ_1,…,τ_p are the real embeddings of L and ε_i is the sign of τ_i(u)∈ℝ. We order our embeddings in the following manner. Let g_u(x) be the minimal polynomial of u over ℚ, and let r_1,…,r_p be the real roots of g_u(x), ordered so that r_1<r_2<⋯<r_p. Then, {r_i}={τ_j(u)}, and we choose τ_i so that τ_i(u)=r_i for all 1≤ i≤ p. With this notation, res_∞(u) = (-1,-1,-1,…,1,1,1), i.e., it consists of a non-negative number of -1 signs followed by a non-negative number of +1 signs. Recall that u is in the kernel of res_∞ if and only if u is a totally positive unit. Attached to the generator σ∈𝒢=Gal(L/ℚ), there is a permutation ϕ=ϕ_σ∈ S_p, where ϕ is considered here as a permutation of {1,2,…,p}, such that τ_i(σ(u))=r_ϕ(i), and therefore res_∞(σ(u)) = (ε_ϕ(1), …, ε_ϕ(p)). Since each τ_i is an embedding (hence injective), if τ_i(α)=r_i for some α∈ L^×, then τ_i(σ(α)) = r_ϕ(i). It follows that τ_i(σ^n(u)) = r_ϕ^n(i), and res_∞(σ^n(u)) = (ε_ϕ^n(1), …, ε_ϕ^n(p)), for all n≥ 1. Now, in addition, suppose that u is a unit of norm 1, and let 𝒢· u be the subgroup of 𝒪_L^× generated by the conjugates of u, i.e., 𝒢· u = ⟨ u, σ(u), σ^2(u),…, σ^p-1(u)⟩ ⊆ 𝒪_L^×.
Note that the product ∏_n=0^p-1 σ^n(u)=1, so 𝒢· u = ⟨ u, σ(u), σ^2(u),…, σ^p-2(u)⟩. Then, res_∞(𝒢· u) = ⟨(ε_ϕ^n(1), …, ε_ϕ^n(p)) : 0≤ n≤ p-2⟩ ⊆ V_∞, where we have defined V_∞ = {± 1}^p. If we fix an isomorphism ψ:{± 1}≅𝔽_2, and write f_i = ψ(ε_i), then the map res_∞: 𝒢· u → V_∞ can be written in 𝔽_2-coordinates, and the corresponding p×(p-1) matrix over 𝔽_2 is given by M_∞,u = ( f_ϕ^j(i))_{1≤ i≤ p, 0≤ j ≤ p-2}, i.e., the matrix whose i-th row is (f_i, f_ϕ(i), …, f_ϕ^p-2(i)). Let u be a unit of norm 1, and let d_∞,u be the dimension of the column space of M_∞,u or, equivalently, the dimension of res_∞(𝒢· u). Then, ρ_∞ ≤ (p-1) - d_∞,u. In particular, if d_∞,u = p-1, then ρ_∞ = 0. If u is of norm 1, then -1∉𝒢· u, because the norm of -1 is -1, while the norm of every element in 𝒢· u is 1. In particular, ⟨res_∞(-1), res_∞(𝒢· u)⟩ is a space of dimension 1+d_∞,u. Hence, the kernel of res_∞ is at most of dimension p-(1+d_∞,u). It follows that ρ_∞ ≤ (p-1)-d_∞,u, as desired.
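To make the lemma concrete, consider L=ℚ(ζ_11)^+, so p=5 and α=ζ_11+ζ_11^-1 has minimal polynomial x^5+x^4-4x^3-3x^2+3x+1 and norm -1. The Galois action satisfies σ(α)=α^2-2, so u=α·σ(α) is a unit of norm 1. The sketch below, our own illustration and not code from this paper, builds M_∞,u from floating-point approximations of the real embeddings and computes its rank over 𝔽_2; it finds d_∞,u=4, hence ρ_∞=0 for this field, consistent with Corollary <ref>.

```python
import numpy as np

p, q = 5, 11
# real embeddings of L, via the roots 2cos(2*pi*k/11) of the minimal polynomial
roots = sorted(2 * np.cos(2 * np.pi * k / q) for k in range(1, p + 1))

sigma = lambda t: t**2 - 2   # Galois action: 2cos(theta) -> 2cos(2*theta)
u = lambda t: t * sigma(t)   # u = alpha * sigma(alpha), a unit of norm 1

# M[i, j] encodes the sign of tau_i(sigma^j(u)): entry 1 for negative, 0 for positive
M = np.zeros((p, p - 1), dtype=np.int64)
for i, r in enumerate(roots):
    t = r
    for j in range(p - 1):
        M[i, j] = 1 if u(t) < 0 else 0
        t = sigma(t)

def rank_mod2(A):
    """Rank of a 0/1 matrix over F_2, by Gaussian elimination."""
    A, rank = A.copy(), 0
    rows, cols = A.shape
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if A[r, c]), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] = (A[r] + A[rank]) % 2
        rank += 1
    return rank

d = rank_mod2(M)
print(f"d_infty,u = {d}, so rho_infty <= {(p - 1) - d}")  # prints d_infty,u = 4
```

Note that 2 is a primitive root modulo 5, so ϕ_5(x) is irreducible over 𝔽_2 and the proposition below already forces ρ_∞∈{0,4}; the computation singles out ρ_∞=0.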
§.§ Totally positive units in cyclic extensions of prime degree Let L be a cyclic extension of ℚ of prime degree p>2, and let 𝒪_L^× be the unit group of 𝒪_L. Let 𝒪_L^×,1 be the group of units of norm 1, so that 𝒪_L^× ≅ {± 1 }×𝒪_L^×,1. In this section we show the following result: Let p≥ 3 be a prime, let L be a cyclic extension of ℚ of degree p, and suppose that the polynomial ϕ_p(x)=(x^p-1)/(x-1) is irreducible over 𝔽_2. Then, either ρ_∞=0 (i.e., 𝒪_L^×,+ = (𝒪_L^×)^2), or ρ_∞ = p-1, in which case every unit in 𝒪_L^×,1 is totally positive. If every unit in 𝒪_L^×,1 is totally positive, then ρ_∞=p-1, since we would have ρ_∞ = dim_𝔽_2 𝒪_L^×,+/(𝒪_L^×)^2 = dim_𝔽_2 𝒪_L^×,1/(𝒪_L^×)^2 = p-1. Otherwise, there must be a unit u∈𝒪_L^×,1 that is not totally positive (in particular, u is not in ± 1·(𝒪_L^×)^2). Let G=Gal(L/ℚ)=⟨σ⟩≅ℤ/pℤ, and let u_i = σ^i(u), for i=0,…,p-1, be the conjugates of u. Let τ be a fixed embedding of L in ℝ, let ε_i ∈{± 1 } be the sign of τ(u_i) for i=0,…,p-1, and order the embeddings τ=τ_0,…,τ_p-1 of L such that res_∞(u)=(ε_0,…,ε_p-1). In other words, τ_i = τ∘σ^i. Thus, res_∞(σ(u)) = res_∞(u_1) = (ε_1,…,ε_p-1,ε_0). Consider the class of u in 𝒪_L^×/(𝒪_L^×)^2 and its non-trivial signature res_∞(u). The group ring 𝔽_2[G] acts on the module M=𝔽_2[G]· u. Since G is of prime order, the class of u is killed by 1+σ+⋯+σ^p-1 (its norm is 1), and by assumption ϕ_p(x) is irreducible, so it follows that M is irreducible. Furthermore, since u∉± 1·(𝒪_L^×)^2, it follows that res_∞(u)≠ (1,1,…,1) and res_∞(u)≠(-1,-1,…,-1). Hence, res_∞(σ(u))≠ res_∞(u) by our formula above, and therefore M is not 1-dimensional. Moreover, 𝔽_2[G]≅𝔽_2[x]/(x^p-1)≅𝔽_2[x]/(x-1) ⊕𝔽_2[x]/(ϕ_p(x)). Since we are assuming that ϕ_p(x) is irreducible over 𝔽_2, the only irreducible representations of G over 𝔽_2 are the trivial (1-dimensional) representation and a representation of dimension p-1. Since the irreducible 𝔽_2[G]-module M is not 1-dimensional, it must be (p-1)-dimensional. Finally, we note that M ⊆𝒪_L^×,1/(𝒪_L^×)^2, and res_∞ is injective on M (its kernel is a proper submodule of the irreducible module M, since res_∞(u) is non-trivial, so the kernel is trivial). Since the dimension of all possible signatures of elements of 𝒪_L^×,1 is p-1 (the norm is 1, so the product of the p signs must be +1), and M is (p-1)-dimensional, we conclude that all such signatures occur, and therefore ρ_∞=0, as desired. We conclude this section by quoting a conjectural density of cubic and quintic fields with maximal ρ_∞, which is part of a broader conjecture of Dummit and Voight (see <cit.>). Let p=3 (resp. p=5). As L varies over all totally real fields of degree p ordered by absolute discriminant, the density of such fields with ρ_∞=2 (resp. ρ_∞=4) is approximately 1.9% (resp. 0.000019%). By Prop. <ref>, if L is a cyclic field of degree p=5 and ρ_∞≠ 4, then it must be 0. By Conjecture <ref>, approximately 99.999981% of all totally real quintic fields conjecturally have ρ_∞≠ 4. Thus, we expect that cyclic quintic fields with ρ_∞=0 must be quite abundant. Note, however, that cyclic quintic fields are a subset of density 0 among all totally real quintic fields, so the conjectures of Dummit and Voight do not apply directly here. §.§ Refinements of the bound on the rank Now we are ready to improve the bound in Proposition <ref>. We will continue using the notation of Section <ref>. Let p be an odd prime, let C: y^2=f(x) with f(x) of degree p (and genus g=(p-1)/2), such that L, the number field defined by f(x), is totally real of degree p, and let J/ℚ be the jacobian of C/ℚ. Let ρ_∞ = dim_𝔽_2 𝒪_L^×,+/(𝒪_L^×)^2, and let j_∞ = dim_𝔽_2 (res_∞(𝒪_L^×/(𝒪_L^×)^2)∩ J_∞). Then: dim_𝔽_2 Sel^(2)(ℚ,J) ≤ j_∞ + ρ_∞ + dim_𝔽_2 Cl^+(L)[2] + dim_𝔽_2(G∩val(L_J_∞)). In particular: * ρ_∞+j_∞ ≤ p-1. * j_∞ ≤ dim_𝔽_2 J_∞ = (p-1)/2 = genus(C). * dim_𝔽_2 Cl^+(L)[2] ≤ ρ_∞ + dim_𝔽_2 Cl(L)[2]. In particular, dim_𝔽_2 Sel^(2)(ℚ,J) ≤ j_∞ + 2ρ_∞ + dim_𝔽_2 Cl(L)[2] + dim_𝔽_2(G∩val(L_J_∞)). * If G∩val(L_J_∞) is trivial, then dim_𝔽_2 Sel^(2)(ℚ,J) ≤ j_∞ + ρ_∞ + dim_𝔽_2 Cl^+(L)[2] ≤ j_∞ + 2ρ_∞ + dim_𝔽_2 Cl(L)[2]. Recall that in Section <ref> we showed dim_𝔽_2 Sel^(2)(ℚ,J) ≤ dim_𝔽_2(H̃) = dim_𝔽_2 U + dim_𝔽_2 Cl(L_J_∞)[2] + dim_𝔽_2 W = dim_𝔽_2(𝒪_L^×/(𝒪_L^×)^2∩ res_∞^-1(J_∞)) + dim_𝔽_2 Cl(L_J_∞)[2] + dim_𝔽_2(G∩val(L_J_∞)). Clearly, dim_𝔽_2(𝒪_L^×/(𝒪_L^×)^2∩ res_∞^-1(J_∞)) = dim_𝔽_2(res_∞(𝒪_L^×/(𝒪_L^×)^2)∩ J_∞) + dim_𝔽_2 ker(res_∞|_𝒪_L^×/(𝒪_L^×)^2) = j_∞ + dim_𝔽_2 𝒪_L^×,+/(𝒪_L^×)^2 = j_∞ + ρ_∞. Now, for part (1), notice that 𝒪_L^×/(𝒪_L^×)^2∩ res_∞^-1(J_∞) ⊆ 𝒪_L^×/(𝒪_L^×)^2, so its dimension as an 𝔽_2-vector space is at most p. Moreover, res_∞(-1) is not in J_∞ (because J_∞=δ_∞(J(ℝ))⊆ H_∞, which is the kernel of the norm map, so N(j)=1 for j∈ L such that res_∞(j)∈ J_∞, but the norm N(-1)=-1, since the degree of L is odd). Thus, j_∞+ρ_∞ = dim_𝔽_2(𝒪_L^×/(𝒪_L^×)^2∩ res_∞^-1(J_∞)) ≤ p-1, as claimed. For part (2), recall that we have defined J_∞ = δ_∞(J(ℝ)/2J(ℝ))⊆ H_∞. By Lemma <ref>, and the fact that δ_∞ is injective (Lemma 4.1 in <cit.>), we have dim_𝔽_2(J_∞) = m_∞-1-g = p-1 - (p-1)/2 = (p-1)/2, where we have used the fact that L is totally real to claim that m_∞=p. Part (3) follows from Theorem <ref>, which shows that ρ^+ ≤ ρ_∞ + ρ. And part (4) is immediate from (3), so the proof is complete. Now we can put together Corollary <ref> and Proposition <ref> to give a bound in the cases when the multiplicative order of 2 mod p is even. Suppose L is a cyclic, totally real number field of degree p>2, such that the order of 2 in (ℤ/pℤ)^× is even. Then, in the notation of Proposition <ref>, we have dim_𝔽_2 Sel^(2)(ℚ,J) ≤ j_∞ + ρ_∞ + dim_𝔽_2 Cl(L)[2] + dim_𝔽_2 ker(G→Cl(L)/2Cl(L)) ≤ p-1 + dim_𝔽_2 Cl(L)[2] + dim_𝔽_2 ker(G→Cl(L)/2Cl(L)). Moreover, if ρ_∞=0, then dim_𝔽_2 Sel^(2)(ℚ,J) ≤ g + dim_𝔽_2 Cl(L)[2] + dim_𝔽_2 ker(G→Cl(L)/2Cl(L)) = (p-1)/2 + dim_𝔽_2 Cl(L)[2] + dim_𝔽_2 ker(G→Cl(L)/2Cl(L)). By Corollary <ref>, the hypothesis implies that ρ=ρ^+. Thus, the bound follows from Proposition <ref>. Note that the bound p-1 + dim_𝔽_2 Cl(L)[2] + dim_𝔽_2 ker(G→Cl(L)/2Cl(L)) is the bound that appears in Proposition <ref>.
We will conclude the bound stated in the theorem as a consequence of Theorem <ref>. Since 2 ≡ -1 (mod 3), the order of 2 mod 3 is 2, so Corollary <ref> shows that ρ^+=ρ. In Section <ref> we discussed that cyclic cubic fields with ρ_∞≠ 0 are rare. Let us show that in fact 𝒪_L^×,+ = (𝒪_L^×)^2, and therefore ρ_∞=0, for the cyclic cubic fields L=L_m defined by f_m(x)=0.

Let m be an integer such that m ≢ 3 (mod 9), and let L be the number field defined by f_m(x)= x^3+mx^2-(m+3)x+1=0. Then, ρ_∞=0.

Let α be the negative root of f_m(x). Then α'=1/(1-α) and α''=1-1/α are the two other roots, and in fact they are units in 𝒪_L^×. Moreover,

-m-2<α<-m-1<0<α'<1<α''<2,

and therefore all eight possible sign signatures may be obtained from α and its conjugates. Thus, every totally positive unit is a square, and ρ_∞=0, as claimed.

Thus, in order to prove Theorem <ref>, it is enough to show that G is trivial.

Let m≥ 0 be an integer such that D=m^2+3m+9 is square-free. Let v=2, or let v be a prime divisor of D, and let f_m(x)=x^3+mx^2-(m+3)x+1. Then, f_m(x) is irreducible as a polynomial in ℚ_v[x].

Let v=2. Then, if we consider the reduction of f_m(x) as a polynomial in 𝔽_2[x], we have

f_m(x) ≡ x^3+x^2+1 if m ≡ 1 (mod 2), and f_m(x) ≡ x^3+x+1 if m ≡ 0 (mod 2).

In both cases, f_m is irreducible over 𝔽_2, hence it is irreducible over ℚ_2.

Now, let v>2 be a prime divisor of D. Then, by assumption, v>3 and

f_m(x-m/3) = x^3-(D/3)x+D(2m+3)/27

is integral over ℤ_v. Since D=m^2+3m+9, it follows that 4D-(2m+3)^2=27, and therefore the greatest common divisor of D and 2m+3 divides 27. Since by assumption D is square-free, it cannot be divisible by 3, and so (D,2m+3)=1; this, together with the fact that D is square-free, implies that f_m(x-m/3) is Eisenstein over ℤ_v. Hence, f_m is irreducible over ℚ_v, as claimed.

We are now ready to prove Theorem <ref>.

Let m≥ 0 be an integer such that m^2+3m+9 is square-free. Let E_m be the elliptic curve given by the Weierstrass equation

E_m: y^2 = f_m(x)= x^3+mx^2-(m+3)x+1.

Let L_m be the number field generated by a root of f_m(x), and let Cl(L_m) be its class group. Then,

rank_ℤ(E_m(ℚ)) ≤ 1 + dim_𝔽_2(Cl(L_m)[2]).

We shall use Theorem <ref>. Since 2 ≡ -1 (mod 3), the order of 2 mod 3 is 2, which is even, so ρ=ρ^+, and Lemma <ref> shows that ρ_∞=0, so it remains to compute G_m = ∏_v∈ S_m∖{∞} G_m,v. The discriminant of f_m is D^2=(m^2+3m+9)^2, so we have

S_m={∞,2}∪{v : v | D}.

However, by Lemma <ref>, the polynomial f_m(x) is irreducible over ℚ_v for any finite prime v∈ S_m. It follows that the number of irreducible factors of f_m(x) over ℚ_v is 1, and therefore I_m,v is zero-dimensional by Lemma <ref>. Since G_m,v⊆ I_m,v, we conclude that G_m,v is always trivial. Hence, G_m is trivial, and Theorem <ref> implies that

dim_𝔽_2 Sel^(2)(ℚ,J_m) ≤ g + dim Cl(L_m)[2] + dim img(G_m→ Cl(L_m)/2 Cl(L_m)) = g + dim Cl(L_m)[2],

where J_m is the jacobian of the elliptic curve E_m. Since the genus of E_m is 1, E_m ≅ J_m over ℚ. Hence,

rank_ℤ(E_m(ℚ)) ≤ dim_𝔽_2 Sel^(2)(ℚ,E_m) ≤ 1 + dim Cl(L_m)[2],

as desired.

§ GENUS G=(P-1)/2, WHERE P IS A SOPHIE GERMAIN PRIME

The goal of this section is to find examples of hyperelliptic curves of genus g≥ 2 where the dimension of the Selmer group can be bounded in terms of a class group, as in Theorem <ref> (the genus 1 case). We begin by looking at polynomials f(x) that cut out extensions of degree p contained inside a q-th cyclotomic extension, where q is another prime.

Let q>2 be a prime such that the multiplicative order of 2 mod q is either q-1 or (q-1)/2, and let p>2 be a prime dividing q-1. Let ℚ(ζ_q) be the q-th cyclotomic field, and let L be the unique extension of degree p contained in ℚ(ζ_q).
Further, suppose that 𝒪_L = ℤ[α] for some algebraic integer α∈ L, let f(x) be the minimal polynomial of α, and let J/ℚ be the jacobian variety associated to the hyperelliptic curve C: y^2=f(x). Then,

dim_𝔽_2 Sel^(2)(ℚ,J) ≤ ρ_∞ + g + dim Cl^+(L)[2].

If in addition the multiplicative order of 2 mod p is even, then

dim_𝔽_2 Sel^(2)(ℚ,J) ≤ ρ_∞ + g + dim Cl(L)[2].

We apply Stoll's algorithm to y^2=f(x) in order to compute G. Note that f(x) is monic, integral, and irreducible over ℚ, since α generates 𝒪_L as a ring. Moreover, since 𝒪_L=ℤ[α], it also follows that

disc(f(x)) = disc(𝒪_L),

and since L⊆ℚ(ζ_q), the only prime dividing disc(𝒪_L) is q. Hence, the set S={∞,2,q}. We will show that G_2 and G_q are trivial, and therefore G is trivial as well. Indeed:

* Let v=2. Since the order of 2 modulo q is q-1 or (q-1)/2 by hypothesis, it follows from <cit.> that 2 splits into 1 or 2 prime ideals in ℚ(ζ_q)/ℚ, and therefore 2 must be inert in the intermediate extension L/ℚ of degree p. In particular, the polynomial f(x) is irreducible over ℚ_2, since it defines an unramified extension L_2/ℚ_2 of degree p. Hence, m_2=1, and the dimension of I_2 is m_2-1=0 by Lemma <ref>. Since G_2⊆ I_2, we conclude that G_2 is trivial as well.

* Let v=q. Since L⊆ℚ(ζ_q) and q is totally ramified in the cyclotomic extension, it is also totally ramified in L/ℚ. Thus, f(x) is irreducible over ℚ_q, because it defines a totally ramified extension L_q/ℚ_q of degree p. Thus, m_q=1 and, arguing as above in the case of v=2, we conclude that G_q is trivial.

Since the only finite primes in S are 2 and q, it follows that G=G_2× G_q is trivial. Now, Proposition <ref> shows the bound dim_𝔽_2 Sel^(2)(ℚ,J) ≤ ρ_∞ + g + dim Cl^+(L)[2]. If in addition the order of 2 mod p is even, then, since L/ℚ is cyclic of degree p, Corollary <ref> shows that ρ=ρ^+. Hence dim_𝔽_2 Sel^(2)(ℚ,J) ≤ ρ_∞ + g + dim Cl(L)[2], as claimed.

The drawback, however, of the previous result is that there are very few subfields of cyclotomic extensions with a power basis, as the following result points out.

Let L be a cyclic extension of prime degree p≥ 5, and let 𝒪_L be the maximal order of L. Then, 𝒪_L has a power basis if and only if L=ℚ(ζ_q)^+ is the maximal real subfield of the q-th cyclotomic field, where q is a prime with q=2p+1.

For instance, the unique cyclic number field of degree 5 with a power basis for the maximal order is ℚ(ζ_11)^+. Also, there is no cyclic number field of degree 7 with a power basis for its maximal order (since 15 is not a prime). Hence, we concentrate on those cyclic extensions of degree p, where p is a Sophie Germain prime, i.e., such that q=2p+1 is also prime.

Let q≥ 7 be a prime such that p=(q-1)/2 is also prime, and let L=ℚ(ζ_q)^+ be the maximal real subfield of ℚ(ζ_q). Let f(x)∈ℤ[x] be any monic integral polynomial defining L, let C: y^2=f(x), and let J/ℚ be its jacobian. Then,

dim_𝔽_2 Sel^(2)(ℚ,J) ≤ ρ_∞ + j_∞ + dim Cl^+(L)[2] + dim img(G→ Cl(L)/2 Cl(L)).

Moreover, if f(x) is the minimal polynomial of ζ_q+ζ_q^-1 or -(ζ_q+ζ_q^-1), then

dim_𝔽_2 Sel^(2)(ℚ,J) ≤ ρ_∞ + j_∞ + dim Cl^+(L)[2].

Further, if one of the following conditions is satisfied:
* (1) the Davis–Taussky conjecture holds, or
* (2) the prime 2 is inert in the extension ℚ(ζ_p)^+/ℚ, or
* (3) q≤ 92459,
then ρ_∞=0, and dim_𝔽_2 Sel^(2)(ℚ,J) ≤ g + dim Cl(L)[2].

The first bound follows from Proposition <ref>, so let us assume that f(x) is the minimal polynomial of ζ_q+ζ_q^-1 or -(ζ_q+ζ_q^-1). The ring of integers 𝒪_ℚ(ζ_q)^+ has a power basis, namely ℤ[ζ_q+ζ_q^-1]. Moreover, since p is a Sophie Germain prime (with q=2p+1 prime), it follows that the multiplicative order of 2 mod q is a divisor of 2p = 2· ((q-1)/2). Since q≥ 7, the order of 2 is bigger than 2, so it must be p=(q-1)/2 or q-1.
Thus, Theorem <ref> applies and we obtain dim_𝔽_2 Sel^(2)(ℚ,J) ≤ ρ_∞ + j_∞ + dim Cl^+(L)[2]. Further, if (1), (2), or (3) holds, then by Conjecture <ref>, or Theorem <ref>, respectively, we find that ρ_∞=0 and ρ=ρ^+.

If q≤ 92459, we shall use the computational approach at the end of Section <ref> to show that ρ_∞=0. Let ζ=ζ_q=e^2π i/q. Then, the ring of integers of ℚ(ζ_q)^+ has a power basis, namely 𝒪_ℚ(ζ_q)^+ = ℤ[ζ_q+ζ_q^-1]. Let u=-(ζ+ζ^-1) if p ≡ 1 (mod 4) and u=ζ+ζ^-1 if p ≡ 3 (mod 4), thus chosen so that u is a unit in 𝒪_L^× of norm 1. Moreover, we note that if p ≡ 1 (mod 4), then

-(ζ+ζ^-1) < -(ζ^2+ζ^-2) < ⋯ < -(ζ^(p-1)/2+ζ^-(p-1)/2) < 0 < -(ζ^(p+1)/2+ζ^-(p+1)/2) < ⋯ < -(ζ^p+ζ^-p) < 2,

and if p ≡ 3 (mod 4), then

ζ^p+ζ^-p < ζ^p-1+ζ^-(p-1) < ⋯ < ζ^(p+1)/2+ζ^-(p+1)/2 < 0 < ζ^(p-1)/2+ζ^-(p-1)/2 < ⋯ < ζ+ζ^-1 < 2.

Thus, according to our conventions described in this section, the embeddings τ_1,…, τ_p are numbered so that τ_i(u)=r_i∈ℝ with

r_i = -(ζ^i+ζ^-i) if p ≡ 1 (mod 4), and r_i = ζ^p+1-i+ζ^-(p+1-i) if p ≡ 3 (mod 4),

for all 1≤ i ≤ p. Thus,

res_∞(u) = (-1,-1,…,-1,1,1,…,1) ∈ H_∞,

with (p-1)/2 minus ones when p ≡ 1 (mod 4), and (p+1)/2 minus ones when p ≡ 3 (mod 4).

Now, the Galois group 𝒢=Gal(L/ℚ) is cyclic of order p. Since q=2p+1 is prime, the multiplicative order of 2 mod q is (q-1)/2 or q-1. Thus, either -2 or 2 is a primitive root mod q. Let γ: ℚ(ζ)→ℚ(ζ) be the automorphism that sends ζ to ζ^2. It follows that σ = γ|_L ∈ 𝒢 ≅ (ℤ/qℤ)^×/{± 1} is a generator. Thus, the automorphism σ(ζ+ζ^-1)=ζ^2+ζ^-2 generates 𝒢. Let ϕ=ϕ_σ∈ S_p be the permutation attached to σ as defined above. For instance, if p ≡ 1 (mod 4), then r_1=-(ζ+ζ^-1), where ζ=e^2π i/q, so τ_1(σ(u))=r_2=-(ζ^2+ζ^-2), and therefore ϕ(1)=2. However, if p ≡ 3 (mod 4), then r_1=ζ^p+ζ^-p. We can find an integer 1≤ k≤ p, with k or -k ≡ 2p (mod q), such that σ(u)=ζ^k+ζ^-k. It follows that τ(σ(u))=r_k, and so ϕ(1)=k in this case. In general, the permutation ϕ is given by ϕ(i) = min{2i mod q, (-2i) mod q} when p ≡ 1 (mod 4), and by ϕ(i) = p+1 - min{2(p+1-i) mod q, q-(2(p+1-i) mod q)} when p ≡ 3 (mod 4), where our representatives mod q are always chosen amongst {0,1,…,q-1}.

With these explicit descriptions of res_∞(u) and ϕ_σ, we have computed (using Magma) the matrix M_∞,u for all primes p and q with q≤ 92459, as in the statement, and in all cases d_∞,u=p-1. Hence, ρ_∞=0 follows from Lemma <ref>.

If the Davis–Taussky conjecture holds, then the class number h_K of ℚ(ζ_q) is odd (by Theorem <ref>), and therefore the class number h_K^+ of L=ℚ(ζ_q)^+ is odd as well (because h_K^+ is a divisor of h_K). Hence, if the Davis–Taussky conjecture holds, then Cl(L)[2] is trivial, and the bound of Theorem <ref> becomes dim_𝔽_2 Sel^(2)(ℚ,J) ≤ g.

§ EXAMPLES

§.§ Curves of Genus 1

In this section we present some data that was collected on the curves described in Theorem <ref> (the data collected can be found at <cit.>). For m∈ℤ, let f_m(x), E_m, and L_m be as in Theorem <ref>. Using Magma, we attempted to compute (subject to GRH) the Mordell–Weil rank of E_m(ℚ) and dim_𝔽_2 Cl(L_m)[2] for every m∈ℤ such that 1≤ m ≤ 20000 and m^2+3m+9 is square-free. There are 12462 such values of m in the given interval, and we were able to compute the rank of E_m/ℚ for 12235 of them. For the other 227 curves, we were only able to get upper and lower bounds on their rank.

Of the 12462 curves that we tested, 10327 of them (about 82.87%) had rank equal to the upper bound given in Theorem <ref>. However, one might expect that the sharpness of this upper bound would decay as m gets larger and larger, and in fact that seems to be the case. Let us define a function to keep track of the sharpness of the bound in an interval. Let

M = {m∈ℤ : 1≤ m ≤ 20000 and m^2+3m+9 is square-free} and S = {m∈ M : rank(E_m) = dim_𝔽_2 Cl(L_m)[2] + 1}.
Given I⊆ M, we define the sharpness statistic

sh(I) = #(S∩ I)/#(M∩ I).

The set S only includes curves whose rank was actually computable and, because of this, the sh(I) statistic only gives a lower bound for how sharp Washington's upper bound is over the set I. In order to see how the sharpness of the upper bound degrades as m grows, in Table <ref> we present sh(I) over disjoint intervals of length 1000. In the data, we clearly see that the number of curves for which Washington's bound is sharp in a given interval does start to decrease, but the bound is still sharp more often than not (notice, however, that the sharpness is inflated by the fact that the bound is sharp every time the bound is 1, since there is a point of infinite order P = (0,1) on E_m).

To see how fixing the bound first affects its sharpness, we define the following sets:

T(r) = {m ∈ M : rank(E_m) = r} and B(b) = {m ∈ M : dim_𝔽_2 Cl(L_m)[2]+1 = b}.

In Table <ref>, for each bound b that occurs, we give the number of curves of rank r whose bound is b, for each r that occurs. We also give the percentage of the curves whose rank is exactly b, and provide the totals of each column so that we can see how many curves of each rank we found (for similar statistics and conjectures in a broader context, see <cit.>).

From Table <ref> we can see that all of the curves that we computed have odd rank less than or equal to 7. It also turns out that for all of the curves that we computed, Washington's bound is also odd and less than or equal to 7. In Table <ref>, we give the first m such that rank(E_m) = r and dim_𝔽_2 Cl(L_m)[2]+1 = b, for each pair (r,b) that occurred. It is also interesting to point out that the average rank among curves with b=1 is 1, the average rank among curves with b=3 is 2.23, among curves with b=5 it is 3.38, and among curves with b=7 the average rank is 5.20 (see <cit.> for other examples of Selmer bias in genus 1).

Lastly, for the sake of concreteness, we end this section with an explicit example. When m=143 we have

E_143: y^2 = x^3 + 143x^2 - 146x + 1,

with conductor 2^2· 20887^2. Using Magma, we can compute that

Cl(L_143) ≅ ℤ/2ℤ ⊕ ℤ/2ℤ ⊕ ℤ/4ℤ ⊕ ℤ/4ℤ,

and so Washington's bound for the Mordell–Weil rank is dim_𝔽_2 Cl(L_143)[2] + 1 = 4+1 = 5. Looking for points on E_143, we find 5 independent points of infinite order that generate the Mordell–Weil group:

E_143(ℚ) ≅ ℤ^5 = ⟨ (126/121, -3023/1331), (90, -1369), (65/64, 577/512), (21/4, -461/8), (-1, 17) ⟩.

§.§ Examples in the Sophie Germain case

In this section we show examples of hyperelliptic curves that arise from Theorem <ref>.

Let q=7 and p=3. Then, L=ℚ(ζ_7)^+ is the maximal real subfield of ℚ(ζ_7), which has degree 3 and class number 1 (see <cit.>). Note that the order of 2 in (ℤ/3ℤ)^× is 2=p-1, and therefore condition (2) of Theorem <ref> is met. Hence, if f(x) is the minimal polynomial of ±(ζ_7+ζ_7^-1), then the jacobian J of y^2=f(x) satisfies

rank_ℤ(J(ℚ)) ≤ dim_𝔽_2 Sel^(2)(ℚ,J) ≤ g+ dim Cl(L)[2] = 1.

For instance, if f(x) is chosen to be the minimal polynomial of -(ζ_7+ζ_7^-1), then f(x)=x^3 - x^2 - 2x + 1, and we in fact recover the elliptic curve E_-1 of Theorem <ref> for m=-1. The Mordell–Weil rank of the elliptic curve E_-1: y^2=x^3 - x^2 - 2x + 1 is 1, so the bound on the rank given by Theorem <ref> is in fact sharp in this case.

If q=11 and p=5, then L=ℚ(ζ_11)^+ is a field of degree 5 with trivial class group. Since 2 is a primitive root modulo 5, if f(x) is the minimal polynomial of ±(ζ_11+ζ_11^-1), then Theorem <ref> says

rank_ℤ(J(ℚ)) ≤ dim_𝔽_2 Sel^(2)(ℚ,J) ≤ g+ dim Cl(L)[2] = (p-1)/2 = 2,

where J is the jacobian of y^2=f(x).
If f(x) is the minimal polynomial of ζ_11+ζ_11^-1, then f(x)=x^5 + x^4 - 4x^3 - 3x^2 + 3x + 1. Below we will describe a general method to find some rational points on the jacobian, and show that 2 ≤ rank_ℤ(J(ℚ)) ≤ 2. Thus, the rank is 2 and the bound is sharp.

If q=23 and p=11, then L=ℚ(ζ_23)^+ is a field of degree 11 with trivial class group. Since 2 is a primitive root modulo 11, if f(x) is the minimal polynomial of ±(ζ_23+ζ_23^-1), then Theorem <ref> says

rank_ℤ(J(ℚ)) ≤ dim_𝔽_2 Sel^(2)(ℚ,J) ≤ g+ dim Cl(L)[2] = (p-1)/2 = 5,

where J is the jacobian of y^2=f(x). If f(x) is the minimal polynomial of -(ζ_23+ζ_23^-1), then

f(x)=x^11 - x^10 - 10x^9 + 9x^8 + 36x^7 - 28x^6 - 56x^5 + 35x^4 + 35x^3 - 15x^2 - 6x + 1.

Below we will show that 4 ≤ rank_ℤ(J(ℚ)) ≤ 5. A full 2-descent (via Magma) shows that the rank is, in fact, equal to 4.

We finish this section by describing a method to produce points on J (as in Theorem <ref>), and to compute the rank of the subgroup generated by these points. Let p be a fixed Sophie Germain prime, let q=2p+1, and let L=ℚ(ζ_q)^+ be the maximal real subfield of ℚ(ζ_q), where ζ_q is a primitive q-th root of unity. The minimal polynomial of ζ_q+ζ_q^-1 has constant term 1 or -1 according to whether p is congruent to 1 or 3 (mod 4). If f∈ℤ[x] is a polynomial with constant term 1, then the point (0,1) will be on the curve

C: y^2=f(x),

and furthermore the factorization of f(x)-1 will provide points on the jacobian J of C, allowing us to obtain a lower bound for the Mordell–Weil rank of J(ℚ). For this reason, we define f to be the minimal polynomial of

θ=(-1)^(p-1)/2(ζ_q+ζ_q^-1).

A lower bound for the rank can be computed by considering the images of the factors of f-1 in L^×/(L^×)^2 under the map δ_ℚ: J(ℚ)→ H_ℚ (see Section <ref>). Let y_0∈ℚ, let g(x) be an irreducible factor of f(x)-y_0^2, and let K be the splitting field of g(x). Then, over K, we have a factorization

g(x)=∏_i=1^n (x-x_i),

and the points P_i=(x_i, y_0) are in C(K). Under the map δ_K: J(K)→ H_K, regarded as a map from C(K) to L_K^×/(L_K^×)^2, we have

P_i ↦ (x_i-θ_K)(L_K^×)^2.

If f remains irreducible over K, then L_K is simply the composite extension of K and L, and θ_K=θ. As a map from J(K) to L_K^×/(L_K^×)^2, δ_K is a homomorphism of groups, and hence the divisor P_1+P_2+⋯+P_n maps to

∏_i=1^n (x_i-θ_K)(L_K^×)^2=(-1)^n g(θ_K)(L_K^×)^2.

On the other hand, the divisor P_1+P_2+⋯+P_n can be regarded as the base extension to K of a certain divisor D defined over ℚ; hence over ℚ we have

D ↦ (-1)^n g(θ)(L^×)^2,

via the map F_K of Section <ref>. Thus, for each irreducible factor g(x) we obtain a point in J(ℚ) that corresponds to the divisor D=D(g) defined above. Moreover, since the map J(ℚ)/2J(ℚ)→ H_ℚ induced by δ_ℚ is injective (<cit.>), in order to compute the rank of the subgroup generated by {D(g)}_g, it suffices to check the dimension of the (multiplicative) subgroup generated by {δ_ℚ(D(g))} in H=H_ℚ.

For example, let q=11 and p=5, as in Example <ref>, so L=ℚ(ζ_11)^+ and f(x)=x^5+x^4-4x^3-3x^2+3x+1. Then, f(x)-1 factors as

x(x^2 - 3)(x^2 + x - 1).

The images of the factors in L^×/(L^×)^2 via δ_ℚ are

-θ(L^×)^2, (θ^2 - 3)(L^×)^2, and (θ^2 + θ - 1)(L^×)^2,

respectively. To obtain a lower bound for the rank of J it remains only to reduce {-θ, θ^2 - 3, θ^2 + θ - 1} to a multiplicatively independent subset modulo squares. Since the product of all three is

-(f(θ)-1)(L^×)^2=(L^×)^2,

we see that at most two are multiplicatively independent. On the other hand, -θ(θ^2 - 3) is not a square in L, so -θ(L^×)^2 and (θ^2 - 3)(L^×)^2 are multiplicatively independent, giving us a lower bound of 2 for the rank of J over ℚ.
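The computations in this last example are easy to reproduce; here is a short SymPy sketch (our own illustration, not part of the original computations):

import sympy as sp

x = sp.symbols('x')
f = x**5 + x**4 - 4*x**3 - 3*x**2 + 3*x + 1     # minimal polynomial of theta = 2*cos(2*pi/11)
print(sp.factor(f - 1))                         # x*(x**2 - 3)*(x**2 + x - 1)

theta = 2 * sp.cos(2 * sp.pi / 11)
classes = [-theta, theta**2 - 3, theta**2 + theta - 1]   # (-1)^n g(theta) for each irreducible factor g
print(sp.N(classes[0] * classes[1] * classes[2]))        # 1.0..., i.e. -(f(theta) - 1), a square

The printed product equals 1 because f(θ)=0, which is exactly the relation used above to conclude that at most two of the three classes are independent modulo squares.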
An upper bound of 2 for the rank was computed in Example <ref>; hence the rank is exactly 2. In Table <ref> we have collected the upper bound given by Theorem <ref>, together with some computational data on lower bounds for the rank of the jacobian J associated to the first few Sophie Germain primes, obtained using the method we have described here. It is worth pointing out that the bound given by Theorem <ref> in these examples is unconditional (i.e., not dependent on the Davis–Taussky conjecture), since q≤ 92459. We also note that 2 is inert in ℚ(ζ_p)^+ in some cases, such as p=5, 11, 23, 29, 53, 83, 131, 173, but not in others, such as p=41, 89, 113.
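Finally, the Magma verification that d_∞,u = p-1 for all q ≤ 92459 (used in the proof of Theorem <ref> to conclude ρ_∞ = 0 via Lemma <ref>) is straightforward to reproduce from the explicit signature of u and the permutation ϕ given there. The following self-contained Python sketch is our own illustrative code: it encodes a sign ε via ψ(-1)=1, ψ(1)=0, builds the columns of M_∞,u, and computes the 𝔽_2-rank by Gaussian elimination.

def f2_rank(rows):
    # rank over F_2 of a 0/1 matrix given as a list of rows
    rows, r = [row[:] for row in rows], 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def d_inf(q):
    # d_{infty,u} for L = Q(zeta_q)^+, q = 2p+1 prime, with u and phi as in the proof above
    p = (q - 1) // 2
    if p % 4 == 1:          # u = -(zeta + zeta^{-1}), (p-1)/2 leading minus signs
        sgn = [1] * ((p - 1) // 2) + [0] * ((p + 1) // 2)
        phi = {i: min(2 * i % q, -2 * i % q) for i in range(1, p + 1)}
    else:                   # u = zeta + zeta^{-1}, (p+1)/2 leading minus signs
        sgn = [1] * ((p + 1) // 2) + [0] * ((p - 1) // 2)
        phi = {i: p + 1 - min(2 * (p + 1 - i) % q, q - 2 * (p + 1 - i) % q)
               for i in range(1, p + 1)}
    cols, idx = [], list(range(1, p + 1))
    for _ in range(p - 1):  # signatures of u, sigma(u), ..., sigma^{p-2}(u)
        cols.append([sgn[i - 1] for i in idx])
        idx = [phi[i] for i in idx]
    return f2_rank([[cols[j][i] for j in range(p - 1)] for i in range(p)])

assert all(d_inf(q) == (q - 1) // 2 - 1 for q in (7, 11, 23, 47, 59, 83))   # d = p - 1 in each case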
http://arxiv.org/abs/1708.07896v6
{ "authors": [ "Harris B. Daniels", "Álvaro Lozano-Robledo", "Erik Wallace" ], "categories": [ "math.NT" ], "primary_category": "math.NT", "published": "20170825214354", "title": "Bounds of the rank of the Mordell-Weil group of jacobians of hyperelliptic curves" }
Projected particle methods for solving McKean-Vlasov stochastic differential equations

Denis Belomestny^1 and John Schoenmakers^2

December 30, 2023

We propose a novel projection-based particle method for solving McKean-Vlasov stochastic differential equations. Our approach is based on a projection-type estimation of the marginal density of the solution in each time step. The projection-based particle method leads in many situations to a significant reduction of numerical complexity compared to the widely used kernel density estimation algorithms. We derive strong convergence rates and rates of density estimation. The convergence analysis, particularly in the case of linearly growing coefficients, turns out to be rather challenging and requires a new type of averaging technique. This case is exemplified by explicit solutions to a class of McKean-Vlasov equations with affine drift. The performance of the proposed algorithm is illustrated by several numerical examples.

^1 Duisburg-Essen University, Thea-Leymann-Str. 9, D-45127 Essen, Germany
^2 Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstr. 39, 10117 Berlin, Germany

Keywords: McKean-Vlasov equations, particle systems, projection estimators, explicit solutions.

2010 MSC: 60H10, 60K35.

§ INTRODUCTION

Nonlinear Markov processes are stochastic processes whose transition functions may depend not only on the current state of the process but also on the current distribution of the process. These processes were introduced by McKean <cit.> to model plasma dynamics. Later nonlinear Markov processes were studied by a number of authors; we mention here the books of Kolokoltsov <cit.> and Sznitman <cit.>. These processes arise naturally in the study of the limit behavior of a large number of weakly interacting Markov processes and have a wide range of applications, including financial mathematics, population dynamics, and neuroscience (see, e.g., <cit.> and the references therein).

Let [0,T] be a finite time interval and (Ω,ℱ,P) be a complete probability space, on which a standard m-dimensional Brownian motion W is defined. We consider a class of McKean-Vlasov SDEs, i.e. stochastic differential equations (SDEs) whose drift and diffusion coefficients may depend on the current distribution of the process, of the form

X_t = ξ+∫_0^t∫_ℝ^d a(X_s,y)μ_s(dy)ds+∫_0^t∫_ℝ^d b(X_s,y)μ_s(dy)dW_s,
μ_t = Law(X_t), t∈[0,T],

where X_0=ξ is an ℱ_0-measurable random variable in ℝ^d, a: ℝ^d×ℝ^d→ℝ^d and b: ℝ^d×ℝ^d→ℝ^d× m. If the functions a and b are smooth with uniformly bounded derivatives and the random variable ξ has finite moments of any order, then (see <cit.>) there is a unique strong solution of (<ref>) such that for all p>1,

E[sup_s≤ T|X_s|^p] < ∞.

In the sequel we assume that there exists a unique strong solution of (<ref>) such that (<ref>) holds, and refer to <cit.> for more general sufficient conditions for this.

Assume that d=1 and that, for any t≥0, the measure μ_t(du) possesses a bounded density μ_t(u). Then the family of these densities satisfies a nonlinear Fokker-Planck equation of the form

∂μ_t(x)/∂ t = -∂/∂ x[(∫ a(x,y)μ_t(y) dy) μ_t(x)] + (1/2) ∂^2/∂ x^2[(∫ b(x,y)μ_t(y) dy)^2 μ_t(x)],

which can be seen as an analogue of the well-known linear Fokker-Planck equation in the case of linear stochastic differential equations.
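For orientation, the nonlinear Fokker-Planck dynamics (<ref>) can be simulated directly on a grid. The following minimal Python sketch is our own illustration (an explicit scheme with no stability analysis, so the time step must be taken small, and the coefficient functions a, b are assumed vectorized in their second argument); it performs one finite-difference step in the case d=1:

import numpy as np

def fokker_planck_step(mu, x, dt, a, b):
    # one explicit step for
    # d(mu)/dt = -d/dx[(int a(x,y) mu(y) dy) mu] + 0.5 d^2/dx^2[(int b(x,y) mu(y) dy)^2 mu]
    dx = x[1] - x[0]
    drift = np.array([np.trapz(a(xi, x) * mu, x) for xi in x]) * mu
    diff2 = np.array([np.trapz(b(xi, x) * mu, x) for xi in x]) ** 2 * mu
    return mu + dt * (-np.gradient(drift, dx) + 0.5 * np.gradient(np.gradient(diff2, dx), dx))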
In Section <ref> we will show that if the drift a is affine in x and the diffusion coefficient b is independent of x, then the system (<ref>), and hence (<ref>), has an explicit solution. These solutions, apart from being interesting in their own right, also provide explicit examples of explosive behavior.

The theory of propagation of chaos developed in <cit.> states that (<ref>) is a limiting equation of the system of stochastic interacting particles (samples) with the following dynamics:

X_t^i,N = ξ^i+∫_0^t∫_ℝ^d a(X_s^i,N,y)μ_s^N(dy) ds+∫_0^t∫_ℝ^d b(X_s^i,N,y)μ_s^N(dy) dW_s^i

for i=1,…,N, where μ_t^N=(1/N)∑_i=1^N δ_X_t^i,N, the ξ^i, i=1,…,N, are i.i.d. copies of ξ, distributed according to the law μ_0, and the W^i, i=1,…,N, are independent copies of W. In fact it can be shown, under sufficient regularity conditions on the coefficients, that convergence in law holds for the empirical measures on the path space, i.e., μ^N={μ_t^N : t∈[0,T]}→μ as N→∞; see <cit.>.

Despite the numerous branches of research on stochastic particle systems, results on numerical approximations of McKean-Vlasov SDEs are very sparse. The authors in <cit.> proposed to use the Euler scheme with time step h=T/L, which, for l=0,…,L-1, yields

X̄_t_l+1^i,N = X̄_t_l^i,N+(1/N)∑_j=1^N a(X̄_t_l^i,N,X̄_t_l^j,N) h+(1/N)∑_j=1^N b(X̄_t_l^i,N,X̄_t_l^j,N) Δ_l+1W^i

for i=1,…,N, with t_l=hl and Δ_l+1W^i=W_h(l+1)^i-W_hl^i (see also <cit.> for more general MVSDEs and <cit.> for a Gauss-quadrature based approach). The implementation of the above algorithm usually requires N^2 operations in every step of the Euler scheme, i.e. N^2× L operations in total. By using the algorithm presented here one can significantly reduce the complexity of the particle simulation, especially if the coefficients of the corresponding McKean-Vlasov SDE are smooth enough.

The contribution of this paper is twofold. On the one hand, we propose a new approximation methodology based on a projection-type estimation of the marginal densities of (<ref>). This methodology often leads to numerically more efficient algorithms than kernel-type approximation algorithms, as it can profit from the global smoothness of the coefficients a, b and of the corresponding marginal densities. On the other hand, we present a comprehensive convergence analysis of the proposed algorithms in the case of possibly linearly growing (in x) coefficients a and b. To the best of our knowledge, no stability analysis of MVSDEs under this linear growth assumption was done before. In fact such an analysis is rather challenging and requires a special type of averaging technique. And, last but not least, we study a general class of MVSDEs with affine drift and derive their explicit solutions, to the best of our knowledge, for the first time.

The paper is organized as follows. In Section <ref> we present the idea of our projected particle method. Section <ref> is devoted to the convergence analysis of the projected particle method. In particular, in Section <ref> we derive the convergence rates for the corresponding projected density estimate. Section <ref> presents a thorough study of affine MVSDEs. Numerical examples for affine and convolution-type MVSDEs are presented in Section <ref>. All proofs are collected in Section <ref>.

§ PROJECTED PARTICLE METHOD

Let w:ℝ^d→ℝ_+ be some weight function with w>0, such that

a(x,·), b(x,·) ∈ L_2(ℝ^d,w) for any x∈ℝ^d.

Let further (φ_k, k=0,1,2,…) be a total orthonormal system in L_2(ℝ^d,w).
The corresponding (generalized) Fourier coefficients of the functions a(x,·) and b(x,·) are given by

α_k(x) := ∫ a(x,u)φ_k(u) w(u) du ∈ ℝ^d,
β_k(x) := ∫ b(x,u)φ_k(u) w(u) du ∈ ℝ^d× m,

and the following series representations hold in L_2(ℝ^d,w):

a(x,·)=∑_k=0^∞ α_k(x)φ_k(·) and b(x,·)=∑_k=0^∞ β_k(x)φ_k(·), x∈ℝ^d.

Further it is assumed that each function φ_k is bounded, so that the functions

γ_k(s):=E[φ_k(X_s)]

are well defined. Now let us fix some natural number K>0 and consider a projected particle approximation for (<ref>):

X_t^i,K,N = ξ^i+∫_0^t∑_k=0^K γ_k^N(s)α_k(X_s^i,K,N) ds+∫_0^t∑_k=0^K γ_k^N(s)β_k(X_s^i,K,N) dW_s^i

for i=1,…,N, where

γ_k^N(s):=(1/N)∑_j=1^N φ_k(X_s^j,K,N)

can be regarded as an approximation to (<ref>). The projected system (<ref>), with (<ref>), is heuristically motivated by assuming that for any s≥0 the measure μ_s(du) possesses a density μ_s(u) that, in view of

γ_k(s)=∫μ_s(u)φ_k(u)du

(cf. (<ref>)), formally satisfies

μ_s(u)=∑_k=0^∞ γ_k(s)φ_k(u)w(u).

Then we (formally) have the expansion

∫ a(x,u)μ_s(u)du=∑_k=0^∞ α_k(x)γ_k(s),

and this motivates the drift term in (<ref>). For the diffusion term in (<ref>) an analogous motivation applies. In order to solve (<ref>) we may consider, for any fixed L>0, an Euler-type approximation:

X̄_t^i,K,N = X̄_η(t)^i,K,N+∑_k=0^K γ_k^N(η(t)) α_k(X̄_η(t)^i,K,N) (t-η(t))+∑_k=0^K γ_k^N(η(t)) β_k(X̄_η(t)^i,K,N) (W_t^i-W_η(t)^i)

for i=1,…,N and h=T/L, where η(t):=lh for t∈[lh,(l+1)h), l=0,…,L-1. Note that in order to generate a discretized particle system (X̄_hl^i,K,N), i=1,…,N, l=1,…,L, we need to perform (up to a constant depending on the dimension) N× L× K operations. This should be compared to N^2× L operations in (<ref>). Thus if K is much smaller than N, we get a significant cost reduction. Of course, this complexity analysis implicitly assumes that the generalized Fourier coefficients α_k(x) and β_k(x) are known in closed form or can be cheaply computed. For more details in this respect see Remark <ref> below.

Many well known McKean-Vlasov type models used in physics and engineering are constructed and formulated via certain Fourier-type expansions of the respective drift and/or diffusion coefficients. For example, in the famous Kuramoto-Shinomoto-Sakaguchi model (see e.g. <cit.>, eq. (5.214)) or in the coupled Brownian phase oscillators (see <cit.>), the mean field potential is given by its Fourier series, which entails a similar expansion for the coefficient a(x,u) (b(x,u) is constant). Let us also mention the classical work <cit.>, where a known power series expansion for the coefficients of a nonlinear Fokker-Planck equation is assumed. From another point of view, since the basis (φ_k) with the corresponding weight w can, in principle, be chosen freely, it is natural to assume that they can be chosen such that the coefficients α_k(x) and β_k(x) can be computed in closed form. In this respect, let us give some further examples.
If for any x, a(x,·) is a linear combination of functions of the form

q_1(u_1)··· q_d(u_d),

where each q_i:ℝ→ℝ is a polynomial with coefficients possibly depending on x, then (φ_k) may be taken to be the Hermite functions in ℝ^d, i.e.

φ_α(u)=H_α_1(u_1)··· H_α_d(u_d)e^-|u|^2/2, α=(α_1,…,α_d).

The latter situation appears for instance in the popular interaction case a(x,u)=A(x-u), where the function A has a given representation

A(z)=∑_α c_α z_1^α_1·…· z_d^α_d, z∈ℝ^d, α∈ℕ_0^d.

As another example, note that the Fourier coefficients of any function of the form

u→ u_1^α_1·…· u_d^α_d e^-|u-c|^2/σ, c∈ℝ^d, α∈ℕ_0^d, σ>0,

with respect to the Hermite basis above can be expressed in closed form. One could thus also consider a(x,u), b(x,u) of the form

∑_r=1^R q_r(x)u^α_r e^-|u-c_r(x)|^2/σ_r(x),

with freely chosen q_r(x)∈ℝ, c_r(x)∈ℝ^d, σ_r(x)∈ℝ_+, α_r∈ℕ_0^d, R∈ℕ.

§ CONVERGENCE ANALYSIS

In this section we first study the convergence of the approximated particle system (<ref>) to the solution of the original system (<ref>). As a first obvious but important observation, we note that the distribution of the triple (X_s^j,K,N, X_s^K,N, X_s^j) with X_s^K,N:=(X_s^1,K,N,…,X_s^N,K,N) does not depend on j, and therefore we can write

(X^j,K,N, X^K,N, X^j) =^distr (X^·,K,N, X^K,N, X^·) for j=1,…,N.

For ease of notation, henceforth |·| denotes the standard Euclidean norm on ℝ^k, for the generic dimension k at hand. Let us make the following assumptions.

(AF) The basis functions (φ_k) fulfil

|φ_k(z)-φ_k(z')| ≤ L_k,φ|z-z'|, |φ_k(z)| ≤ D_k,φ, k=0,1,…,

for all z,z'∈ℝ^d and some constants L_k,φ, D_k,φ>0.

(AC) The functions α_k(x), β_k(x), k=0,1,2,…, satisfy

|α_k(x)| ≤ A_k,α(1+|x|) with (A_k,α)_k=0,1,… ∈ l_2, ∑_k=0^∞ D_k,φ A_k,α ≤ D_φ A_α, and ∑_k=0^∞ L_k,φ A_k,α ≤ L_φ A_α,

|β_k(x)| ≤ A_k,β(1+|x|) with (A_k,β)_k=0,1,… ∈ l_2, ∑_k=0^∞ D_k,φ A_k,β ≤ D_φ A_β, and ∑_k=0^∞ L_k,φ A_k,β ≤ L_φ A_β,

for some constants A_α, A_β, D_φ, and L_φ>0, and further

sup_x,x'∈ℝ^d, x≠ x' |α_k(x)-α_k(x')|/|x-x'| ≤ B_k,α with ∑_k=0^∞ D_k,φ B_k,α ≤ D_φ B_α,

sup_x,x'∈ℝ^d, x≠ x' |β_k(x)-β_k(x')|/|x-x'| ≤ B_k,β with ∑_k=0^∞ D_k,φ B_k,β ≤ D_φ B_β,

for some B_α, B_β>0.

(AM_p) For some p>0 the initial distribution μ_0 possesses a finite absolute moment of order p.

In the sequel, for any random variable ξ∈ℝ^k on (Ω,ℱ,P) we shall use ‖ξ‖_p for the norm of |ξ| in L_p(Ω). The following bound on the strong error can be proved.

For p≥2, it holds under assumptions (AC), (AF) and (AM_p) that

‖sup_0≤ r≤ T|X_r^·,K,N-X_r^·|‖_p ≲ N^-1/2+∑_k=K+1^∞ A_k,α ‖γ_k‖_L_p[0,T]+∑_k=K+1^∞ A_k,β ‖γ_k‖_L_p[0,T],

where ≲ stands for an inequality with some (hidden) positive finite constant depending only on A_α, A_β, B_α, B_β, D_φ, L_φ, p, and T.

For 1≤ p'≤2, we simply have

‖sup_0≤ r≤ T|X_r^·,K,N-X_r^·|‖_p' ≤ ‖sup_0≤ r≤ T|X_r^·,K,N-X_r^·|‖_p

for any p≥2.

The next theorem, on the convergence of the Euler approximation (<ref>) to the projected system (<ref>), can be proved along the same lines as the proof of Theorem <ref>.

For p≥2, it holds under assumptions (AC), (AF) and (AM_p) that for any natural K, N,

‖sup_0≤ r≤ T|X̄_r^·,K,N-X_r^·,K,N|‖_p ≲ √(h),

where ≲ stands for an inequality with some (hidden) positive finite constant depending only on A_α, A_β, B_α, B_β, D_φ, L_φ, p and T.

Discussion. The bound (<ref>) is proved under rather general assumptions on the coefficients a(x,y) and b(x,y). In particular, we allow for linear growth of these coefficients in x. This makes the proof of the bound in Theorem <ref> rather challenging, since we need to avoid an explosion.
In order to overcome this problem, we employ a kind of averaging technique which, combined with the symmetry of the particle distribution and the existence of moments (see Section <ref>), gives the desired bound. Note that for this we have to assume existence and uniqueness of a strong solution of the original MVSDE (<ref>). Funaki <cit.> proved existence and uniqueness under an (essentially) global Lipschitz condition. However, one should be able to extend his results by exploiting a kind of one-sided Lipschitz condition as in <cit.> or <cit.>.

The bound (<ref>) consists of a stochastic and an approximation error. While the first error is of order 1/√(N), the second one depends on K and the properties of the coefficients a(x,y) and b(x,y). If these coefficients are smooth in the sense that their generalized Fourier coefficients (α_k) and (β_k) decay fast, then the approximation error can be made small even for medium values of K.

The (normalized) Hermite polynomial of order j is given, for j≥0, by

H_j(x)=c_j(-1)^j e^x^2 (d^j/dx^j) e^-x^2, c_j=(2^j j!√(π))^-1/2.

These polynomials satisfy ∫_ℝ H_j(x)H_ℓ(x)e^-x^2 dx=δ_j,ℓ and, as a consequence,

φ_k(u)=H_k(u)e^-u^2/2, k=0,1,2,…,

is a total orthonormal system in L_2(ℝ) (i.e. here w=1). Moreover, (φ_k)_k≥0 fulfil the assumption (AF) with D_k,φ and L_k,φ being uniformly bounded in k; see, e.g., <cit.>, p. 242. Now let us suppose that a(x,·), b(x,·)∈ L_2(ℝ) for any x∈ℝ, and discuss the assumptions (AC).

Suppose that for any x∈ℝ, the functions (in u)

ã(x,u):=a(x,u)/√(1+x^2), b̃(x,u):=b(x,u)/√(1+x^2)

admit derivatives in u up to order s>2 such that the functions (in u)

u^ℓ ∂_u^m ã(x,u), u^ℓ ∂_u^m b̃(x,u), 0≤ ℓ+m≤ s,

are bounded and belong to L_1(ℝ) (uniformly in x), together with their first derivatives in x. Then the assumption (AC) is satisfied and

‖sup_0≤ r≤ T|X_r^·,K,N-X_r^·|‖_p ≲ K^1-s/2+N^-1/2

as K,N→∞.

We have

α_k(x) = √(1+x^2)∫ ã(x,u)H_k(u) e^-u^2/2 du = √(1+x^2) α̃_k(x),
β_k(x) = √(1+x^2)∫ b̃(x,u)H_k(u) e^-u^2/2 du = √(1+x^2) β̃_k(x).

The identity

(2k+2)^1/2 H_k(x)=H'_k+1(x)

and the integration-by-parts formula imply

α̃_k(x) = [ã(x,u)e^-u^2/2 H_k+1(u)(2k+2)^-1/2]_-∞^∞ - (2k+2)^-1/2∫_-∞^∞[∂ã(x,u)/∂ u-uã(x,u)]H_k+1(u)e^-u^2/2 du.

Note that |H_k(u)|e^-u^2/2 ≤ 1 uniformly in u and k (see, e.g., <cit.>, p. 242). Hence if ã(x,u) is bounded and

∫|∂ã(x,u)/∂ u-uã(x,u)| du

is bounded uniformly in x, then α̃_k(x)=O(k^-1/2) uniformly in x. A second integration by parts shows that α̃_k(x)=O(k^-1), provided the functions

u^2 ã(x,u), ∂^2ã(x,u)/∂ u^2, u ∂ã(x,u)/∂ u

are integrable on ℝ with their L_1(ℝ) norms uniformly bounded in x. Integrating by parts further, we derive the desired statement.

As a rule, one chooses N and K such that the errors in (<ref>) are balanced, that is K ∼ N^1/(s-2), yielding a proportional reduction of computational cost of order N· K/N^2 ∼ N^-(s-3)/(s-2). Alternatively we can compare the complexity, that is, the computational cost for achieving a prescribed accuracy ε, of the Euler schemes (<ref>) and (<ref>). It is not difficult to see that, after incorporating the path-wise time discretization error, the standard Euler scheme (<ref>) has complexity of order ε^-6, while the projected one (<ref>) has complexity of order ε^-(4s-6)/(s-2), which is significantly smaller when s>3. Moreover, in <cit.> conditions are formulated guaranteeing that all measures μ_t, t≥0, possess infinitely smooth exponentially decaying densities.
In this case we can additionally profit from the decay of the generalized Fourier coefficients (γ_k), so that the convergence rates in (<ref>) give rise to a proportional reduction of computational cost approaching N^-1, corresponding to a complexity of order ε^-4 (modulo some logarithmic term) for the method (<ref>).

§.§ Density estimation

Let us now discuss the estimation of the densities μ_t, t≥0. Let us assume that the formal relationship (<ref>) holds in the sense that

μ_s/w=∑_k=0^∞ γ_k(s)φ_k

in L_2(ℝ^d,w), i.e. μ_s^2/w∈ L_1(ℝ^d). Fix some t>0, K_test∈ℕ and set

μ_t^K_test,K,N(x):=∑_k=1^K_test γ_k^N(t)φ_k(x)w(x)

with γ_k^N(t):=(1/N)∑_i=1^N φ_k(X_t^i,K,N), k=1,…,K_test. We obviously have

E∫|μ_t^K_test,K,N(x)-μ_t(x)|^2 w^-1(x) dx = ∑_k=1^K_test E[|γ_k^N(t)-γ_k(t)|^2] + ∑_k=K_test+1^∞|γ_k(t)|^2,

where (due to (AF))

E[|γ_k^N(t)-γ_k(t)|^2] = E[|(1/N)∑_j=1^N φ_k(X_t^j,K,N)-E[φ_k(X_t^·)]|^2]
≤ 2E[|(1/N)∑_j=1^N (φ_k(X_t^j,K,N)-φ_k(X_t^j))|^2] + 2E[|(1/N)∑_j=1^N (φ_k(X_t^j)-E[φ_k(X_t^j)])|^2]
≤ 2L_k,φ^2 E[|X_t^·,K,N-X_t^·|^2] + (2/N) Var[φ_k(X_t)],

since the X^j are independent. Theorem <ref> now implies

(E∫|μ_t^K_test,K,N(x)-μ_t(x)|^2 w^-1(x) dx)^1/2 ≲ ((1/N)∑_k=1^K_test(L_k,φ^2+D_k,φ^2))^1/2 + (∑_k=1^K_test L_k,φ^2)^1/2(∑_k=K+1^∞(A_k,α+A_k,β)‖γ_k‖_L_p[0,T]) + (∑_k=K_test+1^∞|γ_k(t)|^2)^1/2.

The last term always converges to zero as K_test→∞, since μ_t/w∈ L_2(ℝ^d,w). The first term can be controlled for any fixed K_test by taking N large enough. Finally, for any fixed K_test, the second term can be made small by taking K large enough and using the condition (AC).

§ SPECIFIC MODELS

§.§ Generalized Shimizu-Yamada Models

Inspired by the work of Shimizu and Yamada <cit.>, <cit.> and <cit.>, we consider one-dimensional McKean-Vlasov equations of the form (<ref>) with

a(x,u):=a^0(u)+a^1(u)x, b(x,u):=b(u).

This class of models allows for a linear dependence of the drift on the distribution of X through E[a^0(X_t)] and E[a^1(X_t)]. Let us define, for polynomially bounded and measurable functions a^j and b, the generalized Gauss transforms

H_a^j(p,q) := (1/√(2π q))∫ a^j(u)e^-(p-u)^2/(2q) du, j=0,1,
H_b(p,q) := (1/√(2π q))∫ b(u)e^-(p-u)^2/(2q) du, p∈ℝ, q>0.

Let moreover a^j and b be such that the partial derivatives

∂_p H_a^j(p,q), ∂_q H_a^j(p,q), j=0,1, and ∂_p H_b(p,q), ∂_q H_b(p,q)

extend continuously to any (p,q)∈ℝ×ℝ_≥0.

It is not difficult to see that (<ref>) holds if a^j and b are entire functions for which the coefficients of their power series around u=0 decay fast enough to zero (which is trivially satisfied for any polynomial). A complete characterization of a^j and b such that (<ref>) holds is connected with analytic vectors for semigroups related to the heat kernel, and is considered beyond the scope of this paper.

Let a^j and b satisfy (<ref>). (i) Then the following system of ODEs,

G_t' = H_b^2(A_t,G_t)+2H_a^1(A_t,G_t)G_t,
A_t' = H_a^0(A_t,G_t)+H_a^1(A_t,G_t)A_t, (A_0,G_0)=(x_0,0),

has for 0≤ t<t_∞≤∞, i.e. up to some possibly finite explosion time t_∞, a unique solution (A_t,G_t)∈ℝ×ℝ_≥0.
(ii) The McKean-Vlasov SDE

dX_t=(E[a^0(X_t)]+X_t E[a^1(X_t)]) dt+E[b(X_t)] dW_t, X_0=x_0,

is then equivalent to

dX_t=(H_a^0(A_t,G_t)+H_a^1(A_t,G_t)X_t) dt+H_b(A_t,G_t) dW_t, X_0=x_0,

and has the explicit solution

X_t = x_0 e^∫_0^t H_a^1(A_s,G_s)ds + ∫_0^t H_a^0(A_s,G_s) e^∫_s^t H_a^1(A_r,G_r)dr ds + ∫_0^t H_b(A_s,G_s) e^∫_s^t H_a^1(A_r,G_r)dr dW_s, 0≤ t<t_∞≤∞.

Note: the Wiener integral in (<ref>) can be interpreted as an ordinary integral after partial integration, due to the smoothness of the (deterministic) integrand.

§.§ Affine structures

Let us consider affine functions

a^0(u) = a_0^0+a_1^0 u, a^1(u) = a_0^1+a_1^1 u, b(u) = b_0+b_1 u.

Then for c≡ a^0, c≡ a^1, and c≡ b, respectively, we have

H_c(p,q) = (1/√(2π q))∫ c(u)e^-(p-u)^2/(2q) du = (1/√(2π q))∫ c_0 e^-(p-u)^2/(2q) du + (1/√(2π q))∫ c_1 u e^-(p-u)^2/(2q) du = c_0+c_1 p,

with c(u)=c_0+c_1 u. In particular, the H_c(p,q) do not depend on q, and so (<ref>) simplifies to

A_t'=a_0^0+(a_1^0+a_0^1)A_t+a_1^1 A_t^2, A_0=x_0.

We first consider the case a_1^1=0; then (<ref>) reads A_t'=a_0^0+(a_1^0+a_0^1)A_t, with solution

A_t = (x_0+a_0^0/(a_1^0+a_0^1)) e^(a_1^0+a_0^1)t - a_0^0/(a_1^0+a_0^1) if a_1^0+a_0^1≠0,

and

A_t = x_0+a_0^0 t if a_1^0+a_0^1=0.

For the case a_1^1≠0 the solution (checked by Mathematica) is as follows. If D:=(a_1^0+a_0^1)^2-4a_0^0 a_1^1<0, a_1^1≠0, then

A_t = -(a_1^0+a_0^1)/(2a_1^1) + (√(-D)/(2a_1^1)) tan[(√(-D)/2) t + arctan((a_1^0+a_0^1+2a_1^1 x_0)/√(-D))].

If D>0, a_1^1≠0, then

A_t = (√(D)-a_1^0-a_0^1)/(2a_1^1) + (x_0-(√(D)-a_1^0-a_0^1)/(2a_1^1)) / (1+(√(D)+a_1^0+a_0^1+2a_1^1 x_0)(e^-√(D)t-1)/(2√(D))).

If D=0, a_1^1≠0, then

A_t = -(a_1^0+a_0^1)/(2a_1^1) + (1/a_1^1)·(a_1^0+a_0^1+2a_1^1 x_0)/(2-(a_1^0+a_0^1+2a_1^1 x_0)t).

As a result, the McKean-Vlasov SDE

dX_t=(a_0^0+a_1^0 A_t+(a_0^1+a_1^1 A_t)X_t) dt+(b_0+b_1 A_t) dW_t

has the following (unique) solution:

X_t = x_0 e^∫_0^t(a_0^1+a_1^1 A_s)ds + ∫_0^t(a_0^0+a_1^0 A_s) e^∫_s^t(a_0^1+a_1^1 A_r)dr ds + ∫_0^t(b_0+b_1 A_s) e^∫_s^t(a_0^1+a_1^1 A_r)dr dW_s,

where A_t is given by (<ref>), (<ref>), (<ref>), or (<ref>).

By taking in Section <ref>

a(x,u)=a_1^0 u+a_0^1 x, b(x,u)=b_0, a_1^0+a_0^1<0,

we get essentially the Shimizu-Yamada model. From (<ref>) we then have

A_t=x_0 e^(a_1^0+a_0^1)t,

and from (<ref>) we then get the explicit solution

X_t=x_0 e^(a_1^0+a_0^1)t+∫_0^t b_0 e^a_0^1(t-s) dW_s,

which is Gaussian with mean x_0 e^(a_1^0+a_0^1)t and variance b_0^2(e^2a_0^1 t-1)/(2a_0^1), and which is consistent with the terminology in (<cit.>, Section 3.10), where a_1^0+a_0^1=-γ and a_0^1=-γ-κ.

By taking in Section <ref>

a(x,u)=(a_0^1+a_1^1 u)x, b(x,u)=b_0,

we straightforwardly get from (<ref>),

A_t = x_0 e^a_0^1 t/(1-(a_1^1/a_0^1) x_0(e^a_0^1 t-1)),

and

X_t=x_0 e^∫_0^t(a_0^1+a_1^1 A_s)ds+∫_0^t b_0 e^∫_s^t(a_0^1+a_1^1 A_r)dr dW_s,

respectively. Plugging (<ref>) into (<ref>) then yields

X_t = x_0 e^a_0^1 t/(1-(a_1^1/a_0^1) x_0(e^a_0^1 t-1)) + (b_0 e^a_0^1 t/(1-(a_1^1/a_0^1) x_0(e^a_0^1 t-1))) Γ_t,

with Gaussian Γ_t=∫_0^t(1-(a_1^1/a_0^1) x_0(e^a_0^1 s-1)) e^-a_0^1 s dW_s. In particular, if a_0^1=0 we get

A_t=x_0/(1-a_1^1 x_0 t),

and

X_t=x_0/(1-a_1^1 x_0 t)+b_0∫_0^t (1-a_1^1 x_0 s)/(1-a_1^1 x_0 t) dW_s.
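When no closed form is available, the ODE system (<ref>) is easy to integrate numerically. The following Python sketch (our own illustration) evaluates the generalized Gauss transform H_c(p,q) = E[c(p+√q Z)], Z∼N(0,1), by Gauss-Hermite quadrature and performs explicit Euler steps for (A_t,G_t); in the affine case its output can be checked against the closed-form expressions above.

import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(64)   # quadrature for int e^{-x^2} g(x) dx

def H(c, p, q):
    # generalized Gauss transform H_c(p, q) = E[c(p + sqrt(q) Z)], Z ~ N(0, 1)
    return np.dot(weights, c(p + np.sqrt(2.0 * max(q, 0.0)) * nodes)) / np.sqrt(np.pi)

def solve_AG(a0, a1, b, x0, T, n_steps):
    # explicit Euler for G' = H_b^2 + 2 H_{a^1} G,  A' = H_{a^0} + H_{a^1} A,  (A_0, G_0) = (x_0, 0)
    A, G, dt = x0, 0.0, T / n_steps
    for _ in range(n_steps):
        h0, h1, hb = H(a0, A, G), H(a1, A, G), H(b, A, G)
        A, G = A + dt * (h0 + h1 * A), G + dt * (hb ** 2 + 2.0 * h1 * G)
    return A, G

# affine sanity check with a^0(u) = 1, a^1(u) = -u, b(u) = 1, so that A' = 1 - A^2:
A, G = solve_AG(lambda u: 1.0 + 0.0 * u, lambda u: -u, lambda u: 1.0 + 0.0 * u,
                x0=0.5, T=1.0, n_steps=10000)
# A should be close to np.tanh(1.0 + np.arctanh(0.5)), approximately 0.914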
On the other hand, it is easy to check that for the case a_1^1≠0, the affine solutions in Section <ref> are non-exploding whenever,D≥0 and √(D)≥ a_1^0+a_0^1+2a_1^1x_0.That is, in the case D≥0, a_1^1≠0, it is always possible to choose x_0 such that the solution does or does not explode.§.§ Kuramoto-Shinomoto-Sakaguchi type models In the Kuramoto-Shinomoto-Sakaguchi modelthe nonlinear one-dimensional Fokker-Planck equation (<ref>) is considered in the domain (t,x)∈( 0,∞)×(0,2π), where b=1, a(x,y)=a(x-y)=-d/dxU_MF(x-y) withU_MF(z)=-∑_n=1^∞c_ncos(nz)and the process starts in u at time zero, for some fixed u∈( 0,2π), see for details <cit.> (Sect. 5.3.2). Thus a is a 2π-periodic function related to a 2π-periodic potential. Let usconsider the corresponding McKean-Vlasov SDE {[X_t =u+∫_0^t∫_ℝa(X_s-y)μ_s(dy)ds+W_t;μ_t =Law(X_t), t∈0,T], ].and define the integer valued function k(x):=max{j∈ℤ:2π j≤ x}. Obviously, the processY_t:=X_t-2π k(X_t)has state space [0,2π). Let ρ_t(x;u)=ρ_t(x) be the density of Y_t, which is concentrated on (0,2π). Note that for any 2π-periodic function f we have by (<ref>) that∫_0^2πf(x)ρ_t(x)dx=E[f(Y_t)]=E[ f(X_t)]=∫_-∞^∞f(x)μ_t(x)dx,and for any test function g with support in (0,2π) it holds that,∫_0^2πg(x)ρ_t(x)dx=E[g(Y_t)]=E[ g(X_t-2π k(X_t))]=∑_j∈ℤ∫_2π j^2π(j+1)g(x-2π j)μ_t(x)dx=∫_0^2πg(z)∑_j∈ℤμ_t(z+2π j)dz,that isρ_t(z)=∑_j∈ℤμ_t(z+2π j)for z∈ (0,2π). Thus, in particular,∫_ℝa(x-y)μ_t(y)dy=∫_0^2πa(x-y)ρ_t (y)dy.an(<ref>) is equivalent toX_t =u+∫_0^t∫_0^2πa(X_s-y)ρ_s(y)dyds+W_t ρ_t =Law(Y_t), t∈0,T],(see (<ref>)). Note that by using (<ref>) and (<ref>) it straightforwardly follows that ρ_t(x)=ρ_t(x;u) satisfies (<ref>) in the above context. Instead of taking the scalar product in L_2(ℝ^d,w), we now consider the scalar product in L_2([0,2π)), i.e. w≡1, and take for (φ_k) the standard (total) orthonormal trigonometric basis consisting of the 2π-periodic functions (2π) ^-1/2, π^-1/2cos(my) and π^-1/2sin( my), m=1,2,… suitably ordered. Thus, by definingγ_k(t) =∫_0^2πρ_t(y)φ_k(y) dy, α_k(x) =∫_0^2πa(x-y)φ_k( y)dy,one hasρ_t(y)=∑_k=0^∞γ_k(t)φ_k( y) and ∫_ℝa(x-y)μ_t(y)dy=∑ _k=0^∞α_k(x)γ_k(t),due to (<ref>). That is (<ref>) reads,X_t^i,K,N=u+∫_0^t∑_k=0^Kγ_k^N(s)α_k (X_s^i,K,N) ds+ W_t^iwith γ_k^N as in (<ref>). Next we may follow (<ref>) for the corresponding Euler scheme. Finally, the estimator for the density ρ_t readsρ_t^K_test,K,N(y):=∑_k=1^K_test γ_k^N(t)φ_k(y)(cf. the estimator for μ_t in Section 3.1).§ NUMERICAL TEST CASES§.§ Affine MVSDE models Let us now test the numerical performance of the projected particle approach for the processes discussed in Section <ref>. Consider the situation wherea^0(u)=(1+u^M)exp(-u^2/2), a^1(u)=ρexp(-u^2/2), b(x,u)≡σfor some M>0, ρ≥0 and σ>0. Then, by using the Hermite functions (<ref>) with w≡1 and the well-known identityu^M=1/2^M∑_m=0^⌊M/2⌋ M!/m!(M-2m)!H_M-2m(u),we derive straightforwardly,α_k(x) =∫(1+u^M)exp(-u^2)H_k(u) du+ρ x·∫exp(-u^2)H_k(u) du=0if k>Morkis unevenπ^1/4/2^M-k/2M!/(M-k/2)!√(k!) if 0≤ k≤ M, s.t.M-k is even +(1+ρ x)· π^1/4 if k=0,0if k>0.On the other hand, by some algebra we getH_a^0(p,q) =1/√(2π q)∫(1+u^M)e^-u^2/2 e^-(p-u)^2/2qdu=1/√(1+q)e^-p^2/2(1+q)+1/√(2π(1+q))e^-p^2/2(1+q)∫(√(q/1+q)y+p/1+q)^Me^-y^2/2dy,andH_a^1(p,q)=ρ/√(1+q)e^-p^2/2(1+q).The explicit solution of the MVSDEdX_t=(𝖤[(1+X_t^M)exp(-X_t^2/2)] +ρ X_t𝖤[exp(-X_t^2/2)]) dt+σ dW_tis given by (<ref>) and (<ref>). 
Hence the density of X_t is normal with meanx_0e^∫_0^tH_a^1(A_s,G_s)ds+∫_0 ^tH_a^0(A_s,G_s)e^∫_s^tH_a^1( A_r,G_r)dr dsand varianceσ^2∫_0^te^2∫_s^tH_a^1(A_r,G_r) dr ds.In our numerical example we take M=2, ρ=-1, σ=1 and x_0=0. Our aim is to approximate the normal density of X_1 by using our projected particle method based on Hermite basis. To this end, we first simulate N paths of the process X̅^i,K,N, defined in (<ref>) with a time step h=0.02. Since M=2, the case K=2 corresponds to a perfect approximation of the integral ∫_ℝ^da(x,y)μ _t(y) dy=∑_k=0^2α_k(x)γ_k(t). Next using the obtained sample X̅_1^1,K,N,…,X̅_1^N,K,N, we construct projection estimates for the density of X_1 by using Hermite basis functions of order K_test∈{1,2,…,10}. The mean (0.727) and the variance (0.487) of the true normal density are approximated by solving the ODE system  (<ref>) using Euler method with time step 0.0001. The Figure <ref> shows the box plots of L_2-distance between μ_1 and μ_t^K_test,K,N for K∈{1,2} based on 50 different replications of the process X̅^i,K,N. As can be seen, the choice of K_test is crucial and depends on K and N. It also should be stressed that the truncation error dominates the statistical one already for medium sample sizes. An optimal balance between K, N and K_test can be found by analyzing the right hand side of (<ref>) under various assumptions on the coefficients (A_k,α), (A_k,β) and (γ_k).§.§ Convolution-type MVSDE models Consider the MVSDE of the form:dX_t=_X^'[Q(X_t-X_t^')] dt+σ dW_t, t∈0,1], X_0∼𝒩(0,1),i.e. of the form (<ref>) with a(x,y)=Q(x-y), b(x,y)=σ and μ_0(x)=(1/√(2π))e^-x^2/2. Let us again use the Hermite basis to approximate the density of X_t for any t∈0,1]. In the case Q(x)=e^-x^2/2, we explicitly derive via repeated integration by parts∫_ℝe^-(x-y)^2/2-x^2/2H_n(x) dx =e^-y^2/4 /2∫ e^-(z-y)^2/4H_n(z/2) dz=√(π) e^-y^2/4/2(1/2) ^n-1(2y)^n.As a resultα_n(y)=∫ e^-(x-y)^2/2-x^2/2H_n(x) dx=π ^1/4(1/2)^n/2y^n/√(n!)e^-y^2/4,where H_n stands for the normalized Hermite polynomial of order n. We take σ=0.1. Using the Euler scheme (<ref>) with time step h=1/L=0.01, we first simulate N=500 paths of the time discretized process X̅^·,N. Next, by means of the closed form expressions for α_n, we generate N paths of the projected approximating process X̅^·,K,N, K∈{1,…,20} (see (<ref> )), using the same Wiener increments as for X̅^·,N, so that the approximations X̅^·,N and X̅^·,K,N are coupled. Finally, we compute the strong approximation errorE_N,K=√(1/N∑_i=1^N(X̅_1^i,K,N-X̅ _1^i,N)^2)of the projective system relative to the system (<ref>) and record times needed to compute approximations X̅_1^·,N and X̅_1^·,K,N, respectively. Figure <ref> shows the (natural) logarithm of E_N,K versus the logarithm of the corresponding (relative) computational time gain defined as (comp. time due to (<ref>) - comp. time due to (<ref>))/comp. time due to (<ref>), for values K∈{1,…,20}. As can be seen, the relation between logarithmic strong error and logarithmic computational time gain can be well approximated by a linear function. On the right-hand side of Figure <ref> we depict the projection estimate for the density of X_1 corresponding to K=20. Note that we compare two particle systems (projected and non projected ones) for a fixed N and are mainly interested in the dependence of their strong distance on K. 
In fact, the choice of N doesn't have much influence on E_N,K, provided N is large enough.§ PROOFS§.§ Proof of Theorem <ref> Let us introduce𝔞_K,N(x,y):=1/N∑_j=1^N∑_k=1^Kα _k(x)φ_k(y^j)=1/N∑_j=1^N∑_k=1^Kφ _k(y^j)∫ a(x,u)φ_k(u)w(u)du, 𝔟_K,N(x,y):=1/N∑_j=1^N∑_k=1^Kβ _k(x)φ_k(y^j)=1/N∑_j=1^N∑_k=1^Kφ _k(y^j)∫ b(x,u)φ_k(u)w(u)du,and𝔞_s(x) :=∫_ℝ^da(x,u)μ_s(du)ds, 𝔟_s(x) :=∫_ℝ^db(x,u)μ_s(du)dsfor any x∈ℝ^d,y∈ℝ^d× N. We so have thatΔ_t^i:=X_t^i,K,N-X_t^i =∫_0^t( 𝔞_K,N(X_s^i,K,N,X_s^K,N) -𝔞_s(X_s ^i))ds+∫_0^t(𝔟_K,N(X_s^i,K,N,X_s ^K,N) -𝔟_s(X_s^i))dW_s^i,where W^i, i=1,...,N, are i.i.d. copies of the m-dimensional Wiener process W. Hence,|Δ_t^i| ^p ≤2^p-1t^p-1∫_0 ^t|𝔞_K,N(X_s^i,K,N,X_s^K,N ) -𝔞_s(X_s^i)| ^pds+2^p-1d^p-1∑_q=1^d|∫_0^t(𝔟 _K,N^q(X_s^i,K,N,X_s^K,N) -𝔟_s^q(X_s ^i))dW_s^i| ^p,where𝔟_K,N^q:=(𝔟_K,N^q,1,...,𝔟 _K,N^q,m),q=1,...,d,denote the rows of the ℝ^d× m valued 𝔟_K,N, and so we have withΔ_t^p:=1/N∑_i=1^Nsup_s∈ 0,t]|Δ_s^i| ^pthe boundΔ_t^p ≤2^p-1t^p-11/N∑_i=1^N ∫_0^t|𝔞_K,N(X_s^i,K,N,X_s ^K,N) -𝔞_s(X_s^i)| ^pds+2^p-1d^p-1∑_q=1^d1/N∑_i=1^Nsup_s∈0,t]|∫_0^s(𝔟_K,N^q(X _s^i,K,N,X_s^K,N) -𝔟_s^q(X_s^i))dW_s ^i| ^p=:2^p-1t^p-1 Term_1+2^p-1d^p-1 Term_2.Assumption (AC) implies|𝔞_K,N(x,y)-𝔞_K,N(x^',y^')| =|1/N∑_j=1^N∑_k=1^K( α_k(x)φ_k(y_j)-α_k(x^')φ_k (y_j^'))|≤1/N∑_j=1^N∑_k=1^K|α_k (x)-α_k(x^')||φ_k(y_j^')|+1/N∑_j=1^N∑_k=1^K|α_k(x)||φ_k(y_j)-φ_k(y_j^')|≤| x-x^'| D_φB_α+L_φA_α/N(1+| x| )∑_j=1^N| y_j-y_j^'| .Hence|𝔞_K,N(x,y)-𝔞_K,N(x^',y^')| ^p ≤2^p-1| x-x^'| ^pD_φ^pB_α^p+2^p-1L_φ^pA_α^p(1+| x| )^p 1/N∑_j=1^N| y_j-y_j^'| ^p.So it holds that|𝔞_K,N(X_s^i,X_s)-𝔞_K,N(X _s^i,K,N,X_s^K,N) | ^p ≤2^p-1D_φ ^pB_α^p|Δ_s^i|^p+2^p-1L_φ^pA_α^p(1+| X_s^i| )^p1/N∑_j=1^N|Δ_s^j|^p,and then it follows that, with regard to Term_1,E[Term_1]≤2^2p-2D_φ ^pB_α^p∫_0^tE[Δ_s^p ]ds+2^2p-2L_φ^pA_α^p∫_0^tE[ Δ_s^p·1/N∑_i=1^N(1+| X_s ^i| )^p]ds+2^p-11/N∑_i=1^N∫_0^tE[|𝔞_K,N(X_s^i,X_s) -𝔞_s(X_s ^i)| ^p]ds.Let us now consider the middle term. Setζ_s,N:=1/N∑_i=1^N(1+| X_s^i| )^p-1/N∑_i=1^NE[(1+| X_s ^i| )^p]so thatE[Δ_s^p·1/N∑_i=1 ^N(1+| X_s^i| )^p]=1/N ∑_i=1^NE[(1+| X_s^i| )^p]·E[Δ_s^p] +E[ζ_s,N·Δ_s^p].For arbitrary but fixed θ>0, it holds thatE[ζ_s,N·Δ_s^p] =E[ζ_s,N·Δ_s^p 1_{ζ_s,N≤θ}]+E[ζ_s,N ·Δ_s^p 1_{ζ_s,N>θ}],where on the one handE[ζ_s,N·Δ_s^p 1_{ζ_s,N≤θ}]≤θE[ Δ_s^p]and on the otherE[ζ_s,N·Δ_s^p 1_{ζ_s,N>θ}]≤√(E[ζ _s,N^21_{ζ_s,N>θ}])√(E[(Δ_s^p)^2]).Due to (<ref>) we have that for any η>0, there exists C_θ,η>0 such thatE[ζ_s,N^21_{ζ_s,N>θ}]=1/NE[(√(N)ζ_s,N) ^21_{√(N)ζ_s,N>θ√(N)}] ≤C_θ,η^2/N^η+1, 0≤ s≤ T,for N large enough andE[(Δ_s^p)^2]≤E[1/N∑_j=1^Nsup_r∈ 0,T]|Δ_r^j| ^2p]=E[ sup_r∈0,T]|Δ_r^·| ^2p] =E[sup_r∈0,T]| X_r^·,K,N -X_r^·| ^2p] ≤2^2p-1E[sup_r∈0,T]| X_r ^·,K,N| ^2p]+2^2p-1E[sup _r∈0,T]| X_r^·| ^2p] ≤ D_1+D_2=:D^2,where due to Theorem <ref>, Appendix <ref>,2^2p-1E[sup_r∈0,T]| X_r^· ,K,N| ^2p]≤ D_1<∞uniform in N and K,andD_2:=2^2p-1E[sup_r∈0,T]| X_r ^·| ^2p]<∞due to (<ref>). Thus, by combining (<ref>)–(<ref>), one hasE[Δ_s^p·1/N∑_i=1 ^N(1+| X_s^i| )^p]≤ F_1^p ·E[Δ_s^p]+F_2 /N^p/2+1/2with F_1:=θ^1/p+sup_0≤ s≤ T‖ 1+| X_s|‖ _p and F_2:=C_θ,pD, where we have taken η=p. Set nowH(s):=E[Δ_s^p],then the estimate (<ref>) (cf. 
(<ref>)) reads1/N∑_i=1^N∫_0^tE[|𝔞_K,N(X_s^i,K,N,X_s^K,N) -𝔞_s(X_s ^i)| ^p]ds ≤(2^2p-2D_φ^pB_α^p+2^2p-2L_φ ^pA_α^pF_1^p)∫_0^tH(s)ds+2^2p-2L_φ ^pA_α^pF_2/N^p/2+1/2t+2^p-11/N∑_i=1^N∫_0^tE[|𝔞_K,N(X_s^i,X_s) -𝔞_s(X_s ^i)| ^p]ds.Regarding the term Term_2 we call upon the Burkholder-Davis-Gundy's inequality which states that for any p≥1,‖sup_s∈0,t]|∫_0^s( 𝔟_K,N^q(X_s^i,K,N,X_s^K,N) -𝔟_s ^q(X_s^i))dW_s^i|‖ _p≤ C_p(E[(∫_0^t|( 𝔟_K,N^q(X_s^i,K,N,X_s^K,N) -𝔟_s ^q(X_s^i))| ^2ds)^p/2]) ^1/p.This implies that for p≥2,Esup_s∈0,t]|∫_0^s( 𝔟_K,N^q(X_s^i,K,N,X_s^K,N) -𝔟_s ^q(X_s^i))dW_s^i| ^p≤ C_p^pE[(∫_0^t|( 𝔟_K,N^q(X_s^i,K,N,X_s^K,N) -𝔟_s ^q(X_s^i))| ^2ds)^p/2] ≤ C_p^pt^p/2-1E[∫_0^t|( 𝔟_K,N^q(X_s^i,K,N,X_s^K,N) -𝔟_s ^q(X_s^i))| ^pds] ≤ C_p^pt^p/2-1E[∫_0^t|( 𝔟_K,N(X_s^i,K,N,X_s^K,N) -𝔟_s(X_s ^i))| ^pds].Now, completely analogue to the derivation of (<ref>), we get1/N∑_i=1^N∫_0^tE[|𝔟_K,N(X_s^i,K,N,X_s^K,N) -𝔟_s(X_s ^i)| ^p]ds ≤(2^2p-2D_φ^pB_β^p+2^2p-2L_φ ^pA_β^pF_1)∫_0^tH(s)ds+2^2p-2L_φ ^pA_β^pF_2/N^p/2+1/2t+2^p-11/N∑_i=1^N∫_0^tE[|𝔟_K,N(X_s^i,X_s) -𝔟_s(X_s ^i)| ^p]ds.Now by taking expectations on both sides of (<ref>) and gathering all together, we arrive atH(t)≤(D_φ^pB_α^pT^p-1+L_φ^pA_α^pF_1^pT^p-1..+C_p^pD_φ^pB_β^pd^pT^p/2-1+C_p ^pL_φ^pA_β^pd^pT^p/2-1F_1^p)2^3p-3 ∫_0^tH(s)ds+2^3p-3(L_φ^pA_α^pT^p+d^pC_p^pL_φ^pA_β^pT^p/2)F_2/N^p/2+1/2+2^2p-2T^p-11/N∑_i=1^N∫_0^tE[ |𝔞_K,N(X_s^i,X_s) -𝔞_s (X_s^i)| ^p]ds+2^2p-2d^pC_p^pT^p/2-11/N∑_i=1^N∫_0 ^tE[|𝔟_K,N(X_s^i ,X_s) -𝔟_s(X_s^i)| ^p]ds.We next proceed with explicit estimates for the last two terms above. Let us write𝔞_K,N(X_s^i,X_s)-𝔞_s(X_s^i)=∑_k=1 ^Kα_k(X_s^i)∑_j=1^N1/N(φ_k (X_s^j)-γ_k(s))-∑_k=K+1^∞α_k(X_s ^i)γ_k(s),then we have by the Minkowski inequality,‖𝔞_K,N(X_s^i,X_s)-𝔞_s(X_s ^i)‖ _p ≤∑_k=1^K‖α_k(X_s ^i)1/N∑_j=1^Nξ_k^j‖ _p+∑_k=K+1^∞‖α_k(X_s^i)γ_k (s)‖ _p,where ξ_k^j:=φ_k(X_s^j)-γ_k(s), j=1,…,N, have mean zero. Let us now observe thatE[.|∑_j=1^Nξ_k^j| ^p| X^i]=E[.|ξ_k^i+∑_j≠ i^Nξ_k^j| ^p| X^i] ≤2^p-1E[.|ξ_k^i| ^p+|∑_j≠ i^Nξ_k^j| ^p| X^i] ≤2^2p-1D_k,φ^p+2^p-1E[|∑_j≠ i^Nξ_k^j| ^p]using (<ref>). For p≥2, it follows from the Rosenthal's inequality that,E[|∑_j≠ i^Nξ_k^j| ^p]≤ C_p^(1)((∑_j≠ i^NE |ξ_k^j| ^2)^p/2+∑_j≠ i ^NE|ξ_k^j| ^p)for a constant C_p^(1) only depending on p, and, in fact, for p=2 we have simply,E[|∑_j≠ i^Nξ_k^j| ^p]=∑_j≠ i^NE|ξ_k^j| ^2.Thus, for p≥2,E[.|1/N∑_j=1^Nξ_k ^j| ^p| X_s^i]≤2^2p-1 D_k,φ^p/N^p+2^p-1C_p^(1)/N^p(( ∑_j≠ i^NE|ξ_k^j| ^2) ^p/2+∑_j≠ i^NE|ξ_k^j| ^p) ≤2^2p-1D_k,φ^p/N^p+2^2p-1C_p ^(1)D_k,φ^p/N^p/2+2^2p-1C_p^(1)D_k,φ^p /N^p-1≤(C_p^(2))^pD_k,φ^p/N^p/2for N>N_p and some constantsC_p^(2)>0, N_p>0.So for any p≥2,‖α_k(X_s^i)1/N∑_j=1^Nξ_k ^j‖ _p^p ≤ A_k,α^pE[( 1+| X_s^i|)^pE[. 
|1/N∑_j=1^Nξ_k^j| ^p| X_s^i]] ≤ A_k,α^pD_k,φ^p(C_p^(2)) ^p/N^p/2E[(1+| X_s|) ^p],hence‖α_k(X_s^i)1/N∑_j=1^Nξ_k ^j‖ _p≤ C_p^(2)A_k,αD_k,φF_3 N^-1/2with F_3:=sup_0≤ s≤ T‖ 1+| X_s|‖ _p,and further∑_k=K+1^∞‖α_k(X_s^i)γ_k(s)‖ _p≤ F_3∑_k=K+1^∞A_k,α|γ _k(s)| .We thus obtain,‖𝔞_K,N(X_s^i,X_s)-𝔞_s(X_s ^i)‖ _p≤ C_p^(2)A_αD_φF_3N^-1/2 +F_3∑_k=K+1^∞A_k,α|γ_k(s)| ,that is,E[|𝔞_K,N(X_s^i,X_s)-𝔞 _s(X_s^i)| ^p]≤2^p-1(C_p ^(2))^pA_α^pD_φ^pF_3^pN^-p/2+2^p-1F_3^p(∑_k=K+1^∞A_k,α|γ_k(s)|)^p.Analogously we getE[|𝔟_K,N(X_s^i,X_s)-𝔟 _s(X_s^i)| ^p]≤2^p-1(C_p ^(2))^pA_β^pD_φ^pF_3^pN^-p/2+2^p-1F_3^p(∑_k=K+1^∞A_k,β|γ_k(s)|)^p.Now, combining the estimates (<ref>) and (<ref>) with (<ref>), yields for 0≤ t≤ T, H(t) ≤(C_p,φ,XT^p-1+D_p,φ,Xd^p T^p/2-1)∫_0^tH(s)ds+(E_p,φ,XT^p+F_p,φ,Xd^pT^p/2+O(N^-1/2 ))N^-p/2+G_p,φ,XT^p-1∫_0^T(∑_k=K+1^∞ A_k,α|γ_k(s)|)^pds+H_p,φ,Xd^pT^p/2-1∫_0^T(∑_k=K+1^∞A_k,β|γ_k(s)|)^pdswith abbreviationsC_p,φ,X =2^3p-3D_φ^pB_α^p+2^3p-3L_φ^pA_α^pF_1^pD_p,φ,X =2^3p-3C_p^pD_φ^pB_β^p +2^3p-3C_p^pL_φ^pA_β^pF_1^pE_p,φ,X =2^3p-3(C_p^(2))^pA_α ^pD_φ^pF_3^pF_p,φ,X =2^3p-3C_p^p(C_p^(2))^p A_β^pD_φ^pF_3^pG_p,φ,X =2^3p-3F_3^pH_p,φ,X =2^3p-3C_p^pF_3^p.Finally, the statement of the theorem follows from Gronwall's lemma by raising the resulting inequality to the power 1/p, then using that ( ∑_i=1^q|a_i|^p)^1/p≤∑_i=1^q|a_i| for arbitrary a_i∈ℝ, p,q∈ℕ, a Minkowski type inequality, and the observation thatE[Δ_T^p]=1/N∑_i=1 ^NE[sup_s∈0,T]|Δ_s ^i| ^p]=E[sup_s∈ 0,T]|Δ_s^·| ^p]. §.§ Proof of Theorem <ref> (i): Under the assumption (<ref>) the functions H_a^j and H_b are locally Lipschitz in ℝ× ℝ_≥0 and, obviously, their extensions (p,q)→ H_a^j(p,|q|), H_b(p,|q|) are locally Lipschitz in ℝ× ℝ. Thus, by standard ODE theory, there exists a unique solution to the systemG_t^' =H_b^2(A_t,| G_t|)+2H_a^1(A_t,| G_t|)G_tA_t^' =H_a^0(A_t,| G_t|)+H_a^1(A_t,| G_t|) A_t,(A_0,G_0)=(x_0,0),0≤ t<t_∞≤∞,for some possibly finite explosion time t_∞. Then it can be straightforwardly checked that this (unique) solution can be represented asG_t =∫_0^tH_b^2(A_s,| G_s|)e^2∫_s^tH_a^1(A_r,| G_r|)drdsA_t =e^∫_0^tH_a^1(A_s,| G_s|)dsx_0+∫_0^tH_a^0(A_s,| G_s |)e^∫_s^tH_a^1(A_r,| G_r|)drds, 0≤ t<t_∞,whence in particular G_t≥0 for 0≤ t<t_∞. This proves (i).(ii): By straightforward differentiating with respect to t, it follows that (<ref>) is a solution to (<ref>). Let us abbreviate in (<ref>)𝔞_t^0≡ H_a^0(A_t,G_t),𝔞_t^1≡ H_a^1(A_t,G_t),𝔟_t≡ H_b(A_t,G_t),0≤ t<t_∞.The characteristic function of X_t in (<ref>) then takes the formφ_t(v)=exp[𝔦v∫_0^t𝔞_s ^0e^∫_s^t𝔞_r^1drds-1/2v^2∫_0 ^t𝔟_s^2e^2∫_s^t𝔞_r^1drds+𝔦 ve^∫_0^t𝔞_s^1dsx_0].Sincee^-(p-u)^2/2q/√(2π q)=1/2π∫ e^-𝔦vuexp[𝔦vp-v^2q/2]dv,we have for j=0,1,H_a^j(p,q)=1/2π∫ a^j(u)du∫exp[𝔦 vp-v^2q/2]e^-𝔦vudv.It then follows thatH_a^j(e^∫_0^t𝔞_s^1dsx_0+∫_0 ^t𝔞_s^0e^∫_s^t𝔞_r^1drds,∫_0 ^t(𝔟_s^0)^2e^2∫_s^t𝔞 _r^1drds)=1/2π∫ a^j(u)du∫φ_t(v)e^-𝔦 vudv=∫ a^j(u)μ_t(u)du=E[a^j(X_t)],j=0,1,with μ_t being the density of X_t, and similarly,H_b(e^∫_0^t𝔞_s^1dsx_0+∫_0^t𝔞 _s^0e^∫_s^t𝔞_r^1drds,∫_0^t( 𝔟_s^0)^2e^2∫_s^t𝔞_r^1 drds)=E[b(X_t)].On the other hand, in view of (<ref>) and the fact that G≥0, one has∫_0^t(𝔟_s^0)^2e^2∫_s ^t𝔞_r^1drds =G_te^∫_0^t𝔞_s^1dsx_0+∫_0^t𝔞_s ^0e^∫_s^t𝔞_r^1drds =A_t,that is, by (<ref>), (<ref>), and (<ref>), we obtain (<ref>) from (<ref>).§ APPENDIX§.§ Existence of moments Fix some p≥2 and suppose that E[|X_0 |^p]<∞. 
Then it holds under assumptions (AC) and (AF),‖sup_s∈0,T]| X_s^·,K,N|‖ _p<∞,uniformly in K and N.Fix some i∈{1,…,N} and for every R>0 introduce the stopping timeτ_i,R=inf{t∈0,T] :| X_t^i,K,N-X_0 ^i| >R}.We obviously havesup_t∈0,T]| X_t∧τ_i,R^i,K,N|≤ R+| X_0^i|so that the non-decreasing function f_R(t):=‖sup_s∈0,t]| X_s∧τ_i,R^i,K,N|‖ _p, t∈0,T], is bounded by R+‖ X_0^i‖ _p. On the other handsup_s∈0,t]| X_s∧τ_i,R^i,K,N| ≤| X_0^i| +∫_0^t∧τ_i,R|𝔞_K,N(X_r^i,K,N,X_r^K,N)| dr+sup_s∈0,t]|∫_0^s∧τ_i,R 𝔟_K,N(X_r^i,K,N,X_r^K,N) dW_r^i|≤| X_0^i| +∫_0^t∧τ_i,R|𝔞_K,N(X_r^i,K,N,X_r^K,N)| dr+∑_q=1^dsup_s∈0,t]|∫_0^s∧τ _i,R𝔟_K,N^q(X_r^i,K,N,X_r^K,N) dW_r ^i|(cf. (<ref>)). It then follows from the Minkowski and BDG inequality thatf_R(t) ≤‖ X_0‖ _p+∫_0^t‖ 1_{s≤τ_i,R}𝔞_K,N(X_s^i,K,N,X_s^K,N )‖ _p ds+dC_p^BDG‖√(∫_0^t∧τ_i,R|𝔟_K,N(X_s^i,K,N,X_s^K,N)| ^2 ds)‖ _p≤‖ X_0‖ _p+A_αD_φ∫_0 ^t‖(1+| X_s∧τ_i,R^i,K,N|)‖ _p ds+A_βD_φdC_p^BDG‖√(∫_0^t|(1+| X_s∧τ_i,R^i,K,N|) | ^2ds)‖ _p≤‖ X_0‖ _p+A_αD_φ∫_0 ^t(1+‖| X_s∧τ_i,R^i,K,N|‖ _p) ds+A_βD_φdC_p^BDG(√(t)+(∫_0 ^t‖| X_s∧τ_i,R^i,K,N| ^2‖ _p/2 ds)^1/2)again by the Minkowski inequality (p≥2). Consequently, the function f_R satisfiesf_R(t)≤‖ X_0‖ _p+A_αD_φ∫_0 ^t(1+f_R(s)) ds+A_βD_φdC_p^BDG( √(t)+(∫_0^tf_R^2(s) ds)^1/2),that is,f_R(t) ≤‖ X_0‖ _p+A_αD_φt+A_βD_φdC_p^BDG√(t)+A_αD_φ∫_0^tf_R(s) ds+A_βD_φ dC_p^BDG(∫_0^tf_R^2(s) ds)^1/2.By Lemma <ref> (see Appendix) it follows that‖sup_s∈0,T]| X_s∧τ_i,R ^i,K,N|‖ _p ≤2e^(2A_α D_φ+A_β^2D_φ^2d^2(C_p^BDG) ^2)T×(‖ X_0‖ _p+A_αD_φT+A_βD_φdC_p^BDG√(T)).Now note that the stopping times τ_i,R are non-decreasing in R, and thus converges non-decreasingly to τ_i,∞ say, with τ _i,∞∈0,T]∪{∞}. Thus,R→sup_s∈0,T]| X_s∧τ_i,R ^i,K,N|is nondecreasing withlim_R↑∞sup_s∈0,T]| X_s∧τ_i,R ^i,K,N| ={c]c sup_s∈0,T]| X_s^i,K,N|on {τ_i,∞=∞} ∞on {τ_i,∞≤ T} ..Indeed, on the set {τ_i,∞≤ T} we have for any R>0, | X_τ_i,R^i,K,N-X_0^i|≥ R with τ_i,R≤ T, so thatsup_s∈0,T]| X_s∧τ_i,R^i,K,N|≥| X_τ_i,R^i,K,N|≥| X_τ_i,R ^i,K,N|≥ R-| X_0^i| .The Fatou lemma (<ref>) implies (with 0:=∞·0),‖lim_R↑∞1_{τ_i,∞≤ T}sup_s∈0,T]| X_s∧τ_i,R^i,K,N|‖ _p =∞· P({τ_i,∞≤ T}) ≤lim inf_R‖ 1_{τ_i,∞≤ T} sup_s∈0,T]| X_s∧τ_i,R^i,K,N|‖ _p≤lim inf_R‖sup_s∈0,T]| X_s∧τ_i,R^i,K,N|‖ _p<∞,because of (<ref>). So P({τ_i,∞≤ T})=0, i.e. τ_i,∞=∞ almost surely. Again by the Fatou lemma, (<ref>) then implies‖sup_s∈0,T]| X_s^i,K,N|‖ _p≤lim inf_R‖sup_s∈0,T]| X_s∧τ_i,R^i,K,N|‖ _p<∞,uniformly in K and N, because of (<ref>) again. The following lemma is consequence of Gronwall's theorem.Let f: [0,T]→ℝ_+ and ψ: [0,T]→ℝ_+ be two non-negative non-decreasing functions satisfyingf(t)≤ A∫_0^tf(s) ds+B(∫_0^tf^2(s) ds) ^1/2+ψ(t), t∈0,T],where A,B are two positive real constants. Thenf(t)≤2e^(2A+B^2)t ψ(t), t∈0,T].It follows from the elementary inequality √(xy)≤1/2( x/B+By), x,y≥0,B>0, that(∫_0^tf^2(s) ds)^1/2≤(f(t)∫_0 ^tf(s) ds)^1/2≤f(t)/2B+B/2∫_0 ^tf(s) ds.Plugging this into (<ref>) yieldsf(t)≤(2A+B^2)∫_0^tf(s) ds+2ψ(t).Now the standard Gronwall inequality yields the desired result. plain
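To connect the estimates above back to the scheme they analyze, the following is a minimal numerical sketch of the projected particle method: the interaction coefficients 𝔞_K,N and 𝔟_K,N from the proof are formed by truncating the basis expansion at K terms and replacing the projections γ_k(s) by empirical averages over the N particles. The basis functions phi and the coefficient functions alpha, beta below are illustrative placeholders, not the ones of the paper, and the sketch is one-dimensional for brevity.

```python
import numpy as np

# Minimal sketch of the projected particle scheme analyzed above. The
# drift/diffusion a(x, mu), b(x, mu) are expanded in a basis, truncated at K
# terms, and the projections gamma_k(s) are replaced by empirical averages
# over the N particles, giving a_{K,N} and b_{K,N} as in the proof.
K, N, T, n_steps = 5, 1000, 1.0, 200
dt = T / n_steps
rng = np.random.default_rng(0)

def phi(k, x):    # basis functions phi_k (illustrative placeholders)
    return np.cos(k * x)

def alpha(k, x):  # drift coefficients alpha_k (illustrative placeholders)
    return np.sin(x) / (1 + k) ** 2

def beta(k, x):   # diffusion coefficients beta_k (illustrative placeholders)
    return np.ones_like(x) / (1 + k) ** 2

X = rng.normal(size=N)                 # particles X^{i,K,N}, 1-d for brevity
for _ in range(n_steps):
    proj = [phi(k, X).mean() for k in range(1, K + 1)]  # (1/N) sum_j phi_k(X^j)
    drift = sum(alpha(k, X) * proj[k - 1] for k in range(1, K + 1))
    sigma = sum(beta(k, X) * proj[k - 1] for k in range(1, K + 1))
    X = X + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
```

The theorem above then bounds the strong error of such a scheme by an O(N^{-1/2}) statistical term plus the truncation tails ∑_{k>K} A_{k,α}|γ_k(s)| and ∑_{k>K} A_{k,β}|γ_k(s)|.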
http://arxiv.org/abs/1708.08087v2
{ "authors": [ "Denis Belomestny", "John Schoenmakers" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170827134611", "title": "Projected particle methods for solving McKean-Vlasov stochastic differential equations" }
Cross-view Asymmetric Metric Learning for Unsupervised Person Re-identification

Hong-Xing Yu, Ancong Wu, Wei-Shi Zheng

Code is available at the project page: <https://github.com/KovenYu/CAMEL>

For reference of this work, please cite:

Hong-Xing Yu, Ancong Wu, Wei-Shi Zheng. "Cross-view Asymmetric Metric Learning for Unsupervised Person Re-identification." Proceedings of the IEEE International Conference on Computer Vision. 2017.

Bib:
@inproceedings{yu2017cross, title={Cross-view Asymmetric Metric Learning for Unsupervised Person Re-identification}, author={Yu, Hong-Xing and Wu, Ancong and Zheng, Wei-Shi}, booktitle={Proceedings of the IEEE International Conference on Computer Vision}, year={2017}}

Hong-Xing Yu^1,5, Ancong Wu^2, Wei-Shi Zheng^1,3,4 (corresponding author)
^1School of Data and Computer Science, Sun Yat-sen University, China
^2School of Electronics and Information Technology, Sun Yat-sen University, China
^3Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China
^4Collaborative Innovation Center of High Performance Computing, NUDT, China
^5Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, China
[email protected], [email protected], [email protected]

While metric learning is important for person re-identification (RE-ID), a significant problem in visual surveillance for cross-view pedestrian matching, existing metric models for RE-ID are mostly based on supervised learning that requires large quantities of labeled samples in all pairs of camera views for training. However, this limits their scalability to realistic applications, in which a large amount of data over multiple disjoint camera views is available but not labelled. To overcome the problem, we propose unsupervised asymmetric metric learning for unsupervised RE-ID. Our model aims to learn an asymmetric metric, i.e., a specific projection for each view, based on asymmetric clustering on cross-view person images. Our model finds a shared space where view-specific bias is alleviated and thus better matching performance can be achieved. Extensive experiments have been conducted on a baseline and five large-scale RE-ID datasets to demonstrate the effectiveness of the proposed model. Through the comparison, we show that our model is much more suitable for unsupervised RE-ID than classical unsupervised metric learning models. We also compare with existing unsupervised RE-ID methods, and our model outperforms them with notable margins. In particular, we report results on large-scale unlabelled RE-ID datasets, which are important but unfortunately less studied in the literature.

§ INTRODUCTION
Person re-identification (RE-ID) is a challenging problem focusing on pedestrian matching and ranking across non-overlapping camera views. It remains an open problem although it has received considerable exploration recently, in consideration of its potential significance in security applications, especially in the case of video surveillance. It has not been solved yet principally because of the dramatic intra-class variation and the high inter-class similarity. Existing attempts mainly focus on learning to extract robust and discriminative representations <cit.>, and learning matching functions or metrics <cit.> in a supervised manner.
Recently, deep learning has been adopted to RE-ID community <cit.> and has gained promising results.However, supervised strategies are intrinsically limited due to the requirement of manually labeled cross-view training data, which is very expensive <cit.>. In the context of RE-ID, the limitation is even pronounced because (1) manually labeling may not be reliable with a huge number of images to be checked across multiple camera views, and more importantly (2) the astronomical cost of time and money is prohibitive to label the overwhelming amount of data across disjoint camera views. Therefore, in reality supervised methods would be restricted when applied to a new scenario with a huge number of unlabeled data.To directly make full use of the cheap and valuable unlabeled data,some existing efforts on exploring unsupervised strategies <cit.> have been reported, but they are still not very satisfactory. One of the main reasons is that without the help of labeled data, it is rather difficult to model the dramatic variances across camera views, such as the variances of illumination and occlusion conditions. Such variances lead to view-specific interference/bias which can be very disturbing in finding what is more distinguishable in matching people across views (see Figure <ref>). In particular, existing unsupervised models treat the samples from different views in the same manner, and thus the effects of view-specific bias could be overlooked.In order to better address the problems caused by camera view changes in unsupervised RE-ID scenarios, we propose a novel unsupervised RE-ID model named Clustering-based Asymmetric[“Asymmetric” means specific transformations for each camera view.] MEtric Learning (CAMEL). The ideas behind are on the two following considerations. First, although conditions can vary among camera views, we assume that there should be some shared space where the data representations are less affected by view-specific bias. By projecting original data into the shared space, the distance between any pair of samples 𝐱_i and 𝐱_j is computed as:d(𝐱_i,𝐱_j) = ‖U^T𝐱_i - U^T𝐱_j ‖_2 = √((𝐱_i-𝐱_j)^TM(𝐱_i-𝐱_j)),where U is the transformation matrix and M = UU^T. However, it can be hard for a universal transformation to implicitly model the view-specific feature distortion from different camera views, especially when we lack label information to guide it. This motivates us to explicitly model the view-specific bias. Inspired by the supervised asymmetric distance model <cit.>, we propose to embed the asymmetric metric learning to our unsupervised RE-ID modelling, and thus modify the symmetric form in Eq. (<ref>) to an asymmetric one:d(𝐱_i^p,𝐱_j^q) = ‖U^pT𝐱_i^p - U^qT𝐱_j^q ‖_2,where p and q are indices of camera views.An asymmetric metric is more acceptable for unsupervised RE-ID scenarios as it explicitly models the variances among views by treating each view differently. By such an explicit means, we are able to better alleviate the disturbances of view-specific bias.The other consideration is that since we are not clear about how to separate similar persons in lack of labeled data, it is reasonable to pay more attention to better separating dissimilar ones. Such consideration motivates us to structure our data by clustering. Therefore, we develop asymmetric metric clustering that clusters cross-view person images. 
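To make the asymmetric distance above concrete, here is a minimal sketch of d(𝐱_i^p, 𝐱_j^q) = ‖U^pT𝐱_i^p − U^qT𝐱_j^q‖_2; the view-specific projections U^p are assumed to be already learned, and the shapes used here are our own illustrative choices.

```python
import numpy as np

def asymmetric_distance(x_p, x_q, U_p, U_q):
    """d(x^p, x^q) = ||U_p^T x^p - U_q^T x^q||_2 with view-specific
    projections U_p, U_q (each M x T), cf. the expression above."""
    return np.linalg.norm(U_p.T @ x_p - U_q.T @ x_q)

# toy usage with hypothetical shapes: M = 64-d features, T = 64-d shared space
rng = np.random.default_rng(0)
M, T = 64, 64
U1, U2 = rng.normal(size=(M, T)), rng.normal(size=(M, T))
x1, x2 = rng.normal(size=M), rng.normal(size=M)
print(asymmetric_distance(x1, x2, U1, U2))
```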
By clustering together with asymmetric modelling, the data can be better characterized in the shared space, contributing to better matching performance (see Figure <ref>).In summary, the proposed CAMEL aims to learn view-specific projection for each camera view by jointly learning the asymmetric metric and seeking optimal cluster separations. In this way, the data from different views is projected into a shared space where view-specific bias is aligned to an extent, and thus better performance of cross-view matching can be achieved.So far in literatures, the unsupervised RE-ID models have only been evaluated on small datasets which contain only hundreds or a few thousands of images. However, in more realistic scenarios we need evaluations of unsupervised methods on much larger datasets, say, consisting of hundreds of thousands of samples, to validate their scalabilities. In our experiments, we have conducted extensive comparison on datasets with their scales ranging widely. In particular, we combined two existing RE-ID datasets <cit.> to obtain a larger one which contains over 230,000 samples. Experiments on this dataset (see Sec. <ref>) show empirically that our model is more scalable to problems of larger scales, which is more realistic and more meaningful for unsupervised RE-ID models, while some existing unsupervised RE-ID models are not scalable due to the expensive cost in either storage or computation.§ RELATED WORK At present, most existing RE-ID models are in a supervised manner. They are mainly based on learning distance metrics or subspace <cit.>, learning view-invariant and discriminative features <cit.>, and deep learning frameworks <cit.>. However, all these models rely on substantial labeled training data, which is typically required to be pair-wise for each pair of camera views. Their performance depends highly on the quality and quantity of labeled training data. In contrast, our model does not require any labeled data and thus is free from prohibitively high cost of manually labeling and the risk of incorrect labeling.To directly utilize unlabeled data for RE-ID, several unsupervised RE-ID models <cit.> have been proposed. All these models differ from ours in two aspects. On the one hand, these models do not explicitly exploit the information on view-specific bias, i.e., they treat feature transformation/quantization in every distinct camera view in the same manner when modelling. In contrast, our model tries to learn specific transformation for each camera view, aiming to find a shared space where view-specific interference can be alleviated and thus better performance can be achieved. On the other hand, as for the means to learn a metric or a transformation, existing unsupervised methods for RE-ID rarely consider clustering while we introduce an asymmetric metric clustering to characterize data in the learned space. While the methods proposed in <cit.> could also learn view-specific mappings, they are supervised methods and more importantly cannot be generalized to handle unsupervised RE-ID.Apart from our model, there have been some clustering-based metric learning models <cit.>. However, to our best knowledge, there is no such attempt in RE-ID community before. This is potentially because clustering is more susceptible to view-specific interference and thus data points from the same view are more inclined to be clustered together, instead of those of a specific person across views. 
Fortunately, by formulating asymmetric learning and further limiting the discrepancy between view-specific transforms, this problem can be alleviated in our model. Therefore, our model is essentially different from these models not only in formulation but also in that our model is able to better deal with cross-view matching problem by treating each view asymmetrically. We will discuss the differences between our model and the existing ones in detail in Sec. <ref>. § METHODOLOGY§.§ Problem Formulation Under a conventional RE-ID setting, suppose we have a surveillance camera network that consists of V camera views, from each of which we have collected N_p (p = 1,⋯,V) images and thus there are N = N_1+⋯+N_V images in total as training samples. LetX = [𝐱_1^1,⋯,𝐱_N_1^1,⋯,𝐱_1^V,⋯,𝐱_N_V^V]∈ℝ^M × N denote the training set, with each column 𝐱_i^p (i = 1,⋯,N_p; p = 1,⋯,V) corresponding to an M-dimensional representation of the i-th image from the p-th camera view. Our goal is to learn V mappings i.e., U^1,⋯,U^V, where U^p ∈ℝ^M × T (p = 1,⋯,V), corresponding to each camera view, and thus we can project the original representation 𝐱_i^p from the original space ℝ^M into a shared space ℝ^T in order to alleviate the view-specific interference. §.§ Modelling Now we are looking for some transformations to map our data into a shared space where we can better separate the images of one person from those of different persons. Naturally, this goal can be achieved by narrowing intra-class discrepancy and meanwhile pulling the centers of all classes away from each other. In an unsupervised scenario, however, we have no labeled data to tell our model how it can exactly distinguish one person from another who has a confusingly similar appearance with him. Therefore, it is acceptable to relax the original idea: we focus on gathering similar person images together, and hence separating relatively dissimilar ones. Such goal can be modelled by minimizing an objective function like that of k-means clustering <cit.>:min_Uℱ_intra= ∑_k=1^K ∑_i ∈𝒞_k‖U^T𝐱_i - 𝐜_k ‖^2,where K is the number of clusters, 𝐜_k denotes the centroid of the k-th cluster and 𝒞_k = { i | U^T𝐱_i ∈ k-th cluster}.However, clustering results may be affected extremely by view-specific bias when applied in cross-view problems. In the context of RE-ID, the feature distortion could be view-sensitive due to view-specific interference like different lighting conditions and occlusions <cit.>. Such interference might be disturbing or even dominating in searching the similar person images across views during clustering procedure. To address this cross-view problem, we learn specific projection for each view rather than a universal one to explicitly model the effect of view-specific interference and to alleviate it. Therefore, the idea can be further formulated by minimizing an objective function below:min_U^1,⋯,U^Vℱ_intra=∑_k=1^K ∑_i ∈𝒞_k‖U^pT𝐱_i^p - 𝐜_k ‖^2s.t.U^pT Σ^pU^p = I (p = 1,⋯,V),where the notation is similar to Eq. (<ref>), with p denotes the view index, Σ^p = X^pX^pT/ N_p + αI and I represents the identity matrix which avoids singularity of the covariance matrix. The transformation U^p that corresponds to each instance 𝐱_i^p is determined by the camera view which 𝐱_i^p comes from. The quasi-orthogonal constraints on U^p ensure that the model will not simply give zero matrices. 
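The following sketch evaluates the asymmetric clustering objective just defined, with each sample projected by the matrix of its own camera view; the variable names and shapes are our own illustrative conventions, not code from the paper.

```python
import numpy as np

def intra_objective(X_views, U_views, labels, centroids):
    """Evaluate sum_k sum_{i in C_k} ||U_p^T x_i^p - c_k||^2, where every
    sample is projected by the matrix of the camera view it comes from.
    X_views: list of (M, N_p) arrays, one per view; U_views: list of (M, T)
    view-specific projections; labels: per-view integer cluster assignments;
    centroids: (T, K) cluster centers in the shared space."""
    total = 0.0
    for X_p, U_p, lab in zip(X_views, U_views, labels):
        Y_p = U_p.T @ X_p                    # project view p into shared space
        total += ((Y_p - centroids[:, lab]) ** 2).sum()
    return total
```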
By combining the asymmetric metric learning, we actually realize an asymmetric metric clustering on RE-ID data across camera views.Intuitively, if we minimize this objective function directly, U^p will largely depend on the data distribution from the p-th view. Now that there is specific bias on each view, any U^p and U^q could be arbitrarily different. This result is very natural, but large inconsistencies among the learned transformations are not what we exactly expect, because the transformations are with respect to person images from different views: they are inherently correlated and homogeneous. More critically, largely different projection basis pairs would fail to capture the discriminative nature of cross-view images, producing an even worse matching result.Hence, to strike a balance between the ability to capture discriminative nature and the capability to alleviate view-specific bias, we embed a cross-view consistency regularization term into our objective function. And then, in consideration of better tractability, we divide the intra-class term by its scale N, so that the regulating parameter would not be sensitive to the number of training samples. Thus, our optimization task becomes min_U^1,⋯,U^Vℱ_obj = 1/N ℱ_intra + λℱ_consistency= 1/N∑_k=1^K∑_i ∈𝒞_k‖U^pT𝐱_i^p - 𝐜_k ‖^2+λ∑_p≠ q‖U^p-U^q‖_F^2s.t. U^pTΣ^pU^p = I (p = 1,⋯,V),where λ is the cross-view regularizer and ‖·‖_F denotes the Frobenius norm of a matrix. We call the above model the Clustering-based Asymmetric MEtric Learning (CAMEL).To illustrate the differences between symmetric and asymmetric metric clustering in structuring data in the RE-ID problem, we further show the data distributions in Figure <ref>. We can observe from Figure <ref> that the view-specific bias is obvious in the original space: triangles in the upper left and circles in the lower right. In the common space learned by symmetric metric clustering, the bias is still obvious. In contrast, in the shared space learned by asymmetric metric clustering, the bias is alleviated and thus the data is better characterized according to the identities of the persons, i.e., samples of one person (one color) gather together into a cluster.§.§ Optimization For convenience, we denote 𝐲_i=U^pT𝐱_i^p. Then we have Y∈ℝ^T × N, where each column 𝐲_i corresponds to the projected new representation of that from X. For optimization, we rewrite our objective function in a more compact form. The first term can be rewritten as follow <cit.>:1/N∑_k=1^K ∑_i ∈𝒞_k‖𝐲_i - 𝐜_k ‖^2=1/N[Tr(Y^TY)-Tr(H^TY^TYH)],whereH = [ 𝐡_1,...,𝐡_K ] ,𝐡_k^T𝐡_l =0 k≠ l 1 k= l 𝐡_k = [ 0,⋯,0,1,⋯,1,0,⋯,0,1,⋯ ] ^T/√(n_k)is an indicator vector with the i-th entry corresponding to the instance 𝐲_i, indicating that 𝐲_i is in the k-th cluster if the corresponding entry does not equal zero. Then we construct X = [ 𝐱^1_1 ⋯ 𝐱^1_N_1 0 ⋯ 0 ⋯ 0; 0 ⋯ 0 𝐱^2_1 ⋯ 𝐱^2_N_2 ⋯ 0; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 ⋯ 0 0 ⋯ 0 ⋯ 𝐱^V_N_V ] U = [ U^1T, ⋯, U^VT ] ^T ,so thatY = U^TX,and thus Eq. (<ref>) becomes1/N∑_k=1^K ∑_i ∈𝒞_k‖𝐲_i - 𝐜_k ‖^2 = 1/NTr(X^TUU^TX) -1/NTr(H^TX^TUU^TXH). As for the second term, we can also rewrite it as follow:λ∑_p≠ q‖U^p-U^q‖_F^2 = λTr(U^TDU),whereD = [ (V-1)I -I -I⋯ -I; -I (V-1)I -I⋯ -I;⋮⋮⋮⋮⋮; -I -I -I⋯ (V-1)I ].Then, it is reasonable to relax the constraintsU^pTΣ^pU^p = I (p = 1,⋯,V)to∑_p=1^V U^pTΣ^pU^p = U^TΣU = VI,where Σ = diag(Σ^1, ⋯, Σ^V) because what we expect is to prevent each U^p from shrinking to a zero matrix. 
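As a sanity check on the trace rewriting above, the following sketch numerically verifies the identity ∑_k ∑_{i∈𝒞_k}‖𝐲_i − 𝐜_k‖² = Tr(Y^TY) − Tr(H^TY^TYH) for a randomly generated Y and a random cluster assignment, with H built exactly as the normalized indicator matrix defined in the text; the data here are synthetic.

```python
import numpy as np

# Numerical check (sketch) of the trace rewriting:
#   sum_k sum_{i in C_k} ||y_i - c_k||^2 = Tr(Y^T Y) - Tr(H^T Y^T Y H),
# with H the normalized cluster-indicator matrix defined in the text.
rng = np.random.default_rng(0)
T_dim, N, K = 8, 50, 4
Y = rng.normal(size=(T_dim, N))
labels = rng.integers(0, K, size=N)

H = np.zeros((N, K))
for k in range(K):
    members = labels == k
    if members.any():
        H[members, k] = 1.0 / np.sqrt(members.sum())   # column h_k

lhs = sum(((Y[:, labels == k]
            - Y[:, labels == k].mean(axis=1, keepdims=True)) ** 2).sum()
          for k in range(K) if (labels == k).any())
rhs = np.trace(Y.T @ Y) - np.trace(H.T @ (Y.T @ Y) @ H)
assert np.isclose(lhs, rhs)
```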
The relaxed version of constraints is able to satisfy our need, and it bypasses trivial computations.By now we can rewrite our optimization task as follow:min_Uℱ_obj = 1/NTr(X^TUU^TX) +λTr(U^TDU) - 1/NTr(H^TX^TUU^TXH)s.t.U^TΣU = VI. It is easy to realize from Eq. (<ref>) that our objective function is highly non-linear and non-convex. Fortunately, in the form of Eq. (<ref>) we can find that once H is fixed, Lagrange's method can be applied to our optimization task. And again from Eq. (<ref>), it is exactly the objective of k-means clustering once U is fixed <cit.>. Thus, we can adopt an alternating algorithm to solve the optimization problem. Fix H and optimize U. Now we see how we optimize U. After fixing H and applying the method of Lagrange multiplier, our optimization task (<ref>) is transformed into an eigen-decomposition problem as follow:G𝐮 = γ𝐮,where γ is the Lagrange multiplier (and also is the eigenvalue here) andG = Σ^-1(λD+1/NXX^T-1/NXHH^TX^T).Then, U can be obtained by solving this eigen-decomposition problem.Fix U and optimize H. As for the optimization of H, we can simply fix U and conduct k-means clustering in the learned space. Each column of H, 𝐡_k, is thus constructed according to the clustering result.Based on the analysis above, we can now propose the main algorithm of CAMEL in Algorithm <ref>. We set maximum iteration to 100. After obtaining U, we decompose it back into {U^1,⋯,U^V}. The algorithm is guaranteed to convergence, as given in the following proposition: In Algorithm <ref>, ℱ_obj is guaranteed to convergence. In each iteration, when U is fixed, if H is the local minimizer, k-means remains H unchanged, otherwise it seeks the local minimizer. When H is fixed, U has a closed-form solution which is the global minimizer. Therefore, the ℱ_obj decreases step by step. As ℱ_obj≥ 0 has a lower bound 0, it is guaranteed to convergence. § EXPERIMENTS§.§ Datasets Since unsupervised models are more meaningful when the scale of problem is larger, our experiments were conducted on relatively big datasets except VIPeR <cit.> which is small but widely used. Various degrees of view-specific bias can be observed in all these datasets (see Figure <ref>).The VIPeR datasetcontains 632 identities, with two images captured from two camera views of each identity.The CUHK01 dataset <cit.> contains 3,884 images of 971 identities captured from two disjoint views. There are two images of every identity from each view.The CUHK03 dataset <cit.> contains 13,164 images of 1,360 pedestrians captured from six surveillance camera views. Besides hand-cropped images, samples detected by a state-of-the-art pedestrian detector are provided.The SYSU dataset <cit.> includes 24,448 RGB images of 502 persons under two surveillance cameras. One camera view mainly captured the frontal or back views of persons, while the other observed mostly the side views.The Market-1501 dataset <cit.> (Market) contains 32,668 images of 1,501 pedestrians, each of which was captured by at most six cameras. All of the images were cropped by a pedestrian detector. There are some bad-detected samples in this datasets as distractors as well.The ExMarket dataset[Demo code for the model and the ExMarket dataset can be found on <https://github.com/KovenYu/CAMEL>.]. In order to evaluate unsupervised RE-ID methods on even larger scale, which is more realistic, we further combined the MARS dataset <cit.> with Market. MARS is a video-based RE-ID dataset which contains 20,715 tracklets of 1,261 pedestrians. 
All the identities from MARS are of a subset of those from Market. We then took 20% frames (each one in every five successive frames) from the tracklets and combined them with Market to obtain an extended version of Market (ExMarket). The imbalance between the numbers of samples from the 1,261 persons and other 240 persons makes this dataset more challenging and realistic. There are 236,696 images in ExMarket in total, and 112,351 images of them are of training set. A brief overview of the dataset scales can be found in Table <ref>. §.§ Settings Experimental protocols: A widely adopted protocol was followed on VIPeR in our experiments <cit.>, i.e., randomly dividing the 632 pairs of images into two halves, one of which was used as training set and the other as testing set. This procedure was repeated 10 times to offer average performance. Only single-shot experiments were conducted.The experimental protocol for CUHK01 was the same as that in <cit.>. We randomly selected 485 persons as training set and the other 486 ones as testing set. The evaluating procedure was repeated 10 times. Both multi-shot and single-shot settings were conducted.The CUHK03 dataset was provided together with its recommended evaluating protocol <cit.>. We followed the provided protocol, where images of 1,160 persons were chosen as training set, images of another 100 persons as validation set and the remainders as testing set. This procedure was repeated 20 times. In our experiments, detected samples were adopted since they are closer to real-world settings. Both multi-shot and single-shot experiments were conducted.As for the SYSU dataset, we randomly picked 251 pedestrians' images as training set and the others as testing set. In the testing stage, we basically followed the protocol as in <cit.>. That is, we randomly chose one and three images of each pedestrian as gallery for single-shot and multi-shot experiments, respectively. We repeated the testing procedure by 10 times.Market is somewhat different from others. The evaluation protocol was also provided along with the data <cit.>. Since the images of one person came from at most six views, single-shot experiments were not suitable. Instead, multi-shot experiments were conducted and both cumulative matching characteristic (CMC) and mean average precision (MAP) were adopted for evaluation <cit.>. The protocol of ExMarket was identical to that of Market since the identities were completely the same as we mentioned above. Data representation: In our experiments we used the deep-learning-based JSTL feature proposed in <cit.>. We implemented it using the 56-layer ResNet <cit.>, which produced 64-D features. The original JSTL was adopted to our implementation to extract features on SYSU, Market and ExMarket. Note that the training set of the original JSTL contained VIPeR, CUHK01 and CUHK03, violating the unsupervised setting. So we trained a new JSTL model without VIPeR in its training set to extract features on VIPeR. The similar procedures were done for CUHK01 and CUHK03.Parameters:We set λ, the cross-view consistency regularizer, to 0.01. We also evaluated the situation when λ goes to infinite, i.e., the symmetric version of our model in Sec. <ref>, to show how important the asymmetric modelling is.Regarding the parameter T which is the feature dimension after the transformation learned by CAMEL, we set T equal to original feature dimension i.e., 64, for simplicity. 
In our experiments, we found that CAMEL can align data distributions across camera views even without performing any further dimension reduction. This may be due to the fact that, unlike conventional subspace learning models, the transformations learned by CAMEL are view-specific for different camera views and always non-orthogonal. Hence, the learned view-specific transformations can already reduce the discrepancy between the data distributions of different camera views.As for K, we found that our model was not sensitive to K when N≫ K and K was not too small (see Sec. <ref>), so we set K = 500. These parameters were fixed for all datasets.§.§ Comparison Unsupervised models are more significant when applied on larger datasets. In order to make comprehensive and fair comparisons, in this section we compare CAMEL with the most comparable unsupervised models on six datasets with their scale orders varying from hundreds to hundreds of thousands. We show the comparative results measured by the rank-1 accuracies of CMC and MAP (%)in Table <ref>. Comparison to Related Unsupervised RE-ID Models. In this subsection we compare CAMEL with the sparse dictionary learning model (denoted as Dic) <cit.>, sparse representation learning model ISR <cit.>, kernel subspace learning model RKSL <cit.> and sparse auto-encoder (SAE) <cit.>. We tried several sets of parameters for them, and report the best ones. We also adopt the Euclidean distance which is adopted in the original JSTL paper <cit.> as a baseline (denoted as JSTL).From Table <ref> we can observe that CAMEL outperforms other models on all the datasets on both settings. In addition, we can further see from Figure <ref> that CAMEL outperforms other models at any rank.One of the main reasons is that the view-specific interference is noticeable in these datasets. For example, we can see in Figure <ref> that on CUHK01, the changes of illumination are extremely severe and even human beings may have difficulties in recognizing the identities in those images across views.This impedes other symmetric models from achieving higher accuracies, because they potentially hold an assumption that the invariant and discriminative information can be retained and exploited through a universal transformation for all views. But CAMEL relaxes this assumption by learning an asymmetric metric and then can outperform other models significantly. In Sec. <ref> we will see the performance of CAMEL would drop much when it degrades to a symmetric model.Comparison to Clustering-based Metric Learning Models. In this subsection we compare CAMEL with a typical model AML <cit.> and a recently proposed model UsNCA <cit.>. We can see from Fig. <ref> and Table <ref> that compared to them, CAMEL achieves noticeable improvements on all the six datasets. One of the major reasons is thatthey do not consider the view-specific bias which can be very disturbing in clustering, making them unsuitable for RE-ID problem. In comparison, CAMEL alleviates such disturbances by asymmetrically modelling. This factor contributes to the much better performance of CAMEL. Comparison to the State-of-the-Art. In the last subsections, we compared with existing unsupervised RE-ID methods using the same features. In this part, we also compare with the results reported in literatures. 
Note that most existing unsupervised RE-ID methods have not been evaluated on large datasets like CUHK03, SYSU, or Market, so Table <ref> only reports the comparative results on VIPeR and CUHK01. We additionally compared existing unsupervised RE-ID models, including the hand-crafted-feature-based SDALF <cit.> and CPS <cit.>, the transfer-learning-based UDML <cit.>, the graph-learning-based model (denoted as GL) <cit.>, and the local-salience-learning-based GTS <cit.> and SDC <cit.>. We can observe from Table <ref> that our model CAMEL outperforms the state-of-the-art by large margins on CUHK01.

Comparison to Supervised Models. Finally, in order to see how well CAMEL can approximate the performance of supervised RE-ID, we additionally compare CAMEL with its supervised version (denoted as CAMEL_s), which is easily derived by substituting true labels for the clustering results, and three standard supervised models, including the widely used KISSME <cit.>, XQDA <cit.>, and the asymmetric distance model CVDCA <cit.>. The results are shown in Table <ref>. We can see that CAMEL_s outperforms CAMEL by various degrees, indicating that label information can further improve CAMEL's performance. Also from Table <ref>, we notice that CAMEL is comparable to other standard supervised models on some datasets like CUHK01, and even outperforms some of them. This is probably because the JSTL model used here had not been fine-tuned on the target datasets: this was for a fair comparison with unsupervised models, which work on completely unlabelled training data. Nevertheless, this suggests that the performance of CAMEL may not be far below that of standard supervised RE-ID models.

§.§ Further Evaluations
The Role of Asymmetric Modeling. Table <ref> shows what happens if CAMEL degrades to a common symmetric model. Apparently, without asymmetrically modelling each camera view, our model worsens considerably, indicating that asymmetric modeling for clustering is rather important for addressing the cross-view matching problem in RE-ID, and in our model in particular.

Sensitivity to the Number of Clustering Centroids. We take the CUHK01, Market and ExMarket datasets as examples of different scales (see Table <ref>) for this evaluation. Table <ref> shows how the performance varies with different numbers of clustering centroids, K. The performance fluctuates only mildly when N ≫ K and K is not too small. Therefore CAMEL is not very sensitive to K, especially when applied to large-scale problems. To further explore the reason behind this, we show in Table <ref> the rate of clusters which contain more than one person, in the initial stage and the convergence stage of Algorithm <ref>. We can see that (1) regardless of the value of K, there is always a number of clusters containing more than one person in both the initial stage and the convergence stage. This indicates that our model works without requiring perfect clustering results. And (2), although the number varies, in the convergence stage it is consistently lower than in the initialization stage. This shows that the clustering results are improved consistently. These two observations suggest that clustering should be a means to learn the asymmetric metric, rather than an ultimate objective.

Adaptation Ability to Different Features. Finally, we show that CAMEL can be effective not only when adopting deep-learning-based JSTL features. We additionally adopted the hand-crafted LOMO feature proposed in <cit.>.
We performed PCA to produce 512-D LOMO features, and the results are shown in Table <ref>. Among all the models, the results of Dic and ISR are the most comparable (Dic and ISR take all the second places). So for clarity, we only compare CAMEL with them and with the L_2 distance as a baseline. From the table we can see that CAMEL outperforms them.

§ CONCLUSION
In this work, we have shown that metric learning can be effective for unsupervised RE-ID by proposing clustering-based asymmetric metric learning called CAMEL. CAMEL learns view-specific projections to deal with view-specific interference, based on existing clustering (e.g., the k-means model demonstrated in this work) on unlabelled RE-ID data, resulting in an asymmetric metric clustering. Extensive experiments show that our model can outperform existing ones in general, especially on large-scale unlabelled RE-ID datasets.

§ ACKNOWLEDGEMENT
This work was supported partially by the National Key Research and Development Program of China (2016YFB1001002), NSFC (61522115, 61472456, 61573387, 61661130157, U1611461), the Royal Society Newton Advanced Fellowship (NA150459), and Guangdong Province Science and Technology Innovation Leading Talents (2016TX03X157).
http://arxiv.org/abs/1708.08062v2
{ "authors": [ "Hong-Xing Yu", "Ancong Wu", "Wei-Shi Zheng" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170827075929", "title": "Cross-view Asymmetric Metric Learning for Unsupervised Person Re-identification" }
http://arxiv.org/abs/1708.08341v3
{ "authors": [ "Andrea Giusti" ], "categories": [ "cond-mat.stat-mech", "35R11, 26A33, 74F05, 35Q79" ], "primary_category": "cond-mat.stat-mech", "published": "20170824203226", "title": "Dispersion relations for the time-fractional Cattaneo-Maxwell heat equation" }
http://arxiv.org/abs/1708.07832v3
{ "authors": [ "Michał Tomza", "Krzysztof Jachymski", "Rene Gerritsma", "Antonio Negretti", "Tommaso Calarco", "Zbigniew Idziaszek", "Paul S. Julienne" ], "categories": [ "physics.atom-ph", "cond-mat.quant-gas", "physics.chem-ph", "quant-ph" ], "primary_category": "physics.atom-ph", "published": "20170825174502", "title": "Cold hybrid ion-atom systems" }
Finding by Counting: A Probabilistic Packet Count Model for Indoor Localization in BLE Environments

Hari Sundaram

University of Illinois at Urbana-Champaign, Urbana, Illinois, [email protected]
University of Illinois at Urbana-Champaign, Urbana, Illinois, [email protected]
IIT Bombay, Mumbai, India, [email protected]
University of Illinois at Urbana-Champaign, Urbana, Illinois, [email protected]
University of Illinois at Urbana-Champaign, Urbana, Illinois, [email protected]
University of Illinois at Urbana-Champaign, Urbana, Illinois, [email protected]

We propose a probabilistic packet reception model for Bluetooth Low Energy (BLE) packets in indoor spaces, and we validate the model by using it for indoor localization. We expect indoor localization to play an important role in indoor public spaces in the future. We model the probability of reception of a packet as a generalized quadratic function of distance, beacon power and advertising frequency. Then, we use a Bayesian formulation to determine the coefficients of the packet loss model using empirical observations from our testbed. We develop a new sequential Monte-Carlo algorithm that uses our packet count model. The algorithm is general enough to accommodate different spatial configurations. Our indoor localization experiments are encouraging: our approach has an average error of ∼1.2m, 53% lower than the baseline range-free Monte-Carlo localization algorithm.

§ INTRODUCTION
In this paper we develop a probabilistic model for Bluetooth Low Energy (BLE) packet reception within an indoor environment. Then, as a test application, we use the packet reception model for indoor localization. We expect indoor localization using BLE to play an important role in future retail experiences, facilitating automated checkouts and targeted advertisements. The importance of developing a packet reception model is two-fold: novel indoor localization techniques, and network simulations. First, techniques for indoor localization fall under two camps: Received Signal Strength (RSS, i.e., energy loss model) fingerprinting, and range-free models that avoid using RSS indicators. We know RSS indicators to be unreliable—they vary with human presence and obstructions, and are affected by multi-path loss. Range-free models, in contrast, assume that a heard beacon is within a known distance threshold. A packet reception model allows us to localize based on packet counts without making assumptions on RSS or distance thresholds. Second, a packet reception model would serve as an alternative to the RSS-based packet models used in network simulators such as NS3.

Our main contributions are a new BLE packet reception model for indoor environments and a sequential Monte-Carlo localization application of the proposed model. We model the probability of reception of a packet as a generalized quadratic function of distance, beacon power and advertising frequency. We obtain extensive empirical data by conducting experiments varying beacon power and frequency in an experimental testbed with stacks that dampen packet reception. Then, we propose a Bayesian formulation to determine the coefficients of the packet loss model using the empirical observations. We develop a new sequential Monte-Carlo algorithm that uses our packet count model.
The algorithm is general enough to accommodate different spatial configurations. Our experiments on indoor localization reveal that our proposed approach works well: it has an average error of ∼1.2m, which is 53% lower than the baseline Monte-Carlo Localization algorithm. Our localization errors within an aisle are even better, at ∼0.4m, with the increased errors arising during transitions between aisles. In the next section, we discuss related work. Then, in <Ref>, we formally define the two problems that we solve. In <Ref>, we introduce solutions to both estimating the packet reception model and indoor localization using packet counts. In <Ref>, we discuss the testbed set-up, collect empirical data and conduct localization experiments. We present our results in <Ref> and conclude in <Ref>.

§ RELATED WORK
Now we discuss prior work related to wireless propagation models and indoor localization. Propagation models deal with the loss in energy of radio waves between sender and receiver. Localization models track mobile nodes in an environment using seed nodes with known locations.

There is prior work on modeling the loss in energy during transmission for wireless signals like Wi-Fi and Bluetooth. The Received Signal Strength (RSS), i.e., the energy of the received signal, varies due to factors like distance, obstructions, walls, and multi-path fading in indoor environments. <cit.> provides a detailed analysis of all these factors. Deterministic models like the Friis propagation model <cit.> and Log Distance Path Loss <cit.> give a fixed RSS value based on distance. Stochastic models like the Jakes model <cit.> and the two-parameter Nakagami distribution <cit.> capture the uncertainty in received RSS values.

A wide range of techniques exist to localize a node within an indoor environment. All these techniques involve installing seed nodes in the environment with known prior locations and then localizing other nodes relative to these seed nodes. These systems vary in the measuring capability of nodes, the nature of the environment (i.e., indoor/outdoor), and the mobility of nodes. Range-based techniques involve the use of specialized and expensive hardware to measure some quantity which is then translated back to distance. GPS uses the Time of Arrival (TOA) technique. <cit.> proposed the use of the Time Difference of Arrival (TDOA) technique. Received-Signal-Strength-based ranging techniques like SpotOn <cit.> are cheaper but inaccurate. RSS values in indoor environments become uncertain due to random factors like multi-path loss, fading and shadowing effects <cit.>. Wi-Fi RSS fingerprinting methods try to mitigate this problem of inaccuracy, but require expensive human labor. <cit.> gives a detailed survey of all such methods.
Next we will formally define our problem and then discuss the entire solution architecture.§ PROBLEM DEFINITIONOur broad goal in this work is two fold—finding a model of packet reception rate in a Bluetooth Low Energy (BLE) Internet of Things (IoT) retail store like environment and then use the model to localize individuals.We assume that we are in a rectangular W × L space comprising stacks.We have k BLE beacons in fixed, known positions in the space. All beacons transmit at the same frequency f and at the same power r. Further, we assume that at any location (x,y), the probability of receiving packets from any beacon is binomially distributed with parameter p. In other words, the probability of receiving m packets when we send N packets is: m ∼B(N, p). §.§ Packet Reception Rate We aim to discover how p, the probability that we would hear a packet from a beacon varies as a function g of distance (d), frequency (f) and Power (r). That is p = g(d, f, r). Additionally g will vary based on number of intermediate stacks between beacon and packet reception location.One can consider our probabilistic packet counting model to be a hybrid of the RSSI model and the models used in range free localization. Energy loss models <cit.> <cit.> are attractive in that they model the signal attenuation in the physical world. Prior work <cit.> also shows that packet RSSI is highly unpredictable in indoor environment and varies with the environment layout. Existing range free models assume a spherical zone of hearing for the packets <cit.> assuming that if we hear a beacon, it must be in this zone. In contrast, we make no assumptions about distance when we hear a beacon.§.§ LocalizationNow, we list our assumptions for the localization problem. Assume that we have an individual moving in our hypothetical retail store, possessing a device that listens to the BLE beacons. This may be a smartphone, and the retail store application running on the smartphone is logging the BLE packets and then sending them to the cloud for analysis. Assume further than we would like to track the individual every δ sec. Finally, we assume stable store layout—beacon and stack locations don't change while the individual is moving.Without loss of generality, assume that the smartphone application listens to the packets creates the following logL = { (b_1,t_1), (b_2,t_2), …,(b_N,t_N) }where b_i refers to the BLE beaconheard at time t_i. The goal is to determine a list of locations XY_δ={x_i, y_i; δ}, at a store determined time resolution δ such that we know the location every δ sec.Having presented the problems for determining packet reception and localization, we now discuss potential solutions.§ SOLUTION ARCHITECTUREIn this section we first show how to determine the probability of receiving a packet as a function of distance, frequency and power. Then, we present a solution to the problem of tracking individuals through the retail location using the packet reception model. Common to both approaches is a Bayesian formulation of the problem. §.§ Estimating the packet reception modelFirst, we solve the problem of determining the free space packet reception model—the case when stacks are present will follow in a straightforward manner.To determine the packet reception model,we assume that we know the ground truth location of any spot where we listen to the beacons. Since we know the ground truth locations of all the k beacons, we can calculate distance from the spot to each of the beacons heard at the spot. 
Assume that there exist N such spots. Thus at any location l_i, i ∈ {1, …, N}, we have a list D_i containing the number of packets of every beacon heard at l_i. That is, D_i = {(b_j, c_j), j ∈ 1, …, k}, where c_j is the count of beacon b_j.

We make a simplifying assumption about g(d, f, r). We assume g to be an exponential function of the variables, so that the log of the probability, log p, is quadratic in the variables. More formally:

log p = b_0 + ∑_i b_i x_i + ∑_{i,j} b_{i,j} x_i x_j,  i, j ∈ {1, 2, 3}

where the x_i refer to the variables d, f, r. Since frequency (f) and power (r) are constant for a specific configuration, <Ref> reduces to a quadratic equation in distance (d). That is,

log p = b_0 + b_1 d + b_2 d^2

The more general formulation of <Ref> essentially states that the coefficients b_0, b_1, b_2 of <Ref> regress in frequency (f) and power (r). Thus the more general form allows us to estimate the packet reception model for a variety of beacon power and beacon frequency configurations.

We can use Maximum Likelihood (ML) estimation via least squares to estimate the coefficients b_i. We can assume that at any one of the N locations, the probability p̄_i of receiving the i-th beacon is:

p̄_i = c_i / (f δ),

where c_i is the number of packets received, f is the number of packets sent per second and δ is the time window of observation. Then we can estimate the b_i from <Ref> through least-squares regression. The major challenge is that for low frequencies (e.g. f = 1Hz) or low power (e.g. -20dBm) we may not receive enough packets for a stable ML estimate of the coefficients. A Bayesian formulation allows us to quantify the uncertainty in the coefficient estimates; when the number of packets received is large, the ML estimates and the Bayesian estimates of the coefficients will agree.

Let θ ≡ {b_i} be the set of coefficients that we plan to estimate. Then the goal is to estimate P(θ | D), where P(θ | D) ∝ P(D | θ) P(θ). D refers to the observed data—the number of packets heard for every beacon, at every location. To set up the Bayesian formulation, let us view packet reception through the lens of a generative process. Assume that we are at a particular spot A, listening to the i-th beacon. Then the number of packets received c_i is drawn from a binomial distribution:

c_i = B(N, p_i),  p_i = g(f, r, d_i,A)

where the probability p_i of receiving a packet from the i-th beacon is a function of frequency, power and the distance between the spot A and the location l_i of the i-th beacon. To formulate the priors P(θ), we assume that each of the coefficients b_i is drawn from an independent and identically distributed Normal distribution. That is,

b_i ∼ 𝒩(μ, σ),

where we set μ = 0 and σ = 10 so that the priors are conservative, allowing for a large range of values. The Bayesian formulation is compactly summarized in <Ref>. We compute the posterior P(θ | D) using a standard Markov Chain Monte Carlo technique.

We can use the same formulation for the free-space case and the case when there are stacks. At each location, we filter the packets based on the beacon, allowing us to separately analyze the different cases, since we know the ground truth locations of the beacons and their distances to the location where we are making the measurement.

§.§ Estimating location sequence
We use a Bayesian formulation for determining the location of a person in the store. We plan to use the layout of the space to impose constraints on the solution. Let us begin with what is observable.
As before, at any location, the observations include the packet counts from each beacon. We do know the ground truth location of each beacon, the frequency (f) of transmission and the power (r). Due to the results of <Ref>, we also know the parameters of the packet reception model.

For any location, we need to estimate hidden parameters. First, since we don't know the location, we don't know the location of any of the beacons relative to the current position. We do not know, when we receive packets from the i-th beacon, if the i-th beacon is in the same aisle, or one or more aisles away. Thus the number of aisles between the current location and any beacon is a latent parameter for that beacon. The speed s at which a person moves through the store is also a latent variable. We can assume an upper bound for the speed.

Now, we describe the movement model. Let us assume as before that we wish to estimate the true (x,y) values at N locations, where N depends on the temporal resolution at which the retail store wishes to track its customers. The basic movement model assumes the following priors:

s_i ∼ U(0, S_max), i ∈ {1, …, N-1}
x_0 ∼ U(0, W),  y_0 ∼ U(0, L),
x_i | x_{i-1} ∼ 𝒩(x_{i-1}, s_{i-1} δ), i ∈ {1, …, N-1}
y_i | y_{i-1} ∼ 𝒩(y_{i-1}, s_{i-1} δ), i ∈ {1, …, N-1}

where the s_i refer to the speeds between locations, with a uniform prior up to some speed S_max; (x_0, y_0) is the initial (x,y) location of the person, and since we know little about it, we assume that it is uniformly distributed over the space. We assume that an intermediate location (x_i, y_i) is Normally distributed around (x_{i-1}, y_{i-1}) with a standard deviation equal to s_{i-1} δ, where s_{i-1} is the speed with which the person left the previous location (x_{i-1}, y_{i-1}) and δ is the time window of observation.

To estimate the location of a beacon relative to the measurement location, we make use of the layout of the space. We arrange our beacons at regularly spaced intervals on stacks. The beacons on the two sides of an aisle form a group. In <Ref>, beacon numbers [1-12], [13-36] and [37-60] form three groups. All beacons in the same group must be an identical number of stacks away from the current location. Thus all beacons in the same group will use the same packet reception model. In our layout, the packet reception model used for a beacon group will depend on the y coordinate of the person. We can model the decision to switch as follows:

τ_i ∼ U(0, L), i ∈ {1, 2}

A_i = 0 if y_i < τ_1;  1 if τ_1 ≤ y_i ≤ τ_2;  2 if y_i > τ_2,

S_i,k = M(A_i, b_k),

where the τ_i are two latent variables with a uniform prior along the y direction; <Ref> gives the estimate of the current aisle A_i, and M is a deterministic mapping to the relative number of stacks S_i,k between the current location i and beacon k. We can do this mapping because we know the store layout. The variable S_i,k helps us determine the appropriate packet reception model.

We estimate the parameters θ ≡ {{x_i, y_i}, {s_i}, τ_i}. The data D collected over all locations include the packet counts {c_k} of each beacon b_k within each time window. Since we estimate S_i,k, the number of stacks between beacon k and the current location i, we use the following relations:

c_i,k = B(M, p_i,k),  M = f δ,
p_i,k = g(f, r, d_i,k; S_i,k),
d_i,k = √((x_i - b_{k,x})^2 + (y_i - b_{k,y})^2),

where the packet count c_i,k of the k-th beacon at location i is Binomially distributed with parameter p_i,k.
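A simplified sequential Monte-Carlo (particle-filter) reading of this generative model is sketched below: particles are propagated with the Gaussian movement prior and re-weighted by the Binomial likelihood of the observed per-beacon packet counts. The reception function g and the stack-counting helper here are placeholders for the fitted models and the layout mapping M, and the numbers are illustrative; the full model is instead estimated jointly over the trajectory with MCMC, as described next.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

def g(d, stacks):
    """Placeholder reception model: log p falls off with distance and is
    damped per intervening stack (stands in for the fitted models)."""
    return np.clip(np.exp(-0.3 * d - 0.5 * stacks), 1e-6, 1 - 1e-6)

def stacks_between(xy, beacon_xy):
    return 0  # placeholder for the layout-based mapping M(A_i, b_k)

def pf_step(particles, weights, counts, beacons, n_sent, s_max, delta):
    # 1. propagate: Gaussian movement prior around the previous locations
    step_sd = rng.uniform(0, s_max, size=len(particles)) * delta
    particles = particles + rng.normal(scale=step_sd[:, None],
                                       size=particles.shape)
    # 2. re-weight: Binomial likelihood of the observed per-beacon counts
    for b, (bx, by) in enumerate(beacons):
        d = np.hypot(particles[:, 0] - bx, particles[:, 1] - by)
        S = np.array([stacks_between(xy, (bx, by)) for xy in particles])
        weights = weights * binom.pmf(counts[b], n_sent, g(d, S))
    weights = weights + 1e-300        # guard against total weight underflow
    weights = weights / weights.sum()
    # 3. resample particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```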
We obtain the parameter p_i,k using the correct packet reception model, by using the estimate of the number of stacks S_i,k between location i and the location of beacon k. The distance between location i and the location of beacon k, denoted (b_{k,x}, b_{k,y}), is the standard Euclidean distance. Our goal is to estimate P(θ | D) ∝ P(D | θ) P(θ). We use a standard MCMC framework to estimate P(θ | D).

What if the store geometry were not simple enough to use the two latent random variables τ_i? We can formulate the number of stacks between the beacon and the location in a more general way using a Dirichlet distribution as a prior:

q_i,k ∼ Dir(α),  S_i,k ∼ Cat(q_i,k)

where we use a symmetric Dirichlet distribution with parameter α = 1. We draw a three-dimensional distribution q_i,k from the Dirichlet, for each location i and for each beacon k, corresponding to the probabilities that there is either no stack, one stack, or two stacks, respectively, between beacon k and location i. We would then use the probabilities q_i,k to draw from a categorical distribution. We did not use this formulation, since in our case we could exploit geometric constraints.

In this section we presented a solution to estimating the packet reception model and then showed how to use that model to locate an individual as she walks in a retail environment. A Bayesian formulation is central to solving both problems. In the next section, we discuss how we gathered empirical data to develop our packet reception model and how we use the developed model to locate the individual.

§ EXPERIMENT DESIGN
In this section we describe the three steps of carrying out the real-world experiments—setting up the devices (<Ref>), the experimental testbed (<Ref>) and data collection (<Ref>).

§.§ Device Set-Up
First we discuss the three device types used in our testbed—Bluvision iBeeks, BluFi, and the TI packet sniffer.

iBeeks send out Bluetooth Low Energy (BLE) packets into the environment and act as seed nodes of location. We chose these particular beacons because of their battery capacity, transmission power range and high advertising frequency. Their batteries last for a long time, ranging from three to nine years. They support a wide range of broadcasting power from -40 dBm to +5 dBm: -40 dBm translates to a 3 meter line-of-sight range, while +5 dBm gives us a range as large as 150 meters. We test the impact of the range of sight on localization accuracy in our experiments. The beacons advertise packets as fast as one per 100 milliseconds. iBeeks are installed at particular locations in the environment and remain stationary throughout the experiment. As their locations are known to us, they act as seed nodes based on which the locations of other nodes are estimated.

BluFi enables mass re-configuration of iBeeks. To test the effects of frequency and power on the packet reception rate, we need to re-configure the beacons at regular intervals. The Bluzone app allows us to talk with a single iBeek at a time. BluFi pushes new configurations to thousands of beacons with one single command from the Bluzone cloud. Thus this device proves to be essential in large-scale BLE beacon deployments.

The Texas Instruments packet sniffer scans BLE packets sent out by iBeeks and also acts as the node for which we want to estimate the location. iBeeks broadcast on three different channels and the packet properties vary a lot based on the channel. The sniffer is a CC2540 dongle developed by Texas Instruments that can capture BLE packets on one advertising channel.
In this section we presented a solution to estimating the packet reception model and then showed how to use that model to locate an individual as she walks through a retail environment. A Bayesian formulation is central to solving both problems. In the next section, we discuss how we gathered empirical data to develop our packet reception model and how we use the developed model to locate the individual.

§ EXPERIMENT DESIGN

In this section we describe the three steps of carrying out the real-world experiments: setting up the devices (<Ref>), the experimental testbed (<Ref>), and data collection (<Ref>).

§.§ Device Set-Up

First we discuss the three device types used in our testbed: Bluvision iBeeks, BluFi, and the TI packet sniffer.

iBeeks send out Bluetooth Low Energy (BLE) packets into the environment and act as location seed nodes. We chose these particular beacons because of their battery capacity, transmission power range, and high advertising frequency. Their batteries last a long time, ranging from three to nine years. They support a wide range of broadcasting powers, from -40 dBm to +5 dBm: -40 dBm translates to a 3-meter line-of-sight range, while +5 dBm gives a range as large as 150 meters. We test the impact of this range on localization accuracy in our experiments. The beacons advertise packets as fast as one per 100 milliseconds. iBeeks are installed at fixed locations in the environment and remain stationary throughout the experiment. Since their locations are known to us, they act as seed nodes based on which the locations of other nodes are estimated.

BluFi enables mass re-configuration of iBeeks. To test the effects of frequency and power on packet reception rate, we need to re-configure the beacons at regular intervals. The Bluzone app allows us to talk to a single iBeek at a time, whereas BluFi pushes new configurations to thousands of beacons with a single command from the Bluzone cloud. This device thus proves essential in large-scale BLE beacon deployments.

The Texas Instruments packet sniffer scans the BLE packets sent out by the iBeeks and also acts as the node whose location we want to estimate. iBeeks broadcast on three different channels, and the packet properties vary considerably with the channel. The sniffer is a CC2540 dongle developed by Texas Instruments that can capture BLE packets on one advertising channel. The captured packets can be displayed in real time by the SmartRF Packet Sniffer software. The sniffer, connected to a Windows laptop, is kept at fixed locations during the training phase to collect the beacon packet trace; we walk around with the sniffer during the test phase to collect movement traces.

§.§ Environment Set-Up

We now report on the two environments that constitute our testbed: the Undergraduate Library (UGL) and the Grainger Engineering Library at the University of Illinois at Urbana-Champaign. Both environments are subareas of a library floor, with bookshelves segregating the floor into aisles and corridors. We chose to experiment in library spaces since we did not have ready access to retail locations; we hope to perform future experiments in actual retail stores. The floor plan resembles a retail store in that it contains stacks of items. The two environments differ in the presence of walls and in the kinds of obstructing materials.

We carry out the training phase of the experiment at the UGL. This phase involves collecting packet trace data at different locations; we estimate the packet reception model parameters using the empirical data collected here. Aisles between shelves provide free space and are 1.22 meters wide. We use two bookshelves, each 0.64 meters wide and 17 meters long. On each aisle, we place two rows of 16 beacons on the two shelves facing the aisle. The inter-beacon distance within the same row is 1 meter, while the inter-beacon distance for beacons on the same shelf but on different aisles is 0.64 meters, i.e., the thickness of the bookshelf. The shelves are made of wood. We collect the packet traces in the aisles.

The testing phase takes place at the Grainger Library. This phase involves using the packet reception model to localize a moving person in the space. The Grainger environment differs from the training location in three aspects. First, the bookshelves are steel, as opposed to the wooden shelves in the UGL. Second, there is open space on either side of the boundary shelves, as opposed to the more closed layout with walls on either side in the UGL; we expect the effects of multi-path fading to be different. Third, this region has high foot traffic, in contrast to the training location where foot traffic was low. This will help us study the impact of dynamic human presence on localization.

The testing location also differs in the number of stacks and the length and width of the aisles. Each stack is 11 meters long and 0.5 meters wide, and the environment comprises three such stacks. Aisles are 0.7 meters wide. We place two rows of 12 beacons on each stack. The inter-beacon distance within the same row is 0.91 meters, while the inter-beacon distance for two devices placed opposite each other on the same shelf, but facing two different aisles, is 0.43 meters.

§.§ Data Collection

We collect two types of data at different power and frequency settings: the beacon packet traces required for training the packet reception model, and movement traces to test the utility of the packet reception model in localization.

We collect beacon packet traces during the training phase while standing at fixed spots in the layout. Since the distance calculations have to be exact, we do not introduce mobility in this step. The broad steps for this phase are the following.

* Placing the beacons on the shelves at regular intervals.
* Using BluFi to re-configure the beacons to the desired parameter settings (power, advertising frequency).
* Collecting the packet trace for the current beacon configuration at three fixed locations per aisle.
Two locations are chosen near the two ends of each aisle and one in the middle.
* Repeating Steps 2 and 3 until all the desired parameter settings are covered.

We collect the movement trace in the testing phase by carrying out a modified random-waypoint mobility model. This trace data contains two parts: the packet trace heard during movement and the actual ground-truth locations. Obtaining actual locations while moving is a challenge. We address it by carrying out a random-waypoint-like movement model in the real world with one modification: we fix all destinations in advance while still introducing randomness into the movement. The stop locations marked in <Ref> act as destinations. We start moving in one end aisle and finish in the other. <Ref> shows the exact movement sequence at the testing location, starting from stop location 1 and ending at 9. As in a waypoint model, the speed of movement remains random, since an actual person performs the movement. The pause time after reaching each destination is also chosen randomly between 8 and 10 seconds. We collect a movement trace for every beacon parameter setting; after each round of movement we use BluFi to re-configure all the beacons.

§ RESULTS

In this section we estimate the packet reception model parameters (<Ref>) and evaluate localization using the packet reception model (<Ref>).

§.§ Inferring the Noise Model

The variables affecting the packet reception rate are distance, frequency, and beacon power. We measure distance d in meters and frequency f in Hertz (Hz); an advertising frequency of 1 Hz corresponds to a time interval of 1 second between packets. We express beacon power in dBm. Since dBm is a relative figure, we use -12 dBm as the reference when computing the parameters of our model. Our reference power of -12 dBm translates to a 10–12 m beacon hearing range.

We collect data at three values each of the device parameters frequency and power. We use frequency values of 1 Hz, 2 Hz and 10 Hz. The high frequency of 10 Hz lets us check the effect of a high packet emission rate on the noise, or confusion, in the medium; such noise can in turn lower the reception rate. We set the beacon powers to -20 dBm, -15 dBm and -12 dBm. At -12 dBm we get a large range of 10–11 meters, which almost covers our entire experiment layout, whereas -20 dBm covers a much smaller range of 3–4 meters. We carried out experiments and collected data at all nine possible combinations of these two parameters. We estimate the posterior P(θ | D) using PyMC3, a standard MCMC package <cit.>. We estimate θ (the parameter values b_i of <Ref>) using the entire dataset, which includes all nine combinations of frequency and power. <Ref> shows the resulting distributions; the plot shows that the distributions of all the coefficients have converged. Taking the mean estimates of the posterior distributions of each parameter, the model for log p_0, the log of the free-space packet reception probability, is:

log p_0 = -0.101 - 0.012 f + 0.056 r - 0.272 d + 0.189 rd.

The means of the coefficients of d², r², f² and f·d are close to zero and are ignored. <Ref> shows the free-space model fit to the raw data for all nine combinations of power and frequency: the dots show the raw packet counts received at varying distances, while the curves represent the Bayesian fit.
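As a rough illustration of how such a fit can be set up (using the PyMC3 3.x API), the sketch below estimates the five free-space coefficients from synthetic packet counts; the data-generation constants are placeholders, and the real fit of course uses the collected traces.

```python
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(2)
f_obs = rng.choice([1.0, 2.0, 10.0], size=200)       # advertising frequency (Hz)
r_obs = rng.choice([-8.0, -3.0, 0.0], size=200)      # power relative to -12 dBm
d_obs = rng.uniform(0.5, 10.0, size=200)             # distance (m)
p_true = np.minimum(1.0, np.exp(-0.1 - 0.012*f_obs + 0.056*r_obs
                                - 0.272*d_obs + 0.189*r_obs*d_obs))
M = (f_obs * 10).astype(int)                         # packets per 10 s window
c_obs = rng.binomial(M, p_true)                      # synthetic packet counts

with pm.Model():
    b = pm.Normal("b", mu=0.0, sigma=5.0, shape=5)   # coefficients b0..b4
    logp = b[0] + b[1]*f_obs + b[2]*r_obs + b[3]*d_obs + b[4]*r_obs*d_obs
    p = pm.math.exp(pm.math.clip(logp, -20.0, 0.0))  # keep p within (0, 1]
    pm.Binomial("c", n=M, p=p, observed=c_obs)
    trace = pm.sample(1000, tune=1000, chains=2, cores=1,
                      random_seed=2, return_inferencedata=False)

print(trace["b"].mean(axis=0))                       # posterior means of b0..b4
```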
We can see from the figure that increasing power increases the packet reception probability, and that decreasing frequency increases packet reception due to decreasing packet interference.

Next we infer stack models, where the estimation process remains the same but we filter the data points so that an obstruction is present between the device and the receiver. Figure <ref> shows a comparison of the different stack models at a fixed frequency of 10 Hz. The one-stack and two-stack models obtained from the estimation are as follows:

log p_1 = -0.236 - 0.026 f + 0.303 r - 0.292 d + 0.018 rd,
log p_2 = -0.305 - 0.033 f + 0.604 r - 0.302 d + 0.017 rd,

where p_i, i ∈ {1, 2}, is the reception probability in the case of one stack and two stacks, respectively, and f, r, d represent frequency, power and distance, respectively. With an increasing number of stacks, the constant factor in packet reception becomes lower (i.e., parameter b_0 becomes more negative), meaning that in general we have a lower chance of receiving a packet. Similarly, the decay rates due to frequency, b_1, and distance, b_3, increase as well. The most significant change occurs in the impact of beacon power on packet reception: the coefficient b_2 of r jumps from 0.056 in the no-stack case to 0.303 in the one-stack case and 0.604 in the two-stack case. This is also evident in <Ref>, where the gradual increase of power from left to right has more impact in the two-stack and one-stack cases than in the no-stack case. We can explain this result by the fact that the stacks dampen the power of the transmitted packets; larger power helps in crossing this barrier, leading to higher reception, so beacon power plays a more significant role in reception across stacks. Due to the increased role of beacon power in overcoming the stacks, it has less impact on compensating for distance, which is evident from the decreasing value of b_4.

Thus, packet reception varies with distance, frequency, power and the presence of obstructions. It decreases with increasing distance, frequency or number of obstructions. Power plays a vital role in compensating for the effects of both distance and obstructions: it helps in increasing reception across obstructions and over larger distances in free space.

§.§ Localization Accuracy

In this section we present the localization accuracy obtained using our packet reception probability model together with MCMC localization. We term our localization framework Packet Count Monte-Carlo Localization, or PC-MCL for short. We compare against a standard range-free localization algorithm, MCL <cit.>, to see the effect of the packet reception model on performance. While more recent work <cit.>, <cit.> improves upon the standard MCL accuracy, it still assumes a hard-threshold model for hearing the beacons (i.e., if a beacon is heard, then it must be nearer than some threshold distance d_0).

We estimate the location in discrete time intervals of size δ and then calculate the localization error over each interval. We segregate the movement trace of the person into time windows, each of duration δ seconds; in our case, we choose δ = 10 s. The localization error in each interval is the Euclidean distance between the predicted and ground-truth locations. <Ref> shows the average error for different device settings. Packet-count-based MCL gives higher accuracy than baseline MCL: our system can localize within a range of 1–2 m, while baseline MCL always has an error above 3 m. Note that the errors are lowest for a device power of -15 dBm.
This is because at -20 dBm the beacons have low coverage, and an individual moving in the space may not receive a sufficient number of packets to be localized with low error. In contrast, -12 dBm gives high coverage, and we hear all the beacons with increased reception rate throughout our layout, which makes it slightly harder to distinguish through which aisle the person is moving.

The average localization error within an aisle and the error when the person transitions between aisles through the corridor differ. <Ref> shows the time-series variation of the error of our proposed PC-MCL and the baseline MCL algorithm for a device setting of -15 dBm and 1 Hz. We see that the errors increase during time intervals 4-5-6 and 10-11-12 for both algorithms. This is because during a transition we do not have the right packet reception model to use; the average error of both algorithms therefore increases due to errors during the transitions. Indeed, the average error within an aisle drops to as low as 0.4 m with our PC-MCL algorithm. Thus, if we can eliminate the high errors during transitions, our algorithm can achieve high localization accuracy in the range of 0.4–0.5 m. One way to achieve this is to learn a packet count model for the corridor where the transitions occur, in addition to the packet count model for the aisles.

§ CONCLUSION

In this paper, we developed a probabilistic model for BLE packet reception in an indoor environment with stacks and used this model to localize moving individuals. We observed that the packet counts for a beacon are Binomially distributed with a parameter p, and modeled p as a function of advertising frequency, beacon power and distance to the beacon. We estimated the coefficients using a Bayesian MCMC technique. We then developed a Monte-Carlo localization technique using the packet reception model, exploiting the environment geometry in our solution. Our proposed framework performs well: we achieve an average reduction of 53% in localization error compared to a baseline Monte-Carlo localization algorithm.

Our proposed framework can be improved further. We noticed that while our average localization error was around ∼1.2 m, the errors within an aisle were ∼0.4 m; the increase in the average localization error is due to poor localization during transitions. This leads us to conclude that a “corridor” packet model used in conjunction with the proposed “aisle” packet model will reduce the average localization error.
http://arxiv.org/abs/1708.08144v1
{ "authors": [ "Subham De", "Shreyans Chowdhary", "Aniket Shirke", "Yat Long Lo", "Robin Kravets", "Hari Sundaram" ], "categories": [ "cs.NI" ], "primary_category": "cs.NI", "published": "20170827215005", "title": "Finding by Counting: A Probabilistic Packet Count Model for Indoor Localization in BLE Environments" }
http://arxiv.org/abs/1708.07626v1
{ "authors": [ "Y. Shi", "H. D. Tuan", "A. V. Savkin", "T. Q. Duong", "H. V. Poor" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170825064417", "title": "Model Predictive Control for Smart Grids with Multiple Electric-Vehicle Charging Stations" }
[email protected]
School of Physics, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
Joint Quantum Institute and Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD 20742, USA
School of Physics, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
Schools of Chemistry and Biochemistry and Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA

In an ion trap quantum computer, collective motional modes are used to entangle two or more qubits in order to execute multi-qubit logical gates. Any residual entanglement between the internal and motional states of the ions results in loss of fidelity, especially when there are many spectator ions in the crystal. We propose using a frequency-modulated (FM) driving force to minimize such errors. In simulation, we obtained an optimized FM two-qubit gate that can suppress errors to less than 0.01% and is robust against frequency drifts over ±1 kHz. Experimentally, we have obtained a two-qubit gate fidelity of 98.3(4)%, a state-of-the-art result for two-qubit gates with 5 ions.

Robust two-qubit gates in a linear ion crystal using a frequency-modulated driving force
Kenneth R. Brown
December 30, 2023

Ion traps are a leading candidate for the realization of a quantum computer. Magnetically insensitive qubit energy splittings, long coherence times, and high-fidelity state initialization and detection <cit.> prove to be significant advantages for trapped-ion qubits. Individual qubit addressing and single-qubit gates with error rates on the order of 10^-5 per gate have been achieved <cit.>. Multiple qubits can be entangled through state-dependent forces driven by external fields <cit.>, and for exactly two ions, entangling gate fidelities routinely exceed 99% and in some cases 99.9% <cit.>. With increasing ion number, however, the motional modes bunch in frequency, which means that exciting only a single motional mode becomes prohibitively slow. Alternatively, the state-dependent driving forces can couple to all modes of motion. A number of schemes have been proposed for disentangling the internal qubit states from the motional states of all modes by introducing variations to the driving force during the gate. One way to achieve this goal is amplitude modulation (AM) of the driving field <cit.>. Several experiments have adopted this method and have achieved a 2 to 5% error <cit.>. Discrete phase modulation (PM) has also been proposed for the same purpose, but the number of pulses in the sequence increases exponentially with the number of ions <cit.>. Moreover, discrete changes in laser amplitude and phase are hard to implement physically, especially for fast gates.

We propose a novel decoupling method through continuous frequency modulation (FM), theoretically equivalent to continuous PM, which involves only small and smooth oscillations of the detuning of the applied field. First, we explain the coherent displacement of the ion chain's motional modes during the Mølmer-Sørensen (MS) gate. Then, we describe how the residual displacement of the ions can be minimized in a way that is robust to small changes in trap frequency. Next, we experimentally demonstrate this gate in a chain of 5 ^171Yb+ ions.
Finally, we discuss extensions of the method to larger ion chains, with 17 ions as an example.

To entangle two qubits with the MS gate, we apply a state-dependent driving force near the sideband frequencies. As a result, each motional mode experiences a coherent displacement characterized by the operator <cit.>:

D̂(α̂_k) = exp(α̂_k a_k^† − α̂_k^† a_k),
α̂_k(t) = Ω/2 (η_i,k σ_ϕ^i + η_j,k σ_ϕ^j) ∫_0^t e^{iθ_k(t')} dt',

where Ω is the carrier coupling strength, η_i,k and η_j,k are the Lamb-Dicke parameters of ions i and j with respect to mode k, σ_ϕ^i and σ_ϕ^j are bit-flip Pauli operators for the addressed ions, and θ_k(t) = ∫_0^t δ_k(t') dt' and δ_k(t) are the phase and detuning of the driving force relative to mode k. If the qubits are in the +1 eigenstate of both σ_ϕ^i and σ_ϕ^j, the displacement is:

α_k(t) = Ω/2 (η_i,k + η_j,k) ∫_0^t e^{iθ_k(t')} dt'.

We may visualize the trajectory of α_k(t) over time by plotting it in the complex plane. This is the phase space trajectory (PST) of the motional mode k. For a total gate time τ, α_k(0) = 0 and α_k(τ) are the beginning and end points of the PST.

Due to the state-dependent nature of α̂_k(t), different eigenstates of σ_ϕ^i and σ_ϕ^j follow different PSTs. If any of the α_k(τ) is non-zero, there is residual entanglement between the internal and motional state spaces, which leads to a mixed internal state and lowers the overall gate fidelity (F = |⟨ψ_final|ψ_ideal⟩|²). Given that |α_k| ≪ 1, the resulting gate error may be estimated as:

ε ≡ 1 − F ≈ ∑_{k=1}^N |α_k(τ)|².

Minimizing |α_k| is therefore the most straightforward criterion for an optimized gate. However, the gate is then sensitive to small drifts in the sideband frequencies (δ_k → δ_k + δ_1 with δ_1 ≪ 1/τ), an imperfection we often observe in experiments. The frequency dependence of α_k(τ) can be canceled to first order by setting the time-averaged position of α_k(t) to zero:

α_k,avg ∝ ∫_0^τ ∫_0^t e^{iθ_k(t')} dt' dt = 0.

It turns out that if we only consider symmetric pulses (δ_k(τ − t) = δ_k(t)), minimizing α_k,avg also minimizes α_k(τ).

[Figure omitted: panels (a) and (b).]

In our scheme, we modulate the driving frequency during the gate to minimize the gate error. The trajectory α_k(t) moves with constant speed but varying angular rate δ_k(t); FM therefore allows us to control the curvature and thus the shapes and end points of the PSTs. We let the frequency assume a symmetric, oscillatory pattern (see example in Fig. 1). The vertices (local maxima and minima) of the oscillations are set to be evenly spaced in time and are the only variable control parameters in our optimization. The vertices are connected with sinusoidal functions, which leads to a smooth and continuous frequency profile. The function to be minimized is |α_k,avg|² for robust pulses and |α_k|² for non-robust ones. The number of vertices used is increased until we successfully converge to a solution with errors much lower than 0.01%. Detailed derivations for equations (3) and (4), as well as the optimization process, are provided in the Supplemental Material.
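To illustrate how a candidate FM waveform translates into a PST and into the two optimization targets |α_k(τ)| and |α_k,avg|, the following single-mode sketch builds a piecewise-cosine detuning from illustrative (not optimized) vertex values and integrates it numerically.

```python
import numpy as np

tau = 90e-6                                  # gate time (s)
t = np.linspace(0.0, tau, 4001)
dt = t[1] - t[0]

# Illustrative detuning vertices (rad/s), connected by cosine interpolation.
verts = 2*np.pi*np.array([60e3, 20e3, 60e3, 20e3, 60e3])
seg = np.linspace(0, len(verts) - 1, t.size)
i0 = np.minimum(seg.astype(int), len(verts) - 2)
frac = seg - i0
delta = verts[i0] + (verts[i0+1] - verts[i0]) * (1 - np.cos(np.pi*frac)) / 2

# Accumulated phase θ(t) and trajectory α(t) ∝ ∫ exp(iθ) dt'.
theta = np.concatenate([[0.0], np.cumsum(0.5*(delta[1:] + delta[:-1])*dt)])
alpha = np.cumsum(np.exp(1j*theta)) * dt     # trajectory up to the Ωη/2 prefactor

print("end point     |alpha(tau)| ~", abs(alpha[-1]))
print("time average  |alpha_avg|  ~", abs(alpha.mean()))
```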
Both robust and non-robust versions of the gate are tested on our 5-ion quantum computer. In our setup, 5 ^171Yb+ ions are held in an rf Paul trap with a radial trap frequency of 3.045 MHz and an average ion separation of about 5 μm. Our qubit is defined by the ground hyperfine states ^2S_1/2 |F=0⟩ and ^2S_1/2 |F=1⟩, with an energy splitting of 2π×12.642821 GHz <cit.>. Initially, all ions are cooled close to the motional ground state (≈ 0.1 phonons) and then optically pumped to the |0⟩ state. Quantum gates are implemented using a beatnote generated by counter-propagating Raman laser beams that are capable of addressing any individual qubit <cit.>.

The 5 transverse motional sidebands are experimentally determined and used to find the optimal FM pulses for the 2-qubit gate. We increase the number of oscillations (degrees of freedom) in the optimization until we find a pulse with low errors. At a fixed gate time of 90 μs, the optimized robust pulse consists of 13 oscillations, whereas the non-robust version has only 9 (Fig. 1). The driving frequency crosses the sidebands multiple times, in contrast with other implementations of MS gates, which avoid sideband resonance.

PSTs are plotted for no frequency error and for a 1 kHz frequency drift, for both robust and non-robust pulses, in Fig. 2. With the drift, the end points of the robust trajectories (circles) stick to the origin, whereas those of the non-robust ones (diamonds) deviate from the starting point, causing an estimated error of about 0.5%. This demonstrates the importance of the robustness criterion. We present the results of entangling two neighboring ions on one edge of the ion chain in the robust case. The output population and parity are measured and shown in Figs. 3(a) and (b), giving a SPAM-corrected fidelity of 98.3(4)%. This is among the highest fidelities achieved for multi-qubit gates in the presence of spectator ions <cit.>. Using the robust gate, we also successfully perform a CNOT gate with 98.6(7)% fidelity and generate a 3-qubit GHZ state with 92.6(3)% fidelity; these results are presented in the Supplemental Material.

In order to lower the overall laser intensity Ω, each 90 μs pulse is performed twice for each gate, with a combined gate time of 180 μs. The required Ω is 2π×600 kHz in carrier Rabi frequency, which is much larger than the 2π×151 kHz expected from simulation. The discrepancy is most likely due to an overestimate of the Lamb-Dicke parameters in our simulation. The high power used worsens other error sources such as Raman scattering, off-resonant excitation, and crosstalk with other qubits <cit.>, which may contribute to the 1% error level observed.

The theoretically estimated gate error is plotted as a function of frequency drift in Fig. 4(a) to compare the robust pulse with the non-robust one. A typical error threshold for high-fidelity gates is 0.01%. The robust pulse can tolerate frequency errors of up to ±1.5 kHz, whereas the non-robust pulse tolerates less than ±0.1 kHz. The non-robust pulse has a quadratic dependence on the drift, whereas the robust version has a quartic dependence. This is expected, since the error is proportional to the displacement squared, and the first-order dependence of the displacement on the drift is canceled out in the robust case.
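Curves such as those in Fig. 4(a) can be generated by scanning the drift in the error estimate above; the sketch below does this for a single mode, using a constant-detuning closed loop as a stand-in pulse, so the printed numbers are illustrative only.

```python
import numpy as np

tau, n = 90e-6, 4001
t = np.linspace(0, tau, n)
dt = t[1] - t[0]
coupling = 2*np.pi*10e3                 # overall sideband coupling η̃·Ω̃ (rad/s)

def gate_error(delta, drift):
    # ε ≈ (η̃Ω̃)² |∫ exp(i[θ(t) + δ₁ t]) dt|², evaluated for a single mode.
    theta = np.concatenate([[0.0], np.cumsum(0.5*(delta[1:]+delta[:-1])*dt)])
    integral = np.sum(np.exp(1j*(theta + drift*t))) * dt
    return coupling**2 * abs(integral)**2

# Stand-in pulse: constant detuning closing one loop, so α(τ)=0 at zero drift.
delta = np.full(n, 2*np.pi/tau)
for drift in 2*np.pi*np.array([0.0, 100.0, 300.0, 1000.0]):   # δ₁ (rad/s)
    print(f"drift {drift/2/np.pi:6.0f} Hz -> error {gate_error(delta, drift):.2e}")
```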
To determine the impact of sideband drifts, we experimentally run the two gates over a range of symmetric detuning offsets (Fig. 4(b)). The robust version has an even-parity population higher than 90% for frequency offsets of up to ±5 kHz, whereas the non-robust gate has significantly lower fidelity and tolerance towards frequency errors (within ±1 kHz), confirming that the robust method improves fidelity significantly by canceling errors due to frequency drifts.

[Figures omitted: two figures with panels (a) and (b).]

To test the scalability of our method, we run a similar optimization for 17 ions, motivated by the 17-qubit surface code proposed for quantum error correction <cit.>. The sideband frequencies are calculated from a simulated anharmonic ion trap with an average ion separation of about 3.5 μm. Such a high ion density may be challenging to realize with current technology, but it does not pose a fundamental physical limit to experiments. The robust FM pulse obtained consists of 47 oscillations within a gate time of 250 μs (Fig. 5). The gate can tolerate a frequency drift of 500 Hz at an error threshold of 0.01%; apparently, the gate is more sensitive to frequency errors due to the increased number of motional modes and the longer gate time. The power Ω required for the two-qubit gate ranges from 2π×115 kHz for neighboring ions to 2π×249 kHz for the furthest separated ions (≈1:2 ratio between lowest and highest). This is an encouraging result: previous simulation results indicate that two-qubit gate time and power increase very quickly with the distance between the ions, but by using a flexible and well-designed optimization program, we have found an FM pulse that overcomes this difficulty.

We have shown that we can perform high-fidelity two-qubit gates in a 5-ion trap using frequency modulation. In theory, the optimized robust FM pulse can suppress gate errors to below 0.01% for frequency offsets of up to ±1.5 kHz for 5 ^171Yb^+ ions. The gate is used to maximally entangle two ions in experiment and has a fidelity of 98.3(4)%. We speculate that in the near future, we will attain the over 99.9% fidelity previously achieved with 2-ion chains <cit.>.

We would like to thank Todd Green, Luming Duan, and Gang Shu for useful discussions. This work was supported by the Office of the Director of National Intelligence - Intelligence Advanced Research Projects Activity through ARO contract W911NF-10-1-0231 and the ARO MURI on Modular Quantum Systems.

[yb1] S. Olmschenk, K. C. Younge, D. L. Moehring, D. N. Matsukevich, P. Maunz, and C. Monroe, Phys. Rev. A 76, 052314 (2007).
[yb2] R. Noek, G. Vrijsen, D. Gaultney, E. Mount, T. Kim, P. Maunz, and J. Kim, Opt. Lett. 38, 4735 (2013).
[single1] K. R. Brown, A. C. Wilson, Y. Colombe, C. Ospelkaus, A. M. Meier, E. Knill, D. Leibfried, and D. J. Wineland, Phys. Rev. A 84, 030303 (2011).
[single2] T. P. Harty, D. T. C. Allcock, C. J. Ballance, L. Guidoni, H. A. Janacek, N. M. Linke, D. N. Stacey, and D. M. Lucas, Phys. Rev. Lett. 113, 220501 (2014).
[microwave7] D. P. L. Aude Craik, N. M. Linke, M. A. Sepiol, T. P. Harty, J. F. Goodwin, C. J. Ballance, D. N. Stacey, A. M. Steane, D. M. Lucas, and D. T. C. Allcock, Phys. Rev. A 95, 022337 (2017).
[Molmer] K. Mølmer and A. Sørensen, Phys. Rev. Lett. 82, 1835 (1999).
[Sorensen] A. Sørensen and K. Mølmer, Phys. Rev. Lett. 82, 1971 (1999).
[Milburn] G. Milburn, S. Schneider, and D. F. V. James, Fortschritte der Physik 48, 801 (2000).
[Solano] E. Solano, R. L. de Matos Filho, and N. Zagury, Phys. Rev. A 59, R2539 (1999).
[Lucas1] V. M. Schäfer, C. J. Ballance, K. Thirumalai, L. J. Stephenson, T. G. Ballance, A. M. Steane, and D. M. Lucas, arXiv:1709.06952 (2017).
[Lucas2] C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, Phys. Rev. Lett. 117, 060504 (2016).
[HF_Wineland] J. P. Gaebler, T. R. Tan, Y. Lin, Y. Wan, R. Bowler, A. C. Keith, S. Glancy, K. Coakley, E. Knill, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 117, 060505 (2016).
[microwave1] C. Ospelkaus, U. Warring, Y. Colombe, K. R. Brown, J. M. Amini, D. Leibfried, and D. J. Wineland, Nature 476, 181 (2011).
[microwave6] T. P. Harty, M. A. Sepiol, D. T. C. Allcock, C. J. Ballance, J. E. Tarlton, and D. M. Lucas, Phys. Rev. Lett. 117, 140501 (2016).
[14-qubit] T. Monz, P. Schindler, J. T. Barreiro, M. Chwalla, D. Nigg, W. A. Coish, M. Harlander, W. Hänsel, M. Hennrich, and R. Blatt, Phys. Rev. Lett. 106, 130506 (2011).
[AM1] S.-L. Zhu, C. Monroe, and L.-M. Duan, Europhys. Lett. 73, 485 (2006).
[Roos] C. F. Roos, New Journal of Physics 10, 013002 (2008).
[small_computer] S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, Nature 536, 63 (2016).
[spin-spin_coupling] K. Kim, M.-S. Chang, R. Islam, S. Korenblit, L.-M. Duan, and C. Monroe, Phys. Rev. Lett. 103, 120502 (2009).
[optimal_control] T. Choi, S. Debnath, T. A. Manning, C. Figgatt, Z.-X. Gong, L.-M. Duan, and C. Monroe, Phys. Rev. Lett. 112, 190502 (2014).
[Phase_decoupling] T. J. Green and M. J. Biercuk, Phys. Rev. Lett. 114, 120502 (2015).
[qec_new] C. J. Trout, M. Li, M. Gutierrez, Y. Wu, S.-T. Wang, L. Duan, and K. R. Brown, arXiv:1710.01378 (2017).
[qec1] Y. Tomita and K. M. Svore, Phys. Rev. A 90, 062320 (2014).
[qec2] C. Horsman, A. G. Fowler, S. Devitt, and R. V. Meter, New Journal of Physics 14, 123011 (2012).
[qec3] A. M. Stephens, Phys. Rev. A 89, 022321 (2014).

§ SUPPLEMENTAL MATERIAL

§ ADDITIONAL EXPERIMENTAL RESULTS

[Figures omitted: two figures with panels (a) and (b).]

We experimentally perform a CNOT gate with a fidelity of 98.6(7)% (Fig. 6), using one robust FM two-qubit entangling gate and several single-qubit gates (Fig. 8(a)). We also successfully create a 3-qubit GHZ state with a fidelity of 92.6(3)% (Fig. 7), using two robust two-qubit gates and several single-qubit gates (Fig. 8(b)).
These results give further proof that the FM two-qubit gate is a working tool for quantum logic operations.

§ MØLMER-SØRENSEN GATE FOR VARYING DETUNING

This section reviews the physics of a standard Mølmer-Sørensen gate. Note that the most important generalization made in this paper is the time dependence of the detuning. The laser phase must be kept continuous, which should be easier to achieve in experiments than discrete phase jumps.

Suppose the driving field consists of two counter-propagating laser beams with the same intensity and opposite detunings, applied to any two ions in a linear N-ion crystal. We assume that the beams are perpendicular to the ion chain axis, so that only the transverse motional modes are excited. The ion-field interaction can be written as <cit.>:

Ĥ_MS = Ω/2 ∑_{k=1}^N S^k_ϕ,γ a_k^† e^{iθ_k(t)} + S^k_ϕ,γ^† a_k e^{−iθ_k(t)},

where θ_k is the integrated phase of the detuning between the driving force and the k-th sideband, i.e., θ_k(t) = ∫_0^t δ_k(t') dt', and Ω is the effective Rabi frequency for the carrier transition at a particular laser intensity. Here S^k_ϕ,γ = η_i,k σ_ϕ^i + η_j,k e^{iγ} σ_ϕ^j, where σ_ϕ = σ_x cos ϕ + σ_y sin ϕ is a general spin-flip operator about an axis in the x-y plane, ϕ is half the relative phase between the two sidebands, and γ is the relative phase between the lasers applied to the two ions. η_j,k is the Lamb-Dicke parameter for the j-th ion and the k-th motional mode, given by Δk √(ħ/(2mω_k)) u_jk, where Δk = 4π/λ is the wavenumber of the two counter-propagating Raman lasers (λ = 355 nm), and u_jk is the unitary matrix that maps ion coordinates to the resonant mode coordinates. Note that if the lasers are at an angle to the axis of motion, the parameter is reduced by the cosine of that angle. The expression is valid if the Lamb-Dicke approximation holds (η_j,k √(n+1/2) ≪ 1, n = √(⟨a^† a⟩)) and the direct carrier transition is small (Ω is much smaller than the detuning from the carrier transition).

The Hamiltonian consists of a sum of products of internal and motional operators, and thus represents a state-dependent force acting on the ion chain as a whole. To solve the time-dependent Schrödinger equation, we apply the Magnus expansion to compute the argument of the effective propagator <cit.>:

|ψ(t)⟩ = D̂({α̂_k}) Ê(β_ij) |ψ(0)⟩,
D̂({α̂_k}) = exp( ∑_{k=1}^N (α̂_k a_k^† − α̂_k^† a_k) ),

where

α̂_k(t) = S^k_ϕ,γ (Ω/2) ∫_0^t e^{iθ_k(t')} dt',
Ê(β_ij) = exp(−i β_ij σ_ϕ^i σ_ϕ^j)
        = exp( −i σ_ϕ^i σ_ϕ^j (Ω²/2) cos γ ∑_{k=1}^N ∫_0^t ∫_0^{t'} η_i,k η_j,k sin(θ_k(t') − θ_k(t'')) dt'' dt' ).

The first term of the expansion is the direct time integral of the Hamiltonian and is proportional to α̂_k a_k^† − α̂_k^† a_k, which is the argument of the displacement operator and is related to quantum coherent states. The second term is the double time integral of the commutator of the Hamiltonian evaluated at different times, and is proportional to σ_ϕ^i ⊗ σ_ϕ^j. Conveniently, higher-order terms vanish, and the two surviving terms commute, so we can express the final propagator as the product of two unitaries.

Consider the first operator D̂({α̂_k}), where the displacement α̂_k is state-dependent and proportional to the spin operator S^k_ϕ,γ. If the internal state happens to be an eigenstate of S^k_ϕ,γ, we may replace the operator by its eigenvalue, and D̂ simply displaces the motional state from one coherent state to another by α_k. We can plot the 2-D phase space trajectory (PST) to keep track of the complex displacement over time.
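The rotation angle β_ij can be evaluated numerically from the double integral above; the following single-mode sketch (γ = 0, illustrative constants) also shows how Ω would be chosen to reach the maximally entangling angle π/4.

```python
import numpy as np

tau, n = 90e-6, 1500
t = np.linspace(0, tau, n)
dt = t[1] - t[0]
delta = np.full(n, 2*np.pi/tau)          # one closed loop in phase space
theta = np.concatenate([[0.0], np.cumsum(0.5*(delta[1:]+delta[:-1])*dt)])

# I = ∫₀^τ ∫₀^{t'} sin(θ(t') − θ(t'')) dt'' dt'
diff = np.subtract.outer(theta, theta)   # θ(t') − θ(t'')
I = np.sum(np.tril(np.sin(diff), k=-1)) * dt**2

eta_i = eta_j = 0.047                    # common-mode Lamb-Dicke value (5 ions)
# With γ = 0 and one mode, β = (Ω²/2) η_i η_j I ; choose Ω so that |β| = π/4.
omega = np.sqrt((np.pi/4) / (0.5 * eta_i * eta_j * abs(I)))
print(f"I = {I:.3e} s^2,  required Omega = 2π × {omega/2/np.pi/1e3:.0f} kHz")
```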
It is worth emphasizing that the quadrature axes in the PSTs do not represent the expected position or momentum of any particle, as they would for a single quantum harmonic oscillator: we are looking at the Hamiltonian in the interaction frame, and we are tracking the collective rather than the individual motion of the ions.

In general, the initial internal state is a superposition of the four eigenstates of S^k_ϕ,γ = η_i,k σ_ϕ^i + η_j,k e^{iγ} σ_ϕ^j, and each eigenstate follows a different trajectory in phase space according to its eigenvalue. For tidiness, we only track the trajectory of |++⟩_ϕ, where |+⟩_ϕ is the positive eigenstate of σ^i_ϕ, in the case where the laser phase γ is zero. Since the trajectories of different eigenstates have different end points, there is residual entanglement between the internal and motional state spaces, which results in a mixed internal state since we do not measure the ion motion. Thus, in our optimization we need |α_k(t)| = 0 for all motional modes k to guarantee that the end points of the trajectories return to their starting points.

The second operator Ê_ij represents a rotation on the Bloch sphere spanned by |↓↓⟩ and |↑↑⟩. For maximal entanglement we set γ = 0 and require the magnitude of the argument of the exponential to be π/4 to effect a π/2 rotation, which maps |↓↓⟩ to 1/√2 (|↓↓⟩ + i e^{2iϕ} |↑↑⟩). We simply adjust Ω to satisfy this requirement, since it is a free constant parameter. If Ω is too large, we repeat the gate sequence R times to lower it by a factor of √R. We may also alter the axis of rotation by changing the phase lag ϕ between the sidebands.

§ ERROR ESTIMATE DUE TO SPIN-MOTION ENTANGLEMENT

This section gives a simplified justification for the error estimate presented in equation (3) of the main text.

Suppose the internal state is an equal superposition of |Ψ_1⟩ and |Ψ_2⟩, which are eigenstates of some spin operator Ŝ with eigenvalues ±1, and that the system is subject to the displacement operator D̂(α) = exp(Ŝ(α a^† − α^* a)), so the two eigenstates receive opposite displacements ±α from the origin (ideally zero). Assuming that the ions are perfectly cooled to the ground state (a reasonable approximation for this experiment), the final and ideal states are:

|ψ_final⟩ = 1/√2 (|Ψ_1, α⟩ + |Ψ_2, −α⟩),
|ψ_ideal⟩ = 1/√2 (|Ψ_1, 0⟩ + |Ψ_2, 0⟩).

The gate fidelity is given by:

|⟨ψ_final|ψ_ideal⟩|² = |1/2 (⟨α|0⟩ + ⟨−α|0⟩)|² = e^{−|α|²} ≈ 1 − |α|².

Alternatively, we can trace the density matrix |ψ_final⟩⟨ψ_final| over the motional space. Using tr(|α⟩⟨−α|) = tr(|−α⟩⟨α|) = e^{−2|α|²}, in the eigenbasis {|Ψ_1⟩, |Ψ_2⟩} the final density matrix is:

ρ_f = 1/2 [ 1, e^{−2|α|²} ; e^{−2|α|²}, 1 ],

and we arrive at the same fidelity:

F = ⟨ψ_ideal| ρ_f |ψ_ideal⟩ ≈ 1 − |α|².

Since a multi-ion chain has multiple motional modes, the total error is simply the sum of |α|² over all modes.

The motional displacement is difficult to determine exactly, since it is inherently state-dependent and the initial state is assumed to be arbitrary. Observing the original expression for α̂_k(t), we approximate the error as:

|α_k| ≈ η̃ Ω̃ |∫_0^τ e^{iθ_k(t)} dt|,

where η̃ = η_j,0 = Δk √(ħ/(2mω_x)) (1/√N) is the Lamb-Dicke parameter of the common mode, identical for all ions (0.047 for 5 ^171Yb^+ ions, 0.025 for 17 ions), and Ω̃ is the approximate power required to entangle a pair of qubits (about 2π×200 kHz).
Thus we define the gate error ε to be:

ε ≈ ∑_{k=1}^N |α_k|² ≈ (η̃ Ω̃)² ∑_{k=1}^N |∫_0^τ e^{iθ_k(t)} dt|²,

where η̃ is the characteristic size of the Lamb-Dicke parameter and Ω̃ is the approximate power required to induce maximal entanglement between the qubit pair. Together, η̃Ω̃ is the overall “sideband coupling strength”, which is approximately 2π×10 kHz for 5 ions and 2π×5 kHz for 17 ions.

§ THE ROBUSTNESS CONDITION

This section explains the robustness condition presented in equation (4) of the main text.

Since α_k ∼ ∫_0^t e^{iθ_k(t')} dt' = 0 is a necessary condition for guaranteeing zero displacement in the motional state space, we investigate how we can suppress α_k up to first order in δ_1. Replacing the phase θ_k with θ_k + δ_1 t, we evaluate the displacement through integration by parts:

α_k(τ) ∼ ∫_0^τ e^{iθ_k(t) + iδ_1 t} dt
       ≈ ∫_0^τ (1 + iδ_1 t) e^{iθ_k(t)} dt
       = iδ_1 ∫_0^τ t e^{iθ_k(t)} dt   (the zeroth-order term vanishes because the unperturbed pulse satisfies ∫_0^τ e^{iθ_k(t)} dt = 0)
       = iδ_1 ( [ t ∫_0^t e^{iθ_k(t')} dt' ]_0^τ − ∫_0^τ ∫_0^t e^{iθ_k(t')} dt' dt )
       = iδ_1 ( 0 − τ α_k,avg ),

where α_k,avg is the time-averaged position of the trajectory from t = 0 to t = τ. Therefore, α_k,avg must lie at the starting point in order for α_k to remain zero up to first order in the drift or uncertainty. Note that the approximation e^{iδ_1 t} ≈ 1 + iδ_1 t is valid only when δ_1 ≪ 1/τ; hence, the longer the gate time, the less robust the gate becomes.

In addition, if the pulse is time-symmetric (i.e., δ_k(t) = δ_k(τ − t)), the center of mass lying at the origin automatically guarantees that the end point lies there as well. Thus, for symmetric pulses the robustness condition (α_k,avg = 0) is a sufficient condition for displacement minimization (α_k = 0). The optimization criterion is now simply:

α_k,avg ∼ ∫_0^τ ∫_0^t e^{iθ_k(t')} dt' dt = 0,  k = 1, …, N.

We seek to vary the detuning during the gate such that the above condition is satisfied. Given sufficient degrees of freedom and a good initial guess, we can arrive at an optimal pulse deterministically.

§ OPTIMIZATION PROCESS

Modifying the frequency allows us to alter the curvature of the trajectories, and hence their end points and time-averaged positions. In our optimization, we choose the vertices of the frequency oscillations as our control parameters. The number of vertices corresponds to the number of degrees of freedom needed to achieve an optimal solution; it increases linearly with the number of motional modes. We connect these vertices using the cosine function to create a smoothly varying frequency pattern. This is a useful feature, since it is difficult to vary physical parameters discretely in real experiments. The overall change in frequency (∼100 kHz) is small compared to the frequencies used by conventional optical modulators (∼100 MHz), minimizing sudden physical changes.

The average frequency lies above all motional modes (blue detuned), but the frequency crosses several sidebands and becomes red detuned with respect to them. The phonon number does not increase dramatically, since the driving frequency only overlaps with the sidebands momentarily.
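The first-order relation derived above, α_k(τ) ≈ iδ_1(0 − τ α_k,avg), is easy to verify numerically; here is a minimal sketch for a single closed-loop waveform (illustrative constants only).

```python
import numpy as np

tau, n = 90e-6, 4001
t = np.linspace(0, tau, n)
dt = t[1] - t[0]
delta = np.full(n, 2*np.pi/tau)                     # α(τ) = 0 at δ₁ = 0
theta = np.concatenate([[0.0], np.cumsum(0.5*(delta[1:]+delta[:-1])*dt)])

# A = ∫₀^τ α̃(t) dt = τ·α_avg, with α̃(t) = ∫₀^t exp(iθ) dt'.
A = np.sum(np.cumsum(np.exp(1j*theta)) * dt) * dt

for d1 in 2*np.pi*np.array([50.0, 200.0, 500.0]):   # drift δ₁ (rad/s)
    exact = np.sum(np.exp(1j*(theta + d1*t))) * dt  # displacement under drift
    print(f"d1 = {d1/2/np.pi:5.0f} Hz:  |alpha(tau)| = {abs(exact):.3e},"
          f"  first order |d1*A| = {abs(d1*A):.3e}")
```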
To search for a robust frequency pattern, we define the cost function as the sum over modes of the squared distance between the center of mass of each trajectory and its starting point:

Cost = ∑_{k=1}^N | ∫_0^τ ∫_0^t e^{iθ_k(t')} dt' dt |²
     = ∑_{k=1}^N ( ∫_0^τ ∫_0^t cos θ_k(t') dt' dt )² + ( ∫_0^τ ∫_0^t sin θ_k(t') dt' dt )².

Similarly, for a non-robust pattern we define the error as the squared distance between the trajectory end points and their starting points.

It is worth noting that the optimization algorithm is inherently deterministic and requires little computational resource. For 5 sidebands, given a good initial guess, we can arrive at an optimal FM pattern in about 30 seconds on a regular laptop computer.

§ AREA ENCLOSED BY THE TRAJECTORY

This section shows that the area enclosed by a trajectory has a simple expression as a double integral. Consider the integral

α(t) = ∫_0^t e^{iθ(t')} dt',   θ(t) = ∫_0^t δ(t') dt',

which is a general representation of a trajectory in the complex plane (see Fig. 9). At any given time t it moves at angular rate δ(t), angle θ(t), and unit speed. The area enclosed from t to t + dt (the yellow triangle in the figure) is given by:

1/2 |α(t)| dt sin ϕ = 1/2 dt Im(e^{iθ(t)} α^*(t))
                    = 1/2 dt Im( ∫_0^t e^{iθ(t) − iθ(t')} dt' )
                    = 1/2 dt ∫_0^t sin(θ(t) − θ(t')) dt'.

Hence the total area enclosed by the trajectory over a period of time t is given by:

β(t) = 1/2 ∫_0^t ∫_0^{t'} sin(θ(t') − θ(t'')) dt'' dt'.

This double integral coincides with the entanglement between two qubits after the MS gate, or more precisely with the angle of rotation between |↓↓⟩ and |↑↑⟩. Hence we may evaluate how much entanglement the MS gate generates by observing the sizes and shapes of the PSTs.
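To make the optimization procedure above concrete, here is a minimal sketch that minimizes the (single-mode) center-of-mass cost over the vertex values with an off-the-shelf simplex optimizer; the choice of SciPy's Nelder-Mead and all numerical values are our own illustrative assumptions, not the implementation used in this work.

```python
import numpy as np
from scipy.optimize import minimize

tau, n = 90e-6, 1200
t = np.linspace(0, tau, n)
dt = t[1] - t[0]

def waveform(verts):
    # Symmetric pulse: mirror the free vertices, cosine-interpolate in between.
    v = np.concatenate([verts, verts[::-1]])
    seg = np.linspace(0, len(v) - 1, n)
    i0 = np.minimum(seg.astype(int), len(v) - 2)
    frac = seg - i0
    return v[i0] + (v[i0 + 1] - v[i0]) * (1 - np.cos(np.pi * frac)) / 2

def cost(v_khz):
    delta = waveform(2 * np.pi * 1e3 * v_khz)        # vertices given in kHz
    theta = np.concatenate([[0.0],
                            np.cumsum(0.5 * (delta[1:] + delta[:-1]) * dt)])
    com = np.sum(np.cumsum(np.exp(1j * theta)) * dt) * dt   # ∝ center of mass
    return abs(com / tau**2) ** 2                           # dimensionless

res = minimize(cost, np.array([60.0, 20.0, 60.0, 20.0]), method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-12})
print("final cost:", res.fun)
print("optimized vertices (kHz):", np.round(res.x, 2))
```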
http://arxiv.org/abs/1708.08039v4
{ "authors": [ "Pak Hong Leung", "Kevin A. Landsman", "Caroline Figgatt", "Norbert M. Linke", "Christopher Monroe", "Kenneth R. Brown" ], "categories": [ "physics.atom-ph", "quant-ph" ], "primary_category": "physics.atom-ph", "published": "20170827021123", "title": "Robust two-qubit gates in a linear ion crystal using a frequency-modulated driving force" }
The Weisfeiler-Leman Dimension of Planar Graphs is at most 3

Sandra Kiefer, RWTH Aachen University, [email protected]
Ilia Ponomarenko, Petersburg Department of V. A. Steklov Institute of Mathematics, [email protected]
Pascal Schweitzer, RWTH Aachen University, [email protected]

This is an extended version of the paper with the same title published in the Proceedings of the 32nd Annual ACM/IEEE Symposium on Logic in Computer Science <cit.>. We prove that the Weisfeiler-Leman (WL) dimension of the class of all finite planar graphs is at most 3. In particular, every finite planar graph is definable in first-order logic with counting using at most 4 variables. The previously best known upper bounds for the dimension and number of variables were 14 and 15, respectively. First we show that, for dimension 3 and higher, the WL-algorithm correctly tests isomorphism of graphs in a minor-closed class whenever it determines the orbits of the automorphism group of any arc-colored 3-connected graph belonging to this class. Then we prove that, apart from several exceptional graphs (which have WL-dimension at most 2), the individualization of two correctly chosen vertices of a colored 3-connected planar graph followed by the 1-dimensional WL-algorithm produces the discrete vertex partition. This implies that the 3-dimensional WL-algorithm determines the orbits of a colored 3-connected planar graph. As a byproduct of the proof, we get a classification of the 3-connected planar graphs with fixing number 3.

§ INTRODUCTION

The Weisfeiler-Leman algorithm (WL-algorithm) is a fundamental algorithm used as a subroutine in graph isomorphism testing. More precisely, it constitutes a family of algorithms: for every positive integer k there is a k-dimensional version of the algorithm that colors all k-tuples of vertices in two given undirected input graphs and iteratively refines the color classes based on information from previously obtained colors. The algorithm has surprisingly strong links to notions that seem unrelated at first sight. For example, there is a precise correspondence to Sherali-Adams relaxations of certain linear programs <cit.>, there are duplicator-spoiler games capturing the same information as the algorithm <cit.>, it is related to separability of coherent configurations <cit.>, and there is a close correspondence between the algorithm and first-order logic with counting (C^k). More precisely, for two graphs G and G', if the integer k is the smallest dimension such that the k-dimensional WL-algorithm distinguishes the two graphs, then k+1 is the smallest number of variables of a sentence in first-order logic with counting distinguishing the two graphs <cit.>. Exploiting these correspondences, the seminal construction of Cai, Fürer and Immerman <cit.> shows that there are examples of pairs of graphs on n vertices for which a dimension of Ω(n) is required for the WL-algorithm to distinguish the two graphs. However, for various graph classes a bounded dimension suffices to distinguish every two non-isomorphic graphs from each other.
While typically not very practical due to large memory consumption, this yields a polynomial-time algorithm to test isomorphism of graphs from such a class. In a tour de force, Grohe shows that for all graph classes with excluded minors a bounded dimension of the WL-algorithm suffices to decide graph isomorphism <cit.>. An ingredient in Grohe's proof deals with the special case of the class of planar graphs, for which a bound on the necessary dimension of the WL-algorithm had been proven earlier by Grohe separately <cit.>. Thus, there is a k such that the k-dimensional WL-algorithm distinguishes every two non-isomorphic planar graphs from each other. In his Master's thesis <cit.> (see <cit.>), Redies analyses Grohe's proof, showing that this k can be chosen to be 14. For 3-connected planar graphs it was also shown earlier by Verbitsky <cit.> that one can additionally require the quantifier rank to be logarithmic. Feeling that this is far from optimal, Grohe asked in his book <cit.> and also at the 2015 Dagstuhl meeting on the graph isomorphism problem <cit.> for a tight k. In this paper we show that k = 3 is sufficient.

We say that a graph is identified by the k-dimensional WL-algorithm if the algorithm distinguishes the graph from all other graphs. Following Grohe <cit.>, we say that a graph class 𝒞 has WL-dimension k if k is the smallest integer such that all graphs in 𝒞 are identified by the k-dimensional WL-algorithm. With this terminology, our main theorem reads as follows.

The Weisfeiler-Leman dimension of the class of planar graphs is at most 3. (Equivalently, every planar graph is definable by a sentence in first-order logic with counting that uses only 4 variables.)

Our proof is separated into two parts. The first part (Sections <ref>–<ref>) constitutes a reduction from general graphs to 3-connected graphs, while the second part (Section <ref>) handles 3-connected planar graphs.

In the first part, which does not only concern planar graphs, we start by showing that, for a hereditary graph class 𝒢 and k ≥ 2, if the k-dimensional WL-algorithm distinguishes every two vertex-colored non-isomorphic 2-connected graphs from each other, then it distinguishes every two non-isomorphic graphs in 𝒢 (Theorem <ref>). While it is tempting to believe that, when requiring k ≥ 3, a similar statement can be made about 3-connected graphs, we need the additional assumption that 𝒢 is minor-closed. In fact, our proof also needs the more technical requirement that the WL-algorithm correctly determines the vertex orbits. We can then argue that if 𝒢 is additionally minor-closed and, for k ≥ 3, the k-dimensional WL-algorithm correctly determines orbits on all arc-colored 3-connected graphs in 𝒢, then it distinguishes every two non-isomorphic graphs in 𝒢 from each other (Theorem <ref>). To prove these two reductions, we employ several structural observations on decomposition trees. These allow us to cut off, isomorphism-invariantly, the leaves of the decomposition trees of 2- and 3-connected components, respectively. This can be done implicitly, without having to explicitly construct the corresponding decomposition trees (see Section <ref>).

In the second part we show that the 3-dimensional WL-algorithm identifies all (arc-colored) 3-connected planar graphs. More precisely, we argue that orbits are determined, as required by our reduction. In fact, we show a stronger statement in that we do not need the full power of the 3-dimensional WL-algorithm.
Using Tutte's Spring Embedding Theorem <cit.>, we argue that if an arc-colored 3-connected planar graph contains three vertices that each have a unique color (so-called singletons) and that share a common face, then applying the 1-dimensional WL-algorithm (usually called color refinement) yields a coloring of the graph in which all vertices are singletons. Since this would only give us a bound of k = 4, we then show that in most 3-connected planar graphs it suffices to individualize 2 vertices to get the same result. Our proof actually characterizes the exceptions: the graphs in which we need to individualize 3 vertices. We can handle these graphs separately to finish our proof. The fixing number of a graph G is the minimum size of a set of vertices S such that the only automorphism that fixes S pointwise is the identity. It follows from generally known facts that the fixing number of a 3-connected planar graph is at most 3. Our proof, however, shows that the only 3-connected planar graphs with fixing number 3 are those depicted in Figure <ref> (see also Corollary <ref>). The properties of these graphs are summarized in Table <ref>.

On the algorithmic side we obtain a very easy algorithm to check isomorphism of 3-connected planar graphs. In fact, the arguments show that with the right cell selection strategy, individualization-refinement algorithms (such as nauty and traces <cit.> or bliss <cit.>), which constitute to date the fastest isomorphism algorithms in practice, have polynomial running time on 3-connected planar graphs. Concerning lower bounds on the WL-dimension of planar graphs, it is not difficult to see that there are planar graphs with WL-dimension 2 (for example the 6-cycle). However, the question whether the maximum dimension of the class of planar graphs is 2 or 3 remains open.

Related work. There is an extensive body of work on isomorphism testing of planar graphs. Most notably, Hopcroft and Tarjan first exploited the decomposition of a graph into its 3-connected components to obtain an algorithm with quasi-linear running time <cit.>, which led to a linear-time algorithm by Hopcroft and Wong <cit.>. More recent results show that isomorphism of planar graphs can be decided in logarithmic space <cit.>. There are also various results on the descriptive complexity of planar graph isomorphism. In this direction, Grohe shows that fixed-point logic with counting FPC captures polynomial time on planar graphs <cit.> and more generally on graphs of bounded genus <cit.>. This was also known for graphs of bounded tree width <cit.>. Subsequent work shows that for 3-connected planar graphs and for graphs of bounded tree width it is possible to restrict the quantifier depth (or, equivalently, the number of iterations that the WL-algorithm performs until it terminates) to a polylogarithmic number, which translates to parallel isomorphism tests <cit.>. For general graphs, recent results give new upper and lower bounds on the quantifier depth (translating into bounds on the maximum number of iterations of the WL-algorithm) <cit.>.
Extending the results on planar graphs in the direction of dynamic complexity, Mehta shows that isomorphism of 3-connected planar graphs is in DynFO+ <cit.>, where in fact no counting quantifiers are required. While it is possible to describe precisely the graphs of WL-dimension 1 <cit.> (i.e., graphs definable with a 2-variable sentence in first-order logic with counting), it appears difficult to make such statements for higher dimensions. However, for various graph classes for which the isomorphism problem is known to be polynomial-time solvable, one can give upper bounds on the dimension. E.g., for cographs, interval graphs, and, more generally, for rooted directed-path graphs, it suffices to apply the 2-dimensional WL-algorithm in order to decide isomorphism <cit.>. In general, isomorphism of graph classes with an excluded minor can be solved in polynomial time <cit.>, and in fact a sufficiently high-dimensional WL-algorithm will decide isomorphism on such a class <cit.>. More strongly, FPC captures polynomial time on graph classes with an excluded minor. In the proof of this result, structural graph theory and in particular decompositions play a central role. While our paper uses very basic parts of these techniques and concepts, they are only implicit, and we refer the reader to <cit.> for a more systematic treatment.

§ PRELIMINARIES

All graphs in this paper are finite simple graphs, that is, undirected graphs without loops. The vertex and edge sets of a graph G are denoted by V(G) and E(G), respectively. The neighborhood N(X) of a subset X ⊆ V(G) of the vertices is the set {u ∈ V(G)∖X | ∃ v ∈ X s.t. {u,v} ∈ E(G)}. For X ⊆ V(G), we denote by G[X] the subgraph of G induced by X, i.e., the graph with vertex set X and edge set E(G) ∩ {{u,v} | u, v ∈ X}. The graph G−X ≔ G[V(G)∖X] is obtained from G by removing X. We write G ≅ H to indicate that G is isomorphic to H. A minor of G is a graph obtained by repeated vertex deletions, edge deletions and edge contractions.

For a positive integer k, a graph G is k-connected if G has more than k vertices and for all X ⊆ V(G) with |X| < k, the graph G−X is connected. A separator S ⊆ V(G) is a subset of the vertices such that G−S is not connected. A vertex v is a cut vertex if {v} is a separator, and a 2-separator is a separator of size 2. A 2-connected component of G is a subset S' of V(G) such that the graph G[S'] is 2-connected and such that S' is maximal with respect to inclusion. We refer the reader to <cit.> for more basic information on graphs, in particular on planar graphs, which are graphs that can be drawn in the plane without crossings.

A vertex-colored graph (G, λ) is a graph G with a function λ: V(G) → 𝒞, where 𝒞 is an arbitrary set. We call λ a vertex coloring of G. Similarly, an arc-colored graph is a graph G with a function λ: {(u,u) | u ∈ V(G)} ∪ {(u,v) | {u,v} ∈ E(G)} → 𝒞. In this case, we call λ an arc coloring. We interpret λ(u,u) as the vertex color of u, and for {u,v} ∈ E(G) we interpret λ(u,v) as the color of the arc from u to v. In particular, it may be the case that λ(u,v) ≠ λ(v,u). However, while we allow such colorings, all graphs in this paper are undirected. Furthermore, we treat every uncolored graph as a monochromatic colored graph.

The Weisfeiler-Leman algorithm (see <cit.>). For k ∈ ℕ, a graph G, and a coloring λ of V^k(G), let (v_1, …, v_k) be a vertex k-tuple of G.
We define 0χ^k_G(v_1, …, v_k) to be a tuple consisting of an encoding of λ(v_1, …, v_k) and an encoding of the isomorphism class of the colored graph obtained from G[{v_1, …, v_k}] by coloring for i ∈ {1, …, k} vertex v_i with color i. That is, for a second graph G', possibly equal to G, with coloring λ' and for a vertex k-tuple (v_1', …, v_k') of G' we have 0χ^k_G(v_1, …, v_k) = 0χ^k_G'(v_1', …, v_k') if and only if λ(v_1, …, v_k) = λ'(v_1', …, v_k') and there is an isomorphism from G[{v_1, …, v_k}] to G'[{v'_1, …, v'_k}] mapping v_j to v'_j for all j ∈ {1, …, k}. We recursively define the color i+1χ^k_G(v_1, …, v_k) by setting

i+1χ^k_G(v_1, …, v_k) ≔ (iχ^k_G(v_1, …, v_k); ℳ),

where ℳ is the multiset defined as

ℳ ≔ {{(iχ^k_G(w, v_2, …, v_k), …, iχ^k_G(v_1, …, v_k-1, w)) | w ∈ V(G)}}

if k ≥ 2 and as ℳ ≔ {{iχ^1_G(w) | w ∈ N(v_1)}} if k = 1. That is, if k = 1, the iteration is only over neighbors of v_1.

There is a slight technical issue about the initial coloring. Suppose for a fixed k we are given a graph G with a coloring λ' of V^ℓ(G) for ℓ < k. To turn it into a correct input for the k-dimensional WL-algorithm, we replace λ' by an appropriate coloring λ. For a vertex tuple (u_1, …, u_k), we define λ(u_1, …, u_k) ≔ λ'(u_1, …, u_ℓ). Note that λ preserves all information from λ'. If λ' is an arc coloring, we define λ(u_1, …, u_k) to be (λ'(u_1, u_2), 0) if (u_1, u_2) is in the domain of λ' and to be (1,1) otherwise.

By definition, the coloring i+1χ^k_G induces a refinement of the partition of the k-tuples of the vertices of the graph G with coloring iχ^k_G. Thus, there is some minimal i such that the partition induced by the coloring i+1χ^k_G is not strictly finer than the one induced by the coloring iχ^k_G on G. For this minimal i, we call iχ^k_G the stable coloring of G and denote it by χ^k_G. For k ∈ ℕ, the k-dimensional WL-algorithm takes as input a vertex coloring or an arc coloring λ of a graph G and returns the coloring χ^k_G.

For two graphs G and G', we say that the k-dimensional WL-algorithm distinguishes G and G' if its application to each of them results in colorings with differing color class sizes. More precisely, the graphs G and G' are distinguished if there is a color C in the range of χ^k_G such that the sets {v̅ | v̅ ∈ V^k(G), χ^k_G(v̅) = C} and {w̅ | w̅ ∈ V^k(G'), χ^k_G'(w̅) = C} have different cardinalities. If two graphs are distinguished by the k-dimensional WL-algorithm for some k, then they are not isomorphic. However, if k is fixed, the converse is not always true. There is a close connection between the WL-algorithm and first-order logic with counting (as well as fixed-point logic with counting). We refer the reader to existing literature (for example <cit.>) for more information. For improved readability, we will use the letter λ to denote arbitrary colorings that do not necessarily result from applications of the WL-algorithm.
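For concreteness, the following minimal sketch (our own illustration; it follows the definition above for k = 2 with a monochromatic initial coloring λ) computes the stable coloring χ²_G on all vertex pairs. To compare two graphs, one runs it on their disjoint union, exactly as with the color-refinement sketch in the introduction.

```python
from itertools import product

def wl2(adj):
    """2-dimensional WL on a graph given as an adjacency dict.
    Pairs start with their isomorphism type (equality, adjacency) and are
    refined by the multiset, over all w, of the color pairs of
    (w, v_2) and (v_1, w)."""
    V = list(adj)
    col = {(u, v): (u == v, v in adj[u]) for u, v in product(V, V)}
    while True:
        sig = {(u, v): (col[u, v],
                        tuple(sorted((col[w, v], col[u, w]) for w in V)))
               for u, v in product(V, V)}
        names = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {p: names[sig[p]] for p in sig}
        if len(set(new.values())) == len(set(col.values())):
            return new            # stable coloring chi^2 reached
        col = new
```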
§ DECOMPOSITIONS

For a graph G, define the set P(G) to consist of the pairs (S,K), where S is a separator of G of minimum cardinality and K ⊆ V(G) ∖ S is the vertex set of a connected component of G - S. We observe that if G is a connected graph that is not 2-connected, then P(G) is the set of pairs ({s},K) where s is a cut vertex and G[K] a connected component of G - {s}. In this case we also write (s,K) instead of ({s},K). If G is 2-connected but not 3-connected, all separators in P(G) have size 2.

There is a natural partial order on P(G) with respect to inclusion in the second component, i.e., we can define (S,K) ≤ (S',K') :⟺ K ⊆ K'. We define P_0(G) to be the set of minimal elements of P(G) with respect to this partial order. It immediately follows from the definitions that the sets P(G) and P_0(G) (and the corresponding partial orders) are isomorphism-invariant (i.e., preserved under isomorphisms).

Note that P_0(G) is non-empty whenever G is not a complete graph. Also note that if G is not 2-connected, then for two distinct minimal elements (S,K) and (S',K') in P_0(G) we have K ∩ K' = ∅. Furthermore, in the case that G is connected but not 2-connected, the set P_0(G) contains exactly the pairs (s,K) for which s is a cut vertex and G[K] a connected component of G - {s} that does not contain a cut vertex of G. These two observations can be generalized to graphs with a higher connectivity, but for this we need an additional requirement on the minimum degree as follows.

Let G be a graph that is not (k+1)-connected and has minimum degree at least (3k-1)/2.

* (1) If (S,K) ∈ P_0(G) and (S',K') ∈ P(G) are distinct, then K ⊆ K' or (K ∪ S) ∩ (K' ∪ S') = S ∩ S'.
* (2) If (S,K), (S',K') ∈ P_0(G) are distinct, then (K ∪ S) ∩ (K' ∪ S') = S ∩ S'.
* (3) A pair (S,K) ∈ P(G) is contained in P_0(G) if and only if there is no separator S' of G of minimum cardinality with S' ∩ K ≠ ∅.

(Part 1) Assume that (S,K) ∈ P_0(G) and (S',K') ∈ P(G) are distinct. Note that S = N(K) and S' = N(K') since S and S' are minimal separators. Suppose K ⊈ K' and that there exists v ∈ K ∩ K'. Then v has a neighbor u ∈ K which does not belong to K'. Since v ∈ K', this implies that u belongs to S'. Therefore, the graph G - u is at most (k-1)-connected. On the other hand, u lies in K. By Corollary 1 in <cit.>, the graph G - u is k-connected, yielding a contradiction.

(Part 2) This follows by applying Part 1 twice.

(Part 3) If (S,K) ∈ P(G) is not minimal, then there is (S',K') ∈ P(G) with K' ⊊ K. Then S' ⊆ K ∪ S but S ≠ S', which shows that S' ∩ K ≠ ∅. Conversely, suppose (S,K) ∈ P_0(G) and that there is a minimum size separator S' of G with S' ∩ K ≠ ∅. Let K' ⊆ V(G - S') be a vertex set such that G[K'] is a connected component of G - S'. Then (S,K) and (S',K') violate Part 1 of the lemma.

We remark that for a connected graph G which is not 3-connected, the elements of P_0(G) correspond to the leaves in a suitable decomposition tree (i.e., the decomposition into 2- or 3-connected components) in the sense of Tutte. However, we will not require this fact.
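As a concrete illustration of the connectivity-1 case (a sketch of our own, assuming the Python networkx package), the characterization above makes P_0(G) straightforward to compute for a connected graph that is not 2-connected:

```python
import networkx as nx

def p0_not_2_connected(G):
    """P_0(G) for a connected, not 2-connected graph: the pairs (s, K)
    where s is a cut vertex and K is the vertex set of a connected
    component of G - {s} containing no cut vertex of G."""
    cut_vertices = set(nx.articulation_points(G))
    minimal_pairs = []
    for s in cut_vertices:
        H = G.copy()
        H.remove_node(s)
        for K in nx.connected_components(H):
            if not K & cut_vertices:      # K contains no cut vertex of G
                minimal_pairs.append((s, frozenset(K)))
    return minimal_pairs

# Example: two triangles glued at a vertex; vertex 2 is the unique cut
# vertex and both triangle "leaves" appear in P_0.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
print(p0_not_2_connected(G))
# e.g. [(2, frozenset({0, 1})), (2, frozenset({3, 4}))]
```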
In the following we present a method to remove the vertices appearing in the second component of pairs in P_0 from graphs in such a way that the property whether two graphs are isomorphic is preserved. This will allow us to devise an inductive isomorphism test. In the next sections we will then show that a sufficiently high-dimensional WL-algorithm in some sense implicitly performs this induction.

For a graph G and a set S ⊆ V(G), we define the graph G^S_⊤ as consisting of the vertices of S and those appearing together with S in P_0(G). More precisely, G^S_⊤ is the graph on the vertex set V' ≔ S ∪ ⋃_(S,K) ∈ P_0(G) K with edge set E' ≔ E(G[V']) ∪ {{s,s'} | s,s' ∈ S and s ≠ s'}, see Figure <ref> left and bottom right. [For the reader familiar with tree decompositions we remark that this graph corresponds to the torso of the bag S ∪ K in a suitable tree decomposition. However, we will not require this point of view in the paper.] Note that if S is not a separator or does not appear as a separator in P_0(G), then V' = S.

For an arc coloring λ of G, we define an arc coloring λ^S_⊤ for the graph G^S_⊤ as follows: λ^S_⊤(v_1,v_2) ≔ (0,0) if {v_1,v_2} ⊆ S and {v_1,v_2} ∉ E(G); λ^S_⊤(v_1,v_2) ≔ (λ(v_1,v_2),1) if {v_1,v_2} ⊆ S and {v_1,v_2} ∈ E(G); and λ^S_⊤(v_1,v_2) ≔ (λ(v_1,v_2),2) otherwise.

If G is a vertex-colored graph with vertex coloring λ', in order to obtain a coloring for G^S_⊤, we define an arc coloring λ as λ(v_1,v_2) ≔ λ'(v_1) and let λ^S_⊤ be as above. For (S,K) ∈ P_0(G) we also define G^(S,K)_⊤ ≔ G^S_⊤[S ∪ K], which differs from G^S_⊤ in that only vertices from S and K are retained. Again we define a coloring λ^(S,K)_⊤, which is simply the restriction of λ^S_⊤ to pairs (v_1,v_2) for which v_1,v_2 ∈ S ∪ K.

Given a graph G, we define G_⊥ (see Figure <ref> left and top right) to be the graph with vertex set V_⊥ ≔ V(G) ∖ (⋃_(S,K) ∈ P_0(G) K) and edge set E_⊥ ≔ E(G[V_⊥]) ∪ {{s_1,s_2} | there is (S,K) ∈ P_0(G) with s_1,s_2 ∈ S and s_1 ≠ s_2}.

We observe that if G is not 2-connected then G_⊥ is equal to G[V_⊥]. In general, if for some k the graph G is not (k+1)-connected but has minimum degree at least (3k-1)/2, then Lemma <ref> applies. In particular, the various components whose vertex sets appear in P_0(G) are disjoint. If G is not 3-connected, this implies that G_⊥ is a minor of G. In the following we restrict our discussions to graphs that are not 3-connected.

Given an arc coloring λ of G, we define an arc coloring λ_⊥ of G_⊥ as follows. Assume that v_1,v_2 ∈ V(G_⊥). Let S ≔ {v_1,v_2}. If S is a 2-separator of G but S ∉ E(G), we set λ_⊥(v_1,v_2) ≔ (0, iso((G^S_⊤,λ^S_⊤)_(v_1,v_2))). Furthermore, if v_1 = v_2 or if {v_1,v_2} ∈ E(G), we set λ_⊥(v_1,v_2) ≔ (λ(v_1,v_2), iso((G^S_⊤,λ^S_⊤)_(v_1,v_2))), where by iso((G^S_⊤,λ^S_⊤)_(v_1,v_2)) we denote the isomorphism class of the colored graph (G^S_⊤,λ^S_⊤)_(v_1,v_2) obtained from the arc-colored graph (G^S_⊤,λ^S_⊤) by individualizing v_1 and v_2. Thus (G^{v_1,v_2}_⊤,λ^{v_1,v_2}_⊤)_(v_1,v_2) and (G'^{v'_1,v'_2}_⊤,λ'^{v'_1,v'_2}_⊤)_(v'_1,v'_2) have the same isomorphism type if and only if there is an isomorphism from the first graph to the second mapping v_1 to v'_1 and v_2 to v'_2. Note that by definition, the λ_⊥-colors of 2-separators of G are distinct from those of other pairs of vertices. If not stated otherwise, we implicitly assume that for a graph G with initial coloring λ, the corresponding graph G_⊥ is a colored graph with initial coloring λ_⊥.

For k ∈ {1,2}, if G and G' are k-connected graphs that are not (k+1)-connected and that are of minimum degree at least (3k-1)/2 with arc colorings λ and λ', respectively, then (G, λ) ≅ (G', λ') ⟺ (G_⊥, λ_⊥) ≅ (G'_⊥, λ'_⊥).

(⟹) Suppose that φ is an isomorphism from (G, λ) to (G', λ'). Since P_0(G) is isomorphism-invariant (Remark <ref>), we know that φ(V(G_⊥)) = V(G'_⊥). We claim that φ induces an isomorphism from (G_⊥, λ_⊥) to (G'_⊥, λ'_⊥). For this it suffices to observe that the definitions of G_⊥ from G and λ_⊥ from λ are isomorphism-invariant.

(⟸) Conversely, suppose φ is an isomorphism from the graph (G_⊥, λ_⊥) to (G'_⊥, λ'_⊥). Let {S_1, …, S_t} ≔ {S | there is (S,K) ∈ P_0(G)} be the set of separators that appear in P_0(G). Since φ respects the colorings λ_⊥ and λ'_⊥, we can conclude that {φ(S_1), …, φ(S_t)} = {S | there is (S,K) ∈ P_0(G')}. For each j ∈ {1, …, t} we choose an isomorphism φ_j from G^S_j_⊤ to G'^φ(S_j)_⊤ that maps each s ∈ S_j to φ(s) ∈ φ(S_j). We know that such an isomorphism exists because φ respects the colorings λ_⊥ and λ'_⊥.
We define a map φ̂ from (G, λ) to (G', λ') by setting φ̂(v) ≔ φ(v) if v ∈ V(G_⊥), and φ̂(v) ≔ φ_j(v) if there is a set K ⊆ V(G) with v ∈ K and (S_j,K) ∈ P_0(G). This map is well-defined since by Parts 1 and 2 of Lemma <ref> the elements in the second components of pairs in P_0(G) are disjoint and not contained in V(G_⊥). Moreover, the map is an isomorphism, since it respects all edges. Finally, by construction, it also respects the colors of vertices and arcs.

§ REDUCTION TO VERTEX-COLORED 2-CONNECTED GRAPHS

It is easy to see that for a hereditary graph class 𝒢 and k ≥ 2, the k-dimensional WL-algorithm distinguishes all (vertex-colored) graphs in 𝒢 if it distinguishes all (vertex-colored) connected graphs in 𝒢. (By a vertex-colored graph from 𝒢 we mean more precisely a colored graph whose underlying uncolored graph lies in 𝒢.) For this, one simply has to observe that for two non-isomorphic connected components, the sets of colors which the WL-algorithm computes for their vertices are disjoint. In this section we show a stronger statement replacing the assumption on connected graphs by an assumption on 2-connected graphs as follows.

Let 𝒢 be a hereditary graph class. If, for k ≥ 2, the k-dimensional Weisfeiler-Leman algorithm distinguishes every two non-isomorphic 2-connected vertex-colored graphs (H, λ) and (H', λ') with H, H' ∈ 𝒢 from each other, then the k-dimensional Weisfeiler-Leman algorithm distinguishes all non-isomorphic graphs in 𝒢.

For the rest of this section, let 𝒢 be a hereditary graph class. Recall that for a graph G ∈ 𝒢 with an initial vertex coloring or arc coloring λ, the coloring χ^k_G is the stable k-tuple coloring produced by the k-dimensional WL-algorithm on (G,λ). For ℓ vertices u_1, …, u_ℓ with ℓ < k, we define χ^k_G(u_1, …, u_ℓ) ≔ χ^k_G(u_1, …, u_ℓ, u_ℓ, …, u_ℓ) to be the coloring of the k-tuple resulting from extending the ℓ-tuple by repeating its last entry k-ℓ times.

To prove Theorem <ref>, we first show that the 2-dimensional WL-algorithm distinguishes pairs of vertices that lie in a common 2-connected component from pairs that do not.

Assume k ≥ 2 and let G, H be two graphs. Let u and v be vertices from the same 2-connected component of G and let u' and v' be vertices that are not contained in a common 2-connected component of H. Then χ^k_G(u,v) ≠ χ^k_H(u',v').

To improve readability, in this proof we omit the superscripts k, i.e., we write χ_G and χ_H instead of χ^k_G and χ^k_H, respectively. For an integer i and vertices x,y, denote by W_i(x,y) the number of walks of length exactly i from x to y. (It will be clear from context in which graph we count the number of walks.) By induction on i, it is easy to see that for k ≥ 2 it holds that W_i(x,y) ≠ W_i(x',y') implies χ_G(x,y) ≠ χ_H(x',y') (<cit.>). Thus, it suffices to show that for some i, we have W_i(u,v) ≠ W_i(u',v'). Since u' and v' are not contained in the same 2-connected component, there is some cut vertex w' ∉ {u',v'} such that every walk from u' to v' passes w'. Suppose that there does not exist a vertex w such that for all i the following hold:

* (1) W_i(u,w) = W_i(u',w')
* (2) W_i(w,w) = W_i(w',w')
* (3) W_i(w,v) = W_i(w',v').

Then for every vertex w it holds that χ_G(u,w) ≠ χ_H(u',w') or χ_G(w,w) ≠ χ_H(w',w') or χ_G(w,v) ≠ χ_H(w',v'). If χ_G(w) ≠ χ_H(w'), then χ_G(u,w) ≠ χ_H(u',w'), and thus for every vertex w it holds that χ_G(u,w) ≠ χ_H(u',w') or χ_G(w,v) ≠ χ_H(w',v'). In other words, there is no vertex w such that (χ_G(w,v), χ_G(u,w)) = (χ_H(w',v'), χ_H(u',w')). By the definition of the WL-algorithm this implies that χ_G(u,v) ≠ χ_H(u',v').

Now suppose that there is a vertex w such that for all i Conditions 1, 2 and 3 hold. Then for every i the number of walks of length i from u to v which pass w equals the number of walks from u' to v' which pass w'. However, there must be a walk from u to v which avoids w: Conditions 1 and 3 with i = 0 imply w ∉ {u,v}, and u and v lie in a common 2-connected component. Let d be its length. We have W_d(u,v) > W_d(u',v') and thus χ_G(u,v) ≠ χ_H(u',v').
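The walk-counting criterion is easy to check numerically (an illustration of our own, not from the paper): W_i(x,y) is the (x,y) entry of the i-th power of the adjacency matrix, so differing entries for some i certify that the 2-dimensional WL-algorithm assigns the two pairs different colors.

```python
import numpy as np

# G is a 4-cycle: u = 0 and v = 2 lie in a common 2-connected component.
A_G = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
# H is the path 0 - 1 - 2: u' = 0 and v' = 2 are separated by cut vertex 1.
A_H = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])

for i in range(1, 4):
    print(i, np.linalg.matrix_power(A_G, i)[0, 2],
             np.linalg.matrix_power(A_H, i)[0, 2])
# The counts of walks of length 2 already differ (2 vs. 1), so the
# 2-dimensional WL-algorithm distinguishes the pairs (u,v) and (u',v').
```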
Next we argue that for k ≥ 2, the k-dimensional WL-algorithm distinguishes cut vertices from other vertices.

Let k ≥ 2 and assume G, H are connected graphs. Let w ∈ V(G) and w' ∈ V(H) be vertices such that G - {w} is connected and H - {w'} is disconnected. Then χ^k_G(w) ≠ χ^k_H(w').

Let u' and v' be two neighbors of w' not sharing a common 2-connected component in H. Note that such vertices do not exist for w in G. It suffices to show that for all u ∈ V(G) it holds that χ^k_G(u,w) ≠ χ^k_H(u',w'). By Theorem <ref>, the color χ^k_H(u',w') encodes that there is a vertex v' which is a neighbor of w' and which is not contained in the same 2-connected component as u'. For u and w, such a vertex does not exist in G.

We prove Theorem <ref> by induction over the sizes of the input graphs. The strategy is to show that on input (G, λ) the WL-algorithm implicitly computes the graph (G_⊥, λ_⊥) and to then apply Lemma <ref>.

Let k ≥ 2 and assume G, H are connected graphs that are not 2-connected. For vertices v ∈ V(G_⊥) and w ∈ V(H) ∖ V(H_⊥) we have χ^k_G(v) ≠ χ^k_H(w).

Note that for a connected but not 2-connected graph G, a vertex v ∈ V(G) is in V(G_⊥) if and only if it is a cut vertex or there are at least two cut vertices that lie in the same 2-connected component as v. The equivalent statement holds for H. If v is a cut vertex of G, then the lemma follows immediately from Corollary <ref>. If v is not a cut vertex, then there are at least two cut vertices u and u' lying in the same 2-connected component as v. Note that there are no two such vertices for w. By Corollary <ref>, u and u' obtain colors distinct from the colors of non-cut vertices. Thus, also the colors χ^k_G(v,u) and χ^k_G(v,u') are distinct from all colors of edges from v to non-cut vertices. Moreover, Theorem <ref> yields that the colors χ^k_G(v,u) and χ^k_G(v,u') also encode that v, u, u' all share a common 2-connected component. This information about the existence of such u and u' is contained in the color χ^k_G(v), and thus χ^k_G(v) ≠ χ^k_H(w).

For graphs G, G' ∈ 𝒢 with vertex colorings λ and λ', respectively, assume (s,K) ∈ P_0(G) and (s',K') ∈ P_0(G'). For k ≥ 2 suppose the k-dimensional Weisfeiler-Leman algorithm distinguishes all non-isomorphic vertex-colored 2-connected graphs in 𝒢. Assume there is no isomorphism from (G_⊤^(s,K), λ_⊤^(s,K)) to (G_⊤'^(s',K'), λ_⊤'^(s',K')) that maps s to s'. Then {χ^k_G(s,v) | v ∈ K} ∩ {χ^k_G'(s',v') | v' ∈ K'} = ∅.

If χ^k_G(s) ≠ χ^k_G'(s'), then the conclusion of the lemma is obvious. Thus, we can assume otherwise. We have already seen with Corollary <ref> that cut vertices obtain different colors than non-cut vertices. Thus, we can assume that G and G' are already colored in a way such that s and s' have a color different from the colors of vertices in K ∪ K'. With Theorem <ref> we will now argue that for v ∈ K and v' ∈ K' we have χ^k_G(v) ≠ χ^k_G'(v'), which implies the lemma.
For readability, we drop the superscripts (s,K) and (s',K'). We will show by induction that if the lemma does not hold, then for all u,v ∈ K ∪ {s} and all u',v' ∈ K' ∪ {s'} with {u,v} ⊈ {s} and {u',v'} ⊈ {s'} the following implication is true:

iχ^k_G_⊤(u,v) ≠ iχ^k_G'_⊤(u',v') ⇒ iχ^k_G(u,v) ≠ iχ^k_G'(u',v').

For i = 0 the claim follows by definition of the colorings λ_⊤ and λ'_⊤. For the induction step, assume that there exist vertices x,y ∈ K ∪ {s}, x',y' ∈ K' ∪ {s'} such that {x,y} ⊈ {s}, {x',y'} ⊈ {s'} with iχ^k_G_⊤(x,y) = iχ^k_G'_⊤(x',y') and i+1χ^k_G_⊤(x,y) ≠ i+1χ^k_G'_⊤(x',y'). Thus, there must be a color tuple (c_1,c_2) such that the sets

M ≔ {w ∈ V(G_⊤) ∖ {x,y} | (iχ^k_G_⊤(w,y), iχ^k_G_⊤(x,w)) = (c_1,c_2)} and
M' ≔ {w' ∈ V(G'_⊤) ∖ {x',y'} | (iχ^k_G'_⊤(w',y'), iχ^k_G'_⊤(x',w')) = (c_1,c_2)}

do not have the same cardinality. Let

D ≔ {(iχ^k_G(w,y), iχ^k_G(x,w)) | w ∈ M} ∪ {(iχ^k_G'(w',y'), iχ^k_G'(x',w')) | w' ∈ M'}.

By induction and by Theorem <ref> we have that {w ∈ V(G) ∖ {x,y} | (iχ^k_G(w,y), iχ^k_G(x,w)) ∈ D} = M and {w' ∈ V(G') ∖ {x',y'} | (iχ^k_G'(w',y'), iχ^k_G'(x',w')) ∈ D} = M', and hence these sets do not have the same cardinality. Thus, i+1χ^k_G(x,y) ≠ i+1χ^k_G'(x',y').

Having shown Implication (<ref>), it suffices to show that {χ^k_G_⊤(s,v) | v ∈ K} ∩ {χ^k_G'_⊤(s',v') | v' ∈ K'} = ∅. However, this follows directly from the assumption that the k-dimensional WL-algorithm distinguishes every pair of non-isomorphic vertex-colored 2-connected graphs in 𝒢 and that the graphs (G_⊤^(s,K), λ_⊤^(s,K)) and (G_⊤'^(s',K'), λ_⊤'^(s',K')) are 2-connected.

With this, we can prove the following.

Assume k ≥ 2 and suppose the k-dimensional Weisfeiler-Leman algorithm distinguishes all non-isomorphic vertex-colored 2-connected graphs in 𝒢. For two graphs G, G' ∈ 𝒢 with vertex colorings λ, λ', respectively, suppose s ∈ V(G), s' ∈ V(G'). Assume there is no isomorphism from (G_⊤^s, λ_⊤^s) to (G_⊤'^s', λ_⊤'^s') that maps s to s'. Then χ^k_G(s) ≠ χ^k_G'(s').

Assume, to the contrary, that χ^k_G(s) = χ^k_G'(s'). Further suppose that {K_1, …, K_t} = {K | (s,K) ∈ P_0(G)} and that {K'_1, …, K'_t'} = {K' | (s',K') ∈ P_0(G')}. From (G_⊤^s, λ_⊤^s) ≇ (G_⊤'^s', λ_⊤'^s') we conclude that there is a vertex-colored graph (H, λ_H) such that the sets I ≔ {j | (G_⊤^(s,K_j), λ_⊤^(s,K_j)) ≅ (H, λ_H)} and I' ≔ {j | (G'_⊤^(s',K'_j), λ_⊤'^(s',K'_j)) ≅ (H, λ_H)} have different cardinalities. Note that all K_j with j ∈ I and all K'_j with j ∈ I' have the same cardinality. We know by Lemma <ref> that for v ∈ K_i with i ∈ I and v' ∈ K'_j with j ∉ I' we have χ^k_G(s,v) ≠ χ^k_G'(s',v'). Letting C ≔ {χ^k_G(s,v) | i ∈ I and v ∈ K_i}, the number of vertices v with χ^k_G(s,v) ∈ C differs from the number of vertices v' with χ^k_G'(s',v') ∈ C. We conclude that χ^k_G(s) ≠ χ^k_G'(s').

Assume k ≥ 2 and suppose the k-dimensional Weisfeiler-Leman algorithm distinguishes all non-isomorphic vertex-colored 2-connected graphs in 𝒢. Let G, G' ∈ 𝒢 be connected graphs that are not 2-connected with vertex colorings λ, λ', respectively. If for vertices v_1,v_2 ∈ V(G_⊥) and v'_1,v'_2 ∈ V(G'_⊥) we have χ^k_G_⊥(v_1,v_2) ≠ χ^k_G'_⊥(v'_1,v'_2), then χ^k_G(v_1,v_2) ≠ χ^k_G'(v'_1,v'_2).

By Lemma <ref>, with respect to the colorings χ^k_G and χ^k_G', the vertices in V(G_⊥) and V(G'_⊥) have different colors than the vertices in V(G) ∖ V(G_⊥) and V(G') ∖ V(G'_⊥). Thus, it suffices to show that the colorings χ^k_G and χ^k_G' refine the colorings λ_⊥ and λ'_⊥, respectively.
For this, by the definition of λ_⊥ and λ'_⊥, it suffices to show the following two statements.

* If we have that v_1 = v_2 and v'_1 = v'_2 and also iso((G^{v_1}_⊤, λ^{v_1}_⊤)_(v_1)) ≠ iso((G'^{v'_1}_⊤, λ'^{v'_1}_⊤)_(v'_1)), then χ^k_G(v_1) ≠ χ^k_G'(v'_1).
* If we have {v_1,v_2} ∈ E(G) and {v'_1,v'_2} ∈ E(G') and also λ_⊥(v_1,v_2) ≠ λ'_⊥(v'_1,v'_2), then χ^k_G(v_1,v_2) ≠ χ^k_G'(v'_1,v'_2).

For the first item, from iso((G^{v_1}_⊤, λ^{v_1}_⊤)_(v_1)) ≠ iso((G'^{v'_1}_⊤, λ'^{v'_1}_⊤)_(v'_1)) we know that v_1 and v'_1 must be cut vertices. Thus, the statement is exactly Lemma <ref>. For the second item, from the definition of λ_⊥ and λ'_⊥ we obtain λ(v_1,v_2) ≠ λ'(v'_1,v'_2), which implies χ^k_G(v_1,v_2) ≠ χ^k_G'(v'_1,v'_2).

Let (G, λ) and (G', λ') be vertex-colored graphs in 𝒢. We prove the statement by induction on |V(G)| + |V(G')|. If both graphs are 2-connected, then the statement follows directly from the assumptions. If exactly one of the graphs is 2-connected, then exactly one of the graphs has a cut vertex and the statement follows from Lemma <ref>. Thus suppose both graphs are connected but not 2-connected. Since (G, λ) ≇ (G', λ'), we know by Lemma <ref> that (G_⊥, λ_⊥) ≇ (G'_⊥, λ'_⊥). By Lemma <ref> the vertices in V(G_⊥) and V(G'_⊥) have different colors than the vertices in V(G) ∖ V(G_⊥) and V(G') ∖ V(G'_⊥). Moreover, by Corollary <ref>, the partition of the vertices and arcs induced by the coloring χ^k_G restricted to V(G_⊥) is finer than the partition induced by λ_⊥. Similarly, the partition induced by χ^k_G' on V(G'_⊥) is finer than the partition induced by λ'_⊥. By induction, the k-dimensional WL-algorithm distinguishes (G_⊥, λ_⊥) from (G'_⊥, λ'_⊥). Thus, the k-dimensional WL-algorithm distinguishes (G, λ) from (G', λ').

§ REDUCTION TO ARC-COLORED 3-CONNECTED GRAPHS

In this section our aim is to weaken the assumption from Theorem <ref>, which requires that 2-connected graphs are distinguished, to an assumption of 3-connected graphs being distinguished. The strategy to prove our reduction follows similar ideas as those used in Section <ref>. It relies on the assumption that the input consists of vertex-colored 2-connected graphs, which we can make without loss of generality by the reduction from the last section. Now we consider the decomposition of vertex-colored 2-connected graphs into their so-called "3-connected components". Most of the results stated in Section <ref> have analogous formulations for the 3- or higher-dimensional WL-algorithm on 2-connected graphs. But a 3-connected component of a 2-connected graph G is not necessarily a subgraph and may only be a minor of G. Thus, we require that the graph class 𝒢 is minor-closed. Furthermore, to enable the inductive approach we will now have to consider graphs G in which the 2-tuples (u,v) with {u,v} ∈ E(G), i.e., the arcs, are also colored. However, it turns out that it is not sufficient to require that arc-colored graphs are distinguished. In fact we need the following stronger property.

Let ℋ be a set of graphs. We say that the k-dimensional WL-algorithm correctly determines orbits in ℋ if for all arc-colored graphs (G, λ), (G', λ') with G, G' ∈ ℋ and all vertices s ∈ V(G) and s' ∈ V(G') the following holds: there exists an isomorphism from (G, λ) to (G', λ') mapping s to s' if and only if χ^k_G(s) = χ^k_G'(s'). Note that, in this case, setting G' = G shows that the vertex color classes obtained by an application of the k-dimensional WL-algorithm to an arc-colored graph (G, λ) are precisely the orbits of the automorphism group of G with respect to λ.
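On small instances this property can be tested directly: the following brute-force sketch (our own; exponential in the number of vertices and only meant for tiny graphs) computes the vertex orbits, which can then be compared with the color classes produced by the WL-algorithm.

```python
from itertools import permutations

def vertex_orbits(adj):
    """Brute-force orbits of the automorphism group of a small graph,
    given as an adjacency dict with vertex set 0..n-1."""
    n = len(adj)
    edges = {frozenset((u, w)) for u in adj for w in adj[u]}
    autos = [p for p in permutations(range(n))
             if {frozenset((p[u], p[w]))
                 for u in adj for w in adj[u]} == edges]
    return {frozenset(p[v] for p in autos) for v in range(n)}

# Example: the path 0 - 1 - 2 has orbits {0, 2} and {1}.
print(vertex_orbits({0: [1], 1: [0, 2], 2: [1]}))
```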
The main result in this section is the following reduction theorem.

Let 𝒢 be a minor-closed graph class and assume k ≥ 3. Suppose the k-dimensional Weisfeiler-Leman algorithm correctly determines orbits on all arc-colored 3-connected graphs in 𝒢. Then the k-dimensional Weisfeiler-Leman algorithm distinguishes all non-isomorphic graphs in 𝒢.

The next corollary states that the 3-dimensional WL-algorithm distinguishes 2-separators from other pairs of vertices.

Assume k ≥ 3 and let G and H be 2-connected graphs. Let u,v,u',v' be vertices such that G - {u,v} is disconnected and H - {u',v'} is connected. Then χ^k_G(u,v) ≠ χ^k_H(u',v').

Consider the connected graphs G - {u} and H - {u'}. In the first graph v is a cut vertex but in the second graph v' is not a cut vertex. Thus, by Corollary <ref>, we have that χ^k-1_G-{u}(v) ≠ χ^k-1_H-{u'}(v') and thus χ^k_G(u,v) ≠ χ^k_H(u',v').

Just as we did in the previous section, we want to apply a recursive strategy that relies on Lemma <ref>. However, to apply that lemma we require a minimum degree of 3. The following lemma states that vertices of degree 2 can be removed.

Let 𝒢 be a minor-closed graph class and assume k ≥ 2. Suppose the k-dimensional Weisfeiler-Leman algorithm correctly determines orbits on all arc-colored graphs in 𝒢 of minimum degree at least 3. Then the k-dimensional Weisfeiler-Leman algorithm distinguishes all non-isomorphic arc-colored graphs in 𝒢.

The proof is a basic exercise regarding the WL-algorithm. We give a sketch. By Theorem <ref> we can assume that the graphs are 2-connected. We first observe that the 2-dimensional WL-algorithm identifies graphs of maximum degree at most 2. Since vertices of degree at least 3 obtain different colors than vertices of degree at most 2, it suffices now to observe that for each i the color χ^k_G(u,v) implicitly encodes the number of paths of length exactly i from u to v whose inner vertices are of degree 2. Inductively we can then consider the minors obtained by retaining vertices of degree at least 3 and connecting two such vertices with an edge if there is a path between them whose inner vertices all have degree 2. The edge is colored with a color that encodes the multiset of lengths of paths between the two vertices only having inner vertices of degree 2.
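The minor construction used in this proof sketch is easy to carry out explicitly; the following sketch (our own illustration, with hypothetical names) suppresses the degree-2 chains of a 2-connected graph and records the multiset of contracted path lengths as edge colors:

```python
from collections import Counter

def suppress_degree_two(adj):
    """Return the colored minor of a 2-connected graph (adjacency dict):
    for each pair of vertices of degree >= 3 joined by paths whose inner
    vertices all have degree 2, count those paths by length."""
    core = {v for v in adj if len(adj[v]) >= 3}
    counts = Counter()
    for v in core:
        for w in adj[v]:
            prev, cur, length = v, w, 1
            while cur not in core:
                # inner vertex of degree 2: the continuation is unique
                nxt = next(x for x in adj[cur] if x != prev)
                prev, cur, length = cur, nxt, length + 1
            counts[frozenset((v, cur)), length] += 1
    # every path is discovered once from each of its ends, so halve
    return {(e, l): c // 2 for (e, l), c in counts.items()}
```

The resulting dictionary maps each minor edge, together with a path length, to its multiplicity, which is exactly the edge-color information the proof requires.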
The lemma allows us to focus on graphs with minimum degree 3. Doing so, in analogy to Lemma <ref>, the following proposition gives a characterization of the vertices in V(G_⊥).

Let G be a 2-connected graph of minimum degree at least 3 that is not 3-connected. Then x ∉ V(G_⊥) if and only if there exists a vertex u contained in some minimum separator of G such that x ∉ V((G - {u})_⊥) and such that the (unique) 2-connected component containing x in G - {u} has exactly one vertex belonging to a minimum separator of G.

If x ∉ V(G_⊥), then x ∈ K for some ({u,s},K) ∈ P_0(G). Then (s,K) ∈ P_0(G - {u}). Moreover, by Part 3 of Lemma <ref>, no vertex of K is contained in a minimum separator of G. This implies that s is the only vertex in the 2-connected component of x in G - {u} that belongs to a minimum separator of G. Conversely, suppose u is a vertex in a minimum separator of G such that x ∉ V((G - {u})_⊥) and the 2-connected component of x in G - {u} has exactly one vertex belonging to a minimum separator of G. Since x ∉ V((G - {u})_⊥), there is (s,K) ∈ P_0(G - {u}) with x ∈ K. Then K ∪ {s} is a 2-connected component of G - {u} and, since s is the only vertex in this 2-connected component that is contained in a minimum separator of G, by Part 3 of Lemma <ref>, we have ({u,s},K) ∈ P_0(G).

Assume k ≥ 3 and let G, H be 2-connected graphs of minimum degree at least 3 that are not 3-connected. Then for vertices v ∈ V(G_⊥) and w ∈ V(H) ∖ V(H_⊥) we have χ^k_G(v) ≠ χ^k_H(w).

Suppose that v ∈ V(G_⊥) and w ∈ V(H) ∖ V(H_⊥). Let u be a vertex contained in some minimum separator of H such that w ∉ V((H - {u})_⊥) and such that the 2-connected component of w in H - {u} has exactly one vertex that is contained in a minimum separator of H. Such a vertex exists by Proposition <ref>. We argue that χ^k_G(v,t) ≠ χ^k_H(w,u) for all t ∈ V(G). If t is not contained in any minimum separator, then this follows from Corollary <ref>. Otherwise we know that v ∈ V((G - {t})_⊥) or the 2-connected component of v in G - {t} does not have exactly one vertex that is contained in a minimum separator of G. In the first case we use Lemma <ref> and in the second case we use Theorem <ref> and Corollary <ref> to conclude that χ^k-1_G-{t}(v) ≠ χ^k-1_H-{u}(w) and thus χ^k_G(v,t) ≠ χ^k_H(w,u).

For k ≥ 3 suppose the k-dimensional Weisfeiler-Leman algorithm correctly determines orbits on all arc-colored 3-connected graphs in 𝒢. Suppose G, G' ∈ 𝒢 are arc-colored 2-connected graphs of minimum degree at least 3. Assume that ({s_1,s_2},K) ∈ P_0(G) and ({s'_1,s'_2},K') ∈ P_0(G'). If no isomorphism from (G_⊤^({s_1,s_2},K), λ_⊤^({s_1,s_2},K)) to (G_⊤'^({s'_1,s'_2},K'), λ_⊤'^({s'_1,s'_2},K')) maps s_1 to s'_1 and s_2 to s'_2, then {χ^k_G(s_1,s_2,v) | v ∈ K} ∩ {χ^k_G'(s'_1,s'_2,v') | v' ∈ K'} = ∅.

This is an adaptation of the proof of Lemma <ref>. If χ^k_G(s_1,s_2) ≠ χ^k_G'(s'_1,s'_2), then the conclusion of the lemma is obvious. Thus, we can assume otherwise. We have already seen with Corollary <ref> that 2-separators obtain different colors than other pairs of vertices. Thus, we can assume that G and G' are already colored in a way such that (s_1,s_2), (s_2,s_1), (s'_1,s'_2), (s'_2,s'_1) have colors different from the colors of pairs of vertices (t_1,t_2) with {t_1,t_2} ∩ (K ∪ K') ≠ ∅. This immediately implies that we can assume that s_1, s_2, s'_1, s'_2 have colors different from the colors of vertices that are not contained in any 2-separator in the graphs.

We will now argue that for v ∈ K and v' ∈ K' we have χ^k_G(v) ≠ χ^k_G'(v'), which implies the lemma. For readability, we drop the superscripts ({s_1,s_2},K) and ({s'_1,s'_2},K'). We will show by induction that if the lemma does not hold, then for all i ∈ ℕ, all u,v,w ∈ K ∪ {s_1,s_2} and all u',v',w' ∈ K' ∪ {s'_1,s'_2} with {u,v,w} ⊈ {s_1,s_2} and {u',v',w'} ⊈ {s'_1,s'_2} the following implication holds:

iχ^k_G_⊤(u,v,w) ≠ iχ^k_G'_⊤(u',v',w') ⇒ iχ^k_G(u,v,w) ≠ iχ^k_G'(u',v',w').
For the induction base with i = 0, suppose 0χ^k_G_⊤(u,v,w) ≠ 0χ^k_G'_⊤(u',v',w'). Then either there is no isomorphism from G_⊤[{u,v,w}] to G'_⊤[{u',v',w'}] mapping u to u', v to v' and w to w', or λ_⊤(u,v,w) ≠ λ'_⊤(u',v',w'). In the first case, by definition of the graphs G_⊤ and G'_⊤ and their colorings, we immediately get 0χ^k_G(u,v,w) ≠ 0χ^k_G'(u',v',w'), since an isomorphism respecting the tuples on the side of G and G' would induce such an isomorphism on the side of G_⊤ and G'_⊤. In the second case, since G and G' are arc-colored graphs, we have λ_⊤(u,v,w) = λ_⊤(u,v) ≠ λ'_⊤(u',v') = λ'_⊤(u',v',w'). If λ(u,v) ≠ λ'(u',v'), then 0χ^k_G(u,v,w) ≠ 0χ^k_G'(u',v',w'). Otherwise, from the definitions of λ_⊤ and λ'_⊤ we conclude that exactly one of (u,v) ∈ E(G) and (u',v') ∈ E(G') holds, or exactly one of {u,v} ⊆ {s_1,s_2} and {u',v'} ⊆ {s'_1,s'_2} holds. In the first subcase, we conclude that 0χ^k_G(u,v,w) ≠ 0χ^k_G'(u',v',w'). In the second subcase, since ({s_1,s_2},K) ∈ P_0(G) and ({s'_1,s'_2},K') ∈ P_0(G'), we know with Part 3 of Lemma <ref> that either at least one of u and v or at least one of u' and v' is a vertex not contained in any 2-separator. Since we have assumed that s_1, s_2, s'_1, s'_2 have colors that are different from the colors of vertices not contained in any 2-separator, we also conclude that 0χ^k_G(u,v,w) ≠ 0χ^k_G'(u',v',w').

For the induction step, assume that there exist vertices x,y,z ∈ K ∪ {s_1,s_2} and vertices x',y',z' ∈ K' ∪ {s'_1,s'_2} such that {x,y,z} ⊈ {s_1,s_2} and {x',y',z'} ⊈ {s'_1,s'_2} with iχ^k_G_⊤(x,y,z) = iχ^k_G'_⊤(x',y',z') and i+1χ^k_G_⊤(x,y,z) ≠ i+1χ^k_G'_⊤(x',y',z'). Thus, there must be a color triple (c_1,c_2,c_3) such that the sets

M ≔ {w ∈ V(G_⊤) ∖ {x,y,z} | (iχ^k_G_⊤(w,y,z), iχ^k_G_⊤(x,w,z), iχ^k_G_⊤(x,y,w)) = (c_1,c_2,c_3)} and
M' ≔ {w' ∈ V(G'_⊤) ∖ {x',y',z'} | (iχ^k_G'_⊤(w',y',z'), iχ^k_G'_⊤(x',w',z'), iχ^k_G'_⊤(x',y',w')) = (c_1,c_2,c_3)}

do not have the same cardinality. Let

D ≔ {(iχ^k_G(w,y,z), iχ^k_G(x,w,z), iχ^k_G(x,y,w)) | w ∈ M} ∪ {(iχ^k_G'(w',y',z'), iχ^k_G'(x',w',z'), iχ^k_G'(x',y',w')) | w' ∈ M'}.

By induction and by Theorem <ref> we have that {w ∈ V(G) ∖ {x,y,z} | (iχ^k_G(w,y,z), iχ^k_G(x,w,z), iχ^k_G(x,y,w)) ∈ D} = M and {w' ∈ V(G') ∖ {x',y',z'} | (iχ^k_G'(w',y',z'), iχ^k_G'(x',w',z'), iχ^k_G'(x',y',w')) ∈ D} = M'. Hence these sets do not have the same cardinality. Thus, i+1χ^k_G(x,y,z) ≠ i+1χ^k_G'(x',y',z').

Having shown Implication (<ref>), it suffices to show that {χ^k_G_⊤(s_1,s_2,v) | v ∈ K} ∩ {χ^k_G'_⊤(s'_1,s'_2,v') | v' ∈ K'} = ∅. For this, it suffices to prove that χ^k_G_⊤(s_1) ≠ χ^k_G'_⊤(s'_1) holds. The graphs (G_⊤^({s_1,s_2},K), λ_⊤^({s_1,s_2},K)) and (G_⊤'^({s'_1,s'_2},K'), λ_⊤'^({s'_1,s'_2},K')) are 3-connected. Thus, if we had that χ^k_G_⊤(s_1) = χ^k_G'_⊤(s'_1), there would have to be an isomorphism from the graph (G_⊤^({s_1,s_2},K), λ_⊤^({s_1,s_2},K)) to (G_⊤'^({s'_1,s'_2},K'), λ_⊤'^({s'_1,s'_2},K')) that maps s_1 to s'_1, since we have assumed that the k-dimensional WL-algorithm correctly determines orbits on all arc-colored 3-connected graphs in 𝒢. However, by the definition of λ_⊤^({s_1,s_2},K) and λ_⊤'^({s'_1,s'_2},K'), this isomorphism would also map s_2 to s'_2, contradicting the assumptions of the lemma.

Using the lemma, we can show the following.

Assume k ≥ 3 and suppose the k-dimensional Weisfeiler-Leman algorithm correctly determines orbits on all arc-colored 3-connected graphs in 𝒢. Assume G, G' ∈ 𝒢 are arc-colored 2-connected graphs of minimum degree at least 3 and let {s_1,s_2} ⊆ V(G) and {s'_1,s'_2} ⊆ V(G') be 2-separators of G and G', respectively. If no isomorphism from (G_⊤^{s_1,s_2}, λ_⊤^{s_1,s_2}) to (G_⊤'^{s'_1,s'_2}, λ_⊤'^{s'_1,s'_2}) maps s_1 to s'_1 and s_2 to s'_2, then χ^k_G(s_1,s_2) ≠ χ^k_G'(s'_1,s'_2).

This proof is similar to the proof of Lemma <ref>.
Suppose that {K_1, …, K_t} = {K | ({s_1,s_2},K) ∈ P_0(G)} and that {K'_1, …, K'_t'} = {K' | ({s'_1,s'_2},K') ∈ P_0(G')}. Since there is no isomorphism from (G_⊤^{s_1,s_2}, λ_⊤^{s_1,s_2}) to (G_⊤'^{s'_1,s'_2}, λ_⊤'^{s'_1,s'_2}) that maps s_1 to s'_1 and s_2 to s'_2, there is an arc-colored graph (H, λ_H) such that the sets I ≔ {j | (G_⊤^({s_1,s_2},K_j), λ_⊤^({s_1,s_2},K_j))_(s_1,s_2) ≅ (H, λ_H)} and I' ≔ {j | (G'_⊤^({s'_1,s'_2},K'_j), λ_⊤'^({s'_1,s'_2},K'_j))_(s'_1,s'_2) ≅ (H, λ_H)} have different cardinalities. Note that all K_j with j ∈ I and all K'_j with j ∈ I' have the same cardinality. We know by Lemma <ref> that for v ∈ K_i with i ∈ I and v' ∈ K'_j with j ∉ I' we have χ^k_G(s_1,s_2,v) ≠ χ^k_G'(s'_1,s'_2,v'). Letting C ≔ {χ^k_G(s_1,s_2,v) | i ∈ I and v ∈ K_i}, the sets {v | χ^k_G(s_1,s_2,v) ∈ C} and {v' | χ^k_G'(s'_1,s'_2,v') ∈ C} do not have the same cardinality. We conclude that χ^k_G(s_1,s_2) ≠ χ^k_G'(s'_1,s'_2).

Assume k ≥ 3 and suppose the k-dimensional Weisfeiler-Leman algorithm correctly determines orbits on all arc-colored 3-connected graphs in 𝒢. Let G, G' ∈ 𝒢 be arc-colored 2-connected graphs of minimum degree at least 3. If for vertices v_1,v_2 ∈ V(G_⊥) and v'_1,v'_2 ∈ V(G'_⊥) we have χ^k_G_⊥(v_1,v_2) ≠ χ^k_G'_⊥(v'_1,v'_2), then χ^k_G(v_1,v_2) ≠ χ^k_G'(v'_1,v'_2).

By Lemma <ref>, with respect to the colorings χ^k_G and χ^k_G', the vertices in V(G_⊥) and V(G'_⊥) have different colors than the vertices in V(G) ∖ V(G_⊥) and V(G') ∖ V(G'_⊥). Thus, it suffices to show that the colorings χ^k_G and χ^k_G' refine the colorings λ_⊥ and λ'_⊥, respectively. For this, by the definition of λ_⊥ and λ'_⊥, it suffices to show the following two statements.

* If {v_1,v_2} and {v'_1,v'_2} are 2-separators and iso((G^{v_1,v_2}_⊤, λ^{v_1,v_2}_⊤)_(v_1,v_2)) ≠ iso((G'^{v'_1,v'_2}_⊤, λ'^{v'_1,v'_2}_⊤)_(v'_1,v'_2)), then χ^k_G(v_1,v_2) ≠ χ^k_G'(v'_1,v'_2).
* If v_1 = v_2 and v'_1 = v'_2, or {v_1,v_2} ∈ E(G) and {v'_1,v'_2} ∈ E(G') but {v_1,v_2} and {v'_1,v'_2} are not 2-separators, and if λ_⊥(v_1,v_2) ≠ λ'_⊥(v'_1,v'_2), then χ^k_G(v_1,v_2) ≠ χ^k_G'(v'_1,v'_2).

The first item is exactly Lemma <ref>. For the second item, from the definition of λ_⊥ and λ'_⊥ we obtain λ(v_1,v_2) ≠ λ'(v'_1,v'_2), which implies χ^k_G(v_1,v_2) ≠ χ^k_G'(v'_1,v'_2).

By Theorem <ref> it suffices to show the statement for vertex-colored 2-connected graphs. To allow induction, we will show the statement for arc-colored 2-connected graphs. Let (G, λ) and (G', λ') be arc-colored 2-connected graphs in 𝒢. We prove the statement by induction on |V(G)| + |V(G')|. If both graphs are 3-connected, then the statement follows directly from the assumptions. If exactly one of the graphs is 3-connected, then exactly one of the graphs has a 2-separator and the statement follows from Corollary <ref>. Thus suppose both graphs are not 3-connected. By Lemma <ref> we can assume that both graphs have minimum degree at least 3. Since (G, λ) and (G', λ') are not isomorphic, we know by Lemma <ref> that (G_⊥, λ_⊥) ≇ (G'_⊥, λ'_⊥). Note that G_⊥ and G'_⊥ are 2-connected. By Lemma <ref> the vertices in V(G_⊥) and V(G'_⊥) have different colors than the vertices in V(G) ∖ V(G_⊥) and V(G') ∖ V(G'_⊥). Moreover, by Corollary <ref>, the partition of the vertices and arcs induced by the coloring χ^k_G restricted to V(G_⊥) is finer than the partition induced by λ_⊥. Similarly, the partition induced by χ^k_G' on V(G'_⊥) is finer than the partition induced by λ'_⊥. By induction, the k-dimensional WL-algorithm distinguishes (G_⊥, λ_⊥) from (G'_⊥, λ'_⊥).
Thus, the k-dimensional WL-algorithm distinguishes (G, λ) from (G', λ').

In the last two sections we have concerned ourselves with graphs being distinguished (referring to two input graphs from a class) rather than graphs being identified (referring to one input graph from a class and another input graph being arbitrary). However, the theorems we prove also have corresponding versions concerning the latter notion. For a graph G, the Weisfeiler-Leman dimension of G is the least integer k such that the k-dimensional WL-algorithm distinguishes G from every non-isomorphic graph G'.

Let 𝒢 be a minor-closed graph class. The Weisfeiler-Leman dimension of graphs in 𝒢 is at most max{3,k}, where k is the minimal number ℓ such that the ℓ-dimensional Weisfeiler-Leman algorithm correctly determines orbits on all arc-colored 3-connected graphs in 𝒢 and identifies such graphs.

The proof follows almost verbatim the lines of the entire proof of Theorem <ref> outlined in the last two sections, replacing "distinguishes" with "identifies".

§ ARC-COLORED 3-CONNECTED PLANAR GRAPHS

Let G be a 3-connected planar graph. We show that typically we can individualize two vertices in G so that applying the 1-dimensional WL-algorithm yields a discrete graph. There are some 3-connected planar graphs for which this is not the case. However, we can precisely determine the collection of such exceptions. We call a graph G an exception if G is a 3-connected planar graph in which there are no two vertices v,w in G such that χ^1_G_(v,w) is the discrete coloring. Here and in the following we denote by G_(v_1,v_2,…,v_t) the colored graph obtained from the (uncolored) graph G by individualizing the vertices v_1,v_2,…,v_t in that order. More specifically, we let G_(v_1,v_2,…,v_t) be the colored graph (G,λ) with λ(v) ≔ i if v = v_i, and λ(v) ≔ 0 if v ∉ {v_1,…,v_t}. As before, χ^1_H denotes the stable coloring of the 1-dimensional WL-algorithm applied to the graph H.

Let G be a 3-connected planar graph and let v_1,v_2,v_3 be vertices of G. If v_1,v_2,v_3 lie on a common face, then χ^1_G_(v_1,v_2,v_3) is a discrete coloring.

We will use the Spring Embedding Theorem of Tutte <cit.> (see <cit.>), which is as follows. Let v_1,v_2,v_3 be vertices of a common face of G. Let μ_0: V(G) → ℝ^2 be a mapping that is arbitrary on V(G) ∖ {v_1,v_2,v_3} and satisfies μ_0(v_1) = (0,0), μ_0(v_2) = (1,0) and μ_0(v_3) = (0,1). For i ∈ ℕ we define μ_i+1 recursively by setting

μ_i+1(v) ≔ (1/d(v)) ∑_w ∈ N(v) μ_i(w) if v ∉ {v_1,v_2,v_3}, and μ_i+1(v) ≔ μ_i(v) otherwise.

Then Tutte's result says that this recursion converges to a barycentric planar embedding of G, that is, an embedding in which every vertex not in {v_1,v_2,v_3} is contained in the convex hull of its neighbors <cit.>. Since the limit embedding is injective, this implies that after a finite number of steps the map μ_i is injective, i.e., no two vertices are mapped to the same image. From the theorem, we will only require the fact that for some i the map μ_i is injective.

Choose μ_0 with the requirements above and so that all vertices in V(G) ∖ {v_1,v_2,v_3} have the same image, for example μ_0(v) = (1,1) for v ∈ V(G) ∖ {v_1,v_2,v_3}. We argue the following statement by induction on i. For every two vertices v and v', it holds that

μ_i(v) ≠ μ_i(v') ⇒ iχ^1_G_(v_1,v_2,v_3)(v) ≠ iχ^1_G_(v_1,v_2,v_3)(v'),

where iχ^1_G_(v_1,v_2,v_3)(x) denotes the color of vertex x after the i-th iteration when the 1-dimensional WL-algorithm is applied to G_(v_1,v_2,v_3). For i = 0 the statement holds by the definition of μ_0 and the fact that v_1, v_2 and v_3 are singletons in G_(v_1,v_2,v_3). For the step from i to i+1, suppose μ_i+1(v) ≠ μ_i+1(v'). If one of v and v' lies in {v_1,v_2,v_3}, the colors differ because v_1, v_2, v_3 are singletons, and if d(v) ≠ d(v'), the colors differ after the first refinement step. Otherwise ∑_w ∈ N(v) μ_i(w) ≠ ∑_w' ∈ N(v') μ_i(w'), so the multisets {{μ_i(w) | w ∈ N(v)}} and {{μ_i(w') | w' ∈ N(v')}} are different and thus, by induction, the multisets {{iχ^1_G_(v_1,v_2,v_3)(w) | w ∈ N(v)}} and {{iχ^1_G_(v_1,v_2,v_3)(w') | w' ∈ N(v')}} are different, which gives i+1χ^1_G_(v_1,v_2,v_3)(v) ≠ i+1χ^1_G_(v_1,v_2,v_3)(v'). We conclude with the fact that for some i the map μ_i is injective, implying that iχ^1_G_(v_1,v_2,v_3) and therefore also χ^1_G_(v_1,v_2,v_3) are discrete colorings.
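The iteration is easy to carry out numerically. The following sketch (our own illustration, assuming the Python networkx package) pins the three vertices of a triangular face of the octahedron, starts all remaining vertices at the same point, and checks that the computed positions become pairwise distinct; once they are, the implication above yields a discrete stable coloring.

```python
import itertools
import networkx as nx

def spring_iteration(G, pinned, rounds=200):
    """Iterate mu_{i+1}: move every unpinned vertex to the barycenter of
    its neighbors; the pinned face vertices keep their positions."""
    pos = {v: (1.0, 1.0) for v in G}    # mu_0: all unpinned vertices equal
    pos.update(pinned)
    for _ in range(rounds):
        pos = {v: pos[v] if v in pinned else
                  tuple(sum(pos[w][j] for w in G[v]) / len(G[v])
                        for j in range(2))
               for v in G}
    return pos

# Octahedron: vertices 0..5 with antipodal pairs (0,1), (2,3), (4,5); every
# vertex is adjacent to all others except its antipode, and {0, 2, 4} is a
# triangular face.
anti = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
G = nx.Graph((u, v) for u, v in itertools.combinations(range(6), 2)
             if anti[u] != v)
pos = spring_iteration(G, {0: (0.0, 0.0), 2: (1.0, 0.0), 4: (0.0, 1.0)})
print(len(set(pos.values())) == G.number_of_nodes())  # True: mu_i injective
```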
From the lemma one can directly conclude that for k ≥ 4, the k-dimensional WL-algorithm correctly determines the orbits of every 3-connected planar graph.

For k ≥ 4, the k-dimensional Weisfeiler-Leman algorithm correctly determines orbits of arc-colored 3-connected planar graphs.

Let G be an arc-colored 3-connected planar graph. Then by Lemma <ref>, there are vertices v_1, v_2, v_3 such that χ^1_G_(v_1,v_2,v_3) is discrete (the additional arc coloring can only refine the stable coloring of the uncolored graph). This implies that the multiset C ≔ {{χ^4_G(v_1,v_2,v_3,x) | x ∈ V(G)}} contains n different colors. Let H be a second arc-colored 3-connected planar graph. If H contains vertices v_1',v_2',v_3' such that {{χ^4_H(v'_1,v'_2,v'_3,x') | x' ∈ V(H)}} = C, then G and H are isomorphic via an isomorphism that maps v_1 to v'_1. Otherwise the color χ^4_G(v_1,v_2,v_3,v_3) differs from the color χ^4_H(v'_1,v'_2,v'_3,v'_3) for all v'_1,v'_2,v'_3 ∈ V(H), implying that G and H are distinguished and thus their sets of vertex colors are disjoint.

This is also true for k = 3. The proof amounts to proving the following theorem.

If G is an exception (i.e., G is a 3-connected planar graph without a pair of vertices v,w such that χ^1_G_(v,w) is discrete), then G is isomorphic to one of the graphs in Figure <ref>.

Before we present the lengthy proof of the theorem, we state its implications.

Let G be a 3-connected planar graph. The fixing number of G is at most 3, with equality attained if and only if G is isomorphic to an exception (i.e., a graph depicted in Figure <ref>).

If in a given graph there is a set of ℓ vertices such that individualizing all vertices in the set and then applying the 1-dimensional WL-algorithm yields a discrete coloring, then the graph has fixing number at most ℓ. Thus, by Theorem <ref>, a graph that is not an exception has fixing number at most 2. To conclude the corollary it thus suffices to check that all exceptions have fixing number 3.

For k ≥ 3, the k-dimensional Weisfeiler-Leman algorithm correctly determines orbits of arc-colored 3-connected planar graphs.

Suppose G is an arc-colored 3-connected planar graph that is not an exception. Then there are vertices v and w such that χ^1_G_(v,w) is discrete. Analogously to the proof of Corollary <ref>, we obtain that for any second arc-colored 3-connected planar graph H, the 3-dimensional WL-algorithm only assigns equal colors to a vertex of G and a vertex of H if there is an isomorphism mapping the one to the other. With direct computations, one can check that each arc-colored exception is distinguished from all non-isomorphic arc-colored 3-connected planar graphs and that on each arc-colored exception the stable coloring under the 3-dimensional Weisfeiler-Leman algorithm induces the orbit partition on the vertices.

The task in the rest of this section is to show Theorem <ref>. The proof of the theorem is a lot more involved than the proof of Corollary <ref>. Thus, at the expense of increasing k by 1 from 3 to 4 in the main theorem (Theorem <ref>), the reader may skip the following lengthy exposition.
We determine the exceptions in a case-by-case analysis with respect to the existence of vertices of certain degrees. The following two lemmas serve as general tools to deduce information about the structure of these input graphs. For a subgraph G' of a graph G we say that v ∈ V(G') is saturated in G' with respect to G if d_G'(v) = d_G(v). Thus, if a vertex is saturated, then its neighbors in G and in G' are the same.

Let G be a 3-connected planar graph.

* (1) Let G' be a subgraph of G and suppose that the sequence v_1, …, v_t forms a face cycle in the planar embedding of G' induced by a planar embedding of G. If in {v_1, …, v_t} there are at most two vertices that are not saturated in G' with respect to G, then V(G) = {v_1, …, v_t} or v_1, …, v_t is a face cycle of G.
* (2) If v_1, …, v_t is a 3-cycle or an induced 4-cycle of G that contains two vertices of degree 3 in G, then V(G) = {v_1, …, v_t} or v_1, …, v_t is a face cycle of G.

For Part 1, suppose v_1, …, v_t forms a face cycle of G' and there are at most two vertices v_i and v_j that are not saturated in G' with respect to G. Assume v_1, …, v_t is not a face cycle in G. Then the vertices v_i and v_j are the only ones among v_1, …, v_t that have neighbors in G inside the region of the plane corresponding to the face cycle of G' formed by v_1, …, v_t. Therefore, if V(G) ≠ {v_1, …, v_t}, then {v_i,v_j} is a separator of G, which contradicts the 3-connectivity.

For Part 2, consider an induced 4-cycle v_1, …, v_4 which contains two vertices v_i and v_j that have degree 3 in G. If v_1, …, v_4 is not a face cycle of G, then {v_1, …, v_4} ∖ {v_i,v_j} is a separator of size 2. The argument for a 3-cycle v_1,v_2,v_3 is similar.

Let G be an exception and let v be a vertex of G. Let u_1, …, u_d(v) be the cyclic ordering of the neighbors of v induced by a planar embedding of G. Then every pair of vertices u_i, u_i+1 has a common neighbor of degree d(v) other than v.

If u_i and u_i+1 do not have a common neighbor of degree d(v) other than v, the coloring χ^1_G_(u_i,u_i+1) has the three singletons u_i, v, u_i+1, which lie on a common face. Thus, by Lemma <ref>, the coloring χ^1_G_(u_i,u_i+1) is discrete, contradicting the assumption that G is an exception.

Now we can start determining the structure of the exceptions.

If G is an exception that has a vertex of degree 5, then it is isomorphic to the icosahedron or the bipyramid on 7 vertices.

To simplify notation, we write χ_G for χ^1_G in the course of this proof. Let G be an exception with a vertex v of degree 5. Let N ≔ N(v) be the set of neighbors of v and let (u_1, …, u_5) be their circular ordering. For convenience we will take indices modulo 5. By Lemma <ref>, every pair u_i, u_i+1 has a common neighbor x_i,i+1 of degree 5 other than v. (We remark that the vertices x_i,i+1 are not necessarily distinct or unique.)

For all i ∈ {1, …, 5}, every pair of vertices u_i, u_i+2 has a common neighbor x_i,i+2 of degree 5 other than v.

To show the claim, assume without loss of generality that u_1 and u_3 do not have a common neighbor of degree 5 other than v. Consider the coloring χ_G_(u_1,u_3). In this coloring the vertex v is a singleton. It follows that N is the union of three color classes of the coloring χ_G_(u_1,u_3), one of which is {u_2, u_4, u_5}.
(Otherwise, there are two consecutive vertices in N that are singletons, and thus χ_G_(u_1,u_3) is discrete by Lemma <ref>.)

[Case: x_1,2 ∈ N or x_2,3 ∈ N] We only consider the case that x_1,2 ∈ N, since the case x_2,3 ∈ N is analogous. We know that u_2, u_4 and u_5 have the same degree in G[N], and thus the vertices u_1 and u_3 must have the same degree in G[N], since otherwise one of them would have a unique degree in G[N] (and then some u_i would be a singleton in χ_G_(v), and thus χ_G_(v,u_i+1) would be discrete by Lemma <ref>). Now suppose first x_1,2 = u_4 or x_1,2 = u_5. Either way, one vertex and thus all the vertices u_2, u_4, u_5 are adjacent to u_1, since they form a color class of χ_G_(u_1,u_3). It follows that the degree of u_1 and hence of u_3 in G[N] is at least 3. Thus, u_1 and u_3 have a common neighbor. It has degree 5 since x_1,2 has degree 5. The case x_1,2 = u_3 can be treated analogously by simply swapping the roles of u_1 and u_3.

[Case: x_1,2 ∉ N and x_2,3 ∉ N] Since u_2 and u_4 have the same color, the vertices u_1 and u_4 have a common neighbor x ∉ N of degree 5 other than v. Similarly, u_3 and u_5 have a common neighbor x' ∉ N of degree 5 other than v. By the planarity of G, we have x = x', and thus x is a vertex of degree 5 adjacent to u_1 and to u_3.

Having proved the claim, we now finish the proof of Lemma <ref>. Again we distinguish cases.

[Case: G[N] is non-empty and some vertex in N has degree 5 in G] Due to planarity there can be at most one vertex in N that has degree 4 within G[N]. Indeed, if there were two such vertices u_i and u_j, then each of v, u_i, u_j would be adjacent to all vertices in N ∖ {u_i, u_j}, yielding a K_3,3 minor. However, no vertex in N can have a unique degree in G[N], and thus G[N] has a maximum degree of at most 3. Suppose that G[N] contains an edge {u_i,u_i+2} for some i, say i = 1. Due to Claim <ref>, the vertex u_2 must share a common neighbor with each of u_4 and u_5 apart from v. By planarity these common neighbors are in {u_1,u_3}. However, the two common neighbors must be different, since otherwise the respective vertex has degree 4 in G[N]. Thus u_2 is adjacent to u_1 and u_3, and both u_1 and u_3 have degree 3 in G[N]. By the planarity of G, the vertex u_2 has degree 2 in G[N]. Suppose u_4 has degree 3 in G[N]. Then it must be adjacent to u_1, u_3 and u_5. Since u_5 cannot have a unique degree in G[N], it must be adjacent to u_3, i.e., it must have degree 2 in G[N]. (By planarity, it cannot be adjacent to u_1.) However, this would give u_3 a degree of 4 in G[N], yielding a contradiction. The case that u_5 has degree 3 in G[N] is symmetric. Thus, u_1 and u_3 are the only vertices of degree 3 in G[N]. Then u_2 is the only vertex of N that is adjacent to two vertices of degree 3 in G[N], making u_2 a singleton in χ_G_(v) and yielding a contradiction.

We conclude that there is no edge of the form {u_i,u_i+2}. This implies that in G[N] there is no vertex of degree 1. (Otherwise, we could individualize this vertex and v, and together with their unique common neighbor, they would yield a discrete coloring by Lemma <ref>.) Consequently, since G[N] is non-empty, we conclude that u_1,u_2,…,u_5 is an induced cycle in G[N]. Since some vertex in N has degree 5 and within G[N] the two neighbors of each vertex must have the same degree, we conclude that all vertices in N have degree 5. Thus, if a vertex fulfills Case <ref>, then all its neighbors also fulfill Case <ref>.
Therefore, since G is connected, the entire graph must be a 5-regular triangulated planar graph. Being 5-regular, the graph has m = 5n/2 edges, and being a triangulation the number of edges is m = 3n - 6. We conclude that n = 12. There is only one such graph, namely the icosahedron (see for example <cit.>).

[Case: G[N] is empty or no vertex in N has degree 5 in G] From Claim <ref> we already know that every pair of vertices u_i, u_i+2 has a common neighbor of degree 5 other than v. In Case <ref> this vertex cannot be in N. Due to planarity, all these common neighbors for different i must be equal to a single vertex y adjacent to all vertices of N. Observing that y has degree 5, consider the subgraph H ≔ G[{v,y,u_1,…,u_5}] of G. With the described circular ordering of the vertices in N there is only one planar drawing of H up to equivalence. In this drawing every face is a 4-cycle or a 3-cycle, and every face contains y or v. Since y and v have degree 5 in G, but they already have degree 5 in H, they are saturated. Thus, every face of H contains at most two vertices that are not saturated, and by Lemma <ref>, no interior of a face of the drawing of H contains vertices of G. Therefore G = H = G[{v,y,u_1,…,u_5}]. Since G is 3-connected, G[N] cannot be empty. Similarly as in Case <ref> we conclude that u_1,u_2,…,u_5 is a cycle, rendering G the bipyramid on 7 vertices.

If G is an exception that has a vertex of degree 3, then it is isomorphic to a tetrahedron, a cube, a triangular bipyramid, a triakis tetrahedron, a rhombic dodecahedron, or a triakis octahedron.

Assume G is a 3-connected planar graph with a vertex of degree 3 and that G does not have two vertices whose individualization followed by an application of the 1-dimensional WL-algorithm produces the discrete partition. Let v be a vertex of degree 3 in G and let N ≔ N(v) = {u_1,u_2,u_3} be its neighbors. By Lemma <ref>, no vertex of N can have a unique degree within N. Thus, the graph G[N] is either a triangle or empty. By Lemma <ref>, for i ∈ {1,2,3} (indices always taken modulo 3) the pair u_i, u_i+1 has a common neighbor x_i,i+1 of degree 3 other than v. (As in the previous proof, these x_i,i+1 are not necessarily distinct or unique.) If x_i,i+1 ∈ N, then x_i,i+1 = u_i+2 and N({u_i+2,v}) = {u_i,u_i+1}. Thus, unless G only has 4 vertices (in which case it is the tetrahedron), the set {u_i,u_i+1} forms a 2-separator, which contradicts G being 3-connected. We can therefore assume for all i ∈ {1,2,3} that x_i,i+1 ∉ N. We claim that v, u_i, x_i,i+1, u_i+1 is a face cycle of G, or u_i and u_i+1 are adjacent and both u_i,u_i+1,x_i,i+1 and u_i,u_i+1,v are face cycles. The vertex x_i,i+1 has degree 3, so the claim follows directly from Part 2 of Lemma <ref>.

[Case: G[N] is empty] In this case every face incident to v is a 4-cycle consisting of two non-adjacent vertices of degree 3 in G and two other vertices of degree d ≥ 3. By analogous arguments as for v, for every i ∈ {1,2,3}, the graph G[N(x_i,i+1)] must be either empty or a triangle. It cannot be a triangle because the edge {u_i,u_i+1} is not present. So by successively replacing v by its opposite vertices in the incident 4-cycles, we conclude that G is a (3,d)-biregular quadrangulation.

[Case: G is 3-regular] In this subcase G must be a 3-regular quadrangulation. Such a graph has m = 3n/2 edges, since it is 3-regular, but also m = 2n - 4 edges, since it is a quadrangulation. Thus n = 8.
It is easy to verify that the only 3-regular planar quadrangulation on 8 vertices is the cube.

[Case: G is not 3-regular] Then G is bipartite and biregular with degrees 3 and d, say. Let n_3 and n_d be the number of vertices of degree 3 and d, respectively. Then 3n_3 = dn_d by double counting, and dn_d = m = 2n - 4 since G is a quadrangulation. It follows that dn_d = 2(n_3 + n_d) - 4 = 2(dn_d/3 + n_d) - 4, which gives that 4 = n_d(2 - d/3). Thus d ≤ 5. The case d = 3 is Case <ref>, and d = 5 cannot occur according to Lemma <ref>. We conclude that G is a (3,4)-biregular quadrangulation. We have 3n_3 = 4n_4. Then m = 3n_3 = 3 · (4/7)n. But also m = 2n - 4. Thus n = 14. If we modify G by adding an edge between every pair of degree-3 vertices that share a common face and by removing all vertices that originally had degree 4, we obtain a new graph G' that is a 3-regular planar quadrangulation on 8 vertices. The only such graph is the cube. Undoing the modification, we obtain that G is the rhombic dodecahedron.

[Case: G[N] is a triangle] In this case every face in G is a 3-cycle consisting of a vertex of degree 3 and two other vertices of equal degree d.

[Case: G is 3-regular, i.e., d = 3] Then G is a 3-regular triangulation. Thus m = 3n/2 and m = 3n - 6, so n = 4. We conclude that G is the tetrahedron.

[Case: G is not 3-regular] Consider the graph G' obtained from G by removing all vertices of degree 3. The resulting graph is a planar d/2-regular triangulation. Indeed, since v was chosen arbitrarily among all vertices of degree 3 and the vertices x_i,i+1 have degree 3 as well, it is easy to see that the resulting graph is a triangulation. Moreover, one can verify that in G, for every vertex of degree d, in the cyclic ordering of its neighbors the degrees 3 and d alternate. Thus, a deletion of the vertices of degree 3 halves the degree of each of the other vertices. It follows that d/2 ∈ {2,3,4,5}. For d/2 = 2 we obtain a 3-cycle. This implies that G is a triangular bipyramid. For d/2 = 3 we conclude that G' is a tetrahedron. This implies that G is a triakis tetrahedron. For d/2 = 4 we obtain an octahedron. This implies that G is the triakis octahedron. For d/2 = 5 we would obtain an icosahedron. This would imply that G is the triakis icosahedron. However, in this solid there are two vertices u, u' (namely vertices of degree 3 at distance 2 that only have one common neighbor) such that χ^1_G_(u,u') is discrete.

If G is an exception that has a vertex of degree 4, then it is isomorphic to a bipyramid, a rhombic dodecahedron or a tetrakis hexahedron.

Assume G is an exception with a vertex of degree 4. First we make two observations that hold for every vertex u of degree 4 with neighbors v_1,v_2,v_3,v_4 in cyclic order.

Observation 1: It holds that d(v_1) = d(v_3) and d(v_2) = d(v_4). Otherwise we can individualize u and a neighbor v_i of u so that v_i+1 or v_i-1 refines to a singleton class, which yields a contradiction.

Observation 2: By a similar argument, the induced graph G[{v_1,v_2,v_3,v_4}] either is empty or forms an induced cycle such that v_i is adjacent to v_i+1 for all i ∈ {1,2,3,4} (indices taken modulo 4).

Due to Observation 2, if G is 4-regular then G is either a triangulation or every face is of size at least 4. In the first case G has n = 6 vertices (since m = 4n/2 and m = 3n - 6), and thus G is the octahedron (a bipyramid). The second case cannot occur since a planar graph without triangle faces has at most 2n - 4 edges, but a 4-regular graph has 2n edges.
We can thus assume that G has a vertex v of degree other than 4 that is adjacent to a vertex of degree 4. By Lemmas <ref> and <ref> we can assume that G has neither a vertex of degree 5 nor a vertex of degree 3. Thus, we can assume that v has degree at least 6. Let N := N(v) be the set of neighbors of v and let N_4 ⊆ N be those neighbors of v that have degree 4. Suppose (u_1,…,u_t) is the cyclic order of N_4 induced by the cyclic order of N.

[G[N_4] is non-empty] First assume there are distinct u,u'∈N_4 that are adjacent, i.e., G[N_4] is non-empty. According to Observation 1, each vertex in N_4 must have two neighbors in G of degree d(v) ≠ 4, and thus G[N_4] has maximum degree 2. We argue that G[N_4] cannot have a vertex of degree 1. Indeed, if u_i were a vertex of degree 1 in G[N_4], then u_i would have two neighbors of degree d(v) and two neighbors of degree 4, one of which is adjacent to v and one of which is not. This means that in χ^1_{G_{(v,u_i)}} all neighbors of u_i are singletons, since the two neighbors of u_i of degree 4 disagree on being adjacent to v. This is impossible by Lemma <ref>. We conclude that G[N_4] has only vertices of degree 2 and 0. Since G[N_4] is non-empty, this implies that there is some cycle in G[N_4]. Assume this cycle has an edge {u_i,u_j} connecting two vertices that are not consecutive in the cyclic order (u_1,…,u_t). Let u_i^+ and u_i^- be the vertices of G[N] following and preceding, respectively, the vertex u_i in the cyclic ordering of N (so they may or may not have degree 4). By Lemma <ref>, there must be vertices x^+ and x^- of degree d(v) ≠ 4 such that x^+ is adjacent to both u_i and u_i^+ and x^- is adjacent to both u_i and u_i^-. However, x^+ ≠ x^- since the cycle v,u_i,u_j separates u_i^+ from u_i^-. We conclude that u_i has the following five neighbors: the vertex v, two neighbors in N_4, as well as x^+ and x^-. But u_i has degree 4, which gives a contradiction. We conclude that u_i is adjacent to u_{i+1} for all i∈{1,…,t}. Finally, by Observation 1, every pair of vertices {u_i,u_{i+1}} must have a common neighbor other than v of degree d(v) (otherwise all neighbors of u_i would be singletons in χ^1_{G_{(u_i,u_{i+1})}}). Since u_i is adjacent to v, u_{i+1}, and u_{i-1}, it can have only one further neighbor. Thus, all these common neighbors for the pairs {u_i,u_{i+1}}, i∈{1,…,t}, are indeed the same vertex x. Consider G' := G[{u_1,…,u_t,v,x}]. The graph G' is 3-connected and every face is a 3-cycle with two vertices saturated in G' with respect to G. Part <ref> of Lemma <ref> implies that G = G'. We conclude that G is isomorphic to a bipyramid.

[G[N_4] is empty] In the second case we now assume that the degree 4 neighbors of v form an independent set.

For every i∈{1,…,t} there is a vertex x_{i,i+1} such that either the sequence v,u_i,x_{i,i+1},u_{i+1} forms a face cycle or both v,u_i,x_{i,i+1} and v,u_{i+1},x_{i,i+1} are face cycles.

Without loss of generality, we show the claim for i=2. We first argue that there are vertices u'∈N_4 and x' such that v,u_2,x',u' is a 4-cycle. For this, let x' be the first neighbor following v in the cyclic ordering among the neighbors of u_2. Then by Lemma <ref>, there must be a vertex u' other than u_2 of degree 4 that is adjacent to x' and v. Let u_j∈N_4 be the first neighbor of v following u_2 in the cyclic ordering of the vertices in N_4 that has a common neighbor with u_2 other than v.
We choose a common neighbor x of u_2 and u_j that is closest to v: more precisely, for a common neighbor x ≠ v of u_2 and u_j, consider the cycle u_2,v,u_j,x. It bounds two areas, one of which contains the vertices of N that follow u_2 but precede u_j, while the other one contains the vertices that follow u_j but precede u_2 in the cyclic order of N. (One of these sets may be empty.) We choose x so that the first of these areas is minimal with respect to inclusion, and we call this area A. We claim that the 4-cycle u_j,x,u_2,v is a face cycle of G or a face cycle after removing the diagonal {x,v} (i.e., the 3-cycles v,u_2,x and v,u_j,x are faces). (Note that the edge {u_2,u_j} cannot be present since G[N_4] is empty.) Indeed, suppose that u_2 has a neighbor that lies within A. Choose as such a neighbor z the vertex that precedes v in the cyclic ordering of the neighbors of u_2. Then for some vertex u̅∈N_4, the sequence (v,u_2,z,u̅) forms a 4-cycle (Lemma <ref> applied to χ^1_{G_{(v,z)}}). Now, either u̅ precedes u_j in the cyclic ordering of N_4 starting from u_2, or u̅ = u_j but {v,u_2,z,u_j} bounds an area that is a proper subset of A. Either case contradicts the minimal choices of u_j and x. Finally, assume that u_j has a neighbor z that lies within A. Choose z to be the vertex that follows v in the cyclic ordering of the neighbors of u_j. Since {v,x} is not a separator and u_2 does not have a neighbor inside A, there must be a path from z to u_2 via u_j that leaves A. Thus, the vertex u_j must have some neighbor outside of A. Hence, since u_j has degree 4, the vertex z is the only neighbor of u_j inside A. By Lemma <ref>, for some vertex u̅∈N_4, the sequence (v,u_j,z,u̅) must be a 4-cycle. Consider the coloring χ^1_{G_{(u_2,u̅)}}. In this coloring v is a singleton, since by the minimality of u_j it is the only common neighbor of u_2 and u̅. Furthermore, u_j is the only vertex in N_4 that simultaneously has a common neighbor with u_2 other than v and a common neighbor with u̅ other than v, and thus u_j is a singleton. Moreover, u̅ and u_j have only one common neighbor other than v, namely z, which is then a singleton as well. The singletons v, u_j, and z lie on a common face by the choice of z, which yields a contradiction with Lemma <ref>. Thus, neither u_2 nor u_j has a neighbor inside the cycle. This implies that the cycle u_j,x,u_2,v is a face, or it becomes a face after removing the possible diagonal {x,v}, since otherwise the set {x,v} would be a separator. We conclude that u_j = u_3 and that the vertex x_{2,3} := x justifies the claim.

In the following we call an edge of G a diagonal if neither of its endpoints has degree 4. Overall, the claim implies that at least every second neighbor of v has degree 4 (in particular |N| ≤ 2|N_4|), and thus, having degree at least 6, the vertex v has at least 3 neighbors of degree 4, i.e., |N_4| ≥ 3. Recall that u_1 and u_3 are the two vertices in N_4 that are closest to u_2 in the cyclic ordering of the degree 4 neighbors of v. We distinguish several cases according to the size of N_4.

[|N_4| = 3] Since v must have degree at least 6 and at least every second of its neighbors has degree 4, we conclude that v also has exactly three neighbors of degree larger than 4. Thus, the degree of v is 6. Let (u_1,t_1,u_2,t_2,u_3,t_3) be the neighbors of v in cyclic order. Then by Claim <ref>, the vertices u_i,t_i,v form a face cycle for every i∈{1,2,3}. Likewise, t_i,u_{i+1},v is a face cycle. Thus, the graph induced by N∪{v} is a wheel with 7 vertices.
By Observation 2 at the start of the proof, the neighborhood of u_i forms a cycle. Moreover, by Observation 1, the vertices t_1, t_2, and t_3 all have the same degree d. We argue that d = 6. Since all u_i have degree 4, if every pair of vertices u_i,u_{i+1} had a common neighbor other than v and t_i, this would have to be a single vertex adjacent to u_1, u_2, and u_3. However, such a vertex x does not exist, because otherwise G' := G[{u_1,u_2,u_3,t_1,t_2,t_3,x,v}] would be a 3-connected graph in which every face is a triangle with a saturated vertex or a 4-cycle with two saturated vertices, and Lemma <ref> would imply G = G', which cannot be since G' has a vertex of degree 3, namely x. Thus, some pair {u_i,u_{i+1}} does not have a common neighbor other than v or t_i, which in turn implies d(v) = d(t_i) = 6 using Lemma <ref>. Therefore, all neighbors of v have degree 4 or 6. More strongly, we conclude that the vertex degrees that appear among the neighbors of t_i are the same as the vertex degrees appearing among the neighbors of v, including multiplicities. Thus, each t_i also has 3 neighbors of degree 4. By Observation 1, all vertices in N_4 only have neighbors of degree 6 (since we already know that they have 3 neighbors of degree 6). By the same argument, these degree 6 vertices themselves have 3 neighbors of degree 4 and 3 neighbors of degree 6. We conclude that the entire graph has only vertices of degree 4 and 6. Since every face incident with v is a triangle, and v is arbitrary among the degree 6 vertices, we conclude that G is a triangulation. Moreover, every vertex of degree 4 has exactly 4 neighbors of degree 6 and every vertex of degree 6 has exactly 3 neighbors of degree 4. We conclude that 4n_4 = 3n_6, where n_i is the number of vertices of G of degree i. Since n_4+n_6 = n = |G|, we conclude that G has 18n/7 edges. Since G is a triangulation, it has 3n-6 edges. We conclude that 18n/7 = 3n-6 and thus G is a graph on 14 vertices. Furthermore, the graph G' induced by the vertices of degree 6 is a 3-regular graph on 8 vertices. All faces in the induced drawing of G' are 4-cycles. We conclude that G' is the cube. (There is only one triangle-free planar 3-regular graph on 8 vertices.) Each face of G' contains, within G, a vertex of degree 4. We conclude that G is the tetrakis hexahedron.

[|N_4| = 4] In this case v must be incident to some diagonals, since otherwise v would have degree 4. Thus, for every i∈{1,2,3,4} the diagonal {v,x_{i,i+1}} must be present, since by Observation 2 the neighbors of a degree 4 vertex form a cycle or an independent set. It follows that v must have degree 8. We obtain a graph that has at most as many vertices of degree 4 as it has vertices of degree at least 8. (Every vertex of degree at least 8 is adjacent to at least 4 vertices of degree 4, and every vertex of degree 4 is adjacent to 4 vertices of degree at least 8.) Double counting implies that the graph has at least 3n edges, which is impossible for a planar graph.

[|N_4| ≥ 5] We first show the following claim.

For each i∈{1,…,t}, the vertices u_i and u_{i+2} have a common neighbor other than v.

We show the statement for i=1. Assuming otherwise implies that v is a singleton in χ^1_{G_{(u_1,u_3)}}. We first argue that in this coloring, the vertex u_2 is also a singleton. Again, assume otherwise. Then there must be a vertex u∈N_4∖{u_2} that has the same color as u_2.
By Claim <ref>, for i∈{1,2} the vertices u_i and u_{i+1} have a common neighbor x_{i,i+1} such that u_i,v,u_{i+1},x_{i,i+1} form a face or a face after removing a diagonal. Thus, the vertex u must have a neighbor y_{1,2} other than v that is adjacent to u_1 and a neighbor y_{2,3} other than v that is adjacent to u_3. Moreover, for i∈{1,2}, the vertex x_{i,i+1} should have the same color as y_{i,i+1}. See Figure <ref>. (Note that y_{1,2} ≠ y_{2,3} since we have assumed that u_1 and u_3 do not have a common neighbor other than v.) Since |N_4| ≥ 5, we know that u_4 ≠ u_t. Therefore u ≠ u_4 or u ≠ u_t. By symmetry we can assume the latter. (To see the symmetry, recall that u_4 is the successor of u_3 in N_4 and u_t is the predecessor of u_1 in N_4.) Note that the cycle v,u,y_{1,2},u_1 separates u_t from u_2. Consider the area A' bounded by the cycle v,u,y_{1,2},u_1 which contains u_t. Inside this area lies u'∈N_4, the vertex that follows u in the cyclic ordering of N_4. Consider the set M := N(u_3)∪{u_1,v,u_3} and note that M is a union of color classes, since we have assumed that u_1 and u_3 do not have a common neighbor other than v. The cycle v,u,y_{1,2},u_1 contains only two vertices of M, and there are no vertices inside A' that are in M. Thus, due to 3-connectivity, there must be a path from u or from y_{1,2} to u' such that no inner vertex of the path is in M. Note that u' does not have the same color as u_2, since it cannot share a neighbor other than v with u_3. Unless x_{1,2} = y_{1,2}, every path from u_2 and every path from x_{1,2} to a vertex in N_4∖{u_1,u_2,u_3,u} that does not have an inner vertex in M must pass through u or through y_{1,2}. This implies, however, that u and u_2 do not have the same color or that x_{1,2} and y_{1,2} do not have the same color, contradicting our construction. We conclude that x_{1,2} = y_{1,2}. By Claim <ref>, the vertices u and u' have a common neighbor other than v. If this neighbor were not x_{1,2}, then u could not have the same color as u_2, since there would be a path from u to u' avoiding inner vertices from M∪{y_{1,2}}, but there would be no such path from u to any vertex in N_4∖{u_1,u_2,u_3,u}. Figure <ref> depicts this situation. We conclude that u' is a neighbor of y_{1,2} = x_{1,2}. However, this makes {y_{1,2},v} a separator that separates u_1 from u, since by Claim <ref> neither u' nor u_2 has a neighbor both inside and outside of the cycle u',v,u_2,x_{1,2}. Up to this point, regarding our efforts to prove the claim, we have shown that u_2 is a singleton. If x_{1,2} were not adjacent to v or x_{2,3} were not adjacent to v, then χ^1_{G_{(u_1,u_3)}} would have three singletons lying on the same face. So assume otherwise. We can also assume that neither x_{1,2} nor x_{2,3} is a singleton. But this cannot be, because the copies rendering x_{1,2} and x_{2,3} non-singletons (i.e., the other, necessarily existing vertices that have the same color as x_{1,2} or x_{2,3}) would also have to be distinct and adjacent to u_2, which would force u_2 to have degree at least 5. This proves the claim.

Since G[N_4] is empty, a common neighbor of u_i and u_{i+2} other than v must be equal to a common neighbor of u_{i+1} and u_{i+3} other than v. This means that there is a vertex v' other than v adjacent to all vertices of N_4. Consider now the area bounded by the cycle v,u_i,v',u_{i+1} which does not contain u_{i+2}. If both u_i and u_{i+1} have two neighbors inside this area, then they do not have any neighbors outside the area, making {v,v'} a separator.
If u_i has only one neighbor inside the area, then this neighbor coincides with x_{i,i+1} and hence must be adjacent to u_{i+1}. We conclude that v,u_i,x_{i,i+1},u_{i+1} forms a face or becomes a face after removing the diagonal {x_{i,i+1},v}. A symmetric argument applies with v' in place of v. It follows that inside the cycle v,u_i,v',u_{i+1} there is at most one vertex, namely x_{i,i+1}. However, we already ruled out vertices of degree 3 at the beginning of the proof, so x_{i,i+1} must be adjacent to all vertices of the cycle and thus has degree 4. This cannot be, since it would then be in N_4. We conclude that u_i does not have a neighbor inside the cycle v,u_i,v',u_{i+1}. A similar observation holds for the cycle v,u_{i-1},v',u_i. However, u_i must have some neighbor within the area bounded by the cycle v,u_i,v',u_{i+1} or within the area bounded by the cycle v,u_{i-1},v',u_i, yielding the final contradiction.

Recalling that every 3-connected planar graph has a vertex of degree 3, 4, or 5, the proof follows immediately by combining Lemma <ref> with Lemmas <ref>, <ref>, and <ref>.
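The arguments above constantly examine the stable coloring χ^1_{G_{(u,v)}} obtained by individualizing two vertices u,v and running the 1-dimensional WL-algorithm (color refinement) until stabilization. For readers who want to experiment with the case distinctions, here is a minimal Python sketch of this procedure; it is our own illustrative code (the paper contains no implementation), and the function names are ours.

    def refine(adj, colors):
        # 1-dimensional WL (color refinement): repeatedly replace the color of
        # each vertex by the pair (old color, sorted multiset of neighbor
        # colors), canonically renamed, until the partition stabilizes
        while True:
            signatures = {v: (colors[v], tuple(sorted(colors[w] for w in adj[v])))
                          for v in adj}
            names = {}
            for sig in sorted(signatures.values()):
                names.setdefault(sig, len(names))
            new_colors = {v: names[signatures[v]] for v in adj}
            if len(names) == len(set(colors.values())):
                return new_colors  # no class split, coloring is stable
            colors = new_colors

    def chi1(adj, u, v):
        # stable coloring after individualizing u and v
        colors = {x: 0 for x in adj}
        colors[u], colors[v] = 1, 2
        return refine(adj, colors)

    def is_discrete(colors):
        return len(set(colors.values())) == len(colors)

    # example: on the 5-cycle, individualizing two adjacent vertices
    # already produces the discrete partition
    c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
    assert is_discrete(chi1(c5, 0, 1))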
Let 𝕊_g be the orientable surface of genus g. We prove that the component structure of a graph chosen uniformly at random from the class 𝒮_g(n,m) of all graphs on vertex set [n]={1,…,n} with m edges embeddable on 𝕊_g features two phase transitions. The first phase transition mirrors the classical phase transition in the Erdős–Rényi random graph G(n,m) chosen uniformly at random from all graphs with vertex set [n] and m edges. It takes place at m=n/2+O(n^{2/3}), when a unique largest component, the so-called giant component, emerges. The second phase transition occurs at m = n+O(n^{3/5}), when the giant component covers almost all vertices of the graph. This kind of phenomenon is strikingly different from G(n,m) and has only been observed for graphs on surfaces. Moreover, we derive an asymptotic estimation of the number of graphs in 𝒮_g(n,m) throughout the regimes of these two phase transitions.

§ INTRODUCTION AND RESULTS

§.§ Background and motivation

In their series of seminal papers <cit.>, Erdős and Rényi studied asymptotic stochastic properties of graphs chosen according to a certain probability distribution—an approach that laid the foundations for the classical theory of random graphs. The main questions considered by Erdős, Rényi, and many others are of the following type. Consider the so-called Erdős–Rényi random graph G(n,m) chosen uniformly at random from the class 𝒢(n,m) of all graphs on vertex set [n]:={1,…,n} with m=m(n) edges. What structural properties does G(n,m) have with high probability (commonly abbreviated as whp), that is, with probability tending to one as n tends to infinity?

One of the most extensively studied properties of random graphs is the component structure. Erdős and Rényi <cit.> proved that the order (that is, the number of vertices) of the components of G(n,m) changes drastically when m is around n/2; this kind of behaviour is widely known as a phase transition. The result of Erdős and Rényi states that whp a) if the average degree t := 2m/n of G(n,m) is smaller than one, then all components have at most logarithmic order; b) if t=1, the largest component has order n^{2/3}; c) if t→ c>1, then there is a unique component of linear order, while all other components are at most logarithmic. This phenomenon became known as the emergence of the giant component and was considered by Erdős and Rényi to be 'one of the most striking facts concerning random graphs'.

While the result of Erdős and Rényi seems to indicate a 'double jump' in the order of the largest component from logarithmic to order n^{2/3} to linear, Bollobás <cit.> proved that the phase transition is actually 'smooth' when we look more closely at the range of t being around one, that is, when s:=m-n/2 is sublinear. Bollobás' result, which was later improved by Łuczak <cit.>, shows that the order of the largest component changes gradually, depending on whether s has order at most n^{2/3} (known as the critical regime) or whether s has larger order and s>0 (the supercritical regime) or s<0 (the subcritical regime). Subsequently, Aldous <cit.> further improved the result for the critical regime using multiplicative coalescent processes and inhomogeneous Brownian motion. In the supercritical regime and in the regime t>1, central limit theorems and local limit theorems provide stronger concentration results for the order and the size (that is, the number of edges) of the largest component.
The methods used for these results range from counting techniques <cit.> over Fourier analysis <cit.> to probabilistic methods such as Galton-Watson branching processes <cit.>, two-round exposure <cit.>, or random walks and martingales <cit.>.

Since the pioneering work of Erdős and Rényi, various random graph models have been introduced and studied. A particularly interesting model is that of random planar graphs or, more generally, random graphs that are embeddable on a fixed two-dimensional surface. Here, a graph G is called embeddable on a surface 𝕊 if G can be drawn on 𝕊 without crossing edges. Graphs embeddable on a surface and graphs embedded on a surface—also known as maps—have been studied extensively since the pioneering work of Tutte (see e.g. <cit.>) in view of enumeration <cit.>, random sampling <cit.>, and asymptotic properties <cit.>. Maps and embeddable graphs have also been shown to have important applications in algebra and geometry (see e.g. <cit.> for an overview) and statistical physics <cit.>. In some of these applications (e.g. <cit.>) phase transitions play a crucial role; it is therefore an important question whether random embeddable graphs undergo similar phase transitions as Erdős–Rényi random graphs and, if they do, what the critical behaviour close to the point of the phase transition is.

For the order of the largest component of G(n,m), the critical behaviour is described by the results of Bollobás <cit.> and Łuczak <cit.> mentioned above. In order to formally state their results, we need to introduce some notation. A connected graph is called a tree if it has no cycles, unicyclic if it contains precisely one cycle, and complex (or multicyclic) otherwise. Given a graph G, we enumerate its components as H_i=H_i(G), i=1,2,…, in such a way that they are ordered from large to small, that is, the orders |H_1|,|H_2|,… of the components satisfy |H_i| ≥ |H_j| whenever i<j. We say that H_i is the i-th largest component of G.

The results of Bollobás and Łuczak can now be described as follows (for all order notation in the following, see <Ref>). If m is smaller than n/2 and satisfies n/2-m=ω(n^{2/3}), then whp all components of G(n,m) have order o(n^{2/3}). Once |m-n/2| = O(n^{2/3}), several components of order Θ_p(n^{2/3}) appear simultaneously. Finally, if m becomes even larger, then whp the largest component H_1 has order ω(n^{2/3}), while every other component has order o(n^{2/3}). If we view this development as a process, this means that all components of order Θ_p(n^{2/3}) that appeared when |m-n/2| = O(n^{2/3}) later merge into a single component that is then the unique component of order ω(n^{2/3}). This component is usually referred to as the giant component.

Let m=(1+λ n^{-1/3})n/2, where λ=λ(n)=o(n^{1/3}), and let H_i=H_i(G), i=1,2,…, be the i-th largest component of G=G(n,m).
* If λ→-∞, then whp for every i∈ℕ∖{0}, H_i is a tree and has order (2+o(1)) (n^{2/3}/λ^2) log(-λ)^3.
* If λ→ c for a constant c∈ℝ, then for every i∈ℕ∖{0} the order of H_i is Θ_p(n^{2/3}). Furthermore, the probability that H_i is complex is bounded away both from 0 and 1.
* If λ→∞, then whp the largest component H_1 of G is complex and has order (2+o(1)) λ n^{2/3}. For i≥ 2, whp H_i is a tree of order o(n^{2/3}).

Returning to embeddable graphs, we call a graph planar if it is embeddable on the sphere and denote by P(n,m) the graph chosen uniformly at random from the class 𝒫(n,m) of all planar graphs with vertex set [n] and m edges. Kang and Łuczak <cit.> proved that P(n,m) features a similar phase transition as G(n,m), that is, the giant component emerges at m = n/2+O(n^{2/3}).
Let m=(1+λ n^{-1/3})n/2, where λ=λ(n)=o(n^{1/3}), and let H_i=H_i(G), i=1,2,…, be the i-th largest component of G=P(n,m). For every i∈ℕ∖{0}, whp

|H_i| = (2+o(1)) (n^{2/3}/λ^2) log(-λ)^3 if λ→-∞,
Θ(n^{2/3}) if λ→ c∈ℝ,
(1+o(1)) λ n^{2/3} if λ→∞ and i=1,
Θ(n^{2/3}) if λ→∞ and i≥ 2.

The main difference to the Erdős–Rényi random graph lies in the case λ→∞. In this regime, the largest component of P(n,m) is roughly half as large as the largest component of G(n,m). On the other hand, the order of the second largest component (or more generally, of the i-th largest component for every fixed i≥2) is much larger in P(n,m) than in G(n,m). This behaviour, however, is not the most surprising feature of random planar graphs. Indeed, Kang and Łuczak <cit.> discovered that there is a second phase transition at m = n+O(n^{3/5}), which is when the giant component covers almost all vertices. Such a behaviour is not observed for Erdős–Rényi random graphs, where the number of vertices outside the giant component is linear in n as long as m is linear.

Let m=(2+ζ n^{-2/5})n/2, where ζ=ζ(n)=o(n^{2/5}). Then whp the largest component H_1 of P(n,m) is complex and

n-|H_1| = Θ(|ζ| n^{3/5}) if ζ→-∞,
Θ(n^{3/5}) if ζ→ c∈ℝ,
Θ(ζ^{-3/2} n^{3/5}) if ζ→∞ and ζ=o(n^{1/15}).

Given that this second phase transition has only been observed for random planar graphs, the fundamental question raised by <Ref> is whether this is an intrinsic phenomenon of planar graphs.

Which other classes of graphs feature a phase transition analogous to <Ref>?

Canonical candidates for classes that lie 'between' 𝒫(n,m) and 𝒢(n,m) are graphs that are embeddable on a surface of fixed positive genus. In this paper, we consider graphs embeddable on the orientable surface 𝕊_g of genus g∈ℕ. Let 𝒮_g(n,m) be the class of graphs with vertex set [n] and m edges that are embeddable on 𝕊_g. (Of course, 𝒮_0(n,m)=𝒫(n,m).) One of the main results of this paper is that for every fixed g, the answer to <Ref> is positive for the class 𝒮_g(n,m).

For m=⌊μ n⌋ with μ∈(1,3), Giménez and Noy <cit.> showed, among several other results, that whp P(n,m) has a component that covers all but finitely many vertices. Observe that <Ref> leaves a gap of order Θ(n^{1/3}) to the 'dense' regime considered by Giménez and Noy. Subsequently, Chapuy, Fusy, Giménez, Mohar, and Noy <cit.> proved analogous results in the dense regime for 𝒮_g(n,m).

§.§ Main results

This paper is the first to determine the component structure of 𝒮_g(n,m) for arbitrary g≥0 in the 'sparse' regime m≤(1+o(1))n. In terms of phase transitions, the component structure of S_g(n,m) features particularly interesting phenomena in this regime, similar to P(n,m). To derive these phenomena, we use a wide range of complementary methods from various fields (see <Ref> for more details). With this paper, we strive to provide a deeper understanding of the evolution of graphs embeddable on 𝕊_g for fixed g. Moreover, we pave a way to better understand embeddability of random graphs, in particular a) the 'typical' genus of G(n,m) when m=m(n) is given and b) the evolution of graphs on a surface of non-constant genus g=g(n).

The main contributions of this paper are fourfold. We determine the order and structure of the largest components of a graph S_g(n,m) chosen uniformly at random from 𝒮_g(n,m), where the number m of edges is a) around n/2, b) around n, or c) in between the previous two regimes. Moreover, we determine d) the asymptotic number of graphs in 𝒮_g(n,m) for all the aforementioned regimes. Our first main result describes the appearance of the unique giant component in S_g(n,m).
Similar to various random graph models including Erdős–Rényi random graphs and random planar graphs (see <Ref>), the critical range for the number of edges for the appearance of the giant component is m=n/2+O(n^{2/3}). Below this range, the i-th largest component (for each i≥1) of S_g(n,m) is whp a tree of order o(n^{2/3}). In the critical range, several components of order Θ_p(n^{2/3}) appear simultaneously. After the critical range, S_g(n,m) whp has a unique component of order ω(n^{2/3}), which in addition is complex and has genus g, that is, it is embeddable on 𝕊_g, but not on 𝕊_{g-1}.

Let m=(1+λ n^{-1/3})n/2, where λ=λ(n)=o(n^{1/3}), and denote by H_i=H_i(G), i=1,2,…, the i-th largest component of G=S_g(n,m). For every i∈ℕ∖{0} the following holds.
* If λ→-∞, then whp H_i is a tree of order (2+o(1)) (n^{2/3}/λ^2) log(-λ)^3.
* If λ→ c for a constant c∈ℝ, then the probability that G has complex components is bounded away both from 0 and 1. The i-th largest component has order Θ_p(n^{2/3}).
* If λ→∞, then whp H_1 is complex and has order λ n^{2/3}+O_p(n^{2/3}). For i≥ 2, we have |H_i| = Θ_p(n^{2/3}). Moreover, G has O_p(1) complex components. The probability that G has at least i complex components is bounded away both from 0 and 1. If G has at least i complex components, then the i-th largest complex component (by this we mean H_i(Q_G), where Q_G is the union of all complex components of G) has order Θ_p(n^{2/3}).

Furthermore, if g≥ 1, then whp H_1 is not embeddable on 𝕊_{g-1}, while all other components of G are planar.

Comparing the special case of g=0 in <Ref> with <Ref>, the following discrepancies are apparent. Firstly, in the critical regime λ→ c∈ℝ, <Ref><ref> yields components of order Θ_p(n^{2/3}) compared to Θ(n^{2/3}) claimed by <Ref>. The same holds for the orders of H_i for i≥ 2 in the supercritical regime λ→∞. Both points are due to minor mistakes in <cit.>; the proofs given there in fact yield order Θ_p(n^{2/3}) instead of the claimed Θ(n^{2/3}). Secondly, the error term in the order of the giant component given in <Ref><ref> is stronger than the one from <Ref>. Finally, <Ref><ref> tells us that for positive genus, the giant component is not only the unique largest component but also the unique non-planar one.

Our second main result describes the time when the giant component covers almost all vertices. The critical phase for the number of edges for this phenomenon is m=n+O(n^{3/5}). Here, the number of vertices outside the giant component changes from ω(n^{3/5}) for m below the critical range to Θ(n^{3/5}) within the critical range to o(n^{3/5}) beyond the critical range.

Let m=(2+ζ n^{-2/5})n/2, where ζ=ζ(n)=o(n^{2/5}). Then whp the largest component H_1 of S_g(n,m) is complex. Furthermore, for

r(n) := |ζ| n^{3/5} if ζ→-∞,
n^{3/5} if ζ→ c∈ℝ,
ζ^{-3/2} n^{3/5} if ζ→∞ and ζ=o((log n)^{-2/3}n^{2/5}),

we have n-|H_1|=O_p(r(n)) and whp n-|H_1|=Ω(r(n)).

The main improvement of <Ref> in comparison to <Ref> (the corresponding result for g=0) is that <Ref> only deals with the case ζ = o(n^{1/15}) and therefore leaves a gap to the dense regime m=⌊μ n⌋ with μ∈(1,3) that has been covered in <cit.>. <Ref> closes this gap up to a factor (log n)^{2/3}. Additionally, <Ref> provides a correction of the upper bound given in <cit.> on the number of vertices outside the giant component. In <cit.>, the upper bound was obtained with the help of an intermediate result (Theorem 2(iv) in <cit.>) about the structure of the complex part (see <Ref> for a definition). However, this intermediate result does not apply in the regime m∼ n.
<Ref> provides a slightly weaker upper bound that is of larger order than the lower bound (albeit the orders differ by less than any fixed growing function).

Our third main result covers the case when the number of edges is between the regimes of the two phase transitions, that is, the average degree of the graph is between one and two. In this 'intermediate' regime, the largest component is complex, has genus g, and its order is linear both in n and in the average degree of the graph.

Let m=μ n/2, where μ=μ(n) converges to a constant in (1,2), and let H_i=H_i(G), i=1,2,…, be the i-th largest component of G=S_g(n,m). Then

|H_1| = (μ-1)n+O_p(n^{2/3}).

For i≥ 2, we have |H_i| = Θ_p(n^{2/3}). Furthermore, if g≥ 1, then whp H_1 is not embeddable on 𝕊_{g-1}, while all other components are planar.

In the intermediate regime, or more generally, for m=α n/2 with α>1, the classical Erdős–Rényi random graph G(n,m) whp has a largest component of order (1+o(1))β n, where β is the unique positive solution of the equation

1-β = e^{-αβ}.

In particular, as long as α>1 is a constant, the largest component of G(n,m) whp leaves a linear number of vertices uncovered; see <Ref>. Indeed, Karp <cit.> proved that the components of G(n,m) can be explored via a Galton-Watson branching process with offspring distribution Po(α); the survival probability of such a process is given by β above, yielding order (1+o(1))β n of the largest component (a short numerical sketch of this equation is included below). For graphs on surfaces, however, there is no such simple approach to explore components.

As our last main result, we derive the asymptotic number of graphs embeddable on 𝕊_g.

For n→∞, the number of graphs in 𝒮_g(n,m) is asymptotically given as follows.

* If m = (1+λ n^{-1/3})n/2, where λ=λ(n)=o(n^{1/3}), then

|𝒮_g(n,m)| = (1+o(1))/(π^{1/2}e^{3/4}) · (e/(1+λ n^{-1/3}))^{n/2+(λ/2)n^{2/3}} n^{n/2+(λ/2)n^{2/3}-1/2} if λ→-∞,
Θ(1) e^{n/2-(λ^2/4)n^{1/3}} n^{n/2+(λ/2)n^{2/3}-1/2} if λ→ c∈ℝ,
exp(O(λ)) (e/(1-λ n^{-1/3}))^{n/2-(λ/2)n^{2/3}} n^{n/2+(λ/2)n^{2/3}-1/2} if λ→∞.

* If m=μ n/2, where μ=μ(n) converges to a constant in (1,2), then

|𝒮_g(n,m)| = exp(O(n^{1/3})) (e/(2-μ))^{(2-μ)n/2} n^{μ n/2}.

* If m = (2+ζ n^{-2/5})n/2, where ζ=ζ(n)=o(n^{2/5}), then

|𝒮_g(n,m)| = (exp(O(|ζ|^{-2/3} n^{3/5}))/e^{(ζ/2)n^{3/5}}) n^{n+(3ζ/10)n^{3/5}} if ζ→-∞,
exp(O(n^{3/5})) n^{n+(3ζ/10)n^{3/5}} if ζ→ c∈ℝ,
exp(O(ζ n^{3/5})) ζ^{-(3ζ/4)n^{3/5}} n^{n+(3ζ/10)n^{3/5}} if ζ→∞ and ζ=o((log n)^{-2/3}n^{2/5}).

§.§ Proof techniques and outline

The techniques used in this paper are novel in comparison to the vast majority of papers on Erdős–Rényi random graphs. Classical random graph results are usually proved with the help of probabilistic arguments such as first and second moment methods, independence of random variables, or martingales. On the other hand, papers about random graphs on surfaces, e.g. <cit.>, use singularity analysis of generating functions. In contrast, we combine various complementary methods to prove our results.

The starting point of our proofs are constructive decompositions of graphs, a method mostly used in enumerative combinatorics. Every graph in 𝒮_g(n,m) can be decomposed into its complex components and non-complex components, which can then be further decomposed into smaller parts. The most important structures occurring in this decomposition are the so-called core and kernel of the graph. The decomposition is constructive in the sense that every graph can be constructed in a unique way starting from its kernel via its core and complex components (see <Ref>).

We interpret the aforementioned constructive decomposition in terms of combinatorial counting, in other words, we represent the number of graphs in the class 𝒮_g(n,m) as a sum over the subclasses that are involved in the decomposition.
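Before continuing with the outline, we briefly return to the fixed-point equation 1-β = e^{-αβ} referenced above. It is easy to evaluate numerically; the following small Python sketch is our own illustration (not part of the paper's machinery) and solves the equation by fixed-point iteration, which converges for every constant α>1 when started at β=1.

    import math

    def survival_probability(alpha, tol=1e-12):
        # unique positive solution beta of 1 - beta = exp(-alpha*beta),
        # i.e. the survival probability of a Galton-Watson branching
        # process with Po(alpha) offspring; assumes alpha > 1
        beta = 1.0
        while True:
            nxt = 1.0 - math.exp(-alpha * beta)
            if abs(nxt - beta) < tol:
                return nxt
            beta = nxt

    print(survival_probability(1.5))  # ~ 0.583
    print(survival_probability(2.0))  # ~ 0.797

Even at average degree two (α=2), the Erdős–Rényi giant component thus whp leaves about a fifth of all vertices uncovered, in sharp contrast to the behaviour of S_g(n,m) described in <Ref>.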
We proceed by determining the main contributions to the sum using a combinatorial variant of Laplace's method from complex analysis, a technique to derive asymptotic estimates of integrals that depend on a parameter n tending to infinity. To illustrate how we apply this approach, assume that we want to analyse a sum of the form

A(n) = ∑_{i∈ I} B(i)C(n-i),

where i is a parameter related to one of the substructures occurring in the constructive decomposition, e.g. the order of the core, say. We rewrite A(n) as

A(n) = ∑_{i∈ I} exp(f(i))

with f(i)=log(B(i)C(n-i)) and then estimate the exponent f(i) in order to determine the main contribution to A(n) in the following sense. We determine a set J⊂ I so that the partial sum over all i∈ I∖ J (the tail of the sum) is of smaller order than the total sum (see <Ref> for a formal definition). The probabilistic interpretation of this main contribution is that whp S_g(n,m) has its corresponding parameter i in the set J. In our example, this will tell us the 'typical' order of the core of S_g(n,m).

The exact method by which we estimate the value of the tail and compare it to the total value of the sum will differ from case to case. In some cases, rough bounds provided by maximising techniques will suffice; in other cases, we need better bounds, which we derive by using Chernoff bounds or by bounding the sums via integrals. Systematic applications of these techniques enable us to derive the exact ranges of the main contributions. From the main contributions, we deduce the orders of components, the component structure, and other structural properties of S_g(n,m) by applying both combinatorial methods (e.g. double counting) and probabilistic techniques (e.g. Markov's and Chebyshev's inequalities).

This paper is organised as follows. After presenting the necessary notation and definitions in <Ref>, we give an overview of the proof strategy in <Ref>; in particular, we derive the aforementioned representation of |𝒮_g(n,m)| as a sum. In <Ref>, we determine the main contributions to this sum using the techniques mentioned above. From these results, we derive structural properties of S_g(n,m) in <Ref>. <Ref> are devoted to the proofs of our main results and of the auxiliary results, respectively. Finally, we discuss various open questions in <Ref>.

§.§ Related work

The order of the largest component of the Erdős–Rényi random graph G(n,m) at the time of the phase transition has been extensively studied <cit.>. Most of the results have been proved using purely probabilistic arguments (e.g. random walks, martingales), leading to even stronger results than the ones stated in <Ref>, e.g. about the limiting distribution of the order and size of the largest component <cit.>. In the case of S_g(n,m), the additional condition of the graph being embeddable on 𝕊_g makes it virtually impossible to use the same techniques in order to derive such strong results.

Comparing <Ref>, the main differences appear when the giant component arises in the supercritical regime, that is, when λ→∞. Firstly, the order of the giant component is only about half as large in S_g(n,m) as it is in G(n,m). Secondly, the i-th largest component H_i for fixed i≥ 2 is much larger in S_g(n,m) than in G(n,m). These two differences are closely related for the following reason. In G(n,m), the number n' of vertices and m' of edges outside the giant component are such that m' = (1+λ'n'^{-1/3})n'/2 with λ'→-∞, and thus G(n',m') whp only has small components.
In S_g(n,m), the smaller order of the giant component enforces m' to be in the critical regime, where λ'→ c∈ℝ, thus resulting in larger orders for H_i with i≥ 2. Lastly, while each such H_i is whp a tree for the Erdős–Rényi random graph, it has a positive probability of being complex for S_g(n,m).

Planar graphs and graphs embeddable on 𝕊_g have been investigated separately for the 'sparse' regime m ≤ n+o(n) <cit.> and for the 'dense' regime m = ⌊μ n⌋ with μ∈(1,3) <cit.>. From a random graph point of view, in particular when the giant component is considered, the sparse regime is the more interesting one. In the sparse regime, Kang and Łuczak <cit.> supplied new resourceful proof methods—some of which we apply in a somewhat similar fashion in this paper—combining probabilistic and graph theoretic methods with techniques from enumerative and analytic combinatorics. On the other hand, minor mistakes in <cit.> led to results that featured order terms that were claimed to be stronger than what has actually been proved. One contribution of this paper is to correct and strengthen these results from <cit.>.

In the dense regime, Giménez and Noy <cit.> and Chapuy, Fusy, Giménez, Mohar, and Noy <cit.> use techniques from analytic combinatorics to prove limit laws for graphs embeddable on 𝕊_g. The advantage of their techniques is that one method can be applied to derive a range of various limit laws, e.g. on the number of components, the order of the largest component, and the chromatic and list-chromatic number. On the other hand, the techniques are limited to a) the class 𝒮_g(n) of n-vertex graphs embeddable on 𝕊_g, in other words, graphs with n vertices and an arbitrary number of edges, or b) the class 𝒮_g(n,⌊μ n⌋), where μ is a constant. A random graph chosen from the class 𝒮_g(n) is averaged over all graphs with an arbitrary number of edges and thus not appropriate when we look at a specific range of m.[In fact, the properties of a random graph chosen from 𝒮_g(n) are dominated by the graphs whose edge density is quite large, more precisely, when μ≈ 2.21 <cit.>.] On the other hand, the class 𝒮_g(n,⌊μ n⌋) scales the number m of edges as a linear function in n, which is not fine enough to capture the changes that take place within the critical windows, which have length Θ(n^{2/3}) for <Ref> and Θ(n^{3/5}) for <Ref>. In terms of critical behaviour these techniques are therefore not applicable.

§ PRELIMINARIES

§.§ Asymptotic notations

By ℕ we denote the set of non-negative integers. In order to express orders of components in a random graph when n tends to infinity, we use the following notation. Recall that an event holds with high probability, or whp for short, if it holds with probability tending to one as n tends to infinity. Let X=(X_n)_{n∈ℕ} be a sequence of random variables and let f:ℕ→ℝ_{≥0} be a function. For c∈ℝ^+ and n∈ℕ, consider the inequalities

X_n ≤ c f(n),
X_n ≥ c f(n).

We say that
* X_n = O(f) whp, if there exists c∈ℝ^+ such that (<ref>) holds whp;
* X_n = o(f) whp, if for every c∈ℝ^+, (<ref>) holds whp;
* X_n = Ω(f) whp, if there exists c∈ℝ^+ such that (<ref>) holds whp;
* X_n = ω(f) whp, if for every c∈ℝ^+, (<ref>) holds whp;
* X_n = Θ(f) whp, if both X_n = O(f) and X_n = Ω(f) whp;
* X_n = O_p(f), if for every δ>0, there exist c_δ∈ℝ^+ and N_δ∈ℕ such that (<ref>) holds for c=c_δ and n≥ N_δ with probability at least 1-δ;
* X_n = Θ_p(f), if for every δ>0, there exist c_δ^+,c_δ^-∈ℝ^+ and N_δ∈ℕ such that for n≥ N_δ, with probability at least 1-δ, both (<ref>) holds for c=c_δ^+ and (<ref>) holds for c=c_δ^-.
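The following standard example (ours, for illustration) separates the whp notions from the probabilistic ones. Let Y be a fixed positive random variable whose distribution does not depend on n and whose support is unbounded, and set X_n := Y·f(n). Then X_n = Θ_p(f): for every δ>0 one can choose constants c_δ^-,c_δ^+ with ℙ[c_δ^- ≤ Y ≤ c_δ^+] ≥ 1-δ. However, X_n = O(f) does not hold whp for any fixed c, since ℙ[X_n ≤ c f(n)] = ℙ[Y ≤ c] is a constant smaller than one. Statements such as |H_i| = Θ_p(n^{2/3}) in our main results are precisely of this weaker type.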
The special case of X=O_p(1) is also known as X being bounded in probability.

§.§ Graphs on surfaces

Given a graph G, we denote its vertex set and its edge set by V(G) and E(G), respectively. All graphs in this paper are vertex-labelled, that is, V(G)=[n] for some n∈ℕ. Let g∈ℕ be fixed. An embedding of a graph G on 𝕊_g, the orientable surface of genus g, is a drawing of G on 𝕊_g without crossing edges. If G has an embedding on 𝕊_g, we call G embeddable on 𝕊_g. Clearly, embeddability is monotone in g, i.e. every graph that is embeddable on 𝕊_g is also embeddable on 𝕊_{g+1}. By the genus of a given graph G we denote the smallest g∈ℕ for which G is embeddable on 𝕊_g. Graphs with genus zero are also called planar.

Let H be a connected graph embeddable on 𝕊_g. We say that H is unicyclic if it contains precisely one cycle and we call H complex (also known as multicyclic) if it contains at least two cycles; the latter is the case if and only if H has more edges than vertices. If H is complex, we call

ex(H) := |E(H)|-|V(H)|

the excess of H. For a non-connected graph G, we define ex(G) to be the sum of the excesses of its complex components (and set ex(G)=0 as a convention if G has no complex components). G is called complex if all its components are complex.

§.§ Complex part, core, and kernel

Let G be any graph. The union Q_G of all complex components of G is called the complex part of G. The core C_G of G is defined as the maximal subgraph of Q_G with minimum degree at least two. The core can also be obtained from the complex part by recursively deleting vertices of degree one (in an arbitrary order). Vice versa, the complex part can be constructed from the core by attaching trees to the vertices of the core. Finally, the kernel K_G of G is constructed from the core C_G by replacing all vertices of degree two in the following way. Every maximal path P in C_G consisting of vertices of degree two is replaced by an edge between the vertices of degree at least three that are adjacent to the end vertices of P. By this construction, loops and multiple edges can occur. Reversing the construction, the core arises from the kernel by subdividing edges.

It is important to note that K_G is non-empty as soon as Q_G is, because each component of the complex graph Q_G contains a non-empty core with at least one vertex of degree at least three. Furthermore, K_G has minimum degree at least three and might contain loops and multiple edges. Observe that G is embeddable on 𝕊_g if and only if K_G is. In particular, G and K_G have the same genus. Also observe that ex(G) = ex(Q_G) by definition and

ex(Q_G) = ex(C_G) = ex(K_G),

because subdividing edges and attaching trees changes the number of vertices and edges by the same amount.

Given a graph G with n vertices, we denote the numbers of vertices of the complex part Q_G, the core C_G, and the kernel K_G by n_Q, n_C, and n_K, respectively. The numbers of edges of Q_G, C_G, and K_G satisfy

|E(Q_G)| = n_Q+ex(G), |E(C_G)| = n_C+ex(G), |E(K_G)| = n_K+ex(G).

The kernel has minimum degree at least three by definition and thus has at least (3/2)n_K edges. A kernel is called cubic if all its vertices have degree three; in that case, it has precisely (3/2)n_K edges. The deficiency of G is defined as

def(G) := 2|E(K_G)|-3n_K = 2·ex(G) - n_K.

Clearly, the deficiency is always non-negative, and def(G)=0 if and only if the kernel K_G is either empty or cubic. The definition of the excess and deficiency of a graph immediately implies the following relation between the deficiency, the excess, and the number of vertices and edges of the kernel.
Given a graph G, the numbers n_K of vertices and m_K of edges in the kernel K_G of G are

n_K = 2·ex(G) - def(G) and m_K = 3·ex(G) - def(G).

§.§ Useful bounds

We will frequently use the following widely known formulas:

1+x = exp(x-x^2/2+x^3/3+O(x^4)) if x=o(1),
1+x ≤ exp(x),
1+x ≥ exp(x-x^2/2) if x≥0.

To derive bounds for the factorial n! and the falling factorial (k)_i := k!/(k-i)!, we shall use the inequalities

√(2π n)(n/e)^n ≤ n! ≤ e√(n)(n/e)^n,
k^i exp(-i^2/(2(k-i))) ≤ (k)_i ≤ k^i exp(-i(i-1)/(2k)).

For 1≤ k≤ n-1 we will also use refined bounds for the binomial coefficient, obtained by applying (<ref>) thrice:

√(2π) n^{n+1/2}/(e^2 k^{k+1/2}(n-k)^{n-k+1/2}) ≤ binom(n,k) ≤ e n^{n+1/2}/(2π k^{k+1/2}(n-k)^{n-k+1/2}).

We shall also use the inequality

1/(a+b) ≥ 1/a - b/a^2 if a≠0, a+b>0.

Finally, we need some well-known inequalities from probability theory. Given a random variable X, we denote by 𝔼X its expectation. The variance of X is then defined as

σ^2 := 𝔼[(X-𝔼X)^2] = 𝔼[X^2]-(𝔼X)^2.

For a non-negative random variable X and any t>0, Markov's inequality states that

ℙ[X≥ t] ≤ 𝔼X/t.

A stronger bound—which additionally holds for arbitrary random variables—is provided by Chebyshev's inequality. For any random variable X and any t>0, we have

ℙ[|X-𝔼X|≥ t] ≤ σ^2/t^2.

In terms of Chernoff bounds, we shall need the two special cases of normal distributions and binomial distributions. For a normally distributed random variable X, we have, for any given t>0,

ℙ[|X-𝔼X|≥ t] ≤ 2exp(-t^2/(2σ^2)).

If X is a binomially distributed random variable, then

ℙ[|X-𝔼X|≥ t] ≤ 2exp(-t^2/(2(𝔼X+t/3))).

§ PROOF STRATEGY

§.§ Decomposition and construction

Throughout the paper, let g∈ℕ be fixed. We have seen in <Ref> that any graph that is embeddable on 𝕊_g can be decomposed into a) its complex part and b) trees and unicyclic components. The complex part can then further be decomposed so as to obtain the core and the kernel. The following steps construct every graph embeddable on 𝕊_g.
* Pick a kernel, i.e. a multigraph with minimum degree at least three that is embeddable on 𝕊_g;
* subdivide the edges of the kernel to obtain a core;
* to every vertex v of the core, attach a rooted tree T_v (possibly consisting only of one vertex) by identifying v with the root of T_v, so as to obtain a complex graph;
* add trees and unicyclic components to obtain a general graph embeddable on 𝕊_g.

To avoid overcounting in <ref> if the kernel has loops or multiple edges, multigraphs will always be weighted by the compensation factor introduced by Janson, Knuth, Łuczak, and Pittel <cit.>, which is defined as follows. Given a multigraph M and an integer i≥ 1, denote by e_i(M) the number of (unordered) pairs {u,v} of vertices for which there are exactly i edges between u and v. Analogously, let ℓ_i(M) denote the number of vertices x for which there are precisely i loops at x. Finally, let ℓ(M)=∑_i iℓ_i(M) be the number of loops of M. The compensation factor of M is defined to be

w(M) := 2^{-ℓ(M)}∏_{i=1}^∞ (i!)^{-e_i(M)-ℓ_i(M)}.

In <ref>, the compensation factor enables us to account for multiple edges and loops at the same vertex (because of the factors 1/i!) as well as for the different orientations of loops (because of the factor 2^{-ℓ(M)}). This fact ensures that there is no overcounting in <ref>.
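As a small worked example (ours, for illustration): let M be the multigraph on two vertices u and x with a double edge between u and x and one loop at x. Then ℓ(M)=1, e_2(M)=1, ℓ_1(M)=1, and all other e_i(M) and ℓ_i(M) vanish, so

w(M) = 2^{-1}·(2!)^{-1}·(1!)^{-1} = 1/4.

The factor 4 = 1/w(M) matches the two ways of distinguishing the parallel edges times the two orientations of the loop, which is exactly the multiplicity that the weighting compensates for.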
Indeed, if a core C has kernel K, then C can be constructed from K by subdividing edges in precisely 1/w(K) different ways; thus, assigning weight w(K) to K prevents overcounting.

We denote by
* 𝒮_g the class of all graphs embeddable on 𝕊_g;
* 𝒬_g the class of all complex parts of graphs in 𝒮_g;
* 𝒞_g the class of all cores of graphs in 𝒮_g;
* 𝒦_g the class of all kernels of graphs in 𝒮_g;
* 𝒰 the class of all graphs without complex components.

In other words, 𝒬_g is the class of all complex graphs embeddable on 𝕊_g; 𝒞_g consists of all complex graphs embeddable on 𝕊_g with minimum degree at least two; and 𝒦_g comprises all (weighted) multigraphs embeddable on 𝕊_g with minimum degree at least three. The empty graph lies in all the classes above by convention.

If n,m∈ℕ are fixed, we write 𝒮_g(n,m) for the subclass of 𝒮_g containing all graphs with exactly n vertices and m edges. By S_g(n,m) we denote a graph chosen uniformly at random from all graphs in 𝒮_g(n,m). We use the corresponding notation also for the other classes defined above.

The construction of graphs in 𝒮_g from their kernel via the core and complex part as described in <ref>–<ref> can be translated into relations between the numbers of graphs in the previously defined classes. Starting from 𝒮_g(n,m), <ref> immediately gives rise to the identity

|𝒮_g(n,m)| = ∑_{n_Q,l} binom(n,n_Q) |𝒬_g(n_Q,n_Q+l)|·|𝒰(n_U,m_U)|,

where n_U = n-n_Q and m_U = m-n_Q-l. Indeed, for each fixed number n_Q of vertices in the complex part and each fixed excess l,
* the binomial coefficient counts the possibilities which vertices lie in the complex part,
* |𝒬_g(n_Q,n_Q+l)| counts the complex parts with n_Q vertices and n_Q+l edges, and
* |𝒰(n_U,m_U)| counts all possible arrangements of non-complex components.

If |𝒬_g(n_Q,n_Q+l)| and |𝒰(n_U,m_U)| are known, then we can use (<ref>) to determine |𝒮_g(n,m)|. Determining |𝒬_g(n_Q,n_Q+l)| turns out to be quite a challenging task, to which we devote a substantial part of this paper. The number |𝒰(n_U,m_U)|, on the other hand, can be determined using known results.

§.§ Graphs without complex components

The class 𝒰 of graphs without complex components (i.e. each component is either a tree or unicyclic) has been studied by Britikov <cit.> and by Janson, Knuth, Łuczak, and Pittel <cit.>, who determined the number of graphs in 𝒰(n,m) for different regimes of m. Let m=(1+λ n^{-1/3})n/2 with λ=λ(n)<n^{1/3} and let ρ(n,m) be such that

|𝒰(n,m)| = binom(binom(n,2),m) ρ(n,m).

There exists a constant c>0 such that for

f(n,m) = c (2/e^2)^{m-n} m^{m+1/2} n^{n-2m+1/2}/(n-m)^{n-m+1/2},

we have
* ρ(n,m)=1+o(1) if λ→-∞;
* for each a∈ℝ, there exists a constant b=b(a)>0 such that ρ(n,m)≥ b whenever λ≤ a;
* ρ(n,m) ≤ n^{-1/2} f(n,m) if λ>0 and λ=o(n^{1/12});
* ρ(n,m) ≤ f(n,m) if λ>0.

<Ref><ref>, <ref>, and <ref> are proven in <cit.> and <cit.>, but <ref> is a slight extension of the results in <cit.>, which we prove in <Ref> along the following lines. Inspired by the proof of <ref> in <cit.>, we bound ρ(n,m) by a contour integral and prove that this integral has value at most f(n,m) for all λ>0.

Clearly, every graph in 𝒰 is planar and thus also embeddable on 𝕊_g. This fact, together with <Ref> and <Ref><ref> and <ref>, will be enough to prove <Ref><ref> and <ref>. For all other regimes, <Ref> will provide a very useful way to bound the number |𝒰(n_U,m_U)| in (<ref>).

§.§ Complex parts

For the number |𝒬_g(n_Q,n_Q+l)|, we analyse <ref>–<ref> in order to derive an identity similar to (<ref>). Firstly, we need to sum over all possible numbers n_C of vertices in the core; the number of edges in the core is then given by n_C+l.
For fixed n_Q, n_C, and l, we have
* binom(n_Q,n_C) choices for which vertices of the complex part lie in the core,
* |𝒞_g(n_C,n_C+l)| ways to choose a core, and
* n_C n_Q^{n_Q-n_C-1} possibilities to attach n_C rooted trees with n_Q vertices in total to the vertices of the core.

By <ref>, we thus deduce that

|𝒬_g(n_Q,n_Q+l)| = ∑_{n_C} binom(n_Q,n_C) |𝒞_g(n_C,n_C+l)| n_C n_Q^{n_Q-n_C-1}.

In order to determine |𝒞_g(n_C,n_C+l)|, recall that by <Ref> the numbers of vertices and edges in the kernel depend only on the excess and the deficiency of the graph. Thus, we choose the deficiency d as the summation index. The number of ways to construct a core from a kernel according to <ref> cannot be described in an easy fashion like the constructions in <ref> and <ref>. We will investigate this construction step in more detail in <Ref>. For a kernel K∈𝒦_g(2l-d,3l-d), consider the number of different ways to subdivide its edges that result in a core with n_C vertices and n_C+l edges. Denote by φ_{n_C,l,d} the average of this number, taken over all kernels in 𝒦_g(2l-d,3l-d). With this notation, we deduce from <ref> and <ref> that

|𝒞_g(n_C,n_C+l)| = ∑_d binom(n_C,2l-d) |𝒦_g(2l-d,3l-d)| φ_{n_C,l,d}.

Recall that the multigraphs in 𝒦_g are weighted. Accordingly, |𝒦_g(2l-d,3l-d)| does not denote the number of these multigraphs, but the sum of their weights.

§.§ Analysing the sums

In each of (<ref>), (<ref>), and (<ref>), we may assume that the parameters n_Q,n_C,l,d of the sums only take those values for which the summands are non-zero. We call values for a parameter (or a set of parameters) admissible if there exists at least one graph satisfying these values for the corresponding parameters. The definition of the parameters, together with <Ref>, directly yields the following necessary conditions for admissibility.
* 0≤ n_Q≤ n;
* 0≤ n_C≤ n_Q;
* 0≤ l≤ m-n_Q;
* l=0 if and only if n_Q=0;
* l≤ 2n_C + 6(g-1);
* 0≤ d≤ 2l.

Inequality <ref> is due to Euler's formula applied to the core. These bounds will frequently be used; if we use other bounds, we will state them explicitly.

At first glance, the sole application of (<ref>), (<ref>), and (<ref>) seems to be to determine the number of graphs with given numbers of vertices and edges in the classes 𝒮_g, 𝒬_g, and 𝒞_g. However, we shall use these sums to derive typical structural properties of graphs chosen uniformly at random from one of these classes.

Our plan to derive such properties from the sums (<ref>), (<ref>), and (<ref>) is as follows. Once we have determined the values |𝒦_g(2l-d,3l-d)| and φ_{n_C,l,d}, we consider the parameters n_Q,n_C,l,d of the sums one after another. For each parameter i, we seek to determine which range for i provides the 'most important' summands. To make this more precise, let us introduce the following notation.

For every n∈ℕ, let I(n),I_0(n)⊂ℕ be finite index sets with I_0(n)⊆ I(n). For each i∈ I(n), let A_i(n)≥ 0. We say that the main contribution to the sum ∑_{i∈ I(n)} A_i(n) is provided by i∈ I_0(n) if

∑_{i∈ I(n)∖ I_0(n)} A_i(n) = o(∑_{i∈ I(n)} A_i(n)),

where n→∞. The sum over i∈ I(n)∖ I_0(n) is then called the tail of ∑ A_i(n).

We shall determine index sets I_Q(n), I_C(n), I_l(n), I_d(n) so that the main contributions to the sums in (<ref>), (<ref>), and (<ref>) are provided by n_Q∈ I_Q(n), n_C∈ I_C(n), l∈ I_l(n), and d∈ I_d(n), respectively. This will yield statements about the size of these values in the following way. For fixed m=m(n), the index set I_Q(n), for example, will be of the type [c_1f(n),c_2f(n)] for certain constants 0<c_1<c_2 and a certain function f=f(n).
This implies that if G=S_g(n,m), then whp n_Q∈ I_Q(n) and thus n_Q=Θ(f) whp. The main challenge is to find the 'optimal' intervals I_Q(n), I_C(n), I_l(n), I_d(n) in view of <Ref>, in the sense that they should be a) large enough so as to provide the main contribution and b) as small as possible so as to yield stronger concentration results. Achieving these two antipodal goals is a difficult task whose solution will differ from case to case. In order to prove that a given interval indeed provides the main contribution to a sum, we bound the tail of the sum using various complementary methods including maximising techniques (e.g. <Ref>), Chernoff bounds (<Ref>), and approximations by integrals (<Ref>).

Determining the main contributions to (<ref>), (<ref>), and (<ref>) will yield structural statements like the typical order of the complex part, the core, and the kernel of G=S_g(n,m). In order to derive the component structure of G, we further apply combinatorial techniques like double counting (e.g. <Ref> and <Ref>) and probabilistic methods such as Markov's and Chebyshev's inequalities (<Ref>).

§ KERNELS, CORES, AND COMPLEX PARTS

For the remainder of the paper, let n,m,n_Q,n_C,l,d∈ℕ be such that m=m(n)≤(1+o(1))n and such that n_Q, n_C, l, and d are admissible (in terms of <Ref>). Furthermore, set n_U=n-n_Q and m_U=m-n_Q-l. The aim of this section is to determine the main contributions (in the sense of <Ref>) to the sums in (<ref>), (<ref>), and (<ref>). In other words, we derive the typical orders of the complex part and the core of G=S_g(n,m), as well as the excess and the deficiency of G. These orders will be the main ingredients for the proofs of Theorems <ref>–<ref>. For all results in this section, we defer the proofs to <Ref>.

§.§ Kernels

Throughout this section, we assume l≥1. As a basis of our analysis of (<ref>), (<ref>), and (<ref>), we first have to determine the sum |𝒦_g(2l-d,3l-d)| of weights of the multigraphs in 𝒦_g(2l-d,3l-d). We start with the case when the kernel is cubic (or equivalently, d=0). The number of cubic kernels was determined in <cit.> by Fang and the authors of the present paper.

The number of cubic multigraphs with 2l vertices and 3l edges embeddable on 𝕊_g, weighted by their compensation factor, is given by

|𝒦_g(2l,3l)| = (1+O(l^{-1/4})) e_g l^{5g/2-7/2} γ_K^{2l} (2l)!,

where γ_K = 79^{3/4}/54^{1/2} ≈ 3.606 and e_g>0 is a constant depending only on g.

The number of connected cubic kernels will be of interest as well.

The number of connected multigraphs in 𝒦_g(2l,3l), weighted by their compensation factor, is

(1+O(l^{-1/4})) c_g l^{5g/2-7/2} γ_K^{2l} (2l)!,

where γ_K is as in <Ref> and c_g>0 is a constant depending only on g.

In particular, <Ref> imply that K_g(2l,3l) is connected with probability tending to c_g/e_g>0; in other words, the probability that a random cubic kernel is connected is bounded away from zero.

Before we consider kernels with non-zero deficiency, we shall look at the structure of cubic kernels. We aim to find the giant component of S_g(n,m) and prove that it is complex; hence finding the giant component of the kernel would be a basis for a complex giant component in S_g(n,m). Moreover, we would like this giant component to have genus g. The following result from <cit.> provides us with a component of genus g in a cubic kernel.

If g≥ 1, then whp K_g(2l,3l) has one component of genus g and all its other components are planar.

Intuitively, the non-planar component provided by <Ref> should be the largest component of the kernel, ideally even large enough to be the giant component. The following result shows that this component indeed covers almost all vertices in the kernel.
Let g≥ 1. Denote by P(G) the subgraph of G=K_g(2l,3l) consisting of all planar components. Then |P(G)| = O_p(1). Furthermore, |P(G)| is even and there exist constants c^+,c^-∈ℝ^+ such that for every fixed integer i≥ 1 and sufficiently large l,

c^- i^{-7/2}(1-i/l)^{5g/2-7/2} ≤ ℙ[|P(G)|=2i] ≤ c^+ i^{-7/2}(1-i/l)^{5g/2-7/2}.

For the case g=0, <cit.> provides an analogous statement to (<ref>) for the number of vertices outside the giant component of K_0(2l,3l).

Let us now look at general (not necessarily cubic) kernels. For such kernels, we are not able to give a precise formula for their number, but we can bound their number by comparing them to cubic kernels via a double counting argument.

Let k∈ℕ be fixed. For K∈𝒦_g, denote by
* 𝒫_1 the property that K has precisely k components;
* 𝒫_2 the property that, if g≥1, then each component of K has genus strictly smaller than g.

For i=1,2, denote by 𝒦_g(n_K,m_K;𝒫_i) the subclass of 𝒦_g(n_K,m_K) of kernels that have property 𝒫_i. Then

|𝒦_g(2l-d,3l-d)|/|𝒦_g(2l,3l)| ≤ 6^d/d! and |𝒦_g(2l-d,3l-d;𝒫_i)|/|𝒦_g(2l,3l;𝒫_i)| ≤ 6^d/d! for i=1,2.

If in addition d≤(2/7)l, then also

|𝒦_g(2l-d,3l-d)|/|𝒦_g(2l,3l)| ≥ 1/(216^d d!) and |𝒦_g(2l-d,3l-d;𝒫_i)|/|𝒦_g(2l,3l;𝒫_i)| ≥ 1/(216^d d!) for i=1,2.

<Ref> has two main applications. On the one hand, together with <Ref>, <Ref> provides a way to bound the value |𝒦_g(2l-d,3l-d)| in (<ref>). On the other hand, <Ref> will also enable us to extend the structural results from <Ref> to kernels with a fixed constant deficiency d (see <Ref>).

§.§ Core and deficiency

We first determine the main contributions to the sums in (<ref>) and (<ref>). By definition, 𝒞_g(0,0)=1. Thus, throughout this section we will assume that both n_C≥1 and l≥1 (recall that l=0 if and only if n_C=0). Observe that (<ref>), (<ref>), and the identity

binom(n_Q,n_C) binom(n_C,2l-d) = (n_Q)_{n_C}/((2l-d)!(n_C-2l+d)!)

imply that

|𝒬_g(n_Q,n_Q+l)| = ∑_{n_C,d} (n_Q)_{n_C} |𝒦_g(2l-d,3l-d)| φ_{n_C,l,d} n_C n_Q^{n_Q-n_C-1}/((2l-d)!(n_C-2l+d)!).

The factor |𝒦_g(2l-d,3l-d)| in (<ref>) can be bounded using <Ref> and <Ref>. The term φ_{n_C,l,d}, however, is still unknown. Recall that this value denotes the average number, over all K∈𝒦_g(2l-d,3l-d), of different ways to subdivide the edges of K that result in a core with n_C vertices and n_C+l edges.

There exists a function ν=ν(n_C,l,d) such that

φ_{n_C,l,d} = (n_C-2l+d)! binom(n_C+ν l-1, 3l-d-1)

and -5≤ν≤ 1.

Let us now determine the value of the sum in (<ref>) over n_C, as well as its main contribution. To this end, we apply <Ref> to (<ref>), gather all parts of the equation that depend on n_C, and denote the sum over these values by Σ_C.

There exists a function τ=τ(l,d) such that
* 1/216≤τ≤6 for all 0≤ d≤⌊2l/7⌋;
* 0≤τ≤6 for all ⌊2l/7⌋ < d≤2l;

and

|𝒬_g(n_Q,n_Q+l)| = (n_Q^{n_Q-1} |𝒦_g(2l,3l)|/(2l)!) ∑_{d=0}^{2l} binom(2l,d) (τ^d/(3l-d-1)!) Σ_C,

where

Σ_C = Σ_C(n_Q,l,d) := ∑_{n_C} ((n_Q)_{n_C}/n_Q^{n_C}) n_C (n_C+ν l-1)_{3l-d-1}.

The strategy to determine the main contribution to Σ_C is roughly as follows. Using inequalities from <Ref>, we bound Σ_C(n_Q,l,d) from above by a sum of the type

∑_{n_C} exp(A(n_Q,n_C,l,d)).

The derivative of A(n_Q,n_C,l,d) with respect to n_C turns out to have a zero at n_C = (1+o(1))n̂_C, where

n̂_C = √(n_Q(3l-d)).

We then substitute n_C = n̂_C + r and prove that the resulting sum—up to a scaling factor—corresponds to a normally distributed random variable to which the Chernoff bound (<ref>) applies. Finally, for n_C from the range of the main contribution to the upper bound, we derive a similar lower bound, which will enable us to derive the main contribution to Σ_C.

Let f_C=f_C(n_Q,l,d) be such that

Σ_C(n_Q,l,d) = √(n_Q) (n_Q(3l-d)/e)^{(3l-d)/2} exp(f_C).

* There exist constants a_C^+,b_C^+∈ℝ such that f_C ≤ a_C^+ + b_C^+√(l^3/n_Q).
*For every function ϵ(n_)=o(1), there exist constants N_∈, a_^-,b_^-∈ such that whenever n_≥ N_ and 7/2d≤ l≤ϵ n_, thenf_≥ a_^- + b_^-√(l^3/n_). *For every 0<δ<1/2, whenever n_,l→∞ and 7/2d≤ l≤ϵ n_, where ϵ=ϵ(n_)=o(1) is given, the main contribution to Σ_ is provided byn_∈ I_^δ(n_,l,d) := {k∈ | k-n_ < δn_}. Our next aim is to analyse the sum over d in (<ref>). To this end, observe that forΣ_d = Σ_d(n_,l) := ∑_d2ld(3l-d)^(3l-d+2)/2e^d/2τ^d/(3l-d)!n_^d/2exp(f_),(<ref>) and (<ref>) yield_g(n_,n_+l)=n_^n_+3l/2-1/2_g(2l,3l)/e^3l/2(2l)!Σ_d. We determine the value of Σ_d, as well as its main contribution, in a similar fashion as for Σ_.Let f_d=f_d(n_,l) be such thatΣ_d = (3l)^-(3l-1)/2e^3lexpf_d.*There exist constants a_d^+∈ and b_d^+∈ such thatf_d≤ a_d^+ + b_d^+√(l^3/n_).*For every function ϵ(n_)=o(1), there exist constants N_∈ and a_d^-,b_d^-∈ such thatf_d≥ a_d^- + b_d^-√(l^3/n_),whenever n_≥ N_ and l≤ϵn_.*There exists a constant β_d^+∈^+ such that for n_,l→∞ and l=o(n_), the main contribution to Σ_d is provided by * d∈ I_d(n_,l) := {0} if l=o(n_^1/3);* d∈ I^h_d(n_,l) := { k∈| k≤ h(n_)} for every fixed function h=h(n_)=ω(1) if l=Θ(n_^1/3);* d∈ I_d(n_,l) := { k∈| k ≤β_d^+√(l^3/n_)}if l=ω(n_^1/3). Interpreted in a probabilistic sense, <Ref> immediately yield the typical order of a core of a complex graph, as well as the typical deficiency. For every function ϵ(n_)=o(1), if n_,l→∞ and l≤ϵn_, then=_g(n_,n_+l) has a core with √(3n_ l)(1+o(1)) vertices. Furthermore, the deficiency ofis given by() =0if l = o(n_^1/3),O_p(1)if l = Θ(n_^1/3),O√(l^3/n_) if l = ω(n_^1/3).Observe that <Ref> requires n_ and l to be growing and l to be of smaller order than n_. We shall later see that this willbe the case for the of _g(n,m).In addition to <Ref>, which tells us the deficiency and the order of the core of _g(n_,n_+l), <Ref> also enables us to express the number of complex graphs that are embeddable on . For all positive admissible values n_,l, we have_g(n_,n_+l)=n_^n_+3l/2-1/2_g(2l,3l)e^3l/2/(3l)^(3l-1)/2(2l)!expf_d.This finalises our analysis of (<ref>) and (<ref>).§.§ Complex part and excess In this section we derive the main contributions with respect to n_ and l to the doublesum (<ref>). In the previous section, we had to distinguish the cases n_=0 and n_>0 in order to determine the number of complex graphs. Similarly, it will turn out that our asymptotic formulas will be quite different depending on whether the number m_=m-n_-l of edges outside the complex part is zero or not. In order to keep expressions simple, we will deal with the special cases n_=0 and m_=0 separately.To this end, define _g^*(n,m) to be the subclass of _g(n,m) consisting of all graphs for which the complex part is non-empty and the non-complex part has at least one edge. After bounding _g^*(n,m), we shall see that the two special cases n_=0 and m_=0 are `rare' in the sense that almost all graphs in _g(n,m) are also in _g^*(n,m). For every m=m(n) as in <Ref><ref>, <Ref>, or <Ref> we have_g(n,m)∖_g^*(n,m)=o_g^*(n,m).By <Ref>, we can determine the main contributions to (<ref>) by deriving the main contributions to the corresponding sum for _g^*(n,m), namely_g^*(n,m) = ∑_n_,lnn__g(n_,n_+l)·(n_,m_),where n_ and l take all admissible values with n_>0 and m_>0.In order to analyse (<ref>), we derive an upper bound for the sum over n_ and subsequently also for the sum over l. These upper bounds indicate which intervals I_(n) and I_l(n) for n_ and l, respectively, `should' provide the main contribution to (<ref>). 
For n_ and l from these intervals, we then derive lower bounds and prove that the lower bound for n_∈ I_(n) and l∈ I_l(n) is much larger than the tails of the upper bound, thusproving that the main contribution to (<ref>) is indeed provided by n_∈ I_(n) and l∈ I_l(n).Applying (<ref>), <Ref>, <Ref>, and <Ref> to (<ref>), we have_g^*(n,m) = Θ(1)n^n+1/2e/2^m∑_l l^5g/2-3-3l/2ϕ^l∑_n_ρ(n_,m_) ψ(n_,l),where ϕ=2√(e)γ_^23^-3/2 andψ(n_,l) = (2/e)^n_n_^3l/2-1n_^2m_-n_-1/2m_^-m_-1/2expf_d. Consider the sumΣ_=Σ_(n,m,l):=∑_n_ρ(n_,m_) ψ(n_,l),where we sum over all values of n_ that are admissible in _g^*(n,m). We shall see in <Ref> that for fixed l>0, the main contribution to Σ_ is centred aroundn_ = 2m-n-2l.The corresponding numbers of vertices and edges in the non-complex components are given byn_ = 2(n-m+l) andm_ = n-m+l.The bounds for Σ_ will depend on whether l is `small' or `large', more precisely, whether9m_^23l/2-1≤n_^3is satisfied (if so, l is considered small) or not (if so, l is large).Define M_=M_(n,m,l) byM_ = 2/e^2m-nn_^3l/2-1m_^-m_-1 if (<ref>) holds, 2/e^2m-nl^l/2-1/3m_^-m_+l-5/3 otherwise.ThenΣ_≤ n^3/2expO(l)M_.Furthermore, for every fixed positive valued function ϵ=ϵ(n)=o(1) and every δ>0, there exists N∈ such that for all n≥ NΣ_≤Θ(1)n^3/2e/2^2l(1+δ)^lM_,whenever9m_^23l/2-1≤ϵn_^3. For the case that m is larger than n/2 by only a small margin, we prove a stronger bound with the help of <Ref><ref> and a more careful analysis of the sums involved.Let m=(1+ n^-1/3)n/2 with =o(n^1/12) and →∞. Then we haveΣ_≤ n^2/3expO(l)M_.In <Ref>, the exact bound depends on whether (<ref>) is satisfied or violated.Correspondingly, we setΣ_l := ∑_l l^5g/2-3-3l/2ϕ^lΣ_(n,m,l),where l takes all admissible values for which (<ref>) holds, andΣ̃_l := ∑_l l^5g/2-3-3l/2ϕ^lΣ_(n,m,l),where l takes all admissible values for which (<ref>) is violated. Heuristically, Σ_l should be the larger of the two sums, because l^-3l/2 should be the dominating term and this term is small when l is large (which isthe case when (<ref>) is violated). We shall see in <Ref> that Σ̃_l is indeed negligible.Accordingly, we focus on Σ_l for the moment. Applying the bound (<ref>), we have Σ_l ≤Σ_l^+, whereΣ_l^+ = 2/e^2m-n∑_l l^5g/2-3-3l/2ϕ^ln_^3l/2-1m_^-m_exp(O(l)).The main contribution to Σ_l^+ should be centred around its largest summand. We approximate the largest summand by ignoring polynomial terms and replacing the term exp(O(l)) by (e/2)^2l (which we saw in <Ref>to be a good approximation when (<ref>) holds). The remaining terms attain their largest value at the unique solution l_0 of the equationl_0 = ϕ^2/3(2m-n-2l_0)/e^1/32^4/3(n-m+l_0)^2/3 , m-n<l_0<m-n/2 .Before we proceed to prove that the main contribution to _g^*(n,m) is indeed provided by l `close to' l_0 (and thus the `typical excess' of a graph in _g^*(n,m) is close to l_0), let us take a closer look at the value l_0. We introduce the following notation for the seven different cases of m(n) from our main results.: m(n)=(1+ n^-1/3)n/2 with =(n)=o(n^1/3) and →-∞, the first subcritical regime;: m(n)=(1+ n^-1/3)n/2 with → c_∈, the first critical regime;: m(n)=(1+ n^-1/3)n/2 with =o(n^1/3) and →∞, the first supercritical regime;: m(n)=n/2 with =(n)→ c_∈(1,2), the intermediate regime;: m(n)=(2+ n^-2/5)n/2 with =(n)=o(n^2/5) and →-∞, the second subcritical regime;: m(n)=(2+ n^-2/5)n/2 with → c_∈, the second critical regime;: m(n)=(2+ n^-2/5)n/2 with =o((log n)^-2/3n^2/5) and →∞, the second supercritical regime. 
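To make the defining equation of l_0 and the regimes above concrete, the following Python sketch solves the fixed-point equation for l_0 numerically by bisection and compares the result with the asymptotic orders stated in the proposition below. It is an illustration added here, not part of the paper: the sample value of n is hypothetical, and the regime parameter, whose symbol is lost in this extraction, is written lam (first phase transition) and zeta (second phase transition).

```python
import math

# gamma = 79^(3/4)/54^(1/2) and phi = 2*sqrt(e)*gamma^2*3^(-3/2), as in the text.
GAMMA = 79 ** 0.75 / 54 ** 0.5
PHI = 2 * math.sqrt(math.e) * GAMMA ** 2 / 3 ** 1.5
C = PHI ** (2 / 3) / (math.e ** (1 / 3) * 2 ** (4 / 3))

def l0(n, m):
    """Bisection for l = C*(2m-n-2l)/(n-m+l)^(2/3) on (max(m-n,0), m-n/2);
    the left-hand side minus the right-hand side is increasing in l."""
    f = lambda l: l - C * (2 * m - n - 2 * l) / (n - m + l) ** (2 / 3)
    lo, hi = max(m - n, 0.0) + 1e-9, m - n / 2 - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

n = 10 ** 9          # hypothetical n, large enough to show the asymptotic orders
lam = zeta = 20.0    # hypothetical regime parameters
cases = [
    ("first supercritical", (1 + lam * n ** (-1 / 3)) * n / 2, lam),
    ("intermediate (c=1.5)", 1.5 * n / 2, n ** (1 / 3)),
    ("second subcritical", (2 - zeta * n ** (-2 / 5)) * n / 2, zeta ** (-2 / 3) * n ** 0.6),
    ("second supercritical", (2 + zeta * n ** (-2 / 5)) * n / 2, 0.5 * zeta * n ** 0.6),
]
for name, m, order in cases:
    print(f"{name:22s} l0 ~ {l0(n, m):14.1f}  predicted order ~ {order:14.1f}")
```

On such inputs the computed value tracks the stated orders up to moderate constant factors, which is exactly what the Θ-statements assert.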
The union of the first three cases will also be referred to as the first phase transition, while the union of the last three cases is called the second phase transition. In the first subcritical and first critical regimes, our main results will follow from well-known results. Thus, for the rest of this section, we assume that we are in one of the other five cases. The definition of l_0 immediately yields its asymptotic order. The value l_0 defined in (<ref>) is positive and satisfies

l_0 = Θ() in the first supercritical regime, Θ(n^1/3) in the intermediate regime, Θ(^-2/3 n^3/5) in the second subcritical regime, Θ(n^3/5) in the second critical regime, and 1/2 n^3/5 + Θ(^-3/2 n^3/5) in the second supercritical regime.

Furthermore, in the second supercritical regime, we have 0 < l_0 - 1/2 n^3/5 = Θ(^-3/2 n^3/5). In general, l_0 will not be an integer and thus in particular not admissible. Set l_1 := ⌈ l_0 ⌉. Now (<ref>) and <Ref> yield

l_1 = (1+o(1)) ϕ^2/3 (2m-n-2l_1) / (e^1/3 2^4/3 (n-m+l_1)^2/3) .

From <Ref> we deduce that all l `close to' l_1 are admissible and use this fact to derive a lower bound on _g^*(n,m). Let c>1 be given and suppose that l∈ with l_0/c ≤ l ≤ c l_0 and 0 < m_ = Θ(n^3/5) in the second critical regime, Θ(^-3/2 n^3/5) in the second supercritical regime. Then l is admissible. Furthermore, there exists ñ_ = n_ + O(m_^2/3) such that

Σ_(n,m,l) ≥ Θ(1) (e/2)^2l m_^2/3 exp(f_d(ñ_,l)) M_(n,m,l).

In particular, for every δ>0 and n large enough,

_g^*(n,m) ≥ Θ(1) n^n+1/2 (e/2)^m+2l_1 l_1^-3l_1/2 ϕ^l_1 (n-m+l_1)^2/3 (1-δ)^l_1 M_(l_1,n,m).

The bound in <Ref> enables us to show that Σ̃_l is negligible. For n→∞, we have

n^n+1/2 (e/2)^m Σ̃_l = o(_g^*(n,m)).

<Ref> implies that the main contribution to _g^*(n,m) is provided by the same intervals that provide the main contribution to Σ_l. After determining lower bounds for the summands in (<ref>), our aim is to determine the `optimal' intervals in view of <Ref>. In other words, we are looking for intervals I_(n) and I_l(n) such that a) the lower bound, summed over I_(n) and I_l(n), is much larger than the `tail' of the upper bound and b) I_(n) and I_l(n) are as small as possible. To that end, in the second phase transition, we need an auxiliary result that tells us that f_d (defined in <Ref>) does not change `too much' if we fix l and change n_ by a small fraction. Suppose that m(n) lies in the second subcritical, second critical, or second supercritical regime. Let positive valued functions h=h(n)=ω(1) and ϵ=ϵ(n)=o(1) satisfying hϵ=ω(1) be given. Then for all δ>0, there exists N∈ such that for all n>N, n_ = (1+o(1))n, and h ≤ l ≤ n_/h, we have

f_d((1-ϵ)n_,l) - f_d(n_,l) ≤ δϵ l.

With this auxiliary result, we can now determine the desired intervals I_(n) and I_l(n) that provide the main contribution to _g^*(n,m). There exist constants β_l^+,β_l^-∈^+ and functions η_l^+,η_l^-: (1,2)→^+, and ϑ_l^+,ϑ_l^-→^+ with

β_l^+>β_l^-, η_l^+(x)>η_l^-(x), ϑ_l^+(x)>ϑ_l^-(x)>x/2

for all x∈ such that the following holds. For every fixed function h=h(n)=ω(1), the main contribution to (<ref>) is provided by l∈ I_l(n) and n_∈ I_^h(n,l), where

I_l(n) =
{ k∈ | β_l^- ≤ k ≤ β_l^+ } in the first supercritical regime,
{ k∈ | η_l^-(c_) n^1/3 ≤ k ≤ η_l^+(c_) n^1/3 } in the intermediate regime,
{ k∈ | β_l^- ^-2/3 n^3/5 ≤ k ≤ β_l^+ ^-2/3 n^3/5 } in the second subcritical regime,
{ k∈ | ϑ_l^-(c_) n^3/5 ≤ k ≤ ϑ_l^+(c_) n^3/5 } in the second critical regime,
{ k∈ | β_l^- ^-3/2 n^3/5 ≤ k - 1/2 n^3/5 ≤ β_l^+ ^-3/2 n^3/5 } in the second supercritical regime,

and

I_^h(n,l) = { k∈ | |k - n_| ≤ h m_^2/3 }.

§ INTERNAL STRUCTURE

In <Ref>, we determined the main contributions to _g^*(n,m) and thus, by <Ref>, also the main contributions to _g(n,m). Interpreting these results in a probabilistic sense, we deduce the typical orders n_,n_ of the complex part and the core of G=_g(n,m), respectively, as well as its typical excess (G) and deficiency (G). All results in this section are proved in <Ref>. The complex part, for instance, grows from order n^2/3 in the first supercritical regime to linear order in the intermediate regime.
The number m_ of edges outside the complex part is about half the number n_ of vertices outside the complex part. Let G=_g(n,m). Then n_, n_, (G), and (G) lie in the following ranges.

| quantity | first supercritical regime | intermediate regime |
| n_ (order of the complex part) | n^2/3+O_p(n^2/3) | (c_-1)n+O_p(n^2/3) |
| n_ (order of the core) | Θ( n^1/3) | Θ(n^2/3) |
| (G) (excess) | Θ() | Θ(n^1/3) |
| (G) (deficiency) | 0 | O_p(1) |

Furthermore, m_ = n_/2 + O_p(n_^2/3). In the second phase transition, the complex part covers almost all vertices and thus, it is more convenient to consider the number n_ = n - n_ of vertices outside the complex part. Let G=_g(n,m). Then n_, n_, (G), and (G) lie in the following ranges.

| quantity | second subcritical regime | second critical regime | second supercritical regime |
| n_ (outside the complex part) | n^3/5+O_p(^2/3 n^2/5) | Θ(n^3/5) | Θ(^-3/2 n^3/5) |
| n_ (order of the core) | Θ(^-1/3 n^4/5) | Θ(n^4/5) | Θ(^1/2 n^4/5) |
| (G) (excess) | Θ(^-2/3 n^3/5) | Θ(n^3/5) | 1/2 n^3/5+Θ(^-3/2 n^3/5) |
| (G) (deficiency) | O(^-1 n^2/5) | O(n^2/5) | O(^3/2 n^2/5) |

Furthermore, we have

n_ = 2(G) - n^3/5 + O_p((2(G) - n^3/5)^2/3) and m_ = n_/2 + O_p(n_^2/3).

As an immediate corollary of <Ref>, we deduce the typical order of the kernel of G=_g(n,m). The number n_ of vertices and m_ = 3/2 n_ + (G) of edges of the kernel of G=_g(n,m) lie in the following ranges.

| quantity | first supercritical | intermediate | second subcritical | second critical | second supercritical |
| n_ (order of the kernel) | Θ() | Θ(n^1/3) | Θ(^-2/3 n^3/5) | Θ(n^3/5) | 1/2 n^3/5+Θ(^-3/2 n^3/5) |
| (G) (deficiency) | 0 | O_p(1) | O(^-1 n^2/5) | O(n^2/5) | O(^3/2 n^2/5) |

<Ref> and <Ref> tell us the orders of the complex part, the core, and the kernel. What we are ultimately looking for, however, are orders of components. <Ref> cover the case of cubic kernels, which are precisely the kernels of _g(n,m) in the first supercritical regime. However, we are not interested in the properties a kernel has if we pick it uniformly at random from the class of all kernels. We are rather looking for properties of the kernel of _g(n,m), where the randomness lies in _g(n,m). Clearly, we cannot expect the probability distribution on the class of kernels given by this construction to be uniform. However, by a double counting argument, we prove that the aforementioned probability distribution does not differ `too much' from the uniform distribution if we are in the first supercritical or the intermediate regime. From this, we use Markov's inequality (<ref>) to deduce that in these regimes, the kernel _G, the core _G, and the complex part _G of G=_g(n,m) have a component of genus g that covers almost all vertices, while all other components are planar. Recall that H_i(G') denotes the i-th largest component of a graph G'. Denote by R(G') the graph G'∖ H_1(G'). Let G=_g(n,m), where m=m(n) lies in the first supercritical or the intermediate regime.

*_G, _G, and _G have the same number k=O_p(1) of components;
*for every i≥2, the probability that _G, _G, and _G have at least i components is bounded away both from 0 and 1;
*H_1(_G), H_1(_G), and H_1(_G) have genus g;
*R(_G), R(_G), and R(_G) are planar;
*if _G, _G, and _G have at least i≥ 2 components, then H_i(_G) = Θ_p(1), H_i(_G) = Θ_p(n^1/3), H_i(_G) = Θ_p(n^2/3);
*R(_G)=O_p(1);
*R(_G)=O_p(n^1/3);
*R(_G)=O_p(n^2/3).

For the second phase transition, the proof method of <Ref> fails. For these cases, we prove the existence of the giant component by different means in <Ref>. From <Ref>, we deduce the typical order of the largest components of the complex part, the core, and the kernel of _g(n,m), respectively. For G=_g(n,m), the largest components of the complex part _G, the core _G, and the kernel _G, respectively, have the following order.

| quantity | first supercritical regime | intermediate regime |
| H_1(_G) (complex part) | n^2/3+O_p(n^2/3) | (c_-1)n+O_p(n^2/3) |
| H_1(_G) (core) | Θ( n^1/3) | Θ(n^2/3) |
| H_1(_G) (kernel) | Θ() | Θ(n^1/3) |

§ PROOFS OF MAIN RESULTS

In this section, we prove the main results (<Ref>) of this paper, as well as the structural results from <Ref>.

§.§ Proof of <Ref>

In the first subcritical regime, i.e. m=(1+ n^-1/3)n/2 with =o(n^1/3) and →-∞, the random graph G(n,m) is embeddable on the surface by <Ref>. Thus, <Ref><ref> follows immediately from <Ref><ref>. In the first critical regime, i.e. → c_∈, <Ref><ref> implies that G(n,m) has no complex components with positive probability. Thus, <Ref><ref> yields the second statement of <Ref><ref>.
By <cit.>, the probability that G(n,m) is planar, and thus in particular embeddable on the surface, is larger than the probability that G(n,m) has no complex components. Hence the first statement of <Ref><ref> follows as well. In the first supercritical regime, i.e. =o(n^1/3) and →∞, by <Ref><ref>–<ref> and <Ref>, the complex part of G=_g(n,m) has one component that has genus g and order n^2/3+O_p(n^2/3), while all other components are planar and have order Θ_p(n^2/3). By <Ref><ref> and <ref>, it remains to show that for each i≥ 1, the largest non-complex component has order Θ_p(n^2/3). By <Ref><ref> and the fact that m_ = n_/2 + O_p(n_^2/3), there is a positive probability that G(n_,m_) has no complex component and therefore the claim follows from <Ref><ref>.

§.§ Proof of <Ref>

Let m(n) be a function from the second phase transition, that is, m(n)=(2+ n^-2/5)n/2 with =(n)=o(n^2/5). Again, we denote the number n-n_ of vertices outside the complex part of a given graph G∈_g(n,m) by n_ and the number of edges outside the complex part by m_. We claim that n_-H_1(_G) = O_p(n_). In other words, for every δ>0 we need to find a constant c_δ so that n_-H_1(_G) ≤ c_δ n_ with probability greater than 1-δ for sufficiently large n. Fix δ>0 and denote by E_g(n,m) the subclass of _g(n,m) of those graphs G for which n_-H_1(_G) > c_δ n_ with c_δ := 5/δ. We have to prove that E_g(n,m) < δ_g(n,m) for sufficiently large n. Suppose that there exists an infinite set I⊂ such that E_g(n,m) ≥ δ_g(n,m) for all n∈ I. We use double counting in order to derive a contradiction from this assumption. Let n∈ I be fixed and pick a graph G∈E_g(n,m). <Ref> together with the assumption E_g(n,m) ≥ δ_g(n,m) yields that m_ = m - m_ = n_/2 + O_p(n_^2/3). By definition, H_1(_G) < n_ - 5/δ n_. Thus, there is a partition (A,B) of the vertices in _G such that each component is contained either in A or in B and that A ≥ B ≥ 5/δ n_. Now we perform the following operation. We delete one edge from the non-complex components and instead add an edge between A and B. The resulting graph is still embeddable on the surface and thus lies in _g(n,m). The number of choices for this operation is therefore

m_ · A · B ≥ (1+o(1)) 5/4δ n_^2 n_.

The reverse operation is to delete an edge uv from the complex part that separates u and v and add an edge outside the complex part (not creating any new complex components). There are less than n_ choices for uv, because any spanning tree of a component has to contain all edges of that component that are feasible for uv. Thus, there are less than n_^2 n_ possibilities for the reverse operation, yielding

(1+o(1)) 5/4δ n_^2 n_ E_g(n,m) < n_^2 n_ _g(n,m)

and thus E_g(n,m) < (1+o(1)) 4δ/5 _g(n,m) < δ_g(n,m) for sufficiently large n∈ I, a contradiction. We have thus proved that n_-H_1(_G) = O_p(n_), which in turn implies that n-H_1(_G) = O_p(n_). <Ref> now follows from <Ref> and the trivial fact that n-H_1(_G) ≥ n_.

§.§ Proof of <Ref>

Analogously to the proof of the corresponding case of <Ref>, <Ref> follows from <Ref><ref>, <Ref><ref>, <Ref>, and <Ref>.

Proof of main4: In <ref>, the first subcritical and first critical cases follow directly from <Ref><ref> and <ref>, respectively, and Stirling's formula, applied to the binomial coefficient \binom{\binom{n}{2}}{m}. For all other regimes, <Ref> provides a lower bound. On the other hand, <Ref> and <Ref> tell us that the main contribution to _g(n,m) is provided by an interval I_l(n) that is centred around l_1=⌈ l_0 ⌉. Moreover, by (<ref>), there exist constants 0<c^-<c^+ such that

c^- (2m-n-2l)/(n-m+l)^2/3 ≤ l ≤ c^+ (2m-n-2l)/(n-m+l)^2/3

for all l∈ I_l(n). The length of I_l(n) is O(l_0). Let l_2∈ I_l(n) be the index that maximises the summand.
Applying <Ref> and (<ref>), we observe that the resulting upper bound differs from the lower bound from <Ref> by a factor of the type exp(O(l_0)). Now <Ref> follows by inserting the values for l_0 from <Ref> into the lower bound from <Ref>.

§.§ Proof of <Ref>

The results on the excess and the order of the complex part follow from <Ref>. Observe that (G) = o(n_) for all regimes and thus <Ref> is applicable, yielding the order n_ of the core and the deficiency (G). Finally, by <Ref> we know that

n_ = 2(n-m+(G)) + O_p((n-m+(G))^2/3) and m_ = n-m+(G) + O_p((n-m+(G))^2/3),

which yields the last statements of <Ref>.

Proof of kernelsize: <Ref> follows directly from <Ref> and the values of (G) and (G) stated in <Ref>.

Proof of generalstructure: Given a fixed kernel, call a subdivision of it good if it is a simple graph (and thus a valid core). We first prove that the fraction of good subdivisions among all subdivisions is bounded away from zero. To this end, suppose that the kernel has 2l-d vertices and 3l-d edges and that we want to subdivide its edges k times (with k ≥ 6l-2d) in order to construct a core with k+2l-d vertices. We subdivide in the following way. First, decide which labels the vertices of the kernel should have in the core; there are \binom{k+2l-d}{2l-d} choices for this. Let I be the set of the remaining k labels. We recursively subdivide edges of the kernel and assign the smallest remaining label in I to the new vertex. The number of choices increases by one in each recursion step and thus we have (k+3l-d-1)_k choices in total. This way, we construct each subdivision precisely once. Hence the total number of subdivisions of the kernel is

\binom{2l-d+k}{2l-d} (k+3l-d-1)_k .

In order to give a lower bound on the number of good subdivisions, we change our construction slightly by introducing a preliminary step. After choosing the labels for the vertices in the core, we subdivide each edge of the kernel twice and then choose labels from I for the new vertices; there are (k)_6l-2d choices for this. After this step, we proceed as before, with the additional rule that an edge may only be subdivided if none of its end vertices is a vertex of the kernel. Similar to our first construction, there are (k-3l+d-1)_k-6l+2d choices for this part of the construction. Every graph obtained by this type of subdivision is simple and no graph is constructed more than once. Thus, the total number of good subdivisions is at least

\binom{2l-d+k}{2l-d} (k)_6l-2d (k-3l+d-1)_k-6l+2d .

The fraction of good subdivisions among all subdivisions is thus at least

(k)_6l-2d (k-3l+d-1)_k-6l+2d / (k+3l-d-1)_k ≥ ((k-3l+d)/(k-6l+2d))^-(6l-2d) ≥ exp(-2(3l-d)^2/(k-6l+2d)),

where the second inequality uses (<ref>). Substituting l=(G), d=(G), and k=n_-3l+d from <Ref> (and observing that these values satisfy k ≥ 6l-2d) yields that the fraction of good subdivisions is bounded away from zero. To make this more precise, denote by (_G) the proportion of subdivisions of _G that lie in _g(n_,n_+l). We have shown that for every δ>0 there exists an ε>0 such that

1-δ ≤ (_G) ≤ 1 in the first supercritical regime, and ε ≤ (_G) ≤ 1 with probability at least 1-δ in the intermediate regime.

Recall the construction steps <ref>–<ref>: the core _G is constructed from _G by subdividing edges; the complex part _G is obtained from _G by attaching rooted trees to all vertices; adding trees and unicyclic components to _G yields G. Let X be an event that depends on the choice of ∈_g. From the above construction, (<ref>), and the fact that the kernel of G=_g(n,m) has a growing number of vertices by <Ref>, we deduce that

ε ≤ ℙ(X holds for =_G) / ℙ(X holds for =_g(2l-d,3l-d)) ≤ 1/ε,

provided that the denominator is non-zero. To prove <ref>, observe that the kernel, the core, and the complex part of a graph have the same number k of components by construction.
<Ref> (for g≥ 1) and <cit.> (for g=0) tell us that the cubic kernel _g(2l,3l) has O_p(1) components. Thus by (<ref>), we have k=O_p(1) if the kernel is cubic, which is the caseinby <Ref>. In , we have (G)=O_p(1). Thus, we apply <Ref> and deduce that k=O_p(1). By analogous arguments, we deduce <ref>, <ref>, and the statements about _G from <ref>, <ref>, and <ref>.The observation that subdividing edges (when constructing _G) and attaching trees (constructing _G)does not change the genus of any component proves the remaining statements of <ref> and <ref>.In order to prove <ref>, <ref> and <ref>, let A_ be any fixed componentof _G. Denote by A_ and A_ the corresponding components of _G and _G, respectively. Observe that * in a random (not necessarily good) subdivision of the kernel, the expected number of subdivisions of any given edge e is n_/n_-1;* if we attach a rooted forest to the core in order to construct the , the expected order of the tree attached to any given vertex v is n_/n_.By <Ref>, we have n_/n_ = Θ(n^1/3) and n_/n_ = Θ(n^1/3) . Therefore, (<ref>) and Markov's inequality (<ref>), applied to therandom variables A_ and A_, imply that A_=O_p(n^1/3)A_andA_=O_p(n^1/3)A_for every fixed component A_. On the other hand, there are O_p(1) components, which proves <ref> and <ref>.It remains to prove the lower bound for _i(_G) and _i(_G) in <ref>. For an edge e of _G, denote by X_e the random variable of subdivisions of e. Both the expectation X_e andthe variance σ^2 have order Θ(n^1/3). Therefore, Chebyshev's inequality (<ref>) implies that X_e ≤1/2X_e = O(n^-1/3).Thus, for a fixed component A_≠_1(_G), a union bound over all O_p(1) edges in A_proves that A_=Θ_p(n^1/3)A_. By another union bound, this is true for all O_p(1) components (apart from _1(_G)), proving _i(_G)=Θ_p(n^1/3) for all i≥ 2.Similarly, for a vertex v of _G, denote by Y_v the number of vertices in the tree attached to v when we construct _G. Again, both the expectation and the variance have order Θ(n^1/3) and we deduceY_v ≤1/2Y_v = O(n^-1/3)from Chebyshev's inequality (<ref>). This implies that, for any given δ>0, there exists an ε>0 such that with probability at least 1-δ, every component A_ contains at least ε n^1/3 vertices v with Y_v > ε n^1/3, which yields A_≥ε^2n^2/3. This proves <ref> and thus finishes the proof of <Ref>.compsizesfirst<Ref> is an immediate consequence of <Ref>. § PROOFS OF AUXILIARY RESULTS In this section we prove all results from <Ref>.§.§ Proof of <Ref> It remains to prove <ref>. From Lemma 3, (10.11), and (10.12) in <cit.>, we deduce thatρ(n,m)= 2^2m-ne^nm!n!/(n-m)!n^2m2π i∮√(1-z)expn k(z) z/z,where the contour of the integral is a closed curve around the origin with z≤1 andk(z)=z-1-m/nlog(z)+1-m/nlog(2-z).We use the contour consisting of a) the line segment from 1 to i, b) the semicircle of radius onewith negative real value, and c) the line segment from -i to 1. Along this contour we have exp(k(z))≤ 1 and thus ρ(n,m) ≤2^2m-ne^nm!n!/(n-m)!n^2m2π∮√(1-z)/z z(<ref>)≤e^2(π+2√(2))/√(2)π^3/22/e^2m-nm^m+1/2n^n-2m+1/2/(n-m)^n-m+1/2 ,proving the lemma. cubic:lc-sizeWe abbreviate the class of cubic kernels embeddable onby A_g and the subclass ofA_g of connected cubic kernels by B_g. Clearly, every graph in A_g has an even number of vertices. We first prove (<ref>). By <Ref> there exist positive constants a_g^-,a_g^+,b_g^-,b_g^+ such that for all l a_g^-≤A_g(2l)/(2l)^5g/2-7/2γ_^2l(2l)!≤ a_g^+ and b_g^-≤B_g(2l)/(2l)^5g/2-7/2γ_^2l(2l)!≤ b_g^+. 
By <Ref>, the elements of A_g(2l)have a unique non-planar component.Therefore the probability that (G) has exactly 2i vertices is given by (G)= 2i=(1+o(1))2l2iB_g2l-2i·A_02i/A_g2land we can therefore conclude that (<ref>) holds.It remains to show that for every δ>0 there exists a constant c_δ such that(G)>2c_δ<δ for sufficiently large l. By <Ref>, (<ref>), and the fact that g≥1, we have for any c_δ∈_>0(G)>2c_δ≤ (1+o(1))∑_i=c_δ+1^l-3 c_g^+ i^-7/21-i/l^-1.The summand (as a function in i) has a unique minimum at i=7l/9. Therefore,(G)≥ 2c_δ ≤ (1+o(1))c_g^+∫_c_δ^l-2x^-7/21-x/l^-1 x= 2/5+o(1)c_g^+c_δ^-5/2(1+O(l^-1/2)) < δfor c_δ and l large enough, as desired.kernelpumpingFor ∈_g2l,3l and ∈_g2l-d,3l-d, we say that contracts toif for each vertex inwith labeli∈{2l-d+1,…,2l} we can choose an edge e_i={i,v_i} so that contracting these edges results in (the contracted vertices obtain the smaller of the two labels). We say thate_2l-d+1,…,e_2l are the contracted edges. Denote by _g^Δ=4(2l-d,3l-d) the subclass of _g(2l-d,3l-d) consisting of multigraphs with maximum degree four.We say that a contraction oftohas degree four if ∈_g^Δ=4(2l-d,3l-d).Ifcontracts to , then the compensation factor defined in (<ref>) satisfiesw() ≤ w() ≤ 6^d w(). Each ∈_g(2l,3l) contracts in at most 3^d ways, becauseiscubic and hence there are at most 3^d choices for the edges e_2l-d+1,…,e_2l. Vice versa, we claim that every fixed ∈_g2l-d,3l-d is obtained by at least d!2^-d different contractions from graphs in _g2l,3l.By recursively splitting vertices ofof degree at least four into two new adjacent vertices ofdegree at least three each, not increasing the genus throughout the process, we obtain a weighted multigraph∈_g2l,3l that contracts to . The new vertices can be labelled in d! ways, of which at least 2^-dd! result in distinct multigraphs in _g2l,3l. Together with (<ref>), this proves the upper bound _g(2l-d,3l-d)/_g(2l,3l)≤6^d/d!.The corresponding bound for _g(2l-d,3l-d;P_i) follows analogously observing that the two constructions above do neither change the number of components nor increase the genus of any component.For the lower bound, we claim that the elements of _g(2l,3l) have at least 6^-d contractions of degree four on average. Indeed, first observe that ∈_g2l,3l contracts to ∈_g^Δ=4(2l-d,3l-d)if and only if the contracted edges form a matching in . By choosing the edges of the matching recursively, we see thatcontains at least 2^dd!^-1∏_j=0^d-1(2l-6j) matchings of size d.Denote by A() the class of all weighted multigraphs that are isomorphic to .If we choose A∈A() and a matching M of size d in A uniformly at random, then theprobability that every edge in M has precisely one end vertex with label in {2l-d+1,…,2l} is2^d/2ld. Therefore, the average number of contractions of degree four of graphs in A() is at least∏_j=0^d-1(2l-6j)/2^dd!·2^d/2ld≥2l-6d/2l-d^d ≥ 6^-d,where the last inequality uses the fact that d≤2l-d/6. The fact that the classes A() partition _g(2l,3l) proves that ∈_g(2l,3l) has at least 6^-d contractions of degree four on average.Vice versa, let ∈_g^Δ=42l-d,3l-d. By recursively splitting the d vertices ofdegree four in , we see thatcan be obtained by at most 42^dd!=6^dd! contractionsof degree four. Together with (<ref>), we deduce that_g(2l-d,3l-d)/_g(2l,3l)≥1/216^dd!.The corresponding bound for _g(2l-d,3l-d;P_i) follows analogously. 
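The chain of inequalities at the end of the proof above is elementary, and it is easy to confirm by machine that the average-count bound stays above 6^-d throughout the admissible range of d. The following Python sketch is a quick sanity check added here (it is not part of the original proof); the cut-off l ≤ 300 is a hypothetical choice.

```python
from math import prod

# Check: prod_{j<d}(2l-6j)/(2^d d!) * 2^d/C(2l,d) = prod_{j<d}(2l-6j)/(2l-j)
#        >= ((2l-6d)/(2l-d))^d >= 6^(-d)   whenever d <= 2l/7.
for l in range(1, 301):
    for d in range(2 * l // 7 + 1):
        avg = prod((2 * l - 6 * j) / (2 * l - j) for j in range(d))
        mid = ((2 * l - 6 * d) / (2 * l - d)) ** d
        # small tolerance since equality holds at d = 2l/7
        assert avg >= mid and mid * 6 ** d >= 1 - 1e-12, (l, d)
print("inequalities verified for all l <= 300 and d <= 2l/7")
```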
§.§ Remark Observe that the proof of <Ref> applies to any class F of (multi-) graphs that is a) closed under taking minors and b) weakly addable, that is, if G is obtained by adding an edge between two distinct components of F∈F, then also G∈F. For more details, see <Ref>. binsandballsLet ∈_g(2l-d,3l-d). We subdivide the edges ofby inserting n_-2l+d vertices and then assign labels to these new vertices in one of (n_-2l+d)! possible ways so as to obtain a core with n_ vertices.Call a distribution of n_-2l+d new vertices to the edges offeasible if the resulting graph has no loops or multiple edges. The number n_+l-13l-d-1 of all distributions is clearly an upper bound for the number of feasible distributions. On the otherhand, a distribution is feasible if and only if each loop is subdivided at least twice and for every multiple edge, at most one of its edges is not subdivided. Denote by s_ the minimal number of times that we need to subdivide the edges ofin order to obtain a simple graph. Then n_+l-s_-13l-d-1 is a lower bound on the number of feasible distributions.By construction, s_≤ 2(3l-d) ≤ 6l and we thus deduce thatmin_-5≤ν≤1(n_-2l+d)!n_+ν l -13l-d-1≤φ_n_,l,d≤ (n_-2l+d)!n_+l -13l-d-1.Now the lemma follows from the intermediate value theorem and the fact that the function xk for fixed k∈ is continuous for x∈.complex:bounds<Ref> follows directly from (<ref>), <Ref>, the intermediate value theorem, and the fact that x^d is continuous.lem:SigmaCWe first derive an upper bound for Σ_, as well as the main contribution to this upper bound. We substitute n_=n_+r (recall that n_=√(n_(3l-d))). Applying (<ref>) to (<ref>), and then using (<ref>) and (<ref>) we deduce thatΣ_≤Σ_^+ := ∑_rexp(-r^2/2n_+rA_1+A_2),whereA_1= 1-2n_/2n_+3l-d-1/n_+(3l-d-1)(3l-d-2)/2(n_+ l-1)^2 , A_2= (3l-d)logn_-1/2+√(3l-d/4n_)+(3l-d-1)l-1/n_-3l-d-2/2(n_+l-1). Evaluating the `Gaussian' sum in (<ref>) we obtainΣ_^+ ≤√(2π n_)expA_2+n_ A_1^2/2.The existence of the constants a_^+,b_^+ from <ref> now follows fromexp(A_2) ≤n_(3l-d)/e^(3l-d)/2expO√(l^3/n_)and the observation that n_ A_1^2 = Ol^2/n_, which is O√(l^3/n_), because l=O(n_).In order to prove <ref>, suppose that 7/2d≤ l≤ϵ n_ = o(n_); then also l=o(n_). In (<ref>), we set n_=n_-ν l+1+s. If we let the parameter s=r+ν l-1 take only values for which n_∈ I_^δ(n_,l,d) with fixed 0<δ<1/2, thenΣ_≥∑_s (n_)_n_/n_^n_n_n_+s_3l-d-1.The interval I_^δ(n_,l,d) has length 2δn_ > 2δ√(n_) and hence we can choose for s an interval I_s of length δ√(n_) in which s<δn_ holds. We use (<ref>) for both falling factorialsand obtainΣ_≥n_^3l-d∑_s 1+s/n_^3l-d-11+s+1+ν l/n_expB_1,whereB_1 = -n_-ν l+1+s^2/2(n_-n_+ν l-1-s)-(3l-d-1)^2/2(n_-3l+d+1+s).Observe that 1+s/n_=Θ(1) and 1+s+1+ν l/n_=Θ(1). Using (<ref>), we deduce that1+s/n_^3l-dexpB_1≥exp-3l-d/2 + O√(l^3/n_) + O(1).Now (<ref>) and (<ref>), together with I_s=δ√(n_) prove <ref>.It remains to prove <ref>. First observe thatif we take the sum (<ref>) over all r∈ and normalise, we obtain a normally distributed random variable X with mean n_ A_1 = O(l) and variance n_. Applying the Chernoff bound (<ref>) to X, we deduce∑_r-n_ A_1 > δ/2n_exp-r^2/2n_+rA_1+A_2≤ 2exp-δ^23l-d/8Σ_^+.Note that r-n_ A_1<δ/2n_ implies that n_∈ I_^δ(n_,l,d) for sufficiently large n_ and l=o(n_), because then n_ A_1=o(n_). Therefore,∑_n_∉ I_^δ(n_)_n_/n_^n_n_(n_+ν l-1)_3l-d-1/∑_n_∈ I_^δ(n_)_n_/n_^n_n_(n_+ν l-1)_3l-d-1≤exp-δ^23l-d/8+Θ(1)+Θ√(l^3/n_).Now δ^23l-d/8 = Θ(l), √(l^3/n_)=o(l), and the fact that l→∞ finish the proof of <ref>.lem:SigmaDWe start by proving <ref>. 
We apply(3l-d)^(3l-d+2)/2/(3l-d)!(<ref>)≤e^3l-d/√(2π)(3l-d)^-3l-d-1/2(<ref>)≤e^3l-d/2/√(2π)(3l)^-3l-d-1/2and <Ref> to deduce thatΣ_d≤expa_^++b_^+√(l^3/n_)/√(2π)e^3l(3l)^-3l-1/2∑_d=0^2l2ld108l/n_^d/2,proving <ref> with a_d^+=a_^+-1/2log(2π) and b_d^+=b_^++2√(108).For <ref>, first note that we have a lower bound for Σ_d if we restrict the sum (<ref>) to0≤ d≤⌊2l/7⌋. By analogous arguments as for the upper bound, we deduce thatΣ_d≥expa_^-+b_^-√(l^3/n_)/ee^3l(3l)^-3l-1/2∑_d=0^2l/72ld3l/216^2en_^d/2.The sum above can be extended to a sum Y=∑_d=0^2l2ldy^d with y=o(1). Normalising this sum results in a binomially distributed random variable X=Bi(2l,p) with p=y/1+y and X=Θ√(l^3/n_).If X→0, then the main contribution to Y is provided by the term with d=0. Otherwise, the Chernoff bound(<ref>) yields that the main contribution to X—and thus also to Y—is provided by an interval contained in the range 0≤ d≤2l/7. Thus, with (<ref>) we deduce thatΣ_d≥expa_^-+b_^-√(l^3/n_)/e(1+o(1))e^3l(3l)^-3l-1/2exp√(3)/108√(e)√(l^3/n_)-√(3)l^2/216√(e)n_.Observing that l^2/n_=o(√(l^3/n_)), we have thus proved <ref> for any choice of a_d^-<a_^–1 and b_d^-<b_^-+√(3)/108√(e).In order to prove <ref>, it remains to show that the tail of Σ_d has smaller order than its total value, that ise^b_d^+√(l^3/n_)∑_d∉I_d2ld6 √(3l)/√(n_)^d=oe^b_d^-√(l^3/n_).WriteZ = ∑_d=0^2l2ld6 √(3l)/√(n_)^d.For √(l^3/n_)→ 0, the exponential terms in (<ref>) are both 1+o(1) and the sum on the left hand side is o(1), because its range does not include the main contribution of the binomial sum Z, which is located at d=0.If √(l^3/n_)→ c∈^+, then both exponential terms in (<ref>) are Θ(1). For any fixed h=h(n_)=ω(1), we deduce from (<ref>), applied to the normalised sum Z,∑_d>h2ld6 √(3l)/√(n_)^d ≤exp-chfor some constant c>0, which proves (<ref>).Finally, if √(l^3/n_)→∞, we can choose β_d^+ sufficiently large so that(<ref>) yields∑_d>β_d^+√(l^3/n_)2ld6 √(3l)/√(n_)^d ≤exp-(b_d^+-b_d^-+1)√(l^3/n_),which proves (<ref>) also in this last case.coredeficiencyThe typical range for d(G) follows directly from <Ref><ref>. Substituting this deficiency in the formulas for the main contribution for n_ from <Ref> yields thetypical order of the core.complex:numberThis follows directly from (<ref>) and (<ref>).lem:exceptionalWe prove <Ref> using the lower bound on _g^*(n,m) from <Ref>. It is important to note that vice versa, the proof of <Ref> does not rely on <Ref>.Suppose first n_=0, i.e. the complex part is empty and the graph only consists of trees and unicycliccomponents. In this case <Ref><ref> implies that the number of such graphs satisfies(n,m)≤Θ(1)n^m2^m-ne^n-m^2/n.Comparing this to the lower bound from <Ref> shows that(n,m)/_g^*(n,m)≤ e^-l_1 = o(1). The remaining case is m_=0, i.e.m=n_+l ≥ n_+1 (recall that n_>0 implies l>0). The number of such graphs is given by∑_n_≤ m-1nn__g(n_,m).The case n_=m-1 in the sum above is of smaller order than the lower bound for _g^*(n,m) from <Ref>. For every n_<m-1, <Ref> implies thatnn__g(n_,m)/nn__g(n_,m-1)n-n_2 = Θ(1)n_^3/2(m-n_)^-3/2(n-n_)^-2.Inand , the right hand side of (<ref>) is O(n^-1/2). Observing that the denominator is a summand of _g^*(n,m), we deduce that∑_n_≤ m-1nn__g(n_,m) = o_g^*(n,m)inand .Suppose now that we are in the second phase transition and write I_l=[p_l,q_l]. For n_< m-p_l, the right hand side of (<ref>) is o(1) and thus∑_n_< m-p_lnn__g(n_,m) = o_g^*(n,m).For n_≥ m-p_l, or equivalently l ≤ p_l, we have∑_n_ = m-p_l^m-2nn__g(n_,m-1)n-n_2≤exp(-f(n))_g^*(n,m),where f=ω(log n) is a positive valued function. 
From this, we deduce that∑_n_ = m-p_l^m-2nn__g(n_,m)(<ref>)≤Θn^3/2exp(-f(n))_g^*(n,m) = o_g^*(n,m).This concludes the proof of <Ref>. lem:SigmaQIn Σ_=∑_n_ρψ (see (<ref>) for the definition of ψ), we substituten_ = n_+r. We then have n_ = n-n_ = n_-r and m_ = m-n_-l = m_-r.With this substitution, we obtainψ=2/e^n_+r(n_+r)^3l/2-1(n_-r)^-r-1/2(m_-r)^-m_+r-1/2expf_d.Because n_,l are admissible, we have l = O(n_) and thusf_d ≤ a_d^++b_d^+√(l^3/n_) = O(l).If in addition (<ref>) holds, then l=on_ and thus, for every fixed h(n)=ω(1),f_d ≤ a_d^+ + o(1) l,whenever r ≥ -n_+hl. In either case, we distinguish whether r>0 or r≤0.Let Σ_r>0 be the part of Σ_ consisting of the summands with r>0. We bound ρ(n_,m_) from above by 1. Additionally we claim that2/e^r(n_-r)^-r(m_-r)^-m_+r < m_^-m_exp-r^3/24m_^2.Indeed, for r≥0, the quotient of the two sides in (<ref>) has a unique maximum at r=0, where we have equality. Furthermore, there exists a constant c>0 with(n_-r)^-1/2(m_-r)^-1/2≤ cm_^-1expr^3/216m_^2.Now (<ref>), (<ref>), and (<ref>) yieldΣ_r>0≤2/e^n_m_^-m_-1expO(l)∑_r (n_+r)^3l/2-1exp-r^3/27m_^2.If in addition (<ref>) holds, we can replace exp(O(l)) by (1+o(1))^l. The summand above is maximised at the (not necessarily integral) unique positive solution r_0 ofr_0^3+r_0^2n_=9m_^23l/2-1. Suppose first that (<ref>) holds, that is, n_^3≥ 9m_^23l/2-1. Then 1/2√(9m_^23l/2-1/n_)≤ r_0≤√(9m_^23l/2-1/n_)and thus(n_+r_0)^3l/2-1exp-r_0^3/27m_^2 (<ref>)≤n_^3l/2-1expr_03l/2-1/n_-r_0^3/27m_^2(<ref>),(<ref>)≤n_^3l/2-1expO(l).Summing over 1≤ r≤m_-1, we deduce thatΣ_r>0≤2/e^n_n_^3l/2-1m_^-m_expO(l),which proves (<ref>) for Σ_r>0 if (<ref>) holds. If the stronger condition (<ref>) is satisfied, the factor expO(l) improves to expO√(ϵ) l=expo(1) l, proving (<ref>) for Σ_r>0.Now consider the case n_^3<9m_^23l/2-1. Then1/2√(9m_^23l/2-1)≤ r_0≤ 2√(9m_^23l/2-1)and hence(n_+r_0)^3l/2-1exp-r_0^3/27m_^2≤ (3r_0)^3l/2-1expO(l).Summing over less than m_ values for r, we deduce thatΣ_r>0≤2/e^n_r_0^3l/2-1m_^-m_expO(l).Together with (<ref>), this proves (<ref>) for Σ_r>0 in the case that (<ref>) is violated. Finally, consider the part Σ_r≤0 of Σ_ consisting of the summands with r≤0. Observe that -n_+1≤ r≤0; in particular, the case r≤ 0 only occurs if n_>0. We use <Ref><ref> asan upper bound for ρ=ρ(n_-r,m_-r) to deduceρψ≤ c2/e^n_(n_+r)^3l/2-1m_^-m_-1/2exp(f_d).We bound the factor exp(f_d) by (<ref>). Furthermore, (n_+r)^3l/2-1≤n_^3l/2-1, because r≤ 0.Summing over r, we deduce thatΣ_r≤0≤ c2/e^n_n_^3l/2m_^-m_-1/2exp(O(l)).This proves (<ref>) for Σ_r≤0, independent of whether (<ref>) is satisfied.Finally, suppose that (<ref>) holds. Then in (<ref>), we bound the factor exp(f_d) by (<ref>) for r≥ r_1 := -n_+hl and deduce by analogous arguments as above that∑_r=r_1^0ρψ≤Θ(1)2/e^n_n_^3l/2m_^-m_-1/2(1+o(1))^l.For r<r_1, observe that Euler's formula yields r ≥ r_2 := -n_+Θ(l). In this range, the summand ρψ is maximised at the upper bound r=r_1-1; this yields∑_r=r_2^r_1-1ρψ≤Θ(1)2/e^n_hl^3l/2m_^-m_-1/2(1+o(1))^l.If we choose h to be growing slowly enough so that hl = o(n_), then this proves (<ref>) for Σ_r<0.The trivial observation Σ_ = Σ_r>0+Σ_r≤0 finishes the proof.lem:SigmaQ:smallLike in the proof of <Ref>, we distinguish the cases r>0 and r≤0 as well as whether (<ref>) holds or not.First consider Σ_r>0 when (<ref>) holds. 
Then (<ref>) implies r_0≤n_, which yields∑_r=1^n_n_+r^3l/2-1exp-r^3/27m_^2 ≤n_^3l/21+r_0/n_^3l/2-1exp-r_0^3/27m_^2≤n_^3l/2exp(O(l)).The sum over the remaining values for r is bounded by the integral∫_n_^∞2r^3l/2-1exp-r^3/27m_^2 r ≤m_^lΓl/2exp(O(l)) = m_^ll^l/2exp(O(l)).Now (<ref>), (<ref>), and the fact that n_ = 2m-n-2l <n^2/3 prove (<ref>) for Σ_r>0.If (<ref>) is violated, we split Σ_r>0 into the sums for 1≤ r≤ r_0 and r_0<r. Observe that (<ref>) implies n_<2r_0. Thus, the sum for 1≤ r≤ r_0 is smaller than m_^ll^l/2exp(O(l)), while the sum for r_0<r is bounded by the integral∫_r_0^∞3r^3l/2-1exp-r^3/27m_^2 r ≤m_^ll^l/2exp(O(l)) < m_^ll^l/2-1/3exp(O(l)).Now (<ref>) for Σ_r>0 follows from (<ref>) and the trivial fact that m_ = O(n).For r≤0, observe that m_ = n_/2-r/2. Furthermore, we have n_≤n_ = O( n^2/3) and thusr = O( n^2/3) and n_ = (1+o(1))n.By the assumption = o(n^1/12), <Ref><ref> applies to ρ(n_,m_) and summing over -n_+1≤ r≤ 0 yieldsΣ_r≤0≤ c2/e^n_n_^3l/2n_^-1/2m_^-m_-1/2exp(O(l)).Now (<ref>) follows for Σ_r≤0 analogously to the proof of <Ref>, with the additional fact n_ = O( n^2/3). sizesl0By (<ref>), l_0 is positive. We prove the order of l_0 separately for each of the five regimes.: In this regime, we havel_0=ϕ^2/3( n^2/3-2l_0)/e^1/32^4/3n/2- n^2/3+l_0^2/3.The denominator is of order Θ(n^2/3). Thus, in order for the equality to be true, the numerator mustbe of order n^2/3 and thus l_0=Θ().: Here, the denominator is still of order n^2/3 andthe numerator is of order Θ(n) and thus l_0=Θ(n^1/3).: The numerator is of order Θ(n) and thusl_0=Θ(n)/(l_0-1/2 n^3/5)^2/3.If l_0=Ω( n^3/5), then we have l_0=Θn/l_0^2/3 and thus l_0=Θ(n^3/5)=o( n^3/5), a contradiction. Therefore, l_0=o( n^3/5) and l_0=Θn/( n^3/5)^2/3=Θ^-2/3n^3/5. : The numerator has order Θ(n). For the denominator we have a contradiction similar to the previous case if l_0 is not Θ(n^3/5). Furthermore, the denominator has order Θn^3/5.: The numerator is Θ(n) and we obtain a contradiction if there is no cancellation in the denominator. Thus we set l_0=1/2 n^3/5+r with r=o( n^3/5) and deduce thatr=Θ^-3/2n^3/5.nQandl:lowerBy <Ref>, we have 0<l=o(n_) and 0<n_<n. Thus, n_ and m_ are also positive. Therefore, we have _g(n_,n_+l) ≠∅ and (n_,m_) ≠∅, showing that the given value l and n_ = n_ are admissible. Recall thatΣ_ = ∑_n_ρ(n_,m_)ψ(n_,l).Observe that (at least) all n_ with n_≤ n_≤n_ + m_ -1 are admissible in this sum. For each such n_, we have m_≤n_/2 and thus <Ref><ref> yieldsΣ_≥Θ(1)∑_n_=n_^n_+m_-1ψ(n_,l).Set n_ = n_ + r. There exists a c>0 such thatψ(n_+r,l) ≥2/e^n_n_+r^3l/2-1m_^-m_-1expf_d-r^3/12m_^2holds for 0≤ r≤ cm_. The factor n_+r^3l/2-1exp-r^3/12m_^2 is increasing until the unique positive solution r_0 ofr_0^3+r_0^2n_=4m_^23l/2-1.The assumptions on the size of l imply that l satisfies (<ref>), which in turn yields r_0=Θm_^2/3. Therefore, for 1≤ r≤ r_0, we haveψ(n_+r,l) ≥2/e^n_n_^3l/2-1m_^-m_-1expf_d(n_+r,l)-1/12m_^2.Let ñ_ = n_+r be the value that minimises f_d(n_+r,l) for 1≤ r≤ r_0; then ñ_ = n_+O(m_^2/3), since r ≤ r_0. This proves thelower bound for Σ_. The lower bound for _g^*(n,m) follows directlyfrom (<ref>), the bound for Σ_, and the fact that l_1 is admissible.lem:caseBFirst observe that there exists l_b>0 such that (<ref>) is violated precisely when l≥ l_b. In the first supercritical regime, we have l_b=Θ^3, in all other regimes l_b=Θ(n).By <Ref>, we haveΣ̃_l≤∑_l n^3/2 l^-l+5g/2-10/3(n-m+l)^m-n-5/3expO(l),where the sum is taken over all l≥ l_b. 
The sum on the right hand side is bounded from above by a geometric sum ∑_lexp-cl with c>0 and thusΣ̃_l≤ (1+o(1)) n^3/2 l_b^-l_b+5g/2-10/3(n-m+l_b)^m-n-5/3expO(l_b).Comparing this with the lower bound for _g^*(n,m) from <Ref> and implementing (<ref>), we deduce thatn^n+1/2e/2^mΣ̃_l/_g^*(n,m)≤2m-n-2l_1n^1/6l_b^-l_b+5g/2-10/3n-m+l_b/n-m+l_1^m-nexpO(l_b).The right hand side is o(1), unless we are in the first supercritical regime and(and thus also l_b) is too small for the term l_b^-l_b to compensate the polynomial terms in n. For this to be the case, we would in particular have =on^1/12. For such , we have the stronger upper boundfor Σ̃_l provided by <Ref>, which is smaller than the one from <Ref> by a factor of ^-1n^5/6. Thus, for these , we haven^n+1/2e/2^mΣ̃_l/_g^*(n,m) ≤2m-n-2l_1n^-2/3l_b^-l_b+5g/2-10/3n-m+l_b/n-m+l_1^m-nexpO(l_b)≤^2l_b^-l_bexpO(l_b),which is o(1), because l_b=Θ^3.lem:boundfdWe first show that for d=o(l) we havef_(d,(1-ϵ)n_,l)-f_(n_,l,d) = o(ϵ l).By (<ref>), we haveΣ_(d,(1-ϵ)n_,l)/Σ_(n_,l,d)=(1-ϵ)^3l-d+1/2expf_(d,(1-ϵ)n_,l)-f_(n_,l,d). We can also compare the summands of the two terms Σ_(d,(1-ϵ)n_,l) andΣ_(n_,l,d) separately. Denote the summands bys(n_,d,n_,l) = (n_)_n_/n_^n_n_(n_+ν l-1)_3l-d-1 .Then we have for 1≤ n_ = o(n_)s(n_,d,(1-ϵ)n_,l)/s(n_,d,n_,l) =(1-ϵ)n_n_/(1-ϵ)^n_n_n_(<ref>)=Θ(1)1+ϵ n_/(1-ϵ)n_-n_^(1-ϵ)n_-n_1-n_/n_^ϵ n_(<ref>)=Θ(1)exp-(1+o(1))ϵ n_^2/2n_.There exists an interval I that contains the ranges of the main contribution to both Σ_(n_,l,d)and Σ_(d,(1-ϵ)n_,l), such that n_=(1+o(1))√(n_(3l-d)), and thus in particular n_ = o(n_),for all n_∈ I. Then for n_∈ I and d=o(l),s(n_,d,(1-ϵ)n_,l)/s(n_,d,n_,l) = Θ(1)exp-3/2+o(1)ϵ l.Summing over n_∈ I, we deduce thatΣ_(d,(1-ϵ)n_,l) = Θ(1)exp-3/2+o(1)ϵ lΣ_(n_,l,d).Combining this with (<ref>) and the condition ϵ l=ω(1) yields (<ref>).<Ref> yieldsΣ_d((1-ϵ)n_,l)/Σ_d(n_,l)=expf_d((1-ϵ)n_,l)-f_d(n_,l).Suppose that J is an interval that contains the ranges of the main contributions to both Σ_d(n_,l) and Σ_d((1-ϵ)n_,l), such that d≤ d_0=o(l) for all d∈ J. Denote the summands of Σ_d(n_,l) bys_d(n_,l) = 2ld(3l-d)^(3l-d+2)/2e^d/2τ^d/(3l-d)!n_^d/2exp(f_(n_,l,d)).Recall that τ=τ(d,l) does not depend on n_. With(<ref>), we haves_d((1-ϵ)n_,l)/s_d(n_,l)=(1-ϵ)^-d/2expo(ϵ l)for d∈ J. Summing over J and comparing with (<ref>) proves the lemma. nQandlLet us write I_l(n)=[p_l(n),q_l(n)] and I_^h(n,l) = [p_(n,l),q_(n,l)]. Without loss of generality p_l<l_1<q_l. We first prove that the main contribution with respect to l is provided by l∈ I_l(n). To that end, we bound the tail of the sum (the part with l∉ I_l(n)) from above and prove that this upper bound has smaller order than the lower bound from <Ref>.Observe that for l∈ I_l(n), we havelm_^2/3/n_ = Θ(1). For this proof, let s_l(n,m) be the summand of the sum Σ_l, i.e. Σ_l=∑_l s_l(n,m), ands_(n,m,l) be the summand of Σ_=∑_n_s_(n,m,l). 
We need to show thatT := n^n+1/2e/2^m∑_l∉I_l(n)s_l(n,m) = o_g^*(n,m).By <Ref>, we may take our sum only over l that satisfy (<ref>).If l_2 denotes the index where s_l(n,m) takes its maximal value outside I_l(n), then∑_l∉I_l(n)s_l(n,m) ≤ ns_l_2(n,m).In , (<ref>) is violated for all l≥ l_b = Θ(^3) and thus we have the stronger bound∑_l∉I_l(n)s_l(n,m) ≤Θ(^3) s_l_2(n,m).By <Ref>, there exists a constant α>1 such thats_l(n,m) ≤ n^3/22/e^2m-nM(l,n,m;α),whereM(l,n,m;α) = l^-3l/2e^2ϕ/4^l2m-n+2l^3l/2-1n-m+l^m-n-l-1α^l.By choosing β_l^- (respectively η_l^- or ϑ_l^-) small enough and β_l^+ (respectively η_l^+ orϑ_l^+) large enough, we may assume that M(l,n,m;α) is strictly increasing (with respect to l) for l≤ p_l and strictly decreasing for l≥ q_l. For l≤ p_l, (<ref>) is satisfied and thus (<ref>) holds for every α=1+δ, where δ>0 is any given constant. Thus,s_l_2(n,m) ≤ n^3/22/e^2m-nmax{M(p_l,n,m;1+δ),M(q_l,n,m;α)}.In , when =on^1/12, <Ref> together with analogous arguments gives us an upper bounds_l_2(n,m) ≤ n^2/32/e^2m-nmax{M(p_l,n,m;α),M(q_l,n,m;α)}. If m is such that (<ref>) applies and if the maximum in (<ref>) is M(q_l,n,m;α),then (<ref>) and (<ref>) yield (for large enough β_l^+)T/_g^*(n,m)≤^4 e^-l_1,which is o(1) by <Ref> and the fact that →∞. If (<ref>) does not apply and the maximum in (<ref>) is M(q_l,n,m,α), then (<ref>), (<ref>), and <Ref> imply that if we choose β_l^+, η_l^+, or ϑ_l^+ large enough, respectively, thenT/_g^*(n,m)≤ n^5/2 e^-l_1,which is o(1).If the maximum in (<ref>) or (<ref>) is M(p_l,n,m,1+δ) or M(p_l,n,m,α), respectively, then analogousconsiderations show that we can choose β_l^-, η_l^-, and ϑ_l^- so that for every m=m(n) there exists a constant c>0 such thatT/_g^*(n,m)≤^4 exp-cl_1 infor =o(n^1/12),n^5/2exp-c^-3/2n^3/5 in ,n^5/2exp-cl_1 otherwise.Inand , the fact that we have α=1+δ is essential for deducing the above bound. In all regimes—using that =o(log n)^-2/3n^3/5 in —we deduce that this upper bound is o(1). This proves that the main contribution to Σ_l is indeed provided by l∈ I_l(n).It remains to prove that for each l∈ I_l(n), the main contribution to Σ_ is provided by n_∈ I_^h(n,m,l). We substitute n_=n_+r.First consider the case n_<p_ = n_ - hm_^2/3, i.e. r< -hm_^2/3. We shall split the sum into the three parts -vm_^2/3≤ r ≤ -hm_^2/3, -wm_^2/3≤ r ≤ -vm_^2/3, and r≤-wm_^2/3, wherev := m_^1/24and w := ^1/2 in ,lm_^-2/9 otherwise.Observe that the interval -wm_^2/3≤ r ≤ -vm_^2/3 is empty inif < m_^1/12. Furthermore,w = ω√(l^3/n_)and w = on_/m_^2/3(<ref>)= o(l). By (<ref>) and <Ref>, in each of the three intervals,∑ρψ/Σ_(n,m,l)≤Θ(1)m_^-1/6∑ s_r(n,m,l)withs_r(n,m,l) = 1+r/n_^3l/2-1expf_d(n_+r,l)-f_d(ñ_,l). Recall that for (<ref>), <Ref><ref> was used to bound ρ. Observe that for -vm_^2/3≤ r ≤ -hm_^2/3, <Ref><ref> isapplicable and thus (<ref>) holds with a factor of m_^-2/3 instead of m_^-1/6. Furthermore, we claim that f_d(n_+r,l)-f_d(ñ_,l) = orl/n_. Indeed, in and , the left hand side is O(1) and the claim follows by observing that rl/n_=Ω(h) by (<ref>).In the second phase transition,such r satisfy the conditions of <Ref> with ϵ=Θr/n_ and thus the claim follows.Therefore, there exists a constant c>0 such that∑_r=-vm_^2/3^-hm_^2/3ρψ/Σ_(n,m,l) ≤Θ(1)m_^-2/3∑_r=-vm_^2/3^-hm_^2/3exp3/2-o(1)rl/n_(<ref>)≤Θ(1)∫_h^∞e^-cx x = Θ(1)exp-ch = o(1).Observe that in , if =o(n^1/24), then r>-n_>-vm_^2/3 and thus the interval -vm_^2/3≤ r ≤ -hm_^2/3 covers all cases for negative r. 
From now on, we may thus assume that =Ω(n^1/24), which implies w=Ω(n^1/48).Now consider the interval -wm_^2/3≤ r ≤ -vm_^2/3. In this regime, we still have f_d(n_+r,l)-f_d(ñ_,l) = orl/n_ and thus∑_r=-wm_^2/3^-vm_^2/3ρψ/Σ_(n,m,l)≤Θ(1)m_^1/2exp-cv = o(1). Finally, suppose that r ≤ -wm_^2/3. In this regime,s_r ≤exp3l/2-1log1+r/n_+c_1√(l^3/n_+r)-c_2√(l^3/ñ_).The right hand side has its maximum (with respect to r) at r=-wm_^2/3. For this r, the first summandis negative and has order w by (<ref>). The other two summands are o(w) by (<ref>). Thus, there exists a constant c>0 such that∑_r≤-wm_^2/3ρψ/Σ_(n,m,l)≤Θ(1)nexp-cw,which is o(1), because w=Ω(n^1/48). This finishes the proof for r<0. Suppose now that n_>q_ = n_ + hm_^2/3,i.e. r>hm_^2/3. By (<ref>), (<ref>), and <Ref> we conclude that∑ρψ/Σ_(n,m,l)≤Θ(1)m_^-2/3∑_r>hm_^2/3exp3lr/2n_+f_d(n_+r,l)-f_d(ñ_,l)-r^3/27m_^2.Note that for all r in this sum, rl/n_=or^3/m_^2. We claimthat additionallyf_d(n_+r,l)-f_d(ñ_,l)=or^3/m_^2.Indeed, this difference is O(1) inand , while r^3/m_^2≥ h^3=ω(1). In the second phase transition, the claim follows immediately if r≥√(l)m_^2/3. If hm_^2/3<r<√(l)m_^2/3, the conditions of <Ref> are satisfied with ϵ=Θr/n_ andthus f_d(n_+r,l)-f_d(ñ_,l)=orl/n_=or^3/m_^2. Therefore, we deduce that∑ρψ/Σ_(n,m,l) ≤Θ(1)m_^-2/3∑_r>hm_^2/3exp-r^3/30m_^2≤Θ(1)∫_h^∞exp-x^3 x ≤Θ(1)exp(-h).This finishes the proof also for r>0. § DISCUSSION AND OPEN PROBLEMS Comparing the range for m that we cover in Theorems <ref>–<ref> with the `dense' regime m=⌊μ n⌋ for 1<μ<3 considered in <cit.>, a gap of order (log n)^2/3 becomes apparent—a significant improvement of <cit.>, where the gap had order n^1/3. The order term ^-3/2n^3/5 in <Ref> becomes constant when =Θ(n^2/5), which matches the results from <cit.> that the giant component covers all but finitely many vertices in the dense regime. Therefore, we expect <Ref> to hold for all m=(1+o(1))n.The gap of order (log n)^2/3 originates from the fact that we can only determine the number of kernels up to an exponential error term (see <Ref>) in the second phase transition. We thus believe that the key to closing the gap would be to determine the number of kernels more exactly. What is the exact value of _g(2l-d,3l-d) for any admissible l,d∈? Solving <Ref> would pave the way to prove <Ref> for all m=(1+o(1))n. Moreover, it might open the possibility to prove an analogous version of <Ref> in the second phase transition, thus rendering the additional double counting argument in the proof of <Ref> unnecessary; observe that this double counting argument is responsible for the fact that the upper and lower bound on the order of n-_1 are not quite the same. We believe that these bounds should actually be of the same order. Let m=2+ n^-2/5n/2, where =(n)=o(n^2/5). Thenthe largest component _1 of _g(n,m) is complex and satisfiesn-_1 = Θn^3/5 if →-∞, Θn^3/5 if → c∈, Θ^-3/2n^3/5 if →∞.Observe that in contrast to <Ref>, <Ref> does neither provide a statement about the genus of the largest component nor does it state the order of the i-th largest component for i≥2.By <cit.>, the largest component of _g(n,⌊μ n⌋) has genus g, thus it is to be expected that this also holds throughout the second phase transition. Let m=(1+o(1))n. Then the giant component of _g(n,m)is not embeddable on g-1and all other components are planar. In view of <Ref>, an analogous statement to <Ref> for general kernels would be necessary. 
Similarly, proving <Ref> for arbitrary kernels would open the possibility to determine the order of the i-th largest component for i≥2. For i≥2, what is the order of the i-th largest component in the second phase transition? In view of enumeration of graphs embeddable on , <Ref> provides an asymptotic result. Observe that the error terms in <Ref> become larger the bigger m is. In particular, if we increaseto Θ(n^2/5) in <Ref><ref>, then the main term of _g(n,m) becomes n^n—which matches the results from <cit.> for the dense regime m=⌊μ n⌋ with μ∈(1,3)—but the error term has order expO(n). It should be possible to improve the error terms to being smaller than (1±δ)^l_0 forevery δ>0 (with l_0 defined as in (<ref>)) by a careful analysis of<Ref>, yet even better bounds would still be desirable. Find asymptotic expressions for _g(n,m) with better error terms than in <Ref>. It is important to note that the results in this paper apply to more general graph classes than _g(n,m). Indeed, the constructive decomposition that yields (<ref>), (<ref>), and (<ref>) relies on the fact that a graph is in _g if and only if its kernel is in the corresponding class _g of multigraphs. The only other ingredients of the proof that are specifically tailored for graphs onare<Ref>, and <Ref>. Recall that we saw in <Ref> that <Ref> holds for any class of multigraphs that is weakly addable (that is, closed under adding an edge between two components) and closed under taking minors. Let X be a graph class and Y be a class of (weighted) multigraphs of minimum degree at least three. Suppose that *a graph lies in X if and only if its kernel is in Y;*there are constants c,γ>0 and k∈ such thatY(2l,3l) = (1+o(1))c l^kγ^2l(2l)!; *there is a constant 0<q≤1 withY(2l,3l)is connected l→∞⟶ q; *_1(Y(2l,3l))=2l-O_p(1) and for each fixed i∈∖{0}, the probability that _1(Y(2l,3l))=2l-2i is bounded away from both 0 and 1;*Y is weakly addable and closed under taking minors.Then analogous statements to Theorems <ref>–<ref> hold for X. Obvious candidates for the classes X and Y would be (multi)graphs on non-orientable surfaces. For such classes, <ref> and <ref> in <Ref> are automatically satisfied,<ref> and <ref> would follow if <Ref> also hold for non-orientable surfaces, and <ref> holds if <Ref> is true for non-orientable surfaces. Prove analogous versions of <Ref> and <Ref> for non-orientable surfaces. One striking difference between _g(n,m) and G(n,m) is the order and the structure of the i-th largestcomponent for i≥2 inand . In _g(n,m), the second largest component is muchlarger than in G(n,m); Θ_p(n^2/3) versus o(n^2/3).Moreover, the i-th largest component of G(n,m) is a tree . In contrast, _g(n,m) with positive probability has both tree components and complex components of order Θ_p(n^2/3). It would thusbe interesting to know whether there is a hierarchy in the size of the largest tree component and the second largestcomplex component. Given i≥2, what is the probability that the i-th largest component of _g(n,m) is a tree? For G(n,m), the giant component is in fact far better understood than it is stated in <Ref>. Central limit theorems and local limit theorems provide much stronger concentration results about the order (i.e. the number of vertices) and the size (i.e. the number of edges) of the giantcomponent <cit.> and give more insight into the global and local structure of the giant component and its core. Derive central and local limit theorems for the giant component of _g(n,m). 
As mentioned in <Ref>, the component structure of G(n,m) is closely related to a Galton-Watson branchingprocess. More precisely, the local structure of G(n,αn/2) converges to that of a Galton-Watson tree with offspringdistribution Po(α) in the sense of Benjamini-Schramm local weak convergence <cit.>. For _g(n,m), the additional constraint of the graph being embeddable on , exploration via a simple Galton-Watson type process is not possible. This naturally raises the question if the local structure of _g(n,m) can be described in terms of the Benjamini-Schramm local weak convergence. What is the limit of the local structure of _g(n,m) in the sense of the Benjamini-Schramm local weak convergence? The core, which plays a central role in our constructive decomposition, is also known as the 2-core. More generally,given k≥ 2, the k-core of a graph G is the largest subgraph of G of minimum degree at least k. Like the core, the k-core can be constructed by a peeling process that recursively removes vertices of degree less than k. The orderand size of the k-core of G(n,m) has been determined in a seminal paper by Pittel, Spencer, and Wormald <cit.>. Following Pittel, Spencer, and Wormald, the k-core has been extensively studied <cit.>. The most striking results in this area are the astonishing theorem by  <cit.> that the k-core for k≥ 3 jumps to linearorder at the very moment it becomes non-empty, the central limit theorem by Janson and Luczak <cit.>, and the local limit theorem by Coja-Oghlan, Cooley, Kang, and Skubch <cit.> that described—in addition to the order and size—several other parameters of the k-core of G(n,m). In <cit.>, the same authors used a 5-type branching process in order to determine the local structure of the k-core. In terms of global structure, <cit.> provides a randomised algorithm that constructs a random graph with given order and size of the k-core. What are the local and global structure of the k-core of _g(n,m)? One of the main difficulties regarding _g(n,m) is that while graph properties such as having a component of a certain order are monotone for G(n,m) (that is, for every fixed n, the probability that G(n,m) has this property is monotone for 0≤ m≤n2), this is not necessarily the case for _g(n,m). Indeed, monotonicity of graph properties in G(n,m) usually follows immediately from the equivalence between G(n,m) and the random graph process, where we add one random edge at a time. For graphs on surfaces, however, not all edges are allowed to be addedin the corresponding process. Thus, the process is fundamentally different from _g(n,m). For instance, in thedense regime m=⌊μ n⌋ with μ>1, we know by <cit.> that the probability thatP(n,m) is connected is bounded away from both 0 and 1. The planar graph process, however, is connected in that regime <cit.>. Knowing which graph properties are monotone for _g(n,m) would yield a significant improvement to the complexity of the arguments. Which graph properties are monotone for _g(n,m)? The constructive decomposition and generating functions of cubic planar graphs and their relation tothe core of sparse planar graphs by Kang and Łuczak <cit.> have been strengthened by Noy,Ravelomanana, and Rué <cit.> to yield an answer to a challenging open question ofand <cit.> about the limiting probability of G(n,m) being planar at the critical phase , that is, for every constant ∈, the limit p() of the probability that Gn,1+ n^-1/3n/2 is planar. 
For graphs embeddable on a surface of positive genus, Noy, Ravelomanana, and Rué gave a general strategy for determining the corresponding probability. However, determining the exact limiting probability for g≥ 1 is still an open problem. Furthermore, for m beyond the critical phase, we know that G(n,m) is not embeddable on any surface of fixed genus. This immediately raises the question what genus g we need in order to embed G(n,m) on the surface of genus g. Let m=m(n) and g=g(n) be given.
*When is the limiting probability of G(n,m) being embeddable on the surface of genus g positive?
*When is G(n,m) embeddable on the surface of genus g?
*What is the expected genus of G(n,m)?
Another interesting direction, which might provide insight into the answer of <Ref>, is to consider _g(n,m) for genus g=g(n) that tends to infinity with n. If g grows `fast enough' (e.g. as n(n-1)/2), then _g(n,m) will coincide with G(n,m) and will thus exhibit the emergence of the giant component, but not the second phase transition described in <Ref>. For `slowly' growing g, on the other hand, it is to be expected that the second phase transition does take place. For which functions g=g(n) does _g(n,m) feature two phase transitions analogous to <Ref>?
[Source: arXiv:1708.07671v1 — Mihyun Kang, Michael Moßhammer, Philipp Sprüssel, "Phase transitions in graphs on orientable surfaces", math.CO, published 2017-08-25. http://arxiv.org/abs/1708.07671v1]
Fenchel Dual Gradient Methods for Distributed Convex Optimization over Time-varying Networks

Xuyang Wu and Jie Lu

(X. Wu and J. Lu are with the School of Information Science and Technology, ShanghaiTech University, 201210 Shanghai, China. Email: {wuxy, lujie}@shanghaitech.edu.cn. This work has been supported by the National Natural Science Foundation of China under grant 61603254, the Shanghai Pujiang Program under grant 16PJ1406400, and the Natural Science Foundation of Shanghai under grant 16ZR1422500.)

December 7, 2017

In the large collection of existing distributed algorithms for convex multi-agent optimization, only a handful provide convergence rate guarantees on agent networks with time-varying topologies, and these, moreover, restrict the problem to be unconstrained. Motivated by this, we develop a family of distributed Fenchel dual gradient methods for solving constrained, strongly convex but not necessarily smooth multi-agent optimization problems over time-varying undirected networks. The proposed algorithms are constructed based on the application of weighted gradient methods to the Fenchel dual of the multi-agent optimization problem, and can be implemented in a fully decentralized fashion. We show that the proposed algorithms drive all the agents to both primal and dual optimality asymptotically under a minimal connectivity condition and at sublinear rates under a standard connectivity condition. Finally, the competent convergence performance of the distributed Fenchel dual gradient methods is demonstrated via simulations.

§ INTRODUCTION

In many engineering scenarios, a network of agents often needs to jointly make a decision so that a global cost consisting of their local costs is minimized and certain global constraints are satisfied. Such multi-agent optimization problems have found a considerable number of applications, such as estimation by sensor networks <cit.>, network resource allocation <cit.>, and cooperative control <cit.>.

To address convex multi-agent optimization in an efficient, robust, and scalable way, distributed optimization algorithms have been substantially exploited, which allow each agent to reach an optimal or suboptimal decision by repeatedly exchanging its own information with neighbors <cit.>. One typical approach is to let the agents perform consensus operations so as to mix their decisions, which are updated using first-order information of their local objectives (e.g., <cit.>). Recently, rates of convergence to optimality have been established for a few consensus-based algorithms. By assuming that the problem is unconstrained and smooth (i.e., the gradient of each local objective is Lipschitz) and that the network is fixed, the consensus-based multi-step gradient methods <cit.> are able to achieve sublinear rates of convergence, and also linear rates if the local objectives are further (restricted) strongly convex.
Unlike these algorithms, the Subgradient-Push method <cit.>, the Gradient-Push method <cit.>, the DIGing algorithm <cit.>, and the Push-DIGing algorithm <cit.> can be implemented over time-varying networks and still provide convergence rate guarantees. Specifically, Subgradient-Push converges to optimality at a sublinear rate of O(ln k/√(k)) for unconstrained, nonsmooth problems with bounded subgradients <cit.>. In addition, when the problem is unconstrained, strongly convex, and smooth, an O(ln k/k) rate is established for Gradient-Push <cit.>, and linear rates are provided for DIGing and Push-DIGing <cit.>.

Another standard approach is to utilize dual decomposition techniques, which often lead to a dual problem with a decomposable structure, so that it can be solved in a distributed fashion by classic optimization methods including the gradient projection method, the accelerated gradient methods, the method of multipliers, and their variants (e.g., <cit.>). Compared with the aforementioned consensus-based primal methods, many distributed dual/primal-dual algorithms can handle more complicated coupling constraints, yet still manage to achieve sublinear rates of convergence to dual and primal optimality when the dual function is smooth, and achieve linear rates when the dual function is also strongly concave. Despite this advantage, most such methods require a fixed network topology. Although the primal-dual subgradient methods in <cit.>, the primal-dual perturbation method in <cit.>, and the proximal-minimization-based method in <cit.> cope with time-varying agent networks, they only guarantee asymptotic convergence to optimality and no results on convergence rate are provided. In addition to the above two approaches, there are other lines of research on distributed optimization, including incremental optimization methods (e.g., <cit.>), distributed Newton methods (e.g., <cit.>), and continuous-time distributed optimization algorithms (e.g., <cit.>).

This paper is motivated by the lack of distributed optimization algorithms in the literature that are able to address constrained convex multi-agent optimization at a guaranteed convergence rate over time-varying networks. We propose a family of distributed Fenchel dual gradient methods that are able to solve a class of constrained multi-agent optimization problems at sublinear rates on time-varying undirected networks, where the local objectives of the agents are strongly convex but not necessarily differentiable and the global constraint is the intersection of the local convex constraints of the agents. To develop such algorithms, we first derive the Fenchel dual of the multi-agent optimization problem, which consists of a separable, smooth dual function and a coupling linear constraint. Additionally, the gradient of the Fenchel dual function can be evaluated in parallel by the agents. We then utilize a class of weighted gradient methods to solve the Fenchel dual problem, which can be implemented over time-varying networks in a distributed fashion and can be viewed as a generalization of the distributed weighted gradient methods in <cit.>. We show that the proposed Fenchel dual gradient algorithms asymptotically converge to both dual and primal optimality if the agents and their infinitely occurring interactions form a connected graph.
We also show that the dual optimality is reached at an O(1/k) rate and the primal optimality is achieved at an O(1/√(k)) rate if the underlying agent interaction graph during every B iterations is connected. Finally, the efficacy of the Fenchel dual gradient methods is illustrated through numerical examples.

The outline of the paper is as follows: Section <ref> formulates the multi-agent optimization problem, and Section <ref> develops the distributed Fenchel dual gradient methods. Section <ref> establishes the convergence results of the proposed algorithms. Section <ref> presents simulation results, and Section <ref> concludes the paper. All the proofs are included in the appendix. This paper is a significantly improved version of an earlier, 6-page conference paper <cit.>.

Throughout the paper, we use ‖·‖ to represent the Euclidean norm and ‖·‖_1 the ℓ_1 norm. For any set X⊆ℝ^d, int X represents its interior and |X| its cardinality. Let P_X(x)=arg min_y∈ X‖x-y‖ denote the projection of x∈ℝ^d onto X, which uniquely exists if X is closed and convex. The ball centered at x∈ℝ^d with radius r>0 is denoted by B(x,r):={y∈ℝ^d:‖y-x‖≤ r}. The floor of a real number is represented by ⌊·⌋. For any 𝐱∈ℝ^nd, 𝐱=(x_1^T,…,x_n^T)^T means the even partition of 𝐱 into n blocks, i.e., x_i∈ℝ^d ∀ i=1,…,n. For any function f:ℝ^d→ℝ, ∂ f(x) denotes any subgradient of f at x∈ℝ^d, i.e., f(y)-f(x)-∂ f(x)^T(y-x)≥0 ∀ y∈ℝ^d. If f is differentiable, then ∇ f(x) denotes the gradient of f at x∈ℝ^d. In addition, I_d is the d× d identity matrix, O_d is the d× d zero matrix, 1_d∈ℝ^d is the all-one vector, 0_d∈ℝ^d is the all-zero vector, and ⊗ is the Kronecker product. For any matrices M,M'∈ℝ^n× n, M≼ M' and M'≽ M both mean that M'-M is positive semidefinite. Also, [M]_ij represents the (i,j)-entry of M, ℛ(M) the range of M, and Null(M) the null space of M. If M is a block diagonal matrix with diagonal blocks M_1,…,M_m, we write it as M=diag(M_1,…,M_m). If M is symmetric positive semidefinite, we use λ_i^↓(M)≥0 to denote its i-th largest eigenvalue and M^† its Moore-Penrose pseudoinverse.

§ PROBLEM FORMULATION

Consider a set 𝒱={1,2,…,n} of agents, where each agent i∈𝒱 possesses a local objective function f_i:ℝ^d→ℝ and a local constraint set X_i⊆ℝ^d. All of the n≥2 agents attempt to solve the constrained optimization problem

minimize_x∈ℝ^d ∑_i∈𝒱 f_i(x) subject to x∈⋂_i∈𝒱 X_i,

which satisfies the following assumption. (a) Each f_i, i∈𝒱 is strongly convex over X_i with convexity parameter θ_i>0, i.e., for any x,y∈ X_i and any subgradient ∂ f_i(x) of f_i at x, f_i(y)-f_i(x)-∂ f_i(x)^T(y-x)≥(θ_i/2)‖y-x‖^2. (b) 0_d∈int⋂_i∈𝒱X_i. Assumption <ref> ensures the existence of a unique optimal solution x^⋆∈⋂_i∈𝒱X_i to problem (<ref>). Notice that Assumption <ref>(a) is a common assumption for distributed optimization methods with convergence rate guarantees (e.g., <cit.>). In addition, unlike many existing works that require each f_i to be continuously differentiable (e.g., <cit.>), here each f_i is not necessarily differentiable. Also, Assumption <ref>(b) can always be replaced with the less restrictive condition int⋂_i∈𝒱X_i≠∅, which is also assumed in <cit.>. To see this, suppose x'∈int⋂_i∈𝒱X_i for some x'≠0_d. Consider the change of variable z=x-x', and write each f_i(x) and X_i as f_i(z+x') and {z∈ℝ^d:z+x'∈ X_i}, respectively.
Then, the resulting new problem with the decision variable z is in the form of (<ref>) and satisfies Assumption <ref>.

We model the n agents and their interactions as an undirected graph 𝒢^k=(𝒱,ℰ^k) with time-varying topologies, where k∈{0,1,…} represents time, 𝒱={1,2,…,n} is the set of nodes (i.e., the agents), and ℰ^k⊆{{i,j}:i,j∈𝒱,i≠ j} is the set of links (i.e., the agent interactions) at time k. Without loss of generality, we assume that ℰ^k≠∅ ∀ k≥0. In addition, for each node i∈𝒱, let 𝒩_i^k={j∈𝒱:{i,j}∈ℰ^k} be the set of its neighbors (i.e., the nodes that it directly communicates with) at time k. To enable cooperation of the nodes, we need to impose an assumption on network connectivity, so that the local decisions of the nodes can be mixed across the network. To this end, define ℰ_∞:={{i,j}:{i,j}∈ℰ^k for infinitely many k≥ 0}. Then, consider the following assumption. [Infinite connectivity] The graph (𝒱, ℰ_∞) is connected. Assumption <ref> is equivalent to the connectivity of the graph (𝒱,∪_t=k^∞ℰ^t) for all k≥0. This is a minimal connectivity condition for distributed optimization algorithms to converge to optimality, which ensures that every node directly or indirectly influences any other node infinitely many times <cit.>. As Assumption <ref> does not quantify how quickly the local decisions of the nodes diffuse throughout the network, we need a stronger connectivity condition to derive performance guarantees for the algorithms to be developed. [B-connectivity] There exists an integer B>0 such that for any integer k≥0, the graph (𝒱,⋃_t=kB^(k+1)B-1ℰ^t) is connected. Assumption <ref> forces each node to have an impact on the others in the time intervals [kB,(k+1)B-1] ∀ k≥0 of length B. Compared with Assumption <ref>, Assumption <ref> is more restrictive but more commonly adopted in the literature (e.g., <cit.>).

§ FENCHEL DUAL GRADIENT ALGORITHMS

In this section, we develop a family of distributed algorithms to solve (<ref>) based on Fenchel duality.

§.§ Fenchel Dual Problem

We first transform (<ref>) into the following equivalent problem:

minimize_𝐱∈ℝ^nd F(𝐱):=∑_i∈𝒱 f_i(x_i) subject to x_i∈ X_i ∀ i∈𝒱, 𝐱∈ S,

where 𝐱=(x_1^T,…,x_n^T)^T and S:={𝐱∈ℝ^nd:x_1=x_2=⋯=x_n}. Note that problem (<ref>) has a unique optimal solution 𝐱^⋆=((x^⋆)^T,…,(x^⋆)^T)^T, where x^⋆∈⋂_i∈𝒱X_i is the unique optimum of problem (<ref>). In addition, its optimal value F^⋆ is equal to that of problem (<ref>).

Next, we construct the Fenchel dual problem <cit.> of (<ref>). To this end, we introduce a function q_i:ℝ^d×ℝ^d→ℝ for each i∈𝒱 defined as q_i(x_i,w_i)=w_i^Tx_i-f_i(x_i). The conjugate convex function d_i:ℝ^d→ℝ is then given by d_i(w_i) = sup_x_i∈ X_i q_i(x_i,w_i). With the above, the Fenchel dual problem of (<ref>) can be described as

maximize_𝐰∈ℝ^nd -D(𝐰):=-∑_i∈𝒱 d_i(w_i) subject to 𝐰∈ S^⊥,

where 𝐰=(w_1^T,…,w_n^T)^T and S^⊥:={𝐰∈ℝ^nd:w_1+w_2+⋯+w_n=0_d} is the orthogonal complement of S. Note that (<ref>) is a convex optimization problem. Also, with Assumption <ref>, it can be shown that strong duality between (<ref>) and (<ref>) holds, i.e., the optimal value -D^⋆ of (<ref>) equals F^⋆, and that the optimal set of (<ref>) is nonempty <cit.>. Moreover, 𝐰^⋆=((w_1^⋆)^T,…,(w_n^⋆)^T)^T∈ S^⊥ is an optimal solution to (<ref>) if and only if ∇ d_i(w_i^⋆)=∇ d_j(w_j^⋆) ∀ i,j∈𝒱 <cit.>, i.e., ∇ D(𝐰^⋆)∈ S. Below we acquire a couple of properties regarding the Fenchel dual problem (<ref>).
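First, as a concrete illustration (ours, not from the paper), the conjugate d_i and its maximizer can be written in closed form when f_i is a one-dimensional quadratic over an interval; the Python sketch below, with illustrative values of θ, a, b, also checks numerically the gradient identity ∇d_i(w_i)=x̃_i(w_i) stated in the next subsection.

```python
import numpy as np

# Hypothetical one-dimensional local objective f(x) = (theta/2) x^2
# on the interval X = [a, b]; the parameter values are illustrative.
theta, a, b = 2.0, -1.0, 3.0

def x_tilde(w):
    # Unique maximizer of q(x, w) = w*x - f(x) over [a, b]; since q is
    # strongly concave in x, the box-constrained maximizer is the
    # clipped unconstrained one, w/theta.
    return np.clip(w / theta, a, b)

def d(w):
    # Conjugate value d(w) = sup_{x in X} (w*x - f(x)).
    x = x_tilde(w)
    return w * x - 0.5 * theta * x**2

# Finite-difference check of d'(w) = x_tilde(w); the Lipschitz constant
# of d' here is 1/theta, matching Proposition 1.
w = np.linspace(-5.0, 10.0, 7)
grad_fd = (d(w + 1e-6) - d(w - 1e-6)) / 2e-6
print(np.max(np.abs(grad_fd - x_tilde(w))))  # ~0
```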
Notice from Assumption <ref>(a) that for each i∈𝒱 and each w_i∈ℝ^d, there uniquely exists x̃_i(w_i):=arg max_x∈ X_i q_i(x,w_i). Thus, d_i is differentiable <cit.> and ∇ d_i(w_i) = x̃_i(w_i). The following proposition shows that d_i is smooth, i.e., ∇ d_i is Lipschitz. <cit.> Suppose Assumption <ref> holds. Then, for each i∈𝒱, ∇ d_i is Lipschitz continuous with Lipschitz constant L_i=1/θ_i, where θ_i>0 is defined in Assumption <ref>, i.e., ‖∇ d_i(u_i)-∇ d_i(v_i)‖≤ L_i‖u_i-v_i‖ ∀ u_i,v_i∈ℝ^d. In fact, the strong convexity of f_i on X_i assumed in Assumption <ref>(a) is both sufficient and necessary for the smoothness of d_i <cit.>. Likewise, we can see that D(𝐰) is differentiable and ∇ D(𝐰)=𝐱̃(𝐰):=(x̃_1(w_1)^T,…,x̃_n(w_n)^T)^T. According to (<ref>) and (<ref>), if each w_i is known to node i, then the gradient of the Fenchel dual function D can be evaluated in parallel by the nodes, while the Lagrange dual of (equivalent forms of) problem (<ref>) does not have such a favorable feature when the network is time-varying and not necessarily connected at each time instance. Further, notice that F(𝐱) in problem (<ref>) is strongly convex over X_1×⋯× X_n with convexity parameter θ_min:=min_i∈𝒱θ_i. Also note that D(𝐰)=sup_𝐱∈ X_1×⋯× X_n 𝐰^T𝐱-F(𝐱). Like Proposition <ref>, we can establish the Lipschitz continuity of ∇ D. Suppose Assumption <ref> holds. Then, ∇ D is Lipschitz continuous with Lipschitz constant L=1/θ_min. Finally, we show that the dual optimal set and the level sets of D on S^⊥ are bounded. Suppose Assumption <ref> holds. For any optimal solution 𝐰^⋆∈ S^⊥ of problem (<ref>), ‖𝐰^⋆‖≤((∑_i∈𝒱max_x_i∈ B(0_d,r_c)f_i(x_i))-F^⋆)/r_c<∞, where r_c∈(0,∞) is such that B(0_d,r_c)⊆⋂_i∈𝒱X_i. In addition, for any 𝐰∈ S^⊥, the level set S_0(𝐰):={𝐰'∈ S^⊥:D(𝐰')≤ D(𝐰)} is compact. See Appendix <ref>. The boundedness of the dual optimal set relies on the nonemptyness of int⋂_i∈𝒱X_i assumed by Assumption <ref>(b), without which the dual optimal set can be unbounded (e.g., X_i={(z_1,z_2)^T∈ℝ^2:z_1=0} ∀ i∈𝒱).

§.§ Algorithms

In <cit.>, a set of weighted gradient methods is proposed to solve a network resource allocation problem, which can be cast in the form of (<ref>). Inspired by this, we consider a class of weighted gradient methods as follows: Starting from an arbitrary 𝐰^0∈ S^⊥, the subsequent iterates are generated by

𝐰^k+1=𝐰^k-α^k(H_𝒢^k⊗ I_d)∇ D(𝐰^k), ∀ k≥0,

where α^k>0 is the step-size and H_𝒢^k∈ℝ^n× n is the weight matrix that depends on the topology of 𝒢^k, defined as

[H_𝒢^k]_ij = ∑_s∈𝒩_i^k h_is^k if i=j; -h_ij^k if {i,j}∈ℰ^k; 0 otherwise, ∀ i,j∈𝒱.

We require h_ij^k=h_ji^k>0 ∀{i,j}∈ℰ^k ∀ k≥0. We also assume that there exists a finite interval [h,h̅] such that h_ij^k∈ [h,h̅]⊂(0,∞), ∀ k≥0, ∀ i∈𝒱, ∀ j∈𝒩_i^k. Since ℰ^k≠∅, H_𝒢^k≠ O_n for any k≥0. Moreover, H_𝒢^k is symmetric positive semidefinite and H_𝒢^k1_n=0_n. Thus, using the same rationale as <cit.>, the proposition below shows that as long as 𝐰^0 is feasible, so are 𝐰^k ∀ k≥1. Let (𝐰^k)_k=0^∞ be the iterates generated by (<ref>). If 𝐰^0∈ S^⊥, then (𝐰^k)_k=0^∞⊆ S^⊥. The weighted gradient method (<ref>) can be tuned to solve problems of minimizing ∑_i∈𝒱d_i(w_i) subject to ∑_i∈𝒱w_i=c, ∀ c∈ℝ^d. To do so, we can simply replace the initial condition 𝐰^0∈ S^⊥ with ∑_i∈𝒱w_i^0=c. Next, we introduce primal iterates to the weighted gradient method (<ref>) that is intended for the Fenchel dual problem (<ref>).
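Before doing so, the following toy simulation (ours, with illustrative parameters) sketches the dual iteration (<ref>) for scalar quadratics f_i(x)=(θ_i/2)(x-c_i)^2 with X_i=ℝ, so that x̃_i(w)=c_i+w/θ_i is available in closed form; one random link is activated per step, i.e., the gossiping pattern discussed later, with Laplacian weights h_ij^k=1 and 0<α<1/L.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (illustrative, not from the paper): n agents with scalar
# local objectives f_i(x) = (theta_i/2)(x - c_i)^2 and X_i = R, so that
# x_tilde_i(w) = argmax_x (w*x - f_i(x)) = c_i + w/theta_i in closed form.
n = 10
theta = rng.uniform(1.0, 3.0, n)
c = rng.uniform(-2.0, 2.0, n)
x_star = np.sum(theta * c) / np.sum(theta)   # minimizer of sum_i f_i
L = 1.0 / theta.min()                        # Lipschitz constant of grad D

w = np.zeros(n)          # w^0 in S-perp: entries sum to zero
alpha = 0.5 / L          # gossip pattern: 0 < alpha < 1/L suffices here

for k in range(20000):
    x = c + w / theta                             # x_i^k = x_tilde_i(w_i^k)
    i, j = rng.choice(n, size=2, replace=False)   # one random link per step
    g = x[i] - x[j]                               # Laplacian weight h_ij^k = 1
    w[i] -= alpha * g
    w[j] += alpha * g

x = c + w / theta
print(abs(w.sum()))                # stays 0: dual iterates remain feasible
print(np.max(np.abs(x - x_star)))  # all agents approach the optimum
```

In this run all agents' primal iterates approach the common minimizer x^⋆=∑_iθ_ic_i/∑_iθ_i while ∑_i w_i^k remains zero, in line with the feasibility proposition above.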
Note from (<ref>) and (<ref>) that (<ref>) can be written as

x_i^k=x̃_i(w_i^k), ∀ i∈𝒱,
w_i^k+1=w_i^k-α^k∑_j∈𝒩_i^k h_ij^k(x_i^k-x_j^k), ∀ i∈𝒱,

where w_i^k∈ℝ^d is the i-th d-dimensional block of 𝐰^k and x̃_i(w_i^k) is defined in (<ref>). We assign each w_i^k and x_i^k to node i as its dual and primal iterates, with x_i^k being node i's estimate of the optimal solution x^⋆ of problem (<ref>). Thus, the above algorithm with both dual and primal iterates can be implemented in a distributed and possibly asynchronous way on the time-varying network, as is shown in Algorithm <ref>.

In Algorithm <ref>, the initial condition 𝐰^0∈ S^⊥ can simply be realized by setting w_i^0=0_d ∀ i∈𝒱. Subsequently at each iteration, every node i with at least one neighbor updates its dual iterate w_i^k via local interactions with its current neighbors and then updates its primal iterate x_i^k on its own. To implement Algorithm <ref>, each node i needs to select the weights h_ij^k ∀ j∈𝒩_i^k that satisfy h_ij^k=h_ji^k in a predetermined interval [h,h̅]⊂(0,∞), where h and h̅ may or may not be related to 𝒢^k ∀ k≥0. This can be done through inexpensive interactions between neighboring nodes. Two typical examples of H_𝒢^k are the graph Laplacian matrix

[H_𝒢^k]_ij=[L_𝒢^k]_ij := |𝒩_i^k| if i=j; -1 if {i,j}∈ℰ^k; 0 otherwise,

and the Metropolis weight matrix <cit.>

[H_𝒢^k]_ij = ∑_s∈𝒩_i^k 1/max{|𝒩_i^k|L_i, |𝒩_s^k|L_s} if i=j; -1/max{|𝒩_i^k|L_i, |𝒩_j^k|L_j} if {i,j}∈ℰ^k; 0 otherwise.

When H_𝒢^k is set to (<ref>), each node i does not need any additional effort in computing the weights h_ij^k ∀ j∈𝒩_i^k since they are 1 by default. When H_𝒢^k is set to (<ref>), each node i only needs to obtain from every neighbor j∈𝒩_i^k the product of node j's neighborhood size |𝒩_j^k| and Lipschitz constant L_j=1/θ_j of ∇ d_j. The remaining parameter to be determined is the step-size α^k. Later, in Section <ref>, we will show that the following step-size condition is sufficient to guarantee the convergence of Algorithm <ref>: Suppose there is a finite interval [α,α̅] such that α^k∈[α,α̅]⊂(0,2/δ), ∀ k≥0, where δ>0 can be any positive constant satisfying H_𝒢^k≼δΛ_L^-1, ∀ k≥ 0, with Λ_L:=diag(L_1,…,L_n). Note that such δ always exists because Λ_L^-1 is positive definite and H_𝒢^k is positive semidefinite. For example, we may choose δ=L·sup_k≥ 0λ_1^↓(H_𝒢^k), where L=1/θ_min=max_i∈𝒱L_i. More conservatively, because H_𝒢^k≼h̅L_𝒢^k and λ_1^↓(L_𝒢^k)≤ n, we can always let δ=Lh̅n and thus [α,α̅]⊂(0,2/(Lh̅n)). Since h̅ can be predetermined and known to all the nodes, this condition only requires the nodes to obtain the global quantities n and L=max_i∈𝒱L_i, which can be computed decentralizedly by some consensus schemes (e.g., <cit.>). Below, we provide less conservative step-size conditions for the two specific choices of H_𝒢^k in (<ref>) and (<ref>), which also can be satisfied by the nodes without any centralized coordination. When H_𝒢^k is set to the graph Laplacian matrix L_𝒢^k as in (<ref>), in addition to the aforementioned choice δ=L·sup_k≥ 0λ_1^↓(L_𝒢^k), another option for δ could be δ=2sup_k≥0max_i∈𝒱|𝒩_i^k|L_i, so that δΛ_L^-1-L_𝒢^k is diagonally dominant and thus positive semidefinite for each k≥ 0. Therefore, α^k can be selected in the interval [α,α̅] satisfying 0<α≤α̅< 1/min{(L/2)·sup_k≥ 0λ_1^↓(L_𝒢^k), sup_k≥0max_i∈𝒱|𝒩_i^k|L_i}. The above step-size condition can be simplified for some special interaction patterns. For instance, if the nodes interact in a gossiping pattern, i.e., each ℰ^k contains only one link, then we may let 0<α≤α̅<1/L.
Even though the topologies of (𝒢^k)_k=0^∞ are completely unknown, since λ_1^↓(L_𝒢^k)≤ n, we can adopt a more conservative step-size condition 0<α≤α̅<2/(nL). When H_𝒢^k is set according to (<ref>), we can simply take δ=2, because 2Λ_L^-1-H_𝒢^k is diagonally dominant and thus 2Λ_L^-1≽ H_𝒢^k. Hence, the step-sizes can be selected as 0<α≤α^k≤α̅<1, ∀ k≥0, which requires no global information and is independent of the network and the problem. The underlying weighted gradient method (<ref>) in Algorithm <ref> can be viewed as a generalization of the distributed weighted gradient methods in <cit.>. By assuming the (directed) network to be time-invariant and connected, <cit.> proposes a class of weighted gradient methods in the form of (<ref>) but with a constant weight matrix. It is also shown in <cit.> that if the time-invariant network is further undirected, the constant weight matrix can be determined in a distributed fashion via (<ref>) or (<ref>). The step-size conditions in <cit.> for fixed undirected networks and fixed weight matrices given by (<ref>) and (<ref>) are extended here in Examples <ref> and <ref> to handle time-varying networks and time-varying weight matrices. On the other hand, <cit.> considers time-varying undirected networks satisfying Assumption <ref>. By setting H_𝒢^k to L_𝒢^k in (<ref>) and α^k=1/(2nL) ∀ k≥0, (<ref>) reduces to the algorithm in <cit.>. Note from Example <ref> that here we allow for a much broader step-size range for this particular weight matrix.

§ CONVERGENCE ANALYSIS

This section is dedicated to analyzing the convergence performance of Algorithm <ref>.

§.§ Asymptotic convergence under infinite connectivity

In this subsection, we show that Algorithm <ref> asymptotically converges to the optimum of problem (<ref>) under Assumption <ref>. We first show that the step-size condition (<ref>) ensures (D(𝐰^k))_k=0^∞ to be non-increasing. Suppose Assumption <ref> holds. Let (𝐰^k)_k=0^∞ be the dual iterates generated by Algorithm <ref>. If the step-sizes (α^k)_k=0^∞ satisfy (<ref>), then for each k≥0, D(𝐰^k+1)-D(𝐰^k)≤-ρ∇ D(𝐰^k)^T(H_𝒢^k⊗ I_d)∇ D(𝐰^k), where ρ:=min{α-α^2δ/2, α̅-α̅^2δ/2}∈(0,∞), with α,α̅>0 in (<ref>) and δ>0 in (<ref>). See Appendix <ref>. Lemma <ref>, along with Propositions <ref> and <ref>, implies that for each k≥0, 𝐰^k∈ S_0(𝐰^0) and ‖𝐰^k-𝐰^⋆‖≤ M_0, where 𝐰^⋆ is any optimum of problem (<ref>) and M_0:=max_𝐰∈ S_0(𝐰^0), 𝐰^⋆∈ S^⊥:D(𝐰^⋆)=D^⋆ ‖𝐰-𝐰^⋆‖∈[0,∞). Another important consequence of Lemma <ref> is that the differences of the primal iterates along the time-varying links are vanishing. To see this, by adding the inequality in Lemma <ref> from k=0 to ∞, ∑_k=0^∞⟨𝐱^k,(H_𝒢^k⊗I_d)𝐱^k⟩ =∑_k=0^∞⟨∇ D(𝐰^k),(H_𝒢^k⊗I_d)∇ D(𝐰^k)⟩≤(D(𝐰^0)-D^⋆)/ρ<∞, where 𝐱^k=((x_1^k)^T,…,(x_n^k)^T)^T. This implies that ⟨𝐱^k,(H_𝒢^k⊗ I_d)𝐱^k⟩→ 0 as k→∞. Since ⟨𝐱^k, (H_𝒢^k⊗ I_d)𝐱^k⟩= ∑_{i,j}∈ℰ^k h_ij^k‖x_i^k-x_j^k‖^2 and h_ij^k≥h>0 ∀{i,j}∈ℰ^k, we have lim_k→∞max_{i,j}∈ℰ^k‖x_i^k-x_j^k‖=0. Because 𝒢^k may not be connected at each k≥0, (<ref>) alone is insufficient to assert that the primal iterates x_i^k ∀ i∈𝒱 asymptotically reach a consensus. Nevertheless, by integrating (<ref>) with Assumption <ref>, we are able to show in Lemma <ref> below that such an assertion is indeed true. The main idea of proving this can be summarized as follows: By (<ref>) we know that ‖x_i^k-x_j^k‖ ∀{i,j}∈ℰ^k can be arbitrarily small after some time T≥0.
Then, instead of studying the differences ‖x_i^k-x_j^k‖ ∀ i,j∈𝒱 across the entire network, we show that such differences within each connected component of the graph (𝒱,∪_t=T^kℰ^t) become sufficiently small after some k≥ T. Finally, note from Assumption <ref> that the graph (𝒱,∪_t=T^kℰ^t) must be connected when k≥ T is sufficiently large. The dissipation of the differences among all the x_i^k's can thus be concluded. Suppose Assumptions <ref> and <ref> hold. Let (𝐱^k)_k=0^∞ be the primal iterates generated by Algorithm <ref>. If the step-sizes (α^k)_k=0^∞ satisfy (<ref>), then lim_k→∞max_i,j∈𝒱‖x_i^k-x_j^k‖=0. See Appendix <ref>. Since x_i^k∈ X_i ∀ i∈𝒱, 𝐱^k is feasible if and only if 𝐱^k∈ S. Thus, ‖P_S^⊥(𝐱^k)‖ can be used to quantify the infeasibility of 𝐱^k. Note that ‖P_S^⊥(𝐱^k)‖^2=‖𝐱^k-P_S(𝐱^k)‖^2=∑_i∈𝒱‖x_i^k-(1/n)∑_j∈𝒱x_j^k‖^2≤(1/n)∑_i∈𝒱∑_j∈𝒱‖x_i^k-x_j^k‖^2. It follows from Lemma <ref> that ‖P_S^⊥(𝐱^k)‖^2→ 0 as k→∞. This can further be utilized to establish the asymptotic convergence to both dual and primal optimality, as is shown in the theorem below. Suppose Assumptions <ref> and <ref> hold. Let (𝐰^k)_k=0^∞ and (𝐱^k)_k=0^∞ be the dual and primal iterates generated by Algorithm <ref>, respectively. If the step-sizes (α^k)_k=0^∞ satisfy (<ref>), then lim_k→∞‖P_S^⊥(𝐱^k)‖=0, lim_k→∞D(𝐰^k)=D^⋆, lim_k→∞F(𝐱^k)=F^⋆, and lim_k→∞𝐱^k= 𝐱^⋆. See Appendix <ref>.

§.§ Convergence rates under B-connectivity

In this subsection, we offer sublinear rates of convergence for Algorithm <ref> under Assumption <ref>. Inspired by <cit.>, we first provide a bound on the accumulative drop in the value of D over each time interval [tB,(t+1)B-1], t∈{0,1,…}, which depends only on the dual iterate at time tB and the underlying interaction graph during these B iterations. To this end, for each k≥0, let 𝒢̃^k=(𝒱,ℰ̃^k) be any spanning subgraph of (𝒱, ⋃_t=k^k+B-1ℰ^t), which, owing to Assumption <ref>, is chosen to be connected at k∈{0,B,2B,…}. Also let ϖ^k be the maximum degree of 𝒢̃^k and ϖ̅:=sup_t∈{0,1,…}ϖ^tB. Clearly, 1≤ϖ^tB≤ϖ̅≤ n-1 ∀ t∈{0,1,…}. Suppose Assumptions <ref> and <ref> hold. Let (𝐰^k)_k=0^∞ be the dual iterates generated by Algorithm <ref>. If the step-sizes (α^k)_k=0^∞ satisfy (<ref>), then for each k∈{0,B,2B,…}, ∑_t=k^k+B-1∇ D(𝐰^t)^T(H_𝒢^t⊗ I_d)∇ D(𝐰^t)≥∇ D(𝐰^k)^T(L_𝒢̃^k⊗ I_d)∇ D(𝐰^k)/η, where η:=3Bϖ̅α̅^2δ L+3/h∈(0,∞), with α̅>0 in (<ref>), δ>0 in (<ref>), L>0 in Corollary <ref>, and h>0 in (<ref>). See Appendix <ref>. When H_𝒢^k=L_𝒢^k and α^k=1/(2nL), <cit.> provides a similar bound to (<ref>) with η replaced by 3B/2 and 𝒢̃^k being a spanning tree. Lemma <ref> improves this bound since η≤ 3B/4+3 for such a particular choice of H_𝒢^k and α^k, allows for more general selections of H_𝒢^k and α^k, and sheds light on how the network topologies come into play. Lemma <ref> and Lemma <ref> together bound the decrease in the value of D during every B iterations, with which we are able to provide a rate for D(𝐰^k)→ D^⋆. Prior to doing that, we define a sequence (M̃_k)_k=0^∞ as follows: Let M̃_0∈ℝ be any positive constant and define M̃_k=max_t=0,…,k-1 min_𝐰^⋆∈ S^⊥:D(𝐰^⋆)=D^⋆ ‖𝐰^tB-𝐰^⋆‖, ∀ k≥1. Notice that 0≤M̃_k≤ M_0<∞, where M_0 is given by (<ref>). Suppose Assumptions <ref> and <ref> hold. Let (𝐰^k)_k=0^∞ be the dual iterates generated by Algorithm <ref>.
If the step-sizes (α^k)_k=0^∞ satisfy (<ref>), then for each k≥0, D(𝐰^k)-D^⋆ ≤ηM̃_⌊ k/B⌋^2(D(𝐰^0)-D^⋆)/(ηM̃_⌊ k/B⌋^2+ρλ(D(𝐰^0)-D^⋆)⌊ k/B⌋), where M̃_⌊ k/B⌋∈ [0,M_0] is defined in (<ref>) with M_0≥0 in (<ref>), λ:=inf_t∈{0,1,…}λ_n-1^↓(L_𝒢̃^tB)∈(0,∞), and η,ρ>0 are given in Lemma <ref> and Lemma <ref>, respectively. See Appendix <ref>. Theorem <ref> says that Algorithm <ref>, or equivalently, the underlying weighted gradient method (<ref>), converges to the optimal value D^⋆ of problem (<ref>) at an O(1/k) rate. The derivation of this result requires each d_i to be smooth and the dual optimal set to be compact. These two conditions on problem (<ref>) may not hold if Assumption <ref> is not satisfied (cf. Section <ref>). Note that without the compactness of the dual optimal set, (<ref>) still holds, but we cannot guarantee (𝐰^k)_k=0^∞ and thus M̃_⌊ k/B⌋ ∀ k≥0 to be bounded. The distributed weighted gradient methods in <cit.> also require the above two conditions on problem (<ref>) to establish their convergence to D^⋆. By imposing an additional assumption that the Hessian matrices of d_i ∀ i∈𝒱 are positive definite, the methods in <cit.> are proved to achieve linear convergence rates on fixed networks. In contrast, Theorems <ref> and <ref> allow for time-varying networks and do not even require the existence of the Hessian matrices of d_i ∀ i∈𝒱. The algorithm in <cit.> is shown to asymptotically drive D(𝐰^k) to D^⋆ and satisfy min_t=1,…,k‖P_S^⊥(∇ D(𝐰^tB))‖^2≤ C· n^3B/k for some C>0. Our results in Theorems <ref> and <ref> for the more general algorithm (<ref>) are still stronger. We show that lim_k→∞D(𝐰^k)=D^⋆ under the less restrictive Assumption <ref>, and that D(𝐰^k) converges to D^⋆ at an O(1/k) rate under Assumption <ref>. Also, since ∇ D(𝐰^k)=𝐱^k, the first inequality in Theorem <ref> below is comparable to and slightly stronger than the aforementioned convergence rate in <cit.>. Based on Theorem <ref>, below we show that the primal errors ‖𝐱^k-𝐱^⋆‖ and |F(𝐱^k)-F^⋆| in optimality and ‖P_S^⊥(𝐱^k)‖ in feasibility all converge to zero at rates of O(1/√(k)). Like many Lagrange dual gradient methods (e.g., <cit.>), we do so by relating such primal errors with the dual error D(𝐰^k)-D^⋆. Suppose Assumptions <ref> and <ref> hold. Let (𝐱^k)_k=0^∞ be the primal iterates generated by Algorithm <ref>. If the step-sizes (α^k)_k=0^∞ satisfy (<ref>), then for each k≥0,

‖P_S^⊥(𝐱^k)‖≤‖𝐱^k-𝐱^⋆‖≤√(2LηM̃_⌊ k/B⌋^2(D(𝐰^0)-D^⋆)/(ηM̃_⌊ k/B⌋^2+ρλ(D(𝐰^0)-D^⋆)⌊ k/B⌋)),
F(𝐱^k)-F^⋆≤‖𝐰^k‖√(2LηM̃_⌊ k/B⌋^2(D(𝐰^0)-D^⋆)/(ηM̃_⌊ k/B⌋^2+ρλ(D(𝐰^0)-D^⋆)⌊ k/B⌋)),
F(𝐱^k)-F^⋆≥-‖𝐰^⋆‖√(2LηM̃_⌊ k/B⌋^2(D(𝐰^0)-D^⋆)/(ηM̃_⌊ k/B⌋^2+ρλ(D(𝐰^0)-D^⋆)⌊ k/B⌋)),

where 𝐰^⋆ is any optimal solution of problem (<ref>), L is given in Corollary <ref>, and the remaining constants have been introduced in Theorem <ref>. See Appendix <ref>. Since 𝐰^k∈ S_0(𝐰^0) ∀ k≥0 and S_0(𝐰^0) is compact, the term ‖𝐰^k‖ that appears in the convergence rate of F(𝐱^k)-F^⋆ is uniformly bounded above by M_0+‖𝐰^⋆‖. Consequently, the primal convergence rates of Algorithm <ref> in Theorem <ref> are all of order O(1/√(k)), which is commensurate with the convergence rate of the classic (centralized) subgradient projection method <cit.>. In the final part of this section, we compare the primal convergence rates of Algorithm <ref> with those of the existing distributed optimization algorithms that also have guaranteed convergence rates over time-varying networks, including Subgradient-Push <cit.>, Gradient-Push <cit.>, DIGing <cit.>, and Push-DIGing <cit.>.
Different from Algorithm <ref>, which is developed by applying distributed weighted gradient methods to the Fenchel dual, Subgradient-Push and Gradient-Push are constructed by incorporating the subgradient method and the stochastic gradient descent method into the Push-Sum consensus protocol <cit.>, DIGing is designed by combining a distributed inexact gradient method with a gradient tracking technique, and Push-DIGing is derived by introducing Push-Sum into DIGing. The convergence rates of the aforementioned algorithms are all established under Assumption <ref>.[When it comes to Subgradient-Push, Gradient-Push, and Push-DIGing, "connected" in Assumption <ref> is indeed "strongly connected" since they consider directed networks.] For each of these algorithms, Table <ref> lists its assumptions and convergence rate. Observe that only Algorithm <ref> is capable of solving problems with different local constraints of the agents, while the remaining algorithms all require the problem to be unconstrained and their extensions to constrained problems are still open challenges. Also, Gradient-Push, DIGing, and Push-DIGing require both strong convexity and smoothness of the f_i's, leading to faster convergence rates than the O(1/√(k)) rate of Algorithm <ref>. This is natural because we assume a weaker condition on f_i ∀ i∈𝒱, which allows the strongly convex f_i's to be nonsmooth. Subgradient-Push needs neither strong convexity nor smoothness of each f_i, and the resulting convergence rate O(ln k/√(k)) is slower than our O(1/√(k)) result. Note that the assumption on the f_i's for Algorithm <ref> is not necessarily more restrictive than that for Subgradient-Push, since Subgradient-Push requires the subgradients of each f_i to be uniformly bounded over ℝ^d but Algorithm <ref> does not. Unlike Subgradient-Push, Gradient-Push, and Push-DIGing, which admit directed links, DIGing and Algorithm <ref> are only applicable to undirected graphs. With that said, Algorithm <ref> is guaranteed to converge to the optimum under the minimal connectivity condition, i.e., Assumption <ref>, while the other methods have no such convergence results.

§ NUMERICAL EXAMPLES

In this section, we demonstrate the competent convergence performance of the proposed distributed Fenchel dual gradient methods by comparing them with a number of existing distributed optimization algorithms via simulations.

§.§ Constrained case

We first compare the convergence performance of a consensus-based subgradient projection method <cit.>, a proximal-minimization-based method <cit.>, and Algorithm <ref> with H_𝒢^k given by the graph Laplacian matrix (<ref>) and the Metropolis weight matrix (<ref>), respectively, in solving constrained distributed optimization problems in the form of (<ref>). It has been proved that when each local constraint X_i is compact, the consensus-based subgradient projection method and the proximal-minimization-based method, with diminishing step-sizes (e.g., 1/k), asymptotically converge to an optimum over time-varying networks satisfying Assumption <ref> <cit.>. Thus, consider the following multi-agent ℓ_1-regularization problem that often arises in machine learning:

minimize_x∈ℝ^5 ∑_i∈𝒱(x^TA_ix+b_i^Tx+(1/n)‖x‖_1) subject to x∈⋂_i∈𝒱{x∈ℝ^5:p_i ≤ x≤ q_i},

where each A_i∈ℝ^5× 5 is symmetric positive definite, b_i∈ℝ^5, and p_i≤ x≤ q_i with p_i,q_i∈ℝ^5 means an elementwise inequality.
In addition, for each i∈𝒱, the convexity parameter of its local objective is θ_i=λ_5^↓(A_i)>0. For Algorithm <ref>, we adopt α^k=1/(Ln) for H_𝒢^k in (<ref>) and α^k=1/2 for H_𝒢^k in (<ref>) to satisfy the step-size condition (<ref>). For the other two methods, we adopt the diminishing step-size 1/k and the local (unweighted) averaging operation as the consensus scheme to guarantee convergence. We also let the algorithms all start from the same initial primal iterate. Figure <ref> presents the average primal errors produced by the aforementioned algorithms with different values of n, B and θ_i ∀ i∈𝒱. Observe that Algorithm <ref> with the Metropolis weight matrix (<ref>) outperforms the others in all six cases. Moreover, although at an early stage the subgradient projection method and the proximal minimization method converge faster than Algorithm <ref> with the Laplacian weight matrix (<ref>), their convergence gradually becomes much slower due to the diminishing nature of the step-size. By comparing Figure <ref> versus <ref> and Figure <ref> versus <ref>, we can see that smaller B leads to faster convergence of Algorithm <ref>, which is consistent with our convergence analysis in Section <ref>, while the impact of B on the subgradient projection method and the proximal minimization method is not apparent. Besides, Figure <ref> versus <ref> and Figure <ref> versus <ref> suggest that Algorithm <ref> with H_𝒢^k in (<ref>) is more scalable to the network size n than the others. Additionally, by comparing Figures <ref> and <ref> with Figure <ref>, it can be inferred that the larger the θ_i's are, the better Algorithm <ref> performs.

§.§ Unconstrained case

In Section <ref>, we have compared Algorithm <ref> versus Subgradient-Push <cit.>, Gradient-Push <cit.>, DIGing <cit.>, and Push-DIGing <cit.> in the theoretical aspects. Here, we compare, via simulation, their convergence performance in solving the following unconstrained quadratic program that satisfies all the assumptions in <cit.>: minimize_x∈ℝ^5∑_i∈𝒱(x^TA_ix+b_i^Tx), where we let θ_i=λ_5^↓(A_i)∈(2,3) ∀ i∈𝒱 and (n,B)=(50,10). For fair comparison, we assume there is no stochastic error in gradient evaluation for Gradient-Push. Then, Gradient-Push and Subgradient-Push have the same algorithmic form when the local objectives are differentiable, and below we omit Subgradient-Push. Figure <ref> plots the evolution of the average primal error for Gradient-Push, DIGing, Push-DIGing, and Algorithm <ref> with the Laplacian weight matrix (<ref>) and with the Metropolis weight matrix (<ref>). We adopt the same step-sizes for Algorithm <ref> as in Section <ref>. For the other three methods, we fine-tune the step-sizes while satisfying the step-size conditions in <cit.> that theoretically ensure their convergence rates. Observe that Gradient-Push, DIGing, and Push-DIGing almost stop making progress after a few iterations with a non-negligible primal error, while Algorithm <ref> achieves much better accuracy with the above two choices of H_𝒢^k. As all the convergence rate results in <cit.> and this paper are derived from worst-case analysis, the theoretical step-size conditions could be very conservative. Thus, in Figure <ref> we empirically choose the step-sizes for these algorithms, whose values may violate the theoretical conditions but speed up convergence.
After some tuning, we select the step-sizes to be 1/(nL), 1.7, 0.15/k, 0.05, and 0.04 for Algorithm <ref> with H_𝒢^k in (<ref>), Algorithm <ref> with H_𝒢^k in (<ref>), Gradient-Push, DIGing, and Push-DIGing, respectively. Note that for Algorithm <ref> with H_𝒢^k in (<ref>), the empirical step-size coincides with the theoretical one in Figure <ref>. By comparing Figure <ref> with Figure <ref>, we can observe that with the above empirically selected step-sizes, Gradient-Push slightly accelerates its convergence, DIGing and Push-DIGing exhibit prominently improved convergence performance, yet Algorithm <ref> with H_𝒢^k in (<ref>) still performs best.

§ CONCLUSION

We have constructed a family of distributed Fenchel dual gradient methods for solving multi-agent optimization problems with strongly convex local objectives and nonidentical local constraints over time-varying networks. The proposed algorithms have been proved to asymptotically converge to the optimal solution under a minimal connectivity condition, and to have an O(1/√(k)) convergence rate under a standard connectivity condition. Simulation results have illustrated the competitive performance of the distributed Fenchel dual gradient methods by comparing them with related algorithms. In the future, this work may be extended in a number of directions, such as problems with general convex objective functions and networks with directed links.

§ APPENDIX

§.§ Proof of Proposition <ref>

Let 𝐰^⋆=((w_1^⋆)^T,…,(w_n^⋆)^T)^T be an optimal solution of problem (<ref>). Since Assumption <ref>(b) assumes 0_d∈int⋂_i∈𝒱X_i, there exists r_c∈(0,∞) such that B(0_d,r_c)⊆⋂_i∈𝒱X_i. For each i∈𝒱, if w_i^⋆≠0_d, let x_i'=r_c·w_i^⋆/‖w_i^⋆‖; otherwise let x_i'=0_d. Clearly, x_i'∈ B(0_d,r_c). Consequently, D^⋆=D(𝐰^⋆)=∑_i∈𝒱(sup_x_i∈ X_i(w_i^⋆)^Tx_i-f_i(x_i))≥∑_i∈𝒱((w_i^⋆)^Tx_i'-f_i(x_i'))=r_c∑_i∈𝒱‖w_i^⋆‖-∑_i∈𝒱f_i(x_i'). This, along with ‖𝐰^⋆‖≤∑_i∈𝒱‖w_i^⋆‖ and D^⋆=-F^⋆, implies that ‖𝐰^⋆‖≤((∑_i∈𝒱f_i(x_i'))-F^⋆)/r_c. Note that ∑_i∈𝒱f_i(x_i')≤∑_i∈𝒱max_x_i∈ B(0_d,r_c)f_i(x_i), where F^⋆≤∑_i∈𝒱max_x_i∈ B(0_d,r_c)f_i(x_i)<∞ because B(0_d,r_c) is compact. Therefore, (<ref>) holds, which suggests that the optimal set of problem (<ref>) is compact. Then, due to the convexity of D and S^⊥, the level sets S_0(𝐰) ∀𝐰∈ S^⊥ are compact <cit.>.

§.§ Proof of Lemma <ref>

For convenience, let 𝐲^k = (H_𝒢^k⊗ I_d)∇ D(𝐰^k). Due to the Descent Lemma <cit.> and (<ref>), D(𝐰^k+1)-D(𝐰^k) ≤⟨∇ D(𝐰^k),𝐰^k+1-𝐰^k⟩+(1/2)(𝐰^k+1-𝐰^k)^T(Λ_L⊗ I_d)(𝐰^k+1-𝐰^k)=-α^k⟨∇ D(𝐰^k), 𝐲^k⟩+((α^k)^2/2)(𝐲^k)^T(Λ_L⊗ I_d)𝐲^k. Then, consider the following lemma. Suppose M, M̅∈ℝ^n× n are symmetric positive semidefinite and M≼M̅. Then, for any 𝐱∈ℝ^nd and any 𝐲∈ℛ(M⊗ I_d), ⟨𝐱, (M⊗ I_d)𝐱⟩≥⟨ (M⊗ I_d)𝐱, (M̅^†⊗ I_d)(M⊗ I_d)𝐱⟩. Let 𝐱∈ℝ^nd. Then, ⟨𝐱, (M⊗ I_d)𝐱⟩-⟨ (M⊗ I_d)𝐱, (M̅^†⊗ I_d)(M⊗ I_d)𝐱⟩= 𝐱^T[(M-MM̅^†M)⊗ I_d]𝐱. In addition, by the Schur complement condition, M≽ O_n and M̅≽ M imply that the 2n×2n block matrix ([ M M; M M̅ ]) ≽ O_2n, and the inequality above leads to M-MM̅^†M≽ O_n. Combining this with (<ref>), the proof can be completed. From Lemma <ref>, (𝐲^k)^T(Λ_L ⊗ I_d)𝐲^k≤δ⟨∇ D(𝐰^k), 𝐲^k⟩. Combining this with (<ref>) leads to D(𝐰^k+1)-D(𝐰^k) ≤ ((α^k)^2δ/2-α^k)⟨∇ D(𝐰^k), 𝐲^k⟩. This, along with (<ref>), completes the proof.

§.§ Proof of Lemma <ref>

We first consider the following optimization problem: For any ℐ⊆𝒱, ℐ≠∅ and any c∈ℝ^d,

minimize_w_i∈ℝ^d ∀ i∈ℐ ∑_i∈ℐ d_i(w_i) subject to ∑_i∈ℐ w_i=c.

Similar to problem (<ref>), w_i' ∀ i∈ℐ compose an optimum of (<ref>) if and only if for any i,j∈ℐ, ∇ d_i(w_i')=∇ d_j(w_j') <cit.>, or equivalently, x̃_i(w_i')=x̃_j(w_j').
With the above setting, consider the following lemma. Suppose Assumption <ref> and the step-size condition (<ref>) hold. Let 𝐮,𝐯∈ℝ^nd be two feasible solutions of problem (<ref>) such that u_i ∀ i∈ℐ and v_i ∀ i∈ℐ are feasible to problem (<ref>). Suppose ‖x̃_i(v_i)-x̃_j(v_j)‖≤ϵ' ∀ i,j∈ℐ for some ϵ'>0, ∑_i∈ℐ d_i(u_i)≤∑_i∈ℐ d_i(v_i), and D(𝐯)≤ D(𝐰^0), where 𝐰^0∈ S^⊥ is the initial dual iterate of Algorithm <ref>. Then, ‖x̃_i(u_i)-x̃_j(u_j)‖≤4√(LM_0(|ℐ|-1)ϵ'), ∀ i,j∈ℐ, where M_0 is defined in (<ref>). Let 𝐰'=(w_1'^T,…,w_n'^T)^T∈ℝ^nd be such that w_i'∈ℝ^d ∀ i∈ℐ compose an optimal solution to (<ref>) and w_j'=v_j ∀ j∉ℐ. Due to the convexity of each d_i and (<ref>), ∑_i∈ℐ d_i(v_i)-∑_i∈ℐ d_i(w_i')≤∑_i∈ℐ⟨x̃_i(v_i), v_i-w_i'⟩. Let x̅_v:=(1/|ℐ|)∑_i∈ℐx̃_i(v_i). Since w_i' ∀ i∈ℐ and v_i ∀ i∈ℐ are feasible to (<ref>), we have ∑_i∈ℐw_i'=∑_i∈ℐv_i, which gives ∑_i∈ℐ⟨x̃_i(v_i), v_i-w_i'⟩ =∑_i∈ℐ⟨x̃_i(v_i)-x̅_v, v_i-w_i'⟩≤∑_i∈ℐ‖x̃_i(v_i)-x̅_v‖·‖v_i-w_i'‖. Also note that for each i∈ℐ, ‖x̃_i(v_i)-x̅_v‖=‖(1/|ℐ|)∑_j∈ℐ (x̃_i(v_i)-x̃_j(v_j))‖≤((|ℐ|-1)/|ℐ|)ϵ'. Combining the above, ∑_i∈ℐ d_i(v_i)-∑_i∈ℐ d_i(w_i')≤((|ℐ|-1)/|ℐ|)ϵ'∑_i∈ℐ‖v_i-w_i'‖≤ (|ℐ|-1)ϵ'√(∑_i∈ℐ‖v_i-w_i'‖^2). Since ∑_i∈ℐd_i(w_i')≤∑_i∈ℐd_i(v_i) and w_j'=v_j ∀ j∉ℐ, we have D(𝐰')≤ D(𝐯)≤ D(𝐰^0), implying that 𝐰',𝐯∈ S_0(𝐰^0) and that for any optimum 𝐰^⋆ of problem (<ref>), ‖𝐰'-𝐯‖≤‖𝐰'-𝐰^⋆‖+‖𝐯-𝐰^⋆‖≤ 2M_0. This inequality and (<ref>) together yield ∑_i∈ℐ d_i(v_i)-∑_i∈ℐ d_i(w_i')≤ 2M_0(|ℐ|-1)ϵ'. Due to the optimality of w_i' ∀ i∈ℐ with respect to (<ref>), we have ∇ d_i(w_i')=∇ d_j(w_j') ∀ i,j∈ℐ. Also, because of the feasibility of u_i ∀ i∈ℐ, ∑_i∈ℐu_i=∑_i∈ℐw_i'. Therefore, ∑_i∈ℐ⟨∇ d_i(w_i'), u_i-w_i'⟩ =0. This, along with (<ref>), (<ref>), and the inequality d_i(u_i)-d_i(w_i')≥⟨∇ d_i(w_i'), u_i-w_i'⟩ + (1/(2L))‖∇ d_i(w_i')-∇ d_i(u_i)‖^2 <cit.>, implies ∑_i∈ℐ‖x̃_i(u_i)-x̃_i(w_i')‖^2≤ 2L∑_i∈ℐ(d_i(u_i)-d_i(w_i'))≤ 2L∑_i∈ℐ(d_i(v_i)-d_i(w_i'))≤ 4LM_0(|ℐ|-1)ϵ'. Hence, for any i,j∈ℐ, we have ‖x̃_i(u_i)-x̃_j(u_j)‖≤‖x̃_i(u_i)-x̃_i(w_i')‖+‖x̃_j(u_j)-x̃_j(w_j')‖≤4√(LM_0(|ℐ|-1)ϵ'), where the first inequality is from the optimality of w_i' ∀ i∈ℐ and (<ref>).

Next, we define the following: Arbitrarily pick ϵ>0. Due to (<ref>), ∃ T_ϵ≥0 such that ‖x_i^k-x_j^k‖≤ϵ, ∀{i,j}∈ℰ^k, ∀ k≥ T_ϵ. Then, for each i∈𝒱, let 𝒞_i,ϵ^k=∅ ∀ k∈[0,T_ϵ). For each k≥ T_ϵ, let 𝒞_i,ϵ^k= {i}∪{j∈𝒱: there exists a path between i and j in the graph (𝒱,∪_t=T_ϵ^kℰ^t)}⊆𝒱. For each k≥ T_ϵ, observe that in the graph (𝒱,∪_t=T_ϵ^kℰ^t), the subgraph induced by 𝒞_i,ϵ^k is the largest connected component that contains node i. Thus, for any two nodes i and j, i≠ j, 𝒞_i,ϵ^k and 𝒞_j,ϵ^k are either identical or disjoint. Additionally, for every s∈𝒞_i,ϵ^k+1, 𝒞_s,ϵ^k is always contained in 𝒞_i,ϵ^k+1. This implies that the number of distinct sets in the collection {𝒞_i,ϵ^k}_i∈𝒱 is non-increasing with k over [T_ϵ,∞). In particular, from each k to k+1, 𝒞_i,ϵ^k+1 either equals 𝒞_i,ϵ^k or is the union of 𝒞_i,ϵ^k and some other 𝒞_j,ϵ^k's that are disjoint from 𝒞_i,ϵ^k. Also due to Assumption <ref>, there exists K_ϵ∈[T_ϵ,∞) such that 𝒞_i,ϵ^k=𝒱 ∀ i∈𝒱 ∀ k≥ K_ϵ. By means of the 𝒞_i,ϵ^k's and Lemma <ref>, below we show that ∀ i∈𝒱, ∀ k≥ T_ϵ, max_j,ℓ∈𝒞_i,ϵ^k‖x_j^k-x_ℓ^k‖≤Φ_i^k(ϵ). Here, Φ_i^k(ϵ) ∀ i∈𝒱 ∀ k≥ T_ϵ are defined recursively as follows: Initially at k=T_ϵ, Φ_i^k(ϵ)=(|𝒞_i,ϵ^k|-1)ϵ. At each subsequent k≥ T_ϵ+1, Φ_i^k(ϵ)= 4√(LM_0(|𝒞_i,ϵ^k|-1)Φ_i^t^k(ϵ)) if 𝒞_i,ϵ^k=𝒞_i,ϵ^k-1; (1+2Lα̅h̅n)|𝒞_i,ϵ^k|ϵ +∑_s∈𝒞_i,ϵ^kΦ_s^k-1(ϵ) otherwise, where t^k:= max{t∈[T_ϵ,k]:𝒞_i,ϵ^t≠𝒞_i,ϵ^t-1}. Note that 𝒞_i,ϵ^k=𝒞_i,ϵ^t ∀ t∈[t^k,k]. We prove (<ref>) by induction.
At time k=T_ϵ, for each i∈𝒱, if |𝒞_i,ϵ^k|=1, then max_j,ℓ∈𝒞_i,ϵ^k‖x_j^k-x_ℓ^k‖=Φ_i^k(ϵ)=0, i.e., (<ref>) is satisfied; otherwise for any j,ℓ∈𝒞_i,ϵ^k, j≠ℓ, there exists a path of length at most |𝒞_i,ϵ^k|-1 connecting j and ℓ. It follows from (<ref>) that ‖x_j^k-x_ℓ^k‖≤(|𝒞_i,ϵ^k|-1)ϵ=Φ_i^k(ϵ), i.e., (<ref>) also holds. Next, suppose max_j,ℓ∈𝒞_i,ϵ^t‖x_j^t-x_ℓ^t‖≤Φ_i^t(ϵ) ∀ i∈𝒱 ∀ t∈[T_ϵ,k-1] for some k≥ T_ϵ+1. For each i∈𝒱, to show that (<ref>) holds, consider the following two cases.

Case i: 𝒞_i,ϵ^k=𝒞_i,ϵ^k-1. In this case, we have T_ϵ≤ t^k≤ k-1. Also, ∀ t∈[t^k+1,k], ∀ j∈𝒞_i,ϵ^t-1, we have 𝒩_j^t⊆𝒞_i,ϵ^t-1=𝒞_i,ϵ^k. Hence, using the same arguments as the proofs of Proposition <ref> and Lemma <ref>, it can be shown that ∑_s∈𝒞_i,ϵ^kw_s^k=∑_s∈𝒞_i,ϵ^kw_s^k-1=⋯=∑_s∈𝒞_i,ϵ^kw_s^t^k and that ∑_s∈𝒞_i,ϵ^kd_s(w_s^k)≤∑_s∈𝒞_i,ϵ^kd_s(w_s^k-1)≤⋯≤∑_s∈𝒞_i,ϵ^kd_s(w_s^t^k). Let ℐ=𝒞_i,ϵ^k and c=∑_s∈𝒞_i,ϵ^kw_s^t^k in problem (<ref>). It then follows from Lemma <ref> and Lemma <ref> with ϵ'=Φ_i^t^k(ϵ), 𝐮=𝐰^k, and 𝐯=𝐰^t^k that (<ref>) holds.

Case ii: 𝒞_i,ϵ^k≠𝒞_i,ϵ^k-1. Pick any j,ℓ∈𝒞_i,ϵ^k, j≠ℓ and consider the following two subcases.

Subcase ii(a): 𝒞_j,ϵ^k-1=𝒞_ℓ,ϵ^k-1. Then, ‖x_j^k-x_ℓ^k‖≤‖x_j^k-x_j^k-1‖+‖x_j^k-1-x_ℓ^k-1‖+‖x_ℓ^k-1-x_ℓ^k‖≤‖x_j^k-x_j^k-1‖+‖x_ℓ^k-x_ℓ^k-1‖+Φ_j^k-1(ϵ). Also, from (<ref>), Proposition <ref>, (<ref>), and (<ref>), we have ‖x_p^k-x_p^k-1‖≤ L_p‖w_p^k-w_p^k-1‖≤ Lα̅‖∑_q∈𝒩_p^k-1h_pq^k-1(x_p^k-1-x_q^k-1)‖≤ Lα̅h̅∑_q∈𝒩_p^k-1‖x_p^k-1-x_q^k-1‖≤ Lα̅h̅nϵ, ∀ p∈𝒱. Consequently, ‖x_j^k-x_ℓ^k‖≤2Lα̅h̅nϵ+Φ_j^k-1(ϵ).

Subcase ii(b): 𝒞_j,ϵ^k-1∩𝒞_ℓ,ϵ^k-1=∅. Then, there exists a path from j to ℓ belonging to the subgraph induced in the graph (𝒱,∪_t=T_ϵ^kℰ^t) by 𝒞_i,ϵ^k. Along the path are nodes p_1=j,s_1,p_2,s_2,…,p_τ,s_τ=ℓ such that (1) 𝒞_p_r,ϵ^k-1=𝒞_s_r,ϵ^k-1 ∀ r=1,…,τ; (2) 𝒞_p_r,ϵ^k-1 ∀ r∈{1,…,τ} are disjoint from each other; and (3) {s_r,p_r+1}∈ℰ^k ∀ r∈{1,…,τ-1}. Here, τ∈{2,…,|𝒞_i,ϵ^k|} is an integer whose value is no more than the number of distinct sets in the collection {𝒞_s,ϵ^k-1}_s∈𝒞_i,ϵ^k. Hence, ‖x_j^k-x_ℓ^k‖≤‖x_p_1^k-x_s_1^k‖+∑_r=1^τ-1(‖x_s_r^k-x_p_r+1^k‖+‖x_p_r+1^k-x_s_r+1^k‖). For each r=1,…,τ, since p_r,s_r∈𝒞_p_r,ϵ^k-1, we obtain from Subcase ii(a) that ‖x_p_r^k-x_s_r^k‖≤2Lα̅h̅nϵ+Φ_p_r^k-1(ϵ). It then follows from (<ref>) that ‖x_j^k-x_ℓ^k‖≤(τ-1)ϵ+2τ Lα̅h̅nϵ+∑_r=1^τΦ_p_r^k-1(ϵ)≤(1+2Lα̅h̅n)|𝒞_i,ϵ^k|ϵ +∑_s∈𝒞_i,ϵ^kΦ_s^k-1(ϵ).

Combining the above two subcases, we obtain (<ref>). This completes the proof of (<ref>) for all i∈𝒱 and all k≥ T_ϵ. Further, notice that for each i∈𝒱, Φ_i^k(ϵ) is updated only if either 𝒞_i,ϵ^k or 𝒞_i,ϵ^k-1 is changed. Also note that 𝒞_i,ϵ^k can be expanded at most n times and remains unchanged from time K_ϵ on. Therefore, for any k≥ K_ϵ+1, max_i,j∈𝒱‖x_i^k-x_j^k‖=max_i∈𝒱max_j,ℓ∈𝒞_i,ϵ^k‖x_j^k-x_ℓ^k‖≤max_i∈𝒱Φ_i^k(ϵ)≤ O(ϵ^1/2^n), which implies max_i,j∈𝒱‖x_i^k-x_j^k‖→0 as k→∞.

§.§ Proof of Theorem <ref>

Let 𝐰^⋆ be an optimal solution to the dual problem (<ref>). Due to the convexity of D, (<ref>), and Proposition <ref>, D(𝐰^k)-D^⋆≤⟨∇ D(𝐰^k), 𝐰^k-𝐰^⋆⟩=⟨𝐱^k, 𝐰^k-𝐰^⋆⟩≤‖P_S^⊥(𝐱^k)‖·‖𝐰^k-𝐰^⋆‖≤ M_0‖P_S^⊥(𝐱^k)‖, where M_0 is defined in (<ref>). As k→∞, we have shown in the paragraph below Lemma <ref> that ‖P_S^⊥(𝐱^k)‖→ 0. This, along with the above inequality, implies D(𝐰^k)→ D^⋆. In addition, since Assumption <ref> guarantees zero duality gap, we have F(𝐱^k)→ F^⋆. Finally, for any 𝐰∈ S^⊥, due to Corollary <ref>, <cit.>, and (<ref>), D(𝐰)-D^⋆≥⟨∇ D(𝐰^⋆), 𝐰-𝐰^⋆⟩+(1/(2L))‖∇ D(𝐰)-∇ D(𝐰^⋆)‖^2=(1/(2L))‖𝐱̃(𝐰)-𝐱^⋆‖^2, where the last equality is because ∇ D(𝐰^⋆)=𝐱^⋆∈ S and 𝐰, 𝐰^⋆∈ S^⊥.
Thus, because lim_k→∞(D(𝐰^k)-D^⋆)=0 and L>0, ‖𝐱^k-𝐱^⋆‖^2→ 0 as k→∞.

§.§ Proof of Lemma <ref>

Let k∈{0,B,2B,…}. For each {i,j}∈ℰ̃^k, let t_{i,j}^k∈{k,…,k+B-1} be such that {i,j}∈ℰ^t_{i,j}^k. Then, note from Proposition <ref> that ‖∇ d_i(w_i^k)-∇ d_i(w_i^t_{i,j}^k)‖^2 =‖∑_t=k^t_{i,j}^k-1(∇ d_i(w_i^t+1)-∇ d_i(w_i^t))‖^2≤ B∑_t=k^k+B-1‖∇ d_i(w_i^t+1)-∇ d_i(w_i^t)‖^2≤ L_i^2B∑_t=k^k+B-1‖w_i^t+1-w_i^t‖^2. Thus, ∑_{i,j}∈ℰ̃^k(‖∇ d_i(w_i^k)-∇ d_i(w_i^t_{i,j}^k)‖^2+‖∇ d_j(w_j^t_{i,j}^k)-∇ d_j(w_j^k)‖^2)≤B∑_{i,j}∈ℰ̃^k∑_t=k^k+B-1 (L_i^2‖w_i^t+1-w_i^t‖^2+L_j^2‖w_j^t+1-w_j^t‖^2)≤Bϖ̅∑_t=k^k+B-1∑_i∈𝒱L_i^2‖w_i^t+1-w_i^t‖^2≤Bϖ̅α̅^2∑_t=k^k+B-1⟨∇ D(𝐰^t), ((H_𝒢^tΛ_L^2H_𝒢^t)⊗ I_d)∇ D(𝐰^t)⟩. Note that H_𝒢^tΛ_L^2H_𝒢^t≼ LH_𝒢^tΛ_LH_𝒢^t. Also, from (<ref>) and Lemma <ref>, H_𝒢^tΛ_LH_𝒢^t≼δ H_𝒢^t. Hence, ∑_{i,j}∈ℰ̃^k(‖∇ d_i(w_i^k)-∇ d_i(w_i^t_{i,j}^k)‖^2+‖∇ d_j(w_j^t_{i,j}^k)-∇ d_j(w_j^k)‖^2)≤ Bϖ̅α̅^2δ L∑_t=k^k+B-1∇ D(𝐰^t)^T(H_𝒢^t⊗ I_d)∇ D(𝐰^t). In addition, ∑_{i,j}∈ℰ̃^k‖∇ d_i(w_i^t_{i,j}^k)-∇ d_j(w_j^t_{i,j}^k)‖^2 ≤(1/h)∑_t=k^k+B-1∑_{i,j}∈ℰ^t h_ij^t‖∇ d_i(w_i^t)-∇ d_j(w_j^t)‖^2≤(1/h)∑_t=k^k+B-1∇ D(𝐰^t)^T(H_𝒢^t⊗ I_d)∇ D(𝐰^t). It follows from (<ref>) and (<ref>) that ∇ D(𝐰^k)^T(L_𝒢̃^k⊗ I_d)∇ D(𝐰^k)=∑_{i,j}∈ℰ̃^k‖∇ d_i(w_i^k)-∇ d_j(w_j^k)‖^2≤3∑_{i,j}∈ℰ̃^k(‖∇ d_i(w_i^k)-∇ d_i(w_i^t_{i,j}^k)‖^2+‖∇ d_j(w_j^t_{i,j}^k)-∇ d_j(w_j^k)‖^2+‖∇ d_i(w_i^t_{i,j}^k)-∇ d_j(w_j^t_{i,j}^k)‖^2)≤η∑_t=k^k+B-1∇ D(𝐰^t)^T(H_𝒢^t⊗ I_d)∇ D(𝐰^t).

§.§ Proof of Theorem <ref>

Let k≥0. By Lemmas <ref> and <ref>, (D(𝐰^(k+1)B)-D^⋆)-(D(𝐰^kB)-D^⋆)=∑_t=kB^(k+1)B-1(D(𝐰^t+1)-D(𝐰^t))≤-ρ∑_t=kB^(k+1)B-1∇ D(𝐰^t)^T(H_𝒢^t⊗ I_d)∇ D(𝐰^t)≤-(ρ/η)∇ D(𝐰^kB)^T(L_𝒢̃^kB⊗ I_d)∇ D(𝐰^kB)≤-(ρλ/η)‖P_S^⊥(∇ D(𝐰^kB))‖^2, where the last inequality is because 𝒢̃^kB is connected and thus Null(L_𝒢̃^kB⊗ I_d)=S. Also, since 𝒢̃^tB ∀ t=0,1,… are connected, we have λ>0. From Proposition <ref>, we know that 𝐰^kB∈ S^⊥. Also, for any optimal solution 𝐰^⋆ to (<ref>), because 𝐰^⋆∈ S^⊥, we have 𝐰^kB-𝐰^⋆∈ S^⊥. Then, D(𝐰^kB)-D^⋆≤ ⟨∇ D(𝐰^kB), 𝐰^kB-𝐰^⋆⟩=⟨ P_S^⊥(∇ D(𝐰^kB)), 𝐰^kB-𝐰^⋆⟩≤ ‖P_S^⊥(∇ D(𝐰^kB))‖·‖𝐰^kB-𝐰^⋆‖. This, along with (<ref>), gives (D(𝐰^(k+1)B)-D^⋆)-(D(𝐰^kB)-D^⋆)≤-ρλ(D(𝐰^kB)-D^⋆)^2/(ηmin_𝐰^⋆∈ S^⊥:D(𝐰^⋆)=D^⋆‖𝐰^kB-𝐰^⋆‖^2). Finally, using Lemma 6 in <cit.>, we obtain D(𝐰^kB)-D^⋆≤ (D(𝐰^0)-D^⋆)/(1+(ρλ(D(𝐰^0)-D^⋆)/η)∑_t=0^k-1(min_𝐰^⋆∈ S^⊥:D(𝐰^⋆)=D^⋆‖𝐰^tB-𝐰^⋆‖^2)^-1)≤ (D(𝐰^0)-D^⋆)/(1+ρλ(D(𝐰^0)-D^⋆)k/(ηM̃_k^2)). Note that the above inequality is equivalent to (<ref>) since (D(𝐰^k))_k=0^∞ is non-increasing.

§.§ Proof of Theorem <ref>

Let 𝐰∈ S^⊥. Note that ‖P_S^⊥(𝐱̃(𝐰))‖=‖𝐱̃(𝐰)-P_S(𝐱̃(𝐰))‖≤‖𝐱̃(𝐰)-𝐱^⋆‖. Thus, from (<ref>), ‖P_S^⊥(𝐱̃(𝐰))‖≤‖𝐱̃(𝐰)-𝐱^⋆‖≤√(2L(D(𝐰)-D^⋆)). Also note that F(𝐱̃(𝐰))-F^⋆ = ⟨𝐰,𝐱̃(𝐰)⟩-D(𝐰)+D^⋆≤⟨𝐰,𝐱̃(𝐰)⟩=⟨𝐰,P_S^⊥(𝐱̃(𝐰))⟩. On the other hand, for any dual optimum 𝐰^⋆∈ S^⊥, we have -F^⋆=D^⋆≥⟨𝐰^⋆,𝐱̃(𝐰)⟩-F(𝐱̃(𝐰)), which leads to F(𝐱̃(𝐰))-F^⋆≥⟨𝐰^⋆,P_S^⊥(𝐱̃(𝐰))⟩. As a result, -‖𝐰^⋆‖·‖P_S^⊥(𝐱̃(𝐰))‖≤ F(𝐱̃(𝐰))-F^⋆≤‖𝐰‖·‖P_S^⊥(𝐱̃(𝐰))‖. Combining (<ref>) and (<ref>) with Proposition <ref> and Theorem <ref> completes the proof.
[Source: arXiv:1708.07620v2 — Xuyang Wu, Jie Lu, "Fenchel Dual Gradient Methods for Distributed Convex Optimization over Time-varying Networks", math.OC, published 2017-08-25. http://arxiv.org/abs/1708.07620v2]
On Infinitary Gödel logics

Nicholas Pischke
Hoch-Weiseler Str. 46, Butzbach, 35510, Hesse, Germany
[email protected]

December 30, 2023

We study propositional and first-order Gödel logics over infinitary languages which are motivated semantically by corresponding interpretations into the unit interval [0,1]. We provide infinitary Hilbert-style calculi for the particular (propositional and first-order) cases with con-/disjunctions of countable length and prove corresponding completeness theorems by extending the usual Lindenbaum-Tarski construction to the infinitary case, relative to a respective algebraic semantics via complete linear Heyting algebras. We provide infinitary hypersequent calculi and prove corresponding cut-elimination theorems in the Schütte-Tait style. Initial observations are made regarding truth-value sets other than [0,1].

§ INTRODUCTION

Infinitary logics in a classical setting go back to <cit.> and over time became influential in various areas of mathematical logic like (finite) model theory, set theory and also formal arithmetic, among others. Model-theoretically, they pose an interesting challenge since the usual propositional and first-order properties (like e.g. compactness) become more intertwined with set-theoretic principles (like e.g. large cardinal axioms).

We study infinitary extensions of Gödel logics. In the finitary setting, Gödel logics arose historically from a sequence of propositional finite-valued logics given by Gödel <cit.> to show that intuitionistic logic does not have a finite characteristic matrix. These were extended to an infinite-valued variant by Dummett <cit.> and the whole collection is today especially studied in the context of intermediate logics. Further, Gödel logics have been characterized as one of three main instances of t-norm based fuzzy logics by Hájek <cit.>. First-order versions were first described by Horn <cit.> and later rediscovered by Takeuti and Titani <cit.> under the name of intuitionistic fuzzy logics. The infinitary versions studied here assume a similar position among both the infinitary intermediate and infinitary fuzzy logics, and the present work is thus, in that way, also a particular case study of these classes. For that purpose, Gödel logics pose an especially interesting case as, in the intermediate context, they are logics with many classical properties but which, at the same time, are distinct enough from classical or intuitionistic logic to still pose interesting methodical challenges for the adaptation of well-known results (e.g. like interpolation). An example of this phenomenon in the infinitary setting is the work <cit.> by Aguilera, where he studies analogues of the compactness results for classical infinitary logics in a Gödel setting: although the classical results stay true modulo appropriate reformulations, the method of Skolem functions used classically had to be reformulated using a certain theory of fuzzy ultraproducts.

By now, the work <cit.> is the only paper on infinitary Gödel logics and many interesting problems arising from generalizations of the classical case have remained open, like the study of propositional variants, the development of infinitary calculi and appropriate completeness theorems in the propositional and first-order case, as well as infinitary structural proof theory, among others.
We study all these previously named topics (in variable depth) and in particular prove the relevant completeness and cut-elimination theorems (where one naturally restricts to the instances with conjunctions and disjunctions of countable length and finitary quantifiers, like in the classical (see <cit.>) and intuitionistic cases (see <cit.>)). At the end, we consider truth-value sets different than [0,1] and extend the results to some of these cases as well.

§ PROPOSITIONAL INFINITARY GÖDEL LOGICS

§.§ Syntax and Fragments

Let κ be any cardinal number. The infinitary propositional language associated with κ is given by

ℒ_κ: ϕ ::= ⊥ | x | (ϕ→ϕ) | (ϕ∧ϕ) | (ϕ∨ϕ) | ⋀Φ | ⋁Φ

where we have x∈ Var_κ:={x_λ|λ∈κ} and Φ is a set of formulas of size <κ. For the other classical operators, we define
* ¬ϕ:=ϕ→⊥,
* ⊤:=¬⊥,
* ϕ↔ψ:=(ϕ→ψ)∧(ψ→ϕ).
Given some formula ϕ, we denote the set of all subformulas (including ϕ) by sub(ϕ) and the set of variables of ϕ by var(ϕ). Both of these naturally extend to sets Γ. We mostly deal with the special case of κ=ω_1 and in that context, we will mostly write ⋀_i∈ωϕ_i for ⋀{ϕ_i| i∈ω} and ⋁_i∈ωϕ_i for ⋁{ϕ_i| i∈ω} where (ϕ_i)_i∈ω is a countable family of formulas.

Related to that particular instance of κ=ω_1, we will also need the notion of a fragment. These fragments are (possibly countable) sublanguages of ℒ_ω_1 for which a Lindenbaum-Tarski construction is, nevertheless, still possible and they form a cornerstone of the proof of the completeness theorem. While these fragments, in particular the notation ℒ_A, originate from the connection of (classical) infinitary logic with admissible sets in the sense of Barwise <cit.>, we only need and use the following syntactic definition, in similarity to Nadel <cit.> in the context of infinitary intuitionistic logic.

A (distributive) fragment of ℒ_ω_1 is a set ℒ_A⊆ℒ_ω_1 such that
* ⊥∈ℒ_A,
* ϕ∈ℒ_A implies sub(ϕ)⊆ℒ_A,
* ϕ,ψ∈ℒ_A implies ϕ∘ψ∈ℒ_A for ∘∈{∧,∨,→},
* ϕ,⋀_i∈ωϕ_i∈ℒ_A implies ⋀_i∈ω(ϕ→ϕ_i)∈ℒ_A,
* ϕ,⋁_i∈ωϕ_i∈ℒ_A implies ⋀_i∈ω(ϕ_i→ϕ)∈ℒ_A,
* ϕ,⋀_i∈ωϕ_i∈ℒ_A implies ⋀_i∈ω(ϕ∨ϕ_i)∈ℒ_A.

The important kind of fragments will be countable ones. In particular, we will consider the smallest fragments containing some set of formulas. For any Γ⊆ℒ_ω_1, there is a smallest (w.r.t. ⊆) distributive fragment frag(Γ) such that Γ⊆frag(Γ). If Γ is countable, then frag(Γ) is also countable.

§.§ The Standard Semantics and 𝖦_κ

We now introduce the standard semantics for the language ℒ_κ and the resulting logics of semantic consequence. This standard semantics naturally extends the usual finitary standard semantics for propositional Gödel logics. For Gödel logics, being one of the prime examples of many-valued logics, the first important parameter in that context is that of the truth-value set. We fix this to be [0,1] for the major part of the paper and discuss other choices only in part at the end.

An ℒ_κ-Gödel-evaluation is a function v:ℒ_κ→ [0,1] such that
* v(⊥)=0,
* v(ϕ∧ψ)=min{v(ϕ),v(ψ)},
* v(ϕ∨ψ)=max{v(ϕ),v(ψ)},
* v(ϕ→ψ)=v(ϕ)⇒ v(ψ) where x⇒ y:=1 if x≤ y, y otherwise,
* v(⋀Φ)=inf{v(ϕ)|ϕ∈Φ},
* v(⋁Φ)=sup{v(ϕ)|ϕ∈Φ},
for any Φ∪{ϕ,ψ}⊆ℒ_κ.

Given a set of formulas Γ, we write v[Γ]:={v(γ)|γ∈Γ}. The derived notion of semantic consequence is then defined as follows: for Γ∪{ϕ}⊆ℒ_κ, we write Γ⊨_𝖦_κϕ if v[Γ]⊆{1} implies v(ϕ)=1 for any ℒ_κ-Gödel-evaluation v.
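To make the evaluation clauses concrete, the following Python sketch (our illustration, not from the paper; finite lists here approximate the countable ⋀/⋁, whose values are genuine infima/suprema in general) computes v on formulas represented as nested tuples.

```python
# Formulas as nested tuples:
#   ('bot',), ('var', name), ('imp', p, q), ('and', p, q), ('or', p, q),
#   ('And', [p0, p1, ...]), ('Or', [p0, p1, ...]).

def evaluate(phi, v):
    """Goedel value of phi in [0,1] under a variable assignment v."""
    tag = phi[0]
    if tag == 'bot':
        return 0.0
    if tag == 'var':
        return v[phi[1]]
    if tag == 'imp':
        a, b = evaluate(phi[1], v), evaluate(phi[2], v)
        return 1.0 if a <= b else b          # x => y
    if tag == 'and':
        return min(evaluate(phi[1], v), evaluate(phi[2], v))
    if tag == 'or':
        return max(evaluate(phi[1], v), evaluate(phi[2], v))
    if tag == 'And':
        return min(evaluate(p, v) for p in phi[1])   # inf over the set
    if tag == 'Or':
        return max(evaluate(p, v) for p in phi[1])   # sup over the set
    raise ValueError(tag)

# Example: the prelinearity scheme (p -> q) v (q -> p) evaluates to 1
# under every assignment, as the linearity of [0,1] dictates.
phi = ('or', ('imp', ('var', 'p'), ('var', 'q')),
             ('imp', ('var', 'q'), ('var', 'p')))
print(evaluate(phi, {'p': 0.3, 'q': 0.7}))  # 1.0
```

Over genuinely countable conjunctions and disjunctions the min/max become inf/sup and need not be attained, a feature the finite approximation above cannot exhibit.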
We call the set of consequences Γ⊨_𝖦_κϕ the κ-infinitary Gödel logic and denote it by 𝖦_κ.

For the particular case of 𝖦_ω_1, a main part of the paper is devoted to the study of various proof-theoretic formalisms for capturing that semantic consequence and we thus continue by introducing the relevant Hilbert-style calculus used later in a corresponding completeness proof.

§.§ A Proof Calculus for κ=ω_1

The proof calculus for 𝖦_ω_1 which we introduce, denoted by 𝒢_ω_1, is a straightforward combination of a proof calculus for propositional infinitary intuitionistic logic with the (pre-)linearity scheme

(ϕ→ψ)∨(ψ→ϕ).

To be concrete, we consider the following system of axioms and rules:

§.§ The Calculus 𝒢_ω_1

(IL) a complete set of axioms for propositional intuitionistic logic;[Naturally, we expect the set of axioms to be defined using the connectives ∧, ∨, →, ⊥ and to only require modus ponens as an inference rule. For a particular choice, take the schemes ∧-Ax, ∨-Ax, →-Ax, ⊥-Ax from <cit.>.]

(GL) (ϕ→ψ)∨(ψ→ϕ);

(⋁ω) ϕ_j→⋁_i∈ωϕ_i, (j∈ω);

(⋀ω) ⋀_i∈ωϕ_i→ϕ_j, (j∈ω);

(MP) from ϕ→ψ and ϕ, infer ψ;

(Rω)_1 from ϕ_i→ψ for all i∈ω, infer ⋁_i∈ωϕ_i→ψ;

(Rω)_2 from ϕ→ψ_i for all i∈ω, infer ϕ→⋀_i∈ωψ_i.

Further, we consider the extension 𝒢^D_ω_1 which extends the above calculus by the axiom scheme

(D) ⋀_i∈ω(ϕ∨ψ_i)→(ϕ∨⋀_i∈ωψ_i)

expressing the distributivity of the infinitary operations.

A proof in 𝒢_ω_1 (or 𝒢^D_ω_1) of some ϕ from some assumptions Γ is any function f:α+1→ℒ_ω_1 where α<ω_1 as well as f(α)=ϕ and such that any f(β) is either
* an instance of an axiom scheme,
* an element of Γ,
* the result of (MP) or (Rω)_1, (Rω)_2 with assumptions f(γ) where γ<β.
We write Γ⊢_𝒢_ω_1ϕ (or Γ⊢_𝒢^D_ω_1ϕ, respectively) if there is such a proof.

Relative to some fragment ℒ_A as defined above, we also introduce a restricted notion of derivation: for Γ∪{ϕ}⊆ℒ_A, we write Γ⊢_𝒢_ω_1(ℒ_A)ϕ (or Γ⊢_𝒢^D_ω_1(ℒ_A)ϕ, respectively) if there is a proof f with img(f)⊆ℒ_A. Note that 𝒢_ω_1(ℒ_A), and thus also 𝒢^D_ω_1(ℒ_A), have the classical Deduction Theorem.

For any Γ∪{ϕ,ψ}⊆ℒ_A, we have Γ∪{ϕ}⊢_𝒢_ω_1(ℒ_A)ψ iff Γ⊢_𝒢_ω_1(ℒ_A)ϕ→ψ. The same holds for 𝒢^D_ω_1(ℒ_A).

§ FIRST-ORDER INFINITARY GÖDEL LOGICS

For the first-order variant, we assume a standard underlying first-order signature σ consisting of any number of predicate symbols P and function symbols f. For any such given symbol, we write ar(P) or ar(f), respectively, for its arity (which is assumed to be finite). We construct an infinitary language corresponding to cardinals κ≥λ as it is usually done classically as well: assume a set of variables of size κ, given by Var_κ as before, and in that context denote the set of terms over σ and Var_κ by 𝒯_κ(σ). The infinitary language over σ associated with κ,λ is then given by

ℒ_κ,λ(σ): ϕ ::= ⊥ | P(t_1,…,t_n) | (ϕ→ϕ) | (ϕ∧ϕ) | (ϕ∨ϕ) | ⋀Φ | ⋁Φ | ∃Xϕ | ∀Xϕ

where X⊆Var_κ is a set of size <λ, P∈σ is a predicate symbol with ar(P)=n, t_1,…,t_n∈𝒯_κ(σ) and Φ is a set of formulas of size <κ. We drop the σ if the context is clear or if the choice is arbitrary.

We write free(ϕ) for the set of free variables of ϕ and var(ϕ) for the set of variables of ϕ, free or bound. As before, we write sub(ϕ) for the set of subformulas of ϕ. Further, we write σ(ϕ) for the set of function and predicate symbols occurring in ϕ. These notions straightforwardly extend to sets Γ and we use the same notation there.

Again, the countable case ℒ_ω_1,ω with finite quantifiers will be of particular interest, especially in the context of a completeness theorem, later on.
In that case, we consider the existential and universal quantifiers to just quantify one variable at a time and write ∃xϕ or ∀xϕ for x∈Var_ω_1 as usual in that case. Further, we again write ⋀_i∈ωϕ_i for ⋀{ϕ_i | i∈ω} and ⋁_i∈ωϕ_i for ⋁{ϕ_i | i∈ω} where (ϕ_i)_i∈ω is a countable family of formulas.

On ℒ_κ,λ, we denote the simultaneous substitution of terms t=(t_1,…,t_n) for free variables x=(x_i_1,…,x_i_n) with i_j≠i_k for j≠k in a term t by t[t/x] and by ϕ[t/x] for formulas ϕ. Here, we assume that quantifiers are treated by renaming the quantified variable in the sense of

(Qxϕ)[t/x]:=Qzϕ[(t',z)/(x',x)]

where Q∈{∀,∃} and x' is x with x removed (if it occurs), t' is t with t_j removed when x_i_j=x and z is fresh, i.e. does not occur in ϕ or t.

Similarly to the propositional case, we consider a notion of fragments for ℒ_ω_1,ω by extending the previous properties appropriately to allow for a Lindenbaum-Tarski construction over these fragments also in the first-order case later on.

A (distributive) fragment of ℒ_ω_1,ω is a set ℒ_A⊆ℒ_ω_1,ω together with a set Var_A⊆Var_ω_1 and a signature σ_A such that 𝒯_A is the set of terms of σ_A using Var_A and
* ⊥∈ℒ_A,
* P(t_1,…,t_n)∈ℒ_A for n-ary P∈σ_A and t_i∈𝒯_A,
* ϕ∈ℒ_A implies sub(ϕ)⊆ℒ_A, var(ϕ)⊆Var_A and σ(ϕ)⊆σ_A,
* ϕ,ψ∈ℒ_A implies ϕ∘ψ∈ℒ_A for ∘∈{∧,∨,→} and ∃xϕ,∀xϕ∈ℒ_A for x∈Var_A,
* ϕ∈ℒ_A, t∈𝒯_A implies ϕ[t/x]∈ℒ_A, t[t/x]∈𝒯_A for any t∈(𝒯_A)^n, any x=(x_i_1,…,x_i_n)∈(Var_A)^n with i_j≠i_k for j≠k,
* ϕ,⋀_i∈ωϕ_i∈ℒ_A implies ⋀_i∈ω(ϕ→ϕ_i)∈ℒ_A,
* ϕ,⋁_i∈ωϕ_i∈ℒ_A implies ⋀_i∈ω(ϕ_i→ϕ)∈ℒ_A,
* ϕ,⋀_i∈ωϕ_i∈ℒ_A implies ⋀_i∈ω(ϕ∨ϕ_i)∈ℒ_A.

It is additionally assumed that fragments are “saturated" when it comes to variables, in the sense that there are enough variables to find fresh ones given a finite selection of formulas from ℒ_A. More precisely, we want that for any ϕ_1,…,ϕ_n∈ℒ_A, there is a variable y∉var(ϕ_1)∪…∪var(ϕ_n).

For any Γ⊆ℒ_ω_1,ω, there is a smallest (w.r.t. ⊆) distributive fragment frag(Γ) such that Γ⊆frag(Γ). If Γ is countable, then frag(Γ) is also countable.

Note that, for countable Γ, one can even find a countable fragment ℒ_A⊇Γ with a countable Y⊆Var_A such that var(ϕ)∩Y is finite for any ϕ∈ℒ_A. So in this case, the saturation of variables is directly satisfied.

§.§ The Standard Semantics and 𝖦_κ,λ

The standard semantics of infinitary first-order Gödel logics which we want to consider is, like in the propositional case, a straightforward extension of the usual finitary case. Also here, we initially focus on the full real unit interval [0,1] as the corresponding truth-value set.

An ℒ_κ,λ(σ)-Gödel-model is a structure 𝔐 which consists of
* a non-empty set M,
* P^𝔐:M^n→[0,1] for every n-ary predicate P of σ,
* f^𝔐:M^n→M for every n-ary function f of σ.
An ℒ_κ,λ(σ)-Gödel-interpretation is a structure ℑ=(𝔐,v) composed of an ℒ_κ,λ(σ)-Gödel-model 𝔐 together with a function v:Var_κ→M.

Over such an interpretation ℑ, one naturally defines the value t^ℑ of some term t of 𝒯_κ. Further, we define

v[f/X](x):=f(x) if x∈X, v(x) otherwise,

for X⊆Var_κ and functions f:X→M. We write v[m/x] for the case of X={x} and f(x)=m and also introduce a special notation for finite tuples with

v[m/x]:=(…(v[m_1/x_1])…)[m_n/x_n]

where m=(m_1,…,m_n)∈M^n and x=(x_i_1,…,x_i_n)∈(Var_κ)^n. We write

ℑ[f/X]:=(𝔐,v[f/X])

and similarly for singletons and tuples. We also allow empty sets/tuples m, x and set v[m/x]:=v in this case.
By recursion on ℒ_κ,λ, we construct the evaluation ℑ:ℒ_κ,λ→[0,1] associated with ℑ:
* ℑ(⊥):=0;
* ℑ(P(t_1,…,t_n)):=P^𝔐(t_1^ℑ,…,t_n^ℑ) for n-ary P;
* ℑ(ϕ∧ψ):=min{ℑ(ϕ),ℑ(ψ)};
* ℑ(ϕ∨ψ):=max{ℑ(ϕ),ℑ(ψ)};
* ℑ(ϕ→ψ):=ℑ(ϕ)⇒ℑ(ψ);
* ℑ(⋀Φ):=inf{ℑ(ϕ) | ϕ∈Φ};
* ℑ(⋁Φ):=sup{ℑ(ϕ) | ϕ∈Φ};
* ℑ(∀Xϕ):=inf{ℑ[f/X](ϕ) | f:X→M};
* ℑ(∃Xϕ):=sup{ℑ[f/X](ϕ) | f:X→M}.

As before, one immediately derives a notion of semantical consequence from the model/interpretation construction and their corresponding evaluations: for Γ∪{ϕ}⊆ℒ_κ,λ, we write Γ⊨_𝖦_κ,λϕ if ℑ[Γ]⊆{1} implies ℑ(ϕ)=1 for any ℒ_κ,λ-Gödel-interpretation ℑ. We again define the κ,λ-infinitary Gödel logic to be the set of consequences Γ⊨_𝖦_κ,λϕ and in general denote it by 𝖦_κ,λ.

§.§ A Proof Calculus

The following proof calculus 𝒢_ω_1,ω is obtained by extending the previous proof calculus for the propositional case with appropriate axioms and rules for the quantifiers.

§.§ The Calculus 𝒢_ω_1,ω

(IL) a complete set of axiom schemes for propositional intuitionistic logic, in the first-order language;[The same remark as in Section 2.3 applies, in fact one can again just take the schemes ∧-Ax, ∨-Ax, →-Ax, ⊥-Ax from <cit.>, now in the first-order language.]

(GL) (ϕ→ψ)∨(ψ→ϕ);

(⋁ω) ϕ_j→⋁_i∈ωϕ_i, (j∈ω);

(⋀ω) ⋀_i∈ωϕ_i→ϕ_j, (j∈ω);

(∀E) ∀xϕ→ϕ[t/x];

(∃E) ϕ[t/x]→∃xϕ;

(MP) from ϕ→ψ and ϕ, infer ψ;

(Rω)_1 from ϕ_i→ψ for i∈ω, infer ⋁_i∈ωϕ_i→ψ;

(Rω)_2 from ϕ→ψ_i for i∈ω, infer ϕ→⋀_i∈ωψ_i;

(∀I) from ψ→ϕ, infer ψ→∀xϕ where x∉free(ψ);

(∃I) from ϕ→ψ, infer ∃xϕ→ψ where x∉free(ψ).

As before, we consider an extension 𝒢^D_ω_1,ω obtained by adding the scheme

(D) ⋀_i∈ω(ϕ∨ψ_i)→(ϕ∨⋀_i∈ωψ_i)

and now additionally also the axiom scheme

(QS) ∀x(ψ∨ϕ)→(ψ∨∀xϕ) where x∉free(ψ).

The notion of proof immediately transfers to this setting from the propositional case. We write Γ⊢_𝒢_ω_1,ωϕ (or Γ⊢_𝒢^D_ω_1,ωϕ, respectively) if there is such a proof. We define restrictions 𝒢_ω_1,ω(ℒ_A) (or 𝒢^D_ω_1,ω(ℒ_A)) to some fragment ℒ_A of ℒ_ω_1,ω as before. Note that also both 𝒢_ω_1,ω(ℒ_A) and 𝒢^D_ω_1,ω(ℒ_A) have the classical Deduction Theorem.

For any Γ∪{ϕ,ψ}⊆ℒ_A and ϕ closed, we have Γ∪{ϕ}⊢_𝒢_ω_1,ω(ℒ_A)ψ iff Γ⊢_𝒢_ω_1,ω(ℒ_A)ϕ→ψ. The same holds for 𝒢^D_ω_1,ω(ℒ_A).

§ L-ALGEBRAS, CHAINS AND ALGEBRAIC SEMANTICS

We follow a similar route to semantic completeness as in the setting of finitary Gödel logics (see in particular <cit.>): we first establish completeness w.r.t. a class of algebras and then construct embeddings from that class into the relevant structures of the intended interpretation. More precisely, we first show completeness w.r.t. linearly ordered and sufficiently complete Heyting algebras over countable fragments and then extend this to the Heyting algebra of the real unit interval by embeddings, similar to <cit.>. This approach not only offers a high degree of modularity but also establishes linear Heyting algebras (with sufficient completeness) as the algebraic semantics for infinitary Gödel logics, in analogy to the finitary case.
Once we have established the result with respect to countable fragments, this assumption can be removed over complete algebras like the unit interval.

For that, we need various notions from the theory of Heyting algebras and the next subsection gives, for reasons of self-containedness, a quite detailed account mostly following <cit.> (up to some notation change).

§.§ Heyting algebras and related notions

A Heyting algebra is a structure 𝐀=⟨A,∧^𝐀,∨^𝐀,→^𝐀,0^𝐀,1^𝐀⟩ such that ⟨A,∧^𝐀,∨^𝐀,0^𝐀,1^𝐀⟩ is a bounded lattice with largest element 1^𝐀 and smallest element 0^𝐀 and →^𝐀 is a binary operation with
* x→^𝐀x=1^𝐀,
* x∧^𝐀(x→^𝐀y)=x∧^𝐀y,
* y∧^𝐀(x→^𝐀y)=y,
* x→^𝐀(y∧^𝐀z)=(x→^𝐀y)∧^𝐀(x→^𝐀z),
where we write a≤^𝐀b for a∧^𝐀b=a and ¬^𝐀x:=x→^𝐀0^𝐀. Joins (suprema) and meets (infima) of subsets X are defined as usual and denoted by ⋁^𝐀X and ⋀^𝐀X, respectively. If every subset has a join and meet, 𝐀 is called complete. An existing meet ⋀^𝐀X is called distributive if

⋀^𝐀_x∈X(y∨^𝐀x)=y∨^𝐀⋀^𝐀X

for any y∈𝐀 and 𝐀 is called distributive if every meet is distributive.

Two particular types of Heyting algebras, which we will consider in this note are chains, i.e. Heyting algebras where ≤^𝐀 is linear, and L-algebras, i.e. Heyting algebras where (x→^𝐀y)∨^𝐀(y→^𝐀x)=1^𝐀 for all x,y∈𝐀. We denote the class of all L-algebras by 𝖫, the class of all distributive L-algebras by 𝖣𝖫 and the class of all chains by 𝖢. We write 𝖢𝖢 for the class of countable chains. Naturally, every chain is a distributive L-algebra.

Further, we will need the notion of a filter. A set F⊆𝐀 is a filter for a Heyting algebra 𝐀, if (1) 1^𝐀∈F, (2) x,y∈F implies x∧^𝐀y∈F and (3) x∈F and x≤^𝐀y imply y∈F. F is called proper if F⊊𝐀 and F is called a prime filter if it is proper and if x∨^𝐀y∈F implies x∈F or y∈F. We then can “filter" a Heyting algebra 𝐀 via F: define x≤_Fy if x→^𝐀y∈F for x,y∈𝐀 and x≡_Fy if x≤_Fy and y≤_Fx. Then ≡_F is a congruence relation on Heyting algebras and thus defines a quotient Heyting algebra 𝐀/F over the set of equivalence classes [a]_F of elements a of 𝐀 over ≡_F. In particular, “filtering" with a prime filter in L-algebras yields a chain:

If F is a prime filter of some L-algebra 𝐀, then 𝐀/F is a chain.

The behavior of meets and joins under quotients will be of particular importance later on. For that, we first note the following:

Let 𝐀 be a Heyting algebra. If ⋀^𝐀X and ⋁^𝐀Y exist in 𝐀, then

⋀^𝐀_x∈X(z→^𝐀x)=z→^𝐀⋀^𝐀X and ⋀^𝐀_y∈Y(y→^𝐀z)=⋁^𝐀Y→^𝐀z

for any z∈𝐀.

A filter F of a Heyting algebra 𝐀 is said to preserve an existing meet a=⋀^𝐀X if a∈F if, and only if, x∈F for all x∈X. Further, a homomorphism h:𝐀→𝐁 of Heyting algebras is said to preserve a meet ⋀^𝐀X, or a join ⋁^𝐀Y, if

h(⋀^𝐀X)=⋀^𝐁h[X] or h(⋁^𝐀Y)=⋁^𝐁h[Y],

respectively.

Suppose ⋀^𝐀X and ⋁^𝐀Y exist in 𝐀 and F is a filter of 𝐀 which preserves ⋀^𝐀_x∈X(z→^𝐀x) and ⋀^𝐀_y∈Y(y→^𝐀z) for any z∈𝐀. Then

[⋀^𝐀X]_F=⋀^𝐀/F_x∈X[x]_F and [⋁^𝐀Y]_F=⋁^𝐀/F_y∈Y[y]_F.

Therefore, the canonical map x↦[x]_F from 𝐀 into 𝐀/F is a Heyting algebra homomorphism which preserves the respective meet and join.

Let 𝐀 be a Heyting algebra and let x_n=⋀^𝐀X_n be a sequence of distributive meets in 𝐀. If x,y∈𝐀 with x≰^𝐀y, then there is a prime filter F such that x∈F, y∉F and such that F preserves the meets given by x_n.

By [0,1]_ℚ and [0,1]_ℝ, we denote the Heyting algebras of all rationals in the unit interval and of the whole unit interval, respectively.

Let 𝐀 be a countable chain. Then there is an embedding, i.e. an injective homomorphism of Heyting algebras q:𝐀→[0,1]_ℚ which preserves all meets and joins of 𝐀.
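As a small computational sanity check of the notions just introduced (again an illustration of ours, with names chosen by us), the following Python snippet equips a finite chain with the Heyting operations induced by its order, verifies the four axioms for →^𝐀 together with the L-algebra identity (x→^𝐀y)∨^𝐀(y→^𝐀x)=1^𝐀, and confirms that meets in a chain are distributive in the above sense:

    from itertools import product
    from fractions import Fraction

    chain = [Fraction(k, 4) for k in range(5)]   # the five-element chain 0, 1/4, ..., 1
    top = chain[-1]

    def imp(x, y):
        return top if x <= y else y              # the relative pseudo-complement in a chain

    def meet(x, y):
        return min(x, y)

    def join(x, y):
        return max(x, y)

    for x, y, z in product(chain, repeat=3):
        assert imp(x, x) == top
        assert meet(x, imp(x, y)) == meet(x, y)
        assert meet(y, imp(x, y)) == y
        assert imp(x, meet(y, z)) == meet(imp(x, y), imp(x, z))
        assert join(imp(x, y), imp(y, x)) == top  # the L-algebra identity

    # every existing meet in a chain is distributive:
    X = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
    for y in chain:
        assert join(y, min(X)) == min(join(y, x) for x in X)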
§.§ Algebraic Propositional Evaluations for ℒ_ω_1

In this section, we now introduce the actual algebraic generalizations of the ℒ_ω_1-Gödel-evaluations, broadening the domains to fragments and the range to certain Heyting algebras which may be, in a particular way, incomplete. This will be necessary in the approach to completeness chosen here since the Lindenbaum-Tarski algebras later constructed are, in fact, incomplete. Let ℒ_A be an arbitrary fragment of ℒ_ω_1 and 𝐀 be a Heyting algebra.

A function v:ℒ_A→𝐀 is an (𝐀-valued) ℒ_A-evaluation if
* v(⊥)=0^𝐀,
* v(ϕ∘ψ)=v(ϕ)∘^𝐀v(ψ) for ∘∈{→,∧,∨},
* for any ⋀_i∈ωϕ_i,⋁_i∈ωψ_i∈ℒ_A, we have

v(⋀_i∈ωϕ_i)=⋀^𝐀_i∈ωv(ϕ_i) and v(⋁_i∈ωψ_i)=⋁^𝐀_i∈ωv(ψ_i)

such that the corresponding infima/suprema exist.

Given such an evaluation v, we still write v[Γ]:={v(γ) | γ∈Γ} for sets Γ⊆ℒ_A and (𝐀,v)⊨ϕ for v(ϕ)=1^𝐀. We denote the set of all 𝐀-valued ℒ_A-evaluations by 𝖤𝗏(ℒ_A;𝐀). Using this notion of ℒ_A-evaluations, there is now a natural notion of semantical entailment: let 𝖢𝗅 be a class of Heyting algebras and let Γ∪{ϕ}⊆ℒ_A. We write Γ⊨_𝖢𝗅(ℒ_A)ϕ if

∀𝐀∈𝖢𝗅∀v∈𝖤𝗏(ℒ_A;𝐀)(v[Γ]⊆{1^𝐀} implies v(ϕ)=1^𝐀).

We abbreviate Γ⊨_𝖢𝗅(ℒ_ω_1)ϕ by Γ⊨_𝖢𝗅ϕ. By transfinite induction on the length of the proof, one quickly verifies the following soundness result:

For any Γ∪{ϕ}⊆ℒ_A:
* Γ⊢_𝒢_ω_1(ℒ_A)ϕ implies Γ⊨_𝖫(ℒ_A)ϕ,
* Γ⊢_𝒢^D_ω_1(ℒ_A)ϕ implies Γ⊨_𝖣𝖫(ℒ_A)ϕ.

To form new ℒ_A-evaluations by composition with homomorphisms of Heyting algebras, we now have to additionally require that the existing meets and joins arising from the infinitary connectives are preserved. This is captured in the following lemma.

Let v:ℒ_A→𝐀 be an evaluation of ℒ_A in 𝐀 and let h:𝐀→𝐁 be a Heyting algebra homomorphism which preserves all the meets and joins

⋀^𝐀_i∈ωv(ϕ_i) and ⋁^𝐀_i∈ωv(ψ_i)

for any ⋀_i∈ωϕ_i∈ℒ_A and ⋁_i∈ωψ_i∈ℒ_A. Then h∘v is an evaluation of ℒ_A in 𝐁.

The proof is rather immediate and thus omitted.

§.§ Algebraic First-Order Interpretations for ℒ_ω_1,ω

For a similar motivation as in the propositional case, we are also led to broadening the definitions of models and interpretations for ℒ_ω_1,ω to both arbitrary fragments as domains and Heyting algebras as ranges. For this, let ℒ_A now be a fragment of ℒ_ω_1,ω and 𝐀 again be a Heyting algebra.

An ℒ_A-model is a structure 𝔐 which consists of
* a Heyting algebra 𝐀,
* a non-empty set M,
* P^𝔐:M^n→𝐀 for every n-ary predicate P of σ_A,
* f^𝔐:M^n→M for every n-ary function f of σ_A.
An ℒ_A-interpretation is a structure ℑ=(𝔐,v) where 𝔐 is an ℒ_A-model and v:Var_A→M.

As before, over some ℒ_A-interpretation ℑ, one naturally defines the value of some term t of 𝒯_A which we still denote by t^ℑ. v[m/x], v[m/x] and the resulting ℑ[m/x] are defined (over Var_A) in the same way as with the standard semantics.

Such a model 𝔐 is called (ℒ_A-)suitable w.r.t. a variable assignment v:Var_A→M if for any m∈M^n, any x∈(Var_A)^n, corresponding to the interpretation ℑ=(𝔐,v[m/x]) there is a function ℑ:ℒ_A→𝐀 such that
* ℑ(⊥)=0^𝐀,
* ℑ(P(t_1,…,t_n))=P^𝔐(t_1^ℑ,…,t_n^ℑ) for n-ary P,
* ℑ(ϕ∘ψ)=ℑ(ϕ)∘^𝐀ℑ(ψ) for ∘∈{∧,∨,→},
* for any ⋀_i∈ωϕ_i,⋁_i∈ωψ_i∈ℒ_A, we have

ℑ(⋀_i∈ωϕ_i)=⋀^𝐀_i∈ωℑ(ϕ_i) and ℑ(⋁_i∈ωψ_i)=⋁^𝐀_i∈ωℑ(ψ_i)

such that the corresponding infima/suprema exist,
* for any ϕ∈ℒ_A and any x∈Var_A, we have

ℑ(∀xϕ)=⋀^𝐀_m∈Mℑ[m/x](ϕ) and ℑ(∃xϕ)=⋁^𝐀_m∈Mℑ[m/x](ϕ)

such that the corresponding infima/suprema exist.

Note that it actually suffices to establish the existence of such an extension only for m and x=(x_i_1,…,x_i_n) where i_j≠i_k for j≠k. We still write ℑ[Γ]:={ℑ(γ) | γ∈Γ} for sets Γ⊆ℒ_A and ℑ⊨ϕ if ℑ(ϕ)=1^𝐀.
We denote the class of all models over 𝐀 by 𝖬𝗈𝖽(ℒ_A;𝐀) and if 𝔐 is a model, we write 𝖨𝗇𝗍(ℒ_A;𝔐) for the set of all corresponding interpretations (𝔐,v) such that 𝔐 is suitable for v. The derived notion of semantic consequence is then given by the following: let 𝖢𝗅 be a class of Heyting algebras and let Γ∪{ϕ}⊆ℒ_A. We write Γ⊨_𝖢𝗅(ℒ_A)ϕ if

∀𝐀∈𝖢𝗅∀𝔐∈𝖬𝗈𝖽(ℒ_A;𝐀)∀ℑ∈𝖨𝗇𝗍(ℒ_A;𝔐)(ℑ[Γ]⊆{1^𝐀} implies ℑ(ϕ)=1^𝐀).

It is again straightforward to verify the following soundness result.

For any Γ∪{ϕ}⊆ℒ_A where all formulas from Γ are closed:
* Γ⊢_𝒢_ω_1,ω(ℒ_A)ϕ implies Γ⊨_𝖫(ℒ_A)ϕ,
* Γ⊢_𝒢^D_ω_1,ω(ℒ_A)ϕ implies Γ⊨_𝖣𝖫(ℒ_A)ϕ.

As before, we can form new interpretations by composition with Heyting algebra homomorphisms as long as they respect the needed meets and joins.

Let 𝔐 be a model over some Heyting algebra 𝐀 which is suitable w.r.t. v:Var_A→M and let h:𝐀→𝐁 be a Heyting algebra homomorphism which preserves all the meets and joins

⋀^𝐀_i∈ωℑ(ϕ_i) and ⋁^𝐀_i∈ωℑ(ψ_i)

for any ⋀_i∈ωϕ_i∈ℒ_A and ⋁_i∈ωψ_i∈ℒ_A as well as

⋀^𝐀_m∈Mℑ[m/x](ϕ) and ⋁^𝐀_m∈Mℑ[m/x](ϕ)

for all ϕ∈ℒ_A and x∈Var_A and for any interpretation ℑ=(𝔐,v[m/x]) over 𝔐. Then the model h∘𝔐 defined over the same domain with f^h∘𝔐:=f^𝔐 for any function symbol f and P^h∘𝔐:=h∘P^𝔐 for predicate symbols P is a suitable model w.r.t. v and for any interpretation ℑ=(𝔐,v[m/x]), we have

h∘ℑ(ϕ)=h(ℑ(ϕ))

for all ϕ∈ℒ_A where h∘ℑ=(h∘𝔐,v[m/x]).

§ A PROPOSITIONAL AND FIRST-ORDER COMPLETENESS THEOREM

We fix a fragment ℒ_A of either ℒ_ω_1 or ℒ_ω_1,ω and all notions, if not explicitly indicated, are to be understood relative to that fragment. Let 𝒢^(D)(ℒ_A) be either 𝒢_ω_1(ℒ_A) or 𝒢^D_ω_1(ℒ_A) in the propositional case or either 𝒢_ω_1,ω(ℒ_A) or 𝒢^D_ω_1,ω(ℒ_A) in the first-order case.

We construct, as it is usually done, the Lindenbaum-Tarski algebra of 𝒢^(D)(ℒ_A): given Γ∪{ϕ,ψ}⊆ℒ_A, we write ϕ≼^Γψ if Γ⊢_𝒢^(D)(ℒ_A)ϕ→ψ and write ϕ≡^Γψ if ϕ≼^Γψ and ψ≼^Γϕ. We write [ϕ]^Γ for the equivalence class of ϕ under ≡^Γ and ℒ_A/≡^Γ for the set of all equivalence classes. In the following, we even omit the Γ when the context is clear. The Lindenbaum-Tarski algebra 𝐋𝐓^Γ is defined as

𝐋𝐓^Γ:=⟨ℒ_A/≡^Γ,∧^𝐋𝐓,∨^𝐋𝐓,→^𝐋𝐓,0^𝐋𝐓,1^𝐋𝐓⟩

where we define
* [ϕ]∧^𝐋𝐓[ψ]:=[ϕ∧ψ],
* [ϕ]∨^𝐋𝐓[ψ]:=[ϕ∨ψ],
* [ϕ]→^𝐋𝐓[ψ]:=[ϕ→ψ],
* 0^𝐋𝐓:=[⊥],
* 1^𝐋𝐓:=[⊤].
Further, the order induced on 𝐋𝐓^Γ is given by [ϕ]≤^𝐋𝐓[ψ] iff ϕ≼^Γψ.

𝐋𝐓^Γ is a well-defined L-algebra with

⋀^𝐋𝐓_i∈ω[ϕ_i]=[⋀_i∈ωϕ_i] and ⋁^𝐋𝐓_i∈ω[ψ_i]=[⋁_i∈ωψ_i]

for ⋀_i∈ωϕ_i,⋁_i∈ωψ_i∈ℒ_A. In the case of 𝒢^D(ℒ_A), the meets

[χ]→^𝐋𝐓[⋀_i∈ωϕ_i]=⋀^𝐋𝐓_i∈ω([χ]→^𝐋𝐓[ϕ_i]) and [⋁_i∈ωψ_i]→^𝐋𝐓[χ]=⋀^𝐋𝐓_i∈ω([ψ_i]→^𝐋𝐓[χ])

are distributive for every additional χ∈ℒ_A. Further, we have

[∀xϕ]=⋀^𝐋𝐓_t∈𝒯_A[ϕ[t/x]] and [∃xϕ]=⋁^𝐋𝐓_t∈𝒯_A[ϕ[t/x]]

for ϕ∈ℒ_A and x∈Var_A in the first-order case and in the case of 𝒢^D_ω_1,ω(ℒ_A), the meets

[χ]→^𝐋𝐓[∀xϕ]=⋀^𝐋𝐓_t∈𝒯_A([χ]→^𝐋𝐓[ϕ[t/x]]) and [∃xϕ]→^𝐋𝐓[χ]=⋀^𝐋𝐓_t∈𝒯_A([ϕ[t/x]]→^𝐋𝐓[χ])

are distributive for every additional χ∈ℒ_A.

We skip the finitary operations. Using the axiom scheme (GL), it is easy to see that 𝐋𝐓^Γ is a well-defined L-algebra. We only show the infinitary claims. For that, let ⋀_i∈ωϕ_i∈ℒ_A and ⋁_i∈ωψ_i∈ℒ_A. Then, we have

⊢_𝒢^(D)(ℒ_A)⋀_i∈ωϕ_i→ϕ_j and ⊢_𝒢^(D)(ℒ_A)ψ_j→⋁_i∈ωψ_i

for any j by the axioms (⋀ω) and (⋁ω) which gives

[⋀_i∈ωϕ_i]≤^𝐋𝐓[ϕ_j] and [ψ_j]≤^𝐋𝐓[⋁_i∈ωψ_i]

for any j. Suppose that [χ]≤^𝐋𝐓[ϕ_j] for any j and [ψ_j]≤^𝐋𝐓[χ'] for any j. Therefore, we have

Γ⊢_𝒢^(D)(ℒ_A)χ→ϕ_j and Γ⊢_𝒢^(D)(ℒ_A)ψ_j→χ'

for any j which implies

Γ⊢_𝒢^(D)(ℒ_A)χ→⋀_i∈ωϕ_i and Γ⊢_𝒢^(D)(ℒ_A)⋁_i∈ωψ_i→χ'

by (Rω)_1,2 which is

[χ]≤^𝐋𝐓[⋀_i∈ωϕ_i] and [⋁_i∈ωψ_i]≤^𝐋𝐓[χ'].

This gives that

⋀^𝐋𝐓_i∈ω[ϕ_i]=[⋀_i∈ωϕ_i] and ⋁^𝐋𝐓_i∈ω[ψ_i]=[⋁_i∈ωψ_i].
Next, we show that all the mentioned meets are distributive. Let χ,ξ∈ℒ_A be arbitrary. We have

⋀_i∈ω(ξ∨(ψ_i→χ))∈ℒ_A and ⋀_i∈ω(ξ∨(χ→ϕ_i))∈ℒ_A

by the closure properties of ℒ_A. Write (α_i)_i for either (χ→ϕ_i)_i or (ψ_i→χ)_i. Now, we get

[ξ]∨^𝐋𝐓⋀^𝐋𝐓_i∈ω[α_i]≤^𝐋𝐓[ξ]∨^𝐋𝐓[α_j]

for any j by the axiom (⋀ω) and therefore

[ξ]∨^𝐋𝐓⋀^𝐋𝐓_i∈ω[α_i]≤^𝐋𝐓⋀^𝐋𝐓_i∈ω([ξ]∨^𝐋𝐓[α_i])=[⋀_i∈ω(ξ∨α_i)].

Further, by the axiom scheme (D), we have

Γ⊢_𝒢^D(ℒ_A)⋀_i∈ω(ξ∨α_i)→(ξ∨⋀_i∈ωα_i)

which gives the converse

[⋀_i∈ω(ξ∨α_i)]≤^𝐋𝐓[ξ]∨^𝐋𝐓⋀^𝐋𝐓_i∈ω[α_i],

i.e. combined we have

[ξ]∨^𝐋𝐓⋀^𝐋𝐓_i∈ω[α_i]=[⋀_i∈ω(ξ∨α_i)].

In the first-order case, the quantifier claims can be proved as outlined in <cit.>. We still sketch the proof here for self-containedness. For the first two quantifier claims, note that we have

Γ⊢_𝒢^(D)_ω_1,ω(ℒ_A)∀xϕ→ϕ[t/x] and Γ⊢_𝒢^(D)_ω_1,ω(ℒ_A)ϕ[t/x]→∃xϕ

for any t∈𝒯_A by the axioms (∀E) and (∃E). Now, suppose that χ∈ℒ_A is such that

Γ⊢_𝒢^(D)_ω_1,ω(ℒ_A)χ→ϕ[t/x] for all t∈𝒯_A or Γ⊢_𝒢^(D)_ω_1,ω(ℒ_A)ϕ[t/x]→χ for all t∈𝒯_A.

Then, pick y∈Var_A with y∉var(χ)∪var(ϕ). Note that this is possible as fragments are saturated for variables. By assumption, we have

Γ⊢_𝒢^(D)_ω_1,ω(ℒ_A)χ→ϕ[y/x] or Γ⊢_𝒢^(D)_ω_1,ω(ℒ_A)ϕ[y/x]→χ

and as y is not free in χ, we get

Γ⊢_𝒢^(D)_ω_1,ω(ℒ_A)χ→∀yϕ[y/x] or Γ⊢_𝒢^(D)_ω_1,ω(ℒ_A)∃yϕ[y/x]→χ

via the rules (∀I) and (∃I). As we have

⊢_𝒢^(D)_ω_1,ω(ℒ_A)∀yϕ[y/x]↔∀xϕ and ⊢_𝒢^(D)_ω_1,ω(ℒ_A)∃yϕ[y/x]↔∃xϕ,

the claims follow.

Regarding distributivity, let ϕ∈ℒ_A and x∈Var_A as well as χ,ξ∈ℒ_A. Further, let y∈Var_A with y∉var(ϕ)∪var(χ)∪var(ξ) (note again variable saturation). Then, we have

⊢_𝒢^(D)_ω_1,ω(ℒ_A)ϕ[t/x]↔ϕ[y/x][t/y]

and therefore, defining ϕ':=ϕ[y/x], we have [ϕ[t/x]]=[ϕ'[t/y]]. Since y does not occur in χ, we have

χ→ϕ'[t/y]=(χ→ϕ')[t/y]=:α^∀[t/y] and ϕ'[t/y]→χ=(ϕ'→χ)[t/y]=:α^∃[t/y]

and therefore, since y also does not occur in ξ, the previous quantifier claims imply (where we write α for either α^∃ or α^∀):

⋀^𝐋𝐓_t∈𝒯_A([ξ]∨^𝐋𝐓[α[t/y]])=⋀^𝐋𝐓_t∈𝒯_A[(ξ∨α)[t/y]]=[∀y(ξ∨α)].

Using the distributivity axiom, as y again does not occur in ξ, we get

[∀y(ξ∨α)]=[ξ∨∀yα]=[ξ]∨^𝐋𝐓⋀^𝐋𝐓_t∈𝒯_A[α[t/y]]

and finally we have

⋀^𝐋𝐓_t∈𝒯_A([ξ]∨^𝐋𝐓([χ]→^𝐋𝐓[ϕ[t/x]]))=[∀y(ξ∨α^∀)]=[ξ]∨^𝐋𝐓⋀^𝐋𝐓_t∈𝒯_A([χ]→^𝐋𝐓[ϕ[t/x]])

and

⋀^𝐋𝐓_t∈𝒯_A([ξ]∨^𝐋𝐓([ϕ[t/x]]→^𝐋𝐓[χ]))=[∀y(ξ∨α^∃)]=[ξ]∨^𝐋𝐓⋀^𝐋𝐓_t∈𝒯_A([ϕ[t/x]]→^𝐋𝐓[χ]).

In the propositional case, there is a canonical ℒ_A-evaluation over 𝐋𝐓^Γ, ι:ℒ_A→𝐋𝐓^Γ, which is defined by

ι(ϕ):=[ϕ].

By Lemma <ref>, ι indeed is a well-defined ℒ_A-evaluation.

In the first-order case, this Lindenbaum-Tarski algebra now forms the algebraic part of the Lindenbaum-Tarski model: define 𝔏𝔗^Γ as a model with 𝐋𝐓^Γ as the underlying Heyting algebra and with 𝒯_A as the domain by setting

f^𝔏𝔗(t_1,…,t_n):=f(t_1,…,t_n) as well as P^𝔏𝔗(t_1,…,t_n):=[P(t_1,…,t_n)]

for function symbols f and predicate symbols P of ℒ_A. We define the canonical variable assignment

ι:Var_A→𝒯_A, x↦x

and denote it, for convenience, also by ι but the context will make it clear whether we mean the propositional evaluation or the first-order variable assignment.

First note that 𝔏𝔗^Γ is indeed suitable for ι.

The model 𝔏𝔗^Γ is a suitable model w.r.t. ι. In particular, given some t=(t_1,…,t_n)∈(𝒯_A)^n and x=(x_i_1,…,x_i_n)∈(Var_A)^n with i_j≠i_k for j≠k, it holds that

(𝔏𝔗^Γ,ι[t/x])(ϕ)=[ϕ[t/x]]

for any ϕ∈ℒ_A.

Consider (𝔏𝔗^Γ,ι[t/x]) for x and t as required. By Remark <ref>, showing that ϕ↦[ϕ[t/x]] is the extension of (𝔏𝔗^Γ,ι[t/x]) for those t,x is enough to establish suitability. Note first that we have

s^(𝔏𝔗^Γ,ι[t/x])=s[t/x]∈𝒯_A

for any s∈𝒯_A as by assumption, 𝒯_A is closed under substitution. The claim is then proved by induction on ϕ.
By the above result for terms, we get

[P(s_1,…,s_n)[t/x]]=[P(s_1[t/x],…,s_n[t/x])]=P^(𝔏𝔗^Γ,ι[t/x])(s_1^(𝔏𝔗^Γ,ι[t/x]),…,s_n^(𝔏𝔗^Γ,ι[t/x])).

Further, we in particular have [⊥[t/x]]=[⊥] and this provides the result for the atomic cases. The cases for ϕ∘ψ∈ℒ_A with ∘∈{∧,∨,→} follow just by noting that ·[t/x] distributes over ∘ and by considering the definitions of ∘^𝐋𝐓. The same also holds true for ⋀_i∈ωϕ_i∈ℒ_A where we have

[(⋀_i∈ωϕ_i)[t/x]]=[⋀_i∈ωϕ_i[t/x]]=⋀^𝐋𝐓_i∈ω[ϕ_i[t/x]]

and similarly for ⋁_i∈ωϕ_i∈ℒ_A where we have used Lemma <ref>. This gives the infinitary cases.

Lastly, let ϕ∈ℒ_A and x∈Var_A. Then, we have

[(∀xϕ)[t/x]]=[∀zϕ[(t',z)/(x',x)]]

where z and x',t' are as in the definition of first-order substitutions. Using Lemma <ref>, we get

[∀zϕ[(t',z)/(x',x)]]=⋀^𝐋𝐓_t∈𝒯_A[ϕ[(t',z)/(x',x)][t/z]]=⋀^𝐋𝐓_t∈𝒯_A[ϕ[(t',t)/(x',x)]].

Using similar reasoning, one can show

[(∃xϕ)[t/x]]=⋁^𝐋𝐓_t∈𝒯_A[ϕ[(t',t)/(x',x)]]

in the existential case. This gives the quantifier cases by noting that (ι[t/x])[t/x]=ι[(t',t)/(x',x)].

This immediately yields completeness theorems for 𝒢_ω_1(ℒ_A) or respectively 𝒢_ω_1,ω(ℒ_A) w.r.t. L-algebras:

Let ℒ_A be any fragment of ℒ_ω_1 or ℒ_ω_1,ω. For any Γ∪{ϕ}⊆ℒ_A (with Γ closed in the first-order case), the following are equivalent:
* Γ⊢_𝒢(ℒ_A)ϕ;
* Γ⊨_𝖫(ℒ_A)ϕ.
Here, we write 𝒢 for either 𝒢_ω_1 or 𝒢_ω_1,ω, respectively.

“(1) implies (2)" is contained in Lemma <ref> or Lemma <ref>, respectively. For the converse, suppose Γ⊬_𝒢(ℒ_A)ϕ and construct the Lindenbaum-Tarski algebra 𝐋𝐓^Γ as indicated above. Then Lemma <ref> gives that ι is a well-defined 𝐋𝐓^Γ-valued ℒ_A-evaluation in the propositional case and by construction, we have ι[Γ]⊆{1^𝐋𝐓} but ι(ϕ)≠1^𝐋𝐓. Therefore Γ⊭_𝖫(ℒ_A)ϕ as 𝐋𝐓^Γ is an L-algebra.

In the first-order case, we construct the Lindenbaum-Tarski model 𝔏𝔗^Γ as above which is suitable w.r.t. ι by Lemma <ref>. Again by Lemma <ref>, the underlying algebra 𝐋𝐓^Γ is a well-defined L-algebra and for the corresponding ι, we have

(𝔏𝔗^Γ,ι)[Γ]⊆{1^𝐋𝐓} but (𝔏𝔗^Γ,ι)(ϕ)=[ϕ]≠1^𝐋𝐓

by Lemma <ref> and therefore Γ⊭_𝖫(ℒ_A)ϕ.

However, restricting to countable fragments even yields a further completeness result for 𝒢^D w.r.t. countable chains and thus, by Lemma <ref>, for [0,1] which is what we will outline next.

Let ℒ_A be a countable fragment of ℒ_ω_1 or ℒ_ω_1,ω. For any Γ∪{ϕ}⊆ℒ_A (with Γ closed in the first-order case), the following are equivalent:
* Γ⊢_𝒢^D(ℒ_A)ϕ;
* Γ⊨_𝖣𝖫(ℒ_A)ϕ;
* Γ⊨_𝖢(ℒ_A)ϕ;
* Γ⊨_𝖢𝖢(ℒ_A)ϕ;
* Γ⊨_[0,1]_ℚ(ℒ_A)ϕ;
* Γ⊨_[0,1]_ℝ(ℒ_A)ϕ.
Here, we again write 𝒢^D for either 𝒢^D_ω_1 or 𝒢^D_ω_1,ω, respectively.

“(1) implies (2)" is contained in Lemma <ref> or Lemma <ref>, respectively. “(2) implies (3)" follows from the fact that every chain is a distributive L-algebra. “(3) implies (4)" and “(4) implies (5)" as well as “(6) implies (5)" and “(1) implies (6)" are also immediate. We thus only show “(5) implies (1)" and for that, suppose Γ⊬_𝒢^D(ℒ_A)ϕ. Also here, we construct the corresponding Lindenbaum-Tarski algebra 𝐋𝐓^Γ and naturally have [ϕ]≠1^𝐋𝐓. Lemma <ref> again guarantees that 𝐋𝐓^Γ is a well-defined L-algebra and that ι is a well-defined evaluation in the propositional case.

In the first-order case, we again construct the Lindenbaum-Tarski model 𝔏𝔗^Γ over the algebra 𝐋𝐓^Γ and by Lemma <ref>, the model 𝔏𝔗^Γ is suitable w.r.t. ι.
With ι as the canonical variable assignment, we also get (𝔏𝔗^Γ,ι)(ψ)=[ψ] by the same result.

Lemma <ref> now yields that any of the meets

⋀^𝐋𝐓_i∈ω([χ]→^𝐋𝐓[ϕ_i])=[χ]→^𝐋𝐓⋀^𝐋𝐓_i∈ω[ϕ_i] and ⋀^𝐋𝐓_i∈ω([ψ_i]→^𝐋𝐓[χ])=⋁^𝐋𝐓_i∈ω[ψ_i]→^𝐋𝐓[χ]

and additionally, in the first-order case, any of the meets

⋀^𝐋𝐓_t∈𝒯_A([χ]→^𝐋𝐓[ϕ[t/x]])=[χ]→^𝐋𝐓[∀xϕ] and ⋀^𝐋𝐓_t∈𝒯_A([ϕ[t/x]]→^𝐋𝐓[χ])=[∃xϕ]→^𝐋𝐓[χ]

are distributive for any χ,⋀_i∈ωϕ_i,⋁_i∈ωψ_i,ϕ∈ℒ_A and x∈Var_A.

As ℒ_A is countable, we can enumerate all of the above (distributive) meets and by Lemma <ref>, there is a prime filter F with [ϕ]∉F and such that F preserves all the above meets (depending on the propositional or first-order case). As [χ] ranges over every element of 𝐋𝐓^Γ, Lemma <ref> gives that the map

p_F:𝐋𝐓^Γ→𝐋𝐓^Γ/F, [χ]↦[χ]_F:=[[χ]]_F

is a homomorphism of Heyting algebras which preserves all the meets/joins

⋀^𝐋𝐓_i∈ω[ϕ_i] and ⋁^𝐋𝐓_i∈ω[ψ_i]

as well as

⋀^𝐋𝐓_t∈𝒯_A[ϕ[t/x]] and ⋁^𝐋𝐓_t∈𝒯_A[ϕ[t/x]]

for any ⋀_i∈ωϕ_i,⋁_i∈ωψ_i,ϕ∈ℒ_A and x∈Var_A. Further, 𝐋𝐓^Γ/F is a chain as F is a prime filter and 𝐋𝐓^Γ is an L-algebra. In the propositional case, Lemma <ref> implies that p_F∘ι is a well-defined ℒ_A-evaluation into 𝐋𝐓^Γ/F with p_F(ι(ϕ))=[ϕ]_F≠1^𝐋𝐓^Γ/F as [ϕ]∉F.

In the first-order case, since ϕ[t/x]∈ℒ_A for any ϕ∈ℒ_A, any t∈(𝒯_A)^n and any x∈(Var_A)^n with i_j≠i_k for j≠k, we in particular have that the map p_F preserves the meets and joins

⋀^𝐋𝐓_i∈ω[ϕ_i[t/x]] and ⋁^𝐋𝐓_i∈ω[ψ_i[t/x]]

as well as

⋀^𝐋𝐓_t∈𝒯_A[ϕ[t/x][t/x]] and ⋁^𝐋𝐓_t∈𝒯_A[ϕ[t/x][t/x]].

Since by Lemma <ref>, we have (𝔏𝔗^Γ,ι[t/x])(ϕ)=[ϕ[t/x]], the map p_F fulfills the premises of Lemma <ref> and therefore, the model p_F∘𝔏𝔗^Γ is suitable for ι with (p_F∘𝔏𝔗^Γ,ι[t/x])(ϕ)=[ϕ[t/x]]_F.

Lemma <ref> guarantees the existence of an injective Heyting algebra homomorphism

q:𝐋𝐓^Γ/F→[0,1]_ℚ

which preserves all the meets and joins existing in 𝐋𝐓^Γ/F. In the propositional case, by Lemma <ref> we have that q∘(p_F∘ι) is a well-defined ℒ_A-evaluation with (q∘(p_F∘ι))(ϕ)≠1 by injectivity. Naturally, we have (q∘(p_F∘ι))[Γ]⊆{1} and therefore Γ⊭_[0,1]_ℚ(ℒ_A)ϕ.

In the first-order case, since p_F∘𝔏𝔗^Γ is suitable for ι, this implies that in particular the conditions of Lemma <ref> are met again and therefore, the model q∘(p_F∘𝔏𝔗^Γ) is suitable for ι as well. In particular, we have

(q∘(p_F∘𝔏𝔗^Γ),ι)(ϕ)=q([ϕ]_F)≠1

as q is injective and as [ϕ]∉F, i.e. [ϕ]_F≠1^𝐋𝐓^Γ/F.

Let ℒ_A be an arbitrary fragment of ℒ_ω_1 or ℒ_ω_1,ω but let Γ∪{ϕ}⊆ℒ_A be countable with Γ closed in the first-order case. Then, the following are equivalent:
* Γ⊢_𝒢^D(ℒ_A)ϕ;
* Γ⊨_𝖣𝖫(ℒ_A)ϕ;
* Γ⊨_𝖢(ℒ_A)ϕ;
* Γ⊨_[0,1]_ℝ(ℒ_A)ϕ.
Here, we again write 𝒢^D for either 𝒢^D_ω_1 or 𝒢^D_ω_1,ω, respectively.

The directions “(1) implies (2)", “(2) implies (3)" as well as “(3) implies (4)" follow as before. Suppose Γ⊬_𝒢^D(ℒ_A)ϕ. Then, also Γ⊬_𝒢^D(ℒ_B)ϕ for ℒ_B:=frag(Γ∪{ϕ}) since ℒ_B⊆ℒ_A. Since Γ∪{ϕ} is countable, we have that ℒ_B is countable as well. Theorem <ref> gives Γ⊭_[0,1]_ℝ(ℒ_B)ϕ. Therefore, in the propositional case there is an ℒ_B-evaluation v:ℒ_B→[0,1] such that v[Γ]⊆{1} but v(ϕ)≠1. Similarly, in the first-order case, there is an interpretation ℑ=(𝔐,v) such that ℑ[Γ]⊆{1} but ℑ(ϕ)≠1.

In [0,1]_ℝ (by completeness), every function v:Var_A→[0,1] has a unique extension to an evaluation, which we here denote also by v. If we set

v'(p):=v(p) if p∈ℒ_B, 0 otherwise,

for p∈Var_A, then it is easy to see that v'(ψ)=v(ψ) for ψ∈ℒ_B (as v is an ℒ_B-evaluation) and therefore, we have v'[Γ]⊆{1} but v'(ϕ)≠1. On the first-order side, any model over [0,1]_ℝ (again by completeness) is suitable for any variable assignment.
We define 𝔐' by f^𝔐':=f^𝔐 and P^𝔐':=P^𝔐 for function symbols f and predicate symbols P in σ_B and otherwise set f^𝔐' and P^𝔐' arbitrarily for symbols from σ_A∖σ_B. Further, we define v'(x):=v(x) if x∈Var_B and set it arbitrarily otherwise for x∈Var_A∖Var_B. Again, a simple induction shows that

(𝔐',v')(ψ)=(𝔐,v)(ψ)

for any ψ∈ℒ_B which results in Γ⊭_[0,1]_ℝ(ℒ_A)ϕ.

In particular, for countable Γ (closed, in the first-order case) in the appropriate language, we have

Γ⊢_𝒢^D_ω_1ϕ iff Γ⊨_𝖦_ω_1ϕ

as well as

Γ⊢_𝒢^D_ω_1,ωϕ iff Γ⊨_𝖦_ω_1,ωϕ.

We will actually see later on that the requirement that the set of premises is countable cannot be removed.

In the first-order cases, the formulations of the various completeness results require the set of premises Γ to be closed. This is, however, only needed for the soundness result as can be seen by inspecting the various proofs for the converse direction: both the Lindenbaum-Tarski algebra and model do not rely on Γ being closed and neither do any remaining parts of the above presented completeness proofs.

§ HYPERSEQUENT CALCULI FOR 𝒢^D_Ω_1 AND 𝒢^D_Ω_1,Ω AND CUT-ELIMINATION

We now want to address structural proof theory for infinitary Gödel logics, both for the propositional and first-order instances with operations of countable length. For that, we lift the usual approach towards structural proof theory for Gödel logics via hypersequent calculi (see <cit.>) to the infinitary case and provide cut-elimination theorems.

§.§ Sequents, Hypersequents and Related Notions

Hypersequents, as introduced by Avron <cit.>, are multisets of sequents and the rules operating on hypersequents then allow for parallel modification of and for “exchange of information" between sequents (as in particular exemplified by the rule (𝖼𝗈𝗆) given by Avron). More formally, let ℒ be either ℒ_ω_1 or ℒ_ω_1,ω. A sequent is a pair

Γ⇒Δ

of finite multisets Γ and Δ of ℒ-formulas where Δ contains at most one element. We write Γ,Δ for multiset union and denote the multiset formed by (possibly equal) formulas ϕ_1,…,ϕ_n by [ϕ_1,…,ϕ_n]. We simply write ϕ for [ϕ]. A hypersequent is then a multiset of sequents Γ_i⇒Δ_i (1≤i≤n) which we denote by

Γ_1⇒Δ_1 | … | Γ_n⇒Δ_n.

In any other way, we follow the notational conventions of <cit.>. In particular, there is a canonical interpretation ℐ of hypersequents as ℒ-formulas: set ℐ(Γ⇒Δ):=⋀Γ→⋁Δ where ⋀Γ (⋁Δ) is the conjunction (disjunction) over all members of Γ (Δ), with the convention that ⋀∅:=⊤ and ⋁∅:=⊥. This ℐ extends to hypersequents by

ℐ(Γ_1⇒Δ_1 | … | Γ_n⇒Δ_n):=⋁_i=1^nℐ(Γ_i⇒Δ_i).

The hypersequent systems which we consider are based on a hypersequent calculus for finitary propositional Gödel logic introduced by Avron <cit.>. This calculus naturally extends the usual sequent calculus for intuitionistic logic, lifted to the hypersequent setting, by a specific rule emulating the prelinearity axiom scheme (ϕ→ψ)∨(ψ→ϕ). Avron's calculus was extended to first-order Gödel logics by Baaz and Zach in <cit.>.

§.§ The Systems ℋ𝒢^D_ω_1 and ℋ𝒢^D_ω_1,ω

The range of constituting rules for the various hypersequent calculi can be seen in Figure <ref>. They consist of the rules given in <cit.> for a hypersequent calculus for first-order Gödel logic extended by four additional infinitary rules. Following Baaz and Ciabattoni <cit.>, the version of (𝖼𝗈𝗆) given here differs from the one usually given (see e.g. <cit.>) and (similar to <cit.>) this serves some technical purposes in the following cut-elimination proof. For further context, see in particular Remark 2 from <cit.>.
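As an informal illustration (ours, not part of the formal development), the following Python fragment computes the interpretation ℐ of a hypersequent under the standard semantics, with the empty conjunction read as ⊤ and the empty disjunction as ⊥, and checks that hypersequents of the shape ϕ⇒ψ | ψ⇒ϕ, loosely mirroring the effect of the rule (𝖼𝗈𝗆), always receive value 1:

    import random

    def goedel_implies(x, y):
        return 1.0 if x <= y else y

    def interp_sequent(gamma, delta):
        # gamma, delta: lists of truth values of the member formulas
        ante = min(gamma, default=1.0)   # empty conjunction is interpreted as top
        succ = max(delta, default=0.0)   # empty disjunction is interpreted as bottom
        return goedel_implies(ante, succ)

    def interp_hypersequent(components):
        # the disjunction over the interpretations of the component sequents
        return max(interp_sequent(g, d) for g, d in components)

    for _ in range(1000):
        a, b = random.random(), random.random()
        assert interp_hypersequent([([a], [b]), ([b], [a])]) == 1.0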
Principal formulas are defined as commonly done. The rules (∀,𝗅) and (∃,𝗋) are supposed to fulfill the eigenvariable condition: the variable a is free and does not occur in the lower hypersequent. We refer with ℋ𝒢^D_ω_1 to all initial, structural, logical and infinitary rules (over the propositional language) and with ℋ𝒢^D_ω_1,ω to ℋ𝒢^D_ω_1 (now over the first-order language) extended with the quantifier rules.

Deductions in the hypersequent calculi are defined by natural infinitary generalizations of the usual definition: deductions are countable (possibly infinite) rooted trees where every node is labeled with a hypersequent and every edge is labeled with a rule such that the arities of the rules are respected and the applications are correct. If d is such a derivation with a root hypersequent H, then we write d⊢_ℋ𝒢^D_ω_1,ωH or d⊢_ℋ𝒢^D_ω_1H (depending on the used language and systems). We omit the system if it is arbitrary or clear from the context and then write d⊢H. We write ⊢H if there is any derivation d with d⊢H.

One quickly verifies that the non-atomic version ϕ⇒ϕ of (𝗂𝖽) for arbitrary ϕ is derivable in the systems and that they are complete w.r.t. the Hilbert-type systems introduced before in the following sense:

For any ϕ∈ℒ_ω_1, ⊢_ℋ𝒢^D_ω_1ϕ if, and only if ⊢_𝒢^D_ω_1ϕ. Similarly for ℒ_ω_1,ω, ℋ𝒢^D_ω_1,ω and 𝒢^D_ω_1,ω.

The only thing we want to remark is that the infinitary distributivity axiom

(D) ⋀_i∈ω(ϕ∨ψ_i)→(ϕ∨⋀_i∈ωψ_i)

is derivable in the calculus, akin to the analogous derivation of the first-order (QS) axiom e.g. given in <cit.>. Concretely, for any i∈ω, we have the following derivation: from ϕ⇒ϕ and ψ_i⇒ψ_i, the rule (𝖼𝗈𝗆) yields ϕ⇒ψ_i | ψ_i⇒ϕ; together with ϕ⇒ϕ, an application of (∨,𝗅) yields ϕ⇒ψ_i | ϕ∨ψ_i⇒ϕ and, together with ψ_i⇒ψ_i, a further application of (∨,𝗅) yields ϕ∨ψ_i⇒ψ_i | ϕ∨ψ_i⇒ϕ. Two applications of (⋀,𝗅) then give

⋀_i∈ω(ϕ∨ψ_i)⇒ψ_i | ⋀_i∈ω(ϕ∨ψ_i)⇒ϕ

for any i∈ω. Using those as input for (⋀,𝗋), we get

⋀_i∈ω(ϕ∨ψ_i)⇒⋀_i∈ωψ_i | ⋀_i∈ω(ϕ∨ψ_i)⇒ϕ

and, via (∨_0,𝗋) and (∨_1,𝗋),

⋀_i∈ω(ϕ∨ψ_i)⇒ϕ∨⋀_i∈ωψ_i | ⋀_i∈ω(ϕ∨ψ_i)⇒ϕ∨⋀_i∈ωψ_i

from which (𝖤𝖢) followed by (→,𝗋) yields ⇒⋀_i∈ω(ϕ∨ψ_i)→(ϕ∨⋀_i∈ωψ_i).

For the upcoming proof of the cut-elimination theorem, we introduce versions of the calculi given above which use sets of formulas for sequents and sets of sequents for hypersequents. With this set-version, we follow both Tait <cit.> as well as Baaz and Ciabattoni <cit.> and the advantage is that we can omit both the internal and external contraction rules in the set-version. We still write Γ⇒Δ for set-sequents where Δ is still at most a singleton and in this context write Γ,Δ for the ordinary union of Γ and Δ. To emphasize the set-version of the hypersequents, we denote the respective objects by G∪{Γ⇒Δ}. All the previously introduced rules (besides external and internal contraction) can be naturally reformulated using the set-hypersequents and we use the same naming in these cases. We write 𝒮ℋ𝒢^D_ω_1 and 𝒮ℋ𝒢^D_ω_1,ω for these set-versions of the previous calculi. We use the same notion of proof defined via countable trees and write d⊢^s_𝒮ℋ𝒢^D_ω_1,ωH or d⊢^s_𝒮ℋ𝒢^D_ω_1H if d is a derivation tree with H as a root set-hypersequent in the respective systems. Again, if we do not want to be specific about the system, we write d⊢^sH. Also, derivations d⊢^sH with additional set-hypersequent assumptions are defined as always. The notion of substitution naturally carries over to sequents, (set-)hypersequents and derivations and we use the same notation as in formulas and terms.

§.§ Cut-Elimination in the Schütte-Tait Style

We now turn to cut-elimination.
Most cut-elimination methods fall into one of two categories: Gentzen style methods <cit.> which remove highest cuts and Schütte-Tait style methods <cit.> which remove most complex cuts (in the sense of occurring logical symbols or a similar complexity measure). The notion of highest cut does not generally result in terminating procedures with systems which have infinitary rules and it is thus not surprising that we opt for a Schütte-Tait style proof.For proving cut-elimination, we closely follow the argument given in <cit.> by Baaz and Ciabattoni where the authors provide a Schütte-Tait style cut-elimination proof for the calculus which we have used (together with its propositional fragment) as a basis for our infinitary extensions. Naturally, the finitary notions used there have to be appropriately extended to arbitrary countable ordinals and we do this in a similar vein as in Tait's work <cit.>. Before moving on to the technical results, we will need to introduce various measures on proofs and formulas. For this, we first give a short primer on the relevant notions regarding ordinals. For a general overview over ordinal arithmetic and further notions, see <cit.>. We write + and · for the usual ordinal addition and multiplication. Given a family of ordinals α_i, we write sup^+_iα_i for the smallest ordinal greater than every α_i and sup_iα_i for the smallest ordinal greater or equal than every α_i.Further, we will need the natural sum and multiplication on ordinals, also called the Hessenberg sum and multiplication (see <cit.>), which we denote by x⊕ y and x⊗ y, respectively. The precise definition for ⊕ and ⊗ can be given using Cantor normal forms, among others, but we will only need certain properties of ⊕, ⊗ and their interplay with sup^+ which we list in the following remark.⊕ and ⊗ are commutative, associative and monotone in both arguments. Further, we have α⊕ 1=α+1 for any ordinal α and α⊗ (β⊕γ) = (α⊗β)⊕(α⊗γ). We have sup^(+)_i(β⊕α_i)≤β⊕sup^(+)_iα_i and sup^(+)_i(β⊗α_i) ≤ β⊗sup^(+)_iα_i where sup^(+) is either sup or sup^+.There is a notion of exponentiation derived from natural multiplication, first considered by de Jongh and Parikh <cit.>, which we denote by α^⊗β and call (super-Jacobsthal) exponentiation, following Altman <cit.>. This exponentiation can be formally defined using transfinite recursion by* α^⊗ 0:=1 for any α,* α^⊗ (β+1):=α^⊗β⊗α for any α,β,* α^⊗β:=sup_γ <βα^⊗γ for any α and any limit ordinal β,and has, in particular, the following properties: α^⊗β is strictly increasing and continuous in β andα^⊗(β⊕γ)=α^⊗β⊗α^⊗γ.The first two are immediate by the monotonicity of ⊗ and the definition of exponentiation. A proof of the latter can be found in <cit.>. Inspired by Tait <cit.>, we define the function χ^0(α):=4^⊗α and derived from that, we define χ^z as the function enumerating the common fixed points of χ^w for all w<z. These χ^z are the Veblen iterations of χ^0 which exist (for all countable ordinals) by <cit.> as χ^0 is continuous and strictly increasing. Further, all χ^z are also continuous and strictly increasing. We define the complexity |ϕ| of ϕ∈ℒ_ω_1 recursively by * |ϕ| := 0 for atomic ϕ,* |ϕ∘ψ| := max{|ϕ|,|ψ|}+1,* |⋀_i∈ωϕ_i|:=|⋁_i∈ωϕ_i|:=sup^+_i∈ω|ϕ_i|.|·| is extended to ℒ_ω_1,ω by adding the clause* | Qxϕ|:=|ϕ|+1,for Q∈{∀,∃}. Let d be a derivation (of normal or set-hypersequents) with d_i, i<k≤ω, as its direct predecessors (i.e., those subderivations proving the assumptions of the last rule in d as the direct subderivations from <cit.>). 
As natural generalizations of the notions defined in <cit.> (and in some way akin to <cit.>) we set |d|:=sup^+_i<k|d_i| if the last rule was not a weakening and |d|:=|d_0| otherwise. We define w(d):=sup_i<kw(d_i) if the last rule was not an internal weakening and w(d):=w(d_0)+1 otherwise. Similarly, we define W(d) by using external instead of internal weakenings. Lastly, we recursively define ρ(d) by
* ρ(d):=0 if d is cut-free,
* ρ(d):=sup_i<kρ(d_i) if the last inference is not a cut,
* ρ(d):=max{|ϕ|+1,ρ(d_0),ρ(d_1)} if the last inference is a cut with cut-formula ϕ.
|d| is called the rank of d and ρ(d) is called the cut-degree of d. All of these naturally extend to derivations with assumptions.

If d⊢^sH, then d[t/x]⊢^sH[t/x] with |d[t/x]|=|d| and ρ(d[t/x])=ρ(d).

It is straightforward to check that d[t/x] is a correct proof since d is. Both d[t/x]⊢^sH[t/x] and the other properties are then immediate.

It is clear that ⊢ and ⊢^s prove essentially the same theorems (modulo applications of contractions) and using the above notions, we can state the following result on the impact of the change between ℋ𝒢 and 𝒮ℋ𝒢 on the rank of derivations. For that, we write H^s for the set-hypersequent obtained from a hypersequent H by removing all multiplicities of formulas and sequents (and treating the resulting objects as sets). Following <cit.>, we call a hypersequent H (1-1-)reduced if no formula occurs more than once in any multiset and if no sequent occurs more than once in the hypersequent. These hypersequents can be naturally seen as set-hypersequents.

Let H be a reduced hypersequent. If d'⊢^sH^s, then there is a d⊢H with |d|≤(2⊗|d'|)⊕w(d') and ρ(d')=ρ(d). Conversely, if d⊢H, then there is a d'⊢^sH^s with |d'|≤|d| and ρ(d')=ρ(d).

We omit the proof as it is a natural generalization of the respective finitary result of Baaz and Ciabattoni <cit.>.

The proofs which we give in the following rely on a (formal) tracking of the cut formula through the proof based on so-called decorations as introduced by Baaz and Ciabattoni in <cit.>, extended to the infinitary case. Given d⊢^sH and given a decoration of H, that is H where some (but not necessarily all) occurrences of a formula ϕ are decorated, denoted by ϕ^*, the decorated version of d is defined by recursion on the tree: if we have a decoration of an occurring hypersequent H', then the premises are decorated according to which rule was used to derive H'. The definitions for the rules (𝖤𝖶), (𝗐,𝗅), (𝗐,𝗋) and (𝖼𝗈𝗆) are exactly as in <cit.>. Suppose the rule used is a (possibly infinitary) logical rule with arity k≤ω, i.e. we have an inference

G∪{Γ_i⇒Δ_i | i<k} / G∪{Γ⇒Δ}.

Then if
* ϕ is the principal formula of the rule: if ϕ^*∈Γ then ϕ is decorated in Γ_i iff ϕ already occurs in Γ_i,
* ϕ is not the principal formula of the rule, then ϕ is decorated in Γ_i or Δ_i iff it is decorated in Γ or Δ, respectively.
Further, in both cases, G∖{Γ_i⇒Δ_i | i<k} is decorated as in the conclusion.
The following inversions are valid:
(i) If d⊢^sG∪{Γ,ϕ∨ψ⇒Δ}, then there are proofs d_0⊢^sG∪{Γ,ϕ⇒Δ} and d_1⊢^sG∪{Γ,ψ⇒Δ}.
(ii) If d⊢^sG∪{Γ,ϕ∧ψ⇒Δ}, then there is a proof d_0⊢^sG∪{Γ,ϕ,ψ⇒Δ}.
(iii) If d⊢^sG∪{Γ⇒ϕ∧ψ}, then there are proofs d_0⊢^sG∪{Γ⇒ϕ} and d_1⊢^sG∪{Γ⇒ψ}.
(iv) If d⊢^sG∪{Γ⇒ϕ→ψ}, then there is a proof d_0⊢^sG∪{Γ,ϕ⇒ψ}.
(v) If d⊢^sG∪{Γ,∃xϕ⇒Δ}, then there is a proof d_0⊢^sG∪{Γ,ϕ[a/x]⇒Δ}.
(vi) If d⊢^sG∪{Γ⇒∀xϕ}, then there is a proof d_0⊢^sG∪{Γ⇒ϕ[a/x]}.
(vii) If d⊢^sG∪{Γ,⋁_i∈ωϕ_i⇒Δ}, then there are proofs d_j⊢^sG∪{Γ,ϕ_j⇒Δ} for each j∈ω.
(viii) If d⊢^sG∪{Γ⇒⋀_i∈ωϕ_i}, then there are proofs d_j⊢^sG∪{Γ⇒ϕ_j} for each j∈ω.
In any case, we respectively have ρ(d_i)≤ρ(d) and |d_i|≤|d|.

The proof for items (i) to (vi) follows exactly the reasoning of <cit.> for the respective finitary result (see Lemma 4 there). We give the proofs for (vii) and (viii) in the same spirit.

(vii) We consider a decoration of d starting with G∪{Γ,(⋁_i∈ωϕ_i)^*⇒Δ}. Replace every occurring Γ',(⋁_i∈ωϕ_i)^*⇒Δ' by Γ',ϕ_j⇒Δ'. Delete all the subderivations but the j-th one above any application of (⋁,𝗅) where (⋁_i∈ωϕ_i)^* occurs as a decorated formula and is principal. As all initial hypersequents are atomic, correctness of the resulting d_j can be shown by an induction over |d|⊕w(d)⊕W(d). Clearly |d_j|≤|d| and ρ(d_j)≤ρ(d).

(viii) We consider a decoration of d starting from G∪{Γ⇒(⋀_i∈ωϕ_i)^*}. Replace every occurring Γ'⇒(⋀_i∈ωϕ_i)^* by Γ'⇒ϕ_j and delete all subderivations but the j-th one above any application of (⋀,𝗋) in which (⋀_i∈ωϕ_i)^* occurs decorated and is principal. Again, the correctness of d_j follows by a straightforward induction on |d|⊕w(d)⊕W(d) and we have |d_j|≤|d| and ρ(d_j)≤ρ(d) already by construction.

Suppose d⊢^sG∪{Γ,ϕ⇒Δ} where ϕ is atomic and not the cut-formula of any cut in d. Then for any Σ, there is a d' with assumption G∪{Σ⇒ϕ} such that d'⊢^sG∪{Γ,Σ⇒Δ} with |d'|≤|d| and ρ(d')≤ρ(d).

The proof can be easily obtained by generalizing the proof of the respective finitary result from <cit.>: decorate d, starting from G∪{Γ,ϕ^*⇒Δ}, replace any occurring {Γ',ϕ^*⇒Δ'} by {Γ',Σ⇒Δ'} and add G to every set-hypersequent. This tree now needs to be corrected to yield a correct proof. As in <cit.>, weakenings which produce decorated ϕ^* are replaced by (potentially more) weakenings producing Σ. Initial sequents ψ⇒ψ from (𝗂𝖽) and ⊥⇒ from (⊥) which do not introduce ϕ^* are respectively replaced by derivations of G∪{ψ⇒ψ} and G∪{⊥⇒} where we added sufficiently many weakenings to introduce G, while initial sequents introducing ϕ^* are replaced by G∪{Σ⇒ϕ} which is an allowed assumption for d'. That proof is now correct as ϕ is not the cut-formula of any cut in d. As weakenings do not lengthen |d|, we get |d'|≤|d| and as we didn't introduce any new cut, we get ρ(d')≤ρ(d).

Suppose d_0⊢^sG∪{Γ⇒ϕ} and d_1⊢^sG∪{Γ,ϕ⇒Δ} with ρ(d_i)≤|ϕ| for i=0,1. Then there is a d⊢^sG∪{Γ⇒Δ} with ρ(d)≤|ϕ| and |d|≤2⊗(|d_0|⊕|d_1|).

The proof is a natural extension of the corresponding finitary result from <cit.>. Note that since ρ(d_i)≤|ϕ|, ϕ is not the cut-formula of any cut in d_i.

For ϕ=⊥, decorate d_0 starting from G∪{Γ⇒⊥^*} and replace any {Γ'⇒⊥^*} by {Γ'⇒Δ}. We now have to correct the proof at the points where ⊥^* originates. In this simple case, ⊥^* arises by either an internal or external weakening. The internal weakenings can be either removed if Δ is empty or replaced by an internal weakening with Δ if Δ is nonempty. Similarly, the external weakenings get appropriately replaced.

For atomic ϕ≠⊥, note first that ρ(d_i)≤|ϕ|=0 implies that d_1 and d_0 are cut-free.
In particular, ϕ is not the cut-formula of any cut in d_1 and thus, there is a derivation d_1' with assumption G∪{Γ⇒ϕ} such that

d_1'⊢^sG∪{Γ⇒Δ}

as well as |d'_1|≤|d_1| and ρ(d_1')=0. We form d by replacing every assumption G∪{Γ⇒ϕ} with the proof d_0. It is straightforward to check that |d|≤|d_0|⊕|d_1|≤2⊗(|d_0|⊕|d_1|) and we have ρ(d)=0 by Lemma <ref>.

For ϕ=⋁_i∈ωϕ_i, consider a decoration of d_0 starting from G∪{Γ⇒(⋁_i∈ωϕ_i)^*}. Replace any occurring {Σ⇒(⋁_i∈ωϕ_i)^*} by {Γ,Σ⇒Δ} and add G to every set-hypersequent and Γ to every premise of any sequent. This resulting tree is not a correct proof anymore and we consider the following correction steps on the initial rules and on the rules which introduce a decorated instance of ϕ:
* Replace any initial rule ψ⇒ψ of (𝗂𝖽) or ⊥⇒ of (⊥) by a derivation of G∪{Γ,ψ⇒ψ} or of G∪{Γ,⊥⇒}, respectively, using sufficiently many applications of (𝖤𝖶) and the internal weakenings.
* Suppose (⋁_i∈ωϕ_i)^* originates as the principal formula of a logical rule, i.e. as the conclusion of an application of (⋁,𝗋) inferring G'∪{Γ'⇒(⋁_i∈ωϕ_i)^*} from G'∪{Γ'⇒ϕ_j} in the decorated version of d_0. Replace any such application by an application of (𝖼𝗎𝗍) with cut-formula ϕ_j which infers G∪G̅'∪{Γ,Γ'⇒Δ} from the corrected premise G∪(G'∪{Γ,Γ'⇒ϕ_j})[Δ/(⋁_i∈ωϕ_i)^*] and from G∪G̅'∪{Γ,Γ',ϕ_j⇒Δ} as provided by d_1^-1, where d_1^-1 is obtained by Lemma <ref> from d_1⊢^sG∪{Γ,⋁_i∈ωϕ_i⇒Δ}. Here, G̅' is the set-hypersequent G' after the possible internal replacements. In the above presentation, we have suppressed the needed external and internal weakenings as they have no effect on the resulting rank.
* If (⋁_i∈ωϕ_i)^* originates from an internal weakening, i.e. from an application of (𝗐,𝗋) inferring G'∪{Γ'⇒(⋁_i∈ωϕ_i)^*} from G'∪{Γ'⇒}, replace any such application by weakenings inferring G∪G̅'∪{Γ,Γ'⇒Δ} from G∪G̅'∪{Γ'⇒} if Δ is nonempty and remove them otherwise.
* If (⋁_i∈ωϕ_i)^* originates from an external weakening, i.e. from an application of (𝖤𝖶) inferring G'∪{Γ'⇒(⋁_i∈ωϕ_i)^*} from G', replace any such application by an application of (𝖤𝖶) inferring G∪G̅'∪{Γ,Γ'⇒Δ} from G∪G̅'.

The resulting proof d is correct as can be verified by transfinite induction on |T|⊕w(T)⊕W(T) for subderivations T of d_0. Here, it is important that ϕ is not the cut-formula of any cut. Furthermore, any newly introduced cut uses one of the ϕ_i as a cut-formula. We thus get, using ρ(d_1^-1)≤ρ(d_1) from Lemma <ref>, that

ρ(d)≤max{ρ(d_0),ρ(d_1),sup_i∈ω(|ϕ_i|+1)}=max{ρ(d_0),ρ(d_1),sup^+_i∈ω|ϕ_i|}≤|ϕ|

as ρ(d_i)≤|ϕ| for i=0,1 by assumption and |ϕ|=sup^+_i∈ω|ϕ_i| by definition.

Regarding the rank, let T' be the replacement of any rooted subderivation T of d_0 after the replacement and correction procedure. Then we can prove, by transfinite induction on |T|⊕w(T)⊕W(T), that |T'|≤2⊗(|d_1^-1|⊕|T|).
* If T is just a single initial rule, then |T'|=|T| as only weakenings were added.
* If T ends with an application of (⋁,𝗋) which introduces ϕ^* in the annotation, let T_0 be its preceding derivation. Note that T' ends with an application of cut with preceding derivations T_0' and d_1^-1 (modulo additional weakenings). We get

|T'|=max{|T_0'|,|d_1^-1|}+1≤max{2⊗(|T_0|⊕|d_1^-1|),|d_1^-1|}+1≤2⊗((|T_0|+1)⊕|d_1^-1|)=2⊗(|T|⊕|d_1^-1|)

which completes this case. Here, the first inequality follows from the induction hypothesis.
* If T ends with a weakening which introduces ϕ^*, then we still have |T_0|⊕w(T_0)⊕W(T_0)<|T|⊕w(T)⊕W(T) and thus, we get

|T'|=|T_0'|≤2⊗(|d_1^-1|⊕|T_0|)=2⊗(|d_1^-1|⊕|T|)

as by definition |T_0|=|T| and as only weakenings were added.
* If the last rule of T does not introduce ϕ^* and is not a weakening, then for the preceding derivations T'_i of T', we have

|T'|=sup^+_i∈ω|T'_i|≤sup^+_i∈ω2⊗(|d_1^-1|⊕|T_i|)≤2⊗(|d_1^-1|⊕sup^+_i∈ω|T_i|)=2⊗(|d_1^-1|⊕|T|).
* If the last rule does not introduce ϕ^* but is a weakening, then the reasoning is as in case (3).

We in particular thus have (taking d=d_0')

|d|≤2⊗(|d_1^-1|⊕|d_0|)≤2⊗(|d_1|⊕|d_0|)

as |d_1^-1|≤|d_1| by Lemma <ref>.

For ϕ=⋀_i∈ωϕ_i, consider a decoration of d_1 starting from G∪{Γ,(⋀_i∈ωϕ_i)^*⇒Δ}. Replace any occurring {Γ',(⋀_i∈ωϕ_i)^*⇒Δ'} by {Γ',Γ⇒Δ'}, add G to every set-hypersequent and add Γ to the premise of every sequent. This resulting tree is not a correct proof anymore and we consider the following correction steps:
* For initial rules, this is the same as the previous correction step (1).
* If (⋀_i∈ωϕ_i)^* originates as the principal formula of a logical rule, i.e. as the conclusion of an application of (⋀,𝗅) inferring G'∪{Γ',(⋀_i∈ωϕ_i)^*⇒Δ'} from G'∪{Γ',ϕ_j⇒Δ'}, then replace any such application by an application of (𝖼𝗎𝗍) with cut-formula ϕ_j which infers G∪G̅'∪{Γ,Γ'⇒Δ'} from the corrected premise G∪(G'∪{Γ,Γ',ϕ_j⇒Δ'})[Γ/(⋀_i∈ωϕ_i)^*] and from G∪G̅'∪{Γ,Γ'⇒ϕ_j} as provided by d_0^-1, where d_0^-1 is obtained via Lemma <ref> from d_0⊢^sG∪{Γ⇒⋀_i∈ωϕ_i}.
* If (⋀_i∈ωϕ_i)^* originates from an internal weakening, i.e. from an application of (𝗐,𝗅) inferring G'∪{Γ',(⋀_i∈ωϕ_i)^*⇒Δ'} from G'∪{Γ'⇒Δ'}, replace any such application by a derivation of G∪G̅'∪{Γ,Γ'⇒Δ'} from G∪G̅'∪{Γ'⇒Δ'} using stepwise internal weakenings for the members of Γ.
* If (⋀_i∈ωϕ_i)^* originates from an external weakening, i.e. from an application of (𝖤𝖶) inferring G'∪{Γ',(⋀_i∈ωϕ_i)^*⇒Δ'} from G', replace any such application by an application of (𝖤𝖶) inferring G∪G̅'∪{Γ,Γ'⇒Δ'} from G∪G̅'.

As ϕ is again not the cut-formula of any cut in d_1, one can verify the correctness of the resulting proof d by transfinite induction on |T|⊕w(T)⊕W(T) for subderivations T of d_1. Using the same reasoning as before, we also again derive ρ(d)≤|ϕ| as well as |T'|≤2⊗(|d_0^-1|⊕|T|) where T' is the replacement of any rooted subderivation T of d_1 after the replacement and correction procedure, as before. The latter implies |d|≤2⊗(|d_0|⊕|d_1|) as before.

The cases of ϕ=ϕ_0∨ϕ_1 and ϕ=∃xϕ_0 are similar to that of ϕ=⋁_i∈ωϕ_i (see <cit.> for the latter). The quantifier case in particular uses Lemma <ref>. Similarly, the cases of ϕ=ϕ_0∧ϕ_1, ϕ=∀xϕ_0 and ϕ=ϕ_0→ϕ_1 are analogous to that of ϕ=⋀_i∈ωϕ_i (see again <cit.> for the last). As with ∃, the ∀-case uses Lemma <ref>.

Let d⊢^sH with ρ(d)≤v+ω^z. Then there is a derivation d'⊢^sH with |d'|≤χ^z(|d|) and ρ(d')≤v.

The theorem is proved by induction on the lexicographically ordered pair (z,|d|). So assume the claim for any (ẑ,|d̂|) with ẑ<z or ẑ=z and |d̂|<|d|. We divide the proof on whether the last inference rule of d was a cut, a weakening or neither.

Suppose the last rule was not a cut and not a weakening. Let k≤ω be the arity of the last rule and let d_i, i<k, be the direct predecessors with d_i⊢^sH_i. Naturally, |d_i|<|d| for all i<k as the last rule was not a weakening and also

v+ω^z≥ρ(d)≥ρ(d_i)

for all i<k by definition. Using the induction hypothesis on d_i, we get derivations d_i'⊢^sH_i with |d_i'|≤χ^z(|d_i|) and ρ(d_i')≤v. Using the same last rule as in d, we combine the d_i' to a proof d'⊢^sH. First, we have ρ(d')≤v as the last rule was not a cut. Further, we get

|d_i'|≤χ^z(|d_i|)<χ^z(|d|)

using that |d_i|<|d| and that χ^z is increasing. As |d'| is the least ordinal α with |d'_i|<α for all i, as the last rule was not a weakening, we get |d'|≤χ^z(|d|).

Suppose the last rule was a cut. Then we get two preceding derivations

d_0⊢^sG∪{Γ⇒ϕ} and d_1⊢^sG∪{Γ,ϕ⇒Δ}

and by definition, we have

v+ω^z≥ρ(d)=max{|ϕ|+1,ρ(d_0),ρ(d_1)}

and, as the last rule was not a weakening, we get |d_i|<|d|.
We can apply the induction hypothesis to d_0,d_1 to get derivations d'_0⊢^sG∪{Γ⇒ϕ} and d_1'⊢^sG∪{Γ,ϕ⇒Δ} with |d_i'|≤χ^z(|d_i|) and ρ(d_i')≤v.

If z=0, then ρ(d)≤v+ω^z=v+1 and therefore |ϕ|≤v. Now, either (i) max{ρ(d_0'),ρ(d_1')}≤|ϕ|, or (ii) max{ρ(d_0'),ρ(d_1')}>|ϕ|.

For (i), we may apply Lemma <ref> to get a derivation d'⊢^sG∪{Γ⇒Δ} with ρ(d')≤|ϕ|≤v and

|d'|≤2⊗(χ^0(|d_0|)⊕χ^0(|d_1|))≤2⊗(4^⊗|d_0|⊕4^⊗|d_1|)≤2⊗(2⊗4^⊗max{|d_0|,|d_1|})≤4^⊗(max{|d_0|,|d_1|}+1)=4^⊗|d|=χ^0(|d|)

where we in particular used that ⊕ and ⊗ are increasing in both arguments as well as commutative, associative and distributive, that χ^0 is increasing, that α⊕1=α+1, and that max{|d_0|,|d_1|}+1=|d| by definition.

For (ii), we combine d_0' and d_1' using cut on ϕ to a derivation d'⊢^sG∪{Γ⇒Δ}. Now, we get

ρ(d')=max{|ϕ|+1,ρ(d_0'),ρ(d_1')}=max{ρ(d_0'),ρ(d_1')}≤v

as |ϕ|+1≤max{ρ(d_0'),ρ(d_1')} by the assumption (ii) and as ρ(d_i')≤v from the induction hypothesis. Further, we have

|d'|=max{|d_0'|,|d_1'|}+1≤max{4^⊗|d_0|,4^⊗|d_1|}+1≤4^⊗|d|=χ^0(|d|)

which completes the case for (ii).

If z≠0, then there are y<z and k∈ℕ such that |ϕ|≤v+ω^y·k. We can combine d_0' and d_1' using cut to a derivation d̂⊢^sG∪{Γ⇒Δ}. This immediately gives |d̂|≤max{χ^z(|d_0|),χ^z(|d_1|)}+1 and ρ(d̂)≤v+ω^y·k. As y<z, we can apply the induction hypothesis k times and get a derivation d'⊢^sG∪{Γ⇒Δ} with ρ(d')≤v as well as

|d'|≤(χ^y)^(k)(max{χ^z(|d_0|),χ^z(|d_1|)}+1).

We have |d_i|<|d| and thus χ^z(|d_i|)<χ^z(|d|) which implies χ^z(|d_i|)+1≤χ^z(|d|). This gives max{χ^z(|d_0|),χ^z(|d_1|)}+1≤χ^z(|d|). Now, if α≤χ^z(β) and y<z, then

χ^y(α)≤χ^y(χ^z(β))=χ^z(β)

where the first inequality follows from χ^y being increasing and the latter equality follows from the definition of χ^z(β) being the β-th simultaneous solution for γ=χ^x(γ) for all x<z. This implies, in combination with the above, that

|d'|≤(χ^y)^(k)(max{χ^z(|d_0|),χ^z(|d_1|)}+1)≤χ^z(|d|).

From Lemma <ref>, cut elimination immediately follows:

For any derivation d⊢H, there exists a cut-free derivation d'⊢H.

Further, we of course get a bound on |d'| which, in the case of finitary proofs with finitary formulas, matches that of Baaz and Ciabattoni <cit.>.

§ THE RANGE OF THE RESULTS AND EXTENSIONS

We want to use this section to give an overview of some other topics extending the previous ones and some initial observations, at varying depth, regarding those.

§.§ Extensions of the Completeness Results

At first, it should be noted that the completeness theorems for 𝖦_ω_1 and 𝖦_ω_1,ω do not generalize to uncountable sets Γ. Consider the following two notions of compactness from <cit.>:

Let ⊩⊆𝒫(ℒ_κ,λ)×ℒ_κ,λ or ⊩⊆𝒫(ℒ_κ)×ℒ_κ be a relation. Then ⊩ is called
* weakly compact if for every Γ,ϕ with at most κ many different atomics, there is a Δ⊆Γ of size <κ such that Γ⊩ϕ implies Δ⊩ϕ,
* compact if for every Γ,ϕ, there is a Δ⊆Γ of size <κ such that Γ⊩ϕ implies Δ⊩ϕ.

Indeed, the relations ⊢_𝒢^(D)_ω_1 and ⊢_𝒢^(D)_ω_1,ω are compact as any proof only involves countably many formulas. However, as Aguilera shows in <cit.>, the consequence relation ⊨_𝖦_ω_1,ω (i.e. ⊨_[0,1]_ℝ) is not even weakly compact. This follows from the following general result:

If ⊨_𝖦_κ,ω is weakly compact, then κ is weakly compact, i.e. it is strongly inaccessible and any tree of size κ, such that every level has <κ nodes, has a branch of length κ.

As ω_1 is not strongly inaccessible, it is not weakly compact and therefore ⊨_𝖦_ω_1,ω is not either.
It turns out that Aguilera's proof is, in itself, “propositional" and can be straightforwardly adapted to 𝖦_κ. We therefore have the following result which implies the same limitation for 𝒢^D_ω_1: If ⊨_𝖦_κ is weakly compact, then κ is weakly compact. We omit the proof as it is, as hinted above, literally that of Aguilera <cit.>, rephrased in the propositional language. Before moving on to other topics, we want to mention a peculiar application of the above result of Aguilera and its propositional version. Although we don't dive into the rich (and difficult) topic of interpolation for Gödel logics, we can use the impossibility of a proof calculus for uncountable premises to show the following negative result. There is no countable Δ⊆ℒ_ω_1,ω such that for any ϕ,ψ∈ℒ_ω_1,ω, ⊨_𝖦_ω_1,ωϕ→ψ implies that there is a δ∈Δ with ⊨_𝖦_ω_1,ωϕ→δ and ⊨_𝖦_ω_1,ωδ→ψ. Similarly for ℒ_ω_1 and 𝖦_ω_1. Suppose such a Δ existed. Constructing the Lindenbaum-Tarski algebra over ℒ_ω_1,ω, the set {[δ]|δ∈Δ} would form a countable dense subset of 𝐋𝐓^Γ, even for uncountable Γ. As 𝐋𝐓^Γ is therefore separable, it embeds into [0,1]_ℝ with an embedding preserving infima and suprema. If we assume Γ⊬_𝒢^D_ω_1,ωϕ with said uncountable Γ, then this embedding would provide a countermodel by Lemma <ref>, verifying Γ⊭_𝖦_ω_1,ωϕ. Thus, we would have completeness of 𝒢^D_ω_1,ω for 𝖦_ω_1,ω w.r.t. uncountable Γ which is a contradiction to Proposition <ref>. The argument works similarly for the propositional 𝖦_ω_1. Although it is probably expected that there is no countable set of interpolants in this infinitary case (as we already work over an uncountable language), it is maybe still instructive to note how cardinality considerations can have an impact on these types of questions. In a similar vein, a generalization of the results to ℒ_ω_1,ω_1 is problematic. As is well-known by a theorem of Scott (and Karp, see <cit.>[As Karp remarks, her proof is based on an outline circulated by Scott in 1960 which was not published.]), the set of classical validities over ℒ_ω_1,ω_1 is not definable in H(ω_1), the collection of hereditarily countable sets, and thus in particular not Σ_1 on H(ω_1), a property which would, however, be implied by the existence of a complete classical proof calculus with proofs of countable length. These results should generalize to the Gödel case: Is the set of theorems of 𝖦_ω_1,ω_1 non-definable over H(ω_1)? We also want to note that there is a different definition of consequence common in the context of Gödel logics, which we may define by Γ⊨^≤_𝖦_κϕ if inf v[Γ]≤ v(ϕ) for any evaluation v for the propositional infinitary case and similarly in the first-order case by Γ⊨^≤_𝖦_κ,λϕ if infℑ[Γ]≤ℑ(ϕ) for any interpretation ℑ. In a finitary context, it can be easily seen that ⊨^≤ is equivalent to ⊨, and we can show a similar statement here if we restrict to countable sets. The following results are thereby natural generalizations of the finitary cases as given in <cit.> and the proofs for both results are essentially the same (and thus omitted). Let 𝐀 be a complete linear Heyting algebra, x∈ A, and let v:ℒ_κ→𝐀 be any evaluation. Then, for v_x(p):=v(p) if v(p)<^𝐀x, 1^𝐀 otherwise, with p∈ Var_κ, the unique extension v_x:ℒ_κ→𝐀 to an evaluation satisfies: x∉v[sub(ϕ)] implies v_x(ϕ)=v(ϕ) if v(ϕ)<^𝐀x, 1^𝐀 otherwise, for any ϕ∈ℒ_κ. The following first-order version is rather intricate to formulate if one wants to guarantee a similar level of generality as in the finitary case.
We give it here in its full strength but will, in the following, mostly use the special case with κ=ω_1 and λ=ω where the conditions simplify considerably (in particular the definition of Val_ℑ(ϕ)). Let 𝐀 be a complete linear Heyting algebra, x∈ A, and let 𝔐 be an 𝐀-valued model. Define 𝔐_x from 𝔐 by replacing P^𝔐 with P^𝔐_x(m_1,…,m_n):=P^𝔐(m_1,…,m_n) if P^𝔐(m_1,…,m_n)<^𝐀x, 1^𝐀 otherwise, where P is an n-ary predicate. We write 𝔍_x=(𝔐_x,w) and Val_𝔍(ϕ):={𝔍'(ψ)|ψ∈sub(ϕ) and 𝔍'=(…(𝔍[f_1/X_1])[f_2/X_2]…)[f_n/X_n] where X_i⊆ Var_κ∩sub(ϕ) with | X_i| <λ and f_i:X_i→ M} given an interpretation 𝔍=(𝔐,w), where 𝔍[f/X] denotes the interpretation that reinterprets the variables in X according to f and otherwise coincides with 𝔍. Then, for a given v:Var_κ→ M and ℑ=(𝔐,v), we have x∉Val_ℑ'(ϕ) implies ℑ'_x(ϕ)=ℑ'(ϕ) if ℑ'(ϕ)<^𝐀x, 1^𝐀 otherwise, for any ϕ∈ℒ_κ,λ and any interpretation ℑ'=(…(ℑ[f_1/X_1])[f_2/X_2]…)[f_n/X_n] where X_i⊆ Var_κ with | X_i| <λ and f_i:X_i→ M. As a direct consequence, we obtain the following result: For any countable Γ∪{ϕ}⊆ℒ_ω_1, Γ⊨^≤_𝖦_ω_1ϕ iff Γ⊨_𝖦_ω_1ϕ. Similarly, for any countable Γ∪{ϕ}⊆ℒ_ω_1,ω, we have Γ⊨^≤_𝖦_ω_1,ωϕ iff Γ⊨_𝖦_ω_1,ωϕ.
§.§ Other Sets of Truth Values
As is common in propositional and first-order Gödel logics, one could consider closed sets V with {0,1}⊆ V⊊ [0,1] instead of [0,1] as truth-value sets, thereby forming the propositional variants 𝖦^V_κ and the first-order variants 𝖦^V_κ,λ by extending the semantic definitions from before. The most common instances of V are among V_ℝ:=[0,1], V_0:={0}∪[1/2,1], V_↓:={1/k| k≥ 1}∪{0}, V_↑:={1-1/k| k≥ 1}∪{1}, V_n:={1-1/k| 1≤ k≤ n-1}∪{1} with n≥ 2, following the selection from <cit.>. In that notation, we have 𝖦_κ=𝖦^V_ℝ_κ and 𝖦_κ,λ=𝖦^V_ℝ_κ,λ. A few easy observations from <cit.> directly carry over to the infinitary case. Let 𝖦^V be either 𝖦^V_κ or 𝖦^V_κ,λ for arbitrary κ,λ (with λ≤κ). The following relations hold: * 𝖦^V_ℝ=⋂_V𝖦^V * 𝖦^V_n⊋𝖦^V_n+1 * 𝖦^V_n⊋𝖦^V_↑⊋𝖦^V_ℝ * 𝖦^V_n⊋𝖦^V_↓⊋𝖦^V_ℝ * 𝖦^V_0⊋𝖦^V_ℝ. We omit the proof as it is essentially a replica of the analogous result in the finitary first-order case (see <cit.>). Still, we want to emphasize the following differences to the finitary case: already in the finitary propositional case, 𝖦^V_↑_ω, 𝖦^V_↓_ω and 𝖦^V_0_ω differ in entailment (see <cit.>) while they coincide, as observed first by Dummett <cit.>, in tautologies, unlike the finitary first-order versions. So, it is natural that in this infinitary case the propositional (and first-order) versions differ in entailment as well. But, in the infinitary case, the propositional variants already differ with respect to tautologies and, moreover, the witnessing (non-)tautologies are natural analogues of the finitary first-order examples: consider C^↑:=⋁_i∈ω(p_i→⋀_j∈ωp_j), C^↓:=⋁_i∈ω(p_i∨(⋁_j∈ωp_j→ p_i)) and 𝖨𝖲𝖮_0:=⋀_i∈ω¬¬ p_i→¬¬⋀_i∈ωp_i. Then C^↑ is valid in 𝖦^V_↑ but not in 𝖦^V_↓. C^↓ is valid in 𝖦^V_↑ and 𝖦^V_↓. Both are not valid in 𝖦^V_0 and 𝖦^V_ℝ. 𝖨𝖲𝖮_0 is valid in 𝖦^V_0 but not in 𝖦^V_ℝ. Further, we can give the following analogue of the relationship between 𝖦^V_↑ and the finite-valued 𝖦^V_n. We have 𝖦^V_↑_κ=⋂_n≥ 2𝖦^V_n_κ and 𝖦^V_↑_κ,λ=⋂_n≥ 2𝖦^V_n_κ,λ. Again, let 𝖦^V be either 𝖦^V_κ or 𝖦^V_κ,λ. Item (3) of Proposition <ref> gives 𝖦^V_↑⊆⋂_n≥ 2𝖦^V_n. For the converse, suppose that Γ⊭_𝖦^V_↑ϕ, i.e. there is an evaluation v such that v[Γ]⊆{1} but v(ϕ)<1 in the propositional case or an interpretation ℑ such that ℑ[Γ]⊆{1} but ℑ(ϕ)<1 in the first-order case. As v/ℑ evaluate into V_↑, there is a k such that v(ϕ)=1-1/k or ℑ(ϕ)=1-1/k. Let x∈ [0,1] be such that 1-1/k<x<1-1/(k+1) and x∉v[sub(Γ∪{ϕ})] in the propositional case or such that x∉Val_ℑ(Γ∪{ϕ}) in the first-order case.
We form v_x or ℑ_x by Lemma <ref> or Lemma <ref>, respectively. The above choice of x is such that v_x[Γ]⊆{1} but v_x(ϕ)<1 in the propositional case and ℑ_x[Γ]⊆{1} but ℑ_x(ϕ)<1 in the first-order case by the previous lemmas. But, by the choice of x, we have that v_x or ℑ_x evaluate into V_k+1 which gives Γ⊭_𝖦^V_k+1ϕ. By the results of <cit.>, the status quo on complete proof calculi in the finitary setting is very clear-cut: 𝖦^V_ω,ω is axiomatizable iff V is finite or uncountable with 0 either contained in the perfect kernel of V or isolated. In particular, already the tautologies of 𝖦^V_↑_ω,ω and 𝖦^V_↓_ω,ω are not recursively enumerable. On the propositional side, while the tautologies of all 𝖦^V_ω are axiomatizable (again, see <cit.>), the only axiomatizable entailment relations are 𝖦^V_n_ω and 𝖦^V_ℝ_ω (see <cit.>). Now, the situation is different in the infinitary cases. In the following, we will obtain analogous axiomatizations for the instances of V which were axiomatizable already in the finitary setting, but we further obtain infinitary axiomatizations of 𝖦^V_↑_ω_1,ω and 𝖦^V_↑_ω_1. We do not know the status of 𝖦^V_↓_ω_1,ω, 𝖦^V_↓_ω_1 or that of any other V in particular, but in the finitary case, as shown by Hájek <cit.>, the tautologies of 𝖦^V_↓_ω,ω are not arithmetical. So we are left with the following question regarding the other truth-value sets: Do any 𝖦^V_ω_1 or 𝖦^V_ω_1,ω have (countable) infinitary axiomatizations for any V not considered here? To approach these axiomatizability questions, we follow the general route of <cit.> which relies on tools from the theory of Polish spaces, like the Cantor-Bendixson theorem, which we briefly want to recall. In the following, we write 𝐀_V for the Heyting algebra associated with a Gödel set V. Note that every V, as a closed subset of ℝ, is a Polish space. A subset P of ℝ is perfect if it is closed and every point is a limit point in the topology induced by ℝ. Any Polish space X can be partitioned as X=P∪ C with P perfect and C countable and open. The following result is then the central connection between the Cantor-Bendixson theorem and evaluations over Gödel sets. Let M⊆[0,1] be countable and P⊆ [0,1] be perfect. Then there is a strictly monotone h:M→ P which preserves any infima and suprema existing in M and, if inf M∈ M, then h(inf M)=inf P.
§.§.§ V is finite
We consider the axiom scheme 𝖥𝖨𝖭(n):=(ϕ_0→ϕ_1)∨(ϕ_1→ϕ_2)∨…∨(ϕ_n-1→ϕ_n) as in the finitary axiomatizations. For any countable Γ∪{ϕ}⊆ℒ_ω_1, we have Γ⊢_𝒢^D_ω_1+𝖥𝖨𝖭(n)ϕ iff Γ⊨_𝖦^V_n_ω_1ϕ. Similarly, for any countable Γ∪{ϕ}⊆ℒ_ω_1,ω where all formulas of Γ are closed, we have Γ⊢_𝒢^D_ω_1,ω+𝖥𝖨𝖭(n)ϕ iff Γ⊨_𝖦^V_n_ω_1,ωϕ. In fact, 𝖥𝖨𝖭(n) can, in both cases, be replaced by 𝖥𝖨𝖭^a(n): all atomic instances of 𝖥𝖨𝖭(n). Soundness is routine. For the converse, define ℒ_A:=frag(Γ∪{ϕ}) and write ℒ^a_A for the atomics of ℒ_A. We consider Π:={(ϕ_0→ϕ_1)∨(ϕ_1→ϕ_2)∨…∨(ϕ_n-1→ϕ_n)|ϕ_i∈ℒ^a_A}. Π is countable as Γ, and therefore ℒ_A, is countable. Now suppose Γ⊬_(𝒢^D_ω_1+𝖥𝖨𝖭(n))(ℒ_A)ϕ. Then clearly Γ∪Π⊬_𝒢^D_ω_1(ℒ_A)ϕ and by strong completeness of 𝒢^D_ω_1(ℒ_A), we have Γ∪Π⊭_[0,1]_ℝ(ℒ_A)ϕ, i.e. there is an evaluation v:ℒ_A→ [0,1] with v[Γ∪Π]⊆{1} but v(ϕ)<1. Now, the set v[ℒ^a_A] contains at most n elements. If not, then there are formulas ϕ_0,…,ϕ_n∈ℒ^a_A with v(ϕ_i)>v(ϕ_i+1). In that case, we have v((ϕ_0→ϕ_1)∨(ϕ_1→ϕ_2)∨…∨(ϕ_n-1→ϕ_n))<1 which is a contradiction to v[Π]⊆{1}. Thus, we can write v[ℒ^a_A]⊆{0,v_1,…,v_n-2,1} with v_i<v_i+1. By induction on the structure of formulas, we also get v[ℒ_A]⊆{0,v_1,…,v_n-2,1}.
We define a function h:v[ℒ_A]→ V_n by setting h(0):=0, h(1):=1 and h(v_i):=1-1/(i+1). v[ℒ_A] is, with its order by <, a Heyting algebra and therefore h is an isomorphism of Heyting algebras and in particular preserves infima and suprema. Lemma <ref> gives that h∘ v is an ℒ_A-evaluation with (h∘ v)[Γ]⊆{1} but (h∘ v)(ϕ)<1. As h∘ v evaluates into V_n, we get Γ⊭_𝐀_V_n(ℒ_A)ϕ. Therefore also Γ⊭_𝖦^V_n_ω_1ϕ, as 𝐀_V_n is complete. For the first-order instances, we instead consider Π:={∀ x_i_1…∀ x_i_m((ϕ_0→ϕ_1)∨…∨(ϕ_n-1→ϕ_n))|ϕ_j∈ℒ^a_A, var(ϕ_j)⊆{x_i_1,…,x_i_m}} similar to <cit.> where ℒ_A^a now represents the atomic formulas of the first-order fragment ℒ_A. For countable fragments ℒ_A, Π is countable and, as every atomic formula has only a finite number of variables, every formula in Π is closed. Then apply Lemma <ref> in place of Lemma <ref> as in <cit.>.
§.§.§ V is V_↑
The strength of infinitary logics is of course that we have infinitary disjunctions available which we can use to combine the various finitary axioms 𝖥𝖨𝖭(n). More precisely, we define the scheme 𝖥𝖨𝖭 by ⋁_n≥ 2⋀_k∈ω⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1). In the first-order case, we will additionally consider a seemingly weakened version 𝖥𝖨𝖭^a given by ⋁_n≥ 2⋀_k∈ω⋁_i=0^n-1∀ x^n,k_j_1…∀ x^n,k_j_m(ϕ^n,k_i→ϕ^n,k_i+1) where all ϕ^n,k_i are atomic with var(ϕ^n,k_i)⊆{x^n,k_j_1,…,x^n,k_j_m}. Now, 𝖥𝖨𝖭 can be used to obtain an axiomatization of ⋂_n≥ 2𝖦^V_n_ω_1 or ⋂_n≥ 2𝖦^V_n_ω_1,ω for countable sets. In combination with the previous Proposition <ref>, we then obtain an axiomatization of V_↑. For any countable Γ∪{ϕ}⊆ℒ_ω_1, we have Γ⊢_𝒢^D_ω_1+𝖥𝖨𝖭ϕ iff Γ⊨_𝖦^V_↑_ω_1ϕ. Similarly, for countable Γ∪{ϕ}⊆ℒ_ω_1,ω where all formulas in Γ are closed, we have Γ⊢_𝒢^D_ω_1,ω+𝖥𝖨𝖭ϕ iff Γ⊨_𝖦^V_↑_ω_1,ωϕ. 𝖥𝖨𝖭 is valid in V_↑: suppose v:ℒ_ω_1→ V_↑ is such that v(⋁_n≥ 2⋀_k∈ω⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1))<1. Thus, for some α, we have v(⋀_k∈ω⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1))≤α<1 for any n≥ 2. Let k be such that v(⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1))<1. Such a k exists as α<1. For such a k, we get v(ϕ_0^n,k)>v(ϕ_1^n,k)>…>v(ϕ^n,k_n) and, as v evaluates into V_↑, we have v(ϕ_0^n,k)≥ 1-1/(n+1) and v(ϕ_1^n,k)≥ 1-1/n and therefore v(ϕ_0^n,k→ϕ_1^n,k)≥ 1-1/n. This yields v(⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1))≥ 1-1/n for any such k and we get v(⋀_k∈ω⋁_i=0^n-1ψ^n,k) =min{inf_v(ψ^n,k)<1v(ψ^n,k),inf_v(ψ^n,k)=1v(ψ^n,k)}=inf_v(ψ^n,k)<1v(ψ^n,k)≥ 1-1/n for any n≥ 2 where we write ψ^n,k:=⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1). But this implies v(⋁_n≥ 2⋀_k∈ω⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1))=1 in contradiction to our assumption. For the converse, suppose Γ⊬_𝒢^D_ω_1+𝖥𝖨𝖭ϕ. Then we have Γ⊬_𝒢^D_ω_1+𝖥𝖨𝖭(n)ϕ for some n: if Γ⊢_𝒢^D_ω_1+𝖥𝖨𝖭(n)ϕ for all n, then there are countably many instances ⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1) (k∈ω) of 𝖥𝖨𝖭(n) such that Γ⊢_𝒢^D_ω_1⋀_k∈ω⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1)→ϕ for any n and thus Γ⊢_𝒢^D_ω_1⋁_n≥ 2⋀_k∈ω⋁_i=0^n-1(ϕ^n,k_i→ϕ^n,k_i+1)→ϕ by (Rω)_1. The premise is an instance of 𝖥𝖨𝖭 which implies that Γ⊢_𝒢^D_ω_1+𝖥𝖨𝖭ϕ, a contradiction. Therefore, there is an n with Γ⊬_𝒢^D_ω_1+𝖥𝖨𝖭(n)ϕ and thus, using that Γ is countable, we get (Γ,ϕ)∉⋂_n≥ 2𝖦^V_n_ω_1 by Theorem <ref> which implies Γ⊭_𝖦^V_↑_ω_1ϕ by Proposition <ref>. The first-order case is very similar. Soundness follows in the same way and the converse follows by the following slightly modified argument: supposing Γ⊬_𝒢^D_ω_1,ω+𝖥𝖨𝖭ϕ, we of course also have Γ⊬_𝒢^D_ω_1,ω+𝖥𝖨𝖭^aϕ. As in the propositional case, we get Γ⊬_𝒢^D_ω_1,ω+𝖥𝖨𝖭^a(n)ϕ for some n where it is essential that all instances of 𝖥𝖨𝖭^a are closed to be able to use the deduction theorem.
We get Γ⊭_𝖦^V_↑_ω_1,ωϕ using Proposition <ref>.
§.§.§ 0 contained in the perfect kernel
We obtain the following infinitary version of the well-known finitary result from <cit.>. Let V be a Gödel set, P its perfect kernel and W=V∪[inf P,1]. For any fragment ℒ_A (of ℒ_ω_1 or ℒ_ω_1,ω) and any countable Γ∪{ϕ}⊆ℒ_A, we have Γ⊨_𝐀_V(ℒ_A)ϕ iff Γ⊨_𝐀_W(ℒ_A)ϕ. We only present the propositional case which is essentially contained in <cit.>. The first-order case is the same with Lemma <ref> replaced by Lemma <ref> and Lemma <ref> replaced by Lemma <ref>, respectively. Since V⊆ W, we have that Γ⊨_𝐀_W(ℒ_A)ϕ implies Γ⊨_𝐀_V(ℒ_A)ϕ. Let v:ℒ_A→𝐀_W be such that v[Γ]⊆{1} but v(ϕ)<1. As Γ is countable, there is an x∈ [0,1] with v(ϕ)<x<1 and where x∉v[sub(Γ∪{ϕ})]. With v_x as in Lemma <ref>, we have v_x(ψ)=v(ψ) if v(ψ)<x, 1 otherwise, for any ψ∈Γ∪{ϕ}. Set M:={v_x(ψ)|ψ∈ℒ_A}∪{1} and M_0:=M∩ [0,inf P) as well as M_1:=(M∩ [inf P,x])∪{inf P}. Lemma <ref> gives a strictly monotone h:M_1→ P preserving all infima and suprema such that h(inf M_1)=inf P. We define g:y↦ y if y∈ [0,inf P], h(y) if y∈ [inf P,x], 1 if y=1, for y∈ M. g preserves infima and suprema and is strictly monotone with g(0)=0 and g(1)=1. g therefore is a homomorphism of Heyting algebras preserving infima and suprema and, by Lemma <ref>, g∘ v_x is an evaluation. But g∘ v_x maps into V and (g∘ v_x)[Γ]⊆{1} but g(v_x(ϕ))=g(v(ϕ))<1. This immediately yields the following completeness result. Let V be a Gödel set where 0 is contained in the perfect kernel. For any fragment ℒ_A (of ℒ_ω_1 or ℒ_ω_1,ω) and countable Γ∪{ϕ}⊆ℒ_A where Γ is closed in the first-order case, we have Γ⊢_𝒢^D(ℒ_A)ϕ iff Γ⊨_𝐀_V(ℒ_A)ϕ. Here, we write 𝒢^D(ℒ_A) for 𝒢^D_ω_1(ℒ_A) or 𝒢^D_ω_1,ω(ℒ_A), respectively, depending on the choice of language.
§.§.§ V uncountable and 0 isolated
We now turn to the case of an isolated 0. For the first-order case, we have to additionally consider the quantifier version 𝖰𝖨𝖲𝖮_0 of 𝖨𝖲𝖮_0 (from which it was derived) as e.g. seen in <cit.>: ∀ x¬¬ϕ→¬¬∀ xϕ. The following lemma is an easy adaption of the finitary first-order case from <cit.>, which only mentions the latter statement regarding the quantifiers. For any ϕ_i, we have ⊢_𝒢^D_ω_1+𝖨𝖲𝖮_0¬⋀_i∈ωϕ_i→⋁_i∈ω¬ϕ_i. Similarly for 𝒢^D_ω_1,ω. In the first-order case, we additionally have ⊢_𝒢^D_ω_1,ω+𝖰𝖨𝖲𝖮_0¬∀ xϕ→∃ x¬ϕ for any ϕ and x. Let V be an uncountable Gödel set with 0 isolated. Let ℒ_A be a countable fragment of ℒ_ω_1 or ℒ_ω_1,ω where ⋀_i∈ωϕ_i∈ℒ_A iff ⋁_i∈ωϕ_i∈ℒ_A and where ⋁_i∈ωϕ_i∈ℒ_A implies ⋁_i∈ω¬ϕ_i∈ℒ_A. For any Γ∪{ϕ}⊆ℒ_A, we have Γ⊢_(𝒢^D_ω_1+𝖨𝖲𝖮_0)(ℒ_A)ϕ iff Γ⊨_𝐀_V(ℒ_A)ϕ in the propositional case and Γ⊢_(𝒢^D_ω_1,ω+𝖨𝖲𝖮_0+𝖰𝖨𝖲𝖮_0)(ℒ_A)ϕ iff Γ⊨_𝐀_V(ℒ_A)ϕ in the first-order case where Γ is assumed to be closed. Soundness is clear. For the other direction, note that we know by Lemma <ref> that Γ⊨_𝐀_V(ℒ_A)ϕ iff Γ⊨_𝐀_V∪[inf P,1](ℒ_A)ϕ where P is the perfect kernel of V. We thus assume that [inf P,1]⊆ V. We define Π:={¬⋀_i∈ωϕ_i→⋁_i∈ω¬ϕ_i|⋀_i∈ωϕ_i∈ℒ_A}. By the assumptions on ℒ_A, we have Π⊆ℒ_A and, in particular, Π is countable. Suppose that Γ⊬_(𝒢^D_ω_1+𝖨𝖲𝖮_0)(ℒ_A)ϕ. We now either have Π∪Γ⊨_[0,1]_ℝ(ℒ_A)ϕ or Π∪Γ⊭_[0,1]_ℝ(ℒ_A)ϕ. For the former, we get Π∪Γ⊢_𝒢^D_ω_1(ℒ_A)ϕ by completeness. As, by Lemma <ref>, (𝒢^D_ω_1+𝖨𝖲𝖮_0)(ℒ_A) proves every element of Π, we have Γ⊢_(𝒢^D_ω_1+𝖨𝖲𝖮_0)(ℒ_A)ϕ which is a contradiction to our assumption. Therefore we have Π∪Γ⊭_[0,1]_ℝ(ℒ_A)ϕ, i.e. there is a v:ℒ_A→ [0,1] with v[Π]∪ v[Γ]⊆{1} and v(ϕ)<1. We define h:x↦ 0 if x=0, inf P+x·(1-inf P) otherwise. Note that (h∘ v)[ℒ_A]⊆ V. Uniquely extend v_h:x↦ h(v(x)) for x∈ Var_ω_1∩ℒ_A to v_h on ℒ_A.
Then we have v_h(ψ)=h(v(ψ)) for any ψ∈ℒ_A which gives the claim. This can be proven by induction on ψ, see in particular <cit.> for the similar finitary case, where v[Π]⊆{1} is used to handle the ⋀-case. The proof in the first-order case is very similar. We then consider Π:={∀x(¬⋀_i∈ωϕ_i→⋁_i∈ω¬ϕ_i)|⋀_i∈ωϕ_i∈ℒ_A, x∈ (Var_A)^n, n∈ℕ}∪{∀y(¬∀ xϕ→∃ x¬ϕ)|ϕ∈ℒ_A, x∈ Var_A, y∈ (Var_A)^n, n∈ℕ} and one proceeds as above and obtains an ℑ with ℑ[Π]∪ℑ[Γ]⊆{1} but ℑ(ϕ)<1. As Π is not closed, note in particular Remark <ref>: closedness is not needed for both directions of the completeness results, only for the soundness direction. Note that Π is countable. With the same h defined as before, one similarly defines ℑ'_h for ℑ'=ℑ[m/x] (the interpretation reinterpreting x as m) by changing P^𝔐 to h∘ P^𝔐 for predicates P in the underlying model 𝔐 and the key point is to now establish ℑ'_h(ψ)=h(ℑ'(ψ)) for any ψ∈ℒ_A and any such ℑ' where, for the ⋀- and ∀-cases, it is important that ℑ[Π]⊆{1} and that any element of Π is universally quantified with arbitrary but finitely many quantifiers such that ℑ[Π]⊆{1} implies ℑ'(¬⋀_i∈ωϕ_i→⋁_i∈ω¬ϕ_i)=1 and ℑ'(¬∀ xϕ→∃ x¬ϕ)=1 even for any ℑ' as above. As before, we can now lift the above result to arbitrary ℒ_A if we restrict to countably many assumptions. Let V be an uncountable Gödel set with 0 isolated and let ℒ_A be an arbitrary fragment of ℒ_ω_1 or ℒ_ω_1,ω with the same additional closure conditions as in Theorem <ref>, but let Γ∪{ϕ}⊆ℒ_A be countable with Γ closed in the first-order case. For any such Γ∪{ϕ}⊆ℒ_A, we have Γ⊢_(𝒢^D_ω_1+𝖨𝖲𝖮_0)(ℒ_A)ϕ iff Γ⊨_𝐀_V(ℒ_A)ϕ in the propositional case and Γ⊢_(𝒢^D_ω_1,ω+𝖨𝖲𝖮_0+𝖰𝖨𝖲𝖮_0)(ℒ_A)ϕ iff Γ⊨_𝐀_V(ℒ_A)ϕ in the first-order case. The proof follows the same type of argument as in Corollary <ref> where we now consider the smallest fragment containing Γ∪{ϕ} with the additional closure properties. The important point here is that for a countable Γ, this fragment is again countable. Again, the above gives in particular Γ⊢_𝒢^D_ω_1+𝖨𝖲𝖮_0ϕ iff Γ⊨_𝖦^V_ω_1ϕ for countable Γ and Γ⊢_𝒢^D_ω_1,ω+𝖨𝖲𝖮_0+𝖰𝖨𝖲𝖮_0ϕ iff Γ⊨_𝖦^V_ω_1,ωϕ for countable and closed Γ where 0 is isolated in V.
§.§ Acknowledgments
I want to thank Matthias Baaz for helpful discussions of the topics of this paper.
[email protected] ^1Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19104 ^2Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, 19104 Network neuroscience is the emerging discipline concerned with investigating the complex patterns of interconnections found in neural systems, and with identifying principles with which to understand them. Within this discipline, one particularly powerful approach is network generative modeling, in which wiring rules are algorithmically implemented to produce synthetic network architectures with the same properties as observed in empirical network data. Successful models can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. Here we review the prospects and promise of generative models for network neuroscience. We begin with a primer on network generative models, with a discussion of compressibility and predictability, utility in intuiting mechanisms, and a short history of their use in network science broadly. We then discuss generative models in practice and application, paying particular attention to the critical need for cross-validation. Next, we review generative models of biological neural networks, both at the cellular and large-scale level, and across a variety of species including C. elegans, Drosophila, mouse, rat, cat, macaque, and human. We offer a careful treatment of a few relevant distinctions, including differences between generative models and null models, sufficiency and redundancy, inferring and claiming mechanism, and functional and structural connectivity. We close with a discussion of future directions, outlining exciting frontiers both in empirical data collection efforts and in method and theory development that, together, further the utility of the generative network modeling approach for network neuroscience. Generative Models for Network Neuroscience: Prospects and Promise Danielle S. Bassett^1,2 December 30, 2023 ================================================================= Many complex systems are composed of elements that interact dyadically with one another and can therefore be represented as graphs (networks) composed of nodes interconnected by edges. The network framework can be applied to systems across a range of disciplines, from sociology and psychology to molecular biology and genomics, making it possible to leverage a common mathematical language and set of analytic tools to investigate the topological organization of systems that, outwardly, might appear dissimilar to one another <cit.>. In neuroscience, network-based analyses have become common. This is due in part to initiatives for sharing large, multi-modal neuroimaging datasets <cit.>, to the availability of easy-to-use software packages for computing graph-theoretic metrics <cit.>, and to the fact that networks are natural vehicles for representing and analyzing complex spatio-temporal interactions among neural elements (i.e. neurons, populations, and brain areas) <cit.>. Though the scope of topics studied in network neuroscience is broad, the typical study involves characterizing the structure of a network with a series of summary statistics. Each statistic describes a particular feature of the network, ranging from simple to complex and operating over all topological scales. For example, degree is a local (node-level) property that simply counts the number of connections incident upon a node.
On the other hand, characteristic path length is a global (whole-network) measure of the average length of all pairwise shortest paths. In general, summary statistics offer succinct descriptions of a network's organizational features, especially those that are not immediately apparent given a network's list of nodes and edges. The application of summary statistics to better understand the structure and function of biological neural networks has been fruitful. Over a decade or so, evidence from networks across different organisms and spatial scales <cit.> has converged onto a small set of properties and summary statistics that, collectively, describe the organization of most biological neural networks. These include indices of small-worldness <cit.>, heavy-tailed degree and edge-weight distributions <cit.>, a diverse meso-scale structure that includes segregated modules but also core-periphery structure <cit.>, hubs and rich clubs <cit.>, and economic spatial layouts favoring the formation of short-range (low-cost) connections <cit.>. Further, such core organizational principles also include functional constraints, like the need to balance properties that support either segregated or integrated brain function <cit.>, but also emphasize the tradeoff between the cost of such properties and their functionality <cit.>. These properties, collectively, create a caricature of neural system organization and function. While illuminating, the process of describing networks in terms of their topological properties amounts to an exercise in “fact collecting." Though summary statistics might be useful for comparing individuals <cit.> and as biomarkers of disease <cit.>, they offer limited insight into the mechanisms by which a network functions, grows, and evolves. Arguably, one of the overarching goals of neuroscience (and biology, in general) is to manipulate or perturb networks in targeted and deliberate ways that result in repeatable and predictable outcomes <cit.>. For network neuroscience to take steps in addressing this goal, it must shift its current emphasis beyond network taxonomy – i.e. studying subtle individual- or population-level differences in summary statistics – towards a science of mechanisms and process <cit.>. While there exist many methodological approaches for seeking mechanisms in networks and a range of spatial, topological, and temporal scales at which those methods can be deployed <cit.>, the focus of this article is on network generative modeling. Network generative modeling is a flexible framework for generating synthetic networks from a set of parameterized wiring rules. Generative models figure prominently in the network science canon <cit.>, and have recently been deployed in domain-specific scenarios to study the evolution of protein interaction networks <cit.>, the world-wide web <cit.>, and social systems <cit.>. Importantly, and provided that the wiring rule is sufficiently informed and biologically-grounded, generative models can be used to test and identify potential mechanisms that underlie the growth and evolution of biological neural networks. With mechanisms in hand, it becomes possible to distinguish the topological features that drive a network's growth from those that emerge as mere byproducts <cit.>, and to pursue deliberate and targeted interventions <cit.>. In the following sections, we present a primer on network generative models, highlighting past use, their interpretation, and open methodological considerations.
We review current applications of generative models to neural systems, emphasizing several outstanding questions and implementation details. Finally, we plot a course for future studies.§ GENERATIVE MODELS: A PRIMER This article deals with the topic of generative models. Broadly, a generative model is a statistical process that outputs a synthetic set of data or observations. Usually, these synthetic data and the generative process are designed to have some properties in common with empirical data and the process believed to have generated those data. Generative models are often parameterized, and those parameters can be chosen so as to minimize the discrepancy between observed and synthetic data. The models, themselves, can be compared against one another using standard model comparison techniques, including goodness-of-fit criteria and cross-validation approaches.In the context of network science, generative models represent algorithmically-implemented wiring rules that output synthetic networks. While a network's nodes and edges encode all of its structural properties, studying generative models shifts focus away from those structural properties and instead onto wiring rules and the process of network formation. This shift in emphasis confers a number of distinct advantages: * Generative models compress our descriptions of networks and highlight regularities in their organization. * They make predictions about out-of-sample and unobserved network data. * Under the best circumstances, generative models can uncover network mechanisms.We discuss these topics in greater detail throughout the following subsections. §.§ Compressibility of networksGenerative models compress our descriptions of a network, encoding the network's topology into a set of wiring rules and parameters. Naïvely, we could describe a network exactly given a list of its nodes and edges, i.e. by consulting the list, we could correctly connect nodes that are supposed to be connected and avoid connecting nodes that should not be connected. However, connections in many networks are not independent of one another and exhibit statistical regularities so that, given the wiring rule that matches those regularities, we could predict the presence/absence of connections ahead of time. In this case, it becomes unnecessary to consult the list of nodes and edges to describe the network. More importantly, we can often interpret the wiring rule itself to uncover the network's organizing principles.As an example, consider real-world spatial networks, where the probability of observing an edge between two nodes decays as a function of distance <cit.>. Oftentimes, these kinds of networks can be well-approximated by a simple geometric model whose wiring rule mimics the network's distance-dependent connection formation <cit.>. To perfectly describe a spatial network we could generate a long and possibly unwieldy list of its nodes and edges. However, if the geometric model is a good approximation, e.g. synthetic networks generated by the model recapitulate many observed edges, then the model can be used to replace those edges in the list, effectively shortening our description of the network. The geometric model naturally mimics the distance dependencies of the spatial network. For many networks, however, the statistical regularities among links may not be obvious, in which case selecting the appropriate model may not be straightforward. We discuss this issue of model selection later in this section. 
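To make the compression idea concrete, the following is a minimal sketch — in Python with numpy, with illustrative parameter values and function names of our own choosing — of a distance-dependent geometric wiring rule of the kind discussed above. Once the decay parameter is known, the entire connection pattern is summarized by the node coordinates and a single number, rather than by an explicit edge list.

```python
import numpy as np

def geometric_network(coords, beta, seed=0):
    """Sample a spatial network in which the probability of an edge between
    two nodes decays exponentially with the distance separating them."""
    rng = np.random.default_rng(seed)
    n = coords.shape[0]
    # Pairwise Euclidean distances between node coordinates.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Distance-dependent connection probability; here the proportionality
    # constant is simply taken to be 1, i.e. P(A_ij = 1) = exp(-beta * D_ij).
    prob = np.exp(-beta * dist)
    # Sample each upper-triangular pair once; symmetrize; no self-loops.
    edges = np.triu(rng.random((n, n)) < prob, k=1)
    return (edges | edges.T).astype(int)

# Illustrative use: 100 nodes placed uniformly at random in the unit square.
coords = np.random.default_rng(1).random((100, 2))
A = geometric_network(coords, beta=5.0)
```

Larger values of the decay parameter yield sparser, more spatially clustered networks; fitting that parameter to data is deferred to the discussion of objective functions below.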
§.§ Predictability
Besides compressing our descriptions of a network, generative models also have predictive capacity and can be used as forward models of unobserved and out-of-sample data. Returning to the example of spatial networks, we might hypothesize the relevance of a generative model in which the probability of connection formation is given by a decaying exponential. If we let A_ij∈{0,1} indicate the presence or absence of an edge between nodes i and j, we can write this connection probability as: P(A_ij = 1) ∝exp(-β· D_ij), where D_ij is the distance between nodes i and j and β≥ 0 is a parameter to be fit <cit.>. If we were given a network G, we could fit the parameter β so that the discrepancy between synthetic networks generated by the model and G is minimized. Having fit the model, we could use it to make predictions about a second network, G^', whose connectivity pattern is unknown but whose nodes' spatial locations are given. As another example, consider the stochastic blockmodel <cit.>, in which nodes are assigned membership to one of K communities, z_i ∈{1,…,K}, and where the probability of two nodes, i and j, being connected to one another depends only on their community assignments: P(A_ij = 1) = ω_z_i,z_j (ω is a K × K matrix that encodes community-to-community connection probabilities). Fitting this model to a network G entails inferring nodes' communities and connection probabilities. If we encountered a second network, G^', with an unknown connectivity pattern but whose nodes correspond to those in G, e.g. the same set of neurons or brain regions, then we could use the model to predict the configuration of nodes and edges in that network. A code sketch of this predictive use appears below.
§.§ Mechanisms
Finally, provided that it incorporates sufficient system-specific details (in our case, neurobiological information), a generative model can be used to gain insight into the mechanisms that guide the formation and growth of a system. This last point is critical. A generative model, under ideal circumstances, is a recipe for building a network. Having such a recipe opens new avenues for interrogating a network. It allows us to identify structural features of a network that emerge as a direct result of the wiring rule, versus those that emerge spontaneously as a consequence of constraints imposed by a given wiring rule <cit.>. For example, a geometric model will generate networks with high levels of clustering even though the wiring rule never explicitly optimizes for this property. Importantly, a recipe for building a network also gives us the ability to explore alternative ingredients. What happens if we change a parameter slightly? Does the model generate networks of vastly different character? Can we control the trajectory of a network's growth and guide it into a desired target configuration <cit.>? The ability to selectively drive the growth of a network is a tantalizing prospect, and one with profound implications for the treatment of clinical and psychiatric disorders.
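Before turning to canonical examples, the predictive use of the stochastic blockmodel introduced above can be made concrete with a short sketch. The community assignments and the ω matrix below are purely illustrative; in practice both would be inferred from an observed network G before being used to score candidate edges in an unobserved network G^'.

```python
import numpy as np

rng = np.random.default_rng(0)

# Community assignments z_i for N = 90 nodes and a K x K matrix omega of
# community-to-community connection probabilities (values are illustrative).
z = np.repeat([0, 1, 2], 30)
omega = np.array([[0.30, 0.05, 0.02],
                  [0.05, 0.25, 0.04],
                  [0.02, 0.04, 0.35]])

# P(A_ij = 1) = omega[z_i, z_j]; sample an undirected synthetic network.
P = omega[z[:, None], z[None, :]]
A = np.triu(rng.random(P.shape) < P, k=1)
A = (A | A.T).astype(int)

# Prediction for a second network G' over the same nodes: rank node pairs by
# their model-implied connection probability (most probable pairs first).
iu = np.triu_indices(len(z), k=1)
ranked_pairs = np.argsort(P[iu])[::-1]
```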
§.§ Canonical generative models for networks
Before engaging neuroscience-specific questions, it is useful to discuss examples of generative models as they have been applied in network science and other fields. In the remainder of this section we review some canonical generative models, emphasizing the properties that they share with one another as well as those that make them distinct. Generative models have a long history in network science and mathematics. One of the earliest examples is the so-called Erdős-Rényi (ER) model <cit.>, in which connections are formed independently between pairs of N nodes with probability P (another version exists where, instead of P, a fixed number of edges, M, are added uniformly at random). While the ER model has interesting combinatoric and mathematical properties, e.g. binomially-distributed node degree <cit.>, it is a poor approximation of most real-world networks. That is, the random and independent process by which connections are formed in the ER model results in networks with no real structure (poor compressibility) and does not resemble any of the mechanisms by which real-world networks grow. Accordingly, if we want to model networks in the real world, we need a set of models that generate networks with realistic properties. Initial explorations into generative models for real-world data resulted in two models that, collectively, helped spark broad interest in complex networks. The first, introduced by Duncan Watts and Steven Strogatz, sought the origin of empirically observed “small world” topologies, in which a network simultaneously exhibits greater-than-expected clustering and shorter-than-expected path length <cit.>. Broadly speaking, the model supposed that small-world networks are an interpolation between two extreme configurations: a ring lattice network (nodes arranged on the circumference of a circle and linked to their k clockwise and counter-clockwise neighbors) and an ER network. To move from one extreme to the other, the authors introduced a tuning parameter, p, which governed the probability that an edge in the lattice network would be rewired randomly. When p is small, the model generates networks that have mostly lattice-like properties, but when p is large, the model generates networks whose properties are indistinguishable from those produced by the ER model. Between those extremes, however, is a “sweet spot” – a region of parameter space yielding networks with properties of both extremes, namely high clustering and short path length. This model is referred to as the Watts-Strogatz (WS) model. At around the same time, a second group sought an explanation for why many real-world networks exhibited heavy-tailed degree distributions. The proposed model, by Réka Albert and Albert-László Barabási, was based on a growth rule <cit.>. Starting with a small set of fully-connected nodes, the model adds new nodes to the network by forming connections preferentially to already-existing nodes with higher degrees. This growth mechanism is a sort of “rich get richer” process; nodes that have existed for a long time accumulate many connections, which further increases their likelihood of being connected to newly-added nodes. The result of this process is a network with an approximately power-law degree distribution, mimicking those frequently observed in real networks <cit.>. This model is identical to that defined by Price in 1976 with a single value change to one parameter <cit.>, and is generally referred to as the Barabási-Albert (BA) or preferential attachment (PA) model.
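Both of these canonical wiring rules are easy to state algorithmically. The following are deliberately minimal, unoptimized sketches (numpy only; the neighborhood size, rewiring probability, and attachment counts are illustrative), not reference implementations of the published algorithms.

```python
import numpy as np

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice on n nodes (k neighbors per side), with each lattice
    edge rewired to a uniformly random target with probability p."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for offset in range(1, k + 1):
            j = (i + offset) % n
            if rng.random() < p:  # rewire this edge away from the lattice
                j = rng.choice(np.flatnonzero((A[i] == 0) & (np.arange(n) != i)))
            A[i, j] = A[j, i] = 1
    return A

def barabasi_albert(n, m, seed=0):
    """Growth with preferential attachment: each new node connects to m
    existing nodes chosen with probability proportional to their degree."""
    rng = np.random.default_rng(seed)
    A = np.ones((m + 1, m + 1), dtype=int) - np.eye(m + 1, dtype=int)  # seed clique
    A = np.pad(A, (0, n - m - 1))
    for new in range(m + 1, n):
        deg = A[:new, :new].sum(axis=1).astype(float)
        targets = rng.choice(new, size=m, replace=False, p=deg / deg.sum())
        A[new, targets] = A[targets, new] = 1
    return A
```

Sweeping p in the first sketch from 0 to 1 traces out the interpolation described above, with the small-world regime appearing at intermediate values.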
§.§ Generative models in practice and application
The WS and BA models generate synthetic networks with properties qualitatively similar to those observed in real-world networks (small-worldness and heavy-tailed degree distribution). If we wanted to make the similarity of empirical and synthetic networks quantitative and more precise, how would we do so? Supposing that a model yields networks that repeatably and exactly recapitulate all properties of an empirical network, can we equate the model with mechanism? Both of these questions are difficult to answer, and represent some of the technical challenges associated with generative modeling.
§.§.§ Choosing an objective function
We will first address the issue of how to perform quantitative comparisons between synthetic and empirical networks. Fortunately, there exists a plurality of approaches for quantitatively comparing networks. The challenge is selecting the approach that is best suited for a given research question. Typically, we wish to answer the question of whether an empirically-observed network could have been produced by some generative model. One strategy for addressing this question involves defining a likelihood function over the space of all possible networks, and evaluating that function for the observed network. Stochastic blockmodels are a good example of this strategy in action <cit.>. The probability of a connection forming between nodes i and j, P_ij, depends on their community assignments, z_i and z_j: P(A_ij = 1) = ω_z_i,z_j. The probability that i and j are disconnected is therefore 1 - P_ij, and the likelihood that the observed network was generated by this model is given by: ℒ = ∏_i,j > i P_ij^A_ij (1 - P_ij)^(1 - A_ij). Blockmodels are convenient in that this likelihood function can be written in closed form. This approach can be generalized for other models – even when the precise likelihood function is unknown – by generating a sample of networks from a given set of parameters and estimating, from those samples, the probability of any connection existing. This approach is similar to others in the literature <cit.>, in that it links the model's fitness with its ability to correctly account for the empirical network's exact configuration of nodes and edges. While this seems like a good approach, it is not difficult to envision scenarios where even near-perfect prediction of an empirical network's connections nonetheless fails to account for some of its critical topological properties. For example, consider the canonical small-world network – a ring lattice plus a few random (shortcut) connections that reduce the network's characteristic path length. The ring lattice and small-world network have nearly-perfect edge overlap. If we were to regard edge overlap as the definitive measure of fitness, we might be inclined to treat the lattice network as a good approximation of the small-world network. In other words, from a strictly structural point of view, these two networks are almost perfect matches; from a functional perspective, however, the two networks are highly dissimilar; because of its longer characteristic path length the ring lattice will lack the efficient (short) routes that could be used for communication or transportation. Comparing synthetic and empirical networks on the basis of their edge configuration is useful, but has some shortcomings that motivate the exploration of alternative approaches. Another approach, and one that has been used in several recent studies <cit.>, eschews the edge-wise comparison of two networks, instead simultaneously comparing them along several topological dimensions (e.g., their efficiency, clustering, modularity, etc.), and calculating a statistic of average dissimilarity. For example, in Vértes et al.
(2012), the authors compared synthetic and empirical networks with the energy function, E = 1/(V_C · V_E · V_M · V_K), where V_C, V_E, V_M, and V_K are p-values associated with statistical tests comparing clustering, efficiency, modularity, and degree distribution of synthetic and empirical networks <cit.>. Similarly, Betzel et al. (2016) defined the energy function E = max(KS_K,KS_C,KS_B,KS_E), where each term is a Kolmogorov-Smirnov statistic comparing the degree (K), clustering (C), betweenness centrality (B), and edge length (E) distributions <cit.>. Intuitively, in both cases smaller energies imply greater fitness. This approach is flexible and can be adapted to include virtually any set of metrics. It is important to note, however, that many network measures are correlated with one another, so the choice of which to include should take this into account. Also, there might be synthetic networks that match an empirical network in terms of network statistics but not its precise set of connections. Irrespective of how the objective function is defined, having one makes it possible to perform different kinds of comparisons. For a given model, we can perform model fitting by selecting the parameter values that optimize the objective function. We can also leverage an objective function to compare different generative models to one another. For example, we may wish to discount a model that is incapable of generating networks that resemble our real-world network of interest.
§.§.§ Cross-validation
Suppose that we fit a generative model by optimizing some objective function so that the model generates synthetic networks that share some set of properties with an empirical network. As in any model-fitting exercise, we can continue adding layers of complexity and free parameters to the model so that it matches our real-world network to some arbitrary degree of precision. It is often the case, however, that we are less interested in predicting the organization of a single network than that of a class of networks. For example, we may wish to identify wiring rules that can recapitulate the organization of structural brain networks, on average, rather than the network of any one individual. Even if our aim was to predict subject-specific networks, it might be unsurprising (in a statistical sense) that our models reproduce many of the features of those networks; after all, the model's parameters were selected only after an optimization procedure. In both cases (fitting models to empirical network data based on edge- or property-matching), it is essential that we perform a cross-validation procedure. This procedure might entail taking the best-fitting parameters from one model and using them to generate estimates of a second network not involved in the model-fitting process. We can compare the goodness of fit to that of a random (ER) model, to ensure that our model performs above chance. This type of cross-validation ensures that a generative model is identifying general wiring rules and not overfitting. A second type of cross-validation involves testing whether synthetic networks have properties in common with real-world networks that they were not explicitly optimized to possess. In other words, does a generative model give us certain properties “for free?” This type of cross-validation ensures that our objective function is sufficiently general and is not overfitting to a specific subset of network properties.
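As a schematic of the fitting procedure described in this section, the sketch below assumes we already have a generator function g(theta, seed) returning synthetic adjacency matrices and a function stats_of(A) returning the four distributions of interest (degree, clustering, betweenness, edge length); both names, and the grid search itself, are illustrative stand-ins rather than any published implementation.

```python
import numpy as np

def ks_statistic(x, y):
    """Kolmogorov-Smirnov distance between the empirical CDFs of two samples."""
    grid = np.sort(np.concatenate([x, y]))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return np.abs(cdf(x) - cdf(y)).max()

def energy(synth_stats, emp_stats):
    """E = max(KS_K, KS_C, KS_B, KS_E): the worst-case mismatch across the
    degree, clustering, betweenness, and edge-length distributions."""
    return max(ks_statistic(s, e) for s, e in zip(synth_stats, emp_stats))

def fit(g, stats_of, A_emp, thetas, n_samples=10):
    """Grid search: choose the parameter whose synthetic networks minimize
    the mean energy relative to the empirical network A_emp."""
    emp = stats_of(A_emp)
    scores = [np.mean([energy(stats_of(g(t, seed=s)), emp)
                       for s in range(n_samples)])
              for t in thetas]
    return thetas[int(np.argmin(scores))]
```

A cross-validation of the kind described above would then apply the selected parameter to held-out networks, and separately check network measures that the energy function never saw.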
§.§ The space of generative models
What distinguishes one generative model from another? Is it possible to delineate classes of generative models based on their functions or characteristics? Arguably, one of the distinguishing features of any generative model is the timescale over which it operates (Fig. <ref>). On one extreme are models with no timescale at all, like stochastic blockmodels <cit.>. These kinds of models are “single-shot" generators of networks, and can therefore be quite poor representations of real-world networks that grow and evolve over time. On the other extreme are models whose internal timescale matches that of the real system. Nodes and edges are added or rewired on a realistic timescale to match known properties of the system. The growth model of C. elegans presented by Nicosia et al. (2013) is a good example <cit.>. In this model, nodes and edges are added according to their empirically measured birth times (time of cell division); a feature that contributed to the success of that model in predicting different properties of the C. elegans connectome. Between these two extremes – where models operate either with no intrinsic timescale or with a biologically plausible one – is where most generative models are situated. In this middle ground, edges and nodes are added to or rewired in an existing network, but the timescale over which these processes occur is arbitrary. A good example is the BA model, in which new nodes are linked to an existing network over a series of steps. These steps are ordered, so the addition of one node precedes or follows that of another. However, time is measured in arbitrary units (steps) and direct comparison to biological timescales, e.g. human development, might be inappropriate. Ordering generative models based on their internal timescales is similar to ordering them according to their plausibility and mechanistic understanding. Blockmodels and models with arbitrary timescales can do a good job compressing our description of a network and may identify general organizational principles <cit.>. However, if our aim is to develop realistic mechanistic models of network growth and development, it is essential that we include the necessary components that ground the model in reality.
§ GENERATIVE MODELS OF BIOLOGICAL NEURAL NETWORKS
Now that we have an intuition for what a generative model is, and what the goals are for building a generative model, we turn to a brief review of existing generative models for biological networks observed in neural systems. We note that this review is not comprehensive, but instead focuses on areas in which significant work has been accomplished, or areas that motivate important current and future frontiers. We also refer readers elsewhere for additional details on the mechanisms of connectome development <cit.>, biophysical models of neural dynamics <cit.>, and modeling mesoscale structure in dynamic networks <cit.> and multiscale networks <cit.>. Finally, we note that this review focuses mostly on generative models of structural and not functional networks (the distinction is in how edges are defined; in structural networks they represent physical connections, e.g. synapses, projections, fiber tracts, whereas in functional networks they represent statistical associations among neural elements' activity, e.g. correlation, coherence, etc.). Because of differences in how structural and functional networks are generated and evolve, certain classes of models that are appropriate for one type may be wholly inappropriate for the other.
For example, functional networks are not generated through an edge addition process – they emerge from constrained dynamical processes. We discuss the implications of these differences in more detail later in this section.
§.§ The requisite ingredients
An open and important question that scientists face when embarking on a study to develop a generative model is: “What features are required to build good network models?" Perhaps the simplest feature one requires is a target network topology, the organization of the network that one is trying to recapitulate and ultimately explain. Yet, a single network topology can be built in many different ways, with strikingly different underlying mechanisms <cit.>. Thus one might also wish to have a deep understanding of (i) the constraints on anatomy, from physical distance <cit.> to energy consumption <cit.>, (ii) the rules of neurobiological growth, from chemical gradients <cit.> to genetic specification <cit.>, and (iii) the pressures of normal or abnormal development, and their relevance for functionality. Moreover, each of these constraints, rules, and pressures can change as the system grows, highlighting the importance of developmental timing <cit.>. Of course, one might also wish to choose which of these details to include in the model, with model parsimony being one of the key arguments in support of building models with fewer details.
§.§ Generative models at the cellular level
Recent efforts to model cellular level network architecture have had the benefit of building on rich empirical observations made over the last several decades. At one of the smallest spatial scales of neuronal connectivity, evidence suggests that the arbors of single neurons can be characterized by both local <cit.> and global <cit.> optimization rules that minimize volume more strongly than length, signal propagation speed, or surface area. Within the confines of relative volume cost minimization, there is also evidence for a maximization of the repertoire of possible connectivity patterns between dendrites and surrounding axons: in basal dendritic arbors of pyramidal neurons, arbor size scales with the total dendritic length, the spatial correlation of arbor branches appears to have a single functional form, and small sections of an arbor display self-similarity <cit.>. The morphology of dendritic arbors specifically, and of other parts of the cell more generally, has direct bearing on the degree of connectivity that can take place between neurons <cit.>. Like dendritic arbors, synaptic connectivity appears to be organized in a highly non-random manner <cit.>, with unexpectedly high density in relation to its volume <cit.>. Interestingly, both synaptic connectivity and neuronal morphology appear to experience some similar constraints, including principles of wiring optimization <cit.>. Some suggest that constraints on synaptic wiring may be the more fundamental of the two, explaining the degree of separation between cortical neurons <cit.>, as well as the placement of cell bodies <cit.>. Others suggest that it is in fact the combination of wiring economy and volume exclusion that can determine neuronal placement <cit.>. In either case, the highly non-random nature of synaptic connectivity has been the subject of several recent generative modeling efforts.
Initial observations that this non-random organization could be parsimoniously described as small-world <cit.> have motivated the question of how this particular type of network complexity is combined with pressures for wiring minimization. Nicosia et al. (2013) suggest that the growth rules shaping cellular nervous systems balance an economical tradeoff between wiring cost and the functionality of network topology (Fig. <ref>). Using a dynamic economical model incorporating a continuously negotiated tradeoff between wiring cost and network topology, they recapitulate an empirically observed phase transition in the proportion of nodes to links present over the developmental time period of C. elegans <cit.>. The authors speculate that such dynamically negotiated tradeoffs may be characteristic of other complex systems, whether biological or manmade. It will be interesting in the future to consider scenarios in which such tradeoffs may be negotiated over shorter time periods, such as in the alteration of the prevalence of autaptic connections posited to play a role in homeostatic network control of bursting <cit.>. The incorporation of a dynamic economic tradeoff is an example of the broader importance of incorporating biophysically accurate features in generative models of cellular neural systems. Another example of such a biophysical feature is axon and dendrite geography, which has been shown to predict the specificity of synaptic connections in a functioning spinal cord network of hatchling frog tadpoles <cit.>. Some generative models have also sought to determine the role of neuron type in observed network topology and function, for example by building models of sensory neurons, sensory pathway interneurons, central pattern generator (CPG) interneurons, and motoneurons, and then linking them in a network with known inter-type connectivity <cit.>. By adding knowledge about development including chemical gradients and physical barriers <cit.>, a cell-type specific model of 2000 neurons in the spine of a young Xenopus tadpole can produce swimming behavior in response to sensory stimulation <cit.>. These and related efforts demonstrate the ability of generative network models built with neuron and synapse resolution, and incorporating biophysical phenomena, to reproduce behaviors observed in whole organisms. Such findings are reminiscent of other biophysical modeling efforts at the large scale of human areal networks <cit.>, where the biophysics of regional rhythms and inter-regional synchronization inform our understanding of human cognition <cit.>. Of course, statistically bridging structural connections such as synapses at the cellular scale with behaviors in non-human animals – and cognition in humans – at the organism scale begs the question of what processes exist between the two scales. There do exist generative models of functional network topology from structural network topology, and vice versa. A particularly powerful approach for cellular nervous systems is the pairwise maximum entropy model <cit.> and recent extensions <cit.>, which can be used to predict patterns of pairwise correlations from structure, or to infer structure from pairwise correlations. This latter inference neglects unmeasured higher-order (non-pairwise) interactions, operationalized via the maximum entropy distribution, which assumes maximal independence among variables (in this case: cells) <cit.>.
The technique was initially applied to neural spiking data to demonstrate that, in the case of the energy function being the Ising model, pairwise interactions give an excellent approximation of the full correlation network <cit.>. The surprisingly good fit of this model to the data has important implications for how we think about neural population codes in response to stimuli <cit.>, which can be represented by joint activity patterns of spiking and silence <cit.>. Moreover, the maximum entropy model also provides a surprisingly accurate fit to large-scale imaging data in the form of fMRI BOLD collected in humans at rest <cit.>, as well as during a task <cit.>. Interestingly, the simple assumptions of this minimal generative model also appear to provide excellent fits to the dynamics of mesoscale network communities in functional data <cit.> and insights into the energy landscape that the system traverses <cit.>.
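For intuition, the following is a toy sketch of the pairwise maximum entropy (Ising) fit: binary activity patterns are matched on their means and pairwise second moments by gradient ascent on the likelihood. Because it enumerates all 2^n states exactly, the sketch is only feasible for a handful of units; the learning rate and iteration count are illustrative.

```python
import numpy as np
from itertools import product

def fit_ising(data, steps=2000, lr=0.1):
    """Fit P(s) ∝ exp(Σ_i h_i s_i + Σ_{i<j} J_ij s_i s_j) to binary data of
    shape (samples, n) by matching first and second moments."""
    n = data.shape[1]
    states = np.array(list(product([0, 1], repeat=n)), dtype=float)
    emp_m = data.mean(0)                            # empirical means <s_i>
    emp_c = (data.T @ data) / len(data)             # empirical <s_i s_j>
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        logp = states @ h + np.einsum('ki,ij,kj->k', states, np.triu(J, 1), states)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        mod_m = p @ states                          # model means
        mod_c = states.T @ (states * p[:, None])    # model second moments
        h += lr * (emp_m - mod_m)                   # likelihood gradient steps
        J += lr * np.triu(emp_c - mod_c, 1)
    return h, J
```

Structure-to-function prediction corresponds to fixing (h, J) and reading off the implied correlations; the inverse problem, as in the studies above, recovers effective couplings from observed correlations.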
While generative models of simplicial complexes based on random geometric graphs have shown some utility in explaining these structures, further work is needed to understand the extent of their applicability, and to consider models for growing simplicial complexes <cit.>. §.§ Increasing in scale: generative models of large-scale connectomes In the previous section, we reviewed some of the literature supporting the notion that cellular network organization in neural systems is characterized by pressures of wiring economy and topological complexity. Such pressures are similarly thought to play a role in the organization of networks at the meso- and large-scale in both human and non-human mammalian brains <cit.>. Computational studies suggest that trade-offs between wiring economy and topological complexity <cit.> support the formation of network modules, offering relative segregation of function, and network hubs, offering relative integration of function <cit.>. The role of topological complexity and the presence of unusually high wiring costs in some parts of cortex suggest that simple notions of spatial embedding are not sufficient to explain the observed organization of the connectome. This limitation has motivated models deriving a latent (rather than physical) space from which to predict missing links <cit.>, or incorporating information about cytoarchitecture <cit.> such that cytoarchitectonically similar cortical areas in the two hemispheres have an unexpectedly high probability of connecting with one another <cit.>. A particularly salient example of a generative model of areal connectivity in a mammalian brain that incorporates many of these considerations is the recent predictive model of Beul et al. (2015) (Fig. <ref>). In this paper, the authors study meso-scale structural connectivity between 49 areas of the cat cerebral cortex as estimated by tract tracing techniques <cit.>. They test the predictive utility of three separate wiring rules: (i) a structural rule in which the laminar patterns of origins and terminations of inter-areal projections vary according to the relative cytoarchitectonic differentiation of the projection sources and targets, (ii) a distance rule in which connections are more frequent, and more dense, among neighboring regions and sparser or absent between remote regions, and (iii) a hierarchical rule in which differences in the functional hierarchical levels of source and target areas are inversely related to the degree of connectivity between them. While the latter rule did not accurately fit the data, the first two rules (structure and distance) explained significant variance in the observed connectivity patterns, with a linear combination of the two predicting the existence of connections with more than 85% accuracy. Work in non-human primates generally and the macaque cortex specifically recapitulates many of the same motifs from work in lesser mammals. Early work suggested that cortical components are optimally placed so as to minimize the costs of their interconnections <cit.>, facilitating a global optimal cerebral cortex layout <cit.>. Later work suggested that component placement did not maximally minimize wiring, but also tended to favor short processing paths, due to long-distance projections <cit.>.
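This tension between wiring minimization and short processing paths can be illustrated with a small, admittedly stylized, numerical experiment (our own construction, not taken from the cited work): comparing the total Euclidean wiring length and characteristic path length of a spatially embedded network against a degree-preserving rewired surrogate whose long-range shortcuts shorten paths at the cost of longer wires:

```python
import networkx as nx
import numpy as np

def wiring_stats(G, pos):
    # Total Euclidean wiring length plus characteristic path length,
    # the latter computed on the largest connected component for robustness.
    cost = sum(np.linalg.norm(np.array(pos[u]) - np.array(pos[v]))
               for u, v in G.edges)
    H = G.subgraph(max(nx.connected_components(G), key=len))
    return cost, nx.average_shortest_path_length(H)

G = nx.random_geometric_graph(100, 0.2, seed=2)     # spatially embedded network
pos = nx.get_node_attributes(G, 'pos')
R = G.copy()
nx.double_edge_swap(R, nswap=500, max_tries=10**4, seed=2)  # degree-preserving rewiring

print(wiring_stats(G, pos))  # short wires, longer paths
print(wiring_stats(R, pos))  # longer wires, shorter paths via long-range shortcuts
```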
Indeed, separate from where components are placed, it has been noted that the organization of inter-areal connectivity appears to reflect successfully arbitrated optimization problems, for example favoring near-minimization of distance <cit.> and increased support for connectivity between areas with similar topological properties <cit.>. In an extension of the model described above for the cat, Beul and colleagues similarly demonstrate the striking utility of the structural rule of architectonic similarity, where similarity in the laminar pattern of projection origins, and the absolute number of cortical connections of an area, demonstrated the strongest and most consistent influence on connection features <cit.>. In this case, the distance rule was surprisingly not predictive. Future extensions of this model may include explicit nonlinear growth rules, which have previously been linked to the emergence of network hubs <cit.>. Finally, efforts in humans support the notions of wiring economy <cit.> and topological complexity <cit.>, and further add new considerations such as the geometric segregation of the brain into gray and white matter, enabling the relative minimization of conduction delays <cit.>. While one-shot models have been the most commonly exercised generative models for human structural networks, relatively new evaluation criteria for them include an assessment of their controllability profiles <cit.> and homological features <cit.>. Moreover, there has been a recent and growing interest in developing network growth models that incorporate biologically motivated rules for the probability of connections <cit.>. For example, spatially constrained adaptive rewiring creates small-world network architectures with spatially localized modules <cit.>, while wiring rules based on topological affinities recapitulate known scaling laws of physical network topology <cit.>. It would be interesting in future work to determine how these rules could be adapted to explain the patterns of conserved and variable architecture of white matter networks across individual humans <cit.>. The recent paper by Betzel et al. (2016) represents one of the first attempts at subject-level generative modeling <cit.>. In this study, the authors fit thirteen generative models to white-matter networks acquired from three independent datasets, totaling 380 subjects (Fig. <ref>). The model generated synthetic networks using an edge-addition algorithm, in which connections were added probabilistically and one at a time according to a set of parameterized wiring rules. Each of the thirteen models was fit in two stages: first by matching distributional statistics of the white-matter networks and then by cross-validation on a separate set of network measures. The best-fitting models across all three datasets featured wiring rules based on wiring cost reduction and homophilic attraction principles, the severity of each controlled by a separate parameter. Because the models were fit to individual subjects, it was possible to explore individual variability in model fit. When applied to lifespan data from the Nathan Kline Institute, the authors found that the parameter governing the severity of the wiring cost reduction weakened systematically with age, as did the model goodness of fit.
These findings suggest that generative models are sensitive to changes in network organization with development and aging, and may be useful tools in studying variation across individuals <cit.>. In a more recent study, Tang and colleagues studied individual variation in youth by examining the white matter networks of 882 individuals between the ages of 8 and 22 years <cit.>. Here, the authors posited that over this developmental time period, structural brain networks become optimized for a greater diversity of neural dynamics, as instantiated by recently defined metrics of network controllability <cit.>. They tested the hypothesis that an observed trajectory of network change over youth could be recapitulated by a generative model that increased average controllability (predicted ease of transitioning between nearby network states – the level of activity in each region, across the entire brain), increased modal controllability (predicted ease of transitioning between distant network states), and decreased synchronizability (predicted capacity for global synchronization). The model was initiated with a given brain network, and then evolved in silico according to a rewiring rule such that an existing edge was randomly chosen to take the place of an edge that did not exist, and this edge swap was retained only if the new network advanced the Pareto front, the set of all network configurations that were optimal in their tradeoff between average and modal controllability (Fig. <ref>). As rewiring progressed forward in time, a course was charted in which networks increased in controllability and decreased in synchronizability; as rewiring progressed backwards in time, networks decreased in controllability and increased in synchronizability. The simulated developmental trajectories displayed a striking similarity in functional form to the observed developmental trajectories, suggesting a possible mechanism of human brain development that preferentially optimizes dynamic network control over static network architecture. § A FEW RELEVANT DISTINCTIONS In this section, we describe a few important distinctions that are particularly relevant to the understanding and further development of generative network models for neural systems. First, we will explore the relations between generative models that seek mechanisms and explanations, and null models for statistical testing of hypotheses. Second, we will discuss the important tradeoff between the sufficiency of a generative model and its redundancy. Third, we will seek to disambiguate between inferring a possible mechanism versus claiming proof of a mechanism. And finally, we will describe some relevant considerations when building or evaluating generative models of structural versus functional connectivity. §.§ Generative models and null models The stated goals of the generative modeling approach, as described in the early sections of this review, include the identification of putative mechanisms of observed network architecture, and intuitive explanations for some of the features that characterize that architecture. Yet, depending on their degree of biological realism, such models can also be used as statistical null models, potentially enabling the dismissal of a null hypothesis. In general, topological and spatially-informed null models play a critical role in network science broadly <cit.>, and network neuroscience specifically <cit.>.
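In practice, the null-model use of such models often reduces to comparing an observed statistic against a distribution generated under fixed constraints. A minimal sketch of this logic, assuming degree-preserving rewiring as the null and average clustering as the statistic of interest (both choices, and the stand-in network, are illustrative):

```python
import networkx as nx
import numpy as np

G = nx.watts_strogatz_graph(200, 8, 0.1, seed=3)   # stand-in for an empirical network
observed = nx.average_clustering(G)

null = []
for s in range(100):                                # population of null networks
    R = G.copy()
    nx.double_edge_swap(R, nswap=4 * R.number_of_edges(),
                        max_tries=10**6, seed=s)    # degree-preserving null
    null.append(nx.average_clustering(R))

# One-sided empirical p-value: does clustering exceed the degree-preserving null?
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(observed, float(np.mean(null)), p)
```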
One could consider using a generative model to test the hypothesis that the topology of an empirically measured neural network was consistent with the topology of an artificial network built on a fixed set of rules or principles. In this case, one would need to be careful in the exposition of the study to distinguish between when the model was being used to propose a generative mechanism, and when the model was being used in a statistical sense to dismiss a null hypothesis. §.§ Sufficiency and redundancy When building generative network models of empirically observed neural systems, a common observation is that the models often fit topological signatures that they were designed to fit, but rarely fit topological signatures that were not considered in the model specification <cit.>. Informally, this observation is reminiscent of the “No free lunch” theorem <cit.>. However, this seeming insufficiency is not always the case <cit.>, and its inconsistent presence raises the question of what exactly makes a sufficient model. Is a sufficient network model one that can display the topological signatures it was optimized to possess (the objective function used to fit the model), or should it also predict a topological signature that was not hard coded into the objective function and/or generative algorithm? A complementary consideration to sufficiency is redundancy. Suppose rule a is chosen to create topological signature 1 and as a byproduct also appears to create topological signature 2. In addition, suppose that rule b is chosen to create topological signature 3 and as a byproduct also appears to create topological signature 2. Such a scenario can be quite common, as there exist whole families of graphs that display similar graph metric values <cit.>, community structure <cit.>, controllability profiles <cit.>, and homological features <cit.>. A generative model that combined rules a and b would appear redundant in that both rules ensured signature 2, and arguments for biological parsimony might undercut the anticipated verity of such a model. These examples illustrate that sufficiency and redundancy are important considerations in developing and evaluating generative network models of neural systems. §.§ Inferring and claiming mechanism Suppose that one is thoroughly successful, and creates a generative model that beautifully reproduces an empirically observed network structure. Do the rules that compose the generative model provide a mechanism explaining the empirical network's architecture <cit.>? Even more brazenly, can such a generative model help us to develop a theory of brain network organization and resultant behavior <cit.>? In seeking answers to these questions, it is important to disambiguate between inferring a possible mechanism and claiming proof of a mechanism. If a generative network model built upon rule a recapitulates the network structure of interest, one can say that rule a is a possible mechanism, but one cannot claim that it is the mechanism. To provide a more concrete example embedded in network neuroscience, let us consider the topological feature of Rentian scaling, a power-law scaling relationship between the number of processing elements and the number of connections, which is often found in systems that are built upon the principle of wiring reduction, and is observed in brain networks <cit.> as well as other transmission systems such as computer circuits <cit.>, transportation systems <cit.>, and vasculature <cit.>.
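As an illustration of how a physical Rent exponent can be estimated in this spirit – sampling spatial partitions and regressing the logarithm of boundary-crossing edges on the logarithm of enclosed nodes – consider the following sketch; the random box-sampling scheme is a simplification of published procedures, and the network is a toy stand-in:

```python
import networkx as nx
import numpy as np

G = nx.random_geometric_graph(500, 0.08, seed=4)    # toy spatially embedded system
pos = nx.get_node_attributes(G, 'pos')

rng = np.random.default_rng(4)
ns, es = [], []
for _ in range(300):
    cx, cy = rng.random(2)                          # random box centre
    half = rng.uniform(0.05, 0.4)                   # random box half-width
    inside = {v for v, (x, y) in pos.items()
              if abs(x - cx) < half and abs(y - cy) < half}
    crossing = sum(1 for u, v in G.edges
                   if (u in inside) != (v in inside))  # edges cut by the boundary
    if len(inside) > 1 and crossing > 0:
        ns.append(len(inside))
        es.append(crossing)

p_rent, _ = np.polyfit(np.log(ns), np.log(es), 1)   # slope of e ~ n^p
print(round(p_rent, 2))
```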
Given the scaling relationship, one might infer that the network's structure is given by a mechanism that operates uniformly across all scales, such as wiring minimization. However, such an inference would neglect the fact that many scale-heterogeneous mechanisms also produce topological scaling relationships <cit.>. In future work, it will be important to concretely discuss support for possible mechanisms separately from exact claims that such mechanisms have been proven. §.§ Functional connectivity and structural connectivity This review has focused mostly on generative models for structural networks, where links represent physical pathways among neural elements. Generative network models can also be built for functional connectivity data, with some caveats and limitations <cit.>. Posited drivers of functional network organization across species include similar notions of cost-efficiency <cit.>, small-world architecture <cit.>, and spatial clustering <cit.>. However, the appropriate growth mechanisms that such generative models employ face different constraints in the functional domain than in the structural domain <cit.>. Functional connectivity is not generated piece-by-piece, as instantiated by a discrete placement of edges in a network <cit.>. Instead, functional connectivity is a consequence of dynamical processes constrained by many factors <cit.>, including but not limited to anatomical structure <cit.>, the activity elicited by a particular task <cit.>, the distance between brain areas <cit.>, genetics <cit.>, and any stimulation or other input to the system <cit.>. Many good models of brain dynamics exist, ranging from the biologically realistic to the heavily idealized <cit.>. However, growth models built from the placement of independent edges are conceptually more appropriate for structural networks than for functional networks. § FUTURE DIRECTIONS §.§ What would a generative model accomplish? In practice, many of the current approaches for studying biological neural networks involve computing and comparing summary statistics between groups or continuously across individuals. While this approach is useful in identifying “what” is different, it fails to explain “how” those differences come to be in the first place. In this review, we echo other recent reviews <cit.> and call for a shift in emphasis away from “fact collecting” studies and towards uncovering the mechanisms that explain the organization of neural systems. We argue that network generative modeling represents a framework that can help us move towards addressing these lofty goals. Suppose that – with the right dataset and the right modeling approach – we can devise a model that, to a reasonable approximation, can successfully mimic the growth or evolution of a real-world neural system. In other words, the model results in a network that changes over time (where time has a clear developmental or biological interpretation) and whose topology evolves in a way that is consistent with known facts about the real-world growth of that network. What does having such a model buy us? On the one hand, we could simply maintain the status quo, fit the model's parameters to individual subjects and compute statistical relationships between parameters and behavioral measures (Fig. <ref>A) using machine learning techniques to partition the model's parameter space into regions associated with clinical and control populations (Fig. <ref>B).
While useful, these approaches are quite similar to the current state of the field. Another more novel possibility is to use the model for disease simulation. Many psychiatric <cit.> and neurodegenerative diseases <cit.> are manifest at the network level in the form of miswired or dysconnected systems, but it is unclear what predisposes an individual to evolve into a disease state. The generative model can be used to propagate individuals from one time point to another and identify those that are likely to evolve into a state similar to that of the disease phenotype and perhaps likely to develop that disease. In this way, the model has a clear role as a forecaster (Fig. <ref>C). Similarly, the generative model can be used to explore in silico the effect of potential intervention strategies. We can think of biological neural networks as living in a high-dimensional space based on their topological characteristics, where some regions (of this space; not of the brain) are associated with neurological disease and considered maladaptive (and perhaps even deadly) <cit.>. In this context, the generative model represents an evolution operator that propagates a network from one point to another, tracing out a trajectory through this space. If we can identify individuals who are predisposed to travel near those maladaptive regions, we can begin to identify perturbations – changes to model parameters or wiring rules – that steer those trajectories towards regions not associated with disease (Fig. <ref>C). These goals are in line with current theoretical work, applying tools from network control theory to neuroimaging data <cit.>. §.§ Dream datasets and experiments Generative models have clear utility in furthering our capacity to predict disease and identify the mechanisms that shape the development, growth, and evolution of biological neural networks. A major hindrance in realizing these goals, however, is the absence of data tailored for generative models. The ideal data would (i) be longitudinal, enabling one to track and incorporate individual-level changes over time in the model, and (ii) include multiple data modalities, such as functional and structural connectivity, and genetics, along with other select factors that could influence network level organization. In short, any meta-data that could theoretically be incorporated into a model would be valuable and possibly worth collecting. Ideally, these data would be acquired at the earliest possible time point in utero <cit.> and proceed through maturity. Clearly, collecting and curating such a dataset represents a massive undertaking. Though recent large-scale studies have made it possible to image thousands of individuals over a short period of time <cit.> and a small number of individuals over a long period of time <cit.>, the duration and scale of a longitudinal study of the nature proposed here seems, at present, out of reach. Furthermore, the studies that have come closest to acquiring these kinds of data have relied on MRI due to its non-invasive nature. However, this same advantage also limits the fidelity and kinds of data that can be acquired from an individual (e.g., region-specific gene transcription levels can only be acquired post-mortem <cit.>). An attractive alternative, then, is to consider building generative models of data from non-human, model organisms.
Not only are the lifecycles of several model organisms much shorter than those of humans (making it possible to track an individual over the course of its entire life), but new advances in network reconstruction techniques <cit.> and the ability to make recordings of activity in unprecedented detail <cit.> ensure that any generative model will be endowed with sufficiently rich data to probe for novel wiring rules. Moreover, working with model organisms also makes it possible to collect data modalities that, otherwise, would be inaccessible, including details about gene expression <cit.>. §.§ Increasing sophistication of generative network models Finally, given ideal data, there are also exciting and important future directions in increasing the mathematical sophistication of generative network models. One particularly accessible extension of current methods lies in multilayer generative network models. A multilayer network consists of multiple single-layer networks, e.g., representing a neural system's structural connectivity, functional connectivity, and gene co-expression <cit.>, that are linked across layers to one another. A generative model for this type of data is one that, instead of single-layer networks, generates multilayer networks <cit.>, and the rules of generation can apply to a single layer, to multiple layers, or to the interconnectivity between layers <cit.>. One potentially useful place to start would be to construct multilayer generative models where the neural connectivity evolves with a specific set of dynamics (or network growth rules) that are explicitly coupled to the underlying tissue growth or to the innervating vasculature growth <cit.>. At the larger scale, one could also consider developing multilayer generative models that couple brain network growth with social network growth, a coupling that has recently been postulated to occur through processes of development and learning <cit.>. Indeed, it is likely that there are other ways in which our brain network topology, and changes in that topology, are coupled to our experiences. Such experiences could be defined by our environment, for example as partially stipulated by our socio-economic status <cit.>, or by our practices, for example as instantiated in our practice of curiosity <cit.>. Indeed, it is interesting to speculate that generative network models may be useful in understanding the relations between brain network architecture and the architecture of knowledge networks, which are physically instantiated in the brain <cit.>, as well as semantic networks <cit.>, which can be tuned by our attention <cit.>. Semantic networks, social networks, brain networks, vasculature networks, and tissue networks may all evolve with one another in intertwined multilayer network systems, an understanding of any pair of which will require concerted efforts in extending the sophistication of current generative network modeling techniques. § CONCLUSION As the field of network neuroscience matures, efforts in data description and statistical characterization are being complemented by efforts to infer principles, to predict unobserved data, and to perturb the system with theoretically grounded expectations about the results of those perturbations. Generative modeling is a particularly powerful approach for moving beyond description towards prediction, mechanism, and eventually theory.
In this article, we have offered a simple primer on generative models, a review of recent efforts in generative models of biological neural networks, and a discussion of current frontiers in empirical data collection and mathematical sophistication. We look forward with anticipation to efforts in the coming years that use generative models to understand human development, and to potentially inform interventions in psychiatric disease or neurological disorders in which wiring patterns have gone awry. § ACKNOWLEDGMENTS The authors thank Lia Papadopoulos and Evelyn Tang for helpful comments on earlier versions of this manuscript. This work was supported by the John D. and Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the Army Research Laboratory and the Army Research Office through contract numbers W911NF-10-2-0022 and W911NF-14-1-0679, the National Institutes of Health (2-R01-DC-009209-11, 1R01HD086888-01, R01-MH107235, R01-MH107703, R01MH109520, 1R01NS099348 and R21-MH-106799), the Office of Naval Research, and the National Science Foundation (BCS-1441502, CAREER PHY-1554488, BCS-1631550, and CNS-1626008). The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies.
Measuring technological complexity – Current approaches and a new measure of structural complexity. Tom Broekel, Department of Human Geography and Spatial Planning, Faculty of Geosciences, Utrecht University, [email protected]. The paper reviews two prominent approaches for the measurement of technological complexity: the method of reflection and the assessment of technologies' combinatorial difficulty. It discusses their central underlying assumptions and identifies potential problems related to these. A new measure of structural complexity is introduced as an alternative. The paper also puts forward four stylized facts of technological complexity that serve as benchmarks in an empirical evaluation of five complexity measures (increasing complexity over time, larger R&D efforts, more collaborative R&D, spatial concentration). The evaluation utilizes European patent data for the years 1980 to 2013 and finds the new measure of structural complexity to mirror the four stylized facts as well as or better than traditional measures. § INTRODUCTION The complexity of technologies is seen as a crucial explanatory dimension of technological development and economic success <cit.>. <cit.> argue that a country's economic development is shaped by its ability to successfully engage in complex economic activities and technologies. Both <cit.> and <cit.> show that few cities are capable of mastering complex technologies that lay the foundation for their future growth. Despite its theoretical relevance and an increasing empirical interest, measuring the complexity of technologies empirically is a complicated issue, as <cit.> note: “We do not have any easy way to measure complexity” [p. 280]. The two most prominent approaches are put forward by <cit.> and <cit.>, with the latter transferring the approach of <cit.> for approximating economic complexity to the measurement of technological complexity.[Further approaches can be found in <cit.> and <cit.>.] The present paper presents both approaches and argues that they build on the assumption of complexity being scarce at their core. <cit.> assume technological complexity to be spatially scarce, while <cit.> build on the idea of complex knowledge combinations appearing less frequently than simple ones. It is shown that these assumptions are either theoretically problematic or may induce challenges in the measures' empirical application. The paper develops an alternative measure of technological complexity, structural complexity, which does not relate scarcity and complexity. The paper proceeds by empirically evaluating the approaches (and including two variants of the traditional approaches) against four stylized facts of technological complexity (increasing average complexity over time, more collaborative R&D, spatial concentration, and larger R&D efforts). The empirical assessment is made using patent data for Europe between 1980 and 2013. The new measure of structural complexity is shown to match the stylized facts similarly or even better than the traditional measures. Similar to the measure of <cit.>, it is not dependent upon the definition of spatial units. The paper is structured as follows. The next section discusses the traditional approaches of measuring technological complexity. It also introduces the new measure of structural complexity.
Section <ref> presents four stylized facts of technological complexity that will serve as benchmarks for the empirical comparison of the traditional and new complexity measures. The setup of the empirical evaluation is the subject of Section <ref>, the results of which are presented and discussed in Section <ref>. Section <ref> summarizes the findings and concludes the paper. § TWO TRADITIONAL AND ONE NEW MEASURES OF TECHNOLOGICAL COMPLEXITY §.§ (Re-)combinatorial rareness and complexity <cit.> approach technological complexity by conceptualizing technological advancement as a search process for knowledge combination.[This also includes re-combination.] They assume that the difficulty of combining knowledge represents technological complexity, with more difficult combinations being required to advance more complex technologies. Their second assumption relates past knowledge re-combination frequencies to the current difficulty of combinatorial innovation. On this basis, they construct a measure of technological complexity resembling the (in-)frequency of past knowledge combination such that small frequencies, after controlling for their chances of random occurrence in an N/K framework of <cit.>, translate into high complexity values. In a follow-up study employing US patent data, they substantiate their results by showing that their measure of technological complexity fits well with inventors' perceived difficulty of the inventive combination process <cit.>. However, does the past (in-)frequency of combination really give a clear approximation of the inventive difficulty and thereby of technological complexity? Less frequent combinations may indeed be caused by the difficulty of the according invention process. Yet, it also seems plausible that there is, or has been, little technological or economic interest in such a combination. For instance, it should be relatively easy to integrate the electronic navigation technology used in cars into horse chariots. However, this combination has rarely been realized, if at all, most likely because there is little market potential for it. §.§ The method of reflection approach <cit.> propose an alternative measure of economic complexity building on the work of <cit.>. They transfer the so-called method of reflection used by <cit.> to assess economic complexity to empirically derive a measure of technological complexity. The method of reflection is based on diversity and ubiquity and assumes that technological complexity is spatially scarce. Diversity is the number of distinct technologies in a region and ubiquity the number of regions specialized in a technology. The proposed index of technological complexity yields high values for technology A when places specialized in A are also specialized in other technologies that few other places are specialized in. Put differently, a technology will be evaluated as being complex when it belongs to a group of technologies few places specialize in and these specializations appear in the same places. <cit.> apply this approach to patent data and estimate the complexity of technologies considering the technological specialization of US metropolitan statistical areas. The authors find that regions commonly associated with technological and economic success (e.g., San Jose, Austin, Bay area) are highly specialized in complex technologies. There are many arguments supporting the idea of complexity being spatially scarce (see also subsection <ref>). <cit.> argue that in order to be successful in complex activities (e.g.
in the development of complex technologies), it requires “nontradable” spatial “capabilities” including “property rights, regulation, infrastructure, specific labor skills” <cit.>. Similarly, concepts like “learning regions”, “innovative milieu”, and “regional innovation systems” argue that few regions possess location-specific capabilities yielding advantages for technological development <cit.>. The findings of <cit.> add some empirical support to this by showing that 10 to 15 % of industrial agglomeration can be explained by technological complexity. However, technologies' spatial distribution may have multiple sources, among which complexity is just one. For instance, corporate R&D facilities are known to be located close to public universities <cit.>, whose location is largely determined by policy and historical circumstance. The distribution is also impacted by technologies' geographic diffusion, which depends among others on its degree of maturity, popularity, natural conditions, geographic distances, place of origin, and crucially, economic potential <cit.>. Hence, all these factors that are not related to technological complexity may impact technologies' spatial distribution and potentially distort the complexity measure. Two more issues are related to the assumption of spatial scarcity. First, it makes the measure highly endogenous when analyzing spatial phenomena. For instance, endogeneity is likely to arise when the spatial distribution of technologies is explained with their levels of complexity using a complexity measure based on their spatial distribution <cit.>. Crucially, this issue prevents a sound empirical test of the measure's underlying assumption of complexity being spatially scarce. Second, as the measure requires a spatial delineation of regions, it becomes conditional on this definition. Put differently, a technology's complexity may depend on the employed spatial unit, i.e., the size of the regions. §.§ A measure of structural complexity <cit.> base their measure on ideas of complex systems. I follow this line of thinking and start with technological advancement being a knowledge combination process. I also follow their argument of technologies' complexity being related to the difficulty of combining knowledge pieces in its advancement. Knowledge can be thought of as a “network” of knowledge combination, with the nodes being individual knowledge pieces and their combinations representing the links. To borrow the example of <cit.>, think of an airplane as a specific type of technology. In order to fly, the airplane combines many different knowledge pieces. Crucially, some pieces need to be directly linked in order to function (e.g., wing design and aluminum processing), while others just need to be indirectly related (e.g., electronic navigation and wing design). When representing the airplane as the network of combined knowledge pieces, wing design and aluminum processing are directly linked. In contrast, electronic navigation is only indirectly related, as other knowledge pieces (electronic control systems, mechatronical interfaces, etc.) act as bridges. In this conception, I propose to use the complexity of this network representing the combinatorial structure of knowledge pieces as a measure of the (airplane) technology's complexity. That is, the difficulty of combining knowledge is argued to be determined by the precise structure with which knowledge pieces are integrated with each other in innovation processes.
Complex structures are more difficult to realize and hence represent more complex technologies. This is motivated by two arguments, one being inspired by the literature on network complexity <cit.> in combination with the literature on knowledge relatedness <cit.> and the second by information theory <cit.>. Beginning with the literature on network complexity and knowledge relatedness, Figure <ref> shows ideal-typical network structures. If the combinatorial network has the shape of a star, it means that all knowledge pieces just need to be combined with a central one. As knowledge piece combinations require some technological / cognitive overlap <cit.>, the pieces share some common parts, making combination/integration easier. The same reasoning applies to fully connected networks (Complete). Such overlap is lower when multiple central knowledge pieces characterize the combinatorial network (tree structure). In this sense, the network resembles the idea of the knowledge space <cit.>. The greater knowledge diversity makes such network structures more complex. A tree network implies a modular structure with each module being made of somewhat similar knowledge pieces, which reduces the overall complexity in the network. Such clear-cut modules as in a tree network are less frequent than small-world network structures, which therefore indicate greater knowledge diversity. However, there is still a certain degree of modularity and symmetry, which provides some simplifying patterns. Both are lacking in purely random combinatorial structures (Random). Each element is combined in a distinct way and there are no overarching principles structuring the combinatorial processes. Complexity is highest in this case. Hence, such structural differences (stars, complete, trees, small-world, random) in combinatorial networks can be used to differentiate complex and simple technologies. An alternative motivation for using combinatorial networks as a way to approach technological complexity is provided by information theory <cit.>. The combinatorial network represents a system of (knowledge) pieces and their interaction (combination). A system's complexity increases with the amount of information contained in its structure <cit.>. For instance, a star is simple because it can be summarized by the number of pieces (nodes) and the identity of the central piece (node). Much more information is contained in tree and small-world networks. However, the existence of structuring principles allows for information to be condensed. This is not possible in the case of random (network) structures, which therefore contain maximum information. A complete network is also a simple structure as it represents little information besides the number of pieces <cit.>. Hence, the information theoretical perspective on networks also allows for differentiating complex and simple structures and can therefore be used to assess the complexity of combinatorial networks and thereby that of technologies. Unfortunately, there is no single widely accepted method of measuring the complexity of (combinatorial) network structures. In contrast, a wide range of approaches exists that capture different structural aspects. It is beyond the scope of the present paper to review or discuss their pros and cons <cit.>. Recently, <cit.> developed the so-called Network Diversity Score (NDS), which reflects the structural diversity in a network. The measure has a number of desirable features.
Most importantly, it convincingly differentiates ordered, complex, and random networks. Networks are considered ordered when many nodes show similar properties (e.g., degree). For instance, most nodes in a star and tree network have the same degree (one). According to the above discussion, ordered networks represent simple technologies because they contain less information and are more homogeneous. Complex networks represent mixtures of such ordered and random structures, while random networks lack any type of order. In accordance with the above, complex networks belong to less complex technologies than random networks. <cit.> show that no traditional measure of network complexity is similarly good at categorizing networks with respect to their structural complexity. In addition, the NDS measure is relatively invariant to the size of networks; a rather unique feature among the measures of network complexity. It will be shown later in this paper that the measure's size invariance is a strong asset. § STYLIZED FACTS ABOUT TECHNOLOGICAL COMPLEXITY Each of the approaches of measuring technological complexity takes a somewhat different perspective, so the following question arises: which reflects technological complexity most appropriately? Unfortunately, there is no objective standard against which such a comparison can be made. I therefore put forward a (non-exclusive) list of four stylized facts about technological complexity, which most scholars in the field seem to agree upon. The three approaches will be evaluated on how well the complexity measures constructed on their basis are able to empirically reflect these facts. Technological complexity increases over time. Technological systems have become increasingly complex over time because of knowledge and technologies' cumulative nature, with each generation building upon the technological environment established by its predecessors <cit.>. Technologies also become more complex due to their growing range of functions. For instance, “[d]igital control systems [of aircraft engines] interact with and govern a larger (and increasing) number of engine components than [previous] hydromechanical ones” <cit.>. Another example is Microsoft's operating system Windows, which grew from 3-4 million lines of code (Windows 3.1) to more than 40 million (Windows Vista) <cit.>. Moreover, technologies have reached higher levels of complementarity, requiring more multi-technology activities, which adds to the complexity of their development and application <cit.>. “The result is a constantly increasing sophistication and richness of the technological world” <cit.>. The pattern of increasing technological complexity over time should hence be reflected by complexity measures applied to empirical data. Complex technologies require more R&D. The development of complex technologies requires dealing with greater technological diversity and combining less common knowledge than simple technologies <cit.>. Creating new knowledge combinations implies search activities for potentially fitting pieces and subsequent testing of these combinations. Frequently, advancing complex technologies is achieved by trial-and-error <cit.>. “What succeeded and failed last time gives clues as to what to try next, etc.” <cit.>. Hence, “harder-to-find”, i.e., more difficult/complex, solutions involve more trials and errors, which consume resources. The greater knowledge diversity inherent to complex technologies further demands more diverse but specialized experts working together.
“When dealing with technological complex projects [...], they [...] depend more heavily on other functional specialists for the expertise” <cit.>. They have to be provided with an environment that puts them in a position to exchange knowledge, learn, and work together, which requires further (e.g., organizational) resources <cit.>. In particular, (spatial) proximity among experts allowing for face-to-face communication enhances the work on complex projects, which is not necessarily true for simple projects in which intensive communication may even have negative effects <cit.>. Related to these are the greater difficulties of transmitting and diffusing more complex knowledge <cit.>. Learning of complex knowledge is more resource-intensive because greater absorptive capacities are needed <cit.> and passive learning modes are insufficient <cit.>. This challenges communication and collective learning processes within and among R&D labs. While there is no direct empirical confirmation for this stylized fact, some findings support it. For instance, the development time of complex products is larger (and hence more expensive) than that of simple ones <cit.>. Studies also find nations' R&D intensities outgrowing their economic outputs and incomes <cit.>. The greater need of collaborative R&D in the case of complex technologies is also frequently related to larger resource requirements that are overcome by organizations pooling their resources <cit.>. Moreover, the larger uncertainty and costs associated with complex technologies make organizations engaging in their development more likely to fail <cit.>. Complex technologies require more cooperation. “With the universe of knowledge ever expanding, researchers need to specialise to continue contributing to state of the art knowledge production” <cit.>. This in turn has led to a stronger dispersion of knowledge in the economy, thereby increasing the relevance of interpersonal knowledge exchange. Put differently, technological advancement increasingly requires interpersonal interaction and cooperation <cit.>. This trend is reflected in empirical data. For instance, <cit.> show that about ten percent of scientific publications were realized by co-authorships at the beginning of the twentieth century. This percentage rose to almost fifty percent at the end of this century. A similar trend can be observed for patents <cit.>. Interaction and cooperation are thereby more crucial for the development of complex than simple knowledge, as complex technologies include the combination of diverse and heterogeneous knowledge <cit.>. These are more likely to be possessed by specialized experts <cit.>. This finds some indirect confirmation in the studies of <cit.> and <cit.>. These authors report positive correlations between the number of citations to scientific articles (as a rough measure of their quality) and their numbers of authors. Complex technologies concentrate in space. As has been argued for a long time in Economic Geography and Regional Science as well as more recently by <cit.> and <cit.>, developing complex technologies requires special skills, existing expertise, infrastructure, and institutions not found in every place. For instance, industrial sectors interlinked by labor mobility, open but dense social networks, and related knowledge bases are crucial factors in such contexts <cit.>. Adding to this are strong economies of scale in R&D and the location choice of large R&D labs and universities that tend to be highly agglomerated <cit.>.
The place-specificity of favorable conditions for innovation is emphasized in concepts like the “learning regions”, “innovative milieu”, and “regional innovation systems” <cit.>. These conditions allow for bridging cognitive distances and combining heterogeneous knowledge, which in other places would remain uncombined. Such conditions are path-dependent and place-specific, making places with such characteristics relatively rare. The studies of <cit.> and <cit.> confirm this stylized fact using U.S. patent data. § EMPIRICAL EVALUATION To compare the approaches of measuring technological complexity, I will estimate five measures and apply them to empirical data. Subsequently, I will evaluate if the obtained results meet the four stylized facts above. §.§ Data In a common manner, I rely on patent data for approximating knowledge and technologies. Despite well-known problems <cit.>, patents entail detailed and unparalleled information about innovation processes such as date, location, and a technological classification. I use the OECD REGPAT database covering patent applications and their citations from the European Patent Office. The data covers the period 1975 to 2013 and includes information on 2,823,975 patent applications. I remove all non-European inventors, leaving 1,393,411 patents that are assigned to European NUTS 2 and 3 regions by means of inventors' residence (multiple-counting). Technologies are defined on the basis of the International Patent Classification (IPC). The IPC is hierarchically organized in eight classes at the highest and more than 71,000 classes at the lowest level. I use the four-digit IPC level to define 630 distinct technologies. While there is no objective reason for this level, it offers a good trade-off between technological disaggregation and manageable numbers of technologies. In addition, it has been used in related studies <cit.>. The complexity measures are estimated in a moving window approach. Patent numbers vary considerably between years and some technologies have few patents. I therefore follow common practice and combine patent information of five years such that a complexity measure estimated for year t is based on patents issued between t and t-4 <cit.>. §.§ Estimation of complexity measures §.§.§ Measures based on the method of reflection The estimation of the complexity measures based on the method of reflection starts with the calculation of the regional technological advantage (RTA) of region r with respect to technology c in year t: RTA_{r,c,t} = \frac{patents_{r,c,t} \, / \, \sum_{r} patents_{r,c,t}}{\sum_{c} patents_{r,c,t} \, / \, \sum_{c}\sum_{r} patents_{r,c,t}}. Second, an incidence matrix (M), or two-mode network, between regions (rows) and technologies (columns) is constructed with a binary link if region r has RTA_{r,c,t} > 1, i.e., it is above-average specialized in technology c, and no link otherwise. Each region's number of links (row sum) represents its diversity (K_{r,0}) and each technology's links its ubiquity (K_{c,0}) (column sum). In accordance with <cit.>, the diversity and ubiquity scores are sequentially calculated by estimating the following two equations simultaneously over n (20) iterations <cit.>: KCI_{r,n} = \frac{1}{K_{r,0}} \sum_{c} M_{r,c} \, KCI_{c,n-1} and KCI_{c,n} = \frac{1}{K_{c,0}} \sum_{r} M_{r,c} \, KCI_{r,n-1}. In the present paper, I am particularly interested in KCI_{c,n}, which represents technologies' complexity values.
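For concreteness, a compact sketch of these equations on a toy region-by-technology patent count matrix might look as follows (Python; the data and the guard against empty rows are illustrative and not part of the original estimation):

```python
import numpy as np

rng = np.random.default_rng(5)
P = rng.poisson(3, size=(50, 20))                  # toy region x technology patent counts

# Revealed technological advantage (RTA)
share_rc = P / P.sum(axis=1, keepdims=True)        # technology shares within regions
share_c = P.sum(axis=0) / P.sum()                  # technologies' overall shares
M = (share_rc / share_c > 1).astype(float)         # binary specialisation matrix

# Method of reflection, iterated 20 times
K_r0 = np.maximum(M.sum(axis=1), 1)                # diversity (guard against empty rows)
K_c0 = np.maximum(M.sum(axis=0), 1)                # ubiquity (guard against empty columns)
kr, kc = K_r0.astype(float), K_c0.astype(float)
for _ in range(20):
    # simultaneous update: both right-hand sides use the previous iteration
    kr, kc = (M @ kc) / K_r0, (M.T @ kr) / K_c0

complexity = kc                                    # technologies' KCI scores
print(np.round(complexity, 2))
```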
As a robustness check, the complexity index is estimated using the assignments of patents to NUTS 3 (1,383) regions, denoted as HH.3NUTS, and alternatively to NUTS 2 (384) regions, which will be denoted as HH.2NUTS. On the basis of the work of <cit.> and <cit.>, <cit.> propose an alternative version of this complexity measure. Matrix M is column-standardized and multiplied with its transposed version to get the square matrix B, which has the 630 technologies as dimensions. Its non-diagonal elements represent the similarity of technologies' distributions across places. The diagonal is the average diversity of cities having an RTA in the row/column technology. A technological complexity score is then estimated as the second eigenvector of matrix B. It is called HH.Eigen. Accordingly, two measures are based on the original method of reflection (HH.3NUTS, HH.2NUTS) that vary in terms of the underlying spatial unit. In addition, a modified version of the method of reflection is used for the measure HH.Eigen.[The three measures have been estimated using the R-package EconGeo by <cit.>.] §.§.§ Measures based on the difficulty of knowledge combination For calculating the complexity measure of <cit.>, knowledge pieces need to be defined whose combinations can then be evaluated. In accordance with <cit.>, knowledge pieces are approximated by the most disaggregated level of IPC subclasses (ten-digit subclass IPC level). Knowledge combinations are these subclasses' co-occurrences on patents (patents are usually classified into multiple classes). The ease of combination is approximated by setting the co-occurrence count of subclass i with all other subclasses in relation to the number of patents in this subclass: E_i = \frac{\text{count of subclasses previously combined with subclass } i}{\text{count of previous patents in subclass } i}. This score is inverted and averaged over all subclasses of patent l to create a measure of independence for each patent: K_l = \frac{\text{count of subclasses on patent } l}{\sum_{i \in l} E_i}. Based on the N/K model of <cit.>, the final complexity score is estimated as the ratio between the measure of independence K_l and the total number of patents on which l's subclasses occur (N). Crucially, E_i and the count of subclasses on patent l are estimated on the basis of different time periods. While the latter is calculated with respect to the current time period (moving window: patents granted between t and t-4), the first considers all patents prior to t-4. The score is estimated for each patent and subsequently averaged across all patents belonging to a technology (four-digit IPC class). It is denoted as FS.Modular. §.§.§ Calculation of the measure of structural complexity The calculation of the new measure of structural complexity (Structural) begins in a similar manner as FS.Modular. First, for each of the 630 technologies c, the set of patents belonging to the respective class is extracted. Second, the matrix M_c is established for each set by counting all co-occurrences of (ten-digit) IPC subclasses on its patents. M_c is dichotomized with all positive entries being set to one. The matrix now represents a binary undirected network G_c with the nodes being all IPC subclasses occurring on patents with at least one IPC subclass belonging to technology c. Links indicate observed co-occurrence. G_c contains all ways technology c's subclasses have been combined among themselves and with all other patent subclasses.
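A minimal sketch of this construction from patent-level subclass lists (the patent records and subclass codes shown are invented for illustration):

```python
import networkx as nx
from itertools import combinations

# Toy patent records: each patent lists its (ten-digit) IPC subclasses
patents_c = [
    ['B62K21/02', 'B62K19/30', 'F16C11/06'],
    ['B62K21/02', 'B62J6/02'],
    ['B62K19/30', 'F16C11/06', 'B62J6/02'],
    ['G01C21/26', 'B62J6/02'],
]

G_c = nx.Graph()
for ipcs in patents_c:
    # every pair of subclasses co-occurring on a patent becomes one binary link
    G_c.add_edges_from(combinations(sorted(set(ipcs)), 2))

# restrict to the largest connected component, as required by the NDS below
main = G_c.subgraph(max(nx.connected_components(G_c), key=len)).copy()
print(main.number_of_nodes(), main.number_of_edges())
```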
Hence, it is the combinatorial network of technology c.[Alternatively, the network can be restricted to subclasses belonging to technology c. However, such an approach would ignore potential bridging functions of adjacent technologies as well as the possibilities of embedding this technology into larger technological systems.] The question now is whether this network G_c has a complex structure. The network complexity measure NDS of <cit.> provides an answer. In contrast to most traditional network complexity measures, the NDS combines multiple network variables into one. First, the share of modules in the network, \alpha_{module} = M/n, with M being the number of modules and n that of nodes. Modules can be seen as a sign of general organizational principles in the network, i.e., of the existence of ordered structures. Second, a measure of the variance of module sizes, v_{module} = var(m)/mean(m), whereby m is the vector of module sizes. It approximates “the variability of network sizes in respect to the mean size of a module” <cit.>. Random networks are likely to show a low variability and low average size of modules. Third, the variable v_{\lambda} capturing the variability of the Laplacian matrix L, defined as v_{\lambda} = var(\Lambda(L))/mean(\Lambda(L)), which picks up similar structures as v_{module}. Fourth, the relation of motifs of size three and four, r_{motif} = N_{motif}(3)/N_{motif}(4). In numerical exercises, <cit.> observe this variable to be highest in ordered, medium in complex, and lowest in random networks. The four variables are combined in order to obtain the individual network diversity score (INDS) for the network G_c: INDS(G_c) = \frac{\alpha_{module} \cdot v_{module}}{v_{\lambda} \cdot r_{motif}}. Networks may show properties of a complex or ordered network just by chance and thereby mislead measures of complexity. <cit.> therefore estimate INDS for a population of networks G_M, to which G_c belongs. In practice, this is achieved by drawing S samples from network G_c and estimating INDS for each sample network. The final network diversity measure (NDS_s) can then be obtained by: NDS_s(\{ G_c^S \mid G_M \}) = \frac{1}{S} \sum_{G_c \in G_M^S} INDS(G_c). Since the network diversity score (NDS) is only defined for sufficiently large and connected networks <cit.>, I restrict the estimation to the largest component of network G_c. Moreover, the NDS_c score (equation <ref>) is only calculated if the component has at least five nodes (co-occurring IPC subclasses). More precisely, for each G_c (main component), a sample of 100 nodes n (in the case of components with fewer than 1,000 nodes) or 300 (for components with more than 1,000 nodes) is randomly drawn. For each node n, a network G_n is drawn from G_c by a random walk of 1,000 steps starting from n. From this network, a subnetwork G_n^i of 200 random nodes i [<cit.> find a sample network size of 120 nodes to be sufficient for robust results.] is selected. INDS (equation <ref>) is then estimated for G_n^i. The score is subsequently averaged over all subnetworks, giving NDS_c. To obtain a score in which large values signal random networks (complex technologies), medium values indicate complex networks (moderately complex technologies), and low values stand for ordered networks (simple technologies), NDS_c is taken in logs and multiplied by -1. It represents the structural (combinatorial) complexity of technology c and is denoted as Structural.
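The following strongly simplified sketch assembles the ingredients of the INDS equation with off-the-shelf tools – community detection for modules, the Laplacian spectrum, and a brute-force census of connected three- and four-node subgraphs. It omits the sampling scheme described above, so it should be read as an illustration of the logic rather than a faithful reimplementation, and its toy orderings should be interpreted only qualitatively:

```python
import networkx as nx
import numpy as np
from itertools import combinations
from networkx.algorithms import community

def n_connected_subgraphs(G, k):
    # Brute-force count of connected induced subgraphs ('motifs') of size k;
    # only feasible for small graphs.
    return sum(1 for nodes in combinations(G.nodes, k)
               if nx.is_connected(G.subgraph(nodes)))

def inds(G):
    # Simplified individual network diversity score
    comms = list(community.greedy_modularity_communities(G))
    sizes = np.array([len(c) for c in comms], dtype=float)
    alpha = len(comms) / G.number_of_nodes()          # share of modules
    v_mod = sizes.var() / sizes.mean()                # module-size variability
    lam = nx.laplacian_spectrum(G)
    v_lam = lam.var() / lam.mean()                    # Laplacian variability
    r_motif = n_connected_subgraphs(G, 3) / n_connected_subgraphs(G, 4)
    return (alpha * v_mod) / (v_lam * r_motif)

graphs = {
    'lattice-like (ordered)': nx.watts_strogatz_graph(30, 4, 0.0, seed=1),
    'small world (complex)': nx.watts_strogatz_graph(30, 4, 0.1, seed=1),
    'random': nx.gnm_random_graph(30, 60, seed=1),
}
for name, G in graphs.items():
    print(name, inds(G))   # the final score would be -log of a sampled average
```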
Notably, the results (i.e., the ranking of technologies) will vary somewhat across repeated estimations of the measure[The estimations of the measure's components have been conducted with the R-package QuACN by <cit.>.] due to its random sampling component.

§ RESULTS

§.§ Application-oriented aspects of the complexity measures

Before the measures are evaluated against the four stylized facts, it is informative to examine some empirical features unrelated to them. Unfortunately, two technologies do not have sufficient patents for any measure to be estimated, leaving a sample of 628 technologies in the example year 2010. Sixteen lack a sufficiently large component in the combinatorial network for a calculation of structural complexity. Table <ref> in the Appendix lists some basic descriptives.

A first interesting insight into the measures' properties is gained by rank-correlation analyses using the data of the last five years (2008-2013) (Table <ref>). Besides the five complexity measures, the analyses include the growth of patents in the last 10 years (Patent.Growth.10), the number of citations per patent (Cit.Pat), and the number of IPC subclasses (IPCs) found on patents of a technology. No measure shows a strong relationship with the number of citations per patent (Cit.Pat). Research shows that patents' technological and economic value correlates with their citation counts <cit.>, suggesting that no measure is able to directly capture this “value” dimension of technologies. The same holds true for the growth of patent numbers during the last 10 years (Patent.Growth.10).

HH.2NUTS and HH.3NUTS are positively correlated. Their correlation is relatively high, with r=0.89, implying that the employed scale of the underlying spatial units matters but does not dramatically alter the complexity scores. Therefore, one of the criticisms of this measure raised in Section <ref> finds only weak support. Put differently, the ranking of technologies in terms of complexity depends to some, but not to a dramatic, degree on the spatial unit chosen as the basis of the estimation.

The two measures based on IPC subclass combinations (FS.Modular and Structural) are negatively associated with the other complexity measures (except for FS.Modular and HH.Eigen). Accordingly, while attempting to measure the same thing (technological complexity), the two approaches (the method of reflection and evaluating IPC subclass combinations) do not overlap empirically.

It should be noted that the computational requirements of Structural drastically exceed those of the other measures. In part, this is because it is not yet implemented in existing software and (more significantly) because it includes an iterative sampling procedure.

§.§ Increasing complexity over time

§.§.§ Average complexity

While the application-oriented aspects are important, they do not give insight into how well the different approaches perform in measuring technological complexity. The first stylized fact used for such an assessment is whether the average complexity of technologies increases over time. Figure <ref> answers this question by showing the median complexity value across all technologies for each of the five measures from 1980 to 2013. For better visualization and comparison, all measures have been divided by their maximum. The first thing to notice is the relatively erratic and nonparallel development of HH.2NUTS and HH.3NUTS.
With some interruptions, HH.3NUTS remains close to one (the maximum) until about 2000, before it starts to drop to values around 0.55. In contrast, HH.2NUTS starts from a maximum value of almost one, before dropping to about 0.27 in 1993, increasing back to one in 1997, and declining strongly again until 2008, before growing in the last three years. While technological development does not necessarily take place in a smooth manner, there is no explanation for why complexity should have dropped that drastically at particular points in time. Moreover, the nonparallel development of HH.2NUTS and HH.3NUTS underlines the scale variance of the measure. Clearly, the two measures fail to represent the stylized fact of increasing complexity over time.

The three other measures, HH.Eigen, FS.Modular, and Structural, are more effective. While there is a strong drop in HH.Eigen to almost zero in the early 1980s, it increases relatively monotonically afterwards. FS.Modular and Structural show a steadier and more monotonic increase, which, however, reverses in the year 2004 in the case of Structural. The decline of Structural is rather limited (the value of 2013 is just 7.3% smaller than the maximum value in the year 2004). The decline might be a feature of the employed database, in which recent patents are frequently added multiple years after their actual application and hence might not have been included yet. It should therefore not be over-interpreted.

In general, the figure shows the similarity between the development of FS.Modular and that of the median number of patents per technology (also normalized by its maximum). Structural follows the general trend of patent numbers as well, but to a lesser degree. The extent to which this might be caused by a “size dependency” of the complexity measures will be explored in more detail in Section <ref>.

§.§.§ Technologies' age

Increasing complexity over time can also be assessed by comparing the average `age' of technologies to their complexity, the idea being that more recent technologies are more complex. I approximate age by calculating the mean age of patents in a given year for each technology and correlate it with the corresponding complexity scores.[Note that the database is restricted, with the earliest patents being from 1978.] A positive correlation implies that technologies with young patents (i.e., subject to more recent R&D) obtain higher complexity values, which corresponds to the stylized fact.

Figure <ref> plots this rank correlation for each year. It clearly confirms the previous observation: only HH.Eigen, FS.Modular, and Structural are able to replicate the stylized fact of younger technologies being more complex, i.e., of complexity growing over time. Notably, the correlation of HH.Eigen and patents' mean age only becomes positive after 1986, while for FS.Modular and Structural it has been positive since 1981.[Given the lack of patent data prior to 1978, early years may not be reliable for this analysis.] HH.2NUTS and HH.3NUTS are characterized by a negative correlation for most years, suggesting that they identify older technologies as complex. In summary, the three measures HH.Eigen, FS.Modular, and Structural correspond to and reflect growing technological complexity over time and thereby align with the first stylized fact.
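The age analysis reduces to a mean-age aggregation followed by a rank correlation; a minimal sketch using made-up column names and values:

```python
import pandas as pd
from scipy.stats import spearmanr

# hypothetical panel: one row per patent (illustrative labels only)
patents = pd.DataFrame({
    "technology": ["A01B", "A01B", "C07C", "C07C", "B25J"],
    "grant_year": [2001, 2009, 1999, 2003, 2008],
})
complexity = pd.Series({"A01B": 0.4, "B25J": 0.7, "C07C": 0.9})

year = 2010
mean_age = (year - patents["grant_year"]).groupby(patents["technology"]).mean()

# with age on the x-axis, a negative rho means that technologies with
# younger patents obtain higher complexity scores
rho, p = spearmanr(mean_age, complexity.loc[mean_age.index])
print(rho, p)
```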
§.§ Magnitude of R&D efforts

Unfortunately, I lack information on the true R&D efforts invested or the R&D employment contributing to the development of the technologies considered in the paper. Following common practice, I therefore approximate R&D efforts with the number of patents. This is justified by patents and R&D efforts being positively correlated at the organizational and regional level <cit.>. However, it has to be pointed out that this approximation is strongly influenced by national and industrial differences in patent propensity and R&D productivity <cit.>. This surely reduces the reliability of the analysis and calls for future work on this issue.[Alternatively, I could have used the number of inventors as an approximation of R&D efforts. However, its correlation with patent counts is r=0.98^***, and using it does not alter the empirical results at all.]

The results of the (rank) correlation analysis are shown in Figure <ref>. The two measures HH.3NUTS and HH.2NUTS are strongly negatively correlated with patent counts in all years except 1991. This negative correlation may reflect that technologies with few patents tend to be (for this reason) (co-)concentrated in space, which increases their estimated complexity. The strong negative correlation implies that these two measures cannot reproduce this stylized fact.

A positive correlation between patent numbers and complexity scores is observed for Structural. Large patent classes imply many IPC subclasses (r=0.93^***), which reduces the chances of their co-occurrence on patents. The correlation of Structural is above 0.6 in most years and above 0.8 in recent years. Hence, the measure seems to be strongly influenced by the number of patents assigned to four-digit IPC classes. This makes it easy for the measure to reflect this stylized fact. However, it also raises the question of whether the measure's information content goes sufficiently beyond that represented by the absolute number of patents. While the ranking information is not identical, it overlaps to more than 80%. Figure <ref> reveals that the magnitude of the correlation drops strongly when very small technologies are excluded. For instance, when excluding patents in technologies with fewer than 200 patents, which corresponds to an exclusion of 8% of all patents, the correlation of Structural and technology size already drops to 0.5. Given that the measure is based on network complexity measures that are known to be closely linked to network size, a rank correlation of less than r=0.5 has to be seen as a relatively low value in this context and highlights one of the NDS measure's attractive features <cit.>. By further limiting the sample to patents in large technologies, the correlation decreases to a minimum of 0.35 before gradually increasing again. Crucially, the correlation always remains positive without reaching the initial large levels again. The declining correlation for larger technologies relates to the fact that small technologies with very few patents frequently show complete combinatorial networks (density of 1), which are by definition classified as being simple (see Section <ref>). In sum, the stylized fact can be clearly confirmed for Structural.

A more moderate positive correlation is found for FS.Modular, signaling that this measure also represents the stylized fact of complex technologies requiring larger R&D efforts. Figure <ref> reveals that this correlation is somewhat larger in the case of medium-sized technologies than for smaller and larger ones.
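The size-exclusion exercise just described can be sketched as follows: the rank correlation between complexity and technology size is re-estimated while technologies below an increasing patent-count threshold are dropped. The data here is synthetic and the column names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# hypothetical cross-section of technologies: size and complexity score
df = pd.DataFrame({"n_patents": rng.integers(5, 3000, size=600)})
df["structural"] = np.log(df["n_patents"]) + rng.normal(0, 1, size=len(df))

for min_size in [0, 50, 200, 500, 1000]:
    sub = df[df["n_patents"] >= min_size]
    share = sub["n_patents"].sum() / df["n_patents"].sum()
    rho, _ = spearmanr(sub["n_patents"], sub["structural"])
    print(f"min size {min_size:>4}: rho={rho:.2f}, patent share kept={share:.2f}")
```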
The results for HH.Eigen are less clear. Its correlation with patent counts remains negative until 1997; afterwards, it becomes positive. Given that this positive correlation stays well below r=0.2, I do not regard this measure as aligning with the fact. In short, only two out of five measures (FS.Modular and Structural) are able to mirror the stylized fact of complex technologies being associated with larger R&D efforts.

§.§ Spatial concentration

The production of complex technologies is expected to be spatially concentrated because few places possess the necessary capabilities. To test this stylized fact, I first estimate the spatial concentration of technologies by means of the GINI coefficient and the assignment of inventors to NUTS3 regions. The coefficient obtains a value close to one if inventors concentrate in few regions, and its value converges to zero if they are evenly distributed in space. As a simple test of the degree of spatial concentration, I estimate the correlation between complexity scores and the GINI coefficients of the patents used in their construction for the year 2010. The results are shown in Table <ref>. The two measures HH.3NUTS and HH.2NUTS turn out to be strongly positively correlated with spatial concentration, while HH.Eigen, FS.Modular, and Structural are found to be negatively correlated. While this would suggest that just the first two measures correspond to the stylized fact, it has to be pointed out that spatial concentration is strongly negatively correlated with technologies' size (number of patents): larger technologies concentrate less in space. Since FS.Modular and Structural are positively correlated with size, this might drive the results.

Figure <ref> clarifies this issue by plotting the correlation of complexity and spatial concentration for varying subsamples. More precisely, I iteratively re-estimate the correlation by removing the smallest technologies from the original data, whereby the technologies' minimum size (number of patents) required to remain in the subsample is raised by one patent in each iteration. Accordingly, the solid lines represent the correlation coefficient given technologies of at least the corresponding size. Additionally, the figure shows the share of all patents still covered by the subsample. To exclude potential temporal effects, I exclusively consider the year 2010. The exercise has little impact on the correlation of HH.2NUTS and HH.3NUTS, which remains close to 0.3. Similarly, the negative correlation of FS.Modular with spatial concentration remains intact. However, the results for HH.Eigen and Structural change dramatically. When the smallest technologies are excluded (those with fewer than 350 patents in 2010), the correlation, which initially was strongly negative, becomes positive. Excluding these technologies corresponds to dropping ca. 13% of all patents. When excluding about 25% of all patents, the correlation of Structural is already at the level of that of HH.2NUTS and HH.3NUTS, and it keeps increasing after this point. For HH.Eigen to reach this level, almost 75% of all patents would have to be dropped, which suggests that spatial concentration is not a strong feature of technologies identified as complex by this measure.
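A minimal sketch of the GINI coefficient used here, computed from the counts of a technology's inventors per region; the region counts are made up for illustration:

```python
import numpy as np

def gini(counts):
    """GINI coefficient of a non-negative count vector.

    Returns values near 0 for even spatial distributions and
    values near 1 when activity concentrates in few regions.
    """
    x = np.sort(np.asarray(counts, dtype=float))   # ascending order
    n = x.size
    lorenz = np.cumsum(x) / x.sum()
    # standard formula based on the Lorenz curve
    return (n + 1 - 2 * lorenz.sum()) / n

# inventors of one technology across hypothetical NUTS3 regions
print(gini([120, 3, 2, 1, 0, 0]))      # concentrated -> high value
print(gini([20, 21, 19, 22, 20, 19]))  # even -> low value
```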
In summary, the stylized fact of complex technologies concentrating in space corresponds to what can be observed when applying HH.2NUTS and HH.3NUTS to empirical data. However, this might be related to what is already built into this measure (see Section <ref>). The empirical results for Structural also mirror this fact when the smallest technologies are excluded. HH.Eigen and FS.Modular do not accord with this stylized fact.

§.§ Collaborative R&D

Complex technologies should show higher degrees of collaborative R&D than simple ones. In a similar fashion as above, I explore this relation by correlating the number of inventors per patent with the five complexity measures. Figure <ref> depicts this correlation over time.

The figure reveals that only Structural corresponds to the stylized fact of more collaborative R&D in complex technologies. The correlation is consistently positive and exceeds r=0.25^*** in all years. The peak in the correlation between the number of inventors per patent and Structural in 1992, with a value close to 0.5, is an interesting observation that deserves more attention in future research. All other complexity measures show negative correlations with the number of inventors per patent over extended time periods. While HH.2NUTS and HH.3NUTS show positive correlations until about 1993, their coefficients remain negative in most of the subsequent years. FS.Modular never attains a positive correlation with the number of inventors per patent. Hence, it is again only Structural that reflects this stylized fact.

§ DISCUSSION & CONCLUSION

The complexity of technologies has been measured in various ways in the past. The paper reviewed two existing empirical approaches to measuring technological complexity: the method of reflection approach by <cit.> and the difficulty of knowledge combination approach put forward by <cit.>. It was demonstrated that both approaches rely on critical assumptions, motivating the need for alternative measures of technological complexity. Based on the work of <cit.> and the literature on network complexity, I proposed the new measure of structural complexity. It captures the complexity of the knowledge combination process underlying technologies' advancement. Five distinct measures of technological complexity based on the three approaches were estimated and evaluated using European patent data for the years 1980-2013. I put forward four stylized facts that served as a benchmark for the evaluation: increasing (average) technological complexity over time, complex technologies requiring larger R&D efforts, their R&D being more collaborative, and complex technologies concentrating in space. In addition, the technologies identified as complex or simple should meet intuitive expectations.

Table <ref> summarizes the evaluation results. Only the newly introduced measure Structural, which captures the structural complexity of the knowledge combination underlying technologies, meets all stylized facts to an acceptable degree. While it does not confirm spatial concentration for small complex technologies, these represent a relatively small fraction of all patents. Its position is further strengthened by the empirical issues troubling the traditional measures (Table <ref>). When using the method of reflection approach (HH.3NUTS, HH.2NUTS, and HH.Eigen), the ranking of technologies in terms of complexity is found to be weakly conditional on the definition of the underlying spatial unit. Finding an appropriate spatial scale is not only a very difficult task in general, but appropriate spatial units are also likely to differ in scale between technologies. For instance, some technologies' development requires spatial proximity of the underlying knowledge bases <cit.>, implying that rather small spatial units are the appropriate representation, while that of others does not; the latter's R&D activities might therefore be better captured at larger spatial scales.
Accordingly, any chosen scale will potentially be correct only for a share of technologies.

Using the measure of structural complexity requires keeping in mind its strong positive correlation with technologies' size (patent counts) when technologies with few patents are considered. Moreover, by construction of the measure, the obtained complexity scores are subject to some random variation across re-estimations using the same data. These variations are, however, limited in scope[In non-systematic tests, I found a Pearson correlation of about r=0.98^*** across re-estimations and a rank correlation of about r=0.91^***.] and can be minimized by increasing the size of the drawn subsamples (nodes and network subsamples), though this adds to the computational burden of the calculations. The high computational burden is another noteworthy negative feature of this measure.

Lastly, it is also worthwhile examining the technologies ranked as most complex and simplest by the five measures. I therefore present the ten technologies ranked highest in terms of the five complexity measures in Tables <ref>, <ref>, <ref>, <ref>, and <ref> in the Appendix. The technologies identified as simplest are listed in Tables <ref>, <ref>, <ref>, <ref>, and <ref>. Given the potentially biasing effects of small technologies, I concentrate on technologies with at least 10 patents in the identification of the most complex ones.[A low number of patents also makes the obtained complexity scores unreliable, because most of the measures require a sufficiently large number of empirical observations. The full rankings can be obtained from the author upon request.] It is beyond the paper's scope to discuss each and every technology in the lists, but some general patterns should be mentioned. The lists of most complex technologies as identified by HH.3NUTS, HH.2NUTS, and HH.Eigen include many technologies related to manual activities (B23G, B21L, B27C, D01H, B25C, D05B) or to natural resources (B27B, B27G). Usually, these technologies are not associated with technological complexity. According to HH.3NUTS and HH.2NUTS, chemical technologies (C07C, C07D, C12N, C07K) are technologically simple. This is counterintuitive, as chemistry is usually considered a high-tech field involving large R&D efforts <cit.>. In the case of FS.Modular, the top-ten list also includes some technologies that relate to rather simple activities (A63C, A01C, A47J) and hence might not be considered complex. In contrast, the ten simplest technologies according to this measure seem reasonable; this list is, however, strongly driven by low patent numbers in these fields. The top-ten and bottom-ten lists of Structural are very compelling, with the size of patenting activity appearing to be a clear factor. Nevertheless, technologies ranking in the one-hundreds in terms of patent numbers also make the top-10 list. As for FS.Modular, the list of the simplest technologies is clearly driven by small patent numbers, with B61G ranking 484th in terms of patents among the 587 technologies with more than ten patents in 2010.

In summary, the newly proposed measure of structural complexity yields promising results and performs well with respect to the four stylized facts of technological complexity put forward in the paper. Of course, given the lack of an objective benchmark, the presented evaluation has its limitations, which particularly relate to the four stylized facts. While the literature seems to agree on these, there is little to no supporting empirical evidence.
This, of course, is in large part due to the lack of a widely accepted complexity measure. Moreover, there might be additional stylized facts that have not been considered here. For instance, <cit.> argue that complex technologies are likely to yield higher economic rents. This has not been included in the current assessment, as it is debatable and the empirical data needed to assess it is missing. In light of this, the paper should also be seen as a call for further research, and I hope it stimulates and contributes to a fruitful scientific debate on this issue.

§ APPENDIX
Streaming Graph Challenge: Stochastic Block Partition

Edward Kao, Vijay Gadepally, Michael Hurley, Michael Jones, Jeremy Kepner, Sanjeev Mohindra, Paul Monticciolo, Albert Reuther, Siddharth Samsi, William Song, Diane Staheli, Steven Smith

MIT Lincoln Laboratory, Lexington, MA

An important objective for analyzing real-world graphs is to achieve scalable performance on large, streaming graphs. A challenging and relevant example is the graph partition problem. As a combinatorial problem, graph partition is NP-hard, but existing relaxation methods provide reasonable approximate solutions that can be scaled to large graphs. Competitive benchmarks and challenges have proven to be an effective means to advance state-of-the-art performance and foster community collaboration. This paper describes a graph partition challenge with a baseline partition algorithm of sub-quadratic complexity. The algorithm employs rigorous Bayesian inferential methods based on a statistical model that captures characteristics of real-world graphs. This strong foundation enables the algorithm to address limitations of well-known graph partition approaches such as modularity maximization. This paper describes various aspects of the challenge including: (1) the data sets and streaming graph generator, (2) the baseline partition algorithm with pseudocode, (3) an argument for the correctness of parallelizing the Bayesian inference, (4) different parallel computation strategies such as node-based parallelism and matrix-based parallelism, (5) evaluation metrics for partition correctness and computational requirements, (6) preliminary timing of a Python-based demonstration code and the open source C++ code, and (7) considerations for partitioning the graph in streaming fashion. Data sets and source code for the algorithm as well as the metrics, with detailed documentation, are available at http://GraphChallenge.org.

*This material is based upon work supported by the Defense Advanced Research Projects Agency under Air Force Contract No. FA8721-05-C-0002. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of Defense.

§ INTRODUCTION

In the era of big data, analysis and algorithms often need to scale up to large data sets for real-world applications. With the rise of social media and network data, algorithms on graphs face the same challenge. Competitive benchmarks and challenges have proven to be an effective means to advance state-of-the-art performance and foster community collaboration. Previous benchmarks such as Graph500 <cit.> and the PageRank Pipeline <cit.> are examples of this, targeting analysis of large graphs and focusing on problems with sub-quadratic complexity, such as search, path-finding, and PageRank computation. However, some analyses on graphs with valuable applications are NP-hard. The graph partition and the graph isomorphism (i.e. matching) problems are well-known examples.
Although these problems are NP-hard, existing relaxation methods provide good approximate solutions that can be scaled to large graphs <cit.>, especially with the aid of high performance computing hardware platforms such as massively parallel CPUs and GPUs. For example, the 10th DIMACS Implementation Challenge <cit.> resulted in substantial participation in the graph partition problem, mostly with solutions based on modularity maximization. To promote algorithmic and computational advancement in these two important areas of graph analysis, our team has implemented a challenge for graph isomorphism <cit.> and graph partition at http://GraphChallenge.org. This paper describes the graph partition challenge with a recommended baseline partition algorithm of sub-quadratic complexity. Furthermore, the algorithm employs rigorous Bayesian inferential methods based on the stochastic blockmodels that capture characteristics of real-world graphs. Participants are welcome to submit solutions based on other partition algorithms as long as knowledge of the true number of communities (i.e. blocks) is not assumed. All entries should be submitted with performance evaluation on the challenge data sets using the metrics described in Section <ref>.

Graph partition, also known as community detection and graph clustering, is an important problem with many real-world applications. The objective of graph partition is to discover the distinct community structure of the graph, specifically the community membership of each node in the graph. The partition gives much insight into the interactions and relationships between the nodes and enables detection of nodes belonging to certain communities of interest. Much prior work has been done in the problem space of graph partition, with a comprehensive survey in <cit.>. The most well-known algorithm is probably the spectral method by <cit.>, where partitioning is done through the eigenspectrum of the modularity matrix. Most of the existing partition algorithms work through the principle of graph modularity, where the graph is partitioned into communities (i.e. modules) that have much stronger interactions within them than between them. Typically, partitioning is done by maximizing the graph modularity <cit.>. <cit.> extends the concept of modularity for time-dependent, multiscale, and multiplex graphs. Modularity maximization is an intuitive and convenient approach, but it has inherent challenges such as the resolution limit on the size of the detectable communities <cit.>, degeneracies in the objective function, and difficulty in identifying the optimal number of communities <cit.>. To address these challenges, recent works perform graph partition through membership estimation based on generative statistical models. For example, <cit.> estimate community memberships using the degree-corrected stochastic blockmodels <cit.>, and <cit.> proposes a mixed-membership estimation procedure by applying tensor methods to the mixed-membership stochastic blockmodels <cit.>. The baseline partition algorithm for this challenge is based on <cit.>, because of its rigorous statistical foundation and sub-quadratic computational requirement. Under this approach, each community is represented as a “block” in the model. Going forward, this paper will use the term “block” as the nomenclature for a community or a graph cluster.

When some nodes in the graph have known memberships a priori, these nodes can serve as “cues” in the graph partition problem.
<cit.> is an example of this approach, using random walks on graphs. This challenge will focus on the graph partition problem where such cues are not available.

In many real-world applications, graph data arrives in streaming fashion over time or over stages of sampling <cit.>. This challenge addresses this aspect by providing streaming graph data sets and recommending a baseline partition algorithm that is suitable for streaming graphs under the Bayesian inference paradigm. This paper describes the graph partition challenge in detail, beginning with Section <ref> on the data sets and streaming graph generator. Section <ref> describes the baseline partition algorithm, including pseudocode on the core Bayesian updates. Section <ref> focuses on the parallel computation of the baseline algorithm, argues for the correctness of parallelizing the Bayesian updates, and then proposes parallel computation strategies such as node-based parallelism and matrix-based parallelism. Section <ref> describes the evaluation metrics for both partition correctness and computational requirements, including preliminary timing of a Python-based demonstration code and the open source C++ code by Tiago Peixoto <cit.>. Considerations for partitioning the graph in streaming fashion are given throughout the paper.

§ DATA SETS

The data sets for this challenge consist of graphs of varying sizes and characteristics. Denote a graph G = (𝒱, ℰ), with the set 𝒱 of N nodes and the set ℰ of E edges. The edges, represented by an N × N adjacency matrix A, can be either directed or undirected, binary or weighted. Specifically, A_ij is the weight of the edge from node i to node j. An undirected graph will have a symmetric adjacency matrix.

In order to evaluate the partition algorithm implementation on graphs with a wide range of realistic characteristics, graphs are generated according to a truth partition b^† of B^† blocks (i.e. clusters), based on the degree-corrected stochastic blockmodels by Karrer and Newman in <cit.>. Under this generative model, each edge, A_ij, is drawn from a Poisson distribution of rate λ_ij governed by the equations below:

A_ij ∼ Poisson(λ_ij)
λ_ij = θ_i θ_j Ω_b_i b_j

where θ_i is a correction term that adjusts node i's expected degree, Ω_b_i b_j the strength of interaction between blocks b_i and b_j, and b_i the block assignment for node i. The degree-corrected stochastic blockmodels enable the generation of graphs with characteristics and variations consistent with real-world graphs. The degree correction term for each node can be drawn from a power-law distribution with an exponent between -3 and -2 to capture the degree distribution of realistic, scale-free graphs <cit.>. The block interaction matrix Ω specifies the strength of within- and between-block (i.e. community) interactions. Stronger between-block interactions will increase the block overlap, making the block partition task more difficult. Lastly, the block assignment for each node (i.e. the truth partition b^†) can be drawn from a multinomial distribution with a Dirichlet prior that determines the amount of variation in size between the blocks. Figure <ref> shows generated graphs of various characteristics obtained by adjusting the parameters of the generator. These parameters serve as “knobs” that can be dialed to capture a rich set of characteristics for realism and also for adjusting the difficulty of the block partition task.
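A minimal sketch of this generative process is given below, with simple illustrative choices for the degree corrections (Pareto tail index 1.5, i.e. a power-law exponent of -2.5), the block interaction matrix, and the Dirichlet prior; the actual challenge generator is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(42)
N, B, total_edges = 200, 4, 2000

# truth partition: multinomial draw with a Dirichlet prior on block sizes
block_probs = rng.dirichlet(np.full(B, 5.0))
b = rng.choice(B, size=N, p=block_probs)

# degree corrections from a heavy-tailed draw, normalized within each block
theta = rng.pareto(1.5, size=N) + 1.0
for r in range(B):
    theta[b == r] /= theta[b == r].sum()

# block interaction matrix: strong within-block, weak between-block rates,
# scaled so that the expected number of edges equals total_edges
Omega = np.full((B, B), 0.2)
np.fill_diagonal(Omega, 1.0)
Omega *= total_edges / Omega.sum()

# A_ij ~ Poisson(theta_i * theta_j * Omega_{b_i, b_j})
lam = np.outer(theta, theta) * Omega[np.ix_(b, b)]
A = rng.poisson(lam)
```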
Real-world graphs will also be included in the data sets. Since the truth partition is not available for most real-world graphs, generated graphs with truth will be embedded within the real-world graphs. While the entire graph will be partitioned, evaluation of the correctness of the partition will be done only on the generated part of the hybrid graph. Embedding will be done by adding edges between nodes in the real-world graph and the generated graph, with a relatively small probability proportional to the product of both node degrees.

In real-world applications, graph data often arrives in streaming fashion, where parts of the input graph become available at different stages. This happens as interactions and relationships take place and are observed over time, or as data is collected incrementally by exploring the graph from starting points (e.g. breadth first search and snowball sampling) <cit.>. Streaming graph data sets in this challenge are generated in both ways, as demonstrated in Figure <ref>. The partition algorithm should process the streaming graph at each stage and ingest the next stage upon completion of the current stage. Performance evaluated using the metrics in Section <ref> should be reported at each stage of the processing. For efficiency, it is recommended that the partition algorithm leverage partitions from the previous stage(s) to speed up processing at the current stage. The baseline partition algorithm for this challenge is a natural fit for streaming processing, as discussed in Section <ref>.

§ BASELINE ALGORITHM

This section describes the recommended baseline partition algorithm, although participants are welcome to submit solutions based on other partition algorithms as long as knowledge of the true number of blocks is not assumed. The baseline graph partition algorithm for this challenge, chosen for its rigorous statistical foundation and sub-quadratic, O(E log^2 E), computational requirement, was developed by Tiago Peixoto in <cit.> based on the degree-corrected stochastic blockmodels by Karrer and Newman in <cit.>. Given the input graph, the algorithm partitions the nodes into B blocks (i.e. clusters or communities) by updating the nodal block assignment, represented by the vector b of N elements where b_i ∈ {1, 2, ..., B}, and the inter-block and intra-block edge count matrix (typically sparse in a large graph), represented by M of size B × B, where each element M_ij represents the number or the total weight of edges going from block i to block j. The diagonal elements represent the edge counts within each block. For conciseness, this matrix will be referred to as the inter-block edge count matrix going forward. The goal of the algorithm is to recover the truth partition b^† of B^† blocks (i.e. clusters).

The algorithm performs a Fibonacci search (i.e. golden section search) <cit.> through different numbers of blocks B and attempts to find the minimum description length partition. The best overall partition b^* with the optimal number of blocks B^* minimizes the total description length of the model and the observed graph (i.e. the entropy of the fitted model). To avoid being trapped in local minima, the algorithm starts with each node in its own block (i.e. B=N); the blocks are merged at each step of the Fibonacci search, followed by iterative Markov chain Monte Carlo (MCMC) updates on the block assignment for each node to find the best partition for the current number of blocks. The block-merge moves and the nodal updates are both governed by the same underlying log posterior probability of the partition given the observed graph:

p(b | G) ∝ ∑_t_1, t_2 M_t_1 t_2 log( M_t_1 t_2 / (d_t_1,out · d_t_2,in) )

The log posterior probability is a summation over all pairs of blocks t_1 and t_2, where d_t_1,out is the total out-degree of block t_1 and d_t_2,in is the total in-degree of block t_2. Note that in computing the posterior probabilities on the block assignments, the sufficient statistics for the entire graph are only the inter-block edge counts, giving this algorithm a substantial computational advantage. Another nice property of the log posterior probability is that it is also the negative entropy of the fitted model. Therefore, maximizing the posterior probability of the partition also minimizes the overall entropy, fitting nicely into the minimum description length framework.
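A direct transcription of this log posterior into Python, assuming the inter-block edge count matrix M has already been assembled (zero entries are skipped, since x log x → 0 as x → 0):

```python
import numpy as np

def log_posterior(M):
    """Log posterior (up to a constant) of a partition, from the B x B
    inter-block edge count matrix M of a directed graph."""
    M = np.asarray(M, dtype=float)
    d_out = M.sum(axis=1)          # total out-degree of each block
    d_in = M.sum(axis=0)           # total in-degree of each block
    nz = M > 0                     # skip zero entries
    rows, cols = np.nonzero(nz)
    return np.sum(M[nz] * np.log(M[nz] / (d_out[rows] * d_in[cols])))

# toy example: two strongly assortative blocks
M = np.array([[40.0, 2.0],
              [3.0, 55.0]])
print(log_posterior(M))
```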
The block-merge moves and the nodal block assignment updates are described in detail next, starting with the nodal updates.

§.§ Nodal Block Assignment Updates

The nodal updates are performed using MCMC, specifically with Gibbs sampling and the Metropolis-Hastings algorithm, since the partition posterior distribution in Equation <ref> does not have a closed form and is best sampled one node at a time. At each MCMC iteration, the block assignment of each node i is updated conditional on the assignments of the other nodes, according to the conditional posterior distribution p(b_i | b_-i, G). Specifically, the block assignment b_i for each node i is updated based on the edges to its neighbors, A_i𝒩_i and A_𝒩_i i, the assignments of its neighbors, b_𝒩_i, and the inter-block edge count matrix, M. For each node i, the update begins by proposing a new block assignment. To increase exploration, a block is randomly chosen as the proposal with some predefined probability. Otherwise, the proposal will be chosen from the block assignments of nodes nearby to i. The new proposal will be considered for acceptance according to how much it changes the log posterior probability. The acceptance probability is adjusted by the Hastings correction, which accounts for potential asymmetry in the directions of the proposal to achieve the important detailed balance condition that ensures the correct convergence of the MCMC. Algorithm <ref> in Appendix A is a detailed description of the block assignment update at each node, using some additional notation: d_t,in = ∑_k M_kt is the number of edges into block t, d_t,out = ∑_k M_tk the number of edges out of block t, d_t = d_t,in + d_t,out the number of edges into and out of block t, K_it the number of edges between node i and block t, and β the update rate that controls the balance between exploration and exploitation. The block assignments are updated iteratively for each node until convergence, i.e., when the improvement in the log posterior probability falls below a threshold.

§.§ Block-Merge Moves

The block-merge moves work in almost identical fashion to the nodal updates described in Algorithm <ref> in Appendix A, except that they take place at the block level. Specifically, a block-merge move proposes to reassign all the nodes belonging to the current block i to a proposed block s. In other words, it is like applying Algorithm <ref> on the block graph, where each node represents an entire block (i.e. all the nodes belonging to that block) and each edge represents the number of edges between two blocks.
Another difference is that the block-merges are done in a greedy manner to maximize the log posterior probability, instead of through MCMC. Therefore, the Hastings correction computation step and the proposal acceptance step are not needed. Instead, the best merge move over some number of proposals is computed for each block according to the change in the log posterior probability, and the top merges are carried out to arrive at the number of blocks targeted by the Fibonacci search.

§.§ Put It All Together

Overall, the algorithm shifts back and forth between the block-merge moves and the MCMC nodal updates to find the optimal number of blocks B^* with the resulting partition b^*. Optimality is defined as having the minimum overall description length, H, of the model and the observed graph given the model:

H = E h(B^2/E) + N log B - ∑_r,s M_rs log( M_rs / (d_r,out · d_s,in) )

where the function h(x) = (1+x) log(1+x) - x log(x).
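A sketch of this description length computation, reusing the inter-block edge count matrix from the earlier example:

```python
import numpy as np

def description_length(M, N):
    """Overall description length H of the model and the observed graph,
    given the B x B inter-block edge count matrix M and N nodes."""
    M = np.asarray(M, dtype=float)
    B = M.shape[0]
    E = M.sum()
    x = B * B / E
    h = (1 + x) * np.log(1 + x) - x * np.log(x)   # model complexity term
    d_out, d_in = M.sum(axis=1), M.sum(axis=0)
    nz = M > 0
    r, s = np.nonzero(nz)
    fit = np.sum(M[nz] * np.log(M[nz] / (d_out[r] * d_in[s])))
    return E * h + N * np.log(B) - fit

M = np.array([[40.0, 2.0], [3.0, 55.0]])
print(description_length(M, N=60))
```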
The number of blocks may be reduced at a fixed rate (e.g. 50%) at each block-merge phase until the Fibonacci 3-point bracket is established. At any given stage of the search for the optimal number of blocks, the past partition with the closest, higher number of blocks is used to begin the block-merge moves, followed by the MCMC nodal updates, to find the best partition at the targeted number of blocks. Figure <ref> shows the partition at selected stages of the algorithm on a 500-node graph. The algorithm description in this section is for directed graphs. Very minor modifications, which have no impact on the computational requirement, can be applied for undirected graphs. These minor differences are documented in Peixoto's papers <cit.>.

Advantageously, the baseline partition algorithm, with its rigorous statistical foundation, is ideal for processing streaming graphs. Good partitions found on the graph at a previous streaming stage are samples from the posterior distribution of the partition, which can be used as starting partitions for the graph at the current stage. This has the natural Bayesian interpretation of the posterior distribution from a previous state serving as the prior distribution on the current state, as additional data on the graph arrives.

§ PARALLEL COMPUTATION STRATEGIES

Significant speed-up of the baseline partition algorithm is the primary focus of this graph challenge and is necessary for computation on large graphs. Since the same core computation, described in Algorithm <ref> in Appendix A, is repeated for each block and each node, parallelizing this core computation across the blocks and nodes provides a way to speed up the computation, potentially by the order of the number of processors available. This section first discusses the correctness of parallelizing the MCMC updates. It then examines some of the parallel computation schemes for the baseline algorithm, with their respective advantages and requirements.

§.§ Correctness of Parallel MCMC Updates

The block-merge moves are readily parallelizable, since each potential merge move is evaluated based on the previous partition and the best merges are carried out. However, the nodal block assignment updates are not as straightforward, since they rely on MCMC through Gibbs sampling, which is by nature a sequential algorithm in which each node is updated one at a time. Parallelizing MCMC updates is an area of rising interest, given the increasing demand to perform Bayesian inference on large data sets. Running the baseline partition algorithm on large graphs is a perfect example of this need. Very recently, researchers have proposed to use asynchronous Gibbs sampling as a way to parallelize MCMC updates <cit.>. In asynchronous Gibbs sampling, the parameters are updated in a parallel and asynchronous fashion without any dependency constraint. In <cit.>, a proof is given to show that when the parameters in the MCMC influence one another only sparsely (i.e. Dobrushin's condition), asynchronous Gibbs is able to converge quickly to the correct distribution. It is difficult to show analytically that the MCMC nodal updates here satisfy Dobrushin's condition. However, since the graph is typically quite sparse, the block assignments of the nodes influence one another sparsely. This gives intuition on the adequacy of parallel MCMC updates for the baseline partition algorithm. In fact, in the preliminary tests we have conducted so far, parallel MCMC updates based on one-iteration-old block assignments have been shown to result in equally good partitions compared with sequential updates, based on the quantitative metrics in Section <ref>.

§.§ Parallel Updates on Nodes and Blocks

An intuitive and straightforward parallel computation scheme is to evaluate each block-merge and update each nodal block assignment (i.e. Algorithm <ref> in Appendix A) in distributed fashion across multiple processors. The block-merge evaluation is readily parallelizable since the computation is based on the previous partition. The MCMC nodal updates can be parallelized using the one-iteration-old block assignments, essentially approximating the true conditional posterior distribution with p(b_i | b^-_-i, G). The conditional block assignments, b^-_-i, may be more “fresh” if asynchronous Gibbs sampling is used, so that some newly updated assignments become available for updates on later nodes. In any case, once all the nodes have been updated in the current iteration, all the new block assignments are gathered and their modifications to the inter-block edge count matrix aggregated (this can also be done in parallel). These new block assignments and the new inter-block edge count matrix are then available for the next iteration of MCMC updates.

§.§ Batch Updates Using Matrix Operations

Given an efficient parallelized implementation of large-scale matrix operations, one may consider carrying out Algorithm <ref> as much as possible with batch computation using matrix operations <cit.>. Such matrix operations in practice perform parallel computation across all nodes simultaneously. Under this computation paradigm, the block assignments are represented as a sparse N × B binary matrix Γ, where each row π_i is an indicator vector with a value of one at the block the node is assigned to and zeros everywhere else. This representation results in simple matrix products for the inter-block edge counts:

M = Γ^T A Γ

The contributions of node i with block assignment r to row r and column r of the inter-block edge count matrix are:

ΔM_row,i = A_i: Γ
ΔM_col,i = A_:i^T Γ

where A_i: and A_:i denote row and column i of the adjacency matrix. These contributions are needed for computing the acceptance probabilities of the nodal block assignment proposals, which makes up a large part of the overall computation requirement.
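These matrix products map directly onto sparse linear algebra; below is a minimal sketch with scipy.sparse, where the one-hot construction of Γ and the random test graph are illustrative choices rather than the challenge code.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
N, B = 100, 5
A = sparse.random(N, N, density=0.05, random_state=0, format="csr")
b = rng.integers(0, B, size=N)          # current block assignments

# one-hot block assignment matrix Gamma (N x B)
Gamma = sparse.csr_matrix((np.ones(N), (np.arange(N), b)), shape=(N, B))

# inter-block edge count matrix: M = Gamma^T A Gamma  (B x B)
M = (Gamma.T @ A @ Gamma).toarray()

# contributions of node i to the rows/columns of M
i = 7
dM_row = A[i, :] @ Gamma    # 1 x B: edge weight from node i into each block
dM_col = A[:, i].T @ Gamma  # 1 x B: edge weight into node i from each block
```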
Algorithm <ref> in Appendix B is a batch implementation of the nodal updates described in Algorithm <ref>. The inter-block edge counts under each of the N proposals are represented using a 3D matrix ℳ^+ of size N × B × B. For clarity, computations of the acceptance probabilities involving the inter-block edge counts and degrees are specified using tensor notation. Note that many of these computations may be avoided with clever implementations. For example:

* If the proposed block assignment for a node is the same as its previous assignment, its acceptance probability does not need to be computed.
* New proposals only change two rows and columns of the inter-block edge count matrix, corresponding to moving the counts from the old block to the new block, so most of the entries in ℳ^+ are simply copies of M^-.
* The inter-block edge count matrix should be sparse, especially when there is a large number of communities, since most communities do not interact with one another. This gives additional opportunity for speeding up operations on this matrix.
* Similarly, each node is likely to connect with only a few different communities (i.e. blocks). Therefore, the changes by each nodal proposal on the inter-block edge count matrix will only involve a few selected rows and columns. Limiting the computation of the change in log posterior, ΔS, to these rows and columns may result in significant computation speedup.

§ METRICS

An essential part of this graph challenge is a canonical set of metrics for comprehensive evaluation of the partition algorithm implementation by each participating team. The evaluation should report the correctness of the partitions produced, as well as the computational requirements, efficiency, and complexity of the implementations. For streaming graphs, evaluation should be done at each stage of the streaming processing; for example, the length of time it took for the algorithm to finish processing the graph after the first two parts of the graph become available, and the correctness of the output partition on the parts available so far. Efficient implementations of the partition algorithm leverage partitions from previous stages of the streaming graph to “jump start” the partition at the current stage.

§.§ Correctness Metrics

The true partition of the graph is available in this challenge, since the graph is generated with a stochastic block structure, as described in Section <ref>. Therefore, the correctness of the output partition by the algorithm implementation can be evaluated against the true partition. On the hybrid graphs where a generated graph is embedded within a real-world graph with no available true partition, correctness is only evaluated on the generated part. Evaluation of the output partition (i.e. clustering) against the true partition is well established in the existing literature, and a good overview can be found in <cit.>. Widely adopted metrics fall under three general categories: unit counting, pairwise counting, and information theoretic metrics. The challenge in this paper adopts all of them for comprehensiveness and recommends the pairwise precision-recall as the primary correctness metric for its holistic evaluation and intuitive interpretation. Computation of the correctness metrics described in this section is implemented in Python and shared as a resource for the participants at http://GraphChallenge.org. Table <ref> provides a simple example to demonstrate each metric, where each cell in row i and column j is the count of nodes belonging to truth block i and reported in output block j. In this example, the nodes are divided into two blocks in the true partition, but into three blocks in the output partition. Therefore, this is an example of over-clustering (i.e. too many blocks).
The diagonal cells shaded in green represent the nodes that are correctly partitioned, whereas the off-diagonal cells shaded in pink represent the nodes with some kind of partition error.

§.§.§ Unit Counting Metrics

The most intuitive metric is perhaps the overall accuracy, specifically the percentage of nodes correctly partitioned. This is simply the fraction of the total count that belongs to the diagonal entries of the contingency table after the truth blocks and the output blocks have been optimally associated to maximize the diagonal entries, typically using a linear assignment algorithm <cit.>. In this example, the overall accuracy is simply 50/66 ≈ 76%. While this single number provides an intuitive overall score, it does not account for the types and distribution of errors. For example, truth block B in Table <ref> has three nodes incorrectly split into output block C. If instead these three nodes were split one-by-one into output blocks C, D, and E, a worse case of over-clustering would have taken place. The overly simplified accuracy metric cannot make this differentiation. A way to capture more detail on the types and distribution of errors is to report block-wise precision-recall. Block-wise precision is the fraction of correctly identified nodes for each output block (e.g. Precision(Output A) = 30/31) and block-wise recall is the fraction of correctly identified nodes for each truth block (e.g. Recall(Truth B) = 20/24). The block-wise precision-recall scores are intuitive for each of the truth and output blocks and can be useful for diagnosing the block-level behavior of the implementation. However, they do not provide a global measure of correctness.

§.§.§ Pairwise Counting Metrics

Measuring the level of agreement between the truth and the output partition by considering every pair of nodes has a long history within the clustering community <cit.>. The basic idea is simple: every pair of nodes belongs to one of the following four categories: (1) in the same truth block and the same output block, (2) in different truth blocks and different output blocks, (3) in the same truth block but different output blocks, and (4) in different truth blocks but the same output block. Categories (1) and (2) are cases of agreement between the truth and the output partition, whereas categories (3) and (4) indicate disagreement. An intuitive overall score on the level of agreement is the fraction of all pairs belonging to categories (1) and (2), known as the Rand index <cit.>. <cit.> proposes the adjusted Rand index with a correction to account for the expected value of the index under random chance, to provide a fairer metric across different data sets. Categories (4) and (3) can be interpreted as type I (i.e. false positive) and type II (i.e. false negative) errors, if one considers a “positive” case to be a pair belonging to the same block. The pairwise precision-recall metrics <cit.> can be computed as:

Pairwise-precision = #Category1 / (#Category1 + #Category4)
Pairwise-recall = #Category1 / (#Category1 + #Category3)

Pairwise-precision considers all the pairs reported as belonging to the same output block and measures the fraction of them that are correct, whereas pairwise-recall considers all the pairs belonging to the same truth block and measures the fraction of them reported as belonging to the same output block. In the example of Table <ref>, the pairwise-precision is about 90% and the pairwise-recall about 81%, which indicates this to be a case of over-clustering with more type II errors.
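A compact way to obtain these pairwise counts without enumerating all O(N^2) pairs is through the contingency table; a sketch (not the challenge's reference implementation):

```python
import numpy as np

def pairwise_precision_recall(truth, output):
    """Pairwise precision/recall from truth and output block labels."""
    truth = np.asarray(truth)
    output = np.asarray(output)
    # contingency table: n[i, j] = #nodes in truth block i and output block j
    n = np.array([[np.sum((truth == t) & (output == o))
                   for o in np.unique(output)]
                  for t in np.unique(truth)], dtype=float)
    same_both = (n * (n - 1) / 2).sum()        # category (1)
    col, row = n.sum(axis=0), n.sum(axis=1)
    same_out = (col * (col - 1) / 2).sum()     # categories (1) + (4)
    same_truth = (row * (row - 1) / 2).sum()   # categories (1) + (3)
    return same_both / same_out, same_both / same_truth

truth = [0, 0, 0, 1, 1, 1]
output = [0, 0, 1, 1, 2, 2]
print(pairwise_precision_recall(truth, output))
```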
Although pairwise counting is somewhat arbitrary, it does present holistic and intuitive measures of the overall level of agreement between the output and the true partition. For this challenge, pairwise precision-recall will serve as the primary metrics for evaluating the correctness of the output partition.

§.§.§ Information Theoretic Metrics

In recent years, holistic and rigorous metrics based on information theory have been proposed for evaluating partitions and clusterings <cit.>. Specifically, these metrics are based on the information content of the partitions measured in Shannon entropy. Naturally, information theoretic precision-recall metrics can be computed as:

Information-precision = I(T;O)/H(O)
Information-recall = I(T;O)/H(T)

where I(T;O) is the mutual information between the truth partition T and the output partition O, and H(O) is the entropy (i.e. information content) of the output partition. Using the information theoretic measures, precision is defined as the fraction of the output partition information that is true, and recall is defined as the fraction of the truth partition information captured by the output partition. In the example of Table <ref>, the information theoretic precision is about 57% and the recall about 71%. The precision is lower than the recall because the extra block in the output partition introduces information content that does not correspond to the truth. The information theoretic precision-recall metrics provide a rigorous and comprehensive measure of the correctness of the output partition. However, the information theoretic quantities may not be as intuitive to some, and the metrics tend to be harsh, as even a small number of errors often lowers them significantly.
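These quantities follow directly from the normalized contingency table; a sketch reusing label vectors as above:

```python
import numpy as np

def info_precision_recall(truth, output):
    """Information theoretic precision I(T;O)/H(O) and recall I(T;O)/H(T)."""
    truth = np.asarray(truth)
    output = np.asarray(output)
    joint = np.array([[np.sum((truth == t) & (output == o))
                       for o in np.unique(output)]
                      for t in np.unique(truth)]) / truth.size
    p_t, p_o = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    mi = np.sum(joint[nz] *
                np.log(joint[nz] / np.outer(p_t, p_o)[nz]))  # I(T;O)
    h_t = -np.sum(p_t * np.log(p_t))                         # H(T)
    h_o = -np.sum(p_o * np.log(p_o))                         # H(O)
    return mi / h_o, mi / h_t

truth = [0, 0, 0, 1, 1, 1]
output = [0, 0, 1, 1, 2, 2]
print(info_precision_recall(truth, output))
```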
§.§ Computational Metrics

The following metrics should be reported by the challenge participants to characterize the computational requirements of their implementations:

* Total number of edges in the graph (E): This measures the amount of data processed.
* Execution time: The total amount of time taken for the implementation to complete the partition, in seconds.
* Rate: This metric measures the throughput of the implementation, in total number of edges processed over total execution time (E/second). Figure <ref> shows preliminary results on this metric for three different implementations of the partition algorithm, run on an HP ProLiant DL380 Gen9 with 56 cores of Intel(R) Xeon(R) processors at 2.40GHz and 512 GB of HPE DDR4 memory at 2400 MHz. The three implementations are: (1) a serial and (2) a parallel C++ implementation by Tiago Peixoto <cit.>, and (3) a serial Python implementation. The C++ implementations leverage the Boost Graph Library (BGL) extensively. Since the algorithm complexity is super-linear, the rate drops as the size of the graph increases, with a slope matching the change in rate predicted by the analytical complexity of the algorithm, O(E log^2 E). The serial C++ implementation is about an order of magnitude faster than the Python implementation. With parallel updates, the C++ implementation gains another order of magnitude in rate when the graph is large enough. The Python implementation is limited in its ability to process very large graphs due to the lack of a fast implementation of sparse matrices in Python. All three implementations are available at http://GraphChallenge.org.
* Energy consumption in watts: The total amount of energy consumed by the computation.
* Rate per energy: This metric captures the throughput achieved per unit of energy consumed, measured in E/second/Watt.
* Memory requirement: The amount of memory required to execute the implementation.
* Processor requirement: The number and type of processors used to execute the implementation.

§.§ Implementation Complexity Metric

Total lines-of-code count: This measures the complexity of the implementation. SCLC <cit.> and CLOC <cit.> are open source line counters that can be used for this metric. The Python demonstration code for this challenge has a total of 569 lines. The C++ open source implementation is part of a bigger package, so it is difficult to count the lines for just the graph partition.

§ SUMMARY

This paper gives a detailed description of the graph partition challenge, its statistical foundation in the stochastic blockmodels, and comprehensive metrics to evaluate the correctness, computational requirements, and complexity of the competing algorithm implementations. This paper also recommends strategies for massively parallelizing the computation of the algorithm in order to achieve scalability for large graphs. Theoretical arguments for the correctness of the parallelization are also given. Our hope is that this challenge will provide a helpful resource to advance state-of-the-art performance and foster community collaboration in the important and challenging problem of graph partition on large graphs. Data sets and source code for the algorithm as well as the metrics, with detailed documentation, are available at http://GraphChallenge.org.

§ ACKNOWLEDGMENT

The authors would like to thank Trung Tran, Tom Salter, David Bader, Jon Berry, Paul Burkhardt, Justin Brukardt, Chris Clarke, Kris Cook, John Feo, Peter Kogge, Chris Long, Jure Leskovec, Henning Meyerhenke, Richard Murphy, Steve Pritchard, Michael Wolfe, Michael Wright, and the entire GraphBLAS.org community for their support and helpful suggestions. Also, the authors would like to recognize Ryan Soklaski, John Griffith, and Philip Tran for their help on the baseline algorithm implementation, as well as Benjamin Miller for his feedback on the matrix-based parallelism.

§ APPENDIX A: PARTITION ALGORITHM PSEUDOCODE

Algorithm 1: Block assignment update at each node i

Input:
  b_i^-, b_𝒩_i^-: current block labels for node i and its neighbors 𝒩_i
  M^-: current B × B inter-block edge count matrix
  A_i𝒩_i, A_𝒩_i i: edges between i and all its neighbors
Output:
  b_i^+: the new block assignment for node i

Propose a block assignment:
1. Obtain the current block assignment r = b_i^-.
2. Draw a random edge of i, which connects to a neighbor j; obtain its block assignment u = b_j^-.
3. Draw a uniform random variable x_1 ∼ Uniform(0,1).
4. If x_1 ≤ B/(d_u^- + B): with some probability, propose randomly for exploration, i.e. propose b_i^+ = s by drawing s uniformly at random from {1, 2, ..., B}.
5. Otherwise, propose by a multinomial draw from the blocks neighboring u: propose b_i^+ = s from MultinomialDraw((M_u:^- + M_:u^-)/d_u^-).

Accept or reject the proposal:
6. If s = r: return b_i^+ = b_i^- (the proposal is the same as the old assignment; done).
7. Compute M^+ under the proposal (update only the rows and columns r and s, on the entries for blocks connected to i).
8. Compute the proposal probabilities for the Hastings correction:
p_{r→s} = Σ_{t ∈ {b_{N_i}^-}} [ K_{it} (M_{ts}^- + M_{st}^- + 1)/(d_t^- + B) ] and p_{s→r} = Σ_{t ∈ {b_{N_i}^-}} [ K_{it} (M_{tr}^+ + M_{rt}^+ + 1)/(d_t^+ + B) ]
9. Compute the change in log posterior (t_1 and t_2 only need to cover the rows and columns r and s):
ΔS = Σ_{t_1,t_2} [ −M_{t_1 t_2}^+ log( M_{t_1 t_2}^+ / (d_{t_1,out}^+ d_{t_2,in}^+) ) + M_{t_1 t_2}^- log( M_{t_1 t_2}^- / (d_{t_1,out}^- d_{t_2,in}^-) ) ]
10. Compute the probability of acceptance: p_accept = min[ exp(−β ΔS) p_{s→r}/p_{r→s}, 1 ].
11. Draw a uniform random variable x_3 ∼ Uniform(0,1).
12. If x_3 ≤ p_accept, return b_i^+ = s (accept the proposal); otherwise return b_i^+ = r (reject the proposal).

Algorithm: Block Assignment Update At Each Node i

§ APPENDIX B: MATRIX-BASED BATCH UPDATE PSEUDOCODE

Input: Γ^-: current block assignment matrix for all nodes; M^-: current B × B inter-block edge count matrix; A: graph adjacency matrix.
Output: Γ^+: new block assignments for all nodes.

Propose new block assignments:
1. Compute node degrees: k = (A + A^T) 1.
2. Compute block degrees: d_out^- = M^- 1 ; d_in^- = (M^-)^T 1 ; d^- = d_out^- + d_in^-.
3. Compute the probability for drawing each neighbor: P_Nbr = RowDivide(A + A^T, k).
4. Draw neighbors (N_br is a binary selection matrix): N_br = MultinomialDraw(P_Nbr).
5. Compute the probability of a uniform random proposal: p_UnifProp = B/(N_br Γ^- d^- + B).
6. Compute the probability of block transition: P_BlkTran = RowDivide(M^- + (M^-)^T, d^-).
7. Compute the probability of the block transition proposal: P_BlkProp = N_br Γ^- P_BlkTran.
8. Propose new assignments uniformly: Γ_Unif = UniformDraw(B, N).
9. Propose new assignments from the neighborhood: Γ_Nbr = MultinomialDraw(P_BlkProp).
10. Draw N Uniform(0,1) random variables x.
11. Compute which proposal to use for each node: I_UnifProp = x ≤ p_UnifProp.
12. Select the block assignment proposal for each node: Γ^P = RowMultiply(Γ_Unif, I_UnifProp) + RowMultiply(Γ_Nbr, (1 − I_UnifProp)).

Accept or reject the proposals:
13. Compute the change in edge counts by row and column: ΔM_row^+ = A Γ^- ; ΔM_col^+ = A^T Γ^-.
14. Update the edge count matrix for each proposal (the resulting tensor is N × B × B):
M_{ijk}^+ = M_{jk}^- − Γ_{ij}^- ΔM_{row,ik}^+ + Γ_{ij}^P ΔM_{row,ik}^+ − Γ_{ik}^- ΔM_{col,ij}^+ + Γ_{ik}^P ΔM_{col,ij}^+
15. Update the block degrees for each proposal (the resulting matrices are N × B):
D_{out,ij}^+ = d_{out,j}^- − Γ_{ij}^- Σ_k ΔM_{row,ik}^+ + Γ_{ij}^P Σ_k ΔM_{row,ik}^+
D_{in,ij}^+ = d_{in,j}^- − Γ_{ij}^- Σ_k ΔM_{col,ik}^+ + Γ_{ij}^P Σ_k ΔM_{col,ik}^+
16. Compute the proposal probabilities for the Hastings correction (N × 1 vectors):
p_{r→s} = [ (P_Nbr Γ^-) ∘ (Γ^P M^- + Γ^P (M^-)^T + 1) ∘ RepMat(1/(d^- + B), N) ] 1
p_{s→r,i} = [ (P_Nbr Γ^-) ∘ (Γ^- M_i^+ + Γ^- (M_i^+)^T + 1) ∘ 1/(D_out^+ + D_in^+ + B) ] 1
17. Compute the change in log posterior (only need to operate on the impacted rows and columns corresponding to r, s, and the blocks neighboring i):
ΔS_i = Σ_{jk} [ −M_{ijk}^+ log( M_{ijk}^+ / (D_{out,ij}^+ D_{in,ik}^+) ) + M_{jk}^- log( M_{jk}^- / (d_{out,j}^- d_{in,k}^-) ) ]
18. Compute the probabilities of accepting the proposals (N × 1 vector): p_Accept = min[ exp(−β ΔS) ∘ p_{s→r} ∘ 1/p_{r→s}, 1 ].
19. Draw N Uniform(0,1) random variables x_Accept.
20. Compute which proposals to accept: I_Accept = x_Accept ≤ p_Accept.
21. Return Γ^+ = RowMultiply(Γ^P, I_Accept) + RowMultiply(Γ^-, (1 − I_Accept)).

Algorithm: Batch Assignment Update for All Nodes

§ APPENDIX C: LIST OF NOTATIONS

Below is a list of notations used in this document:
N: Number of nodes in the graph
B: Number of blocks in the partition
A: Adjacency matrix of size N × N, where A_ij is the edge weight from node i to j
k: Node degree vector of N elements, where k_i is the total (i.e., both in and out) degree of node i
K: Node degree matrix of N × B elements, where K_it is the total number of edges between node i and block t
N_i: Neighborhood of node i, which is a set containing all the neighbors of i
^-: Superscript that denotes any variable from the previous MCMC iteration
^+: Superscript that denotes any updated variable in the current MCMC iteration
b: Block assignment vector of N elements, where b_i is the block assignment for node i
Γ: Block assignment matrix of N × B elements, where each row Γ_i is a binary indicator vector with 1 only at the block node i is assigned to. Γ^P is the proposed block assignment matrix.
M: Inter-block edge count matrix of size B × B, where M_ij is the number of edges from block i to j
M^+: Updated inter-block edge count matrix for each proposal, of size N × B × B
ΔM_row/col^+: Row and column updates to the inter-block edge count matrix, for each proposal. This matrix is of size N × B.
d_in: In-degree vector of B elements, where d_in,i is the number of edges into block i
d_out: Out-degree vector of B elements, where d_out,i is the number of edges out of block i
d: Total edge count vector of B elements, where d_i is the total number of edges into and out of block i. d = d_in + d_out
D_in/out^+: In and out edge count matrix for each block, on each proposal. It is of size N × B
ΔS: The difference in log posterior between the previous block assignment and the new proposed assignment
β: Learning rate of the MCMC
p_{r→s}: Probability of proposing block s on the node to be updated, which is currently in block r
p_Accept: Probability of accepting the proposed block on the node
P_Nbr: Matrix of N × N elements where each element P_Nbr,ij is the probability of selecting node j when updating node i
N_br: Matrix of N × N elements where each row N_br,i is a binary indicator vector with 1 only at j, indicating that j is selected when updating i
p_UnifProp: Vector of N elements representing the probability of a uniform proposal when updating each node
P_BlkTran: Matrix of B × B elements where each element P_BlkTran,ij is the probability of landing in block j when randomly traversing an edge from block i
P_BlkProp: Matrix of N × B elements where each element P_BlkProp,ij is the probability of proposing block assignment j for node i
Γ_Unif: Block assignment matrix from the uniform proposal across all blocks. It has N × B elements where each row Γ_Unif,i is a binary indicator vector with 1 only at the block node i is assigned to
Γ_Nbr: Block assignment matrix from the neighborhood proposal. It has N × B elements where each row Γ_Nbr,i is a binary indicator vector with 1 only at the block node i is assigned to
I_UnifProp: Binary vector of N elements with 1 at each node taking the uniform proposal and 0 at each node taking the neighborhood proposal
I_Accept: Binary vector of N elements with 1 at each node where the proposal is accepted and 0 where the proposal is rejected
Uniform(x,y): Uniform distribution with range from x to y
δ_tk: Kronecker delta, which equals 1 if t=k and 0 otherwise
RowDivide(A,b): Matrix operator that divides each row of matrix A by the corresponding element in vector b
RowMultiply(A,b): Matrix operator that multiplies each row of matrix A by the corresponding element in vector b
UniformDraw(B, N): Uniformly choose an element from {1,2,...,B} as the block assignment, once for each of the N nodes, and return an N × B matrix where each row i is a binary indicator vector with 1 only at j, indicating node i is assigned block j
MultinomialDraw(P_BlkProp): For each row P_BlkProp,i of the proposal probability matrix, draw a block according to the multinomial probability vector P_BlkProp,i and return an N × B matrix where each row i is a binary indicator vector with 1 only at j, indicating node i is assigned block j
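To make the single-node update of Appendix A concrete, here is a minimal Python sketch. It is our own simplification: it assumes a directed graph held as a dense numpy array, recomputes M from scratch instead of updating only the affected rows and columns, picks a neighbor uniformly (an unweighted graph), and omits the Hastings correction factor p_{s→r}/p_{r→s} for brevity:

import numpy as np

def block_counts(A, b, B):
    # Inter-block edge count matrix M (B x B) for assignment vector b.
    M = np.zeros((B, B))
    src, dst = A.nonzero()
    np.add.at(M, (b[src], b[dst]), A[src, dst])
    return M

def log_posterior(M):
    d_out, d_in = M.sum(axis=1), M.sum(axis=0)
    nz = M > 0
    return (M[nz] * np.log(M[nz] / np.outer(d_out, d_in)[nz])).sum()

def update_node(A, b, B, i, beta=3.0, rng=np.random):
    M = block_counts(A, b, B)
    d = M.sum(axis=0) + M.sum(axis=1)          # total block degrees
    nbrs = np.flatnonzero(A[i] + A[:, i])
    if nbrs.size == 0:
        return b
    u = b[rng.choice(nbrs)]                    # block of a random neighbor
    if rng.random() < B / (d[u] + B):          # uniform proposal for exploration
        s = rng.randint(B)
    else:                                      # multinomial draw from blocks adjacent to u
        w = M[u] + M[:, u]
        s = rng.choice(B, p=w / w.sum())
    r = b[i]
    if s == r:
        return b                               # proposal equals the old assignment
    b_new = b.copy(); b_new[i] = s
    # Delta S = log_posterior(old) - log_posterior(new), as in step 9.
    dS = log_posterior(M) - log_posterior(block_counts(A, b_new, B))
    if rng.random() < min(1.0, np.exp(-beta * dS)):
        return b_new                           # accept
    return b                                   # reject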
http://arxiv.org/abs/1708.07883v1
{ "authors": [ "Edward Kao", "Vijay Gadepally", "Michael Hurley", "Michael Jones", "Jeremy Kepner", "Sanjeev Mohindra", "Paul Monticciolo", "Albert Reuther", "Siddharth Samsi", "William Song", "Diane Staheli", "Steven Smith" ], "categories": [ "cs.DC", "cs.DS", "cs.PF", "cs.SI" ], "primary_category": "cs.DC", "published": "20170825211006", "title": "Streaming Graph Challenge: Stochastic Block Partition" }
http://arxiv.org/abs/1708.08147v3
{ "authors": [ "Persi Diaconis", "Soumik Pal" ], "categories": [ "math.PR", "60J10, 60J60" ], "primary_category": "math.PR", "published": "20170827220914", "title": "Shuffling cards by spatial motion" }
Gait Recognition from Motion Capture Data

Michal Balazia and Petr Sojka, Masaryk University, Faculty of Informatics, Botanická 68a, 602 00 Brno, Czech Republic

Gait recognition from motion capture data, as a pattern classification discipline, can be improved by the use of machine learning. This paper contributes to the state-of-the-art with a statistical approach for extracting robust gait features directly from raw data by a modification of Linear Discriminant Analysis with Maximum Margin Criterion. Experiments on the CMU MoCap database show that the suggested method outperforms thirteen relevant methods based on geometric features and a method to learn the features by a combination of Principal Component Analysis and Linear Discriminant Analysis. The methods are evaluated in terms of the distribution of biometric templates in respective feature spaces, expressed in a number of class separability coefficients and classification metrics. Results also indicate a high portability of learned features, that is, we can learn what aspects of walk people generally differ in and extract those as general gait features. Recognizing people without needing group-specific features is convenient, as particular people might not always provide annotated learning data. As a contribution to reproducible research, our evaluation framework and database have been made publicly available. This research makes motion capture technology directly applicable for human recognition.

CCS Concepts: Security and privacy: Biometrics; Computing methodologies: Artificial intelligence; Computing methodologies: Machine learning; Applied computing: Surveillance mechanisms.

The authors thank the reviewers for their detailed commentary and suggestions. The data used in this project was created with funding from NSF EIA-0196217 and was obtained from <http://mocap.cs.cmu.edu> <cit.>. Our extracted database and evaluation framework are available online at <https://gait.fi.muni.cz> to support reproducibility of results.

ACM Transactions on Multimedia Computing, Communications, and Applications, special issue on Representation, Analysis and Recognition of 3D Humans (preprint).

§ INTRODUCTION

From the surveillance perspective, gait pattern biometrics is appealing because it can be performed at a distance and without body-invasive equipment or the need for the subject's cooperation. This allows data acquisition without a subject's consent.
As the data are collected with a high participation rate and the subjects are not expected to claim their identities, the trait is employed for identification rather than for authentication.

Motion capture technology acquires video clips of individuals and generates structured motion data. The format maintains an overall structure of the human body and holds estimated 3D positions of the main anatomical landmarks as the person moves. These so-called motion capture data (MoCap) can be collected online by RGB-D sensors such as Microsoft Kinect, Asus Xtion or Vicon. To visualize motion capture data (see Figure <ref>), a simplified stick figure representing the human skeleton (a graph of joints connected by bones) can be automatically recovered from the values of body point spatial coordinates. With recent rapid improvements in MoCap sensor accuracy, we foresee an affordable MoCap technology <cit.> that can identify people from MoCap data.

The goal of this work is to present a method for learning robust gait features from raw MoCap data. A collection of extracted features builds a gait template that serves as the walker's signature. Templates are stored in a central database. Recognition of a person involves capturing their walk sample, extracting gait features to compose a template, and finally querying this database for a set of similar templates to report the most likely identity. The similarity of two templates is expressed in a single number computed by a similarity/distance function.

Related work is outlined in Section <ref> and our methods are described in detail in Section <ref>. We provide a thorough evaluation of a number of competitive MoCap-based gait recognition methods on a benchmark database and framework described in Section <ref>. Two setups of data separation into learning and evaluation parts are presented in Section <ref>, together with several metrics used for thorough evaluation. Results are presented and discussed in Section <ref>.

§ RELATED WORK

Human gait has long been an active subject of study in physical medicine (detection of gait abnormalities <cit.> and disorders caused by stroke or cerebral palsy <cit.>), sport (gait regulation <cit.>), and sociology (age and gender classification <cit.> or attractiveness evaluation <cit.>). Inspired by the medical studies of Murray <cit.> and the psychological research of Johansson <cit.>, Cutting and Kozlowski <cit.> conducted experiments on how participants are able to recognize pedestrians simply by observing the 2D gait pattern generated by light bulbs attached to several joints over their bodies. These experiments proved that gait is personally unique and can potentially be used for biometric recognition. The main challenges from the perspective of biometric security <cit.> are (1) to detect gait cycles in long video footage; (2) to recognize registered participants by their biometric samples; and (3) to retrieve relevant biometric samples from large databases.

Many research groups investigate the discriminative power of the gait pattern and develop models of human walk for applications in the automatic recognition of walking people from MoCap data. A number of MoCap-based gait recognition methods have been introduced in the past few years and new ones continue to emerge. In order to move forward with the wealth and scope of competitive research, it is necessary to compare the innovative approaches with the state-of-the-art and evaluate them against established evaluation metrics on a benchmark database.
New frameworks and databases have been developed recently <cit.>.

Over the past few years, most of the introduced gait features have been on a geometric basis. They typically combine static body parameters (bone lengths, person's height) with dynamic gait features such as step length, walk speed, joint angles and inter-joint distances, along with various statistics (mean, standard deviation or local/global extremes) of their signals. We are particularly focused on the dynamic parameters. By our definition, gait is a dynamic behavioral trait. Static body parameters are not associated with gait; however, they can be used as another biometric.

What follows is a detailed specification of the thirteen gait feature extraction methods that we have reviewed in our work to date. Since the idea behind each method has some potential, we have implemented each of them for direct comparison.

* AhmedF by Ahmed <cit.> chooses 20 joint relative distance signals and 16 joint relative angle signals across the whole body, compared using Dynamic Time Warping (DTW).
* AhmedM by Ahmed <cit.> extracts the mean, standard deviation and skew during one gait cycle of horizontal distances (projected on the Z axis) between feet, knees, wrists and shoulders; the mean and standard deviation during one gait cycle of vertical distances (Y coordinates) of head, wrists, shoulders, knees and feet; and finally the mean area during one gait cycle of the triangle of root and two feet.
* AliS by Ali <cit.> measures the mean areas during one gait cycle of lower limb triangles.
* AnderssonVO by Andersson and Araujo <cit.> calculates gait attributes as the mean and standard deviation during one gait cycle of local extremes of the signals of lower body angles; step length as a maximum of feet distance; stride length as the length of two steps; cycle time; and velocity as the ratio of stride length and cycle time. In addition, they extract the mean and standard deviation during one gait cycle of each bone length, and height as the sum of the bone lengths between head and root plus the averages of the bone lengths between root and both feet.
* BallA by Ball <cit.> measures the mean, standard deviation and maximum during one gait cycle of lower limb angle pairs: upper leg relative to the Y axis, lower leg relative to the upper leg, and the foot relative to the Z axis.
* DikovskiB by Dikovski <cit.> selects the mean during one gait cycle of step length, height and all bone lengths; then the mean, standard deviation, minimum, maximum and mean difference of subsequent frames during one gait cycle of all major joint angles; and the angle between the lines of the shoulder joints and the hip joints.
* JiangS by Jiang <cit.> measures angle signals between the Y axis and four major lower body (thigh and calf) bones. The signals are compared using the DTW.
* KrzeszowskiT by Krzeszowski <cit.> observes the signals of rotations of eight major bones (humerus, ulna, thigh and calf) around all three axes, the person's height and step length. These signals are compared using the DTW distance function.
* KwolekB by Kwolek <cit.> processes signals of bone angles around all axes, the person's height and step length. The gait cycles are normalized to 30 frames.
* NareshKumarMS by Naresh Kumar and Venkatesh Babu <cit.> is an interesting approach that extracts all joint trajectories around all three axes and compares gait templates by a distance function of their covariance matrices.
* PreisJ by Preis <cit.> takes height, length of legs, torso, both lower legs, both thighs, both upper arms, both forearms, step length and speed.
* SedmidubskyJ by Sedmidubsky <cit.> concludes that only the two shoulder-hand signals are discriminatory enough to be used for recognition. These temporal data are compared using the DTW distance function.
* SinhaA by Sinha <cit.> combines all features of BallA and PreisJ with the mean areas during one gait cycle of upper body and lower body, then the mean, standard deviation and maximum distances during one gait cycle between the centroid of the upper body polygon and the centroids of four limb polygons.

The aforementioned features are schematic and human-interpretable, which is convenient for visualizations and for intuitive understanding of the models, but unnecessary for automatic gait recognition. Instead, to refrain from ad-hoc schemes and to explore beyond the limits of human interpretability, we prefer learning gait features that maximally separate the identity classes in the feature space. Features calculated by statistical observation of large amounts of data are expected to have a much higher discriminative potential, which will be the subject of experimental evaluation in Section <ref>.

Methods for 2D gait recognition extensively use machine learning models for extracting gait features, such as principal component analysis and multi-scale shape analysis <cit.>, genetic algorithms and kernel principal component analysis <cit.>, radial basis function neural networks <cit.>, or convolutional neural networks <cit.>. All of these and many other models can be reasonably utilized in 3D gait recognition as well. The following section provides a scheme for learning the features directly from raw data by (i) a modification of Fisher's Linear Discriminant Analysis with Maximum Margin Criterion and (ii) a combination of Principal Component Analysis and Linear Discriminant Analysis.

§ LEARNING GAIT FEATURES

In statistical pattern recognition, reducing space dimensionality is a common technique to overcome class estimation problems. Classes are discriminated by projecting high-dimensional input data onto low-dimensional sub-spaces by linear transformations with the goal of maximizing class separability. We are interested in finding an optimal feature space where a gait template is close to those of the same walker and far from those of different walkers.

Let the model of the human body have J joints, and let all N learning samples of C walkers be linearly normalized to their average length T. Labeled learning data in a sample (measurement) space have the form L = {(g_n, ℓ_n)}_{n=1}^{N}, where g_n = [[p_{11} ⋯ p_{J1}] ⋯ [p_{1T} ⋯ p_{JT}]]^⊤ is a gait sample (one gait cycle) in which p_{jt} ∈ ℝ^3 are the 3D spatial coordinates of joint j ∈ {1,…,J} at time t ∈ {1,…,T}, normalized with respect to the person's position and walk direction. Note that g_n has dimensionality D = 3JT. Each learning sample falls strictly into one of the learning identity classes {I_c}_{c=1}^{C} labeled by ℓ_n. A class I_c has N_c samples. The classes are complete and mutually exclusive. We say that samples (g_n, ℓ_n) and (g_{n'}, ℓ_{n'}) share a common walker if and only if they belong to the same class, i.e., (g_n, ℓ_n), (g_{n'}, ℓ_{n'}) ∈ I_c ⇔ ℓ_n = ℓ_{n'}. For the whole labeled data, we denote the between-class, within-class and total scatter matrices

Σ_B = Σ_{c=1}^{C} (μ_c − μ)(μ_c − μ)^⊤,
Σ_W = Σ_{c=1}^{C} (1/N_c) Σ_{n=1}^{N_c} (g_{nc} − μ_c)(g_{nc} − μ_c)^⊤,
Σ_T = Σ_{c=1}^{C} (1/N_c) Σ_{n=1}^{N_c} (g_{nc} − μ)(g_{nc} − μ)^⊤ = Σ_B + Σ_W,

where g_{nc} denotes the n-th sample in class I_c, and μ_c and μ are the sample means for class I_c and the whole data set, respectively, that is, μ_c = (1/N_c) Σ_{n=1}^{N_c} g_{nc} and μ = (1/N) Σ_{n=1}^{N} g_n.
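For concreteness, the scatter matrices above translate directly into code. A minimal sketch in Python, assuming vectorized gait samples as rows of an array (names are ours):

import numpy as np

def scatter_matrices(samples, labels):
    # samples: (N, D) array of vectorized gait cycles; labels: length-N class labels.
    samples, labels = np.asarray(samples, float), np.asarray(labels)
    mu = samples.mean(axis=0)
    D = samples.shape[1]
    Sb, Sw = np.zeros((D, D)), np.zeros((D, D))
    for c in np.unique(labels):
        Xc = samples[labels == c]
        mc = Xc.mean(axis=0)
        Sb += np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc) / len(Xc)   # the 1/N_c factor from the definition
    return Sb, Sw, Sb + Sw                         # Sigma_B, Sigma_W, Sigma_T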
The margin of two classes is defined as the Euclidean distance of their class means minus both individual variances (traces of the class scatter matrices Σ_c = (1/N_c) Σ_{n=1}^{N_c} (g_{nc} − μ_c)(g_{nc} − μ_c)^⊤). We measure the class separability of a given feature space by a representation of the Maximum Margin Criterion (MMC) <cit.>, used by Vapnik's Support Vector Machines (SVM) <cit.>, which maximizes the sum of the (1/2)C(C−1) between-class margins:

J = (1/2) Σ_{c,c'=1}^{C} ((μ_c − μ_{c'})^⊤ (μ_c − μ_{c'}) − tr(Σ_c + Σ_{c'}))
= (1/2) Σ_{c,c'=1}^{C} (μ_c − μ_{c'})^⊤ (μ_c − μ_{c'}) − (1/2) Σ_{c,c'=1}^{C} tr(Σ_c + Σ_{c'})
= (1/2) Σ_{c,c'=1}^{C} (μ_c − μ + μ − μ_{c'})^⊤ (μ_c − μ + μ − μ_{c'}) − Σ_{c=1}^{C} tr(Σ_c)
= tr( Σ_{c=1}^{C} (μ_c − μ)(μ_c − μ)^⊤ ) − tr( Σ_{c=1}^{C} Σ_c )
= tr(Σ_B) − tr(Σ_W) = tr(Σ_B − Σ_W).

Since tr(Σ_B) measures the overall variance of the class mean vectors, a large value implies that the class mean vectors scatter in a large space. On the other hand, a small tr(Σ_W) implies that classes have a small spread. Thus, a large J indicates that samples are close to each other if they share a common walker but far from each other if they are performed by different walkers. Extracting features, that is, transforming the input data in the sample space into a feature space of higher J, can be used to link new observations of walkers more successfully.

Feature extraction is given by a linear transformation (feature) matrix Φ ∈ ℝ^{D×D̂} from the D-dimensional sample space G = {g_n}_{n=1}^{N} of not necessarily labeled gait samples to a D̂-dimensional feature space T = {t_n}_{n=1}^{N} of gait templates, where D̂ < D and each gait sample g_n is transformed into a gait template t_n = Φ^⊤ g_n. The objective is to learn a transform Φ that maximizes MMC in the feature space:

J(Φ) = tr(Φ^⊤ (Σ_B − Σ_W) Φ).

Once the transformation is found, all measured samples are transformed into templates (in the feature space) along with the class means and covariances. The templates are compared by the Mahalanobis distance function δ(t_n, t_{n'}) = √((t_n − t_{n'})^⊤ Σ^{−1} (t_n − t_{n'})), where Σ denotes the covariance matrix of the templates.

We show that a solution to the optimization problem in Equation (<ref>) can be obtained by eigendecomposition of the matrix Σ_B − Σ_W. An important property to notice about the objective J is that it is invariant w.r.t. rescalings Φ → αΦ. Hence, we can always choose Φ = [φ_1 ⋯ φ_{D̂}] such that φ_d^⊤ φ_d = 1. For this reason we can reduce the problem of maximizing J to the constrained optimization problem

max Σ_{d=1}^{D̂} φ_d^⊤ (Σ_B − Σ_W) φ_d subject to φ_d^⊤ φ_d − 1 = 0 ∀ d ∈ {1,…,D̂}.

To solve the above optimization problem, let us consider the Lagrangian

L(φ_d, λ_d) = Σ_{d=1}^{D̂} φ_d^⊤ (Σ_B − Σ_W) φ_d − λ_d (φ_d^⊤ φ_d − 1)

with multipliers λ_d. To find the maximum, we differentiate with respect to φ_d and equate to zero:

∂L/∂φ_d = ((Σ_B − Σ_W) − λ_d I) φ_d = 0,

which leads to (Σ_B − Σ_W) φ_d = λ_d φ_d, where λ_d are the eigenvalues of Σ_B − Σ_W and φ_d are the corresponding eigenvectors. Putting it all together, (Σ_B − Σ_W) Φ = Φ Λ, where Λ = diag(λ_1,…,λ_{D̂}) is the eigenvalue matrix. Therefore,

J(Φ) = tr(Φ^⊤ (Σ_B − Σ_W) Φ) = tr(Φ^⊤ Φ Λ) = Σ_{d=1}^{D̂} λ_d φ_d^⊤ φ_d = Σ_{d=1}^{D̂} λ_d = tr(Λ)

is maximized when Λ contains the D̂ largest eigenvalues and Φ contains the corresponding eigenvectors.

In the following we discuss how to calculate the eigenvectors of Σ_B − Σ_W and how to determine an optimal dimensionality D̂ of the feature space. Rewrite Σ_B − Σ_W = 2Σ_B − Σ_T. Note that the null space of Σ_T is a subspace of that of Σ_B, since the null space of Σ_T is the common null space of Σ_B and Σ_W. Thus, we can simultaneously diagonalize Σ_B and Σ_T to some Λ and I:

Ξ^⊤ Σ_B Ξ = Λ, Ξ^⊤ Σ_T Ξ = I,

with the D × rank(Σ_T) eigenvector matrix Ξ = Θ Δ^{−1/2} Ψ, where Θ and Δ are the eigenvector and corresponding eigenvalue matrices of Σ_T, respectively, and Ψ is the eigenvector matrix of Δ^{−1/2} Θ^⊤ Σ_B Θ Δ^{−1/2}. To calculate Ξ, we use a fast two-step algorithm in virtue of Singular Value Decomposition (SVD).
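Before detailing that two-step SVD route, note that at small scale the criterion can be optimized directly, exactly as derived above: take the eigenvectors of Σ_B − Σ_W with positive eigenvalues. A sketch reusing scatter_matrices from the previous listing (our simplification; it becomes impractical when D is large, which is what motivates the SVD algorithm):

import numpy as np

def learn_mmc_transform(Sb, Sw):
    # Maximize tr(F^T (Sb - Sw) F) under unit-norm columns: keep the
    # eigenvectors of (Sb - Sw) whose eigenvalues are positive.
    evals, evecs = np.linalg.eigh(Sb - Sw)   # ascending eigenvalues
    keep = np.flatnonzero(evals > 0)[::-1]   # positive ones, largest first
    return evecs[:, keep]

# Usage: F = learn_mmc_transform(*scatter_matrices(samples, labels)[:2])
# and each template is then t_n = samples[n] @ F.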
SVD expresses a real r × s matrix A as a product A = UDV^⊤, where D is a diagonal matrix with decreasing non-negative entries, U and V are r × min{r,s} and s × min{r,s} eigenvector matrices of AA^⊤ and A^⊤A, respectively, and the non-vanishing entries of D are square roots of the non-zero corresponding eigenvalues of both AA^⊤ and A^⊤A. Note that Σ_B and Σ_T can be written in the forms

Σ_B = B̃ B̃^⊤ where B̃ = [(μ_1 − μ) ⋯ (μ_C − μ)], and
Σ_T = T̃ T̃^⊤ where T̃ = [(1/√N_1)(g_{11} − μ) ⋯ (1/√N_C)(g_{N_C C} − μ)],

respectively. Hence, we can obtain the eigenvectors Θ and the corresponding eigenvalues Δ of Σ_T through the SVD of T̃, and analogically Ψ of Δ^{−1/2} Θ^⊤ Σ_B Θ Δ^{−1/2} through the SVD of Δ^{−1/2} Θ^⊤ B̃. The columns of Ξ are clearly the eigenvectors of 2Σ_B − Σ_T with the corresponding eigenvalues 2Λ − I. Therefore, to constitute the transform Φ by maximizing the MMC, we should choose the eigenvectors in Ξ that correspond to the eigenvalues of at least 1/2 in Λ. Note that Λ contains at most rank(Σ_B) = C − 1 positive eigenvalues, which gives an upper bound on the feature space dimensionality D̂.

We found inspiration in the Fisher Linear Discriminant Analysis (LDA) <cit.>, which uses Fisher's criterion

J_LDA = tr( (Φ_LDA^⊤ Σ_W Φ_LDA)^{−1} (Φ_LDA^⊤ Σ_B Φ_LDA) ).

However, since the rank of Σ_W is at most N − C, it is a singular (non-invertible) matrix if N is less than D + C or, analogously, might be unstable if N ≪ D. The small sample size is a substantial difficulty, as it is necessary to calculate Σ_W^{−1}. To alleviate this, the measured data can first be projected to a lower dimensional space using Principal Component Analysis (PCA), resulting in a two-stage PCA+LDA feature extraction technique <cit.> originally introduced for face recognition:

Φ_PCA = argmax_Φ tr(Φ^⊤ Σ_T Φ),
Φ_LDA = argmax_Φ tr( (Φ^⊤ Φ_PCA^⊤ Σ_W Φ_PCA Φ)^{−1} (Φ^⊤ Φ_PCA^⊤ Σ_B Φ_PCA Φ) ),

and the final transform is Φ = Φ_PCA Φ_LDA. Given that there are D_PCA principal components, then regardless of the dimensionality D there are at least D_PCA + 1 independent data points. Thus, if the D_PCA × D_PCA matrix Φ_PCA^⊤ Σ_W Φ_PCA is estimated from N − C independent observations, and provided that C ≤ D_PCA ≤ N − C, we can always invert Φ_PCA^⊤ Σ_W Φ_PCA and in this way obtain the LDA estimate. Note that this method is sub-optimal for multi-class problems <cit.>, as PCA keeps at most N − C principal components whereas at least C − 1 of them are necessary in order not to lose information. PCA+LDA in this form has been used for silhouette-based (2D) gait recognition by Su <cit.> and is included in our experiments with MoCap (3D).

On given labeled learning data L, Algorithm <ref> and Algorithm <ref> <cit.> provided below are efficient ways of learning the transforms Φ for MMC and PCA+LDA, respectively.

§ EVALUATION

This paper provides an extended evaluation. In the following we describe the evaluation database and framework, the setups for data separation into learning and evaluation parts, the evaluation metrics, and the results with discussion.

§.§ Database and Framework

For evaluation purposes we have extracted a large number of gait samples from the MoCap database obtained from the CMU Graphics Lab <cit.>, which is available under the Creative Commons license. It is a well-known and recognized database of structural human motion data and contains a considerable number of gait sequences. Motions are recorded with an optical marker-based Vicon system. People wear a black jumpsuit with 41 markers taped on. The tracking space of 30 m² is surrounded by 12 cameras with a sampling rate of 120 Hz at heights ranging from 2 to 4 meters above ground, thereby creating a video surveillance environment.
Motion videos are triangulated to get highly accurate 3D data in the form of relative body point coordinates (with respect to the root joint) in each video frame and are stored in the standard ASF/AMC data format. Each registered participant is assigned their respective skeleton, described in an ASF file. Motions in the AMC files store bone rotational data, which are interpreted as instructions about how the associated skeleton deforms over time.

These MoCap data, however, contain skeleton parameters pre-calibrated by the CMU staff. Skeletons are unique to each walker, and even a trivial skeleton check could result in 100% recognition. In order to fairly use the collected data, a prototypical skeleton is constructed and used to represent the bodies of all subjects, shrouding the skeleton parameters. Assuming that all walking individuals are physically identical disables the skeleton check from being a potentially unfair classifier. Moreover, this is a skeleton-robust solution as all bone rotational data are linked to one specific skeleton. To obtain realistic parameters, it is calculated as the mean of all skeletons in the provided ASF files.

The raw data are in the form of bone rotations or, if combined with the prototypical skeleton, 3D joint coordinates. The bone rotational data are taken from the AMC files without any pre-processing. We calculate the joint coordinates using the bone rotational data and the prototypical skeleton. One cannot directly use raw values of joint coordinates, as they refer to absolute positions in the tracking space, and not all potential methods are invariant to a person's position or walk direction. To ensure such invariance, the center of the coordinate system is moved to the position of the root joint root_t = [0,0,0]^⊤ for each time t, and the axes are adjusted to the walker's perspective: the X axis is from right (negative) to left (positive), the Y axis is from down (negative) to up (positive), and the Z axis is from back (negative) to front (positive). In the AMC file structure notation it is achieved by setting the root translation and rotation to zero in all frames of all motion sequences.

Since the general motion database contains all motion types, we extracted a number of sub-motions that represent gait cycles. First, an exemplary gait cycle was identified, and clean gait cycles were then filtered out using a threshold for their DTW distance on bone rotations in time. The distance threshold was explicitly set low enough so that even the least similar sub-motions still semantically represent gait cycles. Setting this threshold higher might also qualify sub-motions that do not resemble gait cycles anymore. Finally, subjects that contributed with fewer than 10 samples were excluded. The final database <cit.> has 54 walking subjects that performed 3,843 samples in total, resulting in an average of about 71 samples per subject.

As a contribution to reproducible research, we release our database and framework to improve the development, evaluation and comparison of methods for gait recognition from MoCap data. They are intended also for our fellow researchers and reviewers to reproduce the results of our experiments. Our recent paper <cit.> provides a manual and comments on reproducing the experiments. With this manual, a reader should be able to reproduce the evaluation and to use the implementation in their own application.
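Returning to the coordinate normalization described above, here is a minimal Python sketch. It is our own generic variant operating on raw joint positions rather than on the AMC representation; it assumes input of shape (T, J, 3) with the root joint at index 0 and only aligns the mean walking direction with +Z:

import numpy as np

def normalize_gait_sample(joints, root=0):
    # Center every frame at the root joint.
    centered = joints - joints[:, root:root + 1, :]
    # Horizontal displacement of the root over the sequence gives the walk direction.
    disp = joints[-1, root] - joints[0, root]
    theta = np.arctan2(disp[0], disp[2])      # angle from +Z in the XZ plane
    c, s = np.cos(theta), np.sin(theta)
    # Rotation about the Y axis that maps the walk direction onto +Z.
    R = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
    return centered @ R.T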
The source codes and data are available online at our research group web page <cit.> with a link to our departmental Git repository.

The evaluation framework comprises (i) source codes of the state-of-the-art human-interpretable geometric features that we have reviewed in our work to date, as well as our own two approaches where gait features are learned by MMC (see Algorithm <ref>) and by PCA+LDA (see Algorithm <ref>), the Random method (no features and a random classifier), and the Raw method (raw data and the DTW distance function). Depending on whether the raw data are in the form of bone rotations or joint coordinates, the methods are referred to with BR or JC subscripts, respectively. The framework includes (ii) a mechanism for evaluating four class separability coefficients of the feature space and four classifier performance metrics. It also contains (iii) a tool for learning a custom classifier and for classifying a custom probe on a custom gallery. We provide (iv) an experimental database along with source codes for its extraction from the CMU MoCap database.

§.§ Evaluation Setup and Metrics

In the following, we introduce two setups of data separation: homogeneous and heterogeneous. The homogeneous setup learns the transformation matrix on a fraction of the samples of C_L identities and is evaluated on templates derived from the remaining samples of the same C_E = C_L identities. The heterogeneous setup learns the transform on all samples of C_L identities and is evaluated on all templates derived from all samples of C_E other identities. An abstraction of this concept is depicted in Figure <ref>. Note that in the heterogeneous setup no walker identity is ever used for both learning and evaluation at the same time.

A use of the homogeneous setup can be any system to recognize people that cooperated in learning the features. For example, a small company entrance authentication where all employees cooperated to register. MoCap data of each employee's gait pattern can be registered along with a picture of their face. On the other hand, the heterogeneous setup can be used in person re-identification. During video surveillance, new identities can appear on the fly and labeled data for all the people encountered may not always be available. In this scenario, forensic investigators may ask for tracking the suspects.

The homogeneous setup is parametrized by a single number C_L = C_E of learning-and-evaluation identity classes, whereas the heterogeneous setup has the form (C_L, C_E). This parametrization specifies how many learning and how many evaluation identity classes are randomly selected from the database. The evaluation of each setup is repeated 3 times, selecting new random identity classes each time and reporting the average result.

In both setups, the class separability coefficients are calculated directly on the full evaluation part, whereas the classification metrics are estimated with 10-fold cross-validation, taking one fold with labels hidden as a testing set and the other nine labeled folds as the gallery. Test templates are classified by the winner-takes-all strategy, in which a test template t^test gets assigned the label of the gallery's closest identity class, i.e., that of argmin_i δ(t^test, t_i^gallery).

What follows is a list of evaluation metrics. Recognition rate is often perceived as the ultimate qualitative measure; however, it is more explanatory to include an evaluation in terms of the class separability of the feature space.
The class separability measures give an estimate of the recognition potential of the extracted features and do not reflect an eventual combination with an unsuitable classifier:

* Davies-Bouldin Index (DBI):
DBI = (1/C) Σ_{c=1}^{C} max_{1 ≤ c' ≤ C, c' ≠ c} (σ_c + σ_{c'}) / δ(μ_c, μ_{c'}),
where σ_c = (1/N_c) Σ_{n=1}^{N_c} δ(t_{nc}, μ_c) is the average distance of all elements in identity class I_c to its centroid, and analogically for σ_{c'}. Templates of low intra-class distances and of high inter-class distances have a low DBI.

* Dunn Index (DI):
DI = min_{1 ≤ c < c' ≤ C} δ(μ_c, μ_{c'}) / max_{1 ≤ c ≤ C} σ_c,
with σ_c from the above DBI. Since this criterion seeks classes with high intra-class similarity and low inter-class similarity, a high DI is more desirable.

* Silhouette Coefficient (SC):
SC = (1/N) Σ_{n=1}^{N} (b(n) − a(n)) / max{a(n), b(n)},
where a(n) = (1/N_c) Σ_{n'=1}^{N_c} δ(t_n, t_{n'}) is the average distance from t_n to the other samples within the same identity class and b(n) = min_{1 ≤ c' ≤ C, c' ≠ c} (1/N_{c'}) Σ_{n'=1}^{N_{c'}} δ(t_n, t_{n'}) is the average distance of t_n to the samples in the closest other class. It is clear that −1 ≤ SC ≤ 1, and an SC close to one means that classes are appropriately separated.

* Fisher's Discriminant Ratio (FDR):
FDR = ( (1/C) Σ_{c=1}^{C} δ²(μ_c, μ) ) / ( (1/C) Σ_{c=1}^{C} (1/N_c) Σ_{n=1}^{N_c} δ²(t_{nc}, μ_c) ).
A high FDR is preferred, seeking classes with low intra-class sparsity and high inter-class sparsity.

Apart from analyzing the distribution of templates in the feature space, it is schematic to combine the features with a rank-based classifier and to evaluate the system based on the distance distribution with respect to a query. For obtaining a more applied performance evaluation, we evaluate:

* Cumulative Match Characteristic (CMC): Sequence of Rank-k (for k on the X axis from 1 up to the number of gallery classes) recognition rates (Y axis) to measure the ranking capabilities of a recognition method. Its headline Rank-1 is the Correct Classification Rate (CCR).

* False Accept Rate vs. False Reject Rate (FAR/FRR): Two sequences of the error rates (Y axis) as functions of the discrimination threshold (X axis). Each method has a value of this threshold giving the Equal Error Rate (EER = FAR = FRR).

* Receiver Operating Characteristic (ROC): Sequence of True Accept Rate (TAR) and False Accept Rate with a varied discrimination threshold. For a given threshold, the system signals both TAR (Y axis) and FAR (X axis). The value of Area Under Curve (AUC) is computed as the integral of the ROC curve.

* Recall vs. Precision (RCL/PCN): Sequence of the rates with a varied discrimination threshold. For a given threshold the system signals both RCL (X axis) and PCN (Y axis). The value of Mean Average Precision (MAP) is computed as the area under the RCL/PCN curve.

These measures reflect how class-separated the feature space is, how often the walk pattern of a person is classified correctly, and how difficult it is to confuse different people. They do not, in fact, provide complementary information, although a quality evaluation framework should be able to evaluate the most popular measures. Different applications use different evaluation measures. For example, a hotel lobby authentication system could use a high Rank-3 at the CMC, while a city-level person tracking system is likely to need the ROC curve leaning towards the upper left corner.

Finally, the evaluation incorporates two scalability measures: average distance computation time (DCT) in milliseconds and average template dimensionality (TD).

§.§ Results

In this section we provide comparative evaluation results of the feature extraction methods in terms of the evaluation metrics defined in Section <ref>.
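As a companion to the separability definitions above, the two headline coefficients can be computed along the following lines. This is our sketch; for brevity it uses Euclidean rather than Mahalanobis distance:

import numpy as np

def dbi_and_sc(templates, labels):
    templates, labels = np.asarray(templates, float), np.asarray(labels)
    classes = np.unique(labels)
    cents = np.array([templates[labels == c].mean(axis=0) for c in classes])
    sig = np.array([np.linalg.norm(templates[labels == c] - cents[i], axis=1).mean()
                    for i, c in enumerate(classes)])
    # DBI: per class, the worst ratio (sigma_c + sigma_c') / dist(mu_c, mu_c').
    dbi = np.mean([max((sig[i] + sig[j]) / np.linalg.norm(cents[i] - cents[j])
                       for j in range(len(classes)) if j != i)
                   for i in range(len(classes))])
    # Silhouette: mean of (b - a) / max(a, b) over all templates.
    D = np.linalg.norm(templates[:, None, :] - templates[None, :, :], axis=2)
    sc = 0.0
    for n in range(len(templates)):
        a = D[n, labels == labels[n]].mean()   # includes the zero self-distance
        b = min(D[n, labels == c].mean() for c in classes if c != labels[n])
        sc += (b - a) / max(a, b)
    return dbi, sc / len(templates)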
To ensure a fair comparison, we evaluate all methods on the same experimental database and framework described in Section <ref>. Table <ref> presents the implementation details of all the methods and the results in terms of class separability coefficients, classification metrics and scalability.

The goal of the MMC-based learning is to find a linear discriminant that maximizes the margin between classes. This optimization technique appears to be more effective than designing geometric gait features. A variety of class-separability coefficients and classification metrics allows insights from different statistical perspectives. The results in Table <ref> indicate that the proposed MMC_BR method (on bone rotational data) is a leading concept for rank-based classifier systems: highest SC and AUC, lowest EER, and competitive DBI, DI, FDR, CCR and MAP. In terms of recognition rate, the MMC method was only outperformed by the Raw method, which is implemented here as a form of baseline. We interpret the high scores as a sign of robustness.

Apart from the performance merits, the MMC method is also efficient: relatively low-dimensional templates and the Mahalanobis distance ensure fast distance computations and thus contribute to high scalability. Note that even if the Raw method has some of the best results, it can hardly be used in practice due to its extreme consumption of time and space resources. On the other hand, Random has no features but cannot be considered a serious recognition method. To illustrate the evaluation time, calculating the distance matrix (a matrix of distances between all evaluation templates) took a couple of minutes for the MMC method, almost no time for the Random method, and more than two weeks for the Raw method. The learning time of the MMC and PCA+LDA methods increases with the number of learning samples N (through the computation of the Σ_B and Σ_W matrices); however, this is not an issue in the field of gait recognition, as training the models suffers from the opposite problem, undersampling.

To reproduce the experiments in Table <ref>, instructions are to be found at <cit.>. Please note that some methods are rather slow: the total evaluation times (learning included) in the last column were measured on a computer with an Intel® Xeon® CPU E5-2650 v2 @ 2.60GHz and 256 GB RAM.

An additional experiment was carried out with the traditional 10-fold cross-validation in which gait cycles of a common walk sequence are always kept together in the same fold. This is to prevent a situation in which two consecutive gait cycles are split between testing and training folds, which would cause potential overtraining. The classification metrics of the methods include CCR, EER, AUC, MAP, and the CMC curve up to Rank-10. Displayed in Table <ref>, the results suggest that the methods based on machine learning outperform the methods of hand-designed features in terms of all measured metrics. Note that the results are slightly worse than those in Table <ref> due to avoiding the overtraining cases, and that the order of the tested methods is preserved. We interpret this order as a demonstrative result of the comparison of all tested methods based on their classification potential.

Finally, the experiments A, B, C, D described below compare the homogeneous and heterogeneous setups and examine how the quality of the system in the heterogeneous setup improves with an increased number of learning identities. The main idea is to show that we can learn what aspects of walk people generally differ in and extract those as general gait features.
Recognizing people without needing group-specific features is convenient, as particular people might not always provide annotated learning data. Results are illustrated in Figure <ref> and in Figure <ref>.

A: homogeneous setup with C_L = C_E ∈ {2,…,27};
B: heterogeneous setup with C_L = C_E ∈ {2,…,27};
C: heterogeneous setup with C_L ∈ {2,…,27} and C_E = 27;
D: heterogeneous setup with C_L ∈ {2,…,27} and C_E = 27 − C_L.

Experiments A and B compare the homogeneous and heterogeneous setups by measuring the drop in the quality measures on an identical number of learning and evaluation identities (C_L = C_E). Please note that, based on Section <ref>, our database has 54 identity classes in total, and they can be split into the learning and evaluation parts in the numbers of at most (27,27). The top plot in Figure <ref> shows the values of the DBI and CCR metrics in both alternatives, which not only appear comparable, but in some configurations the heterogeneous setup has an even higher CCR. The bottom plot expresses the heterogeneous setup as a percentage of the homogeneous setup in each of the particular metrics. Here we see that with a rising number of identities the heterogeneous setup approaches 100% of the homogeneous setup. Therefore, the gait features learned by the introduced MMC method are walker-independent, which means that they can be learned on anybody.

Experiments C and D investigate the impact of the number of learning identities in the heterogeneous setup. Experiment D has only been evaluated up to (27,27), as results with a learning part larger than the evaluation part would be insignificant. As can be seen in Figure <ref>, the performance grows quickly over the first configurations with very few learning identities, which we can interpret as an analogy to the Pareto (80–20) principle. Specifically, the results of experiment C say that 8 learning identities achieve almost the same performance (66.78 DBI and 0.902 CCR) as if the features had been learned on 27 identities (68.32 DBI and 0.947 CCR). The outcome of experiment D indicates a similar growth of performance, and we see that 14 identities can be enough to learn the transformation matrix to distinguish 40 completely different people (0.904 CCR).

§ CONCLUSION

A common practice in state-of-the-art MoCap-based human identification is designing geometric gait features by hand. However, the field of pattern recognition has now advanced to a point where the best results are frequently obtained using a machine learning approach. Finding optimal features for MoCap-based gait recognition is no exception. This work introduces the concept of learning robust and discriminative features directly from raw MoCap data by a modification of the Fisher Linear Discriminant Analysis with Maximum Margin Criterion and by a combination of Principal Component Analysis and Linear Discriminant Analysis, both with the goal of maximal separation of identity classes and avoiding the potential singularity problem caused by undersampling. Instead of instinctively drawing ad-hoc features, these methods compute features from a much larger space beyond the limits of human interpretability. The collection of features learned by MMC achieves leading scores in four class separability coefficients and therefore has great potential for gait recognition applications. This is demonstrated on our extracted and normalized MoCap database of 54 walkers with 3,843 gait cycles by outperforming the other 13 methods in numerous evaluation metrics.
MMC is suitable for optimizing gait features; however, our future work will continue with research on further potential optimality criteria and multi-linear machine learning approaches.

The second contribution lies in showing the possibility of building a representation on one problem and using it on another (related) problem. Simulations on the CMU MoCap database show that our approach is able to build robust feature spaces without pre-registering and labeling all potential walkers. In fact, we can take different people (experiments A and B) and use just a fraction of them (experiments C and D). We have observed that already with 5 identities the heterogeneous evaluation setup exceeds 95% of the homogeneous setup and improves with an increasing volume of identities. This means that one does not have to rely on the availability of all walkers for learning; instead, the features can be learned on anybody. The CCR of over 90% when learning on 14 identities and evaluating on 40 other identities shows that an MMC-based system, once learned on a fixed set of walker identities, can be used to distinguish many more additional walkers. This is particularly important for a system that supports video surveillance applications where the encountered walkers never supply labeled data. Multiple occurrences of individual walkers can now be linked together even without knowing their actual identities.

In the name of reproducible research, our third contribution is the provision of the evaluation framework and database. All our data and source codes have been made available <cit.> under the Creative Commons Attribution license (CC-BY) for the database and the Apache 2.0 license for the software, which grant free use and allow for experimental evaluation. We hope that they will contribute to the smooth development and evaluation of further novel MoCap-based gait recognition methods. We encourage all readers and developers of MoCap-based gait recognition methods to contribute to the framework with new algorithms, data and improvements.
http://arxiv.org/abs/1708.07755v2
{ "authors": [ "Michal Balazia", "Petr Sojka" ], "categories": [ "cs.CV", "68T05, 68T10", "I.5" ], "primary_category": "cs.CV", "published": "20170824120014", "title": "Gait Recognition from Motion Capture Data" }
Symplectic rational G-surfaces and equivariant symplectic cones
Weimin Chen, Tian-Jun Li, and Weiwei Wu

Weimin Chen, Department of Mathematics, University of Massachusetts at Amherst, Amherst, MA 01003
Tian-Jun Li, School of Mathematics, University of Minnesota, Minneapolis, MN 55455
Weiwei Wu, Department of Mathematics, University of Georgia, Athens, GA 30606

We give characterizations of a finite group G acting symplectically on a rational surface (^2 blown up at two or more points). In particular, we obtain a symplectic version of the dichotomy of G-conic bundles versus G-del Pezzo surfaces for the corresponding G-rational surfaces, analogous to a classical result in algebraic geometry. Besides the characterizations of the group G (which is completely determined for the case of ^2# N^2, N=2,3,4), we also investigate the equivariant symplectic minimality and equivariant symplectic cone of a given G-rational surface.

§ INTRODUCTION

In this paper we study symplectic 4-manifolds (X,ω) equipped with a finite symplectomorphism group G, where X is diffeomorphic to a rational surface. We shall call such a pair, i.e., ((X,ω),G), a symplectic rational G-surface. They are the symplectic analog of (complex) rational G-surfaces studied in algebraic geometry, which are rational surfaces equipped with a holomorphic G-action. These rational G-surfaces played a central role in the classification of finite subgroups of the plane Cremona group, a problem dating back to the early 1880s, see <cit.>. Note that any rational G-surface can be regarded as a symplectic rational G-surface, simply by endowing it with a G-invariant Kähler form, which always exists. Our work shows that a large part of the story regarding the classification of rational G-surfaces can be recovered by techniques from 4-manifold theory and symplectic topology. Furthermore, we also add a new, interesting symplectic geometry aspect to the study of rational G-surfaces; in particular, in regard to the equivariant symplectic minimality and equivariant symplectic cone of the underlying smooth action of a rational G-surface. In addition, we also obtain a result which does not seem previously known in the algebraic geometry literature (cf. Theorem <ref>).

We begin with a discussion on the notion of minimality (i.e., G-minimality) in the equivariant context. Let (X,ω) be a symplectic 4-manifold with a finite symplectomorphism group G. Suppose there exists a G-invariant set of a disjoint union of symplectic (-1)-spheres in X. Then blowing down X along the (-1)-spheres gives rise to a symplectic 4-manifold (X',ω'), which can be arranged so that G is naturally isomorphic to a finite symplectomorphism group of (X',ω'). The symplectic G-manifold X is called minimal if no such set of (-1)-spheres exists. It was shown in <cit.> that when X is neither rational nor ruled, the symplectic G-manifold is minimal if and only if the underlying smooth manifold is minimal. However, in the case considered in the present paper, the underlying rational surface is often not minimal even though the corresponding symplectic rational G-surface is minimal. Furthermore, it is not known whether the notion of G-minimality is the same in the various different categories, i.e., the holomorphic, symplectic, or smooth categories.
In general it is a difficult problem to establish the equivalence of G-minimality in the different categories, and we refer the reader to <cit.> for a more thorough discussion of this topic. For our purpose in this paper, it suffices to study only minimal symplectic rational G-surfaces.

The most fundamental problem in our study is to classify symplectic rational G-surfaces up to equivariant symplectomorphisms. However, work in <cit.> showed that even in the simple case where X is ^2 or a Hirzebruch surface and G is a cyclic or meta-cyclic group, such a classification is already quite involved. In fact, in one circumstance where G is meta-cyclic, a weaker classification, i.e., classification up to equivariant diffeomorphisms, still remains open. With the preceding understood, the main objectives of this paper are more basic: for symplectic rational G-surfaces X in general, we would like to

* classify the possible symplectic structures;
* describe the induced action of G on H_2(X);
* give a list of possible finite groups for G;
* understand the equivariant minimality and equivariant symplectic cones.

These problems, however, are still highly non-trivial and not completely settled. In particular, part of our determination of G and the induced action on H_2(X) relies on the Dolgachev-Iskovskikh solution of the corresponding problems in algebraic geometry, with new inputs from Gromov-Witten theory and a detailed analysis of the symplectic structures.

§.§ The setup

In this paper, we shall be focusing on the case where the rational surface, denoted by X, is ^2 blown up at 2 or more points. More concretely, we shall consider minimal symplectic rational G-surfaces (X,ω) where X=^2# N^2, for N≥ 2. (Note that the minimality assumption implies in particular that G is a nontrivial group.) The case where the rational surface is ^2 or a Hirzebruch surface had been previously studied, cf. <cit.>; we point out that the J-holomorphic curve techniques employed in this paper are drastically different in flavor from those developed in these previous works.

For convenience, we shall fix some notations and terminology, which will be frequently used throughout the paper. We will denote by H, E_1, E_2, ⋯, E_N a basis of H^2(X;ℤ), under which the intersection matrix takes its standard form, i.e., H^2=1, E_i^2=-1, H· E_i=0, ∀ i, and E_i· E_j=0, ∀ i≠ j. The canonical class of (X,ω) will be denoted by K_ω∈ H^2(X), in order to emphasize its dependence on the symplectic form ω. Another frequently used notation is H^2(X)^G, which denotes the subset of H^2(X;ℤ) consisting of elements fixed under the induced action of G, and is called the invariant lattice. Recall that a symplectic rational surface (X,ω) is called monotone if K_ω=λ [ω] is satisfied in H^2(X;ℝ) for some λ∈ℝ. In this case, we have λ<0, and N must be in the range N≤ 8. Such a symplectic rational surface is the symplectic analog of a Del Pezzo surface in algebraic geometry. Another important notion, given in the following definition and called a symplectic G-conic bundle, corresponds to a conic bundle structure on a rational G-surface in algebraic geometry.

Definition. Let (X,ω) be a symplectic 4-manifold equipped with a finite symplectomorphism group G.
A symplectic G-conic bundle structure on (X,ω) is a genus-0 smooth Lefschetz fibration π: X→ B which obeys the following conditions:

* each singular fiber of π contains exactly one critical point;
* there exists a G-invariant, ω-compatible almost complex structure J such that the fibers of π are J-holomorphic;
* the group action of G preserves the Lefschetz fibration.

Although the above definition looks more rigid than it should be (in particular, it is always an almost complex fibration), Theorem <ref> shows that this is a purely symplectic notion in the case of minimal symplectic rational G-surfaces. A symplectic G-conic bundle is called minimal if for any singular fiber there is an element of G whose action switches the two components of the singular fiber. Here are some immediate consequences of the definition:

* X is a rational surface if and only if B=^2; in this case, note that the number of singular fibers of π equals N-1, where X=^2# N^2;
* the Lefschetz fibration is symplectic with respect to ω;
* the fiber class lies in the invariant lattice H^2(X)^G, as G preserves the Lefschetz fibration;
* if the underlying symplectic G-manifold is minimal, then the symplectic G-conic bundle must also be minimal.

§.§ The symplectic structures

Our first theorem is concerned with the symplectic structure of a minimal symplectic rational G-surface.

Theorem. Let (X,ω) be a minimal symplectic rational G-surface, where X=^2# N^2 for some N≥ 2. Then N≠ 2, and one of the following holds true:

(1) The invariant lattice H^2(X)^G has rank 1. In this case, 3≤ N≤ 8 and (X,ω) must be monotone.

(2) The invariant lattice H^2(X)^G has rank 2. In this case, N=5 or N≥ 7, and there exists a symplectic G-conic bundle structure on ((X,ω),G).

(a) An analog of Theorem <ref> for minimal rational G-surfaces is a classical theorem in algebraic geometry (see e.g. Theorem 3.8 in <cit.>), a proof of which can be given using the equivariant Mori theory (see Section 4 of <cit.>). With this understood, we remark that our proof of Theorem <ref> gives an independent proof of the corresponding result in algebraic geometry by taking ω to be a G-invariant Kähler form. Consequently, a significant portion of the theory of rational G-surfaces (e.g. as described in <cit.>) can be recovered (see Theorems <ref>, <ref>).

(b) In case (1) of Theorem <ref> the invariant lattice H^2(X)^G is spanned by K_ω, and in case (2), H^2(X)^G is spanned by K_ω and the fiber class of the symplectic G-conic bundle. We should point out that the analysis of the type of the group G, the induced action on H^2(X), as well as the structure of the equivariant symplectic cone, depends on the rank of the invariant lattice H^2(X)^G.

(c) In case (2), the proof of Theorem <ref> reveals the following additional information about the symplectic G-conic bundle structure: there exists a basis H, E_1, E_2, ⋯, E_N of H^2(X) with standard intersection matrix such that

(i) the fiber class is given by H-E_1 and the pair of (-1)-spheres in a singular fiber are given by the classes E_j and H-E_1-E_j, where j=2,⋯,N;

(ii) the symplectic areas satisfy ω(E_j)=1/2 ω(H-E_1), ω(E_1)≥ω(E_j) for j=2,⋯, N;

(iii) the canonical class K_ω=-3H + E_1+E_2+⋯ +E_N.

(d) In complex geometry, it was known that a minimal (complex) rational G-surface which is diffeomorphic to ^2 blown up at 6 points must be Del Pezzo (cf. <cit.>, Theorem 3.8, Proposition 5.2). However, it seems new that the invariant Picard group Pic(X)^G must be of rank 1.
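As a quick check of the homology relations in Remark (c) above (our computation; it uses only the standard intersection form and the adjunction formula):

\[
\begin{aligned}
F &:= H-E_1, & F^2 &= H^2 - 2\,H\cdot E_1 + E_1^2 = 1 - 0 - 1 = 0,\\
F &= E_j + (H-E_1-E_j), & E_j\cdot(H-E_1-E_j) &= -E_j^2 = 1,\\
K_\omega\cdot E_j &= \Big(-3H+\sum_i E_i\Big)\cdot E_j = -1, & (H-E_1-E_j)^2 &= -1 = K_\omega\cdot(H-E_1-E_j),
\end{aligned}
\]

so the adjunction formula $2g-2 = C^2 + K_\omega\cdot C$ gives genus $0$ for both classes in a singular fiber, and $\omega(F)=\omega(E_j)+\omega(H-E_1-E_j)$ together with (ii) forces $\omega(H-E_1-E_j)=\tfrac12\,\omega(F)$ as well.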
Comparing the three minimality assumptions: The reader should note that a minimal symplectic G-conic bundle is different from a minimal symplectic G-surface with a G-conic bundle structure: a minimal symplectic G-conic bundle may still contain a G-invariant disjoint union of symplectic (-1)-spheres. However, Lemma <ref> implies this cannot happen when N≥5 and N≠6. Minimal symplectic G-conic bundles form an intermediate notion between minimal symplectic G-surfaces and minimal complex G-surfaces. There is always a G-invariant symplectic form compatible with the given complex structure on a minimal complex G-surface. Although we do not know whether this symplectic form is always G-minimal, if we assume a symplectic G-conic bundle structure underlies this action, then this conic bundle structure is always minimal. Therefore, proving our results in the more general minimal G-conic bundle context plays an important role in bridging the complex and symplectic G-surfaces, as well as in the study of the equivariant symplectic cones.

§.§ The homological action and the groups

Our next task is to describe possible candidates for G. We begin with case (1) in Theorem <ref>, where (X,ω) is a minimal symplectic rational G-surface such that the invariant lattice H^2(X)^G has rank 1. In this case (X,ω) is monotone, H^2(X)^G is spanned by K_ω, and 3≤ N≤ 8. With this understood, we note that the orthogonal complement of K_ω in H^2(X) (with respect to the intersection product), denoted by R_N, is a G-invariant root lattice of type E_N (N=6,7,8), D_5 (N=5), A_4 (N=4), and A_2+A_1 (N=3), respectively. We denote by W_N the corresponding Weyl group.

Let (X,ω) be a minimal symplectic rational G-surface such that H^2(X)^G has rank 1. There are two cases:
(1) Suppose 4≤ N≤ 8. Then the induced action of G on H^2(X) is faithful, which gives rise to a monomorphism ρ: G→ W_N. Moreover, the image ρ(G) in W_N satisfies ∑_g∈ G trace{ρ(g):R_N→ R_N}=0.
(2) Suppose N=3. Let Γ be the subgroup of G which acts trivially on H^2(X) and let K:=G/Γ be the quotient group. Then Γ is isomorphic to a subgroup of the 2-dimensional torus, and K is isomorphic to ℤ_6 or the dihedral group D_12. Furthermore, G is a semi-direct product of Γ by K. As a corollary, G can be written as a semi-direct product of an imprimitive finite subgroup of PGL(3) by ℤ_2.

1. For N=4,5, the subgroups of the Weyl group W_N which satisfy the condition ∑_g∈ G trace{ρ(g):R_N→ R_N}=0 are determined; see Theorem 6.4 and Theorem 6.9 in <cit.>, respectively. All such groups can be realized by a minimal G-Del Pezzo surface, which is also minimal as a symplectic rational G-surface with respect to any G-invariant Kähler form (cf. Theorem 1.10(2)).
2. For N=3, Theorem <ref> (2) and Theorem <ref> completely determine all possible groups G acting minimally on X. The statement in Theorem <ref> implies the corresponding statement in <cit.>, Theorem 6.3. In fact, the semi-direct product structure of G in our statement is an improvement upon the corresponding theorem in algebraic geometry. For a list of imprimitive finite subgroups of PGL(3) up to conjugacy, see Theorem 4.7 in <cit.>.

Now we consider case (2) of Theorem <ref>. In fact, we will work in a slightly more general situation, where the symplectic rational G-surface (X,ω) is not assumed to be minimal, but only admits a minimal symplectic G-conic bundle π:(X,ω)→^2. Furthermore, we assume N≥ 4 (instead of N≥ 5, which holds when (X,ω) is a minimal symplectic rational G-surface).
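Before introducing the relevant subgroups, we record an elementary consistency check: for the fiber class F=H-E_1 one has K_ω^2=9-N, K_ω· F=-2 and F^2=0, so the Gram matrix of the pair (K_ω,F) with respect to the intersection form has determinant (9-N)·0-(-2)^2=-4≠0. Hence K_ω and F always span a rank-2 sublattice of H^2(X)^G; by Lemma <ref> below, they in fact span all of H^2(X)^G in the conic bundle case.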
We make the following definition.
* Q◃ G is the subgroup that acts trivially on the base ^2 of the G-conic bundle;
* G_0◃ Q is the subgroup that acts trivially on H^2(X) (in the case we consider, every element of G_0 fixes the N-1≥ 3 critical values on the base, hence acts trivially on the base, so G_0◃ Q);
* P=G/Q, so that G decomposes as 1→ Q→ G→ P→ 1.

We denote by Σ the subset of ^2 which parametrizes the singular fibers of the symplectic G-conic bundle π. Note that #Σ=N-1, and the induced action of P on ^2 leaves the subset Σ invariant. The action of P on ^2 is effective, so P is isomorphic to a polyhedral group, i.e., a finite subgroup of SO(3). Therefore, up to an extension problem, the description of G boils down to the following theorem, which describes the subgroups G_0 and Q.

Let (X,ω) be a symplectic rational G-surface equipped with a minimal symplectic G-conic bundle structure with at least three singular fibers (i.e., N≥ 4). Let G_0, Q be given as in Definition <ref>. Then one of the following is true.
1. G_0=ℤ_m, m>1, and Q is either the dihedral group D_2m containing G_0 as an index 2 subgroup, or Q=G_0 and m is even. Moreover, N must be odd, and any element τ∈ Q∖ G_0 switches the two (-1)-spheres in each singular fiber.
2. G_0 is trivial and Q=ℤ_2 or (ℤ_2)^2. In the latter case, let τ_1,τ_2,τ_3 be the distinct involutions in Q. Then Σ is partitioned into subsets Σ_1, Σ_2, Σ_3, where Σ_i parametrizes those singular fibers of which τ_i leaves each (-1)-sphere invariant, and #Σ_i≡ N-1 (mod 2) for i=1,2,3.

(a) Note that when G_0 is trivial and Q=(ℤ_2)^2, each element of P acts on Q as an automorphism, permuting the three involutions τ_1,τ_2,τ_3. Consequently, the action of P on the base ^2 preserves the partition Σ=Σ_1⊔Σ_2⊔Σ_3. For the corresponding result in algebraic geometry, see <cit.>, Theorem 5.7.
(b) When the fiber class of the symplectic G-conic bundle is unique, Q is uniquely determined as a subgroup of G; see Proposition 4.5 for more details.

§.§ Minimality and equivariant symplectic cones

In this subsection, we are concerned with the underlying smooth action of a minimal symplectic rational G-surface. In particular, our considerations here offer an interesting symplectic geometry perspective on the study of rational G-surfaces in algebraic geometry.

We begin with the setup of our study. Let X=^2# N^2, N≥ 2, be equipped with a smooth action of a finite group G. Suppose there is a G-invariant symplectic form ω_0 on X such that the corresponding symplectic rational G-surface (X,ω_0) is minimal. With this understood, we denote
Ω(X,G):={ω: ω is a symplectic form on X with g^*ω=ω for any g∈ G}.
Note that Ω(X,G) is non-empty, as ω_0∈Ω(X,G). Part (2) of the following theorem shows that the underlying smooth action of a minimal (complex) rational G-surface satisfies the above assumption, where we can take ω_0 to be any G-invariant Kähler form.

(1) Let X=^2# N^2, N≥ 2, be equipped with a smooth action of a finite group G. Suppose there is a G-invariant symplectic form ω_0 on X such that ((X,ω_0),G) is minimal. Then for any ω∈Ω(X,G), the canonical class K_ω=K_ω_0 or -K_ω_0, and the symplectic rational G-surface (X,ω) is minimal.
(2) Let X be any minimal (complex) rational G-surface which is ^2 blown up at 2 or more points. Then for any symplectic form ω which is invariant under the underlying smooth action of G (e.g., any G-invariant Kähler form), the corresponding symplectic rational G-surface (X,ω) is minimal.
At the time of writing, it is not known whether symplectic minimality implies smooth minimality of the underlying group action. Theorem <ref> asserts only the weaker statement that symplectic minimality is determined by the underlying smooth action; even so, the proof is quite non-trivial. For related (stronger) results in the case of G-Hirzebruch surfaces, see <cit.>.

With the minimality in Theorem <ref>(1) in place, we now turn our attention to the equivariant symplectic cone of the G-manifold (X,G). The equivariant symplectic cone of (X,G) is defined as
C̃(X,G)={Ω: Ω=[ω], where ω is a G-invariant symplectic form}⊂ H^2(X;ℝ)^G.
Note that K_-ω=-K_ω. Therefore it suffices to consider the subset C(X,G):={[ω] | ω∈Ω(X,G), K_ω=K_ω_0}⊂ H^2(X;ℝ)^G. Furthermore, we observe that if H^2(X)^G has rank 1, then C(X,G)={λ K_ω_0 | λ∈ℝ, λ<0}.

In what follows, we shall assume that H^2(X)^G has rank 2. Note that under this assumption, N≠ 6 by part (2) of Theorem <ref>. In order to describe C(X,G), it is helpful to introduce the following terminology. A class F∈ H^2(X)^G is called a fiber class if there exists an ω∈Ω(X,G) such that F is the class of the regular fibers of a symplectic G-conic bundle on ((X,ω),G). Since we will focus on the set C(X,G), we shall assume further that [ω]∈ C(X,G), i.e., K_ω=K_ω_0. We observe that since rank H^2(X)^G=2, the class of such an ω can be written as [ω]=-a K_ω_0+bF, a>0. With this understood, we consider the following subset of C(X,G) and its projective classes:
C(X,G,F)={[ω]∈ C(X,G): [ω]=-aK_ω_0+bF, a>0, b≥0},
Ĉ(X,G,F)={[ω]∈ C(X,G,F): ω(F)=2} (equivalently, a=1).
Now [ω]∈Ĉ(X,G,F) can be written as [ω]=-K_ω_0+ δ_ω,F F. Then the one-to-one correspondence [ω]↦δ_ω,F identifies Ĉ(X,G,F) with a subset of ℝ. With this understood, we introduce δ_X,G,F:=inf_[ω]∈Ĉ(X,G,F) δ_ω,F∈ [0,∞). Note that (X,ω) is monotone if and only if δ_ω,F=0, so δ_ω,F may be thought of as a sort of gap function which measures how far away (X,ω) is from being monotone.

Let X=^2# N^2, N≥ 2, be equipped with a smooth finite G-action which is symplectic and minimal with respect to some symplectic form ω_0. Furthermore, assume rank H^2(X)^G=2.
* If N≥ 9 or G_0 is nontrivial, there is a unique fiber class F, and C(X,G)=C(X,G,F).
* For N=5,7,8, either there is a unique fiber class F, or there are two distinct fiber classes F, F^'. In the former case, C(X,G)=C(X,G,F), and in the latter case, C(X,G)=C(X,G,F)∪ C(X,G,F^'), with C(X,G,F)∩ C(X,G,F^') being either empty or consisting of [ω] such that (X,ω) is monotone.
* For any fiber class F, Ĉ(X,G,F) is identified with either [0,∞) or (δ_X,G,F,∞) under [ω]↦δ_ω,F. (In particular, δ_X,G,F cannot be attained unless it equals 0.)

We conjecture that when there are two distinct fiber classes, the equivariant symplectic cone must contain the class of a monotone form. Furthermore, it is an interesting problem to determine the gap functions δ_X,G,F for a given minimal rational G-surface X with Pic(X)^G=ℤ^2. We shall leave these studies for a future occasion.

The organization of this paper is as follows. Section 2 is concerned with the proof of the structural theorem, Theorem 1.3. In Section 2.1 we collect some preliminary lemmas on minimal symplectic G-conic bundles. In Section 2.2 we review a reduction process for the exceptional classes of a rational surface which plays an essential role in the proof of Theorem 1.3. Sections 2.3 and 2.4 are devoted to the proof of Theorem 1.3. Section 3 is concerned with the analysis of the structure of the group G. Proofs of Theorems 1.5 and 1.8 are presented here.
Finally, Section 4 is devoted to the discussion of equivariant symplectic minimality and equivariant symplectic cones. In particular, we prove Theorems 1.10 and 1.12. At the end of Section 4, we also include a uniqueness result on the subgroup Q in Definition 1.7.

Acknowledgments: This long overdue project started from the Workshop and Conference on Holomorphic Curves and Low Dimensional Topology at Stanford in 2012. We are grateful to Igor Dolgachev for inspiring conversations during the FRG conference on symplectic birational geometry in 2014 at the University of Michigan. This work was partially supported by NSF Focused Research Grants DMS-0244663 and DMS-1065784, under FRG: Collaborative Research: Topology and Invariants of Smooth 4-Manifolds.

§ THE STRUCTURE OF (X,ω)

§.§ Preliminary lemmas on symplectic G-conic bundles

Let (X,ω) be a symplectic rational G-surface, where X=^2# N^2, and let π: X→^2 be a symplectic G-conic bundle on (X,ω). We first observe that each singular fiber of π consists of a pair of (-1)-spheres. To see this, let C_1, C_2 be the components of a singular fiber, which are embedded J-holomorphic spheres. Since C_1, C_2 are isolated J-curves, their self-intersections must be negative. On the other hand, C_1· C_2=1, so it follows easily from (C_1+C_2)^2=0 that both C_1 and C_2 are (-1)-spheres.

There are N-1 singular fibers. We shall pick a (-1)-sphere from each singular fiber and denote their homology classes by E_2, ⋯, E_N. Then there is a unique pair of a line class H and an exceptional class E_1 such that
* H-E_1 is the fiber class of π,
* H, E_1, E_2, ⋯, E_N form a basis of H_2(X) with standard intersection matrix.
With this understood, observe that
* for each (-1)-sphere E_j, j=2,⋯, N, the other (-1)-sphere lying in the same singular fiber has homology class H-E_1-E_j,
* the canonical class K_ω=-3H+ E_1+⋯ +E_N.

There are further consequences if the symplectic G-conic bundle is minimal. In this case, for each (-1)-sphere E_j, j=2,⋯,N, there exists a g∈ G such that g· E_j=H-E_1-E_j. This implies ω(E_j)=1/2 ω(H-E_1), j=2,⋯, N. Thus a minimal symplectic G-conic bundle falls into three cases: (i) ω(E_1)=ω(E_j), (ii) ω(E_1)>ω(E_j), (iii) ω(E_1)<ω(E_j). Case (i) occurs iff (X,ω) is monotone. Since E_1 is a section class, we shall call case (ii) (resp. case (iii)) a symplectic G-conic bundle with small fiber area (resp. large fiber area).

The following lemma is the J-holomorphic analog of Lemma 5.1 in <cit.>. Let π:X→^2 be a symplectic G-conic bundle on (X,ω), which comes with a G-invariant, ω-compatible almost complex structure J. Suppose E, E^' are two distinct J-holomorphic sections of self-intersection -m, -m^', and let r be the number of singular fibers where E, E^' intersect the same component. Then N-1=r+m+m^'+2E· E^'. Moreover, if the symplectic G-manifold (X,ω) is minimal, then N≥ 5 must be true.

Since E_1, F=H-E_1, E_s, s>1 generate H^2(X), both E and E^' have the form E_1+cF+∑_t>1 c_t E_t, where c∈ℤ, and c_t=0 or 1 depending on which component they intersect on each singular fiber. Since E, E^' are sections, we have E^'-E=bF +∑_s b_s E_s, where b∈ℤ and b_s=± 1, with s running over the set of singular fibers where E^' and E intersect different components. Note that the number of such s is exactly N-1-r. Now (E^'-E)^2= ∑_s b_s^2 E_s^2=-(N-1-r), which gives N-1=r+m+m^'+2E· E^'. Now suppose (X,ω) is minimal. We will show that N≥ 5 in this case. First, we claim that the sections E, E^' in the lemma do exist.
This is because the class E_1 can be represented by a J-holomorphic stable curve. By checking the intersection with H-E_1, the stable representative of E_1 contains a unique J-holomorphic section E. Note that E, F=H-E_1, E_s, s>1 also form a basis of H_2(X), and E_1 can be written as E_1=a(H-E_1)+∑_s>1 a_s E_s+E, where a≥ 0, and a_s=0 or 1 depending on which component E intersects at each singular fiber. Note that E^2=-2a-1-∑_s>1 a_s. We take E^'=g· E for some g∈ G such that E^'≠ E. Such a g∈ G exists because E must intersect a singular fiber and there is a g∈ G which switches the two (-1)-spheres in that singular fiber. Note that E^' is a section, as the fiber class is G-invariant, so that E^'· (H-E_1)=E·(H-E_1)=1. Furthermore, note that (E^')^2=E^2. Consequently, if E^2≤ -2, we must have N=1+r+m+m^'+2 E· E^'≥ 1+0+2+2+0=5. If E^2>-2, then a=a_s=0 and E=E_1 must be a (-1)-sphere. The minimality assumption then implies that E^'=g· E for some g∈ G can be chosen such that E^' intersects E. In this case, we have N= 1+r+m+m^'+2 E· E^'≥ 1+0+1+1+2=5. This finishes the proof of Lemma <ref>.

Let (X,ω) be a symplectic rational G-surface which admits a minimal symplectic G-conic bundle structure. Then the invariant lattice H^2(X)^G has rank 2 and is spanned by K_ω and the fiber class of the symplectic G-conic bundle.

First, we show that H^2(X;ℝ)^G is 2-dimensional. To see this, we first note that K_ω, H-E_1, E_2, ⋯, E_N form a basis of H^2(X;ℝ). We set V=H^2(X;ℝ)/Span_ℝ(K_ω,H-E_1). Then it suffices to show that V^G={0}, because K_ω, H-E_1∈ H^2(X)^G. We let e_2,⋯,e_N be the images of E_2,⋯,E_N under the quotient map, which form a basis of V. Suppose to the contrary that there is an x≠ 0 in V^G. We write x=∑_k=2^N a_k e_k. Then there exists a k_0 such that a_k_0≠ 0. With this understood, we note that by the minimality assumption, there is a g∈ G such that g· E_k_0=H-E_1-E_k_0, which means that g· e_k_0=-e_k_0. Now we set I={k | g· e_k=-e_k}. Then k_0∈ I; in particular, I≠∅. We let J be the complement of I in the set {2,⋯,N}. Then it follows easily that if k∈ J, then g· e_k=± e_l for some l∈ J (g· E_k=E_l or H-E_1-E_l for some l). With this understood, we write x=y+z, where y=∑_k∈ I a_k e_k and z=∑_k∈ J a_k e_k. Then g· x=-y+z^' for some z^'∈Span_ℝ(e_k | k∈ J). Since g· x=x, we have 2y=z^'-z∈Span_ℝ(e_k | k∈ J). Since e_2,⋯,e_N form a basis of V, this clearly contradicts the fact that y∈Span_ℝ(e_k | k∈ I) and y≠ 0. Hence the claim that H^2(X;ℝ)^G is 2-dimensional. Now for any α∈ H^2(X)^G, we write α=aK_ω+b(H-E_1) for some a,b∈ℝ. Then α· E_N=-a, implying a∈ℤ. On the other hand, since α· E_1=-a+b, we have b∈ℤ also. Hence H^2(X)^G is spanned by K_ω and H-E_1.

Let π: X→^2 be a minimal symplectic G-conic bundle where X=^2 # N^2 with N≥ 6 and even. Let J be any G-invariant, compatible almost complex structure, and let m_J=max{m∈ℤ: there is a J-holomorphic section of π of self-intersection -m}. Then m_J≤ (N-4)/2. In particular, when N=6, m_J≤ 1.

First, note that if E is a J-holomorphic section of self-intersection -m, then there is a g∈ G such that g· E≠ E, where g· E is also of self-intersection -m. This is because E must intersect a singular fiber, and by the minimality assumption there is a g∈ G which switches the two components of that singular fiber. Clearly for this g, g· E≠ E. Now applying Lemma <ref> to E and E^'=g· E, we see that m≤ (N-2)/2 as N is even. Hence we have reduced the lemma to showing m≠ (N-2)/2. Suppose to the contrary that m=(N-2)/2, and let E, E^' be a pair of sections whose self-intersection equals -m. Then by Lemma <ref>, we see that E, E^' are disjoint and r=1.
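Indeed, substituting m=m^'=(N-2)/2 into the relation N-1=r+m+m^'+2E· E^' of Lemma <ref> gives r+2E· E^'=1; since r≥0 and, by positivity of intersections of distinct J-holomorphic curves, E· E^'≥0, the only integral solution is r=1 and E· E^'=0.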
Let F be the singular fiber where E and E^' intersect the same (-1)-sphere. Then, again by the minimality assumption, there is an h∈ G which switches the two (-1)-spheres in F. It follows easily that h· E, E, and E^' are distinct. Let r, r^' be the numbers of singular fibers where h· E, E and h· E, E^' intersect the same (-1)-sphere, respectively. Then it follows easily that r+r^'=N-2. On the other hand, Lemma <ref> implies that r=r^'=1, contradicting the fact that N≥ 6. Hence the lemma.

§.§ Reduction of exceptional classes

For the moment we let ω be any symplectic structure on X=^2# N^2, and denote ℰ_ω={e∈ H^2(X) | e^2=-1, K_ω· e= -1, ω(e)>0}, which may depend on ω. The following fact is crucial in our considerations. (cf. <cit.>) Assume N≥ 2. Then for any ω-compatible almost complex structure J on X, each class E∈ℰ_ω with minimal area, i.e., ω(E)=min_e∈ℰ_ω ω(e), is represented by an embedded J-holomorphic sphere.

The key ingredient in the proof of Theorem <ref> is a reduction procedure which involves a certain type of standard basis of H^2(X), called a reduced basis. We begin with a brief digression and refer the reader to <cit.> or, independently, <cit.> for more details.

Recall that a reduced basis is a basis H, E_1,⋯, E_N of H^2(X) with standard intersection matrix, where E_i∈ℰ_ω, such that ω(E_N)=min_e∈ℰ_ω ω(e), and for any i<N, E_i satisfies the following inductive condition: let ℰ_i={e∈ℰ_ω | e· E_j=0 ∀ i<j}; then ω(E_i)=min_e∈ℰ_i ω(e). Furthermore, the canonical class K_ω=-3H+E_1+⋯+E_N. A reduced basis always exists. If N=2, then ℰ_ω={E_1,E_2, H-E_1-E_2}. For N≥ 3, a reduction procedure can be introduced, which requires the following discussion.

1. Introduce H_ijk=H-E_i-E_j-E_k for i<j<k and H_ij=H-E_i-E_j for i<j. Then H_ij∈ℰ_j, which implies that ω(H_ijk)≥ 0.
2. For any E∈ℰ_ω, write E=aH-∑_s b_s E_s. Then
* a≥ 0;
* if a>0, then b_s≥ 0 for all s;
* if a=0, then E=E_l for some l;
* assuming a>0, let b_i,b_j,b_k be the largest three coefficients (here we use the assumption N≥ 3); then b_i≤ a<b_i+b_j+b_k, which is equivalent to E· H_ijk<0 and E·(H-E_i)≥0.
3. The classes H_ijk are represented by embedded (-2)-spheres, hence for each H_ijk there is a diffeomorphism of X inducing an automorphism R(H_ijk) on H^2(X): R(H_ijk)α =α + (α· H_ijk)· H_ijk, ∀α∈ H^2(X). Moreover, each R(H_ijk) has the following properties: (1) R(H_ijk)K_ω=K_ω, (2) R(H_ijk)E∈ℰ_ω, ∀ E∈ℰ_ω.

With the preceding understood, consider any E∈ℰ_ω, where E=aH-∑_s b_s E_s with a>0. Let b_i,b_j,b_k be the largest three coefficients for some i<j<k. Then it follows easily from the last bullet of item 2 that
* R(H_ijk)E=a^' H-∑_s b_s^' E_s for some a^'<a, and
* ω (R(H_ijk)E)≤ω(E), with "=" iff ω(H_ijk)=0.
Set E^':= R(H_ijk)E. We say that E is reduced to E^' by H_ijk. The operations R(H_ijk) have the following properties of interest:
* (finite termination) By <cit.>, for any E∈ℰ_ω one may find a finite sequence of H_ijk's such that after performing the corresponding R(H_ijk)'s, E is reduced to E_l for some l.
* (monotonicity) The symplectic area is monotonically decreasing during the above reduction procedure. Therefore, after the reduction procedure, ω(E)≥ω (E_l), with "=" iff ω(H_ijk)=0 for all the H_ijk's involved. In particular, when E has the minimal area in ℰ_ω, i.e., ω(E)=ω(E_N), then we have ω(E_l)=ω(E_l+1)=⋯ =ω(E_N), and ω(H_ijk)=0 for all the H_ijk's involved.

We first rule out the case of N=2. There are no minimal symplectic rational G-surfaces with N=2. Suppose ((X, ω), G) is a minimal symplectic rational G-surface with N=2. Let {H,E_1,E_2} be a reduced basis. Then ℰ_ω={E_1,E_2,H-E_1-E_2}.
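This description of ℰ_ω is an elementary check: writing e=aH+c_1E_1+c_2E_2, the conditions e^2=-1 and K_ω· e=-1 read a^2-c_1^2-c_2^2=-1 and 3a+c_1+c_2=1, and by the Cauchy–Schwarz inequality (1-3a)^2=(c_1+c_2)^2≤ 2(c_1^2+c_2^2)=2(a^2+1), i.e., 7a^2-6a-1≤ 0, which forces a=0 or a=1. The case a=0 yields e=E_1 or E_2, and the case a=1 yields e=H-E_1-E_2.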
Fix a G-invariant J, and let C be the J-holomorphic (-1)-sphere representing E_2 (cf. Lemma <ref>). We set Λ:=∪_g∈ G g· C. Then Λ is a union of finitely many J-holomorphic (-1)-spheres, containing at least two distinct (-1)-spheres intersecting each other because of the minimality assumption. Since ℰ_ω={E_1,E_2,H-E_1-E_2}, there are only two possibilities: (1) Λ is a union of three (-1)-spheres, representing the classes E_1, E_2 and H-E_1-E_2, and (2) Λ is a union of two (-1)-spheres, representing the classes E_2 and H-E_1-E_2. In either case, H-E_1-E_2 is represented by a (-1)-sphere. Since H-E_1-E_2 is the only characteristic element in ℰ_ω, it must be fixed by the G-action, which contradicts the minimality of the symplectic G-manifold (X,ω). Hence there are no minimal symplectic rational G-surfaces with N=2.

§.§ Reduced basis for symplectic rational G-surfaces

Due to Lemma <ref>, in what follows we assume that (X,ω) is a minimal symplectic rational G-surface, where X=^2# N^2 with N≥ 3.

Suppose N≥ 3. Then one of the following must be true. (i) (X,ω) is monotone. (ii) The reduced basis of X satisfies ω(E_1)>ω(E_2)=⋯ =ω(E_N), ω(H-E_1)=2ω(E_j). Moreover, if E∈ℰ_ω has minimal area, then either E=E_j or E=H_1j=H-E_1-E_j for some j>1.

Assume (X,ω) is not monotone. We shall first prove a slightly weaker statement that is independent of the G-action.

Claim: If E∈ℰ_ω has minimal area in ℰ_ω and E≠ E_s for any s, then E=H-E_1-E_j for some j>1; furthermore, if such an E exists and E=H-E_1-E_j for some j>2, then we must have ω(E_2)=⋯ =ω(E_N).

Let E∈ℰ_ω be such a class, i.e., E has minimal area in ℰ_ω and E≠ E_s for any s. We reduce E to E_l for some l by a sequence of H_ijk's. Since E has minimal area in ℰ_ω, it follows that ω(E_l)=⋯=ω(E_N). Note that l>1: otherwise, ω(E_1)=⋯=ω(E_N) and ω(H_ijk)=0 for the H_ijk's involved, by the monotonicity property of the reduction process in Section <ref>, which would imply that (X,ω) is monotone, a contradiction. Suppose the reduction from E to E_l takes n steps, and let E^'∈ℰ_ω be the class obtained at the (n-1)-th step. Then E^'=aH -∑_s b_s E_s, where a>0 and b_s≥ 0. The equation E_l=E^'+ (E^'· H_ijk)· H_ijk reads E_l=(2a-b_i-b_j-b_k)H -∑_s=i,j,k(b_s+a-b_i-b_j-b_k)E_s-∑_s≠ i,j,k b_s E_s, which implies that l must be one of i, j or k; without loss of generality assuming that l=k, we then have 2a=b_i+b_j+b_k, a=b_j+b_k=b_i+b_k, a-b_i-b_j=-1. It follows easily that E^'=H_ij=H-E_i-E_j. Here we assume i<j, but do not require any condition on k=l>1. Note that E^' has minimal area in ℰ_ω, which implies that ω(H)=ω(E_i)+ω(E_j)+ω(E_l) from (<ref>). Next we prove that i=1. Suppose to the contrary that i>1. Then ω(H_1ij)=ω(H)-ω(E_1)-ω(E_i)-ω(E_j)≥ 0. On the other hand, ω(H)=ω(E_i)+ω(E_j)+ω(E_l) and ω(E_1)≥ω(E_l), which implies that ω(E_1)=⋯ =ω(E_N)=1/3 ω(H). This is a contradiction because we assume (X,ω) is not monotone. Hence i=1 and E^'=H-E_1-E_j.

We claim that E=E^'=H-E_1-E_j. Suppose this is not true. Then there must be a class Ẽ which is reduced to E^' by some H_vrt for v<r<t. We assert v>1. To see this, write Ẽ=aH -∑_s b_s E_s, where a>0 and b_s≥ 0. Then similarly we have H-E_1-E_j=(2a-b_v-b_r-b_t)H -∑_s=v,r,t(b_s+a-b_v-b_r-b_t)E_s-∑_s≠ v,r,t b_s E_s. If v=1, then 2a-b_v-b_r-b_t=1 and a-b_r-b_t=1, implying a=b_v. Now for s=r,t, the coefficient of E_s on the right hand side is b_t, b_r respectively, which are non-negative. It follows that b_r=b_t=0 from property 2 in Section <ref>, which contradicts the fact that a<b_v+b_r+b_t. Hence v>1. We now get a contradiction as follows.
Note that ω(H_vrt)=0 and ω(H_1vr)≥ 0 imply that ω(E_1)=⋯=ω(E_t). Then, with ω(H_vrt)=0 again, we have ω(H)=3ω(E_1). On the other hand, ω(E_l)=ω(E^')=ω(H)-ω(E_1)-ω(E_j), from which it follows that ω(E_1)=⋯=ω(E_l)=⋯=ω(E_N). This contradicts the assumption that (X,ω) is not monotone. Hence E=H-E_1-E_j is proved. Finally, if j>2, then ω(H_12j)≥ 0 and ω(H)-ω(E_1)-ω(E_j)=ω(E_l) imply that ω(E_2)=ω(E_l), hence ω(E_2)=⋯=ω(E_N). This concludes the claim.

To obtain Lemma <ref>, it remains to show that if (X,ω) is not monotone, then there exists some j>2 such that E=H-E_1-E_j attains the minimal area within ℰ_ω. To this end, we fix a G-invariant ω-compatible J. Then by Lemma <ref>, there exists a J-holomorphic (-1)-sphere C representing the class E_N. Note that for any g∈ G, g· C∈ℰ_ω. Now, by the assumption that (X,ω) is minimal, there must be a g∈ G such that g· C≠ C and g· C intersects C. The class g· C has minimal area in ℰ_ω and g· C≠ E_s for any s. Let g· C=H-E_1-E_j. Then j=N≥ 3 must be true because g· C intersects C. This finishes the proof of the lemma.

§.§ Proof of Theorem <ref>

In what follows, we still assume that N≥ 3. We will argue according to the following three possibilities:

i) (X,ω) is not monotone: By Lemma <ref>, there is a reduced basis {H, E_1,⋯,E_N} such that ω(E_1)>ω(E_2)=⋯=ω(E_N), and moreover, if E∈ℰ_ω has minimal area, then either E=E_j or E=H_1j=H-E_1-E_j for some j>1. Let J be any G-invariant ω-compatible almost complex structure. For each fixed j>1, E_j has minimal area, so by Lemma <ref> there is an embedded J-holomorphic sphere C representing E_j. Since (X,ω) is minimal, there must be a g∈ G such that g· C≠ C and g· C∩ C≠∅. It follows easily that g· C must be the J-holomorphic (-1)-sphere representing the class H-E_1-E_j, since it also has the minimal area. Furthermore, g· C and C intersect transversely and positively at a single point. The standard gluing construction in J-holomorphic curve theory yields a J-holomorphic sphere Ĉ carrying the class C+g· C=H-E_1. It follows that Ĉ has self-intersection 0, and by the adjunction formula it must be embedded. By a standard Gromov–Witten index computation, the moduli space of J-spheres in the class [Ĉ] gives rise to a fibration structure on X (with singular fibers), where each fiber is homologous to Ĉ. Since X is a rational surface, the base of the fibration must be ^2. We denote the fibration by π_j: X→^2.

Next we show that π_j is G-invariant. To see this, note that for any h∈ G, h· (C∪ g· C) must be a pair of J-holomorphic (-1)-spheres representing E_k and H-E_1-E_k for some k (see Lemma <ref>). It follows that the class of Ĉ, which is H-E_1, is invariant under the G-action. This implies that the fibration π_j is G-invariant. Finally, we note that the fibration π_j is independent of j, because the fiber class, which is H-E_1, is independent of j. We will denote the fibration by π: X→^2. Note that the same argument also shows that π contains at least N-1 singular fibers, consisting of pairs of (-1)-spheres whose classes are E_j, H-E_1-E_j for j=2,⋯, N. There are no other singular fibers by an Euler number count. Note that Lemma <ref> asserts rank H^2(X)^G=2 in this case.

ii) (X,ω) is monotone and H^2(X)^G has rank 1: In this case, note that K_ω∈ H^2(X)^G is a primitive class, hence H^2(X)^G is spanned by K_ω. The constraint N≤ 8 is an easy consequence of (X,ω) being monotone.
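Indeed, writing K_ω=-3H+E_1+⋯+E_N we have K_ω^2=9-N, while monotonicity gives K_ω=λ[ω] with λ<0, so that K_ω^2=λ^2 [ω]^2>0 since [ω]^2>0. Hence 9-N>0, i.e., N≤ 8.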
iii) (X,ω) is monotone and rank H^2(X)^G>1: Note that the de Rham class [ω]=λ K_ω∈ H^2(X;ℝ)^G for some λ∈ℝ. Since rank H^2(X)^G>1, we may pick a G-invariant closed 2-form η such that [η] lies in a different direction in H^2(X;ℝ)^G. Let ω^':=ω+ϵη for some very small ϵ. Then ω^' is a G-invariant symplectic structure such that (X,ω^') is not monotone.

Claim: (X,ω^') is minimal as a symplectic G-manifold for sufficiently small ϵ.

Suppose there is a disjoint union of ω^'-symplectic (-1)-spheres {C_i} such that for any g∈ G, g· C_i=C_j for some j. Note that for sufficiently small ϵ, we have ℰ_ω=ℰ_ω^' because K_ω=K_ω^'. Let e_i be the class of C_i. It follows easily that for each i, e_i∈ℰ_ω. Now pick a G-invariant J compatible with ω. Since (X,ω) is monotone, each e_i is represented by an embedded J-holomorphic (-1)-sphere Ĉ_i. Notice that {e_i} has the following properties: e_i· e_j=0 for i≠ j, and for any g∈ G, g· e_i=e_j for some j. It follows that {Ĉ_i} is a disjoint union of ω-symplectic (-1)-spheres which is invariant under the G-action. This contradicts the minimality assumption on (X,ω), hence the claim.

We apply the argument for case i) to (X,ω^'). Consequently there is a reduced basis {H, E_1,⋯,E_N} (w.r.t. ω^'), and a G-invariant fibration π^':X→^2, whose fiber class is H-E_1 and whose singular fibers are pairs of (-1)-spheres representing E_j, H-E_1-E_j for j>1. Note that by taking ϵ small, we have E_i∈ℰ_ω^'=ℰ_ω. Finally, observe the following crucial property: (∑_g∈ G g· E_j)^2=0 for any j>1, because E_k and H-E_1-E_k must appear in pairs in the sum.

Now let J be any G-invariant ω-compatible almost complex structure. For each fixed j>1, since (X,ω) is monotone and E_j∈ℰ_ω, E_j is represented by an embedded J-holomorphic (-1)-sphere C. Let Λ:=∪_g∈ G g· C. Then the set Λ is a union of finitely many distinct J-holomorphic (-1)-spheres. Since (∑_g∈ G g· E_j)^2=0 for any j>1, it follows easily that Λ^2=0. Let {Λ_i} be the set of connected components of Λ. Since G acts transitively on the connected components of Λ, it follows that for any i≠ k, Λ_i^2=Λ_k^2. Clearly Λ^2=∑_i Λ_i^2, which implies that Λ_i^2=0 for any i. Now Λ_i^2=0, together with the fact that Λ_i is a connected union of finitely many distinct J-holomorphic (-1)-spheres, implies that Λ_i consists of two (-1)-spheres intersecting transversely at a single point. We claim that the pair of (-1)-spheres in each Λ_i have classes E_k, H-E_1-E_k for some k>1. This is because for each g∈ G, g· E_j is either E_k or H-E_1-E_k for some k>1, and for each k>1, there is a g∈ G such that g· E_k=H-E_1-E_k. By the same argument as in case i), we obtain a G-invariant fibration on X as desired, independent of the choice of j.

The statement that N≥ 5 was proved in Lemma <ref>. The statement that N≠ 6 follows from Lemma <ref> below. This completes the proof of Theorem <ref>.

Let π:X→^2 be a minimal symplectic G-conic bundle where X=^2#6^2. Then for any G-invariant, compatible almost complex structure J, there is a G-invariant J-holomorphic (-1)-sphere.

Let H,E_1,⋯,E_6 be a basis of H^2(X) with standard intersection matrix such that H-E_1 is the fiber class and E_2,⋯, E_6 are (-1)-spheres contained in singular fibers. Note that the canonical class K_X=-3H+E_1+⋯ +E_6. With this understood, note that C=-K_X-(H-E_1)=2H-∑_6≥ i≥2 E_i∈ H^2(X)^G, and C is an exceptional class.
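(The latter is a direct check: C^2=4-5=-1 and K_X· C=(-3H+E_1+⋯+E_6)·(2H-E_2-⋯-E_6)=-6+5=-1, while C∈ H^2(X)^G because both K_X and the fiber class H-E_1 are G-invariant.)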
Hence for any given G-invariant, compatible almost complex structure J, C has a J-holomorphic representative, which admits a decomposition into irreducible components C=F̂+Ê, where F̂ is the sum of the components contained in the fibers of the G-conic bundle (the vertical class), and Ê is the sum of the other components (the horizontal class). By Lemma <ref>, E_1 must have a J-holomorphic representative: otherwise, it has a stable curve representative where one of the components has to be a section, since E_1· F=1; such a section has self-intersection less than -1, a contradiction.

To continue with our proof, we first show that Ê does not contain E_1-components. Suppose the multiplicity of the E_1-component is k. Since all irreducible components pair with H-E_1 non-negatively, (C-kE_1)· (H-E_1)≥0, hence k≤2. Furthermore, if the multiplicity is 2 (meaning either there is a doubly covered component or two components), then Ê=2E_1: otherwise, Ê· (H-E_1)≥3>C·(H-E_1), violating the positivity of intersection with H-E_1. For the multiplicity 2 case, F̂=2H-2E_1-∑_6≥ i≥2 E_i, while all possible irreducible components are of the form H-E_1-E_i, E_i or H-E_1 for 6≥ i≥2. A simple check shows this cannot be consistent with (<ref>). For the multiplicity 1 case, Ê=E_1+E^'. Here E^' satisfies E^'·(H-E_1)=1, hence is a section. Therefore, (E^')^2≥ -1 by Lemma <ref>. Since E^'≠ E_1, its coefficient of H in the reduced basis must be positive. One may again easily check that F̂ cannot be represented as a combination of classes of the form H-E_1-E_i, E_i or H-E_1.

Now since the equivariant J-holomorphic representative of C does not contain E_1-components, it is disjoint from the E_1-section. Therefore, F̂· E_1=0, which implies that the sum of vertical components is F̂=∑ m_s E_s for some s≠ 1. However, there is always an element g∈ G which sends E_s to H-E_1-E_s for any s≠ 1. This forces F̂=0, and C=Ê.

At last, again note that Ê has at most 2 components by positivity of intersection with H-E_1. Assume Ê has two components, which are then both sections, S_1 and S_2. By Lemma <ref>, C^2=(S_1+S_2)^2=-1=S_1^2+S_2^2+2S_1· S_2≥ -2+2S_1· S_2. Therefore, 2S_1· S_2≤ 1, which implies S_1· S_2=0. This implies that S_1, S_2 are disjoint sections with S_1^2=-1 and S_2^2=0. A simple check on the class C shows that no such decomposition into section classes exists (recall that [S_1]≠ E_1). Summarizing, we have shown that the J-holomorphic representative of C is indeed irreducible, hence a G-invariant J-holomorphic (-1)-sphere. Hence the lemma.

To compare with known results in algebraic geometry, it seems worthwhile to record the following easy consequence. Let (X,ω) be a minimal symplectic rational G-surface where X=^2# 6^2. Then the invariant lattice H^2(X)^G must be of rank 1. In particular, a minimal complex rational G-surface X which is ^2 blown up at 6 points must be Del Pezzo with Pic(X)^G=ℤ.

Under the above assumptions, Lemma <ref> implies that (X,ω) does not admit a G-conic bundle, hence rank H^2(X)^G=1 by Theorem <ref>. For the second statement, notice that a conic bundle on a minimal complex rational G-surface defines a minimal symplectic G-conic bundle with respect to any G-invariant Kähler form. Theorem 3.8 of <cit.> then asserts that X must be a Del Pezzo surface if rank H^2(X)^G=1.

§ THE STRUCTURE OF G

§.§ Proof of Theorem <ref>, 4≤ N≤ 8

We start with the following observation. Suppose (X,ω) is monotone. If N≥ 4, then the representation of G on H^2(X) is faithful.

Fix a G-invariant J. Let g∈ G be any element acting trivially on H^2(X). Then g fixes every element E∈ℰ_ω, which implies that all the J-holomorphic (-1)-spheres are invariant under g.
Now let C_1 be the J-holomorphic (-1)-sphere representing E_1, and for each 1<j≤ N, let C_j be the J-holomorphic (-1)-sphere representing H-E_1-E_j. Then it is clear that C_1 intersects each C_j, j>1, transversely at one point, and the C_j's are mutually disjoint. It follows that the cardinality of the set C_1∩ (∪_j C_j) is N-1. On the other hand, each point in C_1∩ (∪_j C_j) is fixed under g, hence the action of g on C_1 has at least N-1 fixed points. When N≥ 4, it follows that C_1 must be fixed by g. A similar argument shows that every J-holomorphic (-1)-sphere is fixed by g. Since there are J-holomorphic (-1)-spheres intersecting transversely at a point, the action of g on the tangent space at the intersection point must be trivial, which shows that g must be trivial. Hence the lemma.

Assume (X,ω) is a symplectic rational G-surface with rank H^2(X)^G=1. Since K_ω∈ H^2(X) is fixed under the action of G, there is an induced representation of G on the orthogonal complement R_N, which is faithful by Lemma <ref>. This gives rise to a monomorphism ρ: G→ W_N. On the other hand, H^2(X)^G is spanned by K_ω, so that R_N^G={0}. This implies that 1/|G| ∑_g∈ G trace{ρ(g):R_N→ R_N}=rank R_N^G=0. The proof for Case 1 is completed.

§.§ Proof of Theorem <ref>, N=3

Our main objective in this case is to show that G contains an index 2 subgroup which is isomorphic to an imprimitive finite subgroup of PGL(3). We first describe the list of subgroups of PGL(3) involved. For simplicity, we will adopt the following convention from <cit.>: we denote an element T∈ PGL(3) by the image of [z_0,z_1,z_2]∈^2 under T. Here is the list of imprimitive finite subgroups of PGL(3) up to conjugacy (see Theorem 4.7 in <cit.>), where μ_r=exp(2π i/r) is a primitive r-th root of unity.
* G_n, generated by the following elements of PGL(3): [μ_n z_0,z_1,z_2], [z_0,μ_n z_1,z_2], [z_2,z_0,z_1]. The group G_n is isomorphic to a semi-direct product of (ℤ_n)^2 and ℤ_3.
* G̃_n, generated by the following elements of PGL(3): [μ_n z_0,z_1,z_2], [z_0,μ_n z_1,z_2], [z_0,z_2, z_1], [z_2,z_0,z_1]. The group G̃_n is isomorphic to a semi-direct product of (ℤ_n)^2 and S_3.
* G_n,k,s, where k>1, k|n, and s^2-s+1≡0 (mod k). It is generated by the following elements of PGL(3): [μ_n/k z_0,z_1,z_2], [μ_n^s z_0,μ_n z_1,z_2], [z_2,z_0,z_1]. The group G_n,k,s is isomorphic to a semi-direct product of ℤ_n×ℤ_n/k and ℤ_3.
* G̃_n,3,2, generated by the following elements of PGL(3): [μ_n/3 z_0,z_1,z_2], [μ_n^2 z_0,μ_n z_1,z_2], [z_0,z_2,z_1], [z_1,z_0,z_2]. The group G̃_n,3,2 is isomorphic to a semi-direct product of ℤ_n×ℤ_n/3 and S_3.

§.§.§ Preliminaries: factorization of G by exceptional spheres

Let {H,E_1,E_2,E_3} be a reduced basis. Then ℰ_ω={E_1,E_2,E_3, H-E_1-E_2, H-E_1-E_3, H-E_2-E_3}. Since (X,ω) is monotone, the classes in ℰ_ω have the same area, and consequently, each class in ℰ_ω is represented by a J-holomorphic (-1)-sphere for any fixed J, which we assume to be G-invariant. Let Λ be the union of these six (-1)-spheres. The intersection pattern of these curves can be described by a hexagon, where each edge represents a (-1)-sphere and each vertex represents an intersection point (see Figure 1). For simplicity, for each E∈ℰ_ω we shall use the same notation to denote the corresponding (-1)-sphere. Obviously there is an induced G-action on Λ.
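For later use, we record the elementary identity showing that Λ represents the anti-canonical class: in the sum of the six classes of ℰ_ω, each E_i appears once with sign + and twice with sign -, so that E_1+E_2+E_3+(H-E_1-E_2)+(H-E_1-E_3)+(H-E_2-E_3)=3H-E_1-E_2-E_3=-K_ω. This fact enters the proof of the lemma below.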
The action of G on the components of Λ is transitive, and consequently, there is a short exact sequence 1→Γ→ G→ K→ 1, where Γ is isomorphic to a subgroup of S^1× S^1, and K is either D_12, the full automorphism group of the hexagon, or the cyclic subgroup of order 6.

Let C be any of the (-1)-spheres. Then the class of the union ∪_g∈ G g· C must lie in H^2(X)^G. Since H^2(X)^G=Span (K_ω), the class of ∪_g∈ G g· C must be a multiple of K_ω. On the other hand, the class of Λ equals -K_ω, from which it follows easily that Λ=∪_g∈ G g· C. This proves that the action of G on the components of Λ is transitive. The action of G on Λ gives rise to a short exact sequence 1→Γ→ G→ K→ 1, where Γ is the normal subgroup of G consisting of elements which leave each (-1)-sphere invariant. The quotient group K=G/Γ has an effective, transitive action on the hexagon Λ, so it must be either the automorphism group D_12 of the hexagon or the cyclic subgroup of order 6. To see that Γ is a subgroup of S^1× S^1, we look at the action of Γ on the tangent space at any intersection point of two adjacent (-1)-spheres in Λ. The action preserves a pair of complex lines intersecting transversely, giving a natural isomorphism of Γ onto a subgroup of S^1× S^1.

In fact, we may identify K geometrically as follows. There is a natural isomorphism D_12=ℤ_2× S_3, which is precisely the Weyl group W_N of the corresponding root lattice R_N; the latter is generated by H-E_1-E_2-E_3, E_1-E_2 and E_2-E_3. In this sense, ℤ_2=⟨ s_1⟩, where s_1 is the rotation by 180 degrees of the hexagon, and S_3=⟨ s_2,s_3⟩, where s_2, s_3 are the reflections of the hexagon which switch E_1 and E_2, respectively E_2 and E_3. Note that s_2s_3, which is s_3 followed by s_2, is a counter-clockwise rotation by 120 degrees of the hexagon. It follows that the short exact sequence 1→Γ→ G→ K→ 1 is the same as the one obtained from the induced action of G on the root lattice R_N, with Γ being the subgroup of G acting trivially on H^2(X). Under this identification, one of the following is true by Lemma <ref>:
* K is generated by s_1, s_2 and s_3 if it is the dihedral group D_12 of order 12;
* K is generated by s_1 and s_2s_3 if it is the cyclic subgroup of order 6.

In order to understand the structure of G, we begin by getting more information about the subgroup Γ. To this end, we fix a monomorphism Γ→ S^1× S^1 induced from the action of Γ on the tangent space at the intersection point of E_1 and H-E_1-E_3, where the first S^1-factor comes from the action on E_1. We summarize the main objects in consideration as follows for the reader's convenience:
* Γ◃ G is the subgroup with trivial homological action,
* K=G/Γ,
* ρ_i: Γ→ S^1, i=1,2, are the projections to the two S^1-factors,
* Γ_i=ker ρ_i,
* Γ_i^'=image ρ_i⊂ S^1,
* Γ̃_i is a subgroup of Γ such that ρ_i: Γ̃_i→Γ_i^' is an isomorphism,
* Γ_i, Γ_i^' are cyclic,
* ord (Γ_1)=ord (Γ_2) and ord (Γ_1^')=ord (Γ_2^'),
* ord (Γ_i) | ord (Γ_i^').
We will hence denote n:=ord(Γ_i^') and k:=ord(Γ_i^')/ord(Γ_i).

It is clear that Γ_i^' is cyclic. Γ_i is also cyclic because both ρ_2|_Γ_1 and ρ_1|_Γ_2 are injective. Note that this also shows that the order of Γ_1^' (resp. Γ_2^') is divisible by the order of Γ_2 (resp. Γ_1). Finally, Γ_1 and Γ_2 have the same order. This is because if we let g∈ G be an element whose action on the hexagon is a counter-clockwise rotation by 60 degrees, then gΓ_2 g^-1=Γ_1. Consequently, the order of Γ_i^'≅Γ/Γ_i is independent of i.

The subgroup Γ̃_i<Γ exists. In particular, since Γ is Abelian, Γ≅Γ_i×Γ̃_i for i=1,2. Let h∈Γ be an element such that ρ_1(h) is a generator of Γ_1^'.
Since ρ_1(h)^n=1, we have h^n∈Γ_1. We claim h^n=1. This is because ρ_2|_Γ_1 is injective, so that if h^n≠ 1 in Γ_1, then ρ_2(h)^n=ρ_2(h^n)≠ 1 in Γ_2^'. But this contradicts the fact that the order of Γ_2^' equals n. Hence h^n=1. With this understood, we simply take Γ̃_1 to be the subgroup generated by h.

§.§.§ Rotation numbers

Next we recall some basic facts about rotation numbers. Let h∈Γ be any element and E∈ℰ_ω be any (-1)-sphere which is invariant under h. Then either h fixes E or h acts on E nontrivially. In the latter case, h fixes two points on E. The following fact is a straightforward local computation:

Fact: if we let (a,b) be the rotation numbers of h at one of the fixed points, where a is the tangential weight and b is the weight in the normal direction, then the rotation numbers at the other fixed point are (-a, b+a), with the second number being the weight in the normal direction.

Explicitly, the tangential weight being a means that the action of h is given by multiplication by exp(2aπ√(-1)/ord(h)), and similarly for the normal weight. Here we orient the (-1)-sphere by the almost complex structure J and orient the normal direction accordingly, so there is no sign ambiguity in a, b. Note that even when E is fixed by h, this continues to make sense, with the understanding that a=0 in this case.

With the preceding understood, we order the exceptional curves and their intersections according to the counter-clockwise orientation of the hexagon in Figure <ref>, and take E_1 to be the first exceptional curve. For example, we say H-E_1-E_3 is before E_1 and H-E_1-E_2 is after E_1, and the intersection point of H-E_1-E_3 and E_1 is the first fixed point on E_1, etc. Now for any h∈Γ, we denote by (a,b) the rotation numbers at the intersection point of H-E_1-E_3 and E_1, with a being the weight in the direction tangent to E_1. According to the orientation of the hexagon, this is the first fixed point of h on E_1. With this understood, the rotation numbers at the fixed points of h on the (-1)-spheres are given below according to the orientation of the hexagon: (a+b,-a), (b,-a-b), (-a,-b), (-a-b,a), (-b,a+b). Finally, we remark that h is completely determined by the rotation numbers at the six vertices of the hexagon.

Throughout the rest of the proof of Theorem <ref>, g∈ G will denote an element which acts on the hexagon by a counter-clockwise rotation of 60 degrees, and we shall investigate the action of g on Γ=Γ_1×Γ̃_1 given by conjugation, i.e., h↦ ghg^-1, ∀ h∈Γ. Here is the first corollary of the analysis of rotation numbers: note that the action of g^3, which is a rotation by 180 degrees, sends every pair of rotation numbers to its negative, i.e., (a,b)↦ (-a,-b), (a+b,-a)↦ (-a-b,a) and (b,-a-b)↦ (-b,a+b). This implies that g^3hg^-3=h^-1, ∀ h∈Γ. Since g^3 is sent to s_1∈ K under the homomorphism G→ K in (<ref>), we obtain the following lemma:

(1) For any element of G which is sent to s_1∈ K under G→ K, its action on Γ by conjugation sends h∈Γ to h^-1. (2) g^6=1.

It remains to show that g^6=1. First, note that g^6∈Γ. Secondly, the action of g^3 on g^6 by conjugation is trivial, but the action of g^3 on Γ sends h to h^-1 as we just showed, from which we see that either g^6=1 or g^6 is an involution. We rule out the latter by showing that the conjugation action of g on any involution of Γ is nontrivial; the assertion that G is a semi-direct product of Γ and K then follows. Let τ∈Γ be any involution. Without loss of generality, we assume τ acts nontrivially on E_1.
Then, among the rotation numbers at the two fixed points, (a,b) and (-a,b+a), we must have a=1 because τ has order 2, and furthermore, either b=0 or b=1. If b=0, the rotation numbers for the action of τ at the six vertices are (1,0), (1,-1), (0,-1), (-1,0), (-1,1), (0,1). If b=1, the rotation numbers for the action of τ at the six vertices are (1,1), (0,-1), (1,0), (-1,-1), (0,1), (-1,0). In the former case, the rotation numbers for the action of gτ g^-1 are (0,1), (1,0), (1,-1), (0,-1), (-1,0), (-1,1), which shows gτ g^-1≠τ. In the latter case, the rotation numbers for the action of gτ g^-1 are (-1,0), (1,1), (0,-1), (1,0), (-1,-1), (0,1), which also shows gτ g^-1≠τ. This finishes the proof of the lemma.

With these preparations, we may now continue our proof according to the alternatives given by Lemma <ref>:
* Case A: K=ℤ_6, and
* Case B: K=D_12.

§.§.§ Case A: K=ℤ_6

We begin with the following observation, which follows immediately from the fact that g^6=1. G is a semi-direct product of Γ and K.

Let ⟨Γ,g^2⟩ denote the index 2 subgroup of G generated by Γ and g^2. If K=ℤ_6, then ⟨Γ,g^2⟩ is isomorphic to G_n when k=1, and is isomorphic to G_n,k,s when k>1, where n=|Γ_1^'|=|Γ̃_1| and n/k=|Γ_1|.

Let h_1∈Γ_1 be the generator whose rotation numbers at the six vertices of the hexagon are (0,1), (1,0), (1,-1), (0,-1), (-1,0), (-1,1), and let h̃_1∈Γ̃_1 be the generator whose rotation numbers at the six vertices are (1,b), (1+b,-1), (b,-1-b), (-1,-b), (-1-b,1), (-b,1+b). This can be seen by examining the weight at the first intersection point (the one between H-E_1-E_3 and E_1), because it should project to a generator of Γ_1^'. Then g^2h_1g^-2 has rotation numbers (-1,0), (-1,1), (0,1), (1,0), (1,-1), (0,-1), and g^2h̃_1 g^-2 has rotation numbers (-1-b,1), (-b,1+b), (1,b), (1+b,-1), (b,-1-b), (-1,-b). Let g^2h_1g^-2=h̃_1^kl h_1^u for some l and u (note that k· ord(Γ_i)=ord(Γ_i^')). Then, comparing the rotation numbers, we have (-1,0)= (l,bl)+(0,u), which implies that l=-1 and u=b. Similarly, letting g^2h̃_1 g^-2=h̃_1^m h_1^v for some m and v, we have (-1-b,1)=(m,mb)+(0,kv), which implies that b^2+b+1≡ kv (mod n) and m=-1-b. Putting this together, we have g^2h_1g^-2=h̃_1^-k h_1^b, g^2h̃_1 g^-2=h̃_1^-b-1 h_1^v.

We will need a different presentation of ⟨Γ,g^2⟩. To this end, we let t_1=h_1, t_2=h̃_1^-1, and moreover, we set s=-b. Then we have g^2t_1g^-2=t_2^k t_1^-s, g^2t_2 g^-2=t_2^s-1 t_1^-v, where s^2-s+1≡ kv (mod n). With this presentation, one can identify the subgroup ⟨Γ,g^2⟩ with an imprimitive finite subgroup of PGL(3). More precisely, when k=1, i.e., when Γ̃_1 and Γ_1 have the same order, we can actually take Γ̃_1 to be Γ_2, which corresponds to b=0. Then s=-b=0, and with k=1, we have v=1. In this case, ⟨Γ,g^2⟩ is isomorphic to G_n by identifying t_1= [μ_n z_0,z_1,z_2], t_2=[z_0,μ_n z_1,z_2], and g^2=[z_2,z_0,z_1]. When k>1, ⟨Γ,g^2⟩ is isomorphic to the group G_n,k,s by identifying t_1= [μ_n/k z_0,z_1,z_2], t_2=[μ_n^s z_0,μ_n z_1,z_2], and g^2=[z_2,z_0,z_1]. This finishes the proof of Proposition <ref>.

§.§.§ Case B: K=D_12

We will first show that G is a semi-direct product of Γ and K. There exists an involution in G which is sent to the reflection s_3 under G→ K. Moreover,
* for any h∈Γ and any such involution τ, hτ h^-1=ĥτ, where h, ĥ are related as follows: if the rotation numbers of h at the first fixed point on E_1 (according to the orientation of the hexagon, which is counter-clockwise) are (a,b), then the rotation numbers of ĥ at the first fixed point on E_1 are (2a,-a);
* for any two such involutions τ, τ^', we have τ^'=hτ for some h∈Γ whose rotation numbers at the first fixed point on E_1 are (2a,-a) for some a.
We note that analogous statements hold for the reflections s_2 and s_2s_3s_2 in K.

Suppose τ∈ G is sent to s_3 under G→ K. Then τ leaves the (-1)-spheres E_1 and H-E_2-E_3 invariant. Let h_1∈Γ_1 be the generator with rotation numbers (0,1) at the first fixed point. By examining the rotation numbers, one can easily check that τ h_1τ^-1=h_1. On the other hand, it is easily seen that τ^2∈Γ fixes 4 points on E_1 (the two fixed points of τ and two coming from the intersections with the adjacent exceptional curves), hence the whole (-1)-sphere E_1. Therefore, τ^2∈Γ_1. The key observation is that τ^2=h_1^2b is an even power of h_1. To see this, note that τ has two fixed points on E_1 and their rotation numbers are (a,b) and (-a,b+a) for some a,b. Furthermore, the order of τ must be even, say 2m. Since τ^2 fixes the (-1)-sphere E_1, we must have 2a=2m, and the rotation numbers of τ^2 at the two fixed points are (0, 2b) and (0,2b). Comparing with the rotation numbers of h_1, we see easily that τ^2=h_1^2b. With τ h_1τ^-1=h_1, we see easily that τ h_1^-b is an involution which is sent to s_3 under G→ K.

Now we consider any involution τ which is sent to s_3. For any h∈Γ, let x, y be the first and second fixed points on E_1, and suppose the rotation numbers of h at x are (a,b). Then τ(x)=y and τ: T_x E_1 → T_y E_1. It is easily seen that hτ h^-1: T_x E_1→ T_y E_1 equals (-a)τ(-a)=(-2a)τ (here (-a) means multiplication by exp(2(-a)π√(-1)/ord(h))). Similarly, hτ h^-1= (a+b)τ(-b)=aτ in the normal direction. Hence hτ h^-1=ĥτ for some ĥ whose rotation numbers at y are (-2a,a). It follows easily that the rotation numbers of ĥ at x are (2a,-a).

Finally, let τ, τ^' be any two involutions sent to s_3. We let f=dτ: T_xX→ T_yX and g=dτ^': T_yX→ T_xX. Then g∘ f=d(τ^'τ): T_x X→ T_x X and g^-1∘ f^-1=d(ττ^'): T_y X→ T_y X. Note that τ^'τ=h∈Γ. Since g^-1∘ f^-1=(f∘ (g∘ f)∘ f^-1)^-1, we see easily that the rotation numbers of h at y are the negatives of the rotation numbers at x. If the rotation numbers at x are (c,d), then the rotation numbers at y are (-c,c+d) (the second number in each pair stands for the weight in the normal direction). This gives rise to the relation c+d=-d, so that the rotation numbers of h at x are (-2d,d). Setting d=-a, we have proved that τ^'=hτ for some h∈Γ whose rotation numbers at the first fixed point on E_1 are (2a,-a) for some a, as claimed.

The group G is a semi-direct product of Γ and K.

Recall that we fixed an element g∈ G which is sent to a counter-clockwise rotation of 60 degrees under G→ K. We set τ_1=g^3, which is an involution by Lemma <ref>. We pick another involution τ_3∈ G which is sent to s_3∈ K, as provided by Lemma <ref>. We shall first show that we can always arrange to have τ_1τ_3τ_1=τ_3. By Lemma <ref>, τ_1τ_3τ_1=h^'τ_3 for some h^'∈Γ whose rotation numbers at the first fixed point on E_1 are (2a_3,-a_3) for some a_3. On the other hand, if we replace τ_3 by hτ_3h^-1=ĥτ_3, then from Lemma <ref>, τ_1(ĥτ_3)τ_1=ĥ^-1τ_1τ_3τ_1=ĥ^-2h^' (ĥτ_3). Hence if there exists an ĥ∈Γ such that ĥ^2=h^', which is equivalent to a_3 being even, then we can replace τ_3 by hτ_3h^-1 for an h∈Γ to achieve the commutativity property. To show that a_3 is even, we pick an involution τ_2 which is sent to s_2∈ K (by an analog of Lemma <ref>) and consider τ_1τ_2τ_1. By the corresponding version of Lemma <ref>, we see that τ_1τ_2τ_1=h̃τ_2 for some h̃∈Γ whose rotation numbers at the second fixed point on E_1 are (2a_2,-a_2) for some a_2.
It follows easily that the rotation numbers of h̃ at the first fixed point on E_1 are (a_2,a_2). Now τ_2τ_3 is sent to a counter-clockwise rotation of 120 degrees under G→ K, so that there exists an h∈Γ such that τ_2τ_3=hg^2. Now τ_1(τ_2τ_3)τ_1=h̃τ_2h^'τ_3=h̃h^' kτ_2τ_3 for some k∈Γ whose rotation numbers can be determined as follows: since the rotation numbers of h^' at the second fixed point on E_1 are (a_3,-2a_3), by the analog of Lemma <ref> the rotation numbers of k at the second fixed point on E_1 are (-2a_3,a_3). It follows that the rotation numbers of k at the first fixed point are (-a_3,-a_3). With this understood, note that the rotation numbers of h̃h^' k at the first fixed point on E_1 are (a_2+a_3, a_2-2a_3). On the other hand, τ_1(τ_2τ_3)τ_1=τ_1(hg^2)τ_1=h^-1τ_1g^2τ_1=h^-1 g^2=h^-2τ_2τ_3, which implies that both a_2+a_3 and a_2-2a_3 are even. It follows that a_3 is even, and hence there is an involution τ_3 sent to s_3∈ K with the property that τ_1τ_3τ_1=τ_3.

We set τ_2:=gτ_3g^-1. Then τ_2 is an involution sent to s_2∈ K which naturally satisfies τ_1τ_2τ_1=τ_2. We will show that τ_2, τ_3 satisfy the relation τ_2τ_3τ_2=τ_3τ_2τ_3. Note that with this relation, the subgroup generated by τ_2, τ_3 is isomorphic to S_3. Together with the involution τ_1, we obtain a lifting of K=ℤ_2× S_3 to G, proving that G is a semi-direct product of Γ and K.

As we have seen earlier, τ_2τ_3=hg^2 for some h∈Γ, where h^2=1 from (<ref>) because both τ_2, τ_3 commute with τ_1. The rotation numbers of h at the first fixed point on E_1 must be one of the following: (i) (0,1), (ii) (1,0), (iii) (1,1). We claim that in case (i), we have τ_2τ_3τ_2=τ_3τ_2τ_3. To see this, τ_2τ_3τ_2=hg^2τ_2=hτ_3 g^2=τ_3hg^2=τ_3τ_2τ_3, where we use the fact that hτ_3=τ_3h, because the rotation numbers of h are (0,1).

It remains to rule out cases (ii) and (iii). To this end, we compute gτ_2τ_3g^-1=g^2τ_3g^-2τ_2=(hτ_2τ_3)τ_3(hτ_2τ_3)^-1τ_2 =hτ_2τ_3τ_2 hτ_2=hhkτ_2τ_3τ_2τ_2=kτ_2τ_3, where the rotation numbers of k∈Γ can be determined as follows. Note that τ_2τ_3τ_2 is sent to the reflection s_2s_3s_2∈ K, so the rotation numbers of k at the first fixed point on H-E_1-E_3 can be determined by an analog of Lemma <ref>. In case (ii), the rotation numbers of h at the first fixed point on H-E_1-E_3 are (0,1), so by an analog of Lemma <ref>, the rotation numbers of k at the first fixed point on H-E_1-E_3 are (0,0), i.e., k is trivial in this case. In case (iii), the rotation numbers of h at the first fixed point on H-E_1-E_3 are (1,0), so by an analog of Lemma <ref>, the rotation numbers of k at the first fixed point on H-E_1-E_3 are (0,1). It follows that the rotation numbers of k at the first fixed point on E_1 are (1,0) in this case. On the other hand, g(hg^2)g^-1=ghg^-1g^2=h^' g^2 for some h^'∈Γ, where the rotation numbers of h^' at the first fixed point on E_1 are (0,1) in case (ii) and (1,0) in case (iii). Since h^' h≠ k in both cases, we reach a contradiction. This rules out cases (ii) and (iii), and the proposition is proved.

Finally, we show that G contains an index 2 subgroup which is isomorphic to an imprimitive subgroup of PGL(3). To this end, we fix a lifting K^' of K to G, and let g∈ K^' be an element of order 6 and τ∈ K^' the involution sent to s_3∈ K. We denote by ⟨Γ,g^2,τ⟩ the subgroup generated by the elements of Γ, g^2 and τ.

Suppose K=D_12. Then ⟨Γ,g^2,τ⟩ is isomorphic to the imprimitive finite subgroup G̃_n of PGL(3) when k=1, and is isomorphic to G̃_n,3,2 when k>1, where n=|Γ_1^'|=|Γ̃_1| and n/k=|Γ_1|.
Let h_1∈Γ_1 be the generator whose rotation numbers at the six vertices of the hexagon are (0,1), (1,0), (1,-1), (0,-1), (-1,0), (-1,1), and let h̃_1∈Γ̃_1 be the generator whose rotation numbers at the six vertices are (1,b), (1+b,-1), (b,-1-b), (-1,-b), (-1-b,1), (-b,1+b). Then the rotation numbers of τh̃_1τ are (-1,1+b), (b,1), (1+b,-b), (1,-1-b), (-b,-1), (-1-b,b). Writing τh̃_1τ =h̃_1^l h_1^u for some l, u, we get (-1, 1+b)=(l,lb)+(0,ku), which implies that l=-1 and 2b+1≡ ku (mod n).

(i) Assume Γ̃_1 and Γ_1 have the same order n, i.e., k=1. Recall from the proof of Proposition <ref> that in this case b=0, so that u=1 and τh̃_1τ=h̃_1^-1 h_1. Renaming t_1=h_1, t_2=h̃_1^-1 as in the proof of Proposition <ref>, we get g^2t_1g^-2=t_2, g^2t_2 g^-2=t_2^-1 t_1^-1, τ t_1τ =t_1, τ t_2τ =t_2^-1 t_1^-1. With this presentation, the subgroup ⟨Γ,g^2,τ⟩ can be identified with G̃_n by g^2=[z_2,z_0,z_1], τ=[z_0,z_2,z_1], t_1=[μ_n z_0,z_1,z_2], t_2=[z_0,μ_n z_1,z_2].

(ii) Assume Γ̃_1 and Γ_1 have different orders, i.e., |Γ̃_1|=n, |Γ_1|=n/k, with k>1. In this case, we first note that 2b+1≡0 (mod k). On the other hand, recall from the proof of Proposition <ref> that b^2+b+1≡0 (mod k). It follows that b≡1 (mod k) and k=3. With this understood, note that one can modify h̃_1 by a suitable power of h_1 to arrange that b=-2. With this choice, we then have s=-b=2 and v=1, where s, v appear in the relations (see the proof of Proposition <ref>) g^2t_1g^-2=t_2^k t_1^-s, g^2t_2 g^-2=t_2^s-1 t_1^-v. Moreover, b=-2 implies u=-1, hence the presentation of ⟨Γ,g^2,τ⟩: g^2t_1g^-2=t_2^3 t_1^-2, g^2t_2 g^-2=t_2 t_1^-1, τ t_1τ =t_1, τ t_2τ=t_2^-1 t_1. With this presentation, the subgroup ⟨Γ,g^2,τ⟩ can be identified with G̃_n,3,2 by identifying g^2=[z_2,z_0,z_1], τ=[z_0,z_2,z_1], t_1=[μ_n/3 z_0,z_1,z_2], t_2=[μ_n^2 z_0,μ_n z_1,z_2].

It is clear that G is a semi-direct product of the imprimitive subgroup of PGL(3) and ℤ_2. The proof of Theorem <ref> is completed.

The rest of this section is occupied by the proof of Theorem <ref>, where we assume that (X,ω) admits a minimal symplectic G-conic bundle structure π:X→^2. Furthermore, we assume N≥ 4. Recall that Q is the subgroup of G which leaves each fiber of π invariant, and G_0 is the subgroup of Q which acts trivially on H^2(X).

§.§ Proof of Theorem <ref>

We begin with some useful observations about the rotation numbers of an element of Q at a fixed point.

Rotation numbers of fixed points. Suppose q∈ X is fixed by a nontrivial element g∈ Q and q is not the singular point of a singular fiber. Then g must fix the symplectic orthogonal direction of the fiber at q, because g induces a trivial action on the base. It follows that the rotation numbers of g at q are (a,0) for some a≠ 0. Furthermore, if q lies in a singular fiber, then g must leave the (-1)-sphere containing q invariant, and the rotation numbers of g at the other fixed point, which is the singular point of the singular fiber, must be (a,-a). (See Section <ref>.)

Structure of fixed point sets. Note that the fixed-point set of a nontrivial element g∈ Q consists of embedded J-holomorphic curves and isolated points. Since each regular fiber contains two fixed points of g, it follows that the fixed-point set of g consists of a bisection and isolated points, the latter being the singular points of those singular fibers of which g leaves each component invariant.
Clearly, one of the following must be true: * g leaves the components of a singular fiber invariant. Then there is a fixed point q as described above, and the singularity p on the fiber has rotation numbers (a,-a), hence is an isolated fixed point (and there is one more fixed point on either component of the fiber). * g switches the two components of the singular fiber. Then: * the singularity p on the fiber is contained in the fixed bisection, * p is a branched point of the double branched covering from the bisection to the base, * g must be an involution, because if g^2≠ 1, then p must be an isolated fixed point of g^2 by (i), a contradiction. With the preceding understood, note that the subgroup G_0 can be identified with the subgroup which leaves each (-1)-sphere in a singular fiber invariant. Since there are N-1≥ 3 singular fibers, the induced action of G_0 on the base S^2 has at least 3 fixed points. It follows that the action of G_0 on the base must be trivial, and G_0 is a subgroup of Q. It is clear that either G_0 is trivial, or it is finite cyclic. Furthermore, the fixed-point set of each nontrivial element of G_0 consists of N-1 isolated points with rotation numbers (a,-a) for some a≠ 0 (these are the singular points of the singular fibers) and two disjoint fixed sections arising from the two fixed points, other than the singular point, on each fiber. The two fixed sections must have the same self-intersection number because there exists a g∈ G which switches the two sections. Applying Lemma <ref> to the fixed sections, we see that each has self-intersection -(N-1)/2; in particular, N must be odd if G_0 is nontrivial. Finally, we note that each element of Q∖ G_0 must be an involution (the square fixes 4 points in a general fiber: the intersections with the bisection, and the intersections with the two disjoint sections as above). The group Q contains an involution; in particular, it has an even order. Let F be the singular fiber which contains the exceptional sphere representing E_N. The minimality assumption implies that there is a g∈ G which switches the two (-1)-spheres in F. Clearly, g has an even order, say 2m. If g∈ Q, then as we have shown earlier, g must be an involution, and we are done. Suppose m>1. We let h=g^m, which is an involution. We claim that h∈ Q. Suppose h is not contained in Q. Then h induces a rotation on the base, so that the fixed-point set of h must be contained in the two fibers which h leaves invariant. With this understood, we claim that h must fix one of the (-1)-spheres in F. Denote by Σ the two-dimensional component of the fixed-point set of h, which could be empty. If h does not fix either of the (-1)-spheres in F, we must have E_N·Σ=0. On the other hand, by Proposition 5.1 in <cit.>, we have (h· E_N)· E_N= E_N·Σ mod 2. This is a contradiction because h· E_N=E_N or H-E_1-E_N, so that (h· E_N)· E_N=± 1. Hence h fixes one of the (-1)-spheres in F. Now since g commutes with h and g switches the two (-1)-spheres in F, h must also fix the other (-1)-sphere. But this clearly contradicts the fact that h is nontrivial. Hence h∈ Q and the lemma is proved. If G_0 is nontrivial, say, of finite order m>1, and Q≠ G_0, then Q is the dihedral group D_2m. Moreover, any involution τ∈ Q∖ G_0 switches the two fixed sections of G_0, hence the two (-1)-spheres in each singular fiber. For any g∈ Q, since G_0 is normal in Q, g leaves the pair of fixed sections of G_0 invariant. Note that if g leaves each of the sections invariant, then it must fix both of them because the induced action of g on the base is trivial.
Consequently, Q/G_0 is either trivial or ℤ_2, depending on whether there is a g∈ Q switching the two fixed sections of G_0. It follows easily that every element in Q∖ G_0 switches the two (-1)-spheres in each singular fiber. Finally, if Q≠ G_0, then Q must be the corresponding dihedral group because each element in Q∖ G_0 is an involution (g^2 preserves all the exceptional spheres, hence lies in G_0). This finishes the proof of the proposition. Next we consider the case where G_0 is trivial. Let Σ be the set of singular fibers. Suppose G_0 is trivial. Then Q=ℤ_2 or (ℤ_2)^2. In the latter case, let τ_1,τ_2,τ_3 be the distinct involutions in Q. Then Σ is partitioned into subsets Σ_1, Σ_2, Σ_3, where Σ_i parametrizes the set of singular fibers of which τ_i leaves each (-1)-sphere invariant, and #Σ_i≡ N-1 mod 2, for i=1,2,3. First of all, Q consists of involutions as G_0 is trivial. Suppose Q≠ℤ_2, and let τ, τ^'∈ Q be two distinct nontrivial elements. We claim that there is no singular fiber such that both τ,τ^' leave both of the (-1)-spheres in this fiber invariant. This is because if there were such a singular fiber, then by examining the action of ττ^' at the singular point of the fiber, we would see that ττ^' must be trivial (the rotation numbers at this singular point are (0,0)). But this contradicts the assumption that τ≠τ^'. With the preceding understood, let τ_1,⋯,τ_n, n>1, be the distinct involutions in Q, and let Σ_i be the set of singular fibers of which τ_i leaves each (-1)-sphere invariant. Then the previous paragraph shows that Σ_i∩Σ_j=∅ for i≠ j. On the other hand, if τ_k=τ_iτ_j, then Σ∖ (Σ_i∪Σ_j)⊂Σ_k, implying Σ∖ (Σ_i∪Σ_j)=Σ_k. It follows easily that n=3 and Q=(ℤ_2)^2. It remains to see that, for each i, #Σ_i≡ N-1 mod 2. Consider the fixed-point set S_i of τ_i. Then S_i is a bisection and the projection of S_i onto the base is a double branched covering which ramifies exactly at the singular points of those singular fibers not parametrized by the set Σ_i. Since the number of ramification points must be even, we have #Σ_i≡ N-1 mod 2 as claimed. The proof of Theorem <ref> is completed. § MINIMALITY AND EQUIVARIANT SYMPLECTIC CONES Now let X=ℂP^2# N\overline{ℂP^2}, N≥ 2, which is equipped with a smooth action of a finite group G. Suppose there is a G-invariant symplectic form ω_0 on X such that the corresponding symplectic rational G-surface (X,ω_0) is minimal. With this understood, we denote by Ω(X,G) the set of G-invariant symplectic forms on X. The following is a crucial observation. For any ω∈Ω(X,G), the canonical class K_ω=K_ω_0 or -K_ω_0. The lemma is obvious if H^2(X)^G has rank 1, because [ω_0], [ω] and K_ω are proportional. Assume that H^2(X)^G has rank 2. Recall from the proof of Theorem <ref> that there is a reduced basis H,E_1,⋯, E_N of (X,ω_0) such that K_ω_0= -3H + E_1 +E_2 + ⋯ +E_N. Now after changing ω by sign and with a further scaling if necessary, we can write [ω]=-K_ω_0+bF= (3+b)H - (1+b) E_1 -E_2-⋯ -E_N for some b∈ℝ. We claim that [ω] is always a reduced class in the sense of <cit.>. To see this, note that the condition [ω]^2>0 implies 4b>N-9, which implies b>-1 since N≥ 5. This gives 3+b>1+b>0 and 3+b=(1+b)+1+1. Now the conclusion follows from <cit.>, which asserts that any symplectic class with reduced form has canonical class K_ω_0. Recall that we have shown that a minimal symplectic rational G-surface where X=ℂP^2# N\overline{ℂP^2} admits a minimal symplectic G-conic bundle only when N≥ 5 and N≠ 6. The following lemma deals with precisely these cases and gives the converse of this fact. Let (X,ω) be a symplectic rational G-surface where X=ℂP^2# N\overline{ℂP^2} with N≥ 5 and N≠ 6.
Suppose (X,ω) admits a minimal symplectic G-conic bundle structure. Then for any ω^'∈Ω(X,G) such that K_ω^'=K_ω, the symplectic rational G-surface (X,ω^') is minimal. By Lemma <ref>, H^2(X)^G is of rank 2, spanned by K_ω and the fiber class F of the symplectic G-conic bundle. Now suppose to the contrary that (X,ω^') is not minimal. Then there is a G-invariant, disjoint union of ω^'-symplectic (-1)-spheres C_1,C_2,⋯, C_m. Let C=C_1+⋯ +C_m. Then since K_ω^'=K_ω, we have -m=C^2=K_ω· C. On the other hand, C∈ H^2(X)^G, so that C=aK_ω+bF for some a,b∈ℤ. The key ingredient for deriving a contradiction is the fact that F· C≥ 0, which as a corollary implies that a<0, and hence that F· C>0. To see this, note that F is represented by an embedded J-holomorphic sphere S, where J is ω-compatible. On the other hand, since K_ω^'=K_ω, ω and ω^' have the same set of symplectic (-1)-classes. Hence, since the class of each C_i is represented by an ω^'-symplectic (-1)-sphere, it is also represented by an ω-symplectic (-1)-sphere. Consequently, the class of C_i can be represented by ∪_j m_j D_j, where each D_j is a J-holomorphic curve and m_j>0. Now since S is irreducible and S^2=0, we have S· D_j≥ 0 for each j, which implies that F· C_i≥ 0 for each i. Hence our claim that F· C≥ 0. With this understood, we pair C=aK_ω+bF with K_ω and then square both sides, obtaining aK_ω^2-2b=-m, -m=a^2K_ω^2-4ab, which gives m=-a^2K_ω^2/(2a-1). Notice that m>0, hence K_ω^2>0, which excludes all cases with N≥9. Moreover, a^2 and 2a-1 are co-prime for all a≤ -1, and therefore (2a-1) | K_ω^2. But this is not possible because of the assumption N≥ 5 and N≠ 6: for N=5,7,8 one has K_ω^2=9-N=4,2,1, while |2a-1|≥ 3 is odd. Proof of Theorem <ref>. The claim regarding the canonical class is proved in Lemma <ref>. The minimality claims are trivial if H^2(X)^G is of rank 1. For the case rank H^2(X)^G>1, they follow from Lemma <ref>, Lemma <ref> for N≥5 and N≠ 6, and from Lemma <ref> for N=6. Theorem <ref> implies these are the only cases to consider. Combining these results with Theorem 3.8 of <cit.>, we cover the case (2) of minimal complex rational G-surfaces. In Theorem <ref>, the case of a complex rational G-surface admits an alternative proof, which we sketch here. Let X be a minimal complex rational G-surface which is ℂP^2 blown up at 6 points, and assume X is a conic bundle. Then by Proposition 5.2 of <cit.>, X must be Del Pezzo, and hence has a G-invariant monotone Kähler form ω. Part (1) of Lemma <ref> below implies there are two distinct fiber classes in H^2(X)^G, but N=6 contradicts part (2) of the lemma. We now turn to the proof of Theorem <ref>. Fixing the canonical class K_ω_0, we recall that F∈ H^2(X)^G is called a fiber class if it is the class of the fibers of a symplectic G-conic bundle on X for some G-invariant symplectic form ω with K_ω=K_ω_0. Suppose H^2(X)^G has rank 2. * If X admits a G-invariant monotone symplectic form, then there are at least two distinct fiber classes in H^2(X)^G. * Suppose F,F^'∈ H^2(X)^G are distinct fiber classes. Then F+F'=-aK_ω_0 for some integer a>0, and N=5,7 or 8, where a=1, 2, 4 respectively. In particular, there are at most two distinct fiber classes in H^2(X)^G. To prove (1), let ω be a G-invariant monotone form on X, and let F be the fiber class of the G-conic bundle structure obtained from Theorem 1.3. Pick a G-invariant closed 2-form η representing F.
Then for sufficiently small ϵ≠ 0, the G-invariant symplectic form ω^':=ω+ϵη is non-monotone, and the symplectic G-manifold (X,ω^') is minimal (as shown in the proof of Theorem <ref>). By Theorem <ref>, (X,ω^') admits a symplectic G-conic bundle with small fiber. An easy check with the symplectic areas shows that for ϵ>0, the small fiber class (whose area is twice that of the minimal exceptional spheres) of the symplectic G-conic bundle equals F, but for ϵ<0 it is not F. To prove (2), let ω,ω^' be the G-invariant symplectic forms associated with the symplectic G-conic bundles whose fiber classes are F,F^' respectively. For simplicity, we set K=K_ω_0. Write F^'=-aK +b F for some a,b∈ℤ. Note that K· F=K· F^'=-2, F^2=(F^')^2=0, and the assumption that F≠ F^' implies a≠0. Then by pairing (<ref>) with K, F and F', respectively, one has -aK^2=-4 and b=-1. Therefore, a and K^2 are both divisors of 4, and F+F'=-aK. We claim 2a=F· F'≥ 0, which implies a>0 and N=5,7,8. The point is that F can be represented by an embedded J-holomorphic sphere V with V^2=0, where J is ω-compatible. Since K_ω=K_ω^', this fact implies that the Gromov-Witten invariant of F' is nontrivial, hence F^' can be represented by a stable J-holomorphic curve, from which our claim F· F'≥ 0 follows easily. Suppose G_0 is nontrivial. Then there is a unique fiber class. We note first that the two-dimensional fixed-point set of G_0 consists of two embedded 2-spheres S_1,S_2, each with self-intersection -(N-1)/2 (in particular, N must be odd). Suppose to the contrary that there are two distinct fiber classes F,F^'. Then S_1 (and S_2) is a J-holomorphic section of the corresponding symplectic G-conic bundles with fiber classes F,F^', for an appropriate ω- or ω^'-compatible G-invariant J. This implies that S_1· F=S_1· F^'=1. On the other hand, by Lemma <ref>, F+F^'=-aK_ω_0 for some a>0, implying that K_ω_0· S_1<0. This violates the adjunction formula because S_1^2= -(N-1)/2≤ -2 and K_ω=K_ω^'=K_ω_0. Proof of Theorem <ref>. Part (1) and part (2) follow immediately from Lemma <ref> and Lemma 4.4, and the proof of Theorem <ref>. In the case (2), when there is a unique fiber class F for the G-conic bundle, any G-invariant symplectic form has the form (3H-E_1-⋯-E_N)+bF. From Theorem <ref>, ω(E_1)≥ω(E_k), hence b≥0. This also shows C(X,G,F)∪ C(X,G,F')=C(X,G). To see that C(X,G,F)∩ C(X,G,F^') is either empty or consists of classes of G-invariant monotone symplectic forms, let [ω]∈ C(X,G,F) and [ω^']∈ C(X,G,F^') be such that [ω]=[ω^']. Then [ω]=-a K_ω_0+bF, [ω^']=-a^' K_ω_0+b^' F^', where a,a^'>0 and b,b^'≥ 0. If both b,b^' are non-zero, then [ω]=[ω^'], together with the fact that F+F^' is a multiple of K_ω_0 (cf. Lemma <ref>), would imply that F,F^' are linearly dependent, a contradiction. To see (3), first we note that if [ω]∈Ĉ(X,G,F), then for any δ>δ_ω,F there is an ω^'∈Ω(X,G) such that [ω^']∈Ĉ(X,G,F) with δ_ω^',F=δ. We can simply take ω^':= ω+ (δ-δ_ω,F)π^∗η, where π is a symplectic G-conic bundle on (X,ω) with fiber class F, and η is an area form on the base of π with total area 1. Secondly, if δ_X,G,F>0, then it cannot be attained. Suppose to the contrary that there is an ω such that δ_ω,F=δ_X,G,F. Then, taking 0<ϵ<δ_X,G,F sufficiently small, the G-invariant form ω^':=ω-ϵη, where η is a G-invariant closed 2-form representing F, is a symplectic form. The condition ϵ<δ_X,G,F implies that [ω^']∈Ĉ(X,G,F), which contradicts the definition of δ_X,G,F. We end this section with a uniqueness result on the subgroup Q from Definition 1.7. Suppose (X,G) has a unique fiber class F.
Then the normal subgroup Q of G is uniquely determined, i.e., it is independent of ω∈Ω(X,G) and of the symplectic G-conic bundle structure on (X,ω) involved in the definition of Q. Let ω,ω^'∈Ω(X,G), let π, π^' be symplectic G-conic bundles with fiber class F, and let Q,Q^' be the subgroups of G defined using π,π^' respectively. Let H,E_1,⋯,E_N and H^',E_1^',⋯,E_N^' be reduced bases associated to π,π^' respectively. Let Q_(H,E_i)={g∈ G | g· E_j=E_j or H-E_1-E_j}. We claim Q=Q_(H,E_i). First, it is clear that Q⊂ Q_(H,E_i). Secondly, if g∈ Q_(H,E_i), then g leaves each singular fiber invariant. Since the number of singular fibers is N-1, which is at least 3, the induced action of g on the base S^2 has at least 3 fixed points. This implies that the action of g on the base must be trivial, and g∈ Q. Hence Q=Q_(H,E_i). Similarly, Q^'=Q_(H^',E_i^'). If we normalize so that ω(F)=ω^'(F)=2, then for each j=2,⋯,N, ω(E_j^')=ω^'(E_j)=1 also. In particular, H^',E_1^',⋯,E_N^' is a reduced basis for (X,ω). By Lemma <ref> and the fact that ω is not monotone, we have, for each j>1, E_j^'=E_k or H-E_1-E_k for some k>1. It follows that H^'-E_1^'-E_j^'=H-E_1-E_k or E_k for the same k. From these relations we see immediately that Q_(H,E_i)=Q_(H^',E_i^'). Hence Q=Q^'. BP O. Buse and M. Pinsonnault, Packing numbers of rational ruled four-manifolds, Journal of Symplectic Geometry 11 (2013), no. 2, 269-316. C1 W. Chen, Orbifold adjunction formula and symplectic cobordisms between lens spaces, Geometry and Topology 8 (2004), 701-734. C2 W. Chen, Smooth s-cobordisms of elliptic 3-manifolds, Journal of Differential Geometry 73 (2006), no. 3, 413-490. C3 W. Chen, Group actions on 4-manifolds: some recent results and open questions, Proceedings of the Gökova Geometry-Topology Conference 2009, S. Akbulut et al. eds., pp. 1-21, International Press, 2010. C4 W. Chen, G-minimality and invariant negative spheres in G-Hirzebruch surfaces, Journal of Topology 8 (2015), 621-650. DI I.V. Dolgachev and V.A. Iskovskikh, Finite subgroups of the plane Cremona group, in Algebra, Arithmetic, and Geometry: in honor of Yu. I. Manin, Vol. I, 443-548, Progr. Math. 269, Birkhäuser Boston, Inc., Boston, MA, 2009. Ed A.L. Edmonds, Aspects of group actions on four-manifolds, Topology and its Applications 31 (1989), 109-124. FH K. Frantzen and A. Huckleberry, Finite symmetry groups in complex geometry, Revue de l'Institut Élie Cartan Nancy 19 (2009), 73-113. KK Y. Karshon and L. Kessler, Distinguishing symplectic blowups of the complex projective plane, arXiv:1407.5312v1 [math.SG], 20 Jul 2014, to appear in the Journal of Symplectic Geometry. LL B.-H. Li and T.-J. Li, Symplectic genus, minimal genus and diffeomorphisms, Asian J. Math. 6 (2002), no. 1, 123-144. LW12 T.-J. Li and W. Wu, Lagrangian spheres, symplectic surfaces and the symplectic mapping class group, Geom. Topol. 16 (2012), no. 2, 1121-1169. MS09 D. McDuff and F. Schlenk, The embedding capacity of 4-dimensional symplectic ellipsoids, Ann. of Math. (2) 175 (2012), no. 3, 1191-1282.
We use the Markov chain approximation method to construct approximations for the solution of the mean field game (MFG) with reflecting barriers studied in <cit.>. The MFG is formulated in terms of a controlled reflected diffusion with a cost function that depends on the reflection terms in addition to the standard variables: state, control, and the mean field term. This MFG arises from the asymptotic analysis of an N-player game for single server queues with strategic servers. By showing that our scheme is an almost contraction, we establish the convergence of this numerical scheme over a small time interval. AMS Classification: 65M12, 60K25, 91A13, 60K35, 93E20, 60F17. Keywords: Numerical scheme, mean field games, Nash equilibrium, rate control, reflected diffusions, heavy traffic limits, queuing systems, Markov chain approximation method. § INTRODUCTION The theory of mean field games (MFGs) was initiated a decade ago in the seminal work of Lasry and Lions <cit.>, and Huang, Malhamé, and Caines <cit.>. For theoretical studies and applications of this theory see <cit.> and the references therein. MFGs are control problems that approximate many player games in which the interaction between the players is weak and is given in terms of the empirical distribution of the players' states. In these control problems the empirical distribution that governs the interaction is replaced by a deterministic flow of measures. A solution of the MFG is a probability measure on the path space of the single player state that is the distribution of the state process under the optimal control for the control problem associated with the flow of measures given by the (time-)marginal distributions of this probability measure. A standard (probabilistic) method to prove the existence of a MFG solution is to solve a fixed point problem on the space of probability measures on certain path spaces. A probability measure on the path space is fixed and a stochastic control problem is formulated in terms of the flow of time marginals of this probability measure. Then a `best reply' to the probability measure is found by solving this control problem. The distribution of the state process under the best reply is another measure on the path space. The solution of the MFG is the fixed point of the map that takes a probability measure to its implied `best reply' distribution. There are other ways to describe a MFG solution; for example, the seminal papers of Lasry and Lions <cit.> represent a MFG solution through two coupled nonlinear partial differential equations, one of Hamilton-Jacobi-Bellman (HJB) type and a second that takes the form of a Kolmogorov forward equation, and recent works of Carmona, Delarue, and Lacker <cit.>, using probabilistic methods, characterize the MFG solution as a solution to certain forward-backward stochastic differential equations. In general, closed form solutions for MFGs are not available and thus one needs numerical approximations.
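In symbols, the fixed point description above can be sketched as follows (this is our schematic restatement, in notation close to that of Section <ref> below): given a measure ν on the path space, with time marginals {ν(t)}, let Φ(ν) denote the law of the state process under an optimal control for the stochastic control problem driven by {ν(t)}; then ν solves the MFG precisely when Φ(ν)=ν. The numerical scheme studied in this paper is a discretization of the map Φ together with an iteration of the discretized map.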
In our work we study one such procedure that uses the Markov chain approximation method (<cit.>) and establish the convergence of the scheme over a small time interval. In recent years there have been several works on numerical schemes for MFGs, most of which are based on the PDE system of <cit.>. Achdou and Capuzzo-Dolcetta <cit.> were the first to suggest a finite difference method for approximating the PDE system, relying on monotone approximations of the Hamiltonians and a weak formulation of the forward equation. Together with Camilli, the same authors proved in <cit.> the convergence of the scheme. In <cit.>, Achdou and Porretta showed that the solutions of a certain discrete system converge to a weak solution of the PDE system. In <cit.>, Lachapelle, Salomon, and Turinici provided an iterative scheme using a discrete Markov decision problem. Taking advantage of the structure of the problem (in particular, the problem is linear-quadratic in the control), they used the monotonic algorithm method introduced in <cit.> and iteratively constructed a value function, control, and a measure by using finite differences based on the forward-backward system. Guéant studied numerical schemes for the case where the Hamiltonians are quadratic, see <cit.>. Semi-Lagrangian schemes were studied by Carlini and Silva in <cit.>. In a recent paper, Chassagneux, Crisan, and Delarue <cit.> used the master equation and, by making smoothness assumptions on this infinite dimensional PDE, proposed an algorithm based on Picard iterations and the continuation method. The master equation is a parabolic partial differential equation with a terminal condition; its variables are time, state, and measure, and its solution approximates the value function of the MFG, see e.g., <cit.>. Our method, in contrast to the above methods, is purely probabilistic. We do not make smoothness assumptions as in <cit.>. We use an iterative Markov chain approximation method (see <cit.>) to construct numerical solutions of the MFG. Specifically, we discretize time and space, and for a fixed measure on the path space we define a Markov decision problem (MDP) that is suggested by the MFG. In the first step of the iteration, the law of the solution of the MDP is computed. Then we take this law as the starting point to formulate the MDP for the second iteration, and repeat the process. Unfortunately, it is not clear that the map defined by such iterations is in general a contraction. We instead show that the map is an almost contraction over a small time interval whose length is independent of the discretization parameter. By an almost contraction we roughly mean that the map is a contraction up to an additional term that vanishes as the discretization parameter approaches 0. The proof of this almost contraction property relies on the construction of a coupling between certain controlled reflected Markov chains (see the proof of Proposition <ref>), which we believe is of independent interest. Using the above almost contraction property, tightness of relevant processes, and weak convergence arguments, we show the convergence of the laws obtained from the iteration scheme to the solution of the MFG over a small time interval. Proving the convergence of a Markov chain based approximation method of the form considered in this work over an arbitrary time interval is for now a challenging open problem. The paper is organized as follows. In Section <ref> we present the MFG and summarize the results from <cit.>.
In Section <ref> we provide the numerical scheme and present our main convergence result (Theorem <ref>). Section <ref> provides the proofs of some auxiliary results from Section <ref>. Finally, Section <ref> gives a numerical example. §.§ Preliminaries We use the following notation. For every t∈(0,∞) and f:[0,∞)→ℝ^d, let ‖f‖_t≐sup_[0,t]|f|. In case d=1, we often use |f|_t. For any two metric spaces 𝒮_1, 𝒮_2, denote by C(𝒮_1:𝒮_2) the space of continuous functions mapping 𝒮_1 to 𝒮_2. When 𝒮_2=ℝ, we use the notation C(𝒮_1). For a Polish space 𝒮, the space C([0,T]:𝒮) will be equipped with the uniform topology. We will denote by D([0,T]:𝒮) the space of functions mapping [0,T] to 𝒮 that are right continuous and have left limits (RCLL), defined on [0,T]. This space is equipped with the usual Skorohod topology. Denote by 𝒫(𝒮) the space of probability measures on 𝒮. We endow 𝒫(𝒮) with the topology of weak convergence of measures. Convergence in distribution of 𝒮-valued random variables X_n to X will be denoted as X_n ⇒ X. For T, L ∈ (0,∞), the space 𝒫(C([0,T]:[0,L])) will be denoted as 𝒫_T,L. The Wasserstein distance of order 1 on 𝒫(𝒮), where 𝒮 is a compact metric space, is defined as W_1(η',η)=inf{∫_𝒮×𝒮 d(x,y)dπ(x,y) : π∈𝒫(𝒮×𝒮) with marginals η' and η}, where η,η'∈𝒫(𝒮). For ϕ∈ C^1,2([0,T]× [0,L]), D_tϕ, Dϕ, D^2ϕ will denote the time derivative and the first two space derivatives of ϕ, respectively. For x ∈𝒮, δ_x ∈𝒫(𝒮) denotes the Dirac measure at x. Throughout the paper we will make extensive use of the Skorohod map, which for the particular setting of interest here is recalled below. Fix T,L>0. Given ψ∈ C([0,T]:ℝ) such that ψ(0)∈ [0,L], we say the triplet of functions (φ,ζ_1,ζ_2)∈ C([0,T]:ℝ^3) solves the Skorohod problem for ψ if the following properties are satisfied: (i) For every t∈[0,T], φ(t)=ψ(t)+ζ_1(t)-ζ_2(t)∈ [0,L]. (ii) ζ_i are nonnegative and nondecreasing, ζ_1(0)=ζ_2(0)=0, and ∫_[0,T]1_(0,L](φ(s))dζ_1(s)=∫_[0,T]1_[0,L)(φ(s))dζ_2(s)=0. We denote Γ(ψ)=(Γ_1,Γ_2,Γ_3)(ψ)≐ (φ,ζ_1,ζ_2) and refer to Γ as the Skorohod map. It is known that there is a unique solution to the Skorohod problem for every ψ∈ C([0,T]:ℝ), and so the Skorohod map in Definition <ref> is well defined. The Skorohod map has the following Lipschitz property (see <cit.>). There exists c_S ∈ (0,∞) such that for all ψ,ψ̃∈ C([0, T]:ℝ) with ψ(0), ψ̃(0) ∈ [0,L], ∑_i=1^3‖Γ_i(ψ) - Γ_i(ψ̃)‖_T≤ c_S‖ψ-ψ̃‖_T. § THE MFG AND RELATED RESULTS We now provide a precise description of the MFG that was studied in <cit.> and state some relevant results from there. §.§ Description of the MFG Fix L,T>0. Here T denotes the terminal time of our finite time horizon and [0,L] will be the state space of the controlled process X. Also, let U be a compact subset of ℝ representing the control space. Let (Ω,ℱ,{ℱ_t},ℙ) be a filtered probability space that supports a one dimensional standard ℱ_t-Brownian motion B. We will refer to the collection (Ω,ℱ,{ℱ_t},ℙ, B) as a system and denote it by Ξ. Given (x,t,ν) ∈ [0,L]× [0,T]×𝒫_T,L, we denote by 𝒜(Ξ, t,x,ν) the collection of all pairs (α, Z) where α = {α(s)}_0≤ s ≤ T-t is a U-valued ℱ_s-progressively measurable process and Z = {Z(s)}_0≤ s ≤ T-t is a [0,L]×ℝ_+×ℝ_+ valued ℱ_s-adapted continuous process such that Z=(X,Y,R) and Z(s) = (X,Y,R)(s)=Γ(x+∫_0^·b̅(u)du+σ B(·))(s), s∈[0,T-t], where b̅(u)≐ b(t+u,ν(t+u),X(u), α(u)), u ∈ [0, T-t], b:[0,T]×𝒫([0,L])×[0,L]× U→ℝ, ν(s) is the marginal of ν at time instant s, and σ is a (strictly) positive constant.
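For concreteness, the following is a minimal numerical sketch of the map Γ on a uniform time grid, using the standard one-step projection recursion; the function name and interface are our own illustration (not from <cit.>), and for continuous inputs this discrete recursion converges to Γ as the grid is refined.

```python
import numpy as np

def skorohod_map(psi, L):
    """Two-sided Skorohod map Gamma on [0, L], discretized.

    psi: sampled path (1-D array) with psi[0] in [0, L]. Returns
    (phi, zeta1, zeta2) with phi = psi + zeta1 - zeta2 in [0, L],
    zeta1/zeta2 nondecreasing, increasing only when phi is at 0 / at L.
    """
    phi = np.empty_like(psi, dtype=float)
    zeta1 = np.zeros_like(phi)
    zeta2 = np.zeros_like(phi)
    phi[0] = psi[0]
    for n in range(1, len(psi)):
        # free motion by the increment of psi, then a minimal push-back
        x = phi[n - 1] + psi[n] - psi[n - 1]
        zeta1[n] = zeta1[n - 1] + max(0.0, -x)     # push up at the barrier 0
        zeta2[n] = zeta2[n - 1] + max(0.0, x - L)  # push down at the barrier L
        phi[n] = min(max(x, 0.0), L)
    return phi, zeta1, zeta2
```

Telescoping the recursion shows that phi = psi + zeta1 - zeta2 holds exactly at the grid points, and each push term is active only when the constrained path sits at the corresponding barrier, mirroring properties (i)-(ii) of Definition <ref>.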
Given ν∈𝒫_T,L, t ∈ [0,T], x ∈ [0,L], and a system Ξ as above, let (α,Z) ∈𝒜(Ξ, t,x,ν). The cost function is given by J_ν(t,x,α,Z) ≐𝔼[∫_0^T-t f(s+t,ν(s+t),X(s),α(s))ds + g(ν(T),X(T-t))+∫_0^T-ty(s+t,ν(s+t))dY(s)+∫_0^T-tr(s+t,ν(s+t))dR(s)], and the value function is V_ν(t,x)=inf_Ξinf_(α,Z)∈𝒜(Ξ,t,x,ν)J_ν(t,x,α,Z). Conditions on f,g,y,r will be specified below. We now introduce the notion of a solution to the MFG associated with (<ref>)–(<ref>). A solution to the MFG, associated with (<ref>)–(<ref>), with initial condition x ∈ [0,L] is defined to be a ν∈𝒫_T,L such that there exist a system Ξ and an (α,Z) ∈𝒜(Ξ,0,x,ν) such that Z=(X,Y,R) satisfies ℙ∘ X^-1 = ν and V_ν(0,x) = J_ν(0, x, α,Z). If there exists a unique such ν, we refer to V_ν(0,x) as the value of the MFG with initial condition x. §.§ Background results The following conditions were used in <cit.> in order to characterize the value function V_ν and the optimal control. (a) There exists c_L ∈ (0,∞) such that for every (t,η,x,α),(t',η',x',α')∈[0,T]×𝒫([0,L])×[0,L]× U, |f(t,η,x,α)-f(t',η',x',α')| + |g(η,x) - g(η',x')|+|b(t,η,x,α)-b(t',η',x',α')| +|y(t,η)-y(t',η')|+|r(t,η)-r(t',η')|≤ c_L (|t-t'|+W_1(η,η')+|x-x'|+|α-α'|). (b) For every (t,η, x,p) ∈ [0,T]×𝒫([0,L])× [0,L]×ℝ, there is a unique α̂(t,η,x,p) ∈ U such that α̂(t,η,x,p)=argmin_u∈ U h(t,η,x,u,p), where h(t,η,x,u,p)=f(t,η,x,u)+b(t,η,x,u)p. As argued in <cit.>, Berge's maximum theorem (see <cit.>), together with part (b) of the above assumption, implies that α̂ is continuous. Also note that (<ref>) implies that b,f,g,y,r are bounded functions; in particular, sup_(t,η,x,u)∈ [0,T]×𝒫([0,L])× [0,L]× U|b(t,η,x,u)| ≐ c_B < ∞. For further discussion about the assumption, see <cit.>. The Hamilton-Jacobi-Bellman equation for the value function V_ν(t,x) is given as follows: -D_t ϕ-H(t,ν(t),x,D ϕ)-(1/2)σ^2 D^2ϕ=0, (t,x)∈[0,T]×[0,L], with the boundary conditions ϕ(T,x)=g(ν(T),x), Dϕ(t,0)=-y(t,ν(t)), and Dϕ(t,L)=r(t,ν(t)), t∈[0,T], where H is the Hamiltonian given as H(t,η,x,p)=inf_u∈ U h(t,η,x,u,p). The following class of Hölder continuous ν∈𝒫_T,L plays a key role in the analysis: 𝒫_0≐{ν∈𝒫_T,L:sup_0≤ s<t≤ TW_1(ν(t),ν(s))/(t-s)^1/2<∞}. Fix ν∈𝒫_0 and suppose that Assumption <ref> holds. Then V_ν is continuously differentiable with respect to (w.r.t.) t and twice continuously differentiable w.r.t. x. It is the unique solution of the Hamilton-Jacobi-Bellman equation (<ref>) with the boundary conditions (<ref>). Furthermore, with α̂ as introduced in Assumption <ref>, the map (s,x')↦α̂(s,ν(s),x',D V_ν(s,x')) is continuous and the feedback control γ̂(u, x')≐α̂(u+t,ν(u+t),x', D V_ν(u+t,x')) is an optimal feedback control for (<ref>) for every t ∈ (0,T). Moreover, any optimal control α for (<ref>) satisfies α(u,ω)=γ̂(u,X(u,ω)), λ_T^t⊗ℙ almost surely (a.s.), where λ_T^t denotes the Lebesgue measure on [0,T-t]. Using Proposition <ref>, <cit.> proves the existence of a solution of the MFG under Assumption <ref>. In order to establish uniqueness of the solution, we need an additional condition. Fix η_0 ∈𝒫([0,L]). For every (t,η,x,u)∈[0,T]×𝒫([0,L])×[0,L]× U, b(t,η,x,u) =b(t,η_0,x,u), f(t,η,x,u)=f_0(t,η,x)+f_1(t,x,u), y(t,η) =y(t,η_0), r(t,η)=r(t,η_0). Moreover, for every t∈[0,T] and η,η'∈𝒫([0,L]), f_0 and g satisfy the following monotonicity property: ∫_0^L [f_0(t,η,x)-f_0(t,η',x)]d(η-η')(x) ≥ 0, ∫_0^L (g(η,x)-g(η',x))d(η-η')(x) ≥ 0. Abusing notation, when Assumption <ref> holds, we will write b(t,x,u) =b(t,η_0,x,u), y(t)=y(t,η_0), and r(t)=r(t,η_0). The following is one of the main results from <cit.>. Under Assumption <ref>, there exists a solution of the MFG. If in addition Assumption <ref> holds, then there is a unique MFG solution.
§.§ Rate control in queues with strategic servers The MFG described above arises from the heavy traffic analysis of a large queuing system that consists of many symmetric strategic servers that are weakly interacting. Consider a collection of n critically loaded single server queues. Given past information, each server controls the arrival and service rates associated with its own queue. In addition, the rates depend on time, the individual queue length, and the empirical measure of all the queue states. The servers aim to minimize individual costs that, in particular, account for the scaled idleness and rejection processes. The cost also depends on the individual queue state, the control action, and the state of the overall system given through the empirical measure of the states of all queues. The main goal is to find an asymptotic Nash equilibrium in this game as the system approaches criticality (i.e., the heavy traffic limit) and the number of queues approaches ∞, simultaneously. It is shown in <cit.> that, given a solution of the MFG and an optimal control associated with it of the form in Section <ref>, one can construct an asymptotic (in the number of players and in the heavy traffic limit) Nash equilibrium for the n-player game, such that the solution of the MFG and its associated value function approximate the empirical distribution of the states of the queues and the value function of each server. The current work provides a numerical approximation for the solution of the MFG that is needed in order to construct the above n-player asymptotic Nash equilibrium. § NUMERICAL SCHEME FOR THE MFG In this section we will use the Markov chain approximation method (<cit.>) to construct numerical solutions of the MFG. The main result of the paper, Theorem <ref>, is given here. The numerical scheme is composed of two main steps. First, in Section <ref>, given a probability measure in 𝒫_T,L, we construct a finite state, discrete time, controlled Markov chain and provide a numerical scheme to construct a measure over C([0,T]:[0,L]). Then, in Section <ref>, we show that, under assumptions that include the existence of a unique solution of the MFG, the measure constructed from the chain converges to the solution of the MFG over a small time interval. Throughout the section we assume that Assumption <ref> is satisfied and that (<ref>) holds. Note that we do not assume (<ref>) or the monotonicity condition in Assumption <ref>; however, we will introduce additional assumptions as needed. We now introduce the controlled Markov chain, constructed on some probability space (Ω',ℱ',ℙ'), that will be used to approximate the solution of the MFG. §.§ Approximating controlled Markov chains Fix a discretization parameter h>0 such that L is an integer multiple of h. Denote the h-grid {-h,0,h,…,L+h} by 𝕊^h. This is a discretized version of the state space [0,L]. Since 0 and L are reflecting barriers for the state process X, we will consider two types of transition steps for the approximating chain. The first, which occurs when the chain is away from the boundary, will be referred to as the rate control step, and the second occurs at the end points L+h and -h and is referred to as the reflection step. Rate control step. For every t∈ [0,T], η∈𝒫([0,L]), u∈ U, and x∈𝕊^h_0≐𝕊^h∖{-h,L+h}, let q^h(t,η,u; x,x± h) ≐(± hb(t,η,x,u)+σ^2)/(2σ^2). Note that ∑_y ∈{x± h} q^h(t,η,u; x,y) = 1 and that for 0<h<σ^2/c_B, the transition probabilities are positive.
Hereafter, these inequalities on h are in force. Also, define Δ^h≐ h^2/σ^2. This will be used to define the continuous time interpolation of the controlled Markov chain. Denote the Δ^h-grid {0,Δ^h,2Δ^h,…,T-Δ^h} by 𝕋^h. Note that Δ^h→ 0 as h→ 0. One can verify that the following local consistency conditions (cf. <cit.>) hold for every x∈𝕊^h_0: m^h_0(t,η,u,x) ≐∫_𝕊^h(x̃-x)q^h(t,η,u; x,dx̃) = b(t,η,x,u)Δ^h, (σ^h_0)^2(t,η,u,x) ≐∫_𝕊^h(x̃-x-m^h_0)^2q^h(t,η,u; x,dx̃) =σ^2Δ^h-(b(t,η,x,u)Δ^h)^2. Reflection step. Such a step occurs only when x∈{-h,L+h}. For every t∈[0,T], η∈𝒫([0,L]), and u∈ U, let q^h(t,η,u; L+h,L)=q^h(t,η,u; -h,0)=1. We will now define a controlled Markov chain {X_n^h,ν}_n∈ℕ_0 associated with the parameter h, a measure ν∈𝒫_T,L, and an initial condition x_0∈ [0,L]. We choose to work with a deterministic initial state for simplicity of presentation. The results continue to hold when the initial state is random. In that case, in Construction <ref>, one needs an additional initialization step at which one takes a random draw x_0 from the initial distribution and then sets x^h_0=⌊ x_0/h⌋ h, as in the construction below. We will assume that I(h) ≐ T/Δ^h and L/h are integers. The numerical scheme that we develop will be based on controlled Markov chains associated with the probability kernel q^h. Such controlled Markov chain based schemes are closely related to explicit finite difference schemes for parabolic PDE. Although not studied in the current work, one can also consider Markov chain approximation schemes that have behavior similar to that of implicit finite difference schemes. One of the important steps in convergence proofs of finite difference schemes is the identification of appropriate stability conditions for space-time discretizations. For the Markov chain approximation method, the analogue of such a stability condition is the local consistency condition of the form in (<ref>)–(<ref>), which forms the heart of our convergence proof. These local consistency requirements in particular imply for our scheme the space-time scaling of the form in (<ref>). * Define X_0^h,ν = x^h_0=⌊ x_0/h⌋ h, set t^h,ν_0=0, and let α_-1^h,ν be a fixed element of U. * Having defined for i=0, 1, …, n time instants t_i^h,ν<T and random variables X_i^h,ν, α^h,ν_i-1 with values in 𝕊^h and U respectively, let ℱ_i^h,ν≐σ{X^h,ν_j, α^h,ν_j-1: j = 0, 1, …, i}. * Choose the control α_n^h,ν for the n-th step to be a U-valued ℱ_n^h,ν-measurable random variable and let X^h,ν_n+1 be such that its conditional distribution given ℱ^h,ν_n is q^h(t_n^h,ν,ν(t_n^h,ν), α_n^h,ν; X_n^h,ν, ·), where ν(t) denotes the marginal distribution of ν at time instant t. Also define t^h,ν_n+1≐ t_n^h,ν + Δ^h 1_{X_n^h,ν∉{-h, L+h}}, where the indicator in the above definition will ensure that when we do a continuous time interpolation of the chain, reflection steps `occur instantaneously'. Note that the choice of α^h,ν_n is irrelevant if X^h,ν_n∈{-h, L+h}. If α_n^h,ν = ϑ(t_n^h,ν, X_n^h,ν) for some ϑ: 𝕋^h ×𝕊^h → U, then the function ϑ is referred to as a feedback control.
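To make Construction <ref> concrete, the following sketch simulates one trajectory of the controlled chain under a feedback control. The interface is our own (in particular, nu_marginal(t) stands for whatever representation of the marginal ν(t) the drift b accepts), and we merge each visit to -h or L+h with the instantaneous reflection step that follows it, which leaves the time-interpolated process unchanged since reflection steps consume no time.

```python
import numpy as np

def simulate_chain(h, L, T, sigma, b, theta, nu_marginal, x0, rng):
    """One path of {X_n^{h,nu}} under a feedback control theta(t, x).

    b(t, eta, x, u) is the drift and the kernel q^h is the one defined
    in the text. Returns the states on the grid {0, Delta^h, ..., T}
    together with the accumulated reflection terms Y, R.
    """
    dt = h ** 2 / sigma ** 2                 # Delta^h
    x = h * np.floor(x0 / h)                 # x_0^h
    Y = R = 0.0
    states = [x]
    for j in range(int(round(T / dt))):
        t = j * dt
        u = theta(t, x)
        p_up = (h * b(t, nu_marginal(t), x, u) + sigma ** 2) / (2 * sigma ** 2)
        x += h if rng.random() < p_up else -h
        # a visit to -h or L+h is followed by an instantaneous reflection
        # step back to the boundary; time does not advance, so we apply
        # the reflection immediately and record the push in Y or R
        if x < 0.0:
            x, Y = 0.0, Y + h
        elif x > L:
            x, R = L, R + h
        states.append(x)
    return np.array(states), Y, R
```

For instance, with rng = np.random.default_rng() and theta given by the feedback computed in the backward induction sketched below, repeated calls produce independent samples whose interpolated law approximates Φ^h(ν).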
Some auxiliary processes. We will now introduce some processes that will be useful in the analysis of the h-th Markov chain. Consider the piecewise constant processes (X^h,ν(t), α^h,ν(t)) ≐ (X^h,ν_n^h,ν(t), α^h,ν_n^h,ν(t)), t ∈ [0,T], where n^h,ν(t) ≐max{n: t^h,ν_n = j Δ^h}, t ∈ [jΔ^h, (j+1)Δ^h), j = 0, …, I(h)-1. Let F^h,ν(0)=B^h,ν(0)=Y^h,ν(0)=R^h,ν(0)=0 and for every t∈[0,T] let F^h,ν(t) ≐∑_j=0^n^h,ν(t)-1𝔼[X^h,ν_j+1-X^h,ν_j|ℱ^h,ν_j]1_{X^h,ν_j∉{-h,L+h}}, B^h,ν(t) ≐(1/σ)∑_j=0^n^h,ν(t)-1(X^h,ν_j+1-X^h,ν_j-𝔼[X^h,ν_j+1-X^h,ν_j|ℱ^h,ν_j])1_{X^h,ν_j∉{-h,L+h}}, Y^h,ν(t) ≐∑_j=0^n^h,ν(t)-1(X^h,ν_j+1-X^h,ν_j)1_{X^h,ν_j=-h} = h∑_j=0^n^h,ν(t)-11_{X^h,ν_j=-h}, R^h,ν(t) ≐∑_j=0^n^h,ν(t)-1(X^h,ν_j-X^h,ν_j+1)1_{X^h,ν_j=L+h} = h∑_j=0^n^h,ν(t)-11_{X^h,ν_j=L+h}. One can verify that the following representation holds: (X^h,ν,Y^h,ν,R^h,ν)(t)=Γ(x^h_0+F^h,ν(·)+σ B^h,ν(·))(t), t∈[0,T]. Also, from (<ref>) and (<ref>) it follows that, on the set {X^h,ν_n∉{-h,L+h}}, 𝔼[X^h,ν_n+1-X^h,ν_n|ℱ^h,ν_n]=b(t^h,ν_n,ν(t^h,ν_n),X^h,ν(t^h,ν_n),α^h,ν(t^h,ν_n))Δ^h, 𝔼[(X^h,ν_n+1-X^h,ν_n-𝔼[X^h,ν_n+1-X^h,ν_n|ℱ^h,ν_n])^2|ℱ^h,ν_n]= σ^2Δ^h+o(Δ^h). Define 𝒢^h,ν_t ≐ℱ^h,ν_n^h,ν(t). Then, since n^h,ν(t) for each fixed t is a {ℱ^h,ν_j} stopping time, we have by the optional sampling theorem that B^h,ν(·) is a {𝒢^h,ν_t} martingale. Also, from the above, F^h,ν(t) =∫_0^t b(l^h(s),ν(l^h(s)),X^h,ν(s),α^h,ν(s))ds, where l^h(s)≐⌊ s/Δ^h⌋Δ^h, s∈[0,T]. Cost function for the MDP. For every (t,x)∈𝕋^h×𝕊^h and any admissible control α^h,ν used to construct the h-th controlled Markov chain, define the associated cost J^h,ν(t,x,α^h,ν) ≐𝔼[∫_t^T f(l^h(s),ν(l^h(s)),X^h,ν(s),α^h,ν(s))ds+g(ν(T),X^h,ν(T)) +∫_t^Ty(s)dY^h,ν(s)+∫_t^Tr(s)dR^h,ν(s) | X^h,ν(t)=x]. The value function associated with the above cost is given by V^h_ν(t,x)≐inf_α^h,ν J^h,ν(t,x,α^h,ν), where the infimum is taken over all admissible controls. We now provide properties of the value function V^h_ν and of the optimal strategy in the h-th MDP. For every (t,x,ν)∈𝕋^h×𝕊^h_0×𝒫_T,L, define the h-th finite difference of the value function w.r.t. x as follows: D^h_xV^h_ν(t,x) ≐(1/(2h))(V^h_ν(t+Δ^h,x+h)-V^h_ν(t+Δ^h,x-h)), where V^h_ν(t+Δ^h,L+h) ≐ r(t+Δ^h)h+V^h_ν(t+Δ^h,L), V^h_ν(t+Δ^h,-h) ≐ y(t+Δ^h)h+V^h_ν(t+Δ^h,0). The optimal control in the h-th MDP is given in state feedback form as ϑ^h,ν(t,x)= α̂(t,ν(t),x,D^h_xV^h_ν(t,x)), (t,x)∈𝕋^h×𝕊^h. Letting α̂^h,ν(t,x)≐ϑ^h,ν(l^h(t),x), (t,x)∈ [0,T]×𝕊^h, there exists a constant c_d(T)∈(0,∞) such that for every (t,ν)∈𝕋^h×𝒫_T,L and for every h, one has V^h_ν(t,X^h,ν(t))+ σ∑_s∈𝕋^h, s≥ t D^h_xV^h_ν(s,X^h,ν(s))(B^h,ν(s+Δ^h)-B^h,ν(s)) =g(ν(T),X^h,ν(T))+∫_t^Tf(l^h(s),ν(l^h(s)),X^h,ν(s),α̂^h,ν(s,X^h,ν(s)))ds+∫_t^Ty(s)dY^h,ν(s)+∫_t^Tr(s)dR^h,ν(s), where (X^h,ν,B^h,ν,Y^h,ν,R^h,ν) are as in (<ref>)-(<ref>) with {α^h,ν_n} replaced by the optimal feedback control ϑ^h,ν, and for all (t,x)∈𝕋^h×𝕊^h, |D^h_xV^h_ν(t,x)|≤ c_d(T). We note that Lemma <ref> gives, in an explicit form, the finite difference scheme associated with the dynamic programming equation for the cost function (<ref>). Indeed, recall that the function α̂ is given in (<ref>), that the gradient D^h_xV^h_ν(t,x) from (<ref>) is calculated based on the values of the value function at time t+Δ^h, and that the integrals in (<ref>) can be written as finite sums over s∈𝕋^h∩[t,T]. Now, by taking (conditional) expected values in (<ref>), the finite difference scheme follows from a backward induction. The proof of the lemma is given in Section <ref>.
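In code, the backward induction described in the above remark might look as follows: a sketch in which the control set U is replaced by a finite grid and the minimization defining the feedback is done by brute force (the names and the discretization of U are our assumptions, not part of the scheme's analysis).

```python
import numpy as np

def value_iteration(h, L, T, sigma, b, f, g, y, r, nu, U_grid):
    """Backward induction for V^h_nu and the feedback vartheta^{h,nu}.

    nu[j] represents the marginal nu(t_j) on the grid t_j = j * Delta^h
    (any object that b, f, g accept); U_grid is a finite subset of U.
    Returns V on the grid {0, h, ..., L} and the minimizing control
    indices for each (t_j, x_i).
    """
    dt = h ** 2 / sigma ** 2
    I, M = int(round(T / dt)), int(round(L / h))
    xs = h * np.arange(M + 1)
    V = np.empty((I + 1, M + 1))
    pol = np.zeros((I, M + 1), dtype=int)
    V[I] = [g(nu[I], x) for x in xs]          # terminal condition
    for j in range(I - 1, -1, -1):
        t = j * dt
        # ghost values at L+h and -h encode the reflection costs r, y
        up = np.append(V[j + 1, 1:], r(t + dt) * h + V[j + 1, M])
        dn = np.insert(V[j + 1, :M], 0, y(t + dt) * h + V[j + 1, 0])
        for i, x in enumerate(xs):
            best, arg = np.inf, 0
            for k, u in enumerate(U_grid):
                q_up = (h * b(t, nu[j], x, u) + sigma ** 2) / (2 * sigma ** 2)
                c = dt * f(t, nu[j], x, u) + q_up * up[i] + (1 - q_up) * dn[i]
                if c < best:
                    best, arg = c, k
            V[j, i], pol[j, i] = best, arg
    return V, pol
```

The optimal feedback of Lemma <ref> is then recovered as ϑ^h,ν(jΔ^h, ih) = U_grid[pol[j, i]], which can be passed as theta to simulate_chain above.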
The induced measure Φ^h(ν). Recall from (<ref>) that for j = 0, 1, …, I(h), n^h,ν(j Δ^h) = max{i: t_i^h,ν = j Δ^h}. Let {X̂^h,ν(t)}_t∈ [0,T] be the continuous stochastic process which is linear on [jΔ^h, (j+1)Δ^h] and equals X^h,ν_n^h,ν(j Δ^h) at t = jΔ^h, for j=0, …, I(h)-1, where {X^h,ν_n} is the controlled Markov chain constructed using the optimal feedback control ϑ^h,ν. Let Φ^h(ν)≐ℙ'∘(X̂^h,ν)^-1. Next, we show that, under suitable conditions, Φ^h is a contraction up to an 𝒪(h^2) term, over a small time interval. This `almost-contraction' property lies at the heart of our main result, Theorem <ref>. An almost-contraction property. Recall that we assume that Assumption <ref> is satisfied. In addition, we will make the following assumption on a Lipschitz property of the function α̂ from (<ref>). There exists c_α̂∈ (0,∞) such that for every t∈[0,T], η,η'∈𝒫([0,L]), x,x'∈[0,L], and p,p'∈ℝ, |α̂(t,η,x,p)-α̂(t,η',x',p')|≤ c_α̂(W_1(η,η')+|x-x'|+|p-p'|). The following lemma gives a sufficient condition for Assumption <ref> to hold. Suppose that the drift and the cost functions satisfy the following properties. (a) For every (t,η,x,α)∈[0,T]×𝒫([0,L])×[0,L]× U, b(t,η,x,α)=b_1(t,η,x)+b_2(t)α. (b) There exists c_m∈ (0,∞) such that for every t∈[0,T], η∈𝒫([0,L]), x∈[0,L], and α,α'∈ U, f(t,η,x,α')-f(t,η,x,α)-(α'-α)f_α(t,η,x,α)≥ c_m(α'-α)^2. (c) The map α↦ f(t,η,x,α) is continuously differentiable for every (t,η,x) ∈ [0,T]×𝒫([0,L])×[0,L] and there exists c_l ∈ (0,∞) such that for every t∈[0,T], η,η'∈𝒫([0,L]), x,x'∈[0,L], and α∈ U, |f_α(t,η,x,α)-f_α(t,η',x',α)|≤ c_l(W_1(η,η')+|x-x'|). Then Assumption <ref> is satisfied. The proof of the lemma is deferred to Section <ref>. The conditions in the above lemma are not new to the literature on MFGs. For example, Assumptions (A.1), (A.2), and (A.3) in <cit.> are stronger. Also, parts (b) and (c) above, which concern the running cost, are imposed by <cit.>, which studies a rate control problem (part (a) is irrelevant for that model). A basic example that satisfies parts (a)–(c) in the lemma, in addition to Assumptions <ref> and <ref>, is the following: b(t,η,x,α) =b_1(t,x)+b_2(t)α, f(t,η,x,α) =a_1(t,x)+a_2(t,x)k(α)+a_3(t)(c_1+a_4(x))∫_0^La_4(y)dη(y), g(η,x) =(c_2+a_5(x))∫_0^La_5(y)dη(y), y(t,η) =a_6(t), r(t,η)=a_7(t), where b_1,a_1,a_2:[0,T]×[0,L]→ℝ, b_2,a_3,a_6, a_7:[0,T]→ℝ, a_4,a_5:[0,L]→ℝ are Lipschitz functions, c_1,c_2∈ℝ, k: U →ℝ is a C^2 strictly convex function (e.g., k(α) = (α-α_0)^2 for some α_0 ∈ℝ), and a_2(t,x)≥ c_m>0 for all (t,x)∈ [0,T]× [0,L]. We note that, although Assumption <ref> is not explicitly imposed in the current work, we will assume later in the section that the MFG has a unique solution (see Assumption <ref>), which from Proposition <ref> holds under Assumptions <ref> and <ref>. For this reason we presented an example that satisfies all three assumptions (i.e., Assumptions <ref>, <ref>, and <ref>). From a modeling perspective, by choosing a positive a_3 and positive, nondecreasing a_4 and a_5, the system planner penalizes all servers collectively for congestion when the empirical measure has high a_4- and a_5-moments, and in addition it penalizes individual servers for long queues. Also, when a_7>0, rejections of jobs by an individual server are disincentivized, and when a_6<0, idleness is rewarded. Finally, a convex nondecreasing k assigns costs for increasing the rates.
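To illustrate Assumption <ref> in the context of this example, take k(α)=(α-α_0)^2 and assume that U is a compact interval (a sketch; this worked computation is ours). Then h(t,η,x,u,p)=f(t,η,x,u)+(b_1(t,x)+b_2(t)u)p = a_2(t,x)(u-α_0)^2+b_2(t)up+(terms not involving u), so the unconstrained minimizer is u^*=α_0-b_2(t)p/(2a_2(t,x)), and α̂(t,η,x,p) is the projection of u^* onto the interval U. Since the projection onto an interval is 1-Lipschitz and a_2≥ c_m, |α̂(t,η,x,p)-α̂(t,η,x,p')|≤ (sup_t|b_2(t)|/(2c_m))|p-p'|, and a similar argument, using the Lipschitz continuity of a_2, the lower bound a_2≥ c_m, and the fact that the projection saturates at an endpoint of U for large |p|, gives the Lipschitz property in x; in this example α̂ does not depend on η.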
The next result plays an important role in the proof of Theorem <ref>. Recall that Assumption <ref> and (<ref>) are in force. Suppose that Assumption <ref> is satisfied. Then there exist T̂>0, ĥ>0, and q∈(0,1) such that for every T≤T̂, ν,ν'∈𝒫_T,L, and h∈(0,ĥ∧T̂), W^2_1(Φ^h(ν),Φ^h(ν')) ≤ q (h^2+ W^2_1(ν,ν')). The proof of the proposition is given in Section <ref>. §.§ Approximating the solution of the MFG We now provide the numerical scheme that approximates the solution of the MFG. Let T̂, ĥ be as in Proposition <ref>. Fix T<T̂ and (x^h_0,ν^1,h)∈𝕊^h×𝒫_T,L× (0,ĥ∧T̂). Let {X^h,ν^1_n} be the h-th Markov chain from Construction <ref> associated with the optimal control α̂^h,ν^1. Having defined for m∈ℕ the process {X^h,ν^m_n}, set ν^m+1≐Φ^h(ν^m) and let {X^h,ν^m+1_n} be the h-th Markov chain from Construction <ref> associated with the optimal control α̂^h,ν^m+1. With q∈ (0,1) as in Proposition <ref>, we get that for every h as in Construction <ref> and every k∈ℕ, W^2_1(Φ^h(ν^k),ν^k)=W^2_1(Φ^h(ν^k),Φ^h(ν^k-1))≤ q(h^2+W^2_1(ν^k,ν^k-1)). By iterating this bound we obtain W^2_1(Φ^h(ν^k),ν^k)≤ (q/(1-q))h^2+q^k-1W^2_1(ν^2,ν^1). Set k_h≐min{k∈ℕ : W^2_1(Φ^h(ν^k),ν^k)≤ (2q/(1-q))h^2} and ν_h≐ν^{k_h}. We note that k_h depends also on ν^1; however, it plays no role in the sequel and is therefore omitted from the notation. Processes (X^h,ν_h,Y^h,ν_h,R^h,ν_h,B^h,ν_h) are defined as in Construction <ref>, replacing ν with ν_h and α_n^h,ν with α̂^h,ν_h_n. As an immediate consequence of the definition of ν_h, we get the following proposition, which is key to the proof of the approximation result in Theorem <ref> below. Suppose that Assumption <ref> is satisfied. Then, with T̂ as in Proposition <ref>, for every T≤T̂, lim_h→ 0 W^2_1(Φ^h(ν_h),ν_h)=0.
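The following sketch implements the above iteration with Monte Carlo sampling, using the value_iteration and simulate_chain sketches given earlier. Two caveats, which are our simplifications and not part of the scheme's analysis: each ν^m is represented only through empirical samples of its time marginals (which is all that the drift, the running cost, and the terminal cost use), and the stopping rule replaces the path-space W_1 by the largest marginal W_1; in one dimension the latter reduces to comparing sorted samples. The constant q from Proposition <ref> is not explicit, so it is passed as a user-chosen parameter.

```python
import numpy as np

def w1_empirical(a, b):
    # W_1 between two empirical measures on R with equally many atoms
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def mfg_fixed_point(h, L, T, sigma, b, f, g, y, r, U_grid, x0,
                    n_paths, max_iter, q, rng):
    """Iteration nu^{m+1} = Phi^h(nu^m), stopped as in the rule for k_h."""
    dt = h ** 2 / sigma ** 2
    I = int(round(T / dt))
    paths = np.full((n_paths, I + 1), float(x0))    # nu^1: all mass at x0
    for _ in range(max_iter):
        nu = [paths[:, j] for j in range(I + 1)]    # marginals as samples
        V, pol = value_iteration(h, L, T, sigma, b, f, g, y, r, nu, U_grid)
        theta = lambda t, x: U_grid[pol[min(int(round(t / dt)), I - 1),
                                        int(round(x / h))]]
        nu_fun = lambda t: paths[:, min(int(round(t / dt)), I)]
        new = np.stack([simulate_chain(h, L, T, sigma, b, theta,
                                       nu_fun, x0, rng)[0]
                        for _ in range(n_paths)])
        gap = max(w1_empirical(new[:, j], paths[:, j]) for j in range(I + 1))
        paths = new
        if gap ** 2 <= 2 * q / (1 - q) * h ** 2:    # the k_h stopping rule
            break
    return paths
```

The returned array of trajectories is an empirical version of ν_h; its columns approximate the marginal flow {ν_h(t_j)} used in the convergence analysis below.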
For the main result of the paper, in addition to Assumptions <ref>, <ref> and the property in (<ref>), we also need the following assumption. There is a unique ν̅∈𝒫_T,L that solves the MFG with initial condition x. In order to formalize the main result we introduce the notion of relaxed controls. The reason is that we need to argue the tightness of control sequences in an appropriate space. For this, we borrow a relaxed control formulation from <cit.>. Consider the relaxation of the stochastic control problem in (<ref>)–(<ref>) where the control space U is replaced by 𝒫(U), the drift function b is replaced by the function b_ℛ:[0,T]×𝒫([0,L])×[0,L]×𝒫(U)→ℝ defined as b_ℛ(t,η,x, r) ≐∫_U b(t,η, x, u) r(du), and the running cost f is replaced by f_ℛ: [0,T]×𝒫([0,L])× [0,L]×𝒫(U)→ℝ, defined as f_ℛ(t,η, x, r) ≐∫_U f(t, η, x, u) r(du). Finally, we replace the class of admissible controls 𝒜(Ξ, t,x, ν̅) by the class 𝒜_ℛ(Ξ, t,x, ν̅) of pairs (α_ℛ, Z) that are similar to the pairs (α, Z) introduced above (<ref>), except that α_ℛ is 𝒫(U)-valued rather than U-valued and in (<ref>) we replace b̅(u) = b(u, ν(u), X(u), α(u)) with b_ℛ(u, ν(u), X(u), α_ℛ(u)). The corresponding cost function J_ν̅,ℛ is defined by (<ref>) with f replaced by f_ℛ. The value function in this relaxed formulation, denoted as V_ν̅,ℛ, is given by (<ref>) with 𝒜 replaced by 𝒜_ℛ. Define the function h_ℛ by (<ref>), replacing (f,b) with (f_ℛ,b_ℛ). Then, from Assumption <ref>(b), H(t,η,x,p) = inf_u ∈ U h(t,η, x, u,p) = inf_r ∈𝒫(U) h_ℛ(t,η, x, r,p). To see the last equality, note that inf_r ∈𝒫(U) h_ℛ(t,η, x, r,p)=inf_r∈𝒫(U)∫ h(t,η, x, u,p)dr(u)≥inf_u∈ U h(t,η, x, u,p), and on the other hand inf_u∈ U h(t,η, x, u,p)= inf_u∈ U∫ h(t,η, x, a,p)dδ_u(a)≥inf_r ∈𝒫(U) h_ℛ(t,η, x, r,p). Therefore, V_ν and V_ν,ℛ are both solutions of the partial differential equation (<ref>)-(<ref>). In view of the uniqueness result given in Proposition <ref>, V_ν = V_ν,ℛ. Recall from Proposition <ref> that Assumption <ref> is satisfied if, in addition to Assumption <ref>, Assumption <ref> holds. Also, from Assumptions <ref> and <ref>, it follows from arguments as in the proof of Proposition 3.1 in <cit.> (see also the statement of Proposition <ref> here) that there is a continuous map γ:[0,T] × [0,L] → U such that if there exist a system Ξ and an (α_ℛ,Z) ∈𝒜_ℛ(Ξ,0,x,ν) such that Z(t)≡ (X,Y,R)(t)=Γ(x+∫_0^· b_ℛ(s, ν(s), X(s), α_ℛ(s))ds +σ B(·) )(t), t ∈ [0,T], ℙ∘ X^-1 =ν, and V_ν(0,x) = J_ν,ℛ(0, x, α_ℛ,Z), then α_ℛ(t, ω) = δ_γ(t, X(t, ω)), λ_T^0⊗ℙ-a.s., where recall that λ_T^0 denotes the Lebesgue measure on [0,T], and ν = ν̅. We now present the main result of the paper. Let ℳ(U × [0,T]) be the space of finite measures on U × [0,T], equipped with the topology of weak convergence. Define the ℳ(U× [0,T])-valued random variable m̂^h,ν_h as m̂^h,ν_h(duds) ≐δ_α̂^h,ν_h(s,X^h,ν_h(s))(du) ds. Also, recall that Assumption <ref> and (<ref>) are in force throughout this section. Suppose that T≤T̂, where T̂ is as in Proposition <ref>. Also suppose that Assumptions <ref> and <ref> are satisfied. Recall the processes (X^h,ν_h, Y^h,ν_h, R^h,ν_h, B^h,ν_h) introduced below Construction <ref> and consider a sequence {h}→ 0. Then the sequence (X^h,ν_h, Y^h,ν_h, R^h,ν_h, B^h,ν_h, m̂^h,ν_h) converges in distribution to (X,Y, R, B, m) in D([0,T]:ℝ^4)×ℳ(U × [0,T]), and lim_h→0ν_h= ν in 𝒫_T,L, where the limit processes, defined on some probability space (Ω, ℱ, ℙ), satisfy the following. (a) B is a ℱ_t ≐σ{ B(s), X(s), Y(s), R(s), m(A×[0,s]): s ≤ t, A ∈ℬ(U)} Brownian motion, and so Ξ≐ (Ω, ℱ, {ℱ_t}, ℙ, B) is a system. (b) Disintegrating m(duds) = m_s(du) ds, the following relationship holds a.s.: Z(t)≐ (X(t), Y(t), R(t)) = Γ(x + ∫_0^·b_ℛ(s, ν(s), X(s), m_s) ds + σ B(·))(t), t ∈ [0,T]. (c) ℙ∘ (X)^-1 = ν. (d) The pair (m,Z) ∈𝒜_ℛ(Ξ,0,x,ν) and V_ν(0,x) = J_ν,ℛ(0, x, m,Z). In particular, with γ as in Remark <ref>, m(duds) = δ_γ(s, X(s))(du) ds, and ν = ν̅, the unique solution of the MFG. Proof. Since many steps in the proof are quite standard, we will only provide details where appropriate. Using properties (<ref>) and (<ref>) of the controlled transition probability kernel, it can be argued (cf. the proof of <cit.>) that {(F^h,ν_h, B^h,ν_h)}_h>0 is tight in D([0,T]:ℝ^2). Using this tightness property along with the continuity of the Skorohod map (Lemma <ref>), it now follows that {(X^h,ν_h, Y^h,ν_h, R^h,ν_h, B^h,ν_h)}_h>0 is tight in D([0,T]:ℝ^4). In fact, this sequence is C-tight. Next note that, since m̂^h,ν_h(U× [0,T]) =T and U is compact, the sequence {m̂^h,ν_h}_h>0 is tight in ℳ(U× [0,T]). Also, recalling the definition of the interpolated processes (right before (<ref>)), it can be checked that |X^h,ν_h - X̂^h,ν_h|_T → 0 as h→ 0. Combining this with the tightness of {X^h,ν_h} and the fact that Φ^h(ν_h) = ℙ'∘ (X̂^h,ν_h)^-1 gives the relative compactness of {Φ^h(ν_h)} in 𝒫_T,L. Suppose now that along a subsequence (relabeled again as {h}), (X^h,ν_h, Y^h,ν_h, R^h,ν_h, B^h,ν_h, m̂^h,ν_h) ⇒ (X,Y, R, B, m) and Φ^h(ν_h) →ν. Then by (<ref>) we also have that ν_h→ν. By standard martingale methods it follows that B is a {ℱ_t} Brownian motion (see e.g. the proof of <cit.>), proving part (a) of the theorem. Using (<ref>), F^h,ν_h converges, along with the above processes, in distribution to ∫_U× [0,·] b(s, ν(s), X(s), u) m(duds) = ∫_0^· b_ℛ(s, ν(s), X(s), m_s) ds, where m(du ds) = m_s(du) ds. Using the continuity property of the Skorohod map, we now get (b). Also, part (c) is immediate on using (<ref>), recalling that Φ^h(ν_h) is the probability law of X̂^h,ν_h, and by (<ref>). Clearly (m,Z) ∈𝒜_ℛ(Ξ, 0,x, ν).
We will now argue the first statement in (d), namely V_ν(0,x) = J_ν,ℛ(0, x, m,Z). This, together with Remark <ref>, will prove the second statement in (d) for the subsequence. Since the convergent subsequence was arbitrary, we will get the convergence asserted in the statement of the theorem and complete the proof. Proof of (<ref>). For the rest of the proof we consider the subsequence along which the convergence in (<ref>) and (<ref>) holds. Using arguments similar to those in Proposition 4.2 in <cit.>, it can be checked that lim_h→ 0 J^h,ν_h(α̂^h,ν_h) = J_ν,ℛ(0, x,m,Z), where we have suppressed (0,x_0^h) from the notation in J^h,ν_h. Next, let (β, Z) ∈𝒜(Ξ̅, 0, x, ν) for some system Ξ̅= (Ω̅, ℱ̅, {ℱ̅_t}, ℙ̅, B̅). We will now show that for every ε_0 > 0 there is a sequence of controls {β^h,ν_h_n} for the controlled Markov chains, defined as in Construction <ref> with the sequence {ν_h}, such that lim sup_h→ 0 J^h,ν_h(β^h,ν_h) ≤ J_ν(0, x, β, Z) + ε_0. Note that from the optimality property of α̂^h,ν_h, lim sup_h→ 0 J^h,ν_h(α̂^h,ν_h) ≤lim sup_h→ 0 J^h,ν_h(β^h,ν_h). Combining the above inequality with (<ref>) and (<ref>), we now get that V_ν(0,x) ≤ J_ν,ℛ(0, x, m,Z) = lim_h→ 0 J^h,ν_h(α̂^h,ν_h) ≤lim sup_h→ 0 J^h,ν_h(β^h,ν_h) ≤ J_ν(0, x, β, Z) + ε_0. Since ε_0>0, the system Ξ̅, and (β, Z) ∈𝒜(Ξ̅, 0, x, ν) are arbitrary, we have (<ref>), completing the proof of the theorem. We now prove (<ref>). Using arguments as in the proof of <cit.>, it can be shown that there is a θ_1:[0,∞) → [0,∞) such that θ_1(κ)→ 0 as κ→ 0 and, for every ε>0, there are a system Ξ^ε≐ (Ω^ε,ℱ^ε,{ℱ_t^ε},ℙ^ε, B^ε) and (β^ε, Z^ε) ∈𝒜(Ξ^ε, 0,x,ν) with the following properties. * Z^ε satisfies the following equation for t ∈ [0,T]: Z^ε(t) ≡ (X^ε,Y^ε,R^ε)(t)=Γ(x+∫_0^· b(s, ν(s), X^ε(s), β^ε(s))ds +σ B^ε(·) )(t). * For some δ > 0, β^ε is piecewise constant on intervals of the form [lδ,(l+1)δ), l=0,1,…, T/δ. For some finite set U^ε⊂ U, β^ε(s) takes values in U^ε for every s ∈ [0,T]. * For some θ>0, for each u ∈ U^ε, ℙ^ε(β^ε(lδ)=u|B^ε(s),s≤ lδ, β^ε(jδ),j<l) =ℙ^ε(β^ε(lδ)=u|B^ε(pθ),pθ≤ lδ, β^ε(jδ),j<l)= F^ε_u(B^ε(pθ),pθ≤ lδ, β^ε(jδ),j<l), where for suitable Î,L̂∈ℕ, F^ε_u: ℝ^Î× (U^ε)^L̂→ [0,1] is a measurable function such that F^ε_u(·, 𝐮) is continuous on ℝ^Î for every 𝐮∈ (U^ε)^L̂. * Letting m^ε(du dt) ≐δ_β^ε(t)(du) dt and m(du dt) ≐δ_β(t)(du) dt, as ε→ 0, (X^ε,Y^ε, R^ε, B^ε, m^ε) ⇒ (X,Y, R, B, m) in C([0,T]:ℝ^4)×ℳ(U× [0,T]). * J_ν(0,x,β^ε,Z^ε) ≤ J_ν(0,x,β,Z) + θ_1(ε). We will now use the piecewise constant control β^ε to construct a collection of control sequences {β^h,ν_h_n} as stated above (<ref>), and for which lim_h→ 0 J^h,ν_h(β^h,ν_h) = J_ν(0, x, β^ε, Z^ε). Note that since θ_1(ε)→ 0 as ε→ 0, this will prove (<ref>) and complete the proof of the theorem. The construction is carried out as follows. * Define X^h,ν_h_0 = x^h_0, t_0^h,ν_h=0, and let β_-1^h,ν_h be a fixed element of U^ε. * Having defined for i=0, 1, …, n time instants t_i^h,ν_h<T and random variables X_i^h,ν_h, β^h,ν_h_i-1 with values in 𝕊^h and U^ε respectively, let ℱ_i^h,ν_h≐σ{X^h,ν_h_j, β^h,ν_h_j-1: j = 0, 1, …, i}. * Choose the control β_n^h,ν_h for the n-th step to be a U^ε-valued ℱ_n^h,ν_h-measurable random variable as follows: * If t_n-1^h,ν_h and t_n^h,ν_h both lie in [jδ, (j+1)δ) for some j, set β^h,ν_h_n = β^h,ν_h_n-1. * If t_n-1^h,ν_h< lδ≤ t_n^h,ν_h < (l+1)δ for some l, choose β^h,ν_h_n according to the conditional distribution ℙ'(β^h,ν_h_n = u| X_i^h,ν_h, β^h,ν_h_i-1, 0 ≤ i ≤ n) = F_u^ε(B^h,ν_h(pθ),pθ≤ lδ, β^h,ν_h(jδ),j<l), where B^h,ν_h is defined as in (<ref>), β^h,ν_h(s) = β^h,ν_h_n^h,ν_h(s), and n^h,ν_h(·) is as in (<ref>).
* Let X^h,ν_h_n+1 be such that its conditional distribution given ℱ^h,ν_h_n equals q^h(t_n^h,ν_h, ν_h(t_n^h,ν_h), β_n^h,ν_h; X_n^h,ν_h, ·), where ν_h is as introduced above Theorem <ref>. Also define t^h,ν_h_n+1≐ t_n^h,ν_h + Δ^h 1_{X_n^h,ν_h∉{-h, L+h}}. Now define the processes X^h,ν_h, F^h,ν_h, Y^h,ν_h, R^h,ν_h as in (<ref>)–(<ref>). Exactly as in the first part of the proof, we now have that {(X^h,ν_h, Y^h,ν_h, R^h,ν_h, B^h,ν_h)}_h is C-tight in D([0,T]:ℝ^4) and the sequence {m^h,ν_h}_h>0, where m^h,ν_h(du ds) = δ_β^h,ν_h(s)(du) ds, is tight in ℳ(U× [0,T]). Arguing as before, if along a further subsequence the convergence (<ref>) holds (with m̂^h,ν_h replaced with m^h,ν_h), then parts (a) and (b) as in the statement of Theorem <ref> are satisfied with ν as in (<ref>). Using the continuity property of F^ε_u and the fact that the control is piecewise constant with values in a finite set, it follows (cf. the proof of <cit.>) that (B, m) has the same distribution as (B^ε, m^ε). By the unique solvability of (<ref>), which follows from the Lipschitz property of b (Assumption <ref>) and the Lipschitz property of the Skorohod map (Lemma <ref>), we now have that (Z^ε, m^ε) has the same law as (Z, m) for every limit point of the chosen further subsequence. Since the chosen further subsequence was arbitrary, this proves the weak convergence of (Z^h,ν_h, m^h,ν_h) to (Z^ε,m^ε) along the subsequence fixed above (<ref>), and arguing once again as in the proof of Proposition 4.2 in <cit.>, we have the convergence of costs as in (<ref>), completing the proof of the theorem. § PROOFS OF RESULTS FROM SECTION <REF> In this section we provide the proofs of Lemmas <ref>, <ref> and Proposition <ref>. Proof of Lemma <ref>. We start by analyzing the evolution of the value function V^h_ν. Clearly, V^h_ν(T,x) = g(ν(T),x) for all x ∈𝕊^h_0. Using backward induction, we get that for any (t,x,ν)∈𝕋^h×𝕊^h_0×𝒫_T,L, one has from the definition of α̂ (cf. (<ref>)), of q^h (see (<ref>)), and from (<ref>)-(<ref>), V^h_ν(t,x) =min_u∈ U{Δ^hf(t,ν(t),x,u)+q^h(t,ν(t),u;x,x+h)V^h_ν(t+Δ^h,x+h)+q^h(t,ν(t),u;x,x-h)V^h_ν(t+Δ^h,x-h)} =Δ^hf(t,ν(t),x,α̂^h,ν(t,x))+q^h(t,ν(t),α̂^h,ν(t,x);x,x+h)V^h_ν(t+Δ^h,x+h) +q^h(t,ν(t),α̂^h,ν(t,x);x,x-h)V^h_ν(t+Δ^h,x-h), where α̂^h,ν is as in (<ref>). The above identity in particular shows that ϑ^h,ν gives an optimal feedback control. We now use (<ref>) to prove (<ref>). Define the following h-th finite differences for (t,x)∈𝕋^h×𝕊^h_0: D^h_tV^h_ν(t,x) ≐(1/Δ^h)(V^h_ν(t+Δ^h,x)-V^h_ν(t,x)), D^h_xxV^h_ν(t,x) ≐(1/h^2)(V^h_ν(t+Δ^h,x+h)-2V^h_ν(t+Δ^h,x)+V^h_ν(t+Δ^h,x-h)). Simplifying (<ref>) by using (<ref>), we get that D^h_tV^h_ν(t,x) = -H(t,ν(t),x,D^h_xV^h_ν(t,x)) -(1/2)σ^2D^h_xxV^h_ν(t,x). Notice also that X^h,ν(t+Δ^h)≠ X^h,ν(t) if and only if R^h,ν(t+Δ^h)-R^h,ν(t)=Y^h,ν(t+Δ^h)-Y^h,ν(t)=0, in which case one has V^h_ν(t+Δ^h,X^h,ν(t+Δ^h))-V^h_ν(t,X^h,ν(t))=D^h_tV^h_ν(t,X^h,ν(t))Δ^h+D^h_xV^h_ν(t,X^h,ν(t))(X^h,ν(t+Δ^h)-X^h,ν(t))+(1/2)h^2D^h_xxV^h_ν(t,X^h,ν(t)). This can be easily verified by considering separately the cases X^h,ν(t+Δ^h)=X^h,ν(t)± h.
In case that R^h,ν(t+Δ^h)-R^h,ν(t) = h, that is X^h,ν(t+Δ^h) = X^h,ν(t) = L, (<ref>) implies that

V^h_ν(t+Δ^h,X^h,ν(t+Δ^h)) - V^h_ν(t,X^h,ν(t)) = V^h_ν(t+Δ^h,L) - V^h_ν(t+Δ^h,L+h) + V^h_ν(t+Δ^h,L+h) - V^h_ν(t,L)
= -r(t+Δ^h)h + D^h_t V^h_ν(t,X^h,ν(t))Δ^h + D^h_x V^h_ν(t,X^h,ν(t))h + (1/2)h^2 D^h_xx V^h_ν(t,X^h,ν(t)),

and finally, in case that Y^h,ν(t+Δ^h)-Y^h,ν(t) = h, one similarly has

V^h_ν(t+Δ^h,X^h,ν(t+Δ^h)) - V^h_ν(t,X^h,ν(t)) = -y(t+Δ^h)h + D^h_t V^h_ν(t,X^h,ν(t))Δ^h + D^h_x V^h_ν(t,X^h,ν(t))(-h) + (1/2)h^2 D^h_xx V^h_ν(t,X^h,ν(t)).

From the last two equalities, we get that

V^h_ν(t+Δ^h,X^h,ν(t+Δ^h)) - V^h_ν(t,X^h,ν(t))
= -H(t,ν(t),X^h,ν(t),D^h_x V^h_ν(t,X^h,ν(t)))Δ^h + D^h_x V^h_ν(t,X^h,ν(t))(X^h,ν(t+Δ^h)-X^h,ν(t)) - (y(t+Δ^h) + D^h_x V^h_ν(t,X^h,ν(t)))(Y^h,ν(t+Δ^h)-Y^h,ν(t)) - (r(t+Δ^h) - D^h_x V^h_ν(t,X^h,ν(t)))(R^h,ν(t+Δ^h)-R^h,ν(t))
= -f(t,ν(t),X^h,ν(t),ϑ^h,ν(t,X^h,ν(t)))Δ^h + σ D^h_x V^h_ν(t,X^h,ν(t))(B^h,ν(t+Δ^h)-B^h,ν(t)) - y(t+Δ^h)(Y^h,ν(t+Δ^h)-Y^h,ν(t)) - r(t+Δ^h)(R^h,ν(t+Δ^h)-R^h,ν(t)).

Summing up the terms over t ∈ 𝕋^h, one gets (<ref>). We will postpone the proof of (<ref>) to the end of the paper.

Proof of Lemma <ref>. Fix t ∈ [0,T], η,η' ∈ 𝒫([0,L]), x,x' ∈ [0,L], and p,p' ∈ ℝ. Denote α = α(t,η,x,p) and α' = α(t,η',x',p'). Recall the definition of h from (<ref>). By (<ref>), (<ref>), and the definition of α', we get that

h(t,η',x',α,p') ≥ h(t,η',x',α',p') ≥ h(t,η',x',α,p') + (α'-α)h_α(t,η',x',α,p') + c_m|α'-α|^2.

From the minimizing property of α we see that (α'-α)h_α(t,η,x,α,p) ≥ 0. Subtracting this term from the right side of the above,

c_m|α'-α|^2 ≤ |α'-α|·|h_α(t,η',x',α,p') - h_α(t,η,x,α,p)| ≤ c|α'-α|(W_2(η,η') + |x-x'| + |p-p'|),

where c = c_l + sup_t∈[0,T]|b_2(t)| and the second inequality follows by (<ref>) and (<ref>). The result follows on dividing both sides by c_m|α'-α|.

Proof of Proposition <ref>. We begin by introducing a coupling between two optimally controlled chains, one associated with ν and the other with ν'.

Coupling. Fix x ∈ [0,L] and a pair of measure flows ν, ν'. Let {X^ν_n} and {X^ν'_n} be the Markov chains from Construction <ref> associated with the parameter h and the optimal strategies given by (<ref>). Denote by Σ^ν = (n^ν,F^ν,B^ν,X^ν,Y^ν,R^ν) and Σ^ν' = (n^ν',F^ν',B^ν',X^ν',Y^ν',R^ν') the processes that were defined immediately after Construction <ref>, where we suppress the index h since it is fixed in the rest of the proof. Also, denote

b^ν(t) ≐ b(l^h(t), ν(l^h(t)), X^ν(t), ϑ^ν(t,X^ν(t))),

where recall that l^h(t) = ⌊t/Δ^h⌋Δ^h. Similarly define b^ν'. We now define a coupling of the chains through a time change of an underlying Markov chain {(Z^ν_n,Z^ν'_n)}. The main idea in the construction of the latter Markov chain is to keep track of the proper time. Whenever an `instantaneous jump' occurs for only one of the Z-processes, the other process has a degenerate step, that is, it remains at the same position. Therefore, we use two sequences of times. The first, which we refer to as time instants, {t_n}, has the same role as in (<ref>). The second is referred to as time steps and denoted by {(N^ν_n,N^ν'_n)}. Each of the components counts how many non-degenerate steps the respective Z-process has taken so far. Set Z^ν_0 = Z^ν'_0 = x_0, t_0 = 0, and N^ν_0 = N^ν'_0 = 0. Having defined for i=0,1,…,n time instants t_i < T, time steps N^ν_i, N^ν'_i ∈ ℕ, and random variables Z^ν_i, Z^ν'_i with values in 𝕊^h, define them for the (n+1)-th step as follows.
* If Z^ν_n, Z^ν'_n ∉ {-h,L+h}, then (Z^ν_n+1,Z^ν'_n+1) = (Z^ν_n,Z^ν'_n) +
(h,h), w.p. (h min{b^ν(t_n),b^ν'(t_n)} + σ^2)/(2σ^2),
(h,-h), w.p. h(b^ν(t_n) - b^ν'(t_n))^+/(2σ^2),
(-h,h), w.p. h(b^ν(t_n) - b^ν'(t_n))^-/(2σ^2),
(-h,-h), w.p. (-h max{b^ν(t_n),b^ν'(t_n)} + σ^2)/(2σ^2),
where w.p.
stands for `with probability', and t_n+1 = t_n + Δ^h, N^ν_n+1 = N^ν_n + 1, and N^ν'_n+1 = N^ν'_n + 1, where x^+ = max{0,x} and x^- = max{0,-x}.
* If Z^ν_n ∉ {-h,L+h} and Z^ν'_n ∈ {-h,L+h}, then (Z^ν_n+1,Z^ν'_n+1) = (Z^ν_n,Z^ν'_n) + (0, h(-1)^1_{Z^ν'_n=L+h}) and t_n+1 = t_n, N^ν_n+1 = N^ν_n, and N^ν'_n+1 = N^ν'_n + 1. The transition probabilities when Z^ν'_n ∉ {-h,L+h} and Z^ν_n ∈ {-h,L+h} are defined similarly.
* If Z^ν_n, Z^ν'_n ∈ {-h,L+h}, then with probability 1, (Z^ν_n+1,Z^ν'_n+1) = (Z^ν_n,Z^ν'_n) + (h(-1)^1_{Z^ν_n=L+h}, h(-1)^1_{Z^ν'_n=L+h}) and t_n+1 = t_n, N^ν_n+1 = N^ν_n + 1, and N^ν'_n+1 = N^ν'_n + 1.
* For every n ∈ ℕ, set X^ν_n ≐ Z^ν_M^ν_n and X^ν'_n ≐ Z^ν'_M^ν'_n, where M^ν_n ≐ max{m : N^ν_m ≤ n} and M^ν'_n ≐ max{m : N^ν'_m ≤ n}.

With the above construction, {X^ν_n} and {X^ν'_n} are controlled Markov chains constructed using the optimal feedback controls ϑ^ν and ϑ^ν' respectively, given on the same probability space. Also, relationships (<ref>)–(<ref>) are satisfied by Σ^ν and Σ^ν'. The above coupling of the two processes gives the joint evolution (X^ν(t),X^ν'(t))_0≤t≤T as follows. X^ν(0) = X^ν'(0) = x_0 and for every t ∈ 𝕋^h, (X^ν(t+Δ^h),X^ν'(t+Δ^h)) - (X^ν(t),X^ν'(t)) =
(h1_{X^ν(t)≠L}, h1_{X^ν'(t)≠L}), w.p. (h min{b^ν(t),b^ν'(t)} + σ^2)/(2σ^2),
(h1_{X^ν(t)≠L}, -h1_{X^ν'(t)≠0}), w.p. h(Δb(t))^+/(2σ^2),
(-h1_{X^ν(t)≠0}, h1_{X^ν'(t)≠L}), w.p. h(Δb(t))^-/(2σ^2),
(-h1_{X^ν(t)≠0}, -h1_{X^ν'(t)≠0}), w.p. (-h max{b^ν(t),b^ν'(t)} + σ^2)/(2σ^2),
where Δb(t) ≐ b^ν(t) - b^ν'(t), t ∈ [0,T]. We also define the corresponding `unconstrained' increment as (Z^ν(t+Δ^h),Z^ν'(t+Δ^h)) - (X^ν(t),X^ν'(t)) ≐
(h,h), w.p. (h min{b^ν(t),b^ν'(t)} + σ^2)/(2σ^2),
(h,-h), w.p. h(Δb(t))^+/(2σ^2),
(-h,h), w.p. h(Δb(t))^-/(2σ^2),
(-h,-h), w.p. (-h max{b^ν(t),b^ν'(t)} + σ^2)/(2σ^2).

Bounding W^2_1(Φ^h(ν),Φ^h(ν')). Denote ΔX(t) ≐ X^ν(t) - X^ν'(t). The processes ΔB, ΔR and ΔY are defined similarly. Note that

W^2_1(Φ^h(ν),Φ^h(ν')) ≤ 𝔼[|ΔX|^2_T].

We now estimate 𝔼[|ΔX|^2_T]. Recall that ΔX(0) = 0. From (<ref>) and Lemma <ref>,

|ΔX|_T ≤ |ΔX|_T + |ΔY|_T + |ΔR|_T ≤ c_S(∫_0^T|Δb(s)|ds + σ|ΔB|_T).

Therefore, 𝔼[|ΔX|^2_T] ≤ 2c^2_S 𝔼[(∫_0^T|Δb(s)|ds)^2 + σ^2|ΔB|^2_T]. We now estimate the second term on the right side. By using the martingale property of B^h,ν(t) - B^h,ν'(t) and Doob's inequality,

𝔼[|ΔB|_T^2] ≤ 4𝔼[∑_s∈𝕋^h (ΔB(s+Δ^h) - ΔB(s))^2].

From (<ref>) and (<ref>),

σ|ΔB(s+Δ^h) - ΔB(s)| ≤ |(ΔX+ΔR-ΔY)(s+Δ^h) - (ΔX+ΔR-ΔY)(s)| + Δ^h|Δb(s)|.

If (Z^ν(s+Δ^h) - X^ν(s))(Z^ν'(s+Δ^h) - X^ν'(s)) > 0, i.e., the unconstrained increments are of the same sign, then

|(ΔX+ΔR-ΔY)(s+Δ^h) - (ΔX+ΔR-ΔY)(s)| = |(X^ν+R^ν-Y^ν)(s+Δ^h) - (X^ν+R^ν-Y^ν)(s) - [(X^ν'+R^ν'-Y^ν')(s+Δ^h) - (X^ν'+R^ν'-Y^ν')(s)]| = 0.

If the signs are different, i.e., (Z^ν(s+Δ^h) - X^ν(s))(Z^ν'(s+Δ^h) - X^ν'(s)) < 0, then |(ΔX+ΔR-ΔY)(s+Δ^h) - (ΔX+ΔR-ΔY)(s)| ≤ 2h. Hence,

σ|ΔB(s+Δ^h) - ΔB(s)| ≤ 2h·1_{Ê_s} + |Δb(s)|Δ^h,

where Ê_s ≐ {ω : (Z^ν(s+Δ^h) - X^ν(s))(Z^ν'(s+Δ^h) - X^ν'(s)) < 0}. From (<ref>) we now have that

ℙ(Ê_s | ℱ^h_s) ≤ h|Δb(s)|/(2σ^2),

where {ℱ^h_t} is the filtration generated by the process (X^ν(t),X^ν'(t))_0≤t≤T. As a consequence,

𝔼[(σ(ΔB(s+Δ^h) - ΔB(s)))^2 | ℱ^h_s] ≤ 𝔼[(2h·1_{Ê_s} + |Δb(s)|Δ^h)^2 | ℱ^h_s] ≤ 2h|Δb(s)|(Δ^h + ChΔ^h),

where in the above expression, and in the rest of the proof, C refers to a finite positive constant that is independent of h, s, ν, ν' and which can change from one line to the next.
Applying the above bound to (<ref>) and taking h sufficiently small that Ch ≤ 1/2, we get that for sufficiently small h,

σ^2 𝔼[|ΔB|^2_T] ≤ 12Δ^h ∑_s∈𝕋^h h𝔼[|Δb(s)|] = 12𝔼[∫_0^T h|Δb(s)|ds].

Combining this with (<ref>) and using the inequality

h∫_0^T|Δb(s)|ds ≤ (1/2)[h^2 T^1/2 + T^-1/2(∫_0^T|Δb(s)|ds)^2] ≤ (1/2)T^1/2[h^2 + ∫_0^T|Δb(s)|^2 ds],

we get that

𝔼[|ΔX|^2_T] ≤ C𝔼[(∫_0^T|Δb(s)|ds)^2 + h∫_0^T|Δb(s)|ds] ≤ CT^1/2 h^2 + C(T^1/2+T)𝔼[∫_0^T(ΔW(s))^2 ds + ∫_0^T(ΔX(s))^2 ds + ∫_0^T(Δ^h_x V^h(s))^2 ds],

where for t ∈ [0,T]

Δ^h_x V^h(t) ≐ D^h_x V^h_ν(l^h(t),X^ν(t)) - D^h_x V^h_ν'(l^h(t),X^ν(t)),  ΔW(t) ≐ W_1(ν(l^h(t)),ν'(l^h(t))),

and the above inequality also uses the Lipschitz property of b (Assumption <ref>), the Lipschitz property of α (Assumption <ref>) and (<ref>). We now consider the last term on the right side of (<ref>). We will show that for some T̂ that does not depend on h, ν, ν', and all T ≤ T̂,

𝔼[∫_0^T(Δ^h_x V^h(s))^2 ds] ≤ C(h^2 + sup_0≤s≤T(ΔW(s))^2 + 𝔼[|ΔX|_T^2]).

Define

ΔV^h(s) ≐ V^h_ν(s,X^ν(s)) - V^h_ν'(s,X^ν'(s)),
Δg(T) ≐ ΔV^h(T) = g(ν(T),X^ν(T)) - g(ν'(T),X^ν'(T)),
Δf(s) ≐ f(l^h(s),ν(l^h(s)),X^ν(s),ϑ^h,ν(s,X^ν(s))) - f(l^h(s),ν'(l^h(s)),X^ν'(s),ϑ^h,ν'(s,X^ν'(s))).

From (<ref>) we get that

ΔV^h(0) + σ∑_s∈𝕋^h Δ^h_x V^h(s)(B^ν(s+Δ^h) - B^ν(s)) = -σ∑_s∈𝕋^h D^h_x V^h_ν'(s,X^ν'(s))(ΔB(s+Δ^h) - ΔB(s)) + Δg(T) + ∫_0^T Δf(s)ds + ∫_0^T y(s)d(ΔY(s)) + ∫_0^T r(s)d(ΔR(s)).

By squaring both sides and taking expectations, we have

(ΔV^h(0))^2 + 𝔼[(σ∑_s∈𝕋^h Δ^h_x V^h(s)(B^ν(s+Δ^h) - B^ν(s)))^2]
≤ 5{𝔼[(σ∑_s∈𝕋^h D^h_x V^h_ν'(s,X^ν'(s))(ΔB(s+Δ^h) - ΔB(s)))^2] + 𝔼[(Δg(T))^2] + 𝔼[(∫_0^T Δf(s)ds)^2] + 𝔼[(∫_0^T y(s)d(ΔY(s)))^2] + 𝔼[(∫_0^T r(s)d(ΔR(s)))^2]}.

Here we have used the fact that B^h,ν is a {ℱ^h,ν_t}-martingale and therefore has mean 0. Also, we use the elementary inequality (∑_i=1^5 a_i)^2 ≤ 5∑_i=1^5 a_i^2. We now bound the terms in the inequality above. For any s ∈ 𝕋^h, we get by (<ref>) and (<ref>) that

𝔼[(Δ^h_x V^h(s)(B^ν(s+Δ^h) - B^ν(s)))^2 | ℱ^h_s] = (Δ^h_x V^h(s))^2(Δ^h + C(Δ^h)^2).

Using once more the martingale property of B^ν we get that, for sufficiently small h,

𝔼[(∑_s∈𝕋^h Δ^h_x V^h(s)(B^ν(s+Δ^h) - B^ν(s)))^2] = ∑_s∈𝕋^h 𝔼[(Δ^h_x V^h(s)(B^ν(s+Δ^h) - B^ν(s)))^2] ≥ (1/2)𝔼[∫_0^T(Δ^h_x V^h(s))^2 ds].

Recall that from (<ref>), |D^h_x V^h_ν'(·,X^ν'(·))|_T ≤ c_d(M) whenever T ≤ M. Henceforth we will only consider T ∈ [0,M]. Using (<ref>), we get that for sufficiently small h,

𝔼[(σ∑_s∈𝕋^h D^h_x V^h_ν'(s,X^ν'(s))(ΔB(s+Δ^h) - ΔB(s)))^2] = ∑_s∈𝕋^h 𝔼[(σ D^h_x V^h_ν'(s,X^ν'(s))(ΔB(s+Δ^h) - ΔB(s)))^2] ≤ 3(c_d(M))^2 𝔼[∫_0^T h|Δb(s)|ds] ≤ CT^1/2 h^2 + CT^1/2 𝔼[∫_0^T(ΔW(s))^2 ds + ∫_0^T(ΔX(s))^2 ds + ∫_0^T(Δ^h_x V^h(s))^2 ds],

where the last inequality follows by a similar inequality as in (<ref>). From (<ref>), it is easy to see that

𝔼[(Δg(T))^2] ≤ C((ΔW(T))^2 + 𝔼[(ΔX(T))^2]),

and that

𝔼[(∫_0^T Δf(s)ds)^2] ≤ C𝔼{(∫_0^T ΔW(s)ds)^2 + (∫_0^T ΔX(s)ds)^2 + (∫_0^T Δ^h_x V^h(s)ds)^2} ≤ CT𝔼[∫_0^T(ΔW(s))^2 ds + ∫_0^T(ΔX(s))^2 ds + ∫_0^T(Δ^h_x V^h(s))^2 ds].

Using integration by parts and the boundedness of y,

𝔼[(∫_0^T y(s)d(ΔY(s)))^2] ≤ C𝔼[|ΔY|^2_T] ≤ C𝔼[|F^ν + σB^ν - F^ν' - σB^ν'|^2_T] ≤ C𝔼[(∫_0^T|Δb(s)|ds)^2 + σ^2|ΔB|_T^2] ≤ CT^1/2 h^2 + C(T^1/2+T)𝔼[∫_0^T(ΔW(s))^2 ds + ∫_0^T(ΔX(s))^2 ds + ∫_0^T(Δ^h_x V^h(s))^2 ds],

where the inequality on the third line is from Lemma <ref>, the fourth line is from (<ref>), and the last line uses (<ref>) and (<ref>). A similar bound holds for 𝔼[(∫_0^T r(s)d(ΔR(s)))^2]. From (<ref>)–(<ref>), we have

(ΔV^h(0))^2 + (1/2)σ^2 𝔼[∫_0^T(Δ^h_x V^h(s))^2 ds] ≤ C((ΔW(T))^2 + 𝔼[(ΔX(T))^2]) + CT^1/2 h^2 + C(T^1/2+T)𝔼[∫_0^T(ΔW(s))^2 ds + ∫_0^T(ΔX(s))^2 ds + ∫_0^T(Δ^h_x V^h(s))^2 ds].

Thus we can find a T̂_1 ∈ (0,M) and ĥ_1 > 0 such that for all T ≤ T̂_1 and h ≤ ĥ_1 ∧ T̂_1, (<ref>) is satisfied.
Together with (<ref>) we now get that there exist T̂ ∈ (0,M), ĥ > 0 and q ∈ (0,1) such that for every T ≤ T̂ and h ∈ (0, ĥ∧T̂),

W^2_1(Φ^h(ν),Φ^h(ν')) ≤ 𝔼[|ΔX|^2_T] ≤ q(h^2 + sup_0≤s≤T(ΔW(s))^2) ≤ q(h^2 + W^2_1(ν,ν')).

We finally prove the last statement in Lemma <ref>, namely the inequality in (<ref>).

Proof of (<ref>). Fix a measure flow ν, x ≠ x' in 𝕊^h_0, and t_0 ∈ 𝕋^h, which will be regarded as the initial time. As in the proof of Proposition <ref>, one can define a coupling of two processes on the same h-grid, both of which are driven by the same ν. The first one is denoted (X(s),Y(s),R(s),B(s),α(s))_t_0≤s≤T, where its components are defined in (<ref>)–(<ref>) with X(t_0) = x and α is the optimal policy for this process. The second process, denoted (X'(s),Y'(s),R'(s),B'(s),α(s))_t_0≤s≤T, is also given by (<ref>)–(<ref>) using the same control process {α(s)} as for the first one, except that the second one starts at x', i.e., X'(t_0) = x'. For every s ∈ [t_0,T], let ΔX(s) ≐ X(s) - X'(s) and

Δb(s) ≐ b(l^h(s),ν(l^h(s)),X(s),α(s)) - b(l^h(s),ν(l^h(s)),X'(s),α(s)).

The processes ΔY(s) and ΔR(s) are defined in a similar manner. By definition ΔX(t_0) = x - x'. The arguments that lead to (<ref>) can also be applied here, and in fact they are simpler here since ΔW = 0 and the controls are the same for both processes. Specifically,

𝔼[sup_t_0≤s≤T(|ΔX(s)| + |ΔR(s)| + |ΔY(s)|)^2] ≤ C(x-x')^2 + C_1(T-t_0)^1/2 h^2 + C((T-t_0)^1/2 + (T-t_0))𝔼[∫_t_0^T(ΔX(s))^2 ds] ≤ C(x-x')^2(1+T^1/2) + C(T^1/2+T)𝔼[∫_t_0^T(ΔX(s))^2 ds].

The second inequality is a consequence of the fact that, since x ≠ x', h ≤ |x-x'|. Therefore,

𝔼[sup_t_0≤s≤T(ΔX(s))^2] ≤ C(x-x')^2(1+T^1/2) + C(T^1/2+T)𝔼[∫_t_0^T(ΔX(s))^2 ds].

By Grönwall's inequality,

𝔼[sup_t_0≤s≤T(ΔX(s))^2] ≤ c_T|x-x'|^2,  where c_T ≐ C(1+T^1/2)exp{C_1(T^3/2+T^2)}.

Using the above bound, we have that the left side in (<ref>) can be bounded above by

C(x-x')^2(1+T^1/2) + C(T^1/2+T)T𝔼[sup_t_0≤s≤T(ΔX(s))^2] ≤ c̃_T|x-x'|^2,

where c̃_T := (4CL^2(1+T^1/2) + C(T^1/2+T)Tc_T)^1/2, and so

𝔼[sup_t_0≤s≤T(|ΔX(s)| + |ΔY(s)| + |ΔR(s)|)] ≤ (c̃_T)^1/2|x-x'|.

Consequently, using integration by parts as in (<ref>) and the Lipschitz properties of f and g, we get that

V^h_ν(t_0,x') - V^h_ν(t_0,x) ≤ J^h,ν(t_0,x',α) - J^h,ν(t_0,x,α) ≤ 𝔼[|g(ν(T),X'(T)) - g(ν(T),X(T))| + |∫_t_0^T y(s)d(ΔY(s))| + |∫_t_0^T r(s)d(ΔR(s))| + ∫_t_0^T|f(l^h(s),ν(l^h(s)),X'(s),α(s)) - f(l^h(s),ν(l^h(s)),X(s),α(s))|ds] ≤ c̅_T|x-x'|,

where c̅_T depends on the parameters c_L, c_S, the bounds on y and r, and the terminal time T. By reversing the roles of the processes we get that for every t_0 ∈ 𝕋^h, |V^h_ν(t_0,x) - V^h_ν(t_0,x')| ≤ c̅_T|x-x'|, and the result follows.

§ NUMERICAL STUDY
In this section we present a numerical example. We set the parameters L=1, T=0.4, σ=1, and U={-0.75, 0.25}. Also, b(t,η,x,α) = 2x + 7α, f(t,η,x,α) = (4x - 5η̅)^2 + α^2, g(η,x) = (4x - 5η̅)^2, y(t,x) = 0, and r(t,x) = 15, where η̅ is the mean of η. Assumption <ref> obviously holds and therefore, by Proposition <ref>, the MFG admits a unique solution and Assumption <ref> holds. The initial state of the MFG is taken to be X(0) = 0.5 and in the numerical scheme x^h = X^h(0) = ⌊x/h⌋·h. We choose the initial function ν^h(t) = δ_⌊x/h⌋·h, t ∈ [0,T]. We implemented the algorithm described in Construction <ref> by computing 15 iterations of the map Φ^h for each h taken from the set {1/5, 1/10, 1/15, 1/20, 1/25}. For each h we calculated the value function of the MDP after each iteration, V^h(x^h). Since our example depends on ν^h only through its mean, we also calculated the mean of ν^h(·), which we denote by ϖ^h.
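To make the iteration concrete, the following is a minimal sketch of one way the fixed-point map Φ^h can be implemented for this example. It is not the authors' code: we assume the standard Markov-chain-approximation transition probabilities (σ^2 ± hb)/(2σ^2) with time step Δ^h = h^2/σ^2, we fold the reflection at 0 and L directly into the chain by paying the costs y and r per unit of overshoot (a simplification of the construction in the paper), we use the drift b(t,η,x,α) = 2x + 7α as reconstructed above, and all function and variable names are ours.

```python
import numpy as np

L, T, sigma = 1.0, 0.4, 1.0
U = np.array([-0.75, 0.25])
r_cost, y_cost = 15.0, 0.0            # reflection costs at L and 0

def b(x, u):                          # drift, reconstructed as 2x + 7u
    return 2.0 * x + 7.0 * u

def f(x, m, u):                       # running cost, m = mean of nu(t)
    return (4.0 * x - 5.0 * m) ** 2 + u ** 2

def g(x, m):                          # terminal cost
    return (4.0 * x - 5.0 * m) ** 2

def phi_h(h, means):
    """One application of Phi^h: backward induction for V^h under the measure
    flow encoded by `means`, then a forward pass pushing the initial law
    through the optimally controlled chain."""
    dt = h ** 2 / sigma ** 2
    n_t = int(round(T / dt))
    xs = np.arange(0.0, L + h / 2, h)
    V = g(xs, means[-1])              # V^h(T, .) = g
    policy = np.zeros((n_t, xs.size), dtype=int)
    for k in range(n_t - 1, -1, -1):  # backward induction over T^h
        Q = np.empty((U.size, xs.size))
        for j, u in enumerate(U):
            p_up = np.clip((sigma**2 + h * b(xs, u)) / (2 * sigma**2), 0, 1)
            up = np.minimum(xs + h, L)        # reflect at L
            dn = np.maximum(xs - h, 0.0)      # reflect at 0
            refl = (r_cost * h * p_up * (xs == L)
                    + y_cost * h * (1 - p_up) * (xs == 0))
            cont = np.interp(up, xs, V) * p_up + np.interp(dn, xs, V) * (1 - p_up)
            Q[j] = dt * f(xs, means[k], u) + refl + cont
        policy[k] = Q.argmin(axis=0)
        V = Q.min(axis=0)
    # forward pass: propagate the law of the controlled chain
    i0 = int(0.5 / h)                 # floor, as x^h = floor(x/h)*h in the text
    law = np.zeros(xs.size); law[i0] = 1.0
    new_means = np.empty(n_t + 1); new_means[0] = law @ xs
    idx = np.arange(xs.size)
    for k in range(n_t):
        p_up = np.clip((sigma**2 + h * b(xs, U[policy[k]])) / (2 * sigma**2), 0, 1)
        nxt = np.zeros_like(law)
        np.add.at(nxt, np.minimum(idx + 1, xs.size - 1), law * p_up)
        np.add.at(nxt, np.maximum(idx - 1, 0), law * (1 - p_up))
        law = nxt
        new_means[k + 1] = law @ xs
    return new_means, V[i0]

h = 1.0 / 25.0
means = np.full(int(round(T / (h**2 / sigma**2))) + 1, 0.5)  # nu^h = delta_{x0}
for _ in range(15):                   # 15 iterations of Phi^h, as in the text
    means, value = phi_h(h, means)
```

The quantity `means` plays the role of ϖ^h and `value` that of V^h(x^h) after each iteration.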
Figure <ref> illustrates the convergence of the value functions of the MDPs to the value function of the MFG. The convergence of the means ϖ^h is illustrated in Figure <ref>. Finally, in Figure <ref> we present the distribution taken from the last iteration, ν^1/25. Here we provide a bird's-eye view of this distribution, where the darker areas represent greater density. As can be seen from Figure <ref>, shortly after time zero the means tend to increase with time. The reason for this is that there are two opposing “forces” in our example. The reflection cost r=15 “pushes” the process X^h downwards when it is close to the boundary L, and then the optimal control is close to -0.75. When X^h is relatively far away from the boundary, the control α = -0.75 is too costly and then the optimal control is close to 0.25. As we approach the terminal time, the reflection cost has less impact and therefore the distribution has higher expectation.

Acknowledgement. We are thankful to the anonymous referee and AE for their suggestions, which helped us to improve the presentation of the paper.

Erhan Bayraktar, Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA. email: [email protected]; web: www-personal.umich.edu/~erhan/

Amarjit Budhiraja, Department of Statistics and Operations Research, University of North Carolina, Chapel Hill, NC 27599, USA. email: [email protected]; web: http://www.unc.edu/~budhiraj/

Asaf Cohen, Department of Statistics, University of Haifa, Haifa 31905, Israel. email: [email protected]; web: https://sites.google.com/site/asafcohentau/
http://arxiv.org/abs/1708.08343v3
{ "authors": [ "Erhan Bayraktar", "Amarjit Budhiraja", "Asaf Cohen" ], "categories": [ "math.OC", "math.NA", "65M12, 60K25, 91A13, 60K35, 93E20, 65M12, 60F17" ], "primary_category": "math.OC", "published": "20170825043219", "title": "A numerical scheme for a mean field game in some queueing systems based on Markov chain approximation method" }
Linear Differential Constraints for Photo-polarimetric Height Estimation

Silvia Tozza, Sapienza - Università di Roma, [email protected]
William A. P. Smith, University of York, [email protected]
Dizhong Zhu, University of York, [email protected]
Ravi Ramamoorthi, UC San Diego, [email protected]
Edwin R. Hancock, University of York, [email protected]

In this paper we present a differential approach to photo-polarimetric shape estimation. We propose several alternative differential constraints based on polarisation and photometric shading information and show how to express them in a unified partial differential system. Our method uses the image ratios technique to combine shading and polarisation information in order to directly reconstruct surface height, without first computing surface normal vectors. Moreover, we are able to remove the non-linearities so that the problem reduces to solving a linear differential problem. We also introduce a new method for estimating a polarisation image from multichannel data and, finally, we show it is possible to estimate the illumination directions in a two-source setup, extending the method into an uncalibrated scenario. From a numerical point of view, we use a least-squares formulation of the discrete version of the problem. To the best of our knowledge, this is the first work to consider a unified differential approach to solve photo-polarimetric shape estimation directly for height. Numerical results on synthetic and real-world data confirm the effectiveness of our proposed method.

§ INTRODUCTION
A recent trend in photometric <cit.> and physics-based <cit.> shape recovery has been to develop methods that solve directly for surface height, rather than first estimating surface normals and then integrating them into a height map. Such methods are attractive since: 1. they only need to solve for a single height value at each pixel (as opposed to the two components of surface orientation), 2. integrability is guaranteed, 3. errors do not accumulate through a two-step pipeline of shape estimation and integration, and 4. they enable combination with cues that provide depth information directly <cit.>. In both photometric stereo <cit.> and recently in shape-from-polarisation (SfP) <cit.>, such a direct solution was made possible by deriving equations that are linear in the unknown surface gradient. In this paper, we explore the combination of SfP constraints with photometric constraints (i.e. photo-polarimetric shape estimation) provided by one or two light sources. Photometric stereo with three or more light sources is a very well studied problem with robust solutions available under a range of different assumptions. Two source photometric stereo is still considered a difficult problem <cit.> even when the illumination is calibrated and albedo is known. We show that various formulations of one and two source photo-polarimetric stereo lead to the same general problem (in terms of surface height), that illumination can be estimated and that certain combinations of constraints lead to an albedo invariant formulation.
Hence, with only modest additional data capture requirements (a polarisation image rather than an intensity image), we arrive at an approach for uncalibrated two source photometric stereo. We make the following novel contributions:
* We show how to estimate a polarisation image from multichannel data such as from colour images, multiple light source data or both (Sec. <ref>).
* We show how polarisation and photometric constraints (Sec. <ref>) can be expressed in a unified formulation (of which previous work <cit.> is a special case) and that various combinations of these constraints provide different practical advantages (Sec. <ref>).
* We show how to estimate the illumination directions in two source photo-polarimetric data leading to an uncalibrated solution (Sec. <ref>).

§.§ Related Work
The polarisation state of light reflected by a surface provides a cue to the material properties of the surface and, via a relationship with surface orientation, the shape. Polarisation has been used for a number of applications, including early work on material segmentation <cit.> and diffuse/specular reflectance separation <cit.>. However, there has been a resurgent interest <cit.> in using polarisation information for shape estimation.

Shape-from-polarisation. The degree to which light is linearly polarised and the orientation associated with maximum reflection are related to the two degrees of freedom of surface orientation. In theory, this polarisation information alone restricts the surface normal at each pixel to two possible directions. Both Atkinson and Hancock <cit.> and Miyazaki <cit.> solve the problem of disambiguating these polarisation normals via propagation from the boundary under an assumption of global convexity. Huynh <cit.> also disambiguate polarisation normals with a global convexity assumption but estimate refractive index in addition. These works all used a diffuse polarisation model while Morel <cit.> use a specular polarisation model for metals. Recently, Taamazyan <cit.> introduced a mixed specular/diffuse polarisation model. All of these methods estimate surface normals that must be integrated into a height map. Moreover, since they rely entirely on the weak shape cue provided by polarisation and do not enforce integrability, the results are extremely sensitive to noise.

Photo-polarimetric methods. There have been a number of attempts to combine photometric constraints with polarisation cues. Mahmoud <cit.> used a shape-from-shading cue with assumptions of known light source direction, known albedo and Lambertian reflectance to disambiguate the polarisation normals. Atkinson and Hancock <cit.> used calibrated, three source Lambertian photometric stereo for disambiguation while avoiding an assumption of known albedo. Smith <cit.> showed how to express polarisation and shading constraints directly in terms of surface height, leading to a robust and efficient linear least squares solution. They also show how to estimate the illumination, up to a binary ambiguity, making the method uncalibrated. However, they require known or uniform albedo. We explore variants of this method by introducing additional constraints that arise when a second light source is introduced, allowing us to relax the uniform albedo assumption. We also give an explanation for why the matrix they consider is full-rank except in a unique case. Recently, Ngo <cit.> derived constraints that allowed surface normals, light directions and refractive index to be estimated from polarisation images under varying lighting.
However, this approach requires at least 4 lights. All of the above methods operate on single channel images and do not exploit the information available in colour images.

Polarisation with additional cues. Rahmann and Canterakis <cit.> combined a specular polarisation model with stereo cues. Similarly, Atkinson and Hancock <cit.> used polarisation normals to segment an object into patches, simplifying stereo matching. Stereo polarisation cues have also been used for transparent surface modelling <cit.>. Huynh <cit.> extended their earlier work to use multispectral measurements to estimate both shape and refractive index. Drbohlav and Sara <cit.> showed how the Bas-relief ambiguity <cit.> in uncalibrated photometric stereo could be resolved using polarisation. However, this approach requires a polarised light source. Recently, Kadambi <cit.> proposed an interesting approach in which a single polarisation image is combined with a depth map obtained by an RGBD camera. The depth map is used to disambiguate the normals and provide a base surface for integration.

§ REPRESENTING POLARISATION INFORMATION
We place a camera at the origin of a three-dimensional coordinate system (Oxyz) in such a way that Oxy coincides with the image plane and Oz with the optical axis. In Sec. <ref> we propose a unified formulation for a variety of methods, all of which assume a) orthographic projection, b) known refractive index of the surface. Other assumptions will be given later on, depending on the specific problem at hand. We denote by 𝐯 the viewer direction and by 𝐬 a general light source direction with 𝐯 ≠ 𝐬. We only require the third components of these unit vectors to be greater than zero (all the vectors belong to the upper hemisphere). We will denote by 𝐭 a second light source where required. We parametrise the unknown surface height by the function z(𝐱), where 𝐱 = (x,y) is an image location, and the unit normal to the surface at the point 𝐱 is given by:

𝐧(𝐱) = 𝐧̂(𝐱)/|𝐧̂(𝐱)| = [-z_x, -z_y, 1]^T / √(1 + |∇z(𝐱)|^2),

where 𝐧̂(𝐱) is the outgoing normal vector and z_x, z_y denote the partial derivatives of z(𝐱) with respect to x and y, respectively, so that ∇z(𝐱) = (z_x, z_y). We now introduce the relevant polarisation theory, describing how we can estimate a polarisation image from multichannel data.

§.§ Polarisation image
When unpolarised light is reflected by a surface it becomes partially polarised <cit.>. A polarisation image can be estimated by capturing a sequence of images in which a linear polarising filter in front of the camera lens is rotated through a sequence of P ≥ 3 different angles ϑ_j, j ∈ {1, …, P}. The measured intensity at a pixel varies sinusoidally with the polariser angle:

i_ϑ_j(𝐱) = i_un(𝐱)(1 + ρ(𝐱)cos(2ϑ_j - 2ϕ(𝐱))).

The polarisation image is thus obtained by decomposing the sinusoid at every pixel location into three quantities <cit.>: the phase angle, ϕ(𝐱), the degree of polarisation, ρ(𝐱), and the unpolarised intensity, i_un(𝐱). The parameters of the sinusoid can be estimated from the captured image sequence using non-linear least squares <cit.>, linear methods <cit.> or via a closed form solution <cit.> for the specific case of P=3, ϑ ∈ {0°, 45°, 90°}.

§.§ Multichannel polarisation image estimation
A polarisation image is usually computed by fitting the sinusoid in (<ref>) to observed data in a least squares sense. Hence, from P ≥ 3 measurements we estimate i_un, ρ and ϕ. In practice, we may have access to multichannel measurements.
For example, we may capture colour images (3 channels), polarisation images with two different light source directions (2 channels) or both (6 channels). Since ρ and ϕ depend only on surface geometry (assuming that, in the case of colour images, the refractive index does not vary with wavelength), we expect these quantities to be constant over the channels. On the other hand, i_un will vary between channels either because of a shading change caused by the different lighting or because the albedo or light source intensity is different in the different colour channels. Hence, in a multichannel setting with C channels, we have C+2 unknowns and CP observations. If we use information across all channels simultaneously, the system is more constrained and the solution will be more robust to noise. Moreover, we do not need to make an arbitrary choice about the channel from which we estimate the polarisation image. This idea shares something in common with that of Narasimhan <cit.>, though their material/shape separation was not in the context of polarisation. Specifically, we can express the multichannel observations in channel c with polariser angle ϑ_j as

i^c_ϑ_j(𝐱) = i^c_un(𝐱)(1 + ρ(𝐱)cos(2ϑ_j - 2ϕ(𝐱))).

The system of equations is linear in the unpolarised intensities and, by a change of variables, can be made linear in ρ and ϕ <cit.>. Hence, we wish to solve a bilinear system and do so in a least squares sense using interleaved alternating minimisation. Specifically, we a) fix ρ and ϕ and then solve linearly for the unpolarised intensity in each channel and b) then fix the unpolarised intensities and solve linearly for ρ and ϕ using all channels simultaneously. Concretely, for a single pixel, we obtain the unpolarised intensities across channels by solving:

min_{i^1_un(𝐱), …, i^C_un(𝐱)} ‖C_I [i^1_un(𝐱), …, i^C_un(𝐱)]^T - d_I‖^2,

where C_I ∈ ℝ^CP×C is given by C_I = [(1 + ρ(𝐱)cos(2ϑ_1 - 2ϕ(𝐱)))I_C; ⋮; (1 + ρ(𝐱)cos(2ϑ_P - 2ϕ(𝐱)))I_C], with I_C denoting the C×C identity matrix, and d_I ∈ ℝ^CP is given by d_I = [i^1_ϑ_1(𝐱), …, i^C_ϑ_1(𝐱), i^1_ϑ_2(𝐱), …, i^C_ϑ_P(𝐱)]^T. Then, with the unpolarised intensities fixed, we solve for ρ and ϕ using the following linearisation:

min_{a,b} ‖C_ρϕ [a; b] - d_ρϕ‖^2,

where [a b]^T = [ρ(𝐱)cos(2ϕ(𝐱)), ρ(𝐱)sin(2ϕ(𝐱))]^T, and C_ρϕ ∈ ℝ^CP×2 is given by C_ρϕ = [i^1_un(𝐱)cos(2ϑ_1), i^1_un(𝐱)sin(2ϑ_1); ⋮; i^1_un(𝐱)cos(2ϑ_P), i^1_un(𝐱)sin(2ϑ_P); i^2_un(𝐱)cos(2ϑ_1), i^2_un(𝐱)sin(2ϑ_1); ⋮; i^C_un(𝐱)cos(2ϑ_P), i^C_un(𝐱)sin(2ϑ_P)], and d_ρϕ ∈ ℝ^CP is given by d_ρϕ = [i^1_ϑ_1(𝐱) - i^1_un(𝐱); ⋮; i^1_ϑ_P(𝐱) - i^1_un(𝐱); i^2_ϑ_1(𝐱) - i^2_un(𝐱); ⋮; i^C_ϑ_P(𝐱) - i^C_un(𝐱)]. We estimate ρ and ϕ from the linear parameters using ϕ(𝐱) = (1/2)atan2(b,a) and ρ(𝐱) = √(a^2 + b^2). We initialise by computing a polarisation image from one channel using linear least squares, as in <cit.>, and then use the estimated ρ and ϕ to begin alternating interleaved optimisation by solving for the unpolarised intensities across channels. We interleave and alternate the two steps until convergence (a per-pixel code sketch follows below). In practice, we find that this approach not only dramatically reduces noise in the polarisation images but also removes the ad hoc step of choosing an arbitrary channel to process. We show an example of the results obtained in Figure <ref>. The multichannel result is visibly less noisy than the single channel result.

§ PHOTO-POLARIMETRIC HEIGHT CONSTRAINTS
In this section we describe the different constraints provided by photo-polarimetric information and then show how to combine them to arrive at linear equations in the unknown surface height.
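Before turning to the constraints, here is a minimal per-pixel sketch of the interleaved alternating minimisation described above. It is a sketch under stated assumptions, not the authors' implementation: the stacking follows the definitions of C_I, d_I, C_ρϕ, d_ρϕ, while the iteration count, convergence tolerance and all names are ours.

```python
import numpy as np

def multichannel_polarisation(I, theta, rho0, phi0, n_iter=50, tol=1e-8):
    """Per-pixel interleaved alternating least squares.
    I:     P x C array of measured intensities i^c_{theta_j}
    theta: length-P array of polariser angles (radians)
    rho0, phi0: initialisation from a single-channel linear fit."""
    rho, phi = rho0, phi0
    for _ in range(n_iter):
        # (a) fix rho, phi; C_I is block diagonal per channel, so each
        # unpolarised intensity separates into a scalar least squares fit.
        w = 1.0 + rho * np.cos(2 * theta - 2 * phi)          # length P
        i_un = (I * w[:, None]).sum(axis=0) / (w @ w)        # per channel
        # (b) fix i_un; solve the CP x 2 system for (a, b)
        A = np.column_stack([
            np.outer(np.cos(2 * theta), i_un).ravel(),
            np.outer(np.sin(2 * theta), i_un).ravel()])
        d = (I - i_un[None, :]).ravel()
        (a, b), *_ = np.linalg.lstsq(A, d, rcond=None)
        rho_new, phi_new = np.hypot(a, b), 0.5 * np.arctan2(b, a)
        converged = abs(rho_new - rho) < tol and abs(phi_new - phi) < tol
        rho, phi = rho_new, phi_new
        if converged:
            break
    return i_un, rho, phi
```

In practice this runs independently at every pixel, initialised from the single-channel linear solution as described in the text.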
§.§ Degree of polarisation constraint
A polarisation image provides a constraint on the surface normal direction at each pixel. The exact nature of the constraint depends on the polarisation model used. In this paper we will consider diffuse polarisation, due to subsurface scattering (see <cit.> for more details). The degree of diffuse polarisation ρ_d(𝐱) at each point 𝐱 can be expressed in terms of the refractive index η and the surface zenith angle θ ∈ [0, π/2] as follows (cf. <cit.>):

ρ_d(𝐱) = (η - 1/η)^2 sin^2(θ) / (2 + 2η^2 - (η + 1/η)^2 sin^2(θ) + 4cos(θ)√(η^2 - sin^2(θ))).

Recall that the zenith angle is the angle between the unit surface normal vector 𝐧(𝐱) and the viewing direction 𝐯. If we know the degree of polarisation ρ_d(𝐱) and the refractive index η (or have good estimates of them at hand), equation (<ref>) can be rewritten with respect to the cosine of the zenith angle, and expressed in terms of the function f(ρ_d(𝐱),η) that depends on the measured degree of polarisation and the refractive index:

cos(θ) = 𝐧(𝐱)·𝐯 = f(ρ_d(𝐱),η) = √((η^4(1-ρ_d^2) + 2η^2(2ρ_d^2 + ρ_d - 1) + ρ_d^2 + 2ρ_d - 4η^3 ρ_d √(1-ρ_d^2) + 1) / ((ρ_d+1)^2(η^4+1) + 2η^2(3ρ_d^2 + 2ρ_d - 1))),

where we drop the dependency of ρ_d on 𝐱 for brevity.

§.§ Shading constraint
The unpolarised intensity provides an additional constraint on the surface normal direction via an appropriate reflectance model. We assume that pixels have been labelled as diffuse or specular dominant and restrict consideration to diffuse shading. In practice, we deal with specular pixels in the same way as <cit.> and simply assume that they point in the direction of the halfway vector between 𝐬 and 𝐯. For the diffuse pixels, we therefore assume that light is reflected according to Lambert's law. Hence, the unpolarised intensity is related to the surface normal by:

i_un(𝐱) = γ(𝐱)cos(θ_i) = γ(𝐱)𝐧(𝐱)·𝐬,

where γ(𝐱) is the albedo. Writing 𝐧(𝐱) in terms of the gradient of z as reported in (<ref>), (<ref>) can be rewritten as follows:

i_un(𝐱) = γ(𝐱)(-∇z(𝐱)·𝐬̃ + s_3) / √(1 + |∇z(𝐱)|^2),

with 𝐬̃ = (s_1, s_2). This is a non-linear equation, but we will see in Sec. <ref> and <ref> how it is possible to remove the non-linearity by using the ratios technique.

§.§ Phase angle constraint
An additional constraint comes from the phase angle, which determines the azimuth angle of the surface normal α(𝐱) ∈ [0, 2π] up to a 180° ambiguity. This constraint can be rewritten as a collinearity condition <cit.>, which is satisfied by either of the two possible azimuth angles implied by the phase angle measurement. Specifically, for diffuse pixels we require the projection of the surface normal into the x-y plane, [n_x n_y], and a vector in the image plane pointing in the phase angle direction, [sin(ϕ) cos(ϕ)], to be collinear. This corresponds to requiring 𝐧(𝐱)·[cos(ϕ(𝐱)) -sin(ϕ(𝐱)) 0]^T = 0. In terms of the surface gradient, using (<ref>), it is equivalent to (-cosϕ, sinϕ)·∇z = 0. A similar expression can be obtained for specular pixels, substituting in the π/2-shifted phase angles. The advantage of doing this will become clear in Sec. <ref>.

§.§ Degree of polarisation ratio constraint
Combining the two constraints illustrated in Sec.
<ref> and <ref>, we can arrive at a linear equation, which we refer to as the DOP ratio constraint. Recall that cos(θ) = 𝐧(𝐱)·𝐯 and that we can express 𝐧 in terms of the gradient of z by using (<ref>); then, isolating the non-linear term in (<ref>), we obtain

√(1 + |∇z(𝐱)|^2) = (-∇z(𝐱)·𝐯̃ + v_3) / f(ρ_d(𝐱),η),

where 𝐯̃ = (v_1, v_2). On the other hand, considering the shading information contained in (<ref>), and again isolating the non-linearity, we arrive at the following:

√(1 + |∇z(𝐱)|^2) = γ(𝐱)(-∇z(𝐱)·𝐬̃ + s_3) / i_un(𝐱).

Note that we are supposing 𝐬 ≠ 𝐯, and i_un(𝐱) ≠ 0, f(ρ_d(𝐱),η) ≠ 0. Equating (<ref>) and (<ref>) we obtain

(-∇z(𝐱)·𝐯̃ + v_3) / f(ρ_d(𝐱),η) = γ(𝐱)(-∇z(𝐱)·𝐬̃ + s_3) / i_un(𝐱).

We thus arrive at the following partial differential equation (PDE):

𝐛(𝐱)·∇z(𝐱) = h(𝐱),

where 𝐛(𝐱) := 𝐛^(f,i_un) = i_un(𝐱)𝐯̃ - γ(𝐱)f(ρ_d(𝐱),η)𝐬̃, and h(𝐱) := h^(f,i_un) = i_un(𝐱)v_3 - γ(𝐱)f(ρ_d(𝐱),η)s_3.

§.§ Intensity ratio constraint
Finally, we construct an intensity ratio constraint by considering two unpolarised images, i_un,1, i_un,2, taken from two different light source directions, 𝐬, 𝐭. We construct our constraint equation by applying (<ref>) twice, once for each light source. We can remove the non-linearity as before and take a ratio, arriving at the following equation:

i_un,2(-∇z(𝐱)·𝐬̃ + s_3) = i_un,1(-∇z(𝐱)·𝐭̃ + t_3).

The above equation is independent of albedo, light source intensity and the non-linear normalisation term. Again as before, we can rewrite (<ref>) as a PDE in the form of (<ref>) with 𝐛(𝐱) := 𝐛^(i_un,1,i_un,2) = i_un,2(𝐱)𝐬̃ - i_un,1(𝐱)𝐭̃, where 𝐭̃ = (t_1, t_2), and h(𝐱) := h^(i_un,1,i_un,2) = i_un,2(𝐱)s_3 - i_un,1(𝐱)t_3.

§ A UNIFIED PDE FORMULATION
Commencing from the constraints introduced in Sec. <ref>, in this section we show how to solve several different problems in photo-polarimetric shape estimation. The common feature is that these are all linear in the unknown height, and are expressed in a unified formulation in terms of a system of PDEs in the same general form:

𝐁(𝐱)∇z(𝐱) = 𝐡(𝐱),

where 𝐁: Ω̅ → ℝ^J×2, 𝐡: Ω̅ → ℝ^J×1, denoting by Ω the reconstruction domain, with J = 2, 3 or 4 depending on the case. Equation (<ref>) is compact and general, suitable for describing several cases in a unified differential formulation that solves directly for surface height. Different combinations of the three constraints described in Sec. <ref> that are linear in the surface gradient can be combined in the formulation of (<ref>). Each corresponds to different assumptions and has different pros and cons. We explore three variants and show that <cit.> is a special case of our formulation. We summarise the alternative formulations in Tab. <ref>.

§.§ Single light and polarisation formulation
This case has been studied in <cit.>. It uses a single polarisation image, requires known illumination (though <cit.> show how this can be estimated if unknown) and assumes that the albedo is known or uniform. This last assumption is quite restrictive, since it can only be applied to objects with homogeneous surfaces. With just a single illumination condition, only the phase angle and DOP ratio constraints are available.
This thus becomes a special case of our general unified formulation (<ref>), where 𝐁 and 𝐡 are defined as

𝐁 = [b^(f,i_un)_1, b^(f,i_un)_2; -cosϕ, sinϕ],  𝐡 = [h^(f,i_un), 0]^T,

with 𝐛^(f,i_un) and h^(f,i_un) defined by (<ref>) and (<ref>), with uniform γ(𝐱) and 𝐯 = [0,0,1]^T.

§.§ Proposed 1: Albedo invariant formulation
Our first proposed method uses the phase angle constraint (<ref>) and two unpolarised images, taken from two different light source directions, obtained through (<ref>) and combined as in (<ref>). In this case the problem studied is described by the system of PDEs (<ref>) with

𝐁(𝐱) = [b^(i_un,1,i_un,2)_1, b^(i_un,1,i_un,2)_2; -cosϕ, sinϕ],  𝐡(𝐱) = [h^(i_un,1,i_un,2); 0],

where 𝐛^(i_un,1,i_un,2) and h^(i_un,1,i_un,2) are defined as in (<ref>) and (<ref>). The phase angle does not depend on albedo and the intensity ratio constraint is invariant to albedo. As a result, this formulation is particularly powerful because it allows albedo invariant height estimation. Moreover, the light source directions in the two images can be estimated (again, in an albedo invariant manner) using the method in Sec. <ref>. Once surface height has been estimated, we can compute the surface normal at each pixel and it is then straightforward to estimate an albedo map using (<ref>). Where we have two diffuse observations, we can compute albedo from two equations of the form of (<ref>) in a least squares sense. In real data, where we have specular pixel labels, we use only the diffuse observations at each pixel. To avoid artifacts at the boundary of specular regions, we introduce a gradient consistency term into the albedo estimation. We encourage the gradient of the albedo map to match the gradients of the intensity image for diffuse pixels.

§.§ Proposed 2: Phase invariant formulation
Our second proposed method uses only the DOP ratio and the intensity ratio constraints. This means that phase angle estimates are not used. The advantage of this is that phase angles are subject to a shift of π/2 at specular reflections when compared to diffuse reflections. So, the phase angle constraint relies upon having accurate per-pixel specularity labels, which classify reflections as either dominantly specular or diffuse (or alternatively use a mixed polarisation model <cit.> with a four-way ambiguity). In this case we need a) two unpolarised intensity images, taken with two different light source directions, 𝐬 and 𝐭, obtained through (<ref>), b) polarisation information from the function f(ρ,η) and c) knowledge of the albedo map. We need 𝐬, 𝐭, 𝐯 non-coplanar in order for the matrix field 𝐁 to be non-singular. Note that the function f, obtained from polarisation information (as in (<ref>)), is the same for the two required images. The reason for this is that it does not depend on the light source directions but only on the viewer direction 𝐯, which does not change. This formulation can be deduced starting from (<ref>) and (<ref>), arriving at a PDE system as in (<ref>) with

𝐁 = [𝐛^(f,i_un,1), 𝐛^(f,i_un,2), 𝐛^(i_un,1,i_un,2)]^T,  𝐡 = [h^(f,i_un,1), h^(f,i_un,2), h^(i_un,1,i_un,2)]^T,

using (<ref>), (<ref>), (<ref>), (<ref>) to define the vector fields 𝐛 and the scalar fields h that appear in 𝐁 and 𝐡.
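To make the per-pixel assembly concrete, the following sketch transcribes f(ρ_d,η) from (<ref>) and stacks the three rows of the phase invariant formulation into 𝐁(𝐱) and 𝐡(𝐱). It is an illustration, not the authors' code: the default refractive index, the viewer direction and all names are our choices.

```python
import numpy as np

def f_dop(rho, eta):
    """cos(theta) as a function of the degree of diffuse polarisation rho
    and refractive index eta, following the closed form in the text."""
    num = (eta**4 * (1 - rho**2) + 2 * eta**2 * (2 * rho**2 + rho - 1)
           + rho**2 + 2 * rho - 4 * eta**3 * rho * np.sqrt(1 - rho**2) + 1)
    den = ((rho + 1)**2 * (eta**4 + 1)
           + 2 * eta**2 * (3 * rho**2 + 2 * rho - 1))
    return np.sqrt(num / den)

def proposed2_rows(i1, i2, rho, gamma, s, t, eta=1.5, v=(0.0, 0.0, 1.0)):
    """Per-pixel 3x2 matrix B and 3-vector h for Proposed 2: the DOP ratio
    constraint with each light source, plus the intensity ratio constraint."""
    f = f_dop(rho, eta)
    B = np.array([
        [i1 * v[0] - gamma * f * s[0], i1 * v[1] - gamma * f * s[1]],
        [i2 * v[0] - gamma * f * t[0], i2 * v[1] - gamma * f * t[1]],
        [i2 * s[0] - i1 * t[0],        i2 * s[1] - i1 * t[1]]])
    h = np.array([
        i1 * v[2] - gamma * f * s[2],
        i2 * v[2] - gamma * f * t[2],
        i2 * s[2] - i1 * t[2]])
    return B, h
```

Stacking these per-pixel rows, together with a finite-difference gradient operator, yields the sparse linear system solved in the least-squares section below.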
§.§ Proposed 3: Most constrained formulation
Our final proposed method combines all of the previous constraints, leading to a problem of the form (<ref>) with

𝐁 = [b^(f,i_un,1)_1, b^(f,i_un,1)_2; b^(f,i_un,2)_1, b^(f,i_un,2)_2; b^(i_un,1,i_un,2)_1, b^(i_un,1,i_un,2)_2; -cosϕ, sinϕ],  𝐡 = [h^(f,i_un,1); h^(f,i_un,2); h^(i_un,1,i_un,2); 0].

This formulation uses the most information and so is potentially the most robust method. However, it requires known albedo in order to use the DOP ratio constraint. Nevertheless, it is possible to first apply proposed method 1, estimate the albedo and then re-estimate surface height using the maximally constrained formulation and the estimated albedo map. In fact, the best performance is obtained by iterating these two steps, alternately using the surface height estimate to compute albedo and then using the updated albedo to re-compute surface height.

§.§ Extension to colour images
We now consider how to extend the above systems of equations when colour information is available. If a surface is lit by a coloured point source, then each pixel provides three equations of the form in (<ref>). In principle, this provides no more information than a grayscale observation since the surface normal and light source direction are fixed across colour channels. However, in the presence of noise, using all three observations improves robustness. In particular, if the albedo value at a pixel is lower in one colour channel, the signal to noise ratio will be worse in that channel than in the others. For a multicoloured object, it is impossible to choose a single colour channel that provides the best signal to noise ratio across the whole object. For this reason, we propose to use information from all colour channels where available. We already exploit colour information in the estimation of the polarisation image in Sec. <ref>. Hence, the phase angle estimates have already benefited from the improved robustness. Both the DOP ratio and intensity ratio constraints can also exploit colour information by repeating each constraint three times, once for each colour channel. In the case of the intensity ratio, the colour albedo once again cancels if ratios are taken between the same colour channels under different light source directions.

§ HEIGHT ESTIMATION VIA LINEAR LEAST SQUARES
We have seen that each of the variants illustrated in the previous section, each with different advantages, can be written as a PDE system (<ref>). Denoting by M the number of pixels, we discretise the gradient in (<ref>) via finite differences, arriving at the following linear system in 𝐳:

𝐀𝐳 = 𝐡̅,

where 𝐀 = 𝐁̅𝐆, with 𝐆 ∈ ℝ^2M×M the matrix of finite difference gradients. 𝐁̅ ∈ ℝ^JM×2M is the discrete per-pixel version of the matrix 𝐁(𝐱), hence 𝐀 ∈ ℝ^JM×M, where J depends on the proposed case from Sec. <ref> (J=2 for (<ref>) and (<ref>), J=3 for (<ref>) and J=4 for (<ref>)). 𝐡̅ ∈ ℝ^JM×1 is the discrete per-pixel version of the function 𝐡(𝐱), and 𝐳 ∈ ℝ^M×1 is the vector of unknown height values. The resulting discrete system is large, since we have JM linear equations in M unknowns, but sparse, since 𝐀 has few non-zero values in each row. The per-pixel matrix 𝐀 is a full-rank matrix, for each choice of 𝐁̅ that comes from the proposed formulations in Sec.
<ref>, under the different assumptions specified for each case. The per-pixel matrix 𝐀 related to <cit.> is full-rank except in one case: when the first two components of the light vector 𝐬 are non-zero with s_1 = -s_2, and the phase angle is ϕ = π/4 in at least one pixel. In that case, the matrix has a rank deficiency (though in practice ϕ assuming a value of exactly π/4, up to numerical tolerance, is unlikely). We want to find a solution of (<ref>) in the least-squares sense, i.e. find a vector 𝐳 ∈ ℝ^M such that

‖𝐀𝐳 - 𝐡̅‖^2_2 ≤ ‖𝐀𝐲 - 𝐡̅‖^2_2, ∀𝐲 ∈ ℝ^M.

Considering the associated system of normal equations

𝐀^T(𝐀𝐳 - 𝐡̅) = 0,

it is well known that if there exists 𝐳 ∈ ℝ^M that satisfies (<ref>), then 𝐳 is also a solution of the least-squares problem, i.e. 𝐳 satisfies (<ref>). Since 𝐀 is a full-rank matrix, the matrix 𝐀^T𝐀 is non-singular, hence there exists a unique solution 𝐳 of (<ref>) for each data term 𝐡̅. Since neither 𝐁 nor 𝐡 depend on z in (<ref>), the solution can be computed only up to an additive constant (which is consistent with the orthographic projection assumption). To resolve the unknown constant, knowledge of z at just one pixel is sufficient. In our implementation, we remove the height of one pixel from the variables and substitute its zero value elsewhere.

§ TWO SOURCE LIGHTING ESTIMATION
Our three proposed shape estimation methods require knowledge of the two light source directions. Previously, Smith <cit.> showed that a single polarisation image can be used to estimate illumination conditions up to a binary ambiguity. However, to do so, they assumed that the albedo was known or uniform, and they worked only with a single colour channel. In a two source setting, we show that it is possible to estimate both light source directions simultaneously, and to do so in an albedo invariant manner. Moreover, we can exploit information across different colour channels to improve robustness to noise. Hence, our three methods can be used in an uncalibrated setting. The intensity ratio (<ref>) provides one equation per pixel relating unpolarised intensities, surface gradient and light source directions. Given two polarisation images with different light directions, we have one such equation per pixel and six unknowns in total. We assume that ambiguous surface gradient estimates are known from ρ and ϕ, and then use (<ref>) to estimate the light source directions. The intensity ratio (<ref>) is homogeneous in s and t and so has the trivial solution s = t = [0 0 0]^T. If we assume that the intensity of the light source remains constant in each colour channel across the two images, then this intensity divides out when taking an intensity ratio and so the length of the light source vectors is arbitrary. We therefore constrain them to unit length (avoiding the trivial solution), and represent them by spherical coordinates (θ_s,α_s) and (θ_t,α_t), such that [s_1,s_2,s_3] = [cosα_s sinθ_s, sinα_s sinθ_s, cosθ_s] and [t_1,t_2,t_3] = [cosα_t sinθ_t, sinα_t sinθ_t, cosθ_t]. This reduces the number of unknowns to four. We can now write the residual at each pixel given an estimate of the light source directions. There are two possible residuals, depending on which of the two ambiguous polarisation normals we use.
From the phase angle and the zenith angle estimated from the degree of polarisation using (<ref>), we have two possible surface normal directions at each pixel and therefore two possible gradients:

z_x(𝐱) ≈ ±cosϕ(𝐱)tanθ(𝐱),  z_y(𝐱) ≈ ±sinϕ(𝐱)tanθ(𝐱).

Hence, the residuals at pixel 𝐱_j in channel c are given by either:

r_j,c(θ_s,α_s,θ_t,α_t) = i_un,1^c(𝐱_j)(-z_x(𝐱_j)t_1 - z_y(𝐱_j)t_2 + t_3) - i_un,2^c(𝐱_j)(-z_x(𝐱_j)s_1 - z_y(𝐱_j)s_2 + s_3),

or

q_j,c(θ_s,α_s,θ_t,α_t) = i_un,1^c(𝐱_j)(z_x(𝐱_j)t_1 + z_y(𝐱_j)t_2 + t_3) - i_un,2^c(𝐱_j)(z_x(𝐱_j)s_1 + z_y(𝐱_j)s_2 + s_3).

We can now write a minimisation problem for light source direction estimation by summing the minimum of the two residuals over all pixels and colour channels:

min_{θ_s,α_s,θ_t,α_t} ∑_{j,c} min[r_j,c^2(θ_s,α_s,θ_t,α_t), q_j,c^2(θ_s,α_s,θ_t,α_t)].

The minimum of two convex functions is not itself convex and so this optimisation is non-convex. However, we find that, even with a random initialisation, it almost always converges to the global minimum. As in <cit.>, the solution is still subject to a binary ambiguity, in that if (s, t) is a solution then (Ts, Tt) is also a solution (with T = diag([-1,-1,1])), corresponding to the convex/concave ambiguity. We resolve this simply by choosing the maximal solution when surface height is later recovered.

§ EXPERIMENTS
We begin by using synthetic data generated from the Mozart height map (Fig. <ref>). We differentiate to obtain surface normals and compute unpolarised intensities by rendering the surface using light sources s = [1,0,5]^T and t = [-1,-2,7]^T according to (<ref>). We experiment with both uniform albedo and varying albedo, for which we use a checkerboard pattern. We simulate the effect of polarisation according to (<ref>), varying the polariser angle between 0° and 180° in 10° increments. Next, we corrupt this data by adding Gaussian noise with zero mean and standard deviation σ, saturate and quantise to 8 bits. This noisy data provides the input to our reconstruction. First, we estimate a polarisation image using the method in Sec. <ref>, then apply each of the proposed methods or the state-of-the-art comparison method <cit.> to recover the height map. In Tab. <ref> we report the Root-Mean-Square (RMS) error in the surface height (in pixels) and the mean angular error (in degrees) in the surface normals obtained by differentiating the estimated surface height. In Fig. <ref> we show a sample of qualitative results from this experiment. In all cases, more than one of our proposed methods outperform <cit.>. When albedo is uniform, our phase invariant (Prop. 2) or maximally constrained solution (Prop. 3) provides the best results. When albedo is non-uniform, the albedo invariant method (Prop. 1) provides much better performance. Although the combination of the albedo invariant method followed by the maximally constrained method (Prop. 1+3) does not give quantitatively the best performance, we find that on real world data containing more complex noise and specular reflections, this approach is the most robust. In Fig. <ref> we show qualitative results on two real objects with spatially varying albedo. From left to right we show: an image from the input sequence; the surface normals of the estimated height map (the inset sphere shows how orientation is visualised as colour); the estimated albedo map; a re-rendering of the estimated surface and albedo map under novel lighting with Blinn-Phong reflectance <cit.>; a rotated view of the estimated surface; and, for comparison, reconstructions of the same surfaces using <cit.>.
The results of <cit.> are highly distorted in the presence of varying albedo. Our approach avoids transfer of albedo details into the recovered shape, leading to convincing relighting results.

§ CONCLUSIONS
In this paper we have introduced a unifying formulation for recovering height from photo-polarimetric data and proposed a variety of methods that use different combinations of linear constraints. We proposed a more robust way to estimate a polarisation image from multichannel data and showed how to estimate lighting from two source photo-polarimetric images. Together, our methods provide uncalibrated, albedo invariant shape estimation with only two light sources. Since our unified differential formulation does not depend on a specific camera setup or a chosen reflectance model, the most obvious targets for future work are to move to a perspective projection, to consider more complex reflectance models, and to better exploit the information available in specular reflection and polarisation. In addition, since our methods directly estimate surface height, it would be straightforward to incorporate positional constraints, for example provided by binocular stereo.

§.§.§ Acknowledgements
This work was supported mainly by the “GNCS - INdAM”, in part by ONR grant N000141512013 and the UC San Diego Center for Visual Computing. W. Smith was supported by EPSRC grant EP/N028481/1.
http://arxiv.org/abs/1708.07718v1
{ "authors": [ "Silvia Tozza", "William A. P. Smith", "Dizhong Zhu", "Ravi Ramamoorthi", "Edwin R. Hancock" ], "categories": [ "cs.CV", "cs.NA", "math.NA" ], "primary_category": "cs.CV", "published": "20170825130105", "title": "Linear Differential Constraints for Photo-polarimetric Height Estimation" }
Sales Forecast in E-commerce using Convolutional Neural Network

Kui Zhao, College of Computer Science, Zhejiang University, Hangzhou, China, [email protected]
Can Wang, College of Computer Science, Zhejiang University, Hangzhou, China, [email protected]

Sales forecast is an essential task in E-commerce and has a crucial impact on making informed business decisions. It can help us to manage the workforce, cash flow and resources, such as optimizing the supply chain of manufacturers. Sales forecast is a challenging problem in that sales is affected by many factors, including promotion activities, price changes, and user preferences. Traditional sales forecast techniques mainly rely on historical sales data to predict future sales and their accuracies are limited. Some more recent learning-based methods capture more information in the model to improve the forecast accuracy. However, these methods require case-by-case manual feature engineering for specific commercial scenarios, which is usually a difficult, time-consuming task and requires expert knowledge. To overcome the limitations of existing methods, we propose a novel approach in this paper to learn effective features automatically from the structured data using the Convolutional Neural Network (CNN). When fed with raw log data, our approach can automatically extract effective features and then forecast sales using those extracted features. We test our method on a large real-world dataset from CaiNiao.com and the experimental results validate the effectiveness of our method.

CCS Concepts: Computing methodologies → Supervised learning by regression; Computing methodologies → Neural networks; Applied computing → Online shopping.

§ INTRODUCTION
The dynamic and complex business environment in E-commerce brings great challenges to business decision making. Many intelligent technologies, such as sales forecast, are developed to overcome these challenges. Sales forecast is helpful for managing the workforce, cash flow and resources, such as optimizing the supply chain of manufacturers. The value of sales forecast depends on its accuracy.
Inaccurate forecasts may lead to stockout or overstock, hurting the decision efficiency in E-commerce. Traditional sales forecast techniques are based on time series analysis, which takes only the historical sales data as the input. These methods can handle commodities with stable or seasonal sales trends well <cit.>. However, commodities in E-commerce are much more irregular in their sales trends (an example is shown in Figure <ref>) and the forecast accuracies achieved by these traditional methods are generally unacceptable <cit.>. Fortunately, a massive amount of data is available in E-commerce and it is possible to exploit these data to improve forecast accuracy. Besides the historical sales data, we can collect many other log data for online commodities over a long time period, such as page view (PV), page view from search (SPV), user view (UV), user view from search (SUV), selling price (PAY) and gross merchandise volume (GMV). By using supervised learning methods such as regression models, this information can be integrated into the sales forecast model and better forecast accuracy can be achieved. The first step of conventional machine learning methods is generally feature engineering, where effective features are extracted manually from the available data using domain knowledge <cit.>. The quality and quantity of features can greatly affect the accuracy of the final forecast model. However, coming up with effective features is a difficult and time-consuming task. Moreover, these features are generally extracted case-by-case for specific commercial scenarios and the models are difficult to reuse when data or requirements change. For instance, after more data are collected for online commodities, feature engineering has to be done again to integrate the information contained in the new data into the sales forecast model. Feature learning can obviate the need for manual feature engineering <cit.>. Through feature learning, effective features can be learned automatically from raw input data and then be used in specific machine learning tasks. Deep neural network is one of the most popular feature learning methods. It is inspired by the nervous system, where the nodes act as neurons and edges act as synapses. A neural network characterizes a function by the relationship between its input layer and output layer, which is parameterized by the weights associated with edges. Features are learned at the hidden layers and subsequently used for classification or regression at the output layer. There are many works using deep neural networks to learn features from unstructured data, such as images <cit.>, audio <cit.>, and text <cit.>. In this paper, we propose a novel approach to learn effective features automatically from the structured data using the Convolutional Neural Network (CNN), which is one of the most popular deep neural network architectures. Firstly, we transform the log data of the commodity into a designed Data Frame. Then we apply a Convolutional Neural Network to this Data Frame, where effective features are extracted at the hidden layers and subsequently used for sales forecast at the output layer. Our approach takes the raw log data of commodities as input, and it is easy to integrate newly available data into the sales forecast model with little human intervention. What's more, sample weight decay and transfer learning techniques are used to improve the forecast accuracy further.
We test our approach on a large real-world dataset from CaiNiao.com and the experimental results validate the effectiveness of our method.

The rest of our paper is organized as follows. We briefly review related work in Section 2. We describe the sales forecast model in Section 3 and its training in Section 4. We show our experimental setup and results, followed by a discussion, in Section 5. Finally, we present our conclusions and plans for future research in Section 6.

§ RELATED WORK

Traditional sales forecast methods mainly exploit time series analysis techniques <cit.> <cit.>. Classical time series techniques include the autoregressive models (AR), integrated models (I) and moving average models (MA). These models predict future sales using a linear function of the historical sales data. More recent models, such as the autoregressive moving average (ARMA) and the autoregressive integrated moving average (ARIMA), are more general and can achieve better performance <cit.>. When used for sales forecast, these time series analysis models take the historical sales data as input and are only suitable for commodities with stable or seasonal sales trends <cit.>.

To better handle irregular sales patterns, some new methods attempt to exploit more information in sales forecast, as an increasing amount of data is becoming available in E-commerce. Kulkarni et al. <cit.> use online search data to forecast new product sales by using search term volume as a marketing metric. Ramanathan et al. <cit.> improve the forecast accuracy for promotional sales by incorporating product-specific demand factors using multiple linear regression analysis. Yeo et al. <cit.> predict product sales by identifying customers' purchase purpose from their browsing behavior. These methods are generally developed case by case for specific commercial scenarios and are limited in their applicability. They rely on specific domain knowledge to extract relevant features from the data, which is labor-intensive and limits their ability to extract and organize the discriminative information in the data.

Feature learning can obviate the need for manual feature engineering by learning effective features automatically from the raw input data <cit.>. The deep neural network is one of the most popular feature learning methods, and its performance in many tasks has surpassed conventional learning methods <cit.> <cit.> <cit.>. One particular family of deep neural networks, named the Convolutional Neural Network (CNN), was introduced by LeCun et al. <cit.> and rejuvenated in recent applications after AlexNet <cit.> won the image classification challenge in ILSVRC2012 <cit.>. CNN then experienced a strong surge from computer vision to speech recognition and natural language processing <cit.> <cit.> <cit.>. Deep neural networks can learn effective features at the hidden layers and then use these features for classification or regression at the output layer.

Different from existing works, which learn features from unstructured data (images, audio, text, etc.), we intend to learn features automatically from structured data using the Convolutional Neural Network. That is, we learn effective features automatically from the log data of commodities for sales forecast.
§ SALES FORECAST MODEL

§.§ Problem formulation

We first describe the problem in a formal way. Given a commodity i and a certain geographic region r in which the commodity sales are accumulated, we intend to forecast its total sales y_ir in this region over the time period [T+1, T+l], by using the commodity information and a sequence of related log data x_ir_t over the time period [1, T]. We use x_ir_t to denote the d-dimensional item vector of commodity i in region r at time t. The elements of x_ir_t are sales, page view (PV), page view from search (SPV), user view (UV), user view from search (SUV), selling price (PAY), gross merchandise volume (GMV), etc. We denote the array of x_ir_t as the item matrix X_ir=[ x_ir_1, ⋯, x_ir_T]. The collection of intrinsic attributes of commodity i is represented by the vector a_i, which includes category, brand, supplier, etc. Our goal is to build a mapping function f(·) to predict y_ir with X_ir and a_i as the input:

y_ir = f( X_ir, a_i, θ),

where the parameter vector θ will be learned in the training process.

§.§ Forecasting with CNN

We forecast sales with the function f(·), which is implemented by the convolutional architecture shown in Figure <ref>. In the following, we give a brief explanation of the main components of our CNN architecture.

§.§.§ Data Frame

Before using the Convolutional Neural Network, we construct the Data Frame for each commodity i based on its related log data and intrinsic attributes. For each brand b, category c and supplier s, we calculate the brand vector x_br_t, category vector x_cr_t and supplier vector x_sr_t in region r at time t respectively:

x_br_t = ∑_brand(i)= b x_ir_t,  x_cr_t = ∑_category(i)= c x_ir_t,  x_sr_t = ∑_supplier(i)= s x_ir_t.

We denote the array of x_br_t, x_cr_t and x_sr_t as the brand matrix X_br=[ x_br_1, ⋯, x_br_T], the category matrix X_cr=[ x_cr_1, ⋯, x_cr_T] and the supplier matrix X_sr=[ x_sr_1, ⋯, x_sr_T] respectively. For each region r, we calculate the region vector x_r_t at time t:

x_r_t = ∑_i x_ir_t.

We denote the array of x_r_t as the region matrix X_r=[ x_r_1, ⋯, x_r_T]. Finally, for each commodity i in region r, we construct its Data Frame DF_ir as follows:

DF_ir=[ X_ir, X_brand(i)r, X_category(i)r, X_r],

which is illustrated in Figure <ref>. To forecast the sales of commodity i in region r, a series of operations including convolution, non-linear activation and pooling is applied to the Data Frame DF_ir.
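To make the construction above concrete, the following is a minimal pandas sketch of the Data Frame assembly. It is a sketch under stated assumptions rather than the original implementation: the column names (item, brand, category, region, date) and the set of indicator columns are hypothetical, the 'date' column is assumed to be of datetime type, and the supplier matrix is omitted since it does not enter DF_ir.

import numpy as np
import pandas as pd

# Hypothetical indicator columns; the real dataset has d = 25 of them.
INDICATORS = ['sales', 'pv', 'spv', 'uv', 'suv', 'pay', 'gmv']

def indicator_matrix(records, T):
    """Aggregate log records day by day into a d x T indicator matrix."""
    daily = records.groupby('date')[INDICATORS].sum()
    full_range = pd.date_range(end=daily.index.max(), periods=T)
    return daily.reindex(full_range, fill_value=0).to_numpy().T

def data_frame(logs, item, region, T=84):
    """Stack the item, brand, category and region matrices into DF_ir."""
    in_region = logs[logs['region'] == region]
    own = in_region[in_region['item'] == item]
    brand = own['brand'].iloc[0]
    category = own['category'].iloc[0]
    X_ir = indicator_matrix(own, T)
    X_br = indicator_matrix(in_region[in_region['brand'] == brand], T)
    X_cr = indicator_matrix(in_region[in_region['category'] == category], T)
    X_r = indicator_matrix(in_region, T)
    return np.stack([X_ir, X_br, X_cr, X_r])   # shape: (4, d, T)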
§.§.§ Convolutional feature maps

Convolution can be seen as a special kind of linear operation, which aims to extract local patterns. In the context of sales forecast, we use the one-dimensional convolution to capture the shifting patterns in the time series of each input indicator individually.

More formally, the one-dimensional convolution is an operation between two vectors f∈ℝ^m and s∈ℝ^|s|. The vector f is called a filter of size m and the vector s is a sequence of size |s|. The operation takes the dot product of the vector f with each sub-sequence of length m sliding along the whole sequence s and obtains a new sequence c, where

c_j= f^T s_j-m+1:j.

In practice, we usually add a bias b to the result of the dot product. Thus we have

c_j= f^T s_j-m+1:j+b.

According to the allowed range of the index j, there are two types of convolution: narrow and wide. The narrow convolution restricts j to the range [m, |s|] and yields a new sequence c∈ℝ^|s|-m+1. The wide convolution restricts j to the range [1, |s|+m-1] and yields a new sequence c∈ℝ^|s|+m-1. Note that when i<1 or i>|s|, the values of s_i are padded with zeros. The benefits of the wide convolution over the narrow one are discussed in detail in <cit.>. Briefly speaking, unlike the narrow convolution, where input values close to the margins are seen fewer times, the wide convolution gives equal attention to each value in the sequence and so handles values at the margins better. More importantly, the wide convolution always produces a valid non-empty result c, even when |s|<m. For these reasons, we use the wide convolution in our model.

The indicator matrix S in DF_ir is not just a sequence of a single indicator but a sequence of vectors comprising many indicators, where the dimension of each vector is d. So when we apply the one-dimensional convolution to the indicator matrix S, we need a filter bank F∈ℝ^d× m consisting of d filters of size m and a bias bank B∈ℝ^d consisting of d biases. Each row of S is convolved with the corresponding row of F and then the corresponding row of B is added to the convolution result. After that, we obtain a matrix C∈ℝ^d× (|s|+m-1):

conv( S, F, B): ℝ^d× |s|→ℝ^d× (|s|+m-1).

The values in the filter bank F and the bias bank B are parameters optimized during training. The filter size m is a hyper-parameter of the model. Each Data Frame DF_ir has four indicator matrices, namely the item matrix X_ir, the brand matrix X_brand(i)r, the category matrix X_category(i)r and the region matrix X_r. Therefore, we apply a distinct convolution to each indicator matrix with filter bank F_j=1,2,3,4 and bias bank B_j=1,2,3,4, and obtain four result matrices.

§.§.§ Activation function

To make the neural network capable of learning non-linear functions, a non-linear activation α (·) needs to be applied to the output of the preceding layer in an element-wise way. We then obtain a new matrix A∈ℝ^d× (|s|+m-1):

α( C): ℝ^d× (|s|+m-1)→ℝ^d× (|s|+m-1).

Popular choices of α (·) include sigmoid, tanh and relu (the rectified linear unit, defined as max(0, x)). It has been shown that the choice of α (·) may affect the convergence rate and the quality of the final solutions. In particular, Nair et al. <cit.> show that relu has significant advantages because it overcomes some shortcomings of sigmoid and tanh. In practice, our experimental results are not very sensitive to the choice of activation and we choose relu due to its simplicity and computing efficiency. In addition, we can see that the role played by the bias b in (<ref>) is to set an appropriate threshold for controlling when units are activated.

§.§.§ Pooling

After passing through the activation function, the output of the convolutional layer is passed to the pooling layer. The pooling layer aggregates the information in the output of the preceding layer. This operation aims to make the representation more robust and invariant to small translations of the input.

For a given vector a∈ℝ^|a|, pooling with length k aggregates each group of k values in it into a single value:

pooling( a): ℝ^|a|→ℝ^⌈|a|/k⌉.

According to the way of aggregating the information, there are two types of pooling operations: average and max. Though both pooling methods have their own limitations, max-pooling is used more widely in practice. When we apply pooling to the matrix A, each row of A is pooled separately and we obtain a matrix P∈ℝ^d×⌈|a|/k⌉:

pooling( A): ℝ^d× |a|→ℝ^d×⌈|a|/k⌉.
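The following numpy sketch illustrates one convolution-activation-pooling group on a single indicator matrix. It is illustrative only, with random toy inputs standing in for a real Data Frame.

import numpy as np

def wide_conv(S, F, B):
    """Row-wise wide convolution: row i of the d x |s| matrix S is slid
    against row i of the filter bank F (d x m); np.convolve in 'full' mode
    zero-pads the margins, and the filter is reversed so the result is the
    sliding dot product c_j = f . s_{j-m+1:j}. Output: d x (|s| + m - 1)."""
    return np.stack([np.convolve(S[i], F[i][::-1]) + B[i]
                     for i in range(S.shape[0])])

def relu(C):
    return np.maximum(0.0, C)

def max_pool(A, k):
    """Aggregate every k consecutive values in each row; the tail is padded
    with -inf so an incomplete final group is still pooled correctly."""
    d, n = A.shape
    A = np.pad(A, ((0, 0), (0, (-n) % k)), constant_values=-np.inf)
    return A.reshape(d, -1, k).max(axis=2)

S = np.random.randn(25, 84)                  # d = 25 indicators, T = 84 days
F, B = np.random.randn(25, 7), np.zeros(25)  # m = 7, the week-level filter
P = max_pool(relu(wide_conv(S, F, B)), k=7)  # shape (25, 13)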
§.§.§ Multiple feature maps

We have described how to apply a wide convolution, a non-linear activation and a pooling successively to an indicator matrix. After one group of these operations, we obtain the first-order representation, which allows us to recognize specific shifting patterns in the input time series of indicators. To obtain higher-order representations, we can use a deeper network by repeating these operations. Higher-order representations are able to capture patterns over a much longer range in the input time series of indicators.

Meanwhile, as in CNNs for object recognition, we learn multi-aspect representations for each input indicator matrix. Let P^i denote the i-th order representation, and take the input Data Frame DF_ir as the 0-th order representation. We compute K_i representations P^i_1,⋯, P^i_K_i in parallel at the i-th order. Each representation P^i_j is computed in two steps. Firstly, we apply convolution to each representation P^i-1_k at the lower order i-1 with a distinct filter bank F^i_j,k and bias bank B^i_j,k and then sum up the results. Secondly, non-linear activation and pooling are applied to the summation result. The whole process is as follows:

P^i_j=pooling(α(∑_k=1^K_i-1conv( P^i-1_k, F^i_j,k, B^i_j,k))).

§.§.§ Fully connected layer

The last hidden layer of our CNN architecture is a fully connected layer. The full connection is a linear operation, which concentrates all representations at the highest order into a single vector. This vector can be seen as the features extracted from the original input. More specifically, for the highest-order representations P^h_1,⋯, P^h_K_h (assume P^h_k∈ℝ^d× p), we first flatten them into a vector p∈ℝ^K_h× d× p. Then we transform it with a dense matrix H∈ℝ^(K_h× d× p)× n and apply a non-linear activation:

x̂=α( p^T H),

where x̂∈ℝ^n can be seen as the final extracted feature vector. The values in the matrix H are parameters optimized during training. The representation size n is a hyper-parameter of the model.

§.§.§ Linear regression

After obtaining the feature vector x̂_ir, we use linear regression to forecast the final sales y_ir of commodity i in region r:

y_ir=[1, x̂^T]· w.

The values in the vector w are parameters optimized during training.

§ TRAINING

We build a model for each region individually. For each region r, our model is trained to minimize the mean squared error on an observed training set 𝒟_r:

L_r=∑_ir∈𝒟_r(y_ir-ŷ_ir)^2,

where y_ir is the real total sales of commodity i in region r over the time period [T+1, T+l], and ŷ_ir= f( X_ir, a_i, θ) is the corresponding forecast. The parameters optimized in our neural network are θ:

θ={ F, B, H, w},

namely the filter banks F, the bias banks B, the dense matrix H and the linear regression weights w. Note that there are multiple filter banks and bias banks to be learned. In the following, we present the details of training our deep learning model.
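For illustration, a compact Keras sketch of the architecture described above is given below. It should not be read as the authors' original implementation (which used an early Keras on Theano): padding='same' only approximates the wide convolution, and the K parallel representations per order are condensed into the channel dimension of a single Conv1D per order. Layer sizes follow the hyper-parameters reported in Section 4.

from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(d=25, T=84, n_matrices=4, K=128, n_features=1024):
    # one input per indicator matrix of the Data Frame, shaped (time, d)
    inputs = [keras.Input(shape=(T, d)) for _ in range(n_matrices)]
    # first order: a distinct convolution per indicator matrix, summed
    h = layers.add([layers.Conv1D(K, 7, padding='same', activation='relu')(x)
                    for x in inputs])
    h = layers.MaxPooling1D(7)(h)
    # second and third orders (month- and season-level patterns)
    h = layers.MaxPooling1D(4)(
        layers.Conv1D(K, 4, padding='same', activation='relu')(h))
    h = layers.MaxPooling1D(3)(
        layers.Conv1D(K, 3, padding='same', activation='relu')(h))
    # flatten, dropout, dense feature vector, then linear regression
    p = layers.Dropout(0.2)(layers.Flatten()(h))
    x_hat = layers.Dense(n_features, activation='relu')(p)
    return keras.Model(inputs, layers.Dense(1)(x_hat))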
§.§ Sample weight decay

By sliding the end point of the Data Frame, we can construct many training samples. However, each sample should have a different importance: the closer it is to the target forecasting interval, the more important the training sample is. Let sp denote the start point of the target forecasting interval and ep_ir denote the end point of the Data Frame of training sample ir. Obviously, ep_ir≤ sp - k holds, where k is the length of the forecasting interval. For each region r, we assign a weight to each sample in the training set 𝒟_r as follows:

weight_ir = e^β× (ep_ir + k - sp),

where β is a hyper-parameter. Then for each region r, instead of minimizing the mean squared error, we minimize the weighted mean squared error on the observed training set 𝒟_r:

L_r^w=∑_ir∈𝒟_r weight_ir× (y_ir-ŷ_ir)^2,

where weight_ir is calculated as above.

§.§ Transfer learning

Transfer learning aims to transfer knowledge acquired in one problem onto another problem <cit.>. In the context of sales forecast, we can transfer the patterns learned in one region onto another region. We first train our neural network on the whole training set 𝒟, which includes the training samples of all regions:

𝒟=⋃_r𝒟_r.

After that, we replace 𝒟 with 𝒟_r and continue training a specific model for each region r.

§.§ Regularization

Neural networks are capable of learning very complex functions and tend to overfit easily, especially on training sets of small and medium size. To alleviate the overfitting issue, we use a popular and efficient regularization technique named dropout <cit.>. Dropout is applied to the flattened vector p in (<ref>) before it is transformed with the dense matrix H. During the forward phase, a portion of the units in p are randomly dropped out by setting them to zero to prevent feature co-adaptation. The dropout rate is a hyper-parameter of the model. As suggested in <cit.>, dropout is approximately equivalent to model averaging, which is an effective technique for generalizing models in machine learning.

§.§ Hyper-parameters

The hyper-parameters in our deep learning model are set as follows: the filter size and the pooling length at the first-order representation are m=7, k=7; at the second-order representation m=4, k=4; and at the third-order representation m=3, k=3. We intend to capture patterns at the week level with the first-order representation, and at the month and season levels with the second- and third-order representations respectively. Furthermore, the weight decay parameter is β = 0.02; the dimension of the extracted feature vector is n=1024; the dropout rate is p=0.2; and 128 representations are computed in parallel at each order, i.e., K_1=K_2=K_3=128.

§.§ Optimization

To optimize our deep learning model, we use the Stochastic Gradient Descent (SGD) algorithm with shuffled mini-batches. The parameters are updated through the back propagation framework (see <cit.> for its principle) with the Adamax rule <cit.>. The batch size is set to 128 and the network is first pre-trained on the whole training set 𝒟 for 10 epochs and then trained on each training set 𝒟_r for another 10 epochs. Moreover, since normalization is usually helpful for the convergence of deep learning models, we normalize the input Data Frame with the z-score method <cit.>. To exploit the parallelism of the operations for speed, we train our network on a GPU. A Python implementation using Keras[http://keras.io] powered by Theano <cit.> can process 73k samples per minute on a single NVIDIA K2200 GPU.
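The training schedule can be sketched as follows; the data placeholders (X_all, y_all, regions) are hypothetical, while β, k, the batch size, the epoch counts and the Adamax rule follow the settings above.

import numpy as np
from tensorflow import keras

BETA, K_INTERVAL = 0.02, 7

def sample_weights(end_points, sp):
    """weight_ir = exp(beta * (ep_ir + k - sp)), end points given in days."""
    return np.exp(BETA * (np.asarray(end_points) + K_INTERVAL - sp))

def z_score(df, eps=1e-8):
    """Normalize each indicator row of a Data Frame."""
    mu = df.mean(axis=-1, keepdims=True)
    sd = df.std(axis=-1, keepdims=True)
    return (df - mu) / (sd + eps)

def train(model, X_all, y_all, w_all, regions, epochs=10):
    model.compile(optimizer=keras.optimizers.Adamax(), loss='mse')
    # transfer learning: pre-train on the union of all regional sets ...
    model.fit(X_all, y_all, sample_weight=w_all, batch_size=128, epochs=epochs)
    # ... then continue training on each regional set; in practice one would
    # snapshot the pre-trained weights and restore them before each region
    for X_r, y_r, w_r in regions:
        model.fit(X_r, y_r, sample_weight=w_r, batch_size=128, epochs=epochs)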
§ EXPERIMENTS

§.§ Dataset

We evaluate our deep learning model on a large dataset collected from CaiNiao.com[https://tianchi.aliyun.com/competition/information.htm?raceId=231530], which is the largest collaboration platform for logistics and supply chain in China. The dataset is provided by Alibaba Group and contains 1,814,892 records, covering the log data and attribute information of 1,963 commodities in 5 regions, ranging from 2014-10-10 to 2015-12-27. There are d=25 log indicators, including sales, page view (PV), page view from search (SPV), user view (UV), user view from search (SUV), selling price (PAY) and gross merchandise volume (GMV).

§.§ Setup

We forecast the total sales of each commodity i in each region r over the time period [2015-12-21, 2015-12-27] by using the Data Frame over the time period [2015-10-28, 2015-12-20]. That is, the length of the target interval is k=7 and the length of the Data Frame is T=84. After splitting all samples for each region r into a training set and a testing set, we have: the end points of the Data Frames of the training samples range from 2015-01-01 to 2015-12-13; the end point of the Data Frames of the testing samples is 2015-12-20. We compare our approach against several baselines and several different settings of our approach.

§.§.§ Baselines

ARIMA. ARIMA is a classical time series analysis technique. It takes the historical sales data as input and directly predicts the sales at the next time point.

FE+GBRT. We first extract 523 features manually, including the UV of the previous day, the average UV over the previous three days, the average UV over the previous week, the average UV over the previous month, whether there is a price reduction, etc. After that, we use the Gradient Boosting Regression Tree (GBRT) to forecast the sales by taking those features as input.

DNN. The DNN is the simplest neural network architecture, which puts multiple fully connected layers after the input. We first flatten the Data Frame into a vector and then append 4 fully connected layers and a linear regression layer. The dimension of each fully connected layer is 1024 and dropout with p=0.2 is applied to the output of the last fully connected layer.

The implementation of ARIMA is taken from pandas <cit.> and the implementation of GBRT is taken from xgboost <cit.>.

§.§.§ Different settings

CNN. The Convolutional Neural Network architecture described in Section <ref> is the foundation of our sales forecast model.

CNN+WD. To improve the forecast accuracy, we assign weights to the training samples according to Equation (<ref>) and then train the CNN model to minimize the weighted mean squared error.

CNN+WD+TL. To improve the forecast accuracy further, we transfer the patterns learned from all training samples in 𝒟 onto each distinct region r, as described in detail in Section <ref>.

Single-CNN. We train a unified model using all training samples in 𝒟 and forecast sales for each region r separately. Note that neither the sample weight decay technique nor the transfer learning technique is used here.

All results reported in the following sections are on the testing set and the metric we use to measure the performance is the mean squared error (MSE).
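The DNN baseline is straightforward to reproduce; a hedged Keras sketch consistent with the description above is given below. The relu activation of the hidden layers is our assumption, since the paper does not state the activation used in this baseline.

from tensorflow import keras
from tensorflow.keras import layers

def build_dnn(d=25, T=84, n_matrices=4):
    """Flatten the Data Frame, then 4 fully connected layers of width 1024,
    dropout 0.2 on the last hidden layer, and a linear regression output."""
    inp = keras.Input(shape=(n_matrices, d, T))
    h = layers.Flatten()(inp)
    for _ in range(4):
        h = layers.Dense(1024, activation='relu')(h)  # activation assumed
    h = layers.Dropout(0.2)(h)
    return keras.Model(inp, layers.Dense(1)(h))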
§.§ Result

The detailed experimental results are presented in Figure <ref> and a summary of those results is shown in Table <ref>. The classical machine learning method (FE+GBRT) considers more information than the time series analysis method (ARIMA) and consequently achieves better performance. The simplest deep neural network architecture (DNN) extracts features automatically. Sometimes it even obtains a more useful feature representation than feature engineering done by humans, as shown by the fact that the DNN beats FE+GBRT in some situations. The Convolutional Neural Network architecture can make full use of the inherent structure in the raw data to extract more effective features, and it achieves a significant performance improvement.

The sample weight decay technique and the transfer learning technique are highly effective. They further improve the performance and the final results are very competitive. Moreover, as we can see from Figure <ref>, the forecast results from these two methods are more robust.

It is interesting to explore whether it is possible to train one model for forecasting sales in all regions. We therefore train a unified model using all training samples in 𝒟 and forecast sales for each region r separately. The results are promising but less competitive than using individual models.

§.§ Discussion

§.§.§ The length of the target forecasting interval

Figure <ref> helps us to analyze the relationship between the length of the target forecasting interval and the difficulty of forecasting. We can see that the longer the target forecasting interval, the easier the sales forecasting is. The main reason is that the total sales over a long target forecasting interval is more stable than that over a short one. However, a short forecasting interval allows more flexible business decisions, so there is a typical tradeoff between practical flexibility and forecast accuracy in real-world applications.

§.§.§ The length of the Data Frame

The length of the Data Frame is a crucial hyper-parameter in our model. It represents how much historical data we use as the input of our model. As can be seen from Figure <ref>, if the Data Frame is too short, the information contained in it is insufficient. On the other hand, a long Data Frame may contain too much useless information, which confuses the learning machine. Moreover, a longer Data Frame means more resource consumption in computing.

§.§.§ The intensity of weight decay

The value of β in Equation (<ref>) controls the importance of the training samples according to their closeness to the target forecasting interval. A large β biases the model toward the closer training samples. As can be seen from Figure <ref>, β = 0.02 achieves a good tradeoff between extracting long-term patterns and extracting short-term patterns from the log data for sales forecast.

§ CONCLUSIONS

In this paper, we present a novel approach to learn effective features automatically from structured data using a CNN. It obviates the need for manual feature engineering, which is usually difficult, time-consuming and requires expert knowledge. We use the proposed approach to forecast sales by taking the raw log data and attribute information of commodities as the input.
Firstly, we transform the log data and attribute information of commodities, which are structured data, into a designed Data Frame. Then we apply the Convolutional Neural Network to the Data Frame, where effective features are extracted at the hidden layers and subsequently used for sales forecast. We test our approach on a real-world dataset from CaiNiao.com and it demonstrates strong performance. Furthermore, the sample weight decay technique and the transfer learning technique are used to improve the forecast accuracy further; both have proved to be highly effective in the experiments. There are several interesting problems to be investigated in our future work: (1) whether it is possible to identify the most important indicators for sales forecast from the raw log data with deep neural networks; (2) it would be very appealing to find a unified framework for extracting features automatically from all types of data.

§ ACKNOWLEDGMENTS

We would like to thank Alibaba Group for providing the valuable datasets.
http://arxiv.org/abs/1708.07946v1
{ "authors": [ "Kui Zhao", "Can Wang" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170826074715", "title": "Sales Forecast in E-commerce using Convolutional Neural Network" }
Non-conserved magnetization operator and `fire-and-ice' ground states in the Ising-Heisenberg diamond chain

Onofre Rojas

Instituto de Fisica, Universidade Federal de Alagoas, 57072-970, Maceió, AL, Brazil
Laboratory of Theoretical Physics, Yerevan State University, Alex Manoogian 1, 0025 Yerevan, Armenia
Departamento de Fisica, Universidade Federal de Lavras, CP 3037, 37200000, Lavras, MG, Brazil

We consider the diamond chain with S=1/2 XYZ vertical dimers which interact with the intermediate sites via an Ising-type interaction. We also suppose that all four spins forming the diamond-shaped plaquette have different g-factors. The non-uniform g-factors within the quantum spin dimer, as well as the XY-anisotropy of the exchange interaction, lead to a non-conserved magnetization for the chain. We analyze the effects of the non-conserved magnetization as well as the effects of the appearance of negative g-factors among the spins of the unit cell. A number of unusual frustrated states for ferromagnetic couplings and g-factors with non-uniform signs are found. These frustrated states generalize the "half-fire-half-ice" state introduced in Ref. [yin15]. The corresponding zero-temperature ground-state phase diagrams are presented.

75.10.Pq, 75.50.Xx

§ INTRODUCTION

In the last decade, intensive investigations have been focused on the effects of magnetic anisotropy in metal complexes and adatoms. The anisotropy arises due to the interplay of the spin-orbit coupling on the magnetic ion sites and the crystal field from neighboring atoms and ligands <cit.>. This phenomenon can substantially affect the magnetothermal properties of the system <cit.>. One of the most unusual features of these joint interactions is the negative Landé g-factor which occurs in some complexes <cit.>. The appearance of negative and positive g-factors in the same system leads to a series of peculiar features even in the simplest case of the Ising chain with alternating g-factors. It was demonstrated in Ref. [yin15] that a novel kind of frustration can arise in ferromagnets with non-uniform g-factors of different signs. It was also argued in that paper that the aforementioned novel frustrated state, which has been given the name "half-fire-half-ice" by the authors, can be realized in copper-iridium oxides such as Sr_3CuIrO_6 <cit.>. Also, the magnetic centers in some compounds of the transition-metal ions with unquenched angular momentum and relatively strong spin-orbit coupling can possess rather large Landé g-factors, essentially different from the corresponding g-factors of the free ion. One can mention, for instance, the Fe^3+ ion with a Landé g-factor g≈ 2.8, as well as the Co^2+ ion with g≈ 6.0 <cit.>. Large anisotropy can be obtained by combining an almost isotropic transition-metal ion with highly anisotropic rare-earth ions, increasing the difference of the Landé g-factors in oligonuclear complexes. It is known that the Dy^3+ ion has roughly g≈20. A series of magnetic compounds with this ion have been investigated recently, revealing some intriguing properties <cit.>. These unusually large g-factors must correlate with a strong anisotropy in the exchange interaction as well <cit.>.
One can mention the heterodinuclear Cr^3+-Yb^3+ complex <cit.> as an example of a molecular magnet with a highly anisotropic exchange interaction in the z direction. A recent investigation of the magnetism of a Co_5 complex brings evidence of negative g-factors for some Co^2+ ions <cit.>. Surely, this study stimulates a deeper understanding of the origin of negative g-factors and their implications for the magnetic properties of some compounds. The inversion of the sign of the g-factors can occur in molecular magnets as well as in single-chain magnets and other materials <cit.>. For instance, in Ref. [fecu] a combined ligand field and density functional theory (DFT) analysis of the magnetic anisotropy in cyanide-bridged single-molecule magnets (oligonuclear complexes, Fe—CN—M (M=Cu, Ni)) has been performed. In particular, it was found that the g-factor of the Fe^3+ ion is isotropic and negative, g_Fe=-1.72, while for the Cu^2+ ion it is positive and has a small axial anisotropy, g_Cu_x=g_Cu_y=2.18, g_Cu_z=2. It was also shown recently, using ab initio calculations, that the product of the diagonal components of the Landé g-factors satisfies g_xg_yg_z<0 for some lanthanide and transition-metal complexes. It is worth mentioning that the negative sign of the product of the Landé g-factor components has been known for some transition-metal and lanthanide complexes since the 1960s <cit.>. Moreover, there are compounds of the single-chain magnet (SCM) type, for example [(CuL)_2DyMo(CN)_8]·2CH_3CN· H_2O <cit.>, which are interesting magnetic materials exhibiting different Landé g-factors for different magnetic ions and which are described within the Ising-Heisenberg spin chain model. These models, in contrast to the Ising-Heisenberg models with uniform g-factors, demonstrate a zero-temperature magnetization curve with an unusual non-plateau behavior within the same eigenstate <cit.>. The theoretical model of the aforementioned compound can be solved exactly by means of the generalized classical transfer matrix method <cit.>. The models of the Ising-Heisenberg type imply a lattice consisting of small quantum spin clusters interacting with each other through intermediate Ising spins <cit.>. Therefore, the eigenstates of the whole system are direct products of the eigenstates of the quantum spin clusters. The zero-temperature magnetization curve of such models usually contains regions corresponding to certain eigenstates, with sharp transitions between them. These regions are horizontal (magnetization plateaus) if the magnetic moment is a good quantum number and each eigenstate possesses a fixed value of it. This is the case for a conserved magnetization operator. However, for different g-factors of different spins within the same cluster, the magnetization operator does not commute with the Hamiltonian. As a result, the magnetic moment is not a good quantum number and the magnitude of the magnetization can vary within the same eigenstate under a change of the magnetic field. Thus, deviations from the horizontal line occur in the magnetization curve (quasi-plateaus) <cit.>. However, the deviation of the magnetization curve parts from the horizontal line due to the difference in the g-factors of the quantum spins of the three-spin linear cluster in the [(CuL)_2DyMo(CN)_8]·2CH_3CN· H_2O SCM is barely visible by eye, as the difference of the values of the g-factors is rather small <cit.>.
Almost the same effect has been observed, though quantitatively even less pronounced, in the approximate model of the SCM, the F-F-AF-AF spin chain compound Cu(3-chloropyridine)_2(N_3)_2 <cit.>.

In the past decades, the so-called diamond chain magnetic structure and its variants have been intensively studied. Since the experimental discovery that the Cu^2+ ions in the well-known mineral azurite, Cu_3(CO_3)_2(OH)_2, are arranged along the b-plane in a diamond chain manner and that the interchain coupling is small enough <cit.>, the issue has been receiving permanent attention from theoreticians and experimentalists <cit.>. Due to its symmetry properties and relative simplicity, the diamond chain is also the most popular one-dimensional structure for theoretical research in the field of Ising-Heisenberg spin lattices. Various physical effects and issues have been considered in the context of the corresponding model on the diamond chain or its modifications: magnetization plateaus and zero-temperature phase diagrams, higher spins, mixed spins, four-spin interactions, the magnetocaloric effect, entanglement and quantum state transfer, just to mention a few of them <cit.>.

In the present paper we consider the S=1/2 Ising-Heisenberg model on the diamond chain with non-conserved magnetization due to non-uniform g-factors as well as XY-anisotropy. We describe the eigenstates of the chain for the case of four different g-factors. In particular, we are interested in the zero-temperature effects induced by the appearance of negative g-factor(s). As a further development of the ideas of Ref. [yin15], we present a detailed description of the "fire-and-ice" configurations, which in our case are more diverse. We analyze the Ising case as well as the full Ising-Heisenberg model. The paper is organized as follows. In Sec. 2 we present the model under consideration and make general statements about the non-commutativity of the magnetization and the Hamiltonian, its origin and its basic consequences. In Sec. 3 we describe in detail the ground states of the model and its Ising limit. In Sec. 4 we study the effect of a negative g-factor for part of the spins of the unit cell. We find various frustrated states of the "fire-and-ice" type introduced in Ref. [yin15]. Sec. 5 contains the conclusions.

§ THE MODEL

Let us consider the S=1/2 XYZ-Ising diamond chain described by the following Hamiltonian (see Fig. <ref>):

ℋ= ∑_j=1^N(ℋ_j-Bg_jσ_j),

where ℋ_j is given by

ℋ_j= J{ (1+γ)S_j,1^xS_j,2^x+(1-γ)S_j,1^yS_j,2^y}+ΔS_j,1^zS_j,2^z+K(S_j,1^z+S_j,2^z)(σ_j+σ_j+1)-B(g_1S_j,1^z+g_2S_j,2^z),

and the g-factors of the intermediate Ising spins σ_j are supposed to alternate,

g_j = g_3 for odd j,  g_j = g_4 for even j.

Thus, the diamond chain is composed of vertical S=1/2 XYZ dimers with quantum spin operators 𝐒_j,1 and 𝐒_j,2. These dimers alternate with Ising spins σ_j taking the values ±1/2. The Ising spins interact with the z-components of their left and right neighboring 𝐒-operators with exchange interaction K. The quantum spins belonging to the same dimer are also supposed to have different Landé g-factors, denoted by g_1 and g_2. Therefore, the Hamiltonian of the whole system is the sum of the mutually commuting block Hamiltonians ℋ_j.
The important feature of the Hamiltonian ℋ is its non-commutativity with the magnetization operator,

ℳ^z=1/N∑_j=1^N(g_1S_j,1^z+g_2S_j,2^z)+1/N∑_j=1^N/2(g_3σ_2j-1+g_4σ_2j),

[ℋ, ℳ^z]≠0.

The origin of this non-commutativity is the difference in the g-factors of the quantum spins and the XY-anisotropy,

[ℋ_j, g_1S_j,1^z+ g_2S_j,2^z]= -iJγ(g_1+g_2)(S_j,1^xS_j,2^y+S_j,1^yS_j,2^x)+iJ(g_1-g_2)(S_j,1^xS_j,2^y-S_j,1^yS_j,2^x).

As one can see, there are two sources of the non-commutativity: the XY-anisotropy γ and the difference of the g-factors (g_1-g_2) <cit.>. This non-commutativity leads to a non-linear magnetic field dependence of the spectrum of the model and to the phenomenon of quasi-plateaus <cit.>. A quasi-plateau actually means an eigenstate with an explicit magnetic field dependence, even at zero temperature. The part of the magnetization curve corresponding to an eigenstate with an explicit magnetic field dependence demonstrates a monotonic growth of the magnetization with increasing magnetic field, instead of being constant (a plateau), which is what happens in the conventional case, when the finite spin cluster has a conserved magnetization operator. The non-commutativity of the magnetization operator and the Hamiltonian leads to another unusual phenomenon: reentrant transitions due to the non-linear magnetic field dependence of the spectrum. The sequence of quantum phase transitions at zero temperature under a monotonic change of the magnetic field for a finite spin cluster is determined by level crossings. For a spectrum linear in the magnetic field, any two levels can have no more than one crossing, and thus each eigenstate can appear only once in the magnetization curve. In the case of a non-linear spectrum, two levels can have more than one crossing, which can lead to a multiple appearance of the same ground state in the magnetization curve.

As the lattice has six spins in the translationally invariant unit cell, two σ spins and two vertical dimers, the total saturation magnetization per unit cell (note that N is the number of blocks, which is supposed to be even, while the number of unit cells with six spins is N/2) is

M_sat=g_1+g_2+(g_3+g_4)/2.

§ GROUND STATES

The eigenstates of the chain are composed of direct products of the eigenstates of each block. The Ising interaction between the vertical dimers makes the propagation of any type of spin excitation from block to block impossible. That is why we can describe all possible ground states of the system exactly in terms of a few configurations. However, the Hamiltonian breaks the translational symmetry of the diamond chain by the doubling of the block, leading to a six-spin unit cell. When g_3=g_4 the unit cell coincides with the three-site triangular block of the diamond chain.

§.§ Quantum dimer eigenstates

Let us start with the description of the four eigenstates of the isolated quantum spin dimer (Eq. (<ref>)), which are the building blocks for the construction of the ground states of the whole chain. After diagonalization of the block Hamiltonian (<ref>), we obtain four eigenvalues. The first couple of eigenvalues ε_1,2 can be expressed as follows:

ε_1,2(σ_j,σ_j+1)=-Δ/4± G,

where G=√(B^2(g_1-g_2)^2+J^2)/2. The corresponding eigenstates are independent of the values of the neighboring σ-spins:

|Ψ_1,2⟩= (|↑↓⟩+c_±|↓↑⟩)/√(1+c_±^2),

with

c_±=(B(g_1-g_2)±2G)/J.

In the limit of uniform g-factors these eigenstates transform into the singlet state and the S^z=0 component of the triplet state; that is why the vertical dimer decouples from its neighborhood.
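These closed forms are easy to check numerically. The following numpy sketch (not part of the original derivation; all parameter values are illustrative) builds the 4×4 block Hamiltonian for fixed neighboring Ising spins, confirms that it does not commute with the dimer moment g_1S^z_1+g_2S^z_2, and verifies that -Δ/4±G appear among its eigenvalues.

import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)
S1 = {a: np.kron(o, I2) for a, o in zip('xyz', (sx, sy, sz))}
S2 = {a: np.kron(I2, o) for a, o in zip('xyz', (sx, sy, sz))}

# illustrative parameters; s_sum = sigma_j + sigma_{j+1}
J, gam, Delta, K, B, g1, g2, s_sum = 1.0, 0.3, 0.5, -1.0, 0.7, 2.2, 1.4, 1.0
H = (J * ((1 + gam) * S1['x'] @ S2['x'] + (1 - gam) * S1['y'] @ S2['y'])
     + Delta * S1['z'] @ S2['z'] + K * s_sum * (S1['z'] + S2['z'])
     - B * (g1 * S1['z'] + g2 * S2['z']))
M = g1 * S1['z'] + g2 * S2['z']

print(np.linalg.norm(H @ M - M @ H))   # nonzero whenever gam != 0 or g1 != g2
G = np.sqrt(B**2 * (g1 - g2)**2 + J**2) / 2
ev = np.linalg.eigvalsh(H)
print(all(np.isclose(ev, -Delta/4 + s * G).any() for s in (1, -1)))  # True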
The second set of eigenvalues ε_3,4 of the Hamiltonian (<ref>) is

ε_3,4(σ_j,σ_j+1)=Δ/4± F_σ_j,σ_j+1,

where F_σ_j,σ_j+1=√([B(g_1+g_2)-2K(σ_j+σ_j+1)]^2+J^2γ^2)/2. The two eigenstates associated with the eigenvalues ε_3,4 depend on the left and right σ spins:

|Ψ_3,4⟩= (|↑↑⟩+b_σ_j,σ_j+1^±|↓↓⟩)/√(1+(b_σ_j,σ_j+1^±)^2),

with

b_σ_j,σ_j+1^±=(B(g_1+g_2)-2K(σ_j+σ_j+1)±2F_σ_j,σ_j+1)/(Jγ).

Thus, here we have three different eigenstates for a vertical quantum dimer, depending on the configuration of the neighboring σ spins: |Ψ_3,4^±⟩ corresponding to σ_j=σ_j+1=∓1/2 and |Ψ_3,4^0⟩ corresponding to σ_j=-σ_j+1, which differ from each other only by the form of the coefficient b_±. In Appendix <ref> one can find the Ising limit of Eqs. (<ref>) and (<ref>).

One of the unusual features of the eigenstates (<ref>) and (<ref>) is the explicit dependence of the corresponding magnetic moment on the magnetic field, which is a direct consequence of the non-commutativity of the magnetization operator and the block Hamiltonian. It is easy to obtain that

ℳ_1,2^z= ⟨Ψ_1,2|(g_1S_j,1^z+g_2S_j,2^z)|Ψ_1,2⟩= ∓B(g_1-g_2)^2/(4G),

and

ℳ_3,4^z= ⟨Ψ_3,4|(g_1S_j,1^z+g_2S_j,2^z)|Ψ_3,4⟩= ∓(g_1+g_2)[B(g_1+g_2)-2K(σ_j+σ_j+1)]/(4F_σ_j,σ_j+1).

Thus, ℳ_3,4^z not only depends continuously on the magnetic field but also exhibits jumps under a flip of the neighboring σ-spins.

§.§ Eigenstates for the chain

Let us now describe the ground states of the whole chain with the Hamiltonian (<ref>), which are constructed with the aid of the block eigenstates. By virtue of the difference in the g-factors of the Ising spins, the model has six spins (two blocks) in the unit cell and therefore the ground states will demonstrate a two-block translational symmetry. Notice that in the case g_3=g_4 the unit cell can contain only three spins (no period doubling).

1.- Quasi-saturated (QS) states: First of all, let us mention the quasi-saturated states with the corresponding magnetic moments and energies per unit cell. Let us recall that the total number of diamond-shaped blocks in the chain is denoted by N, but due to the difference in the g-factors of the σ spins the unit cell corresponding to the Hamiltonian (<ref>) contains six sites. Thus, all quantities presented below are calculated with respect to the number of six-spin unit cells, equal to N/2. The first quasi-saturated (QS_1) state reads

|QS_1⟩= ∏_j=1^N/2|↑⟩__2j-1⊗|Ψ_4^-⟩__2j-1⊗|↑⟩__2j⊗|Ψ_4^-⟩__2j,

ℳ_QS_1= (g_1+g_2)(B(g_1+g_2)-2K)/(2F_+,+)+(g_3+g_4)/2,

E_QS_1= Δ/2-2F_+,+-(B/2)(g_3+g_4).

Here the arrows stand for the spin-up configuration of the corresponding σ-spins. The second quasi-saturated (QS_2) state is expressed as follows:

|QS_2⟩= ∏_j=1^N/2|↓⟩__2j-1⊗|Ψ_4^+⟩__2j-1⊗|↓⟩__2j⊗|Ψ_4^+⟩__2j,

ℳ_QS_2= (g_1+g_2)(B(g_1+g_2)+2K)/(2F_-,-)-(g_3+g_4)/2,

E_QS_2= Δ/2-2F_-,-+(B/2)(g_3+g_4).

The eigenstates QS_1 and QS_2 are linked to each other by the inversion of all σ_j spins. QS_2 represents the ground state at strong magnetic field when the g-factors of the σ spins are negative. Both QS_1 and QS_2 become degenerate at vanishing magnetic field. They are the counterparts of the saturated, or fully polarized, state. However, the XY-anisotropy γ prevents the magnetization from reaching its saturated value, given by Eq. (<ref>), at any finite value of the magnetic field. Therefore, the saturation can be reached asymptotically when B→∞ or at vanishing XY-anisotropy, γ→ 0.
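As a quick illustration of this quasi-saturation (a numerical check with arbitrary, illustrative parameter values, not a result from the original analysis), the closed-form moment of QS_1 stays strictly below M_sat at any finite field and approaches it only asymptotically:

import numpy as np

J, gam, K = 1.0, 0.3, -1.0
g1, g2, g3, g4 = 2.2, 1.4, 2.0, 1.8
M_sat = g1 + g2 + (g3 + g4) / 2

def m_qs1(B):
    x = B * (g1 + g2) - 2 * K
    F_pp = np.sqrt(x**2 + (J * gam)**2) / 2      # F_{+,+}
    return (g1 + g2) * x / (2 * F_pp) + (g3 + g4) / 2

for B in (1.0, 10.0, 100.0, 1000.0):
    print(B, M_sat - m_qs1(B))   # deficit > 0, vanishing only as B -> infinity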
2.- Ferrimagnetic (FI) states: There are two 'ferrimagnetic' (FI) eigenstates (with respect to the spin orientation, but not to the magnetic moment). This implies the appearance of several sublattices with non-zero net magnetization as well as non-zero S^z. Thus, the first ferrimagnetic (FI_1) state is

|FI_1⟩= ∏_j=1^N/2|↑⟩__2j-1⊗|Ψ_2⟩__2j-1⊗|↑⟩__2j⊗|Ψ_2⟩__2j,

ℳ_FI_1= B(g_1-g_2)^2/(2G)+(g_3+g_4)/2,

E_FI_1= -Δ/2-2G-(B/2)(g_3+g_4).

The unit cell of the ground state FI_1 thus contains two Ising spins (with different g-factors) pointing up and two Heisenberg dimers with, on average, two spins pointing up and two spins pointing down. Despite the non-coherent superposition of |↑↓⟩ and |↓↑⟩ in |Ψ_2⟩, the expectation values of S_j,1^z and S_j,2^z, though different from ± 1/2, compensate each other:

__j⟨Ψ_2|S_j,1^z|Ψ_2⟩__j=(1/2)(1-c_-^2)/(1+c_-^2),  __j⟨Ψ_2|S_j,2^z|Ψ_2⟩__j=-(1/2)(1-c_-^2)/(1+c_-^2).

However, __j⟨Ψ_2|g_1 S_j,1^z|Ψ_2⟩__j≠ -__j⟨Ψ_2|g_2 S_j,2^z|Ψ_2⟩__j. The second ferrimagnetic (FI_2) state is

|FI_2⟩= ∏_j=1^N/2|↓⟩__2j-1⊗|Ψ_2⟩__2j-1⊗|↓⟩__2j⊗|Ψ_2⟩__2j,

ℳ_FI_2= B(g_1-g_2)^2/(2G)-(g_3+g_4)/2,

E_FI_2= -Δ/2-2G+(B/2)(g_3+g_4).

Similarly, the state FI_2 has two Ising spins down, two Heisenberg spins down and two Heisenberg spins up in the unit cell. Although the net spin orientation is not balanced, the magnetization of the system can vanish at g_1=g_2 and g_3=-g_4. If one does not take into account the difference of the g-factors of the σ spins (g_3=g_4), these ground states have three spins in the unit cell, "up-up-down" for the FI_1 state and "down-up-down" for the FI_2 state. The pair of Heisenberg spins in both cases forms a perfect singlet state.

3.- Antiferromagnetic (AF) states: There are two eigenstates which one can call 'antiferromagnetic' (AF), because the corresponding unit cell contains equal numbers of spins pointing up and pointing down (balanced spin orientation). However, the magnetization does not necessarily vanish, except in the particular case g_1=g_2 and g_3=g_4. The first 'antiferromagnetic' (AF_1) state is given by

|AF_1⟩= ∏_j=1^N/2|↑⟩__2j-1⊗|Ψ_2⟩__2j-1⊗|↓⟩__2j⊗|Ψ_2⟩__2j,

ℳ_AF_1= B(g_1-g_2)^2/(2G)+(g_3-g_4)/2,

E_AF_1= -Δ/2-2G-(B/2)(g_3-g_4).

The second one (AF_2) is

|AF_2⟩= ∏_j=1^N/2|↓⟩__2j-1⊗|Ψ_2⟩__2j-1⊗|↑⟩__2j⊗|Ψ_2⟩__2j,

ℳ_AF_2= B(g_1-g_2)^2/(2G)-(g_3-g_4)/2,

E_AF_2= -Δ/2-2G+(B/2)(g_3-g_4).

The eigenstates |AF_1⟩ and |AF_2⟩ are slightly different due to the left-right asymmetry which takes place because of the difference in the g-factors of the Ising spins. They become identical when g_3=g_4.

4.- Quantum ferrimagnetic (QI) states: Finally, we introduce two so-called 'quantum ferrimagnetic' eigenstates. Here the number of spins pointing up (down) in the unit cell is not a good quantum number (is not fixed), as the |Ψ_4^0⟩ eigenstate of the quantum dimer is a non-coherent superposition of |↑↑⟩ and |↓↓⟩. Thus, the corresponding unit cell is characterized by one Ising spin pointing up, another one pointing down, and a total S^z=0 for the spins of the Heisenberg dimers.
However, in contrast to the AF and FI eigenstates, it does not make any sense to speak about the number of Heisenberg spins with a certain orientation, even at the level of expectation values. The first 'quantum ferrimagnetic' (QI_1) state reads

|QI_1⟩= ∏_j=1^N/2|↑⟩__2j-1⊗|Ψ_4^0⟩__2j-1⊗|↓⟩__2j⊗|Ψ_4^0⟩__2j,

ℳ_QI_1= B(g_1+g_2)^2/(2F_+,-)+(g_3-g_4)/2,

E_QI_1= Δ/2-2F_+,- -(B/2)(g_3-g_4),

and the second 'quantum ferrimagnetic' (QI_2) state differs from the previous one just by the orientation of the σ spins,

|QI_2⟩= ∏_j=1^N/2|↓⟩__2j-1⊗|Ψ_4^0⟩__2j-1⊗|↑⟩__2j⊗|Ψ_4^0⟩__2j,

ℳ_QI_2= B(g_1+g_2)^2/(2F_+,-)-(g_3-g_4)/2,

E_QI_2= Δ/2-2F_+,- +(B/2)(g_3-g_4).

As in the previous case, the states |QI_1⟩ and |QI_2⟩ differ from each other only due to the difference in the g-factors of the Ising spins. They become identical when g_3=g_4. Note that if g_1=-g_2 and g_3=g_4 the magnetization can be zero. In Appendix <ref> the ground-state energies for the limiting case of the purely Ising diamond chain are described.

§ `FIRE-ICE' INTERFACE

In Ref. <cit.> an interesting and unusual critical point has been described. For the simplest classical case of the one-dimensional ferromagnetic Ising model with staggered g-factors of different signs, the authors described the situation when there are two sublattices in the ground state of the system (at zero temperature): one of them is ordered and the other one is totally disordered. For an obvious reason they called this ground state 'Half Fire, Half Ice'. However, it is worth mentioning that critical lines of the aforementioned kind have been considered somewhat earlier. They are quite common properties of Ising-Heisenberg spin systems <cit.>. Generally speaking, ground states with ordered and disordered sublattices naturally arise in spin systems with a complex unit cell containing several spins. For instance, the same phenomenon occurs in the ferromagnetic-ferromagnetic-antiferromagnetic Ising chain, due to the antiferromagnetic bond <cit.>. The appearance of antiferromagnetic bonds here is crucial for such critical states. They usually arise as the degeneracy between two different ground states which differ from one another by the orientation of one or several spins. The simplest example can probably be found in Ref. <cit.>, in the Ising-Heisenberg S=1/2 diamond chain. The critical line between the fully polarized state and the ground state where the spins of the quantum dimer point along the magnetic field, while the Ising spins between them point oppositely (due to the antiferromagnetic coupling between them and the dimers), is the line corresponding to the 'one-third-fire-two-thirds-ice' configuration. This means that one sublattice out of three in the ground state is disordered. The principal difference of the 'Half Fire, Half Ice' configuration of Ref. <cit.> from the partly ordered-partly disordered degenerate configurations mentioned above is the uniform ferromagnetic coupling for all bonds. The ambiguity in the state of the spins of the disordered sublattice is here a consequence of their negative g-factor. As the model of the Ising-Heisenberg diamond chain is the simplest generalization of the Ising chain (a decorated Ising chain), the corresponding 'fire-and-ice' degenerate configurations can also be realized here.

Let us first describe how this configuration arises in the ordinary ferromagnetic Ising chain with two alternating g-factors, g_A>0 and g_B<0 <cit.>. The Hamiltonian is

ℋ_Is^1d=J∑_j=1^Nσ_jσ_j+1-B∑_j=1^N/2(g_Aσ_2j-1+g_Bσ_2j),

where J<0, σ_j=±1/2 and we assume for simplicity |g_B|>g_A. Thus, at T=0 and sufficiently low magnetic field the ground state is the ferromagnetic one with all spins pointing down. Then, it is easy to see that there is a critical point at

B_c=|J|/g_A,

where a degeneracy occurs between the aforementioned ground state and a configuration in which each spin with g-factor g_A points up; the latter becomes the non-degenerate ground state when B>B_c. Thus, at the critical value of the magnetic field the system has two sublattices. One of them is ordered (all spins with negative g-factor point down) and the other one is completely disordered. This extraordinary feature the authors of Ref. [yin15] named 'Half Fire, Half Ice'.
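This degeneracy can be verified by brute force on a small periodic chain. The following sketch (with illustrative parameter values; not from the original paper) enumerates all 2^N spin configurations at B=B_c and counts the ground states:

import itertools
import numpy as np

J, gA, gB = -1.0, 1.0, -2.0      # ferromagnetic J < 0, g_A > 0, |g_B| > g_A
N = 8                            # periodic chain with N/2 = 4 g_A sites
Bc = abs(J) / gA                 # the critical field B_c = |J| / g_A

def energy(spins, B):
    exchange = sum(J * spins[i] * spins[(i + 1) % N] for i in range(N))
    g = [gA if i % 2 == 0 else gB for i in range(N)]
    zeeman = -B * sum(g[i] * spins[i] for i in range(N))
    return exchange + zeeman

levels = [energy(s, Bc) for s in itertools.product((0.5, -0.5), repeat=N)]
e0 = min(levels)
print(sum(abs(e - e0) < 1e-12 for e in levels))   # 2**(N//2) = 16 ground states
# The g_B sublattice stays frozen down ('ice'), while each g_A spin
# flips freely and independently ('fire').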
This property can easily be obtained in the nearest-neighbor ferromagnetic Ising model on an arbitrary bipartite lattice with g-factors of different signs on the two sublattices. Making the same assumptions about J, g_A and g_B, and considering the model with two sublattices A and B,

ℋ_Is^AB=J∑_i∈ A,j∈ Bσ_iσ_j-B( g_A∑_i∈ Aσ_i+g_B∑_j∈ Bσ_j),

one can easily see that there exists a zero-temperature critical point of the same origin with the corresponding value of the magnetic field

B_c=|J|d/(2g_A),

where d is the coordination number of the bipartite lattice.

§.§ Ising model on a diamond chain

Let us consider the Ising limit of the Hamiltonian (<ref>). Setting J=0 we get

ℋ_I= ∑_j=1^N{ ΔS_j,1^zS_j,2^z+K(S_j,1^z+S_j,2^z)(σ_j+σ_j+1)-B(g_1S_j,1^z+g_2S_j,2^z)-Bg_jσ_j},

where the g-factors of the intermediate Ising spins alternating with the spin dimers are given by Eq. (<ref>). The Ising diamond chain exhibits several interfaces between the ground states with peculiar partial frustration. Below we discuss some of them using the results given in Appendix <ref>. As the unit cell of the model contains six spins, "fire-and-ice" configurations with one, two, three, four and five frustrated (disordered) sublattices are possible. Let us emphasize once again that the origin of this frustration is the conflict between the negative g-factor(s) and the ferromagnetic couplings.

§.§.§ One frustrated spin (1/6-fire and 5/6-ice)

The interfaces with one frustrated spin in the unit cell are described in Appendix <ref>1. Here we consider the particular cases of negative g_3, as well as negative g_3 and g_4. All couplings are supposed to be ferromagnetic, Δ<0, K<0. Let us first consider the case of only one spin with a negative g-factor, say g_3. Due to the ferromagnetic coupling between all spins, the unit cell can be divided into two parts: the spin with the negative g-factor and the rest of the spins. When |g_3|<2(g_1+g_2)+g_4 the zero-temperature ground state at small enough magnetic field is QS_1^+ (all spins pointing up). The saturated state (the ground state at strong magnetic field) differs from QS_1^+ by the flip-down of the spin with the negative g-factor, leading to QI_2^+ (see Eq. (<ref>)). Thus, there is a critical value of the magnetic field at which these two configurations become degenerate,

B_c=2K/g_3.

At this particular value of the magnetic field the system has one disordered and five ordered sites in the six-site unit cell. Using the terminology of Ref. [yin15], one can naively refer to this state as '1/6-fire-5/6-ice'.
However, by virtue of the same entropy argument given above, the ground state belongs to the half-fire-half-ice class discussed in Ref. <cit.>. The same situation can also occur at the interface between QS_2^+ and QI_1^+ when both g_3 and g_4 are negative and, additionally, |g_3|<|g_4|. Let us emphasize that, despite the triangular form of the unit cell, there is no geometrical frustration in the system, as we have only ferromagnetic bonds. The appearance of disorder here is a direct consequence of the interplay between the ferromagnetic interaction and the negative g-factors <cit.>.

§.§.§ Two frustrated spins (1/3-fire and 2/3-ice)

All possible interfaces with two frustrated spins are described in Appendix <ref>2. As an illustration, here we analyze the particular case g_3<0 and g_4<0 for ferromagnetic coupling. Consider first the case |g_3+g_4|<2(g_1+g_2). Then the interface with two disordered sublattices arises between QS_1^+ and QS_2^+ at

B_c=4K/(g_3+g_4).

As here two spins of the six-spin unit cell are disordered, according to our convention <cit.> the corresponding state can be referred to as the '1/3-fire and 2/3-ice' configuration. At |g_3+g_4|>2(g_1+g_2) the interface with two disordered sublattices occurs between QS_2^- and QS_2^+ at the particular value of the magnetic field

B_c=2|K|/(g_1+g_2).

Let us now consider the case of antiferromagnetic coupling within the dimer, Δ>0 and K<0. Assuming for the time being g_1=g_2>0 and g_3<0, g_4<0, we find the ground state of the Ising diamond chain at weak magnetic field pointing along the z-axis to be four-fold degenerate (or two-fold degenerate when g_3=g_4 and the magnetic unit cell shrinks to the three-spin plaquette): σ_j=-1/2, S_j,1^z=±1/2, S_j,2^z=∓1/2. But the degeneracy is lifted once we put g_1≠ g_2. The degeneracy can even be sixteen-fold at zero magnetic field, when the σ-spins become frustrated as well. The corresponding ground state is the Ising counterpart of the dimer-monomer ground state of the quantum diamond chain <cit.>. Supposing as well Δ>|K|, we get the non-degenerate ground state with four spins in the six-spin unit cell pointing down (σ_j=σ_j+1=-1/2 and S_j,2^z=S_j+1,2^z=-1/2) and two spins pointing up (S_j,1^z=S_j+1,1^z=1/2), or FI_2^+ (see Eq. (<ref>)). In the saturated state (QS_1^+) at strong magnetic field, the spins with g-factor g_2 are flipped up. Therefore, there is a critical value of the magnetic field defining the interface QS_1^+↔ FI_1^+ (QS_2^+↔ FI_2^+):

B_c=(Δ+2|K|)/(2g_2).

Thus, we have defined here another 'fire-and-ice' ground state, in which four spins of the unit cell are ordered and two are completely disordered. By virtue of the distribution of the spins of the unit cell between the ordered and disordered sublattices, we can refer to this ground state as '1/3-fire-2/3-ice'.

§.§.§ Three frustrated spins (1/2-fire and 1/2-ice)

Increasing further the number of frustrated sites in the unit cell, one arrives at the ground states given in Appendix <ref>3. As an example, we consider the following distribution of the g-factors for the ferromagnetic Ising diamond chain: g_1<0 and g_3<0, while g_2>0 and g_4>0. Two different regimes, |2g_1+g_3|>2g_2+g_4 and |2g_1+g_3|<2g_2+g_4, leading to different values of the critical field, are possible. However, in both situations there are three ordered and three coherently disordered spins in the six-spin magnetic unit cell.
Therefore, we deal here with the exact half-fire-half-ice configuration, with the critical field at the interface between QS_2^- and AF_2^+ for |2g_1+g_3|>2g_2+g_4 and at the interface between QS_1^+ and AF_2^- for |2g_1+g_3|<2g_2+g_4. The corresponding values of the magnetic field are

B_c=(|Δ|+2|K|)/(2g_2+g_4)

and

B_c=(Δ+2K)/(2g_1+g_3),

respectively. It is worth emphasizing once again that the three disordered spins in the unit cell cannot be disordered independently of each other; they can change their direction only simultaneously. We therefore call them coherently disordered spins. Accordingly, the residual entropy per unit cell at the critical values of the magnetic field is equal to log 2.

§.§.§ Four frustrated spins (2/3-fire and 1/3-ice)

All possible interfaces between two ground eigenstates with four frustrated spins in the unit cell are presented in Appendix <ref>4. As an example, we consider the following case: Δ<0, K<0 and g_3<0, g_4<0, g_1>0, g_2>0. Again, the ground state at sufficiently weak magnetic field depends on the mutual relation of the total positive and total negative g-factors in the unit cell. For |g_3+g_4|>2(g_1+g_2) the system demonstrates all spins pointing down, or QS_2^-. Then, for strong enough magnetic field the ground state transforms into the saturated one, which in this case is QS_2^+. The degeneracy between these ground states takes place at

B_c=2|K|/(g_1+g_2).

As there are four coherently disordered spins and two ordered spins in the magnetic unit cell, the corresponding configuration can be classified as "2/3-fire and 1/3-ice". If the g-factors of the σ_j spins are equal to each other, g_3=g_4, the magnetic unit cell shrinks to only three spins. It is also straightforward to see that this critical value is nothing else but Eq. (<ref>) with K<0 and σ_j=σ_j+1=-1/2.

§.§.§ Five frustrated spins (5/6-fire and 1/6-ice)

Finally, the last case we present here is the interface with only one ordered site among the six sites of the magnetic unit cell. One can consider the case g_3<0 with all other g-factors positive. Again, the ratio of ordered and disordered sites inside the magnetic unit cell depends on the relation between |g_3| and 2(g_1+g_2)+g_4. If |g_3|>2(g_1+g_2)+g_4, then the low-field orientation of all six spins of the magnetic unit cell is defined by the orientation of the spin with the negative g-factor. In the saturated state all spins except the one with the negative g-factor point up. Thus, at the critical value of the magnetic field,

B_c=2|K|/(g_4-2(g_1+g_2)),

the boundaries QS_1^+↔ QI_1^- and QS_1^-↔ QI_1^+ arise, and the system has only one ordered site among the six sites of the magnetic unit cell. Thus, the corresponding configuration could be referred to as '5/6-fire-1/6-ice'. However, it is important to remember that the remaining five spins of the unit cell cannot be disordered independently of each other. In each magnetic unit cell they all point either up or down. Therefore, the residual entropy per unit cell is log 2.

§.§.§ Additional remarks

Another important point affecting the structure of the partially frustrated interface is the relation between the total negative and positive g-factors in the system. Consider an arbitrary S=1/2 Ising spin lattice with uniform ferromagnetic coupling K. It might naively seem that the number of frustrated spins in the magnetic unit cell at the interface between two ground states corresponds to the number of spins with negative g-factor in it.
However, this is true only when the absolute value of the total negative g-factor is bigger than the total positive g-factor of the unit cell, ∑|g_neg|>∑ g_pos. The ground state under this condition (at infinitesimally small magnetic field in the z-direction) is the ferromagnetic state with all spins pointing down, F_-. In the opposite limit, ∑|g_neg|<∑ g_pos, the ground state is ferromagnetic with all spins pointing up, F_+. The saturated state, S, corresponds to all spins with positive g-factor pointing up and all spins with negative g-factor pointing down. It is easy to write down the corresponding ground-state energies per unit cell in the following form: E_F^-=-(n/4)K+(B/2)(∑ g_pos-∑|g_neg|), E_F^+=-(n/4)K-(B/2)(∑ g_pos-∑|g_neg|), E_S=-(m/4)K-(B/2)(∑ g_pos+∑|g_neg|), where n and m are positive integers and m<n. When ∑|g_neg|>∑ g_pos the interface between F_- and S exists at B_c=(n-m)K/(4∑ g_pos). Thus, all spins with positive g-factor are "coherently" frustrated, and the critical field does not depend on the remaining g-factors. In the opposite situation, ∑|g_neg|<∑ g_pos, the ordered and disordered sites exchange their roles. Now, at the interface between F_+ and S at the critical value of the magnetic field, B_c=(n-m)K/(4∑|g_neg|), all spins with positive g-factors are ordered within the magnetic unit cell, while the spins with negative g-factors are "coherently" disordered. Thus, one can conclude that having q spins with negative g-factors in a p-spin magnetic unit cell can lead to an interface with either q disordered spins or p-q disordered spins in the unit cell, depending on the relation between the total negative and positive g-factors. Let us illustrate this feature with the example of one and five disordered sites in the unit cell of the Ising diamond chain. Eqs. (<ref>) and (<ref>) are examples of Eqs. (<ref>) and (<ref>), respectively. Taking g_3>0 and the remaining g-factors negative, one obtains the same interfaces given by Eqs. (<ref>) and (<ref>), but under the opposite relation between the g-factors. Thus, when g_3>|2(g_1+g_2)+g_4| one gets `5/6-fire-1/6-ice' with five disordered spins in the unit cell and the critical field given by Eq. (<ref>). For g_3<|2(g_1+g_2)+g_4| the system has the interface corresponding to `1/6-fire-5/6-ice' with one disordered spin and the critical value of the magnetic field coinciding with Eq. (<ref>). §.§.§ The phase diagrams Let us now proceed to the description of the ground-state phase diagrams illustrating the interfaces discussed above, as well as more sophisticated cases connected with the interplay between negative g-factors and antiferromagnetic and mixed couplings in the system. In the examples presented above we considered the simplest interfaces between ground states at low (vanishing) magnetic field and the saturated state at strong enough magnetic field. Here, using the phase diagrams, we demonstrate some intermediate interfaces as well. The zero-temperature ground-state phase diagrams in the (B-Δ) plane for the Ising diamond-chain with the Hamiltonian (<ref>) are presented in Fig. <ref>. Panel (a) displays the phase diagram corresponding to the following particular values of the parameters: K=-1, g_1=-2, g_2=-2, g_3=4 and g_4=3. Here one can see five interfaces between four ground states, QS_2^-, QS_1^-, QI_1^- and FI_1; the latter is a degenerate superposition of FI_1^+ and FI_1^-, which exists due to the equal g-factors of the spins of the vertical dimer.
Moreover, the spins in each vertical dimer can be in either the "up-down" or the "down-up" configuration independently, even inside the unit cell. Thus, the corresponding state has a residual entropy per unit cell of 𝒮=2log 2. The interface QS_2^-↔ FI_1 corresponds to the `2/3 fire-1/3 ice' state, with residual entropy per unit cell 𝒮=2log 2; although there are four frustrated spins, these spins are not frustrated independently of each other. The interface QS_1^-↔ FI_1 corresponds to the `1/3 fire-2/3 ice' state. It has residual entropy per unit cell 𝒮=2log 3. The degeneracy is so high because at the QS_1^-↔ FI_1 interface each of the two vertical dimers of the six-spin unit cell can independently be in one of three spin configurations, "up-down", "down-up" and "down-down". Another interface is QI_1^-↔ FI_1. It corresponds to the `1/2 fire-1/2 ice' state with residual entropy per unit cell 𝒮=log 5. Quite remarkable is a series of two interfaces, QS_2^-↔ QI_1^- and QS_1^-↔ QI_1^-, which both correspond to the `1/6 fire-5/6 ice' configuration, but with different frustrated sites. The residual entropy per unit cell in both cases is 𝒮=log 2. Thus, one can see that the intermediate eigenstate QI_1^- between the ground state at zero magnetic field and the saturated state arises due to the difference in the g-factors of the two spins between the dimers. In Fig. <ref>(b) we present the phase diagram for the case g_1=g_2>0, g_3=g_4<0 with ferromagnetic coupling K=-1. Under these conditions on the g-factors we have only three sites in the unit cell. Here one can see three phases and three interfaces. The interface between QS_1^+ and QS_2^+ represents the `1/3 fire-2/3 ice' configuration with residual entropy per unit cell 𝒮=log 2 [let us remind the reader that for this particular choice of the g-factors the system has only three spins in the magnetic unit cell; that is why we get log 2 = log(4^N/2)/N instead of 2log 2 as for the six-spin unit cell]. Similarly, there is the boundary between the FI_2 and QS_2^+ states corresponding to the `2/3 fire-1/3 ice' state with residual entropy per unit cell 𝒮=log 3. The interface between QS_1^+ and FI_2 is characterized by four frustrated spins generating the so-called `2/3 fire-1/3 ice' state with residual entropy per unit cell 𝒮=log 2. In Fig. <ref>(c), for g_1=-2, g_2=1, g_3=4 and g_4=-2, one can see an additional type of interface, QS_1^+↔ QI_1^-, corresponding to the `5/6 fire-1/6 ice' state. However, as in this case the total magnetic moment of the unit cell is zero, 2(g_1+g_2)+g_3+g_4=0, we have a simple (non-macroscopic) degeneracy between QS_1^+ (all spins up) and QS_2^- (all spins down). The vanishing total g-factor within the unit cell also leads to a very high degeneracy at the horizontal line B=1/2. The local mixture of the QS_1^+, QS_2^- and QI_1^- states yields the asymptotic value of the residual entropy per unit cell in the thermodynamic limit, 𝒮=log((3+√5)/2) <cit.>. There is another interface, between QI_1^- and AF_1^-, representing the `1/3 fire-2/3 ice' configuration with residual entropy given by 𝒮=2log 2. The interface QS_1^+↔ AF_1^- corresponds to the `1/2 fire-1/2 ice' configuration and has residual entropy per block 𝒮=log((3+√5)/2).
Finally, Fig. <ref>(d) is quite similar to the previous one. §.§ Ising-Heisenberg diamond chain Let us now turn to our Ising-Heisenberg model and look for the quantum or semi-classical counterparts of the fire-and-ice configurations described above for the purely Ising case. For the Ising-Heisenberg diamond-chain considered here the same phenomena take place. We have to put γ=0, as any non-zero XY-anisotropy mixes up the up-up and down-down states of the quantum spin dimer. Thus, the notion of disorder applied to the separate spins of the quantum dimer makes, to some extent, no sense. However, the degeneracy can still exist. For exchange parameters J<0, K<0 and Δ>0, with g-factors g_1=g_2>0, g_3=g_4<0 and |g_3|>2g_1, the degeneracy occurs between |QS_2⟩ and the eigenstate with all spins pointing down at the value of the magnetic field given by Eq. (<ref>). Moreover, at Δ=1 the degeneracy increases, as the energy of the eigenstate |FI_1⟩ at the critical value of the magnetic field B_c becomes equal to that of |QS_2⟩ and |QS_1⟩. However, this setup is just a simple generalization of the Ising chain from Ref. <cit.>. In the case of the quantum dimer the situation is different. For g_1=g_2, instead of the up-down and down-up configurations we have the spin singlet and the S_tot^z=0 component of the triplet. That is why the full analog of the classical `1/3 fire-2/3 ice' state does not exist in the quantum case. We are going to consider instead the analog of the critical line between the fully polarized eigenstate and the eigenstate with the Ising spins pointing down. As mentioned above, the corresponding state caused by antiferromagnetic coupling K has been considered in Ref. <cit.>. In our case, however, we consider ferromagnetic K and negative g_3 and g_4, with an interface between |QS_1⟩ and |QS_2⟩. One has to keep in mind that for negative g-factors of the σ-spins the saturated state is |QS_2⟩, while |QS_1⟩ has intermediate magnetization. The critical field B_c can be found from Eqs. (<ref>) and (<ref>): B(g_3+g_4) = √((B(g_1+g_2)+2K)^2+J^2γ^2) - √((B(g_1+g_2)-2K)^2+J^2γ^2). Solving this equation we get B_c = √(16K^2/(g_3+g_4)^2 + 4J^2γ^2/((g_3+g_4)^2-4(g_1+g_2)^2)). In Fig. <ref> the zero-temperature ground-state phase diagrams in the (B-Δ) plane are presented for fixed J=1 and γ=0.5. The critical line given by Eq. (<ref>) can be found in two panels, (a) and (b). In panels (c) and (d) one can see other interfaces, given by the vertical lines QS_1↔ QI_1 and QS_2↔ QI_1. In panel (a) the following values of the parameters are chosen: g_1=2, g_2=1.2, K=-1, g_3=-3 and g_4=-3. The interface between QS_1 and QS_2 corresponds to the `1/3 fire-2/3 ice' state, with the critical field given by (<ref>), resulting in B_c=0.4927794. At first glance this interface seems to be frustrated, analogous to the Ising limit case (see Fig. <ref>(b)); but because the states |Ψ_4^-⟩=0.9975898|↑↑⟩-0.06938726|↓↓⟩ and |Ψ_4^+⟩=0.4207327|↑↑⟩-0.9071847|↓↓⟩ are different, the Ising-Heisenberg diamond chain is simply two-fold degenerate and not macroscopically degenerate; thus there is no residual entropy at this interface. However, there exists a quantum frustrated interface QS_2↔ FI_2 with residual entropy 𝒮=2log 2 and another quantum frustrated interface QS_1↔ FI_2 with a non-trivial residual entropy depending on the parameters B and Δ. In Fig. <ref>(b) the phase diagram for fixed K=1, g_1=2, g_2=1.2, g_3=3 and g_4=3 is presented.
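As a quick numerical sanity check of the expression for B_c, the following NumPy sketch (our own illustration, not part of any code accompanying the paper) evaluates B_c for the panel (a) parameters and verifies that it indeed solves the defining equation above:

import numpy as np

def critical_field(J, K, gamma, g1, g2, g3, g4):
    """B_c of the QS_1 <-> QS_2 interface from the closed-form expression."""
    G34, G12 = g3 + g4, g1 + g2
    return np.sqrt(16.0*K**2/G34**2 + 4.0*J**2*gamma**2/(G34**2 - 4.0*G12**2))

def residual(B, J, K, gamma, g1, g2, g3, g4):
    """Difference of the two sides of the defining equation for B_c."""
    lhs = B*(g3 + g4)
    rhs = (np.sqrt((B*(g1 + g2) + 2.0*K)**2 + (J*gamma)**2)
           - np.sqrt((B*(g1 + g2) - 2.0*K)**2 + (J*gamma)**2))
    return lhs - rhs

pars = dict(J=1.0, K=-1.0, gamma=0.5, g1=2.0, g2=1.2, g3=-3.0, g4=-3.0)
Bc = critical_field(**pars)
print(Bc)                    # 0.4927794..., the value quoted in the text
print(residual(Bc, **pars))  # ~ 1e-16, i.e. B_c solves the equation

Note that B_c depends on K and on g_3+g_4 only through their squares, which is why the panel (b) parameter set quoted above yields the same critical field.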
In panel (b) we observe once again the interface QS_1↔ QS_2, whose critical magnetic field is given by (<ref>) (again B_c=0.4927794). Similarly, there is also the quantum frustrated interface between QS_2 and FI_1 with residual entropy 𝒮=2log 2, and another quantum interface between QS_1 and FI_1 with a non-trivial residual entropy depending on the parameters B and Δ. In Fig. <ref>(c) one can see the phase diagram for fixed J=1, K=-1, γ=0.5, g_1=2 and g_2=2. Here the following phases are present: QS_1, QI_1, QS_2 and FI_2. The two interfaces QS_1↔ QI_1 and QI_1↔ QS_2, corresponding to the `1/6 fire-5/6 ice' state, have residual entropy given by 𝒮=log 2, whereas the interface QS_2↔ FI_2 is frustrated with residual entropy given by 𝒮=2log 2. The other interfaces are pure quantum frustrated states. Similarly, in Fig. <ref>(d) the phase diagram for fixed J=1, K=-1, γ=0.5, g_1=-2, g_2=2, g_3=-4 and g_4=3 is shown. The interface between QS_2 and QI_2, representing the `1/6 fire-5/6 ice' state, has residual entropy given by 𝒮=2log 2. The other interfaces, QS_2↔ AF_2 and QI_2↔ AF_2, are pure quantum frustrated states with residual entropies given by 𝒮=log 2 and 𝒮=2log 2, respectively. §.§ Magnetization and magnetic susceptibility The fact that the magnetization is a non-conserved quantity gives rise to an unusual behavior of the magnetic susceptibility at zero temperature. In Fig. <ref> the magnetization and the magnetic susceptibility as functions of B are displayed for fixed values of the parameters J=1, K=-1 and γ=0.5. The left panels, Fig. <ref>(a) and Fig. <ref>(c), show the magnetization and susceptibility for g_1=2, g_2=1.2, g_3=-3 and g_4=-3. The effects of the non-conserved magnetization are well visible here. For low magnetic field there is a magnetization quasi-plateau <cit.>, which gives the impression of a constant value of the magnetization. Actually, however, this region corresponds to the state QS_1 with non-conserved magnetization, which has a weak but monotonic dependence on the magnetic field. The magnetization curves for Δ=-0.5 and Δ=-0.6 demonstrate another quasi-plateau at B≈0.641, corresponding to FI_2. The final part of the curve corresponds to the quasi-saturated state, QS_2. In panel (c) the magnetic susceptibility for the same set of parameters is shown. The behavior of the susceptibility evidences the non-plateau nature of the magnetization within the same eigenstates of the system. An interesting feature is the monotonic decrease of the susceptibility as a function of the magnetic field for the QS_1 eigenstate and the non-monotonic behavior, with a maximum, for QS_2. The right panels of Fig. <ref> demonstrate the magnetization and susceptibility as functions of the magnetic field for J=1, K=-1 and γ=0.5 with g-factors g_1=2, g_2=2, g_3=-3 and g_4=-4. Again one can find the quasi-plateau corresponding to the state QS_1 for low magnetic field and the quasi-saturated state QS_2 for strong field. There is also an intermediate quasi-plateau, which corresponds to the FI_2 state for Δ=-0.1. A third quasi-plateau arises for Δ=-0.37 and Δ=-0.42. This is the QI_1 state, which follows FI_2 in these cases. However, for Δ=-1 there is only one intermediate quasi-plateau (the second one), corresponding to the QI_1 state. The corresponding magnetic susceptibility is shown in panel (d).
Here, it demonstrates a monotonic decrease for each quasi-plateau. § CONCLUSION In this paper we have considered the Ising-XYZ model on the diamond chain, assembled as follows: the particles with Ising spins are located at the nodal diamond-chain sites, whereas the Heisenberg spins occupy the interstitial sites. We have assumed that the Ising and Heisenberg spins have different g-factors, and that the system is subject to an external magnetic field. The non-commutativity of the magnetization operator and the Hamiltonian is due to the different g-factors of the Ising and Heisenberg spins and to the XY-anisotropy (γ) of the Heisenberg exchange interaction. This leads to unusual phenomena, such as a non-linear magnetic-field dependence of the spectrum and a non-constant magnetization within the same ground state. We have discussed in detail the zero-temperature phase diagram under several conditions and found interesting phases. Due to the non-uniform signs of the four g-factors present in the unit cell, there are phase boundaries corresponding to so-called “half fire-half ice” configurations for ferromagnetic couplings, which contain ordered and disordered sublattices simultaneously. These interfaces for the Ising diamond chain were classified into five groups: the first one has one frustrated (disordered) spin and five ordered spins in the unit cell (`1/6 fire-5/6 ice'); similarly for two frustrated spins (`1/3 fire-2/3 ice'), for three frustrated spins (`1/2 fire-1/2 ice'), for four frustrated spins (`2/3 fire-1/3 ice'), and finally for five frustrated spins (`5/6 fire-1/6 ice'). For the quantum Ising-Heisenberg diamond chain most of the interfaces become quantum frustrated states. Besides that, we have studied the zero-temperature magnetization and magnetic susceptibility as functions of the external magnetic field. § ACKNOWLEDGEMENTS V. O. acknowledges partial financial support from the grants of the State Committee of Science of Armenia No. 15-1F332 and SFU-02, as well as from the ICTP Network NT-04. He also expresses his deep gratitude to the ICMP (Lviv) for warm hospitality during the final stage of the work. O. R. thanks the Brazilian agencies FAPEMIG and CNPq for partial financial support. § LIMITING CASE Γ=0 AND G_1-G_2=0 Although the case of conserved magnetization (γ=0 and g_1-g_2=0) is trivial and well known, we are going to make a few comments about these limits and introduce some new notations which are suitable for our further analysis. First of all, let us mention the limit g_1-g_2=0, in which the |Ψ_1,2⟩ transform into the conventional singlet and the zeroth component of the triplet of two spins 1/2: |Ψ_1,2⟩→(1/√2)(|↑↓⟩± sgn(J)|↓↑⟩), ε_1,2=-Δ/4±(1/2)|J|, where sgn(J) is the sign function. Thus, depending on the sign of the coupling constant J, both eigenvectors can transform into the singlet or the zeroth component of the triplet. Namely, |Ψ_1⟩=|τ_0⟩=(1/√2)(|↑↓⟩+|↓↑⟩) for J>0 and |Ψ_1⟩=|s⟩=(1/√2)(|↑↓⟩-|↓↑⟩) for J<0, while |Ψ_2⟩=|s⟩=(1/√2)(|↑↓⟩-|↓↑⟩) for J>0 and |Ψ_2⟩=|τ_0⟩=(1/√2)(|↑↓⟩+|↓↑⟩) for J<0, where we introduce the conventional notations |τ_0⟩ and |s⟩ for the triplet and singlet states, respectively. The case of γ=0 is more tricky. First of all, there is no continuous limit from the eigenvectors |Ψ_3,4⟩ to the upper and lower components of the triplet, |τ_+⟩=|↑↑⟩ and |τ_-⟩=|↓↓⟩, as at γ=0 the Hamiltonian commutes with the operator S_tot^z=S_1^z+S_2^z, yielding a block-diagonal form of the Hamiltonian matrix and a decoupling of the basis states corresponding to |τ_+⟩ and |τ_-⟩ <cit.>.
However, the eigenvalues admit the corresponding limit, leading to ε_3,4 = Δ/4 ± (1/2)|B(g_1+g_2)-2K(σ_j+σ_j+1)|, depending on the values of the adjacent Ising spins σ_j and σ_j+1. In contrast to the case of |Ψ_1,2⟩, the result of the γ=0 limit depends on the value of the magnetic field. This leads to an obvious critical value of the magnetic field at which the upper and lower components of the triplet of the vertical quantum spin dimer are degenerate, B_c=2K(σ_j+σ_j+1)/(g_1+g_2). Namely, |Ψ_3⟩=|τ_-⟩=|↓↓⟩ for B>B_c and |Ψ_3⟩=|τ_+⟩=|↑↑⟩ for B<B_c, while |Ψ_4⟩=|τ_+⟩=|↑↑⟩ for B>B_c and |Ψ_4⟩=|τ_-⟩=|↓↓⟩ for B<B_c. Here we use the notation |Ψ_3,4⟩ for the γ=0 case, having in mind the features mentioned above. § ON THE GROUND STATES FOR ISING DIAMOND CHAIN Here we present the ground states and the corresponding energies per unit cell for the Ising diamond chain. In order to make the link between the Ising and Ising-Heisenberg counterparts clearer and to facilitate the interpretation of the Ising limit, we keep in general the same notations for the ground states. * The quasi-saturated state QS_1 splits into two states, |QS_1^±⟩= ∏_j=1^N/2 |↑⟩_2j-1⊗|τ_±⟩_2j-1⊗|↑⟩_2j⊗|τ_±⟩_2j, E_QS_1^±= Δ/2-(B/2)(g_3+g_4)∓(B(g_1+g_2)-2K). Thus, |QS_1^+⟩ corresponds to a saturated (S) state for B>B_c=2K/(g_1+g_2), whereas |QS_1^-⟩ stands for a ferrimagnetic (FIM) state when B<B_c=2K/(g_1+g_2). * The quasi-saturated state QS_2 splits into two states, |QS_2^±⟩= ∏_j=1^N/2 |↓⟩_2j-1⊗|τ_±⟩_2j-1⊗|↓⟩_2j⊗|τ_±⟩_2j, E_QS_2^±= Δ/2+(B/2)(g_3+g_4)∓(B(g_1+g_2)+2K). Thus, |QS_2^-⟩ corresponds to a saturated (S) state for B<B_c=-2K/(g_1+g_2), whereas |QS_2^+⟩ stands for a ferrimagnetic (FIM) state when B>B_c=-2K/(g_1+g_2). * Ferrimagnetic states, FI_1 and FI_2: |FI_1^+⟩= ∏_j=1^N/2 |↑⟩_2j-1⊗|↑↓⟩_2j-1⊗|↑⟩_2j⊗|↑↓⟩_2j, E_FI_1^+= -Δ/2-(B/2)(g_3+g_4)-B(g_1-g_2); |FI_2^+⟩= ∏_j=1^N/2 |↓⟩_2j-1⊗|↑↓⟩_2j-1⊗|↓⟩_2j⊗|↑↓⟩_2j, E_FI_2^+= -Δ/2+(B/2)(g_3+g_4)-B(g_1-g_2). Exchanging the values of g_1 and g_2 as well as the orientation of all S_j spins, one gets another pair of states, FI_1^- and FI_2^-, respectively. * Antiferromagnetic states, AF_1 and AF_2: |AF_1^+⟩= ∏_j=1^N/2 |↑⟩_2j-1⊗|↑↓⟩_2j-1⊗|↓⟩_2j⊗|↑↓⟩_2j, E_AF_1^+= -Δ/2-(B/2)(g_3-g_4)-B(g_1-g_2); |AF_2^+⟩= ∏_j=1^N/2 |↓⟩_2j-1⊗|↑↓⟩_2j-1⊗|↑⟩_2j⊗|↑↓⟩_2j, E_AF_2^+= -Δ/2+(B/2)(g_3-g_4)-B(g_1-g_2). In full analogy with the previous case, one can also define another pair of antiferromagnetic states, AF_1^- and AF_2^-, by exchanging the values of the Ising spins' g-factors simultaneously with the orientation of all S_j spins. * "Quantum ferrimagnetic" states QI_1 and QI_2: The Ising limit of Eqs. (<ref>) and (<ref>) leads to the following four ground states of the Ising diamond-chain. Though some of them have a purely antiferromagnetic orientation of spins and others can be described as purely ferrimagnetic, we keep the original notations from the Ising-Heisenberg case in order to avoid confusion. |QI_1^±⟩= ∏_j=1^N/2 |↑⟩_2j-1⊗|τ_±⟩_2j-1⊗|↓⟩_2j⊗|τ_±⟩_2j, E_QI_1^±= Δ/2-(B/2)(g_3-g_4)∓B(g_1+g_2); |QI_2^±⟩= ∏_j=1^N/2 |↓⟩_2j-1⊗|τ_±⟩_2j-1⊗|↑⟩_2j⊗|τ_±⟩_2j, E_QI_2^±= Δ/2+(B/2)(g_3-g_4)∓B(g_1+g_2). § FRUSTRATED INTERFACE STATES FOR THE ISING DIAMOND CHAIN Here we list the possible interfaces between the various ground states of the Ising diamond chain with ferromagnetic couplings and mixed, positive and negative, g-factors. §.§ One frustrated spin (1/6-fire and 5/6-ice) The configuration with one frustrated sublattice (`1/6-fire and 5/6-ice' in the terms of Ref. <cit.>) corresponds to the interface between the Quasi-saturated and `Quantum-ferrimagnetic' states when g_3 or g_4 is negative.
B_c = 2K/g_3 for QS_1^+↔ QI_2^+ and QS_2^+↔ QI_1^+; B_c = -2K/g_3 for QS_1^-↔ QI_2^- and QS_2^-↔ QI_1^-; B_c = 2K/g_4 for QS_1^+↔ QI_1^+ and QS_2^+↔ QI_2^+; B_c = -2K/g_4 for QS_1^-↔ QI_1^- and QS_2^-↔ QI_2^-. When the spin with g_3 (g_4) is frustrated, the corresponding residual entropy per block is 𝒮=k_B ln 2. §.§ Two frustrated spins (1/3-fire and 2/3-ice) For the case of two frustrated spins in the unit cell one has the following interfaces between the ground states of the Ising diamond chain. Interface between the `Quantum-ferrimagnetic' and Antiferromagnetic states: B_c = Δ/(2g_1) for QI_1^+↔ AF_1^- and QI_2^+↔ AF_2^-; B_c = -Δ/(2g_1) for QI_1^-↔ AF_1^+ and QI_2^-↔ AF_2^+; B_c = Δ/(2g_2) for QI_1^+↔ AF_1^+ and QI_2^+↔ AF_2^+; B_c = -Δ/(2g_2) for QI_1^-↔ AF_1^- and QI_2^-↔ AF_2^-. Interface between the Quasi-saturated and Ferrimagnetic states: B_c = (Δ±2K)/(2g_1) for QS_1^+↔ FI_1^- (QS_2^+↔ FI_2^-); B_c = (-Δ±2K)/(2g_1) for QS_2^-↔ FI_2^+ (QS_1^-↔ FI_1^+); B_c = (Δ±2K)/(2g_2) for QS_1^+↔ FI_1^+ (QS_2^+↔ FI_2^+); B_c = (-Δ±2K)/(2g_2) for QS_2^-↔ FI_2^- (QS_1^-↔ FI_1^-). There is another interface, between the quasi-saturated states QS_1^+↔ QS_2^+ and QS_1^-↔ QS_2^-. For this case we have B_c = ±4K/(g_3+g_4), respectively. Here the spins with g_3 and g_4 are frustrated. §.§ Three frustrated spins (1/2-fire and 1/2-ice) When three of the six spins in the unit cell are frustrated, one can speak about the `1/2-fire and 1/2-ice' configuration. The corresponding interfaces are listed below. Interface between the `Quantum-ferrimagnetic' and Ferrimagnetic ground states: when two spins with g_1 (g_2) and one spin with g_4 are disordered, the critical magnetic field becomes B_c = Δ/(2g_1± g_4) for QI_2^+↔ FI_2^- (QI_1^+↔ FI_1^-); B_c = -Δ/(2g_1± g_4) for QI_1^-↔ FI_1^+ (QI_2^-↔ FI_2^+); B_c = Δ/(2g_2± g_4) for QI_2^+↔ FI_2^+ (QI_1^+↔ FI_1^+); B_c = -Δ/(2g_2± g_4) for QI_1^-↔ FI_1^- (QI_2^-↔ FI_2^-); when two spins with g_1 (or g_2) and one spin with g_3 are disordered, B_c = Δ/(2g_1± g_3) for QI_1^+↔ FI_2^- (QI_2^+↔ FI_1^-); B_c = -Δ/(2g_1± g_3) for QI_2^-↔ FI_1^+ (QI_1^-↔ FI_2^+); B_c = Δ/(2g_2± g_3) for QI_1^+↔ FI_2^+ (QI_2^+↔ FI_1^+); B_c = -Δ/(2g_2± g_3) for QI_2^-↔ FI_1^- (QI_1^-↔ FI_2^-). There are also interfaces between the Quasi-saturated and Antiferromagnetic states. When two spins with g_1 (or g_2) and one spin with g_4 are disordered: B_c = ±(Δ+2K)/(2g_1+g_4) for QS_1^+↔ AF_1^- (QS_2^-↔ AF_2^+); B_c = ±(Δ-2K)/(2g_1-g_4) for QS_2^+↔ AF_2^- (QS_1^-↔ AF_1^+); B_c = ±(Δ+2K)/(2g_2+g_4) for QS_1^+↔ AF_1^+ (QS_2^-↔ AF_2^-); B_c = ±(Δ-2K)/(2g_2-g_4) for QS_2^+↔ AF_2^+ (QS_1^-↔ AF_1^-). When two spins with g_1 (or g_2) and one spin with g_3 are disordered: B_c = ±(Δ+2K)/(2g_1+g_3) for QS_1^+↔ AF_2^- (QS_2^-↔ AF_1^+); B_c = ±(Δ-2K)/(2g_1-g_3) for QS_2^+↔ AF_1^- (QS_1^-↔ AF_2^+); B_c = ±(Δ+2K)/(2g_2+g_3) for QS_1^+↔ AF_2^+ (QS_2^-↔ AF_1^-); B_c = ±(Δ-2K)/(2g_2-g_3) for QS_2^+↔ AF_1^+ (QS_1^-↔ AF_2^-). §.§ Four frustrated spins (2/3-fire and 1/3-ice) There are several critical points corresponding to four disordered spins in the six-spin unit cell. The interfaces between two quasi-saturated states (QS_1^+↔ QS_1^- and QS_2^+↔ QS_2^-) exist at B_c = ±2K/(g_1+g_2), respectively. The four spins with the g-factors g_1 and g_2 are frustrated here.
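All of the critical fields listed in this appendix follow from equating the corresponding ground-state energies of Appendix <ref>. As a mechanical illustration (a minimal SymPy sketch of our own, assuming only the energies transcribed from Appendix <ref>), one can verify, for example, the boundary QS_1^+↔ QI_2^+ at B_c=2K/g_3 and the boundary QS_1^+↔ QS_2^+ at B_c=4K/(g_3+g_4):

import sympy as sp

B, Delta, K, g1, g2, g3, g4 = sp.symbols('B Delta K g_1 g_2 g_3 g_4')

# Ground-state energies per unit cell, transcribed from Appendix B
E_QS1p = Delta/2 - B/2*(g3 + g4) - (B*(g1 + g2) - 2*K)
E_QS2p = Delta/2 + B/2*(g3 + g4) - (B*(g1 + g2) + 2*K)
E_QI2p = Delta/2 + B/2*(g3 - g4) - B*(g1 + g2)

# One frustrated spin: QS_1^+ <-> QI_2^+
print(sp.solve(sp.Eq(E_QS1p, E_QI2p), B))  # [2*K/g_3]

# Two frustrated spins: QS_1^+ <-> QS_2^+
print(sp.solve(sp.Eq(E_QS1p, E_QS2p), B))  # [4*K/(g_3 + g_4)]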
The critical magnetic field for the phase boundary between the `Quantum-ferrimagnetic' and Antiferromagnetic states is given by B_c = Δ/(2g_1±(g_3-g_4)) for QI_1^+↔ AF_2^- (QI_2^+↔ AF_1^-); B_c = -Δ/(2g_1±(g_3-g_4)) for QI_2^-↔ AF_1^+ (QI_1^-↔ AF_2^+); B_c = Δ/(2g_2±(g_3-g_4)) for QI_1^+↔ AF_2^+ (QI_2^+↔ AF_1^+); B_c = -Δ/(2g_2±(g_3-g_4)) for QI_2^-↔ AF_1^- (QI_1^-↔ AF_2^-). The interface between the Quasi-saturated and Ferrimagnetic phases is given by B_c = ±(Δ+2K)/(2g_1+(g_3+g_4)) for QS_1^+↔ FI_2^- (QS_2^-↔ FI_1^+); B_c = ±(Δ-2K)/(2g_1-(g_3+g_4)) for QS_2^+↔ FI_1^- (QS_1^-↔ FI_2^+); B_c = ±(Δ+2K)/(2g_2+(g_3+g_4)) for QS_1^+↔ FI_2^+ (QS_2^-↔ FI_1^-); B_c = ±(Δ-2K)/(2g_2-(g_3+g_4)) for QS_2^+↔ FI_1^+ (QS_1^-↔ FI_2^-). In all these sixteen cases the two spins with g_1 (or g_2) and both spins with g_3 and g_4 are frustrated. §.§ Five frustrated spins (5/6-fire and 1/6-ice) Finally, the spin configuration with five frustrated spins in the six-spin unit cell is possible at the interface between the Quasi-saturated and `Quantum-ferrimagnetic' states. The phase boundaries of these `5/6-fire and 1/6-ice' states are given by B_c = 2K/(2(g_1+g_2)± g_4) for QS_1^+↔ QI_1^- (QS_1^-↔ QI_1^+); B_c = -2K/(2(g_1+g_2)± g_4) for QS_2^-↔ QI_2^+ (QS_2^+↔ QI_2^-); B_c = 2K/(2(g_1+g_2)± g_3) for QS_1^+↔ QI_2^- (QS_1^-↔ QI_2^+); B_c = -2K/(2(g_1+g_2)± g_3) for QS_2^-↔ QI_1^+ (QS_2^+↔ QI_1^-). It is easy to recognize, looking at the denominators, that in all cases the two spins with g-factor g_1, the two spins with g_2, and one of the spins with g_3 or g_4 are frustrated.

[jomie] L.J. De Jongh, A.R. Miedema, Adv. Phys. 23, 1 (1974).
[carlin] R.L. Carlin, Magnetochemistry (Springer-Verlag, Berlin, 1986).
[kahn] D. Gatteschi, R. Sessoli, and J. Villain, Molecular Nanomagnets (Oxford University Press, Oxford, England, 2006).
[oha15] V. Ohanyan, O. Rojas, J. Strečka, and S. Bellucci, Phys. Rev. B 92, 214423 (2015).
[tor16] J. Torrico, M. Rojas, S. M. de Souza, and O. Rojas, Phys. Lett. A 380, 3655 (2016).
[ungur] L.F. Chibotaru, L. Ungur, Phys. Rev. Lett. 109, 246403 (2012).
[fecu] M. Atanasov, P. Comba, C.A. Daul, Inorg. Chem. 47, 2449 (2008).
[yin15] W.-G. Yin, C. R. Roth, and A. M. Tsvelik, Spin Frustration and a `Half Fire, Half Ice' Critical Point from Nonuniform g-Factors, arXiv:1510.00030 (2015).
[Ir1] T. N. Nguyen, P. A. Lee, and H.-C. Z. Loye, Science 271, 489 (1996).
[Ir2] A. Furusaki, M. Sigrist, P. A. Lee, K. Tanaka, and N. Nagaosa, Phys. Rev. Lett. 73, 2622 (1994).
[Dy1] D. Visinescu, A. M. Madalan, M. Andruh, C. Duhayon, J.-P. Sutter, L. Ungur, W. Van den Heuvel, and L. F. Chibotaru, Chem. Eur. J. 15, 11808 (2009).
[Dy2] W. Van den Heuvel and L. F. Chibotaru, Phys. Rev. B 82, 174436 (2010).
[bel14] S. Bellucci, V. Ohanyan, and O. Rojas, EPL 105, 47012 (2014).
[mironov] V.S. Mironov, L.F. Chibotaru, A. Ceulemans, Phys. Rev. B 67, 014424 (2003).
[chibotaru] L.F. Chibotaru, in Molecular Nanomagnets and Related Phenomena, ed. S. Gao, Structure and Bonding, Vol. 164 (Springer-Verlag, Berlin Heidelberg, 2015).
[klo09] F. Klöwer, Y. Lan, J. Nehrkorn, O. Waldmann, C. E. Anson, and A. K. Powell, Chem. Eur. J. 15, 7413 (2009).
[ggg1] C. A. Hutchison and B. Weinstock, J. Chem. Phys. 32, 56 (1960).
[ggg2] J. D. Axe, H. J. Stapleton, and C. D. Jefries, Phys. Rev. 121, 1630 (1961).
[ggg3] P. Rigny and P. Plurien, J. Phys. Chem. Solids 28, 2589 (1967).
[str05] J. Strečka, M. Jaščur, M. Hagiwara, Y. Narumi, K. Kindo, and K. Minami, Phys. Rev. B 72, 024459 (2005).
[str03] J. Strečka, M. Jaščur, J. Phys.: Condens. Matter 15, 4519 (2003).
[can06] L. Čanová, J. Strečka, and M. Jaščur, J. Phys.: Condens. Matter 18, 4967 (2006).
[val08] J. S. Valverde, O. Rojas, and S. M. de Souza, J. Phys.: Condens. Matter 20, 345208 (2008).
[can09] L. Čanová, J. Strečka, and T. Lucivjansky, Condens. Matter Phys. 12, 353 (2009).
[ant09] D. Antonosyan, S. Bellucci, and V. Ohanyan, Phys. Rev. B 79, 014432 (2009).
[bel10] S. Bellucci and V. Ohanyan, Eur. Phys. J. B 75, 531 (2010).
[roj11a] O. Rojas, S. M. de Souza, V. Ohanyan, and M. Khurshudyan, Phys. Rev. B 83, 094430 (2011).
[oha12] V. Ohanyan and A. Honecker, Phys. Rev. B 86, 054412 (2012).
[bel13] S. Bellucci and V. Ohanyan, Eur. Phys. J. B 86, 408 (2013).
[gal13] L. Galisova, Physica Status Solidi B 250, 187 (2013).
[ver13] T. Verkholyak and J. Strečka, Phys. Rev. B 88, 134419 (2013).
[gal14] L. Galisova, Condens. Matter Phys. 17, 13001 (2014).
[ana14] N. S. Ananikian, J. Strečka, V. Hovhannisyan, Solid State Communications 194, 48 (2014).
[tor14] J. Torrico, M. Rojas, S. M. de Souza, O. Rojas, and N. S. Ananikian, EPL 108, 50007 (2014).
[qi14] Y. Qi and A. Du, Phys. Status Solidi B 251, 1096 (2014).
[lis15] B. Lisnyi and J. Strečka, J. Magn. Magn. Mater. 377, 502 (2015).
[abg15] V. B. Abgaryan, N. S. Ananikian, L. N. Ananikyan, and V. Hovhannisyan, Solid State Comm. 203, 5 (2015).
[gal15] L. Galisova, Acta Mechanica Slovaca 19, 46 (2015).
[gao15] K. Gao, Y.-L. Xu, X.-M. Kong, Z.-Q. Liu, Physica A 429, 10 (2015).
[lis16] B. Lisnyi and J. Strečka, Physica A 462, 104 (2016).
[hov16] V. V. Hovhannisyan, J. Strečka, N. S. Ananikian, J. Phys.: Condens. Matter 28, 085401 (2016).
[ana17] N. S. Ananikian, Č. Burdik, L. Ananikyan, H. Poghosyan, J. Phys.: Conf. Series 804, 012002 (2017).
[rod17] F. C. Rodriques, S. M. de Souza, O. Rojas, Ann. Phys. 379, 1 (2017).
[kikuchi03] H. Kikuchi, Y. Fujii, M. Chiba, S. Mitsudo, T. Idehara, Physica B 329, 967 (2003).
[azu4] A. Honecker, S. Hu, R. Peters, and J. Richter, J. Phys.: Condens. Matter 23, 164211 (2011).
[heschke] H. Jeschke, I. Opahle, H. Kandpal, R. Valentí, H. Das, T. Saha-Dasgupta, O. Janson, H. Rosner, A. Brühl, B. Wolf, M. Lang, J. Richter, Sh. Hu, X. Wang, R. Peters, T. Pruschke, and A. Honecker, Phys. Rev. Lett. 106, 217201 (2011).
[hid17] K. Hida and K. Takano, J. Phys. Soc. Jpn. 86, 033707 (2017).
[mor17] K. Morita, M. Fujihala, H. Koorikawa, T. Sugimoto, Sh. Sota, S. Mitsuda, T. Tohyama, Phys. Rev. B 95, 184412 (2017).
[FFA03] V. Ohanyan and N. Ananikian, Phys. Lett. A 307, 76 (2003).
[tor16b] J. Torrico, M. Rojas, M. S. S. Pereira, J. Strečka, and M. L. Lyra, Phys. Rev. B 93, 014428 (2016).
http://arxiv.org/abs/1708.07494v2
{ "authors": [ "Jordana Torrico", "Vadim Ohanyan", "Onofre Rojas" ], "categories": [ "cond-mat.stat-mech" ], "primary_category": "cond-mat.stat-mech", "published": "20170824170504", "title": "Non-conserved magnetization operator and `fire-and-ice' ground states in the Ising-Heisenberg diamond chain" }
http://arxiv.org/abs/1708.08344v2
{ "authors": [ "Yuguang F. Ipsen", "Peter Kevei", "Ross A. Maller" ], "categories": [ "math.PR" ], "primary_category": "math.PR", "published": "20170825050115", "title": "Convergence to stable limits for ratios of trimmed Levy processes and their jumps" }
Projected Variational Integrators for Degenerate Lagrangian Systems

Michael Kraus ([email protected])

Max-Planck-Institut für Plasmaphysik, Boltzmannstraße 2, 85748 Garching, Deutschland
and Technische Universität München, Zentrum Mathematik, Boltzmannstraße 3, 85748 Garching, Deutschland

August 24, 2017

We propose and compare several projection methods applied to variational integrators for degenerate Lagrangian systems, whose Lagrangian is of the form L = ϑ(q) · q̇ - H(q) and thus linear in the velocities. While previous methods for such systems only work reliably in the case of ϑ being a linear function of q, our methods are long-time stable also for systems where ϑ is a nonlinear function of q. We analyse the properties of the resulting algorithms, in particular with respect to the conservation of energy, momentum maps and symplecticity. In numerical experiments, we verify the favourable properties of the projected integrators and demonstrate their excellent long-time fidelity. In particular, we consider a two-dimensional Lotka–Volterra system, planar point vortices with position-dependent circulation and guiding centre dynamics.

§ INTRODUCTION In various areas of physics, we are confronted with degenerate Lagrangian systems, whose Lagrangian is of the form L = ϑ(q) · q̇ - H(q), that is, L is linear in the velocities q̇. Examples of such systems are planar point vortices, certain Lotka–Volterra models and guiding centre dynamics. In order to derive structure-preserving integrators <cit.> for such systems, it seems natural to apply the variational integrator method <cit.>. This, however, does not immediately lead to stable integrators, as the resulting numerical methods will in general be multi-step methods which are subject to parasitic modes <cit.>. Moreover, we face the initialization problem, that is, how to determine initial data for all previous time steps used by the method without introducing a large error into the solution. A potential solution to the first problem is provided by the discrete fibre derivative, while a solution to the second problem is provided by the continuous fibre derivative. Using the discrete fibre derivative, we can rewrite the discrete Euler–Lagrange equations in position-momentum form, which constitutes a one-step method for numerically computing the phasespace trajectory in terms of the generalized coordinates q together with their conjugate momenta p. The resulting system can be solved, as in general the discrete Lagrangian will not be degenerate even though the continuous Lagrangian is. The continuous fibre derivative, p(t) = ∂L/∂q̇ (q(t), q̇(t)), can then be used in order to obtain an initial value for the conjugate momenta as functions of the coordinates as p(t) = ϑ(q(t)).
Tyranowski et al. <cit.> show that this is a viable strategy when ϑ is a linear function. Unfortunately, in the general case of ϑ being a nonlinear function, this idea does not lead to stable integrators, as the numerical solution will drift away from the constraint submanifold defined by the continuous fibre derivative, ϕ(q(t), p(t)) = p(t) - ϑ(q(t)) = 0. A standard solution for such problems is to project the solution back to the constraint submanifold after each time step <cit.>. This, however, renders the integrator non-symmetric (assuming the variational integrator itself is symmetric), which leads to growing errors in the solution and consequently a drift in the total energy of the system. Improved long-time stability is achieved by employing a symmetric projection <cit.>, where the initial data is perturbed away from the constraint submanifold before the variational integrator is applied and then projected back to the manifold. While these projection methods are standard procedures for holonomic and non-holonomic constraints, there are only few traces in the literature of their application to Dirac constraints ϕ(q, p) = 0. Some authors consider general differential-algebraic systems of index two <cit.>, but do not go into the details of Lagrangian or Hamiltonian systems, or they consider symplectic integrators for Hamiltonian systems subject to holonomic constraints <cit.>, which simplifies the situation dramatically compared to the case of Dirac constraints. Most importantly, we are not aware of any discussion of the influence of such a projection on the symplecticity of the algorithm, assuming that the underlying numerical integrator is symplectic. As symplecticity is a crucial property of Lagrangian and Hamiltonian systems, which is often important to preserve in a numerical simulation, we will analyse all of the proposed methods regarding its preservation. We will find that the well-known projection methods, both standard projection and symmetric projection, are not symplectic. However, we can introduce small modifications to the symmetric projection method which make it symplectic. The outline of the paper is as follows. In Section <ref> we provide an overview of degenerate Lagrangian systems and Dirac constraints and their various formulations, and discuss symplecticity and momentum maps. In Section <ref> we review the discrete action principle leading to variational integrators and the problems that arise when this method is applied to degenerate Lagrangians. This is followed by a discussion of the proposed projection methods in Section <ref> and numerical experiments in Section <ref>. § DEGENERATE LAGRANGIAN SYSTEMS Degenerate Lagrangian systems have attracted quite some interest in the geometric mechanics literature <cit.> due to their interesting properties. They are also relevant for practical applications like the study of population models, point vortex dynamics, or reduced charged particle models like the guiding centre system. In the following, we will consider degenerate Lagrangian systems characterized by a Lagrangian that is linear or singular in the velocities. In particular, we consider the class of systems whose Lagrangian is of the form L(q, q̇) = ϑ(q) · q̇ - H(q). The Lagrangian L is a function on the tangent bundle TM, L : TM → ℝ, where M denotes the configuration manifold of the system, which is assumed to be of dimension d. The cotangent bundle of the configuration manifold M is denoted by T*M.
Further, we denote the coordinates of a point m ∈ M by q(m) = (q^1(m), …, q^d(m)), and similarly coordinates of points in TM by (q^i, q̇^i) and coordinates of points in T*M by (q^i, p^i). In the following, we will always assume the existence of a global coordinate chart, so that M can be identified with the Euclidean space ℝ^d. For simplicity, we often use short-hand notation where we write q to refer to both a point in M as well as its coordinates. Similarly, we often denote points in the tangent bundle TM by (q, q̇). In local coordinates, the Lagrangian (<ref>) is thus written as a map (q, q̇) ↦ L(q, q̇). In Equation (<ref>), ϑ = ϑ_i(q) dq^i is a differential one-form ϑ : M → T*M, whose components ϑ_i : M → ℝ are general, possibly nonlinear functions of q, some of which (but not all) could be identically zero. For details on differential forms, tangent and cotangent bundles, the interested reader may consult any modern book on mathematical physics or differential geometry. We recommend <cit.> for more physics-oriented accounts and <cit.> for more mathematics-oriented accounts. In the following we assume a basic understanding of these concepts. To see their usefulness for classical mechanics we refer to <cit.>. §.§ Hamilton's Action Principle The evolution of Lagrangian systems is described by curves q on M. To make this precise, let us fix two points q_1, q_2 ∈ M and an interval [t_1, t_2] ⊂ ℝ and define the path space connecting q_1 and q_2 as Q(t_1, t_2, q_1, q_2) = {q : [t_1, t_2] → M | q is a C^2 curve, q(t_1) = q_1, q(t_2) = q_2}. Elements q of Q(t_1, t_2, q_1, q_2) map the time interval [t_1, t_2] to curves on M, whereby the first and last points, q(t_1) and q(t_2), take the fixed values q_1 and q_2, respectively. Such a curve q : [t_1, t_2] → M with q(t) = (q^1(t), …, q^d(t)) can be lifted to a curve (q, q̇) : [t_1, t_2] → TM, which in coordinates is given by (q(t), q̇(t)) = (q^1(t), …, q^d(t), dq^1/dt(t), …, dq^d/dt(t)). In the following, the derivative of the curve with respect to the parameter t is denoted by q̇ = dq/dt. This constitutes a slight abuse of notation, as we also denote the tangent bundle coordinates in that way (note that not all curves in the tangent bundle are lifts of curves in the configuration manifold), but it should be clear at any time whether we refer to the derivative of the curve or to the coordinates. In order to determine the equations of motion of a Lagrangian system, we consider infinitesimal variations of the action integral A[q] = ∫_0^T L(q(t), q̇(t)) dt, where, without loss of generality and in order to simplify the discrete treatment, we choose an interval [0, T]. Infinitesimal variations (cf., Figure <ref>) are defined in terms of C^2 maps c : [0, T] → ℝ^d which vanish at the boundaries of the interval [0, T], that is c(0) = 0 and c(T) = 0, and are such that c(t) ∈ T_q(t)M for 0 ≤ t ≤ T, with T_q(t)M the tangent space to M at the point q(t). In coordinates, we can write c(t) = (c^1(t), …, c^d(t)). We can now define a one-parameter family of trajectories q^ε ∈ Q(0, T, q_1, q_2), for which c = (d/dε) q^ε |_ε=0, with ε ∈ (-r, +r) and r ∈ ℝ_+. That is, q^ε is a differentiable mapping (-r, +r) × [0, T] → M, such that q^ε(0) = q(0) and q^ε(T) = q(T) for all values of ε, and q^0(t) = q(t) for all values of t. If M is a vector space, the simplest example of such a family of trajectories is given by q^ε(t) = (q^1(t) + ε c^1(t), …, q^d(t) + ε c^d(t)). On a general manifold, the corresponding expressions are usually more complicated.
However, if the manifold is embedded in ℝ^d, each member of the family of trajectories can be expanded in a power series whose leading-order terms are those given in (<ref>). Hamilton's principle states that in order to determine the equations of motion, we need to find a curve q = q^0, such that the action (<ref>) takes a stationary point with respect to all curves q^ε. A necessary condition for q making the action stationary is that (d/dε) A[q^ε] |_ε=0 = (d/dε) ∫_0^T L(q^ε(t), q̇^ε(t)) dt |_ε=0 = ∫_0^T [∂L/∂q (q(t), q̇(t)) · c(t) + ∂L/∂q̇ (q(t), q̇(t)) · ċ(t)] dt = 0. Let us note that the derivatives of L with respect to q and q̇ are sometimes also written as D_1 and D_2, respectively, where D_i denotes the slot-derivative with respect to the ith argument of L. Assuming that the operations of computing variations of q and computing the time derivative of q commute (which is a fair assumption, see e.g. <cit.>), we can integrate the second term of (<ref>) by parts to obtain (d/dε) A[q^ε] |_ε=0 = ∫_0^T [∂L/∂q (q(t), q̇(t)) - (d/dt)(∂L/∂q̇ (q(t), q̇(t)))] · c(t) dt + [∂L/∂q̇ (q(t), q̇(t)) · c(t)]_0^T = 0. As the infinitesimal variations c are required to vanish at the boundaries, the boundary term vanishes, and as the variations c are otherwise arbitrary, vanishing of the variations of A implies vanishing of the term in square brackets in the integrand. This leads us to the Euler–Lagrange equations, that is, the equations of motion, ∂L/∂q (q(t), q̇(t)) - (d/dt)(∂L/∂q̇ (q(t), q̇(t))) = 0. For the Lagrangian (<ref>) the Euler–Lagrange equations yield ∇ϑ(q(t)) · q̇(t) - ∇H(q(t)) - ϑ̇(q(t)) = 0, which after computing the time derivative of ϑ can be written as Ω^T(q(t)) q̇(t) = ∇H(q(t)) with Ω_ij = ∂ϑ_j/∂q^i - ∂ϑ_i/∂q^j. The skew-symmetric matrix Ω plays an important role, as it holds the components of the symplectic form ω on M. Let us note that in principle Ω can be of odd dimension, in which case the corresponding two-form ω is degenerate and therefore does not qualify as a symplectic form. However, most degenerate Lagrangians of the form (<ref>), including all of the examples discussed in Section <ref>, originate from some kind of coordinate transformation of a canonical system, possibly followed by some reduction procedure, which always results in a system of even degree. We therefore assume in the following that the system under consideration is of even dimension and hence symplectic. Details of the symplectic structure will be discussed in Section <ref>. Equation (<ref>) has the structure of a noncanonical Hamiltonian system on M, characterized by the skew-symmetric matrix Ω and the Hamiltonian H. For such noncanonical Hamiltonian systems no general geometric integrators are known. In principle it is possible to use the Darboux theorem to find canonical coordinates and then apply some canonical symplectic integrator. In practice, however, the construction of such Darboux coordinates tends to be a non-trivial task and is often possible only locally but not globally. Our strategy will thus be to reformulate the system as a canonical Hamiltonian system by adding canonical conjugate momenta, thus doubling the size of the solution space, and restricting the numerical solution to the physical subspace of this extended solution space.
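To make the structure of the noncanonical system (<ref>) concrete before we reformulate it, the following SymPy sketch (our own illustration; the one-form ϑ and the Hamiltonian H are ad-hoc placeholders and not one of the example systems of this paper) assembles the matrix Ω(q) by symbolic differentiation and solves the linear system Ω^T(q) q̇ = ∇H(q) for the velocity:

import sympy as sp

q = sp.Matrix(sp.symbols('q1 q2'))

# Placeholder one-form theta (nonlinear) and Hamiltonian H, our own choice
theta = sp.Matrix([0, q[0]*q[1]])
H = (q[0]**2 + q[1]**2)/2

# Omega_ij = d theta_j / d q^i - d theta_i / d q^j
Omega = sp.Matrix(2, 2, lambda i, j: sp.diff(theta[j], q[i]) - sp.diff(theta[i], q[j]))

# Solve Omega^T qdot = grad H for qdot (valid wherever Omega is invertible)
gradH = sp.Matrix([sp.diff(H, qi) for qi in q])
qdot = sp.simplify(Omega.T.inv() * gradH)
print(Omega)  # Matrix([[0, q2], [-q2, 0]])
print(qdot)   # the noncanonical vector field

The sketch also makes explicit that the velocity is determined only where Ω(q) is invertible (q_2 ≠ 0 in this toy example).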
The geometrical foundation of this reformulation procedure is the theory of Dirac constraints. §.§ Dirac Constraints Degenerate systems of the form (<ref>) can also be formulated in terms of the phasespace trajectory (q, p) in the cotangent bundle T*M, subject to a primary constraint in the sense of Dirac, determined by the function ϕ : T*M → ℝ^d, given by ϕ(q, p) = p - ϑ(q) = 0, and originating from the fibre derivative FL : TM → T*M, FL(v_q) · w_q = (d/dε)|_ε=0 L(v_q + ε w_q), where v_q = (q, v) and w_q = (q, w) denote two points in TM which share the same base point q and are thus elements of the same fibre of TM. By acting point-wise for each t, the fibre derivative maps the curve (q, q̇) in the tangent bundle TM into the curve (q, p) in the cotangent bundle T*M, (q(t), p(t)) = (q(t), ∂L/∂q̇ (q(t), q̇(t))) = (q(t), ϑ(q(t))), where the last equality follows for Lagrangians of the form (<ref>). The Dirac constraint arising from the degenerate Lagrangian restricts the dynamics to the submanifold Δ = {(q, p) ∈ T*M | ϕ(q, p) = 0} ⊂ T*M. In the preceding and the following, we assume that the Lagrangian is degenerate in all velocity components, that is, the Lagrangian is either linear or singular in each component of q̇, so that ∂^2 L/∂q̇^i ∂q̇^j = 0 for all 1 ≤ i, j ≤ d. For instructive reasons, however, assume for a moment that the Lagrangian is degenerate in only m < d components of q̇ and, e.g., quadratic in the other d-m components. That is to say, we can write p(t) = (β_1(q(t), q̇(t)), …, β_d-m(q(t), q̇(t)), ϑ_d-m+1(q(t)), …, ϑ_d(q(t)))^T, where ∂L/∂q̇^i (q(t), q̇(t)) = β_i(q(t), q̇(t)) for 1 ≤ i ≤ d-m and ∂L/∂q̇^i (q(t), q̇(t)) = ϑ_i(q(t)) for d-m+1 ≤ i ≤ d. We can then denote coordinates in Δ by (q^i, π^j) with 1 ≤ i ≤ d and 1 ≤ j ≤ d-m, where the π^i denote those momenta which are “free”, i.e., not determined by the Dirac constraint. The inclusion map i : Δ → T*M can then be written as i : (q, π) ↦ (q, π, ϑ(q)). In the fully degenerate case, however, we have m = d, so that the configuration manifold M and the constraint submanifold Δ are isomorphic and we can label points in Δ by the same q we use to label points in M. The inclusion map i : Δ → T*M simplifies accordingly and reads i : q ↦ (q, ϑ(q)), where it is important to keep in mind that q denotes a point in Δ. The inverse operation is given by the projection π_Δ : T*M → Δ, defined such that π_Δ ∘ i = 𝕀. As we are lacking a general framework for constructing structure-preserving numerical algorithms for noncanonical Hamiltonian systems on M, we will construct such algorithms on i(Δ). This can be achieved by using canonically symplectic integrators on T*M and assuring that their solution stays on i(Δ). To this end we will employ various projection methods, as discussed in Section <ref>. §.§ Augmented Hamiltonian Approach Hamilton's form of the equations of motion for a degenerate Lagrangian system (<ref>) can be derived from the phasespace action A[q, p, λ] = ∫_0^T [p(t) · q̇(t) - H̄(q(t), p(t), λ(t))] dt, with the augmented Hamiltonian H̄ : T*M × ℝ^d → ℝ defined as H̄(q, p, λ) = H(q) + ϕ(q, p) · λ. Applying Hamilton's principle of stationary action to (<ref>) results in the following index-two differential-algebraic system of equations (see e.g. <cit.> for a definition of the notion of index): q̇(t) = ϕ_p^T(q(t), p(t)) λ(t), ṗ(t) = -H_q(q(t)) - ϕ_q^T(q(t), p(t)) λ(t), 0 = ϕ(q(t), p(t)). Here, subscripts q and p denote partial derivatives with respect to the coordinates of T*M.
For the constraint function ϕ these derivatives are explicitly written as ϕ_q = [ ∂ϕ_1/∂q^1 ∂ϕ_1/∂q^2 … ∂ϕ_1/∂q^d; ⋮ ⋱ ⋮; ∂ϕ_d/∂q^1 ∂ϕ_d/∂q^2 … ∂ϕ_d/∂q^d ] and ϕ_p = [ ∂ϕ_1/∂p^1 ∂ϕ_1/∂p^2 … ∂ϕ_1/∂p^d; ⋮ ⋱ ⋮; ∂ϕ_d/∂p^1 ∂ϕ_d/∂p^2 … ∂ϕ_d/∂p^d ], so that in ϕ_q^T λ and ϕ_p^T λ, the components of λ are contracted with the components of ϕ, not with the derivatives. As the Hamiltonian H does not depend on p and ϕ = p - ϑ(q), we find that the first equation reduces to q̇(t) = λ(t), that is, the Lagrange multiplier takes the role of the velocity. Denoting the trajectory in the cotangent bundle T*M by z = (q, p), the equations of motion (<ref>) can be rewritten more compactly as ż(t) = Ω̂^-1 ∇H(z(t)) + Ω̂^-1 ∇ϕ^T(z(t)) λ(t), 0 = ϕ(z(t)), where ∇ denotes the derivatives with respect to z = (q, p), and Ω̂ is the canonical symplectic matrix given explicitly in (<ref>) below. §.§ Hamilton–Pontryagin Principle The phasespace action principle of the previous section is equivalent to the Hamilton–Pontryagin principle <cit.> on TM ⊕ T*M, given by δ ∫_0^T [L(q(t), v(t)) + p(t) · (q̇(t) - v(t))] dt = 0. Here, the dynamics of the system are described by the evolution of (q, v, p), which constitutes a trajectory in the Pontryagin bundle TM ⊕ T*M. If the Lagrange multiplier λ is replaced by the velocity v, it is easy to verify that the Lagrangian is related to the augmented Hamiltonian (<ref>) by L(q, v) = p · v - H̄(q, p, v), and that the Hamilton–Pontryagin principle (<ref>) is equivalent to the phasespace action principle δA(q, p, λ) = 0 with the augmented action A given in (<ref>). Computing variations of (<ref>), where q, v and p are all varied independently and are only restricted in that the variations of q have to vanish at the endpoints, we obtain the implicit Euler–Lagrange equations, q̇(t) = v(t), p(t) = ∂L/∂v (q(t), v(t)), ṗ(t) = ∂L/∂q (q(t), v(t)), which are easily seen to be equivalent to (<ref>) and (<ref>). Here, the Dirac constraint ϕ(q(t), p(t)) = 0 appears quite naturally as one of the equations of motion, which suggests that the Hamilton–Pontryagin principle might be the natural starting point for the discretization of degenerate Lagrangian systems. That this is not necessarily the case will be discussed in Section <ref>. §.§ Symplecticity Our aim is to construct methods which retain the symplecticity of the integrator as well as its momentum maps. Care has to be taken when stating that the variational integrator and the projection are symplectic. The continuous system preserves two symplectic forms, the canonical two-form ω̂ on Δ, but also the noncanonical two-form ω on M defined by ω = dϑ = (1/2) Ω_ij dq^i ∧ dq^j with Ω_ij = ∂ϑ_j/∂q^i - ∂ϑ_i/∂q^j. The matrix Ω is the noncanonical symplectic matrix which we already encountered in the equations of motion (<ref>). The function ϑ is interpreted as a one-form on M, in coordinates given by ϑ = ϑ_i(q) dq^i. In principle it is possible that ω is degenerate, namely in the case of a system of odd dimension d. Then ω is not a symplectic form but a presymplectic form. Most of the following discussion also holds in this case. However, in almost all examples of practical relevance the configuration space is even-dimensional. For this reason, we will always refer to ω as a symplectic form. Note that ω is not the symplectic form ω_L on TM originating from the boundary terms in the action principle (<ref>). Besides leading to the equations of motion, the variational principle provides a direct and natural way to derive the fundamental geometric structures of classical mechanics. For this derivation, the boundary conditions c(0) = c(T) = 0 are relaxed, while the time interval [0, T] is kept fixed.
Thus the variational principle reads (d/dε) A[q^ε] |_ε=0 = ∫_0^T [∂L/∂q (q(t), q̇(t)) · c(t) + ∂L/∂q̇ (q(t), q̇(t)) · ċ(t)] dt + [∂L/∂q̇ · c]_0^T, where the variations c(t) do not vanish at the boundary points, so that the last term on the right-hand side does not vanish. This last term corresponds to a linear pairing of the function ∂L/∂q̇, which in general is a function of (q, q̇), with the tangent vector c. The boundary term in (<ref>) can be written as Θ_L δq |_0^T, where Θ_L is the so-called Lagrangian one-form or Cartan one-form, in coordinates given by Θ_L = (∂L/∂q̇^i) dq^i. One could be tempted to regard ∂L/∂q̇ as a one-form on M, as Θ_L only has components in the dq^i. In the same way, c could be regarded as a tangent vector on M. However, in general ∂L/∂q̇ is a function of (q, q̇) and therefore clearly a function on TM. The exterior derivative of the Lagrangian one-form gives the Lagrangian two-form, also referred to as the symplectic two-form, ω_L = dΘ_L, given in coordinates by ω_L = (∂^2 L/∂q^i ∂q̇^j) dq^i ∧ dq^j + (∂^2 L/∂q̇^i ∂q̇^j) dq̇^i ∧ dq^j. For more details on this derivation see e.g. <cit.>. As the Lagrangian (<ref>) is degenerate, so is the corresponding symplectic matrix Ω_L, which can be written in block form as Ω_L = [ Ω 0; 0 0 ]. We recognize the upper left block, which corresponds to the noncanonical symplectic matrix Ω on M. When we discuss symplecticity in the following, we are always referring to the noncanonical symplectic form ω or its matrix representation Ω. Preserving the noncanonical symplectic form ω on M is equivalent to preserving the canonical symplectic form ω̂ = dΘ̂ on the embedding of Δ in T*M. Denoting coordinates on T*M by z = (q, p), the canonical one-form Θ̂ and the canonical symplectic two-form ω̂ can be written in coordinates as Θ̂ = p_i dq^i, ω̂ = (1/2) Ω̂_ij dz^i ∧ dz^j = dp_i ∧ dq^i, with Ω̂ the canonical symplectic matrix, given by Ω̂ = [ 0 -1; 1 0 ]. On the constraint submanifold we have that p_i = ϑ_i(q), and therefore dp_i = dϑ_i(q), so that ω̂ restricted to Δ reads ω̂|_Δ = (∂ϑ_i/∂q^j) dq^j ∧ dq^i = (1/2)(∂ϑ_i/∂q^j) dq^j ∧ dq^i + (1/2)(∂ϑ_j/∂q^i) dq^i ∧ dq^j = (1/2)(∂ϑ_j/∂q^i - ∂ϑ_i/∂q^j) dq^i ∧ dq^j = ω. Using the inclusion (<ref>), we can write ω = i^*ω̂. The preceding arguments thus suggest that, in order to construct a numerical algorithm that preserves the noncanonical symplectic form ω on M, a viable strategy could be to construct a canonically symplectic algorithm on T*M whose solution stays on the constraint submanifold Δ. §.§ Noether Theorem and Conservation Laws One of the most influential results of classical mechanics in the 20th century is the correspondence between point symmetries of the Lagrangian and conservation laws of the Euler–Lagrange equations established by Emmy Noether <cit.>. In the following we will summarize her famous theorem. Consider a Lagrangian system L : TM → ℝ and a one-parameter group of transformations {σ^ε : ε ∈ B_0^r, σ^0 = 𝕀}, where B_0^r denotes the open ball with radius r > 0 centred at 0. We denote the transformed trajectory by q^ε = σ^ε ∘ q and its time derivative by q̇^ε = d(σ^ε ∘ q)/dt, such that q^0 = q and q̇^0 = q̇. We have a symmetry if the transformation σ^ε leaves the Lagrangian L invariant, that is L(q^ε(t), q̇^ε(t)) = L(q(t), q̇(t)) for all t and all ε. Taking the ε-derivative of (<ref>), we obtain the infinitesimal invariance condition, (d/dε) L(q^ε(t), q̇^ε(t)) |_ε=0 = 0, which is equivalent to (<ref>).
Explicitly computing this ε-derivative, we obtain (d/dε) L(q^ε(t), q̇^ε(t)) |_ε=0 = ∂L/∂q (q(t), q̇(t)) · (dσ^ε/dε) |_ε=0 + ∂L/∂q̇ (q(t), q̇(t)) · (dσ̇^ε/dε) |_ε=0 = 0. Denoting by V the vector field with flow σ^ε, defined as V = (dσ^ε/dε) |_ε=0, and assuming that q solves the Euler–Lagrange equations (<ref>), we can rewrite (<ref>) as [d/dt (∂L/∂q̇ (q(t), q̇(t)))] · V(q(t)) + ∂L/∂q̇ (q(t), q̇(t)) · (dV/dt)(q(t), q̇(t)) = 0. The time derivative of the vector field V is simply computed by the chain rule, so that, assuming that the transformation σ^ε does not explicitly depend on time, it is given by dV/dt = q̇^j (∂V^i/∂q^j) ∂/∂q^i. The expression in (<ref>) amounts to a total time derivative of the so-called Noether current, which constitutes the preserved quantity, (d/dt)[P(q(t), q̇(t))] = 0, with P(q(t), q̇(t)) = ∂L/∂q̇ (q(t), q̇(t)) · V(q(t)). Thus, the momentum ∂L/∂q̇ in the direction V is conserved along solutions q of the Euler–Lagrange equations (<ref>) obtained from L for all times t. In Section <ref>, we will apply the Noether theorem several times in order to determine the conservation laws for the various examples we will consider. § VARIATIONAL INTEGRATORS Variational integrators can be seen as the Lagrangian equivalent of symplectic integrators for Hamiltonian systems. Instead of discretizing the equations of motion, the action integral is discretized, followed by the application of a discrete version of Hamilton's principle of stationary action. This leads to discrete Euler–Lagrange equations (the discrete equations of motion) at once. The evolution map that corresponds to the discrete Euler–Lagrange equations is what is called a variational integrator. Such a numerical scheme preserves a discrete symplectic form which originates from the boundary terms in the variation of the discrete action. The seminal work in the development of a discrete equivalent of classical mechanics was presented in <cit.>. This method, based on a discrete variational principle, leads to symplectic integration schemes that automatically preserve constants of motion <cit.>. Comprehensive reviews of variational integrators and discrete mechanics can be found in <cit.> and <cit.>, including thorough accounts of the historical development preceding and following this work. In the following we collect some material on variational integrators, specifically on the discrete action principle, the position-momentum form, and on variational Runge–Kutta methods, before discussing the problems that arise when trying to apply the method to degenerate Lagrangians. §.§ Discrete Action Principle Time will be discretized uniformly, i.e., the time step h ∈ ℝ_+ is constant. We thus split the interval [0, T] into a finite sequence of times {t_n = nh | n = 0, …, N}, where h = T/N, so that t_N = T. Let us denote the configuration of the discrete system at time t_n by q_n, so that q_n ≈ q(t_n), where q(t_n) is the configuration of the continuous system at time t_n. Then a discrete trajectory can be written as q_d = {q_n}_n=0^N. The discrete Lagrangian is defined as an approximation of the time integral of the continuous Lagrangian over the interval I_n = (t_n, t_n+1), i.e., L_d(q_n, q_n+1) ≈ ∫_I_n L(q_n,n+1(t), q̇_n,n+1(t)) dt, where q_n,n+1 denotes the solution of the Euler–Lagrange equations (<ref>) in I_n. The specific expression of the discrete Lagrangian is determined by the polynomial approximation of the trajectory and the quadrature rule used to approximate the integral.
The discrete action then becomes merely a sum over the time index of discrete Lagrangians,A_d [_d] = ∑_n=0^N-1 L_d (_n, _n+1) .Using linear interpolation between _n and _n+1 to describe the discrete trajectory, thus approximating _n,n+1 by_n,n+1 (t)≈_n t_n+1 - tt_n+1 - t_n + _n+1 t - t_nt_n+1 - t_n ,the velocity _n,n+1 will be approximated by a simple finite-difference expression, namely_n,n+1 (t)≈_n+1 - _nt_n+1 - t_n .As we assume the time step to be constant, in the following we will just write h instead of t_n+1 - t_n. The quadrature approximating the integral in (<ref>) is most often realized by either the trapezoidal rule, leading to the discrete LagrangianL_d^tr (_n, _n+1) = h2L ( _n, _n+1 - _nh) + h2L ( _n+1, _n+1 - _nh) ,or the midpoint rule, leading to the discrete LagrangianL_d^mp (_n, _n+1) = h L ( _n + _n+12, _n+1 - _nh) .The configuration manifold of the discrete system is still M, but the discrete state space is M×M instead of M, such that the discrete Lagrangian L_d is a functionL_d : M×M→ . The discrete equations of motion are determined in the same way as the continuous equations of motion (<ref>), that is by applying Hamilton's principle of stationary action. Infinitesimal variations of the discrete trajectories _d are given in terms of maps _d : { t_n}_n=0^N→^d, which vanish at t_0 and t_N, that is _d(t_0) = 0 and _d(t_N) = 0, and are such that _d(t_n) ∈[q_d (t_n)]M, where _d(t_n) = _n and q_d (t_n) = q_n and we denote such maps by _d = {_n}_n=0^N. Discrete one-parameter families of trajectories _d^ = {_n^}_n=0^N, are defined by_d = dd_d^|_ = 0 ,with the simplest example (c.f., Figure <ref>) given by_d^ = {_n + _n }_n=0^N .Such trajectories are elements of the discrete path space, defined asQ_d (t_0, t_N, _0, _N) = {_d : { t_n}_n=0^N→M | _d (t_0) = _0,_d (t_N) = _N} .A necessary condition for the discrete trajectory _d = _d^0 making the discrete action (<ref>) stationary with respect to all curves ^, is that ddA_d [_d^] |_ = 0 = dd∑_n=0^N-1 L_d (_n^, _n+1^) |_ = 0 = 0 .Computing the -derivative of the discrete action explicitly, we obtainddA_d [_d^] |_ = 0 = ∑_n=0^N-1[ D_1 L_d (_n, _n+1) ·_n + D_2 L_d (_n, _n+1) ·_n+1] ,where D_i denotes the slot-derivative with respect to the ith argument of L_d. What follows corresponds to a discrete integration by parts, i.e., a reordering of the summation. The n=0 term is removed from the first part of the sum and the n=N-1 term is removed from the second part,ddA_d [_d^] |_ = 0 = D_1 L_d (_0, _1) · c_0 + ∑_n=1^N-1 D_1 L_d (_n, _n+1) ·_n+ ∑_n=0^N-2 D_2 L_d (_n, _n+1) ·_n+1 + D_2 L_d (_N-1, _N) ·_N .As the variations at the endpoints, _0 and _N, vanish, the corresponding terms in the above sum also vanish. At last, the summation range of the second sum is shifted upwards by one with the arguments of the discrete Lagrangian adapted correspondingly, so thatddA_d [_d^] |_ = 0 =∑_n=1^N-1[ D_1 L_d (_n, _n+1) + D_2 L_d (_n-1, _n) ] ·_n .Hamilton's principle of least action requires the variation of the discrete action A_d to vanish for any choice of _n. Consequently, the expression in the square brackets of (<ref>) has to vanish. 
This defines the discrete Euler–Lagrange equations,

D_1 L_d (q_n, q_n+1) + D_2 L_d (q_n-1, q_n) = 0 .

Denoting coordinates on M×M by (q_1^1, …, q_1^d, q_2^1, …, q_2^d), this can also be written as

∂L_d/∂q_1 (q_n, q_n+1) + ∂L_d/∂q_2 (q_n-1, q_n) = 0 .

The discrete Euler–Lagrange equations (<ref>) define an evolution map

F_L_d : M×M → M×M : ( q_n-1, q_n ) ↦ ( q_n, q_n+1 ) .

Starting from two configurations, q_0 ≈ q(t_0) and q_1 ≈ q(t_1 = t_0 + h), the successive solution of the discrete Euler–Lagrange equations (<ref>) for q_2, q_3, etc., up to q_N, determines the discrete trajectory q_d.

For the class of degenerate Lagrangians (<ref>) under consideration, the prescription of two sets of initial conditions, q_0 and q_1, is not natural. The continuous Euler–Lagrange equations are ordinary differential equations of first order and therefore need only one set of initial conditions. In practice, we face the problem that there is no unique way of determining a second set of initial conditions. Any method will introduce some error that propagates into the solution and most often eventually leads to a breakdown of the solution.

§.§ Position-Momentum Form

A viable way around this problem appears to be to use the discrete fibre derivative to reformulate the discrete Euler–Lagrange equations (<ref>) in position-momentum form. For regular Lagrangians L this is equivalent to rewriting the continuous Euler–Lagrange equations in the form of Hamilton's equations by using the continuous fibre derivative and the Legendre transform.

Given a point (q_n, q_n+1) in M×M and a discrete Lagrangian L_d : M×M → ℝ, we define two discrete fibre derivatives, F^- L_d and F^+ L_d, in analogy to the continuous case (<ref>), by

F^- L_d : (q_n, q_n+1) ↦ (q_n, p_n) = ( q_n, - D_1 L_d (q_n, q_n+1) ) ,
F^+ L_d : (q_n, q_n+1) ↦ (q_n+1, p_n+1) = ( q_n+1, D_2 L_d (q_n, q_n+1) ) .

With this, the discrete Euler–Lagrange equations (<ref>) can be written as

F^+ L_d (q_n-1, q_n) - F^- L_d (q_n, q_n+1) = 0 ,

which motivates the introduction of the position-momentum form of the variational integrator (<ref>) by

p_n = - D_1 L_d (q_n, q_n+1) ,
p_n+1 = D_2 L_d (q_n, q_n+1) .

Given (q_n, p_n), Equation (<ref>) can be solved for q_n+1. This is generally a nonlinear implicit equation that has to be solved by some iterative technique such as Newton's method. Equation (<ref>) is an explicit function, so to obtain p_n+1 we merely have to plug in q_n and q_n+1. The corresponding Hamiltonian evolution map is

F̃_L_d : T*M → T*M : ( q_n, p_n ) ↦ ( q_n+1, p_n+1 ) .

In terms of the discrete fibre derivatives it can be equivalently expressed as

F̃_L_d = ( F^- L_d ) ∘ F_L_d ∘ ( F^- L_d )^-1 ,
F̃_L_d = ( F^+ L_d ) ∘ F_L_d ∘ ( F^+ L_d )^-1 ,
F̃_L_d = ( F^+ L_d ) ∘ ( F^- L_d )^-1 .

In position-momentum form, the variational integrator can be initialized by prescribing an initial position q_0 in conjunction with the corresponding momentum p_0 = ϑ(q_0). We thus have an exact initialization mechanism, as p_0 constitutes a well-defined second set of initial conditions which can be determined exactly. Starting with an initial position q_0 and an initial momentum p_0, the repeated solution of (<ref>) gives the same discrete trajectory q_d = {q_n}_n=0^N as (<ref>).

§.§ Discrete Symplectic Structure

In the following we show why variational integrators can be considered symplectic integrators and shed some light on their relation with symplectic integrators.
As in the continuous case, we can obtain a discrete Lagrangian one-form by computing the variation of the action for varying endpoints,ddA_d [_d^] |_ = 0 = ∑_n=0^N-1[ D_1L_d (_n, _n+1) ·_n + D_2L_d (_n, _n+1) ·_n+1] = ∑_n=1^N-1[ D_1L_d (_n, _n+1) + D_2L_d (_n-1, _n) ] ·_n + D_1L_d (_0, _1) ·_0 + D_2L_d (_N-1, _N) ·_N .The two latter terms originate from the variation at the boundaries. They form the discrete counterpart of the Lagrangian one-form. However, there are two boundary terms that define two distinct one-forms on M×M,[Θ_L_d^- ( q_0, q_1 ) · ( _0 , _1 ) ≡ -D_1 L_d (q_0, q_1) ·_0 ,;Θ_L_d^+ ( q_N-1, q_N ) · ( _N-1 , _N )≡ D_2 L_d (q_N-1, q_N) ·_N . ]In general, these one-forms are defined asΘ_L_d^- ( _n , _n+1 )≡ -D_1 L (_n, _n+1)_n ,Θ_L_d^+ ( _n , _n+1 )≡ D_2 L (_n, _n+1)_n+1 .As L_d = Θ_L_d^+ - Θ_L_d^- and ^2 L_d = 0 one observes thatΘ_L_d^+ = Θ_L_d^-such that the exterior derivative of both discrete one-forms defines the same discrete Lagrangian two-form or discrete symplectic formω_L_d = Θ_L_d^+= Θ_L_d^-= ∂^2 L_d (_n, _n+1)∂_n∂_n+1 _n∧_n+1(no summation over n) .Now consider the exterior derivative of the discrete action (<ref>). Upon insertion of the discrete Euler–Lagrange equations (<ref>), it becomesA_d = D_1 L_d (_0, _1) ·_0 + D_2 L_d (_N-1, _N) ·_N = Θ_L_d^+ (_N-1, _N) - Θ_L_d^- (_0, _1) .On the right hand side we find the just defined Lagrangian one-forms (<ref>). Taking the exterior derivative of (<ref>) givesω_L_d (_0, _1) = ω_L_d (_N-1, _N) ,where _N-1 and _N are connected with _0 and _1 through the discrete Euler–Lagrange equations (<ref>). Therefore, (<ref>) implies that the discrete symplectic structure ω_L_d is preserved while the system advances from t=0 to t=Nh according to the discrete equations of motion (<ref>). As the number of time steps N is arbitrary, the discrete symplectic form ω_L_d is preserved at all times of the simulation. Note that this does not automatically imply that the continuous symplectic structure ω_L is preserved by the variational integrator. However, as can be seen by comparing (<ref>) and (<ref>), the discrete one-forms (<ref>) correspond to the canonical one-form _i ^i under pullback by the discrete fibre derivatives (<ref>). Thus conservation of the discrete symplectic form ω_L_d by the discrete Euler–Lagrange equations (<ref>) on M×M is equivalent to conservation of the canonical symplectic form Ω by the position momentum form (<ref>) on M. §.§ Variational Runge–Kutta Methods The derivation of higher-order variational integrators in either standard or position-momentum form is rather cumbersome. A convenient framework for the derivation of integrators of arbitrary order is provided by variational Runge–Kutta methods, which can be seen as a generalization of the position-momentum form. These methods constitute a special family of symplectic-partitioned Runge–Kutta methods for Lagrangian systems, which are of the form_n,i = ∂ L∂ (_n,i, _n,i) ,_n,i = ∂ L∂ (_n,i, _n,i) ,_n,i = _n + h ∑_j=1^s a_ij _n,j ,_n,i = _n + h ∑_j=1^sa̅_ij _n,j ,_n+1 = _n + h ∑_i=1^s b_i _n,i ,_n+1 = _n + h ∑_i=1^sb̅_i Ṗ_n,i ,with coefficients satisfying the symplecticity conditions,b_ia̅_ij + b̅_j a_ji = b_ib̅_jand b̅_i = b_i .Here, s denotes the number of internal stages, a_ij and a̅_ij are the coefficients of the Runge–Kutta method and b_i and b̅_i the corresponding weights. Note that while _n,i and _n,i represent velocities and forces at the internal stages, they strictly speaking do not correspond to the time derivatives of _n,i and _n,i, respectively. 
As (Q_n,i, P_n,i) is nothing else than a point in T*M, the concept of a time derivative of these quantities does not make any sense. Instead, Q_n,i and Q̇_n,i as well as P_n,i and Ṗ_n,i, respectively, denote independent degrees of freedom, which are, however, related by (<ref>).

<cit.> show that variational Runge–Kutta methods (<ref>) correspond to the position-momentum form (<ref>) of the discrete Lagrangian

L_d (q_n, q_n+1) = h ∑_i=1^s b_i L ( Q_n,i, Q̇_n,i ) .

They can also be obtained from a discrete action principle similar to the Hamilton–Pontryagin principle presented in Section <ref>. For discretizations of Gauss–Legendre type, like the midpoint Lagrangian (<ref>), this is achieved by extremizing a discrete action of the following form <cit.> (see also <cit.>),

𝒜_d = ∑_n=0^N-1 { h ∑_i=1^s b_i [ L ( Q_n,i, Q̇_n,i ) + P_n,i · ( Q_n,i - q_n - h ∑_j=1^s a_ij Q̇_n,j ) ] - p_n+1 · ( q_n+1 - q_n - h ∑_i=1^s b_i Q̇_n,i ) } .

Here, the definition of the generalized coordinates at the internal stages, Q_n,i, and the update rule determining q_n+1 are added as constraints, with the corresponding momenta P_n,i and p_n+1 taking the role of Lagrange multipliers. Requiring stationarity of the discrete action (<ref>) for arbitrary variations of q_n, q_n+1, Q_n,i, Q̇_n,i and P_n,i, we recover (<ref>) with the conditions (<ref>) automatically satisfied. For discretizations of Lobatto–IIIA type like the trapezoidal Lagrangian (<ref>), where the first internal stage coincides with the solution at the previous time step, the velocities Q̇_n,i are not linearly independent, and the discrete action (<ref>) needs to be augmented by an additional constraint to take this dependence into account (for details see <cit.>),

𝒜_d = ∑_n=0^N-1 { h ∑_i=1^s b_i [ L ( Q_n,i, Q̇_n,i ) + P_n,i · ( Q_n,i - q_n - h ∑_j=1^s a_ij Q̇_n,j ) ] - p_n+1 · ( q_n+1 - q_n - h ∑_i=1^s b_i Q̇_n,i ) + μ_n · ( ∑_i=1^s d_i Q̇_n,i ) } .

Requiring stationarity of (<ref>), we obtain a modified system of equations,

P_n,i = ∂L/∂q̇ (Q_n,i, Q̇_n,i) , Ṗ_n,i = ∂L/∂q (Q_n,i, Q̇_n,i) ,
Q_n,i = q_n + h ∑_j=1^s a_ij Q̇_n,j , P_n,i = p_n + h ∑_j=1^s a̅_ij Ṗ_n,j - μ_n d_i / b_i ,
q_n+1 = q_n + h ∑_i=1^s b_i Q̇_n,i , p_n+1 = p_n + h ∑_i=1^s b̅_i Ṗ_n,i ,
0 = ∑_i=1^s d_i Q̇_n,i ,

accounting for the linear dependence of the Q̇_n,i and consequently also of the P_n,i. The particular values of the d_i depend on the number of stages s and the definition of the Q̇_n,i <cit.>. For two stages, we have d_1 = - d_2, so that we can choose, for example, d_1 = 1 and d_2 = -1, and (<ref>) becomes equivalent to the variational integrator of the trapezoidal Lagrangian (<ref>). For three stages, we can choose d = (1/2, -1, 1/2), and for four stages we can use d = (+1, -√5, +√5, -1).

§.§ Variational Integrators and Degenerate Lagrangians

In this section, we give an overview of some natural approaches to the discretization of degenerate Lagrangian systems and discuss why they fail. The most obvious option for obtaining a geometric integrator for any Lagrangian system is to directly discretize the Lagrangian (<ref>) and compute the corresponding discrete Euler–Lagrange equations (<ref>), followed by a discrete fibre derivative (<ref>) in order to obtain the position-momentum form (<ref>) of a variational integrator. Indeed, it has been shown by <cit.> that this is a viable strategy for those cases where ϑ is a linear function. For cases where ϑ is a nonlinear function, however, we observe in simulations with such integrators that the numerical solution will in general not satisfy the constraint (<ref>).
Thus the discrete trajectory { (q_n, p_n) }_n=0^N will drift away from the constraint submanifold (<ref>), i.e., even though (q_0, p_0) ∈ i(Δ), we usually find that (q_n, p_n) ∉ i(Δ) for n ≥ 1. Hence the solution becomes unphysical, c.f., Figure <ref>. In the standard form of the variational integrators, which are multi-step methods, this behaviour can be explained in terms of parasitic modes <cit.>.

In some cases, the solution stays close to the constraint submanifold, which is to say that ϕ(q_n, p_n), although not zero, at least stays bounded for very long times. In such cases variational integrators might still be a viable solution method. However, in general it is not clear to what extent this behaviour depends on the initial conditions. It is easily conceivable that the deviation from the constraint submanifold is bounded for some initial conditions but not for others. And indeed, we observe such behaviour for the example of guiding centre dynamics, described in Section <ref>, where certain particles are found to stay close to the constraint submanifold for very long times, while other particles diverge further and further from the constraint submanifold as the simulation proceeds, until it eventually crashes.

Let us note that for variational Runge–Kutta methods (<ref>) and (<ref>) the constraint is automatically satisfied at the internal stages of the method, so that ϕ(Q_n,i, P_n,i) = 0 for all 1 ≤ i ≤ s, but not at the solution at the next time step, (q_n+1, p_n+1). This is due to the fact that the internal stages obey discrete versions of the equations of motion, whereas the final step merely amounts to a numerical quadrature. It appears, though, that the drift-off problem could be avoided by choosing particular coefficient matrices a_ij and a̅_ij and weights b_i and b̅_i in (<ref>) or (<ref>) such that the last internal stage corresponds to the solution at the next time step, that is, Q_n,s = q_n+1 and P_n,s = p_n+1. It turns out, however, that such a choice is incompatible with the symplecticity conditions (<ref>), i.e., variational Runge–Kutta methods for which the coefficients and weights are such that the Dirac constraint is automatically satisfied do not exist <cit.>.

It seems natural, then, to augment the discrete action (<ref>) or (<ref>) with the constraint evaluated at the solution at the next time step via a Lagrange multiplier, similar to (<ref>). This approach, and why it fails, will be discussed in some detail in the next section. Yet another seemingly natural approach is to apply a discrete version of the Hamilton–Pontryagin principle (<ref>) as proposed by <cit.>. We will see in Section <ref> that the resulting integrator is exactly equivalent to the position-momentum form (<ref>) and therefore shares the same problems.

After discussing in more detail why these approaches fail, we will present several projection methods for enforcing the constraint ϕ(q_n+1, p_n+1) = 0 in Section <ref>. In these, we take the solution of a variational integrator and project it onto the constraint submanifold (<ref>). Although these methods are not, strictly speaking, variational or geometric, they lead to useful long-time stable integration algorithms.

§.§ Augmented Variational Runge–Kutta Methods

For degenerate Lagrangian systems, we have seen that the constraint p = ϑ(q) is automatically enforced at the internal stages. But as the constraint is not enforced at the time steps n, the solution tends to drift away from the constraint submanifold (<ref>).
It seems natural to add the constraint to the discrete action (<ref>) or (<ref>), e.g.,

𝒜_d = ∑_n=0^N-1 { h ∑_i=1^s b_i [ L ( Q_n,i, Q̇_n,i ) - Ṗ_n,i · ( Q_n,i - q_n - h ∑_j=1^s a_ij Q̇_n,j ) ] + p̄_n+1 · ( q_n+1 - q_n - h ∑_i=1^s b_i Q̇_n,i ) + ϕ (q_n+1, p̄_n+1) · λ_n+1 } .

The momenta are denoted by p̄_n+1 instead of p_n+1 in order to highlight the problem with this approach, as discussed below. Variation of (<ref>) leads to a modified integrator, which, upon defining

q_n = q̄_n + ϕ_p^T (q_n, p̄_n) λ_n and p_n = p̄_n - ϕ_q^T (q_n, p̄_n) λ_n ,

can be written as

P_n,i = ∂L/∂q̇ (Q_n,i, Q̇_n,i) , Ṗ_n,i = ∂L/∂q (Q_n,i, Q̇_n,i) ,
Q_n,i = q_n + h ∑_j=1^s a_ij Q̇_n,j , P_n,i = p_n + h ∑_j=1^s a̅_ij Ṗ_n,j ,
q̄_n+1 = q_n + h ∑_i=1^s b_i Q̇_n,i , p̄_n+1 = p_n + h ∑_i=1^s b̅_i Ṗ_n,i ,

with projection

q_n+1 = q̄_n+1 + ϕ_p^T (q_n+1, p̄_n+1) λ_n+1 ,
p_n+1 = p̄_n+1 - ϕ_q^T (q_n+1, p̄_n+1) λ_n+1 ,
0 = ϕ (q_n+1, p̄_n+1) ,

where we assume that p_0 = ϑ(q_0), so that p̄_0 = p_0 and λ_0 = 0. We observe that the constraint is enforced at (q_n+1, p̄_n+1), that is, at the projected coordinate but the unprojected momentum. Practically, we fix the momentum p̄_n+1 and change the coordinate q_n+1 until it satisfies the constraint ϕ(q_n+1, p̄_n+1) = 0. Then the momentum is shifted using the λ_n+1 determined from the projection of q_n+1. The result is that while (q_n+1, p̄_n+1) is guaranteed to lie on the constraint submanifold, this is not the case for (q_n+1, p_n+1).

§.§ Discrete Hamilton–Pontryagin Principle

As the Hamilton–Pontryagin principle of Section <ref> provides a very natural setting for degenerate Lagrangian systems, where the Dirac constraint appears as one of the equations of motion, it appears to be an appropriate starting point for discretization. To that end, consider the (+) and (-) discrete Lagrange–Pontryagin principles proposed by <cit.> and given by

δ ∑_n=0^N-1 [ L_d (q_n, q_n^+) + p_n+1 · (q_n+1 - q_n^+) ] = 0 ,

and

δ ∑_n=0^N-1 [ L_d (q_n+1^-, q_n+1) - p_n · (q_n - q_n+1^-) ] = 0 ,

respectively. Computing the variations and assuming that the variations of q are fixed at the endpoints, δq_0 = δq_N = 0, we obtain

q_n+1 = q_n^+ , p_n = - D_1 L_d (q_n, q_n^+) , p_n+1 = D_2 L_d (q_n, q_n^+) ,

as well as

q_n = q_n+1^- , p_n = - D_1 L_d (q_n+1^-, q_n+1) , p_n+1 = D_2 L_d (q_n+1^-, q_n+1) ,

which in both cases are immediately recognized as being equivalent to the position-momentum form (<ref>) and therefore subject to the same instabilities.

§ PROJECTION METHODS

Projection methods are a standard technique for the integration of ordinary differential equations on manifolds <cit.>. Constructing numerical integrators on manifolds with complicated structure is often difficult; this difficulty is usually avoided by embedding the manifold into a larger space with a simple, usually Euclidean, structure, where standard integrators can be applied. Projection methods are then used to ensure that the solution stays on the correct subspace of the extended solution space, as this is usually not guaranteed by the numerical integrator itself.

In the standard projection method, a projection is applied after each step of the numerical algorithm. Assuming that the initial condition lies in the manifold, the solution of the projected integrator will stay in the manifold. The problem with this approach is that even if the numerical integrator is symmetric, the whole algorithm, comprising the integrator and the projection, will not be symmetric. This often leads to growing errors in the solution and consequently to a drift in the total energy of the system.
This can be remedied by symmetrizing the projection <cit.>, where the initial data is first perturbed out of the constraint submanifold before the numerical integrator is applied, and then projected back onto the manifold. This leads to very good long-time stability and improved energy behaviour.

While such projection methods, both standard and symmetric ones, are standard procedures for conserving energy as well as holonomic and non-holonomic constraints, not much is known about their application to Dirac constraints. Some authors consider general differential-algebraic systems of index two <cit.>, the class to which the systems considered here belong, but a discussion of symplecticity seems to be mostly lacking from the literature, aside from some remarks on the conservation of quadratic invariants by the post-projection method of <cit.>.

In the following, we apply several projection methods (standard, symmetric, symplectic, midpoint) to variational integrators in position-momentum form. As it turns out, neither the standard projection nor the symmetric projection is symplectic. The symmetric projection nevertheless shows very good long-time stability, as it can be shown to be pseudo-symplectic. The symplectic projection method, as the name suggests, is indeed symplectic, although in a generalized sense. The midpoint projection method is symplectic in the usual sense, but only for particular integrators.

The general procedure is as follows. We start with initial conditions q_n on Δ (recall that for the particular Lagrangian (<ref>) considered here, the configuration manifold M and the constraint submanifold Δ are isomorphic, so that we can use the same coordinates on Δ as we use on M). We compute the corresponding momentum p_n by the continuous fibre derivative (<ref>), which yields initial conditions (q_n, p_n = ϑ(q_n)) satisfying the constraint ϕ(q_n, p_n) = 0. This corresponds to the inclusion map (<ref>). Then, we may or may not perturb these initial conditions off the constraint submanifold by applying a map (q_n, p_n) ↦ (q̄_n, p̄_n) which is either the inverse P^-1 of a projection P : T*M → i(Δ) or, in the case of the standard projection of Section <ref>, just the identity. The perturbation is followed by the application of some canonically symplectic algorithm Ψ_h on T*M, namely a variational integrator in position-momentum form (<ref>) or a variational Runge–Kutta method (<ref>) or (<ref>), in which cases we have that Ψ_h = ( F^+ L_d ) ∘ ( F^- L_d )^-1. In general, the result of this algorithm, (q̄_n+1, p̄_n+1) = Ψ_h (q̄_n, p̄_n), will not lie on the constraint submanifold (<ref>). Therefore we apply a projection (q̄_n+1, p̄_n+1) ↦ (q_n+1, p_n+1) which enforces ϕ(q_n+1, p_n+1) = p_n+1 - ϑ(q_n+1) = 0. As this final result is a point in i(Δ), it is completely characterized by the value q_n+1.

[Figure: Gradient of the constraint function ϕ, orthogonal and Ω-orthogonal to the level surfaces of ϕ.]

Let us emphasize that, in contrast to standard projection methods, where the solution is projected orthogonally to the constraint submanifold, along the gradient of ϕ, here the projection has to be Ω-orthogonal (c.f., Figure <ref>), where Ω is the canonical symplectic matrix (<ref>). That is, denoting by λ the Lagrange multiplier, the projection step is given by Ω^-1 ∇ϕ^T λ instead of an orthogonal projection ∇ϕ^T λ.
This appears quite natural when comparing with (<ref>).Let us also note that, practically speaking, the momenta _n and _n+1 are merely treated as intermediate variables much like the internal stages of a Runge–Kutta method. The Lagrange multiplier , on the other hand, is determined in different ways for the different methods and can be the same or different in the perturbation and the projection. It thus takes the role of an internal variable only for the standard, symmetric projection and midpoint projection, but not for the symplectic projection.§.§ Projected Fibre Derivatives In the following, we will try to underpin the construction of the various projection methods with some geometric ideas. We already mentioned several times that the position-momentum form of the variational integrator (<ref>) suffers from the problem that it does not preserve the constraint submanifold Δ defined in (<ref>). That is, even though it is applied to a point in i(Δ), it usually returns a point in M, but outside of i(Δ). In order to understand the reason for this, let us define Δ_M^- and Δ_M^+ as the subsets of M×M which are mapped into the constraint submanifold i(Δ) by the discrete fibre derivatives F^- L_d and F^+ L_d, respectively, i.e.,Δ_M^- = { (_n, _n+1) ∈M×M | F^- L_d (_n, _n+1) = (_n, _n) ∈ i(Δ) } ,Δ_M^+ = { (_n, _n+1) ∈M×M | F^+ L_d (_n, _n+1) = (_n+1, _n+1) ∈ i(Δ) } ,or more explicitly,Δ_M^- = { (_n, _n+1) ∈M×M |- D_1 L_d (_n, _n+1) = ϑ(_n) } ,Δ_M^+ = { (_n, _n+1) ∈M×M |D_2 L_d (_n, _n+1) = ϑ(_n+1) } .A sufficient condition for the position-momentum form of the variational integrator (<ref>) to preserve the constraint submanifold (<ref>) would be that Δ_M^- and Δ_M^+ are identical.Slightly weaker necessary conditions can be formulated depending on the formulation of the position-momentum form in terms of the discrete Euler–Lagrange equations (<ref>) and the discrete fibre derivative (<ref>), c.f., Equation (<ref>). For example, considering (<ref>), a necessary condition for the position-momentum form to preserve Δ is that the image of the inverse of F^- L_d, namely Δ_M^-, is in Δ_M^+,( F^- L_d)^-1 (i(Δ)) = Δ_M^-⊂Δ_M^+ .Further, from (<ref>) and (<ref>) it follows that the image of the variational integrator F_L_d applied to Δ_M^- must be in Δ_M^- and the image of F_L_d applied to Δ_M^+ must be in Δ_M^+,F_L_d( Δ_M^-)⊂Δ_M^- , F_L_d( Δ_M^+)⊂Δ_M^+ .None of these conditions can be guaranteed and they are in general not satisfied. Although Δ_M^- and Δ_M^+ might have some overlap, they are usually not identical, and the variational integrator, applied to a point in Δ_M^- or Δ_M^+, does not necessarily result in a point in Δ_M^- or Δ_M^+, respectively.In order to construct a modified algorithm which does preserve the constraint submanifold, we compose the discrete fibre derivatives F^± with appropriate projections P^±,(_n, _n)= ( P^-∘F^- L_d) (_n, _n+1)= P__n^-^-( _n, -D_1 L_d (_n, _n+1) ) , (_n+1, _n+1)= ( P^+∘F^+ L_d) (_n, _n+1)= P__n+1^+^+( _n+1, D_2 L_d (_n, _n+1) ) ,so that they take any point in M×M to the constraint submanifold Δ. The Lagrange multiplier λ is indicated as subscript and implicitly determined by requiring that the constraint ϕ is satisfied by the projected values ofand . These projected fibre derivatives will not be a fibre-preserving map anymore, but they will change bothand , mimicking the continuous equations (<ref>). 
Noting that the nullspace of P_λ is the span of Ω^-1 ∇ϕ, a natural candidate for the projection P_λ is given by

P_λ^± : (q, p) ↦ (q̄, p̄) = (q, p) ± h Ω^-1 ∇ϕ^T (q̄, p̄) λ , 0 = ϕ(q̄, p̄) ,

so that ( P^- ∘ F^- L_d ) (q_n, q_n+1) explicitly reads

q̄_n = q_n - h ϕ_p^T (q̄_n, p̄_n) λ_n^- ,
p̄_n = - D_1 L_d (q_n, q_n+1) + h ϕ_q^T (q̄_n, p̄_n) λ_n^- ,
0 = ϕ(q̄_n, p̄_n) ,

and ( P^+ ∘ F^+ L_d ) (q_n, q_n+1) explicitly reads

q̄_n+1 = q_n+1 + h ϕ_p^T (q̄_n+1, p̄_n+1) λ_n+1^+ ,
p̄_n+1 = D_2 L_d (q_n, q_n+1) - h ϕ_q^T (q̄_n+1, p̄_n+1) λ_n+1^+ ,
0 = ϕ(q̄_n+1, p̄_n+1) .

The signs in front of the projections have been chosen in correspondence with the signs of the discrete forces in <cit.>. With these projections we obtain all of the algorithms introduced in the following sections, except for the midpoint projection, in a similar fashion to the definition of the position-momentum form of the variational integrator (<ref>), as a map Δ → Δ, which can formally be written as

Φ_h = ( π_Δ ∘ P^+ ∘ F^+ L_d ) ∘ ( π_Δ ∘ P^- ∘ F^- L_d )^-1 .

In total, we obtain algorithms which map q_n into q_n+1 via the steps

Δ → i(Δ) ⊂ T*M → M×M → T*M ⊃ i(Δ) → Δ ,

where π_Δ^-1 is identical to the inclusion (<ref>). The difference between the various algorithms lies in the choice of λ_n^- and λ_n+1^+, as follows:

Projection | λ_n^- | λ_n+1^+
Standard | 0 | λ_n+1
Symplectic | λ_n | R(∞) λ_n+1
Symmetric | λ_n+1/2 | R(∞) λ_n+1/2
Midpoint | λ_n+1/2 | R(∞) λ_n+1/2

For the symmetric, symplectic and midpoint projections, it is important to adapt the sign in the projection according to the stability function R(∞) of the basic integrator (for details see e.g. <cit.>). For the methods we are interested in, namely Runge–Kutta methods, the stability function is given by R(z) = 1 + z b^T (𝕀 - zA)^-1 e with e = (1, 1, ..., 1)^T ∈ ℝ^s, and we have R(∞) = ±1; more specifically, for Gauss–Legendre methods R(∞) = (-1)^s, and for partitioned Gauss–Lobatto IIIA–IIIB and IIIB–IIIA methods we have R(∞) = (-1)^s-1.

Let us remark that for the standard projection, the basic integrator and the projection step can be applied independently. Similarly, for the symplectic projection, the three steps, namely perturbation, numerical integrator, and projection, decouple and can be solved consecutively, as we use different Lagrange multipliers, λ_n in the perturbation and λ_n+1 in the projection. For the symmetric projection and the midpoint projection, however, this is not the case. There, we use the same Lagrange multiplier λ_n+1/2 in both the perturbation and the projection, so that the whole system has to be solved at once, which is more costly. This also implies that for the projection methods where λ_n^- and λ_n+1^+ are the same (possibly up to a sign due to R(∞)), strictly speaking we cannot write the projected algorithm in terms of a composition of two steps as we did in (<ref>). Instead, the whole algorithm has to be treated as one nonlinear map. The idea behind the construction of the methods is still the same, though. Only the midpoint projection of Section <ref> needs special treatment. There, the operator P_λ is defined in a slightly more complicated way than in (<ref>), using different arguments in the projection step, which does not quite fit the general framework outlined here.

[Figure: Illustration of the standard projection method. The solution is projected onto the constraint submanifold Δ after each step of the numerical integrator Ψ_h.]

§.§ Standard Projection

The standard projection method <cit.> is the simplest projection method. Starting from q_n, we use the continuous fibre derivative (<ref>) to compute p_n = ϑ(q_n).
Then we apply some symplectic one-step method Ψ_h to _n = (_n, _n) to obtain an intermediate solution _n+1,_n+1 = Ψ_h (_n) ,which is projected onto the constraint submanifold (<ref>) by_n+1 = _n+1 + hΩ^-1∇ϕ^T (_n+1) _n+1 ,enforcing the constraint0= ϕ (_n+1) .This projection method, combined with the variational integrators (<ref>), is not symmetric, and therefore not reversible. Moreover, it exhibits a drift of the energy, as has been observed before, e.g., for holonomic constraints <cit.>. §.§.§ Symplecticity In order to verify the symplecticity condition, we write the projection (<ref>) in terms of (, ), that is_n+1^i = _n+1^i - hϕ_^i^k (_n+1, _n+1)_n+1^k ,_n+1^i = _n+1^i + hϕ_^i^k (_n+1, _n+1)_n+1^k ,and assume that Ψ_h is a symplectic integrator so that_n^i∧_n^i = _n+1^i∧_n+1^i .We start by taking the exterior derivative of _n+1 and _n+1,_n+1^i = _n+1^i + ( hϕ_^i^k (_n+1, _n+1)_n+1^k) ,_n+1^i = _n+1^i - ( hϕ_^i^k (_n+1, _n+1)_n+1^k) .Take the wedge product of the two equations,_n+1^i∧_n+1^i= _n+1^i ∧_n+1^i- _n+1^i∧( hϕ_^i^k (_n+1, _n+1)_n+1^k) + ( hϕ_^i^k (_n+1, _n+1)_n+1^k) ∧_n+1^i- ( hϕ_^i^k (_n+1, _n+1)_n+1^k) ∧( hϕ_^i^l (_n+1, _n+1)_n+1^l) .The second and third term on the right-hand side become( h_n+1^kϕ_^i^k (_n+1 , _n+1) ) ∧_n+1^i+ ( h_n+1^kϕ_^i^k (_n+1, _n+1) ) ∧_n+1^i= ( h_n+1^k[ ϕ_^i^k (_n+1, _n+1)_n+1^i + ϕ_^i^k (_n+1, _n+1)_n+1^i] ) = ( h^k [ ϕ^k (_n+1, _n+1) ] )= 0 .The term in square brackets vanishes as ϕ (_n+1, _n+1) = 0 and therefore ϕ (_n+1, _n+1) = 0. Further we have( h_n+1^kϕ_^i^k (_n+1, _n+1) ) = h_n+1^kϕ_^i^j^k (_n+1, _n+1)_n+1^j+ h_n+1^kϕ_^i^j^k (_n+1, _n+1)_n+1^j+ hϕ_^i^k (_n+1, _n+1)_n+1^k= - h_n+1^kϑ_k,ij (_n+1)_n+1^j- hϑ_k,i (_n+1)_n+1^k ,and( h_n+1^lϕ_^i^l (_n+1, _n+1) )= h_n+1^lϕ_^i^j^l (_n+1, _n+1)_n+1^j+ h_n+1^lϕ_^i^j^l (_n+1, _n+1)_n+1^j+ hϕ_^i^l (_n+1, _n+1)_n+1^l= h_n+1^i ,where the terms involving ϕ_^i^j^k or ϕ_^i^j^k vanish as ϕ(p,q) is separable and the terms involving ϕ_^i^j^k vanish as ϕ is linear in . The wedge product of the two expressions becomes- ( hϕ_^i^k (_n+1, _n+1)_n+1^k) ∧( hϕ_^i^l (_n+1, _n+1)_n+1^l) = = ( h_n+1^kϑ_k,ij (_n+1)_n+1^j+ hϑ_k,i (_n+1)_n+1^k) ∧( h_n+1^i) = = h^2 ϑ_j,i (_n+1)_n+1^j∧_n+1^i .The result can be anti-symmetrized so that by using (<ref>) as well as (<ref>), we obtain_n^i∧_n^i= _n+1^i∧_n+1^i+ h^22Ω_ij (_n+1)_n+1^i∧_n+1^j + h^2 _n+1^kϑ_k,ij (_n+1)_n+1^i∧_n+1^j .Using that the constraint ϕ(, ) = p - ϑ(q) = 0 holds for both (_n, _n) and (_n+1, _n+1), this can be rewritten as12Ω_ij (_n)_n^i∧_n^j= 12Ω_ij (_n+1)_n+1^i∧_n+1^j- h^22Ω_ij (_n+1)_n+1^i∧_n+1^j - h^2 _n+1^kϑ_k,ij (_n+1)_n+1^i∧_n+1^j ,and we see that the noncanonical symplectic form (<ref>) is not preserved, but in each step accumulates an error h^2 _n+1^kϑ_k,ij (_n+1)_n+1^i∧_n+1^j + 12 h^2 Ω_ij (_n+1)_n+1^i∧_n+1^j. In numerical simulations, this error accumulation usually manifests itself in form of a drift of the solution and the energy.§.§ Symmetric Projection To overcome the shortcomings of the standard projection, we consider a symmetric projection of the variational Runge–Kutta integrators following <cit.>, c.f., Figure <ref> (see also <cit.>). 
Here, one starts again by computing the momentum p_n as a function of the coordinates q_n according to the continuous fibre derivative, which can be expressed with the constraint function as

0 = ϕ(z_n) .

Then the initial value z_n is first perturbed,

z̄_n = z_n + h Ω^-1 ∇ϕ^T (z_n) λ_n+1/2 ,

followed by the application of some one-step method Ψ_h,

z̄_n+1 = Ψ_h (z̄_n) ,

and a projection of the result onto the constraint submanifold,

z_n+1 = z̄_n+1 + h R(∞) Ω^-1 ∇ϕ^T (z̄_n+1) λ_n+1/2 ,

which enforces the constraint

0 = ϕ(z_n+1) .

Here, it is important to note that the Lagrange multiplier λ_n+1/2 is the same in both the perturbation and the projection step, and to account for the stability function R(∞) of the basic integrator, as mentioned before. The algorithm composed of the symmetric projection and some symmetric variational integrator in position-momentum form constitutes a symmetric map

Φ_h : q_n ↦ q_n+1 ,

where, from a practical point of view, the momenta and the Lagrange multiplier λ_n+1/2 are treated as intermediate variables.

§.§.§ Symplecticity

In the following, we assume that R(∞) = 1. Then, the considerations of symplecticity for the symmetric projection follow along the very same lines as for the standard projection. In addition to the projection, we also have to consider the perturbation. Assuming the integrator Ψ_h is such that

dq̄_n^i ∧ dp̄_n^i = dq̄_n+1^i ∧ dp̄_n+1^i ,

we obtain

dq_n^i ∧ dp_n^i - (h^2/2) Ω_ij (q_n) dλ_n+1/2^i ∧ dλ_n+1/2^j - h^2 λ_n+1/2^k ϑ_k,ij (q_n) dq_n^i ∧ dλ_n+1/2^j = dq_n+1^i ∧ dp_n+1^i - (h^2/2) Ω_ij (q_n+1) dλ_n+1/2^i ∧ dλ_n+1/2^j - h^2 λ_n+1/2^k ϑ_k,ij (q_n+1) dq_n+1^i ∧ dλ_n+1/2^j .

The symmetrically projected integrator admits a certain symmetry in the error terms and can be shown to be pseudo-symplectic <cit.>. It is worthwhile to go one step back and reconsider the derivation that leads to (<ref>). By the same considerations as for the standard projection, we obtain

dq_n^i ∧ dp_n^i - d( h ϕ_q^i^k (z_n) λ_n+1/2^k ) ∧ d( h ϕ_p^i^l (z_n) λ_n+1/2^l ) = dq_n+1^i ∧ dp_n+1^i - d( h ϕ_q^i^k (z_n+1) λ_n+1/2^k ) ∧ d( h ϕ_p^i^l (z_n+1) λ_n+1/2^l ) .

We see that, in general, the symmetric projection is not symplectic unless

d( h ϕ_q^i^k (z_n) λ_n+1/2^k ) = d( h ϕ_q^i^k (z_n+1) λ_n+1/2^k ) for all i ,

as well as

d( h ϕ_p^i^k (z_n) λ_n+1/2^k ) = d( h ϕ_p^i^k (z_n+1) λ_n+1/2^k ) for all i ,

that is, the initial perturbation is exactly the same as the final projection. While the first condition (<ref>) is not obvious, the second condition is immediately seen to be satisfied for ϕ(q, p) = p - ϑ(q), as ϕ_p = 𝕀, so that (<ref>) reduces to dλ_n+1/2 = dλ_n+1/2.

If the first condition is not satisfied, though, the method is not symplectic. However, as the error terms of the symplecticity condition appear on both sides of (<ref>), the accumulated error is much smaller than with the standard projection. Again, using that the constraint ϕ(q, p) = p - ϑ(q) = 0 holds for both (q_n, p_n) and (q_n+1, p_n+1), the symplecticity condition (<ref>) can be rewritten as

(1/2) Ω_ij (q_n) ( dq_n^i ∧ dq_n^j - h^2 dλ_n+1/2^i ∧ dλ_n+1/2^j ) - h^2 λ_n+1/2^k ϑ_k,ij (q_n) dq_n^i ∧ dλ_n+1/2^j = (1/2) Ω_ij (q_n+1) ( dq_n+1^i ∧ dq_n+1^j - h^2 dλ_n+1/2^i ∧ dλ_n+1/2^j ) - h^2 λ_n+1/2^k ϑ_k,ij (q_n+1) dq_n+1^i ∧ dλ_n+1/2^j .

This formulation suggests the following construction.

[Figure: Illustration of the post-projection method. Starting on the constraint submanifold Δ, the numerical integrator Ψ_h moves the solution away from Δ in the first step. After each step, the solution is projected back onto Δ, but the perturbation at the beginning of each consecutive step is exactly the inverse of the previous projection, so that, practically speaking, the solution is projected back onto Δ only for output purposes.]
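To make the preceding constructions more concrete, the following sketch shows how the Ω-orthogonal shift that underlies both the standard and the symmetric projection might be implemented for the Dirac constraint ϕ(q, p) = p - ϑ(q). This is a minimal illustration only, not the implementation used for the experiments below: the function names are made up for the example, the Jacobian of ϑ is obtained with the ForwardDiff.jl package as one possible choice, the Newton iteration is simplified by neglecting second derivatives of ϑ, and one particular sign convention is fixed.

```julia
using LinearAlgebra
using ForwardDiff

# Example one-form for the Lotka-Volterra model discussed below;
# any other vartheta(q) of matching dimension could be used instead.
vartheta(q) = [log(q[2]) / q[1] + q[2], q[1]]

# Shift (q, p) along h Ω⁻¹ ∇ϕᵀ λ, i.e. qbar = q + h λ and
# pbar = p + h (∂ϑ/∂q)ᵀ λ (up to the sign conventions of the text),
# and determine λ such that ϕ(qbar, pbar) = pbar - ϑ(qbar) = 0.
function project_onto_constraint(q, p, h; tol = 1e-12, maxiter = 50)
    λ = zero(q)
    qbar, pbar = q, p
    for _ in 1:maxiter
        qbar = q .+ h .* λ
        Θ = ForwardDiff.jacobian(vartheta, qbar)   # Jacobian ∂ϑ/∂q at qbar
        pbar = p .+ h .* (Θ' * λ)
        r = pbar .- vartheta(qbar)                 # constraint residual
        norm(r) < tol && break
        # Simplified Newton step: ∂r/∂λ ≈ h (Θᵀ - Θ), which is h times the
        # noncanonical symplectic matrix Ω(qbar), invertible by assumption.
        λ = λ .- (h .* (Θ' .- Θ)) \ r
    end
    return qbar, pbar
end

# Usage: project a point that lies slightly off the constraint submanifold.
q0 = [1.0, 1.0]
p0 = vartheta(q0) .+ 1e-3
qp, pp = project_onto_constraint(q0, p0, 0.1)
```

In the standard projection such a shift is applied once after each integrator step, whereas in the symmetric projection the analogous perturbation and projection are coupled to the integrator step through the shared multiplier λ_n+1/2, so that the whole system has to be solved at once.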
§.§ Symplectic Projection

If we modify the perturbation (<ref>) to use the Lagrange multiplier of the previous time step, λ_n, instead of λ_n+1/2, that is, we replace (<ref>) by

0 = ϕ(z_n) ,
z̄_n = z_n + h Ω^-1 ∇ϕ^T (z_n) λ_n ,
z̄_n+1 = Ψ_h (z̄_n) ,
z_n+1 = z̄_n+1 + h R(∞) Ω^-1 ∇ϕ^T (z̄_n+1) λ_n+1 ,
0 = ϕ(z_n+1) ,

the symplecticity condition (<ref>) is modified as follows,

(1/2) Ω_ij (q_n) ( dq_n^i ∧ dq_n^j - h^2 dλ_n^i ∧ dλ_n^j ) - h^2 λ_n^k ϑ_k,ij (q_n) dq_n^i ∧ dλ_n^j = (1/2) Ω_ij (q_n+1) ( dq_n+1^i ∧ dq_n+1^j - h^2 dλ_n+1^i ∧ dλ_n+1^j ) - h^2 λ_n+1^k ϑ_k,ij (q_n+1) dq_n+1^i ∧ dλ_n+1^j ,

implying the conservation of a modified symplectic form ω_λ, defined on an extended phase space M × ℝ^d with coordinates (q, λ) by

ω_λ = (1/2) Ω_ij (q) dq^i ∧ dq^j - (h^2/2) Ω_ij (q) dλ^i ∧ dλ^j - h^2 λ^k ϑ_k,ij (q) dq^i ∧ dλ^j ,

with matrix representation

Ω_λ = [ Ω , - h^2 λ·ϑ_qq ; h^2 λ·ϑ_qq , - h^2 Ω ] .

To this corresponds a modified one-form ϑ_λ, such that ω_λ = dϑ_λ, given by

ϑ_λ = ( ϑ_i (q) - h λ^k ϑ_k,i (q) ) d( q^i - h λ^i ) .

As noted by <cit.>, the modified perturbation (<ref>)-(<ref>) can be viewed as a change of variables from (q, λ) on M × ℝ^d to (q̄, p̄) on T*M, and the projection (<ref>)-(<ref>) as a change of variables back from (q̄, p̄) to (q, λ). The symplectic form ω_λ on M × ℝ^d thus corresponds to the pullback of the canonical symplectic form ω on T*M by this variable transformation.

Let us note that the sign in front of the projection in (<ref>), given by the stability function of the basic integrator, has very important implications for the nature of the algorithm. If it is the same as in (<ref>), the character of the method is very similar to the symmetric projection method described before. If the sign is the opposite of the one in (<ref>), as for Gauss–Legendre Runge–Kutta methods with an odd number of stages, the perturbation reverses the projection of the previous step, so that we effectively apply the post-projection method of <cit.>. That is, the projected integrator Φ_h is conjugate to the unprojected integrator Ψ_h by

Φ_h = P^-1 ∘ Ψ_h ∘ P ,

so that the following diagram commutes:

      z̄_n    --Ψ_h-->    z̄_n+1
       ↑                    |
      P^-1                  P
       |                    ↓
  (q_n, λ_n) --Φ_h--> (q_n+1, λ_n+1)

and the projection is effectively only applied for the output of the solution, while the actual advancement of the solution in time happens outside of the constraint submanifold (c.f., Figure <ref>). In other words, applying the algorithm Φ_h n times to a point (q_0, 0) is equivalent to applying the perturbation P^-1, then applying the algorithm Ψ_h n times, and projecting the result with P.

Potentially, this might degrade the performance of the algorithm: if the accumulated global error drives the solution too far away from the constraint submanifold, the projection step might no longer have a solution. Interestingly, however, post-projected Gauss–Legendre Runge–Kutta methods retain their optimal order of 2s <cit.>. Moreover, for methods with an odd number of stages, the global error of the unprojected solution is O(h^s+1), compared to O(h^s) for methods with an even number of stages. In practice this seems to be at least part of the reason for the good long-time stability of these methods.

§.§.§ Symplecticity

While in the continuous case the symplectic form on T*M, restricted to the dynamics, is always degenerate, thus not symplectic but presymplectic, this changes in the discretization. The discrete Lagrangian on M×M is in general not degenerate, and thus the symplectic form on M×M is non-degenerate as well.
Composing the usual position-momentum form with the projection to Δ⊂M, thus enforcing ϕ(, )=0 in the way outlined before, we effectively obtain an algorithm mapping M×^d into M×^d instead of the original variational integrator, which mapped M×M into M×M. However, the new algorithm preserves a true symplectic form on M×^d, which is not the same as the presymplectic form of the continuous dynamics, and also not the same as the discrete symplectic form on M×M. This change of the presymplectic form to a symplectic form appears to be due to the initial “non-conservation” of degeneracy when discretizing the Lagrangian in conjunction with the projection.§.§ Midpoint Projection For certain variational Runge–Kutta methods, we can also modify the symmetric projection in a different way in order to obtain a symplectic projection, namely by evaluating the projection at the midpoint_n+1/2 = (_n+1/2, _n+1/2) ,_n+1/2 = 12( _n + _n+1) ,_n+1/2 = 12( _n + _n+1) ,so that the projection algorithm becomes0= ϕ (_n) ,_n = _n + hΩ^-1∇ϕ^T (_n+1/2)_n+1/2 ,_n+1 = Ψ_h (_n) ,_n+1 = _n+1 + hΩ^-1∇ϕ^T (_n+1/2) _n+1/2 , 0= ϕ (_n+1) .This method can be shown to be symplectic with respect to the original noncanonical symplectic form on M if the integrator Ψ_h is a symmetric, symplectic Runge–Kutta method with an odd number of stages s, for which the central stage with index (s+1)/2 corresponds to _n+1/2. This is obviously the case for the implicit midpoint rule, that is the Gauss–Legendre Runge–Kutta method with s=1, but unfortunately not for higher-order Gauss–Legendre or for Gauss-Lobatto methods. However, following <cit.> and <cit.>, higher-order methods similar to Gauss–Legendre methods but satisfying the requested property can be obtained. See for example the method with three stages given in Table <ref>.§.§.§ Symplecticity In order to show symplecticity, we follow a similar path as before for the standard projection method. We start by computing the exterior derivative of the perturbation and projection steps,_n^i = _n^i + ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ,_n^i = _n^i - ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ,and_n+1^i = _n+1^i - ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ,_n+1^i = _n+1^i + ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) .Then we compute the wedge products _n^i∧_n^i,_n^i∧_n^i= _n^i∧_n^i + ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ∧_n^i- _n^i∧( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) - ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ∧( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ,and _n+1^i∧_n+1^i,_n+1^i∧_n+1^i= _n+1^i∧_n+1^i - ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ∧_n+1^i+ _n+1^i∧( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) - ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ∧( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) .Now assume that the integrator Ψ_h is symplectic and thus satisfies _n+1^i∧_n+1^i = _n^i∧_n^i, which allows us to insert the second equation into the first to obtain_n+1^i∧_n+1^i= _n^i∧_n^i - ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ∧ ( _n^i + _n+1^i ) - ( hϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^k) ∧ ( _n^i + _n+1^i ) .Noting that _n^i + _n+1^i = 2 _n+1/2 and _n^i + _n+1^i = 2 _n+1/2, we can rewrite the previous expression as_n+1^i∧_n+1^i= _n^i∧_n^i - ( h_n+1/2^k ϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^i) - ( h_n+1/2^k ϕ_^i^k (_n+1/2, _n+1/2)_n+1/2^i) .The last two terms can be combined, so that the symplecticity condition reads_n+1^i∧_n+1^i = _n^i∧_n^i - ( h_n+1/2^k ϕ^k (_n+1/2, _n+1/2) ) .The additional terms vanish under the assumption that _n+1/2 = (_n+1/2, _n+1/2) is equivalent to one of the internal stages of the variational Runge–Kutta method. 
We pointed out before that for the internal stages the Dirac constraint ϕ(q, p) = 0 is automatically satisfied by the first equation in (<ref>). Therefore, if z_n+1/2 corresponds to one of the internal stages, we have that ϕ(q_n+1/2, p_n+1/2) = 0 and thus also dϕ(q_n+1/2, p_n+1/2) = 0, so that

dq_n+1^i ∧ dp_n+1^i = dq_n^i ∧ dp_n^i .

It is worth pointing out that this holds for arbitrary constraints ϕ(q, p) = 0 and that we did not use the particular structure of (<ref>), like separability or ϕ_p = 𝕀. Therefore, the midpoint projection method is applicable to arbitrary Hamiltonian systems with Dirac constraints, not just the degenerate Lagrangian systems discussed in this paper.

§ NUMERICAL EXPERIMENTS

The projection methods described in the previous section have all been implemented in the package, which is a library of geometric integrators for ordinary differential equations and differential-algebraic equations in the Julia programming language <cit.>, freely available on GitHub <cit.>. We use Newton's method with quadratic line search for solving the nonlinear systems and LU decomposition for solving the linear systems. The Jacobian is computed via automatic differentiation using the package <cit.> and is updated in every time step, but within the nonlinear solver only every five iterations. If possible, the numerical integration step and the projection step are solved separately (that is, for the standard and symplectic projection, but not for the symmetric and midpoint projection). The updates of the solution are computed using compensated summation (Kahan's algorithm) in order to reduce the propagation of round-off errors.

The examples we will consider are a two-dimensional Lotka–Volterra model, planar point vortices with varying circulation, and guiding centre dynamics. The first two examples are implemented in the package. The latter is implemented in the package. Both packages are also available on GitHub <cit.>. Except for the first example, all systems possess Noether symmetries and some related conservation law, whose preservation will be monitored in the simulations.

We perform simulations with Gauss–Legendre Runge–Kutta methods with one to six stages, as well as Gauss–Lobatto–IIIA, IIIB, IIIC, IIID and IIIE methods <cit.> with two, three and four stages. Here, the referenced method always provides the coefficients a, and the coefficients a̅ are chosen such that the symplecticity condition (<ref>) is satisfied. That is, Gauss–Lobatto–IIIA denotes the IIIA–IIIB pair, Gauss–Lobatto–IIIB denotes the IIIB–IIIA pair, and Gauss–Lobatto–IIIC denotes the IIIC–IIIC* pair. For the Gauss–Legendre as well as the Gauss–Lobatto–IIID and IIIE methods, we have a̅ = a. The Gauss–Lobatto–IIIC* method is sometimes also referred to as Gauss–Lobatto–III. Similarly inconsistent naming is found for the IIID and IIIE methods. Here, we denote by the Gauss–Lobatto–IIID method the special case of the Gauss–Lobatto–IIIS method with σ = 1.0, and by the IIIE method the special case of the IIIS method with σ = 0.5.
The Gauss–Lobatto–IIIS methods are interpolations of the IIIA, IIIB, IIIC and IIIC* methods, with coefficients given by

a_ij^S (σ) = (σ - 1/2) ( a_ij^A + a_ij^B ) + (1 - σ) ( a_ij^C + a_ij^C* ) ,

so that

a_ij^D = a_ij^S (1) = 1/2 ( a_ij^A + a_ij^B ) ,

and

a_ij^E = a_ij^S (1/2) = 1/2 ( a_ij^C + a_ij^C* ) .

We compare the results of the variational Runge–Kutta methods with simulations using Radau–IIA methods, which have the advantage that they automatically preserve the Dirac constraint, but also the disadvantage of dissipating energy. For all methods, we perform simulations both without projection and with the standard, symmetric, symplectic and midpoint projections. Due to limited space we will only show some selected examples. The collection of all simulation results can be found in the documentation of the package <cit.>.

For most examples the simulations with the Gauss–Lobatto–IIIA, IIIB and IIIC methods break down after very few time steps. Even when reducing the time step by an order of magnitude, the IIIA, IIIB and IIIC methods perform rather poorly in almost all of the experiments. For the unprojected integrator this was already shown as a motivating example in Figure <ref> for the Lotka–Volterra model. The origin of this behaviour is most likely related to the fact that for the IIIA, IIIB and IIIC methods, different Runge–Kutta coefficients a_ij and a̅_ij are used for the integration of the trajectory q and the conjugate momenta p. Even though the nodes c_i of the stages are the same for both q and p, the definition of the values at the nodes, Q_n,i and P_n,i, in terms of the corresponding vector fields Q̇_n,i and Ṗ_n,i is different. While this is usually fine for regular problems, especially with separable Hamiltonians, it does not seem appropriate for degenerate problems, where there is a functional relationship between the momenta and the position along the trajectory, given at the internal stages by P_n,i = ϑ(Q_n,i) for 1 ≤ i ≤ s. This particular property of degenerate systems suggests that the same coefficient matrices should be used for the definition of the internal stages of both q and p.

For the two- and four-stage Gauss–Lobatto–IIIA, IIIB and IIIC methods, the symplectic projection amounts to a post-projection. Therefore, if the simulation without projection breaks down, so does the simulation with symplectic projection. The midpoint projection is only symplectic for the Gauss–Legendre method with one stage and the SRK3 method, whose tableau was given in Table <ref>. Nevertheless, we run experiments with this projection and all integrators to study the long-time behaviour.

§.§ Lotka–Volterra Model

Lotka–Volterra models <cit.> are used in mathematical biology for modelling the population dynamics of interacting species, as well as in many other fields where predator-prey and similar models appear.
The dynamics of the growth of two interacting species can be modelled by the following Lagrangian system <cit.>,

L (q, q̇) = ( (log q_2)/q_1 + q_2 ) q̇_1 + q_1 q̇_2 - H (q) ,

with the Hamiltonian H given by

H (q) = a_1 q_1 + a_2 q_2 - b_1 log q_1 - b_2 log q_2 .

The noncanonical symplectic form (<ref>) is computed as

ω = - 1/(q_1 q_2) dq_1 ∧ dq_2 .

In the position-momentum form, which is the basis for the variational Runge–Kutta methods we employ in the numerical experiments, we obtain the following functions for the momenta and forces,

ϑ_1 (q) = (log q_2)/q_1 + q_2 , f_1 (q, q̇) = q̇_2 - ( (log q_2)/q_1^2 ) q̇_1 - a_1 + b_1/q_1 ,
ϑ_2 (q) = q_1 , f_2 (q, q̇) = ( 1 + 1/(q_1 q_2) ) q̇_1 - a_2 + b_2/q_2 .

In the simulations, we use a time step of h = 0.1 and consider initial conditions (q_1,0, q_2,0) = (1, 1) with parameters (a_1, a_2, b_1, b_2) = (1, 1, 1, 2), which give a periodic solution. We make the following observations:

* The Gauss–Legendre Runge–Kutta methods with an odd number of stages (Figure <ref>) as well as the SRK3 method (Figure <ref>) are stable even without projection. Even though they do not preserve the Dirac constraint exactly, the error in the constraint oscillates about zero, and the amplitude of that oscillation appears to be bounded or at least grows only slowly. A similar behaviour is observed for the Gauss–Lobatto–IIID and IIIE methods (not shown).

* The Gauss–Legendre Runge–Kutta methods with an even number of stages (Figure <ref>) show an increasing error in the Dirac constraint and also in the energy, which eventually renders the simulation unstable (after about 250 000 time steps for the two-stage method and after about 1 000 000 time steps for the four-stage method).

* The Gauss–Lobatto–IIIA, IIIB and IIIC methods (not shown) are unstable without projection. For the two-stage integrators, the simulation crashes after about 25 time steps. For the three-stage integrators, it crashes immediately on the first time step. Decreasing the time step to h = 0.01, both run for a short period: the two-stage integrators crash after about 350 time steps and the three-stage integrators after about 1 000 time steps.

* The standard projection leads to very good results with all Gauss–Legendre methods (Figures <ref>, <ref>), the SRK3 method (Figure <ref>), as well as the Gauss–Lobatto–IIID and IIIE methods (not shown), but not with the Gauss–Lobatto–IIIA, IIIB and IIIC methods (not shown), whose solution deteriorates quickly. We observe small drifts in the energy error, but over 10 000 000 time steps this drift is of the order of 10^-12.

* For the Gauss–Legendre methods (Figures <ref>, <ref>), the SRK3 method (Figure <ref>), and the Gauss–Lobatto–IIID and IIIE methods (not shown), the symmetric projection leads to results similar to the standard projection. In some cases the drift in the energy seems to be slightly larger than with the standard projection. This, however, is due to round-off errors (c.f., Section <ref>). The errors of the Gauss–Lobatto–IIIA, IIIB and IIIC methods (not shown) are smaller than with the standard projection, but there is still a substantial drift in the energy.

* The symplectic projection (Figures <ref>, <ref>, <ref>) leads to very good results with all Gauss–Legendre methods, the SRK3 method, as well as the Gauss–Lobatto–IIID and IIIE methods, comparable to the results obtained with the symmetric projection. For the Gauss–Lobatto–IIIA, IIIB and IIIC methods with an even number of stages, the symplectic projection corresponds to a post-projection method.
For this reason, and as these methods are unstable without projection, the symplectic projection is also unstable. Although for an odd number of stages the projection does not correspond to a post-projection, the simulations still tend to crash as quickly as without projection.

* The midpoint projection (Figures <ref>, <ref>, <ref>) leads to good results with all Gauss–Legendre methods, the SRK3 method, as well as the Gauss–Lobatto–IIID and IIIE methods, again comparable to the results obtained with the symmetric projection. However, it is only symplectic for the Gauss–Legendre method with one stage and the SRK3 method.

* With the Radau methods, we observe exact conservation of the Dirac constraint, as expected, but dissipation of energy, which is related to large errors in the solution.

In summary, the numerical experiments for the Lotka–Volterra problem suggest that the Gauss–Legendre, the SRK and the Gauss–Lobatto–IIID and IIIE methods lead to good results with all projection methods, whereas (for the time step used) the results with the Gauss–Lobatto–IIIA, IIIB and IIIC methods are never satisfactory, even with projection. For these methods, the time step needs to be reduced by at least a factor of 10 in order to obtain stable simulations. With such small time steps, however, the Gauss–Lobatto–IIIA, IIIB and IIIC methods are no longer competitive, and one should rather use the Gauss–Legendre or the Gauss–Lobatto–IIID and IIIE methods. For the Gauss–Legendre methods with an odd number of stages, the simulation appears to be stable even without projection, at least for very long times (10 million time steps), although the order of the integrators is decreased in this case (c.f., Section <ref>). It was already reported by <cit.> that for index-two differential-algebraic equations Gauss–Legendre Runge–Kutta methods with an odd number of stages behave much better than those with an even number of stages, which is related to the stability function R(∞) being +1 for the former and -1 for the latter.

§.§ Planar Point Vortices with Varying Circulation

Systems of planar point vortices <cit.> provide a challenging problem for numerical integrators. Such systems are integrable for up to three vortices but exhibit chaotic behaviour for four or more vortices. An interesting phenomenon is that of leapfrogging, which is usually observed only for two pairs of point vortices. However, a single pair of point vortices can also leapfrog by itself (see Figure <ref>) if the circulation is position dependent <cit.>. In this case, the function ϑ in the Lagrangian is nonlinear, hence this provides an interesting test case for our integrators.

We denote coordinates on M by (x, y) = (x^1, …, x^d, y^1, …, y^d) and correspondingly coordinates on TM by (x, y, ẋ, ẏ) = (x^1, …, x^d, y^1, …, y^d, ẋ^1, …, ẋ^d, ẏ^1, …, ẏ^d). The coordinates on M are sometimes also collectively referred to by q. The general form of the Lagrangian for point vortices is

L = 1/2 ∑_i,j=1^d Γ_ij ( x^i ẏ^j - y^i ẋ^j ) - 1/(4π) ∑_i ≠ k^d ∑_j ≠ l^d Γ_ij Γ_kl log( (x^i - x^k)^2 + (y^j - y^l)^2 ) ,

with d the number of vortices and Γ the matrix of vortex strengths, which is assumed to be of the form Γ_ij = γ_i δ_ij, where γ_i is the circulation of the ith vortex. Here, we consider the special case of Γ being position-dependent, specifically Γ_ij (x^i, y^i, x^j, y^j) = γ_i S(x^i, y^i) δ_ij, where we assume later on that S has rotational symmetry.
For d = 2 the Lagrangian thus becomes

  L = 1/2 (γ^1 S(x^1, y^1) (x^1 ẏ^1 - ẋ^1 y^1) + γ^2 S(x^2, y^2) (x^2 ẏ^2 - ẋ^2 y^2)) - H(x, y),
  H = 1/2π γ^1 γ^2 S(x^1, y^1) S(x^2, y^2) log((x^1 - x^2)^2 + (y^1 - y^2)^2).

The noncanonical symplectic form (<ref>) of this system reads

  ω = ∑_i=1^2 γ_i S(x^i, y^i) dx^i ∧ dy^i + 1/2 ∑_i=1^2 γ_i (x^i ∂S/∂x (x^i, y^i) + y^i ∂S/∂y (x^i, y^i)) dx^i ∧ dy^i.

Assuming that the function S is of the form S(a,b) = s(a^2 + b^2) with some function s : ℝ → ℝ, the Lagrangian is invariant under rotations of all coordinates by a constant angle ϑ, that is, the following transformation of the coordinates,

  σ^ϑ : (x^i, y^i) ↦ (x^i cos(ϑ) - y^i sin(ϑ), y^i cos(ϑ) + x^i sin(ϑ)),

together with the corresponding transformation of the velocities, leaves the Lagrangian L invariant. The generating vector field is computed as

  V = dσ^ϑ/dϑ |_{ϑ=0} = - y^i ∂/∂x^i + x^i ∂/∂y^i,

and the corresponding conserved quantity (<ref>) is obtained as

  P = ∂L/∂q̇ · V = 1/2 ∑_i=1^2 γ_i ((x^i)^2 + (y^i)^2) S(x^i, y^i) = q^1 ϑ_2(q, q̇) - q^2 ϑ_1(q, q̇) + q^3 ϑ_4(q, q̇) - q^4 ϑ_3(q, q̇),

with q = (q^1, q^2, q^3, q^4) = (x^1, y^1, x^2, y^2). We are particularly interested in the behaviour of this angular momentum under the various projection methods.

We consider the simple case of s(r) = 1 + r, so that S(a,b) = 1 + a^2 + b^2, and the functions for the momenta are computed as

  ϑ_1(x, y) = - 1/2 γ^1 y^1 S(x^1, y^1),    ϑ_2(x, y) = 1/2 γ^1 x^1 S(x^1, y^1),
  ϑ_3(x, y) = - 1/2 γ^2 y^2 S(x^2, y^2),    ϑ_4(x, y) = 1/2 γ^2 x^2 S(x^2, y^2),

and those for the forces as

  f_1(x, y, ẋ, ẏ) = 1/2 γ^1 (x^1 ẏ^1 - ẋ^1 y^1) S^(x)(x^1, y^1) + 1/2 γ^1 ẏ^1 S(x^1, y^1) - ∇_1 H(x, y),
  f_2(x, y, ẋ, ẏ) = 1/2 γ^1 (x^1 ẏ^1 - ẋ^1 y^1) S^(y)(x^1, y^1) - 1/2 γ^1 ẋ^1 S(x^1, y^1) - ∇_2 H(x, y),
  f_3(x, y, ẋ, ẏ) = 1/2 γ^2 (x^2 ẏ^2 - ẋ^2 y^2) S^(x)(x^2, y^2) + 1/2 γ^2 ẏ^2 S(x^2, y^2) - ∇_3 H(x, y),
  f_4(x, y, ẋ, ẏ) = 1/2 γ^2 (x^2 ẏ^2 - ẋ^2 y^2) S^(y)(x^2, y^2) - 1/2 γ^2 ẋ^2 S(x^2, y^2) - ∇_4 H(x, y),

with the gradient of the Hamiltonian being

  ∇_1 H(x, y) = γ^1 γ^2/2π S^(x)(x^1, y^1) S(x^2, y^2) log((x^1 - x^2)^2 + (y^1 - y^2)^2) + γ^1 γ^2/π S(x^1, y^1) S(x^2, y^2) (x^1 - x^2)/((x^1 - x^2)^2 + (y^1 - y^2)^2),
  ∇_2 H(x, y) = γ^1 γ^2/2π S^(y)(x^1, y^1) S(x^2, y^2) log((x^1 - x^2)^2 + (y^1 - y^2)^2) + γ^1 γ^2/π S(x^1, y^1) S(x^2, y^2) (y^1 - y^2)/((x^1 - x^2)^2 + (y^1 - y^2)^2),
  ∇_3 H(x, y) = γ^1 γ^2/2π S^(x)(x^2, y^2) S(x^1, y^1) log((x^1 - x^2)^2 + (y^1 - y^2)^2) - γ^1 γ^2/π S(x^1, y^1) S(x^2, y^2) (x^1 - x^2)/((x^1 - x^2)^2 + (y^1 - y^2)^2),
  ∇_4 H(x, y) = γ^1 γ^2/2π S^(y)(x^2, y^2) S(x^1, y^1) log((x^1 - x^2)^2 + (y^1 - y^2)^2) - γ^1 γ^2/π S(x^1, y^1) S(x^2, y^2) (y^1 - y^2)/((x^1 - x^2)^2 + (y^1 - y^2)^2),

where S^(x) and S^(y) denote the x and y derivatives of S, respectively.

We use the time step h = 0.1, circulations γ_1 = γ_2 = 0.1 and initial conditions q_0 = (1.0, 0.1, 1.0, -0.1). This setup leads to a circular leapfrogging of the two point vortices.
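As a consistency check of these expressions, the following short Python sketch (an illustration of ours, with hypothetical names; it assumes the coordinate ordering q = (x^1, y^1, x^2, y^2) introduced above) evaluates S, the momenta ϑ_i and the angular momentum P at the initial condition, and verifies that the two expressions given for P agree:

```python
import numpy as np

gamma = np.array([0.1, 0.1])          # circulations gamma_1 = gamma_2 = 0.1

def S(a, b):
    return 1.0 + a**2 + b**2          # s(r) = 1 + r

def theta(q):
    # q = (x1, y1, x2, y2); momenta conjugate to (x1dot, y1dot, x2dot, y2dot).
    x1, y1, x2, y2 = q
    return np.array([-0.5*gamma[0]*y1*S(x1, y1),
                      0.5*gamma[0]*x1*S(x1, y1),
                     -0.5*gamma[1]*y2*S(x2, y2),
                      0.5*gamma[1]*x2*S(x2, y2)])

def P(q):
    # Angular momentum associated with the rotational symmetry.
    x1, y1, x2, y2 = q
    return 0.5*(gamma[0]*(x1**2 + y1**2)*S(x1, y1)
              + gamma[1]*(x2**2 + y2**2)*S(x2, y2))

q0 = np.array([1.0, 0.1, 1.0, -0.1])  # initial condition from the text
th = theta(q0)
assert np.isclose(P(q0), q0[0]*th[1] - q0[1]*th[0] + q0[2]*th[3] - q0[3]*th[2])
```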
We make the following observations:

* All methods except the two-stage Gauss–Lobatto–IIIA and IIIB methods and all of the Gauss–Lobatto–IIIC methods are stable even without projection, although with reduced order (c.f., Section <ref>).
* For all methods except the two-stage Gauss–Lobatto–IIIA and IIIB methods and the Gauss–Lobatto–IIIC methods, we observe that the angular momentum oscillates about its initial value, where the amplitude of the oscillation seems bounded; that is, the angular momentum seems to be preserved in a nearby sense, similar to the energy with symplectic integrators.
* The standard projection worsens the result for all methods except the two-stage Gauss–Lobatto–IIIA and IIIB methods and the Gauss–Lobatto–IIIC methods, which do not crash when applying the projection; but the projected methods still show large errors and do not provide satisfactory results.
* The symplectic, symmetric and midpoint projections lead to very good results with almost all methods, restoring the original order of the methods and showing good long-time behaviour of both the energy and the angular momentum. There are some exceptions, however:
  * The symplectic projection applied to the two-stage Gauss–Lobatto–IIIA and IIIB methods and the Gauss–Lobatto–IIIC methods is just as unstable as the corresponding unprojected methods.
  * Both the symmetric and the midpoint projection applied to any of the Gauss–Lobatto–IIIC methods lead to an improved behaviour compared to the unprojected case, but exhibit a strong drift in the energy.
  * The Gauss–Lobatto–IIID and IIIE methods with an even number of stages together with the midpoint projection exhibit a rather erratic behaviour in the energy error.
  * For the symmetric projection and the higher-order methods (e.g. Gauss–Legendre with four or more stages or Gauss–Lobatto–IIID with four stages), we observe a small drift in the angular momentum, but over 1 000 000 time steps this drift is of the order of 10^-12. This drift is most likely caused by round-off errors (see Section <ref> for more details).

In summary, the numerical experiments suggest that the combination of almost all integration methods with any of the projection methods excluding the standard projection provides suitable integration algorithms for the point vortex example. Exceptions are the Gauss–Lobatto–IIIC methods with any of the projection methods, and the combination of the midpoint projection with either the two-stage Gauss–Lobatto–IIIA and IIIB methods or the Gauss–Lobatto–IIID and IIIE methods with an even number of stages.

§.§ Guiding Centre Dynamics

In plasma physics, the search for geometric integrators for guiding centre dynamics and gyrokinetics is currently of great interest. As the Hamiltonian structure of the guiding centre system is noncanonical, there are practically no standard methods which can be easily applied. As the guiding centre equations can also be obtained from a Lagrangian, the application of variational integrators seems natural and has recently been tried by various researchers <cit.>. However, the guiding centre Lagrangian is degenerate, leading to all the problems discussed so far. We will see in the following whether our projection methods can overcome these deficits.

Guiding centre dynamics <cit.> is a reduced version of charged particle dynamics, where the motion of the particle in a strong magnetic field B is reduced to the motion of the guiding centre, that is, the centre of the gyro motion of the particle about a magnetic field line. The dynamics of the guiding centre can be described in terms of only four coordinates (as compared to six for the full motion of the charged particle): the position of the guiding centre x and the parallel velocity u, where parallel refers to the direction of the magnetic field. Denoting coordinates on M by (x, u) = (x^1, x^2, x^3, u) and correspondingly coordinates on TM by (x, u, ẋ, u̇) = (x^1, x^2, x^3, u, ẋ^1, ẋ^2, ẋ^3, u̇), the guiding centre Lagrangian <cit.> can be written as

  L = (A(x) + u b(x)) · ẋ - H(x, u),    H = 1/2 u^2 + μ B(x),

where b = B/B is the unit vector of the magnetic field B = ∇ × A with A the magnetic vector potential, and μ is the magnetic moment. The first term in H denotes the parallel part of the kinetic energy and the second term the perpendicular part (parallel and perpendicular to the direction of the magnetic field).
Here, we consider the case of a magnetic field only, with vanishing electrostatic potential. Denoting a curve in M by t ↦ (x(t), u(t)), the Euler–Lagrange equations (<ref>) are computed as follows,

  ∇ϑ^T(x(t), u(t)) · ẋ(t) - ϑ̇(x(t), u(t)) = ∇H(x(t), u(t)),
  b(x(t)) · ẋ(t) = u(t),

with ϑ(x, u) = A(x) + u b(x) and the gradient denoting the derivative with respect to x. This can be rewritten in an explicit form as

  ẋ(t) = u(t) β(x(t))/(b(x(t)) · β(x(t))) + (b(x(t)) × ∇H(x(t), u(t)))/(b(x(t)) · β(x(t))),
  u̇(t) = - β(x(t))/(b(x(t)) · β(x(t))) · ∇H(x(t), u(t)),

where β = ∇ × ϑ. The noncanonical symplectic form (<ref>) is given by

  ω = 1/2 (ϑ_j,i(x, u) - ϑ_i,j(x, u)) dx^i ∧ dx^j - b_i(x) dx^i ∧ du.

Let us assume that the magnetic field B is not uniform, but that both A and B do not depend on one of the coordinates, say x^3. Then we have a symmetry for the transformation

  σ^ε : x^3 ↦ x^3 + ε,

with generating vector field

  V = dσ^ε/dε |_{ε=0} = ∂/∂x^3.

The corresponding conserved momentum map (<ref>) is

  P = ∂L/∂ẋ · V = ϑ_3(x, u),

which, depending on the actual form of ϑ, can be quite complicated and is therefore a good test for our algorithms. Although the basic integrator will preserve this toroidal momentum if the discrete Lagrangian preserves the corresponding symmetry, the projection could potentially modify its value. The projection guarantees preservation of the constraint p_n+1 = ϑ(q_n+1), but it does not guarantee that p_n+1 = p_n.

In the numerical experiments, we use toroidal coordinates x = (R, Z, φ), where R, Z and φ denote the radial, vertical and toroidal direction, respectively. For the magnetic field B and the vector potential A we will use analytic expressions following <cit.>. The vector potential is given as

  A_R = B_0 R_0 Z/(2R),    A_Z = - ln(R/R_0) B_0 R_0/2,    A_φ = - B_0 r^2/(2 q_0 R).

The magnetic field B = ∇ × A is computed as

  B_R = - B_0 Z/(q_0 R),    B_Z = B_0 (R - R_0)/(q_0 R),    B_φ = - B_0 R_0/R,    B = B_0 S/(q_0 R),

and the normalized magnetic field as

  b_R = - Z/S,    b_Z = (R - R_0)/S,    b_φ = - q_0 R_0/S.

Here, R_0 is the radial position of the magnetic axis, B_0 is the magnetic field at R_0, and q_0 is the safety factor, regarded as constant. In all of the examples, these constants are set to R_0 = 2, B_0 = 5 and q_0 = 2, respectively. The functions r and S are given by

  r = √((R - R_0)^2 + Z^2),    S = √(r^2 + q_0^2 R_0^2).

In toroidal coordinates, the functions for the momenta are

  ϑ_1(x, u) = A_R(x) + u b_R(x),    ϑ_2(x, u) = A_Z(x) + u b_Z(x),    ϑ_3(x, u) = R (A_φ(x) + u b_φ(x)),    ϑ_4(x, u) = 0,

and those for the forces are computed as

  f_1(x, u, ẋ, u̇) = ϑ_1,1(x, u) ẋ^1 + ϑ_2,1(x, u) ẋ^2 + ϑ_3,1(x, u) ẋ^3 - ∇_1 H(x, u),
  f_2(x, u, ẋ, u̇) = ϑ_1,2(x, u) ẋ^1 + ϑ_2,2(x, u) ẋ^2 + ϑ_3,2(x, u) ẋ^3 - ∇_2 H(x, u),
  f_3(x, u, ẋ, u̇) = ϑ_1,3(x, u) ẋ^1 + ϑ_2,3(x, u) ẋ^2 + ϑ_3,3(x, u) ẋ^3 - ∇_3 H(x, u),
  f_4(x, u, ẋ, u̇) = b(x) · ẋ - ∇_4 H(x, u),

with the gradient of the Hamiltonian being

  ∇_1 H(x, u) = μ ∇_1 B(x),    ∇_2 H(x, u) = μ ∇_2 B(x),    ∇_3 H(x, u) = μ ∇_3 B(x),    ∇_4 H(x, u) = u.

We consider four different initial conditions: that of a deeply trapped particle, a barely trapped particle, a barely passing particle and a deeply passing particle (c.f., Figure <ref>). The second and third case are particularly challenging. In all four cases we choose (R, Z, φ) = (2.5, 0, 0) and set the magnetic moment μ = 0.01. The parallel velocity u and the time step h are chosen as follows:

        deeply trapped   barely trapped   barely passing   deeply passing
  u     0.1              0.3375           0.3425           0.5
  h     5.0              3.0              2.5              2.5
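The field quantities above are straightforward to code up; the following Python sketch (our own, not from any guiding-centre library) evaluates A, B and b at the common initial position, treating the toroidal triples as physical components in an orthonormal frame, and checks that b is indeed the unit vector of B:

```python
import numpy as np

R0, B0, q0, mu = 2.0, 5.0, 2.0, 0.01   # constants used in the text

def fields(R, Z):
    # Analytic tokamak-like field in (R, Z, phi) components.
    r = np.sqrt((R - R0)**2 + Z**2)
    S = np.sqrt(r**2 + q0**2 * R0**2)
    A = np.array([B0*R0*Z/(2*R), -np.log(R/R0)*B0*R0/2, -B0*r**2/(2*q0*R)])
    B = np.array([-B0*Z/(q0*R), B0*(R - R0)/(q0*R), -B0*R0/R])
    Bmag = B0*S/(q0*R)
    b = np.array([-Z/S, (R - R0)/S, -q0*R0/S])
    return A, B, Bmag, b

def theta(R, Z, u):
    # Momenta; note the metric factor R on the toroidal component.
    A, B, Bmag, b = fields(R, Z)
    return np.array([A[0] + u*b[0], A[1] + u*b[1], R*(A[2] + u*b[2]), 0.0])

def H(R, Z, u):
    return 0.5*u**2 + mu*fields(R, Z)[2]

A, B, Bmag, b = fields(2.5, 0.0)       # common initial position
assert np.isclose(np.dot(b, b), 1.0)   # b is a unit vector
assert np.allclose(B, Bmag*b)          # b = B/|B|
```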
We make the following observations:

* The passing particles (Figure <ref>) seem to be the more challenging ones of the considered examples. Without projection almost no method is long-time stable, and we observe drifts in the energy as well as in the toroidal momentum. The Gauss–Lobatto–IIIA, IIIB and IIIC methods are particularly unstable, crashing after only a few or at best a few hundred time steps. This can be remedied by a smaller time step. Then, however, the computational effort is much larger than with the other methods. For the Gauss–Legendre Runge–Kutta methods we observe that for the integrators with an odd number of stages the solution has two branches, while for the integrators with an even number of stages the solution has only one branch.
* For the trapped particles (Figure <ref>), we obtain good results without projection for the Gauss–Legendre methods and the SRK3 method, except for the six-stage Gauss–Legendre method. For the barely trapped particle, this is also true for the two-stage Gauss–Legendre method. The Gauss–Lobatto–IIIA, IIIB and IIIC methods crash quickly, that is, after only about one thousand or a few thousand time steps. The Gauss–Lobatto–IIID and IIIE methods work mostly well for the deeply trapped particle but not for the barely trapped particle.
* The standard projection seems to improve the situation in case of the passing particles (left column of Figure <ref>) and to worsen the situation in case of the trapped particles (left column of Figure <ref>). In all cases, though, the results are not satisfactory, as the solution always exhibits a drift in the energy error.
* The symplectic projection (left column of Figures <ref> and <ref>) leads to good results for the trapped particles but not for the passing particles. As before, we observe that in those cases where the unprojected solution is stable, the symplectic projection is also stable, but in those cases where the unprojected solution is unstable, the symplectic projection is also unstable.
* The symmetric (right column of Figures <ref> and <ref>) and midpoint (right column of Figures <ref> and <ref>) projections lead to good results with almost all methods except for the Gauss–Lobatto–IIIC methods. For the barely trapped particle both projections are unstable for almost all Gauss–Lobatto methods. For the symmetric projection and the higher-order Gauss–Legendre method we observe a drift in the energy error, which again is only of the order of 10^-12 for 1 250 000 time steps. This drift is not due to the non-symplecticity of the symmetric projection method, as one could expect, but it is caused by round-off errors and disappears in simulations in quadruple precision (c.f., Section <ref>).
* With the Radau methods we observe exact conservation of the toroidal momentum, which constitutes one of the components of the Dirac constraint and is thus expected to be preserved, but we also observe dissipation of energy. For the two-stage Radau method these are related to large errors in the solution (Figures <ref> and <ref>). For the three-stage Radau method (Figures <ref> and <ref>) the errors in the solution are less pronounced.

The numerical experiments suggest that the symmetric and midpoint projections lead to good results with all integrators, while the symplectic projection is only stable for those integrators and those examples which are stable even without projection.
Similar to the point vortex example, the solution with the standard projection exhibits a drift in the energy and correspondingly a degradation of the numerical solution, as is expected.

§.§ Energy Drift

For several examples we observed a drift of the energy error with the higher-order variational integrators, e.g., for Gauss–Legendre Runge–Kutta methods with four or more stages. This behaviour was particularly prominent with the symmetric and midpoint projection when the energy error approached the machine accuracy. A natural question to ask is whether the origin of this phenomenon lies in the non-symplecticity of the two projection methods or whether it is just due to round-off errors. To this end, we repeated some simulations, namely those for a barely passing guiding centre particle, in quadruple precision. The resulting energy error is plotted in Figure <ref>. We also show the drift of the energy error, which is obtained by splitting the time interval of the simulation into 10 sub-intervals and computing the maximum of the absolute value of the energy error in each interval. As can be seen, no trend such as linear growth is visible, indicating that all errors due to either the non-symplecticity of the projection methods or due to round-off are much smaller than the energy error in these simulations.

<cit.> attribute this drift behaviour to inaccuracies in the Runge–Kutta coefficients and weights, leading to an only approximate satisfaction of the symplecticity conditions (<ref>). This leads to a linear growth of the energy error with time, even though one would expect a growth with the square root of the time, as follows from random walk arguments (Brouwer's law <cit.>). In order to reduce the influence of round-off errors for Runge–Kutta methods of high precision, <cit.> suggest to apply a method inspired by compensated summation, where the Runge–Kutta coefficients and weights are split into two parts,

  b_i = b_i^* + Δb_i,    a_ij = a_ij^* + Δa_ij.

Here, b_i^* and a_ij^* are approximations of b_i and a_ij which are exact to machine precision, and Δb_i and Δa_ij are corrections, e.g., to the values of b_i and a_ij in quadruple precision. With these coefficients and weights, the definition of the internal stages and the update rule, e.g., in (<ref>), are modified to

  Q_n,i = q_n + h ∑_j=1^s a_ij^* Q̇_n,j + h ∑_j=1^s Δa_ij Q̇_n,j,
  q_n+1 = q_n + h ∑_i=1^s b_i^* Q̇_n,i + h ∑_i=1^s Δb_i Q̇_n,i,

which comes at practically no cost but allows one to recover the missing accuracy in the last few digits of the solution.
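To illustrate the splitting, the following Python sketch (a schematic of ours; the paper's methods additionally enforce the Dirac constraint, which is omitted here, and a simple fixed-point iteration stands in for a proper nonlinear solver) shows one step of an s-stage implicit Runge–Kutta method with split coefficients applied to a plain ODE q̇ = rhs(q):

```python
import numpy as np

def rk_step_split(q, h, b_star, db, a_star, da, rhs, iters=50):
    # One implicit RK step with coefficients split as a_ij = a*_ij + da_ij
    # and b_i = b*_i + db_i; the two contributions are summed separately,
    # recovering accuracy in the last few digits.
    s = len(b_star)
    Qdot = np.tile(rhs(q), (s, 1))
    for _ in range(iters):                        # fixed-point iteration
        Q = q + h*(a_star @ Qdot) + h*(da @ Qdot)
        Qdot = np.array([rhs(Qi) for Qi in Q])
    return q + h*(b_star @ Qdot) + h*(db @ Qdot)
```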
Another technique, recently proposed by <cit.>, consists in choosing the coefficients a̅_ij such that the symplecticity condition (<ref>) holds exactly in floating point precision. This means that even for methods like Gauss–Legendre, where a and a̅ are usually the same, slightly different coefficients are used. If this technique is applied, e.g., with Gauss–Legendre Runge–Kutta methods with five and six stages and symmetric projection, the acquired solutions exhibit no drift in the energy error (not shown).

§.§ Convergence

We analyse the convergence behaviour of the various integrators together with the various projection methods for the planar point vortex example (c.f., Figure <ref>). The orders for the solution error and the energy error are obtained as listed in Table <ref>, while the orders for the angular momentum error are obtained as listed in Table <ref>. We see that without projection the orders of the integrators are reduced to half the order for ODE problems, or half the order plus one in the case of half the order being odd (as all considered methods are symmetric and therefore have even order). The standard, symmetric and symplectic projections restore the usual order of 2s for the Gauss–Legendre methods, 2s-2 for the Gauss–Lobatto methods and order 4 for the SRK3 method. These are mostly well known results (see e.g. <cit.>). The midpoint projection restores the usual order for the Gauss–Legendre method with one stage and the SRK3 method, that is, for the integrators for which the midpoint projection is symplectic, as well as for the Gauss–Lobatto methods. For the other Gauss–Legendre methods the order is either s+1 or s+2, depending on s being odd or even, respectively.

Interestingly, for the standard, symmetric and symplectic projection methods, the convergence order of the angular momentum error is increased compared to the convergence order for the solution and the energy error: by one for the standard projection and by two for the symmetric and symplectic projection. Without projection and with the midpoint projection, the convergence order of the angular momentum error is the same as the convergence order of the solution and the energy error. Only the SRK3 method forms an exception, as here the convergence order of the angular momentum error is decreased by one for the standard projection and by two for the midpoint projection with respect to the order of the other errors.

§.§ Poincaré Integral Invariants

It has recently been established that Poincaré integral invariants provide useful diagnostics for analysing the long-time accuracy of numerical integrators for Hamiltonian dynamics and for distinguishing between symplectic and non-symplectic integrators <cit.>. In the following we apply this diagnostics to the guiding centre example with a simple, symmetric magnetic field.

In particular, we consider the first and second Poincaré integral invariants ϑ and ω, which are given by the Lagrangian one- and two-form, respectively. The one-form ϑ is a relative integral invariant, which means that the integral

  I_1 = ∫_γ ϑ_i(q) dq^i

stays constant in time when γ is a closed loop in the configuration space M (a compact one-dimensional parametrized submanifold of M without boundary) that is advected along the solution of the dynamics. Figure <ref> shows examples of single trajectories of some samples of such a loop, as well as the temporal evolution of the whole loop following the dynamics of the guiding centre system. The two-form ω is an absolute integral invariant, which means that the integral

  I_2 = ∫_S ω_ij(q) dq^i dq^j

stays constant in time when S is any compact two-dimensional parametrized submanifold of M, advected along the solution of the dynamics. Figure <ref> shows how an initially rectangular area in phase space is advected by the dynamics of the guiding centre system.

The loop γ is parametrized by τ ∈ [0,1), so that

  I_1(t) = ∫_0^1 ϑ_i(q_(τ)(t)) (dq_(τ)^i/dτ) dτ.

In order to compute this integral, we use N equidistant points in [0,1), so that the derivatives dq_(τ)/dτ can be efficiently computed via discrete Fourier transforms. The integral is approximated with the trapezoidal quadrature rule, which has spectral convergence on periodic domains <cit.>.
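A compact implementation of this quadrature reads as follows (a Python sketch of ours; q holds the N loop samples and theta is a callable returning the components ϑ_i):

```python
import numpy as np

def loop_invariant(q, theta):
    # q: array of shape (N, d), samples of the loop at tau_n = n/N;
    # theta: callable q -> (d,) returning the one-form components.
    N = q.shape[0]
    ik = 2j*np.pi*np.fft.fftfreq(N, d=1.0/N)          # spectral d/dtau
    dq = np.fft.ifft(ik[:, None]*np.fft.fft(q, axis=0), axis=0).real
    th = np.array([theta(p) for p in q])              # (N, d)
    return np.einsum('ni,ni->n', th, dq).mean()       # trapezoidal rule
```

On a uniform periodic grid the trapezoidal rule reduces to the mean of the integrand (the interval has unit length), and the Fourier differentiation inherits the spectral accuracy mentioned above.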
The area S is parametrized by (σ, τ) ∈ [0,1]^2, so that

  I_2(t) = ∫_0^1 ∫_0^1 ω_ij(q_(σ,τ)(t)) (∂q_(σ,τ)^i/∂σ) (∂q_(σ,τ)^j/∂τ) dσ dτ.

Here, we represent the surface in terms of Chebyshev polynomials and thus use Chebyshev points for the discretization of the domain [0,1]^2. The Chebyshev polynomials allow for an extremely accurate approximation of the surface, even if the latter becomes severely deformed. Moreover, they allow for the use of the package of <cit.> for the easy and accurate computation of the derivatives and the integral. As initial conditions we use

  q_(τ)(0) = (r_x cos(2πτ), r_y sin(2πτ), r_z sin(2πτ), u_0 + u_1 cos(2πτ))

for the loop γ, with r_x = 0.5, r_y = 0.3, r_z = 0.1, u_0 = 0.5, u_1 = 0.05, and

  q_(σ,τ)(0) = (r_0 (σ - 0.5), r_0 (τ - 0.5), r_z cos(2πσ) cos(2πτ), u_0 + u_1 sin(2πσ) sin(2πτ))

for the surface S, with r_0 = 0.1, r_z = 0.1, u_0 = 0.5 and u_1 = 0.01. In both cases, the magnetic field is given by B = (0, 0, B_0 (1 + x^2 + y^2)) with B_0 = 1, and we set μ = 0.01. We use a simpler configuration than in Section <ref>, as for the magnetic field used there not all integrators are stable. We use 2000 points to discretize the loop γ and 100 × 100 points to discretize the surface S. The time step is h = 10 in both cases.

We make the following observations:

* The unprojected variational integrators preserve the Poincaré integral invariants with respect to the canonical one-form p_i dq^i and two-form dp_i ∧ dq^i (Figures <ref> and <ref>), but not the invariants with respect to the noncanonical one-form ϑ_i(q) dq^i (Figures <ref>-<ref>) and two-form ω_ij(q) dq^i ∧ dq^j (Figures <ref>-<ref>). This behaviour is expected. The variational integrators preserve the canonical forms by construction, but not the noncanonical forms, as the solution does not satisfy the Dirac constraint under which the two are equivalent. However, in cases like this one, where the solution stays close to the constraint submanifold, conservation of the canonical forms implies also approximate conservation of the noncanonical forms. This manifests itself in the error of the integral invariant appearing to be bounded. In addition, conservation of the canonical forms implies conservation of discrete noncanonical forms, which are obtained by pulling back the canonical forms with the discrete fibre derivatives (<ref>).
* For the standard projection, we observe a clear drift in both the first and second Poincaré integral invariant for the one- and two-stage Gauss–Legendre Runge–Kutta integrators (Figures <ref>-<ref> and <ref>-<ref>) as well as the SRK3 method (Figures <ref> and <ref>). For the higher-order Gauss–Legendre Runge–Kutta integrators (not shown) we see no drift over the runtime of the simulation, as the error in the Poincaré integral invariants introduced by the standard projection is smaller than machine accuracy and therefore not measurable.
* For the symmetric projection, both the first and second integral invariants are exactly preserved by all three of the considered methods (Figures <ref>-<ref> and <ref>-<ref>). We only observe some “diffusion” of the error. This is expected due to the deformation of the loop and the surface over which the integrals are evaluated, leading to a degradation of accuracy in the integrals.
* The symplectic projection preserves neither the first (Figures <ref>-<ref>) nor the second (Figures <ref>-<ref>) integral invariant exactly.
Instead, it shows a behaviour of the error of the invariants similar to that of the unprojected methods, albeit at greatly reduced amplitude (for the two-stage Gauss–Legendre Runge–Kutta integrator the error is already reduced below the machine accuracy, so the invariants appear to be preserved). This behaviour is expected, though, as the symplectic projection does not preserve the standard noncanonical one- and two-form but the modified forms (<ref>) and (<ref>). Taking the corresponding “correction terms” to the standard one- and two-form into account, we find exact conservation of both the first and the second invariant (Figures <ref> and <ref>).
* For the midpoint projection, we find exact conservation of both the first and second invariant for all three integrators (Figures <ref>-<ref> and <ref>-<ref>), even for the two-stage Gauss–Legendre Runge–Kutta method, for which the midpoint projection is not symplectic.

We have seen that the Poincaré invariant diagnostics provides a useful tool for experimentally judging the symplecticity of a numerical algorithm. We see that the unprojected variational integrators only preserve the invariants with respect to the canonical one- and two-form, but not with respect to the noncanonical forms. With the standard projection all integrators exhibit a clear drift in both invariants. With the symmetric and the midpoint projection, this drift is essentially gone, even though the former is not symplectic. Similarly, with the symplectic projection and all the integrators, we find exact conservation of the corrected invariant. Yet, these results should be taken with a grain of salt. The test case considered here is relatively simple. It is conceivable that in more complicated situations larger errors in the Poincaré invariants will be observed with the various projection methods. It is still expected, though, that the errors are bounded with all three of the symmetric, midpoint and symplectic projection methods. If it is deemed necessary, it should always be possible to reduce the errors in the invariants to the level of the machine accuracy by reducing the time step, where in most cases we expect only a mild reduction to be necessary. When the errors in the invariants are smaller than the machine accuracy, the projected methods essentially behave like any other nonlinearly implicit symplectic integrator, which needs to be solved in finite precision and is thus never exactly symplectic (see <cit.> for details).

§.§ Symplectic Projection

For many integrators we observed that the symplectic projection is unstable, even when it does not amount to the post-projection method. The reason for this is that in general there are no bounds on the amplitude of the Lagrange multiplier λ that prevent the unprojected solution from moving further and further away from the constraint submanifold Δ. This behaviour is illustrated in Figure <ref> and prominently exemplified by the passing guiding centre particles as shown in Figure <ref>. Here, the Lagrange multiplier oscillates between positive and negative values while its amplitude grows without bound. Eventually, the projection step becomes larger than the integration step and the simulation becomes unstable. A deeper understanding of this behaviour is very much desirable. One could, for example, apply backward error analysis and try to understand under which conditions the modified equation for the evolution of λ is stable. There are also indications of a connection with index reduction of the continuous problem.
Such investigations, however, are left for future work.

§ SUMMARY

We have devised several projection methods and analyzed their influence on the symplecticity and long-time stability of variational integrators applied to degenerate Lagrangian systems. The corresponding system of equations constitutes a system of differential-algebraic equations of index two, for which standard symplectic integrators like Gauss–Legendre or Gauss–Lobatto Runge–Kutta methods are well known to deliver poor performance. In particular, their order of accuracy is reduced severely compared to their order when applied to ordinary differential equations.

This underperformance can be remedied by the application of appropriate projection methods. In the context of symplectic or variational integrators, this approach raises the question of what influence such a projection method has on the symplecticity and long-time stability of the resulting integrator. While for simple problems all projection methods, and even some of the unprojected integrators, lead to long-time stable simulations, the only universally stable methods have been found to be the symmetric and midpoint projection. In most examples, the standard projection admits a drift in the solution and thus in the energy error, rendering the simulation unstable in finite time. When exactly that happens depends strongly on the order of the underlying numerical integrator and thus on the error in the algebraic constraint. Among the two stable projection methods, the symmetric projection is usually preferable, as the midpoint projection leads to integration methods of reduced order, whereas the symmetric projection restores the order of the underlying numerical integrator. On the other hand, the midpoint projection is exactly symplectic for the midpoint and SRK3 methods. Even though the symmetric projection method is not exactly symplectic, we found the error in the symplecticity condition to be so small that it has no practical influence on the long-time stability of the projected variational integrator.

We also analyzed a modification of the symmetric projection method that is symplectic in a generalized sense, namely in that it preserves a generalized symplectic structure on a larger space. While for some problems this method leads to similarly good results as the symmetric method, it fails for others. The reason for this failure is that the Lagrange multiplier can grow without bounds. If that happens, the simulation becomes unstable. However, in those cases where the symplectic projection can be applied, it is preferable over the symmetric projection as it has lower computational cost. For the symplectic projection, the underlying numerical integrator and the projection step can be performed independently, whereas in the symmetric case the whole system has to be solved at once.

§ ACKNOWLEDGMENTS

Helpful discussions with Joshua Burby, Leland Ellison, Melvin Leok and Tomasz Tyranowski are gratefully acknowledged. Moreover, the author is indebted to Omar Maj and Hiroaki Yoshimura for inspiring conversations as well as reading a draft of the paper. The author has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska–Curie grant agreement No 708124. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
Thrackles: An Improved Upper Bound

Radoslav Fulek (IST Austria, Am Campus 1, Klosterneuburg 3400, Austria; [email protected]; the author gratefully acknowledges support from Austrian Science Fund (FWF): M2281-N35) and János Pach (École Polytechnique Fédérale de Lausanne, Station 8, Lausanne 1015, Switzerland, and Rényi Institute, Hungarian Academy of Sciences, P.O. Box 127, Budapest, 1364, Hungary; [email protected]; supported by Swiss National Science Foundation Grants 200021-165977 and 200020-162884)

A thrackle is a graph drawn in the plane so that every pair of its edges meet exactly once: either at a common end vertex or in a proper crossing. We prove that any thrackle of n vertices has at most 1.3984n edges. Quasi-thrackles are defined similarly, except that every pair of edges that do not share a vertex are allowed to cross an odd number of times. It is also shown that the maximum number of edges of a quasi-thrackle on n vertices is (3/2)(n-1), and that this bound is best possible for infinitely many values of n.

§ INTRODUCTION

Conway's thrackle conjecture <cit.> is one of the oldest open problems in the theory of topological graphs. A topological graph is a graph drawn in the plane so that its vertices are represented by points and its edges by continuous arcs connecting the corresponding points so that (i) no arc passes through any point representing a vertex other than its endpoints, (ii) any two arcs meet in finitely many points, and (iii) no two arcs are tangent to each other. A thrackle is a topological graph in which any pair of edges (arcs) meet precisely once. According to Conway's conjecture, every thrackle of n vertices can have at most n edges. This is analogous to Fisher's inequality <cit.>: if every pair of edges of a hypergraph H have precisely one point in common, then the number of edges of H cannot exceed the number of vertices.

The first linear upper bound on the number of edges of a thrackle, in terms of the number of vertices n, was established in <cit.>. This bound was subsequently improved in <cit.> and <cit.>, with the present record, 1.4n, held by Goddyn and Xu <cit.>, which also appeared in the master thesis of the second author <cit.>. One of the aims of this note is to show that this latter bound is not best possible.

Theorem 1. Any thrackle on n > 3 vertices has at most 1.3984n edges.

Several variants of the thrackle conjecture have been considered. For example, Ruiz-Vargas, Suk, and Tóth <cit.> established a linear upper bound on the number of edges even if two edges are allowed to be tangent to each other. The notion of generalized thrackles was introduced in <cit.>: they are topological graphs in which any pair of edges intersect an odd number of times, where each point of intersection is either a common endpoint or a proper crossing. A generalized thrackle in which no two edges incident to the same vertex have any other point in common is called a quasi-thrackle. We prove the following.

Theorem 2. Any quasi-thrackle on n vertices has at most (3/2)(n-1) edges, and this bound is tight for infinitely many values of n.

The proof of Theorem <ref> is based on a refinement of parity arguments developed by Lovász et al. <cit.>, by Cairns–Nikolayevsky <cit.>, and by Goddyn–Xu <cit.>, and it heavily uses the fact that two adjacent edges cannot have any other point in common.
Therefore, one may suspect, as the authors of the present note did, that Theorem <ref> generalizes to quasi-thrackles. Theorem <ref> refutes this conjecture.

§ TERMINOLOGY

Given a topological graph G in the projective or Euclidean plane, if it leads to no confusion, we will make no distinction in notation or terminology between its vertices and edges and the points and arcs representing them. A topological graph with no crossing is called an embedding. A connected component of the complement of the union of the vertices and edges of an embedding is called a face. A facial walk of a face F is a closed walk in G obtained by traversing a component of the boundary of F. (The boundary of F may consist of several components.) The same edge can be traversed by a walk at most twice; the length of the walk is the number of edges counted with multiplicities. The edges of a walk form its support.

A pair of faces, F_1 and F_2, in an embedding are adjacent (or neighboring) if there exists at least one edge traversed by a facial walk of F_1 and a facial walk of F_2. In a connected graph, the size of a face is the length of its (uniquely determined) facial walk. A face of size k (resp., at least k or at most k) is called a k-face (resp., k^+-face and k^--face).

A cycle of a graph G is a closed walk along edges of G without vertex repetition. (To emphasize this property, we sometimes talk about “simple” cycles.) A cycle of length k is called a k-cycle. A simple closed curve on a surface is said to be one-sided if its removal does not disconnect the surface. Otherwise, it is two-sided. An embedding of a graph G in the projective plane is called a parity embedding if every odd cycle of G is one-sided and every even cycle of G is two-sided. In particular, in a parity embedding every face is of even size.

§ PROOF OF THEOREM <REF>

For convenience, we combine two theorems from <cit.> and <cit.>.

Corollary. A graph G is a generalized thrackle if and only if G admits a parity embedding in the projective plane. In particular, any bipartite thrackle can be embedded in the (Euclidean) plane.

Proof. If G is a non-bipartite generalized thrackle, then, by a result of Cairns and Nikolayevsky <cit.>, it admits a parity embedding in the projective plane. On the other hand, Lovász, Pach, and Szegedy <cit.> showed that a bipartite graph is a generalized thrackle if and only if it is planar, in which case it can be embedded in the projective plane so that every cycle is two-sided.

Lemma. A thrackle does not contain more than one triangle.

Proof. Refer to Fig. <ref>. By Lemma <cit.>, every pair of triangles in a thrackle share a vertex. A pair of triangles cannot share an edge, since otherwise they would form a 4-cycle, and a thrackle cannot contain a 4-cycle (it is easy to check that a 4-cycle is not a thrackle).

Let T_1 = vzy and T_2 = vwu be two triangles that have a vertex v in common. By Lemma <cit.>, the two closed curves representing T_1 and T_2 properly cross each other at v. Hence, the closed Jordan curve C_1 corresponding to T_1 contains w in its exterior and u in its interior. Thus, the drawing of T_1 ∪ {uv, uw} in a thrackle is uniquely determined up to isotopy and the choice of the outer face. If we traverse the edge wu from one endpoint to the other, we encounter its crossings with the edges vy, yz, and zv in this or in the reversed order. Indeed, the crossings between wu and vz, and between wu and vy, must be in different connected components of the complement of the union of zy, vw, and vu in the plane.
By symmetry, the crossing of zy and wu lies, on both zy and wu, between the other two crossings. Now, a simple case analysis reveals that this is impossible in a thrackle. We obtain a contradiction, which proves the lemma.

Next, we prove Theorem <ref> for triangle-free graphs. Our proof uses a refinement of the discharging method of Goddyn and Xu <cit.>.

Lemma. Any triangle-free thrackle on n > 3 vertices has at most 1.3984(n-1) edges.

Proof. Since no 4-cycle can be drawn as a thrackle, the lemma holds for graphs with fewer than 5 vertices. We claim that a vertex-minimal counterexample to the lemma is (vertex) 2-connected. Indeed, let G = G_1 ∪ G_2, where |V(G_1) ∩ V(G_2)| < 2 ≤ |V(G_1)|, |V(G_2)|. Suppose that |V(G_1)| = n'. By the choice of G, we have

  |E(G)| = |E(G_1)| + |E(G_2)| ≤ 1.3984(n'-1) + 1.3984(n-n') = 1.3984(n-1).

Thus, we can assume that G is 2-connected. Using Corollary <ref>, we can embed G as follows. If G is not bipartite, we construct a parity embedding of G in the projective plane. If G is bipartite, we construct an embedding of G in the Euclidean plane. Note that in both cases, the size of each face of the embedding is even.

The following statement can be verified by a simple case analysis; it was removed from the short version of this note.

Proposition. In the parity embedding of a 2-connected thrackle in the projective plane, the facial walk of every 8^--face is a cycle, that is, it has no repeated vertex.

Proof. If G is bipartite, the claim follows by the 2-connectivity of G and by the fact that the 4-cycle is not a thrackle. Suppose G is not bipartite. Then G cannot contain a 4^--face, since we excluded triangles (by the hypothesis of the lemma) and 4-cycles (which are not thrackles). We can also exclude any 5-face F, because either the facial walk of F is a 5-cycle, which is impossible in a parity embedding, or the facial walk contains a triangle. Analogously, if F is a 7-face, its facial walk cannot be a cycle (with no repeated vertex). Hence, the support of F must contain a 5-cycle. Using the fact that G has no triangle and no 4-cycle, we conclude that F must be incident to a cut-vertex, a contradiction.

It remains to deal with 6-faces and 8-faces. If the facial walk of a 6-face F is not a 6-cycle, then its support is a path of length three or a 3-star. In this case, G is a tree, contradicting our assumption that G is 2-connected. Thus, the facial walk of every 6-face must be a 6-cycle. The support of the facial walk of an 8-face F cannot contain a 5-cycle, because in this case it would also contain a triangle. Therefore, the support of F must contain a 6-cycle. The remaining (2-sided) edge of F cannot be a diagonal of this cycle (as then it would create a triangle or a 4-cycle), and it cannot be a “hanging” edge (because this would contradict the 2-connectivity of G). This completes the proof of the proposition.

To complete the proof of Lemma <ref>, we use a discharging argument. Since G is embedded in the projective plane, by Euler's formula we have

  e + 1 ≤ n + f,

where f is the number of faces and e is the number of edges of the embedding. We put a charge d(F) on each face F of G, where d(F) denotes the size of F, that is, the length of its facial walk. An edge is called bad if it is incident to a 6-face. Let F be an 8^+-face. Through every bad edge uv of F, we discharge from its charge a charge of 1/6 to the neighboring 6-face on the other side of uv. We claim that every face ends up with a charge of at least 7. Indeed, we proved in <cit.> that in a thrackle no pair of 6-cycles can share a vertex.
By Proposition <ref>, G has no 8-face with 7 bad edges. Furthermore, every 8^--face is a 6-face or an 8-face, since in a parity embedding there is no odd face, and 4-cycles are not thrackleable. Unless G has 12 vertices and 14 edges, no two 8^+-faces that share an edge can end up with a charge of precisely 7.

An 8-face F with charge 7 must be adjacent to a pair of 6-faces, F_1 and F_2. By Proposition <ref>, the facial walks of F, F_1, and F_2 are cycles. Since G does not contain a cycle of length 4, either both F_1 and F_2 share three edges with F, or one of them shares two edges with F and the other one four edges. Hence, any 8-face F' adjacent to F shares an edge uv with F whose both endpoints are incident to a 6-face. If F' has charge 7, both edges adjacent to uv along the facial walk of F' must be incident to a 6-face. By the aforementioned result from <cit.>, these 6-faces must be F_1 and F_2. By Proposition <ref>, the facial walk of F' is an 8-cycle. Since F' shares 6 edges with F_1 and F_2, we obtain that G has only 4 faces F, F', F_1, and F_2. In the case where G has 12 vertices and 14 edges, the lemma is true.

By Proposition <ref>, if a pair of 8^+-faces share an edge, at least one of them ends up with a charge of at least 43/6. Let F be such a face. We can further discharge 1/24 from the charge of F to each neighboring 8^+-face. After this step, the remaining charge of F is at least 43/6 - 3·(1/24) = 7 + 1/24, which is possibly attained only by an 8-face that shares 5 edges with 6-faces. Every 9^+-face F' has charge at least d(F') - d(F')/6 ≥ 7 + 1/2. In the last discharging step, we discharge through each bad edge of an 8^+-face an additional charge of 1/288 to the neighboring 6-face. At the end, the charge of every face is at least 7 + 1/24 - 6·(1/288) = 7 + 1/48. Since the total charge ∑_F d(F) = 2e has not changed during the procedure, we obtain 2e ≥ (7 + 1/48) f. Combining this with (<ref>), we conclude that

  e ≤ (7 + 1/48)/(5 + 1/48) (n - 1) ≤ 1.3984(n-1),

which completes the proof of Lemma <ref>.

Now we are in a position to prove Theorem <ref>.

Proof of Theorem <ref>. If G does not contain a triangle, we are done by Lemma <ref>. Otherwise, G contains a triangle T. We remove an edge of T from G and denote the resulting graph by G'. According to Lemma <ref>, G' is triangle-free. Hence, by Lemma <ref>, G' has at most 1.3984(n-1) edges, and it follows that G has at most 1.3984(n-1) + 1 < 1.3984n edges.

Without introducing any additional forbidden configuration, our methods cannot lead to an upper bound in Theorem <ref> better than (22/16)n = 1.375n. This is a simple consequence of the next lemma. Let H(k) be a graph obtained by taking the union of a pair of vertex-disjoint paths P = p_1…p_6k and Q = q_1…q_6k on 6k vertices each; edges p_iq_i for all i ≡ 0 (mod 3); edges p_iq_6k-i for all i ≡ 2 (mod 3); and paths p_ip_i'p_i''q_i, for all i ≡ 1 (mod 3), which are internally vertex-disjoint from P, Q, and from one another.

Lemma. For every k ∈ ℕ, the graph H(k) has 16k vertices and 22k-2 edges, it contains no two 6-cycles that share a vertex or are joined by an edge, and it admits a parity embedding in the projective plane.

Proof. For every k, H(k) has 12k-4 vertices of degree three and 4k+4 vertices of degree two. Thus, H(k) has 3(6k-2) + 4k + 4 = 22k-2 edges. A projective embedding of H(k) with the required property is depicted in Figure <ref>. Using the fact that all 6-cycles are facial, the lemma follows.
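The vertex and edge counts in the lemma are easy to verify mechanically. The following Python snippet (ours, purely illustrative; the labels 'a' and 'b' stand for the subdivision vertices p_i' and p_i'') builds the edge list of H(k) directly from the definition above and checks both counts:

```python
def H_edges(k):
    # Paths P = p_1...p_6k and Q = q_1...q_6k.
    E = [(('p', i), ('p', i + 1)) for i in range(1, 6*k)]
    E += [(('q', i), ('q', i + 1)) for i in range(1, 6*k)]
    for i in range(1, 6*k + 1):
        if i % 3 == 0:
            E.append((('p', i), ('q', i)))          # edges p_i q_i
        elif i % 3 == 2:
            E.append((('p', i), ('q', 6*k - i)))    # edges p_i q_{6k-i}
        else:                                       # i = 1 (mod 3)
            E += [(('p', i), ('a', i)), (('a', i), ('b', i)),
                  (('b', i), ('q', i))]             # path p_i p_i' p_i'' q_i
    return E

for k in (1, 2, 3):
    E = H_edges(k)
    V = {v for e in E for v in e}
    assert len(V) == 16*k and len(E) == 22*k - 2
```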
It was stated without proof in <cit.> that the thrackle conjecture has been verified by computer up to n = 11. Provided that this is true, the upper bound in Theorem <ref> can be improved to

  e ≤ (7 + 1/5)/(5 + 1/5) (n-1) ≤ 1.3847(n-1).

This follows from the fact that in this case an 8-face and a 6-face can share at most one edge, and therefore we can maintain a charge of at least 7 + 1/5 on every face. Indeed, we change our discharging procedure so that we simply send to every 6-face a charge of 1/5 from every neighboring face. Since an 8-face has at most four 6-faces as its neighbors, every 8^+-face also ends up with a charge of at least 7 + 1/5, as required.

§ PROOF OF THEOREM <REF>

It is known <cit.> that C_4, a cycle of length 4, can be drawn as a generalized thrackle. Hence, our next result implies that the class of quasi-thrackles forms a proper subclass of the class of generalized thrackles.

Lemma. C_4 cannot be drawn as a quasi-thrackle.

Proof. Suppose for contradiction that C_4 = uvwz can be drawn as a quasi-thrackle; see Fig. <ref>. Assume without loss of generality that in the corresponding drawing the path uvw does not intersect itself. Let c_1 denote the first crossing along uz (with vw) on the way from u. Let c_2 denote the first crossing along wz (with uv) on the way from w. Let C_u denote the closed Jordan curve consisting of uv; the portion of uz between u and c_1; and the portion of vw between v and c_1. Let C_w denote the closed Jordan curve consisting of vw; the portion of wz between w and c_2; and the portion of uv between v and c_2.

Observe that z and w are not contained in the same connected component of the complement of C_u in the plane. Indeed, wz crosses C_u an odd number of times, since it can cross it only in uv. Let 𝒟_u denote the connected component of the complement of C_u containing z. By a similar argument, z and u are not contained in the same connected component of the complement of C_w in the plane. Let 𝒟_w denote the connected component of the complement of C_w containing z.

Since z ∈ 𝒟_u ∩ 𝒟_w, we have 𝒟_u ∩ 𝒟_w ≠ ∅. On the other hand, C_u and C_w do not cross each other, but they share a Jordan arc containing neither u nor w. If 𝒟_u ⊂ 𝒟_w (or 𝒟_w ⊂ 𝒟_u), then u and z are both in 𝒟_w (or w and z are both in 𝒟_u), which is impossible. Otherwise, u and z are both in 𝒟_w, and at the same time w and z are both in 𝒟_u, which is again a contradiction.

Let G(k) denote a graph consisting of k pairwise edge-disjoint triangles that intersect in a single vertex. The drawing of G(3) as a quasi-thrackle, depicted in Figure <ref>, can easily be generalized to any k. Therefore, we obtain the following.

Lemma. For every k, the graph G(k) can be drawn as a quasi-thrackle.

In view of Lemma <ref>, G(k) cannot be drawn as a thrackle for any k > 1. Thus, the class of thrackles is a proper subclass of the class of quasi-thrackles.

Cairns and Nikolayevsky <cit.> proved that every generalized thrackle of n vertices has at most 2n-2 edges, and that this bound cannot be improved. The graphs G(k) show that for n = 2k+1, there exists a quasi-thrackle with n vertices and with (3/2)(n-1) edges. According to Theorem <ref>, no quasi-thrackle with n vertices can have more edges.

Proof of Theorem <ref>. Suppose that the theorem is false, and let G be a counterexample with the minimum number n of vertices. We can assume that G is 2-vertex-connected. Indeed, otherwise G = G_1 ∪ G_2, where |V(G_1) ∩ V(G_2)| ≤ 1 and E(G_1) ∩ E(G_2) = ∅. Suppose that |V(G_1)| = n'.
By the choice of G, we have |E(G)| = |E(G_1)| + |E(G_2)| ≤ (3/2)(n'-1) + (3/2)(n-n') = (3/2)(n-1), so G is not a counterexample.

Suppose first that G is bipartite. By Corollary <ref>, G (as an abstract graph) can be embedded in the Euclidean plane. By Lemma <ref>, all faces in this embedding are of size at least 6. Using a standard double-counting argument, we obtain that 2e ≥ 6f, where e and f are the number of edges and faces of G, respectively. By Euler's formula, we have e + 2 = n + f. Hence, 6e + 12 ≤ 6n + 2e, and rearranging the terms we obtain e ≤ (3n-6)/2 = (3/2)(n-2), contradicting our assumption that G is a counterexample.

If G is not bipartite, then, according to Corollary <ref>, it has a parity embedding in the projective plane. By Lemma <ref>, G contains no 4-cycle. It does not have loops and multiple edges; therefore, the embedding has no 4-face. G cannot have a 5-face, because the facial walk of a 5-face would be either a one-sided 5-cycle (which is impossible), or it would contain a triangle and a cut-vertex (contradicting the 2-connectivity of G). The embedding of G also does not have a 3-face, since its facial walk would be a one-sided triangle, which is likewise impossible. By Euler's formula, e + 1 = n + f and, as in the previous paragraph, we conclude that 6e + 6 ≤ 6n + 2e, the desired contradiction.
Intensification of tilted atmospheric vortices by asymmetric diabatic heating

Tom Dörffel, Ariane Papke, Rupert Klein (FB für Mathematik und Informatik, Freie Universität Berlin, Berlin, Germany; [email protected]), Natalia Ernst (Zuse Institute Berlin, Berlin, Germany), Piotr K. Smolarkiewicz (National Center for Atmospheric Research, Boulder, CO 80307, USA)

<cit.> studied the nonlinear dynamics of strongly tilted vortices subject to asymmetric diabatic heating by asymptotic methods. They found, e.g., that an azimuthal Fourier mode 1 heating pattern can intensify or attenuate such a vortex depending on the relative orientation of tilt and heating asymmetries. The theory originally addressed the gradient wind regime which, asymptotically speaking, corresponds to vortex Rossby numbers of order unity in the limit. Formally, this restricts the applicability of the theory to rather weak vortices in the near equatorial region. It is shown below that said theory is, in contrast, uniformly valid for vanishing Coriolis parameter and thus applicable to vortices up to low hurricane strengths. In addition, the paper presents an extended discussion of the asymptotics as regards their physical interpretation and their implications for the overall vortex dynamics. The paper's second contribution is a series of three-dimensional numerical simulations examining the effect of different orientations of dipolar heat release on idealized tropical cyclones. Comparisons with numerical solutions of the asymptotic equations yield evidence that supports the original predictions. In addition, the influence of asymmetric diabatic heat release on the time evolution of centerline tilt is analysed further, and a steering mechanism based on the orientation of the heating dipole is revealed.

§ INTRODUCTION

Atmospheric vortex intensification and the associated evolution of vortex structure remain a topic of intense investigations. As <cit.> point out in their review article, intricate interactions of boundary layer processes, moist thermodynamics, multiscale stochastic deep convection, and the vortex-scale fluid dynamics produce the observed, sometimes extremely rapid, intensification of incipient hurricanes. They also emphasize that, despite the valuable insights that have been gained in many studies of idealized axisymmetric flow models, asymmetries of vortex structure, convection patterns, and boundary layer structure have been observed to be important for vortex intensification in real-life situations.

This study focuses on the principal response mechanisms of tropical cyclone-like (TC) atmospheric vortices to asymmetric diabatic heat release, following the theory of <cit.>. We therefore analyze both the structure and intensity of the bulk vortex above the boundary layer under the influence of different configurations of asymmetric heating. From <cit.>, among others, we adopt the point of view that latent heat release from condensation can be modeled, with limitations, by external diabatic heat sources in dry air. In the cited studies, non-axisymmetric heating patterns were shown to have at most a small effect on vortex strength within the framework of linearizations about an axisymmetric upright vortex. These results of linear theory were corroborated in <cit.> by comparison with fully nonlinear three-dimensional simulations.
By both, analytical and numerical examination, we will see that the particular flow structure of a strongly tilted vortex allows for a leading order intensification mechanism based on asymmetric heating that cannot be captured for linearized (weak) vortex tilts.Investigating incipient hurricanes that develop in the tropical Atlantic, <cit.> revealed, that such vortices can exhibit very strong tilt. Thus, for instance, the locations of the vortex center at heights equivalent to the 925 and 200 pressure levels are located about 200 apart, e.g., in <cit.> and <cit.>. This amounts to an overall vortex tilt at a scale comparable to the vortex diameter, , to a situation that clearly does not allow for linearizations about an upright vortex. In fact, in characterizing the wind field of Hurricane Norbert, <cit.> already utilized the concept of a height-dependent vortex center, i.e., of time dependent centerline, and this is one of the key structural aspects in the analysis of <cit.>, which we revisit in this paper.<cit.> analyzed the dynamics of such strongly tilted atmospheric vortices in the gradient wind regime by matched asymptotic expansions. They obtained a closed coupled set of evolution equations for the primary circulation structure and the vortex centerline, and demonstrated that in a strongly tilted vortex symmetric and asymmetric heating patterns can have a comparable impact on vortex intensity. As by its very definition the gradient wind regime is restricted to vortex Rossby numbers of order unity, this theory has thus far been considered applicable only to rather weak vortices with intensities relatively far from the interesting stage of the tropical storm/hurricane transition <cit.>.To allow for vortices in this transition regime, we consider here in the first part of the paper the dynamics of meso-scale atmospheric vortices ∼100 that extend vertically across the depth of the troposphere ∼10 but feature large vortex Rossby number ≫ 1. We use the asymptotic techniques introduced by <cit.> and recycle many of their technical steps. As indicated in fig. <ref>, we assume vortices with nearly axisymmetric core structure at each horizontal level, and we allow for strong vortex tilt such that the vortex centers observed at different heights may be displaced horizontally relative to each other by distances comparable to the vortex core size . One of the main findings of <cit.> was the following evolution equation for the primary circulation described by the axisymmetric leading-order circumferential velocity, , valid for time scales large compared to the vortex turnover time scale, u_θt + w_0u_θz + u_r,00(u_θr + u_θ/r + f_0 ) = - u_r,*(u_θ/r + f_0 ). Here (t,r,z) are the appropriately rescaled time, radial, and vertical coordinates, f_0 is the Coriolis parameter, and w_0 and u_r,00 are the axisymmetric components of the vertical and radial velocities induced by the axisymmetric mean heating patterns, also properly rescaled. The apparent radial velocity u_r,* results from an interaction of the vortex tilt with the asymmetric first circumferential Fourier mode of the vertical velocity. In particular, u_r,* = 1/2π∫_-π^π w·zdθ, where (t,z) is the time dependent vortex centerline position at height z (see fig. 
<ref>), w is the full vertical velocity, and = cos(θ) + sin(θ) is the radial unit vector of a horizontal polar coordinate system attached to the centerline.X(t,z) itself is governed by the centerline equation: Xt =u_s + ( X ·∇)u_s + ln1/δ k × M_1 +k × u_s(t,z) expresses the background wind profile, δ is a small number according to the asymptotic scaling, k the vertical unit vector, M_1 is a weighted curvature measure of X andevaluates Fourier-1 modes of vertical velocity resulting from both, diabatic heating and adiabatic balances within a tilted vortex. In the adiabatic case and without vertical wind shear equation (<ref>) simplifies to a linear Schrödinger-like equation exhibiting undamped precession of eigenmodes. More details on the expressions for M_1 andfollow in the further course of this article.The main findings of present work are: *The evolution equation from eq:TempevoluthetaIntro is uniformly valid as f_0 → 0 so that it holds, in particular, also for ≫ 1, , for vortices of hurricane strength. In fact, we argue the for ≥1 the structure of the leading-order equations does not change. *The mechanism of vortex spin-up by asymmetric heating of a tilted vortex is traced back analytically to an effective circumferential mean vertical mass flux divergence that arises when the first Fourier mode diabatic heating and the vortex tilt correlate negatively. * Asymmetric heating introduces a forcing of the vortex motion which can accelerate/decelerate the centerline precession and/or increase/decrease its tilt depending on the relative orientation of tilt and heating dipole. *Equation eq:TempevoluthetaIntro can be recast into a balance equation for kinetic energy, = ρ_0^2/2, (r 0pt9pt)_t + (r u_r,00[ + p]0pt9pt)_r + (r _0 [ + p]0pt9pt)_z =rρ_0/N^2^2(Θ· Q_Θ)_0 in line with the theory by <cit.> for available potential energy (APE) generation.Here p is the relevant pressure perturbation, Θ, Q_Θ are the potential temperature perturbations and the diabatic heating, respectively, and (·)_0 corresponds to the axisymmetric mean. In the current case, it encodes the correlation of potential temperature perturbation and diabatic heating. N andare the Brunt-Väisälä frequency and the background potential temperature stratification, respectively. Equation eq:KineticEnergyBudget states that, except for a conservative redistribution of kinetic energy due to advection and the work of the pressure perturbation, p̃, positive correlations of diabatic sources and potential temperature perturbations generate the potential energy available for increasing the kinetic energy of the vortex.<cit.> study the effects of asymmetric diabatic heating on vortex strength in a linearized model. One of their conclusions is that “... purely asymmetric heating generally leads to vortex weakening, usually in terms of the symmetric energy, and always in terms of the low-level wind.” The present theory shows that this conclusion does not hold up in case of a strongly tilted vortex, but that in this case symmetric and suitably arranged asymmetric heating have vortex intensification efficiencies of the same order of magnitude. *The theory compares favorably with three-dimensional numerical simulations based on the compressible Euler equations. To arrive at these results, we first recount the governing equations and the principles of our analytical approach in section <ref>, and then revisit the derivations by <cit.>. 
A discussion of the scaling regime is given in section <ref> to investigate the influence of the Coriolis effect (item (<ref>)), and the asymptotic vortex core expansion is carried out in section <ref> analytically supporting the physical interpretation of the asymmetric intensification mechanism given in item (<ref>). In section <ref> we establish the kinetic energy balance of item (<ref>). Section <ref> presents results of the theory in comparison with three-dimensional computational simulations to corroborate item (<ref>). Conclusions and an outlook are provided in section <ref>.§ DIMENSIONLESS GOVERNING EQUATIONS AND DISTINGUISHED LIMITS§.§ Governing equationsThe dimensionless inviscid rotating compressible flow equations for an ideal gas with constant specific heat capacities in the beta plane approximation form the basis for the subsequent asymptotic analysis:t + · + w z + 1/^2 1/ρ p + 1/(1 + βy)k × = 0,wt + ·w + wwz + 1/^21/ρpz = -1/^2 , ρt + ·ρ + w ρz + ρ·+ ρwz = 0,Θt + ·Θ + wΘz = Q_Θ , ρΘ= p^1/γHere p,ρ,Θ,,w are rescaled pressure, density, potential temperature, and the horizontal and vertical velocities, and γ is the specific heat ratio.The three-dimensional gradient is =+∂/∂ z with the horizontal gradient =∂/∂ x +∂/∂ y, the zonal, meridional, and vertical coordinates (x,y,z), and the related unit vectors (,,). Finally, t is the time variable and Q_Θ is a diabatic source term.Table <ref> lists general characteristics of the near-tropical atmosphere. Together with the combined values in Table <ref> they form reference values for non-dimensionalization. Let an asterisk denote dimensional quantities, then the unknowns and coordinates in eq:CompressibleFlowEquationsDimless are p = p^*/p, ρ = ρ^*/ρ,(, w) = (^* , w^*)/u,(, z) = (^*,z^*)/,t = t^*u/ . Note that u/ is an estimate of the large-scale thermal wind shear, and =x +y is the horizontal coordinate vector.In deriving the dimensionless equations eq:CompressibleFlowEquationsDimless using the quantities from tables <ref> and <ref> the Mach, internal wave Froude, and Rossby numbers, and the β-parameter [ =u/√( R T)≈ 3.4· 10^-2; =u/N≈ 1.1· 10^-1 ] , [ = u/f ≈13.3; = β/f ≈ 2.7·10^-3 ] emerge naturally. These are replaced with functions of a single small expansion parameter ≪ 1 through the distinguished limits = ^3/2,= /N,= 1/ f,= ^3 β, in line with the multiscale asymptotic modelling framework of <cit.>. Here (N, f, β) = 1 as → 0, with concrete values N = 0.91 ,f = 0.75 , β = 2.7 derived from eq:GeophysicalParameters for = ^2/3 = 0.105. Replacing the characteristic numbers according to (<ref>) we gett + · + w z + 1/^3 1/ρ p + (f + ^3βy)k × = 0,wt + ·w + wwz + 1/^31/ρpz = - 1/^3 , ρt + ·ρ + w ρz + ρ·+ ρwz = 0,Θt + ·Θ + wΘz = Q_Θ , ρΘ= p^1/γ .Whereas f and β appear explicitly in (<ref>), N characterizes the background stratification of potential temperature and will be invoked below where we define the initial conditions for the vortex flow.Equations eq:AsymptoticFlowEquations will form the basis for the subsequent asymptotic analysis for ≪ 1, although much of the expansions will proceed in terms of the small parameter = √() .§ SCALING REGIME FOR LARGE VORTEX ROSSBY NUMBER AND STRONG TILT§.§ Vortex core size, intensity, and evolution time scaleVortex core sizes of 50 to 200 are typical for tropical storms and hurricanes, and the storm/hurricane threshold lies at wind speeds of 30 <cit.>. 
With ^2 ≡∼ 1/10, ∼10, and u∼10, these data correspond well with ∼/^2 ≈100 , ∼u/≈33,p_ v∼^4p ,for a characteristic vortex core size , a typical wind speed, and the associated depression in the vortex core, respectively. Note that these scalings deviate from those adopted by <cit.>, who considered systematically larger radii of the order ∼/^3 needed for direct matching to a quasi-geostrophic large scale outer flow. From their work we recall, however, that the vortex core structure and tilt develop on a time scale t_ v that is by 1/^2 longer than the vortex core turnover time scale t_ to =/. Thus, in view of eq:CoreScalings, we will follow the vortex core evolution on the time scale t_ v = t_ to/^2 = 1/^2/^2/u = t/^3∼10 . The scalings in eq:CoreScalings and eq:TimeScales include the regime of “rapid intensification”, defined by NOAA's National Hurricane Center [http://www.nhc.noaa.gov/aboutgloss.shtml] to denote maximum wind accelerations of 30kt∼15 in 24.Also, the adopted scalings describe a vortex in the cyclostrophic regime since /u^2^2/r = 1whereas/u_θ^2f = 1/u/u =, , the Coriolis term is subordinate to the centripetal acceleration in the horizontal momentum balance in this regime. Accordingly, the vortex Rossby number is large, _ v = u_ max/f_0 = u_ max/u/ = ^-2-1+2 = 1/ .§.§ Co-moving coordinates for a strongly tilted vortexFollowing <cit.>, we resolve the flow dynamics on the vortex precession and core evolution time scale t_ v from eq:TimeScales. The appropriate time coordinate is = ^3 t. For the core structure analysis we introduce vortex centered horizontal coordinates = 1/^2((, z) + ) where (, z) is the horizontal position of the vortex centerline at height z andis the relative horizontal offset. With this scalingresolves the core scalefrom eq:CoreScalings and the centerline covers comparable distances. This justifies the notion of “strong tilt”.In the sequel we use polar coordinates in horizontal planes, , = x+ y where{[x = cosθ;; y =sinθ; ][ = cosθ - sinθ; = sinθ + cosθ ]. withandthe radial and circumferential unit vectors, respectively. The transformation rules for derivatives in these coordinates read= ^2( + 1/θ) ≡^2,z|_t,x,y= z|_,,θ-· ,t|_x,y,z =δ^3 ( |_,θ,z-·).The horizontal velocity is decomposed into the vortex' motion plus the relative velocity, =+ (u_r+ u_θ ). For later reference, here are the centerline represented in the (, ) basis, = (Xcosθ + Ysinθ)+ (- X sinθ + Y cosθ), and the Fourier expansion of functions of the circumferential angle, θ, F(θ) = F_0 + ∑_n (F_n1cos(nθ) + F_n2sin(nθ)). Note that we have exchanged the roles of F_n1 and F_n2 relative to their use in <cit.> as this will streamline the analysis of the orientation of a dipolar field characterized by F_1 = (F_11, F_12)^T relative to the tilt vector ∂ X /∂ z.§.§ Vortex core expansion schemeThe circumferential velocity is expanded as(t,,z;ε) = 0(t, , z)+ 1(t, , z)+ 2(t, ,θ, z)+ ,(t,,z;ε) =+ .2(t, ,θ, z)+ ., non-axisymmetry relative to the centerline is allowed for scaling orders from u upwards. Across the core size length scale, , such asymmetries induce horizontal divergences of order / ∼u / ( / ^2) = ^3 u/, see eq:CoreScalings. Since the flow field is anelastic to leading order as derived below, this implies the vertical velocity scaling,w(t,,z;ε) = ^3 0(t, ,θ, z) + δ^3. Expansions for the thermodynamic variables are anticipated as follows, =+ ^2 + ^4(4 + _4)+ ^5(5 + _5)+ ^5,ρ =+ ^2 + ^4( 4 + _4)+ ^5( 5 + _5)+ ^5,Θ = Θ_0+ ^2+ ^4( 4+ _4)+ ^5( 5+ _5)+ ^5,<cit.>. 
In eq:MesoExpansion, the variables (p_0, p_2, ρ_0, ρ_2, Θ_2)(z) describe the stationary background (Θ_0 is a constant), (_i, _i, _i)(, z), are higher-order horizontal means, and (i, i, i)(, , θ, z) are the quantities of prime interest.Note that, owing to the Fourier representation defined in eq:Fourier this notational convention “overloads” the subscript (·)_0 with a double-meaning, but the distinction should always be clear from the context.The vortex centerline position is expanded as (, z) =0(, z) + ^1 .§ ASYMPTOTIC ANALYSIS OF THE CORE STRUCTURE EVOLUTIONThis section revisits the analysis of <cit.> for large vortex Rossby numbers focusing on the evolution equation for the primary circulation.§.§ Asymptotic equation hierarchy for the vortex coreThe governing equations transformed to the co-moving coordinates are provided in appendix <ref>. Inserting the expansion scheme from the previous section we obtain - (u_θ)^2/ + 1/ρ_0 4 = 0 , 4θ = 0- 2 u_θ1/ +1/ρ_0 5 - f_0 u_θ = 0 , 5θ = 0from the horizontal momentum balance at leading and first order, respectively. Each line in eq:HorMomLeadingFirst displays the respective radial balance first and the circumferential balance as the second equation. We observe from the radial component in eq:HormomLeading that the vortex is in cyclostrophic balance to leading order which implies large vortex Rossby number. The Coriolis effect enters only as a first-order perturbation in the present regime as seen in the radial component of eq:HormomFirst. The pressure perturbations p4, p5 inherit the assumed axisymmetry of 0, 1 thanks to the leading and first order circumferential momentum balances in eq:HormomLeading and eq:HormomFirst, respectively.The full second order horizontal momentum equations are listed in appendix <ref>, equations eq:HormomSecondApp, but for the rest of the paper we only need the circumferential average of the circumferential component mom_theta_32. Letting ψ_0 ≡1/2π∫_-π^πψ(θ) dθ denote the circumferential average of some θ-dependent variable ψ in line with eq:Fourier, we have t + 0_0 z + u_r,02(+ /) - u_r,*2 = 0, where u_r,*2 = (0·0z)_0 . The flow is hydrostatic up to third order, , p_iz = -ρ_i (i = 1,...,4), whereas ∂4/∂ z - ∂0/∂ z·4 = - 4. The leading and first order velocities are horizontal and axisymmetric according to Meso:HorizontalVelocity, Meso:VerticalVelocity and thus divergence free. The second order velocity is subject to an anelastic divergence constraint obtained from the mass balance, ρ_0/( (2) + 2θ) + ∂/∂ z( ρ_0 0) - ∂0/∂ z·∇̂_ (ρ_0 0) = 0. Similarly, the first non-trivial potential temperature transport equation reads 0/4θ + 0d Θ_2/d z = Q_Θ0, and the equation of state relates the thermodynamic perturbation variables through4 = ρ_0 ( 4/γ_0 - 4/Θ_0) .§.§ Temporal evolution of the vortex structure<cit.> observed that with the aid of eq:HorMomLeadingFirst and eq:UrStarDef–state42, and given the vortex tilt, 0z, as well as the diabatic source term, 0, one may interpret eq:MomThetaTwoAverage as a closed evolution equation for the leading circumferential velocity, 0.To corroborate this, we use the Fourier decomposition, eq:Fourier, for 0 andthe representation of the centerline representation in polar coordinates from eq:CenterlinePolar to obtain u_r,*2 = (0·0z)_0 = 1/2[ 0_110z + 0_120z]. Expressions for 0_0 and 0_1k for k = 1,2 follow from the Θ–transport equation in potentialtemp2, 0_0 Θ_2z =Q_Θ,00, 0_1kΘ_2z = Q_Θ,1k0 - (-1)^k u_θ/4_1[3-k] . 
Since 4 is axisymmetric (see eq:HormomLeading), 4_1k≡ 0 and the equation of state, state42, yields 4_1k/Θ_0 = - 4_1k/ρ_0. With this information, the vertical momentum balance vertmom2 yields _114/Θ_0 = - _114/ρ_0 =- 1/ρ_0∂0/∂ z∂4/∂, _124/Θ_0 = - _124/ρ_0 = - 1/ρ_0∂0/∂ z∂4/∂. Using the cyclostrophic balance in eq:HormomLeading to eliminate 4, and going back to w52 we obtain expressions for the 0_1k in terms of 0, 0z, and 0, 0_1kΘ_2z = Q_Θ,1k0 -(-1)^k Θ_0 X_[3-k]0z (u_θ^(0))^3/^2(0pt10ptk = 1,2 ), where 0_1 ≡0 and 0_2 ≡0. Upon insertion of this result in eq:UrStar, the second term on the right cancels, so that u_r,*2 = 1/2dΘ_2/dz[ Q_Θ,1100z + Q_Θ,1200z] ≡1/2dΘ_2/dzQ0_Θ,1·0z. Here we have interpreted the cosine and sine Fourier-1 components of Q_Θ0 as the components of a heating dipole vector, Q_Θ, in the horizontal plane.To find a corresponding expression for u_r,02 (see the third term in eq:MomThetaTwoAverage), consider the circumferential average of mass continuity, mass2. A brief calculation yields (ρ_0 u_r,02) + ∂(ρ_0 0_0 )/∂ z - 1/2[ 0z(ρ_0 0_11) + 0z(ρ_0 0_12)] = 0 or, equivalently, (ρ_0 [u_r,02 - u_r,*2]) + ∂(ρ_00_0 )/∂ z = 0 with u_r,*2 defined in eq:UrStar. Exploiting w5211 in that definition and integrating inrequiring that u_r,02 be finite at =0 we find u_r,02 = u2_r,00 + u_r,*2, where u2_r,00 = - 1/∫_0^r/ρ_0z(ρ_0 Q_Θ,00/ dΘ_1/ d z) d r. With w52 (first equation), eq:UrStarFinal, eq:urzeroone, and eq:UradZeroZero we have now indeed expressed w0_0, u_r,02, and u_r,*2 in terms of 0, 0z, and Q_Θ0 as announced. In the sequel, we may thus derive from eq:MomThetaTwoAverage how vortex tilt and diabatic heating affect the evolution of the primary circulation. The results in this section match the corresponding result by <cit.> with the Coriolis parameter f_0 set to zero. This corroborates our statement (<ref>) in the introduction that the vortex amplification/attenuation mechanism described in their work does not depend on the vortex Rossby number being at most of order unity.§ DISCUSSION OF THE ASYMMETRIC INTENSIFICATION/ATTENUATION MECHANISM§.§ The influence of asymmetric heating on the primary circulationAs elaborated in the previous section, eq:MomThetaTwoAverage describes the evolution of the primary circulation in response to external diabatic heating in the present vortex flow regime. Aiming to separate the influence of heating asymmetries from those of axisymmetric effects, we recall from eq:urzeroone that the net circumferentially averaged radial velocity is entirely a response to diabatic effects, and that it consists of one part, u2_r,00, which, according to eq:UradZeroZero is induced by axisymmetric heating, and a second part, u_r,*2, which, according to eq:UrStarFinal, arises from first Fourier mode asymmetric heating patterns. Using this decomposition in eq:MomThetaTwoAverage, we rewrite the equation as t + 0_0 z + u_r,002(+ /) = - u_r,*2/ , which is the large-Rossby version of equation eq:TempevoluthetaIntro announced in the introduction. In this equation, the left hand side captures the influence of the axisymmetric dynamics and diabatic heating, whereas the right hand side covers all effects due to the interaction of asymmetric heating and vortex tilt.§.§ Mechanics of vortex intensification by asymmetric heating of a tilted vortexIn the following section we analyze the leading-oder mass balance relations given in (<ref>) and (<ref>). 
We furthermore argue that u_r,*2 given in (<ref>) plays a crucial role in explaining the spin-up mechanism based on asymmetric diabatic heating. In this context we note that, according to (<ref>), the first order Fourier modes of the vertical velocity involve a contribution from diabatic heating (first term) and one due to the adiabatic dynamics (second term). It is only the contribution by diabatic heating that has an impact on u_r,*2 as seen in (<ref>).Depicting the situation of asymmetric heating anti-parallel to the tilt in figure <ref> we observe that the suitably arranged vertical motions can generate a mass flux through the boundary of the tilted disc control volume with the coordinate interval (z, z+ Δ z). Considering mass continuity in centerline-attached coordinates in (<ref>), we can identify the term in brackets as the axisymmetric mean of the vertical mass flux. Equation (<ref>) reveals that this expression is equal to a horizontal mass flux governed by u_r,*2. We therefore conclude, that the net vertical outflow in figure <ref> is compensated in the present balanced vortex situation by a net horizontal inflow to preserve continuity.This gives an additional spin-up mechanism which exploits the vertical (tilted) structure of the vortex to gain angular momentum by moving air masses from larger radii to the center of the vortex. In contrast, the opposite orientation of diabatic Fourier-1 modes leads to an attenuation of the vortex by transporting angular momentum away from the center. We therefore claim that by this mechanism it is possible to influence the overall strength of an atmospheric vortex as will be demonstrated in section <ref>.This should settle the announcement of (item <ref>) in the introduction.§.§ Energy budget for the externally heated vortexHere we elaborate on how the asymmetric diabatic heating is transferred to kinetic energy of the primary circulation in a tilted vortex. This will be particularly useful in assessing the derived equations within the framework of Available Potential Energy <cit.>.To this end we multiply eq:MomThetaTwoAverageAsymmetriesSplitOff by ρ_00, use the θ-averaged leading-order mass balance from anelastic and recast the advective terms in conservation form to obtain, t(ρ_0^2/2) + (ρ_0 u_r,002^2/2) + z(ρ_0 _0 ^2/2) = - u_r,02p4 . Here we have dropped the 0 superscript on 0 and w0 to simplify the notation, and we have used the cyclostrophic radial momentum balance from eq:HorMomLeadingFirst to introduce the pressure gradient on the right. This reveals the change of kinetic energy (left hand side) to result from the work of the pressure force due to the mean radial motion (right hand side). Some straightforward but lengthy calculations, the details of which are given in appendix <ref>, yield a direct relation of the kinetic energy balance in eq:KineticEnergyBalanceI to the Lorenz' theory of generation of available potential energy (APE) by diabatic heating, ()_t + ( u_r,002)_ + ( w_00)_z= ρ_0/dΘ_2/dz1/Θ_0[Θ_04Q_Θ,00+1/2Θ_14·Q_Θ,10] , = rρ_0/N^2Θ_0^2(Θ4· Q_Θ0)_0 where =+ p4, and (Θ,Q_Θ)_1 = (Θ, Q_Θ)_12 + (Θ, Q_Θ)_11 are the dipole vectors spanned by the first circumferential Fourier components of the fourth order potential temperature perturbation, Θ4, and of the diabatic heating function, Q_Θ0, respectively.Equation (<ref>) poses the differential form of kinetic energy balance. 
To end up with an integral form as presented in <cit.> we make use of the Gauss's theorem which allows us to drop the radial and vertical derivative assuming u_r,002 and w_02 vanish for sufficiently largeand z respectively. To achieve this condition for u_r,002 (<ref>) shows that we do not only need Q_Θ,0 such that the integral converges for large radii but we need the integral to converge to zero. When assuming a concentrated pattern of heat release with amplitude 10^-4 close to the vortex center (in the eyewall) over a surface of ∼ 10^4 km^2 it would need to be counteracted by contributions with opposite sign, , cooling, but over a much larger surface of ∼ 10^6 km^2. This simple scale approximation reveals cooling rates of 0.1K/d which is by an order of magnitude smaller than what is observed by radiative cooling <cit.>.For the total (integrated) kinetic energy E_k we find dE_k/dt = 2π∫_0^∞∫_0^∞ρ_0/N^2Θ_0^2( Θ4 Q_Θ0)_0 dr dz , On the one hand, <cit.> balanced the kinetic energy with the conversion rate from APE to kinetic energy (C) and the dissipation rate (D) where the latter is neglected here. On the other hand, the expression on the right-hand side coincides with the generation rate (G) of APE (see appendix <ref> for details). Therefore, no APE accumulates in the present flow regime as it is directly converted to kinetic energy (at leading order). This is the result of the timescale used in the asymptotic analysis as conversion between APE and kinetic energy is accomplished by the advective and pressure-velocity fluxes on faster timescales.In line with <cit.> and announced in the introduction in item <ref> this result shows that positive correlations of temperature perturbation and diabatic heat release lead to the increase of kinetic energy.The precise form of the right hand side of eq:KineticEnergyBudget as announced in the introduction (item <ref>) is obtained from eq:EKinBalance by realizing that (1/Θ_0) dΘ_2/dz is the dimensionless representation of N^2, the square of the Brunt-Väisälä frequency, and that the constant Θ_0 is the leading-order dimensionless background potential temperature = T (Θ_0 + 1). <cit.>, extending prior similar studies, investigate the influence of asymmetric diabatic heating on vortex intensification on the basis of a linearized anelastic model that includes a radially varying base state and baroclinic primary circulation. Their central conclusions are that (i) asymmetric heating patterns quite generally tend to attenuate a vortex, that (ii) there are situations in which they can induce amplification, but in these cases their influence is (iii) generally rather weak. In fact, they state in their section e: “... purely asymmetric heating generally leads to vortex weakening, usually in terms of the symmetric energy, and always in terms of the low-level wind.”. Equation eq:EKinBalance shows, in contrast, that purely asymmetric heating in a tilted vortex can intensify or attenuate a vortex depending on the arrangement of the heating pattern relative to the tilt, and that the efficiencies of symmetric and asymmetric heating in generating kinetic energy are of the same order in the asymptotics as claimed in (item <ref>) of the introduction.§.§ Diabatic forcing of the centerline motionTogether with the previous discussion we want to highlight some aspects of the effects of asymmetric heating on the vortex centerline motion. 
Examinations in appendix <ref> of the constituents of the centerline equation reveal that the term k × splits into an adiabatic and a diabatic contribution due to the linear dependency ofon the vertical velocity dipole w_1 and w_1 being composed of an adiabatic and a diabatic contribution (see (<ref>)). In particular, we find that the adiabatic expression is of the same form as the diabatic one but evaluated with the adiabatic vertical velocity: w_1,ad = - [01; -10 ]1/Θ'_2 W ∂_zX = R̂_-π/21/Θ_2' W ∂_zX , with W = u_θ/r( u_θ^2/r + f u_θ) , and the rotation matrix R̂_θ_0.Inserting (<ref>) into eq. (<ref>)results in a linear differential operation on X which, interpreted as Hamiltonian of a (complex-valued) Schrödinger equation (<ref>), leads to a purely real spectrum, , to precession of the centerline in the complex (x-y) plane.The diabatic motion of the centerline on the other hand results from inserting some non-trivial w_1,dia intowhich in our case shall be a rotated version of (<ref>): w_1,dia = R̂_θ_01/Θ_2' W ∂_zX . Here θ_0 is the relative orientation of the diabatic vertical velocity dipole relative to the tilt. Clearly, w_1,dia coincides with w_1,ad for θ_0=-π/2. Therefore, such a diabatic heating vertical velocity pattern will result in an additional contribution to the centerline motion of the same orientation and magnitude as the adiabatic contribution. By numerical experiments we observed that the adiabatic vertical velocity dipole, oriented -π/2 relative to the tilt leads to a centerline motion in the direction +π/2.In contrast by varying θ_0 we expect a diabatic centerline forcing which is oriented -θ_0 relative to the tilt. Therefore, for our formulation of diabatic heating the rotation angle θ_0 determines whether the diabatic forcing leads to acceleration (θ_0=-π/2) or deceleration (θ_0=π/2) of the centerline precession or to increasing (θ_0=π) or decreasing the tilt of the centerline. [Note that the resulting linear operator governing the diabatic centerline motion effectively is the rotated version of the adiabatic operator (multiplied with complex number on the unit circle), , its spectrum is rotated in complex plane as w_1 is rotated. Real parts of the spectrum lead to precession while imaginary parts lead to growth/damping of the centerline amplitude.]In the further course of diabatic experiments we make use of our findings and specify heating patterns of the form Q_1,θ_0 := Θ_2'w_1,dia = _θ_0 W ∂_zX. § COMPARISON BETWEEN ASYMPTOTIC MODEL AND THREE-DIMENSIONAL SIMULATIONSIn the course of this paper we presented the derivation of a system of PDEs governing the leading-order dynamics of a tilted tropical cyclone in weak cyclostrophic regime (eqs. (<ref>) and (<ref>)).The aim of this section is twofold: First, we want to validate the reduced model equations against the three-dimensional Euler equations, , first principle equations (<ref>), by solving both sets of equations in a suitable numerical framework. To this end, we follow the work of <cit.> who presented the first results on this scenario. The second goal is to highlight principal mechanisms that are activated by purely asymmetric heating. Therefore, after analyzing the adiabatic dynamics of an initially tilted vortex, we continue by constructing a prototypical asymmetric diabatic heating pattern which will be imposed on the vortex under different angles relative to the tilt. 
We will refer to these experiments as adiabatic (reference simulation), stagnation, intensification and attenuation according to their influence on tilt and centerline.The quasi-two-dimensional equations (<ref>) and (<ref>) are solved by a combination of appropriate numerical methods details of which are presented in section <ref>. For the three-dimensional simulations the general purpose atmospheric flow model EULAG <cit.> provides efficient integration strategies for equations (<ref>), and its compressible model was used. These two alternative representations of the tilted vortex flows will be referred to as asymptotic and three-dimensional simulations.Note that, although we worked out the scaling in the cyclostrophic regime (= 1/δ), initial data in the following sections will be in the gradient-wind regime (= 1).[Experiments have shown substantial damping of the centerline tilt amplitude when initializing the vortex in the cyclostrophic regime. Furthermore, the special formulation of asymmetric diabatic heat release sensitively depends on the magnitude of tangential velocity causing numerical instabilities in the cyclostrophic regime.] Therefore, in the numerical treatment terms involving the Coriolis parameter f, even if reasonably small, will not be neglected. §.§ Numerical setting and initial dataWith the asymptotic analysis above we demonstrated that a tilted vortex evolves at leading-order on a time scale comparable to the synoptic time scale. Higher-order dynamics occurs in the presence of initial perturbations (excitation of higher-order asymptotic modes) on faster time scales. However, we are interested in the leading-order effects only, hence we construct initial data to closely reproduce the leading-order symmetries imposed by the asymptotic analysis of section <ref> allowing to solve for solutions on the slowest varying manifold.In the case of an adiabatic vortex the tangential velocity equation (<ref>) is trivially stationary, t u_θ0=0 , and the centerline equation (<ref>) becomes a Schrödinger-type equation, i ∂_t X = Ĥ X , with a Hamiltonian Ĥ depending on the tangential velocity u_θ(r,z) and the z-dependent background profiles ρ̅, p̅, Θ̅. Note, that we have introduced X := ( X · i) +i ( X · j) as the complex representation of X (see appendix <ref>).With suitable boundary conditions the governing Hamiltonian takes the form of a Sturm-Liouville operator and therefore exhibits a real, discrete spectrum <cit.>, which sets the precession frequency of each eigenmode. 
The first non-trivial eigenvalue corresponds to the slowest varying solution and posses a cosine-like eigenfunction, , the simplest tilted solution.[Details on that are skipped but can easily be validated by numerical means.]As tilt is crucial for coupling asymmetric diabatic heating modes to the leading-order vortex dynamics, we prescribe an initially barotropic tangential velocity profile u_θ(r), corresponding to a Gaussian vorticity profile, q(r), = q_m 1 - e^-σ^2 r^2/2 σ^2 r , where the radial vorticity profile reads q(r) = q_m e^-σ^2 r^2, and choose the first non-trivial eigenfunction of the corresponding Hamiltonian to define the initial centerline geometry, scaled to a displacement of 160 km, see figure <ref>.For the sake of complying with horizontal boundary conditions imposed by EULAG and to avoid reflective features near the boundary of the computational domain, the initial data for the three-dimensional simulations are smoothly transitioned to zero at some finite radius by applying a mollifier: m(r) = sin^2( π/2r/r_0),r < r_01 , r_0 < r < r_1cos^2( π/2r-r_1/_̊1-r_∞) , r_1 < r < r_∞0 , r > r_∞ r_1 = 1 250  and r_∞ = 1 750  are the radii where the mollifier starts and where it reaches full-suppression. In addition to that the profile within r_0 = 100  is adjusted to preserve differentiability at the origin.As mentioned earlier, the solution depends on the background state of the atmosphere which is determined by the potential temperature profile Θ̅(z) = T exp(N^2/gz) . In the three-dimensional case the vortex is embedded into a domain of 4000  extent in both horizontal directions, 10  in the vertical, and a damping layer surrounds the domain near the horizontal boundaries to suppress gravity waves emerging from the inner core and to keep them from reflecting.Along the structural properties of the tropical cyclone, i.e., an inner core and smooth transition to the quasi-geostrophic far field, we use the static mesh refinement capability of EULAG <cit.> and map equidistant coordinates onto a grid focused at the inner core. The actual mapping is accomplished by [ x_p; y_p ]= c_1 [ x_c; y_c ] + c_2 [ x_c^α; y_c^α ] , where α = 5, c_1,2 = 1/2. (x,y)_p,c are normalized coordinates on the domain [-1,1]^2. Figure <ref> demonstrates how the horizontal grid is focused towards the center of the computational domain. The asymptotic equations are solved on a regular equidistant tilted polar grid on the domain (r,z) ∈ [0, 1.12] × [0, 12.5] in dimensionless units covering roughly a tilted cylinder of 1000   around the centerline and the full vertical extent.In the further course, we will compute the diabatic heating from eq. (<ref>) which involves the reconstruction of both, the current centerline position and circumferentially averaged tangential velocity. For the first, we compute the center of mass of the vorticity field at each horizontal level: X = ∫ k · (∇× u_||)x dx dy We follow the ideas of <cit.> and the implementation outlined in <cit.>.The circumferential mean of u_θ then is computed by the Biot-Savart integral u_θ,0 = Γ(r,z)/2π r = 1/2 π r∫_B_r( X) k · (∇× u_||) dx dy , where B_r( X) = {( x_||∈ℝ^2 | ( x_|| -X)^2 < r^2 } denotes the circular domain centered at X with radius r. 
§.§ Results and discussionIn the following subsection we will present results of numerical simulations solving either the full three-dimensional Euler equations (<ref>) or the reduced asymptotic equation (<ref>) for the primary circulation velocity u_θ0, and the centerline evolution (<ref>) explained in detail in App. <ref> and following.§.§.§ Adiabatic vortexAs a reference for the following experiments we first investigate the dynamics of a tilted, adiabatic vortex. In subsection <ref> we constructed initial data to follow the first non-trivial eigenmode of the governing (adiabatic) Hamiltonian and we found stationarity (eq. (<ref>)) of the mean tangential wind. Hence, from the structure of eq. (<ref>) we expect undamped precession of the eigenfunction.Figure <ref> compares the results of both adiabatic simulations, three-dimensional and asymptotic, with initial data as discussed. Though exhibiting small-scale oscillation and damping, the three-dimensional simulation (fig. <ref>, right panel) compares well with its asymptotic analog. Time scales of one precession are 5.5 days for the asymptotic and 6.5 days for the three-dimensional simulation. This difference leads to a deviation of the final positions of the centerline but given that the effective expansion parameter δ = √()∼ 1/3, this is well within the error bounds of the leading order solution.The asymptotic analysis revealed non-trivial leading-order balances for w and the thermodynamic quantities according to tilt, gradient-wind (cyclostrophic) and hydrostatic balance. In figure <ref> both, w and Θ̃ are visualized by representative horizontal slice at 5000  at t = 6.5 comparing asymptotic values (black contours) with the 3D numerical simulation results (color-coded). The tilt vector ∂_zX is indicated by an arrow. Qualitative similarities are rather apparent, while deviations are again well within the asymptotic truncation order δ∼ 1/3. Both figures demonstrate the alignment of dipolar perturbations relative to the tilt as w_1 and Θ'_1 are rotated by -90  and 180, respectively. This is in agreement with the findings of <cit.>. §.§.§ StagnationThe stagnation test follows the idea that the choice of θ_0 = π/2 in (<ref>) leads to deceleration of the centerline precession by canceling the termin (<ref>).Furthermore, it has no impact on the leading-order tangential velocity as we immediately see when neglecting symmetric vertical motions due to symmetric diabatic heat release resulting in the tangential velocity time evolution u_θt = -u_r,*(u_θ/r + f ) and realizing that u_r.* is the projection of the diabatic heat release onto the tilt vector, u_r,* = 1/2( ∂_zX · Q_1,π/2) = 1/2( ∂_zX · W R_π/2∂_zX ) , which vanishes due to orthogonality.By construction, inserting the heating Q_1, π/2 into (<ref>) satisfies w_1 ≡ 0 up to leading order. Figure <ref> shows the expected behavior for the three-dimensional simulations. Q_1, π/2 attenuates vertical velocity by a factor of ∼δ, , canceling the leading-order expansion mode of w. The residual, depicted in the right panel may refer to higher-order expansion modes.Furthermore, the leading-order tangential velocity is not affected by asymmetric heating of that orientation. Figure <ref> presents both the time series of maximum tangential wind (blue) and of maximum heating (red). 
Only small variations of the tangential velocity are apparent and, as we will see in the next subsection, this changes substantially when we alter the orientation of Q_1, θ_0.Experiments (not shown here) revealed instabilities caused by small perturbations due to discretization errors: The diabatic heating will exhibit large amplitudes where the local tilt ∂_zX (being result of the reconstruction (<ref>)) is large. This affects the local velocity and as a consequence the spectral properties of the Hamiltonian of eq. (<ref>) (projecting onto higher frequency modes) increasing small-scale oscillatory features of the centerline X. Hence, we need make sure to maintain a certain regularity of X to avoid obscuring the effect under consideration by triggering this feedback loop. We achieve this by restricting the heating to a concentrated pulse by applying a time-dependent amplitude factor of the shape f(t) = a exp( (t-b)^2/2c^2) For the current setting a=1, b=5.5 and c=1 /2√(2 ln 300).The heating distribution Q_1,π/2 constructed in such a way satisfies u_r,* to vanish (cf. (<ref>)) and additionally, by canceling w_1,is canceled in the centerline equation of motion (<ref>). As discussed in the introduction of section <ref>, we are initially in the regime of gradient-wind balance. Hence M_1 remains the only contribution to the centerline equation of motion. Numerical evaluations show that the amplitude of M_1 is about 1/6 of the amplitude of thein the adiabatic case which is why we observe a significant slow down of the centerline precession in the asymptotic case in figure <ref>, left panel, during the heating time interval (cf. fig <ref> for reference). Although not as prominently as with the asymptotic simulation due to the aforementioned discretization errors, the centerline also slows down in the three-dimensional simulation. I becomes also present that due to heating the shape of the centerline becomes superimposed by higher-frequent features that are excited due to the slight misalignment of the diabatic heating dipole.§.§.§ IntensificationIn subsection <ref> we constructed a dipolar heating pattern and by aligning its orientation π/2 relative to the centerline tilt we found stagnation of the centerline precession as well as suppression of the vortex-induced vertical velocity dipole. However, what is the influence of this heating dipole when oriented with θ_0 = π relative to the tilt? θ_0=π determines the sign of (<ref>) negative and as a consequence the right-hand side of (<ref>) positive, , leading to intensification. To examine the effect of the orientation of a dipolar heating pattern without altering its amplitude by the dependency of Q_1,θ_0 on u_θ and ∂_zX we fix u_θ and | Xz| in (<ref>) to values at t=5 but still keep track of the orientation of Xz.In contrast to figure <ref>, where we did not see any sizeable impact of diabatic heating on the mean circumferential velocity, figure <ref> displays a clear increase for both the asymptotic and three-dimensional equations. We notice that the intensification is more effective in the asymptotic simulation (∼ 3 m/s) compared to the three-dimensional simulation (∼ 1.5 m/s), even though the heating amplitude of the asymptotic simulation is tuned to meet the three-dimensional simulation. This may be caused by the fact that the centerline in the three-dimensional simulations is a reconstruction from the flow field and hence affected by inaccuracies leading to non-optimal alignment of the heating dipole. 
Furthermore, even with rather high resolution the three-dimensional simulation is affected by numerical damping leading to a decrease of the centerline tilt over time which further reduces the impact (cf. eqs. (<ref>) and (<ref>)) even with Q_1,θ_0 computed by (<ref>) with values of ∂_zX just before the heating. Figure <ref> reveals the tilt dynamics without and with diabatic heating for both simulation approaches. In the three-dimensional simulations the tilt is more distorted than in the asymptotic simulations and the effect on the centerline tilt is less pronounced. However, the overall behavior is comparable between the two simulations. From the discussion in subsection <ref> we concluded that a heating pattern with θ_0 = π would lead to an increase of the centerline tilt. This behavior could be validated in the current situation as seen in figure <ref> for both simulations, asymptotic and three-dimensional. As the increase in tangential velocity is not as efficient for the three-dimensional simulation as it is for the asymptotic one, after the period of heating, the centerline precesses with higher angular frequency in the asymptotic case. Again, this may be due to inaccuracies in the three-dimensional simulation when computing and aligning the heating dipole. In section <ref> we discussed the intensification mechanism as result of the effective vertical velocity dipole resulting from the superposition of both adiabatic and diabatic contributions. In the current situation the diabatic vertical velocity dipole is oriented by θ_0=π while the adiabatic dipole is oriented by -π/2. As both have comparably the same amplitude we would expect -3π/4 rotation between tilt and resulting vertical velocity which is verified by results shown in the right panel of figure <ref>. Although the intensification is rather weak, there is evidence that the orientation of the asymmetry of the diabatic heat release matters for the evolution of the circumferential velocity. In fact, by allowing for stronger heating by increasing the duration of the heat pulse the effect on the tangential velocity is stronger (see figure <ref>), but it also affects the structure of the centerline in a more profound manner. Figure <ref> also demonstrates that the system's kinetic energy increases as a consequence of the imposed asymmetric heating in line with our findings of section <ref>.§.§ AttenuationThe final experiment of this work consists in switching the heating dipole pattern to a configuration where we expect attenuation of the vortex and vertical alignment of the centerline. Following equation (<ref>) attenuation corresponds to θ_0=0, , positive heating in the direction of the centerline tilt. Again, tilt amplitude and tangential velocity of eq. (<ref>) are set to initial values to avoid non-linear feedback and restricted by an amplitude factor to act over a short interval only.In figure <ref> for both, asymptotic and three-dimensional simulations, the centerline aligns when forced by heating.In addition, figure <ref> demonstrates the reduction of integrated kinetic energy due to the attenuating heating dipole.§.§ Summary of tilt dynamics We want to emphasize the effect of asymmetric diabatic heating on the centerline tilt by analyzing the time series of tilt amplitude as measured by an L_2-norm: ‖ Xz‖ := ∫_0^z_top( √( Xz^2)) dz,z_top = 10 , With figure <ref> this quantity is plotted for all types of experiments performed in the course of this work. 
It confirms on the one hand, that the orientation of the diabatic heating dipole correlates with changes in the tilt amplitude. Aligning the heating dipole with the tilt (attenuation case) leads to decreasing the tilt, , the vortex aligns. This situation turns around if the heating dipole is rotated 180 (intensification case) in that the tilt further increases. The stagnation configuration (tilt and heating dipole are perpendicular) leads to no significant alterations from the adiabatic behavior for the asymptotic simulations.On the other hand, we see that the three-dimensional representation of the initial data does not project as well onto the first eigenmode of the centerline as in the asymptotic cases. Besides features which oscillate on a time scale of less than one day all three-dimensional simulations exhibit an additional oscillation on the scale of roughly six days. However, in every case, asymptotic and three-dimensional, the effects of activating an asymmetric diabatic heating dipole are superimposed onto the adiabatic reference. The stagnation simulation follows the adiabatic reference on both cases but exhibits slight distortions in the three-dimensional case. As discussed before this is the reason for restricting the diabatic heating to a short time period as this effect would increase for longer periods. However, for both experiments, intensification and attenuation, the tilt amplitude follows approximately the adiabatic reference curve on an increased and lowered level, respectively.We argue that the response of asymmetric diabatic heating for the three-dimensional simulations is not as direct as it is for the asymptotic analog simulations. This is probably due to imbalances excited through the diabatic heating, , by misalignment of the heating dipole, and limits the efficacy of the heating in (<ref>) in influencing tilt and circumferential velocity. Nonetheless, in all cases the intensification configuration increases the centerline tilt while the attenuation configuration decreases it. The analysis above corroborates our initial statement that the orientation of a purely dipolar diabatic heating pattern intensifies and shears a atmospheric vortex apart in the anti-parallel orientation of heating dipole and tilt while it attenuates and aligns the vortex vertically in the parallel orientation.§ CONCLUSIONS AND OUTLOOKWith the present work we have extended the results of <cit.> to vortex Rossby numbers larger than unity corresponding to wind speeds of 𝒪(30)and showed that the principal structure of the reduced asymptotic model equations is retained in this limit. We found that the validity of the equations holds from the Gradient-wind regime up to (modest) cyclostrophic vortices. This corresponds to hurricanes of strength H1 on the Saffir-Simpson scale.Current models of tropical cyclone intensification rely on organized symmetric heating in an upright vortex <cit.>. Observations show that in incipient tropical storms the level of organization of convection is weak compared to that of mature storms. Our findings of leading-order effects of asymmetric diabatic heating both on the strength of the primary circulation and on the vortex tilt may add a new route of acceleration from tropical storms to mature hurricanes.Here, we have focused on highlighting the potential effect of asymmetric diabatic heating release on the mean tangential velocity, which we argued, can be of the same order of magnitude as symmetric diabatic heat release. 
Although the gain of horizontal wind was limited as we restricted to heating configurations which maintained the overall flow structure which the asymptotic analysis was based on, we expect to see potentially stronger efficiencies in nature due to the self-regulation of moist convection in a sheared environment. Moist thermodynamics has been replaced here, however, by artificial diabatic source and sink terms neglecting effects of water phase transitions. As we argued, the resulting asymmetric pattern of vertical velocity is the driver for intensification/attenuation. In ongoing research, to be published elsewhere in the future, we find that mean vertical mass fluxes of an ensemble of convective towers can provide for similar effects. Thus, despite its somewhat artificial setup, the present study does reveal an interesting physical mechanism.In this work we did not discuss the interaction of environmental shear with the TC. Observations of TCs just before rapid intensification often show a phase of relatively stationary configuration of background wind shear <cit.>, vortex tilt, and asymmetries of convection <cit.>. Further, we explicitly restricted to initially barotropic velocity distribution to avoid interaction due to baroclinicity. Still, these interactions may find an explanation in the framework of the present asymptotic model and are subject to current investigation. T.D., A.P., and R.K.'s work has been funded by Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 “Scaling Cascades in Complex Systems”, Project Number 235221301, Project (C06) “Multi-scale structure of atmospheric vortices”, and by the Helmholtz Society of Research Institutions for funding through the “GeoSim” Graduate College. The authors also thank the European Centre for Medium Range Weather Forecast for supporting this work by R.K's ECMWF research fellowship as well as hosting T.D., A.P., and R.K. for research stays. The authors acknowledge the North-German Supercomputing Alliance (HLRN) as well as the German Climate Computing Center (DKRZ) for providing HPC resources that have contributed to the research results reported in this paper. The authors also gratefully thank Olivier Pauluis, Remi Taullieux, and Mike Montgomery for many fruitful discussions which have helped strengthen our interpretation of the asymptotic results. T.D. and R.K. further thank Sundararaman Gopalakrishnan, Frank Marks, Paul Reasor, and Dave Nolan for their hospitality and insightful discussions during a research stay at NOAA/HRD and University of Miami. NCAR is sponsored by the National Science Foundation. § GOVERNING EQUATIONS IN THE CO-MOVING COORDINATESTransforming (<ref>) to the vortex-centered coordinates from section <ref> using eq:DerivativeTransformations and defining ≡ t and = u_r+ u_θ we find (+) + 1/· + /^3[z - ·] (+)+ 1/^71/ρ p + 1/^2 f × (+)= 0 ,w + 1/·w + /^3[z - ·]+ 1/^10 (1/ρ [z - ·] + 1 )= 0,ρ + 1/·(ρ) + 1/^3[z - ·] (ρ) = 0,Θ + 1/·Θ + /^3[z- ·] Θ = Q_Θ. § FULL SECOND ORDER HORIZONTAL MOMENTUM BALANCES /2θ - 2 2/ - (1)^2/ +·0z0/+ 1/ρ_06 -ρ_1/ρ_0^24 - f_0 1 = 0 t + 0z + 2(+ /) + /2θ- 0·0z +1/ρ_0 6θ = 0 § DERIVATION OF THE KINETIC ENERGY BUDGET EQ:EKINBALANCEWe start from eq:MomThetaTwoAverageAsymmetriesSplitOff, which is equivalent to (4.21) in <cit.> for f_0 = 0. This is verified straightforwardly by using u_r,02 = u_r,002 + u_r,*2. The equation is multiplied by ρ_0 0, and we use the mass conservation law in the form of eq. 
anelastic, , (ρ_0 u_r,002)_ + (ρ_0 w_00)_z = 0 , to generate the advective transport terms of kinetic energy in conservation form. We let = ρ_0 0^2 / 2 and obtain ( e_k)_t + ( u_r,002)_ + ( w_00)_z = - (u_r,002 + u_r,*) p4 . Focusing on the right-hand side of this equation, we rewrite the first term as [u_r,002p4= ( u_r,002p4)_ + ( w_00p4)_z - ρ_0 w_00(p4/ρ_0)_z; - p4/ρ_0[(ρ_0 u_r,002)_ + (ρ_0 w_00)_] . ] The square bracket vanishes according to eq:MassConservationApp, while we observe that by combining the axisymmetric part of the hydrostatic balance in vertmom2 with the equation of state in state42 to replace (p4/ρ_0)_z, and w52 to replace w_00 one finds ρ_0 w_00(p4/ρ_0)_z = ρ_0 Q_Θ,00/dΘ_2/dz4_0/Θ_0 . Next we rewrite the second term on the right of eq:EKinBalanceAppI using the definition of u_r,*2 in eq:UrStar, and the first Fourier modes of the vertical momentum balance in vert11newtest2 to find u_r,*2p4 = 1/2ρ_0 w_1·Θ_14/Θ_0 = 1/2ρ_0 Q_Θ,10/dΘ_2/dz·Θ_14/Θ_0 . To obtain the second equality we have used the asymmetric WTG-law from w5211 and the fact that the second term in that equation contributes a component to w_1 that is orthogonal ∂X0/∂ z, and thus also orthogonal to Θ_14.Insertion of eq:PressureWorkTerms–eq:AsymmetricHeatingPart generates the desired equation eq:EKinBalance. § RELATION TO LORENZ' THEORY OF AVAILABLE POTENTIAL ENERGYWe want to strengthen the fact that our outlined theory is compliant with Lorenz' concept of available potential energy (APE) <cit.>. We will see, under certain assumptions, that the kinetic energy generation of equation (<ref>) is identical to Lorenz' expressions of APE generation and conversion to kinetic energy.Therefore, we start with his equations (16), (20), (17), (18): At = -C + G, Kt = C ,C = -R/g∫_0^1/pTωdpG =1/g∫_0^Γ_d/Γ_d - ΓT'Q'/ Tdp. Here, A is the average available potential energy, K the mean kinetic energy, C the conversion of APE to kinetic energy, G the generation of APE (all per unit surface),the surface pressure ω = pt = pz w = -g ρ w the vertical velocity in pressure coordinates, Γ=Tz thelapse rate of temperature, Γ_d=g/ the dry-adiabatic lapse rate, Q = Θ Q_Θ the heat rate, (·) the horizontal mean, and (·) the deviation from the mean. In our case we neglect friction, hence equation (21) of <cit.> is trivially zero.We can make use of the results from above, as well asthose from <cit.>, that the leading-order expansion modes of the thermodynamical quantities are horizontally homogeneous. For example, for potential temperature we have Θ = T( Θ_0 + δ^2Θ_2 + δ^4)Θ = T( δ^4Θ4 + δ^5) Furthermore, we can find derived expression: T = ΘπT/ T = Θ/Θ + π/π Additionally, we make use of the circumstance, that the mean of an asymmetric field is zero.With that we get for equation (<ref>) G = g∫_0^H ρ/ΘΘz(Q_ΘΘ)dz , which is the the right-hand side of equation (<ref>) in physical dimensions, integrated over the whole domain, where Q_Θ is decaying sufficiently fast.With (<ref>) we get for the resulting vertical velocity w = 1/Θz( Q_Θ - u_θ/rΘ'_1 · e_r ) in physical units. With that we get for the horizontal mean ( T ω) ( T ω) = -g/ R p/ΘΘz( Θ Q_Θ) . Inserted into (<ref>) we get for C the same expression as for G in (<ref>).Finally, we conclude, that the generation of kinetic energy is equivalent to the generation of APE in Lorenz' theory and furthermore, that the expressions of generation APE and conversion into kinetic energy are identical for the setup of an asymmetrically heated vortex. 
All the APE generated by heating is converted immediately into kinetic energy. § THE CENTERLINE EQUATION OF MOTION As preparation for appendix <ref> we give more details on the centerline equation of motion (<ref>) and provide all the information necessary to close the system of equations (in conjunction with (<ref>)). We present the missing terms necessary to close equation (<ref>) in section <ref>, provide a split into adiabatic and diabatic contributions in section <ref>, and further deepen the analysis of this equation in section <ref>, enabling the construction of a stable and efficient numerical scheme for the asymptotic equations. §.§ Formulation of Paeschke et al. (2012) In the course of this work we argued that the equations derived by <cit.> remain valid in the vanishing-Coriolis case, i.e., f_0 → 0. The structure of the equations stays essentially the same; only terms proportional to f_0 are to be dropped. For completeness we present below the remaining expressions for M_1 and Ψ_1 necessary to solve the system (<ref>) and (<ref>): M_1 = (f^2/(4πρ_0Γ)) ∂_z((ρ_0Γ^2/Θ_2') ∂_zX), and L[Ψ] = R_{π/2} L[Ψ_1] depends on the following expressions: Ψ_1 = H_1 + Ĩ_1 + J_1 + Q̃_1, H_1 = ∂_r(r w_1 ∂_z u_θ), Ĩ_1 = I_1 + H_s(r-1)(1/r^2) I_1^M, I_1 = r(ζ + f) W_1, I_1^M = (Γ/2π) R_{-π/2} M_1, J_1 = (∂_rϕ_1)(r ∂_rζ), Q̃_1 = ((w_0 u_θ)/r - ∂_r(r w_0 ∂_r u_θ)) ∂_zX, where L[·] is an integral operator, L[Ψ] = (π/Γ) ∫_0^∞ r Ψ(r) dr, and R_{θ_0} denotes the matrix of two-dimensional rotation by an angle θ_0. Equations (<ref>)–(<ref>) involve terms which are resolved in terms of u_θ, X, Q_0 and Q_1: W_1 = -(1/ρ_0) ∂_z(ρ_0 w_1), w_0 = Q_0/Θ_2', w_1 = (1/Θ_2')(Q_1 + W R_{-π/2} ∂_zX), W = (u_θ/r)(u_θ^2/r + f u_θ), ζ = (1/r) ∂_r(r u_θ), ϕ_1 = -r ∫_r^∞ (1/r̄^3) ∫_0^r̄ r̃^2 R_1 dr̃ dr̄, R_1 = W_1 + (1/2)(∂_r w_0) ∂_zX. The expressions above give rise to the reformulated equation of centerline motion: ∂_tX = u_s + (1/2)ln(δ) k×M_1 - L[Ψ_1]. §.§ Split into diabatic and adiabatic contributions To reveal more clearly the structure of the centerline evolution equation, we recall the formula w5211 for the vertical velocity Fourier modes, which separates diabatic from adiabatic effects.
Rewriting this formula in the dipole vector notation, we have w_1 = w_1,Q + w_1,X, w_1,Q = (1/Θ_2') Q_1, w_1,X = (1/Θ_2') W R_{-π/2} ∂_zX =: R_{-π/2} ŵ X, and realize that the adiabatic part is a linear (differential) operation on X: ŵ = (W/Θ_2') ∂_z. From the equations in the previous subsection we see that, by linearity of the expressions in w_1, we can assemble Ψ_1 (and ultimately the centerline tendency) by linear superpositions of linear operations on X (operators are symbolically denoted by a hat) and in general nonlinear diabatic expressions. H_1 then becomes: H_1 = H_1,Q + R_{-π/2} ℋ̂ X, H_1,Q = ∂_r(r w_1,Q ∂_z u_θ), ℋ̂ X = ∂_r((ŵ X) ∂_z u_θ). We first need to evaluate the expression for W_1, W_1 = W_1,Q + R_{-π/2} Ŵ X, W_1,Q = -(1/ρ_0) ∂_z(ρ_0 w_1,Q), R_{-π/2} Ŵ X := -R_{-π/2} (1/ρ_0) ∂_z(ρ_0 ŵ X), to split I_1 accordingly: I_1 = I_1,Q + R_{-π/2} ℐ̂ X, I_1,Q = r(ζ + f) W_1,Q, ℐ̂ X = r(ζ + f) Ŵ X. Together with M_1, M_1 = (f^2/(4πρ_0Γ)) ∂_z((ρ_0Γ^2/Θ_2') ∂_zX) =: M̂ X, we get the following split for Ĩ_1: Ĩ_1 = I_1,Q + R_{-π/2} ℐ̂ X + H_s(r-1)(1/r^2)(Γ/2π) R_{-π/2} M̂ X =: I_1,Q + R_{-π/2} ℐ̃̂ X. Performing the split of R_1, R_1 = W_1,Q + R_{-π/2} Ŵ X + R_{Q,0} ∂_zX, R_{Q,0} = (1/2) ∂_r w_0, ϕ_1 divides into ϕ_1 = ϕ_1,Q + R_{-π/2} ϕ̂ X + ϕ_{Q,0} ∂_zX, ϕ_1,Q = -r ∫_r^∞ (1/r̄^3) ∫_0^r̄ r̃^2 W_1,Q dr̃ dr̄, ϕ̂ X = -r ∫_r^∞ (1/r̄^3) ∫_0^r̄ r̃^2 Ŵ X dr̃ dr̄, ϕ_{Q,0} = -r ∫_r^∞ (1/r̄^3) ∫_0^r̄ r̃^2 R_{Q,0} dr̃ dr̄, and therefore J_1 into J_1 = J_1,Q + R_{-π/2} 𝒥̂ X + 𝒥_{Q,0} ∂_zX, J_1,Q = (∂_r ϕ_1,Q)(r ∂_r ζ), 𝒥̂ X = (∂_r(ϕ̂ X))(r ∂_r ζ), 𝒥_{Q,0} = (∂_r ϕ_{Q,0})(r ∂_r ζ). For Q̃_1 we find the simple shorthand Q̃_1 = ((w_0 u_θ)/r - ∂_r(r w_0 ∂_r u_θ)) ∂_zX =: 𝒬_0 ∂_zX. Our final result identifies three different contributions to the centerline tendency: a (generally nonlinear) diabatic term, an advective term, and a linear Sturm–Liouville-type operator acting on X: L[Ψ_1] = L[Ψ_1,Q] + L[(𝒥_{Q,0} + 𝒬_0) ∂_zX] + R_{-π/2} L[ℋ̂ + ℐ̃̂ + 𝒥̂] X. §.§ Characteristic structure of the vortex centerline equation We continue rephrasing the original centerline tendency equation to further emphasize its structure. By trivially identifying ℝ^2 with ℂ we symbolically transform two-dimensional vectors a = (a_x, a_y) ∈ ℝ^2 to a = a_x + i a_y ∈ ℂ. Operations such as (k × ·) and R_{π/2} become multiplications by i. In this way we can identify a substructure of the equation of Schrödinger type, an advective contribution, and sources, generally dependent on X, u_θ, and the coordinates (r,z,t), but without any further specification: i(∂_tX + L[(𝒥_{Q,0} + 𝒬_0) ∂_zX]) = -(1/2) ln δ M̂ X - L[ℋ̂ + ℐ̃̂ + 𝒥̂] X + i u_s - i L[ℋ_{Q,1} + ℐ_{Q,1} + 𝒥_{Q,1}]. By identifying structural components, the centerline equation takes the form i(∂_tX + A ∂_zX) = Ĥ X + i Q + i u_s. Note that the left-hand side of eq. (<ref>) takes the form of an advection operator while the right-hand side involves linear and non-linear source terms. <cit.> pointed out that the adiabatic time evolution of the vortex centerline poses an eigenproblem. For that case the (homogeneous) centerline equation in the complex plane is written as i ∂_t X_h = Ĥ X_h. As the spectrum ω_k of Ĥ is real, (<ref>) can be interpreted as a Schrödinger-type equation; hence eigenmodes X_k precess with the angular frequency ω_k in the complex plane. For the adiabatic problem we therefore find that X_k(t) = e^{-i ω_k t} X_k(t=0). For the numerical experiments presented in section <ref> we use the first non-trivial eigenmode for initialization, corresponding to the non-zero eigenvalue with the smallest magnitude. § NUMERICAL SCHEME FOR ASYMPTOTIC EQUATIONS By providing a closure (Q_0, Q_1) = F(X, u_θ, t), equations (<ref>) and (<ref>) form a closed set of partial differential equations which in general cannot be solved analytically.
Thus, we further analyze the structure of these equations, seeking an adapted numerical method that allows for efficient and stable time integration. <cit.> first presented a numerical scheme for solving the coupled system (<ref>) and (<ref>). While he followed a method-of-lines approach, discretizing the spatial derivatives by fourth-order approximations and solving the resulting system of ordinary differential equations by generic integrators, we make use of the structure revealed in appendix <ref>. We revisited all the equations presented by <cit.> needed for closure and further performed the split of (<ref>) into linear and non-linear contributions, which leads to a quasi-Hamiltonian substructure and gives rise to the numerical scheme presented in subsection <ref>. In addition to the centerline equation (<ref>), the time evolution of the tangential velocity is re-written as ∂_t u_θ + u_r,00 (1/r) ∂_r(r u_θ) + w_0 ∂_z u_θ = -u_r,*(u_θ/r + f) - u_r,00 f to identify an advection term (in polar coordinates) on the left-hand side and source terms on the right-hand side. The integration scheme for this equation is presented in subsection <ref>. From our analysis we learned that the system of equations (<ref>) and (<ref>) is assembled from three prototypes of partial differential equations: i) the advection equation, ii) the Schrödinger equation, and iii) non-linearly coupled ordinary differential equations (source terms). Given the scope of this work, we restrict ourselves to asymmetric diabatic heat release, which allows us to drop all terms referring to symmetric vertical and radial motions. We further neglect background wind shear. Hence we drop all the advective terms from eqs. (<ref>) and (<ref>) and set u_s = 0. §.§ Integration of the centerline For the integration of the centerline position, the general strategy is to integrate the nonlinear source terms by the trapezoidal rule and the Sturm–Liouville operator (after appropriate spatial discretization) by the implicit midpoint rule. The choice of the latter is based on the idea of preserving unitarity during the integration of the linear part of the equation. We also dropped the contributions due to shear, as they are not discussed in section <ref>. The composition of the sub-steps reads: X^* = (1/2)Δt Q(u^n, X^n, t^n) + X^n, X^** = (1 - (1/2)iΔt Ĥ(u^{n+1/2})) X^*, X^*** = (1 + (1/2)iΔt Ĥ(u^{n+1/2}))^{-1} X^**, X^{n+1} = (1/2)Δt Q(u^{n+1}, X^{n+1}, t^{n+1}) + X^***, where 1 represents the identity operator. For the evaluation of Ĥ we need u^{n+1/2}, which we obtain by the first-order predictor u^{n+1/2} = u^*. §.§ Integration of the tangential velocity The integration of the tangential velocity equation is accomplished by applying the trapezoidal rule to the source term. Note that after dropping the symmetric diabatic contributions, all differential expressions in equation (<ref>) vanish, and we only have to integrate the non-differential source term proportional to u_r,*. The integration scheme for one timestep Δt reads u^* = -(1/2)Δt u_r,*^n (u^n/r + f) + u^n, u^{n+1} = -(1/2)Δt u_r,*^{n+1}(u^{n+1}/r + f) + u^*. For clarity we dropped the θ subindex. It becomes obvious that the final step of the integration requires an implicit solution strategy, as the term u_r,*^{n+1} depends on both X and u at time level n+1. §.§ Coupled integration The integration scheme stated above involves information from previous sub-steps for the coupled integration. In both the explicit and the implicit forcing, the equations are coupled to each other.
For the implicit (finalizing) step it is necessary to iterate, solving for X^{n+1} and u^{n+1} to second order: X^{n+1,0} = X^***, u^{n+1,0} = u^**, X^{n+1,ν} = (1/2)Δt Q(u^{n+1,ν-1}, X^{n+1,ν-1}, t^{n+1}) + X^***, u^{n+1,ν} = -(1/2)Δt u_r,*^{n+1,ν}(u^{n+1,ν}/r + f) + u^*. This integration strategy is adopted from the literature on implicit methods for fluid dynamics <cit.>. §.§ Details on the spatial discretization The equations are discretized on an equidistant grid, allowing for straightforward finite-difference approximations of the derivative operators. Boundary conditions are accommodated by extending the grid covering the physical domain with a ghost layer of two cells. Solution values are stored at cell centers, while first derivatives are typically computed on the corresponding faces. Prototypical differential expressions such as α∂_z(β∂_zψ) are discretized as: α∂_z(β∂_zψ)|_{z=z_i} = (1/Δz^2) α_i (β_{i+1/2}(ψ_{i+1} - ψ_i) - β_{i-1/2}(ψ_i - ψ_{i-1})). Integrals are computed via the trapezoidal rule, where ghost-cell values are obtained by quadratic extrapolation. We further include an option to apply hyper-viscosity to the centerline, stabilizing the time integration when diabatic heating is activated. Further details may be taken from the source code, available on demand from the corresponding author. § DETAILS ON THE NUMERICAL IMPLEMENTATION §.§ Dimensional variables Though the derivation outlined above is carried out in terms of non-dimensional variables, for the actual implementation into EULAG we used dimensional quantities, some details of which are presented in this section. In the spirit of asymptotic analysis, we reconstruct dimensional variables and formulate leading-order relations using leading-order or next-to-leading-order modes. Before presenting the specific relations which arise from the asymptotic analysis, we want to relate the expansion modes to mean background values, denoted by bars (·)‾, and perturbations, denoted by primes (·)': ρ̄ = ρ_ref(ρ_0 + δ^2 ρ_2 + δ^4 ρ̂_4 + O(δ^5)), ρ' = ρ_ref(δ^4 ρ̂_4' + O(δ^5)), p̄ = p_ref(p_0 + δ^2 p_2 + δ^4 p̂_4 + O(δ^5)), p' = p_ref(δ^4 p̂_4' + O(δ^5)), Θ̄ = T_ref(Θ_0 + δ^2 Θ_2 + δ^4 Θ̂_4 + O(δ^5)), Θ' = T_ref(δ^4 Θ̂_4' + O(δ^5)). Furthermore, we have u_θ = u·e_θ = u_ref(δ^{-1} u_θ,0 + O(1)) and w = u_ref(δ ŵ_1 + O(δ^2)). A trivial but useful observation is that the horizontal mean of a perturbation vanishes, ((·)')‾ = 0. §.§.§ Pressure Equation (<ref>) balances the pressure gradient with radial forces. We find the dimensional version as (1/ρ) ∂p/∂r = u_θ^2/r. §.§.§ Potential temperature For the deviation of potential temperature from its background mean value Θ̄ we have, for the first Fourier modes, Θ_1 = -(Θ̄/g)(1/ρ)(∂p/∂r)(∂X/∂z). §.§.§ Vertical velocity In general, i.e., for arbitrary heating, the vertical velocity takes the following form in physical dimensions: w = (1/(∂Θ̄/∂z))(Q_Θ + (Θ̄/g)(u_θ^3/r^2)(R_{-π/2} ∂X/∂z)·e_r). §.§.§ Diabatic heating Finally, we defined a heating dipole aligned at an angle θ_0 relative to the tilt direction: Q_Θ^{θ_0} = (Θ̄/g)(u_θ^3/r^2)(R_{θ_0} ∂X/∂z)·e_r. §.§ Convergence results In addition to comparing the simulation results of the three-dimensional and asymptotic equations, we check the convergence of each numerical scheme. For both simulations we check for self-consistency by comparing results at increasing resolution with a highly resolved reference. Convergence of the EULAG simulations is displayed in figure <ref>, showing second-order convergence. The numerical scheme solving the asymptotic equations, on the other hand, is tested by evolving X(t=0) = (cos(zπ/z_max), 0)^T up to T = 0.1 (in asymptotic units) and comparing solutions at different resolutions against a reference solution with 1280 grid points.
The results are plotted in figure <ref>, indicating second-order convergence as expected.
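For concreteness, the prototypical stencil α∂_z(β∂_zψ) from the discretization appendix can be written out in a few lines of code. The following NumPy sketch is our illustration only and not part of the published source code; in particular, obtaining the face values β_{i±1/2} by arithmetic averaging of the cell-centered values is an assumption on our part, and the two-cell ghost layer described above is omitted.

import numpy as np

def apply_sturm_liouville(psi, alpha, beta, dz):
    # Discretizes alpha * d/dz(beta * d/dz psi) at interior cell centers
    # of an equidistant grid; psi, alpha, beta are cell-centered arrays.
    beta_face = 0.5 * (beta[:-1] + beta[1:])   # beta_{i+1/2} (assumed averaging)
    flux = beta_face * (psi[1:] - psi[:-1])    # beta_{i+1/2} * (psi_{i+1} - psi_i)
    out = np.zeros_like(psi)
    out[1:-1] = alpha[1:-1] * (flux[1:] - flux[:-1]) / dz**2
    return out                                 # boundary cells left to the ghost layer

A quick self-test with alpha = beta = 1 and psi = sin(z) reproduces -sin(z) at the interior points, with an error that drops by roughly a factor of four per grid doubling, consistent with the second-order convergence reported above.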
http://arxiv.org/abs/1708.07674v3
{ "authors": [ "Tom Dörffel", "Ariane Papke", "Rupert Klein", "Natalia Ernst", "Piotr Smolarkiewicz" ], "categories": [ "physics.flu-dyn", "physics.ao-ph" ], "primary_category": "physics.flu-dyn", "published": "20170825100736", "title": "Intensification of tilted atmospheric vortices by asymmetric diabatic heating" }
Annular and pants thrackles Grace Misereh, Yuri Nikolayevsky ================================================================================ A thrackle is a drawing of a graph in which each pair of edges meets precisely once. Conway's Thrackle Conjecture asserts that a thrackle drawing of a graph on the plane cannot have more edges than vertices. We prove the Conjecture for thrackle drawings all of whose vertices lie on the boundaries of d ≤ 3 connected domains in the complement of the drawing. We also give a detailed description of thrackle drawings corresponding to the cases d=2 (annular thrackles) and d=3 (pants thrackles). § INTRODUCTION Let G be a finite simple graph with n vertices and m edges. A thrackle drawing of G on the plane is a drawing 𝒟: G → ℝ^2 in which every pair of edges meets precisely once, either at a common vertex or at a point of proper crossing (see <cit.> for the definitions of a drawing of a graph and a proper crossing). The notion of thrackle was introduced in the late sixties by John Conway, in relation to the following conjecture. For a thrackle drawing of a graph on the plane, one has m ≤ n. Despite considerable effort <cit.>, the conjecture remains open. The best known bound for a thrackleable graph with n vertices is m ≤ 1.3984 n <cit.>. Adding a point at infinity, we can consider a thrackle drawing on the plane as a thrackle drawing on the 2-sphere S^2. The complement of a thrackle drawing on S^2 is the disjoint union of open discs. We say that a drawing belongs to the class T_d, d ≥ 1, if there exist d open discs D_1, …, D_d whose closures are pairwise disjoint such that all the vertices of the drawing lie on the union of their boundaries (a disc may contain no vertices on its boundary). We say that two thrackle drawings of class T_d are isotopic if they are isotopic as drawings on S^2 ∖ (∪_{k=1}^d D_k). We will also occasionally identify a graph G with its thrackle drawing 𝒟(G), speaking, for example, of the vertices and edges of the drawing. Thrackles of class T_1 are called outerplanar: all their vertices lie on the boundary of a single disc D_1. Such thrackles are very well understood. Suppose a graph G admits an outerplanar thrackle drawing. Then *any cycle in G is odd <cit.>; *the number of edges of G does not exceed the number of vertices <cit.>; *if G is a cycle, then the drawing is Reidemeister equivalent to a standard odd musquash <cit.>. We say that two thrackle drawings are Reidemeister equivalent (or equivalent up to Reidemeister moves) if they can be obtained from one another by a finite sequence of Reidemeister moves of the third kind in the complement of the vertices (see Section <ref>). A standard odd musquash is the simplest example of a thrackled cycle: for n odd, distribute n vertices evenly on a circle and then join by an edge every pair of vertices at the maximal distance from each other. This defines a musquash in the sense of Woodall <cit.>: an n-gonal musquash is a thrackled n-cycle whose successive edges e_0,…,e_{n-1} intersect in the following manner: if the edge e_0 intersects the edges e_{k_1},…,e_{k_{n-3}} in that order, then for all j=1,…,n-1, the edge e_j intersects the edges e_{k_1+j},…,e_{k_{n-3}+j} in that order, where the edge subscripts are computed modulo n.
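The construction of the standard odd musquash is explicit enough to verify mechanically. The following Python sketch is our illustration, not part of the original text; it builds the edge list for odd n and checks the thrackle condition for the straight-line drawing, using the elementary fact that two chords of a circle cross exactly when their endpoints interleave around the circle.

from itertools import combinations

def standard_musquash(n):
    # Vertices 0, ..., n-1 evenly spaced on a circle; join each i to the
    # vertex at maximal distance, i + (n-1)/2 (mod n). For odd n the n
    # resulting edges form a single n-cycle drawn as a star polygon.
    assert n % 2 == 1 and n >= 3
    k = (n - 1) // 2
    return [(i, (i + k) % n) for i in range(n)]

def chords_cross(e, f, n):
    # Straight chords with four distinct endpoints cross iff the endpoint
    # pairs separate each other on the circle.
    (a, b), (c, d) = e, f
    def inside(x, y, z):  # z strictly on the counterclockwise arc from x to y
        return 0 < (z - x) % n < (y - x) % n
    return inside(a, b, c) != inside(a, b, d)

def is_thrackle(edges, n):
    # Adjacent straight chords meet only at their common vertex, so it
    # suffices that every vertex-disjoint pair of edges crosses.
    return all(chords_cross(e, f, n)
               for e, f in combinations(edges, 2) if not set(e) & set(f))

For instance, is_thrackle(standard_musquash(7), 7) returns True, while the edge set of the regular 7-gon fails the check, since disjoint sides of a convex polygon do not cross.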
A complete classification of musquashes was obtained in <cit.>: every musquash is either isotopic to a standard n-musquash, or is a thrackled six-cycle. In this paper, we study thrackle drawings of the next two classes T_d: annular thrackles and pants thrackles. A thrackle drawing of class T_2 is called annular. Up to isotopy, we can assume that the boundaries of D_1 and D_2 are two concentric circles on the plane, and that the thrackle drawing, except for the vertices, lies entirely in the open annulus bounded by these circles. Clearly, any outerplanar drawing can be viewed as an annular drawing. Figure <ref> shows an example of an annular thrackle drawing which is not outerplanar. Note, however, that the underlying graph has some vertices of degree 1 (which must always be the case by Theorem <ref>(<ref>) below). We show that the three assertions of Theorem <ref> also hold for annular drawings. Suppose a graph G admits an annular thrackle drawing. Then *any cycle in G is odd; *the number of edges of G does not exceed the number of vertices; *if G is a cycle, then the drawing is, in fact, outerplanar (and as such, is Reidemeister equivalent to a standard odd musquash). We next proceed to the thrackle drawings of class T_3. We call such drawings pants thrackle drawings or pants thrackles. Any annular thrackle drawing is trivially a pants thrackle drawing. The pants thrackle drawing of a six-cycle in Figure <ref> is not annular. We prove the following. Suppose a graph G admits a pants thrackle drawing. Then *any even cycle in G is a six-cycle, and its drawing is Reidemeister equivalent to the one in Figure <ref>; *if G is an odd cycle, then the drawing can be obtained from a pants drawing of a three-cycle by a sequence of edge insertions; *the number of edges of G does not exceed the number of vertices. The procedure of edge insertion replaces an edge in a thrackle drawing by a three-path such that the resulting drawing is again a thrackle – see Section <ref> for details. The ideas of the proofs are roughly as follows. There is a toolbox of operations one can perform on a thrackled graph while preserving thrackleability, operations that have been used in the past literature on thrackles; these include edge insertion, edge removal and vertex splitting. We investigate how these operations interact with the more restrictive annular or pants conditions. One key observation (Lemma <ref> below) is that, in order to preserve thrackleability, edge removal hinges on an empty-triangle condition which blends well with the annular or the pants structure. This allows the study of irreducible thrackles, which are those for which no edge removal is possible. We prove that irreducible thrackled cycles are either triangles or, in the case of a pants drawing, a six-cycle. § THRACKLE OPERATIONS §.§ Edge insertion and edge removal The operation of edge insertion was introduced in <cit.>; given a thrackle drawing, one replaces an edge by a three-path in such a way that the resulting drawing is again a thrackle. All the changes to the drawing are performed in a small neighbourhood of the edge, as shown in Figure <ref>. Edge insertion on a given edge is not uniquely defined, even up to isotopy and Reidemeister moves, as we can choose one of two different orientations of the crossing of the first and the third edge of the three-path by which we replace the edge. We want to formalise and slightly modify the edge insertion procedure.
Given an edge e = uv, in the first step we remove from it a small segment Q_1Q_2 lying in the interior of e and containing no crossings with other edges. In the second step, we slightly extend the segments uQ_1 and Q_2v so that they cross (with one of two possible orientations), and then further extend each of them to cross other edges in such a way that the resulting drawing is again a thrackle. In the third step, we join the two endpoints of degree 1 of the two edges obtained in the previous step so that the resulting drawing is again a thrackle. We make two observations regarding this process of edge insertion. First, it may happen that we change the drawing not only in a small neighbourhood of e, but also “far away" from it. Figure <ref> shows two Reidemeister inequivalent thrackled seven-cycles obtained from the standard 5-musquash by edge insertion. Note that the orientations of all the crossings in the two thrackles are the same (we note in passing that, up to isotopy and Reidemeister moves, there exist only three thrackled seven-cycles: the two shown in Figure <ref> and the standard 7-musquash; this can be proved using the algorithm given at the end of Section 3 of <cit.>). Our second observation is that edge insertion may not always be possible within the same class T_d. For example, in the proof of assertion (<ref>) of Theorem <ref> in Section <ref>, it will be shown that no edge insertion on the pants thrackle drawing of the six-cycle shown in Figure <ref> produces a pants thrackle drawing. The operation of edge removal is inverse to the edge insertion operation. Let 𝒟(G) be a thrackle drawing of a graph G and let v_1v_2v_3v_4 be a three-path in G such that deg v_2 = deg v_3 = 2. Let Q = 𝒟(v_1v_2) ∩ 𝒟(v_3v_4). Removing the edge v_2v_3, together with the segments Qv_2 and Qv_3, we obtain a drawing of a graph with a single edge v_1v_4 in place of the three-path v_1v_2v_3v_4 (Figure <ref>). Edge removal does not necessarily result in a thrackle drawing. Consider the triangular domain Δ bounded by the arcs v_2v_3, Qv_2 and v_3Q and not containing the vertices v_1 and v_4 (if we consider the drawing on the plane, Δ can be unbounded). We have the following lemma. Edge removal results in a thrackle drawing if and only if Δ contains no vertices of 𝒟(G). Note that for a thrackle drawing of class T_d, the condition of Lemma <ref> is satisfied if Δ contains none of the d circles bounding the discs D_k. Given a thrackle drawing of class T_d of an n-cycle, edge removal, if it is possible, produces a thrackle drawing of the same class T_d of an (n-2)-cycle. We call a thrackle drawing irreducible if it admits no edge removals, and reducible otherwise. To a path in a thrackle drawing of class T_d we can associate a word W in the alphabet X = {x_1, …, x_d} in such a way that the i-th letter of W is x_k if the i-th vertex of the path lies on the boundary of the disc D_k. For a thrackled cycle, we consider the associated word W to be a cyclic word. For a word w and an integer m, we denote by w^m the word obtained by m consecutive repetitions of w. We have the following simple observation. For a thrackle drawing of a graph G of class T_d, *for no two distinct i, j = 1, …, d may a thrackle drawing of class T_d contain two edges with the words x_i^2 and x_j^2; *suppose that for some i = 1, …, d, a thrackle drawing of class T_d contains a two-path with the word x_i^3 the first two vertices of which have degree 2; then the drawing is reducible.
(<ref>) is obvious, as otherwise the thrackle condition would be violated by the corresponding two edges. (<ref>) The complement of the two-path in S^2 ∖ (∪_{k=1}^d D_k) is the union of three domains, exactly one of which has the two-path on its boundary. That domain can contain no other vertices of the thrackle, inside it or on its boundary, as otherwise the thrackle condition is violated. But then, by Lemma <ref>, edge removal can be performed on the three-path which is the union of the given two-path and the edge of the graph incident to its first vertex. §.§ Reidemeister moves A Reidemeister move can be performed on a triple of pairwise non-adjacent edges of a thrackle drawing if the open triangular domain bounded by the segments on each of the edges between the crossings with the other two contains no points of the drawing – see Figure <ref>. We say that two thrackle drawings are Reidemeister equivalent if one can be obtained from the other by a finite sequence of Reidemeister moves. Suppose that two thrackles 𝒟_1(G) and 𝒟_2(G) can be obtained from one another by a Reidemeister move on a triple of edges e_i, e_j, e_k. From Lemma <ref> it follows that if 𝒟_1(G) admits edge removal on a three-path not containing these three edges, then 𝒟_2(G) also does; moreover, after the edge removals the resulting two thrackles can again be obtained from one another by the same Reidemeister move. However, adding an edge to 𝒟_1(G) may result in a thrackle which is not Reidemeister equivalent to any thrackle obtained from 𝒟_2(G) by adding an edge, as the added edge may end at a vertex inside the triangular domain Δ_ijk bounded by e_i, e_j, e_k. The same is true for edge insertion on 𝒟_1(G). Now suppose that 𝒟_1(G) and 𝒟_2(G) belong to a class T_d. The domains Δ_ijk in both 𝒟_1(G) and 𝒟_2(G) contain no vertices. If we additionally require that they contain no “inessential" discs D_l, those having no vertices on their boundaries, then the edge added to 𝒟_1(G) cannot end in Δ_ijk, and so we can add a corresponding edge to 𝒟_2(G) such that the resulting two thrackles are again Reidemeister equivalent. §.§ Forbidden configurations A graph having more edges than vertices always contains one of the following subgraphs: a theta-graph (two vertices joined by three disjoint paths), a dumbbell (two disjoint cycles with a path joining a vertex of one cycle to a vertex of another), or a figure-8 graph (two cycles sharing a vertex). To prove Conway's Thrackle Conjecture it is therefore sufficient to show that none of these three graphs admits a thrackle drawing. Repeatedly using the vertex-splitting operation <cit.> one can show that the existence of a counterexample of any of these three types implies the existence of a counterexample of the other two types. However, this may not be true for thrackle drawings of class T_d, as the required vertex-splitting operation on a vertex of degree 3 may not be permitted within the class T_d. The problem is that in order to remain within the class T_d, vertex-splitting on a vertex of degree 3 may only be performed by doubling the “middle" edge (as on the left in Figure <ref>), and this is too restrictive; for example, starting with a dumbbell, vertex-splitting within the class T_d might only increase the length of the dumbbell handle. So one might not be able to reduce a dumbbell to a figure-8 graph.
Nevertheless, if we are given a thrackle drawing of class T_d of a figure-8 graph, we can always perform the vertex-splitting operation on the vertex of degree 4 to obtain a thrackle drawing of the same class T_d of a dumbbell, as on the right in Figure <ref>. This gives the following lemma. To prove Conway's Thrackle Conjecture for thrackle drawings in a class T_d it is sufficient to prove that no dumbbell and no theta-graph admits a thrackle drawing of class T_d. In both cases, the corresponding graph contains an even cycle. The second assertion is clear for a theta-graph, and for a dumbbell it follows from the fact that a thrackleable graph contains no two vertex-disjoint odd cycles <cit.>. § ANNULAR THRACKLES In this section, we prove Theorem <ref>. We can assume that the thrackle drawing lies in the closed annulus bounded by two concentric circles on the plane, the outer circle A and the inner circle B; the vertices lie on A ∪ B, and the rest of the drawing, in the open annulus. As in Section <ref>, we can associate to a path within a thrackle a word in the alphabet {a,b}, where the letter a (respectively b) corresponds to a vertex lying on A (respectively on B). To an annular thrackle drawing of an n-cycle there corresponds a word W defined up to cyclic permutation and reversal. The following lemma and the fact that edge removal decreases the length of a cycle by 2 imply assertion (<ref>). If an n-cycle admits an irreducible annular thrackle drawing, then n = 3. By Lemma <ref>(<ref>) we can assume that W contains no two consecutive b's. If W contains no letters b at all, then the thrackle is outerplanar and the assertion of the lemma follows from Theorem <ref>(<ref>). Assuming that W contains at least one b, we get that W contains a sequence aba. Suppose n > 3; then n ≥ 5, as no 4-cycle admits a thrackle drawing on the plane. Consider the next letter in W. Up to isotopy, there are three possible ways of adding an extra edge. As the reader may verify, two of them produce a reducible thrackle by Lemma <ref>. The third one is shown in the middle in Figure <ref>. But then there is only one way to add the next edge, as on the right in Figure <ref>, and the resulting thrackle drawing is reducible. By Lemma <ref>, if in the class of annular thrackles there exists a counterexample to Conway's Thrackle Conjecture, then there exists such a counterexample whose underlying graph contains an even cycle. So assertion (<ref>) follows from assertion (<ref>). We now prove assertion (<ref>). Suppose a cycle c of an odd length n admits an annular thrackle drawing. We can assume that the corresponding word W contains at least one b and does not contain b^2. Up to cyclic permutation, W = a^{2p}(ba)^r b for some p ≥ 1, r ≥ 0. As n is odd, W contains a subword a^2. Let a^k, k ≥ 2, be a maximal by inclusion string of consecutive a's. If k = n-1, we are done. Otherwise, up to cyclic permutation, W = a^k b w b for some word w. Consider the edge e defined by the last pair aa in a^k. Let γ be the arc of A joining the endpoints of e such that the domain bounded by e ∪ γ does not contain B. Every edge of the thrackle not sharing a common vertex with e crosses it, so every second vertex, counting from the last a in a^k, lies in the interior of γ. It follows that W = a^k b a y_1 a y_2 a … y_q a b, where y_i ∈ {a, b}, and so k is necessarily even. By the same reasoning, any maximal sequence of more than one consecutive a's in W is even.
But then y_i = b for all i = 1, …, q, as otherwise W would contain a maximal sequence of consecutive a's of an odd length greater than one. To prove assertion (<ref>) we show that any annular thrackled cycle is alternating; the claim then follows from the fact that alternating thrackles are outerplanar, as was proved in <cit.>. Recall that a thrackled cycle is called alternating if for every edge e and every two-path fg vertex-disjoint from e, the crossings of e by f and g have opposite orientations. Suppose c is a cycle of the shortest possible length which admits a non-alternating annular thrackle drawing 𝒟(c); the length of c must be at least 7. An easy inspection shows that any edge vertex-disjoint from a two-path aba (or bab) crosses its edges with opposite orientations. The same is true for a two-path a^3. It remains to show that any edge vertex-disjoint from a two-path aab also crosses the edges of that two-path with opposite orientations. Up to isotopy, the only drawing for which this is not true is the one shown in Figure <ref>. Note that the edge which violates the alternating condition necessarily joins an a-vertex and a b-vertex. We claim that such a drawing cannot be a part of 𝒟(c). To see this, we consider the possible drawings of the four-path in c which extends the path a_1a_2b_1. The vertex following b_1 must be an a-vertex (call it a_3); there are two possible cases: a_3 = a' and a_3 ≠ a'. In the first case, up to isotopy, we get the drawing on the left in Figure <ref>, and then there is only one possible way to attach an edge at a_1, as shown on the right in Figure <ref>. But then, performing edge removal on a_1a_2, we get a shorter non-alternating annular thrackled cycle, a contradiction. Now suppose a_3 ≠ a'. We have two cases for adding the edge b_1a_3, and then by Lemma <ref>, the letter after a_3 must be a b. In the first case, up to isotopy and a Reidemeister move, we get the drawing on the left in Figure <ref>, and then we can attach the edge joining a_3 to a b-vertex uniquely, up to isotopy and a Reidemeister move, as on the right in Figure <ref>. Again, performing edge removal on b_1a_3 we get a shorter non-alternating annular thrackled cycle. The second possibility of attaching the edge b_1a_3, with a_3 ≠ a', to the drawing in Figure <ref> is the one shown on the left in Figure <ref>, up to isotopy. Then the edge joining a_3 to the next b-vertex can also be added uniquely, up to isotopy, as on the right in Figure <ref>, and yet again, edge removal on b_1a_3 results in a shorter non-alternating annular thrackled cycle. This completes the proof of Theorem <ref>. § PANTS THRACKLES In this section, we prove Theorem <ref>. We represent the pair-of-pants domain P whose closure contains the drawing as the interior of an ellipse with two disjoint closed discs removed. To a path in a pants thrackle drawing we associate a word in the alphabet {a, b, c}, where a corresponds to the vertices on the ellipse, and b and c to the vertices on the circles bounding the discs (e.g., as in Figure <ref>). We start with the following proposition, which implies assertion (<ref>) of Theorem <ref> and will also be used in the proof of assertion (<ref>). If a cycle C admits an irreducible pants thrackle drawing, then C is either a three-cycle or a six-cycle, and in the latter case the drawing is Reidemeister equivalent to the one in Figure <ref>. Let W be the (cyclic) word corresponding to an irreducible pants thrackle drawing of a cycle C.
The following lemma can be compared to Lemma <ref>. If W contains a^2, then one of the two domains of the complement of the corresponding edge in P is a disc, the cycle C is odd, and W = a a y_1 a y_2 … y_{m-1} a y_m, where y_i ∈ {b, c} for i = 1, …, m. Suppose no domain of the complement of an edge aa is a disc. By Lemma <ref>(<ref>), neither the letter which precedes a^2 in W nor the next letter after a^2 is a, and for the corresponding edges to cross, those two letters must be the same, say b. If the corresponding three-path baab is irreducible, it has to be isotopic to the path on the left in Figure <ref>. But then there is a unique way, up to isotopy, to add to the path the starting segment of the next edge, and it produces a reducible three-path, as on the right in Figure <ref>. It follows that if W contains a^2, then one of the two domains of the complement of the corresponding edge in P is a disc. But then, by the thrackle condition, every second vertex counting from the second a in aa is again a, so W = a a y_1 a y_2 … y_{m-1} a y_m for some y_i ∈ {a, b, c}. In particular, C is an odd cycle, and furthermore, none of the y_i can be equal to a by Lemma <ref>(<ref>). Suppose the word W contains no subwords bb and cc. Then it contains no subwords caba or baca. Arguing by contradiction (and renaming the letters if necessary), suppose that W contains the subword caba. The only irreducible three-path corresponding to that subword, up to isotopy, is shown on the left in Figure <ref>. Suppose that the next letter in W is not a. Then the only irreducible four-path extending caba, up to isotopy, is the one shown on the right in Figure <ref>. If C is of length five, then W = cabac, which contradicts the fact that the (cyclic) word W does not contain a subword cc. Otherwise, there are only three possible ways, up to isotopy and a Reidemeister move, to add another edge starting at the last added vertex c in such a way that the resulting drawing is a thrackled path. But one of them results in a reducible drawing, and the other two end in c, contradicting the fact that W does not contain a subword cc. It follows that the letter following caba in W must be an a, so we get a subword cabaa. If the length of the cycle C is greater than 5, then by Lemma <ref>, the letter which precedes c must be a, so W contains the subword acabaa. But then the above argument applied to the subword acab (if we reverse the direction of C and swap b and c) implies that the letter which precedes the starting a is another a, so that W contains the subword aacabaa, contradicting Lemma <ref>. If C is of length five, then W = cabaa and the resulting drawing is reducible by Lemma <ref>, as there is just a single b in W, and so the triangular domain corresponding to the three-path caba on the left in Figure <ref> contains no other vertices of the thrackle. Now if W contains the subword aa, then by Lemma <ref> and Lemma <ref>, the word W may contain only one of the letters b or c. Then the drawing is annular, and hence by Lemma <ref> it is reducible unless C is a three-cycle. Suppose W contains no letter repetitions. Then Lemma <ref> applies to any subword xyzy such that {x, y, z} = {a, b, c}. Furthermore, up to renaming the letters, we can assume that W starts with ab. If W contains no subword abc, then W = (ab)^m, and so the drawing is annular. We can therefore assume that W contains a subword abc. Then the following letter cannot be either b or c, so it must be an a.
Repeating this argument we obtain that W = (abc)^m. We now modify the word W by attaching to every letter a subscript plus (respectively, minus) if the tangent vector to the drawing in the direction of the cycle C makes a positive (respectively, negative) turn at the corresponding vertex; in other words, the subscript is a plus (respectively, a minus) if the path turns left (respectively, right) at the vertex. We will occasionally omit the subscript when it is unknown or unimportant. Note that if the length of C is greater than 3, then no two consecutive subscripts in the word W = (abc)^m can be the same. Indeed, assume that W contains a subword ab_+c_+a. Then the corresponding irreducible three-path is unique up to isotopy, as shown on the left in Figure <ref>, and the only possible way to attach an edge ab results in a reducible drawing, as on the right in Figure <ref>. By reflection, a similar argument applies to subwords of the form ab_-c_-a. It follows that the subscripts in W alternate, and in particular, the length of C is divisible by 6. There are two drawings of the three-path ab_+c_-a, both irreducible, as shown in Figure <ref>. They differ by the orientation of the crossing of the edges ab and ca. If we change the direction on C and swap the letters b and c, the subword ab_+c_-a does not change. By reflection, a similar comment applies to subwords of the form ab_-c_+a. Hence the whole word W is unchanged, with all the subscripts, but the orientations of the crossings of the edges ab and ca are reversed. We therefore lose no generality by assuming that the subword ab_+c_-a is represented by the three-path on the left in Figure <ref>. We can then uniquely, up to isotopy, add an edge ca at the starting vertex a, as on the left in Figure <ref>, which produces the four-path corresponding to the subword ca_-b_+c_-a. Furthermore, up to isotopy and a Reidemeister move, we can uniquely add an edge bc at the starting vertex c, as on the right in Figure <ref>. We get the five-path corresponding to the subword bc_+a_-b_+c_-a. One possibility for completing the cycle would be to now join the degree-one vertices a and b of the five-path by an edge. This can be done uniquely up to isotopy and produces an irreducible pants thrackle drawing of a six-cycle corresponding to the word W = b_-c_+a_-b_+c_-a_+, as in Figure <ref>. Any other such drawing is equivalent to it up to isotopy and Reidemeister moves (which were possible at the intermediate steps of our construction). Otherwise, we can extend the five-path to a six-path corresponding to the subword ab_-c_+a_-b_+c_-a by adding an edge ab at the start. The resulting six-path is equivalent, up to isotopy and Reidemeister moves, to the one on the left in Figure <ref>. But then no edge ca (with the correct orientation at a) can be added at the start of the six-path: up to isotopy and Reidemeister moves, the only edge we can add does not start at c, as on the right in Figure <ref>. This completes the proof of the Proposition. By the Proposition, if an even cycle of length greater than 6 has a pants thrackle drawing, then that drawing must be reducible. Hence, to prove assertion (<ref>) of Theorem <ref>, it suffices to show that the pants thrackle drawing of the six-cycle in Figure <ref> (or one Reidemeister equivalent to it) admits no edge insertion such that the resulting thrackle drawing of the eight-cycle is again a pants thrackle drawing. One possible way is to consider all edge insertions following the procedure in Section <ref>.
But as the resulting thrackles are sufficiently small, all these cases can be treated by computer. Using the algorithm given at the end of Section 3 of <cit.>, we found that, up to isotopy and Reidemeister moves, there exist exactly three thrackled eight-cycles; they are shown in Figure <ref>. Each of them is obtained by edge insertion in a thrackled six-cycle and belongs to class T_4, but none of them is a pants thrackle. This proves assertion (<ref>) of Theorem <ref>. It remains to prove assertion (<ref>). By Lemma <ref>, it suffices to show that if G is either a theta-graph or a dumbbell, then it admits no pants thrackle drawing. We also know from Lemma <ref> that in both cases G contains an even cycle, which by assertion (<ref>) must be a six-cycle whose thrackle drawing is Reidemeister equivalent to the one in Figure <ref>. The proof goes as follows: we explicitly construct pants thrackle drawings of a six-cycle with certain small trees attached to one of its vertices, and we first show that in a pants thrackle drawing of a three-path attached to a six-cycle, the drawing of the three-path is reducible. Repeatedly performing edge removals, we get a pants thrackle drawing either of a theta-graph obtained from a six-cycle by joining two of its vertices by a path of length at most 2, or of a dumbbell consisting of a six-cycle and some other cycle joined by a path of length at most 2. The resulting theta-graphs are very small, and from <cit.> we know that they admit no thrackle drawing at all, and in particular no pants thrackle drawing (the latter fact will also be confirmed in the course of the proof). Every resulting dumbbell contains one of two subgraphs obtained from the six-cycle by attaching a small tree, as in Figure <ref>. We show that for a pants thrackle drawing of each of these two subgraphs, to at least one of the two vertices v_1, v_2 it is not possible to attach another edge so that the resulting drawing is a pants thrackle drawing. We start with the pants thrackle drawing of the six-cycle and attach a path to one of its vertices. By cyclic symmetry, we can choose any vertex to attach the path to. Moreover, from the arguments in Section <ref> it follows that Reidemeister moves on the original six-cycle and at the intermediate steps of adding edges will result in a Reidemeister equivalent drawing in the end. So we can attach the path edge-by-edge, choosing one of the Reidemeister equivalent drawings arbitrarily at each step. Up to isotopy and Reidemeister moves, there are two ways to attach an edge to a vertex of the drawing of the six-cycle, as in Figure <ref>. Note that the second endpoint of this edge is not one of the vertices of the six-cycle (so that no theta-graph obtained by joining two vertices of a six-cycle by an edge admits a pants thrackle drawing), and that in the two cases shown in Figure <ref>, it lies on different boundary components of P. It follows that, renaming b and c and changing the direction on the cycle and the orientation of the plane, we obtain two Reidemeister equivalent drawings. We continue with the one on the left in Figure <ref> and attach another edge at the vertex of degree 1.
This can be done uniquely up to isotopy and Reidemeister equivalence, resulting in the drawing on the left in Figure <ref>. Again, the second endpoint of the attached edge cannot be one of the vertices of the six-cycle (so that no theta-graph obtained by joining two vertices of a six-cycle by a two-path admits a pants thrackle drawing). Then we can attach another edge at that vertex. This can be done uniquely up to isotopy and Reidemeister equivalence, as on the right in Figure <ref>. But if the two vertices of the so-attached three-path other than its endpoints have degree 2 in G, then the three-path is reducible by Lemma <ref>. Now if G is a theta-graph, then by repeatedly performing edge removals we obtain a pants thrackle drawing of a theta-graph obtained by joining two vertices of a six-cycle by a path of length at most 2, which is impossible, as we have shown above. If G is a dumbbell, then by repeatedly performing edge removals we obtain a pants thrackle drawing of a dumbbell consisting of the six-cycle and a cycle C', with a vertex of the six-cycle joined to the vertex v of C' by either an edge or a two-path. Such a dumbbell contains one of the two subgraphs given in Figure <ref>. So it remains to deal with these two cases. The vertex v has degree 3 in G, so we have to attach two edges to it. In the first case, we start with the drawing on the left in Figure <ref> and attach two edges to the vertex v. We obtain a unique drawing, up to isotopy and Reidemeister moves, as on the right in Figure <ref>. But then no edge can be attached to the vertex a_2 in such a way that the resulting drawing is a pants thrackle drawing. Similarly, in the second case, we start with the drawing on the left in Figure <ref> and attach two edges to the vertex v. We obtain a unique drawing, up to isotopy and Reidemeister moves, as on the right in Figure <ref>. But then no edge can be attached to the vertex b_1 in such a way that the resulting drawing is a pants thrackle drawing. This completes the proof of Theorem <ref>. We express our deep gratitude to Grant Cairns for his generous contribution to this paper, at all stages, from the mathematics to the presentation. We are thankful to the reviewer for their kind permission to include a brief description of the ideas underlying the proof borrowed from their report.
http://arxiv.org/abs/1708.07351v3
{ "authors": [ "Grace Misereh", "Yuri Nikolayevsky" ], "categories": [ "math.CO", "Primary: 05C10, 05C62, Secondary: 68R10" ], "primary_category": "math.CO", "published": "20170824105126", "title": "Annular and pants thrackles" }
Multi-task Self-Supervised Visual Learning Carl Doersch^† Andrew Zisserman^†,* ^†DeepMind ^*VGG, Department of Engineering Science, University of Oxford December 30, 2023 =============================================================================================================================== We investigate methods for combining multiple self-supervised tasks—i.e., supervised tasks where data can be collected without manual labeling—in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for “harmonizing” network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks—even via a naïve multi-head architecture—always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction. § INTRODUCTION Vision is one of the most promising domains for unsupervised learning. Unlabeled images and video are available in practically unlimited quantities, and the most prominent present image models—neural networks—are data starved, easily memorizing even random labels for large image collections <cit.>. Yet unsupervised algorithms are still not very effective for training neural networks: they fail to adequately capture the visual semantics needed to solve real-world tasks like object detection or geometry estimation the way strongly-supervised methods do. For most vision problems, the current state-of-the-art approach begins by training a neural network on ImageNet <cit.> or a similarly large dataset which has been hand-annotated. How might we better train neural networks without manual labeling? Neural networks are generally trained via backpropagation on some objective function. Without labels, however, what objective function can measure how good the network is? Self-supervised learning answers this question by proposing various tasks for networks to solve, where performance is easy to measure, i.e., performance can be captured with an objective function like those seen in supervised learning. Ideally, these tasks will be difficult to solve without understanding some form of image semantics, yet any labels necessary to formulate the objective function can be obtained automatically. In the last few years, a considerable number of such tasks have been proposed <cit.>, such as asking a neural network to colorize grayscale images, fill in image holes, solve jigsaw puzzles made from image patches, or predict movement in videos. Neural networks pre-trained with these tasks can be re-trained to perform well on standard vision tasks (e.g. image classification, object detection, geometry estimation) with less manually-labeled data than networks which are initialized randomly.
However, they still perform worse in this setting than networks pre-trained on ImageNet. This paper advances self-supervision first by implementing four self-supervision tasks and comparing their performance using three evaluation measures. The self-supervised tasks are: relative position <cit.>, colorization <cit.>, the “exemplar" task <cit.>, and motion segmentation <cit.> (described in section <ref>). The evaluation measures (section <ref>) assess a diverse set of applications that are standard for this area, including ImageNet image classification, object category detection on PASCAL VOC 2007, and depth prediction on NYU v2. Second, we evaluate if performance can be boosted by combining these tasks to simultaneously train a single trunk network. Combining the tasks fairly in a multi-task learning objective is challenging since the tasks learn at different rates, and we discuss how we handle this problem in section <ref>. We find that multiple tasks work better than one, and explore which combinations give the largest boost. Third, we identify two reasons why a naïve combination of self-supervision tasks might conflict, impeding performance: input channels can conflict, and learning tasks can conflict. The first sort of conflict might occur when jointly training colorization and exemplar learning: colorization receives grayscale images as input, while exemplar learning receives all color channels. This puts an unnecessary burden on low-level feature detectors that must operate across domains. The second sort of conflict might happen when one task learns semantic categorization (i.e. generalizing across instances of a class) and another learns instance matching (which should not generalize within a class). We resolve the first conflict via “input harmonization”, i.e. modifying network inputs so different tasks get more similar inputs. For the second conflict, we extend our multi-task learning architecture with a lasso-regularized combination of features from different layers, which encourages the network to separate features that are useful for different tasks. These architectures are described in section <ref>. We use a common deep network across all experiments, a ResNet-101-v2, so that we can compare various diverse self-supervision tasks apples-to-apples. This comparison is the first of its kind. Previous work applied self-supervision tasks over a variety of CNN architectures (usually relatively shallow), and often evaluated the representations on different tasks; and even where the evaluation tasks are the same, there are often differences in the fine-tuning algorithms. Consequently, it has not been possible to compare the performance of different self-supervision tasks across papers. Carrying out multiple fair comparisons, together with the implementation of the self-supervised tasks, joint training, evaluations, and optimization of a large network for several large datasets, has been a significant engineering challenge. We describe how we carried out the large scale training efficiently in a distributed manner in section <ref>. This is another contribution of the paper. As shown in the experiments of section <ref>, by combining multiple self-supervision tasks we are able to further close the gap between self-supervised and fully supervised pre-training over all three evaluation measures. §.§ Related Work Self-supervision tasks for deep learning generally involve taking a complex signal, hiding part of it from the network, and then asking the network to fill in the missing information. The tasks can broadly be divided into those that use auxiliary information or those that only use raw pixels.
The tasks can broadly be divided into those that use auxiliary information or those that only use raw pixels.Tasks that use auxiliary information such asmulti-modal information beyond pixels include:predicting sound given videos <cit.>, predicting camera motion given two images of the same scene <cit.>, or predicting what robotic motion caused a change in a scene <cit.>. However, non-visual information can be difficult to obtain: estimating motion requires IMU measurements, running robots is still expensive, and sound is complex and difficult to evaluate quantitatively.Thus, many works use raw pixels. In videos, time can be a source of supervision. One can simply predict future <cit.>, although such predictions may be difficult to evaluate. One way to simplify the problem is to ask a network to temporally order a set of frames sampled from a video <cit.>. Another is to note that objects generally appear across many frames: thus, we can train features to remain invariant as a video progresses <cit.>.Finally, motion cues can separate foreground objects from background. Neural networks can be asked to re-produce these motion-based boundaries without seeing motion <cit.>.Self-supervised learning can also work with a single image. One can hide a part of the image and ask the network to make predictions about the hidden part. The network can be tasked with generating pixels, either by filling in holes <cit.>, or recovering color after images have been converted to grayscale <cit.>. Again, evaluating the quality of generated pixels is difficult. To simplify the task, one can extract multiple patches at random from an image, and then ask the network to position the patches relative to each other <cit.>. Finally, one can form a surrogate “class” by taking a single image and altering it many times via translations, rotations, and color shifts <cit.>, to create a synthetic categorization problem.Our work is also related to multi-task learning. Several recent works have trained deep visual representations using multiple tasks <cit.>, including one work <cit.> which combines no less than 7 tasks. Usually the goal is to create a single representation that works well for every task, and perhaps share knowledge between tasks. Surprisingly, however, previous work has shown little transfer between diverse tasks. Kokkinos <cit.>, for example, found a slight dip in performance with 7 tasks versus 2. Note that our work is not primarilyconcerned with the performance on the self-supervised tasks we combine: we evaluate on a separate set of semantic “evaluation tasks.” Some previous self-supervised learning literature has suggested performance gains from combining self-supervised tasks <cit.>, although these works used relatively similar tasks within relatively restricted domains where extra information was provided besides pixels. In this work, we find that pre-training on multiple diverse self-supervised tasks using only pixels yields strong performance. § SELF-SUPERVISED TASKS Too many self-supervised tasks have been proposed in recent years for us to evaluate every possible combination. Hence, we chose representative self-supervised tasks to reimplement and investigate in combination. We aimed for tasks that were conceptually simple, yet also as diverse as possible. Intuitively, a diverse set of tasks should lead to a diverse set of features, which will therefore be more likely to span the space of features needed for general semantic image understanding. 
In this section, we will briefly describe the four tasks we investigated. Where possible, we followed the procedures established in previous works, although in many cases modifications were necessary for our multi-task setup. Relative Position <cit.>: This task begins by sampling two patches at random from a single image and feeding them both to the network without context. The network's goal is to predict where one patch was relative to the other in the original image. The trunk is used to produce a representation separately for both patches, which are then fed into a head which combines the representations and makes a prediction. The patch locations are sampled from a grid, and pairs are always taken from adjacent grid points (including diagonals). Thus, there are only eight possible relative positions for a pair, meaning the network output is a simple eight-way softmax classification. Importantly, networks can learn to detect chromatic aberration to solve the task, a low-level image property that isn't relevant to semantic tasks. Hence, <cit.> employs “color dropping”, i.e., randomly dropping 2 of the 3 color channels and replacing them with noise. We reproduce color dropping, though our harmonization experiments explore other approaches to dealing with chromatic aberration that clash less with other tasks. Colorization <cit.>: Given a grayscale image (the L channel of the Lab color space), the network must predict the color at every pixel (specifically, the ab components of Lab). The color is predicted at a lower resolution than the image (a stride of 8 in our case; a stride of 4 was used in <cit.>), and furthermore, the colors are vector quantized into 313 different categories. Thus, there is a 313-way softmax classification for every 8-by-8 pixel region of the image. Our implementation closely follows <cit.>. Exemplar <cit.>: The original implementation of this task created pseudo-classes, where each class was generated by taking a patch from a single image and augmenting it via translation, rotation, scaling, and color shifts <cit.>. The network was trained to discriminate between pseudo-classes. Unfortunately, this approach isn't scalable to large datasets, since the number of categories (and therefore, the number of parameters in the final fully-connected layer) scales linearly in the number of images. However, the approach can be extended to allow an infinite number of classes by using a triplet loss, similar to <cit.>, instead of a classification loss per class. Specifically, we randomly sample two patches x_1 and x_2 from the same pseudo-class, and a third patch x_3 from a different pseudo-class (i.e. from a different image). The network is trained with a loss of the form max(D(f(x_1),f(x_2))-D(f(x_1),f(x_3))+M,0), where D is the cosine distance, f(x) denotes the network features (including a small head) for patch x, and M is a margin which we set to 0.5. Motion Segmentation <cit.>: Given a single frame of video, this task asks the network to classify which pixels will move in subsequent frames. The “ground truth” mask of moving pixels is extracted using standard dense tracking algorithms. We follow Pathak et al. <cit.>, except that we replace their tracking algorithm with Improved Dense Trajectories <cit.>. Keypoints are tracked over 10 frames, and any pixel not labeled as camera motion by that algorithm is treated as foreground. The label image is downsampled by a factor of 8. The resulting segmentations look qualitatively similar to those given in Pathak et al. <cit.>.
Motion Segmentation <cit.>: Given a single frame of video, this task asks the network to classify which pixels will move in subsequent frames. The “ground truth” mask of moving pixels is extracted using standard dense tracking algorithms. We follow Pathak et al. <cit.>, except that we replace their tracking algorithm with Improved Dense Trajectories <cit.>. Keypoints are tracked over 10 frames, and any pixel not labeled as camera motion by that algorithm is treated as foreground. The label image is downsampled by a factor of 8. The resulting segmentations look qualitatively similar to those given in Pathak et al. <cit.>. The network is trained via a per-pixel cross-entropy with the label image.

Datasets: The three image-based tasks are all trained on ImageNet, as is common in prior work. The motion segmentation task uses the SoundNet dataset <cit.>. It is an open problem whether performance can be improved by different choices of dataset, or indeed by training on much larger datasets.

§ ARCHITECTURES

In this section we describe three architectures: first, the (naïve) multi-task network that has a common trunk and a head for each task (figure <ref>a); second, the lasso extension of this architecture (figure <ref>b), which enables the training to determine the combination of layers to use for each self-supervised task; and third, a method for harmonizing input channels across self-supervision tasks.

§.§ Common Trunk

Our architecture begins with ResNet-101 v2 <cit.>, as implemented in TensorFlow-Slim <cit.>. We keep the entire architecture up to the end of block 3, and use the same block 3 representation to solve all tasks and evaluations (see figure <ref>a). Thus, our “trunk” has an output with 1024 channels, and consists of 88 convolution layers with roughly 30 million parameters. Block 4 contains an additional 13 conv layers and 20 million parameters, but we don't use it, in order to save computation.

Each task has a separate loss, and has extra layers in a “head,” which may have a complicated structure. For instance, the relative position and exemplar tasks have a siamese architecture. We implement this by passing all patches through the trunk as a single batch, and then re-arranging the elements in the batch to make pairs (or triplets) of representations to be processed by the head, as sketched below. At each training iteration, only one of the heads is active. However, gradients are averaged across many iterations where different heads are active, meaning that the overall loss is a sum of the losses of the different tasks.
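The re-arrangement itself amounts to a simple reshape; here is a minimal NumPy sketch, where the convention that consecutive batch entries form a pair is our assumption for illustration.

```python
import numpy as np

# Suppose 2N patches were pushed through the trunk as one batch,
# with patches 2i and 2i+1 forming a pair for the relative position
# head (this interleaving convention is assumed here).
N, C = 4, 1024
trunk_out = np.arange(2 * N * C, dtype=np.float32).reshape(2 * N, C)

# Re-arrange the batch into pairs and concatenate each pair's
# representations, giving the head an (N, 2C) input.
head_input = trunk_out.reshape(N, 2, C).reshape(N, 2 * C)
print(head_input.shape)  # (4, 2048)
```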
§.§ Separating features via Lasso

Different tasks require different features; this applies to both the self-supervised training tasks and the evaluation tasks. For example, information about fine-grained breeds of dogs is useful for, e.g., ImageNet classification, and also for colorization. However, fine-grained information is less useful for tasks like PASCAL object detection, or for relative positioning of patches. Furthermore, some tasks require only image patches (such as relative positioning), whilst others can make use of entire images (such as colorization), and consequently features may be learnt at different scales. This suggests that, while training on self-supervised tasks, it might be advantageous to separate out groups of features that are useful for some tasks but not others. This would help us with evaluation tasks: we expect that any given evaluation task will be more similar to some self-supervised tasks than to others. Thus, if the features are factorized into different tasks, then the network can select from the discovered feature groups while training on the evaluation tasks.

Inspired by recent works that extract information across network layers for the sake of transfer learning <cit.>, we propose a mechanism which allows a network to choose which layers are fed into each task. The simplest approach might be to use a task-specific skip layer which selects a single layer in ResNet-101 (out of a set of equal-sized candidate layers) and feeds it directly into the task's head. However, a hard selection operation isn't differentiable, meaning that the network couldn't learn which layer to feed into a task. Furthermore, some tasks might need information from multiple layers. Hence, we relax the hard selection process, and instead pass a linear combination of skip layers to each head. Concretely, each task has a set of coefficients, one for each of the 23 candidate layers in block 3. The representation that's fed into each task head is a sum of the layer activations weighted by these task-specific coefficients. We impose a lasso (L1) penalty to encourage the combination to be sparse, which therefore encourages the network to concentrate all of the information required by a single task into a small number of layers. Thus, when fine-tuning on a new task, these task-specific layers can be quickly selected or rejected as a group, using the same lasso penalty.

Mathematically, we create a matrix α with N rows and M columns, where N is the number of self-supervised tasks, and M is the number of residual units in block 3. The representation passed to the head for task n is then:

∑_m=1^M α_n,m * Unit_m,

where Unit_m is the output of residual unit m. We enforce that ∑_m=1^M α_n,m^2 = 1 for all tasks n, to control the output variance (note that the entries in α can be negative, so a simple sum is insufficient). To ensure sparsity, we add an L1 penalty on the entries of α to the objective function. We create a similar α matrix for the set of evaluation tasks.
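The weighting scheme is compact enough to write out directly; the following NumPy sketch shows the row normalization, the weighted combination, and the L1 penalty (the random activations are placeholders for real unit outputs).

```python
import numpy as np

n_tasks, M_units, C = 4, 23, 1024
rng = np.random.default_rng(0)

# alpha[n, m]: weight of residual unit m for task n, normalized so
# the squared weights of each row sum to one (entries may be
# negative, so a plain sum would not control the output variance).
alpha = rng.normal(size=(n_tasks, M_units))
alpha /= np.linalg.norm(alpha, axis=1, keepdims=True)

# unit_out[m]: activation of residual unit m (spatial dimensions
# flattened away here for brevity).
unit_out = rng.normal(size=(M_units, C))

# Representation fed to the head of task n.
n = 2
rep = np.einsum('m,mc->c', alpha[n], unit_out)

# Lasso term added to the objective to encourage sparse rows.
l1_penalty = np.abs(alpha).sum()
print(rep.shape, l1_penalty)
```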
§.§ Harmonizing network inputs

Each self-supervised task pre-processes its data differently, so the low-level image statistics are often very different across tasks. This puts a heavy burden on the trunk network, since its features must generalize across these statistical differences, which may impede learning. Furthermore, it gives the network an opportunity to cheat: the network might recognize which task it must solve, and only represent information which is relevant to that task, instead of truly multi-task features. This problem is especially bad for relative position, which pre-processes its input data by discarding 2 of the 3 color channels, selected at random, and replacing them with noise. Chromatic aberration is also hard to detect in grayscale images. Hence, to “harmonize,” we replace relative position's preprocessing with the same preprocessing used for colorization: images are converted to Lab, and the a and b channels are discarded (we replicate the L channel 3 times so that the network can be evaluated on color images).

§.§ Self-supervised network architecture implementation details

This section provides more details on the “heads” used in our self-supervised tasks. The bulk of the changes relative to the original methods (which used shallower networks) involve replacing simple convolutions with residual units. Vanishing gradients can be a problem with networks as deep as ours, and residual networks can help alleviate this problem. We did relatively little experimentation with architectures for the heads, due to the high computational cost of restarting training from scratch.

Relative Position: Given a batch of patches, we begin by running ResNet-v2-101 at a stride of 8. Most block 3 convolutions produce outputs at stride 16, so running the network at stride 8 requires using convolutions that are dilated, or “atrous”, such that each neuron receives input from other neurons that are stride 16 apart in the previous layer. For further details, see the public implementation of ResNet-v2-101 striding in TF-Slim. Our patches are 96-by-96, meaning that we get a trunk feature map which is 12 × 12 × 1024 per patch. For the head, we apply two more residual units. The first has an output with 1024 channels, a bottleneck with 128 channels, and a stride of 2; the second has an output size of 512 channels, a bottleneck with 128 channels, and stride 2. This gives us a representation of 3 × 3 × 512 for each patch. We flatten this representation for each patch, and concatenate the representations for patches that are paired. We then have 3 “fully-connected” residual units (equivalent to a convolutional residual unit where the spatial shape of the input and output is 1 × 1). These are all identical, with input dimensionality and output dimensionality of 3*3*512=4608 and a bottleneck dimensionality of 512. The final fully connected layer has dimensionality 8, producing softmax outputs.

Colorization: As with relative position, we run the ResNet-v2-101 trunk at stride 8 via dilated convolutions. Our input images are 256 × 256, meaning that we have a 32 × 32 × 1024 feature map. Obtaining good performance when colorization is combined with other tasks seems to require a large number of parameters in the head. Hence, we use two standard convolution layers with a ReLU nonlinearity: the first has a 2 × 2 kernel and 4096 output channels, and the second has a 1 × 1 kernel with 4096 channels. Both have stride 1. The final output logits are produced by a 1 × 1 convolution with stride 1 and 313 output channels. The head has a total of roughly 35M parameters. Preliminary experiments with a smaller number of parameters showed that adding colorization degraded performance. We hypothesize that this is because the network's knowledge of color was pushed down into block 3 when the head was small, and thus the representations at the end of block 3 contained too much information about color.

Exemplar: As with relative position, we run the ResNet-v2-101 trunk at stride 8 via dilated convolutions. We resize our images to 256 × 256 and sample patches that are 96 × 96. Thus we have a feature map which is 12 × 12 × 1024.
As with relative position, we apply two residual units, the first with an output with 1024 channels, a bottleneck with 128 channels, and a stride of 2; the second has an output size of 512 channels, a bottleneck with 128 channels, and stride 2. Thus, we have a 3 × 3 × 512-dimensional feature, which is used directly to compute the distances needed for our loss.

Motion Segmentation: We reshape all images to 240 × 320, to better approximate the aspect ratios that are common in our dataset. As with relative position, we run the ResNet-v2-101 trunk at stride 8 via dilated convolutions. We expected that, like colorization, motion segmentation could benefit from a large head. Thus, we have two 1 × 1 conv layers each with dimension 4096, followed by another 1 × 1 conv layer which produces a single value, which is treated as a logit and used as a per-pixel classification. Preliminary experiments with smaller heads showed that such a large head is not necessarily important.

§ TRAINING THE NETWORK

Training a network with nearly 100 hidden layers requires considerable compute power, so we distribute it across several machines. As shown in figure <ref>, each machine trains the network on a single task. Parameters for the ResNet-101 trunk are shared across all replicas. There are also several task-specific layers, or heads, which are shared only between machines that are working on the same task. Each worker repeatedly computes losses which are then backpropagated to produce gradients.

Given many workers operating independently, gradients are usually aggregated in one of two ways. The first option is asynchronous training, where a centralized parameter server receives gradients from workers, applies the updates immediately, and sends back the up-to-date parameters <cit.>. We found this approach to be unstable, since gradients may be stale if some machines run slowly. The other approach is synchronous training, where the parameter server accumulates gradients from all workers, applies the accumulated update while all workers wait, and then sends back identical parameters to all workers <cit.>, preventing stale gradients. “Backup workers” help prevent slow workers from slowing down training. However, in a multi-task setup, some tasks are faster than others. Thus, slow tasks will not only slow down the computation, but their gradients are more likely to be thrown out.

Hence, we used a hybrid approach: we accumulate gradients from all workers that are working on a single task, and then have the parameter servers apply the aggregated gradients from a single task when ready, without synchronizing with other tasks. Our experiments found that this approach resulted in faster learning than either purely synchronous or purely asynchronous training, and in particular, was more stable than asynchronous training.

We also used the RMSProp optimizer, which has been shown to improve convergence in many vision tasks versus stochastic gradient descent. RMSProp re-scales the gradients for each parameter such that multiplying the loss by a constant factor does not change how quickly the network learns. This is a useful property in multi-task learning, since different loss functions may be scaled differently. Hence, we used a separate RMSProp optimizer for each task. That is, for each task, we keep separate moving averages of the squared gradients, which are used to scale the task's accumulated updates before applying them to the parameters.
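A minimal sketch of this per-task update is given below; the decay rate, learning rate, and epsilon placement follow common RMSProp conventions and are assumptions here rather than the exact production settings.

```python
import numpy as np

class PerTaskRMSProp:
    """Keep one moving average of squared gradients per task and use
    it to scale that task's accumulated update (simplified sketch)."""
    def __init__(self, n_tasks, shape, lr=1e-3, decay=0.9, eps=1e-8):
        self.ms = np.zeros((n_tasks,) + shape)  # per-task second moments
        self.lr, self.decay, self.eps = lr, decay, eps

    def apply(self, params, task, accumulated_grad):
        m = self.ms[task]
        m[:] = self.decay * m + (1 - self.decay) * accumulated_grad ** 2
        params -= self.lr * accumulated_grad / (np.sqrt(m) + self.eps)
        return params

params = np.zeros(10)
opt = PerTaskRMSProp(n_tasks=4, shape=params.shape)
grad = np.ones(10)  # stand-in for gradients accumulated over one task's workers
params = opt.apply(params, task=0, accumulated_grad=grad)
print(params[:3])
```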
For all experiments, we train on 64 GPUs in parallel, and save checkpoints roughly every 2.4K GPU (NVIDIA K40) hours. These checkpoints are then used as initialization for our evaluation tasks.

§ EVALUATION

Here we describe the three evaluation tasks that we transfer our representation to: image classification, object category detection, and pixel-wise depth prediction.

ImageNet with Frozen Weights: We add a single linear classification layer (a softmax) to the network at the end of block 3, and train on the full ImageNet training set. We keep all pre-trained weights frozen during training, so we can evaluate the raw features. We evaluate on the ImageNet validation set. The training set is augmented in translation and color, following <cit.>, but during evaluation, we don't use multi-crop or mirroring augmentation. This evaluation is similar to evaluations used elsewhere (particularly Zhang et al. <cit.>). Performing well requires a good representation of fine-grained object attributes (to distinguish, for example, breeds of dogs). We report top-5 recall in all charts (except Table <ref>, which reports top-1 to be consistent with previous works). For most experiments we use only the output of the final “unit” of block 3, and use max pooling to obtain a 3 × 3 × 1024 feature vector, which is flattened and used as the input to the one-layer classifier. For the lasso experiments, however, we use a weighted combination of the (frozen) features from all block 3 layers, and we learn the weight for each layer, following the structure described in section <ref>.

PASCAL VOC 2007 Detection: We use Faster-RCNN <cit.>, which trains a single network base with multiple heads for object proposals, box classification, and box localization. Performing well requires the network to accurately represent object categories and locations, with penalties for missing parts which might be hard to recognize (e.g., a cat's body is harder to recognize than its head). We fine-tune all network weights. For our ImageNet pre-trained ResNet-101 model, we transfer all layers up through block 3 from the pre-trained model into the trunk, and transfer block 4 into the proposal categorization head, as is standard. We do the same with our self-supervised network, except that we initialize the proposal categorization head randomly. Following Doersch et al. <cit.>, we use multi-scale data augmentation for all methods, including baselines. All other settings were left at their defaults. We train on the VOC 2007 trainval set, and evaluate Mean Average Precision on the VOC 2007 test set. For the lasso experiments, we feed our lasso combination of block 3 layers into the heads, rather than the final output of block 3.

NYU V2 Depth Prediction: Depth prediction measures how well a network represents geometry, and how well that information can be localized to pixel accuracy. We use a modified version of the architecture proposed in Laina et al. <cit.>. We use the “up projection” operator defined in that work, as well as the reverse Huber loss. We replaced the ResNet-50 architecture with our ResNet-101 architecture, and feed the block 3 outputs directly into the up-projection layers (block 4 was not used in our setup). This means we need only 3 levels of up projection, rather than 4. Our up projection filter sizes were 512, 256, and 128.
As with our PASCAL experiments, we initialize all layers up to block 3 using the weights from our self-supervised pre-training, and fine-tune all weights. We selected one measure, the percent of pixels where the relative error is below 1.25, as a representative measure (others are available in appendix <ref>). The relative error is defined as max(d_gt/d_p, d_p/d_gt), where d_gt is the ground-truth depth and d_p is the predicted depth. For the lasso experiments, we feed our lasso combination of block 3 layers into the up projection layers, rather than the final output of block 3.

§ RESULTS: COMPARISONS AND COMBINATIONS

ImageNet Baseline: As an “upper bound” on performance, we train a full ResNet-101 model on ImageNet, which serves as a point of comparison for all our evaluations. Note that just under half of the parameters of this network are in block 4, which are not pre-trained in our self-supervised experiments (they are transferred from the ImageNet network only for the Pascal evaluations). We use the standard learning rate schedule of Szegedy et al. <cit.> for ImageNet training (multiply the learning rate by 0.94 every 2 epochs), but we don't use such a schedule for our self-supervised tasks.

§.§ Comparing individual self-supervision tasks

Table <ref> shows the performance of individual tasks for the three evaluation measures. Compared to previously-published results, our performance is significantly higher in all cases, most likely due to the additional depth of ResNet (cf. AlexNet) and additional training time. Note that our ImageNet-trained baseline for Faster-RCNN is also above the previously published result using ResNet (69.9 in <cit.> cf. 74.2 for ours), mostly due to the addition of multi-scale augmentation for the training images following <cit.>.

Of the self-supervised pre-training methods, relative position and colorization are the top performers, with relative position winning on PASCAL and NYU, and colorization winning on ImageNet-frozen. Remarkably, relative position performs on par with ImageNet pre-training on depth prediction, and the gap is just 7.5% mAP on PASCAL. The only task where the gap remains large is the ImageNet evaluation itself, which is not surprising since the ImageNet pre-training and evaluation use the same labels. Motion segmentation and exemplar training are somewhat worse than the others, with exemplar worst on Pascal and NYU, and motion segmentation worst on ImageNet.

Figure <ref> shows how the performance changes as pre-training time increases (time is on the x-axis). After 16.8K GPU hours, performance is plateauing but has not completely saturated, suggesting that results can be improved slightly given more time. Interestingly, on the ImageNet-frozen evaluation, where colorization is winning, the gap relative to relative position is growing. Also, while most algorithms slowly improve performance with training time, exemplar training doesn't fit this pattern: its performance falls steadily on ImageNet, and undulates on PASCAL and NYU. Even stranger, performance for exemplar is seemingly anti-correlated between Pascal and NYU from checkpoint to checkpoint. A possible explanation is that exemplar training encourages features that aren't invariant beyond the training transformations (e.g., they aren't invariant to object deformation or out-of-plane rotation), but are instead sensitive to the details of textures and low-level shapes. If these irrelevant details become prominent in the representation, they may serve as distractors for the evaluation classifiers.
Note that the random baseline performs worse for our deep network than for a shallower one, especially on the ImageNet-frozen evaluation (a linear classifier on a random AlexNet's conv5 features has a top-5 recall of 27.1%, cf. 10.5% for ResNet). All our pre-trained nets far outperform the random baseline.

The fact that representations learnt by the various self-supervised methods have different strengths and weaknesses suggests that the features differ. Therefore, combining methods may yield further improvements. On the other hand, the lower-performing tasks might drag down the performance of the best ones. Resolving this uncertainty is a key motivator for the next section.

Implementation Details: Unfortunately, intermittent network congestion can slow down experiments, so we don't measure wall time directly. Instead, we estimate compute time for a given task by multiplying the per-task training step count by a constant factor, which is fixed across all experiments, representing the average step time when network congestion is minimal. We add the training cost across all tasks used in an experiment, and snapshot when the total cost crosses a threshold. For relative position, 1 epoch through the ImageNet train set takes roughly 350 GPU hours; for colorization it takes roughly 90 hours; for exemplar nets roughly 60 hours. For motion segmentation, one epoch through our video dataset takes roughly 400 GPU hours.

§.§ Naïve multi-task combination of self-supervision tasks

Table <ref> shows results for combining self-supervised pre-training tasks. Beginning with one of our strongest performers, relative position, we see that adding any of our other tasks helps performance on ImageNet and Pascal. Adding either colorization or exemplar leads to a gain of more than 6 points on ImageNet. Furthermore, it seems that the boosts are complementary: adding both colorization and exemplar gives a further 2% boost. Our best-performing method was a combination of all four self-supervised tasks.

To further probe how well our representation localizes objects, we evaluated the PASCAL detector at a more stringent overlap criterion: 75% IoU (versus the standard VOC 2007 criterion of 50% IoU). Our model gets 43.91% mAP in this setting, versus the standard ImageNet model's performance of 44.27%, a gap of less than half a percent. Thus, the self-supervised approach may be especially useful when accurate localization is important.

The depth evaluation performance shows far less variation across the single tasks and combinations than the other evaluations. All methods are on par with ImageNet pre-training, with relative position exceeding this value slightly, and the combination with exemplar or motion segmentation leading to a slight drop. Combining relative position with either exemplar or motion segmentation leads to a considerable improvement over those tasks alone.

Finally, figure <ref> shows how the performance of these methods improves with more training. One might expect that more tasks would result in slower training, since more must be learned. Surprisingly, however, the combination of all four tasks performs the best or nearly the best even at our earliest checkpoint.

§.§ Mediated combination of self-supervision tasks

Harmonization: We train two versions of a network on relative position and colorization: one using harmonization to make the relative position inputs look more like colorization, and one without it (equivalent to RP+Col in section <ref> above).
As a baseline, we make the same modification to a network trained on relative position alone: i.e., we convert its inputs to grayscale. In this baseline, we don't expect any performance boost over the original relative position task, because there are no other tasks to harmonize with. Results are shown in Table <ref>. On the ImageNet evaluation, however, there is an improvement when we pre-train using only relative position (due to the change from adding noise to the other two channels to using grayscale input, i.e., three equal channels), and this improvement carries over to the combined relative position and colorization tasks. The other two evaluation tasks do not show any improvement with harmonization. This suggests that our networks are actually quite good at dealing with stark differences between pre-training data domains when the features are fine-tuned at test time.

Lasso training: As a first sanity check, Figure <ref> plots the α matrix learned using all four self-supervised tasks. Different tasks do indeed select different layers. Somewhat surprisingly, however, there are strong correlations between the selected layers: most tasks want a combination of low-level information and high-level, semantic information. The depth evaluation network selects relatively high-level information, but evaluating on ImageNet-frozen and PASCAL makes the network select information from several levels, often not the ones that the pre-training tasks use. This suggests that, although there are useful features in the learned representation, the final output space for the representation is still losing some information that's useful for evaluation tasks, pointing to a possible area for future work.

The final performance of this network is shown in Table <ref>. There are four cases: no lasso, lasso only on the evaluation tasks, lasso only at pre-training time, and lasso in both self-supervised training and evaluation. Unsurprisingly, using lasso only for pre-training performs poorly, since not all information reaches the final layer. Surprisingly, however, using the lasso both for self-supervised training and evaluation is not very effective, contrary to previous results advocating that features should be selected from multiple layers for task transfer <cit.>. Perhaps the multi-task nature of our pre-training forces more information to propagate through the entire network, so explicitly extracting information from lower layers is unnecessary.

§ SUMMARY AND EXTENSIONS

In this work, our main findings are: (i) deeper networks improve self-supervision over shallow networks; (ii) combining self-supervision tasks always improves performance over the tasks alone; (iii) the gap between ImageNet pre-trained and self-supervision pre-trained with four tasks is nearly closed for the VOC detection evaluation, and completely closed for NYU depth; (iv) harmonization and lasso weightings only have minimal effects; and, finally, (v) combining self-supervised tasks leads to faster training.

There are many opportunities for further improvements: we can add augmentation (as in the exemplar task) to all tasks; we could add more self-supervision tasks (indeed new ones have appeared during the preparation of this paper, e.g.
<cit.>); we could add further evaluation tasks – indeed, depth prediction was not very informative, and replacing it by an alternative shape measurement task such as surface normal prediction may be more reliable; and we can experiment with methods for dynamically weighting the importance of tasks in the optimization.

It would also be interesting to repeat these experiments with a deep network such as VGG-16, where consecutive layers are less correlated, or with even deeper networks (ResNet-152, DenseNet <cit.>, and beyond) to tease out the match between self-supervision tasks and network depth. For the lasso, it might be worth investigating block-level weightings using a group sparsity regularizer.

For the future, given the performance improvements demonstrated in this paper, there is a possibility that self-supervision will eventually augment or replace fully supervised pre-training.

Acknowledgements: Thanks to Relja Arandjelović, João Carreira, Viorica Pătrăucean and Karen Simonyan for helpful discussions.

§ ADDITIONAL METRICS FOR DEPTH PREDICTION

Previous literature on depth prediction has established several measures of accuracy, since different errors may be more costly in different contexts. The measure used in the main paper was the percent of pixels where the relative depth, i.e., max(d_gt/d_p, d_p/d_gt), is less than 1.25. This measures how often the estimated depth is very close to being correct. It is also standard to measure more relaxed thresholds of relative depth: 1.25^2 and 1.25^3. Furthermore, we can measure average errors across all pixels. Mean Absolute Error is the mean absolute difference between ground truth and predicted values. Unlike the previous metrics, with Mean Absolute Error the worst predictions receive the highest penalties. Mean Relative Error weights the prediction error by the inverse of the ground truth depth. Thus, errors on nearby parts of the scene are penalized more, which may be more relevant for, e.g., robot navigation.

Tables <ref>, <ref>, <ref>, and <ref> are extended versions of tables <ref>, <ref>, <ref>, <ref>, respectively. For the most part, the additional measures tell the same story as the measure for depth reported in the main paper. Different self-supervised signals seem to perform similarly relative to one another: exemplar and relative position work best; color and motion segmentation work worse (table <ref>). Combinations still perform as well as the best method alone (table <ref>). Finally, it remains uncertain whether harmonization or the lasso technique provide a boost on depth prediction (tables <ref> and <ref>).
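For concreteness, these metrics can be written in a few lines; the following NumPy sketch is illustrative only (the exact averaging and clipping conventions in the depth-prediction literature vary, so treat those details as assumptions).

```python
import numpy as np

def depth_metrics(d_pred, d_gt):
    # Relative depth max(d_gt/d_p, d_p/d_gt), thresholded at 1.25,
    # 1.25**2, and 1.25**3; plus mean absolute and mean relative
    # errors over all pixels.
    rel = np.maximum(d_gt / d_pred, d_pred / d_gt)
    return {
        'pct rel < 1.25':   np.mean(rel < 1.25),
        'pct rel < 1.25^2': np.mean(rel < 1.25 ** 2),
        'pct rel < 1.25^3': np.mean(rel < 1.25 ** 3),
        'mean abs error':   np.mean(np.abs(d_pred - d_gt)),
        'mean rel error':   np.mean(np.abs(d_pred - d_gt) / d_gt),
    }

rng = np.random.default_rng(0)
d_gt = rng.uniform(0.5, 10.0, size=(480, 640))
d_pred = d_gt * rng.uniform(0.8, 1.2, size=d_gt.shape)
print(depth_metrics(d_pred, d_gt))
```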
http://arxiv.org/abs/1708.07860v1
{ "authors": [ "Carl Doersch", "Andrew Zisserman" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170825185217", "title": "Multi-task Self-Supervised Visual Learning" }
[Corresponding author:][email protected] Lorena Engineering School, University of São Paulo. [email protected] Lorena Engineering School, University of São Paulo. [email protected] Lorena Engineering School, University of São Paulo.

Realistic implementations of the Kitaev chain require, in general, the introduction of extra internal degrees of freedom. In the present work, we discuss the presence of hidden BDI symmetries for free Hamiltonians describing systems with an arbitrary number of internal degrees of freedom. We generalize results of a spinful Kitaev chain to construct a Hamiltonian with n internal degrees of freedom and obtain the corresponding hidden chiral symmetry. As an explicit application of this generalized result, we explore, by analytical and numerical calculations, the case of a spinful 2-band Kitaev chain, which can host up to 4 Majorana bound states. We also observe the appearance of minigap states when chiral symmetry is broken.

Hidden chiral symmetries in BDI multichannel Kitaev chains
Durval Rodrigues Jr
==========================================================

§ INTRODUCTION

In 1937, Ettore Majorana proposed that a suitable choice for the γ-matrix representation would lead to real solutions of the Dirac equation, thus implying that the fermions described by these field solutions corresponded to their own antiparticles. <cit.> In the past few years, this concept has become extremely relevant in the context of Condensed Matter Physics, as Majorana quasiparticle excitations were predicted to emerge in topological superconductors, displaying non-abelian anyonic statistics. This very exotic exchange property has since been considered a very promising route for solving the decoherence problem related to quantum information processing. <cit.>

Kitaev, in a seminal paper, introduced a simple toy model, corresponding to a one-dimensional spinless p-wave superconductor, capable of hosting Majorana zero-energy excitations at both ends. <cit.> A considerable number of realistic systems exhibiting such a phenomenon were then proposed. The most prominent example consists of a semiconductor nanowire with high spin-orbit coupling in the presence of a magnetic field and in proximity to an s-wave superconductor. <cit.> Besides the theoretical predictions, there has also been a substantial experimental effort devoted to detecting Majorana bound states in such nanowire heterostructures. <cit.> In addition, materials with triplet p-wave superconductivity, such as organic superconductors and the quasi-one-dimensional K_0.9Mo_6O_17, <cit.> as well as other heterostructures such as ferromagnetic nanowires, <cit.> were predicted to host Majorana bound states.

Obtaining more realistic realizations of the physics underlying the Kitaev chain may only be possible with the introduction of internal degrees of freedom, even though this eventually changes the topological classification of the system. For example, for systems such as organic superconductors, quasi-one-dimensional triplet superconductors like K_0.9Mo_6O_17, and ferromagnetic nanowires, the relevant internal spin degrees of freedom lead to two different chiral symmetries, one of them characterized by a ℤ invariant winding number.
<cit.> On the other hand, in semiconductor nanowire heterostructures, a common feature is the appearance of subbands due to size quantization, requiring the introduction of band mixing terms in the Hamiltonian, which break the DIII chiral symmetry. <cit.> However, as we have previously shown, hidden chiral symmetries can also be introduced in some limits. <cit.> Also, multiple Majorana modes counted by a winding number were predicted to appear in long-range hopping systems. <cit.>

As a matter of fact, correctly accounting for discrete symmetries, such as the chiral symmetry discussed above, is of extreme experimental significance, particularly because the breaking of chiral symmetry may lead to the appearance of minigap states, which interfere with the observation of a clear zero-bias peak used as a signature for the presence of Majorana bound states in the system. <cit.> Moreover, theoretical studies of coupled Kitaev chains (Kitaev ladders) <cit.> and multiband systems, <cit.> as well as the recently reported experimental evidence of topological phenomena in a multiband superconductor, <cit.> corroborate the importance of considering the influence of pairings between internal degrees of freedom on the topological classification of superconductors.

The topological character of a quantum system is uniquely defined by the number of space dimensions and the presence or absence of three discrete symmetries: charge conjugation, time reversal, and chirality. <cit.> Since in superconductors charge conjugation is manifestly present, the other two, which should occur simultaneously or not at all, are the ones that have to be carefully analyzed when adding internal degrees of freedom. For one-dimensional topological superconductors, several works have suggested the introduction of pseudo-time-reversal operators, <cit.> resulting, for example, in the uncovering of hidden chiral symmetries in spinful systems. <cit.> In this work, we propose some conditions to construct Kitaev Hamiltonians with an arbitrary number of internal degrees of freedom and argue that it is also possible to define a hidden BDI chiral symmetry from the superconducting order parameter. These results are applied to a spinful two-band Kitaev chain.

The present paper is organized as follows. In Sec. <ref>, we review the general ideas regarding the classification of one-dimensional topological superconductors, discussing the appropriate topological invariants for a given set of discrete symmetries. In Sec. <ref>, we briefly review the chiral symmetry leading to the BDI class <cit.> and the geometrical interpretation of the constraints it imposes on the Hamiltonian for the existence of non-trivial topological invariants. In Sec. <ref>, we consider, in general, the problem of constructing a Kitaev chain with n degrees of freedom and show how to implement the Nambu representation to find hidden chiral symmetries. In Sec. <ref>, we particularize the previous construction to consider in detail the case of a spinful Kitaev chain with two bands. Finally, in Sec. <ref>, we summarize our results and point out some interesting directions and open problems.

§ CLASSIFICATION OF CHIRAL TOPOLOGICAL SUPERCONDUCTORS

Non-trivial topological phases in condensed matter emerge as a consequence of the dimensionality of the system and the discrete symmetries it preserves. <cit.> For superconductors, the mean-field Bogoliubov-de Gennes theory manifestly preserves charge conjugation (𝒞) by construction.
Thus, a chiral symmetric system with non-trivial topology necessarily requires time-reversal symmetry, or even a pseudo-time-reversal symmetry, to coexist. <cit.> By pseudo-time-reversal invariance, we mean a symmetry defined by an antiunitary operator that commutes with the Hamiltonian, but does not have the usual physical meaning of a time reversal. Finally, in a one-dimensional superconductor, given a (pseudo-)time-reversal operator 𝒯, the set of possible values for the topological invariant depends on the sign of 𝒯^2. <cit.> In the following we discuss these two cases in more detail.

We first consider the case in which 𝒯_BDI^2 = 1, corresponding to the BDI class in the ten-fold scheme of classification of topological systems. In this case, the Bloch Hamiltonian can be written in terms of Pauli matrices τ_i for the particle-hole space as

H_k = h_k ·τ.

The vector h_k defines a topological space (𝔗) equivalent to a 1-sphere, such that the number of times the vector h_k winds around the origin while k goes through the Brillouin zone (BZ) defines distinct topological phases, characterized by a different number of Majorana excitations. In other words, the number of Majorana bound states can be counted by a topological invariant called the winding number, w ∈ π_1(𝔗) = ℤ, defined as <cit.>

w = | ∮_BZ dk/4πi tr[𝒮_BDI H_k^-1 ∂_k H_k] |,

where 𝒮_BDI is the chiral symmetry operator related to the (pseudo-)time-reversal by

𝒮_BDI = i𝒞𝒯_BDI.

On the other hand, systems with a (pseudo-)time-reversal operator that obeys 𝒯_DIII^2 = -1 are in the DIII class. Although we will not make any further comments on how to obtain topological invariants for this class [for a more detailed discussion about the topological classes D, BDI, and DIII, we refer the interested reader to the works of Budich and Ardonne <cit.> and Sedlmayr et al. <cit.>], it is important to remark on the main difference between systems in the classes BDI and DIII. The presence of a (pseudo-)time-reversal operator that squares to -1 implies the presence of Kramers degeneracy between Majorana excitations. Hence, for one pair of Majoranas to be annihilated, such degeneracy must be broken, requiring that (pseudo-)time-reversal and chiral symmetry also be broken. As a consequence, a DIII system with multiple pairs of Majorana zero modes can have only two distinct topological phases: one with and another without Majoranas. As a result, one must expect a ℤ_2 invariant instead of ℤ.

In the following, we focus only on the BDI class, studying how additional internal degrees of freedom may change the behavior of the winding number. To do so, we search for a hidden chiral symmetry, namely an operator 𝒮 <cit.>

𝒮 = i𝒞𝒯 with {H_k, 𝒮} = 0,

defined by the physics of the triplet superconducting order parameter. We start from the idea of hidden chiral symmetry introduced by Dumitrescu et al. <cit.> for spinful systems.

§ THE MODELS

§.§ A quick review of the spinful Kitaev chain

We propose a generalized Hamiltonian for a spinful p-wave superconductor considering all possible pairings between the spin channels that are physically compatible with the triplet superconducting state.
In the Wannier representation, it reads

ℋ = ℋ_0 + ℋ_R + ℋ_SC,

ℋ_0 = -∑_n,σ,σ' μ_σσ' c_nσ'^† c_nσ + t_σσ' c_n+1σ'^† c_nσ + h.c.,

ℋ_R = ∑_n,σ,σ' iλ_σσ' c_n+1σ'^† c_nσ + h.c.,

ℋ_SC = ∑_n,σ,σ' (iσ_2 d·σ)_σσ' c_nσ^† c_n+1σ'^† + h.c.,

where μ_σσ' and t_σσ' are the spin-dependent chemical potential and hopping energy, respectively; iλ_σσ' is a purely imaginary hopping which gives rise to the Rashba spin-orbit coupling; and d = (Δ_1, Δ_2, Δ_3) is the triplet superconducting order parameter. The fermion field operators c_nσ and c_nσ^† obey

{c_nσ, c_mσ'^†} = δ_nm δ_σσ',

where the indices n, m label lattice positions while σ, σ' label the spin projection along the z-axis. The set {σ_ν}_ν=0^3 consists of the 2 × 2 identity matrix and the usual Pauli matrices for the spin space.

For convenience, we rewrite the Hamiltonian (<ref>) in the Bloch representation as

ℋ = ∫_BZ dk/2π ψ_k^† H_k ψ_k,

where BZ indicates integration over the Brillouin zone. Using the Nambu representation ψ_k = (c_k, 𝒯c_k)^T, c_k = (c_k↑, c_k↓)^T, 𝒯 = iσ_2 𝒦, with 𝒦 denoting the complex conjugation operator, we obtain [From now on, we use the Einstein summation convention for repeated indices; greek letters are used for sums starting from 0, while latin letters are reserved for sums starting from 1.]

H_k = τ_3 ⊗ (ϵ_k^0 σ_0 + λ_k ·σ) + τ_0 ⊗ (λ_k^0 σ_0 + ϵ_k ·σ) + τ_ϕ ⊗ d_k ·σ,

where {τ_ν}_ν=0^3 is the set with the 2 × 2 identity and the Pauli matrices for the particle-hole space; τ_ϕ = τ_1 sinϕ + τ_2 cosϕ, where ϕ is the superconducting phase; and

[ϵ_k^ν σ_ν]_σσ' = -μ_σσ' - 2t_σσ' cos k,
[λ_k^ν σ_ν]_σσ' = 2λ_σσ' sin k,
d_k = d sin k.

We note that

-μ_σσ' = -μ σ_0 + B·σ,

where μ is the chemical potential and B is a Zeeman field. Also,

t_σσ' = t σ_0 + C·σ,

where t is the spin-independent hopping energy and C is the spin-dependent hopping energy.

The Hamiltonian with no spin-dependent hopping was proposed as a realistic model for organic superconductors, for the quasi-one-dimensional triplet superconductor K_0.9Mo_6O_17, and for ferromagnetic nanowires with zero s-wave order parameter. <cit.> Moreover, it was also pointed out that this parameter choice leads to two possible chiral operators, i.e., unitary operators that anticommute with the Hamiltonian. One is the chiral symmetry related to the DIII classification, 𝒮_DIII = τ_ϕ+π/2 ⊗ σ_0, a consequence of the invariance under the physical time-reversal operator defined by 𝒯_DIII = τ_0 ⊗ iσ_2 𝒦, given 𝒞 = τ_ϕ+π/2 ⊗ σ_2 𝒦. The other is the hidden chiral symmetry associated with the BDI classification, 𝒮_BDI = τ_ϕ+π/2 ⊗ d̂·σ, d̂ = d/|d|, with a corresponding pseudo-time-reversal operator given by 𝒯_BDI = τ_0 ⊗ [d̂·ê_2 + i(d̂∧ê_2)·σ]𝒦.

The conditions for preserving chiral symmetry in a BDI system have an interesting geometric interpretation, which we explore next. Imposing chiral symmetry leads to

{H_k, 𝒮_BDI} = 0 ⇒ [λ_k ·σ, d̂·σ] = 0 and {ϵ_k ·σ, d̂·σ} = 0.

Since [ϵ_k^0 σ_0, d̂·σ] = 0, the condition (<ref>) trivially reduces to

[λ_k ·σ, d̂·σ] = 2iσ·(λ_k ∧ d̂) = 0 ⇒ λ_k ∥ d̂,

{ϵ_k ·σ, d̂·σ} = 2σ_0 ϵ_k ·d̂ = 0 ⇒ ϵ_k ⊥ d̂.

These conditions lock the spin-dependent terms in order to maintain chirality. Finally, it is worth noting that chiral symmetry is only globally realized if ϵ_k ⊥ d̂ for all k, since the k-dependency can result in sweet spots for specific values of k due to competition between B and C.
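These two locking conditions follow from the Pauli-matrix identity (a·σ)(b·σ) = (a·b)σ_0 + iσ·(a∧b); a quick numerical check, with an illustrative choice of vectors, is sketched below.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = np.array([s1, s2, s3])

def dot_sigma(v):
    # v . sigma as a 2x2 matrix.
    return np.einsum('i,ijk->jk', v, pauli)

d_hat = np.array([0.0, 0.0, 1.0])   # take d along z
lam = 0.7 * d_hat                    # lambda_k parallel to d
eps = np.array([0.3, -0.4, 0.0])     # epsilon_k perpendicular to d

comm = dot_sigma(lam) @ dot_sigma(d_hat) - dot_sigma(d_hat) @ dot_sigma(lam)
anti = dot_sigma(eps) @ dot_sigma(d_hat) + dot_sigma(d_hat) @ dot_sigma(eps)
print(np.allclose(comm, 0), np.allclose(anti, 0))  # True True
```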
To conclude this section, we remark that, although this construction has been explicitly carried out on the example of spinful systems, a system with any two internal degrees of freedom is described by the same mathematical model, thus presenting the same “topology”. Therefore, a spinless system with two bands described in terms of Pauli matrices admits a similar Hamiltonian formulation and invariance under the same hidden symmetry operators, as we demonstrated in a previous work. <cit.> Based on such arguments, we next provide some general arguments for obtaining hidden BDI chiral symmetries in systems with n internal degrees of freedom and discuss the application of these ideas to a spinful 2-band Kitaev chain.

§.§ General construction of a Kitaev chain with n internal degrees of freedom

The conditions derived in Sec. <ref> for the chiral operator originally introduced by Dumitrescu et al. <cit.> raise the question of whether it is possible to find similar hidden symmetries for systems with a richer spinorial structure. The idea is to consider a Hamiltonian which is an element of 𝔰𝔲(2) × 𝔰𝔲(n) (particle-hole plus other degrees of freedom). It is also necessary to introduce a generalized Nambu representation ψ_k = (c_k, 𝒯c_k)^T, where c_k is an element of the spinor representation of 𝔰𝔲(n). Although the construction of 𝒯 is highly dependent on the physical meaning attributed to 𝔰𝔲(n) and its representation, some general ideas can be discussed without choosing a specific representation of 𝒯. In the next section we will discuss this representation choice in more detail for a specific algebra.

Since the Hamiltonian is an element of 𝔰𝔲(2) × 𝔰𝔲(n), the action of any (pseudo-)time-reversal operator 𝒯 = U_𝒯 𝒦 (U_𝒯 is unitary and 𝒦 denotes the complex conjugation) on the generators of 𝔰𝔲(n) divides them into a symplectic set, <cit.>

𝒯 t_a^S 𝒯^-1 = -t_a^S,

and an antisymplectic one,

𝒯 t_a^A 𝒯^-1 = t_a^A.

Another important point to consider for correctly implementing the Nambu representation is the effect of 𝒯 on the k-dependency of the Hamiltonian. Thus, we divide the possible terms into symmetric ones,

𝒯 ϵ_k^a 𝒯^-1 = ϵ_k^a,

and antisymmetric ones,

𝒯 λ_k^a 𝒯^-1 = -λ_k^a,

under 𝒯. Taking into account these two effects of the action of 𝒯, we propose a general Nambu Hamiltonian

H_k = τ_3 ⊗ (ϵ_k^a t_a^A + λ_k^a t_a^S) + τ_0 ⊗ (ϵ_k^a t_a^S + λ_k^a t_a^A) + τ_ϕ ⊗ d_k^a t̃_a sin k,

where t̃_a are the generators of 𝔰𝔲(n) such that U_𝒯 t̃_a are symmetric matrices. [For example, in the 𝔰𝔲(2) case, U_𝒯 = iσ_2, as explicitly written in (<ref>). We note, nonetheless, that changing the spinor representation from the usual to the Nambu one removes U_𝒯.]

Now we can introduce a hidden chiral symmetry operator similar to the one introduced by Dumitrescu et al. <cit.> for spinful systems:

𝒮_BDI = τ_ϕ+π/2 ⊗ d̂^a t̃_a,

where d̂^a is the normalized d^a vector such that 𝒮_BDI^2 = 1. Finally, the condition for the existence of chiral symmetry, i.e., {H_k, 𝒮_BDI} = 0, implies

[ϵ_k^a t_a^A + λ_k^a t_a^S, d̂^b t̃_b] = 0,

{ϵ_k^a t_a^S + λ_k^a t_a^A, d̂^b t̃_b} = 0.

These conditions result in a series of constraints on the Hamiltonian, which are analogous to the locking conditions on the spin space obtained in Sec. <ref>. The chiral operator prohibits some of the coefficients ϵ_k^a and λ_k^a multiplying the generators of 𝔰𝔲(n), i.e., the isospin-dependent terms are locked. However, the geometric interpretation is not completely analogous. The reason lies in the algebraic structure of 𝔰𝔲(n) for an arbitrary n ≥ 3:

[t_a, t_b] = if_ab^c t_c, {t_a, t_b} = 1/2n δ_ab t_0 + g_ab^c t_c,

where some of the structure constants f_ab^c are zero and some g_ab^c are non-zero. Thus, the parallel and perpendicular conditions derived in Sec. <ref> do not hold in general anymore.
Even though we obtained some general conditions for constructing the Hamiltonian and finding hidden chiral symmetries, it is not clear how to apply these results without a specific choice of representation. Thus, we now provide a concrete discussion considering a spinful 2-band system.

§.§ The spinful 2-band Kitaev chain and its chiral symmetries

Following the construction of a Kitaev chain with an arbitrary number of degrees of freedom presented in Sec. <ref>, we propose a general Hamiltonian for a spinful Kitaev chain with two bands. The Hamiltonian is now an element of 𝔰𝔲(2) × 𝔰𝔲(4) ≅ 𝔰𝔲(2) × 𝔰𝔲(2) × 𝔰𝔲(2). Denoting the spin (band) subspace by the matrices σ_ν (ρ_ν), and taking 𝒯 = iσ_2 ⊗ iρ_2 𝒦, it is straightforward to obtain

H_k = τ_3 ⊗ (ϵ_k^00 σ_0 ⊗ ρ_0 + ϵ_k^ij σ_i ⊗ ρ_j + λ_k^i0 σ_i ⊗ ρ_0 + λ_k^0i σ_0 ⊗ ρ_i) + τ_0 ⊗ (λ_k^00 σ_0 ⊗ ρ_0 + λ_k^ij σ_i ⊗ ρ_j + ϵ_k^i0 σ_i ⊗ ρ_0 + ϵ_k^0i σ_0 ⊗ ρ_i) + τ_ϕ ⊗ d^ij σ_i ⊗ ρ_j sin k.

Next, we consider the necessary conditions to have the hidden chiral symmetry

𝒮_BDI = τ_ϕ+π/2 ⊗ d̂^ij σ_i ⊗ ρ_j,

where d̂^ij is normalized so that 𝒮_BDI^2 = 1. It is evident that ϵ_k^00 cannot break chirality, whereas λ_k^00 must always be zero for 𝒮_BDI to be preserved, i.e., for {H_k, 𝒮_BDI} = 0. After collecting the terms with the same matrix structure as the superconducting order parameter, i.e., all terms proportional to σ_i ⊗ ρ_j, the conditions (<ref>) and (<ref>) lead to

ϵ_k^ij d̂^ab ε_ia^n ε_jb^m = 0,

λ_k^ij d̂^ij = 0.

Here, ε_ia^n denotes the totally antisymmetric Levi-Civita tensor in three dimensions. For these terms, the analogy with Sec. <ref> is direct, because in this case the f_ab^c are always non-zero and the g_ab^c are always zero.

To corroborate the results (<ref>) and (<ref>) regarding the locking conditions imposed by the superconducting order parameter d̂^ab, we have performed independent numerical simulations with the Kwant package. <cit.> For simplicity, we implemented the following representative Hamiltonian:

H_k = τ_3 ⊗ (ϵ_k σ_0 ⊗ ρ_0 + m σ_θ ⊗ ρ_γ) + τ_ϕ ⊗ Δ σ_1 ⊗ ρ_2 sin k,

where ϵ_k = -μ - 2t cos k, σ_θ = σ_1 sinθ + σ_3 cosθ, and ρ_γ = ρ_1 sinγ + ρ_2 cosγ. Chiral symmetry 𝒮_BDI = τ_ϕ+π/2 ⊗ σ_1 ⊗ ρ_2 should be preserved if, and only if, σ_θ = ±σ_1 and ρ_γ = ±ρ_2. Therefore, varying the angles θ and γ may lead to the appearance of minigap states when chiral symmetry is broken and of Majorana zero modes when the chirality condition holds. This behavior is explicitly confirmed by Fig. <ref>. For γ = 0 and σ_θ = ±σ_1, the minigap closes. However, for γ ≠ 0, chiral symmetry is broken for any value of θ and the minigap only closes when accidental degeneracy emerges. Nonetheless, there is no topological protection in the latter case.
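The essence of these simulations can be reproduced without the full Kwant machinery by diagonalizing the representative Hamiltonian on a finite chain; below is a minimal NumPy sketch, where the parameter values, the chain length, and the choice ϕ = π/2 are illustrative assumptions.

```python
import numpy as np

# Pauli matrices, used for tau (particle-hole), sigma (spin), rho (band).
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

def spectrum(L=40, mu=0.5, t=1.0, m=0.3, Delta=0.4,
             theta=np.pi / 2, gamma=0.0):
    # Real-space version of the representative H_k with phi = pi/2
    # (tau_phi = tau_1): epsilon_k = -mu - 2t cos k gives the onsite
    # and normal hopping; Delta sin k gives the pairing hopping.
    s_theta = np.sin(theta) * s1 + np.cos(theta) * s3
    r_gamma = np.sin(gamma) * s1 + np.cos(gamma) * s2
    onsite = -mu * kron3(s3, s0, s0) + m * kron3(s3, s_theta, r_gamma)
    hop = -t * kron3(s3, s0, s0) + (1j * Delta / 2) * kron3(s1, s1, s2)
    d = 8
    H = np.zeros((d * L, d * L), dtype=complex)
    for n in range(L):
        H[d*n:d*(n+1), d*n:d*(n+1)] = onsite
        if n + 1 < L:
            H[d*(n+1):d*(n+2), d*n:d*(n+1)] = hop
            H[d*n:d*(n+1), d*(n+1):d*(n+2)] = hop.conj().T
    return np.linalg.eigvalsh(H)

# Chirality preserved (sigma_theta = sigma_1, rho_gamma = rho_2):
print(np.sort(np.abs(spectrum(gamma=0.0)))[:4])  # near-zero Majorana modes
# Chirality broken (gamma != 0): low-lying states split away from zero.
print(np.sort(np.abs(spectrum(gamma=0.5)))[:4])
```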
For the Hamiltonian (<ref>), it is also possible to count the number of Majorana zero modes by calculating the winding number. In Fig. <ref>, we show the effect of varying μ and m on the number of Majorana pairs. As expected, four Majorana pairs are possible. If we increase the absolute values of μ or m, the overlap between these zero modes eventually leads to their annihilation, resulting in lower winding numbers. Finally, we remark that only even winding numbers appear in the phase diagram of Fig. <ref>, which is a feature of a symmetry between the spin and band subspaces. This condition will be broken next.

We now consider in more detail the influence of the terms proportional to σ_i ⊗ ρ_0 and σ_0 ⊗ ρ_i. One can check that chiral symmetry requires

λ_k^i0 d̂^ab ε_ia^n = 0,

λ_k^0i d̂^ab ε_ib^m = 0,

ϵ_k^i0 d̂^ib = 0,

ϵ_k^0i d̂^ai = 0.

It is interesting to note that the previous conditions (<ref>) and (<ref>) to maintain chiral symmetry in spinful systems still hold. Namely, (<ref>) implies that d̂^ib should be parallel to λ_k^i0, and (<ref>) means that d̂^ib needs to be perpendicular to ϵ_k^i0. Also, analogous results (<ref>) and (<ref>) hold for the band degrees of freedom. Numerical simulations breaking these conditions on the band subspace also resulted in the appearance of minigap states, similar to the ones seen in Fig. <ref>.

To evaluate the effect of breaking the spin-band symmetry on the topological phase diagram, we added some of these four terms to the Hamiltonian (<ref>) according to

H_k → H_k + τ_0 ⊗ (B σ_3 ⊗ ρ_0 + V σ_0 ⊗ ρ_1).

Here, B denotes a Zeeman field along the z-axis and V an analogous contribution to the band subspace, but along the x-direction. As expected, odd winding numbers also appear, as indicated in Figs. <ref> and <ref>. Hence, the system can indeed host any integer number of Majorana bound states from 0 to 4. Figure <ref> deserves some special care regarding the value of the winding number at the origin. As a matter of fact, in spite of what the diagram may suggest, exactly at the origin, i.e., for B = V = 0, w = 2, as consistency with Fig. <ref> requires.

Finally, it remains to take into account the effects of k-dependent terms, such as Rashba spin-orbit couplings, on the phase diagram. Interestingly, adding such terms to the Hamiltonian (<ref>) does not change the topological phase diagrams in Fig. <ref>, thus indicating that the Majorana modes are insensitive to them. Nevertheless, for finite systems, the presence of k-dependent terms leads to the appearance of minigap states, which became clearer as we shortened the chain. This suggests that away from the continuum limit the very definition of BDI chirality does not hold.

§ CONCLUSIONS

One-dimensional p-wave systems with an arbitrary number of internal degrees of freedom allow the emergence of multiple zero-energy Majorana excitations at both ends, if a BDI chiral symmetry is preserved. In this paper, we have shown that a hidden chiral symmetry can be derived from the superconducting terms in the Hamiltonian and provided a geometrical interpretation of the constraints imposed on systems that preserve it. This condition locks the isospin-dependent terms of the Hamiltonian by restricting the possible adjoint elements of the 𝔰𝔲(n) representation. We examined in detail the consequences of this severe restriction imposed on BDI systems for a spinful 2-band p-wave superconductor, in particular showing that breaking chiral symmetry leads to the emergence of minigap states, that the winding number can assume values between 0 and 4, and, finally, that odd values of the winding number are only possible when the spin-band symmetry is broken.

Finally, we point out that the construction of pseudo-time-reversal operators for the general case with n degrees of freedom is still a challenging open problem, as is the appearance of minigap states in finite systems, which the authors wish to revisit in future works.

§ ACKNOWLEDGMENTS

The authors would like to thank the Kwant package developers. The work of ALRM was supported by FAPESP grant No. 2016/10167-8. DRJ is a CNPq researcher. The authors wish to acknowledge financial support from CNPq, FAPESP, and CAPES. The authors would also like to thank Marcelo Hott for useful discussions.
http://arxiv.org/abs/1708.07866v2
{ "authors": [ "Antônio Lucas Rigotti Manesco", "Gabriel Weber", "Durval Rodrigues Jr" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20170825191715", "title": "Hidden chiral symmetries in BDI multichannel Kitaev chains" }
[email protected] School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China

The shadow of a black hole can be one of the strongest pieces of observational evidence for stationary black holes. If we see shadows at the centers of galaxies, we can judge whether the observed compact objects are black holes. In this paper, we consider a formula for the contour of a shadow in an asymptotically-flat, stationary, and axisymmetric black hole spacetime. We show that the formula is useful for obtaining the contour of the shadow of several black holes such as the Kerr-Newman black hole and rotating regular black holes. Using the formula, we can obtain new examples of the contour of the shadow of rotating black holes if its assumptions are satisfied.

Black hole shadow in an asymptotically-flat, stationary, and axisymmetric spacetime: The Kerr-Newman and rotating regular black holes
Naoki Tsukamoto
========================================================================================================================================

§ INTRODUCTION

Recently, LIGO detected three gravitational wave events from binary black hole systems <cit.>. The events showed that stellar-mass black holes really exist in our universe. The physics of strong gravitational fields near black holes will be an important topic not only in general relativity but also in astronomy. The black holes are described well by the Kerr black hole solution with an Arnowitt-Deser-Misner (ADM) mass M and an angular momentum J, which is an exact solution of the Einstein equations. There, however, remains some possibility for other black hole solutions because of uncertainty in the measurement of gravitational waves <cit.>.

It is believed that there are supermassive black holes at the centers of galaxies and that isolated stationary black holes would be described well by the Kerr solution. From observations in weak gravitational fields, one can estimate the ADM mass M of supermassive black holes and the distance between the observer and the black holes <cit.>. To obtain evidence that the supermassive compact objects in the centers of galaxies are black holes, and that they are not other exotic compact objects predicted by general relativity, we should pay attention to phenomena in strong gravitational fields such as an optically thin emission region around a black hole <cit.>, emissions from a geometrically thin accretion disk around a black hole <cit.>, and the shadow made by a black hole <cit.>.

In the near future, we may check whether supermassive black holes in our galaxy and nearby galaxies can be described by the Kerr black hole. The Event Horizon Telescope challenges us to measure the shadow made by the supermassive black hole in the center of our galaxy <cit.>. Therefore, the details of the shadows of rotating black holes have been investigated eagerly. From an observational viewpoint, to check that isolated stationary black holes in nature can be described well by the Kerr black hole, the contours of the shadows of dozens of rotating black holes have already been calculated <cit.>. Semianalytic calculations <cit.> and a new numerical method <cit.> for the contour of the shadow of rotating black holes were investigated. Hioki and Maeda discussed a relation between the shape of the contour of the shadow and the inclination angle and spin parameter of the Kerr black hole <cit.>. Tsupko estimated analytically the spin parameter from the shape of the shadow of the Kerr black hole <cit.>.
A method for distinguishing the Kerr black hole from other rotating black holes with the shape of the shadow has been investigated by Tsukamoto et al. <cit.> and by Abdujabbarov et al. <cit.>.

The theoretical aspects related to the contour of black hole shadows have also been investigated, since the null geodesics near black holes and other compact objects would determine several important properties of spacetimes. It was pointed out that the quasinormal modes <cit.> of a static, spherically symmetric, and asymptotically flat black hole in the eikonal limit are determined by the parameters of the unstable circular null geodesics <cit.>. A tight relation between quasinormal modes and gravitational lensing near the unstable circular null geodesic <cit.> was also considered <cit.>. Very recently, a relation between black hole shadows, spacetime instabilities, and fundamental photon orbits, which are a generalization of circular photon orbits, was discussed in Ref. <cit.>.

These close links between circular null geodesics, quasinormal modes, gravitational lensing, the shadows, and other phenomena in strong gravitational fields in black hole spacetimes might be valid only for some well-known black holes, and one might find counterexamples. For example, Konoplya and Stuchlik showed that an expected link between the circular null geodesics and quasinormal modes is broken in an asymptotically flat black hole spacetime in the Einstein-Lovelock theory <cit.>. From the theoretical viewpoint, it would be worth investigating the details of the null geodesics near less-known black holes and of their shadows.

In this paper, we investigate a simple analytic formula for the contour of the shadow of rotating black holes, apply the formula to two new examples of rotating black holes, and then examine several known results with the formula. We emphasize that the formula would help to categorize the shadows of dozens of rotating black holes, and that it can describe new examples of the contours of the shadows of rotating black holes.

This paper is organized as follows. In Sec. II, we introduce a line element describing a rotating black hole spacetime. In Sec. III, we obtain the null geodesic equations in the rotating black hole spacetime. In Sec. IV, we investigate a formula for the contour of the shadow of the rotating black hole and we apply the formula to two examples of rotating black holes. In Sec. V, we examine several known results by using the formula. In Sec. VI, we summarize our results. In this paper, we use units in which the light speed and Newton's constant are unity.

§ A LINE ELEMENT IN A ROTATING BLACK HOLE SPACETIME

A Newman-Janis algorithm <cit.> generates a stationary and axisymmetric black hole spacetime from a static and spherical black hole spacetime. The Newman-Janis algorithm was used to find the Kerr-Newman solution, which is an exact solution of the Einstein-Maxwell equations <cit.>, from the Reissner-Nordström solution.
Recently, the Newman-Janis algorithm has been eagerly applied for regular black hole spacetime to obtain rotating regular black hole metrics.We consider a line element in an asymptotically-flat, stationary, and axisymmetric black hole spacetime, in the Boyer-Lindquist coordinates,ds^2 =-ρ^2Δ/Σdt^2 +Σsin^2θ/ρ^2[ dϕ-a(r^2+a^2-Δ)/Σdt ]^2 +ρ^2/Δdr^2 +ρ^2dθ^2= -(1-2m(r)r/ρ^2)dt^2-4m(r)arsin^2θ/ρ^2dϕ dt+( r^2+a^2+ 2m(r)a^2rsin^2θ/ρ^2) sin^2θ dϕ^2 +ρ^2/Δdr^2 +ρ^2dθ^2,whereρ^2 ≡ r^2+a^2cos^2θ ,Δ(r) ≡ r^2-2m(r)r+a^2,Σ ≡ (r^2+a^2)^2-a^2Δ(r) sin^2θand where a is a spin parameter defined as a≡ J/Mand M and J are the ADM mass and the angular momentum of the black hole, respectively,and m(r) is a function with respect to the radial coordinate satisfying m(r) → M as r →∞. The line element (<ref>) can be obtainedwhen we apply the Newman-Janis algorithm for an asymptotically-flat, static, and spherical black hole spacetime with a line element <cit.>,ds^2 = -(1-2m(r)/r)dt^2 +(1-2m(r)/r)^-1dr^2 +r^2 ( dθ^2 +sin^2θ dϕ^2). We assume the existence of an event horizon at r=r_+. In other words,we assume that an equation Δ(r)=0 has one or more positive solutionsand that r=r_+ is the largest positive solution among them. We also assume that m(r) is regular in a range r≥ r_+. In Secs. IV and V, we show that several examples of the function m(r). The components of the inverse metric are given byg^tt=-Σ/ρ^2Δ, g^tϕ=-2m(r)ar/ρ^2Δ, g^ϕϕ=Δ -a^2sin^2θ/ρ^2Δsin^2θ, g^rr=Δ/ρ^2, g^θθ=1/ρ^2. § NULL GEODESIC EQUATIONIn this section, we investigate the Hamilton-Jacobi method for the motion of a photon in the asymptotically-flat, stationary and axisymmetric black hole spacetime.We define the action S=S(λ, x^μ) as a function of the coordinates x^μ and the parameter λ. The Hamilton-Jacobi equation is obtained by∂ S/∂λ +H =0,where H≡ g_μνp^μp^ν/2 is the Hamiltonian of the photon motion and p_μ is the conjugate momentum of the photon given by p_μ≡∂ S/∂ x^μ.We can write the action S in the following form with the cyclic coordinates t and ϕ:S=1/2μ^2λ -Et+L ϕ +S_r(r) +S_θ(θ),wherethe conserved energy E≡ -p_t, the conserved angular momentum L≡ p_ϕ and the mass μ≡ -p_μp^μ=0 of the photonare constant along the geodesic and S_r(r) and S_θ(θ) are functions of the coordinates r and θ, respectively. We can rewrite the Hamilton-Jacobi equation in-Δ( dS_r/dr)^2 +[ (r^2+a^2)E-aL ]^2/Δ =( dS_θ/dθ)^2 +(L-aEsin^2θ)^2/sin^2θ.Both sides of this equation are constant.We can divide the Hamilton-Jacobi equation into two equations with respect to r and θ:𝒦=-Δ( dS_r/dr)^2 +[ (r^2+a^2)E-aL ]^2/Δand𝒦=( dS_θ/dθ)^2 +(L-aEsin^2θ)^2/sin^2θ,where 𝒦 is a constant.From dx^μ/dλ=p^μ=g^μν p_ν,we obtain ρ^2dt/dλ=-a(aEsin^2θ-L)+(r^2+a^2)P(r)/Δ(r),ρ^2dr/dλ=σ_r√(R(r)),ρ^2dθ/dλ=σ_θ√(Θ(θ)),ρ^2dϕ/dλ=-( aE-L/sin^2θ)+aP(r)/Δ(r),whereP(r)≡ E(r^2+a^2)-aL, R(r)≡ P(r)^2-Δ(r) [ (L-aE)^2+𝒬], Θ(θ) ≡𝒬+cos^2θ( a^2E^2-L^2/sin^2θ)and where σ_r=± 1 and σ_θ=± 1 are independentand 𝒬 is the Carter constant defined by 𝒬≡𝒦-(L-aE)^2. R(r) and Θ(θ) should be non-negative for the photon motion.From Θ(θ) ≥ 0, we obtainΘ(θ)/E^2=η +(a-ξ)^2 -( asinθ -ξ/sinθ)^2≥ 0,where η≡𝒬/E^2 and ξ≡ L/E.Equation (<ref>) can be rewritten inR(r)/E^2=r^4+ ( a^2-ξ^2-η )r^2 +2m(r) [ (ξ-a)^2+η]r -a^2ηand the derivative with respective to the radial coordinate r is given byR'/E^2=4r^3+2(a^2-ξ^2-η)r +2m(r) [ (ξ-a)^2 +η] f(r),where ' denotes the differentiation with respect to the radial coordinate r and where f(r) is defined asf(r)≡ 1+m'r/m. 
§ THE SHADOW OF THE ROTATING BLACK HOLEIn this section, we consider a formula for the contour of the shadow of the rotating black hole and apply it for new examples. §.§ A formula for the black hole shadowWe give a formula for the contour of the shadow of the rotating black hole. We assume that the rotating black hole spacetime has unstable circular null orbits satisfying .R(r) |_r=r_0=. R'(r)|_r=r_0=0,and .R”(r) |_r=r_0 > 0,where r_0 is the radius of the unstable circular null orbits.We also assume r_+≤ r_0. From Eq. (<ref>), we obtain (4-f_0)r_0^4 +(2-f_0)r_0^2a^2 -[(2-f_0)r_0^2-f_0a^2]η =(2-f_0)r_0^2ξ^2andr_0^4-(2-f_0)m_0a^2r_0+[a^2-(2-f_0)m_0r_0]η =(2-f_0)m_0r_0 ( ξ^2-2aξ ),where m_0 and f_0 are defined as m_0≡ m(r_0) and f_0≡ f(r_0), respectively.From Eqs. (<ref>) and (<ref>), we obtain a quadratic equation with respective to ξ as a^2(r_0-f_0m_0)ξ^2-2am_0[(2-f_0)r_0^2-f_0a^2]ξ -r_0^5 +(4-f_0)m_0r_0^4 -2a^2r_0^3 +2a^2m_0(2-f_0)r_0^2 -a^4r_0 -a^4m_0f_0=0.Equation (<ref>) has two real solutions ξ=ξ_± given byξ_±≡m_0[ ( 2-f_0) r_0^2- f_0 a^2] ± r_0Δ_0/a(r_0-f_0m_0), where Δ_0 is defined as Δ_0 ≡Δ(r_0).We can simplify the solution ξ_+ as ξ_+=r_0^2+a^2/a,and, by using Eq. (<ref>), we obtain η=η_+≡ -r_0^4/a^2 =-(a-ξ_+)^2.We notice that ξ_+ and η_+ are the same as the Kerr black hole case <cit.>. From Eqs. (<ref>) and (<ref>), only θ=θ_+, where θ_+ is a constant satisfying ξ_+=asin^2θ_+, is permitted in this case. Thus, the solution ξ=ξ_+ must be rejected for our purpose to describe the black hole shadow. We choose the solution ξ=ξ_-, where ξ_-≡4m_0r_0^2-(r+f_0m_0)(r_0^2+a^2)/a(r_0-f_0m_0) since we are interested in the shadow of the black hole. From Eq. (<ref>), we get η=η_-, where η_-≡r_0^3{4(2-f_0)a^2m_0-r_0[r_0-(4-f_0)m_0]^2}/a^2(r_0-f_0m_0)^2. We consider an asymptotically-flat, stationary, and axisymmetric black holeseen by an observer at a large distance from the black hole along an inclination angle θ_i. The contour of the shadow of the black hole can be expressedby celestial coordinates α and β in a small part of the celestial sphere of the observer <cit.>. The celestial coordinates α and β are obtained by α = ξ_-/sinθ_iandβ =σ_θ√(η_- +(a-ξ_-)^2 -( asinθ_i -ξ_-/sinθ_i)^2).See Appendix A for the calculation of the celestial coordinates α and β. If we obtain a specific form of the function m(r), from Eqs. (<ref>) and (<ref>),we can obtain ξ_- and η_-. Then, by using α and β, we can draw the contour of the shadow of the black hole with the inclination angle θ_i. §.§ ApplicationWe apply the formula for two new examples of black holes to check that it works well. §.§.§ Rotating black hole with m(r)=2M/(1+e^s/r)First we consider a rotating black hole applied the Newman-Janis algorithmwith m(r)=2M/1+e^s/rsuggested in Refs. <cit.>. Here s is a positive constant. f_0 is given byf_0=r_0+(r_0+s)e^s/r_0/r_0(1+e^s/r_0).There is a boundary of parameters a and s for existence of the event horizon shown in Fig. 1. The maximum value of a/M is 1 for s/M = 0, it decreases as s/M increases, and it is 0 for s/M ∼ 1.114.From Eqs. (<ref>) - (<ref>),the shadow of the black hole is obtained as shown in Fig. 2.§.§.§ Rotating black hole with m(r)=4Me^u/√(r)/( 1+e^u/√(r))^2Next we consider a rotating black hole applied the Newman-Janis algorithm with m(r)=4Me^u/√(r)/( 1+e^u/√(r))^2where u is a positive constant which is suggested in Ref. <cit.>. 
f_0 is obtained as f_0=1-(1-e^u/√(r_0))u/2(1+e^u/√(r_0))√(r_0).Figure 3 shows a boundary of parameters a and u for existence of the event horizon.The maximum value of a/M is unity for u/√(M)= 0, decreases as u/√(M) increases, and vanishes for u/√(M)∼ 1.874. From Eqs. (<ref>) - (<ref>), (<ref>), and (<ref>),the contour of the shadow of the black hole is obtained and it is shown in Fig. 4.§ EXAMINATION OF KNOWN RESULTSIn this section, we examine that Eqs. (<ref>) and (<ref>) recoversseveral known results of rotating black holes such as the Kerr-Newman black hole and rotating regular black holes.§.§ The Kerr-Newman black holeThe Kerr-Newman solution is an exact solution of the Einstein-Maxwell equations <cit.>. The function m(r) of the Kerr-Newman spacetime with the electrical charge Q is given by m(r)=M-Q^2/2r.The Kerr-Newman solution is a black hole solution when M^2-a^2-Q^2 ≥ 0 is satisfied. The contour of the shadow of the Kerr-Newman black hole was investigated in Ref. <cit.>. From Eqs. (<ref>) and (<ref>), ξ_- and η_- are given byξ_-=2r_0(2Mr_0-Q^2)-(r_0+M)(r_0^2+a^2)/a(r_0-M)andη_- =r_0^2{ 4a^2(Mr_0-Q^2)-[r_0(r_0-3M)+2Q^2 ]^2 }/a^2(r_0-M)^2,respectively. Equations (<ref>) and (<ref>) are equal toEqs. (51) and (52) in Ref. <cit.> andEqs. (6) and (7) in Ref. <cit.>.When the electrical charge Q vanishes,the spacetime is the Kerr black hole spacetime <cit.> andξ_- and η_- becomeξ_-=4Mr_0^2-(r_0+M)(r_0^2+a^2)/a(r_0-M)andη_- =r_0^3[ 4a^2M- r_0(r_0-3M)^2 ]/a^2(r_0-M)^2,respectively. Equations (<ref>) and (<ref>) are equal to Eqs. (48) and (49) obtained by Bardeen <cit.>. §.§ A braneworld black hole with a tidal chargeA rotating black hole localized on a 3-brane in the Randall-Sundrum braneworld <cit.> was considered by Aliev and Gumrukcuoglu <cit.>. The function m(r) of the braneworld black hole with a tidal charge b is given by m(r)=M-b/2r.When M^2-a^2-b ≥ 0 is satisfied, the spacetime is a black hole spacetime. The line element is the same to the one of the Kerr-Newman black hole spacetime if the tidal charge is nonnegative while it is not if the tidal charge is negative.Notice that the spin parameter a can be larger than the ADM mass Mwhen the tidal charge b is negative <cit.>. The contour of the shadow of the rotating braneworld black hole with the tidal charge b was investigated in Refs. <cit.>. From Eqs. (<ref>) and (<ref>), we obtain ξ_- and η_- asξ_-=2r_0(2Mr_0-b)-(r_0+M)(r_0^2+a^2)/a(r_0-M)andη_- =r_0^2{ 4a^2(Mr_0-b)-[r_0(r_0-3M)+2b ]^2 }/a^2(r_0-M)^2,respectively. We notice that Eqs. (<ref>) and (<ref>) are the same as Eqs. (11) and (12) in Ref. <cit.>.§.§ Rotating regular black holeRecently, rotating regular black hole metrics were suggested eagerly.§.§.§ Rotating Bardeen black holeApplying the Newman-Janis algorithm to the Bardeen black hole <cit.>,a rotating regular black hole metric was obtained in Ref. <cit.>. The function m(r) is given bym(r)=M( r^2/r^2+c^2)^3/2where c is the monopole charge of a self-gravitating magnetic field <cit.>.The shadow of the rotating Bardeen black hole was investigated in Refs. <cit.>. From Eqs. (<ref>) and (<ref>), we obtain ξ_- and η_- asξ_-=1/a[(r_0^2+c^2)^5/2-Mr_0^2(r_0^2+4c^2)]{ 4Mr_0^4(r_0^2+c^2) .. -[(r_0^2+c^2)^5/2+Mr^2_0(r_0^2+4c^2) ](r_0^2+a^2) }and η_-= r_0^4/a^2[(r_0^2+c^2)^5/2-Mr_0^2(r_0^2+4c^2)]^2 ×{ 4Ma^2(r_0^2+c^2)^5/2(r_0^2-2c^2)..-[ (r_0^2+c^2)^5/2-3Mr_0^4 ]^2},respectively. Equations (<ref>) and (<ref>)  are the same as Eq. (2.19) in Ref. <cit.>and Eqs. (41) and (42) in Ref <cit.>. We comment on Ref. 
<cit.> in Appendix B.§.§.§ Rotating Hayward black holeApplying the Newman-Janis algorithm to a regular black hole considered by Hayward <cit.>,a rotating regular black hole metric was obtained in Ref. <cit.>. The function m(r) is given bym(r)=Mr^3/r^3+g^3where g is a constant.The contour of shadow of the rotating Hayward black hole was considered in Refs. <cit.>. From Eqs. (<ref>) and (<ref>), ξ_- and η_- are given byξ_-=1/a[(r_0^3+g^3)^2-Mr_0^2(r_0^3+4g^3)]{ 4Mr_0^4(r_0^3+g^3) .. -[(r_0^3+g^3)^2+Mr_0^2(r_0^3+4g^3) ](r_0^2+a^2) }and η_-= r_0^4/a^2[(r_0^3+g^3)^2-Mr_0^2(r_0^3+4g^3)]^2 ×{ 4Ma^2(r_0^3+g^3)^2(r_0^3-2g^3) .. -[ (r_0^3+g^3)^2-3Mr_0^5 ]^2},respectively. Equations (<ref>) and (<ref>) are the same as Eqs. (41) and (42) in Ref <cit.>. See also Appendix B.§.§.§ A rotating black hole considered by Ghosh <cit.>Ghosh applied the Newman-Janis algorithm to one of regular black holes suggested by Balart and Vagenas <cit.> and obtained a rotating regular black hole metric <cit.>. The function m(r) in Ref. <cit.> is m(r)=Me^-h/r,where h is constant.The contour of the shadow of the black hole was calculated by Amir and Ghosh <cit.>. From Eqs. (<ref>) and (<ref>), we obtain ξ_- and η_- asξ_-=4Mr_0^3e^-h/r_0-[r_0^2+M(r_0+h)e^-h/r_0](r_0^2+a^2)/a[r_0^2-M(r_0+h)e^-h/r_0]andη_-=r_0^4{4Ma^2(r_0-h)e^-h/r_0-[r_0^2+M(-3r_0+h)e^-h/r_0]^2}/a^2[r_0^2-M(r_0+h)e^-h/r_0]^2,respectively. Equations (<ref>) and (<ref>) are the same as Eqs. (22) and (23) in Ref. <cit.>.§.§.§ A rotating black hole considered by Tinchev <cit.>Tinchev suggested a rotating regular black hole with a functionm(r)=Me^-j/r^2,where j is a constant, and Tinchev calculated the contour of the shadow of the black hole <cit.>. From Eqs. (<ref>) and (<ref>), we get ξ_-=4Mr_0^4e^-j/r_0^2-[r_0^3+M(r_0^2+2j)e^-j/r_0^2](r_0^2+a^2)/a[r_0^3-M(r_0^2+2j)e^-j/r_0^2]andη_-= r_0^4/a^2[r_0^3-M(r_0^2+2j)e^-j/r_0^2]^2 ×{4Ma^2r_0(r_0^2-2j)e^-j/r_0^2.. -[r_0^3-M(3r_0^2-2j)e^-j/r_0^2]^2}. Equation (<ref>) is equal to ξ_- in Eq. (13) in Ref. <cit.> while Eq. (<ref>) is not equal to η_- in Eq. (13) in Ref. <cit.>,which is given by, in our notation,η_- =2{ (r_0^2+a^2) [ (r_0m_0)'^2+r_0^2] -4r_0^2m_0(r_0m_0)' }/[(r_0m_0)'-r_0]^2.When j vanishes, the rotating regular black hole is the Kerr black hole. For j=0, Eq. (<ref>) is equal to η_- in the Kerr black hole spacetime obtained as Eq. (<ref>) while Eq. (<ref>) becomes η_- =2 [ (r_0^2+a^2)(r_0^2+M^2)-4M^2r_0^2 ] /(r_0-M)^2and it is not the same as Eq. (<ref>). §.§ A rotating black hole considered by Atamurotov, Ghosh, and Ahmedov <cit.>Atamurotov et al. investigated the contour of the shadow of a rotating black hole <cit.> and they claimed that the rotating black hole was a rotating black hole obtained in Ref. <cit.>. The rotating black hole in Ref. <cit.> has the function m(r) given bym(r)=M-K^2(r)/2r,where K(r) is a function with respective to r.Please see Refs. <cit.> and <cit.> carefully.From Eqs. (<ref>) and (<ref>), we obtainξ_-=2r_0(2Mr_0-K_0^2)-(r_0+M-K_0K_0')(r_0^2+a^2)/a(r_0-M+K_0K_0')andη_-= r_0^2 /a^2(r_0-M+K_0K_0')^2 ×{ 4a^2 [ r_0(M+K_0K_0')-K_0^2] ..-[r_0(r_0-3M-K_0K_0')+2K_0^2 ]^2 },where K_0 and K_0' are defined as K_0≡ K(r_0) and K_0'≡ K'(r_0), respectively. Equation (<ref>) is equal to Eq. (24) in Ref. <cit.> while Eq. (<ref>) is not equal to η_- obtained by Atamurotov et al. as seen Eq. (25) in Ref. <cit.>. Atamurotov et al. obtained η_- as, in our notation, η_- =16Δ_0 r_0^2a^2-(r_0^2+a^2)Δ_0'+4rΔ_0+Δ_0'a/(Δ_0'a)^2,where Δ_0' is defined by Δ_0'≡Δ'(r_0). 
From a dimensional analysis, we notice that Eq. (25) in Ref. <cit.> or Eq. (<ref>) should be modified.§.§ A rotating black hole considered by Modesto and Nicolini <cit.>Modesto and Nicolini applied the Newman-Janis algorithm toa noncommutative geometry inspired Reissner-Nordström solution obtained in Ref. <cit.> and obtained a rotating black hole metric <cit.>. The function m(r) of the rotating black hole spacetime is given by m(r)=n(r)-q^2(r)/2r,where n(r) and q(r) are functions with respect to r.See Eqs. (37), (45) and (46) in Ref. <cit.> or Eq. (1)in Ref. <cit.>. [n(r), in our notation,is m(r) in a notation in Refs. <cit.>.]The contour of the shadow of the rotating black hole was investigated by Sharif and Iftikhar <cit.>. From Eqs. (<ref>) and (<ref>), we obtain ξ_- and η_- asξ_-= 1/a(r_0-n_0-n_0'r_0+q_0q_0')[ 2r_0(2n_0r_0-q_0^2) . . -(r_0+n_0+n_0'r_0-q_0q_0')(r_0^2+a^2) ]and η_-= r_0^2/a^2(r_0-n_0-n_0'r_0+q_0q_0')^2 ×{ 4a^2[ r_0(n_0-n_0'r_0+q_0q_0')-q_0^2 ] . . -[ r_0(r_0-3n_0+n_0'r_0-q_0q_0')+2q_0^2 ]^2},respectively, where n_0, n'_0, q_0, and q_0' are defined asn_0≡ n(r_0), n'_0 ≡ n'(r_0), q_0 ≡ q(r_0), and q_0' ≡ q'(r_0), respectively. We notice that Eqs. (<ref>) and (<ref>) are not equal to Eqs. (15) and (16) in Ref. <cit.>.Equations (15) and (16) in Ref. <cit.> are given by, in our notation, ξ_-= 1/a[n_0+r_0(n_0'-1)-q_0q_0'][ n_0(a^2-3r_0^2) . . +r_0(r_0^2+a^2)(n_0'+1)+2q_0^2-q_0q_0'(r_0^2+a^2) ]andη_-= r_0^2/a^2[n_0+r_0(n_0'-1)]^2[ n_0r_0(4a^2-9n_0r_0+6r_0^2) . -2n_0'r_0^2(2a^2+r_0^2-3n_0r_0) -r_0^4(n_0'^2+1)-4q_0^2(a^2+q_0^2-3n_0r_0+n_0'r_0^2+r_0^2). -q_0'(4a^2+4q_0^3-6n_0q_0'r_0-2n_0'q_0r_0^2-q_0r_0+2q_0r_0^2) ],respectively. From a dimensional analysis,we notice that Eqs. (15) and (16) in Ref. <cit.> or Eqs. (<ref>) and (<ref>) should be modified.§ SUMMARYWe have obtained a formula for the contour of the shadow of rotating black holes generated by the Newman-Janis algorithm.We have applied the formula to two new examples of the contour of the shadow of rotating black holes.We notice the shadows is very similar to the shadow of the Kerr-Newman black hole.By using the formula, we have examined ξ_- and η_- of the Kerr-Newman black hole and rotating regular black holes and the other rotating black holes. The formula would help to categorize the many known results of the shadow of rotating black holes and to obtain new examples of black hole shadows. § ACKNOWLEDGEMENTSThe author thanks R. A. Konoplya for his useful comment.This research was supported in part by the National Natural Science Foundation of China under Grant No. 11475065, the Major Program of the National Natural Science Foundation of China under Grant No. 11690021. § CELESTIAL COORDINATES Α AND ΒIn this Appendix, we show the calculation for the celestial coordinates α and β <cit.>. 
We can express the line element (<ref>)in the asymptotically-flat, stationary, and axisymmetric black hole spacetime in the Boyer-Lindquist coordinates as follows:ds^2 = -e^2νdt^2 +e^2ψ( dϕ-ω dt )^2 +e^2χdr^2 +ρ^2dθ^2,wheree^ν≡√(ρ^2Δ/Σ),e^ψ≡√(Σsin^2θ/ρ^2),ω≡2m(r)ar/Σ,e^χ≡√(ρ^2/Δ).The inverse of the metric tensors are obtained asg^tt = -e^-2ν,g^t ϕ =-ω e^-2ν,g^ϕϕ =-ω^2e^-2ν +e^-2ψ,g^rr = e^-2χ,g^θθ = ρ^-2.We use a tetrad frame by the basis-vectors e_(β)αdx^αand the contravariant basis-vectors e_(β)^α∂_α:e_(0)αdx^α=-e^νdt,e_(1)αdx^α=-ω e^ψdt +e^ψdϕ,e_(2)αdx^α=e^χdr,e_(3)αdx^α=ρ dθ,and e_(0)^α∂_α = e^-ν∂_t +ω e^-ν∂_ϕ,e_(1)^α∂_α = e^-ψ∂_ϕ,e_(2)^α∂_α = e^-χ∂_r,e_(3)^α∂_α = ρ^-1∂_θ.The tetrad components of the 4-momentum of the photonp^(β) are obtained asp^(0) = -p_(0)=-e^νp^t= -e^ν[ (E-ω L) e^-2ν] = (ω L -E) e^-ν, p^(1) = p_(1)=e^ψ( -p^tω+ p^ϕ) =Le^-ψ, p^(2) = p_(2)=e^χp^r,p^(3) = p_(3)=ρ p^θ, In an asymptotic region r →∞, we obtaine^ν→ 1,e^ψ→ r sinθ,ω→2Ma/r^3,e^χ→ 1.Thus, as r →∞, the tetrad components of the 4-momentumof the photon p^(β) are given byp^(0)→ -E,p^(1)→L/r sinθ,p^(2)→ p^r, p^(3)→ r p^θ. The contour of the shadow of an asymptotically-flat, stationary, and axisymmetric black hole seen by an observerat a large distance from the black hole with an inclination angle θ_i is expressedby celestial coordinates α and β <cit.>. The celestial coordinates α and β are defined byα≡lim_r→∞( -rp^(1)/p^(0))= ξ/sinθ_iandβ≡lim_r→∞( -rp^(3)/p^(0))= σ_θ√(Θ/E^2),respectively. § COMMENT ON LI AND BAMBI <CIT.>In this short Appendix, we comment on Ref. <cit.>. Li and Bambi claimed that ξ_- and η_- of the rotating Bardeen black hole and the rotating Hayward black hole are complicatedand they did not show the explicit forms of ξ_- and η_- in Ref. <cit.>.ξ_- and η_-, however, are not complicated so much as we showed them in Eqs. (<ref>) and (<ref>) for the rotating Bardeen black holeand in Eqs. (<ref>) and (<ref>) for the rotating Hayward black hole.We note that we should read Eq. (3.6) in Ref. <cit.>, in our notation,. R'(r)|_r=r_0 =4r_0^3+2(a^2-ξ^2-η)r_0+2m_0[η+(ξ-a)^2] =0as . R'(r)|_r=r_0 =4r_0^3+2(a^2-ξ^2-η)r_0+2m_0[η+(ξ-a)^2]f_0 =0which was obtained as Eq. (2.17) in Tsukamoto et al. <cit.>.99Abbott:2016blz B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations],Phys. Rev. Lett.116, 061102 (2016). Abbott:2016nmjB. P. Abbott et al. [LIGO Scientific and Virgo Collaborations],Phys. Rev. Lett.116, 241103 (2016). Abbott:2017vtcB. P. Abbott et al. [LIGO Scientific and VIRGO Collaborations],Phys. Rev. Lett.118, 221101 (2017). Konoplya:2016pmhR. Konoplya and A. Zhidenko,Phys. Lett. B 756, 350 (2016). Ghez:2008msA. M. Ghez et al.,Astrophys. J.689, 1044 (2008). Gillessen:2008qvS. Gillessen, F. Eisenhauer, S. Trippe, T. Alexander, R. Genzel, F. Martins, and T. Ott,Astrophys. J.692, 1075 (2009). Meyer:2012hnL. Meyer et al.,Science 338, 84 (2012). Do:2013upaT. Do et al.,Astrophys. J.779, L6 (2013). Reid:2014boaM. J. Reid et al.,Astrophys. J.783, 130 (2014). ChatzopoulosS. Chatzopoulos, T. K. Fritz, O. Gerhard, S. Gillessen, C. Wegg, R. Genzel, and O. Pfuhl, Mon. Not. R. Astron. Soc. 447, 948 (2015).Falcke:1999pjH. Falcke, F. Melia, and E. Agol,Astrophys. J.528, L13 (2000). LuminetJ. -P. Luminet, Astronomy and Astrophysics, 75, 228, 1979.FukueJ. Fukue and T. Yokoyama,PASJ, 40, 15, 1988.Takahashi:2004xhR. Takahashi,Astrophys. J.611, 996 (2004). Bardeen:1973tlaJ. M. Bardeen, “Timelike and null geodesics in the Kerr metric,”in Black Holes (Les Astres Occlus), edited by C. Dewitt and B. S. 
Dewitt,215 (1973).Fish:2016jilV. L. Fish et al. [Event Horizon Telescope Collaboration],Galaxies 4, 54 (2016). Doeleman:2017nxkS. Doeleman,Nat. Astron.1, 646 (2017). deVries:2000A. de Vries, Class. Quantum Grav. 17, 123 (2000).Takahashi:2005hyR. Takahashi,Publ. Astron. Soc. Jap.57, 273 (2005). Kraniotis:2014paaG. V. Kraniotis,Gen. Rel. Grav.46, 1818 (2014). Amarilla:2010zqL. Amarilla, E. F. Eiroa, and G. Giribet,Phys. Rev. D 81, 124045 (2010).Nitta:2011inD. Nitta, T. Chiba, and N. Sugiyama,Phys. Rev. D 84, 063008 (2011). Schee:2008fcJ. Schee and Z. Stuchlik,Gen. Rel. Grav.41, 1795 (2009). Amarilla:2011fxL. Amarilla and E. F. Eiroa,Phys. Rev. D 85, 064019 (2012). Yumoto:2012kzA. Yumoto, D. Nitta, T. Chiba, and N. Sugiyama,Phys. Rev. D 86, 103001 (2012). Abdujabbarov:2012bnA. Abdujabbarov, F. Atamurotov, Y. Kucukakca, B. Ahmedov, and U. Camci,Astrophys. Space Sci.344, 429 (2013). Atamurotov:2013dpaF. Atamurotov, A. Abdujabbarov, and B. Ahmedov,Astrophys. Space Sci.348, 179 (2013).Atamurotov:2013scaF. Atamurotov, A. Abdujabbarov, and B. Ahmedov,Phys. Rev. D 88, 064004 (2013).Amarilla:2013sjL. Amarilla and E. F. Eiroa,Phys. Rev. D 87, 044057 (2013). Li:2013jraZ. Li and C. Bambi,JCAP 1401, 041 (2014). Wei:2013kzaS. W. Wei and Y. X. Liu,JCAP 1311, 063 (2013). Tsukamoto:2014tjaN. Tsukamoto, Z. Li, and C. Bambi,JCAP 1406, 043 (2014). Grenzebach:2014fhaA. Grenzebach, V. Perlick, and C. Lammerzahl,Phys. Rev. D 89, 124004 (2014). Papnoi:2014aaaU. Papnoi, F. Atamurotov, S. G. Ghosh, and B. Ahmedov,Phys. Rev. D 90, 024073 (2014). Wei:2015duaS. W. Wei, P. Cheng, Y. Zhong, and X. N. Zhou,JCAP 1508, 004 (2015). Abdolrahimi:2015ruaS. Abdolrahimi, R. B. Mann, and C. Tzounis,Phys. Rev. D 91, 084052 (2015). Grenzebach:2015oeaA. Grenzebach, V. Perlick, and C. Lammerzahl,Int. J. Mod. Phys. D 24, 1542024 (2015). Ghasemi-Nodehi:2015raaM. Ghasemi-Nodehi, Z. Li, and C. Bambi,Eur. Phys. J. C 75, 315 (2015). Cunha:2015ybaP. V. P. Cunha, C. A. R. Herdeiro, E. Radu, and H. F. Runarsson,Phys. Rev. Lett.115, 211102 (2015). Johannsen:2015hibT. Johannsen et al.,Phys. Rev. Lett.116, 031101 (2016). Tinchev:2015apfV. K. Tinchev,Chin. J. Phys.53, 110113 (2015). Shipley:2016omiJ. Shipley and S. R. Dolan,Class. Quant. Grav.33, 175001 (2016). Amir:2016cenM. Amir and S. G. Ghosh,Phys. Rev. D 94, 024054 (2016). Abdujabbarov:2016hnwA. Abdujabbarov, M. Amir, B. Ahmedov, and S. G. Ghosh,Phys. Rev. D 93, 104004 (2016). Vincent:2016sjqF. H. Vincent, E. Gourgoulhon, C. Herdeiro, and E. Radu,Phys. Rev. D 94, 084045 (2016). Dastan:2016vhbS. Dastan, R. Saffari, and S. Soroushfar,arXiv:1606.06994 [gr-qc].Younsi:2016azxZ. Younsi, A. Zhidenko, L. Rezzolla, R. Konoplya, and Y. Mizuno,Phys. Rev. D 94, 084025 (2016). Tretyakova:2016aleD. A. Tretyakova and T. M. Adyev,arXiv:1610.07300 [gr-qc].Dastan:2016bfyS. Dastan, R. Saffari, and S. Soroushfar,arXiv:1610.09477 [gr-qc].Sharif:2016znpM. Sharif and S. Iftikhar,Eur. Phys. J. C 76, 630 (2016). Cunha:2016wzkP. V. P. Cunha, C. A. R. Herdeiro, B. Kleihaus, J. Kunz, and E. Radu,Phys. Lett. B 768, 373 (2017). Singh:2017vfrB. P. Singh and S. G. Ghosh,arXiv:1707.07125 [gr-qc].Wang:2017hjlM. Wang, S. Chen, and J. Jing,arXiv:1707.09451 [gr-qc].Amir:2017slqM. Amir, B. P. Singh, and S. G. Ghosh,arXiv:1707.09521 [gr-qc].Johannsen:2015qcaT. Johannsen,Astrophys. J.777, 170 (2013). Hioki:2009naK. Hioki and K. i. Maeda,Phys. Rev. D 80, 024042 (2009). Tsupko:2017rdoO. Y. Tsupko,Phys. Rev. D 95, 104058 (2017). Abdujabbarov:2015xqaA. A. Abdujabbarov, L. Rezzolla, and B. J. Ahmedov,Mon. Not. Roy. Astron. 
Soc.454, 2423 (2015). Vishveshwara:1970zzC. V. Vishveshwara,Nature 227, 936 (1970).Mashhoon:1985cyaB. Mashhoon,Phys. Rev. D 31, 290 (1985).Hod:2009tdS. Hod,Phys. Rev. D 80, 064004 (2009). Cardoso:2008bpV. Cardoso, A. S. Miranda, E. Berti, H. Witek, and V. T. Zanchin,Phys. Rev. D 79, 064016 (2009). Bozza:2008miV. Bozza,Phys. Rev. D 78, 063014 (2008). Stefanov:2010xzI. Z. Stefanov, S. S. Yazadjiev, and G. G. Gyulchev,Phys. Rev. Lett.104, 251103 (2010). Wei:2013mdaS. W. Wei and Y. X. Liu,Phys. Rev. D 89, 047502 (2014). Raffaelli:2014olaB. Raffaelli,Gen. Rel. Grav.48, 16 (2016). Cunha:2017eoeP. V. P. Cunha, C. A. R. Herdeiro, and E. Radu,Phys. Rev. D 96, 024039 (2017). Konoplya:2017wotR. A. Konoplya and Z. Stuchlik,Phys. Lett. B 771, 597 (2017). Newman:1965twE. T. Newman and A. I. Janis,J. Math. Phys.6, 915 (1965).Newman:1965myE. T. Newman, R. Couch, K. Chinnapared, A. Exton, A. Prakash, and R. Torrence,J. Math. Phys.6, 918 (1965).Bambi:2013ufaC. Bambi and L. Modesto,Phys. Lett. B 721, 329 (2013). Balart:2014cgaL. Balart and E. C. Vagenas,Phys. Rev. D 90, 124045 (2014). AyonBeato:1999rgE. Ayon-Beato and A. Garcia,Phys. Lett. B 464, 25 (1999). Kerr:1963udR. P. Kerr,Phys. Rev. Lett.11, 237 (1963).Randall:1999eeL. Randall and R. Sundrum,Phys. Rev. Lett.83, 3370 (1999);L. Randall and R. Sundrum,Phys. Rev. Lett.83, 4690 (1999). Aliev:2005biA. N. Aliev and A. E. Gumrukcuoglu,Phys. Rev. D 71, 104027 (2005). Bardeen:1968J. Bardeen,in Proceedings of GR5, Tiflis, USSR, 1968 (unpublished).Borde:1996dfA. Borde,Phys. Rev. D 55, 7615 (1997). AyonBeato:2000zsE. Ayon-Beato and A. Garcia,Phys. Lett. B 493, 149 (2000). Hayward:2005giS. A. Hayward,Phys. Rev. Lett.96, 031103 (2006). Ghosh:2014pbaS. G. Ghosh,Eur. Phys. J. C 75,532 (2015). Atamurotov:2015xfaF. Atamurotov, S. G. Ghosh, and B. Ahmedov,Eur. Phys. J. C 76,273 (2016). CiriloLombardo:2004qwD. J. Cirilo Lombardo,Class. Quant. Grav.21, 1407 (2004). Modesto:2010rvL. Modesto and P. Nicolini,Phys. Rev. D 82, 104035 (2010). Ansoldi:2006vgS. Ansoldi, P. Nicolini, A. Smailagic, and E. Spallucci,Phys. Lett. B 645, 261 (2007). Chandrasekhar:1983 S. Chandrasekhar,Mathematical Theory of the Black Holes(Oxford University, New York, 1983).
http://arxiv.org/abs/1708.07427v3
{ "authors": [ "Naoki Tsukamoto" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170824140237", "title": "Black hole shadow in an asymptotically-flat, stationary, and axisymmetric spacetime: The Kerr-Newman and rotating regular black holes" }
August, 20171.8cmGauge Five-brane Solutions of Co-dimension Two 0.3cm in Heterotic Supergravity1.8cm Shin Sasaki[shin-s(at)kitasato-u.ac.jp]and Masaya Yata[myata(at)ibs.re.kr]0.5cm ^1Department of Physics Kitasato University Sagamihara 252-0373, Japan ^2Center for Theoretical Physics of the Universe Institute for Basic Science (IBS)Seoul 08826, Republic of Korea 2cm We continue to study the BPS gauge five-brane solutions of codimension two in ten-dimensional heterotic supergravity. The geometry including the dilaton and the NS-NS B-field are sourcedfrom the monopole chain in ℝ^2 × S^1. We find that the geometry is asymptotically Ricci flat and the dilaton is no longer imaginary valued.These properties are contrasted with the smeared counterpart discussed in our previous paper. We perform the T-duality transformations of the solution and find thatit never results in a non-geometric object. § INTRODUCTIONExtended objects known as branes play important roles in string theories. They have been utilized for studying supersymmetric gauge theories <cit.>, the AdS/CFT correspondence <cit.> and model buildings for particle physics in string theories <cit.>. In particular, the U-duality <cit.> isan important nature to understand the whole picture of string theories. When the eleven-dimensional M-theory is compactified on T^d, there appears the U-duality symmetry group E_d(d)(ℝ) in lower dimensions. BPS branes which preserve fractions of supersymmetryin lower dimensions are classified according to the U-duality symmetry group E_d(d)(ℝ).The higher dimensional origin of these lower-dimensional BPS branesare wrapped/unwrapped F-strings, D-branes, NS5-branes, Kaluza-Klein (KK) 5-branes and waves. There are exceptions, however, in lower than eight-dimensional space-time. BPS states whose origin cannot be traced back to these ordinary branes appear in the E_d(d)(ℝ) U-duality multiplet <cit.>. These BPS states are called exotic states whose higher dimensional origin are known as exotic branes <cit.>.Among other things, an exotic brane called 5^2_2-brane in type II string theories have been intensively studied<cit.>. The exotic 5^2_2-brane is a solitonic five-brane of codimension two, whose tension is proportional to g_s^-2, as its name stands for <cit.>. The 5^2_2-brane has the U(1)^2 isometry along the transverse directions to the brane worldvolume.It is a defect brane <cit.> and has a number of specific properties. For example, exotic branes are non-geometric objects <cit.>, namely,the background metric and other supergravity fields for an exotic brane are governed by multi-valued functions of space-time. However, monodromies associated with the exotic branes are indeed given by the U-duality group. This means that the geometry is patched together by the symmetry andit is completely a consistent solution in string theories.In this sense, they are not described by ordinary manifolds but their generalizations, called U-folds. The 5^2_2-branes in type II supergravitiesare obtained through the chain of T-duality transformations of the NS5-branes. The monodromy of the 5^2_2-brane geometry is given by the O(2,2) T-duality group. Therefore it is an explicit example of the T-fold. Some efforts has been devoted to understand the exotic branes in type II string theories.However, exotic branes in heterotic and type I string theories are poorly understood. In our previous paper <cit.>, we have studied the T-duality chain of five-branes in heterotic supergravity. 
It is known that there are three distinct five-branes in heterotic theories <cit.>. They are called the symmetric, neutral and gauge types<cit.>. In order to perform the T-duality transformations of these five-branes, we have introduced U(1)^2 isometry along the transverse directions to the branes by the smearing procedure. Since the heterotic supergravity action contains the kinetic term of the non-Abelian gauge field and the R^2 term which are the leading order in the α' corrections, the well-known Buscher rule <cit.> is modified in heterotic theories <cit.>.Accordingly, the generalized metric is also modified from the one in type II theories. Using the modified Buscher rule in heterotic theories we have derived the KK5-branes and 5^2_2-branes of the three kinds. We found that the monodromies of the 5^2_2-branes for the symmetric and the neutral types are given by the O(2,2) T-duality group and they are thereforenon-geometric objects. On the other hand, we were faced with some difficulties for the gauge type five-brane. We found that the smearing procedure makes the metric of the gauge type solution ill-defined. A function that governs the solution becomes negative valued at some regions in space-time and the dilaton ϕ becomes imaginary valued in there. In order to make the gauge type solution of codimension two well-defined, we need to abandon the smearing procedure and re-consider the gauge five-brane from the first principle.In this paper we make a complementary study of the gauge five-brane of codimension two in heterotic supergravity. We will re-construct the gauge five-brane based on the well-behaved monopole solution of codimension two which is known as a monopole chain <cit.>. We will show that the geometry is asymptotically Ricci flat and the solution is well-behaved except the origin of the brane position. We will perform the chain of the T-duality transformations on the solution and find that the 5^2_2-brane obtained through the new gauge five-brane is not a non-geometric object. The organization of this paper is as follows. In the next section, we introduce the gauge five-brane solution in heterotic supergravity. In section 3, we introduce the Nahm construction of monopole in ℝ^2 × S^1 in the large S^1 limit. In section 4, we write down the gauge five-brane solution of codimension two. In section 5, we analyze the T-duality transformations of the gauge five-brane. The background supergravity fields for the gauge KK5-brane and 5^2_2-brane are obtained. Section 6 is conclusion and discussions. A version of the KK5-brane in another T-duality route is found in Appendix A.§ FIVE-BRANE SOLUTIONS IN HETEROTIC SUPERGRAVITYThe low-energy effective theory of heterotic string theory is the ten-dimensional heterotic supergravity. The action in the 𝒪 (α') is given by <cit.>S =1/2 κ^2_10∫ d^10 x √(-g) e^-2ϕ[ R (ω) - 1/3Ĥ_MNP^(3)Ĥ^(3)MNP + 4 ∂_M ϕ∂^Mϕ/ + α'( Tr F_MN F^MN + R_MN AB (ω_+) R^MNAB (ω_+) ) ].Here we have used the conventionκ^2_10/2g^2_10 =α' whereκ_10 and g_10 are the gravitational and the gauge coupling constants in ten dimensions. The dilaton is denoted as ϕ. The metric g_MN (M,N=0, …, 9) is defined by the vielbein as g_MN = η_AB e_M ^A e_N ^B. Here A,B = 0, …, 9 are indices in the local Lorentz frameand we employ the mostly plus convention of the flat metric η_AB = diag (-1, 1, …, 1). The local Lorentz indices A, B, … are lowered and raised by η_AB and its inverse η^AB. 
The Ricci scalar R (ω) and the Riemann tensor R^AB_MN are defined by the spin connection ω_M^AB as R (ω) =e^M _A e^N _B R^AB_MN (ω), R^AB_MN (ω) =∂_M ω_N ^AB - ∂_Nω_M ^AB + ω_M ^ADη_CDω_N ^CB -ω_N ^ADη_CDω_M ^CB.The spin connection is expressed by the vielbein e_M ^A and its inverse:ω_M^ AB=12[ e^AN( ∂_Me_N^ B - ∂_Ne_M^ B ) -e^BN( ∂_Me_N^ A - ∂_Ne_M^ A ) -e^AP e^BQ ( ∂_P e_Q C - ∂_Qe_P C )e_M^ C].The Riemann tensor in the 𝒪 (α') action (<ref>) is defined through the modified spin connection ω_± M^AB. This is defined by ω_± M^AB = ω_M^AB±Ĥ_M^(3)^AB,where Ĥ_M^(3)^AB = e^NA e^PBĤ_MNP^(3) is the modified H-flux. The modified flux Ĥ_MNP^(3) is defined by Ĥ^(3)_MNP =H_MNP^(3) + α'( Ω_MNP^YM -Ω_MNP^L+) + 𝒪 (α^' 2).Here H_MNP^(3) is the ordinary field strength of the NS-NS B-field:H_MNP^(3) =1/2 (∂_M B_NP + ∂_N B_PM +∂_P B_MN).The Yang-Mills and the Lorentz Chern-Simons terms in (<ref>) are defined by Ω^YM_MNP = 3! Tr( A_[M∂_N A_P] + 2/3 A_[M A_N A_P]),Ω^L+_MNP = 3! ( η_BCη_ADω_+[M^AB∂_Nω_+P]^CD+ 2/3η_AGη_BCη_DFω_+[M^ABω_+N^CDω_+P]^FG).Here A_M = A^I_M T^I is the Yang-Mills gauge field andT^I(I,J,K = 1, …, dim𝒢) arethe generators of the Lie algebra𝒢 associated with the gauge group G. The gauge group G is SO(32) or E_8 × E_8 depending on the heterotic string theories we consider.The symbol [M_1 M_2 ⋯ M_n] stands for the anti-symmetrization of indices with weight 1/n!.The modified H-flux Ĥ^(3) obeys the Bianchi identity:d Ĥ^(3) = α' ( Tr F ∧ F - Tr R ∧ R ) + 𝒪 (α^' 2),where R^AB = 1/2! R^AB_MN d x^M ∧ d x^N is the SO(1,9)-valued curvature 2-form. The component of the Yang-Mills gauge field strength 2-formF = 1/2! F_MN d x^M ∧ dx^N is given by F_MN^I = ∂_M A_N^I - ∂_N A_M^I + f^I _JK A^J_M A^K _N,where f^I _JK is the structure constant for 𝒢.The 1/2 BPS ansatz for the five-brane solution is given by <cit.>ds^2 = η_μν dx^μ dx^ν + H (x) δ_mn dx^m dx^n,Ĥ^(3)_mnp = ∓1/2ε_mnpq∂_q H (x), e^2ϕ = H (x),F_mn = ±F̃_mn = ±1/2ε_mnpq F^pq, A_μ = 0,where the indices μ,ν = 0,5,6,7,8,9 stand for the world-volume while m,n = 1,2,3,4 represent the transverse directions to the five-branes. The Levi-Civita symbol is denoted asε_mnpq. The gauge field A_m satisfies the (anti)self-duality condition in the transverse four dimensions. By using the Bianchi identity (<ref>) together with the ansatz (<ref>), we find that the equation for the“H-function” H(x) reduces toH = ±α' Tr [F_mnF̃^mn] + ⋯,where is the Laplacian in the four-dimensional transverse space and⋯ are terms involving the Riemann curvature. The equation (<ref>) means that the source term in the right hand side of the Poisson equation forH(x)is given by the Yang-Mills instanton density and the Riemann curvatures.There are three distinct solutions to the 1/2 BPS five-brane conditions (<ref>).They are so called symmetric, neutral <cit.> and gauge five-brane solutions <cit.>.The symmetric solution is an exact solution in α'-expansionwhile the neutral and the gauge solutions are valid in 𝒪 (α'). For the neutral and the gauge solutions, the Riemann curvature terms in (<ref>) become higher orders in α' expansion and are neglected.The bulk gauge field becomes trivial in the neutral solution and it is just the NS5-brane in type II supergravities. On the other hand, the gauge solution involves the non-trivial bulk Yang-Mills field configuration.Indeed, the gauge field is given by the instanton configuration with thenon-zero topological charge k defined by k = - 1/ 32 π^2∫Tr [F ∧ F],where the integral is defined in the transverse four-space. 
Once a Yang-Mills instanton solution is obtained,theH-function H (x)is determined through the relation (<ref>). The other quantity that distinguishes the five-branesis the charge Q associated with the modified H-flux:Q = - 1/2π^2 α'∫_S^3Ĥ^(3).Here S^3 is the asymptotic three-sphere surrounding the five-branes. The symmetric, neutral and the gauge five-branes have charges (k,Q) = (1,n), (0,n), (1,8) respectively.In the previous paper, we studied the T-duality transformations of these heterotic five-brane solutions. In order to perform the T-duality transformations, we introduced U(1)^2 isometry along the transverse directions to the five-branes. To this end, we employed the smearing method for which the dimensionality of the space, where theH-function is defined, is effectively reduced <cit.>. Then we wrote down the explicit solutions of the codimension two five-branes with the U(1)^2 isometry. After performing the chain of T-dualities, we constructed the 5^2_2-branes of three kinds. We found that the symmetric and the neutral 5^2_2-branes are T-fold with the non-trivial O(2,2) monodromy. On the other hand, the gauge 5^2_2-brane solution is ill-defined in the sense that the dilaton becomes imaginary valuedin some regions of space-time.This is due to the fact that the right hand side of the equation (<ref>) that we have assumed is not appropriate one. For the concreteness, we start from the gauge five-brane of codimension four.A typical example of the self-dual solution is provided by the BPST one-instanton in the non-singular gauge <cit.>. By employing this configuration as the Yang-Mills field,the solution is given by <cit.>A_m = -σ_mn x^n/r^2 + ρ^2, H (r) = e^2ϕ_0 + 8 α' r^2 + 2 ρ^2/(r^2 +ρ^2)^2,B_mn = Θ_mn,   (constant ),wherer^2 = (x^1)^2 + (x^2)^2 + (x^3)^2 + (x^4)^2 andthe gauge field takes value in the SU(2) subgroup of G. Here σ_mn is the SO(4) Lorentz generator and ϕ_0, ρ are constants. In order to introduce the U(1) isometries in the transverse directions and reduce the codimension of the solution, we look for the self-duality solution to the Yang-Mills gauge field in lower dimensions. It is well known that the self-duality equation for the Yang-Mills gauge field becomes that of monopoles of codimension three by the dimensional reduction. Indeed, in <cit.>, the gauge five-brane solutions of codimension three based on the regular BPS monopoles was obtained. In the previous paper <cit.>, we constructeda gauge five-brane of codimension three by introducing the smearedinstantons in ℝ^3 × S^1 in the right hand side of (<ref>). This is a naive limit of the solution in <cit.> where the radius in ℝ^3 × S^1 becomes small (see fig. <ref> in section 6.). Proceeding further in this way, we have constructed the gauge five-brane solution of codimension two based on the smeared monopole in ℝ^2 × S^1.We found that the H-functionH(x⃗) associated with this solution is given by <cit.>H = e^2ϕ_0 - α' σ̃^2/2 r^2( h̃_0 - σ̃/2log (r/μ) )^2 , r^2 = (x^1)^2 + (x^2)^2,where ϕ_0, σ̃, h̃_0, μ are constants. This is obviously not positive definite.As a result, the dilation becomes imaginary valued at some points near the core of the brane. 
This indicates the fact that the smeared monopole does not work as a source of the well-defined brane geometry of codimension two.In the following, we replace the right hand side of (<ref>) with more appropriate solution, namely, the well-behaved periodic monopole and examine the gauge five-brane solution of codimension two again.§ NAHM CONSTRUCTION FOR MONOPOLES OF CODIMENSION TWOIn this section, we introduce the monopole solution of codimension two which will provide a well-defined brane geometry. There is a systematic mathematical program to find analytic solutions of monopoles, known as the Nahm construction <cit.>. The monopole of codimension two that we consider in this paper isjust the small S^1 limit of a periodic monopole defined in ℝ^2 × S^1. In the following, we write down the explicit field configurationthat is based on the periodic monopole solution discussed in <cit.>.The BPS monopole equation in ℝ^3 is defined as[In the following we employ the minus sign in the right hand side. It is possible, of course, to find the solution for the other sign.]D_i Φ = - B_i,(i = 1,2,3).Here Φ is an adjoint scalar field and B_i is the magnetic field defined through the gauge field A_i. They belong to the adjoint representation of a gauge group G with an anti-hermitian matrix. For definiteness, we consider the G = SU(2) gauge group. The relevant quantities are defined by D_i Φ = ∂_i Φ + [A_i, Φ], F_ij = ∂_i A_j - ∂_j A_i + [A_i, A_j], B_i = 1/2ε_ijk F_jk.We note that the equation (<ref>) is obtained via the dimensional reduction of the (anti)self-duality equation F_mn = - F̃_mn in ℝ^4.The adjoint scalar field Φ is identified with the gauge field component of the compact direction.We now compactify one of the three-dimensional direction in ℝ^3 to S^1 and consider the equation (<ref>) in ℝ^2 × S^1. We define the coordinate on ℝ^2 × S^1 by x⃗ = (x^1,x^2,x^3)≡ (x,y,z) and the S^1 direction has the periodicity z ∼ z + β. Here β = 2π R and R is the radius of S^1. We are looking for the solution to the equation (<ref>) in the small-β limit. In this limit, the equation (<ref>) is effectively defined in two dimensions. For the ordinary 't Hooft monopole of codimension three, the gauge group is broken down to U(1) at infinity and the asymptotic behavior of thesolution is governed by the Abelian Dirac monopole. We therefore employ the same boundary condition for our case. By using the Bianchi identity, the Abelian reduction of the monopole equation (<ref>) on ℝ^2 becomes( ∂^2/∂ x^2 + ∂^2/∂ y^2) Φ = 0.This is the Laplace equation in two dimensions whosespherically symmetric solution is given by Φ = c_1 log r + c_2,r^2 = x^2 + y^2.Here c_1,c_2 are constants. This is the boundary condition of the SU(2) monopoles in ℝ^2 × S^1. Cherkis and Kapustin claimed that the BPS monopoles defined in ℝ^2 × S^1 are the Nahm dual to solutions to the Hitchin system in ℝ× S^1 <cit.>. By the Nahm transformation, the solution to (<ref>) withthe boundary condition (<ref>) is given by Φ = ∫^∞_-∞ d u ∫^π/β_-π/β dvuΨ^†Ψ, A_i = ∫^∞_- ∞ d u ∫^π/β_-π/β dvΨ^†∂_i Ψ,(i=1,2,3).Here (u,v) are coordinates of the dual space ℝ× S^1 and 2π/β is the dual period of S^1. 
The “Dirac zero-mode” Ψ = Ψ (u,v; x⃗) has a 2 × 2 matrix representation satisfying the following relation:ΔΨ = 0, ∫^∞_- ∞ du ∫^π/β_-π/β dv Ψ^†Ψ = 1_2.Here Δ is the Dirac operator given by <cit.>Δ = [ [ 2 ∂_s̅ - z P(s); P^* (s̅)2 ∂_s + z ]], P(s) = C cosh (β s) - ζ,s = u +v,u ∈ℝ,v ∈[ -π/β,π/β),ζ = x +y,z ∼ z + β.Note that x,y,z have the mass dimension -1 while the dual coordinates s,u,v have the dimension +1. The function P is determined by the periodic Hitchin fields. A dimensionful constant C is recognized as the size of the monopole and it can be compared with the period β. The small C means C ≪β, namely, the decompactification limit R →∞. In this limit, the Hitchin equation reduces to the Nahm equation for the monopole in ℝ^3 <cit.>.We are interested in the solution in the large-C (or equivalently small-β) limit.In this limit, the radius of the physical circle in ℝ^2 × S^1 becomes small R → 0 and the monopoles exhibit the isometry along S^1. Now we solve the Dirac equation ΔΨ = 0. To this end, we look for functions f = f (u,v ; x⃗), g = g (u,v; x⃗) that satisfyΔ[ [ g; f ]] =[ [ 2 g_s̅ - z g + P (s) f; 2 f_s + z f + P^* (s̅) g ]] = 0.As discussed in <cit.>, in the region where P(s) is not zero, the solution to the equation (<ref>) becomes trivial f = g = 0. The exception is the points where P (s) = 0. In order to find the non-trivial solution for f,g, we first define a zero point of P(s_0) = 0, namely,s_0 = u_0 +v_0 = 1/βcosh^-1 (ζ/C),or more explicitlyu_0 =1/βcosh^-11/2C (√((C + x)^2 + y^2)+ √((C - x)^2 + y^2)), v_0 =1/βcos^-11/C (√((C + x)^2+ y^2) - √((C - x)^2 + y^2)) + n,(n ∈ℤ).Since cosh (x) is an even function of x, the zeros are in fact given by s = ± s_0. When these zeros are degenerate, s_0 = - s_0, we find x = ± C, y = 0.On top of the zero s=s_0 we have P = 0 and the solutions to the above equation are g = exp( z/2s̅),f = exp( - z/2 s ).When one leaves from the zero s=s_0,we have f = g = 0 as discussed. Indeed, P is a continuous function of s and one can reach the point s_0 continuously. Therefore f,g are continuous functions of s = u +v whose support is localized around s ≃ s_0. In order to find a solution, we expand P(s) around s = s_0 and find P(s) = P(s_0) + . ∂ P/∂ s|_s =s_0 (s-s_0) + ⋯≃βξ (s-s_0).Here we have defined ξ (x,y) = C sinh (β s_0). Then, around the zero s ∼ s_0, the equation (<ref>) becomes[ 2 g_s̅ - z g + βξ (s - s_0) f = 0,; 2 f_s + z f + βξ̅ (s̅ - s̅_0) g = 0. ]It is easy to confirm that E (s-s_0) defined by the following function satisfies the equations (<ref>):E (s) = exp[- β/2 |ξ| s s̅ - z/2 (s - s̅) ].This function E(s-s_0) has a peak at s ∼ s_0 and decays exponentially outside the support. Using this expression together with the fact that the zero points are indeed s= ± s_0, we have solutions to (<ref>): f(s) = |ξ|/ξ E (s ± s_0), g(s) = ± E (s ± s_0).Then, by using these functions, the solution to the Dirac equation is given by Ψ≃√(β/2π) |ξ|^-1/2[ [ ξ E (s-s_0) - ξ E (s+s_0); |ξ| E (s-s_0) |ξ| E (s+s_0) ]].Here we have introduced the overall factor for the normalization. 
Indeed, one calculatesΨ^†Ψ≃= β/π |ξ|[ [ |E_-|^2 0; 0 |E_+|^2 ]].Here we have defined E_± = E (s ± s_0) and |E_±|^2 =exp[ - 2 π |ξ| (u ± u_0)^2 ] exp[ - 2 π |ξ| (v ± v_0)^2 ].The integration by u in (<ref>) is just the Gaussian type and it is easy to perform.Similarly, the v-integration is well-approximated by the Gaussian in the small β limit:∫_-π/β^π/β d vexp[ - β |ξ| (v ± v_0)^2 ] ≃∫_-∞^∞ d vexp[ - β |ξ| (v ± v_0)^2 ] = √(π/β |ξ|).Therefore we find the Dirac zero-mode Ψ in (<ref>) is correctly normalized:∫^∞_-∞ du ∫_-π/β^π/β dvΨ^†Ψ≃β/π |ξ| ∫^∞_-∞ du∫^∞_-∞ dv[ [ |E_-|^2 0; 0 |E_+|^2 ]] = 1_2.Now we have found the Dirac zero-mode (<ref>). Through the Nahm transformation (<ref>), we are going to write down the solution for the physical fields. In the following, we derive the explicit monopole solution to the equation (<ref>). §.§ Adjoint scalar fieldThe solution to the adjoint scalar field is obtained asΦ =∫^∞_-∞ u du∫_-π/β^π/β dv Ψ^†Ψ≃ -u_0 τ_3 = -Re (s_0) τ_3,where we have approximated the v-integration by the Gaussian in the small β limit. As we have indicated, at the zero s_0 = 0, where x = ± C, y = 0 , we have r_0 =0 and Φ becomes trivial. Using the explicit form of u_0 given in (<ref>), in the asymptotic region x,y ≫ C, the solution (<ref>) behaves like Φ∼const. - /βlog(r/C) τ_3where r^2 = x^2 + y^2. This is the desired asymptotic behavior of the monopole (<ref>). Note that this τ_3represents a U(1) Cartan subgroup of SU(2). A gauge invariant quantity TrΦ^2 is evaluated asTr [Φ^2] = - 2 u_0^2.One observes that this is completely z-independent which implies that the solution represents a codimension two object. §.§ Gauge fieldWe proceed to construct the gauge field configuration. It is convenient to combine the gauge field as the ζ and ζ̅ components.Then, the Nahm transformation becomesA_ζ =1/2 (A_x -A_y) = ∫^∞_-∞du∫_-π/β^π/β dvΨ^†∂_ζΨ,A_ζ̅ =1/2 (A_x +A_y) = ∫^∞_-∞du∫_-π/β^π/β dvΨ^†∂_ζ̅Ψ,A_z =∫^∞_-∞du∫_-π/β^π/β dvΨ^†∂_z Ψ.Here ∂_ζ = 1/2 (∂_x - ∂_y), ∂_ζ̅ = 1/2 (∂_x + ∂_y).We first evaluate the u,v-integrations in A_z. Since the z-dependence is only inside the E_±, we haveΨ^†∂_z Ψ= - β |ξ|/π[ [(v - v_0) e^-β |ξ| {(u-u_0)^2 + (v-v_0)^2 }0;0 (v + v_0) e^- β |ξ| {(u+u_0)^2 + (v+v_0)^2 } ]].Again, in the small-β limit,the u,v-integrals of the Nahm transformation is approximated by the Gaussian and variants of it.Then we find the result is A_z = 0. Next, we calculate A_ζ. Aftertedious calculations,we have Ψ^†∂_ζΨ =β/2 π√(|ξ|)[ [ (ξ̅ψ_11 + |ξ| ψ_21) |E_-|^2(ξ̅ψ_12+ |ξ| ψ_22) E^*_- E_+; (- ξ̅ψ_11 + |ξ| ψ_21) E_+^* E_- (- ξ̅ψ_12 + |ξ| ψ_22) |E_+|^2 ]].Here the terms in each component are defined by ψ_11=|ξ|^-1/2/√(ζ^2 - C^2)[ 3/4 C cosh (β s_0) + ξ{ - β C/4 |ξ|ξ̅ |s-s_0|^2 cosh (β s_0) +1/β (β/2 |ξ| (s̅ - s̅_0) + z/2) }],ψ_12=|ξ|^-1/2/√(ζ^2 - C^2)[ - 3/4 C cosh (β s_0) + ξ{β C/4 |ξ|ξ̅ |s + s_0|^2 cosh (β s_0) +1/β (β/2 |ξ| (s̅ + s̅_0) + z/2) }],ψ_21=|ξ|^-1/2/√(ζ^2 - C^2)[ ξ̅ C/4 |ξ|cosh (β s_0) + |ξ|{ - β/2 |s-s_0|^2 ξ̅ C/2 |ξ|cosh (2π s_0) +1/β (β/2 |ξ| (s̅ - s̅_0) + z/2) }],ψ_22=|ξ|^-1/2/√(ζ^2 - C^2)[ ξ̅ C/4 |ξ|cosh (β s_0) - |ξ|{β/2 |s+s_0|^2 ξ̅ C/2 |ξ|cosh (β s_0) +1/β (β/2 |ξ| (s̅ + s̅_0) + z/2) }].Finally, we perform the integrations over u,v.Again, the integrations are approximated by the Gaussian or its variants in the small-β limit. After calculations, we findA_ζ (x,y,z)=1/2 √(ζ^2 - C^2)( [ζ/2 ξ + z/β- ζ/2 ξ e^-2 v_0 z e^-β |ξ| |s_0|^2; - ζ/2 ξ e^2v_0 z e^- β |ξ| |s_0|^2ζ/2 ξ - z/β ]).One notices that this expression does not exhibit the traceless condition of the SU(2) algebra. 
However, we observe that the expansion of the following quantityin the C ≫ x,y region,cosh^-11/2( √(( 1 + x/C)^2 +( y/C)^2 ) + √(( 1 - x/C)^2 +( y/C)^2 )) =y/C - 1/6( y/C)^3 + 1/2(y/C)(x/C)^2 + ⋯.exhibits the leading order behavior of each quantity in the expression (<ref>) in the large-C. Namely, using the expression (<ref>), we have u_0 ∼𝒪(y/C), v_0 ∼π/2 β + 𝒪( x/C) , |s_0|^2 = u_0^2 + v_0^2 ∼π^2/4 β^2 +𝒪( ( x/C)^2,( y/C)^2 ).Here we have chosen the n=0 branch in the definition of v_0. Then, we find β |ξ| |s_0|^2∼C/β +𝒪( ( x/C)^2,( y/C)^2 ).Therefore, the off-diagonal parts in (<ref>)behave like ∼ e^- C/β and they are exponentially suppressed and ignored compared to the diagonal part in the large-C (or small-β) limit. Furthermore, the term ζ/2ξ in the diagonal part is small compared to z/β and suppressed over a 𝒪(β/C) quantity. Therefore, we find the gauge field solution in the large-C(and small-β) limit isA_ζ∼z/2 β√(ζ^2 - C^2)( [10;0 -1 ]).This expression satisfies the traceless condition of the SU(2) algebra as expected.In summary, we obtain the gauge field of the codimension two monopole as A_x =z/2 β( 1/√(ζ^2 - C^2) - 1/√(ζ̅^2 - C^2)) τ_3, A_y = z/2 β( 1/√(ζ^2 - C^2) + 1/√(ζ̅^2 - C^2)) τ_3, A_z = 0.Surprisingly, the monopole solution (<ref>), (<ref>) we have obtained in the large-C limit via the Nahm construction is an exact solution to the BPS equation (<ref>). One can easily confirm that the solution (<ref>), (<ref>) satisfies the equation (<ref>) for any values of C. We will comment on this issue later.Since the gauge invariant quantity (<ref>) is independent of z, the z-dependence in (<ref>) is just due to the gauge artifact. In order to see this fact explicitly, we express the leading order form of the solution in 𝒪 (x^i/C). First, at large-C, we find the approximated solution is A_x ≃ z/β Cτ_3, A_y ≃ 0, A_z ≃ 0.From these we findB_x ≃ 0,B_y ≃/β Cτ_3, B_z ≃ 0.Next, using the expansion of (<ref>),we find the leading order behavior of the adjoint scalar field isΦ = -u_0 τ_3≃ - y/β Cτ_3 + 𝒪 ((x^i)^3/C^3).One confirms that these expressions indeed satisfy the BPS equation (<ref>). It is now straightforward to find a gauge transformation that makes the solution be z-independent. The gauge transformation Φ→Φ' = U Φ U^†, A_i → A_i' = U A_i U^† + U ∂_i U^†, U ∈ SU(2). with U = 1_2 + xz/β Cτ_3, U^† = 1_2 - xz/β Cτ_3,makes it possible to remove the z-dependence.The result is A'_x ∼ 0,A'_y ∼ 0,A'_z ∼ -x/β Cτ_3, Φ' ∼ -y/β Cτ_3.Therefore the solution represents completely codimension two object. This is similar to the situation where the 't Hooft monopole of codimension three is obtained by the periodic instanton on ℝ^3 × S^1 <cit.>. In there the S^1 dependence of the periodic instanton solutionis completely gauged away and the resulting solution is independent of the periodic direction.§ HETEROTIC GAUGE FIVE-BRANE WITH MONOPOLE OF CODIMENSION TWOIn this section, based on the monopole of codimension two discussed in the previous section, we construct the gauge five-brane solution in heterotic supergravity. In the ansatz (<ref>), we compactify the transverse directions x^3 and x^4 to T^2 = S^1 × S^1 and consider the small T^2 limit. The self-duality equation for the gauge field effectively reduces to that in two dimensions ℝ^2. As a solution to this equation, we employ the small S^1 limit of the monopole solution in ℝ^2 × S^1. The solution is given by (<ref>) and (<ref>). 
In particular, we identity the adjoint scalar field Φ with A_4 component.The Poisson equation (<ref>) reduces to∂_i^2 H = 4 α' ∂_i Tr [B_i Φ] + 𝒪(α^' 2), (i=1,2,3),where the source term in the right hand sideis provided by the one for the monopole of codimension two. Using the monopole equation (<ref>) and the solution (<ref>), the first term in the right hand side is rewritten as 4 α' ∂_i Tr [B_i Φ] = - 2 α' ∂_i^2 Tr [Φ^2] =4 α' ∂_i^2 u_0^2.Therefore, the H-function is determined to beH (x,y) = h_0 + 4 α'/β^2[ cosh^-11/2C (√((x+C)^2 +y^2) + √((x - C)^2 +y^2)) ]^2.Here h_0 is a constant. The metric, dilaton and the NS-NS B-field are determined through the BPS ansatz (<ref>). A particular emphasis is placed on the fact that the dilaton fielde^2 ϕ = H(x,y) never becomes imaginary valued when h_0 ≥ 0. This is contrasted with the gauge solution based on the smeared monopole <cit.>.Even more, the metric governed by the H-function (<ref>) is well-defined in ℝ^2.The asymptotic behavior of the harmonic function looks likeH (r) ∼4 α'/β^2 [log( r/C)]^2, (r →∞).This is compared with the smeared solution H (r) ∼ h_0 - 2α'/r^2 [log (r/μ)]^2 (r →∞). We note that the asymptotic behavior (<ref>) of the gauge five-brane is quite different from that ofan authentic harmonic function H (r) ∼log r(r →∞) for codimension two branes in type II theories.The Ricci scalar for the geometry is calculated to be R (ω) = -24 h_0 α' β^4/√((x - C)^2 + y^2)√((x + C)^2 + y^2) ×( h_0 β^2 + 4 α' arccosh^2 1/2C( √((x - C)^2 + y^2) + √((x + C)^2 + y^2)) )^-3.Here the definition of the Ricci scalar is given in (<ref>). One observes that the Ricci curvature of the geometry asymptotically vanishes. Remarkably, we find that all the components of the Ricci tensor R_MN vanishes at the infinity of ℝ^2.Therefore, the geometry is asymptotically Ricci flat.This is in contrast to the codimension two stand-alone objects intype II string theories <cit.>.Indeed, any supergravity solutions for stand-alone codimension two objects obtained so far have only their description near the core of branes <cit.>. The plots of the energy density for the monopoleand the absolute value of the Ricci scalar are found in fig.<ref>.One finds that there are two poles in the energy density and the scalar curvatures where the quantities diverge. The origin of these two poles is obvious from the viewpoint of the Nahm construction in compact spaces. As is evident from the expression (<ref>),the Hitchin field solution P (s) on the cylinder ℝ× S^1 clearly breaks the spherical symmetry in the physical space even for the single monopole case.This substantially leads to the axially symmetric monopole solution (<ref>), (<ref>). This breakdown of spherical symmetry seems a fate of codimension two monopoles <cit.>.As we have mentioned before, the solution (<ref>) and (<ref>) is valid for any values of C. The parameter C corresponds to the distance between two centers of the energy peaks and the supergravity solution becomes axially symmetric due to the existence of the C-parameter. This symmetry is quite different from the known five-brane solutions which have spherical symmetry.It is worthwhile toexamine the C→0 limit of the solution and look for a spherically symmetric solution.It is easy to take a C→0 limit for the gauge fields. However, it is not straightforward to consider the limit C → 0 for the adjoint scalar field. 
We evaluate only a dominant term for the adjoint scalar in the C→0 limit:Φ|_ C → 0= lim_C→0βcosh^-1√(x^2+y^2) C τ_3, A_x|_ C → 0=z2 β(1 ζ -1 ζ) τ_3,   A_y|_ C → 0= z2 β(1 ζ +1 ζ) τ_3,   A_z|_ C → 0= 0,Using this expression, D_iΦ is calculated asD_1 Φ|_ C → 0= lim_C→0 xβ√( x^2 + y^2 )√((x^2+y^2) - C^2)τ_3 =x β (x^2+y^2) τ_3, D_2 Φ|_ C → 0= lim_C→0 yβ√( x^2 + y^2 )√((x^2+y^2) - C^2)τ_3 =y β (x^2+y^2) τ_3, D_3 Φ|_ C → 0= 0.We also obtainB_1= -x β (x^2+y^2) τ_3 ,     B_2 =-y β (x^2+y^2) τ_3 ,     B_3 =0.Again, we confirm that the C → 0 expressions (<ref>) and (<ref>) indeed satisfy the BPS monopole equation (<ref>). We then find a heterotic five-brane solution based on(<ref>) asH|_ C → 0= h_0 + lim_C→0 4 α^'β^2 [ cosh^-1√(x^2+y^2) C ]^2,     e^2ϕ|_ C → 0= H|_ C → 0 , g_μν|_ C → 0= η_μν,     g_mn|_ C → 0= e^2ϕ|_ C → 0 δ_mn,     Ĥ_mnp|_ C → 0 = -12ε_mnpq∂_q H|_ C → 0 ,A_x|_ C → 0=z2 β(1 ζ -1 ζ) τ_3,   A_y|_ C → 0= z2 β(1 ζ +1 ζ) τ_3,   A_z|_ C → 0= 0, A_4 |_ C → 0= lim_C→0βcosh^-1√(x^2+y^2) C τ_3,    A_μ|_ C → 0=0.Under the C→0 limit, we find that the Ricci tensor converges to zero.This means that the spherical symmetric solution becomes globally Ricci flat. § KALUZA-KLEIN GAUGE FIVE-BRANE AND GAUGE 5^2_2-BRANEIn this section, we perform the T-duality transformations along the two isometries on the solution found in the previous section. We will write down the regular solutions for the KK5- and 5^2_2-branes associated with the gauge five-brane. Specifically, we are interested in the z-independent solution obtained in (<ref>) :H = h_0 +4 α^'β^2 C^2y^2,   e^2ϕ = H,    g_mn = e^2ϕδ_mn ,  g_μν =η_μν,  B_34 = -α^' 4 xy β^2 C^2 ,    A_3=x β Cτ_3 ,    A_4=- y β C τ_3 ,The other components are zero. We note that this solution is available up to 𝒪 ((x^i)^3/C^3).For the heterotic supergravity action in the first order in α^',the T-duality transformation rule, calledheterotic Buscher rule, is written as <cit.>, G_MN = g_MN - B_MN + 2 α^' A_M A_N,g̃_MN = g_MN + 1G^2_nn ( g_nn G_nM G_nN -G_nn g_nM G_nN -G_nn G_nM g_nM ),B̃_MN = B_MN + 1 G_nn (G_nMB_Nn - G_nN B_Mn),  g̃_nM = - g_nM G_nn +g_nn G_nM G^2_nn,  g̃_nn = -g_nn G^2_nn,B̃_nM = -1G_nn ( B_nM + G_nM ),  ϕ̃ = ϕ - 12log|G_nn|,  Ã^I_n = -A^I_nG_nn,  Ã^I_M = A^I_M - G_nM G_nn A^I_n,where the index “n” means the T-dualized (isometry) direction and the tilde represents dualized fields.The combination[ Here, we have chosen the combination g_MN - B_MN not g_MN + B_MNwhich may be widely used in literature. This choice originates fromthe convention related to the generalized spin connection ω_±. Since we have defined the R^2 term in the action by ω_+, we have to choose the abovementioned combination in G_MN. Otherwise the O(d,d) T-duality symmetry is not realized. ] g_MN - B_MN is a primitive metric in the double field theory (DFT) <cit.>.Since the fields (<ref>) have two isometry directions, there are two routes to obtain the 5^2_2-brane.We follow the route 1 in fig. <ref> and show the Kaluza-Klein gauge five-brane and 5^2_2-brane in this section. For another Kaluza-Klein gauge five-brane, we show it in the Appendix in detail.§.§ Kaluza-Klein gauge five-braneFirst, we perform a T-duality transformation with the gauge solution and obtain a T-dualized object– the KK5-brane of codimension two.When we take the heterotic T-duality with respect to the x^4-direction for the fields (<ref>), we obtain the Kaluza-Klein gauge five-brane which corresponds to “II. KK gauge five brane” in fig. 
When we take the heterotic T-duality with respect to the x^4-direction for the fields (<ref>), we obtain the Kaluza-Klein gauge five-brane, which corresponds to "II. KK gauge five-brane" in fig. <ref>:

H = h_0 + (4α'/(β^2 C^2)) y^2,   e^{2φ^{(4)}} = H/h_0,
g^{(4)}_ab = g_ab = H δ_ab,   g^{(4)}_33 = H,   g^{(4)}_44 = H/h_0^2,   g^{(4)}_μν = η_{μν},
B^{(4)}_34 = (4α'/(h_0 β^2 C^2)) xy,   (A_3)^{(4)} = (x/(βC)) τ_3,   (A_4)^{(4)} = (y/(h_0 βC)) τ_3,

where (4) denotes the T-dualized direction and the indices a, b = 1, 2. It is obvious that the B^{(4)} field has a non-zero component and that the dilaton e^{2φ^{(4)}} is regular. Unlike the usual Kaluza-Klein five-brane (Taub-NUT), the solution has non-zero components of the B-field and the gauge field. Therefore, we can regard the brane as a source of the H-flux. We also confirm that the gauge field satisfies the self-duality condition F_mn = (1/2) ε_mn^{pq} F_pq in two dimensions. This sign flip originates from the convention of the heterotic Buscher rule <cit.>. The fields (<ref>) are almost the same as (<ref>) except for the constant coefficients. The reason is that the component of the extended metric G_44 becomes a constant, and it is essentially the same situation as in the smeared gauge solution discussed in <cit.>.

§.§ Heterotic 5^2_2-brane

Now, we perform the second T-duality transformation, along the x^3 direction, on the Kaluza-Klein solution. As a result, the solution obtained corresponds to "III. Heterotic 5^2_2-brane" in fig. <ref>:

H = h_0 + (4α'/(β^2 C^2)) y^2,   e^{2φ^{(43)}} = (1/h_0^2) ( h_0 + (4α'/(β^2 C^2)) x^2 ),
g^{(43)}_ab = H δ_ab,   g^{(43)}_33 = (1/h_0^2) ( h_0 + (4α'/(β^2 C^2)) (2x^2 − y^2) ),
g^{(43)}_34 = −(8α'/(h_0^2 β^2 C^2)) xy,   g^{(43)}_44 = H/h_0^2,   g^{(43)}_μν = η_{μν},
B^{(43)}_34 = (4α'/(h_0^2 β^2 C^2)) xy,   (A_3)^{(43)} = −(x/(h_0 βC)) τ_3,   (A_4)^{(43)} = (y/(h_0 βC)) τ_3.

Here, the fields are written to 𝒪((x^i)^2/C^2). It is clear that the dilaton φ^{(43)} is regular and does not take a negative value, unlike the one in <cit.>. For this solution, the components of the B-field are non-zero. One can easily write down the generalized metric <cit.> associated with the solution (<ref>). From the generalized metric, we find that the monodromy around the gauge 5^2_2-brane solution we obtained is trivial, and it does not exhibit any non-geometric feature. In other words, the gauge 5^2_2-brane in heterotic theories is not a non-geometric but a geometric object. This is in sharp contrast to the exotic 5^2_2-branes of the neutral and symmetric types <cit.> and those in type II theories <cit.>. We also find that the gauge field strength in (<ref>) satisfies the anti-self-duality condition F_mn = −(1/2) ε_mn^{pq} F_pq.

§ CONCLUSION AND DISCUSSIONS

In this paper, we studied the BPS gauge five-brane solution of codimension two in heterotic supergravity. The 1/2 BPS ansatz reveals the fact that the H-function is determined through the source term given by the monopoles of codimension two. The desired monopole is described by the small circle limit of the monopole chain defined in ℝ^2 × S^1. An analytic solution of the monopole chain is explicitly written down by the Nahm construction discussed in <cit.>. Using this solution, we find the explicit form of the gauge five-brane of codimension two. We found two poles in the curvature of the geometry for the gauge five-brane. This is due to the fact that the monopole we constructed preserves only the axial symmetry in ℝ^2. This may be interpreted from the viewpoint of solitons in compact spaces. The multipole structure of solutions is quite common in the case of periodic instantons, where these multipoles correspond to monopole constituents of a single instanton <cit.>. It is well known that solitons in compact spaces with non-trivial asymptotic holonomy possess constituents inside each energy peak.
In the analysis of the monopole, we employed the boundary condition where the solution asymptotically behaves like the U(1) Dirac monopole. In other words, the SU(2) gauge symmetry is broken down to U(1) at infinity. This breaking is characterized by the asymptotic holonomy. As discussed in <cit.>, for the monopole in two dimensions, there is always a non-trivial asymptotic holonomy due to the logarithmic growth of the adjoint scalar fields. This substantially leads to the introduction of constituents for the monopoles. This phenomenon is interpreted in terms of D-brane configurations <cit.> in type II theories. It is interesting to explore whether the same kind of interpretation is possible in heterotic theories. The parameter C in the solution controls the breaking of the spherical symmetry. We analyzed the C → 0 limit of the solution, where the spherical symmetry is expected to be realized. We also found that the asymptotic geometry is Ricci flat, which is consistent with the fact that the monopole charge density becomes zero in the asymptotic region. Despite this fact, however, the total energy (or the topological charge) associated with the monopole of codimension two diverges. This is an inevitable fate of codimension two objects. The H-function that governs the solution asymptotically behaves like H(x,y) ∼ [log r]^2, which is contrasted with the gauge solution based on the smeared monopole discussed in our previous paper <cit.>. The geometry based on the smeared monopole is ill-defined in some regions of space-time, and the dilaton becomes imaginary valued there. We stress that the new solution based on the monopole chain in this paper overcomes this problematic property. Since the solution exhibits the U(1)^2 isometry along the directions transverse to the brane world-volume, we can perform the chain of T-duality transformations. By applying the modified Buscher rule in heterotic theories, we performed the T-duality transformations of the gauge five-brane of codimension two. We wrote down the KK gauge five-brane and the gauge 5^2_2-brane. The latter is a candidate for an exotic brane in heterotic string theories. We find that the monodromy of the gauge 5^2_2-brane is trivial and that it is a completely geometric object. This is in contrast to the symmetric and the neutral 5^2_2-branes discussed in <cit.>. It is also interesting to make contact with the heterotic gauge five-branes of various codimensions (fig. <ref>). As we have discussed in the main body of this paper, the codimension three gauge five-brane based on the smeared instanton worked out in <cit.> just corresponds to the small circle limit of the one found in <cit.>. There is an implication that the large circle limit of Ward's monopole chain becomes the ordinary 't Hooft monopole in three dimensions <cit.>. We therefore expect that the codimension two five-brane we have obtained reduces to the codimension three five-brane based on the 't Hooft monopole in this limit (the relation (i) in fig. <ref>). On the other hand, we naively expect that a strict β → 0 limit of our solution results in the five-brane based on the smeared monopole (the relation (ii) in fig. <ref>). However, it is difficult to settle this issue due to the lack of spherical symmetry of Ward's solution. It seems plausible that a more subtle limit needs to be considered, as in the case of the monopole limit of periodic instantons <cit.>. There are more interesting issues related to the work done here and in <cit.>.
We have worked out the explicit T-duality chains of five-branes in heterotic theories. We have found that the three distinct five-branes exhibit totally different behavior under the T-duality chain. The 5^2_2-branes of the symmetric and the neutral types are non-geometric, while that of the gauge type is geometric. What underlying structure accounts for this property? The group theoretical classification of BPS multiplets in heterotic theories based on Abelian gauge symmetries has been studied <cit.>. It would be interesting to study the structure of the BPS multiplet, especially the T-duality brane orbit <cit.>, in toroidally compactified heterotic theories. It was also discussed that when the instanton in the gauge five-brane of codimension four shrinks to zero size, ρ → 0, a gauge multiplet on the brane world-volume becomes massless and an enhanced SU(2) gauge symmetry is expected to appear <cit.>. This small instanton limit of the gauge five-brane in the SO(32) heterotic string theory is related to the D5-brane in type I theory via the S-duality. The fact that the moduli spaces of periodic monopoles have hyperKähler metrics <cit.>, together with the discussion in <cit.>, may lead to an interpretation of the gauge five-brane discussed in this paper on the type I theory side. Even more, it is interesting to study relations among various five-branes in heterotic and type I theories. We will come back to these issues in the future.

§.§ Acknowledgments

The authors would like to thank A. Nakamula, J. H. Park and S. J. Rey for useful discussions and comments. The work of S. S. is supported in part by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number JP17K14294 and a Kitasato University Research Grant for Young Researchers. The work of M. Y. is supported by IBS-R018-D2.

§ ANOTHER KALUZA-KLEIN GAUGE FIVE-BRANE

Here we show another Kaluza-Klein gauge five-brane solution, which corresponds to "IV. Another Kaluza-Klein gauge five-brane" in fig. <ref>. This is another configuration of the Kaluza-Klein gauge five-brane. Since the (x, y) dependence of the fields in (<ref>) is asymmetric, the form of the T-dualized fields is slightly different from the one in (<ref>). The fields can be obtained by the heterotic T-duality along the x^3-direction for the solution (<ref>):

H = h_0 + (4α'/(β^2 C^2)) y^2,   e^{2φ^{(3)}} = (1/h_0) ( h_0 + (4α'/(β^2 C^2)) ),
g^{(3)}_ab = H δ_ab,   g^{(3)}_33 = (1/h_0^2) ( h_0 + (4α'/(β^2 C^2)) (2x^2 − y^2) ),
g^{(3)}_34 = (8α'/(h_0 β^2 C^2)) xy,   g^{(3)}_44 = H,   g^{(3)}_μν = η_{μν},
B^{(3)}_34 = −(4α'/(h_0 β^2 C^2)) xy,   (A_3)^{(3)} = −(x/(h_0 βC)) τ_3,   (A_4)^{(3)} = −(y/(βC)) τ_3.

The fields are written to 𝒪((x^i)^2/C^2). There is a non-zero off-diagonal metric component, and the forms of the dilaton φ^{(3)} and of the metric component g^{(3)}_33 are different from (<ref>). However, the gauge fields in (<ref>) and (<ref>) are gauge equivalent. When we take a T-duality transformation along x^4 for (<ref>), we obtain the heterotic 5^2_2-brane (<ref>), as in fig. <ref>. Therefore, we confirm that the T-duality web for the heterotic gauge five-brane is closed.

Giveon:1998sr A. Giveon and D. Kutasov, Rev. Mod. Phys. 71 (1999) 983 [hep-th/9802067]. Maldacena:1997re J. M. Maldacena, Int. J. Theor. Phys. 38 (1999) 1113 [Adv. Theor. Math. Phys. 2 (1998) 231] [hep-th/9711200]. Blumenhagen:2006ci R. Blumenhagen, B. Kors, D. Lust and S. Stieberger, Phys. Rept. 445 (2007) 1 [hep-th/0610327]. Hull:1994ys C. M. Hull and P. K. Townsend, Nucl. Phys. B 438 (1995) 109 [hep-th/9410167]. Elitzur:1997zn S. Elitzur, A. Giveon, D. Kutasov and E. Rabinovici, Nucl. Phys.
B 509 (1998) 122 [hep-th/9707217]. Obers:1998fb N. A. Obers and B. Pioline,Phys. Rept.318 (1999) 113 [hep-th/9809039].Blau:1997du M. Blau and M. O'Loughlin,Nucl. Phys. B 525 (1998) 182[hep-th/9712047]. Eyras:1999at E. Eyras and Y. Lozano,Nucl. Phys. B 573 (2000) 735[hep-th/9908094]. LozanoTellechea:2000mc E. Lozano-Tellechea and T. Ortin,Nucl. Phys. B 607 (2001) 213[hep-th/0012051].Kikuchi:2012za T. Kikuchi, T. Okada and Y. Sakatani,Phys. Rev. D 86 (2012) 046001 [arXiv:1205.5549 [hep-th]]. Kimura:2013fda T. Kimura and S. Sasaki,Nucl. Phys. B 876 (2013) 493 [arXiv:1304.4061 [hep-th]], JHEP 1308 (2013) 126 [arXiv:1305.4439 [hep-th]], JHEP 1403 (2014) 128 [arXiv:1310.6163 [hep-th]]. Andriot:2014uda D. Andriot and A. Betz,JHEP 1407 (2014) 059[arXiv:1402.5972 [hep-th]].Kimura:2014upa T. Kimura, S. Sasaki and M. Yata,JHEP 1407 (2014) 127 [arXiv:1404.5442 [hep-th]],JHEP 1503 (2015) 076[arXiv:1411.3457 [hep-th]], JHEP 1602 (2016) 168[arXiv:1601.05589 [hep-th]].Chatzistavrakidis:2013jqa A. Chatzistavrakidis, F. F. Gautason, G. Moutsopoulos and M. Zagermann,Phys. Rev. D 89 (2014) 066004 [arXiv:1309.2653 [hep-th]]. Kimura:2015yla T. Kimura,Nucl. Phys. B 893 (2015) 1[arXiv:1410.8403 [hep-th]], arXiv:1503.08635 [hep-th],PTEP 2016 (2016) no.2,023B04[arXiv:1506.05005 [hep-th]], JHEP 1602 (2016) 013[arXiv:1512.05548 [hep-th]], PTEP 2016 (2016) no.5,053B05[arXiv:1601.02175 [hep-th]], JHEP 1605 (2016) 060[arXiv:1602.08606 [hep-th]].Okada:2014wma T. Okada and Y. Sakatani,JHEP 1503 (2015) 131[arXiv:1411.1043 [hep-th]]. Sakatani:2014hba Y. Sakatani,JHEP 1503 (2015) 135[arXiv:1412.8769 [hep-th]]. Sakatani:2016sko Y. Sakatani and S. Uehara,Phys. Rev. Lett.117 (2016) no.19,191601[arXiv:1607.04265 [hep-th]]. Lee:2016qwn K. Lee, S. J. Rey and Y. Sakatani,JHEP 1707 (2017) 075[arXiv:1612.08738 [hep-th]].Bergshoeff:2011se E. A. Bergshoeff, T. Ortin and F. Riccioni,Nucl. Phys. B 856 (2012) 210 [arXiv:1109.4484 [hep-th]]. deBoer:2010udJ. de Boer and M. Shigemori,Phys. Rev. Lett.104 (2010) 251603 [arXiv:1004.2521 [hep-th]], Phys. Rept.532 (2013) 65 [arXiv:1209.6056 [hep-th]]. Hull:2004in C. M. Hull,JHEP 0510 (2005) 065 [hep-th/0406102]. Hassler:2013wsa F. Haßler and D. Lüst,JHEP 1307 (2013) 048 [arXiv:1303.1413 [hep-th]].Sasaki:2016hpp S. Sasaki and M. Yata,JHEP 1611 (2016) 064[arXiv:1608.01436 [hep-th]]. Callan:1991dj C. G. Callan, Jr., J. A. Harvey and A. Strominger,Nucl. Phys. B 359 (1991) 611, Nucl. Phys. B 367 (1991) 60. Strominger:1990et A. Strominger,Nucl. Phys. B 343 (1990) 167Erratum: [Nucl. Phys. B 353 (1991) 565]Duff:1990wv M. J. Duff and J. X. Lu,Nucl. Phys. B 354 (1991) 141.Rey:1991uu S. J. Rey,In *Vancouver 1991, Proceedings, Particles and fields '91, vol. 2* 876-881 and SLAC Stanford - SLAC-PUB-5659 (91/09,rec.Nov.) 6 pBuscher:1987sk T. H. Buscher,Phys. Lett. B 194 (1987) 59, Phys. Lett. B 201 (1988) 466.Tseytlin:1991wrA. A. Tseytlin,Mod. Phys. Lett. A 6, 1721 (1991).Bergshoeff:1995cg E. Bergshoeff, B. Janssen and T. Ortin,Class. Quant. Grav.13 (1996) 321[hep-th/9506156]. Serone:2005ge M. Serone and M. Trapletti,Phys. Lett. B 637 (2006) 331[hep-th/0512272]. Ward:2005nn R. S. Ward,Phys. Lett. B 619 (2005) 177[hep-th/0505254]. Bergshoeff:1988nnE. Bergshoeff and M. de Roo,Phys. Lett. B 218, 210 (1989), Nucl. Phys. B 328 (1989) 439.Ortin:2015hya T. Ortin, “Gravity and Strings,”, Cambridge Monographs on Mathematical Physics, Cambridge University Press (2015-03-26). Belavin:1975fg A. A. Belavin, A. M. Polyakov, A. S. Schwartz and Y. S. Tyupkin,Phys. Lett. B 59 (1975) 85. Khuri:1992hk R. R. Khuri,Nucl. Phys. 
B 387 (1992) 315[hep-th/9205081].Gauntlett:1992nn J. P. Gauntlett, J. A. Harvey and J. T. Liu,Nucl. Phys. B 409 (1993) 363[hep-th/9211056]. Nahm:1979yw W. Nahm,Phys. Lett.90B (1980) 413.Hitchin:1983ay N. J. Hitchin,Commun. Math. Phys.89 (1983) 145.Cherkis:2000cj S. A. Cherkis and A. Kapustin,Commun. Math. Phys.218 (2001) 333[hep-th/0006050].Maldonado R. Maldonado,JHEP 1302 (2013) 099[arXiv:1212.4481 [hep-th]], arXiv:1311.6354 [hep-th],JHEP 1501 (2015) 062[arXiv:1405.3641 [hep-th]].Rossi:1978qe P. Rossi,Nucl. Phys. B 149 (1979) 170.Harrington:1978ve B. J. Harrington and H. K. Shepard,Phys. Rev. D 17 (1978) 2122.Greene:1989ya B. R. Greene, A. D. Shapere, C. Vafa and S. T. Yau,Nucl. Phys. B 337 (1990) 1.Gibbons:1995vg G. W. Gibbons, M. B. Green and M. J. Perry,Phys. Lett. B 370 (1996) 37[hep-th/9511080]. Hohm:2014sxa O. Hohm, A. Sen and B. Zwiebach,JHEP 1502 (2015) 079[arXiv:1411.5696 [hep-th]]. Hull:2009mi C. Hull and B. Zwiebach,JHEP 0909 (2009) 099[arXiv:0904.4664 [hep-th]]. Kraan:1998sn T. C. Kraan and P. van Baal,Phys. Lett. B 435 (1998) 389[hep-th/9806034]. Lee:1997vp K. M. Lee and P. Yi,Phys. Rev. D 56 (1997) 3711[hep-th/9702107]. Gross:1980br D. J. Gross, R. D. Pisarski and L. G. Yaffe,Rev. Mod. Phys.53 (1981) 43.Bergshoeff:2012jb E. A. Bergshoeff and F. Riccioni,JHEP 1301 (2013) 005[arXiv:1210.1422 [hep-th]]. Bergshoeff:2012ex E. A. Bergshoeff, A. Marrani and F. Riccioni,Nucl. Phys. B 861 (2012) 104[arXiv:1201.5819 [hep-th]]. Witten:1995gx E. Witten,Nucl. Phys. B 460 (1996) 541[hep-th/9511030]. Cherkis:2000ft S. A. Cherkis and A. Kapustin,Commun. Math. Phys.234 (2003) 1[hep-th/0011081].
Source: arXiv:1708.08066v2 [hep-th], S. Sasaki and M. Yata, "Gauge Five-brane Solutions of Co-dimension Two in Heterotic Supergravity" (2017).
Dendritic flux avalanches in a superconducting MgB_2 tape

T Qureishy^1, C Laliena^2, E Martínez^2, A J Qviller^3, J I Vestgården^{1,4}, T H Johansen^{1,5}, R Navarro^2 and P Mikheenko^1 – August 24, 2017

^1 Department of Physics, University of Oslo, P. O. Box 1048 Blindern, 0316 Oslo, Norway ^2 Instituto de Ciencia de Materiales de Aragón (CSIC - Universidad de Zaragoza), C/ María de Luna 3, 5018 Zaragoza, Spain ^3 nSolution AS, Maries gate 6, 0368 Oslo, Norway ^4 Norwegian Defence Research Establishment (FFI), Kjeller, Norway ^5 Institute for Superconducting and Electronic Materials, University of Wollongong, Northfields Avenue, Wollongong, NSW 2522, Australia

[email protected]

MgB_2 tapes with high critical current have significant technological potential, but can experience operational breakdown due to thermomagnetic instability. Using magneto-optical imaging, the spatial structure of the thermomagnetic avalanches has been resolved, and the reproducibility and thresholds for their appearance have been determined. By combining magneto-optical imaging with magnetic moment measurements, it is found that avalanches appear in a range between 1.7 mT and 2.5 T. Avalanches appearing at low fields are small intrusions at the tape's edge and are non-detectable in measurements of magnetic moment. Larger avalanches have dendritic structures.

Keywords: superconductivity, MgB_2 tapes, magneto-optical imaging, dendritic avalanches

§ INTRODUCTION

Soon after the discovery of superconductivity below 39 K in magnesium diboride (MgB_2) <cit.>, the huge interest of the scientific community led to an extensive physical characterization of this type-II superconductor <cit.>. Improved magnetic flux pinning, and hence a higher critical current density J_c, was achieved with carbon substitution for boron by chemical doping <cit.>. The powder-in-tube technology, extensively used in the fabrication of the first-generation high-temperature superconductors, was found to be useful for producing MgB_2 tapes with high J_c and significant technological potential <cit.>. Iron is often used as metallic sheath material for MgB_2 wires and tapes because of its good mechanical properties and chemical compatibility with MgB_2 <cit.>. Magnetic flux jumps are observed in most type-II superconductors when applying a magnetic field at low temperatures. These abrupt events are caused by a thermomagnetic instability <cit.>. Magnetic vortices move into the superconductor as an applied magnetic field or current is changed. When moving, they dissipate heat, and if this heat is not removed quickly, more vortices depin and move into the sample, dissipating even more heat, and this self-amplifying process results in a thermomagnetic avalanche. Such events lead to jumps in the magnetic moment of bulk MgB_2 <cit.>, as well as of MgB_2 films <cit.>, wires <cit.> and tapes <cit.>. The spatial structure of flux avalanches can be observed by magneto-optical imaging (MOI). Most commonly they have a dendritic structure, as found in thin films made of Nb <cit.>, NbN <cit.>, MgB_2 <cit.> and YBCO <cit.>, and also in foils of Nb <cit.>. The detailed structures of dendritic avalanches are unpredictable. Yet, they have upper and lower thresholds for both increasing and decreasing magnetic fields <cit.>.
In films of MgB_2 cooled in zero magnetic field, dendritic avalanches have a threshold temperature of 10 K when applying an increasing magnetic field <cit.>.By combining MOI with magnetic moment measurements, we have investigated flux jumps in a 50-μm thick carbon-doped MgB_2 tape. We report observations of dendritic avalanches in the sample. Their structures, reproducibility, threshold magnetic fields and temperature-dependence are discussed.§ METHODS Six MgB_2 tapes were synthesized and characterized by MOI and magnetic moment measurements. Dendritic avalanches were found in only one of them, and the present paper focuses on detailed results for this particular sample. MOI results from four other tapes are presented in <cit.>. An MgB_2 wire was synthesized as well, and characterized by magnetic moment measurements. The MgB_2 tape was manufactured by the powder-in-tube method using in situ reaction of ball-milled precursor powders, similarly to the process described in <cit.>, but with oleic acid added to the powders before milling to improve its critical current density. Powders of Mg and B were mixed in stoichiometric ratio (1:2) with oleic acid in a Retsch MM 200 vibratory ball mill for 30 minutes. The amount of oleic acid was 10 wt.% of the Mg-B mixture. In order to obtain optimum superconducting properties, the precursor powder was subsequently heat treated at 400 C in an argon atmosphere for an hour <cit.>. After that, the powder was milled in a Retsch PM 100 planetary ball mill with balls of tungsten carbide, rotated at a speed of 200 revolutions per minute for 1.5 hours. Every three minutes the milling was paused for one minute, and the rotation direction was changed. The resulting powder was inserted into an iron tube with an inner and outer diameter of 4.5 and 5 mm, respectively. The tube was then sealed at both ends. After that, it was drawn through the formers in consecutive steps reducing the diameter down to 1.1 mm and finally cold-rolled into a 2-mm wide and 0.4-mm thick tape. During mechanical deformation, intermediate annealing at 550 C for an hour in argon atmosphere was performed to reduce the iron sheath's work-hardening. Finally, the tape was also sealed at both ends and annealed at 670 C for five hours in vacuum to form the final product. After annealing, the tape was cut into several pieces for characterization with different techniques. For MOI analysis, which is the main focus in the present work, a 10 mm-long piece was used. For measurements, one side of the tape was polished to expose the superconducting core, resulting in the sample, approximately 50 μm thick. For comparison, a piece of the same MgB_2 wire, without the rolling process, was annealed with the same conditions as were used for the tape.For characterization, the tape was mounted in a helium flow cryostat with a magneto-optical indicator film placed on top of it. The indicator is a Faraday-active layer, which in an optical system with crossed polarizers allows one to visualize magnetic flux that penetrates the specimen <cit.>. In our case it is a bismuth-substituted ferrite garnet film deposited on a gadolinium gallium garnet substrate with an aluminium mirror to reflect the light coming from the polarizer <cit.>. The superconductor with the indicator on top was cooled in zero magnetic field to a temperature below its critical temperature, T_ c. Then a field was applied perpendicular to its surface and increased to μ_ 0H = 85 mT, while taking images every 0.85 mT. 
After that, the sample was heated above T_ c, and the process was repeated. 4-5 mm long samples of the same MgB_2 wire and tape were characterized by vibrating sample magnetometers (VSM, Quantum Design PPMS-9T and PPMS-14T) and SQUID magnetometer (Quantum Design 5T). After removing the iron sheath by mechanical polishing, the diameter and thickness of the wire and tape were 0.9 mm and 0.29 mm, respectively. The isothermal magnetic hysteresis loops at T = 5 K were measured. For the tape, a magnetic field was applied perpendicular to its flat surface and increased from μ_ 0H = 0 to 9 T. After that, it was decreased back to zero, and finally increased in the opposite direction to - 9 T. In the case of the wire, the field was applied perpendicular to its axis and increased from 0 to 14 T, decreased back to 0 and then increased in the opposite direction to - 14 T.§ RESULTS AND DISCUSSION In figure <ref>(a), magnetic moment m of the tape is plotted as a function of applied magnetic field μ_ 0H at T = 5 K. m is smooth at high fields and contains flux jumps at low fields. Flux jumps are seen in the virgin curve and also in the reverse branches, after having applied the maximum field, in fields between + 2.5 and - 2.5 T. The inset in the lower-left corner shows several of the first flux jumps occurring in the virgin curve, including the first one at μ_ 0H = 0.145 T, marked by an arrow. The first flux jump is also shown in the inset in the lower-right corner.Figure <ref>(b) shows m as a function of μ_ 0H at 5 K for the MgB_2 wire. There are many flux jumps in the reverse branch in approximately the same field interval as in the tape, and also flux jumps up to about 3 T in the virgin curve of the wire. The inset to the left shows several of the first flux jumps, and an arrow points at the first one at μ_ 0H = 0.185 T. The inset to the right shows the first flux jump. The exact flux jump pattern in m(H) measurements depends on the ramp rate μdH/dt, which is 13-30 mT/s for these measurements. Since both the wire and tape contain similar flux jump behaviour, it is likely that rolling the wire into the tape before the final annealing and polishing off the iron sheath is not the only cause of flux jumps in the tape. Since magnetic moment measurements give the average magnetic properties over the whole volume of the samples, it is desirable to visualize local magnetic fields by MOI to clarify the sample behaviour, even if the operating range of the indicator films is limited to low magnetic fields, in our case 85 mT, below the first detected jump in the magnetic moment measurements. Figure <ref> shows colour-coded MOI images of a section of the tape covering two thirds of its whole length. The images were obtained at 3.7 K and different magnetic fields. Each image consists of a superposition of three images obtained under the same conditions (zero-field cooling to 3.7 K and applying the same magnetic field). They are presented in three different colours: red, green and blue (RGB). Grey colours in these images appear as a sum of these three colours. Brightness and contrast were optimized for each image individually. The images from the top to bottom correspond to increasing applied fields, namely to μ_ 0H = 18.7, 42.5, 62.1 and 84.2 mT, respectively. The inner region is dark, indicating zero flux, and the area outside of its borders is brighter, which shows that the tape expels magnetic field. The bright horizontal lines at the upper and lower edges of the tape are from the remnants of ferromagnetic sheath. 
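The RGB superposition described above amounts to stacking three grayscale frames into the colour channels of a single image. A minimal sketch of this step (our own illustration – frame1, frame2 and frame3 stand for the three runs; unlike the individual brightness/contrast optimization used for the figures, this sketch normalizes the stack globally):

```python
import numpy as np

def rgb_composite(frame1, frame2, frame3):
    """Combine three grayscale MOI frames (2D arrays) into one RGB image.

    Pixels where all three runs agree come out grey; a feature present
    in only one run keeps that run's colour (red, green or blue).
    """
    stack = np.stack([frame1, frame2, frame3], axis=-1).astype(float)
    stack -= stack.min()
    if stack.max() > 0:
        stack /= stack.max()   # crude joint normalization
    return stack               # shape (H, W, 3), values in [0, 1]
```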
The bright flux front propagating into the sample shows the advancement of magnetic flux. Magnetic flux penetrates gradually into the sample from the edges and forms a critical-state-like region with a non-smooth flux front. The flux front is also non-homogeneous, especially in the upper-left corner, because of enhanced surface roughness of the tape there. In addition to gradual flux penetration, figure <ref> clearly shows specific dendritic formations. These formations appear suddenly and have branch-like structures resembling lightning. The first dendrites are small, with very few branches. As the applied magnetic field increases, new and larger dendrites appear. At the highest applied fields, new ones have even more branches. Most of the dendrites have a non-reproducible appearance, as can be seen in the RGB images, where colours are not mixed. The parts of dendrites that are yellow, cyan or magenta were created by dendrites following the same path in two out of three measurements. This makes the pattern, or parts of the pattern, reproducible to some extent, although branches rarely follow the same path at the same applied fields. More information can be extracted from the differential MOI images presented in figure <ref>. In this figure, the temperature and fields are the same as in figure <ref>, the only difference being that the preceding images, obtained at a slightly lower (by 0.85 mT) magnetic field, are subtracted. White dashed lines in the figure show the outline of the tape and were added as a guide to the eye. The insets in figure <ref>(a) show magnified images of two areas with enhanced brightness. As one can see, there are quite small dendritic avalanches at low applied magnetic fields, and several larger ones that appear at higher fields. The small dendrites at lower fields do not have as many branches as those appearing at higher fields. Figure <ref> shows images obtained after zero-field cooling to different temperatures of 4.0, 6.0, 8.0 and 10.0 K (from top to bottom) and applying a magnetic field of 85 mT. Dendritic avalanches are seen in Figs. <ref>(a)-(c), but not in figure <ref>(d). The higher the temperature, the fewer dendritic avalanches are observed. The magenta colour of the dendrite in figure <ref>(c), which is a mixture of red and blue, shows that this particular dendrite is almost reproducible in two experiments. On the left end of the tape in Figs. <ref>(c) and <ref>(d), there is a white dendrite-like formation. However, magnetic flux penetrates gradually there while increasing the field, which indicates that it is not a thermomagnetic instability, but merely flux penetration into a crack in the sample. This colourless feature has a similar structure to the large dendritic avalanches, which indicates that cracks play an important role in the resulting pattern of dendritic avalanches. This is further supported by the similarity between the red dendrite in figure <ref>(b) and the colourless dendrite in Figs. <ref>(c) and <ref>(d). The red avalanche may have propagated into the crack. This could explain the few cases of reproducibility of dendritic formations in the sample. The fields at which the first dendritic avalanche appeared at a given temperature were not the same in the three consecutive experiments. The lower threshold field μ_0H_thr for the appearance of dendritic avalanches as a function of temperature T is shown in figure <ref>. The temperatures are 3.7 K and every 0.5 K from 4.0 to 10.0 K.
At a given temperature, the red square, green triangle and blue circle correspond to the fields at which the first dendritic avalanche appeared in three different MOI experiments. Dendritic avalanches were observed in all three experiments at temperatures up to 8.5 K. At 9.0 K a dendritic avalanche was only observed in two of the experiments, and none were observed at 9.5 or 10.0 K. Since the applied magnetic field, limited to 85 mT, was far below the field needed for full penetration at the temperatures of the experiments, we cannot determine the exact threshold temperature T_thr for the disappearance of dendritic avalanches, but we can state that it is above 9.0 K. In figure <ref>, μ_0H_thr is nearly constant at low temperatures, but increases a little with increasing T up to 8.0 K, and increases rapidly above 8.0 K. This relationship between μ_0H_thr and T is similar to what has been previously reported for 300-nm thick films of MgB_2 <cit.> and 500-nm thick Nb films <cit.>, but the rapid increase occurs at different temperatures in those experiments. The variance of μ_0H_thr is also nearly constant up to T = 8.0 K and then increases, which can be seen for 8.5 and 9.0 K in comparison with that at lower temperatures. From theoretical models <cit.> one would expect the threshold field to be much higher in tapes. This indicates that the edge properties determine the conditions for the onset of the first avalanche. The threshold fields found from the MOI experiments presented in figure <ref> are much lower than the 145 mT found from the m(H) measurements at 5 K in figure <ref>(a), because the smallest avalanches could not be seen in magnetic moment measurements. If the magnitude of the magnetic moment in m(H) plots is lower than expected in regions where no flux jumps are seen, it is possible that there are in fact several flux jumps too small to be detected in such measurements, which are then interpreted by the instrument as noise. MOI, although limited in operating range to low magnetic fields, can be employed to confirm the existence of such avalanches and visualize their propagation into the sample.

§ CONCLUSIONS

Flux jumps caused by thermomagnetic instability have been investigated in an MgB_2 tape by MOI and magnetic moment measurements. Dendritic avalanches were observed with MOI and occurred at much lower magnetic fields than the flux jumps seen in measurements of magnetic moment. The smallest thermomagnetic instabilities seen in MOI could not be observed in our magnetic moment measurements. The dendritic avalanches in the tape have similar properties to those appearing in thin films, but have relatively few branches, and their patterns are in some cases partially reproducible. Most of them, however, are non-reproducible. The average size of new dendritic avalanches, as well as their branching, increases with increasing applied magnetic field. Upon increasing the temperature, the number of avalanches decreases. The lower threshold magnetic field for their appearance first increases slowly with increasing temperature and then increases rapidly above 8 K. The lower threshold field is much lower than what is expected from theoretical models, indicating that the onset is dictated by the edge properties of the tape. This work was financially supported by the University of Oslo, the Spanish Ministerio de Economía y Competitividad, the European FEDER Program (Projects MAT2011-22719 and ENE-2014-52105-R) and the Gobierno de Aragón (research group T12).
The authors would like to acknowledge the use of Servicio General de Apoyo a la Investigación-SAI, Universidad de Zaragoza, and to thank I. Cabistany and J. A. Gómez for technical assistance with manufacturing the tapes.
Source: arXiv:1708.07506v1 [cond-mat.supr-con], T. Qureishy et al., "Dendritic flux avalanches in a superconducting MgB_2 tape" (2017).
Revisiting the Centroid-based Method: A Strong Baseline for Multi-Document Summarization

Demian Gholipour Ghalandari

The centroid-based model for extractive document summarization is a simple and fast baseline that ranks sentences based on their similarity to a centroid vector. In this paper, we apply this ranking to possible summaries instead of sentences and use a simple greedy algorithm to find the best summary. Furthermore, we show possibilities to scale up to larger input document collections by selecting a small number of sentences from each document prior to constructing the summary. Experiments were done on the DUC2004 dataset for multi-document summarization. We observe a higher performance over the original model, on par with more complex state-of-the-art methods.

§ INTRODUCTION

Extractive multi-document summarization (MDS) aims to summarize a collection of documents by selecting a small number of sentences that represent the original content appropriately. Typical objectives for assembling a summary include information coverage and non-redundancy. A wide variety of methods have been introduced to approach MDS. Many approaches are based on sentence ranking, i.e. assigning each sentence a score that indicates how well the sentence summarizes the input <cit.>. A summary is created by selecting the top entries of the ranked list of sentences. Since the sentences are often treated separately, these models might allow redundancy in the summary. Therefore, they are often extended by an anti-redundancy filter while de-queuing ranked sentence lists. Other approaches work at summary-level rather than sentence-level and aim to optimize functions of sets of sentences to find good summaries, such as KL-divergence between probability distributions <cit.> or submodular functions that represent coverage, diversity, etc. <cit.>
The centroid-based model belongs to the former group: it represents sentences as bag-of-word (BOW) vectors with TF-IDF weighting and uses a centroid of these vectors to represent the whole document collection <cit.>. The sentences are ranked by their cosine similarity to the centroid vector. This method is often found as a baseline in evaluations, where it usually is outperformed <cit.>. This baseline can easily be adapted to work at the summary-level instead of the sentence-level. This is done by representing a summary as the centroid of its sentence vectors and maximizing the similarity between the summary centroid and the centroid of the document collection. A simple greedy algorithm is used to find the best summary under a length constraint. In order to keep the method efficient, we outline different methods to select a small number of candidate sentences from each document in the input collection before constructing the summary. We test these modifications on the DUC2004 dataset for multi-document summarization. The results show an improvement of Rouge scores over the original centroid method. The performance is on par with state-of-the-art methods, which shows that the similarity between a summary centroid and the input centroid is a well-suited function for global summary optimization. The summarization approach presented in this paper is fast, unsupervised and simple to implement. Nevertheless, it performs as well as more complex state-of-the-art approaches in terms of Rouge scores on the DUC2004 dataset. It can be used as a strong baseline for future research or as a fast and easy-to-deploy summarization tool.

§ APPROACH

§.§ Original Centroid-based Method
The original centroid-based model is described by <cit.>. It represents sentences as BOW vectors with TF-IDF weighting. The centroid vector is the sum of all sentence vectors, and each sentence is scored by the cosine similarity between its vector representation and the centroid vector. Cosine similarity measures how close two vectors A and B are based on their angle and is defined as follows:

sim(A, B) = (A · B) / (|A| |B|)

A summary is selected by de-queuing the ranked list of sentences in decreasing order until the desired summary length is reached. <cit.> implement this original model with the following modifications:
* In order to avoid redundant sentences in the summary, a new sentence is only included if it does not exceed a certain maximum similarity to any of the already included sentences.
* To focus on only the most important terms of the input documents, the values in the centroid vector which fall below a tuned threshold are set to zero.
This model, which includes the anti-redundancy filter and the selection of top-ranking features, is treated as the "original" centroid-based model in this paper. We implement the selection of top-ranking features for both the original and modified models slightly differently to <cit.>: all words in the vocabulary are ranked by their value in the centroid vector. On a development dataset, a parameter is tuned that defines the proportion of the ranked vocabulary that is represented in the centroid vector, and the rest is set to zero. This variant resulted in more stable behavior for different amounts of input documents.

§.§ Modified Summary Selection
The similarity to the centroid vector can also be used to score a summary instead of a sentence.
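A minimal sketch of this summary-level scoring, combined with the greedy selection that the following paragraphs describe, is given below (our own illustration: the helper names, the use of scikit-learn's TfidfVectorizer and the fixed 100-word budget are assumptions for the example, not the exact implementation used in the experiments):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0.0 or nb == 0.0 else float(a @ b) / (na * nb)

def greedy_centroid_summary(sentences, max_words=100):
    """Select sentences whose summed vector is most similar to the centroid."""
    X = TfidfVectorizer(stop_words="english").fit_transform(sentences).toarray()
    centroid = X.sum(axis=0)               # centroid of the whole collection
    chosen = []                            # indices of selected sentences
    summary_vec = np.zeros_like(centroid)  # running sum of selected vectors
    words = 0
    while True:
        best_i, best_score = None, -1.0
        for i in range(len(sentences)):
            if i in chosen or words + len(sentences[i].split()) > max_words:
                continue
            # Score the whole prospective summary, not the sentence alone.
            score = cosine(summary_vec + X[i], centroid)
            if score > best_score:
                best_i, best_score = i, score
        if best_i is None:                 # no remaining sentence fits the budget
            break
        chosen.append(best_i)
        summary_vec += X[best_i]
        words += len(sentences[best_i].split())
    return [sentences[i] for i in sorted(chosen)]
```

Note that a candidate is scored by the similarity of the whole prospective summary to the centroid, not by its own similarity – this is precisely the difference discussed next.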
By representing a summary as the sum of its sentence vectors, it can be compared to the centroid, which is different from adding up the centroid-similarity scores of the individual sentences. With this modification, the summarization task is explicitly modelled as finding a combination of sentences that summarize the input well together, instead of finding sentences that summarize the input well independently. This strategy should also be less dependent on anti-redundancy filtering, since a combination of redundant sentences is probably less similar to the centroid than a more diverse selection that covers different prevalent topics. In the experiments, we will therefore call this modification the "global" variant of the centroid model. The same principle is used by the KLSum model <cit.>, in which the optimal summary minimizes the KL-divergence of the probability distribution of words in the input from the distribution in the summary. KLSum uses a greedy algorithm to find the best summary. Starting with an empty summary, the algorithm includes at each iteration the sentence that maximizes the similarity to the centroid when added to the already selected sentences. We also use this algorithm for sentence selection. The procedure is depicted in Algorithm <ref> below.

§.§ Preselection of Sentences
The modified sentence selection method is less efficient than the original method, since at each iteration the score of a possible summary has to be computed for all remaining candidate sentences. It may not be noticeable for a small number of input sentences. However, it would have an impact if the amount of input documents was larger, e.g. for the summarization of top-100 search results in document retrieval. Therefore, we explore different methods for reducing the number of input sentences before applying the greedy sentence selection algorithm, to make the model more suited for larger inputs. It is also important to examine how this affects Rouge scores. We test the following methods of selecting N sentences from each document as candidates for the greedy sentence selection algorithm:

§.§.§ N-first
The first N sentences of the document are selected. This results in a mixture of a lead-N baseline and the centroid-based method.

§.§.§ N-best
The sentences are ranked separately in each document by their cosine similarity to the centroid vector, in decreasing order. The N best sentences of each document are selected as candidates.

§.§.§ New-TF-IDF
Each sentence is scored by the sum of the TF-IDF scores of the terms that are mentioned in that sentence for the first time in the document. The intuition is that sentences are preferred if they introduce new important information to a document. Note that in each of these candidate selection methods, the centroid vector is always computed as the sum of all sentence vectors, including those of the ignored sentences.

§ EXPERIMENTS

§.§ Datasets
For testing, we use the DUC2004 Task 2 dataset from the Document Understanding Conference (DUC). The dataset consists of 50 document clusters containing 10 documents each. For tuning hyperparameters, we use the CNN/Daily Mail dataset <cit.>, which provides summary bullet points for individual news articles. In order to adapt the dataset for MDS, 50 CNN articles were randomly selected as documents to initialize 50 clusters. For each of these seed articles, 9 articles with the highest word overlap in the first 3 sentences were added to that cluster. This resulted in 50 document clusters, each containing 10 topically related articles.
The reference summaries for each cluster were created by interleaving the sentences of the article summaries until a length constraint (100 words) was reached.

§.§ Baselines & Evaluation
<cit.> published SumRepo, a repository of summaries for the DUC2004 dataset generated by several baseline and state-of-the-art methods [<http://www.cis.upenn.edu/ nlp/corpora/sumrepo.html>]. We evaluate summaries generated by a selection of these methods on the same data that we use for testing. We calculate Rouge scores with the Rouge toolkit <cit.>. In order to compare our results to <cit.>, we use the same Rouge settings as they do (ROUGE-1.5.5 with the settings -n 4 -m -a -l 100 -x -c 95 -r 1000 -f A -p 0.5 -t 0) and report results for Rouge-1, Rouge-2 and Rouge-4 recall. The baselines include a basic centroid-based model without an anti-redundancy filter and feature reduction.

§.§ Preprocessing
In the summarization methods proposed in this paper, the preprocessing includes sentence segmentation, lowercasing and stopword removal.

§.§ Parameter Tuning
The similarity threshold for avoiding redundancy (r) and the vocabulary-included-in-centroid ratio (v) are tuned with the original centroid model on our development set. Values from 0 to 1 with step size 0.1 were tested using a grid search. The optimal values for r and v were 0.6 and 0.1, respectively. These values were used for all tested variants of the centroid model. For the different methods of choosing N sentences from each document before summarization, we tuned N separately for each, with values from 1 to 10, using the global model. The best N found for N-first, N-best and new-TF-IDF were 7, 2 and 3, respectively.

§.§ Results
Table <ref> shows the Rouge scores measured in our experiments. The first two sections show results for baseline and SOTA summaries from SumRepo. The third section shows the summarization variants presented in this paper. "G" indicates that the global greedy algorithm was used instead of sentence-level ranking. In the last section, "- R" indicates that the method was tested without the anti-redundancy filter. Both the global optimization and the sentence preselection have a positive impact on the performance. The global + new-TF-IDF variant outperforms all but the DPP model in Rouge-1 recall. The global + N-first variant outperforms all other models in Rouge-2 recall. However, the Rouge scores of the SOTA methods and the introduced centroid variants are in a very similar range. Interestingly, the original centroid-based model, without any of the new modifications introduced in this paper, already shows quite high Rouge scores in comparison to the other baseline methods. This is due to the anti-redundancy filter and the selection of top-ranking features. In order to see whether the global sentence selection alleviates the need for an anti-redundancy filter, the original method and the global method (without the N-sentences-per-document selection) were tested without it (section 4 in Table <ref>). In terms of Rouge-1 recall, the original model is clearly very dependent on checking for redundancy when including sentences, while the global variant does not change its performance much without the anti-redundancy filter. This matches the expectation that the globally motivated method handles redundancy implicitly.
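For reference, the sentence-level anti-redundancy check that the original variant depends on reduces to a single similarity test against the already selected sentences; a minimal sketch, reusing the cosine helper from the earlier listing and the tuned threshold r = 0.6 from above (the function name is our own):

```python
def passes_redundancy_filter(candidate_vec, selected_vecs, r=0.6):
    # Reject the candidate if it is too similar to any already selected sentence.
    return all(cosine(candidate_vec, v) <= r for v in selected_vecs)
```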
§ EXAMPLE SUMMARIES

Table <ref> shows generated example summaries using the global centroid method with the three sentence preselection methods. For readability, truncated sentences (due to the 100-word limit) at the end of the summaries are excluded. The original positions of the summary sentences, i.e. the indices of the document and the sentence inside the document, are given. As can be seen in the examples, the N-first method is restricted to sentences appearing early in documents. In the new-TF-IDF example, the second and third sentences were preselected because high-ranking features such as "robot" and "arm" appeared for the first time in the respective documents.

§ RELATED WORK

In addition to various works on sophisticated models for multi-document summarization, other experiments have been done showing that simple modifications to the standard baseline methods can perform quite well. <cit.> improved the centroid-based method by representing sentences as sums of word embeddings instead of TF-IDF vectors, so that semantic relationships between sentences that have no words in common can be captured. <cit.> also evaluated summaries from SumRepo and did experiments on improving baseline systems such as the centroid-based and the KL-divergence method with different anti-redundancy filters. Their best optimized baseline obtained a performance similar to the ICSI method in SumRepo.

§ CONCLUSION

In this paper we show that simple modifications to the centroid-based method can bring its performance to the same level as state-of-the-art methods on the DUC2004 dataset. The resulting summarization methods are unsupervised, efficient and do not require complicated feature engineering or training. Changing from a ranking-based method to a global optimization method increases performance and makes the summarizer less dependent on explicitly checking for redundancy. This can be useful for input document collections with differing levels of content diversity. The presented methods for restricting the input to a maximum of N sentences per document lead to additional improvements while reducing computation effort, if global optimization is being used. These methods could be useful for other summarization models that rely on pairwise similarity computations between all input sentences, or other properties which would slow down summarization of large numbers of input sentences. The modified methods can also be used as strong baselines for future experiments in multi-document summarization.
Source: arXiv:1708.07690v1 [cs.CL], D. Gholipour Ghalandari, "Revisiting the Centroid-based Method: A Strong Baseline for Multi-Document Summarization" (2017).
Frozen Finsler metrics] Geodesic sprays and frozen metricsin rheonomic Lagrange manifolds DTU Compute, Mathematics, Kgs. Lyngby, Denmark [S. Markvorsen][email protected][2000]Primary 53, 58 We define systems of pre-extremals for the energy functional of regular rheonomic Lagran­ge manifolds and show how they induce well-defined Hamilton orthogonal nets. Such nets have applications in the modelling of e.g. wildfire spread under time- and space-dependent conditions.The time function inheritedfrom such a Hamilton net induces in turn a time-independent Finsler metric – we call it the associated frozen metric. It is simply obtained by inserting the time function from the net into the given Lagrangean. The energy pre-extremals then become ordinary Finsler geodesics of the frozen metric and the Hamilton orthogonality property is preserved during the freeze. We compare our results with previous findings of G. W. Richards concerning his application of Huyghens' principle to establish the PDE system for Hamilton orthogonal nets in 2D Randers spaces and also concerning his explicit spray solutions for time-only dependent Randers spaces. We analyze examples of time-dependent 2D Randers spaces with simple, yet non-trivial, Zermelo data; we obtain analytic and numerical solutions to their respective energy pre-extremal equations; and we display details of the resulting (frozen) Hamilton orthogonal nets.[ Steen Markvorsen December 30, 2023 ===================== § INTRODUCTIONA large number of natural phenomena evolve under highly non-isotropic and time-varying conditions. The spread of wildfires is but one such phenomenon – see <cit.>. The concept of a regular rheonomic Lagrange manifold (M, L) offers a natural global geometric setting for an initial study of such phenomena. The anisotropy as well as the time- and space-dependency is represented by a time-dependent Finsler metric F with L=F^2. As a further structural Ansatz for the phenomena under consideration we will assume that they are 'driven' in this Finsler metric background as wave frontals issuing from a given initial base hypersurface N in M with F-unit speed rays, which all leave N orthogonally w.r.t. the metric F. Huyghens' principle then implies that the rays are everywhere what we call Hamilton orthogonal to the frontals. This key observation was also worked out by Richards in <cit.> and <cit.>, where he presents an explicit PDE system which is equivalent to the Hamilton orthogonality for the special 2D Finsler manifolds known as 2D Randers spaces, represented by their so-called Zermelo data.Our main result is that in general, i.e for any regular Lagrangean, and under the structural Ansatz above, Hamilton orthogonality is equivalent to the condition that all the rays of the spread phenomenon are energy pre-extremals for the given Lagrangean. The first order PDE system for Hamilton orthogonality is thus equivalent to a second order ODE system for these pre-extremals. The spread problem is in this way solvable via the rays, which then together – side by side – mold the frontals of the spread phenomenon. As a corollary – which may be of interest in its own right – we also show the following. If we freeze the metric F to the specific functional value that it has at a given point precisely when the frontal of the given spread passes through this point, then we obtain a frozen time-independent Finsler metric F, in which the given rays areF-geodesics, which mold the same frontals as before. 
These frozen metrics are highly dependent on both the base hypersurface N and on the time of ignition from N. As already alluded to in the abstract we illustrate and support these main results by explicit calculations for simple 2D time-dependent Randers metrics and we display various details from the corresponding point-ignited spread phenomena. §.§ Outline of paperIn section <ref> we describe the concept of a regular rheonomic Lagrange manifold and introduce the corresponding time-dependent indicatrix field. For any given variation of a given curve the corresponding L-energy and F-length (for unit speed curves) is differentiated with respect to the variation parameter in section <ref> and the ensuing extremal equations are displayed. The notion of a unit fibernet is introduced in section <ref> as a background for the presentation of the main results in sections <ref> and <ref> concerning the equivalence of Hamilton orthogonality and energy pre-extremal rays and concerning the frozen metrics – as mentioned in the introduction above. The rheonomic Randers spaces and their equivalent Zermelo data are considered in section <ref> with the purpose of presenting Richards' results and the promised examples in section <ref>.§ RHEONOMIC LAGRANGE MANIFOLDS A regular rheonomic Lagrange space (M^n, L) is a smooth manifold M with a time-dependent Lagrangean L modelled on a time-dependent Finsler metric F, i.e. L = F^2– see e.g. <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. The metric F induces for each time t in a given time interval a smooth family of Minkowski norms in the tangent spaces of M. For the transparency of this work, we shall be mainly interested in two-dimensional cases and examples. Correspondingly we write – with M = ℝ^2, p = (u,v) = (u^1, u^2), and V = (x,y) = (x^1, x^2) ∈ T_pℝ^2 = {∂_u , ∂_v}:F = F_t = F(t, p)= F(t, p, V) = F(t, u,v, x, y) =F(t, u^1, u^2, x^1, x^2) ,where the latter index notation is primarily used here for expressions involving general summation over repeated indices. The higher dimensional cases are easily obtained by extending the indices beyond 2 – like p = (u^1, ⋯ , u^n) and V = (x^1, ⋯ ,x^n) ∈ T_pM. By definition, see <cit.> and <cit.>, a Finsler metric on a domain 𝒰 is a smooth family of Minkowski norms on the tangent planes, i.e. a smoothfamily of indicatrix templates which at each time t in each tangent plane T_p 𝒰 at the respective points p = (u,v) in the parameter domain 𝒰is determined by the nonnegative smooth function F_t of t as follows: * F_t is smooth on each punctured tangent plane T_p 𝒰 - {(0,0)}.* F_t is positively homogeneous of degree one: F_t(kV) = kF_t(V) for every V∈ T_p 𝒰 and every k > 0.* The following bilinear symmetric form on the tangent plane is positive definite:g_t,p, V(U, W) = 1/2∂^2/∂λ∂μ[F_t^2(V + λ U + μ W)]_|λ=μ=0 Since the function F_t is homogenous of degree 1, the fundamental metric g_t,p, V(U, W) satisfies the following for each time t:g_t,p, V(V,W)= 1/2∂/∂λ[F_t^2(V + λ W)]_|λ=0g_t,p, V(V,V)=F_t^2(V) = ‖ V ‖^2_F_t . Suppose that we use the canonical basis {∂_u = b_1, ∂_v = b_2} in T_p 𝒰, and let V = x^ib_i. 
Then we can define coordinates of g = g_{t,p,V} in the usual way:

2g_{ij}(V) = 2g_{t,p,V}(b_i, b_j) = ∂^2/∂λ∂μ [F_t^2(V + λb_i + μb_j)]_{|λ=μ=0} = Hess_{ij}(F_t^2)(V) = [F_t^2]_{x^i x^j}(V),

where the Hessian is evaluated at the vector V and where the last expression [F_t^2]_{x^i x^j} is 'shorthand' for the double derivatives of F_t^2 with respect to the tangent plane coordinates x^i.

In the following we shall need other partial derivatives of F_t^2 – such as [F_t^2]_t(V), [F_t^2]_{u^k}(V), and [F_t^2]_{u^l x^k}(V) – as well as the inverse matrix of g_{ij}(V), which are now all well-defined, e.g.:

[g^{ij}(V)] = [g_{ij}(V)]^{-1} and [g^{ij}(V)g_{kj}(V)] = [1, 0; 0, 1].

Moreover, the following second order quantities are of well-known and instrumental importance for the study of Finsler manifolds and Lagrangean geometry – see <cit.>, <cit.>, <cit.>, and <cit.>:

G^i(y) = (1/4)g^{il}(y)([F_t^2]_{x^k y^l}(y)y^k - [F_t^2]_{x^l}(y)),
N_0^i(y) = (1/2)g^{il}(y)[F_t^2]_{t y^l}(y).

The set of points in the tangent plane T_p𝒰 which have F_t-unit position vectors is called the instantaneous indicatrix of F_t at p:

ℐ_{t,p} = F_t^{-1}(1) = {V ∈ T_p𝒰 | F(t, p, V) = 1}.

Since g_{t,p,V} is positive definite, every indicatrix ℐ_{t,p} is automatically strongly convex in its tangent plane at p, and it contains the origin of the tangent plane in its interior, see <cit.>. It is therefore a pointed oval – the point being the origin of the tangent plane.

§ VARIATIONS OF L-ENERGY AND OF F-LENGTH

Since we shall be interested in particular aspects of the pre-extremals of the energy functional in (M, L), we briefly review the first variation of energy – with special emphasis on the influence of the time dependence of the underlying metric. In the following we suppress the indication of the time-dependence and write F for F_t.

The first variation formula will give the ODE conditions for a curve to be an F^2-energy extremal in M. The ODE system for the extremals is, of course, nothing but the Euler–Lagrange equations for the time-dependent Lagrange functional L, see <cit.> and <cit.>:

We let c denote a candidate curve for an extremal of F^2, i.e. of L:

c : [a, b] → M.

This means that there is a partition of [a, b],

a = t_0 < ⋯ < t_m = b,

such that c is smooth on each subinterval [t_{i-1}, t_i] for every i = 1, ⋯, m.

A variation of the curve c is then a piecewise smooth map

H : (-ε, ε) × [a, b] → M

such that

H is continuous on (-ε, ε) × [a, b],
H is smooth on each (-ε, ε) × [t_{i-1}, t_i],
H(0, t) = c(t) for all a ≤ t ≤ b.

The last equation in (<ref>) states that c is the base curve in the family of curves c_u(t) = H(u, t), which sweeps out the variation. The variation H induces the associated variation vector field V(t), so that we have, in local coordinates:

∂H/∂u(0, t) = V(t) = V^k(t) ∂/∂x^k|_{c(t)}.

The F^2-energy values of the individual piecewise smooth curves c_u(t) in the variation family H are then given by

ℰ(u) = ∫_a^b L(t, c_u(t), ċ_u(t)) dt = ∫_a^b F^2(t, c_u(t), ċ_u(t)) dt = ∑_{i=1}^m ∫_{t_{i-1}}^{t_i} F^2(t, c_u(t), ∂H/∂t(u, t)) dt.

Then we have the following u-derivative of ℰ(u) at u = 0. We refer to <cit.> and apply the shorthand notation presented in section <ref>. This calculation mimics almost verbatim the classical calculation in <cit.> with the difference, however, that our F field is now time-dependent, so that there will be an explicit extra term in the integrand below. This extra term is precisely given by N_0^i(y).
ℰ'(0) = ∫_a^b ([F^2]_{x^k} V^k + [F^2]_{y^k} (dV^k/dt)) dt
= ∫_a^b ([F^2]_{x^k} - (d/dt)[F^2]_{y^k}) V^k dt + ∑_{i=1}^m [[F^2]_{y^k} V^k]_{t_{i-1}}^{t_i}
= ∫_a^b ([F^2]_{x^k} - [F^2]_{t y^k} - [F^2]_{x^l y^k} ċ^l - [F^2]_{y^l y^k} c̈^l) V^k dt + 2∑_{i=1}^m [g_{jk} ċ^j V^k]_{t_{i-1}}^{t_i}
= -2∫_a^b g_{jk} (c̈^j + 2G^j(ċ) + N_0^j(ċ)) V^k dt + 2∑_{i=1}^m [g_{jk} ċ^j V^k]_{t_{i-1}}^{t_i}.

We have thus proved:

Let H and V denote a variation of a curve c as above. Then the energy functional on the given variation is

ℰ(c) = ∫_a^b F^2(t, c_u(t), ċ_u(t)) dt,

with the following derivative:

ℰ'(0) = -2∫_a^b g_{jk} (c̈^j + 2G^j(ċ) + N_0^j(ċ)) V^k dt + 2∑_{i=1}^m [g_{jk} ċ^j V^k]_{t_{i-1}}^{t_i}.

Since we shall need it below, we observe the following immediate analogue for the F-length functional ℒ under the assumption that the base curve c is F-unit speed parametrized, see <cit.>, <cit.>:

Suppose again we consider the variational setting with H and V as above, but now with base curve c satisfying the unit speed condition

‖ċ(t)‖_F = 1 for all t ∈ [a, b].

Then the length functional on the given variation is

ℒ(c) = ∫_a^b F(t, c_u(t), ċ_u(t)) dt,

and it has the same expression for its derivative as the energy functional – except for the factor 2:

ℒ'(0) = -∫_a^b g_{jk} (c̈^j + 2G^j(ċ) + N_0^j(ċ)) V^k dt + ∑_{i=1}^m [g_{jk} ċ^j V^k]_{t_{i-1}}^{t_i}.

We note that if the Lagrange-Finsler metric F is actually time-independent, then the formulas (<ref>) and (<ref>) above are just modified by setting N_0^j(ċ) = 0.

§ UNIT FIBER NETS

We first define a special class of nets in (M^n, F) as follows:

Let N^{n-1} be a smooth embedded orientable hypersurface in (M^n, F) with a well-defined choice of normal vector field n at time t_0, i.e. n is everywhere orthogonal to N with respect to F_{t_0} and has unit length with respect to F_{t_0}. An (N, t_0)-based unit fiber net γ in (M^n, F) is a diffeomorphism

γ : N × ]t_0, T[ ⟼ 𝒰 ⊂ M

with the property that each fiber γ(s_0, t) is a unit speed ray that 'leaves' N orthogonally at time t_0:

γ(s, t_0) = s for all s ∈ N,
γ̇_t(s, t_0) = n(s) for all s ∈ N, and
‖γ̇_t(s, t)‖ = 1 for all s ∈ N and all t ∈ ]t_0, T[.

The definition of an (N, t_0)-based unit fiber net can easily be extended to submanifolds N of higher co-dimension than 1 by fattening the submanifold to a sufficiently thin ε-tube w.r.t. F_{t_0} and then considering the boundary hypersurface of that tube instead. We shall tacitly assume this construction for the cases illustrated explicitly below, where N is a point p – in which case the point in question should be fattened to a small ε-sphere around p. For example, a (p, t_0)-based unit fiber net in ℝ^2 which corresponds to axis-symmetric polar elliptic coordinates is then a parametrization of the following type

γ(s, t) = (p_1 + t·a·cos(s), p_2 + t·b·sin(s)), t ∈ ]0, ∞[, s ∈ 𝕊^1.

These specific nets appear naturally in constant Randers metrics in ℝ^2 with Zermelo data a, b, C = 0, and θ = 0 – see sections <ref> and <ref> below.

§.§ Huyghens' principle

A much celebrated and useful principle for obtaining a particular spray structure of an (N, t_0)-based unit fiber net, which is also our main concern in this work, is Huyghens' principle – see <cit.> and <cit.>. To state the principle together with one of its significant consequences, we shall first introduce the time-level sets of the given net. For each t_1 ∈ ]t_0, T[ we define such a level in the usual way:

η_{t_1} = {p ∈ 𝒰 | t(p) = t_1}.

Let γ(s, t) denote an (N, t_0)-based unit fiber net.
We will say that the net satisfies Huyghens' principle if the following holds true for all sufficiently small (but nonzero) time increments δ: The time level set η_{t_1+δ} is obtained as the envelope of the set of (p, t_1)-based unit fiber nets of duration t ∈ ]t_1, t_1+δ[ in 𝒰, where p goes through all points in η_{t_1}. These (p, t_1)-based unit fiber nets will be called Huyghens droplets of duration δ from the level set η_{t_1} – see examples in figure <ref> below.

The structural consequence of Huyghens' principle is then encoded into the following result, see <cit.>, and compare <cit.> (concerning conjugate directions) with figure <ref> (concerning the equivalent F-orthogonal directions) below.

Let γ(s, t) denote an (N, t_0)-based unit fiber net which satisfies Huyghens' principle. Then the direction γ̇_s(s, t_1) of the frontal η_{t_1} at time t_1 and at a given point p on that frontal is F-orthogonal to the direction γ̇_t(s, t_1) of the ray γ(s, t) through p, i.e.

γ̇_s(s, t) ⊥_{F_t} γ̇_t(s, t) for all s ∈ N, t ∈ ]t_0, T[.

The orthogonality obtained and expressed in (<ref>) we will call Hamilton orthogonality. In fact, the proof of Hamilton orthogonality only needs an infinitesimal version of Huyghens' principle. This specific consequence of Huyghens' principle was established for (N, t_0)-based unit fiber nets in 2D Randers spaces by G. W. Richards in <cit.>, where the ensuing Hamilton orthogonality is expressed directly in terms of a system of PDEs – see theorem <ref> in section <ref> below.

It is not clear if – or under which additional conditions – the converse to theorem <ref> holds true, i.e. whether Hamilton orthogonality for a given (N, t_0)-based unit fiber net implies that the net and the background metric also satisfy Huyghens' principle. The problem is that the metric F is in this generality time-dependent, so that e.g. the triangle inequality does not hold for the respective rays. As we shall see below, however, the rays do solve an energy pre-extremal problem, and the rays are actually geodesics in the so-called frozen metrics associated with F. However, these frozen metrics typically do not agree if the corresponding nets have different base hypersurfaces N. Nevertheless, it is indicated by figure <ref> in section <ref> below that in suitable simplistic settings, like the ones under consideration there, the Huyghens droplets from one frontal actually seem to envelope the next frontal in a corresponding increment of time. If F is time-independent, we do have the necessary triangle inequalities at our disposal, and in this case the converse to theorem <ref> holds true – as discussed in <cit.>.

§ MAIN RESULTS

We now show that the Hamilton orthogonality defined above is equivalent to a system of ODEs for the rays of the net in question:

Suppose γ(s, t) is an (N, t_0)-based unit fiber net in M. Then the following two conditions for γ are equivalent:

A: The energy pre-extremal condition:

ρ(s, t)·γ̇_t(s, t) = ∑_j (γ̈_t^j(s, t) + 2G^j(γ̇_t(s, t)) + N_0^j(γ̇_t(s, t))) ∂_j,

for some function ρ of s ∈ N and t ∈ ]t_0, T[, and:

B: The Hamilton orthogonality condition:

γ̇_s(s, t) ⊥_{F_t} γ̇_t(s, t) for all s ∈ N, t ∈ ]t_0, T[.

The unit fiber net satisfying one, hence both, of the conditions (<ref>) and (<ref>) is called the (N, t_0)-based WF-net in (M, L).
Before proving the theorem we first observe that standard ODE theory gives existence and uniqueness of a net γ(s, t) which satisfies the equations (<ref>) and (<ref>):

Suppose N does not curve too much in the direction of its n-field at time t_0, i.e. the hypersurface N has bounded second fundamental form in M with respect to F_{t_0}. Then there exists a unique (N, t_0)-based WF-net γ(s, t) for t ∈ ]t_0, T[ for sufficiently small T.

The rays of a WF-net are usually not extremals for the L-energy, because ρ(s, t) = 0 is usually not compatible with the unit speed condition in (<ref>) in definition <ref>. Moreover, it is important to note that a given (N, t_0)-based WF net may be quite different from an (N, t_1)-based WF net for different initial times t_0 and t_1 – see example <ref>.

For transparency, and mostly in order to relate directly to the key 2D examples that we display below, we will assume that M = ℝ^2, i.e. M is identical to its chart with coordinates (u, v) in ℝ^2. This is done without much loss of generality, since in the general setting we are only concerned with the semi-local aspects of the N-based nets in question. Moreover, the expressions and arguments in this proof generalize easily to higher dimensions.

Suppose first that the net satisfies (<ref>). Since each ray in the net by assumption has unit F-speed, the specific variation V = γ̇_s determined by the rays in the net will give ℒ' = 0 for all values of b in the variational formula (<ref>), because each ray in the variation has constant length:

0 = ℒ'(0) = -∫_{t_0}^b g_{jk} (c̈^j + 2G^j(ċ) + N_0^j(ċ)) V^k dt + ∑_{i=1}^m [g_{jk} ċ^j V^k]_{t_{i-1}}^{t_i}.

Since, also by assumption,

c̈^j + 2G^j(ċ) + N_0^j(ċ) = ρ(t)·ċ^j

for some function ρ, we get for all b ∈ ]t_0, T[:

0 = -∫_{t_0}^b m(t)·ρ(t) dt + m(b),

where m is shorthand for

m(t) = g_{jk} ċ^j(t) V^k(t).

By assumption on the net we have m(t_0) = 0, and from (<ref>) we get upon differentiation with respect to b:

m'(b) = m(b)·ρ(b) for all b ∈ ]t_0, T[.

It follows that m(t) = 0 for all t and therefore – by (<ref>) – the variation vector field V(t) = γ̇_s(s, t) is F-orthogonal to ċ^j(t)∂_j = γ̇_t(s, t). Hence the net also satisfies equation (<ref>).

Conversely, suppose that γ(s, t) satisfies (<ref>). Again we let V denote any net-induced variation vector field along a given base ray γ(s_0, t) = c(t), which again satisfies the following equation for all b, since V is assumed to be F-orthogonal to ċ:

0 = ℒ'(0) = -∫_{t_0}^b g_{jk} (c̈^j + 2G^j(ċ) + N_0^j(ċ)) V^k dt.

Since the integrand is thence identically 0, the vector

∑_j (c̈^j + 2G^j(ċ) + N_0^j(ċ)) ∂_j

is F-orthogonal to the variation field V and hence parallel to ċ(t) = γ̇_t. It follows that the given net satisfying (<ref>) also satisfies (<ref>).

We note en passant that if the Lagrangean L is time-independent, i.e. if (M, L) is a so-called regular scleronomic Lagrangean manifold, then F is also time-independent and the rays of any WF net in (M, L) are geodesics for the metric F. Indeed, in this case the WF net condition (<ref>) is precisely that the rays are pre-geodesics of constant F-speed 1, hence they are geodesics.

§ FROZEN FINSLER METRICS

Let γ(s, t) denote an (N, t_0)-based WF net in (M, F) with the image 𝒰 = γ(N × ]t_0, T[). Then the frozen Finsler metric F̄ on 𝒰 ⊂ M is defined as the following time-independent (scleronomic) Finsler metric:

F̄(u, v, x, y) = F(t(u,v), u, v, x, y) for all (u, v) ∈ 𝒰.

The metric F̄ is called the frozen metric associated to F, induced by the (N, t_0)-based WF-net.
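The freeze is thus a plain substitution of the net's time function into the Lagrangean. The following minimal sympy sketch illustrates this substitution on a toy isotropic rheonomic metric of our own choosing (it is not one of the Randers examples below); its time function is the one induced by unit-speed radial rays ignited at the origin at t_0 = 0, so that the radial distance reached at time t is r = t + t^2/2.

```python
import sympy as sp

t, u, v, x, y = sp.symbols('t u v x y', real=True)

def freeze(F, t_fun):
    """Frozen metric: substitute the freezing time t(u, v) into F(t, u, v, x, y)."""
    return sp.simplify(F.subs(t, t_fun))

# Toy isotropic rheonomic metric F = |V|/(1+t). Its unit-speed radial rays
# from the origin satisfy r(t) = t + t**2/2, which inverts to t(u, v) below.
F = sp.sqrt(x**2 + y**2) / (1 + t)
t_fun = -1 + sp.sqrt(1 + 2*sp.sqrt(u**2 + v**2))

F_frozen = freeze(F, t_fun)
print(F_frozen)  # expected: sqrt(x**2 + y**2)/sqrt(2*sqrt(u**2 + v**2) + 1)
```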
We show that the rays of a WF net are geodesics of the frozen metric:

Let γ(s, t) denote an (N, t_0)-based WF-net in (M, L) with associated frozen metric F̄ in the domain 𝒰. Then the rays γ(s_0, t) of γ are geodesic curves of F̄, so that the given WF-net in (M, L) is also a WF-net in (M, F̄).

This follows immediately from the fact that the given WF-net of F is also a WF-net of F̄, since the Hamilton orthogonality conditions are locally the same with respect to both metrics in the net, which itself coordinates the 'freeze' of F to F̄.

We now obtain the same result from the respective first variation formulas, which is not surprising in view of their role in the establishment above of theorem <ref>. The freezing time function t = t(u, v) has partial derivatives which satisfy:

1 = (∂t/∂u) u̇(t) + (∂t/∂v) v̇(t).

Moreover, since the gradient of t in the Euclidean coordinate domain ℝ^2 is Euclidean-orthogonal to the frontal

η_{t_1} = {(u, v) ∈ 𝒰 | t(u, v) = t_1},

we have:

0 = (∂t/∂u) W^1(u_0, v_0) + (∂t/∂v) W^2(u_0, v_0)

for any vector W ∈ T_{(u_0, v_0)}𝒰 which is tangent to the level set η_{t_1} in ℝ^2. In consequence we have, for each vector W which is F-orthogonal to γ̇ (and therefore tangent to the t_1-level):

g_{jk} ([F^2]_t (∂t/∂u^l) g^{jl}) W^k = [F^2]_t (∂t/∂u^k) W^k = 0.

Since γ(s, t) is a Hamilton orthogonal net, equation (<ref>) means that the vector with coordinates g^{jl}[F^2]_t (∂t/∂u^l) is proportional to γ̇.

The chain rule implies:

[F̄^2]_{u^l} = [F^2]_{u^l} + [F^2]_t (∂t/∂u^l),

so that we also have:

[F̄^2]_{u^k x^l} x^k - [F̄^2]_{u^l} = [F^2]_{u^k x^l} x^k + [F^2]_{t x^l} (∂t/∂u^k) x^k - [F^2]_{u^l} - [F^2]_t (∂t/∂u^l)
= [F^2]_{u^k x^l} x^k + [F^2]_{t x^l} - [F^2]_{u^l} - [F^2]_t (∂t/∂u^l).

We insert this information into the expression γ̈^j + 2Ḡ^j for the time-independent metric F̄ and obtain:

γ̈^j + 2Ḡ^j = γ̈^j + (1/2)g^{jl}([F̄^2]_{u^k x^l} x^k - [F̄^2]_{u^l})
= γ̈^j + (1/2)g^{jl}([F^2]_{u^k x^l} x^k + [F^2]_{t x^l} - [F^2]_{u^l}) - (1/2)g^{jl}[F^2]_t (∂t/∂u^l)
= γ̈^j + 2G^j + N_0^j - (1/2)g^{jl}[F^2]_t (∂t/∂u^l),

which is the L-extremal 'curvature' of γ with respect to F – except for the last term, which we know is proportional to γ̇_t. By assumption on the WF-net, the vector ∑_j(γ̈^j + 2G^j + N_0^j)∂_j is proportional to γ̇_t, so that ∑_j(γ̈^j + 2Ḡ^j)∂_j is also proportional to γ̇_t. Hence γ is a pre-geodesic curve with respect to F̄. Since the length of γ̇_t is constrained to 1 by the WF net condition, the curve γ must be a geodesic in the metric F̄ – see e.g. <cit.>.

§ RHEONOMIC RANDERS METRICS IN ℝ^2

A rheonomic Randers metric in M = ℝ^2 is represented by its instantaneous elliptic indicatrix fields. The representing ellipse field ℐ_{t,p} = E_{(t,u,v)} is parametrized as follows in the tangent space basis {∂_u, ∂_v} at (u, v) in the parameter domain:

E_{(t,u,v)}(ψ) = R_{θ(t,u,v)} ((a(t,u,v)cos(ψ), b(t,u,v)sin(ψ))^T + (c_1(t,u,v), c_2(t,u,v))^T),

where R_{θ(t,u,v)} denotes the rotation in the tangent plane at (u, v) by the angle θ(t,u,v) in the clockwise direction, see figure <ref>:

R_{θ(t,u,v)} = [cos(θ(t,u,v)), sin(θ(t,u,v)); -sin(θ(t,u,v)), cos(θ(t,u,v))].

The choice of orientation of θ (turning in the clockwise direction) is customary in the field of wildfires, see <cit.>, <cit.>. The translation vector C(t,u,v) = (c_1(t,u,v), c_2(t,u,v)) must always be assumed to be sufficiently small, so that the resulting rotated and translated ellipse contains the origin of the tangent plane of the parameter domain at the point (u, v), i.e. so that the ellipse with this origin becomes a pointed oval in the sense of general Finsler indicatrices.
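For concreteness, the ellipse-field parametrization above is straightforward to sample numerically; the following numpy sketch does so (the function name and sampling resolution are our own choices). The example data a = 1, b = 2, C = (0, 0), θ = 0.25 correspond to the time-only Zermelo data of section <ref> below, evaluated at t = 0.

```python
import numpy as np

def indicatrix(a, b, c1, c2, theta, n=100):
    """Sample the instantaneous indicatrix E(psi) from Zermelo data
    (a, b, C=(c1, c2), theta); R_theta rotates clockwise, as in the text."""
    psi = np.linspace(0.0, 2.0*np.pi, n)
    # axis-aligned ellipse template, translated by C ...
    p = np.stack([a*np.cos(psi) + c1, b*np.sin(psi) + c2])
    # ... then rotated clockwise by theta
    R = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return R @ p   # 2 x n array of points in the tangent plane

E = indicatrix(a=1.0, b=2.0, c1=0.0, c2=0.0, theta=0.25)
```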
The time-dependent Finsler metric induced from time-dependent Zermelo data is now obtained as follows – see <cit.>, <cit.>: Let h denote the Riemannian metric with the following component matrix with respect to the standard basis {∂_u, ∂_v} in every T_{(u,v)}ℝ^2:

h = (1/(a^2 b^2)) [a^2 sin^2(θ) + b^2 cos^2(θ), (a^2 - b^2) sin(θ)cos(θ); (a^2 - b^2) sin(θ)cos(θ), a^2 cos^2(θ) + b^2 sin^2(θ)].

For elliptic template fields, the direct way to the Finsler metric from the Zermelo data is as follows: Suppose for example that we are given ellipse field data a(t,u,v), b(t,u,v), C(t,u,v) = (c_1(t,u,v), c_2(t,u,v)), and θ(t,u,v). Then the corresponding Finsler metric F is determined by the following expression, where V = (x, y) denotes any vector in T_{(u,v)}ℝ^2 – see e.g. <cit.> and <cit.>:

F = F_t = F(t, p, V) = F(t, u, v, x, y) = (√(λ·h(V,V) + h^2(V,C)) / λ) - (h(V,C) / λ),

where

λ = 1 - h(C,C) > 0.

§ RICHARDS' EQUATIONS

Richards observed in <cit.> that Huyghens' principle for a spread phenomenon (in casu the spread of wildfires) in the plane, together with a background indicatrix field of Zermelo type as discussed above, forces the frontals of the spread to satisfy the PDE system in theorem <ref> below.

We consider an (N, t_0)-based WF net γ(s, t) in ℝ^2 with a given ellipse template field for a Finsler metric F with Zermelo equivalent data a(t,u,v), b(t,u,v), C(t,u,v) = (c_1(t,u,v), c_2(t,u,v)), and θ(t,u,v). Suppose that the net satisfies Huyghens' principle as in definition <ref>. Then the net is determined by the following equations for the partial derivatives γ̇_s(s, t) = (u̇_s, v̇_s) and γ̇_t(s, t) = (u̇_t, v̇_t), where now, as indicated above, the control coefficients a, b, c_1, c_2, and θ are all allowed to depend on both time t and position (u, v):

u̇_t = (a^2 cos(θ)(u̇_s sin(θ) + v̇_s cos(θ)) - b^2 sin(θ)(u̇_s cos(θ) - v̇_s sin(θ))) / √(a^2(u̇_s sin(θ) + v̇_s cos(θ))^2 + b^2(u̇_s cos(θ) - v̇_s sin(θ))^2) + c_1 cos(θ) + c_2 sin(θ), and

v̇_t = (-a^2 sin(θ)(u̇_s sin(θ) + v̇_s cos(θ)) - b^2 cos(θ)(u̇_s cos(θ) - v̇_s sin(θ))) / √(a^2(u̇_s sin(θ) + v̇_s cos(θ))^2 + b^2(u̇_s cos(θ) - v̇_s sin(θ))^2) - c_1 sin(θ) + c_2 cos(θ).

For the very special and rare cases where the control coefficients a, b, c_1, c_2, and θ are assumed to depend only on time t and not on the position (u, v), Richards was able to find the following explicit analytic solution to the equations in theorem <ref> – see <cit.>:

For a rheonomic 2D Randers metric in ℝ^2 with no spatial dependence, the solutions to the equations in theorem <ref> are given by the following expressions for γ(s, t) = (u(s,t), v(s,t)) with (u(0), v(0)) = (u_0, v_0):

u(s, t) = u_0 + ∫_0^t f(r) dr + ∫_0^t (c_2(r)sin(θ(r)) + c_1(r)cos(θ(r))) dr,

where

f(r) = (a^2(r)cos(θ(r))cos(θ(r)+s) + b^2(r)sin(θ(r))sin(θ(r)+s)) / √(a^2(r)cos^2(θ(r)+s) + b^2(r)sin^2(θ(r)+s)),

and

v(s, t) = v_0 + ∫_0^t g(r) dr + ∫_0^t (c_2(r)cos(θ(r)) - c_1(r)sin(θ(r))) dr,

where

g(r) = (-a^2(r)sin(θ(r))cos(θ(r)+s) + b^2(r)cos(θ(r))sin(θ(r)+s)) / √(a^2(r)cos^2(θ(r)+s) + b^2(r)sin^2(θ(r)+s)).

The solutions presented in theorem <ref> – and indirectly in theorem <ref> – can be shown directly to satisfy the unit F-speed condition and the Hamilton orthogonality condition needed to form (p, 0)-based WF nets with p = (u_0, v_0).
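Since the closed-form solution only involves one-dimensional integrals in the time variable r, it is immediate to evaluate numerically. The sketch below does so with scipy quadrature (the function name and driver are ours); the Zermelo data passed in the usage lines are those of the time-only example in section <ref> below: a = 1, b = 2 + t/5, C = (0, 0), θ = (t+5)/20.

```python
import numpy as np
from scipy.integrate import quad

def richards_ray(s, t, a, b, c1, c2, theta, p0=(0.0, 0.0)):
    """Evaluate Richards' explicit ray (u(s,t), v(s,t)) by quadrature;
    a, b, c1, c2 and theta are callables of time only."""
    den = lambda r: np.sqrt(a(r)**2*np.cos(theta(r) + s)**2
                            + b(r)**2*np.sin(theta(r) + s)**2)
    f = lambda r: (a(r)**2*np.cos(theta(r))*np.cos(theta(r) + s)
                   + b(r)**2*np.sin(theta(r))*np.sin(theta(r) + s)) / den(r)
    g = lambda r: (-a(r)**2*np.sin(theta(r))*np.cos(theta(r) + s)
                   + b(r)**2*np.cos(theta(r))*np.sin(theta(r) + s)) / den(r)
    du = lambda r: c2(r)*np.sin(theta(r)) + c1(r)*np.cos(theta(r))
    dv = lambda r: c2(r)*np.cos(theta(r)) - c1(r)*np.sin(theta(r))
    u = p0[0] + quad(f, 0.0, t)[0] + quad(du, 0.0, t)[0]
    v = p0[1] + quad(g, 0.0, t)[0] + quad(dv, 0.0, t)[0]
    return u, v

# one ray of a (p, 0)-based net for the time-only data of section <ref>:
u16, v16 = richards_ray(s=0.3, t=16.0,
                        a=lambda r: 1.0, b=lambda r: 2.0 + r/5.0,
                        c1=lambda r: 0.0, c2=lambda r: 0.0,
                        theta=lambda r: (r + 5.0)/20.0)
```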
We illustrate the construction of a frozen metric in a most simple example using the following rheonomic metric: In 𝒰 = ℝ^2 we consider the time-dependent Riemannian metric for all t ≥ 0:

F(t, u, v, x, y) = F(t, u^1, u^2, x^1, x^2) = √(x^2 + (y/2)^2) / (1+t).

The corresponding Zermelo data are clearly the following:

a(t,u,v) = 1+t,
b(t,u,v) = 2+2t,
C(t,u,v) = (0, 0),
θ(t,u,v) = 0.

For the rheonomic metric F we then have the following ingredients for the energy extremals and for the WF-ray equations:

[F^2]_{u^l} = 0, [F^2]_{u^k x^l} = 0,
[F^2]_t = -(4x^2 + y^2)/(2(1+t)^3),
[F^2]_{t x^1} = -4x/(1+t)^3, [F^2]_{t x^2} = -y/(1+t)^3,
g_{11} = 1/(1+t)^2, g_{22} = 1/(4(1+t)^2), g_{12} = g_{21} = 0,
G^i = 0,
N_0^1(t,u,v,x,y) = -2x/(1+t), N_0^2(t,u,v,x,y) = -2y/(1+t).

The energy extremal equations are thence:

ü(t) + N_0^1(t, u(t), v(t), u̇(t), v̇(t)) = 0,
v̈(t) + N_0^2(t, u(t), v(t), u̇(t), v̇(t)) = 0,

or, equivalently:

ü - 2u̇/(1+t) = 0,
v̈ - 2v̇/(1+t) = 0.

The solutions to these full energy extremal equations issuing from (u_0, v_0) = (0, 0) at time t = 0, with initial velocity the F_0-normalization of the (generally non-F_0-unit) direction vector (cos(s), sin(s)), are the following parametrized radial half lines:

u(s, t) = ((2/3)t^3 + 2t^2 + 2t) cos(s)/√(1 + 3cos^2(s)),
v(s, t) = ((2/3)t^3 + 2t^2 + 2t) sin(s)/√(1 + 3cos^2(s)).

As expected, the rays of this solution do not have constant unit speed:

F(t, u, v, u̇, v̇) = 1+t.

The WF net rheonomic equations (<ref>), however, give the correct solutions with F(t, u, v, u̇, v̇) = 1 for all s and t:

u(s, t) = (t^2 + 2t) cos(s)/√(1 + 3cos^2(s)),
v(s, t) = (t^2 + 2t) sin(s)/√(1 + 3cos^2(s)).

From this solution we can now extract the time function:

t = t(u, v) = -1 + √(1 + √(4u^2 + v^2)),

and insert it into the rheonomic Finsler metric to obtain the corresponding frozen Finsler metric F̄ as follows:

F̄(u, v, x, y) = F(t(u,v), u, v, x, y) = √(4x^2 + y^2) / (2√(1 + √(4u^2 + v^2))).

For this frozen metric we have correspondingly, for its geodesic equations, the spray coefficients:

Ḡ^1(u, v, x, y) = (u(-4x^2 + y^2) - 2vxy) / (4(1 + √(4u^2 + v^2))√(4u^2 + v^2)),
Ḡ^2(u, v, x, y) = (v(4x^2 - y^2) - 8uxy) / (4(1 + √(4u^2 + v^2))√(4u^2 + v^2)).

The 'frozen' geodesics for F̄ satisfy the equations:

ü(t) + 2Ḡ^1(u(t), v(t), u̇(t), v̇(t)) = 0,
v̈(t) + 2Ḡ^2(u(t), v(t), u̇(t), v̇(t)) = 0.

These equations are solved by the same rays as presented in equation (<ref>). They are also the same rays as those obtained from Richards' recipe in theorem <ref>, which are parametrized as follows:

ũ(s̃, t) = (t^2/2 + t) cos(s̃)/√(4 - 3cos^2(s̃)),
ṽ(s̃, t) = (2t^2 + 4t) sin(s̃)/√(4 - 3cos^2(s̃)).

The only difference is due to a different choice of parametrization of the directions from p. Indeed, if we transform s̃ to s via the consistent equations

cos(s̃)/√(4 - 3cos^2(s̃)) = 2cos(s)/√(1 + 3cos^2(s)),
2sin(s̃)/√(4 - 3cos^2(s̃)) = sin(s)/√(1 + 3cos^2(s)),

then the two WF net solutions (<ref>) and (<ref>) do agree.

Now, in comparison, we may consider instead the WF net geodesics obtained by starting at (0, 0) at the later time t_0 = 1 in the same rheonomic metric F as above. A calculation along the same lines as above now gives the new time function:

t = t(u, v) = -1 + √(4 + √(4u^2 + v^2)).

Insertion of this new time function into the given rheonomic metric F now gives the new frozen metric:

F̄(u, v, x, y) = √(4x^2 + y^2) / (2√(4 + √(4u^2 + v^2)))

with the following frozen geodesics – with (u(1), v(1)) = (0, 0):

γ(s, t) = (u(s,t), v(s,t)) = ((t^2 + 2t - 3)cos(s)/√(1 + 3cos^2(s)), (t^2 + 2t - 3)sin(s)/√(1 + 3cos^2(s))).

The geodesic curves of the frozen metric are the same straight line curves as before – only now with a reparametrization in the t-direction.
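As a quick sanity check, the unit rheonomic F-speed of the frozen geodesics just displayed can be verified symbolically; in the following sympy fragment (the check itself is ours, while the metric and rays are copied from the example), the speed should reduce to 1.

```python
import sympy as sp

s = sp.symbols('s', real=True)
t = sp.symbols('t', positive=True)   # t > 0 keeps the square roots unsigned

den = sp.sqrt(1 + 3*sp.cos(s)**2)
u = (t**2 + 2*t - 3)*sp.cos(s)/den
v = (t**2 + 2*t - 3)*sp.sin(s)/den

# rheonomic speed F(t, u, v, udot, vdot) along the ray
speed = sp.sqrt(sp.diff(u, t)**2 + (sp.diff(v, t)/2)**2)/(1 + t)
print(sp.simplify(speed))   # expected: 1
```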
Observe that the new parametrization t^2 + 2t - 3 is not simply obtained by inserting t-1 in place of t in the old parametrization t^2 + 2t. The frozen metric is clearly different as well – with a 4 replacing the 1 under the square root in the denominator.

Two somewhat more complicated examples of rheonomic metrics in ℝ^2 are considered in the next example:

We present two simple cases of time-dependent Randers metrics in ℝ^2. Their respective time-dependent indicatrix fields are indicated in figures <ref> and <ref>. The Zermelo data for figure <ref> are as follows:

a(t,u,v) = 1,
b(t,u,v) = 2 + t/5,
C(t,u,v) = (0, 0),
θ(t,u,v) = ((t+5) + u - v)/20.

The Zermelo data for figure <ref>, which depend on time only, are the same as for figure <ref>, except that the u and v dependence has been removed (from the rotation angle θ), so that:

a(t,u,v) = 1,
b(t,u,v) = 2 + t/5,
C(t,u,v) = (0, 0),
θ(t,u,v) = (t+5)/20.

For these two relatively simple fields we obtain – via numerical solutions – the (p, 0)-based WF nets displayed in figures <ref> and <ref>, respectively, with ignition at the point p = (0, 0) at time t = 0. The displayed frontals are then obtained at the time values t_i = 0.2·16·i, where i = 1, ⋯, 5, so that the outermost frontal is the time-level set η_{t_5} = η_{16}. Figure <ref> also shows the two sets of Huyghens droplets of duration 0.2·16, ignited from points on the two respective level sets η_{t_4} at the corresponding time t_4. In the two cases under consideration the Huyghens droplets clearly show a tendency to envelope the outer frontal η_{16} – cf. remark <ref>.

References

[Anastasiei1994] M. Anastasiei. The geometry of time-dependent Lagrangians. Mathematical and Computer Modelling, 20(4-5):67–81, 1994.
[Antonelli1991] P. L. Antonelli. Finsler Volterra-Hamilton systems in ecology. Tensor (N.S.), 50(1):22–31, 1991.
[AIM] P. L. Antonelli, R. S. Ingarden, and M. Matsumoto. The theory of sprays and Finsler spaces with applications in physics and biology, volume 58 of Fundamental Theories of Physics. Kluwer Academic Publishers Group, Dordrecht, 1993.
[AZ1] P. L. Antonelli and T. J. Zastawniak. Introduction to diffusion on Finsler manifolds. Math. Comput. Modelling, 20(4-5):109–116, 1994. Lagrange geometry, Finsler spaces and noise applied in biology and physics.
[Arnold1989] V. I. Arnold. Mathematical methods of classical mechanics, volume 60 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1989. Translated from the Russian by K. Vogtmann and A. Weinstein.
[BCS] D. Bao, S.-S. Chern, and Z. Shen. An introduction to Riemann-Finsler geometry, volume 200 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2000.
[BRS] David Bao, Colleen Robles, and Zhongmin Shen. Zermelo navigation on Riemannian manifolds. J. Differential Geom., 66(3):377–435, 2004.
[frigioiu2008a] Camelia Frigioiu. On the rheonomic Finslerian mechanical systems. Acta Math. Acad. Paedagogicae Nyiregyhaziensis, 24(1):65–74, 2008.
[Markvorsen2016] Steen Markvorsen. A Finsler geodesic spray paradigm for wildfire spread modelling. Nonlinear Anal. Real World Appl., 28:208–228, 2016.
[munteanu2003a] F. Munteanu. On the semispray of nonlinear connections in rheonomic Lagrange geometry. Finsler and Lagrange Geometries, Proceedings, pages 129–137, 2003.
[Richards1990] Gwynfor D. Richards. Elliptical growth model of forest fire fronts and its numerical solution. International Journal for Numerical Methods in Engineering, 30(6):1163–1179, 1990.
[Richards1993a] Gwynfor D. Richards.
Properties of elliptical wildfire growth for time dependent fuel and meteorological conditions. Combustion Science and Technology, 92(1-3):145–171, 1993.
[ShenBook2001] Zhongmin Shen. Lectures on Finsler geometry. World Scientific Publishing Co., Singapore, 2001.
[trumper1983a] M. Trumper. Lagrangian mechanics and the geometry of configuration spacetime. Annals of Physics, 149(1):203–233, 1983.
Part-to-whole Registration of Histology and MRI using Shape Elements

Jonas Pichat^1, Juan Eugenio Iglesias^1, Sotiris Nousias^1, Tarek Yousry^2, Sébastien Ourselin^{1,3}, Marc Modat^1

^1 Translational Imaging Group, CMIC, University College London, UK
^2 Department of Brain Repair & Rehabilitation, UCL Institute of Neurology, UK
^3 Wellcome / EPSRC Centre for Interventional and Surgical Sciences, UCL, UK
[email protected]

Image registration between histology and magnetic resonance imaging (MRI) is a challenging task due to differences in structural content and contrast. Too thick and wide specimens cannot be processed all at once and must be cut into smaller pieces. This dramatically increases the complexity of the problem, since each piece should be individually and manually pre-aligned. To the best of our knowledge, no automatic method can reliably locate such a piece of tissue within its respective whole in the MRI slice, and align it without any prior information. We propose here a novel automatic approach to the joint problem of multimodal registration between histology and MRI, when only a fraction of tissue is available from histology. The approach relies on the representation of images using their level lines so as to reach contrast invariance. Shape elements obtained via the extraction of bitangents are encoded in a projective-invariant manner, which permits the identification of common pieces of curves between two images. We evaluated the approach on human brain histology and compared the resulting alignments against manually annotated ground truths. Considering the complexity of the brain folding patterns, preliminary results are promising and suggest the use of characteristic and meaningful shape elements for improved robustness and efficiency.

§ INTRODUCTION

Histology is concerned with the various methods of microscopic examination of a thin tissue section. Cutting through a specimen permits the investigation of its internal topography and the observation of complex differentiated structures through staining. MRI constitutes an invaluable resource for routine, accurate, non-invasive study of biological structures in three dimensions. Relative to histology, MRI avoids the irreversible damage and distortions induced by processing, cutting, mounting and staining during the histological preparation. However, resolution-wise, it is outperformed by histology.

One of the many benefits of combining histology and MRI is to confirm non-invasive measures with baseline information on the actual properties of tissues <cit.>, by accessing simultaneously the chemical and cellular information of the former and the rich structural information of the latter. Such a combination relies on image registration, and this can be achieved using iconic (intensity-based) <cit.> or geometric (feature-based) <cit.> approaches.
Unfortunately, the extraction and manipulation of meaningful information from histology and clinical images is a very complicated task, because each modality has, by nature, its own features and there does not always exist a mapping between their constituents: local intensity mappings are non-linear and images exhibit different structures—which is also a reason why intensity-based methods tend to get trapped in local optima. Hence, classical feature description methods, such as SIFT <cit.>, fail to match features <cit.>. Incidentally, manual extraction of landmarks may remain the safest way to establish correspondences across modalities <cit.>.

Besides, it is common for histopathology laboratories to receive tissue samples that are (P1) too wide or (P2) too thick to be processed as they are. The sample is therefore cut into separate sub-blocks, each of which is processed individually. If no scan of each sub-block is available (unlike in <cit.> for example), one must keep track of which part of the sample each sub-block corresponds to, and use that knowledge to initialise the registration of histological slices with the clinical image, or manually align them. As for problem (P2), attempts at using similarity measures have been made to initialise registrations, but those are ambiguous and rely on absolute measures rather than relative ones <cit.>. On that matter, it was shown in <cit.> that direct comparison of images from different modalities is non-trivial and fails to reliably determine slice correspondences. To the best of our knowledge, no automatic method to address (P1) (see Fig. <ref>) has been proposed in the literature.

§.§ Related work

Regarding geometric approaches, one possible strategy to align histology and clinical imaging is to simplify the images into their contours, so as to come down to a monomodal registration problem and use the shape information provided by the external boundaries. In <cit.>, contours from both histology and slices from a rat brain atlas were extracted via thresholding and represented using B-splines. Then, they were described by means of sets of affine invariants constructed from the sequence of area patches bounded by the contour and the line connecting two consecutive inflections. In <cit.>, Curvature Scale Space <cit.> was used for the registration of whole-slide images of histological sections in order to represent shape (the tissue boundary) at various scales. In <cit.>, curvature maps at different scales were used to match boundaries of full brain MRI extracted via an active contour algorithm. The main weaknesses of active contours are the number of parameters and the sensitivity to initialisation.

An alternative to using a single contour was proposed by morphologists, observing that level lines (the boundaries of level sets) provide a complete, contrast-invariant representation of images. Furthermore, level lines fit the boundaries of structures and sub-structures of objects very well. Then, given two images, the problem is to retrieve all the level lines that are common to both images; this is however feasible only if the curves have been appropriately simplified (smoothed) <cit.> (p.95). Like in <cit.>, smooth pieces of level lines (the shape elements <cit.>) can be encoded to represent shape locally in, e.g., an affine-invariant manner <cit.>. The comparison of the resulting canonical curves then permits the identification of portions of level lines common to two images.
Problem (P1) being multimodal and fractional by nature, it seems natural to formulate a solution that involves contrast- and geometric-invariance, as well as locality. Here, we present a novel approach to (P1) based on: (i) representing both histology and MRI images using their level lines <cit.>. This allows us to reach contrast invariance and to consider implicitly several structural layers of the images—as opposed to relying solely on the outer boundaries of tissues. From there, characteristic shape elements can be extracted locally along the level lines via their bitangents (<ref>). (ii) Representing those elements in a projective-invariant manner (<ref>), as introduced by Rothwell in <cit.>, so as to be robust to some of the non-linear deformations that tissues undergo during the histological process. Combining the two procedures permits the partial matching of shape elements regardless of the orientation of the tissue on glass slides. Registration is then obtained as a result of shape recognition (<ref>).

§.§ Contributions

* We address the joint problem of multimodal registration between a fraction of histology and its whole in an MRI slice as a result of shape recognition using portions of level lines.
* We introduce an efficient refinement of bitangents via ellipses.
* We extend Rothwell's framework to bitangents crossing the level lines and compare the resulting canonical curves using the Fréchet distance.

§ PREPROCESSING

We used two standard preprocessing steps: first, smoothing, in order to simplify the image, preserve the shape of the tissue, remove unnecessary details and obtain smooth level lines (Fig. <ref>a); then, intensity correction, in order to account for inhomogeneities of the field in MR images (Fig. <ref>c) or illumination in histology.

Smoothing is based on the Affine Morphological Scale Space (AMSS) <cit.>. It is governed by the partial differential equation:

∂u/∂t = |Du| κ(u)^{1/3},

where u is the image, |Du| is the gradient magnitude of the image, κ(u) is the curvature of the level line and t is a scale parameter. AMSS smoothes homogeneous regions but enhances tissue boundaries. The sequence of updates necessary to its computation follows that presented in <cit.> (equations of section 2.3).

Image intensity correction relies on surface fitting <cit.>: the low-frequency bias of an image can be estimated using an adequate basis of smooth and orthogonal polynomial functions. It then comes down to solving the least squares problem A𝖼 = 𝖻, where 𝖻 ∈ ℝ^N is the vector of all the pixel values, and 𝖼 the coefficients of one linear combination of basis functions. A ∈ ℝ^{N×(n+1)(m+1)} is the matrix of the system: its k-th row is the vectorised outer product Φ(x_k)⊗Φ(y_k), with Φ(x_k) = [P_0(x_k), P_1(x_k), …, P_m(x_k)]^T and Φ(y_k) = [P_0(y_k), P_1(y_k), …, P_n(y_k)] for pixel k ≤ N. P_i(.) denotes a certain 1D polynomial of degree i. Degrees m and n are usually taken small so as not to overfit the image intensities. The left inverse of A (it is full rank) gives the bias image, and correction is straightforward (Fig. <ref>b).
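A minimal numpy sketch of this surface-fitting step is given below, assuming a Legendre polynomial basis (the text only requires the basis to be smooth and orthogonal) and an additive correction; the function name and degree defaults are ours.

```python
import numpy as np
from numpy.polynomial import legendre as L

def correct_bias(img, m=3, n=3):
    """Fit a low-degree 2D Legendre surface to the intensities (the least
    squares problem A c = b above) and subtract it from the image."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # map pixel coordinates to [-1, 1], where the Legendre basis is orthogonal
    x = 2.0*xs.ravel()/(w - 1) - 1.0
    y = 2.0*ys.ravel()/(h - 1) - 1.0
    A = L.legvander2d(x, y, [m, n])        # k-th row: products P_i(x_k)P_j(y_k)
    c, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    bias = (A @ c).reshape(h, w)
    return img - bias + bias.mean(), bias  # keep the mean intensity level
```

§ FINDING BITANGENTS

Characteristic shape elements are extracted by means of bitangents of level lines. Bitangents are identified via the tangent space (<ref>), and each one is refined using two ellipses fitted in the neighbourhood of the estimated bitangent points (<ref>).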
Since two ellipses have at most four bitangents (<ref>), one needs to be singled out which corresponds to the refined bitangent of the level line (<ref>). In the following, a bitangent point is one of the two points where a bitangent is in contact with the level line. The length of a bitangent is defined as the number of inflections of the portion of level line that it covers. As a result, a short bitangent refers to a bitangent that covers portions with exactly two inflections, and a long one, more than two.

§.§ Dual curve

Let ℒ be a Jordan curve (level lines are plane simple curves, though closedness is not guaranteed for all of them in practice). Duality is defined as the polarity that sends any point to a line and vice versa. The image of a point with parameter t = t_0 is the line:

ux(t_0) + vy(t_0) + 1 = 0.

If the parameter t covers the whole range of definition, the resulting set of straight lines is the envelope of ℒ: the dual ℒ^* of ℒ is the set of its tangent lines. A parametrisation of ℒ^* in homogeneous coordinates can be obtained from (<ref>) by differentiation w.r.t. the parameter t and elimination. This yields

u = -ẏ(t)/(ẋ(t)y(t) - ẏ(t)x(t)) and v = ẋ(t)/(ẋ(t)y(t) - ẏ(t)x(t)),

with x, y ≠ 0 (dot notation is used for differentiation).

Dual curves feature the following properties: an inflection of ℒ maps onto a cusp of the dual, and two points sharing a common tangent map onto a double point of the dual curve. More generally, a set of n points sharing a common tangent line maps onto a point of multiplicity n of the dual curve. Finding the bitangents of ℒ is therefore equivalent to finding self-intersections of the polygonal curve ℒ^* (Fig. <ref>). To that end, we used the Bentley-Ottmann algorithm <cit.>, which is a line sweep algorithm that reports all intersections among line segments in the plane.

§.§ Refining bitangent locations

The refinement of bitangents is preferable: since the slopes of tangents vary substantially in portions of high curvature, the lengths of the segments of the dual curve increase on portions where a self-intersection may happen. The evaluation of that double point thus degrades, which directly affects the estimation of bitangents.

§.§.§ Ellipse fitting

In order to cope with bitangent errors, we propose to refine their locations by fitting ellipses <cit.> around the estimated bitangent points. This allows skipping the rotation part prior to the quadratic fitting in <cit.>. Beforehand, bitangents lying on almost straight edges of the level lines are removed by looking at the residual of a line fit on the portions bounded by the two bitangent points. This is intended to avoid the degenerate case of fitting an ellipse to a nearly straight line.

Let F be a general conic. It is defined as the set of points such that:

F(𝖺, 𝐱) = 𝖺·𝐱 = ax^2 + bxy + cy^2 + dx + ey + f = 0,

where 𝖺 = [a b c d e f]^T and 𝐱 = [x^2 xy y^2 x y 1]^T. The constrained least squares problem we wish to solve here is: min_𝖺 𝖺^T S 𝖺 subject to 𝖺^T C 𝖺 = 1, where S = D^T D is the scatter matrix, D is the design matrix made of the N points to be fitted, and C is the constraint matrix, which expresses the constraint 4ac - b^2 = 1 on the conic parameters to make it an ellipse. This translates into a [6×6] matrix where C_22 = -1 and C_31 = C_13 = 2, the rest being zeros. This yields the generalised eigenvalue problem (GEP):

S𝖺 = λC𝖺.

The ellipse coefficients 𝖺 are the elements of the eigenvector that corresponds to the only positive eigenvalue.
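This direct ellipse fit is a few lines of numpy; the sketch below (our own port of the classical formulation, using the inverse-matrix form of the GEP) returns the six conic coefficients in the ordering [x^2, xy, y^2, x, y, 1] used above.

```python
import numpy as np

def fit_ellipse(pts):
    """Direct least-squares ellipse fit: solve S a = lambda C a under the
    constraint 4ac - b^2 = 1, with a = [a, b, c, d, e, f] as above."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])  # design matrix
    S = D.T @ D                                                  # scatter matrix
    C = np.zeros((6, 6))
    C[0, 2] = C[2, 0] = 2.0   # the 4ac part of the constraint
    C[1, 1] = -1.0            # the -b^2 part
    # S a = lambda C a  <=>  (S^-1 C) a = (1/lambda) a; the ellipse is the
    # eigenvector associated with the unique positive eigenvalue
    w, V = np.linalg.eig(np.linalg.solve(S, C))
    k = np.argmax(np.isreal(w) & (w.real > 0))
    return np.real(V[:, k])
```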
Although the impact of S being nearly singular and C being singular on the stability of the eigenvalue computation is discussed in <cit.>, we did not encounter any problem in our experiments.

§.§.§ Bitangents of ellipses

The main goal of this section is to compute the bitangents of two ellipses efficiently. This is achieved by transforming a system of two polynomial equations into a polynomial eigenvalue problem and, for further performance, into a generalised eigenvalue problem.

Let us consider two ellipses, E_1(𝖺_1, 𝐱) and E_2(𝖺_2, 𝐱), defined by bivariate quadratic polynomials like in (<ref>). The tangent line T: y = ux + v to, say, E_1 is the line that intersects E_1 at exactly one point. By substitution, one gets a degree 2 polynomial in x, which has a single root if and only if its discriminant Δ(𝖺_1, 𝗎) = 0. When considering the tangent to both ellipses, this gives a system of n = 2 polynomial equations in the unknowns u, v:

(s1)
α_11 u^2 + α_12 v^2 + α_13 uv + α_14 u + α_15 v + α_16 = 0,
α_21 u^2 + α_22 v^2 + α_23 uv + α_24 u + α_25 v + α_26 = 0,

where, for i = {1, 2}:

α_i1 = e_i^2 - 4c_i f_i,
α_i2 = b_i^2 - 4a_i c_i,
α_i3 = 4c_i d_i - 2b_i e_i,
α_i4 = 2d_i e_i - 4b_i f_i,
α_i5 = 2b_i d_i - 4a_i e_i,
α_i6 = d_i^2 - 4a_i f_i.

To start with, u is hidden in the coefficient field; (s1) becomes a system of two equations f_1(u,v) and f_2(u,v) in one variable v, with coefficients from ℝ[u], i.e. f_1, f_2 ∈ (ℝ[u])[v]. The degrees of these two equations are d_1 = d_2 = 2. Homogenising (s1) using a new variable w gives (s2), a system of two homogeneous polynomial equations F_1(v, w) and F_2(v, w) in the two unknowns v, w:

(s2)
α_12 v^2 + (α_13 u + α_15) vw + (α_11 u^2 + α_14 u + α_16) w^2 = 0,
α_22 v^2 + (α_23 u + α_25) vw + (α_21 u^2 + α_24 u + α_26) w^2 = 0.

The total degree d = ∑_{i=1}^n (d_i - 1) + 1 equals 3. This gives the set 𝒮 of binom(n+d-1, d) = 4 possible monomials ω^δ = v^{δ_2} w^{δ_3} in the variables v, w of total degree d, i.e. such that |δ| = ∑_{i=2}^3 δ_i = 3: 𝒮 = {v^3, v^2w, vw^2, w^3}. The set 𝒮 can be partitioned into two subsets according to a modified Macaulay-based method <cit.>:

𝒮_1 = {ω^δ : |δ| = 3, v^{d_1} | ω^δ},
𝒮_2 = {ω^δ : |δ| = 3, w^{d_2} | ω^δ}.

In other words, 𝒮_1 (resp. 𝒮_2) is the set of monomials of total degree 3 that can be divided by v^2 (resp. w^2). This gives 𝒮_1 = {v^3, v^2w} and 𝒮_2 = {vw^2, w^3}, from which the extended set of four polynomial equations vF_1 = 0, wF_1 = 0, vF_2 = 0 and wF_2 = 0 can be derived. After dehomogenisation (by setting w = 1), the extended system can be rewritten as a polynomial eigenvalue problem (PEP):

𝖢(u)𝗏 = 0,

where 𝗏 = [v^3 v^2 v 1]^T and

𝖢(u) =
[ α_12, α_13 u + α_15, α_11 u^2 + α_14 u + α_16, 0;
  0, α_12, α_13 u + α_15, α_11 u^2 + α_14 u + α_16;
  α_22, α_23 u + α_25, α_21 u^2 + α_24 u + α_26, 0;
  0, α_22, α_23 u + α_25, α_21 u^2 + α_24 u + α_26 ].

Non-trivial solutions to (<ref>) are the roots of det 𝖢(u), which gives up to 4 real solutions for u. For each one of them, e.g. u_1, the corresponding singular value decomposition has the form 𝖢(u_1) = 𝖴𝖲𝖵^T, where the solution vector [𝗏_1 𝗏_2 𝗏_3 𝗏_4]^T is the column of 𝖵 that corresponds to the smallest singular value. The particular solution v_1 associated with u_1 is e.g.
𝗏_3/𝗏_4, meaning that one bitangent is parametrised by T_1: y = u_1 x + v_1. For the sake of completeness, the PEP (<ref>) can be further transformed into a GEP by first rewriting it as:

(𝖢_2 u^2 + 𝖢_1 u + 𝖢_0) 𝗏 = 0,

with

𝖢_2 = [0, 0, α_11, 0; 0, 0, 0, α_11; 0, 0, α_21, 0; 0, 0, 0, α_21],
𝖢_1 = [0, α_13, α_14, 0; 0, 0, α_13, α_14; 0, α_23, α_24, 0; 0, 0, α_23, α_24],
𝖢_0 = [α_12, α_15, α_16, 0; 0, α_12, α_15, α_16; α_22, α_25, α_26, 0; 0, α_22, α_25, α_26],

which is equivalent to the GEP:

𝖠𝗒 = u𝖡𝗒,

with

𝖠 = [0_4, I_4; -𝖢_0, -𝖢_1] and 𝖡 = [I_4, 0_4; 0_4, 𝖢_2],

0_4 and I_4 being the [4×4] zero and identity matrices, and 𝗒 = [𝗏; u𝗏] = [𝗒_1 𝗒_2 … 𝗒_8]^T. A particular solution v_1 is e.g. the quotient 𝗒_3/𝗒_4 (or equivalently √(𝗒_2/𝗒_4), up to sign) from the eigenvector associated with the eigenvalue u_1. Note that the resolution of (<ref>) is two orders of magnitude faster compared to (<ref>) using linear algebra packages.

Lastly, when the two ellipses E_1 and E_2 intersect in two points, two out of the four eigenvalues obtained for u are complex. These correspond to the two internal bitangents: in that case, the ellipses have only two external bitangents, associated with the other two real eigenvalues. It is also worth noting that, when they exist, internal bitangents are associated with the extremal (real) eigenvalues.

§.§.§ Selecting one bitangent

In this section, we identify the only bitangent of E_1 and E_2 that is also a bitangent of ℒ (Fig. <ref>)—referred to as the usable bitangent. Let us consider that: (i) bitangents are directed from E_1 to E_2, (ii) E_1 is oriented positively and (iii) Δ is its left-most vertical tangent. Bitangents of E_1 can be cyclically ordered by considering independently the tangents below (in blue in Fig. <ref>, left) and above (in red) it, and sorting them by decreasing y-intercept on the ellipse's left-most tangent Δ. This holds for cases where an ellipse lies above (resp. below) all of the bitangents. Lemma 1 in <cit.> states that the resulting cyclic order of the bitangent directions is 𝒞: [LL, LR, RL, RR] (L and R stand for left and right and refer to the locations of an ellipse relative to a bitangent). Four possible cases arise: (c1) E_2 stands to the right of E_1, (c2) is above E_1, intersecting Δ, (c3) is to the left of E_1, and (c4) is below E_1, intersecting Δ. For each case, the first bitangent encountered starting from Δ, counter-clockwise, has type LL, RR, RL and LR respectively; the next up to three bitangents for each case have their types deduced from the positive cyclic order 𝒞.

Now, in order to select the usable bitangent, one has to rely on the geometry of the level line ℒ. Let us define the curvature vector k at every point along ℒ as the vector pointing toward the centre of the osculating circle: k = κn = ⟨k_x, k_y⟩, where κ is the scalar curvature and n is the normal (it is collinear to the gradient of the image along ℒ and directed toward the inside of the clockwise-oriented closed curve here). The orientation of k allows differentiating otherwise ambiguous situations; for example, two pairs of ellipses (E_1, E_2) and (E_1, E_3), all of them fitting portions with the same curvature and satisfying the configuration of case (c1), can be associated with a different type of usable bitangent, RR and RL respectively. This happens when k_1 and k_3 have opposite sense, while k_1 and k_2 have the same. In the following, positiveness is defined for (c1) and (c3) as k_y > 0 and as k_x > 0 for (c2) and (c4), and is denoted with the superscript (+). From there we define four patterns: (p1) (k_1^(+), k_2^(+)), (p2) (k_1^(+), k_2^(-)), (p3) (k_1^(-), k_2^(+)) and (p4) (k_1^(-), k_2^(-)).
In cases (c1) and (c2), they correspond to the usable bitangent types LL, LR, RL, RR respectively. Conversely, in cases (c3) and (c4), they correspond to the types RR, RL, LR, LL respectively. Since there is a one-to-one correspondence between the four bitangents and the four types, it only requires identifying one of four patterns (p) and one of four cases (c) to pick the usable bitangent parameters.

We also extend the mapping to intersecting ellipses (Fig. <ref>, middle) by observing that the cyclic order of bitangents is of the form [T_e, T_i, T_i, T_e] (subscripts e and i stand for external and internal). Since only external bitangents exist in the case where E_1 and E_2 intersect in two points (<ref>), we are left with the cyclic order [T_e, _, _, T_e].

Bitangent points are straightforward to obtain for E_1 and E_2 by substitution of the tangent equation into the ellipse equations. Finally, we select the point of ℒ that is the closest to an ellipse bitangent point. Note that once all bitangents are refined, some bitangent points may collapse to similar locations. In order to reduce ineffective redundancy, only one bitangent out of those that have their end points close to each other is kept <cit.>.

§ PROJECTIVE SHAPE REPRESENTATION

We now have a set of refined bitangents. Let us consider one bitangent and its endpoints b_1 and b_2. In order to encode the shape of a portion of (oriented) level line ℒ_r = ℒ[b_1, b_2] (assuming b_1 comes before b_2) in a projective-invariant manner (as opposed to the affine-invariant one <cit.> used in <cit.>), two more points are required: the cast points c_1 and c_2. The four points b_1, c_1, c_2, b_2, invariant under projective transformation, form the vertices of a polygon—the level line frame ℱ_l—and are mapped to the unit square vertices, ℱ_c (the canonical frame) <cit.>. The resulting projection is applied to ℒ_r and provides a canonical curve that can be used for shape comparison and matching.

A cast point c_1 (resp. c_2) is defined as the contact point of the tangent to ℒ_r that intersects the level line at b_1 (resp. b_2). There exist several such points for each bitangent point in the case of long bitangents. It thus becomes critical to ensure that a candidate frame ℱ_l forms a convex polygon so as to get an acceptable projection of ℒ_r to the canonical frame. In the case of short bitangents, the construction of ℱ_l is straightforward, as only two cast points exist. As for long bitangents, a single portion of curve may be associated with several canonical curves, each of which depends on the frame configuration. As noted in <cit.>, it is preferable to pick those making a wide angle between the bitangent and the cast tangents, as well as those having the cast points as far from one another as possible: unbalanced frames may give distorted canonical curves. This holds for bitangents crossing the level line. It is also worth mentioning that this step drastically prunes the set of bitangents that can lead to satisfying frames.

§.§ Canonical curves

The goal here is to determine the 2D homography matrix such that 𝐱_i = ρ𝖳𝐗_i <cit.>, where 𝐗_i = [X_i Y_i 1]^T is the i-th point of ℱ_l (of which no 3 are collinear) in homogeneous coordinates, 𝐱_i = [x_i y_i 1]^T is the i-th vertex of the unit square defined by (0,0,1), (0,1,1), (1,1,1) and (1,0,1), 𝖳 is a [3×3] matrix of the transformation parameters with 𝖳_33 = 1, and ρ is a non-zero scalar. This gives, by elimination, 8 equations from four correspondences, linear in the parameters.
The solution we are seeking is the unit singular vector corresponding to the smallest singular value of the matrix of the system. A normalisation step, which consists of translating and scaling, is recommended, for it forces the entries of the matrix of the system to have similar magnitude. Further details can be found in <cit.> (p.108).

§.§ Comparing polygonal curves

Contrary to <cit.>, who relied on rays extended from an origin (1/2, 0) in ℱ_c and designed a feature vector made of all the distances from every intersection point with the canonical curve to the origin, we compare canonical curves by means of the Fréchet distance (Fig. <ref>). The rationale is that we also consider bitangents that cross level lines. This means that the canonical curves may cross the base of ℱ_c one or several times with more or less complex convolutions, making the use of rays impractical.

There are (at least) two common ways of defining the similarity between polygonal curves: the Hausdorff distance <cit.> and the Fréchet distance. The latter has the advantage that it takes into account the ordering of the points along the curves, thereby capturing curve structure better <cit.>. For the sake of speed, we used the discrete Fréchet distance (see Table 1 of <cit.>), which is an approximation of the continuous Fréchet distance: it only uses the curves' vertices for measurements. From there, one can also define the reachable free space, which is the set of points for which the distance between two curves is lower than a distance parameter δ; this allows tracking local similarity <cit.>. The Fréchet distance is the minimum δ that allows reaching the top right corner of the free space starting from (0,0).
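The discrete variant is a short dynamic program over the vertex couplings; a minimal sketch after Eiter and Mannila's recursion (the implementation details are ours) is given below. In our setting, P and Q would be the sampled canonical curves of two candidate shape elements.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal curves P (n x 2) and
    Q (m x 2), computed by dynamic programming over the vertex couplings."""
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    ca = np.empty((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):                 # first column: advance along P only
        ca[i, 0] = max(ca[i-1, 0], d[i, 0])
    for j in range(1, m):                 # first row: advance along Q only
        ca[0, j] = max(ca[0, j-1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i-1, j], ca[i-1, j-1], ca[i, j-1]), d[i, j])
    return ca[-1, -1]
```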
<ref> was available for direct assessment of success or failure of the alignment. It was made by a histopathologist at the time of the tissue preparation and essentially consisted of reporting the cassettes locations onto a slice of a medical image in order to keep track on which part of the sample the tissue piece was cut from. In the following, we call confusing (as opposed to meaningful <cit.> i.e., the tissue outer/inner boundaries) level lines, those not providing relevant information about the tissue shape.We ran two experiments (Fig. <ref>): (E1) consisted of using levels multiple of 16, 12, 8 and 4 in histology and MR images to investigate two questions: what is the impact of confusing level lines as well as their number, on the matching and the alignment? Can level lines be used as they are, without any form of prior knowledge about the tissue boundaries in images? Note that level lines were computed at quantised levels 0 to 255 by steps of 1. We expect that the sparser the set of level lines, the less informative about the actual tissue shape they can be (since information is lost when quantisation is coarse). This indeed translates in higher numbers of false than true matches when using between 1/16th and 1/8th of all available level lines (except for pieces 1 and 3 when using 1/8th, but this is hardly representative). When sufficient information comes in (1/4th), recognition becomes more successful: despite finding more false than true matches for piece 2, RANSAC was able to return the correct transformation—most of the false matches being isolated and spread across the MR image domain in that case. In contrary, RANSAC was unable to deal with false matches for pieces 5 and 6, those being related to ambiguities (shape elements were small and confusing).The second experiment (E2) investigated the question: how robust is the matching/alignment when injecting confusing information into a subset of meaningful level lines? As such, we increased the number of neighbouring level lines from ±5 to ±20 around a meaningful one. In practice, meaningful level lines are those around structural layers (contrasted boundaries) of the tissue and we manually picked the corresponding levels. We can observe that the more localised around relevant information the level lines are, the higher the ratio true/false matches and the more trustful the set of correspondences fed to RANSAC. This is where redundancy is very valuable. However, the more levels one includes, the further one goes from meaningful information, and the more confusing it can get (see the increase in false matches). Due to the complexity of the information and the sinuosity of the shape, we believe that starting from a meaningful subset of level lines is an important consideration. Resulting alignments are shown in Fig. <ref> for 3 pieces, considering neighbourhoods of ±10 level lines. Overall, 5 pieces were matched correctly and two incorrectly. As for piece 6, no shape element was discriminative enough to be correctly matched with an MRI portion of level line without any ambiguity (Fig. <ref>c), as only relatively short bitangents could be extracted. As for piece 7, this is due to the fact that it is close to convex (and thus was not considered in the previous experiments). As a result, a few or no bitangents could be extracted from that histological image and no match was therefore available. The main requirements of the approach are twofold and relate to the length of the bitangents and the threshold on the Fréchet distance. 
The main requirements of the approach are twofold and relate to the length of the bitangents and the threshold on the Fréchet distance. As stressed earlier, short bitangents convey scant and ambiguous information about shape. This results in false matches, especially because of the tolerance of the projective-invariant setting and the sinuosity of the MRI level lines. As a matter of fact, we constrained the approach to using long bitangents: in practice, we used those covering portions of a level line with more than 6 inflections. If a histological image happened to have informative portions with more than two inflections but fewer than 6, as was the case for piece 5, then the longest bitangents were used (4 and 5 inflections in that case). An upper bound was also set (we chose 10 inflections) in order to speed up the matching process and avoid aberrant comparisons with bitangents covering the whole MR image; that range was applied to both MRI and histology. The rationale for considering such a range is also that it is not guaranteed that two level lines have the exact same number of inflections on corresponding portions across modalities, but their smoothness ensures those numbers are close. Long bitangents produce characteristic canonical curves (furthermore associated with wide frames) and allow for lower thresholds on the Fréchet distance while discarding false matches better.

§ CONCLUSION This paper stands as a proof of concept that multimodal registration between a piece of tissue from histology and its whole in an MRI (which, to the best of our knowledge, remains to be addressed) is achievable as a result of shape recognition using portions of their level lines. Such a formulation allows for a contrast- and projective-invariant representation of shape elements, and for partial matching regardless of the orientation of the piece of tissue on the glass slide (flips, rotations). We also introduced a computationally efficient refinement of bitangents using ellipses, from which a single bitangent was retained according to the local geometry of the level line. All this, however, must be weighed against the complexity of medical images; successful alignments require subsets of meaningful level lines along with characteristic shape elements. Those were obtained via the extension of Rothwell's framework to bitangents crossing the level lines and by preferring long bitangents. Future work includes: (i) the automatic extraction of meaningful level lines <cit.>; (ii) the use of the shortcut Fréchet distance <cit.>, which bypasses large dissimilarities. This could improve robustness to tissue tears: a level line in histology may be globally close in shape to part of another in MRI, but if it follows a tear, the distance between the associated canonical curves will be large.

§ ACKNOWLEDGMENTS The authors would like to thank Prof. Olga Ciccarelli and her group (UCL Institute of Neurology, Queen Square MS Centre) for kindly providing the data. This research was supported by the European Research Council (Starting Grant 677697, project BUNGEE-TOOLS), the University College London Leonard Wolfson Experimental Neurology Centre (PR/ylr/18575), the Alzheimer's Society UK (AS-PG-15-025), the EPSRC Centre for Doctoral Training in Medical Imaging (EP/L016478/1), the National Institute for Health Research University College London Hospitals Biomedical Research Centre and Wellcome/EPSRC (203145Z/16/Z, NS/A000050/1).
http://arxiv.org/abs/1708.08117v1
{ "authors": [ "Jonas Pichat", "Juan Eugenio Iglesias", "Sotiris Nousias", "Tarek Yousry", "Sebastien Ourselin", "Marc Modat" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170827180141", "title": "Part-to-whole Registration of Histology and MRI using Shape Elements" }
College of Computer Science, Zhejiang University [email protected]
College of Computer Science, Zhejiang University [email protected]
Department of Computer Science, University of British Columbia [email protected]
Alibaba-Zhejiang University Joint Institute of Frontier Technologies, College of Computer Science, Zhejiang University [email protected]
Alibaba-Zhejiang University Joint Institute of Frontier Technologies, College of Computer Science, Zhejiang University, Hangzhou, China [email protected]

Existing works for extracting navigation objects from webpages focus on navigation menus, so as to reveal the information architecture of the site. However, web 2.0 sites such as social networks, e-commerce portals etc. are making the understanding of the content structure in a web site increasingly difficult. Dynamic and personalized elements such as top stories and recommended lists in a webpage are vital to the understanding of the dynamic nature of web 2.0 sites. To better understand the content structure in web 2.0 sites, in this paper we propose a new extraction method for navigation objects in a webpage. Our method will extract not only the static navigation menus, but also the dynamic and personalized page-specific navigation lists. Since the navigation objects in a webpage naturally come in blocks, we first cluster hyperlinks into different blocks by exploiting the spatial locations of hyperlinks, the hierarchical structure of the DOM-tree and the hyperlink density. Then we identify navigation objects from those blocks using an SVM classifier with novel features such as anchor text lengths. Experiments on real-world data sets with webpages from various domains and styles verified the effectiveness of our method.

[300]Information systems Data mining [300]Computing methodologies Cluster analysis [500]Information systems Data extraction and integration

Navigation Objects Extraction for Better Content Structure Understanding
Can Wang

§ INTRODUCTION The explosive growth of the World Wide Web generates a tremendous amount of web data, and consequently web data mining has become an important technique for discovering useful information and knowledge. Among the many popular topics in web data mining, extracting the information architecture or content structure of a web site has attracted much research attention in recent years. Existing works mainly extract navigation menus from webpages to reveal the content structure of the site <cit.>. Many applications can be derived from the extracted content structure, including generating a site map to improve information accessibility for disabled users, or providing a content hierarchy in search results <cit.>. However, the increasing number of web 2.0 sites such as social networks, e-commerce portals etc.
are turning the web from a static information repository into a dynamic platform for information sharing and interactions. As shown in Figure <ref>, the information architecture on these sites is characterized not only by the traditional static directory structure of the site, but also by dynamic elements such as the top reading list, recommended items etc. In fact, the dynamic nature of web 2.0 sites is better captured by these dynamic and personalized elements, but their importance is neglected in existing works on web structure extraction, which mainly focus on extracting static web site structures such as navigation menus <cit.>, headings <cit.> etc. In this paper, we propose a new extraction method for navigation objects in a webpage to capture both the static directory structures and the dynamic content structures in a web site. It is a non-trivial task, mainly because of the great diversity in webpage structures. Webpages come with various layouts, so navigation objects in different webpages vary greatly in their presentation. Moreover, many navigation elements in a webpage nowadays are generated dynamically, or customized for specific users. To overcome these difficulties, we attempt to develop a page-dependent extractor for navigation objects in a webpage. Our method is based on the following observations for navigation objects in a typical webpage: 1) the navigation objects are naturally grouped in different hyperlink blocks, in which little content other than these hyperlinks exists; 2) the anchor texts of these hyperlinks are usually short and well aligned. With these observations, the first step of our method is to cluster the hyperlinks in a webpage into multiple blocks by exploiting features such as the spatial locations of hyperlinks, the hierarchical structure of the DOM-tree and the hyperlink density. Then we identify navigation objects using an SVM classifier. Generally, the hyperlink blocks in a webpage can be divided into the following four categories:
* Navigation Menu. Hyperlinks provide site-level navigation. They stay relatively invariant and can be directly mapped to the static directory structure in a website.
* Navigation List. Hyperlinks provide page-dependent navigation and capture the dynamic and personalized content structures, such as recommended lists etc.
* Content Hyperlink. Hyperlinks appear in the main content.
* Others. Hyperlinks include Ads, copyright information etc.
Obviously, we intend to extract Navigation Menus and Navigation Lists in a webpage. The SVM classifier is trained with some well-defined features, such as the number of hyperlinks and the mean and variance of anchor text lengths. Experimental results on multiple real-world datasets verify the effectiveness of our method. The rest of the paper is organized as follows. We briefly review related works in section 2. We describe our method in sections 3 and 4: clustering hyperlinks into blocks in section 3, and classifying hyperlink blocks in section 4. In section 5 we show our experimental setup and results, followed by a discussion. Finally, we present our conclusions and plans for future research in section 6.

§ RELATED WORK Our work is related to the areas of web structure mining and web information extraction. Web structure mining. Web structure mining aims to study the hyperlink structure of the web. Some early works studied the structure of the web at large <cit.><cit.> and uncovered the major connected components of the web.
Others analyzed general properties of the web graph, such as its diameter <cit.>, and the size and accessibility of information on the web <cit.>. PageRank <cit.> exploits the linkage information to learn the importance of webpages and has become widely used in modern search engines. Recent works on web structure mining focus more on the local structures of the web graph. Ravi et al. <cit.> used the hierarchical structure of URLs to generate hierarchical web site segmentations. Though the hierarchical structure of URLs was also used in many other works, such as <cit.>, it does not reflect the web site organization accurately. Eduarda Mendes et al. <cit.> noticed this and argued that navigation objects could reflect the web site structure better. They applied frequent item-set algorithms to the outgoing hyperlinks of webpages to detect repeated navigation menus and then used them to represent web sites. Keller et al. <cit.> also tried to use navigation menus to reveal the information architecture of web sites, but they extracted the menus in a very different way, by analyzing maximal cliques on the web graph. Some works do not extract navigation objects directly, but they take into account the structural information navigation objects provide. For instance, when Cindy Xide et al. <cit.> clustered webpages, they considered parallel links, which are siblings in the DOM-tree of a webpage and usually belong to the same navigation object. However, these works only focus on the static structure of a web site represented by navigation menus etc. and neglect the dynamic structure represented by personalized, page-specific navigation lists. These navigation elements are vital to understanding the dynamic nature of web 2.0 sites. Web information extraction. Information extraction from webpages has many applications. Most existing works focus on main content extraction from webpages, and the early work on this can be traced back to Rahman et al. <cit.>. They segmented webpages into zones based on their HTML structure and extracted important content by analyzing zone attributes. Among the many categories of extraction methods, template-based ones are popular because they are highly accurate and easy to implement. They extract content from pages sharing a common template by looking for special HTML cues using regular expressions. A different category of template-based methods used template detection algorithms <cit.><cit.><cit.><cit.>, in which webpages with the same template are collected and used to learn common structures. The major problem with template-based extractors is that different extractors must be developed for different templates. What's more, once the template updates, as frequently happens on many web sites, the extractor is invalidated. To overcome the limitations of template-based methods, many researchers attempted to extract content from webpages in a template-independent way. Cai et al. <cit.> proposed a vision-based webpage segmentation algorithm named VIPS to divide a webpage into several blocks by its visual presentation. Zheng et al. <cit.> presented a template-independent news extraction method based on visual consistency. Wang et al. <cit.> exploited more features about the relation between the news title and body by first extracting the title block and then extracting the body block. Shanchan et al.
<cit.> trained a machine learning model with multiple features generated from DOM-tree node properties and extracted content using this model. Although these methods extract webpage content in a template-independent way, they still rely on particular HTML cues (e.g., <table>, <td>, color and font etc.) in their extraction, and thus are more easily affected by the underlying web development technologies. Two recent works, CETR <cit.> and CETD <cit.>, address this issue by identifying regions with high text density, i.e., regions including many words and few tags are more likely to be main content. As can be seen, most existing works on information extraction from webpages focus on main content extraction and cannot be applied to extracting navigation objects directly. Even the template-based methods cannot be used directly to extract navigation objects, because navigation lists in webpages are usually generated dynamically and are page-dependent.

§ CLUSTERING HYPERLINKS Our work is motivated by the observation that navigation objects are naturally grouped in different hyperlink blocks according to their purposes. To better illustrate our idea, we use a typical webpage, the home page of Techweb[http://www.techweb.com], as an example. As shown in Figure <ref>, the hyperlinks in the webpage are obviously grouped in different blocks with different visual presentation features. §.§ DOM-tree Before clustering the hyperlinks in a webpage into blocks, we parse the webpage into a DOM-tree. Each webpage corresponds to a DOM-tree where detailed text, images, hyperlinks etc. are leaf nodes. An example of the DOM-tree is shown in Figure <ref>. The DOM-tree at the bottom of Figure <ref> is derived from the HTML code at the top right, whose webpage layout is at the top left. The DOM-tree is a hierarchical structure and has three useful properties. First, the relation between a child node and its parent node reflects their relation in the webpage layout; e.g., in Figure <ref> the fact that the nodes <p> and <img> are child nodes of the node <div> reflects that the text and image are included in the block corresponding to <div> in the webpage layout. Second, the relative positions of sibling nodes are preserved when they are displayed in the webpage. More specifically, if node a and node b are sibling nodes and a is on the left side of b in the DOM-tree, the display element corresponding to a must stay on the left side of, or above, the display element corresponding to b in the webpage layout. Third, hyperlinks in the same block must have the same ancestor, which is the root node of the smallest sub-tree including that block. The above three properties are very useful when we cluster hyperlinks into blocks on the DOM-tree of a webpage. §.§ DOM-tree Distance The central problem in clustering hyperlinks is to define a reasonable distance between them that conforms well to their visual presentation. The most intuitive choice is the Euclidean distance between their locations on the webpage as rendered by browsers. However, obtaining these locations is computationally expensive. Moreover, the locations of many hyperlinks cannot be obtained without user interactions; e.g., in multilevel menus, the display locations of hyperlinks in the second or third level menus are only available after clicking their parent menus. To address this issue, we analyze the structure of the HTML code and use the DOM-tree distance to approximate the distance between two hyperlinks.
We first traverse the DOM-tree of a given webpage in depth-first search order and index each node we encounter, starting from 1. Then we calculate the DOM-tree Distance (DD) between hyperlinks l_1 and l_2 as follows: DD(l_1, l_2)=| index(l_1)- index(l_2)|, where index(l_i) denotes the index of hyperlink l_i. For two given hyperlink blocks B_1 and B_2, we define the gap between them as the minimum distance between hyperlinks in B_1 and B_2: gap(B_1, B_2)=min_i,jDD(l_i,l_j), where l_i∈ B_1, l_j∈ B_2. We can use an internal node to represent a hyperlink block, which includes all hyperlink nodes in the corresponding sub-tree. In Figure <ref>, the node indexed 2 can represent the hyperlink block including the hyperlinks indexed 6 and 8, and the node indexed 11 can represent the hyperlink block including the hyperlink indexed 12. The gap between these two hyperlink blocks is min{4, 6}=4. §.§ Hyperlink Density Another important observation is that a hyperlink block usually includes little text other than the text in its hyperlinks. We consequently define the Hyperlink Density HD(S) for a given layout block S, which consists of one or more sub-trees of a DOM-tree: HD(S)=(#{anchor text in S}+ϵ)/(#{all text in S}+ϵ), where #{anchor text in S} denotes the number of words of anchor text over all hyperlinks in S, #{all text in S} denotes the number of words of all text in S, and ϵ is a smoothing parameter to avoid division by zero. We set ϵ=10^-10 in all our experiments. §.§ Clustering on the DOM-tree In the process of clustering hyperlinks into blocks, we make good use of the hierarchical structure of the DOM-tree and its properties. The complete algorithm for clustering hyperlinks on the DOM-tree is shown in Algorithm <ref> with details; a sketch of its merge test is given below. The core of our algorithm is a recursive process. Given two hyperlink blocks B_1 and B_2, each of which has already been established as a block, we try to merge them if they have the same parent and are neighbors. When the gap between B_1 and B_2 is no larger than a given threshold gt, and the Hyperlink Density of the potential hyperlink block consisting of B_1 and B_2 is no smaller than a given threshold hdt, we merge them into one hyperlink block. We only try to merge hyperlink blocks which have the same parent, because hyperlinks in the same block should have the same ancestor. We only try to merge hyperlink blocks which are neighbors, because the relative positions of sibling nodes are preserved when they are displayed in the webpage layout.
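The following is a minimal sketch of the two quantities defined above and of the merge test they feed into. The block and hyperlink attributes (.index, .links, .anchor_words, .all_words) are hypothetical names we introduce for illustration; the full algorithm applies this test bottom-up over sibling blocks of the DOM-tree.

def dom_distance(l1, l2):
    """DOM-tree Distance: absolute difference of the depth-first indices."""
    return abs(l1.index - l2.index)

def gap(b1, b2):
    """Gap between two hyperlink blocks: minimum pairwise DOM-tree distance."""
    return min(dom_distance(a, b) for a in b1.links for b in b2.links)

def hyperlink_density(anchor_words, all_words, eps=1e-10):
    """HD(S) = (#anchor-text words + eps) / (#all words + eps)."""
    return (anchor_words + eps) / (all_words + eps)

def should_merge(b1, b2, gt, hdt):
    """Merge test: neighbouring sibling blocks are merged when the gap is
    at most gt and the merged block would remain sufficiently link-dense."""
    merged_hd = hyperlink_density(b1.anchor_words + b2.anchor_words,
                                  b1.all_words + b2.all_words)
    return gap(b1, b2) <= gt and merged_hd >= hdt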
The whole process executes from bottom to top over the whole DOM-tree, and from left to right on each level of the DOM-tree. We thereby avoid a lot of useless comparisons by making good use of the hierarchical structure and properties of the DOM-tree. §.§ Threshold We use the gap threshold (denoted by gt) and the Hyperlink Density threshold (denoted by hdt) to control the results of clustering. Due to the variety of webpages, gt and hdt vary greatly across webpages, so we need an effective method to learn proper gt and hdt for each webpage. §.§.§ Gap threshold As explained in the previous sub-section, we only try to merge hyperlink blocks which are neighbors. So the proper value of gt is among the gaps between all neighboring hyperlink blocks, with an additional 0. Though we cannot directly obtain the set S_b of all gaps between neighboring hyperlink blocks, we can easily obtain the set S_h of all distances between neighboring hyperlinks, and we now prove that S_b=S_h. Firstly, each hyperlink is itself a hyperlink block containing only one hyperlink, so S_h⊂ S_b. Secondly, as defined in equation (<ref>), the gap between two hyperlink blocks is the minimum distance between hyperlinks in those two blocks, which must be the distance between two neighboring hyperlinks, so S_b⊂ S_h. Hence S_b=S_h is proved. Let DL denote S_h with an additional 0; the problem of calculating gt then becomes choosing a proper value from DL: (1) gt should not be too large, to avoid clustering all hyperlinks into very few big blocks; (2) gt should not be too small, to avoid clustering the hyperlinks into too many small blocks. After sorting DL in decreasing order, we choose gt=DL_i with i=argmin_i(DL_i/DL_1+β i/length(DL)), where DL_1 is the maximum value in DL, length(DL) is the number of values in DL, and 1≤ i ≤length(DL); these two factors normalize the value of the distance and the number of potential blocks respectively. β is a tradeoff parameter and we set β = 1 in all our experiments. §.§.§ Hyperlink density threshold A hyperlink block includes little text other than the text in its hyperlinks. Intuitively, the node with the <body> tag is the root node of the DOM-tree and contains no less non-anchor text than any hyperlink block. Letting HD_B denote the Hyperlink Density of the whole webpage, hdt=γ HD_B therefore serves as a lower bound on the Hyperlink Density of hyperlink blocks. γ≥0 is a tuning parameter and we set γ = 1 in our experiments.
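A minimal sketch of this per-page threshold selection, reusing hyperlink_density from the earlier sketch; the input is assumed to be the list of distances between neighboring hyperlinks on the page, with at least one non-zero entry.

def choose_gap_threshold(neighbour_distances, beta=1.0):
    """gt = DL_i minimising DL_i/DL_1 + beta*i/length(DL), with DL the
    decreasingly sorted distances plus an additional 0 (i is 1-indexed)."""
    DL = sorted(set(neighbour_distances) | {0}, reverse=True)
    n = len(DL)
    i_best = min(range(n), key=lambda i: DL[i] / DL[0] + beta * (i + 1) / n)
    return DL[i_best]

def choose_density_threshold(page_anchor_words, page_all_words, gamma=1.0):
    """hdt = gamma * HD of the whole page (the <body> subtree)."""
    return gamma * hyperlink_density(page_anchor_words, page_all_words)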
§ CLASSIFYING HYPERLINK BLOCKS We train an SVM classifier with an RBF kernel on some well-defined features to identify navigation objects. §.§ Features §.§.§ The number of hyperlinks From our observations, a navigation object usually contains many hyperlinks, while other hyperlink blocks contain fewer. So the number of hyperlinks is a very useful feature to distinguish navigation objects from non-navigation objects. For a given hyperlink block B_i, we denote the number of hyperlinks in it by #B_i. §.§.§ Text length in hyperlinks The length of the anchor text is also very useful. On one hand, anchor texts in a navigation object are usually short, while hyperlinks in main content usually have relatively longer texts, and hyperlinks in Ads etc. usually contain images without any text. So the mean text length in a navigation object is usually small but not zero. On the other hand, the text in a navigation object is usually neat, and the variance of these text lengths is small. For a given hyperlink block B_i, we denote the mean and variance of the text lengths of its hyperlinks by B_i^tm and B_i^tv respectively. We apply the re-implemented Gaussian smoothing <cit.> to the text lengths of the hyperlinks in a DOM-tree to avoid sudden changes in the text lengths. In summary, for a given hyperlink block B_i, the feature vector of B_i is [#B_i, B_i^tm, B_i^tv]. The SVM classifier with RBF kernel is then applied to classify B_i as a navigation object or a non-navigation object. §.§ SVM Classifier Support Vector Machine (SVM) is a well-known supervised learning model. In order to perform non-linear classification, we use the SVM classifier with an RBF kernel <cit.>. When using SVM classifiers, we need to calculate the distance between two points. Since the ranges of the different features differ widely, the features are normalized so that each feature contributes in approximately equal proportion to the final distance. What's more, the normalization can also reduce the training time of SVM classifiers <cit.>.

§ EXPERIMENT Experiments on real-world datasets demonstrate the effectiveness of our method. §.§ Data Set In our experiments we use data from two sources: (1) the dataset from CleanEval <cit.>; (2) the news site data from MSS <cit.>. CleanEval: CleanEval is a shared competitive evaluation on the topic of cleaning arbitrary webpages[http://cleaneval.sigwac.org.uk]. It is a diverse dataset: only a few webpages are used from each site, and the sites use various styles and structures. Moreover, this data set has many webpages including dynamic and page-dependent navigation elements. MSS: The dataset can be retrieved from Pasternak and Roth's repository[http://cogcomp.cs.illinois.edu/Data/MSS/]. This data set contains 45 individual websites, which are further separated into two non-overlapping sets: 1) the Big 5: Tribune, Freep, NY Post, Suntimes and Techweb; 2) the Myriad 40: webpages chosen randomly from the Yahoo! Directory. The Big 5 includes five of the most popular news sites, and the Myriad 40 contains an international mix of 40 English-language sites of widely varying size and sophistication. §.§ Performance Metrics §.§.§ Clustering hyperlinks The result of clustering hyperlinks is a partition into hyperlink blocks, which we compare with the hand-labeled ground truth. The first metric is the Adjusted Rand Index (ARI) <cit.>. The Rand Index (RI) measures the agreement between the output of clustering and the ground truth <cit.>. ARI is an adjusted-for-chance version of the Rand Index, which equals 0 on average for random partitions and 1 for two identical partitions, so a larger ARI value means better performance. The second metric is the Adjusted Mutual Information (AMI) <cit.>. Mutual Information (MI) is a symmetric measure quantifying the statistical information shared between the output of clustering and the ground truth <cit.>. AMI is an adjustment of MI to account for chance; it ranges from 0 to 1, and a larger value indicates better performance. §.§.§ Classifying hyperlink blocks The performance of classifying hyperlink blocks is measured by standard metrics. Specifically, precision, recall and F_1-score are calculated by comparing the output of our method against a hand-labeled gold standard.
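As a sketch of this classification step with scikit-learn (the library named in the implementation details below), consider the following. The block and link attributes, as well as train_blocks, train_labels and test_block, are hypothetical names of ours, and StandardScaler is one plausible choice for the feature normalization the text calls for.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def block_features(block):
    """[#B_i, B_i^tm, B_i^tv]: link count, mean and variance of the
    (smoothed) anchor-text word lengths of the block's hyperlinks."""
    lengths = np.array([len(l.text.split()) for l in block.links], float)
    return [len(lengths), lengths.mean(), lengths.var()]

# Feature scales differ widely, so normalise before the RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma=0.1))
X = np.array([block_features(b) for b in train_blocks])
clf.fit(X, train_labels)          # 1 = navigation object, 0 = otherwise
is_nav = clf.predict([block_features(test_block)])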
Performance on each dataset is calculated by averaging the above metrics over all webpages. Note that every hyperlink in a webpage is considered a distinct hyperlink even if some hyperlinks appear multiple times in the webpage. §.§ Implementation Details All programs were implemented in Python with the help of scikit-learn <cit.>. After parsing the HTML file of a webpage into a DOM-tree, we treated all elements with the tag <a> as hyperlinks, including some buttons and drop-down lists. We kept everything in a webpage without any preprocessing, in order to show that our method can handle most noise in the webpage. §.§.§ Clustering hyperlinks In order to properly evaluate the performance of our method on clustering hyperlinks, we compared it with several common clustering algorithms, including Agglomeration, DBSCAN, K-Means and Spectral Clustering <cit.>. All algorithms use equation (1) to measure the distance between two hyperlinks. Agglomeration initializes every hyperlink as a singleton cluster; at each of the N-1 steps, the two closest clusters are merged into one cluster. We implemented this algorithm ourselves, using single linkage to measure the intergroup dissimilarity and gt as the stopping threshold for its iterations. The DBSCAN algorithm regards clusters as areas of high density separated by areas of low density. We use the implementation in scikit-learn with eps = gt and min_samples = 1, where eps is the maximum distance between two samples for them to be considered as in the same neighborhood, and min_samples is the minimum number of samples in a neighborhood for a point to be considered a core point. The K-Means algorithm clusters data by trying to split the samples into K groups. We use the implementation in scikit-learn, setting the parameter K to the number of blocks in the ground truth. For Spectral Clustering, we use the implementation in scikit-learn, setting the parameter K to the number of blocks in the ground truth and using the one-nearest-neighbor method to construct the affinity matrix. There are two versions of our method: CHD clusters hyperlinks on the DOM-tree without considering Hyperlink Density, by setting γ = 0 in equation (<ref>), and CHD-HD is the version considering Hyperlink Density, with γ = 1. §.§.§ Classifying hyperlink blocks The standard deviation is σ=2 in the re-implemented Gaussian smoothing algorithm. We classify each hyperlink block as a navigation object or non-navigation object using the SVM with RBF kernel implemented in scikit-learn. The parameters of this SVM classifier are set as C=1.0 and γ=0.1, where C is the penalty parameter of the error term and γ is the kernel coefficient of the RBF. §.§ Results For each data set, we randomly select 50% of the webpages as the training set and the remaining webpages as the testing set. §.§.§ Clustering hyperlinks Table <ref> and Table <ref> present the hyperlink clustering performance of the different algorithms on the CleanEval, Myriad 40 and Big 5 data sets under the ARI and AMI metrics respectively. The Big 5 has been broken down into its individual sources.
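For reference, here is a minimal sketch of how the scikit-learn baselines and the two clustering metrics described above could be wired together. The one-dimensional depth-first indices, the page-specific gt and the ground-truth labels (links, gt, truth) are assumed inputs, and AgglomerativeClustering with a distance threshold stands in for the paper's own single-linkage implementation.

import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

# One feature per hyperlink: its depth-first index in the DOM-tree.
idx = np.array([[l.index] for l in links], dtype=float)

# DBSCAN baseline with eps = gt and min_samples = 1, as described above.
db_labels = DBSCAN(eps=gt, min_samples=1).fit_predict(idx)

# Single-linkage agglomeration stopped at the same distance threshold.
ag_labels = AgglomerativeClustering(n_clusters=None, distance_threshold=gt,
                                    linkage="single").fit_predict(idx)

for labels in (db_labels, ag_labels):
    print(adjusted_rand_score(truth, labels),
          adjusted_mutual_info_score(truth, labels))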
Comparing the average ARI and AMI values over all data sets, our methods (both CHD and CHD-HD) outperform all comparison methods. In fact, our methods also perform better than most comparison methods on the individual data sets, in both ARI and AMI. Moreover, our method is more reliable than the comparison methods: it performs stably, while the comparison methods may collapse on particular data sets. This is because our method makes good use of the hierarchical structure of the DOM-tree as well as of the distance information on the DOM-tree. Finally, CHD-HD always performs better than CHD, especially on the CleanEval dataset, which has the greatest diversity and the most dynamic and page-dependent navigation elements. This means that, besides the hierarchical structure of the DOM-tree, Hyperlink Density is also very helpful. Besides our method, Agglomeration has the best performance, except for its collapse on the Suntimes data set. Although it makes no use of any information from the hierarchical structure, it exploits the fact that hyperlinks in a block cluster together. DBSCAN also uses this fact, so its performance is quite similar to that of Agglomeration. For instance, on the Myriad 40 and four sources of the Big 5, the performance of DBSCAN is the same as that of Agglomeration. For K-Means and Spectral Clustering, the performance is very poor, even though they have "cheated" by using the K obtained from the ground truth. In fact, finding the best K is very difficult. The average cumulative percentage of webpages for which the clustering performance of a particular method is less than a certain ARI value is plotted in Figure <ref>. The corresponding figure for AMI is Figure <ref>. The more slowly a curve rises from left to right, the better the corresponding method performs. These two figures illustrate, more directly than Table <ref> and Table <ref>, the better performance our method achieves at every ARI and AMI value relative to the comparison methods: the majority of webpages processed by our method have larger ARI and AMI values. Taking Figure <ref> as an example, for Agglomeration, DBSCAN, K-Means and Spectral Clustering, the average percentages of webpages with ARI values lower than 0.6 are about 26%, 27%, 84% and 89%, while for CHD and CHD-HD these percentages are only 13% and 10% respectively. §.§.§ Classifying hyperlink blocks Table <ref> and Table <ref> present the results of classifying hyperlink blocks. They clearly show that our method performs very well, not only on datasets with webpages from a single site (such as Tribune, Freep etc.) but also on datasets with webpages from various sites (such as CleanEval and Myriad 40). The results on the CleanEval data set are less competitive because this data set has the greatest diversity. Moreover, the results obtained when CHD-HD is used for clustering are better than those obtained with CHD.
This is very reasonable, because CHD-HD achieves better clustering results than CHD. §.§ Discussion To show the generalization ability of our method, we continuously increase the percentage of hyperlinks in the training set used for training from 1% to 100%, and plot the corresponding F_1-scores. The increment is 1% when the percentage is less than 10%, and 10% otherwise. We used CHD-HD to cluster hyperlinks in this experiment. We can observe that even when using very few hyperlinks as training data, e.g., only 5% of the whole training set, the performance of our method is very impressive. This means our method has a strong generalization ability, because it needs very little training data to perform well, which makes it highly practical.

§ CONCLUSIONS In this paper we propose a new extraction method for navigation objects in a webpage, to capture both the static directory structures and the dynamic content structures in a website. Our method extracts not only the static navigation menus, but also the dynamic and personalized page-specific navigation lists, including top stories, recommended lists etc. Based on the observation that hyperlinks in a webpage are naturally arranged in different blocks, we use a two-step process to extract navigation objects: we first cluster the hyperlinks of a webpage into multiple blocks and then identify navigation object blocks from the clustering results using an SVM classifier. The effectiveness of our method is verified with experiments on real-world data sets. In addition to its effectiveness, the greatest strengths of our method are the simplicity of its implementation and its practicability. Firstly, it has a very strong generalization ability and needs very little training data to perform well. Secondly, our method only requires the HTML file of a webpage and does not need any preprocessing to handle noise in the webpage. Thirdly, our method does not rely on any special HTML cues (e.g., <table>, <td>, color and font etc.), which brings great stability over time. There are several interesting problems to be investigated in our future work: (1) we will consider using more features for clustering hyperlinks and classifying hyperlink blocks without harming the simplicity of our method; (2) we may try to achieve similar performance without any training data, which would make the method much easier to use; (3) we can incorporate additional information into our method, such as cliques in the web graph, to further improve the understanding of content structures in websites.

§ ACKNOWLEDGMENTS This work is supported by the Alibaba-Zhejiang University Joint Institute of Frontier Technologies, the Zhejiang Provincial Soft Science Project (Grant no. 2015C25053), the Zhejiang Provincial Natural Science Foundation of China (Grant no. LZ13F020001), and the National Science Foundation of China (Grant no. 61173185).
http://arxiv.org/abs/1708.07940v1
{ "authors": [ "Kui Zhao", "Bangpeng Li", "Zilun Peng", "Jiajun Bu", "Can Wang" ], "categories": [ "cs.AI", "cs.IR" ], "primary_category": "cs.AI", "published": "20170826065924", "title": "Navigation Objects Extraction for Better Content Structure Understanding" }
Learning to Blame
Ranjit Jhala

We have derived the Buchdahl limit for a relativistic star in the presence of the Kalb-Ramond field, in four as well as in higher dimensions. It turns out that the Buchdahl limit is severely affected by the inclusion of the Kalb-Ramond field. In particular, the Kalb-Ramond field in four spacetime dimensions enables one to pack extra mass into any compact stellar structure of a given radius. On the other hand, a completely opposite picture emerges if the Kalb-Ramond field exists in higher dimensions, where the mass content of a compact star is smaller compared to that in general relativity. Implications are discussed.

§ INTRODUCTION There has been considerable interest in the compactness limit of any stellar structure, originating from the seminal work of Buchdahl, who showed that under reasonable assumptions the minimum radius of a star has to be greater than (9/8) of its Schwarzschild radius <cit.>. These assumptions required the density of the star to decrease outwards and the interior solution to be matched to the vacuum exterior one, which by the uniqueness theorems of general relativity is the Schwarzschild solution. This raises an intriguing question: how is the above limit modified if one considers a theory of gravity different from general relativity, or if one introduces additional matter fields? Several such ideas have been explored quite extensively in recent times, including (a) the inclusion of a cosmological constant <cit.>, (b) effects due to the presence of extra dimensions <cit.>, (c) the effect of scalar tensor theories <cit.>, and (d) the dependence of the Buchdahl limit on higher curvature gravity models <cit.>, such as f(R) gravity <cit.>, pure Lovelock theories <cit.>, etc. In some of these cases one had to impose additional physically motivated assumptions, e.g., the dominant energy condition, sub-luminal propagation of sound etc. All in all, the ultimate limit on stellar structures is of fundamental importance, not only for understanding the differences between various gravity theories but also for differentiating how additional matter fields behave under self-gravity <cit.>. Having set the stage, let us describe the situation we will be interested in. Rather than working with modified gravity theories, we will modify Einstein's field equations by introducing additional matter fields. In particular, we will mainly be concerned with the effects of the Kalb-Ramond field on the formation of stellar structures and the possible modification of the Buchdahl limit thereof. The Kalb-Ramond field arises naturally in the context of field theory (and also as a closed string mode in string theory) and is in a sense a generalization of electrodynamics <cit.>. The gauge vector field of electrodynamics gets replaced by a second rank antisymmetric tensor field with a corresponding third rank antisymmetric field strength. Interestingly, this third rank antisymmetric tensor field appearing as the field strength of the Kalb-Ramond field has conceptual as well as mathematical similarity with spacetime torsion <cit.>. In particular, if one assumes spacetime torsion to be antisymmetric in all three indices[In general, spacetime torsion appears through the definition of the covariant derivative: ∇_iV^j=∂ _iV^j+A^j_ikV^k, with the connection being written as A^a_bc=Γ ^a_bc+T^a_bc.
Here Γ ^a_bc is the standard Christoffel connection, symmetric in (b,c), while T^a_bc is the torsion tensor, antisymmetric in (b,c).], then the decomposition of the Ricci scalar into parts dependent on and independent of torsion leads to an action with an additional term coinciding with the action for the Kalb-Ramond field coupled to gravity. Thus in this sense, whether we work with the Kalb-Ramond field or with completely antisymmetric spacetime torsion, the physics remains unchanged. However, for concreteness we will explicitly work with the Kalb-Ramond field, while keeping the analogy with spacetime torsion in the backdrop. Given the action for the Kalb-Ramond field in the presence of gravity, one can immediately obtain the corresponding modified Einstein's equations. This will result in different exterior as well as interior solutions; in particular, the exterior solution no longer represents a vacuum spacetime. This will inevitably result in modifications of the limits on stellar structure, and hence the Buchdahl limit will certainly differ from that in general relativity. There is another way a similar modification can be brought about, namely through the introduction of extra dimensions. Even though in the standard picture the normal matter fields, e.g., the electromagnetic field, do not propagate in the extra dimensions[This is due to the fact that all such matter fields originate from open string modes and hence have access only to the four-dimensional spacetime, as their end points are attached to it.], the Kalb-Ramond field, on the other hand, being a closed string mode like gravity, does propagate in the higher dimensions (known as the bulk). Hence the presence of the Kalb-Ramond field brings in modifications to the effective gravitational field equations on any four dimensional hypersurface (referred to as the brane) embedded in the five dimensional bulk <cit.>. We will study the modified Buchdahl limit in this scenario as well. The main purpose of this work is precisely to explore these modifications and hence to understand the departure from general relativity that the Kalb-Ramond field, as well as the existence of higher dimensions, can bring to stellar structures.

The paper can broadly be divided into three parts. In <ref> we discuss the basic mathematical framework of the Kalb-Ramond field coupled to gravity and the possible effects due to extra dimensions. The formalism developed in <ref> is applied to the derivation of the Buchdahl limit in the context of the Kalb-Ramond field in four spacetime dimensions in <ref>, while the Kalb-Ramond field in higher spacetime dimensions is discussed in <ref>. Finally we conclude with a discussion of our results. We set the fundamental constants c=1=ħ and work in the mostly positive signature convention. All uppercase Roman letters stand for higher dimensional spacetime indices, lowercase Roman letters indicate spatial indices, and Greek letters are used to label the four dimensional spacetime indices.

§ GRAVITY WITH THE KALB-RAMOND FIELD: BASIC FORMALISM In this section we briefly elaborate on the gravitational dynamics in the presence of the Kalb-Ramond field. As mentioned earlier, the Kalb-Ramond field is an antisymmetric second rank tensor B_AB, and its field strength is denoted by H_PQR=∂ _[PB_Q R][Note that ∇ _[PB_QR]=(1/3){∇ _PB_QR+∇ _RB_PQ+∇ _QB_RP}=∂ _[PB_Q R], since all the affine connections cancel each other due to the antisymmetry of the field B_PQ.]. As in electrodynamics, the action for the Kalb-Ramond field is taken to be the square of the field strength.
Thus the complete action for the Kalb-Ramond field with gravity in D spacetime dimensions takes the following form, 𝒜=∫ d^Dx√(-g)[R/16π G_D-1/12H_ABCH^ABC+L_ matter] , where G_D is the D dimensional gravitational constant and L_ matter consists of any additional matter fields that may be present. The factor (-1/12) ensures that in our signature convention the kinetic term in a local inertial frame appears as (1/2)(∂ _tB_ij)^2. Even though the Kalb-Ramond field in D spacetime dimensions has a total of D(D-1)/2 independent components, the actual number of propagating degrees of freedom is smaller. This arises from the fact that time derivatives of only the spatial components of the Kalb-Ramond field appear in the Lagrangian. Thus among the total D(D-1)/2 independent components, only (D-1)(D-2)/2 can represent propagating degrees of freedom. However, one has to take into account the additional gauge symmetry present in the system, i.e., the transformation B_PQ→ B_PQ+∇_Pξ _Q-∇ _Qξ _P, which keeps the Lagrangian invariant. The spatial part of the gauge field seemingly removes (D-1) further degrees of freedom, so that the total number of degrees of freedom would become (D-1)(D-4)/2. There still exists one more scalar degree of freedom, since the gauge transformations are themselves redundant under ξ _P→ξ _P+∂ _Pϕ. Thus the actual number of degrees of freedom is {(D-1)(D-4)/2}+1. Hence in four dimensions the Kalb-Ramond field has a single degree of freedom, while for D=5 the number of degrees of freedom becomes three <cit.>.

Having discussed some of the basic properties of the Kalb-Ramond field, let us now inquire how it may affect the dynamics of gravity. For that purpose the most important ingredient is the set of gravitational field equations, which can be obtained by varying the action in <ref> with respect to the metric, leading to, G_AB =8π G_D{T^(KR)_AB+T_AB^(matter)}; T^(KR)_AB =1/6[3H_APQH_B^ PQ-1/2{H_PQRH^PQR}g_AB]; T^(matter)_AB=-2/√(-g)δ(√(-g)L_ matter)/δ g^AB . Here T_AB^(KR) and T_AB^(matter) respectively correspond to the energy momentum tensor of the Kalb-Ramond field and of any additional matter fields present. One can also obtain the field equation for the Kalb-Ramond field by varying the action with respect to B_PQ, leading to ∇ _AH^ABC=0. Given the field equation for the Kalb-Ramond field, it is instructive to prove the conservation of the respective energy momentum tensor, which can be derived along the following lines, ∇ _AT^A (KR)_B =1/6[3{∇ _AH^APQ}H_BPQ +3H^APQ{∇ _AH_BPQ}-H^PQR∇ _BH_PQR] =1/6[H^APQ{∇ _AH_BPQ-∇ _QH_ABP+∇_PH_QAB}-H^PQR∇ _BH_PQR] =0 . Here in the first line we have used the field equation for the Kalb-Ramond field, while in the second line we have used the following identity satisfied by the field strength, namely, ∇ _[AH_PQR]=0. The proof of this statement is analogous to that in electromagnetism and follows from the complete antisymmetry of the field strength. In what follows we will mainly be interested in four dimensional spacetimes with spherical symmetry. This can be achieved along two possible avenues: (a) one can start from a four dimensional action, obtained by setting D=4 in <ref>, or (b) one can start from a five dimensional spacetime and then project the gravitational field equations on a four dimensional hypersurface.
In this work we will explore both these situations, following mainly <cit.>, to achieve our goal of deriving the possible modifications of the Buchdahl limit. We start with the first possibility, i.e., the Kalb-Ramond field in four spacetime dimensions, described by a static and spherically symmetric metric ansatz. The line element fit for our purpose, expressing staticity and spherical symmetry, becomes, ds^2=-e^ν(r)dt^2+e^λ(r)dr^2+r^2(dθ ^2+sin ^2θ dϕ ^2) , where ν(r) and λ(r) are arbitrary functions of the radial coordinate that we need to determine through the gravitational field equations. Returning to the Kalb-Ramond field, as we have already emphasized, there is a single independent degree of freedom. This enables one to write down the field strength H_μνρ in terms of a scalar field, such that H^μνρ=ϵ ^μνρσ∂ _σΦ[Note that, since ϵ ^μνρσ=(-1/√(-g))[μνρσ], where [μνρσ] is the completely antisymmetric symbol, it immediately follows that ∇ _μH^μνρ=(-1/√(-g))[μνρσ]∇ _μ∇ _σΦ=0.]. In the context of spherical symmetry, the above scalar degree of freedom (also known as the axion) becomes a function of the radial coordinate alone, i.e., Φ=Φ(r). Thus only the H^023 component will contribute, leading to: H^023=ϵ ^0231Φ'(r), where 'prime' denotes a derivative with respect to the radial coordinate. Since ϵ ^μνρσ involves a 1/√(-g) factor, it follows that H^023 can be written as f(r)/sinθ, with f(r) being an arbitrary function of the radial coordinate. This can also be verified by solving the field equation for H^μνρ directly. With H^023 as the only non-zero component, the energy momentum tensor of the Kalb-Ramond field turns out to have the following components, T^0 (KR)_0 =1/6[6H^023H_023-1/2×{6H^023H_023}]=1/2 H^023H_023=-e^-ν(r)H_023^2/2r^4sin ^2θ≡-h(r)^2 , T^1 (KR)_1 =-1/2H^023H_023=h(r)^2=-T^2 (KR)_2=-T^3 (KR)_3 . This suggests that the energy momentum tensor arising from the Kalb-Ramond field in a spherically symmetric context can actually be expressed as that of an anisotropic fluid, with the structure diag{-ρ_ KR(r),p_ KR(r),-p_ KR(r),-p_ KR(r)}, where ρ _ KR(r)=h(r)^2=p_ KR(r). It is clear that the energy density is positive definite, since the Kalb-Ramond field is necessarily real. Surprisingly, the transverse (or angular) part of the energy momentum tensor exhibits a negative pressure. Due to this fact, even though the Kalb-Ramond field satisfies the weak energy condition, i.e., T_abu^au^b>0, it also satisfies {T_ab-(1/2)Tg_ab}u^au^b=0. Thus the Kalb-Ramond field in a spherically symmetric context in four spacetime dimensions marginally satisfies the strong energy condition. Incidentally, for the Kalb-Ramond field the field equation ∇ _μH^μρσ=0 merely says that it can be written in terms of an axionic field; it is the identity ∇_[μH_αβρ]=0 (or, equivalently, ∇ _μT^μν=0) that provides the differential equation satisfied by h(r).
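The component structure quoted above is easy to check by brute force. The following is a minimal sympy sketch of that verification under the stated assumptions (static spherical symmetry, with H_023 the only independent component, abbreviated here by the constant symbol k since no derivatives of it are needed); all symbol names are ours.

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
nu, lam = sp.Function('nu')(r), sp.Function('lambda')(r)
k = sp.Symbol('k')                      # stands for H_{023}

g = sp.diag(-sp.exp(nu), sp.exp(lam), r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

def H(a, b, c):                         # lower-index field strength
    perms = {(0, 2, 3): 1, (2, 3, 0): 1, (3, 0, 2): 1,
             (0, 3, 2): -1, (3, 2, 0): -1, (2, 0, 3): -1}
    return perms.get((a, b, c), 0) * k

def Hup(a, b, c):                       # all three indices raised
    return sum(ginv[a, p]*ginv[b, q]*ginv[c, s]*H(p, q, s)
               for p in range(4) for q in range(4) for s in range(4))

H2 = sum(H(a, b, c)*Hup(a, b, c)
         for a in range(4) for b in range(4) for c in range(4))

def T(mu, nu_):                         # mixed components T^mu_nu
    delta = 1 if mu == nu_ else 0
    quad = sum(Hup(mu, p, q)*H(nu_, p, q) for p in range(4) for q in range(4))
    return sp.simplify(sp.Rational(1, 6)*(3*quad - sp.Rational(1, 2)*H2*delta))

h2 = sp.exp(-nu)*k**2/(2*r**4*sp.sin(th)**2)    # h(r)^2 from the text
assert sp.simplify(T(0, 0) + h2) == 0           # T^0_0 = -h^2
assert sp.simplify(T(1, 1) - h2) == 0           # T^1_1 = +h^2
assert sp.simplify(T(2, 2) + h2) == 0 and sp.simplify(T(3, 3) + h2) == 0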
Let us now briefly mention the existing results in the literature in the context of scalar-tensor theories, to provide a comparison with the results of our approach. An initial attempt to understand the Buchdahl limit in the context of Brans-Dicke theory (a particular scalar tensor model) was presented in <cit.>, where a conformal transformation was used to transform the Brans-Dicke action in the Jordan frame (i.e., where the action has a coupling between the Ricci scalar R and the scalar field ϕ) to that in the Einstein frame (where the Ricci scalar has no coupling with the scalar field). It was then demonstrated that the Buchdahl limit in Brans-Dicke theory is larger compared to the general relativistic scenario. Later on, several results have confirmed the above claim in various different scalar tensor models <cit.>. Thus in this work we provide yet another route to a scalar tensor description of gravity, namely using the Kalb-Ramond field. In the later part of this work we will explore whether this model also shares the same scenario as far as the Buchdahl limit is concerned. Since a direct correspondence with earlier results will certainly bolster our claims, we will explore this connection with earlier works in <ref> in a detailed manner.

In the present context, given the contribution from the Kalb-Ramond field as expressed in <ref> and <ref> respectively, one can write down the corresponding gravitational field equations in the context of spherical symmetry. Further, as far as the stellar interior is concerned, the normal matter is taken to be a perfect fluid with energy-momentum tensor diag{-ρ(r),p(r),p(r),p(r)}. This completes our preliminary discussion and yields the necessary ingredients that we will require in later sections while discussing the effect of the Kalb-Ramond field on stellar structure in four spacetime dimensions. Before concluding this section, let us address the corresponding situation in the brane world scenario, where both gravity and the Kalb-Ramond field live in the five dimensional bulk, while we are interested in the gravitational dynamics on the four dimensional brane. There are several ways of handling this issue. For example, one may average the bulk Einstein's equations over the extra dimension and hence arrive at a gravitational equation on the brane (this method was adopted in <cit.>), or one may project the five dimensional equations on a four dimensional hypersurface. Besides these two, there are other perturbative schemes available to determine the metric on the brane, inheriting bulk corrections <cit.>. Nonetheless, we will follow the second pathway of projecting the bulk equations in this work, which was first developed in <cit.>. Later on, there have been numerous works based on this projective scheme; for a small representative set, see <cit.>. Since the details of this procedure are well established and discussed at length in the above works, we concentrate here on the basic ingredients necessary for our later purposes. The bulk field equations presented in <ref> can be appropriately projected on the brane hypersurface by using the projector h^A_B=δ ^A_B-n^An_B, where n_A is the normal to the brane hypersurface. Using the projector h^A_B and incorporating the Gauss-Codazzi relation, it is possible to write down the brane curvature tensor in terms of the bulk one. In this process one also derives the brane Ricci tensor and hence the brane Ricci scalar in terms of the bulk curvature components. Use of all these results relating brane curvature tensors to bulk curvature tensors enables one to write down the bulk gravitational field equations in terms of curvatures on the four dimensional brane hypersurface with additional contributions inherited from the bulk. In particular, from a purely gravitational point of view, the bulk Einstein tensor G_AB maps into ^(4)G_μν+E_μν, where ^(4)G_μν is the standard Einstein tensor on the brane and E_μν=W_μA ν Bn^An^B is the additional contribution inherited from the bulk, dependent on the bulk Weyl tensor W_μ A ν B <cit.>.
Similarly, the Kalb-Ramond field present in the bulk (affecting the bulk Einstein's equations through the bulk energy momentum tensor presented in <ref>) also gets projected on the brane hypersurface, and a particular combination of this projection acts as a source of the four-dimensional effective gravitational field equations, along with normal matter, which of course is confined to four dimensions <cit.>. In this context, the effect of extra dimensions as well as that of the Kalb-Ramond field modifies the four dimensional gravitational field equations as, ^(4)G_μν+E_μν=8π G_4{T_μν^(matter)+Π_μν^(matter)+^(4)T_μν^(KR)} , where the physical interpretation of the terms appearing on the left hand side has been discussed earlier; in particular, E_μν is the projection of the bulk Weyl tensor on the brane hypersurface. On the other hand, the right hand side has contributions from three parts: (a) the four dimensional matter fields, characterized by T_μν^(matter); (b) the energy momentum tensor obtained by projecting the Kalb-Ramond field on the brane, denoted ^(4)T_μν^(KR); and (c) the energy-momentum tensor Π _μν^(matter), which is quadratic in T_μν^(matter). The explicit expression for the brane energy momentum tensor ^(4)T_μν^(KR) originating from the bulk Kalb-Ramond field becomes, ^(4)T_μν^ KR =2/3G_5/G_4[T_μν+(T_ABn^An^B-1/4T_A^A)g_μν] =2/3G_5/G_4[1/2H_μαβH_ν^ αβ -3/16{H_αβρH^αβρ}g_μν] . The first equality follows from the projection scheme and has been elaborated in <cit.>. Moreover, T_AB appearing on the right hand side of the first expression is the bulk energy momentum tensor of the Kalb-Ramond field, expressed as in <ref>. In order to arrive at the second expression we have assumed that the field B_AB is independent of the extra dimensional coordinate, as well as n^AB_AB=0. Both of these are natural from the perspective of a brane observer and can be motivated along the following lines. First of all, there is a large amount of gauge freedom in the Kalb-Ramond field, namely B_μν→ B_μν+(∂ _μA_ν-∂ _νA_μ), where A_μ is an arbitrary vector field. Using this gauge freedom one can set n^AB_AB=0, known in the literature as the Randall-Sundrum gauge (since it was first used in the context of gravitational perturbations in <cit.>). The other condition (i.e., that B_μν is independent of the extra dimension) can be argued from the mode decomposition of the Kalb-Ramond field, where it turns out that the massless mode has no dependence on the extra dimension and is purely a function of the brane coordinates <cit.>. Since we are interested in the lowest lying, i.e., massless, mode alone, the Kalb-Ramond field cannot have any extra dimensional dependence, thereby justifying the previous assumptions. We would also like to point out that, since the above energy momentum tensor is not derivable from a four dimensional action, the conservation of ^(4)T_μν^(KR) is not immediate from the field equation of the Kalb-Ramond field. Thus one generally treats the field equation of the Kalb-Ramond field separately and considers the total combination -E_μν+ ^(4)T_μν^(KR)+Π _μν^(matter) to be conserved. The above energy momentum tensor ^(4)T_μν^(KR) surprisingly has some very interesting properties; e.g., if we consider the static and spherically symmetric situation, then only the H_023 (or H^023) component of the full field strength is non-zero.
In this case the components of the induced brane energy momentum tensor become, ^(4)T_0^0 (KR) =2/3G_5/G_4[1/2{2H_023H^023} -3/16{6H_023H^023}]=-1/12G_5/G_4H_023H^023=e^-ν(r)/12r^4sin ^2θ√(6/8π G_4λ_ T) H_023^2≡h̃(r)^2/√(8π G_4λ_ T)= ^(4)T_2^2 (KR)= ^(4)T_3^3 (KR); ^(4)T_1^1 (KR) =2/3G_5/G_4[ -3/16{6H_023H^023}]=3e^-ν(r)/4r^4sin ^2θG_5/G_4H_023^2 =9h̃(r)^2/√(8π G_4λ _ T) , where λ_ T=6(G_4/8π G_5^2) is the brane tension. Thus we immediately observe that the induced energy momentum tensor on the brane has negative energy density. One can immediately check that T_abu^au^b∝ -h̃^2 for static observers, while {T_ab-(1/2)Tg_ab}u^au^b∝ -h̃^2+(1/2)(12h̃^2)=5h̃^2. Thus the induced energy momentum tensor on the brane from a bulk Kalb-Ramond field violates the weak energy condition but does satisfy the strong energy condition. This should not come as a surprise, as there have been numerous instances in the literature, in various other contexts, where weak energy conditions are violated on the brane while being satisfied in the bulk (which is true for the scenario presented here as well). For example, in the context of a black hole on the brane it was argued that the induced energy density on the brane must be negative to ensure the attractive nature of gravity on the brane <cit.>. Furthermore, the existence of negative energy density on the brane hypersurface has appeared in numerous other contexts, e.g., (a) Kaluza-Klein reduction to a lower dimensional hypersurface <cit.>, (b) the trajectory of a test particle on a lower dimensional hypersurface and the appearance of an extra force <cit.>, (c) topological defects on the hypersurface, possibly created by a moving black hole <cit.>, or due to some specific compactification scheme <cit.> (see also <cit.>). Thus the above provides one more instance of the violation of the weak energy condition on the brane, namely through the bulk Kalb-Ramond field. Hence this is by no means unusual; it merely provides a new pathway for understanding the energy conditions in the braneworld scenario. Having discussed the physics involved as well as the basic mathematical formalism, we will next consider stellar structure and hence the Buchdahl limit in these scenarios.

§ KALB-RAMOND FIELD IN FOUR DIMENSIONS AND LIMIT ON STELLAR STRUCTURE The basic field equations describing both gravity and the Kalb-Ramond field in four spacetime dimensions have been elaborated in the previous section. In particular, we have explicitly demonstrated that in this context there exists a single degree of freedom in the Kalb-Ramond field that we should worry about. In the case of a static spacetime with spherical symmetry, the field equations for gravity as well as for the Kalb-Ramond field simplify considerably. The gravity sector is determined by the two unknowns λ(r) and ν(r) appearing in the spacetime geometry through <ref>, while the information regarding the Kalb-Ramond field is essentially contained in the unknown function h(r), introduced in <ref>. Considering the interior of a stellar object to be filled with a perfect fluid of energy density ρ(r) and pressure p(r), the gravitational field equations become, e^-λ(1/r^2-λ '/r)-1/r^2 =8π G_4(-ρ-h^2) , e^-λ(ν'/r+1/r^2)-1/r^2 =8π G_4(p+h^2) , 1/2e^-λ(ν”+ν'^2/2+ν'-λ'/r-ν'λ'/2) =8π G_4(p-h^2) . On the other hand, the conservation equation for the fluid as well as the field equation for the Kalb-Ramond field take the following simple forms in the context of a static and spherically symmetric spacetime, p'+ν'/2(p+ρ) =0 , h'+ν'/2h+2/rh =0 .
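Although we treat the constant-density case analytically below, the system above can also be integrated numerically. The following is a minimal scipy sketch for a uniform-density interior; the geometrised units and the central values of p and h are illustrative choices of ours, and the integration stops at the surface, where the pressure vanishes.

import numpy as np
from scipy.integrate import solve_ivp

G4, rho_c = 1.0, 1.0                       # geometrised units, uniform density

def rhs(r, y):
    m, p, h = y
    # nu' from the (rr) Einstein equation, with e^{-lambda} = 1 - 2 G4 m/r:
    dnu = (2*G4*m/r**2 + 8*np.pi*G4*r*(p + h**2)) / (1 - 2*G4*m/r)
    return [4*np.pi*r**2*(rho_c + h**2),   # m' from the (tt) equation
            -0.5*dnu*(p + rho_c),          # fluid conservation
            -0.5*dnu*h - 2*h/r]            # Kalb-Ramond field equation

surface = lambda r, y: y[1]                # p(r) = 0 marks the surface
surface.terminal = True

r0 = 1e-3                                  # start just off the origin
sol = solve_ivp(rhs, [r0, 10.0], [4*np.pi*rho_c*r0**3/3, 0.5, 0.1],
                events=surface, rtol=1e-10, atol=1e-12)
R, M = sol.t[-1], sol.y[0, -1]
print("surface radius:", R, "  compactness 2*G4*M/R:", 2*G4*M/R)

Scanning the central pressure upwards, the compactness 2G_4M/R should saturate at a value shifted by the Kalb-Ramond amplitude, which is precisely the effect the bound derived below quantifies.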
Then <ref> can be integrated to yield, exp(-ν/2)=A(ρ_ c+p). Given this, one can also integrate <ref> to obtain the contribution from the Kalb-Ramond field as: h(r)=(1/r^2)exp(-ν/2)=(A/r^2)(ρ_ c+p). From this expression it is evident that h(r) decreases with an increase of the radial coordinate r. Hence the effective density ρ+h^2 also decreases as the surface of the star is approached. We will use this result later on. However, the above exact solutions in the case of constant density do not help much, since p(r) is still undetermined, which essentially makes h(r) an arbitrary function of the radial coordinate. Interestingly, the three Einstein's equations presented above are not independent: given any two of them along with <ref> and <ref>, one can arrive at the remaining one. We will demonstrate this feature in an explicit manner, since it will provide an important relation which will be useful in our derivation of Buchdahl's limit. We will take <ref> and <ref> as the two independent equations and shall derive <ref> from them, where <ref> and <ref> will be used extensively. The demonstration goes as follows: one starts by differentiating <ref>, which leads to, e^-λ(ν”/r-ν'/r^2-2/r^3-λ'ν'/r-λ'/r^2)+2/r^3=8π G_4(p'+2hh') . Using the conservation equations for the fluid and the Kalb-Ramond field from <ref> and <ref>, one can evaluate the right hand side of <ref>, leading to, 8π G_4(p'+2hh') =8π G_4[-ν'/2(p+ρ)+2h(-ν'/2h-2/rh) ] =e^-λ(-ν'λ'/2r-ν'^2/2r)-32π G_4/r h^2 . Substitution of this particular expression back into <ref> leads to, -32π G_4 h^2=e^-λ(ν”+ν'^2/2-λ'ν'/2+ν'-λ'/r) -16π G_4(p+h^2) , where in the last line we have used one of the Einstein's equations, namely <ref>. It is evident that the above equation, when divided by a factor of two, coincides with <ref> and hence the third Einstein's equation is redundant. Even then there is one ambiguity present in the system which is worth mentioning. There are four independent differential equations governing the behaviour of this particular system, while there are five unknowns: ν(r),λ(r),h(r),p(r) and ρ(r). This problem is generally circumvented by assuming an equation of state for the perfect fluid, which we will not need here. We will now proceed further and shall determine the fundamental limit to the stellar structure, known in the literature as Buchdahl's limit, following its discovery by Buchdahl in the context of general relativity. In the remaining part of this section we will derive Buchdahl's limit in the presence of the Kalb-Ramond field explicitly. It is clear that one can integrate <ref>, resulting in, e^-λ=1-2G_4m(r)/r; m(r)=∫ _0^rdr 4π r^2(ρ+h^2) . Since h^2>0, it is clear that the total gravitational mass experienced outside the star is larger than the actual matter density present inside, with the extra gravitating mass coming from the KR field strength. Let us now derive Buchdahl's limit; for that, let us start from <ref> and rewrite it as, 2rν”+rν'^2-rλ'ν'-2ν'=4/r(1-e^λ)+2λ'-64π G_4 rh^2e^λ . One can further use the following two identities d/dr[1/re^-λ/2de^ν/2/dr]=e^(ν-λ)/2/4r^2[2rν”+rν'^2-2ν'-rν'λ'] , d/dr[1-e^-λ/2r^2] =e^-λ/2r^3[rλ'-2(e^λ-1)] , to rewrite <ref> as, e^-(ν+λ)/2d/dr[1/re^-λ/2d/dre^ν/2]=d/dr(1-e^-λ/2r^2)-16π G_4/rh^2 . At this stage one puts forward some sensible requirements, e.g., that the average density ρ_ av=m(r)/r^3 should decrease outwards. Even though the average density involves a contribution from the Kalb-Ramond field, since the field strength itself decreases outwards the above condition will be trivially satisfied.
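As an aside, the constant-density solutions quoted at the beginning of this section are straightforward to verify symbolically. The following sympy sketch is our own check, with ν(r) left as an arbitrary function; it confirms that exp(-ν/2)=A(ρ_c+p) and h=(1/r^2)exp(-ν/2) solve the two conservation equations.

import sympy as sp

r = sp.Symbol('r', positive=True)
A, rho_c = sp.symbols('A rho_c', positive=True)
nu = sp.Function('nu')(r)

# claimed constant-density solutions: exp(-nu/2) = A*(rho_c + p)
p = sp.exp(-nu/2)/A - rho_c
h = sp.exp(-nu/2)/r**2          # h(r) = (1/r^2) exp(-nu/2)

eq_fluid = sp.diff(p, r) + sp.Rational(1, 2)*sp.diff(nu, r)*(p + rho_c)
eq_kr    = sp.diff(h, r) + sp.Rational(1, 2)*sp.diff(nu, r)*h + 2*h/r

print(sp.simplify(eq_fluid), sp.simplify(eq_kr))   # -> 0 0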
Further, given the form of e^-λ as in <ref>, it is clear that the first term on the right hand side of <ref> is essentially dρ_ av/dr. Since we have already assumed that the average density decreases outwards, it is clear that the right hand side of <ref> is negative, leading to, d/dr[1/re^-λ/2d/dre^ν/2]≤ 0 . Integrating the above relation from some radius r within the star to the surface of the star, given by the radius r_0, we obtain, 1/re^-λ/2de^ν/2/dr≥1/r_0[e^-λ/2d/dre^ν/2]_r=r_0=1/2r_0e^(ν_0-λ_0)/2ν'_0 , where quantities with subscript `0' denote that they are to be evaluated at the surface of the star, located at r=r_0. At this stage one generally assumes that both the metric and its derivatives are continuous across the surface of the star, i.e., the exterior solutions for λ and ν must be equal to the interior solutions at the surface, along with their derivatives. This prompts us to replace the metric and its derivatives at the surface by the respective values associated with the external solution. However, for the time being we will keep the above structure arbitrary, which will be useful later. Further multiplying both sides of <ref> by re^λ/2 and integrating again from the origin to the surface of the star, we obtain, e^ν/2(r=0)≤ e^ν_0/2-1/2r_0e^(ν_0-λ_0)/2ν'_0∫ _0^r_0dr r/√(1-2G_4m(r)/r) . As the average density is assumed to be decreasing outwards, it immediately follows that m(r)/r^3>M/r_0^3 and thus the above inequality will hold more strongly if we replace m(r)/r by (M/r_0^3)r^2. With this modification, the above inequality becomes, e^ν/2(r=0) ≤ e^ν_0/2-1/2r_0e^(ν_0-λ_0)/2ν'_0∫ _0^r_0dr r/√(1-2G_4M/r_0^3r^2)=e^ν_0/2-1/2r_0e^(ν_0-λ_0)/2ν'_0{r_0^2/1-e^-λ_0}(1-e^-λ_0/2) . Since both the pressure and the contribution from the Kalb-Ramond field are positive and finite at the origin, it follows that e^ν/2(r=0)>0. Applying this result to <ref> we immediately obtain the following inequality, e^ν_0/2-1/2r_0e^(ν_0-λ_0)/2ν'_0{r_0^2/1-e^-λ_0}(1-e^-λ_0/2)>0 . Note that the term e^ν_0/2 can be factored out of the above inequality, as e^ν_0/2>0 as well. Further evaluating <ref> at r=r_0 we obtain the following expression for ν'_0, ν'_0/2r_0=-1/2r_0^2+1/2e^λ_0{1/r_0^2+8π G_4h_0^2} . Substitution of the above expression for ν'_0/2r_0 in <ref> and subsequent multiplication by e^-λ _0/2 leads to the following expression, e^-λ_0/2-r_0^2/1-e^-λ_0[1/2r_0^2(1-e^-λ_0) +4π G_4h_0^2](1-e^-λ_0/2)>0 . Hence one can solve this inequality for a corresponding bound on exp(-λ_0), which will certainly be different from the corresponding bound in Schwarzschild spacetime. Simplification of the above inequality leads to a quadratic expression for (1-e^-λ_0), one root of which corresponds to a negative value and hence can be neglected, while the other root provides the necessary inequality, 1-e^-λ_0<4/9[1-6π G_4r_0^2 h_0^2+√(1+6π G_4h_0^2r_0^2)] . The corresponding bound on the ADM mass can be obtained by writing down e^-λ _0 in terms of the same and using the inequality presented in <ref>. In order to obtain e^-λ _0 in terms of the ADM mass we need to determine the exterior solution, which to the leading order in the parameter h_0 becomes e^-λ=1-2G_4ℳ/r+8π G_4h_0^2r_0^4/r^2+𝒪(r^-3) , e^ν=1-2G_4ℳ/r+𝒪(r^-3); h(r)=h_0(r_0^2/r^2)+𝒪(r^-3) , where ℳ corresponds to the desired ADM mass associated with the exterior spacetime. Thus evaluating e^-λ on the surface of the star and substituting it in <ref> provides the desired bound on ℳ as, 2G_4ℳ/r_0<4/9[1+12π G_4r_0^2 h_0^2+√(1+6π G_4h_0^2r_0^2)] .
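Two quick checks on the steps above; this is our own illustrative sketch, not from the paper. The sympy part verifies the radial integral that produces the factor {r_0^2/(1-e^-λ_0)}(1-e^-λ_0/2), writing a = 2G_4M/r_0 = 1-e^-λ_0; the numeric part evaluates the final bound on 2G_4ℳ/r_0 as a function of u ≡ πG_4h_0^2r_0^2.

import numpy as np
import sympy as sp

# symbolic check of the integral, with a = 2*G4*M/r0 = 1 - exp(-lambda_0)
r, r0, a = sp.symbols('r r0 a', positive=True)
integrand = r/sp.sqrt(1 - a*r**2/r0**2)
F = -(r0**2/a)*sp.sqrt(1 - a*r**2/r0**2)          # candidate antiderivative
print(sp.simplify(sp.diff(F, r) - integrand))     # -> 0
print(sp.simplify(F.subs(r, r0) - F.subs(r, 0)))  # -> (r0**2/a)*(1 - sqrt(1 - a))

# numeric evaluation of the final bound, with u = pi*G4*h0^2*r0^2
def bound(u):
    return (4.0/9.0)*(1.0 + 12.0*u + np.sqrt(1.0 + 6.0*u))

for u in (0.0, 0.01, 0.052, 0.1):
    print(f"u = {u:5.3f}  ->  2*G4*M/r0 < {bound(u):.4f}")
# u = 0 returns 8/9 ~ 0.8889, i.e. Buchdahl's limit of general relativity;
# the bound grows monotonically with the Kalb-Ramond field strength.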
One can easily check the correctness of this result by taking the h_0→ 0 limit, for which one immediately recovers Buchdahl's limit, G_4ℳ/r_0<4/9. It is clear from the above result that in the presence of the Kalb-Ramond field the upper bound on G_4ℳ/r_0 is larger than 4/9 (see <ref>). Thus as the strength of the Kalb-Ramond field increases, the compactness limit (or, equivalently, the Buchdahl limit) on the stellar structure also increases compared to that in general relativity. However, one point must be emphasized at this stage. Even though 2G_4ℳ/r_0 can become much larger than 8/9 due to higher and higher values of the field parameter h_0, it is fundamentally bounded by the black hole horizon. To be precise, if the surface of the star is within the event horizon, then the limit makes no sense. Thus one has to ensure that r_0>r_ h, where r_ h is the location of the horizon obtained from <ref>, in order to have a sensible stellar structure. Thus for (2G_4ℳ/r_0)<(2G_4ℳ/r_ h), the bound on stellar structure is provided by (2G_4ℳ/r_0). This corresponds to the region to the left of the red arrow in <ref>. In the opposite scenario, the ultimate bound on stellar structure is provided by the horizon radius, as depicted by the region to the right of the red arrow in <ref>. Finally, the choice π G_4h_0^2r_0^2∼ 0.052 marks the point where the stellar radius and the horizon radius coincide, as illustrated by the red arrow in <ref>. As evident, in both these situations one can have stellar configurations having more mass packed into a smaller volume, when compared to the corresponding situation in general relativity. For completeness, we would also like to comment on the connection between the spherically symmetric solution presented above and those derived earlier in the literature (see, for example <cit.>). In particular, we will be considering the spherically symmetric and static solution obtained in <cit.>, in the context of Einstein gravity in presence of a non-trivial scalar field. Some generalization of this solution has been achieved and discussed in <cit.>, while an exact solution in the context of gravitational collapse in presence of a scalar field has been addressed in <cit.>. To understand the possible connection of the solution presented in this work with those in the earlier literature, consider first the solution for the scalar field. The profile of the scalar field was given by ∼ 2λln{1-(4m^2/r^2)} <cit.>. Thus to leading order in r^-1, the scalar field solution behaves as ∼ (8λ m^2/r^2). As evident from <ref>, this leading order behaviour for the scalar field exactly matches the solution for the Kalb-Ramond field in the present context. Further, an inspection of the g_tt component of the metric, when converted to the spherically symmetric form starting from the isotropic coordinates as presented in <cit.>, reveals that to leading order it scales as ∼ 1-(2m/r)+𝒪(1/r^3), which can also be compared with <ref>. Thus the static and spherically symmetric solution presented here is indeed consistent with the earlier findings in the literature. The above bound on the ADM mass ℳ illustrates that, given a certain radius r_0 of a compact stellar object, the maximum mass one can associate with it is larger in the presence of the Kalb-Ramond field. Thus one can pack extra mass into the stellar structure (a similar scenario appears in f(R) gravity as well <cit.>). This provides an interesting testbed for the Kalb-Ramond field.
For example, if a compact object (possibly a neutron star) is observed whose G_4ℳ/r_0 ratio is larger than 4/9, then it can possibly signal the existence of a non-zero Kalb-Ramond field. Given the significance of the above result in the astrophysical context, it would be of interest if some comment regarding the stability of the solution could be made. For this purpose one needs to consider three perturbation modes, namely the scalar modes, the vector modes and of course the tensor modes. The scalar perturbation is essentially due to a scalar field, while electromagnetic fields are responsible for the vector perturbations. Finally, gravitational perturbations are represented by the tensor modes. In a spherically symmetric background all these perturbations satisfy a certain master equation governing the evolution of the perturbations. These equations can generically be written as <cit.> d^2Ψ _s/dr_*^2+{ω ^2-V_s(r_*)}Ψ_s=0 , where Ψ _s is the perturbation variable and s=0,1,2 for scalar, vector and tensor modes respectively. Since the spacetime is static, the time dependence has been separated by assuming an e^± iω t dependence. The stability of the perturbation mode hinges on the fact that there are no growing modes present, which, loosely speaking, originates from the positivity of the potential V_s. For example, in the context of the scalar perturbation the structure of the potential in the present context reads V_0 =e^νm^2+e^νℓ(ℓ+1)/r^2-1/2r∂ _re^ν-λ . By substituting the expressions for e^ν and e^-λ from <ref> and <ref> respectively, we observe that this coincides with the corresponding situation in Schwarzschild spacetime, with higher order corrections from the presence of the Kalb-Ramond field. This is because e^ν differs from the Schwarzschild solution only at 𝒪(1/r^3). A similar consideration will apply to the vector and tensor perturbations as well, where the leading contribution will be from Schwarzschild spacetime with sub-leading corrections due to the Kalb-Ramond field. The field strength h_0 cannot be a large number (otherwise, solar system tests like the bending of light would have observed it, see <cit.>) and hence the stability of the Schwarzschild solution ensures that the corresponding solution discussed in this framework is also stable. However, in order to arrive at a complete picture it is necessary to work through the black hole perturbation theory in its full gory detail, which will be addressed elsewhere. The above results are also in complete agreement with the earlier results in the literature in the context of Buchdahl's limit in scalar tensor theories. In various scalar tensor models of gravity, which mostly are of Brans-Dicke origin, it has been demonstrated that Buchdahl's limit increases, i.e., one can have more mass at a smaller radius. In particular, in <cit.> it was claimed that Buchdahl's limit in the context of scalar tensor theories can even exceed the value unity, representing the black hole horizon in general relativity. From <ref> it is clear that such a scenario is indeed present in our model as well: with a higher value for the Kalb-Ramond field, the Buchdahl limit can definitely cross the black hole barrier. Furthermore, it was argued in <cit.> that if the energy density and pressure associated with the scalar field satisfy the condition ρ-3p>0, then the original Buchdahl limit will be retrieved. One can trivially check that for the Kalb-Ramond field such a condition can never be satisfied and hence we will always have modifications to the Buchdahl limit.
The above results explicitly demonstrate the robustness of the method presented in this work and are in complete agreement with the earlier literature <cit.>. This completes our discussion on stellar structure in the presence of the Kalb-Ramond field in four spacetime dimensions. We will now discuss the corresponding scenario when extra spacetime dimensions are present. § KALB-RAMOND FIELD IN INDUCED GRAVITY THEORY AND LIMITS ON STELLAR STRUCTURE In the previous section we discussed in detail how Buchdahl's limit gets affected in the presence of the Kalb-Ramond field in four spacetime dimensions. As emphasized earlier, the Kalb-Ramond field, being a closed string mode, can probe the higher dimensions. Thus it is legitimate to ask how the above calculation, and in particular Buchdahl's limit, gets affected by the presence of the Kalb-Ramond field in higher dimensions. We will answer this question by using the effective field equation technique to obtain the gravitational field equations on the brane hypersurface. The presence of extra dimensions will bring an additional parameter, namely the brane tension λ_ T, into the picture. In addition there will be several extra pieces in the gravitational field equations on the brane, e.g., effects of the bulk Weyl tensor E_μν, the projection of the energy momentum tensor of the bulk Kalb-Ramond field, etc. Using the results obtained in <ref> in the context of spherical symmetry, the corresponding field equations in the interior of the perfect fluid star become, e^-λ(1/r^2-λ'/r)-1/r^2 =-8π G_4ρ-8π G_4/2λ_ Tρ ^2-6U/8π G_4λ _ T+√(8π G_4/λ _ T)h̃^2 , e^-λ(ν'/r+1/r^2)-1/r^2 =8π G_4p+8π G_4/2λ_ Tρ(ρ+2p) +2/8π G_4λ_ T(U+2P)+9√(8π G_4/λ _ T)h̃^2 , 1/2e^-λ(ν”+ν'^2/2+ν'-λ'/r-ν'λ'/2) =8π G_4p +8π G_4/2λ _ Tρ(ρ+2p) +2/8π G_4λ _ T(U-P)+√(8π G_4/λ _ T)h̃^2 . Here U=-(G_4/G_5)^2E_μνu^μu^ν is the “dark radiation” term and P=(1/2)(G_4/G_5)^2E_μν(u^μu^ν-3r^μr^ν) is the “dark pressure” term originating from the bulk Weyl tensor induced on the brane hypersurface. The vector u^μ corresponds to any timelike vector on the brane hypersurface, while r_μ corresponds to a spacelike vector on the brane hypersurface, such that u_μr^μ=0 <cit.>. From the above set of equations it is clear that in the limit λ _ T→∞ one recovers the Einstein's equations with a perfect fluid source. Having obtained the Einstein's equations in the context of brane spacetime with the Kalb-Ramond field, let us concentrate on the associated conservation relations. The conservation of the perfect fluid energy momentum tensor representing the matter content of the star is again given by the standard expression as in <ref>, while the field equation for the Kalb-Ramond field and the conservation of the remaining tensors yield, p' +ν'/2(p+ρ)=0 , h̃' +ν'/2h̃+2/rh̃=0 , 1/8π G_4λ _ T{U' +2ν'U+2P'+ν'P+6/rP}=-8π G_4/2λ _ Tρ'(ρ+p)+5√(8π G_4/λ _ T)[ν'/2h̃^2+2/rh̃^2] , where <ref> corresponds to ∇ _[μH_αβρ]=0, with λ _T assumed to be finite. Thus <ref> has no general relativity (λ _ T→∞) limit. On the other hand, <ref> originates by using <ref> and <ref> respectively in the conservation of {-E_μν+ ^(4)T_μν^ KR+Π _μν^ matter}. Note that in the limit λ _ T→∞ the last conservation relation becomes trivial. Given the above set of equations, one can infer some nice properties of the system without going into too much detail. For example, if the star has constant density ρ_ c, then <ref> and <ref> can be integrated to yield exp(-ν/2)=A(p+ρ_ c), and the Kalb-Ramond field will behave as h(r)=(1/r^2)exp(-ν/2)=(A/r^2)(p+ρ_c). Thus in this case as well the contribution from the Kalb-Ramond field decreases as one moves towards the surface of the star.
Further, in this case of a constant density star, from <ref> it is clear that the bulk stresses have to be nonzero both inside and outside the star, since h(r) is non-zero everywhere. This is in sharp contrast with the corresponding situation depicted in <cit.> and arises solely due to the presence of the Kalb-Ramond field. Moreover, <ref> cannot be integrated directly to provide an expression for U(r) when P(r)=0, unlike the situation in <cit.>, again due to the Kalb-Ramond field. Thus we conclude that there will be non-zero Weyl stresses present both inside and outside the stellar object in the presence of a non-zero Kalb-Ramond field (see also <cit.>). It is also possible to argue, using <ref> and <ref>, that the dark pressure P should be positive under reasonable physical assumptions. First of all, as already mentioned earlier, we assume that the energy density ρ and the pressure p of the perfect fluid acting as the building material of the star decrease outwards. This suggests that ρ' and p' are both negative. Therefore, as evident from <ref>, ν'>0, since (ρ+p) is positive definite. We also assume that an identical situation holds for the dark pressure and dark radiation as well; in other words, these two quantities decrease as one reaches the outer region of the stellar structure, implying both U' and P' to be negative <cit.>. With these reasonable assumptions, let us examine <ref> in some detail. Firstly, all the terms on the right hand side of <ref> are positive, thanks to the fact that ρ'<0 but ν'>0, while (ρ+p) is positive definite. Therefore the left hand side of <ref> should also be positive. However, the terms U' and P' are negative, leaving only the three terms depending linearly on either U or P to compensate them. In the regime of linear perturbation theory, with (brane curvature/bulk curvature) as the perturbation parameter, one can show that U=-E_μνu^μu^ν<0 <cit.> and therefore for <ref> to hold it is necessary that P>0. A similar conclusion can also be reached by using Big-Bang Nucleosynthesis as well as the Cosmic Microwave Background to understand the dark radiation term. Using the current estimates for primordial ^4He as well as the deuterium-to-hydrogen ratio, one can safely argue that negative values of U are much more favoured compared to the scenario with U being positive <cit.>. Thus perturbative estimates as well as observational compatibility argue in favour of negative dark radiation. Interestingly, the fact that the dark radiation term is negative is important for matching the interior solution to the exterior one, since the exterior solutions generically have negative dark radiation <cit.>. Hence we can conclude that, as far as the current scenario is concerned (in particular <ref>), it favours a positive value of the dark pressure term P, a fact which we will use in this work. Using <ref> one can solve for e^-λ and hence obtain the mass function in the interior of the stellar structure, leading to, e^-λ=1-2G_4m(r)/r; m(r)=∫ _0^rdr 4π r^2{ρ+ρ ^2/2λ _ T+6U/(8π G_4)^2λ _ T-h̃^2/√(8π G_4λ _ T)} . Let us now analyze the structure of the above equation. The first term in the expression for m(r) is the mass produced by the normal matter energy density, while the second one arises due to the presence of extra dimensions. The third one has a purely geometric origin, namely the projection of the bulk Weyl tensor, and the last bit is from the bulk Kalb-Ramond field.
The minus sign in front of the Kalb-Ramond field strength ensures that it effectively lowers the gravitational mass with respect to the situation when the Kalb-Ramond field is absent (i.e., compared to the h̃=0 situation). This is unlike the situation in <ref>, where the Kalb-Ramond contribution appears with a positive sign. Among the other two additional terms in <ref>, ρ^2 is always positive definite, while U can have either a positive or a negative contribution. To see that the three gravitational field equations are not independent (they cannot be, as there are only two unknown functions λ and ν), we can start with the derivative of <ref> with respect to the radial coordinate, resulting in, e^-λ(ν”/r-ν'/r^2-2/r^3-λ'ν'/r-λ'/r^2)+2/r^3 =-ν'/2[8 π G_4(p+ρ)+8√(8π G_4/λ _ T)h̃^2+8π G_4/2λ _ T2ρ(p+ρ) +2/8π G_4λ _ T(4U+2P)]-16/r√(8π G_4/λ _ T)h̃^2-12P/8π G_4λ _ T r . The term inside the square brackets is actually the subtraction of <ref> from <ref>, as one can immediately verify. Using this fact along with a rearrangement of terms, <ref> finally leads to, 1/re^-λ(ν”+ν'^2/2-λ'ν'/2+ν'-λ'/r) =2/r[e^-λ(ν'/r+1/r^2)-1/r^2]-16/r√(8π G_4/λ _ T)h̃^2-12P/8π G_4λ _ T r . As <ref> is substituted for the term within the square brackets on the right hand side, one gets back <ref>. Thus <ref> is indeed not an independent equation, but depends on the other two. The main reason behind this derivation is the fact that the above equation is quite useful for our purpose, namely the derivation of Buchdahl's limit. Keeping this in mind one can also rewrite <ref> such that the following relation is obtained, 2rν”+rν'^2-rλ'ν'-2ν'=4/r(1-e^λ)+2λ' -24 r P/8π G_4λ _ Te^λ-32r√(8π G_4/λ _ T)h̃^2e^λ . At this stage one can use the two identities introduced in <ref> and <ref> respectively, and then, simplifying the resulting expression further, we obtain the following result, e^-(ν+λ)/2d/dr[1/re^-λ/2de^ν/2/dr] =d/dr(1-e^-λ/2r^2) -6P/8π G_4λ _ T r-8√(8π G_4/λ _ T)h̃^2/r . Among the three terms on the right hand side, the last one, originating from the bulk Kalb-Ramond field, as well as the one from the dark pressure provide negative contributions, since following our argument below <ref> it turns out that P>0. Another way to justify the positivity of the dark pressure is as follows: note that the configurations for U and P inside the stellar structure cannot be arbitrary, as they have to match with the exterior solution as well. The exterior solution requires the pressure to be positive (see, e.g., <cit.>), which is mainly due to the origin of this term from the bulk Weyl tensor. Thus for reasonable exterior solutions we expect the “dark pressure” to be positive in the interior as well. On the other hand, as far as the first term on the right hand side of <ref> is concerned, <ref> ensures that it is the rate of change of the average density m(r)/r^3. This requires the dark radiation term to decrease outwards as the surface of the star is approached (which it indeed does, see <cit.>). Among the other terms in <ref>, ρ and ρ^2 are both decreasing functions of the radial coordinate and, as already pointed out, the Kalb-Ramond field strength h̃(r) decreases as one moves radially outwards. Thus one can safely impose the assumption that the average density should decrease outwards. This ensures that the following inequality is satisfied, d/dr[1/re^-λ/2de^ν/2/dr]<0 .
Given this, one can proceed as in the previous section, and following identical steps one finally arrives at the desired inequality involving e^λ alone, such that, e^-λ_0/2-[1/2r_0^2(1-e^-λ_0)+8π G_4/2λ _ Tρ_0^2/2 +2/8π G_4λ _ T(U_0/2+P_0)+9/2√(8π G_4/λ _ T)h̃_0^2]∫ _0^r_0dr re^λ/2 >0 . Regarding the integral, one can perform it by noting that the average density decreases outwards, as we have elaborated earlier. In particular, from <ref> it follows that m(r)/r>(m(r_0)/r_0^3)r^2, making the above inequality stronger. Defining M to stand for m(r_0), one can immediately integrate the above expression, leading to, e^-λ_0/2-r_0^2/1-e^-λ _0[1/2r_0^2(1-e^-λ_0)+8π G_4/2λ _ Tρ_0^2/2 +2/8π G_4λ _ T(U_0/2+P_0)+9/2√(8π G_4/λ _ T)h̃_0^2](1-e^-λ_0/2)>0 , where again any quantity with subscript `0' indicates that it is evaluated on the surface of the star. Working out the above inequality in an explicit manner, the corresponding limit on exp(-λ _0) yields, 1-e^-λ _0<4/9[1-3/2(8π G_4/2λ _ Tρ_0^2/2 +2/8π G_4λ _ T(U_0/2+P_0)+9/2√(8π G_4/λ _ T)h̃_0^2)r_0^2+√(1+3/2(8π G_4/2λ _ Tρ_0^2/2 +2/8π G_4λ _ T(U_0/2+P_0)+9/2√(8π G_4/λ _ T)h̃_0^2)r_0^2)] . Since the metric functions are continuous across the surface of the star, e^-λ _0 appearing in the above inequality can be replaced by the corresponding metric element in the exterior region in the limit r→ r_0. The exterior solution without the Kalb-Ramond field has been derived in <cit.> and corresponds to U=-(P/2)=-(4/3)π G_4λ _ T(|q|/r^4). Thus with the above choices for U and P, but including the Kalb-Ramond field as well, one arrives at the following static and spherically symmetric solution, e^-λ =1-2G_4ℳ/r-3P_0r_0^4/8π G_4λ _ T1/r^2-√(8π G_4/λ _ T)h_0^2r_0^4/r^2 +𝒪(r^-3); h̃(r)=h̃_0(r_0/r)^2+𝒪(r^-3) , e^ν =1-2G_4ℳ/r-3P_0r_0^4/8π G_4λ _ T1/r^2-5√(8π G_4/λ _ T)h_0^2r_0^4/r^2 +𝒪(r^-3) . Here P_0 stands for the value of the “dark pressure” on the stellar surface, which is positive definite, and the relation 2U_0+P_0=0 <cit.> can be used to replace all the “dark radiation” terms by “dark pressure” on the stellar surface. Further substituting the metric element e^-λ _0, evaluated at the surface of the star from the above equation, in <ref>, one finally obtains the following bound on the ADM mass of the stellar object, 2G_4ℳ/r_0<4/9[1-3/2(8π G_4/2λ _ Tρ_0^2/2 +3P_0/4π G_4λ _ T +6√(8π G_4/λ _ T)h̃_0^2)r_0^2+√(1+3/2(8π G_4/2λ _ Tρ_0^2/2 +3P_0/16π G_4λ _ T+9/2√(8π G_4/λ _ T)h̃_0^2)r_0^2)] . This completes our discussion regarding the derivation of Buchdahl's limit in the presence of the Kalb-Ramond field and higher spatial dimensions. For completeness, let us briefly comment on the stability of the above solution. For this purpose, as in the previous scenario, one needs to consider the scalar, vector and tensor perturbations. Since the background is still spherically symmetric, all these perturbations satisfy the master equation presented in <ref>. The time dependence will again be through the e^± iω t term, and the stability essentially corresponds to the positivity of the potential V_s appearing in <ref>. In the present context both e^ν and e^-λ behave as the corresponding metric elements associated with the Reissner-Nordström black hole with a negative Q^2, owing to the smallness of the parameter h_0. One can immediately verify, given the metric elements, that the potential V_s is necessarily positive irrespective of the value of s <cit.>. Thus a preliminary analysis suggests that the solution considered above is indeed stable.
However, in order to get the full picture and a concrete statement regarding stability, one must work through the black hole perturbation theory and the associated quasi-normal modes. We hope to address these issues elsewhere. As evident from <ref>, the bound on the ADM mass 2G_4ℳ/r_0 is smaller than the value 8/9 (see <ref>). This is in complete contrast with the result obtained in the previous section, where the Buchdahl limit was higher compared to the corresponding situation in general relativity. Thus in this particular scenario the maximum mass that a compact stellar object can inherit at a fixed radius will be less than in general relativity. It will therefore be more difficult to probe this particular scenario, since if one observes a compact stellar object with an ℳ/r_0 ratio less than 4/9, it can correspond either to general relativity or to the current scenario. It will be problematic to disentangle these two effects. § DISCUSSIONS We have explicitly demonstrated how the presence of the Kalb-Ramond field (or, equivalently, spacetime torsion) as well as that of extra dimensions modify the Buchdahl limit. While pursuing the above we have used some general principles, e.g., that the matter density should decrease outwards, in order to arrive at an inequality that depends only on the g_rr component of the metric. This possibly originates from the fact that only the three-curvature of a four dimensional spacetime encodes the gravity degrees of freedom. The above result also provides a new perspective on Buchdahl's limit and possibly a universal upper bound on the g_rr component. In particular, if we consider the Kalb-Ramond field in four spacetime dimensions, it follows that the bound on the (mass/radius) ratio is larger than 4/9, the value pertaining to general relativity. Thus at a certain radius one can introduce extra matter into the compact object. Hence if it is possible to detect a compact object with a (mass/radius) ratio larger than the general relativity prediction, one can infer the possible presence of the Kalb-Ramond field. On the other hand, when extra spatial dimensions are introduced, the effect of the bulk Kalb-Ramond field induced on the brane hypersurface leads to interesting features. For example, the effective energy momentum tensor on the brane violates the weak energy condition but does satisfy the strong energy condition. Similarly, there will be additional contributions to the gravitational field equations on the brane inherited from the presence of the bulk spacetime. These two effects result in modifications of the Buchdahl limit associated with a compact stellar object. However, unlike the scenario in four spacetime dimensions, as extra spatial dimensions are included the bound on the (mass/radius) ratio decreases in comparison to general relativity. This makes it difficult to explore possible observational avenues in this context regarding the presence of the Kalb-Ramond field as well as that of extra dimensions. § ACKNOWLEDGEMENTS Research of S.C. is supported by SERB-NPDF grant (PDF/2016/001589) from SERB, Government of India.
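As a closing numerical aside (our own sketch, not part of the paper): expanding a bound of the generic form appearing above, B = (4/9)[1 - a ε + √(1 + b ε)], for a small positive correction ε shows that the brane-world terms lower the compactness bound below the general relativity value 8/9 whenever a > b/2, which holds term by term for the coefficients quoted in the previous section.

import sympy as sp

eps, a, b = sp.symbols('epsilon a b', positive=True)
B = sp.Rational(4, 9)*(1 - a*eps + sp.sqrt(1 + b*eps))

print(sp.series(B, eps, 0, 2))
# -> 8/9 + (4/9)*(b/2 - a)*eps + O(eps**2).
# With a > b/2 the leading correction is negative, so the brane-world
# bound sits below 8/9, consistent with the discussion above: for the
# htilde_0^2 terms (a, b) = (6, 9/2); for the dark-pressure terms a is
# four times b; and for the rho_0^2 terms a = b. In all cases a > b/2.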
http://arxiv.org/abs/1708.08315v2
{ "authors": [ "Sumanta Chakraborty", "Soumitra SenGupta" ], "categories": [ "gr-qc", "hep-th" ], "primary_category": "gr-qc", "published": "20170825002355", "title": "Packing extra mass in compact stellar structures: An interplay between Kalb-Ramond field and extra dimensions" }
http://arxiv.org/abs/1708.07754v3
{ "authors": [ "D. Bazeia", "L. Losano", "M. A. Marques", "R. Menezes", "I. Zafalan" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170825141732", "title": "First Order Formalism for Generalized Vortices" }
On the repeated inversion of a covariance matrix M. de JongNWO-I, Nikhef, PO Box 41882, Amsterdam, 1098 DB Netherlands Leiden University, Leiden Institute of Physics, PO Box 9504, Leiden, 2300 RA Netherlands =========================================================================================================================================================================== In many cases, the values of some model parameters are determined by maximising the likelihood of a set of data points given the parameter values. The presence of outliers in the data and correlations between data points complicate this procedure. An efficient procedure for the elimination of outliers is presented which takes the correlations between data points into account. § INTRODUCTION In general, the values of some model parameters are optimally determined by maximising the likelihood of a set of data points given the parameter values (see e.g. <cit.>). In this, a data point that has too low a probability to match the model can be considered an outlier. The presence of such outliers in the data can readily be accommodated in the probability density function of the data points. It is, however, not straightforward to also take the correlations between the data points into account. If the underlying probability density functions are normal distributions, both the uncorrelated and correlated uncertainties of the data points can be incorporated in a single matrix. The values of the model parameters can then be determined by minimising the χ^2: χ^2 =ϵ^TV^-1 ϵ where ϵ and ϵ^T are the (N × 1) and (1 × N) vectors containing the distances between the model and the data points and V is an (N × N) matrix. The elements of V are set as follows. V_ii = (σ_i)^2 , V_ij = ∑_k∂ϵ_i/∂ u_k∂ϵ_j/∂ u_k(δ u_k)^2 , where σ_i refers to the uncorrelated uncertainty of data point i and u_k to some correlation parameter which itself has an uncertainty δ u_k. The terms in the summation of equation <ref> are commonly referred to as the covariances of the data points. The matrix V is therefore often called the covariance matrix. By construction, the matrix V is symmetric and can thus be inverted using an LDU decomposition. The computation of V^-1 then requires 𝒪(N^3) operations. The presence of outliers cannot easily be incorporated in the covariance matrix. So, it is desirable to remove the outliers and repeat the fit. For this, a criterion is required to identify an outlier. A common criterion is based on the value of the so-called standard deviation, D: D_k ≡ |ϵ_k|/σ_k A typical maximal allowed value of D is D_max= 3 which –in the absence of correlations– corresponds to a probability to keep a good data point of about 0.997. The removal of outliers could simply proceed by removing the data point with the largest D and repeating the fit until there are no more data points with D > D_max. In case the correlations between the data points are strong, this procedure may no longer be adequate. In this scenario, the absolute values of some off-diagonal elements of V are comparable to (or even larger than) the values of the diagonal elements. The standard deviation is then no longer a good criterion because the distances and covariances of the other data points should also be taken into account. A brute force procedure to identify outliers is to 1) remove data point k, 2) determine V', 3) invert V' and 4) minimise χ^2, for each data point k. In this, V' corresponds to the (N-1)×(N-1) covariance matrix.
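To make the setup concrete, here is a minimal numpy sketch (ours, not from the paper) that builds a covariance matrix of the above form from uncorrelated uncertainties σ_i and correlation parameters u_k, and evaluates the χ^2 of the first equation. Note that in this construction the correlated term also contributes to the diagonal, as is standard practice.

import numpy as np

rng = np.random.default_rng(0)
N, K = 50, 3                        # N data points, K correlation parameters u_k

sigma = rng.uniform(0.5, 2.0, N)    # uncorrelated uncertainties sigma_i
J = rng.normal(size=(N, K))         # J[i, k] = d(eps_i)/d(u_k)
du = rng.uniform(0.1, 0.5, K)       # uncertainties delta u_k
eps = rng.normal(size=N)            # distances between model and data points

# V_ii = sigma_i^2 (+ correlated term), V_ij = sum_k J_ik J_jk (du_k)^2
V = np.diag(sigma**2) + J @ np.diag(du**2) @ J.T

Vinv = np.linalg.inv(V)             # O(N^3), as for an LDU decomposition
chi2 = eps @ Vinv @ eps
print(chi2)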
The change in χ^2 is then a good criterion to identify an outlier, i.e: D_k ≡ √(χ^2 - χ^2_k) where χ^2_k corresponds to the χ^2 of the fit after removal of data point k. As each inversion of V' requires 𝒪((N-1)^3) operations, this way of eliminating outliers requires 𝒪(N^4) operations. For a large number of data points, the number of operations needed to eliminate outliers may become excessive. An alternative exists, based on the fact that the removal of data point k is equivalent to setting the corresponding uncertainty σ_k to infinity. By doing so, the (N × N) covariance matrix V' can be decomposed as follows: V' =V + gδ_k,k In this, g is some arbitrarily large value; in any case much larger than σ_k. The matrix δ_i,j has 1 at row i and column j and 0 everywhere else. To repeat the fit without data point k, it is required to invert matrix V'. As a first step, the known inverse of the original matrix V is considered. This is possible because V^-1 and V' have the same dimensions. V' × V^-1= (V + g δ_k,k) × V^-1= VV^-1 + gδ_k,k V^-1= I + g ( [ 0 ⋯ 0; ⋮ ⋮; 0 ⋯ 0; V^-1_k,1 ⋯ V^-1_k,k ⋯ V^-1_k,N ← row k; 0 ⋯ 0; ⋮ ⋮; 0 ⋯ 0 ]) ≡ A where I is the identity matrix. It is obvious that the inverse of matrix V' is equal to the product V^-1 A^-1. Consequently, the problem of inverting V' is reduced to inverting matrix A. The inverse of matrix A is trivial, namely: A^-1= I - g/1 + g V^-1_k,k( [ 0 ⋯ 0; ⋮ ⋮; 0 ⋯ 0; V^-1_k,1 ⋯ V^-1_k,k ⋯ V^-1_k,N ← row k; 0 ⋯ 0; ⋮ ⋮; 0 ⋯ 0 ]) Now, one can let g →∞ and multiply V^-1 and A^-1 to obtain the inverse of matrix V'. As can be seen from equation <ref>, the product of V^-1 and A^-1 requires 𝒪(N^2) operations. Hence, this way of eliminating outliers only requires 𝒪(N^3) operations, which is actually equal to the number of operations needed to invert the covariance matrix for the first fit. It is interesting to note that no additional memory is required for the computation of V'^-1. A further reduction in the number of operations is possible when the outcome of the first fit is retained. In that case, the distance ϵ_i between the model and data point i stays the same. The χ^2 without data point k can then be expressed as: χ^2_k=∑_i=1^N∑_j=1^N ϵ_i   V'^-1_i,j ϵ_j =∑_i=1^N∑_j=1^N ϵ_i   (V^-1A^-1)_i,j ϵ_j=χ^2 - 1/V^-1_k,k(∑_j=1^N V^-1_k,j ϵ_j)^2 As can be seen from equation <ref>, the distances and covariances of the other data points are taken into account but the inverse of V' is no longer required. As a result, the summation takes only 𝒪(N) operations and the elimination of outliers 𝒪(N^2) operations. § CONCLUSIONS An efficient method is presented to eliminate outliers from a set of data points which takes the correlations between the data points into account. The number of operations needed for this procedure is the same as that needed for the inversion of the covariance matrix.
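The g→∞ limit of V^-1A^-1 is a rank-one update. Below is a small numpy sketch (ours; it continues from the previous snippet's V, Vinv, eps, chi2 and N) that applies the update and checks it, together with the 𝒪(N) expression for χ^2_k, against brute-force inversion.

import numpy as np

k = 7                                   # data point to remove

# g -> infinity limit of Vinv @ Ainv: a rank-one update costing O(N^2);
# row k and column k of the result vanish, decoupling data point k
Vp_inv = Vinv - np.outer(Vinv[:, k], Vinv[k, :]) / Vinv[k, k]

# brute-force reference: invert V with row and column k removed, O((N-1)^3)
mask = np.arange(N) != k
ref = np.linalg.inv(V[np.ix_(mask, mask)])
print(np.allclose(Vp_inv[np.ix_(mask, mask)], ref))      # True

# O(N) update of chi^2 after removal of data point k (last equation above)
chi2_k = chi2 - (Vinv[k] @ eps)**2 / Vinv[k, k]
print(np.isclose(chi2_k, eps[mask] @ ref @ eps[mask]))   # True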
http://arxiv.org/abs/1708.07622v1
{ "authors": [ "M. de Jong" ], "categories": [ "cs.NA" ], "primary_category": "cs.NA", "published": "20170825062556", "title": "On the repeated inversion of a covariance matrix" }
http://arxiv.org/abs/1708.08146v2
{ "authors": [ "Luiz P. Carneiro", "J. Puls", "T. L. Hoffmann" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170827220011", "title": "Carbon line formation and spectroscopy in O-type stars" }
Departament de Física, Universitat Politècnica de Catalunya, Campus Nord B4-B5, E-08034, Barcelona, Spain We study the superfluid properties of a system of fully polarized dipolar bosons moving in the XY plane. We focus on the general case where the polarization field forms an arbitrary angle α with respect to the Z axis, while the system is still stable. We use the diffusion Monte Carlo and the path integral ground state methods to evaluate the one-body density matrix and the superfluid fractions in the region of the phase diagram where the system forms stripes. Despite its oscillatory behavior, the presence of a finite large-distance asymptotic value in the s-wave component of the one-body density matrix indicates the existence of a Bose condensate. The superfluid fraction along the stripes direction is always close to 1, while in the Y direction it decreases to a small value that is nevertheless different from zero. These two facts confirm that the stripe phase of the dipolar Bose system is a clear candidate for an intrinsic supersolid without the presence of defects, as described by the Andreev-Lifshitz mechanism. 05.30.Fk, 03.75.Hh, 03.75.Ss Dipolar Bose Supersolid Stripes R. Bombin, J. Boronat and F. Mazzanti =========================================== Supersolid many-body systems appear in nature when two continuous U(1) symmetries are broken. The first one is associated to the translational invariance of the crystalline structure, while the second one corresponds to the appearance of a non-trivial global phase of the superfluid state <cit.>. Supersolid phases were predicted to exist in Helium already in the late 60's <cit.>, though their experimental observation has been elusive. In fact, the claims for detection made at the beginning of this century have been refuted, as the observed behavior is not caused by finite non-conventional rotational inertia but rather by elastic effects <cit.>. In this way, a neat observation of supersolidity in ^4He is still lacking. In fact, it is not clear yet whether a pure, defect-free supersolid structure like the one that would be expected in ^4He really exists. Recently, the issue of supersolidity has emerged again, but now in the field of ultracold atoms. Two different experimental teams have claimed that spatial local order and superfluidity have been simultaneously observed in lattice setups <cit.> and in stripe phases <cit.>. In this way, the definition of what a supersolid really is seems to still be under discussion <cit.>. Superfluid properties of solid-like phases are also of fundamental interest in quantum condensed matter. One of these is the stripe phase, where the system presents spatial order in one direction but not in the others. For instance, stripe phases have been of major interest since 1990, when non-homogeneous metallic structures with broken spatial symmetry were found to favor superconductivity <cit.>. More recently, stripe phases have been observed in Bose-Einstein condensates with synthetically created spin-orbit coupling <cit.>, where the momentum dependence of the interaction induces spatial ordering along a single direction in some regions of the phase diagram <cit.>. Stripe phases have also been discussed in the context of quantum dipolar physics, including very recent theoretical and experimental analyses of metastable striped gases of ^164Dy <cit.>. Due to the anisotropic character of the dipolar interaction, in some regions of the phase diagram dipoles arrange in stripes, both in Fermi <cit.> and Bose <cit.> systems.
In some cases the presence of this phase has been reported even in the isotropic limit <cit.>. Though the presence of stripe phases in dipolar systems is well established and has been recently observed <cit.>, it is not yet clear whether the system exhibits superfluid properties (thus forming supersolid stripes) or not. In a previous work we determined the phase diagram <cit.> of the two-dimensional system of Bose dipoles at zero temperature, tracing the transition lines between the solid, gas and stripe phases. The formation and excitation spectrum of the stripe phase, where the system acquires crystal order in one direction while being fluid in the other, was previously analyzed in Ref. <cit.>. In this Letter we investigate the superfluid properties of the stripe phase as a function of the density and polarization angle. Our results show that dipolar stripes are a special form of supersolid, and we quantify the superfluid density and condensate fraction all along the superstripe phase. In the following we consider a system of N fully polarized dipolar bosons of mass m moving on the XY plane. All dipoles are considered to be aligned along a fixed direction in space given by a polarization (electric or magnetic) field, which is contained in the XZ plane and forms an angle α with respect to the Z axis. The model Hamiltonian describing the system becomes then H = -ħ^2/2m∑_j=1^N ∇_j^2 + C_dd/4π∑_i<j^N [ (1 - 3λ^2 cos^2θ_ij)/r_ij^3 ], with λ=sinα, and (r_ij,θ_ij) the polar coordinates associated to the position vector of particle j with respect to particle i. The constant C_dd is proportional to the square of the (electric or magnetic) dipole moment of the components, which are all assumed to be identical. In the following we use dimensionless units obtained from the characteristic dipolar length r_0=m C_dd/(4πħ^2). We quantify the superfluid properties of the system evaluating both the one-body density matrix and its asymptotic value (the condensate fraction), and the superfluid density. In order to do that we employ stochastic methods. We use two different quantum Monte Carlo techniques that are known to provide exact values for the energy of the system within residual statistical noise: the diffusion Monte Carlo (DMC) <cit.> and the path integral ground state (PIGS) <cit.> methods. The DMC simulations have been performed using a second order propagator <cit.>, while a fourth order propagator has been employed in the PIGS calculations <cit.>. In all cases, a variational model of the ground state wave function Ψ_T is used. In the DMC method, the guiding wave function is used for importance sampling, but the ground state estimation of any observable commuting with the Hamiltonian is exact. In PIGS simulations, Ψ_T acts as a boundary condition at the end points of the open chains representing the set of particles. It is then propagated in imaginary time to the center of the chains, where expectation values are evaluated. In this way, any contribution orthogonal to the exact ground state is wiped out. Two different models have been used in this work. In the DMC simulations, Ψ_T has been taken to be of the Jastrow form, with a two-body correlation factor that results from the zero-energy solution of the two-body problem associated to Eq. (<ref>) as derived in Ref.
<cit.>, matched with a long-range phononic extension as discussed in the same reference. This model must be modified when describing the stripe phase, including a one-body term f_1( r) that allows for the formation of the stripes along the Y direction f_1(r)=exp[ η_s cos(2π n_s y / L_y)], with L_y the box side length along the Y direction, and n_s the number of stripes in the simulation box. Notice that these two parameters are not independent, as one must guarantee that the simulation box is commensurate for a fixed number of particles. In Eq. (<ref>), η_s is a variational parameter that is consistently found to be zero in the gas phase, and non-zero in the stripe phase. For the PIGS simulations, we have adopted a much simpler model based on the zero-energy solution of the isotropic (α=0) problem, matched with a phononic tail as in Ref. <cit.>. Despite its simplicity, we have found no differences with the results obtained when using the same model as in the DMC case. Since we are analyzing superfluid properties, we have performed several calculations spanning a wide range of densities and polarization angles in the regions of the phase diagram where the system is in stripe form. Notice that, in the solid phase, the system arranges in a triangular lattice that completely breaks the continuous translational symmetry <cit.>, while in the stripe phase this symmetry is broken only in one direction (the Y axis in our setup). For the sake of comparison, we have also explored two additional points where the system remains either as a gas or as a solid. The set of points explored in this work is shown in the phase diagram, Fig. <ref>, and a summary of the results obtained for these points is reported in Table <ref>. A direct measure of the off-diagonal long-range order present in the system is provided by the one-body density matrix (OBDM) n_1( r_11') =Ω∫ d r_2⋯ d r_NΨ_0(r_1,r_2,…, r_N) Ψ_0(r'_1,r_2,…, r_N), with Ψ_0 the ground state wave function and Ω the volume of the container. In this way, n_1( r) is normalized such that n_1(0)=1, while n_1(| r_11'|→∞)→ n_0 if there is off-diagonal long-range order, with n_0 the condensate fraction. Notice that, in 2D, n_0 can be non-zero only at T=0. Figure <ref> shows a comparison of the one-body density matrix of the system at points C and L of Fig. <ref>, corresponding to the same density nr_0^2=512 but different polarization angles. In all cases n_1( r) depends on the direction due to the anisotropy of the interaction. The lower curves show two cuts of n_1( r) along the X and Y directions, when the system is in the solid phase (point L), while the upper curves show the same quantities for the system in the stripe phase (point C). As can be seen, all curves show an oscillatory behavior that is partially a consequence of the anisotropy of the interaction <cit.>. Most remarkably, the curves corresponding to the solid phase decay exponentially to zero, while the ones for the stripe phase saturate to a common value that corresponds to n_0. The condensate fraction, which appears only in the s-wave term of the partial wave expansion of n_1( r), has been obtained by fitting a constant to the intermediate-distance tail in regions near (but not at) half the box side where the results are stable. All values in the third column of Table <ref> have been obtained in this way. At large densities, where increasing the polarization angle makes the system change from the solid to the stripe phase, the condensate fraction increases with increasing α.
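As an illustrative aside (our own sketch, not code from the paper), two ingredients used above are easy to write down explicitly: the one-body stripe factor f_1 of the trial wave function, and the critical polarization angle α_c = arcsin(1/√3) ≈ 0.615 at which the reduced pair interaction of the Hamiltonian becomes attractive along the X direction.

import numpy as np

def stripe_factor(y, eta_s, n_s, L_y):
    """One-body term f_1 = exp[eta_s * cos(2*pi*n_s*y/L_y)]; eta_s is
    variational (0 in the gas phase) and the n_s stripes must be
    commensurate with the box side L_y."""
    return np.exp(eta_s * np.cos(2.0*np.pi*n_s*y/L_y))

def dipolar_potential(r, theta, alpha):
    """Reduced pair interaction (1 - 3*sin(alpha)^2*cos(theta)^2)/r^3."""
    lam = np.sin(alpha)
    return (1.0 - 3.0*lam**2*np.cos(theta)**2)/r**3

# along the X direction (theta = 0) the interaction turns attractive when
# 3*sin(alpha)^2 > 1, i.e. beyond the critical angle
alpha_c = np.arcsin(1.0/np.sqrt(3.0))
print(alpha_c)                             # ~0.6155
print(dipolar_potential(1.0, 0.0, 0.60))   # > 0: still repulsive
print(dipolar_potential(1.0, 0.0, 0.63))   # < 0: beyond collapse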
This is not surprising since the dipolar interaction is overall less repulsive when approaching the line of collapse, at the critical angle α_c≈ 0.615. The situation is reversed at lower densities, when the system changes from the gas to the stripe form (points I and J, for instance). In this case, and close to the transition line, the condensate fraction is expected to approach higher values, as the gas is less interacting. Perpendicular cuts at fixed polarization angle and increasing density always lead to a reduction in n_0, consistent with the fact that particles have less effective space. In any case, the largest values of n_0 are achieved near the gas-stripe transition line at the lowest possible densities. In this way, the large-distance limit of the OBDM of the stripe phase is always non-zero, as happens with other supersolid systems. Even though the presence of a non-zero condensate fraction already points towards a superfluid behavior, it is possible to directly evaluate the superfluid response of the system in DMC. At finite temperature, the superfluid fraction ρ_s is estimated from the winding number <cit.>, which takes into account the diffusion of world lines at large imaginary times. At T=0, this is equivalent <cit.> to measuring the diffusion of the center of mass of the system in the infinite imaginary time limit, according to the expression ρ_s = lim_τ→∞1/4 N τ( D_s(τ)/D_0), where D_s(τ)=⟨ ( R_CM(τ) -R_CM(0) )^2 ⟩ and D_0=ħ^2/(2m). For the 2D system analyzed, we identify the X and Y components of this expression with the superfluid fractions along the X and Y directions, according to ρ_s=(ρ_s^x+ρ_s^y)/2. Figure <ref> shows our results for ρ_s^x, ρ_s^y and the total ρ_s for two perpendicular cuts on the phase diagram. The upper panel corresponds to a fixed density nr_0^2=512 and different angles in the region where the system remains in the stripe phase. The lower panel corresponds to a fixed angle α=0.6 but different densities, also in the stripe phase. The cut at nr_0^2=512 and increasing α shows that the X component of the superfluid fraction is always close to 1, while the Y component decreases to 0, leading to the overall value ρ_s≈ 1/2 near α=α_c. Remarkably, the total superfluid fraction ρ_s is larger close to the transition line to the solid phase, decreasing as α increases. In this way, the superfluid response is discontinuous across the solid-stripe transition. The fact that ρ_s^y (and thus ρ_s) decreases when α increases is once again a consequence of the anisotropic character of the dipolar interaction, which becomes less repulsive along the X direction with increasing α. Close to α_c the interaction along the X direction is weak and particles can easily flow in each stripe, but the confinement of the stripes is stronger and the system becomes more localized along the Y direction. This is confirmed by the fact that the optimal value of η_s in Eq. (<ref>) is larger when α approaches α_c at fixed density. A similar situation is found when the density is increased at constant α. The lower panel of Fig. <ref> shows the different components of the superfluid fraction at α=0.6 and increasing density. Once again we observe that ρ_s^y decays to values close to zero already at nr_0^2=256, thus confirming that at high densities the confinement of the different stripes is very strong. Only point K on that line presents a large ρ_s^y value, but that point is essentially on the gas-stripe transition line, and we know the total superfluid fraction is ρ_s=1 in the gas phase.
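A minimal numpy sketch of the zero-temperature estimator just described (our own illustration; the convention of accumulating the summed particle coordinates and the shortcut of reading the slope from the last stored point are ours, and a production analysis would use a linear fit over the large-τ regime):

import numpy as np

def superfluid_fractions(r_sum, tau, N, D0):
    """r_sum: trajectories of the summed particle coordinates sum_i r_i(tau),
    shape (n_walkers, n_steps, 2); tau: imaginary times, shape (n_steps,)."""
    ds = ((r_sum - r_sum[:, :1, :])**2).mean(axis=0)  # D_s(tau), per axis
    slope = ds[-1] / tau[-1]                          # large-tau growth rate
    rho_x, rho_y = slope / (2.0 * N * D0)             # per-axis fractions
    return rho_x, rho_y, 0.5 * (rho_x + rho_y)        # rho_s = (rho_x+rho_y)/2

# toy usage with free diffusion of N particles, for which rho_s ~ 1
rng = np.random.default_rng(2)
N, D0, dt, n_steps = 64, 0.5, 1.0e-3, 4000
steps = rng.normal(scale=np.sqrt(2*N*D0*dt), size=(200, n_steps, 2))
r_sum = np.cumsum(steps, axis=1)
tau = dt * np.arange(1, n_steps + 1)
print(superfluid_fractions(r_sum, tau, N, D0))        # ~ (1, 1, 1)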
Contrary to what happens when moving from the stripe to the solid phase, in the gas-stripe transition the change in ρ_s, ρ_s^x and ρ_s^y appears to be continuous. At this point, and according to the previous results, one could wonder whether the stripes are so tightly confined that no particle exchange between different stripes is possible. If that were the case, one could also think that each stripe may behave as an isolated, (quasi) 1D system. In fact, and according to the results in the last column of Table <ref>, in some regions the Y component of the superfluid fraction acquires very low values. However, it never vanishes. This indicates that, in fact, particle exchange between different stripes is always possible, though it becomes unlikely in the limits commented on above. Taking that into account, one can look for traces of (quasi) 1D behavior in the regions where ρ_s^y∼ 0. One way to do that is to analyze the system as a Luttinger liquid, and to check for consistency in the values of the corresponding Luttinger parameters. In order to do that we have extracted the sound velocity c from a fit of the form |k|/2c to the low-k behavior of the static structure factor S( k), evaluated both in DMC and PIGS. With it, we have performed a fit of the form n_1(u)=A u^-1/η with η=2π n/c  <cit.> to the X and Y components of n_1( r), with the results shown in Fig. <ref>. As can be seen, the fit better reproduces the tail of n_1( r) along the X direction, while strong oscillations in the Y component are clearly visible and n_1( r) for r=(0,y) differs significantly from the fit. It must be kept in mind, though, that the large distance behavior of n_1( r) in Luttinger liquid theory is a decaying power law, not compatible with a finite condensate fraction value, while we have seen before that the stripe phase OBDM presents a large-distance asymptotic value n_0≠ 0. In this way, the curve fits the calculated X-component of the OBDM well at intermediate distances only. The inset in Fig. <ref> shows a snapshot of the system after thermalization in PIGS, for the same conditions nr_0^2=512 and α=0.6, where two examples of visible particle exchange between different stripes have been highlighted. It is worth recalling that since simulations in PIGS are done with open chains (with variational wave functions at the end points), it is hardly possible to see long exchange lines crossing the whole simulation box. In summary, we have performed DMC and PIGS simulations to analyze the supersolid properties of dipolar Bose stripes in two dimensions for polarization angles before collapse. We have evaluated the one-body density matrix to find that it always presents a finite (though in some regions quite small) condensate fraction value, in contrast to the continuously decaying tail it presents in the solid phase. We have also evaluated the superfluid fraction along the X and Y directions to find that, at large densities and/or polarization angles, the Y-component becomes very small, though it never vanishes. At high densities and polarization angles the stripes are tightly confined and the intermediate distance behavior of the OBDM along the stripe direction has a dependence on the distance that is somewhat compatible with a Luttinger liquid model.
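A sketch of the Luttinger-liquid consistency fit described above (ours; the synthetic data merely stand in for the computed OBDM cut along the stripes):

import numpy as np
from scipy.optimize import curve_fit

def luttinger(u, A, eta):
    """Luttinger-liquid tail n_1(u) = A * u**(-1/eta)."""
    return A * u**(-1.0/eta)

# stand-in for the DMC/PIGS OBDM cut along X at intermediate distances
rng = np.random.default_rng(1)
u = np.linspace(2.0, 10.0, 40)
n1_x = luttinger(u, 0.6, 5.0) * (1.0 + 0.01*rng.normal(size=u.size))

(A_fit, eta_fit), _ = curve_fit(luttinger, u, n1_x, p0=(0.5, 4.0))
print(A_fit, eta_fit)   # eta_fit is then compared with 2*pi*n/c, with c
                        # extracted from the low-k fit S(k) ~ |k|/(2c)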
However, particle exchanges, always visible in configuration snapshots, lead to a finite condensate fraction value and an overall superfluid behavior that, together with the existence of Bragg peaks <cit.>, confirm the supersolid character of that phase. This work has been supported by the Ministerio de Economia, Industria y Competitividad (MINECO, Spain) under grant No. FIS2014-56257-C2-1-P. Boninsegni_2012 M. Boninsegni and N. V. Prokof'ev, Rev. Mod. Phys. 84, 759 (2012). Andreev_69 A. F. Andreev and I. M. Lifshitz, JETP 29, 1107 (1969). Kim_2004 E. Kim and M. H. W. Chan, Nature 427, 225 (2004). Kim_2012 D. Y. Kim and M. H. W. Chan, Phys. Rev. Lett. 109, 155301 (2012). Leonard_17 J. Léonard, A. Morales, P. Zupancic, T. Esslinger, and T. Donner, Nature 543, 87 (2017). Li_17 J. R. Li, J. Lee, W. Huang, S. Burchesky, B. Shteynas, F. Ç. Top, A. O. Jamison, and W. Ketterle, Nature 543, 91 (2017). Anderson_2017 P. W. Anderson, Physics World 30, 21 (2017). Bianconi_00 A. Bianconi, Int. J. Mod. Phys. B 14, 3289 (2000). Bianconi_13 A. Bianconi, D. Innocenti, and G. Campi, Journal of Superconductivity and Novel Magnetism 26, 2585 (2013). Li_13 Y. Li, G. I. Martone, L. P. Pitaevskii, and S. Stringari, Phys. Rev. Lett. 110, 235302 (2013). Wenzel_2017 M. Wenzel, F. Böttcher, T. Langen, I. Ferrier-Barbut, and T. Pfau, arXiv:1706.09388. Yamaguchi_10 Y. Yamaguchi, T. Sogo, T. Ito, and T. Miyakawa, Phys. Rev. A 82, 013643 (2010). Sun_10 K. Sun, C. Wu, and S. Das Sarma, Phys. Rev. B 82, 075105 (2010). Macia_2012 A. Macia, D. Hufnagl, F. Mazzanti, J. Boronat, and R. E. Zillich, Phys. Rev. Lett. 109, 235307 (2012). Macia_2014 A. Macia, J. Boronat, and F. Mazzanti, Phys. Rev. A 90, 061601(R) (2014). Parish_12 M. M. Parish and F. M. Marchetti, Phys. Rev. Lett. 108, 145304 (2012). Kadau_2016 H. Kadau, M. Schmitt, M. Wenzel, C. Wink, T. Maier, I. Ferrier-Barbut, and T. Pfau, Nature 530, 194-197 (2016). Hammond_94 B. L. Hammond, W. A. Lester Jr., and P. J. Reynolds, Monte Carlo Methods in Ab Initio Quantum Chemistry (World Scientific, Singapore, 1994). Kosztin_96 I. Kosztin, B. Faber, and K. Schulten, Am. J. Phys. 64(5), 633 (1996). Chin_90 S. A. Chin, Phys. Rev. A 42, 6991 (1990). Sarsa_2000 A. Sarsa, K. E. Schmidt, and W. R. Magro, J. Chem. Phys. 113, 1366 (2000). Rota_2010 R. Rota, J. Casulleras, F. Mazzanti, and J. Boronat, Phys. Rev. E 81, 016707 (2010). Chin_02 S. A. Chin and C. R. Chen, J. Chem. Phys. 117, 1409 (2002). Macia_11 A. Macia, F. Mazzanti, J. Boronat, and R. E. Zillich, Phys. Rev. A 84, 033625 (2011). Pollock_87 E. L. Pollock and D. M. Ceperley, Phys. Rev. B 36, 8343 (1987). Zhang_95 S. Zhang, N. Kawashima, J. Carlson, and J. E. Gubernatis, Phys. Rev. Lett. 74, 1500 (1995). Luttinger_81 F. D. M. Haldane, Phys. Rev. Lett. 47, 1840 (1981).
http://arxiv.org/abs/1708.07673v2
{ "authors": [ "R. Bombin", "J. Boronat", "F. Mazzanti" ], "categories": [ "cond-mat.quant-gas" ], "primary_category": "cond-mat.quant-gas", "published": "20170825100625", "title": "Dipolar Bose Superstripes" }
Institute for Advanced Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan; [email protected], [email protected]
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Institute for Advanced Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan; [email protected], [email protected]
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Nobeyama Radio Observatory, Minamimaki-mura, Minamisaku-gun, Nagano 384-1305, Japan
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
School of Physical Sciences, University of Adelaide, North Terrace, Adelaide, SA 5005, Australia
Western Sydney University, Locked Bag 1797, Penrith South DC, NSW 1797, Australia
School of Physics, The University of New South Wales, Sydney, 2052, Australia
Research School of Astronomy and Astrophysics, Australian National University, Canberra ACT 2611, Australia
National Astronomical Observatory of Japan, Mitaka 181-8588, Japan
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
National Astronomical Observatory of Japan, Mitaka 181-8588, Japan
National Astronomical Observatory of Japan, Mitaka 181-8588, Japan
Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Department of Astrophysics, Graduate School of Science, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai 599-8531, Japan
Institute for Space-Earth Environmental Research, Nagoya University, Chikusa-ku, Nagoya 464-8601, Japan
Department of Astrophysics, Graduate School of Science, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai 599-8531, Japan

We present a new analysis of the interstellar protons toward the TeV γ-ray SNR RX J0852.0-4622 (G266.2-1.2, Vela Jr.). We used the NANTEN2 ^12CO(J = 1–0) and ATCA & Parkes Hi datasets in order to derive the molecular and atomic gas associated with the TeV γ-ray shell of the SNR. We find that atomic gas over a velocity range from V_LSR = -4 km s^-1 to 50 km s^-1 or 60 km s^-1 is associated with the entire SNR, while molecular gas is associated with a limited portion of the SNR. The large velocity dispersion of the Hi is ascribed to the expanding motion of a few Hi shells overlapping toward the SNR, rather than to Galactic rotation. The total masses of the associated Hi and molecular gases are estimated to be ∼2.5 × 10^4 M_⊙ and ∼10^3 M_⊙, respectively. A comparison with the H.E.S.S. TeV γ-rays indicates that the interstellar protons have an average density of around 100 cm^-3 and show a good spatial correspondence with the TeV γ-rays. The total cosmic ray proton energy is estimated to be ∼10^48 erg for the hadronic γ-ray production, which may still be an underestimate by a factor of a few due to a small filling factor of the SNR volume by the interstellar protons.
This result presents a third case, after RX J1713.7-3946 and HESS J1731-347, of good spatial correspondence between the TeV γ-rays and the interstellar protons, lending further support to a hadronic component in the γ-rays from young TeV γ-ray SNRs.

§ INTRODUCTION

The origin of the cosmic rays (CRs) has been one of the most fundamental issues in modern astrophysics since the discovery of CRs by Victor Franz Hess in 1912. It is in particular important to understand the origin of the CR protons, the dominant constituent of the CRs. There have been a number of studies addressing the acceleration sites of CR protons in the Galaxy <cit.>. It is widely accepted that CR particles are accelerated up to energies below 10^15 eV via diffusive shock acceleration (DSA), which takes place, for instance, between the upstream and downstream sides of the high-velocity shock front of an SNR <cit.>. It is crucial to verify observational signatures of hadronic γ-rays of CR proton origin.

The recent advent of high-resolution γ-ray observations has allowed us to image some 20 SNRs, offering a new opportunity to identify hadronic γ-rays. Among them, four SNRs, RX J1713.7-3946, RX J0852.0-4622, RCW 86 and HESS J1731-347, show shell-type TeV γ-ray morphology as revealed by the High Energy Stereoscopic System (H.E.S.S.) <cit.>. Potential hadronic γ-rays are produced by proton-proton collisions followed by neutral-pion decay, while any leptonic γ-rays come from CR electrons via the inverse-Compton effect and possibly bremsstrahlung. Previous interpretations of the γ-ray observations relied mainly on the γ-ray spectra to discern the above two processes, while some attempts were made to consider explicitly the target protons in the interstellar medium (ISM) <cit.>. A conventional assumption is that the target protons are uniform, with a density of 1 cm^-3 <cit.>. Such a low density is preferred because DSA works efficiently at low density. It is, however, becoming recognized that the γ-ray spectrum alone may not be sufficient to settle the origin of the γ-rays, because the penetration of the CR protons into the target ISM may depend significantly on the density of the ISM <cit.>; the higher-energy CR protons can penetrate deeply into the surrounding molecular cloud cores, whereas the lower-energy CR protons cannot, making the γ-ray spectrum significantly harder than in the fully interacting case <cit.> (hereafter I12). I12 therefore suggest that the γ-ray spectra are not usable to discern the γ-ray production mechanisms, but that the spatial correspondence between the γ-rays and the ISM distribution is a key element in testing for a hadronic γ-ray component. The effect of a clumpy ISM was also discussed by <cit.> and <cit.>. It is often noted that acceleration via DSA must happen in low-density space, where the density is too low for the hadronic process to be effective <cit.>. This is, however, not a difficulty if one takes into account a highly inhomogeneous ISM distribution, as is commonly the case in a stellar-wind-evacuated cavity with a dense surrounding ISM shell, as in RX J1713.7-3946 <cit.>.

Several previous studies explicitly analyzed the distribution of the candidate target ISM protons observed in CO and compared it with the γ-ray distribution in the most typical TeV γ-ray SNRs, including RX J1713.7-3946 and RX J0852.0-4622. <cit.> compared the TeV γ-ray distribution with the mm-wave rotational transition of CO, the tracer of H_2, obtained with the NANTEN 4 m telescope.
These authors found some similarity between CO and γ-rays in RX J1713.7-3946, whereas part of the γ-ray shell was completely missed in CO. The authors thus did not reach a firm conclusion on the target ISM protons in the hadronic scenario. In the case of RX J0852.0-4622, <cit.> made a similar comparison in a velocity range of 0–20 km s^-1 but found little sign of the target protons in CO, so the ISM associated with RX J0852.0-4622 remained ambiguous.

<cit.> (hereafter F12) carried out a detailed analysis of the ISM protons toward RX J1713.7-3946 by employing both the molecular and atomic protons, and showed for the first time that these ISM protons have a good spatial correspondence with the TeV γ-rays. This new study showed that the atomic protons are equally as important as the molecular protons as targets in the hadronic process, which had not previously been taken into account. The study by F12 provides a necessary condition for the hadronic process, lending support to the hadronic scenario, although this correspondence alone does not exclude the leptonic process. F12 and I12 further considered other relevant aspects, including magneto-hydro-dynamical numerical simulations of the SN shocks and X-ray observations <cit.>, and argued that the TeV γ-rays in RX J1713.7-3946 are likely emitted via the hadronic process.

Another young SNR, RX J0852.0-4622, was discovered by <cit.>; it shows a hard X-ray spectrum toward part of the more extended Vela SNR in the ROSAT All-Sky Survey image. TeV γ-rays were detected and imaged toward RX J0852.0-4622 by H.E.S.S. RX J0852.0-4622 shows properties similar to RX J1713.7-3946: both are young (2400–5100 yr for RX J0852.0-4622 <cit.>; ∼1600 yr for RX J1713.7-3946 <cit.>), both show synchrotron X-ray emission without thermal features, and both share a shell-like X-ray/TeV γ-ray morphology. RX J0852.0-4622 has an apparently large diameter of about 2 degrees. This size allows us to test the spatial correspondence between the γ-rays and the ISM at the 0.12 degree angular resolution (FWHM) of H.E.S.S. There are two more shell-like TeV γ-ray SNRs, RCW 86 and HESS J1731-347 <cit.>. In HESS J1731-347, <cit.> carried out a comparative analysis of the ISM and γ-rays and showed that the spatial distributions of the ISM protons and the γ-rays are similar to each other, consistent with a dominant hadronic component of the γ-rays plus a minor contribution from a leptonic component. In RCW 86, a similar comparative analysis is being carried out by <cit.>.

Table 1: Properties of CO Clouds toward the SNR RX J0852.0-4622

Name     l         b         T_R^*   V_peak      ΔV          Size   Mass
         (degree)  (degree)  (K)     (km s^-1)   (km s^-1)   (pc)   (M_⊙)
(1)      (2)       (3)       (4)     (5)         (6)         (7)    (8)
CO0S     267.07    -1.63      2.86    3.47       3.48        3.2     220
CO20E    266.87    -0.67     11.46   19.80       2.06        6.8    1330
CO25W    266.47    -2.04      7.48   24.00       1.72        6.2     650
CO25C    266.13    -1.00      2.04   24.45       2.68        1.9      40
CO30E    266.67    -0.87      6.28   31.09       2.78        3.0     180
CO45NW   265.33    -1.10      4.42   44.68       4.07        4.0     330
CO60NW   265.53    -1.47      3.31   63.98       2.25        6.3     410

Col. (1): Name of the CO cloud. Cols. (2)–(7): Physical properties of the CO cloud obtained by a single Gaussian fitting. Cols. (2)–(3): Position of the peak intensity in Galactic coordinates. Col. (4): Radiation temperature. Col. (5): Center velocity. Col. (6): Full-width half-maximum (FWHM) line width. Col. (7): Size of the CO cloud, defined as 2 × (A / π)^0.5, where A is the area of the cloud surface surrounded by the CO contours in Figure <ref>.
Col. (8): Mass of the CO cloud, defined as m_H μ ∑_i [D^2 Ω N(H_2)], where m_H is the mass of the hydrogen atom, μ is the mean molecular weight, D is the distance to RX J0852.0-4622, Ω is the solid angle per pixel, and N(H_2) is the column density of molecular hydrogen for each pixel. We used μ = 2.8, taking into account the helium abundance of 20% relative to molecular hydrogen in mass, and N(H_2) = 2.0 × 10^20 [W(^12CO) (K km s^-1)] (cm^-2) <cit.>.

The distance to RX J0852.0-4622 was not well determined in previous works <cit.>. One possible distance was 250 ± 30 pc, similar to that of the Vela SNR <cit.>, while another possibility was a distance larger than that of the Vela SNR. <cit.> argued that RX J0852.0-4622 is physically associated with the giant molecular cloud known as the Vela Molecular Ridge (VMR) <cit.>, whose distance was estimated to be 700 ± 200 pc <cit.>. It is, however, not established whether the VMR is physically associated with RX J0852.0-4622 <cit.>. Toward the northwestern rim of RX J0852.0-4622, two observations were made with XMM-Newton; an expansion velocity was derived, and an age of 1.7–4.3 × 10^3 yr was estimated by <cit.>, assuming the free-expansion phase. Recently, <cit.> improved the expansion measurement by using two Chandra datasets separated by 4.5 years toward the northwestern rim of RX J0852.0-4622. They derived an age of 2.4–5.1 × 10^3 yr and a distance range of 700–800 pc, roughly consistent with the previous study by <cit.>. We therefore adopt a distance of ∼750 pc to RX J0852.0-4622 in the present paper. Analyses of a central X-ray source in this SNR using data from multiple observatories suggest that the progenitor of the SNR was a high-mass star that led to a core-collapse SN <cit.>. This conclusion is also consistent with the estimate of the current SNR shell expansion speed <cit.>.

In this paper, we present the results of an analysis of the ISM protons toward RX J0852.0-4622 using both the CO and Hi data. Section <ref> describes the observations of CO and Hi. Section <ref> describes the high-energy γ-ray and X-ray data. Section <ref> gives the results of the CO and Hi analyses and Section <ref> the discussion. We conclude the paper in Section <ref>.

§ OBSERVATIONS

§.§ CO observations

Observations of the ^12CO(J = 1–0) transition were carried out with the NANTEN 4 m telescope of Nagoya University at Las Campanas Observatory (2400 m above sea level) in Chile in 1999 May–July <cit.>. The half-power beam width (HPBW) at the frequency of the ^12CO(J = 1–0) line, 115.290 GHz, was 160”. The observations were made in position-switching mode with a grid spacing of 120”. An SIS (superconductor-insulator-superconductor) mixer receiver provided a system temperature T_sys of ∼250 K in the single side band (SSB), including the atmosphere toward the zenith. The spectrometer was an acousto-optical spectrometer (AOS) with a 40 MHz bandwidth and 40 kHz resolution, providing a 100 km s^-1 velocity coverage and a velocity resolution of 0.1 km s^-1. Velocities always refer to the local standard of rest. The rms noise per channel is ∼0.5 K.

§.§ Hi observations

We used the high-resolution 21 cm Hi data obtained with the Australia Telescope Compact Array (ATCA) in Narrabri, New South Wales, Australia, which consists of six 22 m dishes. The region of RX J0852.0-4622 at l = 266 degrees was observed as part of the Southern Galactic Plane Survey <cit.> and was combined with the Hi data taken with the 64 m Parkes telescope.
The Parkes Hi data were taken over b = -10 to +10 degrees, while the ATCA data cover b = -1.5 to +1.5 degrees. In order to supplement coverage of the southwestern half of the SNR, new Hi observations were conducted over 24 hours on February 26–27 and March 29–30, 2011, with the ATCA in the EW352 and EW367 configurations (Project ID: C2449, PI: Y. Fukui). We employed the mosaicking technique, with 43 pointings arranged in a hexagonal grid at the Nyquist separation of 19'. The absolute flux density was scaled by observing PKS B1934-638, which was used as the primary bandpass and amplitude calibrator. We also periodically observed PKS 0823-500 for phase and gain calibration. The MIRIAD software package was used for the data reduction <cit.>. We combined the ATCA dataset with single-dish observations taken with the Parkes 64 m telescope and the SGPS dataset. The final Hi beam size is 245” × 130” with a position angle of 117 degrees. The typical rms noise level was 1.4 K per channel for a velocity resolution of 0.82 km s^-1.

§ DISTRIBUTIONS OF THE SNR

Figure <ref>a shows the TeV γ-ray and X-ray distributions of RX J0852.0-4622. These high-energy features are thin and shell-like and are enhanced toward the northern half. The γ-ray distribution <cit.> was obtained at energies above 0.3 TeV with the High Energy Stereoscopic System (H.E.S.S.) installed in Namibia, which utilized the first four 12 m diameter Cherenkov telescopes of the current five-telescope H.E.S.S. array. The H.E.S.S. image has a point-spread function of 0.06 degrees at the 68% containment radius (hereafter termed “r_68”), or a full width at half power of 0.144 degrees, for the 33 hr observations[A higher-statistics γ-ray image was presented by <cit.> with an angular resolution of r_68 = 0.08 degrees, poorer than the 0.06 degrees reported for the morphological analysis in <cit.>. There is no significant difference between the two images except for the angular resolution and photon statistics; we therefore use the previous image in <cit.> to improve the morphological study in this paper.]. The shell is well resolved at the 2-degree diameter, making RX J0852.0-4622 suitable for testing the spatial correspondence between the ISM and the γ-rays. The γ-ray peak toward (l, b) ∼ (266^∘.97, -1^∘.00) corresponds to the pulsar wind nebula PSR J0855-4644 at a distance below 900 pc and is not related to RX J0852.0-4622 <cit.>. The total γ-ray flux in the energy range 0.3–30 TeV is estimated to be (84.1 ± 4.3_stat ± 21.7_syst) × 10^-12 erg cm^-2 s^-1 <cit.>.

ROSAT observations <cit.>, ASCA observations <cit.>, and Suzaku observations <cit.> show that the object is shell-like, with luminous northern and southern rims and less luminous northeastern and southwestern rims. The X-rays are superposed on the γ-ray image in Figure <ref>a. The most prominent X-ray peak is seen toward the northwestern shell, at (l, b) = (265^∘.4, -1^∘.2) <cit.>. The X-rays are non-thermal, with a photon index of around 2.7 and an absorbing column density of 2–4 × 10^21 cm^-2 <cit.>. ASCA and Chandra observations of the SNR show that the non-thermal X-rays are dominant, with an absorption column density of 1–4 × 10^21 cm^-2 toward the X-ray peak.
A spatially resolved spectroscopic X-ray study with Chandra revealed details of the fine structure in the luminous northwestern rim complex of G266.2-1.2 <cit.>.

§ ISM DISTRIBUTIONS

§.§ CO and Hi distributions

CO is a tracer of molecular gas with densities of a few times 100 cm^-3 or higher, while Hi traces atomic gas at lower densities, below several times 100 cm^-3. The density range between the two may be probed by cool Hi gas seen in self-absorption, if the background Hi is bright enough. Such Hi self-absorption is in fact observed in RX J1713.7-3946 and is identified as part of the target ISM protons in the hadronic γ-ray production (F12). Figure <ref>b shows a three-color image of RX J0852.0-4622, consisting of ^12CO(J = 1–0) in red, Hi in green, and X-rays (2–5.7 keV) in blue. The Hi emission delineates the outer boundary of the X-ray shell, except in the southeast and northeast. In the western region, the X-ray shell is well correlated with the CO clumps. Figures <ref> and <ref> show the velocity channel distributions of ^12CO(J = 1–0) and Hi every 5 km s^-1 over the range from -10 km s^-1 to 80 km s^-1, with the H.E.S.S. TeV γ-ray contours superposed. The giant molecular cloud at 0–15 km s^-1 is the VMR <cit.>. The other CO features are all small and clumpy. We list the observational parameters of the small CO clouds in Table <ref>: the position of peak intensity within the cloud, velocity, line intensity, size, and mass.

Table 2: Observed Properties of Hi Supershells toward RX J0852.0-4622

ID   Name            l      b     v_c        r           a        b        ϕ      v_exp      Reference
     (deg)  (deg)  (km s^-1)  (deg)       (deg)    (deg)    (deg)  (km s^-1)
(1)  (2)             (3)    (4)   (5)        (6)         (7)      (8)      (9)    (10)       (11)
1    GS 268-01+066   267.8  -1.1  66         —           2.2±0.2  1.7±0.2  -36.9  10.3       [1]
2    GS 263-02+45    262.6  -1.8  45         3.60±0.40   —        —        —      14.0       [2]
3    GS 265-04+35    264.9  -3.6  35         2.47±0.05   —        —        —      9          This work

Col. (1): Supershell ID. Col. (2): Supershell name. Cols. (3)–(4): Central position of the supershell. Col. (5): Central radial velocity of the supershell. Col. (6): Radius of the fitted circle. Cols. (7)–(8): Major and minor semi-axes of the fitted ellipse. Col. (9): Inclination ϕ of the major axis relative to the Galactic longitude, measured counterclockwise from the Galactic plane. Col. (10): Expansion velocity of the supershell. Col. (11): References: [1] <cit.>, [2] <cit.>.

§.§ Associated clouds

We first identify seven candidate CO clouds, named CO 0 S, CO 20 E, CO 25 W, CO 25 C, CO 30 E, CO 45 NW, and CO 60 NW, whose typical velocities are indicated in Figure <ref>. In addition, we show five Hi features, Hi 0 S, Hi 25 W, Hi 30 E, Hi 45 N, and Hi 60 NW, in Figure <ref>, which are candidates for the Hi counterparts of the CO. By combining these CO features with the Hi and the shell distribution, we select the plausible candidates for clouds interacting with the SNR.

Figure <ref> shows an overlay of CO and Hi. The CO corresponds well to the southwestern rim of the shell. This shows a clear association of the CO cloud CO 25 W with the SNR. Hi 25 W is also associated with this CO cloud, showing a positional shift from the center toward the northwest in Figure <ref>. The shift is consistent with part of an expanding shell on the near side, with an expansion velocity of around 5–10 km s^-1. We also note that Hi 25 W is likely located toward the tangential edge of the shell, suggesting that the central velocity of the ISM associated with the SNR is around 25 km s^-1. Figures <ref> and <ref> show two additional cases of the association.
Figure <ref> shows CO 30 E with an Hi tail, Hi 30 E, extending outward from the centre. The Hi is V-shaped, pointing toward the centre of the SNR. CO 30 E is located at the tip of the Hi and is elongated in the radial direction. We suggest that this distribution represents a blown-off cloud overtaken by the stellar wind of the SN progenitor, where the dense CO head survived the stellar wind, leaving an Hi tail. A similar morphology is seen toward CO 25 C in Figure <ref>. This CO cloud is also elongated toward the centre, and the Hi, part of Hi 45 N, is elongated in the same direction as the CO at the inner Hi tip (Figure <ref>). We discuss more quantitative details of the interaction in the next section. The three CO features CO 25 W, CO 25 C, and CO 30 E, in a velocity range from 20 km s^-1 to 30 km s^-1, suggest the physical association of the CO and Hi with RX J0852.0-4622. We also note that the small double feature CO 0 S is seen toward the southeast of the γ-ray feature. These features and Hi 0 S (Figure <ref>) show correspondence with the γ-rays. To summarize, the CO in a velocity range from -3 km s^-1 to 30 km s^-1, with a central velocity of ∼25 km s^-1, shows signs of association with the SNR.

Based on these candidates for the association, we extended the search for CO and Hi to a velocity range from -5 km s^-1 to 65 km s^-1 in Figures <ref> and <ref>, and have summarized the possible candidate features in Figures <ref> and <ref>. These candidate CO and Hi features are CO 20 E, CO 45 NW, CO 60 NW, Hi 45 N, and Hi 60 NW. It is notable that CO 25 W is found toward the γ-ray shell (panel 25–30 km s^-1).

§.§ Distance of the ISM: Hi supershells

The central velocity of the ISM associated with RX J0852.0-4622 is around 25 km s^-1, with a velocity span of ∼50 km s^-1. A velocity of 25 km s^-1 corresponds to a kinematic distance of 4.3 kpc if a Galactic rotation model is adopted <cit.>, significantly different from the adopted distance of 750 pc <cit.>. We shall look at the Hi distribution around the SNR in order to clarify this discrepancy. Figures <ref>a, <ref>b, and <ref>c show the Hi distributions at velocities of 66 km s^-1, 45 km s^-1, and 35 km s^-1, respectively. In a conventional Galactic rotation model, these velocities correspond to distances of 8.7 kpc, 6.4 kpc, and 5.4 kpc <cit.>. We argue here that Galactic rotation is not the dominant cause of the Hi velocity; instead, the expanding motion of several Hi supershells is mainly responsible for the line-of-sight velocity. In fact, <cit.> and <cit.> identified the Hi supershells GS 268-01+066 and GS 263-02+45 (see Figures <ref>a and <ref>b) toward the SNR. In addition, we newly identified the Hi supershell “GS 265-04+35”, as shown in Figure <ref>c. This supershell also has an expanding motion and shows front and rear walls in the Hi spectrum (see the Appendix for details). Figure <ref>d shows a schematic view of the supershell boundaries toward the SNR RX J0852.0-4622. The TeV γ-ray contours overlap the three supershells, indicating that the expanding motion of the supershells dominates the Hi velocity field around the SNR. Their detailed physical parameters are given in Table <ref>.

The Hi position-velocity diagram on a large scale is shown in Figure <ref>.
The Hi gas toward the SNR is not ordered along the spiral arm at 60 km s^-1 to 80 km s^-1, and the Hi shows several velocity features at 0 km s^-1, 20–30 km s^-1, and 40–50 km s^-1 toward RX J0852.0-4622 in l = 263–268 degrees, all of which show kinematic properties consistent with parts of expanding shells (Figure <ref>). We therefore suggest that the apparent velocity shifts of the Hi in the region are due to the expansion of supershells driven by stellar clusters located at the centres of the shells. Known molecular supershells, including the Carina flare supershell, show an expansion velocity of 10 km s^-1 with a ∼10 Myr age <cit.>. We find no direct observational hints of such stellar clusters, although this is not unusual: clusters should become faint within 10 Myr owing to the evolution of their high-mass stars.

§.§ Total ISM protons

We here derive the total ISM proton column density. The molecular column density is calculated using canonical conversion factors, and the error corresponds to 3σ noise fluctuations. The total intensity of ^12CO(J = 1–0) can be converted into the molecular column density N(H_2) (cm^-2) by the relation

N(H_2) = X_CO × W(^12CO),

where the X_CO factor, N(H_2) (cm^-2) / W(^12CO) (K km s^-1), is adopted as X_CO = 2.0 × 10^20 (cm^-2 / (K km s^-1)) <cit.>. The total molecular proton column density is then given by N_p(H_2) = 2N(H_2). Usually, the atomic proton column density is estimated by assuming that the 21 cm Hi line is optically thin. If this approximation is valid, the Hi column density N_p(Hi) (cm^-2) is estimated as follows <cit.>:

N_p(Hi) = 1.823 × 10^18 ∫ T_b dV (cm^-2),

where T_b (K) is the 21 cm brightness temperature and V (km s^-1) is the velocity. <cit.> made a new analysis using the sub-mm dust optical depth derived by the Planck satellite <cit.> and concluded that the Hi emission is generally optically thick, with an optical depth of around 1, in the local interstellar medium within 200 pc of the sun at Galactic latitudes higher than 15 degrees. This optical depth correction increases the Hi density by a factor of ∼2 as compared with the optically thin case. We adopt this correction by using the relationship between the Hi integrated intensity and the 353 GHz dust optical depth τ_353 given in Figure <ref> <cit.>. Since RX J0852.0-4622 is close to the Galactic plane, the 353 GHz dust optical depth is not available for a single velocity component as in the local space. Instead, we adopt the empirical relationship between W(Hi) and τ_353 in Figure <ref> in order to estimate N_p(Hi) from W(Hi). The ratio of N_p(Hi) to τ_353 is determined as follows in the optically thin limit <cit.>:

N_p(Hi) / (1 × 10^21 cm^-2) = (τ_353 / 4.77 × 10^-6)^1/1.3,

hence

N_p(Hi) = (1.2 × 10^25) × (τ_353)^1/1.3 (cm^-2),

where the non-linear dust property N_p(Hi) ∝ (τ_353)^1/1.3 derived by <cit.> is assumed; this non-linearity does not alter N_p(Hi) significantly in the present W(Hi) range.

The Hi velocity range is taken to be 20 km s^-1 to 50 km s^-1 and -4 km s^-1 to 1 km s^-1, since the VMR is dominant in the velocity range from -5 km s^-1 to 15 km s^-1. The CO 0 S, CO 25 W, and CO 25 C locations are also taken into account as the molecular components. The total ISM proton column density is given by the sum

N_p(H_2 + Hi) = N_p(H_2) + N_p(Hi).

Figures <ref>a, <ref>b, and <ref>c present the total molecular protons, atomic protons, and the sum of the molecular and atomic protons, respectively.
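As a concrete illustration of this bookkeeping, the conversion from observables to proton column densities takes only a few lines. The following is a minimal sketch, assuming hypothetical W(^12CO) and τ_353 maps on a common grid; the array and function names are illustrative and not part of the original analysis.

import numpy as np

X_CO = 2.0e20  # cm^-2 (K km s^-1)^-1, CO-to-H2 conversion factor

def molecular_protons(w_co):
    """N_p(H2) = 2 N(H2) = 2 X_CO W(12CO), in cm^-2."""
    return 2.0 * X_CO * w_co

def atomic_protons(tau_353):
    """Optical-depth-corrected Hi proton column density, in cm^-2.

    Implements N_p(Hi) = 1e21 * (tau_353 / 4.77e-6)**(1/1.3),
    i.e., N_p(Hi) ~ 1.2e25 * tau_353**(1/1.3)."""
    return 1.0e21 * (tau_353 / 4.77e-6) ** (1.0 / 1.3)

# Hypothetical maps: W(12CO) in K km/s and Planck tau_353 (dimensionless)
w_co = np.array([[0.0, 5.0], [10.0, 2.0]])
tau_353 = np.array([[2e-5, 4e-5], [6e-5, 3e-5]])

# Total ISM proton column density, N_p(H2 + Hi)
n_p_total = molecular_protons(w_co) + atomic_protons(tau_353)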
The resolution is adjusted to the major axis of the beam size (245”). Our assumed distance of around 1 kpc for most of the associated ISM features is independently confirmed by the visual extinction A_V seen in the southern half of the SNR, where foreground contamination by the VMR is not significant. Figure <ref> shows the A_V distribution from the Digitized Sky Survey I <cit.>, where A_V is estimated by star counting of the 2MASS data. The direction (l, b) = (266^∘, -1^∘) is one where the number of stars is small compared with the Galactic centre, so A_V is not very accurately estimated in general. Nevertheless, we see a clear indication of enhanced A_V features in Figure <ref>, where A_V is typically 0.7 mag to 1.7 mag, as is clearly seen toward the southern part of the γ-ray shell and the CO 0 S, Hi 0 S, and CO 25 W clouds. The peaks have a total proton column density of 5–10 × 10^21 cm^-2 in Figure <ref>c, corresponding to A_V = 3–5 mag (A_V = N_p(H) / (1.7 × 10^21 cm^-2) mag). The fact that the extinction is visible toward the SNR indicates that the cloud is relatively close to the sun, and the image in Figure <ref> is consistent with a distance of around 1 kpc.

§.§ TeV γ-rays

The histogram in Figure <ref> shows the average TeV γ-ray count taken every 0.1 degrees as a function of radius from the shell centre, (l, b) = (266^∘.28, -1^∘.24), obtained by H.E.S.S. <cit.>. The error is conservatively estimated as the (oversampling-corrected total smoothed count)^0.5. We analyze the γ-ray distribution using a spherical shell model. Following the method of F12, the distribution is fitted by a spherically symmetric shell with a Gaussian radial intensity profile of the γ-ray counts:

F(r) = A × e^-(r - r_0)^2 / 2σ^2,

where A is a normalization factor, r_0 is the radius at the peak, and σ is the width of the shell. The green line in Figure <ref> shows the result: r_0 = 0.91 degrees and σ = 0.18 degrees, corresponding to ∼12 pc and ∼2.4 pc at 750 pc.

The azimuthal γ-ray distributions are obtained between two circles centered at (l, b) = (266^∘.28, -1^∘.24), whose radii are determined to be 0.64 degrees and 0.91 degrees, respectively, at the 1/3 count level of the peak of the Gaussian shell shown in Figure <ref>. The projected distribution is shown in orange. The range of radius is the same as that adopted by <cit.>. The fit is adequate over the range from 0.0 to 1.2 degrees.
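For reference, a shell fit of this form can be reproduced with a standard least-squares routine. The following is a minimal sketch, using a hypothetical radial profile in place of the actual H.E.S.S. counts; all numbers here are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def shell_profile(r, A, r0, sigma):
    """F(r) = A exp(-(r - r0)^2 / (2 sigma^2))."""
    return A * np.exp(-((r - r0) ** 2) / (2.0 * sigma ** 2))

# Hypothetical stand-in for the azimuthally averaged counts per 0.1-deg bin
rng = np.random.default_rng(0)
r = np.arange(0.05, 1.25, 0.1)                      # radius, degrees
counts = shell_profile(r, 100.0, 0.91, 0.18) + rng.normal(0.0, 3.0, r.size)
errors = np.sqrt(np.clip(counts, 1.0, None))        # ~sqrt(N) counting errors

popt, _ = curve_fit(shell_profile, r, counts, p0=[100.0, 0.9, 0.2],
                    sigma=errors, absolute_sigma=True)
A_fit, r0_fit, sigma_fit = popt
# At 750 pc, 1 degree subtends ~13.1 pc, so r0 ~ 0.91 deg maps to ~12 pc
print(r0_fit, sigma_fit)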
§ DISCUSSION

Spatial correspondence between γ-rays and ISM protons is a key to discerning the γ-ray production mechanism (I12, F12). Figure <ref> shows a comparison between the γ-rays and the ISM in the azimuthal distribution (the angle is measured clockwise, with 0 degrees in the southeast). The TeV γ-ray counts were normalized. The correspondence between the two is remarkably good, while some small deviations between them are seen at angles of -60 to 0 degrees and 120 to 150 degrees, and inside the shell. This offers a third case of such correspondence, after RX J1713.7-3946 (F12) and HESS J1731-347, supporting a hadronic origin of the γ-rays via spatial correspondence. The total Hi and H_2 masses involved in the shell are estimated to be ∼2.5 × 10^4 M_⊙ and ∼10^3 M_⊙ within a radius of ∼15 pc, respectively.

In Figure <ref>a there is a point showing a significant deviation at an angle of 0 to 30 degrees; this corresponds to the position of the PWN, which is unrelated to the SNR shell, because the pulsar characteristic age is 140 kyr and a large velocity of ∼3000 km s^-1 would be needed if the pulsar were a remnant of the SNR <cit.>.

The positions between 30 degrees and 150 degrees show a trend in which the ISM proton column density underestimates the γ-rays by less than 10%. These positions show no CO emission, and the Hi alone is responsible for the ISM protons. In Figure <ref>c, the shell seems more intense in the γ-rays, which may suggest that the CRs are enhanced, as indicated by the significantly enhanced X-rays. We estimate the total energy of CR protons above 1 GeV by using the equation below <cit.>:

W_p,tot ∼ (7.1 ± 0.3_stat ± 1.9_syst) × 10^49 (d / 750 pc)^2 (n / 1 cm^-3)^-1 (erg),

where d is the distance, 750 pc, and n is the proton density, ∼100 cm^-3, obtained by adopting the shell radius of ∼15 pc and the thickness of ∼9 pc. W_p,tot is estimated to be ∼10^48 erg, corresponding to 0.1% of the total kinetic energy of an SN, ∼10^51 erg. The Hi distribution seems to be fairly uneven in space and velocity, as suggested by the non-uniform distribution in Figure <ref>. The coupling between CR protons and the target protons may therefore not be complete. According to <cit.>, the effective mean target density for CR protons, n_tg, can be written as n_tg ≃ nf, where f is the volume filling factor of the interstellar protons. The value W_p,tot is therefore to be regarded as a lower limit. In RX J1713.7-3946, the total molecular and atomic masses are each ∼10^4 M_⊙, and the total energy of the CR protons is 10^48 erg (F12). In RX J0852.0-4622, the total atomic mass is ∼2.5 × 10^4 M_⊙, while the total molecular mass is ∼10^3 M_⊙. Given the radii of the two SNRs, the average density over the whole volume of RX J1713.7-3946 is similar to that in RX J0852.0-4622. We find that these two young TeV γ-ray SNRs have total CR proton energies of order 10^48 erg. This energy is significantly smaller than those discussed before <cit.> and suggests that the fraction of the explosion energy converted into CRs is fairly low, ∼0.1%, at such a young stage, coupled with the effect of the volume filling factor. The efficiency may possibly grow in time, and this can be tested by exploring middle-aged SNRs with ages of 10^4 yr. In fact, for the middle-aged SNRs W44, W28, and IC443, the total CR proton energy is ∼10^49 erg <cit.>.

Non-thermal X-rays indicate particle acceleration in the SN blast waves. <cit.> analyzed the X-rays with ASCA observations. They showed that the non-thermal X-rays are dominant, similarly to RX J1713.7-3946. The absorption column density in front of the X-rays, 1.4 × 10^21 cm^-2 toward the western rim, is consistent with the present ISM proton distribution in Figure <ref>c. ISM inhomogeneity is suggested by the CO clumps in Figure <ref> (CO 30 E and CO 25 C) overtaken by the blast waves. The situation is similar to the inside of RX J1713.7-3946. DSA works for CR acceleration in the low-density cavity, and the CR protons can collide with the ISM protons in the SNR. The diffusion length is estimated to be 1 pc for a magnetic field of 10 μG and a CR proton energy of 100 TeV, large enough for the CR protons to interact with the CO and Hi. We note that there is a correlation between the γ-rays, X-rays and the ISM. All these components are enhanced on the western rim of the SNR, while the eastern rim is quite weak. This distribution is interpreted in terms of enhanced turbulence in the western rim, as suggested by the magnetohydrodynamical numerical simulations of I12. These authors showed that a shock interacting with dense clumps creates turbulence and that the turbulence amplifies the magnetic field up to 1 mG. The enhanced magnetic field can thus intensify the synchrotron X-rays and efficiently accelerate CRs, increasing both the γ-ray and X-ray emission.
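The order-of-magnitude numbers above can be checked directly. The following is a minimal sketch of the arithmetic in cgs units, treating the quoted gas mass as hydrogen for a rough estimate and using the shell geometry adopted in the text; the variable names are illustrative.

import numpy as np

PC = 3.086e18      # cm per parsec
MSUN = 1.989e33    # g per solar mass
M_H = 1.674e-24    # g, hydrogen atom mass

# Shell of outer radius ~15 pc and thickness ~9 pc
r_out, dr = 15.0 * PC, 9.0 * PC
volume = 4.0 / 3.0 * np.pi * (r_out**3 - (r_out - dr)**3)   # cm^3

# Associated ISM: ~2.5e4 Msun (Hi) + ~1e3 Msun (H2)
n_p = (2.6e4 * MSUN / M_H) / volume
print(f"mean proton density ~ {n_p:.0f} cm^-3")             # of order 100

# W_p,tot ~ 7.1e49 (d / 750 pc)^2 (n / 1 cm^-3)^-1 erg, central value
w_p = 7.1e49 * (750.0 / 750.0) ** 2 / n_p
print(f"W_p,tot ~ {w_p:.1e} erg")                           # of order 1e48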
RX J0852.0-4622 is suggested to be a core-collapse SNR <cit.>. The parent cloud of the SN progenitor is likely the one at 25 km s^-1 elongated along the plane by 100 pc to the west. It is the most massive Hi complex in the region. The total Hi mass of the cloud is roughly estimated to be 10^5 M_⊙ and the H_2 mass 10^4 M_⊙. The Hi cloud is located at the intersection of at least three Hi supershells, which are expanding at 10–15 km s^-1. Collisions among the shells may be responsible for the formation of the parent cloud in this relatively diffuse environment outside the solar circle <cit.>.

§ SUMMARY

We have carried out a combined analysis of CO and Hi toward the young TeV γ-ray SNR RX J0852.0-4622, following the first good spatial correlation found in RX J1713.7-3946. The main conclusions of the present study are summarized as follows:

* The ISM in a velocity range from -4 km s^-1 to 50 km s^-1 is likely associated with the SNR. The ISM is dominated by the Hi gas, and the mass of the molecular gas probed by CO corresponds to about 4% of the Hi gas. This association is supported by the morphological signs of interaction between the CO/Hi and the SNR, such as cometary-tailed and shell-like shapes, at 20 km s^-1 to 30 km s^-1. The visual extinction in the south of the SNR corresponding to the ISM protons lends further support to the association.

* The total ISM protons show a good spatial correspondence with the TeV γ-rays in the azimuthal distribution. This provides a third case of such correspondence, after the SNRs RX J1713.7-3946 and HESS J1731-347, satisfying a necessary condition for a hadronic component of the γ-ray emission. The total CR proton energy is estimated to be 10^48 erg from the γ-rays measured by H.E.S.S. and the average ISM proton density of 100 cm^-3 derived in the present study. The CR energy is similar to the value obtained for RX J1713.7-3946 and is 0.1% of the typical SN kinetic energy.

* The large velocity splitting of the Hi gas is likely due to the expansion of a few supershells driven by star clusters, rather than to Galactic rotation. The velocity range from 0 km s^-1 to 60 km s^-1 corresponds to that of the inter-arm features between the local arm and the Perseus arm. We find a possible parent Hi/CO cloud in which the high-mass stellar progenitor of the SNR was formed. This cloud is located toward the interface of a few supershells and has a mass of 10^5 M_⊙ in Hi.

§ APPENDIX: IDENTIFICATION OF THE SUPERSHELL GS 265-04+35

In order to identify the supershell GS 265-04+35, we used the following three steps:

* Searching for the central velocity of the Hi supershell: We investigated the Hi profile toward the supershell and searched for the velocity with minimum Hi intensity, which we define as the central velocity of the expanding motion <cit.>. The front and rear walls were also identified (Figure <ref>). The expansion velocity was estimated to be ∼9 km s^-1 as the difference between the central and front-wall velocities (see also Table <ref>).

* Estimating the geometric center of the Hi supershell: In the Hi map at the central velocity, we estimated the radial distance of the supershell, defined at the peak Hi intensity, every 15 degrees for an assumed geometric center. We used only the radial distances with a peak Hi intensity of 30 K or higher.
We minimized the dispersion of the radial distances over azimuth angles, which gives the geometric center as (l, b) = (264^∘.9, -3^∘.6) (see also Table <ref>).

* Estimating the averaged radii and comparing them with the model: We calculated the averaged radii from the geometric center for each velocity, as shown in Figures <ref>a–<ref>c, and compared them with a model of an expanding shell (Figures <ref>d–<ref>f).

From these steps, we confirmed that GS 265-04+35 is an expanding Hi supershell.

The NANTEN project is based on a mutual agreement between Nagoya University and the Carnegie Institution of Washington (CIW). We greatly appreciate the hospitality of all the staff members of the Las Campanas Observatory of CIW. We are thankful to the many Japanese public donors and companies who contributed to the realization of the project. This study was financially supported by Grants-in-Aid for Scientific Research (KAKENHI) of the Japan Society for the Promotion of Science (JSPS, grant Nos. 12J10082, 24224005, 25287035, and 15H05694). This work was also supported by “Building of Consortia for the Development of Human Resources in Science and Technology” of the Ministry of Education, Culture, Sports, Science and Technology (MEXT, grant No. 01-M1-0305).

MIRIAD <cit.>

99
[Acero et al.(2013)]2013A A...551A...7A Acero, F., Gallant, Y., Ballet, J., Renaud, M., & Terrier, R. 2013, , 551, A7
[Aharonian et al.(2004)]2004Natur.432...75A Aharonian, F. A., Akhperjanian, A. G., Aye, K.-M., et al. 2004, , 432, 75
[Aharonian et al.(2005)]2005A A...437L...7A Aharonian, F., Akhperjanian, A. G., Bazer-Bachi, A. R., et al. 2005, , 437, L7
[Aharonian et al.(2006)]2006A A...449..223A Aharonian, F., Akhperjanian, A. G., Bazer-Bachi, A. R., et al. 2006, , 449, 223
[Aharonian et al.(2007a)]2007A A...464..235A Aharonian, F., Akhperjanian, A. G., Bazer-Bachi, A. R., et al. 2007, , 464, 235
[Aharonian et al.(2007b)]2007ApJ...661..236A Aharonian, F., Akhperjanian, A. G., Bazer-Bachi, A. R., et al. 2007, , 661, 236
[Aharonian et al.(2009)]2009ApJ...692.1500A Aharonian, F., Akhperjanian, A. G., de Almeida, U. B., et al. 2009, , 692, 1500
[Allen et al.(2015)]2015ApJ...798...82A Allen, G. E., Chow, K., DeLaney, T., et al. 2015, , 798, 82
[Arnal & Corti(2007)]2007AA...476..255A Arnal, E. M., & Corti, M. 2007, , 476, 255
[Aschenbach(1998)]1998Natur.396..141A Aschenbach, B. 1998, , 396, 141
[Aschenbach et al.(1999)]1999A A...350..997A Aschenbach, B., Iyudin, A. F., & Schönfelder, V. 1999, , 350, 997
[Bamba et al.(2005)]2005ApJ...632..294B Bamba, A., Yamazaki, R., & Hiraga, J. S. 2005, , 632, 294
[Bell(1978)]1978MNRAS.182..147B Bell, A. R. 1978, , 182, 147
[Bertsch et al.(1993)]1993ApJ...416..587B Bertsch, D. L., Dame, T. M., Fichtel, C. E., et al. 1993, , 416, 587
[Brand & Blitz(1993)]1993A A...275...67B Brand, J., & Blitz, L. 1993, , 275, 67
[Blandford & Ostriker(1978)]1978ApJ...221L..29B Blandford, R. D., & Ostriker, J. P. 1978, , 221, L29
[Cha et al.(1999)]1999ApJ...515L..25C Cha, A. N., Sembach, K. R., & Danks, A. C. 1999, , 515, L25
[Chen & Gehrels(1999)]1999ApJ...514L.103C Chen, W., & Gehrels, N. 1999, , 514, L103
[Dawson et al.(2008a)]2008MNRAS.387...31D Dawson, J. R., Mizuno, N., Onishi, T., McClure-Griffiths, N. M., & Fukui, Y. 2008, , 387, 31
[Dawson et al.(2008b)]2008PASJ...60.1297D Dawson, J. R., Kawamura, A., Mizuno, N., Onishi, T., & Fukui, Y. 2008, , 60, 1297
[Dawson et al.(2011a)]2011ApJ...728..127D Dawson, J. R., McClure-Griffiths, N. M., Kawamura, A., et al. 2011, , 728, 127
[Dawson et al.(2011b)]2011ApJ...741...85D Dawson, J. R., McClure-Griffiths, N. M., Dickey, J. M., & Fukui, Y. 2011, , 741, 85
[Dawson et al.(2015)]2015ApJ...799...64D Dawson, J. R., Ntormousi, E., Fukui, Y., Hayakawa, T., & Fierlinger, K. 2015, , 799, 64
[Dickey & Lockman(1990)]1990ARA A..28..215D Dickey, J. M., & Lockman, F. J. 1990, , 28, 215
[Dobashi et al.(2005)]2005PASJ...57S...1D Dobashi, K., Uehara, H., Kandori, R., et al. 2005, , 57, S1
[Ellison et al.(2010)]2010ApJ...712..287E Ellison, D. C., Patnaude, D. J., Slane, P., & Raymond, J. 2010, , 712, 287
[Fukuda et al.(2014)]2014ApJ...788...94F Fukuda, T., Yoshiike, S., Sano, H., et al. 2014, , 788, 94
[Fukui et al.(1999)]1999PASJ...51..751F Fukui, Y., Onishi, T., Abe, R., et al. 1999, , 51, 751
[Fukui et al.(2003)]2003PASJ...55L..61F Fukui, Y., Moriguchi, Y., Tamura, K., et al. 2003, , 55, L61
[Fukui et al.(2012)]2012ApJ...746...82F Fukui, Y., Sano, H., Sato, J., et al. 2012, , 746, 82
[Fukui et al.(2014)]2014ApJ...796...59F Fukui, Y., Okamoto, R., Kaji, R., et al. 2014, , 796, 59
[Fukui et al.(2015)]2015ApJ...798....6F Fukui, Y., Torii, K., Onishi, T., et al. 2015, , 798, 6
[Gabici et al.(2007)]2007Ap SS.309..365G Gabici, S., Aharonian, F. A., & Blasi, P. 2007, , 309, 365
[Gabici & Aharonian(2014)]2014MNRAS.445L..70G Gabici, S., & Aharonian, F. A. 2014, , 445, L70
[Giuliani et al.(2010)]2010A A...516L..11G Giuliani, A., Tavani, M., Bulgarelli, A., et al. 2010, , 516, L11
[Giuliani et al.(2011)]2011ApJ...742L..30G Giuliani, A., Cardillo, M., Tavani, M., et al. 2011, , 742, L30
[Hess & Steinmaurer(1935)]1935Natur.135..617H Hess, V. F., & Steinmaurer, R. 1935, , 135, 617
[H.E.S.S. Collaboration et al.(2011)]2011A A...531A..81H H.E.S.S. Collaboration, Abramowski, A., Acero, F., et al. 2011, , 531, A81
[H.E.S.S. Collaboration et al.(2016a)]2016arXiv160104461H H.E.S.S. Collaboration, Abramowski, A., Aharonian, F., et al. 2016a, arXiv:1601.04461
[H.E.S.S. Collaboration et al.(2016b)]2016arXiv160908671H H.E.S.S. Collaboration, Abdalla, H., Abramowski, A., et al. 2016b, arXiv:1609.08671
[H.E.S.S. Collaboration et al.(2016c)]2016arXiv161101863H H.E.S.S. Collaboration, Abdalla, H., Abramowski, A., et al. 2016c, arXiv:1611.01863
[Inoue et al.(2012)]2012ApJ...744...71I Inoue, T., Yamazaki, R., Inutsuka, S.-i., & Fukui, Y. 2012, , 744, 71
[Iyudin et al.(2005)]2005A A...429..225I Iyudin, A. F., Aschenbach, B., Becker, W., Dennerl, K., & Haberl, F. 2005, , 429, 225
[Iyudin et al.(2007)]2007ESASP.622...91I Iyudin, A. F., Aschenbach, B., Burwitz, V., et al. 2007, The Obscured Universe. Proceedings of the VI INTEGRAL Workshop, 622, 91
[Katsuda et al.(2008)]2008ApJ...678L..35K Katsuda, S., Tsunemi, H., & Mori, K. 2008, , 678, L35
[Kalberla et al.(2005)]2005A A...440..775K Kalberla, P. M. W., Burton, W. B., Hartmann, D., et al. 2005, , 440, 775
[Liseau et al.(1992)]1992A A...265..577L Liseau, R., Lorenzetti, D., Nisini, B., Spinoglio, L., & Moneti, A. 1992, , 265, 577
[Matsunaga et al.(2001)]2001PASJ...53.1003M Matsunaga, K., Mizuno, N., Moriguchi, Y., et al. 2001, , 53, 1003
[Maxted et al.(2017)]2017MNRAS...submitted Maxted, N., Burton, M., Braiding, C., et al. 2017, submitted to
[May et al.(1988)]1988A AS...73...51M May, J., Murphy, D. C., & Thaddeus, P. 1988, , 73, 51
[McClure-Griffiths et al.(2002)]2002ApJ...578..176M McClure-Griffiths, N. M., Dickey, J. M., Gaensler, B. M., & Green, A. J. 2002, , 578, 176
[McClure-Griffiths et al.(2005)]2005ApJS..158..178M McClure-Griffiths, N. M., Dickey, J. M., Gaensler, B. M., et al. 2005, , 158, 178
[Mereghetti(2001)]2001ApJ...548L.213M Mereghetti, S. 2001, , 548, L213
[Moriguchi et al.(2001)]2001PASJ...53.1025M Moriguchi, Y., Yamaguchi, N., Onishi, T., Mizuno, A., & Fukui, Y. 2001, , 53, 1025
[Moriguchi et al.(2005)]2005ApJ...631..947M Moriguchi, Y., Tamura, K., Tawara, Y., et al. 2005, , 631, 947
[Murphy & May(1991)]1991A A...247..202M Murphy, D. C., & May, J. 1991, , 247, 202
[Okamoto et al.(2017)]2017ApJ...838..132O Okamoto, R., Yamamoto, H., Tachihara, K., et al. 2017, , 838, 132
[Pannuti et al.(2010)]2010ApJ...721.1492P Pannuti, T. G., Allen, G. E., Filipović, M. D., et al. 2010, , 721, 1492
[Pavlov et al.(2001)]2001ApJ...559L.131P Pavlov, G. G., Sanwal, D., Kızıltan, B., & Garmire, G. P. 2001, , 559, L131
[Planck Collaboration et al.(2014)]2014A A...571A..11P Planck Collaboration, Abergel, A., Ade, P. A. R., et al. 2014, , 571, A11
[Roy et al.(2013)]2013ApJ...763...55R Roy, A., Martin, P. G., Polychroni, D., et al. 2013, , 763, 55
[Sano et al.(2010)]2010ApJ...724...59S Sano, H., Sato, J., Horachi, H., et al. 2010, , 724, 59
[Sano et al.(2013)]2013ApJ...778...59S Sano, H., Tanaka, T., Torii, K., et al. 2013, , 778, 59
[Sano et al.(2015)]2015ApJ...799..175S Sano, H., Fukuda, T., Yoshiike, S., et al. 2015, , 799, 175
[Sano et al.(2017)]sano2017inprep Sano, H., et al. 2017, in preparation
[Sault et al.(1995)]1995ASPC...77..433S Sault, R. J., Teuben, P. J., & Wright, M. C. H. 1995, Astronomical Data Analysis Software and Systems IV, 77, 433
[Slane et al.(2001a)]2001ApJ...548..814S Slane, P., Hughes, J. P., Edgar, R. J., et al. 2001, , 548, 814
[Slane et al.(2001b)]2001AIPC..565..403S Slane, P., Hughes, J. P., Edgar, R. J., et al. 2001, Young Supernova Remnants, 565, 403
[Suad et al.(2014)]2014AA...564A.116S Suad, L. A., Caiafa, C. F., Arnal, E. M., & Cichowolski, S. 2014, , 564, A116
[Takeda et al.(2016)]2016PASJ...68S..10T Takeda, S., Bamba, A., Terada, Y., et al. 2016, , 68, S10
[Tsunemi et al.(2000)]2000PASJ...52..887T Tsunemi, H., Miyata, E., Aschenbach, B., Hiraga, J., & Akutsu, D. 2000, , 52, 887
[Vallée(2008)]2008AJ....135.1301V Vallée, J. P. 2008, , 135, 1301
[Yamaguchi et al.(1999a)]1999PASJ...51..765Y Yamaguchi, N., Mizuno, N., Moriguchi, Y., et al. 1999a, , 51, 765
[Yamaguchi et al.(1999b)]1999PASJ...51..775Y Yamaguchi, N., Mizuno, N., Saito, H., et al. 1999b, , 51, 775
[Yoshiike et al.(2013)]2013ApJ...768..179Y Yoshiike, S., Fukuda, T., Sano, H., et al. 2013, , 768, 179
[Yoshiike et al.(2017)]2017ApJ...submitted Yoshiike, S., Fukuda, T., Sano, H., et al. 2017, submitted
[Zirakashvili & Aharonian(2010)]2010ApJ...708..965Z Zirakashvili, V. N., & Aharonian, F. A. 2010, , 708, 965
http://arxiv.org/abs/1708.07911v2
{ "authors": [ "Y. Fukui", "H. Sano", "J. Sato", "R. Okamoto", "T. Fukuda", "S. Yoshiike", "K. Hayashi", "K. Torii", "T. Hayakawa", "G. Rowell", "M. D. Filipovic", "N. Maxted", "N. M. McClure-Griffiths", "A. Kawamura", "H. Yamamoto", "T. Okuda", "N. Mizuno", "K. Tachihara", "T. Onishi", "A. Mizuno", "H. Ogawa" ], "categories": [ "astro-ph.HE", "astro-ph.GA" ], "primary_category": "astro-ph.HE", "published": "20170826002835", "title": "A detailed study of the interstellar protons toward the TeV $γ$-ray SNR RX J0852.0$-$4622 (G266.2$-$1.2, Vela Jr.); a third case of the $γ$-rays and ISM spatial correspondence" }
Unified Host and Network Data Set

Melissa J. M. Turcotte^*, Alexander D. Kent^* and Curtis Hash^†
^*Los Alamos National Laboratory, Los Alamos, NM, 87545, U.S.A. [email protected]
^†Ernst & Young

The lack of diverse and useful data sets for cyber security research continues to play a profound and limiting role within the relevant research communities and their resulting published research. Organizations are reticent to release data for security and privacy reasons. In addition, the data sets that are released are encumbered in a variety of ways, from being stripped of so much information that they no longer provide rich research and analytical opportunities, to being so constrained by access restrictions that key details are lacking and independent validation is difficult. In many cases, organizations do not collect relevant data in sufficient volumes or with high enough fidelity to provide cyber research value. Unfortunately, there is generally little motivation for organizations to overcome these obstacles.

In an attempt to help stimulate a larger research effort focused on operational cyber data, as well as to motivate other organizations to release useful data sets, Los Alamos National Laboratory (LANL) has released two data sets for public use <cit.>. A third, entitled the “Unified Host and Network Data Set," is introduced in this chapter.

The Unified Host and Network Data Set is a subset of network flow and computer events collected from the LANL enterprise network over the course of approximately 90 days.[The network flow data cover only 89 days due to missing data on the first day.] The host (computer) event logs originated from the majority of LANL's computers that run the Microsoft Windows operating system. The network flow data originated from many of the internal core routers within the LANL enterprise network and are derived from router netflow records. The two data sets include many of the same computers but are not fully inclusive; the network data set includes many non-Windows computers and other network devices.

Identifying values within the data sets have been de-identified (anonymized) to protect the security of LANL's operational IT environment and the privacy of individual users. The de-identified values match across both the host and network data, allowing the two data elements to be used together for analysis and research. In some cases, the values were not de-identified, including well-known network ports, system-level usernames (not associated with people) and core enterprise hosts.
In addition, a small set of hosts, users and processes were combined where they represented well-known, redundant entities. This consolidation was done for both normalization and security purposes.

In order to transform the data into a format that is useful for researchers who are not domain experts, a significant effort was made to normalize the data while minimizing the artifacts that such normalization might introduce.

§.§ Related public data sets

A number of public, cyber security relevant data sets currently exist <cit.>. Some of these represent data collected from operational environments, while others capture specific, pseudo real-world events (for example, cyber security training exercises). Many data sets are synthetic and created using models intended to represent specific phenomena of relevance; for example, the Carnegie Mellon Software Engineering Institute provides several insider threat data sets that are entirely synthetic <cit.>. In addition, many of the data sets commonly seen within the research community are egregiously dated. The DARPA cyber security data sets <cit.> published in the 1990s are still regularly used, even though the systems, networks and attacks they represent have almost no relevance to modern computing environments.

Another issue is that many of the available data sets have restrictive access and constraints on how they may be used. For example, the U.S. Department of Homeland Security provides the Information Marketplace for Policy and Analysis of Cyber-risk and Trust (IMPACT) <cit.>, which is intended to facilitate information sharing. However, the use of any of the data hosted by IMPACT requires registration and vetting prior to access. In addition, data owners may (and often do) place limitations on how and where the data may be used.

Finally, many of the existing data sets are not adequately characterized for potential researchers. It is important that researchers have a thorough understanding of the context, normalization processes, idiosyncrasies and other aspects of the data. Ideally, researchers should have sufficiently detailed information to avoid making false assumptions and to reproduce similar data. The need for such detailed discussion around published data sets is a primary purpose of this chapter.

The remainder of this chapter is organized as follows: a description of the Network Flow Data is given in Section <ref>, followed by the Windows Host Log Data in Section <ref>. Finally, a discussion of potential research directions is given in Section <ref>.

§ NETWORK FLOW DATA

The network flow data set included in this release comprises records describing communication events between devices connected to the LANL enterprise network. Each flow is an aggregate summary of a (possibly) bi-directional network communication between two network devices. The data are derived from Cisco NetFlow Version 9 <cit.> flow records exported by the core routers. As such, the records lack the payload-level data upon which most commercial intrusion detection systems are based. However, research has shown that flow-based techniques have a number of advantages and are successful at detecting a variety of malicious network behaviors <cit.>. Furthermore, these techniques tend to be more robust against the vagaries of attackers, because they are not searching for specific signatures (e.g., byte patterns) and they are encryption-agnostic.
Finally, in comparison to full-packet data, collection, analysis and archival storage of flow data at enterprise scales is straightforward and requires minimal infrastructure.

§.§ Collection & Transformation

As mentioned previously, the raw data consisted of NetFlow V9 records that were exported from the core network routers to a centralized collection server. While V9 records can contain many different fields, only the following are considered: StartTime, EndTime, SrcIP, DstIP, Protocol, SrcPort, DstPort, Packets and Bytes. The specifics of the hardware and flow export protocol are largely irrelevant, as these fields are common to all network flow formats of which the authors are aware.

These data can be quite challenging to model without a thorough understanding of their various idiosyncrasies. The following paragraphs discuss two of the most relevant issues with respect to modelling. For a comprehensive overview of these issues, among others, readers can refer to <cit.>.

Firstly, note that these flow records are uni-directional (uniflows): each record describes a stream of packets sent from one network device (SrcIP) to another (DstIP). Hence, an established TCP connection (bi-directional by definition) between two network devices, A and B, results in two flow records: one from A to B and another from B to A. It follows that there is no relationship between the direction of a flow and the initiator of a bi-directional connection (i.e., it is not known whether A or B connected first). This is the case for most netflow implementations, as bi-directional flow (biflow) protocols such as <cit.> have yet to gain widespread adoption. Clearly, this presents a challenge for the detection of attack behaviors, such as lateral movement, where directionality is of primary concern.

Secondly, significant duplication can occur due to flows encountering multiple netflow sensors in transit to their destinations. Routers can be configured to track flows on ingress and egress, and, in more complex network topologies, a single flow can traverse multiple routers. More recently, the introduction of netflow-enabled switches and dedicated netflow appliances has exacerbated the issue. Ultimately, a single flow can result in many distinct flow records. To add further complexity, the flow records are not necessarily exact duplicates, and their arrival times can vary considerably; these inconsistencies occur for many reasons, the particulars of which are too complex to discuss in this context.

In order to simplify the data for modelling, a transformation process known as biflowing or stitching was employed. This process is intended to aggregate duplicates and marry the opposing uniflows of bi-directional connections into a single, directed biflow record (Table <ref>). Many approaches to this problem can be found in the literature <cit.>, all of them imperfect. A straightforward approach was used that relies on simple port heuristics to decide direction. These heuristics are based on the assumption that SrcPorts are generally ephemeral (i.e., they are selected from a pre-defined, high range by the operating system), while DstPorts tend to have lower numbers that correspond to established, shared network services and will therefore be observed more frequently than ephemeral ports. The heuristics are given below in order of precedence; a sketch of how they might be applied follows the list.

* Destination ports are less than 1024 and source ports are not.
* The top 90 most frequently observed ports are destination ports.
* The smaller of the two ports is the destination port.
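The following is a minimal sketch of how such heuristics might be applied to orient a single record; the function name is illustrative, and TOP_PORTS stands in for the 90 most frequently observed ports, which in practice would be tabulated from the data itself.

TOP_PORTS = {53, 80, 123, 389, 443, 445}   # illustrative subset only

def src_dst_oriented(src_port: int, dst_port: int) -> bool:
    """Return True if (src, dst) already satisfies the heuristics,
    False if the flow's Src/Dst attributes should be swapped."""
    # 1. Ports below 1024 are assumed to be service (destination) ports.
    if (src_port < 1024) != (dst_port < 1024):
        return dst_port < 1024
    # 2. Frequently observed ports are assumed to be destination ports.
    if (src_port in TOP_PORTS) != (dst_port in TOP_PORTS):
        return dst_port in TOP_PORTS
    # 3. Otherwise, the smaller port is taken as the destination.
    return dst_port <= src_port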
Each uniflow was transformed into a biflow by renaming the Packets and Bytes fields to SrcPackets and SrcBytes, respectively. DstPackets and DstBytes fields were added with initial values of zero. Next, the port heuristics were considered and, if any were violated or ambiguous, the Src and Dst attributes were swapped, effectively reversing the direction. Finally, the 5-tuple (SrcIP, DstIP, SrcPort, DstPort, Protocol) was extracted from each record and used as the key in a lookup table. If a match was found, the flows were aggregated by keeping the minimum StartTime and the maximum EndTime and summing the other attributes. If no match was found, the flow was simply added to the table. This process was performed in a streaming fashion on all of the records in the order in which they were received by the collector. Flows were periodically evicted from the lookup table after 30 minutes of inactivity (i.e., failing to match with any incoming flows). Flows that remained active for long periods of time were reported approximately every 3 hours, but were not evicted from the table until inactive.

While biflowing the data mitigates the problems posed by duplicates and ambiguous directionality, it does not address another significant obstacle: the lack of stable identifiers upon which to build models. In some cases, IP addresses are transient (e.g., DHCP, VPN). In other cases, devices have multiple IP addresses (e.g., multihoming) or one IP address is shared by multiple devices (e.g., load-balancing, NAT). Whatever the case may be, modelling the behavior of IP addresses on a typical network is clearly error-prone. Instead, one should endeavor to map IP addresses to more stable identifiers such as Media Access Control (MAC) addresses or fully-qualified domain names (FQDN), interchangeably referred to as hostnames throughout the rest of the chapter. As with directionality, there is no perfect solution to this problem. The most appropriate identifier will depend greatly on the configuration of the target network, as well as the availability of auxiliary data sources from which a mapping can be constructed. An ideal solution will likely involve some combination of supplementary network data (e.g., DNS logs, DNS zone transfers, DHCP logs, VPN logs, NAC logs), business rules and considerable trial and error.

For this data release, a combination of Domain Name Service (DNS) and Dynamic Host Configuration Protocol (DHCP) logs was used to construct a mapping of IP addresses to FQDNs over time. The IP addresses in each biflow were then replaced with their corresponding FQDNs at the time of the flow. Where a given IP address and timestamp mapped to multiple FQDNs, business rules were incorporated to give preference to the least-ephemeral name. IP addresses that failed to map to any FQDN were left as-is. The resulting mix of names and IP addresses corresponds to the SrcDevice and DstDevice fields in the final data.

Finally, the data were de-identified by mapping SrcDevice, DstDevice, SrcPort and DstPort to random identifiers. In the event that the IP-to-FQDN mapping failed, the random identifier was prepended with “IP”. Well-known ports were not de-identified. Records with protocol numbers other than 6 (TCP), 17 (UDP) and 1 (ICMP) were removed entirely. The output from this process is provided in CSV format, one record per line, with fields in the order shown in Table <ref>.
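The streaming aggregation itself can be sketched as follows. This is an illustration under assumed field names, including a hypothetical Reversed flag marking uniflows whose Src and Dst attributes were swapped by the heuristics; it is not the actual LANL pipeline.

# Minimal sketch of the streaming biflow aggregation described above.
from dataclasses import dataclass

IDLE_TIMEOUT = 30 * 60        # evict after 30 minutes of inactivity

@dataclass
class Biflow:
    start: float
    end: float
    src_pkts: int = 0
    src_bytes: int = 0
    dst_pkts: int = 0
    dst_bytes: int = 0

table = {}                    # directed 5-tuple -> Biflow

def ingest(flow):
    """Fold one direction-corrected uniflow into the lookup table."""
    key = (flow["SrcIP"], flow["DstIP"], flow["SrcPort"],
           flow["DstPort"], flow["Protocol"])
    bf = table.setdefault(key, Biflow(flow["StartTime"], flow["EndTime"]))
    bf.start = min(bf.start, flow["StartTime"])
    bf.end = max(bf.end, flow["EndTime"])
    if flow["Reversed"]:      # this uniflow came from the reverse direction
        bf.dst_pkts += flow["Packets"]
        bf.dst_bytes += flow["Bytes"]
    else:
        bf.src_pkts += flow["Packets"]
        bf.src_bytes += flow["Bytes"]

def evict(now):
    """Report and remove biflows idle for longer than IDLE_TIMEOUT."""
    idle = [k for k, bf in table.items() if now - bf.end > IDLE_TIMEOUT]
    return [(k, table.pop(k)) for k in idle]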
§.§ Data Quality
Several figures have been provided in order to assess the quality of the network flow data set. The top plot in Figure <ref>, which shows the number of biflows over time, demonstrates the periodicity that one would expect for data whose volume is driven by the comings and goings of employees during a typical 5-day workweek.

The bottom plot of Figure <ref> is intended to measure the success rate of the biflowing and IP-to-FQDN mapping processes. TCP biflows where either SrcPackets or DstPackets is zero suggest a failure to find matching uniflows for both directions of the exchange. 57% of TCP and approximately 70% of all biflows fall within this category. This can largely be attributed to LANL's netflow sensor infrastructure, which has been specifically configured to export only one direction on many routes. In addition, some devices (namely vulnerability scanners and the like) attempt to connect to all possible IP addresses within a range; this results in a significant number of uniflows for which no response is possible. Likely for the same reason, IP-to-FQDN mapping failed for significantly more DstDevices than SrcDevices.

Figure <ref> shows the daily proportion of biflows corresponding to each Protocol. Figure <ref> contains two histograms of the top SrcPorts and DstPorts, respectively. Note the non-uniformity in the SrcPort histogram; this illustrates either a consistent failure of the biflowing process to choose the appropriate direction or the presence of protocols that use non-ephemeral source ports. For example, the Network Time Protocol (NTP) uses port 123 for both the source and destination ports per the specification.

Figure <ref> shows the distribution of Duration, SrcBytes and DstBytes per Protocol. Of particular interest is the presence of many long-lived UDP and ICMP biflows in the data. This indicates frequent, persistent UDP and ICMP traffic sharing the same 5-tuple and is an unfortunate side-effect of not limiting the biflow transformation to TCP uniflows. Finally, Figure <ref> shows exemplar in-degree and out-degree distributions for two randomly-selected days.

§ WINDOWS HOST LOG DATA
As remote attackers and malicious insiders increasingly use encryption, network-only detection mechanisms are becoming less effective, particularly those that require the inspection of payload data within the network traffic. As a result, cyber defenders now rely heavily on endpoint agents and host event logs to detect and investigate incidents. Host event logs capture nuanced details for a wide range of activities; however, given the vast number of logged events and their specificity to an individual host, human analysts struggle to discover the few useful log entries amid the huge number of innocuous entries. Statistical analytics for host event data are in their infancy. Advanced analytical capabilities for these host data, including computer and user profiling, that move beyond signature-based methods will increase network awareness and detection of advanced cyber threats.

The host event data set is a subset of host event logs collected from all computers running the Microsoft Windows operating system on LANL's enterprise network. The host logs were collected with Windows Logging Service (WLS), which is a Windows service that forwards event logs, along with administrator-defined contextual data, to a set of collection servers <cit.>. The released data are in JSON format in order to preserve the structure of the original events, unlike the two previously released data sets based on this log source <cit.>.
The events from the host logs included in the data set are all related to authentication and process activity on each machine. Table <ref> contains the subset of EventIDs included from the event logs in the released data set and a brief description of each; see <cit.> for a more detailed description. Figure <ref> shows the percentage of EventIDs contained in the logs, as well as the LogonTypes for EventIDs 4624, 4625 and 4634. Each record in the data set will have some of the event attributes listed in Appendix <ref>, and the table in Appendix <ref> specifies which EventIDs have each attribute. Note that not all events with a given EventID share the same set of attributes. If an expected attribute was missing from the original host log record, then the attribute was not included in the corresponding record in the de-identified data set.

All records will contain the attributes EventID, LogHost and Time. LogHost indicates the network host where the record was logged. For directed authentication events, this attribute will always correspond to the computer to which the user is authenticating, and the source computer will be given by Source. For the user associated with the record, if the UserName ends in $ then it will correspond to the computer account for the specified computer. These computer accounts are host-specific accounts within the Microsoft Active Directory domain that allow the computer to authenticate as a unique entity within the network. Figure <ref> shows the count of unique processes, log hosts (LogHost), source hosts (Source), computer accounts (UserName ending in $) and users (UserName not ending in $) for the 90-day period. Figure <ref> shows the count for the same attributes on a per-day basis. Note that the set of source hosts includes devices running non-Windows operating systems, hence there are more source hosts than log hosts.

Requests to the Kerberos Ticket Granting Service (TGS) (EventID 4769) correspond to a user requesting Kerberos authentication credentials from the Active Directory domain to a service or account name on a network computer. Hence, the LogHost attribute should always be an Active Directory machine, and the service or account name the user is requesting access to will be given by ServiceName. The ServiceName often corresponds to a computer account on the target computer. Because this event only grants a credential, a subsequent network logon event (EventID 4624 - LogonType 3) to the computer indicated by ServiceName is common. This differs from the previous data release <cit.>, in which TGS events were assumed to be directed authentication events from the user's machine to the computer indicated by ServiceName, ignoring the Kerberos intermediary.

When de-identifying the process events, only the base process name was de-identified and the extension was left as-is. Further, the parent process names (ParentProcessName) do not have file extensions, unlike the child process names (ProcessName); this is a direct artifact of how the process information is logged within WLS. The missing extension can be obtained by using the ParentProcessID to identify the parent process start event.

Finally, many events include the DomainName attribute, which indicates what Active Directory domain the event is associated with. The domain, combined with the UserName, should be considered a unique account identity. For example, user u1 with domain d1 is not necessarily user u1 in domain d2. In addition, the domain may actually be a hostname, indicating the event does not involve a user or account associated with an Active Directory domain, but is instead a local account. Again, these accounts should be considered unique to the host indicated within the DomainName attribute. For example, the Administrator account on host c1 likely does not have a relationship to the Administrator account on c2 or the Administrator account in domain d1. The LANL data sets have a single primary domain, with a number of much smaller, secondary domains, and most computers have a small set of local accounts.
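As a practical illustration of these conventions, account keys might be formed as in the following sketch; the record layout is an assumption for illustration, not a schema prescribed by the release.

# Sketch of forming unique account identities from host log records.
def account_identity(record):
    """(DomainName, UserName) forms the unique account key: user u1 in
    domain d1 is distinct from u1 in d2, and local accounts are scoped
    to the host carried in DomainName."""
    return (record.get("DomainName"), record.get("UserName"))

def is_computer_account(record):
    """UserNames ending in '$' denote host-specific computer accounts
    in the Active Directory domain rather than human users."""
    return record.get("UserName", "").endswith("$")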
§.§ Data parsing considerations
While host logs can be an extremely valuable data resource for cyber security research, the formatting and content of the logs can vary drastically between enterprises depending upon the audit policy and technologies used to collect and forward the logs to a centralized server. Hence, parsing the data and extracting the relevant attributes is an important first step in analyzing these data; see also <cit.>. Even though WLS provides more content and normalization around the raw Windows logs, several challenges still had to be addressed to produce the de-identified data.

Firstly, the semantics of attribute names are not necessarily the same for different EventIDs, and the attribute names themselves may differ according to what tool is being used to collect and forward the logs. For example, with WLS the UserName for EventID 4774 is MappedName, for EventIDs 4778 and 4779 it is AccountName, and for most other events it is TargetUserName. When parsing the data, these names were all standardized to UserName.

As with the network flow data, an extremely important task is mapping IP addresses to FQDNs. Further, unlike netflow, each record may contain both IP addresses and hostnames. The machine where the event is recorded (LogHost for the de-identified data) is provided as a hostname, whereas the Source computer for network logons is often given as an IP address.

Finally, both usernames and process names were standardized. In some records, usernames appear with the domain name or additional characters. These discrepancies were removed from the released data in order to ensure all usernames were in canonical form. In addition, some usernames, such as “Anonymous”, “Local Service” and “Network Service”, do not map to a computer or user account. For some analyses, one may want to remove these events. In the de-identified data these commonly-seen usernames were not anonymized. For the process names, dates, version numbers, operating systems and hexadecimal strings were removed where possible, so that processes run on different operating systems or with different versions would map to the same process name. For example, flashplayerplugin_20_0_0_286.exe would be mapped to flashplayerplugin_VERSION.exe.
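A normalization step in this spirit might look like the following sketch. The exact rules used for the release are not published, so the patterns below are assumptions chosen to reproduce the quoted example.

# Illustrative process-name normalization; the regexes are assumptions.
import re

def normalize_process_name(name: str) -> str:
    base, dot, ext = name.rpartition(".")
    if not dot:                      # no extension present
        base, ext = name, ""
    # collapse version-number runs such as 20_0_0_286
    base = re.sub(r"(\d+[_.]){1,}\d+", "VERSION", base)
    # collapse remaining long hexadecimal strings
    base = re.sub(r"\b[0-9a-f]{8,}\b", "HEX", base)
    return base + (dot + ext if dot else "")

# normalize_process_name("flashplayerplugin_20_0_0_286.exe")
#   -> "flashplayerplugin_VERSION.exe"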
§ RESEARCH DIRECTIONS
Anomaly detection for the defensive cyber domain is a major yet evolving research area, with much work still to be done in characterizing and finding anomalies within complex cyber data sets. Finding viable attack indicators and per-computer, per-user and computer-to-computer models that enable anomaly detection and fingerprinting are all interesting and important research opportunities. Although research on anomaly detection for cyber defense spans more than two decades, operational tools are still almost exclusively rule- or signature-based. Two reasons that statistical methods have not been more widely adopted in practice are high false positive rates and un-interpretable alerts. Analysts are inundated with a large number of alerts, and triaging them takes significant time and resources; this results in low tolerance for false alarms and for alerts that provide no contextual information to guide investigation. Signature-based systems can be finely tuned to reduce false positives, as they rely on very specific peculiarities that have been previously identified and documented as indicative of a cyber attack. Further, they are interpretable, as they refer to specific patterns within the data, such as suspicious domains, network protocols or process names.

However, despite their inherent challenges, anomaly detection methods have the advantage of being able to detect new variants of cyber attacks, and they are able to keep pace with the rapidly changing cyber attack landscape by dynamically learning patterns for normal behavior and detecting deviations. Further, with the increasing level of encrypted network traffic, the importance of this research and the use of these methods cannot be overstated. Research into ways to reduce false positives and to provide interpretable anomalies will have significant impact in furthering the use of anomaly detection systems. In fact, providing interpretable anomalies can help overcome the false positive issue: interpretability leads to quickly identifying alerts that are false positives, in the same way that it enables understanding of true positives. Research approaches to tackle these problems could include combining different data sets and signals, borrowing strength across entities that are similar by incorporating peer-based behavior, community detection approaches, and ways to provide meaningful context surrounding alerts to human analysts.

When using the host log data set for research, some notable characteristics of these data that need to be considered, especially if looking at the events as a time series, are periodicity and significant correlations between arrivals of different event types. This can be seen clearly in Figure <ref>, which shows the event times for various EventIDs for User205265. Periodicity in the data is often an artifact of the computer regularly renewing credentials. This explains why EventID 4624 - LogonType 3 (network logon) constitutes such a significant portion of the events, as seen in Figure <ref>. For a given entity, extrapolating higher-level, interpretable actions from the sequence of low-level events would improve modelling efforts and the understanding of these data, and would itself be very useful for security analysts. See <cit.> and <cit.> for relevant research in this area.

Another area for research with the host logs is exploring the records related to process starts and stops in detail, in particular looking at process trees. To date, little has been done in this area. Computer systems operate hierarchically; an initial root process starts many other processes, which in turn start and run descendants. A process tree is the dynamic structure that results. In theory, any process can be traced, through its ancestors, to the root process. Unusual or atypical events in process trees could indicate potential cyber security anomalies.

Moving beyond anomaly detection, there are other important research directions for which these data could prove useful. For example, preliminary work has been done using similar data to model network segmentation and associated risk <cit.>. Using the data to build new, potential network topologies in order to reduce risk and improve security posture is a viable direction.
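As a starting point for such analyses, process trees can be reconstructed from the ProcessID/ParentProcessID linkage of the process start events (EventID 4688), for example as in the following sketch. Field names follow the appendix of this chapter, and PID reuse across time is deliberately ignored here.

# Sketch of walking process ancestry from process start events.
def index_starts(events):
    """Index process start events (EventID 4688) by (LogHost, ProcessID)."""
    starts = {}
    for ev in events:
        if ev.get("EventID") == 4688:
            starts[(ev["LogHost"], ev["ProcessID"])] = ev
    return starts

def ancestors(event, starts):
    """Walk ParentProcessID links back toward the root process; PID reuse
    and restarts are not handled in this sketch."""
    chain = [event.get("ProcessName")]
    seen = set()
    cur = event
    while True:
        key = (cur["LogHost"], cur.get("ParentProcessID"))
        if key in seen or key not in starts:
            return chain
        seen.add(key)
        cur = starts[key]
        chain.append(cur.get("ProcessName"))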
Another potential research problem is to quantify and understand data loss within cyber data sets. The collection and normalization processes in place for these data can result in information loss, and understanding this loss is an open problem, both in general and specific to each element of the data. As most of the data elements represent people and their actions on computers, research on organizational and social behavior is also viable using these data.

§ CONCLUSION
Operational cyber security data sets are paramount to ensuring that valuable and productive research continues to improve the state of cyber defense. The network flow and host log event data discussed in this chapter are intended to enable such research, as well as to provide an example for other potential data set providers. In particular, while there is a considerable amount of relevant work on network data, relatively little attention has been given to host log data in the literature. Host log data are becoming increasingly relevant as endpoint security tools gain popularity within the cyber security ecosystem. It is important that researchers embrace both the opportunity and the challenge that they present. Finally, even less consideration has been given to meaningful analyses that combine these and other data sets. This paradigm shift towards a holistic approach to cyber security defense is critical to advancing the state of the art.

§ ACKNOWLEDGMENT
This work has been authored by an employee of Los Alamos National Security, LLC, operator of the Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396 with the U.S. Department of Energy. The United States Government retains, and the publisher, by accepting this work for publication, acknowledges that the United States Government retains, a nonexclusive, paid-up, irrevocable, world-wide license to publish or reproduce this work, or allow others to do so, for United States Government purposes.

§ HOST LOG FIELDS
* Time: The epoch time of the event in seconds.
* EventID: Four-digit integer corresponding to the event ID of the record.
* LogHost: The hostname of the computer that the event was recorded on. In the case of directed authentication events, the LogHost will correspond to the computer that the authentication event is terminating at (the destination computer).
* LogonType: Integer corresponding to the type of logon, see Table <ref>.
* LogonTypeDescription: Description of the LogonType, see Table <ref>.
* UserName: The user account initiating the event. If the user ends in $, then it corresponds to a computer account for the specified computer.
* DomainName: Domain name of UserName.
* LogonID: A semi-unique number (unique among current sessions on a given LogHost) that identifies the logon session just initiated. Any events logged subsequently during this logon session should report the same LogonID through to the logoff event.
* SubjectUserName: For authentication mapping events, the user account specified by this field is mapping to the user account in UserName.
* SubjectDomainName: Domain name of SubjectUserName.
* SubjectLogonID: See LogonID.
* Status: Status of the authentication request.
“0x0” means success; otherwise failure. See <cit.> for the failure codes for the appropriate EventID.
* Source: For authentication events, this will correspond to the computer where the authentication originated (the source computer); if it is a local logon event, then this will be the same as the LogHost.
* ServiceName: The account name of the computer or service the user is requesting the ticket for.
* Destination: This is the server the mapped credential is accessing. This may indicate the local computer when starting another process with new account credentials on a local computer.
* AuthenticationPackage: The type of authentication occurring, including Negotiate, Kerberos and NTLM, among others.
* FailureReason: The reason for a failed logon.
* ProcessName: The process executable name; for authentication events this is the process that processed the authentication event. ProcessNames may include the file type extension (e.g., exe).
* ProcessID: A semi-unique value (unique among currently running processes on a given LogHost) that identifies the process. ProcessID allows other events logged in association with the same process to be correlated through to the process end.
* ParentProcessName: The process executable that started the new process. ParentProcessNames often do not have file extensions like ProcessName, but can be compared by removing file extensions from the name.
* ParentProcessID: Identifies the exact process that started the new process. Look for a preceding event 4688 with a ProcessID that matches this ParentProcessID.

§ EVENT ATTRIBUTES
[1]Technische Universität Kaiserslautern, Department of Mathematics, Erwin-Schrödinger-Straße, 67663 Kaiserslautern, Germany ({borsche, klar}@mathematik.uni-kl.de) [2]Fraunhofer ITWM, Fraunhoferplatz 1, 67663 Kaiserslautern, Germany

Kinetic layers and coupling conditions for macroscopic equations on networks I: the wave equation
R. Borsche[1], A. Klar[1][2]

We consider kinetic and associated macroscopic equations on networks. The general approach will be explained in this paper for a linear kinetic BGK model and the corresponding limit for small Knudsen number, which is the wave equation. Coupling conditions for the macroscopic equations are derived from the kinetic conditions via an asymptotic analysis near the nodes of the network. This analysis leads to the consideration of a fixpoint problem involving the coupled solutions of kinetic half-space problems. A new approximate method for the solution of kinetic half-space problems is derived and used for the determination of the coupling conditions. Numerical comparisons between the solutions of the macroscopic equation with different coupling conditions and the kinetic solution are presented for the case of tripod and more complicated networks.

§ INTRODUCTION
There have been many attempts to define coupling conditions for macroscopic partial differential equations on networks including, for example, drift-diffusion equations, scalar hyperbolic equations, or hyperbolic systems like the wave equation or Euler type models, see for example <cit.>. In <cit.> coupling conditions for scalar hyperbolic equations on networks are discussed and investigated. <cit.> treat the wave equation, and general nonlinear hyperbolic systems are considered in <cit.>. We finally note that, for example, for hyperbolic systems on networks there are still many unsolved problems, like finding suitable coupling conditions without restricting to subsonic situations.

On the other hand, coupling conditions for kinetic equations on networks have been discussed in a much smaller number of publications, see <cit.>. In <cit.> a first attempt to derive a coupling condition for a macroscopic equation from the underlying kinetic model has been presented for the case of a kinetic equation for chemotaxis. In the present paper, we will present a more general and more accurate procedure. It is motivated by the classical procedure to find kinetic slip boundary conditions for macroscopic equations based on the analysis of the kinetic layer <cit.>. In this work, we will derive coupling conditions for macroscopic equations on a network from underlying microscopic or kinetic models via an asymptotic analysis of the situation near the nodes. To explain the basic approach we concentrate on a simple linear BGK-type kinetic model with the linear wave equation as the associated macroscopic model. More complicated problems and, in particular, nonlinear models will be discussed in future work.

The paper is organized in the following way. In section <ref> we discuss the kinetic and macroscopic equations and the boundary and coupling conditions for these equations.
In section <ref> kinetic boundary layers are discussed, as well as an asymptotic analysis of the kinetic equations near the nodes. This leads to an abstract formulation of the coupling conditions for the macroscopic equations at the nodes based on a fixpoint problem involving kinetic half-space equations. In the following section <ref> an approximate coupling condition is derived based on the so-called Maxwell approximation of the half-space problem. A refined method to determine the solution of the half-space problems is derived, compared to previous approximate solution methods for half-space problems and applied to the problem of finding accurate coupling conditions for the macroscopic equations in section <ref>. Moreover, the macroscopic equations on the network with the different coupling conditions are numerically compared to each other and to the full solutions of the kinetic equations on the network in section <ref>. The results show the very good approximation of the underlying kinetic model by the macroscopic model with the new coupling conditions.

§ EQUATIONS AND BOUNDARY AND COUPLING CONDITIONS
We consider the linear kinetic BGK model in 1D for x ∈ ℝ, v ∈ ℝ,

∂_t f + v ∂_x f = -1/ϵ (f - (ρ_ϵ + v/a^2 q_ϵ) M(v))

with the Maxwellian

M(v) = 1/√(2π a^2) exp(-|v|^2/(2a^2)),

where

ρ_ϵ = ∫_-∞^∞ f(v) dv , q_ϵ = ∫_-∞^∞ v f(v) dv .

The associated macroscopic equation for ϵ → 0 is the wave equation

∂_t ρ + ∂_x q = 0
∂_t q + ∂_x (a^2 ρ) = 0 .

To illustrate the influence of the underlying kinetic model on the coupling conditions for the macroscopic equations, we also consider the following equation with bounded velocity space v ∈ [-1,1],

∂_t f + v ∂_x f = -1/ϵ (f - (ρ_ϵ + v/a^2 q_ϵ) 1/2)

with a^2 = 1/3, or

∂_t f + v ∂_x f = -1/ϵ (f - (ρ_ϵ/2 + 3/2 v q_ϵ))

with

ρ_ϵ = ∫_-1^1 f(v) dv , q_ϵ = ∫_-1^1 v f(v) dv .

The associated macroscopic equation is again (<ref>) with a^2 = 1/3.

§.§ Boundary conditions
For x ∈ [0,b] we prescribe for the kinetic equation f(0,v), v > 0, and f(b,v), v < 0. For (<ref>) the boundary conditions are given in characteristic variables <cit.>. The corresponding Riemann Invariants are

r_{1,2} = q ∓ a ρ .

As boundary data the value of q + aρ at the left boundary and q - aρ at the right boundary are prescribed.

§.§ Coupling conditions
If these equations are considered on a network, it is sufficient to study a single coupling point. At each node so-called coupling conditions are required. In the following we consider a node connecting n edges, which are oriented away from the node, as in Figure <ref>. Each edge i is parametrized by the interval [0,b_i], and the kinetic and macroscopic quantities are denoted by f^i and ρ^i, q^i, respectively. A possible choice of coupling conditions for the kinetic problem is given by

f^i(0,v) = ∑_j=1^n c_ij f^j(0,-v), v > 0 ,

compare <cit.>. The total mass in the system is conserved if ∑_i=1^n c_ij = 1 holds. In the following we use the vector notation

f^+ = C f^-, v > 0 ,

where f^+ = (f^1(0,v),…,f^n(0,v)) and f^- = (f^1(0,-v),…,f^n(0,-v)). The coupling conditions for the macroscopic quantities are conditions on the characteristic variables r_2^i(0) = q^i(0) + a ρ^i(0), using the given values of r_1^i(0) = q^i(0) - a ρ^i(0). We refer to <cit.> for systems of macroscopic equations on networks. In the following we will derive, via asymptotic analysis, macroscopic coupling conditions from the kinetic coupling conditions (<ref>).
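As a quick consistency check, the following sympy sketch (an illustration, not part of the original text) verifies that both equilibrium ansatz functions reproduce the moments ρ and q, and that the second moment of the bounded-velocity equilibrium gives the flux a^2 ρ with a^2 = 1/3.

# Consistency check of the equilibrium closures behind the BGK models above.
import sympy as sp

v, rho, q = sp.symbols("v rho q")
a = sp.symbols("a", positive=True)

# Unbounded velocity space: equilibrium (rho + v q / a^2) M(v).
M = sp.exp(-v**2 / (2 * a**2)) / sp.sqrt(2 * sp.pi * a**2)
f_inf = (rho + v * q / a**2) * M
print(sp.simplify(sp.integrate(f_inf, (v, -sp.oo, sp.oo))))      # rho
print(sp.simplify(sp.integrate(v * f_inf, (v, -sp.oo, sp.oo))))  # q

# Bounded velocity space [-1,1]: equilibrium rho/2 + (3/2) v q.
f_b = rho / 2 + sp.Rational(3, 2) * v * q
print(sp.integrate(f_b, (v, -1, 1)))         # rho
print(sp.integrate(v * f_b, (v, -1, 1)))     # q
print(sp.integrate(v**2 * f_b, (v, -1, 1)))  # rho/3, i.e. a^2 = 1/3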
§ BOUNDARY AND COUPLING CONDITIONS FOR MACROSCOPIC EQUATIONS VIA KINETIC LAYER ANALYSIS
§.§ Boundary conditions
A kinetic layer analysis, see <cit.>, at the left boundary of the interval [0,b], i.e. a rescaling of the spatial variable in equation (<ref>) with ϵ, gives to first order in ϵ the following stationary kinetic half-space problem for x ∈ [0,∞],

v ∂_x φ = -(φ - (ρ + v/a^2 q) M(v)) ,

where ρ and q are the zeroth and first moments of φ. At x = 0 the boundary conditions for the half-space problem are

φ(0,v) = k(v) = f(0,v), v > 0 .

At the other end, i.e. at x = ∞, a condition is prescribed on an arbitrary linear combination of the invariants of the half-space problem, <v φ> and <v^2 φ>. Here and in the following we use the notation <φ> = ∫ φ dv and <φ>_+ = ∫_{v>0} φ dv or <φ>_- = ∫_{v<0} φ dv. We use the values of the first Riemann Invariant (<ref>), r_1 = q - aρ, of the macroscopic system (<ref>) to fix

<(v - v^2/a) φ> = r_1 .

The boundary condition for (<ref>) is obtained by determining r_2 from the asymptotic solution of the half-space problem and setting r_2 = q_∞ + a ρ_∞. The values ρ_∞ and q_∞ are the macroscopic quantities associated to the solution of the half-space problem at infinity, which has the form

φ(∞,v) = (ρ_∞ + v/a^2 q_∞) M(v) .

The solution of the half-space problem is also used to determine the outgoing distribution

A[k](v) = f(0,v) = φ(0,v), v < 0 ,

where the notation A is used for the so-called Albedo operator of the half-space problem. The structure of a half-space problem is illustrated in Figure <ref>.

§.§ Coupling conditions
We use the corresponding procedure to determine the coupling conditions for the macroscopic equations. Starting from the kinetic coupling conditions

f^i(0,v) = ∑_j=1^n c_ij f^j(0,-v), v > 0 ,

we determine the coupling conditions for the macroscopic equations in the following way. We use the kinetic coupling conditions to obtain conditions on the in- and outgoing solutions of the half-space problems on the different arcs. That means

φ^i(0,v) = ∑_j=1^n c_ij φ^j(0,-v) , v > 0 ,

or, if we denote the ingoing function of the half-space problem on arc i by k^i(v), v > 0, and the outgoing solution by A^i[k^i](v), v < 0,

k^i(v) = ∑_j=1^n c_ij A^j[k^j](-v) , v > 0 .

This is a fixpoint equation for k^i, i = 1,…,n. Additional conditions are needed to solve the half-space problems, i.e.

<(v - v^2/a) φ^i> = r^i_1

with r^i_1 = q^i - a ρ^i. The coupling conditions for the wave equation are conditions on the outgoing characteristic variables at x = 0. We define r^i_2(0) = q^i_∞[k^i] + a ρ^i_∞[k^i]. The main task is now to find tractable expressions for these coupling conditions. In the next section we discuss the so-called Maxwell method to solve the half-space problem approximately and the resulting approximate coupling conditions. In the following sections a refined method to determine the solution of the half-space problems based on half-moment equations is derived and applied to the problem of finding accurate coupling conditions.

§ APPROXIMATE SOLUTION OF THE HALF-SPACE PROBLEM VIA HALF-FLUXES AND APPROXIMATE COUPLING CONDITIONS
To solve the kinetic half-space problem approximately, a variety of different methods can be found in the literature. Approaches via a Galerkin method can be found in <cit.>. Approximate methods to determine only the asymptotic states and outgoing distributions can be found in <cit.>.
For the determination of the macroscopic boundary conditions we use in this section a simple approximation, the so-called Maxwell approximation, see <cit.>.

§.§ Approximate solution of the half-space problem
We use the equality of half-fluxes

<v k(v)>_+ = <v (ρ_∞ + v/a^2 q_∞) M>_+

to obtain one condition. Together with the first Riemann Invariant,

q_∞ - a ρ_∞ = q - a ρ = r_1(0) = C ,

this determines ρ_∞ and q_∞. This can be rewritten as

([ a/√(2π)  1/2 ; -a  1 ]) ([ ρ_∞ ; q_∞ ]) = ([ <vk>_+ ; C ])

with the solution

([ ρ_∞ ; q_∞ ]) = 1/(a(1/√(2π) + 1/2)) ([ <vk>_+ - C/2 ; a <vk>_+ + aC/√(2π) ]) .

The outgoing distribution is approximated by

φ(0,v) = (ρ_∞ + v/a^2 q_∞) M(v) , v < 0 .

For bounded velocity space and equation (<ref>) we have

([ 1/4  1/2 ; -a  1 ]) ([ ρ_∞ ; q_∞ ]) = ([ <vk>_+ ; C ]) .

§.§ Approximate coupling conditions
The fixpoint problem (<ref>) is in the present case approximated by the problem

k^i(v) = ∑_j=1^n c_ij (ρ_∞^j[k^j] - v/a^2 q_∞^j[k^j]) M(-v) , v > 0 .

Thus, the asymptotic values are determined by

([ a/√(2π)  1/2 ; -a  1 ]) ([ ρ^i_∞ ; q^i_∞ ]) = ([ ∑_j c_ij (ρ_∞^j a/√(2π) - q_∞^j/2) ; r^i_1(0) ]) .

These values can be assigned to the states of the wave equation (<ref>) at the coupling point by r^i_2(0) = q^i_∞ + a ρ^i_∞. Summing the first equation of (<ref>) for all i = 1,…,n and using the conservation property of the kinetic coupling conditions (<ref>) directly yields the conservation of mass in the macroscopic variables,

∑_i=1^n q_∞^i = 0 .

In the special case of a uniform node, i.e. c_ij = 1/(n-1) for i ≠ j and c_ij = 0 for i = j, we have

a/√(2π) ρ^i_∞ + 1/2 q^i_∞ = 1/(n-1) ∑_{j≠i} (a/√(2π) ρ^j_∞ - q^j_∞/2) .

Multiplying by √(2π) and adding √(2π)/n times (<ref>) we obtain

a ρ^i_∞ + √(2π)/2 (n-2)/n q^i_∞ = 1/(n-1) ∑_{j≠i} (a ρ^j_∞ + √(2π)/2 (n-2)/n q^j_∞) , i = 1,…,n .

These n equations are not linearly independent and they can be reformulated as

a ρ^i_∞ + √(2π)/2 (n-2)/n q^i_∞ = a ρ^j_∞ + √(2π)/2 (n-2)/n q^j_∞ , i,j = 1,…,n , i ≠ j .

Thus we have found a macroscopic invariant at the coupling point, i.e.

ρ + √(2π)/(2a) (n-2)/n q .

For bounded velocity space we have

1/4 ρ^i_∞ + 1/2 q^i_∞ = ∑_j c_ij (1/4 ρ^j_∞ - q^j_∞/2) .

For a uniform node this leads to

ρ^i_∞ + 2(n-2)/n q^i_∞ = ρ^j_∞ + 2(n-2)/n q^j_∞

and to the invariant

ρ_∞ + 2(n-2)/n q_∞ .

For example, for n = 3 we obtain for the above invariant (<ref>) a factor √(6π)/6 ∼ 0.72360 for a^2 = 1/3, in contrast to the value 2/3 for the bounded velocity case. Thus, although the kinetic coupling conditions and the macroscopic equations (<ref>) are identical, we obtain different coupling conditions for a model with bounded velocity, since the half-space problems are different.

Another direct approach to obtain coupling conditions for the wave equation has been used in <cit.>. Using a full moment approximation of the distribution function in the case of bounded velocities, i.e. using f(v) = ρ/2 + 3/2 v q, v ∈ [-1,1], in the kinetic coupling conditions (<ref>), one obtains after integration over the positive velocities

ρ^i_∞ + 3/2 q^i_∞ = ∑_j c_ij (ρ^j_∞ - 3/2 q^j_∞) .

This leads, for a uniform node, to the invariant

ρ_∞ + 3(n-2)/(2n) q_∞ .

In the case of unbounded velocities we obtain

ρ^i_∞ + √2/(a√π) q^i_∞ = ∑_j c_ij (ρ^j_∞ - √2/(a√π) q^j_∞)

and the invariant

ρ_∞ + √2/(a√π) (n-2)/n q_∞ .

§ COUPLING CONDITIONS VIA APPROXIMATION BY HALF-MOMENT EQUATIONS
In this section we develop a refined method to solve the half-space problem based on a half-moment approximation and use it for the derivation of refined coupling conditions. We begin with the case of equation (<ref>).
For further approaches to the approximate solution of half-space problems we refer to <cit.>.

§.§ The case of a bounded velocity domain
Consider the linear BGK model with bounded velocities (<ref>) and the corresponding limit equation (<ref>).

§.§.§ Half-moment equations
We determine a half-moment approximation for the half-space solution, compare <cit.>. We define

ρ_ϵ^- = ∫_-1^0 f(v) dv , ρ_ϵ^+ = ∫_0^1 f(v) dv , q_ϵ^- = ∫_-1^0 v f(v) dv , q_ϵ^+ = ∫_0^1 v f(v) dv .

As closure assumption we use the following approximation of the distribution function f by affine linear functions in v to determine half-moment equations, see <cit.> and references therein:

f(v) = a^+ + v b^+, v ≥ 0 , and f(v) = a^- + v b^-, v ≤ 0 .

One obtains

ρ_ϵ^- = a^- - 1/2 b^- , ρ_ϵ^+ = a^+ + 1/2 b^+ , q_ϵ^- = -1/2 a^- + 1/3 b^- , q_ϵ^+ = 1/2 a^+ + 1/3 b^+

and

∫_0^1 v^2 f(v) dv = -1/6 ρ_ϵ^+ + q_ϵ^+ , ∫_-1^0 v^2 f(v) dv = -1/6 ρ_ϵ^- - q_ϵ^- .

Finally, integrating the kinetic equation with respect to the corresponding half-spaces, we get the half-moment approximation of the kinetic equation as

∂_t ρ_ϵ^+ + ∂_x q_ϵ^+ = -1/ϵ (ρ_ϵ^+ - (ρ_ϵ/2 + 3 q_ϵ/4))
∂_t q_ϵ^+ + ∂_x (-1/6 ρ_ϵ^+ + q_ϵ^+) = -1/ϵ (q_ϵ^+ - (ρ_ϵ/4 + q_ϵ/2))
∂_t ρ_ϵ^- + ∂_x q_ϵ^- = -1/ϵ (ρ_ϵ^- - (ρ_ϵ/2 - 3 q_ϵ/4))
∂_t q_ϵ^- + ∂_x (-1/6 ρ_ϵ^- - q_ϵ^-) = -1/ϵ (q_ϵ^- - (-ρ_ϵ/4 + q_ϵ/2)) .

Introducing the even-odd variables

ρ_ϵ = ρ_ϵ^+ + ρ_ϵ^- , ρ̂_ϵ = ρ_ϵ^+ - ρ_ϵ^- , q_ϵ = q_ϵ^+ + q_ϵ^- , q̂_ϵ = q_ϵ^+ - q_ϵ^- ,

we can rewrite the system as

∂_t ρ_ϵ + ∂_x q_ϵ = 0
∂_t q_ϵ + ∂_x (-1/6 ρ_ϵ + q̂_ϵ) = 0
∂_t ρ̂_ϵ + ∂_x q̂_ϵ = -1/ϵ (ρ̂_ϵ - 3/2 q_ϵ)
∂_t q̂_ϵ + ∂_x (-1/6 ρ̂_ϵ + q_ϵ) = -1/ϵ (q̂_ϵ - ρ_ϵ/2) .

Obviously, the half-moment model has again the wave equation (<ref>) as macroscopic limit as ϵ goes to 0.

§.§.§ Half-space half-moment problem
Rescaling the spatial variable in the half-moment problem with ϵ, one obtains the following half-space problem for x ∈ ℝ^+:

∂_x q^+ = -(ρ^+ - (ρ/2 + 3q/4))
∂_x (-1/6 ρ^+ + q^+) = -(q^+ - (ρ/4 + q/2))
∂_x q^- = -(ρ^- - (ρ/2 - 3q/4))
∂_x (-1/6 ρ^- - q^-) = -(q^- - (-ρ/4 + q/2))

or

∂_x q = 0
∂_x (-1/6 ρ + q̂) = 0
∂_x q̂ = -(ρ̂ - 3/2 q)
∂_x (-1/6 ρ̂ + q) = -(q̂ - ρ/2) .

We have to provide boundary conditions for ρ^+(0) and q^+(0), as well as a condition at x = ∞,

q_∞ - a ρ_∞ = r_1(0) = C .

Then, the half-space problem can be solved explicitly. We determine a solution up to 3 constants, which will be fixed with the above 3 conditions. First, we observe that we have 2 invariants:

q = C_1 = const , -ρ/6 + q̂ = C_2 .

From the last equation in (<ref>) we can deduce that at x = ∞

q̂_∞ = ρ_∞/2 .

Combining this with (<ref>) gives ρ_∞ = 3 C_2 or q̂_∞ = 3 C_2/2. From the third equation of (<ref>) we obtain ρ̂_∞ = 3q/2 = 3 C_1/2. This simplifies (<ref>) to

q = C_1
ρ = 6 q̂ - 6 C_2
∂_x q̂ = -(ρ̂ - 3/2 C_1)
∂_x (-1/6 ρ̂) = -(-2 q̂ + 3 C_2) .

The ODEs for ρ̂ and q̂ have the solutions

ρ̂ = γ exp(-2x/a) + γ̂ exp(2x/a) + 3 C_1/2
q̂ = a/2 γ exp(-2x/a) - a/2 γ̂ exp(2x/a) + 3 C_2/2 .

Since we are looking only for bounded solutions, we are left with

ρ̂ = γ exp(-2x/a) + 3 C_1/2
q̂ = a/2 γ exp(-2x/a) + 3 C_2/2
q = C_1
ρ = 3 a γ exp(-2x/a) + 3 C_2 .

The three parameters are fixed with the 3 conditions mentioned above. At x = 0 inflow data is given,

1/2 (q(0) + q̂(0)) = q_+(0) , 1/2 (ρ(0) + ρ̂(0)) = ρ_+(0) ,

and the Riemann Invariant at x = ∞ gives

q_∞ - a ρ_∞ = C .

Inserting the above determined solution we obtain

1/2 (C_1 + a/2 γ + 3 C_2/2) = q_+(0)
1/2 (3 a γ + 3 C_2 + γ + 3 C_1/2) = ρ_+(0) ,

which can be rewritten in terms of the asymptotic values

q_∞/2 + ρ_∞/4 + a/4 γ = q_+(0)
3 q_∞/4 + ρ_∞/2 + ((3a+1)/2) γ = ρ_+(0) .
Together with the condition at infinity (<ref>), this determines the asymptotic values q_∞, ρ_∞ and γ. The outgoing quantities ρ_-(0), q_-(0) are then determined by

q_∞/2 - ρ_∞/4 - a/4 γ = q_-(0)
-3 q_∞/4 + ρ_∞/2 + ((3a-1)/2) γ = ρ_-(0) .

With the Maxwell approximation for the half-moment problem in (<ref>), the asymptotic states are determined by

q_+(0) = q_+(∞) = 1/2 (q_∞ + q̂_∞) = q_∞/2 + ρ_∞/4

and the condition at infinity (<ref>). The outgoing quantities are

q_-(0) = q_-(∞) = -ρ_∞/4 + q_∞/2 , ρ_-(0) = ρ_-(∞) = ρ_∞/2 - 3 q_∞/4 .

§.§.§ The extrapolation length
To estimate the accuracy of our method, we consider the classical problem of determining the so-called extrapolation length <cit.>. For x ∈ ℝ^+, v ∈ [-1,1], we consider the half-space equation

v ∂_x f = -(f - (ρ/2 + 3/2 v q))

with ∫_-1^1 v f dv = q = 0 and f(0,v) = v, v > 0. That means, we consider

v ∂_x f = -(f - ρ/2) .

The extrapolation length is the value of λ_∞ = f(∞,v) = ρ_∞/2. The Maxwell approximation gives λ_∞ = 2/3. The above half-moment approximation gives

ρ_∞/4 + a/4 γ = q_+(0) = 1/3
ρ_∞/2 + ((3a+1)/2) γ = ρ_+(0) = 1/2 .

This leads to

ρ_∞ = (9a+4)/(6a+3)

and with a^2 = 1/3 we obtain

ρ_∞ = (3√3+4)/(2√3+3) .

Thus, the extrapolation length is approximated as λ_∞ ∼ 0.7113. The exact value computed from a spectral method is 0.7104, see <cit.>. This yields an error for the above half-moment method of approximately 0.1%. In contrast, the Maxwell approximation gives 0.6666, which is an error of 6.1%. The variational method in <cit.> gives 0.7083, which is an error of 0.3%.

§.§.§ Half-moment coupling conditions
In this subsection we determine the coupling conditions on the basis of the half-moment approximation of the half-space problem. Multiplying with v and integrating the kinetic coupling conditions (<ref>) with respect to the positive and negative half-moments gives

q^i_+(0) = -∑_{j=1}^n c_ij q_-^j(0) .

Inserting the half-moment approximations (<ref>) yields

q^i_∞/2 + ρ^i_∞/4 + a/4 γ^i = ∑_{j=1}^n c_ij (-q^j_∞/2 + ρ^j_∞/4 + a/4 γ^j) .

Again, a summation w.r.t. i = 1,…,n directly gives the equality of fluxes

∑_i q^i_∞ = 0 .

For a uniform node with equal distribution, c_ij = 1/(n-1) for i ≠ j and 0 otherwise, one obtains, using the equality of fluxes,

(n-2)/(2n) q^i_∞ + ρ^i_∞/4 + a/4 γ^i = (n-2)/(2n) q^j_∞ + ρ^j_∞/4 + a/4 γ^j ,

or the invariance of

ρ_∞ + 2(n-2)/n q_∞ + a γ .

Further, integrating the kinetic coupling conditions (<ref>) with respect to the positive and negative half-moments, we obtain

ρ^i_+(0) = ∑_{j=1}^n c_ij ρ_-^j(0) .

With the half-moment approximations (<ref>) this reads

3 q^i_∞/4 + ρ^i_∞/2 + ((3a+1)/2) γ^i = ∑_{j=1}^n c_ij (-3 q^j_∞/4 + 1/2 ρ^j_∞ + ((3a-1)/2) γ^j) .

Summing these conditions for i = 1,…,n yields

∑_{i=1}^n γ^i = 0 .

Thus, in the case of a uniform node, we derive another coupling invariant,

ρ_∞ + 3(n-2)/(2n) q_∞ + (3a + (n-2)/n) γ .

Altogether this yields 2n conditions at a node, i.e. (<ref>), (<ref>), (<ref>) and (<ref>). In combination with the conditions at infinity we have 3n conditions for the 3n quantities γ^i, ρ^i_∞, q^i_∞. Note that the invariants (<ref>) and (<ref>) can be combined such that γ is eliminated, which gives the invariance of

ρ_∞ + (n-2)/n · (9a + 4(n-2)/n)/(4a + 2(n-2)/n) · q_∞ .

All the coupling conditions for the wave equation with uniform nodes derived up to now are given by the conservation of mass (<ref>) and an invariant of the form ρ + C q. They differ in the factor C, see Table <ref>.
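The numbers quoted below are straightforward to reproduce; the following short script (an illustration, not part of the original text) evaluates the half-moment extrapolation length and the factor C of the invariant ρ + C q for the bounded-velocity approximations at a uniform node.

# Reproduces the bounded-velocity numbers quoted in the text.
import math

a = 1 / math.sqrt(3)      # a^2 = 1/3
n = 3                     # tripod node
m = (n - 2) / n

# Extrapolation length lambda_inf = rho_inf / 2 with rho_inf = (9a+4)/(6a+3).
lam = (9 * a + 4) / (6 * a + 3) / 2
print(f"half-moment extrapolation length: {lam:.4f}")   # 0.7113 (exact: 0.7104)

# Factor C in the invariant rho + C q for a uniform node.
C_maxwell = 2 * m                                       # 0.6667
C_half_moment = m * (9 * a + 4 * m) / (4 * a + 2 * m)   # 0.7313
C_full_moment = 3 * m / 2                               # 0.5
print(C_maxwell, C_half_moment, C_full_moment)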
With a = 1/√3 and n = 3 the half-moment approximation gives C = 0.7313, compared to 0.6666 for Maxwell (<ref>) or 0.5 for the full moment approximation. A numerical comparison of the resulting network solutions is presented in the next section.

For the solution of the wave equation (<ref>), we consider the mathematical entropy e = 1/2 (ρ^2 + q^2/a^2). It evolves according to the conservation law ∂_t e + ∂_x (ρ q) = 0. Along one edge this entropy is conserved, but the total entropy in the network can change according to the entropy fluxes at the nodes. Note that for all above models with ρ + C q = C̃ and C > 0 the total entropy decays, since

∑_{i=1}^n ρ q = ∑_{i=1}^n (C̃ - C q) q = C̃ ∑_{i=1}^n q - C ∑_{i=1}^n q^2 = -C ∑_{i=1}^n q^2 < 0 .

In the following we discuss also the case of an underlying kinetic model with unbounded velocities.

§.§ The case of an unbounded velocity domain
We now apply the above procedure to the linear BGK model (<ref>) with an unbounded velocity space, which has (<ref>) with arbitrary a as limit equation.

§.§.§ Half-moment equations
The half-moments are defined as

ρ^- = ∫_-∞^0 f(v) dv , ρ^+ = ∫_0^∞ f(v) dv , q^- = ∫_-∞^0 v f(v) dv , q^+ = ∫_0^∞ v f(v) dv .

As closure we consider the following approximations of the distribution function,

f(v) = (a^+ + v b^+) M(v), v ≥ 0 , and f(v) = (a^- + v b^-) M(v), v ≤ 0 ,

which leads to

ρ^- = 1/2 a^- - a/√(2π) b^- , ρ^+ = 1/2 a^+ + a/√(2π) b^+ , q^- = -a/√(2π) a^- + a^2/2 b^- , q^+ = a/√(2π) a^+ + a^2/2 b^+ .

Inverting these relations gives

∫_0^∞ v^2 f(v) dv = 1/(π-2) ((π-4) a^2 ρ^+ + a√(2π) q^+) ,
∫_-∞^0 v^2 f(v) dv = 1/(π-2) ((π-4) a^2 ρ^- - a√(2π) q^-) .

This leads to the half-moment system

∂_t ρ^+ + ∂_x q^+ = -1/ϵ (ρ^+ - (ρ/2 + q/(√(2π) a)))
∂_t q^+ + ∂_x (1/(π-2) ((π-4) a^2 ρ^+ + a√(2π) q^+)) = -1/ϵ (q^+ - (aρ/√(2π) + q/2))
∂_t ρ^- + ∂_x q^- = -1/ϵ (ρ^- - (ρ/2 - q/(√(2π) a)))
∂_t q^- + ∂_x (1/(π-2) ((π-4) a^2 ρ^- - a√(2π) q^-)) = -1/ϵ (q^- - (-aρ/√(2π) + q/2)) .

Introducing the even-odd variables as before, we can rewrite the system as

∂_t ρ + ∂_x q = 0
∂_t q + ∂_x ((π-4)/(π-2) a^2 ρ + a√(2π)/(π-2) q̂) = 0
∂_t ρ̂ + ∂_x q̂ = -1/ϵ (ρ̂ - 2/(√(2π) a) q)
∂_t q̂ + ∂_x ((π-4)/(π-2) a^2 ρ̂ + a√(2π)/(π-2) q) = -1/ϵ (q̂ - 2aρ/√(2π)) .

§.§.§ Half-space half-moment problem
The corresponding half-space problem is for x ∈ ℝ^+ given by

∂_x q = 0
∂_x ((π-4)/(π-2) a^2 ρ + a√(2π)/(π-2) q̂) = 0
∂_x q̂ = -(ρ̂ - 2/(√(2π) a) q)
∂_x ((π-4)/(π-2) a^2 ρ̂ + a√(2π)/(π-2) q) = -(q̂ - 2aρ/√(2π)) .

As before, we need boundary conditions on ρ^+(0), q^+(0) and a condition at infinity,

q_∞ - a ρ_∞ = r_1 = C .

We now construct the explicit solution of this half-space problem. The first two equations of (<ref>) state the invariance of

q = C_1 = const , (π-4)/(π-2) a^2 ρ + a√(2π)/(π-2) q̂ = C_2 ,

or

ρ = (π-2)/(a^2(π-4)) C_2 - √(2π)/(a(π-4)) q̂ .

From the last equation of (<ref>) we can deduce that at x = ∞ we have

q̂_∞ = 2a/√(2π) ρ_∞ ,

which leads with the above invariant to ρ_∞ = C_2/a^2. Moreover, from the third line in (<ref>) we obtain

ρ̂_∞ = 2/(√(2π) a) q_∞ ,

and thus we can transform the lower part of (<ref>) to

∂_x q̂ = -(ρ̂ - 2/(√(2π) a) q_∞)
∂_x ((π-4)/(π-2) a^2 ρ̂ + a√(2π)/(π-2) q_∞) = -(q̂ - 2a/√(2π) ((π-2)/(a^2(π-4)) C_2 - √(2π)/(a(π-4)) q̂)) .

Rearranging gives

∂_x ρ̂ = -(π-2)^2/((π-4)^2 a^2) q̂ + 2(π-2)^2/(√(2π) a^3 (π-4)^2) C_2
∂_x q̂ = -ρ̂ + 2/(√(2π) a) q_∞ .
By defining λ = (π-2)/(a(π-4)), the solution for ρ̂ and q̂ is

ρ̂ = γ λ exp(λ x) + γ̂ λ exp(-λ x) + 2/(√(2π) a) q_∞
q̂ = -γ exp(λ x) + γ̂ exp(-λ x) + 2a/√(2π) ρ_∞ .

Since we are only interested in bounded solutions, we have γ̂ = 0 and thus

ρ̂ = γ λ exp(λ x) + 2/(√(2π) a) q_∞
q̂ = -γ exp(λ x) + 2a/√(2π) ρ_∞ .

As before, there are three parameters, which can be determined with the three conditions

1/2 (ρ(0) + ρ̂(0)) = ρ_+(0) , 1/2 (q(0) + q̂(0)) = q_+(0) , q_∞ - a ρ_∞ = r_1(0) .

Inserting the expressions (<ref>) yields

1/2 (ρ_∞ + (λ + √(2π)/(a(π-4))) γ + 2/(√(2π) a) q_∞) = ρ_+(0)
1/2 (q_∞ - γ + 2a/√(2π) ρ_∞) = q_+(0) ,

and for the outgoing quantities ρ_-(0) and q_-(0) we obtain

1/2 (ρ_∞ + (-λ + √(2π)/(a(π-4))) γ - 2/(√(2π) a) q_∞) = ρ_-(0)
1/2 (q_∞ + γ - 2a/√(2π) ρ_∞) = q_-(0) .

The Maxwell approximation is in this case given by the equation

q_+(0) = q_+(∞) = 1/2 (q_∞ + q̂_∞) = q_∞/2 + a/√(2π) ρ_∞ .

The outgoing quantities are computed as

q_-(0) = q_-(∞) = q_∞/2 - a/√(2π) ρ_∞ , ρ_-(0) = ρ_-(∞) = ρ_∞/2 - 1/(√(2π) a) q_∞ .

§.§.§ The extrapolation length
In order to estimate the quality of the above approximation, we consider

v ∂_x f = -(f - (ρ + v/a^2 q) M(v))

with q = 0 and k(v) = v M, in order to determine f(∞,v) = λ_∞ M. The Maxwell approximation gives ρ_∞ = (√(2π)/2) a. With the half-moment method we obtain, for a = 1,

1/2 (ρ_∞ + (λ + √(2π)/(a(π-4))) γ + 2/(√(2π) a) q_∞) = ρ_+(0) = 1/√(2π)
1/2 (q_∞ - γ + 2a/√(2π) ρ_∞) = q_+(0) = 1/2 .

With q_∞ = 0 this results in

ρ_∞ = (π√(2π) + 2π(1+a) - 2√(2π) - 8a) / (a(√(2π)π + 2π - 2√(2π) - 4)) .

For a = 1 this gives ρ_∞ = 1.443. Compared to the very accurate value 1.4371 obtained with a spectral method <cit.>, this yields an error of 0.4%. Maxwell gives 1.2533, which is an error of 12.8%. The variational method <cit.> gives 1.4245, or an error of 0.9%.

§.§.§ Coupling conditions
Integrating the kinetic coupling conditions, multiplied by v, over half-spaces gives

q^i_+(0) = -∑_{j=1}^n c_ij q_-^j(0) ,

which yields

q^i_∞ - γ^i + 2a/√(2π) ρ^i_∞ = -∑_{j=1}^n c_ij (q^j_∞ + γ^j - 2a/√(2π) ρ^j_∞) .

Summing these equations gives ∑_i q^i_∞ = 0. Further, for a uniform node we observe the invariance of

(n-2)/n q^i_∞ - γ^i + 2a/√(2π) ρ^i_∞ = (n-2)/n q^j_∞ - γ^j + 2a/√(2π) ρ^j_∞ .

Moreover, from ρ^i_+(0) = ∑_{j=1}^n c_ij ρ_-^j(0) one obtains

ρ^i_∞ + (λ + √(2π)/(a(π-4))) γ^i + 2 q^i_∞/(√(2π) a) = ∑_{j=1}^n c_ij (ρ^j_∞ + (-λ + √(2π)/(a(π-4))) γ^j - 2 q^j_∞/(√(2π) a)) .

Summing these equations we obtain again ∑_{i=1}^n γ^i = 0. In the case of a uniform node we can further rearrange, such that

ρ^i_∞ + ((n-2)/n + √(2π)/(π-2)) λ γ^i + 2/(√(2π) a) (n-2)/n q^i_∞ = ρ^j_∞ + ((n-2)/n + √(2π)/(π-2)) λ γ^j + 2/(√(2π) a) (n-2)/n q^j_∞

holds. Together with the conditions at infinity we have again 3n conditions for the 3n quantities γ^i, ρ^i_∞, q^i_∞. Combining (<ref>) and (<ref>) we can eliminate the γ^i and obtain as invariant

ρ + (n-2)/n · ((n-2)/n (π-2)√(2π) + 4π - 8) / (√(2π)(π-4) + (n-2)/n (2π-4) + 2√(2π)) · 1/a · q ,

or

ρ + (n-2)/n · 1/a · (4 + (n-2)/n √(2π)) / (√(2π) + 2(n-2)/n) · q .

The values of the factor C for the coupling invariants ρ + C q for the present case with unbounded velocities are summarized in Table <ref>. For a = 1 and n = 3 this factor is approximately 0.5079. In contrast, the Maxwell approximation gives 0.4178. For a^2 = 1/3 the factor is √(6π)/6 ∼ 0.723 for Maxwell and approximately 0.8797 for the above half-moment formula. This has to be compared to the case of bounded velocities discussed in the last section, where the factor has been determined as 0.7313 for the half-moment and 0.6666 for the Maxwell approximation. Thus, depending on the underlying kinetic model, different coupling conditions are obtained for the same macroscopic equation.

§ NUMERICAL RESULTS
In this section we compare the numerical results of the different models on networks.
The solutions of the kinetic equation (<ref>) are compared with the half-moment approximation (<ref>) and the macroscopic wave equation (<ref>) with the different coupling conditions (<ref>), (<ref>), (<ref>). The networks are composed of coupled edges; each arc is given by an interval x ∈ [0,1], which is discretized with 400 spatial cells if not otherwise stated. In the kinetic model the velocity domain [-1,1] is discretized with 400 cells, and we choose ϵ = 0.001 if not otherwise stated. For the advective part of the equations we use an upwind scheme. The source term in the kinetic and half-moment equations is approximated with the implicit Euler method. We note that for the wave equation the upwind scheme yields the exact solution by choosing the CFL number equal to 1. At the nodes we consider, in addition to the coupling conditions discussed above, for comparison a coupling based on the assumption of equal density at the node, see <cit.>,

ρ^i = ρ^j , i ≠ j , i,j = 1,2,3 , ∑_{i=1}^3 q^i = 0 .

In general, at the outer boundaries of the network, boundary conditions have to be imposed. For the kinetic problem the values for the ingoing velocities have to be prescribed as mentioned before. For the half-moment approximation the natural boundary conditions are given by integrating the kinetic conditions and using ρ_+ and q_+ at left boundaries and ρ_- and q_- at right boundaries. For the wave equations with full moment, Maxwell and half-moment conditions we use the corresponding approximations discussed above to provide boundary values for the macroscopic equations. We note that for the boundary values at the right end of the edges we apply the same procedure as detailed in the previous sections for the left end, reversing the spatial orientation in the half-space problem. More precisely, a given kinetic inflow ℓ(v) for v ∈ [-1,0] at the right boundary leads to the following boundary data. In the kinetic model we directly impose f(x=1,v) = ℓ(v), v ∈ [-1,0]; for the half-moment model the quantities ρ^- = <ℓ(v)>_- and q^- = <v ℓ(v)>_- are fixed. In case of the wave equation, the right-going Riemann invariant q + aρ = C is given by the inner data. The remaining information is given by the following approximations:

wave Maxwell: <v ℓ(v)>_- = -ρ/4 + 1/2 q
wave half-moment: <ℓ(v)>_- = 1/2 ρ - 3/4 q - ((3a+1)/2) γ , <v ℓ(v)>_- = -1/4 ρ + 1/2 q + a/4 γ
wave full-moment: <ℓ(v)>_- = 1/2 ρ - 3/4 q ,

which can be obtained by revisiting section <ref> with x ∈ [-∞,0]. These are used to prescribe the value of the left-going characteristic q - aρ.

§.§ Tripod network
In a first example, we consider a tripod network with initial conditions f^1(x,v) = 1/2, f^2(x,v) = 1/3 and f^3(x,v) = 0. The corresponding macroscopic states are (ρ^1,q^1) = (1,0), (ρ^2,q^2) = (2/3,0) and (ρ^3,q^3) = (0,0). We use free boundary conditions at the exterior boundaries. The computational time is chosen such that the waves generated at the node do not reach the exterior boundaries.

In Figure <ref> we compare the kinetic, the half-moment and the wave equation with coupling conditions given by the Maxwell, the half-moment and the full-moment approach and the assumption of equal density at time T = 1. We first observe that the half-moment model gives a very accurate approximation of the kinetic equation. Considering the wave equation with different coupling conditions, one observes that the interior state is very accurately approximated by the half-moment coupling conditions. Also the Maxwell approximation provides a good approximation in this case.
Note that a boundary layer appears at the node in the kinetic model; see Figure <ref> for a magnification of the situation on edge 1. Moreover, we investigate the evolution of the total entropy in the network, i.e. ∑_{i=1}^3 1/2 ∫ ((ρ^i)^2 + (q^i)^2/a^2) dx. Initially, the total entropy at t = 0 is equal to 0.722222. In this case we use a very fine grid with 30000 spatial cells for all models and 400 cells in velocity space for the kinetic equation. In Table <ref> the value of the total entropy at time T = 0.1 is shown for the different coupling conditions, together with the half-moment and the kinetic solution for comparison. One observes the very accurate approximation given by the wave equation with half-moment coupling conditions. Note that even for the equal density coupling a small amount of entropy is lost. This is caused by the numerical diffusion in the very last time step, since we have Δt < Δx/a to hit the final time. In Table <ref> the value of the total entropy at time T = 0.1 is shown for the kinetic and half-moment equations for different spatial discretizations. Finally, we investigate the kinetic equation for different values of ϵ. For ϵ → 0 the kinetic and the half-moment model are very well approximated by the solution of the wave equation with half-moment coupling conditions, see Figure <ref> and the corresponding magnification at the node in Figure <ref>.

§.§ Diamond network
As a second example we consider a more complicated network, see Figure <ref>, as, for example, studied in <cit.> for the wave equation. As initial conditions for the kinetic equation we choose f^1(x,v) = 1, f^2(x,v) = 5/6 and f^j(x,v) = 1/2 for j = 3,…,7, which corresponds to macroscopic densities ρ^1 = 2, ρ^2 = 5/3 and ρ^j = 1 for j = 3,…,7 and fluxes q^j = 0, j = 1,…,7. These data are also prescribed at the two outer boundaries, i.e. k^1(v) = 1, v ∈ [0,1], and ℓ^7(v) = 1/2, v ∈ [-1,0]. Boundary conditions for the wave equation with full moment, Maxwell and half-moment conditions are derived as detailed above. In case of the equal density conditions, we determine the ingoing characteristic using ρ = 1, q = 0 at the E_1-boundary and ρ = 1/2, q = 0 at the E_7-boundary.

In Figure <ref> the density ρ^4 on edge 4 is displayed at time t = 3 and t = 10. As before, we observe a good agreement of the half-moment coupling with the kinetic and half-moment model. Also the Maxwell approximation is relatively close to the kinetic results. The states of the full moment coupling and the equal density coupling deviate remarkably from the kinetic results. In Figure <ref> on the left, ρ^4 at time t = 50 is shown. All models have reached a stationary state except the equal density coupling. Since the entropy is conserved, a set of waves remains trapped in the network, oscillating back and forth. The stationary states of the models with entropy losses almost coincide. On the right hand side of Figure <ref> the evolution of the total entropy over time is plotted. The total entropy is increasing due to inflow from the boundaries. All models saturate at a certain level, but we again observe a deviation of the equal density and the full moment coupling compared to the accurate Maxwell coupling. In this situation the results for the half-moment coupling, the half-moment model and the kinetic model coincide.

§ CONCLUSION AND OUTLOOK
In this work we have derived coupling conditions for the wave equation on a network based on coupling conditions for an underlying kinetic BGK-type model via a layer analysis of the situation near the nodes.
The presentation in this work includes a new half-moment approximation of the kinetic half-space problem. The general approach can be extended to more complicated problems like linearized Euler equations or kinetic-based coupling conditions for nonlinear problems like Burgers and Lighthill-Whitham type equations, which will be considered in a forthcoming publication.

§ ACKNOWLEDGMENT
The first author is supported by the Deutsche Forschungsgemeinschaft (DFG) grant BO 4768/1, the second author by DFG grant KL 1300/26. Moreover, funding by the DFG within the RTG 1932 "Stochastic Models for Innovations in the Engineering Sciences" is gratefully acknowledged.

ALM10 S. Avdonin, G. Leugering, V. Mikhaylov, On an inverse problem for tree-like networks of elastic strings, Journal of Applied Mathematics and Mechanics (ZAMM), 90, 2, 136–150, 2010
BHK06b M. Banda, M. Herty, A. Klar, Coupling conditions for gas networks governed by the isothermal Euler equations, NHM 1(2), 295–314, 2006
BHK06a M. Banda, M. Herty, A. Klar, Gas flow in pipeline networks, NHM 1(1), 41–56, 2006
BSS84 C. Bardos, R. Santos, and R. Sentis, Diffusion approximation and computation of the critical size, Trans. Amer. Math. Soc. 284, 2, 617–649, 1984
BLP79 A. Bensoussan, J.L. Lions, and G.C. Papanicolaou, Boundary-layers and homogenization of transport processes, J. Publ. RIMS Kyoto Univ. 15, 53–157, 1979
BCG10 R. Borsche, R. Colombo, M. Garavello, On the coupling of systems of hyperbolic conservation laws with ordinary differential equations, Nonlinearity 23, 11, 2749, 2010
BGKS14 R. Borsche, S. Göttlich, A. Klar, and P. Schillen, The scalar Keller-Segel model on networks, Math. Models Methods Appl. Sci., 24, 2, 221–247, 2014
BKKP16 R. Borsche, J. Kall, A. Klar, and T.N.H. Pham, Kinetic and related macroscopic models for chemotaxis on networks, Mathematical Models and Methods in Applied Sciences, 26, 6, 1219–1242, 2016
BNR14 G. Bretti, R. Natalini, and M. Ribot, A hyperbolic model of chemotaxis on a network: a numerical study, ESAIM: M2AN, 48, 1, 231–258, 2014
CC17 F. Camilli and L. Corrias, Parabolic models for chemotaxis on weighted networks, to appear in Journal de Mathematiques Pures et Appliquees, 2017
C69 C. Cercignani, A Variational Principle for Boundary Value Problems, J. of Stat. Phys. 1, 2, 1969
C88 C. Cercignani, The Boltzmann Equation and its Applications, Springer, 1988
CLT14 I.K. Chen, T.P. Liu, and S. Takata, Boundary singularity for thermal transpiration problem of the linearized Boltzmann equation, Arch. Ration. Mech. Anal. 212, 2, 575–595, 2014
CGP05 G. M. Coclite, M. Garavello, and B. Piccoli, Traffic flow on a road network, SIAM J. Math. Anal., 36, 1862–1886, 2005
CHS08 R. Colombo, M. Herty, V. Sachers, On 2 × 2 conservation laws at a junction, SIAM J. Math. Anal. 40, 2, 2008
Coron F. Coron, Computation of the Asymptotic States for Linear Halfspace Problems, TTSP 19, 2, 89, 1990
CGS88 F. Coron, F. Golse, C. Sulem, A Classification of Well-posed Kinetic Layer Problems, CPAM, 41, 409, 1988
DZ01 R. Dager, E. Zuazua, Controllability of tree-shaped networks of vibrating strings, C. R. Acad. Sci. Paris, 332, 1087–1092, 2001
DL02 B. Dubroca, A. Klar, Half Moment closure for radiative transfer equations, J. Comp. Phys., 180 (2), 584–596, 2002
EK16 H. Egger, T. Kugler, Damped wave systems on networks: Exponential stability and uniform approximations, https://arxiv.org/abs/1605.03066, 2016
FT15 L. Fermo and A. Tosin, A fully-discrete-state kinetic theory approach to traffic flow on road networks, Math. Models Methods Appl. Sci., 25, 3, 423–461, 2015
http://arxiv.org/abs/1708.07757v1
{ "authors": [ "Raul Borsche", "Axel Klar" ], "categories": [ "math.AP", "math.NA", "35L45, 35R02, 82B40, 90B10, 65M08" ], "primary_category": "math.AP", "published": "20170825141904", "title": "Kinetic layers and coupling conditions for macroscopic equations on networks I: the wave equation" }
1 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA; [email protected]
2 Max Planck Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany

Emission from high–dipole moment molecules such as HCN allows determination of the density in molecular clouds, and is often considered to trace the "dense" gas available for star formation. We assess the importance of electron excitation in various environments. The ratio of the rate coefficients for electrons and H_2 molecules, ≃10^5 for HCN, yields the requirements for electron excitation to be of practical importance: n(H_2) ≤ 10^5.5 cm^-3 and X(e^-) ≥ 10^-5, where the numerical factors reflect the critical values n_c(H_2) and X^*(e^-). This indicates that in regions where a large fraction of carbon is ionized, X(e^-) will be large enough to make electron excitation significant. The situation is in general similar for other "high density tracers", including HCO^+, CN, and CS, but there are significant differences in the critical electron fractional abundance, X^*(e^-), defined by the value required for equal effect from collisions with H_2 and e^-. Electron excitation is, for example, unimportant for CO and C^+. Electron excitation may be responsible for the surprisingly large spatial extent of the emission from dense gas tracers in some molecular clouds (Pety et al. 2017; Kauffmann, Goldsmith et al. 2017). The enhanced estimates for HCN abundances and HCN/CO and HCN/HCO^+ ratios observed in the nuclear regions of luminous galaxies may be in part a result of electron excitation of high dipole moment tracers. The importance of electron excitation will depend on detailed models of the chemistry, which may well be non–steady state and non–static.

§ INTRODUCTION
The possible importance of excitation of the rotational levels of molecules by collisions with electrons, and consideration of the effect of such collisions on observed line ratios, is not at all new <cit.>. However, relatively early observations and modeling of molecular ions in dense clouds showed that in well–shielded regions, the fractional abundance of electrons is very low, 10^-7–10^-8 <cit.>. Values of X(e^-) in this range would make electron excitation insignificant, although the situation in clouds with lower extinction is dramatically different, with X(e^-) ≥ 10^-5 making electron collisions the dominant excitation mechanism for CS <cit.> and for CN <cit.>. The early calculations of electron excitation rate coefficients <cit.> were forced to make a variety of approximations, but suggested that the excitation rates scale as the square of the molecule's permanent electric dipole moment. Thus, electron excitation would be important for the widely–observed CO molecule only under very exceptional circumstances, but the question of the possible importance of electron excitation in varied regions of the interstellar medium has not been examined.

In studies of star formation in other galaxies, emission from the high–dipole moment molecule HCN has been used as a measure of the "dense" gas in which star formation takes place <cit.>. The question of the H_2 density that characterizes the regions responsible for this emission thus arises. Since this emission is generally not spatially resolved, excitation by electrons could be contributing, especially in the outer regions of clouds subject to high radiation fields. <cit.> have studied the density
associated with HCN emission in the Orion molecular cloud, and find that a large fraction of the flux is produced in regions having n(H_2) ≈ 10^3 cm^-3, well below the range ≥ 3×10^4 cm^-3 assumed by <cit.>. <cit.> studied a variety of molecules including HCN in Orion B, and found that the spatial extent of their emission was not correlated with the density of H_2 required for collisional excitation. One possible explanation is electron excitation in the outer regions of the cloud, making reexamination of the possible role of electron excitation appropriate [We acknowledge appreciatively the suggestion by Simon Glover to analyze the possible role of electron excitation.].

In this paper, we review the recent rate calculations for HCN, HCO^+, CS, and CN in <ref>. Their influence on the excitation of molecules—with a particular focus on HCN—is summarized in <ref>, which utilizes the three-level model developed in Appendix <ref> together with multilevel statistical equilibrium calculations. This section ends with a more general discussion that extends the argument to transitions of HCO^+, CS, and CN (<ref>). Clouds of different types in different environments are examined using a PDR code to determine their electron density distribution in <ref>. In this section we also discuss the question of the abundance of molecules in the high–electron density regions, including diffuse and translucent clouds, molecular cloud edges, and the central regions of active galaxies. We summarize our conclusions in <ref>.

§ COLLISION RATE COEFFICIENTS
We first discuss the HCN molecule as perhaps the premier example of a high–dipole moment molecule for which electron excitation can be relatively important. Along with presenting their quantum collision rate coefficients for CS, <cit.> make a brief comparison of excitation by H_2 and electrons, concluding that for X(e^-) ≥ 10^-5 the latter should not be ignored. To a reasonable approximation, the collision cross sections for electron excitation will be dominated by long–range forces and scale as the square of the permanent electric dipole moment, μ_e <cit.>. Thus the rates for the CS molecule having μ_e = 1.958 D <cit.> or HF with μ_e = 1.827 D <cit.> would be ≃50% of those of HCN having μ_e = 2.985 D <cit.>. An extremely polar molecule such as LiH, having μ_e = 5.88 D <cit.>, would have electron collision rates almost a factor of 4 greater than those of HCN. But overall, rates for high–dipole moment molecules are fairly well confined within about an order of magnitude. The obvious outlier is CO, with μ_e = 0.11 D <cit.>, thus having electron collision rate coefficients ≃0.003 of those for high–dipole moment molecules. In the following section we focus on HCN, given its observational importance.
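Since the electron-impact cross sections are dominated by the long-range electron–dipole interaction, the μ_e² scaling above supports quick order-of-magnitude estimates. A minimal sketch using only the dipole moments quoted in the text (the scaling is approximate; note that the quoted CO value of ≃0.003 differs somewhat from the pure μ_e² estimate, since short-range effects matter more at small dipole moment):

    # Approximate relative electron-impact rate coefficients from the
    # mu_e^2 scaling, normalized to HCN (dipole moments from the text, in Debye).
    dipoles = {"HCN": 2.985, "CS": 1.958, "HF": 1.827, "LiH": 5.88, "CO": 0.11}

    mu_ref = dipoles["HCN"]
    for molecule, mu in dipoles.items():
        print(f"{molecule:4s}: R(e-)/R_HCN(e-) ~ {(mu / mu_ref) ** 2:.3f}")

Running this reproduces the statements above: ≃0.43 for CS, ≃3.9 for LiH.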
We also consider HCO^+, CS, and CN in <ref>–<ref>.

§.§ HCN Excitation by Electrons
The calculation of electronic excitation of the lower rotational levels of HCN and isotopologues by <cit.> includes treatment of the hyperfine levels, and considered the HNC molecule and isotopologues as well. Here, we do not consider the issue of the hyperfine populations, which, although observable and informative in dark clouds with relatively narrow line widths, are not an issue for the study of GMCs, especially large–scale imaging in the Milky Way and other galaxies. <cit.> present their results in the form of polynomial coefficients for the deexcitation (J_final < J_initial) rate coefficients as a function of the kinetic temperature. We have calculated rates for a number of temperatures and give the results for some of the lowest rotational transitions in Table <ref>. We include here as well the analogous results for HCO^+, CS, and CN, which are discussed in <ref>–<ref>.

We see that the temperature dependence of the deexcitation rate coefficients is quite weak, and that, in common with previous analyses, |ΔJ| = 1 (dipole–like) transitions are strongly favored for electron excitation of neutrals, but less strongly so for electron excitation of ions. For any transition, the collision rate is the product of the collision rate coefficient and the density of collision partners (e.g. electrons or H_2 molecules): C (s^-1) = R (cm^3 s^-1) n(e^- or H_2; cm^-3). The full set of deexcitation rate coefficients is available on the LAMBDA website (<cit.>; http://home.strw.leidenuniv.nl/~moldata/).

Table: Electron Deexcitation Rate Coefficients for Lower Rotational Transitions of HCN^1, HCO^+^2, and CS^3 (units are 10^-6 cm^3 s^-1)

 Molecule | Transition J_u–J_l | Kinetic Temperature (K)
          |                    |   10  |   20  |   40  |   80  |  100
 HCN      | 1–0                |  3.9  |  3.7  |  3.5  |  3.2  |  3.1
          | 2–1                |  3.7  |  3.6  |  3.4  |  3.1  |  3.0
          | 2–0                | 0.089 | 0.076 | 0.064 | 0.054 | 0.051
 HCO^+    | 1–0                | 13.5  |  9.5  |  6.6  |  6.0  |  3.8
          | 2–1                | 15.3  | 10.8  |  7.8  |  5.7  |  5.0
          | 2–0                |  1.5  |  1.0  |  0.75 |  0.53 |  0.46
 CS       | 1–0                |  1.8  |  1.7  |  1.6  |  1.6  |  1.5
          | 2–1                |  1.9  |  1.8  |  1.7  |  1.6  |  1.6
          | 2–0                | 0.044 | 0.038 | 0.031 | 0.024 | 0.023

^1 see <ref>   ^2 see <ref>   ^3 see <ref>

§.§ HCN Excitation by H and H_2
The calculation of rate coefficients for collisions between HCN and H_2 molecules started with <cit.>, who considered HCN as having only rotational levels and included He as the collision partner, representing H_2 in its ground para–H_2 (I = 0) state with antiparallel nuclear spins. Interest in the non–LTE ratio of HCN hyperfine components led <cit.> to include the hyperfine levels separately. They found that the individual excitation rate coefficients summed together to give total rotational excitation rate coefficients very similar to those found by <cit.>. <cit.> again considered He as the collision partner, while focusing on differences between the excitation rates for HCN and HNC.
<cit.> employed a new potential energy surface (PES), while still considering the collision partner to be He. The rate coefficients are not very different from those found previously, but they do confirm the difference between HCN and HNC. <cit.> treat the colliding H_2 molecule as having internal structure, but average over H_2 orientations, considering effectively only molecular hydrogen in the j = 0 level (we employ lower case j to denote the rotational level of H_2 in order to avoid confusion with the rotational level of HCN). <cit.> have recently calculated collisions between HCN and H_2 molecules, considering for the first time the latter in individual rotational states. They find that there is a significant difference between collisions with H_2 in the j = 0 level and collisions with H_2 in higher rotational levels. The deexcitation rate coefficients for the lower levels of HCN for H_2(j ≥ 1) are quite similar to one another, and are 3 to 9 times greater than those for H_2(j = 0), rather than there being, for example, a systematic difference between ortho– and para–H_2 rates. The HCN deexcitation rates for H_2(j = 0) are generally similar in magnitude to those of <cit.> and <cit.>; the deexcitation rates for ΔJ = –1 and –2 transitions are comparable, in contrast to those for H_2(j ≥ 1), which show a significant propensity for ΔJ = –2. The numerical results for many of these calculations (often not given in the published articles) are available on the website http://basecol.obspm.fr.

The major difference between the rates for H_2(j = 0) and H_2(j ≥ 1) adds a significant complication to the analysis of HCN excitation, since it implies a dependence on the H_2 ortho to para ratio, which is itself poorly known and likely varies considerably as a function of environment <cit.>. If the H_2 ortho to para ratio is close to equilibrium at the local kinetic temperature, then in all but the most excited regions of molecular clouds, only the collision rate coefficients with H_2(j = 0) will be significant.

§.§ HCO^+
<cit.> considered electron excitation of the HCO^+ ion. This work provided coefficients for evaluating the rates only for the three lowest rotational states, but the rates themselves are comparable to those of HCN, and thus are reasonably consistent with a scaling following μ_e^2. This calculation was supplemented by one including more levels, with improved accuracy at low temperatures, described by <cit.> and kindly provided to us by A. Faure. The newer deexcitation rate coefficients for low–J transitions are a factor ≃3 larger at 10 K, but the difference drops rapidly for higher kinetic temperatures and is only 10 to 20% for T_k = 100 K. We include the deexcitation rate coefficients in Table <ref>. Collision rates for HCO^+ excitation by H_2 have been calculated by <cit.>. The deexcitation rate coefficients for these collisions are a factor of 3 to 25 times larger than those for HCN. The HCO^+ ΔJ = –1 rate coefficients are larger than those for ΔJ = –2 collisions, unlike the case for HCN, for which the inverse relationship holds.

§.§ CS
Electron collisions for CS have been analyzed by <cit.>, who found deexcitation rate coefficients a factor of 2 smaller than those for HCN, and with an exceptionally strong propensity rule favoring ΔJ = –1 collisions. A selection of the lowest transitions is included in Table <ref>.
The various calculations of electron excitation of the lower transitions of CS, carried out by different methods, vary by less than 30%, and by less for temperatures ≥ 50 K <cit.>, an accuracy that is probably characteristic of the electron collision rate coefficients for the other molecules as well. Deexcitation rate coefficients for collisions with H_2 are from <cit.>, but these results do not differ appreciably from those of <cit.>, and are ≃3×10^-11 cm^3 s^-1, comparable to those for HCN for ΔJ = –2, but a factor of 2 to 3 larger for ΔJ = –1.

§.§ CN
The spin–rotation coupling for CN complicates the energy level structure and makes accurate comparisons with simple rotors difficult. Excitation rate coefficients for collisions with electrons were calculated by <cit.> for the lowest few levels, and extended by <cit.>. From the Leiden LAMBDA database (http://home.strw.leidenuniv.nl/~moldata/datafiles/cn.dat) we find a characteristic rate coefficient at 20 K for N = 1–0 equal to 5.7×10^-7 cm^3 s^-1.

§ EXCITATION BY ELECTRONS AND H_2 MOLECULES
In this section we discuss how molecules are excited in collisions with electrons and H_2 molecules. Throughout this section we often use HCN as a reference case, given the high astrophysical importance of the molecule and its high sensitivity to collisions with electrons.

§.§ Critical Electron Fractional Abundance
The critical density of a molecule is often used to indicate how a molecular species depends on the environmental density. Appendix <ref> provides a discussion of such trends. For our discussion it is important to realize that we are dealing with two critical densities per molecule: one critical value that describes the excitation by electrons, n_c(e^-), and one that describes the excitation in collisions with H_2 molecules, n_c(H_2). These densities can then be used to gauge the relative importance of electron excitation for a given molecule. In the context of the simplified 3–level model described in Appendix <ref>, the critical fractional abundance of electrons required to have the electron collision rate equal to the H_2 collision rate is

X^*(e^-) = R^e_10(H_2)/R^e_10(e^-) = n_c(e^-)/n_c(H_2) .

Table <ref> gives the critical densities and critical fractional abundances of electrons for the J = 1–0 transitions of HCN, HCO^+, CS, and CN. The entries for CN are representative values for the N = 1–0 transitions (near 113 GHz), and these must be regarded as somewhat more uncertain due to the more complex molecular structure and less detailed calculations, as discussed above. HCN has relatively small rate coefficients for collisions with H_2, and consequently large n_c(H_2), while only modestly smaller rate coefficients for collisions with electrons. The result is that the critical electron fraction for HCN is lower than for the other high–dipole moment species considered. CN follows, with CS having a somewhat higher value of X^*(e^-). HCO^+ has a rather significantly higher value yet, making this species less likely to be impacted by electron excitation than the others.
Table: Critical Densities and Critical Electron Fractional Abundances for the J = 1–0 Transitions^1 of Different Species at 20 K

 Molecule | n_c(e^-) (cm^-3) | n_c(H_2) (cm^-3) | X^*(e^-)
 HCN      | 6.5              | 6.5×10^5         | 1.0×10^-5
 HCO^+    | 4.5              | 1.2×10^5         | 3.8×10^-5
 CS       | 1.8              | 2.3×10^4         | 7.9×10^-5
 CN       | 21               | 1.7×10^6         | 1.3×10^-5

^1 The entries for CN are characteristic values for the N = 1–0 transitions, and are somewhat approximate.

§.§ Emission and Excitation Temperature
A complementary approach is to consider how the integrated intensity of different species is affected by electron excitation. For optically thin emission, the integrated antenna temperature is just proportional to the upper level column density, which for a uniform cloud is proportional to the upper level density. In the low density limit with no background radiation, from equations <ref> and <ref> we can write (for excitation by any combination of collision partners)

∫ T_a dv ∝ N_1 A_10 = N(HCN) C^t_01 ,

where N_1 is the column density of HCN in the J = 1 state, N(HCN) is the total column density of the molecule, and

C^t_01 = C^e_01(e^-) + C^e_01(H_2)

is the total collisional excitation rate from J = 0 to J = 1. The total deexcitation rate is determined from equation <ref> through detailed balance. Using the relationship between the collision rates and collision rate coefficients for the electrons and H_2 molecules and the critical densities for each species (from equations <ref> and <ref>), we can express the integrated intensity as

∫ T_a dv ∝ N(HCN) A_10 [ n(e^-)/n_c(e^-) + n(H_2)/n_c(H_2) ] .

A measure of the degree of excitation of the J = 1–0 transition is its excitation temperature, which is defined by the ratio of molecules per statistical weight in the upper and lower levels. With T^*_10 = ΔE/k_B we can write

T_ex_10 = T^*_10 / ln(N_0 g_1/N_1 g_0) = 4.25 K / ln(3 n_0/n_1) = 4.25 K / ln(3 A_10/C^t_01) ,

where g_0 = 1 and g_1 = 3.

The importance of electron collisions for the excitation of HCN (or other high–dipole moment molecules) does depend on the rate of neutral particle excitation that is present, as shown in Figure <ref>. The thermalization parameter Y = C^t_10/A_10 gives the total deexcitation rate relative to the spontaneous decay rate. Each of the curves in Figure <ref> is for a given value of Y, with small values of Y indicating subthermal excitation and Y ≫ 1 indicating thermalization. In the area where the curves are essentially vertical, the H_2 density is sufficient to provide the specified value of Y, and the fractional electron abundance is sufficiently small that electrons are unimportant collision partners. In the area in which the curves run diagonally, the value of Y increases linearly as a function of X(e^-) and n(H_2), indicating that electrons are the dominant collision partners.

In order that electron collisions be of practical importance, we must satisfy two conditions. First, the H_2 density must be insufficient to thermalize the excitation temperature, meaning that n(H_2) ≤ n_c(H_2). For HCN J = 1–0 this implies n(H_2) ≤ 10^5.5 cm^-3. Second, X(e^-) must be sufficiently large that electrons are the dominant collision partner. This means that we must be in or near the area of the diagonal curves, which in combination with requirement 1 for that molecular transition we take as defined by X(e^-) ≥ X^*(e^-). For HCN J = 1–0 this means X(e^-) ≥ 10^-5.
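These two requirements can be illustrated with the simplified treatment of Appendix <ref>. A minimal sketch of the low-excitation (no-background, subthermal) estimate of T_ex for HCN J = 1–0 at T_k = 20 K; A_10 = 2.4×10^-5 s^-1 is an assumed value implied by n_c(e^-) = A_10/R_10(e^-) = 6.5 cm^-3, and the effective rate coefficients are those quoted in the Appendix:

    import math

    # Low-excitation estimate of T_ex for HCN J=1-0 (Appendix quantities).
    # Assumed: A_10 = 2.4e-5 s^-1 (implied by n_c(e-) = 6.5 cm^-3);
    # effective deexcitation rate coefficients at T_k = 20 K from the text.
    A_10 = 2.4e-5                      # s^-1
    R10_E, R10_H2 = 3.7e-6, 3.8e-11    # cm^3 s^-1 (electrons, H2)
    T_STAR, T_K = 4.25, 20.0           # K

    def t_ex(n_h2, x_e):
        """T_ex = 4.25 K / ln(3 A_10 / C^t_01), valid in the subthermal regime."""
        c10 = (R10_H2 + R10_E * x_e) * n_h2           # total deexcitation rate, s^-1
        c01 = 3.0 * c10 * math.exp(-T_STAR / T_K)     # detailed balance, g1/g0 = 3
        return T_STAR / math.log(3.0 * A_10 / c01)

    for x_e in (0.0, 1e-5, 1e-4):
        print(f"n(H2)=1e3 cm^-3, X(e-)={x_e:.0e}: T_ex ~ {t_ex(1e3, x_e):.2f} K")

At X(e^-) = X^*(e^-) = 10^-5 the electron and H_2 collision rates are equal, so the total collision rate (and the optically thin emission) doubles relative to H_2 excitation alone.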
§.§ Multilevel Results for HCN
In Figure <ref> we show the results for purely electron excitation of the HCN J = 1–0 transition from a 10 level calculation using RADEX <cit.> for the indicated conditions. In the upper panel we show the excitation temperature as a function of the electron density for the cases of a background temperature equal to 2.7 K (blue symbols) and equal to 0.0 K (red triangles). Given that the equivalent temperature difference between the upper and lower levels is 4.25 K, the higher background temperature corresponds to a significant excitation rate, and T_ex rises above the background only for an electron density of a few tenths of a cm^-3. In the case of no background, however, there is no "competition" for the collisional excitation, and T_ex increases monotonically starting from the lowest values of the electron density. In the lower panel we show the integrated intensity of the J = 1–0 line. With or without background, the emission increases linearly with collision rate as expected, as long as n(e^-) is well below n_c(e^-) = 6.5 cm^-3 (Equation <ref>). The nonzero background temperature reduces the integrated intensity by a constant factor due to the reduced population in the J = 0 level available for excitation and emission of photons <cit.>.

In Figure <ref> we present the results of a multilevel calculation for the conditions indicated. The H_2 excitation provides a certain level of excitation and emission; this is seen most clearly by comparing the left and right lower panels showing the integrated intensity. The excitation temperature for low electron densities is dominated by the background radiation, and for both of the H_2 densities considered, the collisional excitation rate is not sufficient to increase T_ex significantly above 2.7 K. In the presence of the background, the integrated emission is a more sensitive reflection of electron excitation than the excitation temperature. In agreement with the previous approximate analysis, the effect of the electron collisions becomes significant when the electron fractional abundance X(e^-) reaches 10^-5 (for n(H_2) = 10^3 or 10^4 cm^-3), at which point the emission has increased by 50%. Electron excitation is dominant for X(e^-) = 10^-4, with the integrated intensity increasing by a factor of 5.5 for n(H_2) = 10^3 cm^-3, and by a factor of 4.5 for n(H_2) = 10^4 cm^-3. For an H_2 density equal to 10^4 cm^-3 the H_2 excitation is significantly greater due to the order of magnitude greater density, but the electron density required to reach a level of emission significantly greater than that produced by the H_2 alone (e.g. ∫ T_a dv = 3 K km s^-1) is independent of the H_2 density.

§.§ Extension to HCO^+, CS, and CN
It is more difficult for electron excitation to play a role for HCO^+ than for HCN, since a much higher fractional electron abundance is required in order that the electron rate be comparable to or exceed that for H_2. This is shown by the offset in the electron fractional abundances of the diamond symbols in Figure <ref>, which show the value of X(e^-) required to double the intensity of the J = 1–0 transition relative to that produced by collisions with H_2. A fractional abundance of electrons ≃15 times greater is required for HCO^+ relative to that for HCN, largely due to the far larger H_2 deexcitation rate coefficients for HCO^+ more than outweighing its only somewhat larger e^- deexcitation rate coefficients.
Figure <ref> compares HCN and CS excitation as a function of electron density for an H_2 density of 3×10^3 cm^-3; this lower density is appropriate to ensure subthermal excitation. We consider the J = 2–1 transition of CS and the J = 1–0 transition of HCN in order to ensure comparable spontaneous decay rates. We see that an electron density ≃7 times greater is required for CS than for HCN to produce a factor of 2 enhancement in the integrated intensity.

The conclusion from the comparison of CS and HCO^+ with HCN is that for the latter molecule, a significantly lower fractional abundance of electrons can result in doubling the integrated intensity of the emission. Thus, if electron excitation is significant, we might expect an enhanced HCN to HCO^+ ratio, more extended HCN emission, or both, and the same, though to a lesser degree, relative to CS. However, these conclusions are highly dependent on the chemistry that is determining the abundances of these species.

§ CLOUD MODELS AND THE ELECTRON ABUNDANCE
§.§ Diffuse and Translucent Clouds
Diffuse and translucent clouds have modest total extinction (A_v ≤ 2 mag.) and densities typically 50–100 cm^-3. In consequence, carbon is largely ionized and the electron fractional abundance is on the order of 10^-4. Thus, as mentioned in <ref>, electron excitation of high–dipole moment molecules will be very significant. We have used the Meudon PDR code <cit.> to calculate the thermal and chemical structure of such a cloud, and show the results in Figure <ref>. Hydrogen is largely molecular except in the outer 0.25 mag. of the cloud, and the electron density of 0.01 cm^-3 results in n(e^-)/n(H_2) = 2×10^-4 throughout most of the cloud [All of the Meudon PDR code results presented in this paper assume a carbon to hydrogen ratio equal to 1.3×10^-4. This is somewhat lower than the value 1.6×10^-4 obtained for four sources by <cit.> using UV absorption observations, and the value 1.4×10^-4 adopted for analysis of the [C II] 158 μm fine structure line by <cit.>. Measurements of carbon and oxygen abundances in ionized regions compiled by <cit.> suggest a significant gradient in the [C]/[H] ratio, which they determine to be 6.3×10^-4 at a galactocentric distance of 6 kpc and 2.5×10^-4 at 10.5 kpc. A higher carbon abundance translates to a higher electron abundance where carbon is ionized, so that we may be underestimating the importance of electron excitation, but by an amount that likely depends on environment and location.]. Excitation of high–dipole moment molecules will thus predominantly be the result of collisions with electrons. However, as seen in the Figure, the density of HCN is only ≃10^-10 cm^-3 in the center of this cloud, corresponding to a fractional abundance relative to H_2 of 3×10^-13. X(HCN) falls rapidly below this value for A_v ≤ 0.2 mag.

For C^+ itself, the situation is quite different. The deexcitation rate coefficients for collisions with electrons <cit.> are ≃350 times larger than those for collisions with atomic hydrogen <cit.>, and ≃100 times greater than those for collisions with H_2 with an ortho to para ratio of unity <cit.>. Thus, even with atomic carbon totally ionized, collisions with electrons will be unimportant compared to those with hydrogen, whether in atomic or molecular form. In fully ionized gas, on the other hand, excitation of C^+ will be via collisions with electrons.
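A back-of-the-envelope version of the argument above: where carbon is fully ionized and hydrogen fully molecular, n(e^-) = [C/H] n_H with n_H = 2 n(H_2), so X(e^-) ≈ 2 [C/H]. A sketch using the gas-phase carbon abundance assumed in the PDR models and the critical fractions of Table <ref>:

    # X(e-) where C is fully ionized and H fully molecular: X(e-) ~ 2 [C/H].
    C_OVER_H = 1.3e-4                  # gas-phase [C]/[H] used in the PDR models
    x_e = 2.0 * C_OVER_H               # ~2.6e-4

    x_crit = {"HCN": 1.0e-5, "CN": 1.3e-5, "HCO+": 3.8e-5, "CS": 7.9e-5}  # Table 2
    print(f"X(e-) ~ {x_e:.1e}")
    for mol, xc in x_crit.items():
        print(f"  {mol:4s}: X(e-)/X*(e-) ~ {x_e / xc:4.1f}")   # >1 -> electrons dominate

For all four species the ratio exceeds unity, so in this regime electrons are the dominant collision partners, HCN being the most strongly affected.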
§.§ Electron Density in GMC Cloud Edges
Giant molecular clouds (GMCs) exist in a large range of masses, sizes, and radiation environments, making it difficult to draw specific conclusions about the electron density within them, which varies significantly as a function of position. We are interested primarily in the outer portion of the cloud, where we expect the electron fractional abundance to be maximum. Such regions are, in fact, the Photon Dominated Region (PDR) that borders every such cloud. As an illustrative example, we consider a slab cloud with a thickness equal to 5×10^18 cm and a Gaussian density distribution with central proton density equal to 1×10^5 cm^-3 and 1/e radius 2.35×10^18 cm, leading to an edge density equal to 1.1×10^3 cm^-3. The total cloud column density measured normal to the surface is 4.2×10^23 cm^-2. The results from the Meudon PDR code are shown in Figure <ref>. The solid curves are for a radiation field a factor of 10^4 greater than the standard ISRF. In the outer portion of the cloud shown, the transition from H to H_2 occurs at a density of ∼10^3 cm^-3 and an extinction of ≃1.2 mag, as a result of the relatively high external radiation field incident on the surface of the cloud. This level of radiation field is not unreasonably large for a cloud in the vicinity of massive young stars. Using the model of <cit.>, the front surface of the Orion cloud within a radius of ∼0.9 pc of the Trapezium cluster is subject to a radiation field of this or greater intensity.

In the outer portion of the cloud, the electron density is essentially equal to that of C^+, and the fractional abundance X(e^-) ≃ 2×10^-4 in the outermost 1.2 mag., where atomic hydrogen is dominant; it remains at this value to the point where A_v = 3 mag. The electron fractional abundance drops significantly moving inward from this point, falling to 10^-5 at A_v = 4 mag. From Figure <ref>, we see that the electrons increase the emission in HCN J = 1–0 by an order of magnitude relative to that from H_2 at the point where n(H_2) ≃ 10^3 cm^-3. The dotted and dashed curves show the electron density for radiation fields increased by factors of 10^3 and 10^2, respectively, relative to the standard ISRF. The lower radiation fields reduce the thickness of the layer of high electron density, but only slightly affect the density there. Reduction in the radiation field by a factor of 100 reduces the thickness of the layer by ≃ a factor of 2 in terms of extinction.

The results from this modeling suggest that regions of significant size can have electron densities sufficient to increase the excitation of any high–dipole moment molecules that may be present by a factor ≃5. An important requirement for this to be of observational significance is that the density of the species in question be sufficient in the region of enhanced electron density. This issue is discussed in the following section.

§.§ Molecular Abundances in GMC Cloud Edges
Standard models of the chemistry (e.g. the Meudon PDR code utilized in <ref>) in low–extinction portions of interstellar clouds predict that the density and fractional abundance of HCN will be quite low, as illustrated in Figure <ref> discussed in <ref>.
A result for a more extended, higher density region, also obtained using the Meudon PDR code, is shown in Figure <ref>, which focuses on the outer region of a cloud with uniform proton density 10^5 cm^-3 and total extinction 50 mag. The incident radiation field is the standard ISRF. Within 2 mag of the cloud boundary we find X(HCN) ∼ 4×10^-9, a factor ≃40 less than that in the region with A_v ≥ 4 mag. This is sufficiently small to make the emission per unit area from the outer portion of the cloud, even with electron excitation, relatively weak compared to that in the better–shielded portion of the cloud. However, depending on the structure of the cloud and the geometry of any nearby sources enhancing the external radiation field, the electron excitation could significantly increase the total high–dipole moment molecular emission from the cloud. As indicated in Figure <ref>, an increased cosmic ray ionization rate does increase the HCN abundance in portions of the cloud characterized by A_v ≥ 1 mag. The larger rate here is a reasonable upper limit for the Milky Way, so this effect is likely to be limited, but not necessarily in other galaxies <cit.>.

The abundance of high–dipole moment molecules (e.g. HCN and CS) and that of electrons, as enhanced by either UV or cosmic rays, are to a significant degree anticorrelated in standard PDR chemistry. This is illustrated in Table <ref>, which gives some results for PDR models of clouds of different densities with total visual extinction equal to 50 mag, illuminated from both sides by the standard ISRF, and experiencing a cosmic ray ionization rate equal to 5×10^-16 s^-1. This enhanced rate is responsible for the relatively large densities of atomic hydrogen in the well–shielded portions of the cloud.

Table: Densities of Different Species in Clouds of Different Densities

 Visual Extinction (mag) | n(H) (cm^-3) | n(e^-) (cm^-3) | n(HCN) (cm^-3)
 n(H) + 2n(H_2) = 10^6 cm^-3
 0.1 | 5.1×10^3 | 2.0×10^0  | 1.2×10^-2
 1.0 | 1.9×10^1 | 7.4×10^-1 | 9.2×10^-3
 3.0 | 2.1×10^1 | 2.5×10^-2 | 6.4×10^-2
 n(H) + 2n(H_2) = 10^5 cm^-3
 0.1 | 9.6×10^1 | 1.7×10^-1 | 4.7×10^-4
 1.0 | 2.0×10^1 | 8.0×10^-3 | 9.0×10^-4
 3.0 | 1.8×10^1 | 8.0×10^-3 | 1.5×10^-2
 n(H) + 2n(H_2) = 10^4 cm^-3
 0.1 | 1.2×10^2 | 6.9×10^-1 | 4.2×10^-6
 1.0 | 2.1×10^1 | 3.3×10^-2 | 4.0×10^-5
 3.0 | 1.9×10^1 | 5.5×10^-3 | 1.9×10^-3
 n(H) + 2n(H_2) = 10^3 cm^-3
 0.1 | 3.7×10^1 | 1.3×10^-1 | 8.0×10^-8
 1.0 | 2.4×10^1 | 2.9×10^-2 | 8.5×10^-7
 3.0 | 1.9×10^1 | 3.1×10^-3 | 7.4×10^-5

We see that only for the two lowest densities, and for visual extinction less than 1 mag, is the density of electrons high enough to significantly increase the excitation rate. However, the HCN density under these conditions corresponds to a fractional abundance only 1/1000 of that which can be reached in the well–shielded portions of clouds. Thus, the effect of the electron enhancement of the collision rate would be very difficult to discern. At high hydrogen densities, the density of electrons increases, but their abundance relative to H_2, the dominant form of hydrogen, is quite low. Thus, even though the HCN density increases almost as the square of the total density at low extinctions in this model, the HCN emission, when strong, will be produced by collisions with H_2.
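The anticorrelation in the table is easier to see in fractional form. A small sketch converting the two lowest-density blocks to X(e^-) and X(HCN), assuming n(H_2) = (n_tot − n(H))/2:

    # Fractional abundances implied by the table, with n_tot = n(H) + 2 n(H2).
    # Rows: (A_v [mag], n(H), n(e-), n(HCN)), all densities in cm^-3.
    table = {
        1e4: [(0.1, 1.2e2, 6.9e-1, 4.2e-6),
              (1.0, 2.1e1, 3.3e-2, 4.0e-5),
              (3.0, 1.9e1, 5.5e-3, 1.9e-3)],
        1e3: [(0.1, 3.7e1, 1.3e-1, 8.0e-8),
              (1.0, 2.4e1, 2.9e-2, 8.5e-7),
              (3.0, 1.9e1, 3.1e-3, 7.4e-5)],
    }
    for n_tot, rows in table.items():
        for a_v, n_h, n_e, n_hcn in rows:
            n_h2 = 0.5 * (n_tot - n_h)
            print(f"n_tot={n_tot:.0e}, A_v={a_v}: "
                  f"X(e-)={n_e / n_h2:.1e}, X(HCN)={n_hcn / n_h2:.1e}")

The low-extinction rows of the lowest-density models have X(e^-) well above X^*(e^-) for HCN, but X(HCN) there is only ∼10^-10 or below, illustrating the anticorrelation discussed above.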
The fractional abundances of all molecules in all regions may not yet be treated completely by such models. For example, the abundance of CO in diffuse clouds is well known to be much greater than predicted by standard models, and a variety of processes involving transient high temperatures have been proposed <cit.>; see also Section 4.4 of <cit.>. Possible mechanisms responsible for the elevated temperature include shocks, Alfvén waves, and turbulent dissipation <cit.>. The situation in GMCs is even less clear, as their range of densities and other physical conditions makes the determination of the abundance of a specific species at a particular position in a cloud very difficult. However, if any or all of the above processes suggested to operate in diffuse clouds are also present in the outer regions (or possibly the entire volume) of GMCs, molecular abundances may also be significantly different than would be expected from models with chemistry determined by the local kinetic temperature. The possibility of significant additional energy input to the gas in the boundary of the Taurus molecular cloud, a region with relatively low radiative flux, is suggested by the detection of emission in the rotational transitions of H_2, indicating that temperatures of several hundred K are present <cit.>.

Questions such as the apparent high abundance of atomic carbon throughout the volume of clouds <cit.>, in contrast to what is predicted by chemical models with smoothly–varying density <cit.>, have motivated the creation of highly–inhomogeneous "clumpy" cloud models <cit.>. In this picture, UV photons can permeate a large fraction of a cloud's volume, producing PDR regions on the clumps distributed throughout the cloud. Thus, the regions of high electron density, which are located adjacent to where C^+ transitions to C, are also distributed throughout the cloud. An entirely different class of models involves large–scale circulation of condensations between the outer regions of clouds and their interiors <cit.>. The effect of the circulation depends on many parameters, in particular the characteristic timescale, which is not very well determined. Yet another effect that may be significant is turbulent diffusion, which can significantly affect the radial distribution of abundances if the diffusion coefficient is sufficiently large <cit.>. HCN and electrons are not included specifically in the results these authors present, but given the nature of the mechanism, it is likely that the distributions of these species, with the former centrally concentrated and the latter greater at the edge in the absence of turbulent diffusion, will be made more uniform.

If the abundance of HCN (and other high density tracers) is reasonably large in the outer portion of molecular clouds, and the electron fractional abundance approaches or exceeds 10^-4 there, the total mass of the "high density region" may be overestimated. This could have an impact on using such molecular transitions as tracers of the gas available for the formation of new stars <cit.>, and the possible role of electron excitation in enhancing the size of the high–dipole moment molecular emission region should be considered.
§.§ Extreme Cloud Environments
The central regions of both starburst and AGN galaxies are extreme environments, with dramatically enhanced energy inputs compared to the "normal" ISM of the Milky Way and normal galactic disks. Determining the conditions in these regions is naturally challenging, but with the increasing availability of interferometers such as ALMA, there has been heightened interest in unravelling the physical conditions in the central nuclear concentration(s) as well as the surrounding tori that are seen in some galaxies. The density is one of the most important parameters, and the low–J transitions of HCN, and their intensity relative to other species, are among the most often–used probes.

The ratios of HCN to CO and of HCN to HCO^+ in the lowest rotational transition of each were observed in a number of nearby Seyfert galaxies by <cit.>, who proposed that the observed enhancement of R = I(HCN)/I(CO) in those dominated by an AGN could be produced by enhanced X–ray irradiation of the central region, based on the modeling of <cit.>. This interpretation was used to explain observations of the AGN NGC1068 by <cit.>. This diagnostic was extended to the J = 3–2 transitions of HCN and HCO^+ in NGC1097 by <cit.>, who found that the enhanced ratio of these two molecules was consistent with X–ray ionization, using the model of <cit.>. More detailed models of X–ray dominated regions have since been developed by <cit.> and <cit.>, which indicate a different effect on the HCN/HCO^+ ratio than found by <cit.>.

However, X–ray ionization and heating is not the only mechanism proposed to explain enhanced HCN emission. Heating alone, if sufficient to accelerate the reaction CN + H_2 → HCN + H (ΔE/k = 820 K), can increase the abundance of HCN. <cit.> proposed that shocks could be compressing and heating the gas in the outflow associated with the AGN Mrk 231. <cit.> observed the J = 4–3 transitions of HCN and HCO^+, along with other molecules, in the nucleus of the AGN NGC1097.
Combining their data with others (their Table 7) indicates that the HCN enhancement is greater in AGN than in starburst galaxies. Their chemical modeling suggests a dramatically enhanced HCN abundance based solely on having the gas temperature exceed 500 K. These authors reject UV and X–rays as the explanation and favor mechanical heating, possibly from a (so far unobserved) AGN jet. <cit.> also favor non–X–ray heating to explain their observations of the galaxy Arp 220, employing the chemical models of <cit.> and <cit.>.

In the context of these observations and proposed models, the relevance of electron excitation of high dipole moment molecules such as HCN is that the regions of enhanced HCN abundance, whether produced by X–rays, shocks, or UV, could well include substantial electron densities as well. We discussed previously that the integrated intensities of HCN emission could be substantially enhanced if the fractional abundance of electrons is on the order of 10^-4. For subthermal excitation and optically thin emission, the J = 1–0 integrated intensity (equation <ref>) can be written

∫ T_a dv ∝ N(molecule) A_10 [n(H_2)/n_c(H_2)] [1 + X(e^-)/X^*(e^-)] .

If we consider a given molecular species in a region of specified H_2 density, the effect of the electron excitation is contained in the second term in brackets. As examples, for a fractional electron abundance of 10^-5, the HCN and CN emission will be approximately doubled, while that of HCO^+ and CS will be only slightly enhanced. For a fractional electron abundance of 10^-4, the emission from HCN and CN will be increased by approximately an order of magnitude, while that of CS and HCO^+ by factors ≃2.3 and 3.6, respectively. Electron excitation could thus be responsible, at least in part, for the enhanced HCN/HCO^+ ratio reported by <cit.> in some Seyfert nuclei. For CO (<ref>), the electron excitation rates are dramatically smaller than those for HCN, so that X(e^-) ≥ 10^-5 will dramatically enhance HCN emission relative to that of CO, which <cit.> report to be correlated with enhanced HCN/HCO^+.

<cit.> employed the ratio of different HCN transitions to determine volume densities of H_2 and properties of the HCN–emitting region. Electron excitation can produce different ratios than does H_2 excitation as a result of the different J–dependence of the collision rate coefficients (Appendix <ref> and <cit.>). Figure <ref> shows the ratio of integrated intensities of different transitions J–(J–1) relative to that of the 1–0 transition. A 10–level calculation using RADEX was employed. The presence of a fractional abundance of electrons equal to 10^-4 reduces the H_2 density required to achieve a specified integrated intensity ratio. Observed ratios yield H_2 densities in the range 10^5 to 10^6 cm^-3. X(e^-) = 10^-4 reduces the required H_2 density by a factor of 3 to 4, which would have a significant impact on characterizing the central regions of galaxies. How important and prevalent the effect of electron excitation is depends on having more reliable models of the radiation field, ionization, and chemistry in the central regions of these luminous galaxies.
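A numerical recap of these enhancement factors, i.e. the bracketed term 1 + X(e^-)/X^*(e^-) of the integrated-intensity expression above, with X^*(e^-) taken from Table <ref> (a sketch; it assumes subthermal excitation and optically thin emission, as does that expression):

    # Enhancement factor (1 + X(e-)/X*(e-)) for the J=1-0 integrated intensity.
    x_crit = {"HCN": 1.0e-5, "CN": 1.3e-5, "HCO+": 3.8e-5, "CS": 7.9e-5}

    for x_e in (1e-5, 1e-4):
        factors = ", ".join(f"{m}: {1.0 + x_e / xc:4.1f}" for m, xc in x_crit.items())
        print(f"X(e-) = {x_e:.0e} -> {factors}")

This reproduces the factors quoted above: at X(e^-) = 10^-4, roughly an order of magnitude for HCN and CN, and 2.3 and 3.6 for CS and HCO^+, respectively.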
Many effects may be playing a role. IR pumping is likely important in some sources, as indicated by the detection of vibrationally excited HCN by <cit.>. <cit.> studied the effect of cosmic ray ionization rates up to a factor of 10^3 greater than standard (as compared to the modest factor of 10 considered in Figure <ref>) on the CO/H_2 ratio. The CO/H_2 ratio is reduced, but the magnitude of the effect depends on the local density. This study suggests that turbulent mixing, although not included in the modeling, is potentially important.

§ CONCLUSIONS
We have used quantum calculations of collisional excitation of the rotational levels of HCN, HCO^+, CN, and CS by electrons and H_2 molecules to evaluate the relative importance of electron excitation. The collisional deexcitation rate coefficients at the temperatures of molecular clouds are close to 10^5 times larger for electrons than for H_2 molecules (<ref>). The electron deexcitation rate coefficients scale as the square of the permanent electric dipole moment of the target molecule, so this effect is unimportant for the widely–used tracer CO. For subthermal excitation, the integrated intensity of the J = 1–0 transition is proportional to the sum of the electron and H_2 densities, each normalized to the appropriate critical density (Eq. <ref>). The requirements for electron excitation to be of practical importance are n(H_2) ≤ n_c(H_2) and X(e^-) ≥ X^*(e^-) (<ref>; also see Eq. <ref>). For the J = 1–0 transition of HCN this implies n(H_2) ≤ 10^5.5 cm^-3 and X(e^-) ≥ 10^-5. In regions where carbon is largely ionized but hydrogen is molecular, the fractional abundance of electrons, X(e^-) = n(e^-)/n(H_2), can exceed 10^-4, making electrons dominant for the excitation of HCN (<ref>). The situation for CN is similar, although somewhat more uncertain due to less complete collision rate calculations. For HCO^+, the rate coefficients for collisions with H_2 are more than an order of magnitude larger than those for HCN, more than outweighing the somewhat larger rate coefficients for electron collisions, and demanding a factor of 7–10 higher electron fractional abundance for electron excitation to be significant. For CS, the rate coefficients for electrons are a factor of 2 smaller than those for HCN, while the H_2 rate coefficients are a factor ≃3 larger, with the combination resulting in the requirement of a factor of 6 larger electron fractional abundance for electron excitation to be significant. Thus, HCN (and to a slightly lesser degree CN) appears to be an unusually sensitive probe of electron excitation (Table <ref>). Conditions favoring high X(e^-) can occur in low extinction regions such as diffuse and translucent clouds (<ref>), and the outer parts of almost any molecular cloud, especially in regions with enhanced UV flux (<ref>). Thus, the excitation in the HCN–emitting region may not necessarily be controlled by the high H_2 density generally assumed. The central regions of luminous galaxies often show enhanced HCN emission, which could be in part a result of electron excitation, although the explanation is not certain, with enhanced UV, X–rays, cosmic rays, and mechanical heating all having been proposed as responsible for increasing the abundance of HCN (<ref>). Accurate determination of the possible importance of electron excitation will depend on having much improved models of the chemistry and dynamics of these regions, including the effects of transient heating and enhanced transport due to turbulence (<ref>).
Significant additional theoretical work is therefore needed before a satisfying explanation can be given for the extended emission from molecules like HCN in low density environments <cit.>.

We thank Simon Glover for suggesting that we consider electron excitation in the outer parts of molecular clouds. We thank the anonymous reviewer for suggestions that significantly broadened and improved the present investigation. We appreciate Floris van der Tak's critical help in entering data into and using RADEX, and thank Franck LePetit for valuable information concerning the Meudon PDR code. The authors appreciate information and pointers received from Susanne Aalto about molecules in Active Galactic Nuclei. Alexandre Faure graciously provided the full HCO^+–electron deexcitation rate coefficients from unpublished work by Faure and Tennyson. We thank Bill Langer and Kostas Tassis for a number of suggestions that improved this paper. This research was conducted in part at the Jet Propulsion Laboratory, which is operated by the California Institute of Technology under contract with the National Aeronautics and Space Administration (NASA). 2016 California Institute of Technology.

§ SIMPLIFIED MODELS OF EXCITATION
In this Appendix we outline a very simplified model with which to give an idea of the relative importance of collisional excitation by electrons and by molecular hydrogen in the limit of low collision rates. We again adopt HCN as a representative high–dipole moment molecule. We adopt the rate coefficients for collisions with H_2 from <cit.>, understanding that while the results of <cit.> are quite similar, the situation could be quite different if a large fraction of the H_2 were in states having j ≥ 1 and the results of <cit.> discussed above obtain.

It is instructive to consider only a three level system (levels with rotational quantum numbers J = 0, 1, and 2) with no background radiation and optically thin transitions. The collision rates C_ij (s^-1) are equal to the collisional rate coefficients R_ij (cm^3 s^-1) multiplied by the density of colliding particles (electrons or H_2 molecules). In general we must consider upwards and downwards collisions, but in the limit of low excitation, with the spontaneous downwards radiative rate, A_ul, much larger than the corresponding downwards collision rate, C_ul, downwards collisions can be neglected. The rate equations for the densities of HCN, n_0, n_1, and n_2, in levels J = 0, 1, and 2, respectively, are

n_0 (C_01 + C_02) = n_1 A_10 ,

n_1 A_10 = n_0 C_01 + n_2 A_21 ,

and

n_2 A_21 = n_0 C_02 + n_1 C_12 ,

where C_lu denotes the upward collision rate from level l to level u. Equation <ref> gives us immediately the ratio

n_1/n_0 = (C_01 + C_02)/A_10 .

For more than three levels in the low excitation limit, it is appropriate to consider the total upwards collision rate out of J = 0 when analyzing the excitation of the J = 1 to J = 0 transition, as every such collisional excitation results in the emission of a J = 1 to J = 0 photon. We define an effective excitation rate

C^e_01(H_2) = C_01(H_2) + C_02(H_2)

for 3 levels, and

C^e_01(H_2) = Σ_{k=1}^{N} C_0k(H_2)

for N levels, since collisions with H_2 can result in |ΔJ| > 1.
We can express this as well in terms of effective rate coefficients, since we have only to divide by the density of collision partners; for the deexcitation rate coefficients we have R^e_10(H_2) = C^e_10(H_2)/n(H_2). From detailed balance, for collisions with any partner,

R^e_10 = R^e_01 (g_0/g_1) exp(T^*_10/T_k) ,

where the g's are the statistical weights, T^*_10 is the equivalent temperature of the J = 1 to J = 0 transition (ΔE/k_B = 4.25 K for HCN), and T_k is the kinetic temperature. Published calculations generally give the downwards rate coefficients, and the upwards rate coefficients must be calculated individually using detailed balance.

For collisions with H_2 at a kinetic temperature of 20 K, <cit.> give R_10 = 1.41×10^-11 cm^3 s^-1 and R_20 = 2.1×10^-11 cm^3 s^-1. R_30 is 100 times smaller than these rate coefficients, while R_40 = 2.7×10^-12 cm^3 s^-1 is marginally significant. From each of these we calculate the upwards rate coefficient, and the effective downward rate coefficient is R^e_10(H_2) = 3.8×10^-11 cm^3 s^-1, with R^e_01(H_2) = 9.2×10^-11 cm^3 s^-1. For electrons, since we consider only dipole–like collisions, we have C^e_10(e^-) = C_10(e^-) = R_10(e^-) n(e^-), where R_10(e^-) is just the value from Table <ref> at the appropriate kinetic temperature, which is 3.7×10^-6 cm^3 s^-1 at 20 K.

The critical density n_c is the density of colliding partners at which the downwards collision rate is equal to the spontaneous decay rate. This gives us for the J = 1–0 transition of HCN

n_c(e^-) = A_10/R^e_10(e^-) = 6.5 cm^-3

and

n_c(H_2) = A_10/R^e_10(H_2) = 6.5×10^5 cm^-3 .

Table <ref> gives values for other molecules.
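A short numerical sketch tying these Appendix numbers together (A_10 ≈ 2.4×10^-5 s^-1 is an assumed value implied by n_c(e^-) = 6.5 cm^-3, and is not quoted explicitly in the text):

    # Reproducing the Appendix numbers for HCN J=1-0 at T_k = 20 K.
    A_10 = 2.4e-5                                 # s^-1 (assumed, see lead-in)
    R10_e = 3.7e-6                                # cm^3 s^-1, electrons at 20 K
    R10_eff_h2 = 1.41e-11 + 2.1e-11 + 2.7e-12     # R_10 + R_20 + R_40 ~ 3.8e-11

    n_c_e = A_10 / R10_e                          # ~6.5 cm^-3
    n_c_h2 = A_10 / R10_eff_h2                    # ~6.5e5 cm^-3
    print(f"n_c(e-) = {n_c_e:.1f} cm^-3")
    print(f"n_c(H2) = {n_c_h2:.1e} cm^-3")
    print(f"X*(e-)  = {n_c_e / n_c_h2:.1e}")      # ~1e-5, as in Table 2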
http://arxiv.org/abs/1708.07553v1
{ "authors": [ "Paul F. Goldsmith", "Jens Kauffmann" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170824205728", "title": "Electron Excitation of High Dipole Moment Molecules Reexamined" }
§ INTRODUCTION
Among the many mathematical structures used in physics, quaternions are a famous option. Quaternions were introduced by Hamilton <cit.>. In general, a quaternion can be represented by

ϕ = ϕ_0 + ϕ_1 e_1 + ϕ_2 e_2 + ϕ_3 e_3 ,

where ϕ_l (l = 0,1,2,3) are real coefficients. There are three imaginary units in quaternions, which have the property

e_a e_b = -δ_ab + ε_abc e_c ,   (a,b,c = 1,2,3).

If we denote the imaginary units by i, j, k, we conclude from Eq. (<ref>) that

ij = -ji = k ,   jk = -kj = i ,   ki = -ik = j .

Eq. (<ref>) shows that in general the multiplication of two quaternions is non-commutative, i.e. qp ≠ pq. The idea of making use of quaternions in quantum mechanics has a long and shining history. Physicists and mathematicians have tried to found quantum mechanics on quaternions. Finkelstein et al. gave an interesting discussion of a new kind of quantum mechanics based on quaternions in <cit.>, and there are works on using real and complexified quaternions as the underlying mathematical structure which, by adopting a complex geometry, are able to present a compatible face of quantum mechanics <cit.>; many more valuable studies on this topic are mentioned in Ref. <cit.>. All of them tried to show that one can build a quantum mechanics based on quaternions. In the rest of this paper, we intend to present the quaternionic form of the Dirac equation. We want to extend our recent papers <cit.> to the relativistic regime as well as to study scattering by a double Dirac delta potential. This article is organized as follows: the quaternionic Dirac equation is presented in Sec. <ref>; the double Dirac delta potential is introduced and discussed in detail, along with its effects, in Sec. <ref>; and after some information about the probability current and the conservation law in Sec. <ref>, the conclusions appear at the end.

§ DIRAC EQUATION IN QUATERNIONIC QUANTUM MECHANICS
As in ordinary relativistic quantum mechanics, to investigate relativistic fermions we should make use of the Dirac equation. In quaternionic quantum mechanics there is also a Dirac equation, but it has some differences. The quaternionic version of the Dirac equation in the presence of vector and scalar potentials can be written with the help of <cit.> as

∂Ψ(r,t)/∂t = -[ α·∇ + β( i(m + S_a(r)) + jS_b(r) ) + iV_a(r) + jV_b(r) ] Ψ(r,t) ,

where in Eq. (<ref>) ħ = c = 1 and

α = ( [ 0 σ; σ 0 ] ) ,   β = ( [ 1 0; 0 -1 ] ) ,

where σ are the Pauli matrices. As is clear, the potentials have two parts. Their real parts are indicated by the subscript a (S_a(r), V_a(r) ∈ ℝ), and the complex parts carry the subscript b (S_b(r), V_b(r) ∈ ℂ). By setting S_b(r), V_b(r) → 0 we recover the well-known form of this equation <cit.>,

i ∂Ψ(r,t)/∂t = ( α·P + β(m + S_a(r)) + V_a(r) ) Ψ(r,t) .

Since we are interested in time-independent interactions, it is convenient to write the wave function as Ψ(r,t) = Φ(r) e^{-iEt}. Inserting Eq. (<ref>) into Eq. (<ref>), the time-independent form of the quaternionic Dirac equation is obtained:

Φ(r) iE = ( α·∇ + β( i(m + S_a(r)) + jS_b(r) ) + iV_a(r) + jV_b(r) ) Φ(r) .

Note that the coordinate part of the wave function is a quaternionic function and has components. It is usually written as

Φ(r) = ( [ Φ^+(r); Φ^-(r) ] ) ,

which is called the spinor form of the wave function.
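As a quick sanity check of this algebra, the multiplication rules of Eqs. (<ref>) and (<ref>) and the non-commutativity qp ≠ pq can be verified numerically in a 2×2 complex matrix representation of the quaternion units (a sketch; the representation choice 1 → I, e_a → −iσ_a is an assumption of this illustration, and any faithful representation works):

    import numpy as np

    # 2x2 complex representation of the quaternion units: 1 -> I, e_a -> -i*sigma_a.
    I2 = np.eye(2, dtype=complex)
    sig = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
           np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
           np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z
    i, j, k = (-1j * s for s in sig)

    # e_a e_b = -delta_ab + eps_abc e_c, e.g. i^2 = -1 and ij = -ji = k:
    assert np.allclose(i @ i, -I2) and np.allclose(j @ j, -I2)
    assert np.allclose(i @ j, k) and np.allclose(j @ i, -k)

    # Two generic quaternions q = q0 + q1 i + q2 j + q3 k do not commute:
    q = 1.0 * I2 + 2.0 * i + 0.5 * j - 1.0 * k
    p = 0.3 * I2 - 1.0 * i + 2.0 * j + 0.7 * k
    print("qp == pq ?", np.allclose(q @ p, p @ q))   # -> False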
If we set iS_a(r) + jS_b(r) = iV_a(r) + jV_b(r), i.e. equal scalar and vector potentials, some algebraic manipulation yields a coupled system of equations for the components:

Φ^+(x) iE = σ_x dΦ^-(x)/dx + ( im + 2(iS_a(x) + jS_b(x)) ) Φ^+(x) ,

Φ^-(x) iE = σ_x dΦ^+(x)/dx - im Φ^-(x) .

The quaternionic wave function components are of the form Φ^±(x) = φ^±_a(x) + j φ^±_b(x), where φ^±_a(x) and φ^±_b(x) are complex functions. Considering such a form of the wave function components in Eq. (<ref>) yields

φ^-_a(x) = [σ_x/(i(E+m))] dφ^+_a(x)/dx ,

φ^-_b(x) = [σ_x/(i(E-m))] dφ^+_b(x)/dx .

Substitution of Eqs. (<ref>) and (<ref>) into Eq. (<ref>) provides a system of coupled differential equations:

d²φ^+_a(x)/dx² + ( p² - 2(E+m)S_a(x) ) φ^+_a(x) - 2i(E+m) S^*_b(x) φ^+_b(x) = 0 ,

d²φ^+_b(x)/dx² + ( -p² - 2(E-m)S_a(x) ) φ^+_b(x) + 2i(E-m) S_b(x) φ^+_a(x) = 0 ,

where p² = E² - m² and * denotes complex conjugation. Now, we are in a position to investigate the scattering states due to the quaternionic Dirac delta potential.

§ QUATERNIONIC DOUBLE DIRAC DELTA AND SCATTERING
Here, we introduce a quaternionic form of the double Dirac delta interaction <cit.> as

S_a(x) = V_a ( δ(x - a_0) + δ(x + a_0) ) ,

S_b(x) = iV_b ( δ(x - a_0) + δ(x + a_0) ) .

We assume that V_a and V_b are real constants. A well-known property of Dirac delta interactions is that they produce discontinuity conditions for the derivative of the wave function. These conditions can be derived by integrating Eqs. (<ref>) and (<ref>) around x = a_0 and x = -a_0. The discontinuity conditions at x = a_0 are

dφ^+_a/dx |_{x=a_0^+} - dφ^+_a/dx |_{x=a_0^-} = 2(E+m) ( V_a φ^+_a(a_0) + V_b φ^+_b(a_0) ) ,

dφ^+_b/dx |_{x=a_0^+} - dφ^+_b/dx |_{x=a_0^-} = 2(E-m) ( V_a φ^+_a(a_0) + V_b φ^+_b(a_0) ) ,

while at x = -a_0 we have

dφ^+_a/dx |_{x=-a_0^+} - dφ^+_a/dx |_{x=-a_0^-} = 2(E+m) ( V_a φ^+_a(-a_0) + V_b φ^+_b(-a_0) ) ,

dφ^+_b/dx |_{x=-a_0^+} - dφ^+_b/dx |_{x=-a_0^-} = 2(E-m) ( V_a φ^+_a(-a_0) + V_b φ^+_b(-a_0) ) .

Accordingly, we should divide our problem into three regions:

Region I: x < -a_0 ,   Region II: -a_0 < x < a_0 ,   Region III: x > a_0 .

For free particles we have

φ^+_a(x) = c_1 e^{ipx} + c_2 e^{-ipx} ,   φ^+_b(x) = c_3 e^{px} + c_4 e^{-px} ,

where the coefficients are complex constants in general. Therefore, according to our assumptions about the particles, the physical wave functions can be written as

φ^+_I(x) = e^{ipx} + r e^{-ipx} + j r̃ e^{px} ,

Φ^+_II(x) = c_1 e^{ipx} + c_2 e^{-ipx} + j ( c_3 e^{px} + c_4 e^{-px} ) ,

φ^+_III(x) = t e^{ipx} + j t̃ e^{-px} .

In order to obtain explicit expressions for the coefficients in Eqs. (<ref>), (<ref>) and (<ref>), we should match the wave functions at x = a_0 and x = -a_0, which yields

x = a_0:
c_1 e^{i a_0 p} + c_2 e^{-i a_0 p} = t e^{i a_0 p} ,
c_3 e^{a_0 p} + c_4 e^{-a_0 p} = t̃ e^{-a_0 p} ,

x = -a_0:
r e^{i a_0 p} + e^{-i a_0 p} = c_1 e^{-i a_0 p} + c_2 e^{i a_0 p} ,
r̃ e^{-a_0 p} = c_3 e^{-a_0 p} + c_4 e^{a_0 p} ,

and four more equations are derived by applying the discontinuity conditions:

x = a_0:
-i c_1 p e^{i a_0 p} + i c_2 p e^{-i a_0 p} + i p t e^{i a_0 p} = 2(E+m) ( V_b t̃ e^{-a_0 p} + V_a t e^{i a_0 p} ) ,
-p t̃ e^{-a_0 p} - c_3 p e^{a_0 p} + c_4 p e^{-a_0 p} = 2(E-m) ( V_b t̃ e^{-a_0 p} + V_a t e^{i a_0 p} ) ,

x = -a_0:
i c_1 p e^{-i a_0 p} - i c_2 p e^{i a_0 p} + i p r e^{i a_0 p} - i p e^{-i a_0 p} = 2(E+m) ( V_b r̃ e^{-a_0 p} + V_a ( r e^{i a_0 p} + e^{-i a_0 p} ) ) ,
-p r̃ e^{-a_0 p} + c_3 p e^{-a_0 p} - c_4 p e^{a_0 p} = 2(E-m) ( V_b r̃ e^{-a_0 p} + V_a ( r e^{i a_0 p} + e^{-i a_0 p} ) ) ,

so that we have eight equations for the eight undetermined coefficients.
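Because the closed-form expressions are unwieldy, a numerical solution of this 8×8 linear system is the practical route. A minimal numpy sketch, assuming the coefficient ordering (r, r̃, t, t̃, c_1, c_2, c_3, c_4) and the parameter values m = a_0 = V_a = V_b = 1 used for the figures (the function name is ours, for illustration):

    import numpy as np

    def scattering_coefficients(E, m=1.0, a0=1.0, Va=1.0, Vb=1.0):
        """Solve the eight matching equations for (r, rt, t, tt, c1..c4)."""
        p = np.sqrt(E**2 - m**2)
        ep, em = np.exp(1j * a0 * p), np.exp(-1j * a0 * p)   # e^{+-i a0 p}
        f, g = np.exp(a0 * p), np.exp(-a0 * p)               # e^{+-a0 p}
        wp, wm = 2 * (E + m), 2 * (E - m)
        M = np.array([
            # columns: r, rt, t, tt, c1, c2, c3, c4
            [0, 0, -ep, 0, ep, em, 0, 0],                    # phi_a continuous, +a0
            [0, 0, 0, -g, 0, 0, f, g],                       # phi_b continuous, +a0
            [ep, 0, 0, 0, -em, -ep, 0, 0],                   # phi_a continuous, -a0
            [0, g, 0, 0, 0, 0, -g, -f],                      # phi_b continuous, -a0
            [0, 0, 1j*p*ep - wp*Va*ep, -wp*Vb*g, -1j*p*ep, 1j*p*em, 0, 0],
            [0, 0, -wm*Va*ep, -p*g - wm*Vb*g, 0, 0, -p*f, p*g],
            [1j*p*ep - wp*Va*ep, -wp*Vb*g, 0, 0, 1j*p*em, -1j*p*ep, 0, 0],
            [-wm*Va*ep, -p*g - wm*Vb*g, 0, 0, 0, 0, p*g, -p*f],
        ], dtype=complex)
        b = np.array([0, 0, -em, 0, 0, 0, 1j*p*em + wp*Va*em, wm*Va*em],
                     dtype=complex)
        return np.linalg.solve(M, b)

    r, rt, t, tt, *_ = scattering_coefficients(E=2.0)
    # Conservation check of the next section; expected to be ~1:
    print(f"|r|^2 + |t|^2 = {abs(r)**2 + abs(t)**2:.6f}")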
By solving these equations the explicit form of each coefficient can be determined; however, the resulting expressions are too lengthy to reproduce here.

§ PROBABILITY CURRENT AND CONSERVATION LAW

In quaternionic quantum mechanics, similar to the complex version, we have the continuity equation

∂ρ/∂t + ∇.J = 0, where ρ = Ψ̅Ψ, J = Ψ^†αΨ.

To check the conservation law of probability we need to calculate the current in each region. For this we need the spinor form of the wave function in each region, which can be obtained using Eqs. (<ref>) and (<ref>) as

Φ_I(x) = [ e^{ipx} + r e^{-ipx} + j r̃ e^{px}; σ_x p ( (e^{ipx} - r e^{-ipx})/(E+m) + j r̃ e^{px}/(i(E-m)) ) ],

Φ_III(x) = [ t e^{ipx} + j t̃ e^{-px}; σ_x p ( t e^{ipx}/(E+m) - j t̃ e^{-px}/(i(E-m)) ) ],

where Φ_III denotes the transmitted wave in Region III. It is straightforward to show, using the definition J_x = Ψ^†α_xΨ, that the coefficients obey the constraint

|r|² + |t|² = 1,

which expresses the conservation law of probability, Eq. (<ref>). This relation has been plotted in Fig. <ref> for m = a_0 = V_a = V_b = 1 and E ∈ [1,4]; as shown there, Eq. (<ref>) is satisfied. It is instructive to examine the effects of the potential coefficients and of the distance between the Dirac deltas on the reflection and transmission coefficients. Figs. <ref> and <ref> show the effects of the potential coefficients. In Fig. <ref> we see that with increasing V_a the reflection and transmission coefficients become sharper, whereas Fig. <ref> presents a different picture: as V_b grows, the reflection and transmission coefficients vary more smoothly than in Fig. <ref>. In Fig. <ref>, keeping the potential coefficients and the energy fixed, we change the distance between the two Dirac delta functions. It can be seen that with increasing distance the number of fluctuations in the reflection and transmission coefficients increases.

§ CONCLUSIONS

In this paper we presented the quaternionic version of the Dirac equation in the presence of vector and scalar potentials. To check the correctness of this form of the equation, we considered the special case without a quaternionic potential, and the result was as expected. After introducing the scattering potential we investigated its effects: it gives rise to discontinuity conditions for the derivative of the wave function. The probability current for each region of the problem was determined, and the conservation law of probability was derived. It was shown how different parameters of the scattering potential affect the reflection and transmission coefficients: increasing the first part of the potential makes the reflection and transmission coefficients sharper, while the second part of the potential acts in the opposite way, i.e. decreasing the second part of the potential yields sharp reflection and transmission coefficients. Finally, the effect of the distance between the two Dirac deltas was shown: the number of fluctuations of the reflection and transmission coefficients increases as the distance between the double Dirac deltas is enlarged.

§ APPENDIX

In this section the details of the derivation of Eq. (<ref>) are presented. To keep the calculation brief, we indicate the steps in compact form. As a first step we derive the probability current in Region I.
Using the definition of the probability current we have

J_I = Ψ^†α_xΨ = ( e^{-ipx} + r^∗ e^{ipx} - e^{px} r̃^∗ j, σ_x p ( (e^{-ipx} - r^∗ e^{ipx})/(E+m) + r̃^∗ e^{px}/(i(E-m)) j )) [ 0 σ_x; σ_x 0 ] [ e^{ipx} + r e^{-ipx} + j r̃ e^{px}; σ_x p ( (e^{ipx} - r e^{-ipx})/(E+m) + j r̃ e^{px}/(i(E-m)) ) ].

It should be noted that the spinor wave function is a quaternion, and since the coefficients are complex constants, the ordering of the coefficients and the imaginary unit j is important. Hence the daggered form of the spinor appears with reversed ordering in Eq. (<ref>). Carrying out the matrix multiplication and labelling the terms as

A_1 = e^{-ipx} + r^∗ e^{ipx}, A_2 = - e^{px} r̃^∗ j,
A_3 = (e^{-ipx} - r^∗ e^{ipx})/(E+m), A_4 = r̃^∗ e^{px}/(i(E-m)) j,
B_1 = (e^{ipx} - r e^{-ipx})/(E+m), B_2 = j r̃ e^{px}/(i(E-m)),
B_3 = e^{ipx} + r e^{-ipx}, B_4 = j r̃ e^{px},

we obtain, using σ_x² = 1,

J_I = A_1 B_1 + A_1 B_2 + A_2 B_1 + A_2 B_2 + A_3 B_3 + A_3 B_4 + A_4 B_3 + A_4 B_4,

where each shorthand A_i B_j includes the appropriate prefactors. To keep the complexity of the multiplication under control, the individual contributions read:

A_1 B_1 = p/(E+m) ( 1 - r e^{-2ipx} + r^∗ e^{2ipx} - |r|² ),
A_1 B_2 = j p/(i(E-m)) ( r̃ e^{(1+i)px} + r r̃ e^{(1-i)px} ),
A_2 B_1 = j p/(E+m) ( -r̃ e^{(1+i)px} + r r̃ e^{(1-i)px} ),
A_2 B_2 = p/(i(E-m)) |r̃|² e^{2px},
A_3 B_3 = p/(E+m) ( 1 + r e^{-2ipx} - r^∗ e^{2ipx} - |r|² ),
A_3 B_4 = j p/(E+m) ( r̃ e^{(1+i)px} - r r̃ e^{(1-i)px} ),
A_4 B_3 = -j p/(i(E-m)) ( r̃ e^{(1+i)px} + r r̃ e^{(1-i)px} ),
A_4 B_4 = -p/(i(E-m)) |r̃|² e^{2px}.

With the help of Eqs. (<ref>)-(<ref>), we find the probability current in Region I as

J_I = 2p/(E+m) (1 - |r|²).

In the same manner, we find the probability current in Region III as

J_III = 2p/(E+m) |t|².

Since there is no sink or source for the particles, we have J_I = J_III ⇒ |r|² + |t|² = 1.

[1] W. R. Hamilton, Elements of Quaternions, New York: Chelsea (1969).
[2] W. R. Hamilton, The Mathematical Papers of Sir William Rowan Hamilton, Cambridge: Cambridge University Press (1967).
[3] A. A. Albert, Ann. of Math. 43 (1942) 161.
[4] B. A. Rosenfeld, A History of Non-Euclidean Geometry, Springer-Verlag (1988).
[5] K. Carmody, App. Math. Comp. 84 (1) (1997) 27.
[6] D. Finkelstein, J. M. Jauch, S. Schiminovich and D. Speiser, J. Math. Phys. 3 (1962) 207; 4 (1963) 788.
[7] D. Finkelstein, J. M. Jauch and D. Speiser, J. Math. Phys. 4 (1963) 136.
[8] J. Rembieliński, J. Phys. A 11 (1978) 2323.
[9] L. P. Horwitz and L. C. Biedenharn, Ann. Phys. 157 (1984) 432.
[10] S. De Leo and G. Ducati, J. Math. Phys. 42 (2001) 2236.
[11] A. J. Davies and B. H. McKellar, Phys. Rev. A 40 (1989) 4209.
[12] A. J. Davies and B. H. McKellar, Phys. Rev. A 46 (1992) 3671.
[13] S. De Leo, G. Ducati and C. Nishi, J. Phys. A 35 (2002) 5411.
[14] A. Peres, Phys. Rev. Lett. 42 (1979) 683.
[15] H. Kaiser, E. A. George and S. A. Werner, Phys. Rev. A 29 (1984) 2276.
[16] A. G. Klein, Physica B 151 (1988) 44.
[18] P. R. Girard, Eur. J. Phys. 5 (1984) 25.
[19] K. Shoemake, Comput. Graph. 19 (1985) 245.
[20] S. Altmann, Rotations, Quaternions, and Double Groups, Clarendon, Oxford (1986).
[21] M. Gogberashvili, Eur. Phys. J. C 74 (2014) 3200.
[22] H. Sobhani and H. Hassanabadi, Can. J. Phys. 94 (2016) 262.
[epjc] H. Sobhani, H. Hassanabadi and W. S. Chung, Eur. Phys. J. C 77 (2017) 425.
[indian] H. Sobhani and H. Hassanabadi, Indian J. Phys. 91 (10) (2017) 1205.
[23] S. De Leo, G. Ducati and S. Giardino, J. Phys. Math. 6 (2015) 1000130.
[25] H. Sobhani and H. Hassanabadi, Commun. Theor. Phys. 64 (2015) 263.
[26] H. Sobhani and H. Hassanabadi, Commun. Theor. Phys. 65 (2016) 543.
http://arxiv.org/abs/1709.09941v1
{ "authors": [ "Hassan Hassanabadi", "Hadi Sobhani", "Won Sang Chung" ], "categories": [ "quant-ph", "hep-th", "math-ph", "math.MP" ], "primary_category": "quant-ph", "published": "20170826035549", "title": "Scattering Study of Fermions Due to Double Dirac Delta Potential in Quaternionic Relativistic Quantum Mechanics" }
Objective Classes for Micro-Facial Expression Recognition

Adrian K. Davison^1, Walied Merghani^2 and Moi Hoon Yap^3

^1Centre for Imaging Sciences, University of Manchester, Manchester, United Kingdom
^2Sudan University of Science and Technology, Khartoum, Sudan
^3School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, United Kingdom

Keywords: computer vision, pattern recognition, feature descriptor

Corresponding author: Moi Hoon Yap ([email protected])

Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS-coded datasets: CASME II and SAMM. The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.

§ INTRODUCTION

A micro-facial expression is revealed when someone attempts to conceal their true emotion <cit.>. When they consciously realise that a facial expression is occurring, the person may try to suppress the facial expression because showing the emotion may not be appropriate <cit.>. Once the suppression has occurred, the person may mask over the original facial expression and cause a micro-facial expression. In a high-stakes environment, these expressions tend to become more likely as there is more risk to showing the emotion.

The duration of a micro-expression is very short and is considered the main feature that distinguishes it from a normal facial expression <cit.>, with the general standard being a duration of no more than 500 ms <cit.>. Other definitions of duration that have been studied show micro-expressions to last less than 200 ms (defined by Ekman and Friesen <cit.>, the first to describe a micro-expression), 250 ms <cit.>, less than 330 ms <cit.>, and less than half a second <cit.>.

Micro-facial expression analysis is less established and harder to implement because micro-expressions are less distinct than normal facial expressions. Feature representations, such as Local Binary Patterns (LBP) <cit.>, Histogram of Oriented Gradients (HOG) <cit.> and Histograms of Oriented Optical Flow (HOOF) <cit.>, are commonly used to describe micro-expressions. Although micro-facial expression analysis is very difficult, its popularity has grown in recent years due to the potential applications in security and interrogations <cit.>, healthcare <cit.>, and automatic detection in real-world applications, where the detection accuracy of humans peaks around 40% <cit.>.

Generally, the process of recognising normal facial expressions involves preprocessing, feature extraction and classification. Micro-expression recognition is no exception, but the features extracted should be more descriptive due to the small movements in micro-expressions compared with normal expressions.
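To make the feature-extraction step concrete, the sketch below (our illustration, not code from the paper) computes a basic LBP-TOP descriptor for a grey-scale image sequence: an 8-neighbour, radius-1 LBP code is histogrammed on each XY, XT and YT slice of the video volume, and the three histograms are concatenated. The parameter choices here (radius 1, 8 neighbours, a single block) are illustrative only; the experiments later in the paper use different radii and neighbour counts together with a 5×5 block grid.

```python
import numpy as np

def lbp_codes(plane):
    """8-neighbour, radius-1 LBP codes for the interior pixels of a 2D array."""
    c = plane[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = plane[1 + dy : plane.shape[0] - 1 + dy,
                      1 + dx : plane.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit
    return codes

def lbp_top(video):
    """Concatenate 256-bin LBP histograms from the XY, XT and YT planes."""
    T, H, W = video.shape
    hists = []
    for planes in ([video[t] for t in range(T)],          # XY planes
                   [video[:, y, :] for y in range(H)],    # XT planes
                   [video[:, :, x] for x in range(W)]):   # YT planes
        h = np.zeros(256)
        for p in planes:
            h += np.bincount(lbp_codes(p).ravel(), minlength=256)
        hists.append(h / h.sum())                         # normalise per plane
    return np.concatenate(hists)                          # 768-dim feature

# toy example: a random 20-frame, 32x32 clip stands in for a real sample
clip = np.random.default_rng(1).integers(0, 256, (20, 32, 32))
feature = lbp_top(clip)
print(feature.shape)   # (768,)
```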
One of the biggest problems faced by research in this area is the lack of publicly available datasets, on which the success of facial expression recognition <cit.> research has largely relied. Gradually, datasets of spontaneously induced micro-expressions have been developed <cit.>, but earlier research was centred around posed datasets <cit.>. Eliciting spontaneous micro-expressions is a real challenge because it can be very difficult to induce the emotions in participants and also to get them to conceal those emotions effectively in a lab-controlled environment. Micro-expression datasets need reliable ground truth labelling with Action Units (AUs) using the Facial Action Coding System (FACS) <cit.>. FACS objectively assigns AUs to the muscle movements of the face. If any classification of movements takes place for micro-facial expressions, it should be done with AUs and not only emotions. Emotion classification requires the context of the situation for an interpreter to make a meaningful interpretation. Most spontaneous micro-expression datasets have FACS ground truth labels and estimated or predicted emotions, annotated by an expert and supplemented by self-reports written by participants. We contend that using AUs to classify micro-expressions gives more accurate results than using predicted emotion categories. By organising the AUs of the two most recent FACS-coded state-of-the-art datasets, CASME II <cit.> and SAMM <cit.>, into objective classes, we ensure that the learning methods train on specific muscle movement patterns and therefore increase accuracy. Yan et al. <cit.> also state that it is inappropriate to categorise micro-expressions into emotion categories, and that FACS AU research should inform the eventual emotional classification.

To date, experiments on micro-expression recognition using categories based purely on AU movements have not been completed. Additionally, the SAMM dataset was designed for micro-movement analysis rather than recognition. We contribute by completing recognition experiments on the SAMM dataset for the first time with three features previously used for micro-expression analysis: LBP-TOP <cit.>, HOOF <cit.> and HOG 3D <cit.>. Further, the proposed objective classes could inform future research on the importance of objectifying movements of the face.

The remainder of this paper is divided into the following sections: Section 2 discusses the background of two of the FACS-coded state-of-the-art datasets developed for micro-expression analysis and the related work in micro-expression recognition; Section 3 describes the methodology; Section 4 presents the results and discusses the effects of applying objective classification to a micro-expression recognition task; Section 5 concludes this paper and discusses future work.

§ BACKGROUND

This section describes the two datasets used in the experiments of this paper; a comparative summary of the datasets can be seen in Table <ref>. Previously developed micro-expression recognition systems are also discussed, covering established features used to represent each micro-expression.

§.§ CASME II

CASME II was developed by Yan et al. <cit.> and refers to the Chinese Academy of Sciences Micro-expression Database II, which was preceded by CASME <cit.> with major improvements. All samples in CASME II are spontaneous and dynamic micro-expressions recorded at a high frame rate (200 fps). A few frames are kept before and after each micro-expression, making the data suitable for detection experiments.
The recording resolution of the samples is 640×480 pixels, saved as MJPEG, with a resolution of about 280×340 pixels for the cropped facial area. The participants' facial expressions were elicited in a well-controlled laboratory environment. The dataset contains 255 micro-expressions (gathered from 35 participants), which were selected from nearly 3000 facial movements and have been labelled with AUs based on FACS. Only 247 movements were used in the original experiments on CASME II <cit.>. The inter-coder reliability of the FACS codes within the dataset is 0.846. Flickering light was avoided in the recordings and highlights on regions of the face were reduced. However, there were some limitations: firstly, the materials used for eliciting micro-expressions are video episodes, which can have different meanings to different people; for example, eating worms may not always disgust someone. Secondly, micro-expressions are elicited under one specific lab situation, and there was some difficulty in eliciting certain types of facial expressions, such as sadness, in laboratory conditions.

When analysing the FACS codes of the CASME II dataset, it was found that there are many conflicts between the coded AUs and the estimated emotions. These inconsistencies do not help when attempting to train distinct machine learning classes, and add further justification for the proposed introduction of new classes based on AUs only. For example, Subject 11, with the micro-expression clip filename 'EP19_03f', was coded as AU4 in the 'others' estimated emotion category (shown in Fig. <ref>). However, Subject 26, with the micro-expression clip filename 'EP18_50', was also coded with AU4 but in the 'disgust' estimated emotion category (shown in Fig. <ref>). As can be seen in the apex frame (centre image) of both Fig. <ref> and Fig. <ref>, AU4, the lowering of the brow, is present. Having the same movement in different categories is likely to have an effect on any training stage of machine learning.

§.§ SAMM

The Spontaneous Actions and Micro-Movements (SAMM) <cit.> dataset is the first high-resolution dataset of 159 micro-movements induced spontaneously with the largest variability in demographics. To obtain a wide variety of emotional responses, the dataset was created to be as diverse as possible. A total of 32 participants were recruited for the experiment, with a mean age of 33.24 years (SD: 11.32, ages between 19 and 57) and an even gender split of 16 male and 16 female participants. The inter-coder reliability of the FACS codes within the dataset is 0.82, calculated using a slightly modified version of the inter-reliability formula found in the FACS Investigator's Guide <cit.> to account for three coders rather than two.

The inducement procedure was based on the 7 basic emotions <cit.> and recorded at 200 fps. As part of the experimental design, each video stimulus was tailored to each participant, rather than obtaining self-reports during or after the experiment. This allowed particular videos to be chosen and shown to participants for optimal inducement potential. The experiment comprised 7 stimuli used to induce emotion in the participants, who were told to suppress their emotions so that micro-facial movements might occur. To increase the chance of this happening, a prize of £50 was offered to the participant who could hide their emotion the best, thereby introducing a high-stakes situation <cit.>.
Each participant completed a questionnaire prior to the experiment so that the stimuli could be tailored to each individual to increase the chances of emotional arousal.

The SAMM dataset was originally designed to investigate micro-facial movements by analysing muscle movements of the face rather than recognising distinct classes <cit.>. We are the first to categorise SAMM based on the FACS AUs and then use these categories for micro-facial expression recognition.

§.§ Related Work

Currently, there are three features on which many micro-expression recognition approaches rely: Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG) and Histogram of Oriented Optical Flow (HOOF). We discuss methods that use these features in recent work on micro-expression recognition, along with other important micro-expression research.

As an extension to the original Local Binary Pattern (LBP) <cit.> operator, Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) was proposed by Zhao et al. <cit.> and demonstrated to be effective for dynamic texture and facial expression analysis in the spatial-temporal domain. A video sequence of time length T can be thought of as a stack of XY planes along the time axis, but also in terms of three orthogonal planes, XY, XT and YT, which provide information about space and time transitions. The basic idea of LBP-TOP is similar to LBP, the difference being that LBP-TOP extracts features from all three planes, which are then combined into a single feature vector.

Yan et al. <cit.> carried out the first micro-expression recognition experiment on the CASME II dataset. LBP-TOP <cit.> was used to extract the features and a Support Vector Machine (SVM) <cit.> was employed as the classifier. The radii varied from 1 to 4 for X and Y, and from 2 to 4 for T (T=1 was not considered due to the small change between two neighbouring frames at 200 fps), with classification occurring between the five main emotion categories provided in this experiment (happiness, disgust, surprise, repression and others).

Davison et al. <cit.> used the LBP-TOP feature to differentiate between movements and neutral sequences, attempting to avoid bias when classifying with an SVM. The performance of <cit.> on recognising micro-expressions in 5 classes with LBP-TOP feature extraction achieved a best result of 63.41% accuracy, using leave-one-out cross-validation. This result is average for recent micro-expression recognition research, and is likely due to the way micro-expressions are categorised. Of the 5 classes in the CASME II dataset, 102 movements were classed as 'others', which denotes movements not suited to the other categories but related to emotion. The next highest category was 'disgust' with 60 movements, showing that the 'others' class made the categorisation imbalanced. Further, the categorisation was not based solely on AUs, due to micro-expressions being short in duration and low in intensity, but also on the participants' self-reporting. By classifying micro-expressions in this way, features are unlikely to exhibit a pattern and therefore perform poorly during the recognition stage, as can be seen in other performance results. For example, in <cit.>, the highest result is 63.41%, which is still relatively low.

More recently, LBP-TOP was used as a base feature for micro-expression recognition with integral projection <cit.>.
These representations attempt to improve discrimination between micro-expression classes and therefore improve recognition rates.

Polikovsky et al. <cit.> used a 3D gradient histogram descriptor (HOG 3D) to recognise posed micro-facial expressions from high-speed videos. The paper used manually marked-up areas that are relevant to FACS-based movement so that unnecessary parts of the face are left out. This does mean that the method of classifying movement in these subjectively selected areas is time-consuming and would not suit a real-time application like interrogation. The spatio-temporal domain is explored, highlighting the importance of the temporal plane in micro-expressions; the bin selection for the XY plane is 8, while the XT and YT planes have been set to 12 bins. The number of bins selected represents the different directions of movement in each plane.

For HOOF-based methods, a Main Direction Mean Optical Flow (MDMO) feature was proposed by Liu et al. <cit.> for micro-facial expression recognition using an SVM as the classifier. The method also uses 36 regions, partitioned using 66 facial points, to isolate local areas for analysis while keeping the feature vector small for computational efficiency. The best result on the CASME II dataset was 67.37% using leave-one-subject-out cross-validation.

The basic HOOF descriptor was also used by Li et al. <cit.> as a comparative feature when spotting micro-expressions and then performing recognition. Theirs is the first automatic micro-expression system that can spot and recognise micro-expressions from spontaneous video data at a level comparable to human performance.

Using Robust Principal Component Analysis (RPCA) <cit.>, Wang et al. <cit.> extract the sparse information from micro-expression data, and then use Local Spatiotemporal Directional Features, based on LBP-TOP's dynamic features, to extract the subtle motion on the face from 16 ROIs of local importance to facial expression motion.

A novel colour space model named Tensor Independent Color Space (TICS) was created to help recognise micro-expressions <cit.>. By extracting the LBP-TOP features of independent colour components, micro-expression clips can be better recognised than in RGB space.

Huang et al. <cit.> proposed Spatio-Temporal Completed Local Quantization Patterns (STCLQP), which extract the sign, magnitude and orientation of the micro-expression data; an efficient vector quantization and codebook selection are then developed in both the appearance and temporal domains to generalise classical pattern types. Finally, using the developed codebooks, spatio-temporal features of the sign, magnitude and orientation components are extracted and fused, with experiments run on SMIC, CASME and CASME II.

By exploiting the sparsity in the spatial and temporal domains of micro-expressions, a Sparse Tensor Canonical Correlation Analysis was proposed for micro-expression characteristics <cit.>. This method reduces the dimensionality of micro-expression data and enhances LBP coding to find a subspace that maximises the correlation between micro-expression data and their corresponding LBP codes.

Liong et al. <cit.> investigate the use of only two frames from a micro-expression clip: the onset and the apex frame.
By using only these two frames, good accuracy is achieved with the proposed Bi-Weighted Oriented Optical Flow feature, which encodes the expressiveness of the apex frame.

As micro-movements on the face are heavily affected by the global movements of a person's head, Xu et al. <cit.> propose a Facial Dynamics Map to distinguish between what is a micro-expression and what would be classed as a non-micro-expression. The facial surface movement between adjacent frames is estimated using optical flow. The movements are then extracted in a coarse-to-fine manner, indicating different levels of facial dynamics; this step is used to differentiate micro-movements from anything else. Finally, an SVM is used for both identification and categorisation.

Wang et al. <cit.> recently proposed a Main Directional Maximal Difference (MDMD) method that uses the maximal magnitude difference in the main direction of optical flow features to find when facial movements occur. These movements can be used for both micro-expressions and macro-expressions to find the onset, apex and offset of a movement within the context of each examined clip.

§ METHODOLOGY

To overcome the conflicting classes in CASME II, we restructure the classes around the AUs that have been FACS coded. Using EMFACS <cit.>, a list of AUs and combinations is proposed for a fair categorisation of the SAMM <cit.> and CASME II <cit.> datasets. Categorising in this way removes the bias of human reporting and relies on the ground truth movement data, feature representation and recognition technique for each micro-expression clip. Table <ref> shows the 7 classes and the corresponding AUs that have been assigned to each class. Classes I-VI are linked with happiness, surprise, anger, disgust, sadness and fear. Class VII relates to contempt and other AUs that have no emotional link in EMFACS <cit.>. It should be noted that the classes do not directly correspond to these emotions; however, the links used are informed by previous research <cit.>. Each movement in both datasets was classified based on the AU categories of Table <ref>, with the resulting frequency of movements shown in Table <ref>.

Micro-expression recognition experiments are run on two datasets: CASME II and SAMM. For this experiment, three types of feature representations are extracted from a sequence of grey images representing the micro-expressions. These image sequences are divided into 5×5 non-overlapping blocks. The LBP-TOP <cit.> radii parameters for X, Y and T are set to 1, 1 and 4 respectively, with the number of neighbours in all three planes set to 4. The HOG 3D <cit.> and HOOF <cit.> features are set to the parameters described in the original implementations. Sequential Minimal Optimization (SMO) <cit.> is used in the classification phase with 10-fold cross-validation and leave-one-subject-out (LOSO) to classify between the I-V, I-VI and I-VII classes. SMO is a fast algorithm for training SVMs, and provides a solution to the very large quadratic programming (QP) problems required to train SVMs. SMO avoids time-consuming QP calculations by breaking them down into smaller pieces, allowing the classification task to be completed much faster than with traditional SVM training <cit.>. A sketch of this evaluation protocol is given below.
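The following minimal Python sketch (ours, not code from the paper) illustrates the protocol just described, assuming feature vectors have already been extracted. An SVM whose underlying solver is SMO-based (as in libsvm, which scikit-learn wraps) is evaluated with both 10-fold cross-validation and LOSO, where LOSO is expressed by grouping clips by subject. All data here are random placeholders standing in for the real feature matrices.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

# Hypothetical pre-extracted features: one row per micro-expression clip.
# X: (n_clips, n_features), e.g. concatenated per-block LBP-TOP histograms;
# y: class labels I-V encoded 0..4; groups: subject IDs used for LOSO.
rng = np.random.default_rng(0)
X = rng.random((255, 5 * 5 * 3 * 16))       # placeholder feature matrix
y = rng.integers(0, 5, size=255)            # placeholder class labels
groups = rng.integers(0, 26, size=255)      # placeholder subject IDs

clf = SVC(kernel="linear")                  # libsvm's solver is SMO-based

# 10-fold cross-validation
acc_10fold = cross_val_score(clf, X, y, cv=10).mean()

# Leave-one-subject-out: all clips of one subject form each test fold
logo = LeaveOneGroupOut()
acc_loso = cross_val_score(clf, X, y, groups=groups, cv=logo).mean()
print(f"10-fold: {acc_10fold:.3f}  LOSO: {acc_loso:.3f}")
```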
§ RESULTS

Evidence to support the proposed AU-based categories can be seen in the confusion matrix in Fig. <ref>. A high proportion of micro-expressions were classified as 'others'; for example, 28.95% of the 'happiness' and 28.57% of the 'disgust' categories are classified as 'others', respectively. The originally chosen emotions, including the many movements placed in the 'others' category, lead to considerable conflict at the recognition stage. It should be noted that the CASME II dataset <cit.> included self-reporting, which adds another layer of complexity during classification.

The classification results for the proposed classes I-V using LBP-TOP can be seen in the confusion matrix in Fig. <ref>. In contrast, the classification rates are more stable and outperform the original classes overall. The results are by no means perfect; however, they show that the most logical direction is to use objective classes based on AUs rather than estimated emotion categories. Further investigation using an objective selection of FACS-based regions <cit.> supports this, with AUC results for detecting relevant movements of 0.7512 and 0.7261 on SAMM and CASME II, respectively.

Table <ref> shows the experimental results on CASME II, with each result metric being a weighted average to account for the imbalanced numbers within classes. Each experiment was completed for each feature, both within the original classes defined in <cit.> and the proposed classes. Both the 10-fold cross-validation and leave-one-subject-out (LOSO) results are shown. The top-performing feature achieves a weighted accuracy score of 86.35% for the HOG 3D feature in the proposed classes I-V. This shows a large improvement over the original classes, which achieved 80.93% for the same feature. Using LOSO, the results were comparable with the original classes; the highest accuracy was 76.60%, from the HOOF feature in the proposed I-VII classes. For the CASME II results using LBP-TOP and 10-fold cross-validation, the original classes outperformed the proposed classes I-VI and I-VII. In addition, for HOG 3D with LOSO, the original classes outperform the proposed class I-VII when using F-measure as the metric.

The experiments were then repeated under the same conditions for SAMM, and the results can be seen in Table <ref>. Overall the recognition rates were good for SAMM, with the best result achieving an accuracy of 81.93% using LBP-TOP in the I-VI classes for 10-fold cross-validation. The best result using LOSO was from the HOG 3D feature in the proposed I-VII classes, achieving 63.93%; however, due to the lower number of micro-expressions within the SAMM dataset compared with CASME II, the LOSO results were lower.

Some results show that, using LOSO, HOOF performs best on CASME II while HOG 3D performs best on SAMM; on CASME II with LOSO, the HOOF feature also achieves a higher accuracy for classes I-VII than for I-VI, but not for the F-measure metric. The explanation comes down to the data, and how large the variations in settings such as resolution and capture methods are. The imbalance of the data, specifically the low amount of micro-expression data, can skew LOSO results through small test and training sets. This shows how LOSO evaluation for micro-expression recognition is difficult to quantify with a fair amount of significance. Further collection of spontaneous micro-expression data is required to rectify this.

§ CONCLUSION

We have shown that by restructuring micro-expression classes objectively around AUs, recognition results outperform the state-of-the-art, emotion-based classification approaches.
As micro-expressions are so subtle, it is best to categorise them as objectively as possible, and using AU codes is the most logical way to do so. Categorising using a combination of AUs and self-reports <cit.> can cause many conflicts when training a machine learning method. Further, dataset imbalances can be very detrimental to machine learning algorithms, and this is further emphasised by the relatively low number of movements in both datasets.

Future work will look into the effect of using more modern features with AU classification to improve recognition accuracy. This could include the MDMO feature <cit.>, the local wrinkle feature <cit.> and the feature extraction methods described by Wang et al. <cit.>.

Further work can also be done to improve micro-facial expression datasets. Firstly, creating more datasets or expanding previous ones would be a simple improvement that can help move the research forward faster. Secondly, a standard procedure on how to maximise the number of micro-movements induced spontaneously in laboratory-controlled experiments would be beneficial. If collaboration between established datasets and researchers from psychology occurred, dataset creation would be more consistent.

Deep learning has emerged as a new area of machine learning research <cit.>, and micro-expression analysis has yet to exploit this trend. Unfortunately, the amount of high-quality spontaneous micro-expression data is low, and deep learning requires a large amount of data to work well <cit.>. Many video-based datasets previously used have over 10,000 video samples <cit.>, or even over 1 million actions extracted from YouTube videos <cit.>. A real effort to gather spontaneous micro-expression data is required for deep learning approaches to be effective in the future.

Data available from: http://www2.docm.mmu.ac.uk/STAFF/m.yap/dataset.php

A. K. Davison carried out the design of the study, the re-classification of the Action Units grouping and drafted the manuscript (these tasks were completed while A. K. Davison was at Manchester Metropolitan University). W. Merghani conducted the experiments, analysed the data and drafted the manuscript. M. H. Yap designed the study, developed the theory, assisted development and testing and edited the manuscript. All the authors have read and approved this version of the manuscript.

We declare we have no competing interests.

This work was completed at Manchester Metropolitan University on a "Future Research Leaders Programme" awarded to M. H. Yap. M. H. Yap is a Royal Society Industry Fellow.
http://arxiv.org/abs/1708.07549v2
{ "authors": [ "Adrian K. Davison", "Walied Merghani", "Moi Hoon Yap" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170824203710", "title": "Objective Classes for Micro-Facial Expression Recognition" }
http://arxiv.org/abs/1708.08029v1
{ "authors": [ "Siyao Xu", "Bing Zhang" ], "categories": [ "astro-ph.HE", "hep-th", "physics.plasm-ph" ], "primary_category": "astro-ph.HE", "published": "20170827005254", "title": "Adiabatic non-resonant acceleration in magnetic turbulence and hard spectra of gamma-ray bursts" }
CHAPTER: FROM STRANGENESS ENHANCEMENT TO QUARK-GLUON PLASMA DISCOVERY^*

Peter Koch^1, Berndt Müller^2 and Johann Rafelski^3

^1Bitfabrik GmbH & Co. KG, D-63110 Rodgau, Germany
^2Department of Physics, Duke University, Durham, NC 27708, USA
^3Department of Physics, The University of Arizona, Tucson, AZ 85721, USA

This is a short survey of signatures and characteristics of the quark-gluon plasma in the light of experimental results that have been obtained over the past three decades. In particular, we present an in-depth discussion of the strangeness observable, including a chronology of the experimental effort to detect QGP at CERN-SPS, BNL-RHIC, and CERN-LHC.

^*Dedicated to our mentor Walter Greiner; to be published in the memorial volume edited by Peter O. Hess.

§ INTRODUCTION

Just fifteen years after the coincident creation in 1964 of two great ideas governing the strong interactions, quarks and the Hagedorn temperature, these two concepts merged, giving birth to a new discipline, the physics of the novel fifth state of matter, the quark-gluon plasma (QGP). Today there is consensus that QGP filled the cosmos during the first 20 μs after the Big Bang. For three decades laboratory experiments at the European Center for Particle Physics (CERN) and Brookhaven National Laboratory (BNL) have been exploring this primordial phase of matter by colliding nuclei at relativistic energies.

As the ideas about QGP formation in nuclear collisions matured, a practical challenge emerged: How can the locally color-deconfined QGP state be distinguished from a gas of confined hadrons? In the period 1979-86 the strangeness signature of QGP was developed for this purpose, with our 1986 review <cit.> in essence completing the theoretical foundations. In a review of 1995 one of us (BM) presented a compendium of possible QGP signatures <cit.>.

During this period and in the following years many scientists were wondering whether QGP could be detected as a matter of principle. Could it be that a quark-gluon based description is merely a change of the Hilbert space basis, a unitary change between quark and hadron bases? If so, maybe there would be nothing to be discovered! A globally color-singlet fireball composed of quarks and gluons and several Fermi in diameter is, in principle, a hadron, i.e., a strongly interacting object. Today it is also understood that for an infinite QCD system there is no discontinuity in the equation of state of baryon-symmetric QCD matter. But the key to understanding why the QGP is a physically meaningful and observable concept is to ask the following question: Do hadronic states exist in nature containing many more than three quarks which cannot be factorized into color-singlet components, each containing a few quarks? Even if the complete Fock space of hadrons includes an extended QGP, such a state is distinct from states, such as ordinary atomic nuclei, in which color-singlet hadrons containing few quarks propagate across arbitrary distances while colored quarks and gluons are confined inside such locally color-singlet hadrons. BM presented these thoughts at the Quark Matter 1991 conference in Gatlinburg <cit.>. The ensuing heated discussions reverberate in retrospect, as they lie at the core of the lingering doubts about the observability of QGP formation in nuclear collisions that remained widespread even among experts for many years.
As we discuss below, this misunderstanding was, in part, exacerbated by the fact that many observables that were proposed as QGP signatures are not sensitive to this defining property of the QGP, i.e., local color deconfinement.

The discussions at Quark Matter 1991 are recalled here as a reminder of the odds against which the experimental efforts aimed at discovering the QGP had to struggle, and to explain why the discovery of QGP, which had in fact occurred at the time of that conference, (a) was not recognized as valid by one of the discovering experimental groups; (b) needed to wait nine more years to be announced by the CERN Laboratory where the experiments were being performed; (c) had to wait fourteen additional years before a competing laboratory, BNL, concurred after an intense intellectual struggle; and (d) is still a sometimes disputed discovery a quarter century later. We describe below the pivotal CERN-SPS experiments that, to the apparent disbelief of some of the involved scientists (see Subsect. <ref>), created the QGP phase of matter (Subsect. <ref>). In the following years these results found their confirmation in the Pb+Pb collision program at the CERN-SPS (Subsect. <ref>), leading to the CERN February 2000 QGP announcement based on two campaigns of experiments and many refereed and published articles. The QGP discovery was confirmed five years later by experiments at the BNL Relativistic Heavy Ion Collider (RHIC) employing new, independent probes that helped establish a broad public consensus (Sect. <ref>), and later by further experiments at CERN, both at SPS and LHC (Subsect. <ref>).

The strangeness signature, which enabled the first clear observation of the QGP, was originally conceived and proposed by one of us (JR) at CERN and later developed to full maturity by us, initially in Walter Greiner's Institute. Walter was at that time among the vocal skeptics of our work and of QGP research more generally. However, his broad principled opposition to the subject provided additional inspiration for our work, which continued at the University of Cape Town. In hindsight, it is puzzling why Walter originally considered all QGP research with such skepticism, because he generally loved innovative, even exotic physics. He became, for example, a strong advocate of searches for stable or meta-stable multi-strange cold quark drops called strangelets. Whatever the reasons may have been, in later years Walter's institute, with his strong support, became the preeminent German center for theoretical QGP physics, and today QGP is a core component of the research program of the Frankfurt Institute for Advanced Study (FIAS), which he co-founded.

§ STRANGENESS: THE PIVOTAL QGP SIGNATURE

The existence and observability of a new phase of elementary matter, the QGP, must be demonstrated by experiment. This requires identification of probes of QGP that are:

* operational on the collision time scale of 10^-23 s;
* sensitive to the local color charge deconfinement allowing color charges to diffuse freely throughout the matter;
* dependent on the gluon degree of freedom, which is the characteristic new dynamical degree of freedom.

The heaviest of the three light quark flavors, strangeness, emerged in 1980-82 as the pivotal signature of QGP satisfying these three conditions. When color bonds are broken, the chemically equilibrated deconfined state contains an unusually high abundance of strange quark pairs <cit.>.
This statistical argument was soon complemented by a study of the dynamics of the strangeness (chemical) equilibration process. We found that it is predominantly the gluon component in the QGP that produces strange quark pairs rapidly, and just on the required time scale <cit.>. Our work thus connected strangeness enhancement to the presence of gluons in the QGP. The high density of strangeness at the time of QGP hadronization was a natural source of multi-strange hadrons <cit.>, if hadronization proceeded predominantly by the coalescence of pre-existing quarks and antiquarks <cit.>.

By Spring 1986 we had developed a detailed model and presented predictions showing how the high density and the mobility of already produced strange and antistrange quarks in the fireball favor the formation of multi-strange hadrons during hadronization <cit.>. We also showed that these particles are produced quite rarely if only individual hadrons collide. We presented a detailed discussion of how a fireball of deconfined quarks turns into strangeness-carrying hadrons and showed that multi-strange antibaryons are the most characteristic signatures of the QGP nature of the source. By distinguishing the relative chemical equilibrium from the absolute yields of quark pairs we introduced what today is called the statistical hadronization model, which allows us to measure the chemical properties of the hadron source <cit.>. Though the production of final-state hadrons characterizes the conditions in the QGP fireball at the time of its breakup (hadronization), the total strangeness yield provides, in situations where chemical equilibrium in the QGP fireball is hard to achieve, additional information about conditions arising in the first instants of the reaction. In this sense strangeness alone, when studied in depth, can provide a wealth of insights about the formation and evolution of the QGP fireball. As we discuss below, there are other observables available to explore the properties of the early-stage QGP.

Considering all produced hadronic particles, it is possible to evaluate the properties of the dense matter fireball. A fireball of QGP that expands and breaks apart should do this in a manner that does not remember in great detail the mechanisms that led to the formation of the thermal fireball. Indeed, one of the important findings emerging from studies of the hadronization process is that the hadron chemical freeze-out conditions are universal <cit.>. This universality is further consistent with the sudden hadronization mechanism we first studied 30 years ago <cit.>.

The strangeness observable was and is experimentally popular since strange hadrons are produced abundantly and can be measured over a large kinematic domain. Therefore, a large body of experimental results is available today. All of these results are consistent with hadronic particle production occurring from a dense source in which the deconfined strange quarks are already created before hadrons are formed. These (anti-)strange quarks are free to move around or diffuse through the QGP and are readily available to form hadrons. Once one has confirmed that a QGP was formed, other observables can be interpreted on that basis. However, few, if any, other QGP observables probe the characteristic nature of the source in a way that would uniquely pinpoint a QGP with local color deconfinement at the time of hadron formation.
Let us discuss a few examples:

* Fluid dynamics: The fireball is recognized to consist of matter well described by hydrodynamical simulations <cit.>, which implies the fireball is comprised of a liquid with near-minimal specific viscosity <cit.>. It is natural to associate this result with a fluid composed of relativistic, strongly interacting particles, quarks and gluons, but it does not by itself signal that the fluid is a QGP. Indeed, it is still not entirely clear how the QGP acquires its nearly perfect liquid nature at thermal length scales.

* Jet quenching is observed in a clear and convincing way in nuclear collisions at sufficiently high energy <cit.>. Arguably, this property signals the formation of a fireball endowed with a high density of color fields, which impedes the escape of high energy particles (Subsect. <ref>).

* Quarkonium production can occur in primary collisions <cit.> and via recombinant charm hadronization <cit.>. Heavy quarkonia, like jet emission, can be suppressed <cit.> in interaction with dense matter. The yield is therefore determined by the interplay of at least three different mechanisms, and the contributing processes need to be modeled in detail. Today, the increased charmonium yield at the LHC caused by cc̅ recombination, similar to the enhanced formation of multi-strange baryons observed over a wide energy range, is often considered the most convincing quarkonium signature of QGP formation (Subsect. <ref>).

* Electromagnetic signals: Photons and dileptons are the most penetrating probes, promising insights into the initial dynamics of QGP formation and evolution <cit.>. These observables will without doubt come of age in the future high-luminosity RHIC and LHC runs. Today we can use them in a semi-quantitative manner to estimate, e.g., the initial temperature of the fireball (Subsect. <ref>).

In the next Section we briefly recapitulate the properties and evolution of a thermal QGP fireball formed in nuclear collisions, followed by a review of the chronology of the strangeness signature providing evidence for QGP formation in Section <ref>. There we describe the initial CERN-SPS research program, the first series of experiments, and the more mature Pb+Pb collision experiments that reconfirmed the early observations and led to the February 2000 CERN announcement of the discovery of the QGP, as well as more recent strangeness developments. The QGP discovery announcement by the RHIC community relied on other observables and is described in Section <ref>.

§ QUARK-GLUON PLASMA

§.§ Evolution of fireball in time

In laboratory experiments involving collisions of large nuclei at relativistic energies, several (nearly) independent reaction steps occur and ultimately lead to hadron production:

* Formation of the primary fireball: a momentum-equipartitioned partonic phase comprising, in a compressed space-time domain, most of the final-state entropy;
* The cooking of the energy content of the hot matter fireball towards chemical (flavor) equilibrium in a hot QGP phase;
* Emergence of transient massive quarks due to spontaneous chiral symmetry breaking and disappearance of free gluons; from this point on the fireball cannot remain in chemical equilibrium if entropy, energy, baryon number, and strangeness are to be conserved;
* Hadronization, that is, the coalescence of effective and strongly interacting up, down, and strange quarks and antiquarks into the final-state hadrons, with the coalescence probability weighted by the accessible phase space (a numerical illustration of such statistical hadronization yields follows below).
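As an added illustration of the last step (not part of the original text), the following Python sketch evaluates relative hadron yields from a thermal source in the Boltzmann approximation, N_i ∝ g_i m_i² T K_2(m_i/T), weighted by a strangeness phase-space occupancy γ_s per strange valence quark; chemical potentials are set to unity for simplicity, so the numbers are for orientation only. It shows how a strangeness-rich source (γ_s near or above 1) boosts the doubly strange Ξ⁻ yield relative to Λ, in the spirit of the multi-strange (anti)baryon signature discussed in this chapter.

```python
import numpy as np
from scipy.special import kn  # modified Bessel function K_n

T = 0.160                          # hadronization temperature in GeV (illustrative)
m_lambda, m_xi = 1.1157, 1.3217    # Lambda(uds) and Xi^-(dss) masses in GeV
g = 2.0                            # both spin-1/2, so degeneracies cancel in the ratio

def boltzmann_yield(m, n_strange, gamma_s):
    """Relative thermal yield ~ g m^2 T K2(m/T) * gamma_s^n_strange.

    n_strange counts strange valence quarks; baryochemical and strangeness
    fugacities are omitted (set to 1) in this simplified sketch."""
    return g * m**2 * T * kn(2, m / T) * gamma_s**n_strange

for gamma_s in (0.5, 1.0, 1.6):
    ratio = (boltzmann_yield(m_xi, 2, gamma_s)
             / boltzmann_yield(m_lambda, 1, gamma_s))
    print(f"gamma_s = {gamma_s:3.1f}:  Xi-/Lambda ~ {ratio:.3f}")
```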
The hadronization process can be subjected to detailed experimental study, resulting in the determination of the physical properties and statistical parameters that govern the process. We discuss this below.

Experimental information about the maximum temperatures reached in heavy ion collisions can be derived from the spectrum of radiated photons in the energy range E_T = 1-3 GeV, where direct photon emission is dominated by thermal radiation. The analysis of the measured spectrum in terms of the thermal properties of the fireball is somewhat model dependent, since the observed yield spans the entire collision history. In practice, the hydrodynamical simulations of the time evolution set a lower limit on the initial temperature. In Au+Au collisions at RHIC (√(s_NN) = 200 GeV), this initial temperature exceeds 300 MeV; in Pb+Pb collisions at the LHC (√(s_NN) = 2.76 TeV) the initial temperature is at least 450 MeV. This increase is in good agreement with the observed scaling of the particle multiplicity from RHIC to LHC, which indicates a substantial increase in the fireball entropy content.

§.§ QCD Matter in heavy ion collisions

Lattice gauge theory has made impressive progress on the calculation of static thermodynamic properties of baryon-symmetric QCD matter. The equation of state for physical quark masses is now known with precision for μ_B ≲ μ_B,cr. The quasi-critical temperature, where susceptibilities related to chiral symmetry peak, has been determined in lattice QCD simulations <cit.>: T_c(μ_B=0) ≈ 150 MeV. A critical point is expected in the μ_B domain that can be explored in experiments carried out at SPS and RHIC-BES, as is illustrated in the left-hand part of Fig. QGPhase, which shows the approximate range of T explored in RHIC and LHC experiments together with the corresponding energy density versus temperature curve at μ_B=0 calculated in lattice QCD.

On the right in Fig. QGPhase we show how these results stack up against hadron chemical freeze-out points obtained across 20 years; chemical freeze-out is where the yields of produced hadrons cease to change, and hence these points must lie below the quasi-critical temperature. We see two lines describing the results of statistical hadronization model final-state analyses that span the range of μ_B fitted to the data in SPS, RHIC and LHC experiments. In between these two lies the lattice QCD critical temperature obtained and reconfirmed in the past 5 years. From these results it emerges clearly that the chemical non-equilibrium description of hadron production by a QGP is the only model compatible with the current understanding of strongly interacting matter. Note that the curve marked SHARE assumes, as in our 1986 work, relative chemical equilibrium, but allows freedom in the yields of all quark pairs. For further details see Ref. <cit.>.

§.§ Properties of QCD matter

It is worthwhile asking which intrinsic properties of the quark-gluon plasma we can hope to determine experimentally, and from which observables. A non-exhaustive list, seen already in the 1992 presentation <cit.>, includes:

* The equation of state of the matter, given by relations among the components of the energy-momentum tensor T_μν at equilibrium; its temperature dependence is reflected in the spectra of emitted particles. Lattice QCD is able to compute these quantities reliably.
The analysis of chemical freeze-out conditions provides the pressure P, energy density ϵ and entropy density σ <cit.>.

* Transport coefficients of the quark-gluon plasma, especially the shear viscosity η, the coefficient q̂ governing the transverse momentum diffusion of a fast parton (often called the jet quenching parameter), the coefficient of linear energy loss ê, and the diffusion coefficient κ of a heavy quark, are related to the final-state flow pattern and the energy loss of fast partons that initiate jets. Lattice gauge theory presently cannot reliably calculate these dynamical quantities.

* The static color screening length λ_D (the inverse Debye mass m_D) governs the dissolution of bound states of heavy quarks in the quark-gluon plasma. This static quantity can be reliably calculated on the lattice.

* The electromagnetic response function of the quark-gluon plasma is reflected in the emission of thermal photons and lepton pairs. This dynamical quantity is difficult to calculate on the lattice, but moderate progress has been made recently.

All but the last of these properties are microscopically related to correlation functions of the gauge field. This implies that the associated experimental observables are mostly sensitive to the gluon structure of the quark-gluon plasma. On the other hand, much more is known theoretically from lattice simulations about the quark structure of hot QCD matter, because it is much easier to construct operators from quark fields that can be reliably calculated. In this respect, lattice calculations and heavy ion experiments are to a certain degree complementary.

The presence of jets in heavy ion collisions at LHC and RHIC tells us that at high virtuality Q² or high momenta p, the QGP is weakly coupled and has quasi-particle structure. On the other hand, the collective flow properties of the matter produced in the collisions tell us that at thermal momentum scales the quark-gluon plasma is strongly coupled. At which Q² or p does the transition between strong and weak coupling occur? Does the quark-gluon plasma still contain quasi-particles at the thermal scale? Which observables (jets?) can help us pinpoint where the transition occurs?

Leaving aside these theoretical contemplations, we now turn to the empirical observable, the strangeness content of the QGP fireball. The reason this is of interest is the undisputed observation that the yield of strange quark pairs is noticeably greater in nuclear collisions compared to typical high energy elementary particle reactions. How this happens was addressed in our 1986 review <cit.>. In the following we focus on what happens next: how this medium-heavy flavor hadronizes, what type of particles are produced, and what this tells us about the physics of the source, so that our discussion can concentrate on the physics of the fireball source as observed through the eye of strangeness.

§ PROBING QCD MATTER WITH STRANGENESS

§.§ CERN-SPS experiments: Overview

Fig. CERNhadrons shows the time line of those CERN experiments conceived in the early-to-mid 1980s that contributed to the observation of strangeness by means of the study of emitted hadrons. Note that CERN accelerated oxygen (^16O) ion beams at 200A GeV in 1986, followed within a year by sulfur (^32S) beams at the same energy, and in the mid-1990s by lead (^208Pb) beams at 158A GeV. First results from Pb+Pb collisions were reported in 1996; this is indicated on the left in Fig. CERNhadrons.
We will address below the pivotal results obtained by the two experimental series WA85-NA57 and NA35-NA49.

The WA85 experiment started a bit later than the other Day-1 experiments, and it is instructive to understand the reasons for this delay: The initial experimental proposal was directed at our strange antibaryon signature of QGP; however, the CERN-SPS advisory committee was influenced by arguments of senior theorists that the strange antibaryon enhancement could not be observed since, when QGP converts into hadrons, the multi-strange antibaryon (Ξ̅ and Ω, Ω̅) signature would be erased by annihilation in the baryon-rich environment produced in collisions at the SPS energy. As we know today, the assumptions we made in our work are, in fact, realized in nature. The QGP fireball explodes so rapidly after hadronization occurs that many hadrons are produced free-streaming into the vacuum, and chemical reactions involving strangeness quickly fall out of equilibrium during the explosive expansion. Strange antibaryon annihilation does not prevail. This can easily be seen by comparing baryon and antibaryon p_⊥-spectra, as we show below. Further evidence for this sudden hadronization process, as it became known, is that the yields of hadrons follow the predicted pattern imposed by the entropy, baryon number, and strangeness content of the QGP fireball, which means that the hadron yields follow a scenario today called chemical non-equilibrium.

Despite this criticism the experiment WA85 was eventually approved in Fall 1986, (a) because a way was found to modify the CERN Ω-spectrometer to observe less exotic strange particles, including kaons (s̅q), (q̅s), and singly strange antihyperons Λ̅(q̅q̅s̅); and (b) on the strength of the arguments presented in our 1986 published work <cit.>.

§.§ CERN-NA35 Experiment

The CERN-NA35 experiment was an extension of the LBNL-GSI collaboration at the BEVALAC, with LBNL's Howell Pugh being the main force for strangeness. Howell was a member of both the NA35 and the NA36 experiments; however, NA36, though highly advanced, had instrumental difficulties, whereas NA35 relied on well-established technology. Initially the objective of NA35 was the exploration of the equation of state of dense nuclear matter, a direct continuation of the effort carried out at the BEVALAC by most of those involved in the experiment. The fade-out of the strangeness-focused NA36 presented the NA35 experimental program with the opportunity to expand into the strangeness signature of QGP. Writing in 2000, Grażyna Odyniec <cit.> of LBNL commented: "From the very beginning Howell [Pugh], with firmness and clarity, advocated the study of strange baryon and antibaryon production. He played a leading role in launching two of the major CERN heavy-ion experiments: NA35 and NA36, the latter being exclusively dedicated to measurements of hyperons. Strangeness enhancement predicted by theorists was discovered by NA35 and reported at Quark Matter 1988."

The NA35 results were presented in their extended and final form in 1990. The published article <cit.> stated in its abstract: "Significant enhancement of the multiplicities of all observed strange particles relative to negative hadrons was observed in central S+S collisions, as compared to p+p and p+S collisions." In the concluding section the authors commented: "Thus our observation ... appears to be consistent with a dynamical evolution that passes through a deconfinement stage." Yet the article refrained from making a discovery claim by continuing: "However, ...
this may not be the only explanation because the possible pre-equilibrium aspects of the early interpenetration stage, or even the conceivable overall off-equilibrium nature of the entire dynamics, may also lead to enhanced strangeness production, even without plasma formation. In retrospect, this statement appears puzzling, because it had already been shown five years earlier <cit.> that the strangeness production and evolution within the hadronic gas phase could not chemically equilibrate strangeness. In their Quark Matter 1990 proceedings report <cit.> the NA35 Collaboration distanced itself even further from a possible QGP discovery claim by stating: We have demonstrated a two-fold increase in the relative s + s̅ concentration in central S-S collisions, both as reflected in the K/π ratio and in the hyperon multiplicities. A final explanation in terms of reaction dynamics has not been given as of yet.
Clearly, the NA35 collaboration missed a unique opportunity to claim first observation of our QGP strangeness signature by refraining from interpreting their experimental results in these terms. In the following years, the publications of NA35 strangeness results regularly side-stepped any explicit mention of QGP formation as an explanation of their observations. It is tempting to speculate why this occurred: A motivation may have been that the idea of QGP formation in collisions of mid-sized nuclei in the SPS energy range ran counter to the views of many members of the high energy physics community, some being members of NA35, who thought that only RHIC with its ten-fold higher collision energy would be capable of creating a QGP. We leave further discussion of this question to future historians of science.
§.§ CERN-WA85 Experiment
Against this background the late-arriving CERN Ω-spectrometer experiment WA85, under the leadership of Emanuele Quercigh, took center stage of the QGP search at the SPS with published results on Λ and Λ̅ <cit.> and Ξ^-, Ξ̄^+ <cit.>, and a systematic exploration of the parametric dependence of both observables, showing characteristics of a QGP <cit.>. In contrast to NA35, the WA85 Collaboration took a much firmer position, citing evidence in favor of QGP formation: The(se) results indicate that our Ξ^- production rate, relative to Λ̅, is enhanced with respect to pp interactions; this result is difficult to explain in terms of non-QGP models [11] or QGP models with complete hadronization dynamics [12]. We note, however, that sudden hadronization from QGP near equilibrium could reproduce this enhancement [2]. Ref. [2] is an analysis <cit.> of these WA85 results within the nascent Statistical Hadronization Model, published in March 1991 by one of us (JR). In this work strange baryon and antibaryon particle production data for S–W collisions were used to determine the chemical properties of the particle source, the chemical potentials and phase space occupancy. In the abstract of this analysis we read: Experimental results on strange anti-baryon production in nuclear S–W collisions at 200 A GeV are described in terms of a simple model of an explosively disintegrating quark-gluon plasma (QGP). The summary closes with: We have presented here a method and provided a wealth of detailed predictions, which may be employed to study the evidence for the QGP origin of high p_⊥ strange baryons and anti-baryons. The WA85 paper <cit.> cited above echoed this point of view, leading the WA85 collaboration in early 1991 to claim the QGP discovery.
From today's vantage point, we can say that with this 1990/91 analysis method and the WA85 results and claims of the period, the QGP had been discovered; however, the discovery was not universally accepted by the broader physics community, not even by the majority of the community involved in experiments at the CERN-SPS. For a more extensive discussion of the WA85 results and the arguments behind the discovery claim we refer to the popular review by the spokesman of the WA85 Collaboration, Emanuele Quercigh, prepared with JR <cit.>.
§.§ The path to the CERN QGP discovery announcement
Strangeness results continued to be published by these two CERN experimental groups, NA35/NA49 (evolving into NA61) and WA85/WA94/WA97 (evolving into NA57), in the ensuing decade through the early 21st century. The new experiment NA49, which replaced NA35, was by then coming into operation. When CERN announced in early February 2000 the discovery of a new phase of matter, this event followed a decade of experimental work with dozens of refereed papers published showing agreement of the QGP strangeness signature with our QGP-based predictions. The NA35 Collaboration presented the ratio Λ̅/p̅ ∼ 1.4 measured near mid-rapidity in Summer 1995 <cit.>, showing a three-to-fivefold enhancement, dependent on the collision system, as compared to measurements in more elementary reactions. This was the QGP signature of the first strangeness papers in 1980 <cit.>. Due to the shift of the central rapidity for asymmetric collisions, the decrease in this ratio as the asymmetry increases is in agreement with theoretical expectations; these results are shown on the right in Fig. <ref>.
The WA85/94 collaboration focused on multi-strange baryon and antibaryon ratios; for Ξ/Λ see the 1993 review of David Evans <cit.>. A full summary of all results is contained in the review of Federico Antinori of 1997 <cit.> and shown in Fig. <ref>, with data referring to the WA85/94 reports presented at the Quark Matter 1995 conference <cit.>. Seeing these developments one could not help being convinced that the then forthcoming Pb+Pb experiments at CERN would confirm and cement the strangeness-based QGP discovery before the end of the old millennium. This was indeed the case as we see, for example, in the retrospective summary of Fig. <ref>: in this 2013 strangeness enhancement review by the CERN-ALICE collaboration we see that the Pb+Pb CERN-SPS antihyperon enhancement at √(s_NN)=17.2 GeV (equivalent to 158 A GeV) reaches the value 20 for the triply strange Ω+Ω̄ baryons (open triangles). This is the most dramatic medium modification result ever recorded in nuclear collisions. Despite these impressive (anti-)hyperon results available at the time of the CERN QGP announcement <cit.> in February 2000, the announcement was based on a consensus presentation of all heavy-ion SPS-CERN physics results. There was a strategic problem, since some of these experiments had results that had little to do with the QGP discovery itself or were statistically marginal. With the consensus strategy CERN opened its QGP discovery up to criticism, where both the interpretation of the data and the statistical validity of some of the results could be questioned. This problem spilled over to the strangeness signature, which should have been uncontested on both grounds. If CERN management was not willing to base its claim of QGP discovery on the impressive observations of strangeness enhancement alone, there must exist a scientific reason for this lack of confidence!
Would an announcement of the QGP discovery have been more compelling if it had been based solely on the strangeness signature? While it is tempting to think so, it probably was not practical, because a very sizable fraction of the physics community questioned then, and has continued to question, the relevance of strangeness as a signature of the QGP, in spite of the fact that strangeness enhancement is the sole QGP signature for which every single prediction has been quantitatively confirmed experimentally and no comprehensive alternative explanation has been given.
In summary, we firmly believe that CERN had a justifiable case for the discovery of QGP with SPS results addressing strangeness and multi-strange antihyperon production. However, the lack of consensus within the community precluded this simple approach, and the global all-experiments claim advanced by CERN was not convincing to outside observers. It took another experimental program, that of the Relativistic Heavy Ion Collider commencing in 2000, with a broader range of accessible observables, to create a wider consensus, leading up to a second announcement of the QGP discovery in 2005.
§.§ Developments at CERN after 2000
The NA49 collaboration, evolving into NA61, has since the CERN QGP discovery announcement focused its main objective on the search for a beam-energy threshold of the previously observed phenomena. In a systematic experimental study of the K^+/π^+ ratio <cit.> as a function of energy the so-called Marek's Horn was discovered, named after Marek Gazdzicki, now spokesman of the NA61 collaboration. The Statistical Hadronization Model analysis of this feature is seen on the left in Fig. <ref>, where the K^+/π^+ ratio for both experiment and the theoretical fit, adapted from Ref. <cit.>, is shown. The question that we need to answer is: what does the horn signal? To this end we look at the right-hand panel of Fig. <ref>, showing the ratio of the thermal fireball energy to the number of strange quark pairs, E/s. We clearly see that at the location of the horn-like feature the energy cost of producing a pair of strange quarks is leveling off at a low value, signaling the onset of a new, more energy-efficient production mechanism. This E/s curve in the right panel of Fig. <ref> is derived from the fit to the observed particle yields, which reproduces the horn but needs further interpretation. Considering our prior extensive work showing that quark-based processes are less effective than gluon-mediated processes in producing strange quark pairs, we conclude that in the knee of E/s the gluon degrees of freedom must have been fully activated. The conclusion is that QGP has been formed at collision energies above √(s_NN)=7 GeV. Below this threshold we see a transition domain where, with decreasing energy, more and more fireball energy is required to make strange quark pairs. Future experiments (the second RHIC Beam Energy Scan will probe down to √(s_NN)=3 GeV) will tell if this rise saturates at lower collision energies.
Another interesting result, combining RHIC and SPS data, is that the ratio Ξ(ssq)/ϕ(ss̅) of those two different double-strange particles is an energy-independent constant (see Fig. <ref>). This observation decisively resolves the discussion about canonical strangeness suppression, the volume-dependent suppression caused by overall net strangeness conservation. The data provide a clear connection of multi-strange particle production to the total strangeness content, but not to the net strangeness content, which is zero for the ϕ.
This rules out canonical suppression as a viable explanation of the multiplicity dependence of the strangeness enhancement seen in Figs. <ref>, <ref>, and <ref>. Moreover, the universal value of the ratio signals that, irrespective of how the fireball is formed, there is no significant final-state alteration of the yields of these double-strange particles. For further analysis of the implications of these data we refer to Petran's work <cit.>.
Returning now to the LHC contribution to QGP physics, the decrease of the enhancement effect with increasing collision energy seen in Fig. <ref> signals that there is an increase in strange antihyperon production in the control pp or pA collisions. This effect has been nicely demonstrated by the ALICE-LHC collaboration, which recently published <cit.> the enhancement over minimum bias pp yields as a function of the charged hadron multiplicity dN_ch/dη, combining AA results <cit.> with pA <cit.> and new pp results, as shown on the left in Fig. <ref>. The pp and pA results merge into the AA results when compared for the same multiplicity at central rapidity. We see a smooth increase with dN_ch/dη, and in most cases a yield saturation at large hadron multiplicity, indicating that a QGP in internal thermal and chemical equilibrium is achieved as the volume of the fireball grows. Such an equilibrated QGP fireball hadronizes into an out-of-chemical-equilibrium hadron abundance, an important insight we discussed already in our 1986 review <cit.>. For collisions in which strangeness is chemically saturated in the QGP, the predominant quantity governing the hadron yields at hadronization is the fireball hadronization volume.
A potential spoiler in Fig. <ref> is the behavior of Ξ in AA, which seems to be on the low end of the pA results, an effect of less than two standard deviations but somewhat disturbing the otherwise consistent pattern. Today this has attracted additional attention since the ALICE collaboration presented (preliminary) results at the highest LHC energy, √(s_NN)=5.02 TeV, at the SQM2017 conference. These new √(s_NN)=5.02 TeV Ξ results differ significantly from those at √(s_NN)=2.76 TeV, as shown in the right panel of Fig. <ref>. The remarkable, yet (for us) expected outcome is that the results at √(s_NN)=5.02 TeV are where we expected the data from √(s_NN)=2.76 TeV to be, in agreement with the findings for other strange hadrons (results available, but not shown in Fig. <ref>). The ALICE Collaboration announced at SQM2017 the intent to review the analysis of the Ξ data from the 2012/13 √(s_NN)=2.76 TeV runs — the trends of the pp and pA results suggest that the revised yields will agree with the new √(s_NN)=5.02 TeV results.
Let us now assume that the corrected √(s_NN)=2.76 TeV antihyperon yields will indeed track the still preliminary √(s_NN)=5.02 TeV data. This would demonstrate that the strangeness signature of QGP is driven by the global properties of the thermal fireball, in particular its volume and/or lifetime, and not by the collision system or the collision energy. This is so since the hadronization condition of the QGP fireball is known to be universal across a wide range of collision energies and centralities <cit.>. We conclude that the CERN-SPS results have shown (a) the onset of deconfinement near √(s_NN)=7 GeV, and (b) that the systematics of multi-strange particle production eliminates alternative enhancement mechanisms, as borne out by the Ξ/ϕ ratio.
The LHC data indicate that at the thousandfold higher LHC energy a QGP can already be formed in high-multiplicity pp and pA reactions. Across a wide range of accessible collision energies at SPS, RHIC, and LHC, strangeness in the QGP saturates, and the fireball hadronizes in nearly identical fashion at universal physical conditions, with the final geometric size determining the total particle yields. The formation of the thermal QGP fireball depends on the entropy content (hadron multiplicity) and not on how this state has been produced, the pp, pA and AA collisions being equivalent.
§ THE BNL-RHIC CONTRIBUTION TO THE QGP DISCOVERY
§.§ Flow dynamics of QCD matter
The standard model of the dynamics of a relativistic heavy ion collision begins with a very brief period of kinetic equilibration – most likely less than 1 fm/c. After that, the space-time evolution of the QGP can be described by relativistic viscous hydrodynamics. Hydrodynamics is the effective theory of the transport of energy and momentum in matter on long distance and time scales. In order to be applicable to the description of the QGP created in heavy-ion collisions, which forms tiny, short-lived droplets of femtometer size, the hydrodynamic equations must be relativistic and include the effects of (shear) viscosity. The causal relativistic theory of a viscous fluid has been worked out over the past decade. It is based on the framework of the Müller-Israel-Stewart formulation of second-order hydrodynamics, which includes relaxation effects for the dissipative part of the stress tensor. Schematically, the equations have the form ∂_μ T^μν = 0 with
T^μν = (ε + P) u^μ u^ν - P g^μν + Π^μν ,
τ_π (dΠ^μν/dτ) + Π^μν = η (∂^μ u^ν + ∂^ν u^μ - trace) .
It turns out that the ratio of the shear viscosity η to the entropy density S, jointly with the equation of state, is the quantity that most directly controls the behavior of the fluid. The quantity η/S is the relativistic generalization of the well known kinematic viscosity. Since in kinetic theory η is proportional to the mean free path of particles in the fluid, which is inversely proportional to the transport cross section, unitarity limits how small η can become under given conditions. An interesting consequence of this observation is that the quantity η/S has an apparent lower bound of the order of 0.08 (in units of ħ). The existence of such a bound was conjectured already three decades ago, but it was quantitatively derived only recently using the technique of holographic gravity duals. It is now believed that η/S ≥ (4π)^-1 for most sensible quantum field theories <cit.> at μ_B ≈ 0.
The experimental handle for the determination of η/S is the azimuthal anisotropy of the flow of final-state particles in off-central heavy-ion collisions, where the nuclear overlap region is elongated in the direction perpendicular to the reaction plane. Hydrodynamics converts the anisotropy of the pressure gradient into a flow anisotropy, which sensitively depends on the value of η/S. The average geometric shape of the overlap region in symmetric nuclear collisions is dominated by the elliptic eccentricity, resulting in an elliptic flow anisotropy characterized by the second Fourier coefficient v_2. Event-by-event fluctuations of the density distribution within the overlap region generate higher multipoles of the initial geometry and of the final flow, encoded in the Fourier coefficients v_3, v_4, etc.
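The Fourier coefficients v_n that carry this information are, in essence, the moments ⟨cos n(φ − Ψ_n)⟩ of the azimuthal particle distribution. As a minimal illustration of the extraction, with an assumed toy event sample, a fixed reaction plane Ψ = 0, and none of the event-plane or non-flow corrections used in real analyses, one can check that the simple estimator recovers the injected anisotropies:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_angles(n, v2=0.08, v3=0.03):
    """Draw azimuthal angles from dN/dphi ∝ 1 + 2 v2 cos(2 phi) + 2 v3 cos(3 phi),
    with the reaction plane fixed at Psi = 0, via rejection sampling."""
    phi = rng.uniform(-np.pi, np.pi, size=4 * n)
    weight = 1.0 + 2.0 * v2 * np.cos(2.0 * phi) + 2.0 * v3 * np.cos(3.0 * phi)
    keep = rng.uniform(0.0, 1.0 + 2.0 * (v2 + v3), size=phi.size) < weight
    return phi[keep][:n]

phi = sample_angles(200_000)
for nharm in (2, 3, 4):
    vn = np.mean(np.cos(nharm * phi))   # <cos n(phi - Psi)> with Psi = 0
    print(f"v_{nharm} = {vn:.4f}")       # ≈ 0.08, 0.03, and ≈ 0 (v_4 not injected)
```

In the actual measurements the event plane Ψ_n fluctuates event by event and must itself be reconstructed, which considerably complicates the procedure sketched here.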
Their measurement is analogous to the mapping of the amplitudes of multipoles in the thermal fluctuations of the cosmic background radiation.
The precise results of such an analysis of event-by-event fluctuations of the flow distribution depend somewhat on the structure of the initial-state density fluctuations, especially their radial profile and spatial scale. The most advanced study of this kind to date starts from the fluctuations of the gluon distribution in the colliding nuclei, evolves them for a brief period using classical Yang-Mills equations, and then inserts the fluctuating energy density distribution into viscous hydrodynamics <cit.>. The conclusion of this study is that the average value of η/S (averaged over the thermal history of the expansion) in Au+Au collisions at the top RHIC energy is 0.12, whereas the value for Pb+Pb collisions at LHC is 0.20 (see Fig. <ref>). While each of these values has systematic uncertainties of at least 50%, the ratio of these two values is probably rather stable against changes in the assumptions for the initial state. One interpretation of this result is that the average value of η/S at the 10 times higher LHC energy is somewhat higher than at RHIC, indicating a significant temperature dependence of this quantity. Within this interpretation the quark-gluon plasma at the lower temperature reached at RHIC is more strongly coupled and a more perfect liquid, making this energy domain especially interesting for the study of this observable <cit.>.
§.§ Valence quark recombination
If the term quark-gluon plasma is to truly apply to the hot QCD matter created in heavy ion collisions, it must contain excitations with the quantum numbers of quarks and gluons that are not confined into color singlet objects. As discussed in the context of the strangeness signature, one then expects that hadrons are formed by coalescence of valence quarks <cit.> when the matter cools back down below T_c. This picture predicted the strongly enhanced production of hadrons containing multiple strange quarks as a characteristic signature of the formation of a quark-gluon plasma. This enhancement was clearly observed in Au+Au collisions at RHIC, as it had been before in AA collisions at the CERN-SPS, but its value as a quark-gluon plasma signature had been questioned on the basis of the fact that it is already present at the lower collision energies and also occurs to a lesser degree in proton-nucleus collisions. Recent results from the beam energy scan at RHIC, which clearly indicate the presence of partonic collective behavior in the top SPS energy domain, and from p+Pb collisions at LHC and d+Au collisions at RHIC, which produced strong evidence for the presence of collective flow and showed a continuity of strangeness enhancement with final-state multiplicity, have all but eliminated these doubts.
Support for the quark recombination idea came early in the RHIC physics program from particle-identified spectra measured by PHENIX <cit.>. These showed an enhancement in the ratio of protons to pions in the transverse momentum range p_T = 1-3 GeV/c. This finding became known as the proton anomaly. The data also showed an apparent deviation from the mass hierarchy of the elliptic flow v_2(p_T) of identified hadrons in the same momentum range <cit.>.
Hydrodynamics predicts that heavier hadrons should exhibit a smaller flow anisotropy at the same momentum p_T, but the PHENIX data showed that the v_2 of protons and antiprotons exceeds that of pions for p_T > 2 GeV/c.
The concept of valence quark recombination explained both experimental findings. If the collective transverse flow is carried by quarks and these quarks recombine at the moment of hadronization, then protons, carrying three valence quarks, receive a larger transverse momentum boost from the collective expansion than pions, which contain only two valence quarks. The same argument applies of course to all baryons and mesons. The application of the sudden recombination model relies on the insight that valence quarks coalescing into a hadron with a few GeV/c transverse momentum leave the quark-gluon plasma at nearly the speed of light and thus make a sudden transition from the dense matter into the surrounding vacuum.
Theoretical considerations show that the mechanism of quark recombination from a thermal quark-gluon plasma <cit.>, with the transverse flow generated by the expansion at RHIC, exceeds the contribution to hadron formation by parton fragmentation for transverse momenta p_T < 3-4 GeV/c, precisely the regime where the proton anomaly was observed. Using reasonable values for the expansion velocity at the moment when the cooling matter hadronizes led to quantitative predictions for the transverse momentum dependence of the p/π and Λ/K_s^0 ratios as well as the elliptic flow of protons and pions, which reproduced the essential features of the PHENIX and STAR data (see Fig. <ref>). A particularly interesting relationship is obtained for the elliptic flow spectrum of different hadron species containing n valence quarks <cit.>: v_2(p_T) ≈ n v_2^q(p_T/n), which relates the elliptic flow spectrum of mesons (n=2) to that of baryons (n=3). At low transverse momenta, where mass effects are not negligible, it has been suggested that the transverse momentum variable p_T should be replaced by the transverse kinetic energy KE_T = m_T - m, with the transverse mass m_T = √(p_T^2+m^2). With this heuristic substitution, the valence quark scaling of elliptic flow was found to hold over the entire range of available data <cit.> (see Fig. <ref>).
In a more differential way than the strangeness enhancement signature, the sudden recombination model for hadron emission from the nuclear fireball provided evidence for the formation of a new state that contains collectively flowing matter composed of independently moving quarks and antiquarks. In the summary of their original publication Fries et al. <cit.> concluded: ...we propose a two component behavior of hadronic observables in heavy ion collisions at RHIC. These components include fragmentation of high-p_T partons and recombination from a thermal parton distribution. … Our scenario requires the assumption of a thermalized partonic phase characterized by an exponential momentum spectrum. Such a phase may be appropriately called a quark-gluon plasma.
§.§ Beam Energy Scan
During the years 2010–2011 RHIC conducted a beam energy scan (BES) for Au+Au collisions that covered the energies √(s_NN) = 7.7, 11.5, 19.6, 27, 39 GeV. These energies provided an essential link between the results previously obtained at the CERN-SPS and the data measured at the full RHIC collision energy of √(s_NN) = 200 GeV. A rather complete compilation of the data for observables related to the chemical and kinetic freeze-out parameters of the medium can be found in Ref. <cit.>.
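The strong variation of the baryon chemical potential across these collision energies, discussed in the next paragraph, can be estimated with the commonly used chemical freeze-out parametrization μ_B(√(s_NN)) ≈ a/(1 + b√(s_NN)). The sketch below uses the coefficients a ≈ 1.308 GeV and b ≈ 0.273 GeV^-1 fitted by Cleymans et al.; these numbers are quoted here as an assumption for illustration, not as part of the BES analysis itself:

```python
import numpy as np

# Freeze-out parametrization mu_B(sqrt(s_NN)) = a / (1 + b * sqrt(s_NN)),
# with a = 1.308 GeV and b = 0.273 GeV^-1 (Cleymans et al. fit, assumed values)
a, b = 1.308, 0.273

for sqrts in (7.7, 11.5, 19.6, 27.0, 39.0, 200.0):   # BES energies + top RHIC, in GeV
    mu_B = a / (1.0 + b * sqrts) * 1000.0            # convert GeV -> MeV
    print(f"sqrt(s_NN) = {sqrts:6.1f} GeV -> mu_B ≈ {mu_B:4.0f} MeV")
```

Running this gives μ_B ≈ 420 MeV at √(s_NN) = 7.7 GeV, falling to ≈ 110 MeV at 39 GeV and ≈ 20 MeV at 200 GeV, consistent with the range quoted below.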
The BES confirmed the strong enhancement of strangeness production observed in the CERN-SPS experiments, including detailed features like Marek's horn, the peak in the K^+/π^+ ratio near √(s_NN) = 7 GeV. The strangeness flavor was found to be chemically equilibrated in central collisions over the entire energy range, and the main difference between the different energies can be attributed to a strong variation of the baryon chemical potential, from μ_B ≈ 100 MeV at the high-energy end to μ_B ≈ 400 MeV at the low-energy end of the BES. In summary, the BES established the continuity of the chemical and thermal bulk properties of the medium from the CERN-SPS to the BNL-RHIC energy range and provided compelling evidence that a quark-gluon plasma is temporarily created across this range of collision energies.
§.§ Jet Quenching
Energetic partons, the precursors of later emerging hadronic jets, lose energy while traversing the quark-gluon plasma either by elastic collisions with the medium constituents or by gluon radiation. At high energies, radiation should dominate; collisional energy loss is expected to be important for intermediate-energy partons and for heavy quarks. Each mechanism is encoded in a transport coefficient, ê for collisional energy loss and q̂ for radiative energy loss <cit.>:
(dE/dx)_coll = - C_2 ê ,   (dE/dx)_rad = - C_2 q̂ L ,
where L denotes the path length traversed in matter and C_2 is the quadratic Casimir of the fast parton. The value of q̂ is given by the transverse momentum broadening of a fast light parton per unit path length.
The evolution of a jet in the medium, shown schematically in Fig. <ref>, is characterized by several scales: the initial virtuality Q_in associated with the hard scattering process; the transverse scale at which the medium appears opaque, also called the saturation scale Q_s; and the transverse geometric extension of the jet, r_⊥. Those components of the jet for which r_⊥ > Q_s^-1 will be strongly modified by the medium. This means that the core of the jet will remain rather inert, except for an overall energy attenuation of the primary parton, but strong modifications are expected at larger angles and for the soft components of the jet. These features of jet modification can be encoded in a transport equation for the accompanying gluon radiation <cit.>:
d/dt f(ω, k_⊥^2, t) = ê ∂f/∂ω + (q̂/2) ∂f/∂k_⊥^2 + dN_rad/(dω dk_⊥^2 dt) ,
where the last term denotes the gluon radiation induced by the medium. The jet quenching parameter q̂ can be determined by analyzing the suppression of leading hadrons in A+A collisions, compared with the scaled p+p data, usually expressed by a suppression factor R_AA, which is of the order of 0.2 for hadrons with transverse momenta in the range of 10 GeV/c in Au+Au at RHIC and Pb+Pb at LHC. A systematic analysis of available data from RHIC and LHC was published by the JET Collaboration <cit.> (see Fig. <ref>). It suggests that the temperature-averaged value of q̂ grows slightly less than linearly with the matter density between RHIC and LHC. This confirms the notion that the quark-gluon plasma formed at higher temperatures is somewhat less strongly coupled.
Using the values of q̂ and ê determined by comparison with the R_AA data, one can also explain the strong increase of the di-jet asymmetry observed in central Pb+Pb collisions at LHC. This gives confidence that the basic mechanisms of jet modification and parton energy loss are reasonably well understood.
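Why a suppression factor as strong as R_AA ≈ 0.2 follows from a comparatively modest mean energy loss becomes clear once the loss is folded into the steeply falling transverse momentum spectrum. The minimal sketch below does this for a toy power-law spectrum; the spectral index n and the assumed mean loss ΔE are illustrative numbers, not fitted values:

```python
import numpy as np

# Toy estimate: a power-law spectrum dN/dpT ∝ pT^(-n), shifted by a mean
# parton energy loss DeltaE, gives R_AA(pT) ≈ (1 + DeltaE/pT)^(-n).
n = 7.0                            # spectral index, typical of RHIC pT spectra (assumed)
pT = np.array([5.0, 10.0, 20.0])   # hadron transverse momenta in GeV/c

for dE in (1.0, 2.0):              # assumed mean energy loss in GeV
    RAA = (1.0 + dE / pT) ** (-n)
    print(f"DeltaE = {dE:.0f} GeV -> R_AA =", np.round(RAA, 2))
```

Because of the steep spectrum, a shift of only about 2 GeV at p_T ≈ 10 GeV/c already yields R_AA ≈ 0.3, which illustrates why R_AA is such a sensitive probe of q̂.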
The phenomenology of jet quenching at the LHC, exemplified by the CMS data from Pb+Pb collisions <cit.> shown in Fig. <ref>, agrees qualitatively well with the expectation that modifications are concentrated at large cone angles and soft momentum fractions within the jet.
§.§ Quarkonium Melting
Bound states of heavy quarks, especially quarkonia (J/ψ, ψ', the Υ states), are sensitive to the distance at which the color force is screened in the quark-gluon plasma. Several mechanisms contribute to the nuclear modification of the quarkonium yield, as illustrated in Fig. <ref>. At sufficiently high temperatures the screening length becomes shorter than the quarkonium radius and the QQ̄ bound state melts. Since the radii of the quarkonium states vary widely – from approximately 0.1 fm for the Υ ground state to almost 1 fm for ψ' – the sequential melting of these states could enable at least a semi-quantitative determination of the color screening length.
The static screening length, which is relevant for heavy quarks, can be calculated within the context of lattice QCD. However, it has become well understood in recent years that static color screening is only part of the picture of quarkonium melting, and that quarkonium yields can not only be suppressed by the action of the medium, but also enhanced by recombination, if the density of heavy quarks and antiquarks is large enough. An important loss mechanism is ionization by absorption of thermal gluons. This mechanism gains in importance as the binding energy of a quarkonium state is lowered by color screening. The absorption channel can be included in the dynamical evolution of the amplitude as an imaginary part of the potential, with a corresponding noise term ensuring ultimate approach to the equilibrium distribution:
iħ ∂/∂t Ψ_QQ̄ = [ (p_Q^2 + p_Q̄^2)/(2M_Q) + V_QQ̄ - (i/2)Γ_QQ̄ + ξ_QQ̄ ] Ψ_QQ̄ .
Recombination of a heavy QQ̄ pair can occur at or near hadronization, similar to the sudden recombination mechanism that is thought to be responsible for the valence quark scaling of the identified-particle elliptic flow. The yield of quarkonia formed in this manner grows quadratically with the heavy quark yield. Recombination of charm quark pairs into J/ψ and ψ' is thus expected to be much more frequent at LHC than at RHIC. This expectation is borne out by a comparison of the centrality dependence of the J/ψ suppression observed by PHENIX in Au+Au at RHIC and by ALICE in Pb+Pb at LHC (see Fig. <ref>). The LHC data show less suppression in central collisions than the RHIC data, although the significantly hotter matter produced at LHC energy must surely be more effective in melting the J/ψ state. Whether it is possible to measure enough observables in order to not only disentangle the action of these different mechanisms, but also determine the color screening length, remains to be seen. On the positive side, the theory of quarkonium transport in hot QCD matter has now reached a state of sophistication where this seems possible.
In summary of this section we note that BNL-RHIC contributed several additional and convincing experimental observables, which have evolved further in comparison with the contemporary LHC results (which we did not discuss beyond strangeness). Without further discussion let us say that there is reason to hope that the details of the initial-state structure can be separated from viscous effects, and both can be separately extracted from the data. Jet physics opens new avenues of probing the quark-gluon plasma at different scales.
The quarkonium data from the LHC suggest that recombination dominates in central Pb+Pb collisions for the cc̄ states.
§ SUMMARY
Since we have offered sub-summaries at the end of each section, we can be brief here: We have described in some detail the experimental developments that followed the publication of our review of the strangeness signature of the QGP <cit.> in 1986. While over the past 30 years much has been learned in terms of experimental data accumulation and theoretical analysis, the basic insights described in our review have withstood the test of time. No other viable interpretation of all soft hadron production data measured in heavy-ion collisions exists than the formation of a thermally and chemically equilibrated QGP fireball that explosively disintegrates, preserving the entropy, strangeness and baryon number content established at the boundary between QGP and hadron gas. The fleeting presence of the QGP is most clearly witnessed by the large overabundance of strange antibaryons. Our chronology of the CERN-SPS strangeness research shows that the key experimental results were available as early as 1992 and were several times confirmed and published before CERN finally announced the QGP discovery in February 2000. We have made an attempt to explain why this announcement was less broadly accepted than probably would have been the case if it had been primarily based on the observation of the predicted and quantitatively confirmed strangeness enhancement. As it was presented in 2000, many doubts remained, and a whole new set of experiments at BNL-RHIC was required to establish a broad consensus with respect to the discovery of a new state of matter on the basis of additional phenomena not accessible at the SPS energies.
We did not dwell on the numerous attempts made over the past 30 years to question the usefulness of strangeness as a signature of QGP, as none of the related arguments has withstood the test of time. However, as an example of such proposals we discussed the fact that the Ξ/ϕ ratio is constant over the whole range of SPS and RHIC energies while the degree of strangeness enhancement varies substantially, discrediting the family of models called canonical enhancement. With new results for strange baryon and antibaryon enhancement over a wide range of system sizes now emerging from experiments at LHC, it becomes possible to explore in detail how the strangeness flavor is chemically equilibrated as a function of QGP lifetime and size. As charm becomes an abundant flavor in the LHC energy domain, flavor probes of the QGP are further expanding their reach as primary bulk signals and probes of QGP formation. While the basic picture is now well established, much still remains to be learned.
[Koch:1986ud] P. Koch, B. Müller and J. Rafelski, Strangeness in Relativistic Heavy Ion Collisions, Phys. Rept. 142, 167 (1986).
[Muller:1994rb] B. Müller, Physics and signatures of the quark-gluon plasma, Rept. Prog. Phys. 58, 611 (1995).
[Muller:1991jk] B. Müller, Signatures of the quark-gluon plasma, Nucl. Phys. A 544, 95 (1992).
[Rafelski:1980rk] J. Rafelski and R. Hagedorn, From Hadron Gas To Quark Matter. 2, in Statistical Mechanics of Quarks and Hadrons, H. Satz, ed. (North Holland, 1980) pp. 253–272, http://inspirehep.net/record/156201/files/198012212.pdf; also: Preprint CERN-TH-2969, October 1980.
[Rafelski:1980fy] J. Rafelski, Extreme States of Nuclear Matter, pp. 282–324 in Proceedings of the Workshop on Future Relativistic Heavy Ion Experiments, held 7-10 October 1980 at GSI, Darmstadt, Germany; reprinted in Eur. Phys. J. A 51, 115 (2015).
[Rafelski:1982pu] J. Rafelski and B. Müller, Strangeness Production in the Quark-Gluon Plasma, Phys. Rev. Lett. 48, 1066 (1982); Erratum: Phys. Rev. Lett. 56, 2334 (1986).
[Rafelski:1982ii] J. Rafelski, Formation and Observables of the Quark-Gluon Plasma, preprint UFTP-80-1982, pp. 331–347 in M. Jacob and J. Tran Thanh Van, Phys. Rept. 88 (1982) 321; and in: Extreme States Of Nuclear Matter, Nucl. Phys. A 374 (1982) 489C.
[SHAREa] M. Petran, J. Letessier, J. Rafelski and G. Torrieri, SHARE with CHARM, Comput. Phys. Commun. 185, 2056 (2014).
[SHAREb] G. Torrieri, S. Jeon, J. Letessier and J. Rafelski, SHAREv2: Fluctuations and a comprehensive treatment of decay feed-down, Comput. Phys. Commun. 175, 635 (2006).
[SHAREc] G. Torrieri, S. Steinke, W. Broniowski, W. Florkowski, J. Letessier and J. Rafelski, SHARE: Statistical hadronization with resonances, Comput. Phys. Commun. 167, 229 (2005).
[Letessier:2005qe] J. Letessier and J. Rafelski, Hadron production and phase changes in relativistic heavy-ion collisions, Eur. Phys. J. A 35, 221 (2008).
[Petran:2013qla] M. Petran and J. Rafelski, Universal hadronization condition in heavy ion collisions at √(s_NN) = 62 GeV and at √(s_NN) = 2.76 TeV, Phys. Rev. C 88, 021901 (2013).
[Rafelski:2015cxa] J. Rafelski, Melting Hadrons, Boiling Quarks, Eur. Phys. J. A 51, 114 (2015).
[Bjorken:1982qr] J. D. Bjorken, Highly Relativistic Nucleus-Nucleus Collisions: The Central Rapidity Region, Phys. Rev. D 27, 140 (1983).
[Ollitrault:1992bk] J. Y. Ollitrault, Anisotropy as a signature of transverse collective flow, Phys. Rev. D 46, 229 (1992).
[Song:2010mg] H. Song, S. A. Bass, U. Heinz, T. Hirano and C. Shen, 200 A GeV Au+Au collisions serve a nearly perfect quark-gluon liquid, Phys. Rev. Lett. 106, 192301 (2011); Erratum: Phys. Rev. Lett. 109, 139904 (2012).
[Bjorken:1982tu] J. D. Bjorken, Energy Loss of Energetic Partons in Quark-Gluon Plasma: Possible Extinction of High p(t) Jets in Hadron-Hadron Collisions, FERMILAB-PUB-82-059-T, http://lss.fnal.gov/archive/1982/pub/Pub-82-059-T.pdf.
[Wang:1991xy] X. N. Wang and M. Gyulassy, Gluon shadowing and jet quenching in A + A collisions at s**(1/2) = 200-GeV, Phys. Rev. Lett. 68, 1480 (1992).
[Adcox:2001jp] K. Adcox et al. [PHENIX Collaboration], Suppression of hadrons with large transverse momentum in central Au+Au collisions at √(s_NN) = 130-GeV, Phys. Rev. Lett. 88, 022301 (2002).
[Chatrchyan:2011sx] S. Chatrchyan et al. [CMS Collaboration], Observation and studies of jet quenching in PbPb collisions at nucleon-nucleon center-of-mass energy = 2.76 TeV, Phys. Rev. C 84, 024906 (2011).
[Shuryak:1978ij] E. V. Shuryak, Quark-Gluon Plasma and Hadronic Production of Leptons, Photons and Psions, Phys. Lett. 78B, 150 (1978) [Sov. J. Nucl. Phys. 28, 408 (1978)] [Yad. Fiz. 28, 796 (1978)].
[Thews:2000rj] R. L. Thews, M. Schroedter and J. Rafelski, Enhanced J/ψ production in deconfined quark matter, Phys. Rev. C 63, 054905 (2001).
[Matsui:1986dk] T. Matsui and H. Satz, J/ψ Suppression by Quark-Gluon Plasma Formation, Phys. Lett. B 178, 416 (1986).
[Feinberg:1976ua] E. L. Feinberg, Direct Production of Photons and Dileptons in Thermodynamical Models of Multiple Hadron Production, Nuovo Cim. A 34, 391 (1976).
[Kapusta:1991qp] J. I. Kapusta, P. Lichard and D. Seibert, High-energy photons from quark-gluon plasma versus hot hadronic gas, Phys. Rev. D 44, 2774 (1991); Erratum: Phys. Rev. D 47, 4171 (1993).
[Borsanyi:2012rr] S. Borsanyi, Thermodynamics of the QCD transition from lattice, Nucl. Phys. A 904-905, 270c (2013).
[Odyniec:2001] G. Odyniec, In the memory of Howel G Pugh, J. Phys. G: Nucl. Part. Phys. 27 (2001) 255.
[Gazdzicki:1989kd] M. Gazdzicki et al. [NA35 Collaboration], Neutral Strange Particle Production in S-S Collisions at 200 GeV/nucleon, Nucl. Phys. A 498 (1989) 375C.
[Bartke:1990cn] J. Bartke et al. [NA35 Collaboration], Neutral strange particle production in sulphur sulphur and proton sulphur collisions at 200-GeV/nucleon, Z. Phys. C 48 (1990) 191.
[Koch:1984tz] P. Koch and J. Rafelski, Time Evolution of Strange Particle Densities in Hot Hadronic Matter, Nucl. Phys. A 444, 678 (1985).
[Baechler:1991pp] R. Stock et al. [NA35 Collaboration], Strangeness enhancement in central S-S collisions at 200-GeV/nucleon, Nucl. Phys. A 525 (1991) 221C (presented at QM90: VIIIth International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, 7-11 May 1990, Menton, France).
[Bachler:1992js] J. Bachler et al. [NA35 Collaboration], Production of charged kaons in proton-nucleus and nucleus-nucleus collisions at 200-GeV/Nucleon, Z. Phys. C 58 (1993) 367.
[Abatzis:1990cm] S. Abatzis et al. [WA85 Collaboration], Λ and Λ̅ Production in Sulphur-Tungsten Interactions at 200-GeV/c Per Nucleon, Phys. Lett. B 244 (1990) 130.
[Abatzis:1990gz] S. Abatzis et al. [WA85 Collaboration], Production of multi-strange baryons and anti-baryons in sulphur-tungsten interactions at 200-GeV/c per nucleon, Phys. Lett. B 259 (1991) 508.
[Abatzis:1991ju] S. Abatzis et al. [WA85 Collaboration], Ξ^-, Ξ̄^+, Λ and Λ̄ production in sulphur-tungsten interactions at 200-GeV/c per nucleon, Phys. Lett. B 270 (1991) 123.
[Rafelski:1991rh] J. Rafelski, Strange anti-baryons from quark-gluon plasma, Phys. Lett. B 262 (1991) 333.
[QR2000] E. Quercigh and J. Rafelski, A strange quark plasma, Phys. World 13 (2000) 37.
[Alber:1994tz] T. Alber et al. [NA35 Collaboration], Strange particle production in nuclear collisions at 200-GeV per nucleon, Z. Phys. C 64 (1994) 195.
[Foka:1995Thesis] P. Foka, Study of Strangeness Production in Central Nucleus-Nucleus Collisions at 200 GeV/Nucleon by Developing a New Analysis Method for the NA35 Streamer Chamber Pictures, PhD thesis (University of Geneva, July 1995).
[Alber:1996mq] T. Alber et al. [NA35 Collaboration], Anti-baryon production in sulphur nucleus collisions at 200-GeV per nucleon, Phys. Lett. B 366 (1996) 56.
[Evans:1994sg] D. Evans et al. [WA85 Collaboration], New results from WA85 on multi-strange hyperon production in 200-A/GeV/c S W interactions, Nucl. Phys. A 566 (1994) 225C.
[Antinori:1997nn] F. Antinori, The heavy-ion physics programme at the CERN OMEGA spectrometer, pp. 43-49 in M. Jacob and E. Quercigh, eds., The CERN OMEGA spectrometer, Yellow Report CERN-97-02, http://dx.doi.org/10.5170/CERN-1997-002.
[DiBari:1995cy] D. Di Bari et al. [WA85 Collaboration], Results on the production of baryons with |S| = 1, 2, 3 and strange mesons in S W collisions at 200-GeV/c per nucleon, Nucl. Phys. A 590 (1995) 307C.
[Kinson:1995cz] J. B. Kinson et al. [WA94 Collaboration], Strange particle production in sulphur-sulphur interactions at 200-GeV/c per nucleon, Nucl. Phys. A 590 (1995) 317C.
[ABELEV:2013zaa] B. B. Abelev et al. [ALICE Collaboration], Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at √(s_NN) = 2.76 TeV, Phys. Lett. B 728 (2014) 216; Corrigendum: Phys. Lett. B 734 (2014) 409.
[Heinz:2000bk] U. W. Heinz and M. Jacob, Evidence for a new state of matter: An Assessment of the results from the CERN lead beam program, nucl-th/0002042, http://arxiv.org/abs/nucl-th/0002042.
[RHIC2005a] I. Arsene et al. [BRAHMS Collaboration], Quark gluon plasma and color glass condensate at RHIC? The Perspective from the BRAHMS experiment, Nucl. Phys. A 757, 1 (2005).
[RHIC2005b] K. Adcox et al. [PHENIX Collaboration], Formation of dense partonic matter in relativistic nucleus-nucleus collisions at RHIC: Experimental evaluation by the PHENIX collaboration, Nucl. Phys. A 757, 184 (2005).
[RHIC2005c] B. B. Back et al. [PHOBOS Collaboration], The PHOBOS perspective on discoveries at RHIC, Nucl. Phys. A 757, 28 (2005).
[RHIC2005d] J. Adams et al. [STAR Collaboration], Experimental and theoretical challenges in the search for the quark gluon plasma: The STAR Collaboration's critical assessment of the evidence from RHIC collisions, Nucl. Phys. A 757, 102 (2005).
[Gazdzicki:2010iv] M. Gazdzicki, M. Gorenstein and P. Seyboth, Onset of deconfinement in nucleus-nucleus collisions: Review for pedestrians and experts, Acta Phys. Polon. B 42 (2011) 307.
[Rafelski:2009gu] J. Rafelski and J. Letessier, Particle Production and Deconfinement Threshold, PoS CONFINEMENT8 (2008) 111, http://inspirehep.net/record/811159/files/Confinement8_111.pdf.
[Petran:2009dc] M. Petran and J. Rafelski, Multistrange Particle Production and the Statistical Hadronization Model, Phys. Rev. C 82 (2010) 011901.
[Adam:2015vsf] J. Adam et al. [ALICE Collaboration], Multi-strange baryon production in p-Pb collisions at √(s_NN) = 5.02 TeV, Phys. Lett. B 758 (2016) 389.
[ALICE:2017jyt] J. Adam et al. [ALICE Collaboration], Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions, Nature Phys. 13, 535 (2017).
[Kovtun:2004de] P. Kovtun, D. T. Son and A. O. Starinets, Viscosity in strongly interacting quantum field theories from black hole physics, Phys. Rev. Lett. 94, 111601 (2005).
[Gale:2012rq] C. Gale, S. Jeon, B. Schenke, P. Tribedy and R. Venugopalan, Event-by-event anisotropic flow in heavy-ion collisions from combined Yang-Mills and viscous fluid dynamics, Phys. Rev. Lett. 110, no. 1, 012302 (2013).
[Bass:2017zyn] S. A. Bass, J. E. Bernhard and J. S. Moreland, Determination of Quark-Gluon-Plasma Parameters from a Global Bayesian Analysis, arXiv:1704.07671 [nucl-th].
[Biro:1983gh] T. S. Biro and J. Zimanyi, Quark Gluon Plasma Formation In Heavy Ion Collisions and Quarkochemistry, Nucl. Phys. A 395, 525 (1983).
[Rafelski:1987un] J. Rafelski and M. Danos, Possible Signature for and Early Hadronization Mechanisms of Quark-Gluon Plasma, Phys. Lett. B 192, 432 (1987).
[Adler:2003kg] S. S. Adler et al. [PHENIX Collaboration], Scaling properties of proton and anti-proton production in s(NN)**(1/2) = 200-GeV Au+Au collisions, Phys. Rev. Lett. 91, 172301 (2003).
[Adler:2003kt] S. S. Adler et al. [PHENIX Collaboration], Elliptic flow of identified hadrons in Au+Au collisions at s(NN)**(1/2) = 200-GeV, Phys. Rev. Lett. 91, 182301 (2003).
[Fries:2003vb] R. J. Fries, B. Müller, C. Nonaka and S. A. Bass, Hadronization in heavy ion collisions: Recombination and fragmentation of partons, Phys. Rev. Lett. 90, 202303 (2003).
[Greco:2003xt] V. Greco, C. M. Ko and P. Levai, Parton coalescence and anti-proton / pion anomaly at RHIC, Phys. Rev. Lett. 90, 202302 (2003).
[Fries:2011wz] R. J. Fries, Quark Recombination in Heavy Ion Collisions, PoS CERP 2010, 008 (2010).
[Fries:2003kq] R. J. Fries, B. Müller, C. Nonaka and S. A. Bass, Hadron production in heavy ion collisions: Fragmentation and recombination from a dense parton phase, Phys. Rev. C 68, 044902 (2003).
[Adare:2006ti] A. Adare et al. [PHENIX Collaboration], Scaling properties of azimuthal anisotropy in Au+Au and Cu+Cu collisions at s(NN) = 200-GeV, Phys. Rev. Lett. 98, 162301 (2007).
[Adamczyk:2017iwn] L. Adamczyk et al. [STAR Collaboration], Bulk Properties of the Medium Produced in Relativistic Heavy-Ion Collisions from the Beam Energy Scan Program, arXiv:1701.07065 [nucl-ex].
[Majumder:2010qh] A. Majumder and M. Van Leeuwen, The Theory and Phenomenology of Perturbative QCD Based Jet Quenching, Prog. Part. Nucl. Phys. 66, 41 (2011).
[Qin:2010mn] G. Y. Qin and B. Müller, Explanation of Di-jet asymmetry in Pb+Pb collisions at the Large Hadron Collider, Phys. Rev. Lett. 106, 162302 (2011); Erratum: Phys. Rev. Lett. 108, 189904 (2012).
[Burke:2013yra] K. M. Burke et al. [JET Collaboration], Extracting the jet transport coefficient from jet quenching in high-energy heavy-ion collisions, Phys. Rev. C 90, 014909 (2014).
[Andronic:2013awa] A. Andronic [ALICE Collaboration], J. Phys. Conf. Ser. 455, 012002 (2013).
[Sanchez:2013lpa] M. Calderón de la Barca Sánchez [CMS Collaboration], J. Phys. Conf. Ser. 458, 012011 (2013).
http://arxiv.org/abs/1708.08115v1
{ "authors": [ "Peter Koch", "Berndt Müller", "Johann Rafelski" ], "categories": [ "nucl-th", "hep-ph" ], "primary_category": "nucl-th", "published": "20170827173230", "title": "From Strangeness Enhancement to Quark-Gluon Plasma Discovery" }
Department of Life and Environmental Agricultural Sciences, Tottori University, Tottori 680-8551, Japan
Department of Physics, Nara Women's University, Nara 630-8506, Japan
Research Center for Nuclear Physics (RCNP), Osaka University, Ibaraki 567-0047, Japan
Department of Physics, Tokyo Metropolitan University, Hachioji 192-0397, Japan
The η mesic nucleus is considered to be one of the interesting exotic many-body systems and has been studied theoretically and experimentally since the 1980's. Recently, the formation of the η mesic nucleus in fusion reactions of light nuclei, such as d + d → (η + α) → X, has been proposed and the experiments have been performed by WASA-at-COSY. We develop a theoretical model to evaluate the formation rate of the η mesic nucleus in the fusion reactions and show the calculated results. We find that the η bound states could be observed in the reactions in cases with strongly attractive and weakly absorptive η-nucleus interactions. We compare our results with existing data of the d + d → η + α and the d + d → ^3He + N + π reactions. We find that analyses by our theoretical model with the existing data can provide new information on the η-nucleus interaction.
14.40.Aq π, K, and η mesons – 36.10.Gv Mesonic, hyperonic and antiprotonic atoms and molecules – 25.60.Pj Fusion reactions
η-nucleus interaction from the d+d reaction around the η production threshold
N. Ikeno^1, H. Nagahiro^2,3, D. Jido^4, S. Hirenzaki^2
Received: date / Revised version: date
§ INTRODUCTION
The existence of bound states of the η meson in nuclei (η mesic nuclei) was predicted first by Haider and Liu <cit.> in the 1980's. Stimulated by this theoretical result, there have been many studies of the structure and the formation reactions of the η mesic nucleus <cit.>. Recently, the η-nucleon and η-nucleus interactions have been studied theoretically in the context of the chiral symmetry of the strong interaction, and the η mesic nucleus can be considered as one of the interesting objects to investigate aspects of the chiral symmetry at finite density <cit.>. As for the experimental studies, the first attempt to observe the η mesic nucleus was performed by the (π^+, p) reaction with finite momentum transfer <cit.>, and the interpretation of the data is still controversial <cit.>. After that, there were many experimental searches for the bound states, as reported in Refs. <cit.> for example. Systems with η in light nuclei, such as the η-^3He state, have also been studied seriously <cit.>, and the data of the p+d → η + ^3He reaction were studied to deduce the η-^3He interaction <cit.>. So far, the existence of an η-Mg bound state was claimed in Ref. <cit.>. However, we have not found any decisive evidence of the existence of the η bound state in lighter nuclei like He.
Recently, new experiments of the d + d → (η + α) → X reaction have been proposed and performed at WASA-at-COSY <cit.>. In the experiments, the formation cross section of the η mesic nucleus in the α particle in the d + d fusion reaction is planned to be measured by observing the particles emitted from the decay of the η mesic nucleus below the η production threshold. The formation rate of the emitted particles is expected to be enhanced at the resonance energy of the η bound state formation. The shape of the observed spectra in Ref.
<cit.> were smooth without any clear peak structures, and the upper limit of the η-nucleus formation cross section was reported to be around 3–6 nb for the d + d → ^3He + n + π^0 reaction <cit.>. To evaluate these upper limits, the Fermi motion of the N^* in the nucleus <cit.> is also taken into account. The upper limit for the ^3He + p + π^- final state is two times larger because of isospin <cit.>. There are also data of the η production reaction d + d → η + α above the threshold <cit.>, which are expected to provide valuable information on the reaction mechanism.
In this paper, we develop a theoretical model to evaluate the formation cross section of the η bound states in the d + d reaction and show the numerical results. This theoretical model can be used to deduce information on the η-nucleus interaction from the experimental spectra. We explain the details of our theoretical model in section <ref>. We show the numerical results and compare them with the existing data in section <ref> to deduce information on the η-nucleus interaction, and summarize this paper in section <ref>.
§ FORMULATION
In this section, we consider the d + d → (η + α) → X reaction and explain the theoretical model developed in this article. In the experiments of this reaction at WASA-at-COSY <cit.>, the total energy of the system is varied by changing the deuteron beam momentum around the η + α threshold energy, which corresponds to the beam momentum p=2.3 GeV/c, and the η + α bound state production signal is expected to be observed as peak structures of the cross section of the d + d → ^3He + p + π^- and d + d → ^3He + n + π^0 reactions in the energy region below the η-α production threshold. Based on these considerations, we have developed the phenomenological model described below. In the model, the fusion and η meson production processes are phenomenologically parameterized and the Green's function technique is used to sum up all η-α final states.
First we formulate the transition amplitude for the d+d → η + α reaction. We adopt a phenomenological hadron-reaction framework. We take a model in which the η meson production and the d+d → α fusion take place in a finite-size region, as schematically pictured in Fig. <ref>. All the information on the finiteness of the reaction range, the spatial dimensions of the nuclei, the structure of the deuterons and the alpha, and the overlap of their wave functions is represented by the transition form factor F(q⃗). We are interested in the η production at the threshold, so that the final state, η and α, is dominated by the s wave and, thus, the total spin-parity of the final state with a pseudoscalar (η) and a scalar (α) boson is 0^-. The deuteron, having spin 1, is represented by an axial-vector boson. According to Lorentz invariance, a pseudoscalar 0^- state can be made out of two axial vectors 1^+ by a so-called anomalous coupling like ϵ^μνρσ∂_μ A_ν∂_ρ A_σ P S, where A_μ and P are an axial-vector boson and a pseudoscalar boson, respectively, and S is a scalar boson. Thus, the interaction Hamiltonian may be written as
H_ int = -icϵ^ijk((∂_x_2^0∇^i_x_1 - ∂_x_1^0∇^i_x_2) ϕ̂^j_d(x_1)ϕ̂^k_d (x_2)) ϕ̂_η^†(x_1) ϕ̂_α^†(x_2) F(x_1,x_2),
where ϕ̂^i_d(x) is the deuteron field operator with spin index i, ϕ̂_η^†(x) and ϕ̂_α^†(x) are the creation operators for η and α, respectively, and c expresses the interaction strength. The interaction strength c will be adjusted so as to reproduce the observed cross section. The function F(x_1, x_2) in Eq.
(<ref>) represents the non-local transition form factor of d + d → η + α, which is supposed to include the information on the d + d → α fusion and the η meson production in hadronic interactions such as N + N → η + N + N. Assuming translational invariance, we define the Fourier transformation
F(x_1, x_2) = ∫d^4q/(2π)^4 F(q⃗) e^i q· (x_1 - x_2).
The momentum transfer q⃗ of the reaction is large and all nucleons should participate in the fusion reaction equally. Since it is hard to calculate F(q⃗) in a microscopic way, we treat it phenomenologically and assume a functional form of F(q⃗) in the numerical evaluation.
Letting the wave functions of the incident deuterons labeled by d_1 and d_2 be given by plane waves with momentum p_1 for deuteron d_1 and p_2 for deuteron d_2, and writing the wave functions of η and α in the final state as ϕ_η(x⃗) e^-iE_η x^0 and ϕ_α(x⃗) e^-iE_α x^0 with the η and α energies E_η and E_α, respectively, we obtain the connected part of the S-matrix in the center of mass frame:
S = -i N_d_1 N_d_2 N_η N_α c ∫ d^4x_1 d^4 x_2 ϵ^ijk( ∂_x_2^0∇^i_x_1 - ∂_x_1^0∇^i_x_2)[ χ_d_1^jχ_d_2^k e^- i p_1· x_1 e^-ip_2· x_2 + χ_d_2^jχ_d_1^k e^- i p_2· x_1 e^-ip_1· x_2] ×∫d^4q/(2π)^4 F(q⃗) e^i q· (x_1 - x_2) ϕ^*_η(x⃗_1) e^iE_ηx_1^0 ϕ^*_α(x⃗_2) e^iE_α x^0_2,
where χ_d_1 and χ_d_2 are the spin wave functions of deuterons d_1 and d_2, respectively, and the normalization factors N_i are given as N_i = √(M_i/E_i) with mass M_i and energy E_i. Operating the derivatives onto the wave functions, we have the S-matrix as
S = -i 2πδ(E_1 + E_2 - E_f) N_d_1 N_d_2 N_η N_α × c ∫ d x⃗_1 d x⃗_2 ϵ^ijk 2 (E_2 p_1^i - E_1 p_2^i) χ_d_1^jχ_d_2^k e^ i p⃗_1·x⃗_1 e^i p⃗_2·x⃗_2 ×∫d q⃗/(2π)^3 F(q⃗) e^-i q⃗· (x⃗_1 - x⃗_2) ϕ^*_η(x⃗_1) ϕ^*_α(x⃗_2),
where E_1 and E_2 are the energies of deuterons d_1 and d_2, respectively, E_f = E_η + E_α, and the integrations over the time components provide energy conservation
∫ dx_1^0 dx_2^0 dq_0/2π e^-ix_1^0(E_1-q^0 - E_η) e^-ix_2^0(E_2+q^0-E_α) = 2πδ(E_1 + E_2 - E_f).
In order to perform the spatial integrals, we introduce the center of mass and relative coordinates for the final state, R⃗ and r⃗, defined as
R⃗ = (m_ηx⃗_1 + M_αx⃗_2)/(m_η + M_α),  r⃗ = x⃗_1 - x⃗_2 .
We also introduce the wave function for the relative motion of the final state, ϕ_f(r⃗), and assume that the center of mass motion of the η and α system is written as a plane wave. This implies that we replace the η and α wave functions as follows:
N_η N_αϕ_η(x⃗_1) ϕ_α(x⃗_2) → N_f e^i p⃗_f·R⃗ϕ_f(r⃗),
with the momentum of the center of mass motion p⃗_f = p⃗_η + p⃗_α and the normalization of the wave function of relative motion ϕ_f(x⃗) given as
∫ dx⃗ | ϕ_f(x⃗)|^2 = 1.
In these coordinates, the S-matrix is written as
S = -i2πδ(E_1 + E_2 - E_f) N_d_1 N_d_2 N_f × c ϵ^ijk 2 (E_2 p_1^i - E_1 p_2^i) χ_d_1^jχ_d_2^k ×∫ d R⃗ d r⃗ e^ i (p⃗_1 + p⃗_2) ·R⃗ e^i (M_α/(m_η+M_α) p⃗_1 - m_η/(m_η+M_α) p⃗_2)·r⃗ ×∫d q⃗/(2π)^3 F(q⃗) e^-i q⃗·r⃗ ϕ^*_f(r⃗) e^-i p⃗_f·R⃗.
The integral over R⃗ provides momentum conservation
∫ d R⃗ e^i(p⃗_1 + p⃗_2 - p⃗_f) ·R⃗ = (2π)^3δ (p⃗_1 + p⃗_2 - p⃗_f).
Introducing the Fourier transform
ϕ_f(r⃗) = ∫d p⃗/(2π)^3 ϕ̃_f (p⃗) e^-ip⃗·r⃗,
we perform the spatial integrals and obtain
S = -i (2π)^4δ^(4)(p_1 + p_2 - p_f) N_d_1 N_d_2 N_f × c ϵ^ijk 2 (E_2 p_1^i - E_1 p_2^i) χ_d_1^jχ_d_2^k ×∫d q⃗/(2π)^3 F(q⃗) ϕ̃^*_f (P⃗),
where P⃗ is defined as
P⃗ = M_α/(m_η+M_α) p⃗_1 - m_η/(m_η+M_α) p⃗_2 - q⃗.
In the center of mass frame, E_1 = E_2 ≡ E_d and p⃗_1 = - p⃗_2 ≡p⃗.
Since the S- and T-matrices are related by S = 1 - i T (2π)^4δ^(4)(p_1 + p_2 - p_f) N_d_1 N_d_2 N_f, we obtain the T-matrix in the center of mass frame as
T = 4 c E_d p⃗· ( χ⃗_d_1×χ⃗_d_2) F̃(p⃗),
where we have defined F̃(p⃗) as
F̃(p⃗) ≡∫d q⃗/(2π)^3 F(q⃗) ϕ̃^*_f( p⃗ - q⃗) .
With this T-matrix the cross section of the fusion with η production can be obtained as
dσ = 1/9∑_χ_d_1,χ_d_2, f |T|^2/(8 p_ c.m. E_d) (2π)^4δ^(4)(p_i - p_f) d p⃗_f/((2π)^3 2 E_f),
where p_ c.m. = | p⃗|, and we average over the initial spins and sum over all possible final states. Performing the integral over p⃗_f and taking the spin sum, we obtain
σ = 2π/9 c^2 p_ c.m.∑_f |F̃(p⃗)|^2 δ(E_i - E_f),
where we have used E_i = 2 E_d = E_f.
The total cross section can be written with the Green's function of the η meson. Using Eq. (<ref>), we have
∑_f |F̃(p⃗)|^2δ(E_f - E_i) = 1/(2π)^6∑_fδ(E_f - E_i) ∫ d q⃗_1 d r⃗_1 F(q⃗_1) e^-i(q⃗_1 - p⃗) ·r⃗_1ϕ^*_f(r⃗_1) ×∫ d q⃗_2 d r⃗_2 F^*(q⃗_2) e^i(q⃗_2 - p⃗) ·r⃗_2ϕ_f(r⃗_2).
The sum over the final states provides the inclusive spectrum and can be evaluated by using the Green's function method as follows: Using the formula
δ(x) = - 1/π Im 1/(x + iϵ)
for an infinitesimal quantity ϵ, we obtain
∑_fδ(E_f - E_i) ϕ_f^*(r⃗_1) ϕ_f(r⃗_2) = - 1/π Im∑_fϕ^*_f(r⃗_1) 1/(E_f - E_i + i ϵ) ϕ_f(r⃗_2) = - 1/π Im∑_fϕ^*_f(r⃗_1) 1/(Ĥ - E_i + i ϵ) ϕ_f(r⃗_2),
where we have used the fact that the wave function ϕ_f(r⃗) is an eigenfunction of the Hamiltonian Ĥ for the η–α system. Equation (<ref>) is a representation of the Green's operator (Ĥ - E_i + i ϵ)^-1 in terms of the eigenfunctions of the Hamiltonian. Thus, writing the Green's function in coordinate space as G(E_i ; r⃗_1, r⃗_2), we have
σ = 2/9 c^2 p_ c.m.× (-)Im∫ d r⃗_1 d r⃗_2 f(r⃗_1) e^i p⃗· r_1 G(E_i; r⃗_1, r⃗_2) f^*(r⃗_2) e^-i p⃗·r⃗_2,
where we have introduced the coordinate space expression of F(q⃗) as
f( r⃗) ≡∫dq⃗/(2π)^3 F(q⃗) e^- i q⃗·r⃗.
Further, by assuming a spherically symmetric form of f for simplicity, we replace f(r⃗) → f(r). By making use of the multipole expansion of G,
G(E_i ; r⃗_1, r⃗_2) = ∑_ℓ m Y_ℓ m(r̂_1) Y_ℓ m^*(r̂_2) G^ℓ (E_i ; r_1, r_2),
we obtain the final form of σ as
σ = - 8π/9 c^2 p_ c.m. Im∫ r_1^2 dr_1 r_2^2 dr_2 f(r_1) f^*(r_2) ×∑_ℓ (2 ℓ +1) j_ℓ(p r_1) G^ℓ (E_i ; r_1, r_2) j_ℓ(p r_2),
where j_ℓ is the spherical Bessel function. This expression is used for the numerical evaluation of the fusion cross section of Eq. (<ref>).
We can divide the total cross section σ into two parts, the conversion part σ_ conv and the escape part σ_ esc, as
σ = σ_ conv + σ_ esc,
based on the identity
Im G = {G^† Im U G } + {(1+G^†U^†) Im G_0 (1+UG) },
where G_0 is the free Green's function of η and U the η-nucleus potential <cit.>. The first term on the r.h.s. of Eq. (<ref>) represents the contribution of the η meson absorption due to the imaginary part of the η-nucleus potential U and is called the conversion part. The second term is the contribution from η mesons escaping from the nucleus and is called the escape part. The conversion part of the cross section σ_ conv is evaluated in the practical form written as
σ_ conv = - 8π/9 c^2 p_ c.m.∫ r_1^2 dr_1 r_2^2 dr_2 r_3^2 dr_3 f(r_1) f^*(r_3) × Im U_ opt(r_2) ∑_ℓ(2ℓ +1) j_ℓ(p r_1) × G^ℓ * (E_i ; r_1, r_2) G^ℓ (E_i ; r_2, r_3) j_ℓ(p r_3),
for the spherical η-α optical potential U_ opt(r). We calculate the escape part as σ_ esc = σ - σ_ conv. The conversion and escape parts have different energy dependences.
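To make the role of the Green's function in these expressions concrete, the following minimal sketch evaluates the ℓ=0 term of the spectrum by discretizing the η-α radial Hamiltonian on a grid and inverting (E - Ĥ + iϵ). The finite-difference scheme, the grid parameters, and the small ϵ are our illustrative assumptions, and the overall normalization (the constant c) is left arbitrary; the potential and form-factor parameters are those quoted below in the text:

```python
import numpy as np

hbarc = 197.327                            # MeV fm
m_eta, M_alpha = 547.86, 3727.38           # MeV
mu = m_eta * M_alpha / (m_eta + M_alpha)   # eta-alpha reduced mass

# radial grid for the reduced s-wave function u(r) = r R(r)
N, rmax = 400, 12.0                        # assumed grid size and radius (fm)
r = np.linspace(rmax / N, rmax, N)
dr = r[1] - r[0]

# optical potential (V0 + iW0) * rho_alpha(r)/rho_alpha(0), Gaussian density
V0, W0, a = -100.0, -10.0, 1.373           # MeV, MeV, fm (values from the text)
U = (V0 + 1j * W0) * np.exp(-(r / a) ** 2)

# finite-difference Hamiltonian: -(hbarc^2 / 2 mu) u'' + U u
t = hbarc ** 2 / (2.0 * mu * dr ** 2)
H = np.diag(2.0 * t + U) - t * (np.eye(N, k=1) + np.eye(N, k=-1))

# energy-independent source weight r f(r) j_0(p r) from the sigma formula;
# p ~ 1 GeV/c is the deuteron c.m. momentum, p0 = 500 MeV/c the form-factor scale
p, p0 = 1000.0 / hbarc, 500.0 / hbarc      # fm^-1
w = r * np.exp(-(p0 * r) ** 2 / 4.0) * np.sinc(p * r / np.pi) * dr

# spectrum S(E) ∝ -Im <w| (E - H + i eps)^-1 |w>, E relative to the threshold
for E in np.linspace(-40.0, 20.0, 13):
    G = np.linalg.inv((E + 1e-3j) * np.eye(N) - H)
    S = -np.imag(w @ G @ w)
    print(f"E = {E:6.1f} MeV   S = {S:.3e}")
```

An enhancement of S(E) at negative E signals an s-wave η-α bound state generated by the attractive potential, while the absorptive part W0 controls its width, in line with the discussion that follows.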
In the subthreshold energy region of η meson production, the total cross section σ is equal to the conversion part σ_ conv, since the energy of the η meson is insufficient for it to escape from the nucleus and all η mesons are eventually absorbed by the nucleus. Thus, the signal of the formation of the η bound state is expected to be observed in σ_ conv. As shown in Eq. (<ref>), the expression for the conversion cross section σ_ conv includes the Green's function G^ℓ, which is responsible for the peak and cusp structures in the spectrum arising as consequences of the η-nucleus interaction, such as bound-state formation with angular momentum ℓ. For higher partial waves ℓ without any bound states, the energy dependence of G^ℓ tends to be rather mild and almost flat in the energy region around the η production threshold. In addition, σ_ conv also includes Im U_ opt and, thus, the size of σ_ conv will be larger for a stronger absorptive potential. Consequently, as shown later, the flat contribution to the conversion spectrum grows in proportion to the strength of the absorptive potential. Above the threshold of η production, we also have the contribution from the escape processes. This escape part σ_ esc can be compared to the observed η production cross section in the d + d → η + α reaction.

We should comment here on the effects of the distortion of the initial deuterons on the calculated results. In Eq. (<ref>), the deuteron waves are introduced into the formula as plane waves. The distortion effects will modify the deuteron-deuteron relative wave function and could change the results. In the present case, however, we expect the effects to be minor in the final spectra shown in the next section, for the following reasons. The energy range of the final spectra considered in this article is very narrow and restricted to the region around the η production threshold. Thus, the distortion effects between the two deuterons are almost constant in this narrow energy range and are expected to change only the absolute value of the spectra by an almost constant factor. On the other hand, in the present analyses, we normalize the calculated results using the experimental data of the d + d → η + α reaction observed above the threshold, as we will see later. Thus, the final results are expected to be insensitive to the deuteron distortion. We have checked this statement qualitatively by introducing a spherical distortion factor to suppress the contributions from the small relative-coordinate region in Eqs. (<ref>) and (<ref>), and we have confirmed that the distortion effects on the spectra are almost constant to within an accuracy of around 15% in the energy region considered here. Hence, we can neglect the deuteron distortion effects in this article.

As for the numerical evaluation, we assume the η-α optical potential to have the following form,
U_ opt (r) = (V_0 + i W_0) ρ_α(r)/ρ_α(0),
where V_0 and W_0 are the parameters that determine the potential strength at the center of the α particle. The density of the α particle ρ_α (r) is assumed to have a Gaussian form,
ρ_α(r) = ρ_α(0) exp[ - r^2/a^2],
with the range parameter a=1.373 fm, which reproduces the R.M.S. radius of the α particle, 1.681 fm. The central density of this distribution is ρ_α(0) = 0.28 fm^-3 ≃ 1.6 ρ_0, with the normal nuclear density ρ_0=0.17 fm^-3. As the practical form of F, we assume in this article the Gaussian
F(p⃗) = (2π)^3/2 (2/p_0^2 π)^3/4 exp[ - p^2/p_0^2],
and we treat p_0 as a phenomenological parameter.
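These parametrizations are simple to encode and check. The short Python snippet below (our illustration; the variable names and the numerical checks are ours) reproduces the two numbers quoted above, the integrated density of about four nucleons implied by ρ_α(0) = 0.28 fm^-3 and the 1.681 fm R.M.S. radius:

```python
import numpy as np

a = 1.373                      # fm, Gaussian range parameter from the text

def rho_alpha(r, rho0=0.28):   # fm^-3; central density quoted in the text
    return rho0 * np.exp(-(r / a)**2)

def U_opt(r, V0=-100.0, W0=-10.0):
    # eta-alpha optical potential; (V0, W0) in MeV set the strength at r = 0
    return (V0 + 1j * W0) * rho_alpha(r) / rho_alpha(0.0)

# consistency checks of the numbers quoted in the text:
r = np.linspace(0.0, 15.0, 150001)
dr = r[1] - r[0]
norm = np.sum(4.0 * np.pi * r**2 * rho_alpha(r)) * dr
rms = np.sqrt(np.sum(4.0 * np.pi * r**4 * rho_alpha(r)) * dr / norm)
print(norm)   # ~ 4.0, the mass number of the alpha particle
print(rms)    # ~ 1.681 fm, the quoted R.M.S. radius
```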
We also show numerical results obtained by choosing other functional forms for the transition form factor F(p⃗) (f(r⃗)) in the Appendix, to estimate the dependence of our results on the functional form.

We make a few comments on the difficulties of developing a more microscopic model to evaluate the reaction rate. The first difficulty is the large momentum transfer. We need around 1 GeV/c momentum transfer at the η production threshold in the center-of-mass frame. The accuracy of microscopic wave functions in such a high-momentum-transfer region is not well investigated. Another difficulty arises from the fact that the reaction is a fusion reaction. In a fusion reaction, all particles in the system participate in the reaction equally and receive large momentum transfers. Because of these features of the reaction, we would need a sufficiently accurate wave function of the five-body system (four nucleons and one η meson) and a reliable description of the fusion and η production processes to perform a fully microscopic calculation. On the other hand, at the same time, we can also find an advantage in studying this reaction. We can make use of the experimental data of the d + d → η + α reaction just above the η production threshold <cit.>. These data provide important information on the reaction and can be used to fix the parameters included in the present model. It should also be mentioned that the energy spectrum of the η-α system is expected to be simple, since the system is small and may have only a few η bound levels even if they exist. The simple spectrum could be helpful in identifying the bound levels from the data.

§ NUMERICAL RESULTS AND DISCUSSIONS

The theoretical model described in section <ref> includes three parameters: the strengths of the real and imaginary parts of the η-α potential (V_0, W_0) defined in Eq. (<ref>), and the parameter p_0 appearing in Eq. (<ref>), which determines the properties of the function F, whose physical meaning is explained in Eqs. (<ref>) and (<ref>). We first study the sensitivity of the shape of the cross section to the parameter p_0. We show the calculated total cross section σ of Eq. (<ref>) for the p_0 = 400, 500, 600 MeV/c cases with (V_0, W_0) = -(100, 10) MeV as functions of the η excitation energy E_η - m_η in Fig. <ref>. The peak structure of the result with p_0 = 400 MeV/c is small and is located on top of an almost flat spectrum. We find that the structures appearing in the cross section are insensitive to p_0 and almost the same for the three p_0 values. Thus, we fix the value of this parameter to be p_0 = 500 MeV/c in the following numerical results and focus on the sensitivity of the structure of the spectrum to the η-α potential strength. The p_0 dependence of the total cross section can be understood by considering the change in the spatial extent of the f(r⃗) defined in Eq. (<ref>). For smaller p_0 values, the distribution of F in momentum space is more compact and that of f in coordinate space is wider. Thus, for smaller p_0 values, we have relatively larger contributions of higher partial waves ℓ of the η meson in the calculation of the cross section in Eq. (<ref>), which are expected not to have any structure as a function of energy around the threshold, as mentioned before. Hence, for smaller p_0 values, the small peak structure appears on top of the almost flat contribution in the spectrum. In Fig.
<ref>, we show again the calculated σ for the case with parameters (V_0, W_0)=-(100,10) MeV and p_0 = 500 MeV/c, to study the detailed structure of the spectrum. The three lines correspond to the total cross section σ, the conversion part σ_ conv, and the escape part σ_ esc. The η production threshold corresponds to E_η - m_η = 0, and the η-α bound states are expected to be produced in the subthreshold region E_η - m_η < 0. We can see in Fig. <ref> that the spectrum has non-trivial structures above the flat contribution, whose height is about 1 on the scale of the vertical axis. There is a clear peak at E_η - m_η ≃ -7 MeV, which corresponds to the formation of the η-α bound state. The calculated escape part is plotted with the data in Fig. <ref>. We have adjusted the height of the spectrum by the interaction strength c in Eq. (<ref>). The agreement between the spectrum shapes of the calculated results and the data above the threshold seems reasonably good for this potential parameter set. Hence, this observation implies that this parameter set, which predicts the formation of an η-α bound state, does not contradict the d + d → η + α data above threshold. We will make further comments on the comparison with the subthreshold data later in this section.

In Fig. <ref>, we show the results for the (V_0, W_0)=-(100,5), -(100,20) and -(100,40) MeV cases with p_0 = 500 MeV/c, to see the effects of the strength of the imaginary potential W_0 on the spectra. We can see from the figures that the structure of the spectra below threshold, E_η - m_η < 0, is sensitive to the value of the potential parameter W_0 and is expected to be a good observable for investigating the η-nucleus interaction. Indeed, the subthreshold spectrum with the small imaginary potential W_0 = -5 MeV clearly shows the existence of the bound state around E_η - m_η = -7 MeV as a peak structure. The peak becomes wider and its height lower for larger |W_0| values. At the same time, we can see that the structure of the spectra above the threshold, E_η - m_η > 0, is relatively insensitive to the imaginary part of the η-nucleus interaction and the value of W_0. We also show the calculated results for different V_0 and W_0 values in Figs. <ref>, <ref> and <ref>. It is interesting to note that the depth of the η-nucleus potential in the chiral unitary model of Refs. <cit.> is roughly close to -(50,40) MeV at normal nuclear density for the real and imaginary parts, respectively. The depth of the so-called tρ potential evaluated using the η-N scattering length a_η N = (0.28 + 0.19 i) fm of Ref. <cit.> is around -(30,20) MeV at normal nuclear density. It should be noted that the parameters V_0 and W_0 adopted here in Eq. (<ref>) indicate the potential strength at the center of the α particle, where ρ_α (0) ≃ 0.28 fm^-3, as defined in Eq. (<ref>). The η-α potential has also been studied microscopically in Refs. <cit.>, where various values of the η-α scattering length a_ηα were reported. We can compare our potential strength (V_0, W_0) with the scattering length simply through the Born approximation, a_ηα = -2 μ∫ dr r^2 U_ opt(r), with the η-α reduced mass μ. For example, based on this relation, some of the potential parameters used in this article correspond to the following scattering lengths:
* (V_0, W_0)=-(100,5) MeV ↔ a_ηα = 2.81 + 0.14 i fm
* (V_0, W_0)=-(70,20) MeV ↔ a_ηα = 1.97 + 0.56 i fm
* (V_0, W_0)=-(50,40) MeV ↔ a_ηα = 1.40 + 1.12 i fm.
It is interesting to compare these numbers with microscopic scattering lengths, for example the results of the microscopic calculation listed in Table IV of Ref. <cit.>. We can see that the potential strengths adopted in this article are within the range of uncertainties of the microscopic calculations. We find the same tendencies in these results as in Fig. <ref>. In Fig. <ref>, for the V_0 = -70 MeV case, we also find the bound-state peak at E_η - m_η ≃ -1 MeV, which is very clear for the small |W_0| case. In the results shown in Fig. <ref> for the V_0 = -50 MeV case, we find a cusp structure at the threshold energy, which becomes less prominent for a larger absorptive potential. In Fig. <ref>, for the weaker attractive case with V_0 = -30 MeV, we find a step-like structure at the threshold for the weakly absorptive case with W_0 = -5 MeV, which again becomes less prominent for a stronger absorptive potential with a larger |W_0| value. From these figures, we find that the total spectra around and below threshold, E_η - m_η ≤ 0, are sensitive to both the real and imaginary parts of the η-nucleus interaction described by the parameters (V_0, W_0) and are good observables for obtaining information on the η-nucleus interaction.

As shown in the conversion spectra in Figs. <ref>–<ref>, we have almost flat contributions in the whole energy region, as mentioned in Sect. <ref>. This flat contribution is considered to be a part of the background cross section of the experimental data. Thus, in the following numerical results we subtract the flat contribution from the conversion part to investigate the structures appearing in the spectrum. For this purpose, we subtract the minimum value of the conversion cross section in the energy range shown in the following figures. It is instructive to show the binding energies and widths of the η-α bound states obtained by solving the Klein-Gordon equation with the same η-α potentials used to calculate the spectra in Figs. <ref>–<ref>. The results are compiled in Table <ref>. We find that only the peak structures appearing in the spectra for the strongly attractive–weakly absorptive potential cases correspond to the existence of bound states. Other structures in the spectra may not indicate the existence of a bound state, though they definitely provide important information on the η-α interaction.

Now, we focus on the escape part of the spectrum, which appears above the threshold, E_η - m_η > 0, and can be compared to the experimental data of the d + d → η + α reaction, as already shown in Fig. <ref> for one parameter set. In Fig. <ref>, we show the calculated escape parts of the spectra for the V_0 = -100 MeV cases with different strengths of the absorptive potential, together with the experimental data. The calculated results are scaled to fit the experimental data by changing the interaction strength c. As we can see from Figs. <ref> and <ref>, the shape of the escape part is relatively insensitive to the value of the potential parameter W_0 in this case. We also show the calculated escape parts with different potential parameters in Figs. <ref>, <ref> and <ref> for different strengths of the attractive potential. We find that the shape of the escape part is not very sensitive to the V_0 and W_0 parameters, and that it could therefore be difficult to obtain detailed information on the η-nucleus potential from the escape parts alone.
Although there are some cases which can be safely ruled out by this comparison, such as the (V_0, W_0) = -(70,5) and -(50,5) MeV cases, in which we find a distinct threshold structure as shown in Figs. <ref> and <ref>, the overall shape of the calculated results is not very different from that of the experimental data in many cases. One of the best ways to obtain decisive information on the η-nucleus interaction may be a direct experimental observation of the η-nucleus spectrum below the threshold, where the spectrum shape is more sensitive to the potential parameters, as shown in Figs. <ref>–<ref>, especially if a bound state exists. Nevertheless, it could happen that nature does not provide any bound states, or that our experimental techniques are not sufficient to distinguish a less prominent peak due to a large width. In such cases, we could deduce information on the η-nucleus interaction from the absolute value of the spectrum below the η-nucleus threshold in comparison with the η production cross section in the same reaction above the threshold. Here we have a model which can be used to calculate both the conversion and escape parts on the same footing simultaneously. The conversion part describes the spectrum shape induced by η absorption in the nucleus, while the escape part gives the η production cross section. In our model, we leave the interaction strength of d + d → η + α given in Eq. (<ref>) as a free parameter. We can adjust this parameter so as to reproduce the η production data with the escape part of our spectrum, and then the conversion part is an outcome of the model. Since the conversion part of the spectrum below the threshold is more sensitive to the η-nucleus interaction parameters, by comparing the theoretical prediction and experimental data of the d + d reaction below the threshold we can deduce information on the interaction parameters.

For this purpose, we show the scaled theoretical total cross section of the d + d → (η + α) → X reaction in Figs. <ref>–<ref> for various values of the parameters (V_0, W_0). The absolute value of the cross section in these figures is determined so that the escape part of the cross section reproduces the experimental data of d + d → η + α, and the non-structural flat contributions described above are subtracted from the conversion part. We should mention here that the structure of the spectra in these figures is enhanced for smaller |W_0| values in Figs. <ref> and <ref>, while it is suppressed for smaller |W_0| values in Figs. <ref> and <ref>. This behavior can be understood by considering the origin of the structure of the spectrum. For the strongly attractive potential cases, since the structure is dominated by the peak due to the existence of the bound state, the peak structure becomes more prominent for the weaker imaginary part because of the smaller width of the bound state. On the other hand, for the weak real potential cases, since the structure is dominated by the absorptive processes, the structure is suppressed for the weaker absorptive potentials.

For the comparison of our calculated results with experimental data at subthreshold energies, we show only the conversion part, since in experiment the system energy is measured by observing a pion, a nucleon, and a residual nucleus emitted due to η absorption, and this process is counted in the conversion part of the calculation. In Figs. <ref>–<ref>, we show the calculated conversion parts of the spectra, which correspond to the η absorption processes. The obtained spectra shown in Figs.
<ref>–<ref> can be compared to the shape of the experimental spectra on the background reported in Ref. <cit.>, where the upper limit of the peak structure of the d + d → ^3He + n + π^0 reaction is 3–6 nb. This implies that the experimental upper limit for the semi-inclusive conversion spectrum of d + d → (η + α) → ^3He + N + π, including both the n + π^0 and p + π^- channels, can be estimated to be 3–6 nb × 3 = 9–18 nb because of the isospin symmetry of the decay channels of the η-α system. In this case, the peak structures of the bound states in Figs. <ref> and <ref> are fully rejected, and strongly attractive, weakly absorptive potentials are not allowed. In addition, the upper limit provides a strong restriction on the η-α potential, and only weak-potential cases with small |V_0| and |W_0| values, such as the (V_0, W_0)= -(50, 5) and -(30, 5) cases, could be allowed by the limit.

In order to understand the meaning of the experimental upper limit and the results of our analyses more clearly, we show a contour plot of the height of the structure appearing in the conversion spectra above the flat contribution in the V_0–W_0 plane in Fig. <ref>, where the acceptable region of (V_0, W_0) values can easily be read off for each value of the upper limit on the height of the structure in the conversion spectra, such as those shown in Figs. <ref>–<ref>. From this figure, the upper limit reported in Ref. <cit.> is found to provide very valuable information on the η-nucleus interaction and strongly suggests small |V_0| and |W_0| values. However, it should be noted that the results in Fig. <ref> are to be considered qualitative, since we have not taken into account the experimental energy resolution here. In addition, the shapes of the structures appearing in Figs. <ref>–<ref> are not simply symmetric peaks. We also need to understand the origin of the absorptive potential and the branching ratios of the various decay processes in order to compare our results to the data for a specific decay mode. Indeed, we have only considered the one-nucleon absorption process for the η meson here; multi-nucleon processes are also possible in reality <cit.>. Thus, to deduce quantitative information on the η-nucleus interaction, it is mandatory to make a detailed comparison between the calculated results and data, taking into account the realistic experimental energy resolution, the asymmetric shapes of the structures appearing in the conversion spectra, the branching ratios of the decay processes of the η bound states, and so on, especially in the subthreshold energy region, where the spectra may show a variety of structures depending on the η-nucleus interaction strength.

Finally, we mention the effects of a possible energy dependence of the optical potential. The optical potential is in general energy dependent, and this dependence could change the calculated spectra. To simulate the energy dependence of the η-α optical potential, we adopted the energy dependence of the η-nucleon scattering length of Ref. <cit.> and assumed the η-nucleon relative energy to be a quarter of that of the η-α system. The potential strengths are normalized at the threshold energy. The calculated results are shown in Figs. <ref> and <ref>. We have found that the energy dependence of the imaginary part of the optical potential mainly affects the strength of the conversion part of the spectra and changes the flat contribution of the conversion part into a sloped one. Hence, this effect could be important for more realistic analyses. Although there are many theoretical models for the η-nucleon scattering length, as compiled in Ref.
<cit.>, the qualitative features seem common to all models.

§ CONCLUSION

We have developed a theoretical model to evaluate the formation rate of η-α bound states in the d + d fusion reaction. Because of the difficulties due to the large momentum transfer, which is unavoidable in producing the η meson in the fusion reaction, we formulate the model in a phenomenological way. We have shown numerical results for cases with various sets of the η-nucleus interaction parameters. We have found that the data on η production above threshold provide important information on the absolute strength of the reaction when compared with the escape part of the calculated results. The upper limit on the formation cross section of the η mesic nucleus reported in Ref. <cit.> also provides significant information on the strength of the η-nucleus interaction. We would like to stress here that a simultaneous fit to both the d + d → η + α and d + d → (η + α) → X data using our model makes it possible to obtain valuable information on the η-nucleus interaction. The results of our analyses are compiled in Fig. <ref> as a contour plot in the V_0–W_0 plane. The present discussion is based simply on the value of the upper limit of the peak structure in the fusion reaction spectrum below the threshold. As for further work, to make the analyses performed in this article more quantitative, a direct comparison of the spectrum shapes between the calculated results and experiments will be necessary. For this purpose, we should take account of the experimental energy resolution in the calculation and consider other possibilities for the shapes of the spectral structure by improving the η-nucleus optical potential.

We acknowledge fruitful discussions with P. Moskal, W. Krzemien, and M. Skurzok. S. H. thanks A. Gal, N. G. Kelkar, S. Wycech, E. Oset and V. Metag for fruitful comments and discussions in Krakow. We also thank K. Itahashi and H. Fujioka for many discussions and collaborations on meson-nucleus systems. This work was partly supported by JSPS KAKENHI Grant Numbers JP24540274 and JP16K05355 (S.H.), 17K05443 (H.N.), JP15H06413 (N.I.), and JP17K05449 (D.J.) in Japan.

§ APPENDIX

In this appendix, we show the numerical results for different functional forms of the transition form factor. We consider the two functions defined as
f_1 (r⃗) = (m/2π)^1/2 e^-mr/r with m = p_0/√(6), and
f_2 (r⃗) = (λ^3/π)^1/2 e^-λ r with λ = p_0,
as different forms of the transition form factor. The Gaussian form defined in Eq. (<ref>) corresponds to
f(r⃗) = (p_0^2 /2π)^3/4 exp[ - p_0^2 r^2/4]
in coordinate space. The parameters m and λ are fixed to reproduce the same `root mean square radius' ( ∫ |f|^2 r^2 d r⃗)^1/2 as the Gaussian form factor; a short check of this matching is given after the reference list below. We show the calculated results with f_1(r⃗) in Figs. <ref>–<ref> and the results with f_2(r⃗) in Figs. <ref>–<ref>, which correspond to Figs. <ref>, <ref>, <ref>, <ref> obtained with f(r⃗) for p_0 = 500 MeV/c. These results are observables which can be compared with the appropriate experimental data. We have found that all results resemble each other and that the numerical results are robust against the choice of the functional form of the transition form factor.

Liu  Q. Haider and L. C. Liu, Phys. Lett. B 172, (1986) 257; Phys. Rev. C 34, (1986) 1845.
Chrien:1988gn  R. E. Chrien et al., Phys. Rev. Lett. 60, (1988) 2595.
Berger:1988ba  J. Berger et al., Phys. Rev. Lett. 61, (1988) 919.
Kohno:1989wn  M. Kohno and H. Tanabe, Phys. Lett. B 231, (1989) 219.
Kohno:1990xv  M. Kohno and H. Tanabe, Nucl. Phys. A 519, (1990) 755.
Chiang:1990ft  H. C. Chiang, E. Oset and L. C. Liu, Phys. Rev. C 44, (1991) 738.
Sokol  G. A. Sokol and V. A. Tryasuchev, Bull. Lebedev Phys. Inst. 1991, No. 4, 21 (1991) [Kratk. Soobshch. Fiz. 4, 23 (1991)]; G. A. Sokol, T. A. Aibergenov, A. V. Kravtsov, A. I. L'vov, and L. N. Pavlyuchenko, Fizika B 8, 85 (1999).
Johnson:1993zy  J. D. Johnson et al., Phys. Rev. C 47, (1993) 2571.
Waas:1997pe  T. Waas and W. Weise, Nucl. Phys. A 625, (1997) 287.
Tsushima_Saito  K. Tsushima, D. H. Lu, A. W. Thomas and K. Saito, Phys. Lett. B 443, (1998) 26; K. Saito, K. Tsushima, D. H. Lu and A. W. Thomas, Phys. Rev. C 59, (1999) 1203.
Hayano:1998sy  R. S. Hayano, S. Hirenzaki and A. Gillitzer, Eur. Phys. J. A 6, (1999) 99.
Inoue:2002xw  T. Inoue and E. Oset, Nucl. Phys. A 710, (2002) 354.
GarciaRecio:2002cu  C. Garcia-Recio, J. Nieves, T. Inoue and E. Oset, Phys. Lett. B 550, (2002) 47.
Jido  D. Jido, H. Nagahiro and S. Hirenzaki, Phys. Rev. C 66 (2002) 045202; Nucl. Phys. A 721, (2003) 665.
Nagahiro:2003iv  H. Nagahiro, D. Jido and S. Hirenzaki, Phys. Rev. C 68, (2003) 035205.
Pfeiffer:2003zd  M. Pfeiffer et al., Phys. Rev. Lett. 92, (2004) 252001.
Hanhart:2004qs  C. Hanhart, Phys. Rev. Lett. 94, (2005) 049101.
Nagahiro:2005gf  H. Nagahiro, D. Jido and S. Hirenzaki, Nucl. Phys. A 761, (2005) 92.
Kelkar:2006zs  N. G. Kelkar, K. P. Khemchandani and B. K. Jain, J. Phys. G 32, (2006) L19.
Jido:2008ng  D. Jido, E. E. Kolomeitsev, H. Nagahiro and S. Hirenzaki, Nucl. Phys. A 811, (2008) 158.
Song:2008ss  C. Y. Song, X. H. Zhong, L. Li and P. Z. Ning, Europhys. Lett. 81, (2008) 42002.
Budzanowski:2008fr  A. Budzanowski et al. [COSY-GEM Collaboration], Phys. Rev. C 79, (2009) 012201; V. Jha et al. [GEM Collaboration], Int. J. Mod. Phys. A 22, (2007) 596.
Nagahiro:2008rj  H. Nagahiro, D. Jido and S. Hirenzaki, Phys. Rev. C 80, (2009) 025205.
Haider:2015fea  Q. Haider and L. C. Liu, Int. J. Mod. Phys. E 24, No. 10, (2015) 1530009.
Khemchandani:2001kp  K. P. Khemchandani, N. G. Kelkar and B. K. Jain, Nucl. Phys. A 708 (2002) 312.
Khemchandani:2003dk  K. P. Khemchandani, N. G. Kelkar and B. K. Jain, Phys. Rev. C 68 (2003) 064610.
Khemchandani:2007ta  K. P. Khemchandani, N. G. Kelkar and B. K. Jain, Phys. Rev. C 76 (2007) 069801.
Xie:2016zhs  J. J. Xie, W. H. Liang, E. Oset, P. Moskal, M. Skurzok and C. Wilkin, Phys. Rev. C 95 (2017) no. 1, 015202.
Krzemien:2015fsa  W. Krzemien, P. Moskal and M. Skurzok, Acta Phys. Polon. B 46, (2015) 757.
Krzemien:2014qma  W. Krzemien, P. Moskal and M. Skurzok, Few Body Syst. 55, (2014) 795.
Krzemien:2014ywa  W. Krzemien et al. [WASA-at-COSY Collaboration], Acta Phys. Polon. B 45, (2014) 689.
Adlarson:2013xg  P. Adlarson et al. [WASA-at-COSY Collaboration], Phys. Rev. C 87, (2013) 035204.
Skurzok:2016fuv  M. Skurzok et al. [WASA-at-COSY Collaboration], Acta Phys. Polon. B 47, (2016) 503.
Skurzok:2011aa  M. Skurzok, P. Moskal and W. Krzemien, Prog. Part. Nucl. Phys. 67, (2012) 445.
Adlarson:2016dme  P. Adlarson et al., Nucl. Phys. A 959 (2017) 102.
Kelkar:2016uwa  N. G. Kelkar, Eur. Phys. J. A 52 (2016) no. 10, 309.
Frascaria:1994va  R. Frascaria et al., Phys. Rev. C 50, (1994) 537.
Willis:1997ix  N. Willis et al., Phys. Lett. B 406, (1997) 14.
Wronska:2005wk  A. Wronska et al., Eur. Phys. J. A 26, (2005) 421.
Green  O. Morimatsu and K. Yazaki, Nucl. Phys. A 435, (1985) 727; A 483, (1988) 493.
Bhalerao:1985cr  R. S. Bhalerao and L. C. Liu, Phys. Rev. Lett. 54 (1985) 865.
Rakityansky:1996gw  S. A. Rakityansky, S. A. Sofianos, M. Braun, V. B. Belyaev and W. Sandhas, Phys. Rev. C 53 (1996) R2043, doi:10.1103/PhysRevC.53.R2043.
Kelkar:2007pn  N. G. Kelkar, Phys. Rev.
Lett. 99 (2007) 210403, doi:10.1103/PhysRevLett.99.210403 [arXiv:0711.4066 [quant-ph]].
Fix:2017ani  A. Fix and O. Kolesnikov, Phys. Lett. B 772 (2017) 663.
Kulpa:1998vj  J. Kulpa and S. Wycech, Acta Phys. Polon. B 29 (1998) 3077.
Cieply:2013sya  A. Cieplý and J. Smejkal, Nucl. Phys. A 919 (2013) 46.
Cieply:2013sga  A. Cieplý, E. Friedman, A. Gal and J. Mareš, Nucl. Phys. A 925 (2014) 126.
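As promised in the Appendix, here is a short consistency check (ours, not part of the original text) of the root-mean-square-radius matching that fixes m and λ. For the three normalized form factors one finds in closed form:

```latex
% <r^2> = \int |f|^2 \, r^2 \, d^3r for each normalized form factor:
\langle r^2\rangle_{f}   = \frac{3}{p_0^{2}} \quad (\text{Gaussian}), \qquad
\langle r^2\rangle_{f_1} = \frac{1}{2m^{2}} \quad (\text{Yukawa-type}), \qquad
\langle r^2\rangle_{f_2} = \frac{3}{\lambda^{2}} \quad (\text{exponential}).
```

Equating ⟨r^2⟩_f_1 and ⟨r^2⟩_f_2 to ⟨r^2⟩_f of the Gaussian immediately gives m = p_0/√6 and λ = p_0, as quoted in the Appendix.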
http://arxiv.org/abs/1708.07692v1
{ "authors": [ "N. Ikeno", "H. Nagahiro", "D. Jido", "S. Hirenzaki" ], "categories": [ "nucl-th", "hep-ph", "nucl-ex" ], "primary_category": "nucl-th", "published": "20170825112712", "title": "$η$-nucleus interaction from the $d+d$ reaction around the $η$ production threshold" }
A. Razmadze Mathematical Institute, Iv. Javakhishvili Tbilisi State University, Tbilisi, Georgia; Institute of Quantum Physics and Engineering Technologies, Georgian Technical University, Tbilisi, Georgia; National Research Nuclear University, MEPhI (Moscow Engineering Physics Institute), Moscow, Russia; Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia

The 7-dimensional family 𝔓_X of so-called mixed X-states of 2-qubits is considered. Two types of stratification of the 2-qubit X-state space, i.e., partitions of 𝔓_X into orbit types with respect to the adjoint group actions, one under the global unitary group G_X ⊂ SU(4) and another under the action of the local unitary group LG_X ⊂ G_X, are described. The equations and inequalities in the invariants of the corresponding groups, determining each stratification component, are given.

On the stratifications of 2-qubits X-state space
Arsen Khvedelidze^1,2,3,4 ([email protected])  Astghik Torosyan^4 ([email protected])
=============================================================================

§ INTRODUCTION

The understanding of a symmetry that a physical system possesses, as well as of this symmetry's breaking pattern, allows us to explain uniquely a wide variety of phenomena in many areas of physics, including elementary particle physics and condensed matter physics <cit.>. The mathematical formulation of symmetries related to a Lie group action consists of the detection of the stratification of the representation space of the corresponding symmetry group. In dealing with closed quantum systems, the symmetries are realized by unitary group actions, and the quantum state space plays the role of the symmetry group representation space. Below, having in mind these observations, we will outline examples of the stratifications occurring for a quantum system composed of a pair of 2-level systems, two qubits. We will analyze symmetries associated with two subgroups of the special unitary group SU(4). More precisely, we will consider the 7-dimensional subspace 𝔓_X of the generic 2-qubit state space, the family of X-states (for the definition see <cit.>, <cit.> and references therein), and reveal two types of its partition into sets of points having the same symmetry type. The primary stratification originates from the action of the invariance group of the X-states, named the global unitary group G_X ⊂ SU(4), whereas the secondary one is due to the action of the so-called local group LG_X ⊂ G_X of the X-states.

§ X-STATES AND THEIR SYMMETRIES

The mixed 2-qubit X-states can be defined on the basis of a purely algebraic consideration. The idea is to fix the subalgebra 𝔤_X := 𝔰𝔲(2)⊕𝔰𝔲(2)⊕𝔲(1) ⊂ 𝔰𝔲(4) of the algebra 𝔰𝔲(4) and define the density matrix of the X-states as
ϱ_X = 1/4(I + i𝔤_X).
In order to coordinatize the X-state space we use the tensorial basis for the 𝔰𝔲(4) algebra, σ_μν = σ_μ⊗σ_ν, μ,ν = 0,1,2,3.
It consists of all possible tensor products of two copies of the Pauli matrices and the 2×2 unit matrix, σ_μ = (I, σ_1, σ_2, σ_3), which we order as follows (see the details in <cit.>):
λ_1, …, λ_15 = i/2(σ_x0, σ_y0, σ_z0, σ_0x, σ_0y, σ_0z, σ_xx, σ_xy, σ_xz, σ_yx, σ_yy, σ_yz, σ_zx, σ_zy, σ_zz).
In this basis the 7-dimensional subalgebra 𝔤_X is generated by the subset
α_X = ( λ_3, λ_6, λ_7, λ_8, λ_10, -λ_11, λ_15),
and thus the unit-norm X-state density matrix is given by the decomposition:
ϱ_X = 1/4(I + 2 i ∑_λ_k ∈α_X h_k λ_k).
The real coefficients h_k are subject to the polynomial inequalities ensuring the semi-positivity of the density matrix, ϱ_X ≥ 0:
𝔓_X = {h_i ∈ℝ^7 | (h_3± h_6)^2+(h_8± h_10)^2+(h_7± h_11)^2 ≤ (1± h_15)^2 }.
Using the definition (<ref>) one can conclude that the X-state space 𝔓_X is invariant under the 7-parameter group, G_X := exp (𝔤_X) ⊂ SU(4):
g ϱ_X g^† ∈𝔓_X, ∀ g ∈ G_X.
The group G_X plays the same role for the X-states as the special unitary group SU(4) plays for a generic 4-level quantum system, and thus is termed the global unitary group of the X-states. According to <cit.>, the group G_X admits the representation:
G_X = P_π ( [ e^-i ω_15 SU(2) 0; 0 e^i ω_15 SU(2)^' ]) P_π,
P_π = ( [ 1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0 ]),
where P_π is the permutation matrix exchanging the second and fourth basis vectors. Correspondingly, the local unitary group of the X-states is
LG_X = P_π ( exp(iϕ_1σ_3/2) ⊗ exp(iϕ_2σ_3/2) ) P_π ⊂ G_X.

§ GLOBAL ORBITS AND STATE SPACE DECOMPOSITION

Now we give a classification of the global G_X-orbits according to their dimensionality and isotropy group. Every density matrix ϱ_X can be diagonalized by some element of the global group G_X. In other words, all global G_X-orbits can be generated from the density matrices whose eigenvalues form the partially ordered simplex Δ_3, depicted in Figure <ref>. The tangent space to the G_X-orbits is spanned by the linearly independent vectors among
t_k = [λ_k, ϱ_X], λ_k ∈α_X.
The number of independent vectors t_k determines the dimensionality of the G_X-orbits and is given by the rank of the 7×7 Gram matrix:
𝒢_kl = 1/2 (t_k t_l).
The Gram matrix (<ref>) has three zero eigenvalues and two doubly degenerate eigenvalues:
μ_± = -1/8((h_3± h_6)^2+(h_8± h_10)^2+(h_7± h_11)^2).
Correspondingly, the G_X-orbits have dimensionality of either 4, 2 or 0. The orbits of maximal dimensionality, dim(𝒪) = 4, are characterized by non-vanishing μ_± ≠ 0 and consist of the set of density matrices with a generic spectrum, σ(ϱ_X) = (r_1,r_2,r_3,r_4). If the density matrices obey the equations
h_6 = ± h_3, h_10 = ± h_8, h_11 = ± h_7,
they belong to the so-called degenerate orbits, dim(𝒪)_± = 2. The latter are generated from the matrices which have a doubly degenerate spectrum of the form σ(ϱ_X) = (p,p,r_3,r_4) or σ(ϱ_X) = (r_1,r_2,q,q), respectively. Finally, there is a single zero-dimensional orbit, dim(𝒪)_0 = 0, corresponding to the maximally mixed state ϱ_X = 1/4 I. Considering the diagonal representative of a generic G_X-orbit, one can be convinced that its isotropy group is
H = P_π ( [ e^iω exp(iγ_1σ_3/2) 0; 0 e^-iω exp(iγ_2σ_3/2) ]) P_π,
while for a diagonal representative with a doubly degenerate spectrum the isotropy group is given by one of two groups:
H_+ = P_π ( [ e^iω SU(2) 0; 0 e^-iω exp(iγ_2σ_3/2) ]) P_π,
H_- = P_π ( [ e^iω exp(iγ_1σ_3/2) 0; 0 e^-iω SU(2)^' ]) P_π.
For the single zero-dimensional orbit the isotropy group H_0 coincides with the whole invariance group, H_0 = G_X. Therefore, the isotropy group of any element of the G_X-orbits belongs to one of these conjugacy classes: [H], [H_±] or [H_0]. Moreover, a straightforward analysis shows that [H_+]=[H_-].
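To make the parametrization concrete, the following Python fragment (ours; the sample values of the h_k are arbitrary illustrative numbers) builds ϱ_X in the tensorial basis and checks the semi-positivity condition numerically:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [s0, sx, sy, sz]

def lam(mu, nu):
    # lambda = (i/2) sigma_mu (x) sigma_nu, the tensorial su(4) basis
    return 0.5j * np.kron(pauli[mu], pauli[nu])

# generators of g_X paired with (h3, h6, h7, h8, h10, h11, h15);
# note the minus sign on lambda_11 = (i/2) sigma_yy in alpha_X
alpha_X = [lam(3, 0), lam(0, 3), lam(1, 1), lam(1, 2),
           lam(2, 1), -lam(2, 2), lam(3, 3)]

def rho_X(h):
    return 0.25 * (np.eye(4) + 2j * sum(hk * L for hk, L in zip(h, alpha_X)))

h = (0.1, 0.2, 0.05, 0.1, 0.0, 0.05, 0.3)     # satisfies the inequalities
print(np.linalg.eigvalsh(rho_X(h)))            # all >= 0 iff h lies in P_X
```

The eigenvalues are non-negative exactly when the quoted polynomial inequalities defining 𝔓_X hold, which makes this a convenient numerical cross-check of the analytic conditions.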
Hence, any point ϱ∈𝔓_X belongs to one of the three above-mentioned types of G_X-orbits [The orbit type [ϱ] of a point ϱ∈𝔓_X is given by the conjugacy class of the isotropy group of the point ϱ, i.e., [ϱ] = [G_ϱ_X].], denoted afterwards as [H_t], t = 1,2,3. For a given H_t, the associated stratum 𝔓_[H_t], defined as the set of all points whose stabilizer is conjugate to H_t,
𝔓_[H_t] := { y ∈ 𝔓_X | [y] = [H_t] },
determines the sought-for decomposition of the state space 𝔓_X into strata according to the orbit types:
𝔓_X = ⋃_t 𝔓_[H_t].
The strata 𝔓_[H_t] are determined by the following sets of equations and inequalities:
(1) 𝔓_[H] := {h_i ∈𝔓_X | μ_+ ≠ 0, μ_- ≠ 0 },
(2) 𝔓_[H_+]∪𝔓_[H_-] := {h_i ∈𝔓_X | μ_+ = 0, μ_- ≠ 0 }∪{h_i ∈𝔓_X | μ_+ ≠ 0, μ_- = 0 },
(3) 𝔓_[H_0] := {h_i ∈𝔓_X | μ_+ = 0, μ_- = 0 }.

§ LOCAL ORBITS AND STATE SPACE DECOMPOSITION

Analogously, one can build up the X-state space decomposition associated with the action of the local group LG_X. For this action the dimensionality of the LG_X-orbits is given by the rank of the corresponding 2×2 Gram matrix constructed out of the vectors t_3 and t_6. Since its eigenvalues read
μ_1 = -1/8((h_8+h_10)^2+(h_7+h_11)^2), μ_2 = -1/8((h_8-h_10)^2+(h_7-h_11)^2),
the LG_X-orbits are either generic ones with dimensionality dim(𝒪_L) = 2, or degenerate, dim(𝒪_L)_± = 1, or exceptional ones, dim(𝒪_L)_0 = 0. The LG_X-orbits can be collected into strata according to their orbit type. There are three types of strata, associated with the “local” isotropy subgroups LH ⊂ LG_X. Correspondingly, one can define the following “local” strata of the state space:
* the generic stratum, 𝔓^L_[I], which has the trivial isotropy type [I] and is represented by the inequalities:
𝔓^L_[I] := {h_i ∈𝔓_X | μ_1 ≠ 0, μ_2 ≠ 0 },
* the degenerate stratum, 𝔓^L_[H_L^±], the collection of orbits whose type is [H_L^±], with the subgroup either H_L^+ = I×exp(iuσ_3) or H_L^- = exp(ivσ_3)× I. The stratum-defining equations read, respectively:
h_10 = ± h_8, h_11 = ± h_7,
* the exceptional stratum, 𝔓^L_[LG_X], of the type [LG_X], determined by the equations h_11 = h_10 = h_8 = h_7 = 0.
Therefore, the local group action prescribes the following stratification of the 2-qubit X-state space (a small classification helper illustrating both sets of conditions is sketched after the reference list below):
𝔓_X = 𝔓^L_[I]∪𝔓^L_[H^+_L]∪𝔓^L_[H^-_L]∪𝔓^L_[LG_X].

§ CONCLUDING REMARKS

In the present article we describe the stratification of the 2-qubit X-state space associated with the adjoint action of the global and local unitary groups. The global unitary symmetry is related to the properties of a system as a whole, while the local symmetries comprise information on the entanglement, cf. <cit.>. In an upcoming publication, based on the introduced stratification of the state space, we plan to analyze the interplay between these two symmetries and, in particular, to determine the entanglement/separability characteristics of every stratum.

Michel  L. Michel et al., "Symmetry, invariants, topology", Physics Reports 341, 7 (2001).
YuEberly2007  T. Yu and J. H. Eberly, Quantum Inf. Comput. 7, 459-468 (2007).
AAJMS2017  A. Khvedelidze and A. Torosyan, Journal of Mathematical Sciences 224, 349-359 (2017).
Chen2010  X. Chen, Z.-C. Gu and X.-G. Wen, Phys. Rev. B 82, 155138 (2010).
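As promised above, here is a small Python helper (ours; the tolerance handling and the sample point are illustrative) that evaluates the Gram-matrix eigenvalue combinations and returns the global and local orbit types of a given X-state. Only whether each eigenvalue vanishes matters for the orbit type, so the overall -1/8 factor is dropped:

```python
def orbit_types(h, tol=1e-12):
    """Return (global, local) orbit types for h = (h3,h6,h7,h8,h10,h11,h15)."""
    h3, h6, h7, h8, h10, h11, h15 = h
    # |mu_+|, |mu_-| up to the constant 1/8 factor (global action)
    mu_p = (h3 + h6)**2 + (h8 + h10)**2 + (h7 + h11)**2
    mu_m = (h3 - h6)**2 + (h8 - h10)**2 + (h7 - h11)**2
    # |mu_1|, |mu_2| up to the constant factor (local action)
    mu1 = (h8 + h10)**2 + (h7 + h11)**2
    mu2 = (h8 - h10)**2 + (h7 - h11)**2
    glob = ('[H]' if mu_p > tol and mu_m > tol else
            '[H_0]' if mu_p <= tol and mu_m <= tol else '[H_+]=[H_-]')
    loc = ('[I]' if mu1 > tol and mu2 > tol else
           '[LG_X]' if mu1 <= tol and mu2 <= tol else '[H_L^+/-]')
    return glob, loc

print(orbit_types((0.1, 0.2, 0.05, 0.1, 0.0, 0.05, 0.3)))
# -> ('[H]', '[I]'): a point in the generic stratum of both decompositions
```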
http://arxiv.org/abs/1708.07438v1
{ "authors": [ "Arsen Khvedelidze", "Astghik Torosyan" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170824143154", "title": "On the stratifications of 2-qubits X-state space" }
1University of Bern, Center for Space and Habitability, Gesellschaftsstrasse 6, CH-3012, Bern, Switzerland. Emails: [email protected], [email protected]
2Johns Hopkins University, Department of Earth and Planetary Sciences, 301 Olin Hall, Baltimore, MD 21218, U.S.A.
3Johns Hopkins University, Department of Physics and Astronomy, Bloomberg Center for Physics and Astronomy, Baltimore, MD 21218, U.S.A.
We present a novel generalization of the two-stream method of radiative transfer, which allows for the accurate treatment of radiative transfer in the presence of strong infrared scattering by aerosols. We prove that this generalization involves only a simple modification of the coupling coefficients and transmission functions in the hemispheric two-stream method. This modification originates from allowing the ratio of the first Eddington coefficients to depart from unity. At the heart of the method is the fact that this ratio may be computed once and for all over the entire range of values of the single-scattering albedo and scattering asymmetry factor. We benchmark our improved two-stream method by calculating the fraction of flux reflected by a single atmospheric layer (the reflectivity) and comparing these calculations to those performed using a 32-stream discrete-ordinates method. We further compare our improved two-stream method to the two-stream source function (16 streams) and delta-Eddington methods, demonstrating that it is often more accurate at the order-of-magnitude level. Finally, we illustrate its accuracy using a toy model of the early Martian atmosphere hosting a cloud layer composed of carbon-dioxide ice particles. The simplicity of implementation and accuracy of our improved two-stream method render it suitable for implementation in three-dimensional general circulation models. In other words, our improved two-stream method has the ease of implementation of a standard two-stream method, but the accuracy of a 32-stream method.
§ INTRODUCTION

Two-stream solutions have been studied for decades in the context of atmospheres and come in various flavors <cit.>. They originate from a neat mathematical trick: instead of solving the radiative transfer equation for the intensity, one solves for its moments. Besides the loss of angular information, the two-stream solution performs poorly when aerosols reside in the model atmosphere. A longstanding result, based on geomorphic evidence, that early Mars was able to harbor liquid water on its surface (see <cit.> for a review), due to the scattering greenhouse effect (e.g., <cit.>) mediated by carbon-dioxide ice clouds <cit.>, was called into question because the original two-stream calculation over-estimated the degree of warming <cit.>. Mars teaches us the lesson that the choice of radiative transfer method may alter the qualitative conclusion of a study, and inspires us to improve the accuracy of the two-stream solution in order to apply it broadly to exoplanetary atmospheres. The main source of error appears to be the over-estimation of the amount of infrared radiation reflected by aerosols, which leads to an over-estimation of the scattering greenhouse effect. On Earth, this effect is subdued because water clouds are strong infrared absorbers but weak infrared scatterers <cit.>. On Mars, it is pronounced because carbon-dioxide ice clouds scatter infrared radiation strongly <cit.>. Figure <ref> illustrates these differences. In general, we expect the two-stream method to perform poorly in the presence of medium-sized to large aerosols that have single-scattering albedos between 0.5 and 1 in the infrared range of wavelengths. This shortcoming motivates us to design an improved two-stream method that calculates the amount of reflected radiation accurately. Operationally, we accomplish this feat by revisiting the formalism surrounding the Eddington coefficients previously elucidated by <cit.>. Specifically, we relax the assumption that the first Eddington coefficients [As already noted in <cit.>, there is no consensus on how to number/order these Eddington coefficients, and we use the convention of <cit.>.]
are equal and allow their ratio to depart from unity. The top-left panel of Figure <ref> shows our calculations for this ratio, E. We also show calculations for single atmospheric layers populated by aerosols with fixed values of the single-scattering albedo (ω_0) and scattering asymmetry factor (g). We consider single atmospheric layers, because if one attains understanding (and accuracy) for a single layer, then it straightforwardly generalizes to an arbitrary number of layers in a model atmosphere. For the sake of discussion, we refer to small, medium-sized and large aerosols as having ω_0=0.1 and g=0 (isotropic scattering), ω_0=g=0.5 and ω_0=g=0.9 (predominantly forward scattering), respectively. We will explore other choices later. To simplify terminology, we term the fraction of flux reflected and transmitted by an atmospheric layer the “reflectivity" and “transmissivity", respectively. In the example of a layer populated by medium-sized aerosols (top-right panel of Figure <ref>), we see that the original, hemispheric two-stream solution (e.g., <cit.>) over-estimates the true solution, which is computed using a 32-stream discrete-ordinates method via the open-source computer code of <cit.>. Our improved two-stream solution with E=1 matches the reflectivity computed by the hemispheric two-stream method well; deviations are due to modifications we have made to the transmission function, as we will discuss. As E is varied from 1 to 1.4, we see that the reflectivity varies rather sensitively. The true solution is matched by a value of E between 1.1 and 1.2. This example illustrates that small variations of E from unity allow us to improve the accuracy of the two-stream solution drastically. Following through on this property of E, the bottom-left panel of Figure <ref> shows calculations of the reflectivity for atmospheric layers with small, medium-sized and large aerosols. For each calculation, the value of E has been chosen to match the reflectivity by construction; in <ref>, we will explain in detail how this is accomplished. For all three calculations, the original, hemispheric two-stream method over-estimates the reflectivity by ∼ 10%. For completeness, we also show the transmissivity associated with these three examples (bottom-right panel of Figure <ref>), where we see that the discrepancies between the hemispheric two-stream calculations and the true solutions are less pronounced. The overarching goal of the present study is to elucidate the theory behind the improvement of the two-stream method and the calculation of E (presented in <ref>). We further demonstrate that our improved two-stream method rivals or betters the two-stream source function method of <cit.>, which is widely implemented in the exo-atmospheres literature, in both accuracy and simplicity of implementation (in <ref>). We discuss the implications of our findings in <ref>. The present study is the fourth in a series of papers devoted to constructing analytical models for exoplanetary atmospheres to both aid in the development of intuition and provide algorithms for computation, following <cit.> (for shallow-water fluid dynamics), <cit.> (for two-stream radiative transfer) and <cit.> (for equilibrium chemistry).

§ GENERALIZING THE TWO-STREAM FORMALISM

In the two-stream formalism, the reflectivity and transmissivity of a single layer are, respectively <cit.>,
f_ T = ( ζ_-^2 - ζ_+^2 ) T/[( ζ_- T)^2 - ζ_+^2],  f_ R = ζ_- ζ_+ ( 1 - T^2 ) /[ζ_+^2 - ( ζ_- T)^2],
where ζ_± are the coupling coefficients and T is the transmission function. The coupling coefficients
relate the relative strength of transmission versus reflection, and generally depend on ω_0 and g. When the layer is transparent, we have T=1, f_ T=1 and f_ R=0. When the layer is opaque, we have T=0, f_ T=0 and f_ R = ζ_-/ζ_+. These asymptotic limits suggest that plausible improvements to the two-stream solution are accomplished by modifying the coupling coefficients. In the present study, we focus on the reflectivity and transmissivity of a single layer, and not its emissivity (blackbody emission), because previous studies dealing with aerosols have shown that the largest sources of error originate from the reflectivity <cit.>. It has been previously shown that the expressions for the coupling coefficients with the hemispheric closure conserve energy by construction <cit.>. To modify the coupling coefficients, we need to understand their physical origin. In sacrificing accuracy for simplicity, the two-stream solutions contain an ambiguity: ratios of the moments of the intensity (mean intensity, flux, radiation pressure) are assumed to be constants known as Eddington coefficients <cit.>. Two of these Eddington coefficients are set to be equal by enforcing the condition of radiative equilibrium in the limit of pure scattering <cit.>. This occurs when the single-scattering albedo is exactly unity. However, the two-stream solutions formally break down at exactly ω_0=1 <cit.> and this limit is rarely reached in practice, which renders this condition academic. If we relax this condition, then the coupling coefficients have a more general form,
ζ_± ≡ 1/2[ 1 ±√((E - ω_0)/(E - ω_0 g))],
where E is the ratio of the Eddington coefficients. In the original two-stream solutions with the hemispheric (or hemi-isotropic) closure, we have E=1. The first improvement is to use
E = ω_0 ( 1 - g r^2 )/(1 - r^2)
to compute E ≠ 1 values for use in the coupling coefficients. Here, we have r ≡ (1-R_∞)/(1+R_∞), and R_∞ is the asymptotic value of the reflectivity when a layer is opaque (τ≫ 1). The preceding expression is derived by setting R_∞ = ζ_-/ζ_+ and using equation (<ref>). It is worth emphasizing that this improvement ensures the asymptotic reflectivity matches the true solution by construction, as long as we have a way of computing R_∞. In Figure <ref> (top-left panel), the grid of values for E is obtained by computing R_∞ using the code of <cit.>, which uses the discrete-ordinates method of radiative transfer <cit.>. We use 32-stream calculations as the ground truth. It should be emphasized that this is the entire parameter space of interest for aerosols embedded in atmospheres. Part of the simplicity of the method is that the function E(ω_0, g) only needs to be computed once. It may then be stored and used for all future calculations. The second improvement we make is to modify the transmission function. In the limit of pure absorption, the exact solution of the radiative transfer equation yields T = 2 E_3(τ), an expression that formally integrates over all angles <cit.>. Here, E_3 is the exponential integral of the third order <cit.> and τ is the optical depth of the atmospheric layer. For transmission, we use T = 2 E_3 ( τ^'), where τ^' = τ√((1-ω_0)(1-ω_0 g)) and the additional factor derives from the hemispheric two-stream solution <cit.>. For reflection, we use T = 2 E_3( τ^'') with τ^'' = τ√((E-ω_0)(E-ω_0 g)).
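The complete recipe above fits in a few lines of code. The following Python sketch (ours; it assumes R_∞ is supplied externally, e.g. interpolated from a precomputed grid of 32-stream results, and uses scipy's generalized exponential integral for E_3) evaluates the improved two-stream reflectivity and transmissivity of a single layer:

```python
import numpy as np
from scipy.special import expn   # expn(3, x) is the exponential integral E_3

def improved_two_stream(tau, w0, g, R_inf):
    """Layer transmissivity f_T and reflectivity f_R, improved two-stream.

    R_inf is the asymptotic (opaque-layer) reflectivity, assumed to come
    from a stored table of ground-truth multi-stream calculations.
    """
    # ratio of Eddington coefficients from the asymptotic reflectivity
    r = (1.0 - R_inf) / (1.0 + R_inf)
    E = w0 * (1.0 - g * r**2) / (1.0 - r**2)
    # generalized coupling coefficients zeta_-, zeta_+
    s = np.sqrt((E - w0) / (E - w0 * g))
    zm, zp = 0.5 * (1.0 - s), 0.5 * (1.0 + s)
    # transmission functions T = 2 E_3(tau'), with the two scaled depths
    T_t = 2.0 * expn(3, tau * np.sqrt((1.0 - w0) * (1.0 - w0 * g)))
    T_r = 2.0 * expn(3, tau * np.sqrt((E - w0) * (E - w0 * g)))
    f_T = (zm**2 - zp**2) * T_t / ((zm * T_t)**2 - zp**2)
    f_R = zm * zp * (1.0 - T_r**2) / (zp**2 - (zm * T_r)**2)
    return f_T, f_R
```

One can verify the asymptotic limits directly: as τ → 0 both E_3 arguments vanish, T → 1, and the routine returns f_T → 1, f_R → 0; as τ → ∞ it returns f_T → 0 and f_R → ζ_-/ζ_+ = R_∞ by construction.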
§ RESULTS

§.§ Comparison to <cit.>

We now compare calculations of the transmissivity and reflectivity to those performed using other commonly used methods: hemispheric two-stream <cit.>, delta-Eddington <cit.> and two-stream source function <cit.>.[We specifically use the two-stream formalism written down by <cit.>; we are not claiming that <cit.> should be solely cited for the two-stream method.] The two-stream source function method is of particular interest, because it is widely implemented in the exo-atmospheres literature <cit.>. It achieves a multi-stream solution using a clever mathematical trick: inserting the two-stream solution into the term of the radiative transfer equation involving the scattering phase function, which implies that the solution is, strictly speaking, not self-consistent. We use the two-stream source function method with 8 streams in each hemisphere for a total of 16 streams, and sum up these streams numerically (weighted by the cosine of the polar angle) to construct the fluxes. The delta-Eddington method uses the Eddington closure (see <cit.> or <cit.>), but includes an additional feature: it approximates the scattering phase function as consisting of a Dirac-delta function and a series expansion involving the cosine of the scattering angle <cit.>. The motivation behind this approximation is to attain higher accuracy for radiative transfer with large aerosols, which tend to produce a strong forward peak in the scattered intensity. However, the delta-Eddington method has been criticized as being ad hoc, as the relative weighting of the Dirac-delta function and series terms is chosen arbitrarily <cit.>.[<cit.> describes how the delta-Eddington method includes the procedures of truncation and renormalization, describes them as being “ad hoc" and remarks how “none of the proposed variants is demonstrably superior to any other." <cit.> further adds that “the number of reasonable truncation and renormalization procedures is limited only by one's imagination."] This criticism provided us with motivation to avoid the delta-Eddington method and its variants when constructing our improved two-stream method. However, we include the delta-Eddington method in our comparisons because our emphasis is on reproducing Figures 2 and 3 of <cit.> as a benchmarking exercise. We again use 32-stream calculations of the transmissivity and reflectivity as the ground truth. Figure <ref> shows the transmissivity and reflectivity for three different sets of values of ω_0 and g, chosen to facilitate comparison with the study of <cit.>. Most of the absorption or scattering of radiation by an atmospheric layer occurs at τ∼ 1. Dips in the error curves occur when the curve of the reflectivity or transmissivity intersects the ground-truth curve, such that the error between them formally drops to zero. In practice, it drops to nearly zero, because the numerically-computed curves sampled at discrete points do not formally intersect. The six sets of calculations in Figure <ref> suggest that our improved two-stream method achieves comparable or superior accuracy, compared to the other methods, often at the order-of-magnitude level, but with less computational effort. Next, we focus on the error associated with the reflectivity, as it is known to be larger than that of the transmissivity in atmospheric calculations involving aerosols <cit.>. Figure <ref> quantifies the error incurred by using our improved two-stream method as a function of ω_0, g and τ. When τ=10, the error is ∼ 0.01%. This is unsurprising, as it holds by construction. When τ=1, the error is typically ∼ 1%, unless both ω_0 ≈ 1 and g ≈ 1, in which case the error approaches 10%. In practice, the presence of gas, consisting of atoms and molecules, in the atmosphere reduces the value of ω_0 to below unity because they provide a
source of absorption (which increases the total cross section), meaning that these large errors are rarely encountered. Figure <ref> also shows the ratio of errors of the improved two-stream versus the two-stream source function methods. For most combinations of ω_0 and g, the improved two-stream method is more accurate than the two-stream source function method by an order of magnitude, unless ω_0 ≳ 0.9. This is remarkable, because it has the implementational simplicity of the two-stream method, but an accuracy that is superior to a 16-stream method.

§.§ Toy model of early Martian atmosphere

So far, we have examined calculations with fixed values of ω_0 and g, because we were focused on benchmarking our improved two-stream method. Real aerosols are associated with ω_0 and g that are functions of wavelength <cit.>. To illustrate this behavior, we consider a toy model atmosphere of early Mars that hosts a cloud layer composed of medium-sized to large carbon-dioxide ice particles. We do not compute a more realistic Mars model, because this has already been done in <cit.>. Conceptually, understanding the interplay between aerosols and radiative transfer under Mars-like conditions is relevant to defining the outer edge of the classical habitable zone. We imagine a scenario where starlight penetrates the atmosphere and heats up its surface (much like on Earth), which then re-emits the heat as infrared radiation. The infrared radiation attempts to escape the atmosphere, but encounters the carbon-dioxide-ice cloud layer (assumed to be located at 0.1 bar), which reflects some of it back to the surface and heats up the atmosphere below the cloud layer. The ice particles are assumed to follow a gamma distribution with an effective particle radius of 25 μm. We assume that the atmosphere is dominated by gaseous carbon dioxide and that it has a temperature equal to the condensation temperature of carbon dioxide just below the cloud layer. The absorptivity of the gas is parametrized by a grey opacity with a value chosen such that the optical depth contributed by the gas alone is 0.1. We calculate the reflectivity of the cloud layer using realistic, wavelength-dependent single-scattering albedos and scattering asymmetry parameters (<cit.> and Figure <ref>). In Figure <ref>, our improved two-stream calculations yield errors at 1% or lower compared to 32-stream calculations. By contrast, the hemispheric two-stream and two-stream source function methods yield errors at the 5–10% level, depending on how opaque the cloud is. As already elucidated by <cit.> using more realistic models of the early Martian atmosphere, these errors translate into an over-estimation of the surface temperature by about 40 K, enough to alter the qualitative conclusion. Similarly, we anticipate that sophisticated simulations of the outer edge of the habitable zone using three-dimensional climate models would benefit from more accurate radiative transfer calculations using our improved two-stream method.

§ DISCUSSION

In the absence of scattering, the outgoing and incoming two-stream fluxes are decoupled. The boundary condition at the bottom of the atmosphere (remnant heat from the formation of the exoplanet for gas giants and surface fluxes for rocky exoplanets) may be propagated upwards using the solution for the outgoing flux. The boundary condition at the top of the atmosphere (stellar irradiation) may be propagated downwards using the solution for the incoming flux. For each layer, one now has the values of the outgoing and incoming fluxes. Taking the
difference between these fluxes yields the net flux, which is then fed into the first law of thermodynamics to compute the change in temperature of each layer <cit.>. Updating the temperature in each layer in turn alters the opacities and the fluxes. This iteration is performed numerically for a model atmosphere with multiple layers, until convergence is attained and radiative equilibrium is reached; see, e.g., the implementation of <cit.>. When scattering is present, the pair of two-stream solutions feed into each other and an additional iteration is needed. Physically, radiation may be scattered multiple times as it travels from a layer to its immediate neighbors and beyond. It is worth emphasizing that this iteration is to enforce multiple scattering, and is distinct from the iteration for radiative equilibrium. The alternative to such an iteration is to perform matrix inversion. Instead of describing a pair of layers, the two-stream solutions may be used to describe the fluxes within a single layer bounded by two interfaces. An atmosphere with a finite number of layers is represented by a set of equations, with each equation describing the fluxes at the layer center and interfaces. Mathematically, this set of equations makes up a tridiagonal matrix, which may then be inverted using Thomas's algorithm <cit.> (a minimal sketch of this solver is given after the reference list below). Since Thomas's algorithm is essentially a set of recursive algebraic relations, the procedure remains efficient. This property renders the solutions feasible for implementation in three-dimensional general circulation models (e.g., <cit.>), which require radiative transfer to be highly efficient in order to simulate climates for ∼ 10^7 time steps or more.

We acknowledge financial support from the Swiss National Science Foundation, the PlanetS National Center of Competence in Research (NCCR), the Center for Space and Habitability (CSH) and the Swiss-based MERAC Foundation. KH acknowledges a Visiting Professorship at Johns Hopkins University in the Zanvyl Krieger School of Arts and Sciences, held during the final revision and resubmission of this manuscript.

[Arfken & Weber(1995)]arfken Arfken, G.B., & Weber, H.J. 1995, Mathematical Methods for Physicists, fourth edition (San Diego: Academic Press)
[Cahoy et al.(2010)]cahoy10 Cahoy, K.L., Marley, M.S., & Fortney, J.J. 2010, Astrophysical Journal, 724, 189
[Chandrasekhar(1960)]chandra60 Chandrasekhar, S. 1960, Radiative Transfer (New York: Dover)
[Draine(2003)]draine03 Draine, B.T. 2003, Astrophysical Journal, 598, 1017
[Forget & Pierrehumbert(1997)]fp97 Forget, F., & Pierrehumbert, R.T. 1997, Science, 278, 1273
[Fortney et al.(2008)]fortney08 Fortney, J.J., Lodders, K., Marley, M.S., & Freedman, R.S. 2008, Astrophysical Journal, 678, 1419
[Goody & Yung(1989)]gy89 Goody, R.M., & Yung, Y.L. 1989, Atmospheric Radiation: Theoretical Basis, second edition (New York: Oxford)
[Hamre et al.(2013)]hamre13 Hamre, B., Stamnes, S., Stamnes, K., & Stamnes, J.J. 2013, AIP Conference Proceedings, 1531, 923
[Heng et al.(2012)]hhps12 Heng, K., Hayek, W., Pont, F., & Sing, D.K. 2012, Monthly Notices of the Royal Astronomical Society, 420, 20
[Heng & Workman(2014)]hw14 Heng, K., & Workman, J. 2014, Astrophysical Journal Supplements, 213, 27
[Heng et al.(2014)]hml14 Heng, K., Mendonça, J.M., & Lee, J.-M. 2014, Astrophysical Journal Supplements, 215, 4
[Heng & Tsai(2016)]ht16 Heng, K., & Tsai, S.-M. 2016, Astrophysical Journal, 829, 104
[Heng(2017)]heng17 Heng, K. 2017, Exoplanetary Atmospheres: Theoretical Concepts and Foundations (Oxford: Princeton University Press)
[Joseph et al.(1976)]joseph76
We acknowledge financial support from the Swiss National Science Foundation, the PlanetS National Center of Competence in Research (NCCR), the Center for Space and Habitability (CSH) and the Swiss-based MERAC Foundation. KH acknowledges a Visiting Professorship at Johns Hopkins University in the Zanvyl Krieger School of Arts and Sciences, held during the final revision and resubmission of this manuscript.

References

Arfken, G.B., & Weber, H.J. 1995, Mathematical Methods for Physicists, fourth edition (San Diego: Academic Press)
Cahoy, K.L., Marley, M.S., & Fortney, J.J. 2010, Astrophysical Journal, 724, 189
Chandrasekhar, S. 1960, Radiative Transfer (New York: Dover)
Draine, B.T. 2003, Astrophysical Journal, 598, 1017
Forget, F., & Pierrehumbert, R.T. 1997, Science, 278, 1273
Fortney, J.J., Lodders, K., Marley, M.S., & Freedman, R.S. 2008, Astrophysical Journal, 678, 1419
Goody, R.M., & Yung, Y.L. 1989, Atmospheric Radiation: Theoretical Basis, second edition (New York: Oxford)
Hamre, B., Stamnes, S., Stamnes, K., & Stamnes, J.J. 2013, AIP Conference Proceedings, 1531, 923
Heng, K., Hayek, W., Pont, F., & Sing, D.K. 2012, Monthly Notices of the Royal Astronomical Society, 420, 20
Heng, K., & Workman, J. 2014, Astrophysical Journal Supplements, 213, 27
Heng, K., Mendonça, J.M., & Lee, J.-M. 2014, Astrophysical Journal Supplements, 215, 4
Heng, K., & Tsai, S.-M. 2016, Astrophysical Journal, 829, 104
Heng, K. 2017, Exoplanetary Atmospheres: Theoretical Concepts and Foundations (Princeton: Princeton University Press)
Joseph, J.H., Wiscombe, W.J., & Weinman, J.A. 1976, Journal of the Atmospheric Sciences, 33, 2452
Kitzmann, D., Patzer, A.B.C., & Rauer, H. 2013, Astronomy & Astrophysics, 557, A6
Kitzmann, D. 2016, Astrophysical Journal Letters, 817, L18
Malik, M., Grosheintz, L., Mendonça, J.M., et al. 2017, Astronomical Journal, 153, 56
Marley, M.S., & McKay, C.P. 1999, Icarus, 138, 268
Meador, W.E., & Weaver, W.R. 1980, Journal of the Atmospheric Sciences, 37, 630
Mihalas, D. 1970, Stellar Atmospheres, first edition (San Francisco: Freeman)
Mihalas, D. 1978, Stellar Atmospheres, second edition (San Francisco: Freeman)
Morley, C.V., Fortney, J.J., Kempton, E.M.-R., et al. 2013, Astrophysical Journal, 775, 33
Pierrehumbert, R.T. 2010, Principles of Planetary Climate (New York: Cambridge)
Schuster, A. 1905, Astrophysical Journal, 21, 1
Showman, A.P., Fortney, J.J., Lian, Y., et al. 2009, Astrophysical Journal, 699, 564
Stamnes, K., Tsay, S.-C., Wiscombe, W., & Jayaweera, K. 1988, Applied Optics, 27, 2502
Toon, O.B., McKay, C.P., & Ackerman, T.P. 1989, Journal of Geophysical Research, 94, 16287
Wiscombe, W.J. 1977, Journal of the Atmospheric Sciences, 34, 1408
Wordsworth, R.D. 2016, Annual Review of Earth & Planetary Sciences, 44, 381
Adaptive algorithms based on kernel structures have been a topic of significant research over the past few years. The main advantage is that they form a family of universal approximators, offering an elegant solution to problems with nonlinearities. Nevertheless, these methods deal with kernel expansions, creating a growing structure also known as a dictionary, whose size depends on the number of new inputs. In this paper we derive the set-membership kernel-based normalized least-mean square (SM-NKLMS) algorithm, which is capable of limiting the size of the dictionary created in stationary environments. We also derive, as an extension, the set-membership kernelized affine projection (SM-KAP) algorithm. Finally, several experiments are presented to compare the proposed SM-NKLMS and SM-KAP algorithms to existing methods.

Kernel methods, sparsification, set-membership kernel adaptive filtering.

§ INTRODUCTION

Adaptive filtering algorithms have been the focus of a great deal of research in the past decades, and the machine learning community has embraced and further advanced the study of these methods. However, conventional adaptive algorithms often work with linear structures, limiting the performance that they can achieve and constraining the number of problems that can be solved. In this context, a new family of nonlinear adaptive filtering algorithms based on kernels was developed. A kernel is a function that compares the similarity between two inputs. Kernel adaptive filtering (KAF) algorithms have been tested in many different scenarios and applications <cit.><cit.><cit.><cit.>, showing very good results.

As described in <cit.>, one of the main advantages of KAF algorithms is that they are universal approximators, which gives them the capability to treat complex and nonlinear problems. In other words, they can model any input-output mapping. Many of these algorithms have no local minima, which is also a desirable characteristic. However, their computational complexity is significantly higher than that of their linear counterparts <cit.>.

One of the first KAF algorithms to appear, and one of the most widely adopted because of its simplicity, is the kernel least-mean square (KLMS) algorithm proposed in <cit.>. The KLMS algorithm is inspired by the least-mean square algorithm and showed good results, so that many researchers have since worked on the development of kernel versions of other conventional adaptive algorithms. A few years later, a kernelized version of the NLMS algorithm was proposed in <cit.>, using a nonlinear regression approach for time series prediction. In <cit.>, the affine projection algorithm (APA) was modified to develop a family of four algorithms known as the kernel affine projection algorithms (KAPA). The recursive least squares (RLS) algorithm was also extended in <cit.>, where the kernel recursive least squares (KRLS) algorithm was introduced. Later, the authors of <cit.> proposed an extended version of the KRLS algorithm. The use of multiple kernels has also been studied in <cit.> and <cit.>. All the algorithms mentioned above have to deal with kernel expansions. In other words, they create a growing structure, also called a dictionary, where they keep every new data input that arrives in order to compute the estimate of the desired output.
The natural problem that arises is that the time and computational cost spent to compute a given output could exceed the tolerable limits for a specific application. Several criteria have been proposed to solve this problem. One of the simplest is the novelty criterion, presented in <cit.>, which establishes two thresholds to limit the size of the dictionary. Another method, the approximate linear dependency (ALD) criterion, was proposed in <cit.>; it verifies whether a new input can be expressed as a linear combination of the elements already stored before adding this input to the dictionary. The coherence criterion was introduced in <cit.>, also to limit the size of the dictionary, based on the similarity of the inputs. A measure called surprise was presented in <cit.> to remove redundant data.

In this work, we present the set-membership normalized kernel least-mean square (SM-NKLMS) and the set-membership kernel affine projection (SM-KAP) adaptive algorithms, which can provide faster learning than existing kernel-based algorithms and limit the size of the dictionary without compromising performance. Similarly to existing set-membership algorithms <cit.>, the proposed SM-NKLMS and SM-KAP algorithms are equipped with variable step sizes and perform sparse updates that are useful for several applications <cit.>. Unlike existing kernel-based adaptive algorithms, the proposed SM-NKLMS and SM-KAP algorithms deal naturally with the kernel expansion because of the data selectivity based on error bounds that they implement.

This paper is organized as follows. In Section II, the problem formulation is presented. In Section III, the SM-NKLMS and SM-KAP algorithms are derived. Section IV presents the simulations and results of the developed algorithms in an application involving a time series prediction task. Finally, Section V presents the conclusions of this work.

§ PROBLEM STATEMENT

Let us consider an adaptive filtering problem with a sequence of training samples given by {x[i], d[i]}, where x[i] is the N-dimensional input vector of the system and d[i] represents the desired signal at time instant i. The output of the adaptive filter is given by

y[i] = 𝐰^T x[i],

where 𝐰 is the weight vector of length N. Let us define a nonlinear transformation φ: ℝ^N → 𝔽 that maps the input to a high-dimensional feature space 𝔽. Applying this transformation, we map the input and the weights to the high-dimensional space, obtaining

φ[i] = φ(x[i]), ω[i] = φ(𝐰[i]).

The error generated by the system is given by e[i] = d[i] - ω^T[i]φ[i]. The main objective of kernel-based adaptive algorithms is to model a function implementing an input-output mapping such that the mean square error generated by the system is minimized. In addition, we assume that the magnitude of the estimated error is upper bounded by a quantity γ. The idea of using an error bound was reported in <cit.> and has since been used to develop different versions of data-selective algorithms.

§ PROPOSED SET-MEMBERSHIP KERNEL-BASED ALGORITHMS

Assuming that the value of γ is correctly chosen, there exist several functions that satisfy the error requirement; any function leading to an estimation error smaller than the defined threshold is an adequate solution, resulting in a set of filters. Consider a set S̅ containing all the possible input-desired pairs of interest {φ[i], d[i]}. We can then define a set θ with all the possible functions leading to an estimation error bounded in magnitude by γ.
This set is known as the feasibility set and is expressed by

θ = ⋂_{(φ,d)∈S̅} {ω ∈ 𝔽 : |d - ω^T φ| ≤ γ}.

Suppose that we are only interested in the case in which only measured data are available. Let us define a new set ℋ[i] containing all the functions such that the estimation error at time instant i is upper bounded in magnitude by γ. This set is called the constraint set and is mathematically defined by

ℋ[i] = {ω ∈ 𝔽 : |d[i] - ω^T φ[i]| ≤ γ}.

It follows that for each data pair there exists an associated constraint set. The set containing the intersection of the constraint sets over all available time instants is called the exact membership set and is given by

ψ[i] = ⋂_{k=0}^{i} ℋ[k].

The exact membership set ψ[i] should become small as data containing new information arrive. This means that at some point the adaptive filter will reach a state where ψ[i] = ψ[i-1], so that there is no need to update ω[i]. This happens because ψ[i-1] is already a subset of ℋ[i]. As a result, the update of any set-membership algorithm is data dependent, saving resources, a fact that is crucial in kernel-based adaptive filters because of the growing structure that they create.

As a first step, we check whether the previous estimate lies outside the constraint set, i.e., |d[i] - ω^T[i-1]φ[i]| > γ. If the error exceeds the established bound, the algorithm performs an update so that the a posteriori estimated error lies in ℋ[i]. In that case, we minimize ||ω[i+1] - ω[i]||^2 subject to ω[i+1] ∈ ℋ[i], which means that the a posteriori error ξ_ap[i] is given by

ξ_ap[i] = d[i] - ω^T[i+1]φ[i] = ±γ.

The NKLMS update equation presented in <cit.> is given by

ω[i+1] = ω[i] + (μ[i]/(ε + ||φ[i]||^2)) e[i] φ[i],

where μ[i] is the step size, chosen to satisfy the constraints of the algorithm, and ε is a small constant used to avoid numerical problems. Substituting the update equation into the expression for ξ_ap[i] and using the kernel trick to replace dot products by kernel evaluations, we arrive at

ξ_ap[i] = e[i] - (μ[i]/(ε + κ(x[i],x[i]))) e[i] κ(x[i],x[i]).

Assuming that the constant ε is sufficiently small to guarantee that κ(x[i],x[i])/(ε + κ(x[i],x[i])) ≈ 1, and following the procedure stated in <cit.>, we get

μ[i] = 1 - γ/|e[i]| if |e[i]| > γ, and μ[i] = 0 otherwise.

We can compute ω recursively as follows:

ω[i+1] = ω[i-1] + (μ[i-1]/(ε + ||φ[i-1]||^2)) e[i-1] φ[i-1] + (μ[i]/(ε + ||φ[i]||^2)) e[i] φ[i]
⋮
ω[i+1] = ω[0] + ∑_{k=1}^{i} (μ[k]/(ε + ||φ[k]||^2)) e[k] φ[k].

Setting ω[0] to zero leads to

ω[i+1] = ∑_{k=1}^{i} (μ[k]/(ε + ||φ[k]||^2)) e[k] φ[k].

The output f(φ[i+1]) = ω^T[i+1]φ[i+1] of the filter for a new input φ[i+1] can then be computed as the inner product

f(φ[i+1]) = ∑_{k=1}^{i} (μ[k] e[k]/(ε + ||φ[k]||^2)) φ^T[k] φ[i+1].

Using the kernel trick, we obtain that the output is equal to

∑_{k=1}^{i} (μ[k] e[k]/(ε + κ(x[k],x[k]))) κ(x[k], x[i+1]),

where μ[k] is given by the step-size rule above. Let us define coefficients a_k = μ[k] e[k], so that the output becomes

∑_{k=1}^{i} (a_k/(ε + κ(x[k],x[k]))) κ(x[k], x[i+1]).

The equations above summarize the proposed algorithm. We set the initial value of ω to zero, as well as the first coefficient. As new inputs arrive, we calculate the output of the system, then compute the error; if it exceeds the established bound, we calculate the step size and update the coefficients a_k. Note that some coefficients may be zero due to the data-selective nature of the algorithm. We do not need to store the zero coefficients, as they do not contribute to the output computations, resulting in a saving of resources. This is an important result because it naturally controls the growing network created by the algorithm. In stationary environments, the algorithm will limit the growing structure, as illustrated by the sketch below.
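To make the update concrete, a minimal Python sketch of the SM-NKLMS recursion described above is given below, assuming a Gaussian kernel; the bandwidth, error bound, and data are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # kappa(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2)
                  / (2.0 * bandwidth ** 2))

class SMNKLMS:
    """Set-membership normalized KLMS: updates only when |e| > gamma."""

    def __init__(self, gamma, bandwidth=1.0, eps=1e-8):
        self.gamma = gamma          # error bound
        self.bandwidth = bandwidth
        self.eps = eps              # regularization constant
        self.dictionary = []        # stored inputs x[k]
        self.coeffs = []            # a_k = mu[k] * e[k]

    def predict(self, x):
        # f(x) = sum_k a_k / (eps + kappa(x_k, x_k)) * kappa(x_k, x)
        return sum(a / (self.eps + gaussian_kernel(xk, xk, self.bandwidth))
                   * gaussian_kernel(xk, x, self.bandwidth)
                   for xk, a in zip(self.dictionary, self.coeffs))

    def update(self, x, d):
        e = d - self.predict(x)
        if abs(e) > self.gamma:                 # outside the constraint set
            mu = 1.0 - self.gamma / abs(e)      # data-dependent step size
            self.dictionary.append(x)
            self.coeffs.append(mu * e)
        # if |e| <= gamma, no update: the dictionary does not grow
        return e
```

Only inputs that trigger an update enter the dictionary, which is the mechanism that bounds its size in stationary environments.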
Consider now the KAP algorithm, which uses the last K inputs to update the coefficients. Based on this fact, let us redefine our problem and use the past K constraint sets to perform the update. In this setting it is convenient to express the exact membership set as follows:

ψ[i] = (⋂_{j=0}^{i-K} ℋ[j]) ⋂ (⋂_{l=i-K+1}^{i} ℋ[l]) = ψ^{i-K}[i] ⋂ ψ^K[i],

where ψ^K[i] designates the use of the last K constraint sets for updating. This means that the vector ω[i] should belong to ψ^K[i]. In order to develop the SM-KAP algorithm, we need to set several bounds γ̅_k[i], for k = 1, …, K, so that the error magnitudes satisfy these constraints after updating. It follows that there exists a space S(i-k+1) containing all vectors ω satisfying d(i-k+1) - ω^T φ(i-k+1) = γ̅_k[i] for k = 1, …, K. The SM-KAP algorithm should perform an update whenever ω[i] ∉ ψ^K[i], in which case ∥ω[i] - ω[i-1]∥^2 should be minimized subject to d[i] - Φ^T[i] ω[i] = γ̅[i], where γ̅[i] is a vector containing all K bounds. This constraint can also be expressed as d[i] - γ̅[i] = Φ^T[i] ω[i]. Solving the problem with the method of Lagrange multipliers, we form

ℒ(ω[i]) = ∥ω[i] - ω[i-1]∥^2 + λ^T[i] (d[i] - Φ^T[i] ω[i] - γ̅[i]),

where λ[i] is the vector of Lagrange multipliers. Computing the gradient of ℒ(ω[i]) and equating it to zero yields

∂ℒ(ω[i])/∂ω[i] = 2ω[i] - 2ω[i-1] - Φ[i]λ[i] = 0,
ω[i] = ω[i-1] + (1/2) Φ[i] λ[i].

Substituting this into the constraint gives

d[i] - γ̅[i] = Φ^T[i] (ω[i-1] + (1/2) Φ[i] λ[i]) = Φ^T[i] ω[i-1] + (1/2) Φ^T[i] Φ[i] λ[i],
λ[i]/2 = (Φ^T[i] Φ[i])^{-1} (e[i] - γ̅[i]).

We can now formulate the update equation, which is used whenever the error is greater than the established bound, i.e., |e[i]| > γ̅:

ω[i] = ω[i-1] + Φ[i] (Φ^T[i] Φ[i])^{-1} (e[i] - γ̅[i]),

where the vector e[i] is composed of the current error and the K-1 a posteriori errors corresponding to the K-1 most recent past inputs, i.e.,

e[i] = [ e[i]  e_ap[i-1]  ⋯  e_ap[i-K+1] ]^T,

where e_ap[i-k] denotes the a posteriori error computed using the coefficients at iteration i, i.e., e_ap[i-k] = d[i-k] - φ^T[i-k] ω[i].

Let us now consider a simple choice for the vector γ̅[i]. We can exploit the fact that the a posteriori error was updated to satisfy the constraint d[i] - Φ^T[i] ω[i] = γ̅[i]. That means we can set the values of γ̅_k[i] equal to e_ap[i-k+1] for k ≠ 1. Substituting this condition into the update equation, we obtain

ω[i] = ω[i-1] + Φ[i] (Φ^T[i] Φ[i])^{-1} (e[i] - γ̅_1[i]) u,

where u = [ 1 0 ⋯ 0 ]^T. We can now select γ̅_1[i] as in the SM-NKLMS, so that γ̅_1[i] = γ̅ e[i]/|e[i]|, which gives

ω[i] = ω[i-1] + Φ[i] (Φ^T[i] Φ[i])^{-1} (η[i] e[i]) u,

with the variable step size

η[i] = 1 - γ̅/|e[i]| if |e[i]| > γ̅, and η[i] = 0 otherwise.

In terms of the kernel expansion, the update can be written as

ω[i] = ∑_{j=1}^{i-1} a_j[i-1] φ[j] + (η[i] e[i]) Φ[i] 𝐀̃[i],

where the vector 𝐀̃[i] is defined as

𝐀̃[i] = (Φ^T[i] Φ[i] + ϵ𝐈)^{-1} u,

and the coefficients are updated according to

a_k[i] = η[i] e[i] ã_K[i] for k = i,
a_k[i] = a_k[i-1] + η[i] e[i] ã_{K+k-i}[i] for i-K+1 ≤ k < i,
a_k[i] = a_k[i-1] for 1 ≤ k < i-K+1.

A sketch of this update is given below.
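As with the SM-NKLMS, a minimal Python sketch of one SM-KAP iteration is given below, again assuming a Gaussian kernel. The notation above leaves the column ordering of Φ[i] implicit; the sketch orders the window newest-first, so that u = [1 0 ⋯ 0]^T selects the current input, and all parameter values are illustrative.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2)
                  / (2.0 * bandwidth ** 2))

def smkap_step(inputs, coeffs, x, d, K, gamma_bar, eps=1e-8, bandwidth=1.0):
    """One SM-KAP iteration: inputs/coeffs hold the kernel expansion and
    are modified in place; returns the a priori error e[i]."""
    # a priori output f(x) = sum_j a_j * kappa(x_j, x) and error
    y = sum(a * gaussian_kernel(xj, x, bandwidth)
            for xj, a in zip(inputs, coeffs))
    e = d - y
    if abs(e) > gamma_bar:
        eta = 1.0 - gamma_bar / abs(e)          # variable step size
        recent = inputs[-(K - 1):] if K > 1 else []
        window = [x] + recent[::-1]             # newest first, up to K inputs
        L = len(window)
        # Gram matrix Phi^T Phi of the most recent mapped inputs
        G = np.array([[gaussian_kernel(xa, xb, bandwidth) for xb in window]
                      for xa in window])
        u = np.zeros(L)
        u[0] = 1.0
        a_tilde = np.linalg.solve(G + eps * np.eye(L), u)   # A~[i]
        inputs.append(x)                 # newest input joins the expansion
        coeffs.append(eta * e * a_tilde[0])
        # increments for the coefficients of the previous L-1 inputs
        for m in range(1, L):
            coeffs[-(m + 1)] += eta * e * a_tilde[m]
    return e
```

Updates occur only when |e[i]| > γ̅, so, as with the SM-NKLMS, the expansion grows only on informative samples.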
§ SIMULATIONS

In this section we analyze the performance of the proposed algorithms in a time series prediction task. We used two different time series to perform the tests, the Mackey-Glass time series and a laser generated time series. First we separate the data into two sets, one for training and the other for testing, as suggested in <cit.>. The time window was set to seven and the prediction horizon to one, so that the last seven inputs of the time series were used to predict the value one step ahead. Additionally, both time series were corrupted by additive Gaussian noise with zero mean and standard deviation equal to 0.04. The Gaussian kernel was used in all the algorithms in all the experiments. Using Silverman's rule and after several tests, the bandwidth of the kernel was set to one.

For the first experiment we analyze the performance of the adaptive algorithms on the Mackey-Glass time series. A total of 1500 sample inputs were used to generate the learning curve and the prediction was performed over 100 test samples. For the KAPA and SM-KAP algorithms, K was set to 7, so that the algorithms used the last seven input samples as a single input. For the KLMS algorithm the step size was set to 0.05. The error bound for the SM-NKLMS and SM-KAP algorithms was set to √(5)σ. The final results of the algorithms tested are shown in Table <ref>, where the last 100 data points of each learning curve were averaged to obtain the MSE. The learning curves of the kernel-based algorithms are presented in Figure <ref>. From the curves, we see that the proposed algorithms outperform the conventional algorithms in convergence speed.

In the second experiment we consider the performance of the proposed algorithms on a laser generated time series. In this case, 3500 sample inputs were used to generate the learning curve and the prediction was performed over 100 test samples. The setup used in the previous experiment was retained. Table <ref> summarizes the MSE obtained for every algorithm tested. The final learning curves are shown in Figure <ref>.

In the next experiment we study the size of the dictionary generated by the conventional KLMS algorithm and by the proposed SM-NKLMS algorithm. The result is presented in Figure <ref>. We see that the proposed algorithm naturally limits the size of the dictionary.

As a final experiment, we analyze and compare the robustness of the proposed algorithms with respect to the conventional algorithms. Figure <ref> shows the results obtained. It is clear that the SM-NKLMS exhibits better performance than the KLMS algorithm. In general, all kernel algorithms outperform their linear counterparts.

§ CONCLUSIONS

In this paper two data-selective kernel-type algorithms were presented, the SM-NKLMS and the SM-KAP algorithms. Both algorithms have a faster convergence speed than the conventional algorithms. They also have the advantages of naturally limiting the size of the dictionary created by kernel-based algorithms and of good robustness to noise. In general, the proposed algorithms outperform the existing kernel-based algorithms.
Phanish Suryanarayana, Phanisri P. Pratapa, Abhiraj Sharma (College of Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA) and John E. Pask (Physics Division, Lawrence Livermore National Laboratory, Livermore, CA 94550, USA). Corresponding author: Phanish Suryanarayana ([email protected]).

We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for 𝒪(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect 𝒪(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.

Electronic structure, Linear scaling, Metallic systems, High temperature, Quantum molecular dynamics, High performance computing, Parallel computing

§ INTRODUCTION

Kohn-Sham Density Functional Theory (DFT) <cit.> is a powerful tool for predicting and understanding a wide range of materials properties from the first principles of quantum mechanics, with no empirical or adjustable parameters. The tremendous popularity of DFT is a consequence of its high accuracy to cost ratio relative to other such ab initio theories. However, the solution of the Schrödinger-type eigenproblem for the Kohn-Sham orbitals remains a challenging task. In particular, since the orbitals need to be orthogonal and increase in number linearly with the number of atoms N, the overall computational complexity of DFT calculations scales as 𝒪(N^3) and the memory requirement scales as 𝒪(N^2) (see, e.g., <cit.>). The orthogonality constraint on the orbitals also results in global communications between processors in parallel computing, which limits parallel scalability. The need for high performance parallel computing is especially crucial for quantum molecular dynamics (QMD) calculations <cit.>, wherein tens or hundreds of thousands of Kohn-Sham solutions can be required to complete a single simulation.

In order to overcome the critical 𝒪(N^3) scaling bottleneck, much research in the past two decades has been devoted to the development of 𝒪(N) solution strategies (see, e.g., <cit.> and references therein). Rather than calculate the orthonormal Kohn-Sham orbitals, these techniques directly determine the electron density, energy, and atomic forces in 𝒪(N) operations by exploiting the decay of the density matrix <cit.> (the real-space density matrix has exponential decay for insulating systems as well as metallic systems at finite temperature <cit.>). These efforts have yielded significant advances, culminating in mature implementations of a number of approaches <cit.>. However, significant challenges remain.
In particular, the accuracy and stability of 𝒪(N) methods remain ongoing concerns due to the need for additional computational parameters, subtleties in determining sufficient numbers and/or centers of localized orbitals, limitations of underlying basis sets, and calculation of accurate atomic forces, as required for structural relaxation and molecular dynamics simulations <cit.>. In addition, efficient large-scale parallelization poses a significant challenge due to complex communication patterns and load balancing issues. Finally, and perhaps most importantly, the assumption of a band gap in the electronic structure makes existing methods inapplicable to metallic systems <cit.> (this is also the case for insulating systems at sufficiently high temperature, which results in conduction bands becoming partially occupied).

High-temperature DFT calculations present additional challenges <cit.>. Such calculations have a number of applications, including the study of warm dense matter and dense plasmas, as occur in laser experiments, and the interiors of giant planets and stars <cit.>. Particular challenges include the need for a significantly larger number of orbitals to be computed, as the number of partially occupied states increases, and the need for more diffuse orbitals, as higher-energy states become less localized. Consequently, 𝒪(N^3) methods as well as local-orbital based 𝒪(N) methods have very large prefactors, which makes high-temperature QMD calculations for even small systems intractable. Recent work to address these challenges includes orbital-free molecular dynamics (OFMD) <cit.>, wherein the standard Kohn-Sham kinetic energy is replaced by an approximation in terms of the density; extended first principles molecular dynamics (ext-FPMD) <cit.>, wherein higher-energy states are approximated as planewaves rather than computed explicitly; and finite-temperature potential functional theory (PFT) <cit.>, wherein an orbital-free free energy approximation is constructed through a coupling-constant formalism. While OFMD can miss electronic shell structure effects <cit.>, ext-FPMD and PFT have been shown to capture such effects in initial applications.

The recently developed Spectral Quadrature (SQ) method for 𝒪(N) Kohn-Sham calculations <cit.> addresses both scaling with number of atoms and scaling with temperature, while retaining systematic convergence to standard 𝒪(N^3) results for metals and insulators alike. In this approach, all quantities of interest are expressed as bilinear forms or sums of bilinear forms, which are then approximated by Clenshaw-Curtis quadrature rules that remain spatially localized by exploiting the locality of electronic interactions in real space <cit.>, i.e., the exponential decay of the density matrix in real space for insulators as well as metals at finite temperature <cit.>. In conjunction with a local reformulation of the electrostatics, this technique enables the 𝒪(N) evaluation of the electronic density, energy, and atomic forces. The computational cost of SQ decreases rapidly with increasing temperature due to the enhanced locality of the electronic interactions and the increased smoothness of the Fermi-Dirac function. Further, it is well suited to scalable high-performance parallel computing since a majority of the communication is localized to nearby processors, whose pattern remains fixed throughout the simulation.
The SQ approach also permits infinite-crystal calculations without recourse to Brillouin zone integration or large supercells, a technique referred to as the infinite-cell method <cit.>.

In this paper, we present SQDFT: a parallel implementation of the SQ method for 𝒪(N) Kohn-Sham DFT calculations at high temperature (though we focus on high-temperature calculations in this work, SQDFT is also capable of performing 𝒪(N) DFT calculations at ambient temperature, with a larger prefactor; see Appendix <ref>). Specifically, we develop a finite-difference implementation of the infinite-cell variant of the Clenshaw-Curtis SQ method that can efficiently scale on large-scale parallel computers. We verify the accuracy of SQDFT by showing systematic convergence of energies and atomic forces to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further show that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of cores, with near perfect 𝒪(N) scaling with system size and wall times as low as a few seconds per self-consistent field (SCF) iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.

The remainder of this paper is organized as follows. In Section <ref>, we review the 𝒪(N) density matrix formulation of DFT. Next, we discuss the formulation and implementation of SQDFT in Section <ref> and study its accuracy, efficiency, and scaling in Section <ref>. Finally, we provide concluding remarks in Section <ref>.

§ 𝒪(N) DENSITY FUNCTIONAL THEORY

Consider a cuboidal domain Ω containing N atoms, the unit cell of an infinite crystal. Let the nuclei be positioned at 𝐑 = {𝐑_1, 𝐑_2, …, 𝐑_N} and let there be a total of N_e valence electrons. Neglecting spin and Brillouin zone integration, the nonlinear eigenproblem for the electronic ground state in Kohn-Sham Density Functional Theory (DFT) can be written as <cit.>

𝒟 = g(ℋ, μ, σ) = ( 1 + exp((ℋ - μℐ)/σ) )^{-1},
ℋ = -(1/2)∇^2 + V_xc + ϕ + 𝒱_nl,

where 𝒟 is the density matrix; g is the Fermi-Dirac function; μ is the Fermi level, which is determined by solving the constraint on the total number of electrons, i.e., 2 Tr(𝒟) = N_e; σ = k_B T is the smearing, where k_B is Boltzmann's constant and T is the electronic temperature (the electronic temperature/smearing is typically set equal to the ionic temperature in QMD simulations, particularly for those performed at high temperature); ℋ is the Hamiltonian; V_xc is the exchange-correlation potential; 𝒱_nl is the nonlocal pseudopotential; and ϕ is the electrostatic potential, the solution to the Poisson equation <cit.>

-(1/(4π)) ∇^2 ϕ(𝐱, 𝐑) = ρ_𝒟(𝐱) + b(𝐱, 𝐑)

subject to periodic boundary conditions. Above, ρ_𝒟(𝐱) = 2𝒟(𝐱, 𝐱) is the electron density and b = ∑_I b_I is the total pseudocharge density of the nuclei <cit.>, where b_I is the pseudocharge density of the I^th nucleus and the index I extends over all atoms in ℝ^3.

Once the electronic ground state has been determined, the free energy can be written as <cit.> (note that the repulsive energy correction for overlapping pseudocharges <cit.> has been explicitly included in the free energy expression; this term plays a particularly important role in high-temperature simulations since the ions get significantly closer than at ambient temperature):
ℱ(𝐑) = 2 Tr(𝒟ℋ) + E_xc(ρ_𝒟) - ∫_Ω V_xc(ρ_𝒟(𝐱)) ρ_𝒟(𝐱) d𝐱 + (1/2)∫_Ω (b(𝐱,𝐑) - ρ_𝒟(𝐱)) ϕ(𝐱,𝐑) d𝐱 + (1/2)∫_Ω (b̃(𝐱,𝐑) + b(𝐱,𝐑)) V_c(𝐱,𝐑) d𝐱 - (1/2)∑_I ∫_Ω b̃_I(𝐱,𝐑_I) Ṽ_I(𝐱,𝐑_I) d𝐱 + 2σ Tr(𝒟 log 𝒟 + (ℐ - 𝒟) log(ℐ - 𝒟)),

where E_xc is the exchange-correlation energy; b̃ = ∑_I b̃_I is the total reference pseudocharge density, where b̃_I is the reference pseudocharge density of the I^th nucleus and the summation index I runs over all atoms in ℝ^3; V_c = ∑_I V_c,I = ∑_I (Ṽ_I - V_I), where Ṽ_I and V_I are the potentials generated by b̃_I and b_I, respectively; and ℐ is the identity operator. The first term is the band structure energy (E_band) and the last term is the energy associated with the electronic entropy (E_ent).

In order to update the positions of the ions during Born-Oppenheimer quantum molecular dynamics (QMD) simulations, the Hellmann-Feynman force on the I^th nucleus can be written as <cit.> (for the reasons mentioned above, the force corresponding to the repulsive energy correction for overlapping pseudocharges <cit.> has been explicitly included in the atomic force expression; this term has been significantly simplified from its original form by utilizing the relation ∫ ∇f(|𝐱-𝐑_I|) g(|𝐱-𝐑_I|) d𝐱 = 0 for spherically symmetric functions f and g, which in the present case are b_I, b_I', V_I, and V_I'):

𝐟_I = ∑_I' ∫_Ω ∇b_I'(𝐱,𝐑_I') ϕ(𝐱,𝐑) d𝐱 + (1/2)∑_I' ∫_Ω [ (b̃(𝐱,𝐑) + b(𝐱,𝐑)) ∇V_c,I' + (∇b̃_I'(𝐱,𝐑_I') + ∇b_I'(𝐱,𝐑_I')) V_c(𝐱,𝐑) ] d𝐱 - 4∑_I' Tr(𝒱_nl^I' ∇𝒟),

where the summation index I' runs over the I^th atom and its periodic images, and 𝒱_nl^I' is the nonlocal pseudopotential associated with the I'^th atom. The first two terms together constitute the local component of the force (𝐟_I^l) and the last term is the nonlocal component of the force (𝐟_I^nl).

Within a real-space representation, the density matrix 𝒟 has exponential decay for insulators as well as metallic systems at finite electronic temperature/smearing <cit.>. This decay in the density matrix is exploited by linear-scaling methods through truncation within the calculations to enable 𝒪(N) computation of the electron density, energy, and atomic forces. In doing so, it has been observed that there is exponential convergence in the energy and forces with the size of the truncation region for insulating <cit.> as well as metallic systems at finite temperature <cit.>. Note that in the above description of DFT, we have employed a local reformulation of the electrostatics to enable 𝒪(N) scaling for the complete Kohn-Sham problem.

§ FORMULATION AND IMPLEMENTATION OF SQDFT

SQDFT is a large-scale parallel implementation of the Clenshaw-Curtis Spectral Quadrature (SQ) method <cit.> for 𝒪(N) Kohn-Sham density functional calculations (the Clenshaw-Curtis variant of SQ is chosen here because it is more efficient than its Gauss counterpart <cit.>, particularly in the computation of the nonlocal component of the forces <cit.>). In this approach, all quantities of interest (in discrete form) are expressed as bilinear forms or sums of bilinear forms, which are then approximated by spatially localized quadrature rules. The method is identically applicable to insulating and metallic systems and well suited to massively parallel computation. Furthermore, the SQ method becomes more efficient as temperature is increased, since electronic interactions become more localized and the representation of the Fermi-Dirac function becomes more compact <cit.>.
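For orientation, the density-matrix formulation above can be made concrete with a dense, 𝒪(N^3) reference construction via diagonalization, which is precisely what the SQ method is designed to avoid. The sketch below assumes a small symmetric discretized Hamiltonian H, with energies and smearing sigma in consistent units, and a bracket wide enough to contain the Fermi level.

```python
import numpy as np
from scipy.optimize import brentq

def density_matrix(H, Ne, sigma):
    """Dense reference: D = g(H), with the Fermi level mu chosen so that
    2 * Tr(D) = Ne, following the constraint in the text."""
    lam, U = np.linalg.eigh(H)                  # eigenvalues/eigenvectors
    occ = lambda mu: 1.0 / (1.0 + np.exp((lam - mu) / sigma))
    mu = brentq(lambda m: 2.0 * occ(m).sum() - Ne,
                lam.min() - 10.0 * sigma, lam.max() + 10.0 * sigma)
    return U @ np.diag(occ(mu)) @ U.T, mu

# On a real-space finite-difference grid, the electron density would then
# follow as rho = 2 * diag(D) / h**3.
```

The cubic cost of the eigendecomposition and the quadratic storage of D are exactly the bottlenecks that the localized quadrature rules described next are designed to remove.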
We employ the infinite-cell version of the Clenshaw-Curtis SQ method, wherein the results corresponding to the infinite crystal are obtained without recourse to Brillouin zone integration or large supercells <cit.>. Specifically, rather than employing Bloch boundary conditions for the orbitals on Ω, zero-Dirichlet boundary conditions are prescribed at infinity, and the relevant components of the density matrix for spatial points within Ω are calculated by utilizing the potential within the truncation region surrounding each point <cit.>. Periodic boundary conditions are retained for the electrostatic potential. Indeed, the infinite-cell SQ approach is equivalent to the standard Γ-point SQ calculation when the size of the truncation region is smaller than the size of the domain, a situation common in large-scale DFT simulations, particularly those at high temperature. We utilize a high-order finite-difference discretization in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation.

We solve the fixed-point problem in Eq. <ref> using the self-consistent field (SCF) method <cit.> (in SQDFT, we perform a fixed-point iteration with respect to the effective potential V_eff = V_xc + ϕ), whose convergence is accelerated using the Periodic Pulay mixing scheme <cit.>, a technique that significantly outperforms the well-established Anderson/Pulay mixing <cit.>. We solve the Poisson problem in Eq. <ref> using the Alternating Anderson-Richardson (AAR) method <cit.>, an approach that outperforms the conjugate gradient method <cit.> in the context of large-scale parallel computations <cit.>. We perform NVE (microcanonical) simulations using the leapfrog method and NVT (canonical) simulations using the Verlet algorithm with the Nose-Hoover thermostat. At each MD step, we extrapolate the electron density using information from the previous two steps <cit.> to reduce SCF iterations. We parallelize the calculations using domain decomposition, with the communication between processors handled via the Message Passing Interface (MPI) <cit.>.

In Fig. <ref>, we outline the key steps in a QMD simulation. We employ the Clenshaw-Curtis SQ method for calculating the electron density in each SCF iteration, as well as for computing the energy and atomic forces (nonlocal component) once the electronic ground state has been determined. In the sections below, we discuss the calculation of the electron density, energy, and atomic forces in SQDFT. For additional details on the formulation and implementation of the pseudocharges, AAJ/AAR method, and Periodic Pulay mixing scheme, we refer the reader to the relevant previous works <cit.>.

§.§ Finite-difference discretization

We consider a cubical domain Ω and discretize it with a uniform grid of spacing h, resulting in N_d = n^3 finite-difference nodes, the collection of which is referred to as K_Ω. We parallelize the calculations by employing domain decomposition, i.e., we partition the domain into cubes of equal size such that Ω = ⋃_{p=1}^{N_P} Ω_p, where N_P is the total number of processors and Ω_p denotes the domain local to the p^th processor, each of which contains N_d^p = N_d/N_P finite-difference nodes. We refer to the collection of finite-difference nodes belonging to the p^th processor as K_Ω^p, where K_Ω = ⋃_{p=1}^{N_P} K_Ω^p with K_Ω^p ∩ K_Ω^q = ∅ if p ≠ q.
We approximate the Laplacian arising in the Hamiltonian and the local electrostatic reformulation using the central finite-difference approximation

∇^2_h f |^(i,j,k) ≈ ∑_{d=0}^{n_o} w_d ( f^(i+d,j,k) + f^(i-d,j,k) + f^(i,j+d,k) + f^(i,j-d,k) + f^(i,j,k+d) + f^(i,j,k-d) ),

with weights

w_0 = -(1/h^2) ∑_{q=1}^{n_o} 1/q^2,
w_d = (2(-1)^{d+1}/(h^2 d^2)) (n_o!)^2 / ((n_o-d)! (n_o+d)!), d = 1, 2, …, n_o,

where f^(i,j,k) denotes the value of the function f at the node indexed by (i,j,k) and 2n_o is the order of the approximation. Similarly, we approximate the gradient operator arising in the atomic forces using central finite-differences:

∇_h f |^(i,j,k) ≈ ∑_{d=1}^{n_o} w̃_d ( (f^(i+d,j,k) - f^(i-d,j,k)) 𝐞̂_1 + (f^(i,j+d,k) - f^(i,j-d,k)) 𝐞̂_2 + (f^(i,j,k+d) - f^(i,j,k-d)) 𝐞̂_3 ),

w̃_d = ((-1)^{d+1}/(h d)) (n_o!)^2 / ((n_o-d)! (n_o+d)!), d = 1, 2, …, n_o,

where 𝐞̂_1, 𝐞̂_2, and 𝐞̂_3 are the unit vectors along the edges of Ω. We enforce periodic boundary conditions by mapping any index that does not correspond to a node in the finite-difference grid to its periodic image within Ω. We approximate the spatial integrals arising in the Hamiltonian (projectors of the nonlocal pseudopotential), energy, and atomic forces using the trapezoidal rule

∫_Ω f(𝐱) d𝐱 ≈ h^3 ∑_{i,j,k=1}^{n} f^(i,j,k).

Even though this is a low-order quadrature scheme, it has been chosen to ensure that the discrete free energy is consistent with the discrete Kohn-Sham equations, i.e., the calculated electronic ground state corresponds to the minimum of the free energy within the finite-difference discretization <cit.>.

In accordance with the nearsightedness principle <cit.>, we define the region of influence for any finite-difference node as the cube of side 2R_cut centered at that node. A cube is chosen rather than a sphere due to its simplicity and efficiency within the finite-difference implementation. The parameter R_cut corresponds to the distance beyond which electronic interactions are ignored, i.e., theoretically, R_cut corresponds to the truncation radius for the density matrix. Within the above described finite-difference framework, we define the nodal Hamiltonian 𝐇_q ∈ ℝ^{N_s × N_s} for any node q ∈ K_Ω as the restriction of the Hamiltonian to its region of influence <cit.>, where N_s = (2R_cut/h + 1)^3 is the number of finite-difference nodes within the region of influence. Similarly, 𝐰_q ∈ ℝ^{N_s × 1}, ∇_{h,q} ∈ ℝ^{N_s × N_s}, and 𝐕_nl,q^I ∈ ℝ^{N_s × N_s} represent the restriction of the standard basis vector, gradient matrix, and nonlocal pseudopotential matrix of the I^th atom to the region of influence, respectively. It is important to note that 𝐇_q, ∇_{h,q}, and 𝐕_nl,q^I are not explicitly determined/stored in SQDFT; rather, their multiplication with a vector is computed directly in a matrix-free way.
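As a concrete illustration of the finite-difference stencils above, a small serial Python sketch is given below. It builds the Laplacian weights w_d from the expressions given and applies the periodic 3D stencil with np.roll; this is a stand-in for the matrix-free, domain-decomposed operator in SQDFT, not the actual implementation.

```python
import numpy as np
from math import factorial

def laplacian_weights(n_o, h):
    """Weights w_d of the order-2*n_o central finite-difference Laplacian."""
    w = np.zeros(n_o + 1)
    w[0] = -(1.0 / h**2) * sum(1.0 / q**2 for q in range(1, n_o + 1))
    for d in range(1, n_o + 1):
        w[d] = (2.0 * (-1.0) ** (d + 1) / (h**2 * d**2)
                * factorial(n_o) ** 2
                / (factorial(n_o - d) * factorial(n_o + d)))
    return w

def apply_laplacian(f, w):
    """Apply the 3D stencil to a periodic grid function f of shape (n,n,n)."""
    out = 6.0 * w[0] * f  # the d = 0 term appears once per face, six in total
    for d in range(1, len(w)):
        for axis in range(3):
            out += w[d] * (np.roll(f, d, axis=axis)
                           + np.roll(f, -d, axis=axis))
    return out

# Check on a plane wave, for which the exact Laplacian is known analytically.
n, L, n_o = 32, 10.0, 6   # twelfth-order stencil, as used in this work
h = L / n
x = np.arange(n) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
k = 2.0 * np.pi / L
f = np.sin(k * X) * np.cos(2 * k * Y)
exact = -(k**2 + (2 * k) ** 2) * f
approx = apply_laplacian(f, laplacian_weights(n_o, h))
print(np.max(np.abs(approx - exact)))  # small for a smooth function
```

The same weight formula with the w̃_d expression yields the gradient stencil used for the atomic forces.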
§.§ Electron density

In each iteration of the SCF method, the electron density (Eq. <ref>) needs to be computed at all the finite-difference nodes. In SQDFT, the electron density at the node q ∈ K_Ω^p in the p^th processor is calculated using the relations <cit.>

ρ_q = (2/h^3) ∑_{j=0}^{n_pl} c_q^j ρ_q^j,

c_q^j = (2/π) ∫_{-1}^{1} g(r, μ̂_q, σ̂_q) T_j(r) / √(1-r^2) dr, j = 0, 1, …, n_pl,

ρ_q^j = 𝐰_q^T 𝐭_q^j for j = 0, 1, …, n_pl/2,
ρ_q^j = 2(𝐭_q^{(j+1)/2})^T 𝐭_q^{(j-1)/2} - ρ_q^1 for j = n_pl/2+1, n_pl/2+3, …, n_pl-1,
ρ_q^j = 2(𝐭_q^{j/2})^T 𝐭_q^{j/2} - ρ_q^0 for j = n_pl/2+2, n_pl/2+4, …, n_pl,

where n_pl is the order of the Clenshaw-Curtis quadrature, chosen here to be a multiple of four for simplicity; T_j denotes the Chebyshev polynomial of degree j; c_q^j is the coefficient of T_j in the polynomial expansion of the Fermi-Dirac function, with the value of c_q^0 half of that given in the expression; μ̂_q = (μ - χ_q)/ζ_q is the scaled and shifted Fermi energy, where χ_q = (λ_q^max + λ_q^min)/2 and ζ_q = (λ_q^max - λ_q^min)/2, with λ_q^max and λ_q^min denoting the maximum and minimum eigenvalues of 𝐇_q, respectively; σ̂_q = σ/ζ_q is the scaled smearing; and 𝐭_q^j = T_j(𝐇̂_q) 𝐰_q ∈ ℝ^{N_s × 1}, which is determined using the following iteration, a consequence of the three-term recurrence relation of Chebyshev polynomials:

𝐭_q^{i+1} = 2𝐇̂_q 𝐭_q^i - 𝐭_q^{i-1}, i = 1, 2, …, n_pl/2,
𝐭_q^1 = 𝐇̂_q 𝐰_q, 𝐭_q^0 = 𝐰_q,

where 𝐇̂_q = (𝐇_q - χ_q 𝐈)/ζ_q is the scaled and shifted nodal Hamiltonian whose spectrum lies in the interval [-1,1]. Note that the iteration proceeds only up to n_pl/2 rather than n_pl, since we have employed the product property of Chebyshev polynomials, 2T_j(r)T_k(r) = T_{j+k}(r) + T_{|j-k|}(r), to directly calculate ρ_q^j for j = n_pl/2+1, n_pl/2+2, …, n_pl, as described above (in previous work where the expression for the electron density was derived <cit.>, this property was not utilized, and therefore the iteration proceeded up to n_pl; the present approach reduces the cost of computing the ρ_q^j by a factor of two). Since the values of ρ_q^j are independent of the Fermi level μ, they are first computed and stored. Next, μ is determined by satisfying the constraint on the total number of electrons:

2 ∑_{p=1}^{N_P} ∑_{q∈K_Ω^p} ∑_{j=0}^{n_pl} c_q^j ρ_q^j = N_e,

where the values of c_q^j are given by Eq. <ref>. Finally, for the c_q^j corresponding to the Fermi level μ, the electron density is calculated via Eq. <ref>.

Since the simulation domain Ω is cubical and we have employed a uniform finite-difference grid with uniform domain decomposition, the layout of effective potential values V_eff required from neighboring processors as part of the nodal Hamiltonians for grid points in each processor is identical. To accomplish the transfer, we utilize a single MPI command <cit.> to communicate the required values of V_eff (i.e., those within the region of influence for grid points in each processor) between processors. Doing so reduces the number of MPI-related calls that would otherwise be required in every matrix-vector multiplication. After the communication is complete, for every finite-difference node q ∈ K_Ω^p, we first calculate λ_q^max and λ_q^min, the maximum and minimum eigenvalues of 𝐇_q, using the Lanczos method <cit.>. Next, we perform the recursive iteration above, the results of which are used to calculate the ρ_q^j. The matrix-vector multiplications required as part of the Lanczos method and the recursive iteration are performed in a matrix-free manner. The Chebyshev coefficients c_q^j are calculated using the discrete orthogonality of the Chebyshev polynomials <cit.>.
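A serial Python sketch of this procedure for a single node is given below. Here Hhat_mv stands in for the matrix-free product with the scaled nodal Hamiltonian 𝐇̂_q and is an assumed callable, not part of the actual code. Following the text, only three vectors are kept in memory, and the ρ_q^j for j > n_pl/2 are obtained from the product property during the same sweep (a few entries are computed twice, consistently, for brevity).

```python
import numpy as np

def chebyshev_coeffs(g, npl):
    """Chebyshev coefficients of a vectorized function g on [-1, 1], via
    discrete orthogonality on the Chebyshev-Gauss nodes; c[0] is halved,
    following the convention in the text."""
    theta = np.pi * (np.arange(npl + 1) + 0.5) / (npl + 1)
    gvals = g(np.cos(theta))
    c = np.array([(2.0 / (npl + 1)) * np.sum(gvals * np.cos(j * theta))
                  for j in range(npl + 1)])
    c[0] *= 0.5
    return c

def sq_node_rho_j(Hhat_mv, w, npl):
    """rho_q^j, j = 0..npl, for one node, keeping only three vectors."""
    rho = np.zeros(npl + 1)
    t_prev = w.copy()                  # t^0
    t_curr = Hhat_mv(w)                # t^1
    rho[0] = w @ t_prev                # equals 1 for a standard basis vector
    rho[1] = w @ t_curr
    rho[2] = 2.0 * (t_curr @ t_curr) - rho[0]
    for i in range(1, npl // 2):
        t_next = 2.0 * Hhat_mv(t_curr) - t_prev            # t^{i+1}
        rho[i + 1] = w @ t_next                            # direct, j <= npl/2
        rho[2 * i + 1] = 2.0 * (t_next @ t_curr) - rho[1]  # odd j, product
        rho[2 * i + 2] = 2.0 * (t_next @ t_next) - rho[0]  # even j, product
        t_prev, t_curr = t_curr, t_next
    return rho

# The nodal density then follows as rho_q = (2 / h**3) * (c @ rho), with
# c = chebyshev_coeffs(lambda r: 1/(1 + np.exp((r - mu_hat)/sigma_hat)), npl).
```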
The Fermi level is determined using Brent's method <cit.>, with Newton-Raphson's method becoming the preferred choice as the temperature is increased (a serial sketch of this root-finding step is given after the free energy discussion below). While doing so, the global communication between processors is handled using a single global communication command. During the electron density calculation, the memory costs are dominated by the storage of three vectors during the recursive iteration (𝐭_q^{i+1}, 𝐭_q^i, and 𝐭_q^{i-1}) and the storage of the ρ_q^j for the calculation of the electron density. Therefore, the memory costs per processor scale as 𝒪(3N_s + n_pl N_d^p). The computational costs are dominated by the matrix-vector products in the Lanczos iteration and the recursive iteration. Therefore, the computational cost per processor scales as 𝒪(n_lancz N_s N_d^p + (1/2) n_pl N_s N_d^p), where n_lancz is the number of Lanczos iterations required for determining λ_q^max and λ_q^min. Since N_s, n_pl, and n_lancz remain independent of system size, the overall memory and computational costs scale linearly with the number of finite-difference nodes in the domain Ω, and therefore 𝒪(N) with respect to the number of atoms.

§.§ Free energy

Once the electronic ground state has been determined, i.e., the SCF iteration has converged, the free energy (Eq. <ref>) needs to be calculated.

Band structure energy. The band structure energy in the Clenshaw-Curtis SQ method <cit.> takes the following form in parallel computations:

E_band = 2 ∑_{p=1}^{N_P} ∑_{q∈K_Ω^p} ∑_{j=0}^{n_pl} (χ_q c_q^j + ζ_q d_q^j) ρ_q^j,

d_q^j = (2/π) ∫_{-1}^{1} r g(r, μ̂_q, σ̂_q) T_j(r) / √(1-r^2) dr,

where d_q^j is the coefficient of T_j in the polynomial expansion of the band structure energy function (i.e., r g(r)), with the value of d_q^0 half of that given in the expression. In addition, c_q^j and ρ_q^j are as given in Eqs. <ref> and <ref>, respectively.

Electronic entropy energy. The electronic entropy energy in the Clenshaw-Curtis SQ approach <cit.> takes the following form in parallel computations:

S = 2σ ∑_{p=1}^{N_P} ∑_{q∈K_Ω^p} ∑_{j=0}^{n_pl} e_q^j ρ_q^j,

e_q^j = (2/π) ∫_{-1}^{1} [ g(r, μ̂_q, σ̂_q) log g(r, μ̂_q, σ̂_q) + (1 - g(r, μ̂_q, σ̂_q)) log(1 - g(r, μ̂_q, σ̂_q)) ] T_j(r) / √(1-r^2) dr,

where e_q^j is the coefficient of T_j in the polynomial expansion of the electronic entropy energy function (i.e., g(r) log g(r) + (1-g(r)) log(1-g(r))), with the value of e_q^0 half of that given in the expression. Again, ρ_q^j is as given in Eq. <ref>.

Free energy. The free energy of the system in SQDFT is then computed as

ℱ(𝐑) = h^3 ∑_{p=1}^{N_P} ∑_{q∈K_Ω^p} ( (2/h^3) ∑_{j=0}^{n_pl} (χ_q c_q^j + ζ_q d_q^j) ρ_q^j + ε_xc(ρ_q) ρ_q - V_xc(ρ_q) ρ_q + (1/2)(b_q - ρ_q) ϕ_q + (1/2)(b̃_q + b_q) V_c,q - (1/2) ∑_{I∈D_p^b} b̃_I,q Ṽ_I,q + (2σ/h^3) ∑_{j=0}^{n_pl} e_q^j ρ_q^j ),

where the spatial integrals in Eq. <ref> have been approximated using the trapezoidal rule in Eq. <ref> and D_p^b is the set of all atoms (considering all atoms in ℝ^3) whose pseudocharges overlap with the processor domain Ω_p. We note that the exchange-correlation energy E_xc has been modeled using the Local Density Approximation (LDA) <cit.>, wherein ε_xc(ρ) is the sum of the exchange and correlation energy per particle of a uniform electron gas of density ρ. During the free energy calculation, the values of c_q^j and ρ_q^j determined as part of the electron density computation in the last SCF iteration are directly utilized. The Chebyshev coefficients d_q^j and e_q^j are determined using the discrete orthogonality of the Chebyshev polynomials <cit.>. A single global reduction command is utilized to simultaneously compute all the components of the energy. The computational cost per processor scales as 𝒪(n_pl N_d^p), which translates to an overall scaling of 𝒪(N) with respect to the number of atoms.
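Returning to the determination of the Fermi level, the sketch below illustrates the root-finding step in serial Python, assuming the ρ_q^j, χ_q, and ζ_q have been precomputed for the local nodes; the bracket is an assumed input, and in SQDFT the sum over nodes is distributed over processors and combined with a global reduction.

```python
import numpy as np
from scipy.optimize import brentq

def fermi_dirac(r, mu_hat, sigma_hat):
    return 1.0 / (1.0 + np.exp((r - mu_hat) / sigma_hat))

def find_fermi_level(rho_j, chi, zeta, sigma, Ne, npl, bracket=(-10.0, 10.0)):
    """Solve 2 * sum_q sum_j c_q^j(mu) * rho_q^j = Ne for mu.

    rho_j: array of shape (Nq, npl+1) with the precomputed rho_q^j;
    chi, zeta: arrays of shape (Nq,) with the spectral shift/scaling of
    each nodal Hamiltonian; sigma: smearing (same units as chi, zeta)."""
    theta = np.pi * (np.arange(npl + 1) + 0.5) / (npl + 1)
    r_nodes = np.cos(theta)
    cosjt = np.cos(np.outer(np.arange(npl + 1), theta))   # (j, k) table

    def electron_count(mu):
        total = 0.0
        for q in range(len(chi)):
            g = fermi_dirac(r_nodes, (mu - chi[q]) / zeta[q], sigma / zeta[q])
            c = (2.0 / (npl + 1)) * (cosjt @ g)           # Chebyshev coeffs
            c[0] *= 0.5
            total += c @ rho_j[q]
        return 2.0 * total - Ne

    return brentq(electron_count, *bracket)
```

Since the ρ_q^j are independent of μ, each trial value of μ costs only the re-evaluation of the Chebyshev coefficients, which is what makes the Brent (or Newton) iteration inexpensive.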
§.§ Atomic forces

In order to update the positions of the atoms during the course of the QMD simulation, the Hellmann-Feynman forces on the nuclei (Eq. <ref>) need to be computed.

Local component. The local component of the force has the following discrete form in parallel computations:

𝐟_I^l = h^3 ∑_{p=1}^{N_P} ∑_{I'∈D_{p,I}^b} ∑_{q∈K_Ω^p} ( ∇_h b_I'|_q ϕ_q + (1/2)(b̃_q + b_q) ∇_h V_c,I'|_q + (1/2)(∇_h b̃_I'|_q + ∇_h b_I'|_q) V_c,q ),

where the integrals in Eq. <ref> have been approximated using the trapezoidal rule (Eq. <ref>) and D_{p,I}^b is the set consisting of the I^th atom and its images whose pseudocharges overlap with the processor domain Ω_p.

Nonlocal component. The nonlocal component of the force in the SQ approach <cit.> takes the following form in parallel computations:

𝐟_I^nl = -4 ∑_{p=1}^{N_P} ∑_{I'∈D_{p,I}^c} ∑_{q∈K_Ω^p} 𝐰_q^T 𝐕_nl,q^I' ∇_{h,q} ( ∑_{j=0}^{n_pl} c_q^j 𝐭_q^j ),

where D_{p,I}^c is the set consisting of the I^th atom and its images whose nonlocal projectors overlap with the processor domain Ω_p, c_q^j is the coefficient of T_j in the polynomial expansion of the Fermi-Dirac function (Eq. <ref>), and 𝐭_q^j is determined using the recurrence relation

𝐭_q^{i+1} = 2𝐇̂_q 𝐭_q^i - 𝐭_q^{i-1}, i = 1, 2, …, n_pl,
𝐭_q^1 = 𝐇̂_q 𝐰_q, 𝐭_q^0 = 𝐰_q.

Note that unlike the iteration for the electron density, which proceeds up to n_pl/2, the above iteration proceeds up to n_pl, since some of the off-diagonal components of the density matrix are needed for the calculation of the nonlocal force, i.e., the ρ_q^j are not sufficient; rather, the 𝐭_q^j are required.
This makes SQ more efficient since (i) the Fermi level calculation does not require an outer loop, and (ii) inter-processor communication is needed just once per SCF iteration, unlike the FOE where it is required for every matrix-matrix multiplication. Finally, SQ also requires significantly less storage compared to FOE, making it especially well suited for modern high performance computing platforms. § RESULTS AND DISCUSSION In this section, we demonstrate the accuracy, efficiency, and scaling of SQDFT in Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. In all simulations, we employ a twelfth-order finite-difference discretization, norm-conserving Troullier-Martins pseudopotentials <cit.>, and the Local Density Approximation (LDA) <cit.> with the Perdew-Wang parametrization <cit.> of the correlation energy calculated by Ceperley-Alder <cit.>. To ensure accuracy of the standard 3s3p pseudopotentials employed, temperatures were limited to T ≲ 80000 K, where 2p states can be treated as fully occupied.[Calculations with deeper 2s2p3s3p pseudopotentials show 2p occupation of 5.9991 at 80000 K. ]To demonstrate applicability to metals and insulators alike, we consider two systems: (i) aluminum, a prototypical metal, and (ii) lithium hydride, a prototypical insulator. We compare the results obtained by SQDFT to benchmarks obtained by the finite-difference code SPARC <cit.> and planewave code ABINIT <cit.>, both of which solve the Kohn-Sham problem via diagonalization. §.§ Accuracy and convergence We first study the accuracy of SQDFT, i.e., we verify the convergence of computed energies and atomic forces with respect to key SQ parameters (i.e., quadrature order n_pl and truncation radius R_cut) as well as spatial discretization (i.e., mesh-size h). As representative systems, we consider a 32-atom cell of aluminum at the equilibrium lattice constant of 7.78 Bohr and a 64-atom cell of lithium hydride at the equilibrium lattice constant of 7.37 Bohr, with all atoms randomly displaced by up to 15 % of the equilibrium interatomic distance. We select the electronic temperature/smearing to beσ=4 eV.First, we verify the convergence of SQDFT energies and forces with respect to n_pl and R_cut in Fig. <ref>, with the reference diagonalization answers obtained by SPARC at the same mesh-size and a 4 × 4 × 4 Monkhorst-Pack grid for Brillouin zone integration.[Unlike standard codes, SQDFT can obtain the infinite crystal result without recourse to Brillouin zone integration.] We choose mesh-sizes of h=0.7780 and h=0.5264 Bohr for the aluminum and lithium hydride systems, respectively. It is clear that SQDFT obtains exponential convergence in the energy and atomic forces with respect to both parameters, in agreement with previous studies <cit.>. In particular, { n_pl,R_cut}∼{28,6} and { n_pl,R_cut}∼{40,6} are sufficient to obtain chemical accuracy in both the energy and forces for the aluminum and lithium hydride systems, respectively. Importantly, these values further reduce as the smearing/temperature is increased <cit.>, which makes SQDFT particularly attractive for high-temperature simulations. Note that neither the energy nor the atomic forces are variational with respect to n_pl and R_cut, hence the non-monotonic convergence in Fig. <ref>.Next, we verify the convergence of SQDFT energies and forces with mesh-size h to those computed by the establishedplanewave code ABINIT. 
To do so, we utilize n_pl=160 and R_cut = 10 Bohr in SQDFT, which are sufficient to put the associated errors well below the mesh errors of interest (see Fig. <ref>). In ABINIT, we employ a planewave cutoff of 50 Ha and a 4 × 4 × 4 Monkhorst-Pack grid for Brillouin zone integration, which results in energy and forces that are converged to within 10^-6 Ha/atom and 10^-6 Ha/Bohr, respectively. As shown in Fig. <ref>, both the energy and atomic forces in SQDFT converge rapidly and systematically, with chemical accuracy readily obtained. Notably, we see that energies and forces converge at comparable rates, without the need for additional measures such as double-grid <cit.> or high-order integration <cit.>. Therefore, accurate forces are easily obtained, as needed for structural relaxations and molecular dynamics simulations. §.§ Scaling and performance We now study the scaling and performance of SQDFT on large-scale parallel computers with up to tens of thousands of processors. Specifically, we investigate the strong and weak scaling of SQDFT for aluminum and lithium hydride systems on theandsupercomputers at theLawrence Livermore National Laboratory (LLNL) <cit.>. In all calculations, we employ a smearing of σ=4 eV and utilize (i) h=0.7780 Bohr and { n_pl,R_cut} = {28,6.224Bohr} for the aluminum systems, and (ii) h=0.5264 Bohr and { n_pl,R_cut} = {40,6.387Bohr} for the lithium hydride systems. These parameters are sufficient to obtain chemical accuracy of 0.001 Ha/atom and 0.001 Ha/Bohr in the energy and atomic forces, respectively, as demonstrated in the previous section.First, we perform a strong scaling study for a 2048-atom aluminum system onand a 1728-atom lithium hydride system on , with atoms randomly displaced in both systems. For aluminum, the number of processors onis varied from 64 to 8000. For lithium hydride, the number of processors onis varied from 125 to 27000. The wall times per SCF iteration so obtained are presented in Fig. <ref>. Relative to the smallest number of processors, on the largest number of processors SQDFT achieves 97 % parallel efficiency for aluminum onand 95 % for lithium hydride on . It is clear that SQDFT demonstrates excellent strong scaling. Notably, the wall time per SCF iteration for systems containing ∼ 2000 atoms can be reduced to less than 5 seconds.Next, we perform a weak scaling study for aluminum and lithium hydride. For aluminum on , we increase the system size from 32 to 6912 atoms, while increasing the number of processors from 64 to 13824, maintaining two processors per atom for all systems. For lithium hydride on , we increase the system size from 8 to 10648 atoms, while increasing the number of processors from 27 to 35937, maintaining ∼ 3.4 processors per atom for all systems. The systems are generated by replicating 4-atom and 8-atom unit cells of aluminum and lithium hydride, respectively, with one atom in each unit cell randomly perturbed. We present the results so obtained in Fig. <ref>. We find the scaling with system size for aluminum onto be 𝒪(N^1.00) and the scaling for lithium hydride onto be 𝒪(N^1.01).It is clear that SQDFT demonstrates excellent weak scaling, with near perfect 𝒪(N) scaling with respect to system size in practical calculations. Overall, the excellent strong and weak scaling of SQDFT up to tens of thousands of processors makes it possible to perform high-temperature Kohn-Sham molecular dynamics simulations at large length and time scales, as we show below. 
§.§ High-temperature quantum molecular dynamics

We now consider the accuracy and efficiency of SQDFT in high-temperature Born-Oppenheimer quantum molecular dynamics (QMD) simulations. Such simulations are a cornerstone of modern warm dense matter theory <cit.>, providing equation-of-state and shock-compression predictions of unprecedented accuracy, up to temperatures of ∼ 100 eV and pressures of 100s of Mbar; see, e.g., <cit.>. At temperatures above ∼ 100 eV, conventional Kohn-Sham methods become prohibitively expensive, and so alternative methods such as OFMD <cit.> and, more recently, path integral Monte Carlo (PIMC) <cit.> have been employed to reach temperatures of 1000s of eV and higher. With sufficiently deep potentials, however, the new SQ methodology makes possible Kohn-Sham MD at temperatures of 1000s of eV as well.

As a representative example, we choose an 864-atom aluminum system and perform a 0.15 ps NVE QMD simulation with a time step of 0.1 fs (due to the high velocities of the ions in high-temperature simulations, the time step has been chosen significantly smaller than in ambient calculations <cit.>). We use a mesh size h = 0.7780 Bohr, quadrature order n_pl = 28, truncation radius R_cut = 6.224 Bohr, initial ionic temperature T = 116045 K, initial atomic positions close to a perfect FCC crystal with one atom in each 4-atom unit cell randomly displaced by the same amount, and initial velocities randomly assigned based on the Maxwell-Boltzmann distribution. We integrate the equations of motion using the leapfrog method <cit.>; a minimal sketch is given at the end of this subsection. The values of n_pl and R_cut have been chosen so as to put the associated errors close to an order of magnitude lower than the discretization error, which is ∼ 0.001 Ha/atom and ∼ 0.001 Ha/Bohr in the energy and forces, respectively. At each MD step, we set the electronic temperature equal to the ionic temperature, e.g., σ = 10 eV at the start of the simulation. We perform the simulation on 3375 processors to obtain a wall clock time of ∼ 30 seconds per QMD step (since the computational cost of SQDFT reduces with temperature, the wall time will reduce further as the temperature is increased).

In Fig. <ref>, we plot the variation of the total energy and temperature of the system over the course of the simulation. We observe that the temperature settles after ∼ 30 fs, subsequent to which the mean and standard deviation of the total energy are -3.3451 and 4.7 × 10^-4 Ha/atom, respectively. In addition, the drift in total energy as obtained from a linear fit is ∼ 2.7 × 10^-4 Ha/atom-ps. SQDFT thus shows excellent energy conservation, consistent with the accurate atomic forces obtained. We also plot the calculated radial distribution function in Fig. <ref>, which is in agreement with previous studies <cit.>.
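For reference, a minimal Python sketch of one integration step in the kick-drift-kick form, which is algebraically equivalent to the leapfrog scheme, is given below; forces_fn stands in for the SQDFT force evaluation and is an assumed callable.

```python
import numpy as np

def leapfrog_step(positions, velocities, masses, forces, forces_fn, dt):
    """One kick-drift-kick step of the ionic equations of motion.

    positions, velocities, forces: arrays of shape (N, 3) at the current
    step; masses: array of shape (N,); all in consistent units."""
    velocities = velocities + 0.5 * dt * forces / masses[:, None]  # half kick
    positions = positions + dt * velocities                        # drift
    forces = forces_fn(positions)       # one self-consistent solve per step
    velocities = velocities + 0.5 * dt * forces / masses[:, None]  # half kick
    return positions, velocities, forces
```

Passing the forces in and out ensures only one force evaluation, i.e., one self-consistent DFT solve, per MD step, which dominates the cost of each QMD step.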
We demonstrated the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to quadrature order and truncation radius to reference diagonalization results, and convergence with mesh spacing to established planewave results, for both metallic and insulating systems. In all cases, chemical accuracy was readily obtained. We demonstrated excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect 𝒪(N) scaling with system size, and wall clock times as low as a few seconds per SCF iteration for insulating and metallic systems of ∼2000 atoms. Finally, we verified the accuracy and efficiency of SQDFT in large-scale quantum molecular dynamics (QMD) simulations at high temperature, demonstrating excellent energy conservation and QMD step times of ∼30 seconds for an 864-atom aluminum system at ∼80000 K.

In the present work, we have focused on high-temperature Kohn-Sham DFT calculations. However, the SQ method is applicable at lower temperatures as well, with a larger prefactor, as we show in Appendix <ref>. A possible approach to reduce this prefactor is to generate a localized orthonormal reduced basis (e.g., <cit.>), subsequent to which the SQ method is applied to the finite-difference Hamiltonian projected into this basis. This is indeed a promising path to 𝒪(N) DFT calculations of metals and insulators at ambient conditions, which the authors are pursuing presently.

§ ACKNOWLEDGEMENTS This work was supported in part by the National Science Foundation (Grant number 1333500), and performed in part under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Early support from the Exascale Co-design Center for Materials in Extreme Environments, supported by the Office of Science Advanced Scientific Computing Research Program, and subsequent support from the Laboratory Directed Research and Development program at the Lawrence Livermore National Laboratory is gratefully acknowledged.

Appendix § AMBIENT TEMPERATURE KOHN-SHAM CALCULATIONS Though the focus of the present work has been Kohn-Sham calculations at high temperature, SQDFT can also be utilized at ambient temperature, albeit with a larger prefactor. This increase in prefactor may, however, be mitigated by the excellent parallel scaling of SQDFT on large-scale parallel computers. In order to demonstrate this, we consider an 864-atom randomly perturbed aluminum system with smearing σ = 0.27 eV, as is typical in calculations of metallic systems at ambient conditions in order to facilitate self-consistent convergence <cit.>. We utilize h = 0.7780 Bohr and {n_pl, R_cut} = {320, 18.672 Bohr}, which are sufficient to obtain chemical accuracy of 0.001 Ha/atom and 0.001 Ha/Bohr in the energy and atomic forces, respectively. We perform the calculations with the number of processors varied from 1000 to 27000, the results of which are presented in Fig. <ref>. Relative to 1000 processors, the efficiency of SQDFT on 8000 processors is larger than 98%, but on 27000 processors, the efficiency drops to 51%.
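For reference, the parallel efficiencies quoted in this appendix are computed relative to the smallest run as E = (p_0 t_0)/(p t); the wall times in the sketch below are hypothetical placeholders chosen only to reproduce the reported percentages.

# Strong-scaling efficiency relative to the 1000-processor baseline.
# Wall times (s per SCF iteration) here are illustrative, not measured values.
base_procs, base_time = 1000, 480.0
for procs, time in [(8000, 61.0), (27000, 35.0)]:
    eff = base_procs * base_time / (procs * time)
    print(f"{procs} processors: {100.0 * eff:.0f}% efficiency")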
The reduced efficiency at this temperature at the largest processor counts arises due to the increased communications required for the larger nodal Hamiltonians (R_cut = 18.672 Bohr) relative to the computational work per processor. However, SQDFT is still able to achieve wall times of less than a minute per SCF iteration, which demonstrates its ability to perform large-scale Kohn-Sham quantum molecular dynamics (QMD) simulations even at ambient temperature, given a sufficient number of processors.

References

P. Hohenberg, W. Kohn, Physical Review 136 (1964) B864–B871.
W. Kohn, L. J. Sham, Physical Review 140 (1965) A1133–A1138.
S. Goedecker, Rev. Mod. Phys. 71 (1999) 1085–1123.
D. R. Bowler, T. Miyazaki, Reports on Progress in Physics 75 (2012) 036503.
J. Aarons, M. Sarwar, D. Thompsett, C.-K. Skylaris, The Journal of Chemical Physics 145 (2016) 220901.
D. Marx, J. Hutter, Ab initio Molecular Dynamics: Basic Theory and Advanced Methods, Cambridge University Press, 2009.
G. Kresse, J. Hafner, Physical Review B 47 (1993) 558.
S. Goedecker, Physical Review B 58 (1998) 3501.
S. Ismail-Beigi, T. Arias, Physical Review Letters 82 (1999) 2127.
X. Zhang, D. Drabold, Physical Review B 63 (2001) 233109.
S. Taraskin, P. Fry, X. Zhang, D. Drabold, S. Elliott, Physical Review B 66 (2002) 233101.
M. Benzi, P. Boito, N. Razouk, SIAM Review 55 (2013) 3–64.
J. M. Soler, E. Artacho, J. D. Gale, A. Garcia, J. Junquera, P. Ordejon, D. Sanchez-Portal, J. Phys.: Condens. Matter 14 (2002) 2745–2779.
SIESTA: www.icmab.es/siesta, accessed 2017-07-05.
M. J. Gillan, D. R. Bowler, A. S. Torralba, T. Miyazaki, Comput. Phys. Commun. 177 (2007) 14–18.
Conquest: www.order-n.org, accessed 2017-07-05.
C. K. Skylaris, P. D. Haynes, A. A. Mostofi, M. C. Payne, J. Chem. Phys. 122 (2005).
ONETEP: www.onetep.org, accessed 2017-07-05.
E. Tsuchida, J. Phys. Soc. Jpn. 76 (2007).
D. Osei-Kuffuor, J.-L. Fattebert, Phys. Rev. Lett. 112 (2014).
S. Mohr, L. E. Ratcliff, P. Boulanger, L. Genovese, D. Caliste, T. Deutsch, S. Goedecker, J. Chem. Phys. 140 (2014).
BigDFT: bigdft.org, accessed 2017-07-05.
OpenMX: www.openmx-square.org, accessed 2017-07-05.
N. Bock, M. Challacombe, C. K. Gan, G. Henkelman, K. Nemeth, A. M. N. Niklasson, A. Odell, E. Schwegler, C. J. Tymczak, V. Weber, FreeON, 2014. Los Alamos National Laboratory (LA-CC 01-2; LA-CC-04-086), Copyright University of California.
FreeON: www.openhub.net/p/freeon, accessed 2017-07-05.
A. Ruiz-Serrano, N. D. M. Hine, C.-K. Skylaris, J. Chem. Phys. 136 (2012).
F. Graziani, M. P. Desjarlais, R. Redmer, S. B. Tricky (Eds.), Frontiers and Challenges in Warm Dense Matter, Lecture Notes in Computational Science and Engineering, Springer, 2014.
F. R. Graziani, V. S. Batista, L. X. Benedict, J. I. Castor, H. Chen, S. N. Chen, C. A. Fichtl, J. N. Glosli, P. E. Grabowski, A. T. Graf, S. P. Hau-Riege, A. U. Hazi, S. A. Khairallah, L. Krauss, A. B. Langdon, R. A. London, A. Markmann, M. S. Murillo, D. F. Richards, H. A. Scott, R. Shepherd, L. G. Stanton, F. H. Streitz, M. P. Surh, J. C. Weisheit, H. D. Whitley, High Energy Density Physics 8 (2012) 105–131.
P. Renaudin, C. Blancard, J. Clérouin, G. Faussurier, P. Noiret, V. Recoules, Physical Review Letters 91 (2003) 075002.
M. Dharma-Wardana, Physical Review E 73 (2006) 036401.
R. Ernstorfer, M. Harb, C. T. Hebeisen, G. Sciaini, T. Dartigalongue, R. D. Miller, Science 323 (2009) 1033–1037.
T. White, S. Richardson, B. Crowley, L. Pattison, J. Harris, G. Gregori, Physical Review Letters 111 (2013) 175002.
F. Lambert, J. Clerouin, G. Zerah, Phys. Rev. E 73 (2006) 016403.
S. Zhang, H. Wang, W. Kang, P. Zhang, X. He, Physics of Plasmas 23 (2016) 042707.
A. Cangi, A. Pribram-Jones, Physical Review B 92 (2015) 161113.
S. X. Hu, B. Militzer, L. A. Collins, K. P. Driver, J. D. Kress, Phys. Rev. B 94 (2016).
P. Suryanarayana, Chemical Physics Letters 584 (2013) 182–187.
P. P. Pratapa, P. Suryanarayana, J. E. Pask, Computer Physics Communications 200 (2016) 96–107.
E. Prodan, W. Kohn, Proceedings of the National Academy of Sciences of the United States of America 102 (2005) 11635–11638.
A. Anantharaman, E. Cancès, Annales de l'Institut Henri Poincare (C) Non Linear Analysis 26 (2009) 2425–2455.
J. E. Pask, P. A. Sterne, Phys. Rev. B 71 (2005) 113101.
V. Gavini, J. Knap, K. Bhattacharya, M. Ortiz, Journal of the Mechanics and Physics of Solids 55 (2007) 669–696.
P. Suryanarayana, V. Gavini, T. Blesgen, K. Bhattacharya, M. Ortiz, Journal of the Mechanics and Physics of Solids 58 (2010) 256–280.
P. Suryanarayana, K. Bhattacharya, M. Ortiz, Journal of Computational Physics 230 (2011) 5226–5238.
P. Suryanarayana, D. Phanish, Journal of Computational Physics 275 (2014) 524–538.
S. Ghosh, P. Suryanarayana, Journal of Computational Physics 307 (2016) 634–652.
D. Bowler, T. Miyazaki, M. Gillan, Journal of Physics: Condensed Matter 14 (2002) 2781.
C.-K. Skylaris, P. D. Haynes, The Journal of Chemical Physics 127 (2007) 164712.
P. Suryanarayana, Chemical Physics Letters 679 (2017) 146–151.
P. Suryanarayana, K. Bhattacharya, M. Ortiz, Journal of the Mechanics and Physics of Solids 61 (2013) 38–60.
L. Lin, C. Yang, SIAM Journal on Scientific Computing 35 (2013) S277–S298.
A. S. Banerjee, P. Suryanarayana, J. E. Pask, Chemical Physics Letters 647 (2016) 31–35.
D. G. Anderson, Journal of the ACM (JACM) 12 (1965) 547–560.
P. Pulay, Chemical Physics Letters 73 (1980) 393–398.
P. P. Pratapa, P. Suryanarayana, J. E. Pask, Journal of Computational Physics 306 (2016) 43–54.
P. Suryanarayana, P. P. Pratapa, J. E. Pask, arXiv preprint arXiv:1606.08740 (2016).
J. R. Shewchuk, An introduction to the conjugate gradient method without the agonizing pain, 1994.
D. Alfè, Computer Physics Communications 118 (1999) 31–33.
W. Gropp, E. Lusk, A. Skjellum, Using MPI: portable parallel programming with the message-passing interface, volume 1, MIT Press, 1999.
S. Ghosh, P. Suryanarayana, Computer Physics Communications 216 (2017) 109–125.
S. Ghosh, P. Suryanarayana, Computer Physics Communications 212 (2017) 189–204.
W. Gropp, T. Hoefler, R. Thakur, E. Lusk, Using advanced MPI: Modern features of the message-passing interface, MIT Press, 2014.
C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, United States Governm. Press Office, Los Angeles, CA, 1950.
P. P. Pratapa, P. Suryanarayana, Mechanics Research Communications 78 (2016) 27–33.
A. Gil, J. Segura, N. M. Temme, Numerical methods for special functions, SIAM, 2007.
W. H. Press, Numerical recipes 3rd edition: The art of scientific computing, Cambridge University Press, 2007.
S. Goedecker, M. Teter, Physical Review B 51 (1995) 9455.
S. Goedecker, L. Colombo, Physical Review Letters 73 (1994) 122.
N. Troullier, J. L. Martins, Physical Review B 43 (1991) 1993–2006.
J. P. Perdew, Y. Wang, Physical Review B 45 (1992) 13244.
D. M. Ceperley, B. J. Alder, Phys. Rev. Lett. 45 (1980) 566–569.
X. Gonze, J. M. Beuken, R. Caracas, F. Detraux, M. Fuchs, G. M. Rignanese, L. Sindic, M. Verstraete, G. Zerah, F. Jollet, M. Torrent, A. Roy, M. Mikami, P. Ghosez, J. Y. Raty, D. C. Allan, Computational Materials Science 25 (2002) 478–492.
T. Ono, K. Hirose, Phys. Rev. Lett. 82 (1999) 5016–5019.
N. S. Bobbitt, G. Schofield, C. Lena, J. R. Chelikowsky, Phys. Chem. Chem. Phys. (2015). DOI: 10.1039/c5cp02561c.
Lawrence Livermore National Laboratory (LLNL) high performance computing systems: https://computation.llnl.gov/computing/machine-catalog, accessed 2017-06-27.
S. Zhang, K. P. Driver, F. Soubiran, B. Militzer, High Energy Density Physics 21 (2016) 16–19.
S. Zhang, K. P. Driver, F. Soubiran, B. Militzer, J. Chem. Phys. 146 (2017).
A. Shamp, E. Zurek, T. Ogitsu, D. E. Fratanduono, S. Hamel, Phys. Rev. B 95 (2017).
K. P. Driver, F. Soubiran, S. Zhang, B. Militzer, High Energy Density Physics 23 (2017) 81–89.
S. Zhang, K. P. Driver, F. Soubiran, B. Militzer, Phys. Rev. E 96 (2017).
D. C. Rapaport, The art of molecular dynamics simulation, Cambridge University Press, 2004.
L. Lin, J. Lu, L. Ying, E. Weinan, Journal of Computational Physics 231 (2012) 2140–2154.
G. Zhang, L. Lin, W. Hu, C. Yang, J. E. Pask, Journal of Computational Physics 335 (2017) 426–443.
G. Kresse, J. Furthmüller, Physical Review B 54 (1996) 11169–11186.
http://arxiv.org/abs/1708.07913v2
{ "authors": [ "Phanish Suryanarayana", "Phanisri P. Pratapa", "Abhiraj Sharma", "John E. Pask" ], "categories": [ "physics.comp-ph", "cond-mat.mtrl-sci" ], "primary_category": "physics.comp-ph", "published": "20170826010703", "title": "SQDFT: Spectral Quadrature method for large-scale parallel $\\mathcal{O}(N)$ Kohn-Sham calculations at high temperature" }
http://arxiv.org/abs/1708.07823v4
{ "authors": [ "Giuliano Panico", "Francesco Riva", "Andrea Wulzer" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170825174914", "title": "Diboson Interference Resurrection" }
We present new photometric data of 11 hot Jupiter transiting exoplanets (CoRoT-12b, HAT-P-5b, HAT-P-12b, HAT-P-33b, HAT-P-37b, WASP-2b, WASP-24b, WASP-60b, WASP-80b, WASP-103b, XO-3b) in order to update their planetary parameters and to constrain information about their atmospheres. These observations of CoRoT-12b, HAT-P-37b and WASP-60b are the first follow-up data since their discovery. Additionally, the first near-UV transits of WASP-80b and WASP-103b are presented. We compare the results of our analysis with previous work to search for transit timing variations (TTVs) and a wavelength dependence in the transit depth. TTVs may be evidence of a third body in the system, and variations in planetary radius with wavelength can help constrain the properties of the exoplanet's atmosphere. For WASP-103b and XO-3b, we find a possible variation in the transit depths that may be evidence of scattering in their atmospheres. The B-band transit depth of HAT-P-37b is found to be smaller than its near-IR transit depth, and such a variation may indicate TiO/VO absorption. These variations are detected at 2–4.6σ, so follow-up observations are needed to confirm these results. Additionally, a flat spectrum across optical wavelengths is found for 5 of the planets (HAT-P-5b, HAT-P-12b, WASP-2b, WASP-24b, WASP-80b), suggesting that clouds may be present in their atmospheres. We calculate a refined orbital period and ephemeris for all the targets, which will help with future observations. No TTVs are seen in our analysis with the exception of WASP-80b, and follow-up observations are needed to confirm this possible detection.

planets and satellites: atmospheres – techniques: photometric – planet-star interactions

§ INTRODUCTION To date, over 3400 exoplanets have been discovered (NASA Exoplanet Archive; ) and most of these planets have been found using the transit method (e.g. ; ) in large-scale transit surveys such as Kepler (), K2 (), WASP (; ) and CoRoT (; ). Transiting exoplanet systems (TEPs) are of great interest because their radius can be directly measured in relation to their star with photometric observations (; ). With the addition of spectroscopic and radial velocity measurements, many physical properties of TEP systems (mass, radius, semi-major axis, gravity, temperature, eccentricity, orbital period) can be directly measured (e.g., ). Additionally, multiple-band photometry of a TEP system can be used to constrain the composition of an exoplanet's atmosphere (; ; ; ). The absorption properties of different species in a planetary atmosphere vary with wavelength, causing an observable variation in the planet's radius. Photometric light curve analysis can also be used to search for transit timing variations (TTVs). TTVs can indicate additional bodies in a TEP system or an unstable orbit caused by tidal forces from the star (e.g., ; ; ). In this work, we present new ground-based photometric data of 11 confirmed transiting hot Jupiter exoplanets. We describe and perform TEP modeling techniques (Section <ref>–<ref>) to determine the orbital and physical parameters of each system, and compare our results with previously published results to confirm and improve the planetary parameters (Section <ref>–<ref>).
For each system, we combine our results with previous work to search for a variation in planetary radius with wavelength (Section <ref>), which could indicate Rayleigh scattering, the presence of an absorptive atmosphere, or clouds. Finally, we combine our mid-transit data with previous observations to recalculate each system's orbital period and search for TTVs.

§ OBSERVATIONS AND DATA REDUCTION All the observations were performed at the University of Arizona's Steward Observatory 1.55-m Kuiper Telescope on Mt. Bigelow near Tucson, Arizona. The Mont4k CCD has a field of view of 9.7'×9.7' and contains a 4096×4096 pixel sensor. The CCD is binned 3×3 to achieve a resolution of 0.43″/pixel; binning also reduces the read-out time to ∼10 s. Our observations were taken with the Bessell U (303–417 nm), Harris B (360–500 nm), and Harris R (550–900 nm) photometric band filters. To ensure accurate timing in these observations, the clocks were synchronized with a GPS every few seconds. In all the data sets, the average shift in the centroid of our targets is less than 0.6 pixels (0.26″) due to excellent autoguiding (the maximum is 3.4 pixels). This telescope has been used extensively in exoplanet transit studies (; ; ; ; ; ; ; ). A summary of all our observations is displayed in Table <ref>.

To reduce the data and create the light curves we use the reduction pipeline ExoDRPL[https://sites.google.com/a/email.arizona.edu/kyle-pearson/exodrpl] <cit.>. Each of our images is bias-subtracted and flat-fielded with 10 biases and flats. To produce the light curve for each observation we perform aperture photometry (using a standard aperture-photometry routine) by measuring the flux from our target star as well as the flux from 8 different reference stars with 110 different circular aperture radii. The aperture radii sizes we explore are different for every observation due to changes in seeing conditions. For the analysis, a constant sky annulus for every night of observation of each target is chosen (a different sky annulus is used depending on the seeing and the crowdedness of the target field) to measure the brightness of the sky during the observations. We reduce the risk of contamination by making sure no stray light from the target star or other nearby stars falls in the chosen aperture. A synthetic reference light curve is produced by averaging the light curves from our reference stars. The final light curve of each date is normalized by dividing by this synthetic light curve to correct for any systematic differences from atmospheric variations (i.e. airmass) throughout the night. Every combination of reference stars and aperture radii is considered and we systematically choose the best aperture and reference stars by minimizing the scatter in the Out-of-Transit (OoT) data points. The 1σ error bars on the data points include the readout noise, flat-fielding errors, and Poisson noise. The final light curves are presented in Figs. <ref>–<ref>. The data points of all our transits are available in electronic form (see Table <ref>). For all the transits, the OoT baselines have a photometric root-mean-squared (RMS) value between 1.13 and 7.76 millimagnitudes (mmag).

§ LIGHT CURVE ANALYSIS To find the best-fit to the light curves we use the EXOplanet MOdeling Package EXOMOP (; ; )[EXOMOP is available on GitHub at https://github.com/astrojake/EXOMOP], which utilizes the analytic equations of <cit.> to generate a model transit. For a complete description of EXOMOP see <cit.> and <cit.>.
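As a rough sketch of the differential-photometry and aperture-selection steps just described (assumed logic for illustration, not the actual ExoDRPL source):

import numpy as np

def differential_lc(target_flux, ref_fluxes):
    """Divide the target light curve by a synthetic reference light curve.

    target_flux: array of shape (n_frames,); ref_fluxes: (n_refs, n_frames).
    """
    norm_refs = ref_fluxes / np.median(ref_fluxes, axis=1, keepdims=True)
    synthetic = np.mean(norm_refs, axis=0)      # averaged reference light curve
    lc = target_flux / synthetic                # removes shared airmass trends
    return lc / np.median(lc)

def best_aperture(lcs_by_aperture, oot_mask):
    """Pick the aperture whose out-of-transit baseline has the smallest RMS."""
    rms = [np.std(lc[oot_mask]) for lc in lcs_by_aperture]
    return int(np.argmin(rms))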
The χ^2-fitting statistic for the model light curve used in EXOMOP is: χ^2 = ∑_i=1^N_pts [ (f_i^obs - f_i^model) / σ_i^obs ]^2, where N_pts is the total number of data points (Table <ref>), f_i^obs is the observed flux at time i, σ_i^obs is the error in the observed flux, and f_i^model is the calculated model flux. EXOMOP uses the following procedure to find a best-fit to the data. A Levenberg-Marquardt (LM) non-linear least squares minimization (; ; ) is performed on the data and a bootstrap Monte Carlo technique () is used to calculate robust errors of the LM fitted parameters. Additionally, a Differential Evolution Markov Chain Monte Carlo (DE-MCMC; ; ) analysis is used to model the data. The fitted parameters that have the highest error bars from either the LM or DE-MCMC best-fitting model are used in the analysis. In every case both models find results within 1σ of each other. Additionally, EXOMOP uses the residual permutation (rosary bead; ), time-averaging (), and wavelet () methods to assess the importance of red noise in both fitting methods. Not accounting for red noise in the data underestimates the uncertainties of the fitted parameters (; ). In order to be conservative, the red noise method that produces the largest errors is used to inflate the errors in the fitted parameters. Finally, in order to compensate for underestimated observational errors we multiply the error bars of the fitted parameters by √(χ^2_r) when the reduced chi-squared (χ^2_r) of the data (Table <ref>) is greater than unity (e.g. ; ; ; ; ; ). EXOMOP uses the Bayesian Information Criterion (BIC; ) to assess over-fitting of the data. The BIC is defined as BIC = χ^2 + k ln(N_pts), where χ^2 is calculated for the best-fitting model (equation <ref>) and k is the number of free parameters (Table <ref>) in the model fit [f_i^model]. The power of the BIC is the penalty for a higher number of fitted model parameters, making it a robust way to compare different best-fit models. The preferred model is the one that produces the lowest BIC value. Each transit is modeled with EXOMOP using 10000 iterations for the LM model and 20 chains and 20^6 links for the DE-MCMC model. The Gelman-Rubin statistic () is used to ensure chain convergence () in the MCMC model. During the analysis of each transit the mid-transit time (T_c), planet-to-star radius (R_p/R_*), scaled semi-major axis (a/R_*), and inclination (i) are set as free parameters. The previously published values for a/R_*, i, and R_p/R_* are used as priors for the LM model (Table <ref>). The results of the LM fit are used as the prior for the DE-MCMC. The eccentricity (e), argument of periastron (ω), and period (P_p) of each of the planets are fixed (see Table <ref> for their values) in the analysis because these parameters have minimal effect on the overall shape of the light curve. The linear and quadratic limb darkening coefficients in each filter are taken from <cit.> and interpolated to the stellar parameters of the host stars (see Table <ref>) using the EXOFAST applet[http://astroutils.astronomy.ohio-state.edu/exofast/limbdark.shtml] <cit.>. In addition, a linear or quadratic least squares fit is modeled to the OoT baseline simultaneously with the <cit.> model. The BIC is used to determine whether to include any baseline fit in the best-fit model and the baseline with the lowest BIC value is always chosen. The light curve parameters obtained from the EXOMOP analysis and the derived transit durations are summarized in Table <ref>. The modeled light curves can be found in Figs.
<ref>–<ref> and the physical parameters for our targets are derived as outlined in Section <ref> (Tables <ref>–<ref>). A thorough description of the modeling and results of each system can be found in Section <ref>.

§ PHYSICAL PROPERTIES OF THE SYSTEMS We use the results of our light curve modeling with EXOMOP combined with other measurements in the literature to calculate the planetary mass (e.g. ; ), radius, density, surface gravity (e.g. ), modified equilibrium temperature (e.g. ), Safronov number (e.g. ; ), and atmospheric scale height (e.g. ; ). An updated period and ephemeris are also calculated, as described in detail in Section <ref>. To calculate the physical parameters we use the values from the modeling (P_p, R_p/R_*, i, a/R_*); for the orbital (e) and host star parameters (radial velocity amplitude, mass, radius, effective temperature) we use the values found in the literature. When calculating the scale height, the mean molecular weight in the planet's atmosphere was set to 2.3, assuming a H/He-dominated atmosphere (). The physical parameters of all our systems can be found in Tables <ref>–<ref>.

§.§ Period Determination By combining our mid-transit times found using EXOMOP with previously published mid-transit times, we can refine the orbital period of the targets. When necessary, the mid-transit times were transformed from HJD, which is based on UTC time, into BJD, which is based on Barycentric Dynamical Time (TDB), using the online converter[http://astroutils.astronomy.ohio-state.edu/time/hjd2bjd.html] by <cit.>. A refined ephemeris for each target is found by performing a weighted linear least-squares analysis using the following equation: T_c = T_c(0) + P_p × E, where T_c(0) is the mid-transit time at the discovery epoch measured in BJD_TDB, P_p is the orbital period of the target and E is the integer number of cycles after their discovery paper. See Tables <ref>–<ref> for the updated T_c and P_p of each system.

For every system, we also made observation minus calculation mid-transit time (O-C) plots in order to search for any TTVs due to other bodies in the system. We used the derived period and ephemeris found in Tables <ref>–<ref> and Equation <ref> for the calculated mid-transit times. The O-C plots can be found in Figs <ref>–<ref>. The transit timing analysis for all our targets can be found in Table <ref> (the entire table can be found online). We do not observe any significant TTVs in our data with the exception of a 3.8σ deviation for WASP-80b for our observed transit. Since the possible TTV is only one data point and may be caused by an unknown systematic error, more observations of WASP-80b are needed to confirm this result.

§ INDIVIDUAL SYSTEMS §.§ CoRoT-12b CoRoT-12b was discovered by the CoRoT satellite () and was confirmed by follow-up photometry and radial-velocity measurements (). CoRoT-12b is an inflated hot Jupiter with a low density that is well predicted by standard models () for irradiated planets (). We observed a transit of CoRoT-12b on 2013 February 15 with the Harris R filter (Fig. <ref>). We find an R_p/R_* value 4.6σ greater than the discovery value. Our derived physical parameters are in good agreement with <cit.>. We find a planetary radius within 1.3σ of the previously calculated value and a planetary mass within 1σ (Tables <ref> and <ref>).

§.§ HAT-P-5b HAT-P-5b is a hot Jupiter discovered by the HATNet project that orbits a slightly metal-rich star ().
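(As an aside to the period determination above: the weighted linear least-squares fit of T_c = T_c(0) + P_p × E can be sketched as follows; the epochs, mid-transit times, and uncertainties below are hypothetical placeholders, not measured values.)

import numpy as np

E = np.array([0.0, 150.0, 412.0, 890.0])                 # epoch numbers (illustrative)
Tc = np.array([2455000.0000, 2455352.2510, 2455967.4070, 2457089.9120])  # BJD_TDB
sig = np.array([0.0006, 0.0008, 0.0005, 0.0007])         # timing uncertainties (d)

W = np.diag(1.0 / sig**2)
A = np.column_stack([np.ones_like(E), E])
cov = np.linalg.inv(A.T @ W @ A)
Tc0, P = cov @ (A.T @ W @ Tc)                            # refined ephemeris and period
Tc0_err, P_err = np.sqrt(np.diag(cov))
oc = Tc - (Tc0 + P * E)                                  # O-C residuals for a TTV search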
Follow-up multi-color transit observations of HAT-P-5b by <cit.> confirmed the existence of the planet and searched for a variation in planetary radius with wavelength. A significantly larger radius was found in the U-band than expected from Rayleigh scattering alone, which the authors suggest may be due to an unknown systematic error. We observed a transit of HAT-P-5b on 2015 June 6 with the Bessell U filter (Fig. <ref>). Our derived physical parameters are in agreement with previous literature (Tables <ref> and <ref>). We derive a U-band radius consistent within 1σ with a weighted average of radii taken from 350–733 nm (Table <ref>; Fig <ref>). The error on our U-band observation is too large to determine if the observation by <cit.> in the same band may have an unknown systematic error (as suggested by them). Our calculated period is in good agreement with the value found by <cit.>, with a similar uncertainty.

§.§ HAT-P-12b HAT-P-12b is a low density, sub-Saturn mass planet discovered by the HAT survey (). Multiple photometric studies have further refined the system's parameters and searched for TTVs (; ; ; ; ; ; ). <cit.> find a strong optical scattering slope from blue to near-IR wavelengths using Hubble Space Telescope and Spitzer Space Telescope transmission spectrum data. We observed a transit of HAT-P-12b on 2014 January 19 using the Harris B filter (Fig. <ref>). We derive an optical R_p/R_* within 1σ of previously derived radii at optical wavelengths (Table <ref>; Fig <ref>). These results are consistent with the planet having high clouds in its atmosphere (e.g. ; ) and the finding by <cit.> that HAT-P-12b has a cloudy atmosphere. We also find a period similar to <cit.>.

§.§ HAT-P-33b HAT-P-33b is an inflated hot Jupiter orbiting a high-jitter star <cit.>. The high jitter is believed to be caused by convective inhomogeneities in the host star (; ). The planetary radius and mass, which both depend on eccentricity, and the stellar parameters are not well constrained due to the large jitter (20 m s^-1). HAT-P-33b's radius is either 1.7 or 1.8 R_Jup assuming a circular or eccentric orbit, respectively. The first follow-up observations by the Transiting Exoplanet Monitoring Project (TEMP) of HAT-P-33b confirmed the discovery parameters and detected no signs of TTVs (). We observed one transit of HAT-P-33b on 2012 April 6 with the Harris R filter (Fig. <ref>). We find an R-band R_p/R_* value that is larger by 3.4σ than the discovery R_p/R_* (Table <ref>). Follow-up observations are needed to determine the cause of this discrepancy.

§.§ HAT-P-37b HAT-P-37b was identified by the HATNet survey and was confirmed by high-resolution spectroscopy and further photometric observations (). HAT-P-37b is a hot Jupiter with a planetary mass of 1.169±0.103 M_Jup, a radius of 1.178±0.077 R_Jup, and a period of 2.797436±0.000007 d. Additional follow-up observations by <cit.> confirmed these planetary parameters. We obtained two transits of HAT-P-37b on 2015 July 1 with the Harris B and R filters (Fig. <ref>). We derive R_p/R_* values for the two filters that differ by 1.7σ, with a larger radius in the R band (Table <ref>; Fig <ref>). The B-band R_p/R_* is smaller by 2.85σ than the near-IR R_p/R_* (Table <ref>; ). Our derived R-band R_p/R_* value agrees within 1σ with the Sloan i band value obtained by <cit.>. Near-UV observations are needed to determine if the slope between the B and R filters is real or an unknown systematic in the data.
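The band-to-band significances quoted here follow from adding the two depth uncertainties in quadrature. In the sketch below the B-band depth is the HAT-P-37b value reported later in the text, while the R-band numbers are hypothetical stand-ins chosen only to illustrate a ∼1.7σ case:

import math

def depth_diff_sigma(k1, e1, k2, e2):
    """Significance of a difference between two Rp/R* measurements."""
    return abs(k1 - k2) / math.hypot(e1, e2)

# B-band value as reported for HAT-P-37b; R-band values are hypothetical.
print(depth_diff_sigma(0.1253, 0.0021, 0.1305, 0.0022))   # ~1.7 sigma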
Our other derived physical parameters agree with previous literature to within 1σ (Tables <ref> and <ref>). We also calculate a refined period with a factor of 6 decrease in error.

§.§ WASP-2b WASP-2b is a short-period hot Jupiter discovered by the WASP survey and confirmed by radial-velocity measurements taken with the SOPHIE spectrograph (). Extensive photometry and radial velocity measurements have been performed on WASP-2b, further refining its system parameters (; ; ; ; ; ; ; ; ). We observed WASP-2b on 2014 June 14 with the Harris B filter (Fig. <ref>). Our derived physical parameters and transit depth agree with previous literature to within 1σ and we calculate a period with a factor of 2 decrease in error (Tables <ref> and <ref>).

§.§ WASP-24b WASP-24b is a hot Jupiter detected by WASP and confirmed by radial velocity measurements and additional photometric observations (). Further photometric studies calculated improved system parameters () and radial velocity measurements were used to determine that the planet exhibits a symmetrical Rossiter-McLaughlin effect, indicating a prograde, well-aligned orbit (). Our observations of WASP-24b took place on 2012 March 23 and 2012 April 6 (Fig. <ref>). We obtained two transits with the Harris R filter and each transit was modeled separately. The R_p/R_* values of the two dates overlap each other within 1σ. We then found the weighted average of the light curve parameters before deriving the physical parameters. Our weighted average R_p/R_* disagrees with previous R-band observations () by 4σ (Table <ref>; Fig <ref>). The cause of this difference is unknown but future observations can put better constraints on the transit depth and resolve this discrepancy. Our other derived parameters generally agree with previous results except for our planetary radius and equilibrium temperature, which differ from <cit.> by 1.4σ and 1.9σ, respectively (Tables <ref> and <ref>). We calculate a new period with a factor of 2.6 decrease in error (Table <ref>).

§.§ WASP-60b WASP-60b was identified by WASP-North and was confirmed by radial-velocity measurements and follow-up photometry (). WASP-60b is an unexpectedly compact planet orbiting a metal-poor star. We observed a transit of WASP-60b with the Harris B filter on 2012 December 1 (Fig. <ref>). This observation is the first follow-up light curve of WASP-60b. During observations, the automatic guider briefly failed, resulting in a hole in the transit light curve. Despite this, we are able to derive parameters that agree with previous literature to within 1σ (Tables <ref> and <ref>). We find a B-band R_p/R_* value 1.3σ greater than the discovery R_p/R_*.

§.§ WASP-80b WASP-80b is a warm Saturn/hot Jupiter (M_p = 0.55±0.04 M_Jup) with one of the largest transit depths (0.17126±0.00031) discovered so far (). Multiple photometric studies have been done at various wavelengths to refine WASP-80b's planetary parameters (; ; ; ; ). The planet has a transmission spectrum consistent with thick clouds and atmospheric haze (). We observed WASP-80b on 2014 June 16 with the Bessell U filter, obtaining one transit (Fig. <ref>). Inclement weather conditions caused our guider to briefly fail, resulting in a hole in the transit light curve. We derive physical parameters that closely agree with previous literature and also calculate a slightly refined period with a factor of 2 decrease in error (Tables <ref> and <ref>).
Our observations possibly detect a TTV compared to previous work by 3.7σ (Section <ref>); however, further observations of WASP-80b are needed in order to confirm this result.

§.§ WASP-103b WASP-103b is a hot Jupiter detected by the WASP survey with a mass of 1.49±0.09 M_Jup, a short period (P_p = 0.925542±0.000019 d), and an orbital radius only 20% larger than its Roche radius (). It was found that there is a faint, cool, and nearby (with a sky-projected separation of 0.242±0.016 arcsec) companion star of WASP-103 (; ). Further photometric observations were made to refine WASP-103b's planetary parameters and ephemeris (; ; ). A comparison of observed planetary radius at different wavelengths found a larger radius at bluer optical wavelengths, but <cit.> state that Rayleigh scattering cannot be the main cause even when including the contamination of the nearby companion star. We observed WASP-103b on 2015 June 3 with the Bessell U filter (Fig. <ref>). We derive a dilution-corrected near-UV (R_p/R_*)_cor that differs from the discovery value by 2.1σ. Our other calculated parameters agree with previous literature to within 1σ and our calculated period closely agrees with the period found by <cit.> (Tables <ref> and <ref>). A variation in R_p/R_* is found from the ultraviolet to the near-infrared wavelengths (Table <ref>; Fig <ref>), consistent with that found by <cit.>. We correct for the dilution due to the companion star being in our aperture using the procedure described below (this procedure is similar to that done by ). (1) The light curve is modeled with EXOMOP and we find an uncorrected transit depth of (R_p/R_*)_uncor = 0.1174±0.0016. (2) Theoretical spectra of both stars are produced using ATLAS9-ODFNEW (). For WASP-103 we use T_eff = 6110 K and M_star = 1.22 M_⊙ () and for the companion star we use T_eff = 4405 K () and M_star = 0.721 M_⊙ (). Additionally, in order to scale the spectra correctly we use the mass-luminosity relation L = L_⊙(M/M_⊙)^4 for stars between 0.5 and 2 M_⊙. (3) The ATLAS9-ODFNEW model spectra are convolved with the bandpass of the Bessell U filter (). (4) The corrected transit depth, (R_p/R_*)_cor, is found using the equation () (R_p/R_*)_cor = (R_p/R_*)_uncor √(F_tot/F_1), where F_tot is the total flux of both stars and F_1 is the flux from WASP-103 itself. In <cit.> the error of the photometric light curve dominated the error calculation of their corrected transit depth and therefore we also use our photometric error bars for the error in the (R_p/R_*)_cor. Using this procedure we find a (R_p/R_*)_cor = 0.1181±0.0016.

§.§ XO-3b XO-3b is a massive planet (11.79±0.59 M_Jup) with a large eccentricity (0.26±0.017) detected by the XO survey (). Further photometric observations have refined the system's parameters (; ; ; ) and <cit.> found that XO-3's spin axis is misaligned with XO-3b's orbital axis. We observed a transit of XO-3b on 2012 November 30 with the Harris B filter (Fig. <ref>). We derive physical parameters that are in agreement with previous literature (Tables <ref> and <ref>). Our calculated R_p/R_* is 2σ larger than the V-band R_p/R_* found by <cit.>. We calculate a refined period with an error decreased by a factor of 13 from the value found by <cit.>. A non-flat spectrum for R_p/R_* is found for XO-3b (Table <ref>; Fig <ref>).

§ DISCUSSION §.§ Wavelength dependence on the transit depth We find a constant transit depth across optical wavelengths for the TEPs HAT-P-5b, HAT-P-12b, WASP-2b, WASP-24b, and WASP-80b (Fig <ref>, Table <ref>).
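Before continuing, the dilution correction of steps (1)-(4) above can be sketched numerically. In this crude stand-in, blackbody curves integrated over an approximate Bessell U bandpass replace the ATLAS9-ODFNEW spectra and the solar-units flux scaling is approximate, so the result will differ slightly from the quoted (R_p/R_*)_cor:

import numpy as np

hP, cl, kB = 6.626e-34, 2.998e8, 1.381e-23

def band_flux(T):
    """Blackbody surface flux integrated over ~303-417 nm (per unit area)."""
    lam = np.linspace(303e-9, 417e-9, 400)
    B = 2.0 * hP * cl**2 / lam**5 / np.expm1(hP * cl / (lam * kB * T))
    return np.sum(0.5 * (B[1:] + B[:-1]) * np.diff(lam))

def rel_band_flux(T, M):
    """Band flux relative to the Sun: R^2 ~ L/T^4 with L = M^4 (solar units)."""
    R2 = M**4 / (T / 5772.0)**4
    return R2 * band_flux(T) / band_flux(5772.0)

F1 = rel_band_flux(6110.0, 1.22)     # WASP-103
F2 = rel_band_flux(4405.0, 0.721)    # nearby companion
k_cor = 0.1174 * np.sqrt((F1 + F2) / F1)
print(k_cor)                          # close to, but not exactly, the quoted 0.1181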
A lack of variation in radius with wavelength could suggest these planets (HAT-P-5b, HAT-P-12b, WASP-2b, WASP-80b) have clouds/hazes in their upper atmospheres (e.g. ; ; ; ; ) or that they have an isothermal pressure-temperature profile <cit.>. <cit.> also do not detect a significant variation in WASP-80b's transit depth with wavelength, and <cit.> finds a relatively flat spectrum of planetary radii for HAT-P-5b with the exception of their observed radius in the U-band (which they suspect is caused by systematic error in their U-band photometry). A flat spectrum for WASP-24b is also found with the exception of one value. Our R-band R_p/R_* found for WASP-24b differs by 4σ from the previously calculated R_p/R_* () for that same band. The cause of this is unclear and future observations are needed to investigate. Our results are consistent with other transiting exoplanet observations having a flat spectrum in optical wavelengths (i.e. TrES-3b, ; GJ 1214b, ; ; WASP-29b, ; ; HAT-P-19b, ; HAT-P-1b, HAT-P-13b, HAT-P-16b, HAT-P-22b, TrES-2b, WASP-33b, WASP-44b, WASP-48b, WASP-77Ab, ).

We find variations in the transit depth with wavelength for CoRoT-12b, HAT-P-33b, HAT-P-37b, WASP-103b, and XO-3b (Fig <ref>–<ref>, Table <ref>), which could indicate scattering (i.e. due to aerosols or Rayleigh scattering) or absorption in their atmospheres (e.g. ; ). Our observation of HAT-P-37b exhibits a smaller transit depth in the B-band than the red/near-IR value. Such a variation has only been seen in a recent paper by <cit.> where they observe a smaller B-band transit depth than optical in WASP-121b. <cit.> believe a possible cause of such a variation is TiO/VO absorption and this may also be the cause of the transit depth variations seen in HAT-P-37b. However, more theoretical modeling is needed to confirm that TiO/VO is in fact the opacity source. Additionally, a smaller near-UV radius was recently observed in the hot Jupiter WASP-1b (); however, these observations did not observe in the B-band. Future near-UV and blue-band observations are needed for WASP-103b and XO-3b to determine whether the scattering in their atmospheres is due to Rayleigh scattering (; ; ; ), since these bands are the only optical wavelengths not affected by strong spectral features. The radius variations in WASP-103b show a consistently larger transit depth in the near-UV and blue than in the rest of the optical (this variation is still present when corrected for dilution due to the companion star). Such a radius variation may indicate a change in particle size at different altitudes of the planetary atmosphere (e.g. ). We find a larger R-band transit depth in HAT-P-33b and CoRoT-12b than their discovery transit depths. Since the R filter encompasses the Hα line (656.281 nm), our observation could be an indication of atmospheric escape such as that observed in the atmospheres of HD 189733b (; ; ; ; ) and HD 209458b <cit.> and predicted (e.g. ; ). Follow-up photometry and high-resolution spectroscopy observations are encouraged to confirm all the transit depth variations. These results also agree with observations of other exoplanets not having a flat spectrum (i.e. HD 209458b, ; HAT-P-5b, ; GJ 3470b, ; Qatar-2, ; WASP-17b, WASP-39b, HAT-P-1b, WASP-31b, HAT-P-12b, HD 189733b, WASP-6b, ; CoRoT-1b, TrES-4b, WASP-1b, WASP-12b, WASP-36b, ). For illustration, the observed R_p/R_* differences with wavelength for each target (Table <ref>) are compared to theoretical predictions () for a model planetary atmosphere (Figure <ref>–<ref>).
The models used are calculated for planets with a mass of 1 M_Jup, g_p = 25 m s^-1 or g_p = 10 m s^-1, a base radius of 1.25 R_Jup at 10 bar, the T_eq closest to the measured value for each exoplanet (with model choices of 500, 750, 1000, 1250, 1500, 1750, 2000, 2500 K), and solar metallicity. To provide a best fit to the spectral changes a vertical offset is applied to the model. This comparison is helpful as it illustrates the size of the observed variation compared to what the theoretical models predict. However, radiative transfer models calculated for each exoplanet individually are needed to fully understand their transmission spectra.

Finally, no signs of asymmetric transits are seen in the near-UV light curves of HAT-P-5b, WASP-80b, and WASP-103b. This result is consistent with ground-based near-UV observations of 19 other transiting exoplanets (; ; ; ; ; ; ) that show no evidence of asymmetric transits. Additionally, theoretical modeling by <cit.> using a plasma simulation code showed that asymmetric transits cannot be produced in the broad-band near-UV regardless of the assumed physical phenomena that could cause absorption (e.g. ; ; ; ; ).

§.§.§ Variability in the host stars One of the major assumptions in our interpretation that the planetary atmosphere is the cause of the transit depth variations is that the brightness of the host stars has minimal variability due to stellar activity. The presence of star spots and stellar activity can produce variations in the observed transit depth (e.g. ; ; ; ; ). This effect is stronger in the near-UV and blue and can mimic a Rayleigh scattering signature (e.g. ; ). Additionally, no obvious star spot crossing is seen in our data (Figs. <ref>-<ref>) with the possible exception of HAT-P-37b (see below). We estimate how much the transit depth may change due to unocculted spots using the formalization presented by <cit.>. This method assumes that the spots can be treated as a stellar spectrum but with a lower effective temperature, no surface brightness variation outside the spots, and no plage present. The effect of these assumptions is a dimming of the star and therefore an increase in the transit depth. <cit.> find for HD 189733b that the change in transit depth due to unocculted spots is Δ(R_p/R_*) = (2.08×10^-3/2) (R_p/R_*) between 375–400 nm. Therefore, unocculted spots have minimal influence (assuming our host stars have unocculted spots similar to HD 189733b) on the observed transit depth variations, since our final error bars (Table <ref>) are at least 10 times larger than the influence of these spots (e.g. Δ[R_p/R_*] = 0.00014 for HAT-P-37b). Qualitatively, this result is consistent with the study by <cit.> that finds that stellar activity similar to the Sun's has very little effect on the transit depth measured in near-UV to optical wavelengths. Nonetheless, we highly encourage follow-up observations and host star monitoring of all our targets to assess the effect of stellar activity on the observed transit depth variations. Next, we investigate what effect a star-spot crossing in the light curve of HAT-P-37 would have on its calculated transit depth. In the B-band light curve of HAT-P-37b (Figure <ref>) there may be a star-spot crossing at a phase range of 0.004–0.008. However, the detected signal is very close to the scatter in the light curve. If we model the light curve without the possible star-spot crossing we find R_p/R_* = 0.1278±0.0048, within 1σ of the transit depth of the entire light curve (0.1253±0.0021).
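For reference, the unocculted-spot estimate quoted above for HAT-P-37b follows directly from the <cit.> scaling:

# Delta(Rp/R*) = (2.08e-3 / 2) * (Rp/R*), evaluated at the measured B-band depth.
k = 0.1253
print((2.08e-3 / 2.0) * k)   # ~1.3e-4, i.e. the Delta(Rp/R*) ~ 0.00014 quoted above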
<cit.> present a procedure to estimate the effects of unocculted spots on the transit depth. Their procedure can also be used to estimate the effect of star spot crossings on the transit depth, where instead of unocculted spots increasing the transit depth, occulted spots should decrease it. <cit.> find that the change in transit depth due to spots, Δ(R_p^2/R_*^2), is Δ(R_p^2/R_*^2) = (R_p/R_*)^2 δ (T_spot/T_eff), where R_p/R_* is the unperturbed transit depth, δ is the fractional area of star spots, and T_spot is the temperature of the spot. If we set Δ(R_p^2/R_*^2) = 3100 ppm (the approximate difference between our B-band and the Sloan-i transit depth; ), then we can estimate T_spot and δ. For spot temperatures between 2000–5000 K, we find that δ would be between 20–50%. Typical values of δ for solar-like stars are around several per cent (e.g. ; ; ), so our estimated δ range is extremely high. Due to both these tests, it seems unlikely that the smaller B-band transit depth of HAT-P-37b is due to an occulted star-spot.

§ CONCLUSIONS We observed 11 transiting hot Jupiters (CoRoT-12b, HAT-P-5b, HAT-P-12b, HAT-P-33b, HAT-P-37b, WASP-2b, WASP-24b, WASP-60b, WASP-80b, WASP-103b, XO-3b) from the ground using near-UV and optical filters in order to update their system parameters and constrain their atmospheres. Our observations of CoRoT-12b, HAT-P-37b and WASP-60b are the first follow-up observations of these planets since their discovery and we also obtain the first near-UV light curves of WASP-80b and WASP-103b. We find that HAT-P-5b, HAT-P-12b, WASP-2b, WASP-24b, and WASP-80b exhibit a flat spectrum across the optical wavelengths, suggestive of clouds in their atmospheres. Variation in the transit depths is observed for WASP-103b and XO-3b and may indicate scattering in their atmospheres. Additionally, we observe a smaller B-band transit depth compared to the near-IR in HAT-P-37b. Such a variation may be caused by TiO/VO absorption (). We find larger R-band (which encompasses the Hα line) transit depths in HAT-P-33b and CoRoT-12b, and this result may indicate possible atmospheric escape. Follow-up photometry and high-resolution spectroscopy observations are encouraged to confirm all the observed transit depth variations since they are only seen at 2–4.6σ. Our calculated physical parameters agree with previous studies within 1σ with a few exceptions (Tables <ref>–<ref>). For the exoplanets HAT-P-12b, HAT-P-37b, WASP-2b, WASP-24b, WASP-80b, and XO-3b we are able to refine their orbital periods from previous work (Tables <ref>–<ref>).

§ ACKNOWLEDGMENTS J. Turner, R. Leiter, and R. Johnson were partially supported by NASA's Planetary Atmospheres program. J. Turner and R. Leiter were also supported by The Double Hoo Research Grant. J. Turner was also partially funded by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315231. We would like to thank Robert T. Zellem for his help with observations. This research has made use of the Exoplanet Orbit Database <cit.>, Exoplanet Data Explorer at exoplanets.org, Exoplanet Transit Database, Extrasolar Planet Transit Finder, NASA's Astrophysics Data System Bibliographic Services, and the International Variable Star Index (VSX) database, operated at AAVSO, Cambridge, Massachusetts, USA. This research has also made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
http://arxiv.org/abs/1708.07909v1
{ "authors": [ "Jake D. Turner", "Robin M. Leiter", "Lauren I. Biddle", "Kyle A. Pearson", "Kevin K. Hardegree-Ullman", "Robert M. Thompson", "Johanna K. Teske", "Ian T. Cates", "Kendall L. Cook", "Michael P. Berube", "Megan N. Nieberding", "Christen K. Jones", "Brandon Raphael", "Spencer Wallace", "Zachary T. Watson", "Robert E. Johnson" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170826001513", "title": "Investigating the physical properties of transiting hot Jupiters with the 1.5-m Kuiper Telescope" }
[email protected][cor1] Correspondence: Ren-qian Zhang, School of Economics and Management, Beihang University, Beijing 100191, China

School of Economics and Management, Beihang University, Beijing 100191, China

We introduce capital flow constraints, loss of goodwill and loans to the lot sizing problem. The capital flow constraint is different from traditional capacity constraints: when a manufacturer launches production, its present capital should not be less than its present total production cost; otherwise, it must decrease the production quantity or suspend production. Unsatisfied demand in one period may cause customer demand to shrink in the next period owing to the loss of goodwill. A fixed loan can be taken in the starting period to fund production. A mixed integer model for a deterministic single-item problem is constructed. Based on an analysis of the structure of optimal solutions, we approximate the problem as a traveling salesman problem and divide it into linear programming subproblems without integer variables. A forward recursive algorithm with heuristic adjustments is proposed to solve it. When unit variable production costs are equal and the goodwill loss rate is zero, the algorithm obtains optimal solutions. In other situations, numerical comparisons with CPLEX 12.6.2 show that our algorithm reaches the optimum in most cases and has a computation time advantage for large-size problems. Numerical tests also demonstrate that initial capital availability as well as the loan interest rate can substantially affect the manufacturer's optimal lot sizing decisions.

lot sizing; customer goodwill; capital flow; profit maximization; loan

§ INTRODUCTION The lot sizing problem was first introduced and solved by <cit.>. They proposed a polynomial algorithm to solve the single-item uncapacitated deterministic lot sizing problem, which has a computational complexity of O(T^2), where T is the length of the planning horizon. <cit.> developed an O(T log T) algorithm for the Wagner-Whitin cases. There is now abundant literature in this area that extends the basic model, such as the capacitated lot sizing problem, multi-item lot sizing problem, multi-level lot sizing problem, stochastic lot sizing problem, etc. This has also resulted in the inflation of the problems' complexity. Mathematical programming heuristics, Lagrangian relaxation heuristics, decomposition and aggregation heuristics, meta-heuristics, problem-specific greedy heuristics, and piecewise linear approximation methods have been used to solve different lot sizing problems. Some works adopting those methods can be found in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Comprehensive reviews of the lot sizing problem can be found in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.

In addition to the large number of papers setting the objective to minimize total cost in lot sizing problems, there are also some works that formulate profit maximization models. <cit.> developed a forward recursive dynamic programming algorithm to solve a single-item lot sizing problem with immediate lost sales for a profit maximization model. <cit.> investigated the single-item lot sizing problem for a warm/cold process with immediate lost sales and established theoretical results on the structure of optimal solutions. <cit.> built a multi-product capacitated lot sizing profit maximization model in which there is a negative relation between product price and customer demand. <cit.> also addressed a multi-product capacitated lot sizing problem with pricing, setup time, and more general holding costs.
<cit.> studied the scheduling problem with demand choice flexibility and evaluated the efficiency of two mathematical models. However, those works did not consider capital flow constraints in their profit maximization models. Loss of goodwill has been taken into consideration in many inventory models. <cit.> employed goodwill loss in a newsvendor model. <cit.> considered loss of goodwill in a deterministic EOQ model and an EPQ model, respectively. <cit.> assumed that supply shortages and time window violations cause goodwill loss in a production scheduling and vehicle routing problem. In the lot sizing model, the concept of goodwill loss was first introduced by <cit.>. In the above papers, customer goodwill loss was assumed to result in a penalty cost. Some researchers later considered that the loss of customer goodwill would manifest itself in terms of reduced future sales. This was first observed by <cit.>. Empirical evidence presented by <cit.> showed that goodwill loss occurred when there were alternative sources of supply for customers, and that if there were better alternative sources, the prospect of long-term revenue loss was greater. <cit.> developed a single-item lot sizing model in which the unsatisfied demand in a given period causes the demand in the next period to shrink due to the loss of customer goodwill. <cit.> addressed the multi-item capacitated lot sizing problem with setup times and lost sales, and used a Lagrangian relaxation of the capacity constraints to decompose it into single-item uncapacitated sub-problems. In business transactions, once a manufacturer encounters a capital shortage, it needs to borrow money to maintain production; otherwise, it has to reduce or even cancel production and cannot provide sufficient products to its clients. A survey of 531 businesses that went bankrupt during the calendar year 1998 in <cit.> pointed out that inadequate financial planning was one of the main reasons for their failure. A report by <cit.> showed that shortage of capital accounted for 17% of company bankruptcies in Australia in 2008. <cit.> found that 84% of high-tech entrepreneurs in the US had experienced a shortage of capital at some time. To deal with capital shortages, a loan is a widespread and effective option for many companies. After an agreement between a borrower and a lender is made, the borrower receives money from the lender and is obligated to pay back an equal amount of money plus interest to the lender at a later time. A survey in <cit.> of small and medium enterprises (SMEs) in the 28 countries of the European Union showed that SMEs preferred to use bank loans, bank overdrafts, and trade credit. A report by <cit.> based on surveys of over 1000 SMEs from 2014 to 2016 in the UK revealed that bank loans, loans from friends, and third-party loans together accounted for about 40% of all external finance, ranking first, in each of the three years. Relevant works taking capital flow or financing into account in inventory management problems are the following. <cit.> adopted a newsvendor model to analyze the importance of jointly considering production and financial decisions. <cit.> investigated a multi-period newsvendor problem constrained by cash flow and proved the optimality of a base stock policy. <cit.> extended the model by considering short-term financing. <cit.> built a periodic review inventory problem with working capital constraints, payment delays and multiple sources of financing.
The above-mentioned works are not for the lot sizing problem, and there are no fixed ordering costs in their models. Considering capital flow constraints, <cit.> built a single-item lot sizing model with trade credit and applied a dynamic programming algorithm to solve it. From the literature review above, we find that most previous works on lot sizing problems seldom consider the influence of capital flow constraints and external financing on production planning. This, together with our discussion on the importance of capital management and the widespread use of loans by manufacturers, motivates our study. The main contributions of this paper are the following.
* We introduce capital flow constraints to the traditional lot sizing problem and formulate a profit maximization model.
* The optimality structure of solutions to the problem is discussed, and we develop a polynomial forward recursive algorithm with some heuristic adjustments to solve the problem.
* A common supply chain financing behavior, the loan, is also introduced and discussed in the lot sizing problem.
The rest of this paper is organized as follows. Section 2 formulates the mathematical model and discusses its NP-hardness. Section 3 gives some mathematical properties and approximates the model as a traveling salesman problem. Section 4 divides the problem into linear sub-problems and proposes an algorithm with some heuristic adjustment techniques. Section 5 presents the numerical analysis: we use numerical cases to show the influence of capital flow constraints, compare the performance of our algorithm against CPLEX on large randomly generated test cases, and analyze the main factors affecting the performance of our algorithm. Finally, Section 6 concludes the paper and outlines future research directions.
§ PROBLEM DESCRIPTION
§.§ Notations and assumptions
We adopt the following notation for our model. Further notation will be introduced when needed.
t: index of a period, t=1,2,…,T.
d_t: demand in period t.
p_t: unit selling price in period t.
c_t: unit production cost (variable cost) in period t.
h_t: unit inventory holding cost in period t.
s_t: production launching cost (fixed cost) in period t.
B_c: quantity of self-owned capital at the beginning of period 1.
B_L: quantity of loan at the beginning of period 1.
B_0: total initial capital at the beginning of period 1, B_0=B_c+B_L.
I_0: initial inventory level at the beginning of period 1.
T_L: length of loan, T_L≤ T.
r: interest rate of the loan.
β: customer goodwill loss rate.
M: a large number.
The decision variables used in the models include the following:
B_t: end-of-period capital for period t.
I_t: end-of-period inventory level for period t.
x_t: a binary variable signaling whether production occurs in period t.
y_t: production quantity in period t.
w_t: demand shortage (lost sales) in period t; we define w_0=0.
Ed_t: effective demand in period t when considering customer goodwill loss.
v_t: realized demand in period t, v_t=Ed_t-w_t.
δ_t: a binary variable signaling whether Ed_t is positive.
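For readers who wish to experiment with the model, the parameters above can be bundled into a single data structure. The following is a minimal Python sketch; it is our illustration only (the paper's implementation is in MATLAB), and all names are ours.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LotSizingInstance:
    """Problem data for the capital-flow-constrained lot sizing model.

    Symbols follow the paper's notation; index 0 of each list holds
    period 1. All values used elsewhere are illustrative placeholders.
    """
    d: List[float]      # demand d_t
    p: List[float]      # unit selling price p_t
    c: List[float]      # unit variable production cost c_t
    h: List[float]      # unit inventory holding cost h_t
    s: List[float]      # production launching (fixed) cost s_t
    B_c: float          # self-owned initial capital
    B_L: float = 0.0    # loan taken at the beginning of period 1
    T_L: int = 0        # loan length (repayment period), T_L <= T
    r: float = 0.0      # loan interest rate
    beta: float = 0.0   # goodwill loss rate

    @property
    def T(self) -> int:
        return len(self.d)

    @property
    def B0(self) -> float:
        return self.B_c + self.B_L

    @property
    def repayment(self) -> float:
        # Amount due at the end of period T_L: B_L (1 + r)^{T_L}
        return self.B_L * (1.0 + self.r) ** self.T_L
```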
In our problem, we make the following assumptions:
* Initial capital of period t should not be less than the total production cost in period t, namely, B_t-1≥ s_tx_t+c_ty_t, in which the initial capital of period t is B_t-1, and the total production cost in period t is the production launching cost s_tx_t plus the variable production cost c_ty_t.
* End-of-period capital for each period should not be less than 0, namely, B_t≥ 0.
* Initial inventory of the planning horizon is 0, namely, I_0=0.
* No backorder is allowed.
* The manufacturer can decide the realized quantity of customer demand without paying a penalty cost, but lost sales cause the demand in the next period to shrink.
* The manufacturer takes the loan in the first period and pays back the principal and interest after a certain number of periods; the loan length is shorter than the length of the total planning horizon; no collateral is required for the loan.
The biggest difference between our problem and the traditional lot sizing problem lies in Assumptions <ref> and <ref>: how much to produce is constrained by the current capital, and the end-of-period capital of each period must stay nonnegative to avoid bankruptcy. Assumptions <ref> and <ref> are also standard assumptions of the Wagner-Whitin model. Assumption <ref> means the manufacturer can decide how many products it wants to provide to customers. Assumption <ref> defines the loan type in our paper. The loan length is shorter than the total planning horizon because lenders face higher risks for longer loan lengths, and to reduce risk they generally do not provide loans extending beyond the planning horizon.
§.§ Mathematical models for our problem
Considering loss of customer goodwill, the effective demand is the demand remaining after subtracting the goodwill loss from the original demand. As in <cit.>, the effective demand in period t can be represented as:
Ed_t=max{0,d_t-β w_t-1}, ∀ t.
Eq. (<ref>) is nonlinear. For convenience of computation, we introduce several linear constraints to replace it and construct the mixed integer programming model below.
Model P
max B_T-B_c-B_L=∑_t=1^T[p_t(Ed_t-w_t)-(h_tI_t+s_tx_t+c_ty_t)]-B_L(1+r)^T_L
s.t. for t=1,2,…,T
y_t≤ Mx_t,
s_tx_t+c_ty_t≤ B_t-1,
w_t≤ Ed_t,
I_t=I_t-1+y_t-Ed_t+w_t,
B_0=B_c+B_L,
B_t = B_t-1+p_t(Ed_t-w_t)-h_tI_t-s_tx_t-c_ty_t, if t≠ T_L; B_t = B_t-1+p_t(Ed_t-w_t)-h_tI_t-s_tx_t-c_ty_t-B_L(1+r)^T_L, if t=T_L,
d_t≤β w_t-1+Mδ_t,
d_t≥β w_t-1-M(1-δ_t),
Ed_t≤ d_t-β w_t-1+M(1-δ_t),
Ed_t≥ d_t-β w_t-1-M(1-δ_t),
Ed_t≤ d_tδ_t,
I_0=0,
I_t≥0, Ed_t≥0, w_t≥0, y_t≥0, x_t∈{0,1}, δ_t∈{0,1}.
The objective defined by Eq. (<ref>) is to maximize the capital increment from the beginning of the planning horizon to the final period, where the realized sales in period t are Ed_t-w_t, the revenue in period t is p_t(Ed_t-w_t), and the total cost in period t is h_tI_t+s_tx_t+c_ty_t, or h_tI_t+s_tx_t+c_ty_t+B_L(1+r)^T_L if the loan must be paid back in period t. Constraint (<ref>) enforces setups in periods with positive production. Constraint (<ref>) represents Assumptions <ref> and <ref>: the initial capital in period t should not be less than the total production cost in period t. It also ensures the non-negativity of B_t-1, which avoids bankruptcy. Constraint (<ref>) ensures that the lost demand w_t in period t does not exceed the effective demand of that period. Constraint (<ref>) provides the inventory flow balance equation, and Constraints (<ref>) and (<ref>) define the capital flow balance. Constraints (<ref>)-(<ref>) are the linear descriptions of Eq. (<ref>).
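Before discussing the linearization constraints one by one, we note that Model P can be reproduced directly with an off-the-shelf MIP solver. The following is a minimal sketch using PuLP with made-up data; it is our illustration of the formulation above, not the paper's benchmark implementation (which uses CPLEX 12.6.2).

```python
import pulp

# Illustrative data for T = 4 periods; all numbers are made up.
T = 4
d = {1: 100, 2: 120, 3: 80, 4: 150}     # demand
p = {t: 20 for t in range(1, T + 1)}    # unit selling price
c = {t: 13 for t in range(1, T + 1)}    # unit variable production cost
h = {t: 5 for t in range(1, T + 1)}     # unit holding cost
s = {t: 1000 for t in range(1, T + 1)}  # production launching cost
B_c, B_L, T_L, r, beta = 3000.0, 2000.0, 2, 0.05, 0.5
M = 10 * sum(d.values())                # big-M constant
repay = B_L * (1 + r) ** T_L            # loan repayment B_L (1+r)^{T_L}

prob = pulp.LpProblem("Model_P", pulp.LpMaximize)
ts = range(1, T + 1)
x = pulp.LpVariable.dicts("x", ts, cat="Binary")        # setup indicator
y = pulp.LpVariable.dicts("y", ts, lowBound=0)          # production quantity
w = pulp.LpVariable.dicts("w", ts, lowBound=0)          # lost sales
I = pulp.LpVariable.dicts("I", ts, lowBound=0)          # end inventory
B = pulp.LpVariable.dicts("B", range(0, T + 1))         # end-of-period capital
Ed = pulp.LpVariable.dicts("Ed", ts, lowBound=0)        # effective demand
delta = pulp.LpVariable.dicts("delta", ts, cat="Binary")

prob += B[T] - B_c - B_L                # objective: capital increment
prob += B[0] == B_c + B_L
for t in ts:
    w_prev = w[t - 1] if t > 1 else 0.0                 # w_0 = 0
    I_prev = I[t - 1] if t > 1 else 0.0                 # I_0 = 0
    prob += y[t] <= M * x[t]                            # setup forcing
    prob += s[t] * x[t] + c[t] * y[t] <= B[t - 1]       # capital flow constraint
    prob += w[t] <= Ed[t]
    prob += I[t] == I_prev + y[t] - Ed[t] + w[t]        # inventory balance
    cash = B[t - 1] + p[t] * (Ed[t] - w[t]) - h[t] * I[t] - s[t] * x[t] - c[t] * y[t]
    prob += B[t] == (cash - repay if t == T_L else cash)  # capital balance
    prob += B[t] >= 0
    # Linearization of Ed_t = max{0, d_t - beta * w_{t-1}} via delta_t:
    prob += d[t] <= beta * w_prev + M * delta[t]
    prob += d[t] >= beta * w_prev - M * (1 - delta[t])
    prob += Ed[t] <= d[t] - beta * w_prev + M * (1 - delta[t])
    prob += Ed[t] >= d[t] - beta * w_prev - M * (1 - delta[t])
    prob += Ed[t] <= d[t] * delta[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("capital increment:", pulp.value(prob.objective))
```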
Among these linearization constraints, Constraints (<ref>) and (<ref>) determine the value of δ_t: if d_t≤β w_t-1, then δ_t=0 and the effective demand Ed_t is also 0; otherwise, δ_t=1 and the effective demand Ed_t is positive. Constraints (<ref>), (<ref>) and (<ref>) determine the value of the effective demand Ed_t: if d_t≤β w_t-1, then δ_t=0 and Ed_t=0; otherwise, δ_t=1 and Ed_t=d_t-β w_t-1. Constraints (<ref>), (<ref>) and (<ref>) guarantee the non-negativity and binarity of the variables. Constraint (<ref>) also represents Assumptions <ref> and <ref> of our problem. Note that if β=0, the model reduces to a capital flow constrained problem without loss of goodwill.
§.§ Computational complexity of Model P
The single-item capacitated lot sizing problem has been shown by <cit.> to be NP-hard. In our problem, the capital flow constraint c_ty_t+s_tx_t≤ B_t-1 becomes a standard capacity constraint after removing s_tx_t and replacing B_t-1 with a fixed capacity C_t, where C_t is the capacity in period t. Therefore, the capital flow constraint is a special type of capacity constraint, and Model P is also NP-hard.
§ MATHEMATICAL PROPERTIES
To describe the mathematical properties of our problem, we first define the concepts of production cycle and production round.
In a production plan, if the manufacturer launches production at the beginning of period m and does not launch a new production until the end of period t (m≤ t≤ T), we call period m to period t a production cycle.
In a production plan, for one or more consecutive production cycles that last from the beginning of period m to the end of period t, if the initial inventory of period m and the end-of-period inventory of period t are both zero, we call period m to period t a production round.
§.§ Properties about initial inventory and capital
If unit variable production costs are equal, namely, c_t=c, ∀ t, then for any production cycle starting at period t+1 (t+1=1,…,T), the optimal solution satisfies I_t y_t+1=0.
The lemma is apparently true when t+1=1 because I_0=0. For t+1>1, suppose there is a solution that does not satisfy the lemma, namely, I_t>0 and y_t+1>0. Assume that period t+1's preceding production cycle begins at period m (1≤ m≤ t), and that the production cycle beginning at period t+1 lasts until period n (n≤ T). The production plan is shown in Figure <ref>. According to the capital flow balance equation (<ref>), the end-of-period capital for period t and period T are given by the following equations.
B_t = B_m-1+∑_i=m^t p_i(Ed_i-w_i)-s_m-c_my_m-∑_i=m^t h_iI_i, if t≠ T_L; B_t = B_m-1+∑_i=m^t p_i(Ed_i-w_i)-s_m-c_my_m-∑_i=m^t h_iI_i-B_L(1+r)^T_L, if t=T_L,
B_T = B_t+∑_i=t+1^T[p_i(Ed_i-w_i)-(h_iI_i+s_ix_i+c_iy_i)], if T≠ T_L; B_T = B_t+∑_i=t+1^T[p_i(Ed_i-w_i)-(h_iI_i+s_ix_i+c_iy_i)]-B_L(1+r)^T_L, if T=T_L.
If the production quantity in period m is decreased by I_t and the production quantity in period t+1 is increased by I_t, the effective demands Ed_i (i=1,…,T) are not affected and the production plan remains feasible. B_t and B_T change to the following:
B_t^' = B_t+c_mI_t+∑_i=m^t h_iI_t,
B_T^' = B_T+c_mI_t+∑_i=m^t h_iI_t-c_t+1I_t.
If unit variable production costs are equal, B_T^'=B_T+∑_i=m^t h_iI_t≥ B_T, so the final capital increases. Therefore, a solution with I_t>0 and y_t+1>0 is not optimal; the optimal solution always satisfies I_ty_t+1=0.
Lemma <ref> is also known as the zero-inventory-ordering policy, which means the initial inventory of a production cycle is always zero. Define f_t(I_t-1,B_t-1,w_t-1) as the maximum capital increment during periods t, t+1,…,T, given period t's initial inventory I_t-1, initial capital B_t-1, and the previous period's demand shortage quantity w_t-1.
For any period t (t=1,…,T), when the initial inventory I_t-1 and the previous period's demand shortage w_t-1 are fixed, f_t(I_t-1,B_t-1,w_t-1) is nondecreasing in period t's initial capital B_t-1.
To prove this property, we build a dynamic programming model for our problem. For any period t (t=1,…,T), the states are: the initial inventory I_t-1, the initial capital B_t-1, and the demand shortage quantity of the previous period w_t-1. The actions are the production quantity y_t and the demand realization quantity v_t. The lower bounds for y_t and v_t are both 0; the upper bound y̅_t for y_t is the maximum production quantity affordable under the current capital, and the upper bound v̅_t for v_t is the effective demand in this period. They are given by Eq. (<ref>) and Eq. (<ref>), respectively.
y̅_t = max{0,(B_t-1-s_t)/c_t},
v̅_t = max{0,d_t-β w_t-1}.
Define a unit step function K(z): K(z)=1 if z>0; K(z)=0 if z≤ 0. The state transition equations are:
I_t = I_t-1+y_t-v_t,
B_t = B_t-1+p_tv_t-h_tI_t-s_tK(y_t)-c_ty_t, if t≠ T_L; B_t = B_t-1+p_tv_t-h_tI_t-s_tK(y_t)-c_ty_t-B_L(1+r)^T_L, if t=T_L,
w_t = max{0,d_t-β w_t-1}-v_t.
The functional equation for the dynamic programming is:
f_t(I_t-1,B_t-1,w_t-1)=max_0≤ y_t≤y̅_t, 0≤ v_t≤v̅_t{B_t-B_t-1+f_t+1(I_t,B_t,w_t)}, with f_T+1(I_T,B_T,w_T)=0.
When I_t-1 and w_t-1 are fixed and B_t-1 increases, from Eq. (<ref>) and Eq. (<ref>), the feasible domain for v_t does not change, while the feasible domain for y_t stays constant or expands. Hence there always exist actions y'_t and v'_t that make f_t(I_t-1,B'_t-1,w_t-1) not lower than f_t(I_t-1,B_t-1,w_t-1). Therefore, f_t(I_t-1,B_t-1,w_t-1) is nondecreasing in B_t-1 for fixed I_t-1 and w_t-1.
For any period t (t=1,…,T), if the goodwill loss rate is 0, then for fixed I_t-1, f_t(I_t-1,B_t-1) is nondecreasing in B_t-1.
The proof is similar to that of Lemma <ref>. If the goodwill loss rate is 0, the demand shortage is not a state of the dynamic programming problem. The actions are still y_t and v_t, and the state transition equations are Eq. (<ref>) and Eq. (<ref>). The functional equation becomes:
f_t(I_t-1,B_t-1)=max_0≤ y_t≤y̅_t, 0≤ v_t≤v̅_t{B_t-B_t-1+f_t+1(I_t,B_t)}, with f_T+1(I_T,B_T)=0.
When I_t-1 is fixed and B_t-1 increases, the feasible domain for v_t remains unchanged while the feasible domain for y_t stays constant or expands. Therefore, f_t(I_t-1,B_t-1) is nondecreasing in B_t-1.
§.§ Approximated traveling salesman problem
Define BB_m,n^B_m-1 as the maximum capital increment in a production round from period m to period n with initial capital B_m-1. Based on Lemma <ref>, Lemma <ref> and Lemma <ref>, our problem is approximately transformed into a traveling salesman problem of finding the longest route, as shown in Figure <ref> (in this case, T=4). The main ideas behind this approximation are that we apply the zero-inventory-ordering policy to divide the production plan into several production rounds, and apply Lemma <ref> and Lemma <ref> to always select the maximum initial capital when computing the lengths of the arcs in the traveling salesman problem.
A functional equation is constructed for the computation, with B_0^∗=B_0:
B_n^∗=max_1≤ m≤ n[B_m-1^∗+BB_m,n^B_m-1^∗], n=1,2,…,T.
Apparently, B_T^∗=max_1≤ m≤ T[B_m-1^∗+BB_m,T^B_m-1^∗]. For this functional equation, we have the following properties.
If unit variable production costs are equal and the goodwill loss rate is 0, then for any period t with end-of-period inventory equal to 0, the optimal production plan from period 1 to t is part of the optimal production plan from period 1 to T.
If the goodwill loss rate and the end-of-period inventory level of period t are both 0, then based on Eq.
(<ref>), given the initial capital and initial inventory, the maximum capital increment during periods t to T is:
f_t(I_t-1,B_t-1)=B_T-B_t-1=max_0≤ y_t≤y̅_t, 0≤ v_t≤v̅_t{B_t-B_t-1+f_t+1(0,B_t)}.
From Lemma <ref>, f_t+1(0,B_t) is a nondecreasing function of B_t and attains its maximum when B_t is the maximum end-of-period capital of period t. Given zero end-of-period inventory in period t, this happens when the production plan from period 1 to t is optimal, because the optimal production plan satisfies the zero-inventory-ordering policy when the variable production costs are equal. So what is optimal for periods 1 to t is also optimal for f_t+1(0,B_t), the maximum capital increment during periods t+1 to T. Therefore, the optimal production plan from period 1 to t is part of the optimal production plan from period 1 to T.
If unit variable production costs are equal and the goodwill loss rate is 0, then for any two periods t_1, t_2 (t_1<t_2) such that the initial inventory of period t_1 is 0 and the end-of-period inventory of period t_2 is 0, the optimal production plan from period t_1 to t_2 is part of the optimal production plan from period 1 to T.
By Lemma <ref>, the optimal production plan from period 1 to t_1-1 is part of the whole production plan, and it is also part of the optimal production plan from period 1 to t_2. If the production plan from period t_1 to t_2 is optimal, then together with the optimal plan from period 1 to t_1-1 it yields an optimal production plan from period 1 to t_2 under the zero-inventory-ordering policy. Since the optimal production plan from period 1 to t_2 is part of the whole production plan, the optimal production plan from period t_1 to t_2 is part of the optimal production plan from period 1 to T.
If unit variable production costs are equal and the goodwill loss rate is 0, namely, c_t=c, ∀ t, and β=0, then B_T^∗=max_1≤ m≤ T[B_m-1^∗+BB_m,T] provides an optimal solution.
When the variable production costs are equal, Lemma <ref> shows the problem satisfies the zero-inventory-ordering policy. Hence, the optimal production plan is a combination of several production rounds. The functional equation (<ref>) in fact enumerates all possible production rounds. Lemma <ref> and Lemma <ref> indicate that the optimal production plan in a given production round is part of the total optimal production plan. These are the same properties as in the Wagner-Whitin case <cit.>. Therefore, among all the combinations of production rounds in the computation of B_T^∗=max_1≤ m≤ T[B_m-1^∗+BB_m,T], the one that gives the maximum final capital is the optimal solution.
When the variable production costs are not all equal, or the goodwill loss rate is not 0, the functional equation in Theorem <ref> gives only an approximate solution. However, based on the properties below, we devise heuristic adjustments that bring it close to the optimal solution.
§.§ Properties for production plan adjustment
In a feasible solution x_t, y_t, w_t (t=1,2,…,T), consider any two consecutive production cycles. Assume the former production cycle begins at period t_1 and ends at period t_2-1, and the latter production cycle begins at period t_2 and ends at period t_3, with end-of-period inventory 0 at period t_3, end-of-period capital B_t_3, and demand shortage w_t_3.
We can make a production plan adjustment from t^' (t_1+1≤ t^'≤ t_2-1) to t_2-1 if, after the adjustment, the capital satisfies B_t_3^'≥ B_t_3, the shortage quantity satisfies w_t_3^'≤ w_t_3, and the end-of-period inventory in t_3 is still 0.
The feasible solution is presented in Figure <ref>. For period t_3+1, the functional equation is
f_t_3+1(I_t_3,B_t_3,w_t_3)=max_0≤ y_t_3+1≤y̅_t_3+1, 0≤ v_t_3+1≤v̅_t_3+1{B_t_3+1-B_t_3+f_t_3+2(I_t_3+1,B_t_3+1,w_t_3+1)}.
From Lemma <ref>, f_t_3+1(I_t_3,B_t_3,w_t_3) is nondecreasing in B_t_3 when I_t_3 and w_t_3 are fixed. In the adjustment, I_t_3 is fixed to 0; after increasing B_t_3 and decreasing w_t_3, the domains for y_t_3+1 and v_t_3+1 both expand by Eq. (<ref>) and Eq. (<ref>). Therefore, the final capital increment is nondecreasing under this adjustment. The adjustment in Corollary <ref> is shown in Figure <ref>.
In a feasible solution x_t, y_t, w_t (t=1,2,…,T), consider any two consecutive production cycles, where the former begins at period t_1 and ends at period t_2-1, and the latter begins at period t_2. If B_t_1-1-s_t_1-c_t_1y_t_1>0 and c_t_1+∑_i=t_1^t_2-1h_i<c_t_2, then it is better to move some production amount Δ y_t_2 from y_t_2 to y_t_1 to obtain more final capital.
This heuristic step is shown in Figure <ref>. If B_t_1-1-s_t_1-c_t_1y_t_1>0, production cycle t_1 has residual production capacity and could produce more. After the moving adjustment, the final capital changes to:
B_T^'=B_T+(c_t_2-c_t_1-∑_i=t_1^t_2-1h_i)Δ y_t_2.
If c_t_1+∑_i=t_1^t_2-1h_i<c_t_2, this adjustment does not affect the feasibility of the solution and B_T^'>B_T; therefore, the final capital increases. The moving production amount Δ y_t_2 is given by Eq. (<ref>):
Δ y_t_2=(B_t_1-1-s_t_1)/c_t_1-y_t_1,
which is the maximum production quantity increment that cycle t_1 can provide.
§ SUB-PROBLEMS AND ALGORITHM FOR OUR PROBLEM
For the computation of BB_m,n in the recursive equation (<ref>), we remove the integer variables of Model P and divide it into linear sub-problems in which only the realized demands v_t, ∀ t, are decision variables. We also devise some heuristic techniques to adjust the production plan.
§.§ Sub-linear problems
By definition, BB_m,n is the maximum capital increment in a production round; it may therefore include several production cycles. For a production round with k fixed production cycles, assume the production launching periods are t_1, t_2, …, t_k (for convenience, we set m=t_1 and n=t_k+1-1); the production plan is shown in Figure <ref>.
To compute BB_m,n, there remain the integer variables δ_t, each a 0-1 variable indicating whether the previous goodwill loss is below the effective demand. In Model P-sub1 below, we use a heuristic step by assuming δ_t=1, m≤ t≤ n, namely, all demands from period m to period n are above the previous goodwill loss. With w_t_1-1 known, we convert Model P to the linear sub-problem P-sub1.
Model P-sub1
max BB_m,n = max{B_n-B_m-1}
s.t. for t=m,m+1,…,n
c_t_i∑_j=t_i^t_i+1-1v_j+s_t_i≤ B_t_i-1, i=1,2,…,k,
B_t≥0,
I_t=∑_j=t+1^t_i+1-1v_j, t_i≤ t<t_i+1, i=1,2,…,k,
B_t = B_t-1+p_tv_t-(h_tI_t+s_t+c_t∑_j=t_i^t_i+1-1v_j), if t=t_i for some i=1,2,…,k and t≠ T_L; B_t = B_t-1+p_tv_t-h_tI_t, if t≠ t_i and t≠ T_L,
B_t = B_t-1+p_tv_t-(h_tI_t+s_t+c_t∑_j=t_i^t_i+1-1v_j)-B_L(1+r)^T_L, if t=t_i for some i=1,2,…,k and t=T_L; B_t = B_t-1+p_tv_t-h_tI_t-B_L(1+r)^T_L, if t≠ t_i and t=T_L,
I_t_1-1=0, I_t_2-1=0, …, I_t_k-1=0, I_n=0,
Ed_t = max{0,d_t-β w_t-1}, t=t_1,
Ed_t = d_t-β(Ed_t-1-v_t-1), t≠ t_1,
0≤ v_t≤ Ed_t.
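Once the launch periods are fixed, P-sub1 is a pure LP in the realized demands v_t. The sketch below assembles the simplest case of a single production cycle (k=1) launching at period m; it is our illustration under the paper's heuristic assumption δ_t=1 for the whole round, the function name and argument layout are ours, and the adjustment constraints are omitted.

```python
import pulp

def solve_BB_single_cycle(m, n, B_prev, w_prev, d, p, c, h, s,
                          beta=0.0, B_L=0.0, r=0.0, T_L=0):
    """LP sketch of Model P-sub1 with one production cycle launching at
    period m and covering periods m..n (data passed as 1-indexed dicts).

    Assumes delta_t = 1 throughout the round, so for t > m:
    Ed_t = d_t - beta * (Ed_{t-1} - v_{t-1}).
    Returns (BB_{m,n}, {t: v_t}) or (None, None) if infeasible.
    """
    ts = range(m, n + 1)
    prob = pulp.LpProblem("P_sub1", pulp.LpMaximize)
    v = pulp.LpVariable.dicts("v", ts, lowBound=0)   # realized demand
    repay = B_L * (1.0 + r) ** T_L                   # loan repayment

    # Effective demand as affine expressions of v (delta_t = 1 assumed).
    Ed = {m: max(0.0, d[m] - beta * w_prev)}
    for t in range(m + 1, n + 1):
        Ed[t] = d[t] - beta * (Ed[t - 1] - v[t - 1])

    total_prod = pulp.lpSum(v[t] for t in ts)        # y_m = sum of v_t
    # Capital at launch must cover the whole cycle's production cost.
    prob += s[m] + c[m] * total_prod <= B_prev

    # End-of-period inventory: units produced at m, not yet delivered.
    I = {t: pulp.lpSum(v[j] for j in range(t + 1, n + 1)) for t in ts}

    # Capital balance; the full production cost is charged in period m.
    B = {m - 1: B_prev}
    for t in ts:
        cash = B[t - 1] + p[t] * v[t] - h[t] * I[t]
        if t == m:
            cash = cash - s[m] - c[m] * total_prod
        if t == T_L:
            cash = cash - repay
        B[t] = cash
        prob += B[t] >= 0

    for t in ts:
        prob += v[t] <= Ed[t]                        # 0 <= v_t <= Ed_t

    prob += B[n] - B_prev                            # objective: BB_{m,n}
    status = prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[status] != "Optimal":
        return None, None
    return pulp.value(prob.objective), {t: v[t].value() for t in ts}
```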
The objective function (<ref>) maximizes the capital increment of a production round starting at period m and ending at period n. Constraints (<ref>) and (<ref>) represent Assumptions 1 and 2 on capital flow. Constraint (<ref>) shows the relationship between I_t and v_t. Constraints (<ref>) and (<ref>) give the capital flow balance. Constraint (<ref>) states that the initial inventory and end-of-period inventory of each production cycle are both zero, which is a heuristic step when unit variable production costs are not equal. Constraints (<ref>) and (<ref>) are the expressions of the effective demand; Constraint (<ref>) also reflects the heuristic assumption δ_t=1, m≤ t≤ n. Constraint (<ref>) provides the lower and upper bounds of the variables v_t, the realized demand in period t.
If Model P-sub1 does not yield a feasible solution, this may be related to the heuristic assumption on δ_t. As a next step, we relax this assumption by ignoring the loss of goodwill: removing Constraint (<ref>) in Model P-sub1 and amending Constraint (<ref>), we obtain another linear sub-problem, P-sub2, below.
Model P-sub2
max BB_m,n = max{B_n-B_m-1} (<ref>)
s.t. for t=m,m+1,…,n
(<ref>)-(<ref>)
0≤ v_t≤ d_t.
Based on the solution of Model P-sub2, we compute δ_t according to Eqs. (<ref>) and (<ref>) below.
w_t = Ed_t-v_t, t=m,m+1,…,n,
δ_t = 0 if d_t-β w_t-1<0, and δ_t = 1 if d_t-β w_t-1≥ 0, t=m,m+1,…,n.
Based on the values of δ_t, another linear sub-problem is formulated.
Model P-sub3
max BB_m,n = max{B_n-B_m-1} (<ref>)
s.t. for t=m,m+1,…,n
(<ref>)-(<ref>)
Ed_t = d_t-β(Ed_t-1-v_t-1) if δ_t=1; Ed_t = 0 and d_t-β(Ed_t-1-v_t-1)<0 if δ_t=0.
If none of Models P-sub1, P-sub2 and P-sub3 has a feasible solution, we deem BB_m,n infeasible and set v_t=0 (t=m,m+1,…,n). The relation of v_t to w_t is provided by Eq. (<ref>). The relation of v_t to y_t is given by Eq. (<ref>):
y_t = ∑_j=t_i^t_i+1-1v_j if t=t_i, i=1,2,…,k; y_t = 0 if t≠ t_i, i=1,2,…,k.
§.§ Heuristic techniques in recursion and adjustments
It is time consuming and complex to enumerate all possible production cycles in a production round. Therefore, when the customer goodwill loss rate is zero, we use one production cycle per production round; when the goodwill loss rate is not zero, we use at most two production cycles per production round in the computation. For a given period t+1 and a given production plan from period 1 to period t, two situations are considered when computing the capital increment of a production round with nonzero goodwill loss rate: if there is no production cycle before t+1, we compute a single production cycle beginning at period t+1 as a production round; if there are production cycles before t+1, we view the nearest previous production cycle and the production cycle beginning at t+1 together as a production round and make the computations.
After computing the capital increments of the production rounds, we obtain an approximate production plan from period 1 to any period n (1≤ n≤ T). Based on Corollary <ref>, we make heuristic adjustments to this production plan. Three situations are considered for this adjustment, shown in Figure <ref>. In Figure <ref>, period n's production cycle begins at period t+1.
* Figure <ref> shows adjustments to the first production cycle in the production round m to n: we divide the first production cycle into two cycles by enumerating all new production launching periods between period m and period t, recompute BB_m,n, and select the option that gives the maximum capital increment.
* Figure <ref> shows that it is sometimes better to launch a new production cycle before period t+1 when no production cycle exists before it: we enumerate all new production launching periods between period 1 and period t as m, recompute BB_m,n, and select the optimal one.
* Figure <ref> shows that it is sometimes better to launch production later if the first production cycle includes period 1: we enumerate all new production launching periods between period 1 and period t as m, recompute BB_m,n, and select the one that gives the maximum capital increment.
When computing BB_m,n for the three heuristic adjustments, a new linear constraint is added to the sub-linear problems: w_n'≤ w_n, which means the demand shortage at period n after the adjustment should not exceed its original value before the adjustment. If the goodwill loss rate is zero, Corollary <ref> is not needed and neither are the adjustments above. After the recursion of BB_m,n up to the final period T, a production plan for the whole planning horizon is obtained. Going backward from period T to period 1, we check whether it satisfies the criteria of Corollary <ref> and make production adjustments accordingly.
§.§ Computation Algorithm
Based on the functional equation (<ref>), the linear sub-problems, and the heuristic techniques, we propose a forward recursive algorithm with heuristic adjustments (FRH) to solve Model P.
Algorithm FRH for Model P
Initialization: t=1, m=1, 1×T zero matrices x, y, B^∗, T×T zero matrix BB.
Step 1: For n=t, t+1, …, T, select the production round beginning at m and ending at n, compute BB_m,n and record its value in BB(t,n).
Step 2: Compute B_t^∗ according to Eq. (<ref>), and obtain the production plan from period 1 to period t: x(1:t), y(1:t), w(1:t).
Step 3: Check whether the present production plan from period 1 to t meets the three adjustment situations shown in Figure <ref>. If the adjustment criteria are met, make the adjustments and update x(1:t), y(1:t), w(1:t), BB(t,n:T).
Step 4: Set t=t+1, update m, and repeat Steps 1-3 until t=T.
Step 5: Check whether the production plan meets Corollary <ref>; if the criteria are met, make plan adjustments. Obtain the final production plan x(1:T), y(1:T), w(1:T) and the final capital B^∗_T.
The flow chart of our algorithm is shown in Figure <ref>.
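A compact rendering of the forward recursion underlying Steps 1, 2 and 4 is sketched below in Python. This is our illustration rather than the paper's MATLAB code: solve_BB stands for the sub-linear problem solver (e.g., the P-sub1 sketch above), the heuristic adjustments of Steps 3 and 5 are omitted, and the option of leaving a period idle is our simplification.

```python
def frh_recursion(T, B0, solve_BB):
    """Forward recursion B*_n = max_{1<=m<=n} [B*_{m-1} + BB_{m,n}].

    solve_BB(m, n, B_prev) should return the capital increment BB_{m,n}
    of a production round covering periods m..n with starting capital
    B_prev, or None if the round is infeasible.
    """
    B_star = [B0] + [float("-inf")] * T
    pred = [0] * (T + 1)   # pred[n]: start period m of the round ending at n
    for n in range(1, T + 1):
        # Idle period: no round ends at n (demand in period n is lost).
        if B_star[n - 1] > B_star[n]:
            B_star[n], pred[n] = B_star[n - 1], 0
        for m in range(1, n + 1):
            bb = solve_BB(m, n, B_star[m - 1])
            if bb is not None and B_star[m - 1] + bb > B_star[n]:
                B_star[n] = B_star[m - 1] + bb
                pred[n] = m
    return B_star, pred
```

After the recursion, B_star[T] - B0 gives the (approximate) maximum capital increment, and backtracking through pred recovers the sequence of production rounds.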
§.§ Computation complexity of our algorithm
During the recursion and heuristic adjustments, when the customer goodwill loss rate is zero, there are T(T+1)/2 computations of BB_m,n, and each BB_m,n requires one sub-linear problem. When the customer goodwill loss rate is not zero, the best case is the same as the zero-rate situation: no heuristic adjustment meets Corollary <ref>, and the computation of BB_m,n requires only one sub-linear problem. In the worst case, there are 3T(T+1)/2 computations of BB_m,n in total: T(T+1)/2 computations of BB_m,n at first, at most T(T+1)/2 computations of BB_m,n for the heuristic adjustments, and T(T+1)/2 computations of BB_m,n to update BB(t,n:T); each computation of BB_m,n involves 3 sub-linear problems. Therefore, there are T(T+1)/2 computations of sub-linear problems in the best case and at most 9T(T+1)/2 in the worst case.
The computational complexity of our algorithm is thus O(T^2ψ), where ψ is the computational complexity of the algorithm for the sub-linear problems. Without integer variables, the sub-linear problems can be solved by a polynomial interior point algorithm. For the commonly used polynomial interior point algorithm of <cit.>, ψ is O(T^3.5ℒ), where ℒ denotes the total length of the binary coding of the input data. This is why we remove the integer variables from the original mixed integer model and divide it into sub-linear problems. The total computational complexity of our algorithm is therefore O(T^5.5ℒ); it is a polynomial algorithm.
§ NUMERICAL ANALYSIS
In this section, we first employ numerical examples to show the influence of initial capital and loans on the optimal production plan and final capital increment of a manufacturer, and then compare our algorithm with the commercial solver CPLEX. In our numerical experiments, the linear programming solver for the sub-linear problems is a MATLAB function based on the interior point algorithm of <cit.>. The solution accuracy of the interior point algorithm in MATLAB is controlled by the termination tolerance on the function, which we set to 0.0001%. The maximum number of iterations for the interior point algorithm is set to 50. Our algorithm is coded in MATLAB 2016a and run on a desktop computer with an Intel (R) Core (TM) i5-6500 CPU at 3.20 GHz, 16 GB of RAM, and the 64-bit Windows 7 operating system.
§.§ Numerical examples of the influence of capital flow on the production plan
Assume T=12 and goodwill loss rate β=0.5. The values of the other parameters are listed in Table <ref>. We solve the problem via our algorithm FRH; the solutions are verified to be optimal by CPLEX. When the initial capital is 150 without a loan, the optimal production plan is shown in Figure <ref>, in which the manufacturer can only launch two productions because of capital shortage. When the initial capital is 200 without a loan, the optimal production plan is shown in Figure <ref>. When the initial capital is 200 with a loan quantity of 300, a loan length of 3 periods and a loan interest rate of 10%, a different optimal production plan is obtained, shown in Figure <ref>. The panels of Figure <ref> illustrate that the quantity of initial capital, and whether or not a loan is taken, do influence the optimal production plan of a manufacturer. Without a loan, the maximum final capital increments for different initial capital levels are displayed in Figure <ref>. With a fixed initial capital of 200, a fixed loan quantity of 300 and a loan length of 3 periods, the maximum final capital increments for different loan interest rates are displayed in Figure <ref>. The data underlying these two figures are as follows.
Initial capital: 50, 150, 200, 250, 300, 350, 400; corresponding capital increment without loan: 0, 70, 1891, 2300, 2360, 2360, 2360.
Interest rate: 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3; corresponding capital increment with loan: 2060, 2023, 1971, 1913, 1851, 1784, 1710.
In Figure <ref>, the dashed line represents the maximum capital increment without a loan. Figure <ref> shows that if capital is insufficient, more initial capital brings a larger final capital increment; once capital is sufficient, the maximum final capital increment of the manufacturer stabilizes. Figure <ref> shows that a loan is helpful for a manufacturer if the interest rate is low; but if the interest rate is too high, the final capital increment decreases and it is better for the manufacturer not to take the loan.
Therefore, the numerical examples above demonstrate that initial capital availability as well as the loan interest rate can substantially influence the operational decisions of a manufacturer.
§.§ Comparison of our algorithm with other heuristics
To the best of our knowledge, although there are many heuristic algorithms in the literature for capacity-constrained lot sizing problems, those algorithms are not suitable for solving a capital flow constrained lot sizing problem like ours: solutions obtained by those algorithms are not feasible under the capital flow constraints defined in this paper. Compared with traditional capacity constraints, capital flow constraints are stronger; they require the initial capital of each period to be above that period's total production cost and the end-of-period capital of each period to be above zero, and the capital flow is linked to many parameters, such as the selling price, the interest rate, etc. Regarding the comparison with metaheuristics, we attempted to solve our capital flow constrained lot sizing problem with a genetic algorithm and a simulated annealing algorithm. However, because of the capital flow constraints and the other constraints, it is difficult for either algorithm to obtain a feasible solution, even for a small numerical case. Therefore, we omit the comparison of our algorithm with other heuristic algorithms.
§.§ Comparison of our algorithm with CPLEX on randomly generated problems
We test our algorithm against CPLEX 12.6.2 on a large set of randomly generated problems. The solution accuracy of CPLEX is controlled by the number of iterations, the CPU seconds and the termination tolerance, which are set to 750,000, 18,000, and 0.0001%, respectively. The randomized scheme of test problem generation is similar to Aksen's work <cit.> and is presented in Table <ref>. Since capital and the goodwill loss rate can influence the optimal production plan, two initial capital levels, two initial loans and three goodwill loss rates are set in our experiments, whereas these three parameters are fixed or absent in Aksen's 360 test cases <cit.>. As for the initial capital, B_c=s_1+c_1(d_1+d_2) guarantees the manufacturer has enough capital for the production of the first two periods, and B_c=s_1+c_1(d_1+d_2+d_3) guarantees the production of the first three periods. There are 864 numerical cases for testing in total. Experimental results for different horizon lengths are shown in Table <ref>.
As shown in Table <ref>, our algorithm performs well on the 864 test cases: there are only 57 cases in which it does not reach an optimal solution. Although there remain some extreme cases, with a maximum deviation of 4.56%, in which the heuristic adjustments cannot reach the optimum, the algorithm obtains optimal solutions in most cases and the average deviation is 0.05%. Another finding of the experiment, not shown in Table <ref> but worth noting, is that when the goodwill loss rate is zero and unit variable production costs are equal, our algorithm obtains optimal solutions in all cases, which validates Theorem <ref>. In terms of computation time, CPLEX runs faster than our algorithm for small-size problems. However, when the problem size grows large, the average computation time of CPLEX increases rapidly and becomes much larger than that of our algorithm. This is because, when T reaches 48, some cases take the maximum allowed running time before CPLEX stops iterating, which boosts the average computation time.
For the 864 numerical cases, the average computation time of the FRH algorithm is 19.95 s, while the average computation time of CPLEX is 224.19 s. Therefore, our algorithm is suitable for solving large-size problems.
§.§ Factors affecting the performance of our algorithm
In the next stage of the experiments, we redesign the generation scheme of the test problems to investigate the influence of parameter values on the performance of our algorithm. To save computation time in the comparison, we fix the planning horizon length T to 12, the initial loan B_L to 2000, the loan length to 6, and the setup cost s to 1000 in this stage of testing. Each of the other parameters has two generation modes: high or low fluctuations under a normal distribution, or high or low values. Details of the generation scheme are displayed in Table <ref>.
Randomized generation scheme in the second stage of testing (low value/fluctuation vs. high value/fluctuation):
Demand d: μ=150, δ=10 vs. μ=150, δ=50.
Unit production cost c: μ=13, δ=1 vs. μ=13, δ=5.
Unit holding cost h: μ=5, δ=0.5 vs. μ=5, δ=2.5.
Selling price p: μ=20, δ=1 vs. μ=20, δ=5.
Initial capital B_c: s_1+c_1(d_1+d_2) vs. s_1+c_1∑_i=1^5d_i.
Interest rate r: 2% vs. 5%.
Goodwill loss rate β: 10% vs. 50%.
For each combination of these parameters, we generate 10 numerical cases; therefore, there are 2^7×10=1280 cases for testing. The experimental results of this stage are presented in Table <ref>. From Table <ref>, of all 1280 numerical cases there are 96 in which our algorithm does not reach the optimum, with a maximum deviation of 5.91% and an average deviation of 0.11%. We also find that, for the goodwill loss rate and the initial capital, the maximum and average deviations differ substantially between high and low values. It appears that the goodwill loss rate and the initial capital play the main role in the performance of our algorithm. To consolidate this conclusion, we apply stepwise linear regression analysis in SPSS to the 96 numerical cases in which our algorithm shows deviations. We set the deviation as the dependent variable and all seven parameters as independent variables, with a 95% confidence interval. The analysis of variance (ANOVA) is presented in Figure <ref>, and the variables excluded by the stepwise linear regression are given in Figure <ref>. Figure <ref> shows that the goodwill loss rate affects the deviation the most (significance value 0.001<0.05); the initial capital and the goodwill loss rate together have a significant influence on the deviation (significance value 0.000<0.05), while the other parameters are excluded from the regression, as shown in Figure <ref>. This coincides with the finding in Table <ref>. The reason could be that, when the initial capital is low and the goodwill loss rate is high, it is more difficult for the heuristic techniques to adjust the original solution to the optimum. Nevertheless, the experiments in the two stages demonstrate that our algorithm reaches the optimum in over 90% of the cases and that its average deviation is rather low; moreover, when the goodwill loss rate is zero and unit variable production costs are equal, it is guaranteed to obtain optimal solutions.
§ CONCLUSIONS
Capital shortage is a key factor affecting the growth of many small and medium enterprises. However, capital flow constraints have not been taken into consideration in many lot sizing works. Previous methods, such as the Wagner-Whitin algorithm <cit.> and the Aksen algorithm <cit.>, cannot obtain feasible solutions when capital flow constraints are considered under the assumptions of our paper.
We formulate a mathematical model for the lot sizing problem with capital flow constraints; loss of goodwill and loans are also introduced into the problem. Based on the mathematical properties of the problem, we develop a forward recursive algorithm with heuristic adjustments. When unit variable production costs are equal and the goodwill loss rate is zero, our algorithm obtains optimal solutions; in other situations, its average deviation error is rather low. It is suitable for solving large-size problems owing to its computational efficiency. We also find that initial capital availability and the loan interest rate can affect a manufacturer's optimal lot sizing decisions. Future research could extend in several directions: first, considering multi-item models or stochastic lot sizing problems with capital flow constraints; second, taking other financial behaviors, such as trade credit, inventory financing and factoring, into account in the lot sizing problem.
http://arxiv.org/abs/1708.08098v1
{ "authors": [ "Zhen Chen", "Ren-qian Zhang" ], "categories": [ "cs.CE" ], "primary_category": "cs.CE", "published": "20170827161135", "title": "Capital flow constrained lot sizing problem with loss of goodwill and loan" }
Jorge González-López [email protected] 0000-0003-3926-1411]Jorge González-López Instituto de Astrofísica and Centro de Astroingeniería, Facultad de Física, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22, ChileInstituto de Astrofísica and Centro de Astroingeniería, Facultad de Física, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22, ChileDept. of Astronomy and Astrophysics, Univ. of Chicago, 5640 S. Ellis Ave., Chicago, IL, 60637Kavli Institute for Cosmological Physics at The University of Chicago, Chicago, IL, USAMax-Planck-Institut für extraterrestrische Physik, Giessenbachstr. 1, D-85741 Garching, GermanyNASA Goddard Space Flight Center, Greenbelt, MD, USADepartment of Astronomy, University of Michigan, 1085 S. University Avenue, Ann Arbor, MI 48109, USANúcleo de Astronomía, Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Av. Ejército 441, Santiago, ChileMIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Avenue, Cambridge, MA 02139, USAInstituto de Física y Astronomía, Universidad de Valparaíso, Avda. Gran Bretaña 1111, Valparaíso, ChileWe present Atacama Large Millimeter/submillimeter Array observations of CO lines and dust continuum emission of the source RCSGA 032727–132609, a young z=1.7 low-metallicity starburst galaxy. The CO(3-2) and CO(6-5) lines and the continuum at rest-frame 450 μ m are detected and show a resolved structure in the image plane. We use the corresponding lensing model to obtain a source plane reconstruction of the detected emission, revealing an intrinsic flux density of S_450 μ m=23.5_-8.1^+26.8 μJy and intrinsic CO luminosities L'_ CO(3-2)=2.90_-0.23^+0.21×10^8 K km s^-1 pc^2 and L'_ CO(6-5)=8.0_-1.3^+1.4×10^7 K km s^-1 pc^2. We use the resolved properties in the source plane to obtain molecular gas and star-formation rate surface densities of Σ_ H2=16.2_-3.5^+5.8M_⊙pc^-2 and Σ_ SFR=0.54_-0.27^+0.89M_⊙yr^-1kpc^-2, respectively. The intrinsic properties of RCSGA 032727–132609 show enhanced star-formation activity compared to local spiral galaxies with similar molecular gas densities, supporting the ongoing merger-starburst phase scenario. RCSGA 032727–132609 also appears to be a low-density starburst galaxy similar to local blue compact dwarf galaxies, which have been suggested as local analogs to high-redshift low-metallicity starburst systems. Finally, the CO excitation level in the galaxy is consistent with having the peak at J∼5, with a higher excitation concentrated in the star-forming clumps.Cherenkov scintillation slow liquid scintillator neutrino detection§ INTRODUCTIONThe study of the distribution of molecular hydrogen (H2) and star formation rate (SFR) in high redshift galaxies has been a prolific field for the last 20 years. The CO emission lines, good tracers of the molecular gas <cit.>, and the cold dust continuum emission have been resolved in a number of galaxies at z>1 <cit.>. These observations have shown a picture in which the SFR is strongly linked to the distribution of molecular gas, indicating that two mechanisms for star formation could be in place: the quiescent, normal phase and the starburst phase <cit.>. Most galaxies at all redshifts appear to be in the main sequence (MS) phase <cit.>, where the SFR is proportional to the stellar and molecular gas masses.
The starburst phase corresponds to those galaxies that, for a given stellar and/or molecular gas mass, have SFRs at least 10 times higher than those of MS galaxies. Starburst galaxies show a more efficient process of star formation, expected to be triggered by other factors such as galaxy interactions or particular environmental conditions <cit.>. It has been found that starburst galaxies at high redshift (z≥1) show star formation conditions and efficiencies similar to those observed in particular regions of local (ultra) luminous infrared galaxies <cit.>. The unresolved nature of most high redshift observations does not allow us to see whether high redshift starburst galaxies have higher efficiencies across the whole galaxy or whether the star formation is dominated by small, highly efficient regions, as in the local galaxies. The need for higher resolution observations is clear if we want to understand the starburst process and its conditions. The best way to resolve the emission of galaxies without substantially increasing the observing time is by targeting gravitationally lensed galaxies, especially the spatially resolved bright gravitational arcs. Observations of several lensed sources already show that different star forming regions exhibit different star formation conditions <cit.>. In this letter we present the CO and continuum emission of the second-brightest optical giant arc discovered to date, the arc RCSGA 032727–132609 (hereafter RCS0327), strongly lensed by the foreground galaxy cluster RCS2 032727–132623 at z = 0.564 <cit.>, to reveal the star formation process in a starburst at z=1.7. The kinematic analysis of Hα in RCS0327 strongly suggests an interaction consistent with a merger of galaxies <cit.>. The SFRs measured in individual clumps and across the galaxy fall well above the MS relation for galaxies at z=1.7 and are consistent with a young low-metallicity (≈0.3 Z_⊙) starburst enhanced by the ongoing merger <cit.>. Furthermore, <cit.> measured resolved galactic winds in RCS0327, showing that the outflows are comparable to those observed in local starbursts. Throughout this paper, we adopt a cosmology with H_0 = 67.8 km s^-1 Mpc^-1, Ω_ m=0.308 and Ω_Λ=0.692 <cit.>.
§ OBSERVATIONS
The target RCS0327 was observed as part of the ALMA project 2015.1.00920.S, which aimed to detect the emission lines CO(3-2), CO(6-5) and CO(8-7) and the underlying continuum. At the redshift of RCS0327, z = 1.7037455 ± 0.000005 <cit.>, the lines are centered at 127.895 GHz for CO(3-2), 255.746 GHz for CO(6-5) and 340.934 GHz for CO(8-7). The observations of the CO(6-5) and CO(3-2) lines were carried out on January 1st and 10th, 2016. The observations of CO(8-7) were unfortunately not completed during cycle 3. In each observation, two overlapping spectral windows (SPWs) were placed to detect the line and two were placed to detect the continuum. The bandpass and amplitude calibrator for both observations was J0423-0120, with J0336-1302 used as the phase calibrator. The reduction of the data was performed using the scripts provided by ALMA, with the pipeline version for cycle 3 and casa <cit.> version 4.5.1. Both emission lines were observed using the array configuration C36-1. The imaging of the calibrated data was done with the corresponding casa imaging task. The CO(3-2) data were imaged using natural weighting, returning a synthesized beam size of 2.54″×1.85″ at a position angle of -85.68°. The natural weighting synthesized beam of the CO(6-5) observations was 1.49″×0.98″ at a position angle of 79.75°.
To achieve a beam size similar to that of CO(3-2), a taper is applied to the CO(6-5) data to obtain a synthesized beam of 2.16″×2.08″ at a position angle of 59.64°. Data cubes with a spectral resolution of 20 km s^-1 were created for each of the emission lines. The continuum emission is estimated from the line-free channels and then subtracted from the uv data with the corresponding casa task, and new continuum-subtracted cubes are created following the same procedure as before. The continuum and line images were interactively cleaned by manually masking out the emission.
§ RESULTS
The CO(3-2), CO(6-5) and rest-frame 450 μ m continuum emission are detected in the brightest regions of the arc (Figure <ref>). The continuum emission at rest-frame 830 μ m is also detected, but with lower significance. To find the CO(3-2) total flux we extract the spectra at the positions e1, e2 and u1+u2, clumps identified by <cit.> that are near the peaks of the observed emission, which are plotted in the bottom panel of Figure <ref>. A small astrometric offset was applied to the HST coordinates to match the 830 μ m continuum detection of the brightest cluster galaxy to its NIR counterpart. We find that the region going from 127.874 to 127.976 GHz (≈239.3 km s^-1) provides a good range for the total emission of the line. The CO(3-2) line flux measured in the image plane for the section of the arc going from e1 to e2 (corresponding to lensed images 1 and 2) is I_ CO(3-2)=0.557± 0.071 Jy km s^-1, while for CO(6-5) it is I_ CO(6-5)=0.885± 0.117 Jy km s^-1. The spatially integrated continuum emission measured in the same region as CO(3-2) is S_430 μ m=960 ± 98 μ Jy and S_875 μ m=141 ± 31 μ Jy. A second emitting region is detected over the arc, outside the primary beam (PB), marked with a red circle in the middle panel of Figure <ref>. The continuum emission of that component is S_430 μ m,2nd=737 ± 370 μ Jy.
§ DISCUSSION
§.§ Source plane reconstruction
Working with a galaxy that is magnified by strong gravitational lensing requires additional analysis to recover the galaxy's intrinsic properties. Knowledge of the lensing deflection field is required to reconstruct the galaxy in the source plane and to account for the magnification, which can be highly spatially variable. Different galaxy regions will be stretched and magnified by different factors, resulting in different physical scales over the arc. To fully understand the detected emission we need to map it to the source plane, where the physical scale is unique. To perform the source plane reconstruction of RCS0327, we used uvmcmcfit[<https://github.com/sbussmann/uvmcmcfit>], an implementation for fitting emission models to interferometric data in the uv plane <cit.>. Exploring the emission in the uv plane should extract the maximum amount of information from the observations, and it has proven useful in revealing the source plane emission of bright SMGs discovered by the South Pole Telescope and observed with ALMA <cit.>. uvmcmcfit can fit the source plane emission of a galaxy together with the lensing potential.
The galaxy emission is fitted assuming a 2D elliptical Gaussian (a special case of a Sérsic profile with n=0.5), while the lensing potential is fitted by a singular isothermal ellipsoid (SIE). Recent high resolution imaging of high-redshift SMGs has shown no strong preference between fitting the dust continuum emission with Gaussian or Sérsic profiles <cit.>, supporting the use of a simple 2D elliptical Gaussian function for the source model. We modified uvmcmcfit so that it uses a user-provided lensing deflection field, allowing us to incorporate the detailed lensing model already available from HST imaging <cit.> and to fit the ALMA data only to constrain the properties of the source plane emission. During the fitting of the emission, the lensing model is held fixed and only the source plane emission model is allowed to vary. The model found by using high-resolution HST observations of multiple images of strongly lensed galaxies at different redshifts, together with the cluster member information, outperforms in complexity and quality any model we could find by fitting the lensing potential and source structure together using only the ALMA observations <cit.>. For the case of CO(3-2), a single 2D elliptical Gaussian was sufficient to fit the observed emission. We measure an intrinsic flux of S_ CO(3-2)=70.5_-5.6^+5.1 μJy (for the frequency range of ≈239.3 km s^-1) and an effective radius of r_ CO(3-2)=0.142_-0.027^+0.025 arcseconds (1.23_-0.23^+0.22 kpc). The flux weighted magnification for CO(3-2) is μ_ CO(3-2)=33.1_-4.9^+5.0, giving an intrinsic CO(3-2) luminosity of L'_ CO(3-2)=2.90_-0.23^+0.21×10^8 K km s^-1 pc^2. In the case of CO(6-5), a single component gives S_ CO(6-5)=77.7_-12.9^+13.8 μJy (for the frequency range of ≈239.3 km s^-1) and an effective radius of r_ CO(6-5)=0.055_-0.018^+0.021 arcseconds (0.48_-0.16^+0.18 kpc). The flux weighted magnification for CO(6-5) is μ_ CO(6-5)=47.4_-9.2^+11.9, giving an intrinsic CO(6-5) luminosity of L'_ CO(6-5)=8.0_-1.3^+1.4×10^7 K km s^-1 pc^2. To fit the continuum emission at 450 μ m, we needed two Gaussians in the source plane, since a single component was not enough to account for the whole observed flux outside the PB in Figure <ref>. The main component is well described by a Gaussian with S_450 μ m=23.5_-8.1^+26.8 μJy and r_450 μ m=0.147_-0.035^+0.051 arcseconds (1.28_-0.30^+0.44 kpc). The second component returns S_b,450 μ m,2nd=25.5_-10.6^+12.5 μJy and r_b,450 μ m,2nd=0.17_-0.07^+0.12 arcseconds. The flux weighted magnification of the main continuum component is μ_450 μ m=38.5_-20^+23.5, while that of the second component is μ_b,450 μ m=28.1_-15.3^+27.0. In Figure <ref> we present the observed emission of CO(3-2), CO(6-5) and the continuum at 450 μ m (left panels), the image plane representation of the best fit model found for each case (middle panels), and the residual images obtained after subtracting the best model simulated visibilities from the observed ones (right panels). In all cases, the best fit models appear to account for most of the observed emission.
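For reference, intrinsic line luminosities of this kind follow from the standard relation L'_CO = 3.25×10^7 S_CO Δv ν_obs^-2 D_L^2 (1+z)^-3 (Solomon & Vanden Bout 2005), divided by the lensing magnification. A minimal Python sketch with the cosmology adopted in this paper is given below; the input values are the observed CO(3-2) numbers quoted above, used purely for illustration.

```python
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this paper.
cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)

def co_line_luminosity(Sdv_Jy_kms, nu_obs_GHz, z, mu=1.0):
    """L'_CO in K km/s pc^2 from the velocity-integrated line flux.

    L' = 3.25e7 * S dv * nu_obs^-2 * D_L^2 * (1+z)^-3
    (Solomon & Vanden Bout 2005), divided by the lensing
    magnification mu to obtain the intrinsic value.
    """
    D_L = cosmo.luminosity_distance(z).to("Mpc").value
    Lp = 3.25e7 * Sdv_Jy_kms * nu_obs_GHz**-2 * D_L**2 * (1.0 + z)**-3
    return Lp / mu

# Illustrative call with the observed CO(3-2) values quoted above:
# I_CO(3-2) = 0.557 Jy km/s at 127.895 GHz, mu ~ 33.1,
# which returns ~3e8 K km/s pc^2, consistent with the quoted L'_CO(3-2).
print(co_line_luminosity(0.557, 127.895, 1.7037455, mu=33.1))
```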
In all cases the best fit models appear to account for most of the observed emission.§.§ Gas and dust distribution cCCCCCC Flux density values measured in the source plane.RegionS_450 μ mS_ CO(3-2)aS_ CO(6-5)aS_ CO(6-5)/S_ CO(3-2)Σ_ H2bΣ_ SFRμ Jyμ Jyμ JyM_⊙pc^-2 M_⊙yr^-1kpc^-2 Main Componentc 48.1_-16.6^+54.9 70.5_-5.6^+5.1 77.1_-12.9^+13.8 1.1±0.216.2_-3.5^+5.8 0.27_-0.13^+0.4301 4.0±0.9 6.7±3.3 3.6±3.4 0.5±0.67.1±3.5 0.16_-0.04^+0.1502 (e, f, u) 6.9±1.0 12.3±1.8 16.3±6.4 1.3±0.613.0±1.9 0.27_-0.04^+0.1703 1.4±0.6 6.6±2.1 0.0±1.6 <0.57.0±2.2 0.06_-0.02^+0.1004 0.0±0.3 1.2±1.1 0.0±1.6 <2.3 <0.1005 0.0±0.2 0.1±0.7 0.0±1.5 <1.5 <0.0706 (t) 0.0±0.2 0.0±0.7 0.0±1.6 <1.5 <0.0707 (t) 0.0±0.3 0.0±0.7 0.0±1.8 <1.5 <0.1008 0.5±0.3 5.0±3.1 10.2±9.7 <6.6 <0.1009 (d) 3.3±1.0 12.8±1.9 38.7±9.9 3.0±0.913.5±2.0 0.13_-0.04^+0.1710 (b, c) 3.0±1.9 12.7±2.7 0.0±1.6 <0.313.4±2.9 <0.6411 (r) 0.4±0.3 4.3±2.5 0.0±1.6 <5.3 <0.1012 (r) 0.0±0.3 0.5±0.7 0.0±1.7 <1.5 <0.1013 (t) 0.0±0.3 0.0±0.7 0.0±1.8 <1.5 <0.1014 0.0±0.4 0.0±0.7 0.0±2.1 <1.5 <0.1315 0.0±0.3 0.0±0.8 0.0±1.9 <1.7 <0.1016 0.3±0.3 0.1±0.7 0.0±1.8 <1.5 <0.1017 (a) 0.9±0.6 0.2±0.7 0.0±1.8 <1.5 <0.2018 (r) 0.7±0.7 0.1±0.7 0.0±1.9 <1.5 <0.2419 (r) 0.1±0.3 0.0±0.7 0.0±1.9 <1.5 <0.1020 0.0±0.3 0.0±0.7 0.0±2.1 <1.5 <0.1021 0.2±0.5 0.0±0.7 0.0±2.8 <1.5 <0.1722 0.0±0.3 0.0±0.8 0.0±2.1 <1.7 <0.1023 0.0±0.3 0.0±0.8 0.0±2.0 <1.7 <0.1024 0.0±0.3 0.0±0.7 0.0±2.0 <1.5 <0.1025 0.1±0.3 0.0±0.7 0.0±2.1 <1.5 <0.1026 (s) 0.1±0.3 0.0±0.7 0.0±2.2 <1.5 <0.1027 0.1±0.4 0.0±0.7 0.0±2.5 <1.5 <0.1328 0.4±0.5 0.0±0.8 0.0±3.1 <1.7 <0.17aFor the frequency range of ≈239.3 km s^-1. bUsing α_ CO=0.8. c and source plane emission knots <cit.>. Upper limits correspond to 2σ.We now use the CO(3-2) to estimate the amount and distribution of molecular gas in the galaxy. We first need to estimate the amount of CO(1-0) luminosity based on the observed CO(3-2) luminosity and use the CO conversion factor (α_ CO) to convert it to molecular gas mass <cit.>. The CO excitation level depends mainly on the density and temperature of the gas and it has been found that the excitation level differ for different source populations. Similar results have been found for the CO conversion factor, where a typical value of α_ CO∼0.8 M_⊙ ( K km s^-1 pc^2)^-1 has been used for nuclear starburst such as SMGs and QSOs and the Milky Way value of α_ CO∼4 M_⊙ ( K km s^-1 pc^2)^-1 in MS high redshift CSGs. Because of RCS0327 being cataloged as a starburst, to estimate the molecular mass we will use the values estimated for SMGs and starbursts at high redshift. Assuming a CO excitation level valid for SMGs of L'_ CO(3-2)/L'_ CO(1-0)=0.66 <cit.> we obtain an intrinsic CO line luminosity of L'_ CO(1-0)=4.40_-0.35^+0.32×10^8 K km s^-1 pc^2 and H_2=3.51_-0.28^+0.26×10^8 M_⊙. Using the effective radius measured for the CO(3-2) emission we estimate the total area for the molecular gas surface of A_ CO(3-2)=21.7±5.6 kpc^2 and obtained a H2 surface density of Σ_ H2=16.2_-3.5^+5.8M_⊙pc^-2. In Figure <ref> we present the source plane emission from the optical, CO(3-2) and continuum at 450 μ m. The regions where the CO(3-2) and continuum emission is produced corresponds to the clumps a–f <cit.>, which have a combined spectral energy distribution of SFR=9.0_-1.5^+8.3M_⊙yr^-1 <cit.>. We use the estimated total SFR=29±8M_⊙yr^-1 in the same region derived by <cit.> as an upper limit to the total SFR produced by clumps and the ISM combined. 
The SFR in combination with the molecular gas mass gives a depletion time of ∼40 Myr and a SFR surface density of Σ_SFR=0.54_-0.27^+0.89 M_⊙ yr^-1 kpc^-2. We also derive resolved properties for the individual regions plotted in the right panel of Figure <ref>. These regions have a size of 0.25 arcseconds (2.2 kpc) in the source plane; given the variable magnification, some of them are larger than the beam projected onto the source plane. We use a set of 50 source plane reconstructions taken from the fitting iterations to estimate the significance of the detection in each region. We also map the image plane noise into the source plane to set proper upper limits on the emission in the regions with S/N≤2. The results for all the regions are presented in Table <ref>. In Figure <ref> we present the resolved properties for the total emission of RCS0327 and for each of the regions described above. We compare these results with those obtained for Milky Way molecular clouds <cit.>, local spirals <cit.>, local starbursts <cit.>, local blue compact dwarf galaxies (BCDs) <cit.>, low-redshift dusty normal star-forming galaxies <cit.>, z=1-3 star-forming galaxies (SFGs) <cit.>, SMGs <cit.>, and the lensed SMG SDP.81 observed at high resolution with ALMA <cit.>. We notice that RCS0327 falls above the relation found by <cit.> for MS and starburst galaxies (blue dashed lines), supporting the starburst nature of RCS0327. We point out that the position of RCS0327 in the diagram depends strongly on the assumed value of α_CO, but even when using the Milky Way value of α_CO∼4 M_⊙ (K km s^-1 pc^2)^-1 the galaxy would at most move on top of the starburst relation. For our estimate of the molecular mass, we see that the properties of RCS0327 are similar to those of the local BCDs, which are low-metallicity starburst galaxies showing higher star-forming efficiencies than normal disc galaxies <cit.>. BCDs have already been found to work well as local analogs to similar-redshift low-metallicity starbursts based on properties derived using optical spectroscopy <cit.>. Based on the latter, we can use the α_CO-metallicity relation found for BCDs to estimate an α_CO value for RCS0327. The relation presented by <cit.> returns a value of α_CO∼25 for RCS0327, consistent with the values given by other relations <cit.>. The new α_CO∼25 value would increase the estimated molecular gas mass for RCS0327, putting it near the MS relation with a depletion time of ∼1 Gyr, in the same region as the z=1-3 SFGs.

§.§ CO excitation level

We can use the detected CO(3-2) and CO(6-5) emission lines to constrain the CO excitation level in RCS0327. The total intrinsic flux densities return a ratio of S_CO(6-5)/S_CO(3-2)=1.1±0.2. This value is consistent with an excitation peak at J∼5, similar to some CO excitation levels measured in SMGs and lower than the excitation levels measured for QSOs <cit.>. We can use the same method presented in the previous section to obtain resolved measurements of S_CO(6-5)/S_CO(3-2) in the regions presented in Figure <ref>. We have 5 regions (see Table <ref>) with detections of CO(3-2) and constraints on CO(6-5). The ratios range from S_CO(6-5)/S_CO(3-2)<0.3 in region 10 to S_CO(6-5)/S_CO(3-2)=3.0±0.9 in region 9. Our results are consistent with those found for the simulated high-z disk galaxy presented by <cit.>, where the CO excitation level is S_CO(6-5)/S_CO(3-2)>1 for the main star-forming clumps and S_CO(6-5)/S_CO(3-2)<1 for the inter-clump gas. The warmer and denser gas associated with the main clumps allows for a higher CO excitation level compared to the more extended gas. In the case of RCS0327, the main star-forming clumps identified by <cit.> and <cit.> correspond to regions 2 and 9, which have S_CO(6-5)/S_CO(3-2) ratios of 1.3±0.6 and 3.0±0.9, respectively. Regions 1, 3 and 10 show S_CO(6-5)/S_CO(3-2)<1, consistent with the inter-clump gas of the simulations (Figure <ref>).
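These ratios can be tied to the quoted luminosities with one line of arithmetic. The sketch below is ours, with values from the text; it uses the fact that line luminosities in K km s^-1 pc^2 units scale as L' ∝ S Δv/ν_obs^2, and also shows the fully thermalized, optically thick expectation for comparison:

```python
# Arithmetic behind the excitation discussion, using values quoted above.
S_ratio = 1.1                        # measured S_CO(6-5)/S_CO(3-2)
r_63 = S_ratio * (3.0 / 6.0) ** 2    # L' ratio: flux ratio times (nu_32/nu_65)^2
print(r_63)                          # ~0.28
print(8.0e7 / 2.90e8)                # ~0.28 from the quoted L' values: consistent
print((6.0 / 3.0) ** 2)              # 4.0: thermalized expectation, well above 1.1
```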
§ CONCLUSION

We have presented ALMA observations of the emission lines CO(3-2) and CO(6-5) and of the 450 μm rest-frame continuum in RCS0327, a young, low-metallicity, strongly lensed starburst galaxy at z=1.7. The source plane reconstruction of the detected emission reveals that the molecular gas, traced by CO(3-2), lies on top of the star-forming clumps, revealing the gas reservoir that fuels the ongoing star formation. The dust continuum emission follows an angular extent and distribution similar to those of the molecular gas, but also extends towards parts of the galaxy that are not as bright in CO(3-2), showing a clear spatial offset.

The molecular gas and SFR surface densities show that RCS0327 is a low-density starburst, similar to local BCDs and probably triggered by an ongoing merger. These results support the scenario in which BCDs are identified as local counterparts of the low-metallicity starburst galaxies at high redshift. The detected CO(3-2) and CO(6-5) emission returns a CO excitation level consistent with a peak at J∼5 on large scales. The total excitation appears to be the result of the combination of higher-excitation regions near the star-forming clumps and lower-excitation regions in the more extended gas phase. This is one of the first times that the CO excitation level has been resolved in detail in a galaxy at high redshift. We have shown that giant gravitational arcs offer an excellent opportunity to resolve in detail the different phases of the ISM in cases where a good lensing model is in hand. Our coarse observations already show that RCS0327 is not well described by a single star-formation mode, showing different CO excitation levels, molecular gas reservoirs and dust obscuration across ∼8 kpc. Future high angular resolution observations and the extension to other bright arcs will take us one step closer to understanding the star-formation process in galaxies at high redshift.

This paper makes use of the following ALMA data: ADS/JAO.ALMA#2015.1.00920.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This research has been supported by CONICYT-Chile grant Basal-CATA PFB-06/2007, FONDECYT Regular 1141218 and ALMA-CONICYT project 31160033. M.A. acknowledges partial support from FONDECYT through grant 114009.

[ALMA Partnership et al.(2015)]ALMA2015 ALMA Partnership, Vlahakis, C., Hunter, T. R., et al. 2015, , 808, L4 [Amorín et al.(2016)]Amorin2016 Amorín, R., Muñoz-Tuñón, C., Aguerri, J. A. L., & Planesas, P. 2016, , 588, A23 [Bigiel et al.(2010)]Bigiel2010 Bigiel, F., Leroy, A., Walter, F., et al. 2010, , 140, 1194 [Bolatto et al.(2013)]Bolatto2013 Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, , 51, 207 [Bordoloi et al.(2016)]Bordoloi2016 Bordoloi, R., Rigby, J. R., Tumlinson, J., et al. 2016, , 458, 1891 [Bothwell et al.(2010)]Bothwell2010 Bothwell, M. S., Chapman, S. C., Tacconi, L., et al.
2010, , 405, 219 [Bournaud et al.(2015)]Bournaud2015 Bournaud, F., Daddi, E., Weiß, A., et al. 2015, , 575, A56 [Brammer et al.(2012)]Brammer2012 Brammer, G. B., Sánchez-Janssen, R., Labbé, I., et al. 2012, , 758, L17 [Brinchmann et al.(2004)]Brinchmann2004 Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, , 351, 1151 [Bussmann et al.(2013)]Bussmann2013 Bussmann, R. S., Pérez-Fournon, I., Amber, S., et al. 2013, , 779, 25 (B13)[Bussmann et al.(2015)]Bussmann2015 Bussmann, R. S., Riechers, D., Fialkov, A., et al. 2015, , 812, 43 [Carilli & Walter(2013)]Carilli_Walter2013 Carilli, C. L., & Walter, F. 2013, , 51, 105 [Daddi et al.(2010)]Daddi2010 Daddi, E., Elbaz, D., Walter, F., et al. 2010, , 714, L118 [Dekel et al.(2009)]Dekel2009 Dekel, A., Birnboim, Y., Engel, G., et al. 2009, , 457, 451 [Engel et al.(2010)]Engel2010 Engel, H., Tacconi, L. J., Davies, R. I., et al. 2010, , 724, 233 [Evans et al.(2014)]Evans2014 Evans, N. J., II, Heiderman, A., & Vutisalchavakul, N. 2014, , 782, 114 [Freundlich et al.(2013)]Freundlich2013 Freundlich, J., Combes, F., Tacconi, L. J., et al. 2013, , 553, A130[Genzel et al.(2010)]Genzel2010 Genzel, R., Tacconi, L. J., Gracia-Carpio, J., et al. 2010, , 407, 2091 [Genzel et al.(2015)]Genzel2015 Genzel, R., Tacconi, L. J., Lutz, D., et al. 2015, , 800, 20 [González-López et al.(2017)]Gonzalez-Lopez2017 González-López, J., Bauer, F. E., Romero-Cañizales, C., et al. 2017, , 597, A41 [Hatsukade et al.(2015)]Hatsukade2015 Hatsukade, B., Tamura, Y., Iono, D., et al. 2015, , 67, 93 [Heiderman et al.(2010)]Heiderman2010 Heiderman, A., Evans, N. J., II, Allen, L. E., Huard, T., & Heyer, M. 2010, , 723, 1019-1037 [Hezaveh et al.(2013)]Hezaveh2013 Hezaveh, Y. D., Marrone, D. P., Fassnacht, C. D., et al. 2013, , 767, 132 [Hodge et al.(2015)]Hodge2015 Hodge, J. A., Riechers, D., Decarli, R., et al. 2015, , 798, L18 [Hodge et al.(2016)]Hodge2016 Hodge, J. A., Swinbank, A. M., Simpson, J. M., et al. 2016, , 833, 103 [Hunt et al.(2015)]Hunt2015 Hunt, L. K., García-Burillo, S., Casasola, V., et al. 2015, , 583, A114 [Kennicutt(1998)]Kennicutt1998 Kennicutt, R. C., Jr. 1998, , 498, 541 [McMullin et al.(2007)]Mcmullin2007 McMullin, J. P., Waters, B., Schiebel, D., Young, W., & Golap, K. 2007, Astronomical Data Analysis Software and Systems XVI, 376, 127 [Omont(2007)]Omont2007 Omont, A. 2007, Reports on Progress in Physics, 70, 1099 [Planck Collaboration et al.(2016)]Planck2016 Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, , 594, A13 [Riechers et al.(2013)]Riechers2013 Riechers, D. A., Bradford, C. M., Clements, D. L., et al. 2013, , 496, 329 [Sharon et al.(2012)]Sharon2012 Sharon, K., Gladders, M. D., Rigby, J. R., et al. 2012, , 746, 161 [Spilker et al.(2016)]Spilker2016 Spilker, J. S., Marrone, D. P., Aravena, M., et al. 2016, , 826, 112 [Tacconi et al.(2013)]Tacconi2013 Tacconi, L. J., Neri, R., Genzel, R., et al. 2013, , 768, 74 [Villanueva et al.(2017)]Villanueva2017 Villanueva, V., Ibar, E., Hughes, T. M., et al. 2017, arXiv:1705.09826 [Walter et al.(2004)]Walter2004 Walter, F., Carilli, C., Bertoldi, F., et al. 2004, , 615, L17 [Whitaker et al.(2014)]Whitaker2014 Whitaker, K. E., Rigby, J. R., Brammer, G. B., et al. 2014, , 790, 143 [Wuyts et al.(2010)]Wuyts2010 Wuyts, E., Barrientos, L. F., Gladders, M. D., et al. 2010, , 724, 1182 [Wuyts et al.(2014)]Wuyts2014 Wuyts, E., Rigby, J. R., Gladders, M. D., & Sharon, K. 2014, , 781, 61
http://arxiv.org/abs/1708.07898v1
{ "authors": [ "Jorge González-López", "L. Felipe Barrientos", "M. D. Gladders", "Eva Wuyts", "Jane Rigby", "Keren Sharon", "Manuel Aravena", "Matthew B. Bayliss", "Eduardo Ibar" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170825214546", "title": "ALMA resolves the molecular gas in a young low-metallicity starburst galaxy at z=1.7" }
http://arxiv.org/abs/1708.07488v3
{ "authors": [ "Adithan Kathirgamaraju", "Rodolfo Barniol Duran", "Dimitrios Giannios" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170824164849", "title": "Off-axis short GRBs from structured jets as counterparts to GW events" }
Exotic glueball 0^±- states in QCD Sum Rules

Alexandr Pimikov (Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000, China), Hee-Jung Lee (Department of Physics Education, Chungbuk National University, Cheongju, Chungbuk 28644, Korea; [email protected]), Nikolai Kochelev (Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000, China; Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna, Moscow Region, 141980, Russia; [email protected]), Pengming Zhang (Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000, China), and Viachaslau Khandramai (International Center for Advanced Studies, Technical University, Gomel, 246746, Belarus)

The lowest dimension three-gluon currents that couple to the exotic 0^±- glueballs have been constructed using the helicity formalism. Based on the constructed currents, we obtain new QCD SRs that have been used to extract the masses and the decay constants of the exotic 0^±- glueballs. We estimate the masses of the scalar and pseudoscalar states to be m_+=9.8^+1.3_-1.4 GeV and m_-=6.8^+1.1_-1.2 GeV, respectively.

PACS: 12.38.Lg, 12.38.Bx

§ INTRODUCTION

Glueballs are composite particles that contain gluons and no valence quarks. Theoretically, glueballs should exist because of the non-Abelian and confinement properties of Quantum Chromodynamics (QCD), due to the gluon self-interaction and strong "dressing" through vacuum fluctuations. However, there is no clear experimental evidence, and glueballs remain undiscovered <cit.>. Their mixing with ordinary meson states makes it difficult to discover glueballs in an experimental search. Glueball studies are important for phenomenology both at running and at projected large-scale experiments in many research centers: Belle (Japan), BESIII (Beijing, China), LHC (CERN), GlueX (JLAB, USA), NICA (Dubna, Russia), HIAF (China) and FAIR (GSI, Germany).

Theoretical studies of glueballs can only be performed within nonperturbative approaches. The bound states of gluons have been considered within lattice QCD <cit.>, the flux tube model <cit.>, constituent models <cit.>, and the holographic approach <cit.>. The first study <cit.> of glueballs in the framework of QCD Sum Rules (SRs) <cit.> considered a pseudoscalar 0^-+ state and obtained a mass of ∼1 GeV. Later the same group <cit.> applied this method to a scalar 0^++ glueball state and estimated its mass to be ∼0.7 GeV. Two-gluon glueballs have been broadly studied using QCD SRs <cit.>. In further studies <cit.>, these QCD SRs for the scalar and pseudoscalar glueballs were improved by calculating the direct instanton contribution and the radiative corrections to the perturbative and nonperturbative parts of the correlator. Three-gluon glueballs were considered in <cit.> for a 0^++ state, and later in the works <cit.> the application of QCD SRs was extended to the 0^-+ scalar, vector and tensor states. Further reviews of glueball physics can be found in <cit.>.

A way to avoid problems related to the mixing of glueballs with ordinary mesonic states is to study glueballs with exotic quantum numbers (0^±-, 1^-+, 2^+-, ...), which are not allowed in quark-antiquark systems.
In our recent study <cit.> we proposed a 0^– glueball current of dimension-12, which was used to obtain estimates of the mass, the decay constant and the width of the 0^– glueball. In this paper, we present for the first time a detailed procedure for the construction of three-gluon glueball currents based on the helicity formalism following <cit.>. This procedure is applied to construct the 0^±- glueball currents of the lowest possible dimension. Using these constructed currents, the QCD SRs have been obtained and analyzed to extract the masses and decay constants of the 0^±- glueballs. The search for the lowest dimension currents has been motivated by the necessity to improve the reliability of the QCD SRs. In comparison with our previous study <cit.>, the SRs presented here have the following improvements: the Operator Product Expansion (OPE) starts from condensates of lower dimension, so that the uncertainties in the OPE are reduced; the current of lower dimension leads to a larger coupling to the glueball state; and the first resonance contribution to the SRs is larger for the current of lower dimension. In fact, in the new QCD SR for the 0^– state, the leading nonperturbative contribution comes from the 3-gluon condensate <G^3>, while in our previous study <cit.> the OPE starts from 4-gluon condensates. From the new SR, we have found that the mass of the 0^– glueball is very close to our previous result <cit.>. At the same time, the coupling of the new current to the glueball state has been found to be significantly larger than that of the dimension-12 current suggested in <cit.>. Therefore we conclude that the new current better represents the glueball state.

The paper is organized as follows. In Sec. II we present the procedure for constructing the three-gluon currents using the helicity formalism <cit.>, and we construct the currents that couple to the exotic 0^±- glueball helicity states. In Sec. III we present the OPE of the correlators of the new currents together with the detailed theoretical scheme of the QCD SRs; the masses and decay constants of the 0^±- glueballs are then extracted from the QCD SRs. Section IV contains the discussion of our results.

§ THREE-GLUON CURRENTS

Here we describe the application of the helicity formalism to the construction of three-gluon currents in general form. The technique is then applied to construct the gauge invariant colorless currents that couple to the 0^±- glueball states.

§.§ Three-gluon helicity states

The gluon field tensor G_μν corresponds to the (1,0)⊕(0,1) representation of the Lorentz group and can be decomposed into positive and negative helicity parts, G_μν=G_μν^+ + G_μν^-, where G_μν^∓=(G_μν±G̃_μν)/2 and the dual tensor is G̃_μν=-iϵ_μναβG^αβ/2. The negative-helicity strength tensor G^- is in the (1,0) representation and the positive-helicity strength tensor G^+ is in the (0,1) representation; thus the different helicity tensors do not mix under Lorentz transformations. Therefore, using the helicity strength tensors G_μν^± as building blocks allows one to decompose the glueball currents into irreducible representations of the Lorentz group <cit.>.

To consider the three-gluon helicity current in a general form, we define the generating current as:

J(G_1G_2G_3) ∼ 1/3! S_G_1G_2G_3 g_s^3 (O_1 G_1μ_1ν_1)^a_1 (O_2 G_2μ_2ν_2)^a_2 (O_3 G_3μ_3ν_3)^a_3 ,

where G_i with i=1,2,3 stands for the gluon field strength tensor in one of the following forms: the strength tensor G, the dual tensor G̃, the positive-helicity tensor G^+, or the negative-helicity tensor G^-.
The operator of symmetrization S_G_1G_2G_3 ensures that the current is symmetric with respect to gluon interchange. The operators O_i with i=1,2,3 are products of covariant derivatives, which respect the gauge invariance of the constructed currents:

O_i G_μν = D_τ_1 D_τ_2 ⋯ D_τ_n G_μν .

In order to consider both C-parities, we omit here the trace Tr in color space; it will be recovered later to construct colorless currents and to ensure gauge invariance. By taking various O_i and various contractions of the Lorentz indices, currents with various quantum numbers can be generated.

There are two possible combinations to construct a helicity-λ current J^P_λ of parity P that are symmetric with respect to gluon exchanges: the maximal helicity (λ=3) current with parities P=±1,

J^±_3 = J_+++ ± J_--- ,

and the minimal helicity (λ=1) current with parities P=±1,

J^±_1 = J_++- ± J_--+ ,

where the indices of the currents on the right-hand side denote the helicities of the gluons, J_h_1h_2h_3=J(G^h_1G^h_2G^h_3) in the general form, see Eq. (<ref>). In the definitions of the maximal and minimal helicity currents we have omitted for simplicity the sign C of the arbitrary charge parity, J_λ^P=J_λ^PC. Expanding the helicity currents in terms of the gluon strength tensor and its dual tensor one finds:

J^+_3 = 1/4 (J(GGG)+J(GG̃G̃)+J(G̃GG̃)+J(G̃G̃G)) ;
J^-_3 = -1/4 (J(G̃G̃G̃)+J(G̃GG)+J(GG̃G)+J(GGG̃)) ;
J^+_1 = 1/12 (3J(GGG)-J(GG̃G̃)-J(G̃GG̃)-J(G̃G̃G)) ;
J^-_1 = 1/12 (3J(G̃G̃G̃)-J(G̃GG)-J(GG̃G)-J(GGG̃)) .

In this framework the three-gluon 0^±+ glueball currents <cit.>,

J^++ = g_s^3 f^abc G^a_μν G^b_νρ G^c_ρμ ,
J^-+ = g_s^3 f^abc G̃^a_μν G̃^b_νρ G̃^c_ρμ ,

represent the maximal (λ=3) helicity states, J^±_3=J^±+, while all minimal (λ=1) helicity states vanish, J^±_1=0. By introducing arbitrary linear operators O_i, these currents, Eqs. (<ref>) and (<ref>), can be generalized in the following form:

J(G_1G_2G_3) ∼ g_s^3 (O_1 G_1μν)^a_1 (O_2 G_2νρ)^a_2 (O_3 G_3ρμ)^a_3 .

This form of the current was used in the first QCD SR based study of negative charge parity 0^– scalar glueballs <cit.>. One can see that the contraction of the Lorentz indices leads to the following property for this type of current, Eq. (<ref>):

J(GGG)=J(GG̃G̃)=J(G̃GG̃)=J(G̃G̃G) ;
J(G̃G̃G̃)=J(G̃GG)=J(GG̃G)=J(GGG̃) .

Therefore such currents represent maximal helicity states.
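The helicity decompositions above can be verified with a toy computation. The sketch below is ours and rests on two simplifying assumptions: the slots of J are multilinear, and G and its dual are treated as commuting scalars, which suffices for tracking the G/G̃ content (index orderings are ignored, so the three distinct orderings of, e.g., GG̃G̃ collapse into a single term with coefficient 3):

```python
# Toy check of the helicity decompositions, treating G and its dual as
# commuting symbols and using only the multilinearity of the current J.
import sympy as sp

g, gt = sp.symbols('G Gdual')
Gp = (g - gt) / 2   # positive-helicity tensor G^+
Gm = (g + gt) / 2   # negative-helicity tensor G^-

J3_plus = sp.expand(Gp**3 + Gm**3)           # J_{+++} + J_{---}
J1_plus = sp.expand(Gp**2*Gm + Gm**2*Gp)     # J_{++-} + J_{--+}
print(J3_plus)   # G**3/4 + 3*G*Gdual**2/4: matches the expansion of J^+_3
print(J1_plus)   # G**3/4 - G*Gdual**2/4: matches the expansion of J^+_1
```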
§.§ Three-gluon helicity states of 0^±- glueballs

In order to construct the gauge invariant currents that couple to 0^±- glueballs, we look for scalar or non-conserved vector local currents. Conserved vector currents correspond to spurious states and do not couple to the scalar state <cit.>. Another important requirement on the current is a nonzero Leading Order (LO) perturbative contribution to the spin-0 part of the correlator. In configuration space, the spin-0 projector in the correlator is a partial derivative; therefore, conserved vector currents have no spin-0 contribution. To eliminate possible ambiguity in the construction of the current and to avoid spurious states, we consider only currents defined in terms of the helicity gluon field strength tensors, adopting the helicity formalism <cit.>.

To construct the lowest dimension currents from helicity gluons that couple to the 0^±- glueball states, we propose the generating current that respects all the requirements described above:

J_α(G_1G_2G_3) = 2g_s^3/3! S_123 ( {(D_ρ G_1μν), (D_σ G_2ρν)} (D_μ G_3σα) ) ,

where the factor 2g_s^3 was introduced so that at LO

J_α(GGG) |_LO = g_s^3 d^abc (∂_ρ G^a_μν) (∂_σ G^b_ρν) (∂_μ G^c_σα) ,

which can easily be compared to the currents, Eq. (<ref>) and Eq. (<ref>), suggested in <cit.> for the 0^±+ glueball states. The currents, Eq. (<ref>), of the maximal (λ=3) helicity state turn out to be conserved at LO, ∂_α J^±-_3,α=0, and therefore the maximal helicity current does not satisfy the nonzero LO term condition. The minimal (λ=1) helicity 0^±- currents based on the generating current, Eq. (<ref>), are non-conserved and have all the desired properties:

J^+-_α = g_s^3 ( {(D_τ G_μν), (D_τ G_ρν)} (D_μ G_ρα) ) ,
J^–_α = g_s^3 ( {(D_τ G_μν), (D_τ G_ρν)} (D_μ G̃_ρα) ) .

We propose these currents for the study of the 0^±- states. The new current for the 0^– state has a significantly lower dimension than the current suggested recently in <cit.>. As we discuss in the Introduction and see in the next section, the reduction of the dimension improves the reliability of the QCD SRs: it reduces the OPE uncertainties and increases both the coupling to the state and the first resonance contribution to the SR. Any other choice of the dimension-9 generating current, Eq. (<ref>), leads either to a vanishing current or to an alternative current J^±-_α,alt that has an identical coupling to the spin-0 state at LO: ∂_α J^±-_α,alt |_LO = ∂_α J^±-_α. Using the gluon field tensors and covariant derivatives to ensure gauge invariance of the current, we did not find any three-gluon current of dimension 7 or 8 that respects the above requirements. Applying the helicity formalism to four-gluon states leads us to the conclusion that there is no nonzero helicity current of dimension-8 which couples to the exotic 0^±- glueballs.

§ SUM RULES

§.§ OPE of correlators

Here we present the OPE of the correlators, which is the theoretical basis of the QCD SRs approach <cit.>:

Π^±_μν(q) = i∫ d^4x e^iqx ⟨ J^±-_μ(x) J^±-_ν(0)^† ⟩ ,

where the proposed current J^±-_α is given by Eq. (<ref>) and couples to the gluonic bound state |G(0^±-)⟩ with mass m_± and decay constant f_± through the relation:

⟨0| J^±-_α |G(0^±-)⟩ = p_α f_± m_±^6 .

The correlators of the vector currents have two components:

Π^±_μν(q) = Π^(1)±(q^2)(q_μ q_ν - q^2 g_μν) + Π^(0)±(q^2) q_μ q_ν ,

where Π^(0) and Π^(1) are the spin-0 and spin-1 contributions, respectively. Here we consider only the spin-0 part of the correlator OPE, up to dimension-8 condensates:

Π^(0)±_(OPE) = Π^(0)±_(pert) + Π^(0)±_(G2) + Π^(0)±_(G3) + Π^(0)±_(G4) + ⋯ ,

where the following terms are included: the LO perturbative term (pert) and the dimension-4 (G2), dimension-6 (G3), and dimension-8 (G4) nonperturbative terms. The terms of the correlator OPE have been calculated and are given as follows:

Π^(0)±_(pert) = -5α_s^3/(9!· 8π) q^12 ln(-q^2/μ^2) ,   Π^(0)±_(G2) = 0 ,
Π^(0)±_(G3) = ±5α_s^2/(2^8· 3^3) G_G3^± · q^6 ln(-q^2/μ^2) ,
Π^(0)±_(G4) = ∓α_s^2 π^2/(2^6· 3^3) α_s^2 G^4_± · q^4 ln(-q^2/μ^2) ,

where α_s=g_s^2/(4π) is the coupling constant and μ is the renormalization scale. The contributions G_G3^± and α_s^2 G^4_± are linear combinations of the dimension-6 and dimension-8 condensates described below. We adopt the Mathematica package FEYNCALC <cit.> to handle the algebraic manipulations. The LO perturbative term is represented by the two-loop sunset diagram (the first diagram in Fig.
<ref>); therefore, for any scalar three-gluon current, the largest prime divisor of the denominator must be less than the dimension of the current. The leading nonperturbative contribution, from the nonlocal two-gluon condensate <cit.> represented by the second diagram in Fig. <ref>, is determined by the dimension-6 local condensates thanks to the derivatives in the currents:

G_G3^+ = 9 g^3G^3 - 88πα_s J^2 ,
G_G3^- = 9 g^3G^3 - 20πα_s J^2 ,

where the dimension-6 condensates are denoted g^3G^3 = g^3 f^abc G^a_μν G^b_νρ G^c_ρμ and J^2 = J^a_μ J^a_μ, with the quark current J^a_μ = q̅γ_μ t^a q. For the same reason, the leading term of the third and fourth diagrams shown in Fig. <ref> is a dimension-8 contribution, while the last diagram in Fig. <ref> starts from a dimension-12 condensate and is therefore not considered here. The four-quark condensate J^2 is considered insignificant compared to the three-gluon condensate, gG^3 ≫ J^2, and has not been included in the QCD SRs analysis. Therefore, the quarks contribute only perturbatively, through the evolution of the strong coupling, as discussed below (see Eq. (<ref>)). The total dimension-8 condensate contributions to the correlator are given by the four-gluon condensates:

α_s^2 G^4_+ = 155 (α_s f^abc G^b_μν G^c_ρσ)^2 + 2678 (α_s f^abc G^b_μν G^c_νρ)^2 ,
α_s^2 G^4_- = 845 (α_s f^abc G^b_μν G^c_ρσ)^2 + 1298 (α_s f^abc G^b_μν G^c_νρ)^2 ,

where quark-gluon condensates have been omitted. As expected, the nonperturbative terms in the approximation of self-dual (SD) gluon fields are equal in absolute value and have opposite signs (see Eq. (<ref>)) for the parities P=±1:

G_G3^± |_SD = 9 g^3 f^abc G^a_μν G^b_νρ G^c_ρμ ,
α_s^2 G^4_± |_SD = 2^2· 3^2· 83 (α_s f^abc G^b_μν G^c_νρ)^2 .

For the QCD SRs analysis we apply the hypothesis of vacuum dominance (HVD) to estimate the dimension-8 condensate:

α_s^2 G^4_+ |_HVD = k_HVD · 3·1151/2^4 · ⟨α_s G^2⟩^2 ,
α_s^2 G^4_- |_HVD = k_HVD · 3·7·263/2^4 · ⟨α_s G^2⟩^2 ,

where k_HVD denotes the coefficient of HVD factorization violation. We vary this coefficient in the range k_HVD ∈ [0.25, 4] to include the HVD-related uncertainty. Evaluating the QCD SRs, we apply the results of recent studies <cit.>, where charmonium moment sum rules have been used to obtain the gluon condensate estimates:

g^3G^3 = (8.2 ± 2.0) GeV^2 · α_s G^2 ,   α_s G^2 = 0.07(2) GeV^4 .

The ratio between the three-gluon and the two-gluon condensates agrees well with the instanton model <cit.> for an instanton radius ρ_c = 1/(600 MeV):

g^3G^3 = 48π/(5ρ_c^2) · α_s G^2 .

Due to the large value of the Borel parameter M^2 in the QCD SRs for exotic glueballs (see below), possible direct instanton contributions to the correlators are expected to be strongly suppressed in comparison to the OPE terms and are therefore not considered here.

§.§ QCD SRs

We analyze the constructed QCD SRs for the 0^±- states on the same footing. Therefore, here and below we omit for simplicity the parity and spin labels, Π^(0)±_t → Π_t, where t denotes the different contributions to the OPE of the correlator explained in Eqs. (<ref>) above. In this simplified notation the truncated OPE of the correlator has the form:

Π_(OPE) = Π_(pert) + Π_(G3) + Π_(G4) .

The phenomenological part of the QCD SR is based on the modeling of the spectral density. For the phenomenological description of the correlator, we use the one-resonance model with the continuum contribution modeled by the Im part of the correlator OPE:

ImΠ_(ph)(-s) = π m^12 f^2 δ(s-m^2) + Θ(s-s_0) ImΠ_(OPE)(-s) ,

where m is the mass of the resonance and s_0 is the continuum threshold. The QCD SR then reads

1/π ∫_0^s_0 ImΠ_(OPE)(-s)/(s+Q^2) ds = f^2 m^12/(m^2+Q^2) .
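For later use it is convenient to record the spectral density that follows from these expressions. Taking the discontinuity of the logarithm, ln(-q^2-iϵ) → ln(s) - iπ at q^2=s>0, the perturbative term gives (a sketch; the condensate terms analogously contribute pieces ∝ s^3 and ∝ s^2 with the coefficients of Eqs. (<ref>)):

1/π ImΠ_(pert)(-s) = 5α_s^3/(9!· 8π) s^6 ,

which is the s^6 growth of the continuum spectral density invoked below.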
In the framework of QCD SRs <cit.>, the Borel transform B̂,

B̂_Q^2→M^2 [Π(Q^2)] = lim_n→∞ (-Q^2)^n/Γ(n) [d^n/d(Q^2)^n Π(Q^2)]_Q^2=nM^2 ,

is applied to both sides of the SR, Eq. (<ref>), in order to reduce the SR uncertainties by suppressing the contributions from excited resonances and from higher-order OPE terms. The Borel transformation modifies the components of the sum rule:

R^t_0(M^2,s_0) = 1/π ∫_0^s_0 ds ImΠ_t(-s) e^-s/M^2 ,
R^(res)_0(M^2) = m^12 f^2 e^-m^2/M^2 .

Here we follow the common practice of renormalization group improvement after the Borel transformation; therefore, in ImΠ_t(-s) all coupling constants are replaced by the running coupling α_s → α_s(M^2):

α_s(Q^2) = 4π/(b_0 ln(Q^2/Λ_QCD^2)) ,

with the LO beta-function coefficient b_0 = 11-2N_f/3, the QCD scale Λ_QCD = 350 MeV, and the number of flavors N_f = 4. The mass is extracted from the family of derivative SRs defined by

R^t_k(M^2,s_0) = M^4 d/dM^2 R^t_k-1(M^2,s_0) .

Denoting by R^(SR)_k the difference between the OPE result and the continuum contribution for any k ≥ 0,

R^(SR)_k(M^2,s_0) = R^(pert)_k(M^2,s_0) + R^(G3)_k(M^2,s_0) + R^(G4)_k(M^2,s_0) ,

we define the master sum rule (k=0) and the derivative SRs (k>0) by the following equations:

R^(SR)_k(M^2,s_0) ≈ R^(res)_k(M^2,s_0) .

The high dimension of the considered currents leads to a continuum spectral density that grows as ImΠ_(OPE) ∼ s^6. Therefore, keeping in mind that the continuum contribution can be large <cit.>, we define the upper boundary M^2 < M_+^2(s_0) of the fiducial window by the following condition, which is less restrictive than the condition suggested in <cit.> for low-dimension correlators:

R^(res)_k(M^2)/R^(SR)_k(M^2,∞) ≈ R^(SR)_k(M^2,s_0)/R^(SR)_k(M^2,∞) > 1/10 .

This condition influences the definition of the SR uncertainty, while the central values of the predictions are insensitive to it. The lower boundary M_-^2 of the fiducial window M^2 ∈ [M_-^2, M_+^2(s_0)] is limited by the conditions

|R^(G3)_k(M^2,∞)|/R^(pert)_k(M^2,∞) < 2/3 ,   |R^(G4)_k(M^2,∞)|/R^(pert)_k(M^2,∞) < 1/3 ,

which ensure the reliability of the OPE. The values of the mass and the decay constant can be extracted from the QCD SRs, Eq. (<ref>), as:

m_k(M^2,s_0) = √(R^(SR)_k+1(M^2,s_0)/R^(SR)_k(M^2,s_0)) ,
f_k^2(M^2,s_0) = e^M_G^2/M^2 R^(SR)_k(M^2,s_0)/M_G^2(6+k) .

We define the mass and the decay constant by keeping the M^2-stability criterion δ_k below 10% ∼ 1/3^2, which is the assumed OPE accuracy related to the condition in Eq. (<ref>):

δ_k = (max f_k^2(M^2,s_0) - min f_k^2(M^2,s_0))/(max f_k^2(M^2,s_0) + min f_k^2(M^2,s_0)) < 1/10 .

This condition puts limits on the continuum threshold value s_0. The conditions in Eqs. (<ref>,<ref>,<ref>) define the fiducial set of (M^2,s_0) values. Finally, we define the predictions for the mass and the decay constant as averages of the maximal and minimal values on the fiducial interval of M^2 at the fixed central value of the threshold given in the last column of Table <ref>:

m_k = (max m_k(M^2,s_0) + min m_k(M^2,s_0))/2 ,
f_k^2 = (max f_k^2(M^2,s_0) + min f_k^2(M^2,s_0))/2 .

The variation of the mass and the decay constant over the fiducial (M^2,s_0) set defines the uncertainties coming from the OPE truncation and from the modeling of the spectral function.
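To make the machinery concrete, the following sketch evaluates the perturbative Borel moment and the k=0 mass estimator. It is illustrative only: the condensate terms of the full analysis are omitted, and the (M^2, s_0) values are assumptions rather than the fiducial values of Table <ref>:

```python
# Numerical sketch of the Borel machinery above, perturbative term only.
import numpy as np
from math import factorial
from scipy.special import gammainc   # regularized lower incomplete gamma

LAMBDA2 = 0.35**2                    # Lambda_QCD^2 in GeV^2
B0 = 11 - 2 * 4 / 3                  # LO beta-function coefficient, N_f = 4

def alpha_s(M2):
    """LO running coupling evaluated at the Borel scale M^2."""
    return 4 * np.pi / (B0 * np.log(M2 / LAMBDA2))

def R0_pert(M2, s0):
    """(1/pi) Int_0^s0 ImPi_pert(-s) exp(-s/M^2) ds, with (1/pi) ImPi = C*s^6."""
    C = 5 * alpha_s(M2)**3 / (factorial(9) * 8 * np.pi)
    return C * factorial(6) * M2**7 * gammainc(7, s0 / M2)

def mass_k0(M2, s0, h=1e-3):
    """m_0 = sqrt(R_1/R_0) with R_1 = M^4 dR_0/dM^2 (central difference)."""
    dR = (R0_pert(M2 + h, s0) - R0_pert(M2 - h, s0)) / (2 * h)
    return np.sqrt(M2**2 * dR / R0_pert(M2, s0))

print(mass_k0(60.0, 140.0))          # ~9 GeV for these assumed inputs
```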
§.§ QCD SRs results for the 0^±- glueball states

Performing the QCD SRs analysis described above, we obtain predictions for the masses and decay constants of the 0^±- states. These are presented in Table <ref> for the k=0 case, together with the fiducial intervals of the SR parameters: the Borel parameter M^2 and the threshold value s_0. There are three sources of error for the mass and decay constant presented in Table <ref>: the first error represents the SR stability with respect to the Borel parameter M^2, the second represents the threshold s_0 dependence, and the third is the uncertainty related to the variations of the gluon condensates G^3 and G^4. The first two errors, which originate from the OPE truncation and the continuum modeling, are defined by the variation of the results over the fiducial (M^2,s_0) set representing the conditions of Eqs. (<ref>,<ref>,<ref>). The variation of the G^3 condensate comes from <cit.> (see Eq. (<ref>)). The uncertainties of the G^4 contribution have been estimated from the variation of the HVD violation coefficient (see Eq. (<ref>)) and from the variation of the two-gluon condensate G^2 estimated in <cit.>.

In Fig. <ref>, we present the k=0 results for the glueball mass and the decay constant as functions of the Borel parameter for various values of the threshold parameter. As one can see, there is a rather good stability plateau for both quantities, which is ensured by the condition in Eq. (<ref>). The masses and decay constants estimated with the higher values k=1,2,3 are in agreement with the k=0 case.

§ SUMMARY

We have performed a study of C-odd scalar and pseudoscalar exotic glueball states within the framework of QCD SRs. The constructed QCD SRs include the LO perturbative term and the nonperturbative contributions up to dimension-8 gluon condensates. The results of the QCD SRs analysis for the masses m_± and decay constants f_± of the 0^±- glueballs are as follows: for the pseudoscalar state,

m_- = 6.8^+1.1_-1.2 GeV ,   f_- = 1.3 ± 0.1 MeV ,

and for the scalar state,

m_+ = 9.2^+1.3_-1.4 GeV ,   f_+ = 0.9 ± 0.1 MeV .

The construction of the three-gluon currents has been addressed in general form on the basis of the helicity formalism. The developed technique of helicity-based current construction has been used to build new three-gluon currents of minimal dimension that couple to the 0^±- glueball states.

Our previous QCD SRs result <cit.> for the 0^– glueball mass using a dimension-12 current, M_G=6.3^+0.8_-1.1 GeV, is in good agreement with our new estimate, Eq. (<ref>). As one would expect, the higher dimension leads to a smaller coupling of the dimension-12 current to the glueball state <cit.>: F_G=67 ± 6 keV. Therefore the new current of minimal dimension represents the most probable configuration of the 0^– glueball.

The Belle Collaboration <cit.> has performed a search in a range of masses lower than our predicted mass and found no evidence for the exotic 0^– glueball. Our result for the mass of the exotic 0^– glueball is in qualitative agreement with the result of lattice QCD <cit.>. On the other hand, the obtained mass of the 0^+- glueball state is noticeably larger than the lattice results <cit.>. Unfortunately, the status of exotic glueball masses calculated using lattice QCD is not clear at the present time (see the discussion in <cit.> and Table 3 therein). Some lattice groups have seen exotic glueball signals, while others found no indication of any signal for the same exotic states. Furthermore, in <cit.> it was emphasized that lattice QCD calculations using heavy glueball degrees of freedom should use improved techniques to assign J^PC quantum numbers.
Given these unresolved issues in lattice QCD, the present mismatch between our calculations and the lattice results is not necessarily a problem. A recent study <cit.> within QCD SRs of the exotic 0^– tetraquark with light quark content predicted a small mass, M_tetra=1.66 ± 0.14 GeV. The large mass difference should therefore lead to a very small mixing between this light tetraquark state and the heavy exotic 0^– glueball. However, one cannot avoid the discussion of possible mixing between the exotic glueball states and heavier tetraquarks with the same exotic quantum numbers, if such heavy tetraquarks exist. We would like to point out that, to our knowledge, all estimates within various models give masses for the hidden-charm tetraquarks of around 4 GeV (see the review <cit.>), which is well below our glueball masses. In principle, the exotic glueballs can also mix with the hidden-charm hybrid which has the same quantum numbers. A recent lattice calculation for the 0^+- hybrid predicts a mass of around 4.4 GeV <cit.>. Since there is a large mass gap between the 0^±- glueballs and the exotic hadrons with hidden charm, the mixing of the exotic glueballs with hidden-charm states is expected to be small. In any case, the calculation of the mixing between different exotic states is very complicated due to contributions coming from both the perturbative and nonperturbative sectors of QCD, and such studies are beyond the scope of the present work.

The decay of the three-gluon state to hadrons is suppressed by the large power of the strong coupling at the virtuality of the glueball's gluons, Q^2 ∼ 4 GeV^2, where we assume that the gluons carry equal momenta. One of the allowed channels includes charmonium in the final state. In particular, we consider the S-wave decay of the glueball, G(0^–) → f_1(1285) + J/Ψ, to be the most promising, due to the large glueball mass and the small widths of the final particles. Additionally, this channel could be enhanced by the decay of a hidden-charm tetraquark. Therefore, charmonium data could be a good place to search for experimental evidence of exotic glueballs.

§ ACKNOWLEDGMENT

We would like to thank M. Elbistan, S. Mikhailov, P. Gandini and C. Halcrow for stimulating discussions and useful remarks. This work has been supported by the National Natural Science Foundation of China (Grants No. 11575254 and 11650110431), the Chinese Academy of Sciences President's International Fellowship Initiative (Grants No. 2013T2J0011 and 2016PM053), and the Japan Society for the Promotion of Science (Grant No. S16019). The work by H.J.L. was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant No. 2016R1D1A1A09920078. The work of A. P. and V. K. has also been supported by the Russian Foundation for Basic Research under Grant No. 15-52-04023 and by the Belarusian Republican Foundation for Basic Research under Grant No. F15RM-072, respectively.

Jia:2016cgl S. Jia et al., Phys. Rev. D95,012001(2017). Bali:1993fb G. S. Bali et al., Phys. Lett. B309,378(1993). Gregory:2012hu E. Gregory et al., JHEP 10,170(2012). Morningstar:1999rf C. J. Morningstar and M. J. Peardon, Phys. Rev. D60,034509(1999). Chen:2005mg Y. Chen et al., Phys. Rev. D73,014516(2006). Robson:1978iu D. Robson, Z. Phys. C3,199(1980). Isgur:1984bm N. Isgur and J. E. Paton, Phys. Rev. D31,2910(1985). Jaffe:1975fd R. L. Jaffe and K. Johnson, Phys. Lett. B60,201(1976). Carlson:1984wq C. E. Carlson, T. H. Hansson, and C. Peterson, Phys. Rev.
D30,1594(1984). Chanowitz:1982qj M. S. Chanowitz and S. R. Sharpe, Nucl. Phys. B222,211(1983).Phys.B228,588(1983)].Cornwall:1982zn J. M. Cornwall and A. Soni, Phys. Lett. B120,431(1983). Cho:2015rsa Y. M. Cho et al., Phys. Rev. D91,114020(2015). Boulanger:2008aj N. Boulanger, F. Buisseret, V. Mathieu, and C. Semay, Eur. Phys. J. A38,317(2008). Csaki:1998qr C. Csaki, H. Ooguri, Y. Oz, and J. Terning, JHEP 01,017(1999). Bellantuono:2015fia L. Bellantuono, P. Colangelo, and F. Giannuzzi, JHEP 10,137(2015). Chen:2015zhh Y. Chen and M. Huang, Chin. Phys. C40,123101(2016). Brunner:2016ygk F. Brunner and A. Rebhan, (2016). Novikov:1979ux V. A. Novikov, M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Phys. Lett. B86,347(1979).Teor. Fiz.29,649(1979)].Shifman:1978bx M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Nucl. Phys. B147,385(1979). Novikov:1979va V. A. Novikov, M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Nucl. Phys. B165,67(1980). Shuryak:1982dp E. V. Shuryak, Nucl. Phys. B203,116(1982). Zhang:2003mr A.-l. Zhang and T. G. Steele, Nucl. Phys. A728,165(2003). Narison:2005wc S. Narison, Phys. Rev. D73,114024(2006). Harnett:2000fy D. Harnett and T. G. Steele, Nucl. Phys. A695,205(2001). Forkel:2003mk H. Forkel, Phys. Rev. D71,054008(2005). Latorre:1987wt J. I. Latorre, S. Narison, and S. Paban, Phys. Lett. B191,437(1987). Liu:1998xx J.-P. Liu, Chin. Phys. Lett. 15,784(1998). Hao:2005hu G. Hao, C.-F. Qiao, and A.-L. Zhang, Phys. Lett. B642,53(2006). Mathieu:2008me V. Mathieu, N. Kochelev, and V. Vento, Int. J. Mod. Phys. E18,1(2009). Ochs:2013gi W. Ochs, J. Phys. G40,043001(2013). Pimikov:2016pag A. Pimikov, H.-J. Lee, N. Kochelev, and P. Zhang, Phys. Rev. D95,071501(2017). Pimikov:2017xap A. Pimikov, H.-J. Lee, and N. Kochelev, Phys. Rev. Lett. 119,079101(2017). Jacob:1959at M. Jacob and G. C. Wick, Annals Phys. 7,404(1959).Phys.281,774(2000)].Fritzsch:1975tx H. Fritzsch and P. Minkowski, Nuovo Cim. A30,393(1975). Mandula:1982us J. E. Mandula, G. Zweig, and J. Govaerts, Nucl. Phys. B228,109(1983). Jaffe:1985qp R. L. Jaffe, K. Johnson, and Z. Ryzak, Annals Phys. 168,344(1986).Shtabovenko:2016sxi V. Shtabovenko, R. Mertig, and F. Orellana, Comput. Phys. Commun. 207,432(2016).Mikhailov:1986be S. V. Mikhailov and A. V. Radyushkin,JETP Lett.43, 712 (1986) [Pisma Zh. Eksp. Teor. Fiz.43, 551 (1986)]. Mikhailov:1991pt S. V. Mikhailov and A. V. Radyushkin,Phys. Rev. D 45, 1754 (1992).Grozin:1985wj A. G. Grozin and Y. F. Pinelis,Phys. Lett.166B, 429 (1986).Grozin:1994hd A. G. Grozin,Int. J. Mod. Phys. A 10, 3497 (1995)Narison:2011xe S. Narison, Phys. Lett. B706,412(2012). Narison:2011rn S. Narison, Phys. Lett. B707,259(2012). Schafer:1996wv T. Schäfer and E. V. Shuryak, Rev. Mod. Phys. 70,323(1998). Matheus:2006xi R. D. Matheus, S. Narison, M. Nielsen, and J. M. Richard, Phys. Rev. D75,014005(2007). Huang:2016rro Z.-R. Huang et al., Phys. Rev. D95,076017(2017).Palameta:2017ols A. Palameta, J. Ho, D. Harnett and T. G. Steele,arXiv:1707.00063 [hep-ph].Chen:2016qju H. X. Chen, W. Chen, X. Liu and S. L. Zhu,Phys. Rept.639, 1 (2016) Liu:2012ze L. Liu et al. [Hadron Spectrum Collaboration],JHEP 1207, 126 (2012)
http://arxiv.org/abs/1708.07675v3
{ "authors": [ "Alexandr Pimikov", "Hee-Jung Lee", "Nikolai Kochelev", "Pengming Zhang", "Viachaslau Khandramai" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170825100904", "title": "Exotic glueball $0^{\\pm -}$ states in QCD Sum Rules" }
1 Steward Observatory, Department of Astronomy, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721
2 Astrolab IRIS, Verbrandemolenstraat, Ypres, Belgium and Vereniging voor Sterrenkunde, Werkgroep Veranderlijke Sterren, Belgium
3 Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
4 Department of Physics and Astronomy, Iowa State University, A313E Zaffarano, Ames, IA 50010
5 Department of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Lab, University Park, PA 16802
6 Department of Physics and Astronomy, Louisiana State University, 261-A Nicholson Hall, Tower Dr, Baton Rouge, LA 70803
7 Applied Physics Laboratory, Johns Hopkins University, 11100 Johns Hopkins Road, Laurel, MD 20723
8 Center for Mathematical Plasma Astrophysics, University of Leuven, Belgium

[email protected]

To test alternative hypotheses for the behavior of KIC 8462852, we obtained measurements of the star over a wide wavelength range, from the UV to the mid-infrared, from October 2015 through December 2016, using Swift, Spitzer, and AstroLAB IRIS. The star faded in a manner similar to the long-term fading seen in Kepler data about 1400 days previously. The dimming rate for the entire period reported is 22.1 ± 9.7 milli-mag yr^-1 in the Swift wavebands, with amounts of 21.0 ± 4.5 mmag in the groundbased B measurements, 14.0 ± 4.5 mmag in V, and 13.0 ± 4.5 mmag in R, and a rate of 5.0 ± 1.2 mmag yr^-1 averaged over the two warm Spitzer bands. Although the dimming is small, it is seen at ≳ 3σ by three different observatories operating from the UV to the IR. The presence of long-term secular dimming means that previous SED models of the star based on photometric measurements taken years apart may not be accurate. We find that stellar models with T_eff = 7000 - 7100 K and A_V ∼ 0.73 best fit the Swift data from UV to optical. These models also show no excess in the near-simultaneous Spitzer photometry at 3.6 and 4.5 μm, although a longer wavelength excess from a substantial debris disk is still possible (e.g., as around Fomalhaut). The wavelength dependence of the fading favors a relatively neutral color (i.e., R_V ≳ 5, but not flat across all the bands) compared with the extinction law for the general ISM (R_V = 3.1), suggesting that the dimming arises from circumstellar material.

§ INTRODUCTION

KIC 8462852, also known as Boyajian's Star, is an enigmatic object discovered by citizen scientists of the Planet Hunters project studying data from the Kepler mission <cit.>. The main-sequence F1/2 V star <cit.> at ∼400 pc <cit.> has undergone irregularly shaped dips in flux of up to ∼20%, with durations of one to a few days <cit.>. A new episode of dips started in May-June 2017 <cit.>. The star also faded throughout the Kepler mission <cit.>, initially in a slow decline, and then in a more rapid fading by ∼2% over about 300 days <cit.>. Such behavior is virtually unique among normal main-sequence stars <cit.>. Archival data have also been used to suggest a decline in stellar brightness over the past century with an average rate of -0.151 ± 0.012% yr^-1 <cit.>, though the existence and significance of the century-long trend are disputed <cit.>.
In addition, adaptive-optics-corrected images in the JHK bands reveal a nearby source 2″ from the primary star, with brightness and color consistent with an M2 V companion at a projected distance of ∼800 AU <cit.>.

Peculiar light curves and slow trends are common among young stellar objects (YSOs) <cit.>, and may result from obscuration by dust generated by disintegrating planets <cit.> or planetesimal/planet collisions <cit.> if viewed edge-on <cit.>. However, KIC 8462852 does not appear to fit either scenario. The optical to mid-IR spectrum of the star confirms that it is a mature main-sequence dwarf; spectral energy distribution (SED) modeling finds no significant IR excess in the 3-5 μm region that could arise from a warm debris disk <cit.>; no excess is seen in the WISE photometry <cit.>; and millimeter and sub-millimeter continuum observations also find no significant excess emission towards the star <cit.>.

To characterize the effects of apparent astrometric motions that are driven by the variability of field stars, the Kepler data have been studied using a principal component technique to remove correlated trends that are not relevant to the phenomena under investigation <cit.>. It was found that some variations seen in the Kepler light curve are likely from other sources close to the line of sight of KIC 8462852. In particular, the study suggests that the 0.88-day period, presumed to be the rotational modulation of KIC 8462852 <cit.>, is likely from a contaminating source. While the major dips in Q8 and Q16/17 (and the long-term secular dimming) are confirmed to be from KIC 8462852, the origin of the smaller dips is less certain.

A viable explanation for the bizarre dips in the light curve of KIC 8462852 is the apparition of a large family of comets <cit.>, possibly the onset of a period like the Late Heavy Bombardment <cit.>. <cit.> show that the hypothesis is plausible by successfully modeling the last episode of dimming events in Kepler Quarters 16 and 17. Similar simulations by <cit.> also reproduce the primary features of the dips with one dust-enshrouded planetary object for each dip. However, this latter type of model would require additional comets or dust-enshrouded planets to explain any additional dips, essentially adding a large number of new free parameters.

A number of other possibilities have been proposed. One is that one or more planetary bodies have spiraled into the star; it is speculated that the return of the star to thermal equilibrium may explain the slow dimming, while the deep dips may arise from transits of planetary debris <cit.>. <cit.> model the events as being due to a Trojan-like asteroid system orbiting the star. In contrast, <cit.> suggest that the dimming might be caused by foreground dust in the ISM, with dense clumps in an intervening dark cloud responsible for the deep dips. Extinction by clumpy material in the outer Solar System has also been suggested <cit.>. In addition, instabilities in the star itself have been proposed <cit.>.

To investigate this mystery, we are conducting ongoing monitoring of KIC 8462852 and its surrounding field with two space telescopes in seven wavebands: Swift/UVOT in the uvw2 (effective wavelength 2030 Å), uvm2 (2231 Å), uvw1 (2634 Å), u (3501 Å), and v (5402 Å) bands <cit.>, and Spitzer/IRAC at 3.6 and 4.5 μm <cit.>. In this paper, we report the results of the monitoring from October 2015 through December 2016 (we include a few measurements past this cutoff but have not made use of them in the analysis).
We also use the automated and homogeneous observations from the AAVSO database obtained through December 2016 in the optical B, V, and R bands with the Keller F4.1 Newtonian New Multi-Purpose Telescope (NMPT) of the public observatory AstroLAB IRIS, Zillebeke, Belgium.

§ OBSERVATIONS

In this section, we describe the basic observations obtained with Swift/UVOT, AstroLAB IRIS, and Spitzer. Each of these sets of data indicates a subtle dimming of the star. However, gaining confidence in this result requires a detailed analysis of calibration issues, which is reserved for Section 3.

§.§ Swift/UVOT

The Ultraviolet/Optical Telescope (UVOT) is one of three instruments aboard the Swift mission. It is a modified Ritchey-Chrétien 30 cm telescope with a wide (17′ × 17′) field of view and a microchannel-plate-intensified CCD detector operating in photon-counting mode <cit.>. UVOT provides images at 2.3″ resolution and includes a clear white filter, u, b, and v optical filters, uvw1, uvm2, and uvw2 UV filters, a magnifier, two grisms, and a blocking filter. The uvw2 and uvw1 filters have substantial red leaks, which have been characterized to high precision by <cit.> and are included in the current UVOT filter curves. Calibration of the UVOT is discussed in depth by <cit.> and <cit.>.

In full-frame mode, the CCD is read every 11 ms, which creates a problem of coincidence loss (similar to pile-up in the X-ray) for stars with count rates greater than 10 cts s^-1 (∼15 mag, depending on filter). The camera can be used in a windowed mode, in which a subset of the pixels is read. Given the brightness of KIC 8462852, to reduce the coincidence corrections we observed in a 5 × 5 window (70 × 70 pixels), resulting in a 3.6 ms readout time. The observations were generally 1 ks in duration, utilizing a mode that acquired data in five filters (v, u, uvw1, uvm2 and uvw2, from 5900 to 1600 Å). To improve the precision of the photometric measurements and ensure that the target star and comparison stars all landed in the readout window, observations were performed with a "slew in place," in which Swift observed the field briefly in the "filter of the day" - one of the four UV filters - before slewing a second time for more precise positioning. The slew-in-place images were used for additional data points in the UV. KIC 8462852 was first observed on October 22, 2015 and then approximately every three days from December 4, 2015 to March 27, 2016. It was later observed in coordination with the Spitzer campaign starting in August 2016.

X-ray data were obtained simultaneously with the Swift/XRT <cit.>. No X-ray emission within the passband from 0.2 to 10 keV is seen from KIC 8462852 in 52 ks of exposure time, down to a limit of 5 × 10^-15 erg s^-1 cm^-2, using the online analysis tools of <cit.>.

§.§ AstroLAB IRIS

Optical observations in the B, V, and R bands were taken with the 684 mm aperture Keller F4.1 Newtonian New Multi-Purpose Telescope (NMPT) of the public observatory AstroLAB IRIS, Zillebeke, Belgium. The CCD detector assembly is a Santa Barbara Instrument Group (SBIG) STL 6303E operating at -20°C. A 4-inch Wynne corrector feeds the CCD at a final focal ratio of 4.39, providing a nominal field of view of 20′ × 30′. The 9 μm physical pixels project to 0.62″ and are read out binned 3 × 3, i.e., 1.86″ per combined pixel. The B, V, and R filters are from Astrodon Photometrics and have been shown to reproduce the Johnson/Cousins system closely <cit.>. The earliest observation was made on September 29, 2015. There is a gap in time coverage from January 8 to June 8, 2016. We report the observations through December 2016.
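The quoted image scale follows from the telescope parameters given above; a quick sketch of the arithmetic (the small-angle plate-scale relation is assumed, with the focal length derived from the stated aperture and f-ratio):

```python
# Plate-scale check for the NMPT setup described above.
aperture_mm = 684.0
f_ratio = 4.39
pixel_um = 9.0

focal_mm = aperture_mm * f_ratio                  # ~3003 mm effective focal length
scale = 206265.0 * pixel_um * 1e-3 / focal_mm     # arcsec per physical pixel
print(scale, 3 * scale)                           # ~0.62" and ~1.86" (3x3 binned)
```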
§.§ Spitzer

Spitzer observations are made with the Infrared Array Camera (IRAC) using a uniform exposure design, which employs a cycling dither at 10 positions on the full array with a 12 s frame time. Using multiple dither positions tends to even out the intra- and inter-pixel response variations of the detector; repeating the same dither pattern at every epoch puts KIC 8462852 roughly on the same pixels of the detector, further reducing potential instrumental bias in the photometry[<http://ssc.spitzer.caltech.edu/warmmission/news/18jul2013memo.pdf>]. Our first Spitzer observation was executed on January 16, 2016. There is a gap in the time baseline of the monitoring in the period from April to July 2016 (from MJD 57475 to 57605), when KIC 8462852 was out of the visibility window of Spitzer[Note that the time coverage gap in the Swift and Spitzer data only partially overlaps with the gap in the ground-based data.]. The Swift-Spitzer coordinated monitoring is still underway at the time of writing.

Table 1: UVOT Photometry of KIC 8462852 (a portion is shown here for guidance regarding its form and content; the table is available in its entirety in machine-readable form in the online journal and as an appendix to this posting)

Waveband | MJD^a | raw magnitude | σ | corrected magnitude | σ | relative comparison^b magnitude | σ
uvw2 | 57317.518 | 14.820 | 0.050 | 14.776 | 0.089 | 0.045 | 0.074
uvw2 | 57317.521 | 14.800 | 0.030 | 14.793 | 0.055 | 0.007 | 0.046
uvw2 | 57317.552 | 14.790 | 0.030 | 14.787 | 0.047 | 0.003 | 0.036
uvw2 | 57317.554 | 14.810 | 0.030 | 14.784 | 0.043 | 0.026 | 0.030
uvw2 | 57317.584 | 14.780 | 0.030 | 14.755 | 0.058 | 0.025 | 0.050
uvw2 | 57317.587 | 14.820 | 0.030 | 14.735 | 0.061 | 0.085 | 0.053

^a Modified Julian date. ^b Average brightening of the comparison star measurements.

§ DATA PROCESSING AND ANALYSIS

Our three data sources provide multiple accurate measurements of KIC 8462852 over a year. They were all interrupted when the viewing angle to KIC 8462852 passed too close to the Sun. Because of differing viewing constraints, exactly when this gap in the data occurs differs among the observatories. Because of the differences in time coverage, we analyze long-term trends in the data sets in two ways. First, within a given data set, we fit a linear trend and use the slope and its error as an indication of any change. The Swift measurements are mostly prior to the gap, so we fitted both before the gap and for the whole set of measurements. For the sake of comparison, we treat the groundbased data the same way. The Spitzer observations began at the end of the first groundbased sequence, so we only fit the whole set. Although these fits make use of all the data in each band, they may give misleading information on the color behavior because the data do not have identical time coverage. In discussing color trends we therefore focus on the UVOT and groundbased data obtained in overlapping time sequences. For the Spitzer data, we calculate the difference from the first measurement to the later ones, and compare with a similar calculation for the groundbased point closest in time to the first Spitzer point, relative to the post-gap results from the ground. Details of these procedures are given below.
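A minimal sketch of the linear-trend fits just described; the function and array names are ours, and the inputs are placeholders for one band's photometry:

```python
# Weighted linear fit m(t) = a + b*(t - <t>), returning the slope, its
# formal 1-sigma error from the covariance matrix, and the reduced chi^2.
import numpy as np

def fit_trend(mjd, mag, sigma):
    w = 1.0 / sigma**2
    A = np.vstack([np.ones_like(mjd), mjd - mjd.mean()]).T
    cov = np.linalg.inv(A.T @ (A * w[:, None]))
    coef = cov @ (A.T @ (w * mag))
    resid = mag - A @ coef
    chi2red = np.sum(w * resid**2) / (len(mag) - 2)
    return coef[1], np.sqrt(cov[1, 1]), chi2red

# slope * 365.25 * 1000 converts mag/day to the mmag/yr units of Table 2
```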
§.§ Swift Data

Swift/UVOT data were obtained directly from the HEASARC archive[<http://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/swift.pl/>. The same processed UVOT data are also available through the MAST archive at STScI, <http://archive.stsci.edu>.]. We then used the HEASARC FTOOLS software[<http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/>] program UVOTSOURCE on the transformed sky images to generate point-source photometry. The Swift/UVOT photometry includes a filter-dependent correction that accounts for the decline in instrument sensitivity <cit.>. However, the potential fading of KIC 8462852 pushes the boundaries of the UVOT calibration, which is specified to within 1%. To check for any residual sensitivity changes, we made photometric measurements of four additional field stars in the UVOT field. The reference stars selected are KIC 8462934, KIC 8462763, KIC 8462843, and KIC 8462736, which all lie 14 to 20 from KIC 8462852. Since these reference stars are all fainter than KIC 8462852 by 2-5 magnitudes in the UV and have larger photometric errors individually, we took their weighted average as our photometric reference to minimize the noise. Their linear fits indicate that the average reference star appears to get brighter in the UVOT data over the full range of our time coverage, especially when the post-gap data are considered (-22.2 ± 5.2 mmag yr^-1). We examined the reference stars individually and found that all four of them follow similar and consistent brightening rates, eliminating the possibility of "bad" star contamination. This suggests a small residual instrumental trend in the UVOT data, which could reflect a small overestimate of the sensitivity loss or a small residual in the coincidence loss correction. To remove the instrumental trend of Swift/UVOT, we subtract the normalized magnitude of the average reference star from the absolutely calibrated magnitude of KIC 8462852. This inevitably propagates the photometric uncertainties of the average reference star into the KIC 8462852 light curve. In all following discussions, we use only the corrected calibration data of Swift/UVOT. All the Swift photometry is given in Table <ref> and is displayed in Figure <ref>.
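A sketch of the comparison-star correction just described (the function, array shapes, and normalization by the median are our assumptions):

```python
# Subtract the normalized weighted-average reference light curve from the
# calibrated target magnitudes, propagating its uncertainty in quadrature.
import numpy as np

def detrend(target_mag, target_sig, ref_mag, ref_sig):
    """ref_mag, ref_sig: (n_epochs, 4) arrays for the four reference stars."""
    w = 1.0 / ref_sig**2
    avg = np.sum(w * ref_mag, axis=1) / np.sum(w, axis=1)   # weighted average
    avg_sig = 1.0 / np.sqrt(np.sum(w, axis=1))              # its uncertainty
    corrected = target_mag - (avg - np.median(avg))         # normalized subtraction
    return corrected, np.sqrt(target_sig**2 + avg_sig**2)
```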
§.§ Ground-based Data

The Swift data suggest that the star faded during our observations, and such behavior would be consistent with the long-term secular fading of KIC 8462852 observed previously <cit.>. To probe this behavior further, we turn to the ground-based BVR photometry obtained at AstroLAB IRIS and available from AAVSO. We did not detect any major dips in the flux from the monitoring up to December 2016, but our measurements do indicate a slight long-term secular dimming. We used differential photometry relative to four stars in the field, selected to be similar in brightness and color to KIC 8462852 (see Table <ref>). These observations were obtained simultaneously with those of KIC 8462852, typically in a series in V followed by B and then R. The reductions utilized the LesvePhotometry reduction package <cit.>, which is optimized for time-series observations of variable stars. It automates reduction of the data in a series of observations, providing a homogeneous database of photometry for KIC 8462852. Errors from photon noise on the source, scintillation noise, and background noise are determined for each observation, using the methodology in <cit.>. We eliminated data from a single night when the source was at large (> 2) airmass. We also eliminated data from nights with larger than normal estimated errors (> 0.025 magnitudes rms) for KIC 8462852, as identified either by the reduction package or by large rms errors for the data obtained within that night. The remaining photometry in the B, V, and R bands is displayed in Figure <ref>. The scatter is somewhat larger than implied by the internal error estimates; therefore, we will base our error estimates on the scatter. Linear fits show a dimming in all three colors over the full data set, and probable dimming, but not at significant levels, pre-gap (see Table 2). No significant dimming is seen post-gap. In estimating the uncertainties of these slopes, we found that the reduced χ^2 using the reported internal errors was large (2 to 4 depending on the band), consistent with our finding that the rms scatter of the measurements is larger than the reported errors. We brought the reduced χ^2 of the fit to ∼ 1 by adding an additional error of 0.01 mag in quadrature to each measurement.

Table 3. Comparison stars for BVR photometry

ID            RA (2000)     DEC (2000)    V      B-V   V-R
KIC 8462852   20 06 15.46   +44 27 24.6   11.86  0.54  0.40
Star 1        20 07 09.07   +44 20 17.1   11.59  0.54  0.40
Star 2        20 06 01.24   +44 29 32.4   12.42  0.79  0.51
Star 3        20 06 21.21   +44 30 52.2   12.81  0.51  0.40
Star 4        20 06 48.09   +44 22 48.1   11.26  0.47  0.35

There are two issues with the linear fits. The first is that, given that there is no evidence for fading in the data after the interruption due to solar viewing constraints (see Figure <ref>), the fits tend to be high toward the beginning and low toward the end of the post-gap sequence. This is particularly prominent in the v and V fits, as shown in Figure <ref>. The linear fits are meant as the simplest way to quantify the dimming that would include all the data in each band, but they appear not to be exactly the correct dependence. The second issue is that, for the ground-based data, our discussion above does not include the possibility of systematic errors affecting the comparison pre- and post-gap, effects that are not included in the LesvePhotometry package.
(We have already eliminated such errors for the Swift/UVOT data and, as discussed below, they should be negligible for the Spitzer measurements.) To test for such effects, we turned to the photometry of the four reference stars to estimate night-to-night and longer-term errors, and to select the nights with the most consistent observations to see if the dimming was apparent just using this subset of best measurements. Since these further steps made no reference to the photometry of KIC 8462852, they should introduce no bias in its measurements. We first computed the standard deviations of running sets of 45 measurements for each of these four stars. When this value exceeded an average of 0.015 mag per star, we investigated the photometry involved and eliminated nights contributing disproportionately to the value. Following this step, we examined the consistency of the remaining measurements of the reference stars. Since we do not know the “true” magnitudes of the stars, we instead tested for the consistency of the measurements of each star across the gap in time coverage. To establish a baseline, we identified two long consecutive sets of measurements for each star and each color, one on each side of the gap, that agreed well. We then tested each of the additional nights of data to see if they were consistent with this baseline, and added in the data for the nights that did not degrade the agreement across the gap. We carried out this procedure individually for each of the three colors, but found that the same nights were identified as having the highest quality photometry in each case. The final typical mismatch in photometry across the gap was 0.0032 magnitudes, showing that this vetting was effective in identifying nights with consistent results for the four reference stars. The photometry of KIC 8462852 on these nights is listed in Table <ref> and plotted in Figure <ref>. The photometry selected to be of highest internal consistency is generally consistent with the rest of the measurements. The final averages for KIC 8462852 just based on these nights before and after the gap in the time series are shown in Table <ref>. Each band shows a small but statistically significant dimming from pre-gap to post-gap, both in the initial photometry (Table 2) and in that selected to be of highest quality. There is a hint of fading in the pre-gap data, but averaged over the three bands the net change is 0.010 ± 0.005 magnitudes, i.e., small and potentially insignificant. The post-gap data indicate no significant long-term secular changes beyond the errors of ∼ 0.003 magnitudes in any of the bands.
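To illustrate the first stage of this vetting, here is a minimal sketch of the running-scatter screen; the data layout, a pandas DataFrame with columns 'night', 'star', and 'mag', is our own illustrative assumption, not the paper's actual pipeline, and the cross-gap baseline comparison would follow as a second pass:

```python
import numpy as np
import pandas as pd

def flag_unstable_nights(ref, window=45, thresh=0.015):
    """Flag nights whose comparison-star photometry inflates the scatter.

    For each star, compute the standard deviation of running sets of
    `window` measurements; where it exceeds `thresh` mag (here applied per
    star, rather than as the per-star average used in the text), attribute
    the excess to the most deviant night in that window.
    """
    bad_nights = set()
    for star, grp in ref.sort_values("night").groupby("star"):
        run_sd = grp["mag"].rolling(window).std()
        for i in np.flatnonzero(run_sd.to_numpy() > thresh):
            win = grp.iloc[max(0, i - window + 1): i + 1]
            worst = (win["mag"] - win["mag"].mean()).abs().idxmax()
            bad_nights.add(win.loc[worst, "night"])
    return bad_nights
```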
These results suggest that most of the change in brightness occurred while the star was in the gap for the ground-based photometry. This behavior would be consistent with that observed for long-term dimming using Kepler data, where most of the change is a drop in brightness by ∼ 2% over a period of 300 days <cit.>. The gap in our data is intriguingly about 1400 days (∼ twice the interval between the two large dips in the light curve) past the time of the similar dimming seen in the Kepler data. The amplitude in the V and R bands (which together approximate the Kepler spectral response) is about 1.4 ± 0.3% (Table 5), and the duration of the gap in our highest quality BVR data is ∼ 150 days, values that are also reminiscent of the Kepler observations.

Table 4. Selected High-Quality BVR photometry

JD          B (mag)  err^a  V (mag)  err^a  R (mag)  err^a
2457328.25  12.379   0.009  11.846   0.009  11.455   0.009
2457365.29  12.373   0.009  11.841   0.009  11.453   0.009
2457366.23  12.363   0.009  11.837   0.009  11.448   0.009
2457380.25  12.380   0.009  11.844   0.009  11.453   0.009
2457386.26  12.372   0.009  11.850   0.009  11.453   0.009
2457395.26  12.372   0.010  11.850   0.010  11.451   0.010
2457396.25  12.378   0.009  11.849   0.009  11.462   0.009
2457631.38  12.396   0.003  11.860   0.003  11.467   0.003
2457640.31  12.381   0.013  11.851   0.013  11.460   0.013
2457652.30  12.372   0.013  11.848   0.013  11.459   0.013
2457658.29  12.396   0.013  11.860   0.013  11.461   0.013
2457660.37  12.392   0.011  11.856   0.011  11.460   0.011
2457664.36  12.394   0.013  11.850   0.013  11.458   0.013

^a Combined rms errors of the mean, i.e., rms scatter divided by the square root of (n-1), where n is the number of measurements.

Table 5. Summary of ground-based monitoring of KIC 8462852

MJD Range      ⟨B⟩^a   err(B)^b  ⟨V⟩^a   err(V)^b  ⟨R⟩^a   err(R)^b
57322 - 57396  12.374  0.0032    11.845  0.0032    11.453  0.0032
57549 - 57693  12.395  0.0032    11.859  0.0032    11.466  0.0032
Differences     0.021  0.0045     0.014  0.0045     0.013  0.0045

^a Average magnitude. ^b Combined rms errors of the mean, i.e., rms scatter divided by the square root of (n-1), where n is the number of measurements.

§.§ Spitzer Data

For the purpose of probing for a long-term secular trend, we do not consider the earlier Spitzer photometry <cit.>. That observation was made under the SpiKeS program (Program ID 10067, PI M. Werner) in January 2015, too far from the epochs of our new data. In addition, the SpiKeS observation was executed with an AOR design different from ours for the dedicated monitoring of KIC 8462852, which may lead to different instrumental systematics in the photometry. The photometry we did use is from AORs 58782208, 58781696, 58781184, 58780928, 58780672, 58780416, and 58780160 (PID 11093, PI K. Y. L. Su) and 58564096 and 58564352 (PID 12124, PI Huan Meng). Photometric measurements were made on cBCD (artifact-corrected basic calibrated data) images with an aperture radius of 3 pixels and sky annulus inner and outer radii of 3 and 7 pixels, with the pixel phase effect and array location dependent response functions corrected. Aperture correction factors are 0.12856 and 0.12556 magnitudes at 3.6 and 4.5 μm, respectively <cit.>. Individual measurements were averaged for each epoch.
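A minimal sketch of aperture photometry with this geometry, using the photutils package (the zero point and the sign convention of the aperture correction are placeholders, not IRAC calibration values):

```python
import numpy as np
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                aperture_photometry)

def irac_aperture_mag(image, xy, zeropoint, ap_corr_mag):
    """Sky-subtracted aperture photometry: a 3-pixel source aperture and a
    3-7 pixel sky annulus, as quoted in the text, followed by a tabulated
    aperture correction in magnitudes (sign convention assumed)."""
    src = CircularAperture(xy, r=3.0)
    sky = CircularAnnulus(xy, r_in=3.0, r_out=7.0)
    phot = aperture_photometry(image, [src, sky])
    sky_per_pix = phot["aperture_sum_1"][0] / sky.area   # mean sky level
    net = phot["aperture_sum_0"][0] - sky_per_pix * src.area
    return zeropoint - 2.5 * np.log10(net) - ap_corr_mag
```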
The photometry is summarized in Table <ref>. During the period when the KIC 8462852 data were obtained, the photometric performance of IRAC is expected to have varied by less than 0.1% per year (i.e., < 1 mmag yr^-1) <cit.>. There is a suggestion of a fading at the ∼ 3 σ level of significance in the 3.6 μm band; there is also a fading in the 4.5 μm band at lower significance.

Table 6. Spitzer photometry

MJD        [3.6] (mag)  err     [4.5] (mag)  err
57040.367  10.4627      0.0022  10.4243      0.0029
57403.587  10.4510      0.0012  10.4334      0.0015
57453.618  10.4551      0.0012  10.4346      0.0016
57606.748  10.4537      0.0012  10.4375      0.0016
57621.036  10.4554      0.0012  10.4392      0.0016
57635.443  10.4579      0.0012  10.4324      0.0017
57648.371  10.4543      0.0012  10.4345      0.0016
57672.538  10.4559      0.0012  10.4382      0.0016
57676.320  10.4549      0.0012  10.4383      0.0016
57690.430  10.4580      0.0012  10.4377      0.0016
57704.534  10.4574      0.0012  10.4363      0.0016
57719.375  10.4604      0.0012  10.4357      0.0016
57732.497  10.4514      0.0012  10.4392      0.0016
57756.319  10.4486      0.0012  10.4344      0.0015
57813.968  10.4511      0.0012  10.4388      0.0016

§.§ Color-dependence of the dimming

The three independent sets of observations presented in this paper all show evidence of dimming in KIC 8462852. Measurements reported by the All-Sky Automated Survey for Supernovae (ASAS-SN) <cit.> do not completely overlap with ours, but a preliminary analysis shows a fading of about 8 mmag at V, comparing their data between MJD 57200 and 57334 with that between MJD 57550 and 57740; their measurements may indicate a further small fading after that. The ASAS-SN photometry has not been tested thoroughly for systematic errors at this small level (K. Stanek, private communication, and see the warning at <https://asas-sn.osu.edu/>); nonetheless, the results agree with ours within the mutual errors. The dimming is also corroborated in measurements by <cit.>. However, we find that the amount of dimming is significantly less in the infrared than in the optical and ultraviolet, as shown in Figure <ref>. We will investigate in Section 4 what constraints the wavelength dependence lets us put on this event. To do so, we need to focus on periods when measures are available in all the relevant bands (see Figure <ref>), since otherwise wavelength-independent and -dependent brightness changes are degenerate. The first sequence of Swift measurements extends well beyond the end date of the first sequence of BVR measurements. For the purposes of Section 4, we compute the dimming using the average of the measurements through MJD 57396. For the ground-based measurements, we take only the measurements on nights that passed our test for consistency of the comparison star measurements. We average all the data from such nights pre-gap and, separately, post-gap, and base the errors on the rms scatter of the individual measurements. For Spitzer, there is only a single measurement close in time to the first ground-based sequence, namely at JD 2457404. To analyze these data, we first confirm the errors by computing the scatter in both bands (combined). This calculation indicates an error of 0.002 mag, slightly larger than the quoted errors of 0.0012-0.0016. Using our more conservative error estimate, the change from the first point to the average of the post-gap ones at 3.6 μm is 0.0049 ± 0.0021 mag, and at 4.5 μm it is 0.0033 ± 0.0021 mag, or an average of 0.0041 ± 0.0015.
We compare this value with the change between the average of the two sets obtained from the ground closest in time to the first Spitzer one, namely at MJD 57395 and 57396 (both of which passed our tests for high quality data), versus the later (post-gap) ground-based measurements. We averaged the measurements in B and V together into a single higher-weight point for these two nights and then compared with the similar average of the post-gap measurements. The net change is 0.012 ± 0.0023 magnitudes, i.e., significantly larger than the change in the infrared. Table 7 summarizes the measurements we will use to examine the color-dependence of the dimming of KIC 8462852. It emphasizes conservative error estimation (including systematic errors) and homogeneous data across the three observatories, at the cost of nominal signal-to-noise. The dimming is apparent, at varying levels of statistical significance, in every band. The values in the table also agree with the slopes we computed previously (noting that the time interval for the differences is about 74% of a year); the slope-based values are shown in the table for the cases with relatively high statistical weight, so that comparisons are meaningful.

Table 7. Color dependence of the dimming ("..." = no value)

Band   Wavelength (μm)  Interval  Magnitude^a  Dimming  Error^b  Telescope
uvw2   0.203            pre-gap   14.809       ...      ...      Swift
uvw2   0.203            post-gap  14.826       0.017    0.018    Swift
uvm2   0.223            pre-gap   14.808       ...      ...      Swift
uvm2   0.223            post-gap  14.810       0.002    0.037    Swift
uvw1   0.263            pre-gap   13.635       ...      ...      Swift
uvw1   0.263            post-gap  13.649       0.014    0.011    Swift
u      0.346            pre-gap   12.575       ...      ...      Swift
u      0.346            post-gap  12.595       0.020    0.006    Swift
u      0.346            ...       ...          0.0316   0.0132   from slope calculation
B      0.435            pre-gap   12.374       ...      ...      AstroLAB-IRIS
B      0.435            post-gap  12.395       0.021    0.0045   AstroLAB-IRIS
B      0.435            ...       ...          0.0195   0.0015   from slope calculation
v      0.547            pre-gap   11.894       ...      ...      Swift
v      0.547            post-gap  11.904       0.010    0.006    Swift
V      0.548            pre-gap   11.845       ...      ...      AstroLAB-IRIS
V      0.548            post-gap  11.859       0.014    0.0045   AstroLAB-IRIS
V      0.548            ...       ...          0.0160   0.0015   from slope calculation
R      0.635            pre-gap   11.453       ...      ...      AstroLAB-IRIS
R      0.635            post-gap  11.466       0.013    0.0045   AstroLAB-IRIS
R      0.635            ...       ...          0.0097   0.001    from slope calculation
[3.6]  3.6              pre-gap   10.4510      ...      ...      IRAC
[3.6]  3.6              post-gap  10.4559      0.0049   0.0021   IRAC
[4.5]  4.5              pre-gap   10.4334      ...      ...      IRAC
[4.5]  4.5              post-gap  10.4367      0.0033   0.0021   IRAC

^a Average magnitude. ^b Combined rms errors of the mean, i.e., rms scatter divided by the square root of (n-1), where n is the number of measurements.

§ DISCUSSION

§.§ Mass Limit of Circumstellar Dust

Many of the hypotheses to explain the variability of KIC 8462852 depend on the presence of a substantial amount of circumstellar material. Excess emission from circumstellar dust is therefore an interesting diagnostic. Such an excess has not been found at a significant level <cit.>. However, previous searches have used data from 2MASS <cit.>, GALEX <cit.>, warm Spitzer <cit.>, WISE <cit.>, and new optical observations, which were taken more than 10 years apart, to constrain the stellar atmospheric models that are used to look for an excess. The variability of the star, particularly if it has been fading for years <cit.>, could undermine the searches for excesses. Now we can test whether KIC 8462852 has had a significant excess at 3.6 and 4.5 μm by fitting stellar models to the Swift data taken at specific epochs and comparing the model-predicted stellar IR flux with the simultaneous Spitzer measurements. By October 2016, there had been five epochs at which we have Swift and Spitzer observations taken within 24 h: MJD 57403, 57454, 57621, 57635, and 57673.
The first two are before March 2016, whereas the last three epochs were observed after the gap. To analyze these results, we adopt the ATLAS9 model <cit.> with the stellar parameters obtained from spectroscopic observations, log(g) = 4.0 and [M/H] = 0.0. We allow T_eff to vary between 6700 and 7300 K with an increment of 100 K. We find that the stellar models with T_eff = 7000 and 7100 K provide the best fits to the Swift photometry at all five epochs, with a minimum reduced χ^2 from 0.08 to 1.5 (see Figure 2). We did not use the Spitzer/IRAC measurements to further constrain the fit, since we did not want to bias any evidence for an infrared excess. Although these temperatures are slightly higher than originally estimated by <cit.>, they are in reasonable agreement with the SED model based on the 2MASS photometry <cit.> and the recent IRTF/SpeX spectrum leading to a classification of F1V-F2V, i.e., ∼ 6970 K <cit.>. All of these values can be somewhat degenerate with changes in the assumed log g and metallicity. For our purposes, however, having a good empirical fit into the ultraviolet allows placing constraints on the extinction. All five best-fit stellar models, one for each epoch, have A_V in the range of 0.68 to 0.78, 2.0 to 2.3 times higher than the A_V = 0.341 found by <cit.> with photometric measurements taken years apart. In the Spitzer/IRAC wavebands at 3.6 and 4.5 μm, the observed flux densities match the model-predicted stellar output fairly well at all five epochs. The average excess is -0.39 ± 0.30 mJy at 3.6 μm and -0.29 ± 0.21 mJy at 4.5 μm. We conclude that we do not detect any significant excess from KIC 8462852 with near-simultaneous Swift and Spitzer observations. Our conclusion is consistent with that of <cit.>, showing that it is independent of the uncertainties in fitting the stellar SED. We have tested these conclusions using the average post-gap B, V, R, [3.6], and [4.5] measurements (Table 7) rather than the UVOT UV ones, standard stellar colors <cit.>, and a standard extinction law <cit.>. The best fit was obtained assuming the star is of F1V spectral type (nominal temperature of 7030 K), with A_V = 0.61. Given the uncertainties in the extinction law, the stellar models, and the intrinsic stellar colors (e.g., the effects of metallicity), this agreement is excellent. The assigned extinction level also agrees roughly with the relatively red color of the star relative to an F2V comparison star in infrared spectra (C. M. Lisse, private communication). The conclusion about the absence of any infrared excess is unmodified by this calculation. Upper limits to the level of circumstellar dust were also determined at 850 μm with JCMT/SCUBA-2 <cit.>, and from WISE at 12 and 22 μm, in all cases where the stellar variations are relatively unimportant. Assuming that the grains emit as blackbodies and are distributed in a narrow, optically thin ring at various radii from the star, we place upper limits of ∼ 4 × 10^-4 on the fractional luminosity, L_dust/L_*, for warm rings of radii between 0.1 and 10 AU, and an order of magnitude higher for cold rings lying between 40 and 100 AU, which would be a typical cold-ring size for a star of this luminosity (see also <cit.>). These limits are consistent with the presence of a prominent debris disk, since even around young stars these systems usually have L_dust/L_* ≲ 10^-3 <cit.>.
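For orientation, the following is a minimal sketch of a blackbody-ring fractional-luminosity limit of this kind; the distance is an illustrative assumption (it is not quoted here), and the actual calculation behind these limits may differ in detail:

```python
import numpy as np

# constants (SI)
h, c, kB, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8
Lsun, pc = 3.828e26, 3.086e16

def bnu(nu, T):
    """Planck function B_nu [W m^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (kB * T))

def fdust_limit(flux_jy, wav_um, r_au, lstar=5.0, d_pc=400.0):
    """Upper limit on L_dust/L_* for a blackbody ring at r_au, given a flux
    density limit flux_jy at wav_um. d_pc is an assumed distance."""
    Td = 278.3 * lstar**0.25 / np.sqrt(r_au)   # blackbody equilibrium temp [K]
    nu = c / (wav_um * 1e-6)
    Fnu = flux_jy * 1e-26                      # Jy -> W m^-2 Hz^-1
    Fbol = Fnu * sigma * Td**4 / (np.pi * bnu(nu, Td))   # bolometric dust flux
    Fstar = lstar * Lsun / (4 * np.pi * (d_pc * pc) ** 2)
    return Fbol / Fstar

# e.g. a 4.76 mJy limit at 850 um for a cold ring at 100 AU gives ~2e-3,
# the order of the cold-ring limits quoted above
print(fdust_limit(4.76e-3, 850.0, 100.0))
```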
We carried out a second calculation to place upper limits on the possible dust masses. We again assumed that the dust is distributed in an optically thin ring (0.1 AU wide in the ring plane). We took the optical constants derived for debris disk material <cit.> and used the Debris Disk Radiative Transfer Simulator[<http://www1.astrophysik.uni-kiel.de/dds/>] <cit.>. The minimum grain radius was set to 1 μm, roughly the blowout size for spherical particles around an F1/2 V star, and the power-law index of the particle size distribution was taken as 3.65 <cit.>. We take the upper limit at 850 μm to be 4.76 mJy, or 5.6σ above zero, i.e., 3σ added to the 2.6σ “signal” at the position of the star <cit.>. Similarly, we take a 3σ upper limit of 0.63 mJy at 4.5 μm. We took the cataloged upper limits for the two WISE bands, which are computed in a similar way but at a 2σ level. The stellar parameters are assumed to be T_eff = 7000 K and L_* = 5 L_⊙ <cit.>. The resulting limits are shown in Figure <ref>. The upper limit at 100 AU is slightly higher than the mass of 0.017 M_⊕ for a similar range of dust sizes (i.e., up to 2 mm) in the Fomalhaut debris ring <cit.>. Again, a prominent but not extraordinary debris disk is allowed, corresponding for example to 2-3 orders of magnitude more dust than orbits the sun. The upper limits also permit sufficient dust mass to yield significant extinction. To demonstrate, we assume that the dust is in a ring at radius R with a thickness perpendicular to the orbital plane of 0.1 R; for simplicity we take an ISM-like dust particle size distribution with constant density within this ring. The resulting upper limit on the extinction is A_V ∼ 0.1 <cit.>. This value is independent of the radius assumed for the ring since, as shown in Figure 3, the upper limits on the mass scale roughly as R^2, which is also the scaling of the ring area under our assumptions. Of course, this is only a rough estimate, but it is sufficient to demonstrate that detectable levels of extinction can be consistent with the upper limits on the thermal emission of any material surrounding the star. That is, current measurements allow enough material to orbit KIC 8462852 to account for a number of the hypotheses for its behavior, such as the inspiralling and disintegration of massive comets. The minimum mass required to account for the long-term secular dimming through extinction is also within these mass constraints.

§.§ Extinction Curve

We now explore the hypothesis that the long-term secular dimming of KIC 8462852 is due to variable extinction by dust in the line of sight. The absence of excess emission at 3.6 and 4.5 μm means that the photometry at these wavelengths is a measure of the stellar photospheric emission. Under the assumption that the fading of the star indicated in Table <ref> is due to dust passing in front of the star, the relative amounts of dimming at the different wavelengths can therefore be used to constrain the wavelength dependence of the extinction from the UV to 4.5 μm in the IR. Under this hypothesis, the dimming of KIC 8462852 may arise either from the interstellar medium (ISM) or from circumstellar material. For convenience, we describe the color of the fading in the terminology of interstellar extinction, although circumstellar material might have different behavior if the color were measured to high accuracy. The Galactic ISM extinction curve from 0.1 to 3 μm can be well characterized by only one free parameter, the total-to-selective extinction ratio, defined as R_V = A_V / E(B - V) <cit.>.
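For reference, a minimal sketch of this one-parameter curve, including the power-law extrapolation to the IRAC bands adopted below; the far-UV correction terms of the parameterization are omitted since all bands monitored here have x = 1/λ < 5.9 μm^-1:

```python
import numpy as np

def ccm89_alav(wav_um, rv=3.1):
    """A_lambda / A_V = a(x) + b(x)/R_V in the CCM89 parameterization.

    The x < 0.3 power-law extrapolation is applied to the IRAC bands, as
    done in the text; the 3.3-5.9 UV branch is used without the far-UV
    correction terms."""
    x = 1.0 / np.atleast_1d(wav_um)   # inverse microns
    a, b = np.empty_like(x), np.empty_like(x)

    ir = x < 1.1                      # infrared (extrapolated below x = 0.3)
    a[ir] = 0.574 * x[ir] ** 1.61
    b[ir] = -0.527 * x[ir] ** 1.61

    opt = (x >= 1.1) & (x < 3.3)      # optical / near-IR
    y = x[opt] - 1.82
    a[opt] = (1 + 0.17699 * y - 0.50447 * y**2 - 0.02427 * y**3
              + 0.72085 * y**4 + 0.01979 * y**5 - 0.77530 * y**6
              + 0.32999 * y**7)
    b[opt] = (1.41338 * y + 2.28305 * y**2 + 1.07233 * y**3 - 5.38434 * y**4
              - 0.62251 * y**5 + 5.30260 * y**6 - 2.09002 * y**7)

    uv = (x >= 3.3) & (x <= 5.9)      # ultraviolet, no far-UV correction
    a[uv] = 1.752 - 0.316 * x[uv] - 0.104 / ((x[uv] - 4.67) ** 2 + 0.341)
    b[uv] = -3.090 + 1.825 * x[uv] + 1.206 / ((x[uv] - 4.62) ** 2 + 0.263)

    return a + b / rv

# predicted dimming in each band relative to V, for two candidate R_V values
bands = {"uvw2": 0.203, "uvw1": 0.263, "u": 0.346, "B": 0.435,
         "V": 0.548, "R": 0.635, "[3.6]": 3.55, "[4.5]": 4.49}
for rv in (3.1, 5.0):
    ratios = ccm89_alav(np.array(list(bands.values())), rv)
    print(rv, dict(zip(bands, np.round(ratios, 3))))
```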
Longward of 3 μm, measurements towards the diffuse ISM in the Galactic plane <cit.>, towards the Galactic center <cit.>, and towards dense molecular clouds in nearby star-forming regions <cit.> reveal a consistent shallow wavelength dependence of the ISM extinction in the Spitzer/IRAC bands. We find that extrapolating the analytical formula in <cit.> (CCM89 hereafter) to the 3.6 and 4.5 μm bands of IRAC yields A_[3.6]/A_K_s from 0.42 to 0.54 and A_[4.5]/A_K_s from 0.28 to 0.37 for R_V values from 2.5 to 5.0, a range suitable for most sight lines in the Milky Way. Although the CCM89 extinction law does not claim to apply to these wavelengths, the 3.6 and 4.5 μm extrapolations are in good agreement with the IRAC observations <cit.>. Therefore, for simplicity we adopt the CCM89 extinction law for all seven bands monitored. We have fitted the wavelength-dependent dimming in Table <ref> with extinction curves using the formulation in CCM89, parameterized by R_V and extended to 4.5 μm. Figure 4 shows the results. Because of the relatively small level of dimming in the ultraviolet, the best-fitting extinction curves are relatively 'gray', i.e., have large values of R_V. The vertical dashed lines show confidence levels corresponding to 1, 2, and 3 σ. Values of R_V ∼ 5 are favored; the general value for the ISM, R_V = 3.1, is disfavored at a confidence level > 90%. Extinction even more gray than given by R_V = 5 (or a form differing more fundamentally from the interstellar law) is a definite possibility. However, a completely neutral extinction law is excluded because of the small variations at [3.6] and [4.5]. So far we have conducted simple fits via χ^2 minimization to the fading and extinction curves individually. However, they are interrelated. Therefore, we now fit them simultaneously using the Feldman and Cousins method (F&C; Feldman & Cousins 1998; Sanchez et al. 2003). The method differs from a regular χ^2 minimization by allowing the setting of physical boundaries on the fitting parameters (e.g., 2 < R_V < 6), and by adjusting the χ^2 statistics accordingly, via a Monte Carlo approach. Figure 5 shows the resulting confidence intervals. Those for χ^2 minimization agree excellently with the simple single-parameter results in Figure 4. The F&C formalism suggests a similar conclusion, i.e., R_V > 3.1, but at somewhat higher confidence, > 98%. The derived best-fit dimming rate also agrees with the simple χ^2 analysis. We also tried to fit the pre- and post-gap data separately. The result was not successful: the F&C method fails to constrain reasonable values of R_V and dA_V/dt. This suggests that there is no measurable dimming in the pre- and post-gap datasets separately (i.e., the dimming happened during the gap), which is also in agreement with the findings from the simple χ^2 minimization.

§ CONCLUSIONS

This paper continues the study of the long-term secular dimming of KIC 8462852, such as that seen during the Kepler mission <cit.>. We have observed a second dimming occurrence, similar to that seen with Kepler. Our data extend from the UV (0.20 μm) to the mid-infrared (4.5 μm), allowing us to determine the spectral character of this event. The dimming is less in the infrared than in the visible and UV, showing that the responsible bodies must be small, no more than a few microns in size. We analyze the colors under the assumption that the dimming is due to extinction by intervening dust. We find that the colors are likely to be more neutral than the reddening by typical sight lines in the ISM (confidence level > 90%).
That is, the dust responsible for the dimming differs from that along typical sight lines in the ISM, suggesting that the dust is not of normal interstellar origin. The discovery of a dimming pattern similar to that seen with Kepler roughly 1400 days previously <cit.> is challenging to reconcile with the hypothesis that these events result from dust produced during the assimilation of a planet <cit.>. The long-term secular dimming could correspond to some dusty structure in the Oort Cloud of the Sun with a column density gradient on an ∼1 AU scale. The high ecliptic latitude of KIC 8462852 (β = +62.2°) is not necessarily a problem for this hypothesis, as the Oort cloud should be nearly isotropic <cit.>. The primary difficulty with this picture is that the orbital timescale of any Oort cloud dust concentrations is 10^5 to 10^7 yr. Over the 8-year-long time line from the beginning of Kepler to our latest observations, the astrometric movement of such a structure should be dominated by the Earth's parallactic motion, and thus most of the observed light curve features should recur relatively accurately on a yearly basis <cit.>. We conclude that extinction by some form of circumstellar material is the most likely explanation for the long-term secular dimming.

§ ACKNOWLEDGEMENTS

This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech. GMK is supported by the Royal Society as a Royal Society University Research Fellow. We acknowledge with thanks the variable star observations from the AAVSO International Database and the infrastructure maintained by AAVSO, which were used in this research. We thank Professor Jason Wright for his contributions in acquiring the data. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication also makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.

Facilities: AAVSO, Spitzer (IRAC), Swift (UVOT, XRT)

[Ansdell et al.(2016)]ansdell2016 Ansdell, M., Gaidos, E., Rappaport, S. A. 2016, , 816, 69 [Ballering et al.(2016)]ballering2016 Ballering, N. P., Su, K. Y. L., Rieke, G. H. & Gáspár, A. 2016, , 823, 108 [Ballesteros et al.(2017)]ballesteros2017 Ballesteros, F. J., Arnalte-Mus, P., Fernández-Soto, A., & Martínez, V. I. 2017, arXiv:1705.08427 [Bodman & Quillen(2016)]bodman2016 Bodman, E. H. L. & Quillen, A. 2016, , 819, L34 [Boley et al.(2012)]boley2012 Boley, A. C. et al. 2012, , 750, L21 [Borucki et al.(2010)]borucki2010 Borucki, W. J., Koch, D., Basri, G., et al. 2010, Sci, 327, 977 [Boyajian et al.(2016)]boyajian2016 Boyajian, T. S., LaCourse, D. M., Rappaport, S. A. et al. 2016, , 457, 3988 [Boyajian et al.(2017)]boyajian2017 Boyajian, T., Croft, S., Wright, J. et al. 2017, The Astronomer's Telegram, #10406 [Bozhinova et al.(2016)]bozhinova2016 Bozhinova, I., Scholz, A. & Eislöffel, J. 2016, , 458, 3118 [Breeveld et al.(2010)]breeveld2010 Breeveld, A. A., Curran, P.
A., Hoversten, E. A. et al. 2010, , 406, 1687 [Breeveld et al.(2011)]breeveld2011 Breeveld, A. A., Landsman, W., Holland, S. T. et al. 2011, in AIP Conf. Ser. 1358, Gamma Ray Bursts 2010, ed. J. E. McEnery, J. L. Racusin, & N. Gehrels (Melville, NY: AIP), 373 [Burrows et al.(2005)]burrows2005 Burrows, D. N., Hill, J. E., Nousek, J. A. et al. 2005, , 120, 165 [Cardelli et al.(1989)]cardelli1989 Cardelli, J. A., Clayton, G. C. & Mathis, J. S. 1989, , 345, 245 [Carey et al.(2012)]carey2012 Carey, S., Ingalls, J., Hora, J. et al. 2012, , 8442, 84421Z [Castelli & Kurucz(2004)]castelli2004 Castelli, F. & Kurucz, R. L. 2004, arXiv:astro-ph/0405087 [Chapman et al.(2009)]chapman2009 Chapman, N. L., Mundy, L. G., Lai, S.-P. & Evans, N. J., II 2009, , 690, 496 [de Ponthière(2013)]deponthiere2013 de Ponthière, P. 2013, Software Programs for Variable Star Observers, <http://www.dppobservatory.net/astroprograms/software4vsobservers.php> [Dones et al.(2015)]dones2015 Dones, L., Brasser, R., Kaib, N. & Rickman, H. 2015, , 197, 191 [Evans et al.(2009)]evans2009 Evans, P. A., Beardmore, A. P., Page, K. L., et al. 2009, , 397, 1177 [Fazio et al.(2004)]fazio2004 Fazio, G. G., Hora, J. L., Allen, L. E. et al. 2004, , 154, 10 [Feldman & Cousins(1998)]feldman1998 Feldman, G. J., & Cousins, R. D. 1998, Phys. Rev. D, 57, 3873 [Foukal(2017)]foukal2017 Foukal, Peter 2017, , 842, 3 [Fritz et al.(2011)]fritz2011 Fritz, T. K., Gillessen, S., Dodds-Eden, K. et al. 2011, , 737, 73 [Gaia(2017)]gaia2017 <http://gea.esac.esa.int/archive/> [Gary(2017)]gary2017 Gary, B. L. 2017, <http://www.brucegary.net/KIV846/> [Gáspár et al.(2012)]gaspar2012 Gáspár, A., Psaltis, D., Rieke, G. H. & Özel, F. 2012, , 754, 74 [Güver & Özel(2009)]guver2009 Güver, Tolga, & Özel, Feryal 2009, MNRAS, 400, 2050 [Henden(2009)]henden2009 Henden, A. 2009, Astrodon Photometrics Test Summary, <http://www.astrodon.com/uploads/3/4/9/0/34905502/astrodonphotometrcshendentestsummary.pdf> [Hippke et al.(2016a)]hippke2016a Hippke, M., Angerhausen, D., Lund, M. B., Pepper, J. & Stassun, K. G. 2016, , 825, 73 [Hippke et al.(2017)]hippke2017 Hippke, M., Kroll, P., Matthei, F. et al. 2017, , 837, 85 [IRSA(2015)]irsa2015 IRSA 2015, <http://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/> [Indebetouw et al.(2005)]indebetouw2005 Indebetouw, R., Mathis, J. S. & Babler, B. L. 2005, , 619, 931 [Katz(2017)]katz2017 Katz, J. I. 2017, arXiv:1705.08377 [Kenyon & Bromley(2008)]kenyon2008 Kenyon, S. J. & Bromley, B. C. 2008, , 179, 451 [Kochanek et al.(2017)]kochanek2017 Kochanek, C. S., Shappee, B. J., Stanek, K. Z. et al. 2017, arXiv:1706.07060 [Lisse et al.(2015)]lisse2015 Lisse, C. M., Sitko, M. L. & Marengo, M. 2015, , 815, L27 [Lund et al.(2016)]lund2016 Lund, M. B., Pepper, J., Stassun, K. G., Hippke, M. & Angerhausen, D. 2016, arXiv:1605.02760 [Makarov & Goldin(2016)]makarov2016 Makarov, V. V. & Goldin, A. 2016, , 833, 78 [Mamajek(2017)]mamajek2017 Mamajek, E. M. 2017, A Modern Mean Dwarf Stellar Color and Effective Temperature Sequence, <http://www.pas.rochester.edu/~emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt> [Marengo et al.(2015)]marengo2015 Marengo, M., Hulsebus, A. & Willis, S. 2015, , 814, L15 [Meng et al.(2014)]meng2014 Meng, H. Y. A., Su, K. Y. L., Rieke, G. H. et al. 2014, Science, 345, 1032 [Meng et al.(2015)]meng2015 Meng, H. Y. A., Su, K. Y. L., Rieke, G. H. et al. 2015, , 805, 77 [Metzger et al.(2017)]metzger2017 Metzger, B. D., Shen, K. J. & Stone, N. C.
2017, arXiv:1612.07332 [Montet & Simon(2016)]montet2016 Montet, B. T. & Simon, J. D. 2016, , 830, L39 [Morrissey et al.(2007)]morrissey2007 Morrissey, P., Conrow, T., Barlow, T. A. et al. 2007, , 173, 682 [Neslušan & Budaj(2017)]neslusan2017 Neslušan, L. & Budaj, J. 2017, A&A, 600, A86 [Newberry(1991)]newberry1991 Newberry, M. V. 1991, PASP, 103, 222 [Pecaut & Mamajek(2013)]pecaut2013 Pecaut, M. J. & Mamajek, E. E. 2013, , 208, 9 [Poole et al.(2008)]poole2008 Poole, T. S., Breeveld, A. A., Page, M. J. et al. 2008, , 383, 627 [Rappaport et al.(2012)]rappaport2012 Rappaport, S., Levine, A., Chiang, E. et al. 2012, , 752, 1 [Rappaport et al.(2014)]rappaport2014 Rappaport, S., Barclay, T., DeVore, J. et al. 2014, , 784, 40 [Reach et al.(2010)]reach2010 Reach, W. T., Vaubaillon, J., Lisse, C. M., Holloway, M., & Rho, J. 2010, Icarus, 208, 276 [Rebull et al.(2014)]rebull2014 Rebull, L. M., Cody, A. M., Covey, K. R. et al. 2014, , 148, 92 [Rieke & Lebofsky(1985)]rieke1985 Rieke, G. H., & Lebofsky, M. J. 1985, , 288, 618 [Roming et al.(2000)]roming2000 Roming, P. W., Townsley, L. K., Nousek, J. A., et al. 2000, , 4140, 76 [Roming et al.(2004)]roming2004 Roming, P. W. A., Hunsberger, S. D., Mason, K. O., et al. 2004, , 5165, 262 [Roming et al.(2005)]roming2005 Roming, P. W. A., Kennedy, T. E., Mason, K. O. et al. 2005, , 120, 95 [Sanchez et al.(2003)]2003PhRvD..68k3004S Sanchez, M., Allison, W. W., Alner, G. J., et al. 2003, , 68, 113004 [Sanchis-Ojeda et al.(2015)]sanchis-ojeda2015 Sanchis-Ojeda, R., Rappaport, S., Pallè, E. et al. 2015, , 812, 112 [Schaefer(2016)]schaefer2016 Schaefer, B. E. 2016, , 822, L34 [Shappee et al.(2014)]shappee2014 Shappee, B. J., Prieto, J. L., Grupe, D., et al. 2014, ApJ, 788, 48 [Sheikh et al.(2016)]sheikh2016 Sheikh, M. A., Weaver, R., & Dahmen, K. A. 2016, Phys. Rev. Lett., 117, 261101 [Schlecker(2016)]schlecker2016 Schlecker, Martin 2016, MSc thesis, Tech. Univ. Munich [Skrutskie et al.(2006)]skrutskie2006 Skrutskie, M. F., Cutri, R. M., Stiening, R. et al. 2006, , 131, 1163 [Stauffer et al.(2015)]stauffer2015 Stauffer, J., Cody, A. M., McGinnis, P. et al. 2015, , 149, 130 [Thompson et al.(2016)]thompson2016 Thompson, M. A., Scicluna, P., Kemper, F. et al. 2016, , 458, L39 [Wolf & Hillenbrand(2005)]wolf2005 Wolf, S. & Hillenbrand, L. A. 2005, CoPhC, 171, 208 [Wright et al.(2016)]wright2016b Wright, J. T., Cartier, K. M. S., Zao, M. et al. 2016, , 16, 17 [Wright & Sigurdsson(2016)]wright2016 Wright, J. T. & Sigurdsson, S. 2016, , 829, L3 [Wright et al.(2010)]wright2010 Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K. et al. 2010, , 140, 1868 [Wyatt et al.(2007)]wyatt2007 Wyatt, M. C., Smith, R., Greaves, J. S. et al.
2007, , 658, 569§ APPENDIX The following table provides the full set of UVOT photometry:ccccccccUVOT Photometry of KIC 8462852 WavebandMJDa raw σ correctedσ relative comparisonb σ magnitude magnitude magnitudemagnitude magnitude magnitudeuvw2 57317.518 14.820 0.050 14.776 0.089 0.045 0.074 uvw2 57317.521 14.800 0.030 14.793 0.055 0.007 0.046 uvw2 57317.552 14.790 0.030 14.787 0.047 0.003 0.036 uvw2 57317.554 14.810 0.030 14.784 0.043 0.026 0.030 uvw2 57317.584 14.780 0.030 14.755 0.058 0.025 0.050 uvw2 57317.587 14.820 0.030 14.735 0.061 0.085 0.053 uvw2 57357.014 14.790 0.030 14.772 0.049 0.018 0.039 uvw2 57357.017 14.830 0.040 14.828 0.061 0.003 0.046 uvw2 57360.465 14.790 0.040 14.802 0.063 -0.012 0.049 uvw2 57363.857 14.870 0.040 14.802 0.067 0.068 0.054 uvw2 57366.258 14.810 0.040 14.778 0.065 0.032 0.051 uvw2 57369.981 14.770 0.040 14.791 0.061 -0.021 0.046 uvw2 57369.983 14.760 0.040 14.746 0.064 0.014 0.050 uvw2 57372.245 14.780 0.040 14.790 0.066 -0.010 0.053 uvw2 57375.302 14.760 0.040 14.750 0.064 0.010 0.050 uvw2 57378.828 14.810 0.040 14.799 0.063 0.011 0.049 uvw2 57381.281 14.790 0.030 14.843 0.048 -0.053 0.037 uvw2 57381.284 14.880 0.040 14.865 0.063 0.015 0.049 uvw2 57384.147 14.760 0.040 14.791 0.063 -0.031 0.049 uvw2 57387.934 14.870 0.040 14.951 0.060 -0.081 0.045 uvw2 57390.524 14.910 0.040 14.979 0.063 -0.069 0.049 uvw2 57393.641 14.760 0.030 14.766 0.045 -0.006 0.034 uvw2 57393.645 14.810 0.040 14.870 0.064 -0.060 0.049 uvw2 57396.567 14.800 0.040 14.876 0.063 -0.076 0.049 uvw2 57399.031 14.790 0.040 14.736 0.065 0.054 0.051 uvw2 57402.617 14.830 0.040 14.865 0.062 -0.035 0.047 uvw2 57405.339 14.790 0.030 14.907 0.046 -0.117 0.035 uvw2 57405.342 14.840 0.040 14.796 0.064 0.044 0.050 uvw2 57408.274 14.820 0.040 14.809 0.064 0.011 0.050 uvw2 57411.458 14.830 0.040 14.857 0.063 -0.027 0.049 uvw2 57414.395 14.790 0.050 14.793 0.091 -0.003 0.076 uvw2 57416.976 14.820 0.040 14.856 0.065 -0.036 0.051 uvw2 57420.169 14.930 0.040 14.924 0.064 0.006 0.050 uvw2 57423.164 14.870 0.040 14.797 0.066 0.073 0.052 uvw2 57426.828 14.810 0.040 14.746 0.064 0.064 0.050 uvw2 57429.809 14.780 0.040 14.761 0.059 0.019 0.043 uvw2 57429.811 14.750 0.040 14.761 0.062 -0.011 0.047 uvw2 57432.017 14.820 0.040 14.811 0.066 0.010 0.053 uvw2 57435.009 14.780 0.040 14.783 0.064 -0.003 0.050 uvw2 57438.533 14.810 0.040 14.821 0.064 -0.011 0.050 uvw2 57441.518 14.780 0.040 14.777 0.060 0.004 0.045 uvw2 57441.520 14.820 0.040 14.790 0.065 0.030 0.052 uvw2 57444.565 14.870 0.040 14.819 0.066 0.051 0.053 uvw2 57447.422 14.830 0.040 14.841 0.065 -0.011 0.052 uvw2 57450.279 14.800 0.040 14.741 0.064 0.060 0.050 uvw2 57453.599 14.820 0.040 14.898 0.061 -0.078 0.046 uvw2 57453.601 14.800 0.040 14.767 0.064 0.033 0.050 uvw2 57459.388 14.820 0.040 14.892 0.064 -0.072 0.050 uvw2 57462.313 14.840 0.040 14.844 0.063 -0.004 0.049 uvw2 57465.238 14.760 0.040 14.774 0.061 -0.014 0.046 uvw2 57465.240 14.800 0.040 14.823 0.063 -0.023 0.049 uvw2 57468.099 14.780 0.040 14.802 0.064 -0.022 0.050 uvw2 57471.164 14.790 0.060 14.788 0.104 0.003 0.085 uvw2 57474.015 14.820 0.040 14.802 0.064 0.018 0.050 uvw2 57621.142 14.790 0.040 14.813 0.061 -0.023 0.046 uvw2 57621.144 14.870 0.040 14.972 0.061 -0.102 0.046 uvw2 57635.448 14.770 0.040 14.839 0.060 -0.069 0.045 uvw2 57656.697 14.740 0.040 14.700 0.066 0.040 0.052 uvw2 57672.645 14.860 0.040 14.917 0.065 -0.057 0.051 uvw2 57677.098 14.880 0.040 14.932 0.061 -0.052 0.046 uvw2 57677.101 14.720 0.040 14.830 0.061 -0.110 0.046 uvw2 57708.730 14.750 0.050 14.840 0.082 -0.090 
0.065 uvw2 57719.421 14.780 0.040 14.785 0.066 -0.005 0.052 uvw2 57723.277 14.780 0.040 14.810 0.061 -0.030 0.046 uvw2 57730.131 14.740 0.040 14.743 0.066 -0.003 0.053 uvw2 57737.433 14.720 0.070 14.896 0.113 -0.176 0.089 uvw2 57737.450 14.790 0.040 14.867 0.061 -0.077 0.046 uvw2 57742.752 14.730 0.040 14.777 0.063 -0.047 0.049 uvw2 57744.212 14.790 0.040 14.814 0.064 -0.024 0.050 uvw2 57751.851 14.780 0.040 14.774 0.066 0.006 0.052 uvw2 57756.312 14.790 0.040 14.855 0.064 -0.065 0.050 uvw2 57765.268 14.750 0.040 14.796 0.063 -0.046 0.049 uvw2 57765.271 14.760 0.030 14.767 0.049 -0.007 0.038 uvw2 57765.274 14.850 0.040 14.831 0.066 0.020 0.053 uvw2 57772.977 14.770 0.040 14.793 0.060 -0.023 0.045 uvw2 57779.890 14.760 0.040 14.897 0.063 -0.137 0.049 uvw2 57786.140 14.780 0.040 14.829 0.064 -0.049 0.050 uvw2 57789.993 14.790 0.040 14.864 0.064 -0.074 0.050 uvw2 57792.322 14.780 0.040 14.913 0.063 -0.133 0.048 uvw2 57795.382 14.800 0.040 14.903 0.063 -0.103 0.049 uvw2 57798.502 14.810 0.040 14.919 0.062 -0.109 0.047 uvw2 57801.894 14.800 0.040 14.790 0.067 0.010 0.054 uvw2 57804.938 14.730 0.040 14.735 0.065 -0.005 0.052 uvw2 57807.728 14.760 0.030 14.827 0.049 -0.067 0.039 uvw2 57810.385 14.830 0.040 14.848 0.066 -0.018 0.052 uvw2 57853.037 14.740 0.040 14.825 0.061 -0.085 0.046 uvw2 57853.039 14.720 0.030 14.750 0.047 -0.030 0.037 uvw2 57853.043 14.760 0.040 14.798 0.063 -0.038 0.048 uvm2 57317.528 14.730 0.040 14.819 0.111 -0.089 0.104 uvm2 57357.022 14.790 0.040 14.923 0.120 -0.133 0.113 uvm2 57360.470 14.660 0.040 14.678 0.141 -0.018 0.135 uvm2 57363.862 14.850 0.050 15.042 0.133 -0.192 0.124 uvm2 57366.256 14.710 0.040 14.697 0.129 0.013 0.122 uvm2 57372.250 14.720 0.050 14.742 0.143 -0.022 0.134 uvm2 57375.307 14.700 0.040 14.696 0.137 0.004 0.131 uvm2 57378.825 14.730 0.040 14.985 0.090 -0.255 0.081 uvm2 57378.833 14.740 0.040 14.715 0.141 0.025 0.135 uvm2 57381.290 14.790 0.040 14.751 0.131 0.039 0.125 uvm2 57384.152 14.770 0.050 14.916 0.128 -0.146 0.117 uvm2 57387.939 14.860 0.050 14.818 0.136 0.042 0.127 uvm2 57390.521 14.690 0.030 14.788 0.086 -0.098 0.081 uvm2 57390.529 14.900 0.050 15.006 0.134 -0.106 0.124 uvm2 57393.650 14.760 0.050 14.763 0.140 -0.003 0.131 uvm2 57396.572 14.670 0.040 14.802 0.133 -0.132 0.127 uvm2 57399.036 14.690 0.040 14.604 0.147 0.086 0.141 uvm2 57402.614 14.740 0.040 14.765 0.101 -0.025 0.093 uvm2 57402.622 14.740 0.040 14.866 0.123 -0.126 0.117 uvm2 57405.347 14.850 0.050 15.068 0.120 -0.218 0.109 uvm2 57416.981 14.770 0.050 14.785 0.140 -0.015 0.131 uvm2 57426.826 14.830 0.060 14.562 0.200 0.268 0.191 uvm2 57426.833 14.800 0.050 14.932 0.132 -0.132 0.122 uvm2 57444.570 14.810 0.050 14.876 0.142 -0.066 0.133 uvm2 57447.427 14.750 0.050 14.955 0.128 -0.205 0.117 uvm2 57450.276 14.880 0.040 14.953 0.095 -0.073 0.087 uvm2 57450.284 14.660 0.040 14.646 0.149 0.014 0.144 uvm2 57453.606 14.700 0.050 14.925 0.134 -0.225 0.124 uvm2 57459.393 14.840 0.050 15.158 0.122 -0.318 0.111 uvm2 57462.310 14.770 0.040 14.902 0.112 -0.132 0.104 uvm2 57465.246 14.790 0.050 14.835 0.134 -0.045 0.124 uvm2 57474.013 14.680 0.050 14.765 0.139 -0.085 0.130 uvm2 57474.021 14.820 0.050 14.721 0.155 0.099 0.147 uvm2 57621.149 14.750 0.050 14.840 0.140 -0.090 0.131 uvm2 57635.454 14.640 0.040 14.668 0.129 -0.028 0.123 uvm2 57656.702 14.820 0.050 14.911 0.138 -0.091 0.128 uvm2 57672.650 14.680 0.050 14.647 0.146 0.033 0.137 uvm2 57677.106 14.720 0.040 14.785 0.131 -0.065 0.125 uvm2 57708.734 14.620 0.060 14.766 0.176 -0.146 0.166 uvm2 57719.426 14.690 0.040 14.938 0.122 -0.248 
0.115 uvm2 57730.124 14.760 0.040 14.933 0.102 -0.173 0.094 uvm2 57730.128 14.740 0.040 14.731 0.107 0.009 0.099 uvm2 57730.136 14.750 0.050 14.931 0.129 -0.181 0.119 uvm2 57742.746 14.750 0.050 14.702 0.151 0.048 0.142 uvm2 57742.749 14.710 0.040 14.844 0.100 -0.134 0.092 uvm2 57742.757 14.720 0.050 14.740 0.139 -0.020 0.130 uvm2 57772.983 14.740 0.040 14.904 0.115 -0.164 0.107 uvm2 57786.137 14.740 0.040 14.878 0.097 -0.138 0.088 uvm2 57789.986 14.800 0.040 14.846 0.100 -0.046 0.091 uvm2 57789.990 14.740 0.040 14.880 0.099 -0.140 0.091 uvm2 57792.315 14.750 0.040 14.782 0.102 -0.032 0.093 uvm2 57792.319 14.770 0.040 14.866 0.098 -0.096 0.089 uvm2 57795.376 14.760 0.040 14.811 0.110 -0.051 0.102 uvm2 57795.379 14.800 0.040 14.908 0.097 -0.108 0.088 uvm2 57798.495 14.710 0.040 14.809 0.106 -0.099 0.098 uvm2 57798.498 14.800 0.040 14.794 0.104 0.006 0.096 uvm2 57804.932 14.710 0.040 14.674 0.101 0.036 0.093 uvm2 57804.935 14.720 0.040 14.874 0.099 -0.154 0.091 uvm2 57810.378 14.700 0.040 14.864 0.097 -0.164 0.088 uvm2 57810.382 14.730 0.040 14.844 0.098 -0.114 0.090 uvm2 57819.013 14.680 0.040 14.886 0.093 -0.206 0.084 uvm2 57819.017 14.820 0.040 14.935 0.097 -0.115 0.088 uvm2 57853.048 14.750 0.040 14.697 0.140 0.054 0.134 uvw1 57317.531 13.650 0.030 13.646 0.038 0.004 0.023 uvw1 57357.025 13.630 0.030 13.635 0.038 -0.005 0.024 uvw1 57360.473 13.660 0.030 13.644 0.040 0.016 0.026 uvw1 57363.855 13.650 0.030 13.549 0.049 0.101 0.038 uvw1 57363.864 13.660 0.030 13.655 0.042 0.005 0.029 uvw1 57369.991 13.600 0.030 13.585 0.042 0.015 0.029 uvw1 57372.253 13.650 0.030 13.695 0.040 -0.045 0.026 uvw1 57375.299 13.620 0.030 13.635 0.037 -0.015 0.022 uvw1 57375.310 13.600 0.030 13.588 0.040 0.012 0.026 uvw1 57378.836 13.620 0.030 13.584 0.040 0.036 0.026 uvw1 57381.292 13.660 0.030 13.629 0.040 0.031 0.026 uvw1 57387.930 13.620 0.030 13.644 0.037 -0.024 0.022 uvw1 57387.942 13.660 0.030 13.610 0.040 0.050 0.026 uvw1 57390.532 13.720 0.030 13.754 0.040 -0.034 0.026 uvw1 57393.652 13.610 0.030 13.631 0.040 -0.021 0.026 uvw1 57396.575 13.630 0.030 13.673 0.040 -0.043 0.026 uvw1 57399.028 13.620 0.020 13.619 0.028 0.001 0.020 uvw1 57399.038 13.630 0.030 13.652 0.040 -0.022 0.026 uvw1 57402.625 13.610 0.030 13.611 0.040 -0.001 0.026 uvw1 57405.350 13.660 0.030 13.621 0.040 0.040 0.026 uvw1 57408.282 13.720 0.030 13.700 0.042 0.020 0.029 uvw1 57411.454 13.620 0.020 13.618 0.027 0.002 0.018 uvw1 57411.465 13.700 0.030 13.684 0.042 0.016 0.029 uvw1 57416.983 13.650 0.030 13.628 0.042 0.022 0.029 uvw1 57420.176 13.680 0.030 13.667 0.042 0.013 0.029 uvw1 57423.161 13.630 0.020 13.634 0.028 -0.004 0.020 uvw1 57423.171 13.690 0.030 13.630 0.042 0.060 0.029 uvw1 57426.836 13.680 0.030 13.716 0.040 -0.036 0.026 uvw1 57429.819 13.610 0.030 13.628 0.040 -0.018 0.026 uvw1 57432.024 13.660 0.030 13.658 0.042 0.002 0.029 uvw1 57435.016 13.660 0.030 13.695 0.044 -0.035 0.032 uvw1 57441.528 13.700 0.030 13.675 0.042 0.025 0.029 uvw1 57444.572 13.660 0.030 13.679 0.040 -0.019 0.026 uvw1 57447.419 13.640 0.030 13.657 0.036 -0.017 0.020 uvw1 57447.430 13.620 0.030 13.580 0.042 0.040 0.029 uvw1 57450.287 13.660 0.030 13.683 0.041 -0.023 0.028 uvw1 57453.608 13.630 0.030 13.677 0.041 -0.047 0.028 uvw1 57459.386 13.690 0.030 13.647 0.044 0.043 0.032 uvw1 57459.395 13.670 0.030 13.722 0.040 -0.052 0.026 uvw1 57462.320 13.640 0.030 13.668 0.040 -0.028 0.026 uvw1 57465.248 13.630 0.030 13.613 0.040 0.017 0.026 uvw1 57468.107 13.630 0.030 13.634 0.040 -0.004 0.026 uvw1 57474.023 13.670 0.030 13.673 0.040 -0.003 0.026 uvw1 
57621.151 13.640 0.030 13.687 0.040 -0.047 0.026 uvw1 57635.446 13.630 0.030 13.676 0.038 -0.046 0.024 uvw1 57635.456 13.610 0.030 13.667 0.039 -0.057 0.025 uvw1 57656.705 13.590 0.030 13.659 0.040 -0.069 0.026 uvw1 57672.652 13.640 0.030 13.642 0.042 -0.002 0.029 uvw1 57677.109 13.580 0.030 13.552 0.042 0.028 0.029 uvw1 57708.725 13.630 0.030 13.642 0.046 -0.012 0.035 uvw1 57719.418 13.600 0.030 13.594 0.037 0.006 0.022 uvw1 57719.429 13.640 0.030 13.635 0.042 0.005 0.029 uvw1 57723.274 13.590 0.030 13.613 0.038 -0.023 0.024 uvw1 57723.285 13.640 0.030 13.663 0.038 -0.023 0.024 uvw1 57730.139 13.640 0.030 13.691 0.040 -0.051 0.026 uvw1 57742.760 13.620 0.030 13.627 0.040 -0.007 0.026 uvw1 57744.219 13.610 0.030 13.656 0.040 -0.046 0.026 uvw1 57751.845 13.590 0.030 13.654 0.039 -0.064 0.025 uvw1 57751.848 13.660 0.030 13.672 0.037 -0.012 0.022 uvw1 57751.859 13.640 0.030 13.677 0.041 -0.037 0.028 uvw1 57756.319 13.640 0.030 13.680 0.042 -0.040 0.029 uvw1 57772.986 13.620 0.030 13.651 0.039 -0.031 0.025 uvw1 57779.884 13.600 0.030 13.642 0.038 -0.042 0.024 uvw1 57779.887 13.630 0.030 13.668 0.037 -0.038 0.022 uvw1 57779.898 13.600 0.030 13.635 0.042 -0.035 0.029 uvw1 57786.148 13.620 0.030 13.672 0.042 -0.052 0.029 uvw1 57790.001 13.620 0.030 13.655 0.042 -0.035 0.029 uvw1 57792.329 13.640 0.030 13.693 0.042 -0.053 0.029 uvw1 57795.389 13.620 0.030 13.650 0.042 -0.030 0.029 uvw1 57798.510 13.600 0.030 13.672 0.040 -0.072 0.026 uvw1 57804.946 13.620 0.030 13.636 0.042 -0.016 0.029 uvw1 57807.739 13.640 0.030 13.677 0.038 -0.037 0.023 uvw1 57810.392 13.620 0.030 13.659 0.040 -0.039 0.026 uvw1 57819.027 13.580 0.030 13.645 0.041 -0.065 0.028 uvw1 57853.051 13.610 0.030 13.628 0.040 -0.018 0.026 U 57317.534 12.560 0.020 12.557 0.025 0.003 0.015 U 57357.028 12.580 0.020 12.594 0.025 -0.014 0.015 U 57360.463 12.560 0.030 12.563 0.035 -0.003 0.017 U 57360.475 12.560 0.030 12.574 0.034 -0.014 0.016 U 57363.867 12.580 0.030 12.567 0.034 0.013 0.016 U 57369.993 12.560 0.030 12.559 0.034 0.001 0.015 U 57372.243 12.570 0.030 12.584 0.034 -0.014 0.015 U 57372.255 12.570 0.030 12.579 0.034 -0.009 0.015 U 57375.313 12.570 0.020 12.574 0.025 -0.004 0.015 U 57378.838 12.550 0.020 12.549 0.025 0.001 0.015 U 57381.295 12.590 0.020 12.579 0.025 0.011 0.015 U 57387.944 12.590 0.020 12.589 0.025 0.001 0.015 U 57390.534 12.590 0.030 12.602 0.034 -0.012 0.015 U 57393.654 12.580 0.030 12.564 0.034 0.016 0.015 U 57396.565 12.590 0.030 12.574 0.034 0.016 0.015 U 57396.577 12.580 0.030 12.594 0.034 -0.014 0.015 U 57399.040 12.550 0.030 12.575 0.035 -0.025 0.017 U 57402.627 12.570 0.020 12.589 0.025 -0.019 0.015 U 57408.272 12.570 0.030 12.559 0.034 0.011 0.015 U 57408.284 12.600 0.030 12.596 0.035 0.004 0.017 U 57411.467 12.590 0.030 12.589 0.034 0.001 0.015 U 57416.973 12.580 0.030 12.579 0.034 0.001 0.015 U 57416.986 12.600 0.030 12.589 0.035 0.011 0.017 U 57420.166 12.590 0.030 12.584 0.034 0.006 0.015 U 57420.179 12.590 0.030 12.552 0.035 0.038 0.017 U 57423.174 12.600 0.030 12.580 0.035 0.020 0.017 U 57426.839 12.570 0.020 12.559 0.025 0.011 0.015 U 57429.822 12.580 0.020 12.572 0.025 0.008 0.015 U 57432.014 12.570 0.030 12.574 0.034 -0.004 0.015 U 57432.027 12.600 0.030 12.605 0.034 -0.005 0.016 U 57441.530 12.640 0.030 12.600 0.035 0.040 0.017 U 57444.561 12.590 0.020 12.599 0.025 -0.009 0.015 U 57444.574 12.560 0.030 12.557 0.034 0.003 0.015 U 57447.432 12.580 0.030 12.582 0.034 -0.002 0.015 U 57450.289 12.580 0.030 12.569 0.034 0.011 0.015 U 57453.611 12.560 0.030 12.564 0.034 -0.004 0.015 U 57459.397 12.580 
0.030 12.607 0.034 -0.027 0.016 U 57462.322 12.590 0.030 12.599 0.034 -0.009 0.016 U 57465.251 12.590 0.020 12.604 0.025 -0.014 0.015 U 57468.096 12.630 0.030 12.632 0.034 -0.002 0.015 U 57468.109 12.580 0.020 12.572 0.025 0.008 0.015 U 57474.026 12.600 0.020 12.601 0.026 -0.001 0.016 U 57621.154 12.580 0.030 12.587 0.034 -0.007 0.015 U 57635.459 12.580 0.020 12.599 0.025 -0.019 0.015 U 57656.695 12.600 0.030 12.604 0.034 -0.004 0.015 U 57656.707 12.590 0.030 12.597 0.034 -0.007 0.015 U 57672.642 12.650 0.030 12.642 0.034 0.008 0.015 U 57672.654 12.580 0.030 12.587 0.034 -0.007 0.015 U 57677.111 12.590 0.030 12.622 0.034 -0.032 0.015 U 57708.727 12.560 0.030 12.587 0.035 -0.027 0.017 U 57719.431 12.570 0.020 12.557 0.025 0.013 0.015 U 57723.288 12.590 0.020 12.592 0.025 -0.002 0.015 U 57730.141 12.590 0.030 12.582 0.034 0.008 0.015 U 57742.763 12.580 0.020 12.597 0.025 -0.017 0.015 U 57744.209 12.600 0.020 12.604 0.025 -0.004 0.015 U 57744.222 12.600 0.030 12.594 0.034 0.006 0.016 U 57751.861 12.580 0.020 12.574 0.025 0.006 0.015 U 57756.307 12.620 0.030 12.607 0.034 0.013 0.015 U 57756.309 12.640 0.030 12.629 0.034 0.011 0.015 U 57756.322 12.590 0.030 12.564 0.034 0.026 0.015 U 57772.970 12.600 0.030 12.599 0.034 0.001 0.015 U 57772.973 12.610 0.020 12.607 0.025 0.003 0.015 U 57772.989 12.590 0.020 12.589 0.025 0.001 0.015 U 57779.900 12.590 0.030 12.568 0.035 0.022 0.017 U 57786.150 12.600 0.030 12.587 0.035 0.013 0.017 U 57790.003 12.600 0.030 12.566 0.034 0.034 0.016 U 57792.331 12.590 0.030 12.529 0.035 0.061 0.017 U 57795.391 12.600 0.030 12.587 0.035 0.013 0.017 U 57798.512 12.600 0.020 12.602 0.025 -0.002 0.015 U 57804.948 12.590 0.030 12.577 0.034 0.013 0.015 U 57807.743 12.590 0.020 12.587 0.025 0.003 0.015 U 57810.395 12.590 0.030 12.597 0.034 -0.007 0.015 U 57819.029 12.580 0.030 12.577 0.034 0.003 0.016 U 57853.053 12.580 0.020 12.564 0.025 0.016 0.015 V 57317.524 11.890 0.020 11.880 0.025 0.010 0.014 V 57317.557 11.900 0.020 11.899 0.023 0.001 0.012 V 57317.591 11.900 0.020 11.910 0.025 -0.010 0.014 V 57357.020 11.890 0.020 11.878 0.023 0.012 0.012 V 57360.468 11.880 0.020 11.880 0.023 0.000 0.012 V 57363.860 11.910 0.020 11.888 0.023 0.022 0.012 V 57366.260 11.880 0.030 11.850 0.035 0.031 0.019 V 57369.986 11.890 0.020 11.887 0.023 0.003 0.012 V 57372.248 11.900 0.020 11.906 0.023 -0.006 0.012 V 57375.305 11.890 0.020 11.877 0.023 0.013 0.012 V 57378.831 11.910 0.020 11.912 0.023 -0.002 0.012 V 57381.287 11.900 0.020 11.901 0.023 -0.001 0.012 V 57384.149 11.890 0.020 11.908 0.023 -0.018 0.012 V 57387.936 11.900 0.020 11.900 0.023 0.000 0.012 V 57390.527 11.920 0.020 11.916 0.023 0.004 0.012 V 57393.647 11.900 0.020 11.903 0.023 -0.003 0.012 V 57396.570 11.900 0.020 11.900 0.023 0.000 0.012 V 57399.033 11.880 0.020 11.899 0.023 -0.019 0.012 V 57402.619 11.890 0.020 11.887 0.023 0.003 0.012 V 57405.345 11.920 0.020 11.916 0.023 0.004 0.012 V 57408.277 11.920 0.020 11.916 0.023 0.004 0.012 V 57411.460 11.920 0.020 11.913 0.023 0.008 0.012 V 57416.978 11.890 0.020 11.891 0.023 -0.001 0.012 V 57420.171 11.920 0.020 11.900 0.023 0.020 0.012 V 57423.166 11.920 0.020 11.911 0.023 0.009 0.012 V 57426.831 11.900 0.020 11.899 0.023 0.001 0.012 V 57429.814 11.890 0.020 11.883 0.023 0.007 0.012 V 57432.019 11.910 0.020 11.903 0.023 0.008 0.012 V 57435.012 11.900 0.020 11.909 0.023 -0.009 0.012 V 57438.535 11.910 0.020 11.911 0.023 -0.001 0.012 V 57441.523 11.930 0.020 11.907 0.023 0.023 0.012 V 57444.567 11.920 0.020 11.911 0.023 0.010 0.012 V 57447.425 11.900 0.020 11.903 0.023 -0.003 
0.012 V 57450.282 11.930 0.020 11.926 0.023 0.004 0.012 V 57453.603 11.890 0.020 11.903 0.023 -0.013 0.012 V 57459.390 11.890 0.020 11.901 0.023 -0.011 0.012 V 57462.315 11.900 0.020 11.911 0.023 -0.011 0.012 V 57465.243 11.900 0.020 11.915 0.023 -0.015 0.012 V 57468.101 11.910 0.020 11.921 0.023 -0.011 0.012 V 57474.018 11.900 0.020 11.902 0.023 -0.002 0.012 V 57621.146 11.890 0.020 11.904 0.023 -0.014 0.012 V 57635.451 11.880 0.020 11.899 0.023 -0.019 0.012 V 57656.700 11.890 0.020 11.898 0.023 -0.008 0.012 V 57672.647 11.870 0.020 11.875 0.023 -0.005 0.012 V 57677.103 11.880 0.020 11.897 0.023 -0.017 0.012 V 57708.732 11.940 0.030 11.928 0.034 0.012 0.016 V 57719.424 11.940 0.020 11.917 0.023 0.023 0.012 V 57723.280 11.920 0.020 11.887 0.023 0.033 0.012 V 57730.133 11.930 0.020 11.910 0.023 0.020 0.012 V 57742.755 11.940 0.020 11.914 0.023 0.026 0.012 V 57744.214 11.940 0.020 11.909 0.023 0.031 0.012 V 57751.853 11.930 0.020 11.909 0.023 0.021 0.012 V 57756.314 11.950 0.020 11.930 0.023 0.020 0.012 V 57765.275 11.920 0.030 11.895 0.036 0.025 0.020 V 57772.980 11.930 0.020 11.903 0.023 0.027 0.012 V 57779.893 11.930 0.020 11.890 0.023 0.040 0.012 V 57786.143 11.950 0.020 11.930 0.023 0.020 0.012 V 57789.996 11.930 0.020 11.885 0.023 0.045 0.012 V 57792.324 11.940 0.020 11.914 0.023 0.026 0.012 V 57795.384 11.950 0.020 11.877 0.023 0.073 0.012 V 57798.504 11.930 0.020 11.897 0.023 0.033 0.012 V 57801.896 11.940 0.020 11.920 0.023 0.020 0.012 V 57804.941 11.910 0.020 11.865 0.023 0.045 0.012 V 57807.732 11.930 0.020 11.899 0.023 0.031 0.012 V 57810.387 11.930 0.020 11.908 0.023 0.022 0.012 V 57819.022 11.930 0.020 11.913 0.023 0.017 0.012 V 57853.045 11.920 0.020 11.885 0.023 0.035 0.012 aModified Julian date bAverage brightening of the comparison star measurements
A hydrodynamic bifurcation in electroosmotically-driven periodic flows

Ronald G. Larson

December 30, 2023

[email protected]
SUPA, School of Physics and Astronomy, The University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh, EH9 3FD, United Kingdom
SUPA, School of Physics and Astronomy, The University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh, EH9 3FD, United Kingdom
Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109, USA

In this paper we report a novel inertial instability that occurs in electro-osmotically driven channel flows. We assume that the charge motion under the influence of an externally applied electric field is confined to a small vicinity of the channel walls and, effectively, drives a bulk flow through a prescribed slip velocity at the boundaries. Here, we study spatially periodic wall velocity modulations in a two-dimensional straight channel numerically. At low slip velocities, the bulk flow consists of a set of vortices along each wall that are left-right symmetric, while at sufficiently high slip velocities, this flow loses its stability through a supercritical bifurcation. Surprisingly, the new flow state that bifurcates from a left-right symmetric base flow has a rather strong mean component along the channel, similar to a pressure-driven velocity profile. The instability sets in at rather small Reynolds numbers of about 20-30, and we discuss its potential applications in microfluidic devices.

In microfluidic devices, the use of electric fields as a means of driving flow via electro-osmosis is an intriguing alternative to using pressure drops or moving surfaces <cit.>. Electro-osmosis occurs when the ions in a double layer next to a charged surface are set in motion by an electric field, and the ions drag the solvent with them, producing bulk flow. Such flows are especially suitable for microfluidic applications, in which microfabrication techniques allow for control and patterning of the electric and dielectric properties of channel surfaces. In this way, not only can bulk flow be generated to transport analytes, but patterned flow fields can be imposed, allowing, for example, for the creation of microfluidic mixers <cit.>. Such flows may also assist in separating particles or cells, possibly both modulating and augmenting the inertial forces that produce size-dependent cross-stream drift <cit.>. Even if a uniform charge density is intended for a surface, some variation in charge is unavoidable, especially given the difficulty in controlling precisely the surface chemistry producing the charge, and this will give rise to non-uniform electro-osmotic flows even in a straight channel. One of the conceptually simpler non-uniform electroosmotic flows that can be produced is generated by a sinusoidally periodic surface charge on each side of a straight channel <cit.>; see Fig. <ref>. This charge pattern leads to a spatially periodic charge in the double layer adjacent to the boundaries.
When a voltage is applied along the channel, the velocity near the surface varies periodically as well, and acts like a periodic 'slip' velocity along the surface, generating a complex cellular flow in the fluid in the channel. This flow is attractive as a simple boundary condition (straight walls, periodic charge) that nevertheless generates a complex flow, and that, moreover, has an analytic solution in the limit of creeping flow and thin double layers <cit.>. It is easy to add a uniform surface charge density to the periodic charge, mimicking, for example, an imperfectly treated surface with charge non-uniformity. A periodic deviation from a uniform charge might produce a deviation in the mean flow rate in the channel owing to nonlinear coupling between the flow produced by the uniform charge, and that produced by the periodic charge variations. If the wall charge varies sinusoidally around zero, the electro-osmotic flow that is generated is periodic, and in the Stokes flow limit has no net flow direction. Here we consider the effect of inertia on this simple flow. We employ a spectral method to solve the two-dimensional Navier-Stokes equations, and find, surprisingly, for the case of zero average surface charge, so that the Stokes flow is periodic with no net flow, that at a modest Reynolds number Re=v_0 L/ν of around 20, there is a bifurcation to a secondary flow with a non-zero mean flow, even when there is no mean flow induced by the boundary conditions themselves. Here, v_0 is the characteristic velocity, L is the half-width of the channel, and ν is the kinematic viscosity of the fluid. The presence of this bifurcation means that, even for a boundary condition with no mean surface charge, and hence no mean current, a rectified mean flow can be produced through a purely oscillatory boundary condition. The direction of the mean flow, to the right or the left, is arbitrary, but could be imposed by adding some small bias to the initial oscillatory flow, either electrically, geometrically, or in some other way. We believe that this is the first report of this hydrodynamic instability in a periodically patterned electroosmotic flow (although this discovery was alluded to in an earlier paper co-authored by one of the present authors <cit.>). This bifurcation is of interest in its own right, but might also be a means of generating rectified flow in a channel with no net imposed current. In fact, since the base flow is completely periodic, the applied voltage along the channel could in principle also be alternating, without changing either the zero net current, or the direction of the resulting flow. To reach the bifurcation condition with a wall charge that varies with position sinusoidally around zero, the Reynolds number must reach a value of close to 20, which, for water with ν = 10^-6m^2/s, requires a flow velocity and channel width 2L that are relatively large. The flow velocity v_0 is approximately σ_0 E/μκ, where σ_0 is the amplitude of the surface charge density, E the electric field imposed parallel to the walls, μ the fluid viscosity and κ the inverse Debye length at the wall <cit.>, where typical values are σ_0 ∼ 1 charge/nm^2, μ = 10^-3Pa s, and κ^-1 = 10nm. Under these conditions, a field of 10^4V/m would yield a velocity of 10^-2m/s, and so a channel of width 2L=2mm would suffice to yield an instability. Note that the channel depth would also need to be comparable to, or larger than, this scale, to prevent viscous suppression of the instability.
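As a quick check of these estimates, the short sketch below (Python) evaluates v_0 = σ_0 E/μκ and the resulting Reynolds number from the values quoted above; the only added ingredient is the elementary charge used to convert σ_0 = 1 charge/nm^2 to SI units.

```python
# Order-of-magnitude check of the slip velocity and Reynolds number
# quoted in the text. All values are taken from the paragraph above;
# the elementary charge converts sigma_0 = 1 charge/nm^2 to SI units.
e = 1.602e-19            # elementary charge [C]
sigma0 = e / (1e-9)**2   # surface charge density, 1 charge/nm^2 -> [C/m^2]
E = 1e4                  # applied field [V/m]
mu = 1e-3                # viscosity of water [Pa s]
kappa = 1.0 / 10e-9      # inverse Debye length, kappa^-1 = 10 nm [1/m]
nu = 1e-6                # kinematic viscosity of water [m^2/s]
L = 1e-3                 # channel half-width, 2L = 2 mm [m]

v0 = sigma0 * E / (mu * kappa)   # slip velocity scale v0 = sigma0 E / (mu kappa)
Re = v0 * L / nu                 # Reynolds number Re = v0 L / nu

print(f"v0 ~ {v0:.1e} m/s")      # ~1.6e-02 m/s, i.e. of order 10^-2 m/s
print(f"Re ~ {Re:.0f}")          # ~16, close to the threshold of ~20
```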
This instability is distinct from well-known electrokinetic flow instabilities that result from coupling of electric fields and ionic conductivity gradients <cit.>, since there are no ionic conductivity gradients considered in what we report here. Such gradients can arise when a core fluid flow is focused by a 'sheath' fluid introduced at the walls of a microfluidic device, if the two fluids have differing ionic strengths. An instability occurs in such flows at a critical electric current Rayleigh number of Ra_e = (ϵ E_a^2 d^2/Dμ)((γ-1)/γ) |grad^*σ^*|_max = 205, as reported by Posner and Santiago <cit.> and Posner et al. <cit.>. Here ϵ is the fluid permittivity, E_a is the applied electric field, d is the channel depth, D is the ion diffusivity, μ is the fluid viscosity, γ is the ratio of core-to-sheath conductivities, and |grad^*σ^*|_max is the maximum dimensionless conductivity gradient. Note that the Rayleigh number representing the driving force for this instability disappears when the conductivities of the two fluids are equal to each other (i.e., γ = 1), since then there is no conductivity gradient to which the electric field can couple and the above Rayleigh number is zero. If a conductivity gradient is present, the instability produced by it could in many cases occur at a lower field strength than that needed to produce the inertial instability to be discussed here. We consider the flow of a Newtonian fluid in a 2D channel forced by a prescribed slip velocity at the channel walls, similar to Fig.<ref>. We introduce a Cartesian coordinate system with the x-axis pointing along the length of the channel, and the y-axis pointing in the wall-normal direction. The walls are located at y=± L, so that the total channel width is 2L. The velocity components are v=(u(x,y,t),v(x,y,t)), and the flow is driven by the following slip velocity at the walls u(x,L,t) = v_+ cos kx, u(x,-L,t) = v_- cos(kx+ϕ), v(x,± L,t) = 0. Here, v_+ and v_- are the maximum slip velocities at the corresponding boundary, k is the wavenumber of the slip velocity modulation, and ϕ is the phase difference between the velocity at the upper and lower walls. The equations of motion are given by the Navier-Stokes equation ρ[ ∂ v/∂ t + v·∇ v] = -∇ p + μ∇^2 v, and the incompressibility condition ∇· v = 0. Here, ρ and μ are the density and viscosity of the fluid, respectively, and p is the pressure. The problem is rendered dimensionless by the rescaling of all the variables, where we use the half-width of the channel L as the unit of length, max(v_+,v_-) as the unit of velocity, and L/max(v_+,v_-) as the unit of time. We also introduce the Reynolds number Re=max(v_+,v_-) L/ν, and the dimensionless wave-vector k̃ = k L. In what follows, all variables are dimensionless unless stated otherwise. To reduce the number of degrees of freedom, we introduce the streamfunction Ψ=Ψ(x,y,t), such that u = ∂Ψ/∂ y, v = -∂Ψ/∂ x. In terms of the streamfunction, the equation of motion is given by ( ∂/∂ t + ∂Ψ/∂ y ∂/∂ x - ∂Ψ/∂ x ∂/∂ y)∇^2 Ψ = (1/Re)∇^4 Ψ, with the following boundary conditions ∂Ψ/∂ y(x,1,t) = ṽ_+ cos k̃ x, ∂Ψ/∂ y(x,-1,t) = ṽ_- cos(k̃ x+ϕ), ∂Ψ/∂ x(x,± 1,t) = 0, where ṽ_+=v_+/max(v_+,v_-), and ṽ_-=v_-/max(v_+,v_-), and ∇^4 is the biharmonic operator. It is instructive to consider the limit of zero inertia and a steady velocity field.
In this case, Eq.(<ref>) reduces to the Stokes equation, ∇^4 Ψ = 0, which has a simple analytical solution satisfying the boundary conditions, Eqs.(<ref>), Ψ_0 = 2k̃/(sinh^2 2k̃ - 4k̃^2) [ A(x)(1+y)sinh k̃(1-y) - B(x)(1-y)sinh k̃(1+y)], where A(x) = ṽ_+ cos k̃x + ṽ_- (sinh 2k̃/2k̃) cos(k̃x+ϕ), B(x) = ṽ_- cos(k̃x+ϕ) + ṽ_+ (sinh 2k̃/2k̃) cos k̃x. This solution is similar to the one obtained by Ajdari <cit.>. Although Ψ_0 differs from the true solution to Eq.(<ref>) for any finite amount of inertia, it is nevertheless useful for gaining an insight into the structure of the flow. In Fig.<ref> we plot the velocity profile given by Ψ_0 for ṽ_+=ṽ_-=1 (top row), and ṽ_+=1, ṽ_-=1/3 (bottom row) for three values of the phase difference ϕ: 0 (left column), π/2 (middle column), and π (right column). As can be seen from the figure, the flow consists of two arrays of vortices aligned along each wall with their relative position and strength set by the phase difference ϕ and the velocity magnitudes ṽ_+ and ṽ_-, respectively. To assess the effect of inertia on this solution, we solve Eqs.(<ref>) and (<ref>) numerically using a Fourier-Chebyshev pseudo-spectral method <cit.>. We express the streamfunction as a Fourier series Ψ(x,y,t) = ∑_n=-N^N ψ_n(y,t) e^i n k̃ x, where ψ_n(y,t) = ψ_-n^*(y,t) to ensure that Ψ(x,y,t) is real, and * denotes the complex conjugate. At any time t, ψ_n(y,t) is represented by its values at M Gauss-Lobatto points <cit.> in the wall-normal direction, and the y-derivatives are taken by multiplying these values with the Chebyshev pseudo-spectral differentiation matrix <cit.>. The non-linear terms are calculated by performing a discrete Fourier transform of the streamfunction to real space, evaluating the non-linear terms there, and performing an inverse discrete Fourier transform back to spectral space; the 3/2-rule is used to avoid aliasing errors and the boundary conditions are implemented using the tau-method <cit.>. For each set of parameters, we check convergence of the velocity field by comparing it at several resolutions (N,M); convergence was always reached for N=5 (before de-aliasing) and M=80. Most of the results presented below are obtained by using the Newton-Raphson algorithm <cit.> to solve the time-independent version of Eq.(<ref>). We also performed direct numerical simulations of Eq.(<ref>) using a fully-implicit Crank-Nicolson method <cit.>; for all parameters studied, convergence was reached for a dimensionless time-step of 10^-2. First, we study how the presence of inertia modifies the Stokes solution, Eq.(<ref>), at relatively low Reynolds numbers. Using the Newton-Raphson method, we find steady solutions of Eq.(<ref>), and compare them to the Stokes profile Ψ_0. The difference is quantified by calculating the kinetic energy of the flow, defined as E = (k̃/2π) ∫_0^2π/k̃ dx (1/2) ∫_-1^1 dy (1/2)[ (∂Ψ/∂ y)^2+(∂Ψ/∂ x)^2 ], for the inertial, E_i, and Stokes, E_s, solutions. In Fig.<ref> we plot the ratio E_i/E_s for ṽ_+=ṽ_-=1, ϕ=0, and k̃=π. The data demonstrate that the inertial contribution to the kinetic energy is only about 4% of the total kinetic energy at Re=30, and that that contribution decreases for smaller values of Re. The difference E_i/E_s-1 scales quadratically with Re (fit not shown), implying that the small inertial correction to the Stokes profile can be obtained from the leading-order term of the perturbation theory in Re, even for Re∼ 30.
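As an aside, the analytic Stokes solution Ψ_0 above is straightforward to evaluate; a minimal Python sketch for the symmetric case ṽ_+ = ṽ_- = 1, ϕ = 0, k̃ = π is given below. The velocities follow from u = ∂Ψ/∂y and v = -∂Ψ/∂x, here approximated by finite differences purely for visualisation; this is an illustration, not the pseudo-spectral solver used for the results.

```python
import numpy as np

kt, phi, vp, vm = np.pi, 0.0, 1.0, 1.0   # k~, phase, slip amplitudes

def psi0(x, y):
    """Stokes streamfunction Psi_0 of the text for given (x, y)."""
    A = vp*np.cos(kt*x) + vm*(np.sinh(2*kt)/(2*kt))*np.cos(kt*x + phi)
    B = vm*np.cos(kt*x + phi) + vp*(np.sinh(2*kt)/(2*kt))*np.cos(kt*x)
    pref = 2*kt/(np.sinh(2*kt)**2 - 4*kt**2)
    return pref*(A*(1 + y)*np.sinh(kt*(1 - y)) - B*(1 - y)*np.sinh(kt*(1 + y)))

# evaluate on a grid covering one spatial period, -1 <= y <= 1
x = np.linspace(0, 2*np.pi/kt, 129)
y = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, y)
P = psi0(X, Y)

# u = dPsi/dy, v = -dPsi/dx (finite differences, for visualisation only)
u = np.gradient(P, y, axis=0)
v = -np.gradient(P, x, axis=1)
# the wall value of u should approach the imposed slip velocity v+ cos(k~ x)
print(u[-1, :5], vp*np.cos(kt*x[:5]))
```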
Visual inspection of the inertial velocity profiles together with the data in Fig.<ref> suggests that the Stokes solution, Eq.(<ref>), is a very good approximation to the actual inertial solution even at moderate Reynolds numbers. The situation changes significantly at higher Reynolds numbers. In Fig.<ref>(left) we plot the velocity profile for the symmetric boundary conditions at Re=40 and k̃=π, and observe that it no longer possesses a translation-reflection symmetry along the x-axis, cf. Fig.<ref>(top, left). This is associated with the emergence of the zeroth Fourier mode U(y) of the horizontal velocity component u(x,y), see Fig.<ref>(right), absent at lower Reynolds numbers. This x-independent mean flow along the x-direction reaches significant amplitudes of about 23% of the maximum slip velocity at the wall. To characterise this new flow state, we introduce a dimensionless order parameter χ = Re (k̃/2π) ∫_0^2π/k̃ dx ∫_-1^1 dy u(x,y) ≡ Re ∫_-1^1 U(y) dy, which is a two-dimensional flow rate along the channel (in physical units) scaled by the kinematic viscosity ν of the fluid. In Fig.<ref>(left) we plot χ as a function of the Reynolds number for ṽ_+=ṽ_-=1, ϕ=0, and k̃=π (black line). For low values of Re the flow is left-right-symmetric, there is no mean flow, and χ=0, while at larger Re, χ acquires non-zero values indicating the presence of a mean flow. The direction of the mean flow is selected by a spontaneous symmetry breaking, and can be in either direction along the channel. The state diagram, Fig.<ref>(left), therefore has two symmetric branches, ±χ, typical of a super-critical (pitchfork) bifurcation. By combining the Newton-Raphson and time-iteration techniques, we have verified that the left-right symmetric solution with χ=0 is also present for higher values of Re but is linearly unstable. The final flow state with χ≠0 is stationary and stable with respect to small perturbations. Therefore, we conclude that the new flow state is a result of a linear instability that sets in at Re_crit≈ 33.3, for this set of parameters. In Fig.<ref>(left) we also show the bifurcation diagrams for other values of the phase difference ϕ, and observe that the lowest Re_crit is achieved for ϕ=π; the corresponding base profile is shown in Fig.<ref>(top,right). The instability thresholds presented above were calculated by imposing a fixed value of k̃, i.e. assuming a particular spatial period of the solution. To find the critical condition in an infinitely long channel, we now study how Re_crit depends on k̃. In Fig.<ref>(right) we plot the non-linear stability thresholds for two values of ϕ, and observe that Re_crit^min = 21.2 for k̃=2.6 and ϕ=π. The stability thresholds for other values of ϕ lie in-between the two cases presented in Fig.<ref>(right), similar to Fig.<ref>(left). We also studied the effect of asymmetry in the wall slip velocity (not shown), with either ṽ_+ or ṽ_- smaller than unity. For every set of ϕ and k̃ considered, the corresponding Re_crit was found to be larger than Re_crit for ṽ_+=ṽ_-=1. As mentioned in the Introduction, this instability can potentially be utilised as a means of creating a unidirectional flow in a microfluidic device, although the relatively high transitional Reynolds numbers and the a priori unknown direction of the flow could make it impractical. We now attempt to assess whether a modification of the slip boundary condition, Eq.(<ref>), can alleviate both problems.
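Before doing so, we note that the order parameter defined above is simple to evaluate from any computed velocity field; a minimal Python sketch using the trapezoidal rule is given below, where the grid and the velocity array are placeholders for the output of a solver such as the one described earlier.

```python
import numpy as np

def order_parameter(u, x, y, Re, kt):
    """chi = Re (kt/2pi) int dx int dy u(x,y); u has shape (len(y), len(x))."""
    U = (kt/(2*np.pi)) * np.trapz(u, x, axis=1)  # zeroth Fourier mode U(y)
    return Re * np.trapz(U, y)

# illustrative usage: a plug-like profile U(y) = const, for which chi = 2 Re u~
kt = 2.6
x = np.linspace(0, 2*np.pi/kt, 201)
y = np.linspace(-1, 1, 201)
u = np.full((y.size, x.size), 0.1)    # dimensionless plug flow, u~ = 0.1
print(order_parameter(u, x, y, Re=1.0, kt=kt))  # -> 0.2 = 2*Re*u~ = 2*Re_c
```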
To assess this, we consider the following (dimensional) velocity profile prescribed at the walls u(x,± L,t) = v_c ± v cos kx, where the spatially-oscillatory component is the same as in Eq.(<ref>) for the most unstable parameters (v_+=v_-≡ v, ϕ=π, and k L=2.6), and we have introduced v_c, the amplitude of a constant slip velocity in the positive x-direction. Here, we study whether a small value of v_c can produce a significant mean flow at Re<Re_crit^min. The problem is made dimensionless as before, and we additionally define another Reynolds number, Re_c=v_c L/ν, based on the constant slip velocity. In the absence of the spatially-oscillatory component, the equations of motion are trivially solved by a plug-like flow, U(y)=v_c (in physical units), which corresponds to the order parameter χ_c=2Re_c. For Re>0, we expect that the interaction between the plug-like and spatially-oscillatory components will generate flow rates enhanced beyond χ_c. In Fig.<ref> we present the bifurcation diagram for the modified boundary conditions, Eq.(<ref>), varying Re but keeping Re_c fixed to a particular value. The Re_c=0 data is the same as in Fig.<ref>(left) for ϕ=π. In the presence of a constant bias, the bifurcation diagram loses its ±χ symmetry, and we only plot the dimensionless flow rate in the same direction as the bias. For Re_c=0.1 and Re_c=1, the flow rate is dominated by the plug-like profile at low values of Re, while at larger Re there is an enhancement of the flow rate due to the instability. The bifurcation diagram now looks like an imperfect pitchfork bifurcation. For yet larger Re_c, the effect of the underlying instability is masked by the presence of a strong bias and only a mild enhancement is observed. While the presence of the bias clearly enhances the mean flow rate and breaks the left-right symmetry, the enhancement is mild and it remains to be seen whether there are practical advantages to generating a steady flow in a microfluidic device by the slip velocity of Eq.(<ref>) instead of a stronger steady component alone. The results developed here have several implications. First, we find that a periodic variation in wall charge will have only a small effect on the average velocity in a microfluidic device with an otherwise uniform wall charge, if the periodic component is small (or even modest) in magnitude compared to the uniform component.
This conclusion seems likely to hold if the non-uniform component is irregular or non-periodic, as long as it is significantly smaller than the uniform wall charge. Thus, surface charge in a microfluidic device does not need to be nearly perfectly uniform to achieve a uniform flow rate, whose magnitude is set by the average surface charge, a conclusion of importance in practical applications where wall charging is unlikely to be exquisitely uniform. Secondly, if a rectified flow with a sharp onset is desired in a microfluidic device using electric fields to drive the flow, this can be accomplished by exploiting the bifurcation described here, albeit only for rather large channel widths and heights (i.e., millimeters) and strong fields. In addition, there may be benefit in using periodic, or nearly periodic, flows for separation of particles or cells based on size or other characteristics, including separations based on inertial forces. These inertial forces are already being exploited in pressure-driven flows to separate rare circulating tumor cells from white blood cells <cit.>. Electroosmotic flow driven by a periodic wall charge, along with fluid inertial forces, may expand the options for improving the efficiency of such devices. We note that inertial fluid forces in pressure-driven microfluidic devices are strong enough to induce circulating Dean flows, which are of great significance for separating particles and cells. Thus, the addition of electroosmotically driven flow, combined with inertial effects, opens multiple new opportunities for separations. Thirdly, the flows generated by periodic charges may provide a good experimental test of one's ability to control electroosmotic flow fields, and of the ability to create controlled charge at walls. Since the flow field is readily predicted, including the effect of surface charge amplitude and other parameters, a measurement of the flow (even without the bifurcation) could be used to validate methods of controlling surface charge, for example. Fourthly, both the circulating primary flow and the secondary bifurcation flow described here occur in a geometry of trivial simplicity (a straight channel), which allows them to be used as a test flow field for exploring various advanced simulation methods, such as mesoscopic flow simulations <cit.>, and for exploring the behaviour of complex fluids in complex flows, but with simple geometry and boundary conditions <cit.>. Finally, the flow is essentially completely viscous prior to the bifurcation and described by an analytical solution to the Stokes equation, and thus represents a particularly simple and elegant example of a classical forward bifurcation at a very modest Reynolds number, and is the simplest bifurcation so far presented for electroosmotic flow. R.G.L. acknowledges support and hospitality of the Higgs Centre for Theoretical Physics, University of Edinburgh where a part of this work was performed. A.M. acknowledges support from the UK Engineering and Physical Sciences Research Council (EP/I004262/1). Research outputs generated through the EPSRC grant EP/I004262/1 can be found at http://dx.doi.org/xxx-xxx.
http://arxiv.org/abs/1708.07470v1
{ "authors": [ "Alexander Morozov", "Davide Marenduzzo", "Ronald G. Larson" ], "categories": [ "physics.flu-dyn", "cond-mat.soft" ], "primary_category": "physics.flu-dyn", "published": "20170824155451", "title": "A hydrodynamic bifurcation in electroosmotically-driven periodic flows" }
Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China Institute of Particle and Nuclear Physics, Henan Normal University, Xinxiang 453007, China College of Physics and Electronic Engineering, Nanyang Normal University, Nanyang 473061, China The B_q^∗ → DP, DV weak decays are studied with the perturbative QCD approach, where q = u, d and s; P and V denote the ground SU(3) pseudoscalar and vector meson nonet. It is found that the branching ratios for the color-allowed B_q^∗ → D_qρ^- decays can reach up to 10^-9 or more, and should be promisingly measurable at the running LHC and forthcoming SuperKEKB experiments in the near future. 12.15.Ji 12.39.St 13.25.Hw 14.40.Nd Study of the B_q^∗ → DM decays with perturbative QCD approach Jinshu Huang Received August 8, 2016; accepted March 2, 2017 ============================================================= § INTRODUCTION In accordance with the conventional quark model assignments, the ground spin-singlet pseudoscalar B_q mesons and spin-triplet vector B^∗_q mesons have the same flavor components, and consist of one valence heavy antiquark b̅ and one light quark q, i.e., b̅q, with q = u, d, s <cit.>. With the two e^+e^- B-factory BaBar and Belle experiments, there is a combined data sample of over 1 ab^-1 at the Υ(4S) resonance. The B_u,d meson weak decay modes with branching ratios of over 10^-6 have been well measured <cit.>. The B_s meson, which can be produced in hadron collisions or at/over the resonance Υ(5S) in e^+e^- collisions[1], [1] In hadron colliders, CDF and D0 each have accumulated about 10 fb^-1 of data, and LHCb has accumulated over 5 fb^-1 of data up to the year 2016 <cit.>. In e^+e^- colliders, Belle has accumulated more than 100 fb^-1 of data at the resonance Υ(5S) <cit.>. is being carefully scrutinized. However, the study of the B_q^∗ mesons has not actually attracted much attention yet, owing to the relatively limited statistics. Because the mass of the B_q^∗ mesons is a bit larger than that of the B_q mesons, the B_q^∗ mesons should be produced at relatively higher energies rather than at the resonance Υ(4S) in e^+e^- collisions. With the high luminosities and large production cross sections at the running LHC, the forthcoming SuperKEKB and the future Super proton proton Collider (SppC, which is still in the preliminary discussion and research stage at present), more and more B_q^∗ mesons will be accumulated in the future, which makes the B_q^∗ mesons another research laboratory for testing the Cabibbo-Kobayashi-Maskawa (CKM) picture for CP-violating phenomena, and for examining our comprehension of the underlying dynamical mechanism for the weak decays of the heavy flavor hadrons. Having the same valence quark components and approximately equal masses, both the B^∗_q and B_q mesons can decay via weak interactions into the same final states.
On the one hand, the B^∗_q and B_q meson weak decays would provide each other with a spurious background; on the other hand, the interplay between the B_q^∗ and B_q weak decays could offer some potentially useful information to constrain parameters within the standard model, and might shed some fresh light on various intriguing puzzles in the B_q meson decays. The B_q meson decays are well described by the bottom quark decay with the light spectator quark q in the spectator model. At the quark level, most of the hadronic B_q meson decays involve the b → c transition due to the hierarchy relation among the CKM matrix elements. As is well known, there is a more than 3 σ discrepancy between the value of |V_cb| obtained from inclusive determinations, |V_cb| = (42.2±0.8)×10^-3, and from exclusive ones, |V_cb| = (39.2±0.7)×10^-3 <cit.>. Besides the semileptonic B_q^(∗) → D^(∗)ℓν̅ decays, the nonleptonic B_q^(∗) → DM decays, with M representing the ground SU(3) pseudoscalar P and vector V meson nonet, are also induced by the b → c transition, and hence could be used to extract/constrain the CKM matrix element |V_cb|. From the dynamical point of view, the phenomenological models used for the B_q → DM decays might, in principle, be extended and applied to the B_q^∗ → DM decays. The practical applicability and reliability of these models could be reevaluated with the B_q^∗ → DM decays. Recently, some attractive QCD-inspired methods, such as the perturbative QCD (pQCD) approach <cit.>, the QCD factorization (QCDF) approach <cit.>, soft and collinear effective theory <cit.> and so on, have been developed vigorously and employed widely to explain measurements on the B_q meson decays. The B_q → DM decays have been studied with the QCDF <cit.> and pQCD <cit.> approaches, but there are few research works on the B_q^∗ meson weak decays. Recently, the B_q^∗ → D_qV decays have been investigated with the QCDF approach <cit.>, where it is shown that the B_q^∗0 → D_q^+ρ^- decays with branching ratios of O(10^-8) might be accessible to the existing and future heavy flavor experiments. In this paper, we will give a comprehensive investigation of the two-body nonleptonic B_q^∗ → DM decays with the pQCD approach in order to provide future experimental research with an available reference. As is well known, the B^∗_q meson decays are dominated by the electromagnetic interactions rather than the weak interactions, which differs significantly from the B_q meson decays. One can easily expect that the branching ratios for the B_q^∗ → DM weak decays should be very small due to the short electromagnetic lifetimes of the B_q^∗ mesons <cit.>, although these processes are favored by the CKM matrix element |V_cb|. Of course, an abnormally large branching ratio might be a possible hint of new physics beyond the standard model. There is still no experimental report on the B_q^∗ → DM weak decays so far. Furthermore, the B_q^∗ → DM weak decays offer the unique opportunity of observing the weak decay of a vector meson, where polarization effects could be explored. This paper is organized as follows. In section <ref>, we present the theoretical framework, the conventions and notations, together with the amplitudes for the B_q^∗ → DM decays. Section <ref> is devoted to the numerical results and discussion. The final section is a summary.
§ THEORETICAL FRAMEWORK §.§ The effective Hamiltonian As is well known, the weak decays of the B_q^(∗) mesons inevitably involve multiple length scales, including the mass m_W of the virtual gauge boson W, the mass m_b of the decaying bottom quark, the infrared confinement scale Λ_ QCD of the strong interactions, and m_W ≫ m_b ≫ Λ_ QCD. So, one usually has to resort to the effective theory approximation scheme. With the operator product expansion and the renormalization group (RG) method, the effective Hamiltonian for the B_q^∗ → DM decays can be written as <cit.>, H_ eff= G_F/√(2) ∑_q^'=d,s V_cb V_uq^'^∗{ C_1(μ) Q_1(μ)+C_2(μ) Q_2(μ)} + h.c., where G_F ≃ 1.166×10^-5GeV^-2 <cit.> is the Fermi coupling constant. Using the Wolfenstein parametrization, the CKM factors V_cb V_uq^'^∗ are expressed as a series expansion in the small Wolfenstein parameter λ ≈ 0.2 <cit.>. Up to the order of O(λ^7), they can be written as follows: V_cb V_ud^∗ =A λ^2 ( 1 -λ^2/2-λ^4/8 ) + O(λ^7) , V_cb V_us^∗ =A λ^3+ O(λ^7) . It is clearly seen that both V_cb V_ud^∗ and V_cb V_us^∗ are real-valued, i.e., there is no weak phase difference. However, a nonzero weak phase difference is indispensable for direct CP violation. Therefore, no direct CP violation should be expected in the B_q^∗ → DM decays. The renormalization scale μ separates the physical contributions into the short- and long-distance parts. The Wilson coefficients C_1,2 summarize the physical contributions above the scale μ. They, in principle, are calculable order by order in the strong coupling α_s at the scale m_W with the ordinary perturbation theory, and then evolved with the RG equation to the characteristic scale μ ∼ O(m_b) for the bottom quark decay <cit.>. The Wilson coefficients at the scale m_W are determined at the quark level rather than the hadron level, so they are regarded as process-independent couplings of the local operators Q_i. Their explicit analytical expressions, including the next-to-leading order corrections, have been given in Ref.<cit.>. The physical contributions from the scales lower than μ are contained in the hadronic matrix elements (HME) where the local four-quark operators are sandwiched between the initial and final hadron states. The local six-dimensional operators arising from the W-boson exchange are defined as follows: Q_1 =[ c̅_α γ_μ (1-γ_5) b_α] [ q̅^'_β γ^μ (1-γ_5) u_β] , Q_2 =[ c̅_α γ_μ (1-γ_5) b_β] [ q̅^'_β γ^μ (1-γ_5) u_α] . where α and β are color indices, and the gluonic corrections are included. The operator Q_1 (Q_2) consists of two color-singlet (color-octet) currents. The operators Q_1 and Q_2, called current-current operators or tree operators, have the same flavor form and a different color structure. It is obvious that the B_q^∗ → DM decays are uncontaminated by the contributions from the penguin operators, which is advantageous for extracting the CKM matrix element |V_cb|. Because of the participation of the strong interaction, especially the long-distance effects in the conversion of the quarks of the local operators into the initial and final hadrons, obstacles remain in the theoretical treatment of nonleptonic B_q^(∗) weak decays, which complicates the calculation. HME of the local operators are the most intricate part of the theoretical calculation, where the perturbative and nonperturbative contributions entangle with each other.
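As a brief numerical aside before turning to the hadronic matrix elements, the CKM factors quoted above are easily evaluated; the sketch below (Python) uses A ≈ 0.81 and λ ≈ 0.225 as illustrative Wolfenstein values, which are assumptions rather than inputs taken from this paper.

```python
# Evaluate the CKM factors to the quoted order in the Wolfenstein expansion.
# A and lam are illustrative Wolfenstein parameter values (assumption).
A, lam = 0.81, 0.225

Vcb_Vud = A*lam**2 * (1 - lam**2/2 - lam**4/8)  # V_cb V_ud* = A lam^2 (1 - lam^2/2 - lam^4/8)
Vcb_Vus = A*lam**3                              # V_cb V_us* = A lam^3

print(f"V_cb V_ud* ~ {Vcb_Vud:.4f}")  # ~0.040; both factors are real-valued,
print(f"V_cb V_us* ~ {Vcb_Vus:.4f}")  # ~0.009; hence no weak phase difference
```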
To evaluate the HME amplitudes, one usually has to resort to some plausible approximations and assumptions, which results in the model-dependence of theoretical predictions. It is obvious that a large part of the uncertainties comes from the practical treatment of HME, due to our inadequate understanding of the hadronization mechanism and the low-energy QCD behavior. For the phenomenology of the B_q^∗ → DM decays, one of the main tasks at this stage is how to effectively factorize HME of the local operators into hard and soft parts, and how to evaluate HME properly. §.§ Hadronic matrix elements One of the phenomenological schemes for the HME calculation is the factorization approximation based on Bjorken's a priori color transparency hypothesis, which says that the color-singlet energetic hadron would have flown rapidly away from the color fields existing in the neighborhood of the interaction point before the soft gluons are exchanged among hadrons <cit.>. Modeled on the amplitudes for exclusive processes with the Lepage-Brodsky approach <cit.>, HME are usually written as the convolution integral of the hard kernels and the hadron distribution amplitudes (DAs). Hard kernels are expressed as the scattering amplitudes for the transition of the heavy bottom quark into light quarks. They are generally computable at the quark level with the perturbation theory as a series expansion in the parameter 1/m_b and the strong coupling constant α_s in the heavy quark limit. It is assumed that the soft and nonperturbative contributions to HME could be absorbed into the hadron DAs. The distribution amplitudes are functions of the parton momentum fractions. They, although not calculable, are regarded as universal and can be determined by nonperturbative means or extracted from data. With the traits of universality and determinability of the hadron DAs, HME have a simple structure and can be evaluated to make predictions. Besides the factorizable contributions to HME, the nonfactorizable corrections to HME also play an important role in interpreting the experimental measurements and solving the so-called puzzles and anomalies, and hence should be carefully considered, as commonly recognized by theoretical physicists. In order to regulate the endpoint singularities which appear in the spectator scattering and annihilation amplitudes with the QCDF approach and spoil the perturbative calculation with the collinear approximation <cit.>, it is suggested by the pQCD approach <cit.> that the transverse momentum of quarks should be conserved and, additionally, that a Sudakov factor should be introduced into the DAs of all the participant hadrons to further suppress the long-distance and soft contributions. The basic pQCD formula for nonleptonic weak decay amplitudes could be factorized into three parts: the hard effects enclosed by the Wilson coefficients C_i, the hard scattering kernels H_i, and the universal wave functions Φ_j. The general form is a multidimensional integral <cit.>, A_i ∝ ∫ ∏_j dx_j db_j C_i(t_i) H_i(t_i,x_j,b_j) Φ_j(x_j,b_j) e^-S_j, where x_j is the longitudinal momentum fraction of the valence quarks. b_j is the conjugate variable of the transverse momentum k_jT. The scale t_i is preferably chosen to be the maximum virtuality of all the internal particles. The Sudakov factor e^-S_j, together with the particular scale t_i, will ensure that the perturbative calculation is feasible and reliable. §.§ Kinematic variables The B_q^(∗) weak decays are actually dominated by the b quark weak decay.
In the heavy quark limit, the light quark originating from the heavy bottom quark decay is assumed to be energetic and to race quickly away from the weak interaction point. With a velocity v ∼ c (the speed of light), the light quarks move near the light cone. The light-cone dynamics can be used to describe the relativistic system along the light-front direction. The light-cone coordinates (x^+,x^-,x_⊥) of space-time are defined as x^± = (x^0±x^3)/√(2) (or (t±x^3)/√(2)) and x_⊥ = x^i with i = 1 and 2. x^± = 0 is called the light-front. The scalar product of any two four-dimensional vectors is given by a·b = a_μb^μ = a^+b^- + a^-b^+ - a_⊥·b_⊥. In the rest frame of the B_q^∗ meson, the final D and M mesons are back-to-back. The light-cone kinematic variables are defined as follows: p_B_q^∗=p_1= m_1/√(2)(1,1,0), p_D=p_2=(p_2^+,p_2^-,0), p_M=p_3=(p_3^-,p_3^+,0), k_i=x_i p_i+(0,0,k_iT), p_i^±=(E_i ± p)/√(2), t=2 p_1·p_2=m_1^2+m_2^2-m_3^2= 2 m_1 E_2, u=2 p_1·p_3=m_1^2-m_2^2+m_3^2= 2 m_1 E_3, s=2 p_2·p_3=m_1^2-m_2^2-m_3^2, s t +s u-t u=4 m_1^2 p^2, where the subscripts i = 1, 2 and 3 of the variables (such as the four-dimensional momentum p_i, energy E_i, and mass m_i) correspond to the B_q^∗, D and M mesons, respectively. k_i is the momentum of the light antiquark carrying the longitudinal momentum fraction x_i. k_iT is the transverse momentum. t, u and s are the Lorentz scalar variables. p is the common momentum of the final states. These momenta are shown in Fig.<ref>(a), Fig.<ref>(a) and Fig.<ref>(a). §.§ Wave functions As aforementioned, wave functions are the essential input parameters in the master pQCD formula for the HME calculation. Following the notations in Refs. <cit.>, the wave functions of the participating mesons are defined as the meson-to-vacuum HME. ⟨0|q̅_i(z)b_j(0)|B_q^∗(p,ϵ^∥)⟩ = f_B_q^∗/4∫d^4k e^-ik·z{ϵ̸^∥ [ m_B_q^∗ Φ_B_q^∗^v(k)- p̸ Φ_B_q^∗^t(k) ] }_ji, ⟨0|q̅_i(z)b_j(0)|B_q^∗(p,ϵ^⊥)⟩ = f_B_q^∗/4∫d^4k e^-ik·z{ϵ̸^⊥ [ m_B_q^∗ Φ_B_q^∗^V(k)- p̸ Φ_B_q^∗^T(k) ] }_ji, ⟨D_q(p)|c̅_i(0)q_j(z)|0⟩ = i f_D_q/4∫d^4k e^+ik·z {γ_5[ p̸ Φ_D_q^a(k) +m_D_q Φ_D_q^p(k) ] }_ji, ⟨P(p)|q̅_i(0)q^'_j(z)|0⟩= 1/4∫d^4k e^+ik·z {γ_5[ p̸ Φ_P^a(k) +μ_P Φ_P^p(k) +μ_P (n̸_+n̸_--1) Φ_P^t(k) ] }_ji, ⟨V(p,ϵ^∥)|q̅_i(0)q^'_j(z)|0⟩ = 1/4∫d^4k e^+ik·z {ϵ̸^∥ m_V Φ_V^v(k) +ϵ̸^∥p̸ Φ_V^t(k) -m_V Φ_V^s(k) }_ji, ⟨V(p,ϵ^⊥)|q̅_i(0)q^'_j(z)|0⟩= 1/4∫d^4k e^+ik·z {ϵ̸^⊥ m_V Φ_V^V(k) +ϵ̸^⊥p̸ Φ_V^T(k) + i m_V/p·n_+ γ_5 ε_μναβγ^μ ϵ^⊥ν p^αn_+^β Φ_V^A(k) }_ji, where f_B_q^∗ and f_D_q are the decay constants of the B_q^∗ meson and the D_q meson, respectively. ϵ^∥ and ϵ^⊥ are the longitudinal and transverse polarization vectors. n_+ = (1,0,0) and n_- = (0,1,0) are the positive and negative null vectors, i.e., n_±^2 = 0. The chiral factor μ_P relates the pseudoscalar meson mass to the quark masses in the following way <cit.>, μ_P= m_π^2/(m_u+m_d) = m_K^2/(m_u,d+m_s) ≈ (1.6±0.2) GeV. With the twist classification based on the power counting rule in the infinite momentum frame <cit.>, the wave functions Φ_B_q^∗,V^v,T and Φ_D_q,P^a are twist-2 (the leading twist), while the wave functions Φ_B_q^∗,V^t,V,s,A and Φ_D_q,P^p,t are twist-3. By integrating out the transverse momentum from the wave functions, one can obtain the corresponding distribution amplitudes.
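Before specifying the DA models, the kinematic relations above can be collected into a short numerical sketch; the masses below (in GeV, roughly corresponding to a B_s^∗ → D_s ρ configuration) are illustrative placeholders rather than fitted inputs.

```python
import math

# Illustrative masses in GeV (placeholders for m_1 = m_{B*}, m_2 = m_D, m_3 = m_M)
m1, m2, m3 = 5.415, 1.968, 0.775

# Lorentz scalars and energies from the relations in the text
t = m1**2 + m2**2 - m3**2      # t = 2 p1.p2 = 2 m1 E2
u = m1**2 - m2**2 + m3**2      # u = 2 p1.p3 = 2 m1 E3
s = m1**2 - m2**2 - m3**2      # s = 2 p2.p3
E2, E3 = t/(2*m1), u/(2*m1)

# common momentum of the final states, from s t + s u - t u = 4 m1^2 p^2
p = math.sqrt(s*t + s*u - t*u) / (2*m1)
assert abs(p**2 - (E2**2 - m2**2)) < 1e-9   # consistency: p^2 = E2^2 - m2^2
print(f"E2 = {E2:.3f} GeV, E3 = {E3:.3f} GeV, p = {p:.3f} GeV")
```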
In our calculation, the expressions of the DAs for the heavy-flavored mesons are <cit.> ϕ_B_q^∗^v,T(x) = A x x̅ exp{ -( m_q^2/x+m_b^2/x̅)/(8 ω_B_q^∗^2) }, ϕ_B_q^∗^t(x) = B (x̅-x)^2 exp{ -( m_q^2/x+m_b^2/x̅)/(8 ω_B_q^∗^2) }, ϕ_B_q^∗^V(x) = C {1+(x̅-x)^2} exp{ -( m_q^2/x+m_b^2/x̅)/(8 ω_B_q^∗^2) }, ϕ_D_q^a(x) = D x x̅ exp{ -( m_q^2/x+m_c^2/x̅)/(8 ω_D_q^2) }, ϕ_D_q^p(x) = E exp{ -( m_q^2/x+m_c^2/x̅)/(8 ω_D_q^2) }, where x and x̅ (≡ 1 - x) are the longitudinal momentum fractions of the light and heavy partons; m_b, m_c and m_q are the masses of the valence b, c and q quarks. The parameter ω_i determines the average transverse momentum of the partons, and ω_i ≈ m_i α_s(m_i). The parameters A, B, C, D and E are the normalization coefficients satisfying the conditions ∫_0^1dx ϕ_B_q^∗^v,t,V,T(x)=1, ∫_0^1dx ϕ_D_q^a,p(x) =1. The main distinguishing feature of the above DAs in Eqs.(<ref>-<ref>) is the exponential functions, where the exponential factors are proportional to the ratio of the parton mass squared m_i^2 to the momentum fraction x_i, i.e., m_i^2/x_i. Hence, the DAs of Eqs.(<ref>-<ref>) are generally consistent with the ansatz that the momentum fractions are shared among the valence quarks according to the quark masses, i.e., a light quark will carry a smaller fraction of the parton momentum than a heavy quark in a heavy-light system. In addition, the exponential functions strongly suppress the contributions from the endpoint x, x̅ → 0, and naturally provide an effective truncation of the endpoint and soft contributions. As is well known, there are many phenomenological DA models for the charmed mesons. Some of them have been collected in Eq.(30) of Ref.<cit.>. One of the DA models favored by the experimental data, which does not distinguish between twist-2 and twist-3, has the common expression ϕ_D_q(x) = 6 x x̅ {1+C_D_q (x̅-x) }, where the parameter C_D_u,d = 0.5 for the D_u,d meson, and C_D_s = 0.4 for the D_s meson. The expressions of the twist-2 quark-antiquark DAs for the light pseudoscalar and vector mesons have the expansion <cit.>, ϕ_P^a(x)=i f_P 6 x x̅ ∑_i=0 a^P_i C_i^3/2(ξ), ϕ_V^v(x) =f_V 6 x x̅ ∑_i=0 a^∥_i C_i^3/2(ξ) , ϕ_V^T(x) =f_V^T 6 x x̅ ∑_i=0 a^⊥_i C_i^3/2(ξ) , where f_P is the decay constant of the pseudoscalar meson P; f_V and f_V^T are the vector and tensor (also called the longitudinal and transverse) decay constants of the vector meson V. The nonperturbative parameters a_i^P,∥,⊥ are called the Gegenbauer moments, with a_0^P,∥,⊥ = 1 for the asymptotic forms, and a_i^P,∥,⊥ = 0 for odd i for the DAs of the G-parity eigenstates, such as the unflavored π, η, η^', ρ, ω, ϕ mesons. The short-hand notation is ξ = x - x̅ = 2 x - 1. The analytical expressions of the Gegenbauer polynomials C_i^j(ξ) are as below, C_0^j(ξ)=1, C_1^j(ξ)=2 j ξ, C_2^j(ξ)=2 j (j+1) ξ^2-j, ...... As for the twist-3 DAs of the light pseudoscalar and vector mesons, their asymptotic forms will be employed in this paper for simplicity <cit.>, i.e., ϕ_P^p(x) =+i f_P C_0^1/2(ξ) , ϕ_P^t(x) =-i f_P C_1^1/2(ξ), ϕ_V^t(x)=+3 f_V^T ξ^2, ϕ_V^s(x)=-3 f_V^T ξ, ϕ_V^V(x)=+3/4 f_V (1+ξ^2) , ϕ_V^A(x)=-3/2 f_V ξ. §.§ Decay amplitudes As aforementioned, the B_q^∗ → DM weak decays are induced practically by the b quark decay at the quark level.
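Before enumerating the decay topologies, the DA models of the previous subsection can be sanity-checked numerically; in the sketch below the quark masses and ω_D are illustrative assumptions, not the fitted values used in our calculation.

```python
import math
from scipy.integrate import quad

# Illustrative inputs (assumptions): light/charm quark masses in GeV and
# omega_D ~ m_c alpha_s(m_c); none of these come from this paper's tables.
m_q, m_c, omega = 0.35, 1.5, 0.5

# Exponential model phi_D^a(x) = D x xbar exp[-(m_q^2/x + m_c^2/xbar)/(8 omega^2)]
f = lambda x: x*(1 - x)*math.exp(-(m_q**2/x + m_c**2/(1 - x))/(8*omega**2))
D = 1.0 / quad(f, 0.0, 1.0)[0]    # fixes the normalization int_0^1 phi(x) dx = 1
x_peak = m_q/(m_q + m_c)          # exponent is least suppressed near small x,
print(f"D = {D:.2f}, peak near x = {x_peak:.2f}")  # i.e. where the light quark is soft

# The simple model phi_D(x) = 6 x xbar [1 + C_D (xbar - x)] is unit-normalized
C_D = 0.5
phi_D = lambda x: 6*x*(1 - x)*(1 + C_D*((1 - x) - x))
print(f"norm check: {quad(phi_D, 0.0, 1.0)[0]:.3f}")   # -> 1.000
```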
There are three possible types of Feynman diagrams for the B_q^∗ → DM decays with the pQCD approach, i.e., the color-allowed topologies of Fig.<ref> induced by the external W-emission interactions, the color-suppressed topologies of Fig.<ref> induced by the internal W-emission interactions, and the annihilation topologies of Fig.<ref> induced by the W-exchange interactions. In the emission topologies of Fig.<ref> (Fig.<ref>), the light spectator quark in the B_q^∗ meson is absorbed by the recoiling D_q (M_q) meson, and the exchanged gluons are space-like. In the annihilation topologies of Fig.<ref>, the exchanged gluons are time-like, and they then split into the light quark-antiquark pair. The first two diagrams of Fig.<ref>, Fig.<ref>, and Fig.<ref> are usually called the factorizable topologies. In the color-allowed (color-suppressed) factorizable emission topologies, the gluons are exchanged only between the initial B_q^∗ and the recoiling D_q (M_q) meson pair, and the emitted M (D^0) meson can be completely separated from the B_q^∗D_q (B_q^∗M_q) system. In the factorizable annihilation topologies, the gluons are exchanged only between the final DM meson pair, and the initial B_q^∗ meson can be directly separated from the DM meson pair. Hence, in the factorizable emission (annihilation) topologies, the integral of the wave functions for the emitted (initial) mesons reduces to the corresponding decay constant. For the factorizable topologies, the decay amplitudes have relatively simple structures, and can be written as the product of the decay constants and the hadron transition form factors. With the pQCD approach, the form factors can be written as the convolution integral of the hard scattering amplitudes and the hadron DAs. The last two diagrams of Fig.<ref>, Fig.<ref>, and Fig.<ref> are usually called the nonfactorizable topologies. In the nonfactorizable topologies, the emitted meson is entangled with the gluons radiated from the spectator quark, and hence no meson can be cleanly separated from the other mesons. Hence, the decay amplitudes for the nonfactorizable topologies have quite complicated structures, and the amplitude convolution integral involves the wave functions of all the participating mesons. The nonfactorizable emission topologies within the pQCD framework are also called the spectator scattering topologies with the QCDF approach. Especially for the color-suppressed emission topologies, the factorizable contributions are proportional to the small parameter a_2; hence, the nonfactorizable contributions, being proportional to the large Wilson coefficient C_1, should be significant. As widely recognized, the nonfactorizable contributions play an important role in clarifying or reducing some discrepancies between the theoretical results and the experimental data on the nonleptonic B meson weak decays. Among the three possible types of Feynman diagrams (Fig.<ref>, Fig.<ref>, and Fig.<ref>), only one or two of them will contribute to a specific B_q^∗ → DM decay. The explicit amplitudes for the concrete B_q^∗ → DP, DV decays have been collected in Appendixes <ref> and <ref>, and the building blocks in Appendixes <ref>, <ref> and <ref>. According to the polarization relations between the initial and final vector mesons, the amplitudes for the B_q^∗ → DV decays can generally be decomposed into the following structures <cit.>, A(B_q^∗→DV)=A_L(ϵ_B_q^∗^∥,ϵ_V^∥)+ A_N(ϵ_B_q^∗^⊥·ϵ_V^⊥)+iA_T ε_μναβ ϵ_B_q^∗^μ ϵ_V^ν p_B_q^∗^α p_V^β,
which is conventionally written in terms of the helicity amplitudes, H_0 =A_L(ϵ_B_q^∗^∥,ϵ_V^∥), H_∥ = √(2) A_N, H_⊥ = √(2) m_B_q^∗ p A_T. As is well known, it is commonly assumed that the SU(3) symmetry breaking interactions mix the isospin-singlet neutral members of the octet with the singlet states. The ideal mixing angle θ_V (with sinθ_V = 1/√(3)) between the octet and the singlet states holds almost exactly in practice for the physical ω and ϕ mesons, i.e., the valence quark components are ω = (uu̅+dd̅)/√(2) and ϕ = ss̅. As for the mixing among the light pseudoscalar mesons, the notation known as the quark-flavor basis description <cit.> is adopted here, and for simplicity, the possible gluonium and charmonium compositions are neglected for the time being, i.e., ([ η; η^' ]) =([cosθ_P -sinθ_P;sinθ_Pcosθ_P ])([ η_q; η_s ]), where the flavor states are η_q = (uu̅+dd̅)/√(2) and η_s = ss̅. The mixing angle determined from experimental data is θ_P = (39.3±1.0)^∘ <cit.>. The mass relations between the physical states (η and η^') and the flavor states (η_q and η_s) are m_η_q^2 = m_η^2 cos^2θ_P +m_η^'^2 sin^2θ_P -√(2) (f_η_s/f_η_q)(m_η^'^2- m_η^2) cosθ_P sinθ_P, m_η_s^2 = m_η^2 sin^2θ_P +m_η^'^2 cos^2θ_P -(f_η_q/(√(2) f_η_s)) (m_η^'^2- m_η^2) cosθ_P sinθ_P, where f_η_q and f_η_s are the decay constants. The amplitudes for the B_q^∗ → Dη, Dη^' decays can be written as A(B_q^∗→Dη) = cosθ_P A(B_q^∗→Dη_q) -sinθ_P A(B_q^∗→Dη_s) , A(B_q^∗→Dη^') = sinθ_P A(B_q^∗→Dη_q) +cosθ_P A(B_q^∗→Dη_s) . § NUMERICAL RESULTS AND DISCUSSION In the rest frame of the B_q^∗ meson, the branching ratios are defined as Br(B_q^∗→DV)= (1/24π) (p/(m_B^∗_q^2 Γ_B^∗_q)) {|H_0|^2+|H_∥|^2+|H_⊥|^2}, Br(B_q^∗→DP)= (1/24π) (p/(m_B^∗_q^2 Γ_B^∗_q)) | A(B_q^∗→DP)|^2, where Γ_B^∗_q is the full decay width of the B^∗_q meson. Unfortunately, the experimental data on Γ_B^∗_q are still unavailable. As is generally known, the electromagnetic radiation processes B^∗_q → B_qγ dominate the B^∗_q meson decays, and the mass differences between the B^∗_q and B_q mesons are very small, m_B_q^∗ - m_B_q ≲ 50 MeV <cit.>, so the photons from the B^∗_q → B_qγ process are too soft to be easily identified by the detectors at the existing experiments. A good approximation for the decay width is Γ_B_q^∗ ≈ Γ(B_q^∗→B_qγ). Theoretically, there is a close relation between the partial decay width for the B_q^∗ → B_qγ decay and the magnetic dipole (M1) moment of the B_q^∗ meson <cit.>, i.e., Γ(B_q^∗→B_qγ)= (4/3) α_ em k_γ^3 μ^2_h, where α_ em is the fine structure constant; k_γ = (m_B_q^∗^2-m_B_q^2)/(2m_B_q^∗) is the photon momentum in the rest frame of the B_q^∗ meson; μ_h is the M1 moment of the B_q^∗ meson. There are a large number of theoretical predictions for the partial decay width Γ(B_q^∗→B_qγ). Many of these have been collected in Table 7 of Ref.<cit.> and Tables 3 and 4 of Ref.<cit.>. However, there are big differences among the estimations of the various models, due to our inaccurate information about the M1 moments of mesons. In principle, the M1 moment of a hadron should be the sum of the M1 moments of its constituent quarks. As is well known, for an elementary particle, the M1 moment is proportional to the charge and inversely proportional to the mass. Hence, the M1 moment of the heavy-light B_q^∗ meson should be mainly affected by the M1 moment of the light quark rather than the bottom quark.
With the M1 moments of the light u, d and s quarks in terms of the nuclear magneton μ_N, i.e., μ_u ≃ 1.85 μ_N, μ_d ≃ -0.97 μ_N, and μ_s ≃ -0.61 μ_N <cit.>, one expects the relations Γ(B_u^∗→B_uγ) > Γ(B_d^∗→B_dγ) > Γ(B_s^∗→B_sγ), and therefore the relations Γ_B_u^∗ > Γ_B_d^∗ > Γ_B_s^∗. It is far beyond the scope of this paper to elaborate more on the details of the decay width Γ_B_q^∗. In our calculation, in order to give a quantitative estimation of the branching ratios for the B_q^∗ → DM decays, we will use the following values of the decay widths, Γ_B_u^∗ ∼ Γ(B_u^∗→B_uγ) ∼ 450 eV, Γ_B_d^∗ ∼ Γ(B_d^∗→B_dγ) ∼ 150 eV, Γ_B_s^∗ ∼ Γ(B_s^∗→B_sγ) ∼ 100 eV, which is basically consistent with the recent results of Ref.<cit.>. The numerical values of the other input parameters are collected in Table <ref>, where their central values will be fixed as the default inputs unless otherwise specified. In addition, in order to investigate the effects of different DA models, we explore three scenarios, * Scenario I: Eqs.(<ref>-<ref>) for the DAs ϕ_B^∗^v,t,V,T and ϕ_D_q^a,p; * Scenario II: ϕ_B^∗^v,t,V,T = Eq.(<ref>), and ϕ_D_q^a,p = Eq.(<ref>); * Scenario III: ϕ_B^∗^v,t,V,T = Eq.(<ref>), and ϕ_D_q^a,p = Eq.(<ref>). Our numerical results for the branching ratios are presented in Tables <ref> and <ref>, where the uncertainties come from the typical scale (1±0.1)t_i, the masses m_c and m_b, and the hadronic parameters (including the decay constants, Gegenbauer moments, and so on), respectively. The following are some comments. (1) Generally, the B_q^∗ → DP decay modes can be divided into three categories, i.e., the “T”, “C”, and “A” types are dominated by contributions from the color-allowed emission topologies of Fig.<ref>, the color-suppressed emission topologies of Fig.<ref>, and the pure annihilation topologies of Fig.<ref>, respectively. Each category can be further divided into two classes, i.e., the decay amplitudes of the classes “I” and “II” are proportional to the CKM factors V_cb V_ud^∗ ∼ Aλ^2 and V_cb V_us^∗ ∼ Aλ^3, respectively. There are many hierarchical relations among the branching ratios, such as Br(class T-I) > Br(class C-I) > Br(class A-I), Br(class T-II) > Br(class C-II) > Br(class A-II), Br(class X-I) > Br(class X-II), for X = T, C, A. These categories and relations also happen to hold true for the B_q^∗ → DV decays. For the “T” and “C” types of the B_q^∗ → DM decays, the annihilation contributions have a negligible impact on the branching ratios, being strongly suppressed relative to the emission contributions, as is stated by the QCDF approach <cit.>. For the “T” types of the B_q^∗ → DM decays, the factorizable contributions from the emission topologies to the branching ratios are dominant over the other contributions. However, for the “C” types of the B_q^∗ → DM decays, the nonfactorizable contributions to the branching ratios become very important, and sometimes even dominant. (2) By the law of conservation of angular momentum, three partial wave amplitudes (the s-, p-, and d-wave amplitudes) all contribute to the B_q^∗ → DV decays, while only the p-wave amplitude contributes to the B_q^∗ → DP decays. Besides, the branching ratios are proportional to the squares of the decay constants within the pQCD approach.
Given the magnitude relation f_V > f_P between the decay constants, one should expect the general relation of the branching ratios, Br(B_q^∗→DV) > Br(B_q^∗→DP), for final vector V and pseudoscalar P mesons carrying the same flavor, azimuthal and magnetic isospin quantum numbers. Due to the relations between the decay constants f_B_s^∗ > f_B_u,d^∗ and f_D_s > f_D_u,d, and the relations between the decay widths Γ_B_s^∗ < Γ_B_u,d^∗, the color-allowed B_s^∗0 → D_s^+ρ^- decay has a relatively large branching ratio. Furthermore, our results show that for the “T” types of the B_q^∗ → DV decays, the contributions of the longitudinal polarization are dominant. Taking the B_s^∗0 → D_s^+ρ^- decay as an example, the longitudinal polarization fraction f_0 ≡ |H_0|^2/(|H_0|^2+|H_∥|^2+|H_⊥|^2) ≈ 90% (85%), the parallel polarization fraction f_∥ ≡ |H_∥|^2/(|H_0|^2+|H_∥|^2+|H_⊥|^2) ≈ 9% (12%), and the perpendicular polarization fraction f_⊥ ≡ |H_⊥|^2/(|H_0|^2+|H_∥|^2+|H_⊥|^2) ≈ 1% (3%) with the DA scenario I (scenarios II and III), which generally agree with those obtained by the QCDF approach <cit.>. (3) As is well known, the theoretical results depend on the values of the input parameters. From the numbers in Tables <ref> and <ref>, it is clearly seen that the main uncertainty is due to the limited knowledge of the hadron DAs, as reflected, for example, in the large discrepancy among the different DA scenarios. Besides the theoretical uncertainties listed in Tables <ref> and <ref>, the CKM parameters will bring some 6% uncertainties. For a different value of the decay width Γ_B_q^∗, the branching ratios in Tables <ref> and <ref> should be multiplied by the factors 450eV/Γ_B_u^∗, 150eV/Γ_B_d^∗, and 100eV/Γ_B_s^∗ for the B_u^∗, B_d^∗, and B_s^∗ weak decays, respectively. To reduce the theoretical uncertainties, one of the commonly used methods is to exploit ratios of the branching ratios, such as Br(B_u^∗-→D_u^0π^-)/Br(B_u^∗-→D_u^0K^-) ≈ f_π^2/(λ^2 f_K^2), Br(B_u^∗-→D_u^0ρ^-)/Br(B_u^∗-→D_u^0K^∗-) ≈ f_ρ^2/(λ^2 f_K^∗^2), Br(B_s^∗0→D_u^0ϕ)/Br(B_s^∗0→D_u^0K^∗0) ≈ λ^2 f_ϕ^2/f_K^∗^2. (4) The branching ratios for the B_q^∗ → DM decays are smaller by at least five orders of magnitude than the branching ratios for the B_q → DM decays <cit.>. This fact implies that the possible background from the B_q^∗ → DM decays can be safely neglected when the B_q → DM decays are analyzed, but not vice versa, i.e., one of the main pollution backgrounds for the B_q^∗ → DM decays would come from the B_q → DM decays, even if the invariant mass of the DM meson pair could be used to distinguish the B_q^∗ meson from the B_q meson experimentally. (5) The event numbers of the B_q^∗ mesons in a data sample can be calculated with the following formula, N(B_q^∗) = L_ int × σ_bb̅ × f_B_q × f_B_q^∗/ f_B_q, f_B_q^∗ = 2 × f_B_q^∗B_q^∗+2 × f_B_q^∗B_q^∗π+f_B_q^∗B_q+ c.c.+f_B_q^∗B_qπ+ c.c+⋯, where L_ int is the integrated luminosity, σ_bb̅ denotes the bb̅ pair production cross section, and f_B_q, f_B_q^∗B_q^∗, ⋯ refer to the production fractions of all the B_q mesons, the B_q^∗B_q^∗ meson pairs, ⋯. The production fractions of specific modes at the center-of-mass energy of the Υ(5S) resonance <cit.> are listed in Table <ref>. With the large production cross section of the process e^+e^- → bb̅ at the Υ(5S) peak, σ_bb̅ = (0.340±0.016)nb <cit.>, it is expected that some 3.3×10^9 B_u,d^∗ and 1.2×10^9 B_s^∗ mesons could be available per 10ab^-1 Υ(5S) dataset.
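The yield estimate above amounts to a few lines of arithmetic, sketched below; since the entries of the production-fraction table are not reproduced here, the combined fractions in the sketch are illustrative placeholders chosen only to be consistent with the quoted numbers.

```python
# Expected B* yields per 10 ab^-1 at the Upsilon(5S), following
# N(B*) = L_int x sigma_bb x f_B x (f_B*/f_B).  The combined fractions
# below are illustrative placeholders (assumptions), tuned only to be
# consistent with the yields quoted in the text.
L_int = 10e3            # integrated luminosity, 10 ab^-1 expressed in fb^-1
sigma_bb = 0.340e6      # sigma(e+e- -> bb) = 0.340 nb expressed in fb
N_bb = L_int * sigma_bb                        # ~3.4e9 bb pairs

f_Bud_star = 0.97       # assumed combined fraction for B_u* + B_d* per bb event
f_Bs_star = 0.35        # assumed combined fraction for B_s* per bb event

print(f"N(B_u,d*) ~ {N_bb*f_Bud_star:.2e}")    # ~3.3e9, as quoted
print(f"N(B_s*)   ~ {N_bb*f_Bs_star:.2e}")     # ~1.2e9, as quoted

# With Br ~ 1e-9 for the color-allowed T-I modes, O(1-10) events follow
print(f"events for Br = 1e-9: ~{N_bb*f_Bud_star*1e-9:.1f}")
```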
The branching ratios of the color-allowed “T-I” class B_q^∗ → DM decays can reach up to O(10^-9) or more, which is essentially consistent with those obtained by the QCDF approach <cit.>. Hence, a few events of the B_q^∗ → D_qπ^- and B_u,d^∗ → D_u,dρ^- decays, and dozens of the B_s^∗0 → D_s^+ρ^- decay, might be available at the forthcoming SuperKEKB. At high energy hadron colliders, for example, given the cross section at the LHCb, σ_bb̅ ≈ 100 μb <cit.>, with a similar ratio f_B_u = f_B_d = 0.344±0.021 and f_B_s = 0.115±0.013 at Tevatron <cit.> and a similar ratio f_B_q^∗/f_B_q as at the Υ(5S) resonance <cit.>, some 9.8×10^13 B_u,d^∗ events and 2.2×10^13 B_s^∗ events per ab^-1 dataset could be available at the LHCb, corresponding to more than 10^5 B_s^∗0 → D_s^+ρ^- decay events and over 10^4 B_q^∗ → D_qπ^- and B_u,d^∗ → D_u,dρ^- decay events, which should be easily measurable in future LHCb runs. § SUMMARY Besides the dominant electromagnetic decay mode, the ground vector B_q^∗ mesons (q = u, d and s) can also decay via the weak interactions within the standard model. A large number of B_q^∗ mesons are expected to be accumulated with the running LHC and the forthcoming SuperKEKB, which makes it seemingly possible to explore the B_q^∗ meson weak decays experimentally. A theoretical study is necessary to offer a ready reference. In this paper, we investigated the B_q^∗ → DP, DV decays with the phenomenological pQCD approach. It is found that the color-allowed B_q^∗ → D_qρ^- decays have branching ratios ≳ 10^-9, and should be promisingly accessible at the high luminosity experiments in the future. § ACKNOWLEDGMENTS The work is supported by the National Natural Science Foundation of China (Grant Nos. U1632109, 11547014 and 11475055). § THE AMPLITUDE FOR THE B_Q^∗ → DP DECAYS A(B_u^∗-→D_u^0π^-)=FV_cb V_ud^∗ {∑_iM^T_i,P +∑_jM^C_j,P}, A(B_u^∗-→D_u^0K^-) =FV_cb V_us^∗ {∑_iM^T_i,P +∑_jM^C_j,P}, A(B_d^∗0→D_d^+π^-) =FV_cb V_ud^∗ {∑_iM^T_i,P +∑_jM^A_j,P}, A(B_d^∗0→D_d^+K^-) =FV_cb V_us^∗ ∑_iM^T_i,P, √(2)A(B_d^∗0→D_u^0π^0) =FV_cb V_ud^∗ { -∑_iM^C_i,P +∑_jM^A_j,P}, √(2)A(B_d^∗0→D_u^0η_q) =FV_cb V_ud^∗ {∑_iM^C_i,P +∑_jM^A_j,P}, A(B_d^∗0→D_u^0K^0) =FV_cb V_us^∗ ∑_iM^C_i,P, A(B_d^∗0→D_s^+K^-) =FV_cb V_ud^∗ ∑_iM^A_i,P, A(B_s^∗0→D_s^+π^-) =FV_cb V_ud^∗ ∑_iM^T_i,P, A(B_s^∗0→D_s^+K^-) =FV_cb V_us^∗ {∑_iM^T_i,P +∑_jM^A_j,P}, A(B_s^∗0→D_d^+π^-) =FV_cb V_us^∗ ∑_iM^A_i,P, √(2)A(B_s^∗0→D_u^0π^0) =FV_cb V_us^∗ ∑_iM^A_i,P, √(2)A(B_s^∗0→D_u^0η_q) =FV_cb V_us^∗ ∑_iM^A_i,P, A(B_s^∗0→D_u^0η_s) =FV_cb V_us^∗ ∑_iM^C_i,P, A(B_s^∗0→D_u^0K^0) =FV_cb V_ud^∗ ∑_iM^C_i,P, F= (G_F/√(2)) π (C_F/N_c) f_B_q^∗ f_D, where M^k_i,j are the amplitude building blocks. The superscripts k = T, C, A correspond to the color-allowed emission topologies of Fig.<ref>, the color-suppressed emission topologies of Fig.<ref>, and the annihilation topologies of Fig.<ref>. The subscripts i = a, b, c, d correspond to the diagram indices. The subscripts j = P, L, N, T correspond to the different helicity amplitudes. The analytical expressions of the amplitude building blocks M^k_i,j are given in Appendixes <ref>, <ref> and <ref>.
§ THE AMPLITUDE FOR THE B_Q^∗ → DV DECAYSiA_λ(B_u^∗-→D_u^0ρ^-)=FV_cb V_ud^∗ {∑_iM^T_i,λ +∑_jM^C_j,λ}, iA_λ(B_u^∗-→D_u^0K^∗-)=FV_cb V_us^∗ {∑_iM^T_i,λ +∑_jM^C_j,λ}, iA_λ(B_d^∗0→D_d^+ρ^-) =FV_cb V_ud^∗ {∑_iM^T_i,λ +∑_jM^A_j,λ}, i A_λ(B_d^∗0→D_d^+K^∗-) =FV_cb V_us^∗ ∑_iM^T_i,λ, i √(2)A_λ(B_d^∗0→D_u^0ρ^0) =FV_cb V_ud^∗ { -∑_iM^C_i,λ +∑_jM^A_j,λ}, i √(2)A_λ(B_d^∗0→D_u^0ω) =FV_cb V_ud^∗ {∑_iM^C_i,λ +∑_jM^A_j,λ}, i A_λ(B_d^∗0→D_u^0K^∗0) =FV_cb V_us^∗ ∑_iM^C_i,λ, i A_λ(B_d^∗0→D_s^+K^∗-) =FV_cb V_ud^∗ ∑_iM^A_i,λ, i A_λ(B_s^∗0→D_s^+ρ^-) =FV_cb V_ud^∗ ∑_iM^T_i,λ, i A_λ(B_s^∗0→D_s^+K^∗-) =FV_cb V_us^∗ {∑_iM^T_i,λ +∑_jM^A_j,λ}, i A_λ(B_s^∗0→D_d^+ρ^-) =FV_cb V_us^∗ ∑_iM^A_i,λ, i √(2)A_λ(B_s^∗0→D_u^0ρ^0) =FV_cb V_us^∗ ∑_iM^A_i,λ, i √(2)A_λ(B_s^∗0→D_u^0ω) =FV_cb V_us^∗ ∑_iM^A_i,λ, i A_λ(B_s^∗0→D_u^0ϕ) =FV_cb V_us^∗ ∑_iM^C_i,λ, i A_λ(B_s^∗0→D_u^0K^∗0) =FV_cb V_ud^∗ ∑_iM^C_i,λ, where the index λ corresponds to three different helicity amplitudes, i.e., λ = L, N, T.§ AMPLITUDE BUILDING BLOCKS FOR THE COLOR-ALLOWEDB_Q^∗ → D_QM DECAYS The expressions of the amplitude building blocks M^T_i,j for the color-allowed topologies are presented as follows, where the subscript i corresponds to the diagram indices of Fig.<ref>; and j corresponds to the different helicity amplitudes. M^T_a,P =2 m_1 p ∫_0^1dx_1∫_0^1dx_2∫_0^∞b_1db_1∫_0^∞b_2db_2H^T_f(α^T,β^T_a,b_1,b_2)E^T_f(t^T_a)× α_s(t^T_a)a_1(t^T_a) ϕ_B_q^∗^v(x_1) {ϕ_D^a(x_2) ( m_1^2 x̅_2+m_3^2 x_2 )+ ϕ_D^p(x_2)m_2 m_b},M^T_a,L = ∫_0^1dx_1∫_0^1dx_2∫_0^∞b_1db_1∫_0^∞b_2db_2H^T_f(α^T,β^T_a,b_1,b_2)E^T_f(t^T_a) α_s(t^T_a)×a_1(t^T_a) ϕ_B_q^∗^v(x_1) {ϕ_D^a(x_2) ( m_1^2 s x̅_2+m_3^2 t x_2 ) + ϕ_D^p(x_2)m_2 m_b u },M^T_a,N =m_1 m_3 ∫_0^1dx_1∫_0^1dx_2∫_0^∞b_1db_1∫_0^∞b_2db_2H^T_f(α^T,β^T_a,b_1,b_2)E^T_f(t^T_a)× α_s(t^T_a)a_1(t^T_a) ϕ_B_q^∗^V(x_1) {ϕ_D^a(x_2)(2 m_2^2 x_2 -t) -2m_2 m_b ϕ_D^p(x_2) },M^T_a,T =2 m_1 m_3 ∫_0^1dx_1∫_0^1dx_2∫_0^∞b_1db_1∫_0^∞b_2db_2H^T_f(α^T,β^T_a,b_1,b_2)E^T_f(t^T_a)× α_s(t^T_a)a_1(t^T_a) ϕ_B_q^∗^V(x_1) ϕ_D^a(x_2), M^T_b,P =2 m_1 p ∫_0^1dx_1∫_0^1dx_2∫_0^∞b_1db_1∫_0^∞b_2db_2H^T_f(α^T,β^T_b,b_2,b_1)E^T_f(t^T_b)× α_s(t^T_b) {ϕ_B_q^∗^v(x_1) [ 2 m_2 m_c ϕ_D^p(x_2)-ϕ_D^a(x_2)(m_2^2 x̅_1+m_3^2 x_1) ]+ ϕ_B_q^∗^t(x_1) [2 m_1 m_2 ϕ_D^p(x_2) x̅_1 -m_1m_c ϕ_D^a(x_2) ] }a_1(t^T_b),M^T_b,L = ∫_0^1dx_1∫_0^1dx_2∫_0^∞b_1db_1∫_0^∞b_2db_2H^T_f(α^T,β^T_b,b_2,b_1)E^T_f(t^T_b) α_s(t^T_b)× a_1(t^T_b) {ϕ_B_q^∗^t(x_1) [ 2 m_1 m_2 ϕ_D^p(x_2) (s-u x_1) -m_1 m_c s ϕ_D^a(x_2) ]+ ϕ_B_q^∗^v(x_1) [ ϕ_D^a(x_2) ( m_3^2 tx_1 -m_2^2 u x̅_1 ) +2 m_2 m_c u ϕ_D^p(x_2) ] },M^T_b,N =m_3 ∫_0^1dx_1∫_0^1dx_2∫_0^∞b_1db_1∫_0^∞b_2db_2H^T_f(α^T,β^T_b,b_2,b_1)E^T_f(t^T_b)× α_s(t^T_b) {ϕ_B_q^∗^V(x_1)m_1 [ ϕ_D^a(x_2) (2 m_2^2-t x_1) -4 m_2 m_c ϕ_D^p(x_2) ]+ ϕ_B_q^∗^T(x_1) [ ϕ_D^a(x_2) t m_c+ϕ_D^p(x_2) 2 m_2 (2 m_1^2 x_1-t)] }a_1(t^T_b),M^T_b,T =2 m_3 ∫_0^1dx_1∫_0^1dx_2∫_0^∞b_1db_1∫_0^∞b_2db_2H^T_f(α^T,β^T_b,b_2,b_1)E^T_f(t^T_b) α_s(t^T_b)×a_1(t^T_b) {ϕ_B_q^∗^T(x_1) [ ϕ_D^p(x_2) 2 m_2-ϕ_D^a(x_2) m_c]-ϕ_B_q^∗^V(x_1) ϕ_D^a(x_2) m_1 x_1}, M^T_c,P = 2 m_1 p/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3 δ(b_1-b_2)× ϕ_P^a(x_3) α_s(t^T_c)C_2(t^T_c) {ϕ_B_q^∗^v(x_1) ϕ_D^a(x_2)(2 m_2^2 x_2+s x̅_3-t x_1)+ ϕ_B_q^∗^t(x_1) ϕ_D^p(x_2) m_1 m_2 (x_1-x_2) } H^T_n(α^T,β^T_c,b_3,b_2) E_n(t^T_c),M^T_c,L = 1/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^T_n(α^T,β^T_c,b_3,b_2)× δ(b_1-b_2) E_n(t^T_c) ϕ_V^v(x_3) {ϕ_B_q^∗^v(x_1) ϕ_D^a(x_2)u (2 m_2^2 x_2+s x̅_3-t x_1)+ ϕ_B_q^∗^t(x_1) ϕ_D^p(x_2)m_1 m_2 (u x_1-s x_2-2 m_3^2 x̅_3)} α_s(t^T_c) C_2(t^T_c),M^T_c,N = m_3/N_c 
∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3×H^T_n(α^T,β^T_c,b_3,b_2)E_n(t^T_c) α_s(t^T_c)C_2(t^T_c) δ(b_1-b_2)× {ϕ_B_q^∗^V(x_1) ϕ_D^a(x_2) ϕ_V^V(x_3)2 m_1(t x_1-2 m_2^2 x_2-s x̅_3)+ ϕ_B_q^∗^T(x_1) ϕ_D^p(x_2) ϕ_V^V(x_3)m_2(t x_2+u x̅_3-2 m_1^2 x_1)+ ϕ_B_q^∗^T(x_1) ϕ_D^p(x_2) ϕ_V^A(x_3)2 m_1 m_2 p(x_2-x̅_3) },M^T_c,T = m_3/N_c p ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3×H^T_n(α^T,β^T_c,b_3,b_2)E_n(t^T_c) α_s(t^T_c)C_2(t^T_c) δ(b_1-b_2)× {ϕ_B_q^∗^V(x_1) ϕ_D^a(x_2) ϕ_V^A(x_3)2(2 m_2^2 x_2+s x̅_3-t x_1)+ ϕ_B_q^∗^T(x_1) ϕ_D^p(x_2) ϕ_V^A(x_3)r_2(2 m_1^2 x_1-t x_2-u x̅_3)+ ϕ_B_q^∗^T(x_1) ϕ_D^p(x_2) ϕ_V^V(x_3)2 m_2 p(x̅_3-x_2) }, M^T_d,P = 2 m_1 p/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3 ϕ_P^a(x_3)× δ(b_1-b_2) α_s(t^T_d)C_2(t^T_d)E_n(t^T_d) {ϕ_B_q^∗^v(x_1) ϕ_D^a(x_2)s (x_2-x_3) + ϕ_B_q^∗^t(x_1) ϕ_D^p(x_2) m_1 m_2 (x_1-x_2) }H^T_n(α^T,β^T_d,b_3,b_2),M^T_d,L = 1/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3 δ(b_1-b_2)×E_n(t^T_d) α_s(t^T_d)C_2(t^T_d) ϕ_V^v(x_3) {ϕ_B_q^∗^v(x_1) ϕ_D^a(x_2) 4 m_1^2 p^2 (x_2-x_3)+ ϕ_B_q^∗^t(x_1) ϕ_D^p(x_2) m_1 m_2 ( u x_1-s x_2-2 m_3^2 x_3) }H^T_n(α^T,β^T_d,b_3,b_2),M^T_d,N = m_2 m_3/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3 δ(b_1-b_2)× ϕ_B_q^∗^T(x_1) ϕ_D^p(x_2) α_s(t^T_d)C_2(t^T_d) {ϕ_V^V(x_3)( t x_2+u x_3-2 m_1^2 x_1)+ ϕ_V^A(x_3) 2 m_1 p (x_2-x_3)}H^T_n(α^T,β^T_d,b_3,b_2) E_n(t^T_d),M^T_d,T = m_2 m_3/N_c m_1 p ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3 δ(b_1-b_2)× ϕ_B_q^∗^T(x_1) ϕ_D^p(x_2) α_s(t^T_d)C_2(t^T_d) {ϕ_V^A(x_3)( 2 m_1^2 x_1-t x_2-u x_3)+ ϕ_V^V(x_3) 2 m_1 p (x_3-x_2)}H^T_n(α^T,β^T_d,b_3,b_2) E_n(t^T_d),where N_c = 3 is the color number. α_s is the strong coupling constant. C_1,2 are the Wilson coefficients. The parameter a_i is defined asa_1 =C_1+1/N_c C_2,a_2 =C_2+1/N_c C_1.The functions H_f,n^T and the Sudakov factors E_f,n^T are defined as follows, where the subscripts f and n correspond to the factorizable and nonfactorizable topologies.H_f^T(α,β,b_i,b_j)= K_0(b_i√(-α)) {θ(b_i-b_j) K_0(b_i√(-β)) I_0(b_j√(-β))+ (b_i↔ b_j) }, H_n^T(α,β,b_i,b_j)= {θ(b_i-b_j)K_0(b_i√(-α)) I_0(b_j√(-α)) + (b_i↔ b_j)} × {θ(-β)K_0(b_i√(-β)) +π/2 θ(+β) [ i J_0(b_i√(β))- Y_0(b_i√(β)) ] }, E_f^T(t) = exp{ -S_B_q^∗(t)-S_D(t) }, E_n(t) = exp{ -S_B_q^∗(t)-S_D(t)-S_M(t) }, S_B_q^∗(t)=s(x_1,b_1,p_1^+) +2∫_1/b_1^tdμ/μγ_q, S_D(t)=s(x_2,b_2,p_2^+) + s(x̅_2,b_2,p_2^+) +2∫_1/b_2^tdμ/μγ_q, S_M(t)=s(x_3,b_3,p_3^+) + s(x̅_3,b_3,p_3^+) +2∫_1/b_3^tdμ/μγ_q, where I_0, J_0, K_0 and Y_0 are the Bessel functions; γ_q = -α_s/π is the quark anomalous dimension; the expression of s(x,b,Q) can be found in the appendix of Ref.<cit.>; α^T and β_i^T are the virtualities of the gluon and quark propagators; the subscripts of the quark virtuality β_i^T and the typical scale t_i^T correspond to the diagram indices of Fig.<ref>.α^T =x_1^2 m_1^2+x_2^2 m_2^2-x_1 x_2 t, β_a^T =x_2^2 m_2^2-x_2 t+m_1^2-m_b^2, β_b^T =x_1^2 m_1^2-x_1 t+m_2^2-m_c^2, β_c^T = α^T+x̅_3^2 m_3^2-x_1 x̅_3 u+x_2 x̅_3 s, β_d^T = α^T+x_3^2 m_3^2-x_1 x_3 u+x_2 x_3 s,t_a(b)^T = max(√(-α^T),√(|β_a(b)^T|),1/b_1,1/b_2),t_c(d)^T = max(√(-α^T),√(|β_c(d)^T|),1/b_2,1/b_3).§ AMPLITUDE BUILDING BLOCKS FOR THE COLOR-SUPPRESSED B_Q^∗ → DM_Q DECAYS The expressions of the amplitude building blocks M^C_i,j for the color-suppressed topologies are displayed as follows, where the subscript i corresponds to the diagram indices of Fig.<ref>; and j corresponds to the different helicity amplitudes. 
M^C_a,P =-∫_0^1dx_1∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_3db_3 ϕ_B_q^∗^v(x_1) α_s(t^C_a)a_2(t^C_a)× H^C_f(α^C,β^C_a,b_1,b_3) { 2 m_1 p ϕ_P^a(x_3) (m_1^2 x̅_3+m_2^2 x_3)+ 2 m_1 p μ_P m_b ϕ_P^p(x_3)+ μ_P m_bt ϕ_P^t(x_3) } E^C_f(t^C_a),M^C_a,L = -∫_0^1dx_1∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_3db_3H^C_f(α^C,β^C_a,b_1,b_3)× α_s(t^C_a)a_2(t^C_a) ϕ_B_q^∗^v(x_1) {ϕ_V^v(x_3)(m_1^2 s x̅_3+m_2^2 u x_3)+ m_3 m_b t ϕ_V^t(x_3)+ 2 m_1 pm_3 m_b ϕ_V^s(x_3)} E^C_f(t^C_a),M^C_a,N = ∫_0^1dx_1∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_3db_3H^C_f(α^C,β^C_a,b_1,b_3)× α_s(t^C_a)a_2(t^C_a) ϕ_B_q^∗^V(x_1) {ϕ_V^V(x_3) m_1 m_3 (t-s x_3)+ m_1 m_b s ϕ_V^T(x_3)+ 2 m_3 pm_1^2 x̅_3 ϕ_V^A(x_3)} E^C_f(t^C_a),M^C_a,T = -∫_0^1dx_1∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_3db_3H^C_f(α^C,β^C_a,b_1,b_3)× α_s(t^C_a)a_2(t^C_a) ϕ_B_q^∗^V(x_1) { (m_3/p) ϕ_V^A(x_3) (t-s x_3)+ ϕ_V^V(x_3) 2 m_1 m_3 x̅_3 + ϕ_V^T(x_3) 2 m_1 m_b} E^C_f(t^C_a), M^C_b,P =2 m_1 p ∫_0^1dx_1∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_3db_3H^C_f(α^C,β^C_b,b_3,b_1)E^C_f(t^C_b) α_s(t^C_b)×a_2(t^C_b) {ϕ_B_q^∗^v(x_1) ϕ_P^a(x_3)(m_3^2 x̅_1+m_2^2 x_1) -ϕ_B_q^∗^t(x_1) ϕ_P^p(x_3)2 m_1 μ_P x̅_1},M^C_b,L = ∫_0^1dx_1∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_3db_3H^C_f(α^C,β^C_b,b_3,b_1)E^C_f(t^C_b) α_s(t^C_b)a_2(t^C_b)× {ϕ_B_q^∗^v(x_1) ϕ_V^v(x_3)(m_2^2 u x_1-m_3^2 t x̅_1) -ϕ_B_q^∗^t(x_1) ϕ_V^s(x_3)4 m_1^2 m_3 p x̅_1},M^C_b,N =m_1 m_3 ∫_0^1dx_1∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_3db_3H^C_f(α^C,β^C_b,b_3,b_1)E^C_f(t^C_b)× α_s(t^C_b)a_2(t^C_b) ϕ_B_q^∗^V(x_1) {ϕ_V^V(x_3)(s-t x_1)+ϕ_V^A(x_3)2 m_1 p x̅_1},M^C_b,T = -m_3/p ∫_0^1dx_1∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_3db_3H^C_f(α^C,β^C_b,b_3,b_1)E^C_f(t^C_b)× α_s(t^C_b)a_2(t^C_b) ϕ_B_q^∗^V(x_1) {ϕ_V^A(x_3)(s-t x_1)+ϕ_V^V(x_3)2 m_1 p x̅_1}, M^C_c,P = ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^C_n(α^C,β^C_c,b_2,b_3)× δ(b_1-b_3) {ϕ_B_q^∗^t(x_1) ϕ_D^a(x_2) m_1 μ_P [ ϕ_P^t(x_3)(t x_1-2 m_2^2 x̅_2-s x_3) + ϕ_P^p(x_3)2 m_1 p (x_3-x_1) ] - ϕ_B_q^∗^v(x_1) ϕ_P^a(x_3)2 m_1 p [ ϕ_D^p(x_2)m_2 m_c+ ϕ_D^a(x_2)(s x̅_2+2 m_3^2 x_3-u x_1)] }E_n(t^C_c) α_s(t^C_c) C_1(t^C_c)/N_c,M^C_c,L = ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^C_n(α^C,β^C_c,b_2,b_3)× δ(b_1-b_3) {ϕ_B_q^∗^t(x_1) ϕ_D^a(x_2) m_1m_3 [ ϕ_V^t(x_3)(t x_1-2 m_2^2 x̅_2-s x_3) + ϕ_V^s(x_3)2 m_1 p (x_3-x_1)] + ϕ_B_q^∗^v(x_1) ϕ_V^v(x_3) [ -ϕ_D^p(x_2)m_2 m_c u+ ϕ_D^a(x_2)4 m_1^2 p^2(x_1-x̅_2)] }E_n(t^C_c) α_s(t^C_c)C_1(t^C_c)/N_c,M^C_c,N = 1/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^C_n(α^C,β^C_c,b_2,b_3)× δ(b_1-b_3)E_n(t^C_c) α_s(t^C_c){ϕ_B_q^∗^V(x_1) ϕ_D^p(x_2) ϕ_V^V(x_3) 2 m_1 m_2 m_3 m_c+ ϕ_B_q^∗^T(x_1) ϕ_D^a(x_2) ϕ_V^T(x_3) [ m_1^2 s (x̅_2-x_1) +m_3^2 t (x_3-x̅_2) ] }C_1(t^C_c),M^C_c,T = 2/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^C_n(α^C,β^C_c,b_2,b_3)×C_1(t^C_c) {ϕ_B_q^∗^T(x_1) ϕ_D^a(x_2) ϕ_V^T(x_3) [ m_1^2 (x_1-x̅_2) + m_3^2 (x̅_2-x_3) ]- ϕ_B_q^∗^V(x_1) ϕ_D^p(x_2) ϕ_V^A(x_3) m_2 m_3 m_c/p}E_n(t^C_c) α_s(t^C_c) δ(b_1-b_3), M^C_d,P = 1/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^C_n(α^C,β^C_d,b_2,b_3)E_n(t^C_d)× δ(b_1-b_3) α_s(t^C_d)C_1(t^C_d) ϕ_D^a(x_2) {ϕ_B_q^∗^v(x_1) ϕ_P^a(x_3)2 m_1 p s (x_2-x_3)+ ϕ_B_q^∗^t(x_1) m_1 μ_P [ ϕ_P^p(x_3)2 m_1 p(x_3-x_1)+ ϕ_P^t(x_3) (2 m_2^2 x_2+s x_3-t x_1)] },M^C_d,L = 1/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^C_n(α^C,β^C_d,b_2,b_3)E_n(t^C_d)× δ(b_1-b_3) α_s(t^C_d)C_1(t^C_d) ϕ_D^a(x_2) {ϕ_B_q^∗^v(x_1) ϕ_V^v(x_3)4 m_1^2 p^2 (x_2-x_3)+ ϕ_B_q^∗^t(x_1) m_1 m_3 [ ϕ_V^s(x_3)2 m_1 p(x_3-x_1)+ ϕ_V^t(x_3) (2 m_2^2 x_2+s x_3-t x_1)] },M^C_d,N = 1/N_c 
∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^C_n(α^C,β^C_d,b_2,b_3)E_n(t^C_d) × δ(b_1-b_3) α_s(t^C_d)C_1(t^C_d) ϕ_B_q^∗^T(x_1) ϕ_D^a(x_2) ϕ_V^T(x_3) { m_1^2 s (x_1-x_2)+m_3^2 t (x_2-x_3) },M^C_d,T = 2/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞db_1∫_0^∞b_2db_2∫_0^∞b_3db_3H^C_n(α^C,β^C_d,b_2,b_3)E_n(t^C_d) × δ(b_1-b_3) α_s(t^C_d)C_1(t^C_d) ϕ_B_q^∗^T(x_1) ϕ_D^a(x_2) ϕ_V^T(x_3) { m_1^2 (x_2-x_1)+m_3^2 (x_3-x_2) }. The functions H_f,n^C have the similar expressions for H_f,n^T, i.e.,H_f^C(α,β,b_i,b_j)= H_f^T(α,β,b_i,b_j), H_n^C(α,β,b_i,b_j)= H_n^T(α,β,b_i,b_j).The Sudakov factor E_f^C are defined asE_f^C(t) = exp{ -S_B_q^∗(t)-S_M(t) }, and the expressions for E_n(t), S_B_q^∗(t), S_D(t) and S_M(t) are the same as those given in the Appendix <ref>. α^C and β_i^C are the gluon and quark virtualities; the subscripts of β_i^C and t_i^C correspond to the diagram indices of Fig.<ref>.α^C =x_1^2 m_1^2+x_3^2 m_3^2-x_1 x_3 u, β_a^C =x_3^2 m_3^2-x_3 u+m_1^2-m_b^2, β_b^C =x_1^2 m_1^2-x_1 u+m_3^2, β_c^C = α^C+x̅_2^2 m_2^2-x_1 x̅_2 t+x_3 x̅_2 s-m_c^2, β_d^C = α^C+x_2^2 m_2^2-x_1 x_2 t+x_2 x_3 s,t_a(b)^C = max(√(-α^C),√(|β_a(b)^C|),1/b_1,1/b_3),t_c(d)^C = max(√(-α^C),√(|β_c(d)^C|),1/b_2,1/b_3).§ AMPLITUDE BUILDING BLOCKS FOR THE ANNIHILATIONB^∗0 → DM DECAYS The expressions of the amplitude building blocks M^A_i,j for the annihilation topologies are listed as follows, where the subscript i corresponds to the diagram indices of Fig.<ref>; and j corresponds to different helicity amplitudes. M^A_a,P = ∫_0^1dx_2∫_0^1dx_3∫_0^∞b_2db_2∫_0^∞b_3db_3H^A_f(α^A,β^A_a,b_2,b_3)E^A_f(t^A_a) α_s(t^A_a)× a_2(t^A_a) {ϕ_D^p(x_2) [ ϕ_P^a(x_3) 4 m_1 m_2 m_c p+ϕ_P^p(x_3) 4 m_1 m_2 μ_P p x_3+ ϕ_P^t(x_3) 2 m_2 μ_P (t+u x̅_3)] - ϕ_D^a(x_2) [ ϕ_P^p(x_3)2 m_1 m_c μ_P p+ ϕ_P^a(x_3) 2 m_1 p(m_1^2 x̅_3+m_2^2 x_3)+ϕ_P^t(x_3) m_c μ_P t ] },M^A_a,L = ∫_0^1dx_2∫_0^1dx_3∫_0^∞b_2db_2∫_0^∞b_3db_3H^A_f(α^A,β^A_a,b_2,b_3)E^A_f(t^A_a) α_s(t^A_a)× a_2(t^A_a) {ϕ_D^p(x_2) [ ϕ_V^v(x_3) 2 m_2 m_c u-ϕ_V^t(x_3) 2 m_2 m_3 (t+u x̅_3)- ϕ_V^s(x_3) 4 m_1m_2 m_3 p x_3] + ϕ_D^a(x_2) [ ϕ_V^s(x_3) 2 m_1 m_3 m_c p- ϕ_V^v(x_3)(m_2^2 u x_3+m_1^2 s x̅_3)+ϕ_V^t(x_3) m_3 m_c t ] },M^A_a,N = ∫_0^1dx_2∫_0^1dx_3∫_0^∞b_2db_2∫_0^∞b_3db_3H^A_f(α^A,β^A_a,b_2,b_3)E^A_f(t^A_a)× {ϕ_D^a(x_2) [ ϕ_V^V(x_3) m_1 m_3 (s x̅_3+2 m_2^2)-ϕ_V^T(x_3) m_1 m_c s+ ϕ_V^A(x_3) 2 m_1^2m_3 p x̅_3] - ϕ_D^p(x_2) [ ϕ_V^V(x_3) 4 m_1 m_2 m_3 m_c- ϕ_V^T(x_3) 2 m_1 m_2 (s+2 m_3^2 x̅_3)] } α_s(t^A_a)a_2(t^A_a),M^A_a,T = ∫_0^1dx_2∫_0^1dx_3∫_0^∞b_2db_2∫_0^∞b_3db_3H^A_f(α^A,β^A_a,b_2,b_3)E^A_f(t^A_a)× {ϕ_D^p(x_2)4 m_2[ ϕ_V^T(x_3) m_1+ϕ_V^A(x_3) m_3 m_c/p ]- ϕ_D^a(x_2) [ ϕ_V^V(x_3) 2 m_1 m_3 x̅_3+ϕ_V^T(x_3) 2 m_1 m_c+ ϕ_V^A(x_3) (m_3/p) (s x̅_3+2 m_2^2)] } α_s(t^A_a) a_2(t^A_a), M^A_b,P =2 m_1 p ∫_0^1dx_2∫_0^1dx_3∫_0^∞b_2db_2∫_0^∞b_3db_3H^A_f(α^A,β^A_b,b_3,b_2)E^A_f(t^A_b) α_s(t^A_b)× a_2(t^A_b) {ϕ_D^p(x_2) ϕ_P^p(x_3) 2 m_2 μ_P x̅_2-ϕ_D^a(x_2) ϕ_P^a(x_3)(m_1^2 x_2+m_3^2 x̅_2) },M^A_b,L =-∫_0^1dx_2∫_0^1dx_3∫_0^∞b_2db_2∫_0^∞b_3db_3H^A_f(α^A,β^A_b,b_3,b_2)E^A_f(t^A_b) α_s(t^A_b)a_2(t^A_b)× {ϕ_D^p(x_2) ϕ_V^s(x_3) 4 m_1 m_2 m_3 p x̅_2+ϕ_D^a(x_2) ϕ_V^v(x_3)(m_1^2 s x_2+m_3^2 t x̅_2) },M^A_b,N =m_1 m_3 ∫_0^1dx_2∫_0^1dx_3∫_0^∞b_2db_2∫_0^∞b_3db_3H^A_f(α^A,β^A_b,b_3,b_2)E^A_f(t^A_b)× α_s(t^A_b)a_2(t^A_b) ϕ_D^a(x_2) {ϕ_V^V(x_3)(s+2 m_2^2 x_2)- ϕ_V^A(x_3) 2 m_1 p },M^A_b,T = ∫_0^1dx_2∫_0^1dx_3∫_0^∞b_2db_2∫_0^∞b_3db_3H^A_f(α^A,β^A_b,b_3,b_2)E^A_f(t^A_b) α_s(t^A_b)×a_2(t^A_b) ϕ_D^a(x_2) {ϕ_V^V(x_3)2 m_1 m_3- ϕ_V^A(x_3) (m_3/p) (s+2 m_2^2 x_2) }, M^A_c,P = 
∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_2db_2∫_0^∞db_3H^A_n(α^A,β^A_c,b_1,b_2)× δ(b_2-b_3){ϕ_D^a(x_2) ϕ_P^a(x_3)2 m_1 p [ ϕ_B_q^∗^v(x_1)(s x_2+2 m_3^2 x̅_3-u x̅_1)+ ϕ_B_q^∗^t(x_1) m_1 m_b] + ϕ_B_q^∗^v(x_1) ϕ_D^p(x_2)m_2 μ_p [ ϕ_P^p(x_3)2 m_1 p (x_2-x̅_3) + ϕ_P^t(x_3) (2 m_1^2 x̅_1 -t x_2-u x̅_3) ] }E_n(t^A_c) α_s(t^A_c)C_1(t^A_c)/N_c,M^A_c,L = ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_2db_2∫_0^∞db_3H^A_n(α^A,β^A_c,b_1,b_2)× δ(b_2-b_3) {ϕ_B_q^∗^v(x_1) ϕ_D^p(x_2)m_2 m_3[ ϕ_V^t(x_3)(t x_2+u x̅_3-2 m_1^2 x̅_1)+ ϕ_V^s(x_3)2 m_1 p (x̅_3-x_2)] + ϕ_D^a(x_2) ϕ_V^v(x_3) [ ϕ_B_q^∗^t(x_1)m_1 m_b s+ ϕ_B_q^∗^v(x_1) 4 m_1^2 p^2 (x_2-x̅_1) ]}E_n(t^A_c) α_s(t^A_c)C_1(t^A_c)/N_c,M^A_c,N = ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_2db_2∫_0^∞db_3H^A_n(α^A,β^A_c,b_1,b_2)E_n(t^A_c)× δ(b_2-b_3) α_s(t^A_c) {ϕ_B_q^∗^V(x_1) ϕ_D^p(x_2) ϕ_V^T(x_3)m_1 m_2 (u x̅_1-s x_2-2 m_3^2 x̅_3)+ ϕ_B_q^∗^T(x_1) ϕ_D^a(x_2)m_3 m_b [ ϕ_V^A(x_3) 2 m_1 p -ϕ_V^V(x_3) t]}C_1(t^A_c)/N_c,M^A_c,T = ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_2db_2∫_0^∞db_3H^A_n(α^A,β^A_c,b_1,b_2)× δ(b_2-b_3) E_n(t^A_c) α_s(t^A_c) {ϕ_B_q^∗^V(x_1) ϕ_D^p(x_2) ϕ_V^T(x_3)2 m_1 m_2 (x̅_1-x_2)+ ϕ_B_q^∗^T(x_1) ϕ_D^a(x_2)m_3 m_b [ ϕ_V^A(x_3) t/(m_1 p) -2 ϕ_V^V(x_3)]}C_1(t^A_c)/N_c, M^A_d,P = 1/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_2db_2∫_0^∞db_3H^A_n(α^A,β^A_d,b_1,b_2)E_n(t^A_d)× δ(b_2-b_3) α_s(t^A_d)C_1(t^A_d) ϕ_B_q^∗^v(x_1) {ϕ_D^a(x_2) ϕ_P^a(x_3)2 m_1 p ( 2 m_2^2 x_2+s x̅_3-t x_1)+ ϕ_D^p(x_2)m_2 μ_P [ ϕ_P^t(x_3)(2 m_1^2 x_1-t x_2-u x̅_3) +ϕ_P^p(x_3) 2 m_1 p (x̅_3-x_2)] },M^A_d,L = 1/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_2db_2∫_0^∞db_3H^A_n(α^A,β^A_d,b_1,b_2)E_n(t^A_d)× δ(b_2-b_3) α_s(t^A_d)C_1(t^A_d) ϕ_B_q^∗^v(x_1) {ϕ_D^a(x_2) ϕ_V^v(x_3)u ( 2 m_2^2 x_2+s x̅_3-t x_1)- ϕ_D^p(x_2)m_2 m_3 [ ϕ_V^t(x_3)(2 m_1^2 x_1-t x_2-u x̅_3) +ϕ_V^s(x_3) 2 m_1 p (x̅_3-x_2)] },M^A_d,N = 1/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_2db_2∫_0^∞db_3H^A_n(α^A,β^A_d,b_1,b_2)×E_n(t^A_d) α_s(t^A_d) ϕ_B_q^∗^V(x_1) {ϕ_D^a(x_2) ϕ_V^V(x_3)2 m_1 m_3 ( t x_1-2 m_2^2 x_2-s x̅_3)+ ϕ_D^p(x_2) ϕ_V^T(x_3)m_1 m_2 (u x_1-s x_2-2 m_3^2 x̅_3)] }C_1(t^A_d) δ(b_2-b_3),M^A_d,T = 2/N_c ∫_0^1dx_1∫_0^1dx_2∫_0^1dx_3∫_0^∞b_1db_1∫_0^∞b_2db_2∫_0^∞db_3H^A_n(α^A,β^A_d,b_1,b_2)×E_n(t^A_d) α_s(t^A_d) ϕ_B_q^∗^V(x_1) {ϕ_D^a(x_2) ϕ_V^A(x_3)(m_3/p) ( 2 m_2^2 x_2+s x̅_3-t x_1)+ ϕ_D^p(x_2) ϕ_V^T(x_3)m_1 m_2 (x_1-x_2) ] }C_1(t^A_d) δ(b_2-b_3). The functions H_f,n^A and the Sudakov factor E_f^A are defined as follows.H_f^A(α,β,b_i,b_j)= π^2/4 { i J_0(b_i√(α))- Y_0(b_i√(α)) } × {θ(b_i-b_j) [ i J_0(b_i√(β))- Y_0(b_i√(β)) ] J_0(b_j√(β)) +(b_i↔ b_j) }, H_n^A(α,β,b_i,b_j)= {θ(-β)K_0(b_i√(-β)) + π/2θ(+β) [ i J_0(b_i√(β))- Y_0(b_i√(β)) ] } × π/2 {θ(b_i-b_j) [ i J_0(b_i√(α))- Y_0(b_i√(α)) ]J_0(b_j√(α))+ (b_i↔ b_j) }, E_f^A(t) = exp{ -S_D(t)-S_M(t) }, and the expressions for E_n(t), S_B_q^∗(t), S_D(t) and S_M(t) are the same as those given in the Appendix <ref>. α^A and β_i^A are the gluon and quark virtualities; the subscripts of β_i^A and t_i^A correspond to the diagram indices of Fig.<ref>.α^A =x_2^2 m_2^2+x̅_3^2 m_3^2+x_2 x̅_3 s, β_a^A = x̅_3^2 m_3^2+x̅_3 s+m_2^2-m_c^2, β_b^A =x_2^2 m_2^2+x_2 s+m_3^2, β_c^A = α^A+x̅_1^2 m_1^2 -x̅_1 x_2 t-x̅_1 x̅_3 u-m_b^2, β_d^A = α^A+x_1^2 m_1^2 -x_1 x_2 t-x_1 x̅_3 u,t_a(b)^A = max(√(α^A),√(|β_a(b)^A|),1/b_2,1/b_3),t_c(d)^A = max(√(α^A),√(|β_c(d)^A|),1/b_1,1/b_2). 99 pdg C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016). epjc74 Ed. A. Bevan et al., Eur. Phys. J. C 74, 3026 (2014). 
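For readers who wish to evaluate the hard functions of these appendices numerically, the following minimal Python sketch (an illustration only; the conventions are read directly off the flattened formulas above) implements H_n^T, whose θ(±β) split selects the modified Bessel function K_0 for a spacelike quark virtuality and the complex combination (π/2)(iJ_0 - Y_0) for a timelike one:

import numpy as np
from scipy.special import i0, j0, k0, y0

def hn_T(alpha, beta, b_i, b_j):
    """H_n^T(alpha, beta, b_i, b_j) for spacelike alpha < 0 and either sign of beta."""
    sa = np.sqrt(-alpha)
    # theta(b_i - b_j) K0(b_i sa) I0(b_j sa) + (b_i <-> b_j):
    # K0 attaches to the larger impact parameter, I0 to the smaller one.
    factor1 = k0(max(b_i, b_j) * sa) * i0(min(b_i, b_j) * sa)
    if beta < 0:            # spacelike quark propagator
        factor2 = k0(b_i * np.sqrt(-beta))
    else:                   # timelike propagator: the i J0 - Y0 piece is absorptive
        z = b_i * np.sqrt(beta)
        factor2 = (np.pi / 2) * (1j * j0(z) - y0(z))
    return factor1 * factor2

print(hn_T(-1.0, 0.5, 1.2, 0.8))   # complex-valued for timelike beta

The other hard functions H_f^T, H_f,n^C and H_f,n^A can be assembled from the same K_0, I_0, J_0 and Y_0 building blocks in exactly the same way.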
[pdg] C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016).
[epjc74] Ed. A. Bevan et al., Eur. Phys. J. C 74, 3026 (2014).
[lhcb-operation] http://lhcb-operationsplots.web.cern.ch/lhcb-operationsplots/index.htm.
[prd52.3958] H. Li, Phys. Rev. D 52, 3958 (1995).
[prd55.5577] C. Chang, H. Li, Phys. Rev. D 55, 5577 (1997).
[prd56.1615] T. Yeh, H. Li, Phys. Rev. D 56, 1615 (1997).
[plb504.6] Y. Keum, H. Li, A. Sanda, Phys. Lett. B 504, 6 (2001).
[prd63.054008] Y. Keum, H. Li, A. Sanda, Phys. Rev. D 63, 054008 (2001).
[prd63.074006] Y. Keum, H. Li, Phys. Rev. D 63, 074006 (2001).
[prd63.074009] C. Lü, K. Ukai, M. Yang, Phys. Rev. D 63, 074009 (2001).
[epjc23.275] C. Lü, M. Yang, Eur. Phys. J. C 23, 275 (2002).
[prl83.1914] M. Beneke et al., Phys. Rev. Lett. 83, 1914 (1999).
[npb591.313] M. Beneke et al., Nucl. Phys. B 591, 313 (2000).
[npb606.245] M. Beneke et al., Nucl. Phys. B 606, 245 (2001).
[plb488.46] D. Du, D. Yang, G. Zhu, Phys. Lett. B 488, 46 (2000).
[plb509.263] D. Du, D. Yang, G. Zhu, Phys. Lett. B 509, 263 (2001).
[prd64.014036] D. Du, D. Yang, G. Zhu, Phys. Rev. D 64, 014036 (2001).
[npb774.64] M. Beneke, J. Rohrer, D. Yang, Nucl. Phys. B 774, 64 (2007).
[prd77.074013] J. Sun et al., Phys. Rev. D 77, 074013 (2008).
[prd63.014006] C. Bauer, S. Fleming, M. Luke, Phys. Rev. D 63, 014006 (2000).
[prd63.114020] C. Bauer et al., Phys. Rev. D 63, 114020 (2001).
[plb516.134] C. Bauer, I. Stewart, Phys. Lett. B 516, 134 (2001).
[prd65.054022] C. Bauer, D. Pirjol, I. Stewart, Phys. Rev. D 65, 054022 (2002).
[prd66.014017] C. Bauer et al., Phys. Rev. D 66, 014017 (2002).
[npb643.431] M. Beneke et al., Nucl. Phys. B 643, 431 (2002).
[plb553.267] M. Beneke, T. Feldmann, Phys. Lett. B 553, 267 (2003).
[npb685.249] M. Beneke, T. Feldmann, Nucl. Phys. B 685, 249 (2004).
[plb476.339] J. Chay, Phys. Lett. B 476, 339 (2000).
[prd69.094018] Y. Keum et al., Phys. Rev. D 69, 094018 (2004).
[prd78.014018] R. Li, C. Lü, H. Zou, Phys. Rev. D 78, 014018 (2008).
[epjc76.523] Q. Chang et al., Eur. Phys. J. C 76, 523 (2016).
[epja52.90] V. Šimonis, Eur. Phys. J. A 52, 90 (2016).
[9512380] G. Buchalla, A. Buras, M. Lautenbacher, Rev. Mod. Phys. 68, 1125 (1996).
[npb11.325] J. Bjorken, Nucl. Phys. B (Proc. Suppl.) 11, 325 (1989).
[prd22.2157] G. Lepage, S. Brodsky, Phys. Rev. D 22, 2157 (1980).
[npb529.323] P. Ball et al., Nucl. Phys. B 529, 323 (1998).
[prd65.014007] T. Kurimoto, H. Li, A. Sanda, Phys. Rev. D 65, 014007 (2001).
[prd92.074028] J. Sun et al., Phys. Rev. D 92, 074028 (2015).
[plb751.171] Y. Yang et al., Phys. Lett. B 751, 171 (2015).
[plb752.322] J. Sun et al., Phys. Lett. B 752, 322 (2016).
[jhep9901.010] P. Ball, JHEP 9901, 010 (1999).
[jhep0703.069] P. Ball, G. Jones, JHEP 0703, 069 (2007).
[jhep0605.004] P. Ball, V. Braun, A. Lenz, JHEP 0605, 004 (2006).
[prd66.054013] C. Chen, Y. Keum, H. Li, Phys. Rev. D 66, 054013 (2002).
[ijmpa31.1650146] Y. Yang et al., Int. J. Mod. Phys. A 31, 1650146 (2016).
[npb911.890] J. Sun et al., Nucl. Phys. B 911, 890 (2016).
[prd95.036024] J. Sun et al., Phys. Rev. D 95, 036024 (2017).
[prd58.114006] Th. Feldmann, P. Kroll, B. Stech, Phys. Rev. D 58, 114006 (1998).
[jhep1404.177] C. Cheung, C. Hwang, JHEP 1404, 177 (2014).
[uds] A. Kamal, Particle Physics, Springer-Verlag Berlin Heidelberg, 2014, p. 297-298.
[prd91.114509] B. Colquhoun et al. (HPQCD Collaboration), Phys. Rev. D 91, 114509 (2015).
[epjc74.3026] Ed. A. Bevan et al., Eur. Phys. J. C 74, 3026 (2014).
[crp16.435] T. Gershon, M. Needham, Comptes Rendus Physique 16, 435 (2015).
[prl118.052002] R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 118, 052002 (2017).
[1002.5012] A. Akeroyd et al., arXiv:1002.5012 [hep-ex].
http://arxiv.org/abs/1708.07668v1
{ "authors": [ "Junfeng Sun", "Jie Gao", "Yueling Yang", "Qin Chang", "Na Wang", "Gongru Lu", "Jinshu Huang" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170825093719", "title": "Study of the $\\bar{B}_{q}^{\\ast}$ ${\\to}$ $DM$ decays with perturbative QCD approach" }
Simon Rubinstein-Salzedo: Euler Circle, Palo Alto, CA 94306, USA [email protected]
Yifan Zhu: Shanghai Foreign Language School, Shanghai 200083, China [email protected]
The Asymmetric Colonel Blotto Game
Yifan Zhu
December 30, 2023
==================================
This paper explores the Nash equilibria of a variant of the Colonel Blotto game, which we call the Asymmetric Colonel Blotto game. In the Colonel Blotto game, two players simultaneously distribute forces across n battlefields. Within each battlefield, the player that allocates the higher level of force wins. The payoff of the game is the proportion of wins on the individual battlefields. In the asymmetric version, the levels of force distributed to the battlefields must be nondecreasing. In this paper, we find a family of Nash equilibria for the case with three battlefields and equal levels of force and prove the uniqueness of the marginal distributions. We also find the unique equilibrium payoff for all possible levels of force in the case with two battlefields, and obtain partial results for the unique equilibrium payoff for asymmetric levels of force in the case with three battlefields.

§ INTRODUCTION
In this section we discuss the background and origins of the Asymmetric Colonel Blotto game. The Colonel Blotto game, which originates with Borel in <cit.>, is a constant-sum game involving two players, A and B, and n independent battlefields. A distributes a total of X_A units of force among the battlefields, and B distributes a total of X_B units of force among the battlefields, in such a way that each player allocates a nonnegative amount of force to each battlefield. The player who sends the higher level of force to a particular battlefield wins that battlefield. The payoff for the whole game is the proportion of the wins on the individual battlefields. Roberson in <cit.> characterizes the unique equilibrium payoffs for all (symmetric and asymmetric) configurations of the players' aggregate levels of force, and characterizes the complete set of equilibrium univariate marginal distributions for most of these configurations of the Colonel Blotto game. A possible variant of the Colonel Blotto game, which has not been studied before, is the Asymmetric Colonel Blotto game, where the forces distributed among the battlefields must be in nondecreasing order. The Asymmetric Colonel Blotto game is a constant-sum game involving two players, A and B, and n independent battlefields. A distributes X_A units of force among the battlefields in a nondecreasing manner and B distributes X_B units of force among the battlefields in a nondecreasing manner. Each player distributes forces without knowing the opponent's distribution. The player who provides the higher amount of force to a battlefield wins that battlefield. If both players deploy the same amount of force to a battlefield, we declare that battlefield to be a draw, and the payoff of that battlefield is equally distributed among the two players.[As we show in <Ref>, Nash equilibria of games with equal levels of force do not contain atoms, so the probability that the two players place equal force on some battlefield is 0.
Thus we may, if we choose, use a different tie-breaking rule without altering the result in this case.] The payoff for each player is the proportion of battlefields won. In this paper, we study the Nash equilibria and equilibrium payoffs of Asymmetric Colonel Blotto games. In <Ref>, we find a family of equilibria for the game with three battlefields and equal levels of force, and we prove the uniqueness of the marginal distribution functions. We also prove that in any equilibrium strategies for a game with equal levels of force and at least three battlefields, there are no atoms in the marginal distributions. In <Ref>, we find the unique equilibrium payoffs of all cases of the Asymmetric Colonel Blotto game involving only two battlefields, and in <Ref> we find the unique equilibrium payoffs in the case of three battlefields in certain cases. We conclude with <Ref>, where we discuss the difficulties in extending our work to the case of n≥ 4 battlefields.

§ THE MODEL
In this section we introduce the model and related concepts. The definitions in this section are adaptations from those in <cit.> to the asymmetric version.

§.§ Players
Two players, A and B, simultaneously allocate their forces X_A and X_B across n battlefields in a nondecreasing manner. Each player distributes forces without knowing the opponent's distribution. The player who provides the higher level of force to a battlefield wins that battlefield, gaining a payoff of 1/n. If both players deploy the same level of force to a battlefield, that battlefield is a draw and both players gain a payoff of 1/2n. The payoff for each player is the proportion of battlefields won, or equivalently, the sum of the payoffs across all the battlefields.[That the payoff for each player is the sum of the payoffs across all the battlefields means that two different joint distributions are equivalent if they have the same marginal distributions. Hence, this definition makes it possible to separate a joint distribution into the marginal distributions and an n-copula later in this paper.]
Player i sends x^k_i units to the kth battlefield. For player i, the set of feasible allocations of force across the n battlefields in the Asymmetric Colonel Blotto game is denoted by 𝔅_i:
𝔅_i={𝐱∈ℝ^n | ∑^n_j=1x_i^j = X_i, 0≤ x^1≤ x^2≤⋯≤ x^n}.
Given an n-variate cumulative distribution function H, for every 𝐱,𝐲∈ℝ^n such that x_k≤ y_k for all k∈{1,…,n}, the H-volume of the n-box [x_1,y_1]×⋯×[x_n,y_n] is
V_H([𝐱,𝐲]) = Δ_x_n^y_n Δ_x_n-1^y_n-1 ⋯ Δ_x_2^y_2 Δ_x_1^y_1 H(𝐭),
where
Δ_x_k^y_k H(𝐭) = H(t_1,…,t_k-1,y_k,t_k+1,…,t_n)-H(t_1,…,t_k-1,x_k,t_k+1,…,t_n).
Intuitively, the H-volume of an n-box just measures the probability that a point within that n-box will be chosen given the cumulative distribution function H.
The support of an n-variate cumulative distribution function H is the complement of the union of all open sets of ℝ^n with H-volume zero. Intuitively, the support of a mixed strategy is just the closure of the set of pure strategies that might be chosen.

§.§ Strategies
A mixed strategy, or a distribution of force, for player i is an n-variate cumulative distribution function (cdf) P_i:ℝ^n_+→[0,1] with support in the set of feasible allocations of force 𝔅_i. This means that if player i chooses strategy (X^j)_j=1^n, then the probability that X^j≤ x^j (j=1,…,n) is P_i(x^1,…,x^n). P_i has marginal cumulative distribution functions {F_i^j}_j=1^n, one univariate marginal cumulative distribution function for each battlefield j. F_i^j(x^j) is the probability that X^j≤ x^j. Equivalently, F_i^j(x)=P_i(X_i,X_i,…,x,X_i,…,X_i), where the jth argument is x, and the rest of the arguments are X_i, the player's entire allocation of force. We write P_i=(F_i^j)_j=1^n. In the case where the mixed strategy is a combination of finitely many pure strategies, the mixed strategy P_i where (i^1_j,i^2_j,…,i^n_j) units of force are distributed to the battlefields 1,2,…,n respectively with probability p_j is denoted by
P_i={((i^1_j,i^2_j,…,i^n_j),p_j)}.
Here i^1_j+i^2_j+…+i^n_j=X_i and ∑_j p_j = 1.
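To make this notation concrete, the following short Python sketch (illustrative only; the example strategies are arbitrary) checks membership in 𝔅_i and computes the payoff of pure and finitely supported mixed strategies under the tie-splitting rule described above:

from fractions import Fraction

def feasible(x, X):
    """Is x in B_i: nonnegative, nondecreasing, and summing to X?"""
    return all(a >= 0 for a in x) and all(a <= b for a, b in zip(x, x[1:])) and sum(x) == X

def payoff_A(xa, xb):
    """Payoff to A for pure strategies xa, xb: win = 1/n, tie = 1/2n per battlefield."""
    n = len(xa)
    return sum(Fraction(1, n) if a > b else (Fraction(1, 2 * n) if a == b else Fraction(0))
               for a, b in zip(xa, xb))

def payoff_A_mixed(PA, PB):
    """PA, PB given as lists of ((x^1,...,x^n), p) pairs, matching the notation above."""
    return sum(p * q * payoff_A(sa, sb) for sa, p in PA for sb, q in PB)

pa = (Fraction(1, 6), Fraction(1, 3), Fraction(1, 2))
pb = (Fraction(0), Fraction(1, 2), Fraction(1, 2))
assert feasible(pa, 1) and feasible(pb, 1)
print(payoff_A(pa, pb))                      # 1/3 + 0 + 1/6 = 1/2
print(payoff_A_mixed([(pa, 1)], [(pb, 1)]))  # same value, as a one-point mixture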
§.§ The Asymmetric Colonel Blotto Game
The Asymmetric Colonel Blotto game with n battlefields, denoted by ACB(X_A,X_B,n), is a one-shot game in which players simultaneously and independently announce distributions of force (x_i^1,…,x_i^n) subject to their budget constraints ∑_j=1^n x_1^j=X_A and ∑_j=1^n x_2^j=X_B, x_i^j≥ 0 for each i,j, and such that x_i^1≤ x_i^2≤⋯≤ x_i^n for i=1,2. Each battlefield, providing a payoff of 1/n, is won by the player that provides the higher allocation of force on that battlefield (and declared a draw if both players allocate the same level of force to a battlefield, each gaining a payoff of 1/2n), and players' payoffs equal the sum of the payoffs over all the battlefields.

§.§ Nash equilibrium
Mixed strategies P_A and P_B form a Nash equilibrium if and only if neither player can increase payoff by changing to a different strategy. Since this particular game is two-player and constant-sum, it has the interesting property that the equilibrium payoff is always unique:
The Nash equilibrium payoff for both players of any two-player and constant-sum game is unique.
Suppose P_A and P_B are a pair of Nash equilibrium strategies. Let w_i be the payoff for player i. For any pair of Nash equilibrium strategies P'_A and P'_B, let w'_i be the payoff for player i. Let us consider the payoff for both players when player A plays strategy P_A and player B plays strategy P'_B. Call the payoff for player A v_A and the payoff for player B v_B. Since P_A is a strategy in a Nash equilibrium, w_A≥ v_A. Similarly, w'_B≥ v_B. So w_A+w'_B≥ v_A+v_B. Since we are considering a constant-sum game, v_A+v_B=w_A+w_B, so w_A+w'_B≥ w_A+w_B. Hence, w'_B≥ w_B. Similarly, we must have w_B≥ w'_B. So w_B=w'_B. Similarly w_A=w'_A.

§ OPTIMAL UNIVARIATE MARGINAL DISTRIBUTIONS FOR THREE BATTLEFIELDS
In this section we use copulas to separate the joint distributions of players into the marginal distributions and a suitable copula. We also find and prove the unique univariate marginal distribution for ACB(1,1,3). Let us first introduce the concept of copulas:
Let I denote the unit interval [0,1]. An n-copula is a function C from I^n to I such that
* For all 𝐱∈ I^n, C(𝐱) = 0 if at least one coordinate of 𝐱 is 0; and if all coordinates of 𝐱 are 1 except x_k, then C(𝐱) = x_k.
* For every 𝐱, 𝐲∈ I^n such that x_k ≤ y_k for all k∈{1,…,n}, the C-volume of the n-box [x_1, y_1] ×…× [x_n, y_n] satisfies V_C([𝐱,𝐲])≥ 0.
The crucial property of n-copulas that we need is the following theorem of Sklar:
Let H be an n-variate distribution function with univariate marginal distribution functions F_1, F_2, …, F_n. Then there exists an n-copula C such that for all 𝐱∈ℝ^n,
H(x_1, …, x_n) = C(F_1(x_1), …, F_n(x_n)).
Conversely, if C is an n-copula and F_1,F_2,…,F_n are univariate distribution functions, then the function H defined by <ref> is an n-variate distribution function with univariate marginal distribution functions F_1, F_2, …, F_n.
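As a quick numerical illustration of the converse direction of Sklar's theorem (a sketch only: the three marginals and the comonotone copula C(u_1,u_2,u_3)=min(u_1,u_2,u_3) are arbitrary choices here, not the equilibrium objects constructed later in the paper), gluing fixed univariate marginals to a copula produces a legitimate trivariate distribution with exactly those marginals:

import numpy as np

rng = np.random.default_rng(0)

# Quantile functions of three uniform marginals, on [0,1/3], [1/6,1/2], [1/3,2/3].
inv = [lambda u: u / 3, lambda u: 1 / 6 + u / 3, lambda u: 1 / 3 + u / 3]

# The comonotone copula min(u1,u2,u3) amounts to one shared uniform variable.
# Its support is a curve, so it is NOT a copula whose support lies in B_i.
u = rng.uniform(size=100_000)
samples = np.column_stack([f(u) for f in inv])

for j in range(3):
    print(f"marginal {j + 1}: mean = {samples[:, j].mean():.3f}")
# Means approach 1/6, 1/3 and 1/2, the midpoints of the three intervals.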
The proof of this theorem can be found in <cit.>. This theorem establishes the equivalence between a joint distribution on the one hand, and a combination of a complete set of marginal distributions and an n-copula on the other hand. We will now show that the univariate marginal distribution functions and the n-copula are separate components of the players' best responses.
In the game ACB(X_A, X_B, n), suppose that the opponent's strategy is fixed as the distribution P_-i, and that X_A = X_B. Then, in order for player i to maximize payoff under the constraint that the support of the chosen strategy must be in 𝔅_i, player i must solve an optimization problem. Given that there are no atoms in Nash equilibrium strategies (<Ref>), we can write the Lagrangian for this optimization problem as
max_{F_i^j}_j=1^n λ_i∑_j=1^n[∫_0^∞[ 1/nλ_i F^j_-i(x)-x ]dF^j_i ]+λ_iX_i,
where the set of univariate marginal distribution functions {F_i^j}_j=1^n satisfies the constraint that there exists an n-copula C such that the support of the n-variate distribution
C( F^1_i(x^1),…,F^n_i(x^n) )
is contained in 𝔅_i.[Here we only maximize over the set of {F_i^j}_j=1^n that satisfy the constraint, not all of them.]
The payoff for player i given the opponent's marginal distribution functions {F_-i^j}_j=1^n is the sum of the payoffs across all the battlefields:
∑_j=1^n ∫_0^∞ 1/n F_-i^j(x) dF_i^j.
Here, the integral is the Riemann-Stieltjes integral, so the measure dF_i^j places no mass at x>X_i. We also use the Riemann-Stieltjes integral for other integrals later in the paper. Player i's problem is thus
max_P_i ∑_j=1^n ∫_0^∞ 1/n F_-i^j(x) dF_i^j.
That P_i is contained in 𝔅_i implies that the sum of the levels of force across all battlefields is X_i:
∑_j=1^n ∫_0^∞ x dF^j_i = X_i.
Hence, the Lagrangian is
max_P_i[ ∑_j=1^n ∫_0^∞ 1/n F_-i^j(x) dF_i^j -λ_i[ ∑_j=1^n ∫_0^∞ x dF^j_i - X_i ] ] = max_{F_i^j}_j=1^n λ_i∑_j=1^n[∫_0^∞[ 1/nλ_i F^j_-i(x)-x ]dF^j_i ]+λ_iX_i.
Finally, from <Ref> the n-variate distribution function P_i is equivalent to the set of univariate marginal distribution functions {F_i^j}_j=1^n combined with an appropriate n-copula, C, so the result follows directly.
The unique Nash equilibrium univariate marginal distribution functions of the game ACB(1,1,3) are for each player to allocate forces according to the following univariate distribution functions:
F^1(u) = 3u for 0≤ u≤1/3, and 1 for 1/3<u≤ 1;
F^2(u) = 0 for 0≤ u<1/6, -1/2+3u for 1/6≤ u≤1/2, and 1 for 1/2<u≤ 1;
F^3(u) = 0 for 0≤ u<1/3, -1+3u for 1/3≤ u≤2/3, and 1 for 2/3<u≤ 1.
The expected payoff for both players is 1/2.
This means that any equilibrium strategies must have the marginal distributions described above, and that any joint distribution with support in 𝔅_i with such marginal distributions is an equilibrium strategy. Intuitively, it is easy to see why this particular set of marginal distributions might guarantee a Nash equilibrium. Since the distribution density is the same among the three battlefields, the payoff of a pure strategy p = (a,b,c) remains constant at 1/2 as it varies inside the region 0≤ a≤1/3, 1/6≤ b≤1/2, and 1/3≤ c≤2/3. A player can only hope to increase payoff above that given by p by moving below the lower bound of the marginal distribution in some battlefield (to free up force) while staying inside the bounds of the marginal distributions in the other battlefields. However, this is impossible: a cannot be negative; any attempt to bring b below 1/6 would result in c being above the upper bound 2/3; and c, as the biggest of the three, cannot be below 1/3.
(The rigorous proof of this can be found in <Ref>.)
Before we give the formal proof of this theorem, let us first examine some joint distributions that satisfy the conditions in <Ref>. Consider the 3-variate distribution function P_1 that uniformly places mass 1/3 on each of the three sides of the equilateral triangle with vertices (1/3,1/3,1/3), (0,1/2,1/2), and (1/6,1/6,2/3) (depicted in <Ref>). Clearly its marginal distributions are those described in <Ref>.
Similarly, as in <Ref>, divide the original equilateral triangle into three smaller equilateral triangles with side lengths 1/3 of the original, and let P_2 be the strategy that uniformly distributes mass on the sides of the smaller triangles. Clearly P_2 has the same marginal distributions as P_1, and is thus a joint distribution as described in <Ref>.
As shown in <Ref>, we can continue this process on the smaller triangles (or only on some of the smaller triangles), and thus we obtain a countably infinite family of joint distributions with marginal distributions as described in <Ref>. Furthermore, given any two such suitable joint distributions, their weighted average is also a suitable joint distribution, and thus we obtain a continuum of suitable joint distributions. Given these joint distributions that have marginal distributions as characterized by <Ref>, we have the following theorem:
For the unique set of equilibrium univariate marginal distribution functions {F_i^j}_j=1^3 characterized in <Ref>, there exists a 3-copula C such that the support of the 3-variate distribution function
C( F_i^1( x^1 ), F_i^2( x^2 ), F_i^3( x^3 ) )
is contained in 𝔅_i.
Consider the 3-variate distribution function P_1 that uniformly places mass 1/3 on each of the three sides of the equilateral triangle with vertices (1/3,1/3,1/3), (0,1/2,1/2), and (1/6,1/6,2/3) (depicted in <Ref>). Clearly its marginal distributions are those described in <Ref>, and its support is in 𝔅_i. Hence, according to Sklar's theorem (<Ref>), for the unique set of equilibrium univariate marginal distribution functions {F_i^j}_j=1^3 characterized in <Ref>, there exists a 3-copula C such that the support of the 3-variate distribution function C( F_i^1( x^1 ), F_i^2( x^2 ), F_i^3( x^3 ) ) is contained in 𝔅_i.
Before we provide the formal proof of <Ref>, we first seek to provide some intuition for the outline of the proof, which takes inspiration from the proofs in <cit.> and <cit.>. From <ref> in <Ref>, we know that in an Asymmetric Colonel Blotto game ACB(1,1,3), each player's Lagrangian can be written as
max_{F_i^j}_j=1^3 λ_i∑_j=1^3[∫_0^∞[ 1/3λ_i F^j_-i(x)-x ]dF^j_i ]+λ_iX_i,
subject to the constraint that there exists an n-copula, C, such that the support of the n-variate distribution C( F^1_i(x^1),…,F^n_i(x^n) ) is contained in 𝔅_i. If there exists a suitable 3-copula, then, for different j, the F_i^j may be chosen independently.
So <ref> is the maximization of three independent sums, hence the sum of three independent maximizations:
max_{F_i^j}_j=1^3 λ_i∑_j=1^3[∫_0^∞[ 1/3λ_i F^j_-i(x)-x ]dF^j_i ]+λ_iX_i = ∑_j=1^3 max_F_i^j λ_i∫_0^∞[ 1/3λ_i F^j_-i(x)-x ] dF^j_i +λ_iX_i.
Hence we have reduced the maximization problem over a joint distribution to separate maximization problems over univariate distributions, which can be solved easily. Note that each separate maximization problem has the same form as that of an all-pay auction.
An all-pay auction is an auction where several players simultaneously call out a bid for a prize, and all bidders pay regardless of who wins the prize; the prize is awarded to the highest bidder. In an all-pay auction with two bidders, let F_i represent bidder i's distribution of the bid, and v_i represent the value of the auction for bidder i. Each bidder i's problem is
max_F_i ∫^∞_0 [ v_i F_-i(x) -x ]dF_i.
In the separate maximization problems for the Asymmetric Colonel Blotto game, the quantity 1/3λ_i acts as the value v_i for the all-pay auctions. <Ref> establishes the uniqueness of the Lagrange multipliers, hence the uniqueness of the value v_i.
A potential issue that arises is whether the constraint that the strategy P_i must be in 𝔅_i leads to equilibria outside those characterized by <Ref>. From Sklar's Theorem (<Ref>), we know that the joint distribution P_i is equivalent to a set of marginal distributions {F_i^j}_j=1^3, together with a suitable 3-copula C. So if a suitable 3-copula exists, the constraint that P_i be in 𝔅_i places no restriction on the set of potential univariate marginal distribution functions {F_i^j}_j=1^3; instead, this constraint and the set of univariate marginal distributions place a restriction on the set of feasible 3-copulas. Since <Ref> establishes the existence of a suitable 3-copula, this is not an issue. On the other hand, the restriction on the 3-copula implies that the set of equilibrium 3-variate distributions for the game forms a strict subset of the set of all 3-variate distribution functions with univariate marginal distribution functions characterized by <Ref>.
The proof of <Ref> under the assumption that a suitable 3-copula exists is contained in the results that fill up the rest of this section. The proof takes inspiration from the proofs found in <cit.> and <cit.>. First, for the form of the Lagrangian in <Ref> to be accurate, we need to show that there are no atoms in any Nash equilibrium strategies. The following theorem proves this in the more general case of equal levels of force for both players and any number of battlefields n≥3:
If n≥ 3, then Nash equilibrium strategies for ACB(1,1,n) cannot contain atoms.
Suppose that we have an equilibrium strategy P_A with an atom in battlefield j on a_j. Let p=(b_1,b_2,…,b_j-1,b_j=a_j,b_j+1,…,b_n) be any pure strategy in the support of P_A that contains playing a_j on battlefield j. The general idea of this proof is to find a pure strategy p' that does strictly better against P_A than p, contradicting the assumption that P_A is an equilibrium strategy. Let f_k(a) denote the probability of choosing a on battlefield k under P_A.
If b_k is any point that is not an atom and b_k is greater than b_k-1 (or greater than 0 in the case of b_1), then consider the pure strategy p' that plays ϵ lower on battlefield k and plays ϵ' = ϵ/(n-j+1) higher on battlefield j and all the battlefields after that. We can always find sufficiently small positive ϵ and ϵ' such that there is no atom between b_k and b_k-ϵ in P_A. So the payoff of p' against P_A minus the payoff of p against P_A is at least
1/2n·f_j(a_j) - δ
for any δ>0, since moving up past the atom turns ties into wins. Hence, p' does strictly better than p against P_A. Therefore, for P_A to be an equilibrium strategy, all such b_k that are not atoms must be equal to b_k-1 (or 0 if k=1). So every pure strategy p in the support of P_A containing the atom a_j on battlefield j must be of the following form: a series of zeros in the first few battlefields (possibly none), an atom, the same level of force in the next few battlefields (also possibly none), another atom, the same level of force (as in the previous atom) in the next few battlefields, and so forth.
Now, one of the following statements must be true:
* Every p in the support of P_A containing the atom a_j on battlefield j is played with probability 0.
* There exists some p=(c_1,…,c_n) in the support of P_A containing the atom a_j on battlefield j that is played with a positive probability, hence every c_k is an atom on battlefield k.
Suppose that statement 1 is true. For a_j to be played with some positive probability, there must be a continuum of such p. Hence there must also be a continuum of atoms, which is clearly impossible. So statement 2 must be true. Let q = (c_1,…,c_n) be such a pure strategy in the support of P_A where every c_k is an atom on battlefield k. Some casework is needed here:
* All the c_k are the same. Then they must all be 1/n. In any pure strategy where player A plays 1/n on the first battlefield, he must also play 1/n on all the other battlefields. So f_1(1/n)≤ f_k(1/n) for all k≥ 2. Hence,
f_1(1/n) < ∑_k=2^n f_k(1/n).
Consider the pure strategy q' that plays (1/n-ϵ) on battlefield 1 and plays (1/n+ϵ/(n-1)) on all the other battlefields. We can find a sufficiently small positive ϵ such that there are no atoms between 1/n and (1/n-ϵ) on battlefield 1. The payoff of q' against P_A minus the payoff of q against P_A is at least
1/2n·( ∑_k=2^n f_k(1/n)- f_1(1/n)) - δ
for any δ>0. So q' does strictly better against P_A than q.
* All the c_k fall into exactly two values, d_0 and d_1, with d_0<d_1. Suppose q contains m battlefields with level of force d_0 and then (n-m) battlefields with level of force d_1.
* If d_0=0, then d_1 = 1/(n-m). Given any pure strategy in the support of P_A that plays d_1 on battlefield (m+1), it must also play d_1 on all the battlefields after that, and play 0 on the battlefields 1 to m. So f_m+1(d_1)≤ f_k(c_k) where k≠ m+1. Hence,
∑_k≠ m+1f_k(c_k)>f_m+1(d_1=c_m+1).
Consider the pure strategy q' that plays d_1-ϵ on battlefield (m+1) and plays c_k+ϵ/(n-1) on battlefield k for all k≠ m+1. We can find a sufficiently small positive ϵ such that there are no atoms between d_1 and d_1-ϵ on battlefield (m+1). The payoff of q' against P_A minus the payoff of q against P_A is at least
1/2n·(∑_k≠ m+1f_k(c_k) - f_m+1(d_1))- δ
for any δ>0. So q' does strictly better against P_A than q.
* If d_0>0, then at least one of the following two must be true:
* f_m+1(c_m+1)< ∑_k≠ m+1f_k(c_k).
* ∑_k=m+1^nf_k(c_k)>∑_k=1^mf_k(c_k).
Similar to the arguments above, if the first one is true, then we can construct a q' by playing ϵ lower on battlefield (m+1) and ϵ' higher on all the other battlefields; if the second one is true, then we can construct a q' by playing ϵ higher on battlefield (m+1) and all the battlefields after that, and playing ϵ' lower on battlefields 1 to m. In either case, q' does strictly better than q against P_A.
* All the c_k fall into at least three different values. Then from these values we can choose two different values that are not zero, apply the proof in <ref>, and obtain the needed pure strategy q'.
In all the cases, a contradiction is reached, showing that P_A cannot be an equilibrium strategy.
In the following discussions, let P = { F^j }_j=1^3 be any joint distribution characterized in <Ref>, and let P' = { f^j }_j=1^3 be any equilibrium strategy. Our goal is to prove that P is an equilibrium strategy, and that P and P' have the same marginal distributions.
Suppose p = (a,b,c) is any pure strategy in 𝔅_A=𝔅_B=𝔅. Then the payoff of p against P is 1/2 if 0≤ a≤1/3, 1/6≤ b≤1/2, and 1/3≤ c≤2/3; and the payoff is less than 1/2 otherwise.
Suppose A plays the mixed strategy P and B plays the pure strategy p=(a,b,c), where 0≤ a≤ b≤ c and a+b+c=1. Let W(a,b,c) be the payoff for B, so
W(a,b,c)=1/3(F^1(a)+F^2(b)+F^3(c)).
Our goal is to find the maximum value of W(a,b,c) in 𝔅 and to show that it is no greater than 1/2. Clearly, 0≤ a≤(a+b+c)/3 = 1/3, so F^1(a)=3a. Also b≤(b+c)/2≤1/2.
* If b<1/6, then c=1-a-b≥ 1-2b>2/3, so F^2(b)=0 and F^3(c)=1. And a≤ b<1/6, so
W(a,b,c) =1/3(3a+0+1)< 1/3(3·1/6+0+1)=1/2.
So W(a,b,c)<1/2.
* If b≥1/6, then F^2(b)=-1/2+3b. Since c≥1/3, F^3(c)≤ 3c-1. Hence
W(a,b,c) ≤1/3(3a-1/2+3b+3c-1)= (a+b+c)-1/2=1/2.
Equality holds if and only if F^3(c) = 3c-1, which is equivalent to 1/3≤ c≤2/3. In this case 0≤ a≤1/3, 1/6≤ b≤1/2, and 1/3≤ c≤2/3. Otherwise, equality does not hold, and the payoff is less than 1/2.
Any joint strategy P as characterized in <Ref> is a Nash equilibrium strategy.
We know that the game ACB(1,1,3) is symmetric and has constant sum 1, and since <Ref> indicates that P gives a payoff of at least 1/2 against any pure strategy, P must be an equilibrium strategy.
Let s̅^1 = 1/3, s^1 = 0, s̅^2 = 1/2, s^2 = 1/6, s̅^3 = 2/3, s^3 = 1/3. Clearly, s̅^j is just the upper bound of the support of P on battlefield j, and s^j is the lower bound.
F^j(x^j) = (x^j-s^j)/(s̅^j - s^j) for s^j≤ x^j≤s̅^j and all j.
This is self-evident from the representation of F^j in <Ref>: F^1(u) = 3u for 0≤ u≤1/3, and 1 for 1/3<u≤ 1; F^2(u) = 0 for 0≤ u<1/6, -1/2+3u for 1/6≤ u≤1/2, and 1 for 1/2<u≤ 1; F^3(u) = 0 for 0≤ u<1/3, -1+3u for 1/3≤ u≤2/3, and 1 for 2/3<u≤ 1.
If x < s^j, then f^j(x) = 0. If x > s̅^j, then f^j(x) = 1. In other words, P' does not place any strategy outside [ s^j, s̅^j ].
Since ACB(1,1,3) is a two-player, symmetric, constant-sum game with total payoff 1, every pure strategy in the support of P', an equilibrium strategy, must give the unique equilibrium payoff, 1/2, when played against another equilibrium strategy, P. From <Ref> we know that a pure strategy p only gives payoff 1/2 against P when p plays a level of force between s^j and s̅^j on battlefield j for all j. So P' cannot play any strategy outside that range.
f^j(s^j) = 0 and f^j(s̅^j) = 1.
<Ref> implies that f^j is continuous. This, together with <Ref>, gives the desired result.
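The objects in these lemmas are easy to probe numerically. The following Python sketch (illustrative only) samples from the strategy P_1 supported on the sides of the triangle with vertices (1/3,1/3,1/3), (0,1/2,1/2) and (1/6,1/6,2/3), compares the empirical marginal on the second battlefield with F^2, and estimates the payoff of one pure strategy inside the box of <Ref> and one outside it:

import numpy as np

rng = np.random.default_rng(1)
V = np.array([[1/3, 1/3, 1/3], [0.0, 1/2, 1/2], [1/6, 1/6, 2/3]])

def sample_P1(n):
    """Uniform samples from the three sides of the triangle (each side has mass 1/3)."""
    edge = rng.integers(0, 3, size=n)
    t = rng.uniform(size=n)[:, None]
    return (1 - t) * V[edge] + t * V[(edge + 1) % 3]

pts = sample_P1(200_000)

# Empirical marginal on battlefield 2 against F^2(u) = -1/2 + 3u on [1/6, 1/2].
for u in (0.2, 0.3, 0.4):
    print(f"F2({u}): empirical {np.mean(pts[:, 1] <= u):.3f} vs theory {-0.5 + 3 * u:.3f}")

def payoff_vs_P1(p):
    """Payoff of the pure strategy p against P_1 (ties occur with probability 0)."""
    return np.mean(np.sum(p > pts, axis=1)) / 3

print(payoff_vs_P1(np.array([0.2, 0.3, 0.5])))   # inside the box: approximately 1/2
print(payoff_vs_P1(np.array([0.0, 0.1, 0.9])))   # b < 1/6: approximately 1/3 < 1/2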
Let us recall player i's optimization problem for ACB(1,1,3) (<ref> in <Ref>):
max_{F_i^j}_j=1^3 λ_i∑_j=1^3[∫_0^∞[ 1/3λ_i F^j_-i(x)-x ]dF^j_i ]+λ_iX_i,
where the set of univariate marginal distribution functions {F_i^j}_j=1^3 satisfies the constraint that there exists a 3-copula C such that the support of the 3-variate distribution C( F^1_i(x^1),F^2_i(x^2),F^3_i(x^3) ) is contained in 𝔅_i. From the lemmas above, we can add some further restrictions to it. From <Ref>, we know that P_i must be played within [ s^j, s̅^j ] for every battlefield j. From <Ref>, we know that P is an equilibrium strategy, so P_i must be a best response against P and vice versa. Since <Ref> establishes the existence of a suitable 3-copula, we can disregard that restriction for now and focus on the rest.
For different j, F_i^j is independent. So <ref> is the maximization of three independent sums, hence the sum of three independent maximizations:
max_{F_i^j}_j=1^3 λ_i∑_j=1^3[∫_0^∞[ 1/3λ_i F^j_-i(x)-x ]dF^j_i ]+λ_iX_i = ∑_j=1^3 max_F_i^j λ_i∫_0^∞[ 1/3λ_i F^j_-i(x)-x ]dF^j_i +λ_iX_i.
The term λ_iX_i is just a constant, so we can discard it. Thus the problem for player i becomes
max_F_i^j λ_i∫_0^∞[ 1/3λ_i F^j_-i(x)-x ]dF^j_i
for all battlefields j, under the constraint that P_A is a best response against P, P is a best response against P_A, and P_A is played within [ s^j, s̅^j ]. Let us set P_A = P' = { f^j }_j=1^3. Since we assume the existence of a suitable 3-copula, the different f^j can be considered independent, and the maximizations for different battlefields can also be considered independent. Hence, f^j and F^j form an equilibrium for all j. Let
B_i^j(x_i^j,F^j_-i) = λ_i( 1/3λ_i F_-i^j( x^j_i ) -x_i^j ).
This is the payoff for player i of playing x_i^j when player -i plays F^j_-i in the maximization problem for battlefield j.
B^j_i(x^j,f^j) = λ_i( 1/3λ_i f^j( x^j ) -x^j ) is constant for all s^j≤ x^j≤s̅^j.
Since F^j is an equilibrium strategy against f^j, every strategy in the support of F^j gives a constant payoff against f^j. Since the support of F^j is [ s^j, s̅^j ], the result directly follows.
B^j_i(x^j,f^j) = λ_i( 1/3λ_i f^j( x^j ) -x^j ) = -λ_is^j = 1/3-λ_is̅^j for all s^j≤ x^j≤s̅^j.
From <Ref>, B^j_i(s^j,f^j) = -λ_is^j, and B^j_i(s̅^j,f^j) =1/3-λ_is̅^j. The result directly follows from <Ref>.
λ_i = 1 for all i.
From <Ref>, we have λ_i = 1/(3(s̅^j - s^j)). Note that (s̅^j - s^j) is always 1/3 for all j, so λ_i = 1.
f^j(x^j) = F^j(x^j) for all j and all x^j.
From <Ref>, we have f^j(x^j) = (x^j-s^j)/(s̅^j - s^j) for s^j≤ x^j≤s̅^j and all j. From <Ref>, the value of f^j coincides with the value of F^j here. <Ref> ensures that f^j and F^j are the same elsewhere.
With these lemmas, we can prove the uniqueness of the marginal distributions in the Nash equilibria of the game ACB(1,1,3). We restate the theorem here for convenience.
From <Ref> we know that every joint distribution with marginal distribution functions as characterized above is a Nash equilibrium strategy, hence the second part of the theorem is proved. <Ref> establishes the uniqueness of marginal distributions of Nash equilibrium strategies, and proves that these marginal distributions are exactly those characterized above. Hence, we have proven the first part of the theorem.

§ UNIQUE EQUILIBRIUM PAYOFFS OF THE GAME ACB(X_A,X_B,2)
In this section we find the unique equilibrium payoffs of all cases of the Asymmetric Colonel Blotto game involving only two battlefields.
Suppose without loss of generality that X_A=1 and X_B=t≤1. Let W_n(t) denote the payoff for A in a Nash equilibrium in such a game with n battlefields. From <Ref>, we know that W_n(t) is well-defined.
W_2(t) = (k+2)/(2k+2) for 2k/(2k+1) ≤ t < (2k+2)/(2k+3), where k=0,1,2,….
See <Ref> for a graphical representation. The proof for W_2(t) and constructions of Nash equilibria can be found later in this section. Before we go on to prove this, let us first prove a lemma regarding the Asymmetric Colonel Blotto game with two battlefields:
Suppose that X_A>X_B. If player A deploys the pure strategy (a,X_A-a) and B deploys the pure strategy (b,X_B-b), then W_A = 1/2 if a-b<0 or a-b>X_A-X_B; 3/4 if a-b=0 or a-b=X_A-X_B; and 1 if 0<a-b<X_A-X_B, where W_A is the payoff for player A.
* If a-b<0, then X_A-a>X_B-b, so W_A=1/2.
* If a-b=0, then X_A-a>X_B-b, so W_A=3/4.
* If 0<a-b<X_A-X_B, then X_A-a>X_B-b, so W_A=1.
* If a-b=X_A-X_B>0, then W_A=3/4.
* If a-b>X_A-X_B, then X_A-a<X_B-b, so W_A=1/2.
Hence, W_A = 1/2 if a-b<0 or a-b>X_A-X_B; 3/4 if a-b=0 or a-b=X_A-X_B; and 1 if 0<a-b<X_A-X_B.
With the help of <Ref>, we can prove <Ref>:
* Suppose that t<2/3. In this case player A can simply overwhelm player B in all the battlefields. Take P_A=((1/3,2/3),1) and P_B to be any strategy. P_A and P_B form a Nash equilibrium and W_2(t) = 1. Given any pure strategy (x,t-x) of B, we must have x≤ t-x, so x≤t/2, 1/3>t/2≥ x, and 2/3>t≥ t-x. Thus in this case, the payoff to B is 0. This means that B cannot increase payoff regardless of the strategy (or mixed strategy) chosen. On the other hand, the payoff of A is 1, which is the maximum possible value, so clearly neither can A increase payoff by changing strategy. Hence, P_A=((1/3,2/3),1) and any P_B form a Nash equilibrium and W_2(t) = 1.
* Suppose k is such that 2k/(2k+1)≤ t< (2k+2)/(2k+3), where k∈ℤ^+. Take
P_A={((ϵ+j(1-t),1-ϵ-j(1-t)),1/(k+1)) | 0≤ j≤ k}
and
P_B={((j(1-t),t-j(1-t)),1/(k+1)) | 0≤ j≤ k},
where ϵ is such that
(2k+1)t/2-k<ϵ<min(1-t, tk-k+1/2).
The notation here just means that player A plays pure strategy (ϵ+j(1-t),1-ϵ-j(1-t)) with probability 1/(k+1) for all j such that 0≤ j≤ k; and player B plays pure strategy (j(1-t),t-j(1-t)) with probability 1/(k+1) for all j such that 0≤ j≤ k. Then we claim that P_A and P_B form a Nash equilibrium and W_2(t)=(k+2)/(2k+2).
First we will show that these mixed strategies are legitimate. If t<(2k+2)/(2k+3), then (2k+3)t/2<k+1, so (2k+1)t/2-k<1-t. Also, since t/2<1/2, we have (2k+1)t/2-k<tk-k+1/2. Finally, since t≥ 2k/(2k+1), we have (2k+1)t/2-k≥ k-k=0. Hence, a positive ϵ satisfying <ref> exists.
Further, we need to check that the level of force distributed on the first battlefield, x_1, is less than or equal to the force distributed on the second battlefield, x_2; or, equivalently, for player i, we need to check that x_1≤X_i/2. First, let's check player A's strategy. Since j≤ k, we must have
ϵ+j(1-t)≤ϵ+k(1-t).
Then we plug in the upper bound of ϵ in <ref> and get
ϵ+k(1-t)<tk-k+1/2+k(1-t)=1/2.
So ϵ+j(1-t)<1/2. Now let's check player B's strategy. We already know that t≥2k/(2k+1); rearranging, we get
k(1-t)≤t/2.
Since j(1-t)≤ k(1-t), we must have
j(1-t)≤t/2.
So both P_A and P_B are legitimate mixed strategies.
Suppose A chooses some pure strategy p'_A=(x,1-x). Set a=⌊x/(1-t)⌋. Hence,
(1-t)a≤ x<(1-t)(a+1),
where 0≤ a ≤ k+1.
Now let us expand P_B into pure strategies in the calculation of W_A( p'_A, P_B ):(k+1)W_A(p'_A,P_B) =∑_j=0^kW_A((x,1-x),(j(1-t),t-j(1-t)))≤∑_j=0^a-2W_A((x,1-x),(j(1-t),t-j(1-t))) +∑_j=a+1^kW_A((x,1-x),(j(1-t),t-j(1-t))) +W_A((x,1-x),((a-1)(1-t),t-(a-1)(1-t))) +W_A((x,1-x),(a(1-t),t-a(1-t))).There is a ≤ sign on the second line since if a=k+1, there is one additional non-negative term on the right,W_A((x,1-x),((k+1)(1-t),t-(k+1)(1-t))), compared with the original formula.First let us consider the sum∑_j=0^a-2W_A((x,1-x),(j(1-t),t-j(1-t))).Here,x-j(1-t) ≥ x-(a-2)(1-t)≥ a(1-t)-(a-2)(1-t)>(1-t).Hence, according to <Ref>,W_A((x,1-x),(j(1-t),t-j(1-t)))=1/2if 0≤ j≤ a-2. Thus, ∑_j=0^a-2W_A((x,1-x),(j(1-t),t-j(1-t)))=a-1/2.Then let us consider the sum∑_j=a+1^kW_A((x,1-x),(j(1-t),t-j(1-t))).Here,x-j(1-t) ≤ x-(a+1)(1-t)< (a+1)(1-t)-(a+1)(1-t)<0. Hence, according to <Ref>,W_A((x,1-x),(j(1-t),t-j(1-t)))=1/2if a+1≤ j≤ k. Thus,∑_j=a+1^kW_A((x,1-x),(j(1-t),t-j(1-t)))=k-a/2. If x=a(1-t), then according to <Ref>,W_A((x,1-x),((a-1)(1-t),t-(a-1)(1-t)))+W_A((x,1-x),(a(1-t),t-a(1-t))) =3/4+3/4=3/2. If x>a(1-t), then according to <Ref>,W_A((x,1-x),((a-1)(1-t),t-(a-1)(1-t)))+W_A((x,1-x),(a(1-t),t-a(1-t)))=1/2+1=3/2. Hence, W_A(p'_A,P_B)≤1/k+1·k+2/2 = k+2/2k+2. So A cannot increase payoff above k+2/2k+2 by changing strategy. Now let's consider player B's strategy. Suppose B chooses some pure strategy p'_B=(y,t-y). Set b=⌈y-ϵ/1-t⌉. Hence,(1-t)(b-1)+ϵ< y≤ (1-t)b+ϵwhere 0≤ b≤ k.* In the case where 0≤ b≤ k-1, let's expand P_A into pure strategies in the calculations of W_A( P_A, p'_B ):(k+1)W_A(P_A,p'_B) =∑_j=0^kW_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y))= ∑_j=0^b-1W_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y)) +∑_j=b+2^kW_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y)) +W_A((b(1-t)+ϵ,1-b(1-t)-ϵ),(y,t-y)) +W_A(((b+1)(1-t)+ϵ,1-(b+1)(1-t)-ϵ),(y,t-y)). First let us consider the sum∑_j=0^b-1W_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y)).Here,j(1-t)+ϵ≤ (b-1)(1-t)+ϵ <y.Hence, according to <Ref>,W_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y))=1/2if 0≤ j≤ b-1. Thus,∑_j=0^b-1W_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y))=b/2.Then let us consider the sum∑_j=b+2^kW_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y)).Here,j(1-t)+ϵ-y ≥ (b+2)(1-t)+ϵ-y≥ (b+2)(1-t)+ϵ-b(1-t)-ϵ>1-t. Hence, according to <Ref>,W_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y))=1/2if b+2≤ j≤ k. Thus,∑_j=b+2^kW_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y))=k-b-1/2. If y=b(1-t)+ϵ, then according to <Ref>,W_A((b(1-t)+ϵ,1-b(1-t)-ϵ),(y,t-y))+W_A(((b+1)(1-t)+ϵ,1-(b+1)(1-t)-ϵ),(y,t-y)) =3/4+3/4 = 3/2. If y<b(1-t)+ϵ, then according to <Ref>,W_A((b(1-t)+ϵ,1-b(1-t)-ϵ),(y,t-y))+W_A(((b+1)(1-t)+ϵ,1-(b+1)(1-t)-ϵ),(y,t-y)) = 1+ 1/2 = 3/2.Therefore, in either case, W_A(P_A,p'_B) = k+2/2k+2.* If b=k, thenk(1-t)+ϵ > k(1-t)+2k+1/2t-k =1/2t ≥ y.So(k+1)W_A(P_A,p'_B) =∑_j=0^kW_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y))= ∑_j=0^k-1W_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y))+W_A((k(1-t)+ϵ,1-k(1-t)-ϵ),(y,t-y)). Similarly, ∑_j=0^k-1W_A((j(1-t)+ϵ,1-j(1-t)-ϵ),(y,t-y))=k/2.And since (1-t)(k-1)+ϵ< y< (1-t)k+ϵ,W_A((k(1-t)+ϵ,1-k(1-t)-ϵ),(y,t-y))=1. Hence, W_A(P_A,p'_B)≥k+2/2k+2, which means that B cannot increase payoff above k/2k+2 by changing strategy. Hence, P_A and P_B form a Nash equilibrium, and the equilibrium payoff for A is W_2(t)=k+2/2k+2.* Finally, suppose that t=1. Take any mixed strategy P_A and any mixed strategy P_B. Then they form a Nash equilibrium with W_2(t)=1/2. To see this, suppose A plays the pure strategy P'_A=(a,1-a) and B plays the pure strategy P'_B=(b,1-b). * If a=b, clearly W_A(P'_A,P'_B)=1/2.* If a<b, then 1-a>1-b, so W_A(P'_A,P'_B)=1/2. 
Similarly, if a>b, W_A(P'_A,P'_B)=1/2.
Hence, the payoff is 1/2 regardless of the pure strategies that both players play. As a result, the payoff is also 1/2 regardless of which mixed strategies the two players play.

§ UNIQUE EQUILIBRIUM PAYOFFS OF THE GAME ACB(X_A,X_B,3)
In this section we find the unique equilibrium payoffs of some cases of the Asymmetric Colonel Blotto game involving three battlefields. The results that follow are ordered by ascending values of t. Suppose without loss of generality that X_A=1 and X_B=t≤1. The case where t=1 is already solved in <Ref>, and we have W_3(1) = 1/2. In the following discussions, let the function s(x) be defined as follows: s(x) = 0 for x<0, 1/2 for x=0, and 1 for x>0.
In the case where t<6/11, W_3(t) = 1.
In this case, player A can simply overwhelm player B in all the battlefields. Take P_A={((2/11,3/11,6/11),1)} and P_B to be any strategy. For any feasible pure strategy (d,e,f) of B we have d≤ t/3<2/11, e≤ t/2<3/11, and f≤ t<6/11, so A wins all three battlefields. Hence P_A and P_B form a Nash equilibrium and W_3(t) = 1.
In the case where 6/11≤ t<18/31, take
P_A={((t/3+ϵ, t/2+ϵ, 1-(5/6)t-2ϵ),1/3), ((t/3+ϵ, 1-(4/3)t-2ϵ, t+ϵ),1/3), ((1-(3/2)t-2ϵ, t/2+ϵ, t+ϵ),1/3) }
and
P_B={((0,0,t),1/3), ((0,t/2,t/2),1/3), ((t/3,t/3,t/3),1/3) },
where 0<ϵ<(1/2)(1-(31/18)t). P_A and P_B form a Nash equilibrium and W_3(t)=8/9.
Since t<18/31, a real ϵ satisfying the necessary condition must exist. From the range of t and ϵ, we can check that the strategies of A and B are legitimate, or in other words, the levels of force of the battlefields are nondecreasing.
Suppose A plays the pure strategy p'_A=(a,b,c) where a+b+c=1. Then,
9W_A(p'_A,P_B) =3W_A((a,b,c),(0,0,t))+3W_A((a,b,c),(0,t/2,t/2))+3W_A((a,b,c),(t/3,t/3,t/3)) =2s(a-0)+s(a-t/3)+s(b-0)+s(b-t/2)+s(b-t/3)+s(c-t)+s(c-t/2)+s(c-t/3).
For W_A(p'_A,P_B) to be more than 8/9, none of the terms on the right can be 0. Hence, we must have
a≥t/3, b≥t/2, c≥ t.
So
1=a+b+c≥(11/6)t.
Since t≥6/11 in this case, this is only possible when t=6/11, and then p'_A must be (t/3,t/2,t), so W_A(p'_A,P_B)=5/6<8/9. Hence, W_A(p'_A,P_B)≤8/9 for all pure strategies p'_A.
Suppose B plays the pure strategy p'_B=(d,e,f) where d+e+f=t. Then,
9W_A(P_A,p'_B) =3W_A((t/3+ϵ, t/2+ϵ, 1-(5/6)t-2ϵ),(d,e,f))+3W_A((t/3+ϵ, 1-(4/3)t-2ϵ, t+ϵ),(d,e,f))+3W_A((1-(3/2)t-2ϵ, t/2+ϵ, t+ϵ),(d,e,f)).
Remember that d≤t/3, e≤t/2, and f≤ t. So,
9W_A(P_A,p'_B)=6+s(1-(3/2)t-2ϵ-d)+s(1-(4/3)t-2ϵ-e)+s(1-(5/6)t-2ϵ-f).
From 6/11≤ t<18/31 and 0<ϵ<(1/2)(1-(31/18)t), we can show that
d≥ 1-(3/2)t-2ϵ ⇒ e≤(t-d)/2≤(5/4)t+ϵ-1/2<1-(4/3)t-2ϵ,
e≥ 1-(4/3)t-2ϵ ⇒ f≤ t-e≤(7/3)t-1+2ϵ<1-(5/6)t-2ϵ, and
f≥ 1-(5/6)t-2ϵ ⇒ d≤(t-f)/2≤(11/12)t-1/2+ϵ<1-(3/2)t-2ϵ.
Hence, at most one of the three conditions on the left can hold, so at least two of the three s-terms on the right must be 1. Therefore W_A(P_A,p'_B)≥8/9 for all pure strategies p'_B.
To conclude, player A cannot increase payoff above 8/9 by changing strategy, and player B can also not increase payoff above 1/9 by changing strategy. So P_A,P_B form a Nash equilibrium and the equilibrium payoff W_3(t)=8/9.
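The case analysis above is easy to cross-check numerically. The following Python sketch (illustrative only: the value t = 5/9 and the grid step 1/90 are arbitrary choices, and a finite grid gives evidence rather than a proof) evaluates both directions of the claim with exact rational arithmetic:

from fractions import Fraction as F
from itertools import product

t = F(5, 9)          # a sample value in [6/11, 18/31)
eps = F(1, 100)      # satisfies 0 < eps < (1/2)(1 - (31/18)t) = 7/324

PA = [(t/3 + eps, t/2 + eps, 1 - F(5, 6)*t - 2*eps),
      (t/3 + eps, 1 - F(4, 3)*t - 2*eps, t + eps),
      (1 - F(3, 2)*t - 2*eps, t/2 + eps, t + eps)]
PB = [(F(0), F(0), t), (F(0), t/2, t/2), (t/3, t/3, t/3)]

def payoff_A(xa, xb):
    """Payoff to A under the tie-splitting rule, n = 3 battlefields."""
    return sum(F(1, 3) if a > b else (F(1, 6) if a == b else F(0))
               for a, b in zip(xa, xb))

def pures(total, step):
    """All nondecreasing triples on a grid with the given step, summing to total."""
    k = int(total / step)
    for i, j in product(range(k + 1), repeat=2):
        a, b = i * step, j * step
        c = total - a - b
        if a <= b <= c:
            yield (a, b, c)

best_A = max(sum(payoff_A(p, q) for q in PB) / 3 for p in pures(F(1), F(1, 90)))
worst_A = min(sum(payoff_A(p, q) for p in PA) / 3 for q in pures(t, t / 90))
print(best_A == F(8, 9), worst_A == F(8, 9))   # True True on this grid

On this grid, A's best response against P_B attains exactly 8/9 (for example at (1/90, 26/90, 63/90)), and no pure strategy of B pushes A's payoff below 8/9, matching the proof.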
When 3/5 < t < 30/47, take

P_A = {(((30-22t)/75, (15-11t)/25, 11t/15), 1/2), ((2t/15, 13t/30, 1-17t/30), 1/2)}

and

P_B = {((t/3,t/3,t/3), 1/2), ((0,0,t), 1/2)}.

Then P_A and P_B form a Nash equilibrium and W_3(t) = 5/6.

Let us begin by checking that the levels of force distributed among the three battlefields are nondecreasing. First, consider player A's first strategy. Writing x_i^j for the level of force player i distributes on battlefield j, x_A^1 is clearly smaller than x_A^2:

(30-22t)/75 ≤ ((30-22t)/75) · (3/2) = (15-11t)/25.

Since t > 3/5 > 45/88, we have 88t > 45; rearranging gives (15-11t)/25 < 11t/15.

Now consider player A's second strategy. Clearly x_A^1 is smaller than x_A^2: 2t/15 < 13t/30. Furthermore, since t < 1, rearranging gives 13t/30 < 1-17t/30. Hence, P_A is legitimate. And clearly P_B is legitimate.

Suppose player A plays the pure strategy p'_A = (a,b,1-a-b). Then,

6W_A(p'_A,P_B) = 3W_A((a,b,1-a-b),(t/3,t/3,t/3)) + 3W_A((a,b,1-a-b),(0,0,t)) = s(a-t/3)+s(b-t/3)+s(1-a-b-t/3)+s(a-0)+s(b-0)+s(1-a-b-t).

For W_A(p'_A,P_B) to be greater than 5/6, none of the six terms on the right can be 0. Hence, we obtain the following inequalities:

a ≥ t/3 and a ≥ 0; b ≥ t/3 and b ≥ 0; 1-a-b ≥ t/3 and 1-a-b ≥ t.

Adding together a ≥ t/3, b ≥ t/3, and 1-a-b ≥ t, we get

1 ≥ (5/3)t.

Hence, t ≤ 3/5, contradicting our hypothesis on t. Therefore, we must have W_A(p'_A,P_B) ≤ 5/6 for all pure strategies p'_A.

If player B plays the pure strategy p'_B = (c,d,t-c-d), then,

6W_A(P_A,p'_B) = 3W_A(((30-22t)/75, (15-11t)/25, 11t/15),(c,d,t-c-d)) + 3W_A((2t/15, 13t/30, 1-17t/30),(c,d,t-c-d)) = s((30-22t)/75-c)+s((15-11t)/25-d)+s(11t/15-t+c+d)+s(2t/15-c)+s(13t/30-d)+s(1-17t/30-t+c+d).

Rearranging t < 30/47 gives (30-22t)/75 > t/3 ≥ c. Rearranging t < 30/47 also gives (15-11t)/25 > t/2 ≥ d. Similarly, rearranging t < 30/47 gives 1-17t/30 > t ≥ t-c-d. Hence,

6W_A(P_A,p'_B) = 3 + s(-4t/15+c+d) + s(2t/15-c) + s(13t/30-d).

* If c < 2t/15,
  * and if d ≥ 13t/30, then c+d > 4t/15. So 6W_A(P_A,p'_B) = 3+s(-4t/15+c+d)+s(2t/15-c)+s(13t/30-d) ≥ 3+1+1+0 = 5.
  * and if d < 13t/30, then 6W_A(P_A,p'_B) = 3+s(-4t/15+c+d)+s(2t/15-c)+s(13t/30-d) ≥ 3+0+1+1 = 5.
* If c = 2t/15, then d ≤ (t-c)/2 = 13t/30, and c+d ≥ 2c = 4t/15.
  * If d = 13t/30, then t-c-d = 13t/30. So 6W_A(P_A,p'_B) = 3+s(-4t/15+c+d)+s(2t/15-c)+s(13t/30-d) = 3+1+1/2+1/2 = 5.
  * If d < 13t/30, then 6W_A(P_A,p'_B) = 3+s(-4t/15+c+d)+s(2t/15-c)+s(13t/30-d) ≥ 3+1/2+1/2+1 = 5.
* If c > 2t/15, then c+d ≥ 2c > 4t/15, so 6W_A(P_A,p'_B) = 3+s(-4t/15+c+d)+s(2t/15-c)+s(13t/30-d) ≥ 3+1+1+0 = 5.

Hence, W_A(P_A,p'_B) ≥ 5/6 for any pure strategy p'_B.

To conclude, player A cannot increase payoff above 5/6 by changing strategy, and player B cannot increase payoff above 1/6 by changing strategy. So P_A and P_B form a Nash equilibrium, and the equilibrium payoff, W_3(t), is 5/6.

To guarantee a Nash equilibrium, player A can play any strategy P_A = {((a,b,c),1/2), ((d,e,f),1/2)} satisfying

a+b+c = 1, a ≤ b ≤ c, d ≤ e ≤ f, d+e+f = 1, a > t/3, b > t/2, f > t, and 2d+c ≥ t, d+2e ≥ t.

The strategy {(((30-22t)/75, (15-11t)/25, 11t/15), 1/2), ((2t/15, 13t/30, 1-17t/30), 1/2)} is only one of the possible ones.

W_3(2/3) ≤ 4/5. Let B play the strategy

P_B = {((0,1/16,29/48), 1/5), ((0,0,2/3), 1/5), ((1/16,1/16,13/24), 1/5), ((1/8,13/48,13/48), 1/5), ((5/24,11/48,11/48), 1/5)}

and let A play any pure strategy p; then we verified using a computer that the payoff for A, W_A(p,P_B), is at most 4/5.

W_3(5/6) ≥ 2/3. Let A play the strategy P_A = (1/6,1/3,1/2), and let B play any pure strategy p; then we verified using a computer that the payoff for A, W_A(P_A,p), is at least 2/3.

Note that in this case where t ≠ 1, and in the general case for n=2 (<Ref>), there are Nash equilibrium strategies with atoms. This behavior is very different from what we saw in <Ref>, and also very different from what we proved about Nash equilibria of the game ACB(1,1,n) where n ≥ 3 in <Ref>.
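The computer-verified bounds above can be reproduced in a few lines. The sketch below spot-checks the first claim, W_3(2/3) ≤ 4/5, on a grid of sorted pure strategies for A; the paper's verification is exact, whereas this grid sweep is only a sanity check, and the resolution N is our own choice, picked so that B's atom coordinates lie on the grid.

from fractions import Fraction as F
def s(x):
    return F(1) if x > 0 else F(1, 2) if x == 0 else F(0)
def W3(p, q):
    # fraction of the three battlefields that A wins (ties count 1/2)
    return sum(s(p[i] - q[i]) for i in range(3)) / 3
# B's five-atom strategy from the claim W_3(2/3) <= 4/5, each atom weight 1/5
PB = [(F(0), F(1, 16), F(29, 48)),
      (F(0), F(0), F(2, 3)),
      (F(1, 16), F(1, 16), F(13, 24)),
      (F(1, 8), F(13, 48), F(13, 48)),
      (F(5, 24), F(11, 48), F(11, 48))]
N = 96  # multiple of 16 and 48, so every atom coordinate is a grid point
bestA = max(sum(W3((F(i, N), F(j, N), 1 - F(i, N) - F(j, N)), q)
                for q in PB) / 5
            for i in range(N + 1) for j in range(N + 1)
            if F(i, N) <= F(j, N) <= 1 - F(i, N) - F(j, N))
print(bestA <= F(4, 5))  # expect: True

The second claim, W_3(5/6) ≥ 2/3, can be checked the same way by sweeping B's sorted pure strategies (with budget t = 5/6) against the single atom P_A = (1/6,1/3,1/2).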
§ OPEN PROBLEMS

Still much remains unknown about the Asymmetric Colonel Blotto game in the general case. For example, what would a Nash equilibrium for the game ACB(1,1,4) look like? Or a Nash equilibrium for the game ACB(1,1,n) where n ≥ 5? The methods used to prove the uniqueness of the marginal distributions of the game ACB(1,1,3) in <Ref> cannot be used here, since they only establish the uniqueness of the marginal distributions given a Nash equilibrium strategy; they do not help when we cannot find a Nash equilibrium in the first place. It is hard to find a Nash equilibrium for the game ACB(1,1,4), although one can be approximated by means of computer simulation, as sketched below. What is more, we can show that the marginal distributions of Nash equilibrium strategies cannot be uniform, which makes it difficult to guess the correct equilibrium strategy.

Another problem is to determine how the unique equilibrium payoff of the game ACB(1,t,n) varies as t varies continuously in the general case. As we have shown in <Ref>, W_2(t) is locally constant and discontinuous as a function of t. This is quite a surprising result, as it indicates that there are phase changes in the game ACB(1,t,2) as t changes. Our partial results in <Ref> also indicate that W_3(t) is a discontinuous function (<Ref> and <Ref>). Computer simulations of discrete cases also indicate that W_3(t) is sometimes not differentiable even where it is continuous. Perhaps the phase changes in this case correspond to discontinuous jumps in the equilibrium strategies; this is illustrated by the drastic difference between the equilibrium strategies in <Ref> and those in <Ref>. Is it possible to find all the critical values of t where these phase changes occur?

Yet another fundamental question left unanswered is the existence of Nash equilibria for the game ACB(X_A,X_B,n) in the general case. We have discussed Nash equilibria in special cases, but we have not given a proof that guarantees their existence in general.
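For readers who want to experiment, the computer simulation alluded to above can be as simple as fictitious play on a discretized version of ACB(1,1,4). The sketch below is a toy: the discretization (N units per player), the number of rounds T, and the restriction to sorted allocations are all our own choices, and equilibria of the continuous game need not be captured at this resolution.

import itertools, random
def battles(p, q):
    # fraction of the four battlefields won by A (ties count 1/2)
    return sum((x > y) + 0.5 * (x == y) for x, y in zip(p, q)) / 4
N = 12  # discretization: each player splits N indivisible units
pures = [p for p in itertools.product(range(N + 1), repeat=4)
         if sum(p) == N and list(p) == sorted(p)]
# Fictitious play: each player best-responds to the opponent's empirical
# mixture of past plays; the game is constant-sum, so the empirical value
# converges to the value of the discretized game.
countA = {p: 0 for p in pures}
countB = {p: 0 for p in pures}
playA = playB = random.choice(pures)
T = 2000
for _ in range(T):
    countA[playA] += 1
    countB[playB] += 1
    playA = max(pures, key=lambda p: sum(c * battles(p, q)
                                         for q, c in countB.items() if c))
    playB = min(pures, key=lambda q: sum(c * battles(p, q)
                                         for p, c in countA.items() if c))
value = sum(ca * cb * battles(p, q)
            for p, ca in countA.items() if ca
            for q, cb in countB.items() if cb) / T**2
print(round(value, 3))  # expect roughly 0.5 by symmetry

The empirical mixtures countA and countB then serve as rough approximations of equilibrium strategies, which can be inspected for the non-uniform marginals mentioned above.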
{ "authors": [ "Simon Rubinstein-Salzedo", "Yifan Zhu" ], "categories": [ "cs.GT" ], "primary_category": "cs.GT", "published": "20170826015948", "title": "The Asymmetric Colonel Blotto Game" }