Columns: text (string, lengths 11–9.77k) · label (string, lengths 2–104)
We explored the occurrence rate of small close-in planets among $\textit{Kepler}$ target stars as a function of the iron abundance and the stellar total velocity $V_\mathrm{tot}$. We estimated the occurrence rate of those planets by combining information from LAMOST and the California-$\textit{Kepler}$ Survey (CKS) and found that iron-poor stars exhibit an increase in the occurrence with $V_\mathrm{tot}$ from $f < 0.2$ planets per star at $ V_\mathrm{tot} < 30\ \mathrm{km~s}^{-1}$ to $f \sim 1.2$ at $V_\mathrm{tot} > 90\ \mathrm{km~s}^{-1}$. We suggest this planetary profusion may be a result of a higher abundance of $\alpha$ elements associated with iron-poor, high-velocity stars. Furthermore, we have identified an increase in small planet occurrence with iron abundance, particularly for the slower stars ($V_\mathrm{tot} < 30\ \mathrm{km~s}^{-1}$), where the occurrence increased to $f \sim 1.1$ planets per star in the iron-rich domain. Our results suggest there are two regions in the $([\mathrm{Fe}/\mathrm{H}],[\alpha/\mathrm{Fe}])$ plane in which stars tend to form and maintain small planets. We argue that analysis of the effect of overall metal content on planet occurrence is incomplete without including information on both iron and $\alpha$ element enhancement.
astrophysics
We find the geometric phase of a two-level system undergoing pure dephasing via interaction with an arbitrary environment, taking into account the effect of the initial system-environment correlations. We use our formalism to calculate the geometric phase for the two-level system in the presence of both harmonic oscillator and spin environments, and we consider the initial state of the two-level system to be prepared by a projective measurement or a unitary operation. The geometric phase is evaluated for a variety of parameters such as the system-environment coupling strength to show that the initial correlations can affect the geometric phase very significantly even for weak and moderate system-environment coupling strengths. Moreover, the correction to the geometric phase due to the system-environment coupling generally becomes smaller (and can even be zero) if initial system-environment correlations are taken into account, thus implying that the system-environment correlations can increase the robustness of the geometric phase.
quantum physics
We study how to compute the operator product expansion coefficients in the exact renormalization group formalism. After discussing possible strategies, we consider some examples explicitly, within the $\epsilon$-expansions, for the Wilson-Fisher fixed points of the real scalar theory in $d=4-\epsilon$ dimensions and the Lee-Yang model in $d=6-\epsilon$ dimensions. Finally we discuss how our formalism may be extended beyond perturbation theory.
high energy physics theory
The tidal force from a supermassive black hole can rip apart a star that passes close enough in what is known as a Tidal Disruption Event. Typically half of the destroyed star remains bound to the black hole and falls back on highly eccentric orbits, forming an accretion flow which powers a luminous flare. In this paper we use analytical and numerical calculations to explore the effect of stellar rotation on the fallback rate of material. We find that slowly spinning stars ($\Omega_* \lesssim 0.01 \Omega_{\rm{breakup}}$) provide only a small perturbation to fallback rates found in the non-spinning case. However when the star spins faster, there can be significant effects. If the star is spinning retrograde with respect to its orbit the tidal force from the black hole has to spin down the star first before disrupting it, causing delayed and sometimes only partial disruption events. However, if the star is spinning prograde this works with the tidal force and the material falls back sooner and with a higher peak rate. We examine the power-law index of the fallback curves, finding that in all cases the fallback rate overshoots the canonical $t^{-5/3}$ rate briefly after the peak, with the depth of the overshoot dependent on the stellar spin. We also find that in general the late time evolution is slightly flatter than the canonical $t^{-5/3}$ rate. We therefore conclude that considering the spin of the star may be important in modelling observed TDE lightcurves.
astrophysics
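For context on the tidal disruption abstract above, the canonical $t^{-5/3}$ fallback rate it refers to follows from the textbook argument: assume the debris has a roughly flat distribution of mass per unit specific orbital energy, $dM/dE \approx \mathrm{const}$, and that each fluid element returns to pericentre after one Keplerian period set by its energy. Then
\[
t_{\rm fb} = 2\pi G M_{\rm BH}\,(2|E|)^{-3/2}
\;\;\Longrightarrow\;\;
|E| = \frac{1}{2}\left(\frac{2\pi G M_{\rm BH}}{t}\right)^{2/3},
\qquad
\dot{M}_{\rm fb} = \frac{dM}{dE}\left|\frac{dE}{dt}\right|
= \frac{dM}{dE}\,\frac{(2\pi G M_{\rm BH})^{2/3}}{3}\,t^{-5/3}.
\]
Stellar rotation enters through the spread of debris energies, i.e. through deviations of $dM/dE$ from a flat distribution, which is why the abstract reports departures from the canonical slope near the peak.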
Recent work showed that stabilizing affine control systems to desired (sets of) states while optimizing quadratic costs and observing state and control constraints can be reduced to quadratic programs (QP) by using control barrier functions (CBFs) and control Lyapunov functions (CLFs). In our own recent work, we defined high order CBFs (HOCBFs) to accommodate systems and constraints with arbitrary relative degrees, and a penalty method to increase the feasibility of the corresponding QPs. In this paper, we introduce adaptive CBFs (AdaCBFs) that can accommodate time-varying control bounds and dynamics noise, and also address the feasibility problem. Central to our approach is the introduction of penalty functions in the definition of an AdaCBF and the definition of auxiliary dynamics for these penalty functions that are HOCBFs and are stabilized by CLFs. We demonstrate the advantages of the proposed method by applying it to a cruise control problem with different road surfaces, tire slipping, and dynamics noise.
electrical engineering and systems science
We consider (0+1) and (1+1) dimensional Yukawa theory in various scalar field backgrounds, which solve the classical equations of motion: $\ddot{\phi}_{cl} = 0$ or $\Box \phi_{cl} = 0$, respectively. We solve the (0+1)-dimensional theory exactly. In (1+1) dimensions we consider background fields of the form $\phi_{cl} = E\, t$ and $\phi_{cl} = E\, x$, which are inspired by the constant electric field. Here $E$ is a constant. We study the backreaction problem by various methods, including the dynamics of a coherent state. We also calculate loop corrections to the correlation functions in the theory using the Schwinger--Keldysh diagrammatic technique.
high energy physics theory
We show that irreducible unipotent representations of split Levi subgroups of finite groups of Lie type extend to their stabilizers inside the normalizer of the given Levi subgroup. For this purpose, we extend the multiplicity-freeness theorem from regular embeddings to arbitrary isotypies.
mathematics
We trained and evaluated a localization-based deep CNN for breast cancer screening exam classification on over 200,000 exams (over 1,000,000 images). Our model achieves an AUC of 0.919 in predicting malignancy in patients undergoing breast cancer screening, reducing the error rate of the baseline (Wu et al., 2019a) by 23%. In addition, the model generates bounding boxes for benign and malignant findings, providing interpretable predictions.
electrical engineering and systems science
In the current work, the hitherto unfamiliar shifted Lucas polynomials are introduced. We construct a computational wavelet technique for the solution of initial/boundary value problems for second order differential equations. For this numerical scheme, we develop the weight function and Rodrigues' formula for the Lucas polynomials. Further, the Lucas polynomials and their properties are used to propose the shifted Lucas polynomials, which in turn yield the shifted Lucas wavelets. We furnish the operational matrix of differentiation and the product operational matrix of the shifted Lucas wavelets. Moreover, convergence and error analysis ensure the accuracy of the proposed method. Illustrative examples show that the present method is numerically fruitful, effective and convenient for solving differential equations.
mathematics
Development of the self-sustained quantum-electrodynamical (QED) cascade in a single strong laser pulse is studied analytically and numerically. The hydrodynamical approach is used to construct the analytical model of the cascade evolution, which includes the key features of the cascade observed in 3D QED particle-in-cell (QED-PIC) simulations such as the magnetic field predominance in the cascade plasma and laser energy absorption. The equations of the model are derived in the closed form and are solved numerically. Direct comparison between the solutions of the model equations and 3D QED-PIC simulations shows that our model is able to describe the complex nonlinear process of the cascade development qualitatively well. The various regimes of the interaction based on the intensity of the laser pulse are revealed in both the solutions of the model equations and the results of the QED-PIC simulations.
physics
Spatial econometric research typically relies on the assumption that the spatial dependence structure is known in advance and is represented by a deterministic spatial weights matrix. In contrast to classical approaches, we investigate the estimation of sparse spatial dependence structures for regular lattice data. In particular, an adaptive least absolute shrinkage and selection operator (lasso) is used to select and estimate the individual connections of the spatial weights matrix. To recover the spatial dependence structure, we propose cross-sectional resampling, assuming that the random process is exchangeable. The estimation procedure is based on a two-step approach to circumvent simultaneity issues that typically arise from endogenous spatial autoregressive dependencies. The two-step adaptive lasso approach with cross-sectional resampling is verified using Monte Carlo simulations. Finally, we apply the procedure to model nitrogen dioxide ($\mathrm{NO_2}$) concentrations and show that estimating the spatial dependence structure, rather than using prespecified weights matrices, improves the prediction accuracy considerably.
statistics
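To illustrate the adaptive-lasso step described in the abstract above, here is a minimal Python sketch; the names and the first-stage coefficient estimates are hypothetical, and the paper's full two-step procedure with cross-sectional resampling is not reproduced. The standard adaptive-lasso trick is to rescale each column of the design matrix by a preliminary coefficient estimate, run an ordinary lasso, and rescale the fitted coefficients back.

import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, y, beta_init, gamma=1.0, alpha=0.1):
    # penalty weights from a consistent first-stage estimate (e.g. ridge or 2SLS)
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
    X_scaled = X / w                      # column-wise rescaling implements the weighted penalty
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(X_scaled, y)
    return fit.coef_ / w                  # map coefficients back to the original scale

# hypothetical usage: each column of X holds a candidate neighbour's values, so the
# nonzero entries of the returned vector select links of the spatial weights matrix
# beta_hat = adaptive_lasso(X, y, beta_ridge)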
This paper focuses on exploring efficient ways to find $\mathcal{H}_2$ optimal Structure-Preserving Model Order Reduction (SPMOR) of second-order systems via the interpolatory projection-based Iterative Rational Krylov Algorithm (IRKA). To obtain reduced models of second-order systems, the classical IRKA works with the equivalent first-order converted forms and estimates first-order reduced models. The drawbacks of that technique are the failure to preserve structure and the loss of properties of the original models, which are key factors in some physical applications. To overcome those issues, we introduce IRKA-based techniques that enable us to approximate the second-order systems by reduced models implicitly, without forming the first-order forms. On the other hand, Model Order Reduction (MOR) of large-scale second-order systems with an optimal $\mathcal{H}_2$ error norm and a rapid rate of convergence is a very challenging task. For convenient computation, we discuss competent techniques to determine the optimal $\mathcal{H}_2$ error norms efficiently for second-order systems. The applicability and efficiency of the proposed techniques are validated by applying them to some large-scale systems extracted from engineering applications. The computations are done numerically using MATLAB simulation, and the achieved results are discussed in both tabular and graphical form.
mathematics
The Zwicky Transient Facility (ZTF) is a new optical time-domain survey that uses the Palomar 48-inch Schmidt telescope. A custom-built wide-field camera provides a 47 deg$^2$ field of view and 8 second readout time, yielding more than an order of magnitude improvement in survey speed relative to its predecessor survey, the Palomar Transient Factory (PTF). We describe the design and implementation of the camera and observing system. The ZTF data system at the Infrared Processing and Analysis Center provides near-real-time reduction to identify moving and varying objects. We outline the analysis pipelines, data products, and associated archive. Finally, we present on-sky performance analysis and first scientific results from commissioning and the early survey. ZTF's public alert stream will serve as a useful precursor for that of the Large Synoptic Survey Telescope.
astrophysics
We consider string theory vacua with tadpoles for dynamical fields and uncover universal features of the resulting spacetime-dependent solutions. We argue that the solutions can extend only a finite distance $\Delta$ away in the spacetime dimensions over which the fields vary, scaling as $\Delta^n\sim {\cal T}$ with the strength of the tadpole ${\cal T}$. We show that naive singularities arising at this distance scale are physically replaced by ends of spacetime, related to the cobordism defects of the swampland cobordism conjecture and involving stringy ingredients like orientifold planes and branes, or exotic variants thereof. We illustrate these phenomena in large classes of examples, including AdS$_5\times T^{1,1}$ with 3-form fluxes, 10d massive IIA, M-theory on K3, the 10d non-supersymmetric $USp(32)$ strings, and type IIB compactifications with 3-form fluxes and/or magnetized D-branes. We also describe a 6d string model whose tadpole triggers spontaneous compactification to a semirealistic 3-family MSSM-like particle physics model.
high energy physics theory
It seems to be a pearl of conventional wisdom that parameter learning in deep sum-product networks is surprisingly fast compared to shallow mixture models. This paper examines the effects of overparameterization in sum-product networks on the speed of parameter optimisation. Using theoretical analysis and empirical experiments, we show that deep sum-product networks exhibit an implicit acceleration compared to their shallow counterpart. In fact, gradient-based optimisation in deep tree-structured sum-product networks is equivalent to gradient ascent with adaptive and time-varying learning rates and additional momentum terms.
computer science
We measure the impact of nonvanishing boundary localised terms on $\Delta B=2$ transitions in the five-dimensional Universal Extra Dimensional scenario, where the masses and coupling strengths of several interactions of Kaluza-Klein modes are significantly modified with respect to the minimal counterpart. In such a scenario we estimate the Kaluza-Klein contributions of quarks, gauge bosons and charged Higgs by evaluating the one-loop box diagrams that are responsible for the $\Delta B=2$ transitions. Using the loop function (obtained from the one-loop box diagrams) we determine several important elements that enter the Wolfenstein parametrisation. Moreover, with these elements we also study the geometrical shape of the unitarity triangle. Besides, we compute the quantity $\Delta M_s$ scaled by the corresponding Standard Model value. Outcomes of our theoretical predictions are compared simultaneously to the allowed ranges of the corresponding observables. Our current analysis shows that, depending on the parameters of this scenario, the lower limit on the inverse of the radius of compactification can reach an appreciably large value ($\approx 1.48$ TeV or even higher).
high energy physics phenomenology
We investigate the gauge symmetry and gauge fixing dependence properties of the effective average action for quantum gravity models of general form. Using the background field formalism and the standard BRST-based arguments, one can establish the special class of regulator functions that preserves the background field symmetry of the effective average action. Unfortunately, even though the gauge symmetry is preserved at the quantum level, the non-invariance of the regulator action under the global BRST transformations leads to gauge fixing dependence even when the on-shell conditions are used.
high energy physics theory
A key tool astronomers have to investigate the nature of extragalactic transients is their position on their host galaxies. Galactocentric offsets, enclosed fluxes and the fraction of light statistic are widely used at different wavelengths to help infer the nature of transient progenitors. Motivated by the proposed link between magnetars and fast radio bursts (FRBs), we create a face-on image of the Milky Way using best estimates of its size, structure and colour. We place Galactic magnetars, pulsars and X-ray binaries on this image, using the available distance information. Galactocentric offsets, enclosed fluxes and fraction of light distributions are compared to extragalactic transient samples. We find that FRBs are located on their hosts in a manner consistent with Galactic neutron stars on the Milky Way's light, although we cannot determine which specific population is the best match. The Galactic distributions are consistent with other classes of extragalactic transient much less often, across the range of comparisons made. We demonstrate that the fraction of light method should be carefully used in galaxies with multiple components, and when comparing data with different redshift distributions and spatial resolutions. Star forming region offsets of a few hundred parsec are found to be typical of young neutron stars in the Milky Way, and therefore FRB offsets of this size do not preclude a magnetar origin, although the interpretation of these offsets is currently unclear. Overall, our results provide further support for FRB models invoking isolated young neutron stars, or binaries containing a neutron star.
astrophysics
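For readers unfamiliar with the fraction-of-light statistic used in the abstract above, here is a minimal numpy sketch of the usual definition: the fraction of the host's total flux contained in pixels fainter than the pixel hosting the transient. The image array and pixel coordinates are placeholders.

import numpy as np

def fraction_of_light(image, x, y):
    # fraction of total galaxy flux in pixels fainter than the transient's pixel
    flux = image[np.isfinite(image)].ravel()
    flux = flux[flux > 0]                     # keep only positive (galaxy) flux
    host_pixel = image[y, x]
    return flux[flux < host_pixel].sum() / flux.sum()

In practice one first background-subtracts and segments the host, which is exactly where the abstract's caveats about multi-component galaxies, redshift distributions and spatial resolution enter.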
We have been working on speech synthesis for rakugo (a traditional Japanese form of verbal entertainment similar to one-person stand-up comedy) toward speech synthesis that authentically entertains audiences. In this paper, we propose a novel evaluation methodology using synthesized rakugo speech and real rakugo speech uttered by professional performers of three different ranks. The naturalness of the synthesized speech was comparable to that of the human speech, but the synthesized speech entertained listeners less than the performers of any rank. However, we obtained some interesting insights into challenges to be solved in order to achieve a truly entertaining rakugo synthesizer. For example, naturalness was not the most important factor, even though it has generally been emphasized as the most important point to be evaluated in the conventional speech synthesis field. More important factors were the understandability of the content and the distinguishability of the characters in the rakugo story, in both of which the synthesized rakugo speech was relatively inferior to the professional performers. We also found that fundamental frequency (fo) modeling should be further improved to better entertain audiences. These results indicate important steps toward achieving authentically entertaining speech synthesis.
electrical engineering and systems science
Vortex-mediated mutual friction governs the coupling between the superfluid and normal components in neutron star interiors. By, for example, comparing precise timing observations of pulsar glitches with theoretical predictions it is possible to constrain the physics in the interior of the star, but to do so an accurate model of the mutual friction coupling in general relativity is needed. We derive such a model directly from Carter's multifluid formalism, and study the vortex structure and coupling time-scale between the components in a relativistic star. We calculate how general relativity modifies the shape and the density of the quantized vortices and show that, in the quasi-Schwarzschild coordinates, they can be approximated as straight lines for realistic neutron star configurations. Finally, we present a simple universal formula (given as a function of the stellar compactness alone) for the relativistic correction to the glitch rise-time, which is valid under the assumption that the superfluid reservoir is in a thin shell in the crust or in the outer core. This universal relation can be easily employed to correct, a posteriori, any Newtonian estimate for the coupling time-scale, without any additional computational expense.
astrophysics
We perform an off-shell treatment of asymptotically decelerating spatially flat FRW spacetimes at future null infinity. We obtain supertranslation and superrotation-like asymptotic diffeomorphisms which are consistent with the global symmetries of FRW and we compute how the asymptotic data is transformed under them. Further, we study in detail the effect of these diffeomorphisms on some simple backgrounds including unperturbed FRW and Sultana-Dyer black hole. In particular, we investigate how these transformations act on several cosmologically perturbed backgrounds.
high energy physics theory
Detectability describes the property of a system whose current and subsequent states can be uniquely determined after a finite number of observations. In this paper, we relax detectability to C-detectability, which only requires that a given set of crucial states be distinguished from other states. Four types of C-detectability: strong C-detectability, weak C-detectability, periodically strong C-detectability, and periodically weak C-detectability, are defined in the framework of labeled Petri nets, which have greater modeling power than finite automata. Moreover, based on the notion of basis markings, approaches are developed to verify the four types of C-detectability for a bounded labeled Petri net system. Since they require neither computing the whole reachability space nor enumerating all the markings consistent with an observation, the proposed approaches are more efficient.
electrical engineering and systems science
We investigate radial solutions for the problem \[ \begin{cases} \displaystyle -\Delta U=\frac{\lambda+\delta|\nabla U|^2}{1-U},\; U>0 & \textrm{in}\ B,\\ U=0 & \textrm{on}\ \partial B, \end{cases} \] which is related to the study of Micro-Electromechanical Systems (MEMS). Here, $B\subset \mathbb{R}^N$ $(N\geq 2)$ denotes the open unit ball and $\lambda, \delta>0$ are real numbers. Two classes of solutions are considered in this work: (i) {\it regular solutions}, which satisfy $0<U<1$ in $B$ and (ii) {\it rupture solutions} which satisfy $U(0)=1$, and thus make the equation singular at the origin. Bifurcation with respect to parameter $\lambda>0$ is also discussed.
mathematics
We report on a study of the high-mass star formation in the HII region W28A2 by investigating the molecular clouds extending over ~5-10 pc from the exciting stars using 12CO and 13CO (J=1-0) and 12CO (J=2-1) data taken with the NANTEN2 and Mopra telescopes. These molecular clouds consist of three velocity components with the CO intensity peaks at V_LSR ~ -4 km s$^{-1}$, 9 km s$^{-1}$ and 16 km s$^{-1}$. The highest CO intensity is detected at V_LSR ~ 9 km s$^{-1}$, where the high-mass stars with spectral types of O6.5-B0.5 are embedded. We found bridging features connecting these clouds toward the directions of the exciting sources. Comparisons of the gas distributions with the radio continuum emission and 8 um infrared emission show spatial coincidence/anti-coincidence, suggesting physical associations between the gas and the exciting sources. The 12CO J=2-1 to 1-0 intensity ratio shows a high value (> 0.8) toward the exciting sources for the -4 km s$^{-1}$ and +9 km s$^{-1}$ clouds, possibly due to heating by the high-mass stars, whereas the intensity ratio at the CO intensity peak (V_LSR ~ 9 km s$^{-1}$) drops to ~0.6, suggesting self-absorption by the dense gas on the near side of the +9 km s$^{-1}$ cloud. We found partly complementary gas distributions between the -4 km s$^{-1}$ and +9 km s$^{-1}$ clouds, and the -4 km s$^{-1}$ and +16 km s$^{-1}$ clouds. The exciting sources are located toward the overlapping region in the -4 km s$^{-1}$ and +9 km s$^{-1}$ clouds. Similar gas properties are found in the Galactic massive star clusters RCW 38 and NGC 6334, where an early stage of cloud collision triggering the star formation is suggested. Based on these results, we discuss the possibility that the formation of high-mass stars in the W28A2 region was triggered by a cloud-cloud collision.
astrophysics
We have extended the pure rotational investigation of the two isomers syn and anti vinyl mercaptan to the millimeter domain using a frequency-multiplication spectrometer. The species were produced by a radiofrequency discharge in 1,2-ethanedithiol. Additional transitions have been re-measured in the centimeter band using Fourier-transform microwave spectroscopy to better determine rest frequencies of transitions with low-$J$ and low-$K_a$ values. Experimental investigations were supported by quantum chemical calculations on the energetics of both the [C$_2$,H$_4$,S] and [C$_2$,H$_4$,O] isomeric families. Interstellar searches for both syn and anti vinyl mercaptan as well as vinyl alcohol were performed in the EMoCA (Exploring Molecular Complexity with ALMA) spectral line survey carried out toward Sagittarius (Sgr) B2(N2) with the Atacama Large Millimeter/submillimeter Array (ALMA). Highly accurate experimental frequencies (to better than 100 kHz accuracy) for both syn and anti isomers of vinyl mercaptan have been measured up to 250 GHz; these deviate considerably from predictions based on extrapolation of previous microwave measurements. Reliable frequency predictions of the astronomically most interesting millimeter-wave lines for these two species can now be derived from the best-fit spectroscopic constants. From the energetic investigations, the four lowest singlet isomers of the [C$_2$,H$_4$,S] family are calculated to be nearly isoenergetic, which makes this family a fairly unique test bed for assessing possible reaction pathways. Upper limits for the column density of syn and anti vinyl mercaptan are derived toward the extremely molecule-rich star-forming region Sgr B2(N2) enabling comparison with selected complex organic molecules.
astrophysics
We propose a simple experiment to explore magnetic fields created by electric railways and compare them with a simple model and parameters estimated using easily available information. A pedestrian walking on an overpass above train tracks registers the components of the magnetic field with the built-in magnetometer of a smartphone. The experimental results are successfully compared with a model of the magnetic field of the transmission lines and the local Earth's magnetic field. This experiment, suitable for a field trip, involves several abilities, such as modeling the magnetic field of power lines, looking up reliable information and estimating quantities that are not easily accessible.
physics
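The single-line-current model mentioned in the abstract above is simple enough to evaluate directly. The Python sketch below gives the Biot-Savart field of one infinite straight transmission wire at a point on the overpass, to which a locally constant geomagnetic field is added; the current, geometry and geomagnetic values are assumptions chosen only for illustration.

import numpy as np

MU0 = 4e-7 * np.pi                      # vacuum permeability, T*m/A

def wire_field(I, x, h):
    # B (tesla) of an infinite wire along y carrying current I, evaluated at
    # horizontal offset x and height h above the wire
    r2 = x**2 + h**2
    # B = (mu0 I / 2 pi r^2) * (h, 0, -x) for current flowing along +y
    return MU0 * I / (2 * np.pi * r2) * np.array([h, 0.0, -x])

# hypothetical numbers: 300 A in the catenary, pedestrian 7 m above it, 2 m to the side
B_wire = wire_field(300.0, x=2.0, h=7.0)
B_earth = np.array([20e-6, 0.0, -40e-6])   # assumed local geomagnetic field, T
B_total = B_wire + B_earth                 # what the smartphone magnetometer records

With these assumed numbers the wire contributes a few microtesla, comparable to variations of the Earth's field, which is consistent with the effect being measurable by a phone magnetometer.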
By using the conformable fractional Nikiforov-Uvarov (CF-NU) method, the radial Schrodinger equation is solved analytically. The energy eigenvalues and corresponding eigenfunctions are obtained for a temperature-dependent potential. The effect of the fractional-order parameter on heavy-quarkonium masses such as charmonium and bottomonium is studied in a hot QCD medium in 3D and higher-dimensional space. A comparison is made with recent works. We conclude that the fractional order plays an important role in a hot QCD medium in 3D and higher-dimensional space.
high energy physics phenomenology
Telescopes for observing the Cosmic Microwave Background (CMB) usually have shields and baffle structures in order to reduce the pickup from the ground. These structures may introduce unwanted sidelobes. We present a method to measure and model baffling structures of large aperture telescope optics to predict the sidelobe pattern.
astrophysics
Generalizing our ideas in [arXiv:1006.3313], we explain how topologically-twisted N=2 gauge theory on a four-manifold with boundary, will allow us to furnish purely physical proofs of (i) the Atiyah-Floer conjecture, (ii) Munoz's theorem relating quantum and instanton Floer cohomology, (iii) their monopole counterparts, and (iv) their higher rank generalizations. In the case where the boundary is a Seifert manifold, one can also relate its instanton Floer homology to modules of an affine algebra via a 2d A-model with target the based loop group. As an offshoot, we will be able to demonstrate an action of the affine algebra on the quantum cohomology of the moduli space of flat connections on a Riemann surface, as well as derive the Verlinde formula.
high energy physics theory
We propose a revision of the system developed by Lépine et al. (2007) for spectroscopic M subdwarf classification. Based on an analysis of subdwarf spectra and templates from Savcheva et al. (2014), we show that the CaH1 feature originally proposed by Gizis (1997) is important in selecting reliable cool subdwarf spectra. This index should be used in combination with the [TiO5, CaH2+CaH3] relation provided by Lépine et al. (2007) to avoid misclassification results. In the new system, the dwarf-subdwarf separators are first derived from a sample of more than 80,000 M dwarfs and a "labeled" subdwarf subsample, these objects being all visually identified from their optical spectra. Based on these two samples, we re-fit the initial [TiO5, CaH1] relation, and propose a new [CaOH, CaH1] relation supplementing the [TiO5, CaH1] relation to reduce the impact of uncertainty in flux calibration on classification accuracy. In addition, we recalibrate the $\zeta_{TiO/CaH}$ parameter defined in Lépine et al. (2007) to enable its successful application to LAMOST spectra. Using this new system, we select candidates from LAMOST Data Release 4 and finally identify a set of 2791 new M subdwarf stars, covering the spectral sequence from type M0 to M7. This sample contains a large number of objects located at low Galactic latitudes, especially in the Galactic anti-center direction, expanding beyond previously published halo- and thick disk-dominated samples. We also detect magnetic activity in 141 objects. We present a catalog for this M subdwarf sample, including radial velocities, spectral indices and errors, and activity flags, together with a compilation of external data (photometric and Gaia DR2 astrometric parameters). The catalog is provided on-line, and the spectra can be retrieved from the LAMOST Data Release web portal.
astrophysics
Visual information plays a critical role in the human decision-making process. While recent developments in visually-aware recommender systems have taken the product image into account, none of them has considered the aesthetic aspect. We argue that the aesthetic factor is very important in modeling and predicting users' preferences, especially for some fashion-related domains like clothing and jewelry. This work addresses the need for modeling aesthetic information in visually-aware recommender systems. Technically speaking, we make three key contributions in leveraging deep aesthetic features: (1) To describe the aesthetics of products, we introduce the aesthetic features extracted from product images by a deep aesthetic network. We incorporate these features into the recommender system to model users' preferences in the aesthetic aspect. (2) Since in clothing recommendation time is very important for users to make decisions, we design a new tensor decomposition model for implicit feedback data. The aesthetic features are then injected into the basic tensor model to capture the temporal dynamics of aesthetic preferences (e.g., seasonal patterns). (3) We also use the aesthetic features to optimize the learning strategy on implicit feedback data. We enrich the pairwise training samples by considering the similarity among items in the visual space and graph space; the key idea is that a user is likely to have a similar perception of similar items. We perform extensive experiments on several real-world datasets and demonstrate the usefulness of aesthetic features and the effectiveness of our proposed methods.
computer science
The building sector consumes the largest energy in the world, and there have been considerable research interests in energy consumption and comfort management of buildings. Inspired by recent advances in reinforcement learning (RL), this paper aims at assessing the potential of RL in building climate control problems with occupant interaction. We apply a recent RL approach, called DDPG (deep deterministic policy gradient), for the continuous building control tasks and assess its performance with simulation studies in terms of its ability to handle (a) the partial state observability due to sensor limitations; (b) complex stochastic system with high-dimensional state-spaces, which are jointly continuous and discrete; (c) uncertainties due to ambient weather conditions, occupant's behavior, and comfort feelings. Especially, the partial observability and uncertainty due to the occupant interaction significantly complicate the control problem. Through simulation studies, the policy learned by DDPG demonstrates reasonable performance and computational tractability.
computer science
Several concepts for heliospheric missions operating at heliocentric distances far beyond Earth orbit are currently investigated by the scientific community. The mission concept of the Interstellar Probe (McNutt et al. 2018), e.g., aims at reaching a distance of 1000 au away from the Sun within this century. This would allow the coming generation to obtain a global view of our heliosphere from an outside vantage point by measuring the Energetic Neutral Atoms (ENAs) originating from the various plasma regions. It would also allow for direct sampling of unperturbed interstellar medium, and for many observation opportunities beyond heliospheric science, such as visits to Kuiper Belt Objects, a comprehensive view on the interplanetary dust populations, and infrared astronomy free from the foreground emission of the Zodiacal cloud. In this study, we present a simple empirical model of ENAs from the heliosphere and derive basic requirements for ENA instrumentation onboard a spacecraft at great heliocentric distances. We consider the full energy range of heliospheric ENAs from 10 eV to 100 keV because each part of the energy spectrum has its own merits for heliospheric science. To cover the full ENA energy range, two or three different ENA instruments are needed. Thanks to parallax observations, some insights about the nature of the IBEX Ribbon and the dimensions of the heliosphere can already be gained by ENA imaging from a few au heliocentric distance. To directly reveal the global shape of the heliosphere, measurements from outside the heliosphere are, of course, the best option.
astrophysics
We propose emulation of Hawking radiation (HR) by means of acoustic excitations propagating on top of persistent current in an atomic Bose-Einstein condensate (BEC) loaded in an annular confining potential. The setting is initially created as a spatially uniform one, and then switches into a nonuniform configuration, while maintaining uniform BEC density. The eventual setting admits the realization of sonic black and white event horizons with different slopes of the local sound speed. A smooth slope near the white-hole horizon suppresses instabilities in the supersonic region. It is found that tongue-shaped patterns of the density-density correlation function, which represent the acoustic analog of HR, are strongly affected by the radius of the ring-shaped configuration and number of discrete acoustic modes admitted by it. There is a minimum radius that enables the emulation of HR. We also briefly discuss a possible similarity of properties of the matter-wave sonic black holes to the known puzzle of the stability of Planck-scale primordial black holes in quantum gravity.
condensed matter
A quantum scar - an enhancement of a quantum probability density in the vicinity of a classical periodic orbit - is a fundamental phenomenon connecting quantum and classical mechanics. Here we demonstrate that some of the eigenstates of the perturbed two-dimensional anisotropic (elliptic) harmonic oscillator are strongly scarred by the Lissajous orbits of the unperturbed classical counterpart. In particular, we show that the occurrence and geometry of these quantum Lissajous scars are connected to the anisotropy of the harmonic confinement, but unlike the classical Lissajous orbits the scars survive under a small perturbation of the potential. This Lissajous scarring is caused by the combined effect of the quantum (near) degeneracies in the unperturbed system and the localized character of the perturbation. Furthermore, we discuss experimental schemes to observe this perturbation-induced scarring.
quantum physics
An often used model for quantum theory is to associate to every physical system a C*-algebra. From a physical point of view it is unclear why operator algebras would form a good description of nature. In this paper, we find a set of physically meaningful assumptions such that any physical theory satisfying these assumptions must embed into the category of finite-dimensional C*-algebras. These assumptions were originally introduced in the setting of effectus theory, a categorical logical framework generalizing classical and quantum logic. As these assumptions have a physical interpretation, this motivates the usage of operator algebras as a model for quantum theory. In contrast to other reconstructions of quantum theory, we do not start with the framework of generalized probabilistic theories and instead use effect theories where no convex structure and no tensor product needs to be present. The lack of this structure in effectus theory has led to a different notion of pure maps. A map in an effectus is pure when it is a composition of a compression and a filter. These maps satisfy particular universal properties and respectively correspond to `forgetting' and `measuring' the validity of an effect. We define a pure effect theory (PET) to be an effect theory where the pure maps form a dagger-category and filters and compressions are adjoint. We show that any convex finite-dimensional PET must embed into the category of Euclidean Jordan algebras. Moreover, if the PET also has monoidal structure, then we show that it must embed into either the category of real or complex C*-algebras, which completes our reconstruction.
quantum physics
We present multi-epoch infrared photometry and spectroscopy obtained with warm Spitzer, Subaru and SOFIA to assess variability for the young ($\sim$20 Myr) and dusty debris systems around HD 172555 and HD 113766A. No variations (within 0.5%) were found for the former at either 3.6 or 4.5 $\mu$m, while significant non-periodic variations (peak-to-peak of $\sim$10-15% relative to the primary star) were detected for the latter. Relative to the Spitzer IRS spectra taken in 2004, multi-epoch mid-infrared spectra reveal no change in either the shape of the prominent 10 $\mu$m solid-state features or the overall flux levels (no more than 20%) for both systems, corroborating that the population of sub-$\mu$m-sized grains that produce the pronounced solid-state features is stable over a decadal timescale. We suggest that these sub-$\mu$m-sized grains were initially generated in an optically thick clump of debris of mm-sized vapor condensates resulting from a recent violent impact between large asteroidal or planetary bodies. Because of the shielding from the stellar photons provided by this clump, intense collisions led to an over-production of fine grains that would otherwise be ejected from the system by radiation pressure. As the clump is sheared by its orbital motion and becomes optically thin, a population of very fine grains could remain in stable orbits until Poynting-Robertson drag slowly spirals them into the star. We further suggest that the 3-5 $\mu$m disk variation around HD 113766A is consistent with a clump/arc of such fine grains on a modestly eccentric orbit in its terrestrial zone.
astrophysics
This article is dedicated to the presentation of a novel experimental bench designed to study the photoproduction of H2. It is composed of three main parts: a light source, a fully equipped flat torus reactor and the related analytical system. The reactor hydrodynamic behaviour has been carefully examined and it can be considered perfectly mixed. The photon flux density is accurately known thanks to reconciled quantum sensor and actinometry experiments. The incident photon direction is perpendicular to the reactor windows; in such a configuration the radiative transfer description may be properly approximated as a one-dimensional problem in Cartesian geometry. Based on accurate pressure measurement in the gas-tight photoreactor, the production rates of H2 (using CdS particles in association with sulphide and sulfite ions as hole scavengers) are easily and reliably obtained. First estimations of the apparent quantum yield have proven to depend on the mean volumetric rate of radiant light energy absorbed, hence demonstrating the need for a radiative transfer approach to understand the observed phenomena and for the proper formulation of the thermo-kinetic coupling.
physics
We theoretically study the magnetic response of a superconductor/ferromagnet/normal-metal (SFN) strip in an in-plane Fulde--Ferrell (FF) state. We show that, unlike an ordinary superconducting strip, the FF strip can be switched from a diamagnetic to a paramagnetic and then back to a diamagnetic state by {\it increasing} the perpendicular magnetic field. In the paramagnetic state, the FF strip exhibits a magnetic-field-driven second order phase transition from the FF state to the ordinary state without spatial modulation along the strip. We argue that the global paramagnetic response is connected with the peculiar dependence of the sheet superconducting current density on the supervelocity in the FF state and that it exists in the nonlinear regime.
condensed matter
Adaptive designs for clinical trials permit alterations to a study in response to accumulating data in order to make trials more flexible, ethical and efficient. These benefits are achieved while preserving the integrity and validity of the trial, through the pre-specification and proper adjustment for the possible alterations during the course of the trial. Despite much research in the statistical literature highlighting the potential advantages of adaptive designs over traditional fixed designs, the uptake of such methods in clinical research has been slow. One major reason for this is that different adaptations to trial designs, as well as their advantages and limitations, remain unfamiliar to large parts of the clinical community. The aim of this paper is to clarify where adaptive designs can be used to address specific questions of scientific interest; we introduce the main features of adaptive designs and commonly used terminology, highlighting their utility and pitfalls, and illustrate their use through case studies of adaptive trials ranging from early-phase dose escalation to confirmatory Phase III studies.
statistics
We investigate a two-body quantum system with hard-core interaction potential in a two-dimensional harmonic trap. We provide the exact analytical solution of the problem. The energy spectrum of this system as a function of the range of the interaction reveals level crossings which can be explained by examining the potential and kinetic parts of the energy, as well as the localization and oscillatory properties of the wavefunctions quantified by the Fisher information. The latter is sensitive to the wavefunction localization between nodes, promoting this property as a resource of quantum correlations in interacting particle systems.
quantum physics
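As a pointer to the quantity invoked in the abstract above: for a one-dimensional (or radial) probability density $p(x)=|\psi(x)|^2$, the Fisher information usually meant in this context is
\[
I[p] = \int \frac{\left(p'(x)\right)^2}{p(x)}\,dx = 4\int \left(\psi'(x)\right)^2 dx \quad (\text{for real, normalized } \psi),
\]
so sharply localized or strongly oscillatory wavefunctions carry large $I$, which is why it tracks the level crossings discussed above. The second equality assumes a real wavefunction; the exact convention used in the paper may differ.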
The typical risk classification procedure in insurance consists of a priori risk classification determined by observable risk characteristics, and a posteriori risk classification where the premium is adjusted to reflect the policyholder's claim history. While using the full claim history data is optimal in the a posteriori risk classification procedure, i.e. giving premium estimators with the minimal variances, some insurance sectors only use partial information from the claim history to determine the appropriate premium to charge. Classical examples include auto insurance, where premiums are determined by claim frequency data, and workers' compensation insurance, which is based on the aggregate severity. The motivation for such practice is to have a simplified and efficient a posteriori risk classification procedure which is customized to the insurance policy involved. This paper compares the relative efficiency of the two simplified a posteriori risk classifications, i.e. based on frequency versus severity, and provides the mathematical framework to assist practitioners in choosing the most appropriate practice.
statistics
We define string geometry: spaces of superstrings including the interactions, their topologies, charts, and metrics. Trajectories in asymptotic processes on a space of strings reproduce the right moduli space of the super Riemann surfaces in a target manifold. Based on the string geometry, we define Einstein-Hilbert action coupled with gauge fields, and formulate superstring theory non-perturbatively by summing over metrics and the gauge fields on the spaces of strings. This theory does not depend on backgrounds. The theory has a supersymmetry as a part of the diffeomorphisms symmetry on the superstring manifolds. We derive the all-order perturbative scattering amplitudes that possess the super moduli in type IIA, type IIB and SO(32) type I superstring theories from the single theory, by considering fluctuations around fixed backgrounds representing type IIA, type IIB and SO(32) type I perturbative vacua, respectively. The theory predicts that we can see a string if we microscopically observe not only a particle but also a point in the space-time. That is, this theory unifies particles and the space-time.
high energy physics theory
When fine-tuning pretrained models for classification, researchers either use a generic model head or a task-specific prompt for prediction. Proponents of prompting have argued that prompts provide a method for injecting task-specific guidance, which is beneficial in low-data regimes. We aim to quantify this benefit through rigorous testing of prompts in a fair setting: comparing prompted and head-based fine-tuning in equal conditions across many tasks and data sizes. By controlling for many sources of advantage, we find that prompting does indeed provide a benefit, and that this benefit can be quantified per task. Results show that prompting is often worth 100s of data points on average across classification tasks.
computer science
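To make the "prompted fine-tuning" side of the comparison in the abstract above concrete, here is a minimal sketch of prompt-based classification scoring with a masked language model, written against the Hugging Face transformers API; the template, verbalizer words and model name are illustrative choices, not the ones used in the paper.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")

# hypothetical verbalizer: one token per class
label_words = [" great", " terrible"]
label_ids = [tok(w, add_special_tokens=False).input_ids[0] for w in label_words]

def prompt_logits(text):
    # score each class by the MLM's logit for its verbalizer token at the mask position
    prompt = text + " It was " + tok.mask_token + "."
    enc = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**enc).logits
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()
    return logits[0, mask_pos, label_ids]       # one score per class

Fine-tuning these logits with a cross-entropy loss is the "prompted" setup; the "head" baseline instead trains a fresh linear classifier on the encoder output, which is the comparison the abstract quantifies in terms of equivalent data points.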
We study the problem of non-magnetic impurities adsorbed on bilayer graphene in the diluted regime. We analyze the impurity spectral densities for various concentrations and gate fields. We also analyze the effect of the adsorbate on the local density of states (LDOS) of the different C atoms in the structure and present some evidence of strong localization for the electronic states with energies close to the Dirac point.
condensed matter
With the increase in the amount of data and the expansion of model scale, distributed parallel training becomes an important and successful technique to address the optimization challenges. Nevertheless, although distributed stochastic gradient descent (SGD) algorithms can achieve a linear iteration speedup, they are limited significantly in practice by the communication cost, making it difficult to achieve a linear time speedup. In this paper, we propose a computation and communication decoupled stochastic gradient descent (CoCoD-SGD) algorithm to run computation and communication in parallel to reduce the communication cost. We prove that CoCoD-SGD has a linear iteration speedup with respect to the total computation capability of the hardware resources. In addition, it has a lower communication complexity and a better time speedup compared with traditional distributed SGD algorithms. Experiments on deep neural network training demonstrate the significant improvements of CoCoD-SGD: when training ResNet18 and VGG16 with 16 Geforce GTX 1080Ti GPUs, CoCoD-SGD is up to 2-3$\times$ faster than traditional synchronous SGD.
computer science
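A minimal sketch of the decoupling idea described in the abstract above, written against torch.distributed; this is one plausible reading of the scheme, not the authors' reference implementation. The model average for step t is communicated with a non-blocking all-reduce while each worker computes its local update from the same pre-averaging parameters, and the two are combined once the communication finishes.

import torch
import torch.distributed as dist

def cocod_sgd_step(params, compute_local_grads, lr):
    # compute_local_grads is a hypothetical callable returning one gradient per parameter
    world = dist.get_world_size()
    # snapshot current parameters and start averaging them in the background
    snaps = [p.detach().clone() for p in params]
    handles = [dist.all_reduce(s, async_op=True) for s in snaps]   # communication
    grads = compute_local_grads()                                  # computation overlaps
    for p, s, g, h in zip(params, snaps, grads, handles):
        h.wait()                                   # s now holds the sum over workers
        p.data.copy_(s / world - lr * g)           # averaged model + local SGD update

Compared with synchronous SGD, the local gradient here is taken at the pre-averaging parameters, which is the price paid for hiding the communication latency; the abstract states that the linear iteration speedup is nevertheless retained.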
The evaluation of obstructions (stenosis) in coronary arteries is currently done by a physician's visual assessment of coronary angiography video sequences. It is laborious, and can be susceptible to interobserver variation. Prior studies have attempted to automate this process, but few have demonstrated an integrated suite of algorithms for the end-to-end analysis of angiograms. We report an automated analysis pipeline based on deep learning to rapidly and objectively assess coronary angiograms, highlight coronary vessels of interest, and quantify potential stenosis. We propose a 3-stage automated analysis method consisting of key frame extraction, vessel segmentation, and stenosis measurement. We combined powerful deep learning approaches such as ResNet and U-Net with traditional image processing and geometrical analysis. We trained and tested our algorithms on the Left Anterior Oblique (LAO) view of the right coronary artery (RCA) using anonymized angiograms obtained from a tertiary cardiac institution, then tested the generalizability of our technique to the Right Anterior Oblique (RAO) view. We demonstrated an overall improvement on previous work, with key frame extraction top-5 precision of 98.4%, vessel segmentation F1-Score of 0.891 and stenosis measurement 20.7% Type I Error rate.
electrical engineering and systems science
Violation of parity symmetry gives rise to various physical phenomena such as nonlinear transport and cross-correlated responses. In particular, the nonlinear conductivity has been attracting a lot of attention in spin-orbit coupled semiconductors, superconductors, topological materials, and so on. In this paper we present a theoretical study of the nonlinear conductivity in odd-parity magnetic multipole ordered systems, whose $\mathcal{PT}$-symmetry makes them essentially distinct from the previously studied acentric systems. Combining microscopic formulation and symmetry analysis, we classify the nonlinear responses in the $\mathcal{PT}$-symmetric systems as well as in $\mathcal{T}$-symmetric (non-magnetic) systems, and uncover a nonlinear conductivity unique to the odd-parity magnetic multipole systems. A giant nonlinear Hall effect, nematicity-assisted dichroism and a magnetically-induced Berry curvature dipole effect are proposed and demonstrated in a model for Mn-based magnets.
condensed matter
Extracting the speech of a target speaker from mixed audios, based on a reference speech from the target speaker, is a challenging yet powerful technology in speech processing. Recent studies of speaker-independent speech separation, such as TasNet, have shown promising results by applying deep neural networks over the time-domain waveform. Such a separation network does not directly generate reliable and accurate output when target speakers are specified, because of the necessary prior on the number of speakers and the lack of robustness when dealing with audios with absent speakers. In this paper, we break these limitations by introducing a new speaker-aware speech masking method, called X-TaSNet. Our proposal adopts new strategies, including a distortion-based loss and a corresponding alternating training scheme, to better address the robustness issue. X-TaSNet significantly enhances the extracted speech quality, doubling SDRi and SI-SNRi of the output speech audio over the state-of-the-art voice filtering approach. X-TaSNet also improves the reliability of the results by improving the accuracy of speaker identity in the output audio to 95.4%, such that it returns silent audios in most cases when the target speaker is absent. These results demonstrate that X-TaSNet moves one solid step towards more practical applications of speaker extraction technology.
electrical engineering and systems science
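The SI-SNRi figure quoted in the abstract above is the improvement in scale-invariant signal-to-noise ratio over the unprocessed mixture; for reference, the standard metric can be computed as in this short numpy sketch (variable names are illustrative).

import numpy as np

def si_snr(est, ref, eps=1e-8):
    # scale-invariant SNR in dB between an estimated and a reference waveform
    est = est - est.mean()
    ref = ref - ref.mean()
    s_target = np.dot(est, ref) / (np.dot(ref, ref) + eps) * ref
    e_noise = est - s_target
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps))

# SI-SNRi for a given mixture: si_snr(estimate, target) - si_snr(mixture, target)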
Disordered interacting spin chains that undergo a many-body localization transition are characterized by two limiting behaviors where the dynamics are chaotic and integrable. However, the transition region between them is not fully understood yet. We propose here a possible finite-size precursor of a critical point that shows a typical finite-size scaling and distinguishes between two different dynamical phases. The kurtosis excess of the diagonal fluctuations of the full one-dimensional momentum distribution from its microcanonical average is maximum at this singular point in the paradigmatic disordered $J_1$-$J_2$ model. For system sizes accessible to exact diagonalization, both the position and the size of this maximum scale linearly with the system size. Furthermore, we show that this singular point is found at the same disorder strength at which the Thouless and the Heisenberg energies coincide. Below this point, the spectral statistics follow the universal random matrix behavior up to the Thouless energy. Above it, no traces of chaotic behavior remain, and the spectral statistics are well described by a generalized semi-Poissonian model, eventually leading to the integrable Poissonian behavior. We provide, thus, an integrated scenario for the many-body localization transition, conjecturing that the critical point in the thermodynamic limit, if it exists, should be given by this value of disorder strength.
condensed matter
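For concreteness, the kurtosis excess used as the finite-size indicator in the abstract above is the standard fourth standardized moment minus the Gaussian value of three,
\[
K_{\rm exc}[x] = \frac{\langle (x-\langle x\rangle)^4\rangle}{\langle (x-\langle x\rangle)^2\rangle^{2}} - 3,
\]
here evaluated over the diagonal fluctuations of the momentum distribution about its microcanonical average. Since $K_{\rm exc}=0$ for Gaussian fluctuations, a sharp maximum signals strongly non-Gaussian behaviour at the proposed critical disorder strength.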
We introduce a novel methodology for establishing the presence of Standing Accretion Shock Instabilities (SASI) in the dynamics of a core collapse supernova from the observed neutrino event rate at water- or ice-based neutrino detectors. The methodology uses a likelihood ratio in the frequency domain as the test statistic; it is also employed to assess the potential to estimate the frequency and the amplitude of the SASI modulations of the neutrino signal. The parameter estimation errors are consistent with the minimum possible errors as evaluated from the inverse of the Fisher information matrix, and close to the theoretical minimum for the SASI amplitude. Using results from a core-collapse simulation of a 15 solar-mass star by Kuroda et al. (2017) as a test bed for the method, we find that SASI can be identified with high confidence for a distance to the supernova of up to $\sim 6$ kpc for IceCube and up to $\sim 3$ kpc for a 0.4 Mt mass water Cherenkov detector. This methodology will aid the investigation of a future galactic supernova.
astrophysics
The linearly polarized quasi-real photons from the highly Lorentz-contracted Coulomb fields of relativistic heavy ions can fluctuate to quark-antiquark pairs, scatter off a target nucleus and emerge as vector mesons. In the process, the two colliding nuclei can switch roles to act as photon emitter or target, forming a double-slit interference pattern. The product from photoproduction inherits the photon polarization states, leading to the asymmetries of the decay angular distributions. In this letter, we study the interference effect in the polarization dimension from the asymmetries of the decay angular distributions for photoproduction in heavy-ion collisions and find a periodic oscillation with the transverse momentum of the vector meson, which could reasonably explain the transverse momentum dependence of the 2nd-order modulation in azimuth for the $\rho^{0}$ decay observed by the STAR collaboration.
high energy physics phenomenology
Comparative analysis of event sequence data is essential in many application domains, such as website design and medical care. However, analysts often face two challenges: they may not always know which sets of event sequences in the data are useful to compare, and the comparison needs to be achieved at different granularities, due to the volume and complexity of the data. This paper presents ICE, an interactive visualization that allows analysts to explore an event sequence dataset, and identify promising sets of event sequences to compare at both the pattern and sequence levels. More specifically, ICE incorporates a multi-level matrix-based visualization for browsing the entire dataset based on the prefixes and suffixes of sequences. To support comparison at multiple levels, ICE employs the unit visualization technique, and we further explore the design space of unit visualizations for event sequence comparison tasks. Finally, we demonstrate the effectiveness of ICE with three real-world datasets from different domains.
computer science
In this paper, we have investigated the effect of the magnetic field, both numerically and analytically, on the holographic insulator/superconductor phase transition in higher dimensional Gauss-Bonnet gravity. First we have analysed the critical phenomena with the magnetic field using two different numerical methods, namely, the quasinormal modes method and the shooting method. Then we have carried out our calculation analytically using the Sturm-Liouville eigenvalue method. The methods show that marginally stable modes emerge at critical values of the chemical potential and the magnetic field satisfying the relation $\Lambda^2\equiv\mu^2-B$. We observe that the value of the chemical potential and hence the value of $\Lambda$ increases with higher values of the Gauss-Bonnet parameter and dimension of spacetime for a fixed mass of the scalar field. This clearly indicates that the phase transition from insulator to superconductor becomes difficult in the presence of the magnetic field for higher values of the Gauss-Bonnet parameter and dimension of spacetime. Our analytic results are in very good agreement with our numerical results.
high energy physics theory
We observed the post-common-envelope eclipsing binary with a white dwarf component, QS Vir, using the 1.88 m telescope of Kottamia Observatory in Egypt. The new observations were analyzed together with all multicolor light curves available online (sampling a period of 25 yr), using a full-feature binary system modeling software based on Roche geometry. This is the first time that complete photometric modeling has been performed on most of these data. QS Vir is a detached system, with the red dwarf component underfilling its Roche lobe by a small margin. All light curves feature out-of-eclipse variability that is associated with ellipsoidal variation, mutual irradiation and irregularities in the surface brightness of the tidally distorted and magnetically active red dwarf. We tested models with one, two, and three dark spots and found that one spot is sufficient to account for the light curve asymmetry in all data sets, although this does not rule out the presence of multiple spots. We also found that a single spotted model cannot fit light curves observed simultaneously in different filters. Instead, each filter requires a different spot configuration. To thoroughly explore the parameter space of spot locations, we devised a grid-search procedure and used it to find consistent solutions. Based on this, we conclude that the dark spot responsible for the light curve distortions has been stable for the past 15 yr, after a major migration that happened between 1993 and 2002, possibly due to a flip-flop event.
astrophysics
As part of the mm-Wave Interferometric Survey of Dark Object Masses (WISDOM), we present a measurement of the mass of the supermassive black hole (SMBH) in the nearby early-type galaxy NGC 0383 (radio source 3C 031). This measurement is based on Atacama Large Millimeter/sub-millimeter Array (ALMA) cycle 4 and 5 observations of the 12CO(2-1) emission line with a spatial resolution of 58x32 pc^2 (0.18 arcsec x 0.1 arcsec). This resolution, combined with a channel width of 10 km/s, allows us to well resolve the radius of the black hole sphere of influence (measured as R_SOI = 316 pc = 0.98 arcsec), where we detect a clear Keplerian increase of the rotation velocities. NGC 0383 has a kinematically relaxed, smooth nuclear molecular gas disc with weak ring/spiral features. We forward-model the ALMA data cube with the Kinematic Molecular Simulation (KinMS) tool and a Bayesian Markov Chain Monte Carlo method to measure a SMBH mass of (4.2+/-0.7)x10^9 Msun, an F160W-band stellar mass-to-light ratio that varies from 2.8+/-0.6 Msun/Lsun in the centre to 2.4+/-0.3 Msun/Lsun at the outer edge of the disc, and a molecular gas velocity dispersion of 8.3+/-2.1 km/s (all 3-sigma uncertainties). We also detect unresolved continuum emission across the full bandwidth, consistent with synchrotron emission from an active galactic nucleus. This work demonstrates that low-J CO emission can resolve gas very close to the SMBH (~140,000 Schwarzschild radii) and hence that the molecular gas method is highly complementary to megamaser observations, as it can probe the same emitting material.
astrophysics
In this work, we present a Lyapunov framework for establishing stability with respect to a compact set for a nested interconnection of nonlinear dynamical systems ordered from slow to fast according to their convergence rates, where each subsystem is influenced only by the slower dynamics and the next-fastest one. The proposed approach explicitly considers more than two time scales, does not require modeling multiple time scales via scalar time constants, and provides analytic bounds that make ad-hoc time-scale separation arguments rigorous. Motivated by the technical results, we develop a novel control strategy for a grid-forming power converter that consists of an inner cascaded two-degree-of-freedom controller and dispatchable virtual oscillator control as a reference model. The resulting closed-loop converter-based AC power system is in the form of a nested system with multiple time scales. We apply our technical results to obtain explicit bounds on the controller set-points, branch powers, and control gains that guarantee almost global asymptotic stability of the multi-converter AC power system with respect to a pre-specified solution of the AC power-flow equations. Finally, we validate the performance of the proposed control structure in a case study using a high-fidelity simulation with detailed hardware-validated converter models.
mathematics
Multivariate functional data are becoming ubiquitous with advances in modern technology and are substantially more complex than univariate functional data. We propose and study a novel model for multivariate functional data where the component processes are subject to mutual time warping. That is, the component processes exhibit a similar shape but are subject to systematic phase variation across their time domains. To address this previously unconsidered mode of warping, we propose new registration methodology which is based on a shift-warping model. Our method differs from all existing registration methods for functional data in a fundamental way. Namely, instead of focusing on the traditional approach to warping, where one aims to recover individual-specific registration, we focus on shift registration across the components of a multivariate functional data vector on a population-wide level. Our proposed estimates for these shifts are identifiable, enjoy parametric rates of convergence and often have intuitive physical interpretations, all in contrast to traditional curve-specific registration approaches. We demonstrate the implementation and interpretation of the proposed method by applying our methodology to the Z\"urich Longitudinal Growth data and study its finite sample properties in simulations.
statistics
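A simplified illustration of the shift-warping idea in the abstract above, not the authors' estimator: one population-level time shift per component is estimated by aligning each component's mean curve to a reference component via circular cross-correlation. Grid size, number of subjects, shifts and noise level are toy assumptions.

```python
import numpy as np

# Simplified illustration (not the authors' estimator): estimate one
# population-level time shift per component by aligning each component's mean
# curve to the first component's mean curve via circular cross-correlation.
rng = np.random.default_rng(0)
T, n_subj, n_comp = 200, 50, 3
t = np.linspace(0, 1, T, endpoint=False)
true_shift = [0, 15, -10]                       # shifts in grid points (assumed)

# Common shape, shifted componentwise, plus subject-level noise.
shape = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
data = np.stack([[np.roll(shape, s) + 0.2 * rng.normal(size=T) for s in true_shift]
                 for _ in range(n_subj)])       # shape (n_subj, n_comp, T)

means = data.mean(axis=0)                       # componentwise mean curves
ref = np.fft.fft(means[0])
estimated = []
for k in range(n_comp):
    xcorr = np.real(np.fft.ifft(np.fft.fft(means[k]) * np.conj(ref)))
    lag = int(np.argmax(xcorr))
    estimated.append(lag if lag <= T // 2 else lag - T)
print("estimated shifts relative to component 0:", estimated)
```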
Generalizations of the AGT correspondence between 4D $\mathcal{N}=2$ $SU(2)$ supersymmetric gauge theory on ${\mathbb {C}}^2$ with $\Omega$-deformation and 2D Liouville conformal field theory include a correspondence between 4D $\mathcal{N}=2$ $SU(N)$ supersymmetric gauge theories, $N = 2, 3, \ldots$, on ${\mathbb {C}}^2/{\mathbb {Z}}_n$, $n = 2, 3, \ldots$, with $\Omega$-deformation and 2D conformal field theories with $\mathcal{W}^{\, para}_{N, n}$ ($n$-th parafermion $\mathcal{W}_N$) symmetry and $\widehat{\mathfrak{sl}}(n)_N$ symmetry. In this work, we trivialize the factor with $\mathcal{W}^{\, para}_{N, n}$ symmetry in the 4D $SU(N)$ instanton partition functions on ${\mathbb {C}}^2/{\mathbb {Z}}_n$ (by using specific choices of parameters and imposing specific conditions on the $N$-tuples of Young diagrams that label the states), and extract the 2D $\widehat{\mathfrak{sl}}(n)_N$ WZW conformal blocks, $n = 2, 3, \ldots$, $N = 1, 2, \ldots\, .$
high energy physics theory
Propagation properties of electromagnetic (EM) waves in the dense medium of a neutron star are studied. It is pointed out that EM waves develop a longitudinal component when they propagate in such a dense medium. The renormalization scheme of quantum electrodynamics (QED) is used to investigate the behavior of EM waves in the transverse and longitudinal directions. The medium's response to EM waves indicates that the electromagnetic properties of the dense material are modified as a result of the interaction of EM waves with the matter. Using QED, expressions for electromagnetic properties such as the electric permittivity, magnetic permeability and refractive index of dense matter are obtained. The results are applied to a particular star for illustration.
astrophysics
Domains are homogeneous areas of discrete symmetry, created in nonequilibrium phase transitions. They are separated by domain walls, topological objects which prevent them from fusing together. Domains may reconfigure by thermally driven microscopic processes and, in quantum systems, by macroscopic quantum tunnelling. The underlying microscopic physics that defines the system's energy landscape for tunnelling is of interest in many different contexts, from cosmology and other quantum domain systems to nuclear physics, matter waves, magnetism, and biology. Quantum materials offer a unique opportunity to investigate the dynamics of microscopic correlations leading to emergent behaviour such as quantum domain dynamics. Here, as a direct realization of Feynman's idea of using a quantum computer to simulate a quantum system, we report an investigation of quantum electron reconfiguration dynamics and domain melting in two matching embodiments: a prototypical two-dimensionally electronically ordered solid-state quantum material and a simulation on a latest-generation quantum simulator. We use scanning tunnelling microscopy to measure the time evolution of electronic domain reconfiguration dynamics and compare this with the time evolution of domains in an ensemble of entangled correlated electrons in simulated quantum domain melting. The domain reconfiguration is found to proceed by tunnelling in an emergent, self-configuring energy landscape, with characteristic step-like time evolution and temperature dependences observed macroscopically. The remarkable correspondence in the dynamics of a quantum material and a quantum simulation opens the way to an understanding of emergent behaviour in diverse interacting many-body quantum systems at the microscopic level.
quantum physics
Minimization of energy functionals is based on a discretization by the finite element method and optimization by the trust-region method. A key tool is a local evaluation of the approximated gradients together with sparsity of the resulting Hessian matrix. We describe a vectorized MATLAB implementation of the p-Laplace problem in one and two space dimensions; however, it is easily applicable to other energy formulations.
mathematics
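The abstract above refers to a MATLAB implementation; the following is a minimal one-dimensional Python sketch of the same idea under assumed data (constant right-hand side, homogeneous Dirichlet conditions): discretize the p-Laplace energy with P1 finite elements and minimize it with a trust-region method from SciPy.

```python
import numpy as np
from scipy.optimize import minimize, BFGS

# Minimal 1D sketch (assumed setup, not the paper's MATLAB code): minimise the
# p-Laplace energy J(u) = int (1/p)|u'|^p dx - int f*u dx on (0,1) with
# homogeneous Dirichlet conditions, P1 finite elements and a trust-region method.
p = 3.0
n = 200                        # number of elements (assumed)
h = 1.0 / n
f = 1.0                        # constant right-hand side (assumed)

def energy(u_int):
    u = np.concatenate(([0.0], u_int, [0.0]))    # enforce u(0) = u(1) = 0
    du = np.diff(u) / h                          # elementwise derivative
    return (h / p) * np.sum(np.abs(du) ** p) - f * h * np.sum(u_int)

def grad(u_int):
    u = np.concatenate(([0.0], u_int, [0.0]))
    du = np.diff(u) / h
    flux = np.abs(du) ** (p - 2) * du            # |u'|^(p-2) u' on each element
    return (flux[:-1] - flux[1:]) - f * h * np.ones(n - 1)

res = minimize(energy, np.zeros(n - 1), jac=grad,
               method="trust-constr", hess=BFGS())
print("energy at the computed minimiser:", res.fun)
```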
Following the Hamiltonian structure of bi-gravity and multi-gravity models in the full phase space, we have constructed the generating functional of diffeomorphism gauge symmetry. As is expected, this generator is constructed from the first class constraints of the system. We show that this gauge generator works well in giving the gauge transformations of the canonical variables.
high energy physics theory
In this work, we consider adaptive mesh refinement for a monolithic phase-field description for fractures in brittle materials. Our approach is based on an a posteriori error estimator for the phase-field variational inequality realizing the fracture irreversibility constraint. The key goal is the development of a reliable and efficient residual-type error estimator for the phase-field fracture model in each time-step. Based on this error estimator, error indicators for local mesh adaptivity are extracted. The proposed estimator is based on a technique known for singularly perturbed equations in combination with estimators for variational inequalities. These theoretical developments are used to formulate an adaptive mesh refinement algorithm. For the numerical solution, the fracture irreversibility is imposed using a Lagrange multiplier. The resulting saddle-point system has three unknowns: displacements, phase-field, and a Lagrange multiplier for the crack irreversibility. Several numerical experiments demonstrate our theoretical findings with the newly developed estimators and the corresponding refinement strategy.
mathematics
Entanglement is of paramount importance in quantum information theory. Its supremacy over classical correlations has been demonstrated in numerous information theoretic protocols. Here we study the possible adequacy of quantum entanglement in Bayesian game theory, particularly in the social welfare solution (SWS), a strategy which the players follow to maximize the sum of their payoffs. Given a multi-partite quantum state as an advice, players can come up with several correlated strategies by performing local measurements on their parts of the quantum state. A quantum strategy is called a quantum-SWS if it is advantageous over a classical equilibrium (CE) strategy in the sense that no player has to sacrifice their CE payoff, some players gain, and at the same time it maximizes the sum of all players' payoffs over all possible quantum advantageous strategies. A quantum state yielding such a quantum-SWS is called a quantum social welfare advice (SWA). We show that any two-qubit pure entangled state, even if it is arbitrarily close to a product state, can serve as a quantum-SWA in some Bayesian game. Our result thus shows that every two-qubit pure entangled state is the best resource for some operational task.
quantum physics
We present a first calculation of the heavy flavor contribution to the longitudinally polarized DIS structure function $g_1$, differential in the transverse momentum or the rapidity of the observed heavy antiquark $\overline{Q}$. All results are obtained at next-to-leading order accuracy with a newly developed parton-level Monte Carlo generator that also allows one to study observables associated with the heavy quark pair such as its invariant mass distribution or its correlation in azimuthal angle. First phenomenological studies are presented in a kinematic regime relevant for a future Electron-Ion Collider with a particular emphasis on the sensitivity to the helicity gluon distribution. Finally, we also provide first NLO results for the full neutral-current sector of polarized DIS, i.e., including contributions from Z-boson exchange.
high energy physics phenomenology
This research employs the Bayesian network modeling approach, and the Markov chain Monte Carlo technique, to learn about the role of lies and violence in the teachings of major religions, using a unique dataset extracted from long-standing Vietnamese folktales. The results indicate that, although lying and violent acts augur negative consequences for those who commit them, their associations with core religious values diverge in the final outcome for the folktale characters. Lying that serves a religious mission of either Confucianism or Taoism (but not Buddhism) brings a positive outcome to a character ($\beta$T_and_Lie_O = 2.23; $\beta$C_and_Lie_O = 1.47; $\beta$T_and_Lie_O = 2.23). A violent act committed to serve Buddhist missions results in a happy ending for the committer ($\beta$B_and_Viol_O = 2.55). What is highlighted here is a glaring double standard in the interpretation and practice of the three teachings: the very virtuous outcomes being preached, whether that be compassion and meditation in Buddhism, societal order in Confucianism, or natural harmony in Taoism, appear to accommodate two universal vices: violence in Buddhism and lying in the latter two. These findings contribute to a host of studies aimed at making sense of contradictory human behaviors, adding the role of religious teachings, alongside cognition, in belief maintenance and motivated reasoning in discounting counterarguments.
physics
Long-lasting activity complexes (ACs), characterised as a series of closely located, continuously emerging solar active regions (ARs), are observed to generate prominent poleward surges. The surges lead to significant variations of the polar field, which are important for the modulation of solar cycles. We aim to study a prominent poleward surge during solar cycle 24 on the southern hemisphere, and to analyse its originating ACs and their effect on the polar field evolution. We automatically identify and characterize ARs based on synoptic magnetograms from the Solar Dynamics Observatory. We assimilate these ARs, with realistic magnetic configuration, into a surface flux transport model and simulate the creation and migration of the surge. Our simulations reproduce the characteristics of the surge well and show that the prominent surge is mainly caused by the ARs belonging to two ACs during Carrington Rotations 2145-2159 (December 2013-January 2015). The surge has a strong influence on the polar field evolution of the southern hemisphere during the latter half of cycle 24. Without the roughly year-long flux emergence in the form of ACs, the polar field around the cycle minimum would have remained at a low level, or even reversed to the polarity at the cycle 23 minimum. Our study also shows that the long-lived unipolar regions due to the decay of the earlier emerging ARs cause an intrinsic difficulty in automatically identifying and precisely quantifying later emerging ARs in ACs.
astrophysics
We present a new approximation algorithm for the treewidth problem which constructs a corresponding tree decomposition as well. Our algorithm is a faster variation of Reed's classical algorithm. For the benefit of the reader, and to be able to compare these two algorithms, we start with a detailed time analysis for Reed's algorithm. We fill in many details that have been omitted in Reed's paper. Computing tree decompositions parameterized by the treewidth $k$ is fixed parameter tractable (FPT), meaning that there are algorithms running in time $O(f(k) g(n))$ where $f$ is a computable function, $g$ is a polynomial function, and $n$ is the number of vertices. An analysis of Reed's algorithm shows $f(k) = 2^{O(k \log k)}$ and $g(n) = n \log n$ for a 5-approximation. Reed simply claims time $O(n \log n)$ for bounded $k$ for his constant factor approximation algorithm, but the bound of $2^{\Omega(k \log k)} n \log n$ is well known. From a practical point of view, we notice that the time of Reed's algorithm also contains a term of $O(k^2 2^{24k} n \log n)$, which for small $k$ is much worse than the asymptotically leading term of $2^{O(k \log k)} n \log n$. We analyze $f(k)$ more precisely, because the purpose of this paper is to improve the running times for all reasonably small values of $k$. Our algorithm runs in $\mathcal{O}(f(k)n\log{n})$ too, but with a much smaller dependence on $k$. In our case, $f(k) = 2^{\mathcal{O}(k)}$. This algorithm is simple and fast, especially for small values of $k$. We should mention that Bodlaender et al.\ [2016] have an asymptotically faster algorithm running in time $2^{\mathcal{O}(k)} n$. It relies on a very sophisticated data structure and does not claim to be useful for small values of $k$.
computer science
Time-resolved photoemission with ultrafast pump and probe pulses is an emerging technique with wide application potential. Real-time recording of non-equilibrium electronic processes, transient states in chemical reactions or the interplay of electronic and structural dynamics offers fascinating opportunities for future research. Combining valence-band and core-level spectroscopy with photoelectron diffraction for electronic, chemical and structural analysis requires few 10 fs soft X-ray pulses with some 10 meV spectral resolution, which are currently available at high repetition rate free-electron lasers. The PG2 beamline at FLASH (DESY, Hamburg) provides a high pulse rate of 5000 pulses/s, 60 fs pulse duration and 40 meV bandwidth in an energy range of 25-830 eV with a photon beam size down to 50 microns in diameter. We have constructed and optimized a versatile setup commissioned at FLASH/PG2 that combines FEL capabilities together with a multidimensional recording scheme for photoemission studies. We use a full-field imaging momentum microscope with time-of-flight energy recording as the detector for mapping of 3D band structures in ($k_x$, $k_y$, $E$) parameter space with unprecedented efficiency. Our instrument can image full surface Brillouin zones with up to 7 {\AA} $^{-1}$ diameter in a binding-energy range of several eV, resolving about $2.5\times10^5$ data voxels. As an example, we present results for the ultrafast excited state dynamics in the model van der Waals semiconductor WSe$_2$.
condensed matter
We combine K-Nearest Neighbors (KNN) with a genetic algorithm (GA) for photometric redshift estimation of quasars, in an approach dubbed GeneticKNN, which is a weighted KNN method supported by GA. This approach has two improvements compared to KNN: one is feature weighting by GA; the other is that the predicted redshift is not the average redshift of the K neighbors but a weighted combination of the median and mean redshifts of the K neighbors, i.e. $p\times z_{median} + (1-p)\times z_{mean}$. Based on the SDSS and SDSS-WISE quasar samples, we explore the performance of GeneticKNN for photometric redshift estimation, comparing it with six traditional machine learning methods, i.e. the least absolute shrinkage and selection operator (LASSO), support vector regression (SVR), multilayer perceptrons (MLP), XGBoost, KNN and random forest. KNN and random forest show superior performance. Considering the easy implementation of KNN, we improve KNN into GeneticKNN and apply it to photometric redshift estimation of quasars. The performance of GeneticKNN is better than that of LASSO, SVR, MLP, XGBoost, KNN and random forest in all cases. Moreover, for each method, the accuracy is better with the additional WISE magnitudes.
astrophysics
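A minimal sketch of the weighted-KNN prediction step quoted above, $p\times z_{median} + (1-p)\times z_{mean}$, on fake data. The feature weights, the mixing parameter p and the number of neighbours K are placeholders for the values a genetic algorithm would optimise.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Sketch of the weighted-KNN prediction step on fake data; the feature weights
# w, the mixing parameter p and the number of neighbours K are placeholders for
# the values a genetic algorithm would optimise.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(5000, 5))            # e.g. five photometric colours
z_train = rng.uniform(0.0, 4.0, size=5000)      # spectroscopic redshifts (fake)
X_test = rng.normal(size=(10, 5))

w = np.array([1.3, 0.8, 1.1, 0.6, 0.9])         # GA-optimised feature weights (assumed)
p, K = 0.6, 17                                  # GA-optimised mixing weight and K (assumed)

nn = NearestNeighbors(n_neighbors=K).fit(X_train * w)
_, idx = nn.kneighbors(X_test * w)
z_neigh = z_train[idx]                          # shape (n_test, K)
z_photo = p * np.median(z_neigh, axis=1) + (1 - p) * z_neigh.mean(axis=1)
print(z_photo)
```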
Among transition metal dichalcogenides (TMdCs) as alternatives for Pt-based catalysts, metallic-TMdC catalysts have highly reactive basal planes but are unstable. Meanwhile, chemically stable semiconducting TMdCs show limited catalytic activity due to their inactive basal planes. Here, we propose metallic vanadium sulfide (VSn) nanodispersed in a semiconducting MoS2 film (V-MoS2) as an efficient catalyst. During synthesis, vanadium atoms are substituted into hexagonal monolayer MoS2 to form randomly distributed VSn units. The V-MoS2 film on a Cu electrode exhibits Pt-scalable catalytic performance: a current density of 1000 mA cm-2 at 0.6 V, an overpotential of -0.06 V at a current density of 10 mA cm-2 and an exchange current density of 0.65 mA cm-2 at 0 V, with excellent cycle stability for the hydrogen evolution reaction (HER). The high intrinsic HER performance of V-MoS2 is explained by the efficient electron transfer from the Cu electrode to chalcogen vacancies near vanadium sites with optimal Gibbs free energy (-0.02 eV). This study adds insight into ways to engineer TMdCs at the atomic level to boost intrinsic catalytic activity for hydrogen evolution.
condensed matter
The next few years will be exciting as prototype universal quantum processors emerge, enabling implementation of a wider variety of algorithms. Of particular interest are quantum heuristics, which require experimentation on quantum hardware for their evaluation, and which have the potential to significantly expand the breadth of quantum computing applications. A leading candidate is Farhi et al.'s Quantum Approximate Optimization Algorithm, which alternates between applying a cost-function-based Hamiltonian and a mixing Hamiltonian. Here, we extend this framework to allow alternation between more general families of operators. The essence of this extension, the Quantum Alternating Operator Ansatz, is the consideration of general parametrized families of unitaries rather than only those corresponding to the time-evolution under a fixed local Hamiltonian for a time specified by the parameter. This ansatz supports the representation of a larger, and potentially more useful, set of states than the original formulation, with potential long-term impact on a broad array of application areas. For cases that call for mixing only within a desired subspace, refocusing on unitaries rather than Hamiltonians enables more efficiently implementable mixers than was possible in the original framework. Such mixers are particularly useful for optimization problems with hard constraints that must always be satisfied, defining a feasible subspace, and soft constraints whose violation we wish to minimize. More efficient implementation enables earlier experimental exploration of an alternating operator approach to a wide variety of approximate optimization, exact optimization, and sampling problems. Here, we introduce the Quantum Alternating Operator Ansatz, lay out design criteria for mixing operators, detail mappings for eight problems, and provide brief descriptions of mappings for diverse problems.
quantum physics
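A tiny statevector sketch of the alternating-operator idea summarized above, for MaxCut on a triangle graph. The transverse-field mixer used here is the original QAOA special case rather than one of the more general mixers the ansatz allows; the two layers and their angles are arbitrary choices.

```python
import numpy as np

# Tiny statevector sketch of an alternating-operator circuit for MaxCut on a
# triangle graph. The transverse-field mixer below is the original QAOA special
# case of the more general mixers discussed above; the angles are arbitrary.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]
dim = 2 ** n

# Diagonal of the cost Hamiltonian: number of cut edges for each bitstring.
bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
cost = np.array([sum(bits[s, i] != bits[s, j] for i, j in edges) for s in range(dim)])

X = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)

def mixer(beta):
    """Tensor product over qubits of exp(-i*beta*X_j)."""
    rot = np.cos(beta) * I2 - 1j * np.sin(beta) * X
    U = np.array([[1.0 + 0.0j]])
    for _ in range(n):
        U = np.kron(U, rot)
    return U

state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)     # |+>^n initial state
for gamma, beta in [(0.8, 0.4), (0.6, 0.3)]:              # two alternating layers
    state = np.exp(-1j * gamma * cost) * state            # phase-separation operator
    state = mixer(beta) @ state                           # mixing operator
print("expected cut value:", np.real(np.vdot(state, cost * state)))
```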
Three-dimensional (3D) shape measurement devices and techniques are being rapidly adopted within a variety of industries and applications. As acquiring 3D range data becomes faster and more accurate it becomes more challenging to efficiently store, transmit, or stream this data. One prevailing approach to compressing 3D range data is to encode it within the color channels of regular 2D images. This paper presents a novel method for reducing the depth range of a 3D geometry such that it can be stored within a 2D image using lower encoding frequencies (or a fewer number of encoding periods). This allows for smaller compressed file sizes to be achieved without a proportional increase in reconstruction errors. Further, as the proposed method occurs prior to encoding, it is readily compatible with a variety of existing image-based 3D range geometry compression methods.
electrical engineering and systems science
This work presents a new and simple approach for fine-tuning pretrained word embeddings for text classification tasks. In this approach, the class in which a term appears acts as an additional contextual variable during the fine-tuning process and contributes to the final word vector for that term. As a result, words that are used distinctively within a particular class will bear vectors that are closer to each other in the embedding space and will be more discriminative towards that class. To validate this novel approach, it was applied to three Arabic and two English datasets that have been previously used for text classification tasks such as sentiment analysis and emotion detection. In the vast majority of cases, the results obtained using the proposed approach improved considerably.
computer science
We develop a data-driven approach for signal denoising that utilizes the variational mode decomposition (VMD) algorithm and the Cramer-von Mises (CVM) statistic. In comparison with the classical empirical mode decomposition (EMD), VMD enjoys a superior mathematical and theoretical framework that makes it robust to noise and mode mixing. These desirable properties of VMD materialize in the segregation of a major part of the noise into a few final modes, while the majority of the signal content is distributed among the earlier ones. To exploit this representation for denoising purposes, we propose to estimate the distribution of noise from the predominantly noisy modes and then use it to detect and reject noise from the remaining modes. The proposed approach first selects the predominantly noisy modes using the CVM measure of statistical distance. Next, the CVM statistic is used locally on the remaining modes to test how closely the modes fit the estimated noise distribution; the modes that yield a closer fit to the noise distribution are rejected (set to zero). Extensive experiments demonstrate the superiority of the proposed method compared to the state of the art in signal denoising and underscore its utility in practical applications where the noise distribution is not known a priori.
electrical engineering and systems science
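A simplified sketch of the screening idea described above, not the authors' exact algorithm: given the VMD modes of a noisy signal (assumed precomputed with any VMD implementation), the noise law is estimated from the last, predominantly noisy modes, and every remaining mode that is statistically indistinguishable from that noise under a Cramer-von Mises test is dropped. The Gaussian noise model, the number of noise modes and the significance level are assumptions.

```python
import numpy as np
from scipy.stats import cramervonmises

# Simplified sketch of the screening idea (not the authors' exact algorithm):
# estimate the noise law from the last, predominantly noisy VMD modes and drop
# every remaining mode that fits that law.
def cvm_denoise(modes, n_noise_modes=2, alpha=0.05):
    """modes: array of shape (K, N); returns the sum of the retained modes."""
    noise_ref = np.concatenate(modes[-n_noise_modes:])       # predominantly noise
    mu, sigma = noise_ref.mean(), noise_ref.std(ddof=1)
    kept = []
    for mode in modes[:-n_noise_modes]:
        res = cramervonmises(mode, "norm", args=(mu, sigma))
        if res.pvalue < alpha:        # poor fit to the noise law -> keep as signal
            kept.append(mode)
    return np.sum(kept, axis=0) if kept else np.zeros(modes.shape[1])

# Toy usage with hand-made "modes": one clean tone plus two white-noise modes.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
modes = np.stack([np.sin(2 * np.pi * 5 * t),
                  0.3 * rng.normal(size=t.size),
                  0.3 * rng.normal(size=t.size)])
print(np.allclose(cvm_denoise(modes), modes[0]))
```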
We explore the prospect of producing primordial black holes around the solar mass region during an early matter domination epoch. The early matter-dominated epoch can arise when a moduli field comes to dominate the energy density of the Universe prior to big bang nucleosynthesis. The absence of radiation pressure during a matter-dominated epoch enhances primordial black hole formation from the gravitational collapse of primordial density fluctuations. In particular, we find that primordial black holes are produced in the $0.1-10~M_{\odot}$ mass range with a favorable choice of parameters in the theory. However, they cannot explain all of the merger events detected by the LIGO/Virgo gravitational wave search. In such a case, primordial black holes form about $4\%$ of the total dark matter abundance, of which $95\%$ belongs to the LIGO/Virgo consistent mass range. The rest of the dark matter could be in the form of particles that are produced from the decay of the moduli field during reheating.
astrophysics
We study the peak-to-average power ratio (PAPR) problem in orthogonal frequency-division multiplexing (OFDM) systems. In conventional clipping and filtering based PAPR reduction techniques, clipping noise is allowed to spread over the whole active passband, thus degrading the transmit signal quality similarly at all active subcarriers. However, since modern radio networks support frequency-multiplexing of users and services with highly different quality-of-service expectations, clipping noise from PAPR reduction should be distributed unequally over the corresponding physical resource blocks (PRBs). To facilitate this, we present an efficient PAPR reduction technique, where clipping noise can be flexibly controlled and filtered inside the transmitter passband, allowing to control the transmitted signal quality per PRB. Numerical results are provided in 5G New Radio (NR) mobile network context, demonstrating the flexibility and efficiency of the proposed method.
electrical engineering and systems science
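A minimal clip-and-filter sketch in the spirit of the method above: the clipping noise is moved to the frequency domain and weighted per physical resource block (PRB) before being re-applied, so protected PRBs receive less distortion. FFT size, clipping ratio, active band and PRB weights are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Clip-and-filter sketch with per-PRB weighting of the clipping noise.
rng = np.random.default_rng(0)
n_fft, n_active, prb_size = 1024, 600, 12
qpsk = (rng.integers(0, 2, n_active) * 2 - 1) + 1j * (rng.integers(0, 2, n_active) * 2 - 1)

X = np.zeros(n_fft, complex)
active = np.arange(n_active)                   # simplified contiguous active band
X[active] = qpsk
x = np.fft.ifft(X) * np.sqrt(n_fft)            # time-domain OFDM symbol

def papr_db(s):
    return 10 * np.log10(np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2))

# Hard clipping at 1.8x the RMS amplitude, then isolate the clipping noise.
clip_level = 1.8 * np.sqrt(np.mean(np.abs(x) ** 2))
x_clip = np.where(np.abs(x) > clip_level, clip_level * x / np.abs(x), x)
noise_f = np.fft.fft(x_clip - x) / np.sqrt(n_fft)

# Per-PRB weights in [0, 1]: 0 protects a PRB completely, 1 allows full noise.
weights = np.ones(n_active)
weights[: 2 * prb_size] = 0.0                  # protect the first two PRBs (assumed)
mask = np.zeros(n_fft)
mask[active] = weights

x_out = np.fft.ifft(X + mask * noise_f) * np.sqrt(n_fft)
print(f"PAPR before {papr_db(x):.2f} dB, after {papr_db(x_out):.2f} dB")
```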
Transition metal based oxide heterostructures exhibit diverse emergent phenomena, e.g. a two-dimensional electron gas, superconductivity, non-collinear magnetic phases, ferroelectricity, polar vortices and the topological Hall effect, which are absent in the constituent bulk oxides. A microscopic understanding of these properties in such nanometer-thick materials is extremely challenging. Synchrotron x-ray based techniques such as x-ray diffraction, x-ray absorption spectroscopy (XAS), resonant x-ray scattering (RXS), resonant inelastic x-ray scattering (RIXS), x-ray photoemission spectroscopy, etc. are essential for elucidating the response of the lattice, charge, orbital, and spin degrees of freedom to the heterostructuring. As a prototypical case of complex behavior, rare-earth nickelate (RENiO3 with RE=La, Pr, Nd, Sm, Eu, Lu) based thin films and heterostructures have been investigated quite extensively in recent years. An extensive body of literature about these systems exists, and for an overview of the field we refer the interested reader to the recent reviews Annual Review of Materials Research 46, 305 (2016) and Reports on Progress in Physics 81, 046501 (2018). In the present article, we give a brief review that concentrates on the use of synchrotron based techniques to investigate a specific set of EuNiO3/LaNiO3 superlattices, specifically designed to solve a long-standing puzzle about the origin of the simultaneous electronic, magnetic and structural transitions of the RENiO3 series.
condensed matter
Linear regression with measurement error in the covariates is a heavily studied topic; however, the statistics/econometrics literature is almost silent on estimating a multi-equation model with measurement error. This paper considers a seemingly unrelated regression model with measurement error in the covariates and introduces two novel estimation methods: a pure Bayesian algorithm (based on Markov chain Monte Carlo techniques) and its mean field variational Bayes (MFVB) approximation. The MFVB method has the added advantage of being computationally fast and can handle big data. An issue pertinent to measurement error models is parameter identification, and this is resolved by employing a prior distribution on the measurement error variance. The methods are shown to perform well in multiple simulation studies, where we analyze the impact on posterior estimates arising from different values of the reliability ratio or the variance of the true unobserved quantity used in the data generating process. The paper further implements the proposed algorithms in an application drawn from the health literature and shows that modeling measurement error in the data can improve model fitting.
statistics
The availability of multi-omics data has revolutionized the life sciences by creating avenues for integrated system-level approaches. Data integration links the information across datasets to better understand the underlying biological processes. However, high-dimensionality, correlations and heterogeneity pose statistical and computational challenges. We propose a general framework, probabilistic two-way partial least squares (PO2PLS), which addresses these challenges. PO2PLS models the relationship between two datasets using joint and data-specific latent variables. For maximum likelihood estimation of the parameters, we implement a fast EM algorithm and show that the estimator is asymptotically normally distributed. A global test for testing the relationship between two datasets is proposed, and its asymptotic distribution is derived. Notably, several existing omics integration methods are special cases of PO2PLS. Via extensive simulations, we show that PO2PLS performs better than alternatives in feature selection and prediction performance. In addition, the asymptotic distribution appears to hold when the sample size is sufficiently large. We illustrate PO2PLS with two examples from commonly used study designs: a large population cohort and a small case-control study. Besides recovering known relationships, PO2PLS also identified novel findings. The methods are implemented in our R-package PO2PLS. Supplementary materials for this article are available online.
statistics
Quantum walks in atomic systems, owing to their continuous nature, are especially well-suited for the simulation of many-body physics and can potentially offer an exponential speedup in solving certain black box problems. Photonics offers an alternate route to simulating such nonclassical behavior in a more robust platform. However, in photonic implementations to date, an increase to the depth of a continuous quantum walk requires modifying the footprint of the system. Here we report continuous walks of a two-photon quantum frequency comb with entanglement across multiple dimensions. The coupling between frequency modes is mediated by electro-optic phase modulation, which makes the evolution of the state completely tunable over a continuous range. With arbitrary control of the phase across different modes, we demonstrate a rich variety of behavior: from walks exhibiting ballistic transport or strong energy confinement, to subspaces featuring bosonic or fermionic character. We also explore the role of entanglement dimensionality and demonstrate biphoton energy bound states, which are only possible with multilevel entanglement. This suggests the potential for such walks to quantify entanglement in high-dimensional systems.
quantum physics
In a recent Letter, Dornheim et al. [PRL 125, 085001 (2020)] have investigated the nonlinear density response of the uniform electron gas in the warm dense matter regime. More specifically, they have studied the cubic response function at the first harmonic, which cannot be neglected in many situations of experimental relevance. In this work, we go one step further and study the full spectrum of excitations at the higher harmonics of the original perturbation based on extensive new ab initio path integral Monte Carlo (PIMC) simulations. We find that the dominant contribution to the density response beyond linear response theory is given by the quadratic response function at the second harmonic in the moderately nonlinear regime. Furthermore, we show that the nonlinear density response is highly sensitive to exchange-correlation effects, which makes it a potentially valuable new tool of diagnostics. To this end, we present a new theoretical description of the nonlinear electronic density response based on the recent effective static approximation to the local field correction [PRL 125, 235001 (2020)], which accurately reproduces our PIMC data with negligible computational cost.
physics
For a multi-cell, multi-user, cellular network downlink sum-rate maximization through power allocation is a nonconvex and NP-hard optimization problem. In this paper, we present an effective approach to solving this problem through single- and multi-agent actor-critic deep reinforcement learning (DRL). Specifically, we use finite-horizon trust region optimization. Through extensive simulations, we show that we can simultaneously achieve higher spectral efficiency than state-of-the-art optimization algorithms like weighted minimum mean-squared error (WMMSE) and fractional programming (FP), while offering execution times more than two orders of magnitude faster than these approaches. Additionally, the proposed trust region methods demonstrate superior performance and convergence properties than the Advantage Actor-Critic (A2C) DRL algorithm. In contrast to prior approaches, the proposed decentralized DRL approaches allow for distributed optimization with limited CSI and controllable information exchange between BSs while offering competitive performance and reduced training times.
computer science
High penetration of distributed generation will be characteristic of future distribution networks. The dynamic, intermittent, uncertain and deregulated nature of distributed generation raises the need for online, distributed economic dispatch techniques. In this paper, we demonstrate the application of such approaches using population dynamics. We propose a congestion management algorithm and demonstrate the notable properties and requirements of the proposed approach.
electrical engineering and systems science
We propose that the superconductivity recently observed in Nd$_{0.8}$Sr$_{0.2}$NiO$_2$ with critical temperature in the range $9$ K to $15$ K results from the same charge carriers and the same mechanism that we have proposed give rise to superconductivity in both hole-doped and electron-doped cuprates: pairing of hole carriers in oxygen $p\pi$ orbitals, driven by a correlated hopping term in the effective Hamiltonian that lowers the kinetic energy, as described by the theory of hole superconductivity. We predict a large increase in $T_c$ with compressive epitaxial strain.
condensed matter
Avalanche photodiode based single-photon detectors (SPDs) are crucial and practical components widely used in quantum key distribution (QKD) systems. For effective detection, most of these SPDs are operated in the gated mode, in which the gate is added to obtain high avalanche gain and is removed to quench the avalanche. The avalanche transition region (ATR) inevitably arises in the process of adding and removing the gate. We first experimentally investigate the characteristics of the ATR, in both a commercial SPD and a high-speed SPD, and then propose an ATR attack to control the detector. In the experiment of hacking a plug-and-play QKD system, Eve introduces a quantum bit error rate of less than 0.5% and leaves almost no traces of her presence in either the photocurrent or the afterpulse probability. Finally, we give possible countermeasures against this attack.
quantum physics
Ultra-Fast Charging (UFC) is an emerging technology that can shorten the charging time of an Electric Vehicle (EV) from hours to minutes. However, the power consumption characteristics of UFC bring new challenges to the existing power system, and its pros and cons are yet to be studied. This project aims to set up a framework for studying different aspects of substituting the normal non-residential EV chargers within the San Francisco Bay Area with Ultra-Fast Charging (UFC) stations. Three objectives were defined for the three stakeholders involved in this simulation, namely: the EV user, the station owner, and the grid operator. The results show that UFCs significantly increase the peak load and power consumption during the peak-demand period, which is an undesirable outcome from a grid operation perspective. Total electricity and operations-and-maintenance costs for the station owner would increase accordingly, although this can be justified by analyzing the value of time (VOT) from an EV-user perspective. Additionally, peak shaving using battery storage facilities is studied to complement the technology change and mitigate the impact of higher power consumption on the grid.
electrical engineering and systems science
We consider a non-conserving zero-range process with hopping rate proportional to the number of particles at each site. Particles are added to the system with a site-dependent creation rate, and removed from the system with a uniform annihilation rate. On a fully-connected lattice with a large number of sites, the mean-field geometry leads to a negative binomial law for the number of particles at each site, with parameters depending on the hopping, creation and annihilation rates. This model of particles is mapped to a model of population dynamics: the site label is interpreted as a level of fitness, the site-dependent creation rate is interpreted as a selection function, and the hopping process is interpreted as the introduction of mutants. In the limit of large density, the fraction of the total population occupying each site approaches the limiting distribution in the house-of-cards model of selection-mutation, introduced by Kingman. A single site can be occupied by a macroscopic fraction of the particles if the mutation rate is below a critical value (which matches the critical value worked out in the house-of-cards model). This feature generalises to classes of selection functions that increase sufficiently fast at high fitness. The process can be mapped to a model of evolving networks, inspired by the Bianconi--Barab\'asi model, but involving a large and fixed set of nodes. Each node forms links at a rate biased by its fitness, moreover links are destroyed at a uniform rate, and redirected at a certain rate. If this redirection rate matches the mutation rate, the number of links pointing to nodes of a given fitness level is distributed as the numbers of particles in the non-conserving zero-range process. There is a finite critical redirection rate if the density of quenched fitnesses goes to zero sufficiently fast at high fitness.
condensed matter
Consider the higher order parabolic operator $\partial_t+(-\Delta_x)^m$ and the higher order Schr\"{o}dinger operator $i^{-1}\partial_t+(-\Delta_x)^m$ in $X=\{(t,x)\in\mathbb{R}^{1+n};~|t|<A,|x_n|<B\}$, where $m$ and $n$ are any positive integers. Under certain lower order and regularity assumptions, we prove that if solutions to the linear problems vanish when $x_n>0$, then the solutions vanish in $X$. Such results are global if $n>1$, and we also prove some relevant local results.
mathematics
We report the synthesis of CeBi single crystals grown out of Bi self-flux and a systematic study of the magnetic and transport properties with varying temperature and applied magnetic field. From these $R(T,B)$ and $M(T,B)$ data we could assemble the temperature-field ($T-B$) phase diagram for CeBi and visualize the three-dimensional $M-T-B$ surface. The magnetoresistance (MR) in the low temperature regime shows a power-law, non-saturating behavior with large MR ($\sim 3\times10^5 \%$ at $2~$K and $14~$T), along with Shubnikov-de Haas oscillations. With increasing temperature, MR decreases, and then becomes negative for $T\gtrsim 12~$K. This crossover in MR seems to be unrelated to any specific metamagnetic transition, but rather associated with a change from a low-temperature normal-metal regime with an anomalously large MR to a regime of increased scattering off local Ce moments as temperature increases.
condensed matter
Using physical models, we study the sensitivity of polycyclic aromatic hydrocarbon (PAH) emission spectra to the character of the illuminating starlight, to the PAH size distribution, and to the PAH charge distribution. Starlight models considered range from the emission from a 3 Myr-old starburst, rich in far-ultraviolet (FUV) radiation, to the FUV-poor spectrum of the very old population of the M31 bulge. A wide range of starlight intensities is considered. The effects of reddening in dusty clouds are investigated for different starlight spectra. For a fixed PAH abundance parameter $q_{\rm PAH}$, the fraction of the IR power appearing in the PAH emission features can vary by a factor of two as the starlight spectrum varies from FUV-poor (M31 bulge) to FUV-rich (young starburst). We show how $q_{\rm PAH}$ can be measured from the strength of the 7.7$\mu$m emission. The fractional power in the 17$\mu$m feature can be suppressed by high starlight intensities.
astrophysics
We argue that the infall time to the singularity in the interior of a black hole is always related to a classical thermalization time. This indicates that singularities are related to the equilibration of infalling objects with the microstates of the black hole, but only in the sense of classical equilibration. When the singularity is reached, the quantum state of the black hole, initially a tensor product of the state of the infalling object and that of the black hole, is not yet a "generic" state in the enlarged Hilbert space, so its complexity is not maximal. We relate these observations to the phenomenon of mirages in the membrane paradigm description of the black hole horizon and to the shrinking of the area of causal diamonds inside the black hole. The observations are universal, and we argue that they give a clue to the nature of the underlying quantum theory of black holes in all types of asymptotic space-times.
high energy physics theory
We present a simple model of two dark matter species with opposite millicharge that can form electrically neutral bound states via the exchange of a massive dark photon. If bound state formation is suppressed at low temperatures, a sub-dominant fraction of millicharged particles remains at late times, which can give rise to interesting features in the 21 cm absorption profile at cosmic dawn. The dominant neutral component, on the other hand, can have dipole interactions with ordinary matter, leading to non-standard signals in direct detection experiments. We identify the parameter regions predicting a percent-level ionisation fraction and study constraints from laboratory searches for dark matter scattering and dark photon decays.
high energy physics phenomenology
We consider a distributionally robust formulation of stochastic optimization problems arising in statistical learning, where robustness is with respect to uncertainty in the underlying data distribution. Our formulation builds on risk-averse optimization techniques and the theory of coherent risk measures. It uses semi-deviation risk for quantifying uncertainty, allowing us to compute solutions that are robust against perturbations in the population data distribution. We consider a large family of loss functions that can be non-convex and non-smooth and develop an efficient stochastic subgradient method. We prove that it converges to a point satisfying the optimality conditions. To our knowledge, this is the first method with rigorous convergence guarantees in the context of non-convex non-smooth distributionally robust stochastic optimization. Our method can achieve any desired level of robustness with little extra computational cost compared to population risk minimization. We also illustrate the performance of our algorithm on real datasets arising in convex and non-convex supervised learning problems.
mathematics
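A hedged, simplified sketch of a risk-averse stochastic subgradient step for a mean-semideviation objective of the form E[l] + kappa*E[(l - E[l])_+], using a running ("tracking") estimate of the inner mean; this stands in for, and is not identical to, the algorithm analysed above. The toy problem, a hinge-loss linear classifier on synthetic data, and all step sizes are assumptions.

```python
import numpy as np

# Risk-averse stochastic subgradient sketch for E[l] + kappa * E[(l - E[l])_+],
# tracking the inner mean E[l] with a running average (simplified stand-in).
rng = np.random.default_rng(0)
d, n = 20, 5000
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.5 * rng.normal(size=n))

kappa, step, batch = 0.5, 0.05, 64
w = np.zeros(d)
mu = 1.0                                                     # tracking estimate of E[l]

for t in range(1, 2001):
    idx = rng.integers(0, n, batch)
    margins = y[idx] * (X[idx] @ w)
    losses = np.maximum(0.0, 1.0 - margins)                  # hinge loss (nonsmooth)
    active = (losses > 0).astype(float)[:, None]
    grads = -active * (y[idx, None] * X[idx])                # loss subgradients
    mu = (1 - 1.0 / t) * mu + (1.0 / t) * losses.mean()      # track the mean loss
    above = (losses > mu).astype(float)
    g = grads.mean(axis=0) * (1 - kappa * above.mean()) \
        + kappa * (above[:, None] * grads).mean(axis=0)      # risk subgradient
    w -= (step / np.sqrt(t)) * g

print("robust risk estimate on the last batch:",
      losses.mean() + kappa * np.maximum(losses - mu, 0).mean())
```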
We investigate the presence of vortex configurations in generalized Maxwell-Chern-Simons models with nonminimal coupling, in which we introduce a function that modifies the dynamical term of the scalar field in the Lagrangian. We first follow a route already considered in previous works to develop the Bogomol'nyi procedure, and, in this context, we use the first order equations to obtain a vortex with a novel behavior at its core. We then go further and introduce a novel procedure to develop the Bogomol'nyi methodology. It supports distinct first order equations, and we then investigate another model, in which the vortex may engender inversion of the magnetic flux, an effect with no precedents in the study of vortices within the nonminimal context.
high energy physics theory
Topological states require the presence of extended bulk states, as usually found in the picture of energy bands and topological states bridging the bulk gaps. But in driven systems this can be circumvented, and one can get topological states coexisting with fully localized bulk states, as in the case of the anomalous Floquet-Anderson insulator. Here, we show the fingerprints of this peculiar topological phase in the transport properties and their dependence on the disorder strength, geometrical configuration (two-terminal and multiterminal setups) and details of the driving protocol.
condensed matter
With the ability to directly obtain the Wigner function and density matrix of photon states, quantum tomography (QT) has had a significant impact on quantum optics, quantum computing and quantum information. Through an appropriate sequence of measurements on the evolution of each degree of freedom (DOF), the full quantum state of the observed photonic system can be determined. The first proposal to extend the application of QT to the reconstruction of complete quantum states of matter wavepackets generated enormous interest in ultrafast diffraction imaging and pump-probe spectroscopy of molecules. This interest was elevated with the advent of ultrafast electron and X-ray diffraction techniques using electron accelerators and X-ray free electron lasers to add temporal resolution to the observed nuclear and electron distributions. However, the great interest in this area has been tempered by the illustration of an impossibility theorem, known as the dimension problem. Because unitary evolution cannot be associated with every DOF of molecular motion, quantum tomography could not be used beyond 1D and categorically excludes most vibrational and all rotational motion of molecules. Here we present a theoretical advance to overcome the notorious dimension problem. Solving this challenging problem is important for pushing the imaging of molecular dynamics to the quantum limit. The new theory solves this problem, which makes quantum tomography a truly useful methodology in ultrafast physics and enables the making of a quantum version of a molecular movie. With the new theory, quantum tomography can finally be advanced to a sufficient level to become a general method for reconstructing quantum states of matter, without being limited to one dimension. Our new concept is demonstrated using a simulated dataset of an ultrafast diffraction experiment on laser-aligned nitrogen molecules.
quantum physics
We discuss whether the bound nature of multiquark states in quark models could benefit from relativistic effects on the kinetic energy operator. For mesons and baryons, relativistic corrections to the kinetic energy lead to lower energies, and thus call for a retuning of the parameters of the model. For multiquark states, as well as their respective thresholds, a comparison is made of the results obtained with non-relativistic and relativistic kinetic energy. It is found that the binding energy is lower in the relativistic case. In particular, $QQ\bar q\bar q$ tetraquarks with double heavy flavor become stable for a larger ratio of the heavy to light quark masses, and the all-heavy tetraquarks $QQ\bar Q\bar Q$ that are not stable in standard non-relativistic quark models remain unstable when a relativistic form of kinetic energy is adopted.
high energy physics phenomenology
It is well known that the expected number of real zeros of a random cosine polynomial $ V_n(x) = \sum_ {j=0} ^{n} a_j \cos (j x) , \ x \in (0,2\pi) $, with the $ a_j $ being standard Gaussian i.i.d. random variables is asymptotically $ 2n / \sqrt{3} $. On the other hand, some of the previous works on the random cosine polynomials with dependent coefficients show that such polynomials have at least $ 2n / \sqrt{3} $ expected real zeros lying in one period. In this paper we investigate two classes of random cosine polynomials with pairwise equal blocks of coefficients. First, we prove that a random cosine polynomial with the blocks of coefficients being of a fixed length and satisfying $ A_{2j}=A_{2j+1} $ possesses the same expected real zeros as the classical case. Afterwards, we study a case containing only two equal blocks of coefficients, and show that in this case significantly more real zeros should be expected compared to those of the classical case.
mathematics
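A quick Monte Carlo check of the classical $2n/\sqrt{3}$ benchmark quoted above: count sign changes of $V_n(x)=\sum_{j=0}^{n} a_j\cos(jx)$ on a fine grid over $(0,2\pi)$ for i.i.d. standard Gaussian coefficients. The grid density and number of trials are arbitrary choices.

```python
import numpy as np

# Count sign changes of V_n(x) = sum_j a_j cos(j x) on a fine grid and compare
# the empirical mean number of zeros with the 2n/sqrt(3) benchmark.
rng = np.random.default_rng(0)
n, trials = 100, 200
x = np.linspace(0, 2 * np.pi, 20 * n, endpoint=False)
cosines = np.cos(np.outer(np.arange(n + 1), x))          # shape (n+1, len(x))

zero_counts = []
for _ in range(trials):
    a = rng.standard_normal(n + 1)                       # i.i.d. standard Gaussian a_j
    v = a @ cosines
    zero_counts.append(np.count_nonzero(np.sign(v[:-1]) != np.sign(v[1:])))

print("empirical mean:", np.mean(zero_counts), " 2n/sqrt(3):", 2 * n / np.sqrt(3))
```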