text | label
---|---|
Microgrids are localized electrical grids with control capability that are able to disconnect from the traditional grid to operate autonomously. They strengthen grid resilience, help mitigate grid disturbances, and support a flexible grid by enabling the integration of distributed energy resources. Given the likely presence of critical loads, the proper protection of microgrids is of vital importance; however, this is complicated in the case of inverter-interfaced microgrids where low fault currents preclude the use of conventional time-overcurrent protection. This paper introduces and investigates the application of dynamic state estimation, a generalization of differential protection, for the protection of radial portions of microgrids (or distribution networks); both phasor-based and dynamic approaches are investigated for protection. It is demonstrated through experiments on three case-study systems that dynamic state estimation is capable of correctly identifying model parameters for both normal and faulted operation.
|
electrical engineering and systems science
|
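The core of the dynamic-state-estimation-based protection described in the abstract above is a goodness-of-fit test: measurements at the boundary of the protected zone are fitted to the zone's model, and a trip is issued when the model no longer explains the data. Below is a minimal sketch of that decision logic for a generic linearized measurement model; the model matrix, noise levels, and confidence level are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import chi2

def dse_trip_decision(H, z, sigma, alpha=1e-3):
    """Goodness-of-fit test behind DSE-based protection (illustrative).

    H     : (m, n) linearized measurement model of the protected zone
    z     : (m,) measurements (currents/voltages at the zone boundary)
    sigma : (m,) measurement standard deviations
    Returns (trip, J): trip=True if the healthy-zone model no longer
    explains the measurements at confidence level 1 - alpha.
    """
    W = np.diag(1.0 / sigma**2)                        # weights = inverse variances
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)  # weighted least squares
    r = z - H @ x_hat                                  # residuals
    J = float(r @ W @ r)                               # chi-square statistic
    dof = H.shape[0] - H.shape[1]                      # degrees of freedom
    return J > chi2.ppf(1.0 - alpha, dof), J
```

During normal operation or external faults the zone model fits and J stays small; an internal fault invalidates the model and inflates J, which is the trip criterion.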
We demonstrate single-shot compressive three-dimensional (3D) $(x, y, z)$ imaging based on interference coding. The depth dimension of the object is encoded into the interferometric spectra of the light field, resulting in an $(x, y, \lambda)$ datacube which is subsequently measured by a single-shot spectrometer. By implementing a compression ratio up to $400$, we are able to reconstruct $1$G voxels from a 2D measurement. Both an optimization-based compressive sensing algorithm and a deep learning network are developed for 3D reconstruction from a single 2D coded measurement. Due to the fast acquisition speed, our approach is able to capture volumetric activities at native camera frame rates, enabling 4D (volumetric-temporal) visualization of dynamic scenes.
|
electrical engineering and systems science
|
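The abstract above reconstructs a 3D datacube from a single coded 2D measurement with an optimization-based compressive sensing algorithm. The specific solver is not stated, so here is a generic ISTA sketch of the underlying sparse-recovery problem $\min_x \tfrac12\|Ax-y\|_2^2+\lambda\|x\|_1$; the dense sensing matrix is a placeholder standing in for the paper's interference-coding operator.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, n_iter=200):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.

    A : (m, n) sensing matrix (dense placeholder; in the paper the
        operator encodes depth into interferometric spectra).
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                    # gradient of the data-fit term
        x = x - step * g                         # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x
```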
The transition of the power grid requires new technologies and methodologies, which can only be developed and tested in simulations. Larger simulation setups with many levels of detail, in particular, can become quite slow, which decreases the number of possible simulation evaluations. One solution to overcome this issue is to use surrogate models, i.e., data-driven approximations of (sub)systems. In a recent work, a surrogate model for a low voltage grid was built using artificial neural networks, which achieved satisfying results. However, there were still open questions regarding the assumptions and simplifications made. In this paper, we present the results of our ongoing research, which answer some of these questions. We compare different machine learning algorithms as surrogate models and exchange the grid topology and size. In a set of experiments, we show that algorithms based on linear regression and artificial neural networks yield the best results independent of the grid topology. Furthermore, adding volatile energy generation and a variable phase angle does not decrease the quality of the surrogate models.
|
electrical engineering and systems science
|
A three parameter family of probability distributions is constructed such that its Mellin transform is defined over the same domain as the 2D GMC on the Riemann sphere with three insertion points $(\alpha_1,\alpha_2,\alpha_3)$ and satisfies the DOZZ formula in the sense of Kupiainen (Ann. Math. 191 (2020) 81 -- 166). The probability distributions in the family are defined as products of independent Fyodorov-Bouchaud and powers of Barnes beta distributions of types $(2, 1)$ and $(2, 2).$ In the special case of $\alpha_1+\alpha_2+\alpha_3=2Q$ the constructed probability distribution is shown to be consistent with the known small deviation asymptotic of the 2D GMC laws with everywhere positive curvature.
|
mathematics
|
In many real-world situations, there are constraints on the ways in which a physical system can be manipulated. We investigate the entropy production (EP) and extractable work involved in bringing a system from some initial distribution $p$ to some final distribution $p'$, given that the set of master equations available to the driving protocol obeys some constraints. We first derive general bounds on EP and extractable work, as well as a decomposition of the nonequilibrium free energy into an "accessible free energy" (which can be extracted as work, given a set of constraints) and "inaccessible free energy" (which must be dissipated as EP). In a similar vein, we consider the thermodynamics of information in the presence of constraints, and decompose the information acquired in a measurement into "accessible" and "inaccessible" components. This decomposition allows us to consider the thermodynamic efficiency of different measurements of the same system, given a set of constraints. We use our framework to analyze protocols subject to symmetry, modularity, and coarse-grained constraints, and consider various examples including the Szilard box, the 2D Ising model, and a multi-particle flashing ratchet.
|
condensed matter
|
To improve the driving mobility and energy efficiency of connected autonomous electrified vehicles, this paper presents an integrated longitudinal speed decision-making and energy efficiency control strategy. The proposed approach is a hierarchical control architecture, which is assumed to consist of higher-level and lower-level controls. As the core of this study, model predictive control and reinforcement learning are combined to improve the powertrain mobility and fuel economy for a group of automated vehicles. The higher level exploits the signal phase and timing and the state information of connected autonomous vehicles via vehicle-to-infrastructure and vehicle-to-vehicle communication to reduce stopping at red lights. The higher level outputs the optimal vehicle velocity using the model predictive control technique and receives the power-split control from the lower-level controller. These two levels communicate with each other via a controller area network in the real vehicle. The lower level utilizes a model-free reinforcement learning method to improve the fuel economy for each connected autonomous vehicle. Numerical tests illustrate that vehicle mobility can be noticeably improved (traveling time reduced by 30%) by reducing red-light idling. The effectiveness and performance of the proposed method are validated via comparison analysis among different energy efficiency controls (fuel economy improved by 13%).
|
electrical engineering and systems science
|
We present an experimental investigation of a new polymorphic 2D single layer of phosphorus on Ag(111). The atomically-resolved scanning tunneling microscopy (STM) images show a new 2D material composed of freely-floating phosphorus pentamers organized into a 2D layer, where the pentamers are aligned in close-packed rows. The scanning tunneling spectroscopy (STS) measurements reveal a semiconducting character with a band gap of 1.20 eV. This work presents the formation at low temperature (LT) of a new polymorphic 2D phosphorus layer composed of a floating 2D pentamer structure. The smooth curved terrace edges and the lack of any clear crystallographic orientation with respect to the Ag(111) substrate at room temperature indicate a smooth potential energy surface that is reminiscent of a liquid-like growth phase. This is confirmed by density functional theory (DFT) calculations that find a small energy barrier of only 0.17 eV to surface diffusion of the pentamers (see Supplemental Material). The formation of extended, homogeneous domains is a key ingredient to opening a new avenue to integrate this new 2D material into electronic devices.
|
condensed matter
|
We give strong numerical evidence for the existence of an instability afflicting six-dimensional Reissner-Nordstr\"om de Sitter (RNdS) black holes. This instability is akin to the Konoplya-Zhidenko instability present in RNdS black holes in seven spacetime dimensions and above. Moreover, we perform a detailed analysis of the near-horizon limit of extremal RNdS black holes, and find that unstable gravitational modes effectively behave as a massive scalar field whose mass violates the AdS$_2$ Breitenl\"ohner-Freedman bound (if and only if $d\geq 6$), thus providing a physical argument for the existence of the instability. Finally, we show that the frequency spectrum of perturbations of RNdS has a remarkably intricate structure with several bifurcations/mergers that appears unique to RNdS black holes.
|
high energy physics theory
|
In order to avoid the difficulties encountered by the relativistic quantum theory of single particles, we pursue a deductive development of the theory from physical principles, without canonical quantization, by making use of group-theoretical methods. Our work points out the necessity of new classes of irreducible representations of the Poincar\'e group on which the quantum theory of a particle can be based. For a spin-0 particle, four inequivalent theories are completely determined, with fundamental differences with respect to the Klein-Gordon theory.
|
quantum physics
|
In this paper, we study the feedback game on $3$-chromatic Eulerian triangulations of surfaces. We prove that the winner of the game on every $3$-chromatic Eulerian triangulation of a surface all of whose vertices have degree $0$ modulo $4$ is always fixed. Moreover, we also study the case of $3$-chromatic Eulerian triangulations of surfaces which have at least two vertices whose degrees are $2$ modulo $4$, and in particular, we determine the winner of the game on a concrete class of such graphs, called an octahedral path.
|
computer science
|
Asymptotic approximations to the zeros of Jacobi polynomials are given, with methods to obtain the coefficients in the expansions. These approximations can be used as standalone methods for the non-iterative computation of the nodes of Gauss--Jacobi quadratures of high degree ($n\ge 100$). We also provide asymptotic approximations for functions related to the first-order derivative of Jacobi polynomials, which are used for computing the weights of the Gauss--Jacobi quadrature. The performance of the asymptotic approximations is illustrated with numerical examples, and it is shown that nearly double precision relative accuracy is obtained both for the nodes and the weights when $n\ge 100$ and $-1< \alpha, \beta\le 5$. For smaller degrees the approximations are also useful, as they provide $10^{-12}$ relative accuracy for the nodes when $n\ge 20$, and just one Newton step would be sufficient to guarantee double precision accuracy in those cases.
|
mathematics
|
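The last sentence of the abstract above notes that one Newton step suffices to polish the asymptotic node approximations to double precision. Here is a minimal sketch of that polishing step, using the standard derivative identity $\frac{d}{dx}P_n^{(\alpha,\beta)}(x)=\frac{n+\alpha+\beta+1}{2}P_{n-1}^{(\alpha+1,\beta+1)}(x)$ and SciPy's Jacobi evaluator; the asymptotic initial guesses themselves are assumed given.

```python
import numpy as np
from scipy.special import eval_jacobi

def newton_polish(x0, n, a, b, steps=1):
    """Refine approximate Gauss-Jacobi nodes by Newton iteration.

    x0 : array of initial node guesses, e.g. from asymptotic expansions
         (which per the abstract are already ~1e-12 accurate for n >= 20).
    Uses d/dx P_n^{(a,b)}(x) = (n+a+b+1)/2 * P_{n-1}^{(a+1,b+1)}(x).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        p = eval_jacobi(n, a, b, x)
        dp = 0.5 * (n + a + b + 1) * eval_jacobi(n - 1, a + 1, b + 1, x)
        x = x - p / dp                     # one Newton step per iteration
    return x
```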
It is critical yet challenging for deep learning models to properly characterize the uncertainty that is pervasive in real-world environments. Although many efforts have been made, such as heteroscedastic neural networks (HNNs), little work has demonstrated satisfactory practicability due to different levels of compromise on learning efficiency, quality of uncertainty estimates, and predictive performance. Moreover, existing HNNs typically fail to construct an explicit interaction between the prediction and its associated uncertainty. This paper aims to remedy these issues by developing SDE-HNN, a new heteroscedastic neural network equipped with stochastic differential equations (SDE) to characterize the interaction between the predictive mean and variance of HNNs for accurate and reliable regression. Theoretically, we show the existence and uniqueness of the solution to the devised neural SDE. Moreover, based on the bias-variance trade-off for the optimization in SDE-HNN, we design an enhanced numerical SDE solver to improve the learning stability. Finally, to more systematically evaluate the predictive uncertainty, we present two new diagnostic uncertainty metrics. Experiments on challenging datasets show that our method significantly outperforms state-of-the-art baselines in terms of both predictive performance and uncertainty quantification, delivering well-calibrated and sharp prediction intervals.
|
computer science
|
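For context on the abstract above: a heteroscedastic neural network is trained by maximizing a Gaussian likelihood in which both the mean and the variance are predicted per input. The sketch below shows only that generic HNN objective; the SDE-based coupling of mean and variance that the paper proposes is not reproduced here.

```python
import numpy as np

def hetero_gaussian_nll(mu, log_var, y):
    """Per-sample negative log-likelihood for heteroscedastic regression.

    The network predicts both mu(x) and log sigma^2(x); predicting the
    log-variance keeps the variance positive without constraints.
    """
    var = np.exp(log_var)
    return 0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)
```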
Markov Chain Monte Carlo (MCMC) is a class of algorithms to sample complex and high-dimensional probability distributions. The Metropolis-Hastings (MH) algorithm, the workhorse of MCMC, provides a simple recipe to construct reversible Markov kernels. Reversibility is a tractable property that implies a less tractable but essential property here, invariance. Reversibility is however not necessarily desirable when considering performance. This has prompted recent interest in designing kernels breaking this property. At the same time, an active stream of research has focused on the design of novel versions of the MH kernel, some nonreversible, relying on the use of complex invertible deterministic transforms. While standard implementations of the MH kernel are well understood, the aforementioned developments have not received the same systematic treatment to ensure their validity. This paper fills the gap by developing general tools to ensure that a class of nonreversible Markov kernels, possibly relying on complex transforms, has the desired invariance property and leads to convergent algorithms. This leads to a set of simple and practically verifiable conditions.
|
statistics
|
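As a reference point for the abstract above, here is the standard reversible random-walk Metropolis-Hastings kernel that the paper generalizes; the nonreversible kernels it studies replace this accept/reject construction with complex invertible deterministic transforms, which is where the extra validity conditions become necessary.

```python
import numpy as np

def mh_step(x, log_pi, proposal_sd=1.0, rng=None):
    """One reversible random-walk Metropolis-Hastings step targeting pi.

    A symmetric Gaussian proposal makes the Hastings ratio reduce to
    pi(x') / pi(x), so the kernel satisfies detailed balance and leaves
    pi invariant.
    """
    if rng is None:
        rng = np.random.default_rng()
    x_prop = x + proposal_sd * rng.standard_normal(np.shape(x))
    if np.log(rng.uniform()) < log_pi(x_prop) - log_pi(x):
        return x_prop          # accept the proposal
    return x                   # reject: stay put
```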
A key challenge in spatial statistics is the analysis of massive spatially-referenced data sets. Such analyses often proceed from Gaussian process specifications that can produce rich and robust inference, but involve dense covariance matrices that lack computationally exploitable structure. The matrix computations required for fitting such models involve floating point operations in cubic order of the number of spatial locations and dynamic memory storage in quadratic order. Recent developments in spatial statistics offer a variety of massively scalable approaches. Bayesian inference and hierarchical models, in particular, have gained popularity due to their richness and flexibility in accommodating spatial processes. Our current contribution is to provide computationally efficient exact algorithms for spatial interpolation of massive data sets using scalable spatial processes. We combine low-rank Gaussian processes with efficient sparse approximations. Following recent work by [1], we model the low-rank process using a Gaussian predictive process (GPP) and the residual process as a sparsity-inducing nearest-neighbor Gaussian process (NNGP). A key contribution here is to implement these models using exact conjugate Bayesian modeling to avoid expensive iterative algorithms. Through simulation studies, we evaluate the performance of the proposed approach and the robustness of our models, especially for long-range prediction. We implement our approaches for remotely sensed light detection and ranging (LiDAR) data collected over the US Forest Service Tanana Inventory Unit (TIU) in a remote portion of Interior Alaska.
|
statistics
|
We use cosmological hydrodynamical simulations to examine the physical properties of the gas in the circumgalactic media (CGM) of star-forming galaxies as a function of angular orientation. We utilise TNG50 of the IllustrisTNG project, as well as the EAGLE simulation, to show that observable properties of CGM gas correlate with azimuthal angle, defined as the galactocentric angle with respect to the central galaxy. Both simulations are in remarkable agreement in predicting a strong modulation of flow rate direction with azimuthal angle: inflow is more substantial along the galaxy major axis, while outflow is strongest along the minor axis. The absolute rates are noticeably larger for higher stellar mass galaxies (log(M_* / M_sun) ~ 10.5), up to an order of magnitude compared to Mdot < 1 M_sun/yr/sr for log(M_* / M_sun) ~ 9.5 objects. Notwithstanding the different numerical and physical models, both TNG50 and EAGLE predict that the average metallicity of the CGM is higher along the minor versus major axes of galaxies. The angular signal is robust across a wide range of galaxy stellar mass, 8.5 < log(M_* / M_sun) < 10.5 at z<1. This azimuthal dependence is particularly clear at larger impact parameters, b > 100 kpc. Our results present a global picture whereby, despite the numerous mixing processes, there is a clear angular dependence of the CGM metallicity. We make forecasts for future large survey programs that will be able to compare against these expectations. Indeed, characterising the kinematics, spatial distribution and metal content of CGM gas is key to a full understanding of the exchange of mass, metals, and energy between galaxies and their surrounding environments.
|
astrophysics
|
Supermassive black holes (BHs) and their host galaxies are interlinked by virtue of feedback and are thought to be co-eval across the Hubble time. This relation is highlighted by an approximate proportionality between the BH mass $M_\bullet$ and the mass of the stellar bulge $M_\ast$ of the host galaxy. However, a large spread of the ratio $M_\bullet/M_\ast$ and a considerable excess of BH mass at redshifts $z\sim8$ indicate that the coevolution of central massive BHs and stellar populations in host galaxies may have experienced variations in its intensity. These issues require a robust determination of the relevant masses (BH, stars and gas), which is difficult in the case of distant high-redshift galaxies that are unresolved. In this paper, we seek to identify spectral diagnostics that may tell us about the relative masses of the BH, the gas and the stars. We consider general features of SEDs of galaxies that harbour growing massive BHs, forming stars and interstellar/circumgalactic gas. We focus on observational manifestations of possible predominances or intermittent variations in evolutionary episodes of growing massive BHs and forming stellar populations. We consider simplified scenarios for star formation and massive BH growth, and simple models for the chemical composition of gas, for dust-free gas as well as for gas with a dust mass fraction of $1/3$ of the metal content. We argue that wideband multi-frequency observations (X-ray to submillimeter) of the composite emission spectra of a growing BH, a stellar population and the nebular emission of interstellar gas are sufficient to infer their masses.
|
astrophysics
|
The spontaneous breaking of U(1)_B-L around the scale of grand unification can simultaneously account for hybrid inflation, leptogenesis, and neutralino dark matter, thus resolving three major puzzles of particle physics and cosmology in a single predictive framework. The B-L phase transition also results in a network of cosmic strings. If strong and electroweak interactions are unified in an SO(10) gauge group, containing U(1)_B-L as a subgroup, these strings are metastable. In this case, they produce a stochastic background of gravitational waves that evades current pulsar timing bounds, but features a flat spectrum with amplitude h^2\Omega_GW ~ 10^-8 at interferometer frequencies. Ongoing and future LIGO observations will hence probe the scale of B-L breaking.
|
high energy physics phenomenology
|
We study the query complexity of Bayesian Private Learning: a learner wishes to locate a random target within an interval by submitting queries, in the presence of an adversary who observes all of her queries but not the responses. How many queries are necessary and sufficient in order for the learner to accurately estimate the target, while simultaneously concealing the target from the adversary? Our main result is a query complexity lower bound that is tight up to the first order. We show that if the learner wants to estimate the target within an error of $\varepsilon$, while ensuring that no adversary estimator can achieve a constant additive error with probability greater than $1/L$, then the query complexity is on the order of $L\log(1/\varepsilon)$, as $\varepsilon \to 0$. Our result demonstrates that increased privacy, as captured by $L$, comes at the expense of a {multiplicative} increase in query complexity. Our proof method builds on Fano's inequality and a family of proportional-sampling estimators. As an illustration of the method's wider applicability, we generalize the complexity lower bound to settings involving high-dimensional linear query learning and partial adversary observation.
|
computer science
|
In this paper, we propose a compositional approach for the construction of finite abstractions (a.k.a. finite Markov decision processes (MDPs)) for networks of discrete-time stochastic control subsystems that are not necessarily stabilizable. The proposed approach leverages the interconnection topology and a notion of finite-step stochastic storage functions, which describes joint dissipativity-type properties of subsystems and their abstractions, and establishes a finite-step stochastic simulation function as a relation between the network and its abstraction. To this end, we first develop a new type of compositionality condition that is less conservative than existing ones. In particular, using a relaxation via a finite-step stochastic simulation function, it is possible to construct finite abstractions such that stabilizability of each subsystem is not necessarily required. We then propose an approach to construct finite MDPs together with their corresponding finite-step storage functions for general discrete-time stochastic control systems satisfying an incremental passivability property. We also construct finite MDPs for a particular class of nonlinear stochastic control systems. To demonstrate the effectiveness of the proposed results, we apply them to three different case studies.
|
electrical engineering and systems science
|
The propagation of a free massless scalar field in a 1+1 dimensional Minkowski space modeling a wormhole is considered. The wormhole model consists of two timelike trajectories, which represent the entrance and the exit of the wormhole, connected via some transfer function that specifies how incoming modes that reach the entrance are transferred to the exit. We find that particles and energy fluxes are generically produced except for transfer functions that represent global conformal transformations. We consider several examples, involving exit trajectories which are asymptotically inertial or asymptotically null, as well as faster-than-light motion, to illustrate the peculiarities of the emitted energy fluxes and quantum correlations.
|
high energy physics theory
|
We present the census of massive (log(M$_{*}$/M$_{\odot}$)$\geq 11$) galaxies at $3<z<6$ identified over the COSMOS/UltraVISTA Ultra-Deep field stripes, consisting of $\approx100$ and $\approx20$ high-confidence candidates at $3<z<4$ and $4<z<6$, respectively. The $3<z<4$ population comprises post-starburst, UV star-forming and dusty star-forming galaxies in roughly equal fractions, while UV star-forming galaxies dominate at $4<z<6$. We account for various sources of biases in SED modelling, finding that the treatment of emission line contamination is essential for understanding the number densities and mass growth histories of massive galaxies at $z>3$. The significant increase in observed number densities at $z\sim4$ (a factor of $>5$ in $\lesssim600$ Myr) implies that this is the epoch at which log(M$_{*}$/M$_{\odot}$)$\geq 11$ galaxies emerge in significant numbers, with stellar ages ($\approx500-900$ Myr) indicating rapid formation epochs as early as $z\sim7$. Leveraging ancillary multi-wavelength datasets, we perform panchromatic SED modelling to constrain the total star-formation activity of the sample. The star-formation activity of the sample is generally consistent with being on the star-formation main sequence at the considered redshifts, with $\approx15-25\%$ of the population showing evidence of suppressed star-formation rates, indicating that quenching mechanisms are already at play by $z\sim4$. We stack available HST imaging, confirming their compact nature ($r_{e}\lesssim2.2$ kpc), consistent with the expected sizes of high-$z$ star-forming galaxies. Finally, we discuss how our results are in line with the early formation epochs and short formation timescales inferred from the fossil records of the most massive galaxies in the Universe.
|
astrophysics
|
We demonstrate and characterize a carrier-envelope-phase (CEP)-controlled ultrashort chirped field as an efficient and robust mechanism to modify the dissociation dynamics of molecular hydrogen. Different dissociation pathways are collectively induced, and their interference contributes to the kinetic energy release spectra. Chirping is able to efficiently manipulate the interference of the different dissociation pathways. We demonstrate a linear relationship between the chirp and the CEP dependence of dissociation as well as of directional electron localization.
|
physics
|
Inference on the extremal behaviour of spatial aggregates of precipitation is important for quantifying river flood risk. There are two classes of previous approach, with one failing to ensure self-consistency in inference across different regions of aggregation and the other requiring highly inflexible marginal and spatial dependence structure assumptions. To overcome these issues, we propose a model for high-resolution precipitation data, from which we can simulate realistic fields and explore the behaviour of spatial aggregates. Recent developments in the spatial extremes literature have seen promising progress with spatial extensions of the Heffernan and Tawn (2004) model for conditional multivariate extremes, which can handle a wide range of dependence structures. Our contribution is twofold: new parametric forms for the dependence parameters of this model; and a novel framework for deriving aggregates that addresses edge effects and sub-regions without rain. We apply our modelling approach to gridded East Anglia, UK, precipitation data. Return-level curves for spatial aggregates over different regions of various sizes are estimated and shown to fit the data very well.
|
statistics
|
Exascale computing holds great opportunities for molecular dynamics (MD) simulations. However, to take full advantage of the new possibilities, we must learn how to focus computational power on the discovery of complex molecular mechanisms, and how to extract them from enormous amounts of data. Both aspects still rely heavily on human experts, which becomes a serious bottleneck when a large number of parallel simulations have to be orchestrated to take full advantage of the available computing power. Here, we use artificial intelligence (AI) both to guide the sampling and to extract the relevant mechanistic information. We combine advanced sampling schemes with statistical inference, artificial neural networks, and deep learning to discover molecular mechanisms from MD simulations. Our framework adaptively and autonomously initializes simulations and learns the sampled mechanism, and is thus suitable for massively parallel computing architectures. We propose practical solutions to make the neural networks interpretable, as illustrated in applications to molecular systems.
|
physics
|
We account for particle emission and gravitational radiation from cosmic string loops to determine their effect on the loop distribution and observational signatures of strings. The effect of particle emission is that the number density of loops no longer scales. This results in a high frequency cutoff on the stochastic gravitational wave background, but we show that the expected cutoff is outside the range of current and planned detectors. Particle emission from string loops also produces a diffuse gamma ray background that is sensitive to the presence of kinks and cusps on the loops. However, both for kinks and cusps, and with mild assumptions about particle physics interactions, current diffuse gamma-ray background observations do not constrain $G\mu$.
|
high energy physics phenomenology
|
The under-abundance of asteroids on orbits with small perihelion distances suggests that thermally-driven disruption may be an important process in the removal of rocky bodies in the Solar System. Here we report our study of how debris streams arise from possible thermally-driven disruptions in the near-Sun region. We calculate that a small body with a diameter $\gtrsim0.5$ km can produce a sufficient amount of material to allow the detection of the debris at the Earth as meteor showers, and that bodies of such sizes thermally disrupt every $\sim2$ kyr. We also find that objects from the inner parts of the asteroid belt are more likely to become Sun-approachers than those from the outer parts. We simulate the formation and evolution of the debris streams produced from a set of synthetic disrupting asteroids drawn from Granvik et al. (2016)'s near-Earth object population model, and find that they evolve 10--70 times faster than streams produced at ordinary solar distances. We compare the simulation results to a catalog of known meteor showers on Sun-approaching orbits. We show that there is a clear overabundance of Sun-approaching meteor showers, which is best explained by the combined effect of comet contamination and an extended disintegration phase that lasts up to a few kyr. We suggest that a few asteroid-like Sun-approaching objects that brighten significantly at their perihelion passages could, in fact, be disrupting asteroids. An extended period of thermal disruption may also explain the widespread detection of transiting debris in exoplanetary systems. Data and codes that generate the figures and main results of this work are publicly available at 10.5281/zenodo.2547298 and https://github.com/Yeqzids/near-sun-disruptions.
|
astrophysics
|
We give the hyperasymptotic expansion of the plaquette with a precision that includes the terminant associated to the leading renormalon. Subleading effects are also considered. The perturbative series is regulated using the principal value prescription for its Borel integral. We use this analysis to give a determination of the gluon condensate in SU(3) pure gluodynamics that is independent of the scale and renormalization scheme used for the coupling constant: $\langle G^2 \rangle_{\rm PV} (n_f=0)=3.15(18)\; r_0^{-4}$.
|
high energy physics phenomenology
|
Voronoi mosaics inspired by seed points placed on Archimedes spirals are reported. The Voronoi entropy was calculated for these patterns. Both equidistant and non-equidistant patterns are treated. Voronoi mosaics built from cells of equal size, which are of primary importance for the decorative arts, are reported. A pronounced prevalence of hexagons is inherent to the patterns with an equidistant and non-equidistant distribution of points when the distance between the seed points is of the same order of magnitude as the distance between the turns of the spiral. Penta- and heptagonal 'defected' cells appear in the Voronoi diagrams due to the finite nature of the pattern. Ordered Voronoi tessellations demonstrating a Voronoi entropy larger than 1.71, the value reported for a random 2D distribution of points, were revealed. The dependence of the Voronoi entropy on the total number of seed points located on the Archimedes spiral is reported. The aesthetic attraction of the Voronoi mosaics arising from seed points placed on Archimedes spirals is discussed.
|
mathematics
|
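A minimal sketch of the computation behind the abstract above: place seeds on an Archimedes spiral, build the Voronoi diagram, and evaluate the Voronoi entropy $S=-\sum_n P_n\ln P_n$ over the distribution of cell side counts. The seed spacing and the exclusion of unbounded boundary cells are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_entropy_spiral(n_seeds=2000, a=1.0, dtheta=0.25):
    """Voronoi entropy S = -sum_n P_n ln P_n for seeds on an Archimedes
    spiral r = a*theta (equidistant-in-angle variant; equidistant-in-
    arclength placements are treated analogously)."""
    theta = dtheta * np.arange(1, n_seeds + 1)
    pts = np.c_[a * theta * np.cos(theta), a * theta * np.sin(theta)]
    vor = Voronoi(pts)
    sides = []
    for idx in vor.point_region:
        region = vor.regions[idx]
        if -1 in region or len(region) == 0:      # skip unbounded edge cells
            continue
        sides.append(len(region))                 # polygon side count
    vals, counts = np.unique(sides, return_counts=True)
    P = counts / counts.sum()
    return -np.sum(P * np.log(P))                  # ~0 for order, 1.71 for random
```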
We study the suppression of high transverse momentum single hadron and dihadron production in high-energy heavy-ion collisions within the framework of a next-to-leading-order perturbative QCD parton model combined with the higher-twist energy loss formalism. Our model provides a consistent description of the nuclear modification factors for single hadron and dihadron production in central and non-central nucleus-nucleus collisions at RHIC and LHC energies. We quantitatively extract the value of the jet quenching parameter $\hat q$ via a global $\chi^2$ analysis, and obtain ${\hat{q}}/{T^3} = 4.1 \sim 4.4$ at $T = 378$~MeV at RHIC and ${\hat{q}}/{T^3} = 2.6 \sim 3.3$ at $T = 486$~MeV at the LHC, consistent with the results from the JET Collaboration. We also provide predictions for the nuclear modification factors of dihadron production in Pb+Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV and in Xe+Xe collisions at $\sqrt{s_{\rm{NN}}}$ = 5.44 TeV.
|
high energy physics phenomenology
|
The intrinsic antiferromagnetic (AFM) interlayer coupling in the two-dimensional magnetic topological insulator MnBi$_2$Te$_4$ places a restriction on realizing a stable quantum anomalous Hall effect (QAHE) [Y. Deng et al., Science 367, 895 (2020)]. Through density functional theory calculations, we demonstrate the possibility of tuning the AFM coupling to ferromagnetic coupling in MnBi$_2$Te$_4$ films by alloying about 50% V with Mn. As a result, the QAHE can be achieved without the even-odd alternation in the number of septuple layers. This provides a practical strategy to obtain a robust QAHE in ultrathin MnBi$_2$Te$_4$ films, rendering them attractive for technological innovations.
|
condensed matter
|
This paper proposes a hardware-oriented dropout algorithm, which is efficient for field programmable gate array (FPGA) implementation. In deep neural networks (DNNs), overfitting occurs when networks are overtrained and adapt too well to training data. Consequently, they fail in predicting unseen data used as test data. Dropout is a common technique that is often applied in DNNs to overcome this problem. In general, implementing such training algorithms of DNNs in embedded systems is difficult due to power and memory constraints. Training DNNs is power-, time-, and memory-intensive; however, embedded systems require low power consumption and real-time processing. An FPGA is suitable for embedded systems for its parallel processing characteristic and low operating power; however, due to its limited memory and different architecture, it is difficult to apply general neural network algorithms. Therefore, we propose a hardware-oriented dropout algorithm that can effectively utilize the characteristics of an FPGA with less memory required. Software program verification demonstrates that the performance of the proposed method is identical to that of conventional dropout, and hardware synthesis demonstrates that it results in significant resource reduction.
|
computer science
|
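The abstract above does not spell out the proposed algorithm, but a common FPGA-friendly way to generate dropout masks without a full random number generator is a linear-feedback shift register (LFSR). The sketch below simulates that generic idea in software; it is an illustration of the hardware-oriented approach, not the paper's exact method.

```python
def lfsr_dropout_mask(n_units, drop_threshold=128, seed=0xACE1):
    """Simulate a dropout mask driven by a 16-bit Fibonacci LFSR.

    Each cycle produces one pseudo-random byte; a unit is dropped (0)
    when the byte falls below drop_threshold, so the drop rate is
    roughly drop_threshold/256. The taps (16, 14, 13, 11) give a
    maximal-length sequence. Generic hardware-oriented scheme, not
    the paper's algorithm.
    """
    state, mask = seed, []
    for _ in range(n_units):
        byte = 0
        for _ in range(8):                 # shift 8 times to form one byte
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            byte = (byte << 1) | (state & 1)
        mask.append(0 if byte < drop_threshold else 1)
    return mask
```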
We study the interplay between interactions and finite-temperature dephasing baths. We consider a double well with strongly interacting bosons coupled, via the density, to a bosonic bath. Such a system, when the bath has infinite temperature and instantaneous decay of correlations, relaxes with an emerging algebraic behavior with exponent 1/2. Here we show that, because of the finite-temperature baths and of the choice of spectral densities, such an algebraic relaxation may occur for a shorter duration and the characteristic exponent can be lower than 1/2. These results show that the interaction-induced impeding of relaxation is stronger and more complex when the bath has finite temperature and/or nonzero timescale for the decay of correlations.
|
quantum physics
|
Transition path sampling (TPS) is a powerful technique for investigating rare transitions, especially when the mechanism is unknown and one does not have access to the reaction coordinate. Straightforward application of TPS does not directly provide the free energy landscape nor the kinetics, which motivated the development of path sampling extensions, such as transition interface sampling (TIS) and the reweighted path ensemble (RPE), that are able to simultaneously access both kinetics and thermodynamics. However, performing TIS is more involved than TPS, and still requires some insight into the reaction to define interfaces. While packages that can efficiently compute path ensembles for TIS are now available, it would be useful to directly compute the free energy from a single TPS simulation. To achieve this, we developed an approximate method, denoted Virtual Interface Exchange, that makes use of the rejected pathways in a form of waste recycling. The method yields an approximate reweighted path ensemble that allows an immediate view of the free energy landscape from a single TPS simulation, as well as a full committor analysis.
|
physics
|
The present work concerns data from three campaigns carried out during the last twenty years in the plain of the Garigliano river surrounding the Garigliano Nuclear Power Plant (GNPP), which is located in Southern Italy and was shut down in 1979. Moreover, some data from surveys held in the eighties, across the Chernobyl accident, have been taken into account. The results for the soil samples, in particular for the 137Cs and 236U specific activity, were analyzed for their extension in space and in time. Some of the problems related to the classical analysis of environmental radiological data (non-normal distribution of the values, small number of sample points, multiple comparison, and presence of values less than the minimum detectable activity) have been overcome with the use of Bayesian methods. The purpose of the paper is threefold: (1) to introduce the data of the last campaign held in the Garigliano plain; (2) to insert these data in a larger spatio-temporal frame; (3) to show how the Bayesian approach can be applied to radiological environmental surveys, stressing its advantages over other approaches, using the data of the campaigns. The results show that (i) there was no new contribution in the last decades, (ii) the specific activity values of the area surrounding the GNPP are consistent with those obtained in other, farther areas, and (iii) the effective depletion half-life factor for 137Cs is much lower than the half-life of the radionuclide.
|
physics
|
During metastatic dissemination, streams of cells collectively migrate through a network of narrow channels within the extracellular matrix, before entering into the blood stream. This strategy is believed to outperform other migration modes, based on the observation that individual cancer cells can take advantage of confinement to switch to an adhesion-independent form of locomotion. Yet, the physical origin of this behaviour has remained elusive and the mechanisms behind the emergence of coherent flows in populations of invading cells under confinement are presently unknown. Here we demonstrate that human fibrosarcoma cells (HT1080) confined in narrow stripe-shaped regions undergo collective migration by virtue of a novel type of topological edge currents, resulting from the interplay between liquid crystalline (nematic) order, microscopic chirality and topological defects. Thanks to a combination of in vitro experiments and theory of active hydrodynamics, we show that, while heterogeneous and chaotic in the bulk of the channel, the spontaneous flow arising in confined populations of HT1080 cells is rectified along the edges, leading to long-ranged collective cell migration, with broken chiral symmetry. These edge currents are fuelled by layers of +1/2 topological defects, orthogonally anchored at the channel walls and acting as local sources of chiral active stress. Our work highlights the profound correlation between confinement and collective migration in multicellular systems and suggests a possible mechanism for the emergence of directed motion in metastatic cancer.
|
condensed matter
|
We study the density of the set $\operatorname{SNA}(M,Y)$ of those Lipschitz maps from a (complete pointed) metric space $M$ to a Banach space $Y$ which strongly attain their norm (i.e.\ the supremum defining the Lipschitz norm is actually a maximum). We present new and somewhat counterintuitive examples, and we give some applications. First, we show that $\operatorname{SNA}(\mathbb T,Y)$ is not dense in ${\mathrm{Lip}}_0(\mathbb T,Y)$ for any Banach space $Y$, where $\mathbb T$ denotes the unit circle in the Euclidean plane. This provides the first example of a Gromov concave metric space (i.e.\ every molecule is a strongly exposed point of the unit ball of the Lipschitz-free space) for which the density does not hold. Next, we construct metric spaces $M$ satisfying that $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ regardless of $Y$ but which contain an isometric copy of $[0,1]$, so that the Lipschitz-free space $\mathcal F(M)$ fails the Radon--Nikod\'{y}m property, answering a previously posed question in the negative. Furthermore, an example $M$ can be produced failing all the previously known sufficient conditions for the density of strongly norm attaining Lipschitz maps. Finally, among other applications, we prove that given a compact metric space $M$ which does not contain any isometric copy of $[0,1]$ and a Banach space $Y$, if $\operatorname{SNA}(M,Y)$ is dense, then $\operatorname{SNA}(M,Y)$ actually contains an open dense subset and $B_{\mathcal F(M)}=\overline{{\mathrm{co}}}(\operatorname{str-exp}(B_{\mathcal F(M)}))$. Further, we show that if $M$ is a boundedly compact metric space for which $\operatorname{SNA}(M,\mathbb R)$ is dense in ${\mathrm{Lip}}_0(M,\mathbb R)$, then the unit ball of the Lipschitz-free space on $M$ is the closed convex hull of its strongly exposed points.
|
mathematics
|
Precision measurements of the positron flux in cosmic ray have revealed an unexplained bump in the spectrum around $E\simeq 300\,\mathrm{GeV}$, not clearly attributable to known astrophysical processes. We propose annihilation of dark matter of mass $m_\chi = 780\,\mathrm{GeV}$ with a late-time cross section $\sigma v = 4.63\times 10^{-24}\,\mathrm{cm^3\,s^{-1}}$ as a possible source. The nonmonotonic dependence of the annihilation rate on dark matter velocity, owing to a selective $p$-wave Sommerfeld enhancement, allows such a large signal from the Milky Way without violating corresponding constraints from CMB and dwarf galaxy observations. We briefly explore other signatures of this scenario, and outline avenues to test it in future experiments.
|
high energy physics phenomenology
|
The LHC is undergoing a high luminosity upgrade, which is set to increase the instantaneous luminosity by at least a factor of five, resulting in a higher muon flux rate in the forward region, which will overwhelm the current trigger system of the CMS experiment. The ME0, a gas electron multiplier detector, is proposed for the Phase-2 Muon System Upgrade to help increase the muon acceptance and to control the Level 1 muon trigger rate. To lower the probability of HV discharges, the ME0 was designed with GEM foils that are segmented on both sides. Initial testing of the ME0 showed substantial crosstalk between readout sectors. Here, we investigate, characterize, and quantify the crosstalk in the detector, and estimate the performance of the chamber as a result of this crosstalk via simulation of the detector dead time, efficiency loss, and frontend electronics response. The results of crosstalk via signals produced by applying a square voltage pulse directly on the readout strips of the detector with a pulser are summarized, and the efficacy of various mitigation strategies are presented. The crosstalk is a result of capacitive coupling between the readout strips on the readout board and between the readout strips and the bottom of GEM3. The crosstalk also generally follows a pattern where the largest magnitude of crosstalk is within the same azimuthal readout segment in the detector and in the nearest horizontal segments. The use of bypass capacitors and larger HV segments successfully reduce the crosstalk: we observe a maximum decrease of crosstalk in sectors previously experiencing crosstalk from $(1.66\pm0.03)\%$ to $(1.11\pm0.02)\%$ with all HV segments connected in parallel on the bottom of GEM3, with an HV low-pass filter, and an HV divider. These mitigation strategies slightly increase crosstalk ($\lessapprox 0.4\%$) in readout sectors farther away.
|
physics
|
In this work, we use the pQCD approach to calculate 20 $B_{(s)}\to D^*_{s0}(2317)P(V)$ two-body decays by assuming $D^*_{s0}(2317)$ to be a $\bar cs$ scalar meson, where $P(V)$ denotes a pseudoscalar (vector) meson. These $B_{(s)}$ decays can serve as an ideal platform to probe valuable information about the inner structure of the charmed-strange meson $D^*_{s0}(2317)$, and to explore the dynamics of strong interactions and signals of new physics. The considered decays can be divided into two types: the CKM-favored decays and the CKM-suppressed decays. The former are induced by the $b\to c$ transition, and their branching ratios are larger than $10^{-5}$. The branching fraction of the decay $\bar B^0_s\to D^{*+}_{s0}(2317)\rho^{-}$ is the largest and reaches about $1.8\times 10^{-3}$, while the branching ratios for the decay $\bar B^0_s\to D^{*+}_{s0}(2317)K^{*-}$ and the other two pure annihilation decays $\bar B^0\to D^{*+}_{s0}(2317)K^-, D^{*+}_{s0}(2317)K^{*-}$ are only of order $10^{-5}$. Our predictions agree well with the results given by the light-cone sum rules approach. These decays are most likely to be measured at the running LHCb and the forthcoming SuperKEKB. The latter are induced by the $b\to u$ transition, among which the channel $\bar B^0\to D^{*-}(2317)\rho^+$ has the largest branching fraction, reaching order $10^{-5}$. Again, the pure annihilation decays $B^-\to D^{*-}_{s0}(2317)\phi, \bar B^0\to D^{*-}_{s0}(2317)K^+(K^{*+}), B^-\to D^{*-}_{s0}(2317)K^0(K^{*0})$ have the smallest branching ratios, which drop to as low as $10^{-10}\sim10^{-8}$.
|
high energy physics phenomenology
|
A powerful control method in experimental quantum computing is the use of spin echoes, employed to select a desired term in the system's internal Hamiltonian, while refocusing others. Here we address a more general problem, describing a method to not only turn on and off particular interactions but also to rescale their strengths so that we can generate any desired effective internal Hamiltonian. We propose an algorithm based on linear programming for achieving time-optimal rescaling solutions in fully coupled systems of tens of qubits, which can be modified to obtain near time-optimal solutions for rescaling systems with hundreds of qubits.
|
quantum physics
|
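The abstract above casts time-optimal Hamiltonian rescaling as a linear program. A hedged sketch of that formulation: given toggling frames that scale each coupling by a known factor (e.g. $\pm1$ under spin echoes), find nonnegative frame durations of minimal total time that realize the target effective couplings. The encoding of frames as a matrix is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linprog

def time_optimal_durations(S, target):
    """Solve  min sum_k t_k  s.t.  S^T t = target,  t >= 0.

    S      : (n_frames, n_couplings) matrix; S[k, ij] is the factor by
             which frame k scales coupling ij relative to the bare
             Hamiltonian (e.g. +1/-1 from spin echoes).
    target : (n_couplings,) desired effective coupling times total time.
    """
    n_frames = S.shape[0]
    res = linprog(c=np.ones(n_frames),          # total time = sum of durations
                  A_eq=S.T, b_eq=target,
                  bounds=[(0, None)] * n_frames,
                  method="highs")
    if not res.success:
        raise ValueError("target Hamiltonian not reachable with these frames")
    return res.x                                # optimal frame durations
```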
In this paper, we address the challenging task of estimating the 6D object pose from a single RGB image. Motivated by deep learning based object detection methods, we propose a concise and efficient network that integrates 6D object pose parameter estimation into the object detection framework. Furthermore, to make the estimation more robust to occlusion, a non-local self-attention module is introduced. The experimental results show that the proposed method reaches state-of-the-art performance on the YCB-Video and LINEMOD datasets.
|
computer science
|
Aims. Our aim is to estimate the intergalactic medium (IGM) transmission towards UV-selected star-forming galaxies at redshift 4 and above and to study the effect of dust attenuation on these measurements. Methods. The ultraviolet spectrum of high redshift galaxies is a combination of their intrinsic emission and the effect of IGM absorption along the line of sight. Using data from the unprecedentedly deep spectroscopy of the VANDELS ESO public survey carried out with the VIMOS instrument, we compute both the dust extinction and the mean transmission of the IGM, as well as its scatter, from a set of 281 galaxies at z>3.87. Because of a degeneracy between the dust content of the galaxy and the IGM, we first estimate the stellar dust extinction parameter E(B-V) and study the result as a function of the dust prescription. Using these measurements as constraints for the spectral fit, we estimate the IGM transmission Tr(Lyalpha). Both photometric and spectroscopic SED fitting are done using the SPectroscopy And photometRy fiTting tool for Astronomical aNalysis (SPARTAN), which is able to fit the spectral continuum of the galaxies as well as photometric data. Results. Using the classical Calzetti attenuation law, we find that E(B-V) goes from 0.11 at z=3.99 to 0.08 at z=5.15. These results are in very good agreement with previous measurements from the literature. We estimate the IGM transmission and find that it decreases with increasing redshift, from Tr(Lyalpha)=0.53 at z=3.99 to 0.28 at z=5.15. We also find a large standard deviation around the average transmission, more than 0.1 at every redshift. Our results are in very good agreement with both previous measurements from AGN studies and theoretical models.
|
astrophysics
|
The single-layered ruthenate Sr$_2$RuO$_4$ is one of the most enigmatic unconventional superconductors. While for many years it was thought to be the best candidate for a chiral $p$-wave superconducting ground state, desirable for topological quantum computations, recent experiments suggest a singlet state, ruling out the original $p$-wave scenario. The superconductivity as well as the properties of the multi-layered compounds of the ruthenate perovskites are strongly influenced by a van Hove singularity in proximity of the Fermi energy. Tiny structural distortions move the van Hove singularity across the Fermi energy with dramatic consequences for the physical properties. Here, we determine the electronic structure of the van Hove singularity in the surface layer of Sr$_2$RuO$_4$ by quasiparticle interference imaging. We trace its dispersion and demonstrate from a model calculation accounting for the full vacuum overlap of the wave functions that its detection is facilitated through the octahedral rotations in the surface layer.
|
condensed matter
|
We develop a language-guided navigation task set in a continuous 3D environment where agents must execute low-level actions to follow natural language navigation directions. By being situated in continuous environments, this setting lifts a number of assumptions implicit in prior work that represents environments as a sparse graph of panoramas with edges corresponding to navigability. Specifically, our setting drops the presumptions of known environment topologies, short-range oracle navigation, and perfect agent localization. To contextualize this new task, we develop models that mirror many of the advances made in prior settings as well as single-modality baselines. While some of these techniques transfer, we find significantly lower absolute performance in the continuous setting -- suggesting that performance in prior `navigation-graph' settings may be inflated by the strong implicit assumptions.
|
computer science
|
In this paper we study dynamical systems generated by an evolution operator of a dioecious population. This evolution operator is a six-parameter, non-linear operator mapping $[0,1]^2$ to itself. We find all fixed points, and under some conditions on the parameters we give the limit points of trajectories constructed by iterations of the evolution operator.
|
mathematics
|
The confinement/deconfinement transition described by the Polyakov-Nambu-Jona-Lasinio (PNJL) model is extended to be operative in the zero-temperature regime. In this study, the scalar and vector channel interaction strengths of the original PNJL model are modified by introducing a dependence on the traced Polyakov loop. In this way the effective interactions depend on the quark phase and in turn provide a backreaction of the quarks on the gluonic sector, also at zero temperature. On general grounds, this is an expected feature of quantum chromodynamics. The thermodynamics of the extended model (PNJL0) is studied in detail. With a suitable choice of the Polyakov potential, it presents a first-order confined/deconfined quark phase transition even at $T=0$. We also show that the vector channel plays an important role in allowing $\Phi\ne0$ solutions of the PNJL0 model. Furthermore, the sensitivity of the combined quarkyonic and deconfinement phases to the vector interaction strength and the proposed parametrization of the Polyakov-loop potential at $T=0$ allows us to set a window for the bulk values of the relevant parameters.
|
high energy physics phenomenology
|
We study the phase transitions of the three-dimensional (3D) classical O(3) model and the two-dimensional (2D) classical XY model, as well as the quantum phase transitions of 2D and 3D dimerized spin-1/2 antiferromagnets, using supervised neural network (NN) techniques. Unlike the conventional approaches commonly used in the literature, the training sets employed in our investigation are neither the theoretical nor the real configurations of the considered systems. Remarkably, with such an unconventional setup of the training stage, in conjunction with semi-experimental finite-size scaling formulas, the critical points determined by the NN method agree well with the established results in the literature. The outcomes obtained here imply that certain unconventional training strategies, like the one used in this study, are not only cost-effective in computation but also applicable to a wide range of physical systems.
|
condensed matter
|
We report an infinite number of orthonormal wave function bases for the quantum problem of a free particle in the presence of an applied external magnetic field. Each set of orthonormal wave functions (basis) is labeled by an integer $p$, which is the number of magnetic fluxons trapped in the unit cell. These bases are suitable to describe particles whose probability density is periodic and defines a lattice in position space. The present bases of orthonormal wave functions unveil fractional effects, since the number of particles in the unit cell is independent of the number of trapped fluxons. For a single particle under $p$ fluxes in the unit cell, and confined to the lowest Landau level, the probability density vanishes at $p$ points, and thus each zero is associated with a fraction $1/p$ of the particle. Remarkably, in the case of $n+1$ filled Landau levels, hence with a total of $N=(n+1)p$ fermions ($n$ being the highest filled Landau level), the density displays an egg-box pattern with $p^2$ maxima (minima), which means that a fraction $(n+1)/p$ of flux is associated with every one of these maxima (minima). We also consider the case of particles interacting through the magnetic field energy created by their own motion and find an attractive interaction among them in case they are confined to the lowest Landau level ($n=0$). The well-known de Haas-van Alphen oscillations are retrieved within the present orthonormal basis of wave functions, thus providing evidence of its correctness.
|
condensed matter
|
We propose a technique to develop (and localize in) topological maps from light detection and ranging (Lidar) data. Localizing an autonomous vehicle with respect to a reference map in real-time is crucial for its safe operation. Owing to the rich information provided by Lidar sensors, these are emerging as a promising choice for this task. However, since a Lidar outputs a large amount of data every fraction of a second, it is progressively harder to process the information in real-time. Consequently, current systems have migrated towards faster alternatives at the expense of accuracy. To overcome this inherent trade-off between latency and accuracy, we propose a technique to develop topological maps from Lidar data using the orthogonal Tucker3 tensor decomposition. Our experimental evaluations demonstrate that in addition to achieving a high compression ratio as compared to full data, the proposed technique, $\textit{TensorMap}$, also accurately detects the position of the vehicle in a graph-based representation of a map. We also analyze the robustness of the proposed technique to Gaussian and translational noise, thus initiating explorations into potential applications of tensor decompositions in Lidar data analysis.
|
electrical engineering and systems science
|
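A minimal sketch of the orthogonal Tucker3 compression underlying $\textit{TensorMap}$ in the abstract above, using the TensorLy library (an assumed tool; the paper's implementation, tensor layout, and ranks are not specified in the abstract):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Compress a Lidar-derived tensor (e.g. range x azimuth x frame) with an
# orthogonal Tucker3 decomposition; shapes and ranks here are illustrative.
X = tl.tensor(np.random.rand(64, 360, 100))
core, factors = tucker(X, rank=[8, 16, 10])      # core + one factor per mode

compressed = core.size + sum(f.size for f in factors)
print("compression ratio:", X.size / compressed)

X_hat = tl.tucker_to_tensor((core, factors))     # reconstruction for comparison
err = tl.norm(X - X_hat) / tl.norm(X)
print("relative reconstruction error:", float(err))
```

Matching a query scan against stored cores/factors in the compressed domain is what avoids processing the full point cloud at localization time.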
We prove three new versions of Stone Duality. The main version is the following: the category of Kolmogorov locally small spaces and bounded continuous mappings is equivalent to the category of spectral spaces with decent lumps and with bornologies in the lattices of compact (not necessarily Hausdorff) open sets as objects and spectral mappings respecting the decent lumps and satisfying a boundedness condition as morphisms; it is also dually equivalent to the category of bounded distributive lattices with bornologies and with decent lumps of prime filters as objects and homomorphisms of bounded lattices respecting those decent lumps and satisfying a domination condition as morphisms. Some theory of strongly locally spectral spaces is developed.
|
mathematics
|
The breakup of water droplets when exposed to high-speed gas flows is investigated using both high-magnification shadowgraphy experiments as well as fully three-dimensional numerical simulations, which account for viscous as well as capillary effects. After thorough validation of the simulations with respect to the experiments, we elucidate the ligament formation process and the effect of surface tension. By Fourier decomposition of the flow field, we observe the development of specific azimuthal modes, which destabilize the liquid sheet surrounding the droplet. Eventually, the liquid sheet is ruptured, which leads to the formation of ligaments. We further observe the ligament formation and shedding to be a recurrent process. While the first ligament shedding weakly depends on the Weber number, subsequent shedding processes seem to be driven primarily by inertia and the vortex shedding in the wake of the deformed droplet.
|
physics
|
The autoencoder model uses an encoder to map data samples to a lower-dimensional latent space and then a decoder to map the latent space representations back to the data space. Implicitly, it relies on the encoder to approximate the inverse of the decoder network, so that samples can be mapped to and back from the latent space faithfully. This approximation may lead to sub-optimal latent space representations. In this work, we investigate a decoder-only method, gradient flow encoding (GFE), that uses gradient flow to encode data samples in the latent space. The gradient flow is defined based on a given decoder and aims to find the optimal latent space representation for any given sample through optimisation, eliminating the need for an approximate inversion through an encoder. Implementing gradient flow through ordinary differential equations (ODEs), we leverage the adjoint method to train a given decoder. We further show empirically that the costly integrals in the adjoint method may not be entirely necessary. Additionally, we propose a $2^{nd}$-order ODE variant of the method, which approximates Nesterov's accelerated gradient descent, with faster convergence per iteration. Commonly used ODE solvers can be quite sensitive to the integration step size, depending on the stiffness of the ODE. To overcome this sensitivity for gradient flow encoding, we use an adaptive solver that prioritises minimising the loss at each integration step. We assess the proposed method in comparison to the autoencoding model. In our experiments, GFE showed much higher data efficiency than the autoencoding model, which can be crucial for data-scarce applications.
|
statistics
|
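The gradient-flow encoding idea in the abstract above amounts to optimizing the latent code by following the gradient of the reconstruction loss. Here is a minimal PyTorch sketch using a fixed-step explicit Euler discretization and a toy linear decoder; the paper itself uses an adaptive, loss-prioritising solver and the adjoint method for training, neither of which is reproduced here.

```python
import torch

def gradient_flow_encode(decoder, x, steps=200, dt=0.1, latent_dim=16):
    """Encode x by integrating the gradient flow of the reconstruction
    loss in latent space (explicit Euler stand-in for an ODE solver)."""
    z = torch.zeros(x.shape[0], latent_dim, requires_grad=True)
    for _ in range(steps):
        loss = ((decoder(z) - x) ** 2).sum()
        (grad,) = torch.autograd.grad(loss, z)
        z = (z - dt * grad).detach().requires_grad_(True)  # dz = -grad(L) dt
    return z.detach()

# Toy decoder: linear map from a 16-d latent space to a 784-d data space.
decoder = torch.nn.Linear(16, 784)
x = torch.randn(8, 784)
z = gradient_flow_encode(decoder, x)
print(z.shape)  # torch.Size([8, 16])
```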
A suspended layer made up of ferromagnetically ordered spins could be created between two mono/multilayer graphene sheets through intercalation. Stability and electronic structure studies show that, when fluorine molecules are intercalated between two mono/multilayer graphene sheets, their bonds get stretched enough ($\sim$ 1.9$-$2.0 {\AA}) to weaken their molecular singlet eigenstate. Geometrically, these stretched molecules form a pseudoatomized fluorine layer by maintaining a van der Waals separation of $\sim$ 2.6 {\AA} from the adjacent carbon layers. As there is a significant charge transfer from the adjacent carbon layers to the fluorine layer, a mixture of triplet and doublet states stabilizes to induce local spin moments at each fluorine site, which in turn form a suspended 2D spin lattice. The spins of this lattice align ferromagnetically with a nearest-neighbour coupling strength as large as $\sim$ 100 meV. Our finite-temperature \textit{ab initio} molecular dynamics study reveals that the intercalated system can be stabilized up to a temperature of 100 K with an average magnetic moment of $\sim$ 0.6 $\mu_{B}$/F. However, if the graphene layers can be held fixed, the room-temperature stability of such a system is feasible.
|
condensed matter
|
We propose a general Zeeman slower scheme applicable to the majority of laser-coolable molecules. Different from previous schemes, the key idea of our scheme is that the detuning is compensated by the magnetic field for the repumping laser instead of the cooling laser. Only atoms or molecules with the right velocity will be repumped and laser slowed. Such a scheme is more feasible for molecules with complex energy structures. We apply this scheme to molecules with a large Land\'e g-factor of the excited states and to polyatomic molecules, and it shows a better slowing efficiency.
|
physics
|
We propose a new state-of-charge (SOC) estimation algorithm based on the extended Kalman filter (EKF) and an improved Thevenin battery model. Experiments were carried out to verify its validity with seven 4 Ah lithium cobalt oxide batteries in series. The experimental results show that, when using the algorithm, the SOC estimation error remains within the allowed error range, so the requirement of online SOC estimation can be satisfied.
|
electrical engineering and systems science
|
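A minimal sketch of one EKF predict/update step for a first-order Thevenin model may help make the abstract above concrete. All parameter values and the open-circuit-voltage curve below are illustrative assumptions, not the paper's identified battery parameters.

```python
import numpy as np

# Illustrative first-order Thevenin parameters (not the paper's values).
Q, R0, R1, C1, dt = 4.0 * 3600, 0.05, 0.015, 2400.0, 1.0  # As, ohm, ohm, F, s
a = np.exp(-dt / (R1 * C1))

def ocv(soc):   # hypothetical linear open-circuit-voltage curve
    return 3.4 + 0.7 * soc

def docv(soc):  # its derivative with respect to SOC
    return 0.7

def ekf_step(x, P, i_k, v_meas, Qn=np.diag([1e-7, 1e-5]), Rn=1e-3):
    """One EKF predict/update for state x = [SOC, V1], current i_k (A, discharge > 0)."""
    # Predict: linear state dynamics of the Thevenin model.
    F = np.array([[1.0, 0.0], [0.0, a]])
    x = np.array([x[0] - i_k * dt / Q, a * x[1] + R1 * (1 - a) * i_k])
    P = F @ P @ F.T + Qn
    # Update with the terminal-voltage measurement v = OCV(SOC) - V1 - R0*i.
    H = np.array([[docv(x[0]), -1.0]])
    y = v_meas - (ocv(x[0]) - x[1] - R0 * i_k)   # innovation
    S = H @ P @ H.T + Rn
    K = (P @ H.T) / S
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```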
This article reviews the extraordinary features of quantum information predicted by the quantum formalism, which, combined with the development of modern quantum technologies, have opened new horizons in quantum physics that can potentially affect various areas of our lives, leading to new technologies such as quantum cybersecurity, quantum communication, quantum metrology, and quantum computation.
|
quantum physics
|
Structured equations are a standard modeling tool in mathematical biology. They are integro-differential equations where the unknown depends on one or several variables, representing the state or phenotype of individuals. A large literature has been devoted to many aspects of these equations and in particular to the study of measure solutions. Here we introduce a transport distance closely related to the Monge-Kantorovich distance, which appears to be non-expanding for several (mainly linear) examples of structured equations.
|
mathematics
|
We investigate the capability of the DUNE Near Detector (ND) to constrain the non-standard interaction (NSI) parameters describing the production of neutrinos ($\varepsilon_{\alpha\beta}^s$) and their detection ($\varepsilon_{\alpha\beta}^d$). We show that the DUNE ND is able to reject a large portion of the parameter space allowed by DUNE Far Detector analyses and to set the most stringent bounds from accelerator neutrino experiments on $|\varepsilon_{\mu e}^{s,d}|$ and $|\varepsilon_{\mu\tau}^{s,d}|$ for wide intervals of the related phases. We also provide a simple analytic understanding of our results as well as their dependence on the data-taking time, showing that the DUNE ND offers a theoretically clean environment in which to study source and detector NSI.
|
high energy physics phenomenology
|
Let $M$ be a compact Riemannian manifold and $h$ a smooth function on $M$. Let $\rho^h(x)=\inf_{|v|=1}\left(Ric_x(v,v)-2Hess(h)_x(v,v)\right)$, where $Ric_x$ denotes the Ricci curvature at $x$ and $Hess(h)$ is the Hessian of $h$. Then $M$ has finite fundamental group if $\Delta^h-\rho^h<0$, where $\Delta^h := \Delta+2L_{\nabla h}$ is the Bismut-Witten Laplacian. This leads to a quick proof of recent results on extensions of Myers' theorem to manifolds with mostly positive curvature. There is also a similar result for noncompact manifolds.
|
mathematics
|
We discuss how standard $T_2$-based quantum sensing and noise spectroscopy protocols often give rise to an inadvertent quench of the system or environment being probed: there is an effective sudden change in the environmental Hamiltonian at the start of the sensing protocol. These quenches are extremely sensitive to the initial environmental state, and lead to observable changes in the sensor qubit evolution. We show how these new features can be used to directly access environmental response properties. This enables methods for direct measurement of bath temperature, and methods to diagnose non-thermal equilibrium states. We also discuss techniques that allow one to deliberately control and modulate this quench physics, which enables reconstruction of the bath spectral function. Extensions to non-Gaussian quantum baths are also discussed, as is the direct applicability of our ideas to standard diamond NV-center based quantum sensing platforms.
|
quantum physics
|
Relation extraction from text is an important task for automatic knowledge base population. In this thesis, we first propose a syntax-focused multi-factor attention network model for finding the relation between two entities. Next, we propose two joint entity and relation extraction frameworks based on encoder-decoder architecture. Finally, we propose a hierarchical entity graph convolutional network for relation extraction across documents.
|
computer science
|
This paper addresses the average cost minimization problem for discrete-time systems with multiplicative and additive noises via reinforcement learning. Using the Q-function, we propose an online learning scheme to estimate the kernel matrix of the Q-function and to update the control gain using data along the system trajectories. The obtained control gain and kernel matrix are proved to converge to the optimal ones. To implement the proposed learning scheme, an online model-free reinforcement learning algorithm is given, in which the recursive least squares method is used to estimate the kernel matrix of the Q-function. A numerical example is presented to illustrate the proposed approach.
|
electrical engineering and systems science
|
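The recursive least squares step mentioned in the abstract above can be sketched as follows, assuming a quadratic Q-function $Q(x,u) = z^\top H z$ with $z = (x, u)$, so that the kernel matrix $H$ is linear in the quadratic monomials of $z$. The feature construction and the average-cost Bellman target are a plausible reading of the setup, not the paper's exact formulation.

```python
import numpy as np

def quad_features(z):
    """Quadratic basis for Q(x,u) = z' H z: upper-triangular monomials of z = (x, u)."""
    idx = np.triu_indices(len(z))
    return np.outer(z, z)[idx]

class RLS:
    """Recursive least squares: fit theta so that phi' theta tracks the targets."""
    def __init__(self, dim, p0=1e3):
        self.theta = np.zeros(dim)
        self.P = p0 * np.eye(dim)

    def update(self, phi, target):
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)                  # gain vector
        self.theta += k * (target - phi @ self.theta)  # innovation correction
        self.P -= np.outer(k, Pphi)

# Along a trajectory, the average-cost Bellman equation
# Q(z_k) - Q(z_{k+1}) = c(z_k) - lambda gives one regression sample:
#   phi = quad_features(z_k) - quad_features(z_next), target = c_k - lambda_hat.
rls = RLS(dim=len(np.triu_indices(3)[0]))              # e.g. dim(z) = 3
```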
Millimeter-wave (30-300 GHz) and Terahertz-band communications (0.3-10 THz) are envisioned as key wireless technologies to satisfy the demand for Terabit-per-second (Tbps) links in the 5G and beyond eras. The very large available bandwidth in this ultra-broadband frequency range comes at the cost of a very high propagation loss, which combined with the low power of mm-wave and THz-band transceivers limits the communication distance and data-rates. In this paper, the concept of intelligent communication environments enabled by Ultra-Massive MIMO platforms is proposed to increase the communication distance and data-rates at mm-wave and THz-band frequencies. An end-to-end physical model is developed by taking into account the capabilities of novel intelligent plasmonic antenna arrays which can operate in transmission, reception, reflection and waveguiding, as well as the peculiarities of the mm-wave and THz-band multi-path channel. Based on the developed model, extensive quantitative results for different scenarios are provided to illustrate the performance improvements in terms of both achievable distance and data-rate in Ultra-Massive MIMO environments.
|
electrical engineering and systems science
|
We present the first polarimetric detection of the inner disk component around the pre-main sequence B9.5 star HD 141569A. Gemini Planet Imager H-band (1.65 micron) polarimetric differential imaging reveals the highest signal-to-noise ratio detection of this ring yet attained and traces structure inwards to 0.25" (28 AU at a distance of 111 pc). The radial polarized intensity image shows the east side of the disk, peaking in intensity at 0.40" (44 AU) and extending out to 0.9" (100 AU). There is a spiral arm-like enhancement to the south, reminiscent of the known spiral structures on the outer rings of the disk. The location of the spiral arm is coincident with 12CO J=3-2 emission detected by ALMA, and hints at a dynamically active inner circumstellar region. Our observations also show a portion of the middle dusty ring at ~220 AU known from previous observations of this system. We fit the polarized H-band emission with a continuum radiative transfer Mie model. Our best-fit model favors an optically thin disk with a minimum dust grain size close to the blow-out size for this system: evidence of on-going dust production in the inner reaches of the disk. The thermal emission from this model accounts for virtually all of the far-infrared and millimeter flux from the entire HD 141569A disk, in agreement with the lack of ALMA continuum and CO emission beyond ~100 AU. A remaining 8-30 micron thermal excess a factor of ~2 above our model argues for a yet-unresolved warm innermost 5-15 AU component of the disk.
|
astrophysics
|
The National Health and Nutrition Examination Survey (NHANES) is a major program of the National Center for Health Statistics, designed to assess the health and nutritional status of adults and children in the United States. The analysis of NHANES dental caries data faces several challenges, including (1) the data were collected using a complex, multistage, stratified, unequal-probability sampling design; (2) the sample size of some primary sampling units (PSU), e.g., counties, is very small; (3) the measures of dental caries have complicated structure and correlation, and (4) there is a substantial percentage of nonresponses, for which the missing data are expected to be not missing at random or non-ignorable. We propose a Bayesian hierarchical spatial model to address these analysis challenges. We develop a two-level Potts model that closely resembles the caries evolution process and captures complicated spatial correlations between teeth and surfaces of the teeth. By adding Bayesian hierarchies to the Potts model, we account for the multistage survey sampling design and also enable information borrowing across PSUs for small area estimation. We incorporate sampling weights by including them as a covariate in the model and adopt flexible B-splines to achieve robust inference. We account for non-ignorable missing outcomes and covariates using the selection model. We use data augmentation coupled with the noisy exchange sampler to obtain the posterior of model parameters that involve doubly-intractable normalizing constants. Our analysis results show strong spatial associations between teeth and tooth surfaces and that dental hygienic factors, fluorosis and sealant reduce the risks of having dental diseases.
|
statistics
|
We study the generation of correlated photon pairs and heralded single photons via strongly non-degenerate spontaneous four-wave mixing (SFWM) in a series of identical micro-/nanofibers (MNFs). The joint spectral intensity of the biphoton field generated at wavelengths of about 880 nm and 1310 nm has been measured under excitation by 100 ps laser pulses, demonstrating good agreement with the theoretical prediction. The measured zero-time second-order autocorrelation function was about 0.2 when the emission rate of the heralded photons was 4 Hz. The MNF-based source perfectly matches standard single-mode fibers, which makes it compatible with existing fiber communication networks. In addition, observing SFWM in a series of identical MNFs allows increasing the generation rate of single photons via spatial multiplexing.
|
quantum physics
|
We investigate in detail charged Higgs boson production in association with a $W$ boson at electron-positron colliders within the framework of the Type-I two-Higgs-doublet model (THDM). We calculate the integrated cross section at the LO and analyze the dependence of the cross section on the THDM parameters and the colliding energy in a benchmark scenario of the input parameters of the Higgs sector. The numerical results show that the integrated cross section is sensitive to the charged Higgs mass, especially in the vicinity of $m_{H^{\pm}} \simeq 184~ {\rm GeV}$ at a $500~ {\rm GeV}$ $e^+e^-$ collider, and decreases consistently as $\tan\beta$ increases in the low $\tan\beta$ region. The peak in the colliding-energy distribution of the cross section arises from the resonance of loop integrals, and its position moves towards lower colliding energy as $m_{H^{\pm}}$ increases. We also study the two-loop NLO QCD corrections to both the integrated cross section and the angular distribution of the charged Higgs boson, and find that the QCD relative correction is also sensitive to the charged Higgs mass and strongly depends on the final-state phase space. For $\tan\beta = 2$, the QCD relative correction at a $500~ {\rm GeV}$ $e^+e^-$ collider varies in the range $[-10\%,\, 11\%]$ as $m_{H^{\pm}}$ increases from $150$ to $400~ {\rm GeV}$.
|
high energy physics phenomenology
|
We develop a Bayesian analysis method for selecting the most probable equation of state under a set of constraints from compact star physics, which now include the tidal deformability from GW170817. We apply this method for the first time to a two-parameter family of hybrid equations of state that is based on realistic models for the hadronic phase (KVORcut02) and the quark matter phase (SFM$\alpha$) which produce a third family of hybrid stars in the mass-radius diagram. One parameter ($\alpha$) characterizes the screening of the string tension in the string-flip model of quark matter while the other ($\Delta_P$) belongs to the mixed phase construction that mimics the thermodynamics of pasta phases and includes the Maxwell construction as a limiting case for $\Delta_P=0$. We present the corresponding results for compact star properties like mass, radius and tidal deformabilities and use empirical data for them in the newly developed Bayesian analysis method to obtain the probabilities for the model parameters within their considered range.
|
astrophysics
|
Rank minimization is of interest in machine learning applications such as recommender systems and robust principal component analysis. Minimizing the convex relaxation to the rank minimization problem, the nuclear norm, is an effective technique to solve the problem with strong performance guarantees. However, nonconvex relaxations have less estimation bias than the nuclear norm and can more accurately reduce the effect of noise on the measurements. We develop efficient algorithms based on iteratively reweighted nuclear norm schemes, while also utilizing the low rank factorization for semidefinite programs put forth by Burer and Monteiro. We prove convergence and computationally show the advantages over convex relaxations and alternating minimization methods. Additionally, the computational complexity of each iteration of our algorithm is on par with other state of the art algorithms, allowing us to quickly find solutions to the rank minimization problem for large matrices.
|
mathematics
|
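As a concrete illustration of the iteratively reweighted nuclear norm idea above, here is a small NumPy sketch for matrix completion; the specific weight rule and the parameters tau and gamma are illustrative choices, and the Burer-Monteiro low-rank factorization the paper also exploits is omitted for brevity.

```python
import numpy as np

def irnn_complete(M, mask, iters=100, tau=1.0, gamma=10.0):
    """Matrix completion via iteratively reweighted nuclear norm.
    Nonconvex surrogate: weights w_i = 1/(sigma_i + gamma) shrink large
    singular values less than the plain nuclear norm does (less bias)."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        w = 1.0 / (s + gamma)                # smaller weight on large sigma_i
        s = np.maximum(s - tau * w, 0.0)     # weighted singular value thresholding
        X = (U * s) @ Vt
        X = np.where(mask, M, X)             # enforce observed entries
    return X

# Toy low-rank completion problem.
rng = np.random.default_rng(0)
truth = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 50))
mask = rng.random((50, 50)) < 0.5
rec = irnn_complete(truth, mask)
print(np.linalg.norm(rec - truth) / np.linalg.norm(truth))
```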
Both neurophysiological and psychophysical experiments have pointed out the crucial role of recurrent and feedback connections to process context-dependent information in the early visual cortex. While numerous models have accounted for feedback effects at either neural or representational level, none of them were able to bind those two levels of analysis. Is it possible to describe feedback effects at both levels using the same model? We answer this question by combining Predictive Coding (PC) and Sparse Coding (SC) into a hierarchical and convolutional framework. In this Sparse Deep Predictive Coding (SDPC) model, the SC component models the internal recurrent processing within each layer, and the PC component describes the interactions between layers using feedforward and feedback connections. Here, we train a 2-layered SDPC on two different databases of images, and we interpret it as a model of the early visual system (V1 & V2). We first demonstrate that once the training has converged, SDPC exhibits oriented and localized receptive fields in V1 and more complex features in V2. Second, we analyze the effects of feedback on the neural organization beyond the classical receptive field of V1 neurons using interaction maps. These maps are similar to association fields and reflect the Gestalt principle of good continuation. We demonstrate that feedback signals reorganize interaction maps and modulate neural activity to promote contour integration. Third, we demonstrate at the representational level that the SDPC feedback connections are able to overcome noise in input images. Therefore, the SDPC captures the association field principle at the neural level which results in better disambiguation of blurred images at the representational level.
|
computer science
|
Session types are formal specifications of communication protocols, allowing protocol implementations to be verified by typechecking. Up to now, session type disciplines have assumed that the communication medium is reliable, with no loss of messages. However, unreliable broadcast communication is common in a wide class of distributed systems such as ad-hoc and wireless sensor networks. Often such systems have structured communication patterns that should be amenable to analysis by means of session types, but the necessary theory has not previously been developed. We introduce the Unreliable Broadcast Session Calculus, a process calculus with unreliable broadcast communication, and equip it with a session type system that we show is sound. We capture two common operations, broadcast and gather, inhabiting dual session types. Message loss may lead to non-synchronised session endpoints. To further account for unreliability, we provide an autonomous recovery mechanism that does not require acknowledgements from session participants. Our type system ensures soundness, safety, and progress between the synchronised endpoints within a session. We demonstrate the expressiveness of our framework by implementing Paxos, the textbook protocol for reaching consensus in an unreliable, asynchronous network.
|
computer science
|
A combined measurement and Monte-Carlo simulation study was carried out in order to characterize the particle self-shielding effect of B4C grains in neutron shielding concrete. Several batches of a specialized neutron shielding concrete, with varying B4C grain sizes, were exposed to a 2 {\AA} neutron beam at the R2D2 test beamline at the Institute for Energy Technology located in Kjeller, Norway. The direct and scattered neutrons were detected with a neutron detector placed behind the concrete blocks and the results were compared to Geant4 simulations. The particle self-shielding effect was included in the Geant4 simulations by calculating effective neutron cross-sections during the Monte-Carlo simulation process. It is shown that this method reproduces the measured results well. Our results show that shielding calculations for low-energy neutrons using such materials would underestimate the shielding required for a given design scenario if the particle self-shielding effect is not included in the calculations.
|
physics
|
We present measurements of the $E$-mode ($EE$) polarization power spectrum and temperature-$E$-mode ($TE$) cross-power spectrum of the cosmic microwave background using data collected by SPT-3G, the latest instrument installed on the South Pole Telescope. This analysis uses observations of a 1500 deg$^2$ region at 95, 150, and 220 GHz taken over a four month period in 2018. We report binned values of the $EE$ and $TE$ power spectra over the angular multipole range $300 \le \ell < 3000$, using the multifrequency data to construct six semi-independent estimates of each power spectrum and their minimum-variance combination. These measurements improve upon the previous results of SPTpol across the multipole ranges $300 \le \ell \le 1400$ for $EE$ and $300 \le \ell \le 1700$ for $TE$, resulting in constraints on cosmological parameters comparable to those from other current leading ground-based experiments. We find that the SPT-3G dataset is well-fit by a $\Lambda$CDM cosmological model with parameter constraints consistent with those from Planck and SPTpol data. From SPT-3G data alone, we find $H_0 = 68.8 \pm 1.5 \mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $\sigma_8 = 0.789 \pm 0.016$, with a gravitational lensing amplitude consistent with the $\Lambda$CDM prediction ($A_L = 0.98 \pm 0.12$). We combine the SPT-3G and the Planck datasets and obtain joint constraints on the $\Lambda$CDM model. The volume of the 68% confidence region in six-dimensional $\Lambda$CDM parameter space is reduced by a factor of 1.5 compared to Planck-only constraints, with only slight shifts in central values. We note that the results presented here are obtained from data collected during just half of a typical observing season with only part of the focal plane operable, and that the active detector count has since nearly doubled for observations made with SPT-3G after 2018.
|
astrophysics
|
We study the statistical properties of the long-time dynamics of the rule 54 reversible cellular automaton (CA), driven stochastically at its boundaries. This CA can be considered as a discrete-time and deterministic version of the Fredrickson-Andersen kinetically constrained model (KCM). By means of a matrix product ansatz, we compute the exact large deviation cumulant generating functions for a wide range of time-extensive observables of the dynamics, together with their associated rate functions and conditioned long-time distributions over configurations. We show that for all instances of boundary driving the CA dynamics occurs at the point of phase coexistence between competing active and inactive dynamical phases, similar to what happens in more standard KCMs. We also find the exact finite size scaling behaviour of these trajectory transitions, and provide the explicit "Doob-transformed" dynamics that optimally realises rare dynamical events.
|
condensed matter
|
K2-291 (EPIC 247418783) is a solar-type star with a radius of R_star = 0.899 $\pm$ 0.034 R_sun and mass of M_star=0.934 $\pm$ 0.038 M_sun. From K2 C13 data, we found one super-Earth planet (R_p = 1.589+0.095-0.072 R_Earth) transiting this star on a short period orbit (P = 2.225177 +6.6e-5 -6.8e-5 days). We followed this system up with adaptive-optic imaging and spectroscopy to derive stellar parameters, search for stellar companions, and determine a planet mass. From our 75 radial velocity measurements using HIRES on Keck I and HARPS-N on Telescopio Nazionale Galileo, we constrained the mass of EPIC 247418783b to M_p = 6.49 $\pm$ 1.16 M_Earth. We found it necessary to model correlated stellar activity radial velocity signals with a Gaussian process in order to more accurately model the effect of stellar noise on our data; the addition of the Gaussian process also improved the precision of this mass measurement. With a bulk density of 8.84+2.50-2.03 g cm-3, the planet is consistent with an Earth-like rock/iron composition and no substantial gaseous envelope. Such an envelope, if it existed in the past, was likely eroded away by photo-evaporation during the first billion years of the star's lifetime.
|
astrophysics
|
In this note, we generalize the Fresnel integrals using oscillatory integrals, and then we obtain an extension of the stationary phase method.
|
mathematics
|
A statistically significant excess of gamma rays has been reported and robustly confirmed in the Galactic Center over the past decade. Large local dark matter densities suggest that this Galactic Center Excess (GCE) may be attributable to new physics, and indeed it has been shown that this signal is well-modelled by annihilations dominantly into $b\bar{b}$ with a WIMP-scale cross section. In this paper, we consider Majorana dark matter annihilating through a Higgs portal as a candidate source for this signal, where a large CP-violation in the Higgs coupling may serve to severely suppress scattering rates. In particular, we explore the phenomenology of two minimal UV completions, a singlet-doublet model and a doublet-triplet model, and map out the available parameter space which can give a viable signal while respecting current experimental constraints.
|
high energy physics phenomenology
|
In this work we argue that the power and effectiveness of the Bohrian approach to quantum mechanics is grounded on an inconsistent form of anti-realist realism which is responsible not only for the uncritical tolerance -- in physics -- towards the "standard" account of the theory of quanta, but also -- in philosophy -- of the alarming reproduction of quantum narratives. Niels Bohr's creative methodology can be exposed through the analysis of what John Archibald Wheeler called "the great smoky dragon". We will discuss the existence of such dragons within the "minimal interpretation" applied by physicists in the orthodox textbook formulation of quantum mechanics as well as within the many "supplementary interpretations" introduced by philosophers -- or philosophically inclined physicists -- in order to solve the infamous measurement problem. After analyzing the role of smoky dragons within both contemporary physics and philosophy of physics we will propose a general procedure grounded on a series of necessary theoretical conditions for producing adequate physical concepts that -- hopefully -- could be used as tools and weapons to capture and defeat these beautiful and powerful creatures.
|
physics
|
We prove that any finite system of interacting automata cannot leave some finite area of the Cayley graph of a periodic group. If a group has a non-periodic element, then its Cayley graph can be explored by a finite automaton with 3 pebbles. If a group is finitely generated and aperiodic, then it cannot be explored by any system of finite automata.
|
mathematics
|
We describe and test a family of new numerical methods to solve the Schrodinger equation in self-gravitating systems, e.g. Bose-Einstein condensates or 'fuzzy'/ultra-light scalar field dark matter. The methods are finite-volume Godunov schemes with stable, higher-order accurate gradient estimation, based on a generalization of recent mesh-free finite-mass Godunov methods. They couple easily to particle-based N-body gravity solvers (with or without other fluids, e.g. baryons), are numerically stable, and computationally efficient. Different sub-methods allow for manifest conservation of mass, momentum, and energy. We consider a variety of test problems and demonstrate that these can accurately recover solutions and remain stable even in noisy, poorly-resolved systems, with dramatically reduced noise compared to some other proposed implementations (though certain types of discontinuities remain challenging). This is non-trivial because the "quantum pressure" is neither isotropic nor positive-definite and depends on higher-order gradients of the density field. We implement and test the method in the code GIZMO.
|
astrophysics
|
One of the most important issues in the critical assessment of spatio-temporal stochastic models for epidemics is the selection of the transmission kernel used to represent the relationship between infectious challenge and spatial separation of infected and susceptible hosts. As the design of control strategies is often based on an assessment of the distance over which transmission can realistically occur and estimation of this distance is very sensitive to the choice of kernel function, it is important that models used to inform control strategies can be scrutinised in the light of observation in order to elicit possible evidence against the selected kernel function. While a range of approaches to model criticism are in existence, the field remains one in which the need for further research is recognised. In this paper, building on earlier contributions by the authors, we introduce a new approach to assessing the validity of spatial kernels - the latent likelihood ratio tests - and compare its capacity to detect model misspecification with that of tests based on the use of infection-link residuals. We demonstrate that the new approach, which combines Bayesian and frequentist ideas by treating the statistical decision maker as a complex entity, can be used to formulate tests with greater power than infection-link residuals to detect kernel misspecification particularly when the degree of misspecification is modest. This new approach avoids the use of a fully Bayesian approach which may introduce undesirable complications related to computational complexity and prior sensitivity.
|
statistics
|
Deep learning image classifiers usually rely on huge training sets, and their training process can be described as learning the similarities and differences among training images. However, images in large training sets are not usually studied from this perspective, and fine-level similarities and differences among images are usually overlooked. This is due to a lack of fast and efficient computational methods to analyze the contents of these datasets. Some studies aim to identify the influential and redundant training images, but such methods require a model that is already trained on the entire training set. Here, using image processing and numerical analysis tools, we develop a practical and fast method to analyze the similarities in image classification datasets. We show that such analysis can provide valuable insights about the datasets and the classification task at hand, prior to training a model. Our method uses wavelet decomposition of images and other numerical analysis tools, with no need for a pre-trained model. Interestingly, the results we obtain corroborate the previous results in the literature that analyzed the similarities using pre-trained CNNs. We show that similar images in standard datasets (such as CIFAR) can be identified in a few seconds, a significant speed-up compared to alternative methods in the literature. By removing the computational speed obstacle, it becomes practical to gain new insights about the contents of datasets and the models trained on them. We show that similarities between training and testing images may provide insights about the generalization of models. Finally, we investigate the similarities between images in relation to the decision boundaries of a trained model.
|
electrical engineering and systems science
|
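A minimal sketch of the wavelet-based similarity idea from the abstract above, using PyWavelets: compare images through compact signatures built from a coarse wavelet approximation. The choice of the Haar wavelet, the decomposition level, and cosine similarity are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np
import pywt

def wavelet_signature(img, wavelet="haar", level=3):
    """Compact signature from the low-frequency wavelet approximation of an image."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    approx = coeffs[0]                       # coarse approximation subband
    v = approx.ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-12)

def similarity(img_a, img_b):
    """Cosine similarity between wavelet signatures (fast: no trained model needed)."""
    return float(wavelet_signature(img_a) @ wavelet_signature(img_b))

# Toy 32x32 "images": a pattern, a near-duplicate, and unrelated noise.
rng = np.random.default_rng(1)
base = rng.random((32, 32))
near = base + 0.05 * rng.standard_normal((32, 32))
print(similarity(base, near), similarity(base, rng.random((32, 32))))
```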
Although the free energy of a genome packing into a virus is dominated by DNA-DNA interactions, ordering of the DNA inside the capsid is elasticity-driven, suggesting general solutions with DNA organized into spool-like domains. Using analytical calculations and computer simulations of a long elastic filament confined to a spherical container, we show that the ground state is not a single spool as assumed hitherto, but an ordering mosaic of multiple homogeneously-ordered domains. At low densities, we observe concentric spools, while at higher densities, other morphologies emerge, which resemble topological links. We discuss our results in the context of metallic wires, viral DNA, and flexible polymers.
|
condensed matter
|
We report measurements and calculations on the properties of the intermetallic compound Be$_5$Pt. High-quality polycrystalline samples show a nearly constant temperature dependence of the electrical resistivity over a wide temperature range. On the other hand, relativistic electronic structure calculations indicate the existence of a narrow pseudogap in the density of states arising from accidental approximate Dirac cones extremely close to the Fermi level. A small true gap of order 3 meV is present at the Fermi level, yet the measured resistivity is nearly constant from low to room temperature. We argue that this unexpected behavior can be understood by a cancellation of the energy dependence of density of states and relaxation time due to disorder, and discuss a model for electronic transport. With applied pressure, the resistivity becomes semiconducting, consistent with theoretical calculations that show that the band gap increases with applied pressure. We further discuss the role of Be inclusions in the samples.
|
condensed matter
|
Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method only modifies the shape of the fluid but omits color information. In this work, we therefore extend the previous approach to obtain a complete pipeline for transferring shape and color information onto 2D and 3D smoke simulations with neural networks. Our results demonstrate that our method successfully transfers colored style features consistently in space and time to smoke data for different input textures.
|
computer science
|
We introduce a weak concept of Morita equivalence, in the birational context, for Poisson modules on complex normal Poisson projective varieties. We show that Poisson modules, on projective varieties with mild singularities, are either rationally Morita equivalent to a flat partial holomorphic sheaf, or a sheaf with a meromorphic flat connection, or a co-Higgs sheaf. As an application, we study the geometry of meromorphic rank two $\mathfrak{sl}_2$-Poisson modules, which can be interpreted as a Poisson analogue of transversally projective structures for codimension one holomorphic foliations. Moreover, we describe the geometry of the symplectic foliation induced by the Poisson connection on the projectivization of the Poisson module.
|
mathematics
|
We present and analyse a new tidal disruption event (TDE), AT2017eqx at redshift z=0.1089, discovered by Pan-STARRS and ATLAS. The position of the transient is consistent with the nucleus of its host galaxy; it peaks at a luminosity of $L \approx 10^{44}$ erg s$^{-1}$; and the spectrum shows a persistent blackbody temperature $T \gtrsim 20,000$ K with broad H I and He II emission. The lines are initially centered at zero velocity, but by 100 days the H I lines disappear while the He II develops a blueshift of $\gtrsim 5,000$ km s$^{-1}$. Both the early- and late-time morphologies have been seen in other TDEs, but the complete transition between them is unprecedented. The evolution can be explained by combining an extended atmosphere, undergoing slow contraction, with a wind in the polar direction becoming visible at late times. Our observations confirm that a lack of hydrogen in a TDE spectrum does not indicate a stripped star, while the proposed model implies that much of the diversity in TDEs may be due to the observer viewing angle. Modelling the light curve suggests AT2017eqx resulted from the complete disruption of a solar-mass star by a black hole of $\sim 10^{6.3} M_\odot$. The host is another quiescent, Balmer-strong galaxy, though fainter and less centrally concentrated than most TDE hosts. Radio limits rule out a relativistic jet, while X-ray limits at 500 days are among the deepest for a TDE at this phase.
|
astrophysics
|
With an ever-increasing amount of astronomical data being collected, manual classification has become obsolete, and machine learning is the only way forward. Keeping this in mind, the Large Synoptic Survey Telescope (LSST) Team hosted the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC) in 2018. The aim of this challenge was to develop models that accurately classify astronomical sources into different classes, scaling from a limited training set to a large test set. In this text, we report our results of experimenting with Bidirectional Gated Recurrent Unit (GRU) based deep learning models to deal with the time series data of the PLAsTiCC dataset. We demonstrate that GRUs are indeed suitable for handling time series data. With minimal preprocessing and without augmentation, our stacked ensemble of GRU and Dense networks achieves an accuracy of 76.243%. Data from astronomical surveys such as LSST will help researchers answer questions pertaining to dark matter, dark energy and the origins of the universe; accurate classification of astronomical sources is the first step towards achieving this. Our code is open-source and has been made available on GitHub here: https://github.com/AKnightWing/Astronomical-Classification-PLASTICC
|
astrophysics
|
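A minimal PyTorch sketch of a bidirectional GRU classifier for photometric time series, in the spirit of the abstract above; the feature count, hidden sizes, and class count are placeholders, and the paper's stacked GRU/Dense ensemble is not reproduced.

```python
import torch
import torch.nn as nn

class LightCurveGRU(nn.Module):
    """Bidirectional GRU over a photometric time series, followed by dense layers."""
    def __init__(self, n_features=6, hidden=64, n_classes=14):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])   # class logits from the last time step

model = LightCurveGRU()
logits = model(torch.randn(4, 100, 6))  # 4 objects, 100 epochs, 6 passbands
print(logits.shape)                      # torch.Size([4, 14])
```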
In this paper, we address a pursuit-evasion game involving multiple players by utilizing tools and techniques from reinforcement learning and matrix game theory. In particular, we consider the problem of steering an evader to a goal destination while avoiding capture by multiple pursuers, which is a high-dimensional and computationally intractable problem in general. In our proposed approach, we first formulate the multi-agent pursuit-evasion game as a sequence of discrete matrix games. Next, in order to simplify the solution process, we transform the high-dimensional state space into a low-dimensional manifold and the continuous action space into a feature-based space, which is a discrete abstraction of the original space. Based on these transformed state and action spaces, we subsequently employ min-max Q-learning, to generate the entries of the payoff matrix of the game, and subsequently obtain the optimal action for the evader at each stage. Finally, we present extensive numerical simulations to evaluate the performance of the proposed learning-based evading strategy in terms of the evader's ability to reach the desired target location without being captured, as well as computational efficiency.
|
electrical engineering and systems science
|
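The discrete matrix games in the abstract above can each be solved for the evader's min-max mixed strategy by linear programming. Below is a standard, self-contained sketch with SciPy; the payoff matrix is a toy example, and in the paper the entries would come from the learned Q-values.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Mixed min-max strategy for the row player of payoff matrix A
    (row player maximizes). Solves: max v s.t. A' x >= v, sum x = 1, x >= 0."""
    m, n = A.shape
    # Variables: [x_1..x_m, v]; linprog minimizes, so the objective is -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A' x)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]                  # strategy, game value

# Matching pennies: the value is 0 with a uniform mixed strategy.
x, v = solve_matrix_game(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(x, v)
```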
In this article, we investigate group differences in phthalate exposure profiles using NHANES data. Phthalates are a family of industrial chemicals used in plastics and as solvents. There is increasing evidence of adverse health effects of exposure to phthalates on reproduction and neuro-development, and concern about racial disparities in exposure. We would like to identify a single set of low-dimensional factors summarizing exposure to different chemicals, while allowing differences across groups. Improving on current multi-group additive factor models, we propose a class of Perturbed Factor Analysis (PFA) models that assume a common factor structure after perturbing the data via multiplication by a group-specific matrix. Bayesian inference algorithms are defined using a matrix normal hierarchical model for the perturbation matrices. The resulting model is just as flexible as current approaches in allowing arbitrarily large differences across groups but has substantial advantages that we illustrate in simulation studies. Applying PFA to NHANES data, we learn common factors summarizing exposures to phthalates, while showing clear differences across groups.
|
statistics
|
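A small simulation of the PFA generative assumption described above, namely that group-specific perturbation matrices map each group's data into a common factor model, may clarify the setup. All dimensions and noise scales below are arbitrary toy choices; the paper's Bayesian inference over the perturbation matrices is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n_groups, n_per = 10, 3, 4, 200     # chemicals, factors, groups, subjects

Lambda = rng.standard_normal((p, k))       # shared factor loadings
data = {}
for g in range(n_groups):
    # Group-specific perturbation near the identity: the PFA assumption is
    # that Q_g x_i follows the common factor model Lambda eta_i + eps_i.
    Q = np.eye(p) + 0.1 * rng.standard_normal((p, p))
    eta = rng.standard_normal((n_per, k))            # latent factor scores
    eps = 0.3 * rng.standard_normal((n_per, p))      # idiosyncratic noise
    data[g] = (eta @ Lambda.T + eps) @ np.linalg.inv(Q).T  # x_i = Q^{-1}(Lambda eta_i + eps_i)
```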
Variational inference is a popular method for estimating model parameters and conditional distributions in hierarchical and mixed models, which arise frequently in many settings in the health, social, and biological sciences. Variational inference in a frequentist context works by approximating intractable conditional distributions with a tractable family and optimizing the resulting lower bound on the log-likelihood. The variational objective function is typically less computationally intensive to optimize than the true likelihood, enabling scientists to fit rich models even with extremely large datasets. Despite widespread use, little is known about the general theoretical properties of estimators arising from variational approximations to the log-likelihood, which hinders their use in inferential statistics. In this paper we connect such estimators to profile M-estimation, which enables us to provide regularity conditions for consistency and asymptotic normality of variational estimators. Our theory also motivates three methodological improvements to variational inference: estimation of the asymptotic model-robust covariance matrix, a one-step correction that improves estimator efficiency, and an empirical assessment of consistency. We evaluate the proposed results using simulation studies and data on marijuana use from the National Longitudinal Study of Youth.
|
statistics
|
In arXiv:1906.11820 and arXiv:1907.05404 we proposed an approach based on graphs to characterize 5d superconformal field theories (SCFTs), which arise as compactifications of 6d $\mathcal{N}= (1,0)$ SCFTs. The graphs, so-called combined fiber diagrams (CFDs), are derived using the realization of 5d SCFTs via M-theory on a non-compact Calabi--Yau threefold with a canonical singularity. In this paper we complement this geometric approach by connecting the CFD of an SCFT to its weakly coupled gauge theory or quiver descriptions and demonstrate that the CFD as recovered from the gauge theory approach is consistent with that as determined by geometry. To each quiver description we also associate a graph, and the embedding of this graph into the CFD that is associated to an SCFT provides a systematic way to enumerate all possible consistent weakly coupled gauge theory descriptions of this SCFT. Furthermore, different embeddings of gauge theory graphs into a fixed CFD can give rise to new UV-dualities for which we provide evidence through an analysis of the prepotential, and which, for some examples, we substantiate by constructing the M-theory geometry in which the dual quiver descriptions are manifest.
|
high energy physics theory
|
The combination of low-rank tensor techniques and fast Fourier transform (FFT) based methods has turned out to be prominent in accelerating various statistical operations such as Kriging, computing conditional covariance, geostatistical optimal design, and others. However, the approximation of a full tensor by its low-rank format can be computationally formidable. In this work, we incorporate the robust Tensor Train (TT) approximation of covariance matrices and the efficient TT-Cross algorithm into FFT-based Kriging. It is shown that the computational complexity of Kriging is thereby reduced to $\mathcal{O}(d r^3 n)$, where $n$ is the mode size of the estimation grid, $d$ is the number of variables (the dimension), and $r$ is the rank of the TT approximation of the covariance matrix. For many popular covariance functions the TT rank $r$ remains stable as $n$ and $d$ increase. The advantages of this approach over those using plain FFT are demonstrated in synthetic and real data examples.
|
statistics
|
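The FFT side of FFT-based Kriging rests on the fact that a stationary covariance matrix on a regular grid is Toeplitz and can be embedded in a circulant matrix, whose matrix-vector products cost $\mathcal{O}(n \log n)$. Here is a self-contained 1D NumPy sketch of that building block; the TT-Cross machinery the paper adds on top is not shown.

```python
import numpy as np

def circulant_cov_matvec(cov_row, v):
    """Multiply a stationary (Toeplitz) covariance matrix by a vector in
    O(n log n) via circulant embedding and the FFT."""
    n = len(v)
    # Embed the Toeplitz matrix (first row cov_row) into a circulant of size 2n-2.
    c = np.concatenate([cov_row, cov_row[-2:0:-1]])
    eig = np.fft.fft(c)                              # circulant eigenvalues
    prod = np.fft.ifft(eig * np.fft.fft(v, len(c)))  # zero-pad v to length 2n-2
    return prod[:n].real

# Exponential covariance on a regular 1D grid.
x = np.linspace(0.0, 10.0, 512)
cov_row = np.exp(-np.abs(x - x[0]))
v = np.random.default_rng(2).standard_normal(512)
direct = np.exp(-np.abs(x[:, None] - x[None, :])) @ v   # O(n^2) reference
print(np.max(np.abs(direct - circulant_cov_matvec(cov_row, v))))
```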
We probe the universality hypothesis by analytically computing, at least, the two-loop corrections to the critical exponents for $q$-deformed O($N$) self-interacting $\lambda\phi^{4}$ scalar field theories through six distinct and independent field-theoretic renormalization group methods and $\epsilon$-expansion techniques. We show that the effect of $q$-deformation on the one-loop corrections to the $q$-deformed critical exponents is null, so the universality hypothesis breaks down at this loop order. Such an effect emerges only, at least, at the two-loop level, and the validity of the universality hypothesis is restored. The $q$-deformed critical exponents obtained through the six methods are the same and, furthermore, reduce to their non-deformed values in the appropriate limit.
|
high energy physics theory
|
With the increasing number of COVID-19 cases globally, all countries are ramping up their testing numbers. While RT-PCR kits are available in sufficient quantity in several countries, others are facing challenges with limited availability of testing kits and processing centers in remote areas. This has motivated researchers to find alternate methods of testing which are reliable, easily accessible and faster. Chest X-ray is one of the modalities that is gaining acceptance for screening. Towards this direction, the paper makes two primary contributions. Firstly, we present the COVID-19 Multi-Task Network, which is an automated end-to-end network for COVID-19 screening. The proposed network not only predicts whether a CXR has COVID-19 features present or not, it also performs semantic segmentation of the regions of interest to make the model explainable. Secondly, with the help of medical professionals, we manually annotate the lung regions of 9000 frontal chest radiographs taken from ChestXray-14, CheXpert and a consolidated COVID-19 dataset. Further, 200 chest radiographs pertaining to COVID-19 patients are also annotated for semantic segmentation. This database will be released to the research community.
|
electrical engineering and systems science
|
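A minimal PyTorch sketch of the multi-task idea described above: a shared encoder feeding both a classification head and a segmentation decoder, so each prediction comes with an explanatory mask. The architecture below is a toy stand-in, not the paper's actual COVID-19 Multi-Task Network; training would combine a classification loss on the logits with a pixel-wise loss on the mask.

```python
import torch
import torch.nn as nn

class MultiTaskCXRNet(nn.Module):
    """Shared encoder with a classification head (COVID-19 features present?)
    and a decoder head for segmentation of the regions of interest."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))
        self.segmenter = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2))

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.segmenter(z)

net = MultiTaskCXRNet()
logits, mask = net(torch.randn(2, 1, 256, 256))
print(logits.shape, mask.shape)   # (2, 2) and (2, 1, 256, 256)
```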
In this paper it is shown that many of the observables in QED-type theories can be realized in terms of a combinatorial structure called chord diagrams. The advantage of this representation is that we get the desired asymptotic information for a number of classes of Feynman diagrams without appealing to singularity analysis. This relation also explains the previously unexplained connection between the numbers of diagrams in Yukawa theory and quenched QED.
|
high energy physics theory
|
We discuss a method for classifying the singularity types of 1/2 Calabi-Yau 3-folds, a family of rational elliptic 3-folds introduced in a previous study in relation to various U(1) factors in 6D F-theory models. A projective dual pair of del Pezzo manifolds recently studied by Mukai is used to analyze the singularity types. In particular, we studied the maximal rank seven singularity types of 1/2 Calabi-Yau 3-folds. The structures of the singular fibers are analyzed using blow-ups. Double covers of the 1/2 Calabi-Yau 3-folds yield elliptic Calabi-Yau 3-folds and applications to six-dimensional $N = 1$ F-theory on the Calabi-Yau 3-folds are also discussed. The deduced singular fibers have applications in studying the gauge groups formed in 6D F-theory compactifications. The blow-up methods used to analyze the singular fibers and sections utilized in this research might have applications in studying the U(1) factors and hypermultiplets charged under U(1) in 6D F-theory.
|
high energy physics theory
|
This paper gives a consistent, asymptotically normal estimator of the expected value function when the state space is high-dimensional and the first-stage nuisance functions are estimated by modern machine learning tools. First, we show that the value function is orthogonal to the conditional choice probability; therefore, this nuisance function needs to be estimated only at the $n^{-1/4}$ rate. Second, we give a correction term for the transition density of the state variable. The resulting orthogonal moment is robust to misspecification of the transition density and does not require this nuisance function to be consistently estimated. Third, we generalize this result by considering the weighted expected value. In this case, the orthogonal moment is doubly robust in the transition density and the additional second-stage nuisance functions entering the correction term. We complete the asymptotic theory by providing bounds on second-order asymptotic terms.
|
statistics
|
The DRAO Synthesis Telescope (ST) is a forefront telescope for imaging large-scale neutral hydrogen and polarized radio continuum emission at arcminute resolution. Equipped for observations at 1420 and 408 MHz, the ST completed the Canadian Galactic Plane Survey, providing pioneering measurements of arcminute-scale structure in HI emission and self-absorption and of the diffuse polarized emission, using a fine grid of Rotation Measures to chart the large-scale Galactic magnetic field, and advancing the knowledge of the Galactic rotation curve. In this paper we describe a plan for renewal of the Synthesis Telescope that will create a forefront scientific instrument, a testbed for new radio astronomy technologies, and a training ground for the next generation of Canadian radio astronomers and radio telescope engineers. The renewed telescope will operate across the entire range 400 to 1800 MHz. Collaborations between DRAO and university partners have already demonstrated a novel feed antenna to cover this range, low-noise amplifiers, and a new GPU-based correlator of bandwidth 400 MHz. The renewed ST will provide excellent sensitivity to extended HI, covering the Galactic disk and halo, spectro-polarimetry with unprecedented resolution in angle and in Faraday depth, the ability to search for OH masers in all four 18-cm lines simultaneously, and sensitive recombination-line observations stacked over as many as forty transitions. As a testbed the renewed ST will evaluate low-cost digital clocking and sampling techniques of wide significance for the ngVLA, SKA, and other future telescopes, and a prototype of the digital correlator developed at DRAO for SKA-mid.
|
astrophysics
|
We investigate the tomographic structure of the pion and kaon in the light cone quark model (LCQM). In particular, we study the parton distribution amplitudes (PDAs) of the pion and kaon. We obtain the parton distribution functions (PDFs) and the generalized parton distributions (GPDs) of the pion and kaon. The valence quark PDA and PDF of the pion, after QCD evolution, are found to be consistent with the data from the E791 and E615 experiments at Fermilab, respectively. Further, we investigate the transverse momentum distributions (TMDs) of the pion and kaon. We also discuss the unpolarized TMD evolution for the pion and kaon in this model.
|
high energy physics phenomenology
|