Columns: text (string, lengths 11 to 9.77k), label (string, lengths 2 to 104)
Neutrinos are a guaranteed signal from supernova explosions in the Milky Way, and a most valuable messenger that can provide us with information about the deepest parts of supernovae. In particular, neutrinos will provide us with physical quantities, such as the radius and mass of protoneutron stars (PNS), which are the central engine of supernovae. This requires a theoretical model that connects observables such as neutrino luminosity and average energy with physical quantities. Here, we show analytic solutions for the neutrino-light curve derived from the neutrino radiation transport equation by employing the diffusion approximation and the analytic density solution of the hydrostatic equation for a PNS. The neutrino luminosity and the average energy as functions of time are explicitly presented, with dependence on PNS mass, radius, the total energy of neutrinos, surface density, and opacity. The analytic solutions provide good representations of the numerical models from a few seconds after the explosion and allow a rough estimate of these physical quantities to be made from observational data.
astrophysics
Decision forests, including Random Forests and Gradient Boosting Trees, have recently demonstrated state-of-the-art performance in a variety of machine learning settings. Decision forests are typically ensembles of axis-aligned decision trees; that is, trees that split only along feature dimensions. In contrast, many recent extensions to decision forests are based on axis-oblique splits. Unfortunately, these extensions forfeit one or more of the favorable properties of decision forests based on axis-aligned splits, such as robustness to many noise dimensions, interpretability, or computational efficiency. We introduce yet another decision forest, called "Sparse Projection Oblique Randomer Forests" (SPORF). SPORF uses very sparse random projections, i.e., linear combinations of a small subset of features. SPORF significantly improves accuracy over existing state-of-the-art algorithms on a standard benchmark suite for classification with >100 problems of varying dimension, sample size, and number of classes. To illustrate how SPORF addresses the limitations of both axis-aligned and existing oblique decision forest methods, we conduct extensive simulated experiments. SPORF typically yields improved performance over existing decision forests, while maintaining computational efficiency and scalability and preserving interpretability. SPORF can easily be incorporated into other ensemble methods such as boosting to obtain potentially similar gains.
statistics
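As a rough illustration of the "very sparse random projections" idea described in the abstract above (a sketch only, not the authors' implementation; the function name, the density parameter, and the use of NumPy are assumptions), a candidate oblique split direction can be drawn as a sparse combination of a few features with ±1 weights:

import numpy as np

def sparse_random_projections(n_features, n_candidates, density=0.1, seed=None):
    # Each column is a sparse linear combination of a few features,
    # with nonzero weights drawn uniformly from {-1, +1}.
    rng = np.random.default_rng(seed)
    A = np.zeros((n_features, n_candidates))
    for j in range(n_candidates):
        nnz = max(1, rng.binomial(n_features, density))
        idx = rng.choice(n_features, size=nnz, replace=False)
        A[idx, j] = rng.choice([-1.0, 1.0], size=nnz)
    return A

# A tree node would then search for the best axis-aligned split
# among the projected coordinates X @ A.
X = np.random.rand(100, 20)                   # toy data: 100 samples, 20 features
A = sparse_random_projections(20, 5, seed=0)  # 5 candidate oblique directions
X_proj = X @ A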
Inspired by a recent classical distribution-free junta tester by Chen, Liu, Servedio, Sheng, and Xie (STOC'18), we construct a quantum tester for the same problem with complexity $O(k/\varepsilon)$, which constitutes a quadratic improvement. We also prove that there is no efficient quantum algorithm for this problem using quantum examples as opposed to quantum membership queries. This result was obtained independently from the $\tilde O(k/\varepsilon)$ algorithm for this problem by Bshouty.
quantum physics
Brain pathologies can vary greatly in size and shape, ranging from a few pixels (e.g., MS lesions) to large, space-occupying tumors. Recently proposed autoencoder-based methods for unsupervised anomaly segmentation in brain MRI have shown promising performance, but face difficulties in modeling distributions with high fidelity, which is crucial for accurate delineation of particularly small lesions. Here, similar to these previous works, we model the distribution of healthy brain MRI to localize pathologies from erroneous reconstructions. However, to achieve improved reconstruction fidelity at higher resolutions, we learn to compress and reconstruct different frequency bands of healthy brain MRI using the Laplacian pyramid. In a range of experiments comparing our method to different state-of-the-art approaches on three different brain MR datasets with MS lesions and tumors, we show improved anomaly segmentation performance and the general capability to obtain much crisper reconstructions of input data at native resolution. The modeling of the Laplacian pyramid further enables the delineation and aggregation of lesions at multiple scales, which makes it possible to cope effectively with different pathologies and lesion sizes using a single model.
electrical engineering and systems science
Context. With the advent of new infrared, high-resolution spectrometers, accurate and precise atomic data in the infrared are urgently needed. Identifications, wavelengths, strengths, broadening and hyperfine splitting parameters of stellar lines in the near-IR are in many cases not accurate enough to model observed spectra, and in other cases are even non-existent. Some stellar features are unidentified. Aims. The aim of this work is to identify a spectral feature at lambda(vac) = 1063.891 nm or lambda(air) = 1063.600 nm seen in spectra of stars of different spectral types, observed with the GIANO-B spectrometer. Methods. Searching for spectral lines to match the unidentified feature in linelists from standard atomic databases was not successful. However, by investigating the original, published laboratory data we were able to identify the feature and solve the problem. To confirm its identification, we model the presumed stellar line in the solar intensity spectrum and find an excellent match. Results. We find that the observed spectral feature is a stellar line originating from the 4s'-4p' transition in S I, and that the reason for its absence in atomic line databases is a neglected air-to-vacuum correction in the original laboratory measurements from 1967 for this line only. From interpolation we determine the laboratory wavelength of the S I line to be lambda(vac) = 1063.8908 nm or lambda(air) = 1063.5993 nm, and the excitation energy of the upper level to be 9.74978 eV.
astrophysics
The knowledge of distribution grid models, including topologies and line impedances, is essential to grid monitoring, control and protection. However, this information is often unavailable, incomplete or outdated. The increasing deployment of smart meters (SMs) provides a unique opportunity to address this issue. This paper proposes a two-stage data-driven framework for distribution grid modeling using SM data. In the first stage, we propose to identify the topology via reconstructing a weighted Laplacian matrix of distribution networks, which is mathematically proven to be robust against moderately heterogeneous R/X profiles. In the second stage, we develop nonlinear least absolute deviations (LAD) and least squares (LS) regression models to estimate line impedances of single branches based on a nonlinear inverse power flow, which is then embedded within a bottom-up sweep algorithm to achieve the identification across the network in a branch-wise manner. Because the estimation models are inherently non-convex programs and NP-hard, we specially address their tractable convex relaxations and verify the exactness. In addition, we design a conductor library to significantly narrow down the solution space. Numerical results on the modified IEEE 13-bus, 37-bus and 69-bus test feeders validate the effectiveness of the proposed methods.
electrical engineering and systems science
Introduction of an electric field in the D-brane worldvolume induces a horizon in the open string geometry perceived by the brane fluctuations. We study the holographic entanglement entropy (HEE) and sub-region complexity (HSC) in these asymptotically AdS geometries in three, four and five dimensions, aiming to capture these quantities in the flavor sector introduced by the D-branes. Both strip and spherical sub-regions have been considered. We show that the entropy associated with the open string horizon, which earlier failed to reproduce the thermal entropy in the boundary, now precisely matches the entanglement entropy at high temperatures. We check the validity of the embedding function theorem while computing the HEE and attempt to reproduce the first law of entanglement thermodynamics, at least at leading order. On the basis of the obtained results, we also reflect upon the consequences of applying the Ryu-Takayanagi proposal to these non-Einstein geometries.
high energy physics theory
We present observations from the Transiting Exoplanet Survey Satellite (TESS) of twenty bright core-collapse supernovae with peak TESS-band magnitudes $\lesssim18$ mag. We reduce this data with an implementation of the image subtraction pipeline used by the All-Sky Automated Survey for Supernovae (ASAS-SN) optimized for use with the TESS images. In empirical fits to the rising light curves, we do not find strong correlations between the fit parameters and the peak luminosity. Existing semi-analytic models fit the light curves of the Type II supernovae well, but do not yield reasonable estimates of the progenitor radius or explosion energy, likely because they are derived for use with ultraviolet observations while TESS observes in the near-infrared. If we instead fit the data with numerically simulated light curves, the rising light curves of the Type~II SNe are consistent with the explosions of red supergiants. While we do not identify shock breakout emission for any individual event, when we combine the fit residuals of the Type II supernovae in our sample, we do find a $>5\sigma$ flux excess in the $\sim 0.5$~day before the start of the light curve rise. It is likely that this excess is due to shock breakout emission, and that during its extended mission TESS will observe a Type II supernova bright enough for this signal to be detected directly.
astrophysics
An ad hoc wireless sensor network (WSN) is an architecture of connected nodes, where each node has sensing, computation, and communication capabilities. The constrained energy sources of ad hoc WSNs result in a shorter lifetime of the sensor network and an inefficient topology. In this paper, a new approach for energy saving and control is introduced using quality of service. The main idea is to reduce node energy consumption by discovering the optimum route that meets quality-of-service requirements; the quality-of-service technique is used to find the optimum strategy for packet transmission and energy consumption at the nodes.
computer science
Particle filtering (PF) is an often used method to estimate the states of dynamical systems. A major limitation of the standard PF method is that the dimensionality of the state space increases as time proceeds and eventually may cause degeneracy of the algorithm. A possible approach to alleviate the degeneracy issue is to compute the marginal posterior distribution at each time step, which leads to the so-called marginal PF method. A key issue in the marginal PF method is to construct a good sampling distribution in the marginal space. When the posterior distribution is close to Gaussian, the Ensemble Kalman filter (EnKF) method can usually provide a good sampling distribution; however, the EnKF approximation may fail completely when the posterior is strongly non-Gaussian. In this work we propose a defensive marginal PF (DMPF) algorithm which constructs a sampling distribution in the marginal space by combining the standard PF and the EnKF approximation using a multiple importance sampling (MIS) scheme. An important feature of the proposed algorithm is that it can automatically adjust the relative weights of the PF and the EnKF components in the MIS scheme at each step, according to how non-Gaussian the posterior is. With numerical examples we demonstrate that the proposed method can perform well regardless of whether the posteriors can be well approximated by a Gaussian.
statistics
Deciding whether a sparse polynomial G divides another sparse polynomial F is not yet known to admit a polynomial-time algorithm. While computing the quotient Q = F quo G can be done in polynomial time with respect to the sparsities of F, G and Q, this is not yet sufficient to get a polynomial-time divisibility test in general. Indeed, the sparsity of the quotient Q can be exponentially larger than those of F and G. In the favorable case where the sparsity #Q of the quotient is polynomial, the best known algorithm to compute Q has a non-linear factor #G#Q in its complexity, which is not optimal. In this work, we are interested in two aspects of this problem. First, we propose a new randomized algorithm that computes the quotient of two sparse polynomials when the division is exact. Its complexity is quasi-linear in the sparsities of F, G and Q. Our approach relies on sparse interpolation and works over any finite field or the ring of integers. Then, as a step toward faster divisibility testing, we provide a new polynomial-time algorithm when the divisor has certain specific shapes. More precisely, we reduce the problem to finding a polynomial S such that QS is sparse and testing divisibility by S can be done in polynomial time. We identify some structure patterns in the divisor G for which we can efficiently compute such a polynomial S.
computer science
We propose a model with radiatively induced neutrino mass at the two-loop level, applying modular $A_4$ symmetry. The neutrino mass matrix is formulated in a form where the structure of the associated couplings is restricted by the symmetry. We then show several predictions in the lepton sector, satisfying lepton flavor violation constraints as well as neutrino oscillation data. We also discuss the muon anomalous magnetic moment and briefly comment on a dark matter candidate.
high energy physics phenomenology
Handling of missing data is one of the main tasks in data preprocessing, especially in large public service datasets. We have analysed data from the Trauma Audit and Research Network (TARN) database, the largest trauma database in Europe. For the analysis we used 165,559 trauma cases. Among them, there are 19,289 cases (13.19\%) with unknown outcome. We have demonstrated that these outcomes are not missing `completely at random' and, hence, it is impossible simply to exclude these cases from analysis despite the large amount of available data. We have developed a system of non-stationary Markov models for the handling of missing outcomes and validated these models on the data of 15,437 patients who arrived at TARN hospitals later than 24 hours but within 30 days from injury. We used these Markov models for the analysis of mortality. In particular, we corrected the observed fraction of deaths. Two na\"ive approaches give 7.20\% (available case study) or 6.36\% (if we assume that all unknown outcomes are `alive'). The corrected value is 6.78\%. Following the seminal paper of Trunkey (1983), the multimodality of mortality curves has become a much discussed idea. For the whole analysed TARN dataset the coefficient of mortality monotonically decreases in time, but a stratified analysis of the mortality gives a different result: for lower severities the coefficient of mortality is a non-monotonic function of the time after injury and may have maxima at the second and third weeks. The approach developed here can be applied to various healthcare datasets which experience the problem of lost patients and missing outcomes.
statistics
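A quick consistency check, assuming the quoted percentages refer to the stated case counts, shows how the two naive estimates above relate to one another:
\[
165{,}559 - 19{,}289 = 146{,}270 \ \text{cases with known outcome}, \qquad
D \approx 0.0720 \times 146{,}270 \approx 10{,}531 \ \text{deaths},
\]
\[
\frac{10{,}531}{165{,}559} \approx 6.36\% \quad \text{(all unknown outcomes counted as `alive')},
\]
so the Markov-corrected value of 6.78\% lies between the two naive bounds of 6.36\% and 7.20\%.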
We propose to use a quantum adiabatic and simulated-annealing framework to compute the ground state of small molecules. The initial Hamiltonian of our algorithms is taken to be the maximum commuting Hamiltonian that consists of a maximal set of commuting terms in the full Hamiltonian of molecules in the Pauli basis. We consider two variants. In the first method, we perform the adiabatic evolution on the obtained time- or path-dependent Hamiltonian with the initial state as the ground state of the maximum commuting Hamiltonian. However, this method does suffer from the usual problems of adiabatic quantum computation due to degeneracy and energy-level crossings along the Hamiltonian path. This problem is mitigated by a Zeno method, i.e., via a series of eigenstate projections used in the quantum simulated annealing, with the path-dependent Hamiltonian augmented by a sum of Pauli X terms, whose contribution vanishes at the beginning and the end of the path. In addition to the ground state, the low-lying excited states can be obtained using this quantum Zeno approach with equal accuracy to that of the ground state.
quantum physics
The results of this thesis allow one to replace calculations in tricategories with equivalent calculations in Gray categories (aka semistrict tricategories). In particular, the rewriting calculus for Gray categories as used, for example, by the online proof assistant globular (arXiv:1612.01093), or equivalently the Gray-diagrams of arXiv:1211.0529, can then be used also in the case of a fully weak tricategory.
mathematics
Based on the motivation that some quantum gravity theories predict Lorentz Invariance Violation (LIV) around Planck-scale energy levels, this paper proposes a new formalism that addresses the possible effects of LIV in electrodynamics. This formalism is capable of changing the usual electrodynamics through higher-derivative, arbitrary mass dimension terms that include a constant background field controlling the intensity of LIV in the models, producing modifications in the dispersion relations in a manner that is similar to the Myers-Pospelov approach. With this framework, it was possible to generate CPT-even and CPT-odd generalized modifications of electrodynamics in order to study the stability and causality of these theories, considering the isotropic case for the background field. An additional analysis of unitarity at tree level was carried out by studying the saturated propagators. After this analysis, we conclude that, while the CPT-even modifications always preserve stability, causality and unitarity within the bounds of the effective field theory and therefore may be good candidates for field theories with interactions, the CPT-odd one violates causality and unitarity. This feature is a consequence of the vacuum birefringence characteristics that are present in CPT-odd theories for the photon sector.
high energy physics theory
We propose a supervised principal component regression method for relating functional responses with high dimensional covariates. Unlike the conventional principal component analysis, the proposed method builds on a newly defined expected integrated residual sum of squares, which directly makes use of the association between functional response and predictors. Minimizing the integrated residual sum of squares gives the supervised principal components, which is equivalent to solving a sequence of nonconvex generalized Rayleigh quotient optimization problems and thus is computationally intractable. To overcome this computational challenge, we reformulate the nonconvex optimization problems into a simultaneous linear regression, with a sparse penalty added to deal with high dimensional predictors. Theoretically, we show that the reformulated regression problem recovers the same supervised principal subspace under suitable conditions. Statistically, we establish non-asymptotic error bounds for the proposed estimators. Numerical studies and an application to the Human Connectome Project lend further support.
statistics
We present a pair of adjoint optimal control problems characterizing a class of time-symmetric stochastic processes defined on random time intervals. The associated PDEs are of free-boundary type. The particularity of our approach is that it involves two adjoint optimal stopping times adapted to a pair of filtrations, the traditional increasing one and another, decreasing. They are the keys of the time symmetry of the construction, which can be regarded as a generalization of "Schr\"odinger's problem" (1931-32) to space-time domains. The relation with the notion of "Hidden diffusions" is also described.
mathematics
It is shown that the gauge invariance and gauge dependence properties of the effective action for Yang-Mills theories should be considered as two independent issues in the background field formalism. Application of this formalism to formulate the functional renormalization group approach is discussed. It is proven that it is possible to construct the corresponding average effective action invariant under the gauge transformations of the background vector field. Nevertheless, although gauge invariant, this action remains gauge dependent on-shell.
high energy physics theory
The three electromagnetic form factors for the transition from a 3/2+ Sigma* hyperon to the ground-state Lambda hyperon are studied. At low energies, combinations of the transition form factors can be deduced from Dalitz decays of the Sigma* hyperon to Lambda plus an electron-positron pair. It is pointed out how more information can be obtained with the help of the self-analyzing weak decay of the Lambda. In particular it is shown that these transition form factors are complex quantities already in this kinematical region. Such measurements are feasible at hyperon factories as for instance the Facility for Antiproton and Ion Research (FAIR). At higher energies, the transition form factors can be measured in electron-positron collisions. The pertinent relations between the transition form factors and the decay distributions and differential cross sections are presented. Using dispersion theory, the low-energy electromagnetic form factors for the Sigma*-to-Lambda transition are related to the pion vector form factor. The additionally required input, i.e. the two-pion - Sigma* - Lambda amplitudes are determined from relativistic next-to-leading-order (NLO) baryon chiral perturbation theory including the baryons from the octet and the decuplet. A poorly known NLO parameter is fixed to the experimental value of the Sigma* to Lambda-gamma decay width. Pion rescattering is taken into account by dispersion theory solving a Muskhelishvili-Omnes equation. Subtracted and unsubtracted dispersion relations are discussed. However, in view of the fact that the transition form factors are complex quantities, the current data situation does not allow for a full determination of the subtraction constants. To reduce the number of free parameters, unsubtracted dispersion relations are used to make predictions for the transition form factors in the low-energy space- and timelike regions.
high energy physics phenomenology
We construct a local $\psi$-epistemic hidden-variable model of Bell correlations by a retrocausal adaptation of the originally superdeterministic model given by Brans. In our model, for a pair of particles the joint quantum state $|\psi_e(t)\rangle$ as determined by preparation is epistemic. The model also assigns to the pair of particles a factorisable joint quantum state $|\psi_o(t)\rangle$ which is different from the prepared quantum state $|\psi_e(t)\rangle$ and has an ontic status. The ontic state of a single particle consists of two parts. First, a single particle ontic quantum state $\chi(\vec{x},t)|i\rangle$, where $\chi(\vec{x},t)$ is a 3-space wavepacket and $|i\rangle$ is a spin eigenstate of the future measurement setting. Second, a particle position in 3-space $\vec{x}(t)$, which evolves via a de Broglie-Bohm type guidance equation with the 3-space wavepacket $\chi(\vec{x},t)$ acting as a local pilot wave. The joint ontic quantum state $|\psi_o(t)\rangle$ fixes the measurement outcomes deterministically whereas the prepared quantum state $|\psi_e(t)\rangle$ determines the distribution of the $|\psi_o(t)\rangle$'s over an ensemble. Both $|\psi_o(t)\rangle$ and $|\psi_e(t)\rangle$ evolve via the Schrodinger equation. Our model exactly reproduces the Bell correlations for any pair of measurement settings. We also consider `non-equilibrium' extensions of the model with an arbitrary distribution of hidden variables. We show that, in non-equilibrium, the model generally violates no-signalling constraints while remaining local with respect to both ontology and interaction between particles. We argue that our model shares some structural similarities with the modal class of interpretations of quantum mechanics.
quantum physics
The effects of collisional processes in the hot QCD medium on thermal dilepton production from $q\overline{q}$ annihilation in relativistic heavy-ion collisions have been investigated. The non-equilibrium corrections to the momentum distribution function have been estimated within the framework of an ensemble-averaged diffusive Vlasov-Boltzmann equation, encoding the effects of collisional processes and turbulent chromo-fields in the medium. The analysis has been done by considering a realistic equation of state, employing a quasiparticle model for the thermal QCD medium. The contributions from the $2\rightarrow2$ elastic scattering processes have been quantified for the thermal dilepton production rate. We have shown that the collisional corrections induce an appreciable enhancement over the equilibrium dilepton spectra. A comparative study between collisional and anomalous contributions to the dilepton production rates has also been carried out. The collisional contributions are seen to be marginal over those due to collisionless anomalous transport.
high energy physics phenomenology
Timing of the Crab and Vela pulsars has recently revealed very peculiar evolutions of their spin frequency during the early stage of a glitch. We show that these differences can be interpreted in terms of the interactions between neutron superfluid vortices and proton fluxoids in the core of these neutron stars. In particular, pinning of individual vortices to fluxoids is found to have a dramatic impact on the mutual friction between the neutron superfluid and the rest of the star. The number of fluxoids attached to vortices turns out to be a key parameter governing the global dynamics of the star. These results may have implications for the interpretation of other astrophysical phenomena such as free precession of pulsars or the r-mode instability.
astrophysics
Multiple seasonal patterns play a key role in time series forecasting, especially for business time series where seasonal effects are often dramatic. Previous approaches, including Fourier decomposition, exponential smoothing, and seasonal autoregressive integrated moving average (SARIMA) models, do not reflect the distinct characteristics of each period in seasonal patterns. We propose a mixed hierarchical seasonality (MHS) model. Intermediate parameters for each seasonal period are first estimated, and a mixture of intermediate parameters is taken. This results in a model that automatically learns the relative importance of each seasonality and addresses the interactions between them. The model is implemented in Stan, a probabilistic programming language, and was compared with three existing models on a real-world dataset of pallet transport from a logistics network. Our new model achieved considerable improvements in terms of out-of-sample prediction error (MAPE) and predictive density (ELPD) compared to complete pooling, Fourier decomposition, and SARIMA models.
statistics
The $\Lambda$CDM model provides a good fit to a large span of cosmological data but harbors areas of phenomenology that remain poorly understood. With the improvement of the number and the accuracy of observations, discrepancies among key cosmological parameters of the model have emerged. The most statistically significant tension is the $4-6\sigma$ disagreement between predictions of the Hubble constant $H_0$ by early time probes with the $\Lambda$CDM model, and a number of late time, model-independent determinations of $H_0$ from local measurements of distances and redshifts. The high precision and consistency of the data at both ends present strong challenges to the possible solution space and demand a hypothesis with enough rigor to explain multiple observations--whether these invoke new physics, unexpected large-scale structures or multiple, unrelated errors. We present a thorough review of the problem, including a discussion of recent Hubble constant estimates and a summary of the proposed theoretical solutions. Some of the models presented are formally successful, improving the fit to the data in light of their additional degrees of freedom, restoring agreement within $1-2\sigma$ between {\it Planck} 2018, using CMB power spectra data, BAO, Pantheon SN data, and R20, the latest SH0ES Team measurement of the Hubble constant ($H_0 = 73.2 \pm 1.3{\rm\,km\,s^{-1}\,Mpc^{-1}}$ at 68\% confidence level). Reduced tension might not simply come from a change in $H_0$ but also from an increase in its uncertainty due to degeneracy with additional physics, pointing to the need for additional probes. While no specific proposal makes a strong case for being highly likely or far better than all others, solutions involving early or dynamical dark energy, neutrino interactions, interacting cosmologies, primordial magnetic fields, and modified gravity provide the best options until a better alternative comes along. [Abridged]
astrophysics
Discretely-modulated continuous-variable quantum key distribution (CVQKD) is more suitable for long-distance transmission than its Gaussian-modulated CVQKD counterpart. However, its security can only be guaranteed when the modulation variance is very small, which limits its further development. To solve this problem, in this work, we propose a novel scheme for discretely-modulated CVQKD using multi-label learning technology, called multi-label learning-based CVQKD (ML-CVQKD). In particular, the proposed scheme divides the whole quantum system into state learning and state prediction. The former is used for training and estimating the quantum classifier, and the latter is used for generating the final secret key. A quantum multi-label classification (QMLC) algorithm is also designed as an embedded classifier for distinguishing coherent states. Feature extraction for coherent states and related machine learning-based metrics for the quantum classifier are then presented. Security analysis shows that QMLC-embedded ML-CVQKD is able to resist the intercept-resend attack, so that a small modulation variance is no longer strictly required, thereby improving the performance of the discretely-modulated CVQKD system.
quantum physics
Direct nuclear reactions with radioactive ion beams represent an extremely powerful tool to extend the study of fundamental nuclear properties far from stability. These measurements require pure and dense targets to cope with the low beam intensities. The $^3$He cryogenic target HeCTOr has been designed to perform direct nuclear reactions in inverse kinematics. Its high density of scattering centers (10$^{20}$ atoms/cm$^2$) makes it particularly suited for experiments where low-intensity radioactive beams are involved. The target was employed in a first in-beam experiment, where it was coupled to state-of-the-art gamma-ray and particle detectors. It showed excellent stability in gas temperature and density over time. Relevant experimental quantities, such as total target thickness, energy resolution and gamma-ray absorption, were determined through dedicated Geant4 simulations and found to be in good agreement with experimental data.
physics
In this paper, the issue of model uncertainty in safety-critical control is addressed with a data-driven approach. For this purpose, we utilize the structure of an input-output linearization controller based on a nominal model along with a Control Barrier Function and Control Lyapunov Function based Quadratic Program (CBF-CLF-QP). Specifically, we propose a novel reinforcement learning framework which learns the model uncertainty present in the CBF and CLF constraints, as well as in other control-affine dynamic constraints in the quadratic program. The trained policy is combined with the nominal model-based CBF-CLF-QP, resulting in the Reinforcement Learning-based CBF-CLF-QP (RL-CBF-CLF-QP), which addresses the problem of model uncertainty in the safety constraints. The performance of the proposed method is validated by testing it on an underactuated nonlinear bipedal robot walking on randomly spaced stepping stones with one-step preview, obtaining stable and safe walking under model uncertainty.
electrical engineering and systems science
In this paper a cosmological solution of polynomial type $H \approx ( t + const.)^{-1}$ for the causal thermodynamical approach of Israel-Stewart, found in \cite{MCruz:2017, Cruz2017}, is constrained using the joint analysis of the latest measurements of the Hubble parameter (OHD) and Type Ia Supernovae (SNIa). Since the expansion described by this solution does not present a transition from a decelerated phase to an accelerated one, the two phases are modeled separately and connected by requiring the continuity of the Hubble parameter at $z=z_{t}$, the accelerated-decelerated transition redshift. Our best fit constrains the main free parameters of the model to be $A_1= 1.58^{+0.08}_{-0.07}$ ($A_2=0.84^{+0.02}_{-0.02}$) for the accelerated (decelerated) phase. For both phases we obtain $q=-0.37^{+0.03}_{-0.03}$ ($0.19^{+0.03}_{-0.03}$) and $\omega_{eff} = -0.58^{+0.02}_{-0.02}$ ($-0.21^{+0.02}_{-0.02}$) for the deceleration parameter and the effective equation of state, respectively. Comparing our model and $\Lambda$CDM statistically through the Akaike information criterion and the Bayesian information criterion, we obtain that the $\Lambda$CDM model is preferred by the OHD+SNIa data. Finally, it is shown that the constrained parameter values satisfy the criterion for a consistent fluid description of a dissipative dark matter component, but with a high value of the speed of sound within the fluid, which is a drawback for a consistent description of structure formation. We briefly discuss the possibilities to overcome this problem with a non-linear generalization of the causal linear thermodynamics of bulk viscosity and also with the inclusion of some form of dark energy.
astrophysics
We explore the gravitational implementation of the field theory Cardy-like limit recently used in the successful microstate countings of AdS black hole entropy in various dimensions. On the field theory side, the Cardy-like limit focuses on a particular scaling of conserved electric charges and angular momenta, and we first translate this scaling to the gravitational side by a limiting procedure on the black hole parameters. We note that the scaling naturally accompanies a near-horizon region for which these black hole solutions are greatly simplified. Applying the Kerr/CFT correspondence to the near-horizon region, we precisely reproduce the Bekenstein-Hawking entropy of asymptotically AdS$_{4, 5, 6, 7}$ BPS black holes. Our results explicitly provide a microscopic and universal low energy description for AdS black holes across various dimensions.
high energy physics theory
Lepton-number violation (LNV), in general, implies nonzero Majorana masses for the Standard Model neutrinos. Since neutrino masses are very small, for generic candidate models of the physics responsible for LNV, the rates for almost all experimentally accessible LNV observables -- except for neutrinoless double-beta decay -- are expected to be exceedingly small. Guided by effective-operator considerations of LNV phenomena, we identify a complete family of models where lepton number is violated but the generated Majorana neutrino masses are tiny, even if the new-physics scale is below 1 TeV. We explore the phenomenology of these models, including charged-lepton flavor-violating phenomena and baryon-number-violating phenomena, identifying scenarios where the allowed rates for $\mu^-\to e^+$-conversion in nuclei are potentially accessible to next-generation experiments.
high energy physics phenomenology
We consider a real scalar singlet field which provides a strong first-order electroweak phase transition via its coupling to the Higgs boson, and gives a $CP$ violating contribution on the top quark mass via a dimension-6 operator. We study the correlation between the baryon-to-entropy ratio produced by electroweak baryogenesis, and the gravitational wave signal from the electroweak phase transition. We show that future gravitational wave experiments can test, in particular, the region of the model parameter space where the observed baryon-to-entropy ratio can be obtained even if the new physics scale, which is explicit in the dimension-6 operator, is high.
high energy physics phenomenology
We discover a new class of topological solitons. These solitons can exist in a space of infinite volume like, e.g., $\mathbb{R}^n$, but they cannot be placed in any finite volume, because the resulting formal solutions have infinite energy. These objects are, therefore, interpreted as totally incompressible solitons. As a first, particular example we consider (1+1) dimensional kinks in theories with a nonstandard kinetic term or, equivalently, in models with so-called runaway (or vacuumless) potentials. But incompressible solitons exist also in higher dimensions. As specific examples in (3+1) dimensions we study Skyrmions in the dielectric extensions of both the minimal and the BPS Skyrme models. In the latter case, the skyrmionic matter describes a completely incompressible topological perfect fluid.
high energy physics theory
In this paper, we propose a class of discrete-time approximation schemes for fully nonlinear Hamilton-Jacobi-Bellman (HJB) equations associated with stochastic optimal control problems under the $G$-expectation framework. We prove the convergence of the discrete schemes and determine the convergence rate. Several numerical examples are presented to illustrate the effectiveness of the obtained results.
mathematics
We present a probabilistic 3D generative model, named Generative Cellular Automata, which is able to produce diverse and high quality shapes. We formulate the shape generation process as sampling from the transition kernel of a Markov chain, where the sampling chain eventually evolves to the full shape of the learned distribution. The transition kernel employs the local update rules of cellular automata, effectively reducing the search space in a high-resolution 3D grid space by exploiting the connectivity and sparsity of 3D shapes. Our progressive generation only focuses on the sparse set of occupied voxels and their neighborhood, thus enabling the utilization of an expressive sparse convolutional network. We propose an effective training scheme to obtain the local homogeneous rule of generative cellular automata with sequences that are slightly different from the sampling chain but converge to the full shapes in the training data. Extensive experiments on probabilistic shape completion and shape generation demonstrate that our method achieves competitive performance against recent methods.
computer science
It is widely accepted that inverse square L\'evy walks are optimal search strategies because they maximize the encounter rate with sparse, randomly distributed, replenishable targets when the search restarts in the vicinity of the previously visited target, which becomes revisitable again with high probability, i.e., non-destructive foraging [Nature 401, 911 (1999)]. The precise conditions for the validity of this L\'evy flight foraging hypothesis (LFH) have been widely described in the literature [Phys. Life Rev. 14, 94 (2015)]. Nevertheless, three objecting claims to the LFH have been raised recently for $d \geq 2$: (i) the capture rate $\eta$ has linear dependence on the target density $\rho$ for all values of the L\'evy index $\alpha$; (ii) "the gain $\eta_{max}/\eta$ achieved by varying $\alpha $ is bounded even in the limit $\rho \to 0 $" so that "tuning $\alpha$ can only yield a marginal gain"; (iii) depending on the values of the radius of detection $a$, the restarting distance $l_c$ and the scale parameter $s$, the optimum is realized for a range of $\alpha$ [Phys. Rev. Lett. 124, 080601 (2020)]. Here we answer each of these three criticisms in detail and show that claims (i)-(iii) do not actually invalidate the LFH. Our results and analyses restore the original result of the LFH for non-destructive foraging.
condensed matter
We study exponential families of distributions that are multivariate totally positive of order 2 (MTP2), show that these are convex exponential families, and derive conditions for existence of the MLE. Quadratic exponential families of MTP2 distributions contain attractive Gaussian graphical models and ferromagnetic Ising models as special examples. We show that these are defined by intersecting the space of canonical parameters with a polyhedral cone whose faces correspond to conditional independence relations. Hence MTP2 serves as an implicit regularizer for quadratic exponential families and leads to sparsity in the estimated graphical model. We prove that the maximum likelihood estimator (MLE) in an MTP2 binary exponential family exists if and only if both of the sign patterns $(1,-1)$ and $(-1,1)$ are represented in the sample for every pair of variables; in particular, this implies that the MLE may exist with $n=d$ observations, in stark contrast to unrestricted binary exponential families where $2^d$ observations are required. Finally, we provide a novel and globally convergent algorithm for computing the MLE for MTP2 Ising models similar to iterative proportional scaling and apply it to the analysis of data from two psychological disorders.
statistics
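The pairwise existence condition for the MLE quoted above is straightforward to check on a sample. The following Python sketch is only an illustration, assuming a $\pm 1$ coding of the binary variables; the function name is introduced here for the example and does not come from the paper.

import numpy as np
from itertools import combinations

def mtp2_binary_mle_exists(X):
    # X: (n, d) array with entries in {-1, +1}.
    # Returns True iff, for every pair of variables (i, j), both
    # sign patterns (+1, -1) and (-1, +1) appear in some sample row.
    for i, j in combinations(range(X.shape[1]), 2):
        has_pm = np.any((X[:, i] == 1) & (X[:, j] == -1))
        has_mp = np.any((X[:, i] == -1) & (X[:, j] == 1))
        if not (has_pm and has_mp):
            return False
    return True

rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(8, 8))   # n = d = 8 observations of 8 binary variables
print(mtp2_binary_mle_exists(X))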
With the aperture synthesis (AS) technique, a number of small antennas can be assembled to form a large telescope whose spatial resolution is determined by the distance between the two farthest antennas rather than by the diameter of a single-dish antenna. Unlike a direct imaging system, an AS telescope captures the Fourier coefficients of a spatial object and then applies an inverse Fourier transform to reconstruct the spatial image. Due to the limited number of antennas, the Fourier coefficients are extremely sparse in practice, resulting in a very blurry image. To remove or reduce blur, "CLEAN" deconvolution has been widely used in the literature. However, it was initially designed for point sources; for extended sources, like the Sun, its efficiency is unsatisfactory. In this study, a deep neural network, namely a Generative Adversarial Network (GAN), is proposed for solar image deconvolution. The experimental results demonstrate that the proposed model is markedly better than traditional CLEAN on solar images.
astrophysics
In 1976, Gorini, Kossakowski, Sudarshan and Lindblad independently discovered a general form of master equations for an open quantum Markovian dynamics. In honor of all the authors, the equation is nowadays called the GKLS master equation. In this paper, we show universal constraints on the relaxation times valid for any d-level GKLS master equations, which is a generalization of the well-known constraints for 2-level systems. Specifically, we show that any relaxation rate, the inverse-relaxation time, is not greater than half of the sum of all relaxation rates. Since the relaxation times are measurable in experiments, our constraints provide a direct experimental test for the validity of the GKLS master equations, and hence for the conditions of the complete positivity and Markovianity.
quantum physics
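Written out, the constraint stated above reads, for relaxation rates $\Gamma_k = 1/\tau_k$ of a $d$-level GKLS evolution with $N$ rates in total,
\[
\Gamma_k \;\le\; \frac{1}{2}\sum_{j=1}^{N}\Gamma_j \qquad \text{for every } k .
\]
As an illustrative special case (assuming the standard qubit parametrization with one longitudinal rate $1/T_1$ and two equal transverse rates $1/T_2$), applying the bound to the longitudinal rate gives $1/T_1 \le \tfrac{1}{2}\left(1/T_1 + 2/T_2\right)$, i.e. the familiar two-level constraint $T_2 \le 2T_1$.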
Deep neural networks enable highly accurate image segmentation, but require large amounts of manually annotated data for supervised training. Few-shot learning aims to address this shortcoming by learning a new class from a few annotated support examples. We introduce a novel few-shot framework for the segmentation of volumetric medical images with only a few annotated slices. Compared to other related works in computer vision, the major challenges are the absence of pre-trained networks and the volumetric nature of medical scans. We address these challenges by proposing a new architecture for few-shot segmentation that incorporates 'squeeze & excite' blocks. Our two-armed architecture consists of a conditioner arm, which processes the annotated support input and generates a task-specific representation. This representation is passed on to the segmenter arm that uses this information to segment the new query image. To facilitate efficient interaction between the conditioner and the segmenter arms, we propose to use 'channel squeeze & spatial excitation' blocks - a light-weight computational module - that enables heavy interaction between both arms with a negligible increase in model complexity. This contribution allows us to perform image segmentation without relying on a pre-trained model, which generally is unavailable for medical scans. Furthermore, we propose an efficient strategy for volumetric segmentation by optimally pairing a few slices of the support volume to all the slices of the query volume. We perform experiments for organ segmentation on whole-body contrast-enhanced CT scans from the Visceral Dataset. Our proposed model outperforms multiple baselines and existing approaches with respect to the segmentation accuracy by a significant margin. The source code is available at https://github.com/abhi4ssj/few-shot-segmentation.
computer science
Self-triggered control (STC) is a sample-and-hold control method aimed at reducing communications within networked control systems; however, existing STC mechanisms often maximize how late the next sample is, and as such they do not provide any sampling optimality in the long term. In this work, we devise a method to construct self-triggered policies that provide near-maximal average inter-sample time (AIST) while respecting given control performance constraints. To achieve this, we rely on finite-state abstractions of a reference event-triggered control, in which early triggers are also allowed. These early triggers constitute controllable actions of the abstraction, for which an AIST-maximizing strategy can be computed by solving a mean-payoff game. We provide optimality bounds and show how to further improve them through abstraction refinement techniques.
electrical engineering and systems science
We consider the problem of jointly estimating expectation values of many Pauli observables, a crucial subroutine in variational quantum algorithms. Starting with randomized measurements, we propose an efficient derandomization procedure that iteratively replaces random single-qubit measurements with fixed Pauli measurements; the resulting deterministic measurement procedure is guaranteed to perform at least as well as the randomized one. In particular, for estimating any $L$ low-weight Pauli observables, a deterministic measurement on only of order $\log(L)$ copies of a quantum state suffices. In some cases, for example when some of the Pauli observables have a high weight, the derandomized procedure is substantially better than the randomized one. Specifically, numerical experiments highlight the advantages of our derandomized protocol over various previous methods for estimating the ground-state energies of small molecules.
quantum physics
The quantum measurement problem may have a resolution in de Broglie-Bohm theory in which measurements lead to dynamical wavefunction collapse. We study the collapse in a simple setup and find that there may be slight differences between probabilities derived from standard quantum mechanics versus those from de Broglie-Bohm theory in certain situations, possibly paving the way for an experimental test.
quantum physics
An equilibrium system which is perturbed by an external potential relaxes to a new equilibrium state, a process obeying the fluctuation-dissipation theorem. In contrast, perturbing by nonconservative forces yields a nonequilibrium steady state, and the fluctuation-dissipation theorem can in general not be applied. Here we exploit a freedom inherent to linear response theory: Force fields which perform work that does not couple statistically to the considered observable can be added without changing the response. Using this freedom, we demonstrate that the fluctuation-dissipation theorem can be applied for certain nonconservative forces. We discuss the case of a nonconservative force field linear in particle coordinates, where the mentioned freedom can be formulated in terms of symmetries. In particular, for the case of shear, this yields a response formula, which we find advantageous over the known Green-Kubo relation in terms of statistical accuracy.
condensed matter
Arrays of coupled semiconductor lasers are systems possessing radically complex dynamics that makes them useful for numerous applications in beam forming and beam shaping. In this work, we investigate the spatial controllability of oscillation amplitudes in an array of coupled photonic dimers, each consisting of two semiconductor lasers driven by differential pumping rates. We consider parameter values for which each dimer's stable phase-locked state has become unstable through a Hopf bifurcation and we show that, by assigning appropriate pumping rate values to each dimer, large-amplitude oscillations coexist with negligibly small amplitude oscillations. The spatial profile of the amplitude of oscillations across the array can be dynamically controlled by appropriate pumping rate values in each dimer. This feature is shown to be quite robust, even for random detuning between the lasers, and suggests a mechanism for dynamically reconfigurable production of a large diversity of spatial profiles of laser amplitude oscillations.
physics
We study Jackiw-Teitelboim gravity with positive cosmological constant as a model for de Sitter quantum gravity. We focus on the quantum mechanics of the model at past and future infinity. There is a Hilbert space of asymptotic states and an infinite-time evolution operator between the far past and far future. This evolution is not unitary, although we find that it acts unitarily on a subspace up to non-perturbative corrections. These corrections come from processes which involve changes in the spatial topology, including the nucleation of baby universes. There is significant evidence that this 1+1 dimensional model is dual to a 0+0 dimensional matrix integral in the double-scaled limit. So the bulk quantum mechanics, including the Hilbert space and approximately unitary evolution, emerge from a classical integral. We find that this emergence is a robust consequence of the level repulsion of eigenvalues along with the double scaling limit, and so is rather universal in random matrix theory.
high energy physics theory
Carefully designed nanostructures can inspire new types of optomechanical interactions and allow surpassing the limitations set by classical diffractive optical elements. Apart from strong near-field localization, a nanostructured environment allows control of scattering channels and might tailor many-body interactions. Here we investigate the effect of optical binding, where several particles demonstrate a collective mechanical behaviour of bunching together in a light field. In contrast to classical binding, where separation distances between particles are diffraction limited, an auxiliary hyperbolic metasurface is shown here to break this barrier by introducing several controllable near-field interaction channels. Strong material dispersion of the hyperbolic metamaterial, along with the high spatial confinement of the optical modes it supports, allows achieving superior tuning capabilities and efficient control over binding distances on the nanoscale. In addition, a careful choice of the metamaterial slab thickness enables decreasing optical binding distances by orders of magnitude compared to free-space scenarios due to the multiple reflections of volumetric modes from the substrate. Auxiliary tunable metamaterials, which allow controlling collective optomechanical interactions on the nanoscale, open an avenue for new investigations including collective nanofluidic interactions, triggered bio-chemical reactions and many others.
physics
The standard model of elementary particles (SM) suffers from various problems, such as power-law ultraviolet (UV) sensitivity, exclusion of general relativity (GR), and absence of a dark matter candidate. The LHC experiments, according to which the TeV domain appears to be empty of new particles, started sidelining TeV-scale SUSY and other known cures of the UV sensitivity. In search of a remedy, in this work, it is revealed that affine curvature can emerge in a way restoring gauge symmetries explicitly broken by the UV cutoff. This emergent curvature cures the UV sensitivity and incorporates GR as symmetry-restoring emergent gravity ({\it symmergent gravity}, in brief) if a new physics sector (NP) exists to generate the Planck scale and if SM+NP is fermi-bose balanced. This setup, carrying fingerprints of trans-Planckian SUSY, predicts that gravity is Einstein (no higher-curvature terms), cosmic/gamma rays can originate from heavy NP scalars, and the UV cutoff might take the right value to suppress the cosmological constant (alleviating fine-tuning with SUSY). The NP does not have to couple to the SM. In fact, NP-SM coupling can take any value from zero to $\Lambda^2_{SM}/\Lambda^2_{NP}$ if the SM is not to jump from $\Lambda_{SM}\approx 500\, {\rm GeV}$ to the NP scale $\Lambda_{NP}$. The zero coupling, certifying an undetectable NP, agrees with all the collider and dark matter bounds at present. The {\it seesawic} bound $\Lambda^2_{SM}/\Lambda^2_{NP}$, directly verifiable at colliders, implies that: {\it (i)} dark matter must have a mass $\lesssim \Lambda_{SM}$, {\it (ii)} Higgs-curvature coupling must be $\approx 1.3\%$, {\it (iii)} the SM RGEs must remain nearly as in the SM, and {\it (iv)} right-handed neutrinos must have a mass $\lesssim 1000\, {\rm TeV}$. These signatures serve as a concise testbed for symmergence.
high energy physics phenomenology
Explorations of quantum-inspired symmetries in optical and photonic systems have attracted immense research interest, both fundamental and technological, across a wide range of subjects in physics and engineering. One of the principal emerging fields in this context is non-Hermitian physics based on parity-time (PT) symmetry, originally proposed in studies pertaining to quantum mechanics and quantum field theory and recently ramified into a diverse set of areas, particularly in optics and photonics. The intriguing physical effects enabled by non-Hermitian physics and PT symmetry have opened up significant application prospects and enabled the engineering of novel materials. In addition, there has been increasing research interest in many emerging directions beyond optics and photonics. This Review paper attempts to bring together the state-of-the-art developments in the field of complex non-Hermitian physics based on PT symmetry in various physical settings, along with elucidating key concepts and background and giving a detailed perspective on new emerging directions. It can be anticipated that this trending field of interest will be indispensable in providing new perspectives for maneuvering the flow of light in diverse physical platforms in optics, photonics, condensed matter, opto-electronics and beyond, and will offer distinctive application prospects in novel functional materials.
physics
Vehicle trajectory optimization is essential to ensure that vehicles travel efficiently and safely. This paper presents an infrastructure-assisted constrained trajectory optimization method for connected automated vehicles (CAVs) on curved roads. The problem is formulated systematically in a curvilinear coordinate system, which is flexible enough to model complex road geometries. Further, to deal with spatially varying road obstacles, traffic regulations, and geometric characteristics, the two-dimensional vehicle kinematics is given in a spatial formulation with exact road information provided by the infrastructure. Consequently, we apply a multi-objective model predictive control (MPC) approach to optimize the trajectories over a rolling horizon while satisfying collision-avoidance and vehicle-kinematics constraints. To verify the efficiency of our method, a numerical simulation is conducted. As the results suggest, the proposed method can provide smooth vehicular trajectories, avoid road obstacles, and simultaneously follow traffic regulations, and it is robust to road geometries and disturbances.
electrical engineering and systems science
The P-difference between two sets $\mathcal{A}$ and $\mathcal{B}$ is the set of all points, $\mathcal{C}$, such that the addition of $\mathcal{B}$ to any of the points in $\mathcal{C}$ is contained in $\mathcal{A}$. Such a set difference plays an important role in robust model predictive control and in set-theoretic control. In this paper we demonstrate that an inner approximation of the P-difference between two semialgebraic sets can be computed using Sums of Squares Programming, and we illustrate the procedure using several computational examples.
electrical engineering and systems science
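For the special case of axis-aligned boxes, the P-difference defined in the abstract above reduces to a simple coordinate-wise formula; the snippet below is a minimal illustration of that special case only (it is not the sums-of-squares procedure for general semialgebraic sets), and the interval bounds used are arbitrary examples.

```python
import numpy as np

# P-difference of axis-aligned boxes:
# A = [a_lo, a_hi], B = [b_lo, b_hi] per coordinate; C = {c : c + B is a subset of A}.
def box_p_difference(a_lo, a_hi, b_lo, b_hi):
    a_lo, a_hi = np.asarray(a_lo, float), np.asarray(a_hi, float)
    b_lo, b_hi = np.asarray(b_lo, float), np.asarray(b_hi, float)
    c_lo, c_hi = a_lo - b_lo, a_hi - b_hi
    if np.any(c_lo > c_hi):
        return None  # the P-difference is empty
    return c_lo, c_hi

# Example: A = [-5,5] x [-4,4], B = [-1,1] x [-2,2]  ->  C = [-4,4] x [-2,2]
print(box_p_difference([-5, -4], [5, 4], [-1, -2], [1, 2]))
```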
Linear modal instabilities of flow over finite-span untapered wings have been investigated numerically at Reynolds number 400, at a range of angles of attack and sweep, on two wings having aspect ratios 4 and 8. Base flows have been generated by direct numerical simulation, marching the unsteady incompressible three-dimensional Navier-Stokes equations to a steady state, or using selective frequency damping to obtain stationary linearly unstable flows. Unstable three-dimensional linear global modes of swept wings have been identified for the first time using spectral-element time-stepping solvers. The effect of the wing geometry and flow parameters on these modes has been examined in detail. An increase of the angle of attack was found to destabilize the flow, while an increase of the sweep angle had the opposite effect. On unswept wings, TriGlobal analysis revealed that the most unstable global mode peaks in the midspan region of the wake; the peak of the mode structure moves towards the tip as sweep is increased. Data-driven analysis was then employed to study the effects of wing geometry and flow conditions on the nonlinear wake. On unswept wings, the dominant mode at low angles of attack is a Kelvin-Helmholtz-like instability, qualitatively analogous to the global modes of infinite-span wings under the same conditions. At higher angles of attack and moderate sweep angles, the dominant mode is a structure termed the interaction mode. At high sweep angles, this mode evolves into elongated streamwise vortices on higher-aspect-ratio wings, while on shorter wings it becomes indistinguishable from the tip-vortex instability.
physics
Prior information can be incorporated in matrix completion to improve estimation accuracy and extrapolate the missing entries. Reproducing kernel Hilbert spaces provide tools to leverage the said prior information, and derive more reliable algorithms. This paper analyzes the generalization error of such approaches, and presents numerical tests confirming the theoretical results.
statistics
We discuss reduced-scaling strategies employing the recently introduced sub-system embedding sub-algebras coupled-cluster formalism (SES-CC) to describe many-body systems. These strategies utilize properties of the SES-CC formulations, in which the equations describing certain classes of sub-systems can be integrated into computational flows composed of coupled eigenvalue problems of reduced dimensionality. Additionally, these flows can be determined at the level of the CC Ansatz by the inclusion of selected classes of cluster amplitudes, which define the wave function "memory" of possible partitionings of the many-body system into constituent sub-systems. One possible way of solving these coupled problems is through procedures in which the information is passed between the sub-systems in a self-consistent manner. As a special case, we consider local flow formulations, where the so-called local character of correlation effects can be closely related to properties of sub-system embedding sub-algebras employing a localized molecular basis. We also generalize the flow equations to the time domain and to downfolding methods utilizing the double exponential unitary CC Ansatz (DUCC), where the reduced dimensionality of the constituent sub-problems offers a possibility of efficient utilization of limited quantum resources in modeling realistic systems.
quantum physics
Ba(Ni$_{1-x}$Co$_x$)$_2$As$_2$ is a structural homologue of the pnictide high temperature superconductor, Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$, in which the Fe atoms are replaced by Ni. Superconductivity is highly suppressed in this system, reaching a maximum $T_c$ = 2.3 K, compared to 24 K in its iron-based cousin, and the origin of this $T_c$ suppression is not known. Using x-ray scattering, we show that Ba(Ni$_{1-x}$Co$_x$)$_2$As$_2$ exhibits a unidirectional charge density wave (CDW) at its triclinic phase transition. The CDW is incommensurate, exhibits a sizable lattice distortion, and is accompanied by the appearance of $\alpha$ Fermi surface pockets in photoemission [B. Zhou et al., Phys. Rev. B 83, 035110 (2011)], suggesting it forms by an unconventional mechanism. Co doping suppresses the CDW, paralleling the behavior of antiferromagnetism in iron-based superconductors. Our study demonstrates that pnictide superconductors can exhibit competing CDW order, which may be the origin of $T_c$ suppression in this system.
condensed matter
Cellular-connected unmanned aerial vehicles (UAVs) are expected to play a major role in various civilian and commercial applications in the future. While existing cellular networks can provide wireless coverage to UAV user equipment (UE), such legacy networks are optimized for ground users, which makes it challenging to provide reliable connectivity to aerial UEs. To ensure reliable and effective mobility management for aerial UEs, estimating the velocity of cellular-connected UAVs is of critical importance. In this paper, we introduce an approximate probability mass function (PMF) of the handover count (HOC) for different UAV velocities and different ground base station (GBS) densities. Afterward, we derive the Cramer-Rao lower bound (CRLB) for the velocity estimate of a UAV, and also provide a simple unbiased estimator for the UAV's velocity which depends on the GBS density and the HOC measurement time. Our simulation results show that the accuracy of velocity estimation increases with the GBS density and the HOC measurement window. Moreover, the velocity of commercially available UAVs can be estimated efficiently with reasonable accuracy.
electrical engineering and systems science
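A back-of-the-envelope version of the handover-count velocity estimator discussed in the abstract above can be written down if one adopts the common stochastic-geometry approximation that a straight trajectory through a Poisson-Voronoi cell layout of GBS density λ crosses on average 4√λ/π cell boundaries per unit length. That constant, the Poisson model for the HOC, and all the numbers below are assumptions of this sketch; the exact PMF and CRLB of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_velocity(handover_count, gbs_density, window_s):
    """Invert E[H] ~ (4/pi) * sqrt(lambda) * v * T (Poisson-Voronoi approximation, assumed)."""
    return np.pi * handover_count / (4.0 * np.sqrt(gbs_density) * window_s)

# Hypothetical numbers: density in base stations per m^2, 60 s measurement window.
lam, T, v_true = 1e-5, 60.0, 20.0                    # ~10 BS per km^2, 20 m/s UAV
mean_h = 4.0 / np.pi * np.sqrt(lam) * v_true * T     # expected handover count under the approximation
h_obs = rng.poisson(mean_h)                          # crude Poisson stand-in for the HOC distribution
print(mean_h, h_obs, estimate_velocity(h_obs, lam, T))
```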
Unmanned aerial vehicle (UAV) swarms have emerged as a promising novel paradigm to achieve better coverage and higher capacity for future wireless networks by exploiting the more favorable line-of-sight (LoS) propagation. To reap the potential gains of UAV swarms, the remote control signal sent by the ground control unit (GCU) is essential, whereas the control signal quality is susceptible in practice to adjacent channel interference (ACI) and external interference (EI) from radiation sources distributed across the region. To tackle these challenges, this paper considers priority-aware resource coordination in a multi-UAV communication system, where multiple UAVs are controlled by a GCU to perform certain tasks with a pre-defined trajectory. Specifically, we maximize the minimum signal-to-interference-plus-noise ratio (SINR) among all the UAVs by jointly optimizing the channel assignment and power allocation strategy under stringent resource availability constraints. According to the intensity of the ACI, we consider the corresponding problem in two scenarios, i.e., Null-ACI and ACI systems. By virtue of the particular problem structure in the Null-ACI case, we first recast the formulation into an equivalent yet more tractable form and obtain the global optimal solution via the Hungarian algorithm. For general ACI systems, we develop an efficient iterative algorithm for its solution based on smooth approximation and alternating optimization methods. Extensive simulation results demonstrate that the proposed algorithms can significantly enhance the minimum SINR among all the UAVs and adapt the allocation of communication resources to diverse mission priorities.
electrical engineering and systems science
A neural implicit outputs a number indicating whether the given query point in space is inside, outside, or on a surface. Many prior works have focused on _latent-encoded_ neural implicits, where a latent vector encoding of a specific shape is also fed as input. While affording latent-space interpolation, this comes at the cost of reconstruction accuracy for any _single_ shape. Training a specific network for each 3D shape, a _weight-encoded_ neural implicit may forgo the latent vector and focus reconstruction accuracy on the details of a single shape. While previously considered as an intermediary representation for 3D scanning tasks or as a toy-problem leading up to latent-encoding tasks, weight-encoded neural implicits have not yet been taken seriously as a 3D shape representation. In this paper, we establish that weight-encoded neural implicits meet the criteria of a first-class 3D shape representation. We introduce a suite of technical contributions to improve reconstruction accuracy, convergence, and robustness when learning the signed distance field induced by a polygonal mesh -- the _de facto_ standard representation. Viewed as a lossy compression, our conversion outperforms standard techniques from geometry processing. Compared to previous latent- and weight-encoded neural implicits we demonstrate superior robustness, scalability, and performance.
computer science
Let $n$ be a positive integer. A collection $\cal S$ of subsets of $[n]=\{1,\ldots,n\}$ is called {\it symmetric} if $X\in {\cal S}$ implies $X^\ast\in {\cal S}$, where $X^\ast:=\{i\in [n]\colon n-i+1\notin X\}$. We show that in each of the three types of separation relations: {\it strong}, {\it weak} and {\it chord} ones, the following "purity phenomenon" takes place: all inclusion-wise maximal symmetric separated collections in $2^{[n]}$ have the same cardinality. These give "symmetric versions" of well-known results on the purity of usual strongly, weakly and chord separated collections of subsets of $[n]$, and in the case of weak separation, this extends a recent result due to Karpman on the purity of symmetric weakly separated collections in $\binom{[n]}{n/2}$ for $n$ even.
mathematics
We propose a simple experimental technique to separately map the emission from electric and magnetic dipole transitions close to single dielectric nanostructures, using a few nanometer thin film of rare-earth ion doped clusters. Rare-earth ions provide electric and magnetic dipole transitions of similar magnitude. By recording the photoluminescence from the deposited layer excited by a focused laser beam, we are able to simultaneously map the electric and magnetic emission enhancement on individual nanostructures. In spite of being a diffraction-limited far-field method with a spatial resolution of a few hundred nanometers, our approach appeals by its simplicity and high signal-to-noise ratio. We demonstrate our technique at the example of single silicon nanorods and dimers, in which we find a significant separation of electric and magnetic near-field contributions. Our method paves the way towards the efficient and rapid characterization of the electric and magnetic optical response of complex photonic nanostructures.
condensed matter
The 3D local atomic structures and crystal defects at the interfaces of heterostructures control their electronic, magnetic, optical, catalytic and topological quantum properties, but have thus far eluded any direct experimental determination. Here we determine the 3D local atomic positions at the interface of a MoS2-WSe2 heterojunction with picometer precision and correlate 3D atomic defects with localized vibrational properties at the epitaxial interface. We observe point defects, bond distortion, atomic-scale ripples and measure the full 3D strain tensor at the heterointerface. By using the experimental 3D atomic coordinates as direct input to first principles calculations, we reveal new phonon modes localized at the interface, which are corroborated by spatially resolved electron energy-loss spectroscopy. We expect that this work will open the door to correlate structure-property relationships of a wide range of heterostructure interfaces at the single-atom level.
condensed matter
Using data from the HAWC gamma-ray observatory, we have studied a sample of 37 millisecond pulsars (MSPs), selected for their spindown power and proximity. From among these MSPs, we have identified four which favor the presence of very high-energy gamma-ray emission at a level of $(2\Delta \ln \mathcal{L})^{1/2} \ge 2.5$. Adopting a correlation between the spindown power and gamma-ray luminosity of each pulsar, we performed a stacked likelihood analysis of these 37 MSPs, finding that the data support the conclusion that these sources emit very high-energy gamma-rays at a level of $(2\Delta \ln \mathcal{L})^{1/2} = 4.24$. Among sets of randomly selected sky locations within HAWC's field-of-view, less than 1\% of such realizations yielded such high statistical significance. Our analysis suggests that MSPs produce very high-energy gamma-ray emission with a similar efficiency to that observed from the Geminga TeV-halo, $\eta_{\rm MSP} = (0.39-1.08) \times \eta_{\rm Geminga}$. This conclusion poses a significant challenge for pulsar interpretations of the Galactic Center gamma-ray excess, as it suggests that any population of MSPs potentially capable of producing the GeV excess would also produce TeV-scale emission in excess of that observed by HESS from this region. Future observations by CTA will be able to substantially clarify this situation.
astrophysics
A major goal in paleoclimate science is to reconstruct historical climates using proxies for climate variables such as those observed in sediment cores, and in the process learn about climate dynamics. This is hampered by uncertainties in how sediment core depths relate to ages, how proxy quantities relate to climate variables, how climate models are specified, and the values of parameters in climate models. Quantifying these uncertainties is key in drawing well founded conclusions. Analyses are often performed in separate stages with, for example, a sediment core's depth-age relation being estimated as stage one, then fed as an input to calibrate climate models as stage two. Here, we show that such "multi-stage" approaches can lead to misleading conclusions. We develop a joint inferential approach for climate reconstruction, model calibration, and age model estimation. We focus on the glacial-interglacial cycle over the past 780 kyr, analysing two sediment cores that span this range. Our age estimates are largely in agreement with previous studies, but provides the full joint specification of all uncertainties, estimation of model parameters, and the model evidence. By sampling plausible chronologies from the posterior distribution, we demonstrate that downstream scientific conclusions can differ greatly both between different sampled chronologies, and in comparison with conclusions obtained in the complete joint inferential analysis. We conclude that multi-stage analyses are insufficient when dealing with uncertainty, and that to draw sound conclusions the full joint inferential analysis must be performed.
statistics
Lee-Yang and Fisher zeros are crucial for the study of phase transitions in the grand canonical and the canonical ensembles, respectively. However, these powerful methods do not cover the isothermal-isobaric ensemble (NPT ensemble), which reflects the conditions of many experiments. In this work we present a theory of phase transitions in terms of the zeros of the NPT-ensemble partition functions in the complex plane. The proposed theory provides an approach to calculate all the partition function zeros in the NPT ensemble, which form certain curves in the thermodynamic limit. To verify the theory we consider a Tonks gas and a van der Waals fluid in the NPT ensemble. In the case of the Tonks gas, similarly to the Lee-Yang circle theorem, we obtain an exact equation for the limit curve of zeros. We also derive an approximate limit-curve equation for the van der Waals fluid in terms of the Szeg\"o curve. This curve fits the numerically calculated zeros and correctly describes how the phenomenon of phase transition depends on the temperature.
condensed matter
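Once an NPT-like partition sum is written as a polynomial in a complex variable, zeros of the kind discussed in the abstract above can be located numerically with a standard root finder. The toy below uses made-up canonical weights on a discrete volume grid purely to illustrate that step; it is not the Tonks-gas or van der Waals calculation of the paper.

```python
import numpy as np

# Toy NPT partition sum on a discrete volume grid V_k = k * v0:
#   Delta(P) = sum_k Q_k * x**k,  with x = exp(-beta * P * v0).
# The Q_k below are made-up canonical weights, only meant to illustrate root finding.
k = np.arange(0, 40)
Q = np.exp(-0.05 * (k - 20.0) ** 2)          # smooth, single-peaked toy weights

# np.roots expects coefficients ordered from the highest power down.
zeros_x = np.roots(Q[::-1])
print(zeros_x[:5])                           # complex zeros in the x-plane

# Map one zero back to a complex "pressure" (beta * v0 set to 1 for illustration).
print(-np.log(zeros_x[0]))
```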
We study a qubit-oscillator system, with a time-dependent coupling coefficient, and present a scheme for generating entangled Schr\"odinger-cat states with large mean photon numbers and also a scheme that protects the cat states against dephasing caused by the nonlinearity in the system. We focus on the case where the qubit frequency is small compared to the oscillator frequency. We first present the exact quantum state evolution in the limit of infinitesimal qubit frequency. We then analyze the first-order effect of the nonzero qubit frequency. Our scheme works for a wide range of coupling strength values, including the recently achieved deep-strong-coupling regime.
quantum physics
Magnetic domain walls function as waveguides for low-energy magnons. In this paper we develop the theory for the non-local transport of these bound magnons through a ferromagnetic insulator, where they are injected and detected electrically in adjacent normal-metal leads by spin-flip scattering processes and the (inverse) spin-Hall effect. Our set-up requires a twofold degeneracy of the magnetic ground state in the ferromagnetic insulator, which we realize by an easy-axis and a hard-axis anisotropy; this is readily provided by a broad range of materials. The domain wall is a topologically protected feature of the system, and we obtain the non-local spin transport through it. Thereby we provide a framework for reconfigurable magnonic devices.
condensed matter
We report the discovery of superconductivity in the ternary aluminide Nb$_{5}$Sn$_{2}$Al, which crystallizes in the W$_{5}$Si$_{3}$-type structure with one-dimensional Nb chains along the $c$-axis. It is found that the compound has a multiband nature and becomes a weakly coupled, type-II superconductor below 2.0 K. The bulk nature of superconductivity is confirmed by the specific heat jump, whose temperature dependence shows apparent deviation from a single isotropic gap behavior. The lower and upper critical fields are estimated to be 2.0 mT and 0.3 T, respectively. From these values, we derive the penetration depth, coherence length and Ginzburg-Landau parameter to be 516 nm, 32.8 nm and 15.6, respectively. By contrast, the isostructural compound Ti$_{5}$Sn$_{2}$Al does not superconduct above 0.5 K. A comparison of these results with other W$_{5}$Si$_{3}$-type superconductors suggests that $T_{\rm c}$ of these compounds correlates with the average number of valence electrons per atom.
condensed matter
The upper limit of the laser field strength in a perfect vacuum is usually considered to be the Schwinger field, corresponding to ~10^29 W/cm^2. We investigate such limitations under realistic, non-ideal vacuum conditions and find that intensity suppression appears starting from 10^25 W/cm^2, with an upper threshold at the 10^26 W/cm^2 level if the residual electron density in the chamber surpasses 10^9 cm^-3. This is because the presence of residual electrons triggers an avalanche of quantum-electrodynamics cascades that creates copious electron-positron pairs. The leptons are further trapped within the driving laser field due to radiation reaction, which significantly depletes the laser energy. The relationship between the attainable intensity and the vacuum quality is given according to particle-in-cell simulations and theoretical analysis. These results address a critical question on the achievable light intensity under present vacuum conditions and provide a guideline for future 100-petawatt-class laser development.
physics
A semi-quantum key distribution (SQKD) protocol allows two users, one of whom is restricted in their quantum capabilities, to establish a shared secret key, secure against an all-powerful adversary. In this paper, we design a new SQKD protocol using high-dimensional quantum states and conduct an information theoretic security analysis. We show that, similar to the fully-quantum key distribution case, high-dimensional systems can increase the noise tolerance in the semi-quantum case. Along the way, we prove several general security results which are applicable to other SQKD protocols (both high-dimensional ones and standard qubit-based protocols).
quantum physics
The Gemini Planet Imager (GPI) is a high-contrast imaging instrument designed to directly image and characterize exoplanets. GPI is currently undergoing several upgrades to improve performance. In this paper, we discuss the upgrades to the GPI IFS. This primarily focuses on the design and performance improvements of new prisms and filters. This includes an improved high-resolution prism which will provide more evenly dispersed spectra across y, J, H and K-bands. Additionally, we discuss the design and implementation of a new low-resolution mode and prism which allow for imaging of all four bands (y, J, H and K-bands) simultaneously at R=10. We explore the possibility of using a multiband filter which would block the light between the four spectral bands. We discuss possible performance improvements from the multiband filter, if implemented. Finally we explore the possibility of making small changes to the optical design to improve the IFS's performance near the edge of the field of view.
astrophysics
We perform a careful study of the infrared sector of massless non-abelian gauge theories in four-dimensional Minkowski spacetime using the covariant phase space formalism, taking into account the boundary contributions arising from the gauge sector of the theory. Upon quantization, we show that the boundary contributions lead to an infinite degeneracy of the vacua. The Hilbert space of the vacuum sector is not only shown to be remarkably simple, but also universal. We derive a Ward identity that relates the n-point amplitude between two generic in- and out-vacuum states to the one computed in standard QFT. In addition, we demonstrate that the familiar single soft gluon theorem and multiple consecutive soft gluon theorem are consequences of the Ward identity.
high energy physics theory
Let $M$ be a closed oriented $4$-manifold admitting a rank-$2$ oriented foliation with a metric of leafwise positive scalar curvature. If $b^+>1$, we show that the Seiberg-Witten invariant vanishes for all spin$^c$ structures.
mathematics
The structure of the DNA double helix is stabilized by metal counterions condensed into a diffuse layer around the macromolecule. The dynamics of counterions in real conditions is governed by the electric fields from DNA and other biological macromolecules. In the present work, a molecular dynamics study was performed for the system of a DNA double helix with neutralizing K$^+$ counterions and for a system of KCl salt solution in an external electric field of different strengths (up to 32 mV/{\AA}). The analysis of the ionic conductivities of these systems has shown that the counterions around the DNA double helix are slowed down compared with the KCl salt solution. The calculated values of ion mobility are within (0.05$\div$0.4) mS/cm, depending on the orientation of the external electric field relative to the double helix. Under an electric field parallel to the macromolecule, K$^+$ counterions move along the grooves of the double helix, staying longer in places where the minor groove is narrower. Under an electric field perpendicular to the macromolecule, the dynamics of the counterions is less affected by DNA atoms, and starting with electric field values of about 30 mV/{\AA} the double helix undergoes a phase transition from the double-stranded to the single-stranded state.
physics
Single-top production is an important process at the LHC for testing the Standard Model (SM) and searching for new physics beyond the SM. Although the complete next-to-next-to-leading order (NNLO) QCD correction to single-top production is crucial, this calculation is still challenging at present. In order to efficiently reduce the NNLO single-top amplitude, we improve the auxiliary mass flow (AMF) method by introducing the $\epsilon$ truncation. For demonstration we choose one typical planar double-box diagram for $tW$ production. It is shown that one coefficient of the form factors in its amplitude can be systematically reduced to a linear combination of 198 scalar integrals.
high energy physics phenomenology
Data transmission over mmWave bands in fifth-generation wireless networks aims to support very-high-speed wireless communications. A substantial increase in spectrum efficiency for mmWave transmission can be achieved by using advanced hybrid precoding, for which accurate channel state information is the key. Rather than estimating the entire channel matrix, directly estimating subspace information, which involves fewer parameters, still provides enough information to design transceivers. However, the large channel-use overhead and the associated computational complexity of existing channel subspace estimation techniques are major obstacles to deploying the subspace approach for channel estimation. In this paper, we propose a sequential two-stage subspace estimation method that can resolve the overhead issues and provide accurate subspace information. Utilizing a sequential method enables us to avoid manipulating the entire high-dimensional training signal, which greatly reduces the complexity. Specifically, in the first stage, the proposed method samples the columns of the channel matrix to estimate its column subspace. Then, based on the obtained column subspace, it optimizes the training signals to estimate the row subspace. For a channel with $N_r$ receive antennas and $N_t$ transmit antennas, our analysis shows that the proposed technique only requires $O(N_t)$ channel uses, while providing a guarantee of subspace estimation accuracy. Theoretical analysis shows that the similarity between the estimated subspace and the true subspace is linearly related to the signal-to-noise ratio (SNR), i.e., $O(\text{SNR})$, at high SNR, while quadratically related to the SNR, i.e., $O(\text{SNR}^2)$, at low SNR. Simulation results show that the proposed sequential subspace method can provide improved subspace accuracy, normalized mean squared error, and spectrum efficiency over existing methods.
electrical engineering and systems science
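A stripped-down numerical illustration of the two-stage idea in the abstract above (estimate the column space from a few sampled columns, then estimate the row space after projecting onto it) can be written with plain SVDs on a synthetic low-rank channel. The optimized training-signal design, noise modeling, and the overhead/accuracy guarantees of the paper are not reproduced; all dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt, r = 64, 64, 4                                            # antenna counts and channel rank
H = rng.standard_normal((Nr, r)) @ rng.standard_normal((r, Nt))  # synthetic low-rank channel

# Stage 1: sample a few columns (a few canonical training beams) -> column subspace estimate.
cols = rng.choice(Nt, size=2 * r, replace=False)
U_hat, _, _ = np.linalg.svd(H[:, cols], full_matrices=False)
U_hat = U_hat[:, :r]

# Stage 2: observe the channel compressed onto the estimated column space -> row subspace estimate.
_, _, Vt_hat = np.linalg.svd(U_hat.T @ H, full_matrices=False)
V_hat = Vt_hat[:r].T

# Subspace accuracy: principal-angle cosines should all be ~1 in this noiseless toy.
U_true, _, Vt_true = np.linalg.svd(H)
print(np.linalg.svd(U_true[:, :r].T @ U_hat, compute_uv=False))
print(np.linalg.svd(Vt_true[:r] @ V_hat, compute_uv=False))
```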
Phoneme boundary detection is an essential first step for a variety of speech processing applications such as speaker diarization, speech science, keyword spotting, etc. In this work, we propose a neural architecture coupled with a parameterized structured loss function to learn segmental representations for the task of phoneme boundary detection. First, we evaluated our model when the spoken phonemes were not given as input. Results on the TIMIT and Buckeye corpora suggest that the proposed model is superior to the baseline models and reaches state-of-the-art performance in terms of F1 and R-value. We further explore the use of phonetic transcription as additional supervision and show this yields minor improvements in performance but substantially better convergence rates. We additionally evaluate the model on a Hebrew corpus and demonstrate that such phonetic supervision can be beneficial in a multi-lingual setting.
electrical engineering and systems science
Purpose: Body composition is known to be associated with many diseases including diabetes, cancers and cardiovascular diseases. In this paper, we developed a fully automatic body tissue decomposition procedure to segment three major compartments that are related to body composition analysis - subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT) and muscle. Three additional compartments - the ventral cavity, lung and bones - were also segmented during the segmentation process to assist segmentation of the major compartments. Methods: A convolutional neural network (CNN) model with densely connected layers was developed to perform ventral cavity segmentation. An image processing workflow was developed to segment the ventral cavity in any patient's CT using the CNN model, then further segment the body tissue into multiple compartments using hysteresis thresholding followed by morphological operations. It is important to segment the ventral cavity first to allow accurate separation of compartments with similar Hounsfield unit (HU) values inside and outside the ventral cavity. Results: The ventral cavity segmentation CNN model was trained and tested with manually labelled ventral cavities in 60 CTs. Dice scores (mean +/- standard deviation) for ventral cavity segmentation were 0.966+/-0.012. Tested on CT datasets with intravenous (IV) and oral contrast, the Dice scores were 0.96+/-0.02, 0.94+/-0.06, 0.96+/-0.04, 0.95+/-0.04 and 0.99+/-0.01 for bone, VAT, SAT, muscle and lung, respectively. The respective Dice scores were 0.97+/-0.02, 0.94+/-0.07, 0.93+/-0.06, 0.91+/-0.04 and 0.99+/-0.01 for non-contrast CT datasets. Conclusion: A body tissue decomposition procedure was developed to automatically segment multiple compartments of the ventral body. The proposed method enables fully automated quantification of 3D ventral body composition metrics from CT images.
electrical engineering and systems science
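As a minimal illustration of the thresholding-plus-morphology step mentioned in the abstract above (not the full CNN-plus-ventral-cavity pipeline), the snippet below applies hysteresis thresholding in a commonly quoted Hounsfield-unit range for adipose tissue and cleans the mask with a morphological opening. The HU bounds and the synthetic slice are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import apply_hysteresis_threshold

# Synthetic "CT slice" in Hounsfield units: air background, a soft-tissue block, and a fat blob.
hu = np.full((128, 128), -1000.0)
hu[30:90, 30:90] = 40.0      # soft tissue / muscle (~40 HU)
hu[45:75, 45:75] = -100.0    # adipose tissue (fat is often quoted around -190..-30 HU)

# Hysteresis thresholding on the negated image so that "more fat-like" means larger values.
fat_like = -hu
mask = apply_hysteresis_threshold(fat_like, low=30.0, high=90.0)  # roughly HU in [-190, -30]
mask &= hu > -300                                                 # drop air/lung-like voxels

# Morphological opening removes small spurious components.
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
print(mask.sum(), "pixels labelled as adipose tissue in this toy slice")
```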
Recent research in differential privacy demonstrated that (sub)sampling can amplify the level of protection. For example, for $\epsilon$-differential privacy and simple random sampling with sampling rate $r$, the actual privacy guarantee is approximately $r\epsilon$, if a value of $\epsilon$ is used to protect the output from the sample. In this paper, we study whether this amplification effect can be exploited systematically to improve the accuracy of the privatized estimate. Specifically, assuming the agency has information for the full population, we ask under which circumstances accuracy gains could be expected if the privatized estimate were computed on a random sample instead of the full population. We find that accuracy gains can be achieved in certain regimes. However, gains can typically only be expected if the sensitivity of the output with respect to small changes in the database does not depend too strongly on the size of the database. We only focus on algorithms that achieve differential privacy by adding noise to the final output and illustrate the accuracy implications for two commonly used statistics: the mean and the median. We see our research as a first step towards understanding the conditions required for accuracy gains in practice, and we hope that these findings will stimulate further research broadening the scope of differential privacy algorithms and outputs considered.
statistics
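The amplification statement in the abstract above can be made concrete in a few lines: the snippet compares the nominal ε used inside the mechanism with the amplified guarantee ln(1 + r(e^ε − 1)), whose small-ε limit is the r·ε quoted in the abstract, and releases a Laplace-noised mean of a random subsample. The sensitivity bookkeeping is deliberately simplified (the random subsample size is ignored) and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def amplified_epsilon(eps, r):
    """Privacy after subsampling at rate r: ln(1 + r*(exp(eps)-1)) ~ r*eps for small eps."""
    return np.log1p(r * np.expm1(eps))

def private_subsample_mean(x, r, eps, lo=0.0, hi=1.0):
    """Laplace-noised mean of a random subsample of bounded data in [lo, hi] (simplified sketch)."""
    idx = rng.random(len(x)) < r                 # Poisson-style subsampling at rate r
    sub = np.clip(x[idx], lo, hi)
    sensitivity = (hi - lo) / max(len(sub), 1)   # simplified: treats the realized sample size as fixed
    return sub.mean() + rng.laplace(scale=sensitivity / eps)

x = rng.random(10_000)
eps, r = 1.0, 0.05
print("nominal eps:", eps, "amplified eps:", amplified_epsilon(eps, r))
print("private mean:", private_subsample_mean(x, r, eps), "true mean:", x.mean())
```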
We calculate the pair production rates for spin-$1$ or vector particles on spaces of the form $M \times {\mathbb R}^{1,1}$ with $M$ corresponding to ${\mathbb R}^2$ (flat), $S^2$ (positive curvature) and $H^2$ (negative curvature), with and without a background (chromo)magnetic field on $M$. Beyond highlighting the effects of curvature and background magnetic field, this is particularly interesting since vector particles are known to suffer from the Nielsen-Olesen instability, which can dramatically increase pair production rates. The form of this instability for $S^2$ and $H^2$ is obtained. We also give a brief discussion of how our results relate to ideas about confinement in nonabelian theories.
high energy physics theory
Density functional theory together with the Kohn-Sham (KS) scheme represents an efficient framework to recover the ground state density and energy of a many-body quantum system from an auxiliary "non-interacting" system (one-body with a local potential that is a density functional). However, to fully establish the KS scheme, a general proof of the validity of the so-called "non-interacting $v$-representability" conjecture is still required. In this article, we propose such a proof. We reduce the demonstration to proving (1) that the KS potential is differentiable for all non-interacting $v$-representable densities and (2) that its derivative is bounded. Then, we demonstrate points (1) and (2) by applying static linear response to the non-interacting system.
quantum physics
In the low-energy effective theory of neutrinos, the Haar measure for unitary matrices is very likely to give rise to the observed PMNS matrix. Assuming the Haar measure, we determine the probability density functions for all quadratic, quartic Majorana, and quartic Dirac rephasing invariants for an arbitrary number of neutrino generations. We show that for a fixed number of neutrinos, all rephasing invariants of the same type have the same probability density function under the Haar measure. We then compute the moments of the rephasing invariants to determine, with the help of the Mellin transform, the three probability density functions. We finally investigate the physical implications of our results as a function of the number of neutrinos.
high energy physics phenomenology
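The Haar-measure assumption in the abstract above is easy to probe numerically: the snippet samples Haar-random unitaries via the QR decomposition with the standard phase fix on the diagonal of R, and accumulates the quadratic invariants |U_ij|^2 (for Haar U(n) these follow a Beta(1, n−1) law, so their mean is 1/n). The quartic invariants could be accumulated the same way; the derivation of the exact densities via the Mellin transform is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(n):
    """Haar-random U(n) via QR of a complex Ginibre matrix, fixing the phases of diag(R)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))          # rescale columns so the distribution is exactly Haar

n, samples = 3, 20_000
vals = np.array([np.abs(haar_unitary(n)[0, 1]) ** 2 for _ in range(samples)])

# Sanity check against the Beta(1, n-1) law for |U_ij|^2: the mean should be 1/n.
print(vals.mean(), 1.0 / n)
```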
Sparse regression methods such as the Lasso have achieved great success in handling high-dimensional data. However, one of the biggest practical problems is that high-dimensional data often contain large amounts of missing values. The Convex Conditioned Lasso (CoCoLasso) has been proposed for dealing with high-dimensional data with missing values, but it performs poorly when there are many missing values, so the high-missing-rate problem remains unresolved. In this paper, we propose a novel Lasso-type regression method for high-dimensional data with high missing rates. We effectively incorporate the mean-imputed covariance, overcoming its inherent estimation bias. The result is an optimally weighted modification of CoCoLasso according to the missing ratios. We theoretically and experimentally show that our proposed method is highly effective even when there are many missing values.
statistics
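One ingredient shared by CoCoLasso-style methods and the proposal in the abstract above is a surrogate covariance matrix built from incomplete data and then projected back to the positive-semidefinite cone so it can be fed to a Lasso-type solver. The sketch below forms a generic pairwise-complete covariance estimate and clips negative eigenvalues; the specific bias correction and missing-ratio weighting proposed in the abstract are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def pairwise_covariance(X):
    """Covariance from pairwise-complete observations of X, with NaNs marking missing entries."""
    X = np.asarray(X, float)
    Xc = X - np.nanmean(X, axis=0)
    obs = ~np.isnan(X)
    Xz = np.where(obs, Xc, 0.0)
    counts = obs.T.astype(float) @ obs.astype(float)       # pairwise sample sizes
    return (Xz.T @ Xz) / np.maximum(counts - 1.0, 1.0)

def project_psd(S, floor=1e-8):
    """Clip negative eigenvalues so the surrogate covariance is positive semidefinite."""
    w, V = np.linalg.eigh((S + S.T) / 2.0)
    return (V * np.maximum(w, floor)) @ V.T

# Toy data with ~40% of the entries missing completely at random.
X = rng.standard_normal((200, 10))
X[rng.random(X.shape) < 0.4] = np.nan
S = project_psd(pairwise_covariance(X))
print(np.linalg.eigvalsh(S).min())        # smallest eigenvalue is now non-negative (up to round-off)
```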
In the leading paradigm of modern cosmology, about 80% of our Universe's matter content is in the form of hypothetical, as yet undetected particles. These do not emit or absorb radiation at any observable wavelengths, and therefore constitute the so-called Dark Matter (DM) component of the Universe. Detecting the particles forming the Milky Way DM component is one of the main challenges for astroparticle physics and basic science in general. One promising way to achieve this goal is to search for rare DM-electron interactions in low-background deep underground detectors. Key to the interpretation of this search is the response of detectors' materials to elementary DM-electron interactions defined in terms of electron wave functions' overlap integrals. In this work, we compute the response of atomic argon and xenon targets used in operating DM search experiments to general, so far unexplored DM-electron interactions. We find that the rate at which atoms can be ionized via DM-electron scattering can in general be expressed in terms of four independent atomic responses, three of which we identify here for the first time. We find our new atomic responses to be numerically important in a variety of cases, which we identify and investigate thoroughly using effective theory methods. We then use our atomic responses to set 90% confidence level (C.L.) exclusion limits on the strength of a wide range of DM-electron interactions from the null result of DM search experiments using argon and xenon targets.
high energy physics phenomenology
Centaurus A (Cen~A) is the nearest active radio galaxy, which has kiloparsec (kpc) scale jets and giant lobes detected by various instruments in the radio and X-ray frequency ranges. The $Fermi$--Large Area Telescope and the High Energy Stereoscopic System (HESS) confirmed that Cen~A is a very high-energy (VHE; $> 0.1$~TeV) $\gamma$-ray emitter with a known spectral softening in the energy range from a few GeV to TeV. In this work, we consider a synchrotron self-Compton model in the nucleus for the broad-band spectrum below the break energy and an external Compton model in the kpc-scale jets for the $\gamma$-ray excess. Our results show that the observed $\gamma$-ray excess can be suitably described by inverse Compton scattering of starlight photons in the kpc-scale jets, which is consistent with the recent tentative report by HESS on the spatial extension of the TeV emission along the jets. Considering the spectral fitting results, the excess can only be seen in Cen~A, which is probably due to two factors: (1) the host galaxy is approximately 50 times more luminous than other typical radio galaxies, and (2) the core $\gamma$-ray spectrum quickly decays above a few MeV due to the low maximum electron Lorentz factor of $\gamma_{\rm c}=2.8 \times 10^3$ resulting from the large magnetic field of 3.8~G in the core. By comparison with other $\gamma$-ray detected radio galaxies, we found that the magnetic field strength of relativistic jets scales with the distance from the central black hole $d$ as $B (d) \propto d^{-0.88 \pm 0.14}$.
astrophysics
While the Gleason score is the most important prognostic marker for prostate cancer patients, it suffers from significant observer variability. Artificial Intelligence (AI) systems, based on deep learning, have proven to achieve pathologist-level performance at Gleason grading. However, the performance of such systems can degrade in the presence of artifacts, foreign tissue, or other anomalies. Pathologists integrating their expertise with feedback from an AI system could result in a synergy that outperforms both the individual pathologist and the system. Despite the hype around AI assistance, existing literature on this topic within the pathology domain is limited. We investigated the value of AI assistance for grading prostate biopsies. A panel of fourteen observers graded 160 biopsies with and without AI assistance. Using AI, the agreement of the panel with an expert reference standard significantly increased (quadratically weighted Cohen's kappa, 0.799 vs 0.872; p=0.018). Our results show the added value of AI systems for Gleason grading, but more importantly, show the benefits of pathologist-AI synergy.
electrical engineering and systems science
Scenario reduction is an important topic in stochastic programming problems. Due to the random behavior of load and renewable energy, stochastic programming has become a useful technique for optimizing power systems, and scenario reduction has therefore received more attention in recent years. Many scenario reduction methods have been proposed to reduce the scenario set quickly. However, the speed of scenario reduction is still very slow, taking from several seconds to several minutes to finish the reduction. This limitation prevents stochastic programming from being implemented in real-time optimal control problems. In this paper, a fast scenario reduction method based on deep learning is proposed to solve this problem. Inspired by deep learning based image processing, recognition, and generation methods, the scenario data are transformed into 2D image-like data and then fed into a deep convolutional neural network (DCNN). The output of the DCNN is an "image" of the reduced scenario set. Since images can be processed at very high speed by neural networks, scenario reduction by the neural network can also be very fast. The simulation results show that scenario reduction with the proposed DCNN method can be completed at very high speed.
electrical engineering and systems science
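The "scenarios as an image" idea in the abstract above can be made tangible with a toy encoder: stack the scenario set into a 2D array (scenarios by time steps), pass it through a small convolutional network, and read off a smaller array of the same width as the reduced set. The architecture, shapes, and loss below are placeholders for illustration only and are not the network of the paper.

```python
import torch
import torch.nn as nn

N_SCEN, N_RED, T = 64, 8, 24       # original scenarios, reduced scenarios, time steps

class ToyScenarioReducer(nn.Module):
    """Map an 'image' of 64 scenarios x 24 time steps to 8 reduced scenarios."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(8 * N_SCEN * T, N_RED * T)

    def forward(self, x):                        # x: (batch, 1, N_SCEN, T)
        h = self.features(x).flatten(1)
        return self.head(h).view(-1, N_RED, T)   # (batch, N_RED, T)

scenarios = torch.randn(1, 1, N_SCEN, T)         # synthetic load/renewable scenarios
model = ToyScenarioReducer()
reduced = model(scenarios)
print(reduced.shape)                             # torch.Size([1, 8, 24])

# A real training loop would regress against reduced sets produced by a classical (slow)
# scenario-reduction algorithm; here we only show a single forward/backward pass.
loss = reduced.pow(2).mean()
loss.backward()
```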
One of the most promising routes towards fault-tolerant quantum computation utilizes topological quantum error correcting codes, such as the $\mathbb{Z}_2$ surface code. Logical qubits can be encoded in a variety of ways in the surface code, based on either boundary defects, holes, or bulk twist defects. However, proposed fault-tolerant implementations of the Clifford group in these schemes are limited and often require unnecessary overhead. For example, the Clifford phase gate in certain planar and hole encodings has been proposed to be implemented using costly state injection and distillation protocols. In this paper, we show that within any encoding scheme for the logical qubits, we can fault-tolerantly implement the full Clifford group by using joint measurements involving a single appropriately encoded logical ancilla. This allows us to provide new low-overhead implementations of the full Clifford group in surface and color codes. It also provides the first proposed implementations of the full Clifford group in hyperbolic codes. We further use our methods to propose state-of-the-art encoding schemes for small numbers of logical qubits; for example, for code distances $d = 3,5,7$, we propose a scheme using $60, 160, 308$ (respectively) physical data qubits, which allow for the full logical Clifford group to be implemented on two logical qubits. To our knowledge, this is the optimal proposal to date, and thus may be useful for demonstration of fault-tolerant logical gates in small near-term quantum computers.
quantum physics
Finding parametric models that accurately describe the dependence structure of observed data is a central task in the analysis of time series. Classical frequency domain methods provide a popular set of tools for fitting and diagnostics of time series models, but their applicability is seriously impacted by the limitations of covariances as a measure of dependence. Motivated by recent developments of frequency domain methods that are based on copulas instead of covariances, we propose a novel graphical tool that allows one to assess the quality of time series models for describing dependencies that go beyond linearity. We provide a thorough theoretical justification of our approach and show in simulations that it can successfully distinguish between subtle differences of time series dynamics, including non-linear dynamics which result from GARCH and EGARCH models. We also demonstrate the utility of the proposed tools through an application to modeling returns of the S&P 500 stock market index.
statistics
While Robust Model Predictive Control considers the worst-case system uncertainty, Stochastic Model Predictive Control, using chance constraints, provides less conservative solutions by allowing a certain constraint violation probability depending on a predefined risk parameter. However, for safety-critical systems it is not only important to bound the constraint violation probability but to reduce this probability as much as possible. Therefore, an approach is necessary that minimizes the constraint violation probability while ensuring that the Model Predictive Control optimization problem remains feasible. We propose a novel Model Predictive Control scheme that yields a solution with minimal constraint violation probability for a norm constraint in an environment with uncertainty. After minimal constraint violation is guaranteed the solution is then also optimized with respect to other control objectives. Further, it is possible to account for changes over time of the support of the uncertainty. We first present a general method and then provide an approach for uncertainties with symmetric, unimodal probability density function. Recursive feasibility and convergence of the method are proved. A simulation example demonstrates the effectiveness of the proposed method.
electrical engineering and systems science
Negative probability values have been widely employed as an indicator of the nonclassicality of quantum systems. Known as quasiprobability distributions, they are regarded as a useful tool that provides significant insight into the underlying fundamentals of quantum theory when compared to classical statistics. However, in this approach, an operational interpretation of these negative values with respect to the definition of probability---the relative frequency of an occurring event---is missing. An alternative approach is therefore considered, in which the quasiprobability operationally reveals the negativity of measured quantities. We here present an experimental realization of the operational quasiprobability, which consists of sequential measurements in time. To this end, we implement two sets of polarization measurements of single photons. We find that the measured negativity can be interpreted in the context of selecting measurements, and that it reflects the nonclassical nature of photons. Our results suggest a new operational way to unravel the nonclassicality of photons in the context of measurement selection.
quantum physics
Given a $k\times n$ integer primitive matrix $A$ (i.e., a matrix that can be extended to an $n\times n$ unimodular matrix over the integers) with the size of its entries bounded by $\lambda$, we study the probability that the $m\times n$ matrix extended from $A$ by choosing the other $m-k$ vectors uniformly at random from $\{0, 1, \ldots, \lambda-1\}$ is still primitive. We present a complete and rigorous proof that the probability is at least a constant for the case $m\le n-4$. Previously, only the limit case $\lambda\rightarrow\infty$ with $k=0$ had been analysed, in Maze et al. (2011), known as the natural density. As an application, we prove that there exists a fast Las Vegas algorithm that completes a $k\times n$ primitive matrix $A$ to an $n\times n$ unimodular matrix within expected $\tilde{O}(n^{\omega}\log \|A\|)$ bit operations, where $\tilde{O}$ is big-$O$ without log factors, $\omega$ is the exponent of matrix multiplication and $\|A\|$ is the maximal absolute value of the entries of $A$.
computer science
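A small Monte-Carlo check of the statement in the abstract above is easy to set up using the classical criterion that an m x n integer matrix extends to a unimodular matrix iff the gcd of its m x m minors equals 1. The parameters (k, m, n, lambda, seed row) below are arbitrary illustrative choices, and exact-arithmetic determinants are computed with sympy; this is only an empirical sanity check, not the paper's proof.

```python
from itertools import combinations
from math import gcd
import random

from sympy import Matrix

def is_primitive(rows):
    """An m x n integer matrix extends to a unimodular matrix iff gcd of its m x m minors is 1."""
    M = Matrix(rows)
    m, n = M.shape
    g = 0
    for cols in combinations(range(n), m):
        g = gcd(g, abs(int(M.extract(list(range(m)), list(cols)).det())))
        if g == 1:
            return True
    return g == 1

def extend_randomly(A, m, lam, rng):
    """Append m - k random rows with entries drawn uniformly from {0, ..., lam-1}."""
    n = len(A[0])
    extra = [[rng.randrange(lam) for _ in range(n)] for _ in range(m - len(A))]
    return A + extra

rng = random.Random(0)
A = [[1, 0, 0, 0, 0, 0]]                 # a primitive 1 x 6 seed matrix (k = 1, n = 6)
m, lam, trials = 2, 10, 2000
hits = sum(is_primitive(extend_randomly(A, m, lam, rng)) for _ in range(trials))
print(hits / trials)                     # empirical probability that the random extension stays primitive
```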
Identifying the target speaker in hearing aid applications is crucial to improve speech understanding. Recent advances in electroencephalography (EEG) have shown that it is possible to identify the target speaker from single-trial EEG recordings using auditory attention decoding (AAD) methods. AAD methods reconstruct the attended speech envelope from EEG recordings, based on a linear least-squares cost function or non-linear neural networks, and then directly compare the reconstructed envelope with the speech envelopes of speakers to identify the attended speaker using Pearson correlation coefficients. Since these correlation coefficients are highly fluctuating, for a reliable decoding a large correlation window is used, which causes a large processing delay. In this paper, we investigate a state-space model using correlation coefficients obtained with a small correlation window to improve the decoding performance of the linear and the non-linear AAD methods. The experimental results show that the state-space model significantly improves the decoding performance.
electrical engineering and systems science
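The decoding step described in the abstract above ultimately compares windowed Pearson correlations between a reconstructed envelope and the candidate speech envelopes. The toy below computes those correlations over short windows and applies a simple exponential smoother as a stand-in for the state-space model; the actual EEG-based reconstruction (linear or neural) and the full state-space formulation are not reproduced, and all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
fs, secs, win = 64, 60, 5 * 64                          # envelope rate, duration, 5 s windows
n = fs * secs

def smooth(x):
    return np.convolve(x, np.ones(32) / 32, mode="same")

env_a, env_b = smooth(rng.standard_normal(n)), smooth(rng.standard_normal(n))  # two speech envelopes
recon = env_a + 0.3 * rng.standard_normal(n)            # noisy "reconstruction"; listener attends to A

def windowed_corr(x, y, win):
    return np.array([np.corrcoef(x[s:s + win], y[s:s + win])[0, 1]
                     for s in range(0, len(x) - win + 1, win)])

ra, rb = windowed_corr(recon, env_a, win), windowed_corr(recon, env_b, win)

# Exponential smoothing of the noisy per-window correlation difference (state-space stand-in).
state, decisions = 0.0, []
for d in ra - rb:
    state = 0.8 * state + 0.2 * d
    decisions.append("A" if state > 0 else "B")
print(decisions.count("A") / len(decisions))             # fraction of windows decoded as speaker A
```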
We consider the task of photo-realistic unconditional image generation (generating high-quality, diverse samples that carry the same visual content as the image) on mobile platforms using Generative Adversarial Networks (GANs). In this paper, we propose a novel approach, called Scale-Energy Tradeoff GAN (SETGAN), that trades off the image generation accuracy of a GAN against the energy consumed (compute) at run-time. GANs usually take a long time to train and consume a large amount of memory, which makes them difficult to run on edge devices. The key idea behind SETGAN for an image generation task is that, for a given input image, we train a GAN on a remote server and use the trained model on edge devices. We use SinGAN, a single-image unconditional generative model that contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. During the training process, we determine the optimal number of scales for a given input image and the energy constraint from the target edge device. Results show that with SETGAN's unique client-server-based architecture, we were able to achieve a 56% gain in energy for a loss of 3% to 12% in SSIM accuracy. Also, with the parallel multi-scale training, we obtain around a 4x gain in training time on the server.
computer science
A matroid $N$ is said to be triangle-rounded in a class of matroids $\mathcal{M}$ if each $3$-connected matroid $M\in \mathcal{M}$ with a triangle $T$ and an $N$-minor has an $N$-minor with $T$ as triangle. Reid gave a result useful to identify such matroids as stated next: suppose that $M$ is a binary $3$-connected matroid with a $3$-connected minor $N$, $T$ is a triangle of $M$ and $e\in T\cap E(N)$; then $M$ has a $3$-connected minor $M'$ with an $N$-minor such that $T$ is a triangle of $M'$ and $|E(M')|\le |E(N)|+2$. We strengthen this result by dropping the condition that such element $e$ exists and proving that there is a $3$-connected minor $M'$ of $M$ with an $N$-minor $N'$ such that $T$ is a triangle of $M'$ and $E(M')-E(N')\subseteq T$. This result is extended to the non-binary case and, as an application, we prove that $M(K_5)$ is triangle-rounded in the class of the regular matroids.
mathematics
In this work we present a reduced order model that is specifically designed to deal with turbulent flows in a finite volume setting. The method used to build the reduced order model is based on the idea of combining projection-based techniques with data-driven reduction strategies. In particular, the work presents a mixed strategy that exploits a data-driven reduction method to approximate the eddy viscosity solution manifold and a classical POD-Galerkin projection approach for the velocity and pressure fields, respectively. The newly proposed reduced order model has been validated on benchmark test cases in both steady and unsteady settings, with Reynolds numbers up to $Re = O(10^5)$.
mathematics
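The projection half of the hybrid reduced order model in the abstract above rests on POD, which in practice amounts to an SVD of a snapshot matrix. The snippet extracts POD modes from synthetic snapshots and reports the retained energy; the eddy-viscosity interpolation and the Galerkin projection of the Navier-Stokes operators are beyond this sketch, and the snapshot data are made up.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic snapshot matrix: each column is a "flow field" sample (a few smooth modes plus noise).
n_dof, n_snap = 2000, 80
x = np.linspace(0.0, 1.0, n_dof)
basis = np.stack([np.sin((k + 1) * np.pi * x) for k in range(5)], axis=1)
snapshots = basis @ rng.standard_normal((5, n_snap)) + 1e-3 * rng.standard_normal((n_dof, n_snap))

# POD: subtract the mean field, take the SVD, keep enough modes to capture 99.9% of the energy.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)
pod_modes = U[:, :r]
print(r, energy[:r])

# Reduced coordinates of one snapshot: project the fluctuation onto the POD basis.
a = pod_modes.T @ (snapshots[:, [0]] - mean_field)
print(a.ravel())
```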
We present results of our analysis of up to 15 years of photometric data from eight AM CVn systems with orbital periods between 22.5 and 26.8 min. Our data have been collected from the GOTO, ZTF, Pan-STARRS, ASAS-SN and Catalina all-sky surveys and amateur observations collated by the AAVSO. We find evidence that these interacting ultra-compact binaries show a diversity of long-term optical properties similar to that of the hydrogen-accreting dwarf novae. We found that AM CVn systems in the previously identified accretion disc instability region are not a homogeneous group. Various members of the analysed sample exhibit behaviour reminiscent of Z Cam systems with long super outbursts and standstills, SU UMa systems with regular, shorter super outbursts, and nova-like systems which appear only in a high state. The addition of TESS full frame images of one of these systems, KL Dra, reveals the first evidence for normal outbursts appearing as a precursor to super outbursts in an AM CVn system. Our results will inform theoretical modelling of the outbursts of hydrogen-deficient systems.
astrophysics
Instead of the often used Foldy-Wouthuysen-Tani (FWT) transformation in non-relativistic quantum chromodynamics (NRQCD), we take a more general relation between the relativistic and non-relativistic on-shell spinors to recalculate the quark-quark-gluon vertex for heavy quarks. In comparison with the previous result using FWT, the recalculated coefficients in the NRQCD Lagrangian are different at order $1/m^3$ and new at order $1/m^4$ and $1/m^5$, where $m$ is the heavy quark mass.
high energy physics phenomenology
Baryon number fluctuations are believed to be good signatures of the QCD phase transition and its critical point (CP). Since the fluctuations are proportional to baryon-number susceptibilities of various orders, and the quark-number density is determined solely by the dressed quark propagator, we can calculate the moments of the quark number from the dressed quark propagator of the NJL model. We adopt the parameterized scheme of J. Cleymans and the recent RHIC results to map the freeze-out temperature $T$ and the quark chemical potential $\mu$ in the NJL results to the collision energies $\sqrt{S_{NN}}$ of the experiments. It is found that in the latter case the results of the NJL model fit the experimental data better, especially in the $0-5\%$ centrality class. At the same time there are still some problems to be noticed: when $\sqrt{S_{NN}}\le 10$ GeV the NJL results show obvious fluctuations; the NJL results are systematically below the experimental data; and the NJL results behave differently from the experimental data in some cases.
high energy physics phenomenology
Excitons dominate the optics of atomically-thin transition metal dichalcogenides and 2D van der Waals heterostructures. Interlayer 2D excitons, with an electron and a hole residing in different layers, form rapidly in heterostructures either via direct charge transfer or via Coulomb interactions that exchange energy between layers. Here, we report prominent intertube excitonic effects in quasi-1D van der Waals heterostructures consisting of C/BN/MoS$_2$ core/shell/skin nanotubes. Remarkably, under pulsed infrared excitation of excitons in the semiconducting CNTs we observed a rapid (sub-picosecond) excitonic response in the visible range from the MoS$_2$ skin, which we attribute to intertube biexcitons mediated by dipole-dipole Coulomb interactions in the coherent regime. On longer ($>100$ps) timescales hole transfer from the CNT core to the MoS$_2$ skin further modified the MoS$_2$'s absorption. Our direct demonstration of intertube excitonic interactions and charge transfer in 1D van der Waals heterostructures suggests future applications in infrared and visible optoelectronics using these radial heterojunctions.
condensed matter
Discrepancy between the measured value and the Standard Model prediction of the muon anomalous magnetic moment is a possible hint for new physics. A $Z^\prime$ particle with $\mu\tau$ flavor violating couplings can give a large contribution to the muon anomalous magnetic moment due to the $\tau$ mass enhancement at the one-loop level, and is known to explain the above discrepancy. In this paper, we study the potential of the Large Hadron Collider (LHC) for detecting such a $Z^\prime$ boson via the $p p \to\mu^-\mu^-\tau^+\tau^+ $ process. Earlier studies in the literature only considered the production channel with quark initial states ($p p \to q \bar q \to\mu^-\mu^-\tau^+\tau^+ $). Here, we show that the photon initiated process, $p p \to \gamma \gamma \to \mu^-\mu^-\tau^+\tau^+ $, is in fact the dominant production mode, for a heavy $Z^\prime$ boson of mass greater than a few hundred GeV. The potential of the high luminosity (HL) LHC is also considered.
high energy physics phenomenology