text | label
---|---|
This paper presents a deep learning-based estimation of the intensity component of MultiSpectral bands by considering joint multiplication of the neighbouring spectral bands. This estimation is conducted as part of the component substitution approach for fusion of PANchromatic and MultiSpectral images in remote sensing. After computing the band dependent intensity components, a deep neural network is trained to learn the nonlinear relationship between a PAN image and its nonlinear intensity components. Low Resolution MultiSpectral bands are then fed into the trained network to obtain an estimate of High Resolution MultiSpectral bands. Experiments conducted on three datasets show that the developed deep learning-based estimation approach provides improved performance compared to the existing methods based on three objective metrics. | electrical engineering and systems science |
A catalog of Galactic globular clusters has been compiled and used to analyze relations between the chemical and kinematic parameters of the clusters. The catalog contains positions, distances, luminosities, metallicities, and horizontal-branch morphology indices for 157~globular clusters, as well as space velocities for 72~globular clusters. For 69~globular clusters, these data are supplemented with the relative abundances of 28~chemical elements produced in various nucleosynthesis processes, taken from 101~papers published between 1986 and 2018. The tendency for redder horizontal branches in low-metallicity accreted globular clusters is discussed. The discrepancy between the criteria for cluster membership in the thick-disk and halo subsystems based on chemical and kinematic properties is considered. This is manifested by the fact that all metal-rich ($\rm{[Fe/H]} > -1.0$) clusters are located close to the center and plane of the Galaxy, regardless of their kinematic membership in particular Galaxy subsystems. An exception is three accreted clusters lost by a dwarf galaxy in Sagittarius. At the same time, the fraction of more distant clusters is high among more metal-poor clusters in any kinematically selected Galactic subsystem. | astrophysics |
Consider a subject or unit being monitored over a period of random duration in a longitudinal time-to-event study in a biomedical, public health, or engineering setting. As time moves forward, this unit experiences recurrent events of several types and a longitudinal marker transitions over a discrete state-space. In addition, its "health" status also transitions over a discrete state-space containing at least one absorbing state. A vector of covariates will also be associated with this unit. Of major interest for this unit is the time-to-absorption of its health status process, which represents this unit's lifetime. Aside from being affected by the covariate vector, there is a synergy among the recurrent competing risks processes, the longitudinal marker process, and the health status process in the sense that the time-evolution of each process is affected by the other processes. To exploit this synergy in order to obtain more realistic models and enhance inferential performance, a joint stochastic model for these components is proposed and the proper statistical inference methods for this model are developed. This joint model has the potential of facilitating precision interventions, thereby enhancing precision or personalized medicine. A stochastic process approach, using counting processes and continuous-time Markov chains, is utilized, which allows for modeling the dynamicity arising from the synergy among the model components and the impact of performed interventions after event occurrences and the increasing number of event occurrences. Likelihood-based inferential methods are developed based on observing a sample of these units. Properties of the inferential procedures are examined through simulations and illustrated using some real data sets. | statistics |
The thermal fit to preliminary HADES data of Au+Au collisions at $\sqrt{s_{_{NN}}}=2.4$ GeV shows two degenerate solutions at $T\approx50$ MeV and $T\approx70$ MeV. The analysis of the same particle yields in a transport simulation of the UrQMD model yields the same features, i.e. two distinct temperatures for the chemical freeze-out. While both solutions yield the same number of hadrons after resonance decays, the feeddown contribution is very different in the two cases. This highlights that two systems with different chemical composition can yield the same multiplicities after resonance decays. The nature of these two minima is further investigated by studying the time-dependent particle yields and extracted thermodynamic properties of the UrQMD model. It is confirmed that the evolution of the high temperature solution resembles the cooling and expansion of a hot and dense fireball. The low temperature solution displays an unphysical evolution: heating and compression of matter with a decrease of entropy. These results imply that the thermal model analysis of systems produced in low energy nuclear collisions is ambiguous but can be interpreted by also taking the time evolution and resonance contributions into account. | high energy physics phenomenology |
Bayesian neural networks (BNNs) are making significant progress in many research areas where decision making needs to be accompanied by uncertainty estimation. Being able to quantify uncertainty while making decisions is essential for understanding when the model is over-/under-confident, and hence BNNs are attracting interest in safety-critical applications, such as autonomous driving, healthcare and robotics. Nevertheless, BNNs have not been as widely used in industrial practice, mainly because of their increased memory and compute costs. In this work, we investigate quantisation of BNNs by compressing 32-bit floating-point weights and activations to their integer counterparts, an approach that has already been successful in reducing the compute demand in standard pointwise neural networks. We study three types of quantised BNNs, we evaluate them under a wide range of different settings, and we empirically demonstrate that a uniform quantisation scheme applied to BNNs does not substantially decrease their quality of uncertainty estimation. | computer science |
In this paper, we present novel non-relativistic superalgebras which correspond to supersymmetric extensions of the enlarged extended Bargmann algebra. The three-dimensional non-relativistic Chern-Simons supergravity actions invariant under the aforementioned superalgebras are constructed. The new non-relativistic superalgebras make it possible to accommodate a cosmological constant in a non-relativistic supergravity theory. Interestingly, we show that one of the non-relativistic supergravity theories presented here leads to the recently introduced Maxwellian exotic Bargmann supergravity when the flat limit $\ell \rightarrow\infty$ is considered. Besides, we show that both descriptions can be written in terms of a supersymmetric extension of the Nappi-Witten algebra or the extended Newton-Hooke superalgebra. | high energy physics theory |
We study the instability of a thin membrane (of zero bending rigidity) to out-of-plane deflections, when the membrane is immersed in an inviscid fluid flow and sheds a trailing vortex-sheet wake. We solve the nonlinear eigenvalue problem iteratively with large ensembles of initial guesses, for three canonical boundary conditions: both ends fixed, one end fixed and one free, and both free. Over several orders of magnitude of membrane mass density, we find instability by divergence or flutter (particularly at large mass density, or with one or both ends free). The most unstable eigenmodes generally become "wavier" at smaller mass density and smaller tension, but with regions of nonmonotonic behavior. We find good quantitative agreement with unsteady time-stepping simulations at small amplitude, but only qualitative similarities with the eventual steady-state large-amplitude motions. | physics |
Aims. We aim to constrain the size and porosity of dust particles ejected from comet 252P/LINEAR and their evolution near perihelion via near-infrared multiband polarimetry. A close approach of the comet to the Earth in March 2016 (~0.036 au) provided a rare opportunity for sampling the comet with a high spatial resolution. Methods. We made NIR JHKs-band polarimetric observations of the comet for 12 days near perihelion, interspersed with broadband optical imaging observations over four months. In addition, a dynamical simulation of the comet was performed 1000 yr backward in time. Results. We detected two discontinuous brightness enhancements. Before the first enhancement, the NIR polarization degrees were far lower than those of ordinary comets at a given phase angle. Soon after the activation, however, they increased by ~13 % at most, showing an unusual blue polarimetric color over the J and H bands (-2.55 %/um on average) and bluing of both J-H and H-Ks dust color. Throughout the event, the polarization vector was marginally aligned perpendicular to the scattering plane. The subsequent postperihelion reactivation of the comet lasted for approximately 1.5 months, with dust mass-loss rates in the Rc band ~30 times the pre-activation values. Conclusions. The marked increase in the polarization degree with blue NIR polarimetric color is reminiscent of the behavior of the fragmenting comet D/1999 S4 (LINEAR). The most plausible scenario for the observed polarimetric properties of 252P/LINEAR would be an ejection of predominantly large, compact dust particles from the desiccated surface layer. We conjecture that the more intense solar heating that the comet has received in its near-Earth orbit would cause the paucity of small, fluffy dust particles around the nucleus of the comet. | astrophysics |
The AGILE (Advanced enerGetic Ion eLectron tElescope) project focuses on the development of a compact low-cost space-based instrument to measure the intensities of charged particles and ions in space. Using multiple layers of fast silicon sensors and custom front-end electronics, the instrument is designed for real-time particle identification of a large variety of elements from H to Fe and spanning energies from 1 to 100 MeV per nucleon. The robust method proposed in this work uses key defining features of electronic signals generated by charged particles (ions) traveling through silicon layers to reliably identify and characterize particles in situ. AGILE will use this real-time pulse shape discrimination technique for the first time in space-based instrumentation. | physics |
In this work, a novel mechanism for spontaneous symmetry breaking is presented. This mechanism avoids quadratic divergences and is thus capable of addressing the hierarchy problem in gauge theories. Using the scale-dependent effective action $\Gamma_{k}$ minimally coupled to a gravitational sector, variational parameter setting is applied. This provides a mass and vacuum expectation value as functions of the constants arising in the low-scale expansion of Newton's and the cosmological couplings. A comparison with experimental data, such as the Higgs mass, allows putting restrictions on these constants. With this generic approach one can compare with explicit candidates for an effective field theory of gravity. As an example, we use the asymptotic safety scenario, where we find restrictions on the matter content of the theory. | high energy physics theory |
Search and recommendation systems, such as search engines, recruiting tools, online marketplaces, news, and social media, output ranked lists of content, products, and sometimes, people. Credit ratings, standardized tests, and risk assessments output only a score, but are also used implicitly for ranking. Bias in such ranking systems, especially among the top ranks, can worsen social and economic inequalities, polarize opinions, and reinforce stereotypes. On the other hand, a bias correction for minority groups can cause more harm if perceived as favoring group-fair outcomes over meritocracy. In this paper, we formulate the problem of underranking in group-fair rankings, which was not addressed in previous work. Most group-fair ranking algorithms post-process a given ranking and output a group-fair ranking. We define underranking based on how close the group-fair rank of each item is to its original rank, and prove a lower bound on the trade-off achievable for simultaneous underranking and group fairness in ranking. We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees comparable to the lower bound we prove. Our algorithm works with group fairness constraints for any number of groups. Our experimental results confirm the theoretical trade-off between underranking and group fairness, and also show that our algorithm achieves the best of both when compared to the state-of-the-art baselines. | computer science |
This paper is concerned with a family of reaction-diffusion systems that we introduced in [15], and that generalizes the SIR-type models from epidemiology. Such systems are now also used to describe collective behaviors. In this paper, we propose a modeling approach for these apparently diverse phenomena through the example of the dynamics of social unrest. The model involves two quantities: the level of social unrest, or more generally activity, u, and a field of social tension v, which play asymmetric roles. We think of u as the actually observed or explicit quantity, while v is an ambient, sometimes implicit, field of susceptibility that modulates the dynamics of u. In this article, we explore this class of models and prove several theoretical results based on the framework developed in [15], of which the present work is a companion paper. We particularly emphasize here two subclasses of systems: tension inhibiting and tension enhancing. These are characterized by, respectively, a negative or a positive feedback of the unrest on the social tension. We establish several properties for these classes and also study some extensions. In particular, we describe the behavior of the system following an initial surge of activity. We show that the model can give rise to many diverse qualitative dynamics. We also provide a variety of numerical simulations to illustrate our results and to reveal further properties and open questions. | mathematics |
Complex computer simulators are increasingly used across fields of science as generative models tying parameters of an underlying theory to experimental observations. Inference in this setup is often difficult, as simulators rarely admit a tractable density or likelihood function. We introduce Adversarial Variational Optimization (AVO), a likelihood-free inference algorithm for fitting a non-differentiable generative model incorporating ideas from generative adversarial networks, variational optimization and empirical Bayes. We adapt the training procedure of generative adversarial networks by replacing the differentiable generative network with a domain-specific simulator. We solve the resulting non-differentiable minimax problem by minimizing variational upper bounds of the two adversarial objectives. Effectively, the procedure results in learning a proposal distribution over simulator parameters, such that the JS divergence between the marginal distribution of the synthetic data and the empirical distribution of observed data is minimized. We evaluate and compare the method with simulators producing both discrete and continuous data. | statistics |
Risk assessment instruments are used across the criminal justice system to estimate the probability of some future behavior given covariates. The estimated probabilities are then used in making decisions at the individual level. In the past, there has been controversy about whether the probabilities derived from group-level calculations can meaningfully be applied to individuals. Using Bayesian hierarchical models applied to a large longitudinal dataset from the court system in the state of Kentucky, we analyze variation in individual-level probabilities of failing to appear for court and the extent to which it is captured by covariates. We find that individuals within the same risk group vary widely in their probability of the outcome. In practice, this means that allocating individuals to risk groups based on standard approaches to risk assessment, in large part, results in creating distinctions among individuals who are not meaningfully different in terms of their likelihood of the outcome. This is because uncertainty about the probability that any particular individual will fail to appear is large relative to the difference in average probabilities among any reasonable set of risk groups. | statistics |
A partial least squares regression is proposed for estimating the function-on-function regression model where a functional response and multiple functional predictors consist of random curves with quadratic and interaction effects. The direct estimation of a function-on-function regression model is usually an ill-posed problem. To overcome this difficulty, in practice, the functional data that belong to the infinite-dimensional space are generally projected into a finite-dimensional space of basis functions. The function-on-function regression model is converted to a multivariate regression model of the basis expansion coefficients. In the estimation phase of the proposed method, the functional variables are approximated by a finite-dimensional basis function expansion method. We show that the partial least squares regression constructed via a functional response, multiple functional predictors, and quadratic/interaction terms of the functional predictors is equivalent to the partial least squares regression constructed using basis expansions of functional variables. From the partial least squares regression of the basis expansions of functional variables, we provide an explicit formula for the partial least squares estimate of the coefficient function of the function-on-function regression model. Because the true forms of the models are generally unspecified, we propose a forward procedure for model selection. The finite sample performance of the proposed method is examined using several Monte Carlo experiments and two empirical data analyses, and the results were found to compare favorably with an existing method. | statistics |
We present a composably secure protocol allowing $n$ parties to test an entanglement generation resource controlled by a possibly dishonest party. The test consists only of local quantum operations and authenticated classical communication once a state is shared among them, and provides composable security, namely it can be used as a secure subroutine by $n$ honest parties within larger communication protocols to test if a source is sharing quantum states that are at least $\epsilon$-close to the GHZ state. This claim comes on top of previous results on multipartite entanglement verification where the security was studied in the usual game-based model. Here, we improve the protocol to make it more suitable for practical use in a quantum network and we study its security in the Abstract Cryptography framework to highlight composability issues and avoid hidden assumptions. This framework is a top-to-bottom theory that makes explicit any piece of information that each component (party or resource) gets at every time-step of the protocol. Moreover, any security proof, which amounts to showing indistinguishability between an ideal resource having the desired security properties (up to local simulation) and the concrete resource representing the protocol, is composable for free in this setting. This allows us to readily compose our basic protocol in order to create a composably secure multi-round protocol enabling honest parties to obtain a state close to a GHZ state or an abort signal, even in the presence of a noisy or malicious source. Our protocol can typically be used as a subroutine in a Quantum Internet, to securely share a GHZ state across the network before performing a communication or computation protocol. | quantum physics |
We put forward a unimodular $N=1, d=4$ anti-de Sitter supergravity theory off shell. This theory, where the cosmological constant does not couple to gravity, has a unique maximally supersymmetric classical vacuum, which is anti-de Sitter spacetime with radius given by the equation of motion of the auxiliary scalar field, i.e., $S=\frac{3}{\kappa L}$. However, we see that the non-supersymmetric classical vacua of the unimodular theory are Minkowski and de Sitter spacetimes as well as anti-de Sitter spacetime with radius $l\neq L$. | high energy physics theory |
Given a 3-manifold $M$ fibering over the circle, we investigate how the asymptotic translation lengths of pseudo-Anosov monodromies in the arc complex vary as we vary the fibration. We formalize this problem by defining normalized asymptotic translation length functions $\mu_d$ for every integer $d \ge 1$ on the rational points of a fibered face of the unit ball of the Thurston norm on $H^1(M;\mathbb{R})$. We show that even though the functions $\mu_d$ themselves are typically nowhere continuous, the sets of accumulation points of their graphs on $d$-dimensional slices of the fibered face are rather nice and in a way reminiscent of Fried's convex and continuous normalized stretch factor function. We also show that these sets of accumulation points depend only on the shape of the corresponding slice. We obtain a particularly concrete description of these sets when the slice is a simplex. We also compute $\mu_1$ at infinitely many points for the mapping torus of the simplest hyperbolic braid to show that the values of $\mu_1$ are rather arbitrary. This suggests that giving a formula for the functions $\mu_d$ seems very difficult even in the simplest cases. | mathematics |
This manuscript develops a new class of deep learning algorithms for outcomes that are potentially censored. To account for censoring, the unobservable loss function used in the absence of censoring is replaced by a censoring unbiased transformation. The resulting class of algorithms can be used to estimate both survival probabilities and restricted mean survival. We show how the deep learning algorithms can be implemented using software for uncensored data using a form of response transformation. Simulations and an analysis of the Netherlands 70 Gene Signature Data show strong performance of the proposed algorithms. | statistics |
We unveil an original manifestation of Anderson localization for wave packets launched with a finite average velocity: after an initial ballistic motion, the center of mass of the wave packet experiences a retroreflection and slowly returns to its initial position, an effect that we dub "Quantum Boomerang" and describe numerically and analytically in dimension 1. In dimension 3, we show numerically that the quantum boomerang is a genuine signature of Anderson localization: it exists if and only if the quantum dynamics is localized. | condensed matter |
Domain walls are the transition regions between two magnetic domains. These objects have become very relevant during the last decade, not only due to their intrinsic interest for the development of novel spintronic devices but also because of their fundamental interest. The study of domain walls has been linked to research on novel spin-orbit coupling phenomena such as the Dzyaloshinskii-Moriya interaction and the spin Hall effect, among others. Domain walls can be nucleated in ferromagnetic nanostrips and can be driven by conventional magnetic fields and by spin currents due to the injection of electrical pulses, which makes them very promising for technological applications in recording and logic devices. In this review, based on full micromagnetic simulations supported by extended one-dimensional models, we describe the static and dynamic properties of domain walls in thin ferromagnetic and ferrimagnetic wires with perpendicular magnetic anisotropy. The present chapter aims to provide a theoretical description of the fundamentals of domain walls, and of the numerical tools and models that describe the DW dynamics in previous and future experimental setups. | condensed matter |
We prove a nonlinear variant of the general Brascamp-Lieb inequality. Instances of this inequality are quite prevalent in analysis, and we illustrate this with substantial applications in harmonic analysis and partial differential equations. Our proof consists of running an efficient, or "tight", induction on scales argument, which uses the existence of gaussian near-extremisers to the underlying linear Brascamp-Lieb inequality (Lieb's theorem) in a fundamental way. A key ingredient is an effective version of Lieb's theorem, which we establish via a careful analysis of near-minimisers of weighted sums of exponential functions. | mathematics |
Bell's theorem cannot be proved if complementary measurements have to be represented by random variables which cannot be added or multiplied. One such case occurs if their domains are not identical. The case more directly related to the Einstein-Podolsky-Rosen argument occurs if there exists an `element of reality' but addition of complementary results is nevertheless impossible because they are represented by elements from different arithmetics. A naive mixing of arithmetics leads to contradictions at a much more elementary level than the Clauser-Horne-Shimony-Holt inequality. | quantum physics |
In the paper [Angelini M C, Parisi G, and Ricci-Tersenghi F, Ensemble renormalization group for disordered systems, Phys. Rev. B 87 134201 (2013)] we introduced a real-space renormalization group called Ensemble Renormalization Group (ERG) and we applied it to the Edwards-Anderson model, obtaining estimates for the critical exponents in good agreement with those from Monte Carlo simulations. Recently the paper [Castellana M, Real-space renormalization-group methods for hierarchical spin glasses, J. Phys. A: Math. Theor. 52 445002 (2019)] re-examined the ERG method from a different perspective, concluding that the previous results were wrong, and claiming that the ERG method predicts trivially wrong critical exponents. In this comment we explain why the conclusions reached by Castellana are wrong, as they are based on a misinterpretation of finite-size effects. We conclude that the ERG method remains a good RG method to obtain critical exponents in strongly disordered models (if properly used). | condensed matter |
We present a general argument that highlights the difficulty of determining the space-time structure of the renormalizable bottom quark Yukawa interactions of the Standard Model Higgs boson, or for that matter of any hypothetical spin-zero particle, at high energy colliders. The essence of the argument is that it is always possible, by chiral rotations, to transform between scalar and pseudoscalar Yukawa interactions without affecting the interactions of bottom quarks with SM gauge bosons. Since these rotations affect only the $b$-quark mass terms in the Standard Model Lagrangian, any differences in observables for scalar versus pseudoscalar couplings vanish when $m_b \rightarrow 0$, and are strongly suppressed in high energy processes involving the heavy spin-zero particle where the $b$-quarks are typically relativistic. We show, however, that the energy dependence of, for instance, $e^+e^- \rightarrow b\bar{b} X$ (here $X$ denotes the spin-zero particle) close to the reaction threshold may serve to provide a distinction between the scalar versus pseudoscalar coupling at electron-positron colliders that are being proposed, provided that the $Xb\bar{b}$ coupling is sizeable. We also note that while various kinematic distributions for $t \bar{t} h$ are indeed sensitive to the space-time structure of the top Yukawa coupling, for a spin-0 particle $X$ of an arbitrary mass, the said sensitivity is lost if $m_{X} \gg m_t$. | high energy physics phenomenology |
CP violation in the third-generation fermion Higgs couplings may play an important role in electroweak baryogenesis; however, it is strongly bounded by the experimental limits on the electron electric dipole moment. In the effective field theory framework for beyond-the-Standard-Model physics, it can originate from dimension-six operators. Under assumptions such as Minimal Flavour Violation or flavour symmetries, the Wilson coefficients of those operators have in general a certain flavour structure. In that case the imaginary parts of the $h\tau\tau$ and $hee$ couplings are linked to each other. We point out that, due to the very strong bound from the electron electric dipole moment on the imaginary part of the $hee$ coupling, the bound on the imaginary part of the $h\tau\tau$ coupling is generically two orders of magnitude stronger than the one obtained from the $\tau$-loop contributions to the Barr-Zee diagram. We briefly discuss the potential impact of this fact on electroweak baryogenesis. | high energy physics phenomenology |
The microsphere-based subwavelength imaging technique was first demonstrated in 2011. After nearly a decade of efforts, the technique has spawned wide interest in fields such as laser nano-machining, imaging, sensing, and biological detection. For wider industrial-scale application of the technique, a robust, low-cost objective lens incorporating a microsphere lens is highly desired and sought by many researchers. In this work, we demonstrate a unibody microscope objective lens formed by tipping a high-index microsphere onto a Plano-Convex lens and subsequently fitting them into a conventional objective lens. We call this new objective the Plano-Convex-Microsphere (PCM) objective, which resembles the appearance and operation of an ordinary microscope objective, while providing super-resolving power in discerning subwavelength nanoscale features in air. The imaging performance of the PCM-objective, along with the working distance, has been systematically investigated. With the assistance of a scanning process, larger-area imaging is realized. The PCM-objective can be easily adapted to many existing microscope systems and is appealing for commercialization. | physics |
A 7-dimensional area-minimizing embedded hypersurface $M$ will in general have a discrete singular set. The same is true if $M$ is stable, or has bounded index, provided $H^6(sing M) = 0$. We show that if $M_i$ are a sequence of such minimal hypersurfaces which are minimizing, stable, or have bounded index, then $M_i$ can limit to a singular $M$ with only very controlled geometry, topology, and singular set. We show one can always "parameterize" a subsequence $i'$ with controlled bi-Lipschitz maps $\phi_{i'}$ taking $\phi_{i'}(M_{1'}) = M_{i'}$. As a consequence, we prove the space of smooth, closed, embedded minimal hypersurfaces $M$ in a closed Riemannian 8-manifold $(N, g)$ with a priori bounds $H^7(M) \leq \Lambda$ and $index(M) \leq I$ divides into finitely-many diffeomorphism types, and this finiteness continues to hold (in a suitable sense) if one allows the metric g to vary, or M to be singular. | mathematics |
The gravitational waves emitted from a binary neutron star merger, as predicted from general relativistic magneto-hydrodynamics calculations, are sensitive to the appearance of quark matter and the stiffness of the equation of state of the QCD matter present in the inner cores of the stars. This is a new messenger observable from outer space, which provides direct signals for the phase structure of strongly interacting QCD matter at high baryon density and high temperature. These astrophysically created extremes of thermodynamics match, to within 20\%, the values of densities and temperatures which we find in relativistic hydrodynamics and transport theory of heavy ion collisions at existing laboratories, though at quite different rapidity windows, impact parameters and bombarding energies of the heavy nuclear systems. We demonstrate how one unified equation of state can be constructed and used for both neutron star physics and hot QCD matter excited at laboratory facilities. The similarity in the underlying QCD physics allows the gravitational wave signals from future advanced LIGO and Virgo events to be combined with the analysis of high-multiplicity fluctuations and flow measurements in heavy ion detectors in the lab to pin down the EoS and the phase structure of dense matter. | high energy physics phenomenology |
The Mass Density approach to GRW (GRWm for short) has been widely discussed in the quantum foundations literature. A crucial feature of GRWm is the introduction of a relation of accessibility for mass, which allows one to explain the determinacy of experimental outcomes, thus also addressing the tails problem of GRW. However, the relation of accessibility leaves the ontological meaning of the non-accessible portion of mass utterly unexplained. In this paper I discuss two viable approaches to non-accessible mass, which I call anti-realist and realist, and defend the latter. First, I show that the anti-realist approach suffers from various objections. Second, I develop an account of non-accessible mass density states as objectively indeterminate states of affairs. | quantum physics
A wide variety of activation functions have been proposed for neural networks. The Rectified Linear Unit (ReLU) is especially popular today. There are many practical reasons that motivate the use of the ReLU. This paper provides new theoretical characterizations that support the use of the ReLU, its variants such as the leaky ReLU, as well as other activation functions in the case of univariate, single-hidden layer feedforward neural networks. Our results also explain the importance of commonly used strategies in the design and training of neural networks such as "weight decay" and "path-norm" regularization, and provide a new justification for the use of "skip connections" in network architectures. These new insights are obtained through the lens of spline theory. In particular, we show how neural network training problems are related to infinite-dimensional optimizations posed over Banach spaces of functions whose solutions are well-known to be fractional and polynomial splines, where the particular Banach space (which controls the order of the spline) depends on the choice of activation function. | statistics |
In this work, we take a closer look at the evaluation of two families of methods for enriching information from knowledge graphs: Link Prediction and Entity Alignment. In the current experimental setting, multiple different scores are employed to assess different aspects of model performance. We analyze the informativeness of these evaluation measures and identify several shortcomings. In particular, we demonstrate that all existing scores can hardly be used to compare results across different datasets. Moreover, we demonstrate that varying the size of the test set directly affects the measured performance of the same model under commonly used metrics for the Entity Alignment task. We show that this leads to various problems in the interpretation of results, which may support misleading conclusions. Therefore, we propose adjustments to the evaluation and demonstrate empirically how this supports a fair, comparable, and interpretable assessment of model performance. Our code is available at https://github.com/mberr/rank-based-evaluation. | computer science
We investigate the effect of conditional null measurements on a quantum system and find a rich variety of behaviors. Specifically, quantum dynamics with a time-independent $H$ in a finite-dimensional Hilbert space are considered with repeated strong null measurements of a specified state. We discuss four generic behaviors that emerge in these monitored systems. The first arises in systems without symmetry, and hence without the associated degeneracies in the energy spectrum and without dark states. In this case, a unique final state can be found which is determined by the largest eigenvalue of the survival operator, the non-unitary operator encoding both the unitary evolution between measurements and the measurement itself. For a three-level system, this is similar to the well-known shelving effect. Secondly, for systems with a built-in symmetry and a correspondingly degenerate energy spectrum, the null measurements dynamically select the degenerate energy levels, while the non-degenerate levels are effectively wiped out. Thirdly, in the absence of dark states, and for specific choices of parameters, two or more eigenvalues of the survival operator match in magnitude, and this leads to an oscillatory behavior controlled by the measurement rate and not solely by the energy levels. Finally, when the control parameters are tuned such that the eigenvalues of the survival operator all coalesce to zero, one has exceptional points that correspond to situations that violate the null measurement condition, making the conditional measurement process impossible. | quantum physics
Skyrmions can be stabilized in magnetic systems with broken inversion symmetry and chiral interactions, such as Dzyaloshinskii-Moriya interactions (DMI). Further, compensation of magnetic moments in ferrimagnetic materials can significantly reduce magnetic dipolar interactions, which tend to favor large skyrmions. Tuning DMI is essential to control skyrmion properties, with symmetry breaking at interfaces offering the greatest flexibility. However, in contrast to the ferromagnet case, few studies have investigated interfacial DMI in ferrimagnets. Here we present a systematic study of DMI in ferrimagnetic CoGd films by Brillouin light scattering. We demonstrate the ability to control DMI by the CoGd cap layer composition, the stack symmetry and the ferrimagnetic layer thickness. The DMI thickness dependence confirms its interfacial nature. In addition, magnetic force microscopy reveals the ability to tune DMI in a range that stabilizes sub-100 nm skyrmions at room temperature in zero field. Our work opens new paths for controlling interfacial DMI in ferrimagnets to nucleate and manipulate skyrmions. | condensed matter |
In this paper the scattering between the non-topological kinks arising in a family of two-component scalar field theory models is analyzed. These defects carry a winding charge. As a consequence, two different classes of kink scattering processes emerge: (1) collisions between kinks that carry the same winding number and (2) scattering events between kinks with opposite winding numbers. The variety of scattering channels is very rich, and it depends strongly on the collision velocity and the model parameter. For the first type of event, four distinct scattering channels are found: \textit{kink reflection} (kinks collide and bounce back), \textit{one-kink (partial) annihilation} (the two non-topological kinks collide, causing the annihilation of one half of each kink and the subsequent recombination of the other two halves, giving rise to a new non-topological kink with the opposite winding charge), \textit{winding flip kink reflection} (kinks collide and emerge with the opposite winding charge) and \textit{total kink annihilation} (kinks collide and decay to the vacuum configuration). For the second type of event, the scattering channels comprise \textit{bion formation} (kink and antikink form a long-lived bound state), \textit{kink-antikink passage} (kinks collide and pass through each other) and \textit{kink-antikink annihilation} (kink and antikink collide and decay to the vacuum configuration). | high energy physics theory
The charmonium-like resonance $Z_c(3900)$ and its excited state $Z(4430)$ are among the particles that are serious candidates for double-heavy tetraquarks. Calculations of different parameters associated with these states, both in the vacuum and in a medium with finite density, are of great importance. Such investigations help us clarify their nature, internal quark-gluon organization, and quantum numbers. Accordingly, we extend our previous analyses of the ground state $Z_c(3900)$ to investigate the medium modifications of different parameters of the excited $Z(4430)$ state. In particular, we calculate the mass, vector self-energy and current coupling of $Z(4430)$ in terms of density, up to a density comparable to that of the cores of massive neutron stars. The obtained results may help experimental groups aiming to study the behavior of exotic states at higher densities. | high energy physics phenomenology
We minimise the Canham-Helfrich energy in the class of closed immersions with prescribed genus, surface area and enclosed volume. Compactness is achieved in the class of oriented varifolds. The main result is a lower-semicontinuity estimate for the minimising sequence, which is in general false, by a counterexample of Gro{\ss}e-Brauckmann. The main argument involved is showing partial regularity of the limit. It entails comparing the Helfrich energy of the minimising sequence locally to that of a biharmonic graph. This idea is due to Simon, but it cannot be applied directly, since the area and enclosed volume of the graph may differ. By an idea of Schygulla we adjust these quantities using a two-parameter diffeomorphism of $\mathbb{R}^3$. | mathematics
The low-mass star ASASSN-13db experienced an EXor outburst in 2013, which identified it as a Young Stellar Object (YSO). Then, from 2014 to 2017, it had another outburst, longer and more luminous than the earlier one. We analyze the observations of the second outburst and compare it to eruptions of Intermediate Luminosity Optical Transients (ILOTs). We show that the decline of the light curve is almost identical to that of V838 Mon, a prototype of a type of ILOT known as a Luminous Red Nova (LRN). This similarity becomes conspicuous when oscillations that are associated with rotation are filtered out from the light curve of ASASSN-13db. We suggest that the eruption was the result of the accretion of a proto-planet of a few Earth masses. The proto-planet was shredded by tidal forces before it was accreted onto the YSO, releasing gravitational energy that powered the outburst for $\approx 800$ days and ended in a $\approx 55$-day decline phase. As the accreted material was depleted, the accretion rate dropped and the eruption light curve declined for almost two months; the supply was then exhausted completely, creating a sharp break in the light curve. Another possibility is that the accreted mass resulted from an instability in the proto-planetary disk that led to a large episode of accretion from an inner viscous disk. We find that the variation of the temperature of the outburst is consistent with the surface temperature expected from a depleted viscous accretion disk. The 2014-2017 outburst of ASASSN-13db may be the least energetic ILOT to have been discovered to date, with an energy budget of only $\approx 10^{42}$ erg. | astrophysics
In Part II of this series of papers, we consider an initial-boundary value problem for the Kolmogorov--Petrovskii--Piscounov (KPP) type equation with a discontinuous cut-off in the reaction function at concentration $u=u_c$. For fixed cut-off value $u_c \in (0,1)$, we apply the method of matched asymptotic coordinate expansions to obtain the complete large-time asymptotic form of the solution which exhibits the formation of a permanent form travelling wave structure. In particular, this approach allows the correction to the wave speed and the rate of convergence of the solution onto the permanent form travelling wave to be determined via a detailed analysis of the asymptotic structures in small-time and, subsequently, in large-space. The asymptotic results are confirmed against numerical results obtained for the particular case of a cut-off Fisher reaction function. | mathematics |
In this paper we develop a predictive model for the spread of COVID-19 infection at a provincial (i.e. EU NUTS-3) level in Italy by using official data from the Italian Ministry of Health integrated with data extracted from daily official press conferences of regional authorities and from local newspaper websites. This integration is mainly concerned with COVID-19 cause-specific death data, which are not available at the NUTS-3 level from open official data channels. An adjusted time-dependent SIRD model is used to predict the behavior of the epidemic, specifically the number of susceptible, infected, deceased and recovered people. Predictive model performance is evaluated by comparison with real data. | statistics
We study the phase separation configurations and their rotational properties for a mixture of two interacting charged Bose-Einstein condensates subject to a magnetic field, trapped in disc and Corbino geometries. We calculate the ground state energies of the azimuthal and radial phase separation configurations using the Gross-Pitaevskii and the Thomas-Fermi approximations. We show that the results from both approaches are in good agreement for experimentally relevant system parameters. The immiscible mixture in both geometries with equal intracomponent interactions favors the azimuthal phase separation for all intercomponent interactions. Only an imbalance in the intracomponent interactions can result in a transition to the radial phase separation, for which the transition becomes sensitive to the shape of the trap. We present phase diagrams as a function of the inter- and intracomponent interactions. While the radial phase separation is widely favored in the disc geometry, the azimuthal phase separation is favored for narrower Corbino geometries. We explore the rotational properties of the spatially separated condensates under the magnetic field, studying their angular momenta and velocity fields. The quantization of circulation breaks down for the azimuthal phase separation. In this case, the bulk region of the condensate continues to display superfluid flow behavior, whereas the velocity field shows rigid-body behavior along the phase boundaries. | condensed matter
We address the problem of the origin of massive stars, namely the origin, path and timescale of the mass flows that create them. Based on extensive numerical simulations, we propose a scenario where massive stars are assembled by large-scale, converging, inertial flows that naturally occur in supersonic turbulence. We refer to this scenario of massive-star formation as the "Inertial-Inflow Model". This model stems directly from the idea that the mass distribution of stars is primarily the result of turbulent fragmentation. Under this hypothesis, the statistical properties of the turbulence determine the formation timescale and mass of prestellar cores, posing definite constraints on the formation mechanism of massive stars. We quantify such constraints by the analysis of a simulation of supernova-driven turbulence in a 250-pc region of the interstellar medium, describing the formation of hundreds of massive stars over a time of approximately 30 Myr. Due to the large size of our statistical sample, we can say with full confidence that massive stars in general do not form from the collapse of massive cores, nor from competitive accretion, as both models are incompatible with the numerical results. We also compute synthetic continuum observables in Herschel and ALMA bands. We find that, depending on the distance of the observed regions, estimates of core mass based on commonly-used methods may exceed the actual core masses by up to two orders of magnitude, and that there is essentially no correlation between estimated and real core masses. | astrophysics |
We investigate the $B^+\to J/\psi \phi K^+$ decay via various rescattering diagrams. Without introducing genuine exotic resonances, it is shown that the $Z_{cs}(4000)$, $Z_{cs}(4220)$ and $X(4700)$ reported by the LHCb collaboration can be simulated by the $J/\psi K^{*+}$, $\psi^\prime K^+$ and $\psi^\prime \phi$ threshold cusps, respectively. These cusps are enhanced by some nearby triangle singularities. The $X(4685)$ with $J^P=1^+$ cannot be well simulated by the threshold effects in our model, which implies that it may be a genuine resonance. | high energy physics phenomenology |
The inertial extended Lagrangian/self-consistent field scheme (iEL-SCF) has been adopted for solving charge equilibration in LAMMPS as part of the reactive force field ReaxFF, which, due to the charge conservation constraint, requires solving two sets of linear systems of equations for the new charges at each molecular dynamics time-step. Therefore, the extended Lagrangian for charge equilibration comprises two auxiliary variables for the intermediate charges, which serve as an initial guess for the real charges. We show that the iEL-SCF is able to reduce the number of SCF cycles by 50-80% relative to the original conjugate-gradient self-consistent field solver, as tested across diverse systems including water, ferric hydroxide, nitramine RDX, and hexanitrostilbene. | physics
In this letter we present, for the first time, results for the photoproduction of massive gauge bosons in proton-proton, proton-lead and lead-lead collisions at the Large Hadron Collider (LHC), the High-Energy LHC (HE-LHC) and the Future Circular Collider (FCC). Predictions for the rapidity distributions and total cross sections are presented. We predict a large number of events in the rapidity range probed by the LHC detectors, which implies that this process can be used to probe the photoproduction of massive gauge bosons as well as to search for physics beyond the Standard Model. | high energy physics phenomenology
Production of a forward Drell-Yan lepton pair accompanied by a jet separated by a large rapidity interval is proposed as a probe of BFKL evolution at the LHC. Several observables to be measured are presented, including the azimuthal angle dependence of the lepton pair, which allows one to determine the Drell-Yan structure functions. | high energy physics phenomenology
We consider a one-dimensional morphoelastic model describing post-burn scar contractions. This model describes the movement of the skin and the development of the effective Eulerian strain in the tissue. Besides these, the model also contains components that play a major role in the wound healing process: fibroblasts, myofibroblasts, signaling molecules, and collagen. We perform a sensitivity analysis for many parameters and use the results for a feasibility study. In this study, we test whether the model is suitable for predicting how contraction develops in different age groups. To this end, we conduct an extensive literature review to find parameter values. From the sensitivity analysis, we conclude that the most sensitive parameters are the equilibrium collagen concentration in the dermal layer, the apoptosis rate of fibroblasts and myofibroblasts, and the secretion rate of signaling molecules. Further, although we can use the model to simulate distinct contraction densities in different age groups, our results differ from what is seen in the clinic. | mathematics
Controllable geometric manipulation via micromachining techniques provides a promising tool for enhancing useful topological electrical responses relevant to future applications such as quantum information science. Here we present microdevices fabricated with a focused ion beam from the indium-doped topological insulator Pb$_{1-x}$Sn$_x$Te. With a device thickness on the order of 1 $\mu$m and an extremely large bulk resistivity, we achieve an unprecedented enhancement of the surface contribution to about 30% of the total conductance near room temperature. The surface contribution increases as the temperature is reduced, becoming dominant below approximately 180 K, compared to 30 K in mm-thickness crystals. In addition to the enhanced surface contribution to normal-state transport, we observe the emergence of two-dimensional superconductivity below 6 K. Measurements of magnetoresistivity at high magnetic fields reveal a weak antilocalization behavior in the normal-state magnetoconductance at low temperature and a variation in the power-law dependence of resistivity on temperature with field. These results demonstrate that interesting electrical responses relevant to practical applications can be achieved by suitable engineering of single crystals. | condensed matter
Hybrid perovskites with mixed organic cations such as methylammonium (MA) and formamidinium (FA) have attracted interest due to their improved stability and the capability to tune their properties by varying the composition. In this work we report on the local variation of the structural and electronic properties in mixed A-site cation MA/FA lead iodide perovskites FA$_x$MA$_{1-x}$PbI$_3$, evaluated from static first-principles calculations in selected structures where the orientations of the organic cations result from examining the energy landscape of some compositions. The cation replacement at the A-site to form the solid solution causes increased tilting of the inorganic PbI$_6$ octahedra: in the FA-rich compounds the octahedra tilt to compensate for the reduced space filling offered by the smaller MA cation, whereas in the MA-rich compounds they tilt to open up the space needed for the larger FA cation. In fact, the effect of octahedron tilting exceeds that of unit-cell size in determining the band gap of these organic-cation mixtures. Our calculations indicate that the key role played by hydrogen bonds with iodine anions in the pure compounds is preserved in the cation-mixed perovskites. It is found that MA-I bonds remain stronger than FA-I bonds throughout the composition range, regardless of the unit-cell expansion as the FA content increases. Our calculations reveal how the hydrogen bonds stabilize the non-bonding I 5p orbitals, spatially perpendicular to the Pb-I-Pb bond axis, lowering their energy when the H-I interaction occurs, which would explain the well-known role of hydrogen bonding in the structural stabilization of hybrid perovskites. These results contribute to the understanding of the role played by cation mixing at A sites in the physics of lead halide perovskites. | condensed matter
Ride-hailing services are growing rapidly and becoming one of the most disruptive technologies in the transportation realm. Accurate prediction of ride-hailing trip demand not only enables cities to better understand people's activity patterns, but also helps ride-hailing companies and drivers make informed decisions to reduce deadheading vehicle miles traveled, traffic congestion, and energy consumption. In this study, a convolutional neural network (CNN)-based deep learning model is proposed for multi-step ride-hailing demand prediction using the trip request data in Chengdu, China, offered by DiDi Chuxing. The CNN model is capable of accurately predicting the ride-hailing pick-up demand at each 1-km by 1-km zone in the city of Chengdu for every 10 minutes. Compared with another deep learning model based on long short-term memory, the CNN model is 30% faster for the training and predicting process. The proposed model can also be easily extended to make multi-step predictions, which would benefit on-demand shared autonomous vehicle applications and fleet operators in terms of supply-demand rebalancing. The prediction error attenuation analysis shows that the accuracy stays acceptable as the model predicts more steps. | computer science
We explore training attention-based encoder-decoder ASR in low-resource settings. These models perform poorly when trained on small amounts of transcribed speech, in part because they depend on having sufficient target-side text to train the attention and decoder networks. In this paper we address this shortcoming by pretraining our network parameters using only text-based data and transcribed speech from other languages. We analyze the relative contributions of both sources of data. Across 3 test languages, our text-based approach resulted in a 20% average relative improvement over a text-based augmentation technique without pretraining. Using transcribed speech from nearby languages gives a further 20-30% relative reduction in character error rate. | electrical engineering and systems science |
We investigate the impact of electron-lattice coupling on the stability of various magnetic orders in rare-earth nickelates. We use the Hartree-Fock approximation, at zero temperature, to study an effective, two-band model with correlations characterized by a Hubbard $U$ and a Hund's $J$. This is coupled to breathing-mode distortions of the octahedral oxygen cages, described semi-classically, with a Holstein term. We analyze the effect of the various parameters on the resulting phase diagram, in particular on the charge disproportionation and on the magnetic order. We confirm that the coupling to the lattice cooperates with Hund's coupling and thus encourages charge disproportionation. We also find that it favors the fully disproportionated, 4-site periodic magnetic order of type $\Uparrow 0 \Downarrow 0$. Other convergent magnetic phases, such as the collinear $\uparrow\uparrow\downarrow\downarrow$ and non-collinear $\uparrow\rightarrow\downarrow\leftarrow$ states, do not couple to the lattice because of their lack of charge disproportionation. Novel phases, e.g. with charge disproportionation but no magnetic order, are also found to be stabilized in specific conditions. | condensed matter |
We present ALMA [CII] 158 $\mu$m line and underlying far-infrared (FIR) continuum emission observations ($0''.70 \times 0''.56$ resolution) toward HSC J124353.93$+$010038.5 (J1243$+$0100) at $z = 7.07$, the only low-luminosity ($M_{\rm 1450} > -25$ mag) quasar currently known at $z > 7$. The FIR continuum is bright (1.52 mJy) and resolved with a total luminosity of $L_{\rm FIR} = 3.5 \times 10^{12}~L_\odot$. The spatially extended component is responsible for $\sim 40\%$ of the emission. The area-integrated [CII] spectrum shows a broad wing (${\rm FWHM} = 997$ km s$^{-1}$, $L_{\rm [CII]} = 1.2 \times 10^9~L_\odot$) as well as a bright core (${\rm FWHM} = 235$ km s$^{-1}$, $L_{\rm [CII]} = 1.9 \times 10^9~L_\odot$). This wing is the first detection of a galactic-scale quasar-driven outflow (atomic outflow rate $> 447~M_\odot$ yr$^{-1}$) at $z > 7$. The estimated large mass loading factor of the total outflow (e.g., $\gtrsim 9$ relative to the [CII]-based SFR) suggests that this outflow will soon quench the star-formation of the host. The core gas dynamics are governed by rotation, with a rotation curve suggestive of a compact bulge ($\sim 3.3 \times 10^{10}~M_\odot$), although it is not yet spatially resolved. Finally, we found that J1243$+$0100 has a black hole mass-to-dynamical mass ratio (and -to-bulge mass ratio) of $\sim 0.4\%$ ($\sim 1\%$), consistent with the local value within uncertainties. Our results therefore suggest that the black hole-host co-evolution relation is already in place at $z \sim 7$ for this object. | astrophysics |
This work is a part of an ongoing effort to prove the correctness of invertibility conditions for the theory of fixed-width bit-vectors, which are used to solve quantified bit-vector formulas in the Satisfiability Modulo Theories (SMT) solver CVC4. While many of these were proved in a completely automatic fashion for any bit-width, some were only proved for bit-widths up to 65, even though they are being used to solve formulas over arbitrary bit-widths. In this paper we describe our initial efforts in proving a subset of these invertibility conditions in the Coq proof assistant. We describe the Coq library that we use, as well as the extensions that we introduced to it. | computer science |
Blomer and Maga recently proved that, if $F$ is an $L^2$-normalized Hecke Maass cusp form for $\mathrm{SL}_n(\mathbb Z)$, and $\Omega$ is a compact subset of $\mathrm{PGL}_n(\mathbb R)/\mathrm{PO}_n(\mathbb R)$, then we have $\|F|_\Omega\|_\infty\ll_\Omega\lambda_F^{n(n-1)/8-\delta_n}$ for some $\delta_n>0$, where $\lambda_F$ is the Laplacian eigenvalue of $F$. In the present paper, we prove an explicit version of their result. | mathematics |
We investigate the interaction potential of superconducting vortices at the full quantum level. We formulate the interaction potential in a constrained path integral and calculate it by the quantum Monte Carlo simulation. The vortex-vortex potential is attractive (type-I), repulsive (type-II), and flat (critical) depending on a coupling constant. The vortex-antivortex potential also depends on the coupling constant at long range but is always attractive at short range. | condensed matter |
In this paper, we have derived certain classical inequalities, namely, Young's, H\"older's, Minkowski's and Hermite-Hadamard inequalities for pseudo-integral (also known as $g$-integral). For Young's, H\"older's, Minkowski's inequalities, both the cases $p>1$ as well as $p<1,\,p\ne 0$ have been covered. Moreover, in the case of Hermite-Hadamard inequality, a refinement has also been proved and as a special case, $g$-analogue of geometric-logarithmic-arithmetic inequality has been deduced. | mathematics |
Recent advances in deep learning have had a methodological and practical impact on brain-computer interface (BCI) research. Among the various deep network architectures, convolutional neural networks (CNNs) have been well suited for spatio-spectral-temporal electroencephalogram (EEG) signal representation learning. Most of the existing CNN-based methods described in the literature extract features at a sequential level of abstraction with repetitive nonlinear operations and involve densely connected layers for classification. However, studies in neurophysiology have revealed that EEG signals carry information in different ranges of frequency components. To better reflect these multi-frequency properties in EEGs, we propose a novel deep multi-scale neural network that discovers feature representations in multiple frequency/time ranges and extracts relationships among electrodes, i.e., spatial representations, for subject intention/condition identification. Furthermore, by completely representing EEG signals with spatio-spectral-temporal information, the proposed method can be utilized for diverse paradigms in both active and passive BCIs, contrary to existing methods that are primarily focused on single-paradigm BCIs. To demonstrate the validity of our proposed method, we conducted experiments on various paradigms of active/passive BCI datasets. Our experimental results demonstrated that the proposed method achieved performance improvements when judged against comparable state-of-the-art methods. Additionally, we analyzed the proposed method using different techniques, such as PSD curves and relevance score inspection to validate the multi-scale EEG signal information capturing ability, activation pattern maps for investigating the learned spatial filters, and t-SNE plotting for visualizing represented features. Finally, we also demonstrated our method's application to real-world problems. | electrical engineering and systems science
We carried out a parameter-space exploration of the ammonia abundance in the pre-stellar core L1544, where it has been observed to increase toward the center of the core with no signs of freeze-out onto grain surfaces. We considered static and dynamical physical models coupled with elaborate chemical and radiative transfer calculations, and explored the effects of varying model parameters on the (ortho+para) ammonia abundance profile. None of our models are able to reproduce the inward-increasing tendency in the observed profile; ammonia depletion always occurs in the center of the core. In particular, our study shows that including the chemical desorption process, where exothermic association reactions on the grain surface can result in the immediate desorption of the product molecule, leads to ammonia abundances that are over an order of magnitude above the observed level in the innermost 15000 au of the core - at least when one employs a constant efficiency for the chemical desorption process irrespective of the ice composition. Our results seemingly constrain the chemical desorption efficiency of ammonia on water ice to below 1%. It is increasingly evident that time-dependent effects must be considered so that the results of chemical models can be reconciled with observations. | astrophysics |
We consider fuzzy, or continuous, bits, which take values in [0;1] and (-1;1] instead of {0;1}, and operations on them (NOT, XOR, etc.) and on their sequences (ADD), to obtain a generalization of cryptographic hash functions (CHFs) to messages consisting of fuzzy bits, so that CHFs become smooth, non-constant functions of each bit of the message. We then train neural networks to predict the message that has a given hash, where the loss function comparing the hash of the predicted message with the given true hash is backpropagatable. The results of training for the standard CHFs - MD5, SHA1, SHA2-256, and SHA3/Keccak - with a small number of (optionally weakened) rounds are presented and compared. | computer science
Recently, solid-state nanopore nanogaps have generated a lot of interest in ultrafast DNA sequencing. However, there are challenges in slowing down the DNA translocation process to achieve single-nucleobase resolution. A series of computational tools have been used in attempts to study DNA translocation in several model systems. The prospect of finding an efficient nanoelectrode for human genome sequencing might offer an entirely innovative way of preventive health care. Here, we have studied the performance of a boron carbide (BC$_3$) based nanogap setup for DNA sequencing using density functional theory and non-equilibrium Green's function-based methods. The electric current variations under different applied bias voltages are found to be significant due to changes in nucleotide orientation and lateral position and can even outperform graphene. The relatively lower computed interaction energy for BC$_3$ electrodes compared to graphene electrodes indicates that BC$_3$ is a better nanoelectrode for DNA sequencing. From our results, we have found that unique identification of all four nucleotides is possible in the 0.3 to 0.4 V bias region. Furthermore, each of the four nucleotides exhibits around one order of magnitude difference in current, which makes it possible to identify all four nucleotides uniquely. Thus, we believe that BC$_3$-based nanoelectrodes may be utilized toward the development of a practical nanodevice for DNA sequencing. | physics
We present a novel unifying interpretation of excess event rates observed in several dark matter direct-detection experiments that utilize single-electron threshold semiconductor detectors. Despite their different locations, exposures, readout techniques, detector composition, and operating depths, these experiments all observe statistically significant excess event rates of $\sim$ 10 Hz/kg. However, none of these persistent excesses has yet been reported as a dark matter signal because individually, each can be attributed to different well-motivated but unmodeled backgrounds, and taken together, they cannot be explained by dark matter particles scattering elastically off detector nuclei or electrons. We show that these results can be reconciled if the semiconductor detectors are seeing a collective inelastic process, consistent with exciting a plasmon. We further show that plasmon excitation could arise in two compelling dark matter scenarios, both of which can explain rates of existing signal excesses in germanium and, at least at the order of magnitude level, across several single-electron threshold detectors. At least one of these scenarios also yields the correct relic density from thermal freeze-out. Both dark matter scenarios motivate a radical rethinking of the standard interpretations of dark matter-electron scattering from recent experiments. | high energy physics phenomenology |
A systematic method is presented for the construction and classification of algebras of gauge transformations for arbitrary high-rank tensor gauge fields. For every tensor gauge field of a given rank, the gauge transformation will be stated, in a generic way, via an ansatz that contains all the possible terms, with arbitrary coefficients and the maximum number of tensor gauge functions. The requirement for the closure of the algebra will prove to be restrictive but will, nevertheless, leave a variety of choices. Properly adjusting the values of the initial coefficients and imposing restrictions on the gauge functions, one can, on the one hand, recover all the algebras analysed so far and, on the other, construct new ones. The presentation of a brand new algebra for tensor gauge transformations is the central result of this article. | high energy physics theory
Sub-GeV dark matter candidates are of increasing interest, because long-favored candidates such as GeV-scale WIMPs have not been detected. For low-mass dark matter, model-independent constraints are weak or nonexistent. We show that for such candidates, because the number density is high, cosmic ray propagation can be affected by elastic scattering with dark matter. We call this type of search `reverse direct detection,' because dark matter is the target and Standard Model particles are the beam. Using a simple propagation model for galactic cosmic rays, we calculate how dark matter affects cosmic ray spectra at Earth, and set new limits on the dark matter-proton and dark matter-electron cross sections. For protons, our limit is competitive with cosmological constraints, but is independent. For electrons, our limit covers masses not yet probed, and improves on cosmological constraints by one to two orders of magnitude. We comment on how future work can significantly improve the sensitivity of cosmic-ray probes of dark matter interactions. | high energy physics phenomenology |
Tracer-kinetic models allow for the quantification of kinetic parameters such as blood flow from dynamic contrast-enhanced magnetic resonance (MR) images. Fitting the observed data with multi-compartment exchange models is desirable, as they are physiologically plausible and resolve directly for blood flow and microvascular function. However, the reliability of model fitting is limited by the low signal-to-noise ratio, temporal resolution, and acquisition length. This may result in inaccurate parameter estimates. This study introduces physics-informed neural networks (PINNs) as a means to perform myocardial perfusion MR quantification, which provides a versatile scheme for the inference of kinetic parameters. These neural networks can be trained to fit the observed perfusion MR data while respecting the underlying physical conservation laws described by a multi-compartment exchange model. Here, we provide a framework for the implementation of PINNs in myocardial perfusion MR. The approach is validated both in silico and in vivo. In the in silico study, an overall reduction in mean-squared error with the ground-truth parameters was observed compared to a standard non-linear least squares fitting approach. The in vivo study demonstrates that the method produces parameter values comparable to those previously found in literature, as well as providing parameter maps which match the clinical diagnosis of patients. | electrical engineering and systems science |
A lot of work in social virtual reality, including our own group's, has focused on the effectiveness of specific social behaviours such as eye-gaze, turn-taking, gestures and other verbal and non-verbal cues. We have built upon these to look at emergent phenomena such as co-presence, leadership and trust. These give us good information about the usability issues of specific social VR systems, but they don't give us much information about the requirements for such systems going forward. In this short paper we discuss how we are broadening the scope of our work on social systems, to move out of the laboratory to more ecologically valid situations and to study groups using social VR for longer periods of time. | computer science
We report the discovery of soft X-ray pulsations from the nearby millisecond pulsar PSR J1231$-$1411 using NICER. The pulsed emission is characterized by a broad and asymmetric main pulse and a much fainter secondary interpulse, with a total pulsed count rate of 0.055 c s$^{-1}$ in the 0.35-1.5 keV band. We analyzed Fermi LAT data to update the pulse timing model covering 10 years of data and used that model to coherently combine NICER data over a year of observations. Spectral modeling suggests that the flux is dominated by thermal emission from a hot spot (or spots) on the neutron star surface. The phase relationship between the X-ray pulse and the radio and $\gamma$ rays provides insight into the geometry of the system. | astrophysics |
Let $f : \mathbb{C}^2 \to \mathbb{C}$ be a mixed polynomial, i.e., a complex polynomial of variables $(u, \bar{u}, v, \bar{v})$ such that $f(0) = 0$ and $f$ has an isolated singularity at the origin. Then, associated with $f$, we have the well-defined link $L_f$, which is the intersection of the mixed hypersurface $V_f:=f^{-1}(0)$ with a 3-sphere of small radius centered at the origin. Such a link is called a real algebraic link. Classification and characterization of real algebraic links are still open. In this paper, we use properties of mixed polynomials to construct a new class of real algebraic links. More exactly, real algebraic links arise from products of mixed polynomials $f=p \cdot q:\mathbb{C}^2 \to \mathbb{C}$, with $p(0)=q(0)=0$. | mathematics |
To what extent does Noether's principle apply to quantum channels? Here, we quantify the degree to which imposing a symmetry constraint on quantum channels implies a conservation law, and show that this relates to physically impossible transformations in quantum theory, such as time-reversal and spin-inversion. In this analysis, the convex structure and extremal points of the set of quantum channels symmetric under the action of a Lie group $G$ becomes essential. It allows us to derive bounds on the deviation from conservation laws under any symmetric quantum channel in terms of the deviation from closed dynamics as measured by the unitarity of the channel. In particular, we investigate in detail the $U(1)$ and $SU(2)$ symmetries related to energy and angular momentum conservation laws. In the latter case, we provide fundamental limits on how much a spin-$j_A$ system can be used to polarise a larger spin-$j_B$ system, and on how much one can invert spin polarisation using a rotationally-symmetric operation. Finally, we also establish novel links between unitarity, complementary channels and purity that are of independent interest. | quantum physics |
In this research, we use the Gibbons-Werner method (Gauss-Bonnet theorem) on the optical geometry of a black hole and a wormhole, extending the calculation of weak gravitational lensing to a Maxwell fish-eye-like profile and a dark matter medium. The deflection angle is seen as a partially topological effect, and the Gibbons-Werner method can be used on any asymptotically flat Riemannian optical geometry of compact objects in a dark matter medium. | physics
The aim of this paper is to study the dimensions and standard part maps between the field of $p$-adic numbers ${{\mathbb Q}_p}$ and its elementary extension $K$ in the language of rings $L_r$. We show that for any $K$-definable set $X\subseteq K^m$, $\text{dim}_K(X)\geq \text{dim}_{{\mathbb Q}_p}(X\cap {{\mathbb Q}_p}^m)$. Let $V\subseteq K$ be the convex hull of $K$ over ${{\mathbb Q}_p}$, and $\text{st}: V\rightarrow {{\mathbb Q}_p}$ be the standard part map. We show that for any $K$-definable function $f:K^m\rightarrow K$, there is a definable subset $D\subseteq{{\mathbb Q}_p}^m$ such that ${{\mathbb Q}_p}^m\backslash D$ has no interior, and for all $x\in D$, either $f(x)\in V$ and $\text{st}(f(\text{st}^{-1}(x)))$ is constant, or $f(\text{st}^{-1}(x))\cap V=\emptyset$. We also prove that $\text{dim}_K(X)\geq \text{dim}_{{\mathbb Q}_p}(\text{st}(X\cap V^m))$ for every definable $X\subseteq K^m$. | mathematics
The conformal loop ensemble (CLE) is the canonical conformally invariant probability measure on non-crossing loops in a simply connected domain in $\mathbb C$ and is indexed by a parameter $\kappa \in (8/3,8)$. We consider CLE$_\kappa$ on the whole-plane in the regime in which the loops are self-intersecting ($\kappa \in (4,8)$) and show that it is invariant under the inversion map $z \mapsto 1/z$. This shows that whole-plane CLE$_\kappa$ for $\kappa \in (4,8)$ defines a conformally invariant measure on loops on the Riemann sphere. The analogous statement in the regime in which the loops are simple ($\kappa \in (8/3,4]$) was proven by Kemppainen and Werner and together with the present work covers the entire range $\kappa \in (8/3,8)$ for which CLE$_\kappa$ is defined. As an intermediate step in the proof, we show that CLE$_\kappa$ for $\kappa \in (4,8)$ on an annulus, with any specified number of inner-boundary-surrounding loops, is well-defined and conformally invariant. | mathematics |
In addition to novel surface states, topological insulators can also exhibit robust gapless states at crystalline defects. Step edges constitute a class of common defects on the surface of crystals. In this work we establish the topological nature of one-dimensional (1D) bound states localized at step edges of the [001] surface of a topological crystalline insulator (TCI), Pb$_{0.7}$Sn$_{0.3}$Se, both theoretically and experimentally. We show that the topological stability of the step edge states arises from an emergent particle-hole symmetry of the surface low-energy physics, and demonstrate the experimental signatures of the particle-hole symmetry breaking. We also reveal the effects of an external magnetic field on the 1D bound states. Our work suggests the possibility of similar topological step edge modes in other topological materials with a rock-salt structure. | condensed matter
We experimentally investigate charge transport through a single planar junction between Cd$_3$As$_2$ Dirac semimetal and a normal Au lead. For non-superconducting bulk Cd$_3$As$_2$ samples, we observe non-Ohmic $dV/dI(V)$ curves, which strongly resemble standard Andreev reflection with well-defined superconducting gap. Andreev-like behavior is demonstrated for Cd$_3$As$_2$ samples with different surface and contact preparation techniques. We connect this behavior with surface superconductivity due to the flat-band formation in Cd$_3$As$_2$, which has been predicted theoretically. The conclusion on superconductivity is also supported by the gap suppression by magnetic fields or temperature. | condensed matter |
Understanding the computational power of noisy intermediate-scale quantum (NISQ) devices is of both fundamental and practical importance to quantum information science. Here, we address the question of whether error-uncorrected noisy quantum computers can provide computational advantage over classical computers. Specifically, we study noisy random circuit sampling in one dimension (or 1D noisy RCS) as a simple model for exploring the effects of noise on the computational power of a noisy quantum device. In particular, we simulate the real-time dynamics of 1D noisy random quantum circuits via matrix product operators (MPOs) and characterize the computational power of the 1D noisy quantum system by using a metric we call MPO entanglement entropy. The latter metric is chosen because it determines the cost of classical MPO simulation. We numerically demonstrate that for the two-qubit gate error rates we considered, there exists a characteristic system size above which adding more qubits does not bring about an exponential growth of the cost of classical MPO simulation of 1D noisy systems. Specifically, we show that above the characteristic system size, there is an optimal circuit depth, independent of the system size, where the MPO entanglement entropy is maximized. Most importantly, the maximum achievable MPO entanglement entropy is bounded by a constant that depends only on the gate error rate, not on the system size. We also provide a heuristic analysis to get the scaling of the maximum achievable MPO entanglement entropy as a function of the gate error rate. The obtained scaling suggests that although the cost of MPO simulation does not increase exponentially in the system size above a certain characteristic system size, it does increase exponentially as the gate error rate decreases, possibly making classical simulation practically not feasible even with state-of-the-art supercomputers. | quantum physics |
We study the effective interactions of external electromagnetic fields induced by fluctuations of virtual particles in the vacuum of quantum electrodynamics. Our main focus is on these interactions at two-loop order. We discuss in detail the emergence of the renowned Heisenberg-Euler effective action from the underlying microscopic theory of quantum electrodynamics, emphasizing its distinction from a standard one-particle irreducible effective action. In our explicit calculations we limit ourselves to constant and slowly varying external fields, allowing us to adopt a locally constant field approximation. One of our main findings is that at two-loop order there is a finite one-particle reducible contribution to the Heisenberg-Euler effective action in constant fields, which was previously assumed to vanish. In addition to their conceptual significance, our results are relevant for high-precision probes of quantum vacuum nonlinearity in strong electromagnetic fields. | high energy physics theory |
This paper considers utility optimal power control for energy harvesting wireless devices with a finite capacity battery. The distribution information of the underlying wireless environment and harvestable energy is unknown and only outdated system state information is known at the device controller. This scenario shares similarity with Lyapunov opportunistic optimization and online learning but is different from both. By a novel combination of Zinkevich's online gradient learning technique and the drift-plus-penalty technique from Lyapunov opportunistic optimization, this paper proposes a learning-aided algorithm that achieves utility within $O(\epsilon)$ of the optimal, for any desired $\epsilon>0$, by using a battery with an $O(1/\epsilon)$ capacity. The proposed algorithm has low complexity and makes power investment decisions based on system history, without requiring knowledge of the system state or its probability distribution. | mathematics |
We calculate gravitational wave power spectra from first order early Universe phase transitions using the Sound Shell Model. The model predicts that the power spectrum depends on the mean bubble separation, the phase transition strength, and the phase boundary speed, with the overall frequency scale set by the nucleation temperature. There is also a dependence on the time evolution of the bubble nucleation rate. The gravitational wave peak power and frequency are in good agreement with published numerical simulations, where bubbles are nucleated simultaneously. Agreement is particularly good for detonations, but the total power for deflagrations is predicted to be higher than numerical simulations show, indicating that refinement of the model of the transfer of energy to the fluid is needed for accurate computations. We show how the time-dependence of the bubble nucleation rate affects the shape of the power spectrum: an exponentially rising nucleation rate produces higher amplitude gravitational waves at a longer wavelength than simultaneous nucleation. We present an improved fit for the predicted gravitational wave power spectrum in the form of a double broken power law, where the two breaks in the slope happen at wavenumbers corresponding to the mean bubble separation and the thickness of the fluid shell surrounding the expanding bubbles, which in turn is related to the difference of the phase boundary speed from the speed of sound. | astrophysics
Purely kinetic k-essence models have been shown in the literature to be a field theory equivalent of barotropic fluid models of dark energy or dark matter-dark energy unification. In the modeling framework where the speed of sound squared of a barotropic fluid is modeled as a function of its Equation of State parameter, a systematic procedure of obtaining the Lagrangian density of an equivalent purely kinetic k-essence model is presented. As this modeling approach starts from the speed of sound, purely kinetic k-essence models can be constructed for which the speed of sound is in agreement with the observational constraints. Depending on the chosen functional form for the barotropic fluid speed of sound squared, analytically tractable examples of solutions for the purely kinetic k-essence Lagrangian density in parametric and closed form are obtained. | astrophysics |
High quality factor ($Q$) nanomechanical resonators have received a lot of attention for sensor applications with unprecedented sensitivity. Despite the large interest, few investigations into the frequency stability of high-$Q$ resonators have been reported. Such resonators are characterized by a linewidth significantly smaller than typically employed measurement bandwidths, which is the opposite regime to what is normally considered for sensors. Here, the frequency stability of high-$Q$ silicon nitride string resonators is investigated in both open-loop and closed-loop configurations. The stability is characterized here using the Allan deviation. For open-loop tracking, it is found that the Allan deviation separates into two regimes, one limited by the thermomechanical noise of the resonator and the other by the detection noise of the optical transduction system. The point of transition between the two regimes is the resonator response time, which can be shown to have a linear dependence on $Q$. Laser power fluctuations from the optical readout are found to present a fundamental limit to the frequency stability. Finally, for closed-loop measurements, the response time is shown to no longer be intrinsically limited but instead given by the bandwidth of the closed-loop tracking system. Computed Allan deviations based on theory are given as well and found to agree well with the measurements. These results are of importance for the understanding of fundamental limitations of high-$Q$ resonators and their application as high performance sensors. | condensed matter
Context: Star formation takes place in giant molecular clouds, resulting in mass-segregated young stellar clusters composed of Sun-like stars, brown dwarves, and massive O-type (50-100 $M_\odot$) stars. Aims: To identify candidate hub-filament systems (HFS) in the Milky Way and examine their role in the formation of the highest mass stars and star clusters. Methods: Filaments around ~35000 Hi-GAL clumps were detected using the DisPerSE algorithm. A hub is defined as a junction of three or more filaments. Column density maps were masked by the filament skeletons and averaged for the HFS and non-HFS samples to compute the radial profile along the filaments into the clumps. Results: ~3700 (11\%) clumps are candidate HFS, of which ~2150 (60\%) are pre-stellar and ~1400 (40\%) are proto-stellar. All clumps with L>10^4 Lsun and L>10^5 Lsun at distances within 2 kpc and 5 kpc, respectively, are located in the hubs of HFS. The column densities of hubs are found to be enhanced by a factor of ~2 (pre-stellar sources) up to ~10 (proto-stellar sources). Conclusions: All high-mass stars preferentially form in the density-enhanced hubs of HFS. This amplification can drive the observed longitudinal flows along filaments, providing further mass accretion. Radiation pressure and feedback can escape into the inter-filamentary voids. We propose a "filaments to clusters" unified paradigm for star formation, with the following salient features: a) low- and intermediate-mass stars form slowly (10^6 yr) in the filaments and massive stars quickly (10^5 yr) in the hub, b) the initial mass function is the sum of stars continuously created in the HFS, with all massive stars formed in the hub, c) feedback dissipation and mass segregation arise naturally from HFS properties, and d) the paradigm explains age spreads within bound clusters and the formation of isolated OB associations. | astrophysics
We performed a radio recombination line (RRL) survey to construct a high-mass star-forming region (HMSFR) sample in the Milky Way based on the all-sky Wide-Field Infrared Survey Explorer ($\textit{All-WISE}$) point source catalog. The survey was conducted with the Shanghai 65m Tianma radio telescope (TMRT), covering 10 hydrogen RRL transitions ranging from H98$\alpha$ to H113$\alpha$ (corresponding to rest frequencies of 4.5$-$6.9 GHz) simultaneously. Out of 3348 selected targets, we identified an HMSFR sample consisting of 517 sources traced by RRLs; a large fraction of this sample (486 sources) is located near the Galactic plane ($|$$\textit{b}$$|$ $<$ 2 deg). In addition to the hydrogen RRLs, we also detected helium and carbon RRLs towards 49 and 23 sources, respectively. We cross-matched the RRL detections with the 6.7 GHz methanol maser sources compiled in previous works for the same target sample; as a result, 103 HMSFR sources were found to harbor both emissions. In this paper, we present the HMSFR catalog accompanied by the measured RRL properties and a correlation with our methanol maser sample, which is believed to trace massive stars at earlier evolutionary stages. The construction of an HMSFR sample consisting of sources at various evolutionary stages, indicated by different tracers, is fundamental for future studies of high-mass star formation in such regions. | astrophysics
Mobility and blockage are two critical challenges in wireless transmission over millimeter-wave (mmWave) and Terahertz (THz) bands. In this paper, we investigate network massive multiple-input multiple-output (MIMO) transmission for mmWave/THz downlink in the presence of mobility and blockage. Considering the mmWave/THz propagation characteristics, we first propose to apply per-beam synchronization for network massive MIMO to mitigate the channel Doppler and delay dispersion effects. Accordingly, we establish a transmission model. We then investigate network massive MIMO downlink transmission strategies with only the statistical channel state information (CSI) available at the base stations (BSs), formulating the strategy design problem as an optimization problem to maximize the network sum-rate. We show that the beam domain is favorable to perform transmission, and demonstrate that BSs can work individually when sending signals to user terminals. Based on these insights, the network massive MIMO precoding design is reduced to a network sum-rate maximization problem with respect to beam domain power allocation. By exploiting the sequential optimization method and random matrix theory, an iterative algorithm with guaranteed convergence performance is further proposed for beam domain power allocation. Numerical results reveal that the proposed network massive MIMO transmission approach with the statistical CSI can effectively alleviate the blockage effects and provide mobility enhancement over mmWave and THz bands. | electrical engineering and systems science |
We gain tight rigorous bounds on the renormalisation fixed point function for period doubling in families of unimodal maps with degree 2 critical point. By writing the relevant eigenproblems in a modified nonlinear form, we use these bounds, together with a contraction mapping argument, to gain tight bounds on the essential eigenvalues and eigenfunctions of the linearised renormalisation operator at the fixed point and also those of the operator encoding the universal scaling of added uncorrelated noise. We gain bounds on the corresponding power series coefficients and universal constants accurate to over 400 significant figures, confirming and (in the case of noise) extending the accuracy of previous numerical estimates, by using multi-precision interval arithmetic with rigorous directed rounding to implement operations on a space of analytic functions. | mathematics |
This work presents a parametrized family of distances, namely the Alpha Procrustes distances, on the set of symmetric, positive definite (SPD) matrices. The Alpha Procrustes distances provide a unified formulation encompassing both the Bures-Wasserstein and Log-Euclidean distances between SPD matrices. We show that the Alpha Procrustes distances are the Riemannian distances corresponding to a family of Riemannian metrics on the manifold of SPD matrices, which encompass both the Log-Euclidean and Wasserstein Riemannian metrics. This formulation is then generalized to the set of positive definite Hilbert-Schmidt operators on a Hilbert space, unifying the infinite-dimensional Bures-Wasserstein and Log-Hilbert-Schmidt distances. In the setting of reproducing kernel Hilbert spaces (RKHS) covariance operators, we obtain closed form formulas for all the distances via the corresponding kernel Gram matrices. From a statistical viewpoint, the Alpha Procrustes distances give rise to a parametrized family of distances between Gaussian measures on Euclidean space, in the finite-dimensional case, and separable Hilbert spaces, in the infinite-dimensional case, encompassing the 2-Wasserstein distance, with closed form formulas via Gram matrices in the RKHS setting. The presented formulations are new both in the finite and infinite-dimensional settings. | mathematics |
To follow up recent work of Xiao-Song Yang on the Nos\'e-Hoover oscillator, we consider Dettmann's harmonic oscillator, which relates Yang's ideas directly to Hamiltonian mechanics. We also use the Hoover-Holian oscillator to relate our mechanical studies to Gibbs' statistical mechanics. All three oscillators are described by a coordinate $q$ and a momentum $p$. Additional control variables $(\zeta, \xi)$ vary the energy. Dettmann's description includes a time-scaling variable $s$, as does Nos\'e's original work. Time scaling controls the rates at which the $(q,p,\zeta)$ variables change. The ergodic Hoover-Holian oscillator provides the stationary Gibbsian probability density for the time-scaling variable $s$. Yang considered {\it qualitative} features of Nos\'e-Hoover dynamics. He showed that long-time Nos\'e-Hoover trajectories change energy, repeatedly crossing the $\zeta = 0$ plane. We use moments of the motion equations to give two new, different, and brief proofs of Yang's long-time limiting result. | condensed matter
Synchrotron-based X-ray computed tomography is widely used for investigating the inner structures of specimens at high spatial resolutions. However, potential beam damage to samples often limits the X-ray exposure during tomography experiments. Proposed strategies for eliminating beam damage also decrease reconstruction quality. Here we present a deep learning-based method to enhance low-dose tomography reconstruction via a hybrid-dose acquisition strategy composed of extremely sparse-view normal-dose projections and full-view low-dose projections. Corresponding image pairs are extracted from the low-/normal-dose projections to train a deep convolutional neural network, which is then applied to enhance the full-view noisy low-dose projections. Evaluation on two experimental datasets under different hybrid-dose acquisition conditions shows significantly improved structural details and reduced noise levels compared to uniformly distributed acquisitions with the same total dose. The resulting reconstructions also preserve more structural information than reconstructions processed with traditional analytical and regularization-based iterative reconstruction methods from uniform acquisitions. Our performance comparisons show that our implementation, HDrec, can denoise real-world experimental data 410x faster than the state-of-the-art Xlearn method while providing better quality. This framework can be applied to other tomographic or scanning-based X-ray imaging techniques for enhanced analysis of dose-sensitive samples and has great potential for studying fast dynamic processes. | electrical engineering and systems science
Let ${\mathcal C}_n$ be the set of all permutation cycles of length $n$ over $\{1,2,\ldots,n\}$. Let $${\mathfrak f}_n(q):=\sum_{\sigma\in{\mathcal C}_{n+1}}q^{{\mathrm maj}\,\sigma} $$ be a $q$-analogue of the factorial $n!$, where ${\mathrm maj}$ denotes the major index. We prove a $q$-analogue of Wilson's congruence $$ {\mathfrak f}_{n-1}(q)\equiv\mu(n)\pmod{\Phi_n(q)}, $$ where $\mu$ denotes the M\"obius function and $\Phi_n(q)$ is the $n$-th cyclotomic polynomial. | mathematics |
Background: Trends in hospitalised case-fatality risk (HFR), risk of intensive care unit (ICU) admission and lengths of stay for patients hospitalised for COVID-19 in England over the pre-vaccination era are unknown. Methods: Data on hospital and ICU admissions with COVID-19 at 31 NHS trusts in England were collected by Public Health England's Severe Acute Respiratory Infections surveillance system and linked to death information. We applied parametric multi-state mixture models, accounting for censored outcomes and regressing risks and times between events on month of admission, geography, and baseline characteristics. Findings: 20,785 adults were admitted with COVID-19 in 2020. Between March and June/July/August estimated HFR reduced from 31.9% (95% confidence interval 30.3-33.5%) to 10.9% (9.4-12.7%), then rose steadily from 21.6% (18.4-25.5%) in September to 25.7% (23.0-29.2%) in December, with steeper increases among older patients, those with multi-morbidity and outside London/South of England. ICU admission risk reduced from 13.9% (12.8-15.2%) in March to 6.2% (5.3-7.1%) in May, rising to a high of 14.2% (11.1-17.2%) in September. Median length of stay in non-critical care increased during 2020, from 6.6 to 12.3 days for those dying, and from 6.1 to 9.3 days for those discharged. Interpretation: Initial improvements in patient outcomes, corresponding to developments in clinical practice, were not sustained throughout 2020, with HFR in December approaching the levels seen at the start of the pandemic, whilst median hospital stays have lengthened. The role of increased transmission, new variants, case-mix and hospital pressures in increasing COVID-19 severity requires urgent further investigation. | statistics |
We show that Residual Networks (ResNets) are equivalent to boosting feature representation, without any modification to the underlying ResNet training algorithm. A regret bound based on Online Gradient Boosting theory is proved and suggests that ResNet could achieve Online Gradient Boosting regret bounds through neural network architectural changes with the addition of a shrinkage parameter in the identity skip-connections and using residual modules with max-norm bounds. Through this relation between ResNet and Online Boosting, novel feature representation boosting algorithms can be constructed based on altering residual modules. We demonstrate this by proposing decision tree residual modules to construct a new boosted decision tree algorithm and demonstrating generalization error bounds for both approaches, and by relaxing constraints within the BoostResNet algorithm to allow it to be trained in an out-of-core manner. We evaluate convolution ResNet with and without shrinkage modifications to demonstrate its efficacy, and demonstrate that our online boosted decision tree algorithm is comparable to state-of-the-art offline boosted decision tree algorithms without the drawback of offline approaches. | statistics
The supersymmetric R\'enyi entropy across a spherical entangling surface in a $d$-dimensional SCFT with flavor defects is equivalent to a supersymmetric partition function on $\mathbb{H}^{d-1} \times \mathbb{S}^1$, which can be computed exactly using localization. We consider the holographically dual BPS solutions in $(d +1)$-dimensional matter coupled supergravity $(d = 3 , 5)$, which are charged hyperbolically sliced AdS black holes. We compute the renormalized on-shell action and the holographic supersymmetric R\'enyi entropy and show a perfect match with the field theory side. Our setup allows a direct map between the chemical potentials for the global symmetries of the field theories and those of the gravity solutions. We also discuss a simple case where angular momentum is added. | high energy physics theory |
Liquid-metal infiltrated Cu30Mo70 (weight percent) is subjected to severe plastic deformation using high pressure torsion. The initially equiaxed dual phase structure is gradually transformed into a lamellar structure composed of individual Cu and Mo layers. The thickness of the lamellae varies between the micrometer and nanometer ranges depending on the amount of applied strain. Consistent with the refinement of the microstructural features, strength and hardness substantially increase. In addition, an acceptable ductility is found in the intermediate deformation range. An assessment of the damage tolerance of the produced composites is performed by measuring the fracture toughness in different crack propagation directions. The results indicate the development of a pronounced anisotropy with increasing degree of deformation, which is an effect of the concurrent alignment of the nanostructured lamellar composite into the shear plane. | condensed matter
We analyze the dislocation content of grain boundary (GB) phase junctions, i.e., line defects separating two different GB phases coexisting on the same GB plane. While regular GB disconnections have been characterized for a variety of interfaces, GB phase junctions formed by GBs with different structures and different numbers of excess atoms have not been previously studied. We apply a general Burgers circuit analysis to calculate the Burgers vectors b of junctions in two {\Sigma}5 Cu boundaries previously simulated with molecular dynamics. The Burgers vectors of these junctions cannot be described by the displacement shift complete (DSC) lattice alone. We show that, in general, the normal component of b is not equal to the difference in the GB excess volumes, but contains another contribution from the numbers of GB atoms per unit area {\Delta}N required to transform one GB phase into another. In the boundaries studied, the latter component dominates and even changes the sign of b. We derive expressions for the normal and tangential components of b in terms of the DSC lattice vectors and the non-DSC part due to {\Delta}N and additional GB excess properties, including excess volume and shears. These expressions provide a connection between GB phase transformations driven by the GB free energy difference and the motion of GB junctions under applied normal and shear stresses. The proposed analysis quantifies b and therefore makes it possible to calculate the elastic part of the energy of these defects, evaluate their contribution to the nucleation barrier during GB phase transformations, and treat elastic interactions with other defects. | condensed matter |
Dense video captioning is a task of localizing interesting events from an untrimmed video and producing textual description (captions) for each localized event. Most of the previous works in dense video captioning are solely based on visual information and completely ignore the audio track. However, audio, and speech, in particular, are vital cues for a human observer in understanding an environment. In this paper, we present a new dense video captioning approach that is able to utilize any number of modalities for event description. Specifically, we show how audio and speech modalities may improve a dense video captioning model. We apply automatic speech recognition (ASR) system to obtain a temporally aligned textual description of the speech (similar to subtitles) and treat it as a separate input alongside video frames and the corresponding audio track. We formulate the captioning task as a machine translation problem and utilize recently proposed Transformer architecture to convert multi-modal input data into textual descriptions. We demonstrate the performance of our model on ActivityNet Captions dataset. The ablation studies indicate a considerable contribution from audio and speech components suggesting that these modalities contain substantial complementary information to video frames. Furthermore, we provide an in-depth analysis of the ActivityNet Caption results by leveraging the category tags obtained from original YouTube videos. Code is publicly available: github.com/v-iashin/MDVC | computer science |
We give a complete classification of analytic equivalence of germs of parametric families of systems of complex linear differential equations unfolding a generic resonant singularity of Poincar\'e rank 1 in dimension $n = 2$ whose leading matrix is a Jordan block. The moduli space of analytic equivalence classes is described in terms of a tuple of formal invariants and a single analytic invariant obtained from the trace of monodromy, and analytic normal forms are given. We also explain the underlying phenomena of confluence of two simple singularities and of a turning point, the associated Stokes geometry, and the change of order of Borel summability of formal solutions in dependence on a complex parameter. | mathematics
We investigate the proximity-induced exchange coupling in transition-metal dichalcogenides (TMDCs), originating from spin injector geometries composed of hexagonal boron-nitride (hBN) and ferromagnetic (FM) cobalt (Co) or nickel (Ni), from first-principles. We employ a minimal tight-binding Hamiltonian that captures the low energy bands of the TMDCs around K and K' valleys, to extract orbital, spin-orbit, and exchange parameters. The TMDC/hBN/FM heterostructure calculations show that due to the hBN buffer layer, the band structure of the TMDC is preserved, with an additional proximity-induced exchange splitting in the bands. We extract proximity exchange parameters in the 1--10 meV range, depending on the FM. The combination of proximity-induced exchange and intrinsic spin-orbit coupling (SOC) of the TMDCs, leads to a valley polarization, translating into magnetic exchange fields of tens of Tesla. The extracted parameters are useful for subsequent exciton calculations of TMDCs in the presence of a hBN/FM spin injector. Our calculated absorption spectra show large splittings for the exciton peaks; in the case of MoS$_2$/hBN/Co we find a value of about 8 meV, corresponding to about 50 Tesla external magnetic field in bare TMDCs. The reason lies in the band structure, where a hybridization with Co $d$ orbitals causes a giant valence band exchange splitting of more than 10 meV. Structures with Ni do not show any $d$ level hybridization features, but still sizeable proximity exchange and exciton peak splittings of around 2 meV are present in the TMDCs. | condensed matter |
We propose a scheme to perform braiding and all other unitary operations with Majorana modes in 1D that, in contrast to previous proposals, is solely based on resonant manipulation involving the first excited state extended over the modes. The detection of the population of the excited state also enables initialization and read-out. We provide an elaborated illustration of the scheme with a concrete device. | condensed matter |
Multiple Sclerosis (MS) is a type of brain disease which causes visual, sensory, and motor problems and has a detrimental effect on the functioning of the nervous system. In order to diagnose MS, multiple screening methods have been proposed so far; among them, magnetic resonance imaging (MRI) has received considerable attention among physicians. MRI modalities provide physicians with fundamental information about the structure and function of the brain, which is crucial for the rapid diagnosis of MS lesions. Diagnosing MS using MRI is time-consuming, tedious, and prone to manual errors. Hence, computer aided diagnosis systems (CADS) based on artificial intelligence (AI) methods have been proposed in recent years for accurate diagnosis of MS using MRI neuroimaging modalities. In the AI field, automated MS diagnosis is being conducted using (i) conventional machine learning and (ii) deep learning (DL) techniques. The conventional machine learning approach is based on feature extraction and selection by trial and error. In DL, these steps are performed by the DL model itself. In this paper, a complete review of automated MS diagnosis methods performed using DL techniques with MRI neuroimaging modalities is presented. Also, each work is thoroughly reviewed and discussed. Finally, the most important challenges and future directions in automated MS diagnosis using DL techniques coupled with MRI modalities are presented in detail. | electrical engineering and systems science
Mass outflow rates and loading factors are typically used to infer the quenching potential of galactic-scale outflows. However, these generally rely on observations of a single gas phase which can severely underestimate the total ejected gas mass. To address this, we use observations of high mass ($\geqslant$10$^{10}$ M$_{\odot}$), normal star-forming galaxies at $z\sim$0 from the MaNGA, xCOLD GASS, xGASS and ALFALFA surveys and a stacking of NaD, H$\alpha$, CO(1-0) and HI 21cm tracers with the aim of placing constraints on an average, total mass outflow rate and loading factor. We find detections of outflows in both neutral and ionised gas tracers, with no detections in stacks of molecular or atomic gas emission. Modelling of the outflow components reveals velocities of $|$v$_{\text{NaD}}|$=131 km s$^{-1}$ and $|$v$_{\text{H}\alpha}|$=439 km s$^{-1}$ and outflow rates of $\dot{M}_{\text{NaD}}$=7.55 M$_{\odot}$yr$^{-1}$ and $\dot{M}_{\text{H}\alpha}$=0.10 M$_{\odot}$yr$^{-1}$ for neutral and ionised gas, respectively. Assuming a molecular/atomic outflow velocity of 200 km s$^{-1}$, we derive upper limits of $\dot{M}_{\text{CO}}<$19.43 M$_{\odot}$yr$^{-1}$ and $\dot{M}_{\text{HI}}<$26.72 M$_{\odot}$yr$^{-1}$ for the molecular and atomic gas, respectively. Combining the detections and upper limits, we find average total outflow rates of $\dot{M}_{\text{tot}}\lesssim$27 M$_{\odot}$yr$^{-1}$ and a loading factor of $\eta_{\text{tot}}\lesssim$6.39, with molecular gas likely contributing $\lesssim$72% of the total mass outflow rate, and neutral and ionised gas contributing $\sim$28% and $<$1%, respectively. Our results suggest that, to first order, a degree of quenching via ejective feedback could occur in normal galaxies when considering all gas phases, even in the absence of an AGN. | astrophysics |
Most existing statistical network analysis literature assumes a global view of the network, under which community detection, testing, and other statistical procedures are developed. Yet in the real world, people frequently make decisions based on their partial understanding of the network information. As individuals barely know beyond friends' friends, we assume that an individual of interest knows all paths of length up to $L=2$ that originate from her. As a result, this individual's perceived adjacency matrix $\bbB$ differs significantly from the usual adjacency matrix $\bbA$ based on the global information. The new individual-centered partial information framework sparks an array of interesting endeavors from theory to practice. Key general properties on the eigenvalues and eigenvectors of $\bbB_E$, a major term of $\bbB$, are derived. These general results, coupled with the classic stochastic block model, lead to a new theory-backed spectral approach to detecting the community memberships based on an anchored individual's partial information. Real data analysis delivers interesting insights that cannot be obtained from global network analysis. | statistics |